text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
A simulation experiment study to examine the effects of noise on miners’ safety behavior in underground coal mines
Background Noise pollution in coal mines is of great concern. Personal injuries directly or indirectly related to noise occur from time to time. Its effects impact the health and safety of coal mine workers. This study aimed to identify if and how the level of noise impacts miners’ safety behavior in underground coal mines. Methods In order to study the influence of noise on miners in the mining industry, we built a coal mine noise simulation experiment system, and set the noise test level at 50 dB ~ 120 dB according to the actual working environment at well. We divided the noise gradient into 8 categories and conducted 93 experiments, in which we aim to test miners’ attention distribution, fatigue, and reaction under each level, and the experimental results were analyzed by SPSS22.0 software. Results The results show that the increase of environmental noise level will have an impact on the attention, reaction, and fatigue. The noise is positively related to the fatigue, the noise is negatively related to the attention and reaction. In the noise environment, the sensitivity of the personnel to optic stimuli is higher than that to acoustic stimuli. The test indicators of attention, fatigue, and reaction will change significantly, when the noise level is greater than 70 ~ 80 dB. Conclusions From the perspective of accident prevention, the noise level can be controlled within the range of less than 70 ~ 80 dB, which can control the occurrence of accidents to a certain extent.
Physiologically, scholars have conducted a lot of researches on how noise affects hearing, heart rate, and blood pressure. Noise affects human auditory organs, nervous system, and cardiovascular system, etc. [9][10][11][12][13][14]. Basner et al. [15] studied the effects of noise on hearing, and found hearing loss caused by noise is very common in working environments. Studies [16][17][18][19][20][21][22] have also found that high noise levels can cause hearing loss and general health problems. Masterson et al. [23] studied the hearing loss of workers exposed to noise from 2003 to 2012 and found 76% mining workers are exposed to dangerous noise. It was the highest among all industries. They suffered most from hearing impairment among all industries. Early researches [24,25] found blood pressure and heart rate increase in long-term exposure to noise. Tian et al. [26] found subjects' heart rate would increase in noise environment. Scholars have also explored the relationship between noise and blood pressure, and their conclusions varied about whether and how noise affects blood pressure. Hessel et al. [27] found that occupational noise exposure had no effect on blood pressure. However, Liu et al. [28] suggests that noise in working environment contributes to hypertension and can increase systolic and diastolic blood pressure.
Coal mine noise affects the safety behavior of miners and causes safety accidents. Current researches show that noise in the workplace has a significant impact on the behavior of workers [4,[29][30][31][32]. Behavior refers to the physical, psychological, and action responses to external stimuli. As an external stimulus, noise changes people's physiology, psychology, and actions, and affects people's behavior. Cheng et al. [6] studied how coal mine noise affects physiology and psychology of miners, and the impact of noise on human safety behavior. They found that noise has a serious negative effect on the miners' safety behavior. Deng [33] states that noise has an impact on physiology and psychology, and then affects human behavior and leads to safety accidents. Yu et al. [34] compared the accidents in two factories, and found accidents in a 95 dB are significantly more than in 80 dB; accidents in a noisy environment is 20 times higher than in a quiet environment. Wang et al. [35] studied how noise influences miners' behavior ability, and found the behavior ability of miners in a strong noise environment of 85 dB and 95 dB is significantly lower than in lower noise environments. Some of these studies found that [36,37] noise impacts human attention, and noise above 85 dB would have a greater negative impact on human attention. Reaction time would prolong under strong noise [38]. Tian et al. [29] compared two groups of workers. One group has more knowledge and better awareness about safety production than the other group. The study found that noise has a greater impact on miners with a lower level of knowledge and awareness about safety production.
In general, attention, reaction, and fatigue are three most often studied behavioral ability indicators. Attention means the ability to focus. Attention distribution ability is how accurate to conduct multiple tasks at the same time. In other words, it means how well one can pay attention to different objects at the same time. When workers operate the equipment continuously for a long time, they are often fatigue and their working efficiency decrease [39]. Reaction ability means the response to stimuli signals. First, stimuli are felt by nerve system. Then, it is transmitted from the nerve system to the brain. Brain processes the stimuli and produces instructions to the muscles via nerve system and directs muscle contraction. Reaction ability is evaluated by the reaction time to the stimuli, the time duration from the moment when the external stimulus is received by the nerve system to the completion of reactive behavior by muscles [31]. In addition, noise is commonly believed to be positively correlated with fatigue [40]. Fatigue is often measured by the flicker fusion frequency. And the lower the flash fusion frequency is, the more fatigue the human body are [40]. When being fatigue, people will slow down their thinking and movements, lose concentration. In this case, coordination and accuracy of movements decline and safety behavior ability reduces [41].
The above researches on the impact of noise on people mainly focus on occupational hazards, and relatively little researches has been conducted on the effects of noise on human behavior. However, most of the above studies regard noise as an overall influencing factor, and they haven't divided noise into different levels. Very few of them studied on simulation experiments of real noise environment in coal mines. Therefore, we aimed to explore the relationship between noise changes and miners' behavior ability changes. To do this, this paper built an independent coal mine noise simulation experiment system, and divided the noise data collected in the real coal mine into 8 levels, and studied how 8 noise levels influence the safety behavior of miners in terms of attention, reaction, and fatigue. We hope this study could provide new ideas for underground coal mine noise prevention and coal mine accident prevention.
Methods
In this quantitative research, attention, reaction, and fatigue are selected as research indicators after referring to the relevant literature [26,42,43].
Experimental system design
The experimental system consists of a noise control system and a safety behavior ability testing system. The noise control system consists of noise source, a louder speaker box, a sound meter, and a computer. The noise from the underground coal mine is collected as the noise source. The noise levels are precisely controlled through the louder speaker box and the sound meter.
The safety behavior ability testing system consists of an attention distribution meter, a multiple reaction meter, and a flicker frequency fusion meter. They test the changes in attention, reaction, and fatigue level on 8 noise levels. The specific description is as follows.
Attention
The experiment selected BD-II-314 attention distribution meter, to measure the subjects' attention distribution. The meter tested subjects' ability to perform two tasks at the same time. Q value of attention distribution was used to indicate attention distribution. It is calculated by Eq. 1: Note: S 1 indicates the times of correct reactions to acoustic stimuli; S 2 indicates the total times of reactions to the acoustic stimuli; F 1 indicates the times of correct reactions to optic stimuli; F 2 indicates the total times of reactions to optic stimuli.
The meaning of Q value is as follows: When Q < 0.5, there is no attention distribution value; When 0.5 ≤ Q < 1.0, only a part of the total attention is assigned; When Q = 1.0, the attention distribution value reaches the highest level. It means the efficiency of performing multiple tasks simultaneously is equal to the efficiency when doing a single task. When Q > 1.0, the attention distribution value is invalid.
Reaction ability
In our study, we used BD-II-509B multiple reaction time tester to measure the reaction ability of the subjects to acousto-optic stimuli.
Fatigue
This study used the BD-II-118 flicker frequency fusion meter to measure the critical flicker fusion frequency of the subjects.
The noise test equipment is shown in Table 1.
There are 8 noise levels in our tests, one control group and seven experimental groups. The noise level of control group was 50 dB. The seven experimental groups were 60 dB group, 70 dB group, 80 dB group, 90 dB group, 100 dB group, 110 dB group, and 120 dB group. This is because through reviewing the literature and onsite investigation, we found the range of underground coal mine noise is mainly between 90 dB and 120 dB [4,6]. Also, this experiment also set 4 noise levels below 90 dB to explore the influence of a wider range of noise on workers' safety behavior ability in order to improve the credibility of the experimental results.
The subjects of this study are healthy male graduate and undergraduate students aged from 20 to 25. During the experiment, the subjects did not use any personal protective equipment. In the early stage of the experiment, 14 subjects were selected to conduct experimental tests with 5 noise levels (50 dB,60 dB,70 dB,80 dB, and 90 dB). But 1 of 14 subjects had tinnitus in the 90 dB environment. In the later stage, 8 subjects were selected from the former 14 subjects to conduct tests with 100 dB,110 dB, and 120 dB to meet the consistency of the noise intensity of the test and the real noise environment in the coal mine.
In this study: the fatigue level was measured by the flicker fusion critical frequency; the subject's attention level was measured by their distinguishing different sound and light; the reaction ability was tested by the subject's reaction time to sound and light. The safety behavioral testing system equipment is shown in Fig. 1.
Experimental steps
The experiment was divided into two stages, before the experiment and while experimenting.
Preparation before the experiment was as follows: (a) The subjects were told about the test procedure, and trained to use the instrument so that they can operate the instrument, understand the content of the questionnaire and minimize any unnecessary errors; (b) Keep the environmental conditions in advance, including temperature, humidity, and wind speed at a normal level and debugging the equipment. During the whole test period, the subjects should maintain adequate sleep (Not less than 8 h [44]). The experimental operation involved eight different noise levels. In order to study subjects' safety behavior ability in each noise level, 14 subjects were divided into seven groups with two subjects in each group (due to the capacity of the experiment devices) in the 50~90 dB; 8 subjects were divided into 4 groups with two subjects in each group (due to the capacity of the experiment devices) in100 dB, 110 dB, and 120 dB. The test process of one noise level was as follows: First, subjects in a group entered the test environment with a certain noise level, and adapted to the environment for 30 min. Then, their fatigue, attention, and reaction were tested for 30 min and the data were collected. When this group finished, other groups came in the test room one by one and all the data on this noise level were collected. Notably, the numbers of errors made and the reaction time of subjects were recorded synchronously. In general, the whole process for one group of subjects (two people) lasted for 1 h; the actual test time of seven groups of subjects in the 50~90 dB (altogether 14 people) for each specific noise condition lasted for 7 h in total in a day; the actual test time of four groups of subjects in the 100~120 dB (altogether 8 people) for each specific noise condition lasted for 4 h in total in a day. The eight different noise conditions were tested on 8 different days, during generally the same period of daytime.
Statistical analysis methods
This study mainly used two statistical analysis methods: paired-sample t-test and regression analysis. Pairedsample t-test aims to compare the influence difference of two noise levels on human's safe behavior ability. Specifically, seven experimental groups (60 dB, 70 dB, 80 dB, 90 dB, 100 dB, 110 dB, and 120 dB) were performed paired t-tests respectively with control group (50 dB) and to see if there is a significant difference between the experimental group and the control group, and to find on which noise level workers' safety behavior may change significantly. To make paired-sample t-test valid, an exploratory analysis of the data is required to determine whether it conforms to a normal distribution before pairedsample t-test. Regression analysis aims to research relevance between independent variable (noise) and dependent variable (attention, reaction, and fatigue). In short, this study firstly studied whether noise affects human safety behavior (attention, reaction, and fatigue), and if so, how does it affect (positively or negatively).
Exploratory analysis
As is shown in Table 2, the significant p-value of the S-W test of the acousto-optic reaction correct times and that of the Q values were both greater than 0.05 in all 8 noise levels. This presents a normal distribution, and thus paired sample t-test can be performed.
Sample analysis of t-test
As is shown in Tables 3 and 4, the correct times of the acoustic reactions, the optic reactions and the Q values in the control groups were significantly different from the test values of the control group in 80 dB and above (P < 0.05). That is, when the noise is 80 dB or above, the attention level starts to change significantly compared with the control group (50 dB). Table 5 shows the results of the normality test of the reaction time. From the S-W test in the table, we can see p > 0.05. This indicates the reaction time of the acousto-optic is normally distributed on 8 noise levels. Therefore, paired sample t-test can be used to analyze the influence of different noise levels on the reaction time. Sample analysis of t-test Table 6 shows that as the external noise level increases, the absolute value of t gradually increases, and t always shows a negative value. It indicates that the test acoustooptic reaction time gradually increases with the increase of noise level. In other words, the greater the noise level is, the more significantly reaction ability decline. When the noise level reaches 70 dB, the reaction time of the subjects to the acoustic stimuli becomes significantly longer; after 80 dB, the reaction time to the optic stimuli becomes significantly longer. These show that the subjects react to optic stimuli better than the acoustic stimuli in the same noise level.
Exploratory analysis
S-W analysis results show that p > 0.05. It indicates that the subjects' flicker fusion critical frequency is in normal distribution on 8 noise levels. The specific analysis results of S-W are shown in Table 7. Table 8 shows that the t value of paired sample test increases with the increase of the noise. Flicker fusion critical frequency decreases as the noise increases. It can be concluded that the worker's fatigue increases with the increase of noise. Table 8 shows that there is a significant difference between the control group and the 70 dB group and above 70 dB groups. In other words, the external noise has a significant impact on fatigue from 70 dB.
Prediction of the impact of noise levels on workers' safety working ability
As can be seen from the above analysis, noise has a significant influence on fatigue, reaction, and attention.
In order to find out the relationship between safety behavioral indicators and noise, we took noise level as independent variable and behavior indicators as the dependent variable. The experimental data was subjected to regression analysis. The regression process selects linear, logarithmic, quadratic, power function, and exponential function. The best fit models were selected based on R 2 . When R 2 is greater than 0.9, the data fitting effect becomes better. Figures 2, 3, 4 and 5 shows the trend between noise and behavior indicators. Noise is negatively correlated with the attention and reaction, and it is positively correlated with fatigue. When environmental noise level exceeds 70~80 dB, noise has a significant effect on the subjects' attention, reaction, and fatigue.
It can be seen from Fig. 2 that when the noise level is between 60 and 80 dB, Q value decreases slowly with the increase of noise level. When noise level is greater than 80 dB, the Q value decreases sharply. When the noise level reaches 120 dB, the subject's Q value is 0.55, which is close to the distraction allocation. If the noise level continues to increase, the attention of the subjects will be seriously affected. As shown in Figs. 3 and 4, with the increase of noise levels, the acousto-optic reaction time increases correspondingly, the comparison between the reaction time to acoustic stimuli and light stimuli shows that the reaction time to the acoustic stimuli varies from 0.40s to 0.63 s and the reaction time to the optic stimuli varies from 0.37 s to 0.55 s. The reaction time to acoustic stimuli will be longer. Acoustic reaction time becomes significant when the noise is 70 dB, while optic reaction time is 80 dB. It shows that subjects are more sensitive to optic stimuli than acoustic stimuli in noisy environment.
It can be seen from Fig. 5 that when noise level is lower than 70 dB, the change of flash fusion frequency value is minor. When noise level reaches 70 dB or more, flash fusion frequency decreases greatly. It shows that different levels of noise have different effects on the fatigue degree. The greater noise level is, the more fatigue the subjects are.
From the analysis of fitting effect as is seen in Table 9, exponential function and quadratic function are most suitable for the modeling of the data in this study. The derivative value of the function indicates the speed of change of the behavior indicators. According to the properties of exponential functions and quadratic functions, the absolute value of derivative of the two functional models continue to increase. Therefore, the greater the noise level is, the faster attention, reaction and fatigue will change, so workers are more prone to accidents in high noise environment.
Discussion
Unlike most of the previous studies [6,[9][10][11][12]26], which studied the occupational harm of noise on human, this study focused on the influence of different noise levels on miners' safety behavior in underground coal mines and conducted a simulation quantitative experiment of 93 people/hours. Results of this study show that high noise environment significantly affect fatigue, attention, and reaction. Significance analyses reveal that fatigue is the most sensitive to the change of noise and displays a significant change when the noise is above 70 dB. The sensitiveness of reaction and attention are followed by that of fatigue and displays a significant change when the noise is above 80 dB. In the noise environment, the sensitivity of optic stimuli is more obvious than that of acoustic stimuli. In this sense, optic stimuli can be used to improve safety systems in noisy environment. Regression analysis results show that noise is negatively related to attention and reaction, and positively related to fatigue. At the same time, this study has the following shortcomings: (a) The number of subjects in this experiment is small, and the age of the subjects is different from that of the actual miners. (b) Due to the limitation of our experimental conditions, the experiment did not include the influence of the time duration of the noise exposure.
As a fundamental study in the field of coal mine noise, this paper mainly aims to measure the behavioral indicators of subjects affected by noise environment. To address the limitations of the study, we will conduct an in-depth study on the impact of the noise environment on the safety behavior of miners by expanding the sample size and measuring noise duration.
Similar with the previous studies [35][36][37], this study also found that the safety behavior ability of miners in a high noise environment is significantly lower than that in a low noise environment. However, we found a noise level of 70~80 dB starts to affect the safety behavior ability while other researches [35][36][37] concluded that the specific noise level that changes significantly is different (85~95 dB). The reason for the difference between previous studies and this study may be different sources of noise, different safety behavior indicators, different subjects and different interests (the previous studies focused on physical health, while this study focused on safety behavior). But certain reasons need to be studied in-depth. There are suggested pathways linking long-term exposure to noise environment and human unsafe behavior. In a study by Deng [33], noise affected the physiology and psychology, and then affected human behavior, increasing the probability of safety accidents. Specifically, in the high-noise environment, such effects are manifested as the distraction of attention, the decrease of auditory ability. They lead to auditory and systemic fatigue. In this state, due to the development of protective inhibition, the activity of cerebral cortex cells decreases.
Accordingly, the conditioned reflex activity is affected, the probability of mis operation increases, and the probability of accident increases.
Conclusion
This paper selected three safety behavior indicators: attention, reaction, and fatigue, and studied how coal mine noise effects these safe working abilities. The results were shown as follows:Noise can affect the attention, reaction, and fatigue of miners. When the environmental noise is 80 dB or above, the attention begins to change significantly compared with an environment without noise (50 dB). When the noise is 70 dB or above, fatigue level begins to show a significant change compared with an environment without noise (50 dB). Notably, we found the sensitivity of optic stimuli is more obvious than that of acoustic stimuli: the reaction time to acoustic starts to be statistically significant from 70 dB while the reaction time to optic starts to be statistically significant from 80 dB. In this sense, optic stimuli can be used to improve safety systems in noisy environment. Results of regression analysis show that attention and reaction is negatively related to noise levels, while fatigue is positively related to noise levels. Taking noise as the independent variable, attention (Q value), fatigue (flash fusion frequency), and acoustic reaction time is best fitted by the mathematical model of quadratic function. Optic shows that compared with the no noise (50 dB), the greater the noise level increases, the more significantly the subjects' attention, reaction, and fatigue change. It infers that workers are safer in a low noise environment. It is recommended that the noise level in working place is controlled within 70~80 dB or below. This way, the inclination of accidents will decrease. | 5,426.4 | 2021-02-09T00:00:00.000 | [
"Physics"
] |
RNA Epigenetics in Chronic Lung Diseases
Chronic lung diseases are highly prevalent worldwide and cause significant mortality. Lung cancer is the end stage of many chronic lung diseases. RNA epigenetics can dynamically modulate gene expression and decide cell fate. Recently, studies have confirmed that RNA epigenetics plays a crucial role in the developing of chronic lung diseases. Further exploration of the underlying mechanisms of RNA epigenetics in chronic lung diseases, including lung cancer, may lead to a better understanding of the diseases and promote the development of new biomarkers and therapeutic strategies. This article reviews basic information on RNA modifications, including N6 methylation of adenosine (m6A), N1 methylation of adenosine (m1A), N7-methylguanosine (m7G), 5-methylcytosine (m5C), 2′O-methylation (2′-O-Me or Nm), pseudouridine (5-ribosyl uracil or Ψ), and adenosine to inosine RNA editing (A-to-I editing). We then show how they relate to different types of lung disease. This paper hopes to summarize the mechanisms of RNA modification in chronic lung disease and finds a new way to develop early diagnosis and treatment of chronic lung disease.
Introduction
Chronic lung diseases include chronic obstructive pulmonary disease (COPD), pneumonia, idiopathic pulmonary fibrosis(IPF), asthma, and lung cancer [1]. Despite many new treatments for chronic lung disease, the death rate for the disease has remained nearly unchanged over the years [2]. With the update of research techniques, our understanding of the changes in the genome and signaling pathways associated with chronic lung diseases has dramatically improved. These advances allow clinicians to treat patients precisely to improve outcomes. Unfortunately, due to the vast heterogeneity in most chronic lung diseases, new tried-and-true treatments are still needed for the diagnosis and treatment of chronic lung diseases [3][4][5][6]. Epigenetics and epigenetic-based targeted therapies have begun to be applied in the clinic and have made remarkable progress.
Epigenetics is a study of molecular biology that deals with the heritable variation in gene function above the primary DNA sequence. It regulates gene expression without changing the DNA sequence. The properties of cells and the differences between different types of cells often rely on systems without DNA variation. Epigenetics includes but is not limited to four principal mechanisms: DNA and RNA methylation, chromatin remodeling, noncoding RNAs, and histone modifications [7]. The development of approaches for the detection of RNA modifications has played a crucial role in the field of RNA modification research. Methylated RNA immunoprecipitation sequencing (MeRIP-Seq), as a detection method of m6A modification, can detect the presence of m6A in the whole genome [8]. Sequencing technology based on bisulfite has been widely used for the identification of m5C [9]. Pseudouridine sequencing (Pseudo-seq) is a method for pseudouracil recognition in genome-wide single nucleotide resolution. This method can accurately locate the one-base resolution at the full transcriptome level [10]. The m1A detection methods use antibodies that can specifically recognize m1A to enrich the RNA containing m 1 A, and then detects m 1 A on the mRNA [11,12]. As the approaches for the detection of RNA modifications continue to improve, more and more RNA modifications are being discovered.
RNA epigenetics has introduced a new layer of gene regulation in the study of chronic lung diseases. It dynamically regulates gene expression through a series of different modifications, broadening the potential of epigenetics in the diagnosis and treatment of chronic lung diseases [13]. Currently, a number of articles have summarized the relationship between RNA modification and respiratory diseases [14][15][16]. The majority of them have focused on lung cancer and m 6 A. With the progress of the technologies, more and more studies have confirmed the relationship between lung diseases other than lung cancer and other types of RNA modification. The computational work has made a great contribution to the development of RNA modification research and provided a valuable data set for chronic lung diseases related analysis. This review therefore also covers a number of known database/functional tools to make this review more informative and useful to the readers. This review aims to explore the mechanisms of epigenetic modifications associated with RNA, especially the impact of these modifications on chronic lung diseases.
RNA Epigenetics Mechanisms
RNA epigenetics research has developed rapidly in the last decade. RNAs participate in many processes, such as transcription, splicing, and translation. RNA regulates gene expression not only through the form of an intermediate in protein synthesis (messenger RNA (mRNA)) or an effector molecule (transfer RNA (tRNA) and ribosomal RNA (rRNA)), but also acts directly on gene expression, including through the action of multiple classes of other noncoding RNAs(ncRNAs), such as microRNA (miRNA), small nuclear RNA (snRNA), small nucleolar RNA (snoRNA) and long ncRNA (lncRNA) [17][18][19]. RNA is a novel function as a catalyst and regulator of many biochemical reactions, a carrier of genetic information, an adaptor for protein synthesis, and a structural scaffold for subcellular organelles [20][21][22][23][24]. RNA epigenetics is generally considered to be irreversible changes that have significant effects on RNA structure stability and/or function. However, some RNA modifications are reversible [25,26]. With only 4 bases, RNA is less diverse than protein with 20 different amino acid residues. To enrich the structure and function of RNA, nature modifies RNA through various chemical modifications. More than 150 structurally distinct modification types have been identified across all types of RNA [27,28]. These modifications are associated with various biological processes and human diseases [29,30]. RNA modification was initially only studied in rRNA, tRNA, and snRNA. Using immunoprecipitating RNA or covalently binding RNA-methylase complexes in combination with sequencing, the researchers gained a global understanding of the characteristics of these RNA modifications [8,31,32]. More and more RNA modifications are now confirmed, and these changes are detected in a variety of RNAs [33][34][35][36][37]. RNA modifications affect base pairing, secondary structure, and the ability of RNA to interact with proteins directly. These chemical changes further affect RNA processing, localization, translation, and decay processes to regulate gene expression [38].
Several Most Common RNA Modification Types
The common RNA modifications include N 6 methylation of adenosine (m 6 A), N 1 methylation of adenosine (m 1 A), N 7 -methylguanosine (m 7 G), 5-methylcytosine (m 5 C), 2 O-methylation (2 -O-Me or Nm), pseudouridine (5-ribosyl uracil or Ψ) and adenosine to inosine RNA editing (A-to-I editing), etc. (Figure 1) [39]. Among these RNA modifications, m 6 A is the most abundant form in eukaryotic cells, extensively studied in recent years. m 6 A expression was abundant in the liver, kidney, and brain. The content of it in different cancer cell lines varies greatly. Studies have found that m 6 A is mainly distributed within genes, and the proportion of m 6 A in protein-coding regions (CDS) and untranslated regions (UTRs) is relatively high. m 6 A in UTRs tended to be highly expressed in the third UTR region, while CDS regions were mainly enriched near-stop codons. The modification of m 6 A primarily occurs on the adenine of the RRACH sequence, where R is guanine, or adenine, and H is uracil, adenine, or cytosine [40]. The m 6 A modification has been implicated in the activation of multiple signaling pathways associated with lung cancer, in addition to COPD, pulmonary fibrosis, asthma, and other respiratory diseases. The m 1 A modification is formed by the methylation of N 1 adenosine. It is an isoform of m 6 A and is regulated by multiple transferase complexes and demethylases, and this regulation is reversible [41,42]. The m 1 A-modified tRNA can regulate translation by increasing tRNA stability, while m 1 A-modified mRNA and lncRNA can influence RNA processing or protein translation. Some studies have found that m 1 A may regulate mitochondrial function [43]. The m 7 G was first discovered at the 5 caps and internal positions of mRNAs, as well as inside rRNAs and tRNAs [44][45][46]. Recently, m 7 G has also been detected in miRNAs [47]. The m 7 G is associated with tumor metastasis and growth. The m 5 C is ubiquitous in mRNAs, tRNAs, RNAs, and ncRNAs [48]. It is involved in various RNA metabolisms. In tRNA, m 5 C is involved in stabilizing tRNA secondary structure and enhancing codon recognition. In addition, m 5 C modifies rRNA and ncRNA, thereby regulating mitochondrial dysfunction, stress response defects, gametocyte and embryonic development, tumorigenesis and cell migration [49][50][51] [54]. Ψ can change mRNA's secondary structure. When Ψ occurs in the stop codons, or nonsense codons it can affect the translation process and translation result [55]. Ψ could affect the development of lung cancer. A-to-I editing mainly exists on the primary transcript of mRNA, tRNAs, and miRNAs, and this RNA modification mechanism can modify the secondary structure of RNA. It is the deamination of adenosine in RNA to inosine. Inosine is recognized as guanosine in cells. A-to-I editing is associated with lung cancer cell phenotype.
RNA Modification Database
High-throughput sequencing data has a key impact in the study of RNA modification. These sequencing data are available on public websites, including the NCBI-Gene Expression Omnibus database (NCBI-GEO) (https://www.ncbi.nlm.nih.gov/geo, accessed on 27 November 2022). The data related to chronic lung disease in the GEO database is mainly related to m 6 A modification. The m6A levels in cisplatin-resistant A549 cells were up-regulated compared to A549 cells (GSE140020, GSE136433). In addition, multiple data sets were compared and screened for m6A markers in LUAD cells (GSE198288, GSE176348, GSE161090). There are also RNA-seq expression profiles associated with writer/eraser perturbation. For example, expression profiles after METTL3 knockdown in A549 and H1299 cells (GSE76367), ALKBH5 knockdown in PC9 cells (GSE165453), and YTHDF1 or YTHDF2 knockdown PC9 cells (GSE171634).
RNA Modification Database
High-throughput sequencing data has a key impact in the study of RNA modification. These sequencing data are available on public websites, including the NCBI-Gene Expression Omnibus database (NCBI-GEO) (https://www.ncbi.nlm.nih.gov/geo, accessed on 27 November 2022). The data related to chronic lung disease in the GEO database is mainly related to m 6 A modification. The m6A levels in cisplatin-resistant A549 cells were up-regulated compared to A549 cells (GSE140020, GSE136433). In addition, multiple data sets were compared and screened for m6A markers in LUAD cells (GSE198288, GSE176348, GSE161090). There are also RNA-seq expression profiles associated with writer/eraser perturbation. For example, expression profiles after METTL3 knockdown in A549 and H1299 cells (GSE76367), ALKBH5 knockdown in PC9 cells (GSE165453), and YTHDF1 or YTHDF2 knockdown PC9 cells (GSE171634).
With the further study of RNA modification, databases containing various information are emerging. These databases, which have received widespread interest and use, in turn, provide the basis for further research about the function of RNA modification. The existing RNA modification databases can be divided into biochemical RNA modifi- With the further study of RNA modification, databases containing various information are emerging. These databases, which have received widespread interest and use, in turn, provide the basis for further research about the function of RNA modification. The existing RNA modification databases can be divided into biochemical RNA modification databases, comprehensive reversible RNA modification databases, specialized reversible RNA modification databases, and RNA editing databases [56]. Biochemical RNA modification databases can query the chemical structure and biosynthetic pathways of RNA modification, among which RNA Modification Database (RNAMDB) [57] and Modomics [58] are the most common. Comprehensive reversible RNA modification databases include MethylTranscriptome DataBase (MeT-DB) [59], RNA Modification Base Database (RMBase) [60], m 6 A-Atlas [61], m 6 A2target [62], m 5 C-Atlas [63], m 7 GHub [64] and RNA Epi-transcriptome Collection (REPIC) [65]. RMBase is the most comprehensive RNA modification database available. The m 6 A2Target is a comprehensive database of target genes for m 6 A modified enzymes (writers, erasers, and readers). Specialized reversible RNA modification databases include m 6 Avar [66], m 6 A-TSHub [67], CVm 6 A [68], RMVar [69] and RMDisease [70]. RNA editing databases include RNA Editing Database (REDIdb) [71], Rigorously Annotated Database of A-to-I RNA Editing (RADAR) [72], Database of RNA Editing (DARNED) [73], and REDIportal [74]. There are other functional tools available for RNA modification such as m 6 ASNP [75] and ConsRM [76]. Genetic variants that affect RNA modification play a key role in many aspects of RNA metabolism and are also associated with chronic lung disease. It is important to assess the effect of single nucleotide variants in the human genome on m 6 A modification. The m 6 AVar database is a comprehensive database for studying m 6 A-related variants that may affect m 6 A modification. It will explain the influence of variants through the function of m 6 A modification. The m 6 AVar can be used to predict potential modification sites of m 6 A in chronic lung disease. According to the m 6 AVar database, three m 6 A-related genes (ZCRB1, ADH1C, and YTHDC2) are reliable prognostic indicators for lung adenocarcinoma (LUAD) patients and are potential therapeutic targets [77]. RMVar is similar to m 6 AVar in that it mainly collects related variants affecting m 6 A modification. However, RMVar has more comprehensive functions than m6AVar. RMVar also contains other RNA-modified variants. The databases of genetic variation in RNA modification also include RMDisease and m6A-TSHub. RMDisease integrates the predictions of 18 different RNA modification prediction tools and a large number of experimentally validated RNA modification sites and identifies single nucleotide polymorphisms (SNPs) that may affect eight types of RNA modification. Most of the m 6 Aassociated cancer variants are tissue-and cancer-specific. The m 6 A-TSHub consists of four core components, namely m 6 A-TSDB, m 6 A-TSFinde, m 6 A-TSVar, and m 6 A-CAVar. The m 6 A-TSDB platform can be used to retrieve m 6 A sites in normal lung tissue and lung cancer cell lines. Then, m 6 A-TSVa can be used to explore the influence of lung tissue variation on m 6 A by integrating tissue-specific m 6 A. Finally, m 6 A-CAVar can be used to screen for cancer variants affecting m 6 A in lung tissue. Through these databases, we can screen out the gene variants associated with RNA modification in chronic lung disease for further verification.
The Regulation of RNA Modification
Three proteins have been found to regulate RNA modification. The first is an enzyme that introduces modified nucleotides into RNA during post-transcriptional RNA modification; the second protein interacts with the modified nucleotides; the third protein removes the modification labels [78]. The methylation modification of m 6 A is regulated by three types of proteases, methyltransferases (writers, including methyltransferase like protein-3/14/16 (METTL3/14/16), RNA-binding motif protein 15/15B (RBM15/15B), zinc finger CCCH type containing 13 (ZC3H13), virlike m 6 A methyltransferase associated (VIRMA, also known as KIAA1429), cbl proto-oncogene like 1 (CBLL1), and Wilms' tumor-associated protein (WTAP), demethylases (erasers, including fat mass and obesityassociated (FTO) and alkB homolog 5 (ALKBH5)), and readers (including YTH domain family 1/2/3 (YTHDF1/2/3), YTH domain containing 1/2 (YTHDC1/2), insulinlike growth factor 2 mRNA binding protein 1/2/3 (IGF2BP1/2/3), and heterogeneous nuclear ribonucleoprotein A2B1 (HNRNPA2B1)), and is reversible and can be dynamically regulated [79,80]. These regulatory proteases work together in a coordinated manner to maintain a homeostatic balance of intracellular m 6 A levels. Reversible m 1 A methylomes are achieved by the dynamic modulation of m 1 A RNA-modifying proteins, including m 1 A methyltransferases such as tRNA methyltransferase 6 noncatalytic subunit (TRMT6)-TRMT61A complex, TRMT10C, TRMT61B and nucleomethylin (NML), m 1 A demethylases such as ALKBH1, ALKBH3 and FTO, and m 1 A-dependent RNA-binding proteins such as YTHDF1/2/3 and YTHDC1 [81]. The modification of m 7 G in mammals is catalyzed by the compounds METTL1 and WD repeat domain 4 (WDR4), a complex that facilitates the installation of m 7 G in tRNA, miRNA, and mRNA [82,83]. RNA guanine-7 methyltransferase (RNMT) and its cofactor RNMT-activating miniprotein (RAM) actively catalyze m 7 G. Among them, RNMT is the catalytic subunit, and RAM is the regulatory subunit, which plays the role of activation. Williams-Beuren syndrome chromosome region 22 (WBSCR22) and TRMT112 are responsible for regulating m 7 G in rRNA. The main function of these regulatory mechanisms is to add m 7 G to the target RNAs, thereby mediating many key biological processes by modulating RNA production, structure, and maturation [84].
The m 5 C is reversibly regulated by methyltransferases, including DNA methyltransferase (DNMTs, such as DNMT1, DNMT2, and DNMT3A/3B) and NOP2/Sun RNA methyltransferases (NSUNs), and demethylases, including ten-eleven translocation (TETs), and reader proteins including YTHDF2, Aly/REF Export Factor(ALYREF) and Y-box binding protein 1(YBX1) [85,86]. There are two ways to add 2 -O-Me modification: either by the complex assembly of proteins associated with snoRNA guides (sno(s)RNPs) to carry out site-specific modification or standalone protein enzymes with direct specific site modification [87,88]. In addition, the methylated reader protein TAR RNA-binding protein (TRBP) binds to the methyltransferase FtsJ RNA 2'-O-methyltransferase 3 (FTSJ3) to form a TRBP-FTSJ3 complex, which induces 2 -O-Me [89]. Fibrillarin (FBL) is also a 2'-O-methyltransferase that can form a small nucleolar ribonucleoproteins (snoRNPs) with three other proteins and snoRNA for specific rRNA modifications [90]. Ψ is produced by the isomerization of uridine, catalyzed by pseudouridine synthase (PUS). Thirteen pseudouridine synthases have been identified, which can be divided into two categories, RNA-dependent and RNAindependent PUSs. Dyskerin pseudouridine synthase 1 (DKC1) is the catalytic subunit of the H/ACA snoRNP complex and catalyzes rRNA pseudouridylation. The other 12 writers are PUSs: PUS1, PUSL1, PUS3, TRUB1, TRUB2, PUS7, PUS7L, RPUSD1-4, and PUS10. The cellular localization and RNA targets of these enzymes are fixed. No Ψ erasers and readers have been identified. This is probably because of the formation of relatively inert C-C bonds between the ribose and the base, which leads to the fact that the pseudoureacylation process is irreversible [91]. A-to-I editing is catalyzed by adenosine deaminases acting on the RNA (ADAR) family of enzymes. There are 3 ADAR enzymes, ADAR1 and ADAR2 being catalytically active, while ADAR3 lacks catalytic activity [92].
The Roles of RNA Modifications
RNA modifications have a key influence in the regulation of many fundamental biological processes associated with chronic lung disease ( Table 1). The methylation of m 6 A can affect the RNA stability, localization, turnover, and translation efficiency of genes, thereby regulating cellular processes such as cell self-renewal, differentiation, invasion, and apoptosis, and is even critical for skeletal development and homeostasis [93,94]. The METTL3-METTL14 complex increases the expression of the cyclin-dependent kinase p21 by regulating m 6 A. This complex enhances m 5 C methylation, which synergistically promotes p21 expression and affects oxidative stress-induced cellular senescence [95]. In the cardiovascular system, multiple m 6 A-related regulators promote the progression of atherosclerosis by regulating macrophage polarization and inflammation. WTAP and METTL14 also can affect the phenotypic regulation of vascular smooth muscle cells (VSMCs) via m 6 A modification [96]. Enhanced m 6 A RNA methylation generates compensatory cardiac hypertrophy, whereas decreased m 6 A causes cardiomyocyte remodeling and dysfunction [97]. An increasing number of studies have found that m 6 A modification plays an important role in controlling the generation and self-renewal of hematopoietic stem cells and in mediating the development and differentiation of T and B lymphocytes from hematopoietic stem cells [98]. YTHDF2, the first recognized "reader" of m 6 A, can maintain the homeostasis and maturation of natural killer (NK) cells, and positively regulate the antitumor and antiviral activities of NK cells. Its deletion significantly impairs NK cell antitumor and antiviral activity in vivo [99]. METTL3-mediated m 6 A methylation is also associated with NK cell homeostasis and antitumor immunity [100]. In addition, m 6 A is associated with apoptosis, autophagy, pyroptosis, ferroptosis, and necrosis [101], and m 5 C mediates cell proliferation, differentiation, apoptosis, and stress response [102].
RNA Modifications in the Respiratory System
RNA epigenetic modifications have been broadly reported in lung cancer development and other chronic lung diseases (Figure 2). The most studied m 6 A modification plays an important role in tumorigenesis, proliferation, and metastasis. Active m 6 A regulators in lung cancer are related with the activation of multiple signaling pathways, such as DNA replication, RNA metabolism, epithelial-mesenchymal transition (EMT), cell cycle, cell proliferation and apoptosis, energy metabolism, inflammatory response, drug resistance, tumor metastasis and recurrence [103]. Reprogramming of energy metabolism is the hallmarks of cancer. The m 6 A regulates tumor metabolism by directly regulating nutrient transporters and metabolic enzymes or indirectly by controlling key components in metabolic pathways [104]. Aberrant m 6 A modification contributes to the progression of malignant tumors and affects patient prognosis. Some m 6 A-related mRNA markers have also been shown to be independent prognostic biomarkers in patients with different types of cancer [105,106]. Recent studies have found that the m 6 A methylase METTL3 is abnormally activated in cisplatin-resistant nonsmall-cell lung cancer (NSCLC) cells. METTL3 enhances YAP mRNA translation by introducing YTHDF1/3 and eIF3b into the translation initiation complex and increasing YAP mRNA stability by regulating the MALAT1-miR-1914-3p-YAP axis. This induces NSCLC drug resistance and metastasis [107]. FTO is the first m 6 A demethylase discovered to promote NSCLC cell growth by demethylating USP7 mRNA or MZF1 mRNA transcripts and increasing their stability and transcriptional levels [108,109]. The 5'-UTR of m 6 A-modified PDK4 mRNA positively regulates the glycolysis of cells by binding to IGF2BP3, thereby promoting the development of cancer. While METTL14 knockout can reverse the function of IGF2BP3 and play a tumor suppressor role [110]. In addition, IGF2BP1 can also promote cancer development and induce therapeutic resistance by stabilizing oncogenic mRNAs [111]. The m 6 A modification also affects the mediating tumor metastasis process. When lung cancer brain metastasis (BM) occurs, miR-143-3p was upregulated in paired BM tissues compared with primary lung cancer tissues. METTL3-mediated m 6 A modification induces the maturation of miR-143-3p, which induces lung cancer invasion and angiogenesis via suppressing the expression of the target gene vasohibin-1 [112]. The lncRNA HCG11 is modulated by METTL14-mediated m 6 A modification in LUAD. METTL14-mediated HCG11 inhibits the growth of LUAD by targeting large tumor suppressor kinase 1 (LATS1) mRNA through IGF2BP2 [113]. The m 7 G methyltransferase METTL1 and WDR4 complex is significantly elevated in lung cancer, which can promote lung cancer cell growth and invasion and negatively correlate with patient prognosis. Impaired m 7 G tRNA modification in the absence of METTL1/WDR4 results in decreased proliferation, colony formation, cell invasion, and tumorigenicity of lung cancer cells [114]. Highly expressed METTL1 can also methylate mature let-7 miRNAs by interfering with inhibitory secondary structure (G-quadruplex) in the pri-miRNA transcript of let-7. The m 7 G-let-7 miRNA can inhibit lung cancer cell metastasis by reducing the expression of target oncogenes, including high mobility group AT-hook 2 (HMGA2), RAS, and MYC, at the posttranscriptional level [47]. Studies have confirmed that m 5 C levels can be used as a cancer marker. For example, in lung squamous cell carcinoma (LUSC), the upregulation of m 5 C-related NSUN3 and NSUN4 is related to poor patient prognosis [115]. In LUAD, cells with high NSUN1 expression are more likely to be poorly differentiated [116]. Abnormally elevated RNA m 5 C levels can be found in circulating tumor cells from lung cancer patients [117]. The studies of Ψ are mainly concerned with breast, lung, and prostate cancers. In NSCLC, the expression of lncRNAs PCAT1 is highly expressed, and cooperates with DKC1 to influence the proliferation, invasion, and apoptosis of NSCLC cells through the VEGF/AKT/Bcl-2/caspase9 pathway [118]. The rs9309336 may interfere with PUS10 expression, thereby reducing the sensitivity of tumor cells to tumor necrosis factor-associated apoptosis-inducing ligand (TRAIL) [119]. Finally, it promotes the immortalization of tumor cells and the development of lung cancer. The RNA editing protein ADAR promotes LUAD progression by stabilizing transcripts encoding focal adhesion kinase (FAK). The increased abundance of ADAR in the mRNA and protein levels in lung tissues of LUAD patients was associated with tumor recurrence. ADAR increases the stability of FAK mRNA by binding to FAK. FAK blocks the ADAR-induced invasiveness of LUAD cells [120]. A-to-I microRNA editing is correlated with tumor phe-notypes in multiple cancer types. Altered editing levels of microRNAs in LUAD may be a potential biomarker [121]. [118]. The rs9309336 may interfere with PUS10 expression, thereby reducing the sensitivity of tumor cells to tumor necrosis factor-associated apoptosis-inducing ligand (TRAIL) [119]. Finally, it promotes the immortalization of tumor cells and the development of lung cancer. The RNA editing protein ADAR promotes LUAD progression by stabilizing transcripts encoding focal adhesion kinase (FAK). The increased abundance of ADAR in the mRNA and protein levels in lung tissues of LUAD patients was associated with tumor recurrence. ADAR increases the stability of FAK mRNA by binding to FAK. FAK blocks the ADAR-induced invasiveness of LUAD cells [120]. A-to-I microRNA editing is correlated with tumor phenotypes in multiple cancer types. Altered editing levels of mi-croRNAs in LUAD may be a potential biomarker [121]. (1) Activation of METTL3 promotes drug resistance and metastasis in NSCLC; (2) FTO promotes the growth of NSCLC cells by demethylating USP7 mRNA or (1) LncRNAs PCAT1 is highly expressed in NSCLC and cooperates with DKC1 to affect proliferation, invasion and apoptosis of NSCLC cells; (2) PUS10 promotes the immortalization of tumor cells and the development of lung cancer; (3) ADAR promotes LUAD progression.
[ [96][97][98] A-to-I editing ADAR1, ADAR2 (1) A-to-I microRNA editing is correlated with tumor phenotypes. [99] In addition to studies on lung cancer, the effects of RNA modification in other chronic lung diseases such as COPD, pneumonia, asthma, and pulmonary fibrosis have also been explored (Figure 3). Exposure to some toxicants can cause A-to-I editing of lung cells. Potassium chromate (VI) induced upregulation of ADARB1 in human lung cells [122], whereas tetrachlorodibenzodioxin (TCDD) exposure resulted in the decreased expression of ADARB1 [123]. Carbon nanotubes can increase ADAR expression in mouse lungs [124]. It has been found that the expression of m 6 A RNA methylation regulators is abnormal in COPD, among which the mRNA expressions of IGF2BP3, FTO, METTL3, and YTHDC2 show a tight association with the occurrence of COPD. IGF2BP3, FTO, METTL3, and YTHDC2 have obvious correlations with various important genes enriched in signaling pathways and biological processes that promote the development and progression of COPD [125]. Exposure to fine particulate matter (PM 2.5 ) is an important cause of COPD. METTL16 may regulate sulfate expression through m 6 A modification, thereby participating in PM 2.5 -induced microvascular injury and promoting the development of COPD [126]. In addition, the increase of mRNA m 5 C modification may negatively affect normal lung metabolic activities by upregulating gene expression levels in the lungs of mice exposed to PM 2.5 [127]. LncRNA small nucleolar RNA host gene 4 (SNHG4) promotes LPS-induced lung inflammation by inhibiting METTL3-mediated expression of STAT2 mRNA m 6 A [128]. Myofibroblasts are the main collagen-producing cells in pulmonary fibrosis, which are mostly derived from resident fibroblasts via fibroblast-to-myofibroblast transition (FMT). m 6 A modification was upregulated in the bleomycin (BLM)-induced pulmonary fibrosis mouse model, FMT-derived myofibroblasts, and lung samples from IPF patients. Silencing METTL3 can inhibit FMT by reducing m 6 A levels. KCNH6 is involved in the m 6 A-regulated FMT process. m 6 A modification regulates KCNH6 expression through YTHDF1 [129,130]. ALKBH5 promotes silica-induced pulmonary fibrosis through miR-320a-3p/forkhead box protein M1 (FOXM1) axis or directly targeting FOXM1. Targeting ALKBH5 can be used to treat pulmonary fibrosis [131]. The m 6 A methylation has also been implicated in the pathogenesis of asthma, and YTHDF3 has an effect on eosinophils for severe asthma, which can guide future immunotherapy strategies [132]. Increased METTL1 level in IPF patients is associated with poor prognosis. IPF can be divided into two molecular subtypes (subtype 1 and subtype 2) by combining the expression levels of METTL1 and RNMT. Patients with subtype 2 have a more unfavorable prognosis than patients with subtype 1. It suggests that m 7 G has an important value in predicting the prognosis of IPF patients and early diagnosis of IPF patients [133]. In addition, increased m 5 C modification is also involved in the process of PM2.5 induced COPD by inhibiting lung metabolic activity. LncRNA SNHG4 promotes LPS-induced pneumonia by inhibiting METTL3-mediated STAT2 mRNA expression. The enhancement of FMT promotes pulmonary fibrosis. METTL3 increases the FMT process. The m 6 A modification also is involved in FMT process by regulating KCNH6 expression through YTHDF1. METTL1 level in IPF patients is positive associated with poor prognosis. ALKBH5 promotes silicon-induced pulmonary fibrosis through FOXM1. The m 6 A-related YTHDF3 causes severe asthma by affecting eosinophils.
Diagnosis and Therapeutic Potential
In recent years, RNA modification has been identified as a novel regulatory mechanism in controlling cancer pathogenesis and treatment response/resistance. The m 6 A modification plays a multifunctional role in normal and abnormal biological processes, and its regulatory proteins can act as therapeutic targets for cancer and are expected to be biomarkers for overcoming drug resistance [134]. METTL3 is the major catalytic subunit of m 6 A modification. METTL3 facilitates the translation of a large subset of oncogenic mRNAs and has direct physical and functional interactions with translation initiation factor 3 subunit h (eIF3h). METTL3-eIF3h interaction is required for oncogenic transformation. The depletion of METTL3 inhibits tumorigenicity and sensitizes lung cancer cells to bromodomain-containing protein 4 (BRD4) inhibition [135]. METTL3 promotes tumor development in human lung cancer cells by upregulating the translation of important oncogenes such as EGFR and TAZ. MiR-33a, a negative regulator of METTL3, can directly target the 3'UTR of METTL3 mRNA, reduce its expression, and further inhibit NSCLC cell proliferation [136]. The dynamic m 6 A methylome is a new mechanism for drug resistance in cancer, such as tyrosine kinase inhibitors (TKIs) [137]. FTO is an oncogene of LUSC, Figure 3. The mechanisms and pathways of RNA modifications in chronic lung diseases. RNA modifications promote the occurrence and development of chronic lung diseases such as COPD, pneumonia, asthma, and pulmonary fibrosis. Among these modifications, m 6 A have been the most studied in chronic lung diseases. The mRNA expressions of m 6 A related IGF2BP3, FTO, METTL3 and YTHDC2 promote the progression of COPD. The m 6 A methyltransferase METTL16 regulates the level of sulfate and participates in microvascular injury induced by PM 2.5 . In addition, increased m 5 C modification is also involved in the process of PM 2.5 induced COPD by inhibiting lung metabolic activity. LncRNA SNHG4 promotes LPS-induced pneumonia by inhibiting METTL3-mediated STAT2 mRNA expression. The enhancement of FMT promotes pulmonary fibrosis. METTL3 increases the FMT process. The m 6 A modification also is involved in FMT process by regulating KCNH6 expression through YTHDF1. METTL1 level in IPF patients is positive associated with poor prognosis. ALKBH5 promotes silicon-induced pulmonary fibrosis through FOXM1. The m 6 A-related YTHDF3 causes severe asthma by affecting eosinophils.
Diagnosis and Therapeutic Potential
In recent years, RNA modification has been identified as a novel regulatory mechanism in controlling cancer pathogenesis and treatment response/resistance. The m 6 A modification plays a multifunctional role in normal and abnormal biological processes, and its regulatory proteins can act as therapeutic targets for cancer and are expected to be biomarkers for overcoming drug resistance [134]. METTL3 is the major catalytic subunit of m 6 A modification. METTL3 facilitates the translation of a large subset of oncogenic mRNAs and has direct physical and functional interactions with translation initiation factor 3 subunit h (eIF3h). METTL3-eIF3h interaction is required for oncogenic transformation. The depletion of METTL3 inhibits tumorigenicity and sensitizes lung cancer cells to bromodomain-containing protein 4 (BRD4) inhibition [135]. METTL3 promotes tumor development in human lung cancer cells by upregulating the translation of important oncogenes such as EGFR and TAZ. MiR-33a, a negative regulator of METTL3, can directly target the 3'UTR of METTL3 mRNA, reduce its expression, and further inhibit NSCLC cell proliferation [136]. The dynamic m 6 A methylome is a new mechanism for drug resistance in cancer, such as tyrosine kinase inhibitors (TKIs) [137]. FTO is an oncogene of LUSC, and its increased expression can promote the growth of cancer cells. Knockout of FTO significantly decreased MZF1 mRNA level, and MZF1 gene silencing significantly inhibited the cell viability and invasion of LUSC [109]. Two FTO inhibitors, FB23 and FB23-2, can attenuate the activity of FTO demethylase by directly binding to the activity pocket of FTO demethylase, resulting in a significant lethal effect on cancer cells [138]. Under the action of intermittent hypoxia (IH), the expression of ALKBH5 was upregulated in lung cancer cells, resulting in a decrease in the level of m 6 A. Knockdown of ALKBH5 under this condition significantly inhibited cell invasion by upregulating the m 6 A level of FOXM1 mRNA and reducing its translation efficiency [139]. Downregulation of solute carrier 7A11 (SLC7A11) expression in lung cancer inhibits cell proliferation and colony formation. In LUAD, m 6 A modification destabilizes SLC7A11 mRNA and accelerates mRNA decay upon recognition by YTHDC2, thereby affecting cystine uptake and contributing to antitumor activity [140]. The m 6 A modification-related inflammatory cytokine interleukin 37 (IL-37) has received extensive attention for the treatment of lung cancer. IL-37 inhibits the proliferative capacity of LUAD cells by regulating RNA methylation. Meanwhile, overexpression of IL-37 decreased the expression of ALKBH5 and thus can also be used to treat NSCLC patients [141]. Gefitinib is indicated for the treatment of locally advanced or metastatic NSCLC. However, acquired resistance limits its long-term efficacy. m 6 A modification reduces gefitinib resistance (GR) in NSCLC patients via the FTO/YTHDF2/ABCC10 axis [142]. The development of selective inhibitors to RNA modification regulators for future clinical applications would create more effective therapeutic approaches for treating lung cancers and chronic lung diseases.
Conclusions and Future Perspectives
In conclusion, this article reviews the recent advances in the function and molecular mechanism of RNA epigenetics in the progression of chronic lung diseases. RNA epigenetics is expected to be a research tool for the development of new diagnostic biomarkers with clinical value. Enzymes involved in regulating RNA modification can be new targets for the treatment of chronic lung diseases. Most of the m 6 A regulators are upregulated in cancer and play a role in promoting tumor growth. These regulators, including METTL3, METTL14, and WTAP, and their key targets are associated with the clinical characteristics of various cancer patients, which may provide new possibilities for early cancer diagnosis [143]. The combination of m 6 A targeted drugs with traditional chemotherapy drugs or PD-1/PD-L1 inhibitors has great therapeutic prospects [144]. The mechanism by which IL-37 inhibits the proliferation of LUAD cells and is used to treat NSCLC patients is also related to m 6 A methylation [141]. YTHDF1 knockout can significantly enhance the therapeutic effect of PD-L1 immune checkpoint blocking, suggesting that YTHDF1 is a potential therapeutic target in tumor immunotherapy [145]. The m 5 C deletion of mitochondrial RNA in tumor cells can reduce the metastasis and invasion of cancer cells. This means that when cancer patients undergo clinical treatment, they can inhibit the metastasis and spread of cancer cells by inhibiting m 5 C modification in mitochondria, thus increasing clinical benefits. As an enzyme responsible for modifying RNA, NSUN3 is only used to modify the formation of m 5 C, so it is a very promising drug target [146]. RNA modification provides a new research direction for the early diagnosis and treatment of tumors.
In the future, precision medicine based on RNA epigenetics may target individual patients with chronic lung diseases for diagnosis and treatment. However, the function of these chemical modifications in both coding and noncoding RNAs remains in its infancy, and collaborative efforts are still needed to establish a clear link between RNA epigenetics and chronic lung diseases. At present, many new methods of RNA modification have emerged, among which Direct RNA sequencing is the representative one. Through the unique way of the library building and relying on the algorithm, the collection of RNA modification information at the single base level can be achieved. The development of RNA modification quantitative map databases (such as DirectRMDB) based on this sequencing method also undoubtedly opens another direction for RNA modification detection [147].
In brief, the advances in RNA epigenetics detection technologies will undoubtedly lead to the discovery of new mechanisms regulating gene expression in chronic lung diseases in the future.
Author Contributions: Writing-original draft preparation, X.W. and Z.G.; writing-review and editing, Z.G.; supervision, F.Y. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. | 8,423.8 | 2022-12-01T00:00:00.000 | [
"Biology"
] |
Extended Interaction Length Laser-Driven Acceleration in a Tunable Dielectric Structure
The development of long, tunable structures is critical to increasing energy gain in laser-driven dielectric accelerators (DLAs). Here we combine pulse-front-tilt illumination with slab-geometry structures assembled by precisely aligning off-the-shelf 4~mm long transmission gratings to achieve up to 200 keV energy modulation for 6 MeV injected electrons. The effective interaction length is longer than 1~mm, limited by dephasing of the accelerated particles in the structure. The piezo-based independent mounting system for the gratings allows tuning of the gap and field distribution inside the structure.
Shrinking accelerators to the optical scale holds the promise to reduce cost and increase availability of relativistic electron beams for scientific, industrial and medical applications [1].Leveraging the high damage threshold of dielectric materials as well as continuous progress in high power laser and nanofabrication technologies, laser-driven structure accelerators (or dielectric laser accelerators, DLAs) have already demonstrated GeV/m level gradients [2], much larger than conventional accelerators and current research efforts are directed towards extending the interaction region.Notably, the technical challenges to achieve this goal are common to all advanced accelerators, including the plasma-based schemes [3], and are related to the physical dimensions of the accelerator (length of structure or plasma cell), the temporal walk-off associated with the different velocities of the drive pulse and the electron bunch (group velocity mismatch) and the loss of phase-synchronicity as the particles gain energy (dephasing).Depletion of the energy in the driving pulse then poses the fundamental limit to the acceleration length.
Experimental demonstration of DLA acceleration has been accomplished using two main structure types: pillars and gratings.Dual pillar structures can be fabricated on single Si wafers, easing the nm-scale fabrication tolerances.They can then be illuminated (from the side or the top) without propagating high intensity laser pulses inside thick dielectric substrates [4][5][6][7][8].Dual grating structures can have much larger aspect ratios, a built-in collimation function which is useful to isolate the transmitted electrons, and can be made out of fused silica and/or coated with higher damage threshold materials [2,[9][10][11][12][13][14][15].Reaching long interaction lengths in both of these structures has been impeded by the constraints imposed by power delivery geometry.The highest accelerating gradients are only accessible using 100 fs laser pulses, which allow for high intensities while still remaining below the damage thresholds for most materials.Since the laser is typically coupled orthogonally to the direction of electron travel, the interaction length is set by its pulse length to the tens of µm scale [16].To overcome this limitation, a pulse-front-tilt (PFT) configuration has been employed, extending the interaction beyond the temporal laser envelope duration [17,18], leading to the demonstration of 315 keV energy gain over a 700 µm interaction length [14].
Additional energy gain requires manufacturing longer structures, stretching the state-of-the-art in nanofabrication techniques to meet tolerances for sustaining acceleration and preserving alignment over meso-scale (mm to cm) dimensions.Some degree of post-fabrication tuning would greatly ease these challenges and allow for more flexibility in the structure design.This is also due to the fact that efficiently interacting over a longer distance requires mitigating the loss of phase synchronicity (or resonant condition) caused by the particles gaining energy in the structure.Dephasing can be compensated, as recently shown in sub-relativistic experiments, by carefully chirping the parameters of the structure along its length.However, this limits the structure to a unique input beam energy and laser gradient.Resonant acceleration can also be preserved by the so called soft-tuning approach which entails control of electron dynamics through software based manipulation of the drive laser phase and is very appealing for its experimental flexibility [19].
In this experiment we demonstrate the use of independently mounted commercial transmission gratings to form a 4 mm long dual grating structure for laser-driven acceleration.This structure is illuminated on a single side by a 2 mJ 780 nm, 100 fs laser in a PFT configuration and fed by the high brightness 6 MeV electron beam from the UCLA Pegasus photoinjector.By mounting the gratings on separate piezo controls, we can adjust the gap and the relative tooth offset to optimize the amplitude and symmetry of the fields experienced by the electrons and maximize in situ the energy modulation up to 200 keV.In agreement with FTDT simulations and optical characterization of the structure, a periodic slowly decaying relation between energy gain and gap size is observed.From the saturation of the energy gain for varying PFT laser sizes, a DLA interaction length of > 1 mm is observed, short of the physical dimension of the grating, but fully consistent with dephasing in an unchirped structure.These results provide the first demonstration of an in situ tunable grating structure and also the longest DLA interaction to date.They represent a critical step forward in increasing the energy gain in DLA schemes to the MeV scale.The experimental geometry is shown in Fig. 1, with a laser incident upon two parallel transmission gratings.The fields in the vacuum gap of an infinitely wide (no xdependence) dual grating structure of period λ g = 2π/k g illuminated by a laser of angular frequency ω and amplitude E 0 polarized along the direction of the electrons (in order to excite a TM wave) can be written as a sum of Floquet modes where the n th mode, described by longitudinal wave number, k n = nk g has normalized phase velocity For phase-synchronous acceleration we focus our attention on the resonant mode (usually n = 1) for which β n = β.In order to satisfy Maxwell's equations, the transverse wavenumber is The complex parameters d n and c n depend on the mode number n, the input laser frequency, and the structure geometry and can be interpreted as the amplitudes of the waves diffracted from the top and bottom grating respectively.
In a symmetrically illuminated structure, c n = d n and E z could be described by a cosh-like mode centered in the middle of the structure gap.However, for this singleside illumination, it is instead described by the sum of a cosh-like and sinh-like mode.The structure factor, κ, is proportional to the acceleration gradient and can be written in terms of only the the parameters c n and d n as κ n = |d n + c n |.In the upper left panel of Fig. 2 FTDT simulations results show how κ decreases with increasing gap size as expected from the evanescent nature of the fields.A weaker, but clearly visible dependence on the relative offset between the teeth is also observed.
Likewise, we can define a deflection parameter, δ, which is the magnitude of the sinh mode within the structure.This deflection is proportional to (k n − k 0 β)/Γ = 1/γ, and is therefore defined as δ = |d n − c n |/γ.The Fig. 2b shows δ as a function of offset and gap for the gratings in this experiment.From this, it is clear that the deflection force can be minimized via correctly aligning the structure geometry, even with defined grating parameters.Note that deflection forces are 2 orders of magnitude smaller than the acceleration force, regardless of alignment.This allows the structures to have some angular misalignment while maintaining high throughput.
We use 4 mm square gratings etched on a 625 µm thick fused silica substrate with a tooth height of 855 nm, 65% duty cycle, and 800 nm periodicity.The increased stiffness compared to thinner substrates is useful to avoid bending.Early attempts at bonding two wafers over multiple millimeters resulted in warped structures with micron-scale gap variability, so we opted for developing an independent mounting system.The gratings are mounted in a cage system with both coarse and fine controls of relative angle, gap, and offset, shown in Fig. 3a.The lower grating is glued at three points to its respec- tive mount attached to a 3-axis vacuum-compatible piezo stage.
Structures are characterized optically before beamline insertion using a 635 nm diode laser.During assembly of the structure, we first eliminate spatial thin film interference fringes using course angular adjustment followed by piezo fine tuning to flatten the gap.At this point, a tunable etalon effect on the reflectivity of the structure can be verified by changing the gap by λ/2.Once the gap is flat and small (< 6 µm), interference in the diffraction lobes can be used to set the relative grating rotation to near zero.Finally, the relative intensity of the first order diffraction lobes is recorded as a function of gap and offset.Simulations performed in Lumerical are compared to these measurements to retrieve the offset and gap [20], as shown in Fig. 2.
The optical system makes use of a 20 mJ, 100 fs, 780 nm laser split 9 to 1 between a frequency tripling UV path for the photocathode and a drive line utilizing a pulse front tilt (PFT) configuration incident on the DLA.The PFT setup is similar to the one described by Cesar et al [21] with an additional intermediate imaging plane where a piezo-controlled mirror can be used to adjust the angle of incidence on the DLA without changing the spatial alignment with the electron beam.A 600 ln/mm grating is followed by a 300 mm focal length achromatic lens to create the mid-point imaging plane.Two achromatic lenses (150 mm and 300 mm focal lengths respectively) are then used to precisely adjust the magnification and imaging plane location at the DLA structure.Since the DLA grating period (800 nm) is longer than the laser wavelength (780 nm), the phase matching condition k g − ω l cβ + ω l c sin(θ I ) = 0 for a 6 MeV electron beam is satisfied by an incident angle θ I = 28.1 mrad [15] which can be first set by careful alignment of the DLA backreflection and then tuned in with the piezo controlled mirror.
The overall magnification (M = tan(θ P F T ) dλ l = 2.08) is determined by group velocity matching the laser pulse to the electrons, β = cos(θ P F T ) sin(θ I +θ P F T ) , yielding θ P F T = 44.3deg.The main laser PFT angle is directly measured to be 44.2±0.3 deg over an interaction longer than 4 mm by observing the location of the interference fringes as a function of relative time of arrival of a probe reference laser pulse at the DLA plane.Two additional cylindrical lenses are used to adjust the transverse laser spot size in the non-PFT dimension and control the fluence at the interaction.
A 1 pC, 6 MeV, 1 ps electron bunch is generated by the UCLA Pegasus gun and linac [22], and focused at the DLA plane to an rms spot size of 100 µm with a normalized emittance of 200 nm.The measurement of the transmittance (approximately 1000 e-/shot with the laser off and a gap size of 1 µm) is consistent with these beam parameters and the structure dimensions.Note that in the initial setup we can take advantage of the piezo motor to widen the gap from 1 µm to 5 µm, and increase the transmission 26-fold, allowing for the optimization of pitch and yaw angle and e-beam spot size before decreasing the gap size.Downstream of the DLA, the beam is then transported to a dipole spectrometer, as shown in Fig. 3b.
After overlapping spatially and temporally [23] the electron and laser beams at the DLA plane, first experiments are conducted using a reference flat laser pulse by replacing the nominal 600 ln/mm PFT grating with a mirror.In this case, the interaction length is set by the laser pulse length and the only quantities affecting energy modulation are the incident fluence and the structure factor.Once a modulation signal is stably obtained in the flat pulse case, we first replace the grating with a 1200 ln/mm (giving a PFT angle of approximately 62.8 degrees) to increase the interaction region to 240 µm and then go to the nominal grating and change the dimension of the laser along the pulse front tilt dimension to maximize interaction length.
In Fig. 3c we show a representative energy spectrum showing the highest energy modulation recorded in the experiment.The asymmetry in energy gain and loss is consistent with the angle of incidence in this particular case being lower than the resonant angle 28.1 mrad [14].In general, in order to analyze the spectra, we define the figure of merit as F OM = ⟨S1⟩−⟨S0⟩ σ0 where ⟨S 1(0) ⟩ are the average of the observed spectra at least 5 laser on (off) shots and σ 0 is the standard deviation of the laser off shots.To better discriminate the signal at the tails of the spectrum where the electron density is low, we require >5 consecutive points on the data sets to have a signal-tonoise ratio larger than 1.25.We find the maximal energy gain from this experiment to be 200 keV.
Our setup allows us for the first time to study the per- formances of the DLA accelerator as a function of the gap between the gratings.In Fig. 4, we show the results of the gap-scan at constant offset as performed by controlling the in-vacuum piezo motors.In agreement with simulation, we observe a clear decrease in DLA acceleration as the gap increases which can be explained by the lower structure factor.The shaded area in the figure shows the range of possible κ depending on the teeth offset which is a parameter that can not be measured directly during the experiment.A particular line corresponding to an offset of 0 nm can be well matched to the data.In addition, while the depletion in the zeroloss main peak is evident and nearly constant at all gaps, the population of ac(de)celerated electrons changes in a periodic fashion, resulting in the energy modulation signal to vanish at certain gaps.This can be explained by considering the periodic variation of the deflection forces when adjusting the gap at constant offset (i.e.moving on an horizontal line in fig.2b.).Whenever the deflection forces are the strongest, no accelerated particles can make it through the narrow gap and the acceleration signal is lost.A sinusoidal fit with a decaying amplitude is overlaid to the data to take this effect into account.
We can also study the acceleration in this uniform dual grating structure for different interaction lengths.Fig. 5a shows the results where the interaction length is controlled in two distinct ways, i.e. by i) temporally or ii) spatially varying the overlap of the laser and the electrons at the DLA.The former is accomplished by swapping the PFT grating to create different θ P F T .In the θ P F T = 62.8 • case, this amounts to a 2.2x longer interaction length than the flat pulse case, designated in the green point in fig.5a.Once the PFT angle is matched to the electron velocity, however, the interaction is instead limited by the spatial extent of the laser which can be controlled by placing a slit aperture just before the PFT grating imaging plane.Fig. 5a shows the energy modulation increasing up to an interaction length of 1.24 mm and subsequently saturating.
There are a number of reasons that could contribute to this plateau, including a slightly unmatched PFT angle, a spatial variation in the laser phase profile and a poor alignment of the electron beam and laser propagation axes.Nevertheless, even when accounting for these factors, the energy exchange would still be limited by the particle dephasing along the interaction, since the gratings are not tapered.In order to understand this effect we look at the energy of a particle in the DLA fields as calculated by simply integrating the field amplitude and taking into account the dynamical evoluation of the electron phase for given pulse front tilt and incident angles.We plot particle trajectories assuming an initial 6 MeV beam in Fig. 5b for different input phases.The trajectories demonstrate that electrons do not gain energy linearly over the full length of the grating structure.For example, electrons which enter the DLA at the optimal 2 π phase reach their peak energy around the center of the structure and are subsequently decelerated by the end of the DLA.We consider this to be the main reason leading to the plateau of the maximal energy gain within the structure.The impact of dephasing on the DLA longitudinal dynamics can be minimized by increasing the input electron energy as a stiffer the beam can resonantly interact for a longer distance in a structure of constant periodicity.In the non-relativistic regime, chirped structures which taper the structure period for continuous phase matching have been shown effective to mitigate this effect.
In conclusion, we demonstrated the use of off-the-shelf gratings to accelerate electrons over a record 1.24 mm effective length.The use of commercial gratings to assemble a tunable DLA presents an attractive pathway to large-scale DLA development.The observed 200 keV energy modulation yields an average acceleration gradient of 0.16 GeV/m, mainly due to the non-optimized structure factor of the gratings.Further improvements can also be obtained by increasing the incident laser intensity, limited in the experiment by the low damage threshold of the grating antireflection coating layer.
The piezo-controlled independent mounting system allowed for the first time for beam-based tuning of structure parameters and the accelerator performances.In particular, sub-micron accuracy in controlling the gap size over 4 mm length was demonstrated in both optical measurement and acceleration experiments, addressing the challenge of aligning nanostructure on the multi-mm mesoscale, which is a fundamental step towards increasing the energy gain for relativistic applications of the DLA acceleration scheme.
In future experiments, the PFT optical setup used here will be modified to include a spatial light modulator to phase match the laser field to the accelerated electrons to mitigate the dephasing that caused saturation, allowing for multi-mm DLA acceleration lengths and energy gains of > 1 MeV to be fully realized.
Thanks to K. Buchwald from Ibsen Photonics for his help finding commercial gratings.This work has been
FIG. 1 .
FIG. 1.A cartoon showing a linearly polarized laser with pulse-front-tilt angle θP F T arriving at incident angle θI on a structure with periodicity λg.The field distribution of the evanescent mode inside the structure is shown.
FIG. 2 .
FIG.2.Dual grating structure parameters have strong dependence on offset and gap.Upper left) structure factor, κ, is seen to decay significantly with increased gap.Bottom left) Deflection factor, δ; only where the deflection is near zero will electrons be transmitted.Note the amplitude of deflection is two orders of magnitude weaker than the acceleration force.Bottom right) Measured ratio of ± 1 diffraction order amplitudes from the assembled structure illuminated by a 635 nm diode laser (lower right) and corresponding FDTD simulations (upper right).
FIG. 3 .
FIG. 3. (a) The components of the mounting system; each colored component corresponds to a tunable component.Only the piezo motor, highlighted in pink, is controllable when the structure is in vacuum.The grating itself is shown in the small inset.(b) Experimental setup (not to scale).The UCLA Pegasus gun and linac generates 6 MeV electrons which are then focused into the DLA aperture by a quadrupole triplet and then sent to a dipole spectrometer where the beam is observed on a YAG screen, imaged by a gated intensified CCD camera (ICCD).The pulse-front-tilt optics are also shown, with imaging planes denoted by stars at the initial grating, an intermediate plane where the piezo motor controlled mirror is installed, and at the DLA.Cylindrical lenses are used to tune the intensity.(c) Typical laser on and laser off electron energy spectrum.
FIG. 4 .
FIG. 4. Scan over gap size.(a) F OM vs. gap between the transmission gratings.(b) Maximum energy modulation as a function of gap size extracted from (a).The simulated structure factor, κ, as function of gap is also shown.The filled red region represents the variation of κ depending on the teeth offset.A sinusoidal fit with a decaying amplitude is fit to the data.
FIG. 5 .
FIG. 5. (a) Data shows increasing energy modulation up to an effective length of 1.24 mm.(b) Particle trajectories throughout a 4 mm DLA interaction.Particles are color-coded according to their initial phase.Due to dephasing, saturation of the energy gain occurs before the full length of the structure. | 4,670.4 | 2023-10-18T00:00:00.000 | [
"Physics",
"Engineering"
] |
Next Sentence Prediction helps Implicit Discourse Relation Classification within and across Domains
Implicit discourse relation classification is one of the most difficult tasks in discourse parsing. Previous studies have generally focused on extracting better representations of the relational arguments. In order to solve the task, it is however additionally necessary to capture what events are expected to cause or follow each other. Current discourse relation classifiers fall short in this respect. We here show that this shortcoming can be effectively addressed by using the bidirectional encoder representation from transformers (BERT) proposed by Devlin et al. (2019), which were trained on a next-sentence prediction task, and thus encode a representation of likely next sentences. The BERT-based model outperforms the current state of the art in 11-way classification by 8% points on the standard PDTB dataset. Our experiments also demonstrate that the model can be successfully ported to other domains: on the BioDRB dataset, the model outperforms the state of the art system around 15% points.
Introduction
Discourse relation classification has been shown to be beneficial to multiple down-stream NLP tasks such as machine translation (Li et al., 2014), question answering (Jansen et al., 2014) and summarization (Yoshida et al., 2014). Following the release of the Penn Discourse Tree Bank (Prasad et al., 2008, PDTB), discourse relation classification has received a lot of attention from the NLP community, including two CoNLL shared tasks (Xue et al., 2015(Xue et al., , 2016. Discourse relations in texts are sometimes marked with an explicit connective (e.g., but, because, however), but these explicit signals are often absent. With explicit connectives acting as informative cues, it is relatively easy to classify the discourse relation with high accuracy (93.09% on four-way classification in (Pitler et al., 2008)).
When there is no connective, classification has to rely on semantic information from the relational arguments. This task is very challenging, with state-of-the-art systems achieving accuracy of only 45% to 48% on 11-way classification. Consider example 1: ( In order to correctly classify the relation, it is necessary to understand that Arg1 raises the expectation that the next discourse segment may provide an explanation for why the venture wasn't good (e.g., that it was risky), and Arg2 contrasts with this discourse expectation. More generally, this means that a successful discourse relation classification model would have to be able to learn typical temporal event sequences, reasons, consequences etc. for all kinds of events. Statistical models attempted to address this intuition by giving models word pairs from the two arguments as features (Lin et al., 2009;Park and Cardie, 2012;Biran and McKeown, 2013;Rutherford and Xue, 2014), so that models could for instance learn to recognize antonym relations between words in the two arguments.
Recent models exploit such similarity relations between the two arguments, as well as simpler surface features that occur in one relational argument and correlate with specific coherence relations (e.g., the presence of negation, temporal expressions etc. may give hints as to what coherence relation may be present, see Park and Cardie (2012); Asr and Demberg (2015)). However, relations between arguments are often a lot more diverse than simple contrasts that can be captured through antonyms, and may rely on world knowledge (Kishimoto et al., 2018). It is hence clear that one cannot learn all these diverse relations from the very small amounts of available training data. Instead, we would have to learn a more general representation of discourse expectations.
Many recent discourse relation classification approaches have focused on cross-lingual data augmentation , training models to better represent the relational arguments by using various neural network models, including feed-forward network (Rutherford et al., 2017), convolutional neural networks (Zhang et al., 2015), recurrent neural network (Ji et al., 2016;Bai and Zhao, 2018), character-based (Qin et al., 2016) or formulating relation classification as an adversarial task (Qin et al., 2017). These models typically use pre-trained semantic embeddings generated from language modeling tasks, like Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018).
However, previously proposed neural models still crucially lack a representation of the typical relations between sentences: to solve the task properly, a model should ideally be able to form discourse expectations, i.e., to represent the typical causes, consequences, next events or contrasts to a given event described in one relational argument, and then assess the content of the second relational argument with respect to these expectations (see Example 1). Previous models would have to learn these relations only from the annotated training data, which is much too sparse for learning all possible relations between all events, states or claims.
The recently proposed BERT model (Devlin et al., 2019) takes a promising step towards addressing this problem: the BERT representations are trained using a language modelling and, crucially, a "next sentence prediction" task, where the model is presented with the actual next sentence vs. a different sentence and needs to select the original next sentence. We believe it is a good fit for discourse relation recognition, since the task allows the model to represent what a typical next sentence would look like.
In this paper, we show that a BERT-based model outperforms the current state of the art by 8% points in 11-way implicit discourse relation classification on PDTB. We also show that after pre- trained with small size cross-domain data, the model can be easily transferred to a new domain: it achieves around 16% accuracy gain on BioDRB compared to state of the art model. We also show that the Next Sentence Prediction task played an important role in these improvements. Devlin et al. (2019) proposed the Bidirectional Encoder Representation from Transformers (BERT), which is designed to pre-train a deep bidirectional representation by jointly conditioning on both left and right contexts. BERT is trained using two novel unsupervised prediction tasks: Masked Language Modeling and Next Sentence Prediction (NSP). The NSP task has been formulated as a binary classification task: the model is trained to distinguish the original following sentence from a randomly chosen sentence from the corpus, and it showed great helps in multiple NLP tasks especially inference ones. The resulting BERT representations thus encode a representation of upcoming discourse content, and hence contain discourse expectation representations which, as we argued above, are required for classifying coherence relations. is the special classification embedding while "C" is the same as "[CLS]" in pre-training but the ground-truth label in the fine-tuning. In the experiments, we used the uncased base model 1 provided by Devlin et al. (2019), which is trained on BooksCorpus and English Wikipedia with 3300M tokens in total.
Evaluation on PDTB
We used the Penn Discourse Tree Bank (Prasad et al., 2008), the largest available manually annotated discourse corpus. It provides a three level hierarchy of relation tags. Following the experimental settings and evaluation metrics in Bai and Zhao (2018), we use two most-used splitting methods of PDTB data, denoted as PDTB-Lin (Lin et al., 2009), which uses sections 2-21, 22, 23 as training, validation and test sets, and PDTB-Ji (Ji and Eisenstein, 2015), which uses 2-20, 0-1, 21-22 as training, validation and test sets and report the overall accuracy score. In addition, we also performed 10-fold cross validation among sections 0-22, as promoted in . We also follow the standard in the literature to formulate the task as an 11-way classification task. Results are presented in Table 1. We evaluated three versions of the BERT-based model. All of our BERT models use the pre-trained representations and are fine-tuned on the PDTB training data. The version marked as "BERT" does not do any additional pre-training. BERT+WSJ in addition performs further pre-training on the 1 https://github.com/google-research/ bert#pre-trained-models parts of the Wall Street Journal corpus that do not have discourse relation annotation. The model version "BERT+WJS w/o NSP" also performs pre-training on the WSJ corpus, but only uses the Masked Language Modelling task, not the Next Sentence Prediction task in the pre-training. We added this variant to measure the benefit of in-domain NSP on discourse relation classification (note though that the downloaded pre-trained BERT model contains the NSP task in the original pre-training).
We compared the results with four state-of-theart systems: Cai and Zhao (2017) proposed a model that takes a step towards calculating discourse expectations by using attention over an encoding of the first argument, to generate the representation of the second argument, and then learning a classifier based on the concatenation of the encodings of the two discourse relation arguments. Kishimoto et al. (2018) fed external world knowledge (ConceptNet relations and coreferences) explicitly into MAGE-GRU (Dhingra et al., 2017) and achieved improvements compared to only using the relational arguments. However, we here show that it works even better when we learn this knowledge implicit through next sentence prediction task. Shi and Demberg (2019) used a seq2seq model that learns better argument representations due to being trained to explicitate the implicit connective. In addition, their classifier also uses a memory network that is intended to help remember similar argument pairs encountered during training. The current best performance was achieved by Bai and Zhao (2018), who combined representations from different grained em-beddings including contextualized word vectors from ELMo (Peters et al., 2018), which has been proved very helpful. In addition, we compared our results with a simple bidirectional LSTM network and pre-trained word embeddings from Word2Vec.
We can see that on all settings, the model using BERT representations outperformed all existing systems with a substantial margin. It obtained improvements of 7.3% points on PDTB-Lin, 5.5% points on PDTB-Ji, compared with the ELMobased method proposed in (Bai and Zhao, 2018). What's more, the BERT model outperformed (Shi and Demberg, 2019) on cross validation by around 8%, with significance of p<0.01. Significance test was performed by estimating variance of the model from the performance on different folds in cross-validation (paired t-test). For the Lin and Ji evaluations, we estimated variance due to random initialization by running them 5 times and calculating the likelihood that the state-of-the-art model result would come from that distribution.
Evaluation On BioDRB
The Biomedical Discourse Relation Bank (Prasad et al., 2011) also follows PDTB-style annotation. It is a corpus annotated over 24 open access fulltext articles from the GENIA corpus (Kim et al., 2003) in the biomedical domain. Compared with PDTB, some new discourse relations and changes have been introduced in the annotation of Bio-DRB. In order to make the results comparable, we preprocessed the BioDRB annotations to map the relations to the PDTB ones, following the instructions in Prasad et al. (2011).
The biomedical domain is very different from the WSJ or the data on which the BERT model was trained. The BioDRB contains a lot of professional words / phrases that are extremely hard to model. In order to test the ability of the BERT model on cross-domain data, we performed finetuning on PDTB while testing on BioDRB. We also tested the state of the art model of implicit discourse relation classification proposed by Bai and Zhao (2018) on BioDRB. From Table 2, we can see that the BERT base model achieved almost 12% points improvement over the Bi-LSTM baseline and 15% points over Bai and Zhao (2018). When fine-tuned on in-domain data in the crossvalidation setting, the improvement increases to around 17% points. Table 2: Accuracy (%) on BioDRB level 2 relations with different settings. Cross-Domain means trained on PDTB and tested on BioDRB. For the In-Domain setting, we used 5-fold cross-validation and report average accuracy. Numbers in bold are significantly better than the state of the art system with p<0.01 and numbers with * denote denote significant improvements over BERT + GENIA w/o NSP with p<0.01.
It is also interesting to know whether the performance of the BERT model can be improved if we add additional pre-training on in-domain data. BioBert continues pretraining BERT with bio-medical texts including PubMed and PMC corpora (around 18B tokens), which achieved the best results on in-domain setting. Similarly, BERT+GENIA refers to a model in which the downloaded BERT representations are further pre-trained on the parts of the GENIA corpus which consists of 18k sentences and is not annotated with coherence relations. Evaluation shows that this in-domain pre-training yields another 3% point improvement; our tests also show that the NSP task again plays a substantial role in the improvement. We believe that gains for further pre-training on GENIA for the biomedical domain are higher than for pre-training on WSJ for PDTB because the domain difference between the BooksCorpus and the biomedical domain is larger.
Currently there are not so many published results that we can compare with on BioDRB for implicit discourse relation classification. We compared BERT model with naïve Bayes and Max-Ent methods proposed in Xu et al. (2012) on oneversus-all binary classification. We followed the settings in Xu et al. (2012) and used two articles ("GENIA 1421503", "GENIA 1513057") for testing and one article ("GENIA 111020") for validation. During training, we employed downsampling or up-sampling to keep the numbers of positive and negative samples in each relation consistent. The BERT base model achieved 43.03% average F 1 score and 77.34% average accuracy in one-versus-all level-1 classification. Compared with the current state-of-the-art perfor- Table 3: F 1 -score (Accuracy) of binary classification on level 1 implicit relation in BioDRB. Table 4: Precision, Recall and F 1 score for each level-2 relation on PDTB-Lin setting and BioDRB with "BERT + WSJ/GENIA" systems w/ and w/o NSP. "-" indicates 0.00 and "C." means the number of each relation in the test set.
Discussion
The usage of the BERT model in this paper was motivated primarily by the use of the nextsentence prediction task during training. The results in Table 1 and Table 2 confirm that removing the "Next Sentence Prediction" hurts the performance on both PDTB and BioDRB. In order to have better insights about which relation has benefited from the NSP task, we also reported the detailed performance for each relation with and without it in BERT. As illustrated in Table 4, we can see that performances on relations like Temporal.Synchrony, Comparison.Contrast, Expansion.Conjunction and Expansion.Alternative have been improved by a large margin. This shows that representing the likely upcoming sentence helps the model form discourse expectations, which the classifier can then use to predict the coherence relation between the actually observed arguments.
However, compared with BERT+GENIA, the results of BioBert in Table 2 show that having large in-domain data for pretraining also has limited ability in learning domain specific representations. We therefore believe that the model could be further improved by including external domain-specific knowledge from an ontology (as in Kishimoto et al. (2018)) or a causal graph for biomedical concepts and events.
Conclusion and Future work
In this paper, we show that BERT has very good ability in encoding the semantic relationship between sentences with its "next sentence prediction" task in pre-training. It outperformed the current state-of-the-art systems significantly with a substantial margin on both in-domain and cross domain data. Our results also indicate that the next-sentence prediction task during training indeed plays a role in this improvement. Future work should explore the joint representation of discourse expectations through implicit representations that are learned during training and the inclusion of external knowledge. In addition, Yang et al. (2019) showed that NSP only helps tasks with longer texts. It would be interesting to see whether it has the same effect on implicit discourse relation classification task, we'd like to leave that in the future work. | 3,596.6 | 2019-11-01T00:00:00.000 | [
"Computer Science"
] |
Accounting for skill in trend, variability, and autocorrelation facilitates better multi-model projections: Application to the AMOC and temperature time series
We present a novel quasi-Bayesian method to weight multiple dynamical models by their skill at capturing both potentially non-linear trends and first-order autocorrelated variability of the underlying process, and to make weighted probabilistic projections. We validate the method using a suite of one-at-a-time cross-validation experiments involving Atlantic meridional overturning circulation (AMOC), its temperature-based index, as well as Korean summer mean maximum temperature. In these experiments the method tends to exhibit superior skill over a trend-only Bayesian model averaging weighting method in terms of weight assignment and probabilistic forecasts. Specifically, mean credible interval width, and mean absolute error of the projections tend to improve. We apply the method to a problem of projecting summer mean maximum temperature change over Korea by the end of the 21st century using a multi-model ensemble. Compared to the trend-only method, the new method appreciably sharpens the probability distribution function (pdf) and increases future most likely, median, and mean warming in Korea. The method is flexible, with a potential to improve forecasts in geosciences and other fields.
Introduction
A common forecasting problem is one of probabilistic multi-model forecasts of a stochastic dynamical system [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]. Sometimes, when a collection of complex dynamical models is used to provide multi-model forecasts, these forecasts are weighted according to model performance compared to observations [1,5,10,[19][20][21][22][23]. The Bayesian approach to this problem assumes that associated with k dynamical models are k competing statistical models M i for vector of observations y. These statistical models result in a conditional probability density function (pdf) for y given that M i is reasonable, p(y|M i ). Typically, in a multi-model evaluation context, the pdf p(y|M i ) is a multivariate statistical distribution centered on ith dynamical model trend x i . Each model is associated with a prior belief in its adequacy ("prior") p(M i ), which can be derived from previous work, or may be more subjective. The posterior probability, or weight, for each model i given the observations is then found using Bayes theorem [24]: Specifically, the posterior probability of each statistical (and corresponding dynamical) model is the likelihood of observations y coming from the model (given by the pdf p(y|M i )), multiplied by the model prior.
In ensemble modelling, models are usually judged on how well they represent the mean state of the system, its trend, or spatio-temporal fields [1,3,6,14,22,23]. However, it is increasingly being recognized that variability is of utmost importance for future prediction. Specifically, for some systems (stochastic dynamical systems) the stationary pdf of the equilibrium solution is directly affected by system dynamics (i.e., the nonlinear operator in the ordinary differential equations) through the Fokker-Planck (Kolmogorov forward) equation. Recent climate science work identifies variability as a key factor impacting climate projections [6,25]. Furthermore, variability has been used as a novel and effective constraint for climate sensitivity [26]. In addition, variability also has major relevance for forewarning of critical thresholds (i.e., a forcing value above which the underlying system shifts to a new equilibrium; [27]). Specifically, an increase in variance or lag-1 autocorrelation with time, as well as skewness and kurtosis, have been used as such early warning indicators [28][29][30][31]. This motivates using variability properties of the system as a novel metric to assess performance of multiple dynamical system models.
Several new studies break important new ground by incorporating variability into the weighting [17,[32][33][34], but they typically assume stationarity of the pdf of the system [17,32,33], or cannot work with complex dynamical models [34]. Some previous work does explicitly weight dynamical models by performance in variability and trends in a statistically-sound way [35]. However, the method in its current form works only for linear trends (as a function of time) and does not account for autocorrelation in the variability.
Here we propose a novel method to weight models of complex dynamical systems by their performance in autocorrelation, variability, and a potentially nonlinear trend (i.e., nonlinear with time) compared to observations, and to make probabilistic forecasts. The method is based on Bayesian Model Averaging (BMA) [20,21]. While the framework is Bayesian, it deviates from traditional Bayesian theory in some steps of the estimation process. We highlight these deviations where they arise in more detail in later sections. Consequently, we call our approach "quasi-Bayesian". Using several simulated and observed datasets (involving AMOC, its temperature-based index, and summer mean maximum temperature over Korea) we show that the new method results in better weighting and tends to improve forecasts of system mean change under new conditions compared to when trend-only BMA weighting is used. Thus, this work has implications for improving projections of many environmental systems. The approach is not restricted to linear trends, making it relatively easy to apply to new datasets. Finally, we apply the method to a real case problem of projecting future summer mean temperature changes over Korea.
The rest of the paper is structured as follows. Section 2 describes the novel methodology to weight models by trend and variability performance, to combine those weights, to make multimodel weighted projections, as well as the computational details. The main interest here is not the procedure for obtaining the trend and variability components, but the algorithm for model weighting. In Section 3 we describe leave-one-out cross-validation experiments to test method performance against a trend-only BMA method. Here we also provide the specific details on how the trend and variability components were extracted from the data. Section 4 describes the results of these experiments. Section 5 discusses the application of the method to make multi-model probabilistic projections of Korean summer mean maximum temperature change. Section 6 briefly discusses the main findings of the study and places it in context of prior work. Section 7 discusses the limitations of the work, and Section 8 presents conclusions.
Overview of the method
At the start of the analysis, we assume that we have a collection of dynamical model time series outputs, and that these outputs can be decomposed into long-term trend and variability components. The details of this decomposition are not critical for this study, as we focus on the statistical methodology for the weighting. The weights (or probabilities) for the two submodels are calculated separately, using the Bayesian statistical paradigm, and then combined. The combined weights can then be used to make predictions (Fig 1).
Notation and decomposition of model output
Consider that k models are available. We postulate that each dynamical model is associated with a statistical model M i for the observations. M i can be thought of as a statistical event, which when true indicates that ith dynamical model is a reasonable representation of real system. M i consists of two submodels: a trend submodel M T,i (related to the trend in the system), and a variability submodel M V,i (modelling internal fluctuations in the system). When M T,i is true, the ith dynamical model correctly captures the trend of the system. Likewise, when M V,i is true, the ith dynamical model correctly captures the variability of the system. Alternatively, we can consider the model for anomalies scaled by the mean (M V0,i ). Each model produces time series output of a physical quantity during the period when observations are available ("calibration period"), as well as under new forcing conditions, usually associated with future system projections ("projection period"). We are interested in finding the probability distribution of a change of the system mean Δ between a "projection reference period" (typically the same as the calibration period) and the projection period. We denote the raw calibration period model output from the ith dynamical model by vector i;n Þ where superscript "'" indicates that the output is raw (un-smoothed), and n is the length of the record. The model output is a regularly spaced time series. We consider decomposition of the form: We will use the term "anomalies" to refer to the variability component of the time series. The trend x i can be either a linear trend, or a more flexible nonlinear trend obtained, for example, from robust locally weighted regression [36]. We assume that this decomposition is deterministic, unique, and is performed before the start of the main analysis. We also assume that the estimate of the trend is a reasonable proxy for the true unknown trend. While it may be possible to also incorporate the uncertainty in this decomposition, we leave it to future work. The focus here is not on how to properly decompose a time series into a long-term trend and variability, but on the novel methodology for weighting by performance in both. See [18] for an example of use of an alternative methodology to decompose the data. The use of alternative methods for data decomposition is subject of future research. We describe the decomposition method we use for each dataset in Section 3. The same decomposition is also applied to the observed time series y': Another option is relative decomposition. It takes the following form: where � x i is the deterministic sample mean of the ith dynamical model output, and Δx 0 i are normalized anomalies; and similarly for the observations: where � y is the observed mean. Next subsections contain the following: subsection 2.3 discusses the trend submodel weighting (which largely follows previous work), subsection 2.4 centers on the variability submodel weighting, section 2.5 discusses combining the component weights for each model, section 2.6 is dedicated to procedure for making weighted multi-model projections, and section 2.7 presents computational details on the implementation of the method.
Weighting the trend submodels
The trend submodel weighting is implemented following prior work, and full details are provided there [9]. Essentially, this method is BMA that also considers the uncertainty due to model error, and uncertainty in statistical properties of data-model residuals. Here, we consider k competing statistical models M T,i for raw observations y'. We stress that statistical and dynamical models are conceptually related: i.e., if the statistical model M T,i is true, it implies that the associated ith dynamical model correctly represents the trend in the system. Each M T,i is a hierarchical statistical model that connects modelled deterministic trend from the ith model during the calibration period x i to real system trend y, and then the system trend to actual observations y' (Eq 6): where fε D is random discrepancy (long-term model error), and ε NV is random internal variability (as well as short-term observational error).
Here we deviate somewhat from orthodox Bayesian practice. A typical Bayesian approach would assume a distributional form for the discrepancy vector fε D . However, because this error is likely long-term dependent, and the probability distributions for its components are not necessarily normal, finding and justifying a proper parametric model for it is non-trivial. To deal with this conundrum, we adopt an approach inspired by prior work [37]. We postulate that model error can be derived from inter-model trend differences. The reasoning for this implementation is as follows. Imagine a particular trend submodel M T,i represents the "true" system. Associated with this system is trend x i and pseudo-observations x 0 i . If only the rest of the models are available to the researcher, then the best-fit model j to these pseudo-observations is associated with trend x j . The difference between the best model and the pseudoobserved trends is then the unscaled error of the jth model. Thus, we obtain samples for unscaled discrepancy ε D directly from the differences between each model's trend and the next-closest model trend (see [9] for details). We acknowledge that this parameterization is simplified; model error is an emergent research topic [37]. We thus hope this work can galvanize more research on parametrizing model error.
The second non-orthodox idea, is related to the deterministic f factor ("error expansion factor", Eq 6). This factor is a new addition to the model presented in previous work [9]. f is a parameter that scales ε D to account for potential overconfidence. The non-orthodox idea relates to the procedure for selecting f. Specifically, we do not estimate f from present-day observations as a strict Bayesian would do, but rather we select f that results in correct coverage of the 90% posterior credible intervals during cross-validation experiments (different f for each dataset). The reason for this is as follows. Using just present-day observations to estimate f may produce small f values that result in overconfident future projections. This is because models have been developed so that they match observed data. Philosophically, present-day model-data agreement may be due to overfitting, and may not be reflective of the actual amount of error in the models.
The internal variability ε NV (Eq 6) is modelled as an AR(1) process with random parameters θ = (σ, ρ), where σ is innovation standard deviation and ρ is autocorrelation. Following Bayes theorem, and marginalization theorem, the trend model weights are then calculated as: Here, p(M T,i ) denotes the prior for the ith trend model, p(y'|y,θ, M T,i ) is the AR1 likelihood resulting from the bottom line of Eq (6), p(θ) denotes the prior for the AR1 parameters, and p (y|M T,i ) is obtained according to the top line of Eq (6) using samples from fε D as discussed above. Unlike the previous work [9], here we assume uniform prior probabilities for trend models p(M T,i ). The integral is evaluated using Monte Carlo integration, which is simpler to implement than Markov chain Monte Carlo methods used in some studies [2]. For the relative low dimension parameter space that we deal with here, simple Monte Carlo is adequate. Additional experiments suggest the sample size we use for the Monte Carlo integration is reasonable to minimize Monte Carlo error (Text A in S1 File). Once calculated, the weights are normalized to sum to 1 to facilitate interpretation as probabilities. We provide technical details in Text A in S1 File.
Weighting the variability submodels
Variability models are weighted using similar ideas to the ones used in trend weight estimation. We consider k competing statistical models for calibration period anomalies observations Δy = (Δy 1 , Δy 2 , . . ., Δy n ) (see Eq (3)). Each ith variability model M V,i models the anomalies hierarchically in the following form: where θ V y ¼ ðs y ; r y Þ, are autocorrelation and innovation standard deviation of the real climate, � θ V M;i ¼ ð� s M;i ; � r M;i Þ are summary statistics of autocorrelation and innovation standard deviation from ith model anomalies, fε V is model error (where ε V = (ε σ , ε ρ ), and f is a deterministic scaling factor to widen the distribution to correct for potential overconfidence), and w t � Nð0; s 2 y Þ. The top line of Eq (8) connects real system anomaly properties to model summary statistics, and the bottom line shows that observed anomalies are modelled as red noise with parameters (σ y , ρ y ) of the real system.
Thus, in the top line of Eq (8) instead of performing full posterior sampling to obtain samples for real system autocorrelation and innovation standard deviation parameters θ V y we assume they are centered around summary statistics � θ V M;i of ith physical model anomalies with an additive error fε V . Each model's summary statistics are taken as the corresponding MLE estimates. Again, we refrain from assuming any parametric form for ε V . Similar to the error for the trend model, here we also assume samples for ε V are obtained from differences between each model MLE summary statistics � θ V M;i ¼ ð� s M;i ; � r M;i Þ and the next-closest model summary The next-closest model is found as follows: for each model i we compare the conditional likelihood of ith model anomalies given AR(1) parameters of other variability submodels pðDx i j � θ V M;j Þ, j 6 ¼ i under the AR(1) statistical model, and find a model j that maximizes this likelihood. We also add a sample of zero vector (0,0) to ε V for computational stability. We post-multiply these samples by a scaling factor f to obtain samples for fε V . f is the same parameter that is used to scale trend model discrepancy (Section 2.3).
This approach gives us only k+1 samples from fε V . To obtain a larger number of samples which are well-dispersed, we add to fε V realizations from an independent bivariate normal distribution with standard deviations in each dimension set to 1/5 of the original k+1 sample ranges. We use the value of 1/5 because it results in samples with a reasonably smooth density that preserves large scale cross-correlation structure between the original k+1 samples of ε σ and ε ρ , and provides a decent approximation to the underlying pdf for fε V (Fig A in S1 File). Sensitivity tests indicate that using lower standard deviations can degrade the smoothness of the pdf (not shown).
Then, the posterior probability of the variability model i is, using Bayes rule [24] and probability rules: where pðΔyjM V;i ; θ V y Þ is an AR1 likelihood function, pðθ V y jM V;i Þ is sampled using the top line of Eq (8) using bootstrapping from fε V as described above, and p(M V,i ) is the prior probability ("prior") for the ith variability submodel. We assume equal priors for all submodels. This integral is also evaluated using Monte Carlo integration. Specifically, we sample from the conditional pdf of real system summary statistics given each variability model pðθ V y jM V;i Þ as described above, and for each sample we calculate the conditional likelihood for the observed anomalies pðΔyjM V;i ; θ V y Þ. The integral is approximated as a simple mean of the conditional likelihoods across the samples. Probabilities are calculated for each submodel and are normalized to sum up to 1. The implementation using relative variability M V0 is identical except the residuals Δx i and Δy are normalized by the respective model and observational means prior to the analysis. We provide technical details on the implementation in Text B in S1 File.
Combined weights and Bayesian model averaging
In the next step, the weights for the two submodels are put together to form a single combined model weight. Using probability laws: We make two simplifying assumptions. First, we observe that in the datasets described in Section 3 typically the relationships between the variability summary statistics � s M;i and � r M;i on one hand, and trend model probability on the other hand, appear to be weak (Figs B-K in S1 File). In addition, the corresponding linear coefficients are almost always weak (weak is defined as the absolute values less than 0.5). Assuming that the relationships based on the sample summary statistics are a good proxy for those based on the population properties, we make an assumption that the probability of the trend model is independent of the variability model: which allows us to directly plug in trend model weights obtained using the method in Section 2.3. Second, since only anomalies are used to weight the variability model: This quantity is obtained following Section 2.4. As a result, the combined weights can be expressed as a product of the trend and variability submodel weights: We stress that even though the independence assumption generally appears reasonable here, it may not always apply. Hence, it is recommended to check it when applying the methodology to new datasets. Incorporating the potential dependence between the trend and variability submodels into our framework is the subject of future research. Once calculated, the probabilities are normalized to sum up to 1, meaning that we restrict our probability space to the union of available models M i .
Future Projections
Future model projections are implemented largely following previous work [9]. Once the weights are obtained, the statistical model for system change between projection reference and projection periods Δ follows the BMA formula [20,21]: where D = (y, Δy) is collection of all available observations, p(Δ|M i , D) is conditional probability for the change given than ith dynamical model is correct, and w i = p(M i |D) is the probability for the ith model (i.e., model weight) found earlier (Eq (13)) as the product of the trend and variability model probabilities. This represents a skill-weighted mixture of pdfs from individual models. Here we consider Δ to be a simple difference between projection period mean and forecast reference period mean. Future predictions are largely modelled following prior work [9]. Just as for the calibration period, we assume a deterministic decomposition of projection period output into trend and anomalies: The exact decomposition method for each dataset is listed in Section 3. Next, we consider the following statistical model for dynamical system time-series projections (all quantities are vectors): where y' (f) is the projection time series, x ðf Þ i is ith model trend output from Eq (15), b (f) = b (f) 1 is random time-constant bias, and ε ðf Þ S;i is random short-term internal variability in each model. Thus, we assume fthat if ith model is correct, the vector projection is the sum of ith model trend, a time constant bias, and internal variability. Here we again deviate somewhat from the traditional Bayesian theory in that the components of this model are partially informed by inter-model differences, and by model output during cross-validation experiments. Such steps are necessitated by the absence of actual system observations over the projection period to inform us about these components. We model the bias parameter as is sample standard deviation of future period-mean next-closest model differences (where next-best is used in the l 1 distance sense), and f is the deterministic model error expansion factor (the same factor that is used for model weighting). Two different formulations are implemented for internal variability. In the first formulation ("boot"; [9]) we use simple bootstrapping from Δx ðf Þ i to generate internal variability samples. In the alternative formulation ("ar1") we sample ε ðf Þ S;i as a red noise process with parameters θ ðf Þ i ¼ ð� s ðf Þ i ; � r ðf Þ i Þ, the sample innovation standard deviation and autocorrelation of future anomalies. An improvement would be to consider the uncertainty in the AR1 parameters; we do not do this here to simplify the method. To obtain projection period mean changes from the reference period, we take weighted samples of future projections using Eqs (14) and (16), and simply subtract projection reference period mean modeled value for each model. As in previous work [9], we use 100,000 samples for all experiments.
The overall algorithm for the method is illustrated in Fig 2. The method estimates model weights from calibration period observations, and has one fixed parameter f, quantifying model error. Larger f values lead to higher model errors, and as a result broader projections with higher coverage of the 90% posterior credible intervals. Unlike standard Bayesian analysis, we first choose f to obtain approximately correct empirical coverage of the 90% posterior credible intervals during cross-validation. For the cross-validation, each model is selected as Projections accounting for model trend and variability skill the "truth" one-at-a-time. Models are weighted using the output from the "true" model. The "true" model is then excluded from the model set, and the future weighted projections from the remaining models are compared to the output from the "true" model. Once f achieves approximately correct empirical coverage, the method is used for actual projections constrained by real observations. If there are many replicates (or regions) of the system, cross-validation can also be performed by splitting the calibration period into two subperiods. In step 1, observations during the first subperiod in each region/replicate can be used to assign replicate/ region-specific weights. In step 2, observations during the second subperiod can test the empirical coverage of the posterior credible intervals. Here, however, we focus on the one-ata-time cross-validation using future model output. This is because (i) the length of historical record for which high-quality observations are available is too short for most of the experiments [38,39], (ii) observational records suffer from observational errors, and (iii) climate signal (e.g., the magnitude of climate changes) is quite low in the historical period. We choose various variables and periods to test the method under different conditions.
Computational details
All experiments have been performed on an Intel Xeon CPU X5650 @ 2.67GHz GNU/Linux 2.6.18-164.el5 supercomputer, using R programming language version 3.3.3. For other required packages the following versions were used: mblm 0.12 and KernSmooth 2.23-15. We provide the R code as S2 File. This code is provided under the GNU general public license v3.
In the next section we describe several cross-validation experiments for our method and compare the performance of the method (which we call hereafter "trend+var") with a BMA method where all variability submodel weights are set to equal (termed hereafter "trend"). Note that "trend" method is BMA which still weights models by their performance in terms of trend.
Overview of leave-one-out cross-validation experiments
To evaluate method performance, we carry out leave-one-out cross-validation experiments with several simulated and observed datasets: (i) Atlantic meridional overturning circulation (AMOC) strength [Sv] from 13 global climate models (GCMs) (AMOC experiment), (ii) Korean summer mean maximum temperatures from 29 GCMs (Korea_temp), (iii) Korean temperatures with an extended calibration period (Korea_temp_long), (iv) winter East Sea surface temperatures (SSTs) (Winter SST Experiment), (v) temperature-based AMOC Index (temperature in northern North Atlantic "gyre" minus Northern Hemisphere temperature) from 13 GCMs (AMOCIndex), and (vi) the same as (v) but also considering information from climate observations (AMOCIndex_obs). We discuss each experiment in greater detail in the following subsections. The cases differ in terms of the calibration, projection, and projection reference periods (Table 1). In experiments involving model output only, each of the models is selected as "truth" one at time, and its output is used to weight the models. Then, during the validation period, the projected pdfs of changes using the remaining models are compared to the "true" model output. The set-up for the AMOCIndex_obs is slightly different: both calibration and validation periods have available instrumental observations. Here, instead of selecting each model output as pseudo-observations one-at-a-time, we simply use actual observations to both weight the climate models, and to evaluate the projections. All experiments are performed with both "trend" and "trend+var" methods. Both methods have been calibrated for each experiment to have approximately correct coverage (correct % of cases where the "truth" is outside the 90% posterior credible intervals) by adjusting the model error expansion factor f ( Table 1). The calibrated values of f for the AMOCIndex experiments are also used for the corresponding AMOCIndex_obs experiments. We focus on the Winter_SST experiment here, however summary results for all experiments are also provided.
AMOC experiment
For the AMOC experiment (Table 1), data extraction and processing largely follow previous work [9]. The Climate Model Intercomparison Project phase 5 (CMIP5; [40]) model output for this (and other) experiments has been obtained from the ESGF LLNL portal [41]. Future forecasts use the RCP8.5 emissions scenario [42]. We use robust locally-weighted "lowess" regression [36] to obtain the trend model component during the calibration period, and Theil-Sen slopes [43]-in the validation period. We set the "lowess" smoother span parameter to 0.8 during the smoothing. We use this span value because it appears effective at removing interdecadal variability. The smoothed model output is illustrated in Fig 3. Importantly, we see nonlinearities in the modeled trends. Previous variability weighting work does not account for such nonlinearities [35]. During the trend weighting we use smoothed output as anomalies with respect to the entire calibration period. We use normalized (by the absolute AMOC) anomalies to weight the variability models. Future projections use the "boot" variant of the method.
Korea_temp and Korea_temp_long experiments
Korea_temp and Korea_temp_long differ only in the calibration periods and the error expansion factors f, with Korea_temp_long using a longer calibration period. These experiments use output from historical and future RCP8.5 runs of 29 CMIP5 model runs ( Table 1, Table A in S1 File). First, Korean daily maximum temperatures are calculated as spatial averages over land grid cells (cells with more than 80% land) between 34-40˚N and 125-130˚E [38]. The JJA (June, July, August) means are then obtained for each year. Theil-Sen slopes are used for obtaining model trends during the model output decomposition. During the weighting, smoothed output is used as anomalies with respect to the entire calibration period. Future projections use the "boot" variant of the method. Note that the Korea_temp "trend+var" Table 1. Basic information about the design of leave-one-out cross-validation experiments, and the method performance. Bold font indicates improvement of the "trend+var" method, compared to the "trend" method. k is the number of models in the ensemble; MCIW is mean 90% credible interval width; MAE is mean absolute bias of the mean; CIW is 90% credible interval width; AB is absolute bias of the mean. Projections accounting for model trend and variability skill experiment has a slightly elevated coverage of 93%. Decreasing f to obtain approximately 90% coverage is expected to improve performance metrics, but also to make probability densities too discontinuous. Hence, we use the value of f = 0.75.
Winter_SST experiment
Winter_SST experiment uses winter sea surface temperatures from the East Sea from historical and future RCP8.5 runs of 26 CMIP5 climate models ( Table 1, Table B in S1 File). We select this dataset because we find considerable relationships between present-day internal variability properties and future SST change in this region and season (Fig 4; for model number corresponding to each model see Fig L in S1 File). We define the East Sea as the area between 35˚N and 42˚N, and between 130˚E and 139˚E. We use a simple average of all ocean points in this region. During the weighting we use the output as anomalies with respect to the calibration period. Furthermore, we use Theil-Sen slopes to obtain model output trends. Future projections use the "ar1" variant of the method, since we detect a considerable autocorrelation in the model output anomalies.
AMOCIndex experiment
AMOCIndex experiment (Table 1) relies on historical output from the same 13 CMIP5 models used for the AMOC experiment. AMOC Index is defined as sea surface temperature in northern North Atlantic "gyre" minus Northern Hemisphere temperature. It is physically linked to northward heat transport by the AMOC, and hence can be used as a proxy for AMOC [9,44]. Data extraction and processing follow [9], with a few changes. The Index is used as an anomaly with respect to the entire historical period 1880-2004. We then use a portion of the historical period (1880-1945) for calibration, and another portion for projections. Smoothing is performed using Theil-Sen slopes. Projections use the "ar1" variant of the method.
AMOCIndex_obs experiment
For the AMOCIndex_obs we use actual observations both to weight the models, and to validate the probabilistic projections. Otherwise, the experiment relies on the same model output as AMOCIndex experiment. The observations are a simple average of two AMOC Index versions: one calculated with ERSSTv4 SSTs [45][46][47], and one with COBE-SST2 SSTs [48]. ERSSTv4 data is publicly curated by National Oceanic and Atmospheric Administration [49], while COBE-SST2 observations are provided by M. Ishii on the servers of Hokkaido University, Japan [50]. Both versions use GISTEMP Northern Hemisphere temperatures [51]. GISTEMP observations are maintained by NASA Goddard Institute for Space Studies [52]. For comparison with model output the COBE-SST2 SSTs are first interpolated to a 2×2˚grid using bilinear interpolation, while the ERSSTv4 observations are already on such a grid. For both "trend" and "trend+var" experiments, f is taken from corresponding AMOCIndex experiments.
Results of leave-one-out cross-validation experiments
The new method tends to be better able to correctly identify the "true" model from pseudoobservations ( Fig 5, Fig M in S1 File). This is not surprising since it uses extra variability information that is not available to the "trend" method. This extra information can provide a powerful constraint because models differ considerably in their representation of internal variability, based on sample estimates of the variability properties (Fig 4). The most striking improvement is obtained for the AMOC experiment while arguably the least improvementfor the AMOCIndex (Fig M in S1 File). Another important metric is the factor f that provides calibrated projections. This factor can be interpreted as a rough measure of model error relative to the next-closest inter-model differences in output space. The values feature a substantial range from 0.75 to 3.75 (Table 1). For experiments AMOC, AMOCIndex, and Korea_temp_long, f is the same or similar for both methods. Thus, under our statistical model, the best dynamical models for both "trend" and "trend+var" experiments are approximately equally close to the "true" unobserved trends in the real system, in both calibration and projection periods. However, for the rest of the simulated data experiments the new method achieves a lower f. Here, the best model for the "trend+var" method is closer (more than twice as close for Korea_temp) to the "true" trend of the system, compared to the best model under the "trend" experiment, both in calibration and projection periods.
We now turn our attention to the question of future prediction. First, it is worth noting that we do not find a significant bias between projections and "true" model output in any of our leave-one-out cross-validation experiments. The new method tends to improve in terms of the mean 90% credible interval width as well as mean absolute bias of the mean ( Table 1, Figs 6 and 7, Figs N-Q in S1 File). Specifically, in the Korea_temp experiments, the forecast 90% credible intervals on average sharpen by about 25%. For some cases (e.g., models 3 and 22 of the Korea_temp experiment), the improvements are particularly dramatic, featuring a drastic sharpening of the pdf and a strong reduction in the 90% credible intervals, with a low bias; Figs P and Q in S1 File). The only cases with no improvement are AMOCIndex, and corresponding AMOCIndex_obs (Table 1). We note that these experiments rely on the same model output. They also use a weaker historical climate forcing during the projection period, whereas other experiments use stronger RCP8.5 future forcing. It is worthwhile noting that the experiments with the improvement boast a visual relationship between sample estimates of variability properties and future changes (Fig 4). Specifically, models with higher innovation standard deviation tend to produce higher summer mean maximum temperature warming in the Korean temperature experiments. A positive relationship between standard deviation and future temperature change has been previously found in previous work for many regions [6]. The relationships for the AMOC experiment are different: future AMOC slowdown appears to be stronger for models with higher autocorrelation and low normalized innovation standard deviation. In the Winter_SST experiment, the relationships also involve both variability properties: higher � s M;i and low � r M;i in the models are associated with smallest future warming. Thus, we speculate that the degree of improvement may be related to the strength of statistical relationships between the variability parameters and future change. Testing this hypothesis is left to future work. There can be considerable shifts in the pdf between the "trend" and "trend +var" method (Figs N-Q in S1 File). This is consistent with the fact that additional fluctuation data can provide a relatively independent constraint on the model weights.
We note that the improvement in performance by the "trend+var" method is not caused by any increase in number of parameters resulting in overfitting. The overall statistical model for the projections is the same in both cases: a weighted mixture of pdfs from individual models. The increase in skill is due to better estimation of individual model weights w i in the "trend +var" model through using new variability data constraints on the models.
Real-Case application: Projecting korean summer mean maximum temperature
We now apply both the "trend" and "trend+var" methods to make projections of Korean summer mean maximum temperature. Specifically, we use 29 GCMs from Coupled Model Intercomparison Project phase 5 (CMIP5, [40]) model runs (the same model set as for the Korea_temp experiment). The models are weighted using 1973-2005 station observational data provided by Korean Meteorological Administration (KMA) weather stations [38,53]. We apply simple area average to daily mean maximum temperatures from the stations before calculating summer mean values. We use this short period because it has the best observational coverage, however to provide a liberal estimate of the uncertainty we take model error expansion factors f from the corresponding longer-period Korea_temp_long experiments. Future changes (2081-2100 minus 1973-2005) under the RCP8.5 emissions scenario [42] are presented in Fig 8. The results show (a) notably higher projected warming and (b) considerable reduction of the low-warming (< 2 K) tails after the variability weighting. Specifically, the mean increases from 4.9 K to 5.6 K, and the 5 th percentile from 1.8 K to 3.2 K. The new projection mode leaps from 5.3 K to 6.6 K ( Table 2). In addition, the 90% credible interval shrinks from 5.5 K to 4.3 K (22% reduction).
Discussion
Here we present a novel method "trend+var" to weight models of complex dynamical systems by their skill at representing both autocorrelated variability and trend in observations. The key step is association of two statistical models with each dynamical model: a trend statistical model, and a variability statistical model. The component submodels are weighted separately using relevant observations, and then the weights are combined. The combined weights are used to make weighted probabilistic multi-model projections. In a series of cross-validation experiments, we show that the new method appears to better identify the "true" model compared to the trend-only weighting method ("trend"). The new method also tends to perform better in terms of mean 90% posterior credible interval and mean absolute bias. Our analysis deviates in some aspects from the traditional Bayesian framework, in order to avoid making difficult-to-justify parametric assumptions about model error, and to alleviate potential overconfidence in one-at-a-time cross-validation experiments.
Applying the new method to the real case of projecting Korean summer mean maximum temperature change by the end of this century considerably increases future projections. These projections are more informative than from the "trend" method because they use the additional variability and short-term memory (quantified by the lag-1 autocorrelation coefficient) information from both models and observations. Since the BMA predictive model is the same (Eq 14), the increase in skill is not due to an increased number of parameters, but is derived purely through better estimation of model weights. Recent work has found correlations with absolute values of up to approximately 0.8 between present-day interannual summer temperature sample standard deviation in global and regional climate models, and long-term future mean and/or variability changes for some regions [6,25]. This suggests that historical variability in those regions may provide a valuable constraint on the models. Applying the method to those regions should be considered for future work.
It is worth discussing differences between this study and previous Bayesian work. Here we for the first time implement a quasi-Bayesian statistical method that weights models by their performance in terms of both trend, variability, and short-term memory (as quantified by the lag-1 autocorrelation) for a relatively general case: arbitrary (potentially non-linear) trend function and red noise variability. The method can be extended to more complex variability structures. Model weights are obtained by constraining the method with calibration period observations, while a parameter controlling model error assumptions is calibrated using crossvalidation experiments. Some prior work does also incorporate variability into model weights [35], however their method has so far been demonstrated on a simple case: serially uncorrelated variability, and a linear mean function. Other studies [3,4,6] also incorporates variability into the analyses. However, these studies do not actually use variability performance to weight the models and ignore autocorrelation skill. Unlike previous work, we do consider autocorrelation, which is a common feature of variability in many observed and modeled processes [54][55][56].
Caveats
Our study is subject to several caveats. First, the anomalies around the long-term trend, as well as model-observational residuals are assumed to be red noise processes. However, our framework can be extended to more general cases in the future. We compare the spectra of model anomalies (normalized in the AMOC experiment) for each model and experiment to the 90% confidence intervals for the corresponding AR1 process spectra, based on 1000 random realizations (Figs R-T in S1 File). Relevant comparison for the AMOCIndex experiment is shown in Fig 4 of a preceding study [9]. These results indicate that AR1 process is a reasonable approximation to the internal variability for these systems. Second, when combining the weights of the variability and trend submodels we are assuming independence. While this assumption appears to be generally reasonable here, it may not apply for other datasets. Incorporating dependence should be considered in future studies. Thus, our method is expected to be ideal for cases where there is at least some relationship between present-day variability and future changes, yet the relationship between present-day trends and variability in the models is sufficiently weak to justify the independence assumption we make here. Third, by using a common error expansion factor f for the internal variability, trend submodel errors, as well as for the forecasts, we are assuming the magnitudes of errors in these three components are linked. A way forward in subsequent work may be to assume different f for trend and variability. The best f values could then be found using constrained optimization (optimizing future performance metrics while constraining coverage to be correct). This is beyond the scope of this study. Fourth, when sampling future internal variability, we do not consider the uncertainty in the AR1 parameters of the anomalies. However, as explained in Section 3, we calibrate our method to account for potential overconfidence by scaling the magnitude of the model errors.
Other caveats include the simplicity of the future model bias and of the cross-validation experiments, as well as no explicit representation of observational error. For the future Korean temperature projections, the high density of observational network mitigates some of these concerns, as random errors are expected to decrease after averaging across many stations. In addition, theoretically if modelled and observed data from multiple regions are used together in a cross-validation framework, the observational error will be implicitly incorporated into the analysis after nudging the f parameter. Nonetheless, an explicit representation of observed error should be considered in the future.
While the focus on this paper is on the statistical weighting methodology by trend and variability performance, the simplicity of the decomposition into trend and variability (e.g., lowess method or linear detrending) deserves mention. The nonlinear trends discussed here may include residual contributions from long-term internal climate variability. However, this can be handled by the trend-weighting part of the method since this part accounts for long-term model error [35]. The unfiltered long-term variability in each model can be simply considered as part of this long-term model error. Previous work provides examples of using a more sophisticated decomposition [18]. Improving the decomposition methodology is beyond the scope of the paper, and is subject of future work.
This work assumes stationarity of model weights: if a model is correct during the calibration period, it is also assumed to be correct in the validation period. This is a standard assumption of the BMA method [1,5,20,21,35].
Notably, this work does not properly confront the issue of model dependence (e.g., the fact that models coming from the same research group, or models with similar outputs are dependent in the general sense of the term) [12,[57][58][59][60]. This needs to be addressed in future work.
The best new datasets to apply the method to are the ones either with many regions, or with repeated experiments, and where a long calibration period can be split into two subperiods. In this case method performance can be systematically assessed using real observations in crossvalidation experiments, and f can be properly calibrated. However, any assumption about f under new conditions is inherently untestable. Hence, we recommend including equal weights projections along with projections from this (or any other) weighting scheme. In the absence of many regions, and with only short time series available, one has to resort to simulated crossvalidation experiments using calibration, projection, and projection reference period model output to calibrate the method. In such cases, if models share common errors, the real value of f may be higher than estimated.
Conclusions
We present a statistically-rigorous novel method to weight multiple models of stochastic dynamical systems by their skill at representing both internal variability (including autocorrelation) and a nonlinear trend of a time series process, and to make predictions of system change under new conditions. The weight is interpreted as a likelihood of a dynamical model being adequate at capturing both trend and variability aspects of the process. This is a particularly important diagnostic given the broad relevance of variability (e.g., variability can affect extreme events such as heat waves and droughts in climate science). We show that the proposed method tends to better identify "true" models in a suite of leave-one-out cross-validation experiments compared to a typically-used trend-only BMA weighting method. The new method also tends to improve forecasts, as judged by the mean 90% credible interval width and mean absolute bias. This has important implications specifically for multi-model climate projections. Applying the method to project Korean summer mean maximum temperature changes over this century considerably increases future projections. Specifically, the mode of 1973-2005 to 2081-2100 warming under the RCP8.5 emissions scenario increases by 1.3 K to 6.6 K, while the mean shifts from 4.9 K to 5.6 K. Furthermore, the pdf becomes 22% sharper as measured by the 90% posterior credible interval. | 10,679.2 | 2018-11-07T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
An insight for the market driving forces: Case of Tesla Model- S
Market analysis is a diagnostic process to uncover the root causes of how markets perform from an economic perspective. This market analysis will help us understand how the market supply and demand work and what elements affect this performance during 2015-2017. Market analysis provides the initial foundation for the market intervention; this foundation is not constant as markets that are constantly evolving and dynamic. This paper will analyze the factors that affect the supply and demand in the same period in addition to the price elasticity for the Tesla Model S. Finally, this paper develops an insight into the future of the industry. © 2019 Bussecon International Academy. Hosting by Bussecon International. All rights reserved. Peer review under responsibility of Bussecon International Academy.
Introduction
Market structure is very important for implementation of adaptive business strategies in competitive environment. Competitive strategies in business ecosystem should be effective and well designed. Market structures portrays the competitive environment in which a company/firm operates. Competitive strategies that are implemented by a firm in a market are affected by the characteristics of the market structure itself. Tesla Motors (Tesla) is a global enterprise that designs, produces and markets for electric powered vehicles (EV) and its components e.g. battery packs, and Powertrains. The overarching purpose of Tesla is to expedite the move from a mine-and-burn hydrocarbon economy towards a sustainable, solar electric economy. A group of engineers in Palo Alto, USA founded the company in 2003. The Name "Tesla" originated in memory of Nikola Tesla, who first built and patented an electrical induction motor in 1888 (Tesla Motors, Inc.,). The firm is currently the only vehicle manufacturer selling zero-emission sports cars in serial production. On March 17th 2008, Tesla started the mass production of the first model Tesla Roadster, a solely electric operated sports car. On January 12th 2010, Tesla sold his 1000th Roadster, Somssich, E. (2017).
Tesla's solar panels are blend into its roof with integrated front circuits and no visible mounting hardware. The result is a clean, streamlined look, Palin, R. et al, 2012. Tesla charges with energy produced by solar panels, making that energy available when needed, day or night, the Powerwall also enables solar panels to produce energy during grid outages. Tesla's solar roof: Customizing the amount of electricity the Solar Roof produces to fit energy needs. Tesla was ranked as the world's bestselling plug-in passenger car manufacturer in 2018, both as a brand and by automotive group, with 245,240 units delivered and a market share of 12% of the plug-in segment sales. This paper focuses on the reasons for accelerating the demand for using electric cars, despite that the environmental benefit of electric cars is still unclear on the macro level.
The following sections will focus on the market analysis and the successful driving forces followed by a conclusion and the future expected challenges.
Literature Review
Since traditional market share protection tactics discourage competitors from entering the market, product locking and offering lower rates can also prevent businesses from growing. The idea of lowering the usage of fossil fuel was firstly adopted by Tesla Motors as they stated that electric cars are much stronger than oil-burning cars as EVs are more efficient plus environmental clean. Other car companies started recently to adopt the same idea and entered the EV market.
The market of electric cars is specified as an oligopoly market as there are only a certain number of players that controls the market. Parmar et al. (2017) stated that Tesla motors positioned itself in the market as the features that they offer is not available entirely in any of the large luxury automobile companies, however a few numbers of the famous automobile companies entered the market recently. The electric vehicles industry (EV) competition is intensive, and increasing regulatory standards, pressure manufactures to reduce vehicle emissions.
According to Parmar, Tesla's has the superior hand as it products revolve around its core competency of creating entirely electric cars. Tesla through the strong brand awareness already created through traditional markets. As they have a greater number of dealerships through the US. However, Tesla's have greater profits to invest in marketing and advertising campaigns. According to Statista, Tesla's Model 3 has an EV market share about of 60% as Model 3 is the most produced and sold electric car model in the US with similarly priced BMW, Benz and Audi fossil-fuel rivals. As the market of EV is oligopoly, which means that the company has the total ability to satisfy the market needs, therefore this supplier sets the price of its product and has the upper hand over the market and the barriers. According to Dragasevic, Rakocevic & Glisevic, (2011), the game theory is an economic game that has two conditions, one being when competitors work together and form a cartel that is illegal in certain countries. Nevertheless, the way that one of the partners tends to deceive other by producing more than what they agreed is not always successful and eventually the deceiver gains a significantly greater share of the market than the others.
The other prerequisite is when companies on the market do not cooperate to increase competition and to reduce profits. According to the economist, the market of EV became highly competitive the last couple of years, but Tesla's is still leading the way as they got the higher market share than other firms. Cheong, Song, Hu, (2015) showed that , Competition is a business strategy focused on the mutual synthesis of collaboration and rivalry, which enables all competing firms to benefit, depending on their individual interests and goals, in terms of larger market shares, higher profits and technological progress. In reality coopetition is a kind of positive-sum game in which each player's final gains are better than what every player originally carries into the match. In addition to the game in which the winner takes every victory and the loser gets no defeat.
The co-patient system, the vertical or horizontal supply chain method, may contribute to reciprocal performance. A joint partnership between manufacturers and assemblers is a vertical supply chain co-operation. The significant demand for the product of an assembler on the retail market has resulted in more suppliers ' orders of parts. The major player in this industry are Tesla which offers hightech parts and safer vehicles, Toyota that has substantial vehicle shares but are new in the EV market, and finally BMW, and Mercedes-Benz that offer luxury and also focusing on producing cars that work on gasoline and electricity ( Boyd & Mellman, 1980). Moreover, the main challenge for the manufacturers are the consumers behavior about the new technology. According to Gayathree, & Samarasinghe (2019), the main purpose of perceived risks concept is to understand these consumers behavior that includes five dimensions which help firms to understand the risks entirely.
The consumers' uncertainty about the technology of electric cars has a negative impact on purchase intention. Social risks identified as to what extent a consumer understands the importance of whether they think the technical service should be used by others.
Research and Methodology
Using the data for model S for the units sold and produced to describe the relation with the market forces and calculate the elasticity based on data from the company's financial analysis reports for the period between third quarter in 2015 till the second quarter in 2017. This paper use the regression equation to test if the decision for buying the curve referred to the price or not during the period under study.
Emprical Data and Market Analysis
Over the course of 2016, Tesla's share in the U.S. automotive market rose to 2%. However, Tesla's Model 3 has a market share of about 60%. Currently, Tesla is the market leader in battery-electric car sales for the United States. The global market for electric vehicles is estimated to increase to over 567 billion U.S. dollars by 2025; Meanwhile Tesla Model "S" has a market share of about 18% as in December 31, 2016 Luxury automobile industry can be seen as an oligopoly for several reasons. First, few firms control the majority of the market share. There are smaller firms in the market, but their market share in the industry is very small compared to those of dominant firms. Small companies generally lack the financial capital to launch a brand on a large scale which is one of the challenges for sustaining. Secondly, the barriers for entry in the industry are very high. Producing such luxury cars requires a massive amount of investment and extremely high operating cost preventing small firms from entering the race. Tesla's work in an oligopoly market which have a limited competition in which a few producers control the majority of the market share and typically produce homogenous products.
Market Driving Forces
The Tesla Model "S" is an all-electric five-door car, produced by Tesla, Inc., and was introduced on June 22nd 2012. As of April 23, 2019, the Model S Long Range has an EPA range of 490 km, which is higher than any other electric car.
Production Driving Forces
Tesla currently produces more batteries in terms of kWh than all other car manufacturers combined. With the increased production of the Gig-factory, the cost of Tesla battery cells will decrease significantly, the battery cell was reduced from 200$ per kWh to 100$. Tesla's Giga-factory would reduce the production costs by 30 % because of its dependence on the solar panels. Moreover, Tesla has the knowhow of producing the entire electric component for their vehicles. Selling patented electric powertrain components to other automakers, including Daimler and Toyota. Panasonic's battery cell production lines in Tesla's Giga factory in Sparks, Nevada, produce 2170 cells exclusively. Figure 1 shows the increasing trend for the production and the price of the car which is based on the cost is fluctuation due to many reasons.
First the company faces shortage of liquidity: In comparison to competitors like Ford which has 20 billion U.S. dollars, General Motors with 25 billion U.S. dollars or Fiat Chrysler Automobiles 40 billion U.S. dollars which are cash rich from years and years of operations. From the challenges as well, is the capacity Issues. Tesla manufactures cars in just one plant, which is located in Fremont, California. The plant has a capacity to make 500,000 vehicles and the maximum production of the company is limited to this figure making it difficult for the company to target higher volumes. From the future challenges is the high debit load: as Tesla has 5 billion U.S. dollars long-term debit in 2017 that will reflect on the future years' plans.
Demand Driving Forces
Preference for new technologies: Vehicle technology is taking a whole new turn and there are a host of new technologies like hybrid vehicles, green cars, electric cars, battery operated cars and self-driven autonomous cars. Tesla has researched and launched many products in this emerging technology. With the trend towards environment friendly vehicles and regulations to limit emissions. Tesla has improved future opportunities in the battery electric vehicle (BEV) market. The price of the car is not a dominant factor, as we can see in figure 2, furthermore, the increasing preference for renewable energy lead to rising the popularity of low-carbon lifestyles and increasing preference for renewable energy. Moreover, Price of running cost: Price of oil and gasoline are skyrocketing, making the EV highly demanded. However, it is unsatisfying when you buy an expensive electric car and it expires after three years of use. The resident in the European countries can accept this concept but for others it may be inconvenient.
Price elasticity
The following tanble represent the factor of the changes of elasticity during the period under study. The main factors was there is no substitutes in the market till the second quarter for 2017 and the penetration of the market till the end of 2017. On the other side the elasticity of supply as highly affected by the cost of production especially raw material and technology development and safety measures as see in table 2.
Conclusions
The electric vehicle market is considered as an 'emerging industry'. The emerging industry typically consists of just a few companies and it often depends on a new technology. Market type is oligopoly with few sellers with the same product and Tesla is the market leader in USA. Tesla has direct control over input prices mainly for battery cells, which will enable it to build battery electric vehicles with luxury standards in the price range of a midsize sedan. Worth mentioning that this product is an elastic one, see table 3 and 4.Tesla Model "S" market forces (demand and supply) is usually affected by other externalities and circumstances other than the price. Tesla Company spent on research for the improvement of their products and enhanced the supply to avoid losing market shares in front of its competitors.
Tesla will face an aggressive competition when giants like Mercedes and BMW enter the EV. The future prices will decrease drastically due to high competition. Tesla is a market leader in battery electric vehicles and will continue to lead in the near future as it has a dominant strategy referring to its' early technology adoptions, competitive price and mileage per charge. Customers will respond to incentives (prices), and hence they will have more alternatives starting 2020. However, one of the future challenges is the recycling of batteries and what will its' impact on the environment be. | 3,143 | 2019-10-20T00:00:00.000 | [
"Economics",
"Business"
] |
An Effective Strengthening Strategy of Nano Carbide Precipitation and Cellular Microstructure Refinement in a Superalloy Fabricated by Selective Laser Melting Process
: An effective strategy to strengthen a superalloy processed by selective laser melting (SLM) is proposed . The aim is to increase the yield strength of Inconel 718 fabricated by SLM to beyond 1400 MPa, which has never been achieved before. In this study, various NbC additions (0.0%, 0.5%, 1.0%, and 5.0% by weight) were added in the powder bed of Inconel 718, and two types of post-SLM heat treatments were investigated, i.e., solution heat treated plus aging (STA) and direct aging (DA). With NbC addition, smaller depth of melt pool and finer dendritic cells were obtained. Both STA and DA promoted the precipitations of γ ’ and γ ”. STA eliminated the cellular dendrites and induced grain growth while DA retained the as-built cellular dendrites, grain size, and nano-carbide from NbC addition, rendering a significant 326.2 MPa increase in yield strength. In this work, 0.5% NbC addition exhibited a record-high yield strength of 1461 MPa and ultimate tensile strength of 1575 MPa for Inconel 718 processed by laser manufacturing process according to literature data to-date.
Introduction
Inconel 718 superalloy is a high strength structural material widely used in oil-gas industry [1,2]. It is primarily precipitation strengthened by coherent L12 structured γ' (Ni3 (Al, Ti)) phase and DO22 structured γ" (Ni3Nb) phase [3]. Engineering components used in oil-gas industries such as drills for deep-water operation are made of Inconel 718 and it is required to possess a yield strength beyond 1400 MPa at ambient temperature [2,4], but the standard heat treated Inconel 718 could possess only a yield strength of 900~1200 MPa [5,6]; to meet such ultra-high strength demand, additional strengthening contributions from grain refinement [7] and strain hardening [8,9] have been applied in addition to precipitation strengthening [8]. However, conventional wrought processes for Inconel 718 could limit the design complexity of the component and lead to waste of material during machining processes, recently the fused-based additive manufacturing (AM) for Inconel 718 has been a subject of interest [5,6,[10][11][12][13][14][15][16][17][18][19][20][21][22][23][24]. During the AM process, powder feedstocks are fused layer by layer to build components with complex geometry without modules. Among various fused-based AM technologies, powder bed selective laser melting (SLM) possesses the ability to make intricate parts with high cost-efficiency [25][26][27]; it permits part design with more complex geometries and better quality of surface finish compared to those made by powder jetted laser engineered net shaping (LENS) [25,26]. On the other hand, the SLM process does not require a working chamber under vacuum, which is required by powder bed Electron Beam Melting process [24]. Hence, SLM is considered as one of the main AM processes for Inconel 718 [5,6,[10][11][12][13][14][15][16][17][18][19], and it is of interest to the oil-gas industries to explore the possibility to fabricate ultra high strength Inconel 718 by SLM process [28]. It has been reported that residual strain induced by thermal contraction during the SLM fusion process could provide an extra strength of about 268 MPa on average [29][30][31]. For SLM Inconel 718, Kuo et al. [18] reported that residual strain from SLM process could be retained by direct aging process, and dislocation and sub-grains strengthening could render a yield strength of 1380 MPa; a similar result was reported by Deng et al. [11]: tensile yield strength of 1350 MPa was achieved by direct aging after SLM. To date, the highest reported tensile strength for Inconel 718 processed by SLM was 1365 MPa on average, which was still lower than that demanded by the oil-gas industry.
In principle, a further increase in thermally induced strain by SLM for strengthening could be achieved through adjusting scanning parameters [32][33][34]. However, this approach could also result in distortion or cracking of the built [32][33][34]. Alternatively, the introduction of inclusions could provide further strengthening for SLM metal alloys [16,[29][30][31]35]. In the work of Kim et al. [29], Mn-based oxide formed in CoCrFeMnNi during SLM process could strengthen the material and result in yield strength of 778.4 MPa. A similar observation was reported by Lin et al. [30]; Al/Ti-based oxide formed in SLM Al0.2Co1.5CrFeNi1.5Ti0.3 alloy could further increase the strength by 170 MPa. Moreover, in our previous work [16], 0.2 wt% CoAl2O4 was added to fabricate Inconel 718 by SLM process followed by solution and aging treatment; this approach resulted in the formation of Al-rich oxide and rendered a yield strength of 1161 MPa. Almangour et al. [31] reported that TiB2 addition not only provided an Orowan strengthening of 268 MPa in average to SLM 316L stainless steel, cellular structure with composition segregation was also refined and contributed an extra 61 MPa in average to yield stress; Han et al. [35] also reported an increase of 100 MPa in tensile strength for SLM Hatstelloy X with TiC addition.
According to literature to-date for laser manufactured Inconel 718, none has achieved a tensile yield strength greater than 1400 MPa. However, two separate approaches have shown great potentials to provide further strengthening contributions: one is by post-SLM direct aging [11,18] and the other is by adding inclusions prior to SLM [16,[29][30][31]35]. To the best of the authors' knowledge, no one has attempted to combine both approaches to enhance the strength of SLM Inconel 718. In this study, the objective is to fabricate highstrength Inconel 718 by SLM, with the aim to achieve yield strength greater than 1400 MPa. NbC was chosen as the inclusion since the elementary constituents of NbC are that of Inconel 718. The influence of NbC on the microstructure of SLM Inconel 718 has been examined; strengthening contributions with direct aging have been elucidated and discussed. This work presents an effective strategy to fabricate Inconel 718 by SLM with yield strength greater than 1400 MPa.
Powder Materials Preparation
Gas atomize powder of Inconel 718 was supplied by Chung Yo materials Co., Ltd. Kaohsiung City, Taiwan. The composition of the powder was analyzed by ICP-OES/carbon analyzer, and is presented in Table 1. The d50 of the powder was 32.84 μm, which was determined by a laser diffraction particle size analyzer (Coulter LS230, Beckman Coulter Inc., Brea, CA, USA). The inclusion was NbC flakes provided by Ultimate material technology Co., Ltd., Taiwan; d50 of NbC powder was 2.239 μm. In this work, various amounts of NbC flakes (0.5%, 1.0%, and 5.0% in weight) were blended with Inconel 718 powder by 2D powder mixing process for 1 h to ensure homogeneity. Figures 1 and 2 shows the morphology and the size distribution of the Inconel 718 powders and NbC flakes used in this study.
Selective Laser Melting Process
An in-house SLM machine equipped with a ytterbium fiber laser (λ: 1070 nm, YLR-500-SM-AC, IPG Photonics Co, Oxford, MA, USA.) was used. The chamber was protected with purity 99.99% argon gas, and the oxygen content of the chamber was kept at less than 100 ppm. The baseplate was S45C steel, and the pre-heating temperature was at 200 °C. During the SLM process, zig-zag lines scanning pattern was used, the hatch distance between each scanning track was 100 μm, and the layer thickness was 50 μm. During the scanning process, each layer was rotated by 67°. SLM scanning parameters were determined in order to achieve the least fractions of porosity (0.06 vol%, which was determined by image analysis) based on sets of experiments of laser power (110 W to 280 W) and scanning speed (400 mm/s to 1200 mm/s). In this study, a laser power of 220 W and a scanning speed of 800 mm/s were employed to fabricate samples for the following experiment. For microstructure observation and heat treatment study, the specimen size was 8 mm in length and width and 6 mm in height (120 layers). For tensile specimens, the samples were 15 mm in width, 90 mm in length, and 8 mm in height (160 layers).
Post-SLM Heat Treatment
The as-built parts were removed from the baseplate by electrical discharge machining. Two types of post-heat treatments were performed. The first type consisted of a solution heat treatment (SHT) at 1100 °C for 2 h followed by air cooling; the SHT temperature was chosen at above the δ solvus [36] in order to homogenize the chemistry [10]. After SHT, the aging process was performed to precipitate γ' and γ", the aging heat treatment consisted of 720 °C for 8 h, then samples were furnace cooled to 620 °C and soaked for 8 h followed by air cooling to room temperature; for the post-SLM, heat treatment consisted of SHT and aging, a designation of STA is given. The second type of post-SLM heat treatment was a direct aging process (DA), without the SHT step, DA samples were subjected to the same aging treatment as that of STA. The microstructures and properties of the STA and DA samples with varying NbC contents were compared in this work systematically.
Tensile Test
Specimens for tensile tests were sectioned perpendicularly to the building direction and the gage section dimension followed the ASTM standard E8/E8M [37]. All surfaces of specimens were ground with 2000 grit SiC sandpaper prior to tensile tests. All tests were conducted at ambient temperature by a tensile test machine (INSTRON 4468, Instron, Norwood, MA, US) equipped with an extensometer; strain rate of the test was 10 −3 per second. At least two specimens for each condition were tested and the averaged values of tensile properties are presented.
Microstructure Analysis
Specimens were ground by SiC sandpaper and then polished by 0.05 μm Al2O3 suspension; sample surfaces were electrolytically etched in 20 vol% phosphoric acid aqua solution. An optical microscope and a scanning electron microscope (SEM, Hitachi SU-8010, Tokyo, Japan) were used to observe microstructures; particle size, phase fraction, and inter-particle spacing were estimated by using Image J software (version 1.52a, Wayne Rasband, USA) [38]. For high-resolution analysis, transmission electron microscopy (TEM, JEOL JEM-F200, Tokyo, Japan) was employed, specimens were ground with 2000 grit SiC paper to a thickness of 50 μm and then punched into round discs with a diameter of 3 mm, discs were then polished by a twin-jet polisher in 10 vol% HClO4 + 90 vol% C2H5OH solution under 25 volt at −30 °C. For grain texture analysis, specimens for electron back scattering diffraction (EBSD) analysis were prepared by surface polishing with Al2O3 suspension followed by 0.02 μm colloidal silica suspension. EBSD analysis was performed with a JEOL JSM-7610F SEM equipped with an AZtec EBSD system (Oxford Instruments, Abingdon, Oxfordshire, UK). Grain analysis was conducted with a 100× magnification image and the step size was 4 μm, misorientation analysis for plastic deformation was performed with a 250× magnification image and a step size of 1 μm. More than 200 grains were counted in each specimen; for misorientation and dislocation density analysis, the Kernel Average Misorientation (KAM) analysis was used, and original EBSD data was post-processed with the Oxford Channel 5 software (Oxford Instruments, Abingdon, Oxfordshire, UK). The averaged KAM values with different kernel radius were then used to calculate overall geometrically-necessary dislocation (GND) density according to the methodology described by Moussa et al. [39]. It has been reported that GND density is related to lattice curvature, which is corresponding to plastic deformation and crystal misorientation [40][41][42]; Nye's dislocation tensor can provide a relationship of GND density based on local average misorientation [41]. The GND density ρ could be estimated by Equation (1) below: where θ is the average misorientation in radius, b is Burgers vector, x is the distance along which misorientation is measured, and a is 3 based on the previous literature [39,41]. The approximation was later modified by Kamaya [43] and Moussa et al. [39], where θ/x is replaced by dθ/dx to remove the background noise of the EBSD detector.
Assuming that the misorientation gradient is constant around the near pixels and there is no misorientation when kernel size is 0, then misorientation θ would be proportional to the distance x. In this study, the averaged misorientation data from KAM analysis with different kernel radius were recorded. The misorientation degree to define a high angel grain boundary was chosen as 15°, and misorientation degree below 15° would be considered in KAM analysis to separate the lattice of different grains [39,42,44]. Then the slope of the data points was calculated as dθ/dx. Eventually, overall GND density could be determined based on the modified tensor in this work.
As-SLM Microstructures
The cross-sectional optical micrographs of as-built samples are shown in Figure 3, and the melt-pools structures are clearly visible. Melting pool depths were measured based on the final layer of the as-built sample, at least 10 melting pool depths of different sides of the as-SLM samples were observed. With NbC additions, the average depth of melt-pools decreased notably from 223.4 μm of 0% NbC to 139.4 μm with 5.0% NbC (164.9 μm for 0.5% NbC, and 159.3 μm for 1.0% NbC), Figure 3a-d. A similar observation was reported by AlMangour et al. [31]. Gu et al. [45] suggested that inclusion particles could inhibit the convection inside the melting pool, which could cause a smaller melting pool due to heat accumulation at the melting pool surface [46]. A few un-dissolved and agglomerated NbC inclusions around 15 μm were also observed; the amounts appeared to increase with higher NbC contents. High magnification micrographs of as-built samples are shown in Figure 4; sub-micron cellular dendritic structure could be observed and inter-dendritic regions could be identified as a bright cellular wall. The increase in NbC addition also appeared to decrease the average cellular size; without NbC, the average cell size was 397 nm, and it decreased to average values of 357.6 nm, 334.6 nm, and 283.8 nm for 0.5%, 1.0%, and 5.0% NbC contents, respectively, Figure 4a-d. The decreases in the depth of melt-pools and the cell size were associated with an increase in the NbC addition. The as-SLM microstructures with and without NbC all exhibited cellular dendrites instead of equiaxed dendrite, Figure 4; this kind of microstructures was a result of a high ratio of temperature gradient to solidification velocity, and could induce small degree of constitutional supercooling and the growth of cellular structure along the solidification direction [47]. It is known that the cellular wall could contain high density of dislocations because of cyclic thermal stress during the fusion process of SLM; these dislocations have been reported to contribute to strengthening [48][49][50]. An equation for the influence of thermal gradient and solidification velocity on dendrite arm spacing L can be described as following [51]: where G is the thermal gradient, V is the solidification velocity (velocity of liquid-solid interface), a, b and c are constants [51]. Since SLM process was performed with a small laser beam size (~58 μm), the melt-pools had high thermal gradient and fast solidification velocity, resulting in the formation of fine cellular dendrites shown in Figure 4. TEM analysis indicated that particles presented along the cell walls in samples without NbC addition were hexagonal C14 Laves phase (lattice parameter a: 4.9 Å and c: 7.8 Å [52]), Figure 4e; by contrast, FCC_B1 Nb-rich cubic carbides (lattice parameter a: 4.4~4.5 Å [53]) were identified along cell walls for all samples with NbC additions, Figure 4f. These particles were incoherent with the FCC matrix (a: 3.58 Å based on TEM analysis). It appeared that the formation of both Laves phase and cubic carbides along cell walls were associated with Nb segregation to the interdendritic regions, as shown by the TEM-EDS analysis presented in Table 2. Furthermore, grain sizes were decreased with NbC additions, from 18.94 μm of no NbC addition to 10.51 μm of 5.0% NbC addition. Previous work indicated that small amount un-dissolved inoculants could affect the texture of grains and also decrease the grain size of as-built Inconel 718 [17], so it was possible that un-dissolved NbC in this work provided heterogeneous grain nucleation sites. Furthermore, dissolution of some NbC could also increase the amount of constitutional supercooling and provide more nucleation sites during solidification [47].
Microstructures after Post-SLM Heat Treatments
Microstructures of the samples subjected to STA heat treatment are shown in Figure 5, and it shows that cellular walls were all eliminated; with more NbC additions, more bright particles could be observed in the microstructures, Figure 5a Table 3, these particles were Nb-rich FCC_B1 cubic carbides, some large ones (430 nm~628 nm) were found along grain boundaries, while smaller particles (46~57 nm) were seen within the grain. Figure 6 shows the microstructures of samples subjected to DA heat treatment, and bright particles were observed along cellular wall; Figure 6e,f are TEM micrographs of these particles along the cellular wall, and their compositions are shown in Table 3 to compare with that in the STA specimen. TEM analysis indicated that for the specimen without NbC addition, these particles were C14 Laves phase, while with NbC addition, particles along the cellular wall were found to be Nb-rich FCC_B1 cubic carbide. The positions, volume fractions, and diameter of these particles after STA and DA treatments are shown in Table 4. Particles of STA specimens were categorized into "intragranular carbide" and "carbide along grain boundary" based on the observation in Figure 5; for DA specimens, particles along cellular wall were considered and they were identical to particles along grain boundaries. It is shown that NbC addition led to carbide formation and increased particles fraction to all specimens. For DA specimens, the volume fraction of particles increased from 1.28% to 7.6% with 5.0% NbC addition. A similar result was observed in STA specimens, volume fractions of both types of carbide increased with NbC addition, from 0.11% (intragranular carbide) and 0.09% (carbide along grain boundary) of no NbC content to 3.23% (intragranular carbide) and 4.36% (carbide along grain boundary) of 5.0% NbC. It should be noted that overall volume fractions of particles in STA specimens were less than those of DA specimen, which could be associated with more homogeneous composition profile due to STA heat treatment. Figure 7 illustrates TEM images of precipitate in STA and DA specimens; these particles were mainly γ" with disc-shaped morphology. Image analysis indicates that the average length along the long axis of γ" particles was 12.8 nm for STA specimen without NbC and 12.9 nm for STA specimen with NbC additions. For DA specimen, the average length along the long axis of these particles was about 13.3 nm for DA specimen without NbC and 13.0 nm for DA specimen with NbC. It has been reported that the growth of primary strengtheners, i.e., γ′ and γ′′ in Inconel 718 could follow Lifshitz-Slyozovi-Wagner theory, which suggests coarsening rate can be determined by diffusivity, temperature, and solute concentration [54]. Based on the as-built chemical profile of sample without NbC addition (Table 2), although there was an obvious Nb segregation toward cell wall regions, the overall chemical compositions were not affected much by the addition of NbC. With the same aging treatment, it is expected that DA samples and STA samples possessed virtually identical sizes and fractions of primary strengtheners. EBSD grain analysis are shown in Figure 8. Epitaxial grains growth was present in as-built Inconel 718 with NbC addition, and columnar grains were observed. However, with NbC addition, more small grains were detected. Table 5 shows the average grain diameter after different heat treatments. The average grain diameter of the as-built SLM samples decreased from 18.94 μm to 17.97 μm, 17.11 μm, and 10.51 μm for 0%, 0.5%, 1.0%, and 5.0% NbC addition, respectively. For specimens subjected to post-SLM heat treatments, it is found that STA could eliminate the columnar grains of as-built specimens and led to obvious grain growth. The average grain diameter of the sample without NbC addition was 44.53 μm after STA. It is found that grain growth during STA was inhibited with more NbC addition. Average grain size of STA specimens decreased to 30.85 μm with 0.5% NbC, 21.53 μm with 1.0% NbC, and 13.45 μm with 5.0% NbC addition. On the other hand, DA had less influence on grain size and grain morphology; grain structures of DA specimens were similar to those of as-built conditions. The overall GND densities of different specimens have been estimated by the method described in Section 2.5 and results are presented in Table 6. The corresponding KAM maps are shown in Figure 9. All the as-built specimens had similar GND density around 4.04 × 10 13 /m 2~4 .54 × 10 13 /m 2 and the values were independent of the contents of NbC addition. After STA heat treatment, GND density decreased from 4.18 × 10 13 /m 2 to 1.31 × 10 12 /m 2 for samples without NbC addition. GND density of STA specimens with NbC additions also decreased significantly due to stress relief by SHT. By contrast, DA processes did not appear to decrease the GND density dramatically for all specimens with and without NbC additions.
Tensile Properties
Tensile stress and strain curves are presented in Figure 10, and the corresponding tensile properties are listed in Table 7. For STA specimens, yield strength (YS) and ultimate tensile strength (UTS) increased gradually with NbC additions, from 1134.8 MPa and 1359.9 MPa of no NbC addition, to 1325.5 MPa and 1498.4 MPa of 5.0% NbC, respectively. After DA process, tensile strengths of samples increased significantly and the influence of NbC addition was more pronounced. Without NbC addition, YS was 1357.5 MPa and UTS was 1490.4 MPa; with 0.5% NbC addition, YS was 1461.0 MPa and UTS was 1575.2 MPa. However, for DA specimens with NbC addition of more than 1.0%, both YS and UTS were decreased compared to that of 0.5% NbC. DA specimen with 5.0% NbC addition broke without reaching the yield point. For samples without NbC addition, the elongation decreased from 23.96% to 14.59%. The elongation further decreased to 9.95% and less for samples with NbC additions greater than 1.0%. Tensile properties of as-built and as-SHT specimens without NbC addition, and as-built specimen with 0.5% NbC, are also listed in Table 7. As-built specimen without NbC had a YS of 771.6 MPa, and 0.5% NbC addition could increase the YS to 841.4 MPa in as-built condition, while as-SHT specimen with no NbC exhibited a YS of 379.5 MPa and large elongation. Stress-strain curves were used to calculated strain hardening exponents (n) based on the Hollomon equation below [55], n values are listed in Table 7.
where σt is the true stress, εt is the true strain, k is the strength coefficient, and n is strain hardening exponent. n is the gradient by plotting σt vs. εt in logarithmic scale. It is found that strain hardening exponents decreased with NbC additions, and STA samples possessed higher n values than that of DA samples. Fracture surfaces of tensile specimens are shown in Figures 11 and 12. Dimples were observed in the matrix at most of the regions of fracture surfaces, and this suggests that the fracture of FCC matrix was ductile. However, some brittle fracture features were observed in the specimens with NbC additions. It was clear that cracks were formed associated with carbide particles; most of these cracked carbide particles with irregular morphology were the residual NbC flakes shown in Figure 3. The fracture surfaces of DA specimens were similar to the STA specimens without NbC addition.
Results of tensile tests indicated that DA samples could exhibit higher strength than those of STA samples, as shown in Figure 10, and NbC addition further increased the strength of specimens. However, degradations of strength and ductility were observed in DA specimens when NbC addition was more than 1.0%; brittle fracture occurred on large NbC particles shown in Figures 11 and 12 were responsible for this observation. It should be noted that fracture particles larger than 10μm had multiple boundaries inside, which could also indicate that these were agglomeration of small flakes. These large NbC particles or agglomerations became stress concentration sites for brittle fracture; they broke during the tensile test, decreased the load-bearing volume. According to fracture toughness mechanism [56], DA specimens were less ductile than STA specimens because they possessed high GND density, and stress in tension could trigger the propagation of crack along un-dissolved NbC particles without significant yielding. Because fractions of undissolved NbC increased with more NbC addition as shown in Figure 3, this could lead to the degradation of mechanical properties for specimens with higher NbC contents. Analysis showed that strain hardening exponent decreased with both NbC additions and DA process, Table 7. DA specimens had higher GND density prior to tensile test, and this implied DA specimens had been strain-hardened; hence, it could result in lower strain hardening rates and elongations. However, UTS of these DA specimens were still higher than those of STA specimens. Hence the significant strengthening in tension achieved in this work could be mainly attributed to the increase in yield strength. In the following section, 0.5% NbC specimen has been chosen for further discussion about each strengthening contributions in order to avoid the influence of un-dissolved NbC particles. There are several factors that could affect the yield strength of alloy, and they can be expressed as the following equation [23,30,57]: where σy is the YS of material. σ0 is the strengthening contribution of matrix, and this term includes solid solution strengthening, stacking fault strengthening, friction stress [29][30][31].
Other strengthening contributions include grain boundary σG.B, precipitation σγ'/σγ'', and strain hardening σstrain. The exponent k is a constant depending on the interaction between each factor [57]. As shown in Figure 8 and Table 5, grain size changed with NbC content and heat treatment. Variation of grain size could influence tensile strength according to Hall-Petch relation; grain boundary could inhibit the movement of dislocation and hence smaller grains could provide higher strength to material [56]. The relationship is expressed as the equation below: where d is grain diameter of matrix and K is Hall-Petch coefficient related to material properties. Here, K is chosen as 750 MPa μm 1/2 for superalloy [58]. The average grain size in Table 5 was used. Calculated strengthening contribution of grain boundary to STA specimens was 112.4 MPa for specimen without NbC, and the value increased to 135 MPa with 0.5% NbC addition. NbC addition also increased the strengthening contribution of grain boundary to DA specimen slightly, from 168 MPa to 174 MPa of 0.5% NbC addition. It is known that GND density could dominate the plastic deformation and working hardening of SLM FCC materials [48], and it has also been reported that working hardening could increase proportionally with GND density [42]. Assuming that residual strain of SLM components would not cause large distortion, then GND density data from Table 6 could be used to estimate strengthening contribution by Taylor equation, which was used in previous studies [30,59,60]. Taylor relation describes necessary shear stress to overcome stress field between each dislocation. The equation is described below [56]: where M is Taylor factor (3 is assumed in this study), G is the shear modulus of the matrix (76 GPa based on previous work [58]), b is Burgers vector, and α is a value depending on the dislocation distribution. For heterogeneous distribution such as cellular structure, in which dislocations are accumulated along the cellular wall, α value of 0.3 was used in this study [59].
Estimated strengthening contribution of dislocation to STA specimens was 19.9 MPa for specimen without NbC and the value increased with 0.5% NbC addition to 29.3 MPa because that Zener drag could preserve some dislocation during heat treatment. On the other hand, NbC addition had less influence on strengthening contribution of dislocation to DA specimens. The strengthening contribution of dislocation to DA specimens was 110.5~117.7 MPa and was independent of NbC addition based on GND density data in Table 6. Strengthening contribution of dislocation from thermal strain in this study was lower than previous literatures of other fused based AM processes [29][30][31]48], in which strengthening contribution of dislocation about 160~400 MPa was reported. It might be because a relatively low energy density and pre-heating baseplate were used in this study, and both could decrease the thermally induced stress during the SLM process [61].
To estimate the strengthening contribution of primary strengtheners, i.e., γ′ and γ′′ precipitates, the following equation was used to describe the stress needed to cut through the particles [62]: where γAPB is the anti-phase boundary energy (~175 mj/m 2 for γ′ and ~296 mj/m 2 for γ′′ [63]), b is Burgers vector, f is the volume fraction of the particle, d is the particle diameter, T is a tension equal to Gb 2 /2, and G is the shear modulus of the matrix. In this study, long axis diameter of particles shown in Figure 7 is chosen as diameter to simplify the calculation (~13 nm).
It is hard to determine the volume fractions of γ′ and γ′′ even with TEM images. Previous work used phase compositions to estimate the fraction of each phase in Inconel 718 [23]. For DA specimen, γ′′ fraction was about 13.9 vol% and γ′ fraction was 5.13 vol%; for STA specimens, γ′′ fraction was about 16.9 vol% and γ′ fraction was 5.37 vol%. NbC addition did not influence the fractions and sizes of γ′ and γ′′ particles; the strengthening contributions of γ′′ and γ′ to STA specimen were about ~596 MPa and ~173 MPa, respectively. Moreover, for DA process, the values slightly decreased to ~565 MPa and ~170 MPa, respectively.
An extra factor termed "cellular structure strengthening'' (σcell) has been introduced in this work in order to describe the extra strength of DA specimen. This factor is a combination of sub-grain boundary associated with cell structure and hard particles (Laves and carbide) along cellular wall. The exponent k in Equation (4) was adjusted based on the assumptions below: (1) DA specimens and corresponding as-built specimens had similar σcell and (2) the estimated YS was close to the experimental data. The deduction gave an exponent k ~1.11, which is close to that of previous work (1.14~1.17) [23]. Hence, each strengthening contributors of STA and DA specimens can be presented in Figure 13. Based on the deduction, it is shown that for DA specimens, in addition to primary strengtheners, grain refinement and residual strain, cellular structure could provide further strengthening to Inconel 718. Without NbC addition, DA process could increase the yield strength by 222.7 MPa compared to that of STA without NbC content; with 0.5% NbC, an increase of 326.2 MPa was achieved. Based on the deduction, it is shown that for DA specimens, in addition to primary strengtheners, grain refinement and residual strain, cellular structure could provide further strengthening to Inconel 718. Since the STA process removed the cells while DA process preserved the cellular structure, NbC addition could further refine cell size and strengthen the cell walls; hence, a significant increase in yield strength beyond 1400 MPa could be achieved. [64]. The wide variation in yield strengths was attributed to not only different AM processes but also different post-heat treatments. SLM process can induce higher strength than that of LENS due to faster cooling rate, and this could induce higher GND density for strain hardening and finer grain size for grain boundary strengthening. With DA process, the yield strengths of both SLM and LENS processed Inconel 718 could be enhanced. Although SHT is usually applied on traditional wrought ingots of Inconel 718, and this could relieve the internal strain and chemical segregation, direct aging could preserve the plastic strain and have less influence on the grain structure and cellular dendrites, which was responsible for significant strengthening contribution in this study. Furthermore, with NbC addition, the cell size was refined and further strengthened by nano carbides along the cell walls. It was reported by Chen et al. [8] that a tensile yield strength beyond 1400 MPa could only be achieved by traditional wrought processes with 20% plastic reduction and DA treatment, but such process would be difficult to be applied to fabricate component with complex geometry. Thus, by blending only 0.5% NbC, and with direct aging treatment, this work presents a simple but effective method to achieve high strength for Inconel 718 processed by SLM; a record-high yield strength of 1461 MPa and ultimate tensile strength of 1575.2 MPa have been achieved according to literature data to-date. Figure 14. Yield strength of fused-based AM Inconel 718 in this work compared with literature data from [5,6,10,11,19,[21][22][23]34,64].
Conclusions
An effective method to strengthen a superalloy processed by SLM has been presented; a minor amount of NbC was blended with Inconel 718 superalloy for SLM process. The post-SLM direct aging heat treatment could render up to 326.2 MPa increase in yield strength. Both grain size and cellular dendrite became finer with more NbC additions in as-SLM condition. Two types of post-SLM heat treatments were investigated, i.e., solution heat treated plus aging and direct aging. Experimental results indicate that STA treatment could eliminate the cellular dendrites, reduce residual strain, and also induce grain growth; while DA treatment could retain the as-built cellular dendrites and grain size. Both STA and DA could promote the precipitations of primary strengtheners; furthermore, with NbC additions, nano carbides precipitations were observed along the retained cellular dendritic walls in DA samples. This could provide additional Zener dragging at the refined cellular walls, which were absent in STA samples. Furthermore, it was found that additions of 1.0% and 5.0% NbC could render a significant drop in ductility due to insufficient fusion of some large NbC flakes, and 0.5% NbC addition was found to provide the highest tensile strength with moderate tensile ductility around 10%. A record yield strength of 1461 MPa and ultimate tensile strength of 1575.2 MPa for Inconel 718 processed by laser manufacturing process have been achieved in this work according to literature data to-date.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 7,890.4 | 2021-10-23T00:00:00.000 | [
"Materials Science"
] |
A new insight into the innermost jet regions: probing extreme jet variability with LOFT
Blazars are highly variable sources over timescales that can be as low as minutes. This is the case of the High Energy Peaked BL Lac (HBL) objects showing strong variability in X-rays, which highly correlate with that of the TeV emission. The degree of this correlation is still debated, particularly when the flaring activity is followed down to very short time scales. This correlation could challenge the synchrotron-self-Compton scenario in which one relativistic electron population dominates the entire radiative output. We argue that the LOFT Large Area Detector (10 m$^2$, LAD), thanks to its unprecedented timing capability, will allow us to detect the X-ray counterpart (2-50 keV) of the very fast variability observed at TeV energies, sheding light on the nature of X-TeV connection. We will discuss the test case of PKS 2155-304, showing as it would be possible to look for any X-ray variability occurring at very short timescales, never explored so far. This will put strong constraints on the size and the location of any additional electron population in the multi-zone scenario. Under this perspective, LOFT and the CTA observatories, planned to operate in the same time frame, will allow us to investigate in depth the connection between X-ray and TeV emissions. We also discuss the potentialities of LOFT in measuring the change in spectral curvature of the synchrotron spectra in HBLs which will make possible to directly study the mechanism of acceleration of highly energetic electrons. LOFT timing capability will be also promising in the study of Flat Spectrum Radio Quasars (FSRQs) with flux $\ge 1$ mCrab. Constraints to the location of the high energy emission will be given by temporal investigation on second timescale and spectral trend analysis on minute timescales. This represents a further link with CTA because of the rapid (unexpected) TeV emission recently detected in some FSRQs.
Introduction
The Large Observatory For X-ray Timing (LOFT) is one of the ESA M3 mission candidates competing for a launch planned in 2022. LOFT will measure the equation of state of neutron stars and test General Relativistic effect in the strong field regime in Galactic black holes and AGNs [1]. However, the mission configuration will allow us to investigate a larger set of scientific targets (novae, GRBs, tidal disruption events,...) thanks to the capabilities of the two instruments aboard LOFT: the Large Area Detector (LAD, [2]) and the Wide Field Monitor (WFM, [3]). The LAD is a collimated experiment endowed with a 10 m 2 large effective area for X-ray detection and a spectral resolution approaching that of CCD-based instruments. In details, the usage of Silicon Drift Detectors and microchannel capillary plate collimators allow us to operate mainly in the energy range 2-30 keV with an energy (timing) resolution of < 200 − 260 eV (7-10ms). The WFM a e-mail<EMAIL_ADDRESS>envisages a set of 5 units (each unit is composed of 2orthogonal cameras), covering 1/3 of the sky at once and 50% of the sky available to the LAD at any time.
LAD background is dominated (> 70%) by high energy photons of the Cosmic X-ray Background (CXB) and Earth albedo leaking through the collimator. As these components are relatively stable and predictable [4], the anticipated LAD background systematics are estimated to be as low as ∼ 0.25% even for exposure times > 150 ks ( Figure 1).
LOFT is planned in the same timeframe of other observatories which are opening up the AGN time domain on short timescales, e.g. CTA at TeV energies and SKA and ALMA in the radio band. In this context, LOFT/LAD will be able to detect a large number (> 100) of AGNs, providing weekly monitoring (a few days for the brightest) thanks to the WFM sky coverage.
In the case of the High Energy Peaked BL Lac objects (HBLs), the X-ray band samples the synchrotron emission by the freshly accelerated and rapidly cooling high- LAD sensitivity limits at different signal to noise ratio corresponding to a systematics of 0.25% on the background knowledge est energy electrons. In the case of Flat Spectrum Radio Quasars (FSRQs) and Low Energy Peaked BL Lac objects (LBLs), it samples inverse Compton (IC) by the lowest-energy electrons, probing the bulk of the jet particle content and kinetic energy. With its huge collecting area peaking at 8-10 keV and providing unprecedented observational capabilities in this energy range, the LAD will allow us to study the temporal and spectral evolution in the 2-50 keV band with unprecedented detail, for sources with fluxes above 5 × 10 −12 cgs (see Figure 1). The LAD characteristics will therefore match very well with those of the atmospheric Cherenkov telescopes like CTA (and HESS/MAGIC/VERITAS) because all these observatories are timing explorers, characterized by huge collecting areas in the respective bands. In the following sections, we will focus on the temporal and spectral investigation of both HBLs and FSRQs with particular emphasis on short term variability studies.
X-ray/TeV connection
HBLs are good targets for the LAD as its energy band (2-50 keV) probes the synchrotron emission in the region above or close to the peak where the variability is expected to be higher.
The measurement of the TeV/X-ray lag also constrains the emission process: e.g. synchrotron self Compton (SSC) time dependent non-homogenous modeling predicts a TeV lag equal, approximately, to the light travel time across the emission region, whereas external Compton (EC), under the assumption the main source of variability is in the relativistic electron population of the jet, predicts almost simultaneity [5]. Present data do not provide measurements of lags less than a few hundred seconds [6], while with LAD this timescale will be pushed to a few/tens of seconds. This means that for flux of the order of 10 −10 cgs in 2-10 keV (typical flaring state for many HBLs and average state for some of them) the blazar flaring activity could be followed down to the decaying tail of the light curve with comparable bin time in X-ray and TeV energy bands [7]. We show in the right panel of Figure 2 how LAD observations will detect a flare lasting ∼ 200 s (which is ∼ the shortest time bin explored so far), shaped with a flux rise of 20% with respect to 10 −10 cgs over 100 s. LOFT/LAD will make possible to probe even shorter timescales during the brightest peak (which means a flux by a factor of ∼ 10 higher). These timing capabilities will provide a unique diagnostics to constrain the new emerging scenario of multi-zone (multiple electron populations) SSC modeling, as suggested by the degree of correlation observed between X-rays and TeV emissions during the flaring activity [6]. The detection of flaring activity on very short timescale in X-ray and TeV will be crucial in the understanding of jet physics and its connection with the central engine: extreme variability implies very compact regions allowing us to investigate the properties of black holes and their surroundings.
Moreover, the WFM will provide an excellent trigger for CTA (and HESS/VERITAS) observations as most of the targets suitable for TeV observation are relatively bright X-ray sources.
The acceleration mechanism
In the case of HBLs, a detailed measurement of the shape of the synchrotron spectral component from Optical/UV to hard X-ray frequencies provides physical information about the particle acceleration process in the jet since it directly traces the shape of the underlying particle distribution.
The broadband spectral distributions of several HBLs is well described by a log-parabolic fit, with the seconddegree term measuring the curvature in the spectrum. This is thought to be a fingerprint of stochastic acceleration [8]. In this scenario, the discrimination between a logparabolic and power-law cut-off shape provides a power-The innermost regions of relativistic jets and their magnetic fields ful tool to disentangle acceleration dominated states, from states at the equilibrium, or very close to, giving solid constraints on the competition between acceleration and cooling times, and on the magnetic field intensity. We have simulated a pure log-parabolic spectrum peaking a 1.5 keV, resembling the typical spectrum of a HBL object in a quiescent state. We assumed a flux of ∼ 10 −10 cgs in the energy range 2 − 20 keV over an integration time of 10 ks. In the center and bottom panels of Figure 3 we show the capability of the LAD to discriminate between a log-parabola (bottom panel) and a power-law cut-off model (center panel) with a high statistical significance. Thanks to the large sensitivity and the broad energy range of the LAD this result provides a relevant improvement compared to the performance of lower effective area instrument operating in the usual 0.2-10.0 keV range (simulations are performed in the case of Swift/XRT, top panel), exploring the curvature of the X-ray spectrum around 10 keV with unprecedented detail, almost a decade higher in energy than allowed by past X-ray observatories. Moreover, the capability to extract detailed spectra with a sub-ks temporal integration during the higher states, and the low energy threshold of 2 keV will complement the current understanding provided by X-ray observatories such as NuSTAR [9]. In addition to a phenomenological log-parabolic shape, we have simulated LAD spectra using a template numerical SED resulting from stochastic acceleration simulations taking fully into account both acceleration and cooling processes [10]. We have found that, also for this more realistic scenario, the LAD can provide firm indications on the difference between the acceleration dominated/equilibrium states. These results will be presented in a forthcoming paper (Tramacere et al. in preparation).
FSRQs: X-Gamma connection and bulk-Comptonization
Contrary to HBLs, FSRQs are blazars whose emission peaks at low energies, with the synchrotron component peaking in the radio/IR bands and the IC one in the Fermi and AGILE bands (50 MeV -50 GeV). As such, they were not expected to significantly emit at VHE (> 0.2 TeV). It came thus as a surprise the strong and highly variable emission recently discovered by TeV observatories (e.g. 3C 279, 4C 21.35, PKS 1510-089). The origin is still unclear, but the LAD can provide the answer: if due to a tail of high-energy electrons, their synchrotron radiation should appear in the hard X-ray band as well (like in Intermediate BL Lac or HBL objects), dominating over the IC one and completely changing the spectral slope (from Γ ∼ 1.5 to Γ > 2). Timing is critical: given the highly variable TeV emission (minutes-hours), such X-ray features can disappear quickly and not be measurable by instruments of lower area or poor pointing flexibility. In this respect, LOFT/LAD larger area would provide 1-s binned light curves for flux ≥ 1 mCrab (∼ 2 × 10 −11 cgs in 2-10 keV) and, in addition, spectral trend investigation during the flaring activity will be performed on minute time scale, being the non-thermal continuum power-law constrained within a few percent.
A further diagnostics will be provided by the LAD on the jet composition, in connection with GeV flares. If the jet is matter-dominated and ejected in blobs, the cold electrons are expected to Compton-upscatter BLR photons yielding a spectral signature as an excess emission at ≈ (Γ/10) 2 keV, where Γ is the bulk-motion Lorentz factor of the jet [11]. This feature, referred as bulk-Comptonization, has not been seen yet, but 1) with values of Γ > 20, this feature could actually peak around 10 keV, and thus could not be recognized so far; 2) if the jet accelerates slowly, it could be visible only for few hours, from the time the blob becomes relativistic to the time it goes outside the BLR (at a distance of ∼ 10 18 cm from the central engine) as re-ported in [12] (see their Figure 1). The LAD will able to discover this feature for the first time, or put very strong upper limits on the particle content of the jet at its base.
Combining LAD repointing with radio observations with ALMA can have also a strong impact on the SED modeling and thus on the nature of the gamma-ray emitting region, being the higher radio frequencies (30-950 GHz) properly related to the flaring activity.
Conclusions
We discussed the potential of LOFT for the study of blazars mainly based on its timing capabilities exploited in synergy with other observatories (radio, TeV) planned to operate in 2020s. On the basis of the portion of the spectral energy distribution accessible to LOFT (2-50 keV), we argue that the HBL objects are the best candidates for pointed observations with LAD. These observations will provide us the best sampling of the light curves in flaring states, given the exploration of unprecedented short timescales comparable with those achieved by the future TeV observatory CTA and in strict simultaneity. This will be possible for a large sample of TeV blazars thanks to the LAD pointing flexibility (with ∼ 70% of the sky accessible). Therefore, LOFT will open a new window on the investigation of X-ray/TeV connection, with particular regard to: • lag measurements with at least an order of magnitude improvement over the current limit of ∼ 100 seconds; • the study of the multi-zone SSC role during flares; • the connection between the jet variability and the central engine.
Moreover, the capability to extract detailed spectra in the range 2-50 keV with a sub-ks temporal integration (during the higher states) will allow us to study the mechanism of acceleration of highly energetic electrons, by following the curvature variations of the synchrotron emission.
Although FSRQs are less bright and variable in X-rays (see e.g. [13]) with respect to HBLs, LAD follow-up observations will allow us to investigate temporal (second timescale) and spectral variations during typical flaring activity (minute timescale). In a multifrequency context, LOFT will contribute to identify the nature and location of the high energy emission (particularly if detected at TeV energies which explore similar timescales).
Finally, the sky coverage, the 1-day sensitivity (∼ a few mCrab at 5-σ for an on-axis source) and the pointing flexibility of the WFM as well will provide triggers for multiwavelength follow-up from radio (SKA, ALMA) up to TeV energies (CTA). | 3,284.4 | 2013-10-25T00:00:00.000 | [
"Physics"
] |
Pre-training Language Models for Comparative Reasoning
Comparative reasoning is a process of comparing objects, concepts, or entities to draw conclusions, which constitutes a fundamental cognitive ability. In this paper, we propose a novel framework to pre-train language models for enhancing their abilities of comparative reasoning over texts. While there have been approaches for NLP tasks that require comparative reasoning, they suffer from costly manual data labeling and limited generalizability to different tasks. Our approach introduces a novel method of collecting scalable data for text-based entity comparison, which leverages both structured and unstructured data. Moreover, we present a framework of pre-training language models via three novel objectives on comparative reasoning. Evaluation on downstream tasks including comparative question answering, question generation, and summarization shows that our pre-training framework significantly improves the comparative reasoning abilities of language models, especially under low-resource conditions. This work also releases the first integrated benchmark for comparative reasoning.
Introduction
Comparative reasoning constitutes a fundamental cognitive ability that plays a crucial role in human decision-making processes.It involves comparing and contrasting various objects, concepts, or entities to draw conclusions or make informed decisions.For example, consumers often compare products based on features such as price, quality, and user reviews before making a purchase decision.Similarly, policymakers weigh the advantages and disadvantages of different policy proposals to address pressing issues.In the context of textual documents, comparative reasoning is crucial for tasks such as identifying differences between research papers, contrasting news articles from different sources, or synthesizing the arguments of opposing viewpoints in a debate.
In recent years, there have been studies developing natural language processing (NLP) models capable of mining (Jindal and Liu, 2006;Li et al., 2011), understanding (Bondarenko et al., 2022;Bista et al., 2019), and generating (Iso et al., 2022;Beloucif et al., 2022) comparative contents over texts.Yet, a substantial barrier persists: these models often require labor-intensive manual data labeling, rendering them costly and unfeasible for large-scale applications.Moreover, these models are designed for a particular task, which limits their generalizability on different or emerging tasks.Meanwhile, pre-trained language models (PLM) such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) exhibit good generalizability on several NLP tasks.However, existing pre-training methods (e.g., masked language modeling and span infilling) could not grant the language models strong comparative reasoning abilities, especially in fewshot and zero-shot settings (see results in Table 4).
In response to these challenges, in this paper, we present a novel pre-training framework to enhance the comparison reasoning abilities of PLMs, specifically by capturing comparable information within paired documents more effectively.Our approach pilots around a scalable, labor-free data collection method that can gather a wealth of facts for entity comparison by combining structured (e.g., Wikidata) and unstructured data (e.g., news sources and Wikipedia).We represent these facts as quintuples, which consist of a pair of entities and the corresponding values of their shared property.To enable the pre-training in a text-to-text manner, we convert the quintuples into textual components such as question-answer pairs and brief summaries of the two entities.We further design three pre-training tasks, including generating synthetic comparative answers, questions, and summaries, given the documents of two entities as contexts.Subsequently, we unify the pre-training tasks by multi-task learning.To our best knowledge, we are the first to pre-train language models for comparative reasoning.
We assess the effectiveness of our approach by benchmarking comparative reasoning on a suite of tasks, including comparative question answering, comparative question generation, and comparative summarization.Our experimental results demonstrate a notable improvement in the performance of conventional PLMs including BART and T5 under limited-resource scenarios.
Our contributions are three-fold: • We propose a scalable method for synthesizing data for entity comparison, leveraging both structured and unstructured data sources.• We present a novel framework for pre-training PLMs to enhance their comparative reasoning abilities on multiple related tasks.• We provide the first benchmark for entity comparison over texts, serving as a foundation for future research in this domain.
2 Related Work
Comparative Reasoning Tasks
Early studies focused on mining explicit comparative information from massive corpora, such as identifying comparative sentences (Jindal and Liu, 2006), mining comparable entities (Li et al., 2011), and classifying components of comparison (Beloucif et al., 2022).Recent work focused more on natural language generation tasks such as generating arguments to answer comparative questions (Chekalina et al., 2021), generating comparable questions for news articles (Beloucif et al., 2022), and summarize comparative opinions (Lerman and McDonald, 2009;Iso et al., 2022).However, the existing techniques were designed for specific tasks, and they are limited by the scarcity of supervised data, which poses a challenge due to the laborintensive nature of data collection.
Language Models Pre-training
The combination of structured and unstructured data in language model pre-training has garnered considerable attention in recent research.Early work proposed to fuse KG information and text by encoding the graph structure and take entity embeddings as a part of the input, such as ERNIE (Zhang et al., 2019), K-Adapter (Wang et al., 2020), KE-PLER (Wang et al., 2021), K-BERT (Liu et al., 2020), and JAKET (Yu et al., 2022).Another branch of work proposed to integrate entity information (Xiong et al., 2020;Qin et al., 2021;Zhang et al., 2022) or relation information (Qin et al., 2021;Hu et al., 2021) without modifying language model's structure.Notably, RGPT-QA (Hu et al., 2021) (Hu et al., 2022) brought structured knowledge into generative LMs by integrating graph-based knowledge augmented modules.However, they require knowledge graph as a part of their inputs and process them by the graph neural network-based modules.
Pre-training Framework
Our framework explicitly teaches language models at the pre-training stage about comparative reasoning.Specifically, they are given a pair of documents, each describing an entity, and are trained to generate a piece of text pertaining to the comparison between these two entities.Regarding the types of output texts, we design three sequence-tosequence pre-training tasks that require the model to simultaneously attend to both documents and extract information for pairwise comparison.The pretraining tasks include comparative answer generation ( §3.3.1),question-answer generation ( §3.3.2), and summarization ( §3.3.3).
To enable large-scale pre-training with a data collection, we utilized structured data (e.g., Wikidata) and unstructured corpora (e.g., Gigawords, CC-News and Wikipedia) to obtain quintuples, a novel structural unit on entity comparison.Then we convert the quintuples to textual components for the sequence-to-sequence pre-training.
Notations
Given a (head) entity e, we denote a Wikidata statement as (e, p, v), where p is a property or relation, and v is a value or (tail) entity.Given entities e 1 and e 2 , we obtain their Wikidata statements.From the two sets of statements, we first extract quintuples for entity comparison.A quintuple is represented as (e 1 , e 2 , p, v 1 , v 2 ), where p is a common property of e 1 and e 2 , and v 1 and v 2 are the corresponding values in the Wikidata statements.Such quintuples enable the comparison on shared properties, reflecting the similarity or difference between the corresponding property values or tail entities.
Then, we convert quintuples into textual forms, such as question-answer pairs, which are denoted by (Q, A), and summaries denoted by S. Additionally, we extract text descriptions D 1 and D 2 from Wikipedia for e 1 and e 2 , respectively.
Data sources
Our approach uses large-scale data with free access, as needed in most pre-training frameworks.We design novel pre-training tasks using both unstructured and structured data sources, compared to the existing frameworks that use mainly the text corpora.Because the structured data help define the tasks for comparative reasoning.
Wikidata1 is a collaborative knowledge base that stores data in a structured format.Wikidata contains a set of statements that describe entities, where each statement includes a property and a value, as denoted in §3.1.Values can be object entities which have unique identifiers, or literal values including date values, numerical values or strings.Each entity and property is associated with a set of aliases.
Wikipedia2 is a encyclopedia that contains a vast collection of articles covering a wide range of entities.A Wikidata entity can be linked to a Wikipedia article through a property named "sitelink".
Text corpora, encompassing news sources (e.g., Gigawords3 , CC-News4 ) and Wikipedia, offer an abundance of information for determining the comparability of entities and the properties for the comparisons.For example, a sentence in a piece of news from New York Times like "The show, with a book by the screenwriter Diablo Cody ('Juno') and staging by director Diane Paulus ('Waitress'), takes on the good work ...," indicating that Diablo Cody and Diane Paulus could be compared on the property of work (values: screenwriter vs. director).
Quintuple collection
In this section, we introduce the process of collecting quintuples by combining Wikidata and text corpora.The underlying hypothesis guiding our efforts is that when a pair of statements concerning the same property of related entities co-occur in a textual context, there is a high probability that these statements are indeed comparable.
To extract this comparability information, we first sample a document from the text corpora.Then, we link Wikidata statements to the sentences in corpora by identifying the mentions of entity e, property p, and value v using string matching based on their aliases provided in Wikidata.Specifically, a statement (e, p, v) is linked to a sentence if the aliases of e, p, and v all appear in the sentence.To increase the linking accuracy, we properly tokenize the sentences, convert all text to lowercase, and remove stop words.Next, we pair (e 1 , p 1 , v 1 ) and (e 2 , p 2 , v 2 ) if they satisfy the following criteria: 1. e 1 and e 2 belong to the same category, e.g., they both have the value human for property instance of.This ensures the entities are analogous to each other.2. p 1 and p 2 are equal.This follows the principle that comparisons are made on a common property of two entities.3. The sentences linked to (e 1 , p 1 , v 1 ) and (e 2 , p 2 , v 2 ) co-occur within the same context (e.g., a short paragraph of a news article).Being mentioned together indicates a high probability of implicit comparison.We denote such a statement pair as a quintuple (e 1 , e 2 , p, v 1 , v 2 ).Such quintuples store information for entity comparison.
Quintuple textualization
In order to pre-train the language model in a textto-text manner, it is crucial to represent the comparative information inherent in the quintuples in a textual form.We aim to explicitly train the model to capture comparable information from a pair of documents and make comparisons.To this end, we propose to input a pair of documents, each containing a text description of a single entity, and train the model to generate texts involving comparison.
As part of this process, we extract documents for each pair of entities e 1 and e 2 .First, we find Wikipedia articles of e 1 and e 2 by the links provided in Wikidata.Next, we split the articles into 10-sentence segments.To ensure the information within the quintuple can be inferred from the documents, we link (e 1 , p, v 1 ) and (e 2 , p, v 2 ) to sentences in the articles of e 1 and e 2 , respectively.We link the statements based on two assumptions: (1) Within an article pertaining to entity e, sentences are highly probable to discuss e as their subject; (2) If a sentence in a Wikipedia article of e mentions both e and v from a Wikipedia statement (e, p, v) , then it is highly likely that the sentence describes the fact of (e, p, v).Thus, we link the statements to sentences whenever (e, v) or (p, v) can be matched.To assess the linking quality, we randomly sampled 100 statement-sentence links and manually evaluate the the accuracy.The linking accuracy exceeds 95%, indicating the Wikidata statements are effectively linked to the sentences.Finally, We choose text segments from the articles of e 1 that can be linked to the (e 1 , p, v 1 ) as document D 1 .Similarly, we obtain D 2 for e 2 .
To enable the model to make comparisons in a text-to-text manner, the comparison knowledge encapsulated within the quintuples is converted into natural language forms, namely, question-answer pairs and summaries.The conversion to questionanswer pairs (Q, A) are synthesized using predefined templates shown in Table 2.
To generate synthetic comparative summaries S, we utilize a data-to-text model (Ribeiro et al., 2021) that has been fine-tuned with DART (Nan et al., 2021) dataset.This allows us to transform quintuples into concise declarative sentences.
Pre-training Tasks and Objectives
We propose three comparative pre-training tasks, including comparative answer generation, QA pairs generation, and summary generation.They are all text generation tasks, which fit the architectures of popular language models such as BART and T5 very well.We unify the pre-training tasks with task specific prompts, as shown in Table 1.Table 2: Synthetic QA templates.All indicates the following templates is applicable to all quintuples.The templates under When v 1 ̸ = v 2 : or When v 1 ̸ = v 2 : will be applied to quintuples whose v 1 and v 2 are different or the same, respectively.
Comparative answer generation
We concatenate the synthesized comparative questions and the two documents as input.The model is trained to generate the corresponding answer.This task not only activates the attention mechanism between the question and relevant contexts in each document, more importantly, it encourages the interaction between both documents, which is essential to make the comparison.We define the loss function for a training sample as in which T is a set of QA pairs derived from the templates.and P (•) is the predicted probability.
Comparative QA pairs generation
Given two documents, the model is required to generate comparative questions and answers.By learning to generate comparative questions, the model learns to attend to both documents and reason about common and valuable properties of the two entities:
Comparative summary generation
Given two documents, the model is tasked with generating short comparative summaries that represent the comparable statements: where S is the set of summaries from quintuple textualization.
Pre-training objectives
Inspired by the multi-task prompted training (Sanh et al., 2022), we unify the aforementioned pretraining tasks with natural language prompts.The detailed format of source and target sequences are shown in Table 1.The model is jointly optimized for all tasks using a shared loss function, which encourages the model to learn generalizable representations that are beneficial across tasks.For BART, we train the proposed pre-training tasks together with the text infilling (TI) task, where the model requires to reconstruct the text corrupted with randomly masked spans, as described in Lewis et al. 2020.The detailed We denote the loss function for text infilling as as L TI .Hence, the overall objective L is formulated as follows: 4 Experiments
Datasets and Evaluation Metrics
To evaluate our proposed method, we consider downstream tasks involving comparative reasoning, including comparative question answering (QA), comparative question generation (QG) and comparative summarization.In this section, we introduce the downstream datasets and evaluation metrics.The statistics of the datasets are shown in Table 3.
Comparative question answering
Comparative question answering (QA) requires to compare two or more entities on their shared properties.Since our focus on comparison over documents instead of knowledge retrieval, we do not include distractor passages but directly use the gold evidence passages as the context for question answering.For evaluation, we calculate the exact match (EM) score between the predicted answer and the ground-truth answer, after necessary normalization (Chen et al., 2017).Besides, unigram F-1 scores are also calculated as a complementary metric to compare the similarity beteween the predicted answer and the ground-truth.
HotpotQA CMP and 2WikiQA CMP HotpotQA (Yang et al., 2018) and 2WikiMultihopQA (2Wiki) (Ho et al., 2020) are factual question answering datasets collected from English Wikipedia.These datasets require multi-hop reasoning on different entities before reaching the correct answer.To focus on comparative questions, we obtain the subset of comparison questions based on their question type annotations and denote them as HotpotQA CMP and 2WikiQA CMP , repspectively.
Comparative question generation
Comparative question generation (QG) aims at generating questions that draw comparisons between the shared properties of two entities, given their textual descriptions.The task poses the challenge of identifying and inquiring about properties that humans would deem interesting and worthy of comparison.For comparative QG, we adopt wellestablished evaluation metrics, including BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004).
For comparative QG, we perform answerunaware QG and use datasets converted from comparative QA, denoted by HotpotQG CMP and 2WikiQG CMP respectively.
Comparative summarization
Comparative summarization aims at generating summaries that highlight the similarities or differences between two entities given their descriptions.Following the convention in text summarization (Zhang et al., 2020) , we evaluate the generated summaries with ROUGE scores.
CocoTrip For the CocoTrip dataset (Iso et al., 2022), we employ the common opinion summarization setting.The task involves summarizing the shared opinions from two sets of reviews about two hotels.We concatenate the reviews as input.
Diffen To address the absence of available datasets for the comparative summarization of two entities, we have taken the initiative to curate a unique dataset.The dataset is collected from Diffen.com, a website recognized for offering highquality, human-authored comparisons between different people or objects to help people make informed decisions.Comparison articles on Diffen.comtypically include a brief introduction summarizing the similarities and differences.We man- ually collect these introductory paragraphs as comparative summaries.To gather input sources, we obtain Wikipedia articles as entity descriptions.The task aims at generating a comparative summary based on the given text descriptions of two entities.
The input sequence consists of concatenated entity descriptions, with each description truncated to the first 512 tokens due to the text length restriction.
Experimental Setup
As a pilot study on pre-training for comparative reasoning, we adopt the pre-trained BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) as baselines.Models that are further trained on our comparative objectives are denoted as BART+CMP and T5+CMP, respectively.
For each downstream dataset, we implemented three distinct settings -full-data fine-tuning, fewshot learning, and zero-shot learning -to experiment with both the baseline and our proposed method.In the context of the few-shot learning scenario, we randomly selected 100 instances from the original training set.However, given the limited number of training instances available in CocoTrip and Diffen (specifically, only 20 instances), we merge the full-data and few-shot settings for these particular datasets.For Cocotrip, where the test set is available, we select the best model based on the validation set, and report the results on the test set.For other dataset, we report the results on the validation sets.
Effects of comparative pre-training
In the comprehensive evaluation across six datasets on three tasks, we compare the performance of the proposed method (in the form of "+CMP") against BART and T5.Main results are listed in Table 4.
In the full-data setting, both our proposed models, BART+CMP and T5+CMP, demonstrate performances on par with their baselines across all tasks.Specifically, on HotpotQA, the BART+CMP model achieved an EM of 69, while BART achieved an EM of 69.27, and similarly for T5, the metrics are 72.69 and 73.16, respectively.Similar patterns are observed for other metrics and datasets, emphasizing the competitive performance of our approach in data-abundant scenarios.
However, in low-resource scenarios, as represented by the few-shot and zero-shot settings, the superiority of our method over the baselines became clearly evident.For few-shot learning, our models outperform the baselines on most datasets.Among three tasks, our models show the most significant improvement on the comparative QA task, demonstrating the effectiveness of our synthetic QA pre-training.In zero-shot setting, BART+CMP and T5+CMP consistently surpass their baselines by a large margin.For instance, on HotpotQG, BART+CMP improves over BART by a substantial margin in BLEU (6.86 vs. 1.70) and .Likewise, T5+CMP surpasses T5, with BLEU and ROUGE-L scores of 7.24 and 28.99 against 1.21 and 18.7, respectively.Therefore, these results illustrate that our proposed pre-training method greatly enhances LM's performance in low resource scenarios, while retaining competitive performance in scenarios with abundant training data.
Multi-task v.s. single-task pre-training
To further explore the benefits of multi-task pretraining, we compare the performance of our models pre-trained on single task (i.e., QA, QAG or SUM) with the unified models pre-trained on all proposed tasks.Results are shown in Table 5.When the model is pre-trained on a single task, we observed a significant improvement in performance on the downstream task that closely resembled the pre-training task.However, the model did not exhibit similar improvements on other tasks that were unrelated or less similar in nature.This finding suggests that pre-training on a single task enhances the model's ability to transfer knowledge specifically to tasks with similar characteristics.For the unified model, we observed substantial improvements in performance across all downstream tasks.The improvement brought by the multi-task pretrained model on each task is comparable to the gains achieved through the corresponding task-specific pre-training.This outcome suggests that multitask pre-training enables the model to learn more generalized representations and effectively leverage the shared knowledge across different tasks.
Case Study
To intuitively show the comparative reasoning ability of our pre-trained model, we present an example of comparative summarization in Table 6.Given documents describing airsoft and paintball, models are expected to generate a summary comparing the commonalities and differences of these two games.However, without exhaustive fine-tuning, the generated summary of BART fails to describe the correct relationship between these two entities.On the contrary, after pre-trained on various comparative reasoning objectives, our model generates high quality comparative summaries based on the provided documents under the few-shot setting.The generated summary includes that both games are popular shooting sports while also comparing their differences in their equipment.
Conclusion
In this paper, we presented a novel framework for pre-training language models for comparative rea-D 1 : Airsoft is a team game in which participants eliminate opposing players by tagging them out of play with spherical plastic projectiles shot with mock air weapons called airsoft guns.... (446 words left) D 2 : Paintball is a competitive team shooting sport in which players eliminate opponents from play by hitting them with spherical dye-filled gelatin capsules called paintballs that break upon impact.... (472 words left) Gold: Airsoft is a popular combat simulation game where participants are eliminated when hit by pellets launched from guns that resemble real firearms.In paintball participants try to hit each other with paintballs launched from a special paintball marker/gun.While airsoft is cheaper and provides a more realistic warfare experience, paintball is more popular, more organized and has larger events.
BART (R-L: 18.66, R-2: 4.39) Airsoft is a team shooting sport in which participants eliminate opponents by hitting them with airsoft guns.Airsoft guns are shaped like basketballs or baseball bats and are equipped with a series of round-shaped projectiles called paintballs.BART+CMP (R-L: 19.17, R-2: 8.62) Airsoft and Paintball are two of the most popular shooting sports of all time.Airsoft is a shooting sport that involves hitting opponents with airsoft guns, while Paintball is a more aggressive game that uses a softer, more aggressive, ball-shaped paintball.soning.It obtained quintuples for entity comparison by combining structured and unstructured data, converted the quintuples into textual components, and employed them in three novel sequence-tosequence pre-training tasks.We demonstrated the effects of the pre-training tasks on six downstream datasets, especially in limited-resource scenarios.To facilitate the assessment of models' capability of entity comparison over texts, we release the benchmark for future research.
Limitations
In our pre-training framework, we generate synthetic data for comparative answer generation pretraining with templates, which can cause some synthetic questions not fluent.Such noises in the pre-training data might limit the downstream performance.Similarly, the language of the synthetic summaries generated by a trained data-totext model are rigid and lack of diversity and flexibility.Future work can adopt more advanced approaches to convert quintuples into more fluent and diverse texts for pre-training.
Figure 1 :
Figure 1: The framework of pre-training language models (LMs) for comparative reasoning abilities.In Step 1, we collect quintuples for entity comparison by combining structured knowledge base (i.e., Wikidata) and unstructured corpora (i.e., Gigawords, CC-News, Wikipedia).Details are in § 3.2.2.In Step 2, to obtain text-based pre-training data, we convert the quintuples into synthetic QA pairs and summaries with a set of QA templates and a fine-tuned data-to-text model, respectively.We gather Wikipedia documents as text descriptions of entities.Details are in § 3.2.3.In Step 3, we design novel seq-to-seq pre-training tasks for the LMs.Details are described in § 3.3.
e 1 and e 2 have the same/different value of p? A: Yes/No Q: Do e 1 and e 2 both have the value of v1 in terms of p? A: Yes/No Q: What are the p of e 1 and e 2 ?A: v 1 , v 2 When v 1 ̸ = v 2 : Q: Which one of the following entity's p is v 1 ?e 1 or e 2 ?A: e 1 Q: Is e 1 's p v 1 or v 2 ?A: v 1 When v 1 = v 2 : Q: Which entity has the same value as e 1 in terms of p? A: e 2 Q: e 1 and e 2 are known for what (value) of p? A: v 1 /v 2
Table 3 :
Statistics of our downstream datasets.
Table 4 :
Main results.Our pre-trained models denoted by +CMP, bring significant performance gain to BART and T5 in zero-shot (e.g., relatively +82% and +220% of F1 on HotpotQA CMP ) and few-shot (e.g., relatively +29% and +52% of F1 on 2WikiQA CMP ) settings across all tasks.In full-data settings that assume a huge number of labeled examples are available, our approach makes smaller improvements on the two models.
Table 5 :
Few-shot and Zero-shot results of models with multi-task pre-training (denoted by +CMP) vs. single-task pre-training (denote by +CMP_QA, +CMP_QAG, and +CMP_SUM).On each task, the (multi-task) unified model shows performance gain over BART, which is comparable to task-specific pre-trained models.In the meanwhile, it also improve on other tasks, showing the effectiveness of our unified multi-task pre-training.
Table 6 :
A test example of Diffen dataset.BART and BART+CMP refer to the model predictions after fewshot fine-tuning. | 6,062 | 2023-05-23T00:00:00.000 | [
"Computer Science"
] |
Metagenomic Analysis of Fecal Archaea, Bacteria, Eukaryota, and Virus in Przewalski's Horses Following Anthelmintic Treatment
Intestinal microbiota is involved in immune response and metabolism of the host. The frequent use of anthelmintic compounds for parasite expulsion causes disturbance to the equine intestinal microbiota. However, most studies were on the effects of such treatment on the intestinal bacterial microbes; none is on the entire microbial community including archaea and eukaryotic and viral community in equine animals. This study is the first to explore the differences of the microbial community composition and structure in Przewalski's horses prior to and following anthelmintic treatment, and to determine the corresponding changes of their functional attributes based on metagenomic sequencing. Results showed that in archaea, the methanogen of Euryarchaeota was the dominant phylum. Under this phylum, anthelmintic treatment increased the Methanobrevibacter genus and decreased the Methanocorpusculum genus and two other dominant archaea species, Methanocorpusculum labreanum and Methanocorpusculum bavaricum. In bacteria, Firmicutes and Bacteroidetes were the dominant phyla. Anthelmintic treatment increased the genera of Clostridium and Eubacterium and decreased those of Bacteroides and Prevotella and dominant bacteria species. These altered genera were associated with immunity and digestion. In eukaryota, anthelmintic treatment also changed the genera related to digestion and substantially decreased the relative abundances of identified species. In virus, anthelmintic treatment increased the genus of unclassified_d__Viruses and decreased those of unclassified_f__Siphoviridae and unclassified_f__Myoviridae. Most of the identified viral species were classified into phage, which were more sensitive to anthelmintic treatment than other viruses. Furthermore, anthelmintic treatment was found to increase the number of pathogens related to some clinical diseases in horses. The COG and KEGG function analysis showed that the intestinal microbiota of Przewalski's horse mainly participated in the carbohydrate and amino acid metabolism. The anthelmintic treatment did not change their overall function; however, it displaced the population of the functional microbes involved in each function or pathway. These results provide a complete view on the changes caused by anthelmintic treatment in the intestinal microbiota of the Przewalski's horses.
Intestinal microbiota is involved in immune response and metabolism of the host. The frequent use of anthelmintic compounds for parasite expulsion causes disturbance to the equine intestinal microbiota. However, most studies were on the effects of such treatment on the intestinal bacterial microbes; none is on the entire microbial community including archaea and eukaryotic and viral community in equine animals. This study is the first to explore the differences of the microbial community composition and structure in Przewalski's horses prior to and following anthelmintic treatment, and to determine the corresponding changes of their functional attributes based on metagenomic sequencing. Results showed that in archaea, the methanogen of Euryarchaeota was the dominant phylum. Under this phylum, anthelmintic treatment increased the Methanobrevibacter genus and decreased the Methanocorpusculum genus and two other dominant archaea species, Methanocorpusculum labreanum and Methanocorpusculum bavaricum. In bacteria, Firmicutes and Bacteroidetes were the dominant phyla. Anthelmintic treatment increased the genera of Clostridium and Eubacterium and decreased those of Bacteroides and Prevotella and dominant bacteria species. These altered genera were associated with immunity and digestion. In eukaryota, anthelmintic treatment also changed the genera related to digestion and substantially decreased the relative abundances of identified species. In virus, anthelmintic treatment increased the genus of unclassified_d__Viruses and decreased those of unclassified_f__Siphoviridae and unclassified_f__Myoviridae. Most of the identified viral species were classified into phage, which were more sensitive to anthelmintic treatment than other viruses. Furthermore, anthelmintic treatment was found to increase the number of pathogens related to some clinical diseases in horses. The COG and KEGG function analysis showed that the intestinal microbiota of Przewalski's horse mainly participated in the carbohydrate and amino acid metabolism. The anthelmintic treatment did not change their overall function; however, it displaced the population of the functional microbes involved in each function or pathway. These results provide a complete view on the changes caused by anthelmintic treatment in the intestinal microbiota of the Przewalski's horses.
INTRODUCTION
Gasterophilus spp. (horse botfly) is a common parasite in equids (1,2). Their eggs and larvae can survive in the digestive system (e.g., stomach and intestine) of a host for 8-10 months (3,4). Infection of horse botfly can cause serious clinical diseases, such as dysphagia, gastric and intestinal ulceration, gastric obstruction, and volvulus; it could even lead to severe risks of anemia, diarrhea, gastric rupture, peritonitis, perforating ulcers, and other complications (5)(6)(7). The horse botfly epidemic has been serious in the desert steppe of Xinjiang, China, with six species including G. haemorrhoidalis, G. inermis, G. intestinalis, G. nasalis, G. nigricornis, and G. pecorum commonly found in the local equids (8,9). In botfly infection in Przewalski's horses, there is particular severity with a 100% infection rate, and an infection level much higher than other equine animals (10). At present, deworming is performed annually in winter through administration of anthelmintic compounds to control the infestation (9). However, the frequent use of these drugs will inevitably lead to drug resistance of parasites, which have been widely reported (11)(12)(13). It could also disturb the balance of the intestinal microbial community after removal of the parasites (14).
The equine gut hosts a complex microbial ecosystem with a variety of commensal, symbiotic, and pathogenic microbes. Disturbances to the normal intestinal microbiota could exert critical impact on the host's physiology. In horses, some disturbances are found related with colic (15,16), diarrhea (17), obesity (18), and other clinical diseases. It is known that many factors such as nutrition and management, medication, age, disease, stress, and gender can influence equine intestinal microbiota (19,20). However, studies on the effects of anthelmintic treatment for parasite expulsion on the intestinal microbiota of horses are still limited. Goachet et al. (21) were the first to report a reduction in cellulolytic bacteria and an increase in Lactobacilli and Streptococci in horses after anthelmintic treatment. Peachey et al. (22) found that bacterial phylum TM7 was reduced 14 days after anthelmintic treatment, while Adlercreutzia spp. were increased only 2 days after. Crotch-Harvey et al. (23) detected temporal differences of the bacterial community when horses were treated with anthelmintic drugs. Walshe et al. (24) found that the alpha and beta diversity of the bacterial community decreased at day 7 of post-anthelmintic treatment and reverted on day 14. Peachey et al. (25) confirmed again that anthelmintic treatment was associated with alteration of the relative abundances of the bacterial community. Daniels et al. (26) observed that anthelmintic treatment increased the relative abundances of Deferribacter spp. and Spirochaetes spp. For Przewalski's horses, Hu et al. (4) found that the removal of horse botflies through anthelmintic treatment decreased the alpha diversity of the gut bacterial community, increased its Firmicutes to Bacteroidetes (F/B) ratio, and increased the genera of Streptococcus and Lactobacillus and some pathogenic bacteria. Nonetheless, these studies only explored the changes of the intestinal bacterial community associated with anthelmintic treatments. There is no study available on the changes of the entire microbial community and its functional prediction due to anthelmintic treatment. In fact, fungi, viruses, and some other species also play vital roles in the physiology and immune system of the host (27); e.g., fungi of Aspergillus, Candida, Fusarium, Penicillium, and Saccharomyces and archaea of methanogens represent notable members of the intestinal microbiota (28,29). Hu et al. (30) suggested that anthelmintic treatments on Przewalski's horses could impact the fungal communities even more than the bacterial communities. Therefore, metagenomic sequencing is used in this study to characterize the entire intestinal microbial community (archaea, bacteria, eukaryota, and virus) of Przewalski's horses prior to and following anthelmintic treatments (ivermectin), so as to identify the changes in microbial diversity and richness, as well as genes, functions, and metabolism pathways. The results will lead to a better understanding on the relationships between anthelmintic treatment and the equine intestinal microbiota.
Ethics Statement
This study was carried out in accordance with the recommendations of the Institute of Animal Care and the Ethics Committee of Beijing Forestry University. The Ethics Committee of Beijing Forestry University approved the experimental protocol. The management authority of the Kalamaili Nature Reserve (KNR) in Xinjiang approved the collection of Przewalski's horse fecal samples.
DNA Extraction and Metagenomic Sequencing
In a previous study of the present research group (4), fecal samples of seven adult Przewalski's horses (four male, three female) of similar body weight in the KNR prior to (PATPH) and following (FATPH) anthelmintic treatment of ivermectin were collected for DNA extraction and 16S rRNA sequencing. The numbers of horse botfly larvae in the fecal samples of the FATPHs were also recorded for assessing their parasitic infection status before treatment. In this study, shotgun metagenomic sequencing was conducted on the same DNA extracts used for 16S rRNA sequencing in Hu et al. (4). Out of the seven pairs of DNA samples, three pairs were chosen for this study based on their corresponding high to low and in between total fecal larva counts (FATPH3: 2,966, FATPH6: 1,928, FATPH1: 724). The concentration and purity of these six DNA samples (PATPH3, PATPH6, PATPH1; FATPH3, FATPH6, FATPH1) were tested by TBS-380 and NanoDrop 2000, respectively. DNA extract quality was checked with 1% agarose gel.
The DNA extract was fragmented to an average size of about 400 bp using Covaris M220 (Gene Company Limited, Beijing, China) for paired-end library construction with NEXTFLEX Rapid DNA-Seq (Bioo Scientific, Austin, TX, USA). Adapters containing the full complement of sequencing primer hybridization sites were ligated to the blunt end of fragments. Paired-end sequencing was performed on an Illumina sequencing platform at Majorbio Bio-Pharm Technology Co., Ltd. (Shanghai, China) according to the manufacturer's instructions (www.illumina.com). Sequence data associated with this study have been deposited in the NCBI Short Read Archive database (BioProject ID: PRJNA722063).
Sequence Quality Control and Genome Assembly
Data were analyzed on the free online Majorbio Cloud Platform (www.majorbio.com). The paired-end Illumina reads were trimmed of adaptors. Low-quality reads (length <50 bp or with a quality value <20 or having N bases) were removed by fastp (31) (https://github.com/OpenGene/fastp, version 0.20.0). Reads were aligned to the Przewalski's horse genome (GenBank accession no. GCA_000696695.1) by BWA (32) (http://bio-bwa. sourceforge.net, version 0.7.9a). Any hit associated with the reads and their mated reads were removed. Metagenomics data were assembled using MEGAHIT (33) (https://github.com/voutcn/ megahit, version 1.1.2), which makes use of succinct de Bruijn graphs. Contigs with the length being or over 300 bp were selected as the final assembling results; they then were used for further gene prediction and annotation.
Representative sequences of nonredundant gene catalog were aligned to the NCBI NR database with an e-value cutoff of 1e −5 using Diamond (37) (http://www.diamondsearch.org/ index.php, version 0.8.35) for taxonomic annotations. Cluster of orthologous groups of proteins (COG) annotation for the representative sequences was performed using Diamond against the eggNOG database with an e-value cutoff of 1e −5 . The KEGG annotation was conducted using Diamond against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (http:// www.genome.jp/keeg/) with an e-value cutoff of 1e −5 . The pathogens were predicted with the pathogen-host interactions (PHI) database (http://www.phi-base.org/, version 4.4).
Statistical Analysis
All data were checked for normality. The Wilcoxon rank-sum test in STAMP was used to seek for significant differences between groups, and the p-value was tested by Bonferroni correction. The linear discriminant analysis (LDA) effect size (LEfSe) method was used to identify bacterial taxa with significant difference among groups (http://huttenhower.sph.harvard.edu/galaxy/ root?tool_id=lefse_upload). Principal component analysis (PCA) was calculated using weighted UniFrac distance metric in R software.
RESULTS AND DISCUSSIONS
Anthelmintic compounds are widely used in the equine populations because of their common parasitic infections. Anthelmintic treatment will gravely harm the animal's health if left uncontrolled. Previous studies showed that the composition and structure of the intestinal microbial community in horses could be changed upon the treatment of anthelmintics (23)(24)(25)38). However, these studies lacked a control group of horses that were free of parasites, hence making it difficult to identify if the changes were due to the administered anthelmintics or due to the removal of parasites. Then, Kunz et al. (14) conducted an experiment to investigate how the intestinal microbes of uninfected horses changed under the administration of anthelmintic compounds. Their results did not show largescale changes in the intestinal microbial community observed in infected horses treated with anthelmintics. Thus, changes in intestinal microbes of horses following anthelmintic treatment are mainly associated with the removal of parasites.
Horse botflies are the main concern for the wild Przewalski's horses in Xinjiang. Ivermectin is administered to the horses once a year in winter to control their parasitic infestation. Hence, understanding how these regular horse botfly expulsion treatments would impact the intestinal microbiota of the horses becomes important, as the microbiota plays an important role in influencing the host metabolism, immunity, speciation, and many other functions (39)(40)(41)(42)(43)(44). However, previous studies were limited to studying only the bacterial community based on 16S rRNA sequencing, thus lacking information on other members such as archaea, fungi, and virus of the microbiota, and analysis of its functions. (Figure 1). Compared with those in FATPHs, the abundance of Methanocorpusculum showed a decreasing trend (11.15%↓, p = 0.663) and that of Methanobrevibacter showed an increasing trend (18.69%↑, p = 0.081) after anthelmintic treatment. However, other top genera did not vary much. It is known that both Methanocorpusculum and Methanobrevibacter are methanogens that promote fermentation of carbohydrates and produce methane in the gut of mammals (47,48). Methanobrevibacter spp. were the dominant methanogens found in the guts of goats and dairy cows (49)(50)(51). The abundance of Methanocorpusculum spp. showed an increasing trend in horses fed with forage (52).
Differences in Composition of Intestinal Microbial Community in FATPHs and PATPHs
The use of metagenomics can classify the microbes into species level (53), which 16S rRNA sequencing cannot. The present metagenomics results showed that the species of Methanocorpusculum labreanum and Methanocorpusculum bavaricum dominated in both of the FATPHs and PATPHs. Their relative abundances were 24.15% (Methanocorpusculum labreanum) and 17.13% (Methanocorpusculum bavaricum), respectively in FATPHs and 30.85 and 21.45%, respectively in PATPHs. Both species showed a decreasing trend following anthelmintic treatment (p = 0.663) (Figure 2). Furthermore, there were seven species with relative abundances at zero order of magnitude in FATPHs and PATPHs (Figure 2). Among them, Methanobrevibacter ruminantium (6.63%↑, p = 0.081) and Methanobrevibacter olleyae (5.94%↑, p = 0.081) had a significant increasing trend following anthelmintic treatment. This was consistent with the findings in sheep by Moon et al. (54), who indicated that an increase in relative abundance of Methanobrevibacter ruminantium was associated with anthelmintic treatment.
Bacteria
A total of 82 bacterial phyla were detected in FATPHs and PATPHs, which is far more than the 23 phyla identified with 16S rRNA sequencing, referencing with the 16S rRNA sequencing results by Hu et al. (4), who did not find the presence of the Gemmatimonadetes phylum in FATPHs, but only a small 0.0012% in PATPHs. Results of the present metagenomic sequencing showed a successful detection of the Gemmatimonadetes in both FATPHs and PATPHs with relative abundances (percentage of read number in bacterial community) at 0.018 and 0.023%, respectively.
Moreover, there were five phyla of Epsilonbacteraeota, Kiritimatiellaeota, Patescibacteria, WPS-2, and unclassified_k__norank_d__Bacteria identified in 16S rRNA sequencing, but not found in metagenomic sequencing. At present, the main methods for studying microbial community based on high-throughput sequencing platform are marker gene amplicon (16S rRNA, 18S rRNA, ITS, etc.) and metagenomics (57). The advantages of 16S rRNA sequencing are fast, low cost, and easy result analysis, but biases associated with PCR amplification are inevitable. On the other hand, metagenomic sequencing is generally less affected by biases but more costly (58,59). Thus, it is necessary to choose more than one sequencing method in studying one single sample.
At the genus level, the top three genera in PATPHs with relative abundance higher than 5% were Bacteroides (9.06%), Prevotella (7.39%), and Clostridium (5.52%) (Figure 4). Compared with the corresponding genera in FATPHs, Clostridium exhibited an increasing trend (FATPHs: 6.53%, p = 0.383) and Bacteroides (FATPHs: 7.07%, p = 0.663) and Prevotella (FATPHs: 6.05%, p = 0.383) showed a decreasing trend after anthelmintic treatment. Ramanan et al. (60) demonstrated that deworming treatment for mice would change their gut Clostridium and Bacteroides levels. Meanwhile, a significant abundance decrease of the Bacteroides was found in sika deer after anthelmintic treatment (30). The association between Bacteroidetes and IL-10, a key anti-inflammatory cytokine involved in the induction of immune suppression, could be established after anthelmintic treatment (61). Clostridia were known to facilitate the host immune responses due to their production of short-chain fatty acids including butyrate with anti-inflammatory properties (62,63). Mice that were infected with Trichuris muris experienced a decline in Prevotella after the removal of the infection (64). A study indicated that the changed abundance of Prevotella could drive Th17 immune responses, which were associated with the occurrence and development of many inflammatory and autoimmune diseases (65). In addition, among the other genera identified in this study with relative abundance at zero order of magnitude, Eubacterium exhibited the largest increasing trend (3.21%↑, p = 0.081) following anthelmintic treatment. The increase of Eubacterium after deworming was consistent with a study conducted in humans infected by Opisthorchis felineus (66). Eubacterium was found related with the physiology of horses, negatively correlated with salivary cortisol levels, but positively correlated with N-butyrate production (67).
Indeed, disruption of the intestinal microbes by parasite infestation has the capacity to modify the host's immune regulatory system (68). For example, a type 2 immune response to parasitic infection of Nippostrongylus brasiliensis in mice was related to the altered intestinal microbial community, especially the segmented filamentous bacteria (69). Thus, it is suspected that anthelmintic treatment of the horse botfly infection could moderate the immune responses of the Przewalski's horses by shifting their relative abundances of Bacteroides, Clostridium, Eubacterium, and Prevotella. On the other hand, Bacteroides and Prevotella were associated with plant-rich diets, which played an important role in the breakdown of indigestible fibers (70). Combined with the large number of methanogen archaea identified, anthelmintic treatment may be capable of altering the digestive ability of the Przewalski's horses.
Referencing to previous studies conducted with parasiteinfected horses based on 16S rRNA sequencing, anthelmintic treatment could change the relative abundances of Streptococcus, Lactobacillus (4), Adlercreutzia (22), and Acinetobacter (24). Metagenomic analysis in the present study already reveals additional effects of anthelmintic treatment on the intestinal bacterial community.
Eukaryota
Eukaryota is rarely studied in animal intestinal microbes. Very limited published articles on profiling the intestinal eukaryotic Frontiers in Veterinary Science | www.frontiersin.org community of different animals can be found. They were done on dogs (71), humans (72), shrimps (73), and sika deer (30). The present study is the first to determine the composition and structure of the eukaryotic community in equine animal. The top eukaryota phylum with relative abundance at one order of magnitude in PATPHs was unclassified_d__Eukaryota (60.53%) (Figure 6). It displayed a decreasing trend to 30.86% in FATPHs (p = 0.190) after anthelmintic treatment. With abundance at zero order of magnitude, PATPHs had seven phyla. They were Chordata, Streptophyta, Ascomycota, Arthropoda, Apicomplexa, Basidiomycota, and Nematoda (Figure 6). Among them, Streptophyta (10.97%↑, p = 0.190), Chordata (9.56%↑, p = 0.081), and Nematoda (5.67%↑, p = 0.663) showed large increasing trends in FATPHs after anthelmintic treatment. At the genus level, the relative abundances of all annotated genera were at one order of magnitude and lower. The 12 genera in PATPHs with abundances at one and zero orders of magnitude were Oxytricha, Stylonychia, Tetrahymena, Paramecium, Pseudocohnilembus, Ichthyophthirius, Trichomonas, Triticum, Epinephelus, Danio, Entamoeba, and Aegilops (Figure 7). Of them, five genera had their relative abundances shifted by more than 5% in FATPHs after anthelmintic treatment. They were Oxytricha (8.79%↓, p = 0.190), Stylonychia (7.95%↓, p = 0.190), Tetrahymena (5.62%↓, p = 0.190), Paramecium (5.36%↓, p = 0.190), and Trichomonas (5.31%↑, p = 0.383). A study of interventional treatment with probiotics and a low-fat diet on humans showed that the levels of the abovementioned genera except Streptophyta were reduced (74). Therefore, anthelmintic treatment may affect the digestion of the treated Przewalski's horses. Note that characteristics of the other eukaryota genera identified have not yet been studied.
Virus
So far, the intestinal viral community was just done in dogs, cats, and humans (75,76). This is the first study to determine the composition and structure of the viral community in equines.
There was only one viral phylum unclassified_d__Viruses identified in this study. Under this phylum, 70 genera were found in FATPHs and 69 in PATPHs. Eleven of the identified genera were in the unclassified group, accounting for 75.19% in FATPHs and 79.06% in PATPHs. The top three genera in PATPHs with relative abundances at one order of magnitude were unclassified_d__Viruses (26.91%), unclassified_f__Siphoviridae (25.32%), and unclassified_f__Myoviridae (17.48%). Comparing the abundances of theirs in FATPHs after anthelmintic treatment, unclassified_d__Viruses (29.33%, p = 0.081) showed an increasing trend while unclassified_f__Siphoviridae (23.76%, p = 1.000) and unclassified_f__Myoviridae (9.41%, p = 0.663) exhibited a decreasing trend. The large number of viruses annotated to unclassified indicates that there is a great opportunity to find new species in the equine gut, which is an unexplored habitat.
At the species level, 681 and 656 viruses were found in FATPHs and PATPHs, respectively. The top 10 abundant species consisted of one unidentified phage and nine uncultured viruses, making up 25.53 and 21.81% of the community in FATPHs and PATPHs, respectively. It is noted that most of viral species were classified into phage with 128 phages in FATPHs and 96 in PATPHs, occupying 71.49% of relative abundance in FATPHs and 72.98% in PATPHs, respectively. The relative abundance of phage changed more than other viral species after anthelmintic treatment.
Pathogens in FATPHs and PATPHs
There were totally 187 pathogens of 108 genera found in the samples when comparing the identified reads to the PHI database (Supplementary Table 1). The anthelmintic treatment increased the abundances of 128 pathogens and reduced those of 59 others. The top 10 pathogens based on gene abundance in PATPHs were Staphylococcus aureus (Bacteria), Salmonella enterica (Bacteria), Streptococcus pneumoniae (Bacteria), Fusarium graminearum (Eukaryota), Magnaporthe oryzae (Eukaryota), Pseudomonas aeruginosa (Bacteria), Escherichia coli (Bacteria), Cryptococcus neoformans, Listeria monocytogenes (Eukaryota), and Aspergillus fumigatus (Eukaryota) (Figure 8). Their abundances were all increased in FATPHs following anthelmintic treatment. For these pathogens, Eukaryota-related diseases have not been observed in equine animals. The methicillin-resistant Staphylococcus aureus (MRSA) is an emerging equine pathogen and is associated with a series of clinical diseases, such as septic arthritis, intravenous (jugular) catheter site infections, pneumonia cases, incisional infection, wound infection, mastitis, rhinitis, and body wall absces (77,78). Horses infected by Salmonella enterica suffer from some clinical signs, such as fever, dehydration, diarrhea, colic, and septicemia (79,80). Infection of Streptococcus pneumoniae will trigger immune response of the horse to accumulate leucocytes and cytokine (81). Pseudomonas aeruginosa is an opportunistic pathogen that is commonly recognized as a cause of endometritis in horses (82,83). Escherichia coli are common commensal bacteria found in the intestinal tract of horses; they can cause diarrhea (83,84). Casual usage of anthelmintic treatment poses a risk to the horses, known as colic (26). The common characteristics of an intestinal microbial community after antibiotic treatment include change of microbial diversity, increase of the colonization of pathogens, and development of antimicrobial resistance (85,86). Hence, the removal of horse botflies through anthelmintic treatment raised the gene abundance of the top pathogens in the studied Przewalski's horses, which could increase their risk of sickness. However, this is limited by the small sample size used in the present study. Future repeated studies are needed for confirmation.
Functional Analysis of Intestinal Microbiota Between FATPHs and PATPHs
It was shown that horse botfly expulsion by anthelmintic treatment for the studied Przewalski's horses altered their intestinal microbial community composition and structure. These changes in turn could affect the metabolism and physiology of the horses, such as microbial triggered immune responses (87), aid in regulation of energy metabolism (88,89), and synthesis of the short-chain fatty acids or amino acids (90)(91)(92). In fact, it is increasingly recognized that the metabolism of animals is significantly affected by its intestinal microbes (93). Understanding the changes in intestinal microbial function is another key step toward clarifying the effects of anthelmintic treatment on horses. Metagenomic sequencing not only can characterize the gene content of a studied sample but also can predict the functional potential of its microbial community. The following sections present the metagenomic sequencing results on the intestinal microbial functions of the FATPHs and PATPHs.
COG Functional Annotation
A total of 4,869,449 genes with a total length of 2,194,175,671 were identified from the six samples used in this study. There were 24 kinds of COG functions associated with cellular processes and signaling, information storage and processing, metabolism, and poor characterization found in FATPHs and PATPHs. The top function was Function unknown, which accounted for 30.67% in FATPHs and 31.72% in PATPHs (Figure 9). There were five functions with their percentages of read numbers over 5%. They were replication, recombination, and repair (FATPHs: 10.09%; PATPHs: 9.55%); carbohydrate transport and metabolism (FATPHs: 7.61%; PATPHs: 7.45%); amino acid transport and metabolism (FATPHs: 6.98%; PATPHs: 6.44%); cell wall/membrane/envelope biogenesis (FATPHs: 6.56%; PATPHs: 6.86%); and translation, ribosomal structure, and biogenesis (FATPHs: 6.07%; PATPHs: 5.87%) (Figure 9). Distribution of the function catalog was consistent with that of Tang et al. (55), who studied some captive and wild Przewalski's horses without any anthelmintic treatment. Their samples were similar to those of the PATPHs in the present study.
One important function of the intestinal microbes is metabolism, which converts carbohydrate and protein of dietary substrates to beneficial metabolites or alternative energy sources for the host (94). The intestinal microbes of the Przewalski's horses were found mainly to participate in carbohydrate and amino acid metabolism. The fermentation products of carbohydrate metabolism by microbes are short-chain fatty acids and gases. The three principal short-chain fatty acids detected in feces were acetate, butyrate, and propionate (95). Acetate plays an important role in regulating central appetite (96). Butyrate is capable of inducing the growth of cancer cells and colonic tumor cell lines, inhibiting mRNA expression and telomerase activity of cancer cells in human, enhancing memory recovery and formation, and preventing obesity in mice (97,98). Propionate can be directly involved in portal-brain neural communication, induction of intestinal gluconeogenesis, and positive influence of the host's metabolism (99). Amino acids are key components of human and animal nutrition, which can regulate the intestinal bacterial community composition (100,101 (Supplementary Figure 1) (Supplementary Figure 1). Although the genes annotated with metabolism did not change significantly after anthelmintic treatment, the pattern of functional species was completely different between FATPHs and PATPHs. Anthelmintic treatment had a displacement effect on the same functional microbes.
PCA analysis showed that FATPHs and PATPHs can be separated based on the COG functional genes (Figure 10). It means that anthelmintic treatment changed the functions of the intestinal microbes. LEfSe analysis depicted that four functions showed significant differences between FATPHs and PATPHs (Figure 11). Transcription (LDA = 3.16, p = 0.05) showed significant effects on FATPHs while Function unknown (LDA = 3.97, p = 0.05), post-translational modification, protein turnover, chaperones (LDA = 3.14, p = 0.05), Nuclear structure (LDA = 3.08, p = 0.05), and intracellular trafficking, secretion, and vesicular transport (LDA = 2.85, p = 0.05) had significant effects on PATPHs. These results indicated that the functional differences of intestinal microbes resulted from some essential cellular processes rather than metabolism.
CONCLUSIONS
In this study, the composition and structure of the complex intestinal microbial community of Przewalski's horses prior to and following anthelmintic treatment were identified by metagenomic sequencing for the first time. The pathogens were determined based on the gene information. The obtained sequences were mapped to known genes or pathways in COG and KEGG databases. The results indicated that anthelmintic treatment might have adverse effects on horses, which needs to be further confirmed; thus, optimization is suggested for this strategy of controlling parasite infections or search for alternative methods in the future.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material. | 6,515.6 | 2021-08-18T00:00:00.000 | [
"Environmental Science",
"Biology",
"Medicine"
] |
Condition-dependence of pheomelanin-based coloration in nuthatches Sitta europaea suggests a detoxifying function: implications for the evolution of juvenile plumage patterns
Adult-like juvenile plumage patterns often signal genotypic quality to parents. During adulthood, the same patterns often signal quality to mates. This has led to assume that adult-like juvenile plumage is a developmental consequence of sexual selection operating in adults. Many of these patterns are produced by the pigment pheomelanin, whose synthesis may help remove toxic excess cysteine. Excess cysteine is likely to arise under conditions of relatively low stress, such as those experienced by nestling birds. Thus, adult-like plumage may be advantageous for juveniles if produced by pheomelanin. In the Eurasian nuthatch Sitta europaea, juveniles are sexually dichromatic and identical to adults. Nestling nuthatches in poorer condition develop more intense pheomelanin-based feathers, indicating greater pigment production. The same is not observed in adults. This is contrary to a function related to signaling quality and instead suggests that, at least in the Eurasian nuthatch, adult-like juvenile plumage has evolved because of the detoxifying function of pheomelanin-based pigmentation. Given the prevalence of colors typically conferred by pheomelanin in juvenile plumage patterns, the detoxifying capacity of pheomelanin under low stress levels should be considered as an explanation for the evolution of both adult-like and distinctively juvenile plumage patterns.
in the adulthood. In other words, as it is understood thus far, sexual dichromatism in juveniles may never be the result of selection acting only on juveniles.
The mechanism mentioned above, however, does not fully clarify the evolution of adult-like plumage and sexual dichromatism in juveniles. It raises the question of why some species of birds have evolved distinctive juvenile plumage clearly differentiated from adults 14 . Some species even show sexual dichromatism in this distinctive plumage, e.g. the lesser kestrel Falco naumanni 15 . A specific signaling function that only operates in the first stages of development would always be more efficiently fulfilled by a distinctive juvenile plumage than by a plumage pattern that is identical to that of adults, as birds could easily recognize juvenile conspecifics by perceiving a clearly differentiated plumage pattern associated with that age 14,16 . The assumption behind a signaling function for the evolution of distinctive juvenile plumage is that it entails fewer physiological costs than the development of adult-like plumage 17 . However, this has never been demonstrated as there are no specific definitions of those costs. Therefore, the logic behind a signaling function of plumage coloration during the juvenile stage of birds appears more plausible when this trait is clearly differentiated from that of adults. There is empirical support for a role of plumage coloration in parent-offspring communication in species in which the appearance of juveniles is almost identical to that of adults (see references above). Despite this, it has sometimes been reported that the signaling function is exerted by particular plumage patches that are the only difference observed in juveniles, such as the nape of great tits Parus major (which is large and yellow in juveniles and small and white in adults 8 ).
Here I propose an alternative explanation for the evolution of adult-like and sexually dichromatic plumage in juveniles of species that are colored by melanins, the most abundant pigments in animals. One of the two main chemical forms of melanin, termed pheomelanin, is synthesized by animals by oxidizing the amino acid tyrosine in the presence of cysteine, whose sulfhydryl group is incorporated into the pigment structure 18 . Cysteine protects cells from the damaging effects of free radicals by forming part of the main intracellular antioxidant (glutathione, GSH). However, excess cysteine (i.e., a situation when levels of cysteine are higher than required for GSH and protein synthesis) produces cytotoxic free radicals because of its oxidation to the dimer cystine 19,20 . In birds, excess cysteine contributes to metabolic acidosis and a variety of associated problems such as thinning of egg shells and poor growth 21 . As the incorporation of cysteine to pheomelanogenesis is an irreversible process, the pigmentation of feathers with pheomelanin constitutes a consumption of cysteine, which may be adaptive in situations that favor excess cysteine (i.e., low levels of oxidative stress, when cysteine is less required for GSH synthesis) 22,23 . Indeed, this capacity of pheomelanogenesis to remove excess cysteine may be the adaptive benefit that has led to the evolution of pheomelanin 24 .
Thus, instead of a signaling function, the plumage of juvenile birds may have evolved because of the physiological benefits related to removing excess cysteine if pigmented by pheomelanin. In fact, distinctive juvenile plumage is usually characterized by drab chestnut and brownish colorations 25,26 , colors that are generated by pheomelanin 27 . It is not evolutionary logic to consider that the evolution of pheomelanin-based juvenile plumage patterns responds to needs for crypsis, because cryptic plumage colorations can also be achieved by eumelanin (another melanin form producing darker colors). Eumelanin is less costly to produce than pheomelanin due to the lack of cysteine consumption during its synthesis (in fact, species of birds with plumage entirely colored by eumelanin are common, while those with plumage entirely colored by pheomelanin appear to be rare) 28 . Indeed, predation risk does not seem to be important for the evolution of distinctive plumage coloration 17 . Furthermore, the juvenile plumage of birds, especially in altricial species, is developed during a period of low physical activity in which they are fed by parents. Exercise increases metabolic rate, the production of reactive oxygen species (ROS) and protein turnover, which may be detrimental in terms of oxidative stress particularly for old animals 29 . Moreover, foraging effort is also known to increase physiological stress 30 . Thus, the nestling stage may constitute a situation of relatively low stress as compared to post-fledging stages when juveniles must find food on their own with a high expenditure of energetic resources (e.g. ref. 31), and pheomelanin synthesis should be favored under such conditions 22,24 . Therefore, the detoxifying function of feathers colored by pheomelanin may represent a general explanation for the evolution of both distinctive and adult-like juvenile plumage. Similar to other pigments such as carotenoids that exert antioxidant effects but that may be toxic if in excess, with sexes differing in their susceptibility to suffering this excess 32 , males and females may also differ in their requirement of cysteine for antioxidant purposes and therefore in their susceptibility to suffer excess cysteine. As a consequence, this pheomelanin-based mechanism may also explain the evolution of sexual dichromatism in juveniles.
My aim here is to partially test this hypothesis using the Eurasian nuthatch Sitta europaea, a small passerine bird, as a model. The first plumage of Eurasian nuthatches is identical to that of adults, so that no significant changes in plumage coloration occur with age 2 . Adult nuthatches are sexually dichromatic, with males displaying chestnut (dark orange) flank body feathers that are darker than those in females 33 (Fig. 1). The color of these sexually dichromatic feathers is due to their relatively high content of the benzothiazole moiety of pheomelanin (104.1 μg thiazole-2,4,5-tricarboxylic acid per mg feather) 27 , and sexual differences in color are already observed in nestlings (see Methods). Thus, Eurasian nuthatches display plumage colored by pheomelanin that does not change with age and is sexually dichromatic in juveniles (Fig. 1), hence representing a good study model for the hypothesis proposed here.
If pheomelanin synthesis for plumage pigmentation has a detoxifying function by removing excess cysteine, this should be reflected in the physical condition of birds. Therefore, I quantified the expression of the chestnut coloration of flank feathers (a predictor of pheomelanin content) 27 and tested for its association with the body condition of both nestling and adult Eurasian nuthatches of both sexes. If the juvenile plumage of nuthatches is a consequence of sexual selection operating in adults and signals individual quality as pheomelanin-based coloration does in adults in other species 34 , it is predicted that the color expression of the flank feathers of nestlings should increase with their body condition because sexually selected traits are expected to show heightened condition-dependence 35 . This prediction assumes that the juvenile plumage of Eurasian nuthatches is a developmental consequence of sexual selection in adults (see above). Thus, this also implies that the positive association between pheomelanin-based coloration and body condition should be observed in adult nuthatches. If, by contrast, the juvenile plumage of Eurasian nuthatches evolves because of its benefit of removing excess cysteine, it is predicted that any association between pheomelanin-based color expression and body condition should be observed in nestling but not in adult nuthatches. In the latter case, birds in poorer condition would be in greater need of cysteine removal by producing pheomelanin and this would lead to a negative association between the color expression of flank feathers and the body condition of nestlings. The opposite is also possible because a greater production of pheomelanin would induce a better condition in nestlings, similar to the explanation for the possitive association between the expression of pheomelanin-based coloration and survival probability in adult barn swallows 23 . A negative association between color expression and body condition in nestlings, however, would preclude a signaling role of the pheomelanin-based plumage of juvenile nuthatches. The detoxifying hypothesis for pheomelanin-based juvenile plumage also implies that nestlings have lower oxidative stress levels than adults overall (see above). Thus, I compared the levels of reduced and oxidized glutathione, the main intracellular antioxidant 36 , between adult and nestling nuthatches.
Methods
All methods were carried out in accordance with relevant guidelines and regulations in Spain. This study received approval by the Bioethics Subcommittee of the Spanish National Research Council (CSIC) on 23th February 2015, and was conducted with the authorization #06-04-15-227 by local authorities (Consejería de Agricultura y Pesca y Desarrollo Rural, Junta de Andalucía).
Field methods.
The study was carried out in a population of Eurasian nuthatches breeding in nestboxes during two consecutive breeding seasons (April-May 2015 and 2016) in an extensive agro-ecosystem (Iberian dehesa) mainly composed of scattered holm oaks Quercus ilex and cork oaks Quercus suber at 450 m above sea-level in the Natural Park of Sierra Norte de Sevilla, southern Spain (37°47′N, 06°04′W). Frequent checks of nestboxes provided data on dates of clutch initiation and clutch size for all breeding pairs. Adults were captured and banded with numbered rings 12-15 days after hatching. I weighed the adults with a portable electronic balance to the nearest 0.1 g and measured their tarsus length to the nearest 0.01 mm with a digital calliper as a measure of body size. I took the same measurements on nestlings at 17 days of age (nestlings fledge at an age of about 21 days in the study area). In both adults and nestlings, I plucked 5-6 chestnut flank body feathers and stored them in the dark until measurements were made. In adults, I also plucked 5-6 breast and undertail body feathers as these plumage patches also display the same chestnut coloration (although breast feathers are lighter and actually cream colored) as flank feathers (these feathers are not fully developed in nestlings at day 17, so only adults were sampled for breast and undertail feathers; Fig. 1). I also took a small volume of blood from the brachial vein of both adults and nestlings for molecular sexing, separating cells from plasma by centrifugation and storing them at −80 °C until the analyses. In total, 24 adult Eurasian nuthatches (11 males and 13 females) belonging to 17 breeding pairs, and 38 nestlings (15 males and 23 females) belonging to 18 breeding pairs, were measured and sampled for feathers. The term 'nestling' is used here to refer to a bird developing its first plumage in the nest, while the term 'juvenile' refers to a bird in their first plumage independently of whether it has already left the nest or not.
I calculated body condition in both nestlings and adults as the residuals of body mass regressed against tarsus length, a measure that is often a good predictor of body fat content in birds 37 . Previous studies on juvenile European nuthatches showed that a similar index of residual body mass (mass divided by tarsus length) was higher in resident birds than in birds that did not secure a territory, the former also achieving higher local survival during some periods of the year than the latter 38 . This indicates that residual mass is a body condition index that is biologically relevant for Eurasian nuthatches, as it reflects survival prospects.
Analysis of pheomelanin-based color expression.
To analyze the color expression of feathers, I used an Ocean Optics Jaz spectrophotometer (range 220-1000 nm) with ultraviolet (deuterium) and visible (tungsten-halogen) lamps and a bifurcated 400 micrometer fiber optic probe. The fiber optic probe both provided illumination and obtained light reflected from the sample, with a reading area of ca. 1 mm 2 . Feathers were mounted on a light absorbing foil sheet (Metal Velvet coating, Edmund Optics, Barrington, NJ) to avoid any background reflectance, such that they resembled the natural appearance of the feather patch. Measurements were taken at a 90° angle to the sample. All measurements were relative to a diffuse reflectance standard tablet (WS-1, Ocean Optics, Dunedin, FL), and reference measurements were frequently made. An average spectrum of five readings on different points of the feathers was obtained for each bird, removing the probe after each measurement. Reflectance curves were determined by calculating the median of the percent reflectance in 10 nm intervals (Fig. 2).
I summarized spectral data as a measure of total brightness (i.e., the summed reflectance across the 300-700 nm range), as this is the best predictor of total levels of pheomelanin in feathers, with lower values (i.e., darker colors) denoting higher color intensity and higher pheomelanin content 23,39 . The concentration of pheomelanin in the flank feathers of adult Eurasian nuthatches is about 9 times higher than their eumelanin concentration [104.1 μg of the benzothiazole moiety of pheomelanin per mg feather vs. 11.9 μg of the 5,6-dihydroxyindole-2-c arboxylic acid (DHICA) unit of eumelanin per mg feather] 27 , so variation in the color expression of flank feathers is expected to mainly reflect variation in their pheomelanin content. It must be noted that the slope of reflectance regressed against wavelength has been proven to be the best predictor of the concentration of melanins and the best descriptor of the perceived hue (color) in feathers or hairs across different species, with higher slopes denoting lighter colors and higher relative concentration of the benzothiazole moiety of pheomelanin relative to the DHICA unit of eumelanin 27 . Within a single species (i.e., considering the same hue, such as the chestnut color of Eurasian nuthatch flank feathers), however, variation in hue is negligible as compared to the variation between the large classes of hues that are observed across species 27 . Consequently, variation in plumage color between species is mainly variation in hue (slope), while variation within species is mainly variation in intensity (brightness). Feather brightness variation within a species displaying a color hue generated by pheomelanin, such as the Eurasian nuthatch, thus reflects variation in pheomelanin content, with darker colors denoting higher content. Despite brightness being the most likely predictor of plumage color variation among nuthatches, variation in slope may still reflect some part of the variation in color intensity, and therefore I considered both brightness and slope as alternative measures of color expression in the analyses. It must also be noted that the variation in slope that may exist within species would mainly reflect variation in color intensity like the measure of brightness, meaning that the interpretation of slope variation within species must be opposite to the interpretation of the variation in this parameter between species (i.e., higher slopes are indicative of lower pheomelanin content within species), when it mainly reflects variation in color hue 27 . Sex determination in adults and nestlings. The sex of adult Eurasian nuthatches can be determined on the basis of plumage characteristics. In addition to darker chestnut flank feathers, males have darker and wider black lateral bands on the head 33 . Thus, I used these characteristics to sex adult nuthatches when they were captured at the nest, before plucking feathers. To ensure that this visual classification was correct, I extracted DNA from the blood of 20 adults (9 males and 11 females) using the ISOLATE II Genomic DNA kit (Bioline, London, UK) and amplified the CHD gene with a polymerase chain reaction (PCR) with electrophoresis and using the primer pair CHD1F/CHD1R (5′-TATCGTCAGTTTCCTTTTCAGGT-3′ and 5′-CCTTTTATTGATCCATCAAGCCT-3′) 40 . PCR products resulted in two bands, corresponding to Z (∼500 bp) and W (∼350 bp) chromosomes, amplified in all birds that had been visually sexed as females, while only the band corresponding to the Z chromosome was amplified in all birds that had been sexed as males (the electrophoresis gel view is not shown for clarity). This indicates that the sex of adult nuthatches had been correctly determined using plumage characteristics.
Once I determined that the primer pair CHD1F/CHD1R can be employed to correctly sex Eurasian nuthatches, I used it with quantitative real-time PCR combined with melting curve analysis to determine the sex of nuthatch nestlings, performing reactions with SYBR Green I Master in a LightCycler 480 System (Roche, Basel, Switzerland) 41 . There was total congruence between the results of this method and those for 18 adult nuthatches that had been previously sexed by conventional PCR with electrophoresis, as a gel view for real time PCR products amplified by the primer pair CHD1F/CHD1R showed that males and females were clearly differentiated by the Z chromosome band in males and the W chromosome band in females (the Z chromosome band was not visible in females, probably because of amplification competence; Fig. 3a). The melting curve analyses made after real-time PCR 41 differentiated males and females through a peak of melting temperature at 81 °C in males and a peak at 78 °C in females (Fig. 3b). Therefore, I used this procedure to determine the sex of nestling nuthatches after extracting genomic DNA from their blood as previously described.
With the information on the sex of nestlings, I then tested whether their adult-like plumage shows sexual dichromatism, as in adults. An ANOVA resulted in a significant effect of sex explaining the brightness of the flank feathers of nestling nuthatches [F 1,32 = 5.09, P = 0.031; a general linear mixed model indicated that nest identity had no significant effect when it was included as a random factor (P = 0.305), so nest identity was removed to increase the degrees of freedom of the model], with males having lower brightness values (i.e., darker color; mean ± 95% confidence interval: 342.27 ± 50.87) than females (254.47 ± 60.81; Fig. 4). The color slope values for male nestlings also tended to be lower than in females, but the difference was not significant (F 1,32 = 2.89, P = 0.099; Fig. 4). Thus, sexual dichromatism in Eurasian nuthatches is already developed during their first plumage as shown by nestlings, in which measurement of the brightness of flank feathers can be used for sex determination.
Lastly, I tested whether the sex of nestling nuthatches can be determined by visually perceiving the color intensity of their flank feathers as in adults 33 (see also above). Comparing my assignments of sex to nestlings by visually inspecting the color of their flank feathers (with birds in hand, before plucking feathers) with the results of molecular sexing, there was congruence in 89% of cases. Therefore, the sex determination of nestling Eurasian nuthatches can be made by a visual assessment of the color intensity of their chestnut flank feathers, although less reliably than in adults (see above).
Measurement of GSH levels in erythrocytes.
I determined total GSH (tGSH) following a procedure whose application to bird samples has been previously described, e.g. ref. 42. To determine oxidized GSH (GSSG) levels, 8 μl of 2-vinylpyridine were added to an aliquot (400 μl) of the supernatant obtained for tGSH assessment to promote GSH derivatization, e.g. ref. 43. The mixture was then centrifuged (3500 g for 10 min), and the change in absorbance of the supernatant was assessed at 405 nm. Reduced GSH levels were calculated by subtracting GSSG levels from tGSH levels. The ratio GSH:GSSG was used as an index of systemic oxidative stress. Statistical analyses. I used general linear models (GLM) to evaluate the contribution of body condition to explaining variation in the color expression (brightness or slope; response variables) of chestnut flank feathers in nestling and adult Eurasian nuthatches. Sex was added to the models as a fixed factor. The interaction between sex and body condition was also considered to evaluate the possibility that pheomelanin synthesis is differentially affected by body condition in males and females. As I was simultaneously conducting an experiment manipulating predation risk in the same nests included in this study (each nest was either control or increased risk), I added the experimental treatment (control vs. experimental) as a fixed factor to the models to control for this effect. In the models for nestlings, I first conducted a general linear mixed model with the same terms as described above (except for experimental treatment) but adding nest identity as a random factor. However, the effect of nest identity was not significant in either the model for brightness (P = 0.430) or in the model for slope (P = 0.256). Thus, this was not subsequently considered in order to increase the degrees of freedom of the models. I used t-tests to compare the mean plumage brightness and slope and the GSH:GSSG ratio of all adults and nestlings.
In all models, a backwards stepwise procedure was used to remove nonsignificant terms, using a P-value of 0.1 as a threshold to abandon the model. Inspections of residuals confirmed that the normality assumption was fulfilled.
Data Availability. The datasets generated during and analysed during the current study are available from the corresponding author on reasonable request.
Results
In the model for the brightness of nestling flank feathers, the interaction between sex and body condition was not significant ( Fig. 5) and a marginally non-significant effect of sex (least squares mean ± 95% confidence interval: males: 0.019 ± 0.005, females: 0.025 ± 0.004; F 1,31 = 3.77, P = 0.061). Thus, nestling nuthatches in better condition produced lower amounts of pheomelanin to color their flank feathers.
In adults, only sex remained in the final models for flank feather brightness (46.5% variance; F 1,21 = 18.23, P < 0.001; mean ± 95% confidence interval: males: 285.01 ± 76.36, females: 502.08 ± 73.11; body condition: Lastly, there was a difference in the mean reflectance spectra of flank feathers of adults and nestlings of both sexes, as those from nestlings were indicative of a color darker than those from adults (Fig. 2). This resulted in significantly lower values of both brightness (t = 2.48, df = 55, P = 0.016) and slope (t = 3.58, df = 55, P < 0.001) in nestlings. Adults and nestlings also tended to differ in oxidative stress levels as reflected by the GSH:GSSG ratio. The difference in this variable did not reach significance (t = 1.57, df = 52, P = 0.122), but indicated a tendency toward higher ratios (i.e., lower oxidative stress) in nestlings (16.40 ± 3.87) than in adults (11.42 ± 5.05).
Discussion
The color intensity of the chestnut flank feathers of Eurasian nuthatch nestlings is negatively associated with their body condition, meaning that nestlings in poorer condition deposit greater amounts of pheomelanin in their feathers. By contrast, the color intensity of flank feathers was not associated with body condition in adult nuthatches. Since the plumage of Eurasian nuthatch nestlings is sexually dichromatic, as shown here, and identical to the plumage of adults, these results preclude the possibility that adult-like plumage and sexually dichromatism have evolved in this species as a signal of individual quality in either nestlings or adults. This is because sexually selected traits usually show heightened condition-dependence. Even if this is not a general rule and exceptions frequently arise 35 , the opposite (i.e., a negative relationship between trait expression and body condition), as found here, is contrary to a role of plumage coloration in signaling quality. It thus suggests that pigmenting feathers with pheomelanin is physiologically favored when juvenile Eurasian nuthatches are in poor condition, in accordance with a possible detoxifying function of pheomelanin synthesis (Fig. 6).
All hypotheses considered thus far provide two explanations for the evolution of adult-like plumage in juvenile birds. On the one hand, these plumage patterns may be signals of quality that are directed at potential mates in adulthood and at parents during the nesting period, or to conspecifics shortly after fledging 4 . There is empirical evidence of a role of plumage coloration in parent-offspring communication in species with adult-like juvenile Figure 5. Relationship between the body condition of Eurasian nuthatch nestlings (residuals of body mass regressed against tarsus length) and brightness (left axis, red color) and slope (right axis, green color) of their chestnut flank feathers. The residual figures of the response variables (i.e., partial effects after applying a GLM without sex and experimental treatment in the case of brightness and without sex in the case of slope) are shown. Lower brightness and slope values indicate more intense plumage coloration and higher pheomelanin content in feathers. plumage 7-12 , but to my knowledge, plumage coloration in these cases always plays a signaling role in adults as well. The second explanation is that plumage coloration signals quality to potential mates in adulthood and the genetic basis of this sexual selection process is already expressed in the first plumage of birds, leading to adult-like juvenile plumage. It is thus assumed in both cases that adult-like plumage in juveniles is a developmental consequence of selection operating in adults. The hypothesis that I propose here, supported by results from Eurasian nuthatches, implies a reversed conclusion: plumage in adults is a developmental consequence of selection operating in juveniles (Fig. 6). If pigmenting feathers with pheomelanin is beneficial under relative low stress levels 23 , it is expected that selection acts against those juveniles in poor condition that do not remove excess cysteine by producing large amounts of pheomelanin. Low stress conditions may indeed prevail in the nestling stage as compared to the adulthood. The fact that the expression of pheomelanin-based plumage coloration was condition-dependent in nestlings but not in adult nuthatches suggests that natural selection is more likely acting on juveniles than on adults, as body condition measured during the breeding season frequently predicts the probability of survival in birds, e.g. ref. 44.
It may be argued that body condition in adult nuthatches was not measured at the time when feathers were developing, as in nestlings, as plumage molt in Eurasian nuthatches takes place after the breeding season in August-September 2, 33 . This means that plumage coloration in breeding adult nuthatches may not reflect their body condition at the time that feathers were developed. However, it is unlikely that plumage coloration in adult nuthatches was unaffected by their body condition because this was measured during breeding and not at molt. Mating in Eurasian nuthatches takes place in winter, several months after molt 45 , and plumage coloration might play a role in mating in this species as indicated by the sexual dichromatism (which is the result of sexual selection; 3 ) observed in the chestnut feathers of flanks and undertail (ref. 33, this study). In other words, if the role that plumage coloration may have in sexual selection in adult nuthatches is related to its capacity to signal body condition, an association between plumage coloration and body condition should be observed in adults outside of the molting period. The results of this study thus suggest that pheomelanin-based plumage coloration has a Figure 6. Schematic representation of the detoxifying hypothesis for the evolution of adult-like juvenile plumage coloration based on pheomelanin, exemplified in nuthatches. Solid arrows represent causal effects, and dotted arrows represent relationships whose sign is depicted with a symbol. Juveniles probably tend to suffer low relative oxidative stress as compared to adults, making juveniles more prone to excess cysteine. As excess cysteine is toxic, this may negatively affect the body condition of birds, thus creating a higher need for cysteine removal. In the end, this may lead to negative associations between body condition and the expression of pheomelanin-based plumage coloration, as juvenile birds in poor condition would have a higher need to remove cysteine through the production of pheomelanin for its deposition in feathers. In contrast, adults may have a greater ability to achieve cysteine homeostasis, which would preclude any association between body condition and the expression of pheomelanin-based plumage coloration as found in the present study. As a consequence, plumage in adults may be a developmental consequence of selection operating in juveniles. The same need to remove excess cysteine through pheomelanin synthesis may also explain the evolution of distinctive juvenile plumage coloration, which is commonly produced by pheomelanin. The nuthatch illustration owns to L. Shyamal (https://commons.wikimedia.org/w/index.php?curid=4253041) and is covered by a CC BY 3.0 license (https://creativecommons.org/licenses/by/3.0/). signaling role in adult Eurasian nuthatches that is not related to signaling variation in body condition, and in fact condition-dependence of trait expression is not a requirement for honest signaling 46,47 . This may be a secondary function evolved from the trait under selection in nestlings.
The association between body condition and the intensity of pheomelanin-based plumage coloration in Eurasian nuthatch nestlings found here is better explained by a detoxifying function of pigmenting feathers with pheomelanin than by a signaling role, given the negative sign of the association. The possibility that pheomelanin-based coloration in nuthatch nestlings represents a signal of need to parents is unlikely, because the expression of this type of signal (such as begging behavior) should be dynamic (i.e., nestlings do not signal the same need for resources continuously) while pheomelanin-based plumage coloration is relatively constant until the feathers are molted, and because parents are expected to select their best offspring albeit being sensitive to their needs 48 . In fact, all studies show that parents favor nestling coloration indicating high quality, not low quality [7][8][9][10][11][12] . Therefore, pheomelanin-based plumage coloration may have evolved in Eurasian nuthatches because of the possible benefits conferred to juveniles by producing pheomelanin. Juveniles are probably exposed to low stress levels during the nesting stage, and a lack of change in the expression of genes that control pheomelanin synthesis 49 with the age of birds would then lead to the development of the same plumage pattern in both juveniles and adults. Indeed, it has recently been shown that the expression of a gene coding for a transporter that pumps cysteine out of melanosomes and thus avoids cysteine accumulation and excess in melanocytes (CTNS) increases with food abundance (i.e., availability of dietary cysteine) in nestling gyrfalcons Falco rusticolus 50 .
As expected, molecular analyses showed that Eurasian nuthatch nestlings display the same sexual dichromatism in pheomelanin-based plumage coloration as in adulthood. Another question is, therefore, why is sexual dichromatism already present in the first plumage of birds when sexual selection is not yet operating. I hypothesized that males and females may differ in their requirement to remove excess cysteine as seems to occur in other color traits generated by different pigments 32 . This may lead to a different degree of condition-dependence in male and female nestlings. However, the effect of body condition on the intensity of pheomelanin-based coloration was not dependent on sex in nuthatch nestlings. Instead, sexual dichromatism in nestlings may be the result of genes being expressed in all developmental stages and under sexual selection only in adults 4 . Thus, it may be suggested that, in the Eurasian nuthatch, the expression of pheomelanin-based plumage in adults is a developmental consequence of natural selection acting on this trait in juveniles, while sexual dichromatism in pheomelanin-based plumage in juveniles is a developmental consequence of sexual selection acting on this trait in adults.
These findings may not only be useful for understanding the evolution of plumage coloration in species in which juveniles are identical to adults, but also the evolution of juvenile plumage that is distinctively different from adult plumage (Fig. 6). When plumage coloration experiences changes with age, the first plumage is more rich in chestnut and brown colors than the definitive plumage 25,26 , and these colors are characteristic of pheomelanin 27 . Distinctive juvenile plumage thus seems to be generally more pheomelanic than adult plumage, and although it has been suggested that it evolves because the signaling needs of juveniles and adults differ 14,16,17,51,52 , it is simultaneouly assumed that the development of distinctive juvenile plumage entails fewer physiological costs than the development of adult-like plumage 17 . The latter is not clear in view of the evidence for a higher content of pheomelanin in juvenile than in adult plumage, as producing pheomelanin represents a consumption of an important antioxidant resource (i.e., cysteine/GSH) and therefore, developing pheomelanin-based plumage may entail higher physiological costs than developing plumage pigmented by eumelanin (the other main form of melanin, which is synthesized by oxidizing tyrosine without the involvement of cysteine) or unmelanized plumage 24 . These costs, however, depend on the prevailing environmental oxidative stress, as the potential capacity of pheomelanin synthesis to remove cysteine may be beneficial under low stress levels, when a toxic excess of cysteine is most likely to occur 28 . The relatively low physical activity of birds during the nesting stage as compared to adults may mean that nestlings experience low stress levels, which may favor the development of pheomelanin-based plumage. Consistent with this, I found a tendency of nestling Eurasian nuthatches to have lower systemic oxidative stress levels (higher GSH:GSSG ratio) than adults, even without controlling for several factors potentially affecting the levels of stress to which adults are exposed (e.g., foraging effort, predation risk, etc). The detoxifying capacity of pheomelanin-based plumage should therefore be considered, alternatively to a signaling function, as the adaptive benefit that may lead to the evolution of both adult-like and distinctive juvenile plumage in birds. Comparative studies of species differing in the expression of pheomelanin-based plumage coloration during the juvenile stage and in life history characteristics that may influence oxidative stress should now be conducted to test the general validity of this hypothesis. | 8,010.8 | 2017-08-22T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
“When ‘Bad’ is ‘Good’”: Identifying Personal Communication and Sentiment in Drug-Related Tweets
Background To harness the full potential of social media for epidemiological surveillance of drug abuse trends, the field needs a greater level of automation in processing and analyzing social media content. Objectives The objective of the study is to describe the development of supervised machine-learning techniques for the eDrugTrends platform to automatically classify tweets by type/source of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis- and synthetic cannabinoid–related tweets. Methods Tweets were collected using Twitter streaming Application Programming Interface and filtered through the eDrugTrends platform using keywords related to cannabis, marijuana edibles, marijuana concentrates, and synthetic cannabinoids. After creating coding rules and assessing intercoder reliability, a manually labeled data set (N=4000) was developed by coding several batches of randomly selected subsets of tweets extracted from the pool of 15,623,869 collected by eDrugTrends (May-November 2015). Out of 4000 tweets, 25% (1000/4000) were used to build source classifiers and 75% (3000/4000) were used for sentiment classifiers. Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM) were used to train the classifiers. Source classification (n=1000) tested Approach 1 that used short URLs, and Approach 2 where URLs were expanded and included into the bag-of-words analysis. For sentiment classification, Approach 1 used all tweets, regardless of their source/type (n=3000), while Approach 2 applied sentiment classification to personal communication tweets only (2633/3000, 88%). Multiclass and binary classification tasks were examined, and machine-learning sentiment classifier performance was compared with Valence Aware Dictionary for sEntiment Reasoning (VADER), a lexicon and rule-based method. The performance of each classifier was assessed using 5-fold cross validation that calculated average F-scores. One-tailed t test was used to determine if differences in F-scores were statistically significant. Results In multiclass source classification, the use of expanded URLs did not contribute to significant improvement in classifier performance (0.7972 vs 0.8102 for SVM, P=.19). In binary classification, the identification of all source categories improved significantly when unshortened URLs were used, with personal communication tweets benefiting the most (0.8736 vs 0.8200, P<.001). In multiclass sentiment classification Approach 1, SVM (0.6723) performed similarly to NB (0.6683) and LR (0.6703). In Approach 2, SVM (0.7062) did not differ from NB (0.6980, P=.13) or LR (F=0.6931, P=.05), but it was over 40% more accurate than VADER (F=0.5030, P<.001). In multiclass task, improvements in sentiment classification (Approach 2 vs Approach 1) did not reach statistical significance (eg, SVM: 0.7062 vs 0.6723, P=.052). In binary sentiment classification (positive vs negative), Approach 2 (focus on personal communication tweets only) improved classification results, compared with Approach 1, for LR (0.8752 vs 0.8516, P=.04) and SVM (0.8800 vs 0.8557, P=.045). Conclusions The study provides an example of the use of supervised machine learning methods to categorize cannabis- and synthetic cannabinoid–related tweets with fairly high accuracy. Use of these content analysis tools along with geographic identification capabilities developed by the eDrugTrends platform will provide powerful methods for tracking regional changes in user opinions related to cannabis and synthetic cannabinoids use over time and across different regions.
Introduction
To design effective prevention, intervention, and policy measures, public health professionals require timely and reliable information on new and emerging drug use practices and trends [1][2][3]. There is a growing recognition that user-generated content available through Web-based and social media platforms such as Twitter, can be used as a rich data source of unsolicited and unfiltered self-disclosures of substance use and abuse behaviors. Such data could be used to complement and broaden the scope of existing illicit drug use monitoring systems by enhancing their capacity for early identification of new trends [3][4][5][6].
Twitter is a microblogging service provider and social network platform that was launched in 2006. Currently, Twitter reports 310 million monthly active users [7] that generate over 500 million tweets per day [8]. Prior research has demonstrated that Twitter can be a useful tool for infodemiology studies of very diverse public health issues [9][10][11][12]. Furthermore, the US Twitter population is young and ethnically diverse, which makes analysis of Twitter data particularly suitable for drug abuse surveillance because young adults display the highest rates of drug use behaviors [13].
Because of the high volume of data generated by Twitter users and availability of geographic information, analysis of tweets can help identify geographic and temporal trends [14][15][16][17]. The content of tweets, although brief and limited to 140 characters (with some recent relaxation of this limit), can be used to extract information on user attitudes and behaviors related to drug use [15,16,[18][19][20][21][22]. Prior research indicates that the ability to separate personal communications from other types of communications such as official/media or retail-related tweets might help reduce the "noise" in social media research and increase the quality of the data for epidemiological surveillance [23,24]. Sentiment analysis is another approach to content analysis of social media data that seeks to understand the opinions (positive, negative, or neutral) expressed regarding selected topics.
Several prior studies used manual coding to classify cannabis, alcohol, and other drug-related tweets by sentiment [15,18,20,21] and source [15,21]. However, such studies, because they relied on manual coding, were limited to the analyses of relatively small samples of tweets. Manual coding is a labor intensive and time consuming process, and its wider application to social media data is human-resource intensive and hence slow, expensive, and difficult in particular for the purpose of identifying emerging trends in real-time. Automation of content analysis tasks would provide powerful tools to examine temporal and geographic trends not just in terms of general tweeting activity [14][15][16][17], but also in terms of the types of communications and opinions expressed in such tweets (eg, how the opinions expressed in tweets in relation to emerging cannabis products change over time and vary across different states and regions).
Although several prior studies reported on the development of automated approaches to analyze tobacco and ecigarette-related tweet content [25,26] and to identify adverse effects associated with medical use of pharmaceutical drugs [27,28], there have been very few attempts to apply automated content analysis techniques to analyze drug abuse-related tweets [29]. This lack of research is partially related to the fact that drug-related content adds another layer of ambiguity and difficulty in the development of automated techniques because of pervasive use of slang terminology and implied meanings [30,31]. For example, the sentiment lexicon that generally conveys negative meaning in its conventional uses (eg, "bad," "wasted," "faded," "fucked up") could express positive sentiment when used in drug-related tweets that describe desired effects of getting intoxicated and high (eg, "I wanna mad amounts of blunts and let's get faded"; "I get fucked up on this shit, I drink lean and smoke dabs every day"). For this domain-specific usage and meanings of sentiment words (where "bad" comes to mean "good," such as in the case of being "faded" or "fucked up"), traditional approaches that use sentiment lexicons (eg, Valence Aware Dictionary for sEntiment Reasoning (VADER) [32]) may not perform well, and machine learning techniques, trained using manually coded data, could increase the accuracy of sentiment identification in drug-related tweets.
The study builds on interdisciplinary collaboration that combines drug abuse and computer science research to develop eDrugTrends, a highly scalable infoveillence platform for real-time processing of social media data related to cannabis and synthetic cannabinoid use. Development of eDrugTrends platform is based on previous research and infrastructure created by our research team, including Twitris (for analysis of Twitter data) [33][34][35][36] and PREDOSE (for analysis of Web forum data) [37][38][39].
The key goal of this study is to describe the development and performance of machine learning classifiers to automatically identify tweets by the source/type of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis-and synthetic cannabinoid-related tweets. Because prior research identified distinct linguistic and sentiment patterns in personal communication tweets compared with tweets generated by organizational entities [15,23], the study also tests an innovative approach that integrates sentiment and source classification to examine sentiment identification in personal communication tweets.
Data Collection
The eDrugTrends platform [14,15] was used to collect and filter Twitter data available through Twitter's steaming Application Programming Interface. eDrugTrends filters out non-English language tweets and uses keywords and blacklist words to extract tweets of interest. Keywords related to cannabis products (cannabis in general, marijuana edibles, marijuana concentrates) and synthetic cannabinoids were selected using prior research, media publications, and social media discussions of relevant terms [24]. To increase the accuracy of collected tweets, ambiguous slang terms (eg, blunt, spice) were combined with keywords indicating drug usage (eg, smoke/smoked/smoking). In addition, a "blacklist" of words was used to exclude collection of irrelevant tweets (eg, Emily Blunt, pumpkin spice latte) [14,15]. Performance of selected keywords was continuously monitored to identify emerging new uses, contexts, and meanings of slang terminology. The eDrugTrends platform is a real-time data collection system that initiated cannabis-and synthetic cannabinoid-related Twitter data collection in November 2014.
The Wright State University institutional review board reviewed the protocol and determined that the study meets the criteria for Human Subjects Research exemption 4 because it is limited to publicly available tweets. Tweets used as examples were modified slightly to ensure the anonymity of Twitter users who had posted them.
Manual Coding
Manual coding was conducted to develop a labeled data set to be used as a "gold standard" for machine learning classifiers. First, 3 drug abuse researchers or "domain experts" (RD, FL, RC) conducted preliminary "open" coding [40] of several batches of 200-300 tweets to develop and refine the coding rules for source (Multimedia Appendix 1) and sentiment classification (Multimedia Appendix 2). Next, to assess intercoder reliability, a random subsample of 300 tweets was selected from a batch of 3000 tweets that were randomly extracted from eDrugTrends database of tweets collected between May and July of 2016. Reliability subsample was coded independently by the first and third authors using QDA Miner [41]. Krippendorff's Alpha statistic was used to assess intercoder reliability [42]. Coding of personal communication (K Alpha = 0.84) and media-related communication (K Alpha = 0.83) tweets had substantial agreement, while agreement was moderate for retail-related tweets (K Alpha = 0.64). Coding of positive (K Alpha = 0.69) and negative sentiment (K Alpha = 0.68) had an adequate level of agreement. However, coding of neutral/unidentified category of tweets achieved a lower level of intercoder agreement (K Alpha= 0.49), which could be explained by the fact that this category was a more amorphous and eclectic group.
Development of the manually labeled data set involved several phases of coding conducted by the first and third authors. To obtain a more balanced dataset, less common categories (eg, negative or retail-related tweets) were purposefully oversampled (for more details, see Multimedia Appendix 3). Oversampling of underrepresented categories is important in order to obtain a more balanced data set for development of machine learning classifiers, given that significant under sampling of a certain category in the training data can directly impact the quality of classification [26]. To reach a sample size of 4000 tweets for the manually labeled data set for machine learning, more than 8000 tweets were manually reviewed and filtered using QDA Miner [41]. The tweets for manual coding were extracted from the pool of 15,623,869 tweets that were collected by eDrugTrends between May and November 2015.
The sample of 4,000 manually labeled tweets was split into two subsamples-1000 were used to train source classifier, and 3000 were allocated for sentiment classification. Information on the manually labeled tweet numbers by category for each subsample is provided in Multimedia Appendix 4.
Machine Learning
Because the study aimed to integrate source and sentiment classification by focusing on sentiment in personal communication tweets only, source classification can be seen as a preprocessing step that is done before sentiment classification. First, 1000 tweets were used to train a source classifier (Multimedia Appendix 4). Next, for the remaining 3000 tweets (Multimedia Appendix 4), the source classifier is applied to filter out the media-and retail-related tweets, and then train the sentiment classifiers using only the personal communication tweets.
Source Classification Models
Development of source classifiers focused only on tweets with URLs. Because all media-and retail-related tweets contained URLs, tweets without URLs could be automatically classified as belonging to the personal communication category. To select 1000 tweets with URLs for source classifier, approximately equal numbers of tweets were randomly sampled from each category-330 official/media-related, 340 retail-related, and 330 tweets that contain URLs from personal communication.
Summary information about the machine learning classification models used in the study is presented in Textbox 1. Source classification tested 2 approaches: Approach 1 used short URLs as they appear in tweets, and Approach 2 expanded URLs to their original version and used unigrams and bigrams obtained from unshortened URLs as features in machine learning (Textbox 1 A). Twitter automatically shortens all links to save character space [43], and such shortened links typically do not contain identifiable words. In contrast, expanded URLs frequently contain useful information that could help improve tweet classification accuracy. Examples of commonly occulting words identified in expanded URLs are presented in Multimedia Appendix 5.
First, performance of source classifiers was assessed for multiclass classification (media, retail, personal). Next, the best performing machine learning algorithm in multiclass classification was selected to assess 3 binary classification tasks: (1) media versus the remaining tweets, (2) retail versus the remaining tweets, and (3) personal communication tweets versus the remaining tweets (Textbox 1 A).
Sentiment Classification Models
Sentiment classification tested 2 approaches: Approach 1 applied sentiment classification to all tweets, regardless of their source/type, using all 3000 manually labeled tweets (1292 positive, 921 negative, 787 neutral/unidentifiable), and Approach 2 applied sentiment classification to tweets identified as personal communications only, excluding retail and media-related tweets. For this approach, the sample of 3000 tweets was first processed using the best performing source classifier (developed for this study) to identify personal communication tweets, which resulted in a sample of 2633 tweets (Textbox 1 B). The sample of 2633 tweets contained 1157 that were manually labeled as positive, 850 negative, and 626 neutral/unidentifiable. (Note that these numbers are different from the information presented in Multimedia Appendix 4 because extraction of 2633 personal communication tweets was performed using source classifier, while Multimedia Appendix 4 information is based on manual coding).
Performance of sentiment classifiers was examined for multiclass (positive, negative, neutral) and for binary classification tasks. Binary classification focused on positive versus negative tweets to examine how well sentiment classifiers performed on reliable categories (as determined by reliability assessment), excluding neutral/unidentifiable group that reached a low level of agreement among human coders. To test Approach 1 (all tweets, regardless of source/type), binary classification used a data set of 2213 tweets that was obtained after removing 787 neutral tweets from the sample of 3000. To test Approach 2 (personal communication tweets only), binary classification used a dataset of 2007 tweets that was obtained after removing 626 neutral/unidentifiable tweets from the sample of 2633 (Textbox 1 B).
In addition, the study used a lexicon and rule-based method VADER that was developed for the analysis of social media texts [32] to classify manually labeled tweet sample allocated for sentiment analysis (N=3000). VADER performance in classifying manually annotated tweets was compared with the accuracy of machine learning classifiers using a one-tailed t test statistic.
Building and Assessment of Machine Learning Classifiers
To build classifiers, the tweets were tokenized and all words were processed to convert uppercase letters to lowercase. Because prior research suggests that stop words and complete forms of words can be useful sentiment indicators, particularly in brief texts such as tweets, stop words were retained, and no stemming was applied [44][45][46]. Next, all the unigrams and bigrams were collected and chi-square test was applied to select the top 500 unigrams and bigrams with highest chi-square scores as features [47]. For each feature t (i), its tf-idf score was calculated in a tweet d (j) as w (i,j) = tf (i,j) × idf (i). Term frequency tf (i,j) is the number of times feature t (i) occurs in tweet d (j). Inverse document frequency is calculated as idf (i) = log(N/df (i)), where N is the total number of tweets in the dataset, and df (i) is the number of tweets in which feature t (i) occurs. Each tweet is represented as a feature vector, and each entry of the vector is the tf-idf score of that feature in the tweet. Three machine learning classification techniques were tested for each classification model/approach: Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM). All three are commonly used classification algorithms that are known to achieve good results on text classification tasks [25,26,48,49].
The performance of each classifier was assessed by 5-fold cross validation, which is a commonly used method for the evaluation of classification algorithms that diminishes the bias in the estimation of classifier performance [50]. This approach uses the entire dataset for both training and testing, and is especially useful when the manually labeled data set is relatively small. In 5-fold cross-validation, the manually labeled data set is randomly partitioned into 5 equal-sized subsets. The cross-validation process is then repeated 5 times (the folds). Each time, a single subset is retained as the validation data for testing the model, and the remaining 4 subsamples are used as training data. The 5 results from the folds are then averaged to produce a single estimation. The study reports the average of the precision, recall, and F-scores calculated by the system on different folds. Precision is defined as the number of correctly
Source Classification
Source classification (Approach 1) that used short URLs demonstrated good performance (Table 1 A). SVM algorithm applied to multiclass classification task achieved a macro average F-score of 0.7972, which was not significantly higher compared with LR (P=.09) or NB (P=.27) performance (Table 1 A). Table 1 B shows the performance of source classifier that used expanded URLs when applied to multiclass classification task. SVM showed slightly better improvement in performance in multiclass classification, compared with NB and LR algorithms, reaching 0.8141 precision, 0.8119 recall, and an F-score of 0.8102. However, these differences did not reach a level of statistical significance (Table 1 C). Performance of both source classification approaches was also assessed on binary classification tasks. Because SVM showed slightly better performance in multiclass classification than NB or LR (although not statistically significant), it was selected for evaluation on 3 binary classification tasks using the 1000 tweets: (1) media-related tweets versus the rest of tweets, (2) retail-related tweets versus the rest of tweets, and (3) personal tweets versus the rest of tweets (Table 2). When using short URLs for binary classification task, identification of media-related tweets showed slightly better precision, recall, and overall F-scores compared with identification of retail or personal communication tweets (Table 2 A), although these differences were not statistically significant (Table 2 C). The identification of all 3 source categories benefited significantly when unshortened URLs were used as features in classification. Improvements in F-scores between Approaches 1 and 2 were significant for all 3 categories (Table 2 C). Identification of the personal communication tweets benefited the most reaching 0.9020 precision, 0.8572 recall, and an F-score of 0.8736, compared with an F-score of 0.8200 when using short URLs (P<.001). Furthermore, when Approach 2 was used, identification of media and personal communication tweets showed significantly higher F-scores compared with retail-related tweet identification (
Sentiment Classification
For general sentiment classification approach that classified all 3000 tweets regardless of their source, SVM results showed better precision (0.7147) than other machine learning classifiers, but LR achieved better recall (0.6763) ( Table 3 A). In overall F-scores, SVM achieved slightly better results (F=0.6723) than other machine learning classifiers, but the differences were not statistically significant (Table 3 C). However, all 3 machine-learning algorithms achieved better results than the lexicon and rule based method VADER. Compared with VADER (F=0.5116), SVM performance was over 30% better, and the difference was statistically significant at P<.001 (Table 3 C).
Before sentiment classification Approach 2 could be applied, the sample of 3000 tweets had to be processed to extract personal communication tweets. Because the SVM source classifier with unshortened URLs showed better performance than other classifiers (Table 2), it was used to identify the personal communication tweets (2633) from the sample of 3000. Table 3 B shows evaluation of sentiment classification of personal communication tweets. Compared with Approach 1 ( Table 3 A), multiclass sentiment classification of personal communication tweets (Approach 2) showed approximately 3% improvement for NB, 4% improvement for LR, and 5% for SVM classifier, although these increases did not reach a level of statistical significance (Table 3 C). The NB classifier achieved the greatest precision (0.7539), but SVM showed the highest recall scores (0.7021). Overall, the SVM classifier demonstrated slightly better performance than the other 2 machine learning classifiers by achieving an F-score of 0.7062, which was significantly greater compared with LR and NB, but these difference did not reach statistical significance. All 3 machine-learning classifiers achieved better accuracy than VADER. The F-score of SVM was over 40% greater in comparison to VADER performance, and the difference was statistically significant at P<.001 (Table 3 C). The most discriminative unigram and bigram features reflect thematic categories pertinent to each source category (Multimedia Appendix 6). As shown in Table 4 A, for binary sentiment classification (Approach 1), the SVM classifier showed the best precision and recall scores. The SVM algorithm achieved an F-score of 0.8557, which was slightly higher than LR and NB, although the differences were not statistically significant (Table 4 C). When sentiment classification was performed on personal communication tweets only (Table 4 B), LR and SVM performance showed statistically significant improvement in comparison to Approach 1 binary classification task (Table 4 C). The SVM classifier achieved high precision and recall (both of which approached 90%), and an F-score of 0.8800, which was significantly greater in comparison to NB, but not significantly different from LR (Table 4 C). Results of binary classification tasks were not compared with VADER, because the latter still classifies tweets into 3 categories assigning a tweet to a neutral category when it cannot find any sentiment words/patterns. The most discriminative unigram and bigram features that were identified by chi-square test reflect thematic groups as pertinent to sentiment categories: "want," "love," "need" for positive, in contrast to "don't," "shit," "fake" for negative tweets (Multimedia Appendix 7). Our sentiment classifier tended to incorrectly classify tweets that expressed an opposing opinion to negative thoughts or actions related to cannabis use or its legalization. For example, the following tweets were classified as negative by our classifier, although manual coding identified them as conveying positive views toward cannabis: "@GovChristie very ignorant to not see the value of cannabis"; "I think it's ridiculous professional athletes get penalized for smoking a joint...." Humorous and sarcastic tweets were also more difficult to classify correctly by our classifier. For example, the following tweet was coded by domain experts as conveying a positive attitude toward marijuana, but was coded as negative by our machine learning classifier: "Marijuana -side effects may include being happy and consumption of fast food."
Principal Findings
The results of this study provide an example of the use of supervised machine learning methods to categorize cannabisand synthetic cannabinoid-related content on Twitter with fairly high accuracy. To classify tweets by source/type of communication, an SVM algorithm that used expanded URLs produced the best results, in particular as demonstrated by binary classification tasks. For sentiment classification, the SVM algorithm that focused on "personal communication" tweets, in particular classifying positive versus negative tweets only, performed better than a more general approach that included all tweets regardless of the source.
Integration of the 2 dimensions of content analysis tasks-identification of type of communication and sentiment-represents a novel approach. Identification of sentiment in user-generated tweets (personal communications) carries greater relevance for drug abuse epidemiology research than an approach that does not separate personal from mediaand retail-related tweets. Use of these content analysis tools along with geographic identification features currently functional in the eDrugTrends platform [14] will provide powerful methods for tracking regional changes in user sentiments related to cannabis and synthetic cannabinoids use over time and across different states or regions.
Overall, our machine learning methods for sentiment classification demonstrated substantially better performance than the lexicon and rule-based method VADER [32]. Prior research has shown that VADER method can achieve an F-score of 0.96 in identifying sentiment when applied to "general" tweets. It is noteworthy that VADER accuracy in classifying tweets in drug use-related domain (where negative words sometime can convey positive and desired experiences) was substantially lower (F=0.51). The accuracy of SVM multiclass sentiment classifier that focused on personal communication tweets only was 40% better in comparison to VADER performance, and the difference was statistically significant at P<.001.
Our study demonstrates that content analysis and manual coding of drug-related tweets is not an easy task even for human coders with substantial experience in drug abuse research and qualitative content analysis. This is consistent with prior studies that have reported high level of ambiguity and lack of context as complicating factors in content analysis of tweets [52]. Although our study demonstrates strong performance of machine learning classifiers for automatic classification of tweet content, manual coding will remain an important method necessary for exploration of new domains and improvement of existing automated classification techniques to reflect changes in drug use practices and/or slang terminology. Our experiences developing the labeled data set emphasize the importance of: (1) revealing ambiguities and difficulties encountered when conducing manual coding, and (2) using appropriate metrics to assess intercoder reliability [42].
Limitations
One of the limitations of our study is that we did not include development of machine learning classification methods to identify relevant and irrelevant tweets (eg, cases were "spice" may refer not to synthetic cannabinoids but to food seasoning). Relevance of extracted data was monitored using appropriate keyword combinations and blacklisted words [15]. We also note the limitations in relation to our ability to identify neutral tweets because they were grouped together with the "unidentifiable" or "difficult to classify" tweets. Until better methods are developed, our future applications of eDrugTrends sentiment analysis tools will take into consideration that neutral/unidentifiable group is a nonreliable category, and will focus on drawing conclusions about positive/negative sentiment tweets only.
Future research will assess performance of these techniques to analyze tweets mentioning other drugs of abuse and will also extend them to automate extraction of more detailed thematic information from drug-related tweets. In addition, because many tweets contain visual information to convey meaning, machine learning-based image classification would add an additional dimension and improve the accuracy of overall tweet content classification. In the future, we will examine the feasibility of separating true neutral tweets from unidentifiable group to improve sentiment analysis.
Conclusions
This is one of the first studies to report successful development of automated content classification tools to analyze recreational drug use-related tweets. These tools, as a part of eDrugTrends platform, will help advance the field's technological and methodological capabilities to harness social media sources for drug abuse surveillance research. Our future deployment of the eDrugTrends platform will generate data on emerging regional and temporal trends and inform more timely interventions and policy responses to changes in cannabis and synthetic cannabinoid use practices.
Acknowledgments
This study was supported by the National Institute on Drug Abuse (NIDA), Grant No. R01 DA039454 (Daniulaityte, PI; Sheth, PI). The funding source had no further role in the study design, in the collection, analysis, and interpretation of the data, in the writing of the report, or in the decision to submit the paper for publication.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Source classification: coding guidelines used to manually annotate tweets as personal, retail-, and media-related communications.
Multimedia Appendix 2
Sentiment classification: coding guidelines used to manually annotate tweets as expressing positive, negative, or neutral/unidentifiable sentiment.
Multimedia Appendix 3
Description of the development of manually labeled data set.
Multimedia Appendix 4
Information about the manually labeled tweets included in subsets to train source and sentiment classifiers.
Multimedia Appendix 5
Commonly occurring words in unshortened URLs by source/type category.
Multimedia Appendix 6
Top 10 most discriminative unigram and bigram features for source classification.
Multimedia Appendix 7
Top 10 most discriminative unigram and bigram features for sentiment classification. | 6,582.8 | 2016-10-24T00:00:00.000 | [
"Computer Science"
] |
Survivability-Aware Topology Evolution Model with Link and Node Deletion in Wireless Sensor Networks
The paper proposes the algorithm of survival topology evolution model. The model applies the survival analysis to the research on the topology of networks. On one hand, it takes the survivability of every node into account. Since the state of a node is affected by the node itself, the environment and so on, it's necessary to study the survivability of nodes. In addition, the survival analysis on the nodes can indicate whether the nodes work well in real time. On the other hand, it also takes the deletion of links and nodes determined by the survivability of the nodes into account. Then it reaches the conclusion by mean field method. The result shows that the degree distributions of WSNs are approximately power law as B-A model and that the survivability of the nodes is proportional to the degree distribution of the network consisting of the previous nodes. These are further confirmed by simulation example.
Introduction
Wireless sensor network (WSN) appears with the rapid development of microelectromechanical systems (MEMS), system on a chip (SoC), wireless communication, and lowpower embedded technology, which brings a revolution of information perception.WSN is a multiple hops selforganizing network which is made of a large number of low-cost microsensor nodes deployed in the monitoring area by wireless communication.As the scale of WSN becomes larger and larger, the problem of coefficient, error, and attack tolerance arouses people's concern.Hence, it is necessary to find the solution.
In recent years, people pay more and more attention to the structure and dynamics of large complex networks [1].Many new topology algorithms for wireless sensor networks have been presented due to the influence the topology has on the lifetime and communication efficiency of a network.In [2], a complex networks-based energy-efficient evolution model for wireless sensor networks is proposed.In [3], the authors presented a local world evolving model for energyconstrained wireless sensor networks.In [4], an energy-aware topology evolution model with link and node deletion in wireless sensor networks is presented.In [5], an evolving model of network with aging sites was proposed according to the effect of aging on network structure presented in [6,7].The model applies the algorithm proposed in [8][9][10][11] such as the mean field theory to the algorithm in [5].In [12], the authors proposed a weighted local world evolving network model with aging nodes to make the previous model portray some complex networks more appropriately.
The models above are all involved in the energy of a node.And the topology of a WSN is closely related to it.However, what impacts the lifetime of a node is not only the energy.As a result, the paper takes the notion of survivability into account.Survival analysis refers to making an analysis and inference on the survival time of creatures and people according to the data from tests or investigations.With the 2 International Journal of Distributed Sensor Networks continuous improvement of the theories and methods of survival analysis, survival analysis comes to be applied to some other fields such as the assessment of the lifetime of products.
Survival analysis has become another hot topic in recent years.Although it is proposed to solve some biology problems, people are focusing on its application to computer science [13].For example, the articles [14][15][16][17][18][19] are trying to give a three-layer survivability analysis on the reliability of the WSN.As we all know, the WSN consists of plenty of inexpensive sensor nodes, which are not repairable and worthy of repairing.Therefore, the lifetime of the sensor nodes is limited.And the WSN is generally deployed in the severe environment, which will shorten the lifetime of the nodes.And then it is necessary to consider the survivability of the nodes when studying the topology of the WSN.So this paper applies the survivability of every node to the energyaware topology evolution model with link and node deletion and then explores the topology of WSN.
The remainder of the paper is made of three sections.In Section 2, an algorithm of survival topology evolution model, which is based on [4], is proposed.In Section 3, the numerical experiments to present the features of the networks generated by the proposed algorithm are given.Finally, some concluding remarks are given in Section 4.
Model
The particular features of WSNs evolving networks can be captured in the present model.The model evolves from an initial WSN consisting of 0 nodes and 0 edges.
Preferential Attachment.
The survivability of the nodes in a WSN has an impact on the lifetime of the network, which will influence the topology of the network at the same time.As we all know, the survivability of a node is not only related to the energy of the node, but also involved in other factors such as the disturbance in the environment.The section is doing some researches on how the survivability of the nodes impacts the topology of a WSN.In other words, the model takes the energy of the node, the disturbance in the environment, and so on into account.Then, the iterative algorithm during the evolving process is outlined as follows.
At each time step, a new node is added to the system.And (0 < < 0 ) new links from the new node are connected to existing nodes [1].We assume that the probability Π of a new node will be connected to node depends on the connectivity and the survivability of the node .In this paper, we define a function (()) to represent the relationship between the survivability of a node and its ability to be linked.The more survivability of a node is, the more ability it will have of being connected to the new coming nodes.Therefore, (()) must be an increasing function and the form may be (), [()] 2 , √(), ln[()], and so on.In this paper, we just set ( ()) = ().And the form of Π is From [13], the survivability () of a node is defined as the probability that the life of the node exceeds the time ( > 0): () = ( ≥ ) . (2) 2.2.Links Deletion.At each time, with probability (0 ≤ < 1), * old links are removed.So the parameter denotes the deletion rate, which is defined as the rate of links removed divided by the rate of links addition; see [4].We assume the value of the parameter is related to the survivability () of the node.In this paper, we assume the survivability of a node is related to the definition of the deletion rate as follows.
We set a survival function threshold (0 < < 1).If the survivability of a node satisfies then the node will be removed.So the deletion rate of the node can be expressed as the probability that the survivability of the node is lower than the threshold : We first select a node as an end of a deleted link with the antipreferential probability as The less energy the node has, the more probability it will have for being deleted.Then node is then chosen from the linked neighborhood of node (denoted by ) with probability −1 Π * ( ), where Then the link connecting nodes and is removed; this process is repeated * times.Once an isolated node appears, it should be removed from the network to maintain the connectivity of networks.
Degree Distribution.
In complex networks, the degree distribution (), which indicates the probability that a randomly selected node has connections, is a very important and useful factor to observe the features of networks and has been suggested to be used as the first criterion to classify real-world networks.In this paper, the mean field theory is adopted to give a qualitative analysis of () for our survivability-aware evolving model with link and node deletions.
By the mean field theory, let () be the degree of the th node at time , and then, in the limit of large , the increasing rate of () satisfies the following dynamical equation: The first term in (6) accounts for the increasing number of links of the th node by the preferential attachment due to the newly added node.The second item in ( 6) means the losing of links by antipreferential attachment during the evolving process.
From the mean field sense, we have where () is the expected value of the node survivability in the whole network; () is the number of nodes at time ; ⟨()⟩ is the average degree of the network at time .For large , In this paper, we assume that the survivability of nodes is exponential distribution as the following: where indicates the time a new node is added to the network and ( > 0) is a parameter of the exponential distribution.
Then the expected node survivability in the whole network is Finally, we can get the simplified form of the dynamical equation: With the initial condition () = , then we can get The probability that a node has a connectivity which satisfies ( ) < is where = 2(1 − − )/( − − 1) − .Assuming that we add the node to the network at equal time intervals in evolving process for WSNs, the probability density at the time is ( ) = 1/( 0 + ).Therefore, we get The probability density function of the degree of a node is 2.4.2.Case B ( ̸ = 0).In this case, links and nodes in the evolving network model are not monotonously growing.Instead, links and nodes can be added in some occasions and removed in other cases.
At first, we assume the probability density of the threshold is 1.Then we can obtain the expression of from (4) and ( 9) Here, we assume ≫ .Then we can get (18) from ( 11) By solving the differential equation ( 18), we can get the form of the solution to the differential equation as follows: where International Journal of Distributed Sensor Networks With the same initial condition as Case A, ( ) = , the solution will be (21) after simplification.Consider where = ( − − 1)/2.
Then the probability that a node has a connectivity which satisfies ( ) < is Assuming that we add the node to the network at equal time intervals in evolving process for WSNs, the probability density at the time is ( ) = 1/( 0 + ).Therefore, we get The probability density function of the degree of a node is
Simulation
In this section, we compute numerical results in the evolution of a network and compare them with simulation examples.
Case A (𝑝 = 0)
. The degree distribution () is provided for different time with fixed = 0.3, 0 = 50, = 2 in Figure 1.According to (15), must satisfy the following condition to guarantee nonnegativity of (), which is consistent with the other expressions: It indicates that the degree distributions () are approximately power law as B-A model.Figure 1(a) shows that the network makes lower connectivity as increases, because the scale of the network becomes larger with the increment of .
Although the probability of a node to be connected becomes bigger as time goes on, the number of nodes in the network becomes larger at the same time.Of course, the survivability of nodes becomes smaller as time goes on, which is also the reason of lower connectivity as increases.Figure 1(b) shows that the degree distribution () follows an approximate exponential decay with the increase of and the rate of decay will be much faster with the growth of time.
The degree distribution () is provided for different time with fixed = 1000, 0 = 50, = 2 = 0.3, 0 = 50, = 2 in Figure 2. It indicates that the network makes higher connectivity as decreases.Figure 2(a) shows that when the survivability of the nodes in the network becomes lower with the increment of , the probability of the nodes to be connected will be higher, which is the same with the definition of (1). Figure 2(b) shows that the degree distribution () follows an approximate exponential decay with the increment of and the rate of decay will be much faster with the growth of .
The degree distribution () is provided for different time 0 with fixed = 1000, = 0.3, = 2 in Figure 3. Figure 3(a) shows that the network yields higher connectivity as 0 decreases.It is concluded that the degree distribution () follows an approximate exponential decay with the increase of and the rate of decay will be much faster with the growth of 0 .
The degree distribution () is provided for different time with fixed = 1000, = 0.3, 0 = 50 in Figure 4. Figure 4(a) shows that with the increase of the value of , the beginning of the curve gradually moves to right, which is determined from (25).It is also very easy to understand that the network makes higher connectivity as increases.Hence, Figure 4(b) shows that the degree distribution () follows an approximate exponential decay with the increase of () and the greater the is, the faster the rate of the decay will be from Figure 4.
Case B (𝑝 ̸
= 0).The degree distribution () is provided for different time with fixed = 0.3, 0 = 50, = 2 in Figure 5.According to (23), must satisfy the following condition to ensure nonnegativity of () It can be found that the degree distributions () are approximately power law as B-A model because the survivability of the nodes will become lower as time goes on.Figure 5(a) International Journal of Distributed Sensor Networks 9 shows that the network makes higher connectivity as increases, because the scale of the network becomes larger with the increase of , which is the same with Case A. In addition, the value of () is higher when is comparatively smaller due to the limitation to the lifetimes of nodes.Figure 5(b) shows that the degree distribution () follows an approximate exponential decay with the increase of ().
The degree distribution () is provided for different time with fixed = 1000, 0 = 50, = 2 in Figure 6.In Case B, the nodes and links are deleted according to the survivability of the nodes, so when the value of becomes bigger, that is to say, the survivability of the nodes becomes lower, the nodes and its links will be easier to be deleted.Consequently, the values of () are bigger at the beginning of the curves with the smaller .According to (26), the curve with bigger value of begins where the value of is bigger.Due to the survivability, the values of () with bigger will become smaller with the increment of .So some intersections will be shown in Figure 6(a).Figure 6(b) shows that the degree distribution () follows an approximate exponential decay with the increase of ().
The degree distribution () is provided for different time 0 with fixed = 1000, = 0.3, = 2 in Figure 7. Figure 7(a) shows that there is an inverse relationship between 0 and (), which can also be seen from ( 24). Figure 7(b) shows that the degree distribution () follows an approximate exponential decay with the increment of and the rate of decay will be much faster with the growth of 0 .
The degree distribution () is provided for different time with fixed = 1000, = 0.3, 0 = 50 in Figure 8. Figure 8(a) shows that the curves begin at different places, which is the same with Case A. The bigger the is, the bigger the value of at the beginning of the curve will be, which can also be seen in (26).Compared with Case A, the distance of the beginning points with different becomes bigger, which results from the deletion of nodes and links according to the survivability of nodes.Figure 8(b) shows that the degree distribution () follows an approximate exponential decay with the increase of ().
Compared with Case A, the degree distributions of Case B are bigger because the deletion makes the scale of the network in Case B smaller than that in Case A. And all the degree distributions () follow the approximate exponential decay with the increment of .The rate of the decay depends on the choice of different parameters.
The evolving algorithms for wireless sensor networks discussed in [2][3][4] are based on the energy of nodes.And this paper introduces the relationship between the survivability of nodes and the energy of nodes.The survivability of nodes is influenced by the battery consumption of the nodes, the disturbance in the environment, and so forth.It is easy to understand that the decrease of the energy of nodes gives rise to the decline of the survivability of the nodes.
It can reach some conclusions from the simulation above.It indicates that the degree distribution follows approximately power law.And the degree distribution increases with the decrease of the survivability.According to the relationship between survivability and energy, the conclusion is consistent with the conclusion in [2][3][4].
Conclusion
The paper proposes the algorithm of survivability topology evolution model.The model applies the survival analysis to the research on the topology of networks and presents the mathematical results.The lifetime of nodes has a great impact on the wireless sensor network made up of these nodes.So we consider the survival analysis of the nodes.Due to the battery consumption of the nodes, the disturbance in the environment, and so forth, the survivability of the nodes will change in real time, which will get rise to the real-time change of the topology of the WSN.Therefore, it is necessary to take the survivability of nodes into account when the topology of the WSN is studied.
According to the model, the simulation studies of the model are presented.From these results, it reaches the conclusion that the degree distributions () are approximately power law as B-A model.When there is no deletion of nodes and links in the network, the lower the survivability of the nodes is, the bigger the values of () will be.When there is a deletion of nodes and links according to the survivability of the nodes, the same consequence can be obtained.Moreover, the values of the degree distributions () in Case B are bigger than those in Case A. | 4,221 | 2014-04-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Value-serializability and an Architecture for Managing Transactions in Multiversion Objectbase Systems
Multiversioning of objects in an objectbase system provides increased concurrency and enhanced reliability. The last decade has seen proposals for managing transactions in multiversion database systems. A new transaction model, a new correctness criterion, and an architecture that exploit multiple versions in objectbase systems are described in this paper. The architecture contains three main components that ensure correct concurrent serializable executions of transactions that satisfy our correctness criterion and provides a spring board for several open problems that must ultimately be addressed.
Introduction
Traditional multiversion database environment have used data versioning for historical purposes as well as issues related to transaction management.Data versioning reduces the overhead involved in recovery and impacts concurrency especially in environment where contention between read-only and update queries is problematic.This paper presents a model of versioned objects for an objectbase environment and concentrates on the role of versions in increasing concurrency control.
An objectbase consists of a set of objects which contain structure and behavior.The structure is the set of attributes encapsulated by the object.An object's behavior is defined by procedures called methods.A method's operations can read or write an attribute, or invoke another method, possibly in another object.
Multiple users may access a database at the same time and their access must be controlled to avoid concurrency anomalies such as lost updates and inconsistent reads.Transactions are used to facilitate this control.Traditionally transactions are defined as a sequence of read and write operations on passive data.In an object-oriented system a transaction consists of a sequence of method invocations which perform operations on object attributes on the transaction's behalf.We distinguish two types of transactions: user transactions and version transactions.A user transaction is a sequence of method invocations on objects.Method executions are managed as version transactions.A version transaction is the execution of read/write operations on a version of an object and any nested method invocations.
Concurrent execution of a set of transactions must be controlled so that the final result of the execution is equal to the result of a serial execution of the transactions.An objectbase system is provided with a scheduler that orders the operations of the concurrent transactions based on a correctness criterion.Conflict-serializability and view-serializability are two correctness criteria often selected for transaction models.This paper introduces a new correctness criterion called value-serializability.Value-serializability seems to be more efficient to implement than conflict-serializability and is not NP-complete like view-serializability.
Correctness criteria are enforced by concurrency control algorithms that ensure serialization of concurrently executing transactions.Concurrency control algorithms are divided into two broad categories: pessimistic and optimistic.Pessimistic protocols block the transactions by deferring the execution of some conflicting operations.Optimistic Third International Workshop on Advances in Databases and Information Systems, 1996 algorithms do not block the transactions but validate their correctness at commit time.Focusing on the centralized objectbase environment, this paper introduces the object versioning techniques used to build a framework for developing an optimistic concurrency control algorithm.Our model introduces two types of concurrency control: inter-UT and intra-UT concurrency.Inter-UT concurrency refers to the concurrent execution of multiple user transactions.Intra-UT concurrency refers to concurrent execution of multiple subtransactions originated from the same user transactions.
The primary contribution of this paper are as follows: 1.It describes a model that lays out the fundamental concepts needed to manage transactions in a multiversion objectbase system.
2. It introduces a new correctness criterion which allow more scheduling and can be less costly to implement than some well-known traditional correctness criteria.
3. It presents an architecture and illustrates the steps required to implement a suitable optimistic concurrency control.
4. Finally, some solutions that will motivate some challenging open problems such as transactions reconciliation are discussed.
The paper begins by describing related work on multiversion concurrency control and transaction models for objects in Section 2. Section 3 describes our model and defines its key concepts.Section 4 introduces value-serializability and compares it with other correctness criteria.An architecture for our model and details of its components is presented in Section 5. Finally, Section 6 makes some concluding remarks and describes future work.
Related Work
Using multiple versions of data items for transaction synchronization was first proposed by Reed [12] and subsequently by Bayer, et al: [2].Multiversioning permits enhanced concurrency, simplifies recoverability, and supports temporal data management.This section briefly reviews relevant multiversion and objectbase concurrency control literature.
Objectbase Concurrency Control
Our model is closely related to those of Hadzilacos and Hadzilacos [7], and Zapp and Barker [14].Zapp and Barker define object serializability and a serialization graph technique to capture intra-object and inter-object transaction synchronization found in Hadzilacos and Hadzilacos' model.Intra-object synchronization serializes operations within an object.Inter-object synchronization ensures consistency of the independent synchronization decisions made at each object.We adapt and extend Zapp and Barker's model to the multiversion objectbase environment.
User transactions and object transactions lead immediately to the nested transaction model.Each transaction forms a tree whose root (the top-level transaction) is the user transaction and whose descendants are object transactions.To ensure correctness a history called global object history is defined that contains both the ordering relation of object transactions executed at an object and the ordering of the user transactions.
Zapp and Barker also present an architecture that describes the transaction facilities and a suitable concurrency control algorithm.The architecture is composed of two major components: an Execution Monitor and an Object Processor.The purpose of the Execution Monitor is to provide a user interface and to schedule the method invocations on behalf of user transactions.Methods are converted to object transactions and are executed by the Object Processor.In processing method executions, the Object Processor retrieves and updates object attributes by accessing the persistent object store.The Execution Monitor and the Object Processor ensure intra-object and inter-object serialization, respectively.
Multiversion Objectbase Concurrency Control
Nakajima [10] presents an optimistic multiversion concurrency control mechanism.Multiversioning techniques are applied to the concepts of backward and forward commutativity [13].Basically, two operations executing on an object commute if they can be scheduled in any order.Nakajima
The Computational Model
This section defines objects, transactions, and introduces versions vis a vis objects.
Object Model
Zapp and Barker's object model defines a set of objects that are uniquely identifiable containing structure (attributes) and behavior (methods) [14].We adapt this definition for our model.
Definition 1 (Object):
An object is an ordered triple, o = f; S; M, where: 1. f is a unique object identifier, 2. S is the object's structure, composed of attributes such that 8a i ; a j 2 S , a i 6 = a j , and 3. M is the object's behavior, composed of methods such that 8m i ; m j 2 M , m i 6 = m j .
Point (1) assigns unique identifiers to each object.Point (2) specifies the attributes of an object and point (3) specifies the methods of an object.This paper identifies object f by o f .The method and the structure of o f are unambiguously referenced by M f and S f , respectively.
An object is versionable in that several versions can be derived from one object.Versions of an object must have the same structure and methods as the object.Versions are either active, committed, or aborted.An active version of an object begins as a copy of the object which can then be manipulated independent of all other such versions.The structure of an active version may be modified extensively for some period.Eventually, the modified active version commits and becomes a committed version if its state is consistent with the current state of the object.Otherwise, the state of the active version is modified again and if it still can not be committed, it becomes an aborted version.Committed versions are merged with the object, creating a new state for the object.Aborted versions are disposed.
An active version is identified by a pair < f ; c > where the first element is the object identifier in which the versions is derived and the second a unique version identifier.Thus we adapt the notational shorthand where v fc identifies active version c of object o f .An arbitrary data item x in v fc is unambiguously denoted x fc .Third International Workshop on Advances in Databases and Information Systems, 1996
Transaction Model
A nested transaction is described by a tree where the root is the top-level transaction, a sequence of intermediate transactions, and a set of leaf transactions.The top-level transaction and its descendants, constitute a transaction family.Transaction families appear atomic to other transaction families 1 .Formal definitions of flat and nested transactions appear in the literature [11,14].
We now precisely describe how our transaction model maps to the traditional nested model [9].User submit transactions that invoke a set of object methods.Transactions submitted by a user are atomic so the underlying system must ensure that the nesting of methods resulting from them are also atomic.Users only know about the set of methods they submit to the system.Subsequent method invocations performed on behalf of these initial methods are transparent to the users.Thus, nested transactions submitted by the users may be divided into two groups.The first group includes top-level transactions explicitly created by the users and the second contains transactions occurring as a consequence of the method invocations made by the top-level transactions.The transactions in the first group are user transactions and those in the second group version transactions.
A user transaction cannot directly modify an object's state in the objectbase.This is accomplished by the methods it invokes.Such methods are converted to version transactions.Version transactions are created by the system and each operate on active versions of specific objects.
User Transactions
We denote an operation p of user transaction i as ip , the set of all operations of transaction i by OS i , and the termination condition as N i 2 f commit, abortg.User transaction i is denoted UT i .Operation ik of user transaction UT i is an invocation of a subtransaction denoted by T f ij .The subtransaction T f ij refers to the jthsubtransaction of UT i operating on v fi .The set of operations for UT i is OS i = [ k ik , where the ik 's are enumerated by finding the transitive closure of the method invocations made by UT i .
Definition 2 (User Transaction):
A user transaction UT i is a partial order ( P i , i ), where: For any two ip , iq 2 OS i , if depends( ip , iq ) or depends( iq , ip ) then iq i ip or ip i iq , respectively, 3. 8 ip 2 OS i , where ip = T f ij then N ij = N i , and 4. 8 ip 2 OS i , ip i N i .
Point (1) enumerates the operations in UT i .Point (2) states that dependent operations of a user transaction must be ordered.It introduces a boolean function called depends which accepts two operations at least one being a method invocation and returns true if there is no dependency in the internal semantics of the operations.The rationale for the depends function is discussed below.Point (3) ensures that the termination conditions of all subtransactions invoked by the user transaction are the same as the termination condition of the user transaction.Point (4) restricts any operation of the user transaction from occurring after the user transaction terminates.
Version Transactions
Additional notation is required.A version transaction from step k of UT i , executing on v fi is denoted V T f ik .The version transaction V T f ik is created when operation ik of UT i invokes a method of o f .The direct and indirect descendant transactions of a version transaction V T f ik , are V T a ik1 ; V T b ik2 ; : : : ; V T k ikn .When some V T e ikj attempts to complete, it enters a pre-commit state where it is ready to commit subject to the commitment of its parent transaction.The operation pc denotes entry into the pre-commit state by a nested subtransaction so the operation set of V T f ik is OS ik = f[ p ikp g, where ikp 2 f read; write; pc; V T e ikj g.Two version transactions may execute on a common version of an object if they are from the same user transaction and access the same object.The significance of the depends function (point 2 in Definition 2 and point 2b in Definition 3) is that it provides information to allow intra-transaction concurrency.This implies that operations of a transaction which do not depend on each other can be freely executed concurrently.
Definition 3 (Version Transaction):
Fully describing the implementation of the depends function requires a deep examination of compiler construction and a thorough treatment of the runtime systems.Clearly this is beyond this paper's scope but a brief discussion of the fundamental compile-time techniques should be sufficient to demonstrate feasibility.A more complete description is available in Graham [6] and others [1,5].
To implement the depends function static information captured at compile time can be used to obtain some knowledge related to dependence between the object methods.The following defines the necessary data structures.
extent(msg):
The extent of a method invocation (message step [7]) msg is the set of all object methods invoked directly or indirectly by msg.RS(s)/WS(s): the set of all attributes referenced for read/write by statement s in a method.If the statement is a message step, the readset and the writeset are the input parameters and the output parameters of the message step, respectively.
RS(M)/WS(M):
the set of all attributes referenced for read/write by method M. Unfortunately, the information provided by the above data structures are only captured conservatively.For example, extentm determines all possible methods that may be invoked directly or indirectly if the message step msg is executed.During the execution of a method some sections of a method are pruned off by the conditional statements and only a subset of the code is executed.Since the depends function is using this information, the result of the depends function is also calculated conservatively.
Dependency between two operations can be of three forms: direct dependency, indirect dependency, and hidden dependency.Direct dependency occurs if the two operations directly conflict in the local object.Indirect dependency occurs if the operations commonly access conflicting methods in some other object.Hidden dependency happens if two operations conflict indirectly in the local object (typically the result of recursion).The following shows an example of each form and discusses the methods to detect the dependencies.
Direct dependency is the most trivial case.Figure 1A shows an example of direct dependency between two statements s 1 and s 2 in method m 1f .Clearly s 1 and s 2 access conflicting operations.This dependency can be detected by comparing the readsets and the writesets of s 1 and s 2 .Call M2g
M1 M2 M3
Call M1g Figure 1: dependency of the statements in a method Figure 1B shows an example of indirect dependency between two statements s 1 and s 2 in m 1f .s 1 and s 2 do not conflict locally but both indirectly invoke some conflicting methods in o g .This dependency can be detected by comparing the extents 1 with the extents 2 and building the Conflict-Sets 1 ; s 2 .If the conflict-set is empty no indirect dependency occurs; otherwise, potential indirect dependency exist.Figure 1C illustrates an example of hidden dependency.Note that s 1 and s 2 neither directly nor indirectly conflict.But s 2 indirectly access some other methods in o f which has some conflicting operation with s 1 .This dependency can be detected by comparing the readset and the writeset of s 1 with the readset and the writeset of a method that is indirectly invoked by s 2 in o f .Other forms of hidden dependencies are also possible.For example, in Figure 1C, s 2 may conflict with some methods that may be called by s 1 in o f indirectly.Similarly, s 1 and s 2 may call methods m 2f and m 3f , respectively and conflicts may occur between s 1 and m 3f , or s 2 and m 2f , or m 2f and m 3f .
If the result of the depends function is false, the two operations can be freely executed concurrently.Otherwise, if a local dependency exists, one operation is blocked until the other is completely executed.If the two operations are not locally dependent, but the result of the depends function warns about the potential indirect dependency or hidden dependency, the two operations can be executed concurrently as long as their executions are serialized based on a defined correctness criterion.
Value-serializability
This section describes a new correctness specification called value-serializability that relaxes the restrictive properties of conflict-serializability but is not NP-complete like view-serializability. Before discussing the specification, several notational elements need to be provided and an extension to the traditional definition of a "history" must be stated.
First, without loss of generality, a history H is always a committed projection of a schedule created by a scheduler in the system [3].Further, a read/write operation by user transaction UT i in history H is represented as r i x; v=w i x; v where v the value read/written by UT i .Now a history is defined as follows: Definition 4 (History): A complete history over a set of user transactions fU T 1 ; U T 2 ; :::UT n g is a partial order with ordering relation H where: H i S f S f k f ik g, and 3. for every two operations ip and jq 2 P H , and two distinct values u and v, if ip = w i x; u and iq = r j x; v/w j x; v, either ip H jq or jq H ip .
Point (1) enumerates the operations of all transaction families (see Definition 3).Point (2) defines the ordering relation for the operations of all transaction families.Point (3) indicates that two conflicting operations belonging to two transaction families must be ordered if they read or write distinct values.
Serializability
Conflict serializability states that a conflict occurs if two operations access the same data and at least one is a write operation.Our notion of conflict is called value-conlict and says that conflicting operations occurs when different values are read/written.For example, two write operations that write the same value into a data item x can be executed in any order; or, if x has a value v, a read and a write operation on x can be processed in any order on x as long as the write operation overwrites the same value v on x.As another example consider the following: A = fw p x; 5; :::::::::;w r x; 10; :::::::::; w q x; 5g Suppose A is a projection of history H and operation r i x; 5 is an operation of UT i that may or may not be in A. Note that if A does not contain any write operations of UT i , it makes no difference if r i x; 5 reads from w p x; 5 (happens before w r x; 5) or reads after w q x; 5 (happens after w r x; 5).Set A is defined to be a range for operation r i x; v.
formally defined as follows: Definition 5 (Range): Given three user transactions UT i ; U T p ; U T q 2 H , set A is a range for operation r i x; v i if: 1.A is a projection of H, 2. w p x; v p of UT p and w q x; v q of UT q are the first and the last elements in A, respectively and v i = v p = v q , and 3. A contains no write operation of UT i on any data item.Thus two write operations accessing the same data item x value-conflict if they write different values into x.A read and a write operation on x also value-conflict if the write operation does not occur in the range of the read operation.This gives rise to the concept of the equivalence between two histories.
Definition 6 (Value-conflict Equivalent):
Two history H 1 and H 2 are value-conflict equivalent if H 1 and H 2 are defined over the same set of user transactions, and have the same operations, and the order of their value-conflicting operations is the same.
A history is serializable if it is equivalent to a serial history [3].A history is serial if for every two transactions UT 1 and UT 2 in the history, all operations of UT 1 occurs before all operations of UT 2 or vice-versa [3].Thus a value-serializable history is:
Definition 7 (Value-serializable):
A history is value-serializable if it is value-conflict equivalent to a serial history.
The Value-serializability Theorem
Suppose history H is defined over a set of user transactions T = fU T 1 ; U T 2 ; :::UT n g.We determine whether H is value-serializable by constructing a graph called a Value Serialization Graph denoted V S G H .The V S G H = V;Ewhere a vertex v 2 V represents a transaction T 2 H, and an edge in E from vertex v i to vertex v j indicates that at least one operation of UT i proceeds and value-conflicts with an operation of UT j in H. Theorem 4.1 (Value-serializability Theorem): A history H is value-conflict serializable iff V S G H is acyclic.
Proof (sketch):
(if): Suppose H is a history over T = fU T 1 ; U T 2 ; :::UT n g and V S G H is acyclic.Without loss of generality, assume UT 1 ; U T 2 ; :::; UT n are committed in H. Thus UT 1 ; U T 2 ; :::; UT n represent the nodes of V S G H . Since V S G H is acyclic it can be topologically sorted.Let i 1 ; i 2 ; :::; i n be a permutation of 1; 2; :::; nsuch that UT i 1 ; U T i 2 ; :::; UT in is a topological sort of V S G H . Let H s be a serial history over UT i 1 ; U T i 2 ; :::; UT in .We prove that H is value-conflict equivalent to H s .Let ip and jq be operations of UT i and UT j , respectively such that ip and jq value-conflict and ip precedes iq in H ( ip !jq ).By definition of V S G H , there is an edge from UT i to UT j in V S G H .
Serializable
Conflict Serializable
Value Serializable
Figure 2: Relationship between value, view, and conflict-serializability Therefore, in any topological sort of V S G H , UT i must appear before UT j .Consequently, in H s all operations of UT i appear before any operation of UT j .Thus any two value-conflicting operations are ordered in H in the same way as in H s .Thus H is value-conflict equivalent to H s .(only if): Suppose history H is value-conflict serializable.Let H s be a serial history that is value-conflict equivalent to H. Consider an edge from UT i to UT j in V S G H . Thus there are two value-conflicting operation ip and jq of UT i and UT j , respectively, such that ip !jq in H.Because H is value-conflict equivalent to H s , ip !jq in H s .This indicates that because H s is serial and ip in UT i proceeds jq in UT j , it follows that UT i appears before UT j in H s .Now suppose there is a cycle in V S G H and without loss of generality let that cycle be UT 1 !U T 2 !::: !UT k !U T 1 .This cycle implies that in H s , UT 1 appears before UT 2 which appears ... before UT k which appears before UT 1 and so on.Therefore, each transaction occurs before itself which is an absurdity.So no cycle can exist in V S G H . Thus, V S G H must be an acyclic graph.
Relationship with other Correctness Criteria
The value serialization graph discussed above shows that the decision problem that determines if a history is valueserializable can be solved in polynomial time.This is because a cycle in the value serialization graph can be detected in polynomial time.Thus value-serializability is not an NP-complete problem.This section further compares value-serializability with view and conflict serializabilities in terms of scheduling and the cost of implementation.
In some environment, concurrency control algorithms that enforce value-serializability can be less costly and more efficient to implement than the ones which use conflict serializability.A common concurrency control algorithm that uses conflict serializability is two phase locking (2PL).Suppose two phase value locking (2PVL) is the corresponding concurrency control that enforces value serializability.The following compares 2P Lverses 2P VL .
Consider the execution sequence of 2P L .If a lock is required, a request of 2P Lis made to the system kernel in privileged mode that requires the suspension of the currently running process, a lock acquisition, and a control switch back to the first process.This is an extremely expensive process that involves approximately one hundred (100) machine cycles (if conflict does not occur) or more (if conflict occurs) [8].If the compilers can detect through static analysis, that a "value" is not in conflict, then the process above can be usurped for this particular access.The cost of 2P VLwould be a comparison operation between the current value and one read, at the time the transaction initially began execution.This requires only three (3) machine cycles.If you include the cost of the initial reads and the storage of these initial values, it only costs a total of ten (10) cycles.This results in a magnitude savings at execution time.
Validation Processor
Unfortunately, two conditions make the scenario problematic.First, if the transactions do actually value-conflict, the locking mechanism (2P L ) must be added to the checking cost which leads to a ten percent increase in overhead.
Secondly, the compiler must embed the comparison operations into the methods which requires a substantial rewrite of the compiler itself and will slow down the compilation process.The former concern is an issue of ongoing research while the latter is irrelevant since it is a pre-runtime issue.Therefore, environment with low data contention or where the domain of values for the data items is small will benefit the most from 2P VL .On the other hand, if transactions are constantly updating a small member of data items with a wider range of values (typically by hot-spots) 2P Lwill outperform 2P VL .
Architecture
A versioned object store is comprised of two portions: a persistent stable objectbase and a non-persistent unstable working store.The objectbase contains persistent objects and the unstable working store keeps the active versions.
An active version v fi is created by copying an object o f from the objectbase and assigning a unique version identifier i. Active versions may be promoted to committed versions where their contents are merged with the objects in the objectbase thereby creating new states for the objects.For our purposes it is sufficient to assume that committed versions are not maintained historically in the objectbase.It should be noted that relaxing this constraint may significantly increase concurrency and has an important impact on the specification of serialization in a multiversion system, but this is beyond the scope of this paper.
Architectural Model
Three major components form the basis of our architecture: the Transaction Processor, the Version Processor, and the Validation Processor (Figure 3).An active version of o f (v fi ) is requested from the objectbase and placed in the unstable store.Next the Version Transaction Manager passes V T f ip to the Execution Manager.The Execution Manager executes the operations of V T f ip updating v fi in the unstable store.The Version Transaction Manager also builds a version list for each active user transaction.The version list of UT i V RLSTUT i is a set that logically records the objects referenced by UT i .Every time a version of an object (v fi ) is created for UT i , the Version Transaction Manager appends f (the object identifier) to V RLSTUT i .When all the version transactions of UT i terminate, V RLSTUT i is passed to the Execution Manager.The Execution Manager submits V RLSTUT i to the Validation Processor.
The Validation Processor checks the validity of the updated versions referred to in the version list.It has two components: the Decision Manager and the Commit Manager.The Decision Manager compares each updated version (v fi ) referred in the version list with its related object (o f 0 ) in the objectbase.The purpose of the comparison is to determine if updated active versions would create inconsistency in the objectbase.An updated version v fi is consistent with o f if the attributes accessed in v fi have not been accessed in o f since v fi was created.If the results made from all the updating versions are consistent with the current state of their corresponding objects in the objectbase, the version Third International Workshop on Advances in Databases and Information Systems, 1996 list is passed to the Commit Manager.The Commit Manager promotes the updated versions to committed versions and merges the committed versions with their corresponding objects in the objectbase, thereby creating new states for the objects.
Conclusion
We have presented a formalism for multiversion objects and transactions on them.We have presented a serializability theory and an architecture which can be used as the basis for the development of optimistic concurrency control protocols.Detailed algorithms will appear later in subsequent work.Several open problems present themselves.First, enhancements to the basic optimistic algorithm reflected in our model are yet to be developed.Such an algorithm can exploit the maintenance of historical objects by providing increased opportunities to serialize committing transactions.The availability of historical data may help enhance reliability in addition to the obvious benefits of tracking data values over time.Reconciliation is still a largely unexplored research area.Successful research that detects "incorrect" data items but makes them consistent with the rest of the information in the objectbase would have significant impact on both multiversion and semantic database systems.
Figure 3 :
Figure 3: The components of the architecture
Figure 4 :Figure 4
Figure 4: The architecture argues that forward commutativity uses the latest committed version of the objects to determine a conflict relation while backward commutativity uses the current states Third International Workshop on Advances in Databases and Information Systems, 1996 of the objects.Forward and backward commutativity relations are combined into a new relation called the general commutativity relation.A general commutativity relation exists between two operations if they either backward commute or forward commute.In Nakajima's model, each object consists of a collection of versions.The versions are classified into two groups: committed and uncommitted versions.The most recent committed version of an object o i is called the last committed version of o i (LCV o i ), and the most recent uncommitted version of o i is called the current version of o i (CV o i ).When transaction T j invokes a method M ik in object o i a new uncommitted version of o i (N V o i ) is created for T j .If the return result from NV o i backward commutes with CVo i or forward commute with LCV o i , NV o i becomes the new current version of o i and replaces the old current version.Otherwise, NV o i is discarded and T j invokes method M i again.simple reconciliation and complex reconciliation.Simple reconciliation merges the result of the execution of two versions o f1 and o f2 of object o f accessed by two transactions T 1 and T 2 , respectively and provides a serialization order between T 1 and T 2 .Versions o f1 and o f2 can be merged if T 1 and T 2 do not access common data in a conflicting manner.Complex reconciliation is attempted if simple reconciliation cannot be performed.Complex reconciliation of two transactions T 1 and T 2 may require the less costly transaction be reexecuted against the state created by another transaction.The cost of the reexecution of a transaction is estimated by static compile time analysis.Complex reconciliation of a transaction is mainly partial reexecution of the operations which have accessed stale data.Reconciling an unsuccessful transaction at commit time is often a less costly procedure than the complete roll-back and reexecution of the transactions.
g, 2. (a) for any two ikp , ikq 2 OS ik , if ikp = wx fi and ikq = wx fi =rx fi , for any x fi , ikp f ik ikq or ikq f ik ikp , (b) for any two ikp , ikq 2 OS ik if ikp = V T e ikj and depends( ikp , ikq ) or depends( ikq , ikp ), ikq f ik ikp or ikp f ik ikq , respectively, 3. if ikp = pc, then ikp is unique and 8 ikq 2 OS ik , p 6 = q ikq f ik ikp , 4. 8 ikp 2 OS ik , where ikp = V T e ikj then N ikj = N ik , and 5. 8 ikp 2 OS ik , ikp f ik N ik .Only those points different than Definition 2 are discussed.Point (2a) orders the conflicting local operations of the version transaction.Point (2b) orders the conflicting operations of two subtransactions of a version transaction which are invoked on the same version.Point (3) indicates that all operations of a version transaction must occur before its pre-commit operation.
is a set of pairs < M if ; M jf > where M if , and M jf are two methods of object o f such that M if 2 extentmsg, M jf 2 extentmsg 0 , and M if and M jf may access attributes in o f in conflicting manner. Conflict-Set(msg,msg'): | 7,963.2 | 0001-01-01T00:00:00.000 | [
"Computer Science"
] |
A World Worth Living—Can Artificial Intelligence Help to Reach the Goal?
: Artificial intelligence (AI) is an area of computer science that has received much attention in the public media, politics and economy. There is a worldwide expectation that AI will be a key technology for the future. In this short paper, I sketch and discuss whether the prospects and hopes are realistic from a technical point of view and under which conditions AI will contribute to the welfare of human beings.
Introduction
Artificial intelligence (AI) is an area of computer science that has received much attention in public media, politics, and economy. There is a worldwide expectation that AI will be a key technology for the future. In this short paper, I sketch and discuss whether these prospects and hopes are realistic from a technical point of view and under which conditions AI will contribute to a world worth living in-a goal that is described in Section 2. In Section 3, it is recalled that science and technology are driving forces for the development of human societies. where digitization and artificial intelligence play a more important role in this respect in more recent times. In Sections 4-7, the current hype about AI is referred to, the major methods of AI are sketched, the relation between AI and human intelligence is broached, and typical applications of AI are pointed out. The title question is tentatively answered in the concluding section.
A World Worth Living
In my view, a world worth living is a peaceful world without poverty, hunger, human exploitation and destruction of nature, a world with equal rights and equal opportunities for all people. It enjoys a sustainable economy and a sustainable way of life. Moreover, the use of technologies is compatible with these goals.
As far as digitization is concerned, this characterization may be considered as equivalent to the ideas of digital humanism (cf. [1,2]). I avoid this term as far as possible because it refers rather to digitization under the conditions of humanism while humanism cannot be digital in itself, if my understanding of the meaning of "digital" is correct.
Science and Technology
In combination with societal structures, culture, and economy, science and technology have been essential factors in the development of human civilizations for thousands of years. This development received a big push from industrialization over the last 250 years and a further push by computerization and digitization for over last 70 years, which is reaching a new level via the creation artificial intelligence (AI) and robotics more recently. Therefore, one may ask: can AI help to make the world worth living? It should not be surprising that the answer is not just YES or NO; rather, it depends.
AI Hype
Over the last two decades, one can encounter quite some spectacular successes of AI in gaming (chess, Go, poker, jeopardy, ...) as well as in more practical applications such as language and picture processing. Moreover, some AI experts continue to promise further breakthroughs. Both facts trigger the expectations in politics and economy that AI will become a key technology in the future of future surplus value and-sometimes-even of world leadership (cf., e.g., [3]). Many states have national AI strategies (see, e.g., [4,5]) and are going to invest huge amounts of money developing AI. One wonders about the directions that these developments may take.
AI Methods
Although the area of AI is subdivided into a wide spectrum of topics, they share some basic methods and principles (cf., e.g., [6]). One of the major methods exploited in AI is the use of rules. The kinds of rules include arithmetic laws (e.g., the commutativity a + b = b + a), logic laws (e.g., if a implies b and a is true, then b is true), grammar rules (e.g., a sentence may consist of a subject, a predicate and an object), rules of games (e.g., chess, ludo, bridge, etc.) and logical puzzles (e.g., sudokus, labyrinths, etc.). While these well-known kinds of rules are usually simple in structure and small in number, an AI system may consist of a very large number of partly very sophisticated rules. Nevertheless, the basic principles and uses are alike. They can be applied to underlying discrete information structures performing local changes and provide complex transformations and computations by iteration.
A further major method frequently employed in AI is probability theory and variants to describe uncertainty, vagueness, fuzziness and the like as encountered in many practical and real-world applications.
It is interesting to take note of the fact that machine learning (including so-called deep learning), as one of the most important subareas of AI, combines both main methods in such a way that impressive progress is made in prediction and prescription in a wide range of applications compared to the more classical fields of AI.
The most favored AI principles are known for decades and quite similar to methods used in other areas of computer science and even beyond. The main difference is that the scientific communities are different and do not know much about each other. The notable recent successes of AI are achieved in large part by the growing speed of computation and the growing storage capacities that allow the provision of huge amounts of big data and make ambitious projects possible. In other words, there is no magic, and there are no disruptive innovations. To be fair, I agree that AI has made remarkable progress, but I argue that this due to ordinary scientific and technological reasons.
What about Intelligence?
The major part of AI is referred to as 'weak AI'. Its goal is the simulation of limited processes for which human beings use their intelligence such as playing games, logical deduction, problem solving, language understanding, picture recognition, planning, decision making, etc. One should be aware that there is no big difference between weak AI and computer science in general. The emphases of the latter lay on computing, data storing, searching, sorting, controlling, managing, administrating, routing, etc., as these are all activities for which human beings need their intelligence, too.
How is this kind of simulation of intelligent behavior in very restricted contexts related to natural intelligence and to human intelligence, in particular? There is no final answer, as the functioning of natural intelligence is not fully understood. However, on the phenomenological level, big differences can be seen. Let us consider, for example, the AI concept of 'deep learning'. Nothing is really deep about it. It is based on artificial neural networks with multiple layers between the input layer and output layer (deeply stacked). The 'learning' of such a network is rather 'training' by 1000s and 1000s of input data samples of whatever should be 'learnt'. Typical examples are pictures of cats and dogs or-more practically-of dermal cancers. In contrast to that, humans and even very young children usually only need very few samples to learn something. Moreover, learning takes place all the time and concerns a wide spectrum of topics simultaneously.
In contrast to weak AI, the proponents of 'strong AI' (or artificial general intelligence) aim at systems that behave intelligently in the same way as humans behave intelligently. Some of them even strive after superintelligent systems that are more intelligent than hu-mans-a popular topic in fiction (cf., e.g., [7][8][9]). But, as far as I can see, there is not the least indication yet that this may come true soon or at all. Most authors, such as those of [10], agree with my opinion.
AI Applications
Since, all over the world, a lot of money is available for the development of AI technologies, many applications are in progress and planned. A good portion of these look promising. AI-including robotics-is already used in medicine, production, transportation, etc., with some success. Moreover, many prototypical applications are under development, such as autonomous vehicles, service robots, and many more. However, several of them are in obvious contradiction to a world worth living. There are applications that increase the profit of private companies and nothing more, that consume large amounts of energy, that are used for social surveillance (in the small and large scales), that increase the horror of war by autonomous lethal weapons, drone swarms, and various further military applications.
Conclusions
Can AI help to make the world worth living? There is and will be a lot of money to be spent in AI. There are thousands and thousands of active AI researchers, engineers, developers, and managers who will take and spend this money and produce plenty of outcomes of all kinds. This includes AI applications that make the rich richer and powerful people more powerful. It also includes AI-based social surveillance undermining human rights, as well as new and horrible AI-based weaponry. The attitude of many AI experts is arrogant and ignorant. They avoid not only considering and respecting the limits of AI, but also develop whatever is required and paid for. They ignore the fact that AI algorithms and AI systems are often uncontrollable and inscrutable, making them dangerous and risky. Ethical aspects play a minor role, if at all. The expectations in politics and economy are exaggerated, as they do not take into account that the development of technology is a slow process, can fail, often costs much more than calculated at the beginning and the results may not meet given promises. Furthermore, AI is not the only cutting-edge technology. It may be necessary to be considered in the context of bio-and nano-information technology as a whole (cf., e.g., [11]). All this indicates the answer NO.
Nevertheless, the answer can be YES if all efforts are directed toward the goal. This requires a dramatic change of the framework conditions, guidelines, and aims of politics and the economy. It depends very much on the way technologies such as AI are further developed and employed. The leaders of the world on the one hand and scientists and engineers on the other hand must obey Hans Jonas' imperative of responsibility [12]: "Act so that the effects of your action are compatible with the permanence of genuine human life." One should be aware that global challenges in the form of climate change, the division between the poor and rich, the violation of human rights in many states, and the world-wide arms race cannot be coped with by relying on technology only. Acknowledgments: I would like to thank Sabine Thürmel and the anonymous reviewers for their helpful comments. | 2,424.8 | 2022-04-05T00:00:00.000 | [
"Computer Science",
"Philosophy"
] |
Metabolome and proteome analyses reveal transcriptional misregulation in glycolysis of engineered E. coli
Synthetic metabolic pathways are a burden for engineered bacteria, but the underlying mechanisms often remain elusive. Here we show that the misregulated activity of the transcription factor Cra is responsible for the growth burden of glycerol overproducing E. coli. Glycerol production decreases the concentration of fructose-1,6-bisphoshate (FBP), which then activates Cra resulting in the downregulation of glycolytic enzymes and upregulation of gluconeogenesis enzymes. Because cells grow on glucose, the improper activation of gluconeogenesis and the concomitant inhibition of glycolysis likely impairs growth at higher induction of the glycerol pathway. We solve this misregulation by engineering a Cra-binding site in the promoter controlling the expression of the rate limiting enzyme of the glycerol pathway to maintain FBP levels sufficiently high. We show the broad applicability of this approach by engineering Cra-dependent regulation into a set of constitutive and inducible promoters, and use one of them to overproduce carotenoids in E. coli.
ngineering synthetic metabolic pathways by inserting new enzymes into the metabolic network of microbes is a common approach to expand the spectrum of chemicals that they can overproduce 1 , or gain access to new feedstocks like atmospheric CO 2 2 . However, synthetic metabolism interferes with the endogenous one, often in a way that impairs the cellular growth and fitness of the host. For example, overproduction pathways consume metabolites that are no longer available for the growth and metabolism of the host. This competition between synthetic and endogenous metabolism leads to a metabolic burden that causes stress responses and physiological changes of the host 3 . Eventually, metabolic burden and the accompanying perturbations to metabolism reduce the overall fitness and productivity of the engineered microbes. Therefore, the current challenge in metabolic engineering is to minimize metabolic burden, while maximizing flux through synthetic metabolic pathways.
An approach to avoid metabolic burden is to express synthetic pathways in non-growing microbes using two-stage bioprocesses 4,5 . Non-growing microbes are less susceptible to metabolic burden, because they have a lower requirement for biomass building blocks and energy. However, unlike actively growing cells, non-growing cells have a low overall metabolic activity 6 , and this can limit the flux and productivity of synthetic metabolic pathways. Thus, the higher metabolic activity of growing cells is undoubtedly an advantage but requires the optimization of the enzyme levels in the synthetic metabolic pathway in such a way that sufficient resources remain for cell growth 7 . Optimal control of enzyme expression has been achieved at various levels of transcription and translation, for example by engineering promoters 8 or ribosome-binding sites 9 . However, these methods are static because they do not allow adjusting the enzyme levels to the changing internal and external conditions 10 . To dynamically control the expression levels of enzymes, feedback mechanisms have been introduced into synthetic metabolic pathways. An approach to do so is to express enzymes in the synthetic pathway under the control of promoters that bind transcription factors (TFs). The activity of the TFs in turn is controlled by intermediates or precursors of the synthetic pathway. The resulting feedback between metabolism and gene expression improved overproduction of lycopene 11 , fatty acids 12,13 , and precursors of isoprenoids 14 . Another approach to engineer metabolic feedback regulation of gene expression is to combine CRISPR interference with transcriptional regulators that sense stress of the host 15 .
Here, we used glycerol production in E. coli as an example to systematically study the cause and consequences of metabolic burden in engineered bacteria. We chose the glycerol pathway because it is a simple two-step pathway that drains precursors from one of the most central pathways, glycolysis. According to the United States Department of Energy (DOE), glycerol belongs to the top ten value-added chemicals from biomass 16 , and can be a precursor for other bio-based products like acrylic acid 17 and 1,3-propanediol 18 . First, we controlled the glycerol pathway with an arabinose-inducible pBAD promoter and observed that already low levels of inducer caused a growth burden. Metabolomics and proteomics data indicated that the growth burden was caused by a transcriptional response in glycolysis, notably the activation of gluconeogenesis by the transcription factor Cra. Next, we combined theoretical and experimental analysis to show that insertion of a Cra-binding site into the pBAD promoter enables higher growth rates at higher glycerol production rates. Finally, we show that this approach is generally applicable to synthetic pathways that utilize glycolytic metabolites such as carotenoid production.
Results
Glycerol production causes a growth burden in E. coli. To investigate how induction of a synthetic metabolic pathway impacts the metabolism of the host, we expressed the glycerol biosynthesis pathway from yeast in E. coli (Fig. 1a). The glycerol pathway is a two-step pathway that starts from the glycolytic metabolite dihydroxyacetone phosphate (DHAP). The first reaction is catalyzed by the glycerol-3-phosphate dehydrogenase 1 (GPD1), which converts DHAP into glycerol-3-phosphate (glycerol-P). The second reaction is catalyzed by the glycerol-3phosphate phosphohydrolase 2 (GPP2) and leads to dephosphorylation of glycerol-P into the product glycerol. Our E. coli strain for glycerol production expressed the two genes encoding gpd1 and gpp2 from a plasmid, and lacked the glycerol kinase gene (glpK) to prevent that glycerol is re-utilized as a carbon source ( Supplementary Fig. 1). In the following, we will refer to this strain as the base strain.
We sought to control glycerol production by expressing the first enzyme in the glycerol pathway (GDP1) with an arabinoseinducible pBAD promoter, and the second enzyme (GPP2) with a strong constitutive promoter (Fig. 1a). Expressing GFP with the pBAD promoter showed a linear relationship between the concentration of arabinose (ara) and promoter activity (Fig. 1b). Thus, we expected that the pBAD promoter would allow us to linearly control the abundance of GPD1 and thereby gradually increase glycerol production (Fig. 1c). However, already low ara levels (0.3%) caused a strong growth defect and low biomass concentrations and titers of glycerol (Fig. 1d). The maximal glycerol titers were achieved with 0.1% ara (17.71 mM, Fig. 1d).
We then examined the mechanisms that caused the growth burden at higher ara levels. We excluded that the protein cost of GPD1 expression was burdensome, because expressing GFP from the pBAD promoter did not affect growth ( Supplementary Fig. 2). Thus, the growth burden was likely caused by the competition between the glycerol and glycolytic flux. Flux balance analysis (FBA) with a genome-scale model of E. coli metabolism 19 predicted that growth and glycerol production rates follow a linear relationship (line in Fig. 1e), which reflects the trade-off between utilizing glucose for production of either biomass or glycerol. To test if the base strain followed this theoretical tradeoff, we measured glycerol production rates and growth rates at three induction levels: 0, 0.1, and 0.5% (dots in Fig. 1e and Supplementary Fig. 3). However, the experimentally determined rates did not follow the theoretical trade-off that was predicted by FBA (Fig. 1e). The measured glycerol production rates and growth rates at 0.5% ara were markedly lower than the theoretical ones, thus indicating that other factors than flux balances were responsible for the growth burden.
In summary, the pBAD promoter enabled us to linearly increase protein expression (Fig. 1b). However, we could not use the pBAD promoter to modulate growth rates and glycerol production rates according to a theoretical trade-off estimated by flux balance analysis (line in Fig. 1e). Instead, at 0.5% induction of the glycerol pathway, the measured growth rates decreased much stronger than predicted by FBA (dots in Fig. 1e).
Glycerol production activates the transcription factor Cra by decreasing fructose-1,6-bisphosphate levels. To understand the molecular mechanisms that caused the growth burden in the base strain, we measured the metabolome at three induction levels: 0, 0.1, and 0.5% ara. Therefore, we cultured the strain in shake flasks and collected samples for metabolomics by fast filtration (Fig. 2a). The metabolome data covered 96 metabolites ( Supplementary Fig. 4) that remained relatively constant at 0.1% ara but displayed strong changes at 0.5% ara (Fig. 2b). The most strongly decreased metabolite at 0.5% ara was the direct precursor for the glycerol pathway, DHAP ( Fig. 2b and Supplementary Fig. 4). Also, fructose-1,6-bisphosphate (FBP), which is directly upstream of DHAP, was one of the most strongly responding metabolites and decreased more than 5-fold in the presence of 0.5% ara (Fig. 2b). These data demonstrate that glycerol overproduction perturbs metabolites near the entry point of the engineered pathway.
FBP is a regulatory metabolite that is responsible for a glycolytic flux-dependent regulation of gene expression in E. coli 20 . FBP inhibits the activity of the transcription factor Cra, which inhibits the expression of genes encoding glycolytic enzymes and activates gluconeogenesis-related genes (Fig. 2c). Although it is currently unclear whether FBP is a direct or indirect effector of Cra 21 , it is widely assumed that the concentration of FBP affects Cra activity. Correspondingly, we wondered whether the low concentration of FBP (at 0.5% ara) activated Cra and thereby changed gene expression and enzyme levels. To test this, we probed the proteome at the three induction levels, and inspected the abundance of a total of 38 enzymes in glycolysis and gluconeogenesis (Fig. 2d). Similar to metabolites, enzyme levels changed stronger at 0.5% induction than at 0.1% induction. To test if proteome changes were caused by Cra, we measured the proteome of the Cra deletion strain (Δcra) as a reference (Fig. 2d). One of the most strongly decreased enzymes in the Δcra strain was the phosphoenolpyruvate synthetase (PpsA). The strong effect of Cra on the expression of PpsA is consistent with previous studies, which showed that the ppsA promoter is under the direct control of Cra 22 . In our base strain, PpsA was one of the most strongly increased enzymes at 0.5% ara ( Fig. 2d), thus indicating a high activity of Cra in this strain. Moreover, the base strain had low levels of glycolytic enzymes that are known to be repressed by Cra, such as glyceraldehyde-3phosphate dehydrogenase (GapA).
Taken together, proteome and metabolome data suggest that induction of the glycerol pathway with 0.5% ara decreased the concentration of FBP. This, in turn, activated the transcription factor Cra which then downregulated enzymes in glycolysis (e.g., GapA) and upregulated enzymes in gluconeogenesis (e.g., PpsA). Because cells grew on a glucose minimal medium we hypothesized that activation of gluconeogenesis was responsible for the growth burden. We confirmed this hypothesis by deleting cra in the base strain (Fig. 2e). The resulting Δcra strain grew indeed better than the base strain at high induction of the synthetic glycerol pathway, and the maximum glycerol titers increased 1.6fold (compare Figs. 2e and 1d). Thus, Cra-regulation contributes to the growth burden of glycerol overproduction in E. coli.
A metabolic model predicts optimization strategies for glycerol production. To obtain additional evidence that transcriptional regulation by Cra is a problem for glycerol production, we developed a small kinetic model (Fig. 3a). The model included one metabolite (FBP) and two enzymes e1 and e2. Enzyme e1 corresponds to glyceraldehyde-3-phosphate dehydrogenase (GapA) in lower glycolysis and e2 is GPD1 in the glycerol pathway. FBP influenced reaction rates in lower glycolysis (r lower_glycolysis ) and in the glycerol pathway (r glycerol ) according to Michaelis-Menten kinetics, which are a well-established kinetic format for enzymatic reactions 23 . Similar to flux balance analysis Fig. 1 Overproduction of glycerol causes a growth burden in E. coli. a Metabolic map of E. coli glycolysis and the synthetic glycerol pathway (orange). The synthetic pathway consists of two enzymes from S. cerevisiae: glycerol-3-phosphate dehydrogenase 1 (GPD1) and glycerol-3-phosphate phosphohydrolase 2 (GPP2). The genes encoding the two enzymes (gpd1 and gpp2) were expressed from a plasmid using the arabinose-inducible pBAD promoter for gpd1 and the constitutive promoter pJ23101 for gpp2. b Activity of the pBAD promoter at different arabinose levels. GFP fluorescence and OD 600 were measured in n = 2 plate reader cultures, and promoter activity was calculated as dGFP/dt/OD by regression analysis between 7 and 9 h. c Schematic of the control strategy for the synthetic glycerol pathway. GPP2 is expressed in excess to ensure that GPD1 is the rate-limiting step. GPD1 levels are varied by inducing the pBAD promoter with different amounts of arabinose. Size of boxes indicates enzyme levels, size of arrows indicate flux through the pathway. d gpd1 and gpp2 were expressed from a plasmid in an E. coli strain lacking glpK (base strain). The base strain was cultured in 96-well plates. Growth was measured in a plate reader at different induction levels of GPD1 (0, 0.1, 0.3, 0.5, 1, and 2% ara). Glycerol in the medium was measured after 24 h. Growth rates were determined by regression analysis between 5 and 10 h. Growth curves and dots show the means of n = 2 plate reader cultures. e Theoretical relationship between glycerol flux and growth rate based on flux balance analysis with a genome-scale model of E. coli metabolism (iML1515). Dots are growth rates and glycerol production rates measured in shake flask cultures of the base strain at 0, 0.1, and 0.5% ara. Source data are provided in the Source Data file. Fig. 1e), we assumed a constant influx in the model and fixed the reaction rate in upper glycolysis to 4.9 mmol g −1 h −1 . This means that FBP is produced at a constant rate and it is either used for glycerol production or for growth according to the mass balances in Eq. (1).
In total, we analyzed three different models, each with a different regulatory structure (Fig. 3b). A model of the base strain included an interaction, in which FBP activates the expression of enzyme e1. This interaction resembled transcriptional regulation of lower glycolysis by Cra. A second model of the Δcra strain had no regulation. In a third model, FBP activates the expression of both enzymes e1 and e2. In this doubly regulated model (2xcra model) both lower glycolysis and the glycerol pathway were subject to Cra-regulation. We simulated Cra-regulation with a power-law term that affects the maximal enzyme expression rate. Since the power-law term equals one in the un-induced state, all models share the same parameter set.
The three models were analyzed with 5000 parameter sets that were randomly sampled from physiologically meaningful ranges based on literature values (Table 1). We sampled the power-law exponent between 1 and 2, in order to ensure that Cra-regulation depends at least linearly on the concentration of FBP and to avoid instabilities that can occur at exponents <2. For each of the 5000 parameter sets, we calculated the maximal glycerol production rate (r glycerol,MAX ) that can possibly be achieved given the specific set of parameters. To estimate r glycerol,MAX , we made use of a numerical continuation method 24 , which iteratively increases the expression rate of enzyme e2 (β 2 ) and computes the new steady state for FBP, e1, and e2. After each iteration, the continuation method determines the stability of the model by inspecting the eigenvalues of the Jacobian matrix 24 , and terminates if instabilities occur in the model. If the model remains stable, the continuation method terminates at the maximal expression rate of e2 (β 2,max ), which we defined as the rate were 20% of the ribosomes translate e2. Thus, r glycerol,MAX is the glycerol production rate at the termination point of the continuation method and we obtained 5000 values of r glycerol,MAX for each of the three models (Fig. 3c).
The distribution of the 5000 r glycerol,MAX values showed that the Δcra model performed better than the base strain model, because more parameter sets achieved higher maximal glycerol b Intracellular concentration of dihydroxyacetone phosphate (DHAP) and fructose-1,6-bisphosphate (FBP). Data are normalized to the 0% culture. Dots are data of n = 3 independent shake flask cultures and bars are the mean. c Fructose-1-phosphate (F1P) and FBP inhibit the activity of the transcription factor Cra. Cra activates the expression of genes encoding gluconeogenic enzymes (e.g., ppsA) and represses those of glycolytic enzymes. d Proteome data showing the relative abundance of proteins in the base strain with 0.1% ara and 0.5% ara. Data are normalized to the base strain with 0% ara. Δcra is the proteome of a cra deletion strain, normalized to the wild-type strain. Dots are means of samples from n = 3 independent shake flask cultures (a). Shown are only protein levels with a relative standard deviation smaller than 20%. Blue dots are enzymes that belong to glycolysis or gluconeogenesis in the iML1515 model. e The glycerol pathway was expressed in the Δcra strain. The Δcra strain was cultured in 96-well plates. Growth was measured in a plate reader at different induction levels of GPD1 (0, 0.1, 0.3, 0.5, 1, and 2% ara). Glycerol in the medium was measured after 24 h. Growth rates were determined by regression analysis between 5 and 10 h. Growth curves and dots show the means of n = 2 plate reader cultures. Source data are provided in the Source Data file. production rates (r glycerol,MAX ) with the Δcra model than with the base strain model (Fig. 3c). This matched the experimental observation that the Δcra strain performed better than the base strain. The underlying assumption was: the more parameter sets achieve high glycerol fluxes, the higher the likelihood that the real system would achieve them too. The model of the base strain did not achieve high glycerol production rates, because the model was not stable at higher induction levels (Fig. 3c). To better understand the origin of these instabilities, we performed timecourse simulations with the three models and an average parameter set ( Supplementary Fig. 5). The time-course simulations matched the results obtained with the continuation method, thus confirming that both numerical approaches yield the same results. We simulated the models at different induction levels and the base model was not stable at higher induction, because enzyme e2 increased exponentially. Thus, there is a critical point where the expression rate of e2 exceeds its dilution by growth. These imbalances are probably amplified by Cra-regulation, because Cra downregulates e1 and thus growth. The Δcra model, in contrast, was stable at almost all induction levels. The best model in our analysis was the 2xcra model. With this model, the highest fraction of parameter sets achieved high glycerol fluxes (Fig. 3c). Further, the stability of the 2xcra model was similar to the Δcra model (Fig. 3c). Thus, the model analysis predicted that engineering Cra-regulation into the glycerol pathway should lead to higher glycerol production rates and we next tested this prediction experimentally. Maximal glycerol production rate r Glycerol,MAX (mmol g -1 h -1 ) Induction of E 2 (%) . c The three models in b were simulated with the same 5000 parameter sets that were obtained by random sampling (see also Table 1). For each parameter set, β 2 (the expression rate of e2) was increased until the model became unstable or until the expression rate β 2 reached the maximum. Shown is the maximal glycerol flux that was achieved with each model as a cumulative sum distribution. Robustness is shown as the percentage of the 5000 models that remain stable at a given induction level. d The pBAD promoter was engineered by inserting a Cra-binding site between the promoter and the ribosome-binding site, resulting in the pBAD-Cra promoter. Cra-FBP regulation should ensure that low FBP levels repress the pBAD-Cra promoter. The pBAD promoter in the base strain ( Fig. 1a) was replaced with the pBAD-Cra promoter, resulting in the pBAD-Cra strain. e The pBAD-Cra strain (orange) was cultured in 96-well plates. Growth was measured in a plate reader at different induction levels of GPD1 (0, 0.1, 0.3, 0.5, 1, and 2% ara). Glycerol in the medium was measured after 24 h. Growth rates were determined by regression analysis between 5 and 10 h. Small dots show data from n = 2 plate reader cultures and big dots are the mean. Data of the base strain (black) and the Δcra strain (blue) are shown as a reference (same data as in Figs. 1d and 2e). f Growth rates and glycerol production rates of the pBAD-Cra strain measured in shake flasks, at 0, 0.1, and 0.5% ara. The line is the theoretical relationship between glycerol flux and growth rate shown in Fig. 1e. Source data are provided in the Source Data file. A Cra-regulated pBAD promoter improves growth and glycerol production. To engineer a pBAD promoter that is repressed by Cra, we inserted the consensus binding sequence of Cra between the promoter region and the ribosome-binding site (Fig. 3d). Then we expressed GPD1 under the control of this pBAD-Cra promoter and introduced the plasmid in E.coli ΔglpK to create the pBAD-Cra strain. This strain grew indeed much better than the base strain and maintained growth even at full induction with 2% arabinose (orange one in Fig. 3e). At 0.5% ara, growth and glycerol production rates of the pBAD-Cra strain were even higher than the rates at the theoretical trade-off frontier ( Fig. 3f and Supplementary Fig. 6). These data confirm the model prediction that a doubly Cra-regulated strain performs better than the base strain and the Δcra strain.
Next, we compared the activities of the pBAD promoter and the pBAD-Cra promoter in the context of our production system. (Fig. 1a) was replaced with the mutated pBAD promoter and cultured in 96-well plates. Growth was measured in a plate reader at different induction levels of GPD1 (0, 0.1, 0.3, 0.5, 1, and 2% ara). Glycerol in the medium was measured after 24 h. Growth rates were determined by regression analysis between 5 and 10 h. Small dots show data from n = 2 plate reader cultures and big dots are the mean. b Same as a, for strains expressing gpd1 with engineered pBAD promoters that have 0 to 3 Cra-binding sites inserted at different positions. The position of Cra-binding sites is indicated in orange. c GFP plasmids with or without Cra-binding site were expressed in the wild-type (WT) or the Δcra strain. (n = 2, pBAD, WT is the same as in Fig. 1b). Source data are provided in the Source Data file.
Therefore, we replaced GPD1 with a GPD1-GFP fusion protein ( Supplementary Fig. 7). The pBAD-Cra promoter had a 3.7-fold lower activity than the pBAD promoter at 0.5% ara, showing that insertion of the Cra-binding site reduced GPD1 expression. Flow cytometry data revealed that the cell-to-cell variation in GPD1-GFP content was independent of the promoter ( Supplementary Fig. 8), indicating that all the cells of the pBAD-Cra population had a lower promoter activity. Thus, the pBAD-Cra promoter is a 3.7-fold weaker version of the pBAD promoter, probably because Cra represses the promoter. In principle, the same effect could be achieved by other mutations that decrease the activity of the pBAD promoter, for example, mutations between the −10 and −35 boxes 25 . Therefore, we constructed a pBAD promoter with mutations between the −10 and −35 boxes that decrease activity by a factor of two 25 , and analyzed the resulting pBAD-weak strain. The pBAD-weak strain grew indeed better than the base strain and achieved higher glycerol titers (Fig. 4a). However, the pBADweak strain performed worse than the pBAD-Cra strain, which might be due to the different activities of the pBAD-Cra and the pBAD-weak promoter (3.7-fold and 2-fold lower activity than the original pBAD promoter, respectively). Another explanation for the better growth of the pBAD-Cra strain is that the pBAD-Cra promoter is dynamic and the pBAD-weak promoter is static.
If Cra actively inhibits the pBAD-Cra promoter, we expected that insertion of Cra-binding sites outside of the promoter region would have no effect. Indeed, inserting 1 to 3 Cra-binding sites improved growth only when a binding site was inserted directly after the pBAD promoter (Fig. 4b). Even two Cra-binding sites outside of the promoter region gave no improvement. This demonstrates that Cra actively inhibits the pBAD-Cra promoter, and that the improvements are not a consequence of titrating Cra away from its genomic targets.
To obtain further evidence that the pBAD-Cra promoter is functional, we measured the activity of the pBAD promoter and the pBAD-Cra promoter with GFP, both in the wild-type and the Δcra strain. In the wild-type, the pBAD-Cra promoter had lower activity than the pBAD promoter (Fig. 4c), thus indicating that Cra inhibits the promoter. In the Δcra strain, however, the pBAD-Cra promoter had slightly higher activity than the pBAD promoter (Fig. 4c). These results show that the pBAD-Cra promoter is functional: Cra inhibits the pBAD-Cra promoter and this regulation is missing in the absence of Cra (in the Δcra strain). Thus the lower activity of the pBAD-Cra is not merely due to sequence changes but due to active inhibition by Cra.
To further demonstrate the broad utility of this approach we inserted Cra into a constitutive promoter and the pTet promoter ( Supplementary Fig. 9). In both cases, the strain with the Craregulated promoter variant grew better than the strain with the original version. This suggests that Cra inhibits these promoters and automatically reduces their activity.
The Cra-regulated pBAD promoter maintains high FBP levels at high glycerol fluxes. To better understand the dynamic nature of Cra-regulation, we measured the dynamic changes in the metabolome and proteome upon induction of the glycerol pathway. Therefore, we induced the base strain and the pBAD-Cra strain with 0.5% ara and collected metabolomics and proteomics samples for the subsequent 4.5 h. Additionally, we measured growth (Fig. 5a) and the concentration of glycerol in the medium in order to calculate the flux through the glycerol pathway (Fig. 5b). The pBAD-Cra strain grew again much better than the base strain (Fig. 5a). The growth defect of the base strain appeared 1 h after inducer addition, but at this time point, both strains had similar glycerol production rates (~5 mmol g −1 h −1 , Fig. 5b). After 2 h, the glycerol production rate was even higher in the pBAD-Cra strain (10 mmol g −1 h −1 ) than in the base strain . This suggested that it was not glycerol production per se, which impaired growth of the base strain, but rather other effects such as the higher expression levels of GPD1 (Fig. 5c). We then hypothesized that after 1 h Cra was activated in the base strain, while Cra should be less active in the pBAD-Cra strain. To test this hypothesis, we used again the abundance of PpsA as a proxy for Cra activity. In the base strain, PpsA levels increased 1 h after inducer addition, which matches the time when this strain shows a growth defect (Fig. 5c). In the pBAD-Cra strain, however, PpsA levels remained constant after induction, suggesting that in this strain the activity of Cra remains below a threshold that activates gluconeogenesis.
If Cra is less active in the pBAD-Cra strain than in the base, we expected that the latter has lower FBP levels. Indeed, FBP decreased rapidly after inducing the base strain (Fig. 5d). The pBAD-Cra strain, in contrast, had always higher FBP levels than the base strain despite its higher flux in the glycerol pathway. The concentration of other glycolytic metabolites (hexose-P, DHAP, and PEP) was also higher in the pBAD-Cra strain than in the base strain (Fig. 5d). Additionally, we confirmed that the pBAD-Cra strain maintained higher concentrations of FBP under steadystate conditions, by probing the metabolome at constant induction with 0, 0.1, and 0.5% ara (Supplementary Fig. 4).
Thus, the pBAD-Cra strain can maintain higher FBP levels at higher glycerol production rates than the base strain. This suggested that the interaction between FBP and Cra, in combination with the pBAD promoter, counteracts decreases of FBP: (i) if FBP falls below a critical value, Cra activity increases, (ii) higher Cra activity represses the pBAD-Cra promoter and decreases GPD1 expression, and (iii) lower expression of GDP1 will restore the concentration of FBP. Thus our data indicate that this feedback regulation is functional, because it enabled high FBP levels and at the same time high glycerol production rates, which presumably prevented that E. coli switches from glycolysis to gluconeogenesis. However, further experiments are required to experimentally investigate how this feedback loop shapes metabolism in space and time 26 .
Cra-regulation improves the growth of carotenoid overproducing E. coli. Because many bio-based chemicals use glycolytic metabolites as precursors, we wondered if the Craregulated pBAD promoter could have broader applicability. Therefore, we used the pBAD promoter (in its original version and with Cra-regulation) to control a synthetic metabolic pathway for the overproduction of carotenoids (Fig. 6a). Biosynthesis of carotenoids starts from the glycolytic metabolites pyruvate and glyceraldehyde-3-phosphate (GAP), which are converted by the methylerythritol phosphate (MEP) pathway of E. coli into farnesyl diphosphate (FPP). FPP is then further converted into carotenoids by heterologous enzymes from Pantoea ananatis (Fig. 6a). The first two enzymes in the MEP pathway (Dxs and Dxr) were overexpressed from a plasmid using the two versions of the pBAD promoter, and we refer to the two plasmids as pController and pController-Cra, respectively. The remaining enzymes in the carotenoid pathway (CrtE/B/I/Y/Z) were expressed from a second plasmid (pCarotenoid) using a native promoter of P. ananatis.
Expressing only the pCarotenoid plasmid led to a basal production of carotenoids and did not influence cell growth (Supplementary Fig. 10). Carotenoid production increased almost 3-fold when E. coli carried the pCarotenoid plasmid together with either the pController or pController-Cra plasmid (Fig. 6b). However, with the pController plasmid, higher carotenoid levels were only achieved within a small range of inducer, whereas the pController-Cra plasmid performed well over a broader range of ara levels. Thus, similar to the glycerol pathway, we observed that the promoter with Cra regulation enables higher growth rates at high induction levels of a synthetic carotenoid pathway, and that the higher amount of inducer does not impact cell growth and productivity (Fig. 6b). Taken together, the results suggest that the Cra-regulated pBAD promoter is generally applicable and may enable bioengineers to regulate the expression of a wide range of synthetic pathways that use glycolytic metabolites.
Discussion
In this study, we used the arabinose-inducible pBAD promoter to control synthetic metabolic pathways in E. coli. While, in the case of GFP expression, the pBAD promoter showed a linear relationship between the concentration of inducer (ara) and expression rates, this was not observed for glycerol overproduction. In the latter case, the problem was that a small increase of inducer was sufficient to cause a strong growth burden that constrained productivity. A consequence of the growth burden was a low biomass concentration and, consequently, low glycerol titers at the end of the cultivations. A classical view is that synthetic metabolic pathways are a burden for industrial microbes because they consume cellular resources 27 . Recent multi-omics data suggest that synthetic metabolic pathways can place an additional burden on cellular metabolism by perturbing the proteome and metabolome 28 . Perturbations of the metabolome could be critical especially if they alter the concentration of regulatory metabolites that control the expression and function of proteins, for example via metabolite-protein interactions 29,30 . Here, we observed that glycerol overproduction decreased the concentration of glycolytic metabolites, and that this caused misregulation at the level of transcription. More specifically, low concentrations of the regulatory metabolite FBP activated the transcription factor Cra and thereby upregulated the expression of gluconeogenesis enzymes likes PpsA. These results demonstrate the importance of maintaining regulatory metabolites above a critical threshold in engineered microbes. However, synthetic pathways often lack the regulatory mechanisms that maintain metabolite concentration homeostasis in natural metabolic pathways, such as directed overflow 31 , allosteric enzyme regulation 32 , or transcriptional regulation 33 . Here we show that engineering transcriptional regulation (Cra-regulation) in a synthetic glycerol pathway can help to maintain regulatory metabolites above a critical threshold: the pBAD-Cra strain has higher FBP levels and at the same time a higher glycerol flux than the base strain. This supports the hypothesis that Cra-dependent regulation counteracts a decline in the concentration of FBP by downregulating the expression of the glycerol pathway in response to decreasing FBP levels. However, as it remains an open question whether this regulation is truly dynamic, we cannot rule out the possibility that, due to the constant inhibitory activity of Cra, the pBAD-Cra promoter simply functions as a weaker pBAD promoter. Future studies should clarify whether the pBAD-Cra promoter automatically adapts to new conditions, e.g., by shifting the glycerol producers between different environments.
Previous studies mainly focused on the transcriptional consequences of promoter engineering 34,35 , or demonstrated the ability of engineered promoters to increase the productivity of synthetic pathways [11][12][13][14][15] . Here, we combined metabolomics and proteomics to study the consequences of engineered promoters at the level of host metabolism. From a biotechnological perspective, this approach will help to engineer improved production strains that can autonomously buffer external perturbations in industrial-scale bioreactors 36,37 , and internal perturbations like gene expression noise 38,39 .
Methods
Construction of strains and plasmids. Strains and plasmids are listed in Supplementary Table S1. Strains derived from Escherichia coli K-12 MG1655 (wild type, DSMZ No. 18039) were used for glycerol and GFP production. Strains derived from E. coli BW25113 (KEIO collection 40 ) were used for carotenoid production. Consensus sequences of Cra are AGCTGAAGCGTTTCAGTC (from epd gene). All oligonucleotides used for cloning were synthesized by Eurofins Genomics (Germany GmbH) and are listed in Supplementary Table S2. Target genes were amplified to obtain linear fragments by PCR using Q5 High-Fidelity DNA Polymerase (M0491L, BioLabs). Circular polymerase extension cloning (CPEC) 41 and Gibson assembly (E2611S, Biolabs) were used for cloning. The No-SCAR system 42 for genome editing was obtained from Addgene (#62654, #62655, and #62656, see Supplementary Table S1) and used for the construction of ΔglpK and Δcra. The genes for glycerol-producing enzymes were cloned from yeast chromosomal DNA (Saccharomyces cerevisiae SEY6210) into the pBAD promoter. The Cra consensus binding sequence was inserted into the pBAD-Cra plasmid by PCR. The pController plasmid carries dxs and dxr genes from E. coli MG1655. The pController-Cra plasmid was derived from the pController plasmid by inserting the Cra-binding site.
Cultivations. All cultivations were performed using an M9 minimal medium with 5 g l −1 glucose. M9 medium was composed of (per liter): 42 For pre-cultures, frozen bacterial stocks were plated on LB agar plates with the respective antibiotics and single colonies were used to inoculate 5 ml LB-pre-cultures in tubes. From this first pre-culture a second M9 pre-culture was inoculated 1:1000 and incubated overnight at 37°C under shaking. For cultivations in microtiter plates, 96-well flat transparent plates (Greiner Bio-One International) containing 150 µl M9 minimal medium were inoculated 1:150 from the overnight M9-culture. Online measurements of optical density at 600 nm (OD 600 ) with the glycerol production strains were performed at 37°C with shaking in a plate reader (Epoch, BioTek Instruments Inc, USA). Online measurements of OD 600 and additional measurements of GFP fluorescence (excitation 490 nm, emission 530 nm) of the GFP production strain were performed at 37°C with shaking in a plate reader (Synergy, BioTek Instruments Inc, USA). Growth rates were calculated as dln(OD 600 )/dt by linear regression over the indicated time windows. For cultivations in shake flask, a 500 ml shake flask containing 35 ml M9 minimal medium (5 g l −1 glucose) was inoculated 1:150 from the overnight M9culture and incubated at 37°C under shaking at 220 rpm. Antibiotics were added as required: kanamycin (50 μg ml −1 ), ampicillin (100 μg ml −1 ), and spectinomycin (100 μg ml −1 ). For FACS (fluorescence-activated cell sorting) measurement, 10,000 cells were sorted per sample by BD LSRFortessa SORP flow-cytometer (BD Biosciences, USA). A 488-nm laser (blue) at 100 mW was used for green fluorescent with a 510/20 band pass filters. BD FACS Diva software (BD Biosciences, USA) and Flow Cytometry GUI for Matlab (version 1.3.0.0) by Nitai Steinberg (https://ww2. mathworks.cn/matlabcentral/fileexchange/38080-flow-cytometry-gui-for-matlab) were used for the analysis of acquired data.
Glycerol measurements. Glycerol was measured in the culture supernatant with a glycerol enzyme assay kit (MAK117-1KT, Sigma). 10 μl supernatant were mixed with 100 μl reaction buffer and incubated for 20 min. Absorbance was measured at 570 nm in a plate reader (Epoch, BioTek Instruments Inc, USA).
Quantification of carotenoids. Carotenoid production strains were cultivated in 96-well plates with M9 minimal medium containing 0.5% glucose and an additional 20% LB at 37°C with continuous shaking. After 24 h cultivation, cells were harvested by centrifugation at 3220 × g. Cell pellets were resuspended in 120 µl DMSO 43 , and sonicated for 30 s. Samples were centrifuged again at 3220 × g and 50 µl of the supernatants were transferred to a 384-well plate, and carotenoids were quantified by measuring the absorbance at 470 nm. For absolute quantification, standards of β-carotene (C4582-25MG, Sigma) were prepared at final concentrations of 5, 10, 25, and 50 mg l −1 .
Metabolomics measurements. Shake flask cultivations on M9 glucose were performed as described above. For steady-state metabolomics, cells were grown to an OD 600 of 0.5 and 2 ml culture aliquots were vacuum-filtered. For time-course metabolomics, volumes of samples were adjusted based on the OD 600 of the culture to obtain 1 ml with OD 600 1. Culture aliquots were immediately filtered on a 0.45 µm pore size filter (HVLP02500, Merck Millipore) and filters were transferred into an extraction solution consisting of acetonitrile/methanol/water (40:40:20 (v/v)). Extracts were centrifuged for 20 min at −9°C at 17,000 × g to remove the cell debris. Centrifuged extracts were mixed with 13 C-labeled internal standard and analyzed by LC-MS/MS, with an Agilent 6495 triple quadrupole mass spectrometer (Agilent Technologies) 44 . An Agilent 1290 Infinity II UHPLC system (Agilent Technologies) was used for liquid chromatography and controlled by the Agilent MassHunter Acquisition software (Version B.07.01). The temperature of the column oven was 30°C, and the injection volume was 3 μl. LC solvents A were water with 10 mM ammonium formate and 0.1% formic acid (v/v) (for acidic conditions); and water with 10 mM ammonium carbonate and 0.2% ammonium hydroxide (for basic conditions). LC solvents B were acetonitrile with 0.1% formic acid (v/v) for acidic conditions and acetonitrile without additive for basic conditions. LC columns were an Acquity BEH Amide (30 × 2.1 mm, 1.7 µm) for acidic conditions, and an iHILIC-Fusion(P) (50 × 2.1 mm, 5 µm) for basic conditions. The gradient for basic and acidic conditions was: 0 min 90% B; 1.3 min 40% B; 1.5 min 40% B; 1.7 min 90% B; 2 min 90% B. Quantification of intracellular metabolite concentrations was based on the ratio of 12 C and 13 C peak heights.
Proteomics measurements. Cultivations were performed as described above. Culture aliquots were transferred into 2 ml reaction tubes and washed two times with PBS buffer (0.14 mM NaCl, 2.7 mM KCl, 1.5 mM KH 2 PO 4 , and 8 mM Na 2 HPO 4 ). After washing, cell pellets were resuspended in 200 µl of lysis buffer containing 100 mM ammonium bicarbonate and 0.5 % sodium laroyl sarcosinate. Cells were again incubated for 15 min with 5 mM Tris(2-carboxyethyl)phosphine (TCEP) at 95°C followed by alkylation with 10 mM iodoacetamide for 30 min at 25°C. We used SP3 bead method 45,46 for a large number of samples. Fixed protein amount of 50 µg measured by BCA assay (23225, Thermo Fischer) and mixed with 4 µl SP3 beads stock (mixed 20 μl of each Sera-Mag Beads A and B (GE Healthcare) with 100 µl ddH 2 O) in 96-well high-volume v-bottom plate (710879, Biozym Scientific GmbH). To initiate protein binding to the beads, 75 μl of 100% ethanol were added with the mixture of protein and beads for 15 min at room temperature. Tubes were placed in a magnetic rack for 5 min. The supernatant was discarded and the beads were rinsed two times with 200 µl of 70% ethanol and then 180 µl of 100% ethanol on a magnetic rack. For proteolytic digest, tubes were removed from the magnetic rack, and the beads were reconstituted in 28 µl 10% acetonitrile/10 mM NH 4 HCO 3 with 1 µg trypsin (Promega) incubated shaking overnight at 30°C. After incubation, the tubes were sonicated for 30 s and placed on a magnetic rack, and the supernatant containing tryptic peptides was recovered and transferred to new tubes. Recovered peptides were acidified by adding trifluoroacetic acid (TFA) to a 1.5% final concentration. Peptides were then purified through C18 microspin columns (Harvard Apparatus) according to the manufacturer's instruction. The eluted peptides were dried and resuspended in 0.1% TFA for analysis of peptides. Analysis of peptides was performed by a Q-Exactive Plus mass spectrometer connected to an Ultimate 3000 RSLC nano with a Prowflow upgrade and a nanospray flex ion source (Thermo Scientific) as previously described 32,47 . Briefly, peptides were separated by a reverse-phase HPLC column (75 μm × 42 cm) packed with 2.4 μm C18 resin (Dr. Maisch GmbH, Germany) at a flow rate of 300 nl/min by gradient model which is from 98% solvent A (0.15% formic acid) and 2% solvent B (99.85% acetonitrile, 0.15% formic acid) to 35% solvent B over 84 min. The data acquisition was set to obtain one high-resolution MS scan at a resolution of 70,000 full width at half maximum (at m/z 200) followed by MS/MS scans of the 10 most intense ions. Label-free quantification (LFQ) of the data acquired from mass spectrometry was processed with Progenesis QIP (Waters), and MS/MS search was performed in MASCOT (v2.5, Matrix Science). The following search parameters were used: full tryptic search with two missed cleavage sites, 10 ppm MS1 and 0.02 Da fragment ion tolerance. Carbamidomethylation (C) as fixed, oxidation (M), and deamidation (N,Q) as variable modification. Progenesis outputs were further processed with SafeQuant.
Constraint-based modeling. Flux balance analysis (FBA) was performed with a genome-scale model of E. coli metabolism, iML1515 19 , and COBRApy 48 . Two additional reactions (G3PD_synth, G3PT_synth) were added to simulate the synthetic glycerol pathway. Constraints of the GLYK reaction were set to zero, to simulate deletion of the glyK gene. Additionally, constraints of glycerol-3-phosphate and glycerol dehydrogenases G3PD5, G3PD6, G3PD7, and GLYCDx were set to zero. The model was further constrained to stimulate growth on a minimal medium, with glucose as the sole carbon source at an uptake rate of 8 mmol g −1 h −1 . The oxygen uptake rate was constrained at maximum of 20 mmol g −1 h −1 and uptake of inorganic ions was not constrained (nh4, pi, so4, k, fe2, mg2, ca2, cl, mn2, zn2, ni2, cobalt2, mobd) 19 . FBA was performed with different glycerol production rates between 0 and 14 mmol g −1 h −1 and maximal growth was the objective function.
Kinetic modeling and steady-state analysis. The stoichiometry of the model is shown in Fig. 3a. Mass balancing yields a system of ordinary differential equations (ODEs), F, that is a temporal function of the state variables x and the kinetic parameters p: The metabolite FBP is produced by r upper glycolysis and consumed by r lower glycolysis and r glycerol . Additionally, FBP is diluted by growth. The enzyme e1 is a lower glycolysis enzyme for which we used parameters of glyceraldehyde-3phosphate dehydrogenase (GapA) and e2 is GPD1. Both enzymes are produced by a production term β and they are removed by dilution by growth. We assumed that enzyme degradation contributes little to the overall enzyme turnover and therefore can be neglected.
An upper glycolytic flux of 4.904 mmol g −1 h −1 was estimated with FBA using a glucose uptake rate of 8 mmol g −1 h −1 . With the specific cell volume for E. coli (2 µl mg −1 ) 49 the reaction rate r upper glycolysis is: r upper glycolysis ¼ 4:904 mmol g À1 h À1 0:002 l g À1 * h 60 min ¼ 40:87 mM min À1 ð2Þ The reactions r lower_glycolysis and r glycerol follow Michaelis-Menten kinetics: The expression rates of enzyme 1 (GapA) and enzyme 2 (GPD1) are: and Cra-regulation was simulated with a power-law term FBP FBP SS α that affects the maximal enzyme expression rate. The power-law format has the advantage that the power-law term equals one in the un-induced state and therefore allows the same parameter values for the base model, the Δcra model, and the 2× cra model. Further, setting α to zero removes the regulation and therefore α2 was zero in the base model, while α1 and α2 were zero in the Δcra model. We assumed that the growth rate µ is proportional to r lower_glycolysis , because flux balance analysis showed a linear relationship between r lower_glycolysis and the growth rates ( Supplementary Fig. 11). Additionally, previous 13C labeling data showed a positive correlation between lower glycolytic flux and growth in E. coli 50 . With a growth rate of 0.01 min −1 in the un-induced state, the proportionality factor-alpha follows as: In total, the model includes 8 kinetic parameters k cat,1 , k cat,2 , K m,1 , K m,2 , β 1,max , β 2,max , α1, and α2. The parameters were either sampled 5000 times log-uniformly from predefined intervals or calculated based on steady state constraints. K m,1 and K m,2 were randomly sampled between 0.01 and 10 mM to account for high and low saturation of enzymes.
The power-law exponents α1 and α2 were randomly sampled between 1 and 2. The lower bound was 1 to ensure that the expression rate is at least linearly dependent on the FBP concentration. The upper bound was 2 to avoid higherorder dynamics that can cause instabilities 51 .
The k cat,2 value was based on the kinetic parameter of GPD1 (k cat,2 = 1705 min −1 ) 52 and was sampled between 0.33-fold and 3-fold of this literature value. The parameter k cat,1 followed from the steady-state constraint of the un-induced state where r glycerol = 0. β 1,max was derived from the mass balances of e 1 , assuming steady state: The concentration of e1 was 0.0238 mM, based on quantitative proteome data for GapA 53 resulting in β 1,max = 0.000238 mM min −1 .
The maximal enzyme expression rate in the glycerol pathway (β 2,max ) was defined by the translation rate of ribosomes according to: Equation (11) considers the following parameters that were derived from the Bionumbers Database 54 : average translation rate (r t = 8.4 amino acids s −1 ), the median and abundance weighted protein length (L = 209 amino acids), the fraction of active ribosomes (f R = 0.8) and the cellular volume (V c,0.6 = 3 × 10 −15 L) and at a growth rate of µ = 0.6 h −1 , the Avogadro number (N A = 6.02 × 10 23 mol −1 ), the number of ribosomes per cell at that growth rate (R 0.6 = 8000 ribosomes cell −1 ). The fraction of ribosomes (p) that synthesize GPD1 at full induction was assumed to be 20%, because only 50% of the ribosomes can translate a heterologous protein and this is associated with significant protein burden 55 .
Steady state and robustness analysis. To obtain steady states of the un-induced system, β 2,max and e2 were set to zero. Then 6 parameters were randomly sampled from intervals defined above. The 2 parameters (k cat1 and b 1,max ) were calculated to ensure steady-state conditions. To test the stability of the steady states, eigenvalues of the Jacobian matrix were calculated, and tested if all eigenvalues are negative (λ < −10 −5 ). The procedure was repeated until 5000 stable steady states were achieved. Next, induction (ind in Eq. (6)) was iteratively increased from 0 to 1 using a numerical parameter continuation method. The method is based on finding a connected path of steady-state concentrations (x ss : steady-state concentration vector containing e 1ss , e 2ss , FBP ss ) as a parameter p is varied. As the system is in steady state it follows that: The derivative of F(x ss ,p) with respect to the parameters is also zero: dF x ss ; p À Á dp ¼ δF δx ss Á dx ss dp þ δF δp ¼ 0 ð13Þ After rearranging Eq. (13), Eq. (14) is obtained: which describes the changes in the steady-state concentrations as a kinetic parameter that is varied iteratively. The iteration stops when one of the following two stability criteria is no longer fulfilled. 1st criterion: all real parts of the eigenvalues of the system's Jacobian need to be negative. In Eq. (14), the inverse of the Jacobian Matrix (δF/δx SS ) is required. The inversion is only possible as long as the matrix is regular. Once an eigenvalue reaches zero, the Jacobian becomes singular and matrix inversion is no longer possible. This bifurcation point defines the boundary between the stable and unstable parameter space. In other words: after this point is passed, the system cannot return to a stable steady state.
Calculating the eigenvalues of the Jacobian at each step ensures that the iteration is terminated when one eigenvalue exceeds λ = −10 −5 . The 2nd criterion is that all variables are positive.
Quantification and statistical analysis. Statistical analysis was performed with Matlab (R2018b) and custom Matlab scripts. The number of replicates (n) of each experiment can be found in the respective figure caption. For proteomics and metabolomics n represents the number of independent shake flask cultures. In growth assays, n represents the number of independent microtiter plate cultures. | 11,545.8 | 2021-08-13T00:00:00.000 | [
"Biology",
"Engineering",
"Environmental Science"
] |
Enterococcus faecalis from Healthy Infants Modulates Inflammation through MAPK Signaling Pathways
Colonizing commensal bacteria after birth are required for the proper development of the gastrointestinal tract. It is believed that bacterial colonization pattern in neonatal gut affects gut barrier function and immune system maturation. Studies on the development of faecal microbiota in infants showed that the neonatal gut was first colonized with enterococci followed by other microbiota such as Bifidobacterium. Other studies showed that babies who developed allergy were less often colonized with Enterococcus during the first month of life as compared to healthy infants. Many studies have been conducted to elucidate how bifidobacteria or lactobacilli, some of which are considered probiotic, regulate infant gut immunity. However, fewer studies have been focused on enterococi. In our study, we demonstrate that E. faecalis, isolated from healthy newborns, suppress inflammatory responses activated in vivo and in vitro. We found E. faecalis attenuates proinflammatory cytokine secretions, especially IL-8, through JNK and p38 signaling pathways. This finding shed light on how the first colonizer, E.faecalis, regulates inflammatory responses in the host.
Introduction
Significant controversy exists over the role of Enterococcus, and more specifically E.faecalis, on health. Whereas, in clinical settings with immune-compromised patients, E.faecalis can be considered an opportunistic pathogen [1], it has also been shown to impart beneficial effects to health. A recent in vitro study demonstrated that E. faecalis was inhibitory to C. jejuni MB 4185 infection under simulated broiler caecal condition [2]. An E.faecalis isolated from a healthy adult showed the highest probiotic activity when compared with over 70 other lactic acid bacteria (LAB) isolates, including lactobacilli and bifidobacteria [3]. These contrasting roles suggest an interplay between bacteria and human host that is context-dependent and likely dynamic over time.
Prior to birth, the gut is sterile and bacteria start to colonize after birth. The neonatal gut, with a naïve but competent immune system, represents a valuable context for determining the role of particular bacteria in health. Proper development of the gastrointestinal tract requires timely colonization after birth [4]. Study showed microbiota acquisition in infancy is likely a determinant of early immune programming, subsequent infection, and allergy risk [5]. Among the first wave of microorganisms detected in the stool of infants, enterococci are commonly found on the first day of life [6,7]. They gradually decrease with concurrent increases in bifidobacteria that appear within 2-3 days in breast fed infants [8]. Babies who developed allergies were less often colonized with Enterococcus during the first month of life as compared to healthy infants [9]. This implies that Enterococcus could have major impact on intestinal immune development in the very early stage of life.
Several factors, such as mode of delivery and gestational age, are known to influence the composition of microbial flora in early stage of life. Preterm infants and infants delivered via cesarean section display a delayed intestinal colonization with smaller species variability and a higher occurrence of potentially pathogenic microorganism [10,11]. Pre-term births are far more likely to suffer necrotizing enterocolitis (NEC) than are births at term [12]. Interestingly, infants who developed NEC carry less E.faecalis [13]. Previous studies have shown that serum concentrations of IL-8 were elevated in severe cases of NEC from its onset through the first 24 hours [14]. IL-8 is a chemokine that stimulates migration of neutrophils from intravascular to interstitial sites and can directly activate neutrophils and regulate the expression of neutrophil adhesion molecules [15][16][17]. Thus, IL-8 plays an important role in infant infections.
It is also known that cell-wall components from Gram-negative such as lipopolysaccarides as well as host-derived cytokines such as IL-1b and TNF-a, increase IL-8 secretion from IECs through the activation of mitogen activated protein kinase (MAPK) [18,19]. At least three groups of MAPKs have been identified. These include the extracellular signal-regulated kinases (ERKs), the c-JUN NH 2terminal kinases (JNKs) and p38. P38 was reported to stabilize IL-8 mRNA and its level was increased in the muscularis propria of colonic tissue both in DSS colitis mice and patients with inflammatory bowel disease (IBD) [20]. A p38 inhibitor suppressed inflammation in DSS-induced colitis model by reducing mucosal IL-1b and TNF-a levels [21]. Inhibition of JNK activation correlated with suppression of IL-1b-induced IL-8 secretion in IECs [22]. Study showed activation of the ERK signaling pathway in response to TNF-a in HT-29 cells leads to increased expression of IL-8 [23].
Extensive studies have been performed to determine how lactobacilli and bifidobacteria regulate infant gut immunity [24,25]. However, few studies have focused on E.faecalis, which is the first colonizer in the human GI tract [6,7]. Here, we demonstrate that E. faecalis, isolated from newborns, can suppress pathogen-mediated inflammatory responses in human IECs as well as DSS-induced inflammation in mice model. E. faecalis attenuates proinflammatory cytokine secretions, especially IL-8, via distinct pathways. These bacteria suppress JNK and p38 as well as disrupt c-JUN-regulated inflammatory responses. These findings shed lights on functions of the first colonizer, E.faecalis, in infant gut protection.
Isolation and identification of bacteria from infants' gut
In order to characterize early postnatal gut LABs, we collected feces from 16 healthy infants aged 3 days and 1 month from Indonesia. A total of 25 isolates were expanded and confirmed as LABs based on lactic acid production, a rod or coccal shape, and Gram positivity. Based on their carbohydrate fermentation patterns, nine strains were categorized as Lactobacillus and 16 strains as Enterococcus. 16S rDNA sequence analysis showed that 8/ 9 of the lactobacilli were Lactobacillus casei and 13/16 of the enterococci were Enterococcus faecalis (Table 1). Thus, we found a restricted diversity of early colonizing species in infants with L.casei and E. faecalis being the prominent early colonizers. A phylogenetic tree, plotted based on a sequence distance method, is provided ( Figure 1). From the phylogenetic tree, we could see some enterococcus were closer to lactobacillus than the rest enterococcus.
Enterococcus faecalis suppress intestinal IL-8 secretion
We then examined how these early colonizing bacteria can impact intestinal immunity and epithelial cell signaling. Potential anti-inflammatory effects of infant-isolated enterococci and lactobacilli were investigated by co-culturing them with colorectal cancer-derived IECs (Caco-2, HT29 and HCT116). Supernatants were harvested for IL-8 analysis as a marker for gut inflammation [26]. Although some lactobacilli suppressed IL-8 secretion in Caco-2 cells, none of the Lactobacillus isolates significantly attenuated IL-8 secretion in all the three cell lines ( Figure S1A-C). In contrast, the majority of Enterococcus isolates suppressed IL-8 production in Caco-2 and HCT116 cells (Figure 2A, B). Strikingly, four strains, namely EC1, EC3, EC15 and EC16, suppressed IL-8 secretion in all three lines (Figure 2 A-C and the reduction of IL-8 levels was not due to apoptosis induced by E.faecalis ( Figure S1D, E). We then tested the kinetics of IL-8 suppression by these four strains in HCT116 cells ( Figure 2D). For each of these isolates, the degree of suppression of IL-8 was dependent on bacterial multiplicity of infection (MOI). E. faecalis isolates suppressed IL-8 secretion from 4h at a MOI of 100. However, the same level of suppression was observed much earlier at a MOI of 1000. In contrast, this rapid and robust suppression of IL-8 was not seen with commercial probiotic strains, L. rhamnosus GG (L.gg) ( Figure 2D). Salmonella typhimurium (Salm), a known pathogen and stimulator of IL-8 [27], activated IL-8 secretion after 2 h ( Figure 2D). These data suggest that early colonizing E.faecalis have potent anti-inflammatory effects as assessed using IL-8 expression as a marker.
Active E. faecalis physiology is not critical for suppression of intestinal inflammation
We then investigated what aspects of E. faecalis may cause reduced IL-8 secretion in IECs. In order to test if intact bacterial physiology was critical for IL-8 suppression, we exposed HCT116 cells to live bacteria or bacteria that had been killed by ultraviolet exposure (UV) or by physical disruption through sonication immediately before use ( Figure 3). In each case we found that IL-8 suppression remained intact, indicating that active bacterial physiology was not critical for this activity. We then tested whether physical contact between bacterial membranes and epithelial cells was important. Physically separating live bacteria from the epithelial cells, using a semi-permeable cell culture insert, greatly relieved the IL-8 suppression (Figure 3), indicating that physical contact between the bacteria and epithelial cells was important for this activity. Furthermore, bacteria-conditioned media from bacteria alone or from mammalian cell cocultures resulted in no apparent IL-8 suppression (Figure 3), suggesting that epithelial cells are responding to poorly soluble factor(s), likely present on the exterior cell wall of E. faecalis. We next tested the ability of E. faecalis to regulate IL-8 production induced by IL-1b (2 ng/ml), TNF-a (200 ng/ml) and S. typhimurium (10 7 CFU/ml) in Caco-2 and HCT116 cells. We found that E. faecalis EC16 suppressed IL-8 production induced by IL-1b, TNF-a and S. typhimurium after 1 h of incubation. The suppression of IL-8 production inside the cells was observed at 30 mins of treatment ( Figure S2A,B). This phenomenon was observed in both HCT116 ( Figure 4A,B) and Caco-2 ( Figure S2C) cells. Interestingly, E. faecalis EC2 could not suppress IL-8 production either inside or outside of IECs, suggesting distinctive ability of these four E. faecalis strains (EC1, EC3, EC15, EC16) in regulating IL-8 production.
E. faecalis suppresses TNF-a expression induced by IL-1b
In addition to IL-8, we tested potential TNF-a regulation by E. faecalis in HCT116 and Caco-2 cells. TNF-a is an important regulator of epithelial inflammation. TNF-a levels are elevated in both human inflammatory bowel diseases and animal models of intestinal inflammation [28][29][30]. As expected, we found that TNFa secretion from HCT116 ( Figure 4C) and Caco-2 ( Figure S2D) cells were activated by IL-1b. This increase secretion, however, was attenuated by E. faecalis EC16. Interestingly, ICAM1, IL-2, IL-5, IL-17 and INF-c induction by IL-1b and S. typhimurium was suppressed by EC16 in Caco-2 cells ( Figure S2D).
E. faecalis regulates multiple immune-signaling pathways
Our finding that E.faecalis could suppress IL-8 expression led us to investigate if these bacteria could regulate other immunesignaling pathways. We tested immune-gene expression by the four strains of E. faecalis (EC1, EC3, EC15, EC16) using cDNA microarray analysis. Potential expression changes for 406 immune-signaling genes were assayed in Caco-2 cells after 6 hour coculture with E. faecalis isolates. Acquired data from array membranes were initially scanned ( Figure S3A) and volcano plots were obtained to identify statistically significant gene expression changes ( Figure S3B). A partial list of the genes that demonstrate statistically significant expression changes (.1.5 fold) by cDNA microarray analysis is provided in Table 2. From the data of microarray, we found several signaling pathways that may be involved in the anti-inflammatory effects of the four E. faecalis strains. Using Ingenuity Pathway Analysis, we mainly identified cytokine signaling (IL-1, IL-2, IL-6, IL-8 and IL-10 signaling), SAPK/JNK signaling, P38 MAPK signaling and NF-kB signaling are responsible for the responses ( Figure 5A). Together these data suggest that E. faecalis may alter multiple immunomodulatory pathways simultaneously.
From the microarray analysis, we selected 46 genes for further study using TLDA. The gene ID and Taqman probes are listed in Table S1. Each of the four E. faecalis isolates demonstrated a similar pattern on the immune-gene regulation in Caco-2 and HCT116 cells. Data for gene-regulation by isolate EC16 is provided as a representative example ( Figure 5B). To our surprise, we found that IL-8 mRNA was unregulated. We also found that DUSP1, a reported MAPK phosphatase that can attenuate MAPK signaling, was strongly upregulated. Other genes involved in MAPK signaling were downregulated in Caco-2 cells, namely MAP3K7IP1, a positive regulator of the MAP kinase cascade [31]; MKNK1, a target of ERK and activator of CREB-mediated proliferation and differentiation [32] and MAPKAPK2, a target of p38 MAP kinase involved in many cellular processes including inflammatory responses ( Figure S4A). Other MAPK family members like MAPK7 in Caco-2 cells, MAPK9 in HCT116 cells were also suppressed by E. faecalis ( Figure S4B,C). From the above data, we hypothesize that early-colonizing E. faecalis might influence the inflammatory responses in the host by regulating MAPK signaling pathway. Furthermore, consistent with observations from microarray, TLDA results also showed that NF-kB1 and IKBKB were suppressed by E. faecalis in both Caco-2 and HCT116 cells ( Figure S4D,E) suggesting that E. faecalis suppressed NF-kB1 signaling pathway at transcriptional level.
E. faecalis suppresses activation of P38, P-JNK and C-JUN
Because factors regulating MAPK signaling were altered upon E.faecalis exposure, we hypothesized that these bacteria may attenuate IL-8 production, at least in part through inhibiting MAPK pathways. To test this, we examine MAPK expression and phosphorylation by immunoblotting. Whereas p-JNK was strongly and transiently activated by IL-1b and S. typhimurium in HCT116 cells within half hour of treatment, E. faecalis suppressed this activation ( Figure 6A). Consistent with this finding, cJUN which is downstream of JNK, was activated by IL-1b and S. typhimurium at one hour while suppressed by E. faecalis EC16 ( Figure 6B). Similarly, the presence of phosphorylated p38 was increased by incubation with either IL-1b or S. typhimurium at one hour, which was abrogated by coculture with E. faecalis EC16 ( Figure 6C). Therefore, E. faecalis can inhibit JNK and p38 signaling as one potential means of suppressing IL-8 production. In contrast, ERK phosphorylation did not change upon the treatment of IL-1b, S. typhimurium and E. faecalis ( Figure S5), suggesting that ERK does not impact IL-8 production in these cells. These data suggest that E.faecalis may suppress inflammatory signaling in IECs through reducing the activation of JNK and p38.
E.faecalis suppress IL-1b and TNF-a expression in DSS induced colitis mice model
To examine the immunomodulatory effects of E.faecalis in vivo, we induced colitis using Dextran Sulphate Sodium salt (DSS) in mice and then treated them with E.faecalis EC16 or Lactobacillus rhamnosus GG (L.gg), a well studied probiotic strain. Colon length, which is an indicator of inflammation, was shortened by DSS treatment. Consistent with an immunorepressive role, both E.faecalis and L.gg significantly alleviated this shortening ( Figure 7A). Furthermore, E.faecalis as well as L.gg prevented DSS-induced weight loss in these animals ( Figure 7B). IL-1b and TNF-a which were activated by DSS were significantly down regulated by EC16 and L.gg ( Figure 7C,D)
Discussion
In this study, we describe the isolation and primary characterization of E.faecalis strains isolated from healthy newborns in Indonesia. E.faecalis represented the most frequently isolated bacteria from the children and selected isolates possessed ability to strongly inhibit inflammatory markers in IECs. The selected isolates with the most potent anti-inflammatory effect were characterized to define a potential mechanism of action and we found that MAPK pathways were regulated. Specifically, E. faecalis was able to mitigate activation of JNK and p38 coincident with reduced expression of IL-8 in vitro as well as IL-1b and TNF-a in vivo.
Enterococci have previously been reported among the common early colonizers in humans [6]. Thus our study population in Indonesia provides a surprising consistency of early colonizing bacteria. Since E.faecalis accounts for 90-95% commensal enterococci in adult human intestines, the finding of E.faecalis as more prominent than the next most prominent species, E.faecium, is not unexpected. In our study L.casei was the second most frequently isolated lactic acid bacteria in 3 day old infants. Although the study cohort is not big, to our knowledge this is the first report of lactobacillus among early colonizers, which might reflect environmental differences for this Indonesian cohort.
None of the lactobacilli isolates were strongly immunosuppressive as determined by IL-8 expression levels. Certain E.faecalis isolates, on the other hand, could strongly inhibit IL-8 in IECs. Our previous findings showed that E.faecalis can induce antiinflammatory cytokine IL-10 in intestinal epithelial through PPAR-gamma [33]. Taken them together, this may explain lower abundance enterococcus colonization [34] and high rate of NEC in preterm infant [35]. Furthermore, infants who developed NEC carry less E.faecalis [13,36]. Therefore, E.faecalis may possess the capacity to modulate and attenuate the inflammatory responses further to prevent inflammatory diseases such as NEC in infants. Interestingly, the behavior of the three IEC lines differed substantial in response to many bacteria, suggesting that, in contrast to standard analyses, the use of a single IEC for characterization may be inadequate. The anti-inflammatory effects were confirmed in an in vivo system. This finding is consistent with previous study showing that E.faecalis has a great protective effect in DSS-induced experimental colitis model in mice [37]. It is not clear, at this stage, why only certain isolates of E.faecalis in this study could reduce IL-8 production in vitro. It will be interesting to potentially examine these other isolates for their behavior in DSS-induced colitis model in vivo.
Interestingly, while E.faecalis isolates decreased IL-8 protein level inside and outside the cells, its mRNA level was upregulated. Recent researches also showed that MAPK can regulate eIF4E [38] and eIF4E overexpression is associated with increased IL-8 expression [39]. Therefore, inhibition of MAPK pathway might suppress eIF4E, which results in reduction in IL-8 translation. In the DSS-colitis model as well as patients with inflammatory bowel disease (IBD), p38 levels are increased in the muscularis propria of colonic tissue [20]. When treated with p38 inhibitor, mucosal IL-1b and TNF-a levels were reduced in DSS colitis model [21] consistent with what we found for E.faecalis treatment. Thus, the phenomena we see with E.faecalis may be exclusively a result of MAPK inhibition. Alternately, E.faecalis may be affecting other pathways, for example our previous findings showed LAB can suppress TLR3, TLR9 and TRAF6 mRNA levels [40]. Therefore E.faecalis could suppress TLR pathways and further suppress MAPK pathway as well as NFkB-mediated transcription.
E.faecalis, as one of the first colonizer, could suppress the inflammatory responses and shape the immune system. Infant intestine usually undergo acute inflammation when exposed to Gram-negative bacteria. The presence of E. faecalis may help the intestine maintain the immune balance in response to such challenges. Our finding that E.faecalis performed as good as recognized probiotics indicates their potential to serve as a probiotic. However, the therapeutic effects must be examined thoroughly as E.faecalis was also reported as an opportunistic pathogen in hospital infections. Since we found dead E.faecalis also have the ability to suppress IL-8 secretion, the use of dead E.faecalis could mitigate the risk of opportunistic E.faecalis infection.
Bacteria culture and Identification
All bacterial strains were isolated from 16 healthy infants in Indonesia. This study is reviewed and approved by the National University of Singapore Institutional Review Board (NUS IRB), approval no. NUS1469. A written consent form was obtained from participants guardians. The inclusion criteria included natural birth and breast feeding. The exclusion criteria included antibiotic intake 2 weeks before and within the study, received probiotics/culture milk 2 months prior and during the course of the study.
Those bacteria were first Gram stained followed by API 50 CH test strips for rod-shaped bacteria and rapid ID 32 STREP test for the coccal shaped strains. All bacteria strains were then cultured and extracted for DNA. 16S rDNA were directly sequenced using primer 1100 reverse 5V-GGGTTGCGCTCGTTG-3V to obtain partial sequence of the 16S rDNA [41]. Phylogenetic tree calculation was based on a sequence distance method and utilizes the Neighbor Joining (NJ) algorithm of Saitou and Nei [42].
Cell culture and Infection
Caco-2, HT-29 and HCT116 were obtained from American Type Culture Collection (Manassas, VA) and maintained in ATCC recommended medium. Before cell infection, 1610 5 cells were cultured in sterile 24-well flat-bottom plates (Nalge Nunc International, USA) for 24 hours. Caco-2, HT-29 and HCT116 cells were incubated in fresh medium without (control) or with bacteria at a multiplicity of infection (MOI) of 100 for 6 hours. Supernatants were harvested for ELISA assay (BD bioscience, San Diego, CA). and proteins were harvested for Western blotting analysis.
Conditional Medium and Cell Inserts
The cell culture supernatants obtained from IECs, bacteria and coculture of bacteria and IECs were harvested sterilely and conditional media were then added to the cell cultures prepared. The supernatant were then collected for cytokines assay. For cell inserts test, HCT116 cells were plated in wells of Transwell (USA) multiple well plate. 100 ml of bacterial suspensions were added to each insert. To sonicate bacterial cells, protease inhibitors were added to the suspensions. The bacterial cells were then disrupted by sonication (SANYO, Japan) on ice for 10 cycles with 30 sec pulses and 1min rest [43]. Cell debris was pelleted down and added in the cell culture prepared as above. Another bacterial suspensions were exposed to UV light for 5 min [44] to make sure at least 99% bacteria were killed and then co-culture with cells.
TNF-a, IL-1b and S. typhimurium induced cytokine secretion and protein production 200 ng/ml TNF-a (Preproteck, INC, rocky hill, NJ), 0.2 ng/ml IL-1b (Preproteck, INC, rocky hill, NJ) and S. typhimurium at a MOI of 100 were added to Caco-2 and HCT116 cells. Cells were then infected with E. faecalis EC16 with or without TNF-a/IL-1b/ S. typhimurium. Cells without any treatment were used as controls. The cells were then cultured at 37uC with 5% CO 2 for 30 mins, 1 h, 2 h, 4 h and 6 h. Supernatants were collected and the concentration of IL-8 was determined by ELISA (BD bioscience, San Diego, CA). Other cytokines like INF-c, TNF-a, IL-2, IL-5, IL-17 and ICAM-1 were determined using cytokine assay (Bio-Rad, USA). The proteins were harvested for Western blotting assay.
Microarray analysis
A total of 406 human immunology signaling pathway related cDNA clones were detected in this study (Superarray, USA) according to manufacture's instruction. Raw data are available at Gene Expression Omnibus with accession number of GSE56485. Signals were analyzed using the web-based GEArray Expression Analysis Suite (Supperarray, US). GeneSpring GX 7.3.1 and Ingenuity pathway analysis was also used to analyze the data. Data were excluded with bad or absent flags. In this experiment, 2-3 replicates were done for one treatment. Student t test was applied for controlling the false-positive rate (P,0.05 was considered significant), clustering analysis was generated by the software.
TaqMan Low Density Array (TLDA)
The RNA was harvested using the RNA extraction kit (Roche, Switzerland). A 48-well format Taqman low density array was designed for a subset of genes that were differentially expressed in the array experiments including two endogenous controls 18 s and b-actin (Applied Biosystems, USA). Genes and ABI assay IDs are listed in Table S1. 0.5 mg total RNA was converted to cDNA using High-Capacity cDNA archive kit (Applied Biosystems, USA) and 10 ng cDNA in 100 ml TaqMan universal PCR master mix (Applied Biosystems, USA) was used for each port and run on an ABI 7900 system (Applied Biosystems, USA). Data was analyzed using the SDS2.2 software where baseline and threshold settings were automatically adjusted.
Sodium-Dodecyl Sulfate-Polyacrylamide Gel Electrophoresis (SDS-PAGE) and immuno blotting
Sodium-dodecyle sulfate-polyacrylamide gel electrophoresis was performed on a 10% gel by using 20 mg per lane of whole cell lysate. Electrophoresis was carried out using consistent voltage of 75 volts for approximately one and a half hour and then transferred onto a 0.22 mm Nitrocellulose membrane (Biorad, USA) at 85 volts for 2 hours in cold room. The membrane was then blocked in Tris buffer saline-Tween (TBST) containing 5% skim milk for at least 1 h. Specific primary antibodies were then diluted and the membrane was incubated overnight at 4 uC on an orbital shaker (Bellco, USA). After washing, appropriate horseradish peroxidase (HRP)-conjugated secondary antibodies diluted in TBST containing 5% skim milk were added into the membrane and incubated for 1 hour at room temperature on an orbital shaker (Bellco, USA). Visualization of the immunolabeled bands was then carried out using ECL Plus Western Blotting Detection Reagents or ECL Advance Western Blotting Detection Kit (GE Healthcare, UK) according to manufacturer's instruction. The signals were exposed on X-ray film (Koda, USA).
Animals
Male C57BL/6 mice (8 to 10 weeks of age) were kept in SingHealth Experimental Medicine Centre and housed in collective cages at 2261uC under a 12-h light/dark cycle (lights on at 07:00 h) with free access to laboratory chow and autoclaved tap water. Experiments were performed during the light phase of the cycle. The experimental procedures were previously approved by SingHealth Institutional Animal Care and Use Committee (IACUC) on the Ethical Use of Animals, where the study was carried out, and were conducted in accordance with Singapore regulations on animal welfare. DSS treatment, treatment group were fed with 10 ' 7 CFU/100 ml EC16 or 10 ' 7 CFU/100 ml L.GG using a gavage needle respectively. Control group were fed with PBS. The animals were provided with 1.5% DSS till day 12. On day 12, the animals were euthanized, colon were removed for length measurement and frozen in liquid nitrogen for future analysis.
Statistical analysis
Data are expressed as means 6 SD. Significance of differences was determined using the Student's T-test and analysis of variance. P values,0.05 were considered to be statistically significant. Figure S1 IL-8 secretions in Caco-2 (A), HT-29 (B) and HCT116 (C) with the treatment of Lactobacillus and apoptosis assay in HCT116 (D,E). A total number of 10 7 cfu/ml bacteria were added into the cells for 6 h. Supernatants were harvested for cytokine assay as described in Materials and Methods. Three independent experiments were compiled to produce the data shown. Data were expressed as mean value 6SD. D. Apoptosis assay in HCT116 control. E. Apoptosis assay in HCT116 with EC16 treatment. One representative assay was shown from three independent experiments. (TIF) and T-test with a P,0.05 was considered significant change. (TIF) Figure S5 ERK expression in HCT116 cells at 30 mins with the treatment of 2 ng/ml of IL-1b and S. typhimurium with and without E. faecalis EC16 at a MOI of 100. Total protein was harvested and protein production was analyzed using Western blotting as described in Materials and Methods. Experiments were repeated three times. (TIF) | 5,999 | 2014-05-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
“A Running Back” and Forth: A Review of Recursion and Human Language
An attentive reader of the cognitive science literature would have noticed that the term recursion has appeared in myriad publications, and in many guises, in the last 50 or so years. However, it seems to have gained a disproportionate amount of attention ever since Hauser et al. (2002) hypothesized (for that is what it was) that this property may be the central and unique feature of the faculty of language. Indeed, a barrage of publications, conferences, and even critical notes in the popular press about recursion has recently flooded academia. The volume under review here is the result of one such conference — one that was celebrated in 2007 at the Illinois State University — and it offers, or so it says, a compendium of works that tackle this notion from different perspectives. I will not be following the thematic division underlying this volume as a way to frame the “different perspectives” it advertizes. Rather, this critical note will focus on four distinct senses of the term recursion that can appropriately be applied, or so it will be argued here, to four well-defined theoretical constructs of the cognitive sciences. The formal sciences will, naturally, inform most of this discussion, but the focus of this note will fall on relating the different perspectives of this collection to the four senses of recursion I will outline. Ultimately, this review will press one main point: Contrary to a(n apparently) widespread belief, neatly stated in this book’s back cover, it is simply not true that recursive structures in languages “suggest recursive mechanisms in the grammar” (at least not in the sense that is usually intended in the literature — see infra).1 The one feature that binds together the four theoretical constructs I will be focusing on is the self-reference property that characterizes recursion — a feature that is quite unrelated to the uses to which this notion can be applied. This self-
reference property is readily demonstrated by the first connotation I will be considering (the primary meaning in mathematics, in fact), which consists in "defining a function by specifying each of its values in terms of previously defined values" (Cutland 1980: 32); that is, a definition by induction (or recursive definition).The factorial functions (n!) offer a standard and rather trivial example: (1) Def. n!: if n > 1 n! = n x (n-1)!recursive step Note that the recursive step involves another invocation of the factorial function.Thus, in order to define the factorial of, say, 4 (i.e. 4 x 3!), the function must define the factorial of 3, and so on until it reaches the factorial of 1, the base case, effectively terminating the recursion.
A definition by induction ought not to be confused, however, with a related construct that receives similar denominations, such as an 'inductive definition', 'inductive proof', or 'mathematical induction'.
An inductive definition, a mathematical technique employed to prove if a given property applies to an infinite set, proceeds as follows: First, we show that a given statement is true for 1; then, we assume it is true for n, a fixed number (the inductive hypothesis); lastly, we show that the statement is therefore true for n+1 (the inductive step).If every step is followed correctly, we conclude that the statement is true for all numbers (Epstein & Carnielli 2008).An inductive definition, then, also employs recursion, but it additionally includes an inductive hypothesis.These two constructs should not be conflated, even if they are closely related.In fact, it is important to note that inductive definitions are a central feature of recursive definitions in the sense that the former grounds the latter; that is, the recursive definition of a function is justified insofar as it ranges over the domain established by the inductive definition (Kleene 1952: 260 et seq.). 2 Recursive and inductive definitions are discussed a few times in this collection; this is the case, to a certain extent, for Introduction, Langendoen, Kinsella, and, most notably, in Geoffrey Pullum & Barbara C. Scholz.The latter take issue with the prevalent belief of what they call the 'infinitude claim'; that is, the claim that, for any language, the set of possible sentences is infinite.Their discussion is framed, I believe, alongside three main themes and it is of interest to have a closer look at them; these are: a critique of the actual 'standard' argument supporting infinitude claims, the accompanying assumptions, and a number of loosely related obiter dicta that I will not discuss in any detail.
The standard argument has three parts, according to them: (i) there are some 'grammatically-preserving extensibility' syntactic facts of the kind I know that I exist, I know that I know that I exist, etc. (p. 115) that lead us to believe that (ii) there is no upper bound on the maximal length of possible sentences (at least for English); these two facts together, in turn, warrant the conclusion that (iii) the 2 The mathematical literature contains many different types of recursive functions: the primitive class, the general class, and the partial class, among others.They are all recursively defined functions, but range over different types of objects; whatever objects/relations they subsume should not distract from the fact that recursion remains a central property.
collection of all grammatical expressions in a given language is infinite.
The argument is well-put together as far as it goes, and their main worry falls not on the move from (ii) to (iii) (which is simple mathematics, they tell us), but on the transition from (i) to (ii).Interestingly, they do not tell us what is actually necessary to warrant the troubled transition; instead, they dismiss three different possibilities that could be employed for its justification: the use of an inductive generalization, mathematical induction, or by arguing from generative grammars (the latter they take to be, strictly speaking, systems of rewrite rules only; see infra).It is not clear at all that any of these strategies have ever been explicitly employed in the literature in order to support the standard argument -at least not in the sense that Pullum & Scholz have in mind.Indeed, the examples they do provide are rather strained; specifically, the connection they make between mathematical induction and a remark by Pinker in the context of a popular science book (see p. 119) seems rather weak.
More interestingly, later on in their paper (p.124 et seq.), they point to the (supposedly widely-held) assumption that languages are collections -in a strong mathematical sense. 3Given that they take this to be the case, it is no surprise the burden they place upon linguists to prove the infinitude claim.In a similar vein, Terence Langendoen's contribution orbits these very issues, and is ended by urging the field to come to an agreement "upon a basis for determining whether a language is closed under one or more of its iterative size-increasing operations" (p.145).
Note that the whole issue, then, turns out to revolve around too close a connection between natural languages and mathematical systems, to the point that the infinitude of the former is to be proven by the standards that we impose upon the latter.This is, however, unwarranted.It is certainly true that many linguists have employed mathematical techniques and vocabulary to study natural languages, but these were so used because they were useful -certainly not to reduce linguistic phenomena to abstraction.In fact, the latter play a rather limited role in linguistic explanation, for note that linguists typically focus on informants' grammatical judgements in order to unearth the underlying structure of strings.This kind of study focuses on the structure that a certain mental state -viz.the linguistic capacity -imposes upon the strings, and not on the strings themselves in isolation from these judgements.
Ultimately, it seems that these authors confuse the use of mathematical concepts as a useful toolkit for a call to reduce linguistics to mathematics, but no such thing ought to be accepted by the working linguist (I will retake the infinitude claim below).
On another note, one would expect to see the opposite argument (that is, the finiteness of a given language) to be placed under the same burden, but this 3 They rank this assumption as one of four factors that may account for the persistent presence of infinitude claims in the literature, but in fact only provide three (pp. 124-129).The second of these -the connection between recursion and linguistic creativity -rests on an obvious misrepresentation, corrected many times before (see Chomsky 2006: xviv).Roughly, creativity does not rest on the ability to construct new sentences (p.126); rather, this property points to the fact that linguistic behaviour is generally stimulus-free, and that speakers/hearers have the capacity to understand/produce novel sentences that are appropriate to context.
does not appear to be the case for languages that prima facie lack the 'grammatically-preserving extensibility' syntactic facts mentioned in (i) (p. 130-131).Surely a similar argument would arise: (i') There are some syntactic facts of language A that suggests this language lacks grammatically-preserving extensibility structures (such as self-embedding and coordination), which leads to believe that (ii') there is indeed an upper bound on the maximal length of its sentences; therefore, (iii') this language is a finite collection of sentences.Clearly, the transition from (i') to (ii') is as troubling as that of (i) to (ii) -but only if we grant Pullum & Scholz's (and Langendoen's) burden.
Be that as it may, I now want to argue that none of this has, in actual fact, much to do with the introduction of recursion into linguistics -at least not in the sense that Chomsky has treated this notion.
A second sense of the term recursion has it as a general and central property of algorithms and generative systems.Thus, in the analysis of algorithms discipline, systems of recursive equations have been employed to formalize the notion of an algorithm qua formal object (see, especially, McCarthy 1963) and some scholars have proposed that these recursive equations subsume a specific mapping function, termed a 'recursor' (Moschovakis 1998, 2001, Moschovakis & Paschalis 2008).A recursor is said to describe the structure of an algorithm and, in this sense, algorithms are recursors.Production systems of rewriting rules also contain recursion as a central property, but not simply in those specific cases in which the same symbol appears on both the left-and right-hand side of a rewrite rule.
Consider, for example, the underlying transformation that converts some structure φ 1 ...φ n into some structure φ n+1 ; the → relation can then be interpreted as "expressing the fact that if our process of recursive specification generates the structures φ 1 ...φ n , then it also generates the structure φ n+1 " (Chomsky & Miller 1963: 284).This is basically the successor function, one of the primitive class of recursive functions.Whereas the former cases involve an internal application of recursion within production systems, the latter is a global property of collections of rewriting rules qua production systems (see infra).
The successor function also underlies what is known as the 'iterative conception of set', a process in which sets are "recursively generated at each stage", a statement that is to be understood as the "repeated application of the successor function", drawing our attention to the analogy between "the way sets are inductively generated […] and the way the natural numbers […] are inductively generated from 0" (Boolos 1971: 223).The current characterization of Merge, the building operation at the heart of the language faculty, as a set-formation operator seems to be akin to this interpretation of recursion (see Chomsky 2008 and Soschen 2008). 4 This is better understood in the context of the discussion Soare (1996) provides on the state of the art within mathematical logic.Therein, he argues that the field has for long assumed that recursion and computation are synonymous terms (and the same would apply for recursive and computable).This, he argues, has resulted in what he calls the Recursion Convention (RC), a state of affairs he has attempted to reverse in subsequent publications.The RC has three parts: (i) Use the terms of the general recursive formalism to describe the results of the subject, even if the proofs are based on the formalism of Turing computability; (ii) use the term Church Thesis to denote various theses; and (iii) name the subject using the language of recursion (e.g., Recursion Function Theory).
Granted, even if it is commonly conceded that a Turing Machine captures the manner in which every conceivable mechanical device computes a calculable function (and it is, furthermore, generally accepted as the best formalization in the field), Turing's model did not in actual fact provide a formalization of what an algorithm qua formal object is.Indeed, there is a distinction to be had between formalising an algorithm qua a 'model of computation' -that is, an analysis of what actually happens during a computational process -and qua an abstract mathematical object.It is to the former construct that Turing's model appropriately applies, while it is the latter that systems of recursive equations and recursors subsume.Furthermore, it is a well-known result that Turing Machines and the partial recursive functions formalism of Church/Kleene (but not the general recursive class) are extensionally equivalent in the sense that both identify the class of computable functions.From the same inputs, both formalisms return the same outputs, albeit in different ways (these 'intensional differences' will be of some importance later on, though).Finally, it is no surprise that the Turing Machine model has been more prominent in cognitive psychology, with its emphasis on real-time processes, while the more abstract characterization of what a computation is -one based on recursion -has found its natural place in theoretical linguistics mainly.Indeed, as Collins (2008) states in the context of an introductory book to Chomsky's thought: "via the Church/Turing thesis, computation is just recursion defined over a finite set of primitive functions" (p.49).
There is, therefore, a certain consistency in Chomsky's writings if we understand his treatment of recursion in the terms just described.Perhaps rather tellingly, he has pointed to the connection between grammatical theory and recursive function theory in many writings (e.g., in Piattelli-Palmarini 1980: 101), which suggests that he may have been influenced by the RC. 5 Naturally, the whole point of introducing recursion into linguistics was to account for the fact that speakers/hearers show a continuous novelty in linguistic behaviour -a novelty that does not appear to be capped in any meaningful respect.Further, since speakers/hearers cannot possibly store all the possible sentences they understand or utter, the cognitive state accounting for this linguistic behaviour must be underlain by a finite mechanical procedure -an algorithm.This is one of those properties that one would argue are a matter of 'conceptual necessity'.A rather trivial matter, perhaps, but the whole point has been muddied by orbiting issues.Just as Pullum & Scholz do, many studies focus on the so-called self-embedded sentences (sentences inside other sentences, 5 This is actually confirmed in personal correspondence with Noam Chomsky (May 2009): "[T]here is a technical definition of 'recursion' in terms of Church's thesis (Turing machines, lambda calculus, Post's theory, Kleene's theory, etc.)", the only one used he's ever used, "a formalization of the notion algorithm/mechanical procedure".Further, he states that he's "always tacitly followed the RC".such as I know that I know etc.) as a way to demonstrate the non-finiteness of language, and given that self-embedding is sometimes used as a synonym for recursive structures (see infra), too close a connection is usually drawn between the presence of these syntactic facts and the underlying algorithm of the language faculty.However, even if there were a language that did not exhibit selfembedding but allowed for conjunction, you could run the same sort of argument and the non-finiteness conclusion would still be licensed.These two aspects must be kept separate; one focuses on the sort of expressions that languages manifest (or not), while the other is a point about the algorithm that generates all natural language structures.
There is surprisingly little in the collection under review here that touches on these very issues.Both Harry van der Hult's Introduction and Arie Verhagen do discuss global and local applications of recursion, but not quite in the terms outlined here.Verhagen describes two roles for recursion, a specific one that gives rise to long-distance dependencies (supposedly the self-embedded sentences, see below) and a more general one that delineates a mechanism for embedding phrases inside other phrases.As for the Introduction, and even though the discussion presented there is framed in terms of rewriting rules, the main points pertain to structures only.Thus, the general application refers to the general embedding of phrases into other phrases, while the specific one refers to those phrases that are embedded within a constituent of the same kind.I will come back to this below, but it is worth pointing out now that this is the closest this collection gets to discussing the central role of recursion in the formalization of an algorithm -a clear shortcoming, given that the RC, systems of recursive equations, recursors, etc form the core of Chomsky's thought on the matter.
There is some tangential discussion regarding Merge by Jan Koster.He takes issue with the postulation of a recursive Merge as the syntactic engine that generates linguistic expression.His worries seem to be twofold; on the one hand, a recursive Merge cannot do anything unless in combination "with external, invented cultural objects -lexical items" (p.289); on the other hand, these lexical items come with specific combinatorial properties that already account for the hierarchical structure that Merge will, then, redundantly generate "once more" (p.292).The latter is, of course, a valid point that recapitulates the debate between representational and derivational views on theories of grammar.The former is, however, more troubling.Even if we were to grant Koster that lexical items are cultural inventions, we would do well to remind ourselves that there cannot be any cultural inventions that are not entertained in the mind first, which is perhaps a trivial point.Moreover, we can be sure that external, cultural inventions do not come with "fully-fledged combinatorial properties" (idem.)other than the mould that the linguistic capacity imposes -and this is clearly an internalist explanation.More importantly, this bears very little relation to my description of the role of recursion in linguistic theory as a general property of the underlying mechanical procedure.
I have been defending a common thread running through Chomsky's writings, but this is not to say that he has always been consistent -or that the focus has not fallen, on many occasions, on the internal applications of recursion within production systems.
The characterization of this internal application has changed dramatically as the theory progressed.An early characterization of generative grammar divided the computational system into two components: the base (composed of rewriting rules that returned strings with associated phrase markers) and the transformational system (a component that would convert some phrase markers into other phrase markers, preserving structure).In Chomsky (1957), the recursive property of certain rules is ascribed to the latter system, while Chomsky (1965) assigns it to the base component.By the 1970s and 1980s, most of the rewriting rules were in fact eliminated from syntactic theory, perhaps completely so by the time Chomsky (1986) appeared.The latter is an important point, given that most discussions on recursive mechanisms -and this is no exception in this collection -seems to be centered exclusively on rewriting rules, which is rather unfortunate.It should be trivial at this point to remark that recursion as a general property of generative systems remains at the center of linguistic theory regardless of the replacement of production systems with Merge -both, as I have tried to show, are underlain by the successor function.
Nevertheless, it is of interest to discuss recursive rewriting rules to some extent, given the prominence they receive in the literature -and in this collection.Consider the following sample below.
VP → V S Rules (2d) and (2e) are recursive, as the category on the left-hand side of the arrow is reintroduced on the right-hand side (directly in (2d), indirectly in (2e)).It is sometimes supposed that nested structures and recursive rules are very closely connected, to the point that nested constructions cannot be generated by anything other than recursive rules.This is mentioned in this collection a couple of times, sometimes with (dubious) references to the mathematical linguistics literature on formal grammars.This is clearly not quite correct, however.To a first approximation, it is worth noting that recursive rules were introduced in order to simplify the grammar.Take a nested string such as [a[ab]b], where the a's stand for subjects and the b's for verbs.This can easily be generated by the employment of rules like (2d) and (2e), while a more complicated generation would involve the repeated application of rules like A→aB, B→aC, C→bD, D→b.It is precisely in this context that Chomsky (1956: 115-116) states that "if a grammar has no recursive steps […] it will be prohibitely complex", with danger of reducing it to a list of sentences.
Amy Perfors et al. (Josh Tenenbaum, Edward Gibson, and Terry Regie) provide (partial) confirmation for this intuition by employing a qualitative Bayesian analysis to calculate the ideal trade-off between simplicity of grammars (treated as a prior probability) and the degree of fit to a corpus (treated as the likelihood).Even though recursive rules, they tell us, are costly because they predict sentences that are not observed in a corpus (which hurts their goodness of fit; see pp. 161-164), the calculation ultimately returns, perhaps unenlightening, a grammar with recursive and non-recursive rules as the preferred choice.I qualify these results as uninformative because they do not seem to differ from what was being proposed in the 1950s.Granted, this sort of analysis offers a much more formal understanding, but one should not mistake formalization for insight if the issues were already well-understood.Further, there are two aspects of this work that are somewhat troubling.First, the study places too much emphasis on the actual 'observed' data found in corpora.These are not to be disregarded, obviously, but linguists ought not to forget that the actual subject matter, that is, the actual phenomenon to be explained, remains the cognitive state that underlies the observed linguistic behaviour (this point about corpora resurfaces in many other contributions).Secondly, it is an obvious point to make that this analysis only applies to those theories that postulate production systems as grammars -those linguists close to the generative framework, though, have long dispensed with them.Quite clearly, none of this applies to theories that focus on Merge as the central syntactic engine.Moreover, it certainly has very little to do with the general point made supra; namely, Chomsky's leitmotif is based on recursion qua general property of the computational system underlying language, be this a production system or a set-operator like Merge.
Despite it all, it is worth delving into the uses (and abuses) that systems of rewriting rules have been put to as to unearth some (seemingly) widespread mistakes.This will allow me to introduce the third sense I would like to discuss, one that pertains to the study of computational processes, which is the interest of much of applied computer science.At this point, though, it seems "a reasonable conjecture" to claim that at root "there is only one fixed computational procedure that underlies all languages" (Chomsky 1995: 11); a 'recursive' Merge in this sense.
It is important to note that there is a significant discontinuity between rewriting rules and linguistic expressions.Technically speaking, rewriting rules only return strings, not structures, which is presumably one of the reasons why rewritings rules were eliminated from linguistic theory (cf.Collins 2008: 58) 6 .It is a point that deserves emphasis, as its neglect hampers clarity.Take Fitch (2010), for instance; therein, he puts forward two problematic claims: Firstly, that a recursive rule has the property of self-embedding (p.78), and secondly, that it is a "linguistic stipulation" for a self-embedding rule to entail a self-embedded 6 I say "technically" in reference to the historical fact that rewriting rules have always been employed as string substitution operations.It is sometimes stated, however, that a system of rewriting rules strongly generates a set of structures, while it weakly generates a set of strings, but there is no obvious difference in the actual rules to merit the distinction -apart from the definition.Perhaps this should be rephrased as follows: A computational system such as rewriting rules generates weakly but a system like Merge generates strongly.structure (p.80), which I suppose carries over to simply embedding rules and embedded structures.
The first claim is simply not correct.A rule is recursive if there is a self-call, but this is independent of what operation is in fact executed.There is a distinction to be had between what a rule does and how it actually proceeds, and it is to the latter than recursion applies.It is this reflexive property that makes the definition of the factorial functions recursive, but there is no sense in stating that there is any embedding whatsoever.
As mentioned, rewriting rules return strings, not structures; a fortiori, there is no such thing as a self-embedding rewriting rule.Moreover, Fitch misplaces the long-held stipulation he identifies.In previous models, the rules of the base component would return simple declarative sentences, and these would be converted into more complex structures by the transformational component; the latter were not part of the set of rewriting rules.
The replacement of Merge for production systems involved the postulation of an operation that embeds elements into one another.Merge does this in a bottom-up fashion rather than generating strings in the left-to-right manner of rewriting rules, but both Merge and a production system are recursive devices for the same reason, that is, qua generative systems that are underlain by the successor function.
The conflation, apropos recursion, between what an operation does and how it proceeds is rather common in the literature, and this collection of papers is no different.Some contributions (Karlsson, Verhagen, Kinsella, Harder, Hunyadi) discuss what they call center-embedding rules, tail-recursive rules, the sort of structures these generate, and their relationship.Much like Fitch, these terms actually refer to the structures themselves, rather than the actual rules.Thus, a center-embedding rule is supposed to generate nested structures in which, say, a sentence is embedded in the middle of a bigger sentence, like in the classic (The mouse (the cat (the dog bit) chased) ran away).A tail-recursive rule, on the other hand, embeds elements at the edge of sentences, either on the left-hand side (John's [brother's [teacher's book]
] is on the table) or on the right-hand side (The man [that wrote the book [that Pat read in the cafe [that Mary owns]]]).
These terms, however, have absolutely nothing to do with the recursive character of the rules themselves, only to the type of embedding the resultant expression manifests.A center-embedding rule, after all, is not one in which the reflexive call occurs in the middle of a derivation, but even if it did, this has no substantial consequences.As for tail-recursion, this is a widely-used term in computer science, and it refers to a process in which the recursive call of the algorithm occurs at the very end of the derivation (Abelson et al. 1996).Quite clearly, a nested structure on the left-hand side of a sentence cannot be the result of a tail-recursive rule if the derivation process undergoes left-to-right applications of rewriting rules.In a nutshell, these terms refer to specific properties of the structures, not to recursive mechanisms or operations.
Rather surprisingly, some of the aforementioned chapters seem to have a much stronger point in mind.Fred Karlsson, following Parker (2006; cited therein), states that 'nested recursion' rules (i.e.center-embedding; Verhagen, p. 103 tells us that this is sometimes known as 'true recursion', but no reference is provided) cannot be reduced to iterations (while tail-recursion can) 7 , a claim that is repeated by Peter Harder (p.239) and, with qualifications, by Vitor Zimmerer & Rosemary A. Varley (p.397).
They could not possibly mean this as a general point about computability theory, however.After all, it is a well-established, though often forgotten, result of the formal sciences that all tasks that can be solved recursively can also be solved iteratively (Roberts 2006).Put bluntly, that is, that "all recursive relations can be reduced to recurrence or iterative relations" (Rice 1965: 114).In fact, one of the references mentioned in this collection, albeit indirectly (p.347), namely Liu & Stoller (1999), offers a framework that provides automatic transformations of any type of recursion into iteration, an "optimization technique" that can cope with the most complex of recursive relations, such as multiple base cases or multiple recursive steps, of which Fibonacci sequences are an example (contrary to what Fitch 2010: 78 seems to think).
Perhaps what these authors have in mind is a much narrower point; namely, the interrelations between recursion and iteration within sets of rewriting rules.In this context, James Rogers & Marc Hauser offer a solid discussion of formal grammars and their potential relevance for the study of behaviour.Still, the formal literature hardly contains a mention of 'center-embedding recursion', a term that only seems to appear in some linguistic papers; as I stated above, it tends to appear in the context of recursive rewriting rules, even if in reality it refers to either an embedding operation of a particular kind, or to a certain type of structure.
As for the recursion/iteration equivalence in general terms, let us take the factorial functions we defined recursively above to clarify this point, which brings me to the third sense I would like to focus on.This refers not to the algorithm qua formal object, but to its actual implementation; that is, it is the study of the so-called models of computation.A recursive process, then, is one in which an operation calls itself, creating chains of deferred operations, which is usefully contrasted with an iterative process, wherein an operation reapplies in succession (Abelson et al. 1996: 33-34). 8he recursive processing (shown on the left-hand side of Table 1) naturally follows from the recursive definition, while the iterative solution (shown on the right-hand side) necessitates a subtle observation.This is simply that factorials can be iteratively computed if we first multiply 1 by 2, then the result by 3, then by 4, until we reach n.That is, we keep a running product, together with a counter that counts from 1 up to n.Further, we add the stipulation that n! is the value of the product when the counter exceeds n. (NB: The first digit of the iterative solution shows the factorial whose number we are calculating, the second 7 Further, he incorrectly states, by misunderstanding the discussion in Tomalin (2006: 64), that Bar-Hillel might have reintroduced recursion into linguistics.Rather, Bar-Hillel seems to have been interested in a more precise definitional technique for theoretical constructs.Chomsky (1955: 45) manifests his agreement in spirit, while two years later sees "success along these lines unlikely" (Chomsky 1957: 58).
digit is the actual counter and the third is the running product.As the shape of these implementations show, the material kept in memory at any stage differs greatly.In the second line of the recursive processing, the actual operation in course is factorial 2, while what is being kept in memory is 4 × (3 × …).This is in great contrast to any stage of the iterative process, as the only things in working memory are the operation in course and the variables it operates upon.Naturally, an iterative process is in general more efficient; still, there exist clear data structures meriting a recursive solution.Three properties must be met for a recursive solution to be the most natural: (i) the original problem must be decomposable into simpler instances of the same problem; (ii) the sub-problems must be so simple that they can be solved without further subdivision; and (iii) it must be possible to combine the results of solving these sub-problems into a solution to the original problem (Roberts 2006: 8).Of course, recursive structures are naturally (and intuitively) the ideal candidates, but this should not distract from the point just made, namely that there is nothing intrinsically recursive about the factorial class.That is, the suitability of the recursive solution has to do with the nature of the solution itself, and not with the structures themselves.The connection between a structure and a recursive processing is, therefore, an empirical matter to be worked out on an individual basis; it cannot be simply assumed.
There is a great confusion about this in the cognitive sciences.Thus, much of the literature clearly conflates structures and mechanisms, inevitably concluding that recursive structures can only be generated or produced by recursive mechanisms, and mutatis mutandis for iterative structures and mechanisms.I have already noted above that a general property of implementations is that any sort of task which can be solved recursively can also be solved iteratively.Indeed, at the most general level, any function or task that can computed by the partial recursive functions of Church/Kleene (Kleene 1952), that is, a recursor, is computable by a Turing Machine, and the latter is an iterator (Moschovakis 1998).Translating this general result into actual processes is no small matter, but the literature provides many cases (see Liu & Stoller 1999, mentioned supra).
There is not an awful lot of discussion on real-time recursive processes in the collection under review here.Perfors et al. mentioned, in passing, that even though syntax may well be fundamentally recursive (in the sense of the grammar containing recursive rewriting rules), the parser could "usefully employ non-recursive rules" for simpler sentences (p.170) -a well-taken point about the efficiency of non-recursive rules in processing.
László Hunyadi does offer some data regarding possible recursive performance.After correctly stating that recursion and iteration would access (working) memory differently (p.347), some experimental evidence is provided on the type of prosodic structure associated with some of the structures alluded to earlier (self-embedding and tail-recursion; an 'iterative' structure is also delineated, viz. John is an excellent, cheerful, good-humoured man).The experiments were rather low-key; subjects were to read some sentences aloud, so that tonal phrases could then be analysed.They found different pitch levels for the embedded sentences in the nested constructions, a phenomenon that was coupled with a 'tonal continuity' -that is, there is a long-distance dependency between two discontinuous segments.For example, for [A… [C]…B], the tonal properties of [A…B] are identical to a continuous [AB] phrase.Further, this phenomenon is accompanied by three other effects: (i) there is no lowering of the pitch contours in C (a general tendency called 'downdrift'), (ii) the phrase C is realized in a different pitch, and (iii) there is an 'upstep' of B to the initial pitch of A.
Hunyadi sees this process as a clear example of recursion, given that the tonal properties of A must be kept in memory during C, so that they can be restored at B. This, Hunyadi believes, is a direct probe of memory -a 'bookmark effect'.Further analyses with tail-recursive and iterative structures show that the bookmark effect does not appear, which would suggest a principled distinction between these and self-embedded structures.
Despite couching the whole discussion in terms of the computational principles of recursion, tail-recursion and iteration, these results appropriately describe structural properties of prosody (or grouping in general; see pp. 361-365), but not its actual production (let alone the underlying grammar).That is, the different memory loads that recursive and iterative processes would incur was, in actual fact, not probed at all -there is a distinction, after all, between probing structures and probing mechanisms.We can doubtless be certain that the prosodic structure is, roughly, isomorphic to the syntactic structure, but not much follows about the underlying processing mechanisms.
Furthermore, this contribution introduces some confusion regarding the relationship between recursion and hierarchy (and iteration), and it might be worth our time to clarify it.Take computer science, a discipline which also employs 'trees' in order to represent nonlinear data structures.Computer scientist Donald Knuth certainly echoes a widely-held view when he writes that "any hierarchical classification scheme leads to a tree structure" (Knuth 1997: 312); more importantly, we need to understand his contention that "recursion is an innate characteristic of tree structures" (idem., p. 308).By 'innate', here, is probably meant intrinsic, and we can clarify what this means by providing a graphic representation of the recursive implementation of the factorials, as shown here: (3) fact 4 Note that the hierarchical structure directly stems from the fact that the implementation is underlain by a two-equation system: A variable plus a self-call, and it is the latter that expands into the base case, effectively terminating the recursion and the overall computation.It is this specific characteristic that explains why this type of hierarchy, a binary tree, automatically results from a recursive implementation.This hierarchy, however, is among the operations, and not the data structures.There is no sense in stating, by looking at the tree, that the factorial of 3 is embedded into the factorial of 4. This would amount to a definition of a structure in terms of how it is generated, but why do that?After all, most people are taught at school that the factorial of 4 is calculated by multiplying 4 by 3, then by 2, and finally by 1, and this magically eliminates the once-perceived embedding.
There is certainly a difference between representing a hierarchy of operations and representing a complex object; the factorials example is meant to illustrate that a recursive implementation automatically results in a binary hierarchy, but that one cannot necessarily infer the former from the latter.
It is also worth pointing out that an implementation is a real-time computational process, and we are therefore on a different level of analysis than when discussing, say, Merge.Granted, linguistic expressions also exhibit a binary tree structure, and it is certainly the case that Merge effects this geometry, but crucially it does not do so in the way of a recursive implementation.Recursive generation (successor function) and recursive implementations are different things, even if they may result in similar 'forms'.
It is to the structural 'forms' the language faculty generates that we move onto now, the fourth and last sense of recursion.Recursive data structures are defined by the U.S. National Institute of Standards and Technology as any object or class 'that is partially composed of smaller or simpler instances of the same data structure'; that is, any structure which includes an abstraction of itself (an X within an X).The prototypical cases here are the 'trees within trees' so familiar to generative grammar.It is important to establish what the X in the 'X within an X' is, so as to identify a recursive structure that is in fact of some relevance.
Note that this is a definition that focuses on properties of structures only, independently of the operations/mechanisms that generate/process them.There is nothing odd about this, Chomsky & Miller (1963) defined certain constructions in these very terms; they defined the tail-recursive sentences as either right-or left-recursive (depending on the direction of the embedding), and offered the term self-embedding (still used today) for what some call the center-embedding constructions (p.290).
It is to actual structures and their properties that a significant amount of papers in this collection focus on.One set of papers focuses on languages that, prima facie, lack self-embedding sentences; I will focus on these first.Jeanette Sakel & Eugenie Stapert, then, review the data presented by Daniel Everett on Pirahã, an Amazonian language claimed to lack any type of selfembedding.There are two main points here; one is that Pirahã lacks 'mental state' verbs (verbs like think, believe, etc); a fortiori, there is no outright clausal embedding, but simple juxtaposition of individual sentences.The correctness of the latter rests on the status of the verbal suffix -sai, a marker that Everett, in earlier work he now considers mistaken, classified either as a nominalizer or as a clausal embedding indicator (see p. 5).Ultimately, Sakel & Stapert support Everett's contemporary analysis of this suffix as a single marker of semantic cohesion between parts of discourse.Sauerland (2010), however, offers some experimental data that might cast some doubt on this.After carrying out a maximum pitch analysis on the two conditions the -sai marker would appear, according to Everett's earlier study, Sauerland found that the pitch level in the nominalizer condition was indeed much greater than in the clausal condition, indicating that there are two versions of this marker, one of which marks embedding.
Sauerland's methodology is an interesting one, and can complement more traditional ways of determining whether a language exhibits self-embedding.Marianne Mithun lists some of the usual formal features that languages with self-embedding manifest (viz.complementizers, omission of co-referential arguments, non-finite verb forms; p. 23) and provides an analysis of a variety of languages, such as central Alaskan Yup'ik, Mongolian Khalkha, and North American Mohawak.Apparently, these three languages exhibit some kind of selfembedding, but in different ways, which suggests, to Mithun at least, that this feature manifests cross-linguistic gradience (sic.) and variation (p.39).Perhaps this variation is present within individual languages too.Ljiljana Progovac seems to suggest this much for English, given the impossibility of nesting root small clauses (e.g., Me first, Case closed; p. 193) into other root small clauses.It is to be supposed that even if this is correct, it is so for a small part of the grammar, leaving other claims (viz. the presence of self-embedding else-where) virtually uncontested.
Karlsson, on the other hand, offers a typology of recursive and iterative structures (two and six types, respectively; see pp. 43-49 for definitions and examples) based on a quantitative analysis of spoken corpora.By recursive structures he means self-embedding and tail-recursion, and the central claim of his paper is that this sort of corpora analysis provides qualitative data -in the form of 'constraints' -that explain why recursive structures are so rare in spoken language.Karlsson then concludes that multiple nesting is an artificial feature of language that "arose with the advent of written language" (p.64) -it is not a central feature of language.
A similar rationale informs Ritva Laury & Tsuyoshi Ono.Therein, they provide a corpora analysis of conversations conducted in Finnish and Japanese, reaching similar results (and conclusion): Nested constructions are not very common in spoken Finnish and Japanese (pp.84-55); therefore, recursive structures cannot be a central property of language (I will come back to this presently).
Another set of papers, on the other hand, focus on self-embedding outside syntax.Thus, Eva Juarros-Daussà argues that there is a restriction (what she calls the two-argument restriction) that prevents argument structure (i.e. the predicate with its lexically encoded arguments) to be truly recursive.Quite clearly, however, she is not arguing against the possibility that an element may well be embedded inside an element of the same type (which automatically makes a structure recursive).Rather, she is suggesting that this embedding cannot go on into infinitude (a slightly different matter) -that is, she is arguing for the finitude of argument structure (p.253).Similarly, Yury A. Lander & Alexander B. Letuchiy provide data from a Northwest Caucasian language, Adyghe, that seems to allow self-embedding within its verb forms.On a much grander scale, Harry van der Hulst discusses self-embedding in phonology, a topic that has generated some heated debate (as he discusses).Phonological structure is clearly hierarchical, but whether it also manifests selfembedding is rather controversial.This chapter defends the controversial view (phonology is recursive), and the overall idea seems to be that given that recursive structures are principally semantic phenomena (a manner of organizing information), there must be an isomorphic structure in morphology (p.303).The remainder of the chapter offers a long discussion of phono-morphotactic structure, phonotactic structure, and prosodic structure, concluding that there is, after all, self-embedding at the syllable/foot, word, phrase and prosodic level.Pretty grand claims, and it will certainly be interesting to see what the literature makes of it (a thorough discussion of these issues is beyond the present review).
It will have been noticed that I have discussed all these papers in the context of structures only.Indeed, the study of self-embedded structures in natural language is an important one, but it ought to be clear that this phenomenon tells us much more about semantics than about syntax.Such structures, it is clear, provide the linguistic system with a way of "organizing and constraining semantic information", and their distribution appears to be construction-and languagespecific (Hinzen 2008: 358-359).
Once the dubious connection between these structures and specific rewriting rules is disregarded, it is not at all clear why some contributors believe that self-embedding cannot be converted into other types of phrases -a claim that is in fact explicitly denied in Kinsella (p.188).Therein, Anna Kinsella makes clear that languages like Pirahã, even if they really do not manifest selfembedding, do not come at an 'expressive' loss to their speakers.That is, there is no reason to believe that Pirahã cannot "express [similar] concepts using alternative means".Indeed, a self-embedded sentence such as The mouse [the cat [the dog chased] bit] ran away seems to be easily converted into either The dog chased the cat that bit the mouse that ran away (which some would call, I suppose, tailrecursive) or The dog chased the cat and the cat bit the mouse and the mouse ran away (a type of iterative structure, according to Karlsson).
Furthermore, there is a lot of interesting work regarding the concomitant properties that self-embedded structures exhibit, which range from their role in language acquisition and their cross-linguistic distribution, to their connection to the conceptual system (see, for example, Roeper 2009).Be that as it may, Merge remains a simple, recursive generator for reasons that lie elsewhere -the presence (or not) of self-embedded structures in a particular language is an ancil-lary matter.
As a final point in this lengthy review, I might as well mention that there are, in fact, grounds to believe that language manifests a much more general type of recursive structure.At the appropriate level of abstraction, a structure that contains an instance of itself (i.e. an X within an X) appears to be a feature of any type of syntactic structure.That is, every syntactic phrase, as Moro (2008: 68) shows, accords to the same geometry, an asymmetric structure [Specifier [Head -Complement]].This is shown below: ( ) shows, all human languages appear to follow this scheme, despite some variation in the linear order.Linear order is not the key property; rather, the central point is the basic hierarchical configuration: S is always more prominent than [H-C] and H is always more prominent than C. 9 At this level, then, structural recursion appears to come for free, but remains an interesting and surprising fact about language.It in fact identifies natural language as a subcategory of infinite systems, one that manifests a specific type of embedding: endocentric and asymmetric X structures.As such, category recursion is a subtype of structural recursion (in the same way that selfembedding is a subtype of general embedding), and it is perhaps in this sense that contemporary debates on the universality of embedding ought to be understood.
Certainly, an extensive terminological clean-up is in order, as much of the nomenclature currently in use (such as 'true, nested or center-embedding recursion', 'tail-recursion', 'self-embedding rules', et alia) is likely to create confusion rather than anything else.
Epilogue
It appears that the word recursion entered the English language in the 17 th century as an adaptation of the past participle of the Latin verb recurrere 'to go back'.Thus in his An English Expositor (1616) -that compendium of the "hardest words" -John Bullokar defined recursion as "a running back".A convoluted term it remains, but for different reasons.This critical note has attempted to outline four contemporary senses of this term that appropriately applies to well-established theoretical constructs of the cognitive sciences.An attempt was made to encompass the material of this collection around these four connotations as to elucidate a number of issues.Naturally, many topics went untreated, and the focus of this review has befallen on two main points.
Firstly, I have claimed that Chomsky, like many in the mathematical literature, takes recursion to be a central property of what a mechanical procedure is.Despite the different applications recursion has received within his vast output, a recent paper states that linguistic "competence is expressed by a generative grammar that recursively enumerates structural descriptions of sentences" (Chomsky 2006: 165; my emphasis) -a very close statement to the spirit of the Recursion Convention.
Secondly, I have tried to show that there is a clear conflation between, on the one hand, recursion and (self)-embedding and, on the other, recursive structures and recursive mechanisms.All these should in fact be kept separate unless there are principled reasons (and there might well be) to link them.Their connection, however, cannot be simply assumed.
It is rather clear that the present collection completely disregards the first point, while being guilty, for the most part, of the second.Perhaps we can concoct an explanation for why this collection so utterly fails to address Chomsky's actual introduction of recursion in linguistics -the overarching effect of one paper: Hauser et al. (2002).
It is undeniable that this paper has generated an incredible amount of discussion, but recursion was certainly not its main topic; indeed, to a certain extent, it received a rather indefinite characterization.This has had the unfortunate result that many recent publications on the role of recursion in cognition (and this is true for many of the contributions of the collection under review) come up with rather outlandish definitions, which are then loosely related to the aforementioned piece, even if on closer inspection, the actual work presented has very little to do with it -or more importantly, with Chomsky's leitmotif. 10 Unfortunately, the literature is steadily moving towards an increasingly confused study of recursive structures in conflation with mechanisms, obscuring what ought to be a rather straightforward and uncontroversial point: The centrality of recursion within the formalization of the mechanical procedure that underlies the language faculty.
9
The S-H-C schema invokes X-bar configurations.However, current linguistic theory doubts the existence of the specifier position.If so, the overall architecture would be something like this: [… Head … (Compl) … [… Head … (Compl) …] …].The point I am making still applies; that is, this sort of general recursive structure is present in all languages, independently of the most usual form of self-embedding.
Table 1 :
Recursive and Iterative Implementations | 11,632.6 | 2011-06-27T00:00:00.000 | [
"Philosophy"
] |
Atypical Manifestation of LRBA Deficiency with Predominant IBD-like Phenotype
Background:Inflammatory bowel diseases (IBDs) denote a heterogeneous group of disorders associated with an imbalance of gut microbiome and the immune system. Importance of the immune system in the gut is endorsed by the presence of IBD-like symptoms in several primary immunodeficiencies. A fraction of early-onset IBDs presenting with more severe disease course and incomplete response to conventional treatment is assumed to be inherited in a Mendelian fashion, as exemplified by the recent discovery of interleukin (IL)-10 (receptor) deficiency. Methods:We analyzed a patient born to consanguineous parents suffering from severe intestinal manifestations since 6 months of age and later diagnosed as IBD. Eventually, she developed autoimmune manifestations including thyroiditis and type I diabetes at the age of 6 and 9 years, respectively. Combined single-nucleotide polymorphism array-based homozygosity mapping and exome sequencing was performed to identify the underlying genetic defect. Protein structural predictions were calculated using I-TASSER. Immunoblot was performed to assess protein expression. Flow cytometric analysis was applied to investigate B-cell subpopulations. Results:We identified a homozygous missense mutation (p.Ile2824Pro) in lipopolysaccharide-responsive and beige-like anchor (LRBA) affecting the C-terminal WD40 domain of the protein. In contrast to previously published LRBA-deficient patients, the mutant protein was expressed at similar levels to healthy controls. Immunophenotyping of the index patient revealed normal B-cell subpopulations except increased CD21low B cells. Conclusions:We describe a patient with a novel missense mutation in LRBA who presented with IBD-like symptoms at early age, illustrating that LRBA deficiency should be considered in the differential diagnosis for IBD(-like) disease even in the absence of overt immunodeficiency.
T he gastrointestinal tract represents the largest interface of the organism with the environment and is constantly confronted with foreign antigens and bacteria, which may elicit either beneficial or pathogenic effects. Whether the outcome is beneficial is determined by a variety of systems, including the immune system (reviewed in Ref. 1). Effector functions of the immune system in the gut are tightly regulated as inadequate activation may have destructive effects on the bowel (reviewed in Ref. 2). Inflammatory bowel diseases (IBDs) represent a group of diseases resulting from pathologically increased activation of host defense systems leading to severe inflammation and diarrhea (reviewed in Ref. 3). Conversely, primary immunodeficiency disorders including common variable immunodeficiencies (CVIDs) are also associated with IBD-like manifestations. 4 It has been hypothesized that for IBDs, disease onset is both partially environmentally and partially genetically driven (reviewed in Ref. 5). Recent studies have identified monogenetic causes of IBD, which may explain early disease onset in particular cases where the relative contribution of host genetics will arguably be the highest (reviewed in Ref. 6). For instance, mutations affecting the interleukin (IL) 10 (receptor), 7,8 ADAM17, 9 XIAP, 10 and TTC7A 11 genes have recently been identified as monogenic causes of very early-onset IBD (onset before 6 yr of age 12 ). The diagnosis in these cases is difficult due to the unusual phenotype and lack of specific laboratory signs of intestinal inflammation (reviewed Ref. 6). A large proportion of patients with very early-or early-onset IBD (symptoms before 10 yr of age 13 ) remain molecularly unclassified. 14,15 Early detection of such diseases and identification of the underlying causative genetic aberration(s) may improve the treatment strategies and enable further understanding of the pathogenic mechanisms underlying IBD.
We here describe, for the first time, a patient with very earlyonset intestinal manifestations and diagnosed as IBD later in her life in whom we identified a biallelic mutation affecting the CVID-related gene lipopolysaccharide-responsive and beige-like anchor (LRBA).
Patient
The described study was performed according to the Helsinki Declaration and approved by the local ethics committee. All investigated individuals signed informed consent documents. The patient was treated at the Department of Pediatric Gastroenterology at Ankara University and at the Department of Pediatric Gastroenterology at Akdeniz University in Turkey.
Immunohistochemistry
Staining against CD3 was performed using an anti-CD3 antibody (DAKO, United Kingdom) combined with streptavidinperoxidase method. Hemosiderin staining was used to detect iron deposition in hepatocytes (Prussian blue staining).
Homozygosity Mapping
Homozygous regions were mapped using Affymetrix 6.0 SNP arrays (Affymetrix, High Wycombe, United Kingdom) as previously described 16 with minor modifications. In brief, genomic DNA was digested using the enzymes NspI and StyI (New England Biolabs, Frankfurt, Germany). Fragmented DNA was purified using Agencourt AMPure XP magnetic beads (Beckman Coulter, Vienna, Austria) and ligated to adapters, which were labeled and hybridized to the chips. Analysis was done using Genotyping console (Affymetrix) and the online tool homozygositymapper.org 17 (accession date February 10, 2014).
Exome Sequencing
The index patient's exome was sequenced applying the Nextera exome enrichment kit (Illumina, Eindhoven, the Netherlands) according to manufacturer's recommendation. In brief, 50 ng of genomic DNA (gDNA) extracted from whole blood were subjected to transposase-based in vitro shotgun library preparation (tagmentation), which introduced adapters into the genomic DNA, while fragmenting it into 300 to 500 bp segments. 18 The adapter sequence was used to amplify fragmented gDNA in a limited cycle polymerase chain reaction. The DNA was enriched for exonic fragments, which were also amplified. Clusters were generated on a cBot Cluster Generation System (Illumina) applying the SE cluster kit v3 (Illumina) and sequenced on an Illumina HiSeq 2000 (Illumina) applying 3-plexed 50 bp single-end sequencing. Sequences were demultiplexed and aligned to the human genome 19 with Burrows-Wheeler Aligner version 0.5.9. Insertion/deletion realignment, quality score recalibration, and variant calling were done applying the genome analysis toolkit version 1.6 19 . Annotation of single nucleotide variants, insertion and deletions were performed with ANNOVAR. 20 Common variants listed in dbSNP 137 were excluded from further analysis. Validation of the identified variants in LRBA was performed using conventional Sanger sequencing.
FACS Analysis
Peripheral blood monocytes were isolated from shipped blood samples after 2 days using Ficoll Paque PLUS (VWR International GmbH, Vienna, Austria) and stored in 90% FBS (PAA Laboratories GmbH, Pasching, Austria) and 10% dimethyl sulfoxide (Sigma-Aldrich Handels GmbH, Vienna, Austria) in liquid nitrogen. After thawing of peripheral blood monocytes, surface molecules were blocked in RPMI (PAA Laboratories GmbH) supplemented with 10% FBS. Staining was performed 30 minutes on ice using the following antibodies: CD19-PerCP-Cy5.
Western Blot
Patient granulocytes were isolated applying density centrifugation with Ficoll Paque PLUS (VWR International GmbH; Vienna, Austria) and stored at 2808C. Healthy donor granulocytes were isolated from fresh blood stimulated or not with 100 ng/mL LPS (Sigma-Aldrich, Handels GmbH) and directly lysed. Lysis was performed using RIPA buffer (1% NP40, 0.1% SDS, 0.5% sodium deoxycholate, 150 mM NaCl, and 10 mM Tris-HCl pH 7.5). 50 mg of protein were loaded on a gradient of 12% and 8% polyacrylamide gel and subjected to gel electrophoresis. Cell lysates were blotted onto an immobilon polyvinylidene difluoride membrane (Roche, Grenzach-Wyhlen, Germany) for 4 hours at 45 V and then stained with the primary antibodies anti-LRBA (Sigma-Aldrich Handels GmbH) and anti-tubulin (Abcam, Cambridge, United Kingdom) as well as the secondary anti-rabbit antibody coupled to horseradish peroxidase.
RESULTS
The index patient is a female born at term in 1998 to healthy consanguineous parents of Turkish origin. Her sister is healthy except for hypothyroidism without detectable autoantibodies (Fig. 1A).
Symptoms of nonmucoid and nonbloody diarrhea commenced at the age of 6 months (Fig. 1B). Persisting diarrhea was accompanied by severe edema due to hypoalbuminemia prompting recurrent albumine infusions. Other routine laboratory tests were normal. Known causes of persistent diarrhea such as congenital lactase deficiency, congenital chloride diarrhea, microvillous inclusion disease, primary intestinal lymphangiectasia, and food-induced enteropathy had been excluded by clinical history and appropriate laboratory tests. At the age of 1.5 years, a diagnosis of celiac disease was considered after endoscopic biopsy revealed diffuse villous atrophy, crypt hyperplasia, and intraepithelial lymphocytosis in duodenum (see Fig., Supplemental Digital Content 1, http://links.lww.com/IBD/A645). However, introduction of gluten-free diet did not induce remission of the disease, thus making this diagnosis highly unlikely. In addition, tissue transglutaminase antibodies were normal (0.4 U/mL; reference value ,10 U/mL), and screening for HLA DQ2 and DQ8 was negative when assessed later in her life.
At the age of 6 years, she was diagnosed with autoimmune thyroiditis based on the presence of autoantibodies, and thyroid hormone therapy was initiated. Three years later, the index patient presented with type 1 diabetes mellitus. From 9 to 11 years of age, the patient remained clinically stable despite some episodes of diarrhea, which did not require hospitalization.
At the age of 11 years, the clinical course deteriorated with increasing diarrhea and a weight loss of 15 kg within a year. When she was 12 years old, another endoscopy was performed also showing diffuse villous atrophy, crypt hyperplasia, and intraepithelial lymphocytosis in duodenum. In addition, a colonoscopy revealed crypt epithelium injury and regenerative inflammation in the histology examination. Reanalysis of this biopsy at the age of 13 years showed T-cell mediated epithelium destruction based on autoinflammatory processes (Fig. 1C). Those findings suggested autoimmune enteropathy or IBD. Autoimmune polyglandular syndrome, IPEX-like syndrome, and mitochondrial disease were excluded through Sanger sequencing of AIRE, IL2RA, CD25, TYMP, and POLG genes, respectively. Steroid therapy to treat the deterioration of the intestinal manifestations was started and resulted in clinical improvement. After 3 months, recurrence of intense diarrhea motivated the addition of cyclosporine A (CsA) to her therapy. This combined therapy induced partial improvement and was continued for 5 months.
At the age of 13 years, the patient presented with cachexia (21 kg, 130 cm), severe diarrhea, finger clubbing (Fig. 1D), and long-standing severe anal fissure and skin tags (Fig. 1E). The finger clubbing had not been previously noticed, whereas the anal fissures and skin tags had already been present for 2 to 3 years. It remains unclear whether the finger clubbing may have been related to therapy with CsA and/or the autoimmune thyroiditis (despite euthyroid state under therapy). Those clinical manifestations, together with the previously described colonoscopy results, suggested the diagnosis of IBD and prompted reinitiation of treatment with steroids and CsA.
After 30 days of combined therapy, the patient developed renal, respiratory and cardiac failure, bicytopenia, pleural effusion, pericardial effusion, ascites, splenomegaly, deranged coagulation tests, and direct hyperbilirubinemia. Additional laboratory investigations evidenced thrombotic microangiopathy and thrombocytopenia associated with multiple organ failure possibly associated with CsA treatment. In light of these severe reactions, the combined immunosuppressive therapy was immediately discontinued, with subsequent recovery of the patient. After this crisis, she showed lactate and ammonium elevations, pseudoobstruction, and metabolic acidosis attacks intermittently. Magnetic resonance imaging revealed cerebral and cerebellar atrophy, whereas MR spectroscopy was normal. Electromyogram showed demyelinating polyneuropathy. Autoantibodies other than thyroid autoantibodies (ANA, AMA, ASMA, TTG IgA, ANCA, anti dsDNA, LKM) were negative.
The presence of the thrombocytopenia and anemia prompted bone marrow aspiration which showed no abnormalities, suggesting that the bicytopenia was caused by peripheral destruction. Liver biopsy revealed fibrosis in portal area, periportal fibrosis, perisinusoidal fibrosis, patchy cholestatic findings, and hemosiderosis ( Fig. 2A).
The clinical picture continued to deteriorate with persistent diarrhea and lack of weight gain despite enteral and parenteral nutrition support. Additional invasive examinations could not be performed because of her poor clinical condition.
To exclude that her symptoms were caused by an underlying CVID, we performed extensive immunophenotyping. The patient, however, did not at any time presented with either recurrent/severe infections or serum reduction of specific immunoglobulin subtypes. B-and T-lymphocyte counts were also within normal range (Fig. 2B). The evaluation of specific subgroups of B lymphocytes revealed normal numbers of classswitched IgD 2 and CD27 + B cells (Fig. 2B). Interestingly, she presented with increased numbers of CD21 low B cells. Giving the early disease onset, a monogenetic cause for the disease was suspected. As the patient was born to consanguineous parents, we assumed an autosomal recessive mode of inheritance. Thus, we performed homozygosity mapping using Affymetrix 6.0 Genotyping SNP arrays. Calculations using homozygosity mapper 17 revealed 2 homozygous stretches with the maximal homozygosity score 1000 each on chromosomes 4 and 7 (Fig. 3A). The patient's DNA was subjected to exome sequencing which revealed a total of 43,772,399 reads that could be mapped uniquely to the genome (98.38% of total reads), resulting in a mean coverage of 19 reads per base. Exome sequencing revealed 5 variants in 4 genes ( Table 1) fulfilling the criteria of novel nonsense, missense, or splice-site variants located inside the homozygous candidate intervals, among them 2 single nucleotide exchanges of neighboring nucleotides in the gene encoding LPS-responsive and beige-like anchor protein (LRBA; NP_006717.2). The variants affecting adjacent nucleotide positions (c.A8470C; c.T8471C) in LRBA lead to an amino acid exchange within the C-terminus of the protein (p.Ile2824Pro). Both variants were validated using conventional Sanger sequencing and showed perfect segregation under the assumption of autosomal-recessive inheritance with full penetrance (Fig. 3B). The mutated residue Ile-2824 is highly conserved throughout vertebrate evolution (Fig. 3C). Protein 3D structure modeling of the C-terminal region (amino acids 2073-2863) predicted the exchanged amino acid to be located in one of the 5 WD40 domains at FIGURE 3. Identification of a homozygous LRBA mutation as underlying genetic defect of the index patient's phenotype. A, Two homozygous intervals with maximal homozygosity scores of 1000 (red) were identified in the patient. B, Inside the candidate intervals, a homozygous missense mutation in LRBA (c.A8470C; c.T8471C; p.Ile2824Pro) was identified, which could be validated by Sanger sequencing and segregated with the disease. C, The mutated residue is highly conserved throughout vertebrates. D, Protein 3D structure modeling of the BEACH (black) and WD40 (blue) domains revealed that the mutated amino acid of the index patient Ile-2824 is located inside the second last b-sheet of the WD40 domain b-propeller whereas the previously published mutated amino acid Ile-2857 is located in close proximity to the BEACH domain structure (yellow: Linker between BEACH and WD40). E, All published mutations (black annotation) lead to an absence of the protein product. The index patient mutation is annotated in red (black: BEACH domain; dark blue: WD40 domain). F, The mutation p.Ile2824Pro allows protein expression at similar levels as the healthy donor. the C-terminus (Fig. 3D, E). Polyphen-2 calculations predicted the mutation as probably damaging with a score of 0.985 (maximum 1). Immunoblot analysis showed that the mutation in LRBA allowed for protein expression at a similar level as in a healthy control (Fig. 3F).
DISCUSSION
Recently identified monogenic forms of IBD such as IL-10 (receptor) deficiency, 7,8 XIAP deficiency, 10 TTC7A deficiency, 11 and ADAM17 deficiency 9 are associated with very early and severe onset of the disease. These patients often do not respond well to conventional therapy (reviewed in Ref. 6).
Here, we describe a female patient whose main symptom was very early-onset and treatment-resistant nonmucoid, nonbloody diarrhea. She developed signs of autoimmunity, such as thyroiditis, at the age of 6 and diabetes mellitus type 1 at the age of 9 years. Histological analysis of duodenal biopsies revealed T-cell mediated epithelial destruction, thus enabling a diagnosis of IBD at the age of 13 years. Retrospectively, the diagnosis of early-onset IBD could be considered if early supporting evidence of intestinal inflammation had been available. However, as she was under treatment at a rural hospital until the age of 13, such evidence was not available.
Exome sequencing covering more than 98% of coding genomic region revealed no variants or mutations with minor allele frequency of less than 1% in the IL10, IL10RA, IL10RB, XIAP, ADAM17, or TTC7A genes, respectively. Further genes related to polyglandular autoimmune syndrome or other diseases that might explain the phenotype of the patient were also not uncovered by combined homozygosity mapping and exome sequencing. Heterozygous mutations were excluded from the analysis as they would lead to symptoms in other variant-bearing family members. Homozygous missense variants in the 4 genes, ABCE1, LRBA, SCIN, and DNAH11, were identified (Table 1). ABCE1 encodes the protein adenosine triphosphatebinding cassette subfamily E member 1, which is a negative regulator of RNAse L. 23 SCIN encodes the protein adseverin, which is an actin capping and serving protein. 24 Deleterious mutations in DNAH11 are causative for situs inversus totalis. 25 Regarding the index patient, none of these 3 genes can be easily linked to her phenotype. Mutations in the fourth gene LRBA have been recently identified as a cause of a CVID associated with autoimmunity and IBD-like disease, [26][27][28] which prompted us to further investigations.
LRBA was first identified as a lipopolysaccharide responsive gene in B cells and macrophages whose protein structure is similar to the lysosomal-trafficking regulator LYST. 29 Both LYST and LRBA belong to the group of BEACH domain-containing proteins, which consists of 9 human proteins (reviewed in Ref. 30). Apart from mutations in LRBA, 3 other BEACH domaincontaining genes have been implicated in autosomal recessive Mendelian disorders. Homozygous LYST mutations lead to Chediak-Higashi syndrome 31 ; homozygous Neurobeachin-like 2 mutations result in gray platelet syndrome [32][33][34] ; and biallelic WD repeat domain 81 mutations result in a cerebellar ataxia, mental retardation, and dysequilibrium syndrome. 35 Two of these Mendelian disorders, namely Chediak-Higashi syndrome and LRBA deficiency, result in reduced immune functions.
The functional role of BEACH domain-containing proteins remains elusive. It has been speculated that these proteins are involved in membrane dynamics and vesicular transport (reviewed in Ref. 30). LRBA has been shown to colocalize with lysosomes, ER, and the Golgi complex, respectively. 29 Furthermore, it has been implicated as a negative regulator of apoptosis as it is overexpressed in several cancers. 36 This is in concordance with the increased apoptosis in B cells, which has been described in LRBA-deficient patients. 27 Elevated cell death might be due to defective autophagy. 27 Interestingly, our patient did not show reduced numbers of B cells (Fig. 2B). Whether autophagy is affected in the index patient could not be determined.
So far, only a limited number of patients with LRBA deficiency have been published. [26][27][28] Interestingly, all previously described patients bear mutations that lead to absence of protein expression. [26][27][28] One of the identified patients presented with a missense mutation in LRBA located inside the WD40 domains (p. Ile2657Ser) similar to the index patient (p.Ile2824Pro). 27 The reason why the mutation p.Ile2657Ser results in an absent protein in contrast to the mutation p.Ile2824Pro might be explained by the different location of the amino acids in the protein. Protein 3D structure modeling of the BEACH-WD40 domain of LRBA revealed that the amino acid Ile-2657 is located in close proximity to the BEACH domain (Fig. 3D), which might potentially be crucial for protein stability. However, the amino acid Ile-2824 is located between the last 2 b-sheets of the WD40 b-propeller (Fig. 3D) and does not significantly influence the stability of the protein, as it is still detectable (Fig. 3F). Because there is currently no simple test for intact function of LRBA protein, we cannot formally assess whether the mutation described here allows for residual protein function. LRBA-deficient patients present with heterogeneity of clinical symptoms with no clear genotype to phenotype correlation. Common features of LRBA-deficient patients are quantitative and/or qualitative B-cell defects as well as autoimmunity. [26][27][28] Nine of 11 published patients to date present with autoimmune IBD-like manifestations. 26,27 Another common feature is recurrence of pulmonary infections. The patient described here differs from those previously described because her leading symptom was a potential IBD-like disease, starting at the age of 6 months and diagnosed as IBD when she was 13 years old. Further autoimmune features only manifested later in her life.
B-cell phenotyping at the age of 14 years revealed increased numbers of CD21 low B cells. This subgroup of B cells has been associated with autoimmunity in patients suffering from CVID 37 but has not been reported for LRBA deficiency to date. Overall B-cell numbers and numbers of class-switched B cells were not affected in the index patient, and she did not show any pulmonary complications. Also in contrast to other LRBAdeficient patients, 26,27 the index patient does not fulfill the formal criteria for a CVID diagnosis. 38,39 To our knowledge, this is the first LRBA-mutant patient presenting exclusively with gastrointestinal symptoms in the first few months of life. Only one previously reported patient bears some similarities to the patient described here as he also initially showed nonbloody diarrhea. 26 The patient however presented in addition with autoimmunity and Epstein-Barr virus-associated lymphoproliferative disease very early in his life. 26 We speculate that the differences in phenotype might be due to a hypomorphic nature of the variant in LRBA, as the missense mutation identified may allow for residual function of the corresponding gene product, although we cannot formally rule out the effects of the other variants found and of hidden intronic variants, which are not covered by exome sequencing.
Taken together, we describe for the first time a patient with a missense mutation in LRBA allowing for detectable protein expression (and potentially residual function) presenting exclusively with gastrointestinal manifestations at very young age, later diagnosed as IBD. LRBA deficiency thus represents an important molecular differential diagnosis for severe persisting IBD-like disease and related conditions. | 4,862.2 | 2015-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Bayes ’ Model of the Best-Choice Problem with Disorder
We consider the best-choice problemwith disorder and imperfect observation. The decision-maker observes sequentially a known number of i.i.d random variables from a known distribution with the object of choosing the largest. At the random time the distribution law of observations is changed. The random variables cannot be perfectly observed. Each time a random variable is sampled the decision-maker is informed only whether it is greater than or less than some level specified by him. The decision-maker can choose at most one of the observation. The optimal rule is derived in the class of Bayes’ strategies.
Introduction
In the papers we consider the following best-choice problem with disorder and imperfect observations.A decision-maker observes sequentially n iid random variables ξ 1 , . . ., ξ θ−1 , ξ θ , . . ., ξ n .The observations ξ 1 , . . ., ξ θ−1 are from a continuous distribution law F 1 x state S 1 .At the random time θ, the distribution law of observations is changed to continuous distribution function F 2 x i.e., the disorder happen-state S 2 .The moment of the disorder has a geometric distribution with parameter 1 − α.The observer knows parameters α, F 1 x , and F 2 x , but the exact moment θ is unknown.
At each time in which a random variable is sampled, the observer has to make a decision to accept and stop the observation process or reject the observation and continue the observation process .If the decision-maker decided to accept at step k 1 ≤ k ≤ n , she receives as the payoff the value of the random variable discounted by the factor λ k−1 , where 0 < λ < 1.The random variables cannot be perfectly observed.The decision-maker is only informed whether the observation is greater than or less than some level specified by her.
International Journal of Stochastic Analysis
The aim of the decision-maker is to maximize the expected value of the accepted discounted observation.
We find the solution in the class of the following strategies.At each moment k 1 ≤ k ≤ n , the observer estimates the a posterior probability of the current state and specifies the threshold s s n−k .The decision-maker accepts the observation x k if and only if it is greater than the corresponding threshold s.
This problem is the generalization of the best-choice problem 1, 2 and the quickest determination of the change-point disorder problem 3-5 .The best-choice problems with imperfect information were treated in 6-8 .Only few papers related to the combined best-choice and disorder problem are published 9-11 .Yoshida 9 considered the fullinformation case and found the optimal stopping rule which maximizes the probability that accepted value is the largest of all θ m − 1 random variables for a given integer m.
Closely related work to this study is Sakaguchi 10 where the optimality equation for the optimal expected reward is derived for the full-information model.In 11 , we constructed the solution of the combined best-choice and disorder problem in the class of single-level strategies, and, in this paper, we search the Bayes' strategy which maximizes the expected reward in the model with imperfect observation.
Optimal Strategy
According to the problem the observer does not know the current state S 1 or S 2 .But she can estimate the state using the Bayes' formula: Here, s s i is the threshold specified by the decision-maker within i steps until the end i.e., at the step n − i , π is the a prior probability of the state S 1 i.e., before getting the information that x ≤ s , F π s πF 1 s πF 2 s , and π 1 − π.We use the dynamic programming approach to derive the optimal strategy.Let v i π be the payoff that the observer expects to receive using the optimal strategy within steps until the end.The optimality equation is as follows: Simplifying 2.2 , we get Here, The following theorem gives the presentation of the expected payoff in linear form on π.
Theorem 2.1. For any i the function v i π can be written in the form
where Proof.Using the formula 2.3 , one can show that where Assume the theorem is correct for certain i k.Then, for i k 1 where
2.9
The theorem is proved.
The following lemma takes place.
Proof.It is obvious that the sequence v i π is increasing by i.
International Journal of Stochastic Analysis
Now, we prove that the sequence of the expected payoffs has an upper bound.
2.10
Further one can show using the induction that for any i ≥ 1 and any 0 ≤ π ≤ 1 the expected payoff at the step i has the upper bound The lemma is proved.
Corollary 2.3.Theorem 2.1 and the lemma yield that there are such A and B that
2.12
As i → ∞ the expected payoff satisfies the following equation:
2.13
To find the components of the expected payoff for a case of huge number of observation we should solve the following equation:
2.15
The solution of the system is as follows
2.16
The expected payoff is v π max s πA B 2.17 and the optimal threshold is s s π arg max s πA B .
2.18
The above results are summarized in the following theorem.
Theorem 2.4.For i → ∞, the solution of 2.3 is defined as where 2.20
Examples
Consider the examples of using the Bayes' strategy B defined by the formula 2.18 comparing with two strategies with constant thresholds that do not depend on π.
Normal Distribution
Consider the example of the normal distribution of the random variables where functions F 1 x and F 2 x have the variance σ 2 1 and the expectation μ 1 10 and μ 2 9, respectively.Strategies A 1 and A 2 with constant thresholds defined by the following formula: where F s ≡ F 1 s and E s ≡ E 1 s for the strategy A 1 ; F s ≡ F 2 s and E s ≡ E 2 s for the strategy A 2 .
The values of the thresholds of strategies A 1 and A 2 depending on discount rate are tabulated in Table 1.
Table 1 shows how much the discount rate is affect on the thresholds.Figure 1 shows the graphics of the optimal thresholds for strategies A 1 and A 2 s 1 and s 2 , resp.and strategy B s opt depending on π.As the figure shows, the strategy B depends on the a posterior probability of the state S 1 π .As π tends to zero, the optimal threshold of the strategy B tends to threshold s 2 .
We compare the payoffs that the observer expects to receive using different strategies.Define V α as the expected payoff for π 1 and depending on probability of disorder α.
Figure 2 shows the numerical results of the expected payoffs of the observer who uses the strategies A 1 , A 2 , and B thresholds s 1 , s 2 , and s opt , resp. .The expected payoff of the observer who uses the Bayes' strategy B is greater if she uses one of the strategies A 1 or A 2 .The difference is significant for α ∈ 0.75, 0.98 , because of uncertainty of the current state of the system.Table 2 shows the numerical results of the main characteristics of the best-choice process.
For the small probability of the disorder 1 − α 0.1 , the expected payoff according to the strategy A 2 is greater 10.429 than according to the strategy A 1 10.035 .But the Bayes' strategy B that depends on π gives the largest expected payoff 10.500 .
Table 2 shows that the average time of accepting the observation is increasing with respect to the value of the threshold.Note that the strategy A 1 does not depend on the disorder and this leads to a high value of the average time of accepting the observation.Both strategies A 2 and B have a small average time of accepting the observation.
Exponential Distribution
Consider the example of the exponential distribution of the observations.Let F 1 x and F 2 x have the exponential distribution with parameters λ 1 0.5 and λ 2 1, respectively.As in the previous example, consider the strategies A 1 and A 2 comparing with the Bayes' strategy B, s E s 1 − λF s , 3.2 where F s ≡ F 1 s and E s ≡ E 1 s for the strategy A 1 ; F s ≡ F 2 s and E s ≡ E 2 s for the strategy A 2 .Table 3 shows the values of the thresholds for the strategies A 1 and A 2 depending on the discount rate.
The value of the optimal threshold of the strategy B as in the case of the normal distribution of the observations is increasing by π and equal to the threshold of the strategy A 2 at π 0. The graphics of the expected payoffs have the same view as in Figure 2. Table 4 shows the main characteristics of the best-choice process for different strategies.As in the previous example, the Bayes' strategy gives better payoff than the strategy A 2 , but it has bigger average time of accepting the observation.The strategy A 1 is the worst for all the parameters.
Figure 2 :
Figure 2:Expected payoffs of the observer who uses the strategies A 1 , A 2 and B for α 0.9, λ 0.99.
Table 1 :
The values of the thresholds of strategies A 1 and A 2 .
Table 2 :
Main characteristics of the best-choice process for α 0.9, λ 0.99.
Table 3 :
The values of the thresholds of strategies A 1 and A 2 .
Table 4 :
Main characteristics of the best-choice process for α 0.9, λ 0.99. | 2,342.2 | 2012-08-03T00:00:00.000 | [
"Mathematics"
] |
Identifying Automatically Generated Headlines using Transformers
False information spread via the internet and social media influences public opinion and user activity, while generative models enable fake content to be generated faster and more cheaply than had previously been possible. In the not so distant future, identifying fake content generated by deep learning models will play a key role in protecting users from misinformation. To this end, a dataset containing human and computer-generated headlines was created and a user study indicated that humans were only able to identify the fake headlines in 47.8% of the cases. However, the most accurate automatic approach, transformers, achieved an overall accuracy of 85.7%, indicating that content generated from language models can be filtered out accurately.
Introduction
Fake content has been rapidly spreading across the internet and social media, misinforming and affecting users' opinion (Kumar and Shah, 2018;Guo et al., 2020). Such content includes fake news articles 1 and truth obfuscation campaigns 2 . While much of this content is being written by paid writers (Luca and Zervas, 2013), content generated by automated systems is rising. Models can produce text on a far greater scale than it is possible to manually, with a corresponding increase in the potential to influence public opinion. There is therefore a need for methods that can distinguish between human and computer-generated text, to filter out deceiving content before it reaches a wider audience.
While text generation models have received consistent attention from the public as well as from the academic community (Dathathri et al., 2020;Subramanian et al., 2018), interest in the detection of automatically generated text has only arisen more 1 For example, How a misleading post went from the fringes to Trump's Twitter. 2 For example, Can fact-checkers save Taiwan from a flood of Chinese fake news? recently (Jawahar et al., 2020). Generative models have several shortcomings and their output text has characteristics that distinguish it from humanwritten text, including lower variance and smaller vocabulary (Holtzman et al. (2020);Gehrmann et al. (2019)). These differences between real and generated text can be used by pattern recognition models to differentiate between the two. In this paper we test this hypothesis by training classifiers to detect headlines generated by a pretrained GPT-2 model (Radford et al., 2019). Headlines were chosen as it has been shown that shorter generated text is harder to identify than longer content .
The work described in this paper is split into two parts: the creation of a dataset containing headlines written by both humans and machines and training of classifiers to distinguish between them. The dataset is created using real headlines from the Million Headlines corpus 3 and headlines generated by a pretrained GPT-2. The training and development sets consist of headlines from 2015 while the testing set consists of 2016 and 2017 headlines. A series of baselines and deep learning models were tested. Neural methods were found to outperform humans, with transformers being almost 35% more accurate.
Our research highlights how difficult it is for humans to identify computer-generated content, but that the problem can ultimately be tackled using automated approaches. This suggests that automatic methods for content analysis could play an important role in supporting readers to understand the veracity of content. The main contributions of this work are the development of a novel fake content identification task based on news headlines 4 and analysis of human evaluation and machine learning approaches to the problem. Kumar and Shah (2018) compiled a survey on fake content on the internet, providing an overview of how false information targets users and how automatic detection models operate. The sharing of false information is boosted by the natural susceptibility of humans to believe such information. Pérez-Rosas et al. (2018) and Ott et al. (2011) reported that humans are able to identify fake content with an accuracy between 50% and 75%. Information that is well presented, using long text with limited errors, was shown to deceive the majority of readers. The ability of humans to detect machinegenerated text was evaluated by Dugan et al. (2020), showing that humans struggle at the task. Holtzman et al. (2020) investigated the pitfalls of automatic text generation, showing that sampling methods such as Beam search can lead to low quality and repetitive text. Gehrmann et al. (2019) showed that automatic text generation models use a more limited vocabulary than humans, tending to avoid low-probability words more often. Consequently, text written by humans tends to exhibit more variation than that generated by models.
Relevant Work
In Zellers et al. (2019), neural fake news detection and generation are jointly examined in an adversarial setting. Their model, called Grover, achieves an accuracy of 92% when identifying real from generated news articles. Human evaluation though is lacking, so the potential of Grover to fool human readers has not been thoroughly explored. In Brown et al. (2020), news articles generated by their largest model (175B parameters) managed to fool humans 48% of the time. The model, though, is prohibitively large to be applied at scale. Further, showed that shorter text is harder to detect, both for humans and machines. So even though news headlines are a very potent weapon in the hands of fake news spreaders, it has not been yet examined how difficult it is for humans and models to detect machine-generated headlines.
Dataset Development
The dataset was created using Australian Broadcasting Corporation headlines and headlines generated from a model. A pretrained 5 GPT-2 model (Radford et al., 2019) was finetuned on the headlines data. Text was generated using sampling with tem-5 As found in the HuggingFace library. perature and continuously re-feeding words into the model until the end token is generated.
Data was split in two sets, 2015 and 2016/2017, denoting the sets a "defender" and an "attacker" would use. The goal of the attacker is to fool readers, whereas the defender wants to filter out the generated headlines of the attacker. Headlines were generated separately for each set and then merged with the corresponding real headlines.
Dataset Analysis
Comparison of the real and automatically generated headlines revealed broad similarities between the distribution of lexical terms, sentence length and POS tag distribution, as shown below. This indicates that the language models are indeed able to capture patterns in the original data.
Even though the number of words in the generated headlines is bound by the maximum number of words learned in the corresponding language model, the distribution of words is similar across real and generated headlines. In Figures 1 and 2 we indicatively show the 15 most frequent words in the real and generated headlines respectively. POS tag frequencies are shown in Table 1 for the top tags in each set. In real headlines, nouns are used more often, whereas in generated headlines the distribution is smoother, consistent with findings in Gehrmann et al. (2019). Furthermore, in generated headlines verbs appear more often in their base (VB) and third-person singular (VBZ) form while in real headlines verb tags are more uniformly distributed. Overall, GPT-2 has accurately learned the real distribution, with similarities across the board. Lastly, the real headlines are shorter than the generated ones, with 6.9 and 7.2 words respectively.
Survey
A crowd-sourced survey 6 was conducted to determine how realistic the generated text is. Participants (n=124) were presented with 93 headlines (three sets of 31) in a random order and asked to judge whether they were real or generated. The headlines were chosen at random from the "attacker" (2016/2017) headlines.
In total, there were 3435 answers to the 'real or generated' questions and 1731 (50.4%) were correct. When presented with a computer-generated headline, participants answered correctly in 1113 out of 2329 (47.8%) times. In total 45 generated headlines were presented and out of those, 23 were identified as computer-generated (based on average response). This is an indication that GPT-2 can indeed generate realistic-looking headlines that fool readers. When presented with actual headlines, participants answered correctly in 618 out of 1106 times (55.9%). In total 30 real headlines were presented and out of those, 20 were correctly identified as real (based on average response).
Of the 45 generated headlines, five were marked as real by over 80% of the participants, while for the real headlines, 2 out of 30 reached that threshold. The five generated headlines were: Most of these examples contain grammatical errors, such as ending with an adjective, while some headlines contain absurd or nonsensical content. These deficiencies set these headlines apart from the rest. It is worth noting that participants appeared more likely to identify headlines containing grammatical errors as computer-generated than other types of errors.
Classification
For our classifier experiments, we used the three sets of data (2015, 2016 and 2017) we had previously compiled. Specifically, for training we only used the 2015 set, while the 2016 and 2017 sets were used for testing. Splitting the train and test data by the year of publication ensures that there is no overlap between the sets and there is some variability between the content of the headlines (for example, different topics/authors). Therefore, we can be confident that the classifiers generalize to unknown examples.
Furthermore, for hyperparameter tuning, the 2015 data was randomly split into training and development sets on a 80/20 ratio. In total, for training there are 129, 610 headlines, for development there are 32, 402 and for testing there are 303, 965.
Experiments
Four types of classifiers were explored: baselines (Elastic Net and Naive Bayes), deep learning (CNN, Bi-LSTM and Bi-LSTM with Attention), transfer learning via ULMFit (Howard and Ruder, 2018) and Transformers (BERT (Devlin et al., 2019) and DistilBERT (Sanh et al., 2019)). The architecture and training details can be found in Appendix A. Results are shown in Table 2. Overall accuracy is the accuracy in percentage over all headlines (real and generated), while (macro) precision and recall are calculated over the generated headlines. Precision is the percentage of correct classifications out of all the generated classifications, while recall is the percentage of generated headlines the model classified correctly out of all the actual generated headlines. High recall scores indicate that the models are able to identify a generated headline with high accuracy, while low precision scores show that models classify headlines mostly as generated.
We can observe from the results table that humans are overall less effective than all the examined models, including the baselines, scoring the lowest accuracy. They are also the least accurate on generated headlines, achieving the lowest recall. In general, human predictions are almost as bad as random guesses.
Deep learning models scored consistently higher than the baselines, while transfer learning outperformed all previous models, reaching an overall accuracy of around 83%. Transformer architectures though perform the best overall, with accuracy in the 85% region. BERT, the highest-scoring model, scores around 30% higher than humans in all metrics. The difference between the two BERT-based models is minimal.
Since training and testing data are separate (sampled from different years), this indicates that there are some traits in generated text that are not present in human text. Transformers are able to pick up on these traits to make highly-accurate classifications.
For example, generated text shows lower variance than human text (Gehrmann et al., 2019), which means text without rarer words is more likely to be generated than being written by a human.
Error Analysis
We present the following two computer-generated headlines as indicative examples of those misclassified as real by BERT: Extra Surveillance Announced For WA Coast
Violence Restricting Rescue Of Australian
The first headline is not only grammatically sound, but also semantically plausible. A specific region is also mentioned ("WA Coast"), which has low probability of occurring and possibly the model does not have representative embeddings for. This seems to be the case in general, with the mention of named entities increasing the chance of fooling the classifier. The task of predicting this headline is then quite challenging. Human evaluation was also low here, with only 19% of participants correctly identifying it.
In the second headline, the word "restricting" and the phrase "rescue of" are connected by their appearance in similar contexts. Furthermore, both "violence" and "restricting rescue" have negative connotations, so they also match in sentiment. These two facts seem to lead the model in believing the headline is real instead of computer-generated, even though it is quite flimsy both semantically (the mention of violence is too general and is not grounded) and pragmatically (some sort of violence restricting rescue is rare). In contrast, humans had little trouble recognising this as a computergenerated headline; 81% of participants labelled it as fake. This indicates that automated classifiers are still susceptible to reasoning fallacies.
Conclusion
This paper examined methods to detect headlines generated by a GPT-2 model. A dataset was created using headlines from ABC and a survey conducted asking participants to distinguish between real and generated headlines.
Real headlines were identified as such by 55.9% of the participants, while generated ones were identified with a 47.8% rate. Various models were trained, all of which were better at identifying generated headlines than humans. BERT scored 85.7%, an improvement of around 35% over human accuracy.
Our work shows that whereas humans cannot differentiate between real and generated headlines, automatic detectors are much better at the task and therefore do have a place in the information consumption pipeline. | 3,102.4 | 2021-06-01T00:00:00.000 | [
"Computer Science"
] |
Control of Optoelectronic Scanning and Tracking Seeker by Means the LQR Modified Method with the Input Signal Estimated Using of the Extended Kalman Filter
: The paper presents the concept of controlling the designed optoelectronic scanning and tracking seeker. The above device is intended for the so-called passive guidance of short-range anti-aircraft missiles to various types of air maneuvering targets. In the presented control method, the modified linear-quadratic regulator (LQR) and the estimation of input signals using the extended Kalman filter (EKF) were used. The LQR regulation utilizes linearization of the mathematical model of the above-mentioned seeker by means of the so-called Jacobians. What is more, in order to improve the stability of the seeker control, vector selection of signals received by the optoelectronic system was used, which also utilized EKF. The results of the research are presented in a graphical form. Numerical simulations were carried out on the basis of the author’s own program developed in the programming language C++.
Introduction
One of the most important components of an anti-aircraft self-guiding infrared missile is the optoelectronic self-guiding seeker. This type of device is still the subject of intensive research in many scientific centers around the world [1 -15]. The issue of this article refers to the publications [16,17] and is a continuation of the research conducted on the designed optoelectronic scanning and tracing seeker, presented in Figure 1.
The drive system of the designed scanning and tracking seeker is the rotor shown in Figure 1a. It is suspended in two rotating housings forming the so-called Cardan joint (Figure 1c). The rotor axis is the optical axis of the search and tracking system for a detected target. By means of the motors mounted in the individual housings (Figure 1b), control moments are applied to the rotating rotor, which makes it possible to change the position of its axis in space and thus to control the seeker. Figure 1d shows a 3D visualization of the complete seeker. Thanks to the 3D software, the mathematical and dynamics model and problem of moving parts are easier to solve [18]. Figure 2 shows the seeker set in the first operating mode in which the device scans the air space with the so called large angle of scanning β = 1.92 • . Figure 3 shows the area of the airspace scanned by the seeker set in the first operating mode (the plane is scanned perpendicularly to the head axis). Figure 4 shows the seeker set in the second operating mode, where the device scans the air space with a so-called small scanning angle β = 0.28 • . Figure 5 shows the area of air space scanned by the seeker set in the second operating mode (the plane scanned is perpendicular to the seeker axis). The drive system of the designed scanning and tracking seeker is the rotor shown in Figure 1a. It is suspended in two rotating housings forming the so-called Cardan joint ( Figure 1c). The rotor axis is the optical axis of the search and tracking system for a detected target. By means of the motors mounted in the individual housings (Figure 1b), control moments are applied to the rotating rotor, which makes it possible to change the position of its axis in space and thus to control the seeker. Figure 1d shows a 3D visualization of the complete seeker. Thanks to the 3D software, the mathematical and dynamics model and problem of moving parts are easier to solve [18]. Figure 2 shows the seeker set in the first operating mode in which the device scans the air space with the so called large angle of scanning Figure 3 shows the area of the airspace scanned by the seeker set in the first operating mode (the plane is scanned perpendicularly to the head axis). Figure 3 shows the area of the airspace scanned by the seeker set in the first operating mode (the plane is scanned perpendicularly to the head axis). Figure 5 shows the area of air space scanned by the seeker set in the second operating mode (the plane scanned is perpendicular to the seeker axis). The detailed principle of operation and innovation of the seeker is presented in [16,19]. At the present stage of research, a mathematical model of the dynamics of the presented device has been developed, various algorithms of control of the seeker's optical axis have been analysed in [16,[20][21][22][23], and optimal operating parameters of the seeker have been determined while maintaining the stability conditions specified by the so-called Lapunov method [24]. In the course of the above-mentioned research, problems with precise control of the device axis in the so-called second operating mode of the seeker (Figure 4), in which the seeker tracks the previously detected air target with the small scanning angle ( Figure 5).It should be noted that this type of solution for detection (space scanning) and tracking of the maneuvering air target is not described in the available literature. After a deeper analysis of the problem, it turned out that it is caused by too many pulses from infrared radiation emitted by the target that are received by the optoelectronic system. Too many detection pulses cause unfavorable overdriving of the seeker axis. It was, therefore, advisable to carry out additional filtering of signals received by the optoelectronic system. For this purpose, the so-called vector selection of signals received by the optoelectronic system was used, with a Kalman filter added [25][26][27][28]. Moreover, the so-called modified LQR control method was used to increase the precision of the seeker axis control. The results of this work are presented in subsequent points of this paper. The detailed principle of operation and innovation of the seeker is presented in [16,19]. At the present stage of research, a mathematical model of the dynamics of the Figure 6 shows the scanning seeker diagram together with the adopted coordinate systems and markings of individual angles of rotation of the respective systems in relation to each other. The origins of all coordinate systems are located at the intersection of the axis of rotation of the outer housing with the axis of rotation of the inner housing of the seeker. The movement of the seeker axis can be induced by moments of external forces M Z and M W forces generated by control motors or by moments of friction forces M TW and M TZ forces generated in the bearings of particular seeker housing as a result of angular displacement of the missile deck.
Mathematical Model of the Scanning Seeker
Angular movements of a missile are treated as external disturbances and are determined by the angular ω x P , ω y P , ω z P velocities that cause the missile to rotate around the individual axes of the system x P y P z P at the appropriate angles α x α y α z . Angles ψ, ϑ are measured with fiber optic sensors ( Figure 4) and angle ϕ is measured with the rotor position sensor (Figure 2). Angular movements of a missile are treated as external disturbances and are determined by the angular velocities that cause the missile to rotate around the individual axes of the system .Angles ϑ ψ , Figure 6. Seeker diagram with adopted coordinate systems.
The following coordinate systems have been introduced: x K y K z K -a coordinate system associated with the reference direction established in space; x R y R z R -a mobile coordinate system associated with the rotor; x CW y CW z CW -a mobile coordinate system associated with the inner housing; x CZ y CZ z CZ -a mobile coordinate system associated with the outer housing; x P y P z P -a mobile coordinate system associated with a missile; The following marking of the angles of rotation has been adopted: ψ-angle of rotation x CZ y CZ z CZ relative to x K y K z K around axis z CZ ; ϑ-angle of rotation x CW y CW z CW relative to x K y K z K around axis x CW ; ϕ-angle of rotation x R y R z R relative to x K y K z K around axis y R ; α x -angle of rotation x P y P z P relative to x K y K z K around axis x P ; α y -angle of rotation x P y P z P relative to x K y K z K around axis y P ; α z -angle of rotation x P y P z P relative to x K y K z K around axis z P ; The following were assumed as given values: ψ, here: c w is a coefficient of friction in the inner bowl bearing and c z is a coefficient of friction in the outer bowl bearing.
Using the Lagrange II equation, the following gyroscope motion equations have been derived [29]: (1) where the components of the angular velocity of the outer housing: ψ + ω z P and the components of the angular velocity of the inner housing: Assuming that external kinematic impacts are negligible, we will obtain the following system of equations of motion of the seeker:
LQR Control of the Scanning Seeker
In this article, the authors proposed to control the seeker axis by means of a modified linear-quadratic regulator (LQR). This method can be used to determine such control that minimizes the integral quality indicator, given by the formula: where Q = matrix of state variable weights, R = matrix of control weights, x=state vector, u = [M W − M TW ] T -control vector. Q and R matrices are diagonal weight matrices that can be used to change the influence of particular state variables and controls on the presented quality criterion. The advantage of this method is that the entire state vector is the set point value, not just its selected values, as is the case with other controllers (e.g., PID) [30][31][32][33].
LQR regulation requires linearization and discretisation of state equations. Jacob's matrix-a matrix of successive partial derivatives-was used in the process of linearization.
To Equations (3) and (4), we introduce the signs: thanks to which we get a nonlinear system of equations: where: Then, in the above system, the components dependent on the so-called owndynamics of the system (state variables) and the components dependent on external actions (control moments) will be separated, as shown below: where: f i = a component dependent on the own dynamics of the system, z i = a component dependent on control and external interference: The control law takes the form: where x Z is the matrix of set state variables, while the matrix of amplification is calculated from the dependency: where P matrix is the solution to Riccati discrete equation [34,35]: Selection of LQR regulator settings consists in the determination of the Q and R weights matrices. The LQR algorithm does not have a universal method for selecting the above parameters and they are usually iteratively selected. In this paper, when selecting the initial values of Q and R matrices, the authors used the Bryson [36] rule, which suggests the selection of the following input parameters: where i-means another element of the state vector; x ii -these are the maximum values for individual elements of the state vector x; u ii -these are the maximum control moments. The maximum operating parameters of the seeker were determined using the Lapunov method [24], and they are respectively: The P matrix was determined by numerically solving Riccati discrete equations according to the formula: Matrix P j−1 , according to the above formula, is calculated iteratively from the back. P j = Q is assumed as the input value. Jacobians were used to determine the state matrix A and control matrix B [37]. The individual elements of the matrix A are given according to the dependency: After calculating the partial derivatives, further elements of the matrix A were determined, represented by the following equations: x 2 x 4 2 sin x 1 cos x 1 sin 2x 1 +J s2 J y R nx 2 cos 2 x 1 2 sin x 1 (J s3 +J s2 sin 2 x 1 ) The individual elements of the matrix B are given according to the dependency: After calculating the partial derivatives, the individual elements of the control matrix B were obtained, represented by the following equations:
Vector Filtration of Control Signals by Means of the Extended Kalman Filter
The accurate angle measurement of the detected object has a significant impact on the accuracy of its tracking [38][39][40][41][42]. In order to correctly determine the angular position of the detected object, it should be determined the law of airspace scanning by the optoelectronic system.
The law of airspace scanning by the optoelectronic system of the seeker is presented in the paper [19]. On its basis, linear equations describing the model of the scanning process were derived: β X (t) = a tan(tan(β(t)) · cos(a sin(z zp (t)/ x zp (t) 2 + z zp (t) 2 )) (17) β Z (t) = a tan(tan(β(t)) · sin(a sin(z zp (t)/ x zp (t) 2 + z zp (t) 2 )) (18) where: β X (t), β Z (t)-angular coordinates of the detected target relative to the axis of the scanning seeker; β(t)-resultant angle of deflection of the light beam from the optical axis: x zp , z zp -the components of the position of the light beam on the plane of the original mirror.
Angular coordinates of the detected target β X , β Z are measured with respect to the optical axis of the seeker. These coordinates are the position desired to control the axis of the seeker so that it tracks the detected target.
Due to the high scanning density, especially in the second operating mode of the seeker (see Figure 5), there is a large number of pulses received from the infrared detector, which causes an unfavorable overdriving of the seeker axis. For the reasons mentioned above, it was necessary to apply appropriate filtration. Although a large number of control signals do not cause losing track of the target, it has a negative effect on the precision of the control. The method of filtering signals received by the optoelectronic system of the seeker presented in the paper is divided into two stages: selecting the maximum signal (pulse), -performing additional filtration of the determined maximum signals using the EKF. Figure 7 shows a diagram of filtering the signals received by the optoelectronic system of the seeker. signals do not cause losing track of the target, it has a negative effect on the precision of the control. The method of filtering signals received by the optoelectronic system of the seeker presented in the paper is divided into two stages: -selecting the maximum signal (pulse), -performing additional filtration of the determined maximum signals using the EKF. Figure 7 shows a diagram of filtering the signals received by the optoelectronic system of the seeker. Based on the series of pulses from the infrared detector, only those for which the voltage value is the highest, i.e., theoretically the closest to the source of infrared radiation, are taken into account. These are the so-called maximum pulses marked in Figure 7 with the symbol B. At the next stage of selection, the Kalman filter was used, in which a variable coefficient ∆β was adopted as one of the quality criteria (see Figure 7), depending on the value of the V WC velocity vector. The coefficient ∆β varies from 1.5 Ra to 6 Ra, where Ra is the radius of the visual field of the seeker corrective lens system. The algorithm according to which the Kalman filter works is divided into two stages. The first stage is called prediction and the second stage is called correction. During prediction, the velocity vector of the detected target is estimated based on the previous coordinates of the detected target [43][44][45].
Estimated values of the direction and orientation of the target velocity vector are additional quality criteria for filtering those maximum signals whose vectors have the opposite direction and orientation compared to the V WC vector. Vectors of measurement signals are marked with the "r i " symbol in Figure 7.
The prediction of the direction and orientation values of the target velocity vector is based on the matrix of coordinates of the detected target: where i = number of target detection pulses.
Equations describing the estimated velocity vector of a detected air target: where: ν WC = estimated value of the target velocity vector, γ WC = estimated direction of the target velocity vector, t i = time of measurement of consecutive pulses from the infrared detector, ∆t = reverse time interval (in numerical simulations, it is the time of about 10 detection pulses).
In the next stage of filtration, called correction, the final control signal is determined (Figure 7, point E), for which the value of the determined "r i " vector is greater than or equal to the quality ∆β coefficient. Signals selected in this way β X , β Z have been used to control the seeker axis and thereby track the detected air target, as described in the next chapter of the paper.
Results
The studies were carried out for different air situations. Numerical simulations were carried out on the basis of the author's own program developed in the C++ language.
Scanning Seeker Parameters
Moments of rotor inertia: J x R = 0.00114143 (kg · m 2 ; J y R = 0.00157911 (kg · m 2 ; J z R = 0.00158234 (kg · m 2 Moments of inertia of the complete inner housing: J x CW = 0.0016663 (kg · m 2 ; J y CW = 0.0011666 (kg · m 2 ; J z CW = 0.0011463 (kg · m 2 ) Moments of inertia of the complete outer housing: J x CZ = 0.0003383 (kg · m 2 ) ; J y CZ = 0.0002213 (kg · m 2 ; J z CZ = 0.0002583 (kg · m 2 ) Rotational speed of the rotor: n = 600(rad/s) (The speed and torque of the motor depend on the strength of the magnetic field generated by the energized windings of the motor, which depends on the current through them-may slightly differ from the fixed value [46]).
The coefficient of friction in the inner housing bearing: The coefficient of friction in the outer housing bearing: Figure 8 shows a computer simulation image of the tracking of an air target moving at a speed of 350 m/s, located at a distance of 1600 m from the firing position, without the use of a filtration of pulses received by the optoelectronic system of the seeker. Figure 9 shows a computer simulation image of tracking the same air target but using the signal filtering presented in Chapter 4. In both cases, the seeker axis is controlled by the method described in Chapter 3, using a modified linear-quadratic regulator (LQR). Figure 9 shows a computer simulation image of tracking the same air target but using the signal filtering presented in Chapter 4. In both cases, the seeker axis is controlled by the method described in Chapter 3, using a modified linear-quadratic regulator (LQR). For a better comparison of the simulations shown above, Figure 10 shows the set trajectory TZ and the trajectory TR pursued by the seeker axis when tracking a detected air target without signal filtering. Figure 9 shows a computer simulation image of tracking the same air target but using the signal filtering presented in Chapter 4. In both cases, the seeker axis is controlled by the method described in Chapter 3, using a modified linear-quadratic regulator (LQR). For a better comparison of the simulations shown above, Figure 10 shows the set trajectory TZ and the trajectory TR pursued by the seeker axis when tracking a detected air target without signal filtering. For a better comparison of the simulations shown above, Figure 10 shows the set trajectory T Z and the trajectory T R pursued by the seeker axis when tracking a detected air target without signal filtering. Figure 11 shows the same trajectories after signal filtering. Figure 12 shows a computer simulation image of the seeker axis control in the airspace search phase and in the phase of tracking the detected target. Target speed: 250 m/s, target distance from fire station: 1100 m.
Results of the Simulation
Description of the markings used in Figures 12-14: A, scanning lines; B, trajectory of seeker axis motion in the programmatic airspace search phase; C, phase of seeker axis shifting to the detected target; D, tracking of the detected target. Figure 13 shows the differences between the set trajectory and the trajectory pursued by the seeker axis when controlled with the use of the PID method, while Figure 14 shows the differences between the set trajectory and the trajectory pursued by the seeker axis when controlled with the use of modified LQR method. Figure 11 shows the same trajectories after signal filtering. Figure 11. TZ set trajectory and pursued TR trajectory of the seeker axis with the filtration of signals received from the infrared detector. Figure 12 shows a computer simulation image of the seeker axis control in the airspace search phase and in the phase of tracking the detected target. Target speed: 250 m/s, target distance from fire station: 1100 m. Figure 11 shows the same trajectories after signal filtering. Figure 11. TZ set trajectory and pursued TR trajectory of the seeker axis with the filtration of signals received from the infrared detector. Figure 12 shows a computer simulation image of the seeker axis control in the airspace search phase and in the phase of tracking the detected target. Target speed: 250 m/s, target distance from fire station: 1100 m. Description of the markings used in Figures 12-14: A, scanning lines; B, trajectory of seeker axis motion in the programmatic airspace search phase; C, phase of seeker axis shifting to the detected target; D, tracking of the detected target. Figure 13 shows the differences between the set trajectory and the trajectory pursued by the seeker axis when controlled with the use of the PID method, while Figure 14 shows the differences between the set trajectory and the trajectory pursued by the seeker axis when controlled with the use of modified LQR method.
Conclusions
The paper presents the application ofmodified LQR control and the estimation of input signals using Kalman filter for the process of detection and tracking of air targets.
LQR regulation uses linearization of the mathematical model of the tested scanning seeker with the use of the so-called Jacobians, while in order to improve the stability of the seeker's operation, vector selection of signals received by the optoelectronic system, which utilizes, among others, an extended Kalman filter, was used.
Computer simulations have shown that tracking of the maneuvering air target by the seeker being studied, using a Jacobian in a closed-loop control, is more precise than using the classical PID control method. The results also confirm the effectiveness of the developed method of filtering the signals received by the optoelectronic system of the presented seeker. After applying the vector selection of signals and Kalman's linear filter, we can clearly see a significant improvement in the stability of the trajectory of seeker axis motion.
In further research, statistical results will be presented and analyzed, which will be compared with the results obtained in this article. Moreover, in the future, it is planned to conduct research on the use of a more powerful filter, the "unscented Kalman filter", which was widely discussed in articles [47][48][49][50][51]. | 5,532.8 | 2021-05-26T00:00:00.000 | [
"Engineering"
] |
Particle systems with quasi-homogeneous initial states and their occupation time fluctuations
Occupation time fluctuation limits of particle systems in R^d with independent motions (symmetric stable Levy process, with or without critical branching) have been studied assuming initial distributions given by Poisson random measures (homogeneous and some inhomogeneous cases). In this paper, with d=1 for simplicity, we extend previous results to a wide class of initial measures obeying a quasi-homogeneity property, which includes as special cases homogeneous Poisson measures and many deterministic measures (simple example: one atom at each point of Z), by means of a new unified approach. In previous papers, in the homogeneous Poisson case, for the branching system in"low"dimensions, the limit was characterized by a long-range dependent Gaussian process called sub-fractional Brownian motion (sub-fBm), and this effect was attributed to the branching because it had appeared only in that case. An unexpected finding in this paper is that sub-fBm is more prevalent than previously thought. Namely, it is a natural ingredient of the limit process in the non-branching case (for"low"dimension), as well. On the other hand, fractional Brownian motion is not only related to systems in equilibrium (e.g., non-branching system with initial homogeneous Poisson measure), but it also appears here for a wider class of initial measures of quasi-homogeneous type.
Introduction
In a series of papers [3,4,5,6,7,8,9,10] we studied particle systems in R d starting from a configuration determined by a random point measure ν, and independently moving according to a standard α-stable Lévy process (0 < α ≤ 2). In some models the particles additionally undergo critical branching. The evolution of the system is described by the empirical process N = (N t ) t≥0 , where N t (A) is the number of particles in the set A ⊂ R d at time t. The main object of interest is the limit of the time-rescaled and normalized occupation time fluctuation process X T defined by as T → ∞ (i.e., as time is accelerated), where F T is a suitable deterministic norming. The process X T is signed measure-valued, but we regard it as a process with values in the space of tempered distributions S ′ (R d ) for technical convenience, and also because in some cases the limit is genuinely S ′ (R d )-valued. In all the cases considered in the abovementioned papers the initial measure ν was a Poisson field, homogeneous or not. This assumption permitted to investigate convergences conveniently with the help of the Laplace transform (due to infinite divisibility). The results always exhibited the same type of phase transition: for "low" dimensions d the limit process was the Lebesgue measure multiplied by a real long-range dependent process, whereas for "high" dimensions the limit was an S ′ (R d )-valued process with independent increments. A natural question is what happens for non-Poisson initial measures ν. Mi loś [27,28,29] considered (critical) branching systems where ν was an equilibrium measure (see [19]). In that model the limits have a similar dimension phase transition; moreover, for high dimensions they are the same as in the homogeneous Poisson case, while for low dimensions they are different. The conclusion was that in low dimensions the occupation time fluctuation process "remembers" the initial state of the system. Since the equilibrium states of the branching system are somewhat similar to homogeneous Poisson measures (they are infinitely divisible random point measures with uniform intensity; distributions of this kind are called "equilibrium distributions of Poisson type" in [26]), the Laplace transform method was also useful in [27,28,29].
The aim of the present paper is to investigate what happens with initial measures of other types, for example, some measures that are deterministic or almost deterministic. For simplicity we consider d = 1 and assume that the motions are either without branching or with the simplest critical binary branching. In [4,5] we proved for such motions, with general d, that if ν is a homogeneous Poisson measure, then the following results hold (where λ denotes Lebesgue measure and K is a different constant in each case): in the non-branching system: if d < α, then X T converges in distribution (in C([0, τ ], S ′ (R d )) for any τ > 0) to a process Kλξ, where ξ is a fractional Brownian motion; if d = α, then the limit process is Kλβ, where β is a standard Brownian motion; if d > α, then the limit is a time-homogeneous Wiener process in S ′ (R d ); in the branching system: if α < d < 2α, then the limit is Kλζ, where ζ is a sub-fractional Brownian motion (the case d ≤ α requires a slightly different treatment based on high-density models, see [9]); if d = 2α, then the limit is Kλβ; if d > 2α, then the limit is a time-homogeneous Wiener process in S ′ (R d ), different from the one in the non-branching case.
In this paper we define a class M of initial measures ν which contains in particular homogeneous Poisson measures (which are "completely random" [24]), and quasi-homogeneous deterministic measures (e.g., the measure defined by one atom at each j ∈ Z, which is "completely deterministic"), and we develop a unified approach that permits to obtain limits of X T for all ν ∈ M. By a quasi-homogeneous deterministic point measure on R we mean any measure defined by the following procedure: Given a positive integer k, in each interval [j, j + 1), j ∈ Z, we fix k points. For a general ν ∈ M, each interval [j, j + 1) contains θ j points chosen at random, and θ j , j ∈ Z, are i.i.d. random variables (see Section 2 for a rigorous definition). The main feature of those measures is this form of quasi-homogeneity and independence on the family of intervals [j, j + 1).
For each ν ∈ M we obtain the limit of the corresponding X T and in this way we recover the results of [4,5] for the homogeneous Poisson case (for d = 1, but there is no doubt that the results for higher dimensions can be obtained analogously), and we also derive limits for many other initial measures. It seems interesting that the idea of the proofs in this general framework is simpler than that in our previous papers, and is based on the central limit theorem. This is a significant change of methodology. However, some technical points in those papers are employed again here. An argument using the non-linear equation associated with the occupation time of the branching system, which can be obtained by means of the Feynman-Kac theorem, again plays an important role, but now in a different way: it is a key step in moment estimates in order to apply the Lyapunov theorem in the branching case. The equilibrium measures for the branching system do not belong to M because the branching introduces spatial dependence.
Some of the results we obtain are unexpected. It turns out that the only case where new limits appear is the non-branching case with (d = )1 < α. They have the form Kλξ, where ξ is the sum of two independent processes, one of them is a sub-fractional Brownian motion (see (2.3)), and the second one is a new (centered continuous with long range dependence) Gaussian process (see (2.4)). The process ξ depends on the initial measure ν only through Eθ 0 and Var θ 0 . In particular, for a deterministic initial measure this process reduces to a sub-fractional Brownian motion, and in the homogeneous Poisson case (as well as for any ν with Eθ 0 = Var θ 0 ) it yields a fractional Brownian motion (see Theorem 2.2). This result seems surprising since in all earlier papers sub-fractional Brownian motion was related only to branching systems, and was consequently attributed to the branching, but now, in the present context, this process turns out to be more "natural" than fractional Brownian motion. On the other hand, fractional Brownian motion, which is typically related to systems in equilibrium (in particular the non-branching system with initial homogeneous Poisson measure), now appears also for a wider class of quasi-homogeneous initial measures, as noted above.
In all the remaining cases the limits are (up to constants) the same, and with the same normings F T , as those recalled above for homogeneous Poisson models.
The results show that within the class M the fluctuations caused by the branching are so large that X T "forgets" the randomness of the initial state of the system (it "remembers" Eθ 0 only). On the other hand, for low dimensions it does distinguish between ν ∈ M and the equilibrium initial state (which is not in M). Another conclusion is that for high dimensions (which for d = 1 amounts to small α), the fluctuation process "forgets" the initial measure, as long as it is in some sense homogeneous (i.e., ν ∈ M), and this property holds for branching and non-branching systems; it is also preserved for branching systems in equilibrium.
In this paper we are interested mainly in identifying the limit processes, therefore we have not attempted to prove convergences in their strongest, functional form; in most cases we prove only convergence of finite-dimensional distributions. Presumably, convergence in distribution also holds in C([0, τ ], S ′ (R)) for any τ > 0. As an example, we give one result of this type (Proposition 2.6).
We have not found results in the literature concerning occupation times for particle systems starting from a deterministic or quasi-deterministic point measure. Some kinds of quasi-homogeneity of initial configurations for systems of independent particles, different from those in this paper, appear in other contexts in [33] and [20] (see Remark 2.5 (f)). It may be that systems of independent particles with α-stable motion and the initial conditions of [33] lead to the same results as with initial homogeneous Poisson distribution.
The following notation is used in the paper.
S(R): space of C ∞ rapidly decreasing function on R.
Generic constants are written C, C i , with possible dependencies in parenthesis.
In Section 2 we describe the particle system, formulate the results and discuss them. Section 3 contains the proofs.
Results
We start with detailed description of the particle system.
Let θ be a non-negative integer-valued random variable with distribution P (θ = k) = p k , k = 0, 1, 2, . . . , (2.1) such that Eθ 3 < ∞. This moment condition is a technical assumption satisfied by all cases of interest in this paper, but we suppose that finiteness of the second moment could be sufficient.
Let θ j , j ∈ Z, be independent copies of θ, and for each j ∈ Z and k = 1, 2, . . ., let ρ j k = (ρ j k,1 , . . . , ρ j k,k ) be a random vector with values in [j, j + 1) k . We assume that (θ j , (ρ j k ) k=1,2 , . . .), j ∈ Z, are independent. These objects determine a random point measure ν on R in the following way: For each j, θ j is the number of points in the interval [j, j + 1), and for each k, if θ j = k, the positions of those points are determined by ρ j k . In other words, where κ j,n = ρ j θ j ,n and δ a is the Dirac measure at a ∈ R. We denote by M the class of all such measures ν.
Remark 2.1 (a) If θ ≡ k and, for each j, the ρ j k are not random, then ν is a quasi-homogeneous deterministic measure mentioned in Introduction. The simplest example is ν = j∈Z δ j .
(b) If θ is a standard Poisson random variable and, for each j, ρ j k,1 , . . . , ρ j k,k are independent, uniformly distributed on [j, j + 1), then ν given by (2.2) is the homogeneous Poisson point measure (with intensity measure λ).
Fix α ∈ (0, 2] and ν ∈ M. Assume that at the initial time t = 0 there is a collection of particles in R with positions determined by ν. As time evolves, these particles move independently according to the standard α-stable Lévy process. We consider systems either without branching, or with critical binary branching (i.e., 0 or 2 particles with probability 1/2 each case) at rate V . For the corresponding empirical process, we define an S ′ (R)-valued process X T by (1.1).
Before stating the first theorem we recall the definition of sub-fractional Brownian motion. A sub-fractional Brownian motion with parameter H (0 < H < 1) is a centered continuous Gaussian process ζ H with covariance See [3,35] for properties of this process. It appears in [16] in a different context, and it has also been investigated in [1,31,36,37,38]. We will need another centered Gaussian process ϑ H with covariance Existence of this process for H = 1/2 follows from the formula which implies positive-definiteness of Q H .
Theorem 2.2 For the system without branching, (a) if 1 < α and where ζ H , ϑ H are independent, H = 1 − 1/2α, and then where β is a standard Brownian motion in R, and (c) if 1 > α and where X is an S ′ (R)-valued homogeneous Wiener process with covariance be the process in Theorem 2.2(a). If Eθ = Varθ, in particular if ν is homogeneous Poisson (see Remark 2.1(b)), then ξ H is, up to a constant, a fractional Brownian motion with Hurst parameter H, i.e., it has covariance C(s 2H + t 2H − |s − t| 2H ). Thus we recover Theorem 2.1 of [4]. On the other hand, if θ is deterministic, then ξ H is a sub-fractional Brownian motion. Moreover, in general randomness of the ρ's in the definition of ν ∈ M (see (2.2)) does not play any role in the limit. (b) The long time dependent behavior of Gaussian processes is usually characterized by the covariance of increments of the process on intervals separated by distance τ , as τ → ∞. For the process ϑ H that behavior is asymptotic decay like τ 2H−2 (the same as for fractional Brownian motion), for sub-fBm it is τ 2H−3 (see [3]). So, the long time dependent behavior of the process ξ H in Theorem 2.2(a) is determined by ϑ H in all cases where θ is random.
where ζ H is a sub-fractional Brownian motion with parameter H = (3 − 1/α)/2, and where β is a standard Brownian motion, and where X is an S ′ (R)-valued homogeneous Wiener process with covariance [4] and [5]). (b) The condition α < 1 in part (a) of the last theorem corresponds to α < d in [4] and [5]. In the homogeneous Poisson case for d ≤ α, we obtained limits of the same form as for α < d < 2α, by introducing high density, i.e., considering systems with initial intensity H T λ, H T → ∞ sufficiently fast [9]; the high density counteracts the tendency to local extinction caused by the critical branching. The same procedure can be applied in the present case, yielding the limits for 1 ≤ α ≤ 2, if the intervals [j, j + 1) are replaced by [j/H T , (j + 1)/H T ).
(c) As in [4] and [5], Theorems 2.2 and 2.4 can be extended to systems in R d , where the intervals (d) Comparing parts (a) of Theorems 2.2 and 2.4, we see that the branching weakens the influence of the initial configuration.
(e) The previous results show that sub-fractional Brownian motion is a "natural" process for our model. So far it had appeared only in the context of branching systems, but now we see that it is intrinsically related to the non-branching systems as well for a large class of initial conditions. Fractional Brownian motion occured before only in the case of systems in equilibrium, but now it also appears wherever Eθ = Varθ.
(f) Theorems 2.2 and 2.4 can also be extended to other models. For example, in [20] a model is studied in a different context with independent α-stable motions without branching and initial positions of particles (j + ρ) j∈Z , where ρ is a random variable uniformly distributed on [0, 1], independent of the motions. It is easy to see, by a standard conditioning argument (considering the characteristic function and conditioning on ρ), that for models with or without branching and with this initial configuration, the limits are the same as for deterministic ρ, i.e., they are given by Theorems 2.2 and 2.4.
We have formulated Theorems 2.2 and 2.4 with convergence of finite-dimensional distributions only, as we are mostly interested in the limit processes, but we have no doubt that functional convergence also holds. As an example, let us consider the cases of large α. For simplicity we assume that the initial configuration is such that θ ≡ 1 and ρ j 1,1 is uniformly distributed on [j, j + 1), j ∈ Z. Proposition 2.6 For the model described above, the processes X T in Theorems 2.2(a) and 2.4(a) converge in law in C([0, τ ], S ′ (R)) for any τ > 0.
Auxiliary facts related to the stable density
We will often use the self-similarity property of the transition density p t of the standard α-stable process in R: Since p t (·) is decreasing on R + and symmetric, then by (3.1) we have For ϕ ∈ S(R) we have |ϕ(x)| ≤ C(ϕ, m)φ m (x). This, and an obvious inequality, In the sequel we will use various versions of this estimate, e.g., We will also need the following estimate ( [22], Lemma 5.3) for the potential operator G (see (1.3)). If α < 1, q > 1, and ϕ is a measurable function on R such that (3.7)
Scheme of proofs
The proofs of Theorems 2.2 and 2.4 are based on the central limit theorem and follow the scheme described presently.
Let N x denote the empirical process of the system (with or without branching) started from a single particle at x, and N (j) , j ∈ Z, be the empirical process for the particles which at time t = 0 belong to [j, j + 1), i.e., according to the description at the beginning of Section 2 (see (2.2)). Note that N (j) , j ∈ Z, are independent.
The process X T defined in (1.1) can be written as The first step in our argument is to prove that for any ϕ, ψ ∈ S(R), and s, t ≥ 0, where X is the corresponding limit process. Without loss of generality we may assume that ϕ, ψ ≥ 0. Using (3.9) we have r ′ , ψ dr ′ dr. (3.11) Using (3.8), (2.1) and the fact that E N x t , ϕ = T t ϕ(x) in both non-branching and (critical) branching cases, and defining, for x ∈ R, n ≤ k, random variables k,n − x, (3.12) where [x] is the largest integer ≤ x, we rewrite (3.11) as where 14) (in the first equality for II we used independence of systems starting from different points), In each case we will show convergence of I, II and III, thus proving (3.10). (It will be shown that I, II, III are bounded, so the passage to the limit in each sum in (3.13) is justified). Next, we show that X(t), ϕ ⇒ X(t), ϕ , ϕ ∈ S(R), t ≥ 0.
To this end, by (3.9) and (3.10) it suffices to prove that the Lyapunov condition is satisfied, and this property will follow if we show that It is clear that convergence in law of linear combinations m k=1 a k X T (t k ), ϕ k can be obtained analogously from (3.10) and (3.18), thus establishing the claimed convergence X T ⇒ f X.
Proof of Theorem 2.2(a)
Following the scheme we show that lim T →∞ Let η denote the standard α-stable Lévy process in R. As we consider the model without branching, we have, for r > r ′ , Putting this into (3.14) and omitting the subscripts k, n, we obtain where In I 1 we substitute r = r/T, r ′ = r ′ /T , use (2.5) and (3.1), and then we substitute x = T −1/α (x−y), arriving at By (3.17) and (3.1), the expression under the integrals converges pointwise, as T → ∞, to can be treated analogously, hence by (3.23) we obtain (3.20).
Next we take II. In (3.15) we substitute r = r/T, r ′ = r ′ /T , and by (2.5) we have II(T ; k, n, m) We use (3.1) and substitute x = T −1/α (x − z), obtaining The integrand converges pointwise to p r (x)p r ′ (x)ϕ(y)ψ(z), and (3.17) implies that for T > 1 , it is bounded by r −1/α p 1 (0)g r ′ (x)ϕ(y)ψ(z) (see (3.3)), which is integrable. As R p r (x)p r ′ (x)dx = (r + r ′ ) −1/α p 1 (0), we obtain the limit (3.21) for II. Note that in this argument the only property of h k,n we have used is (3.17), therefore it is immediately seen that the limit of III can be obtained in the same way (see (3.16)). This completes the proof of (3.10).
As explained in the scheme, to finish the proof it suffices to show (3.19).
The expression under lim T →∞ sup n,k in (3.19), similarly as in (3.22), can be written as (3.26) where 27) and φ 2 is given by (3.4). The last inequality in (3.26) is of the same type as (3.6), and can be obtained by an analogous argument using (3.5) and (3.17). Hence, for (3.19) it is enough to show that lim T →∞ J(T ) = 0. (3.28) Note that in this argument we have not used the assumption on α.
After obvious substitutions, using (2.5) and the invariance of Lebesgue measure for T t , we have By the self-similarity property (3.1),
It is now easy to see that Passing to I 3 defined by (3.32), we first estimate it similarly as I 1 (see (3.36)), then we change the order of integration drdr ′ , and substituter ′ = r ′ /T andr = r − Tr ′ , obtaining
Proof of Theorem 2.4
We recall first the following formula for the second moments of critical binary branching systems with branching rate V: (3.47) which is obtained using, e.g. Lemma 3.1 in [25], and the Markov property.
We have proved (3.10) in all the cases. According to the scheme, to finish the proof it remains to show (3.19). To this and we define the function It is known that the Feynman-Kac formula implies that v θ satisfies the non-linear equation (see e.g. [18], or the space-time approach used in [4,5]). Hence, by a similar argument as in (3.45)-(3.47) of [5], we obtain Without loss of generality we may assume t = 1.
We turn to J 4 , which requires more work. By (3.62), the Schwarz inequality and (3.67) we have
Proof of Proposition 2.6
Since the convergence of finite-dimensional distributions has been proved already, by virtue of the Mitoma theorem [30] it remains to show tightness of X T , ϕ , T > 1, in C([0, τ ], R) for any fixed ϕ ∈ S(R), ϕ ≥ 0. To this end we prove E( X T (t), ϕ − X T (s), ϕ ) 2 ≤ C(t − s) a , s < t ≤ τ, (3.69) for some a > 1. By (3.8) and (3.9) we have The latter expression is equal to E X Remark Recall that Proposition 2.6 refers to a special simple choice of ν. For a general ν ∈ M, the proof of (3.69) is similar but slightly more involved. One has to estimate an extra term and use an inequality of the type (3.6). | 5,682.6 | 2010-02-22T00:00:00.000 | [
"Mathematics"
] |
Quantum Variational Optimization of Ramsey Interferometry and Atomic Clocks
We discuss quantum variational optimization of Ramsey interferometry with ensembles of $N$ entangled atoms, and its application to atomic clocks based on a Bayesian approach to phase estimation. We identify best input states and generalized measurements within a variational approximation for the corresponding entangling and decoding quantum circuits. These circuits are built from basic quantum operations available for the particular sensor platform, such as one-axis twisting, or finite range interactions. Optimization is defined relative to a cost function, which in the present study is the Bayesian mean square error of the estimated phase for a given prior distribution, i.e. we optimize for a finite dynamic range of the interferometer. In analogous variational optimizations of optical atomic clocks, we use the Allan deviation for a given Ramsey interrogation time as the relevant cost function for the long-term instability. Remarkably, even low-depth quantum circuits yield excellent results that closely approach the fundamental quantum limits for optimal Ramsey interferometry and atomic clocks. The quantum metrological schemes identified here are readily applicable to atomic clocks based on optical lattices, tweezer arrays, or trapped ions.
I. INTRODUCTION
Recent progress in quantum technology of sensors has provided us with the most precise measurement devices available in physical sciences. Examples include the development of optical clocks [1], atom [2] and light [3] interferometers, and magnetic field sensing [4]. These achievements have opened the door to novel applications from the practical to the scientific. Atomic clocks and atomic interferometers allow height measurements in relativistic geodesy [5][6][7][8] or fundamental tests of our understanding of the laws of nature [9][10][11], such as time variation of the fine structure constant. In the continuing effort to push the boundaries of quantum sensing, entanglement as a key element of quantum physics gives the opportunity to reduce quantum fluctuations inherent in quantum measurements below the standard quantum limit (SQL), i.e. what is possible with uncorrelated constituents [12]. Squeezed light improves gravitational wave detection [13], allows lifescience microscopy below the photodamage limit [14], further, squeezing has been demonstrated in atom interferometers [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30]. However, beyond the SQL, quantum physics imposes ultimate limits on quantum sensing, and one of the key challenges is to identify, and in particular devise experimentally realistic strategies defining optimal quantum sensors [31].
Here, entangled input states and the entangled measurement protocols [57][58][59][60][61][62][63] defining the generalized Ramsey interferometer are represented as variational quantum circuits built from 'natural' quantum resources available on the specific sensor platform (see Fig. 1), which are optimized in light of a given cost function defining the optimal interferometer. As we will show, already lowdepth variational quantum circuits can provide excellent approximations to the optimal interferometer. Intermediate scale atomic quantum devices [64,65], acting as programmable quantum sensors [66], present the opportunity to implement these low-depth quantum circuits, defining an experimental route towards optimal Ramsey interferometry.
As noted above, optimality of a quantum sensing protocol is defined via a cost function C which is identified in context of a specific metrological task. In our study of variational N -atom Ramsey interferometry, we wish to optimize for phase estimation accuracy defined as the mean squared error (φ) relative to the actual phase φ, averaged with respect to a prior distribution P(φ) with width δφ, which represents the finite finite dynamic range of the interferometer. Thus the cost function is C ≡ (∆φ) 2 = dφ (φ)P(φ). This corresponds to a Bayesian approach to optimal interferometry where the prior width of the phase distribution δφ is updated through measurement to ∆φ characterizing the posterior distribution. As outlined in Fig. 1 1. (a) Quantum circuit representation of Ramsey interferometer with uncorrelated atoms. The phase φ is imprinted on the atomic spin-superposition prepared by global π/2-rotation around y-axis, Ry(π/2). Consequent rotation, Rx(π/2), and measurement of difference m of atoms in eigenstates |↑ and |↓ in z-basis allows estimating the phase φ using an estimator function φest(m). (b) Quantum circuit of a generalized Ramsey interferometer with generic entangling and decoding operations UEn and UDe, respectively. Our variational approach (c) consists of an ansatz, where optimal UEn and UDe are approximated by low-depth circuits. These are built from 'layers' of elementary operations, which are provided by the given platform. We specify the variationally optimized quantum sensor by circuits UEn(θ) and UDe(ϑ) [see Eqs. (6) and (7)], of depth nEn and nDe, respectively. Here θ ≡ {θi} and ϑ ≡ {ϑi} are vectors of variational parameters to be optimized for a given strategy represented by a cost function C defined here as Bayesian mean squared error (BMSE) [see Eqs. (2) and (10)]. We illustrate the approach with a variational circuit built from global spin rotations Rx and one-axis-twisting gates Tx,z available in neutral atom and ion quantum simulation platforms, as discussed in Sec. II B. The circuits optimization, shown as a feedback loop (in red), can be performed on a classical computer, or, if the complexity of underlying quantum many-body problem exceeds capabilities of classical computers, on the sensor itself, thus, leading to a (relevant) quantum advantage, see Sec. II G.
variational approach to optimal Ramsey interferometry seeks to minimize C over variational quantum circuits, and thus identifying optimal input states and measurements for a given δφ. Note that in the present work we optimize a metrological cost function for the complete quantum sensing protocol with variational quantum circuits. We distinguish this from variational state preparation schemes, e.g. variational squeezed state preparation of Ref. [66], where a squeezing parameter was optimized as cost function.
We contrast our Bayesian approach of identifying a metrological cost function to a Fisher information approach, which optimizes accuracy locally at a specific value of the phase, corresponding to the limit δφ → 0 [3]. Discussions of fundamental limits in quantum sensing are often phrased in terms of quantum Fisher information and the quantum Cramèr-Rao bound leading to definition of the Heisenberg limit (HL) [67][68][69]. This identifies GHZ states [70], saturating the HL, as the optimal states for Ramsey interferometry. Furthermore, this leads to the conclusion that adding a decoding step (see Fig. 1) is not beneficial for quantum metrology since a separable measurement is optimal in this context [68]. This conclusion, however, is not applicable to phase estimation with finite prior width since the GHZ state interferometry in single-shot scenarios is optimal only for estimation of phase values in an interval δφ GHZ ∼ 1/N , which shrinks as number of atoms N increases [37,71], see Sec. II F below. In fact, for large priors δφ tailored quantum input states will differ greatly from squeezed spin states (SSS) [72,73] or GHZ states [3,31], and a nontrivial mea-surement is required for an optimal metrological protocol. Our variational approach to optimal Ramsey interferometry (see Fig. 1) finds these optimal entangling and decoding circuits [74].
Our discussion of optimal single-shot Ramsey interferometry [75] has immediate relevance for atomic clocks [12,[76][77][78][79]. An optical atomic clock operates by locking the frequency of an oscillator, represented by a classical laser field with fluctuating frequency ω L (t), to the transition frequency ω A of an ensemble of N isolated atoms [1]. The locking of the laser to the atomic transition is achieved by repeatedly measuring the accumulated phase φ = T 0 dt[ω L (t) − ω A ] in Ramsey interferometry with interrogation time T . Importantly, the width δφ of the distribution of this phase increases with the Ramsey time T . It is therefore critical to achieve a good phase estimate in conjunction with a wide dynamic range for making an accurate inference about the frequency deviation, and ultimately for stabilizing the clock laser to the atomic transition. Our variational approach for Bayesian phase estimation is made to satisfy these requirements, and provides optimal quantum states and measurements minimizing the instability in atomic clocks as measured by its Allan deviation. We predict significant improvements over previously known one-shot non-adaptive strategies. Our predictions are backed up by comprehensive numerical simulations of the clock laser and its stabilization to the atomic reference in a closed feedback loop [78,79].
In the following, we first develop the general theory of variationally optimized Ramsey interferometry based on Bayesian phase estimation in Sec. II, and then apply this theory to the specific problem of an optical atomic clock in Sec. III.
II. QUANTUM VARIATIONAL OPTIMIZATION OF RAMSEY INTERFEROMETRY
For concreteness, we consider estimation of the phase φ in an atomic interferometer consisting of an ensemble of N identical two-level atoms described as spin-1/2 particles [12]. The general idea developed in the following applies to any SU (2) interferometer. The interferometer encodes the phase in the atomic state by evolving according to |ψ φ = exp(−iφJ z ) |ψ in . Here |ψ in is an initial probe state [80], and J x,y,z = 1/2 N k=1 σ x,y,z k is the collective spin with σ x,y,z the Pauli operators. The task is to determine the unknown phase φ by performing a measurement on the atoms.
A. Bayesian approach to phase interferometry
The most general measurement is described by a positive operator valued measure (POVM), that is a set {Π x } of positive Hermitian operators such that dx Π x = 1 1. The parameter φ is estimated on the basis of a measurement result x using an estimator function φ est (x). The phase estimation accuracy is characterized by a mean squared error (MSE) with respect to the actual phase φ where p(x|φ) = Tr{Π x |ψ φ ψ φ |} is the conditional probability of the measurement outcome x [3]. In our discussion we consider the phase φ to be defined on the interval −∞ < φ < ∞ [81]. In order to find an interferometer performing the most accurate measurement of the phase φ we cannot minimize the MSE (1) for all values of φ simultaneously. First, the atomic interferometer is only sensitive to phase values modulo 2π as exp(−iφJ z ), and hence also p(x|φ) is periodic. Thus, it can not distinguish arbitrary phases. Second, an initial state and measurement working well for one phase value might be insensitive to another value. Thus we consider an estimation error minimized for a weighted range of phase values relevant for a given sensor and measurement task. In the following we will adopt a Bayesian approach where the estimation error is averaged over a prior phase distribution P(φ). The cost function of interest is thus defined as the MSE averaged over the prior distribution, defining the Bayesian mean squared error (BMSE) The prior distribution P(φ) reflects the statistical properties of the unknown phase φ hence it is, in general, sensor and task dependent.
Optimal interferometry is based on minimizing the cost function (2) over |ψ in , {Π x }, and φ est (x) for the given prior distribution. For simplicity, we will focus on prior distribution as a normal distribution centered around zero This problem was addressed in [31], where the optimal quantum interferometer has been identified. Below we optimize the cost function (2) within a variational quantum algorithmic approach.
B. Variational Ramsey interferometry
Our goal is to find an implementation of the optimal interferometer given a restricted set of quantum gates available on an experimental platform such as neutral atoms or trapped ions. We will show that low-depth variational quantum circuits of given depth [see Fig. 1(c)] are excellent approximations to optimal interferometry, and can yield significant improvements over SQL defined for uncorrelated atoms.
In the most general form the variational interferometer, illustrated in Fig. 1(b), can be defined by a generic entangling unitary operation U En preparing an entangled input state from the initial product state |ψ 0 = |↓ and a decoding operation U De transforming the projective measurement of a typical observable J z , with eigenbasis |m , into a generic projection Here we consider the subspace spanned by the states |m , m ∈ {−N/2, . . . , N/2}, which are completely symmetric under permutations of N atoms, and |ψ 0 = |−N/2 . The measurement amounts to counting the difference m of atoms in state |↑ and |↓ . As shown in [34], this assumption can be made without loss of generality. The basis states |m are given by the eigenstates of total spin of maximum length, j = N/2, thus satisfying J 2 |m = j(j + 1) |m and J z |m = m |m . As shown in [31], the optimal POVM may be restricted to the class of standard projection von Neumann measurements Π x = |x x|, x|x = δ xx . Thus the measurement of the collective spin component J z transformed by a decoder U De represents the measurement problem in full generality. We assume that the programmable quantum sensor provides us with a set of native resource Hamiltonians {H (i) R }. The unitaries generated by these Hamiltonians determine a corresponding native set of quantum gates as variational ansatz for U En and U De . A generic example is provided by global rotations R µ (θ) = exp(−iθJ µ ) and the infinite range one-axis-twisting (OAT) interaction [73] T µ (θ) = exp(−iθJ 2 µ ) with µ = x, y, z. Such interactions have been realized on quantum simulation platforms [15-29, 82, 83], and very recently also on an optical clock transition [30]. Within this set of gates we constrain the quantum circuits to be invariant under the spin x-parity transformation ensuring an anti-symmetric estimator at and around φ = 0 (see App. B). The most general circuits satisfying the x-parity constraint for a fixed number n En and n De of layers of entangling and decoding gates are and Here the subscripts on the parameters indicate the layer containing the same three gates and the superscript identifies the gate within the layer. The complexity of the circuit is thus classified by (n En , n De ), and we have 3(n En +n De ) (global) variational parameters in a (n En , n De )-circuit, independent of N . Note that here U En and U De commute with particle exchange. The Hilbert space dimension for dynamics in the symmetric subspace is linear in N , which allows us to study theoretically the scaling for large particle numbers N below -in contrast to the case of finite range interactions in Sec. II G. We note that conventional Ramsey interferometry with uncorrelated atoms corresponds to the (0, 0)-circuit with U En = R y (π/2) and U De = R x (π/2). Here atoms are prepared initially in a product state, or coherent spin state (CSS), and remain in a product state during the evolution in interferometer followed by measurement of J y . On the other hand, the interferometer with SSS as input, and GHZ interferometry emerge as (1, 0)-and (2, 1)-circuits, respectively.
In the presented entangler-decoder framework the performance of the interferometer is described, similar to Eq. (1), by the MSE where the conditional probability is Therefore, the optimal interferometer found within the restricted set of available operations is described by the minimum of the BMSE 2. Performance of the variationally enhanced interferometer with N = 64 particles. Performance is shown in terms of the posterior phase distribution width relative to the prior width, ∆φ/δφ, for a given prior, that is, for a given dynamic range of the interferometer. Colored lines show the performance of variationally optimized circuits for the depth (nEn, nDe) of entangling and decoding layers as indicated. The number of variational parameters is given by 3(nEn+nDe). The performance of the optimal quantum interferometer (OQI) [31] is indicated by the dotted line. The shaded areas indicate the classically accessible (purple) and the quantum mechanically forbidden (gray) regions (for N = 64). Related results applied to atomic clocks are shown in Fig. 10 To be specific, we assume for the prior a normal distribution P δφ (φ) with standard deviation δφ [see Eq. (3)]. In addition, (10) assumes a linear estimator φ est (m) = am which is close to optimal, as shown below. We note that it is possible to use the optimal Bayesian estimator, which however is computationally demanding. We describe the corresponding iterative procedure in App. D for the case of a phase operator as observable.
C. Results of optimization
Results of interferometer optimizations [84] are shown in Fig. 2 for N = 64 atoms. The figure plots the ratio ∆φ/δφ of the root BMSE ∆φ relative to the normal prior width δφ. The more information we gain about the parameter φ in a single measurement, the smaller the value of this ratio.
The black dotted line shows the result of the unrestricted minimization of the cost function (2) with normal prior [31], which we refer to as optimal quantum interferometer (OQI). It defines the region (shaded area) inaccessible to any N -particle quantum interferometer. The purple line represents performance of the conventional Ramsey interferometer with CSS as input and a linear estimator, given by the (0, 0)-circuit. Thus, the shaded area above the purple line roughly defines the classically achievable performance.
The performance of the entanglement enhanced inter- 3. Visualization of quantum states |ψ φ = exp(−iφJz) |ψin , and quantum measurement operators as Wigner distributions on the generalized Bloch sphere for N = 64 and δφ ≈ 0.7. The first (a,d), second (b,e), and third column (c,f) correspond to (nEn, nDe) = (1, 0) (squeezed input state, and Jy measurement operator), the optimal quantum interferometer, and to a (1, 3) quantum circuit, respectively. Measurement operators are visualized as colored contours on the Bloch sphere corresponding to different measurement outcomes. The corresponding optimized (optimal) states |ψ φ are shown at various angles φ as gray shaded areas. (a,b,c) three dimensional view of the generalized Bloch sphere with a state rotated to φ = π/3. (d, e, f) Top view of the Bloch sphere with the state rotated to angles φ = 0, π/3, 2π/3. (g) Measurement probability p(m|φ) [see Eq. (9)] corresponding to the overlap between the contours of the measurement distribution and the respective state distribution, displayed in the same column. The three rows correspond to the above three angles φ. Note that for the Jy measurement the distributions at angles π/3 and 2π/3 are indistinguishable in measurement statistics. In contrast, for the OQI and the (1, 3) quantum circuit these angles are well resolved.
ferometer is shown with colored lines. The orange curve represents a (1, 0)-circuit corresponding to a squeezed spin state (SSS) interferometer [72,73], employing the OAT interaction to generate an entangled initial state with suppressed fluctuations along the axis of the effective J y measurement. The minimum of the orange line is located at smaller δφ values when compared to the minimum of the purple line corresponding to the SQL. This manifests the fact that SSS input state increases the sensitivity of the phase measurement at expense of dynamic range [76,85,86]. By adding a single layer of a decoding circuit we obtain the blue curve corresponding to the (1, 1)-interferometer with a slightly enhanced sensitivity and dynamic range. The red and green lines correspond to (1, 3)-and (2, 5)-circuits, respectively, and show striking improvement in sensitivity, providing an excellent approximation for the optimal interferometer (black dotted line). Remarkably, the minima of the red, green, and black curves are located at a wider dynamic range δφ than that of the CSS interferometer. Hence the optimal entangled initial state and the effective nonlocal observable allows us to achieve both a higher phase sensitivity and a wider dynamic range.
To gain understanding of the physical meaning of the measurements and initial states emerging from the numerical optimization, we show their Wigner functions in Fig. 3. A formal definition of the Wigner distribution is provided in App. C. The three columns correspond, in consecutive order, to the (1, 0)-circuit (SSS interferometer), the optimal quantum interferometer of [31], and the (1, 3)-circuit. The chosen prior width δφ ≈ 0.7 is indicated in Fig. 2 by the vertical dashed line. The first row of panels 3(a-c) shows 3D views of the generalized Bloch sphere with Wigner functions of the measurement operators shown in shades of red and blue for J y , the optimal observable, and U † De J z U De operators, respectively. A contour of constant color corresponds roughly to a certain measurement outcome which is obtained with the probability given by the overlap of the contour with the Wigner function of a quantum state. The states are shown in 3(a-f) with the gray outlined areas.
Panel 3(a) shows clearly the non-optimality of the SSS interferometer with a measurement of the spin projection J y . Optimization of the SSS results in a moderate level of squeezing (gray ellipse squeezed along the y-axis). More squeezing would produce stronger anti-squeezing along z-axis leading to overlap with more contours of the J y Wigner function, thus increasing the variance of the measurement results for nonzero φ [76,86]. Another limitation of the SSS interferometer, illustrated in panels (d) and (g), is the reduced dynamic range in the interval −π/2 and π/2. Panels (d) and (g) show that states rotated by the phase angle φ = 2π/3 > π/2 have the same measurement statistics as states rotated by φ = π/3. Thus, phases outside the [−π/2, π/2] interval can not be reliably estimated.
The optimal quantum interferometer is explained in the central column of Fig. 3. Here panel (b) shows that the initial state is squeezed significantly stronger than in the SSS interferometer. This is possible because the corresponding optimal measurement is very similar to the phase operator of Pegg and Barnett [87], which has eigenstates with well defined phases (see Sec. II D below for detailed comparison). One can see that the color contours of the optimal measurement Wigner function in panels (b) and (e) are aligned with the meridians and thus overlap favorably with the strongly squeezed initial state rotated by a wide range of phase angles φ. Strikingly, the OQI can effectively use the full 2π dynamic range, as illustrated in panels (e) and (g).
Finally, the (1, 3)-interferometer, presented in the third column of Fig. 3, exhibits properties similar to the OQI. Interestingly, the initial state in this case is not a con-ventional squeezed state, as shown in panel (c), but a slightly twisted one. This, however, does not impair the performance of the interferometer as the effective measurement is also twisted such that it matches the initial state rotated by a wide range of phase angles. This peculiarity is a consequence of the restricted gate set available for the variational optimization in a realistic system. It is remarkable that the low depth (1, 3)-circuit already provides an excellent approximation for the OQI.
The extended dynamic range of the variationally optimized interferometer is explored in Fig. 4. Panels (a) and (b) show, respectively, the estimator expectation valuē and the estimator mean squared error (8) as functions of the actual phase φ for an interferometer optimized for prior width of δφ ≈ 0.7 (indicated with vertical dashed lines).
The estimator expectation value of the (0, 0)-and (1, 0)circuit (CSS and SSS interferometer) is given by a sine function [purple and orange line in panel (a)], thus, it can unambiguously map the estimated phase to the actual phase in the range between −π/2 and π/2. However, the useful dynamic range of the interferometer is even narrower as shown by the estimator error in panel (b). The estimator error of the SSS state is suppressed below the CSS benchmark line only for phases between, roughly, −π/4 and π/4. The (1, 1)-interferometer [blue line in (a) and (b)] starts to exploit the entangled measurement and achieves a bit wider linear regime ofφ est in (a) and a wider region of suppressed estimator error in (b). Although the minimum error of (1, 1)-circuit is larger than that of (1, 0)circuit, it still has superior overall sensitivity as phases in the tails of the prior distribution are better resolved.
Finally, more complex decoding operations employed by the (1, 3)-and (2, 5)-circuit (red and green lines) allow to approach the performance of the optimal interferometer (black dotted lines). The linear regime ofφ est extends almost to the full 2π range, and the estimator error is well suppressed for phases deeply within the tails of the prior.
D. Comparison between variational and phase operator based interferometers
From a theory perspective it is interesting to compare the performance of the variationally optimized interferometer and the interferometer based on covariant measurement [33,34]. Here covariant measurements represent the class of measurements optimal for phase estimation with no a priori knowledge and phase-shift symmetry, i.e. assuming a prior distribution P(φ) = (2π) −1 and a 2π-periodic cost function, as opposed to the MSE (1).
In the case of clocks and magnetometry, the free evolution encoding the phase φ is the collective spin rotation, e −iφJz . The corresponding covariant measurement optimal for estimation of the rotation angle φ can be represented by the von Neumann measurement [88] with phase operatorΦ [87], which we define in App. D.
In order to evaluate the performance of the phase operator based interferometer (POI), we minimize the cost function (2) forΦ as the observable and the normal prior P δφ (φ). To this end, we use the optimal Bayesian estimator known as the Minimum Mean Squared Error (MMSE) estimator [3] and find the corresponding optimal initial state |ψΦ (see App. D for details). This results in the optimal posterior width ∆φ POI as discussed in Sec. II C for the variationally optimized interferometer.
To compare different interferometers we consider their performance at the optimal prior width with respect to the OQI performance and define the ratio: The χ value corresponds to the ratio of minima of an interferometer and the OQI curves in Fig. 2. The OQI corresponds to χ = 1. Figure 5 shows the χ − 1 value for variationally optimized andΦ based interferometers for various system sizes up to N = 512. The figure highlights sub-optimality of the POI (blue points) for the task of phase estimation with non-periodic cost function, as is relevant for frequency estimation in, e.g., optical clocks. For small systems, N 16, the POI is up to ∼ 10% less efficient than the OQI and the variational (1, 3)-and (2, 5)-interferometers (green and red points, respectively). (1, 3)-circuit outperforms POI for systems of up to N ∼ 40 atoms, whereas (2, 5)-circuit is better for up to N ∼ 100 atoms. In the limit of large number of atoms, N ≫ 1, the POI approaches the OQI performance. Empirical fitting indicates convergence rate χ POI − 1 ∼ N −0.77 , as N increases. On the other hand, the variationally optimized interferometers diverge from OQI linearly with N .
E. Variational Optimization in Presence of Imperfections and Noise
Variational optimization can be extended to include imperfections and decoherence. This optimization can also be carried out on the physical quantum sensor. This is particularly beneficial when the experimental characterization of imperfections and noise is incomplete.
There are various sources of imperfections and decoherence, which are relevant in our context. First, there are control errors in implementing variational quantum gates. These include offsets of control parameters and Hamiltonian design errors. The latter are deviations of the physically realized vs. the ideal Hamiltonian, e.g. in the implementation of one-axis twisting interaction. However, if these (unknown) control or design errors are static, i.e. do not fluctuate between experimental runs, a variational algorithm performed on the device will still optimize, and thus compensate in the best possible way for these errors in U En and U De , i.e. find the best gate decomposition for given building blocks. In addition, there will be decoherence due to fluctuations of control parameters, or coupling to an environment as in spontaneous emission or dephasing.
To incorporate the latter we need to extend the formalism to density matrices instead of the previously discussed pure states. Below we illustrate this by an optimization of the Ramsey interferometer in the presence of single atom dephasing noise during the Ramsey interrogation time T , as one example of experimentally relevant decoherence. Local dephasing noise is described by the Lindbladian Thus the density matrix after the Ramsey interrogation time, can be expressed in terms of the dimensionless phase φ accumulated during the Ramsey interrogation time T and the effective exposure to the dephasing noise γT with dephasing rate γ. Here ρ θ = U En (θ) |ψ 0 ψ 0 | U † En (θ), where we used that the dephasing Lindbladian and the free evolution of the clock supercommute. The particle permutation symmetry of the Lindbladian enables us to simulate systems at a cubic cost in N [89,90]. The conditional probability, required to determine the BMSE in Eq. (10) therefore reads Figure 6 shows that the optimized ∆φ/δφ increases as the noise increases, as expected. For a small γT /δφ = 0.01 the variational (1, 3)-interferometer is close to optimal without noise. Remarkably for all ratios γT /δφ 1, the minimum of the (1, 3) interferometer remains well below the uncorrelated (0, 0)-and the SSS (1, 0)-interferometers. This ordering of the respective global minimum is independent of N , whereas for γT /δφ = 10 none of the entangling sequences improve significantly compared to SQL [91].
F. Towards the Heisenberg limit
The variationally optimized interferometer with lowdepth quantum circuits found within the Bayesian framework quickly approaches the accuracy of the optimal Ramsey interferometer. We will now discuss our results from the perspective of reaching the Heisenberg limit (HL).
The HL is a lower bound on the accuracy of an interferometer imposed by quantum mechanics. For an N -atom interferometer the HL and SQL are traditionally written as which must be understood in context of the quantum Fisher information [67,69,92] and quantum Cramér-Rao bound [33,93] (implying δφ → 0). In contrast, in the present work we have adopted a Bayesian approach, which includes optimizing for a finite dynamic range δφ.
To evaluate the performance of our quantum variational results for a given circuit depth in comparison with HL, we will adopt below van Trees inequality [94,95] as a bound for the BMSE. In brief, for any given conditional probability distribution p(m|φ) the Cramér-Rao inequality provides a bound on the variance of an unbiased (φ est = φ) For pure states, i.e. in the absence of decoherence, F φ ≤ N 2 in correspondence to the HL above. We emphasize that the Cramér-Rao inequality seeks to identify optimal unbiased estimators, which can in general be achieved only locally in φ, i.e. in a small neighborhood of a given phase, and not for a finite dynamic range as is the goal in our Bayesian approach. In the Bayesian framework, a bound on the BMSE is imposed by van Trees' inequality, Here, the first term in the denominator is the Fisher information (17) averaged over the prior distribution, The second term is the Fisher information of the prior distribution, representing the prior knowledge. To isolate the measurement contribution from the prior knowledge, we define an effective measurement variance (∆φ M ) 2 via and obtain reminiscent of the Cramér-Rao inequality (16). In case of a normal prior distribution (3) we have I = (δφ) −2 , and the effective measurement variance (19) reads In Fig. 7 we plot (∆φ M ) × N , the measurement error scaled to the atom number, for the (2, 5)-variational interferometer (solid lines) as a function of the prior width δφ for a range of atom numbers N . In addition, we indicate the HL and the π-corrected HL (see below) as dotted lines and show results for a GHZ interferometer with spin x-parity measurement [70] (dashed lines). In the case of the GHZ interferometer with a normal prior we have showing that the GHZ interferometer attains the HL uncertainty ∆φ M → 1/N for a given prior width δφ only for atom numbers N 1/δφ. This fact is illustrated in Fig. 7 by the dashed lines which diverge from the HL for smaller and smaller δφ as N grows. In contrast, the variational interferometer (solid lines) is of the order of the π-corrected HL [34,35,37,96], ∆φ M → π/N , for a wide range of prior widths δφ as N increases. Intuitively, the emergence of π-corrected HL can be understood as follows. The optimal N atom quantum interferometer can be described as a von Neumann measurement in the particle permutation symmetric subspace [31,34]. Thus, there are N +1 possible measurement outcomes to distinguish at most N + 1 phase values in the interval [−π, π]. The corresponding estimation error for evenly spread estimates reads ∆φ ∼ (1/2) 2π/(N + 1) → π/N . For large δφ the solid lines in Fig. 7 exhibit strong deviations from the asymptotic π-corrected HL behavior. The cusps are explained by phase slips outside the interval [−π, π] which lead to a squared estimation error of 4π 2 . For a normal prior distribution, the performance of an interferometer limited by the π-corrected HL including the phase slips is given by Results of this section are obtained in absence of decoherence.
G. Finite range interactions
Our previous discussion assumed infinite range interactions as entangling quantum resource, while e.g. neutral atoms stored in tweezer arrays feature finite range interactions. The variational optimization of the BMSE can be directly generalized to finite range interactions, which we illustrate by optimizing a sensor based on Rydberg dressing resources [97,98] , as is realized in alkaline earth tweezer clocks [46][47][48]. The effective interaction Hamiltonian we use for the optimization reads where r k represents the position of particle k. The interaction strength at short distances V 0 and interaction radius R C depend on the Rydberg level and the dressing laser used to let the particles interact [99].
Ref. [66] presented a study of variationally optimized spin-squeezed input states, and we refer to this work for the elementary gates we employ as building blocks for variationally optimizing entangling and decoding operations. In analogy to Eqs. (6) and (7), we write the entangler and decoder, effectively replacing the T x,z by D x,z . In a similar way we can rewrite Eq. (9) to account for dynamics in full 2 N -dimensional Hilbert space. Figure 8(a) shows the optimized ∆φ/δφ for a 4 × 4 square array for R C = a with a the lattice constant. We find variational solutions approximating the OQI, similarly to the OAT interactions in Fig. 2. In contrast to the infinite range OAT interaction we are not able to exactly reproduce the optimal GHZ-state interferometer at δφ < 1/N . Nonetheless, at any prior distribution width significant improvement beyond the uncorrelated interferometer is achieved and in particular around global minimum of the optimal interferometer (vertical dashed line), the decoder-enhanced circuits clearly surpass sensitivity of entangled input states only.
In Fig. 8(b) we further study the dependence on the scaled interaction radius R C /a for a fixed prior distribution width δφ corresponding to the minima of variational and optimal interferometer curves in Fig. 8(a) (vertical dashed line). We see that even in the limit of an effective nearest neighbor interaction R C = a a clear improvement beyond the classical sensitivity limit is possible. As the interaction radius increases, the root BMSE of the variationally optimized interferometer decreases, ultimately reproducing the results of infinite-range interactions in the limit R C /a → ∞.
Theoretical treatment of the variational interferometry with finite range interactions involves solution of a quantum many-body problem. This, in general, is an exponentially hard problem representing the regime where variational optimization on the quantum sensor as a physical device provides a (relevant) quantum advantage, beyond the capabilities of classical computation.
III. APPLICATION TO ATOMIC CLOCKS
Atomic clocks realized with neutral atoms in optical trap arrays or trapped ions provide us with natural entanglement resources to implement variationally optimized Ramsey interferometry. Below we provide a study of a variationally optimized clock assuming as quantum resources global spin rotations and OAT, as realized, for example, with trapped ions as Mølmer-Sørensen gate, or in cavity setups with neutral atoms. This discussion is readily extended to other platforms and resources.
Optical atomic clocks operate by locking the fluctuating laser frequency ω L (t) to an atomic transition frequency ω A [1]. To this end, an atomic interferometer is used to repeatedly measure the phase φ k = t k +T t k dt[ω L (t) − ω A ] accumulated during interrogation time T at the k-th cycle of clock operation, i.e. k = 1, 2, . . .. After each cycle, the measurement outcome m k providing the phase estimate φ est (m k ) is used to infer an estimated frequency deviation φ est (m k )/T . In combination with previous measurement results this is used to correct the laser frequency fluctuations via a feedback loop yielding the corrected frequency of the clock ω(t). For further details on the actual clock operation we refer to App. G, where we also describe our numerical simulations of optical atomic clocks. We emphasize the importance of finite dynamic range in phase estimation in identifying the optimal clock operation, as provided by the Bayesian approach of Sec. II.
The relevant quantity characterizing the long-term clock instability is the Allan deviation σ y (τ ) for fluctuations of fractional frequency deviations y ≡ [ω(t) − ω A ]/ω A , averaged over time τ T [1]. To connect the Bayesian posterior phase variance of the optimized interferometer (10) of Sec. II, we follow the approach of [78] to obtain predictions for the clock instability in the limit of large averaging time τ . Our predictions are supported by numerical simulations of the closed servo-loop of the optical atomic clocks.
In the following we assume that interrogation cycles can be performed without dead times (Dick effect). This can be achieved using interleaved interrogation of two ensembles [100]. For interrogation of a single ensemble, Dick noise may pose limitations for interaction-enhanced protocols especially for larger ensembles, as was analyzed for squeezed states in Ref. [79]. In App. F and App. H we characterize in more detail the requirements regarding dead time for the class of variational protocols developed here.
A. Prediction of clock instability in the Bayesian framework
As shown in [78], the Allan deviation can be well approximated by means of the effective measurement uncertainty ∆φ M which isolates the measurement contribution from the prior knowledge, as in Eq. (19). Assuming no dark times between interrogation cycles, the Allan deviation reads Here τ /T is the number of cycles of clock operation and ∆φ M (T ) ≡ [(∆φ T ) −2 − (δφ T ) −2 ] −1/2 is the effective measurement uncertainty of one cycle. The posterior width ∆φ T is found according to (10) assuming a prior width δφ T = (b α T ) α/2 corresponding to laser noise dominated spreading of the phase distribution within one interrogation cycle. The labels α = 1, 2, 3 specify temporal correlations in the phase noise of the laser and correspond to atomic clocks with a white-, flicker-, or random-walkfrequency-noise-limited laser, respectively. The laser noise bandwidth b α and the exponent α are related to the power spectral density S L (f ) ∝ f 1−α of the free running laser (see App. A). Representative examples for σ y (τ ) when using variationally optimized protocols are shown in Fig. 9. The solid lines result from numerical simulations of the full feedback loop of an atomic clock in which an integrating servo corrects out frequency fluctuations over the course of multiple cycles, see App. G for details. For the simulations we assume the atoms as ideal frequency references without any systematic shift of ω A . In atomic clocks the simulated Allan deviations presented in Fig. 9 are larger at small averaging times τ /T ∼ 1, due to the delayed feedback, before reducing as σ y (τ ) ∝ τ −1/2 at long averaging times τ /T 1 when all correlated laser noise is corrected out. To determine long-term stability the Allan deviation is measured experimentally for a time τ long enough that clock instability has reached this asymptotic scaling. Therefore, we introduce and consider below a dimensionless prefactor for the asymptotic scaling which gives the Allan deviation in units of ω −1 A (b α /τ ) 1/2 , as shown by the dashed lines in Fig. 9. In the following, we use Eq. (25) to re-evaluate the performance of the optimized interferometers presented in Fig. 2 as the achievable long-term clock instability σ at an averaging time τ . In comparison to the framework of Sec. II the BMSE is replaced by the Allan deviation and the prior width by the interrogation time T . We note that the scaling of the Allan deviation with respect to T is more intricate than the one of the BMSE with the prior width: On the one hand, a large interrogation time means good accuracy in frequency estimation, but on the other hand, it also broadens the prior distribution and therefore degrades the phase estimation. Figure 10(a,b) shows the achievable long-term clock instability σ as a function of the interrogation time T for clocks made of N = 64 atoms and the flicker-noiselimited laser. The purple line (in both panels) represents performance of the conventional clock exploiting Ramsey interferometer with CSS as input, collective spin projection measurement, and a linear estimator, given by the circuit (0, 0). Thus, the shaded area above the purple line roughly defines the performance achievable by classical clocks. In the case of CSS based classical clocks the cost function (10) can be analytically minimized [78] yielding the dimensionless Allan deviation
B. Results of the clock optimization
where ν ≡ (δφ T ) 2 . The expression (26) has two important limits. For small interrogation times and, consequently, small prior widths the performance of the clock is limited by the quantum projection noise of the uncorrelated atoms as σ SQL = (N b α T ) −1/2 . The SQL limited clock instability σ SQL (dashed purple line) decreases as the interrogation time grows. For large interrogation times, b α T ∼ 1, however, the laser noise becomes dominant and generates accumulated phase values exceeding the dynamic range of the atomic interferometer, thus, leading to the laser coherence time limit (CTL) [79] of the clock σ CSS . Between these two limits there exists an optimal interrogation time delivering the minimum Allan deviation σ opt ≡ min T σ which defines the optimal clock performance.
The black dotted line in Fig. 10(a,b) shows the instability of the optimal quantum clock (OQC), σ OQC , exploiting single-shot protocols with the optimal interferometer. The gray shaded region below the black dotted curve is inaccessible to any N -particle clock not using entanglement between different clock cycles for initial state preparations and/or measurements. The laser CTL for the optimal clock in the asymptotic limit of large N can be estimated from Eq. (2) by assuming zero phase estimation error within the [−π, π] interval and (φ) = 4π 2 outside of the interval due to the phase slip The green dotted line in panel (a) shows the laser CTL for the optimal clock, σ OQC CTL . The optimal clock instability at shorter interrogation times demonstrates two distinct scalings corresponding to the two Heisenberg limits discussed in Sec. II F. At very short times, (b α T ) α/2 N −1 , the GHZ state based clock (red line) becomes optimal approaching the instability limit given by the conventional HL, σ HL = N −1 (b α T ) −1/2 (red dashed line). Larger interrogation times correspond to wider prior phase distributions hence the π-corrected HL becomes the limiting factor, σ πHL = πN −1 (b α T ) −1/2 (green dashed line). The optimal quantum clock instability in the limit of large number of atoms, N → ∞, is fundamentally restricted by the interplay between the σ πHL and σ OQC CTL as we will discuss below.
The instabilities of clocks based on variationally optimized interferometers employing quantum circuits of various complexities are shown in Fig. 10(b) with solid color lines. In particular, the orange line corresponds to the SSS based clock, given by the circuit (1, 0). As the circuits depth grows, the enhanced dynamic range of the variational interferometer shifts the laser CTL towards larger interrogation times which in combination with suppressed shot noise reduces the clock instability. The figure shows that variational clocks of growing complexity quickly outperform the SSS clock and approach the optimal quantum clock instability. Beyond the model predictions this improvement is also observed in simulations of a full clock operation using variationally optimized protocols, as shown by the markers in Fig. 10(b). Deviations between theory and numerical results can arise due to a number of different effects. For one, the onset of fringehops for b 2 T ∼ 1 is not included explicitly in the models. Especially for small N a sudden loss of stability, resulting from fringe-hops, can occur before reaching the CTL due to stronger, non-Gaussian measurement noise [78,79]. In contrast, for clocks with larger N and increasing complexity it is expected that the onset of fringe-hops and the minimum of CTL coincide. Another source of discrepancy is the assumption of a laser noise dominated prior width δφ T = (b α T ) α/2 . Propagation of the measurement uncertainty and delay within the feedback control can lead to a broadening of the true phase distribution. Especially protocols which are highly optimized to a particular prior width may thus not achieve their predicted stability in the simulations, e.g. around b 2 T ≈ 0.02 in Fig. 10(b).
Nevertheless, good agreement between the numerically determined instability and the theory prediction is found around the overall optimal protocols.
In Fig. 11 we study optimal instability of the variational clocks σ opt (corresponds to minima in Fig. 10) as a function of the atomic ensemble size N . The CSS clock is represented by the purple line which scales asymptotically as σ CSS opt ∝ N −(3α−1)/(6α) . The scaling is a bit slower than the conventional SQL limit ∝ N −1/2 due to the laser CTL which reduces the optimal interrogation time as N grows. Any classical clock using one-shot protocols with collective spin measurements belongs to the shaded purple region above the CSS clock line. The N -scaling of the optimal quantum clock is shown with the black dotted line for system sizes up to N = 64.
For larger system sizes we show the asymptotic behavior (black dashed line) obtained by combining the noise contributions of the π-corrected HL and the laser CTL, σ asym ≡ min T [σ 2 πHL + (σ OQC CTL ) 2 ] 1/2 . Similar to the classical clock scaling, the laser CTL prevents the optimal quantum clock (OQC) from achieving the Heisenberg scaling ∝ N −1 , instead, leading to a logarithmic correction in the large N limit as found in [101,102]. The present approach allows obtaining tighter bounds on the asymptotic scaling for general α (see App. E). In particular, for the flicker-noise-limited laser, α = 2, the OQC instability scales as with z ≡ 32N 4 /π and the corresponding optimal interrogation time scaling as T OQC opt πb −1 2 ln(z ln z) −1/2 . The gray shaded area below the dashed and dotted black lines is inaccessible to quantum clocks without entangled clock cycles. Finally, the variationally optimized clocks of various circuit complexities are shown with solid color lines and demonstrate scalings approaching the optimal quantum clock as the circuit depth increases.
We have also studied performance of the variationally optimized clocks experiencing individual atomic dephasing during the interrogation period T . Similar to the results of Sec. II E, the optimized clocks perform well for decoherence rates small compared to the laser noise bandwidth, γ/b α 1. For stronger noise, γ/b α 1, the optimized clock instability approaches the one of the classical clock, as expected. We also checked the performance of optimized clocks for other types of laser noise α = 1, 3, and found no significant changes to the results presented above.
In summary, atomic clocks based on variational quantum interferometers with low-depth circuits can approach the performance of the optimal quantum clock in singleshot protocols. The variationally optimized clocks can be readily complemented with more sophisticated interrogation schemes [103,104], eventually also approaching the ultimate quantum bound on the Allan deviation [105,106].
IV. OUTLOOK AND CONCLUSIONS
In this work we have studied optimal Ramsey interferometry for phase estimation with entangled N -atom ensembles, and application of these optimal protocols to atomic clocks. We have considered a Bayesian approach to quantum interferometry, and have defined optimality via a cost function, which in the present study is the BMSE for a given prior distribution or, in the context of atomic clocks, the Allan deviation for a given Ramsey time. The key feature of the present work is that optimization is performed within the family of operational quantum resources provided by a particular programmable quantum sensor platform. Thus identifying the optimal quantum sensor is recast as a variational quantum optimization where the entangling circuits generating the optimal input state, and the decoding circuits implementing the optimal generalized measurement are variationally approximated with the given resource up to a certain circuit depth. We have presented two model studies: in our first model, we considered one-axis twisting as quantum resource; our second model uses finite range interactions as entangling operations. Our examples demonstrate that already low-depth circuits provide excellent approximations for optimal quantum interferometry. We emphasize that the familiar discussions of interferometry with spinsqueezing and GHZ states are included as special cases. Furthermore, advanced measurement strategies including adaptive measurement and quantum phase estimation are not advantageous for the present problem, as a von Neumann measurement has been proven optimal.
Given advances in building small atomic scale quantum computers, or programmable quantum simulators which can act also as quantum sensors, the variational approach to optimal quantum sensing provides a viable route to entanglement enhanced quantum measurements with existing experimental entangling, possibly non-universal resources, and optimizing in presence of noise. Indeed trapped ions with Mølmer-Sørensen entangling gates, and optical arrays interacting via Rydberg finite range interactions or cavity setups provide the necessary ingredients for implementing such variational protocols, and quantum sensors. While first generation experiments might demonstrate optimal Ramsey interferometry for a specified dynamic range of the phase, and optimization of quantum circuits 'on the quantum sensor' for various circuit depths (Sec. II), the present work also promises application of variational quantum sensing on existing quantum sensors, in particular atomic clocks (Sec. III). The guiding principle behind the present work of identifying for a sensing task the optimal sensing protocol given the quantum resources provided by a particular sensor and sensor platform, is, of course, general and generic, and applies beyond Ramsey interferometry, and beyond the BMSE as cost function.
As an outlook, we emphasize that the search for optimal sensing can also be run directly as a quantum-classical feedback loop on the physical quantum sensor. This offers the intriguing possibility of optimizing with given quantum resources and in presence of imperfections of the actual device, which might include control errors and noise. Further studies are needed to explore best optimization strategies of the cost function on the classical side of the optimization loop given the limited measurement budget on the programmable quantum sensor. This applies to both the initial global parameter search, supported by theoretical modeling, and small iterative readjustments of optimal operation points due to slow drifts of the quantum sensor.
Optimization on the (physical) quantum sensor can also be performed in the regime of large particle numbers N , which might be inaccessible to classical computations, i.e. in the regime of quantum advantage. Hybrid classicalquantum algorithms have been discussed previously as variational quantum eigensolvers for quantum chemistry and quantum simulation, where 'lowest energy' plays the role of the cost function which is evaluated on the quantum device. In contrast, in variational quantum sensing we optimize quantum circuits in view of an 'optimal measurement' cost function, and it is the (potentially large scale) entanglement represented by the variational manyparticle wavefunction in N -atom quantum memory, which provides the quantum resource and gain for the quantum measurement.
Note added. After submission of the present manuscript, Ref. [107] reported an experimental implementation of variationally optimized Ramsey interferometry in a systems of up to N = 26 trapped ions, in one-to-one correspondence to the present theoretical work. This includes demonstration of quantum enhancement in metrology beyond squeezing through low-depth, variational quantum circuits, and on-device quantum-classical feedback optimization to 'self-calibrate' the variational parameters. In both cases it is found that variational circuits outperform classical and direct spin squeezing strategies under realistic noise and imperfections. To present results in Sec. III in dimensionless units, we follow [78] and define an effective bandwidthb via where σ L is the Allan deviation of the uncorrected reference laser. For a laser that is mainly limited by a single power spectral density component, i.e S L (f ) = h 1−α f 1−α one can unambiguously express the bandwidth in terms of the prefactors h 1−α in the power spectral density and the respective Allan deviation [108], so that Numerical simulation of the clock feedback loop [78] reveal that the dimensionless time b α T is related to the prior distribution width of a stabilized clock by the relation (δφ) 2 = (b α T ) α , where b α = χ(α) 1/αb α is a rescaled bandwidth, differing fromb α only by an empirically determined prefactor χ ≈ 1, 1.8, 2 for α = 1, 2, 3. For a laser spectrum containing all three contributions Eq. (A1) can still be used to determine an effective bandwidth, and servo loop simulations of the clock can reveal the modified time dependence of the prior distribution width enabling one to extend the clock model to realistic laser noise parameters.
Appendix B: Spin x-parity in entangling and decoding circuits We consider global rotations R µ , OAT interactions T µ (see Sec. II B) and finite range dressing interactions D µ (see Sec. II G) with µ = x, y, x as resources for the variational optimization. Within is this set of resources we are able to ensure an anti-symmetric estimator by imposing invariance under the spin x-parity P x on the Entangler and Decoder, i.e. P x U En R y (−π/2)P x = U En R y (−π/2) and P x U De P x = U De under the spin x-parity P x = R x (π/2), since this implies where we use that P x J x P x = J x , P x J y,z P x = −J y,z , P † x = P x and P x R y (π/2) |ψ 0 = R y (π/2) |ψ 0 . The most general entangling and decoding sequences satisfying these constraints are used in Eq. (6),(7) and displayed in Fig. 1. [109].
To obtain the Wigner distribution, the operator is expanded in terms of spherical tensors where j k j −m q m denotes the Wigner 3j symbol. O can be represented in the spherical tensor basis where c k,q = Tr OT k,q . Replacing T k,q in this representation by spherical harmonics Y k,q (θ, φ), one arrives at the Wigner distribution, as a quasi-probability distribution on a generalized Bloch sphere.
The Wigner function can be used to calculate the expectation value by integrating the overlap of the respective Wigner functions over the generalized Bloch sphere. This implies that we can interpret contours of the measurement distribution with the different eigenvalues of the measurement operator while the amplitude of the state distribution indicates how much the state overlaps with the respective projection of the measurement projection.
Appendix D: Numerical optimization of the phase-operator based interferometer Here we define the phase operator and describe an iterative optimization procedure allowing us to minimize the cost function (2) for a given observable using the Minimal Mean Squared Error (MMSE) estimator [3]. The phase operatorΦ reads [87,88]: where J z |m = m |m .
Our goal is to minimize the cost function Eq. (2) for the observableΦ and the MMSE estimator by finding the optimal initial state |ψΦ . The MMSE estimator reads [3]: where the conditional probability is p(φ|s) ∝ p(s|φ)P(φ) with p(s|φ) = | s|e −iφJz |ψ in | 2 and the observable eigenstate |s defined in Eq. (D3). The optimization is performed iteratively. Initially we start with s = 0 eigenstate ofΦ as the input state |ψ (0) in = |s = 0 , which is a good approximation for a state highly sensitive to phases around φ = 0. The state defines the corresponding MMSE estimator φ MMSE est(0) (s) as given by Eq. (D4). In the next iteration we find the state |ψ (1) in minimizing the cost function (2) for the given φ MMSE est(0) (s) estimator by solving a corresponding eigenproblem, as described in [31]. The iterative procedure converges quickly yielding the optimal initial state for the POI |ψ (k) in → k→∞ |ψΦ which, in turn, defines the optimal estimator via Eq. (D4) and the corresponding posterior width ∆φ POI . This result is used in Sec. II D.
Appendix E: N -scaling of the optimal quantum clock instability Here we derive asymptotic scaling of the optimal interrogation time and the corresponding minimal instability of the optimal quantum clock. As discussed in Sec. III, the instability of clocks exploiting single-shot protocols is fundamentally limited by the measurement shot noise given by the π-corrected HL for short interrogation times T , and the laser CTL for large T . For the dimensionless Allan variance we write where s ≡ π − 2 α b α T is the dimensionless Ramsey time. The goal is to minimize Eq. (E1) with respect to s in the limit of large number of atoms, N → ∞. The derivative with respect to s reads and, using a self-consistent assumption for optimal time s * 1, results in the following equation for s * , Here we used the error function asymptotic 1 − erf(x) → e −x 2 /( √ πx) for x → ∞. Taking the logarithm of the expression (E2) (s * , α, and N are positive) we obtain an equation for w ≡ s −α * , w − ln w = ln z, with z ≡ 8α 2 N 4 /π. For z > e, the solution can be written as the infinitely nested logarithm, w(z) = ln(z ln(z ln(z(ln . . .) . . .))), and can be checked by direct substitution. Using the w(z) function we can express the optimal Ramsey time for N 1 as follows Finally, we substitute the optimal Ramsey time into Eq. (E1) We use Eqs. (E3) and (E4) and keep only the first two logarithms in the definition of w(z) to obtain expressions for the optimal interrogation time and minimal instability of the optimal quantum clocks in Sec. III for α = 2.
Appendix F: Finite dead time in the atomic clock protocol Here we discuss upper limits to the dead times of atomic clocks, which are required to reach the variationally optimized stability presented in Sec. III. When each interrogation cycle of duration T C = T D + T is composed of a dead time T D > 0, and Ramsey free evolution time T , the stability is reduced compared to the ideal case at T D = 0 discussed in the main text.
Let us consider S L (f ) = h −1 f −1 as the power spectral density of the free running laser. In addition, we assume that the protocols are sensitive to phase shifts during T only and that all entangling and decoding operations are included in the dead time where we assume no sensitivity. Given these assumptions, the instability contribution of the Dick effect is [110] with χ given in App. A and the duty cycle d = T /T C . In addition, the instability predicted in the Bayesian framework, Eq. (24), becomes with σ as defined in Eq. (25). In the following we want to estimate below which level of dead time the combined instability σ y (τ ) = σ 2 Bay (τ ) + σ 2 Dick (τ ) is no longer dominated by the contribution of the Dick effect. The minimal required duty cycle d min where the value for σ 2 Dick (τ ) at optimal Ramsey time b 2 T opt dives below the lowest variational instability is sin 2 (πnd) π 2 n 3 ≤ σ 2 opt .
(F3) From d min one can directly infer the maximum fraction R = T D,max /T C = 1 − T opt /T C = 1 − d min of dead time in the clock cycle, where T C = T opt + T D,max . In the limit R 1 it can be shown that − ln(R)R 2 /(1 − R) 2 ∝ (b 2 T opt )σ 2 opt , so it is expected that for N 1 this ratio will eventually follow a similar scaling as σ 2 opt . The exact relation is shown in Fig. 12. It is worth noting that R 1 is still recommended for small ensemble sizes, even though this condition is not required based on d min , to prevent unnecessarily increasing the clock instability. A more complete model for the influence of dead time and the Dick effect requires to include the full spectral density S L (f ) of the laser and evaluating the sensitivity function during the entangling and decoding dynamics. In order to see how well σ [Eq. (25)] reflects an achievable instability we perform numerical simulations of all essential parts involved in the closed feedback loop of an optical atomic clock when operating with the variationally optimized Ramsey protocols.
Building up the simulations proceeds as follows: (i) The free-running laser is simulated. Given a particular spectral density S L (f ) = h 1−α f 1−α and the Ramsey time T we generate a sequence of random numbers y k = 1 T t k +T t k dt[ω L (t) − ω A ]/ω A which gives the average frequency fluctuations of the laser without any feedback in each cycle k. Correlations between different cycles, required when α = 1, can e.g. be obtained in the time domain by implementingȳ k as a random walk or a sum of multiple damped random walks [78].
(ii) To stabilise the laser frequency for long averaging times τ T a feedback correction is applied to the laser frequency at the end of each cycle. In the simulations, the estimated frequency deviationȳ est,k = m k /(2πω A T ∂ φm (φ) |φ=0 ) obtained from measurement result m k at t k is multiplied by a gain factor 0 < g ≤ 1 and subtracted from the true laser frequency. This integrating servo corrects frequency errors over ∼ 1/g cycles and is sufficient to achieve a robust stabilization at τ /T 1/g for flicker noise limited lasers [79]. However, to simulate the quantum probabilities p(m|φ k ) at t k the phase φ k = ω A Tȳ k based on the actual laser noiseȳ k is needed. Thus, later measurements are affected not only by the noise of the free-running laser but also by the measurement results and corrections from earlier cycles. To implement this efficiently, the simulation runs sequentially: At the beginning the phase φ 1 is calculated for the first cycle only. Then the probabilities p(m|φ 1 ) with this particular phase are calculated and a single measurement result m 1 is sampled according to this distribution. The estimator y est,1 is calculated and the servo corrects the laser frequency so thatȳ 2 =ȳ 2 − gȳ est,1 is the actual noise in the second cycle. This procedure is repeated in each cycle with the corrected frequencies, meaning e.g. φ 2 = ω A Tȳ 2 .
(iii) The clock stability is evaluated, based on the simulated sequence of stabilized frequency deviationsȳ k . The overlapping Allan deviation σ y (τ = nT ) is calculated numerically from averages over n cycles. Statistical averaging is performed over many intervals of length n in a single run with n tot n cycles and then averaging again over multiple runs. Finally, the long term instability is extracted by fitting the prefactor to the asymptotic scaling σ y (τ ) ∝ τ 1/2 reached typically after n ∼ 10 4 cycles in simulations of n tot = 2 × 10 6 cycles.
To compare numerical results to theory predictions, as in Fig. 10(b), the values for T and h 1−α in the simulations are matched to reproduce the same laser induced prior width (δφ) 2 = (b α T ) α .
Appendix H: Cumulative interaction angle
A relevant question regarding the Dick effect is the time it takes to perform the entangling and decoding sequence. The slowest time scale on a quantum simulator is usually the interaction strength. Results presented in Fig. 2, 10 were obtained for interaction angles ≤ π/2. From a practical point of view, however, it might be beneficial to consider smaller interaction angles.
Here we show that, close to the respective minima in Fig. 2, 10, the displayed results of the variationally optimized interferometers can be well approximated by quantum circuits with small cumulative interaction angles θ OAT = nEn k=1 θ . In Fig. 13 we constrain each interaction angle to be positive and smaller than a threshold that decreases with The cumulative angle of all one axis twisting gates Tx,y required to obtain the dimensionless Allan deviations displayed above. The vertical dashed line indicated the interaction angle of π/2 required to prepare a GHZ-state. the depth of the circuit. In addition we require that the cumulative interaction angle θ OAT is always smaller or equal than π/2, the interaction angle required to prepare a GHZ-state. Similarly to the OAT squeezing [73], the variational sequences can also work with a cumulative interaction that decrease rapidly with N while the resulting Allan deviation remains a good approximation of the unconstrained optimization in Fig. 11. | 15,565.2 | 2021-02-10T00:00:00.000 | [
"Physics"
] |
Assimilation of sea-surface temperature and altimetric observations during 1992–1993 into an eddy permitting primitive equation model of the North Atlantic Ocean
Sea-surface temperature (SST) and sea-surface height (SSH) observations collected from space between October 1992 and December 1993 have been assimilated into a realistic primitive equation model of the North Atlantic Ocean circulation at eddy permitting resolution. The assimilated SST data originate from AVHRR observations gathered and processed within the NASA Pathfinder project; the altimetric data consist of SSH maps computed as the sum of a time-invariant dynamic topography and gridded sea-level anomalies obtained by combining Topex/Poseidon and ERS altimeter data. The assimilation scheme is a reduced-rank Kalman filter derived from the Singular Evolutive Extended Kalman (SEEK) methodology [J. Mar. Syst. 16 (1998) 323], in which the error statistics is represented in a subspace of small dimension. The error subspace is initialized with a truncated series of Empirical Orthogonal Functions (EOFs) of the system variability. The analysis algorithm includes a mechanism to update the forecast error statistics adaptively using all pertinent informations from the innovation vector. Hindcast experiments have been conducted with a 1/3° model of the North Atlantic basin forced with ECMWF atmospheric reanalyses. The impact of the data assimilated during 1993 is assessed by examining how observed (SSH and SST) and nonobserved variables (such as velocity and thermohaline properties in the interior of the ocean) are modified by the assimilation scheme. Finally, the validation of the hindcast experiments with independent XBT measurements is performed in order to evaluate the objective skill of the procedure. The various diagnostics demonstrate the positive impact of the satellite data to hindcast the upper ocean circulation at eddy permitting resolution and the capacity of the scheme to estimate the geographic distribution of the forecast error.
Assimilation of sea-surface temperature and altimetric observations during 1992-1993 into an eddy-permitting primitive equation model of the North Atlantic Ocean
Charles-Emmanuel Testut, Pierre Brasseur, Jean-Michel Brankart, Jacques
Introduction
Assimilation algorithms are at the hearth of operational ocean prediction systems that will be operated in the future on a routine basis, at resolutions fine enough to represent oceanic eddies.
The theoretical framework of data assimilation in meteorology and oceanography is now well established: variational methods seek to minimize the misfit between data and model simulations by optimization of well-chosen sets of control parameters, while sequential methods proceed by intermittent blending of observations and model solutions according to their respective accuracy.
The generalized inverse (Bennett, 1992) or adjoint variational methods (Courtier, 1997;Rabier et al., 2000) can be used to optimize the initial conditions and the atmospheric forcings of an ocean simulation, assuming that the model describes the dynamics perfectly (e.g., Lee and Marotzke, 1998;Wenzel et al., 2001;Stammer et al., 2003) or unperfectly (e.g., Bennett et al., 1998). In order to ensure rapid convergence of the minimization process, the model dynamics has to be linear or weakly nonlinear. These methods have been applied successfully in the context of equatorial or global models at coarse resolution. In the case of strongly nonlinear flows, specific temporal strategies (Luong et al., 1998) or iterative algorithms (Chua and Bennett, 2001) are needed to cope with the problem of local minima.
Using the sequential estimation theory, ensemble Kalman filters and smoothers, which propagate the error statistics by simulating a limited set of model trajectories simultaneously, have been investigated in the context of nonlinear ocean dynamics (Evensen, 1994;Burgers et al., 1998;Evensen and van Leeuwen, 2000). Like variational algorithms, the practical implementation of these methods show that the numerical resources needed to optimally combine nonlinear ocean models with observations are equivalent to, at least, a few tens of model integrations (Brusdal et al., 2003). Considering the computing power available today, and the need to operate highresolution ocean models at global scales routinely (e.g., every week), cheaper assimilation methods are required to demonstrate the feasibility of operational ocean prediction systems.
Several studies have examined how simplified representations of the estimation error statistics can reduce the computational burden of the conventional Kalman filter with nonlinear models in academic configurations (e.g., Fukumori and Malanotte-Rizzoli, 1995;Pham et al., 1998;Voorrips et al., 1999), while applications of reduced-order Kalman filters into realistic models of the Tropical Pacific Ocean (Fukumori, 1995;Cane et al., 1996;Verron et al., 1999) have demonstrated the usefulness of the concept in quasi-linear regimes.
Concerning the operational prototypes in use today, most of them assimilate data with suboptimal interpolation methods wherein the background error covariances are derived from fairly simple schemes (De Mey et al., 2002;Bell et al., 2000;De Mey and Benkiran, 2001;Smedstad et al., 2003). These simplified methods are easy to set up and computationally efficient, but they require external estimates of error covariances. As a result, the assimilation increments may not be consistent dynamically, and relevant statistically. A better use of the data in those systems can thus be expected from ''advanced methods'', which aim at more dynamically and statistically consistent error covariances and state estimates. In the framework of reduced-order Kalman filters, this goal can be achieved by specifying multivariate error subspaces with dynamically balanced covariances, and by using all pertinent informations from the model and the data to propagate the error statistics during the assimilation period.
This issue has motivated the present study, which examines the implementation of an advanced statistical method into a realistic primitive equation model of the North Atlantic. A reduced-order Kalman filter derived from Singular Evolutive Extended Kalman (SEEK) (Pham et al., 1998) is used to assimilate seasurface temperature (SST) and sea-surface height (SSH) data collected from space between October 1992 and December 1993. Potentially, other data types such as surface salinity (Durand et al., 2002) and hydrographic profiles should also be included to meet operational requirements. However, a first objective of the present work will be to assess, in a preoperational context, the benefit of satellite data assimilation by comparison with independent in situ data.
The analysis step of the SEEK filter is achieved in a subspace of small dimension which contains the dominant directions of the background error (Pham et al., 1998;Ballabrera et al., 2001). The error subspace used here is initialized with a truncated series of threedimensional, multivariate Empirical Orthogonal Functions (EOFs) of a previous model simulation; this technique allows us to specify error covariances consistently with the model dynamics. In addition, the SEEK algorithm includes an adaptive mechanism to update the error subspace with all pertinent informations left in the innovation vector after the analysis step (Brasseur et al., 1999;Brankart et al., 2003). A second objective will be to examine the capacity of the scheme to propagate the error statistics in a consistent way throughout the assimilation period, and to determine its dominant characteristics in a realistic North Atlantic model. The paper is organized as follows. In Section 2, the model configuration adopted for the assimilation experiment is described. The main characteristics of the data sets available for assimilation during 1992 and 1993 are reviewed in Section 3, and an evaluation of the free model with respect to these data is discussed. The specific aspects of the SEEK assimilation system implemented for this study are described in Section 4. Section 5 is dedicated to the analysis of one major assimilation experiment of real SST and SSH satellite data. Finally, concluding remarks are given in Section 6.
The model configuration
The assimilation experiments examined in this paper have been carried out with a primitive equation model of the North Atlantic basin implemented and validated by the French CLIPPER project (Treguier et al., 1999).
The numerical code adopted to simulate the ocean circulation is OPA 8.1, a z-coordinate, primitive equation model developed at LODYC (Madec et al., 1998) that uses the hydrostatic approximation and the rigid-lid formulation. Vertical mixing of momentum, temperature and salinity is computed according to the TKE closure model developed by Blanke and Delecluse (1993), with enhanced turbulent viscosity in case of convective situations.
The model domain covers the North Atlantic basin from 20jS to 70jN and from 98.5jW to 20jE, with a horizontal resolution of 1/3 Â 1/3j cos(latitude) (Fig. 1). The vertical discretization is achieved on 42 geopotential levels, with a grid spacing that increases from 12 m at the surface to 200 m below 1500 m depth. The bathymetry is derived from a smoothing of the bottom topography prepared by Smith and Sandwell (1997). The Southern boundary at 20jS, the Northern boundary at 70jN and the Gibraltar Strait are closed but the model solution is relaxed toward climatology within buffer zones defined off Portugal, in the Norwegian Sea and along the Southern boundary to simulate the supply of Mediterranean Water and the exchange with the Arctic and South Atlantic basins. The choice of an eddy permitting resolution resulted from a balance between the need to represent the mesoscale turbulence fairly well and the computer resources needed to assimilate the variability signal observed by the satellites.
The thermodynamic variables in the model are initialized using a Northern hemisphere winter season state from the climatology compiled by Reynaud et al. (1998). The atmospheric forcing fields of heat, freshwater and momentum are derived from the reanalyses of the ECMWF 6-h forecasts of the 1979 -1993 period. In addition, the model surface temperature is relaxed toward weekly Reynolds SST data in order to maintain interactivity between the ocean model and the atmosphere (Barnier et al., 1995). However, the relaxation term is removed during the assimilation experiments in order to avoid spurious competition with the assimilation of SST data. This also allows an easier diagnostic of the actual impact of the assimilation.
The model was spun up for 8 years, starting from rest in 1985. Several tests were performed to determine the most appropriate spin-up length, taking into account the existence of a slow model drift and the need to compute realistic modes of variability for initialization of the assimilation experiments. We realized that, by comparison with the Reynaud seasonal climatology, too long numerical integrations tend to deteriorate the three-dimensional distribution of temperature and salinity; on the other hand too short integrations leave some dynamical features unadjusted. A spin-up of 8 years was eventually found as a good trade-off between dynamical consistency and fit to climatology.
The integration was pursued until December 1993 to produce a reference run available for further comparison with the assimilation experiment. The barotropic stream function averaged between 1989 and 1993 is illustrated in Fig. 1. Some well-known elements of the North Atlantic circulation are easily recognized, such as the subtropical gyre, the intensification of western boundary currents along the American coast, and the subpolar gyre in the Labrador Sea.
A more detailed examination of the currents, however, reveals a number of unexpected features: the NAC is not well defined; the intensity of the subpolar gyre is too strong; the Gulf Stream shows a spurious recirculation southward with a permanent eddy at 35jN, and a bifurcation of the flow near the Hateras Cape advects warm surface water northward on the continental shelf (see also Fig. 8).
The unsufficient resolution of the numerical model probably bears part of the responsibility in those unexpected features, though higher resolution configurations still exhibit similar deficiencies (Treguier et al., 1999). One can thus expect improvements in the representation of the currents by assimilating satellite data to constrain the flow field.
Sea-surface temperature and altimetric data sets
The time window chosen for the assimilation experiments extends from October 1992 to December 1993, i.e., a period during which both ECMWF reanalysis, SST and SLA data products are available simultaneously.
The SST data consist of composite AVHRR observations, gathered and processed within the NASA Pathfinder project. In addition, these products have been quality controlled, compared with Reynolds analyses and gridded on maps at 1/4j resolution every 10 days. In principle, AVHRR observations cover the entire model domain, but the presence of clouds, to some extent, restricts the practical availability of SST data at high latitudes and during the winter season. An accuracy of 0.5 jC on the gridded SST products has been assumed, which corresponds to the sum of errors in acquisition procedures, processing algorithms and space -time interpolation. Before the assimilation experiments, SST data have been used to get a prior evaluation of the reference run during 1993. Fig. 2 illustrates the bias between the model SST and the satellite observations mapped on the model grid. In spite of the flux correction applied on the surface temperature, a number of strong anomalies can be pointed out: the subpolar gyre in the model is too warm of f 2 to 4 jC in average near the surface; a large misfit also occurs near the African coast, reflecting a weakness in the representation of the African Upwelling off Senegal; negative values of À 2 to À 4 jC are observed in Southern sector of the Gulf Stream, while an excess of 4 to 7 jC in the model SST develops along the American coast north of 40jN. The latter problem is partly linked to a systematic error in the Gulf Stream pathway in the model.
The altimetric data sets considered for the assimilation consist of 10-day maps of SSH at 1/4j resolution, obtained as the sum of a time-invariant dynamic topography, and SLA data from the AVISO project. The mean SSH added to SLA maps is derived from inverse modelling (at 1j resolution) of the Atlantic circulation (Le Grand, 1998), using the Reynaud climatology as a dynamical constraint. The gridded SLA at 1/4j resolution are calculated by combining Topex/Poseidon and ERS altimeter data, with an interpolation between tracks according to the method described by Le Traon et al. (1998). The accuracy of the SLA products can be reduced to f 3 cm RMS in average because of the very efficient correction of orbit errors on along-track data. However, due to the lack of accurate tidal corrections, the SLA data will not be assimilated in several coastal zones like the Georges Bank or the Great Banks of Newfoundland. A bulk error of 5 cm RMS on the total SSH data has been prescribed in the assimilation system, taking into account the cumulated effect of measurements, inverse estimates, and mapping procedures.
Using a similar procedure as for SST, the bias between the SSH of the reference simulation and the data is diagnosed in order to evaluate systematic model errors in terms of surface topography (Fig. 3). The amplitude of the bias is maximum in the midlatitude regions. A dipole of negative (north) and positive (south) sea-level misfit is present on each side of the Gulf Stream, reflecting the lack of kinetic energy in the jet by comparison with the mean currents of the inverse solution. The positive misfit centered at (50jN; 30jW) denotes the weakness of the Gulf Stream extension eastward, and its consequences on the North Atlantic drift. In the Labrador Sea, a negative pattern, exceeding À 20 cm, confirms the excess of cyclonic circulation in the subpolar Gyre which was already pointed out in the mean Barotropic Stream Function (see Fig. 1). Finally, the positive anomalies extending zonally located at 35jN in the eastern part of the basin reflect the absence of Azores Current in the reference run.
As an example of synoptic data, Fig. 4 (top) represents the SSH field in the Gulf Stream region that will be used in the assimilation experiment on October 21, 1992. This picture illustrates the crucial role of the mean surface topography added to the observed SLA to derive the absolute SSH data, resulting in a well-defined frontal structure along the Gulf Stream. A comparison with the model simulation Fig. 3. Bias (cm) between the simulated model SSH and the SSH observations (constructed as a sum of a mean SSH and a sea level anomaly) during 1993. The spatial mean has been set to zero. Dark (light) grey values indicate higher (lower) sea level in the model than in the observation. for the same day (Fig. 4, middle) reveals the lack of intensity in the surface current, and the wrong positioning of the meanders and eddies. Additional diagnostics (not shown here) indicate that, in general, the simulations suffer from a deficit in SST and SSH variability which are explained by the lack of mesoscale activity in the model.
The observation error that we specify in the assimilation experiments discussed hereafter is not only related to the accuracy of the gridded SST and SSH data, but also to the ability of the model to represent the observed signal. In order to evaluate this representativeness error, we have examined the RMS difference between the original data on a 1/4j map and their projection onto the 1/3j model grid. The standard deviation of the signal lost during the upscaling process varies between 0.1 and 0.6 jC for the SST and between 0.5 and 5 cm for the SSH. During the assimilation experiment, this signal will be considered as an additional component of the observation error, added to the 0.5 jC error on the gridded SST products, or to the 5 cm error on the gridded SSH products.
The assimilation method
The assimilation scheme implemented in these experiments is sequential, implying that only observations from the past can influence the current estimate of the oceanic state. The backbone of the assimilation algorithm is derived from the Kalman filter theory (Gelb, 1974). The model trajectory is corrected intermittently through a sequence of assimilation cycles, taking into account the confidence in the model prediction and the accuracy of the observed quantities. The assimilation cycles must be long enough to accumulate a sufficient amount of observations and correct the model forecast accordingly. A 10-day cycle has been prescribed in the present experiments, which is the time period needed to achieve a global coverage of the SSH and SST from satellites and generate the gridded products described in Section 3.
The state vector and the estimation vector
The sequential correction due to the Kalman filter analysis provides a new estimate of the state vector which contains all the variables needed to restart the model. As the observed data only relate to a few variables of the model state, the assimilation scheme has to be multivariate, i.e., the whole state vector must be modified in a consistent manner in addition to the observed quantities themselves. Given the nature of the data available for assimilation in this study, a critical issue will be to correct the subsurface ocean properties from SST and SSH data only.
In the context of the statistical estimation theory, this can be achieved as long as the multivariate error statistics prescribed for the forecast state and the observations are reliable and robust. In practical assimilation problems, it is impossible to perfectly specify the error covariance between all state variables. The optimality of the Kalman filter is therefore restricted to the subset of the state vector with reliable statistical properties, which will be defined as the estimation vector. In order to update the rest of the state vector, simple physical constraints such as the geostrophic balance can be used to complete the statistical correction. This adjustment should minimize the occurrence of spurious perturbations associated to unbalanced increments during the next model initialization.
Formally, one needs to specify a partition of the model state vector w in order to distinguish the estimation vector x from the rest v, so that: Strictly speaking, the state vector of the OPA model described in Section 2 is made of the twodimensional barotropic stream function (BSF), and the three-dimensional arrays of the temperature (T), salinity (S), turbulent kinetic energy (TKE) and horizontal velocity components (U, V). In order to simplify the expression of the observation operator needed to compute the misfit between the model state and the data, the two-dimensional SSH fields is included as a part of the state vector in spite of the diagnostic nature of this variable in the rigid-lid approximation.
In order to define the estimation space, we proceed in two steps: first, we eliminate all state variables deeper that 1500 m because of the difficulty encountered in preliminary experiments to specify reliable and robust error covariances between these variables and the surface. Second, for the same reason, we exclude the velocity, the associated barotropic stream function, and the turbulent kinetic energy from the estimation vector: the chaotic nature of the turbulent kinetic energy makes it difficult to calculate statistically significant covariances between TKE and the other state variables. The estimation vector is thus restricted to the sea surface elevation, the temperature and the salinity in the upper 1500 m.
Consistently with this partition, each assimilation cycle includes an analysis step to correct the estimation vector statistically using all pertinent observations of the system, an adjustment step to re-initialize the model state vector in a dynamically consistent manner, and a forecast operation to predict the ocean trajectory up to the next analysis time. These three steps are described hereafter.
The analysis step
The statistical method used to update the estimation vector using the satellite observations is derived from the Singular Evolutive Extended Kalman (SEEK) filter, which is a reduced-order assimilation scheme described in several earlier publications (e.g., Pham et al., 1998;Verron et al., 1999;Brasseur et al., 1999).
In spite of a reduced dimension of the estimation space compared to the model state space, a full Kalman filter still requires computational resources exceeding presently available computers, and a further reduction of the size of the estimation problem is needed. The SEEK filter has been developed to this aim, based on a reduced rank representation of the error covariance matrix associated to the estimation state.
The reduced order Kalman gain
The reduced-rank approximation of the background error covariance matrix at time t i can be written as: where S i f is the reduction operator (of dimension n  r) related to the r modes {S i f } k defining the error subspace. Using conventional notations (Ide et al., 1997), the analysis step of the SEEK filter takes the form in which x i f is the forecast estimation vector (of dimension n) obtained by model integration up to time t i ; x i a is the estimation vector after the analysis step, and y i is the vector of observed quantities of dimension p (in our case study, gridded maps of SST and SSH). The gain matrix K can be expressed in terms of the simplification operator S and the observation operator H (dimension p  n) relating the observations to the prediction: where R is the observation error covariance matrix.
Eq. (4) shows that the size of the inversion problem is determined by the error subspace dimension (as long as the observation error is parameterized with a diagonal matrix), while the original Kalman gain requires an inversion in the observation space. As the number of observations is usually much larger than the rank of the error subspace used in practice, the inversion step of the SEEK algorithm is getting much cheaper than the corresponding computation of the original Kalman gain. Finally, the scheme evaluates the error covariance of the analysis and updates the error subspace accordingly:
The local approximation
The estimation of small correlations associated with remote observations is a well-known difficulty of reduced-order Kalman and ensemble methods (Houtekamer and Mitchell, 1998). A further simplification of the SEEK analysis scheme is thus introduced (Testut, 2000;Brankart et al., 2003), enforcing to zero the error covariances between distant variables. This algorithmic improvement will prevent the spurious influence of, say, equatorial data at high latitudes through large-scale signatures in the EOFs.
In practice, this is implemented by assuming that distant observations have negligible influence on the analysis. The global system is split into subsystems for which a traditional analysis is computed: only data points located within a specified region around the subsystem will contribute to the Kalman gain.
In practice, each subsystem includes 2 Â 2 horizontal grid points (Â 43 vertical levels), and the associated regions of influence extend over 14 Â 14 grid points, setting an upper bound to the correlation scales of about 200 km. Note that we did not observe any discontinuity of the gain between adjacent subsystems because the size of the subdomains was taken large enough to overlap each other in a way that two neighboring subdomains use almost the same local data set.
The local gain is an approximation, but it makes sense since only data points located in the ''neighborhood'' of a model grid point should have a significant impact on the analysis for that grid point. Besides, the regular distribution of the gridded observations (at 1/4j) on the model domain always provides at least a few data points within each region of influence (if there were no data available inside the region of influence, no correction would be applied). Further, we have observed that this also improves the analysis because the dimension of the error subspace, relative to the number of state variables in a particular subsystem, increases and therefore spans a larger part of the estimation space.
The error subspace initialization
In most earlier implementations of SEEK, the dominant EOFs of the system's variability were used to initialize the background error covariance (e.g., Verron et al., 1999). The initialization of the error subspace could also be expressed in terms of singular vectors, breeding vectors or differences between slightly different model forecasts. A procedure along these lines was actually tested with some success by considering the divergence between two model forecasts obtained with and without SST relaxation (Testut, 2000).
In the present work, however, the computation of EOFs was found easy and robust enough to estimate relevant error statistics at initial time. We thus performed an EOF analysis of multivariate model dumps sampled every 10 days from a prior simulation of the 1990-1993 period. In each subsystem defined above, the local leading modes of the spectrum were retained out of a series of 154 realizations, preserving in this way the local dominant features of SSH and SST covariance simulated by the model (Testut, 2000;Penduff et al., 2003). In practice, local EOFs are computed on each subdomain individually with only the subset of multivariate model variables available on the subdomain grid. By doing so, we observed that the model variability can be represented with only a few local EOFs (we used four local modes here), while a much larger number of EOFs covering the basin scale would be required to achieve an equivalent level of explained variance. The effective rank of the error covariance matrix is then considerably increased. The error subspace spanned by these modes specify how to spread the information from the observed quantities to the whole estimation vector (e.g., it specifies how SST and SSH data are correlated to the thermohaline properties in the upper 1500 m of the water column).
The adjustment step
The analysis step described above updates the estimation vector only. Before starting a new forecast, the remaining part of the state vector v should in principle be adjusted to the statistical update defined in the estimation space. This includes the temperature and salinity variables under 1500 m, the turbulent kinetic energy, the horizontal velocity components and the barotropic stream function.
Without the explicit use of in situ observations of the deep ocean properties, one can hardly expect an effective correction of the T/S field below 1500 m. In practice, our assimilation scheme will preserve the evolution of the deep water mass properties as they are computed by the model itself. A moderate smoothing of the correction is simply performed to avoid an abrupt transition between the corrected variables above 1500 m depth, and those uncorrected below.
Concerning the turbulent kinetic energy (TKE), we re-diagnose its distribution in space from the other analysed state variables, in order to restore an approximate balance in the production/destruction terms of the closure scheme (Blanke and Delecluse, 1993). Conversely, there is no explicit update of the velocity field from the previous forecast, but a dynamical adjustment to the thermohaline update is achieved by the model within a few time steps (about 1 day after restart). New solutions are currently examined to improve the dynamical balance of the state vector before restart by including the velocity components in the estimation vector or by enforcing geostrophic equilibrium.
A few additional details of the restart procedure concern the following items: -the time-integration in OPA is based on a leap-frog scheme, requiring two successive time-steps to restart a model run after the analysis step; as only one analysed state can be calculated every assimilation cycle, a second ''restart'' state is generated by performing a Euler time step first before the next model forecast; -due to the nonlinearities of the model physics and to the existence of a hydrostatic constraint in the model formulation, the hydrostatic balance of the analysed state is not guaranteed; a check of the water column stability is thus performed after the analysis, and an adjustment based on enhanced vertical diffusion coefficients takes place if needed; -in buffer zones, the correction has been switched off to avoid conflicting interplay between the assimilation updates and the Newtonian relaxation terms.
The forecast step
The analysis and adjustment steps at time t i supply a new state vector which is used as initial conditions for a new model forecast up to time t i + 1 : In addition to the model state, a property of sequential filters belonging to the Kalman family is the propagation of the error covariance from one analysis step to the next using the model dynamics and a statistical description of the model error. This error propagation is needed to perpetuate the optimality of the estimation process with time. Introducing the low rank approximation into the error covariance equation and considering the linear tangent model MV, one gets: Two major hurdles arise to explicitly resolve this equation in the assimilation algorithm. First, the evolution of the error covariance with the model equations remains computationally expensive even with the reduced-rank approximation, requiring r model integrations in addition to the central forecast (Eq. 8). Second, one has to specify the explicit structure of the systematic error Q i ; however, a detailed knowledge of the nature of the model error is rarely available, and the impact of poorly specified statistics may be dramatic on the estimation process.
Instead of this formulation, we investigate in this paper a shortcut to Eq. (9), assuming that the forecast error covariance matrix can be written as: where D i 1/2 is a diagonal amplification matrix. The role of this simple parameterization is to simulate the cumulated effects of the model error and the dynamical growth of the pre-existing error modes, with the interesting property to preserve the rank of the error covariance matrix. Despite its simplicity, such idealization does not imply that the model is assumed perfect.
In the general case, the diagonal elements of D i should be tuned individually to achieve the same error variance propagation as what could be obtained from Eq. (9). In this first attempt, we have adopted a further simplification by assuming for this matrix a blockdiagonal structure which corresponds to the partitioning of the system into the subsystems introduced for the analysis step. A unique amplification factor is thus associated to each subsystem, the value of which is prescribed a priori or evaluated using the adaptive scheme described hereafter.
In Brasseur et al. (1999), it was shown that an adaptive mechanism can efficiently update the error subspace of the SEEK filter using the information left in the innovation vector after each analysis step (i.e., the residual innovation vector). A similar idea, also used by Dee (1995), has been implemented in the present study to determine the amplification matrix D i using the same source of information.
The baseline of the adaptive algorithm is to enforce the consistency between the error variances predicted by the filter and the 'observed' variance captured by the innovation vector. To achieve this goal, we consider the classical Eq. (11) from the optimal linear estimation in which we introduce Eq. (10). E denotes the mathematical expectation operator. Taking the diagonal part and re-ordering, we obtain where D v i is a diagonal matrix containing the innovation variance v i . D v i is estimated from the sequence of the last innovation vectors weighted with an exponential decreasing towards the past. In practice, the esti- mation of v i is done sequentially at each step by evaluating: where v * is the square of the current innovation (Testut, 2000;Brankart et al., 2003). A value of 0.2 is prescribed for h, which corresponds to an e-folding time of 50 days for 10-day assimilation cycles. Eq. (12) expressed that the background error variance should in average be equal to the innovation variance minus the observation error variance. Then, a procedure is set up, which determines adaptively the D i diagonal matrix (i.e., an amplification factor on each subdomain) in order to approximately verify the balance expressed by this equation. Of course, in Eq. (12), we have more equations than unknowns. So we need to use a criterion to find the ''best'' solution.
Assuming that D H f is the left hand side of Eq. (12) and D est is the right hand side, we adjust the amplification factor to minimize: on each subsystem. This formulation is minimum if D H f is equal to D est , and increase if strong ratio exist between them. Despite its simplicity, this method is useful to control the evolution of the error variance during the assimilation sequence and preserve the statistical consistence of the scheme.
The assimilation experiments
In this section, we discuss the results of assimilation experiments using the data sets described in Fig. 6. Bias (cm) between the SSH from the assimilation experiment (10-day forecasts) and the SSH observations during 1993. The spatial mean has been set to zero. Dark (light) grey values indicate higher (lower) sea level in the model than in the observation. Section 3 during the period from October 1992 to December 1993. The experiments have been implemented with the help of the SESAM software (Testut et al., 2001), which is a modular package developed to manage the various stages of the assimilation chain in a flexible manner.
The assimilation system needs a few cycles to adjust the initial error statistics (Testut, 2000), and the first 3 months in 1992 must be regarded as the ''spin-up'' of the experiment, while the complete annual cycle of 1993 can be considered as the adequate time interval for computing the assimilation diagnostics. In addition to assessing the global solution, we will focus some of these diagnostics on the Gulf Stream region because of its critical role on the dynamics of the whole North Atlantic basin.
The objective assessment of the system is not trivial because the data with a sufficient space -time coverage are already used by the assimilation process, while only a few independent data are available for verification. The methodology to evaluate the assimilation experiments will therefore rely on three different metrics: (i) the computation of RMS misfits between the SST and SSH data fields and their equivalent estimates from the assimilation sequence (in spite of the fact that these statistics cannot be considered as an objective measure of the system's performance); (ii) the assessment of unobserved quantities (such as large-scale currents) by comparison with our prior knowledge of the ocean circulation; (iii) and, the validation with fully independent data sets (e.g., in situ hydrographic profiles).
Comparison to the satellite data
As a first illustration, the SSH analysis on October 21, 1992 (Fig. 4, bottom) allows a detailed inspection of how one specific analysis modifies the surface topography in the Gulf Stream region. The frontal system associated to the Gulf Stream has been corrected in a positive way with respect to the first guess shown in Fig. 4 (middle). The mean position of the jet is more consistent with the SSH map in Fig. 4 (top) after assimilation, and the magnitude of its meridional slope is more realistic. The observed mesoscale activity identified by the presence of eddies along the jet is also better represented and located in the analysis.
An evaluation of the averaged behaviour of the assimilation system during 1993 is given by Table 1, which represent the RMS misfits between the satellite data and the model estimates (in cm for SSH and jC for SST) in the free run, the 10-day forecasts and the analysed states of the assimilation sequence.
In the North Atlantic basin, a systematic reduction of the RMS misfit is observed on the SSH, dropping from 15 cm in the free run to 4.5 cm in the analyses, i.e., slightly lower than the standard deviation of the observation error (5 cm). Regarding SST, the RMS misfit drops from 1.41 jC in the free run to 0.74 jC in the analysis. Concerning the 10-day forecasts, the RMS misfits with the data are larger (9.8 cm and 1.14 jC, respectively, for SSH and SST), but they remain well below the corresponding figures of the free run. The model is thus able to properly ''ingest'' the observed information at the analysis time, and to propagate this information dynamically up to the next assimilation cycle.
In addition to the model-data misfits, Table 1 provides the average levels of forecast and analysis errors predicted by the filter in the assimilation experiment. Those errors are systematically smaller than the corresponding misfits to the data, suggesting that some noise in the observations has been filtered out by the scheme. Similar statistics have also been computed in the Gulf Stream area, showing the same tendencies but with generally higher figures as a result of a more intense oceanic variability in that region.
In optimal linear estimation, the error statistics should verify (Dee, 1995): indicating that the analysis/observation misfits should be smaller than the observation errors. Table 1 shows that these conditions are not satisfied everywhere for SST and for SSH. The lack of consistency between the estimated errors and the innovations can be explained by inadequate values of the observation error variances (including representativeness errors) which, in the Gulf Stream region, are probably underestimated. The estimation of the error statistics could therefore still be improved, for instance by means of a more complex adaptive algorithm.
In order to identify the nature of the signal which has effectively been modified by the assimilation, we show in Figs. 5 and 6 the bias between the data and the 10-day forecasts of all assimilation cycles in Fig. 10. RMS misfit to Reynaud climatology for temperature in jC (left column) and salinity in psu (right column) in the Gulf Stream region down to 1600 m depth averaged over 1993. The solid curve indicates the free run misfit while the dashed curve shows the 10-day forecast misfit. 1993. A significant reduction of the bias can be observed for both SSH and SST by comparing with the same quantity computed from the free run (Figs. 2 and 3).
In the Gulf Stream, particularly, the assimilation leads to an improved agreement with the data. Colder surface waters are now present north of 40jN, as a result of a better positioning of the jet, which separates cold waters of subpolar origin and warm waters from the subtropical gyre. The new slope across the jet is apparently strengthened, and thus more consistent with what is expected in reality. The extension of the Gulf Stream has been modified too, and we observe a reduction of the bias in the Eastern North Atlantic accordingly. In the central North Atlantic, a significant bias was present at 35jN in Fig. 3 because of a too weak Azores Current in the free run. This feature has completely been removed by the assimilation (Fig. 6), and this can be partly attributed to the presence of this feature in the mean surface topography added to the SLA data.
In the low latitude region, the 10-day forecast bias is generally small, and the signature of a too weak upwelling along the African coast is less visible. The high latitudes are characterised by some improvement in the SSH representation. However, the signature of an important SST bias remains in the Labrador Sea, and more generally in the subpolar gyre. This can be explained by the error in the forcing fields which dominates in that region, and significantly affects the forecast, which is performed without SST relaxation in the assimilation runs.
The assimilation of satellite data is also useful to improve the variability in the regions where the horizontal grid resolution is not sufficient to properly simulate the mesoscale turbulence. This is illustrated by Fig. 7, showing the distribution of RMS difference between the model runs (reference and assimilation) and the data. As expected, the maximum of the misfit amplitude is associated to the Gulf Stream path and its extension toward the Eastern North Atlantic. However, the misfits are significantly reduced in the assimilation run, suggesting a better consistency with the variability of the surface properties themselves. One can also notice that the well-marked SSH signature in the reference run characterising the absence Fig. 13. Distribution of the amplification factor l in the adaptative assimilation experiment averaged over 1993. of zonal extension of the Azores current at 35jN, has been smoothed out completely in the assimilation.
These statistical results suggest that the assimilation system has the capability to correct the major model failures diagnosed from the observation of the surface variables. In addition, these corrections are sufficiently robust to persist during a 10-day forecast and impact the following analysis.
The gulf stream circulation
Another critical issue is to verify the correct extrapolation of the assimilated information onto unobserved variables, such as large-scale surface currents. The velocity field from the 10-day forecasts is an interesting quantity to diagnose if the correction of the state vector is sufficiently robust to permit the adjustment of the dynamics. Note that the new current structure does not directly result from the analysis itself because in this experiment the velocity variables (U, V) are not included in the estimation space.
A focus on the Gulf Stream region (Fig. 8) illustrates the positive impact of SST and SSH data on horizontal currents at 50 m depth and their associated transports. The assimilation is able to improve the surface velocity and modify its direction efficiently. By comparison with the free simulation, the Northern stream along the American coast and the permanent eddy near Cape Hatteras have been removed in the assimilation run. One can also notice a better organized North Atlantic Current at 45jN, 45jW. In spite of these improvements, however, the westward extension of the Gulf Stream is still too ''viscous'' with respect to what is expected in reality.
A vertical section through the Gulf Stream at 72jW (Fig. 9) demonstrates that the assimilation of surface data consistently modifies the three-dimensional structure of the flow. The zonal velocity has been intensified between the surface and 2000 m depth, showing a well identified jet located at a more realistic latitude. The maximum surface flow of the Gulf Stream occurs now at 35-36jN and, at more than 80 cm s À 1 , in agreement with expected values.
Validation with hydrographic data
The only way to objectively validate the impact of assimilation is to compare with independent data, i.e., data which have not been used at any stage of the estimation process.
For this purpose, the model tracer fields (T, S) of the assimilation and the reference runs have been compared to the Reynaud climatology (Fig. 10). The climatological equivalent of the temperature and salinity fields have been computed by averaging the experiment in space and time, in order to compare the same features as those described by the Reynaud climatology. The plots represent RMS misfits on the vertical between the surface and 1600 m depth in the Gulf Stream region. This comparison is useful to examine how the assimilation propagates the information from the surface to the ocean's interior and also from observed to unobserved variables of the estimation space (e.g., salinity).
By comparison with the free run, the climatology of the 10-day forecasts has been systematically improved with respect to both temperature and salinity distributions. This improvement is fairly consistent throughout the whole water column where the analysis correction is applied (i.e., the upper 1500 m depth). The mechanism responsible for the modification of the thermohaline properties is at first order related to the vertical structure of the multivariate error modes linking the many variables of the estimation space. Fig. 15. Distribution of the forecast error for the sea surface height (cm) averaged over 1993 in the assimilation experiment using a constant amplification factor of l = 5.
A final assessment of the assimilation performances has been produced using an ensemble of 2500 XBT profiles collected in the North Atlantic during the period of the experiment (Fig. 11). These data have been extracted from a large historical data base gathered by the SISMER oceanographic data center (http://www.ifremer.fr/sismer/). We show in Fig. 12 the RMS misfits between the XBT temperature profiles and their model counterparts interpolated at the same times and locations during 1993.
Again, it is fortunate to notice the positive impact of the assimilation on the thermal field between the surface and 700 m depth. The analysis profile is better that the free run by almost 1 jC near the surface, and 0.7 jC down to 500 m depth. The profile calculated from the 10-day forecasts is slightly worse, remaining however systematically better than the free run. The reduction of the misfit results from a smaller bias and a better representation of the variability in the assimilation.
The vertical structure of the misfits exhibit a local maximum at 100 m depth, which may be symptomatic of the difficulty to simulate the mixed layer depth correctly. This is a common feature of both the free run and the assimilation experiment (see also Brankart et al., 2003;Penduff et al., 2003). It is worth remembering here that the relaxation on SST is active on the free run and inactive in the assimilation experiment, indicating that the decrease of the misfits just below the surface is the consequence of different causes.
A minimum RMS misfit is observed between 200 and 300 m, i.e., at a depth not directly affected by the surface fluxes, but still under the influence of the mesoscale activity. The comparison with the XBT data demonstrates a positive impact of the altimetric data, which play the dominant role on the Fig. 16. Distribution of the forecast error for the sea surface height (cm) averaged over 1993 in the assimilation experiment using a constant amplification factor of l = 1.25. representation of eddies in the assimilation experiment.
Forecast error statistics
The possibility to diagnose error statistics as a byproduct of assimilation algorithms is one of the several motivations to develop advanced methodologies. Error estimates are useful per se to assign confidence levels to the oceanic field estimates, but also to verify the consistency of the prior statistical hypothesis needed by the methods.
As explained in Section 4, the parameterization of the forecast error is based on the idea of adaptive tuning of the forecast error. Informations contained in the innovation vector are used every assimilation cycle to compute a geographic distribution of the amplification factor according to Eq. (12). The results averaged during 1993 are illustrated by Fig. 13. The maximum amplification of the forecast error at a 10day range takes place in the region of high mesoscale variability extending on both sides of the Gulf Stream path. This is a manifestation of the limited skill that one can expect from eddy permitting models to predict the synoptic evolution of mesoscale feature. By contrast, the amplification factor is smaller in the regions where the growth rates of the forecast error are representative of a slower dynamics, such as in the tropical regions.
The associated distribution of the forecast error on SSH is shown in Fig. 14. As expected, the maximum of the forecast error is found along the Gulf Stream path between Cape Hatteras and 40jN, where the forecast error standard deviation exceeds 20 cm in some places. Local maxima can also be detected along the North Atlantic Current extension and the Azores Current at 35jN. This picture of the forecast error looks fairly realistic, and can be considered as relevant of the first guess error statistics in asymptotic regime.
Two additional assimilation experiments have been conducted with prescribed amplification factors homogeneous in space, to test the sensitivity of the scheme to the adaptive parameterization and to assess the impact of these choices on the forecast error patterns. Figs. 15 and 16 show the forecast error distribution on SSH obtained with D = lI for, respectively, l = 5 and l = 1.25. By comparison with the adaptive experiment, the averaged amplification factor corresponding to Fig. 13 was l = 2.7. The main effect of taking a fixed amplification factor is to flatten the forecast error over the basin and to disconnect its distribution from the dynamical regimes. At first glance, these picture are less relevant of the model error, which is believed to be higher in the regions of strong mesoscale activity than elsewhere. In addition, the general performance of the assimilation scheme evaluated with respect to independent data or prior knowledge are significantly worse (Testut, 2000), and deserve less attention.
Conclusion
Hindcast experiments assimilating sea-surface temperature and sea-surface height data during 1993 have been successfully conducted with an eddy permitting circulation model of the North Atlantic basin. The assimilation method is an adaptation of a reducedorder Kalman filter (SEEK filter) based on a local parameterization of the background error covariance and a mechanism to extract all pertinent informations from the innovation vector adaptively.
This study has demonstrated the positive impact of satellite data to reconstruct the variability of the upper ocean circulation in the North Atlantic. In general, a reduction of the RMS misfit with respect to the assimilated data has been obtained, reflecting both a positive impact of the assimilation on the model bias and a significant improvement of the model in terms of variability. In addition, the analysis procedure is shown to be efficient to propagate the information from the surface SSH and SST data towards unassimilated variables in the interior of the ocean. With respect to our prior knowledge of the circulation, the pattern of the mean currents between the surface and 2000 m depth has been corrected positively in many areas like in the Gulf Stream region.
The validation of the hindcast results with independent data (the Reynaud climatology and XBT profiles collected during 1993) objectively demonstrates that the combined use of the two data sets allows the thermohaline properties of the upper ocean to be improved almost everywhere between the surface and 700 m depth. The strongest improvements are located in regions where the mesoscale activity dominates the variability signal, like in the Gulf Stream extension.
In order to make the assimilation system stable and efficient during long integration periods, a simple adaptive scheme has been implemented to enforce the internal consistency between the forecast errors diagnosed by the filter and the statistics of the innovation sequence. This adaptive mechanism has been shown of critical importance to make these hindcast experiments successful. However, a single factor has been used to parameterize the amplification of the error in a given subsystem, irrespective of the model variable. A possible improvement could be to discriminate the amplification factor according to the model variables, as one can expect different error growth rates for the different physical quantities. Ensemble forecasts could be performed to examine this issue in more details.
In spite of these encouraging results, a number of implementation issues still form the subject of ongoing developments. The first limitation of the assimilation system used in this study concerns the numerical resolution of the model in the regions of strong mesoscale activity: an horizontal grid size of 1/ 3 at mid-latitude only permits the existence of mesoscale features in the solution, but such a resolution is still insufficient to resolve the underlying dynamics explicitly. The poor performances of the model predictions can be diagnosed from the forecast error distributions which exhibit quite high values along the Gulf Stream extension, and also from the too weak persistence score of the predictions at medium range (not discussed in the present paper). It will therefore be essential to reduce the model error as much as possible, for instance by increasing the horizontal resolution and by using the best possible atmospheric forcings.
Another current limitation is due to the use of gridded SSH and SST products instead of the original satellite measurements collected along tracks. The technical difficulties to remove this limitation have been addressed, and the benefit that can be drawn from the assimilation of original data without prior gridding has been evaluated in the context of academic models of the mesoscale ocean circulation. Other minor updates concern the implementation of a dynamical constraint in the adjustment operator to produce geostrophically balanced analysis states and thereby, reduce the drift of the model forecast after reinitialization.
Further investigations have been undertaken recently, which will focus on the complementarity between the data sets used in this study and in situ measurements from hydrographic profiles, drifting buoys or surface salinity fields from climatologies. A more sophisticated observing system is expected to provide a better control of a number of the mixedlayer properties, which are not sufficiently constrained by satellite observations only. | 12,454.4 | 2003-04-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Mid-to-far infrared tunable perfect absorption by a sub - \lambda/100 nanofilm in a fractal phasor resonant cavity
Integrating an absorbing thin film into a resonant cavity is the most practical way to achieve perfect absorption of light at a selected wavelength in the mid-to-far infrared, as required to target blackbody radiation or molecular fingerprints. The cavity is designed to resonate and enable perfect absorption in the film at the chosen wavelength \lambda. However, in current state-of-the-art designs, a still large absorbing film thickness (\lambda/50) is needed and tuning the perfect absorption wavelength over a broad range requires changing the cavity materials. Here, we introduce a new resonant cavity concept to achieve perfect absorption of infrared light in much thinner and thus really nanoscale films, with a broad wavelength tunability using a single set of cavity materials. It requires a nanofilm with giant refractive index and small extinction coefficient (found in emerging semi-metals, semi-conductors and topological insulators) backed by a transparent spacer and a metal mirror. The nanofilm acts both as absorber and multiple reflector for the internal cavity waves, which after escaping follow a fractal phasor trajectory. This enables a totally destructive optical interference for a nanofilm thickness more than 2 orders of magnitude smaller than \lambda. With this remarkable effect, we demonstrate angle-insensitive perfect absorption in sub - \lambda/100 bismuth nanofilms, at a wavelength tunable from 3 to 20 \mu m.
Perfect absorbers tuned to absorb light at a selected wavelength in the mid-to-far infrared (typically, from 3 m to 20 m) are crucial for applications in the biomedical, environment, or security areas. 1 They are needed to target blackbody radiation or molecular fingerprints, as required for thermal recognition, 1 hyperspectral imaging, 2 stealth and cloaking, 3 thermal radiation spectrum shaping. 4 Besides allowing spectral selectivity, perfect absorbers must present a nanoscale thickness. When a perfect absorber acts as active medium in a photodetector, it must be nanoscale-thick to ease photocarrier extraction. 5 In general in any device, nanoscale-thick absorbers are also beneficial for the performance since it is easier to grow defect-free absorbing materials with good physico-chemical properties as thin layers than as thick ones. Another requirement for a perfect absorber to be fully efficient is angleinsensitivity: it must effectively absorb the light impinging from any angle of incidence.
The most advanced kind of nanoscale-thick infrared perfect absorbers is based on metasurfaces. [2][3][4][6][7][8][9][10][11][12][13] Metasurfaces show a broadly tunable optical response, however it is often intrinsically angle-dependent. Furthermore, their applicability is limited by their complex fabrication process combining deposition, lithography, etching, which is unpractical and costly. For this reason, efforts have been made for developing thin films with valuable optical properties by more accessible lithography-free deposition processes. 5,[14][15][16][17][18][19][20][21][22][23][24][25][26][27] To achieve nanoscale perfect absorption at a selected wavelength, the most practical approach consists in integrating an absorbing thin film into a simple resonant cavity. Perfect absorption in this film is achieved by destructive optical interference at the cavity resonance wavelength, which can be tuned by changing the component materials and/or tailoring the cavity structure, for instance the absorbing film thickness. The current state-of-the-art resonant cavity design, in terms of structure simplicity and absorbing film thickness needed to achieve perfect absorption at a given wavelength, consists of a strongly absorbing film (refractive index n extinction coefficient k) backed by a finite optical conductivity mirror (finite n < k). The cavity resonance (and thus perfect absorption) is achieved for film thicknesses t down to 1/60 of the wavelength (i.e. /60). 19,21 This enables a much more compact design than for a standard Fabry-Pérot cavity, for which perfect absorption is achieved for t /16 using standard semiconductor absorbers (e.g. Si, Ge, InAs, InSb, PbS, HgCdTe). This state-of-theart cavity design has enabled angle-insensitive perfect absorption at selected infrared wavelengths between 5 m and 12 m, t being adjusted from 100 nm to 600 nm. [22][23][24][25] In other words, still a quite thick absorbing film is needed, especially if perfect absorption in the far infrared is targeted. Furthermore, finite infrared conductivity cannot be achieved with standard metals, which behave as near-perfect conductors at infrared wavelengths because of their high charge carrier density. As a solution, other materials, sometimes artificial (e.g. sapphire, AZO, heavily doped Si) are used, however none presents a finite optical conductivity over the whole infrared spectrum. Therefore, bringing perfect absorption to markedly different infrared wavelengths requires changing the nature of the mirror.
Summarizing, with this current state-of-the-art cavity design, achieving infrared perfect absorption requires a still quite thick absorbing film and tuning the perfect absorption wavelength over the mid-to-far infrared is a challenging task.
Here, to achieve perfect absorption in much thinner, really nanoscale films (with a thickness as small as /200) with a facile wavelength tuning over the mid-to-far infrared, a new resonant cavity concept is introduced and applied. It is based on a nanofilm with giant refractive index and small extinction coefficient, backed by a transparent spacer and a nearperfectly conducting mirror. The key element of such cavity is the nanofilm, which acts both as absorber and multiple reflector for the internal cavity waves over a broad wavelength range.
After escaping the cavity, these waves follow a fractal phasor trajectory that drives their interference with the wave directly reflected at the air/nanofilm interface. This enables angle-insensitive perfect absorption in films of spectacularly small thickness. A material with the required properties for the nanofilm can be found among the semi-metals, semiconductors and topological insulators of the p-block. The mirror and spacer can consist of any standard metal and low index transparent material, respectively. As a remarkable example, we demonstrate both theoretically and experimentally that a cavity consisting of a bismuth (Bi) nanofilm backed by an Al2O3 spacer and an Ag film enables angle-insensitive perfect absorption in a sub -/100 Bi nanofilm, at a wavelength tunable in a broad infrared region from 3 to 20 m by varying only the nanofilm and spacer thicknesses. For instance, perfect absorption can be achieved at = 4 m and 18m in a nanofilm with thickness of only 30 nm (~ /130) and 90 nm (~ /200), respectively. In sum, the new kind of cavity we introduce (called hereafter "fractal phasor resonant cavity") enables perfect absorption in much thinner films than with the current state-of-the-art cavities. Also, in contrast with them, there is no need to change the nature of the cavity materials to tune the perfect absorption wavelength over the whole midto-far infrared.
Prior to focusing on such novel fractal phasor resonant cavity, we explore the advantages of building a "boosted" Fabry-Pérot cavity based on an absorbing nanofilm with a giant refractive index n and small extinction coefficient k, a solution which has not been considered in the past. As an example, in Figure 1a it is shown the design of such a cavity where the nanofilm with t = 100 nm and a giant n = 10 is backed by a near-perfectly conducting mirror. Figure 1b shows the reflectance spectrum of this cavity, assuming k = 0.5. A resonance with zero reflectance, i.e. perfect absorption, occurs near = 5 m, i.e. for t ~ /50 and with 80% of the power absorbed in the film (20% lost to the mirror). As seen in Figure 1c, the wavelength of this resonance is independent of k, and it becomes weaker (non-perfect absorption) when k increases. This behavior is typical of a Fabry-Pérot interference mechanism, as well as the cavity signature in the phasor diagram ( Figure 1d). It consists of a nanofilm with giant refractive index n, small extinction coefficient k and thickness t backed by a near-perfectly conducting mirror. Here, as example, we take n = 10, t = 100 nm, and the mirror is made of Ag (with n and k from ref. 28). (b) Simulated reflectance spectrum of the cavity at normal incidence for k = 0.5 (black line) showing perfect absorption at the resonance wavelength 5 m (t /50), and reflectance loss due to absorption in the film (dashed grey line). (c) Color map of the simulated reflectance spectra for different values of k, showing that perfect absorption is not achieved for larger k. (d) Simplified representation of the wave propagation in the cavity and phasor diagrams at the resonance wavelength 5 m, for k = 0.5 and k = 1. The vector r0 accounts for the wave directly reflected at the air/film interface (first reflection) and s is the sum of the vectors accounting for the internal cavity waves escaping after multiple reflections (rj's). These vectors follow a straight line, typical of a Fabry-Pérot cavity. For k = 0.5, s cancels with r0, this totally destructive interference enables perfect absorption. This phasor diagram describes the interference between the wave directly reflected at the air/nanofilm interface (first reflection, r0) and the internal cavity waves escaping after multiple reflections (r1, r2, etc…) at the resonance wavelength ( 5 m). The vector accounting for r0 points toward the left, while the vectors accounting for the rj's point toward the right. For k = 0.5, the sum (s) of such vectors is large enough to cancel with r0 (totally destructive interference, i.e. perfect absorption). In contrast, for k = 1, the amplitude of the rj's is too small to enable a total cancelation (partially destructive interference, i.e. non-perfect absorption). Further details about the perfect absorption mechanism in such kind of cavity are given in Supporting Information S1. Because of the Fabry-Pérot type of such mechanism, the perfect absorption wavelength can be tuned by varying the film thickness. A variation of t up to 500 nm enables tuning the perfect absorption wavelength in the whole 3-20 m range (Supporting Information S2). Much larger thicknesses are needed if using as absorber a standard semiconductor with lower refractive index (Supporting Information S3).
In sum, this "boosted" Fabry-Pérot cavity rivals the current state-of-the-art cavities in terms of simplicity and film thickness needed to achieve perfect absorption at a given wavelength. It also makes much more practical the tuning of the perfect absorption wavelength over the whole mid-to-far infrared, because only controlling the film thickness is required with no need of changing the nature of the cavity materials. Yet, a drawback of this cavity is that the extinction coefficient of the absorbing film must be small enough to enable perfect absorption.
This condition is difficult to fulfil with existing materials in a broad wavelength range.
However, this restriction is overcome by an improved design that introduces a transparent spacer between the absorbing film and the mirror, 5 i.e. with the fractal phasor resonant cavity.
This design is described on an example in Figure 2a. In this example, a nanofilm with t = 40 nm, n = 10 and k = 1 is backed by a transparent spacer and a near-perfectly conducting mirror.
Such nanofilm thickness is sufficient to achieve perfect absorption at 5 m (Figure 2b), the same as with a 100 nm -nanofilm with lower k in a "boosted" Fabry-Pérot cavity.
Therefore, the fractal phasor resonant cavity not only enables to succesfully overcome the restriction on the k value: it also allows to strongly reduce the absorbing film thickness needed to achieve perfect absorption at a given wavelength. Furthermore, near 100% of the power is absorbed in the film (Figure 2b), in contrast with the "boosted" Fabry-Pérot cavity shown in Figure 1 where a significant amount of power is lost to the mirror. In sum, with the fractal phasor resonant cavity, perfect absorption occurs in a sub -/100 nanofilm (in this example, t /125), i.e. much thinner than with the current state-of-the-art cavity designs.
This spectacular result is allowed by the special optical interference mechanism of the cavity, which is described in the phasor diagram shown in Figure 2c. consists of a nanofilm with giant refractive index n, small extinction coefficient k and thickness t, backed by a transparent spacer and a near-perfectly conducting mirror. Here, as example, we take n = 10, k = 1, t = 40 nm, an Al2O3 spacer (with n = 1.65, k = 0), and an Ag mirror. (b) Simulated reflectance spectrum of the cavity at normal incidence (black line) showing perfect absorption at the resonance wavelength 5 m (t /125), and reflectance loss due to absorption in the film (dashed grey line). At the resonance wavelength, near 100% of the incident power is absorbed in the film. (c) Simplified representation of the wave propagation in the cavity and phasor diagram at the resonance wavelength 5 m. The vector r0 accounts for the first reflection and s is the sum of the vectors accounting for the internal cavity waves escaping after multiple reflections (rj's). These vectors rj's follow a fractal trajectory. This fractal trajectory yields a s vector that cancels with r0, enabling totally destructive interference, and thus perfect absorption in a particularly thin absorbing film.
As in any resonant cavity, the reflection properties are governed by the interference between the first reflection (r0) and the internal cavity waves escaping after multiple reflections (r1, r2, etc…). However, the giant n and small k of the absorbing nanofilm open a surprising path for these waves inside the cavity before they escape. The nanofilm acts as an efficient multiple reflector for such internal cavity waves (reflections at the nanofilm/air, nanofilm/spacer, spacer/nanofilm interfaces), which can jump many times between the nanofilm and spacer and make many trips in the cavity before escaping, while being weakly absorbed during each trip.
In the phasor diagram of the cavity, this translates into a fractal trajectory for the vectors accounting for the rj's. At the cavity resonance wavelength ( 5 m), the fractal branch has grown fully toward the right and s cancels with r0. Note that off-resonance (Supporting Information S4), such fractal branch is rotated and not fully grown, so that s does not cancel with r0. To the best of our knowledge, this is the first time that such phasor trajectory is reported for resonant cavities.
To bring such finding to the real world, materials fulfilling the above -defined criteria (giant n, small k) must be found. At such aim, one should pay a special attention to semi-metals, semiconductors and topological insulators of the p-block, which are becoming of increasing interest to the photonics and optoelectronic communities. 29 One single-element p-block material, the semi-metal bismuth (Bi), is a particularly interesting candidate as it presents a giant n and small k (8 < n < 10 and 1 < k < 2) over the whole 3 m to 20 m wavelength region, as shown in Figure 3a. 29,30 Other interesting candidates among p-block materials with simple compositions are the semi-metal Sb, 29 the topological insulator Bi2Te3, 31 To demonstrate such properties in an experiment, Bi/Al2O3/Ag cavities have been fabricated by depositing Bi films on Al2O3/Ag stacks grown on Si. As a reference, Bi/Ag Fabry-Pérot cavities have also been grown on Si. The materials have been grown by physical deposition, following procedures detailed elsewhere. 30,34 The aspect of the Bi target used for deposition and some of the fabricated samples are shown in Figure 4a (upper panel). The cross-section images of the grown Bi/Al2O3/Ag cavities (Figure 4a, lower panel) show a well-defined layered organization, with the continuous Bi film on top. Bi thicknesses less than 100 nm are observed. . These images confirm the layered structure, and the small Bi film thicknesses, of few tens of nm. (b) Reflectance spectra of the cavities with different Bi thicknesses, at near normal incidence (9º). Continuous lines represent the experimental data and dash-dotted lines represent the corresponding simulations. Perfect absorption is achieved at a wavelength tuned from 4.8 to 6.8 m by varying tBi from 27 to 54 nm ( /180 to /120). The spectrum of a Bi/Ag Fabry-Pérot cavity is also shown. For this cavity, perfect absorption is not achieved. (c) Reflectance spectra of the Bi/Al2O3/Ag cavity with tBi = 54 nm as a function of the angle of incidence. Near perfect absorption remains in a wide angular range, up to 45º. Figure 4b shows the reflectance spectra of the cavities, at near-normal incidence (9º). The cavities resonate at different wavelengths in the mid-infrared that depend on the type of cavity (Bi/Al2O3/Ag or Bi/Ag) and Bi thickness. The Bi/Ag cavity with tBi = 85 nm resonates at = 4.2 m (tBi ~ . However, it is far to enable perfect absorption, due to the too large extinction coefficient k of Bi. In contrast, as expected, perfect absorption is nearly achieved with the Bi/Al2O3/Ag cavities. For Bi films with tBi = 27 nm, 45 nm, and 54 nm, the cavity resonates at = 4.8 m (tBi ~ ), 6.2 m (tBi ~ ) and 6.8 m (tBi ~ ), respectively. In addition to their spectral selectivity, the Bi/Al2O3/Ag cavities show angleindependent optical properties. This is exemplified in the reflectance spectra of Figure 4c, which show that the perfect absorption is maintained for angles of incidence up to 45º. The observed trend is well reproduced by simulations (Supporting Information S6).
In conclusion, the theoretical and experimental results shown in this work demonstrate that and there is no need to change the nature of the cavity materials to tune the perfect absorption wavelength over the whole mid-to-far infrared: only the spacer and nanofilm thickness must be adjusted. In addition, the resonance wavelength is not sensitive to the nature of the mirror and the spacer (Supporting Information S7), making the use of cheaper or more convenient materials than Ag and Al2O3 possible. Note that, although we chose to use the semi-metal Bi as absorbing material because of its champion optical properties, other materials (semi-metals, semiconductors, or topological insulators) from the p-block could be used. Currently, there is a growing interest in fabricating p-block compounds with tunable optical properties. 31,[35][36][37][38] This quest may unveil new materials with an infrared refractive index even higher than that of Bi, enabling therefore perfect absorption in films with smaller thickness. Note that very high infrared refractive indexes can also be found at the vicinity of the phonon bands of many dielectrics. 39 With a fractal phasor resonant cavity, this effect could therefore enable perfect absorption in very thin films, however in a narrow spectral region. Besides these material aspects, perfect absorption in smaller dimensions may also be achieved with cavity designs exploring different fractal phasor trajectories. Therefore, the new fractal phasor resonant cavity concept we propose may open a pathway toward the facile lithography-free fabrication of few-nm infrared perfect absorbers with broad spectral tunability. In particular, if built from semi-metals or topological insulators, such ultrathin absorbers may enable an extremely strong coupling between infrared light and surface electronic states, ideal for optoelectronic interfacing in the considered biomedical, environment, and security applications.
Methods
The reflectance, power absorbed in the film, and phasor calculations were done within the transfer matrix formalism using the WVASE32 software (Woollam Co. Inc.) and a homemade code. The Bi/Ag and Bi/Al2O3/Ag cavities were grown by physical deposition techniques, especially pulsed laser deposition. The layer nominal thicknesses were 100 nm for Ag, 180 nm for Al2O3, and that of Bi was 80 nm for the Bi/Ag structure and it was varied in the range of a few tens of nm for the Bi/Al2O3/Ag structure. The thickness of the deposited Al2O3 was confirmed by ultraviolet-visible-near infrared ellipsometry measurements. The thicknesses of deposited Bi were extracted by fitting the measured infrared reflectance spectra.
To realize such fitting, the Bi dielectric function in the infrared was taken from ref. 30, where it was determined on Bi films grown with the same deposition technique. The infrared dielectric function of Al2O3 was determined by extrapolating the measured ultraviolet-visiblenear infrared data. The infrared dielectric function of Ag was taken from ref. 28. The sodetermined deposited Bi and Al2O3 thicknesses were in good agreement with those observed in the cross-section images obtained with a scanning electron microscope. The infrared reflectance spectra were measured at room temperature with an IFS 66 Bruker Fourier transform infrared spectrometer equipped with a DTGS detector. The incident beam, with a 5 mm diameter, was unpolarized and the angle of incidence was varied from 9º to 70º.
Author Contributions
J.T. proposed the concept and did the optical simulations. R.S. coordinated and supervised the experimental work. M.G.P. and N.R. fabricated the cavities and did the basic material characterization. R.P. and B.M. measured the infrared reflectance. J.T. wrote the paper with the advice of R.S. and the input of the other authors. S1. "Boosted" Fabry-Pérot cavity: more details about the perfect absorption mechanism Fig. S1. This figure refers to the "boosted" Fabry-Pérot resonant cavity shown in figure 1: film/near-perfectly conducting mirror, with n = 10, k = 0.5, t = 100 nm for the film. (a) Schematic representation of wave propagation in the cavity at the perfect absorption wavelength, at normal incidence. The round-trip phase shift () of an internal cavity wave is exactly opposite to the reflection phase shift (-) at the film/mirror interface, 1/2. Therefore, each internal cavity wave escaping the cavity has a null phase shift with respect to the incident wave, and so does their sum (s). Thus, the phase of s is shifted from that of the wave directly reflected at the air/film interface 0/1 (r0). This enables the totally destructive interference between s and r0. (b) Calculated spectra of the amplitude and phase of r0, s and r0+s, showing the perfect absorption at 5 m. (c) Calculated spectra of the amplitude and phase of the reflection coefficients at the interfaces between the different media (air = 0, film = 1, mirror = 2), and of the round-trip contribution. All the reflection coefficients have a 0 phase, except r01 (which equals to r0) that has a - phase and r23. r23 is slightly different from , because the mirror is non-perfectly conducting. This slightly different phase enables perfect absorption to occur for a film thickness slightly smaller than expected from the Fabry-Pérot formula (/50 < /4n, i.e. /40 with n = 10).
S3. "Boosted" Fabry-Pérot cavity vs standard Fabry-Pérot cavity: tuning of the perfect absorption wavelength in the mid-to-far infrared Fig. S3. This figure refers to the Fabry-Pérot resonant cavity shown in figure 1: film/near-perfectly conducting mirror, but with adjustable film thickness t, refractive index n, and extinction coefficient k for the film. (a) Color maps showing the simulated reflectance spectra of the cavity at normal incidence as a function of t (from 0 to 1000 nm), n (4 or 10) and k (0, 0.5 or 1). The variation in t needed to tune the resonance wavelength in the whole S4. Fractal phasor resonant cavity: phasor at different wavelengths Fig. S4. This figure refers to the fractal phasor resonant cavity shown in figure 2: film/transparent spacer/nearperfectly conducting mirror, with n = 10, k =1, t = 40 nm for the film, a 120 nm -thick spacer and an Ag mirror. (a) Schematic representation of wave propagation in the cavity at normal incidence. The vector r0 accounts for the first reflection and s is the sum of the vectors accounting for the internal cavity waves escaping after multiple reflections (rj's). (b) Simulated reflectance spectrum of the cavity at normal incidence. (c) Phasor diagram of the cavity, at the different wavelengths marked with the same color in (b). The rj's follow a fractal trajectory that depends on . At the perfect absorption wavelength ( 5 m), the fractal branch is fully grown and yields a s vector that cancels with r0. At non-resonant wavelengths, the fractal branch is rotated and not fully grown, so that s does not cancel with r0. | 5,523 | 2018-10-19T00:00:00.000 | [
"Physics"
] |
Polarimetric Evidence of the First White Dwarf Pulsar : The Binary System AR Scorpii
The binary star AR Scorpii was recently discovered to exhibit high amplitude coherent variability across the electromagnetic spectrum (ultraviolet to radio) at two closely spaced ∼2 min periods, attributed to the spin period of a white dwarf and the beat period. There is strong evidence (low X-ray luminosity, lack of flickering and absense of broad emission lines) that AR Sco is a detached non-accreting system whose luminosity is dominated by the spin-down power of a white dwarf, due to magnetohydrodynamical (MHD) interactions with its M5 companion. Optical polarimetry has revealed highly pulsed linear polarization on the same periods, reaching a maximum of 40%, consistent with a pulsar-like dipole, with the Stokes Q and U variations reminiscent of the Crab pulsar. These observations, coupled with the spectral energy distribution (SED) which is dominated by non-thermal emission, characteristic of synchrotron emission, support the notion that a strongly magnetic (∼200 MG) white dwarf is behaving like a pulsar, whose magnetic field interacts with the secondary star’s photosphere and magnetosphere. Radio synchrotron emission is produced from the pumping action of the white dwarf’s magnetic field on coronal loops from the M-star companion, while emission at high frequencies (UV/optical/X-ray) comes from the particle wind, driven by large electric potential, again reminiscent of processes seen in neutron star pulsars.
Introduction
The close binary system AR Scorpii was recently discovered by Marsh et al. [1] to consist of a fast rotating (~2 min) white dwarf in a 3.6 h orbit with an M5 red dwarf.Furthermore, the system appears to be spinning down at a rate of Ṗ ∼ 4 × 10 −13 s s −1 , which is the dominant source of the system's luminosity.This extensive multi-wavelength study, from radio to X-ray wavelengths, showed that the system's SED is dominated by pulsed non-thermal emission, predominantly at the 118 s beat period of the system.The low X-ray luminosity and lack of evidence for mass transfer (no flickering or broad emission lines) argues strongly against any accretion in the system, although the secondary star is clearly subjected to irradiation from the white dwarf, judging from the strong photometric orbital modulation coupled with the radial velocity motion of the narrow emission lines from the inner face of the secondary.These comprise both optical lines (Balmer and He I) and UV lines (Si IV and He II).The discovery by Buckley et al. [2] that AR Sco is highly linearly polarized (up to 40%) and, more importantly, that the polarization is strongly modulated (up to 90% pulse fraction) on both the spin and beat periods, has strengthened the evidence that AR Sco behaves as a pulsar.The high level of polarization also led Buckley et al. [2] to conclude that the white dwarf is highly magnetic, with a Galaxies 2018, 6, 14; doi:10.3390/galaxies6010014www.mdpi.com/journal/galaxiesfield strength as high as 500 MG.Various interpretations have been proposed to explain the observed properties, including direct MHD interactions between the magnetic fields of both components [2], possibly producing bow shocks close to the surface of the M-star companion (e.g., [3,4]).
Luminosities
Marsh et al. [1] determined the distance to AR Sco of 116 ± 16 pc from fitting an M5 star template to the observed spectrum and assuming the M-star companion was close to filling its Roche lobe.This led to the derivation of the individual luminosities of the M5 and white dwarf components, which in total is ∼4 × 10 31 erg s −1 .This represents just ∼15% of the total system luminosity, dominated by the modulated non-thermal component, which varies from 0.6−3.6 × 10 32 erg s −1 .The spin-down power of the system, derived from Ṗ, can be calculated from the formula L ν = −4π 2 Iν s νs , where I is the moment of inertia and ν the spin frequency.The moment of intertia varies by over 5 orders of magnitude between a neutron star and a white dwarf, leading to spin-down powers of 1.1 × 10 28 and 1.5 × 10 33 erg s −1 , respectively.For the observed Ṗ, it is clear that the pulsed luminosity cannot be powered by a spinning down neutron star, since this falls short by about a factor of ∼2 ×10 4 in the required power.On the other hand, a spinning down white dwarf with the observed Ṗ is capable of explaining the total modulated power in the system.
In mass transferring white dwarf -red dwarf binaries (i.e., cataclysmic variables, or CVs), the dominant source of luminosity comes from accretion.The conclusion (e.g., [1,2]) that the white dwarf has a high magnetic field strength raised the possibility that AR Sco was an asynchronous magnetic CV, or intermediate polar, which are typically quite hard (kT ∼ 10-20 keV) and moderately luminous X-ray sources (∼10 31 -10 33 erg s −1 ).In contrast, AR Sco has an X-ray luminosity of 5 ×10 30 erg s −1 (from a Swift observation [1]), inconsistent with it being an intermediate polar.
In addition, both the optical spectra and the photometric variations are different from that expected for a mass transferring binary (i.e., a CV).Spectral lines in such systems are typically much broader, often multi-component in nature, and are kinematically traced to accreting gas close to the white dwarf, or within its Roche lobe, like in an accretion disk.This is in contrast to the narrow emission lines seen in AR Sco that come from close to the L1 point of the illuminated secondary star [1].There is also the tell-tale absence of high frequency (>1 Hz) flickering associated with accretion in cataclysmic variables, in both magnetic and non-magnetic systems.Therefore the conclusion reached by [1,2] is that AR Sco is a detached binary, with no signs of accretion or mass loss through L1, powered predominantly by the spin-down of the magnetic white dwarf.
Finally, the observed low X-ray luminosity is added evidence that AR Sco cannot be a neutron star binary, i.e., a low mass X-ray binary (LMXB).At a distance of only 116 pc, this would make it the nearest and least X-ray luminous LMXB, with an observed L X /L opt ratio of only ∼0.04, quite atypical of LMXBs, whose ratios are typically ∼100.
Polarization Discovery and Behaviour
High speed photopolarimetry of AR Sco was undertaken on two consecutive nights (14 & 15 March 2016) using the HIPPO polarimeter on the SAAO 1.9-m telescope.This instrument ( [5]) enables all four Stokes parameters to be derived simultaneously, at a sampling rate as high a 10 Hz, using photomultplier tubes.The rapid photon-counting sampling mitigates atmospheric effects and allows data to be binned on any suitable timescale, to suit the desired time resolution or signal to noise.The observations shown here ( [2]) were binned such that the intensity (I) is sampled at 1 s intervals, while the other Stokes parameters (Q, U & V) were binned to 10 s.
In Figure 1 we show the observations from the first night (14 March 2016), which show that the system was strongly linearly polarized, at levels reaching 40%, modulated predominantly on the harmonic of the spin and beat period, as determined from a period analysis [2].These two nights of observations covered differing orbital phase, namely φ ∼ 0.07 − 0.23 and 0.38 − 0.85, respectively, which is the reason for differences in the phase-folded (on the 117 s spin period) polarization curves, as shown in Figure 2 (bottom panels).These clearly show double-peaked polarized intensity variations, modulated with up to ∼90% pulse fraction (on 14 March).The linear polarization variations, plus the large swing in position angle (∆θ = 180 • ), is a result of the viewing aspect, where the dipole is nearly perpendicular to the rotation axis of the white dwarf.
The difference between the curves on the two nights is a consequence of the different orbital phases of the two observations, which were at an average of φ orb ∼ 0.15 and φ orb ∼ 0.64, respectively.Both the spin and beat modulations combine differently on the two nights, resulting in the variability of the waveforms for the phase-folded observations.Subsequent observations, spanning many orbital cycles, have confirmed the stability of the orbital modulation and hence the amplitudes and phases of the side-band frequencies (work in progress).In the top panel of Figure 2 we show the spin phase averaged Q and U data, together with the trajectories of their motion in the Q−U plane, which follow counter-clockwise loops, due to the changing magnetic field orientation.On the first night (φ orb ∼ 0.15), the main peak maps to the outer loop and the secondary peak maps to the small loop inside it.For the second night (φ orb ∼ 0.65), there is an apparent phase change and more complex polarized flux variations (see bottom panel), leading to a different trajectory in the Q−U plane, indicative of changes in the combined white dwarf-red dwarf magnetic field topolgy with respect to our line of sight over the two nights.
The Q and U variations in AR Sco are qualitatively similar to that seen in the optical polarimetry of the Crab pulsar [6], although for the Crab p and θ variations show somewhat different morphologies, with more abrupt θ "swings", possibly a consequence of the lower phase resolution of the AR Sco data.The polarized flux for the Crab shows a duty cycle of ∼30%, less that the minimum duty cycle estimated for AR Sco of ∼60%.For radio pulsars there are a range of different θ swing morphologies observed [7], which have been interpreted in terms of a rotating vector model, where the polarization vector (p, θ) is a projection on the sky of the direction of the magnetic field in the region where the polarized radiation is emitted.
Interpretation and Proposed Model
The strongly pulsed polarized emission in AR Sco is analogous to that observed in pulsars, like the Crab.The ratio of X-ray luminosity to spin-down power for AR Sco, α = L x /L s−d , is ∼ 10 −3 , implying that most of the luminosity of the system is not produced by accretion of matter, but by spin-down energy loss, just as in a spin-down powered pulsar.
An upper limit on the magnetic dipole strength can be derived assuming that the bulk of the spin-down power is radiated by dipole radiation, which was shown by Buckley et al. [2] to be ∼500 MG.Values this high have been derived for isolated magnetic white dwarfs, which is also within a factor of 2× of that seen in strongest magnetic cataclysmic variable (∼250 MG).An alternative model, assuming that a fraction of the spin-down power is dissipated through a magnetic stand-off shock near the secondary, has led to an estimate of the magnetic field of ∼100 MG [4].If rotational energy is also dissipated through magnetohydrodynamical (MHD) pumping of the secondary star, then a constraint can be placed by estimating the MHD power dissipated in the surface layers of the secondary [8].Dissipation will occur through magnetic reconnection and Ohmic heating, particularly in that part of the secondary star's photosphere which faces the white dwarf.This could contribute to both the observed line emission and the strong orbital photometric modulation, which is at a maximum when the secondary star is at superior conjunction, when it is at the most favourable viewing aspect.
Due to the absence of conducting plasma from mass loss/accretion, it was shown [2] that electric potentials of the order ∆V ∼ 10 12 V can be induced over the distance between the white dwarf and the light cylinder, which has a radius of 6 ×10 11 cm, ∼10× the orbital separation of the two stars.This potential can accelerate electrons to energies of the order of γ e ∼ 10 6 , resulting in pulsed and strongly polarized synchrotron emission, possibly up to to X-ray frequencies.The high level of linear polarization and its modulation is consistent with synchrotron emission of relativistic electrons in ordered magnetic fields [2].The periodic behaviour of the polarization, at both the white dwarf spin and the beat periods, is also consistent with the emission being produced by the interaction of the white dwarf's magnetosphere with the M-dwarf, which explains the beat period as a consequence of orbital modulation.
The spectral energy distribution (SED) in AR Sco [1,4]) shows a S ν ∝ ν α 1 (α 1 ∼ 1.3) self-absorbed power law spectral distribution for ν ≤ 10 12 − 10 13 Hz, i.e., at infrared to radio wavelengths.We have suggested [2] that these originate from pumped coronal loops of the nearly Roche lobe-filling secondary star.The magnetic flux tubes of the secondary star are periodically distorted by the fast rotating white dwarf magnetic field, inducing strong field-aligned potentials and synchrotron flares, consistent with the observed power law at ν ≤ 10 12 Hz and the peak emission at ν crit ∼ 0.3ν syn ≤ 10 13 Hz.The emission is therefore expected to be pulsed at the beat frequency, consistent with the ATCA observations at 5.5 and 9.0 GHz [1].
At higher frequencies, ν ≥ few × 10 14 Hz (optical-UV-X-rays), the SED follows a different ν α 2 power law [1], where α 2 ∼ −0.2 (see also [3]).This component is produced by non-thermal synchrotron emission from accelerated charged particles as the magnetic white dwarf dipole interacts with the M-star magnetic field and wind, and can explain the high level of linear polarization observed at optical frequncies.
Discussion
The high degree of asynchronism in AR Sco (P s /P orb = 0.009) implies that the white dwarf was spun up, presumably as a result of a previous phase of mass transfer.Another system, AE Aqr, which also harbours a fast spinning (P s = 33 s) white dwarf which is spinning down [9], was shown [10,11] to have experienced a previous high mass transfer phase.In this case the secondary star may have shed its outer envelope in a catastrophic run-away mass transfer process, resulting in the white dwarf being spun-up.However, in the case of AR Sco, there is currently no observational evidence to indicate that it has evolved through such an extreme high mass transfer phase.Therefore the evolutionary path which AR Sco has followed is still somewhat of an open question.
The high magnetic field of the white dwarf in AR Sco also presents a significant problem when it comes to spinning up the white dwarf in the first place, since material will tend to be ejected rather than accreted, except at very high accretion rates.This is precisely what the current situation is for AE Aqr, which is acting as a propellor, ejecting rather than accreting material from its companion [9].Since the observed white dwarf spin-down timescale for AR Sco of ∼10 7 years [1,2] is less than the spin-orbit synchronization timescale of ∼2.5 × 10 8 years, calculated for MHD torques alone [2], this implies that most of the spin-down power is dissipated through other channels.The high value of the magnetic field is therefore the reason for the current large spin down rate, through the various mechanisms (e.g., dipole radiation, MHD interactions) which rob the white dwarf of its angular momentum.
AR Sco is a unique object and the best candidate for a white dwarf pulsar-notwithstanding the one crucial point that it is not a neutron star-based on the following attributes: We show an artist's impression of AR Sco in In Figure 3.Further observations, particularly at UV, X-ray and radio wavelengths, will be important in determining the exact nature of the emission mechanisms operating in AR Sco.More extensive time resolved polarimetry, obtained at SAAO during the 2016 and 2017 observing seasons, will also help to constrain the polarized emission models by disentangling the two closely spaced polarized signals, at the spin and beat period, leading to more definitive conclusions regarding geometry.Its proximity in the outskirts of the ρ Ophiuchus molecular cloud also raises the interesting prospects that the system could be a source of EHE γ-ray emission through charged particle interactions.
Figure 1 .
Figure 1.Photopolarimetry of AR Sco covering the band ∼570-900 nm, taken on 14 March 2016, in 10 s bins.The panels show, from the top, the total polarized flux (s), degree of linear polarization (p) and position angle of linear polarization (θ).
Figure 2 .
Figure 2. Spin phased variation of the Stokes Q and U parameters (upper panels) and total linearly polarized flux (lower panels) for the red-band (570−900nm) on the two nights 14 (left) and 15 (right) March 2016.The migration of the Stokes Q and U amplitude pairs are shown, plotted every 3 s, with 40 points plotted per spin cycle.Points are colour-coded as in the average phase-folded linearly polarized flux plots.The Q , U pairs follow counter-clockwise trajectories.
•
Spin-down powered • SED dominated by synchrotron emission • Strongly linearly polarized • Spin modulated magnetic dipole • Pulsations are seen from the radio to the ultraviolet (and more recently in X-rays) • Beamed radiation from relativistic electrons accelerated in a stong electrical potential • Lorentz factors of γ ∼ 400 − 10 6
Figure 3 .
Figure 3. Artist's impession of the white dwarf pulsar system, AR Sco.Credit: Mark Garlick/University of Warwick. | 3,984.4 | 2018-01-22T00:00:00.000 | [
"Physics"
] |
Refitting an X-ray diffraction system for combined GIXRF and XRR measurements
A commercial Empyrean X-ray diffractometer was adapted for combined grazing incidence X-ray fluorescence analysis (GIXRF) measurements with X-ray reflectivity (XRR) measurements. An energy-dispersive silicon drift detector was mounted and integrated in the angle-dependent data acquisition of the Empyrean. Different monochromator/X-ray optics units have been compared with the values obtained by the Atominstitut GIXRF + XRR spectrometer. Data evaluation was performed by JGIXA, a special software for combined GIXRF + XRR data fitting, developed at Atominstitut. A sample consisting of a ~50 nm nickel layer on a silicon substrate was used to compare the performance criteria (i.e. divergence and intensity) of the incident beam optics. An Empyrean X-ray diffractometer was successfully refitted to measure both GIXRF and XRR data.
I. INTRODUCTION
X-ray reflectometry (XRR) is a well-known and established technique for the characterization of single-and multi-layered thin-film structures with layer thicknesses in the nanometer range. XRR spectra are acquired by varying the incident angle in the grazing incidence regime while measuring the intensity of the specular reflected X-ray beam. The shape of the resulting angle-dependent curve is correlated to changes of the electron density in the sample and, specifically in the case of layers, distinct Kiessig fringes can be observed (Kiessig, 1931). The position and intensity of these fringes can be calculated (Parratt, 1954) and thus be used for the characterization of layered samples.
Grazing incidence X-ray fluorescence (GIXRF) is a total reflection X-ray fluorescence analysis (TXRF) related technique, which uses the angle-dependent XRF signal in the grazing incidence regime. While TXRF uses measurements at a single point below the critical angle, GIXRF uses angle scans from 0°up to 3 or 4 times the critical angle in order to evaluate the angle-dependent variations in the X-ray standing wavefield. The XRF signal is element-specific, and therefore, the measurements contain information about the elemental composition, concentration profile, and thickness and density of near-surface layers (de Boer, 1991).
The combined measurement and evaluation of GIXRF and XRR data can improve the obtained information, as it reduces uncertainties and ambiguities of the individual techniques, especially for the analysis of samples, for which the exact stoichiometry might not be known (Ingerle et al., 2014a). Already in 1994, van den Hoogenhof and de Boer, while working at Philips, presented a setup for combined GIXRF and XRR experiments (van den Hoogenhof and de Boer, 1994). Nevertheless and although there are several manufacturers for TXRF (or limited GIXRF) instrumentation or for diffractometers, which can be used for XRR analysis, at the moment there is no commercial instrument available, which (out-of-the-box) provides the possibility to perform combined GIXRF/XRR analysis.
In a previously published approach, we modified an existing GIXRF setup by adding a detector for the simultaneous acquisition of XRR intensities (Ingerle et al., 2014b). Although this setup has several advantages (e.g. vacuum chamber and easily exchangeable tube anode material), it also suffers from a somewhat limited measurement resolution because of minimum motor step size as well as beam divergence. As an improvement in this respect would amount to a complete redesign, we considered a different approach and looked at the instruments, which are available at the X-ray Center of our university.
We used an Empyrean X-ray diffraction (XRD) system by PANalytical, which offers beam optics, detectors, and software for XRR on thin layers, but no support for the acquisition of X-ray fluorescence, and added an Amptek silicon drift detector (SDD) for the acquisition of XRF spectra ( Figure 1). A custom control software was developed in order to synchronize the acquisition of XRF spectra with the angular movement during an XRR scan.
II. EXPERIMENTAL
The Empyrean XRD system by PANalytical is a commercially available platform for a variety of applications in analytical XRD. The goniometer, which is the central part of the diffractometer, has a radius of 240 mm. It features Heidenhain encoders and a minimum step size of 0.0001°for the angle of incidence as well as the scattering angle. The instrument uses a θ/θ configuration, i.e. the sample is stationary in the horizontal position, whereas the tube and XRR detector move simultaneously. The PreFIX concept allows the exchange of beam optical and detector modules within minutes.
Specifics about the used components were taken from the Empyrean Reference Manual.
A. X-ray tube The line focus of an Empyrean Cu LFF HR X-ray tube was used for the measurements. This metal-ceramic tube has a maximum power rating of 1.8 kW and a focal spot of 12 × 0.4 mm. The exit window consists of beryllium with a thickness of 300 μm. The tube was operated at 45 kV and 40 mA.
B. X-ray optics
A wide variety of incident beam optics, i.e. monochromators and mirrors, is available for the Empyrean. We used and compared four of them: • The hybrid monochromator consists of a parabolical shaped graded multilayer and a channel-cut Ge(220) crystal in one module. It creates almost pure Kα 1 radiation. The Kα 2 radiation is reduced to below 0.1% of the original value. • The parallel beam X-ray mirror module contains a parabolical shaped graded multilayer. It converts the divergent beam into a monochromatic Kα quasi-parallel beam. The Kβ radiation is reduced to below 0.5% of the original value. • The Bragg-BrentanoHD module converts a divergent X-ray beam into a monochromatic divergent X-ray beam. We used this module with a 1/32°(0.05 mm) exit slit. The energy resolution is about 450 eV. • The focusing X-ray mirror module contains an ellipticalshaped graded multilayer. It converts the divergent beam into a monochromatic Kα beam, focused on the detector. The Kβ radiation is reduced to below 0.5% of the original value.
All modules were used with a 1/32°(0.05 mm) divergence slit and a 10 mm wide beam mask.
C. XRR detector
The XRR detector assembly consists of a 0.18°parallel plate collimator, a 0.1 mm collimator slit, a programmable beam attenuator, and the detector module. The beam attenuator contains a Nickel foil, which is 125 μm thick. The foil, which reduces the CuKα intensity by a factor of 174, is automatically inserted into or retracted from the beam path if the count rate exceeds or falls below a configured threshold.
Concerning the detector module, we tested a scintillation and a PIXcel 3D detector, which is based on Medipix2 technology (Llopart et al., 2002). After measurements with the parallel beam mirror, we realized that the count rate at small angles exceeds the specified 99% linearity range of the scintillation detector (0-500 kcps). Thus, we only used the PIXcel3D for our comparison, which has a 99% linearity range of 0-5 × 10 6 cps per column. The detector consists of 255 × 255 px with a pixel pitch of 55 μm and was used in the open detector (0D) mode.
D. XRF detector
An Amptek XR-100SDD with an 8 μm thick Beryllium entrance window was placed at 90°to the sample surface ( Figure 1). The detector has a 25 mm 2 active area (internally collimated to 17 mm 2 ) and is 500 μm thick. The signal processing was done by an Amptek PX4 digital pulse processor (DPP). This combination of SDD and DPP results in an approximate maximum input count rate of 200 kcps at 2.4 μs peaking time and a minimal achievable resolution of 125 eV at 5.9 keV (Amptek, 2020). As GIXRF measurements can imply high count rates above the critical angle and XRF peak overlaps in case of multi-element samples, good count rate capability as well as energy resolution are both critical.
During the mounting, care has to be taken that the center of the XRF detector aligns with the center of rotation in order to make the measurements as accurate and reproducible as possible. The location and stability of the center of rotation of the Empyrean system can be verified by a fluorescence disk, which is provided with the diffractometer. Furthermore, we used distance holders to place the detector 3 mm above the sample surface.
The total efficiency, which in the considered energy range up to 8 keV is mainly influenced by the absorption in the air path and the Beryllium window, can be estimated to ∼50% for SiKα at 1.74 keV and >95% above 4 keV.
E. Acquisition software
We used the Data Collector program, which is the standard software for the Empyrean, for movement control and for the acquisition of XRR data. In order to start, stop, and read out the XRF detector in synchrony with the XRR scan, we developed our own software.
This was actually the most challenging task in the adaption of the diffractometer, as the commercial control software acts more or less like a black box, with no documentation on programming interfaces available to us. Fortunately, the communication between the diffractometer and control software is unencrypted via plain text serial port connection. Thus, we were able to write a software module, which intercepts this communication and forwards information on angle positions and scan status to our own control software, which manages the XRF detector acquisition. One limitation of this approach is a slight overhead of 1-2 s, which we had to introduce at each angle step, in order to facilitate the timely stopping of the XRF detector. Furthermore, the XRF scan acquisition is
S30
Powder Diffr., Vol. 35, No. S1, December 2020 Ingerle et al. S30 only started for symmetric goniometer scans in the step mode, as continuous scans would introduce additional inaccuracies.
A. Comparison of beam optics
A sample consisting of a ∼50 nm nickel layer on a silicon substrate was used to compare the performance (i.e. divergence and intensity) of the incident beam optics. We performed combined measurements of SiKα XRF and XRR with an acquisition time of 7 s per point. The NiKα XRF from the layer is not accessible with the copper tube and the available optics as the Ni-K edge (8.3 keV) is above the energy of the CuKα radiation (8.04 keV), but it is important to note that the SiKα GIXRF from the substrate is also modulated because of the interference effects caused by the layer. The evaluation was done with the software JGIXA (Ingerle et al., 2016).
We used two figures of merit to evaluate and compare the performance of the different optics: The measured intensities and the divergence reported by the evaluation software.
The intensities (i.e. count rates), which are measured by the XRF and XRR detector, are directly correlated to the intensity of the beam after the primary optics. Typically, one would aim for the highest possible intensity in order to reduce counting times especially for larger angles in the XRR spectrum or for the XRF measurement of low concentration samples. Nevertheless, we have to add the following caveat: the intensity can be too high for the chosen detector and sample combination. For example, as mentioned before, the intensities of the parallel and focusing beam mirrors were actually exceeding the specified linearity range of the scintillation detector of the Empyrean system even when an attenuator foil was inserted. This effect leads to a damping of the total reflection region in the spectrum and, thus, a distortion of the critical angle or an underestimation of Bragg-peaks. A similar effect can be observed for the XRF measurement of elements with high concentration and good cross-section for the incident radiation. In this case, mainly, the angles above the critical angle, i.e. when the beam is fully penetrating into the sample, are affected. The evaluation of all elements in the XRF spectrum will be impaired, leading to wrong quantification results. In case no better detector is available, these adverse conditions can be avoided by reducing the tube current or, maybe even better, by choosing a different X-ray optics, which has reduced flux, but might create a more coherent beam, as was observed in our experiments (see below).
The reported divergence has to be considered as a measure for the coherence of the beam relevant for the experiment and not as a full characterization of the beam. XRR and GIXRF are based on beam interference effects and, thus, rely on a suitable coherence to show oscillations in the angle-dependent measurement curve. We can distinguish two types of coherence, which might be relevant for the techniques: Firstly, we have the temporal coherence and, secondly, the spatial coherence, which can be described by the monochromaticity and the angular divergence of the beam, respectively. The angular divergence can be subdivided in two values for the planes parallel and perpendicular to the sample surface, but in the case of XRR and GIXRF, it is mainly the angle distribution perpendicular to the surface, which is of relevance for the measurement.
Further discussion on this topic can be found in the literature (von Bohlen et al., 2009;Tiwari et al., 2015).
In our setup, we have an X-ray tube as the primary source, which has to be considered as an incoherent source. We use monochromators, i.e. crystals or multilayers, to improve the temporal coherence and slits or collimators to improve the spatial coherence. Special cases are multilayers on a bent or shaped substrate, graded multilayers, or bent or shaped crystals, as they will improve temporal and spatial coherence at the same time, i.e. typically a parallel or focused, monochromatic beam. One fact, which is working to the advantage of X-ray tubes used with multilayers, is the use of characteristic lines of the anode material. Although multilayers typically have a bandwidth of some hundred eV near CuKα (∼8 keV), the actual monochromatization is still sufficient for many experiments, as the main component of the beam will be Kα 1 and Kα 2 radiation. As these two lines are only ∼20 eV apart, the additional error, introduced by the shift in angles at different energies, is typically negligible for many samples in comparison to the much larger angular divergence.
Considering this, we assume perfectly monochromatic excitation for the simulation in JGIXA and introduce a convolution with a Gaussian point spread function in order to simulate the angular spread of the beam. Furthermore, the coherence for the XRR measurement can be further improved by additional optical elements after the sample and in front of the detector. Thus, the simulation uses two separate values for the simulation of divergence in the GIXRF and the XRR calculations.
Coming back to our comparison of the optics, we clearly see an expected tradeoff between intensity and estimated divergence (Figure 2). The divergence for the XRR measurement seems to be reduced because of the additional slit and a parallel plate collimator in front of the XRR detector. A closer look at some details in the measurement curves clearly shows the benefit of the better divergence ( Figure 3). The comparison with a self-built spectrometer for combined GIXRF and XRR measurements (Ingerle et al., 2014b), which only uses a flat multilayer and a slit system, is also instructive, as it is clear to see, that features start to disappear, which are visible with the hybrid monochromator of the Empyrean.
B. Importance of divergence for GIXRF measurements
In the past, we successfully used a modified GIXRF setup for combined GIXRF and XRR measurements (Ingerle et al., 2014b). The analyzed samples mainly involved the depthprofiling of shallow depth ion-implantation (Ingerle et al., 2014a) or diffusion effects because of annealing in thin layers (Caby et al., 2015;Rotella et al., 2017). All of these samples could not be analyzed by XRR alone, as the change in the electron density was almost non-existent, either due the low implantation dose or the similarity in the atomic number of the involved elements. Furthermore, the divergence was not a problem for these studies, as the critical features of the measurement curves were not expected to be significantly influenced.
But in a recent analysis, the divergence was expected to become a problem for the evaluation of the measurement. This gave us the opportunity to test the new approach for a combined setup, which is presented in this manuscript.
The analyzed samples consisted of a simplified model built from typical materials for organic light-emitting diode (OLED) production, i.e. mainly organic layers with a sulfur based small molecule host. The topic of interest was the difference in diffusion because of vapor or solution-based deposition. Here again, the analysis with XRR alone would provide inconclusive results, as the concentration of sulfur is small (∼1-2%) and also the variation in the distribution is very small and thus results in almost no change of electron density.
Further information on the result and the samples can be found in another publication (Maderitsch et al., 2018). and X-ray reflectivity intensity (bottom) versus angle of incidence for the various X-ray optics of the Empyrean diffractometer. The angle range from 0.4 to 0.6°is plotted to emphasize the influence of the divergence of the used X-ray optics on specific features (marked with a circle). Additionally, the results from the ATI GIXRF + XRR spectrometer are given. The divergence values are the results of the fitting software JGIXA.
We want to showcase here the measurement of an OLED model system consisting of a 20 nm buffer layer, a 20 nm hole transport layer, and a 60 nm host layer, with a special emphasize on the influence of the divergence on the measurement curve. The variation in the angle-dependent SKα intensities can be used to conclude on the distribution of sulfur in the differently prepared host layers (solution processed or evaporated), but only if we can discern the features in the measured curves. Figure 4(a) shows the actual measurements of the samples with the hybrid monochromator, which were fitted with a simulated divergence of 0.25 mrad. Several features, i.e. steps in the rising and falling edge, are clearly visible in the measurement and well-matched in the simulation. If we perform a simulation with 0.32 mrad divergence, which corresponds to the divergence expected from a parallel beam mirror, we can see that the features become much harder to notice [Figure 4 (b)]. In fact, the modulations in intensity are so small, that they could be mistaken for effects of counting statistics. From this comparison, it is clear that a measurement with the parallel beam mirror would make it much harder if not impossible to perform an evaluation and the hybrid monochromator is much better suited for this analysis.
IV. CONCLUSION
• Our work demonstrates that it is relatively easy to integrate an XRF detector for GIXRF measurements into a commercially available XRD/XRR setup. This fact in combination with an evaluation software creates the opportunity for new applications and users. • The beam optics worked as expected, showing a tradeoff between divergence and intensity. In the current configuration, the Bragg-BrentanoHD module seems to represent a good compromise for most applications. • The setup was successfully used for the measurement of organic multilayer structures (Maderitsch et al., 2018). This analysis required the better resolution of the hybrid monochromator. | 4,397.4 | 2020-07-01T00:00:00.000 | [
"Physics"
] |
Shadow glass transition as a thermodynamic signature of β relaxation in hyper-quenched metallic glasses
Abstract One puzzling phenomenon in glass physics is the so-called ‘shadow glass transition’ which is an anomalous heat-absorbing process below the real glass transition and influences glass properties. However, it has yet to be entirely characterized, let alone fundamentally understood. Conventional calorimetry detects it in limited heating rates. Here, with the chip-based fast scanning calorimetry, we study the dynamics of the shadow glass transition over four orders of magnitude in heating rates for 24 different hyper-quenched metallic glasses. We present evidence that the shadow glass transition correlates with the secondary (β) relaxation: (i) The shadow glass transition and the β relaxation follow the same temperature–time dependence, and both merge with the primary relaxation at high temperature. (ii) The shadow glass transition is more obvious in glasses with pronounced β relaxation, and vice versa; their magnitudes are proportional to each other. Our findings suggest that the shadow glass transition signals the thermodynamics of β relaxation in hyper-quenched metallic glasses.
While the exothermic enthalpy relaxation might be understood as the continuous transformation of a high enthalpy state to a lower one during slow heating, the endothermic shadow glass transition is intriguing: it seems to indicate that during annealing, some parts of the glass reach lower energy states relative to the rest of the system and then return to the higher energy states during DSC up-scan [3,31]. Some researchers proposed that the shadow glass transition might also imply structural heterogeneity of the glass [15,21,31,32]. The basic question remains unclear as to what kind of atomic motions are responsible for the heating-absorbing shadow glass transition.
Aside from these non-equilibrium relaxation phenomena, glasses and supercooled liquids also have a range of inherent dynamic processes which can be found in both the thermodynamic C The Author(s) 2020. Published by Oxford University Press on behalf of China Science Publishing & Media Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. equilibrium states (the supercooled liquids) and the out-of-equilibrium glass states [33][34][35][36][37][38][39][40][41][42]. Among them, the most prominent is the so-called primary (α) relaxation. Its evolution from equilibrium to out-of-equilibrium during cooling of the liquid is associated with the thermodynamic signature of glass transition, as can be measured from the jump of specific heat, C p [15,39,43]. Processes occurring in addition to the α relaxation at shorter timescales or lower temperature are referred to as secondary (β) relaxations [33,36,42,44,45]. Usually the β relaxations are probed by dielectric or mechanical spectroscopy [36,42,[46][47][48][49][50][51][52][53][54][55], but could not be readily detected by ordinary DSC procedures. Nevertheless, Fujimoi and Oguni reported thermodynamic signatures of β relaxations by adiabatic calorimetry [56] and Busch et al. by the temperature-modulated DSC [19,22]. Recently, Ngai and coworkers, in a series of papers, also proposed other signatures for β relaxations [57,58].
In light of these studies, it is of interest to know whether the shadow glass transition is connected to β relaxations, just as the (real) glass transition is to α relaxations. This question is of crucial importance for both revealing the origin of the shadow glass transition and β relaxation in glassy materials, as well as improving our understanding about the nature of the glass. We note that there are some previous studies that attempted to establish connections between the β relaxation and the (heat-releasing) enthalpy relaxation [6,18,32,[59][60][61]. For instance, the enthalpy relaxation has been considered as a proxy of β relaxation [18], and the activation energy of enthalpy relaxation and β relaxation reported to be nearly equal in some glasses [60]. Logically, on the other hand, by comparing the real glass transition and the α relaxation, one may envisage that if the β relaxation has thermodynamic consequence, it might show an endothermic (heat-absorbing) feature. The shadow glass transition might be such a candidate [62]. Some authors have inferred that the shadow glass transition might be related to the β relaxation based on the activation energy [19,22,25,63]. As these studies depend on the dedicated annealing treatments and as the accessible observation time window is narrow as it is limited by the heating rates of DSC (typically 0.1-1 K/s) [15,18,24,61], it is still difficult to make direct comparisons between the shadow glass transition and the β relaxation. Consequently, whether the shadow glass transition and β relaxation are connected is still not elucidated.
In this work, we use a chip-based fast scanning calorimetry (FSC) [64][65][66][67][68][69][70][71][72][73][74] to investigate the dynamics of the shadow glass transition in a wide range of heating rates (3-20 000 K/s) in two dozen differ-ent metallic glasses (MGs). We show that the FSC can clearly capture the shadow glass transition without the need for annealing at high heating rates for rapidly quenched MGs. We illustrate that the dynamics of the shadow glass transition quantitatively match the β relaxation as independently measured by mechanical relaxations. Interestingly, we find that the shadow glass transition is more obvious in glasses with pronounced β relaxation, while it is hard to observe in glasses with weak β relaxation. Our results provide clear evidence on the correlation between the shadow glass transition and the β relaxation. These findings suggest that the shadow glass transition signals the thermodynamic freezing of β relaxation, analogous to the glass transition and the freezing of α relaxation. Figure 1a compares two typical heat flow curves of a La 50 Ni 15 Co 2 Al 33 MG measured by a conventional DSC (at a heating rate Q = 0.333 K/s or 20 K/min) and an FSC (Q = 500 K/s), respectively. The conventional DSC curve only exhibits an exothermic process (the enthalpy relaxation) before T g . In contrast, the FSC curve exhibits a clear endothermic peak, which is the shadow glass transition, in addition to the enthalpy relaxation and the glass transition. We define T g, shadow as the temperature corresponding to the maximum point of this endothermic peak. We consider that the shadow glass transition is not a true glass transition, and it does not have a step-like heat-capacity jump. Instead, the shadow glass transition might be better viewed as an activation processes, and thus the peak temperature might be more suitable for analysis than the onset temperature, as is the case for many other activation processes. We note that previous studies of the shadow glass transition have resorted to dedicated thermal annealing procedures [19,20,22,23]. Thus, the FSC enable us to directly investigate the shadow glass transition without the need of annealing. Figure 1b presents the heat flow curves for five different glassy ribbon samples with thickness ranging from 10 to 60 μm that are produced by different roller speeds during spinning quenching. Consequently, they have different cooling rates, and the thinner the sample, the higher the cooling rate. Figure 1b indicates that the cooling rate influences the shadow glass transition, as T g, shadow decreases with cooling rates. Quantitatively, we estimate the cooling rates of the samples according to the energy matching method of Liu et al. [18]. Figure 1c shows the T g, shadow as a function of the estimated cooling rate. It reveals that for samples RESEARCH ARTICLE prepared with faster cooling rates, the shadow glass transition can shift to a lower temperature. Interestingly, when the cooling rate is faster than ∼10 6 K/s, T g, shadow gradually approaches a value of constant, as further increasing of the cooling rates does not lead to lowering T g, shadow within the experimental sensitivity. Thus, the T g, shadow could be used as a materials property only if the samples are prepared by a cooling rate higher than 10 6 K/s, that is the hyper-quenched glasses. In the following experiments, all the samples are prepared by the highest cooling rates (i.e. with thickness ∼10 μm, or cooling rates larger than 10 6 K/s). Figure 2a presents the typical FSC curves showing heat flow versus temperature at a range of heating rates from 10 to 10 000 K/s for the La 50 Ni 15 Co 2 Al 33 MG. The dynamic behavior of the shadow glass transition is similar to the real glass transition process, moving to higher temperatures at higher heating rates, which demonstrates that the shadow glass transition is of kinetic nature. Meanwhile, dynamic mechanical spectra (DMS) were carried out at different testing frequencies to investigate its inherent relaxation dynamics. Figure 2b shows the temperature dependence of the normalized loss modulus E"/E" max at different testing frequencies for La 50 Ni 15 Co 2 Al 33 MG. The MG shows pronounced β relaxation peak, in addition to the α relaxation. Figure 2c shows the FSC heat flow curve (300 K/s) and the normalized loss modulus E"/E" max (2 Hz). These two curves are selected due to the glass transition probed by FSC at this heating rate and the α relaxation of DMS at this frequency have nearly the same temperature (∼528 K here). From DMS, one can see a distinct β relaxation peak which locates about 410 K (i.e. the β relaxation peak temperature, T β = 410 K). At the same time, we find the FSC curve also exhibits a pronounced endothermic peak in the same temperature range due to the shadow glass transition. In Fig. 2d, we summarized the β and α relaxations from DMS, the shadow glass transition and the (real) glass transition from FSC in a relaxation map for La 50 Ni 15 Co 2 Al 33 MG. We note that the timescale is represented by two different quantities in the two experiments, namely, the testing frequency (Hz or s −1 ) in DMS and the heating rate (K/s) in FSC. To translate the frequency in DMS to heating rates in FSC, we assume there is a linear relation between them and we vertically shift the DMS data in Fig. 2d to make the α relaxation maximally overlap with the T g data (at different heating rates) by FSC. The shift-factors are given in the online supplementary data. Importantly, we find that, as shown in Fig. 2d, once the α relaxation is overlapped with T g (by FSC) by this manipulation, the β relaxation coincides nicely with shadow glass transition as well.
RESULTS
Meanwhile, both the β relaxation peak and shadow glass transition peak can be fitted by an Arrhenius equation at low temperatures. However, with the further increase of heating rate the T g, shadow does not follow an Arrhenius behavior for temperatures above T g , but it follows a super-Arrhenius behavior at a higher temperature and eventually merges into α relaxation (real glass transition) at heating rates above 10 000 K/s. These behaviors are indeed similar to the β relaxation in general. Due to the limited frequency range of our DMS, the β relaxation at higher frequency (or higher temperature) could not be measured in MGs. Nevertheless, several experiments based on dielectric spectroscopy have shown that the β relaxation in molecular glasses merges with the α relaxation in a super-Arrhenius manner. Thus the shadow glass transition behaves like the β relaxation in dynamics. Similar experiments were also performed for a Pd 40 Cu 40 P 20 MG. As shown in Fig. 3a, the FSC curve exhibits a clear shadow glass transition at a temperature below the enthalpy relaxation and the T g . Figure 3b and c shows the heat flow curves of Pd 40 Cu 40 P 20 MG measured by FSC over a range of heating rates Q from 10 to 10 000 K/s. The DMS loss modulus (2 Hz) and the FSC heat flow (200 K/s) are shown in Fig. 3d. Figure 3e shows the dynamic behavior of α relaxation and β relaxation at different test frequencies.
The corresponding relaxation map are reported in Fig. 3f which summarizes T g,shadow from FSC and T β from DMS at different testing frequencies.
Again, one can see that the shadow glass transition and β relaxation agree with each other and they also agree with an Arrhenius equation at low temperatures (or heating rates lower than ∼4 000 K/s). As heating rate Q increases, the shadow glass transition progressively shifts to a higher temperature at a faster speed, thus, the shadow glass transition follows a super-Arrhenius behavior at a higher heating rate Q ≥ 4000 K/s, until it eventually merges with α re-laxation near 10 000 K/s. This observation demonstrates again an intrinsic correlation between the shadow glass transition and β relaxation in metallic glasses.
Previous studies have shown that the behaviors of β relaxation are materials specific and sensitive to chemical compositions [36,42,75]. In some MGs, β relaxations manifest as distinct peaks, while in some other systems, β relaxations appear to be absent and, instead, excess contributions to the tails of α relaxations show up [36,37,42,54,76,77]. These so-called excess wings have been observed in many systems without well-resolved peaks of β relaxations [36,42,77]. Since the above experiments were conducted in MGs with pronounced β relaxations, it is of interest to study the effect of the unobvious β relaxation (e.g. shoulder or excess wings) on shadow glass transition. We therefore investigate the FSC and DMS on Ni 78 P 22 , Al 86 Ni 9 Sm 5 and 13 different Zr-based MGs (Table 1). What is common to these MGs is that they do not have pronounced β relaxations. They either show excess wings or shoulderlike features as probed by DMS. Figure 4 shows the temperature dependence of the DMS loss modulus (1 Hz) and the FSC heat flow (500 K/s) for these MGs. One can see that none of them exhibits a clear shadow glass transition as probed by FSC. This result suggests that the magnitudes of shadow glass transition and the β relaxation evolve hand in hand with each other, providing more evidence as to correlation between them.
RESEARCH ARTICLE
The results for all the studied MGs are collectively shown in Table 1, where the MGs are classified into different groups by two features: the behavior of the β relaxation in each row and the shadow glass transition in each column. We can see that the shadow glass transition is always found in the hyperquenched MGs with pronounced β relaxation. On the other hand, the MGs without obvious β relaxation are less likely to show shadow glass transition as probed by FSC.
To quantitatively correlate the distinct behaviors of β relaxation and the shadow glass transition, the relative heights of β relaxation and shadow glass transition can be determined respectively as E" β /E" α and C p@Tg,shadow / C p@Tg . Here, E" β /E" α is the ratio between peak height of β relaxation and α relaxation. Similarly, C p@Tg,shadow / C p@Tg is the ratio between the peak height of shadow glass transition C p@Tg,shadow and the heat capacity jump of real glass transition C p@Tg . Here, we first use the Pd-based MGs system as a typical example to illustrate the relation between the shadow glass transition and β relaxation. One can see a trend that the C p@Tg,shadow / C p@Tg increase with the addition of the Cu into Pd 40 Ni 40 P 20 MG to replace Ni atom for Pd 40 Ni 40-x Cu x P 20 (x = 0, 30 and 40) MGs system, as shown in Fig. 5a. At the same time, when Cu is added into Pd 40 Ni 40 P 20 to replace Ni, the peaks of β relaxation also shift gradually to lower-scaled temperatures and become Table 1. Cross-correlation between the behavior of the β relaxation and shadow glass transition for 24 different metallic glasses. more pronounced as shown in Fig. 5b. In other words, alloying influences in the same way to the relative strength of β relaxation and the shadow glass transition. Figure 5c presents the quantitative relationship between the β relaxation and the shadow glass transition by plotting C p@Tg,shadow / C p@Tg against E" β /E" α . It is noteworthy that C p@Tg,shadow / C p@Tg is nearly a proportional (i.e. y = x) function of E" β /E" α for these MGs. It indicates that the stronger shadow glass transition with higher C p@Tg,shadow / C p@Tg corresponds to a more pronounced β relaxation peak and vice versa. This corroborates that the strength of shadow glass transition and the behaviors of β relaxation are correlated.
DISCUSSION
These results inspire the physical mechanism that a β relaxation induced connectivity percolation happens before the glass transition and leads to the sub-T g endothermic peak. The β relaxation in MGs has been identified to reflect the string-like collective atomic arrangement based on molecular dynamics simulations [51,78,79]. Previous exper-iments also found that the fraction of liquid-like regions (or 'flow units') was above 0.25 after the full activation of β relaxation [62,80,81]. The value between 0.25-0.3 happens to be the threshold volume fraction of connectivity percolation for a 3D continuum system [82][83][84]. The connectivity percolation means that the expansion of activated liquid-like regions with increasing temperature enables the appearance of at least one connected flow unit chain to penetrate through the sample. Unlike the 'real' glass transition, where we believe a rigidity percolation happens and the sample behaves with a macroscopic softness, the 'shadow' glass transition is rather confined with no additional macroscopic degree of freedom. Therefore, an endothermic peak which reflects the local to cooperative transition can be observed but with a smaller value compared to a 'real' glass transition. However, it is a kinetic process in the real world and the competition between the activation process and structural relaxation will weaken the endothermic process if the heating rate is slow. This explains the reason why the shadow glass transition peak is difficult to detect by using traditional calorimetry equipment. If the sample is heated up fast enough, the connectivity and rigidity percolation may be reached simultaneously and the shadow glass transition will merge into the main glass transition as shown in Figs 2d, and 3c and f.
Besides, the energy status of sample or chemical influence also plays an important role in the activation process. Generally, the low cooling rate and annealing treatment will lower both the system energy and the diversity of structural heterogeneity, which means the connectivity percolation can only be reached at a higher temperature. From our FSC results, lower cooling rate indeed leads to a higher shadow glass transition as predicted from the model. Chemical influence on shadow glass transition is as strong as on β relaxation, where no clear shadow glass transition can be probed even by FSC in systems with weak β relaxation behaviors. The physical mechanism for the phenomenon might also be related to the percolation state. The unobvious β relaxation shoulder or excess wing is believed to result from the indiscernibility between the two relaxations, where deduced T β is close to 0.9T α (here, T α is the peak temperature of the α relaxation) and therefore β peak hidden in the flank of α peak [85]. Weak β relaxation behavior together with fewer flow unit regions will result in an undistinguished shadow glass transition, which was observed in those Zr-, Ni-and Al-based MGs (Fig. 4).
We have shown that the shadow glass transition and β relaxation follow a same temperaturetime dynamic and their magnitudes are proportional with each other. These results are enabled by the combined experiments of dynamical mechanical analysis and, especially, the recently developed fast-scanning calorimetry with heating rates of hundreds/thousands kelvin per second. Our findings establish a correlation between the two seemingly different processes, which provides an example of settling long-standing attempts to relate glass dynamics to thermodynamic responses. Meanwhile, the progress in the understanding of β relaxation could be suggestive of ultimately resolving the mechanisms of shadow glass translation. The emerging physical picture implies that the shadow glass transition is a thermodynamic signature of β relaxation in hyper-quenched glasses, analogous to the glass transition and the freezing of α relaxation. The results presented above thus open new challenges and opportunities for furthering our understanding of glass relaxations. (c) Relationship between C p@Tg,shadow / C p@Tg and E" β /E" α .
Dynamical mechanical analysis
The dynamical mechanical spectra of these MGs were measured on a TA Q800 dynamical mechanical analyzer. For these amorphous ribbon samples, film tension mode was used in an isochronal mode with a heating rate of 3 K/min, strain amplitude of 6 um and discrete testing frequency of 0.5, 1, 2, 4, 8 and 16 Hz.
Calorimetry measurements
The present calorimetry was performed using a combination of Flash DSC (Mettler Toledo Flash DSC 2+) and conventional DSC (Mettler Toledo DSC 3). The heat flow curves of MGs at a relatively low heating rate (0.083-1.33 K/s) is obtained by continuous heating on a conventional DSC using a refrigerated cooling system with a N 2 -gas DSC cell purge under a 50 ml/min nitrogen gas flow. The sample masses were 8-15 mg. In order to ensure the reliability of the measurement, each crystallized sample was heated again to obtain a baseline. The conventional DSC was calibrated by using pure In and Zn standard. The heat flow curves of MGs at higher heating rates were obtained by continuous heating on a Flash DSC under 80 ml/min argon gas flow. The twin-type chip sensor based on MEMS technology is made of a sample and a reference. The FSC chip sensors were preconditioned and calibrated following the manufacturer recommendation. The FSC samples were prepared by cutting the melt-spun ribbons into small pieces under a stereomicroscope and then transferred using an electrostatic manipulator hair onto a temperature-corrected MultiSTAR UFS1 sensor or UFH sensor. Samples were placed on the sensitive area of a MEMS chip sensor for a range of heating rates from 3 to 20 000 K/s. | 5,089 | 2020-05-13T00:00:00.000 | [
"Materials Science"
] |
The (cid:12) -function of N = 1 supersymmetric gauge theories regularized by higher covariant derivatives as an integral of double total derivatives
,
Introduction
Ultraviolet divergences in supersymmetric theories are restricted by some nonrenormalization theorems. According to one of them, N = 4 supersymmetric Yang-Mills (SYM) theory is finite in all orders [1][2][3][4]. Divergencies in N = 2 theories exist only in the one-loop approximation [1,4,5], so that it is even possible to construct finite N = 2 supersymmetric theories by choosing a gauge group and a matter representation in such a way that the one-loop divergencies cancel [6]. All these non-renormalization theorems can be derived [7,8] from the equation which relates the β-function of N = 1 supersymmetric gauge theories with the anomalous dimension of the matter superfields [9][10][11][12] β(α, λ) = − where α is the gauge coupling constant and λ denotes the Yukawa couplings. Note that so far we do not specify the definitions of the renormalization group functions (RGFs) and what couplings are considered as their arguments. Eq. (1.1) called the exact NSVZ β-function can also be considered as a non-renormalization theorem in addition to the JHEP10(2019)011 well-known statement that the superpotential in N = 1 supersymmetric theories is not renormalized [13]. According to one more non-renormalization theorem derived in [14], the triple ghost-gauge vertices in N = 1 supersymmetric gauge theories are finite in all orders. 1 With the help of this non-renormalization theorem the exact NSVZ β-function can be equivalently rewritten in a new form [14], which relates the β-function to the anomalous dimensions of the quantum gauge superfield (γ V ), of the Faddeev-Popov ghosts (γ c ), and of the matter superfields ( γ φ i j ).
Some NSVZ-like relations can be written for other theories. For example, in theories with softly broken supersymmetry an analogous equation describes the renormalization of the gaugino mass [18][19][20]. Also it is possible to construct the NSVZ-like equations for the Adler D-function in N = 1 SQCD [21,22] and even for the renormalization of the Fayet-Iliopoulos term in two-dimensional N = (0, 2) supersymmetric models [23].
Various derivations of the exact NSVZ β-function involve general arguments based on the analysis of the instanton contributions [7,9], anomalies [10,12,24], and nonrenormalization of the topological term [25]. However, a direct perturbative verification of eq. (1.1) in all orders appeared to be a highly non-trivial problem. Even to start solving this problem, one should first pay attention to some important subtleties related to the regularization, quantization, and renormalization.
Really, the calculations of quantum corrections made in the DR-scheme (that is with the help of dimensional reduction [26] supplemented by the modified minimal subtractions [27]) in refs. [28][29][30][31][32] demonstrate that the NSVZ relation is not valid for this renormalization prescription. However, the difference can be explained by the scheme dependence of the NSVZ relation, which is described by the general equations derived in [33,34]. Namely, it is possible to tune the renormalization scheme in such a way that the NSVZ equation will take place [28][29][30]. 2 It is important that this possibility is highly non-trivial due to some scheme-independent equations following from the NSVZ relation [34,36]. Nevertheless, at present there is no general all-loop prescription giving the NSVZ scheme in the case of using the regularization by dimensional reduction.
The NSVZ renormalization prescription can be naturally formulated in all loops if N = 1 supersymmetric gauge theories are regularized by the higher covariant derivative method [37,38] in the supersymmetric version [39,40]. The matter is that using of this regularization reveals the underlying structure of the loop integrals responsible for appearing the NSVZ relation. Namely, in this case the integrals giving the β-function defined in terms of the bare couplings appear to be integrals of double total derivatives with respect to loop momenta. 3 This was first noted in calculating quantum corrections for N = 1 su-
The integrals of double total derivatives do not vanish due to the identity where Q is an Euclidean momentum. The δ-function reduces the number of loop integrations by 1, so that in the Abelian case an L-loop contribution to the β-function appears to be related to an (L − 1)-loop contribution to the anomalous dimension of the matter superfields. The sum of singularities in the Abelian case was calculated in [54,55], where it was expressed in terms of the anomalous dimension of the matter superfields. The relation between the β-function and the anomalous dimension obtained in this way is nothing else than the NSVZ equation for RGFs defined in terms of the bare couplings. Thus, at least in the Abelian case, it naturally appears in the case of using the higher derivative regularization. Note that the RGFs defined in terms of the bare couplings are scheme independent if a regularization is fixed (see, e.g., [57]), so that the NSVZ equation for these RGFs is valid for an arbitrary renormalization prescription. 4 In the non-Abelian case the situation is much more complicated. Eq. (1.1) relates an L-loop contribution to the β-function to the anomalous dimension of the matter superfields in all previous orders. That is why it is more probable that it is eq. (1.2) that originally appears in the perturbative calculations. Moreover, unlike eq. (1.1), eq. (1.2) can be visualized in the same way as in the Abelian case (see refs. [44,50]). Namely, starting from a supergraph without external lines, it is possible to obtain a contribution to the β-function by attaching two external lines of the background gauge superfield and contributions to the anomalous dimensions by cutting internal lines. Thus obtained contributions are related by eq. (1.2).
The similarity between eq. (1.2) and the Abelian NSVZ equation [58,59] allows suggesting that the factorization of integrals into double total derivatives also produces the NSVZ equation in the non-Abelian case. This guess was confirmed by numerous calculations in the lowest loops, see, e.g., [47,51,53,60]. This implies that all higher order corrections to the β-function (starting from the two-loop approximation) appear from the δ-singularities. Therefore, to derive the NSVZ relation in the non-Abelian case (for RGFs defined in terms of the bare couplings with the higher covariant derivative regularization), it is necessary only to sum singular contributions and to prove that they give the sum of the anomalous dimensions in the right hand side of eq. (1.2). If this is really so, then the NSVZ scheme for
JHEP10(2019)011
RGFs defined in terms of the renormalized couplings is given by the so-called HD+MSL prescription [14] exactly as in the Abelian case [34,36,57]. 5 This means that the theory is regularized by higher covariant derivatives supplemented by the minimal subtractions of logarithms, when only powers of ln Λ/µ are included into the renormalization constants. 6 The paper is organized as follows: in section 2 we formulate the theory under consideration in N = 1 superspace, regularize it by higher covariant derivatives, and describe the quantization. Also in this section we introduce some auxiliary constructions, which will be needed for the investigation of the loop integrals giving the β-function. RGFs defined in terms of the bare couplings are introduced in section 3. In this section we also present the β-function and the NSVZ relation for it in the form which is mostly convenient for the analysis. In section 4 we demonstrate that the β-function defined in terms of the bare couplings is given by integrals of double total derivatives with respect to loop momenta. Here we also describe the method which allows to construct these integrals in a simple way. This method is applied for calculating the three-loop contribution to the β-function containing the Yukawa couplings in section 5. In particular, we demonstrate that the result exactly coincides with the one obtained in ref. [53] with the help of the standard supergraph calculation.
2 N = 1 supersymmetric gauge theories: regularization, quantization, and auxiliary parameters It is convenient to describe N = 1 supersymmetric gauge theories using N = 1 superspace with the coordinates (x µ , θ), where θ is an auxiliary anticommuting Majorana spinor. In this case N = 1 supersymmetry of the theory is manifest. Moreover, it becomes possible to perform the quantization and calculate quantum corrections in a manifestly N = 1 supersymmetric way [64][65][66]. At the classical level the considered theory in the massless limit is described by the action where V is the Hermitian gauge superfield and φ i are the chiral matter superfields in a representation R of a gauge group G which is assumed to be simple. In the classical theory (2.1) the supersymmetric gauge superfield strength is defined as W a ≡D 2 e −2V D a e 2V /8. The gauge coupling constant is defined as α = e 2 /4π, and the Yukawa couplings are denoted by λ ijk . Note that at the classical level we do not distinguish between bare and renormalized couplings. This difference is essential in the quantum theory. Below, considering the quantum theory, we will denote the bare couplings by α 0 = e 2 0 /4π and λ ijk 0 , while the renormalized couplings will be denoted by α and λ ijk . 5 HD+MSL prescription also gives the NSVZ-like schemes for the Adler D-function [52] and for the renormalization of the photino mass in softly broken N = 1 SQED [61]. 6 This NSVZ scheme is not unique [62]. For example, in N = 1 SQED the on-shell scheme is also NSVZ [63].
JHEP10(2019)011
Below t A and T A are the generators of the fundamental representation and the representation R, respectively. These sets of generators satisfy the conditions We will always assume that tr(T A ) = 0. Also we will use the notation (The generators of the adjoint representation are expressed in terms of the structure constants as ( Under the condition parameterized by a Lie algebra valued chiral superfield A. To quantize the theory (2.1), it is also necessary to take into account that the quantum gauge superfield is renormalized in a nonlinear way [67][68][69] (see also refs. [70,71]). The necessity of this nonlinear renormalization has been demonstrated by explicit calculations in refs. [72,73]. Moreover, the two-loop calculation of the Faddeev-Popov ghost anomalous dimension in [74] showed that without this nonlinear renormalization the renormalization group equations are not satisfied. Thus, it is really needed for quantum calculations. To take into account the nonlinear renormalization, following ref. [68], we substitute the gauge superfield V by the function F (V ) in the action functional. Moreover, it is necessary to replace e and λ by the bare couplings e 0 and λ 0 , respectively.
For obtaining a manifestly gauge invariant effective action we will use the background field method [75][76][77] formulated in N = 1 superspace [1,64]. A distinctive feature of the background field method in the supersymmetric case is the nonlinear background-quantum splitting which in the considered case can be implemented by the substitution where in the right hand side V and V are the quantum and background gauge superfields, respectively. 7 In this case the quantum gauge superfield satisfies the constrain V + = e −2V V e 2V . Due to the background-quantum splitting the gauge invariance produces two different types of gauge transformations. Under the background gauge symmetry the superfields of the theory change as The standard form of the background quantum splitting is e 2F (V ) → e Ω + e 2F (V ) e Ω , the background gauge superfield being defined by the equation e 2V = e Ω + e Ω . However, after the change of variables V → e −Ω + V e Ω + in the generating functional we arrive to eq. (2.6).
JHEP10(2019)011
This invariance remains unbroken at the quantum level and becomes a manifest symmetry of the effective action. Alternatively, the quantum gauge invariance is broken by the gauge fixing procedure. It is convenient to introduce the background supersymmetric covariant derivatives ∇ a and∇ȧ and the gauge supersymmetric covariant derivatives ∇ a and∇ȧ defined by the equations Note that for the purposes of this paper it is more convenient to use a different representation for them in comparison with refs. [74,78]. In the representation (2.9) the covariant derivatives ∇ a and∇ȧ should act on a function X which changes as X → e −A + X. In this case they transform in the same way under both background and quantum transformations. This is also valid for the background covariant derivatives ∇ a and∇ȧ, but only in the case of the background gauge transformations. If we use the background field method and take into account the nonlinear renormalization of the quantum gauge superfield, then the gauge superfield strength is defined as (2.11) Below we will also need some auxiliary parameters. The coordinate-independent complex parameter g describes the continuous deformation of the original theory (corresponding to g = 1) into the theory in which quantum superfields interact only with the background gauge superfield (corresponding to g → 0). This parameter is introduced by making the substitutions Then, it is easy to see that an L-loop contribution to the two-point Green function of the background gauge superfield is proportional to (gg * ) L−1 . Also we introduce the auxiliary chiral superfield 8 g(x, θ). It is added to g in such a way that all quantum corrections containing g will actually depend on the (coordinatedependent) combination
JHEP10(2019)011
Now, let us include the parameters g and g into the classical action. For this purpose we write all terms containing the quantum gauge superfield as integrals over d 4 x d 4 θ ≡ d 8 x with the help of eq. (2.10). After this we modify the result by introducing the auxiliary parameters in the following way: where the integration measures are Note that we do not include the superfield g in the first term of eq. (2.14), which does not contain the quantum gauge superfield V . This allows to avoid breaking of the background gauge invariance (2.7). However, the action (2.14) is invariant under the quantum gauge transformations (2.8) only if g = 0 (but for an arbitrary value of the coordinate independent parameter g). Nevertheless, it is not important, because the parameter g is auxiliary and actually we are interested only in the cases when g = 0, 1 and g = 0.
The most important ingredient needed for deriving the NSVZ β-function for RGFs defined in terms of the bare couplings is the higher covariant derivative regularization [37,38]. In this paper we will use the version similar to the one considered in ref. [78] with some modifications appearing due to the presence of the auxiliary parameters and the function F (V ). To regularize a theory by higher covariant derivatives, at the first step, it is necessary to add a higher derivative term S Λ to its action. As a result, propagators will contain higher degrees of momenta that, in turn, leads to the finiteness of the regularized theory beyond the one-loop approximation [82]. In the case g = 0 the regularized action S reg = S + S Λ invariant under both background and quantum gauge transformations can be constructed as where the higher derivative regulators R(x) and F (x) are functions rapidly growing at infinity which satisfy the conditions R(0) = F (0) = 1. In eq. (2.16) and below the subscript Adj means that remains unbroken. This can be done similarly to constructing the action (2.14). However, it is more difficult due to the presence of the function R(x). We present this function in the form Then the regularized action can be written as It is important that this action is invariant under the background gauge transformations, but the quantum gauge invariance exists only for g = 0. In this case the action (2.19) is reduced to eq. (2.16). Moreover, all terms containing the quantum superfields depend on auxiliary parameters only in the combination g = g + g. (The first term, which depends on the constant g and does not depend on the superfield g, contains only the background gauge superfield.) To obtain a manifestly gauge invariant effective action, it is necessary to use a gauge fixing term invariant under the background transformations (2.7). Taking into account that a higher derivative regulator should be also inserted into this term [78], the gauge fixing action can be chosen as Certainly, the quantization procedure also requires to introduce the Faddeev-Popov action. The Faddeev-Popov ghosts and the corresponding antighosts in the supersymmetric case are described by the chiral superfields c A andc A , respectively. The action for them obtained in a standard way takes the form (2.21) In the case of using the background superfield method it is also necessary to take into account the Nielsen-Kallosh ghost action
JHEP10(2019)011
Here the Nielsen-Kallosh ghosts b are chiral anticommuting superfields in the adjoint representation, which interact only with the background gauge superfield. The arrow points out that the parameters g and e 0 can be excluded from the Nielsen-Kallosh action by the change of variables b → e 0 gb; b + → e 0 g * b + in the generating functional. (It is easy to see that the corresponding determinant is equal to 1.) After the gauge fixing procedure the quantum gauge transformations (2.8) are no longer a symmetry of the total action (that, in particular includes the gauge fixing term and ghosts). The total action is invariant under the BRST transformations [83,84]. In N = 1 superspace the BRST transformations have been formulated in ref. [67]. For the theory considered in this paper the BRST invariance is a symmetry of the action only in the case g = 0, but for an arbitrary value of the coordinate independent parameter g.
As we mentioned above, the one-loop divergences cannot be regularized by adding the higher derivative term to the action. For this purpose it is necessary to supplement the higher derivative method by the Pauli-Villars regularization which is introduced by inserting the Pauli-Villars determinants into the generating functional [85]. According to refs. [78,86], to cancel the one-loop divergences appearing in supersymmetric gauge theories, one should introduce three chiral Pauli-Villars superfields ϕ a with a = 1, 2, 3 in the adjoint representation of the gauge group, and chiral superfields Φ i in a certain representation R PV which admits a gauge invariant mass term. The superfields ϕ a cancel one-loop divergences coming from the loops of the quantum gauge superfield, of the Faddeev-Popov ghosts and of the Nielsen-Kallosh ghosts. The superfields Φ i cancel the one-loop divergences coming from the matter loop. This occurs if the generating functional is defined as where Dµ denotes the measure of the functional integration and c = T (R)/T (R PV ). The sources are included into 9 The Pauli-Villars determinants are constructed as
JHEP10(2019)011
and M jk M * ki = M 2 δ j i . (We assume that the representation R PV is chosen in such a way that this condition can be satisfied. For example, it is possible to use the adjoint representation.) To obtain a regularized theory with a single dimensionful parameter, it is necessary to require that the Pauli-Villars masses M ϕ and M should be proportional to the parameter Λ, It is important that we consider a regularization for which a ϕ and a do not depend on couplings. The effective action is standardly defined as the Legendre transform of the generating functional W = −i ln Z for connected Green functions, where the sources should be expressed in terms of (super)fields from the equations (2.30)
Renormalization and RGFs defined in terms of the bare couplings
In this section we present the β-function defined in terms of the bare couplings in a form which is the most convenient for proving the factorization of the corresponding loop integrals into integrals of double total derivatives. This factorization is an important step towards constructing the all-loop perturbative derivation of the exact NSVZ β-function.
That is why in this section we also rewrite the NSVZ relation (1.2) in such a form that can be used as a starting point of this derivation. To find the β-function defined in terms of the bare couplings, we consider the twopoint Green function of the background gauge superfield. Note that in our conventions the term "two-point" in particular means that the auxiliary superfield g is set to 0, but the dependence on the parameter g is kept. It is easy to see that the considered Green function depends on g, α 0 , λ 0 , and λ * 0 only via the combinations gg * α 0 and gg * λ ijk 0 λ * 0mnp . (For simplicity, below we will denote the latter one by gg * λ 0 λ * 0 .) Really, in the case g = 0 the total action depends on gg * α 0 , gλ 0 and g * λ * 0 . However, the numbers of λ 0 and λ * 0 in any supergraph contributing to the considered Green function are equal. Therefore, the Yukawa couplings enter it only in the combination gg * λ 0 λ * 0 . Similar arguments also work for the two-point Green functions of the quantum gauge superfield, of the Faddeev-Popov ghosts, and for the two-point Green function φ * i φ j of the matter superfields. Below we will use the notation ρ ≡ |g| 2 = gg * , (3.1) so that the above mentioned two-point Green functions actually depend on ρα 0 and ρλ 0 λ * 0 . Due to the background gauge invariance the two-point Green function of the background gauge superfield is transversal and (in the massless limit) can be written as
JHEP10(2019)011
where the supersymmetric transversal projection operator is defined by the equation With the help of the Slavnov-Taylor identities [87,88] (and some other similar equations) it is possible to prove that quantum corrections to the two-point Green function of the quantum gauge superfield are also transversal, Also we will need the two-point Green functions of the Faddeev-Popov ghosts and of the matter superfields, Renormalized couplings α, λ and the renormalization constants pressed in terms of α and λ in the limit Λ → ∞. Note that due to the non-renormalization of the superpotential [13] the renormalized Yukawa couplings are related to the bare ones by the equation Similarly, due to the non-renormalization of the triple ghost-gauge vertices [14] the renormalization constants can be chosen in such a way that We will always assume that the renormalization constants satisfy eqs. (3.7) and (3.8).
(Certainly the renormalization constants are not uniquely defined [89], and these constrains partially fix an arbitrariness in choosing a subtraction scheme.) It is important that in the non-Abelian case the quantum gauge superfield is renormalized in a nonlinear way [67][68][69]. The non-linear renormalization can be realized as a linear renormalization of an infinite set of parameters. For example, in the lowest approximation it is possible to present the function F (V ) in the form where y 0 is a new bare parameter and (3.10)
JHEP10(2019)011
Then, the result for the nonlinear renormalization obtained in [72,73] can be equivalently written in the form where ξ is the renormalized gauge parameter and k 1 is a finite constant which appears due to the arbitrariness in choosing a subtraction scheme. The explicit calculation of ref. [74] demonstrated that the renormalization group equations cannot be satisfied without introducing the parameter y 0 (or, possibly, implementing the nonlinear renormalization by some different way). Certainly, in higher orders an infinite set of parameters similar to y 0 is needed. All these parameters are similar to the gauge fixing parameter ξ 0 , because by a proper change of variables in the generating functional it is possible to prove that a nonlinear renormalization is equivalent to a nonlinear change of a gauge [67]. That is why below we will include the gauge fixing parameter and the parameters of the nonlinear renormalization inside the function F (V ) into a single set The corresponding renormalized values will be denoted by Y = (ξ, y, . . .).
We believe that the NSVZ relation is valid for RGFs defined in terms of the bare couplings in the case of using the higher covariant derivative regularization. These RGFs are defined by the equations and do not depend on a renormalization prescription for a fixed regularization [57]. It is easy to see that RGFs defined in terms of the bare couplings can be obtained by differentiating the corresponding Green functions. For example, the β-function defined in terms of the bare couplings can be constructed by differentiating the quantum corrections in the twopoint Green function of the background gauge superfield in the limit of the vanishing external momentum, (3.14) Note that the term 1/(gg * α 0 ) appears in the function d −1 in the tree approximation and corresponds to the first term in eq. (2.19). The limit p → 0 is needed for removing terms proportional to (p/Λ) k , where k is a positive integer. The equality follows from the finiteness of the function d −1 expressed in terms of the renormalized couplings.
JHEP10(2019)011
It is well known that for g = 1 the β-function can be presented as the series where the (Y 0 -independent) coefficient is obtained by calculating the one-loop contribution to the β-function. (For the considered regularization the details of this calculation can be found in [78].) For g = 1 it is easy to see that the L-loop contribution to the β-function is proportional to gg * L+1 = ρ L+1 . Therefore, the dependence of the expression β(ρα 0 , ρλ 0 λ * . . If we consider g and g * as independent variables, then Consequently, where +0 means that ρ = 0, but ρ → 0. Taking into account that the limit ρ → 0 corresponds to the theory in which quantum superfields interact only with the background gauge superfield, so that nontrivial quantum corrections exist only in the one-loop approximation, we obtain Therefore, the β-function defined in terms of the bare couplings (for the original theory which corresponds to g = 1) can be calculated with the help of the equation Due to the finiteness of the functions dimensions of the quantum superfields can also be related to the corresponding Green functions by the equations In the one-loop order these anomalous dimensions contain terms proportional to α 0 and λ 0 λ * 0 (the latter ones appear only in γ φ i j ), (3.24) and the terms corresponding to the L-loop approximation are proportional to gg * L = ρ L . Using this fact, from the identity (3.18) we obtain This implies that for deriving the NSVZ relation (1.2) it is sufficient to prove that to this equation with the help of eqs. (3.14) and (3.21) -(3.23). In eq. (3.26) the derivative with respect to ln Λ is very important, because it removes infrared divergences which could appear in the limit of the vanishing external momentum. Explicit loop calculations (e.g., in refs. [51,53]) demonstrate that loop integrals written without d/d ln Λ are not well defined, while after the differentiation all bad terms disappear.
The derivatives with respect to g and g * are not so important and can be excluded from eq. (3.26). Certainly, in this case it is necessary to add the constant corresponding to the one-loop contribution, For g = 1 this identity was first suggested in ref. [14]. However, for deriving the NSVZ relation in all loops it is more preferable to use eq. (3.26).
The left hand side of eq. (3.26) can be constructed starting from the expression for the two-point Green function of background gauge superfield (3.2). To extract the function d −1 , it is convenient to make the formal substitution where θ 4 ≡ θ a θ aθȧθȧ .
where v A 0 = const and X µ = (x i , ix 0 ) are the Euclidean coordinates. The corresponding Euclidean momenta are denoted by From eq. (3.31) we see that v A (P ) is essentially different from 0 only in a small region of the size 1/R → 0. This implies that substituting the functions (3.30) into eq. (3.2) we automatically obtain the limit P → 0 (or, equivalently, p → 0), which is needed for constructing RGFs defined in terms of the bare couplings. Let us consider quantum corrections encoded in the expression where S total includes the usual action, the gauge fixing term, and the ghost actions. (Certainly, the terms proportional to Λ −k , where k is a positive integer, should be omitted).
Then we consider a part of ∆Γ corresponding to the two-point Green function of the background gauge superfield. Performing the Wick rotation and making the substitution (3.29), after some transformations, in the limit R → ∞ we obtain where we have introduced the notation Thus, we see that the substitution (3.29) allows extracting the β-function defined in terms of the bare couplings from the considered part of the effective action in the case of using the higher covariant derivative regularization. (In the case of using the dimensional reduction one should be much more careful, see [41,42] for details.) Differentiating eq. (3.33) with respect to the parameters g and g * and multiplying the result by the factor 2π/V 4 , we obtain the left hand side of eq. (3.26). In turn, the derivatives with respect to the coordinate-independent parameters g and g * can be expressed in terms of the derivatives with respect to the chiral superfield g and the antichiral superfield g * , respectively. Really, all terms in the action containing quantum superfields depend only on the combinations g and g * , see eqs. which depends on g and g * in a different way is the first term in eq. (2.19), but it does not affect quantum corrections and does not enter ∆Γ. Therefore, it is possible to relate the derivatives of ∆Γ with respect to g and g * to the derivatives with respect to g and g * , Thus, to derive the NSVZ relation, it is sufficient to prove the identity V denotes a part of Γ which is quadratic in the background gauge superfield and does not contain the other superfields except for g. Note that writing eq. (3.37) we took into account that S Note that here we do not set the auxiliary external superfields g and g * to 0, because eq. (3.37) contains the derivatives with respect to these superfields. In this paper we will consider only N = 1 supersymmetric gauge theories with a simple gauge group. In this case it is easy to see that any invariant tensor I AB should be proportional to δ AB . 10 Therefore, for simple gauge groups With the help of eqs. (3.38) and (3.39) for a simple gauge group it is possible to rewrite eq. (3.37) in the form mostly convenient for proving, namely, According to the above discussion, for the theory regularized by higher covariant derivatives this equation is equivalent to the NSVZ relations (1.1) and (1.2) for RGFs defined in terms of the bare couplings. Below we will prove that the left hand side of eq. (3.40) is given by integrals of double total derivatives. 10 The considered invariant tensor satisfies the equation [T A Adj , I] = 0, so that it commutes with all generators of the adjoint representation. For a simple group the adjoint representation is irreducible. Therefore, IAB should be proportional to δAB.
JHEP10(2019)011
4 The β-function as an integral of double total derivatives
The Slavnov-Taylor identity for the background gauge invariance
The background gauge invariance is a manifest symmetry of the theory under consideration (even in the presence of the auxiliary superfield g). At the quantum level symmetries are encoded in the Slavnov-Taylor identities [87,88]. The Slavnov-Taylor identity corresponding to the background gauge transformations constructed in this section is a very important ingredient for the all-loop proof of the factorization into double total derivatives. This identity is derived by standard methods, namely, it is necessary to make the change of variables in the functional integral (2.23), which does not change the generating functional Z. This change of variables coincides with the background gauge transformations of the quantum superfields. Due to the background gauge invariance, the total gauge fixed action and the Pauli-Villars determinants remain unchanged if the background gauge superfield is also modified as However, the source term S sources transforms nontrivially. This implies that in the linear order in A the invariance of the generating functional W = −i ln Z under the change of variables (4.1) can be expressed by the equation where the variations of various superfields under the infinitesimal background gauge transformations are written as 11 where B is a function(al) depending on the superfields of the theory. 11 The expression for δV = δV B t B is obtained in the standard way from the identity 0 = δ[V , e 2V ].
JHEP10(2019)011
Rewriting eq. (4.4) in terms of (super)fields, we obtain the equation which expresses the manifest background gauge invariance of the effective action, (4.7) It is important that in this equation (super)fields are not set to 0, so that this equation encodes an infinite set of identities relating Green functions of the theory. That is why we will call it the generating Slavnov-Taylor identity.
Considering A and A + as independent variables and differentiating eq. (4.7) with respect to A A we obtain where the matrix [f (X) Adj ] AB is defined by the equation Expressing the generators of the adjoint representation in terms of the structure constants it is possible to rewrite the generating Slavnov-Taylor identity (4.7) corresponding to the background gauge symmetry in the form where the operatorÔ A is given by the expression To verify eq. (4.10), it is necessary to take into account that a derivative with respect to a chiral superfield is also chiral and use the identity valid for an arbitrary chiral superfield φ.
It is important that due to eq. (4.10) the effective action satisfies the equation whereŌ Ȧ a ≡ −Dȧ This can be verified with the help of the equality Therefore, taking into account that f AAC = 0, after the differentiation we see that Note that here all fields (including the background gauge superfield V ) should be set to 0, but the auxiliary superfield parameter g remains arbitrary. To derive the last equality, it is necessary to use eq. (4.13) and the identity This expression can be presented as a sum of certain one particle irreducible (1PI) supergraphs, because the effective action is the generating functional for 1PI Green functions (see, e.g., [90]). Therefore, it can be calculated using the tools of the perturbation theory, which include standard rules for working with supergraphs. Note that the external lines in the superdiagrams contributing to the expression (4.19) are attached to the points x, y, z 1 , and z 2 and correspond to θ 2θȧ v B x , θ 2θḃ v B y , 1, and 1, respectively.
JHEP10(2019)011
Evidently, any two points of an 1PI graph can be connected by a chain of vertices and propagators. This allows to shift v B in an arbitrary point of the supergraph, because additional terms produced by such shifts are suppressed by powers of 1/R. Really, propagators contain derivatives with respect to the superspace coordinates acting on δ 8 xy . Certainly, v B commutes with ∂/∂θ a and ∂/∂θȧ due to the independence of θ. As for the derivatives with respect to the space-time coordinates x µ , the shifting of v B from the superspace point 1 to the point 2 is made according to the procedure where we took into account that the space-time derivatives of v B are proportional to powers of 1/R, see, e.g., eq. (3.30). (To be exact, the dimensionless parameter in this case is 1/(ΛR).) Certainly, the terms proportional to 1/R can be omitted in the limit R → ∞, which is actually equivalent to the limit p → 0 in equations like eq. (3.14). Below we will always ignore them.
With the help of equations like (4.20) we can shift v B to an arbitrary point of the supergraph. Let us shift both v B in eq. (4.19) to the point z 1 , Note that in this case the usual coordinates x µ on which v B depends should be replaced by the chiral coordinates y µ = x µ + iθȧ(γ µ )ȧ b θ b to obtain a manifestly supersymmetric expression. Certainly, this is possible, because the difference is proportional to powers of 1/R and vanishes in the limit R → ∞. Also it is possible to prove thatθȧ andθ˙b in eq. (4.19) can be shifted in an arbitrary point. Really, let us consider a supergraph contributing to the expression (4.19). It is calculated according to the well-known algorithm (see, e.g., [65]), the result being given by an integral over the full superspace. 12 The integral over the full superspace includes integration over d 4 θ and does not vanish only if the integrand contains θ 4 = θ 2θ2 . Note that new θ-s cannot be produced in calculating the supergraphs, in spite of their presence inside the supersymmetric covariant derivatives. Therefore, any supergraph with θ-s on external lines does not vanish only if it contains at least two right components θ a and two left componentsθȧ. The expression (4.19) is quadratic inθ, which can be shifted along a pass consisting of vertices and propagators using equations like Here O(1) denotes terms which do not containθ. They appear when the covariant derivatives are commuted withθ-s with the help of the identity {θȧ,D˙b} = δȧ b . The arrow in eq. (4.22) points that we omit them, because these terms do not contribute to eq. (4.19). Really, the original expression is quadratic inθ, so that the contributions of O(1) terms are no more than linear inθ-s. This implies that they are removed by the final integration over d 4 θ.
JHEP10(2019)011
Thus, we see thatθ-s in supergraphs contributing to eq. (4.19) can be shifted in an arbitrary way using equations like (4.22). This allows shiftingθȧ andθ˙b from the points x and y to the point z 2 , θȧ After this, we use the identity (4.24) (Here we essentially use that bothθ-s are placed into a single point z 2 .) As a result, we obtain that after the shifts (4.21) and (4.23) the considered expression is written as . (4.25) Note that due to the antichirality ofθ 2 this expression remains manifestly supersymmetric. The right components θ cannot be shifted in an arbitrary way, because the considered expression is quartic in θ a (here we count only the degree of the right components). However, in this case it is possible to use a special identity derived in ref. [55]. Let us consider an 1PI supergraph contributing to the expression (4.25) and construct two passes connecting the point x with z 1 and the point z 1 with y, see figure 1. The corresponding sequences of vertices and propagators we will denote by A and B, respectively. Actually, A and B are products of the expressions in which various derivatives (namely, ∂ µ , D a ,Dȧ, and 1/∂ 2 ) act on superspace δ-functions. Then according to ref. [55] θ 2 ABθ 2 + 2(−1) P A +P B θ a Aθ 2 Bθ a − θ 2 Aθ 2 B − Aθ 2 Bθ 2 = O(θ), (4.26) where (−1) P X is the Grassmannian parity of an expression X, and O(θ) denotes terms which are no more than linear in θ. For completeness, we also present the proof of this identity in appendix A. (The point x is on the left of each term, the point y is on the right, and the point z 1 is between A and B.) Evidently, the O(θ) terms in eq. (4.26) do not contribute to eq. (4.25), because the integral over d 4 θ which remains after the calculation of the supergraph removes them. Therefore, with the help of eq. (4.26) the left hand side of eq. (3.40) can be rewritten in the form where we take into account that all propagators are Grassmannian even. This expression can be equivalently expressed in terms of the operatorÔ A as
JHEP10(2019)011
x y z 1 A B d d © Figure 1. The points x, z 1 , and y of a supergraph can be connected by a pass which consists of the gauge, matter, and ghost propagators. A corresponds to its part connecting the points x and z 1 , and B corresponds to the part connecting the points z 1 and y.
To see this, it is necessary to use the identity where we also took the identity into account. Eq. (4.30) is a convenient starting point for presenting the left hand side of eq. (3.40) in the form of an integral of double total derivatives. This will be made in the next section.
Formal calculation
Numerous explicit calculations of the β-function reveal that it is given by integrals of double total derivatives in the momentum space for both the Abelian [44,50] and non-Abelian [47-49, 51, 53] N = 1 supersymmetric theories regularized by higher covariant derivatives. In the Abelian case this factorization into integrals of double total derivatives has been proved in all orders in refs. [54,55]. For generalizing this result to the non-Abelian case we consider the left hand side of eq. (where ρ = gg * ) and present it in the form (4.30). Below we will demonstrate that it is given by integrals of double total derivatives in the momentum space in all orders.
JHEP10(2019)011
An important observation is that the expression (4.30) formally vanishes as a consequence of the Slavnov-Taylor identity (4.7). In fact, it is not true because of singular contributions, which will be discussed in section 4.5. However, first, we describe the formal calculation.
As a starting point we consider the Slavnov-Taylor identity (4.7) in which we set the superfields V , φ i , c A , andc A to 0. However, the auxiliary superfields remain arbitrary. This gives the equation Its left hand side is a functional of the background gauge superfield V and the auxiliary external superfields g and g * . Next, we differentiate eq. (4.33) with respect to V B y and, after this, set the background gauge superfield to 0. Then using eq. (4.5) we obtain where we also took into account that (even for g = 0) .
1) and (4.3) in the form
where ε aB is a coordinate independent anticommuting parameter. This implies that A B = ε aB θ a . Substituting these parameters into eq. (4.34) and differentiating with respect tō εȧ B , we obtain the equation the left hand side of which being a functional of the auxiliary superfield g. Therefore, it is possible to differentiate with respect to g and g * , so that the part of eq. (4.30) obtained
JHEP10(2019)011
from the second term in the round brackets vanishes. The part obtained from the first term vanishes due to the same reason. This implies that LHS of eq. (3.40) = 2 d 8 x d 8 y d 6 z 1 d 6z The similar arguments can be used for this expression (which corresponds to the third term in the round brackets in eq. (4.30)). In this case it is necessary to choose the superfield A as where a B µ are real coordinate-independent parameters. Therefore, A B = ia B µ y µ , where the chiral coordinates y µ and the antichiral coordinates (y µ ) * are defined as respectively. In this case from eq. (4.34) for arbitrary g we formally 13 obtain the identity Consequently, the expression (4.39) seems to vanish. This implies (see eqs. (3.19) and (4.32)) that all higher order corrections to the β-function vanish and the β-function is completely defined by the one-loop approximation. Certainly, it is not true. The matter is that the above calculation was made formally and something very important was missed. The origin of the incorrect result can be found analyzing the explicit calculations made with the higher covariant derivative regularization [45][46][47][50][51][52][53]. They demonstrate that all integrals giving the β-function are integrals of double total derivatives in the momentum space, and that all loop corrections come from δ-singularities. Below in section 4.4 we will see that the integrals of (double) total derivatives appear due to the presence of x µ in eq. (4.40). These total derivatives produce singular contributions which were ignored in the formal calculation. Note that eq. (4.37) does not contain x µ , so that the momentum total derivatives do not appear in the first two terms of eq. (4.30). This implies that the higher (L ≥ 2) loop corrections to the β-function are completely determined by the third term inside the round brackets in eq. (4.30). It is this term that produces the double total derivatives in the momentum space. To derive this fact in section 4.4, here we relate this term with the second variation of the functional integral giving the effective action under the change of variables corresponding to the background gauge transformations.
Let us set all quantum superfields to 0. Then the effective action will depend only on the external superfields V and g. Taking into account that (at least, in the perturbation theory) the vanishing of the quantum (super)fields corresponds to the vanishing of the sources, we obtain Γ quantum fields=0 where Z is given by the functional integral (2.23). 13 This identity is not actually valid, because the parameter A too rapidly grows at infinity.
JHEP10(2019)011
Similarly to the derivation of the Slavnov-Taylor identity in section 4.1, we perform the change of variables (4.1) in this functional integral, but the parameter A will be chosen in the form (4.40). Let us denote the variation of the effective action under the background gauge transformations of the quantum superfields byδ a . (This variation does not include the transformation of the background gauge superfield V .) Taking into account that the generating functional (4.43) remains the same after the considered change of variables, while the total action is invariant under the background gauge transformation, we obtain the equation similar to eq. (4.7), which is certainly a mere consequence of the Slavnov-Taylor identity. (Note that the background superfield V and the external superfield g are not so far set to 0.) Differentiating eq. (4.44) with respect to a B µ gives The derivative of the effective action with respect to V A entering this equation can be presented as the functional integral where the angular brackets are defined by eq. (4.6) and we also introduced the notation In this functional integral it is possible to perform again the change of variables (4.1) with the parameter A = ib B µ t B y µ . After this change of variables we set the background gauge superfield V to 0. As a result, we obtain the identity As usual, the subscript "fields = 0" means that the superfields V , φ i , c,c, and V are set to 0, while the chiral superfield g can take arbitrary values. The symbolδ b denotes the variation under the transformations (4.1) of the quantum superfields parameterized by A = ib A µ t A y µ , the background gauge superfield V being fixed. Let us transform the right hand side of this expression taking into account that the total action (4.2) and the Pauli-Villars actions S ϕ and S Φ (given by eqs. (4.49) where δ b V is given by eq. (4.5). From eq. (4.49) it is possible to obtain the identities They can be derived by commuting the derivative with respect to V B y to the left, if we take into account that it commutes withδ b and use the equation which is valid because f AAC = 0. The operatorδ b in eq. (4.48) acts on the expression inside the angular brackets and on the actions S total , S ϕ , and S Φ in the exponents. Eqs. (4.49) and (4.50) allow expressing the result in terms of the derivatives with respect to the background gauge superfield. From the other side, the derivative of the angular brackets with respect to V also acts on the expression inside these brackets and on the actions in the exponents. This implies that The expression δ b V entering this equation is given by eq. (4.5). Differentiating it with respect to b B µ and setting the background gauge superfield to 0, we obtain Therefore, taking into account eq. (4.44), we see that the formal calculation gives (Note that in this expression we do not set the external superfield g to 0.) However, in what follows we will see that the first equality is not true, because doing the formal calculation we ignore singular contributions. These singular contributions will be discussed below.
JHEP10(2019)011
If we apply the operator to the left hand side of eq. (4.54) and, after this, set the auxiliary external superfield g to 0, then we obtain the expression (4.39), According to this equation all higher order corrections to the β-function vanish. Certainly, it is not true. As we have already mentioned above, such a result appears, because singular contributions were missed in the formal calculation described above.
Although from eq. (4.56) we obtain the same (incorrect) formal result as from eq. (4.42), eq. (4.56) will be very useful below, because it allows explaining the factorization of the loop integrals giving the β-function into integrals of double total derivatives.
Integrals of double total derivatives
Although the calculation described in the previous section is formal, it allows explaining why the β-function (defined in terms of the bare couplings with the higher derivative regularization) is given by integral of double total derivatives in the momentum space. This can be done starting from eq. (4.56). Its left hand side is related to the β-function by eq. (4.32). In this section we present the right hand side of eq. (4.56) as a sum of integrals of double total derivatives and formulate a prescription for constructing these integrals.
Let ϕ I denotes the whole set of superfields of the theory, where the index I corresponds to quantum numbers with respect to the gauge group, and j I are the corresponding sources. In the momentum representation the propagators can be presented in the form (4.57) where Z 0 is the generating functional for the free theory.
Let us make the change of the integration variables (4.1) with the parameter A given by eq. (4.40) in the generating functional Z with the sources and the background gauge superfield set to 0. Although under this change of variables the generating functional remains invariant, the propagators and vertices transform nontrivially. Really, if S 2 and S int are the quadratic part of the action and the interaction, respectively, then , where Z 0 ≡ Dϕ exp iS 2 ϕ (ϕ) +iϕ·j , (4.59) are also different from the old ones. Now, let us try to understand how the evident equality Z = Z appears at the level of superdiagrams. For this purpose we write the transformation (4.1) with the parameter (4.40) and concentrate on the terms linear in x µ , where (T A ) I J are the generators of the gauge group in a relevant representation, and the terms which do not explicitly depend on x µ are denoted by dots. 14 Then the propagator changes as Using this equation it is possible to demonstrate that in the momentum representation the change of the propagator (4.61) is related to its derivative with respect to the momentum, Next, let us proceed to the interaction vertices. An n-point vertex can be formally written in the form In (x 1 , x 2 , . . . , x n ; θ 1 , θ 2 , . . . , θ n ) 14 Note that if the sources are not set to 0, then . . In this case the arguments of the effective action change as ϕI = δW/δj I → ϕ I = ϕI + ia A µ x µ (T A )I J ϕJ + . . . This implies that the considered change of the integration variables actually generates the transformationδa.
. . , k n ; θ 1 , θ 2 , . . . , θ n ) + . . . (4.72) Next, it is necessary to note a resemblance between eq. (4.69) and eq. (4.65). In eq. (4.65) each generator actually corresponds to a propagator coming from the considered vertex exactly as momenta in eq. (4.69). This implies that such equations appear in pairs. Say, if the considered vertex is placed inside a certain graph in which the momentum k µ 2 can be expressed in terms of k µ 3 , . . . , k µ n , then where c 3 , . . . c n are some numerical coefficients. In this caseδ aV I 1 I 2 ...In will be proportional to Thus, the variationsδ a of vertices inside a supergraph contain only derivatives with respect to independent momenta. It is well known that due to the momentum conservation in each vertex (encoded in equations like eq. (4.69)) in an L loop graph without external lines only L momenta are independent. (In our case this is also true, because the momenta of all external lines vanish.) Therefore, we can mark L propagators whose momenta are considered as independent parameters, see figure 2 (which corresponds to the case L = 3). Then, using the resemblance between eq. (4.69) and eq. (4.65), it is possible to construct L independent structures in which the generators correspond to certain propagators, e.g., to the propagators whose momenta we consider as independent parameters. Any graph in which T A stands on a certain propagator can be expressed in terms of these structures.
Let us consider a closed loop, consisting of vertices and propagators, which includes one of the independent momenta, say, k µ . Then according to eqs. (4.63), (4.72) and (4.75), from the terms containing the derivative ∂/∂k µ we obtain the contribution to the first variation of the considered supergraph given by an integral of a total derivative where the generator T A should be inserted on the propagator with the momentum k µ . This is graphically illustrated in figure 2.
The second variation is calculated similarly. Thus, we have a prescription, how to find integrals of double total derivatives which contribute to the β-function. The starting point is the expression (4.77) First, we consider a certain L loop supergraph contributing to it and (in an arbitrary way) mark L propagators with the (Euclidean) momenta Q µ i considered as independent. Let a i be the indices corresponding to their beginnings. Next, it is necessary to calculate the supergraph using the standard rules. The result includes a coefficient which contains couplings and some group factors. This coefficient should be replaced by a certain differential operator which is obtained by calculating the "second variation" of the expression i δ b i a i , where δ b i a i comes from the marked propagators, formally setting In other words, we make the replacement Next, one should multiply the result by the factor where the sign "−" appears, because
JHEP10(2019)011
Finally, it is necessary to rewrite the result in terms of ρ = gg * and perform the integration dρ. (4.82) The expression obtained according to the algorithm described above coincides with a contribution to β/α 2 0 coming from the sum of all superdiagrams which are obtained from the original vacuum supergraphs by attaching two external lines of the background gauge superfield in all possible ways.
Below in section 5 we will verify this algorithm for some particular examples.
The role of singularities
From the discussion of the previous section we can conclude that in the case of using the higher derivative regularization the integrals giving the β-function are integrals of double total derivatives. This agrees with the results of explicit calculations which also reveal that all higher order corrections to the β-function originate from singularities of the momentum integrals. Actually it is the contributions of the singularities that have been missed in the formal calculation of section 4.3. Let us demonstrate, how they appear, by considering the integral as a simple example. In eq. (4.83) Q µ denotes the Euclidean momentum, and f (Q 2 ) is a nonsingular function which rapidly tends to 0 in the limit Q 2 → ∞.
If we calculate the integral (4.83) formally, then it vanishes, because it is an integral of a total derivative. Actually, using the divergence theorem, we reduce the integral under consideration to the integral over the infinitely large sphere S 3 ∞ in the momentum space. Evidently, the result is equal to 0, because the function f vanishes on this sphere, where dS µ is the integration measure on S 3 ∞ . Actually, in section 4.3 we made a similar calculation. However, the result obtained in eq. (4.84) is evidently incorrect due to a singularity of the integrand at Q µ = 0.
To correct the above calculation, it is necessary to surround the singularity by a sphere S 3 ε of an infinitely small radius ε (with the inward-pointing normal) and take into account the integral over this sphere,
JHEP10(2019)011
Let us visualize this result by reobtaining it in a different way. First, we note that defining the integral I we actually do not distinguish between the expression (4.83) and the integral However, it is possible to introduce the operator ∂/∂Q µ which is similar to ∂/∂Q µ , but, by definition, the integral of it is always reduced to the integral over the sphere S 3 ∞ only. Moreover, we assume that this operator is commuted with Q µ /Q 4 in the integrand with the help of the identity (4.87) In terms of the operator ∂/∂Q µ the considered integral is defined as Then, if we integrate by parts taking into account vanishing of the integral of a total derivative and eq. (4.87), we obtain From this equation we see that the integral I is determined by a contribution of the δ-singularity. Note that in the coordinate representation where a is a certain function, while Such a structure of loop integrals appears in the Abelian case (see, e.g., [54]). In the non-Abelian case the structure analogous to (4.90) is the right hand side of eq. (4.56), while its left hand side is an analog of the expression (4.91). Therefore, it becomes clear that making the calculations formally in the previous section we ignored the δ-singularities. Thus, to make the calculation properly, it is necessary to take into account singular contributions, which generate all terms containing the anomalous dimensions in the NSVZ equation (1.2) for RGFs defined in terms of the bare couplings. We hope to describe how to sum these singularities in a future publications. Figure 3. Graphs generating terms containing the Yukawa couplings in the three-loop β-function.
JHEP10(2019)011
We point out independent momenta and indices corresponding to beginnings of the respective propagators using the same notations as in the calculation described in the text.
Verification in the lowest orders
To confirm the correctness of the general arguments presented above, it is desirable to verify them by explicit calculations in the lowest orders. In section 4.4 we have formulated the prescription, how to construct integrals of double total derivatives which appear in calculating the β-function in the case of using the higher covariant derivative regularization. For obtaining these integrals one usually calculates a set of superdiagrams which are obtained from a given graph by attaching two external lines of the background gauge superfield in all possible ways. For example, in ref. [51] this has been done for the three-loop contributions quartic in the Yukawa couplings. All three-loop terms containing the Yukawa couplings have been subsequently found in ref. [53]. (Both these calculations were made in the Feynman gauge ξ = 1 for the higher derivative regulator K = R.) Unfortunately, at present no other three-loop contributions to the β-function are known in the case of using the higher covariant derivative regularization. Nevertheless, the results of refs. [51,53] allow verifying the general argumentation of the present paper by comparing the algorithm described in section 4.4 with the result of the standard calculation.
A part of the three-loop β-function which contains the Yukawa couplings originates from the supergraphs presented in figure 3. Within the standard technique used in refs. [51,53] they generate large sets of superdiagrams with two external lines corresponding to the background gauge superfield which have to be calculated. However, now it is possible to derive the result for their sums by a different (and much simpler) way. Namely, we
JHEP10(2019)011
should calculate the (specially modified) superdiagrams without external lines and, after this, follow the algorithm described in section 4.4. Here we describe this calculation for the graph (1) in details and present the similar results for the remaining graphs (2) - (5).
As a starting point we find the contribution of the graph (1) to the expression (4.77). Due to the derivatives with respect to the superfields g and g * and subsequent integrations, two vertices in this graph take the form Then, after some standard calculations, for the contribution of the supergraph (1) (in the Euclidean space after the Wick rotation) we obtain Note that although here the superfield g is set to 0, the coordinate independent parameter g can in general be present in the Yukawa vertices and gauge propagators. However, the graph (1) appears to be independent on g and, therefore, on ρ = gg * . According to the prescription described in section 4.4 for obtaining the contribution to the β-function, at the first step, it is necessary to replace the factor λ ijk 0 λ * 0ijk (which in the original graph comes from the expression λ ijk 0 λ * 0pmn δ p i δ m j δ n k ) by a certain differential operator acting on the integrand in eq. (5.2). To construct this operator, we consider the propagators with the independent momenta K µ and Q µ . Let they are proportional to δ m j and δ n k , respectively. Then, we construct the second "variation" formally replacing This operation changes the Yukawa coupling dependent factor in eq. (5.2) as Replacing the factor λ ijk 0 λ * 0ijk in eq. (5.2) by this operator and taking into account that the Euclidean momenta K µ and Q µ enter the integrand of eq. (5.2) symmetrically, we obtain the expression
JHEP10(2019)011
To simplify it, we use two identities. The first one, follows from eq. (2.4), while the second one, can be verified by direct differentiating after some changes of integration variables in the resulting integrals. Then the expression under consideration takes the form To find the contribution to the function β(α 0 , λ 0 λ * 0 , Y 0 )/α 2 0 , it is necessary to multiply this expression by −2π/rV 4 and apply the operator to the result. For the graph (1) this integration gives the factor 1, because the expression for this graph does not depend on ρ. Therefore, This result exactly coincides with the one derived in ref. [51] by direct summation of the superdiagrams contributing to the two-point Green function of the background gauge superfield. Certainly, the calculation described here is much simpler, because we had to calculate the only superdiagram without external lines. The agreement of the results confirms the correctness of the general arguments presented in this paper. However, it is desirable to verify also the three-loop results corresponding to the graphs (2) -(5) in figure 3. As in refs. [51,53] we will use the Feynman gauge, so that in what follows the parameter ξ 0 is set to 1 and the higher derivative regulator K is chosen equal to R. Calculating the supergraph (2) in figure 3 we should take into account that θ 2 andθ 2 can appear in different points. This produces a set of subgraphs presented in the curly brackets in figure 4. However, all these subgraphs differ only in the numeric coefficients. Really, they are quartic in θ-s, so that these θ-s can be shifted to an arbitrary point of the supergraph. (Terms with lower degrees of θ, which can appear after such shifts, evidently vanish due to the integration over d 4 θ.) For example, it is possible to shift θ-s as it is shown in the right hand side of figure 4. 15 JHEP10(2019)011 Figure 4. Subgraphs of the supergraph (2) correspond to different positions of θ 2 andθ 2 . However, the sum of them is effectively reduced to a single supergraph in which θ 4 can be placed in an arbitrary point and g = g * = 1.
The result for their sum (in the Euclidean space after the Wick rotation) can be written as where, following ref.
[53], we use the notation As earlier, we should replace the factor λ ijk 0 λ * 0imn (T B ) j m (T B ) k n by a relevant differential operator. For constructing this differential operator we again mark the propagators with the independent momenta Q µ , L µ , and K µ , see figure 3. The beginnings of the lines which denote them correspond to the indices m, i, and B. They refer to the representations R (in which the matter superfields lie),R, and Adj, respectively. Then, the calculation of the first "variation" gives where we take into account that T Ā R = − T A t (with T A being the generators of the representation R) and T A Adj BC = −if ABC . The second "variation" is calculated in a JHEP10(2019)011 similar way. After some (rather non-trivial) transformations involving eq. (2.4) we obtain that the differential operator for the considered graph has the form Then it is necessary to repeat the same algorithm as for the graph (1), namely, n by the operator (5.14); 2. multiply the result by −2π/rV 4 ; 3. apply the operator (5.9).
The three-loop supergraphs are proportional to gg * = ρ, so that in the considered case the integration gives 16 Thus, the contribution of the graph (2) to the function β/α 2 0 takes the form We see that this result coincides with the one obtained in ref. [53] by the straightforward calculation of superdiagrams with two external legs of the background gauge superfield. The expression for the next graph (3) has the form 17) 16 In general, an L-loop supergraph is proportional to ρ L−2 , and the integration gives the factor (L − 1) −2 .
This implies that in the general case to find a contribution to eq. (4.77), it is possible to start with a vacuum supergraph contributing to the effective action with g = g * = 1 and simply insert θ 4 to an arbitrary point which contains integration over the full superspace. (Note that the integrations over d 6 x or d 6x in the Yukawa terms can always be converted to the integrals over the full superspace.)
JHEP10(2019)011
where Similar to the previous supergraphs, we replace the factor λ ijk 0 λ * 0ijl T B k m T B m l by a differential operator. To obtain this operator, we begin with calculating the first "variation" of the considered factor, The second "variation" is constructed by a similar procedure. The result can be written in the form (5.20) Proceeding according to the above described algorithm, we find the contribution of the supergraph (3) to the function β/α 2 0 , Note that the last term in eq. (5.20) is not essential, because the corresponding contribution to β/α 2 0 vanishes. (It changes the sign under the sequence of the variable changes L µ → L µ − Q µ ; Q µ → −Q µ ; K µ → −K µ .) The result (5.21) also coincides with the one obtained in ref. [53].
The expression for the supergraph (4) is Here we use the same notation as in ref. [53],
JHEP10(2019)011
where the prime and the subscript Q denote the derivative with respect to Q 2 /Λ 2 . The corresponding operator is exactly the same as for the supergraph (3) and is given by eq. (5.20). Similarly to the case of the supergraph (3), the last term in this expression does not contribute to β/α 2 0 , so that This result also agrees with the calculation of ref. [53].
The last supergraph (5) is given by the expression The first "variation" of the factor λ ijk 0 λ * 0ijl λ mnl 0 λ * 0mnk is written as The second "variation" can be found by a similar method, but, to simplify the resulting expression, it is necessary to involve the identities which follow from eq. (2.4). Using these identities and taking into account that the integrand of eq. (5.25) is symmetric in Q and L, we find the required replacement
JHEP10(2019)011
Constructing the contribution of the graph (5) to the function β/α 2 0 with the help of this operator and using the equations we obtain This expression also agrees with refs. [51,53]. Thus, we see that the algorithm described in this paper allows reproducing all results obtained earlier by the direct summation of the superdiagrams with two external lines of the background gauge superfield. Certainly, this fact can be viewed as an evidence in favour of the correctness of the general consideration made in this paper.
Conclusion
We have proved that for N = 1 supersymmetric gauge theories the integrals giving the βfunction defined in terms of the bare couplings are integrals of double total derivatives with respect to the loop momenta in all orders in the case of using the regularization by higher covariant derivatives. This fact agrees with the results of numerous explicit calculations in the lowest orders and generalizes the similar statement for the Abelian case [54,55]. The proof of the factorization into double total derivatives is a very important step towards the all-loop perturbative derivation of the exact NSVZ β-function. This derivation consists of the following main steps: 1. Using the finiteness of the triple ghost-gauge vertices (which has been demonstrated in ref. [14]) we rewrite the NSVZ equation in the equivalent form (1.2).
2. The β-function defined in terms of the bare couplings is extracted from the difference between the effective action and the classical action by the formal substitution (3.29). Then, using the identity (4.26) and the background gauge invariance, the result is presented as an integral of a double total derivative in the momentum space. This integral is reduced to the sum of singular contributions which are given by integrals of the momentum δ-functions. (This has been done in this paper.)
JHEP10(2019)011
3. The remaining step is to sum the singular contributions and to prove that they produce the anomalous dimensions of the quantum superfields in eq. (1.2). Now this work is in progress.
As a result, we presumably obtain eqs. (1.1) and (1.2) for RGFs defined in terms of the bare couplings in the case of using the higher covariant derivative regularization (in agreement with the results of explicit multiloop calculations). Due to scheme independence of these RGFs (for a fixed regularization) this statement is valid for all renormalization prescriptions.
If the NSVZ relation is really valid for RGFs defined in terms of the bare couplings for theories regularized by higher covariant derivatives, then the all-order prescription for constructing the NSVZ scheme for RGFs defined in terms of the renormalized couplings is HD+MSL. This means using of the higher covariant derivative regularization supplemented by minimal subtractions of logarithms, when only powers of ln Λ/µ are included into renormalization constants.
As a by-product of the proof presented in this paper we have obtained a simple method for constructing the loop integrals contributing to the β-function defined in terms of the bare couplings. Actually, it is necessary to calculate (a specially modified) supergraphs without external lines and replace the products of couplings and group factors by a certain differential operator specially constructed for each supergraph. The result is equal to the sum of a large number of superdiagrams which are obtained from the original supergraph by attaching two external lines of the background gauge superfield in all possible ways. Certainly, this drastically simplifies the calculations.
As an illustration of this method we considered all three-loop contributions containing the Yukawa couplings and compared the result with the one found by the standard calculation in refs. [51,53]. The coincidence of the expressions obtained by both these methods confirms the correctness of the algorithm proposed in this paper.
Anticommuting θ a with supersymmetric covariant derivatives inside A and B we obtain expressions which do not explicitly depend on θ. This implies that the right hand side of eq. (A.1) is proportional to the second degree of (explicitly written) θ. After commuting the remaining θ 2 to the left, the expression (A.1) can be presented as Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 17,248 | 2019-08-12T00:00:00.000 | [
"Physics"
] |
Short distance non-perturbative effects of large distance modified gravity
In a model of large distance modified gravity we compare the nonperturbative Schwarzschild solution of hep-th/0407049 to approximate solutions obtained previously. In the regions where there is a good qualitative agreement between the two, the nonperturbative solution yields effects that could have observational significance. These effects reduce, by a factor of a few, the predictions for the additional precession of the orbits in the Solar system, still rendering them in an observationally interesting range. The very same effects lead to a mild anomalous scaling of the additional scale-invariant precession rate found by Lue and Starkman.
Introduction
The DGP model of large distance modified gravity [1] has one adjustable parameter -the distance scale r c . Distributions of matter and radiation which are homogeneous and isotropic at scales ∼ > r c exhibit in this model the following properties: for distance/time scales ≪ r c the solutions approximate General Relativity (GR) to a high accuracy, while for scales ∼ > r c they dramatically differ [1,2,3,4]. Postulating that r −1 c ∼ H 0 ∼ 10 −42 GeV the deviations from GR could lead to interesting observational consequences in late-time cosmology, see, e.g., [3,5], [6]- [11].
On the other hand, sources of matter and radiation with typical inhomogeneity scale less than r c have somewhat different properties. These are easier to discuss for a Schwarzschild source -a spherically-symmetric distribution of matter of the mass M and radius r 0 , such that r M < r 0 ≪ r c (r M ≡ 2G N M is the Schwarzschild radius and G N the Newton constant). For such a source a new scale, that is a combinations of r c and r M , emerges (the so-called Vainshtein scale 1 ) [4]: Above this scale gravity of a compact object deviates substantially from the GR result. Note that r * is huge for typical astrophysical objects. An isolated star of a solar mass would have r * ∼ 100 pc. However, if we draw a sphere of a 100 pc radius with the Sun in its center there will be many other starts enclosed by that sphere. The matter enclosed by this sphere would have even larger r * . We could draw a bigger sphere, but it will enclose more matter which would yield yet larger r * and so on. An isolated object which could be separated from a neighboring one by a distance larger than its own r * is a cluster of galaxies. For typical clusters, r * ∼ (f ew Mpc) is just somewhat larger than their size and is smaller than their average separation. The above arguments suggest that interactions of isolated clusters will be different in the DGP model. On the other hand, at scales beneath a few Mpc or so, there will be agreement with the GR results with potentially interesting small deviations. Below we discuss these issues in detail on an example of a single isolated Schwarzschild source. There exist in the literature two different solutions for the Schwarzschild problem in the DGP model. The first one is based on approximate expansions in the r ≪ r * and r ≫ r * regions [1,4,13] (see also [14,15]). We call this set of results the perturbative Schwarzschild (PS) solution. The second one [16] is a solution that interpolates smoothly from r ≪ r * to r ≫ r c ≫ r * , and is non-analytic in the either parameters used to obtain the PS solution. We call this the nonperturbative Schwarzschild (NPS) solution. It is important to understand which of these two solutions, if any, is physically viable. Since neither of the two can be solved completely without numerical simulations, a first step to discriminate between them would be to look closely at the theoretical differences, as well as predictions that could by tested observationally. This is the goal of the present note.
Qualitative discussions
We will study separately two regimes, r ≪ r * and r ≫ r * .
(I) r ≪ r * . In this regime the standard G N expansion breaks down [4]. How could one proceed? One way is to perform an expansion in powers of m c = r −1 c [4]. This expansion breaks down above r ∼ r * but is well suited for the r ≪ r * domain (Kaloper [17] recently used a different expansion. His proposal could prove to be useful for a broad class of problems). A Schwarzschild metric in the small m c expansion was calculated by Gruzinov [13] (see also [14]). It is instructive to compare the result of [13] with the NPS solution of [16].
Let us start with the Newton potential φ(r). The expansion of the exact result of [16] for r ≪ r * leads: where β = 3/2 − 2( √ 3 − 1) ≃ 0.04, and α is a number to be discussed in detail below. The above result, but with β = 0, is what was first obtained in a small m c expansion [13]. The NPS solution of [16] gives β ≃ 0.04, it depends on irrational powers of m c [16], and it differs by that from the small m c expansion results.
Is the above difference important? As was demonstrated in Refs. [14] and [18], the modification of the Newton potential in (2), although tiny, could lead to a measurable precession of orbits in the solar system (see, Refs. [19] for further studies). The above works used the potentials obtained in the small m c expansion, e.g., used (2) with β = 0. Although β is tiny, the ratio (r * /r) is typically huge in the cases of interest, therefore, taking into account the effects of a nonzero β could lead to appreciable differences in the predictions of the PS and NPS solutions. We will study this issue in the next section.
Consequences of the modified potential (2) could be understood as well in terms of invariant curvatures. The Schwarzschild solution in GR has zero scalar curvature. In contrast with this, the solution (2) generates a nonzero Ricci scalar that extends to r ∼ r * in the NPS solution (see, [16] and discussions below). This can be seen by looking at the trace equation in the DGP model: where R is the 4D Ricci scalar, K is a trace of an extrinsic curvature and T is a trace of the stress-tensor times 8πG N (for the ADM formalism in the DGP model see, e.g., [20,21]). This has to be compared with the trace equation in GR: R = T . The second term on the LHS of (3) is not zero outside the source and, therefore, gives rise to nonzero R. This curvature, although tiny, extends to enormous scales of the order of r ∼ r * [16]. The sign of the curvature depends on a choice of the boundary conditions in the bulk, since the latter determines the sign of K. There are two choices for this. The so-called conventional branch corresponds to a negative (AdS like) curvature produced by the Schwarzschild source, while the selfaccelerated branch [2] corresponds to a positive (dS like) R. This is reflected in the sign of the coefficient α in (2) which takes a positive value on the conventional branch and becomes negative on the selfaccelerated branch: α ≃ ±0.84. Therefore, there is an additional tiny attraction toward the source on the conventional branch and a repulsion of the same magnitude on the selfaccelerated branch. This change of sign was first found by Lue and Starkman [14] in the context of the PS solution.
(II) r ≫ r * . In this regime the small m c expansion breaks down. However, the conventional G N expansion can be readily used [1,4]. The results are [1]: (A) For r ≫ r * DGP gravity is a tensor-scalar theory, where the extra scalar couples to matter with the gravitational strength: the vDVZ phenomenon [22,23].
(B) The Newton potential scales as 1/r for r * ≪ r ≪ r c which smoothly transitions into the 1/r 2 potential at r ≫ r c .
These properties of the PS solution were reconfirmed in detailed studies of Refs. [13,14,15,24,25]. Could the PS solution interpolate from r ≪ r * to r ≫ r c ? The above question is related to the following one: what is a gravitational mass that is felt by an object separated from the source at a distance r ≫ r * ? The PS solution implies that this is just the bare mass M of the original source. On the other hand, one may expect that the curvature created by the source in the domain r ≪ r * would also contribute to this effective mass (the ADM mass) [16]. If so, unless there is a hidden nontrivial cancellation, a putative observer at r ≫ r * would measure an effective mass different from M. The above property is captured by the NPS solution of Ref. [16]. It has the following features: (A ′ ) For r ≫ r * it is a solution of a tensor-scalar gravity (as in (A) above); (B ′ ) The Newton potential scales as 1/r 2 for r ≫ r * (different from (B)). An attractive feature of the NPS solution is that it smoothly interpolates from r ≪ r * to r >> r c . However, a somewhat unusual fact is that it does not recover the results of the G N expansion. This will be discussed in the reminder of this section (readers who are not interested in these somewhat technical issues could directly go to the next section without loss of clarity).
Why is that, that the NPS solution [16] does not agree with the results of the perturbative G N expansion, even in the regime r ≫ r * , where the latter approximation is internally self-consistent? There could be a few different reasons for this. Formally, one is solving nonlinear partial differential equations and these can have different solutions even with the same boundary conditions. In our two cases, however, the boundary conditions are somewhat different: the PS solution is supposed to describe the same mass M at short and large distances, while the NPS solution matches M at the short scales but asymptotes to a screened mass at the large scales 2 . Then either the PS and NPS solutions belong to different sectors and are both stable, or at least one of them should be unstable. In the former case, one should distinguish between them observationally, while in the latter case a relevant point would be that the ADM mass of the NPS solution is smaller [16]. In a very qualitative way, this can be understood as follows. A deviation from the conventional metric at r ≪ r * sclaes as m c √ r M r (we ignore small β here.) This can give rise to a scaling of the scalar curvature m c √ r M r −3/2 . The curvature extends roughly to r ∼ r * , and the integrated curvature scales as m c √ r M r 3/2 * ∼ r M . Then, the "effective mass"' due to this curvature can be estimated as r M M 2 Pl ∼ M, which is of the order of the mass itself.
On the other hand, it may well be that there is a certain "discontinuity" between the linearized and full non-linear versions of the DGP model in 5D. This could result from a different number of constraints one has to satisfy depending on whether solutions are looked for in the linearized approximation or in the full non-linear theory. For instance, one of the bulk equations can be combined with the junction condition in 4D to yield: On a flat background both terms on the RHS of (4) contain at least quadratic terms in the fields. Therefore, according to (4), R has to be zero in the linearized approximation. The latter condition happens to be a consequence of the other linearized equations of the theory as well; therefore, (4) is trivially satisfied as long as those other equations are fulfilled. This changes at the nonlinear-level: Eq. (4) becomes an additional constraint that one has to satisfy on top of the other equations. Because of this: (i) The solutions of the linearized theory may not be supported by the nonlinear equations (the phenomenon known as "linearization instability" in gravity). (ii) New non-perturbative solutions that do not exist in the linearized theory may emerge. One way to decide on the point (i), is to study solutions for other sources and see whether a similar phenomenon takes place. The NPS solution of [16] is an explicit example of the point (ii).
Explicit solution
We consider the action of the DGP model [1]: Here, the (4 + 1) coordinates are x M = (x µ , y), µ = 0, . . . , 3 and g and R are the determinant and curvature of the 5 dimensional metric g M N , while g and R are the determinant and curvatures of the 4 dimensional metric g µν = g µν (x µ , y = 0). The Gibbons-Hawking [26] surface term that guaranties correct equations of motion is implied in the action (5). M P denotes the 4D Planck mass and is fixed by the Newton constant. On the other hand, the scale M * is traded for the parameter r c ≡ M 2 p /2M 3 * discussed in the previous section.
The NPS solution studied in [16] is found by considering a static metric with spherical symmetry on the brane and with Z 2 symmetric line element: ds 2 = −e −λ dt 2 + e λ dr 2 + r 2 dΩ 2 + γ drdy + e σ dy 2 , where λ, γ, σ are functions of r = √ x µ g µν x ν and y. The Z 2 symmetry across the brane implies that γ is an odd function of y while the rest are even. The brane is chosen to be straight in the above coordinate system 3 . The exact solution for y → 0+ is given implicitly as follows: where P is obtained from in which U can have two different behaviors corresponding to the solution of the following two equations (giving rise to a conventional and selfaccelerated branch respectively): where f = √ 1 + 6U + 3U 2 and k is an integration constant. Note that in this parametrization the gravitational potential φ in weak field approximation is easily obtained, namely The off-diagonal metric component, γ, is determined from and the yy component from The profile λ y for y → 0+ can be computed as well: The two integration constants, k and the one produced in the integration (8), are determined by imposing appropriate boundary conditions at the source (namely, P (r → 0+) → r M ) and at large distances, (namely, λ ∼ r 2 M /r 2 in the conventional branch or λ ∼ m 2 c r 2 + r 2 M /r 2 in the selfaccelerated branch and no 1/r term).
Conventional branch
The conventional branch is obtained from the solution of (9). As shown in [16] the boundary conditions (P (0) = r M , P (+∞) = 0) determine the exact relation between k 1 and r * , namely where c is the following integral: The solution has the following asymptotic behavior. At large distances, r ≫ r * (U → 0 + ), we obtain where, while at short distances, r ≪ r * (U → +∞), we get where α 1 = 6 ( As we see, a short distance observer at r M ≪ r ≪ r * would measure the gravitational mass M with a small corrections to Newton's potential, while the large distance observer at r ≫ r * would measure an effective gravitational mass ∼ M(r M /r c ) 1/3 [16]. The latter includes the effects of the 4D curvature.
Selfaccelerated branch
The solution on the selfaccelerated branch is obtained from (10). The relation between k 1 and r * is obtained, as in the conventional case, by imposing boundary conditions (P (0) = r M , P (r) − m 2 c r 3 → 0 for large r). This gives The second line in (21), that is generated by a change of variables in the integral ( U = −U − 2) while using (15), also gives a relation between k 1 and k 2 , The solution has the following asymptotic behavior. At large distances, r ≫ r * (U → −2 − ), we derive where, while at short distances, r ≪ r * (U → −∞), we get where α 2 = −α 1 ≈ −0.84 is, in absolute value, the same constant appearing in the conventional branch short distance expansion (19). Note, however, that the sign of the correction to the 4D behavior is opposite in the two branches. At intermediate distances, r * ≪ r ≪ r c , the potential contains a 5D gravitational term that is repulsive,r 2 M /r 2 . This looks like a 5D negative mass. However, this is not an asymptotic value of the mass since one can only cover the solution in the above coordinate system till r ∼ r c where the dS like horizon is encountered. Moreover, in the intermediate regime r * ≪ r ≪ r c , the de Sitter term m 2 c r 2 in the potential always dominates over ther 2 M /r 2 term suggesting that the effects due to the Schwarzschild source are strongly suppressed.
Perihelion precession
The deviation from 4D gravity (2) gives rise to the additional perihelion precession of circular orbits [14,18] (see also [19] for comprehensive studies of these and related issues). In a simplest approximation this effect is quantified by a fraction of the deviation of the potential from its Newtonian form This can be used to evaluate an additional perihelion precession of orbits in the Solar system [14,18] 4 . As we discussed in Section 1, the ǫ ratio is somewhat different for the non-perturbative solution (NPS solution) as compared to the approximate solution (the PS solution ) used in Refs. [14,18]. We can easily calculate this difference: The perihelion precession per orbit is The second term on the RHS is the Einstein precession, and the last term arises due to modification of gravity. For the PS solution this was first calculated in Refs. [14,18]; the solution (28) is written for the NPS solution and is somewhat different. For the Earth-Moon system r = 3.84 × 10 10 cm and r Earth * ≃ 6.59 × 10 12 cm; as a result the ratio in (27) is approximately 0.48. Therefore, the predictions of the NPS solution for the additional perihelion precession of the Moon is a factor of two smaller than the predictions of the approximate solution. The result of (28) for the additional precession (the last term on the RHS) is ∓0.7 × 10 −12 (the plus sign for the selfaccelerated branch). This is below the current accuracy of 2.4 × 10 −11 [27], but could potentially be probed in the near future [28] 5 .
A similar calculations can be performed for the anomalous Martian precession [14,18]. For the Sun-Mars system we get: where we used r Sun−M ars = 2.28 × 10 13 cm and r Sun * = 4.9 × 10 20 cm. Therefore, we see that the suppression in the NPS result for the precession of the Martian orbit is stronger. The additional precession of the Mars orbit is ∼ ∓1.3 × 10 −11 , which should be contrasted with a potential accuracy of the Pathfinder mission ∼ 9×10 −11 .
Last but not least, Lue and Starkman (LS) [14], found that the PS solution gives rise to a correction to the precession rate (additional precession per unit time), that is universal, i.e., is independent of the source. The NPS solution, predicts a weak anomalous violation of the universal Lue-Starkman scaling due to the RHS of (27). The results is This rate depends mildly on the source mass and a separation from it. The rate is a slowly increasing function or r, as opposed to the rate due to the second term on the RHS of (28), which is decreasing with growing r as Γ Einstein = 9r 3 M /8r 5 .
Outlook
In this note we compared the PS [1,4,13,14,15] and NPS [16] solutions in the DGP model. We emphasized different, but interesting predictions that these two solutions make in the observationally accessible domain of r ≪ r * . These predictions are testable. As we have also mentioned, there will be important differences in the predictions at r ≫ r * . These need further detailed studies, especially in the context of the structure formation. We would expect that both the linear as well as non-linear regimes of the structure formation will be affected. If the NPS solution is the right one, then even at very large scales nonperturbative techniques should be used. Moreover, the nonlinear regime of the structure formation could be sensitive to, and be able to discriminate between, the PS and NPS solutions.
The same issue of nonlinear interactions arises in the context of strong coupling behavior in the 5D DGP model [4,30,31,32,33]. This is related to the problem of the UV completion of the quantum theory [30,31] for which seemingly two different proposals were put forward in Refs. [32] and [33]. It would be interesting to pursue these studies further. The string theory realizations of brane induced gravity of Refs. [34,35,36] can be taken as a guideline. It would also be interesting to understand the NPS solution in terms of the approach of Refs. [30,33].
We have not touched upon the issue whether the small fluctuations on the selfaccelerated branch contain a negative norm state [30,33], or not ( see also [37]), and when these fluctuations are relevant. Additional investigations on this issue are being conducted.
It would also be interesting to look at the Schwarzschild solutions in models of large distance modified gravity where nonlinear interactions do not exhibit the strong coupling behavior. This is the case [38] in a certain models of brane induced gravity in more than five dimensions [39,38], as well as in the "dielectric regularization" of the 5D DGP model [40]. Finally we would also point out the constrained approach to the 5D DGP model [41,42,43] in which case strong interactions also seem to be absent. All the above deserves further detailed investigations. | 5,343.8 | 2005-08-26T00:00:00.000 | [
"Physics"
] |
Greetings as a Politeness Strategy in EFL Distance Learning Students' Official Emails
The current study attempts to discover English as a Foreign Language (EFL) distance learning male students’ awareness of email greetings as a politeness strategy in English computer-mediated communication (CMC). To this end, 200 email messages sent from distance learning students at King Faisal University, Saudi Arabia, to their graduation project supervisor were analyzed. The degree of formality of these messages was very high for two main reasons. First, all of the email messages comprised instances of first-time contact with the supervisor. Second, the social distance between the students and their supervisor was high. Hence, the students were expected to use formal email greetings. The emails sent by the analyzed sample were put into three categories: begun with formal greetings, started by informal greetings, and null-greeting emails. Contrary to expectations, only 16.5% used formal English email greetings. The remaining students chose religious greetings (20.5%), less formal greetings (7%), or null greetings (56%). The large number of null greeting emails suggests that the students’ awareness of greetings as a politeness strategy was low. Hence, the study concluded with implications to ensure increasing EFL students’ awareness of politeness strategies in CMC.
Introduction
Over the last few years, distance learning programs have enabled many people worldwide to attend online courses in which they can learn from and interact with instructors and other students via interactive distance learning platforms. Unlike the norm in many traditional classrooms, the social distance between the people involved in distance learning (i.e., the students and their instructors) is very high. Indeed, distance learning platforms cater for people from various ethnic, religious, geographical, and linguistic backgrounds who have little, if any, knowledge of each other. This form of interaction, which entails a high social distance, a high level of formality, and different status levels in student-instructor communication, requires knowledge of politeness strategies to ensure that the learning process is not hindered by unintentional face-threatening (i.e., impolite) acts (FTAs). Hence, the purpose of the current study was to assess EFL distance learning students' use of email greetings as a politeness strategy when communicating with their graduation project supervisors. The emails used by the sample were classified into three groups: having formal greetings, having less formal/informal greetings, and null greeting emails. The analysis of 200 email messages sent by male Saudi EFL students in a distance learning program revealed that the majority of the researched students seemed to have little knowledge of what type of greetings to use in formal email messages sent in English. This is despite the fact that all the researched students were at the final stage of obtaining a BA degree in English. Thus, one of the main recommendations of this study is to ensure that EFL distance learning students are oriented with politeness strategies for use in computer-mediated communication (CMC).
Literature Review
Since Goffman's [1] face theory, which suggests that people try to reinforce/maintain their face (i.e., public self-image) whenever they engage in social interactions, research on politeness has garnered the attention of many researchers in the fields of pragmatics and sociolinguistics. One of the most influential theories on politeness is the work of Brown and Levinson [2]. They proposed a universal model of politeness based on Goffman's face theory. In their politeness model, they suggested that speakers and hearers perform politeness acts to protect their faces as well as their interlocutors' faces from being damaged by FTAs. Thus, this model distinguishes between positive and negative face as well as positive and negative politeness. Individuals have the need to be respected and valued by others, an instinctive human need that is referred to as positive face. Negative face, on the other hand, refers to individuals' will to be free from imposition. Based on this distinction, speech acts that protect other people's positive face are classified as positive politeness strategies while negative politeness speech acts seek to protect others' negative face. In turn, lack of adherence to these two kinds of politeness may lead to FTAs, which are also characterized as either negative or positive.
Although no definition of linguistic politeness has been agreed upon, Nwoye [3:309] suggests that there is an overall agreement that it includes "verbal strategies for keeping social interaction friction free". It should be noted, however, that politeness strategies differ from culture to culture. Hence, data produced by EFL learners can be considered a valuable recourse for researchers interested in cross-cultural politeness studies as we will see in the review of the studies below. Another issue that is worth considering here is that linguistic politeness strategies also vary depending on the type of relationship between people. Holmes [4] suggests that people who do not know each other have a high social distance in their relationship and hence are expected to pay more attention to politeness strategies when communicating with each other. Similarly, when communicating with a person of a higher social status (e.g., one's manager or teacher), the speaker is expected to use politer forms than they would use with people of the same social status. Formality can also have a great impact on politeness strategies. Indeed, one is expected to use polite expressions in formal domains such as religion, work, and education. Finally, age can also be a factor that affects politeness levels as younger people are expected to be politer when communicating with older people.
Brown and Levinson's [2] politeness model has, as detailed in the sub-sections below, been widely accepted by many researchers and hence implemented in a large body of empirical cross-cultural research in various fields such as language acquisition, social interaction, and online communication. It should be noted, however, that this model was not free from criticism and alternative ways to understanding the concept of politeness. Once of the strongest criticisms to this model is that it assumes that people prefer to avoid impoliteness at all times. This is indeed not always the case as people sometimes deliberately avoid politeness strategies, especially when challenging someone with an opposing view, see [5][6][7]. Brown and Levinson's model, however, seems suitable for this study as it studied linguistic politeness in a case where students should, and where expected to, be polite.
The sub-sections below review a number of studies on politeness in the relevant fields to this study (namely, social interaction, online communication, and EFL).
Politeness in Real-time Social Interaction
Brown and Levinson's [2] model for politeness discussed above is proposed to be universal across cultures. Therefore, in their edited volume, Zarobe and Zarobe [8] compiled a number of studies on politeness across different cultures/languages; for example, Polish by Ogiermann [9], Turkish by Zeyrek [10], and Chinese by Jiang [11]. The politeness model has also been implemented in comparative gender studies in non-Western communities. For example, Aliakbari and Moalemi [12] examined the positive and negative politeness strategies used by 30 male and female Iranian students when communicating with university service providers (e.g., librarians, lab technicians, and waiters). Their study revealed that female participants used politer strategies than male participants. Similar findings were also found in another study in the Middle-Eastern context in which Al-harahsheh [13] investigated the employment of silence as a politeness strategy by 12 male and 12 female Jordanian university students in casual conversations. Across different genders, silence was used as a politeness strategy among strangers. Wagner [14] also investigated politeness in apologizing in Cuernavaca (a Spanish variety). Unlike many other studies on politeness in face-to-face communication, Wagner analyzed naturally occurring data. This has the benefit of analyzing politeness strategies accurately. Indeed, an FTA is less likely to occur in a made-up situation, especially in the presence of a researcher and video recording equipment.
Despite the implementation of Brown and Levinson's [2] model in many non-Western communities, there has been some criticism of this theory with some researchers questioning its viability in non-Western cultures, or, at the very least, accusing the model of being Western-biased, see [15]. Hence, Hill et al. [16] hypothesized an alternative model to Brown and Levinson's in which they distinguished between two types of politeness: discernment, which is the tendency for speakers to conform to the socially agreed-upon norms in various situations, and volition, an aspect of politeness that allows the speaker to choose from various politeness acts. They suggested that Western-based research on politeness has not paid much attention to discretion despite its importance in Japanese culture and despite its existence as a politeness strategy in American English and most likely in all other languages/varieties worldwide. Both models suggest that politeness is a universal phenomenon. Different cultures, however, may have their own implications of Brown and Levinson's or Hill et al.'s models of politeness. This is indeed what Hill et al. [16] found when they compared variations in politeness in Japanese and American informants' requests. Both groups were sensitive to distance and status but Japanese informants were more sensitive to discernment than the American sample.
Moreover, high pitched voice may be interpreted as a polite gesture by hearers in some cultures, while a lowered pitch is used as a politeness strategy in Korean [17]. The next section addresses linguistic politeness in online communication, a rather different form of interaction that mostly involves the use of texts where the phonological aspects of linguistic politeness are often nonexistent.
Politeness in Online Communication
Computer mediated communication (CMC) differs from face-to-face interaction in many ways. For example, the anonymity and physical separation between people communicating online allows people to construct alternative identities [18]. Indeed, people engaged in CMC often do not see one another. This affords them more control over their mental state, especially in asynchronic communication such as email. This is not to assume that politeness is achieved more easily in CMC. In fact, it can be even more challenging than face-to-face interaction in that CMC has restrictions that are difficult to avoid, such as the absence of prosodic cues that people use to clarify meaning and to demonstrate politeness [19]. Such differences between face-to-face interaction and CMC called for the establishment of politeness norms for online communication, known as netiquette, see [20][21][22]. These rules for online politeness include, but are not limited to, politeness acts such as respecting other people's time and privacy and not forgetting that one is dealing with another human at the other end. These politeness acts are never different from what people expect in face-to-face interaction. Yet, it is of great importance to keep them in mind when communicating online as they are easily overlooked. For example, a student is not expected to enter their instructor's office and submit their term project without greeting the instructor beforehand. Yet, the same student might submit their paper via email without using any form of greeting.
The increasing need for online communication as well as the great deal of multi-functionality for CMC calls for research in the various ways in which politeness is achieved. Email, for example, has evolved from a medium of communication that is primarily used for business interaction to a tool that is also used for personal communication.
Therefore, (im) politeness in communication via email has been evolving over the past two decades [19].
Hence, a large number of researchers have endeavored to investigate politeness in CMC. Our focus here will be on studies that have addressed politeness in email communication. In an attempt to compare politeness between requests made via email and voicemail, Duthler [23] analyzed requests made by 151 participants and found that requests made via email were politer than those via voicemail. They explained this difference by stating that voicemail requires immediate action and thus does not allow people to construct appropriate politeness acts. Email users, on the other hand, have more time to ensure that they adhere to the expected politeness levels by the email recipients. Duthler's findings were consistent with Walther's [24] model, which suggests that CMC facilitates social tasks. In an interesting comparative study, Waldvogel [25] compared two workplace environments, an educational institution and a factory, to see whether the different conditions in these two workplace environments would lead to variation in the use/omission of greetings. The findings suggest that the workplace is a more determining factor of the use of greetings and closings as polite acts than status, gender, or even social distance. These findings are contrary to the established view that distance and social status are more determining factors of politeness (see Holmes [4]). The following section focuses on CMC in educational settings.
Politeness and EFL
The growing number of online EFL courses and the continuous need for online communication between distance learning students and their instructors (both synchronic and diachronic) call for the importance of offering netiquette tuition to such students. A number of studies on politeness in CMC have raised the importance of such a move. For example, in a recent study, Alsharif and Alyousef [26] investigated the differences in negotiation and (im) politeness strategies in emails sent to university professors by two groups: Australian and Saudi postgraduates. Both groups performed relatively similarly in extended negotiations and hence the authors concluded their research by recommending that international students improve their negotiation techniques by providing more detailed explanations. In a more relevant study to the current research, Vinagre [27] examined politeness in e-mail exchanges in computer-supported collaborative learning. She began her investigation by stressing the importance of developing means for online collaborative learning and how advantageous this can be for students in geographically remote areas. Since participating in collaborative distance learning programs requires constant exchanges with people of high social distance, misinterpretations of linguistic behavior might occur, and hence negative face is always at stake. As a result, Vinagre examined politeness strategies in 11 email exchanges between students, who were learners of English and Spanish as foreign languages, in collaborative learning and found that the participants did not use negative face strategies as often as positive face strategies. Although this can be interpreted as an attempt by students to build solidarity with each other, students' negative face should also be considered. The next section discusses the data and methodology implemented in the current investigation.
Materials and Methods
The purpose of the current study was to assess the awareness of EFL distance learning students of email greetings as a politeness strategy. In line with Brown and Levinson's [2] aforementioned theory of face, sending an email to one's instructor with a less formal greeting or with no greeting at all can be interpreted as an FTA. Starting the email with a formal greeting, on the other hand, is a sign of the student's awareness of strategies for protecting the email recipient's positive face, see Waldvogel [25]. Hence, I analyzed the greetings used in 200 email messages sent to me, as a graduation project supervisor, by the 200 participating students in the academic years 2014-2015 and 2015-2016. The aim of this analysis was to determine what type of greetings these distance learning students used when communicating with their instructor for the first time. There were three possible conditions for the greetings: formal, less formal/informal, and null. All the students investigated in the current study were male, non-native speakers of English 1 , and were in their final year of the BA Program in English (distance learning) offered by King Faisal University (KFU). Hence, the effects of gender difference and variations in English competence are minimized.
Once students enrolled in the program have completed the list of program courses over seven semesters (three and a half academic years), they are required to write a graduation project in the last semester. In this graduation project, they write a research proposal over the course of 14 weeks. The research proposal is limited to research in the areas of linguistics, literature, and translation. Since writing the research proposal requires assistance in choosing the topic, narrowing it down, writing the research questions, choosing the appropriate methodology for data collection and analysis, etc., a supervisor is assigned to every 50 students to guide them through the process of writing their research project. Most students are located in geographically distant areas from the university main campus. Therefore, face-to-face interaction between the students and their supervisor is difficult to achieve. Hence, students were given deadlines for important phases in writing the research proposal (such as deciding on the topic, sending the research questions, sending the first draft of the proposal, etc.) and they were asked by the supervisor to send these requirements via email in accordance with the set deadlines. Despite the fact that both the instructor and students were native speakers of Arabic, the language of communication was English. 1 The students' and the instructor's first language is Arabic.
To minimize the impact on politeness of constant communication, whereby people become more relaxed as they continue their linguistic exchange and thus are not expected to continue using politeness strategies (see Holmes [4]), all the email messages analyzed in the current study comprise instances of first time communication with the supervisor. Another factor that led us to expect the use of formal greetings was the high social distance between the students and their supervisor. The students were enrolled in a distance learning program and none of them had had previous contact with their supervisor. Even if the supervisor had taught some of the students' previous courses, there had been no direct communication as students were only required to watch video-recorded lectures in the program courses and to complete assignments via Blackboard. In addition to social distance, power is another factor that is expected to result in politer greetings. Indubitably, teachers have more social power than students which, in turn, leads to expecting students to use polite expressions when communicating with their teachers.
Unlike many other research projects on politeness (see the review above), the data used in the current paper are naturally occurring and thus are more precise determiners of politeness strategies than experimental stimuli-reaction studies. Indeed, participants in such studies are expected to attempt to avoid FTAs and use more politeness strategies as they are likely to behave more politely. In other words, they provide data on what should be done rather than what is actually done in real-life communications. In naturally occurring data, however, people behave spontaneously and are not behaving more politely than usual due to their participation in a politeness experiment.
The greetings used in the email messages under investigation were categorized into three types. The first type comprised formal greetings, which could be: Religious Greetings (RG), Time Related (TR), or the common form Dear X (DX). The second type comprised less formal email greetings. Examples of this type are: Greetings (GR), Hello (H1), and Hi (H2). The third type comprised null greetings (NG), in which the student sent their research topic, inquiry, etc., in an attached file or in the body of the email, but without any greetings. Table 1 below summarizes these three categories and provides examples of them: Table 1 above, the instances of NG or H1 and H2 were not classified as impolite acts although they may pose a threat to the positive face of the email recipient and are indeed FTAs. This is due to the fact that not using formal greetings is most likely to result from lack of awareness of formal email greetings as politeness strategies. It is unexpected for students to purposefully commit an FTA. Indeed, this is highly unlikely as students are expected to build a positive relationship with their supervisor in their first contact with them. Other factors that may lead to impoliteness, such as poor working conditions (as reported by Waldvogel [25]) are also unlikely because the students and the professor have no prior contact. Hence, the various types of greeting were interpreted as a continuum of awareness of greetings as a positive face strategy in CMC, from the strongest indicator on the left side of the table to the weakest indicator on the right. The next section presents the study findings and discusses the potential interpretation of the data.
Results and Discussion
As detailed in the previous section, the greetings in the 200 email messages sent by the students to their graduation project supervisor were classified into three categories according to the degree of formality (formal, informal, and null). The use of informal greetings or no greetings could be interpreted as signifying a lack of awareness of positive face strategies used in CMC to show solidarity with the email recipient. Table 2 below lists the types of greetings used by the students as well as the number of tokens for each classification: Contrary to expectations, the least polite classification (NG) had the highest proportion of tokens. One hundred and twelve students (56% of the sample) contacted their graduation project supervisor for the first time without including any form of greeting in their emails. It seems that the only possible explanation for this large proportion of NG emails in the data is that the researched students were unaware of the importance of greetings as a politeness strategy in formal CMC. This claim can also be supported by the fact that the conventional greetings in formal emails (i.e., DX variants) were only used by 14% of the students, as illustrated in Table 3 below.
Seventy-four (37%) of the sampled students used formal greetings in their emails. These formal greetings can be further classified into three types: RG, DX, and TR greetings. In the first type, 41 students chose to write the religious greeting 2 in Arabic ﻋﻠﯿﻜﻢ( )اﻟﺴﻼم 'peace be upon you,' despite having been instructed to use English in all forms of communication. Three of those students chose to transliterate this Islamic-Arabic greeting and write it in English letters. This can possibly be explained by the students' attempt to build social relations by signaling shared ethnicity with the supervisor. This solidarity-based act is reported by Holmes [4] as one of the common reasons for code-switching, also see Sert [28] and Walker [29]. Another possible explanation for this use of religious greeting is that students viewed the greeting and body of the email (be it an enquiry or an initial topic for their research) as two separate elements. In other words, they might have considered the greeting to be an irrelevant part of the email, and therefore used a greeting that they would normally use in daily interactions. If this explanation is correct, then it would also explain why the majority of the sampled students did not use any form of greeting in their emails. It is therefore possible that the students considered the greeting to be an irrelevant part of the email they had to send to their supervisor. The other two types of formal greeting are the ones that we expect to find in formal CMC in English, namely DX (14%) and TR (2.5%). The second category (DX) took many forms in the data. These variants are shown in Table 3 below: None of the tokens of DX above were written in Arabic. This could be explained by the fact that the word 'dear' in Arabic (i.e., )ﻋﺰﯾﺰي could have been viewed as too intimate by the recipient. The different connotation of the word in Arabic, in addition to the instructions to students to communicate with their instructors in English only, could both have contributed to the absence of the Arabic word for 'dear' in the data. The last type of formal greeting comprises time-related greetings such as "Good morning" and "Good afternoon." These occurred only five times in the data. The number of formal greetings was lower than expected in the corpus (at 37%). The proportion of greetings typically used in English formal greetings in CMC is even lower (only used in 28% of the sampled messages).
Less formal greetings were rarely used by the students. The word "greetings" was used by one student only. The Arabic equivalent of this greeting )اﻟﺘﺤﯿﺔ( was used by two other students. The informal greeting "hello" was used by six students while "hi" was used by five students. This low number of informal CMC greetings could be explained by students' awareness that these greetings are informal and thus should not be used in formal communication.
The low number of formal greetings, in addition to the large number of NG emails, has a number of implications. The following section lists a number of suggested implications for distance learning programs that use English as the medium of communication and are attended by non-native speakers of English.
Implications
The data of this study suggest that the majority of the researched students possibly lacked the essential knowledge of positive face politeness strategies typically used in English email messages. Other explanations, such as deliberate avoidance of linguistic politeness seem to be unlikely, as argued in section 4 above. This potential lack of awareness of formal email greetings in English CMC is despite the fact that all the sampled students were in the final stage of a distance learning BA in English program. The lack of knowledge is reflected in the sending of official emails without any form of greeting by 56% of the sampled students, the use of an Islamic-Arabic religious greeting by 20.5% of the sample, and the use of informal greetings by 3 Note that T stands for title, I for initial name, S for surname.
7%. This means that in total, 83% did not use the greetings typically used in official communication. Awareness of politeness strategies in CMC is indeed essential for all EFL students. This need is even greater for distance learning students as CMC is their learning tool. In addition to using CMC in their official and unofficial online communication, distance learning students also use CMC to communicate with their peers, course instructors, as well as other faculty in the program. This lack of awareness could lead to unintentional positive and negative FTAs by the students. Hence, the most essential recommendation of this study is to raise the awareness of EFLs in distance learning programs of politeness strategies in CMC. This can be achieved by providing netiquette training once students join these programs. This training should provide students with essential politeness strategies to which they should adhere in CMC. The program which the sampled students are enrolled in has a core course entitled Language and IT in which they are introduced to some topics at the intersection between language and technology, such as CALL and Corpus Linguistics. This course, however, does not take CMC politeness into consideration despite its importance for the students enrolled in the program.
Another suggestion is to encourage the organizers of EFL distance learning programs to design interactive programs, in which students communicate with their instructors and peers throughout the program courses. Such interactive learning environments are expected to allow course instructors to identify areas of incompetency and weakness and to remedy them at earlier stages. The program under investigation offers a one-way communication whereby students attend recorded lectures. The students can still communicate with their instructors via email or phone in all of the courses they attend, but the vast majority of students do not make use of these facilities because they have most of the information they need in the recorded lectures and the content uploaded to Blackboard.
EFL students attending distance learning programs should also be encouraged by their instructors to practice English skills in their free time. Indeed, most of what we learn about language use can be achieved outside the domain of the classroom. For instance, if the sampled students who sent NG emails had read about the greetings used in official emails, which are available on many websites, they would have chosen more formal greetings.
It should also be noted that the above-mentioned recommendations also apply to all students of various majors, as CMC is an inevitable part of every educational setting. Proper use of email greetings is also an essential life skill for all students.
Conclusions
The current study attempted to measure EFL distance learning students' awareness of the use of formal greetings Linguistics and Literature Studies 6(6): 259 -266, 2018 265 in CMC as a politeness strategy. The greetings used in 200 emails sent by 200 EFL distance learning students were analyzed. Despite the high level of formality of these emails, the high social distance between the senders and recipient, and the higher status of the recipient, the majority of students did not use any form of greeting in their emails. This lack of adherence to greeting norms in English official emails calls for serious steps to raise EFL distance learning students' awareness of politeness strategies in CMC.
The anticipated increase of EFL distance learning programs in the future, as more EFL learners worldwide have access to the internet, increases the need for research that aims at tackling the challenges faced by distance learning programs and how these predicaments can be minimized if not completely solved. Since CMC is the main tool for communication between students and instructors, research on other (im) politeness strategies used by EFL distance learning students is encouraged. Indeed, distance learning programs can be perceived of as international schools with students from different ethnic and cultural backgrounds. Lack of adequate instruction on politeness measures in CMC can lead to communication breakdowns and misconceptions. Thus, research on (im) politeness in these programs could be of great help to both the organizers and students.
The data in the current study can also be compared with similar data from female students. Indeed, approaching this line of research from a sociolinguistic perspective can help us to determine the sociolinguistic groups that are in most need of instruction in politeness strategies in CMC. | 6,892.2 | 2018-11-01T00:00:00.000 | [
"Education",
"Linguistics"
] |
Lixisenatide attenuates advanced glycation end products (AGEs)-induced degradation of extracellular matrix in human primary chondrocytes
Abstract Osteoarthritis (OA) poses a growing threat to the health of the global population. Accumulation of advanced glycation end-products (AGEs) has been shown to upregulate expression of degradative enzymes such as matrix metalloproteinases (MMPs) and a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) in chondrocytes, which leads to excessive degradation of type II collagen and aggrecan in the articular extracellular matrix (ECM). In the present study we investigated the effects of the GLP-1 agonist lixisenatide, a widely used type II diabetes medication, on AGEs-induced decreased mitochondrial membrane potential (MMP), degradation of ECM, oxidative stress, expression of cytokines including interleukin (IL)-1β and IL-6, and activation of nuclear factor kappa B (NF-κB). Our findings indicate that lixisenatide significantly ameliorated the deleterious effects of AGEs in a dose-dependent manner. Thus, lixisenatide has potential as a safe and effective treatment for OA and other AGEs-induced inflammatory diseases.
Introduction
Osteoarthritis (OA) is one of the world's leading debilitating age-related diseases and is expected to become more than twice as prevalent over the next decade. In large part, this is due to an increase in the average age of the global population owing to advancements in healthcare and lifestyle improvements [1]. Advanced glycation end-products (AGEs) are the byproduct of non-enzymatic protein glycation and accumulate in the body due to their resilience to degradation. Additionally, AGEs are used as a food preservative, thereby also entering the body through diet [2]. Chondrocytes have been shown to express the receptor for AGEs (RAGE), indicating that AGEs play a role in regular cell turnover and cartilage remodeling [3]. It has also been shown that accumulation of AGEs promotes the release of pro-inflammatory cytokines including tumor necrosis factor alpha (TNF-a) and interleukins (ILs), activation of nuclear factor kappa B (NF-jB), and production of reactive oxygen species (ROS) [4,5].
The main hallmark of OA is the excessive degradation of the articular extracellular matrix (ECM). In joint cartilage, the ECM is primarily composed of type II collagen and aggrecan and accounts for roughly 95% of the cartilage tissue mass. Of these, type II collagen is fibrillar collagen with rigid and stress-resilient properties that provide cartilage with its high degree of structural integrity [6]. Owing to its resilience to degradation under normal physiological conditions, type II collagen has a very slow rate of cell turnover. Therefore, excessive degradation of type II collagen is largely considered to be irreversible. Matrix metalloproteinases (MMPs) are a family of zinc-dependent enzymes that target type II collagen for degradation. In normal physiological conditions, MMPs play important roles in morphogenesis, embryonic development, tissue repair, and the inflammatory disease process [7][8][9]. Of these, MMP-3 (stromelysin-1) and MMP-13 (collagenase 3) have been cited as the main collagenases responsible for cleavage of type II collagen by unwinding the collagen triple helix and cleaving the P4-P11 site [10]. Aggrecan is the most abundant proteoglycan in articular cartilage and allows joints to withstand the compressive force and absorb shock [11]. Loss of aggrecan is an early event in the pathogenesis of OA and expression of cytokines such as tumor necrosis factor alpha (TNF-a) and interleukin (IL)-6 has been shown to induce cleavage of aggrecans by upregulating the expression of a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS) [12]. The aggrecanases ADAMTS-4 and ADAMTS-5 cleave aggrecan to promote regular cell turnover and homeostasis, however, increased expression of ADAMTS-4 and ADAMTS-5 has been shown to be involved in the destruction of aggrecan in a human OA model [13]. However, in the pathological condition of OA, increased secretion of IL-6 and TNF-a leads to enhanced expression of MMPs and ADAMTS, thereby resulting in excessive and irreversible degradation of the ECM [14]. Additionally, overproduction of reactive oxygen species (ROS) and activation of nuclear factor kappa B (NF-jB) have been shown to play a role in OA. ROS are the byproduct of numerous physiological processes and play important roles in normal homeostasis; however, overproduction of ROS promotes expression of MMPs, ADAMTS, and exacerbates generation of ROS, resulting in increased and sustained degradation of the ECM. Furthermore, it has been shown that AGEs directly influence the generation of ROS through catalytic sites in their molecular structure [15]. Activation of the NF-jB pro-inflammatory pathway through phosphorylation of nuclear factor of kappa light polypeptide gene enhancer in Bcells inhibitor, alpha (IjBa) further induces the inflammatory response and upregulates expression of MMPs and ADAMTS, thereby promoting the progression of OA [16].
Lixisenatide is an agonist of the receptor for glucagonlike peptide 1 (GLP-1R), an endogenous incretin hormone responsible for mediating the secretion of glucagon [17]. Lixisenatide is widely used to increase the release of insulin by b-cells in the treatment of type II diabetes. Lixisenatide has displayed a wide range of pharmacological properties in different kinds of diseases. Mouse model experiments have demonstrated the anti-inflammatory effects of lixisenatide. For example, in a murine model of Alzheimer's disease, lixisenatide was shown to exert neuroprotective effects by reducing synapse loss and preventing chronic inflammation [18]. In a recent study involving atherogenic-diet-fed Apoe À/À Irs2 þ/À mice, lixisenatide was shown to exert anti-inflammatory and antiatherogenic effects by downregulating expression of IL-6 and discouraging macrophages from shifting to the M2 proinflammatory phenotype [19]. Interestingly, a recent literature review showed that GLP-1 agonists such as lixisenatide exert anti-inflammatory effects in a wide range of disease including types 1 and 2 diabetes, atherosclerosis, neurodegenerative disorders, nonalcoholic steatohepatitis, diabetic nephropathy, asthma, and psoriasis [20]. However, there is still little research regarding the role of GLP-1 in bone homeostasis. A recent review investigating the role of GLP-1 in bone metabolism determined that GLP-1 may regulate bone formation and bone structure, but the existing research is not adequate to confirm this [21]. In the present study, we investigated the effects of agonism of GLP-1R using lixisenatide on AGEs-induced degradation of the ECM. Our findings demonstrate that lixisenatide has potential as a novel treatment against OA by rescuing AGEs-induced mitochondrial dysfunction, ameliorating oxidative stress, preventing degradation of type II collagen and aggrecan through downregulation of MMP-3, MMP-13, ADAMTS-4 and ADAMTS-5, exerting an anti-inflammatory effect by reducing expression of TNF-a and IL-6, and inhibiting activation of the proinflammatory NF-jB pathway by preventing phosphorylation of IjBa.
Cartilage explant cultures
Human primary chondrocytes (HPCs) were commercially purchased from MT-BIO Company (China). The cells were cultured in DMEM/Ham's F-12 medium (Thermo Fisher Scientific, USA) containing 10% FBS, 100 lg/ml streptomycin, and 100 IU/ml penicillin. A portion of the cells was frozen in a liquid nitrogen tank for later use, and the remaining cells were treated with 100 lg/ml AGEs in the presence or absence of 10 and 20 nM lixisenatide for 48 h.
Western blot
Treated and non-treated chondrocytes were washed with PBS for protein extraction using cell lysis buffer (Cell Signaling, USA) containing protease and phosphatase inhibitors. Equal amounts (20-50 lg) of total protein or nuclear protein were loaded onto 4-12% Mini-PROTEANV R TBE Precast Gels (Bio-Rad, USA). Protein in the acrylamide gel was transferred onto a PVDF membrane (Thermo Fisher Scientific, USA). The membrane was then blocked in 5% slim milk diluted in TBS containing 0.1% Tween 20 (TBS-T) for 1 h. The membrane was incubated overnight with primary antibody at 4 C, rinsed three times in TBS-T, and incubated with secondary antibody for 1 h at room temperature (RT). Detection of immunoreactive bands was performed using chemiluminescence (Novex ECL, Invitrogen).
Real-time polymerase chain reaction (RT-PCR)
Total RNA was isolated from human chondrocytes in accordance with the manufacturer's instructions for the use of TRIzol reagent (Thermo Fisher Scientific, USA). Total RNA was reverse-transcribed to cDNA using an iScript cDNA synthesis kit (Bio-Rad). Quantitative PCR was performed using Advanced SYBR Green Supermix (Bio-Rad) and amplified on a StepOne Plus real-time PCR system (Applied Science) to obtain cycle threshold (Ct) values for target and internal reference cDNA levels.
Preparation of nuclear extracts from chondrocytes
Nuclear protein was extracted from human primary chondrocytes using a commercial kit (Thermo Fisher Scientific, USA). Nuclear protein concentrations were determined by the BCA method. The nuclear protein lamin B1 was used as an internal control. Nuclear levels of NF-jB p65 were examined using western blot analysis to determine NF-jB activation.
Measurement of 4-hydroxy-2-nonenal (4-HNE) immunofluorescence
Intracellular levels of 4-HNE in chondrocytes were measured using the immunofluorescence method. Firstly, chondrocytes were fixed with 4% paraformaldehyde for 15 min at RT and then permeabilized with 0.4% Triton X-100 for 10 min. After blocking with 5% BSA and 2.5% FBS, cells were incubated with the primary anti-4HNE antibody (ab48506, Abcam, USA) for 2 h at RT. After three washes, cells were probed with Alexa-488 conjugated secondary antibodies (Invitrogen, USA) for another 1 h at RT. Fluorescent signals were visualized with an inverted fluorescence microscope (Zeiss, Germany).
Determination of mitochondrial membrane potential (MMP)
Intracellular MMP in chondrocytes was assessed with tetramethylrhodamine methyl ester (TMRM). Cells were treated with 100 lg/ml AGEs in the presence or absence of 10 and 20 nM lixisenatide for 48 h. After 3 washes with PBS, cells were incubated with 20 nmol/L TMRM for 30 min at 37 C in darkness. Fluorescent signals were visualized with an inverted fluorescence microscope (Zeiss, Germany).
Measurement of adenosine triphosphate (ATP) levels via bioluminescence assay
The level of intracellular ATP in chondrocytes was measured using an ATP bioluminescence assay kit (#A22066, Thermo Fisher Scientific, USA). Briefly, chondrocytes were lysed with the lysis buffer included in the kit and centrifuged at 10,000 Â g. Supernatant (100 ll) was mixed with another 10 ll luciferin/luciferase reagent. Light output was recorded with a microplate luminometer.
ELISA assay
Secretion of TNF-a and IL-6 from human primary chondrocytes into the cell culture medium was measured using a commercial ELISA kit obtained from R&D Systems in accordance with the manufacturer's protocols.
Luciferase reporter gene assay
To determine NF-jB transcriptional activity, NF-jB promoterluciferase (Clontech, USA) and b-galactosidase plasmid were transfected into chondrocytes using Lipofectamine 2000. Human primary chondrocytes were treated with 100 lg/ml AGEs in the presence or absence of 10 and 20 nM lixisenatide for 48 h. Cells were then lysed to measure luciferase activity and b-galactosidase activity using a commercial dual luminescence assay kit (Gene Copoeia, MD) with a luminometer. Luciferase activity was normalized to b-galactosidase activity.
Statistical analysis
All data are presented as means ± SEM Statistical analyses were performed using analysis of variance (ANOVA). p-values < .05 were considered statistically significant.
Rescue of AGEs-induced mitochondrial dysfunction
Mitochondrial dysfunction induced by the presence of AGEs has been observed in age-related diseases including OA. To determine the effects of lixisenatide on AGEs-induced mitochondrial dysfunction, we assessed mitochondrial membrane potential (MMP) and production of adenosine triphosphate (ATP) using TMRM and a luciferase assay. Briefly, human primary chondrocytes were treated with 100 lg/ml AGEs in the presence or absence of 10 and 20 nM lixisenatide for 48 h. As shown in Figure 1(A), exposure to AGEs drastically lowered the level of MMP by more than half, which was restored by treatment with lixisenatide in a dose-dependent manner. Notably, 20 mg lixisenatide almost completely restored MMP to basal levels. As shown in Figure 1(B), these effects were confirmed by the results of luciferase assay showing that exposure to AGEs reduced mitochondrial ATP production by nearly half, which was rescued by treatment with lixisenatide. Again, 20 mg lixisenatide almost completely ameliorated the effects of AGEs on ATP level.
Reduction of AGEs-induced oxidative stress
Oxidative stress plays a major role in chronic inflammation. The a, b-unsaturated hydroxyalkenal 4-hydroxy-2-nonenal (HNE-4) is a byproduct of lipid peroxidation. NADPH oxidase 4 (NOX4) has been shown to protect against inflammatory stress and is upregulated in response to oxidative stress. To investigate the effects of lixisenatide on AGEs-induced oxidative stress, HPCs were again exposed to 100 mg AGEs in the presence or absence of 10 and 30 mM lixisenatide for 48 h. As shown in Figure 2(A), exposure to AGEs elevated expression of 4-HNE by approximately 3.5-fold, which was ameliorated by lixisenatide in a dose-dependent manner. The results of western blot analysis in Figure 2(B) demonstrate that exposure to AGEs drastically increased protein expression of NOX4, which was also ameliorated by lixisenatide in a dose-dependent manner. Notably, a dose of 20 mg lixisenatide again rescued expression of 4-HNE and NOX4 to near basal levels, indicating that lixisenatide may possess a protective effect against oxidative stress in HPCs.
Reduction of AGEs-induced expression of MMPs and degradation of type II collagen
Degradation of type II collagen by MMP-3 and MMP-13 is a major event in the pathogenesis of OA. Here, we tested the effects of treatment with 10 and 20 mM lixisenatide on HPCs exposed to 100 mg AGEs for 48 h. As demonstrated by the results in Figure 3, exposure to AGEs caused a significant elevation in the expression of MMP-3 and MMP-13, which was significantly prevented by lixisenatide at both the mRNA and protein levels in a dose-dependent manner. Concordantly, the results in Figure 4 show that treatment with 10 and 20 mM lixisenatide also rescued AGEs-mediated degradation of type II collagen in HPCs at both the mRNA and protein levels in a dose-dependent manner.
Reduction of AGEs-induced expression of ADAMTS and degradation of aggrecan
ADAMTS-4 and ADAMTS-5 are the major aggrecanases responsible for degradation of aggrecan in the articular ECM. To investigate the effects of clinically relevant doses of lixisenatide on AGEs-induced expression of ADAMTS-4 and ADAMTS-5, we exposed HPCs to 100 mg AGEs in the presence or absence of 10 and 20 mM lixisenatide for 48 h. As shown by the results of real-time PCR and western blot analysis in Figure 5, exposure to AGEs drastically increased expression of ADAMTS-4 and ADAMTS-5 and both the mRNA and protein levels, respectively. Again, 20 mM lixisenatide almost completely ameliorated AGEs-induced elevated levels of ADAMTS-4 and ADAMTS-5. To further confirm the protective effect of lixisenatide against AGEs-induced degradation of aggrecan through downregulation of ADAMTS expression, we investigated the degradation of aggrecan through western blot analysis. The results in Figure 6 confirm that treatment with lixisenatide prevented degradation of aggrecan induced by AGEs.
Reduced expression of proinflammatory cytokines
Proinflammatory cytokines such as TNF-a and IL-6 have been shown to play a pivotal role in the development and progression of OA. Accumulation of AGEs elevates expression of TNF-a and IL-6, thereby sustaining the inflammatory response. To investigate the effects of lixisenatide on AGEsinduced expression of these cytokines, HPCs were exposed to 100 mg AGEs for 48 h in the presence or absence of 10 and 20 mM lixisenatide to induce expression of TNF-a and IL-6. As shown by the results of real-time PCR and western blot analysis in Figure 7, exposure to AGEs increased expression of TNF-a and IL-6 by roughly 5-and 4-fold, respectively, at both the mRNA and protein levels. However, the expression of both of these cytokines was prevented by treatment with lixisenatide in a dose-dependent manner.
Reduction of NF-jB via rescue of phosphorylated IjBa
NF-jB is a major regulator of inflammation. Under normal conditions, NF-jB is sequestered in the cytoplasm by its inhibitor IjBa. However, under pathological conditions, phosphorylation of IjBa leads to translocation of p65 to nuclei and subsequent activation of NF-jB, thereby driving the inflammatory response. To determine the effects of lixisenatide on AGEs-induced phosphorylation and degradation of IjBa as well as activation of NF-jB, we exposed HPCs to 100 mg AGEs in the presence or absence of 10 and 20 mM lixisenatide for 6 h. As demonstrated by the results of western blot analysis in Figure 8, exposure to AGEs significantly elevated the level of phosphorylated IjBa and decreased the level of total IjBa. Treatment with lixisenatide successfully reduced these effects of AGEs in a dose-dependent manner. Using lamin-B as a positive control, we next set out to determine the effects of clinically relevant doses of lixisenatide on activation of NF-jB by exposing HPCs to 100 mg AGEs in the presence or absence of lixisenatide for 48 h. As demonstrated by the results in Figure 9, AGEs significantly increased the nuclear translocation of p65 protein and luciferase activity of NF-jB, which were both ameliorated by treatment with lixisenatide in a dose-dependent manner.
Discussion
The results of the present study demonstrate that administration of lixisenatide at clinically relevant doses has a beneficial protective effect against multiple aspects of OA, including degradation of the articular ECM, mitochondrial dysfunction, oxidative stress, expression of proinflammatory cytokines, and activation of the NF-jB proinflammatory signaling pathway. Although the exact mechanisms through which lixisenatide exerts these effects remains unclear, as a specific GLP-1R agonist, we can hypothesize that activation of GLP-1R on primary chondrocytes plays a role in mediating the deleterious effects of AGEs as seen in OA. The incretin peptide GLP-1 has been widely studied as a therapeutic target for the treatment of type II diabetes due to its role in promoting insulin secretion by pancreatic b-cells and has been praised for its ability to slow gastric emptying, inhibit glucagon secretion and promote weight loss [22]. Mechanistically, a recent study showed that GLP-1 could inhibit the expression of MMPs, including MMP-3 and MMP-13 [23]. As a safe and effective therapeutic option, the specific GLP-1R agonist lixisenatide was approved by the Food and Drug Administration (FDA) in 2013 and has since been widely used for the treatment of type II diabetes [24]. Thus, we set out to investigate the potential of this drug in preventing AGEs-induced degradation of the articular ECM.
Our results show that consistent with previous studies, agonism of GLP-1R by lixisenatide reduced degradation of type II collagen and aggrecan by collagenases (MMP-3, MMP-13) and aggrecanases (ADAMTS-4, ADAMTS-5). As these are recognized as the most important proteinases in OA, the capacity of GLP-1R to downregulate increased expression of these factors induced by AGEs is of great value as a potential therapeutic target for OA. Additionally, we determined that treatment with lixisenatide can rescue reduced mitochondrial membrane potential (MMP) and ATP production induced by AGEs. Loss of mitochondrial function leads to reduced production of ATP, the body's main source of energy. Such loss of mitochondrial function has been shown to play a major role in age-related disease due to decreased oxidative phosphorylation through the electron transport chain [25]. The capacity of lixisenatide to ameliorate mitochondrial dysfunction induced by AGEs may be of great value in a wide range of diseases. Oxidative stress also plays a major role in a wide range of disease states. In part, this is because the generation of ROS, NOX4 and sustained oxidative stress has a self-promoting role in chronic inflammatory diseases. That is to say, high levels of oxidative stress further exacerbate generation of ROS and release of 4-HNE and NOX4, which in turn raises the level of oxidative stress.
TNF-a and IL-6 are two of the major proinflammatory cytokines involved in OA. TNF-a has been shown to exacerbate the pathological progression of OA by upregulating generation of MMPs, ADAMTS, ROS and other factors [26,27]. As a class of proinflammatory cytokines, interleukins (ILs) have been shown to play critical roles in a wide range of chronic inflammatory diseases including OA. Of these, the expression of IL-6, which is upregulated by oxidative stress and ROS, has been shown to be involved in the pathogenesis of OA [14,28]. Furthermore, this response is mediated through activation of the NF-jB signaling pathway [29,30]. The NF-jB transcription factor is ubiquitously expressed and plays a key role in triggering and sustaining the inflammatory response. Here, the IKKa/IjBa/NF-jB inflammatory pathway has been shown to mediate inflammation, cytokine production, and cartilage degradation, among other things [16]. Under normal physiological conditions, NF-jB exists in the cytoplasm in its inactive state as a heterodimer of the p50/ p65 subunits, which is maintained by IjBa, a member of the IjB class of inhibitory proteins. However, oxidative stress leads to activation of the p38 mitogen-activated protein kinase (MAPK) pathway and subsequent phosphorylation of IjBa by IKKa/b, thereby resulting in nuclear translocation of p65 and activation of NF-jB [31][32][33]. Our findings show that treatment with lixisenatide exerted a dose-dependent reduction of AGEs-induced expression of IL-6 and TNF-a, activation of the NF-jB signaling pathway, generation of ROS, 4-HNE and NOX4, expression of MMP-3, MMP-13, ADAMTS-4, ADAMTS-5 and degradation of the articular ECM in primary human chondrocytes. These findings suggest a novel role of the selective GLP-1 agonist lixisenatide in the treatment and prevention of OA. Further in vivo research using animal models or human trials is necessary to further validate these findings.
Disclosure statement
None of the authors of this study have any conflicts of interest that need to be disclosed. | 4,776.8 | 2019-04-03T00:00:00.000 | [
"Biology"
] |
Co-Expression of Androgen Receptor and Cathepsin D Defines a Triple-Negative Breast Cancer Subgroup with Poorer Overall Survival
Background: In the triple-negative breast cancer (TNBC) group, the luminal androgen receptor subtype is characterized by expression of androgen receptor (AR) and lack of estrogen receptor and cytokeratin 5/6 expression. Cathepsin D (Cath-D) is overproduced and hypersecreted by breast cancer (BC) cells and is a poor prognostic marker. We recently showed that in TNBC, Cath-D is a potential target for antibody-based therapy. This study evaluated the frequency of AR/Cath-D co-expression and its prognostic value in a large series of patients with non-metastatic TNBC. Methods: AR and Cath-D expression was evaluated by immunohistochemistry in 147 non-metastatic TNBC. The threshold for AR positivity (AR+) was set at ≥1% of stained cells, and the threshold for Cath-D positivity (Cath-D+) was moderate/strong staining intensity. Lymphocyte density, macrophage infiltration, PD-L1 and programmed cell death (PD-1) expression were assessed. Results: Scarff-Bloom-Richardson grade 1–2 and lymph node invasion were more frequent, while macrophage infiltration was less frequent in AR+/Cath-D+ tumors (62.7%). In multivariate analyses, higher tumor size, no adjuvant chemotherapy and AR/Cath-D co-expression were independent prognostic factors of worse overall survival. Conclusions: AR/Cath-D co-expression independently predicted overall survival. Patients with TNBC in which AR and Cath-D are co-expressed could be eligible for combinatory therapy with androgen antagonists and anti-Cath-D human antibodies.
Construction of Tissue Microarrays
Tumor tissue blocks with enough material at gross inspection were selected from the Biological Resource Center. After hematoxylin-eosin-safranin (HES) staining, the presence of tumor tissue in sections was evaluated by a pathologist. Two representative tumor areas, to be used for the construction of the tumor microarrays (TMAs), were identified on each slide. A manual arraying instrument (Manual Tissue Arrayer 1, Beecher Instruments, Sun Prairie, WI, USA) was used to extract two malignant cores (1 mm in diameter) from the two selected areas. When possible, normal breast epithelium was also sampled as internal control. After arraying completion, 4 µm sections were cut from the TMA blocks. One section was stained with HES and the others were used for IHC.
Statistical Analyses
Data were described using medians and ranges for continuous variables, and frequencies and percentages for categorical variables. Comparisons were performed with the Kruskal-Wallis test (continuous variables) and the chi-square or Fisher's exact test, if appropriate (categorical variables). All tests were two-sided, and p-values < 0.05 were considered as significant. The median follow-up was calculated using the reverse Kaplan-Meier method. Relapse-free survival (RFS) and overall survival (OS) were estimated using the Kaplan-Meier method and compared with the log-rank test. RFS was defined as the time between the date of the first histology and the date of the first recurrence at any site. Surviving patients without recurrence and patients lost to follow-up were censored at the time of the last follow-up or last documented visit. OS was defined as the time between the date of the first histology and the date of death from any cause. Multivariate analyses were performed using the Cox proportional hazard model. Hazard ratios (HR) were given with their 95% confidence interval (95% CI). All statistical analyses were performed with the STATA 13.0 software (StatCorp, College Station, TX, USA).
Effect of the AR Inhibitor Enzalutamide on Cath-D Expression and Secretion in TNBC Cells
As our multivariate analysis indicated that OS was worse in patients with AR+/Cath-D+ TNBC, combination treatment with an AR antagonist and the human anti-Cath-D F1 antibody [22] may be of interest. Therefore, due to the estrogen-like effect of AR [28], the estrogen-regulation of Cath-D [29,30] and the inhibition of Cath-D secretion by ER antagonists [31,32], we wanted first to determine whether anti-androgen treatment affects Cath-D expression or secretion in AR+/Cath-D+ TNBC cells. To this aim, we tested the AR antagonist enzalutamide in four different AR+/Cath-D+ TNBC cell lines (SUM159, MDA-MB-231, MDA-MB-453, and MDA-MB-468 cells) [33] that express and secrete Cath-D (Figure 4; Figure S2). Incubation with enzalutamide (20 µM) for 24 h did not affect Cath-D expression or secretion (Figure 4).
Effect of the AR Inhibitor Enzalutamide on Cath-D Expression and Secretion in TNBC Cells
As our multivariate analysis indicated that OS was worse in patients with AR+/Cath-D+ TNBC, combination treatment with an AR antagonist and the human anti-Cath-D F1 antibody [22] may be of interest. Therefore, due to the estrogen-like effect of AR [28], the estrogen-regulation of Cath-D [29,30] and the inhibition of Cath-D secretion by ER antagonists [31,32], we wanted first to determine whether anti-androgen treatment affects Cath-D expression or secretion in AR+/Cath-D+ TNBC cells.
Discussion
In our study, with cut-offs of 1% for AR positivity, 72.8% of TNBC expressed AR. This percentage is higher than in other studies [2], except in the work by Traina et al. where nuclear AR expression was higher than 0% in nearly 80% of all evaluable samples [9]. We chose the cut-off of 1% for AR because a recent clinical trial used this threshold and demonstrated the clinical activity of an AR antagonist [9]. Moreover, recent data suggest that AR-targeted therapies may enhance chemotherapy efficacy even in TNBC with low AR expression by targeting cancer stem cell-like cells
Discussion
In our study, with cut-offs of 1% for AR positivity, 72.8% of TNBC expressed AR. This percentage is higher than in other studies [2], except in the work by Traina et al. where nuclear AR expression was higher than 0% in nearly 80% of all evaluable samples [9]. We chose the cut-off of 1% for AR because a recent clinical trial used this threshold and demonstrated the clinical activity of an AR antagonist [9]. Moreover, recent data suggest that AR-targeted therapies may enhance chemotherapy efficacy even in TNBC with low AR expression by targeting cancer stem cell-like cells [34]. While most studies have shown that AR expression is a good prognostic factor in ER+ tumors [35][36][37], it is more controversial for ER-tumors where AR signaling could drive tumor growth [38]. Indeed, it was suggested that in TNBC, AR might use estrogen response element-like motifs to bind to DNA and induce transcription of genes that regulate cell growth in an ER-independent manner [28]. In a meta-analysis based on retrospective studies on TNBC and on population data, AR positivity was significantly associated with prolonged RFS, but had no significant impact on OS [6]. Conversely, in the Nurses' Health Studies, AR was associated with improved BC survival in patients with ER+/HER2− tumors and with worse survival in patients with TNBC (n = 4147 women with BC, including 581 with TNBC) [5]. In the first 7 years post-diagnosis, AR expression was associated with a 62% increase in BC-specific mortality in patients with ER-tumors after adjustments for patient, tumor, and treatment covariates [5]. Battharai et al. evaluated AR prognostic value in TNBC from six international cohorts (n = 1407) and found that AR status alone was not a reliable prognostic marker [4]. In our study there was no significant association between AR expression and RFS or OS. These results underline that prospective data are needed to conclude on AR prognostic significance in TNBC and that another biomarker is required to identify a subgroup with worse prognosis in this specific population.
Cath-D, p53, and BCL-2, assessed by IHC, are prognostic indicators of BC metastatic spreading [39]. Cath-D quantified by cytosolic assay in primary BC is a well-established marker of poor prognosis independently of the ER status [20,21]. In ER+ BC cell lines, estrogen and growth factors stimulate Cath-D protein and mRNA accumulation [30,40]. The regulation of Cath-D mRNA accumulation by estrogen is mainly through transcription initiation increase [29]. Estrogen responsive elements have been detected in the proximal promoter region of the Cath-D-encoding gene CTSD [41]. High Cath-D level (by IHC) has been associated with poorer prognosis in patients with ER+ BC, including tumors of lower histological grade [42][43][44]. This suggests that Cath-D may identify a subgroup of more aggressive tumors. Recently, Cath-D expression was assessed in cytosols of different primary BC subtypes (ER + /HER2 + , ER − /HER2 + , ER + /HER2 − , and ER − /HER2 − ) using a cytosolic assay [22]. The mean Cath-D level in the TNBC subtype was in the range of the cut-off values reported in clinical studies on all combined BC subtypes [20,21]. A recent IHC study found that Cath-D was overexpressed in 71.5% of TNBC analyzed (n = 504) and proposed a prognostic model for TNBC outcome based on node status, Cath-D expression, and Ki67 index [45]. In addition, high CTSD mRNA expression was significantly associated with shorter RFS in a cohort of 255 patients with TNBC [22], suggesting that Cath-D overexpression might be a predictive marker of poor TNBC prognosis. However, Cath-D prognostic value had not been studied in AR+ TNBC before. Taking into account all these observations including the ER-like activity of AR in TNBC and the estrogen-mediated regulation of Cath-D, here we assessed the prognostic value of AR/Cath-D co-expression in TNBC.
In our study, Cath-D expression in tumor cells was more frequently detected in AR+ than AR-TNBC, and 62.7% of non-metastatic TNBC harbored AR/Cath-D co-expression. To our knowledge, this is the first study to investigate the profile and the prognostic value of this association in TNBC. AR+/Cath-D+ TNBC seemed to behave like luminal tumors, with a morphological profile distinct from that of other TNBC. Moreover, patients with an AR+/Cath-D+ tumors had a higher risk of relapse and a significant worse OS than patients with other TNBC types. Importantly, in our study, AR/Cath-D co-expression was an independent prognostic factor for OS, but not AR or Cath-D expression on its own, underlying the importance of their co-expression. Moreover, although lymph node involvement was more common in AR+/Cath-D+ tumors in our study, nodal status was not an independent prognostic factor. Usually, in TNBC, relapses occur in the first 3 years of follow-up. This was confirmed in our study for patients with AR-or AR+/Cath-D-tumors. On the other hand, relapses occurred even after a longer interval in patients with AR+/Cath-D+ tumors, like in patients with ER+/HER2− tumors. Prospective data in large cohorts are needed to confirm the prognostic value of AR/Cath-D co-expression in TNBC.
For patients with AR+/Cath-D+ TNBC at risk of late relapse, an adjuvant anti-androgen therapy could be considered, like for ER+ tumors. In the metastatic setting, the clinical benefit rate of anti-androgens (delivered as monotherapy) is only about 20% [8][9][10]. Therefore, new combinations of targeted therapies are urgently needed in this TNBC subgroup. In patients with metastatic or locally advanced TNBC, the atezolizumab (anti-PD-L1 antibody) plus nab-paclitaxel combination prolonged progression-free survival in the entire population and in the PD-L1+ subgroup, in a randomized phase III study [46]. Interestingly, among patients with PD-L1+ tumors, the median OS was 25.0 months with atezolizumab and 15.5 months with chemotherapy alone [46]. A clinical trial is currently assessing a selective androgen receptor modulator and an anti-PD-1 antibody in patients with metastatic AR+ TNBC (NCT02971761).
In agreement with the literature [27], 66.9% of TNBC expressed PD-L1. PD-L1 expression on tumor cells and PD-1 expression on TILs were not different in tumor co-expressing or not AR and Cath-D, in our population. Thus, AR and Cath-D co-expression does not allow defining a subgroup of patients who could benefit most from the combination of anti-androgens and check-point inhibitors.
We recently showed that extracellular Cath-D could be considered as a biomarker in TNBC and a therapeutic target for the fully human anti-Cath-D F1 antibody [22]. Treatment with the F1 antibody of mice xenografted with MDA-MB-231 TNBC cells led to tumor depletion of pro-tumoral M2-polarized TAMs and of MDSCs [22]. In addition, co-culture assays showed that mesenchymal stem cell homing towards MDA-MB-231 cells depends on the chemoattractive effect of extracellular Cath-D [47]. Thus, the association of anti-androgen therapy and anti-Cath-D immunotherapy may be of interest in TNBC. Here, we observed that TAM density was lower in AR+/Cath-D+ tumors. High TAM density has been associated with poor survival rates in BC [25] and also with negative hormone receptor status and malignant phenotype [25]. Thus, the anti-Cath-D F1 antibody might reduce the level of M2-TAMs, allowing the re-activation of immune cells. Similarly, treatment of BC explants with the matrix metalloproteases inhibitor BB-94 reduced tumor growth in mice, not by directly targeting tumor cells, but by indirectly decreasing the number of recruited TAMs, possibly through inhibition of their mesenchymal migration properties [48]. As estradiol antagonists inhibit Cath-D secretion [31,32], we confirmed in vitro that treatment with an androgen antagonist does not affect Cath-D expression or secretion in AR+/Cath-D+ TNBC cells before testing the possibility of combination therapy using anti-androgens and anti-Cath-D antibodies. These data suggest the feasibility of the association of anti-androgens and the F1 antibody to treat patients with AR+/Cath-D+ tumors. This is in agreement with previous studies [49,50] suggesting that AR inhibition, especially in combination with immunotherapy, may provide a potential novel therapeutic option for selected patients with TNBC.
Conclusions
In this series, almost 63% of TNBC co-expressed AR and Cath-D and displayed distinct clinicopathological characteristics. AR/Cath-D co-expression independently predicted OS. Patients with AR+/Cath-D+ tumors tended to have higher risk of late recurrences than patients with other TNBC types. These biomarkers could be useful to identify a specific TNBC subgroup with worse prognosis. Our results could have therapeutic implications because anti-androgens are under investigation and anti-Cath-D antibodies are tested in pre-clinical studies. | 3,087 | 2020-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
The Coma Dust of Comet C2013 US10 (Catalina) A Window into Carbon in the Solar System
Comet C/2013 US10 (Catalina) was an dynamically new Oort cloud comet whose apparition presented a favorable geometry for observations near close Earth approach (~0.93au) at heliocentric distances ~2au when insolation and sublimation of volatiles drive maximum activity. Here we present mid-infrared spectrophotometric observations at two temporal epochs from NASA's Stratospheric Observatory for Infrared Astronomy and the NASA Infrared Telescope Facility. The grain composition is dominated by dark dust grains (modeled as amorphous carbon) with a silicate-to-carbon ratio ~0.9, little of crystalline stoichiometry (no distinct 11.2um feature attributed to Mg-rich crystalline olivine), the submicron grain size distribution peaking at ~0.6um. The 10um silicate feature was weak, ~12.8% above the local continuum, and the bolometric grain albedo was low (~14%). Comet Catalina is a carbon-rich object. This material, which is well-represented by the optical constants of amorphous carbon is similar to the material that darkens and reddens the surface of comet 67P/Churyumov-Gerasimenko. We argue this material is endemic the nuclei of comets, synthesizing results from the study of Stardust samples, interplanetary dust particle investigations and micrometeoritic analyses. The atomic carbon-to-silicate ratio of comet Catalina and other comets joins a growing body of evidence suggesting the existence of a C/Si gradient in the primitive solar system.
INTRODUCTION
Traces of primordial materials, and their leastprocessed products, are to be found in the outermost regions of the solar system in the form of ices of volatile materials (H 2 O, CO, CO 2 , and other more rare species), and more refractory dust grains. This is the realm of comets. Nevertheless, it is certain that this outer re-derstand the environment of the early solar system from pebbles to planetesimals to larger bodies (see Poulet et al. 2016, and references therein). These grains likely are minimally processed over the age of the solar system after incorporation into the nuclei of comets. Information on the nature of these grains comes from a variety of sources, including remote sensing through telescopic observations (ground-based, airborne, and spacebased), rendezvous/encounter experiments (i.e., Giotto, Rosetta/Philae, Deep Impact ), collection of interplanetary dust particles (IDPs) in the Earth's stratosphere, and a sample return mission (Stardust ). All these activities have made important contributions to our understanding of these grains. The most detailed information we have comes from the latter two types of studies, where laboratory analysis is possible. Yet, the IDPs from comets 81P/Wild 2 and 26P/Grigg-Skjellerup are vastly different. The former contains material processed at high temperature (Zolensky et al. 2006) while the latter is very "primitive" (Busemann et al. 2009). For these reasons, it is necessary to determine as best we can the properties of dust grains from a large sample of comets using remote techniques (Cochran et al. 2015). These include observations of both the thermal (spectrophotometric) and scattered light (spectrophotometric and polarimetric). The former technique provides our most direct link to the composition (mineral content) of the grains.
With these data, combined with modeling features in the infrared spectral energy distribution (SEDs) arising from mineral species emitting in the comet coma (dust grains) and dynamical models of solar system formation and planetary migration we can address fundamental questions of solar system formation. These question include: What was the method of transport of these materials, and has information on the scale of those transport processes been stored in primitive solar system objects? Do comets, the remnants of that epoch, still contain clues as to what happened?
In this paper we report our post-perihelion (TP = 2015 Nov 15.721 UT) spectrophotometric observations of comet C/2013 US 10 (Catalina), a dynamically new (see Oort 1950, for a definition based on orbital elements) Oort Cloud comet with 1/a org = 5.3 × 10 −6 AU −1 (Williams 2019) and discuss important new interpretations that the coma grain composition of comets from remote sensing observations can bring to understanding disk processing in the primitive solar system.
OBSERVATIONS
Infrared and optical observations of C/2013 US 10 (Catalina) were conducted at two contemporaneous epochs near close Earth approach (∆ ≃ 0.93 au) with the NASA Infrared Telescope Facility (IRTF) and NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) facility. Table 1 summarizes the all observational data sets discussed herein and physical parameters of the comet.
Ground-based Spectrophotometry
Medium resolution (R ≡ λ/∆λ ≃ 50 − 120) infrared spectroscopy of comet C/2013 US 10 (Catalina) was obtained on the NASA IRTF telescope with The Aerospace Corporation's Broadband Array Spectrograph System (BASS; Hackwell et al. 1990) during the early morning (daytime) hours. BASS has no moving parts and observes all wavelengths in its 2 to 14 µm operable range using two 58 element block impurity band linear arrays simultaneously through the same aperture. All observations were obtained with a fixed 4. ′′ 0 diameter circular aperture. Standard infrared observing techniques were employed, using double beam mode with a chop/nod throw of ≃ 60 ′′ . Sprague et al. (2002) provide a detailed description of the BASS data acquisition and preliminary reduction scheme. Non-sidereal tracking of the comet by the IRTF telescope was performed using Jet Propulsion Laboratory (JPL) Horizons' (Giorgini et al. 1996) generated rates, and fine guiding, to keep the comet photocenter in the BASS aperture, was done either by manually guiding on the visible comet image produced by the BASS sky-filtered visible CCD camera, or off a strip-chart using thermal channels of the BASS array.
Photometric calibration of individual comet data sets were performed using observations of α Boo observed at equivalent airmass to minimize telluric corrections. α Boo is a well-characterized infrared standard for ground-and space-based telescopes and has been extensively monitored and modeled by the BASS instrument team and other investigators for decades. The calibration and telluric corrections are uncertain to within ≃ 3%. Examination of independent, flux calibrated spectra of comet C/2013 US 10 (Catalina) obtained during the course of the 2016 Jan 10.61 UT observational campaign showed no variance in the flux level of the spectral energy distribution (i.e., no outbursts, or jet induced changes in coma brightness were witnessed), or spectral shape. Hence, all spectra where averaged together (with the proper propagation of all statistical point-to-point uncertainties) to produce the final spectrum presented in Fig. 1.
b Vector direction measured CCW (eastward) from celestial north on the plane of the sky.
(5 sec each) of the comet nucleus and surrounding coma were obtained using AB pairs nodding the telescope by 60 ′′ , and dithering the telescope while tracking at the non-sidereal rate corresponding to the predicted motion of the comet in an airmass range of ≈ 1.18. All images were corrected for overscan, and bias with stan-dard IRAF 1 routines. The data was photometrically calibrated using GSC 02581-02323 (G2V) SDSS colors reported from SIMBAD transformed to the USNO system as described in Tucker et al. (2006), adopting 3631 Jy for zeroth magnitude. No color corrections for spectral type were applied in the transformation. The average nightly seeing was ∼ 2. ′′ 2 as determined from the standard star. The observed i ′ flux density of the comet measured in an equivalent BASS aperture was (2.316 ± 0.001) × 10 −17 W cm −2 µm −1 . UT with the NASA IRTF telescope. This spectrum was derived by averaging all photometrically calibrated individual comet spectra obtained over a 1.33 hr interval. Regions of poor telluric transmission ( < ∼ 30%) where from atmospheric CO2 and H2O vapor have strong absorption bands result in gaps in the data where BASS spectral data points are clipped out. The red curve is the best-fit blackbody, TBB = 265.3±2.6 K fit to the local 10 µm continuum as described in §3.3. The excess over the blackbody curve at short wavelengths is due to scattered, reddened sunlight contributing substantially to the flux.
Airborne SOFIA Observations
Mid-infrared (mid-IR) spectrophotometric observations of comet C/2013 US 10 (Catalina) were obtained using the Faint Object InfraRed CAmera (FORCAST; Herter et al. 2018) mounted at the Nasmyth focus of the 2.5-m telescope of the SOFIA Observatory (Young et al. 2012). FORCAST is a dual-channel mid-IR imager and grism spectrometer operating from 5 to 40 µm.
The data were acquired on two separate, back-toback flights, originating from Palmdale, CA at altitudes of ≃ 11.89 km in 2016 February, conducted as part of our SOFIA comet programs (P.I. Woodward, AOR ID 04 0010). Mid-infrared imaging observations of C/2013 US 10 (Catalina) in three filters and the The Short Wavelength Camera (SWC) grism (G063) were obtained on the first flight, while on the second flight, imaging in the same three filters was repeated in addition to Long Wavelength Camera (LWC) grism observations with three gratings (G111, G227, and G329). For all spectroscopic observations the instrument was The red curve is the best-fit blackbody that yields a TBB = 239.5 ± 0.5 K fit using all wavelengths > ∼ 6.0 µm as described in §3.3. configured using a long-slit (4. ′′ 7 × 191 ′′ ) which yields a spectral resolution R = λ/∆λ ∼ 140-300. The comet was imaged in the SWC using the F197 filter to position the target in the slit. Both imaging and spectroscopic data were obtained using a 2-point chop/nod in the Nod-Match-Chop (C2N) mode with 45 ′′ chop and 90 ′′ nod amplitudes at angles of 30 • /210 • in the equatorial reference frame.
The FORCAST scientific data products were retrieved from the SOFIA archive, after standard pipeline processing and flux calibration was performed (for details see Clarke et al. 2015;Woodward et al. 2015). An extensive discussion of the FORCAST data pipeline can be found in the Guest Investigator Handbook for FOR-CAST Data Products, Rev. B 2 The computed atmospheric transmission at flight altitudes was used to clip-out grism data points in wavelength regions where the transmission was less than 70%. Subsequently, to increase the signal-to-noise (SNR) ratio of the comet spectra, data in each grism spectra segment were binned using a weighted 3-point boxcar. As there is no wavelength overlap between individual FORCAST grism segments, combined with an inherent uncertainty in the absolute grism flux calibration, and the fact that observations were conducted on separate nights, photometry derived from the image data was used to scale the grism data to a common spectral energy distribution (SED). Integration of the observed grism data with the corresponding filter transmission profile lying within the respective grism spectral grasp (i.e., FORF111 for G111) was used to construct a synthetic photometric point. This latter photometric point was compared to the observed image aperture photometry derived within an equivalent circular diameter beam corresponding to the grism extraction aperture area (average for all grisms was 17. ′′ 54 ± 0. ′′ 74, derived data product keyword PS-FRAD). The grism scaling factor was derived from this ratio ( < ∼ 8%). Neither the shape of the observed SED inferred from the image photometry nor the relative flux level of the SED changed significantly over the two epoch of the SOFIA observations.
The resultant composite FORCAST spectra of comet C/2013 US 10 (Catalina) is presented in Fig. 2. Figure 3 presents panels for each individual grating segment, spanning the respective spectral grasp, to illustrate spectral details of the observed SEDs.
Optical images in the SDSS i ′ filter also were obtained on each flight series prior to the start of the mid-infrared observing sequence using the Focal Plane Imager (FPI+; Pfüller et al. 2016). The FPI+ field-of-view is 8.7 square arcminutes, with a plate scale of 0. ′′ 51 per pixel, and a FWHM of ≃3. ′′ 75. The comet was tracked using the JPL Horizons non-sidereal rates. These data frames were bias and overscanned corrected using standard routines. The comet's surface brightness was flux calibrated by using aperture photometry of seven stars in the image field of view with known i ′ magnitudes taken from the USNO UCAC4 catalog to establish the photometric zero point (resultant fractional uncertainty of ≃ 1%). The observed i ′ flux density of the comet measured in an equivalent circular aperture corresponding to the average SOFIA FORCAST grism extraction aperture was (8.215 ± 0.009) × 10 −17 W cm −2 µm −1 .
3. DISCUSSION 3.1. SOFIA Imagery and Photometry Images of comet C/2013 US 10 (Catalina) obtained during the 2016 February 09 UT flight are presented in Fig. 4. Examination of the azimuthally averaged radial profiles of the comet in each filter reveals comet C/2103 US 10 (Catalina) exhibited extended emission beyond the point-spread function (PSF) of point sources observed with FORCAST under optimal telescope jitter performance in each filter. 3 Centroiding on the photocenter of the comet nucleus, photometry in an effective circular aperture of radius 13 pixels, corresponding to 9. ′′ 984, with a background aperture annulus of inner radius 30 pixels (23. ′′ 58) and outer radius of 60 pixels (47. ′′ 16) was performed on the Level 3 pipeline co-3 http://www.sofia.usra.edu/Science/ObserversHandbook/FORCAST.html added (*.COA) image data products using the Aperture Photometry Tool (APT v2.4.7;Laher et al. 2012). The photometric aperture is ≃ 3× the nominal point-source full width half maximum (FWHM), and encompassed the majority of the emission of the comet and coma. Sky-annulus median subtraction (ATP Model B as described in Laher et al. 2012) was used in the computation of the source intensity. The stochastic source intensity uncertainty was computed using a depth of coverage value equivalent to the number of co-added image frames. The calibration factors (and associated uncertainties) applied to the resultant aperture sums were included in the Level 3 data distribution and were derived from the weighted average calibration observations of α Boo.
The resultant SOFIA photometry is presented in Table 2. For the SOFIA epoch of comet C/2013 US 10 Catalina, the coma did not appear to have jets or active areas creating discernible coma structures, by our visual examination of the photometric images divided by their azimuthally averaged radial profiles.
Dust Thermal Models of Infrared Spectra
Infrared spectroscopic observations are fitted with thermal models using standard spectral fitting techniques that minimize χ 2 . Interpreting thermal models enables investigation into fundamental quantities of comet dust populations including: (1) bulk composition; (2) silicate structures of disordered ("amorphous silicates") and/or crystalline forms (forsterite and enstatite); (3) particle structures and size distributions; and (4) coma bolometric albedo. Refractory dust particles are much more robust in maintaining the chemical signatures from the time of formation (see Wooden et al. 2017) than the highly volatile ices as well as semi-refractory organics with limited coma lifetimes (Wooden et al. 2017;Dello Russo et al. 2016). Semi-refractory organics are known to exist through their limited lifetimes in comae, and are presumed to be organics in the dust that are modified while in the coma. These are the so-called 'distributed sources', distributed to the coma by the dust particles. The semi-refractory organics are not (yet) observed in thermal IR spectroscopy but rather indirectly by the observed delayed release of molecules such as CO and/or H 2 CO as described in Disanti et al. (1999) and Cottin & Fray (2008) or by changes in the color of the scattered light (Tozzi et al. 2004). Polarization properties of particles also are dependent upon organics (Hadamcik et al. 2020). Wooden et al. (2017) and Dello Russo et al. (2016) provide a detailed discussion of semi-refractory organics in cometary comae.
Thermal emission spectroscopy when combined with thermal modeling probes the particle composition from the optical active material in comet coma. A number of approaches have been employed to model the dust thermal emission and study the composition of Figure 3. Comet C/2013 US10 (Catalina) SOFIA FORCAST spectra by individual grating to highlight spectral details and the signal-to-noise quality of the data. The panels are (a) G063, (b) G111, (c) G227, and (d) G329. The original spectra have been binned with a 3-point width (in wavelength-space) median boxcar, with the errors propagated by use of a weighted mean. Gaps in the contiguous spectral coverage arise from regions where the atmospheric transmission was modeled to be < ∼ 70%.
cometary particles. Usually, these involve the simultaneous use of a number of different grain compositions (mineralogy), a size distribution, and a description of the particle porosity. Radiative equilibrium is assumed when deriving particle temperatures, which are strongly composition-dependent as well as particle-radiidependent for low to moderate particle porosities. Particles of more highly absorbing compositions produce higher temperatures and higher flux density thermal emissions. To produce the combined emission of multiple compositions and integrated over grain size distributions, thermal models may employ an ensemble (sums) of individual particles of homogeneous dust materials (Harker et al. , 2011(Harker et al. , 2017, or may employ composition "mixtures" calculated using Effective Medium Theory (see Bockelée-Morvan et al. 2017a,b). At a given heliocentric (r h [au]) and geocentric (∆[au]) distance, the particle (dust) composition of the optically active grains, comprising a linear combination of discrete mineral components, porous amorphous materials, and solid crystals in a comet's coma can be constrained by non-negative least-squares fitting of the thermal emission model spectra to the observed comet spectrum. The relative mass fractions and their respective correlated errors and the particle properties including the porosity and size distribution, having invoked a Hanner grainsize distribution (HGSD; Hanner 1983) for n(a)da, are given as a prescription for the composition of coma particles (for details see Harker et al. 2018Harker et al. , 2011, and references therein). The particle compositions of dust in the coma of comet C/2013 US 10 (Catalina) and relevant parameters from the best-fit thermal modeling are summarized in Table 3. The uncertainties on the derived thermal model parameters reflect the 95% confidence limits that result from 1000 Monte Carlo trials 3.2.1. Optical properties and IDP analogues A particle's composition, structure (crystalline or amorphous), porosity, and effective radius (a) determine its absorption and emission efficiency, Q abs (a grain's absorption efficiency and emission efficiency are equivalent at any given wavelength by Kirchhoff's Law). For an individual particle of effective radius a, F λ (a) ∝ π × a 2 × Q abs (a) × B λ (T dust [a, composition]) where B λ is the Planck blackbody function, evaluated as a function of grain temperature, T (K), particle size, and particle composition .
The optical properties of the materials used in the radiative equilibrium calculations for particle temperatures are derived from either laboratory-generated materials or mineral samples from nature. Materials chosen for our thermal models have available optical constants and are found in or are analogous to materials in cometary samples. Crystalline silicates in Interplanetary dust particles (IDPs, Wooden et al. 2000), Stardust, UltraCarbonaceous Antarctic MicroMeteorites (UCAMMs; Duprat et al. 2010) are of olivine and pyroxene compositions with a range of Mg:Fe contents with typically 1.0 ≤ y ≤ 0.5 and 1.0≤ x ≤ 0.5 (Wooden et al. 2017;Frank et al. 2014;Joswiak et al. 2014;Dobricȃ et al. 2012;Brunetto et al. 2011). Only f Peak grain size (radius) of the Hanner GSD. † Ratio represents the bulk mass properties of the materials in the models.
Mg-rich crystalline olivine resonances have thus far been detected definitively in multiple comets using both the mid-and far-IR resonances. Laboratory studies of crystalline olivine by Koike et al. (2013) show that with decreasing Mg-content (i.e., with y < 0.8), the 11.2 µm peak shifts towards 11.4 µm and the far-IR resonances dramatically change to different central wavelengths with different relative intensities. However, these more fayalitic crystalline olivine resonances have not been detected in comet comae. Amorphous silicates and amorphous carbon in thermal models are considered candidate ISM or dense cloud materials (Wooden et al. 2017). The outer cold disk where comet nuclei accreted is a likely reservoir of inherited interstellar grains (Sterken et al. 2019). However, modeled characteristics of interstellar grains and measured cometary organics differ. Matrajt et al. (2005) UT at the NASA IRTF telescope. Gaps in the spectra are due to regions of poor telluric transmission within the continuous wavelength range covered by the instrument. The decomposition technique is used to determine the dust composition responsible for the observed coma emission at midinfrared wavelengths. The solid red line is the best-fit model of the emission from the aggregate dust components, wherein the orange line represent the contribution from amorphous carbon, the dark blue solid line is the emission from amorphous pyroxene, the solid cyan turquoise line depicts the amorphous olivine emission, and the green solid line depicts the crystalline olivine ("hot" forsterite). The observed spectral data are the filled black circles with respective uncertainties. The coma dust composition is dominated by amorphous carbon (dark material) and silicate grains with peak grain sizes (radii) of 0.5 µm (Hanner grain size distribution). Some crystalline material is present.
persistently suggest that the origin of the organic fraction of cometary IDPs is a different environment than the diffuse interstellar medium (DISM) because (a) the 3.4 µm band of organics in anhydrous IDPs is significantly narrower than in the DISM (e.g., towards the Galactic Center that is a mixture of diffuse and dense cloud material) and (b) the aliphatic chains in IDPs are longer (less ramified) than in the DISM, based on the −CH 2 /−CH 3 ratio in IDPs. The Heterogeneous dust Evolution Model for Interstellar Solids (THEMIS) model (Jones et al. 2017;Jones 2016) predicts the formation and evolution of interstellar dust, from the harsher UV conditions of the ISM, through the DISM, the translucent clouds at the interface of and into dense clouds. In these regimes dust particles eventually either work their way out to less dense phases of the ISM and thus presumably UT) circles superposed on the data points (black) are the photometry points taken from the FORCAST imagery in a circular aperture equivalent to the grism extraction area (the average for all grisms was 17. ′′ 54 ± 0. ′′ 74, derived data product keyword PSFRAD) and are used to scale the spectral segments to the photometry. The coma dust composition is dominated by amorphous carbon (dark material) and silicate grains with peak grain sizes (radii) of 0.7 µm (Hanner grain size distribution). Some crystalline material is present.
are cycled into and out of phases, or the dust particles in the dense clouds make their way into protoplanetary disks such as our own. In translucent clouds THEMIS carbon-chemistry facilitates the growth of H-rich and aliphatic-rich matter, denoted a-C(:H), which accretes and then coagulates to tens of nm-size particles through a complex set of chemical reactions. The carbon-chemistry backbones are carbon belt-like molecules with aromatic bonds (n-cyclacenes) and an important process is the epoxylation of the surface materials. The carbonaceous particles, upon return to the harsh UV interstellar radiation field evolve "towards an end-of-the-road H-poor and aromatic-rich a-C material" (Jones & Ysard 2019). Carbonaceous matter in cometary samples appear significantly less dominated by aromatic moieties than implied by THEMIS models. Stardust samples only reveal a small concentration of small PAHs (Clemett et al. 2010). Carbon X-ray Absorption Near Edge Spectroscopy (C-XANES) spectra of Stardust and IDP organics show saturated aliphatic carbon bonds are more recurrent than aromatic C=C bonds as well as amorphous carbon being the only carbon form common between these samples and Bells, Tagish Lake, Orgueil and Murchison meteorites (Wirick et al. 2009;De Gregorio et al. 2017). Laboratory absorption spectra do not quantify amorphous carbon as it has no resonances, although its presence can be discerned through Carbon X-ray Absorption Near Edge Spectroscopy (C-XANES; Keller et al. 2004;Messenger et al. 2008). Amorphous carbon is found in many IDPs Busemann et al. 2009;Wirick et al. 2009;Brunetto et al. 2011) but amorphous carbon is not discussed for all IDPs Ishii et al. 2018) nor for all extraterrestrial particulate samples from primitive small bodies, specifically UCAMMs (Dartois et al. 2018;Mathurin et al. 2019). Despite a diversity of bonding structures in cometary organics (Bardyn et al. 2017) as well as organic matter in asteroids, there is a severe paucity of optical constants (de Bergh et al. 2008). Typically, optical constants of relatively transparent tholens are combined with optical constants of the highly absorbing amorphous carbon to darken the models for surfaces of outer ice-rich bodies (de Bergh et al. 2008). Hence, amorphous carbon, which is devoid of aromatic bond IR resonances, is the best choice for the highly absorbing carbonaceous matter in models of dark surfaces of ice-rich bodies as well as for cometary coma particles.
Amorphous silicates in thermal models are analogous to Glass with Embedded Metal and Sulfides (GEMS) in IDPs (see Floss et al. 2006;Brunetto et al. 2011;Bradley 2013;Ishii et al. 2018;Stroud et al. 2019). The ISM silicate absorption feature has spectral similarities to GEMS (Bradley 2013;Stroud et al. 2019) and radiation damage can explain the non-stiochiometry of GEMS (Jäger et al. 2016). An alternative hightemperature formation scenario for GEMS is proposed for the protoplanetary disk (Keller & Messenger 2011) but is challenged by discovery of GEMS with interior organic matter that could not have survived temperatures above 450 K (Ishii et al. 2018). Amorphous silicates are a ubiquitous component of IR spectra of cometary comae and their radiation equilibrium temperatures require compositions of Mg:Fe≈50:50 .
The Hanner Grain Size Distribution (HGSD)
Our modeling invokes the Hanner grain-size distribution n(a) = (1 − a 0 /a) M (a 0 /a) N , where a is the particle radius, where a 0 = 0.1 µm is the minimum grain radius, and M and N are independent parameters (Hanner 1983;Hanner et al. 1994). The HGSD is a modified power law that rolls over at particle radii smaller than the peak radius a p = (M + N )/N , which is constrained by the thermal model analyses.
Moderately porous particles
The optical properties of porous particles composed of amorphous materials may be calculated by incorporating "vacuum" as one of the material components (Bohren & Huffman 1983). Porous grains are modeled with an increasing vacuous content as expected for hierarchical aggregation, using the porosity prescription or fractional filled volume given by f = 1−(a/0.1 µm) D−3 , where a is the effective particle radius, with the fractal dimension parameter D ranging from D = 3 (solid) to D = 2.5 (fractal porous but still spherical enough to be within the applicability of Mie theory computations; Harker et al. 2018Harker et al. , 2011, and references therein). Particle porosity affects the observed spectra of comets because the porous grains are cooler than solid grains of equivalent radius as their vacuous inclusions make them less absorbent at UV-visible wavelengths . The porosity prescription parameter D is coupled with the grain size distribution slope parameter N , and the two parameters are simultaneously constrained when fitting IR SEDs. Increasing porosity (lowering D) decreases particle temperatures, which can be compensated for by increasing the relative numbers of smaller to larger grains by steepening the slope (increasing N ) of the HGSD as illustrated in Fig. 2 of Wooden (2002).
An extremely porous particle that is an aggregate of submicron compact monomers can have the same temperature as its monomers (P(a) max >80%; Xing & Hanner 1997) or (P(a) max ≥99% with a ≥5 µm; Kolokolova et al. 2007). However, IR spectra of comets are not well-fit by such extremely porous particles that are uniformly as hot as their submicron-radii monomers, regardless of particle size. Thermal models for observed IR spectra of comets need particle size distributions of moderately porous or solid particles. For a comet near 1.5 AU, a HGSD has submicron-to micron-radii particles (a peak ≤ 1 µm) that produce the warmer thermal emission under the 10 µm silicate feature and at shorter near-IR wavelengths, as well as larger cooler particles producing the decline in the thermal emission at longer (far-IR) wavelengths. Compared to a size distribution of compact solid particles (D = 3), a size distribution of moderately porous particles (P(a) ∼66 to 86%, D = 2.727 to D = 2.5, a eff = 5 µm) are cooler and produce enhanced emission at longer wavelengths while still producing a silicate emission feature with the observed contrast compared to the local "pseudo-continuum" (see § 3.4). Hence, the thermal models constrain the porosity of the amorphous materials (amorphous silicates and amorphous carbon), and the slope and the peak radius (a peak ) of the grain size distribution of (see Harker et al. 2018Harker et al. , 2011.
CDE models of solid trirefringent silicate crystals
Silicate crystalline particles are not well modeled as spheres by Mie Theory because of their anisotropic optical constants and irregular shapes (Koike et al. 2010). Crystals are not modeled as porous particles or as mixed-material particles using Effective Medium Theory because modeled resonant features do not match laboratory spectra of the same materials. Discrete solid crystals are better computed using the Continuous Dis-tribution of Ellipsoids (CDE) approach (Fabian et al. 2001) or the discrete dipole approximation (DDA; Lindsay et al. 2013). Crystals of larger sizes than ∼1 µm do not replicate the observed SEDs of comets (Min et al. 2005). CDE with c-axis elongated shapes reasonably reproduces laboratory spectra of crystalline forsterite powders (Fabian et al. 2001) and serves as a starting point for our thermal models. Discrete solid crystals with sizes from 0.1 to 1 µm are included in our admixture of coma dust materials. From our thermal models, we quote the relative mass fractions for the ≤1 µm portion of the HGSD in Table 3.
Comet Crystalline Silicates and Disk Transport
The presence of crystalline silicate materials in cometary spectra and in cometary samples indicates transferal of materials that formed in the inner protoplanetary disk to the outer disk (Westphal et al. 2017;Brownlee et al. 2006;Zolensky et al. 2006) where volatile ices (H 2 O, CO, CO 2 ) were extant along with dust particles to become incorporated into cometary nuclei (Rubin et al. 2020). Crystalline silicates are relatively rare along lines-of-sight through the interstellar medium ( < ∼ 5%, Kemper et al. 2004) and towards embedded young stellar objects or compact HII regions (1 to 2%, with a few sources at >3%, Do-Duy et al. 2020). A significant crystalline silicate component in cometary dust has been clearly demonstrated by laboratory examinations of Stardust (Frank et al. 2014) and IDPs Busemann et al. 2009;Zolensky & Barrett 1992). Crystalline silicate mass fractions (defined as f crys ≡ m cryst /[m amorphous + m cryst ] where m cryst is the mass fraction of crystals) derived from thermal models of cometary IR SEDs typically are ∼20% to 55% Harker et al. 2018Harker et al. , 2011Harker et al. , 2007Wooden et al. 2004, and Appendix I). Detailed laboratory studies of cometary forsterite and enstatite crystals show a small fraction have mineralogical signatures of gas-phase condensation such as low iron manganese enriched (LIME) compositions Frank et al. 2014), 16 O-enrichments commensurate with early disk processes (Defouilloy et al. 2018(Defouilloy et al. , 2016(Defouilloy et al. , 2017, as well as condensation morphologies such as enstatite ribbons in anhydrous IDPs (Bradley et al. 1999).
Moreover, Stardust samples and some cluster IDPs contain olivine crystals with a wider range of Fecontents (10% < ∼ Fe < ∼ 60%) than the low Fe-contents of ≃10% to 20% deduced from the wavelengths of the resonances of olivine crystals in cometary spectra (Wooden et al. 2017;Crovisier et al. 1997Crovisier et al. , 1996. It is a puzzle as to why the spectral signatures of Febearing crystalline silicates are not spectrally detected in comets (Wooden et al. 2017).The Fe-bearing olivine crystals are analogous by their minor element compositions to olivine (Mg ≤ 80%) crystals in type-II chondrules and are called micro-chondrules or chondrule frag-ments Frank et al. 2014).
In Stardust samples, one 15 µm-size type-II chondrule called 'Iris' has an age-date of ≥ 3 million-years (with respect to CAI formation) and is well-modeled as an isolated igneous system (Gainsforth et al. 2015).
Stardust samples pose a number of challenging questions for disk models about the formation of the nucleus of comet 81P/Wild 2. How did particles radially migrate as late as a few million years in disk evolution to the regime of volatile ices of H 2 O, CO and CO 2 ? How did cometary dust minerals that condensed early in disk evolution persist in the disk long enough to be incorporated into this particular cometary nucleus, that is, persist and not be lost via the inward movement of particles? As of yet, satisfactory answers to either of these questions do not exist.
Silicate crystals, specifically referring to forsterite and enstatite that are the abundant Mg-rich silicate crystalline species in comets and/or cometary samples (Wooden et al. 2017), condensed at temperatures near 1800 K or or possibly were annealed materials at temperatures near 1100 to 1200 K in shocks (Harker & Desch 2002) under low oxygen fugacity conditions ). Radial transport may have occurred through a combination of protoplanetary disk processes including advection, diffusion, turbulence and aerodynamic sorting, meridional flows, disk winds, and/or planetary migration (Vokrouhlický et al. 2019;Ciesla 2011;Hughes & Armitage 2010;Wehrstedt & Gail 2008;Gail 2004). Disk models with meridional flows (see Gail 2004) have been successful in predicting ∼20% silicate crystalline mass fractions at disk radii of more than tens of AU in <1 million-years.
Radial transport by advection can work through disk wind angular momentum transport (Bai 2016) but can also be produced by turbulent viscosity in the bulk of the disk. Radial transport by diffusion requires turbulence. It is generally thought that magento-hydrodynamical (MHD) turbulence occurs only in rarified upper layers of the disk atmosphere, if at all (Bai 2016). However even without MHD effects, there are two recently-discussed hydrodynamical mechanisms for producing turbulence: convective over-stability (CO) and vertical shear instability (VSI) that are either individually or collectively operative in various locations in the disk (for example Pfeil & Klahr 2019). Meridional 2D flows are another robust feature of disk models when turbulence mechanisms are considered operative (Lyra & Umurhan 2019;Stoll et al. 2017). Yet, even the qualitative nature of this flow is debated. Meridional flows for 2D and alphadisk models were outwards along the mid-plane and inwards above one scale height (see Gail 2004). Recent 3D models of meridional flow show that the outward flow is above one scale height so particles that are lofted by turbulence to above one scale height above the mid-plane can move outwards (Pfeil & Klahr 2020;Stoll et al. 2017). To date, meridional flows only are inferred from ALMA in 12 CO observations of the >300 au outer disk regions of the ∼5 million-year old more massive Herbig Ae/Be system HD 163296 (Teague et al. 2019;Powell et al. 2019). Large scale gas motions are not yet observed for analogs of our protoplanetary disk but cometary crystalline mass fractions suggest inner disk materials moved over large distances.
Models without meridional flows also show outward movement of small particles, merely following the outward advective motion of the gas, at certain radii and times. Estrada et al. (2016) show disk models (see their Fig. 15) with a range of dust particle masses in which the maximum disk radius reached by particles of a specific particle mass (i.e., size) increases with time, i.e., some particles do move outward and the smaller particles are more successful in moving outwards. Porous particles have larger aerodynamic cross sections compared to solid particles of the same mass so porous particles are favored in outward movement compared to solid particles . Ciesla & Sandford (2012) simulate the migration of particles by randomized turbulent 'kicks', and thereby nicely illustrate the large-distance motions of some particles.
As a complement to transport within the disk, centrifugally driven disk winds may deposit particles with sizes ≥ 1 µm to the outer disk at early times, which "may be relevant to the origin of the 20 µm CAIlike particle discovered in one of the samples returned from comet 81P/Wild 2" (Giacalone et al. 2019). Abrahám et al. (2019) observed the brightest outburst to date from EX Lup using VLTI MIDI interferometry and VLT VISIR IR spectroscopy. Within five years practically all crystalline forsterite that had become enhanced in the inner disk disappeared from the surface of the inner disk. Over that time, the spectral resonances from olivine crystals shifted emphasis from the mid-to far-IR wavelengths indicating that the crystals experienced outward movement.
Disk models are challenged to effectively transport as well as maintain solids in the outer protoplanetary disk against the inward drift of particles, especially as particles grow to 'pebble' size and decouple from the gas. Models that treat particle coagulation as well as particle collisional destruction which maintain a population of fine-grained particles (i.e., smaller particles with lower Stokes numbers [St η ]) then outward movement of small particles with time occurs (see Estrada et al. 2016). Many studies have investigated how material that is injected into the disk spreads outwards and inwards with time (for example, Sengupta 2019). When turbulence is a driving mechanism for radial transport, then aerodynamics affects particle movements, and one can expect signatures of size sorting by St η ∝ρ s a, where a is the particle radius andρ s is the average particle density (Jacquet 2014;Cuzzi et al. 2001). Stardust samples demonstrate that aerodynamic sorting in aggregate formation occurred for particles of olivine compared to FeS, which are denser than olivine (Wozniakiewicz et al. 2013(Wozniakiewicz et al. , 2012. The Rosetta mission's imaging studies showed that comet 67P/Churyumov-Gerasimenko's particles are hierarchical aggregates of hundreds of microns to mm-size with components that are submicron to tens of micron in size (Langevin et al. 2020;Güttler et al. 2019;Hornung et al. 2016) Stardust samples and Rosetta particle studies are commensurate with the idea that aggregate particle components of submicron to tens of micron of size may be favored over larger solid particles in their outward movement to the disk regimes of comet-nuclei formation.
Revised specific density for Amorphous Carbon
Our thermal model adopts an amorphous carbon (Acar) specific density of ρ s (Acar) = 1.5 g cm −3 , from a quoted value of ρ s (Acar) = 1.47 g cm −3 (Williams & Arakawa 1972) measured for the same amorphous carbon material from which our optical constants were derived (Edoh 1983;Hanner et al. 1994). 5 This specific density of ρ s (Acar) used in these analyses of comet C/2013 US 10 (Catalina) herein represents a significant change from our prior thermal models and publications that used an assumed bulk density of carbon of 2.5 g cm −3 (Lisse et al. 1998;, which actually was a specific density slightly higher than that of graphite of 2.2 g cm −3 (Robertson 2002). The relative mass fractions of carbonaceous matter and siliceous matter are important and allow us to take a detailed look at the carbonaceous contribution of comets to the hypothesized gradient of carbon in the solar system ( §3.9) and as discussed by other authors (Hendrix et al. 2016;Gail & Trieloff 2017;Dartois et al. 2018). 6 For completeness, in our thermal models the specific density of amorphous silicates is ρ s (Asil) = 3.3 g cm −3 as discussed by Harker et al. (2002, and references therein).
Coma Dust Composition from Thermal Models
Comet C/2013 US 10 (Catalina) is a dynamically new (DN) Oort cloud with eccentricity of ≃ 1.0003. Compositionally, the dust in the coma of comet C/2013 US 10 (Catalina) is carbon-rich and this comet is among a subset of observed comets that are similarly carbonrich, some of which are also DN. The carbon-rich dust particles of comet 67P/Churyumov-Gerasimenko were measured in situ to have by weight 55% mineral and 45% (carbonaceous) organic (see Fig. 10, Bardyn et al. 2017). If we consider their mineral-to-organic ratio to be analogous to our silicate-to-carbon ratio then 67P/Churyumov-Gerasimenko has a ratio of 1.1 and C/2013 US 10 (Catalina) has ratios of 1.55 and 1.03 for 1.3 au (BASS) and 1.7 au (FORCAST), respectively. However, within the thermal model parameter uncertainties the silicate-to-carbon ratios are the same for both epochs. A decrease by a factor of 1.5 in the silicateto-carbon ratio for the best-fit values between the two epochs is partly attributed to the definitive detection of crystalline forsterite at 1.3 au that increases the silicate mass fraction relative to the upper limit for forsterite at 1.7 au. Between the two epochs the amorphous carbon increases by a factor of 1.21 (see Table 3).
The dust particle population in comet C/2013 US 10 (Catalina) is characterized by a moderate particle porosity (D = 2.727). Coma grains extend to submicron size particles, the HGSD (defined in § 3.2.2) peaks at an average a p = {0.7, 0.5} µm, with a grain size distribution slope of N = {3.4, 3.7}, respectively, for the two epochs at 1.3 au and 1.7 au. The derived coma dust properties of C/2013 US 10 (Catalina) share similar characteristics with those found recently for some other long period Oort cloud comets, such as C/2007 N3 (Lulin) which is also DN .
The HGSD slope of comet C/2013 US 10 (Catalina) is in the range of other comets, including Oort cloud comets, where typically 3.4 ≤ N ≤ 4. However, its HGSD slope is greater (steeper) than found for comet 67P/Churyumov-Gerasimenko, which has multiple measurements of its differential grain size distribution Examination of the SEDs of comet C/2013 US 10 (Catalina) obtained at two different epoch and the thermal model derived parameters (Table 3) enable us to deconstruct and decipher aspects of the inner coma dust environment (Figs. 5 and 6). From the 58% drop in the available ambient solar radiation between the 1.3 au (BASS epoch) and 1.7 au (SOFIA epoch) observations, one would expect on average the particles on the coma to be cooler at the latter epoch. From the long wavelength shoulder (λ > ∼ 12.5 µm) of the 10 µm silicate feature and longward, the SED measured at 1.7 au (Fig. 2) shows enhanced emission at longer wavelengths. Thus, the particles contributing to the far-IR emission are cooler at 1.7 au compared to those at 1.3 au as anticipated. However, the the thermal emission at 7.8 µm and bluewards is similar for the two epochs. Hence, at 1.7 au the coma of comet C/2013 US 10 (Catalina) must have an increased abundance of smaller warm amorphous carbon particles. Moreover, the number of dust particles in the coma at 1.7 au is increased over that at 1.3 au in order to produce about the same flux density of thermal emission at these two epochs with the cooler particles present at 1.7 au.
There is evidence of a narrow 11.2 µm silicate feature attributable to Mg-rich crystalline olivine (Wooden 2008;Hanner et al. 1994). This is borne out by the detailed thermal modeling of the SED which constrains the relative mass fraction of crystalline forsterite grains in the coma at 1.3 au. The ratio of the crystalline silicate mass to the total silicate mass was ∼ 0.44. The crystalline mass fraction determined for comet C/2013 US 10 (Catalina) is greater than that determined for other dynamically new comets such as C/2012 K1 (Pan-STARRS) studied with SOFIA (Woodward et al. 2015).
The derived values for each observational epoch are summarized in Table 3.
For the portion of the grain size distribution with radii a ≤ 1 µm (the submicron population), the silicate-tocarbon ratio is 1.116 +0.072 −0.074 and 0.743 +0.264 −0.220 at 1.3 au and 1.7 au, respectively (see Table 3). Compared to 1.7 au the higher silicate-to-carbon ratio at 1.3 au is partly due to a factor of ∼1.25 less amorphous carbon combined with an increase in mass of silicates from the definitive detection of forsterite. This crystalline silicate material produces the sharp peak at 11.1 to 11.2 µm (Koike et al. 2010, and references therein) is relatively transparent outside of its resonances. At 1.3 au, crystalline silicate mass fraction (f cryst ) is 0.441 +0.033 −0.035 in the coma of comet C/2013 US 10 (Catalina) so forsterite crystals contribute significantly to the silicate-to-carbon ratio. Crystalline silicates are tracers of radial migration of inner disk condensates or possibly shocked Mg-rich amorphous olivine so the 44% crystalline mass fraction indicates significant radial transport of inner disk materials out to the comet-forming regime (see §3.2.5).
Silicate feature shape and strength
The spectral shape of the 10 µm silicate feature can be revealed by dividing the observed flux by a local 10 µm blackbody-fitted 'pseudo-continuum.' The shape of the 10 µm silicate feature arises from emission from submicron-to at most several-micron-radii silicate particles in the the coma, depending on the porosity. In thermal models, the 'pseudo-continuum' has contributions from porous or solid amorphous carbon, which is featureless at all wavelengths. Thermal models require porous partciles (D = 2.7272) for comet C/2013 US 10 (Catalina). Figure 7 shows the silicate feature shape for comet C/2013 US 10 (Catalina) from the BASS observations. The FORCAST mid-IR spectral data show a similar contrast silicate feature but with lower SNR as the BASS data, so these data are not included in the figure for clarity.
The silicate strength parameter historically enables one to inter-compare the dust properties of different comets by quantifying the silicate feature contrast with respect to the local 'pseudo-continuum' (Sitko et al. 2004;Woodward et al. 2015). The 10 µm silicate fea-ture strength, defined as F 10 /F c , where F 10 is the integrated silicate feature flux over a bandwidth of 10 to 11 µm and F c is that of the local blackbody 'pseudocontinuum' at 10.5 µm (Sitko et al. 2004), is a metric that describes the contrast of silicate emission feature. We find the 10 µm silicate feature to be weak in comet C/2013 US 10 (Catalina), approximately 12.8% ± 0.1% above the local 'pseudo-continuum.' The low silicate feature strength in comet C/2013 US 10 (Catalina) is similar to some other comets (Sitko et al. 2004(Sitko et al. , 2013Woodward et al. 2015Woodward et al. , 2011. A second metric used to compare dust properties of comets is the ratio of the SED color temperature (T color ) to the temperature that solid spheres would have at a given heliocentric distance (r h (au)) in radiative equilibrium with the solar insolation, T BB (K) = 1.1 × 278 (r h ) −0.5 (see Hanner et al. 1997). At the epoch of the the SOFIA observations, the combined grism 6.0 to 36.5 µm SED can be fit with a single blackbody of temperature 239.5 ± 0.5 K, hence this ratio is ≃ 1.02. The enhanced color temperature over a graybody, which is expected for the particles smaller than the wavelength, often is historically referred to as "superheat" S (see Gehrz & Ney 1992). The silicate strength parameter is somewhat correlated to S (Sitko et al. 2004;Woodward et al. 2015). For comet 67P/Churyumov-Gerasimenko, 1.15 ≤ S ≤ 1.2, and S is plotted along with the bolometric albedo at phase angle 90 • (0.05 to 0.15) and the dust color (% per 100 nm) (Bockelée-Morvan et al. 2019). Comet C/2013 US 10 (Catalina) has a smaller value for S than comet 67P/Churyumov-Gerasimenko.
C/2013 US 10 (Catalina) and 67P/Churyumov-Gerasimenko, both exhibiting a weak silicate feature and are carbon-rich as determined from thermal modeling, provide a direct contradiction to older concepts commonly asserted in the literature. Commonly, many groups argued that some comets totally lacked silicate features because their solid grains were radiating as graybodies and not displaying resonances because the grains were so large that the grains themselves were optically thick Lisse et al. 2005). For comets with low dust production rates, estimation and subtraction of the nucleus' contribution to the SED is important. When combined with higher sensitivity observations and subtraction of the nucleus flux density, thermal models that integrate over a size distribution of particles with composition-dependent-dusttemperatures shows that the comets with comae particles whose HGSD has a p ≤ 1 µm and that display weak silicate features are carbon-rich.
The "Hot crystal model" and SOFIA in the far-IR
The SOFIA spectrum has enhanced emission that rises near 36 µm but the observations do not extend to longer wavelengths to show a decline in flux density. Laboratory absorption spectra of powders of pure-Mg forsterite show that the absorbance is about equal at 33 µm and 11.1 µm (Koike et al. 2013), while the 19.5 and 23.5 µm features also having significant absorbance. The 33 µm emission from pure-Mg forsterite (Fo100) is not detected in the far-IR. The slope of the HGSD is well constrained by the SOFA data (given the low χ 2 ν ). The SOFIA data provide important constraints on the crystalline resonances in the far-IR and on the slope of the HGSD ( §3.2.2). Our thermal models employ a "hot crystal model" for the temperatures for forsterite and enstatite, where their radiative equilibrium temperatures of crystals are increased by a factor of 1.9±0.1 based on fitting the ISO SWS spectrum of comet C/1995 O1 (Hale-Bopp) . We speculate that hotter crystal temperatures may arise from crystals being in contact with other minerals that are more absorptive or from Fe metal inclusions such as "dusty olivines" (Kracher et al. 1984), or "relict" grains (Ruzicka et al. 2017).
Other mineral species not detected
Within our SNR in the SOFIA mid-to far-IR SED, neither hydrated phyllosilicates that have far-IR resonances distinct from anhydrous amorphous olivine and amorphous pyroxene nor the very broad 23 µm troilite (FeS, submicron-sized) (Keller et al. 2002) spectral signatures were seen (see Schambeau et al. 2015). Phyllosilicates, such as Montmorillonite, as well as carbonates have absorptions in the 5 to 8 µm wavelength region (Roush et al. 1991;Crovisier & Bockelée-Morvan 2008) and neither of these compositions were detected in comet C/2013 US 10 (Catalina).
The search for aliphatic and aromatic carbon
The BASS spectrum spans the 3.0 to 3.5 µm wavelength region where potentially the 3.28 µm peripheral hydrogen stretch on a ring carbon macromolecule (PAH) and the 3.4 µm -CH 2 , -CH 3 aliphatic bonds arrangements that are prevalent in IDPs and Stardust materials (Matrajt et al. 2013) might be detectable. The analyses of a well-defined aliphatic carbon 3.4 µm band on nucleus surface of 67P/Churyumov-Gerasimenko is presented by Raponi et al. (2020) and Rinaldi et al. (2017) also argue for the presence for this feature in coma observations. The BASS spectrum spans the 3.0 to 3.5 µm wavelength region where potentially the 3.28 µm peripheral hydrogen stretch on a ring carbon macromolecule (PAH) and the 3.4 µm -CH 2 , -CH 3 aliphatic bonds arrangements that are prevalent in IDPs and Stardust materials (Matrajt et al. 2013) might be detectable. The analyses of a well-defined aliphatic carbon 3.4 µm band on nucleus surface of 67P/Churyumov-Gerasimenko is presented by Raponi et al. (2020) and Rinaldi et al. (2017) also argue for the presence for this feature in coma observations. A broad 20% deep 3.2 µm features from organic ammonium salts also is discussed for the nucleus Poch et al. (2020). If the aliphatic material in comets is similar to that of IDPs then laboratory absorption spectra by (Matrajt et al. 2005) of whole IDPs provide important information on the relative column densities of C atoms participating in different organic bonding groups including aliphatic bonds (−CH 2 , −CH 3 ), aromatic (C=C), carbonyl and carboxylic acid bonds in ketones, and ammonium salts. Protopapa et al. (2018) point to the possible presence of an organic emission feature near 3.3 µm in higher spectral resolution observations of comet C/2013 US 10 (Catalina) obtained on 2016 January 12 (r h = +1.3 au) but do pursue any further detailed analyses. However, there are strong molecular ro-vibrational emission lines of C 2 H 6 and CH 3 OH in the 3.28 to 3.5 µm region that significantly complicate deciphering underlying solid state organic features (Bockelée-Morvan et al. 1995;Dello Russo et al. 2006;Yang et al. 2009;Bockelée-Morvan et al. 2017a). Given these challenges, we do not report on detection of any aromatic or aliphatic features in the BASS data at our resolving power and sensitivity for comet C/2013 US 10 (Catalina). Thus, no spectral features were seen to indicate the presence of aromatic hydrocarbons (such as HACs, PAHs, a-C(:H) nano-particles) or aliphatic carbons in the coma of C/2013 US 10 (Catalina).
Comet C/2013 US 10 (Catalina) has one of the few reported 5 to 8 µm wavelength spectrum from SOFIA (+FORCAST). We searched for spectral signatures of vibration modes of C=C bonds (6.25 µm = 1600 cm −1 ), based on a constrained search of the observed absorption features in laboratory studies of cometary-like polyaromatic organics in IDPs (Matrajt et al. 2005) and in the UCAMMs (Dartois et al. 2018) as well as asteroid insoluble organic materials (IOM, Alexander et al. 2017). The 6.25 µm C=C resonances are not dependent on the degree hydrogenation or the number of peripheral hydrogen bonds compared to structural C=C bonds ).The UCAMMS are massdominated by organics, richer in N and poorer in O than with probable origins in the outer protoplanetary disk (Dobrica et al. 2009). We also searched for C=O bonds (5.85 µm = 1710 cm −1 ). There are tantalizing ≤ 3 σ fluctuations near 1620 cm −1 and 1510 cm −1 that are in the regions of C=C stretching modes (see Table 2 of Merouane et al. 2014). However, the SNR is insufficient and the width of the fluctuations are narrow, narrower than the widths of the C=C resonances in the UCAMMs that have a preponderance of organics such that their features dominate the 5 to 8 µm region.
The lack of resonances from organics in the 5 to 8 µm wavelength region does not discourage us from further searches in cometary comae for these bonding structures with the much higher sensitivity provided by the James Webb Space Telescope (JWST) and its instruments.
Carbon and Dark Particles
We find amorphous carbon dominates the composition of grain materials in comet C/2013 US 10 (Catalina). Dominance of carbon as a coma grain species was seen in other ecliptic comets including 103P/Hartley 2 (Harker et al. 2018) as well as the Oort cloud comets C/2007 N3 (Lulin) ) and C/2001HT50 (Kelley et al. 2006. The outburst of dusty material from comet 67P/Churyumov-Gerasimenko at 1.3 au was carbon-only-grains (with radii of order 0.1 µm), as measured by VIRTIS-H ( Bardyn et al. 2017) Our cometary comae dust atomic C/Si ratios are calculated using a number of suppositions and should be taken as indicative values. Cometary atomic C/Si ratios are of interest for comparison with in situ studies of 67P/Churyumov-Gerasimenko and 1P/Halley and of laboratory investigations of IDPs and UCAMMs. The IDPs and UCAMMs are extraterrestrial materials likely to have originated from primitive bodies like comets and KBOs, respectively (Bergin et al. 2015;Dartois et al. 2018;Burkhardt et al. 2019, and references therein). We choose to compare C/Si of the submicron grain component determined from thermal models with bulk elemental composition measurements of IDPs (X-ray measurements). We elect to not compare C/Si ratios derived from resonances (aliphatic 3.4 µm, aromatic 6.2 µm, and other bond in UCAMMs) because in laboratory baseline-corrected absorption spectra the amorphous carbon component would not be counted because it does not have a resonance.
Endemic Carbonaceous Matter in Comets
A dark refractory carbonaceous material darkens and reddens the surface of the nucleus of 67P/Churyumov-Gerasimenko, the surface material also displays a 3.4 µm (Raponi et al. 2020) and a similar aliphatic feature is suggested to exist in the coma of 67P/Churyumov-Gerasimenko (Rinaldi et al. 2017). We posit that the optical properties of amorphous carbon are representing well the dark refractory carbonaceous dust component observed in cometary comae through IR spectroscopy. Likely this dark refractory carbonaceous material is endemic to the comet's surface. Cosmic rays of a few 10 keV only damage a thin veneer of hundreds of nm of thickness (Strazzulla et al. 2003;Moroz et al. 2004;Quirico et al. 2016). This damage effects the structure (amorphization) and the composition (destruction of C-H and O-H bonds by dehydrogenation) of the materials (Moroz et al. 2004;Lantz et al. 2015;Quirico et al. 2016). Typical particle radii on the nucleus surface of 67P/Churyumov-Gerasimenko is at least tens of microns based on the observed the red color of the surface at visible wavelengths (Jost et al. 2017), so cosmic rays do not damage the full particle volume. For example, IDPs studied by IR spectra indicate aliphatic bonds in particle interiors (Matrajt et al. 2005(Matrajt et al. , 2013Flynn et al. 2015) but a lack of organic bonds in their near-surfaces possibly due to damaging ultraviolet light and particle radiation in space . Lastly, if the redeposition timescales for particles lofted from the nucleus but not escaping its gravity are about the orbital period of comet 67P/Churyumov-Gerasimenko (Marschall et al. 2020) then the ion-irradiation timescales on the surface, which have been shown to amorphize carbon bonds or damage silicates, are too short by orders of magnitude Brunetto et al. 2014;Quirico et al. 2016).
However, the surface properties of the DN comet like C/2103 US 10 (Catalina) may differ from the Jupiterfamily comet like 67P/Churyumov-Gerasimenko. A photon penetration depth of 1 µm for cosmic-rays can induce chemical changes, such as development of an organic crust due to the conversion of low molecular weight hydrocarbons into a web of bound molecular species, from electronic ionization in dose time per 100eV per 16-amu (H 2 O) in the Local Interstellar Medium, which is a harsher environment than within the heliopause at ∼85 au (see discussion in Strazzulla et al. 2003). Comet C/2013 US 10 (Catalina) may have had a radiation damaged dust rime of up to a few cm depth, but DN comets can have their onset of activity at large heliocentric distances (Meech et al. 2009) where likely this material is shed when the comet's activity first turns on. Thus, the amorphous carbon is not from a radiation rime because of the insufficient volume of the nucleus that can be altered by radiation compared to the mass loss pre-perihelion. Coupled with the arguments about in-sufficient time scales for materials recently exposed on cometary surfaces from either erosion or re-deposition to be space weathered, we assert that the amorphous carbon that is in the observed comae of comet C/2013 US 10 (Catalina) is carbonaceous matter is endemic to the comet nucleus. Moreover, the fluence and time scales or temperatures that change carbon bonding structures typically are not reached in cometary comae. The material is refractory and stable. The dark refractory carbonaceous matter that is modeled with the optical constants of amorphous carbon (see § 3.2.1) is endemic to comets. By the ubiquitous detection of a warm particle component in all cometary IR spectra observed to date, the carbonaceous matter is endemic to comets in general.
If dark refractory carbonaceous matter is stable on the surface then this implies the matter will be stable in the coma, unless the temperatures are raised significantly. For example if the size distribution significantly changes to smaller sizes the latter would occur. Laboratory experiments demonstrate that amorphous carbon becomes graphitized at ∼3000 K (De ). Comae dust temperatures remain at < ∼ 400 K dust compositions and particle sizes near 1 µm-radii for comets near 1 au. The exception will be sun-grazers that come close or enter the solar corona. On the other hand, aliphatic carbon may survive temperatures as high as ≃ 823 K if associated with porous minerals (Wirick et al. 2009). In the outburst of 67P/Churyumov-Gerasimenko at 1.3 au, comae dust temperatures reached 550 to 600 K and were modeled by tiny 0.1 µm-radii amorphous carbon particles (Bockelée-Morvan et al. 2019, 2017bRinaldi et al. 2018). Thus, comet comae dust particles do not reach such high temperatures as ≃ 823 K to destroy aliphatic carbon when comets are near 1 au.
The contribution of amorphous carbon is variable between comets.
In some comets, the contribution of amorphous carbon is temporally variable: 103P/Hartley 2 (Harker et al. 2018), C/2001 Q4 (NEAT) (Wooden et al. 2004), and the after the kineticimpactor encounter in inner coma of 9P/Tempel 2 (Sugita et al. 2005;Harker et al. 2007). The variability of amorphous carbon between comets and the temporally variability for a few comets gives clues to the diversity of protoplanetary disk reservoirs out of which comet nuclei formed. The variability in silicate-to-amorphous carbon ratios for an individual comet also may be related to the size sales of variable-compositions of the nucleus (Belton et al. 2007), to jets (Wooden et al. 2004), or variations coupled to changes in solar insolation in different parts of comets orbits (seasonal effects; Combi et al. 2020). These variations asserted for the nucleus are tied to the hypothesis that the refractory dust particle compositions observed in the coma are endemic to the comet.
Amorphous carbon and other forms of carbon
Amorphous carbon is the one carbon bonding structure common to IDPs, Stardust, and four carbonaceous chondrites including Bells, Tagish Lake, Orgueil, and Murchison (Wirick et al. 2009;De Gregorio et al. 2017). The amorphous carbon bonding structure is observed specifically through C-XANES (Matrajt et al. 2008a) and in Stardust particles from comet 81P/Wild 2 (Matrajt et al. 2008b). In addition to C-XANES spectra, regions of some IDPs are described a poorly graphitized or highly disordered carbon (Thomas et al. 1993b,a).
Other organic bonding structures besides amorphous carbon that are found in cometary samples (IDPs and Stardust ) are: aliphatic, aromatic, and rarely graphitic. IDP organic matter generally occurs as aliphatic-dominated rims (Flynn 2008;Flynn et al. 2015), rims on mineral grains with aromatic (C=C) and carbonyl group (C=O) bonds Flynn et al. (2013), (non-graphitized) aliphatic or aromatic macromolecular material ) as submicron-sized pieces associated with mineral crystals (Wirick et al. 2009), or as a matrix (Brunetto et al. 2014). In one IDP, different bonding structures of carbon occurs in micronsized regions and where amorphous carbon was mixed with GEMS (Brunetto et al. 2014). Two IDPs show N-rich organic rims on GEMS that are in turn are inside other GEMS, indicating two formation epochs, and their specific organic matter requires particle temperatures remained cooler than ∼450 K (Ishii et al. 2018). Cometary carbonaceous matter is sometimes referred to as polyaromatic when there are significant moities of aromatic C=C bonds. UCAMMs are noted for abundant aromatic material as well as for their N=C and N−C bonds (Dartois et al. 2018;Mathurin et al. 2019).
Only four cometary samples display graphitic carbon bonding structures as witnessed through C-XANES. Two of these are from Stardust samples, seen as halos on Fe grain cores which are hypothesized to have formed at high temperatures and at low oxygen fugacity in the protoplanetary disk (De , and in two IDPs (L2021C5, L2021Q3) where its close proximity to other bonding structures is discussed respectively by Brunetto et al. (2014) and (L2021Q3 Merouane et al. 2016). Graphite can be formed at high temperatures ( > ∼ 3273 K) although there are lower temperature processes that form graphite (Wirick et al. 2009). Ion bombardment of amorphous carbon is a competing process between amorphization and graphitization and this process depends on the structure of the starting amorphous carbon (Brunetto et al. 2011). Raman spectroscopy of one IDP shows "localized micrometer-scale distributions of extremely disordered and ordered carbons" (Brunetto et al. 2011).
In summary, cometary carbonaceous matter is macromolecular (De and not strictly aromatic (containing aromatic bonds) like meteoritic IOM (Alexander et al. 2007), as well as highly variable in composition and structure.
Cometary comae elemental C/Si ratios
In the following discussion, we investigate the plausible implications of cometary comae thermal model's relative mass fractions (i.e., the mass fraction of amorphous carbon to the mass fractions of the amorphous and crystalline silicates) on the elemental abundance ratio of C/Si. We compare inferred elemental ratio C/Si for comet C/2013 US 10 (Catalina) from thermal models to the C/Si ratio determined for IDPs using Scanning Electronic Microscopy with Energy Dispersive X-ray analysis (the SEM-EDX method, Thomas et al. 1993b), and by mass spectrometry for comet 1P/Halley, and comet 67P/Churyumov-Gerasimenko (COSIMA).
We will show that the relative mass fractions of C/Si derived from our thermal models of comet C/2013 US 10 (Catalina) and a handful of other recently observed and modeled comets are consistent with the average C/Si = 5.5 +1.4 −1.2 derived by COSMIA for thirty 67P/Churyumov-Gerasimenko particles (Bardyn et al. 2017), for 1P/Halley particles measured by Vega-1 and Vega-2 mass spectrometers during spacecraft encounters, and also for the upper range of C/Si for IDPs (see Bergin et al. 2015). The enigmatic comet C/1995 O1 (Hale-Bopp) with is propensity of submicron crystalline silicates also is included in our analysis to demonstrate its lower C/Si ratio that is in the lower range of the IDP C/Si ratios (Bardyn et al. 2017) and also close to the range determined for CI chondrites (Bergin et al. 2015).
Our cometary comae dust C/Si atomic ratios are calculated using a few suppositions and should be taken as indicative values, which are of interest for comparison with in situ studies of 67P/Churyumov-Gerasimenko and 1P/Halley and of laboratory investigations of IDPs and UCAMMs (Matrajt et al. 2005;Brunetto et al. 2014;Bardyn et al. 2017;Dartois et al. 2018). The IDPs and UCAMMs are extraterrestrial materials likely to have originated from primitive bodies like comets and KBOs, respectively (Dobrica et al. 2009, and references therein). Unlike laboratory measurements of IDPs, micrometeoritic samples, or Stardust particles which generally are the measure of single grains or isolated domains within a matrix, values returned from remote-sensing spectroscopic observations represent a coma-wide measure from a large ensemble of thermally radiating dust particles of various radii.
Our suppositions in deriving C/Si atomic ratios are: (a) amorphous carbon is a good optical analog for dark highly absorbing carbonaceous matter in cometary comae and (b) thermal model relative mass fractions derived for amorphous carbon are representing a significant fraction of the carbonaceous matter in the coma §3.9.1.
Counting Carbon Atoms
We are comparing the C/Si atomic ratio derived for cometary samples using different techniques. Mass spectroscopy directly measures the elemental C/Si ratio, which is the method for in situ measurements. However, non-destructive techniques that allow counting the carbon atoms in IDPs or Stardust samples depend on the method. X-ray SEM-EDX techniques (Thomas et al. 1993b) can count all the carbon atoms whereas IR absorption spectroscopy counts the carbon atoms involved in the observed resonances. Laboratory IR absorption spectroscopy measures the C/Si by converting the integrated band strengths into the number of atoms for aliphatic and/or aromatic bands compared to the 10 µm silicate band (Matrajt et al. 2005;Brunetto et al. 2014). Laboratory absorbance spectroscopy fits and subtracts a spline baseline to yield a linear baseline for the purpose of integrating the observed band strengths (see Matrajt et al. 2005). Amorphous carbon is not observed in absorbance in spectroscopy of IDPs because amorphous carbon lacks spectral resonances. To make a comparison between cometary C/Si derived from thermal models of amorphous carbon and C/Si derived from laboratory measurements and in situ measurements, we choose to employ the SEM-EDX measurements that are counting the carbon atoms but not discerning the carbon bonding structures.
Currently we cannot claim knowledge of aliphatic and aromatic content in comet comae dust populations of multiple comets via IR spectroscopy. If we cannot detect signatures of these bonding structures, we cannot definitely determine their contribution to the observed emission. However, we can use IDPs to indicate what the potential increase in C/Si might be if the aliphatic or aromatic bonds were spectroscopically detected.
We can examine what C/Si atomic ratios are derived from organic features in laboratory absorbance spectra of IDPs and compare to the C/Si derived for comets using thermal modeling of the warm particle component that is modeled with amorphous carbon. Many IDPs show the aliphatic 3.4 µm feature. The 3.4 µm feature is composed of the aliphatic CH 2 symmetric vibration (at ∼2850 cm −1 ), the CH 2 asymmetric vibration (at ∼ 2922 cm −1 ) and the weaker CH 3 asymmetric vibration (at ∼ 2958 cm −1 ) as discussed in Matrajt et al. (2005). In six IDPs, the 3.4 µm aliphatic carbon features yield 0.27 ≤ C/Si ≤ 1.4 with a mean C/Si = 0.55 ± 0.43 (see Table 4 of Matrajt et al. 2005). For three out of the six IDPs, acid dissolution of the silicates allowed the detection of the intrinsically weaker aromatic skeletal ring stretch C=C at 6.25 µm (1600 cm −1 ), which raises the atomic ratios for these three IDPs from C aliphatic /Si = {0.78, 0.11, 0.55} to C aliphatic+aromatic /Si = {19.4, 3.1, 5.1} (see Table 5 of Matrajt et al. 2005).
Most IDPs, however, do not possess an aromatic 3.28 µm feature from C-H peripheral bonds on C=C skeletal rings. Keller et al. (2004) suggest the lack of the 3.28 µm aromatic feature is due to "much of the carbonaceous matter is comprised very poorly graphitized carbon, possessing only short range order (<2 nm), or very large PAH molecules." The C=C bonds that are better tracers of the aromatics than the peripheral C-H bonds. As yet, no comet has been observed with organic features that are of comparable absorbance as the silicate features as observed in absorption spectra of three UCAMMs, where organic absorbances are as strong as for the silicate features (Dartois et al. 2018). As other authors suggest, we infer comets have less "outer disk processed organics" than UCAMMs. This conjecture is also supported by noting the ratio of nitrogen-to-carbon (N/C) in 67P/Churyumov-Gerasimenko is less than the N/C in UCAMMs (Bardyn et al. 2017;Dartois et al. 2018). If IR spectra of cometary comae were to detect the 3.4 µm feature at about the same contrast to the silicate feature as is in laboratory absorbance spectra of IDPs (Matrajt et al. 2005;Brunetto et al. 2014;Merouane et al. 2016), then we may infer that C/Si for our comets that we analyze might increase ∼20%.
The C/Si gradient in the Solar System
We derived the C/Si atomic ratio using the thermal model dust compositions (and relevant atomic amu) described in §3.3 and the relative masses of the submicron grains for each composition returned from the best-fit thermal model. The asymmetric uncertainties in the relative masses derived from the thermal models were 'symmetrized' following the description discussed by Audi et al. (Method#2, 2017), cognizant of the limitations to this approach (see Possolo et al. 2019;Barlow 2003) to enable standard error propagation techniques. The carbon to silicon atomic ratio is defined as: where α = (0.5 · Mg amu + 0.5 · Fe amu ) × 2 + Si amu + 4 · O amu Si amu β = (0.5 · Mg amu + 0.5 · Fe amu ) + Si amu + 3 · O amu Si amu are the α, β, γ, and δ are the number of Si atoms per unit mass, and the values for N p (the number of grains at the peak [a p ] of the HGSD) are found in Table 3. Table 4 summarizes derived the Ci/Si atomic ratios for comet C/2013 US 10 (Catalina) and other comets observed with SOFIA (+FORCAST) as well as comet C/1995 O1 (Hale-Bopp) . The C/Si atomic ratio for the comets in Table 4, UCAMMs (data from Dartois et al. 2018), and IDPs and other comets (data from Bergin et al. 2015) are presented in Fig. 8. Recent measurements of solar cosmic abundances creates an upper limit for the ISM C/Si of 10 as discussed in Dartois et al. (2018, and references therein). UCAMMs are above the solar cosmic abundance limit. Thus those who study UCAMMs suggest that their organics have sequestered carbon from the gas phase and converted it to a solid phase in the cold outer disk or on the surfaces of nitrogen-rich cold body surfaces because of their enhanced N/C ratios (Dartois et al. 2013(Dartois et al. , 2018. As measured or computed, cometary comae appear to lack the high C/Si ratios of UCAMMs. Comets by their C/Si appear to be sampling similar abundances of carbon in the optically active composition of comae particles as SEM-EDX-derived C/Si ratios are measuring for IDPs. Many but not all comets have C/Si commensurate with IDPs, and IDPs are more carbon-rich than carbonaceous chondrites (Fig. 8). Two sun-grazing comets from the Kreutz family of comets, C/2003 K7 and C/2011 W3 (Lovejoy), have silicate-rich dust and fall in the carbonaceous chondrites (CC) range (Bergin et al. 2015;McCauley et al. 2013;Ciaravella et al. 2010). Gail & Trieloff (2017), Dartois et al. (2018Dartois et al. ( , 2013 and other authors suggest that there was a carbon gradient in the early solar system. The comet C/Si values supports this contention of gradient in the carbon with heliocentric distance of formation. Commensurate with these results, CONSERT on Rosetta/Philae suggest comets are a large carbon reservoir given the nucleus' permittivity and density constraints on the dust composition in the nucleus Herique et al. (2016), which agrees within uncertainties with the average specific density of dust particles in the comet C/2013 US 10 (Catalina)'s comae. The existence of a carbon gradient in solar systems also is bolstered by the C/Si ratios of IDPs.
Destruction of carbon occurred in inner disk, which is the long-standing "carbon deficit problem" (Bergin et al. 2015;Lee et al. 2010). Disk modelers are working to predict the carbon depletion gradient with complex chemical networks (Wei et al. 2019). Another model investigates removal of carbon through oxidation and photolysis when particles are transported to the exposed upper disk layers but radial transport erases signatures unless other mechanisms quickly destroy carbon like flash heating from FU Ori outbursts or mechanisms prevent replenishment of the inner disk such as sustained particle drift barrier, i.e., a gap opened by the formation of a giant planet. Klarmann et al. (2018) argue that "a sustained drift barrier or strongly reduced radial grain mobility is necessary to prevent replenishment of carbon from the outer disk [to the inner disk]." Heat and/or high oxygen fugacity conditions in the inner protoplanetary disk can convert carbon from its incorporation in refractory particles to carbon in gas phase CO or CO 2 . As discussed ( §3.7.1), particle temperatures above ∼823 K can destroy aliphatic carbon. Flash heating of Mg-Fe silicates in the presence of carbon is a possible formation pathway for Type I chondrules (Connolly et al. 1994). If cometary particles can drift interior to the water evaporation front, then cometary materials may deliver carbon to the inner protoplanetary disk. Delivery of carbon to the gas phase of inner disk by comet grains requires inward delivery mechanisms during the early pebble accretion phase of disk evolution when the motions of aggregating materials are dominated by inward pebble drift (Andrews 2020;Misener et al. 2019). Such delivery requires that amorphous carbon particles already be incorporated into cometary grains in addition to the need that the sublimation temperature of amorphous carbon be higher than water ice so that the delivery of carbon particles is interior enough for carbon to become enhanced in the gas phase. High carbon abundances in the gas phase are required to explain the poorly graphitized carbon (PGC) halos around Fe cores in two terminal Stardust particles (Wirick et al. 2009;De Gregorio et al. 2017).
Earth's bulk C/Si atomic ratio is much smaller and models for its core formation and evolution assume a carbonaceous chondrite supply of carbon was available to form the Earth (Bergin et al. 2015). Cometary C/Si atomic ratios are much higher than carbonaceous chondrites. The outer disk was richer in carbon than the inner disk. The carbon gradient may be another indication of planetary gaps sculpting the compositions of small bodies. Burkhardt et al. (2019) hypothesize that the isotope variances of planetary bodies, traced through meteoritic and IDPs, can be explained if there were isotopically distinct nebular reservoirs of non-carbonaceous and carbonaceous that were not fully mixed in the primordial disk of the solar system. A planetary gap created by Jupiter's formation which inhibited mixing between the inner and outer disk could also explain the dichotomy in between non-carbonaceous and carbonaceous meteorites (Nanne et al. 2019).
Cometary C/Si atomic ratios highlight the "carbon deficit" that occurred in the inner disk and the dichotomy between the inner and outer disk when juxtaposed with the C/Si atomic ratios found for the Earth and ordinary chondrites. Furthermore, the dust composition of many comets demonstrates a carbon-rich reservoir existed in the regimes of comet formation that are pertinent to the understanding the evolution of our protoplanetary disk and the formation of the planets. (2) Woodward et al. (2015).
The optical spectra of comets in the i ′ -band tends to be dominated by dust. However, red CN gas emission bands, CN(2,0) and CN(3,1), can present at redder wavelengths within the i ′ -band (Cochran et al. 2015;Fink et al. 1991;Swings 1956). Presence of these emission lines may contaminate measurements of the scattered light dust continuum surface brightness, and hence estimates of the dust production rate. Optical spectra of comet C/2013 US 10 (Catalina) obtained on 2015 December 18 (Kwon et al. 2017) show weak CN(2,0) and CN(3,1) band emission. However, optical spectra obtained after the epoch of the MORIS and FPI+ imagery in 2016 March 18 show no strong emission features redward of 7630Å to the i ′ -band long wavelength cut-off (Hyland et al. 2019). The azimuthally-averaged radial profiles of comet C/2013 US 10 (Catalina) derived from the MORIS and FPI+ imagery, presented in Fig. 9, shows little deviation from a 1/ρ profile (Gehrz & Ney 1992) at large cometo-centric distances consistent with a steady-state coma without significant CN contamination. Application of standard comet image enhancement techniques to these optical data (see , reveal no structures in the coma such as jets or spirals at this epoch. The dust production rate of comet C/2013 US 10 (Catalina) during the epoch of the BASS observations (2016 Jan 10.607 UT) was derived using the proxy quantity Af ρ (A' Hearn et al. 1984). When the cometary coma is in steady state, this aperture independent quantity can be parameterized as In this relation, A(θ) is four times the geometric albedo at a phase angle θ, f is the filling factor of the coma, m comet is the measured cometary magnitude, m ⊙ is the apparent solar magnitude, derived 7 as i ′ ⊙ = −27.002, ρ is the linear radius of the aperture at the comet's position (cm) and r h (AU) and ∆(cm) are the heliocentric and geocentric distances, respectively.
The Halley-Marcus (HM) (Marcus 2007a,b;Schleicher et al. 1998) phase angle correction 8 was used to normalize A(θ)f ρ to 0 • phase angle, wherein we adopted an interpolated value of HM = 0.3424 and 0.3946 commensurate with the epoch of our optical observations on 2016 Jan 11.633 UT and 2016 February 09.340 UT, respectively. Table 5 reports values of A(0 • )f ρ = (A(θ)f ρ/HM) at a selection of aperture sizes (distances from the comet photocenter) in the i ′ -band. The dust production rate is similar to that observed in other moderately active comets, such as C/2012 K1 (Pan-STARRS) discussed by Woodward et al. (2015).
We can roughly estimate the dust mass loss rate by taking the mass of dust observed in the coma inside of our aperture as the 1/ρ dependence of the surface brightness distribution indicates a steady state coma. If we adopt for the outflow velocity of 100 µm-radii and larger particles which carry most of the mass a value of v dust ≈ 20 m s −1 (Rinaldi et al. 2018), and assume a steady outflow of material through a spherical bubble at some distance R(m) near the nucleus surface, the mass loss rate can be estimated aṡ whereṀ dust has units of g s −1 . If the nucleus of comet C/2013 US 10 (Catalina) is comparable in size typically inferred for many comets, 1.5 km, thenṀ dust ≈ 4 × 10 −3 M dust v dust /20(m s −1 ) . At 1.7 au when M dust = 4 × 10 8 g (Table 3) thenṀ dust ≈ 1.6 × 10 6 g s −1 . Fink & Rubin (2012) discuss how the A(θ)f ρ can be tied to the mass production rate, given the HGSD parameters, computing dust mass loss rate (in kg s −1 ) assuming a particle density of 1 g cm −3 for various particle size distribution functions. Taking an average value of N = 3.5, corresponding to a dq/da ∼ a −3.5 which Thomas et al. (1993b), and the half-filled circle is average value of the Ci/S atomic ratio of comet 67P/Churyumov-Gerasimenko particles studied by Bardyn et al. (2017). The blue star denotes the values for the UltraCarbonaceous Antarctic MicroMeteorites (UCAMMs) while the limit to the interstellar medium C/Si atomic ratio, brown triangle, is from Dartois et al. (2018). Both C/2011 W3 and C/2003 K7 are sun grazing comets and the determination of the C/Si atomic ratio in these objects is derived from ultra-violet measurements when these comets were in the solar corona (see Bergin et al. 2015 (Table 5) one findsṀ dust ≈ 2.4 × 10 6 g s −1 . This is comparable our latter estimate. If we assume the density of the nucleus, which is a porous dust-ice mixture, is ρ nuc ∼ 1 g cm −3 (Fulle et al. 2019) then a rough estimate of the surface erosion rate . Azimuthally averaged relative intensity per pixel as a function of linear radius (ρ) in km as measured in a SDSS i ′ -band filter from the optical photocenter (centroid) of comet C/2013 US10 (Catalina). The solid red line denotes a 1/ρ profile describing a steady-state coma (see Gehrz & Ney 1992). Top: the IRTF MORIS data obtained on 2016 Jan 11.63 UT when the phase angle was 47.80 degrees. Bottom: the SOFIA FPI+ data obtained on 2016 Feb 09.34 UT when the phase angle was 33.06 degrees. Note the change in scale between the two epochs. from the nucleus of comet C/2013 US 10 (Catalina) is ∼1 mm day −1 if the entire surface is active and if the radius of the comet is ∼1.5 km. The depth of space weathering of an DN comet in the local interstellar medium might be at most a centimeter over the age of the solar system and this material would be shed in a timeframe of < ∼ 2 weeks at the observed dust mass loss rate which we have translated to an erosion rate. For a perspective, cumulative erosion depths for comet 67P/Churyumov-Gerasimenko depended on the nucleus geography and solar insolation and from start of the Rosetta mission until the first equinox were 6 mm to 0.1 m and to the end of the mission were of order 0.3 m to 4 m (Combi et al. 2020).
The quantity ǫf ρ, (see Appendix A of Kelley et al. 2013), a parameter which is the thermal emission corollary of the scattered-light based light Af ρ was also computed using our FORCAST broadband photometry. ǫf ρ is defined as where ǫ is the effective dust emissivity, F ν is the flux density (Jy) of the comet within the aperture of radius ρ, B ν is the Planck function (Jy/sr) evaluated at the temperature T c = T bb = 1.093 × (278 K) r −0.5 h ≃ 232.9 K, where T c is the color temperature. Derived values of ǫ f ρ for comet C/2013 US 10 (Catalina) from SOFIA photometry are presented in Table 2.
Dust Bolometric Albedo
Our near simultaneous optical observations conducted on the same night as our measurement of the infrared SED of comet C/2013 US 10 (Catalina) enable us to estimate the bolometric dust albedo as described by Woodward et al. (2015). The measured albedo depends on both the composition and structure of the dust grains as well as the phase angle (Sun-comet observer angle) of the observations. As the grain albedo is the ratio of the scattered light to the total incident radiation, the thermal emission at IR wavelengths and the scattered light component observed at optical wavelengths are linked though this parameter.
The photometry from the i ′ imagery in an equivalent aperture that corresponds to the apertures used to measure the IR SEDs provides an estimate of [λF λ ] max scattering . An estimate of [λF λ ] max IR is obtained from a filter integrated equivalent photometric point at 10 µm derived by integrating with the observed IR SED over the bandwidth of the FORCAST F111 filter. We find that the coma of comet Oort cloud C/2013 US 10 (Catalina) has a low bolometric dust albedo, A(θ), of ≃ 5.1 ± 0.1% at phase angle of 47.80 • and to ≃ 13.8 ± 0.5% at a phase angle of 33.01 • . Fig. 10 shows the derived A(θ) as a function of phase angle, θ for a variety of comets, where the red stars denoted the values for C/2013 US 10 (Catalina). At 1.3 AU, the bolometric albedo of comet C/2013 US 10 (Catalina) is likely measuring the reflectance properties of the refractory particles because ice grains have very short lifetimes at this heliocentric distance (Beer et al. 2006;Protopapa et al. 2018). Reflectance of individual refractory particles from the coma of comet 67P/Churyumov-Gerasimenko as measured by Rosetta COSIMA/Cosicope are from 3% to 22% at 650 nm (Langevin et al. 2017(Langevin et al. , 2020, which spans the range of bolometric albedos measured for comet comae. 4. CONCLUSION Mid-infrared 6.0 < ∼ λ(µm) < ∼ 40 spectrophotometric observations of comet C/2013 US 10 (Catalina) at two temporal epochs yielded an inventory of the refectory materials in the comet's coma and their physical characteristics through thermal modeling analysis. The coma of C/2013 US 10 (Catalina) has a high abundance of submicron-radii moderately porous (fractal porosity D = 2.727) carbonaceous amorphous grains with a silicate-to-carbon mass ratio < ∼ 0.9. This comet also exhibited a weak 10 µm silicate feature.
Comet C/2013 US 10 (Catalina) is an example of subset of comets with weak silicate features that are definitively shown to have low silicate-to-carbon ratios for the submicron grain component (as deduced from thermal model analysis of the spectral energy distributions), that is, they are carbon-rich. Their thermal emission is dominated by warmer particles that are significantly more absorbing at UV-near-IR wavelengths than silicates. The spectral grasp of SOFIA (+FORCAST) provided a constraint that required the presence of amorphous carbon as a dominate constituent of the coma particle population (submicron dust) as silicate particles cannot provide the lack of contrast above blackbody emission at far-infrared wavelengths. The surface area of the thermal emission is dominated by the smaller grains and for the silicates, the smaller grains produce resonances 19.5, 23.5, 27.5 µm not evident in the spectrum of comet C/2013 US 10 (Catalina), which is a puzzle. A dark refractory carbonaceous material darkens and reddens the surface of the nucleus of 67P/Churyumov-Gerasimenko. Comet C/2013 US 10 (Catalina) is carbonrich. Analysis of comet C/2013 US 10 (Catalina) grain composition and observed infrared spectral features compared to interplanetary dust particles, chondritic materials, and Stardust samples suggest that the dark carbonaceous material is well-represented by the optical properties of amorphous carbon. We argue that this dark material is endemic to comets.
The C/Si atomic ratio of comets in context with that derived from studies of interplanetary dust particles, micrometeroites, and Stardust samples suggest that a carbon gradient was present in the early solar nebula. As we observe more comets, and especially take the opportunities to observe dynamically new comets with SOFIA, the James Webb Space Telescope and other capabilities, a significant subset of comets which are carbon-rich likely will arise providing important constraints on newly proposed interpretations of disk processing in the primitive solar system.
ACKNOWLEDGMENTS
Based in part on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. Financial support for this work was provided by NASA through award SOF 04-0010 and NASA PAST grant 80NSSC19K0868. The authors wish to thank Dr. Aigen Lee for informative discussion regarding carbonaceous materials and there relevance to interpreting astronomical spectra as well as Dr. Jeff Cuzzi and the NASA Ames research group for their keen insights into disk transport models. The authors also express gratitude for the two anonymous referees' very careful reading of the manuscript and their numerous suggestions and comments that enhanced the final narrative. Software: IRAF (Tody 1986(Tody , 1993, IDL, JPL Horizons (Giorgini et al. 1996), Aperture Photometry Tool (APT) Laher et al. (2012) APPENDIX A. TABLES OF REVISED THERMAL MODELS As described in the text ( §3.2.6) we have adopted a value for 1.5 g cm −3 for the specific density of amorphous carbon, ρ s (ACar), in our thermal models. In early work, we employed a higher specific density of 2.5 g cm −3 . In order to compare the atomic carbon-to-silicate ratios consistently thermal models for all SOFIA observed comets included in this analysis were modeled or remodeled with a common value of ρ s (ACar) = 1.5 g cm −3 . Tables for comets C/2012 K1 (Pan-Starrs) (see Woodward et al. 2015), C/1995 O1 (Hale-Bopp) (see , and C/2013 X1 (Pan-STARRS) and C/2018 W2 (Africano) (Woodward et al. 2020) are given for completeness. | 20,832 | 2020-11-13T00:00:00.000 | [
"Physics",
"Geology"
] |
Size Controlled Copper (I) Oxide Nanoparticles Influence Sensitivity of Glucose Biosensor
Copper (I) oxide (Cu2O) is an appealing semiconducting oxide with potential applications in various fields ranging from photovoltaics to biosensing. The precise control of size and shape of Cu2O nanostructures has been an area of intense research. Here, the electrodeposition of Cu2O nanoparticles is presented with precise size variations by utilizing ethylenediamine (EDA) as a size controlling agent. The size of the Cu2O nanoparticles was successfully varied between 54.09 nm to 966.97 nm by changing the concentration of EDA in the electrolytic bath during electrodeposition. The large surface area of the Cu2O nanoparticles present an attractive platform for immobilizing glucose oxidase for glucose biosensing. The fabricated enzymatic biosensor exhibited a rapid response time of <2 s. The limit of detection was 0.1 μM and the sensitivity of the glucose biosensor was 1.54 mA/cm2. mM. The Cu2O nanoparticles were characterized by UV-Visible spectroscopy, scanning electron microscopy and X-ray diffraction.
Introduction
Copper (I) oxide (Cu 2 O) is a highly attractive oxide semiconductor due to its unique properties. It is a p-type semiconductor having a direct bandgap of 2 eV. Cu 2 O is a non-toxic material and its starting material, copper, is abundantly available. Furthermore, the fabrication and processing of Cu 2 O is inexpensive. Due to these advantages, Cu 2 O has potential applications in several fields including photovoltaics, catalysis, batteries, gas sensing and biosensing [1][2][3][4][5][6][7][8][9]. In photovoltaics, Cu 2 O presents a promising alternative to silicon and other potential semiconductors. The author, Rai, in a review article has provided a comprehensive overview of Cu 2 O as an appealing material for solar cells including the inexpensive fabrication methods, the construction of a solar cell followed by its performance. This review also highlights the advantages of the Cu 2 O material and some of its drawbacks [4]. It has been demonstrated that Cu 2 O is a potential material for gas sensing. Deng and co-workers have used graphene oxide conjugated with Cu 2 O nanowires for nitrogen dioxide sensing. Here they demonstrated the crystallization of Cu 2 O in the presence of graphene oxide to form nanowires, which were highly anisotropic. These structures show a high performance, as compared to the separate systems of Cu 2 O and graphene oxide [5]. In another application, Cu 2 O was utilized as a photocathode for solar water splitting [10]. Paracchino and co-workers demonstrated a highly efficient Cu 2 O photocathode with the highest recorded photocurrent of −7.6 mA/cm 2 [10]. The Cu 2 O nanostructures have also been utilized as platforms for biosensing. Zhu and co-workers synthesized Cu 2 O hollow microspheres with the help of polyvinylpyrrolidone [11]. The Cu 2 O hollow microspheres were investigated for biosensing applications and served as an excellent immobilization platform for the DNA probe and enhanced the sensitivity of the DNA biosensor. In a similar study, an enzymatic biosensor was fabricated using graphene oxide, zinc oxide and Cu 2 O [12]. The composite biosensing electrode exhibited enhanced immobilization of glucose oxidase (GOx) enzyme with a linear range of 0.01-2 mM and the detection limit of 1.99 µM.
In all the above studies the Cu 2 O nanostructures were presented in varying morphologies ranging from thin film to nanocubes. Thus, it is pertinent to note that Cu 2 O can be fabricated in several different morphologies. The variations in morphologies has been studied and well documented in the literature. It has been demonstrated that variation in morphology can affect the properties of Cu 2 O including their optical and electronic properties. Radi and co-workers fabricated size and shape controlled Cu-Cu 2 O core shell nanoparticles via electrodeposition on H terminated silicon [13]. The size was varied between 5-400 nm and different shapes including cubic, cuboctahedral, and octahedral were obtained by controlling the deposition time and the electrolyte concentration respectively. Zhang and co-workers synthesized nearly monodispersed Cu 2 O nanoparticles by a hydrothermal method [14]. Here, they observed that by carefully changing the reactant concentration the Cu 2 O nanoparticle size, monodispersity and crystallinity can be controlled. In another example, Xu and co-workers synthesized octahedral Cu 2 O nanoparticles with varying edge length from 130 nm to 600 nm [15]. This variation in edge length was carried out by adjusting the molar ratio of the reactants. The absorption properties of these Cu 2 O nanoparticles also varied and thus demonstrated improved ability in photodegradation of methyl orange compared to cubic Cu 2 O nanoparticles. Feng and co-workers demonstrated the formation of hollow spherical and octahedral Cu 2 O nanocrystals in the presence of EDA and sodium hydroxide [16]. The change in morphology affected the photocatalytic activity of the Cu 2 O nanocrystals. Thus, size and shape control of the Cu 2 O nanoparticles can alter their properties significantly.
In the present work, a strong control over the size of the Cu 2 O nanoparticle during electrodeposition was demonstrated by utilizing ethylenediamine (EDA) in the electrolytic bath. To the best of our knowledge, this is the first report of size-controlled synthesis of Cu 2 O nanoparticles, using EDA during the electrodeposition technique. The electrodeposition method is facile, inexpensive and scalable [17][18][19]. It can be used for precise control of size and morphology of the depositing species. The Cu 2 O nanoparticles have been synthesized in varying sizes from 54.09 nm to 966 nm. The Cu 2 O nanoparticle electrodes, fabrication by the above mentioned route, were utilized for the first time as platform for glucose biosensing. The current response of the Cu 2 O electrodes indicates that the nanoparticle size has a strong influence on the sensitivity of the glucose biosensor. The Cu 2 O electrodes were characterized by UV-visible spectroscopy (UV-Vis), scanning electron microscopy (SEM) and X-ray diffraction (XRD).
Materials
The chemicals used for the electrodeposition of Cu 2 O were cupric sulfate pentahydrate (CuSO 4 ·5H 2 O, ≥98%), ethylenediamine (EDA), lactic acid (C 3 H 6 O 3 , ≥88.5%) and potassium hydroxide (KOH, ≥85.8%). These chemicals were purchased from Fisher Scientific (Hanover Park, IL, USA). The chemicals did not require any further purification and thus were used as purchased. The aqueous solutions were prepared by dissolving the precursor in deionized water. The electrodeposition was performed on a fluorine doped tin oxide (FTO) coated on glass substrate. The size and conductivity of the substrate was 25 mm × 25 mm × 1.1 mm and 6-8 ohm/sq. respectively. The substrate was purchased from University Wafer Inc. (Boston, MA, USA). The chemicals used for cleaning the FTO substrate were acetone (100%, 200 proof), hydrochloric acid (HCl) and nitric acid (HNO 3 ).
Fabrication of Cu 2 O Electrode via Electrodeposition
The electrodeposition was performed in an electrochemical cell (Figure 1a). For the electrodeposition of Cu 2 O, an Ag/AgCl wire was used as the reference electrode (Figure 1b). A platinum wire of 2 mm diameter served as the counter electrode ( Figure 1c) and an FTO substrate was the working electrode ( Figure 1d). Prior to electrodeposition, the FTO substrate was sonicated for 10 min in a bath of acetone. It was then cleaned by hydrochloric acid (HCl) followed by nitric acid (HNO 3 ) for 2 min each. The substrate was rinsed with deionized water between every cleaning step.
Sensors 2017, 17,1944 3 of 10 for 10 min in a bath of acetone. It was then cleaned by hydrochloric acid (HCl) followed by nitric acid (HNO3) for 2 min each. The substrate was rinsed with deionized water between every cleaning step. For the electrodeposition of Cu2O nanoparticles, the CuSO4·5H2O precursor was dissolved in deionized water. The aqueous solution was stabilized by the addition of C3H6O3 to the solution. The addition of EDA was carried out after the copper precursor was completely dissolved in solution. The pH of the final solution was adjusted, to 13, by utilizing KOH. The electrodeposition temperature was kept at 50 °C. The duration for electrodeposition was 30 min. During electrodeposition, the applied potential was −0.6 V.
Fabrication of Enzymatic Biosensor
The glucose oxidase (GOx), enzyme was immobilized on the Cu2O electrode by electrostatic interaction. Since the isoelectric point (IEP) of GOx is 4.5 and that of Cu2O nanoparticles is ~11, the electrostatic interaction is strong leading to successful immobilization [20]. The GOx enzyme solution was prepared by dissolving 1 mg of GOx in 1 mL of 10 mM phosphate buffer saline (PBS) at a pH of 7.4. The immobilization was carried out by drop casting 100 μL of the GOx enzyme solution onto the Cu2O electrode and left to dry for 2 h at room temperature. The dried electrode was rinsed with PBS to discard any enzyme that was not immobilized. The Cu2O electrode with immobilized GOx was stored in PBS at 4 °C overnight in a refrigerator.
Characterization
The absorption properties of the fabricated Cu2O nanoparticles was studied by a UV-Visible spectrometer, which was a Lambda 25 instrument. The morphology of the Cu2O nanoparticles was evaluated by scanning electron microscopy (SEM), a FEI Quanta-250 SEM instrument operating at 10 kV accelerating voltage. The composition of the Cu2O nanoparticles was investigated by X-ray diffraction (XRD). The XRD instrument was a Siemens D500. The X-ray diffractometer utilized a Cu Kα radiation (λ = 1.5406 Å) at 45 kV and 30 mA, with a scanning range of 20° to 80° and a scan step of 0.05°. The electrodeposition was carried out by using a CHI601E potentiostat from CH Instruments.
Characterization of Electrodeposited Cu2O Nanoparticles
The composition and crystallinity of the deposited Cu2O nanoparticles was characterized by X-ray diffraction. Figure 2a shows XRD plots of Cu2O nanoparticles with varying EDA content. The XRD plots were indexed and matched well with the Cu2O reference (JCPDS: 05-0667), having a cubic crystal structure. The XRD plots clearly show (111), (200) and (220) peaks for Cu2O. The intensity of the XRD peaks decreased with increase in the EDA content in the Cu2O samples. It was also observed that the sample thickness reduced with increase in the EDA content. Additionally, it was observed that the peak positions were shifted to higher Bragg values with increase in the EDA content ( Figure 2b). This shift can be attributed to the decrease in the lattice parameter with increase in the EDA For the electrodeposition of Cu 2 O nanoparticles, the CuSO 4 ·5H 2 O precursor was dissolved in deionized water. The aqueous solution was stabilized by the addition of C 3 H 6 O 3 to the solution. The addition of EDA was carried out after the copper precursor was completely dissolved in solution. The pH of the final solution was adjusted, to 13, by utilizing KOH. The electrodeposition temperature was kept at 50 • C. The duration for electrodeposition was 30 min. During electrodeposition, the applied potential was −0.6 V.
Fabrication of Enzymatic Biosensor
The glucose oxidase (GOx), enzyme was immobilized on the Cu 2 O electrode by electrostatic interaction. Since the isoelectric point (IEP) of GOx is 4.5 and that of Cu 2 O nanoparticles is~11, the electrostatic interaction is strong leading to successful immobilization [20]. The GOx enzyme solution was prepared by dissolving 1 mg of GOx in 1 mL of 10 mM phosphate buffer saline (PBS) at a pH of 7.4. The immobilization was carried out by drop casting 100 µL of the GOx enzyme solution onto the Cu 2 O electrode and left to dry for 2 h at room temperature. The dried electrode was rinsed with PBS to discard any enzyme that was not immobilized. The Cu 2 O electrode with immobilized GOx was stored in PBS at 4 • C overnight in a refrigerator.
Characterization
The absorption properties of the fabricated Cu 2 O nanoparticles was studied by a UV-Visible spectrometer, which was a Lambda 25 instrument. The morphology of the Cu 2 O nanoparticles was evaluated by scanning electron microscopy (SEM), a FEI Quanta-250 SEM instrument operating at 10 kV accelerating voltage. The composition of the Cu 2 O nanoparticles was investigated by X-ray diffraction (XRD). The XRD instrument was a Siemens D500. The X-ray diffractometer utilized a Cu Kα radiation (λ = 1.5406 Å) at 45 kV and 30 mA, with a scanning range of 20 • to 80 • and a scan step of 0.05 • . The electrodeposition was carried out by using a CHI601E potentiostat from CH Instruments.
Characterization of Electrodeposited Cu 2 O Nanoparticles
The composition and crystallinity of the deposited Cu 2 O nanoparticles was characterized by X-ray diffraction. Figure 2a shows XRD plots of Cu 2 O nanoparticles with varying EDA content. The XRD plots were indexed and matched well with the Cu 2 O reference (JCPDS: 05-0667), having a cubic crystal structure. The XRD plots clearly show (111), (200) and (220) peaks for Cu 2 O. The intensity of the XRD peaks decreased with increase in the EDA content in the Cu 2 O samples. It was also observed that the sample thickness reduced with increase in the EDA content. Additionally, it was observed that the peak positions were shifted to higher Bragg values with increase in the EDA content ( Figure 2b). This shift can be attributed to the decrease in the lattice parameter with increase in the EDA content. The peak position, in Figure 2b, for the samples with varying EDA content have been offset along the Y axis for clear viewing. The photographs of the Cu2O nanoparticles electrodeposited on the FTO substrates are shown in Figure 3. These Cu2O nanoparticles were prepared with increasing content of EDA solution, from 0.2 mL to 1 mL, in the electrolytic bath during deposition. From these photographs it was clear that there was a distinct difference in the color of the Cu2O samples with increasing EDA content during deposition. The Cu2O sample with 0.2 mL of EDA appeared red-orange in color (Figure 3a), while the sample with 1 mL of EDA appeared yellow in color (Figure 3d). Thus these photographs suggested that there was a change in the absorption properties of the samples with increase in the EDA content in the deposition process. Additionally, UV-Vis absorption spectra were collected from the Cu2O samples to evaluate their absorption properties. Figure 4 shows UV-Vis spectra for Cu2O samples with EDA content of 0.2 mL and 1 mL. The absorption between 350 nm and 550 nm was assigned to the inter-band transition in Cu2O nanoparticles. Further, the broad band feature around 700 nm was attributed to the localized surface plasmon resonance, which is observed in Cu2O nanoparticles [21]. Additionally, a blue shift in the absorption spectra indicated a decrease in the nanoparticle size with increase in the EDA content of the Cu2O samples. The differences in the absorption spectra for 0.2 mL and 1 mL EDA can be related to the photographs shown in Figure 2. For Cu2O sample with 0.2 mL EDA, the combination of absorption peaks at 510 nm and 700 nm can be related to the red-orange color (Figure 3a,b). As the EDA content was increased to 1 mL, the absorption peak blue shifted to 475 nm and a higher intensity broader peak was observed beyond 600 nm. The absorption peak combination of 475 nm and higher intensity at 700 nm can be related to the yellowish color of the Cu2O sample with 1 mL EDA (Figure 3c,d). The photographs of the Cu 2 O nanoparticles electrodeposited on the FTO substrates are shown in Figure 3. These Cu 2 O nanoparticles were prepared with increasing content of EDA solution, from 0.2 mL to 1 mL, in the electrolytic bath during deposition. From these photographs it was clear that there was a distinct difference in the color of the Cu 2 O samples with increasing EDA content during deposition. The Cu 2 O sample with 0.2 mL of EDA appeared red-orange in color (Figure 3a), while the sample with 1 mL of EDA appeared yellow in color (Figure 3d). Thus these photographs suggested that there was a change in the absorption properties of the samples with increase in the EDA content in the deposition process. The photographs of the Cu2O nanoparticles electrodeposited on the FTO substrates are shown in Figure 3. These Cu2O nanoparticles were prepared with increasing content of EDA solution, from 0.2 mL to 1 mL, in the electrolytic bath during deposition. From these photographs it was clear that there was a distinct difference in the color of the Cu2O samples with increasing EDA content during deposition. The Cu2O sample with 0.2 mL of EDA appeared red-orange in color (Figure 3a), while the sample with 1 mL of EDA appeared yellow in color (Figure 3d). Thus these photographs suggested that there was a change in the absorption properties of the samples with increase in the EDA content in the deposition process. Additionally, UV-Vis absorption spectra were collected from the Cu2O samples to evaluate their absorption properties. Figure 4 shows UV-Vis spectra for Cu2O samples with EDA content of 0.2 mL and 1 mL. The absorption between 350 nm and 550 nm was assigned to the inter-band transition in Cu2O nanoparticles. Further, the broad band feature around 700 nm was attributed to the localized surface plasmon resonance, which is observed in Cu2O nanoparticles [21]. Additionally, a blue shift in the absorption spectra indicated a decrease in the nanoparticle size with increase in the EDA content of the Cu2O samples. The differences in the absorption spectra for 0.2 mL and 1 mL EDA can be related to the photographs shown in Figure 2. For Cu2O sample with 0.2 mL EDA, the combination of absorption peaks at 510 nm and 700 nm can be related to the red-orange color (Figure 3a,b). As the EDA content was increased to 1 mL, the absorption peak blue shifted to 475 nm and a higher intensity broader peak was observed beyond 600 nm. The absorption peak combination of 475 nm and higher intensity at 700 nm can be related to the yellowish color of the Cu2O sample with 1 mL EDA (Figure 3c,d). Additionally, UV-Vis absorption spectra were collected from the Cu 2 O samples to evaluate their absorption properties. Figure 4 shows UV-Vis spectra for Cu 2 O samples with EDA content of 0.2 mL and 1 mL. The absorption between 350 nm and 550 nm was assigned to the inter-band transition in Cu 2 O nanoparticles. Further, the broad band feature around 700 nm was attributed to the localized surface plasmon resonance, which is observed in Cu 2 O nanoparticles [21]. Additionally, a blue shift in the absorption spectra indicated a decrease in the nanoparticle size with increase in the EDA content of the Cu 2 O samples. The differences in the absorption spectra for 0.2 mL and 1 mL EDA can be related to the photographs shown in Figure 2. For Cu 2 O sample with 0.2 mL EDA, the combination of absorption peaks at 510 nm and 700 nm can be related to the red-orange color (Figure 3a,b). As the EDA content was increased to 1 mL, the absorption peak blue shifted to 475 nm and a higher intensity broader peak was observed beyond 600 nm. The absorption peak combination of 475 nm and higher intensity at 700 nm can be related to the yellowish color of the Cu 2 O sample with 1 mL EDA (Figure 3c,d). To further probe the nanoparticles size of the Cu2O samples, a series of SEM images were obtained and particle size distribution was calculated. Figure 5 shows the SEM image of Cu2O nanoparticles fabricated in the absence of EDA. Here, we observe a cubic structure of the Cu2O nanoparticles with size approximately 750 nm. Figure 6 shows SEM images of Cu2O oxide samples with increasing EDA content along with their corresponding size distribution. Figure 6a shows Cu2O nanoparticles deposited in the presence of 0.2 mL of EDA in the electrolytic bath. The shape of the Cu2O nanoparticles appear to be mix of triangular and rhombic shapes. When the EDA content was increased to 0.4 mL the average nanoparticle size decreased. The SEM image in Figure 6c shows a combination of large and small nanoparticles. It was also observed that all the nanoparticles had similar shapes as seen in Figure 6a. Further increase in the EDA content to 0.8 mL did not show any apparent change in the Cu2O nanoparticle size and shape (Figure 6e). However, with additional increase in the EDA content to 1 mL, drastic decrease in the nanoparticle size and shape was observed (Figure 6g). Here, a bimodal distribution was observed. This distribution is confirmed by the SEM image for 1 mL EDA sample, which shows small Cu2O nanoparticles underneath larger nanoparticles. Moreover, the nanoparticle size distribution appeared to be narrow for both nanoparticle sizes. The nanoparticles were quasi spherical in shape. Table 1 provides the average nanoparticle size for Cu2O samples under investigation in the present work. To further probe the nanoparticles size of the Cu 2 O samples, a series of SEM images were obtained and particle size distribution was calculated. Figure 5 shows the SEM image of Cu 2 O nanoparticles fabricated in the absence of EDA. Here, we observe a cubic structure of the Cu 2 O nanoparticles with size approximately 750 nm. To further probe the nanoparticles size of the Cu2O samples, a series of SEM images were obtained and particle size distribution was calculated. Figure 5 shows the SEM image of Cu2O nanoparticles fabricated in the absence of EDA. Here, we observe a cubic structure of the Cu2O nanoparticles with size approximately 750 nm. Figure 6 shows SEM images of Cu2O oxide samples with increasing EDA content along with their corresponding size distribution. Figure 6a shows Cu2O nanoparticles deposited in the presence of 0.2 mL of EDA in the electrolytic bath. The shape of the Cu2O nanoparticles appear to be mix of triangular and rhombic shapes. When the EDA content was increased to 0.4 mL the average nanoparticle size decreased. The SEM image in Figure 6c shows a combination of large and small nanoparticles. It was also observed that all the nanoparticles had similar shapes as seen in Figure 6a. Further increase in the EDA content to 0.8 mL did not show any apparent change in the Cu2O nanoparticle size and shape (Figure 6e). However, with additional increase in the EDA content to 1 mL, drastic decrease in the nanoparticle size and shape was observed (Figure 6g). Here, a bimodal distribution was observed. This distribution is confirmed by the SEM image for 1 mL EDA sample, which shows small Cu2O nanoparticles underneath larger nanoparticles. Moreover, the nanoparticle size distribution appeared to be narrow for both nanoparticle sizes. The nanoparticles were quasi spherical in shape. Table 1 provides the average nanoparticle size for Cu2O samples under investigation in the present work. Figure 6a shows Cu 2 O nanoparticles deposited in the presence of 0.2 mL of EDA in the electrolytic bath. The shape of the Cu 2 O nanoparticles appear to be mix of triangular and rhombic shapes. When the EDA content was increased to 0.4 mL the average nanoparticle size decreased. The SEM image in Figure 6c shows a combination of large and small nanoparticles. It was also observed that all the nanoparticles had similar shapes as seen in Figure 6a. Further increase in the EDA content to 0.8 mL did not show any apparent change in the Cu 2 O nanoparticle size and shape (Figure 6e). However, with additional increase in the EDA content to 1 mL, drastic decrease in the nanoparticle size and shape was observed (Figure 6g). Here, a bimodal distribution was observed. This distribution is confirmed by the SEM image for 1 mL EDA sample, which shows small Cu 2 O nanoparticles underneath larger nanoparticles. Moreover, the nanoparticle size distribution appeared to be narrow for both nanoparticle sizes. The nanoparticles were quasi spherical in shape. Table 1 provides the average nanoparticle size for Cu 2 O samples under investigation in the present work.
Cu2O Nanoparticles as Biosensing Platform
Here, Cu2O nanoparticles were utilized for glucose sensing to test whether the fabricated electrodes can serve as a robust and viable biosensing platform. The steady-state amperometric response of the enzymatic biosensor was investigated by the successive addition of equal amounts of glucose in 10 mM PBS, at an applied potential of 0.8 V under constant stirring condition. Amperometric response was first obtained from Cu2O reference sample followed by Cu2O samples with 0.2 mL and 1 mL EDA. These samples were not treated with GOx. All samples exhibited amperometric response, to the addition of glucose, in the absence of GOx. The amperometric response was higher in samples with EDA. The response and sensitivity increased with EDA content (Figure 7a). In the presence of GOx, the Cu2O samples with EDA exhibited an increase in the overall current along with distinct amperometric response. Figure 7b exhibited a rapid and sensitive response to the addition of glucose for the two biosensors fabricated in the presence of EDA. The current response increased with the increase in the glucose concentration at every step. The biosensors also demonstrated a fast current response of <2 s. Additionally, the Cu2O nanoparticles with higher concentration of EDA (1 mL) exhibited a total current enhancement compared to sample with 0.2 mL EDA. The reference sample, immobilized with GOx, exhibited the lowest current response compared to the other samples fabricated with EDA. Thus the total current response indicated that the biosensor was more sensitive to increased surface area. Furthermore, the concentration of EDA used during the deposition process eventually influences the sensitivity of the biosensor. Figure 7c shows the calibration curves for the Cu2O reference sample and Cu2O samples with EDA content of 0.2 mL and 1.0 mL. It is evident that the current increases with glucose concentration, almost linearly from a range of 0.1 mM to 3.5 mM. The sensitivity of the biosensors ranges between 1243.2-1538 μA/cm 2 . mM for Cu2O nanoparticles with EDA content of 0.2 mL to
Cu 2 O Nanoparticles as Biosensing Platform
Here, Cu 2 O nanoparticles were utilized for glucose sensing to test whether the fabricated electrodes can serve as a robust and viable biosensing platform. The steady-state amperometric response of the enzymatic biosensor was investigated by the successive addition of equal amounts of glucose in 10 mM PBS, at an applied potential of 0.8 V under constant stirring condition. Amperometric response was first obtained from Cu 2 O reference sample followed by Cu 2 O samples with 0.2 mL and 1 mL EDA. These samples were not treated with GOx. All samples exhibited amperometric response, to the addition of glucose, in the absence of GOx. The amperometric response was higher in samples with EDA. The response and sensitivity increased with EDA content (Figure 7a). In the presence of GOx, the Cu 2 O samples with EDA exhibited an increase in the overall current along with distinct amperometric response. Figure 7b exhibited a rapid and sensitive response to the addition of glucose for the two biosensors fabricated in the presence of EDA. The current response increased with the increase in the glucose concentration at every step. The biosensors also demonstrated a fast current response of <2 s. Additionally, the Cu 2 O nanoparticles with higher concentration of EDA (1 mL) exhibited a total current enhancement compared to sample with 0.2 mL EDA. The reference sample, immobilized with GOx, exhibited the lowest current response compared to the other samples fabricated with EDA. Thus the total current response indicated that the biosensor was more sensitive to increased surface area. Furthermore, the concentration of EDA used during the deposition process eventually influences the sensitivity of the biosensor. Figure 7c shows the calibration curves for the Cu 2 O reference sample and Cu 2 O samples with EDA content of 0.2 mL and 1.0 mL. It is evident that the current increases with glucose concentration, almost linearly from a range of 0.1 mM to 3.5 mM. The sensitivity of the biosensors ranges between 1243.2-1538 µA/cm 2 . mM for Cu 2 O nanoparticles with EDA content of 0.2 mL to 1.0 mL respectively. The affinity of GOx to the substrate, glucose was obtained by calculating the apparent Michaelis-Menten constant with the help of the Lineweaver-Burk equation [22]: where C is the glucose concentration, i max and i are the currents for substrate saturation and steady state respectively, during the glucose sensing measurements. From the calculation, K app M was obtained to be 1.00 mM and 1.25 mM for Cu 2 O samples with EDA content of 0.2 mL and 1.0 mL respectively, which indicates good affinity of the immobilized GOx to glucose. The biosensor characteristics obtained in this work were compared to the values in literature, shown in Table 2 [12]. The present work exhibited higher detection limit among other studies in literature. Additionally, a large linear range was obtained. Stability test on the biosensor was also performed. The Cu 2 O with 1 mL EDA sample was tested after 7 days of initial amperometric response. The amperometric response of the EDA sample diminished slightly after 7 days of storage. Sensors 2017Sensors , 17, 1944 7 of 10 1.0 mL respectively. The affinity of GOx to the substrate, glucose was obtained by calculating the apparent Michaelis-Menten constant with the help of the Lineweaver-Burk equation [22]: where C is the glucose concentration, and are the currents for substrate saturation and steady state respectively, during the glucose sensing measurements. From the calculation, was obtained to be 1.00 mM and 1.25 mM for Cu2O samples with EDA content of 0.2 mL and 1.0 mL respectively, which indicates good affinity of the immobilized GOx to glucose. The biosensor characteristics obtained in this work were compared to the values in literature, shown in Table 2 [12]. The present work exhibited higher detection limit among other studies in literature. Additionally, a large linear range was obtained. Stability test on the biosensor was also performed. The Cu2O with 1 mL EDA sample was tested after 7 days of initial amperometric response. The amperometric response of the EDA sample diminished slightly after 7 days of storage. CuxO/Cu 1620 -------49 0-4 [24] Cu2O/CRG --------------1.2 0.1-1.1 [25] Cu2O/Cu 62.29 -------37 0.05-6.75 [26] Cu2O/Carbon Vulcan XC-72 629 -------2.4 0-6 [27] Cu2O/Nafion/Glassy Carbon 121.7 -------38 0-0.5 [28] From the above characterizations and biosensor investigation of the Cu2O nanoparticles it was clear that the nanoparticle size decreased and the density increased with increase in EDA content. Evidently, the optical properties varied with changing EDA content. Additionally, the XRD data provided evidence of decreasing lattice parameter with increasing EDA content. Furthermore, the sensitivity was enhanced with decrease in nanoparticle size. Thus, it was evident that EDA played an important role in controlling the size, density, optical and biosensing properties of the Cu2O -------38 0-0.5 [28] From the above characterizations and biosensor investigation of the Cu 2 O nanoparticles it was clear that the nanoparticle size decreased and the density increased with increase in EDA content. Evidently, the optical properties varied with changing EDA content. Additionally, the XRD data provided evidence of decreasing lattice parameter with increasing EDA content. Furthermore, the sensitivity was enhanced with decrease in nanoparticle size. Thus, it was evident that EDA played an important role in controlling the size, density, optical and biosensing properties of the Cu 2 O nanoparticles. It was therefore pertinent to understand the influence of EDA on the final outcome of the Cu 2 O nanoparticles.
Chemical additives including sodium dodecyl sulfate (SDS), polyvinylpyrrolidone (PVP) and EDA have been often utilized in many solution-based synthesis as well as electrochemical synthesis of various nanostructures as a shape modifying agent [29][30][31]. However, the fabrication of Cu 2 O nanoparticles via electrodeposition using EDA as a size and shape modification additive has not been report to the best of our knowledge. Here, there are several factors that could influence the size, density and shape of the Cu 2 O nanoparticles in the presence of EDA. The chemical additive EDA has a tendency to adsorb on high energy faces of a growing crystal, thus leading to a controlled size and shape of the Cu 2 O nanoparticles. The drastic decrease in the nanoparticle size, with increase in the EDA content, can subsequently increase the density of the nanoparticles on the FTO substrate. The decrease in size and increase in the density was corroborated by the SEM data. However, the EDA present in the electrolytic bath can also interact with the FTO substrate occupying deposition sites of Cu 2+ ions. This could hinder the deposition of Cu 2+ and lower the deposition of the Cu 2 O on the FTO substrate. Such interaction of EDA with the FTO substrate resulted in lower sample thickness. This was confirmed by the XRD data and also verified by visual inspection. Thus, the presence of EDA affects the deposition forming small nanoparticle size, higher density and lower sample thickness with increase in EDA content. Further, the interplay between the applied potential, electrolyte pH and EDA content is still unclear and thus requires further exploration. Presently, detailed investigation is underway to understand the precise influence of EDA on size, density, deposition rate and conductivity of these Cu 2 O samples.
Conclusions
In conclusion, successful electrodeposition of Cu 2 O nanoparticle was performed in the presence of EDA. The nanoparticles size was varied between 54.09 nm to 966.97 nm, by adjusting the EDA content in the electrolyte bath. The absorption spectra indicated a blue shift in the absorption peak with decrease in the Cu 2 O nanoparticle size. The enzyme, GOx was successfully immobilized on the Cu 2 O nanoparticles. The sensitivity of the biosensor was influenced by the presence of EDA. The sensitivity increased with EDA content during electrodeposition. More detailed investigations elucidating the influence of EDA on the Cu 2 O nanoparticle size, density and sample thickness are underway. | 7,900.2 | 2017-08-24T00:00:00.000 | [
"Materials Science"
] |
The Influence of Rare Earth Ce on the Microstructure and Properties of Cast Pure Copper
The effects of rare earth Ce on the microstructure and properties of cast pure copper were investigated through thermodynamic calculations, XRD analysis, mechanical testing, metallographic microscopy, and scanning electron microscopy (SEM). The experimental results demonstrate that the reaction between rare earth Ce and oxygen as well as sulfur in copper exhibits a significantly negative Gibbs free energy value, indicating a strong thermodynamic driving force for deoxidation and desulfurization reactions. Ce is capable of removing trace amounts of O and S from copper. Moreover, the maximum solid solubility of Ce in Cu falls within the range of 0.009% to 0.01%. Furthermore, Ce can refine columnar grains while enlarging equiaxed grains in as-cast copper. Upon the addition of rare earth Ce, the tensile strength increased by 8.45%, elongation increased by 12.1%, and microhardness rose from 73.5 HV to 81.2 HV—an increase of 10.5%. Overall, rare earth Ce has been found to enhance both the microstructure and mechanical properties of cast pure copper.
Introduction
Pure copper exhibits excellent electrical and thermal conductivity, high ductility, and exceptional processability, making it extensively utilized in various fields such as the electronic industry, the electric power sector, and military applications [1][2][3][4].The attainment of copper material with elevated cleanliness levels, a uniform microstructure, and superior properties is crucial to ensure the subsequent processing procedures and product quality.
The application of rare earth in pure copper has been relatively under-reported compared to its extensive research and excellent effects observed in iron and steel materials, aluminum, magnesium, and other alloy materials [5][6][7][8].Existing studies suggest that rare earth can also enhance the microstructure and properties of copper alloys.Rare earth primarily functions in copper alloys by the following: (i) exhibiting highly reactive chemical properties that enable it to react with detrimental elements such as oxygen, sulfur, hydrogen, etc., forming high-melting-point compounds which float on the slag and act as purifying agents [9][10][11] and (ii) dispersing fine high-melting-point compounds throughout the copper alloy matrix to serve as new crystalline cores for grain refinement [12][13][14].Due to its reactivity, rare earth readily reacts with impurities in copper to form well-defined shape-rich compounds with high melting points, thereby improving the morphology of original inclusions [15,16].Consequently, rare earth significantly alters the existing forms of impurities within copper alloys while remarkably enhancing their properties.
Considering the advantageous impact of rare earth elements on copper alloys, this study aims to investigate the influence of Ce on the microstructure and mechanical proper-Materials 2024, 17, 2387 2 of 13 ties of pure copper.The objective is to ascertain whether rare earth elements have a similar effect on pure copper, thereby enhancing the overall characteristics of cast pure copper.
Materials and Methods
The cathode copper, serving as the experimental raw material, is melted using a 25 Kg SK-NL300 vacuum medium-frequency induction furnace at a temperature of 1200 • C. In order to take advantage of the low oxidation tendency and similar melting point to Cu exhibited by the Ce-Cu alloy, rare earths are incorporated in the form of a Ce-Cu intermediate alloy.The content of rare earth element Ce in the Ce-Cu alloy amounts to 35%.By employing an Inductively Coupled Plasma Mass Spectrometer (ICP-MS), the rare earth content in the melted test copper is measured, resulting in values of 0 ppm and 97 ppm for sample numbers 1# and 2#, respectively.Thermodynamic calculations were initially employed to analyze the reaction between rare earth element Ce and typical harmful elements present in copper from a theoretical perspective, thereby determining the reaction products.
The organization and performance of the test samples were evaluated using the following methods: The columnar and equiaxed grain regions of the copper ingots after smelting were observed under a low-magnification microscope, specifically the Axiocam105 color model.The crystal structure and phase analysis of the samples were conducted using a Rigaku SmartLab SE X-ray diffractometer (Manufactured by Rigaku Corporation, Tokyo, Japan) with a scanning range from 10 • to 80 • at a scanning speed of 2 • /min.A microstructure observation and EDS spectrum analysis of the samples were performed using a TESCAN MIRA LMS electron microscope (SEM) (The equipment was manufactured by TESCAN, Brno, Czech Republic) operating at an acceleration voltage ranging from 200 V to 30 KV, a probe current ranging from 1 pA to 100 nA, with stability better than 0.2%/h.Hardness testing was carried out on the samples using a KSV-2500 microhardness tester with a loading load of 0.2 Kg (200 gf).According to the standard (GB/T228.1-2021[17]), the tensile performance of the specimen was evaluated using a CMT5305 electronic universal testing machine (Manufactured by MTS Systems Corporation in Shanghai, China).
Thermodynamic Investigation of Primary Ce-Containing Inclusions in Cast Pure Copper
Although the impurity elements in pure copper are present at very low levels (less than 0.01%), these trace impurities can significantly impact the properties of pure copper.For instance, the formation of brittle compounds such as Cu 2 O and CuS due to the presence of oxygen and sulfur can greatly reduce its plasticity.Additionally, rare earth elements exhibit a strong affinity for oxygen and sulfur, leading to the formation of high-melting-point rare earth compounds with excellent thermal stability and low specific gravity, thereby playing a crucial role in purifying liquid copper.Understanding the thermodynamics of rare earth reactions within copper serves as a fundamental basis for investigating their influence on this metal.The occurrence of Ce-containing inclusions in cast pure copper can be explained from a thermodynamic perspective.
Thermodynamic Conditions and the Sequential Formation of Ce-Containing Impurities
At a temperature of 1200 • C, the presence of the rare earth element Ce in copper can lead to the formation of various reaction products.By utilizing the corresponding thermodynamic data [18][19][20][21], it becomes possible to predict the thermodynamic conditions and sequence of formation for rare earth impurities in copper through reactions between Ce and main elements O and S. Table 1 presents the reaction products as well as the activity product of Ce with O and S in liquid copper.At a temperature of 1200 • C, the activity coefficient of rare earth Ce, when added to the main elements in the multi-component copper system, is as follows [20]: ƒ Ce = 0.9890, ƒ o = 0.4537, and ƒ s = 0.6405.In this experiment, ω [Ce] = 0.0097%, ω [O] = 0.0003%, and ω [S] = 0.0005%.Therefore, the activity of Ce, S, and O in the entire group of copper elements can be obtained: α Ce = 9.6 × 10 −3 , α O = 1.4 × 10 −4 , and α S = 3.2 × 10 −4 .
(1) The thermodynamic conditions governing the interconversion between CeO 2 (s) and Ce 2 O 3 (s) According to the reaction Derived from the chemical isothermal equation as follows: When ∆G < 0, indicating Derived from the chemical isothermal equation as follows: When ∆G < 0, i.e., when (∏ a 3 ) 2 = 0.4374, the reaction proceeds towards the right to form Ce 2 O 3 (s); conversely, it yields Ce 2 O 2 S (s).In this experiment, a 0 a s = 0.4375 and (3) The thermodynamic conditions governing the interconversion between CeS (s) and Ce 2 S 3 (s) According to the reaction Derived from the chemical isothermal equation as follows: When ∆G < 0, indicating ∏ a 4 = 0.7737, the reaction proceeds towards the formation of Ce 2 S 3 (s); otherwise, it leads to the production of CeS (s).In this experimental study, as a s = 3.2 × 10 −4 < ∏ a 5 ∏ a 4 = 0.7737, the resulting product is determined to be CeS (s).
(4) The thermodynamic conditions governing the interconversion between CeS (s) and CeS 4 (s) According to the reaction Derived from the chemical isothermal equation as follows: ∏ a 4 = 0.2776 × 10 2 , the reaction proceeds towards the formation of CeS 4 (s); otherwise, it leads to the production of CeS (s).In this experimental study, as (a s ) , the resulting product is determined to be CeS (s).
According to the thermodynamic conditions calculated based on the aforementioned interconversion of reaction products, it can be inferred that in this experimental study, the final inclusion compounds formed by rare earth Ce in a copper melt at 1200 • C consist of Ce 2 O 3 (s), Ce 2 O 2 S (s), and CeS (s).
Thermodynamic Properties of Cu-Ce-O System
The reaction between rare earth cerium and oxygen in liquid copper at 1200 • C can be described as follows: ∆G is the change in the Gibbs free energy of the chemical reaction; ∆G θ is the standard Gibbs free energy; J is activity quotient; R is the gas constant; T is the Kelvin temperature.
Taking a 1% solution by mass as the standard state, the calculation formula of activity α i is as follows: Ce 2 O 3 is a pure material, so α Ce 2 O 3 = 1 during the calculations is the activity interaction coefficient of component j to component i in copper liquid.The interaction coefficient between main elements in copper is shown in Table 2.At 1200 • C, because the activity of O is extremely low, it can be considered that α Ce 2 O 3 .In the formula ω [Ce] = 0.0097%, ω [O] = 0.0003%, and ω [S] = 0.0005%, the values were measured in the test, and the average value of multiple measurements was taken.The reaction between rare earth cerium and sulfur in a copper liquid at 1200 • C can be described as follows: Taking a 1% solution by mass as the standard state, according to Formulas ( 11)-( 13), The reaction of rare earth cerium with oxygen and sulfur in a copper liquid at 1200 • C is depicted as follows: Taking a 1% solution by mass as the standard state, according to Formulas ( 11)-( 13), The calculation results reveal that the values of ∆G 1 , ∆G 2 , and ∆G 3 are negative, with significant magnitudes.This indicates a pronounced thermodynamic inclination for rare earth Ce to facilitate deoxidation and desulfurization reactions in copper, effectively removing trace amounts of O and S from the system.
Solid Solution of Ce in Copper
Combined with the author's previous research data, X-ray diffraction (XRD) analysis was conducted on samples containing varying amounts of Ce to investigate the correlation between diffraction peaks and angles for different elements and Ce addition.The obtained XRD diffraction pattern is presented in Figure 1.Due to the low concentration of the Ce element used in this experiment, only the diffraction peaks corresponding to Cu crystal planes (111), (200), and (220) are observed, while no discernible diffraction peaks related to the rare earth Ce element or its compounds are detected.According to the relationship between the diffraction angle and content, a line diagram illustrating the correlation between the diffraction angle and content for different crystal planes is constructed.
In Figure 2, (a) represents the diffraction angle image of crystal plane (100), (b) represents the diffraction angle image of crystal plane (200), and (c) represents the diffraction angle image of crystal plane (220).According to Bragg's law: 2dsinθ = nλ, the diffraction angle is inversely proportional to the spacing between crystal planes, while the lattice constant is positively correlated with this spacing.Therefore, a larger diffraction angle indicates a smaller lattice constant.Conversely, a larger lattice constant leads to increased solid solubility and a leftward shift of the diffraction peak.From Figure 2, it can be observed that within the Ce concentration range from 0.006% to 0.009%, there is a decrease in diffraction angles with increasing Ce content, accompanied by a gradual increase in the lattice constant, indicating a progressive dissolution of Ce into copper resulting in the formation of a solid solution between them.Within the range of 0.009% to 0.01% content, each grain boundary exhibits an upward trend in its respective diffraction angle, suggesting that Ce has reached its maximum solid solubility and started forming a supersaturated solid solution with copper leading to the precipitation of Ce elements as well.These findings demonstrate that Cu has an approximate maximum solid solubility for Ce within the range of 0.009~0.01%.
Existing Forms of Ce in Pure Copper
The surface of the sample was analyzed using a scanning electron microscope to further investigate the distribution and existing form of Ce in the matrix.As depicted in Figure 3, Ce was uniformly distributed within the Cu matrix in the sample with a Ce According to the relationship between the diffraction angle and content, a line diagram illustrating the correlation between the diffraction angle and content for different crystal planes is constructed.
In Figure 2, (a) represents the diffraction angle image of crystal plane (100), (b) represents the diffraction angle image of crystal plane (200), and (c) represents the diffraction angle image of crystal plane (220).According to Bragg's law: 2dsinθ = nλ, the diffraction angle is inversely proportional to the spacing between crystal planes, while the lattice constant is positively correlated with this spacing.Therefore, a larger diffraction angle indicates a smaller lattice constant.Conversely, a larger lattice constant leads to increased solid solubility and a leftward shift of the diffraction peak.From Figure 2, it can be observed that within the Ce concentration range from 0.006% to 0.009%, there is a decrease in diffraction angles with increasing Ce content, accompanied by a gradual increase in the lattice constant, indicating a progressive dissolution of Ce into copper resulting in the formation of a solid solution between them.Within the range of 0.009% to 0.01% content, each grain boundary exhibits an upward trend in its respective diffraction angle, suggesting that Ce has reached its maximum solid solubility and started forming a supersaturated solid solution with copper leading to the precipitation of Ce elements as well.These findings demonstrate that Cu has an approximate maximum solid solubility for Ce within the range of 0.009~0.01%.According to the relationship between the diffraction angle and content, a line diagram illustrating the correlation between the diffraction angle and content for different crystal planes is constructed.
In Figure 2, (a) represents the diffraction angle image of crystal plane (100), (b) represents the diffraction angle image of crystal plane (200), and (c) represents the diffraction angle image of crystal plane (220).According to Bragg's law: 2dsinθ = nλ, the diffraction angle is inversely proportional to the spacing between crystal planes, while the lattice constant is positively correlated with this spacing.Therefore, a larger diffraction angle indicates a smaller lattice constant.Conversely, a larger lattice constant leads to increased solid solubility and a leftward shift of the diffraction peak.From Figure 2, it can be observed that within the Ce concentration range from 0.006% to 0.009%, there is a decrease in diffraction angles with increasing Ce content, accompanied by a gradual increase in the lattice constant, indicating a progressive dissolution of Ce into copper resulting in the formation of a solid solution between them.Within the range of 0.009% to 0.01% content, each grain boundary exhibits an upward trend in its respective diffraction angle, suggesting that Ce has reached its maximum solid solubility and started forming a supersaturated solid solution with copper leading to the precipitation of Ce elements as well.These findings demonstrate that Cu has an approximate maximum solid solubility for Ce within the range of 0.009~0.01%.
Existing Forms of Ce in Pure Copper
The surface of the sample was analyzed using a scanning electron microscope to further investigate the distribution and existing form of Ce in the matrix.As depicted in Figure 3, Ce was uniformly distributed within the Cu matrix in the sample with a Ce
Existing Forms of Ce in Pure Copper
The surface of the sample was analyzed using a scanning electron microscope to further investigate the distribution and existing form of Ce in the matrix.As depicted in Figure 3, Ce was uniformly distributed within the Cu matrix in the sample with a Ce content of 0.0097%.However, aggregation was observed in the enlarged region indicated by the arrow, as shown in Figure 4. content of 0.0097%.However, aggregation was observed in the enlarged region indicated by the arrow, as shown in Figure 4. EDS energy spectrum analysis was conducted in the region indicated by the arrow in Figure 4, revealing relatively high contents of Ce, Cu, O, and S elements.Thermodynamic calculations and analyses suggest that substances in this area may consist of Ce2O3, CeS, or Ce2O2S compounds; excessive amounts of Ce can also form intermetallic compounds with Cu.Zhang Shihong et al. [22] employed the Miedema thermodynamic model to compute that upon the addition of rare earth element Ce to purple bronze, Ce exhibits a preference for reacting with O and S elements, resulting in the formation of high-melting-point and low-density rare earth compounds.Simultaneously, rare earth Ce can also react with copper to generate corresponding copper/rare earth compounds, which disperse as second-phase particles within the copper matrix.The experimental results are consistent with these conclusions.During the solidification process, formed compounds serve as nucleation centers which facilitate microstructure refinement while pinning dislocations and improving the properties of the copper alloy.EDS energy spectrum analysis was conducted in the region indicated by the arrow in Figure 4, revealing relatively high contents of Ce, Cu, O, and S elements.Thermodynamic calculations and analyses suggest that substances in this area may consist of Ce2O3, CeS, or Ce2O2S compounds; excessive amounts of Ce can also form intermetallic compounds with Cu.Zhang Shihong et al. [22] employed the Miedema thermodynamic model to compute that upon the addition of rare earth element Ce to purple bronze, Ce exhibits a preference for reacting with O and S elements, resulting in the formation of high-melting-point and low-density rare earth compounds.Simultaneously, rare earth Ce can also react with copper to generate corresponding copper/rare earth compounds, which disperse as second-phase particles within the copper matrix.The experimental results are consistent with these conclusions.During the solidification process, formed compounds serve as nucleation centers which facilitate microstructure refinement while pinning dislocations and improving the properties of the copper alloy.EDS energy spectrum analysis was conducted in the region indicated by the arrow in Figure 4, revealing relatively high contents of Ce, Cu, O, and S elements.Thermodynamic calculations and analyses suggest that substances in this area may consist of Ce 2 O 3 , CeS, or Ce 2 O 2 S compounds; excessive amounts of Ce can also form intermetallic compounds with Cu.Zhang Shihong et al. [22] employed the Miedema thermodynamic model to compute that upon the addition of rare earth element Ce to purple bronze, Ce exhibits a preference for reacting with O and S elements, resulting in the formation of high-meltingpoint and low-density rare earth compounds.Simultaneously, rare earth Ce can also react with copper to generate corresponding copper/rare earth compounds, which disperse as second-phase particles within the copper matrix.The experimental results are consistent with these conclusions.During the solidification process, formed compounds serve as nucleation centers which facilitate microstructure refinement while pinning dislocations and improving the properties of the copper alloy.
Effect of Rare Earth Ce on As-Cast Microstructure of Pure Copper
The transverse low power structure of 1# and 2# as-cast copper ingots is depicted in Figure 5.It can be observed that the addition of rare earth significantly alters the microstructure of the as-cast copper ingot.In Figure 5, the majority of crystals from the edge to the center of the 1# copper ingot are columnar with coarse grains and uneven distribution, while a small number of equiaxed grains are present at the center.Upon adding rare earth, more equiaxed grains emerge at the center with enlarged size, and compared to those without rare earth addition, there is evident refinement in columnar grains at the edge.The transverse low power structure of 1# and 2# as-cast copper ingots is depicted in Figure 5.It can be observed that the addition of rare earth significantly alters the microstructure of the as-cast copper ingot.In Figure 5, the majority of crystals from the edge to the center of the 1# copper ingot are columnar with coarse grains and uneven distribution, while a small number of equiaxed grains are present at the center.Upon adding rare earth, more equiaxed grains emerge at the center with enlarged size, and compared to those without rare earth addition, there is evident refinement in columnar grains at the edge.The addition of rare earth effectively refines the as-cast grain of copper through a dual mechanism.Firstly, the inclusion of rare earth Ce reduces the melting temperature of pure copper, thereby increasing the undercooling degree of the composition.This increased undercooling promotes nucleation and improves the nucleation rate, subsequently inhibiting grain growth.Moreover, heightened undercooling in pure copper intensifies cellular dendrite growth and enhances dendritic development, ultimately resulting in reduced dendrite spacing and refined columnar crystals.Additionally, composition undercooling provides sufficient nucleation conditions for new equiaxed grains to form based on effective nucleation points created by rare earth atoms within this region.These newly formed equiaxed grains continue to grow while solute redistribution during grain growth generates another composition undercooling zone at the solid/liquid interface front surrounding grain growth.This facilitates continuous nucleation and growth at these sites, thus promoting the expansion of the equiaxed crystal region [21].Secondly, upon adding rare earth to copper, preferential reactions between rare earth elements and other constituents lead to the formation of high-melting-point compounds which are finely dispersed throughout the molten copper matrix.During solidification, these fine high-melting-point compounds act as heterogeneous crystal nuclei that increase the crystal nucleus density while mechanically impeding grain growth processes.Consequently, this shortens solidification time and contracts the columnar crystal region [11].
The Influence of Rare Earth Ce on Impurities in Pure Copper
The overall morphology of inclusions in pure copper before and after the addition of rare earth is depicted in Figure 6.As illustrated by 1#, the inclusions present in pure copper ingots devoid of rare earth exhibit a substantial size and tend to aggregate, displaying an extremely irregular and angular morphology that significantly compromises the properties of copper.Conversely, as demonstrated by 2#, it is evident that the inclusion size becomes noticeably finer upon introducing a rare earth content of 97 ppm into the copper ingot.Moreover, their shape transforms from irregular to The addition of rare earth effectively refines the as-cast grain of copper through a dual mechanism.Firstly, the inclusion of rare earth Ce reduces the melting temperature of pure copper, thereby increasing the undercooling degree of the composition.This increased undercooling promotes nucleation and improves the nucleation rate, subsequently inhibiting grain growth.Moreover, heightened undercooling in pure copper intensifies cellular dendrite growth and enhances dendritic development, ultimately resulting in reduced dendrite spacing and refined columnar crystals.Additionally, composition undercooling provides sufficient nucleation conditions for new equiaxed grains to form based on effective nucleation points created by rare earth atoms within this region.These newly formed equiaxed grains continue to grow while solute redistribution during grain growth generates another composition undercooling zone at the solid/liquid interface front surrounding grain growth.This facilitates continuous nucleation and growth at these sites, thus promoting the expansion of the equiaxed crystal region [21].Secondly, upon adding rare earth to copper, preferential reactions between rare earth elements and other constituents lead to the formation of high-melting-point compounds which are finely dispersed throughout the molten copper matrix.During solidification, these fine high-melting-point compounds act as heterogeneous crystal nuclei that increase the crystal nucleus density while mechanically impeding grain growth processes.Consequently, this shortens solidification time and contracts the columnar crystal region [11].
The Influence of Rare Earth Ce on Impurities in Pure Copper
The overall morphology of inclusions in pure copper before and after the addition of rare earth is depicted in Figure 6.As illustrated by 1#, the inclusions present in pure copper ingots devoid of rare earth exhibit a substantial size and tend to aggregate, displaying an extremely irregular and angular morphology that significantly compromises the properties of copper.Conversely, as demonstrated by 2#, it is evident that the inclusion size becomes noticeably finer upon introducing a rare earth content of 97 ppm into the copper ingot.Moreover, their shape transforms from irregular to approximately round, This result may be attributed to the following factors: Firstly, upon the addition of rare earths, they react with detrimental elements such as oxygen and sulfur in the copper liquid, forming high-melting-point inclusions.These inclusions preferentially precipitate, and some are entrained into the copper slag, leading to a reduction in impurities and the purification of the copper melt.Additionally, during this process, a small amount of highmelting-point oxides, sulfides, and other inclusions also precipitate from the copper liquid to form solid cores.Due to their strong reactivity, certain rare earths gradually adsorb onto these solid core surfaces.Simultaneously, owing to chemical potential gradients' existence, there is a concentration disparity between nucleation centers and surrounding rare earth atoms that drives more rare earth atoms towards nucleation cores.Through continuous attraction, convergence, and fusion processes, rare earth oxide/sulfide compounds eventually form whose size depends on the aggregation level of rare earth elements.Since there is relatively less presence of rare earth elements compared to oxygen or sulfides at this stage, the size of inclusions remains relatively small.Secondly, the addition of rare earths alters inclusion types resulting in improved morphology.
Effect of Rare Earth Ce on Mechanical Properties of Copper
The mechanical properties of copper, both before and after the addition of rare earth Ce, are presented in Table 3.It is evident that the incorporation of Ce leads to enhancements in both the tensile strength and elongation of copper.Specifically, the average tensile strength and elongation values for copper with Ce addition are measured at 154 MPa and 33%, respectively, representing an increase of 8.45% compared to pure copper's tensile strength and a remarkable improvement by 12.1% over its elongation.This result may be attributed to the following factors: Firstly, upon the addition of rare earths, they react with detrimental elements such as oxygen and sulfur in the copper liquid, forming high-melting-point inclusions.These inclusions preferentially precipitate, and some are entrained into the copper slag, leading to a reduction in impurities and the purification of the copper melt.Additionally, during this process, a small amount of highmelting-point oxides, sulfides, and other inclusions also precipitate from the copper liquid to form solid cores.Due to their strong reactivity, certain rare earths gradually adsorb onto these solid core surfaces.Simultaneously, owing to chemical potential gradients' existence, there is a concentration disparity between nucleation centers and surrounding rare earth atoms that drives more rare earth atoms towards nucleation cores.Through continuous attraction, convergence, and fusion processes, rare earth oxide/sulfide compounds eventually form whose size depends on the aggregation level of rare earth elements.Since there is relatively less presence of rare earth elements compared to oxygen or sulfides at this stage, the size of inclusions remains relatively small.Secondly, the addition of rare earths alters inclusion types resulting in improved morphology.
Effect of Rare Earth Ce on Mechanical Properties of Copper
The mechanical properties of copper, both before and after the addition of rare earth Ce, are presented in Table 3.It is evident that the incorporation of Ce leads to enhancements in both the tensile strength and elongation of copper.Specifically, the average tensile strength and elongation values for copper with Ce addition are measured at 154 MPa and 33%, respectively, representing an increase of 8.45% compared to pure copper's tensile strength and a remarkable improvement by 12.1% over its elongation.The tensile fracture morphology before and after the addition of Ce is presented in Figure 7.It can be observed that both samples exhibit ductile fracture characteristics.However, upon the addition of rare an increase in the number of dimples on the fracture surface is evident, accompanied by a reduction in their size and a more uniform distribution.In contrast, without the addition of rare earth, numerous irregularly shaped and large-sized inclusions surround the dimples on the fracture surface.Remarkably, with the incorporation of rare earth, there is a noticeable decrease in inclusion content.Upon magnification, smaller round-or oval-shaped inclusions within dimples are uniformly distributed.
The tensile fracture morphology before and after the addition of Ce is presented in Figure 7.It can be observed that both samples exhibit ductile fracture characteristics.However, upon the addition of rare earth, an increase in the number of dimples on the fracture surface is evident, accompanied by a reduction in their size and a more uniform distribution.In contrast, without the addition of rare earth, numerous irregularly shaped and large-sized inclusions surround the dimples on the fracture surface.Remarkably, with the incorporation of rare earth, there is a noticeable decrease in inclusion content.Upon magnification, smaller round-or oval-shaped inclusions within dimples are uniformly distributed.When the specimen undergoes tensile deformation, the grain boundary serves as a region of stress concentration.With continuous tensile loading, micro-holes tend to form at the grain boundary.These micro-holes act as nucleation sites for dimples and gradually expand, eventually leading to fracture in the copper grain structure.The presence of larger-size inclusions increases the propensity for fracture, which contributes to premature failure in samples without rare earth addition.Upon introducing rare earth Ce, two significant improvements are observed: Firstly, the refinement of grains occurs due to its addition; according to the Hall-Petch relationship [23], refining grains can substantially enhance a material's tensile strength by negatively correlating it with grain size.Secondly, rare earth Ce enhances inclusion morphology by transforming large-size When the specimen undergoes tensile deformation, the grain boundary serves as a region of stress concentration.With continuous tensile loading, micro-holes tend to form at the grain boundary.These micro-holes act as nucleation sites for dimples and gradually expand, eventually leading to fracture in the copper grain structure.The presence of largersize inclusions increases the propensity for fracture, which contributes to premature failure in samples without rare earth addition.Upon introducing rare earth Ce, two significant improvements are observed: Firstly, the refinement of grains occurs due to its addition; according to the Hall-Petch relationship [23], refining grains can substantially enhance a material's tensile strength by negatively correlating it with grain size.Secondly, rare earth Ce enhances inclusion morphology by transforming large-size irregular inclusions into fine and spheroidized ones that are uniformly dispersed throughout the material matrix.These finely distributed inclusions effectively impede dislocation movement and improve deformation resistance, thereby enhancing material strength.
Effect of Ce on Hardness
The microhardness tests were conducted on Sample 1 and Sample 2 at five different testing locations, namely positions a, b, c, d, and e as depicted in Figure 8.Based on the test results presented in Table 4, it is evident that: After the addition of rare earth Ce, the hardness of pure copper samples increased from 73.5 HV to 81.2 HV, representing a significant enhancement of 10.5%.When subjected to pressure load, pure copper undergoes deformation accompanied by the extensive movement and slip of dislocations within its structure.The presence of a finer grain size results in higher grain boundary density and a greater accumulation of additional dislocations at these boundaries, thereby increasing resistance against external forces and elevating material hardness.Additionally, the finely thus indicating an equilibrium reaction where both Ce 2 O 3 (s) and Ce 2 O 2 S (s) are produced.
Figure 1 .
Figure 1.XRD diffraction stacking diagram of copper samples with different Ce contents.
Figure 2 .
Figure 2. Variation curves of diffraction angle of different crystal planes.
Figure 1 .
Figure 1.XRD diffraction stacking diagram of copper samples with different Ce contents.
Materials 2024 , 13 Figure 1 .
Figure 1.XRD diffraction stacking diagram of copper samples with different Ce contents.
Figure 2 .
Figure 2. Variation curves of diffraction angle of different crystal planes.
Figure 2 .
Figure 2. Variation curves of diffraction angle of different crystal planes.
Figure 4 .
Figure 4. EDS analysis of typical compounds in matrix of 2# Ce-containing copper sample.
Figure 4 .
Figure 4. EDS analysis of typical compounds in matrix of 2# Ce-containing copper sample.Figure 4. EDS analysis of typical compounds in matrix of 2# Ce-containing copper sample.
Figure 4 .
Figure 4. EDS analysis of typical compounds in matrix of 2# Ce-containing copper sample.Figure 4. EDS analysis of typical compounds in matrix of 2# Ce-containing copper sample.
sized inclusions are uniformly dispersed throughout, indicating that rare earth has effectively enhanced the morphology of copper's inclusions.Materials 2024, 17, x FOR PEER REVIEW 9 ofapproximately round, while smaller-sized inclusions are uniformly dispersed throughout, indicating that rare earth has effectively enhanced the morphology of copper's inclusions.
Table 1 .
The reaction and activity products of rare earth Ce with O and S in a copper solution.
Table 2 .
Interaction coefficients between main elements in copper.
Materials 2024, 17, x FOR PEER REVIEW 8 of 13 3.2.3.Effect of Rare Earth Ce on As-Cast Microstructure of Pure Copper | 7,543.2 | 2024-05-01T00:00:00.000 | [
"Materials Science"
] |
Nutraceutical Approach for Preventing Obesity-Related Colorectal and Liver Carcinogenesis
Obesity and its related metabolic abnormalities, including insulin resistance, alterations in the insulin-like growth factor-1 (IGF-1)/IGF-1 receptor (IGF-1R) axis, and the state of chronic inflammation, increase the risk of colorectal cancer (CRC) and hepatocellular carcinoma (HCC). However, these findings also indicate that the metabolic disorders caused by obesity might be effective targets to prevent the development of CRC and HCC in obese individuals. Green tea catechins (GTCs) possess anticancer and chemopreventive properties against cancer in various organs, including the colorectum and liver. GTCs have also been known to exert anti-obesity, antidiabetic, and anti-inflammatory effects, indicating that GTCs might be useful for the prevention of obesity-associated colorectal and liver carcinogenesis. Further, branched-chain amino acids (BCAA), which improve protein malnutrition and prevent progressive hepatic failure in patients with chronic liver diseases, might be also effective for the suppression of obesity-related carcinogenesis because oral supplementation with BCAA reduces the risk of HCC in obese cirrhotic patients. BCAA shows these beneficial effects because they can improve insulin resistance. Here, we review the detailed relationship between metabolic abnormalities and the development of CRC and HCC. We also review evidence, especially that based on our basic and clinical research using GTCs and BCAA, which indicates that targeting metabolic abnormalities by either pharmaceutical or nutritional intervention may be an effective strategy to prevent the development of CRC and HCC in obese individuals.
Introduction
Obesity, which is the result of a positive energy balance, is a serious health problem throughout the world. The World Health Organization (WHO) estimates that currently, more than 1.5 billion adults worldwide are overweight, of which at least 500 million are obese [1]. Obesity is linked to several health disorders such as cardiovascular disease, hypertension, diabetes mellitus, and hyperlipidemia, which are collectively known as "metabolic syndrome". In addition, mounting evidence indicates that obesity and its related metabolic abnormalities, especially diabetes mellitus, are associated with the development of certain types of human epithelial malignancies, including colorectal cancer (CRC) and hepatocellular carcinoma (HCC) [2][3][4][5][6][7][8]. On the basis of systematic reviews of epidemiological evidence as well as mechanistic interpretations and data from animal experimental models, the World Cancer Research Fund and American Institute for Cancer Research released a report in 2007 on the causal relationship between high body fatness and an increased risk of CRC [9]. A large-scale meta-analysis (221 datasets on 282,000 incidence cases) also revealed that the magnitude of risk for CRC was greater among obese men than non-obese men [10]. In a prospectively studied population of more than 900,000 American adults, the body mass index (BMI) was found to be significantly associated with higher rates of death from cancer, especially HCC, because the relative risk of death from HCC was significantly higher (4.52 times) among men with a BMI of at least 35.0 than those who had normal weight (95% confidence interval, 2.94-6.94) [11].
Several pathophysiological mechanisms that link obesity and colorectal and liver carcinogenesis have been shown, including the emergence of insulin resistance, alterations in the insulin-like growth factor-1 (IGF-1)/IGF-1 receptor (IGF-1R) axis, the state of chronic inflammation, induction of oxidative stress, and occurrence of adipocytokine imbalance [2][3][4][5][6]. On the other hand, these findings also suggest that targeting these pathophysiological disorders via nutritional or pharmaceutical intervention might be an effective and promising strategy to inhibit obesity-related carcinogenesis. For instance, a 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitor pitavastatin, which is widely used to treat hyperlipidemia, prevents obesity-related colorectal and liver carcinogenesis by attenuating chronic inflammation [12,13]. Captopril and telmisartan, which are anti-hypertensive drugs, also suppress the development of colonic preneoplastic lesions in obese and diabetic mice, and this suppression is associated with the reduction of oxidative stress and chronic inflammation [14].
In recent years, green tea catechins (GTCs) have received considerable attention because of their beneficial effects: they improve metabolic abnormalities and prevent cancer development [15][16][17][18][19]. Dietary supplementation with branched-chain amino acids (BCAA; leucine, isoleucine, and valine), which can prevent progressive hepatic failure in patients with chronic liver disease by improving insulin resistance [20][21][22], also reduces the risk of HCC in such patients who are obese [8]. In this article, we review the many mechanisms by which obesity and the related metabolic abnormalities influence the development of CRC and HCC while especially focusing on the emergence of insulin resistance and the subsequent inflammatory cascade. We also prove that the nutraceutical approach using GTCs and BCAA might be effective in preventing obesity-related carcinogenesis in both the colorectum and liver.
Potential Pathophysiological Mechanisms Linking Obesity and the Development of CRC
Obesity is the main determinant of insulin resistance and hyperinsulinemia, which is a risk factor for CRC [23]. Insulin itself and the signal transduction network it regulates have important roles in oncogenesis [24,25]. In animal models, insulin stimulates the growth of CRC cells while also promoting CRC tumor growth [26,27]. In addition, insulin resistance increases the biological activity of IGF-1, an important endocrine and paracrine regulator of tissue growth and metabolism. The binding of insulin and IGF-1 to the cell-surface receptors, insulin receptor and IGF-1R, respectively, on tumors and precancerous cells activates the phosphatidylinositol 3-kinase (PI3K)/Akt pathway, which is responsible for cellular processes like growth, proliferation, and survival [24,25]. Alterations in the IGF/IGF-1R axis caused by insulin resistance contribute to the development of CRC [28]. IGF-1 is positively correlated with body fat and waist circumference [29]. Moreover, insulin resistance and increased adipose mass create an oxidative environment in the tissues that upregulates the expression of various pro-inflammatory cytokines, including tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), which stimulate tumor growth and progression [30][31][32][33][34]. Increased oxidative stress promotes damage to cell structures, including DNA, and activates the PI3K/Akt pathway, and both these processes play a key role in cancer development [35,36]. Therefore, insulin resistance and the subsequent inflammatory cascade involving increased oxidative stress are regarded as important factors in the development of obesity-associated CRC.
Excess production of storage lipids causes an adipocytokine imbalance, which entails increased levels of leptin and decreased levels of adiponectin in the serum, for example. This imbalance may also be related to obesity-associated carcinogenesis [37,38]. Leptin stimulates cell growth in CRC [39]. An epidemiologic study by Stattin et al. [40] suggested an association between circulating leptin levels and the development of CRC. TNF-α and IL-6 increase the levels of leptin, while leptin influences inflammatory responses, possibly by triggering the release of TNF-α and IL-6 [41-43]. These findings suggest that the pathophysiological abnormalities caused by obesity cooperatively aggravate the risk of cancers, including CRC, in obese individuals ( Figure 1).
Potential Pathophysiological Mechanisms Linking Obesity, Non-Alcoholic Fatty Liver Disease/Non-Alcoholic Steatohepatitis, and the Development of HCC
Several pathophysiological mechanisms linking obesity, steatosis, and liver carcinogenesis have been shown, including insulin resistance and the subsequent inflammatory cascade. Insulin induces HCC cells to proliferate and resist apoptosis [44,45]. Insulin resistance raises the risk for recurrence of HCC after curative radiofrequency ablation in hepatitis C virus-positive patients [46]. Insulin resistance also leads to an increased expression of TNF-α and its dysregulation is associated with the development of steatosis and inflammation within the liver [47]. Activation of the IGF/IGF-1R axis is involved with liver carcinogenesis [48,49]. High levels of serum leptin, which stimulates the growth of HCC cells [50], increase the risk of HCC recurrence after curative treatment [51]. These findings suggest that in addition to colorectal carcinogenesis, obesity and its related metabolic abnormalities also play an important role in the development of HCC ( Figure 1).
Non-alcoholic fatty liver disease (NAFLD), which is known to be a hepatic manifestation of metabolic syndrome, is the most common form of chronic liver disease in developed countries [52,53]. It covers a spectrum of disorders ranging from simple steatosis to non-alcoholic steatohepatitis (NASH), which can progress to cirrhosis and thus HCC ( Figure 1) [52,53]. Retrospective data suggest that in as many as 4-27% of cases, NASH progresses to HCC after cirrhosis develops [53,54]. Insulin resistance is considered a critical factor in the etiology of NASH [55]. The flux of free fatty acids to the liver and insulin resistance lead to hepatic fat accumulation, which causes inflammatory changes in the liver [56,57]. Enhanced TNF-α expression and increased leptin levels are also found in patients with NASH [58,59]. In addition, Wong et al. [60] recently reported interesting results from a cross-sectional study, indicating that NASH is associated with a high prevalence of colorectal adenomas and advanced neoplasms. This finding may suggest that in addition to HCC, NASH may be associated with an increased risk of CRC.
Preventive Effects of GTCs on the Metabolic Abnormalities and Cancer Development
Numerous studies have indicated that tea catechins, especially GTCs, are beneficial for various reasons, such as their anti-obesity effects [15]. A recent meta-analysis of clinical trials reported that GTCs help reduce body weight [61]. The underlying mechanisms include an increase in energy expenditure, stimulation of fatty acid oxidation, and reduction of nutrient absorption [62]. The effects of GTCs whereby they suppress metabolic syndrome have also been investigated in laboratory, epidemiological, and intervention studies [63,64]. In a rodent model of obesity and diabetes, treatment with green tea or its constituents was found to result in significantly reduced body weight and, therefore, improved hyperglycemia, hyperinsulinemia, hyperleptinemia, hepatic steatosis, and liver dysfunction [65][66][67]. GTCs supplementation was also found to decrease plasma levels of insulin, TNF-α, and IL-6 in a rat insulin resistance model [68]. These reports suggest that long-term treatment with GTCs may be effective for preventing the progression of obesity-related metabolic disorders.
In addition to the anti-obesity effects, GTCs possess anti-cancer and cancer-preventive properties [16][17][18][19]. Intervention studies provide clear evidence of the chemopreventive effects of tea preparations [69,70]. A pilot study also showed that GTCs successfully prevent colorectal adenomas, the precancerous lesions of CRC, after polypectomy [71]. Several properties of GTCs are responsible for their anti-cancer and cancer-preventive effects, including their antioxidant and anti-inflammatory properties [16,72]. An increasing number of studies have reported that GTCs, especially the major biologically active component in green tea (−)-epigallocatechin gallate (EGCG), inhibit proliferation of and induce apoptosis among cancer cells by modulating the activities of different receptor tyrosine kinases (RTKs) and their downstream signaling pathways, including the Ras/extracellular signal-regulated kinase (ERK) and PI3K/Akt signaling pathways [17][18][19]73,74]. EGCG suppresses cell growth by inhibiting the activation of IGF-1R, a member of the RTK family, in human CRC and HCC cells, and this inhibition is associated with a decrease in the expression of IGF-1/2, but an increase in the expression of IGF-binding protein-3 (IGFBP-3), which negatively controls the function of the IGF/IGF-1R axis [49,75]. EGCG also prevents carbon tetrachloride-induced hepatic fibrosis in rats by inhibiting IGF-1R expression [76]. These reports indicate that the IGF/IGF-1R axis, which plays a critical role in both cancer development and obesity-induced pathological events [24,25], might be a critical target of GTCs.
Preventive Effects of BCAA on Metabolic Abnormalities and HCC in Obese, Cirrhotic Patients: Results Form the LOTUS Study
Because the liver, an important target organ of insulin, plays a critical role in regulating metabolism, patients with chronic liver diseases often suffer from several nutritional and metabolic disorders, such as protein-energy malnutrition and insulin resistance [77][78][79][80]. Decreased serum levels of BCAA and albumin are associated with a high incidence of liver cirrhosis, while supplementation with BCAA has been shown to improve protein malnutrition and increase the serum albumin concentration in cirrhotic patients [20,77,78]. In addition, recent experimental studies have revealed that BCAA improves insulin resistance and glucose tolerance [81][82][83]. She et al. [81] reported that mitochondrial branched-chain aminotransferase knock out mice, which show a significant elevation in the serum BCAA level, exhibit decreased adiposity and remarkable improvements in glucose and insulin tolerance. BCAA has favorable effects on glucose metabolism not just in the liver but also in skeletal muscle and adipose tissue [84][85][86]. In the liver, BCAA activates liver-type glucokinase and glucose transporter (GLUT)-2, while suppressing the expression of glucose-6-phosphatase, which catalyzes the final steps of gluconeogenesis [84]. On the other hand, BCAA promotes glucose uptake through activation of PI3K and subsequent translocation of GLUT1 and GLUT4 to the plasma membrane in the skeletal muscle [86]. Moreover, in mice fed a high-fat diet, BCAA supplementation ameliorated insulin resistance by improving adipocytokine imbalance, inhibiting lipid accumulation in the liver, and increasing the hepatic levels of peroxisome proliferator-activated receptor-α [87,88]. Several clinical trials have also reported that oral BCAA supplementation improves glucose tolerance and insulin resistance in patients with chronic liver disease [22, 89,90].
The Long-Term Survival Study (LOTUS) was a large-scale (n = 622) multicenter randomized controlled trial conducted from 1997 to 2003 in Japan to investigate the effects of supplemental BCAA therapy on event-free survival in patients with decompensated cirrhosis. In this trial, oral supplementation with a BCAA preparation was found to significantly prevent progressive hepatic failure and improve event-free survival [20]. Moreover, subset analysis from this trial demonstrated that long-term oral supplementation with BCAA is associated with a reduced frequency of HCC in obese patients (BMI score ≥ 25, P = 0.008) with decompensated cirrhosis [8]. What could the mechanisms of action of BCAA in the prevention of HCC have been? It seems reasonable to consider that the improvement of glucose metabolism by BCAA contributes to a decrease in the HCC incidence in obese cirrhotic patients because these patients generally have a particularly high incidence of hyperinsulinemia and insulin resistance [79,80]. In addition, Hagiwara et al. [91] recently reported significant findings that BCAA suppresses insulin-induced proliferation of HCC cells by inhibiting the insulin-induced activation of the PI3K/Akt pathway and the subsequent anti-apoptotic pathway. The precise mechanisms of action of BCAA in relation to carcinogenesis are explained in detail in the following sections.
Prevention of Obesity-Related CRC via the Nutraceutical Approach-GTCs and BCAA Effectively Prevent Obesity-Related Colorectal Carcinogenesis
Recent evidence indicates that increased body fatness and BMI are associated with an increased risk of CRC [4,5,[9][10][11]. In contrast, studies have provided convincing evidence that dietary habits, especially high fruit and vegetable consumption, may reduce the risk of this malignancy [92]. Hirose et al. [93] established a useful preclinical model to determine the underlying mechanisms of how specific agents prevent the development of obesity-related CRC. The model used was C57BL/KsJ-db/db (db/db) mice, which are a genetically altered animal model with phenotypes of obesity and diabetes mellitus [94]. These mice have hyperlipidemia, hyperinsulinemia, and hyperleptinemia and are susceptible to the colonic carcinogen azoxymethane (AOM) because AOM-induced colonic precancerous lesions, aberrant crypt foci (ACF) and β-catenin accumulated crypts (BCAC), develop to a significantly greater extent in these mice than in the genetic control mice [93]. The colonic mucosa of db/db mice expresses high levels of IGF-1R, the phosphorylated (activated) form of IGF-1R (p-IGF-1R), β-catenin, and cyclooxygenase-2 (COX-2) [95]. Dietary supplementation with certain types of flavonoids, such as citrus compounds, suppresses the development of these putative lesions for CRC in the db/db mice [96][97][98]. We used this experimental model to investigate in detail the effects of EGCG and BCAA on the prevention of obesity-related colorectal carcinogenesis. We found that drinking water with EGCG significantly decreased the number of ACF and BCAC, which accumulate the IGF-1R protein, and this decrease was associated with inhibited expression of IGF-1R, p-IGF-1R, the phosphorylated form of glycogen synthase kinase-3β (GSK-3β), β-catenin, COX-2, and cyclin D1 on the colonic mucosa [95]. EGCG also increased the serum level of IGFBP-3 while decreasing the serum levels of IGF-1, insulin, triglycerides, total cholesterol, and leptin [95]. In accordance with this study, supplementation with BCAA also caused a significant reduction in the number of ACF and BCAC compared with the control diet-fed groups by inhibiting the phosphorylation of IGF-1R, GSK-3β, and Akt on the colonic mucosa [99]. The serum levels of insulin, IGF-1, IGF-2, triglycerides, total cholesterol, and leptin were also decreased [99]. These findings suggest that both EGCG and BCAA effectively suppress the development of premalignant CRC lesions by suppressing the IGF/IGF-1R axis; improving hyperlipidemia, hyperinsulinemia, and hyperleptinemia; and inhibiting the expression of COX-2, which is involved in CRC development because it mediates inflammatory signaling pathways and can therefore be an important target for chemoprevention ( Figure 2) [100].
Prevention of Obesity-Related HCC via the Nutraceutical Approach-BCAA and GTCs Effectively Prevent Obesity-Related Liver Carcinogenesis
In addition to established risk factors such as hepatitis and alcohol consumption, obesity and its related metabolic abnormalities increase the risk of HCC [6-8,11]. NASH is also an important pathological condition when considering the prevention of obesity-related HCC because it progresses to cirrhosis and finally develops into HCC [53,54]. In order to elucidate the pathogenesis of obesityand NASH-associated HCC and evaluate the mechanisms of how chemopreventive agents suppress these diseases, we developed a useful preclinical model using db/db mice and a liver carcinogen N-diethylnitrosamine (DEN) [101]. We found that db/db mice, which have severe steatosis, are more susceptible to DEN-induced liver tumorigenesis than the genetic control mice, and this oncogenic sensitivity is associated with the activation of the IGF/IGF-1R axis and induction of chronic inflammation in the liver [13, [101][102][103].
Using this experimental model, we also investigated the possible inhibitory effects of BCAA and EGCG on obesity-related liver tumorigenesis. We found that BCAA supplementation significantly suppressed the development of hepatic preneoplastic lesions, known as foci of cellular alteration (FCA), in obese and diabetic db/db mice by inhibiting the expression of IGF-1, IGF-2, and IGF-1R in the liver [101]. The development of liver neoplasms, including hepatic adenoma and HCC, was also reduced by BCAA supplementation and this was associated with improvement of insulin resistance, reduction of serum levels of leptin, and attenuation of hepatic steatosis and fibrosis [101]. Yoshiji et al. [104] also reported that the chemopreventive effect exerted by BCAA supplementation against HCC in obese and diabetic rats was associated with the suppression of vascular endothelial growth factor expression and hepatic neovascularization. In addition, drinking water containing EGCG significantly inhibited the development of FCA and hepatic adenoma, and improved hepatic steatosis [103]. The serum levels of insulin, IGF-1, and IGF-2 and the phosphorylation of the IGF-1R, ERK, Akt, and GSK-3β proteins in the liver were reduced by EGCG consumption [103]. EGCG also decreased the levels of free fatty acids and TNF-α in the serum and the expression of TNF-α, IL-6, IL-1β, and IL-18 mRNAs in the liver, indicating that it prevents obesity-related liver tumorigenesis by inhibiting the IGF/IGF-1R axis, improving hyperinsulinemia, and attenuating chronic inflammation [103]. Thus, both BCAA and GTCs may be useful for the chemoprevention of liver carcinogenesis in obese individuals (Figure 3).
Conclusions
In the present social and medical circumstances, the consequences of obesity and its related metabolic abnormalities, including cancer, are critical issues that need to be resolved. Among human cancers, CRC and HCC are the most representative malignancies affected by obesity. In this review, we indicate the possibility that the nutraceutical approach for targeting and restoring metabolic homeostasis may be a promising strategy to prevent the development of obesity-related CRC and HCC. Tea catechins, especially GTCs, are considered one of the most practical agents for the prevention of obesity-related carcinogenesis because the safety and efficacy of GTCs as chemopreventive agents have been demonstrated in recent interventional trials [69,71]. BCAA is also a feasible agent because its preparations are widely used in clinical practice for patients with chronic liver diseases, and a randomized controlled trial has shown that BCAA supplementation can prevent HCC in such patients who are obese [8,20]. Thus, active intervention using GTCs and BCAA might be an effective approach for the chemoprevention of obesity-related CRC and HCC. | 4,513.6 | 2012-01-05T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Cis-2-Decenoic Acid and Bupivacaine Delivered from Electrospun Chitosan Membranes Increase Cytokine Production in Dermal and Inflammatory Cell Lines
Wound dressings serve to protect tissue from contamination, alleviate pain, and facilitate wound healing. The biopolymer chitosan is an exemplary choice in wound dressing material as it is biocompatible and has intrinsic antibacterial properties. Infection can be further prevented by loading dressings with cis-2-decenoic acid (C2DA), a non-antibiotic antimicrobial agent, as well as bupivacaine (BUP), a local anesthetic that also has antibacterial capabilities. This study utilized a series of assays to elucidate the responses of dermal cells to decanoic anhydride-modified electrospun chitosan membranes (DA-ESCMs) loaded with C2DA and/or BUP. Cytocompatibility studies determined the toxic loading ranges for C2DA, BUP, and combinations, revealing that higher concentrations (0.3 mg of C2DA and 1.0 mg of BUP) significantly decreased the viability of fibroblasts and keratinocytes. These high concentrations also inhibited collagen production by fibroblasts, with lower loading concentrations promoting collagen deposition. These findings provide insight into preliminary cellular responses to DA-ESCMs and can guide future research on their clinical application as wound dressings.
Introduction
Biopolymer wound dressings have proven effective in reducing bacterial contamination following surgery or musculoskeletal injury while also enhancing wound healing [1,2].A functional wound dressing maintains an optimal environment for tissue healing by easily conforming to a patient's body, providing a barrier to external contamination, and preventing excessive inflammation [3].Further, it is beneficial for these dressings to be capable of point-of-care loading with antimicrobial or anesthetic agents, making them tailorable to each patient's unique needs.Thus, to optimize the wound healing response, it is necessary to manufacture dressings from biocompatible materials and load them with agents that are cytocompatible and support regrowth of dermal cells.
During wound healing, dermal cells use sequential signals and responses to properly choreograph tissue repair, which occurs in three major phases: the lag phase, proliferation phase, and remodeling phase [4].The lag phase begins after injury with the formation of a fibrin and fibronectin blood clot and recruitment of platelets which release chemokines to recruit inflammatory cells, neutrophils, and macrophages, as well as fibroblasts and endothelial cells [5].This is followed by the proliferation phase, which consists of epithelial keratinocyte cell migration and fibroblast proliferation and concludes with the regeneration Pharmaceutics 2023, 15, 2476 2 of 11 phase of collagen deposition [6].The vitality of keratinocytes in the wound bed is essential as they re-epithelialize damaged tissue and work together with fibroblasts to cover wounds and promote closure.Keratinocytes also release a number of cytokines, such as proinflammatory IL-1 and IL-6 and anti-inflammatory IL-10, as well as various chemokines that recruit monocytes and speed up the healing process [7].In addition to helping keratinocytes cover the wound, fibroblasts also assist in wound healing by breaking down the fibrin clot and synthesizing new extracellular matrix and collagen structures.Thus, keratinocytes and fibroblasts serve as key players that work in union to coordinate the wound healing process.
Chitosan, a polycationic polysaccharide, is a promising material in the manufacture of wound dressings because it is biocompatible, biodegradable, and can be chemically modified to optimize drug loading and release characteristics [8].Previous work has investigated many chitosan products, from sponges to pastes to electrospun membranes, all of which were capable of point-of-care loading and release of various therapeutics [9][10][11][12][13].Electrospinning chitosan membranes result in a random distribution of nanoscale fibers with a high surface area and porosity, similar to the structure of a native extracellular matrix [14].These characteristics also allow for high-volume drug loading as well as continued exchange of fluids and nutrients [15].Furthermore, the nanostructural properties of the electrospun chitosan membranes (ESCMs) allow cell growth but not complete infiltration within intermembrane pores.
The membranes investigated in this study use a decanoic anhydride acylation treatment for ESCMs (DA-ESCMs), a modification of the procedure developed by Wu et al. to allow for loading of hydrophobic local anesthetic bupivacaine (BUP) and the anti-biofilm fatty acid cis-2-decenoic acid (C2DA) [16].BUP is a local anesthetic with intrinsic antimicrobial properties that is clinically used in the forms of topical creams, ointments, and sprays [17,18].Loading membranes with BUP provides local pain management, thus decreasing the need for systemic opioid use.C2DA is a known biofilm dispersal agent that can prevent wound infection and colonization by multiple bacterial strains [19].Previous work has demonstrated the efficacious release of both BUP and C2DA from hexanoic anhydride-treated ESCMs (HA-ESCMs), as well as their prevention of Staphylococcus aureus biofilm growth, though high loading levels were cytotoxic to mouse fibroblasts, L929 [13].
This study sought to evaluate human dermal and inflammatory cell responses to BUP, C2DA, and combinations released from DA-ESCMs.DA-ESCMs are preferred over HA-ESCMs because a longer chain length leads to more hydrophobicity and greater interaction with carbon chains, extending the release of hydrophobics [20].Initially, BUP and C2DA loading concentrations were adjusted to prevent the cytotoxic effects reported in previous studies [13].DA-ESCMs were loaded with a range of C2DA and BUP concentrations, either individually or in combination, and tested with keratinocytes and fibroblasts to determine cytocompatible therapeutic loading concentrations.Collagen production of fibroblasts in contact with DA-ESCMs was measured to determine the effects of loading concentration on fibroblast function.We hypothesize that compatibility and collagen production will vary based on therapeutic loading, with higher concentrations correlating with higher toxicity.
Materials
Chitosan flakes (86% DDA) were purchased from Primex (Siglufjordur, Iceland).C2DA was synthesized by the authors as previously described [21].NHDFs, NHEKs, and their respective media and additives were purchased from Lonza (Basel, Switzerland).CellTiter-Glo was purchased from Promega (Madison, WI, USA).Sirius Red Total Collagen Detection Kit was from Chondrex (Woodinville, WA, USA).Bupivacaine and other chemical reagents were purchased from MilliporeSigma (Darmstadt, Germany).
Membrane Fabrication
Membranes were electrospun using a 71% degree of deacetylation, 311.5 kDa chitosan (Primex) at 5.5% (w/v) in 70% (v/v) trifluoroacetic acid (TFA)-30% (v/v) dichloromethane solution at 26 kV, as previously described.Membranes were spun to 15 cm diameters and ~0.7 mm (30 mL of spinning solution) thickness and treated using a 50-50 solution of pyridine and decanoic anhydride.Membranes were punched into 1 cm diameter discs and UV-sterilized prior to contact with cells.Ethanol (200 proof) was used for dissolving therapeutics, and membranes were loaded by pipetting 30 µL of a stock solution onto the surface so that a known mass was added to the membrane: C2DA (0.075, 0.15, 0.3 mg), BUP (0.25, 0.5, or 1.0 mg), or a combination of both treatments (Table 1).After therapeutic concentrations were applied to membranes, the membranes were dried aseptically in a laminar flow hood to allow ethanol evaporation, leaving therapeutics incorporated within the membrane fibers.
Cytocompatibility
Normal adult human dermal fibroblasts (NHDFs, Lonza) were cultured in FBM-2 Basal Medium plus FGM-2 SingleQuots supplements (Lonza), normal adult human epidermal keratinocytes (NHEKs, Lonza) in KBM-Gold Keratinocyte Growth Basal Medium plus SingleQuots Supplements and Growth Factors (Lonza) and RAW 264.7 monocytes (ATCC, Manassas, VA, USA) in DMEM plus 10% FBS.All cells were seeded at 1 × 10 4 cells/cm 2 in 24-well plates and cultured for 24 h at 37 • C and 5% CO 2 before adding the experimental treatment.All media were supplemented with 500 IU/mL penicillin, 500 µg/mL streptomycin, and 2.5 µg/mL amphotericin-B.After overnight incubation, DA-ESCMs were placed in the wells.After 24 and 72 h, wells were imaged microscopically, and cell viability was quantified using the CellTiter-Glo ® viability assay (Promega).Results were normalized as a percent viability versus cells grown with unloaded DA-ESCMs.Treatments were accepted as cytocompatible if they met or surpassed the 70% cytocompatibility minimum, as established by ISO 10993-5 [22].
NHDF Collagen Production
Supernatant media from NHDF cytocompatibility assays were used for the determination of collagen production using the Sirius Red Total Collagen Detection Kit (Chondrex, Woodinville, WA, USA).Sirius red is a unique dye which specifically binds to the [Gly-X-Y]n helical structure on fibrillar collagen (type I to V) and does not discriminate between collagen species and types.Briefly, supernatants were treated with a concentrating solution, stained with Sirius red dye, washed with a washing solution, and treated with an extraction buffer, and then optical density was read at 540 nm using a BioTek Synergy plate reader.Cells were centrifuged at 10,000 rpm for 3 min between each assay step.Collagen concentra-tions were calculated by referencing a standard curve generated by known concentrations of collagen (µg/mL, normalized by corresponding well's supernatant viability).
Cytokine Production
NHEK and RAW 264.7 supernatants were assayed for IL-10/IL-12/TNF-α/VEGF and IL-10/TNF-α, respectively, using ELISA.ELISA (Peprotech US, Cranbury, NJ, USA) was performed according to the manufacturer's instructions.Absorbance were read at 450 nm using a BioTek Synergy microplate reader (Agilent Technologies, Santa Clara, CA, USA).Levels of cytokine were expressed as specific units of activity (ng/mL, normalized by corresponding well's supernatant viability).One group of RAW 264.7 monocytes were treated with 100 ng/mL LPS for 24-72 h to induce stimulation.
Statistical Analysis
Statistically significant differences were tested with an ANOVA followed by Tukey's multiple comparisons test.Statistical analyses were performed in Prism version 8.4.3 (GraphPad Software, San Diego, CA, USA) at a significance level of 0.05.Data are reported as mean ± standard deviation.Throughout results, * indicates p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001, and **** indicates p < 0.0001.
Cytocompatibility
At 24 h, all membrane groups met or surpassed the cytocompatibility threshold of 70% viability.DA-ESCMs loaded with high concentrations of C2DA and the C2DA/BUP combination were determined to be cytotoxic to NHDFs.Membranes loaded with high concentrations of BUP alone were most cytotoxic to NHDFs at 72 h, with approximately 10% of viable NHDFs remaining compared to unloaded controls.The medium and low BUP and C2DA loadings as well as the low combination loading remained cytocompatible with NHDFs at 72 h (Figure 1).
between collagen species and types.Briefly, supernatants were treated with a concentrating solution, stained with Sirius red dye, washed with a washing solution, and treated with an extraction buffer, and then optical density was read at 540 nm using a BioTek Synergy plate reader.Cells were centrifuged at 10,000 rpm for 3 min between each assay step.Collagen concentrations were calculated by referencing a standard curve generated by known concentrations of collagen (µg/mL, normalized by corresponding well's supernatant viability).
Cytokine Production
NHEK and RAW 264.7 supernatants were assayed for IL-10/IL-12/TNF-α/VEGF and IL-10/TNF-α, respectively, using ELISA.ELISA (Peprotech US, Cranbury, NJ, USA) was performed according to the manufacturer's instructions.Absorbance were read at 450 nm using a BioTek Synergy microplate reader (Agilent Technologies, Santa Clara, CA, USA).Levels of cytokine were expressed as specific units of activity (ng/mL, normalized by corresponding well's supernatant viability).One group of RAW 264.7 monocytes were treated with 100 ng/mL LPS for 24-72 h to induce stimulation.
Statistical Analysis
Statistically significant differences were tested with an ANOVA followed by Tukey's multiple comparisons test.Statistical analyses were performed in Prism version 8.4.3 (GraphPad Software, San Diego, CA, USA) at a significance level of 0.05.Data are reported as mean ± standard deviation.Throughout results, * indicates p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001, and **** indicates p < 0.0001.
Cytocompatibility
At 24 h, all membrane groups met or surpassed the cytocompatibility threshold of 70% viability.DA-ESCMs loaded with high concentrations of C2DA and the C2DA/BUP combination were determined to be cytotoxic to NHDFs.Membranes loaded with high concentrations of BUP alone were most cytotoxic to NHDFs at 72 h, with approximately 10% of viable NHDFs remaining compared to unloaded controls.The medium and low BUP and C2DA loadings as well as the low combination loading remained cytocompatible with NHDFs at 72 h (Figure 1).
Collagen Production
Sirius red detection assays indicated minimal collagen production for all groups at 24 h, with all groups producing approximately 10 µg/mL.Fibroblasts treated with unloaded membranes resulted in 50 µg/mL collagen production at 72 h, and similar production levels for fibroblasts treated with a medium concentration of combination membranes and low concentrations of C2DA membranes.Fibroblasts treated with low BUP or low combination concentrations produced approximately 2× the amount of collagen compared to those treated with unloaded membranes.Fibroblasts treated with high concentrations of BUP membranes generated the highest amount of collagen at approximately 500 µg/mL.DA-ESCMs with medium BUP or low C2DA loadings generated approximately 200-300 µg/mL collagen.Finally, fibroblasts treated with medium concentrations of C2DA membranes and high concentrations of combination membranes generated the lowest levels of collagen (Figure 2).
Collagen Production
Sirius red detection assays indicated minimal collagen production for all groups at 24 h, with all groups producing approximately 10 µg/mL.Fibroblasts treated with unloaded membranes resulted in 50 µg/mL collagen production at 72 h, and similar production levels for fibroblasts treated with a medium concentration of combination membranes and low concentrations of C2DA membranes.Fibroblasts treated with low BUP or low combination concentrations produced approximately 2× the amount of collagen compared to those treated with unloaded membranes.Fibroblasts treated with high concentrations of BUP membranes generated the highest amount of collagen at approximately 500 µg/mL.DA-ESCMs with medium BUP or low C2DA loadings generated approximately 200-300 µg/mL collagen.Finally, fibroblasts treated with medium concentrations of C2DA membranes and high concentrations of combination membranes generated the lowest levels of collagen (Figure 2).
Cytocompatibility
The results indicate that while the highest concentration of both therapeutics alone was cytotoxic to NHEKs, lower concentrations and simultaneous delivery of both C2DA and BUP were cytocompatible with NHEKs.At 24 h, membranes loaded with high and medium concentrations of C2DA only were cytotoxic to NHEKs.Membranes loaded with low C2DA concentrations were 100% cytocompatible, equivalent to unloaded membranes.Membranes loaded with high concentrations of BUP alone were also cytotoxic at 24 h, but those loaded with medium or low concentrations of BUP were cytocompatible with NHEKs.All groups loaded with both therapeutics combined (high, medium, and low) were cytocompatible with NHEKs at 24 h.At 72 h, similar trends were present; however, cells recovered 10% viability with a high C2DA loading, while high-BUP-concentration membranes remained cytotoxic.At the 72 h timepoint, NHEK responses to each therapeutic loading concentration began to mimic NHDF responses.Simultaneous delivery of
Cytocompatibility
The results indicate that while the highest concentration of both therapeutics alone was cytotoxic to NHEKs, lower concentrations and simultaneous delivery of both C2DA and BUP were cytocompatible with NHEKs.At 24 h, membranes loaded with high and medium concentrations of C2DA only were cytotoxic to NHEKs.Membranes loaded with low C2DA concentrations were 100% cytocompatible, equivalent to unloaded membranes.Membranes loaded with high concentrations of BUP alone were also cytotoxic at 24 h, but those loaded with medium or low concentrations of BUP were cytocompatible with NHEKs.All groups loaded with both therapeutics combined (high, medium, and low) were cytocompatible with NHEKs at 24 h.At 72 h, similar trends were present; however, cells recovered 10% viability with a high C2DA loading, while high-BUP-concentration membranes remained cytotoxic.At the 72 h timepoint, NHEK responses to each therapeutic loading concentration began to mimic NHDF responses.Simultaneous delivery of both therapeutic molecules showed better levels of cytocompatibility in each group compared to delivering either molecule alone, with medium C2DA and BUP concentrations being almost as cytocompatible as lower concentrations (Figure 3).
both therapeutic molecules showed better levels of cytocompatibility in each group compared to delivering either molecule alone, with medium C2DA and BUP concentrations being almost as cytocompatible as lower concentrations (Figure 3).
Cytokine Production
The NHEK ELISA results indicate that C2DA and BUP, as well as their combinations, triggered a release of 25-50 ng/mL IL-10 from keratinocytes after 24 h, as release occurred at 72 h and not 24 h for most groups (Figure 4).No statistically significant difference was observed between unloaded and loaded DA-ESCMs.ELISAs measuring IL-12/TNF-α did not determine a detectable release for any group.However, after 24 h, the NHEK cells produced significant changes in VEGF production for both medium-and high-concentration groups of C2DA.Production of VEGF appeared to return to baseline by 72 h.
Cytocompatibility
After 24 h, only the high concentrations led to statistically significant decreases in the viability of RAW 264.7 monocytes.However, after 72 h, most groups showed decreases
Cytokine Production
The NHEK ELISA results indicate that C2DA and BUP, as well as their combinations, triggered a release of 25-50 ng/mL IL-10 from keratinocytes after 24 h, as release occurred at 72 h and not 24 h for most groups (Figure 4).No statistically significant difference was observed between unloaded and loaded DA-ESCMs.ELISAs measuring IL-12/TNF-α did not determine a detectable release for any group.However, after 24 h, the NHEK cells produced significant changes in VEGF production for both medium-and high-concentration groups of C2DA.Production of VEGF appeared to return to baseline by 72 h.
both therapeutic molecules showed better levels of cytocompatibility in each group compared to delivering either molecule alone, with medium C2DA and BUP concentrations being almost as cytocompatible as lower concentrations (Figure 3).
Cytokine Production
The NHEK ELISA results indicate that C2DA and BUP, as well as their combinations, triggered a release of 25-50 ng/mL IL-10 from keratinocytes after 24 h, as release occurred at 72 h and not 24 h for most groups (Figure 4).No statistically significant difference was observed between unloaded and loaded DA-ESCMs.ELISAs measuring IL-12/TNF-α did not determine a detectable release for any group.However, after 24 h, the NHEK cells produced significant changes in VEGF production for both medium-and high-concentration groups of C2DA.Production of VEGF appeared to return to baseline by 72 h.
Cytocompatibility
After 24 h, only the high concentrations led to statistically significant decreases in the viability of RAW 264.7 monocytes.However, after 72 h, most groups showed decreases
Cytocompatibility
After 24 h, only the high concentrations led to statistically significant decreases in the viability of RAW 264.7 monocytes.However, after 72 h, most groups showed decreases (Figure 5).The combination delivery of high levels of C2DA and BUP did show a recovery in cell viability after 24 h compared to either molecule delivered on its own.
(Figure 5).The combination delivery of high levels of C2DA and BUP did show a recovery in cell viability after 24 h compared to either molecule delivered on its own.
Cytokine Production
After 72 h, production of both TNF-α and IL-10 increased in the monocyte groups treated with C2DA; however, the levels of IL-10 were much greater than the TNF-α levels.No significant changes were observed after only 24 h (Figure 6).BUP had minimal impact on cytokine production.
Discussion
Biopolymer wound dressings loaded with antimicrobials and local anesthetics have the dual benefits of preventing contamination and alleviating pain at the wound site.Previous studies of ESCMs have shown their efficacy in drug release and antimicrobial capability against multiple bacterial strains, as well as their effects in preliminary cytocompatibility studies [13].This study sought to use a combination of C2DA and BUP loaded on DA-ESCMs to determine their combined and individual effects on dermal cell viability and collagen production for wound healing.We confirmed our hypothesis that therapeutic loading concentrations C2DA and BUP combined greatly affected outcomes synergistically.
Cytokine Production
After 72 h, production of both TNF-α and IL-10 increased in the monocyte groups treated with C2DA; however, levels of IL-10 were much greater than the TNF-α levels.No significant changes were observed after only 24 h (Figure 6).BUP had minimal impact on cytokine production.(Figure 5).The combination delivery of high levels of C2DA and BUP did show a recovery in cell viability after 24 h compared to either molecule delivered on its own.
Cytokine Production
After 72 h, production of both TNF-α and IL-10 increased in the monocyte groups treated with C2DA; however, the levels of IL-10 were much greater than the TNF-α levels.No significant changes were observed after only 24 h (Figure 6).BUP had minimal impact on cytokine production.
Discussion
Biopolymer wound dressings loaded with antimicrobials and local anesthetics have the dual benefits of preventing contamination and alleviating pain at the wound site.Previous studies of ESCMs have shown their efficacy in drug release and antimicrobial capability against multiple bacterial strains, as well as their effects in preliminary cytocompatibility studies [13].This study sought to use a combination of C2DA and BUP loaded on DA-ESCMs to determine their combined and individual effects on dermal cell viability and collagen production for wound healing.We confirmed our hypothesis that therapeutic loading concentrations of C2DA and BUP combined greatly affected outcomes synergistically.
Discussion
Biopolymer wound dressings loaded with antimicrobials and local anesthetics have the dual benefits of preventing contamination and alleviating pain at the wound site.Previous studies of ESCMs have shown their efficacy in drug release and antimicrobial capability against multiple bacterial strains, as well as their effects in preliminary cytocompatibility studies [13].This study sought to use a combination of C2DA and BUP loaded on DA-ESCMs to determine their combined and individual effects on dermal cell viability and collagen production for wound healing.We confirmed our hypothesis that therapeutic loading concentrations of C2DA and BUP combined greatly affected outcomes synergistically.
Dermal keratinocytes appeared to be more sensitive to the DA-ESCMs in general compared to dermal fibroblasts, a finding that was also noted in a study investigating the responses of NHEKs to polyvinyl alcohol nanofibers [23].Previous studies showed a higher compatibility of C2DA with NHEKs, though at lower concentrations than were presented in this study [24].The viability of NHDFs in contact with C2DA-loaded DA-ESCMs was slightly higher than the viability observed in previous studies that directly incubated similar C2DA concentrations with NHDFs and NIH-3T3 fibroblasts cultures [21,25].This may be due to the sustained delivery of therapeutics from membranes as opposed to the direct inoculation of cells with C2DA and no carrier system.A cytocompatibility study of ESCMs loaded with similar concentrations of C2DA (0.125 and 0.25 mg) also resulted in a high viability of L929 murine fibroblasts after 24 h of contact [26].The decrease in monocyte viability in response to C2DA is in line with previous studies that showed the short-chain fatty acid n-butyrate causes apoptosis and reduced adhesion of monocytes [27].The use of leukemic murine monocytes rather than primary human cells is a limitation of this study.The monocytes used in this study were not activated into macrophages, as these wound dressings are intended for immediate use after injury and the goal was to capture preliminary effects within 3 days.Future studies will delve further into the cellular responses of human monocytes and macrophages to C2DA.
ESCMs loaded with higher concentrations of BUP (5.0 and 2.5 mg) in previous studies with murine fibroblasts were far more cytotoxic, solidifying the decision to utilize lower BUP loading concentrations of 0.25, 0.5, and 1.0 mg for human cell testing [13].The BUP toxicity at 1.0 mg observed with each cell type is consistent with previous studies which cited a low viability of fibroblasts treated with 0.6 mg of BUP [28].In previous studies, high concentrations of BUP have shown toxicity to keratinocytes, whereas lower concentrations (<0.1 mmol) showed a proliferative effect [29].Future studies of DA-ESCMs may incorporate lower BUP concentrations to explore this finding further.BUP has been reported to inhibit monocyte viability at 1mmol concentrations (~0.28 mg/mL), which confirms the slightly toxic effect observed with all loading concentrations tested in this study [30].Further, an in vivo rat model showed that bupivacaine hydrochloride was found to induce macrophage apoptosis [31].The variability in cytocompatibility results may have been due to inconsistencies in membrane manufacturing; while representative samples from each electrospun membrane were tested for residual TFA, there may have been remnants on select individual punch-outs that altered the cytocompatibility of each sample.
DA-ESCMs loaded with high concentrations of C2DA inhibited collagen production, though the lowest concentration of C2DA stimulated collagen production.Either behavior can benefit burn wound healing depending on the severity of the injury.An over production of collagen results in disorganized scar tissue, yet collagen production is necessary for tensile strength, vascularization, and remodeling of regenerated tissue [32,33].While collagen production in response to C2DA has not been tested previously, studies of other fatty acids have shown a concentration-dependent relationship between treatment and collagen stimulation, in that high concentrations of fatty acids can inhibit collagen deposition [34,35].Increased collagen production in response to BUP at all concentrations, as well as increased collagen deposition in fibroblasts treated with medium-and low-concentration combinations of C2DA and BUP compared to those treated with unloaded DA-ESCMs, is consistent with previous studies that have shown that BUP can increase collagen production in vivo to a significantly greater extent than the structurally related local anesthetic ropivacaine [36].A limitation of this study is that only total collagen produced was measured; type-specific collagen analysis as well as gene expression could provide insight into the mechanisms of fibroblast stimulation or depression by DA-ESCMs.Comparison to other clinically used systems, such as silver-based wound dressings, will be relevant in future studies, particularly for in vivo analysis.In addition, the monocyte cell line used was a mouse cell line, while the rest of the cells used were human cell lines.
Keratinocyte IL-10 secretion appeared dependent on timepoint, as most groups had more detectable levels at the 72 h timepoint; IL-12 was not detected for DA-ESCM-treated cells at either timepoint.While some studies report IL-12 secretion by keratinocytes, other studies report that IL-12 is upregulated in keratinocytes under specific conditions, e.g., following UV-B radiation [37][38][39].Because data on the production of IL-12 by keratinocytes vary, analyzing the secretion of another pro-inflammatory cytokine, such as IL-1 or IL-6, may benefit future studies.ELISA results indicate high production of both pro-inflammatory TNF-A and anti-inflammatory IL-10.These chemical signals are part of the crosstalk controlling inflammation and mesenchymal stem cell activity [40].Previous studies have shown the timing of TNF-A and IL-10 release varies, so timepoint for sampling may need to be adjusted in future studies.However, IL-10 production was approximately 10x higher for all groups, indicating that BUP and/or C2DA can stimulate IL-10 release and thus have a dampening effect on TNF-A stimulation.Alternatively, IL-10 has been shown to actually have pro-inflammatory activities in some in vivo works, so designating these cytokines as purely anti-or pro-inflammatory may be shortsighted [41,42].The present study is limited to mouse RAW cells.Future studies will investigate a complete panel of cytokines for human monocyte and each cell type to further determine the inflammatory responses to DA-ESCMs.Further, LPS at 100 ng/mL did not induce a significant amount of TNF-α compared to the unloaded groups, so a higher dosage will be used in future studies to ensure maximum inflammatory cytokine release from positive control cells.An overall limitation of this study is that cell studies do not fully predict responses in vivo but instead examine cell responses to refine loading for in vivo studies.
Conclusions
These membranes, which approximate the nanofibrous structure of native ECMs, may prevent further damage and support healing when dressings are applied to cover wounds.Therapeutic concentrations of a biofilm inhibitor and local anesthetic were released to evaluate their ability to protect wounds from biofilm formation, promote non-inflammatory signaling, and support regenerative collagen production profiles.Loading strategies that promote collagen secretion could be beneficial in stimulating tissue healing for burns or soft tissue defects while also reducing contamination.The results of this study will advise future generations of this product, specifically regarding selecting C2DA and BUP loading concentrations, allowing preclinical and clinical studies to further characterize efficacy.
Figure 1 .Figure 1 .
Figure 1.Cytocompatibility of loaded and unloaded DA-ESCMs with normal human dermal fibroblasts.Graphs indicate percent viability of NHDFs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard devi- ation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.** indicates significantly lower viability compared to unloaded control with p < 0.01 and **** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 2 .
Figure 2. Quantification of collagen production by NHDFs after contact with loaded and unloaded DA-ESCMs as determined using a colorimetric collagen assay.Graphs indicate collagen production (µg/mL) from supernatant media of NHDFs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.** indicates significantly higher collagen production compared to unloaded control, detected using one-way ANOVA with Tukey's post hoc tests (p < 0.01), *** indicates p < 0.001, and **** indicates p < 0.0001.
Figure 2 .
Figure 2. Quantification of collagen production by NHDFs after contact with loaded and unloaded DA-ESCMs as determined using a colorimetric collagen assay.Graphs indicate collagen production (µg/mL) from supernatant media of NHDFs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.** indicates significantly higher collagen production compared to unloaded control, detected using one-way ANOVA with Tukey's post hoc tests (p < 0.01), *** indicates p < 0.001, and **** indicates p < 0.0001.
Figure 3 .
Figure 3. Cytocompatibility of loaded and unloaded DA-ESCMs with normal human epithelial keratinocytes.Graphs indicate percent viability of NHEKs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.** indicates significantly lower viability compared to unloaded control with p < 0.01 and *** indicates p < 0.001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 4 .
Figure 4. (A) IL-10 and (B) VEGF production by NHEKs in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 3 .
Figure 3. Cytocompatibility of loaded and unloaded DA-ESCMs with normal human epithelial keratinocytes.Graphs indicate percent viability of NHEKs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.** indicates significantly lower viability compared to unloaded control with p < 0.01 and *** indicates p < 0.001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 3 .
Figure 3. Cytocompatibility of loaded and unloaded DA-ESCMs with normal human epithelial keratinocytes.Graphs indicate percent viability of NHEKs in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.** indicates significantly lower viability compared to unloaded control with p < 0.01 and *** indicates p < 0.001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 4 .
Figure 4. (A) IL-10 and (B) VEGF production by NHEKs in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 4 .
Figure 4. (A) IL-10 and (B) VEGF production by NHEKs in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 5 .
Figure 5. Cytocompatibility of loaded and unloaded DA-ESCMs with murine macrophages.Graphs indicate percent viability of RAW 264.7 cells in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.* indicates significantly lower viability compared to unloaded control with p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001, and **** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 6 .
Figure 6.Production of (A) IL-10 and (B) TNF-α by monocytes in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 5 .
Figure 5. Cytocompatibility of loaded and unloaded DA-ESCMs with murine macrophages.Graphs indicate percent viability of RAW 264.7 cells in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.* indicates significantly lower viability compared to unloaded control with p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001, and **** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 5 .
Figure 5. Cytocompatibility of loaded and unloaded DA-ESCMs with murine macrophages.Graphs indicate percent viability of RAW 264.7 cells in contact with DA-ESCMs for (A) 24 h or (B) 72 h (n = 4).Viability was quantified based on metabolic activity by measuring ATP production.Individual data points are shown as bars representing mean and error bars representing standard deviation.Black line indicates the 70% cytocompatibility minimum established by ISO 10993-5.* indicates significantly lower viability compared to unloaded control with p < 0.05, ** indicates p < 0.01, *** indicates p < 0.001, and **** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 6 .
Figure 6.Production of (A) IL-10 and (B) TNF-α by monocytes in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Figure 6 .
Figure 6.Production of (A) IL-10 and (B) TNF-α by monocytes in contact with DA-ESCMs for 24 or 72 h (n = 4).Individual data points are shown as bars representing mean and error bars representing standard deviation.Concentrations were normalized based on viability for each sample.**** indicates p < 0.0001, detected using one-way ANOVA with Tukey's post hoc tests.
Table 1 .
Loading concentrations and abbreviations for each DA-ESCM group. | 7,988.4 | 2023-10-01T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Hardware-Intrinsic Multi-Layer Security: A New Frontier for 5G Enabled IIoT.
The introduction of 5G communication capabilities presents additional challenges for the development of products and services that can fully exploit the opportunities offered by high bandwidth, low latency networking. This is particularly relevant to an emerging interest in the Industrial Internet of Things (IIoT), which is a foundation stone of recent technological revolutions such as Digital Manufacturing. A crucial aspect of this is to securely authenticate complex transactions between IIoT devices, whilst marshalling adversarial requests for system authorisation, without the need for a centralised authentication mechanism which cannot scale to the size needed. In this article we combine Physically Unclonable Function (PUF) hardware (using Field Programmable Gate Arrays-FPGAs), together with a multi-layer approach to cloud computing from the National Institute of Standards and Technology (NIST). Through this, we demonstrate an approach to facilitate the development of improved multi-layer authentication mechanisms. We extend prior work to utilise hardware security primitives for adversarial trojan detection, which is inspired by a biological approach to parameter analysis. This approach is an effective demonstration of attack prevention, both from internal and external adversaries. The security is further hardened through observation of the device parameters of connected IIoT equipment. We demonstrate that the proposed architecture can service a significantly high load of device authentication requests using a multi-layer architecture in an arbitrarily acceptable time of less than 1 second.
Introduction
Adopting evolving business models that are enabled by emerging 5G technologies is a challenge when attempting to maintain legitimate security and privacy considerations for Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices [1]. It is clear that raising industrial users' knowledge that a substantial amount of the interest they create is intrinsically connected with intellectual property (IP) ownership and continuous development. There is also the persistent risk of a security breach that could compromise ownership of the IP, putting the underlying business model at higher risk [2,3]. This is particularly prevalent in the provision of IoT assisted healthcare systems, which is a pertinent example of distributed IT systems that have similarly complex needs and stakeholder requirements [4]. Although cloud computing illustrates how technologies and business models are used to provide new business opportunities to enterprises, businesses remain at risk of emerging threats due to the proliferation of cloud services, including multi-tenant cloud environments [5,6]. The promise of 5G Infrastructure holds immense possibilities for greater integration of physical devices that are ideally suited to IIoT for several reasons, as follows: • Less network latency increases overall response times and is able to enhance security protocol strictness without sacrificing the system's user experience; • Higher data rates enable the sharing of data between devices, and the utilisation of metadata to support secure transactions building trust between devices; • Lower power demand allows widespread use of sensing and processing devices where power infrastructure is absent.
The huge advantage of millimetre wave (MMW) radio spectrum for 5G is a crucial enabler for better network performance, although at a loss of propagation range [2]. Whereas the higher frequency band has specific physical security [7,8], this approach is not one that we should depending on. A manipulative attacker seated beside the IIoT device may be able to transmit data externally [9][10][11][12]. The heterogeneous nature of IoT communications with its heterogeneous architecture and devices, requires information sharing and collaboration across a wide range of networks. This poses severe privacy and security issues [13]. IoT privacy protection seems to be more vulnerable than conventional Information and Communication Technology (ICT) systems because of several vector threats against IIoT technologies [14,15]. Modelling these vulnerabilities is challenging, particularly since the multiplicity of IIoT devices each represent agents within a complex system of interactions that need to be secure [16][17][18].
Consequently, there is a need to create a flexible multi-layer cloud security architecture that provides adequate authentication for multiple parties in a reliable way, while being mindful of the heterogeneous nature of how IIoT devices will communicate efficiently. Our article discusses how well the cloud methodology was developed to guide the creation of the security architecture for several purposes. Firstly, cloud computing architectures actively support complex demands via elasticity [19,20] and facilitates the standardisation of diverse systems by abstraction. Secondly, there seems to be a proven architectural reference model given by NIST [21], which is widely used. Lastly, cloud systems have similar features with IIoT systems in that multiple parties need to function together and collaborate by a secure exchange of data and assets [5].
Previous work addressed the specific instance of multi-party trust authentication for the deployment of cloud based business intelligence systems. The authors also have built and adapted to accommodate a particular instance where the introduction of 5G network services would enable new business opportunities through increased efficiency. To support these features, the authors extended cloud-based infrastructure to include Physically Unclonable Function (PUF) hardware. Since the PUFs are resilient to spoofing attacks, the PUF hardware offers a higher level of security toward direct physical attacks, which are essential in situations where there is a need to rapidly authenticate several parties to ensure trustworthy connections [5].
The delivery of analytical resources from manufacturing plant represents a real scenario that the authors addressed, allowing the secure exchange of heterogeneous data, and also performance appraisal, between both the IIoT components and the enterprise (ICT) system of the organisation, often using Micro Services architecture [22]. This article considers the potential adversarial attacks to consider on such a device, which assists the design of an agile approach to multi-layer security. The authors created algorithms that require authentication through PUFs to provide effective, secure, and flexible access to IoT cloud applications. The article is arranged as follows. Section 2 defines a framework for multi-layer security. Section 3 presents a related secure solution for networking which utilises PUFs. In Section 4, we present the results of experiments that illustrate the potential for this approach. Finally, we conclude in Section 5.
Multi-Layer Security Model
The critical challenge for IIoT is the implementation and processing the large amount of data produced by these devices. In attaining this IoT vision, Low-Power (LP) and Loss Networks (LLNs) are diversified, and the interconnection of restricted physical devices by the use of LP and LLNs involves the modification of protocols and existing structures currently in common use [23]. Latterly, hardware trojan attacks have developed as a threat to all hardware and integrated circuits (ICs) [24]. The main challenge of handling network connectivity in a tightly-equipped setting, including a smart factory, is to identify and manipulate different attack vectors. In principle, the promise of cloud resources also introduces potential system vulnerabilities. As such, the authors opted to create a security model that divides a variety of security controls through multiple layers of defences [25]. Figure 1 illustrates a proposed secure architecture. The authors use the example of a traditional enterprise infrastructure with analytics capabilities to promote tactical and organisational business decision-making. Primarily, as just that, our model was examined, in which individual users are tenants in a multi-tenant cloud environment. In our model, we consider the case that each user or (IIoT device or sensor) is described by a multi-cloud enterprise system as a prospective tenant. As the architecture enables the abstraction of resources, users that also require access to the business network can do this remotely, through virtual machines, and also through hardware devices [25].
All endpoints are secured via firewalls. In the beginning, all external requests are assisted by authentication data firewalls for each potential tenant. The Metadata layer, for example offers security controls for the features previously allowed for each tenant registered. The lack of required authentication data will prevent the user from effectively communicating with the system. Once the simple authentication is established, a Tenant Metadata Layer maintaining rules-based controls is required to determine which part of the business system a permitted tenant can access. For instance, this may apply to specific databases or reports. While the IIoT device offers data for a variety of analytics processing, which involves not only adding data to the repository as well as maintaining access to other data sources which can be collected and merged to present better analytical services.
A secure connection must be established, and this has been achieved by using the public-key Infrastructure (PKI). The PKI uses it to verify that the signature is authentic. Within its model layer, public key certificates are preserved within the Digital Vault, and this offers another secure degree where the user session may be approved or removed. In the case where the deceptive attackers penetrated aggressively into the first three layers, Layer four offers a deeper layer of protection. Whereas the controls of the prior layers are capable of protecting against various attacks, these can not prevent them from a harmful intruder who previously has the authority to access the system. The network will monitor suspicious activities using the Intrusion Prevention System as well as to detect irregular actions, in order to set up a session for tenants engaging in inappropriate behaviour.
An anti-malware layer of protection reinforces layer four. Far more surreptitious activity, for example, hidden executable code, may disrupt as it is implemented into the business network. Layer 5 keeps an activity record, and a list of known threats. Within the application cloud layer, this layer comprises the business features and is of considerable value to enterprise clients. During that time, the client has entered this layer, simple authentication, client verification by PKI, intrusion prevention system (IPS) and anti-malware inspections were already made, with each layer being able to terminate the session. Apart from business applications, it is necessary to access corporate repositories by a particular type of user, whether directly through application programming interfaces (API) or through querying and monitoring interfaces, usually provided via a web portal [26].
NIST Cloud Model
NIST is developing standard protocols and guidelines for user or client devices access to the Cloud by means of an interface for virtualisation, Internet browser interface, and the thin client interface [27]. These clouds are formed of a 7-layer architecture, consisting of layers: (1) as the layer of the physical infrastructure components, (2) layer of resources abstraction for virtualisation, (3) the layout layer for virtual Services, (4) the infrastructure as a service (IaaS) layer, (5) the layer for platform as a service (PaaS), (6) the application layer of software as a service (SaaS), and (7) the layer of applications for the tenants. The proposed multilayer security model may be compared to the cloud model of NIST [27] as follows. In NIST layers there are tenant users which could be hardware devices or virtual machines (VMs). Such a model may be implemented to each layer according to the principles of trustworthy computing [1,2].
Each session is aligned to layer six through a sequence of authentication and verification phases in the fourth and fifth layers. For applications which are hosted off-premises, layer seven access is made available via API interfaces. The presence of a firewall suggests infrastructure as a service (IaaS) [20], whereas management systems exist within the platform as a service (PaaS) layer. Software applications will reside in a software as a service (SaaS) tier.
Session Workflow
A typical session workflow is illustrated in Figure 1. The allocation of session IDs in layers three and two contributes to the setup of a new client by a future IIoT tenant user. This would be accompanied by the access identifier given in layer four. Following this stage, where the inspection of packets is a crucial task for each of the sessions that have taken place so far. The database of metadata (DB META ) and database of vault (DB VAULT ) layers require the verification of IIoT requests before the packet inspection is performed for each session using a database of intrusion prevention systems DB IPS and database of anti-malware DB ANTIMAL . The DB IPS and DB META link explicitly to PaaS functions within the context of the NIST model. In comparison, in the model Database of firewall DB FW is known as IaaS. Supplementary authentication is required for each SaaS user, although at this stage, there is still a substantial number of verifications. Nevertheless, this verification is intended to enforce the company structure role-based permissions, such as the sub-set of employees, which offer access to the sensitive payroll information, for organisational data protection.
Hardware-Intrinsic Secure Multi-Layer Connectivity Model
The presented model considers users requesting access to services such as analytics in industrial infrastructure. Due to advancements in hardware technologies, users, as well as IIoT application services, incorporate hardware platforms on a large scale. One such advancement is the use of FPGAs solutions with hardware and software programmability providing flexibility and scalability to address IIoT requirements [28]. Applications such as data processing are an unavoidable part of IIoT, and FPGAs are an invaluable part of meeting future processing demands. FPGA-based data centres provide a volume of computation and storage resources to be efficiently processed on the edge of the network.
The FPGA based accelerations have significant potential for industrial applications enabling real-time data processing by combining locally generated data with additional enterprise data. Hardware acceleration, flexibility, and performance provided by FPGAs are an attractive solution for 5G networks for meeting the changing and increasing demands of the wireless markets. Currently, FPGAs provide optimised solutions for 5G technologies such as cloud-based radio access network (cRAN ), virtual radio access network (vRAN), Massive multiple-input, and multiple-output (MIMO), Backhaul, Fronthaul, Digital Radio Front-End [29].
With the rising number and connectivity of IIoT intelligent devices with the network, the model requires to process an increased volume of transactions. To deal with an increased processing volume, the multi-layer model requires compliance, where the proposed system dynamically provides the required flexibility and security. Below we describe the procedure adopted to introduce an IIoT device with an inbuilt design feature, which increases the level of security with the connecting components. We use the concept of hardware-intrinsic security, which develops security from the intrinsic properties of the silicon. The security primitive employed in this work is Physically-Unclonable Functions (PUF), which utilises intrinsic manufacturing differences in the electronic hardware for strengthening security.
We describe a protocol for secure connectivity in the network. The protocol introduces a series of steps that permit all new clients entering the IIoT system. To grant access to the IIoT system, a current client needs to introduce the new customer following a series of procedures (Algorithm-2), as described below. The model consists of K verification layers. Verification at each layer is assured using a PUF based security protocol. Every layer of the security model has a PUF. In the multilayered model K = 7, there exist a PUF for each existing user in each layer, which represents the fingerprint of every existing genuine member of the IIoT node. In this work, we use FPGAs for implementing the PUF. Packaged as a cloud manager, it generates a composite PUF and model (M A ), that represents the physical PUFs in the K layers of the model. Genuine clients receive an obfuscated bitstream consisting of a description of the mathematical PUF model through a secure communication channel. The genuine user introduced by an existing customer then downloads the bitstream and implements the PUF. We describe a PUF Based Authentication Protocol for verifying a client.
The cloud management plane handles the authorisation request initiated by the client U A . The authorisation is processed by a security check involving the PUF, where q challenge sets(CH p ) of length n is sent to U A together with a random number (rand). The received challenge bits are presented to the PUF model (M A ) at the client's end, and the corresponding responses are collected for every layer in the proposed model.As we are considering a K layer model, there reside K responses which represent a single challenge string for every layer. A pre-agreed shuffling scheme is used to scramble the entire responses (K.q) for all challenge sets. The client and the management plane accord with an encoding E(.) and decoding D(.) scheme sor secure transmission of PUF responses. The user U A then sends the scuffled responses encoded with E(.) to the cloud model for confirmation, and the clould management layer decodes with D(.) to convert the response back and direct the responses to the respective cloud layer.
The original challenge bits of q are then added to the actual PUFs existing in the layers of the IoT cloud, and the responses are collected. The mathematical PUF and physical PUF responses are examined for a high similarity to declare the user U A to be genuine.
The quality of the PUF is mainly determined by two parameters, which are reliability and security. A reliable PUF has a sufficiently long lifetime and provides a stable response under different external circumstances. Considering minuscule variations in XOR PUF responses, the presented algorithm provides a tolerance of 1% in PUF response comparison. The parameter security addresses the level of protection that a PUF offers against a wide range of attacks. We ensure security by employing a powerful Arbiter PUF with > 10 component XOR-PUFs stages to enhance security and to counter machine learning interventions [30].
Algorithm Design
To strengthen security, PUF based verification supplements the existing verification in the primary cloud multi-layer model. FPGAs residing in cloud layers contain PUFs describing all existing clients. Additionally, an existing genuine client comprises a mathematical model of PUF which is transferred from the cloud management unit. The mathematical model is implemented in the client FPGA following Dynamic Partial Reconfiguration. The mathematical model is constructed by the IIoT infrastructure using machine learning as it has access to internal parameters of constituent Arbitor PUF stages. To guaranty security, a strong PUF is employed in the system. A strong PUF promises to provide resistance to cloning by a malicious adversary adopting machine learning approaches [31]. This security is ensured by increasing the constituent Arbitor PUF stages to greater than 10.
Algorithm 1 describes the process to be followed to grant a request to access to an application. Various checks followed in the cloud model is represented in each step of the algorithm. The proposed model is flexible to incorporate additional security if required by provision to extend the security to additional layers. A database of previously used challenges is maintained by the cloud management unit to prevent repeated usage of same challenge bits and ensure security gained replay attacks.
The model contains physical PUF representing the client in each layer of the cloud model and mathematical PUF, describing the functionality of the physical PUF. An obfuscated bitstream is used to download the mathematical model of physical PUF using DPR. The mathematical model is constructed by the IIoT infrastructure using machine learning as it has access to internal parameters of constituent Arbitor PUF stages.
Robustness is provided by a strong PUF, which cannot be cloned by malicious third parties. The requirements for the PUF considered in this work include (A) a Strong PUFs with a vast number of possible challenges, (B) unpredictability of challenge responses which means the difficulty to extrapolate or predict the CRPS from the known CRPs.
The security is ensured by increasing the constituent Arbitor PUF stages to greater than 10. Algorithm 2 provides authentication for all user requests for entry to the IIoT system. Each step of the process delivers security checks required at each stage of the layer. The algorithm is briefly described below. A set of challenges are generated, which excludes prior sets, and these are used between client and cloud layers during their authentication interactions. Each authentication requires a collection of q challenge bits, each being length n. Both the mathematical model and the physical model are provided with the same to generate the responses. At each cloud layer, the produced responses of the mathematical PUF and the physical PUF are compared for verification. The high similarity of responses (>= 99%) is considered genuine, and the client is granted to proceed to the next layer of security checks. A database of previously used challenges bits is maintained to disregard any repeated usage, which would otherwise provide a chance for a replay attack. For a challenge set size of q and length n-bit used for each authentication attempt, provides (2 n /q) possible attempts of access on the application. The challenge bit-size is extensively large, requiring billions of years to be completely exhausted.
Algorithm 1
Multi-layered security model using PUF: New client Objective: The seven layer cloud model consisting of FPGA clouds verifies the identity of a new client FPGA (U B ) who is requesting access.
The cloud model provides application access for the genuine client (U B ).
Prerequisites:
(a) New client ClientU B , requesting application access is known to an existing client U A as a genuine applicant.
U i to management plane MP: request access to application A 3.
MP to U i : MP sends a random number rand and a set of challenges CH p consisting of q challenge bits each of length 'n'.
4.
U i calculates the following: MP to Cloud − C i : Set of challenges CH p and Rim p,j (c) Cloud − C j calculates the following if (Mem && Match), E = 1 ; proceed to next higher layer ii.
Algorithm 2 Multi-layered security model using PUF: Client is an existing User [2]
Objective:
1.
The seven layer cloud model consisting of FPGA clouds verifies the identity of a client FPGA (U A ) who is requesting access. 2.
The cloud model provides application access for the genuine client (U i ).
An n-bit input, 1-bit output XOR PUF P 1 is reconfigured in all layers of the Cloud − FPGA. There exists a PUF for every authenticated user. PUF P ij represents the identity of the user i in the cloud layer j.
2.
A combined mathematical model M i representing all the K PUFs in the cloud layers, resides with each user U i .
3.
Cloud − FPGA and user U i have agreed on a fixed encoding scheme E() and a decoding scheme D(.), such that for any binary string x, E(.) and D(.) are injective, X = E(x) and D(X) = x.
4.
Cloud − FPGA and user U i have agreed on a shuffling scheme Y = S(X, rand), and S (Y, rand) = X where rand is a random number. Contents of session packets:P CT 3.
Input
Contents of FW: DB FW 4.
Contents of TEN ANT META : DB META 5.
Contents of TEN ANT VAULT : DB VAULT 6.
Contents of IPS : DB IPS 7.
Contents of ANTI MALWARE : DB ANTI MAL Note: DB j represents content DB of layer j
Output:
A value in variable S to show that the application access is granted (S = 1) or denied (S = 0).
Algorithm 2 Cont.
Steps: U B requests U A , for an introduction to access application A 3.
U A to MP: request introduction of U B to cloud layers C j 4.
MP to U A : MP sends a random number rand and a set of challenges CH p consisting of q challenge bits each of length 'n'.
5.
U A calculates the following: ii. MP to Cloud − C j : Set of challenges CH p and RAm p,j iii. Cloud − C i calculates the following Additionally, our model is fully flexible with DPR capability, which permits new security primitives, including novel PUF architectures, to be replaced with the existing once. This further increases the life span of the model. In addition, considering the research in the area [7], the possibility of repeated challenges occurring is highly unlikely for comparable challenge set volumes.
The presence of new users being proposed by an pre-existing authenticated system users is made possible by the sharing of model responses. Once the existing client has been successfully authenticated, new PUFs are created by the FPGA at run-time. After DPR, the mathematical PUF model is downloaded to the new user FPGA using an obfuscated bitstream. Again, the security model makes the assumption that the security process of maintaining system integrity is followed by a secure DPR process. The new cloud user will then use an algorithm of 1 to obtain the application layer.
Device Parameter Analysis of Client FPGAs
A genuine client, which turns to be potentially malicious by modifying the client device architecture to attack the IIoT application, is confirmed by client device parameter verification using neural networks. The client analysis process strengthens the security of the IIoT application against potentially malicious clients. A legitimate cloud service requires each IIoT client FPGA to satisfy specific requirements. Firstly, the FPGAs require a Dynamic Partial Reconfiguration (DPR) ability, which facilitates the set up of PUF primitives in the FPGA fabric. DPR allows dynamic reconfiguration of hardware units in selected regions on the FPGA framework.
The FPGA floor plan requires dynamic partitions that promote analysis of the fabric by the cloud service. Although DPR offers tremendous flexibility for IIoT applications, security needs to be ensured to avoid DPR based Trojan insertions as proven in [7]. An additional security measure adopted in the proposed scheme is to analyse the device parameters of the client FPGA. DPR performs this by sending an obfuscated and downloadable bitstream which collects the client device parameters. The device parameters are tested at the malware detection layer to assure that variations in the attributes of the FPGA clients. A client FPGA signature is a mechanism for identifying malicious adversaries. An initial DPR process implements a design that collects the device parameters, and a second DPR process erases the downloaded bitstreams. These device parameters are directly collected by the cloud management unit to evade manipulations from the client.
The proposed architecture for device parameter verification for Trojan analysis is shown in Figure 2. The design consists of two layers of neurons, where the input layer neurons (LAYER-1), produces a spike train with a frequency proportional to the device parameter. The number of input layer neurons represents to the number of device attributes that are analysed. The second layer neuron responds based on the spike rate received from its presynaptic neurons. In Figure 2, K input layer neurons are shown. There exist eight parallel connections between two tiers of pre-post synaptic neurons. This is to mimic the parallel connection between neurons in the brain-inspired systems, which aids in building post-synaptic potential and enhances fault-tolerance. The pattern identification procedure regulates the spike rate between layer 1 and 2. This is depicted using the Gaussian distribution shown between the neurons.
The distribution represents a variable transmission probability depending on the particular pattern. The nomenclature PR Kr represents transmission probability between presynaptic neuron K and the post-synaptic neuron in the r th interconnection between the pair. The output layer neuron provides a stable enable signal for the client FPGA if the received device parameters are within scope. This principle of using a spiking neural network is derived from [32,33], and hardware realization of the approach is described in [34].
However, in [32][33][34], the authors derive bio-inspired principles for homeostasis targeting robotic applications, where this paper emphasis the use of similar methodologies for hardware Trojan detection. Bio-inspired computing develops computational models using various models of biology. Brain-inspired computing is a subset of bio-inspired computing, which is mainly based on the mechanism of the brain. Brain-inspired models help to narrow the hardware Trojan detection process based on the mechanism of the brain, which produces a compact computational model rather than the complex biological process involved in the former. A pattern identification protocol verifies the pattern where spike to the postsynaptic neuron (LAYER-2) is regulated using a transmission regulation following a Gaussian relation. A high transmission probability (PR) is provided by the transmission regulation unit provided the device parameters are in the acceptable range.
A lower PR indicates a more significant deviation from the device parameter standards and fewer input spikes arriving at LAYER-2, which provides a stable firing rate by following Spike-timing-dependent plasticity (STDP) [35] and Bienenstock-Cooper-Munro (BCM) learning rules [36] for spike rates in the permissible range. Otherwise, the postsynaptic spikes drop to zero. Multiple connections are laid between each pre-post synaptic neurons to increase the security of the detection unit from any intruder from attacking the Trojan analysis unit.
Experimental Results
In this work, we implemented an XOR PUF construct consisting of 10 parallel Arbiter PUFs with 64 switch blocks on Xilinx Nexys 4 DDR board with Artix-7 FPGA (device xc7a100t, package csg324, speed − 1) [37]. Verilog Hardware Description Language (HDL) is used for design purposes, and Electronic design automation (EDA) Xilinx ISE 14.7 design suite [38]. For further analysis, we used the Xilinx power analysis tool and Chipscop-Pro [39].
The implementation cost of the PUF design is shown in Table 1. The design used only a fraction (8%) of the FPGA slice of the device (Artix-7 FPGA), which is negligible for large FPGAs stationed for high-end applications. Table 1, reports the size of bitstream required to reconfigurable PUF, which is relatively small. A difference based partial reconfiguration methodology is used for PUF reconfiguration over the network [40]. Additionally, new FPGA tools (Partial Reconfiguration flow in Vivado Design Suite) provided specific flow implementations for dynamic partial reconfiguration. In this work, an 8 bit configuration was followed for the Internal Configuration Access Port (ICAP) and a clock rate of 100MHz. The above settings enabled DPR to be completed in microseconds, which proved to be a real-fit for cloud-based applications.
The proposed device parameter verification circuitry is shown in Figure 2 using a Xilinx Nexys 4 DDR board, Artix-7 FPGA (device xc7a100t, package csg324, speed -1). Neural activities were monitored using an Integrated Logic Analyzer (ILA). The Xilinx Power Estimation and Analysis Tools and Timing Closure and Design Analysis are used additionally. We report hardware and power footprints in Table 2. The Gaussian function representing transmission probability between LAYER-1 and LAYER-2 was implemented using linear approximations to minimize the hardware overhead. The neurons used were deployed using Leaky-Integrate and Fire(LIF) neuron models [41], as they are computationally efficient for hardware implementations. Hardware utilisation and synapses increased, which operated based on a BCM-STDP rule. Alternate synaptic rules such as Spike Driven Synaptic Plasticity (SDSP) [42]-based synaptic rule will be the focus of future investigations. These have the potential to lower synaptic weight from 32 − bits to perform 1 − bit operations. Assessing the practicality of the proposed system in real-world scenarios [43], the overall performance of the system was explored through a Python/SimPy [44] simulation model.
The model developed comprised of five cascaded servers as per the scheme described earlier in this paper, and each server was configured to have a log-normally distributed processing delay with a mean delay of 50 ms (standard deviation = 10 ms). The simulation was run for 1000 s of simulation time for each separate load on the system measured as the number of authentication requests submitted per second (RPS). The authentication requests were modelled as having a Poisson distribution. End-to-end delay data was collected for each authentication transaction in a run, and the following figures present histograms of the delays. Histograms were chosen as for a user of the system the mean or median delay only presents a very limited view of the actual performance that will be delivered to an individual user. The adoption of histograms to show the distribution of delays provides users with a far better indication of the range of performance that they will experience across a large number of authentication requests.
From Figures 3-8 as presented here, some conclusions about the performance of the system can be drawn. The definition of acceptable performance in terms of authentication delay is, of course, subjective. For the sake of this discussion we will take a somewhat arbitrary position that if the overwhelming majority of requests are serviced in less than 1 s then performance is deemed acceptable.
At low loading (10 RPS), all requests are serviced within our 1 sec limit, while at a higher rates of 15 RPS and 16 RPS, progressively more requests take longer than our 1 s target, but overall performance could still be deemed acceptable.
However, at 17 RPS a very significant fraction of requests take longer than 1 s to service, with the some having to wait over 3 s. At 19 RPS, the system effectively fails, and cannot cope with the volume of traffic. This is consistent with expectations as a cascade of five stages with a fixed service time of 50 ms each per request should cope with 20 uniformly distributed RPS with an end-to-end delay of 1.25 s. The breakdown in performance in the simulated system is due to the randomness in timing of request generation and the randomness in processing time at each node.
Conclusions
This article we extends previous work on implementing multiple layers in cloud security. We add hardware security primitives and trojan detection units in combination, for a robust, multi-layer security architecture. The proposed work describes a PUF-based system with a brain-inspired device parameter analysis unit that demonstrates the ability for attack prevention from both external and internal attacks of interest primarily in the IIoT context. A vast array of surreptitious activity may be isolated to withstand a number of attack vectors, by considering security at every cloud-abstraction layers. Multiple layers of packet inspection secure the IIoT application against other opponents who may concurrently implement many tools and approaches to compromise the system security.
The security is primarily maintained by PUF-based security protocols that rely on unique device fingerprints which are hard to be compromised. The inherent flexibility and scalability provided by DPR capability with plug-and-play of new security primitives is a vital advantage of the proposed approach providing a promising direction for 5G enabled IIoT. The DPR facility removes constraints of security functions, which later is replaceable with the more secure ones.
Additionally, the hardware device parameter inspection avoids further attacks on the IIoT application using parameter variations. The continued expansion and accessibility of IIoT hardware requires flexible hardware programmability as provided by novel FPGAs. In addition, embedding analytics functions into industrial organisations requires high computational capabilities and flexible architectures provided by FPGAs. To provide satisfactory operations for the communication demand, IIoT devices with high-speed 5G networking technology is a requirement.
At the same time, appreciating the flexibility, security cannot be compromised in the infrastructure. In order to guarantee security within the model, we proposed to use primitive hardware security, for example, PUFs. The monitoring of client IIoT side-channel parameters also enhances security. In the IIoT cloud era, all software and hardware innovations have to operate together to ensure better security; failing to function might be catastrophic. | 8,122.8 | 2020-03-31T00:00:00.000 | [
"Computer Science"
] |
Monte Carlo determination of radiation‐induced cancer risks for prostate patients undergoing intensity‐modulated radiation therapy
The application of intensity‐modulated radiation therapy (IMRT) has enabled the delivery of high doses to the target volume while sparing the surrounding normal tissues. The drawbacks of intensity modulation, as implemented using a computer‐controlled multileaf collimator (MLC), are the larger number of monitor units (MUs) and longer beam‐on time as compared with conventional radiotherapy. Additionally, IMRT uses more beam directions—typically 5 – 9 for prostate treatment—to achieve highly conformal dose and normal‐tissue sparing. In the present work, we study radiation‐induced cancer risks attributable to IMRT delivery using MLC for prostate patients. Whole‐body computed tomography scans were used in our study to calculate (according to report no. 116 from the National Council on Radiation Protection and Measurements) the effective dose equivalent received by individual organs. We used EGS4 and MCSIM to compute the dose for IMRT and three‐dimensional conformal radiotherapy. The effects of collimator rotation, distance from the treatment field, and scatter and leakage contribution to the whole‐body dose were investigated. We calculated the whole‐body dose equivalent to estimate the increase in the risk of secondary malignancies. Our results showed an overall doubling in the risk of secondary malignancies from the application of IMRT as compared with conventional radiotherapy. This increase in the risk of secondary malignancies is not necessarily related to a relative increase in MUs. The whole‐body dose equivalent was also affected by collimator rotation, field size, and the energy of the photon beam. Smaller field sizes of low‐energy photon beams (that is, 6 MV) with the MLC axis along the lateral axis of the patient resulted in the lowest whole‐body dose. Our results can be used to evaluate the risk of secondary malignancies for prostate IMRT patients. PACS: 87.53.wz, 87.53.‐j
I. INTRODUCTION
Radiation therapy has long been recognized as an effective treatment for the management of clinically localized prostate cancer. Recently, improvements in treatment outcomes have been clearly demonstrated through dose escalation studies for prostate radiotherapy. The state-ofthe-art techniques that facilitate dose escalation are three-dimensional conformal radiotherapy (3D-CRT) and, more recently, intensity-modulated radiotherapy (IMRT). Advanced radiotherapy treatments with IMRT can deliver dose distributions that are more conformal to the tumor targets and that simultaneously minimize radiation damage to the surrounding normal tissues. (1)(2)(3)(4)(5)(6)(7) Overall, IMRT provides increased local tumor control and lower toxicities to nearby critical organs. However, the process of intensity modulation requires more monitor units (MUs) than does conventional radiotherapy or 3D-CRT. The increase in MUs inevitably leads to an increase in the leakage dose and results in a higher dose to the rest of the patient's body. Depending on the design of the accelerator's multileaf collimator (MLC) and the treatment optimization system used, a usual prostate IMRT treatment may consist of 50 -100 MLC field segments that may take 3 -10 times the MUs of a comparable 3D-CRT or conventional prostate treatment. (8,9) Compared with conventional treatments, IMRT uses more beam directions (typically 5 -9) to achieve optimal dose conformity to the target volume while reducing the dose to the surrounding critical structures. Altogether, IMRT treatment may substantially increase the normal-tissue volume receiving low-dose radiation over the dose seen in conventional and 3D-CRT treatment.
In general, radiotherapy has been shown to be associated with a very small, but statistically significant, increase in the risk of secondary malignancies. (10) This increase is more profound in long-term cancer survivors. It has been reported that secondary lung malignancies increased by 4% -6% after prostate radiotherapy as compared with prostate surgery, and that the increase rose to as much as 15% for long-term survivors, (10) although other factors unrelated to radiation (such as smoking) might have contributed. In another study for radiation therapy of the cervix, (11) the risk of secondary malignancies in a wide range of organs was investigated, and higher doses (in the order of several grays) were reported to increase the risk of stomach cancer and leukemia. Movsas et al. (12) observed that 5.7% of patients treated with radiation developed secondary tumors. Dorr and Herrmann (13) showed that most secondary tumors occur in the penumbra region, where the dose is ≤6 Gy, within 11 -16 years after the initial treatment.
Although the above-mentioned reports are for conventional and 3D-CRT treatments, Followill et al. (8) estimated that the percentage likelihood of fatal secondary cancers attributable to a prescribed dose of 70 Gy can be as high as 4.5% for IMRT with 18 MV photon beams and up to 8.4% for 25 MV photon beams.
The dose-response relationship is uncertain in the context of radiotherapy in which a small volume receives a high dose, sometimes 70 Gy or more, and a larger volume receives a considerably lower dose. The low doses can be a result of exposure to only some of the treatment fields or exposure to only leakage radiation from the accelerator's head. Three probable scenarios, all supported by published data, attempt to explain the relationship between the dose received and the risk of secondary malignancies: • First, from animal studies, (14) the risk of second cancers is expected to fall off at higher doses because of cell killing(dead cells cannot give rise to malignancies). • Second, based on data from human studies, the risk for development of solid tumors has been observed to level off at 4 -8 Gy, but not to decline thereafter. (15) • Third, women who have been treated for cervical cancer have an increased risk of developing leukemia. In this situation, the risk increases with the dose up to 4 Gy and then decreases at higher doses. (16)(17)(18) Although it is generally accepted that IMRT improves local tumor control and reduces toxicity to nearby critical structures, questions have been raised concerning whether the widespread use of IMRT could lead to an increase in radiation-induced carcinomas because a higher volume of normal tissue is being exposed to low-dose radiation. (19,20) Although clinical data are still sparse on this subject, we feel that this topic is very important, especially because many community hospitals are implementing IMRT for prostate treatment.
The goal of the present study was to evaluate the overall benefit-risk ratio of IMRT and existing treatment techniques, to calculate the dose received in nearby critical organs and in organs at greater distances, and to compute the whole-body dose equivalent and the risk of radiation-induced malignancies resulting from low doses to the rest of the patient's body. We therefore studied the patient scatter dose from the target volume to the organs at risk, the effect of leakage from the linear accelerator (LINAC) head, and the effect of the energy to the whole-body dose equivalent. We then calculated the relative increase in the risk of secondary cancers attributable to the application of IMRT by taking into account all of the above effects. It is understood that the physical and biologic models and parameters-for example, isodose, dose-volume histogram (DVH), tumor control probability (TCP), normal-tissue complication probability (NTCP), integral dose, whole-body dose equivalent, and so on-currently used for plan evaluation and risk analysis are approximate and that their absolute values can be very uncertain. For example, it is difficult to determine the wholebody dose equivalent for prostate radiotherapy patients because, as a group, these patients are different from a population averaged over a wide range of age, sex, and exposure levels, as considered by the International Commission on Radiological Protection (ICRP) (21) and the National Council on Radiation Protection and Measurements (NCRP) (22) . However, because the same parameters and the same patients are being used to evaluate different treatment techniques, only relative changes in the parameters are of significance, and those relative changes are sufficient for this evaluation. Our results are not intended to be used for the prediction of absolute survival and toxicities, but rather to provide useful benefit-risk information for prostate IMRT planning and treatment design so as to promote best utilization of this advanced technology. Our results will also facilitate verification and modification of the theories and models used for tumor control, toxicity, and risk analysis once such clinical data become readily available.
II. MATERIALS AND METHODS
A fundamental requirement for the benefit-risk calculation is precise knowledge of the dose distribution in the patient. Because this study is based on retrospective patient data, it was not possible to perform measurements to obtain distributions. On the other hand, it is difficult, if not impossible, to accurately measure dose in various organs for every patient whether in retrospective or prospective studies. The Monte Carlo method provides a perfect solution to this problem. Monte Carlo simulations not only can predict dose distributions in heterogeneous patient anatomy, but also can accurately take into account LINAC leakage and patient scatter dose, which are not available in the patient treatment plans and cannot be calculated using any existing commercial treatment planning systems. Furthermore, re-calculation of the patient dose distribution using different treatment techniques also allows for plan comparison and benefit-risk analysis on an equal basis.
A. LINAC simulations and source beam models
In the present work, we used the EGS4/BEAM (23) and EGS4/MCSIM (24) Monte Carlo codes for the LINAC simulations and patient dose calculations. The EGS4/BEAM system was developed for the simulation of radiotherapy beams from clinical accelerators. (25) During an accelerator simulation, a phase-space file is generated that stores information about all the particles exiting the accelerator head, including particle energy, position, direction, and a tag to record particle history (where it has been, and where it was created or has interacted). The geometry and the materials used in the simulation were based on the specifications of the LINAC treatment head as provided by the manufacturer. The cutoff energies used for the simulations were ECUT = 700 keV for electrons and PCUT = 10 keV for photons. The energy thresholds for δ ray production and for bremsstrahlung production were 700 keV and 10 keV respectively. The maximum fractional energy loss per electron step was set to 0.01, and the default parameters were chosen for the parameter reduced electron-step transport algorithm. (26) Excellent agreement (better than 2%) has been achieved between measurements and the Monte Carlo dose distributions calculated using simulated phase-space data. (23,27) Because each phase-space file requires hundreds of megabytes of disk space, it is necessary to characterize the phase-space data for widespread clinical applications when hundreds of beam settings may be required. Beam characterization studies show that using simplified beam models can dramatically reduce the storage requirement and increase the efficiency of the accelerator simulation. (28) Source models (SMs) for the Siemens Primus LINACs (Siemens Medical Solutions, Concord, CA) with nominal photon energies of 6, 10, and 18 MV have been constructed from respective phase space files. Our multiple SM consists of an extended annular source for the target, a planar ring source for the primary collimator, and a planar annular source for the flattening filter. (28,29) The geometry and spatial positions of each source in the SM follow the manufacturers' specifications and are adjusted to yield the best match in the dose distributions between the original phase space and the reconstructed phase space from our multiple SM. The SMs were used in EGS4/MCSIM, and the dose in water was calculated and compared with ion chamber measurements in a water phantom. The agreement was within 1%. (30) To calculate more accurately the dose to organs outside the treatment field and at distances further away from the lateral extent of the LINAC's largest field size, the source dimensions were extended to simulate the leakage from the LINAC head. The accuracy of the extended SMs was tested with ion chamber (with a buildup cap) measurements in air.
B. Dose calculations
The EGS4/MCSIM code system can accurately calculate patient dose distributions by simulating the accelerator head leakage, MLC leaf leakage and scatter, and the effect of beam modifiers such as collimator jaws, wedges, and blocks. It also runs 10 -30 times faster than other widely available general-purpose Monte Carlo codes. (24) The EGS4/MCSIM code was used in this work because of its extended functionality. It is capable of calculating the dose to the patient given the intensity map or the Radiation Therapy Plan (RTP) file from the Corvus treatment system (Nomos Corp, Sewickley, PA), which includes patient setup parameters, and beam and leaf-sequence information. The RTP files generated by our inverse treatment planning system for a prostate case were used as input for EGS4/MCSIM. For comparison purposes, the same RTP files were simulated using all three SMs. To investigate the effect of collimator angle, the RTP files were edited so that the collimator was rotated by 90 degrees. The comparison of the two techniques used RTP files for a conventional four-beam box technique. Based on the dose per incident particle, as derived from calculations using calibration conditions, EGS4/MCSIM calculates the absolute dose to the patient. Moreover, if contours exist in the patient geometry [from computed tomography (CT) images], we are able to calculate the dose to each organ and plot DVHs for all contoured organs.
For this work, we were interested not only in the dose to the target, but also in the dose outside the target volume, to proximal and distal organs. A phantom created from a patient's whole-body CT slices ( Fig. 1) was used. The phantom describes the patient anatomy, is in rectilinear coordinate system, and is composed of voxels of dimensions 4×4×4 mm. The density of each voxel in the phantom is derived from the respective CT information. The contours for all major organs, mainly the organs indicated by NCRP report no. 116, were outlined so that they could be used for our calculations.
C. Phantom scatter and head leakage considerations
The dose to the risk organs outside the treatment volume has two components. The first component relates to the scatter radiation from the patient, which is more or less the same for the same target volume and prescription dose. Theoretically, the dose decreases as the target dose distribution becomes more conformal. The second component relates to leakage radiation from the accelerator. Clinical LINACs are designed to have less than 0.1% head leakage (defined as the ratio of the detector readings at any point outside the largest treatment field to that at the center of a 10×10-cm open field, with a 1-m source-to-detector distance).
We used a water phantom to conduct a Monte Carlo study investigating the effect of the scatter and leakage radiation for field sizes ranging from 5×5 cm to 20×20 cm and for all energies under investigation. First, the calculations were performed in the water phantom, and the dose distributions were computed. To separate the phantom scatter from the leakage radiation, we replaced the mass density of a slab in our water phantom to 2000 g/cm 3 to ensure that the phantom scatter would not penetrate to the other side of the slab, and we then repeated the calculations as before. Moreover, we investigated the effect of collimator rotation on the wholebody effective dose. A similar approach was taken to investigate the effects of the scatter and leakage radiation in patient anatomy.
D. Risk of secondary malignancies
From the dose distributions calculated in the patient, we computed the dose to the outlined organs. For each organ, MCSIM is able to calculate the minimum, maximum, and average doses, together with the volume and the average density. Having all of this data available, the average dose for each organ can then be used to calculate the equivalent dose. The risk of radiation-induced malignancies is estimated from the difference in the whole-body effective dose between IMRT and conventional radiotherapy using the recommendations from NCRP report no. 116 for weighting factors for each organ (Table 1). "Re-weighting" of risk estimates for the prostate patients are applied in the present case, because the population is generally older. It has been reported that the risk estimate for radiotherapy patients should be about 2% per sievert (31) or 1% per sievert for elderly (prostate) patients. Also, as mentioned earlier, because the dose-response relationship is not well established for organs receiving high doses of radiation, we calculated the whole-body dose equivalent-and hence the risk of secondary malignancies-using three methods according to the doses received by the proximal organs (bladder and rectum). In method 1, we calculated the whole-body dose equivalent without any dose restrictions on the nearby organs. The actual dose received was used in the calculations. In method 2, we calculated the whole-body dose equivalent using a threshold of 4 Gy for the nearby organs, assuming that risk of solid tumors increases up to 4 Gy, but then levels off and does not decline. And in method 3, we did not take into account the dose delivered to the proximal organs, assuming that the risk of secondary cancers falls off at higher doses because of cell killing (dead cells cannot give rise to malignancies).
A. LINAC verification and head leakage
Monte Carlo simulations of the Siemens Primus 6-MV, 10-MV, and 18-MV beams were verified by comparing the MCSIM dose calculations with measurements for various fields. The calculated percent depth dose curves and dose profiles at various depths for several fields ranging from 5×5 cm to 40×40 cm were compared against measurements. The agreement between measured and calculated values was within 1% or 0.1 cm [ Fig. 2(A)].
The verification of our SMs was performed with measurements in air using an ion chamber with a buildup cap. Ion chamber measurements were performed for both axes and extended to 80 cm away from the central axis on the plane perpendicular to the central axis. Measurements in water at 10 cm depth were also made at the same location and used for the SM verification. These measurements were performed for all energies studied [ Figure 2(B)].
B. Studies in water phantom
The dose distributions in water were calculated for fields ranging from 5×5 cm to 20×20 cm. The MUs were kept constant (100 MUs) so as to study the effect of leakage radiation attributable to energy and field size (Fig. 3).
FIG. 3. Dose profiles for the 10-MV photon beam at various distances from the defined field edge of the multileaf collimator. Field size ranges from 5×5 cm to 20×20 cm.
C. Field size dependence
The doses at distances away from the central axis depend on field size. For the same distance from the MLC-defined field edge, the dose increases with field size. This increase is in the order of 70% from 5×5 cm to 20×20 cm. For the same fields, the respective increase along the axis of the jaw movement is much less-namely, about 20%. Independent of the field size, the dose drops off with distance from the central axis.
As distance from the field edge increases, it is clear that the scatter contribution of the larger fields is similar to those of the smaller ones. For distances closer to the beam edge, the doses for the larger fields are higher. The same characteristics are observed at the sides, where the fields are defined by the jaws. In these cases, the doses are not as high, because transmission through the jaws and the MLC reduces the dose received.
D. Leakage separation
In our Monte Carlo simulations, the LINAC head leakage was calculated by replacing the density of a slab of the phantom 5 cm from the field edge with a relative density of 2000 g/cm 3 . Such a slab will stop all the electrons and photons generated in the phantom for the given fields from reaching the other part of the phantom. Therefore, only particles outside the field were scored at distances after the high density slab. These particles will deliver dose that is representative of LINAC head leakage. This leakage was found to be constant, independent of the field size.
The leakage dose attributable to jaws and MLC transmission remains the same for all energies. For points that are far from the central axis and "protected" by the LINAC's head shielding, the dose resulting from leakage is lower and within the manufacturer's specifications.
E. Scatter contribution
Because we are able to calculate the dose attributable to leakage radiation, the scatter component of the dose at locations outside the field can also be derived. The scatter dose decreases with increasing distance from the central axis, and at the same time, it increases with increasing field size. This field size dependence is attributed to the fact that larger fields generate more scatter radiation that can be carried outside the field. From our calculations, we see that the scatter component of the dose is higher for lower energies than for higher energies. This phenomenon can be attributed to the more forward direction of penetration as the energy increases. The scatter component for the 6-MV beam is generally higher than that for the 10-MV and 18-MV beams. When the distance from the central axis becomes large, the scatter contribution for all energies becomes indistinguishable and is comparable to the leakage component.
F. IMRT calculations in a water phantom
To study the effect of IMRT treatments in homogeneous geometry, we first used a parallelepiped phantom of dimensions 20×20×50 cm. We placed the isocenter in the phantom, together with its central axis, at a depth of 10 cm. Two treatments were simulated, one representing a four-beam box technique using four 7×7-cm fields at 0 degrees, 90 degrees, 180 degrees, and 270 degrees, and another representing an IMRT with 7 beams. The dose to the isocenter was 180 cGy for both cases, and different numbers of MUs were used in each case to deliver the prescribed dose.
For this study, we also calculated the dose distributions for the same fields, but rotated the collimator by 90 degrees for each field (both for the 7×7 cm fields and for the IMRT fields), so that we could investigate the impact of MLC leakage on the dose. From Fig. 4, we notice that the IMRT treatment increases the dose at distances away from the target and outside the treatment volume. In Fig. 4, the dose profiles from the isocenter are shown at that plane. The collimator setting does not contribute significantly to the whole-body dose equivalent in the case of the four-beam technique, but an increase in the whole-body dose equivalent occurs for the IMRT case with 90 degrees of collimator rotation. This increase could be explained by the increased number of MUs for IMRT, leading to more leakage. From this study, the relative FIG. 4. Leakage-only dose attributable to 6-MV fields ranging from 5×5 cm to 20×20 cm at 10 cm from the field edge defined by the jaws. increase in the dose attributable to IMRT can be seen to be explained by the increase in the modulation scaling factor (MSF). (19) The tripling of values from the four-beam box to IMRT is equal to the MSF for the present study, which was calculated to be 3.5.
G. Equivalent effective dose and prediction of secondary malignancies
The prescription used for all simulations was 72 Gy to the planning target volume. The wholebody dose equivalent was calculated based on that prescription and on Table 1.
From the calculated dose distributions, it is evident that, as compared with four-beam box treatment, application of IMRT increases the whole-body dose equivalent, and hence the risk of secondary malignancies ( Table 2). The risk also increases with increasing energy. Given the assumptions made for the three models described in Materials and Methods, the computed risk seems to increase significantly when the dose for the bladder is included in the calculations (method 1). On the other hand, exclusion of the proximal organs from the calculations of the risk (method 3) reduces the risk of secondary malignancies. We believe that the best case is the use of a 4-Gy cutoff for the proximal organs (method 2), because at dose levels greater than 4 Gy, radiation may have more cell-killing effect than cancer-induction effect.
H. Collimator rotation
The collimator rotation affects the whole-body dose equivalent and hence the percentage likelihood for secondary cancers, especially because the transmission for the MLC leaves is higher than that for the jaws. If the collimator is rotated in such way that the axis of MLC movement is parallel to the patient inferior-superior axis, then the whole-body dose is higher. This phenomenon is more profound for the IMRT cases than for the FBB ones, because the leakage through the MLC is higher because of the higher number of MUs needed for IMRT (Fig. 5, Table 3).
I. Modulation scaling factor
Comparing the MUs required for the IMRT and FBB calculations, we computed the MU factor to be 3.4. As can be seen from the Monte Carlo simulations, the overall increase in whole-body dose equivalent is not exactly proportional to the MSF (Fig. 6). The total increase is less than tripled when the dose to the bladder is not included in the calculations or is limited to 4 Gy. The increase observed in the whole-body dose equivalent is approximately 0.30 -0.40 cGy for all energies simulated. This difference can be attributed to leakage radiation through the MLC.
IV. DISCUSSION
Estimation of the risk of secondary malignancies attributable to radiation therapy treatments is a challenging task that becomes more complex when applied in IMRT. Prostate is the most common treatment site for IMRT, and it is also usually the first site chosen for IMRT implementation in a radiation therapy department.
Our results have shown that the excess in MUs required to deliver IMRT should be considered, because that increase is associated with an increase in the risk of secondary cancers. The excessive MUs will result in increased leakage radiation that will be received by the patient's body. The increase in MUs will also depend on the plan complexity and the level of intensity modulation that is chosen to deliver the prescribed dose. The higher the modulation, the higher the leakage from the LINAC head.
The whole-body dose is also related to the size of the fields used. For prostate treatments, field sizes are usually 7×7 -10×10 cm. If the complexity of the plan is increased because proximal and distal seminal vesicles are included the overall field size, the number of MUs will also increase. That increase will also lead to an increased risk of secondary malignancies.
The choice of energy also plays an important role, because lower energies have been shown to result in lower risk for second malignancies. For departments with a choice of energies, 10-MV photon beams are preferable, if available. The reasons are that 6 MV has insufficient penetrating power, especially for larger patients, and that 18-MV photon beams will introduce neutrons. In our study, the neutron dose was not considered in our calculations for the equivalent effective dose, because the nominal 18-MV photons are generated by a 14 MeV electron beam on a Siemens Primus LINAC, and we previously measured the neutron dose and found it to be insignificant. (19,32) Also, in our department, prostate IMRT is generally treated with 10-MV beams; only very rarely-for larger and older (>65 years) patients-is the 18-MV photon energy used. For departments with Varian 18-MV beams, the neutron dose should be estimated and included in the calculations of whole-body dose equivalent. Based on previously published data, (32) we can estimate the whole-body dose equivalent resulting from neutrons to be approximately 2 -5 mSv if a higher energy were to be used with the same number of MUs. This neutron dose should have been included in the calculation of the risk of second cancers, and it would have increased the risk by approximately 4% -10%, based on the treatment modality and on the MSF.
The choice of collimator rotation was investigated, because the dose attributable to transmission through the MLC increases the whole-body dose. Between two equivalent plans, the plan with the collimator rotated in such way that the MLC leaves are along the axial plane of the patient should be chosen so as to minimize the leakage dose to the patient. For cases in which optimal plans are not achieved with collimator orientation of this sort, jaws should be used to reduce the leakage from the MLC. This approach should be considered when radiation therapy departments are beginning to implement an IMRT program.
Moreover, our study uses 2% per gray to determine risk. Clearly, the recommended 5% per gray recommended by the International Commission on Radiation Units and Measurements (ICRU) report (21) should not be applied, because it refers to a larger population spanning all ages. For the treatment of prostate cancer, the population being considered consists of men of an older age. A more conservative risk should be applied. The choice of such risk should be determined from follow-up of men undergoing prostate IMRT. A lower percentage may perhaps be more appropriate to allow for better determination of risks of second malignancies. However, because the follow-up data from IMRT treatments are not yet available, we feel that the 2% per gray risk used here is reasonable. Furthermore, we aimed here to show the relative increase in risk attributable to IMRT. The values used to estimate risk are mainly for study purposes, to provide readers with information about what should be expected from prostate IMRT treatments.
V. CONCLUSIONS
Application of the IMRT technique to prostate patients has been proven beneficial with regard to reducing normal-tissue complications while conformal dose is delivered to the target. The reduction in normal-tissue complications allows for dose escalation to 76 Gy or higher. The drawback of IMRT is that the number of MUs required to deliver the prescribed dose is increased with respect to conventional treatments or 3D-CRT, leading to higher doses to the patient's body as a result of leakage radiation from the LINAC head.
The relationship of field size to whole-body dose is strong because of the fact that, for larger field sizes, the volume irradiated is larger, and so is the scatter contribution from the treated volume to organs at risk. In the case of IMRT, the irradiated volume, and not the individual segments, should be considered. The irradiated volume therefore remains the same as with conventional treatment techniques, and hence the scatter contribution is not affected.
The choice of energy has a strong effect on the equivalent effective dose. Many centers treat IMRT prostate patients with 10-MV photon beams when available, but higher or lower energies have also been used. In the case of higher energies, the neutron dose should be included in the calculation of the whole-body dose, because it can contribute an increase of 4% -10% to the risk for secondary cancer. If lower energies are used, more MUs are required because of the lesser penetrating power of these beams; hence, radiation leakage is increased. In such cases, the LINAC head shielding and MLC transmission should be evaluated. Collimator rotations should be considered so as to reduce the dose to the patient.
The estimated increase in the risk of developing secondary malignancies is not proportional to the increase in MUs. For a typical IMRT prostate treatment, the MUs increase by a factor of 2 -4, but the overall increase in the risk is about double.
In the present study, we used a 2% per gray risk for the analysis of second malignancies. Although this risk estimate is already conservative as compared with the 5% per gray proposed | 7,294 | 2007-09-17T00:00:00.000 | [
"Medicine",
"Physics"
] |
Mitigating Herding in Hierarchical Crowdsourcing Networks
Hierarchical crowdsourcing networks (HCNs) provide a useful mechanism for social mobilization. However, spontaneous evolution of the complex resource allocation dynamics can lead to undesirable herding behaviours in which a small group of reputable workers are overloaded while leaving other workers idle. Existing herding control mechanisms designed for typical crowdsourcing systems are not effective in HCNs. In order to bridge this gap, we investigate the herding dynamics in HCNs and propose a Lyapunov optimization based decision support approach - the Reputation-aware Task Sub-delegation approach with dynamic worker effort Pricing (RTS-P) - with objective functions aiming to achieve superlinear time-averaged collective productivity in an HCN. By considering the workers’ current reputation, workload, eagerness to work, and trust relationships, RTS-P provides a systematic approach to mitigate herding by helping workers make joint decisions on task sub-delegation, task acceptance, and effort pricing in a distributed manner. It is an individual-level decision support approach which results in the emergence of productive and robust collective patterns in HCNs. High resolution simulations demonstrate that RTS-P mitigates herding more effectively than state-of-the-art approaches.
The organization of social and economic activities to efficiently coordinate participants' effort is an important topic of economic theory. Thanks to the Internet, social media and online social networks, social mobilization through crowdsourcing has achieved unprecedented success. Crowdsourcing refers to the process whereby clients (a.k.a. crowdsourcers) obtain needed services by soliciting contributions from a large group of people (a.k.a. workers) 1 . Crowdsourcing communities based around social networks tend to have hierarchical structures 2,3 . These hierarchical crowdsourcing networks (HCNs) have been used to mobilize the masses in many significant real-world applications including political rallies 4 , scientific research 5 , mapping out natural environment features 6,7 , and large-scale search-and-rescue missions 8 .
In essence, crowdsourcing systems can be treated as resource allocation ecosystems containing a large number of interacting workers (i.e., resources) and crowdsourcers. Crowdsourcers are typically self-interested; their primary intention is to maximize their own utilities. This will usually lead them to only select workers with high perceived reputation, leading to the emergence of herding 9 . Herding refers to the situation in which a large number of task requests concentrate on a small group of reputable workers, causing them to be overloaded while leaving other workers idle. It can lead to cascading failures and eventually result in catastrophic system breakdown 10 . The risk of herding is especially pronounced in HCNs in which crowdsourcers lack global knowledge and workers have limited resources to be tapped into 11 .
Mitigating herding in HCNs is important to ensure sustainable operation of these problem solving ecosystems 12 . In general, workers in an HCN make three important decisions in a distributed manner: 1) how much new workload to accept, 2) how much existing workload to sub-delegate to others in the HCN (and to whom), and 3) how to price their services. The collective effect of these joint decisions made by all HCN participants determines 1. Worker heterogeneity: Workers have different skill levels and productivity. They may produce results of different quality when assigned the same task, and may not be able to maintain the same level of productivity everyday. 2. Timing and targets for sub-delegation: It is difficult for a worker to quantify when sub-delegation is needed and who the suitable candidates for sub-delegation are. This is further complicated by the fact that different workers may incur different costs to complete the same task. Sub-delegation to a worker resulting in a loss for the sub-delegator is not a rational choice. 3. Workers' commitment: Workers may not be fully committed to an HCN. Their eagerness to work (which may change over time) will affect their availability.
Recently, computational approaches for mitigating herding in crowdsourcing systems have emerged. In the Pinning control method 10 , the pinning method is used to control the collective dynamics in complex networks. The study focuses on situations where multiple agents try to decide individually which one of two available resources to use. Thus, this method cannot be directly applied to crowdsourcing systems in which many crowdsourcers need to engage a large number of workers to accomplish their objectives. The Global Considerations (GC) approach uses a worker's current pending workload as a guide to adjust his reputation 13 . GC adjusts the probability for a task to be assigned to a worker based on the worker's reputation standing among all other workers using the softmax approach. In Yu et al. 14 , a centralized task allocation approach was proposed to make dynamic trade-offs between the need for engaging trustworthy workers and obtaining task results on time. A fully distributed variant of this method that helps workers determine which incoming tasks to accept was studied in Yu et al. 15 . All of these approaches allow workers to be automatically assigned to tasks, saving them time spent on exploring open task calls and improving their collective productivity. Nevertheless, these existing approaches are not designed for HCNs. They do not support task sub-delegation, an essential mechanism to avoid herding in HCNs. The aforementioned complexities due to human nature have also not been accounted for by existing approaches.
This paper investigates the herding dynamics in HCNs and proposes the Reputation-aware Task Subdelegation approach with dynamic worker effort Pricing (RTS-P) to mitigate herding through enhancing the efficiency of manpower utilization in HCNs. It is an individual-level decision-making approach based on Lyapunov optimization 16 with objective functions aiming to achieve superlinear time-averaged collective productivity in an HCN 17 . By considering a worker's current reputation, workload, willingness to work, and his trust relationships with others, RTS-P provides a systematic approach for a worker to make joint decisions on task acceptance, sub-delegation, and effort pricing, so as to maximize his income while avoiding significant fluctuations in workload. The approach is distributed and can be implemented as a personal decision support agent for a worker in an HCN (Fig. 1). RTS-P is an extension of our previous model -RTS 18 . The addition of the dynamic worker effort pricing function allows operation in systems which permit workers to set the price of their service. In doing so, substantial modifications to the original system model 18 and the joint task acceptance and sub-delegation decisions are required.
RTS-P is compared with 4 existing methods through extensive experiments based on a large-scale real-world dataset -the Epinions trust network dataset. The results show that RTS-P effectively mitigates herding through efficiently harnessing the available human resources. We also show that RTS-P workers achieve significantly higher total income compared with other state-of-the-art approaches, especially under high workload conditions. RTS-P not only automates key decisions in the situation-task-others triad 19 surrounding a worker, but also sheds light on the long-standing quest for an individual-level decision support approach which results in productive and robust collective patterns in human crowds 20 . Our work provides a general framework to optimally harness the collective productivity of a complex network of human resources in order to mitigate herding, with potential applications in many social and economic systems.
Methods
Our key results include (1) a formulation of the problem of mitigating herding through efficiently harnessing the productivity of workers in an HCN as a constrained optimization problem which minimizes drastic fluctuations in workers' workloads while maximizing their expected earnings; (2) a distributed algorithm which solves the problem by jointly controlling the task acceptance, task sub-delegation, and effort pricing decisions for each worker; and (3) experimental evaluations of the performance of the proposed algorithm against state-of-the-art approaches in a large-scale HCN.
Proposed Framework. Our focus in this paper is to address the problem of delegating/sub-delegating a task, τ j , proposed by a crowdsourcer j, to workers in an HCN. In general, the effort required to complete a task (i.e. the workload of the task) can be expressed in effort units which can be defined by crowdsourcing system operators. For example, the effort required to complete a software programming task can be measured by the expected number of lines of code. A task must be completed before its stipulated deadline and with quality acceptable to the crowdsourcer. A worker i has a limited effort output rate which can be up to μ i max effort units per time slot. Tasks waiting to be completed by i are stored in his pending tasks queue. Let q i (t) be worker i's pending workload at the beginning of time slot t; the queuing dynamics of q i (t) can be formulated as: where λ i (t) is the new workload accepted into q i (t) during time slot t, μ i (t) represents the actual workload completed by i during time slot t, and s i (t) is the sub-delegated workload by worker i during time slot t.
With crowdsourcing analytics tools such as Turkalytics 21 , workers' performance can be tracked in detail. A worker i's past performance as measured by the quality and timeliness of his productive output can be used to estimate i's reputation, r i (t) ∈ (0, 1), using a reputation evaluation model 22 . Reputation acts as a sanctioning mechanism affecting future demand for a worker's services. Using this information, a worker can establish trust relationships with a set of other known workers, n i sub . n i sub is the set of trusted workers to whom worker i's tasks can potentially be delegated or sub-delegated. As a task can be iteratively sub-delegated through a delegation chain, it is reasonble for all workers in the delegation chain to be accountable (to various degrees) for the outcome of the task. A possible model for sharing responsibility in a delegation chain is the Decreasing Weighting (DW) reputation update mechanism 23 which assigns the last worker in the delegation chain (i.e. the one who actually completed the task) the highest share of responsibility and decreasing weight values to other workers higher up along the delegation chain. The future expected demand for a worker i's service, , is affected by his r i (t) value and the current price he charges for his service, p i (t) (e.g., measured in dollars per effort unit), i.e., Automating Task Sub-delegation Decisions. An RTS-P agent takes only local knowledge as input, and automatically offers recommendations to a worker i concerning three key decisions in an HCN at any given point in time: 1) the timing, amount of workload, and the target workers for sub-delegation, 2) how much new workload shall be accepted by i, and 3) how to price his services. If an RTS-P agent determines that its owner i's risk of not completing all pending tasks before the respective deadlines is high, it will attempt to sub-delegate some of the pending tasks to other workers. The selection of candidate workers for sub-delegation takes into account how trusted the workers are and how much they charge for their services (i.e., so that the act of sub-delegating does not incur financial loss for its owner). These heuristics can be converted into a computational task sub-delegation mechanism as follows. A conceptual queue, Q i (t), is used to quantify the urgency for a worker i to sub-delegate pending tasks. Q i (t) is updated by an RTS-P agent in conjunction with q i (t) as follows: In this formulation, the symbol λ¯i represents the average amount of new workload accepted by worker i per time slot. 1 [condition] is an indicator function. Its value is 1 if and only if [condition] is satisfied; otherwise, it evaluates to 0. The dynamics of Q i (t) are as follows: updated, then Q i (t) grows by λ¯i. This ensures that Q i (t) keeps increasing if there are tasks in q i (t) which have not been completed for some time.
In order to efficiently utilize the productivity of a crowdsourcing network, RTS-P must ensure that the upper bounds of both q i (t) and Q i (t) are finite for all workers involved.
Let X i (t) = (q i (t), Q i (t)) be a concatenated vector of worker i's physical and conceptual pending tasks queues. We adopt the Lyapunov function 16 to measure the level of congestion in both q i (t) and Q i (t) for all workers in a given HCN. It can be expressed as . Then, the amount of change in worker i's pending workload can be measured using the conditional Lyapunov drift as: Scientific RepoRts | 6:4 | DOI: 10.1038/s41598-016-0011-6 Based on equation (3), we have: where λ i max and s i max are the respective upper bounds of λ i (t) and s i (t) for a given worker i. Based on the same approach, the conditional Lyapunov drift for the physical queue can be expressed as: From equations (5) and (6), equation (4) can be expressed as: For simplicity of notation, let From a worker i's view point, he would wish to minimize both the cost incurred by task sub-delegation as well as drastic changes in his pending workload. Thus, we formulate a {drift + cost} expression to capture this dual goal as follows: represents the average price of service charged by worker i's known trusted workers at time slot t, and ρ i (t) > 0 represents i's general eagerness to work. A large value of ρ i (t) indicates that a worker is highly motivated to work. It adjusts the relative importance given to the two components in the {drift + cost} expression. It can be inferred by keeping track of the worker's productivity over a period of time, or be explicitly declared by the worker to control how the RTS-P agent behaves. At the beginning of each time slot, the RTS-P agent observes q i (t) and Q i (t), as well as its owner i's current context tuple 〈μ i (t), λ i (t), ϕ i (t)〉, to determine the value of s i (t) which minimizes the {drift + cost} expression. This form of combined value maximization and surprise minimization complies with the latest findings in human choice behaviours 24 . By only considering the terms containing the decision variable s i (t) which can be controlled by the RTS-P agent in equation (8), the {drift + cost} objective function can be re-expressed as: Minimize: is the price the worker i charges for task τ j . r min (t) ∈ [0, 1] is a pre-determined reputation threshold value. In order to minimize equation (9) Intuitively, equation (12) means that when worker i is highly willing to work, the cost of sub-delegating is high, the current workload is low, and tasks in the pending tasks queue have not been pending for too long, worker i should not sub-delegate any tasks. Otherwise, worker i should try to sub-delegate as many tasks as possible. Nevertheless, the actual s i (t) value also depends on the satisfaction of Constraint (11), which requires at least one worker k in n i sub whose reputation is higher than the threshold, and who charges a price no higher than what worker i charges for the task (i.e., worker i does not incur any loss by sub-delegating the task to worker k).
Automating Task Acceptance and Effort Pricing Decisions. Taking the cost of task sub-delegation into account, the expected income for a worker i at time slot t becomes A recent large scale empirical study in e-commerce, involving sellers from both eBay and Taobao 25 , suggests the following expression relating new demand (i.e., workload) for a worker to his price and reputation: where c 0 to c 3 are positive constants, N t ( ) i p is the number of positive ratings received by i over a given period of time, and d i represents how similar the quality of service provided by i is to what he promises. In this paper, we adopt equation (14) for modeling the dynamics of the demand for a worker's service to derive the joint task acceptance and effort pricing strategy. Nevertheless, equation (14) can be replaced by other functions suitable for different systems without affecting the principle on which RTS-P operates.
By taking exponents on both sides of equation (14), we have: As RTS-P only controls the decision variables p i (t) and a i (t) for effort pricing and task acceptance, we only consider the terms containing these decision variables on the right hand side of equation (13) and substitute f(p i (t), r i (t)) with equation (15). Thus, we have: Maximize: i i min where p i min is the minimum price to cover i's cost of service. We assume the value of p i min does not change frequently and can be treated as a constant with respect to i. The solution for maximizing this objective function can be obtained by finding the first order derivative of equation (16) and equating it to 0: Solving equation (18) yields: The result means that i should increase the price he charges for new tasks if his current workload is high, his current reputation is low, or his eagerness to work is low (and vice versa), while ensuring that his price is always no less than p i min . If i's reputation is low, he is less likely to receive a large number of task requests. Thus, whenever others are willing to solicit i's service, from i's perspective, he should charge a higher price in order to capitalize on these opportunities.
To maximize equation (16): In this paper, we ensure that a worker is never assigned more workload than the maximum workload he can handle within one time slot. Thus, when a i (t) = 1, RTS-P accepts up to μ i max effort units worth of new workload into i's pending tasks queue.
The core RTS-P algorithm is presented in Algorithm 1. It can be implemented as a personal decision support agent for each worker in an HCN. Multiple RTS-P agents can then communicate on their respective owners' behalf to automate the task acceptance, sub-delegation, and pricing decisions to maximize the overall productivity of the given crowdsourcing network.
In our previous work 18 , we have proved that the joint task acceptance and sub-delegation decisions made under prices of service fixed by the crowdsourcers are asymptotically optimal compared to an oracle that knows the exact situation of each worker at all times. Although the addition of dynamic worker effort pricing in RTS-P allows workers to adjust their prices according to their changing situations, the joint task acceptance and 1: sub-delegation decisions are made only after the prices have been set. Thus, the original theoretical analysis is still valid for RTS-P. Interested readers may refer to the Analysis section in Yu et al. 18 .
Results
To evaluate the performance of RTS-P under realistic settings, it is compared with four state-of-the-art approaches through extensive numerical experiments in an HCN based on the Epinions trust network dataset 26 . This real-world dataset allows us to construct realistic scenarios for performance comparison. The simulations facilitate understanding of the behavior of RTS-P under different situations.
Model Implementation on a Real Network. The Epinions trust network dataset used in the experiments
contains N = 10,476 workers, each represented by a node in the network structure. These nodes are connected by weighted and directed edges. A weight of "+1" represents a trust relationship, while a weight of "−1" represents a distrust relationship. The dataset contains 15,742 trust relationships and 2,170 distrust relationships. Based on this dataset, we construct an HCN populated by worker agents with different characteristics. For a worker agent i in the experiment, n i sub consists of other worker agents connected with i through a directed "+1" edge originating from i. We assume that agents do not have global awareness. Thus, a worker agent i may only delegate or sub-delegate tasks to other worker agents in n i sub . Each worker agent i has an innate trustworthiness h i ∈ [0, 1] which dictates its probability of producing satisfactory results for tasks delegated to it in simulations. This value is computed using the number of other agents trusting and distrusting agent i in the dataset following the Beta Reputation Model 27 . Figure 2 illustrates the crowdsourcing network derived from the dataset. The size of a node in the figure reflects the worker agent's h i value. The larger the size of a node, the more trusted the worker agent is. Let ρ be the workers' average eagerness to work in a given crowdsourcing network: The value of ρ is varied from 1 to 100 to simulate different levels of workers' general eagerness to work. characteristics to study how effectively RTS-P copes with these situations. Files containing the HCNs used in the experiments can be downloaded from http://goo.gl/QyRjTs.
In the experiments, we assume that the outcome for each task is binary (i.e., a task is regarded to be successful if the worker agent produces the correct result before the stipulated deadline; otherwise, it is considered unsuccessful). A worker agent is only paid if it completes a task successfully. Five duplicate crowdsourcing networks are created in the experiments to study the relative performance of the 5 approaches. They are: 1. The Equality-based Approach (EA): tasks are uniformly distributed among worker agents in a crowdsourcer agent j's n j sub without regard to their reputations. 2. The Reputation-based Approach (RA): the probability for a worker agent i to be selected by a crowdsourcer agent j is determined by its reputation standing among all worker agents in n j sub following the softmax choice rule 28 . 3. The Global Considerations (GC) Approach: a crowdsourcer agent j adjusts the probability for tasks to be delegated to each worker agent in n j sub following the approach in Grubshtein et al. 13 . 4. The DRAFT Approach: worker agents make task request acceptance decisions following the approach in Yu et al. 15 . 5. The RTS-P Approach: worker agents follow the approach proposed in this paper.
Approaches 1 to 4 do not support task sub-delegation. The overall workload level in the experimental HCN is adjusted to simulate different operational conditions. As the workload is measured in relative terms to the collective task processing capacity of the worker agents, we compute the maximum throughput θ of a given crowdsourcing network as θ μ = ∑ = h i N i i 1 max . At each time step, a proportion of the agents, p a , from the network are selected at random to act as crowdsourcers from which tasks originate. Based on empirical studies of the mTurk crowdsourcing system 29,30 , the ratio between crowdsourcers and workers is close to 1:20. Thus, we set p a = 5%. The workload for a given crowdsourcing network is measured by the Load Factor (LF). It is calculated as = θ LF w req , where w req is the amount of new workload generated by crowdsourcer agents at each time slot. In the experiments, the LF value ranges from 5% to 100% in 5% increments. Under each LF setting, the simulation is run for T = 10,000 time slots. Task deadlines are randomized. On average, a task must be completed within 5 time slots after it is first assigned to a worker agent.
Simulation Results. As shown in Figs 3(a), 4(a) and 5(a), as LF increases, RTS-P agents sub-delegate an increasing percentage of their workload to other trusted worker agents in the network to mitigate delays for all worker behaviour characteristic settings. If the worker agents are more eager to work as indicated by larger ρ values (i.e., worker agents prefer working on their tasks instead of sub-delegating them), fewer tasks are sub-delegated. The highest percentage of tasks is sub-delegated under low general eagerness to work and high workload conditions. As ρ increases, this peak sub-delegation percentage shifts towards higher workload conditions. Under = + μ R h (Fig. 3(a)), as LF approaches 100%, an increasing percentage of tasks are sub-delegated. However, when ρ values are small and LF values are large (i.e. the general eagerness to work is low while the overall workload is high), the trend reverses. Under = μ R 0 h ( Fig. 4(a)), fewer trustworthy RTS-P agents are able to accommodate new tasks. Thus, there appears to be a systemic "downward shift" of the contour lines in the figure, indicating fewer tasks are being successfully sub-delegated even when agents show high willingness to work (i.e., larger ρ values). Under = − μ R h (Fig. 5(a)), workers who produce good results have lower productivity. The systemic downward shift of the contour lines is more pronounced compared to Fig. 4(a). The same trends can be observed for the average sub-delegation chain lengths in Figs 3(b), 4(b) and 5(b). This indicates that as the μ R h value decreases (i.e., the dichotomy between workers' productivity and quality of work increases), RTS-P adapts its strategy by reducing task sub-delegation throughout the HCN, especially in cases in which workers are highly eager to work and overall workload is high.
The trade-off between the average task failure rates and the average task expiry rates achieved by all five approaches under different μ R h settings is shown in Figs 3(c), 4(c) and 5(c). The overall effect of the RTS-P strategy is to significantly reduce the average task expiry rate (i.e., improving the timeliness of obtaining task results). This comes at the expense of a slightly lower average task result quality. The average task failure rates of RTS-P under all μ R h settings are comparable to those of GC and consistently stay below 7%. The average task expiry rates of RTS-P under all μ R h settings are significantly lower than all other approaches (more than 20% lower than the best performing approach -DRAFT -under the most challenging situation of = − μ R h ). By making workers sacrifice task quality to a small extent, RTS-P significantly increases the total number of tasks completed by an HCN, putting the Parrondo's Paradox 31 to work on a large scale. Figures 3(d), 4(d) and 5(d) illustrate the total earnings derived by worker agents following the five different approaches under different μ R h settings. The results are averaged over all ρ value settings in the experiments. As RTS-P worker agents consistently achieve the highest total earnings, RTS-P is used as the benchmark for all other approaches. It can be observed that the total earnings achieved by EA, RA and GC worker agents as a percentage of those achieved by RTS-P worker agents start to drop under low LF conditions. As LF approaches 100%, EA, RA and GC worker agents achieve around 70% of RTS-P worker agents' total earnings. DRAFT worker agents earned the same amounts as RTS-P under low LF conditions. The performance of DRAFT worker agents starts to deteriorate under medium LF conditions. As LF approaches 100%, DRAFT worker agents achieve around 75-80% of RTS-P worker agents' total earnings. Under the challenging situation of = − μ R h (Fig. 5(d)), the advantage of RTS-P over other approaches stalls under high LF conditions. However, when averaged over all LF conditions, RTS-P consistently maintains at least a 14.9% advantage on average over the best performing approach -DRAFT -under all μ R h settings. Overall, RTS-P significantly outperforms existing approaches and its performance is robust in the face of different worker behaviour characteristics.
Discussion
To summarize, the proposed RTS-P approach leverages Lyapunov stochastic network queueing theory to make joint decisions on task acceptance, sub-delegation, and effort pricing. To our knowledge, RTS-P is the first principled computational approach to assist hierarchical crowdsourcing workers to dynamically sub-delegate tasks and adjust the price of their services based on changing situational factors while ensuring efficient utilization of their collective productivity. High resolution numerical experiments show that RTS-P is robust under various worker behaviour characteristics and significantly outperforms state-of-the-art approaches, especially under conditions of high workload. As recent empirical results show that such conditions are common among crowdsourcing projects 32 , RTS-P can be a useful tool to help HCNs mitigate the adverse effects of herding through efficiently harnessing the available human resources.
Furthermore, a worker can adjust the ρ i (t) variable value of his RTS-P agent to take on different roles in a crowdsourcing network. Since each worker can establish his trust relationships with a set of known workers, a worker can focus on tracking workers' historical performance and building up his list of trusted workers. With such a list, he could reduce the ρ i (t) value of his RTS-P agent so as to sub-delegate most of the accepted tasks to other trusted workers, thereby deriving most of his earnings from sub-delegation. By doing so, these workers can serve as task brokerage agents and provide a useful service to the crowdsourcing network. Other workers who are able to spend more time and effort completing tasks can increase the ρ i (t) values of their RTS-P agents so as to accept more tasks and sub-delegate only when absolutely necessary, thereby deriving most of their earnings through completing tasks.
RTS-P helps each worker compute a suitable effort price under different situations so that their collective benefits can be maximized. As a task propagates through a sub-delegation chain, subsequent price proposals are subject to Constraint (11) which dictates that workers with prices exceeding the current price for the task being considered for sub-delegation should not be selected (as this will cause the sub-delegator to incur a loss). Thus, there will never be a situation in which a crowdsourcer is forced to accept prices higher than what he can afford. Rather, prices reflect the current demand placed on the workers, and crowdsourcers can decide to either wait or increase their budgets. Such a signal helps coordinate the crowdsourcers' actions to reduce herding in the crowdsourcing network.
Following this work, we foresee a series of interesting research directions. RTS-P works well for workers who have accumulated some historical performance data in the system. For workers new to a system, there is a large body of literature on reputation bootstrapping [33][34][35] . Methods from these works can be put in front of RTS-P as a module to build up a system workflow to help new workers build up their track records. The most important direction of this field lies in understanding the dynamics of how the volume of task requests for a worker varies with his reputation and effort pricing. Large-scale user studies in crowdsourcing networks will be needed to investigate this topic. Furthermore, this field will also benefit from more detailed empirical evidence on how workers decide on what types of tasks to accept and what incentive mechanisms are effective.
In conclusion, the proposed approach and results provide a stepping stone towards more efficient management of large-scale hierarchical crowdsourcing based on evidence about workers' behaviours, and ultimately help improve the collective productivity of our connected world. | 7,222.6 | 2016-12-01T00:00:00.000 | [
"Computer Science"
] |
What underlies inadequate and unequal fruit and vegetable consumption in India? An exploratory analysis
Adequate consumption of fruit and vegetables is key to improved diet-related health in India. We analyse fruit and vegetable consumption in the Indian population using National Sample Survey data. A series of regressions is estimated to characterise the distribution of household fruit and vegetable consumption and explore key socio-economic and food system drivers of consumption. Household income and price are important correlates, but consumption is also higher where households are headed by females, are rural, or involve agricultural livelihoods. Caste is an important source of inequality, particularly amongst those with low consumption, with Scheduled Tribes consuming less F&V than others. We also find preliminary evidence that formal agricultural market infrastructure is positively associated with fruit and vegetable consumption in India.
Introduction
Dietary risks are amongst the top risk factors for death and disability in India (Prabhakaran et al., 2018). Fruits and vegetables (F&V) are a key food group providing essential vitamins and minerals, and their intake is particularly important in settings where micronutrient deficiencies are widespread, such as India (Meenakshi, 2016). There are important associations between F&V intake and lowered risk of cancer, cardiovascular disease and all-cause mortality (Aune et al., 2017). This is of particular importance in India, where F&V consumption has a role to play in combating an ongoing crisis relating to diet-related chronic disease (Reddy et al., 2005).
The WHO's Global Strategy on Diet, Physical Activity and Health recommends that per capita F&V consumption (excluding tubers) should exceed 400g/day. However, diets in India are typically cerealdominated and limited in their diversity (Shankar et al., 2017;Tak et al., 2019). Limited previous research has suggested that consumption of F&V in India has historically been low. Analysis of the 2011-12 National Nutritional Monitoring Bureau data for selected Indian states showed average vegetable consumption amongst men to be 143 g/ person/day for men and 138 g/person/day for women (Shankar et al., 2017). A recent analysis of the nationally representative National Sample Survey (2011)(2012) indicated that household per capita consumption of F&V is 160 g/person/day for rural India and 184 g/person/ day for urban India (Minocha et al., 2018), well short of the WHO benchmark of 400g/person/day. What might underlie this inadequate level of F&V consumption? On the demand side, low income, high prices and social and geographical inequities are hypothesised as potentially important constraints. Sekhar et al. (2017) found F&V prices to be a major contributor to overall food inflation in India. For example, during 2012-2013, fruit and vegetable price inflation ran at 78% compared to an average for all foods of 18%, with the Indian media highlighting the effects on consumers of an 'onion crisis' as onion prices soared by more than 200%. Ruel et al. (2005) noted that F&V consumption is generally expected to be responsive to income growth, but given that F&V are an expensive food source, especially on a per-caloric basis, poorer households struggling to meet energy requirements are likely to find themselves more constrained in increasing consumption (Green et al., 2013;Headey and Alderman, 2019). Regional and social disparities may be important too. Tak et al. (2019) reported that the diversity of diets differs markedly across Indian regions. Previous literature has shown how welfare outcomes in India, including nutrition, can differ substantially across regions and social classifications such as caste, even after controlling for differences in income and other confounders (Cavatorta et al., 2015;Van de Poel and Speybroeck, 2009;Joy et al., 2017).
On the supply side, it has been noted that F&V producers have not responded strongly to increased demand arising from robust economic https://doi.org/10.1016/j.gfs.2019.100332 Received 30 July 2019; Received in revised form 23 October 2019; Accepted 25 October 2019 growth (Pingali, 2015), contributing to high and volatile F&V prices. High transaction costs of linking smallholders to markets and inadequate infrastructure have been identified as major obstacles to producer response (Joshi et al., 2004;Pingali, 2015). Gandhi and Namboodri (2005) characterise F&V value chains in India as highly inefficient marketing structures with poorly coordinated markets and high proportions of spoilage.
However, apart from bivariate associations drawn between F&V intakes and wealth or socio-economic status, studies including F&V as one of many food categories in broader food demand analysis and some insights from small qualitative studies, there is little research examining how F&V consumption in India relates to key economic, socio-demographic or food system drivers. In this paper, we examine the household-level economic, socio-demographic, and key food system drivers of F&V consumption in India. In doing so, we train special focus on the lower tail of the F&V distribution with the worst consumption outcomes.
Data
Our primary source of data is the 68th (latest available) round of the nationally representative cross-sectional survey on household expenditure and consumption, the National Sample Survey (NSS), conducted in 2011-2012. The NSS Household Consumption Expenditure Survey records both quantity (purchase + home production) and value of food items at household level. Information on the NSS's stratified multistage sampling design has been reported elsewhere (Government of India, 2010. Unlike the National Family Health Survey (NFHS) datasets which do not provide information on quantity of food items purchased, the NSS 2011-12 offers two alternatives for the computation of food consumption measures. One alternative, called 'type 1 data', has a recall period of 30 days and has been used in previous computation of summary statistics of F&V consumption (Minocha et al., 2018). Our analysis is based on an alternative NSS survey format ('type 2'), which uses a reference period of 7 days preceding the survey and retains the 30 day recall only for some food items (cereals, pulses and sugar). We use the type 2 schedule based upon 7 day recall since using a shorter recall period can potentially help improve accuracy, particularly for nutrient-rich food groups, compared to the 30 day recall period . The NSS 2014 report on the 68th round also examines some differences between Type 1 and Type 2, and these are summarised by Aleksandrowicz et al. (2017) that there was a small increase in overall calories in Type 2 versus Type 1, and higher intake of those items that used the 7 day recall (meats, eggs, fruit, veg, etc). The original NSS sample consists of 101,651 (59,683 rural and 41,968 urban) households. Following exclusion of households with extreme values of 1 per capita calorie intake, our final primary estimation sample consists of 98,868 households.
Our dependent variable is household per-capita fruit and vegetable consumption (g/capita/day), which is based on the sum of fruits and vegetables (excluding potato) 2 consumed in the home by the household in the previous 7 days from a single respondent, usually the female adult of the household who recalls other household members' consumption. Quantity of consumption from purchased and own production are asked separately and we have taken the sum of these as the household intake. The survey asks about quantities produced/purchased for about 140 individual food items, with a small number of questions on meals/snacks eaten out of home. We aggregated the individual fruit and vegetable items into our fruit and vegetable categories. The fruit and vegetable groups include mango, orange, guava, banana, papaya, grapes, melon, other fruits, onion, garlic, leafy vegetables, tomato, gourd, carrot and other vegetables. When calculating consumption, adjustments have been made to include (1) meals prepared at home but consumed by non-members and (2) meals received for free from other households by household members. Household per capita consumption is calculated by dividing the total household F&V consumption by household size (household composition is controlled for as a covariate in the regression analysis). We use a simple division by household size in order to maintain consistency with the key previous literature (Minocha et al., 2018), and also in accordance with practice in the economics literature that uses NSSO food consumption data (eg. Deaton and Dreze, 2009). However, as an alternative we also provide a full set of results in the online appendix that normalises on the basis of adult equivalent units. Although consumption is normalised by the number of household members here, it must be kept in mind that this remains a household-level measure and is only meant to be a proxy for, rather than an attempt at individual consumption measurement.
Our set of explanatory variables comprises a variety of household level economic and socio-demographic indicators and food systems level factors that have been associated with household dietary outcomes in previous literature (Stifel and Minten, 2017;Alderman and Headey, 2017;Ruel et al., 2005). To proxy income or purchasing power, we include per capita monthly expenditure. We proxy prices using unit values, i.e. by using the ratio of expenditure over quantity (in doing so, it is recognised that unit values incorporate a quality choice dimension). Thus, for a composite good such as fruit and vegetables, unit values, i.e. expenditure divided by quantity consumed, will reflect household choices both about individual types of F&V consumed, and also about relative consumption of higher or lower grade of produce. Therefore, caution is warranted in the interpretation of regression coefficients. Two such unit value measures are calculated for each household, one for all foods and one for the category of fruit and vegetables. The unit value of fruit and vegetables divided by the unit value of all foods is then used as the proxy relative price of fruit and vegetables in all regressions. Caste is represented by a set of dummy variables, where Scheduled Tribes, Scheduled Castes and Other Backward Castes are measured against the baseline of 'Other/Upper' castes. 3 We include a set of state level dummy variables in all regressions to control for regional heterogeneity. In addition to the above, a measure of education is included to proxy nutritional knowledge, specified as years of schooling of the household head (Webb and Block, 2004). Research on intrahousehold allocation of resources suggests that men and women do not necessarily pool their resources and hence may allocate resources differently, depending on their bargaining power within a household (Alderman et al., 1995;Hoddinott and Haddad, 1995). Therefore, we include a dummy variable indicating female headed households in our analyses.
In order to capture socio-cultural aspects beyond what is controlled for by state-level fixed effects, we also include a binary variable to indicate whether a household is Hindu or not. 4 There is now considerable evidence that in the presence of market failures in developing countries, households are highly dependent on own production (Sibhatu and Qaim, 2018), and accordingly we include variables to represent whether a household is rural or urban, and whether its primary employment is in agriculture. Since adult and child consumption levels are likely to differ with implications for household per capita computations, we include the number of children in the household as a covariate.
In addition to our main analysis, we also conduct supplementary exploratory work to gauge the associations of two key supply-side variables in the form of road infrastructure and the density of state-run 1 Based on overall dietary energy intake, we calculated outliers corresponding to values beyond 2.5 SDs around the mean. Correspondingly, we dropped households with per capita calorie intakes less than 50 and greater than 4300.
2 Tubers are excluded in the WHO recommendation of 400 g of F&V/day. 3 The official terminology is "Other". 4 Appendix Table A1 also shows results of Table .3 excluding religion.
agricultural markets with F&V consumption. Roads and markets are potentially critical constraints to the distribution of fruit and vegetables and therefore to their availability and prices across the country, particularly since cold chain availability is minimal to non-existent in many parts of India, and produce is largely traded as fresh (Desai, 2011). A high proportion of fruits and vegetables in India is transported via trucks, often across hundreds of kilometres, to be sold to traders at large state-run mandis (wholesale markets). The NSSO dataset itself does not contain information on such infrastructure variables. However, for a proportion (approximately 20%) of the overall NSSO sample, we are able to match the district location of the household with district-level information on roads and markets. These district-level data on roads and agricultural markets have been retrieved from Village Dynamics in South Asia (VDSA) project of the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT). Such information from the VDSA is available for a subset of 53 districts in 2010-11. No claims can be made about potential randomness in whether district level data are missing or not and therefore this additional analysis should be considered an initial and partial exploration. Road density is defined as road length per 1000 km 2 of geographical area. Market density is calculated as the number of formal agricultural markets per 1000 km 2 of geographical area.
Methods
We start with a set of data visualisations, first graphing the distribution of F&V consumption in the population, and then applying non-parametric regressions in the form of local polynomial smoothers to assess non-linear bivariate relationships between household F&V consumption and a core set of covariates. We then use ordinary least squares regression models to assess the associations between fruit and vegetable consumption and household-level economic and socio-demographic variables using the full sample.
Subsequently, we turn attention to the question of inequality in F&V consumption, specifically asking how the influence of key covariates on F&V consumption varies across the F&V consumption distribution. One option available for such an approach would be a categorical dependent variable model such as a probit or logistic regression, say based on grouping F&V consumption into 'low', 'medium' and 'high'. However, in addition to the ad-hoc nature of such grouping, this would also entail a loss of statistical information and a restrictive characterisation of the joint distribution of the outcome and the covariates (Zanello et al., 2016). We apply Unconditional Quantile Regressions (UQR), specifically Firpo et al.'s (2010) unconditional Recentred Influence Function (RIF) UQR method. The RIF regression methods allow us to estimate the unconditional quantile effects of the covariates on F&V consumption at any quantile of the distribution. Unlike routinely applied conditional quantile regression methods where the estimated relationship between covariate and outcome is conditional on the values of other covariates, Firpo et al.'s method provides unconditional estimates, and has been applied in the analysis of food and nutrition outcomes by Zanello et al. (2016) and Jolliffe (2011) among others. Here, we estimate and present UQR results for the 10th, 25th, 50th, 75th and 90th percentiles of F&V consumption.
Finally, we carry out our exploratory analysis on the smaller sample with available information on road and market density, to gauge the relationship between infrastructure and F&V consumption. Since the regional-level infrastructure variables would conceptually influence household F&V consumption primarily via their prices, our set of covariates for this exercise include all the household-level variables described above, except for price, along with market and road density. The infrastructure variables are at the district level, whereas the rest of the covariates are at household level. Frequently applied approaches in such settings include ignoring the hierarchical structure of the data and assigning district-level values to households, or carrying out analysis based on district level averages. Instead, we maintain and recognise the hierarchy in the data and estimate a multi-level model comprised of two levels, household and district. Specifically, we estimate a random intercept model (Raudenbush and Byrk, 2002) that allows the identification of the influence of the district-level infrastructure variables on household F&V consumption whilst allowing the random error term to vary by district. Table 1 reports summary statistics for all the variables used in the analysis. The WHO norm of 400 g/person/day for F&V refers to adult individuals whereas the consumption outcome here is derived from household level data. Therefore, we refer instead to the benchmark household-level adequacy indicator of 400 g/person/day referred to in the FAO-World Bank ADePT-FSM (Moltedo et al., 2014) and discussed in INDDEX Project (2018). The median household per capita consumption of 200 g/person/day (Table 1) is far short of the 400 g/ person/day benchmark for household level per capita consumption of fruit and vegetables. It is worth noting that this computation based on 7-day recall is larger than the 160 g/person/day for rural India and 186 g/person/day for urban India reported by Minocha et al. (2018) from the same survey using 30-day recall. Fig. 1 shows that the distribution of consumption in the sample is highly unequal. There are considerable proportions of households with consumption less than 100 g/person/day and even 50 g/person/day. There is also a long tail of households consuming more than 400 g/ person/day. Online appendix A2 provides summary statistics for this group. It is evident from comparing that table with Table 1 for the overall population above that households consuming in excess of the benchmark are substantially richer, and also on average have smaller household sizes and more educated household heads.
Descriptive results and bivariate relationships
We turn to bivariate relationships between F&V consumption and key covariates of interest: caste, income and prices. Table 2 highlights another aspect of inequality in F&V consumption in India, connected to caste. Policy structures in India have long recognised four broad caste groupings reflecting the extent of socio-economic disadvantage in descending order: Scheduled Tribes, Scheduled Castes, Other Backward Castes and 'Other/Upper' Castes. Table 2 shows that F&V consumption tracks this caste grouping, with average group consumption lowest among Scheduled Tribes and rising to highest among 'Other/Upper' Castes. 'Other/Upper' castes on average consume 60 more g/person/ day, (or 34%) F&V than Scheduled Castes.
Figs. 2 and 3 graph bivariate relationships between F&V consumption and the two key variables, income/expenditure and prices. Fig. 2 demonstrates the clear positive gradient between F&V consumption and household per-capita expenditure as a proxy for income. Evidently, the bivariate relationship demonstrates some non-linearity and the income effect appears to level off at high incomes. In Fig. 3, a steep decline of F &V consumption with relative F&V price is observed for the most part. However, the bivariate relationships are only an initial guide, and are potentially confounded by numerous other changes. We now focus attention on regression results that control for such confounding.
OLS regression results
We first report results based on ordinary least squares (OLS) estimates (Table 3) using the full sample. 5 The model includes a full set of state dummy variables to provide control for unobserved regional and state policy influences that may impinge upon household F&V consumption (Cavatorta et al., 2015). The results in Table 3 confirm a strong association between household per capita expenditure and F&V consumption. Based upon the estimated coefficient, an income elasticity of 0.65 can be calculated at the mean F&V consumption in the sample (meaning a 10% increase in income is associated with a 6.5% increase in household per-capita F&V consumption). This estimate is consistent with an F&V income elasticity range between 0.60 and 0.97 calculated by Ruel et al. (2005) for a range of African countries. The coefficient on the price (proxied by unit value) of F&V relative to all foods is also significant at the 1% level, and negative as expected, indicating that higher F&V prices do play a role in discouraging consumption.
Also noteworthy in Table 3 are the positive and statistically highly significant coefficients relating to rural status and agricultural occupation of the household. A rural location is associated with a 15 g/ person/day higher F&V consumption, all else held equal, while being occupied in the agriculture sector is associated with a 6 g/person/day increase. Thus, an urban disadvantage in F&V consumption is observed once higher incomes typically observed in urban areas are controlled for. This suggests that market failures may be at play. Where markets are complete and efficient, engaging in or being proximate to agricultural production should have no relationship with F&V consumption, once income and prices are controlled for. However, market failures may lead to a direct link between agricultural involvement, or being in proximity to agricultural production, and improved F&V consumption. 6 This is consistent with a recent literature emphasising agricultural production and nutrition linkages among farm households in South Asia arising from market failures .
F&V consumption in female headed households is higher by about 15 g/person/day compared to male-headed households after controlling for other covariates. Diseconomies of scale are observed in household F&V consumption in Table 3, with larger households linked with lower F&V household consumption per capita. The small and statistically insignificant coefficients attached to the caste variables (in comparison to the baseline of 'Other/Upper' caste) are very informative. Considered in conjunction with the sizeable differences observed in F&V consumption across caste groups in Table 2, they suggest that caste-based inequality in F&V consumption arises from differential levels of income and other observed covariates of consumption across castes. Table 4 presents results from unconditional quantile regressions (UQR) for the 10th, 25th, 50th, 75th and 90th quantiles. The UQR share an identical set of covariates (including state dummy variables) with the OLS regressions discussed in Table 3, and offer insight into how relationships differ across the F&V consumption distribution rather than just at the mean (OLS). Although the coefficients attached to the income and relative price variables increase along the F&V consumption distribution, note that the semi-log functional form means that the coefficients by themselves do not directly indicate strength of association. The implied income elasticity declines from 1.2 at the 10th percentile to 0.6 at the 90th percentile of consumption. Thus, F&V consumption does indeed respond substantially to income improvements amongst those consuming the least.
Unconditional quantile regression results
However, a striking pattern apparent from Table 4 is that several of the key covariates of F&V consumption, such as F&V relative prices and the gender of the household head, actually have weaker relationships with F&V consumption at the lower tail than they do higher up in the distribution. The relative price coefficient is only statistically significant at the higher quantiles of the F&V consumption distribution. An association between household head gender and F&V consumption is practically absent among low consumption households, whereas gender associations strengthen along the top half of the distribution to make a 42 g/person/day difference at the 90th percentile. Likewise, the positive association of rural location with F&V consumption strengthens five-fold when moving from the 10th to the 90th percentile of consumption. Household size makes only a small difference at low consumption levels, but has more sizeable implications at the top of the distribution. Taken together, this pattern of weak relationships in the lower quantiles suggests that F&V consumption of the lower tail may be challenging to shift via identification of typical policy levers or specific groups to focus interventions on.
There is one association relating to caste in Table 4 that is stronger at the lower tail than in the rest of the distribution. The OLS results indicated that, once income and other covariates are controlled for, caste makes little difference to F&V consumption. The UQR results suggest to the contrary that there is a negative Scheduled Tribe association with F&V consumption at the lower tail, even after control for income and other confounders. This effect disappears in the upper half of the distribution, resulting in the overall insignificant OLS estimate observed earlier. All else held equal, a household at the 10th percentile of F&V consumption and classified as belonging to a Scheduled Tribe Standard errors in parentheses ***p < 0.01, **p < 0.05, *p < 0.1. Covariate set includes state dummy variables.
has a 10g/person/day lower F&V consumption compared to a household from Other/Upper castes in that percentile. Specifically, this points to the need to address a caste-based inequality that exacerbates very low consumption levels. With respect to methods, this underscores the importance of allowing regression coefficients to vary across the outcome distribution in such settings.
Multilevel regression results
Finally, in Table 5, we provide a summary of the results of our exploratory analysis based on the smaller sample for which district-level road and market infrastructure information is available. In the first column, we present coefficients relating to the infrastructure variables from a model based on OLS with state-level dummy variables, and in the second column we show estimates for infrastructure from the multi-level random intercept model. Note that the OLS model controls for cross-sectional heterogeneity via state level fixed effects, given districtlevel effects are not separately identified from the infrastructure variables measured at the district level. The multi-level model, on the other hand, controls for cross-sectional heterogeneity via district level random effects. The estimates do not reveal consistent evidence for the influence of road infrastructure on F&V consumption -the OLS fixed effects model shows a statistically significant positive coefficient while the multilevel model produces a statistically insignificant coefficient. However, both models suggest a small albeit positive and statistically significant relationship between district-level density of formal agricultural markets and F&V consumption. Although data deficiencies imply that this result should be interpreted with caution, these preliminary estimates suggest that further analysis based on more complete VDSA data when available may be worthwhile.
Discussion and conclusion
Dietary risks constitute the second most important risk factor for death and disability in India, following malnutrition, and in the decade from 2007 to 2017 the contribution of dietary risks to disability adjusted life years in India increased by 35% (Prabhakaran et al., 2018). A recent major prospective cohort study, PURE (Prospective Urban Rural Epidemiology), presented rare evidence from LMIC settings, including India, for the health implications of F&V intake (Miller et al., 2017). Discussing this evidence, the authors argued that "modest" levels of consumption are sufficient to provide high benefits, noting, "… even three servings per day (375 g/day) show similar benefit against non-cardiovascular and total mortality as higher intakes …" (Miller et al., 2017(Miller et al., , p. 2047). Yet, in India, as also highlighted by Minocha et al. (2018), average consumption is well short of these modest targets. In this paper, we have sought to conduct an initial examination of the socio-demographic and economic basis of household F&V consumption in India. Summary of results: To summarise the key results: firstly, not only is the average consumption low, but consumption is also highly unequal, with large proportions of households displaying worryingly poor consumption levels. Secondly, as expected, household income and relative F&V price emerge as important correlates of F&V consumption, but household F&V consumption is also higher where households are headed by females, are rural, or involve agricultural livelihoods, all else being equal. Thirdly, the association of F&V consumption with covariates is seen to vary substantially across the consumption distribution. While the consumption of households with the poorest consumption levels responds most strongly to income growth, many of the other important covariates such as gender, rural location and relative prices display strongest associations at the top of the F&V consumption distribution. Fourthly, caste emerges as an important locus of inequality, mirroring patterns reported with respect to other population welfare outcomes in India. The distribution of F&V consumption across caste categories shows that Scheduled Tribes, long-recognised as the most socio-economically disadvantaged group in India, consume less F&V than others, particularly the 'Other/Upper' caste group. The mean (OLS) regression suggests that this is largely a matter of an income disadvantage. Yet, the quantile regression analysis demonstrates that amongst those with the worst consumption levels, there is indeed a further disadvantage faced by Scheduled Tribes even after controlling for income and other confounders.
We have also found some preliminary evidence that formal agricultural market infrastructure is positively correlated with F&V consumption in India. Note that the VDSA data used here on market density is restricted to formal state-regulated wholesale markets (mandis). These public wholesale markets were set up in the 1960s with the expectation that all agricultural trade must flow through them, thereby restricting the exploitative nature of wholly private food trade. Although subsequent reforms to the Agricultural Produce Market Committee (APMC) have provided impetus for private trade, the regulated public markets remain the mainstays of agricultural marketing in India, however, and our results (including control for statelevel heterogeneity) suggest they have a role to play in bolstering F&V consumption. Chatterjee et al. (2017) find for India that an increase in such formal markets induces competition for farmer's produce and thereby lead to better returns to farmers (as compared to intermediaries). An implication is that higher market density has a role to play in improving farmer supply of F&V. Plausibly, a greater density of formal markets also has a role to play in more equitably distributing produce across consuming regions.
Policy implications: Private-sector led downstream change in F&V value chains in India is occurring in the form of the expansion of supermarkets and modern retail (Reardon and Minten, 2011). However, the relevance of such transformation to F&V consumption of the poor is questionable, and government policy remains the key lever for broadbased change. Policies to improve F&V consumption in India are almost exclusively focused on production and upstream parts of F&V value chains, even though improving supply is only one element in improving F&V consumption.
Following a review of the policy environment for F&V in India, Khandelwal et al. (2019) concluded that not only did agricultural policy relating to F&V focus almost exclusively on economic opportunities for producers, largely ignoring consumer nutritional considerations, but that even the National Nutrition Policy contained few concrete proposals to improve F&V intakes. The National Food Security Act makes provision for cereals for disadvantaged consumers, but not for F&V (Government of India, 2013; Thow et al., 2018).
Our research suggests on a positive note that continuing household income growth in India will improve F&V consumption, particularly amongst those consuming the least. However, given the large consumption deficit compared to the norm and the inequality inherent in relying on income growth, there is also an urgent need for downstream policies closer to the consumer. For example, government-provided nutrition schemes for nutritionally vulnerable sections, such as the Midday Meal Scheme in government schools, have the potential to incorporate more F&V provision. Nakao and Tsuno (2018) show that the Mid-Day Meal Scheme's focus on food grain provision means that F&V provision is minimal. Mainstreaming nutrition education in school curricula and at Integrated Child Development Services centres has also been identified as an important avenue for improving F&V consumption in the long term (Thow et al., 2018). Since income is one of the few variables to exert an influence on the lower tail of F&V consumption, income transfers may offer potential as a policy intervention option. Our results also suggest that all policies will need to tailor strategies to reach Scheduled Tribes in particular, in line with previous literature documenting the numerous barriers faced by this section in accessing public services relevant to nutrition (Thorat and Sadana, 2009). Basic income continues to be a hotly debated topic in India. However, pilot interventions such as the SEWA-Unicef cash transfer scheme that included tribal villages have shown promise with regard to nutrition-related outcomes (Desai and Vanneman, 2016), and may hold potential for improving F&V intake as well.
Limitations and future research: It is worth emphasising that this analysis is of a preliminary and exploratory nature, aiming to focus attention on the important topic of F&V consumption in India, rather than an attempt to establish definitive estimates or causal relationships. A number of drawbacks are recognised, including the cross-sectional nature of the data, the single equation (rather than demand system) approach to estimation, measurement issues including using unit values as proxy for prices, and the household-level nature of the data and analysis that stops short of the individual level perspective typical in the health literature.
Given the number of people with inadequate F&V consumption in India and the importance of F&V consumption to multiple major health outcomes in the country, a broad research agenda focused on improving the availability, affordability and consumption of F&V across the entire population is called for. Following on from the research reported here, the role of market and road infrastructure in improving F&V availability, particularly for poorer sections of the population, is an important area for further investigation. Our research has also suggested special focus on the constraints faced by Scheduled Tribes in accessing F &V. There is also a need to understand how F&V consumption has changed over time and the drivers of such change, in order to obtain a dynamic perspective. Urgent research questions also arise about how various F&V policies and interventions in India, ranging from F&V aggregation and marketing schemes for smallholder producers, to value chain interventions, can be made more nutrition-sensitive and focused on the needs of poorer consumers.
Funding
This study forms part of the Sustainable and Healthy Food Systems (SHEFS) programme supported by the Wellcome Trust's Our Planet, Our Health programme [grant number: 205200/Z/16/Z]. Funding body had no role in the data collection, analysis or interpretation, and no role in the study design or in writing the manuscript. | 7,646.8 | 2020-03-01T00:00:00.000 | [
"Economics"
] |
An Intelligent Intrusion Detection System for Smart Consumer Electronics Network
The technological advancements of Internet of Things (IoT) has revolutionized traditional Consumer Electronics (CE) into next-generation CE with higher connectivity and intelligence. This connectivity among sensors, actuators, appliances, and other consumer devices enables improved data availability, and provides automatic control in CE network. However, due to the diversity, decentralization, and increase in the number of CE devices the data traffic has increased exponentially. Moreover, the traditional static network infrastructure-based approaches need manual configuration and exclusive management of CE devices. Motivated from the aforementioned challenges, this article presents a novel Software-Defined Networking (SDN)-orchestrated Deep Learning (DL) approach to design an intelligent Intrusion Detection System (IDS) for smart CE network. In this approach, we have first considered SDN architecture as a promising solution that enables reconfiguration over static network infrastructure and handles the distributed architecture of smart CE network by separating the control planes and data planes. Second, an DL-based IDS using Cuda-enabled Bidirectional Long Short-Term Memory (Cu-BLSTM) is designed to identify different attack types in the smart CE network. The simulations results based on CICIDS-2018 dataset support the validation of the proposed approach over some recent state-of-the-art security solutions and confirms it a phenomenal choice for next-generation smart CE network.
I. INTRODUCTION
T HE INTERNET of Things (IoT) is a network of devices embedded with software programs and sensors that utilize the Internet to communicate data.The amalgamation of IoT into traditional Consumer Electronics (CEs) has revolutionized it into next-generation CEs with higher connectivity and intelligence.This improved data availability and automatic control in the CE network are made possible by the connectivity of sensors, actuators, appliances, and other consumer devices [1].Nevertheless, CE devices connections are now remotely accessed anytime, anywhere in the world with the utilization of computing devices, including laptops, smartphones, and smartwatches, regardless of the network to which they are connected.These smart devices can be used in various fields, including smart homes [2].
The CE devices have significantly evolved in the last decade.According to a recent study, the CE segment might reach 2,873.1musers by 2025 while the Average Revenue Per User (ARPU) is expected to amount to U.S. 317.10 billion [3].Today, every device may create and share data online, contributing to the CE expansion.The traditional Internet architecture is a complex system with a multitude of network components, i.e., routers, middleboxes, switches, and several layers, etc. due to decentralization [4].Therefore, the traditional network design likewise struggles to adapt to the dynamic nature of modern applications.Moreover, the traditional static network infrastructure-based approaches need manual configuration and exclusive management of CE devices.Potentially, this results in inefficient use of all resources, which exposes systems to a variety of cyberattacks [5].However, it is clear from the current literature that smart CE networks are subject to various subtle, cyber threats, including botnets, brute force, Denial-of-Service (DoS), Distributed Denial of Service (DDoS), and Web attacks [6].The DDoS attack is identified as one of the most dangerous attacks on today's Internet.In DDoS, attackers use many compromised hosts to generate a lot of worthless traffic flow toward the target server, which causes servers to overload quickly by consuming their resources and making them unreachable to its user.Although DDoS attacks have been investigated for more than two decades, still it is the most compelling yet common attack approach in recent times [7].
In this regard, Software-Defined Networking (SDN) and Intrusion Detection System (IDS) can be considered the backbone for the next-generation smart CE network.An IDS is designed to detect threats and malicious behavior to defend the network against it [8].However, for timely detection, the conventional signature-based IDS must continuously be updated and have information tagged as signatures or patterns of prospective threats.Furthermore, it is unable to detect zero-day threats.Hence, Intelligent threat detection techniques should be developed to identify and counteract the most recent cyber threats in smart CE networks, which are constantly expanding with time.However, due to the specific service needs of smart CE (such as low latency, resource limitations, mobility, dispersion, and scalability), attack detection fundamentally differs from conventional approaches in such a network.Therefore, an adaptable, dynamic, well-timed, and cost-effective detection framework against various growing cyber threats is urgently needed for the CE networks [9].
SDN provides higher security, scalability, dynamism, efficiency, and reconfiguration.This is made possible by the built-in SDN architecture, in which the control functions are transferred to a central controller rather than being incorporated into the forwarding devices.This enables a controller to oversee and run a CE network from a broad perspective [10].Motivated by the aforementioned challenges and discussions, this scientific study aims to provide a highly scalable and effective SDN-orchestrated IDS to safeguard the CE networks from severe multi-vector cyber-attacks.Additionally, our proposed detection framework is highly scalable, adaptable, economical, and well-timed while utilising the underlying CE resources without running out of resources.The main contributions of this work are as follows.
• The authors employed SDN and an intelligent Cuda-enabled Bidirectional Long Short-Term Memory (Cu-BLSTM) to quickly and accurately identify threats in CE networks.Section IV presents the experimental setup and evaluation metrics.The results have been discussed in Section IV-A.Finally, the conclusion and future work is provided in Section V.
II. RELATED WORK
The CE is characterized by the integration of physical things into a network in a way that makes them active participants in corporate operations.These objects might include everything from network gear to sensors to home and healthcare products.CE is made up of a range of devices that can be wireless or wired and can be used in several places and networks.According to a recent Juniper report, more than 46 billion IoT devices were in operation by 2021.This includes sensors, actuators, and gadgets and represents a 200% growth over 2016 [11].In any changing computer and network paradigm, IoT becomes an integral part of it.IoT transformation is growing exponentially, leading to significant growth in terms of revenue and automation.Because these devices are created to satisfy the individual demands of users, it is difficult to find a solution that works for everyone [12].With security being a key concern right now, determining the security of these devices is difficult.These products are too diverse to be compared to a single procedure.
SDN and DL are combined for various benefits, including SDN's capacity to increase IoT's efficacy and Network Traffic Control in Vehicular Cyber-Physical Systems (VCPS) [13].Application (AP), control (CP), and data planes (DP), as well as associated south-and north-bound APIs are part of an SDN architecture.By separating the DP and CP, the introduction of SDNs has resulted in a new networking paradigm.The AP only offers a thorough implementation of commands given by the other planes and is strategically distinct from the other planes.While the whole network's decision-making is the responsibility of the CP.It has customizable characteristics that effectively connect the DP with other outside communication technologies like the IoT [14].The CP can allow the dynamic analysis of all data traffic passing across an IoT network.SDN provides bundled services for IoT, including flexibility, scalability, security, and resilience in multi-controller environment [15].Thus, a precise method of network inspection for identifying suspicious activity, threats, and attacks is made possible by the convergence of IoT with SDN, and this integration offers a bright future for such a network.Significant interest has been shown in Deep Learning (DL) in the last decade, and its applications are being investigated across a wide range of study fields, including healthcare, automobile design, and legal implementation [16].Additionally, various DL-based intrusion detection strategies have been put forth by researchers recently to defend against malicious threats and attacks in IoT networks.However, SDN-enabled, Intelligent IDS are still in the early stages of a thorough evaluation of diversified attacks in such networks.
The scientific literature has witnessed a plethora of research contributions made to secure IoT against a scattered array of internal and external attacks.The thorough development of DL-driven IDS is addressed in [17], which is primarily designed to detect common security attacks including port-based attacks and the DOS slowloris and DOS Hulk.To accomplish the intended security goals, the CICIDS2017 dataset is used for experimentation.The authors compared their proposed to existing techniques and exhibit a significant superiority in terms of productivity, with an attack detection accuracy of 98%.Another threat detection framework is proposed in [18] that is composed of two renowned classifiers, i.e., Spider Monkey optimization (SMO), and Stacked Deep Polynomial Network (SDPN).Along with DoS attacks, the designed model is capable to investigate major commonly occurring attacks such as User-to-Root (U2R) attacks, Remoteto-Local (R2L) attacks, probe attacks, etc.The designed framework is trained on the NDL-KDD dataset, and its performance is compared with benchmarked schemes.The model has significantly achieved 99.02% accuracy.
Authors have specifically designed an IDS to carefully detect DDoS attacks in large-scale IoT networks [19].The system is evaluated on comprehensive performance metrics where it remarkably achieves high attack detection accuracy.The authors of [20] created a threat intelligence technique for industrial environments.The size of the UNSW-NB15 and power system datasets was reduced in this work using Independent Component Analysis.Researchers have combined LSTM with Variational Auto Encoder (VAE) technique to design another attack detection scheme for IoT.The system is effectively trained on ToN-IoT and IoT-Botnet datasets to enhance the learning experience of the proposed system.The system has proven its efficiency on an analytical performance scale regarding attack detection accuracy, training time, etc. [21].Blockchain and DL-based solutions are also regarded as the best choice for threat detection in IoT.Authors have proposed a threat detection scheme based upon the core concepts of the Gated Recurrent Unit (GRU) and Deep Variational Auto Encoder (DVAE) technique.The proposed actively proves its efficiency against potential adversaries [22].In [23], the authors used Multi-Layer Perceptron (MLP) and Natural Language Processing (NLP) to discriminate between crucial and non-crucial posts on the dark Web.Another intrusion detection approach, capable of detecting the presence of cyber threats in IoT, is presented [24].The model is based on Convolutional Neural Network (CNN) classifier and is trained on the BoT-IoT dataset.CNN is also employed in another threat detection scheme proposed in [25], The model is specifically designed for botnet attacks, zero-day attacks, and DDoS attacks.The initial training of the proposed model is performed at the MQTT-IoT-IDS2020 dataset, and the run time performance is evaluated in terms of accuracy, precision, and Recall.CNN is integrated to design another anomaly detection framework purely designed to investigate suspicious entities over the network.The model is evaluated in comparison with some relevant security solutions on a performance scale of threat detection accuracy [26].The authors of [27] designed an ensembled model consist of naïve bayes, QDA, and ID3 classifiers and achieved 95.10% accuracy.Further, in [28], the authors used federated learning based NIDS namely SecFedNIDS to protect IoT networks from poisoning attacks.The authors achieved detection accuracy of 97.03% under CICIDS-2018 dataset.Another intrusion detection scheme using an ensemble approach consisting of ET, RF, and DNN is proposed in [29] to combat threats in IoT and Fog environments.BoT-IoT, IoTID20, NSL-KDD, and CICIDS-2018 datasets are used for a thorough evaluation of the model.The system significantly proves its effectiveness by achieving 98.21% accuracy on CICIDS-2018 dataset.The existing literature is summarized in Table I.
A. Network Model
SDN is considered as a well-established method for building integrated networks in recent years.Its architecture separates the data planes and control planes, allowing simplicity and flexibility.Furthermore, in traditional networks, each router in the network can only perceive the network's local state.The lack of a full overview of the whole network makes it challenging to construct a potentially powerful defensive mechanism against cyber threats.SDN, on the other hand, provides a global network perspective and centralized control capabilities, making network statistics easier to obtain.In SDN, the control plane manages routing choices, data transfers, and traffic monitoring via application techniques.The data plane incorporates many CE devices, such as intelligent devices, sensors, and other wireless technologies.The proposed Cu-BLSTM detection model is placed in the control plane for the following reasons; First and foremost, it is entirely adaptable and therefore capable of changing functionality.Secondly, it Authorized licensed use limited to the terms of the applicable license agreement with IEEE.Restrictions apply.
B. Proposed DL-Driven BLSTM-Based Framework
A DL-driven Intelligent framework for threat detection in the CE network is provided, incorporating Cu-BLSTM.A lowcost, versatile, and powerful detection module is designed to detect threats across CE networks.Fig. 2 depicts a comprehensive workflow of the proposed acquisition module.Cu-BSLTM consists of two layers with 200 and 100 neurons.
In addition, we added one dense layer with 30 neurons.The proposed work utilized Relu as the activation function (AF) for all levels except the output layer.SoftMax, on the other hand, is employed in the output layer.The Categorical Crossentropy (CC-E) is used as a loss function (LF).Tests are run up to 10 epochs with 64 batch sizes to acquire effective findings.We utilized Cuda-enabled versions for GPU processing for an enhanced performance.Furthermore, the authors used the Keras framework, which is the foundation for Python TensorFlow.Cuda is a GPU-enhanced library that enables repeated readings, resulting in quicker multiplication of matrices.Moreover, we have used Cu-DNN and Cu-GRU as comparison models that have been trained and evaluated in the same environment.Cu-DNN consists of four dense layers with 100, 75, 50 and 30 neurons, respectively.Further, Cu-GRU comprises four layers of GRU with neurons of 500, 400, 300, and 100, respectively, with one dense layer of 03 neurons.
C. Cu-BLSTM
The proposed work used the Cu-BLSTM model for effective and timely threat detection in smart CE networks.An Artificial Neural Network (ANN) type called Recurrent Neural Networks (RNN) offers much promise for learning from earlier time steps [12].RNN utilizes Back Propagation Through Time to constantly learn from previous timesteps.Standard RNN cannot perform better when timesteps overlap.The RNN employs feedback loops and links hidden units to preserve information over time.It can take consecutive inputs of any length and produce fixed-length outputs because of such features.The back-propagation causes error signals to disappear or explode, causing weights to fluctuate, resulting in poor system performance and gradient vanishing problems.Analysts focused on Long-Short-Term Memory (LSTM), as LSTM blocks can save information for a long time.RNN with LSTM blocks was designed to solve this issue.However, to address the shortcomings of the LSTM model, researchers improved it and is known as BLSTM.By traversing time steps both forward and backward, BLSTM makes the best use of the data.To generate two layers side by side, the architecture copies the first recurrent network.The input is sent to the first layer in its original form, while the second layer receives a copy that has been reversed.Complete detail of the BLSTM is given by the authors in [30].The following are the transition functions for Cu-BLSTM gates: where As we used the softmax function in the output layer for multiclass classification.It is calculated by using equation (11).Further, the working of the proposed detection framework is shown in Algorithm 1.
IV. EXPERIMENTAL SETUP AND EVALUATION METRICS
The proposed model is trained using the Python version "Python 3.8" and using Keras.In addition, to enable comparable processing, the PC server is coupled with TensorFlow and the GPU-based package.The test was carried out using an Intel Core i7-7700 HQ CPU with a 2.80 GHz processor, along with a RAM 0f 16 GB, and a 6 GB, 1060 GPU.The proposed IDS is evaluated using CICIDS-2018 [31].The dataset consist of one benign class along with various classes of attacks, i.e., Brute-force (XSS), DDoS, DoS, SSH, etc.However, in this work, we used seven classes of the dataset.Further, we pre-processed the dataset by using various techniques.First, we deleted all lines with empty values and non-numerics since they may have an impact on the performance of the test model.Since DL algorithms primarily handle numerical data, we used the label encoder, i.e., sklearn, to transform any non-numerical values into numerical values.Furthermore, one hot encoding is done on the output label since segment order may affect model performance, resulting in unforeseen
A. Results and Discussion
This scientific study employed 10-fold cross-validation, and the findings are displayed in Table II to explicitly demonstrate unbiased outcomes.For a better understanding, each fold's results are shown in this section.The confusion matrix depicts the model's performance in the test data set.Data that is binary or multi-category.It is advantageous to assess the receiver's operational element's accuracy, precision, memory, and curve (ROC).The confusion matrix of the proposed model is depicted in Fig. 3.The figure is evident that the proposed model identifies all five classes properly.
Further, the ROC curve corrects the given data so that positive and negative positive values may be compared.The extent of segregation is mostly determined by the success of various class division issues, as demonstrated by the ROC.The ROC curve structure is located between the TP and FP levels.The detection accuracy reveals the Cu-BLSTM efficiency and performance.Fig. 5 depicts the ACC, PN, RL, and FS of all three models.The proposed model achieved 99.57% ACC with 99.62% PN.Further, the proposed model is having FS and RL of 99.23% and 99.39% respectively.The figure is evident that the proposed Cu-BLSTM model outclassed the baseline models.We have further provided the per-class accuracy of all three models in Table III Other performance assessment methodologies, such as FP rate, FO rate, FD rate, and FN rate are also studied to properly evaluate the proposed model.Fig. 6 demonstrates that our proposed model has values of 0.0033, 0.0022, 0.0033, and 0.0029 percent for the FP rate, FN rate, FD rate, and FO rate.Furthermore, Cu-GRU outperforms Cu-DNN in terms of such metrics.For a thorough assessment, we have further calculated the TPR, TNR, and MCC.These values are obtained using the uncertainty matrix for comprehensive analysis.The proposed model, i.e., Cu-BLSTM yielded improved outcomes than Cu-DNN and Cu-GRU.Fig. 7 depicts the performance of these models, where it is clear that the proposed model achieved values of 99.15, 99.34, and 99.31 percent respectively, thus proving the efficacy of the proposed model.Furthermore, we have provided the testing time of the proposed model in Fig. 8.We did not considered the training time as it is mostly done offline.and [29], to validate its efficiency.The comparison is made in terms of ACC and the details are provided in Table IV.The table is evident that the proposed model outperformed the existing detection techniques, hence proving its efficiency.
V. CONCLUSION
In this article, to protect consumer electronics network, we proposed an intelligent intrusion detection system based on software-defined networking-orchestrated deep learning approach.Specifically, software-defined networking architecture was integrated with consumer electronics network to handle its distributed architecture and heterogeneous consumer electronic devices.Then, an IDS based on cuda-enabled bidirectional long short-term memory was proposed and deployed at control plane to enhance threat detection mechanism.We proved the effectiveness of the proposed IDS in terms of accuracy, precision and speed efficiency through experimental evaluation on the CICIDS-2018 dataset.We also compared the performance of the proposed IDS against some recent state-ofthe-art technique.In the future we aim to train the model on different datasets to further improve intrusion detection in such networks.Finally, we endorse DL-based Intelligent models for efficient threat detection in next-generation smart consumer electronic networks.
Fig. 4
depicts the ROC of the proposed Cu-BLSTM model, demonstrating the efficiency of the proposed model.The authors further provided the ACC, PN, RL, and FS of the CU-BLSTM model along with the baseline techniques.
Fig. 8 depicts the speed efficiency of the Cu-BLSTM and baseline models.The Cu-BLSTM model achieved a testing time of only 17.40 ms.On the other hand, Cu-DNN is having a better testing time of 25.2 ms than Cu-GRU.Finally, the performance of the proposed Cu-BSLTM model is compared with recent threat detection techniques from existing literature [27], [28],
TABLE III PER
-CLASS ACC OF CU-BLSTM AGAINST BASELINE MODELS
TABLE IV COMPARISON
OF CU-BLSTM WITH EXISTING LITERATURE | 4,534.6 | 2023-11-01T00:00:00.000 | [
"Computer Science"
] |
Model Based Simulation and Genetic Algorithm Based Optimisation of Spiral Wound Membrane RO Process for Improved Dimethylphenol Rejection from Wastewater
Reverse Osmosis (RO) has already proved its worth as an efficient treatment method in chemical and environmental engineering applications. Various successful RO attempts for the rejection of organic and highly toxic pollutants from wastewater can be found in the literature over the last decade. Dimethylphenol is classified as a high-toxic organic compound found ubiquitously in wastewater. It poses a real threat to humans and the environment even at low concentration. In this paper, a model based framework was developed for the simulation and optimisation of RO process for the removal of dimethylphenol from wastewater. We incorporated our earlier developed and validated process model into the Species Conserving Genetic Algorithm (SCGA) based optimisation framework to optimise the design and operational parameters of the process. To provide a deeper insight of the process to the readers, the influences of membrane design parameters on dimethylphenol rejection, water recovery rate and the level of specific energy consumption of the process for two different sets of operating conditions are presented first which were achieved via simulation. The membrane parameters taken into consideration include membrane length, width and feed channel height. Finally, a multi-objective function is presented to optimise the membrane design parameters, dimethylphenol rejection and required energy consumption. Simulation results affirmed insignificant and significant impacts of membrane length and width on dimethylphenol rejection and specific energy consumption, respectively. However, these performance indicators are negatively influenced due to increasing the feed channel height. On the other hand, optimisation results generated an optimum removal of dimethylphenol at reduced specific energy consumption for a wide sets of inlet conditions. More importantly, the dimethylphenol rejection increased by around 2.51% to 98.72% compared to ordinary RO module measurements with a saving of around 20.6% of specific energy consumption.
Introduction
The modern industrial world continues to produce a wide range of harmful organic and non-organic compounds. These pollutants are usually disposed of into a variety of water sources, which in turn have a serious impact on the biological ecosystem [1,2]. This study focuses on phenolic compounds, and especially dimethylphenol, due to their existence in several industrial effluents such as those from refineries and petrochemical plants [3]. Dimethylphenol contains a stable benzene ring, which increases its resistance to biological decomposition and therefore lingers in the environment for a long period of time [4]. Moreover, the hydrophobicity property of phenolic compounds yields the formation of toxicological organic and free radical species, which are very harmful [5]. Several health agencies rate phenol and phenol derivatives as toxic compounds (even at low concentrations). Dimethylphenol is therefore tightly controlled by legislation due to its carcinogenic properties [6]. For example, the ATSDR (Agency of Toxic Substances and Disease Registry) [7] has restricted dimethylphenol concentration in surface water to 0.05 ppm.
UV/H 2 O 2 technology has been used as the prominent treatment process for the elimination of phenol and its derivatives from wastewater. However, this technology consumes a significant amount of energy and with increased carbon concentration of the reused water [8].
Reverse Osmosis (RO) membrane is a well-known water treatment method, which has been used extensively in seawater desalination [9]. The RO process has confirmed its efficiency in terms of low cost of operation and low energy consumption compared to thermal process methods [10]. The use of RO has therefore been extended to treat wastewater from various other industries [11]. For example, RO was magnificently used to eliminate heavy metals such as copper, nickel, acrylonitrile, sulphate, ammonium, cyanide and sodium [12,13]. Generally, the RO process and especially spiral wound membrane method remains the most promising treatment method for the removal of several highly toxic compounds. Whilst various published studies have confirmed RO's efficiency for treating secondary effluents at low cost, the challenge for enhancing its performance for rejecting toxic compounds from wastewater is yet to be explored fully [14,15].
The efficiency of a spiral wound membrane module to remove such harmful compounds is dependent on the membrane type, design parameters and control variables such as the feed pressure, flow rate, concentration and temperature. Several attempts can be found in the literature for improving the efficiency of seawater RO process using optimisation methods [16]. However, only a few of such attempts have been carried out to optimise the membrane design parameters for wastewater treatment.
Boudinar et al. [17] enhanced the efficiency of a ROGA-4160HR spiral wound membrane module used for seawater desalination using a geometric optimisation for one set of input conditions. Sharifanfar et al. [18] assessed the influence of channel height on the permeate flux of a microfiltration membrane for a pomegranate juice clarification process, and concluded that the feed channel height had an impact on the permeate volume. Karabelas [19] studied the effect of membrane sheet dimensions of a spiral wound membrane module used to desalinate seawater based on a fixed effective membrane area efficiency. He used an optimisation methodology based on the geometric characteristics of feed-side spacers. Gu et al. [20] explored the influence of the winding geometry of a spiral wound membrane RO module used for seawater desalination on the total process performance and energy consumption. The parameters studied included the membrane dimensions, number of membrane leaves, centre pipe radii, height of feed and permeate channels. Ruiz-García and de la Nuez Pestana [21] considered the impact of different feed spacer geometries on three different full-scale spiral wound membrane modules. They analysed the performance of membrane elements for wide ranges of feed concentration, pressure and flowrate. Toh et al. [22] studied the 3D feed spacer geometries of a spiral wound membrane RO module with various degrees of "floating" characteristics via the implementation of Computational Fluid Dynamics (CFD) simulations to explore the mechanisms that result in shear stress and mass transfer improvement. Luo et al. [23] presented a hybrid framework of CFD model to explore the optimal design of feed spacer in a non-woven spiral wound membrane RO module and analyse the influence of industrial operating conditions on the performance of brackish water RO process.
This research focuses on exploring the removal of dimethylphenol from industrial effluents using a spiral wound RO membrane module. Al-Obaidi et al. [24] studied the ef-fect of membrane dimensions including membrane length, width, and feed channel height on the retention of dimethylphenol from synthesised wastewater and the total consumed energy of a single spiral wound membrane module. They used a simulation model based on the solution-diffusion principle. However, this was carried out for only one set of inlet parameters of 6.548 × 10 −3 kmol/m 3 , 13.58 atm, 2.583 × 10 −4 m 3 /s, and 31.5 • C of feed concentration, pressure, flow rate, and temperature, respectively. A gPROMS software optimisation tool has been used in this study to optimise the removal of dimethylphenol from wastewater by considering the membrane design parameters as the decision variables. However, the results of Al-Obaidi et al. [25], which were obtained using gPROMS software, were based on a discrete solution of a single objective function. Additionally, gPROMS software cannot provide a set of alternative solutions that trade various objectives against each other. This is to say that it would be interesting to explore the results of a multi objective optimisation approach. This should provide a set of cooperative optimal solutions (alternatives) as confirmed by Savic [26]. The author compared the feasibility of single and multi-objective optimisation methods applied in water distribution design and affirmed that a multi-objective function optimisation-based model can be a useful tool for formulating alternative objectives especially for such systems with high uncertainty, and which can readily be used for exploring trade-off opportunities. This prompted the use of Genetic Algorithms (GA) as an evolutionary computation technique [27] for finding one global solution.
To improve the performance of GAs in identifying global solutions, several methods can be used to solve multimodal problems. They include crowding, fitness share, clearing, multi-national GA and species conserving [27,28]. More specifically, the Species Conserving Genetic Algorithm (SCGA) can generate several solutions of complex optimisation problems [29]. For this reason, SCGA has been selected for solving the proposed optimisation problem.
The Use of Genetic Algorithms for Developing a Global Optimisation Solution
Traditional Genetic Algorithm (GA) based optimisation methods have been implemented in several applications including wastewater treatment. For example, Al-Obaidi et al. [30] applied traditional GA to find the optimal chlorophenol rejection from wastewater using a single spiral wound RO membrane module. Al-Obaidi et al. [31] researched the best configuration of multistage RO processes based on permeate-reprocessing to reject N-nitrosodimethylamine (NDMA) from industrial effluents. To the best of the authors' knowledge, the effect of a wide set of operating parameters on the removal of dimethylphenol from wastewater via a simulation of a single spiral wound RO membrane module has not been yet addressed in the literature.
This research attempts to use, for the first time, the Species Conserving Genetic Algorithm (SCGA) for significantly improving the RO process performance for variable inlet conditions associated with variable membrane design parameters that will yield a higher dimethylphenol rejection from wastewater at low energy consumption. The main output of this work is the generation of multiple optimal solutions for any set of operating data [28]. The net effect of this is the ability to select the most suitable solution based on process requirements. This paper begins with an overview of an earlier mathematical model developed by the same authors [24]. This was successfully applied to simulate the transport phenomenon of permeate and solute via the membrane texture of a spiral wound membrane module. A comprehensive analysis of the validated results against a wide range of experimental data from the literature is provided. The effect of membrane design parameters, such as membrane length, width, and feed channel height, are then assessed in respect of the rejection of dimethylphenol from industrial wastewater. This is used for two different sets of variable inlet conditions, as well as the required energy consumption for given operating conditions. Finally, the process model is developed in the gPROMS software, and the multi-objective optimisation problem is implemented using the species conserving genetic algorithm written in C++. This yields the most economical membrane design parameters, thus providing the highest dimethylphenol rejection at lowest energy consumption. In this regard, it is important to clarify that the implementation of the species conserving genetic algorithm based a model developed to allocate the optimal values of membrane dimensions of a single spiral would RO process that would maximise the dimethylphenol removal from wastewater at the lowest specific energy consumption has not been addressed yet. Therefore, this study attempts to resolve this challenge.
Modelling of a Spiral Wound Membrane Module of RO Process
Sundaramoorthy et al. [32] developed an analytical model based on the solutiondiffusion model, which has been originally deployed by Srinivasan et al. [33] to investigate the performance of a single spiral wound RO membrane module for the removal of dimethylphenol from synthesised wastewater.
The model used in this study was first established by Al-Obaidi et al. [24], who used it to distinguish the transport phenomenon and allow the consideration and estimation of the required energy consumption. The model assumptions are as follows: Table A1 of Appendix A presents the model and physical property equations for a single spiral wound membrane RO module to simulate and optimise the rejection of dimethylphenol from its aqueous solutions. As in many references, the water relationships anticipated by Koroneos et al. [34] have been used to calculate the physical properties of low concentration dimethylphenol aqueous solutions.
The nonlinear algebraic correlations of the proposed model (given in Table A1 of Appendix A) can be presented in the following compact formula.
In this form, x, u, and v represent the set of all algebraic control variables, decision variables, and constant parameters, respectively.
The research outlined in this paper has included the calculation of specific energy consumption. This would be considered as a significant improvement made in this study since both [32,33] did not include the energy consumption parameter in their model. Thus, the most interesting question of this research is how to attain a higher rejection rate at a lower energy consumption. However, it is firstly important to validate the model of Al-Obaidi et al. [24] (provided in Appendix A) against the experimental data of [33] to quantify its consistency.
Experimental Setup
For the convenience of the readers, we highlight the experimental work of Srinivasan et al. [33] here briefly. They conducted extensive experiments to assess the feasibility of the RO process to reject dimethylphenol from synthesised dilute wastewater of different concentrations. Specifically, they used a single spiral wound membrane module of thin film composite RO membrane. The features of the membrane module are chosen the same as from [33] and are given in Table 1. The applied feed concentration of dimethylphenol varies between 0.819 × 10 −3 and 6.548 × 10 −3 kmol/m 3 . Additionally, the operating feed flow, pressure, and temperature were selected between 2.166 × 10 −4 to 2.583 × 10 −4 m 3 /s, 5.83 to 13.58 atm, and 29 to 32.5 • C, respectively. The water and dimethylphenol transport parameters throughout the membrane and friction factor (A w , B s and b) are also given in Table 1. Table 1. Membrane characteristics, dimensions, and transport parameters [33].
Parameter
Ion Exchange, India Ltd. 1 Figure 1 depicts a detailed diagram of the corresponding experimental setup of a single spiral wound membrane RO module with the corresponding items. Figure 2 displays a 3D representation diagram of the flat sheet membrane. The inlet wastewater of aqueous solution of dimethylphenol splits into two streams of permeate (collected at the permeate channel) and retentate of high concentration dimethylphenol. This is due to supplying a higher pressure than the osmotic pressure, which helps high quality water to penetrate through the membrane pores. The experimental data of [33] will be used in the next section to validate the new model, which equations are shown in Table A1 of Appendix A.
Model Validation by Al-Obaidi et al.
The experimental data of Srinivasan et al. [33] were used to validate the model developed (shown in Table A1 of Appendix A) by Al-Obaidi et al. [24], which is used in this study for simulation and optimisation. Table A2 of Appendix A shows such data of [33] and the model calculations for each set of inlet conditions. The results clearly show that insignificant percentage errors exist between the model calculations and experimental data.
Problem Description
In this section, the optimum dimethylphenol removal and the minimum energy consumption are simultaneously investigated via the optimisation of membrane dimensions, including the length, width, and feed channel height of a single membrane module of RO process. The optimisation study is carried out using the SCGA platform, based on the model correlations and the restricted upper and lower bounds of membrane module design parameters. The optimisation considered is based on a fixed membrane area of 7.84 m 2 . This constraint was chosen to meet the manufacturer specification of membrane area and technical requirements. The decision variables are selected between the upper and lower bounds and taken as 0.5-1 m of membrane length, 5-15.69 m of membrane width, and 5.93 × 10 −4 -1 × 10 −3 m of feed channel height. Additionally, the model parameters of permeate and dimethylphenol transport parameters and friction factor (A w , B s and b) are assumed constant (Table 1). Most importantly, the optimisation is carried out for several operating parameters of feed flow rate, concentration, pressure and temperature. These are used to investigate the appropriate operating conditions including the optimum membrane design parameters commensurate with the optimum efficiency of a single spiral wound membrane RO module.
The multi-objective function is targeted to simultaneously maximise the dimethylphenol removal and minimise the energy consumption. This is represented mathematically as follows:
Max and Min
Rej, EC, respectively L, W, t f Subject to: Equality constraints: Inequality constraints: Note, Al-Obaidi et al. [24] used the same optimisation problem formulation but used the gPROMS Model Builder to solve the optimisation problem using Point Optimisation technique that considers the Nonlinear Programming problems (NLP). This method is mathematically comparable to solve an algebraic problem with bearing in mind neither minimising or maximising a nonlinear objective function exposed to equality and inequality nonlinear constraints of upper and lower bounds of process operation. The optimisation problem is therefore solved by controlling a set of optimisation continuous or discrete control variables. Thus, the suitable control variables can be estimated to fit the projected objective function. However, gPROMS Model Builder cannot simultaneously solve several objective function and therefore, the multi objective function is solved after running the optimisation for an individual objective function and incorporating the second objective function as a constraint.
In this work we used a different optimisation technique based on SCGA as described below that enables to solve multi objective functions in one run.
Description of Species Conserving Genetic Algorithm (SCGA)
A species is an important term in SCGA [27] that represents a set of similar individuals. Species are identified from a population. Specifically, a species s i is dominated by its species seed x * , which has the greatest fitness value (objective value), if for everyone y ∈ s i d(x*,y) < r s , and d(*,*) represents the distance between two individuals, and r s is the species radius. A possible species distribution in a 2-D space is illustrated in Figure 3. A species contains some individuals and is a part of the practicability section. However, some individuals may be connected to many species. The pseudo codes of SCGA [27] are described in Figure A1 of Appendix A. In this regard, G(t) signifies the population at time t, and x signifies the species set. A population is dynamically divided into subgroups, called species. Each species seed is a possible solution. The concept of SCGA is to locate species and make them survive in the next generation. Three operators in SCGA are added into a traditional GA. This in turn represents the main differences existing between traditional GA and SCGA. The three SCGA operators, are more particularly discussed below.
•
Identifying species seeds: This operator was developed to explore all the possible species from the current population. Firstly, all the individuals are set as untreated.
Then, a best untreated individual is chosen to be a species seed of a species. An individual will be marked as the member of the species if its distance to the species seed is smaller than the species radius, and will therefore be marked as "processed". This practice is recurrent until all the individuals have been marked.
•
Conserving species seeds: The selected species seed is imitated back to the population and will replace the nearest individual if it is better the individual. The goal of this process is to ensure that all the species can continue in the next generation.
• Identifying global solutions: This is achieved by choosing the top species from xs due to saving the best individual in a species in the set xs. A threshold r f (0 < r f ≤ 1) is used to find the global solutions. A species seed x is therefore treated as a solution, if: f min and f max are the worst and best objective values. The defaulting value of species radius is established as 1 in this research. Figure A1 of Appendix A illustrates a typical genetic algorithm when the above three operators are removed. This includes three genetic operations: selection, crossover and mutation. After those operations, new individuals from the crossover and mutation will be evaluated. Most importantly, SCGA can provide multi optimal solutions for a single objective function. Therefore, the multi-objective functions presented in Section 2.4.1 should be calibrated to represent a single objective function. This is readily achieved by developing a new formula (Equation (4)) based on weighting factors to arrive at a single objective function derived from two objective functions commensurate with SCGA requirements.
W 1 and W 2 are the weighting parameters. Thus, the main aim of this optimisation is to maximise the objective function f in Equation (4). However, we assume that the two objectives are at the same level of importance. In other words, the maximum value Rej (overall rejection) is set as 1 and the maximum value of EC is about 3 for simplification purposes. Thus, let W 1 to be 1, and W 2 be 3, which denotes that both objectives are at an identical level of importance and have a similar involvement of the system objective.
Steady-State Simulation
In this section, the model presented in Table A1 of Appendix A and validated in Section 2.3 is implemented to carry out a simulation to forecast the effect of inlet parameters on dimethylphenol rejection from wastewater for a spiral wound membrane RO module. Firstly, the simulation indicates that the rejection parameter and total water recovery grow due to a rise in feed pressure at any fixed operating feed flow rate, concentration and temperature (for instance, Experiments 2 to 4, Table A2 of Appendix A). This is due to the water flux increase, which in turn is related to the increasing pressure supplied (Equation (A1), Table A1 of Appendix A). This reduces the permeate concentration of dimethylphenol despite insignificantly enhancing the removal of dimethylphenol. The consequence of this is (insignificant) reduction of the energy consumption as outlined in Equation (A29), where any increase of water production serves to limit the energy consumption despite the increase of the operating pressure. Statistically, rising the feed pressure from 9.71 to 13.58 atm at fixed other inlet conditions, would result in 0.07% decrease of energy consumption.
Secondly, simulating the process at fixed feed flow rate, pressure and temperature and increased feed concentration (for instance, Experiments 12 and 15, Table A2 of Appendix A) yields a significant decrease of total permeate recovery and an increase in the removal of dimethylphenol. This might be ascribed to the growth of the osmotic pressure because of the increasing feed concentration, which generally decreases the water flux. The simulation results of Experiments 12 and 15 (Table A2 of Appendix A) showed that the osmotic pressure has been increased from 0.84 to 1.065 atm, which corroborates the validity of the reason above. However, this increased dimethylphenol rejection can be elucidated by the fact that increasing bulk concentration is not necessarily comparable to the increase of permeate concentration, as this is due to the increasing operating concentration. It is argued therefore, that the rejection parameter increases, as outlined by Equation (A28) in Table A1 of Appendix A, due to the lifting of the feed concentration. More generally, any growth of feed concentration is associated with an increase of energy consumption.
Finally, the increase of feed flow rate at constant inlet parameters of operating pressure, concentration and temperature (for instance, Experiments 8 and 17, Table A2 of Appendix A) results in an insignificant rejection increase the but with a noticeable reduction of water recovery. This phenomenon can be ascribed to the reduced resident time of the fluid inside the feed channel, which itself is due to the increased feed flow rate. Such finding can be described by the high frictional pressure drop that reduces the water flux, and in turn increase the energy consumption required.
Influence of Membrane Design Parameters
The effect of membrane design parameters, which include membrane length, width and feed channel height as shown in Table 1 (as per manufacturer's specification), is assessed in respect of dimethylphenol removal, water recovery, and the consumed energy at two particular sets of operating parameters. These were the same as those experimental data used by Srinivasan et al. [33] at the highest and lowest dimethylphenol rejections achieved. More specifically, the highest rejection of 97.3% is commensurate with 6.548 × 10 −3 kmol/m 3 , 13.58 atm, 2.583 × 10 −4 m 3 /s, and 31.5 • C of inlet concentration, pressure, flow rate and temperature, respectively. The lowest rejection of 90.2% is commensurate with 0.819 × 10 −3 kmol/m 3 , 2.166 × 10 −4 m 3 /s, 5.83 atm, and 32.5 • C, respectively. The next sections present the simulation results in detail at the two selected sets of operating conditions.
Influence of Membrane Dimensions of Length and Width
The membrane dimensions of length and width are altered at fixed volume and membrane area. This is done to adjust the flow patterns of the fluid inside the feed channel and will be used to assess the extent of dimethylphenol removal, permeate recovery, and energy consumption. The geometrical amendment of the selected membrane (Ion Exchange, India) is achieved at the two selected operating parameters (provided in Section 3.2) of inlet concentration, pressure, flow rate and temperature. Figures 4 and 5 show the effect of variable membrane dimensions at the fixed membrane area and feed channel height of 7.84 m 2 and 0.8 × 10 −3 m, respectively on dimethylphenol removal for the two sets of operating conditions. Figure 6 presents the effect of membrane width at fixed membrane area and feed channel height on the permeate recovery and energy consumption. A slight increase was noticed in dimethylphenol rejection as a result of increasing membrane length at fixed area and feed channel height (Figures 4 and 5). Clearly, any decrease in membrane width would enhance dimethylphenol rejection. This is owing to a rise in the bulk velocity inside the membrane module, which in turn is due to the membrane width decreasing (Equation (A16), Table A1 of Appendix A) or the membrane length increasing at fixed membrane area. Both result in a reduction of the wall membrane concentration and concentration polarisation, which yield less solute flux and more dimethylphenol rejection. Figure 6 confirms an increase in permeate recovery due to an increase in membrane width at fixed membrane area. This reduces the pressure drop, which rises the permeate flux through the membrane, and this applies to both sets of tested operating conditions. Similarly, the energy consumption required decreases due to the membrane width increasing-this is especially so for the second set of operating conditions ( Figure 6). The reason for this is the increase of permeate recovery due to the increase in the membrane width. Another interesting point here is that running the process at the second set of operating conditions (the lowest rejection) can generate a higher permeate flux as a response of the membrane width variation compared to the first set of operating conditions (the highest rejection). Karabelas et al. [19] confirmed that using short sheets of membrane can potentially improve the recovery performance of low and high-pressure membranes. Figure 7 shows the influence of feed channel height on dimethylphenol removal and energy consumption for the two sets of inlet conditions mentioned in Section 3.2. This simulation is carried out at the fixed membrane area of 7.84 m 2 and variable module volume and fixed feed conditions. Interestingly, the increase of the feed channel height actually reduces dimethylphenol rejection but increases the energy consumption (Figure 7). The cause of this is a rise in the height of feed channel and the pressure drop. The decline in water flux through the membrane, thus dimethylphenol rejection, is the main reason for the rise in the consumption of energy (Equation (A29) in Table A1 of Appendix A). Seemingly, running the process using the second set of operating conditions significantly decreases dimethylphenol rejection compared with doing the same with the first set of operating conditions. This is due to lower water flux for the second set of operating conditions when the feed channel height increases. This results in a higher concentration of the pollutant (dimethylphenol) at the permeate channel. In contrast, the first set of operating conditions cause high operating pressure and therefore higher water flux despite feed channel height variation. These results are corroborated by Sablani et al. [36], who confirmed that the feed channel height of the spiral wound membrane module has a substantial influence on the performance indicators of the RO seawater desalination process. The above results readily provide a clear motivation for an optimisation study for analysing the precise impact(s) of all membrane design parameters within the objective functions and operational constraints. The optimisation study is discussed in more detail in the next section.
Optimisation Results Based on a Species Conserving Genetic Algorithm (SCGA)
The optimal values of the membrane length, width and feed channel height (decision variables) obtained by SCGA are given in Table 2 for several selected control variables of inlet concentration, pressure, flow rate and temperature, respectively. Table 2 presents several optimal solutions for each set of operating parameters, while the best solution is highlighted (Bold). This is done by a simple comparison of the rejection and energy consumption obtained including the proposed solutions. However, the highlighted optimal solutions of different operating conditions show the consistency of these solutions obtained by SCGA compared to the experimental results of Srinivasan et al. [33]. In this regard, Al-Obaidi et al. [24] confirmed the optimum solution of the same membrane as 9.745 m, 0.805 m, and 5.93 × 10 −4 m of membrane width, length and feed channel height, respectively using the gPROMS suite optimisation tool. The optimised solution of Al-Obaidi et al. [24] is quite close to the output of the SCGA solution 3 for the operating conditions under case 2, as presented in Table 2. However, SCGA has generated multiple optimised solutions compared to only one solution provided by the gPROMS optimisation tool. This is the main advantage of using SCGA for multi-objective optimisation compared to gPROMS, which handles a single objective function as part of the optimisation process. Table 2 shows the original dimensions of the membrane selected by Srinivasan et al. [33] and a comparative SCGA analysis of dimethylphenol rejection and energy consumption results. Therefore, optimal solution 1 of the operating conditions (case 1) yields the best process performance results against the experimental data of [33]. In this respect, Table A2 of Appendix A shows the simulation results by means of the optimised solution 1 of the membrane design parameters. This in turn yields the optimised recovery, dimethylphenol rejection and the percentage of energy saving compared to the original membrane design parameters of Srinivasan et al. [33], for each set of operating parameters.
It can readily be seen from the new optimisation results that the proposed methodology can be used to achieve the maximum dimethylphenol rejection at the minimum energy consumption for all the inlet parameters of [33]. The corresponding energy saving varies between 0.79% to 20.66% based on the fundamental set of inlet parameters (Table A2 of Appendix A). In this regard, the minimum and maximum optimum energy consumptions are 1.24 and 1.695 kWh/m 3 , respectively, based on the fundamental set of inlet parameters. Additionally, the rejection of dimethylphenol has been increased between 2.51% to 5.87% to attain 98.72% and 98.14%, respectively, based on the fundamental set of inlet parameters (Table A2 of Appendix A. These results are comparable to the maximum energy saving of 19.2% reported by Al-Obaidi et al. [24] for the same membrane using gPROMS. Additionally, the new optimisation results readily show that the most economical performance is achieved with a specific intermediate spacer thickness (5.93 × 10 −4 m) compared to the manufacturer specifications. An immediate and interesting outcome here is that wastewa-ter usually comes with a low pollutant concentration, which means with a low possibility of fouling. The implementation of the optimised feed channel height of 5.93 × 10 −4 m will therefore result in a much lower risk of membrane blogging.
It would therefore be safe to say that the new optimisation methodology of the membrane design parameters, which yielded improved pollutant rejection at lower energy consumption for a spiral wound membrane RO module, can readily be applied to any type of organic pollutant such as chlorophenol and phenol. Having said this, full details of the membrane transport coefficients of the water and pollutant should be known besides the physical properties.
Conclusions
Dimethylphenol compounds, found in several industrial effluents, are extremely resistant to any biological decomposition and can readily cause serious harm to humans and the environment. This is why dimethylphenol concentration in surface water has been limited to 0.05 ppm by health agencies. The aim of this research was to obtain an efficient method for removing this toxic compound from industrial wastewater using a single membrane RO module. This has been achieved by using a comprehensive simulation based model for analysing the influence of the membrane length, width and feed channel height (membrane design parameters) on the removal of dimethylphenol, total permeate recovery and energy consumption. Firstly, the consistency of the model developed has been tested against experimental data from the literature. Simulation results confirmed that the geometric parameters of membrane length and width have a minor impact on the rejection rate on one hand, and a marked impact on energy consumption on the other. Ever-increasing the feed channel height has a negative influence on both dimethylphenol rejection and energy consumption. Finally, the model was used to carry out a multi-objective optimisation for the membrane design parameters using a species conserving genetic algorithm. The results of the optimisation analysis yielded an optimum removal of dimethylphenol at reduced energy consumption (objective functions) of the RO process. Specifically, the optimisation showed a higher dimethylphenol rejection (around 5.8%) at lower energy consumption (around 20.6%) when compared to ordinary RO module measurements. Table A1. Modelling of a spiral wound membrane module of RO system.
Model Equations Equation No
. | 7,819 | 2021-08-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Chemistry"
] |
A Comparison for Dimensionality Reduction Methods of Single-Cell RNA-seq Data
Single-cell RNA sequencing (scRNA-seq) is a high-throughput sequencing technology performed at the level of an individual cell, which can have a potential to understand cellular heterogeneity. However, scRNA-seq data are high-dimensional, noisy, and sparse data. Dimension reduction is an important step in downstream analysis of scRNA-seq. Therefore, several dimension reduction methods have been developed. We developed a strategy to evaluate the stability, accuracy, and computing cost of 10 dimensionality reduction methods using 30 simulation datasets and five real datasets. Additionally, we investigated the sensitivity of all the methods to hyperparameter tuning and gave users appropriate suggestions. We found that t-distributed stochastic neighbor embedding (t-SNE) yielded the best overall performance with the highest accuracy and computing cost. Meanwhile, uniform manifold approximation and projection (UMAP) exhibited the highest stability, as well as moderate accuracy and the second highest computing cost. UMAP well preserves the original cohesion and separation of cell populations. In addition, it is worth noting that users need to set the hyperparameters according to the specific situation before using the dimensionality reduction methods based on non-linear model and neural network.
INTRODUCTION
The technological advances in single-cell RNA sequencing (scRNA-seq) have allowed to measure the DNA and/or RNA molecules in single cells, enabling us to identify novel cell types, cell states, trace development lineages, and reconstruct the spatial organization of cells (Hedlund and Deng, 2018). Single-cell technology has become a research hotspot. However, such analysis heavily relies on the accurate similarity assessment of a pair of cells, which poses unique challenges such as outlier cell populations, transcript amplification noise, and dropout events. Additionally, single-cell datasets are typically high dimensional in large numbers of measured cells. For example, scRNAseq can theoretically measure the expression of all the genes in tens of thousands of cells in a single experiment (Wagner et al., 2016). Although whole-transcriptome analyses avoid the bias of using a predefined gene set (Jiang et al., 2015), the dimensionality of such datasets is typically too high for most modeling algorithms to process directly. Moreover, biological systems own the lower intrinsic dimensionality. For example, a differentiating hematopoietic cell can be represented by two or more dimensions: one denotes how far it has progressed in its differentiation toward a particular cell type, and at least another dimension denotes its current cell-cycle stage. Therefore, dimensionality reduction is necessary to project high-dimensional data into low-dimensional space to visualize the cluster structures and development trajectory inference.
Research on data dimension reduction has a long history, and principal component analysis (PCA), which is still widely used, can be traced back to 1901. Since the advent of RNA-seq technology, this linear dimension-reduction method has been favored by researchers. In addition, there are non-linear methods such as uniform manifold approximation and projection (UMAP) and t-distributed stochastic neighbor embedding (t-SNE) to reduce dimension. After the rise of neural network, there are many methods of dimensionality reduction based on neural network such as variational autoencoder (VAE). In addition, there are some new theoretical frameworks such as the multikernel learning [single-cell interpretation via multikernel learning (SIMLR)] based on the above methods that have been or are being developed to handle increasingly diverse scRNAseq data.
In this study, we performed a comprehensive evaluation of 10 different dimensionality reduction algorithms comprising the linear method, the non-linear method, the neural network, model-based method, and ensemble method. These algorithms were run and compared on simulated and real datasets. The performance of the algorithms was evaluated based on accuracy, stability, computing cost, and sensitivity to hyperparameters. This work will be helpful in developing new algorithms in the field. The workflow of the benchmark framework is shown in Figure 1.
Methods for Dimensionality Reduction
To our knowledge, about 10 methods are now available to obtain a low-dimensional representation for scRNA-seq data. In this section, we gave an overview of these 10 methods (Table 1).
PCA
As the most widely used dimensionality reduction algorithm, PCA (Jolliffe, 2002) identifies dominant patterns and the linear combinations of the original variables with maximum variance. The basic idea of PCA is to find the first principal component with the largest variance in the data and then seek the second component in the same way, which is uncorrelated with the first component and accounts for the next largest variance. This process repeats until the new component is almost ineffective or reaches the threshold set by users.
ICA
Independent component analysis (ICA) (Liebermeister, 2002), also known as blind source separation (BSS), is a statistical calculation technique used to reveal the factors behind random variables, measured values, and signals. ICA linearly transforms the variables (corresponding to the cells) into independent components with minimal statistical dependencies between them. Unlike PCA, ICA requires the source signal to meet the following two conditions: (1) source signals are independent of each other and (2) the values in each source signal have a non-Gaussian distribution. It assumes that the observed stochastic signal x obeys the model x = As, where s is the unknown source signal, its components are independent of each other, and A is an unknown mixing matrix. The purpose of the ICA is to estimate the mixing matrix A and the source signal s by and only by observing x.
ZIFA
The dropout events in scRNA-seq data may make the classic dimensionality reduction algorithm unsuitable. Pierson and Yau (2015) modified the factor analysis framework to solve the dropout problem and provided a method zero-inflated factor analysis (ZIFA) based on an additional zero-inflation modulation layer for reducing the dimension of single-cell gene expression data. Compared with the above two linear methods, employing the zero-inflation model can give ZIFA more powerful projection capabilities but will pay a corresponding cost in computational complexity.
In the statistical model, the expression level of the jth gene in the ith sample y ij (i = 1,. . ., N and j = 1,. . .,D) is described: where z i is a K × 1 data point in a latent low-dimensional space.
A denotes a D × K factor loadings matrix, H is a D × N masking matrix, W = diag(σ 2 1 , · · · , σ 2 D ) a D × D diagonal matrix, and µ is a D × 1 mean vector. Dropout probability p 0 is a function of the latent expression level, p 0 = exp?(−λx 2 ij ), where λ is the exponential decay parameter in the zero-inflation model.
GrandPrix
GrandPrix (Ahmed et al., 2019) is based on the variational sparse approximation of the Bayesian Gaussian process latent variable model (Titsias and Lawrence, 2010) to project data to lower dimensional spaces. It requires only a small number of inducing points to efficiently generate a full posterior distribution. GrandPrix optimizes the coordinate position in the latent space by maximizing the joint density of the observation data, and then establishes a mapping from low-dimensional space to high-dimensional space. The expression profile of each gene y is modeled as y g is considered a non-linear function of pseudotime which FIGURE 1 | An overview for benchmarking dimensionality reduction methods. The 10 dimensionality reduction methods were evaluated on real scRNA-seq expression datasets and simulation data. k-means was used to cluster low-dimensional latent space. The accuracy, stability, computing cost, and sensitivity to hyperparameters were used to systematically evaluate these methods. accompanies with some noise ∈: ∈∼ N(0,σ 2 noise ) is a Gaussian distribution with variance σ 2 noise , x is the extra latent dimension, σ 2 is the process variance, and k(t, t * ) is the covariance function between two distinct pseudotime points t and t * . GrandPrix employed the variational free energy (VFE) approximation for inference.
t-SNE
t-Distributed stochastic neighbor embedding is a state-of-theart dimensionality reduction algorithm for non-linear data representation that produces a low-dimensional distribution of high-dimensional data (Maaten and Hinton, 2008;Van Der Maaten, 2014). It excels at revealing local structure in high-dimensional data. t-SNE is based on the SNE (Hinton and Roweis, 2002), which starts from converting the highdimensional Euclidean distances between data points into conditional probabilities that represent similarities. The main idea and the modifications of t-SNE are (1) the symmetric version of SNE and (2) using a Student's t distribution to compute the similarity between two points in the low-dimensional space.
UMAP
Uniform manifold approximation and projection is a dimension reduction technique that can be used not only for visualization similarly to t-SNE but also for general non-linear dimension reduction. Compared with t-SNE, UMAP retains more global structure with superior run-time performance (McInnes et al., 2018;Becht et al., 2019).
The algorithm is based on three assumptions about the data: (a) the data are uniformly distributed on the Riemannian manifold; (b) the Riemannian metric is locally constant (or can be approximated); and (c) the manifold is locally connected. According to these assumptions, the manifold with fuzzy topology can be modeled. The embedding is found by searching the low-dimensional projection of the data with the closest equivalent fuzzy topology. In terms of model construction, UMAP includes two steps: (1) building a particular weighted k-neighbor graph using the nearest-neighbor descent algorithm (Dong et al., 2011) and (2) computing a low-dimensional representation which can preserve desired characteristics of this graph.
DCA
Deep count autoencoder (DCA) can denoise scRNA-seq data by deep learning (Eraslan et al., 2019). It extends the typical autoencoder approach to solve denoising and imputation tasks in in one step. The autoencoder framework of DCA is composed by default of three hidden layers with neurons of 64, 32, and 64, respectively, with zero-inflated negative binomial (ZINB) loss functions (Salehi and Roudbari, 2015), learning three parameters of the negative binomial distribution: mean, dispersion, and dropout. The inferred mean parameter of the distribution represents the denoised reconstruction and the main output of DCA. The deep leaning framework enables DCA to capture the complexity and non-linearity in scRNA-seq data. Additionally, DCA can be applied to datasets with more than millions of cells. DCA is parallelizable through a graphics processing unit (GPU) to increase the speed.
Scvis
Scvis is a statistical model to capture the low-dimensional structures in scRNA-seq (Ding et al., 2018). The assumption of scvis is a high-dimensional gene expression vector x n of cell n which can be generated by drawing a sample from the distribution p(x|z, θ). Here, z is a low-dimensional latent vector which follows a simple distribution, e.g., a two-dimensional standard normal distribution. The data-point-specific parameters θ are the output of a feedforward neural network. To better visualize the manifold structure of an scRNA-seq dataset, scvis applies t-SNE objective function on the latent z distribution as a constraint to make cells with similar expression profiles to be close in the latent space. In addition, scvis also provides log likelihood ratio to measure the quality of embedding, which can potentially be used for outlier detection.
VAE
Variational autoencoder is a data-driven, unsupervised model for dimension reduction using an autoencoding framework, built in Keras with a TensorFlow backend (Hu and Greene, 2019). Comparing with a traditional autoencoder, VAE determined nonlinear explanatory features over samples through learning two different latent representations: a mean and standard deviation vector encoding.
The model is mainly composed of two connected neural networks, encoder and decoder. The scRNA-seq data are compressed by the encoder and reconstructed by the decoder. The variable probability Q(z|X) is used to approximate the posterior distribution P(z|X), and it is optimized to minimize the Kullback-Leibler divergence between Q(z|X) and P(z|X) and reconstruction loss. Here, the encoder network is designed as a zero-to two-layer fully connected neural network to generate the mean and variance of a Gaussian distribution q θ (z|X), and then the representative latent space z is sampled from this distribution. The decoder is also a zero-to two-layer fully connected neural network to reconstruct the count matrix.
SIMLR
Single-cell interpretation via multikernel learning performs dimension reduction through learning a symmetric matrix S N × N that captures the cell-to-cell similarity from the input scRNA-seq data (Wang et al., 2017). The assumption of SIMLR is that S N = N should have an approximate block-diagonal structure with C blocks if the input cells have C cell types. SIMLR learns proper weights for multiple kernels, which are different measures of cell-to-cell distances, and constructs a symmetric similarity matrix.
Specifically, developers first define the distance between cell i and cell j asD c i , c j : where each linear weight w represents the importance of each kernel K, which is an expression function for cell i and cell j. In addition, SIMLR applies the following optimization framework to compute cell-to-cell similarity S: where I N and I C are N × N and C × C identification matrices, respectively, and β and γ are non-negative tuning parameters; L denotes an auxiliary low-dimensional matrix enforcing the low rank constraint on S, tr(.) denotes the matrix trace, and |S| F represents the Frobenius norm of S. The optimization problem has three variables: the similarity matrix S, the weight vector w, and an N × C rank-enforcing matrix L. SIMLR solves the optimization problem through updating each variable and fixing the other two variables.
Single-cell interpretation via multikernel learning used the stochastic neighbor embedding (SNE) method (Maaten and Hinton, 2008) to dimension reduction based on the cellto-cell similarity S learned from the above optimization model. However, the objective function of SIMLR involves large-scale matrix multiplication, which leads to a large amount of calculation; thus, it is difficult to extend to highdimensional datasets.
Simulated scRNA-seq Datasets
To investigate the sensitivity of some characteristics of scRNAseq datasets including cell type number, the number of cells and genes, outliers, and dropout event, we generated simulated datasets using the Splatter R package (Zappia et al., 2017). Function splatSimulate() is used to generate simulations, and setParams() is used to set specific parameters. First, we initialized the number of cell types as 5, the cell number as 2,000, the gene numbers as 5,000, and the probability of expression outlier as 0.05. When generating the simulated scRNA-seq data, we updated each parameter and fixed other parameters. Specifically, we generated the simulated data with variable numbers of cell types (5,7,9,11,13), cells (100, 500, 1,000, 2,000, 5,000, 10,000, 20,000, 30,000, 40,000, 50,000), genes (10,000, 20,000, 30,000, 40,000, 50,000), and probabilities of expression outliers (0.1, 0.2, 0.3, 0.4, 0.5). In addition, considering the impact of dropout, we also simulated datasets with five different levels of dropout (dropout.mid = −1, 0, 1, 2, 3, the larger the parameter, the more the points will be marked as 0); other parameters are set as default. Here, the probability of zero value in the data is 41, 53, 62, 71, and 80%, respectively. The detailed parameters are provided in Supplementary Table 1. In total, we created 30 simulated scRNA-seq datasets. The raw expression count matrices of these datasets are generated and normalized to suit for each investigated method.
Real scRNA-seq Datasets
This study analyzed five real scRNA-seq datasets, all of which were downloaded from the publicly available EMBL or GEO databases (Supplementary Table 2). They are derived from different species and organs, covering a variety of cell types and data dimensions. Cell types of every dataset provided in original experiments were used as a gold standard to evaluate dimension reduction methods. The descriptions of all the scRNAseq datasets are as follows: 1. Deng dataset: isolated cells from F1 embryos from oocyte to blastocyst stages of mouse preimplantation development with six cell types were collected and sequenced by Smart-Seq2 (Deng et al., 2014). 2. Chu dataset: single undifferentiated H1 cells and definitive endoderm cells (DECs) from human embryonic stem cells sequenced by SMARTer (Chu et al., 2016). 3. Kolodziejczyk dataset: mouse embryonic stem cells from different culture conditions with three cell types (Kolodziejczyk et al., 2015). Each library was sequenced by SMARTer. 4. Segerstolpe dataset: human pancreatic islet cells with 15 cell types obtained by Smart-Seq2 (Segerstolpe et al., 2016).
Additionally, we use PBMCs from a healthy human (PBMC68k dataset) (Zheng et al., 2017) generated by the 10X Genomics platform to assess the scalability of methods.
Evaluation Metrics
To compare different dimension reduction methods, we performed the iterative k-means clustering on the lowdimensional representation of scRNA-seq data. Taking into account the randomness of k-means clustering when setting the initial cluster centroids, we performed k-means clustering 50 times to obtain a stable metric, and then set the cluster number k to the true cell type number. The evaluation metrics comparing the results to the true cell types are adjusted rand index (ARI), normalized mutual information (NMI), and Silhouette score.
Adjusted rand index (Santos and Embrechts, 2009) is a widely used metric which calculates the similarity between the two clustering results, which ranges from 0 to 1. A larger score means that two clusters are more consistent with each other. Conversely, when the clustering results are randomly generated, the score should be close to zero. Given two clustering X and Y, where a is the number of objects in a pair placed in the same group in X and in the same group in Y; b is the number of objects in a pair placed in the same group in X and in different groups in Y; c is the number of objects in a pair placed in the same group in Y and in different groups in X; and d is the number of objects in a pair placed in the different groups in Y and in different groups in X. Normalized mutual information (Emmons et al., 2016) is used to estimate the concordance between the obtained clustering and the true labels of cells. NMI value is from 0 to 1. A higher NMI refers to higher consistency with the golden standard.
Specifically, given two clustering results X and Y on a dataset, NMI = I(X, Y/max{H (U) , H(V)}, where Silhouette coefficient (Aranganayagi and Thangavel, 2007) measures how well each cell lies with its own cluster, which indicates the separability of each individual cluster. The value of Silhouette coefficient s (i) is between −1 and 1; 1 means that the cell is far away from its neighboring clusters, whereas −1 means that the cell is far away from points of the same cluster.
where a(i) is the average distance from cell i to other cells in the same cluster and b(i) is the average distance from cell i to all cells in other clusters. Average s(i) over all the cells indicates how separable each cell type in the low-dimensional representation, which we call the Silhouette score.
Computing Cost
Computing cost of each method is estimated by monitoring the running time and peak memory usage. We analyzed the PBMC68k datasets from 10X Genomics. The raw count matrix was downsampled to 100, 500, 1,000, 2,000, 5,000, 10,000, 20,000, 30,000, 50,000, and 68,579 cells with 1,000 highly variable genes. All methods were run on the 10 downsampled datasets. We use the command pidstat from the sysstat tool to return the peak memory usage of the process in operation. When calculating the running time, we used the function system.time() in R. In this step, only the running time of the model is considered, and other processes such as data loading are excluded.
Overall Performance Score
To rank methods, the overall scores of the methods were calculated through aggregating accuracy, stability, and computing cost (Zhang et al., 2020). After k-means clustering, we used the known cell populations to calculate the ARI, NMI, and Silhouette scores for simulated data and real data, respectively. For accuracy, scaled mean ARI, scaled NMI, and scaled Silhouette scores obtained from real data were aggregated to the accuracy score. For stability, aggregated scaled scores across different simulation datasets were denoted as the stability score of each method. For the computing cost, we first scale the running time and memory usage to get a value ranging from 0 to 1. Then, we averaged scaled running time and memory usage to obtain the computing cost. Finally, we integrated the accuracy, stability, and computing cost with a ratio of 40:40:20 into the overall performance score of each method.
RESULTS
We benchmarked a total of 10 methods on 30 simulated and five real datasets. We normalized scRNA-seq data based on the corresponding method, and then performed dimensionality reduction to obtain 2D latent space. k-Means clustering method was used to perform cluster analysis. Finally, the methods were compared using accuracy, stability, computing cost, and sensitivity to hyperparameters (Figure 1).
Evaluation of Stability
We used 30 simulated datasets to assess the stability of the 10 dimensionality reduction methods with respect to the number of cell type, cells and genes, outliers, and dropout event.
First, we investigated the effect of cell type numbers to the approaches. We fixed the cell number (n = 2,000), gene number (n = 5,000), and probability of outliers (p = 0.05), and then changed the cell type number from 5 to 13 stepped by 2. As the number of cell types increased, the performance of PCA, ICA, and GrandPrix descended faster (Figure 2A). While the performance of ZIFA, VAE, SIMLR, scvis, and DCA decreased slightly, UMAP and t-SNE fluctuated. Generally, ZIFA, VAE, SIMLR, scvis, DCA, UMAP, and t-SNE have better stability with respect to cell type number than PCA, ICA, and GrandPrix, since their standard deviation is relatively small.
Second, we changed the cell number from 100 to 50,000 and fixed other factors. It was found that too many or too few cells are not conducive to the construction of lowdimensional space of single-cell RNA-seq data. All the methods' performance fluctuated greatly except for PCA and UMAP. PCA and UMAP have strong adaptability to cell number change based on standard deviation ( Figure 2B). All of the methods obtained the best performance between 1,000 and 10,000 cells. It is worth noting that SIMLR has a high computational complexity as it involves large matrix operations which could not perform dimensionality reduction on data with a cell count greater than or equal to 10,000. Additionally, all the methods except PCA and ZIFA have good stability with respect to gene number ( Figure 2C).
To investigate the effect of the complex cell mixtures to methods, we simulated expression outliers; it was found that the performance of all the methods is stable to expression outliers ( Figure 3A). Finally, we randomly dropped expressed genes in each cell to investigate the ability of methods to deal with datasets with various library sizes. Generally, ZIFA, VAE, UMAP, t-SNE, SIMLR, and GrandPrix showed a stable performance, whereas the performance of scvis, PCA, ICA, and DCA decreased remarkably with the increase in the dropout ratio ( Figure 3B).
We found that the stability of each method is different with respect to the number of cell types, cells and genes, outliers, and dropout rate. To evaluate the overall stability of each method, we aggregated all the metrics across simulation datasets to obtain the overall stability score (see section "Materials and Methods"). In summary, the overall stability scores showed that the performance of UMAP has shown more stability than the other methods. Conversely, ICA has poor stability (Figure 4). It is worth mentioning that the Silhouette score of UMAP is significantly higher than the other methods in all simulation tests, indicating that it better separated distinct cell types.
FIGURE 2 | Evaluation stability of the 10 dimensionality reduction methods on simulated scRNA-seq data with respect to the number of cell type (A), cell number (B), or gene number (C). The performance is measured by ARI, NMI, and Silhouette score (SIL). Gray indicates that the SIMLR cannot run on data with more than 10,000 cells.
Evaluation of Accuracy
We applied the 10 dimensionality reduction methods to the four real data and performed k-means cluster analysis based on the low-dimensional representation and calculated the evaluation metrics. No single method dominated on all of these datasets, indicating that there is no "one-size-fits-all" method that works well on every dataset. Regarding the ARI and NMI measures, PCA and t-SNE were ranked in the top five performers on all the four datasets (Figures 5A,B). VAE was ranked in the top five performers on the three datasets. Consistent with the simulation dataset, UMAP can separate each individual cluster very well based on the Silhouette score, compared with other methods (Figure 5C). In addition, the dataset of Segerstolpe et al. has the lowest evaluation metrics compared with the other three datasets, FIGURE 3 | Evaluation stability of the 10 dimensionality reduction methods on simulated scRNA-seq data with respect to the proportion of outlier (A) or dropout rate (B). The performance is measured by ARI, NMI, and SIL.
indicating that the dimensionality reduction method should be improved for the heterogeneous dataset with more cell types. We also visualized the low-dimensional reductions of all the methods on the four datasets (Supplementary Figures 1-4). The ability to separate different cell types of each method is consistent with the above metrics. Aggregating all the three metrics across datasets, t-SNE has the best accuracy, followed by VAE (Figure 4).
Sensitivity of Methods to Hyperparameters
The hyperparameters play a crucial part of the dimension reduction algorithm, especially the deep machine learning model. Therefore, we examined the effect of the hyperparameter settings on the dimensionality reduction in order to guide the user in making a reasonable choice. Among all the 10 algorithms discussed, there are seven methods whose developers have added parameter settings. PCA and ICA are based on linear transformations, so do not require hyperparameter adjustment. In addition, DCA implements an automatic search that could identify a set of hyperparameters in minimizing errors. To decrease time consumption, we used the datasets of Deng to investigate the effect of the hyperparameters to the performance of these seven methods. Detailed evaluation parameters are shown in Supplementary Table 3. Using grid search strategy, we found that ZIFA is insensitive to their respective hyperparameters, and the evaluation metrics have little change in different settings ( Figure 6A). The evaluation metrics of t-SNE and SIMLR increased when their hyperparameters increased from 2 to 5, after that ARI and NMI tend to be stable. Silhouette scores are largely reduced when the hyperparameters are larger than 20 (Figures 6B,C). For those methods with multiple adjustable hyperparameters including GrandPrix, scvis, UMAP, and VAE, we noticed a dramatic change in the results when choosing different hyperparameter settings (Figures 6D-G). Therefore, we recommend that users consider the impact of hyperparameter settings before using these four methods.
Data Preprocessing of All Methods
For the arithmetic design adapting to different algorithms, we performed the corresponding normalization process for one raw single-cell RNA-seq data based on the description of the algorithm. First, PCA, ICA, t-SNE, UMAP, ZIFA, and SIMLR used the original count FIGURE 4 | The overall performance of the 10 dimensionality reduction algorithms. The methods are sorted by overall performance score, which is a weighted integration of accuracy, stability, and computing cost. The accuracy and stability are the average value of scaled ARI, scaled NMI, and scaled SIL in real data and simulated data, respectively. Running time and memory are scaled to a value in [0,1] before averaged as computing cost. matrix of scRNA-seq data as the input. For DCA and GrandPrix, the input is a feature matrix with all the cells and 1,000 highly variable genes. Scvis used PCA as a preprocessing for noise reduction to project the cells into a 100-dimensional space.
The Outputs of All Methods
For some methods, in addition to the low-dimensional representation of the data, other useful information is also provided. Specifically, scvis, DCA, and VAE were developed based on deep learning; thus, a trained model is saved in the corresponding output folder, containing the loss parameters and validation for models. Furthermore, being used as a process of noise reduction, DCA provides an output file which represents the mean parameter of the ZINB distribution which has the same dimensions as the input file. Detailed workflows and explanations are available in the original publications.
Computing Cost Overview
The current scRNA-seq analysis methods are expected to cope with hundreds of thousands of cells as the number of cells profiling by the current protocols increases. We estimated the computational efficiency of each method using running time and memory usage. We generated ten datasets containing different number of cells through downsampling the PBMC68k data. Overall, the running time and memory usage of all methods are positively correlated with the cell number. Most methods except SIMLR and scvis can be completed in 30 min even using all the cells of PBMC68k dataset ( Figure 7A). Most methods except SIMLR and ZIFA can complete all the processes within 4 GB ( Figure 7B). We noted that SIMLR is difficult to be performed on the dataset with more than 10,000 cells due to its unique multikernel matrix operation. In general, ICA took the shortest time (3.7 min) and t-SNE had the lowest memory requirements (2.5 GB) when the number of cells is 68k. Overall, t-SNE has the best computing cost (Figure 4).
Overall Performance
By integrating three metrics from measurement of accuracy, stability, and computing cost, we obtained the overall performance score for each method (Figure 4). We found that t-SNE achieved the best overall performance score with the highest accuracy and computing cost. Meanwhile, UMAP exhibited the highest stability, as well as moderate accuracy and the second highest computing cost. However, the performance score of these methods is different across evaluation criteria. For example, SIMLR and PCA performed better than UMAP based on accuracy, while SIMLR showed weaker computing cost and PCA showed weaker stability.
DISCUSSION
Since 2015, the emergence of 10X Genomics, Drop-seq, Microwell, and Split-seq technologies has completely reduced the cost of single-cell sequencing. This technology has been widely used in basic scientific and clinical research. An important application of single-cell sequencing is to identify and characterize new cell types and cell states. In this process, the key question is how to measure the similarity of the expression profiles of a set of cells, whereas, such similarity analysis can be improved after reducing dimensionality, which can help in noise reduction.
Here, we performed a comprehensive evaluation of 10 dimensionality reduction methods using simulation and real dataset to examine the stability, accuracy, computing cost, and sensitivity to hyperparameters. Taken together, we observed that the summarized performance of t-SNE outperformed the performance of other methods. UMAP has the highest stability and can separate distinct cell types very well. Although, both methods are not specifically designed for single-cell expression data. However, the performance of most methods decreased as cell number and dropout rate increased. Therefore, new algorithms will likely be needed to effectively deal with dropout rate and millions of cells. In addition, the dataset from Segerstolpe et al. containing the lower evaluation metrics showed that the dimensionality reduction method should be improved for the heterogeneous dataset with more cell types. We suggested that users adjust the hyperparameters when using these non-linear and neural network methods. Finally, basic linear methods such as PCA and ICA have shown to be most time saving but perform worse in highly heterogeneous data.
To conclude, we provide a new procedure for comparing single-cell dimensionality reduction methods. We hope that this will be useful in providing and giving method users and algorithm developers an exhaustive evaluation of different data and appropriate recommendation guidelines. At the same time, new dimensionality reduction methods are being developed which will become more robust and standardized. These developments will deepen further exploration and comprehensive understanding of single-cell RNA-seq applications.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author/s. | 7,368.2 | 2021-03-23T00:00:00.000 | [
"Computer Science"
] |
The art of coarse Stokes: Richardson extrapolation improves the accuracy and efficiency of the method of regularized stokeslets
The method of regularized stokeslets is widely used in microscale biological fluid dynamics due to its ease of implementation, natural treatment of complex moving geometries, and removal of singular functions to integrate. The standard implementation of the method is subject to high computational cost due to the coupling of the linear system size to the numerical resolution required to resolve the rapidly varying regularized stokeslet kernel. Here, we show how Richardson extrapolation with coarse values of the regularization parameter is ideally suited to reduce the quadrature error, hence dramatically reducing the storage and solution costs without loss of accuracy. Numerical experiments on the resistance and mobility problems in Stokes flow support the analysis, confirming several orders of magnitude improvement in accuracy and/or efficiency.
introduction to the subject, see the recent text [1]. A range of mathematical and computational techniques are available to approach this problem; a computational method that has seen significant uptake and development over the last two decades is the method of regularized stokeslets, first described by Cortez [2] and subsequently elaborated for three-dimensional flow [3,4].
This technique can be viewed as a modification of the method of fundamental solutions and/or the boundary integral method for Stokes flow [5], the basis for which is the stokeslet [6] or Oseen tensor [7]: þ (x j À y j )(x k À y k ) jx À yj 3 (1:2) and P k (x, y) ¼ 2ðx k À y k Þ jx À yj 3 : (1: 3) The pair of tensors S jk , P k provide the solutions u ¼ (8pm) À1 (S 1k , S 2k , S 3k ) and p = (8π) −1 P k to the singularly forced Stokes flow equations, À r r r r rp þ mr 2 u u u u u þ d(x À y)ê k ¼ 0 where f e (x) is a family of 'blob' functions approximating d(x) as e → 0.
Several different choices for f e and associated regularized stokeslets S e jk have been studied; the most extensively used was presented in the original three-dimensional formulation of Cortez et al. [3], and S e jk (x, y) ¼ d jk (jxj 2 þ 2e 2 ) þ x j x k (jxj 2 þ e 2 ) 3=2 : (1:10) Developments focusing on the use of alternative blob functions to improve convergence include [8] (nearfield) and, more recently, [9] (far-field). The pressure P e k (x, y) P k (x, y) and velocity S e jk (x, y) S jk (x, y) as e → 0; moreover the corresponding single layer boundary integral equation is jk (x, y)f k (y) dS y þ O(e p ), (1:11) where p = 1 for x on or near B and p = 2 otherwise [3]. In equation (1.11) and below, summation over repeated indices in j = 1, 2, 3 or k = 1, 2, 3 is implied. The reduction to the single-layer potential is discussed by e.g. [3,5,10]; in brief, this equation can describe flow due to motion of a rigid body, or with suitable adjustment to f k , the flow exterior to a body which does not change volume. A feature common to both standard and regularized stokeslet versions of the boundary integral equation is nonuniqueness of the solution f k . This non-uniqueness occurs due to incompressibility of the stokeslet, i.e. provided the interior of B maintains its volume, then ÐÐ B S jk n k dS y ¼ 0 so that if f k is a solution of equation (1.11) then so is f k + an k for any constant a. From the perspective of the original partial differential equation system, the non-uniqueness follows from the fact that the pressure part of the solution to equations (1.1) with velocity-only boundary conditions is determined only up to an royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108 additive constant. This issue is not dynamically important, and moreover the discretized approximations to the system described below result in invertible matrices.
Boundary integral methods have the major advantage of removing the need for a volumetric mesh, which both reduces computational cost, and moreover avoids the need for complex meshing and mesh movement. The key strength of the method of regularized stokeslets is in enabling the boundary integral method to be implemented in a particularly simple way: by replacing the integral by a numerical quadrature rule {x x x x x[n], w[n], dS(x x x x x[n])} (abscissae, weight and surface metric), equation (1.11) may be approximated by, As is standard terminology in numerical methods for integral equations, we will refer to this as the Nyström discretization [11]. By allowing m = 1, …, N and j = 1, 2, 3, a dense system of 3N linear ) is formed. The diagonal entries when j = k and m = n are finite but numerically on the order of 1/e, leading to (by the Gershgorin circle theorem) a well-conditioned matrix system. The approach outlined above can be used to solve the resistance problem in Stokes flow, which involves prescribing a rigid body motion and calculating the force distribution, and hence total force and moment on the body. Once the force and moment associated with each of the six rigid body modes (unit velocity translation in the x j direction, unit angular velocity rotation about x j axis, for j = 1, 2, 3) are calculated, the grand resistance matrix A can be formed [5], which by linearity of the Stokes flow equations relates the force F and moment M to the velocity U and angular velocity V for any rigid body motion; (1:13) For example, for a sphere of radius a centred at the origin, the matrix blocks are A FU = 6πμaI, A closely related problem is the two-step calculation of the flow field due to a prescribed boundary motion; starting with prescribed surface velocities u j (x x x x x[m]), first, the discrete force distribution F k [n] is found by inversion of the Nyström matrix system; the velocity field at any point in the fluidx x x x x can then be found through the summation, The mobility problem is formulated by prescribing the total force and moment on the body (yielding six scalar equations) and augmenting the system with unknown velocity U and angular velocity V, which adds six scalar unknowns, so that a (3N + 6) × (3N + 6) system is formed. At a given time, these unknowns can be related to the evolution of the body trajectories (in terms of position x x x x x 0 and two basis vectors b (1) and b (2) ), through a system of nine ordinary differential equations which can be solved using available packages such as MATLAB's ode45. Finally, the swimming problem further prescribes the motion of cilia or flagella with respect to a body frame (typically, a frame in which the cell body is stationary), and often assumes zero total force and moment (neglecting gravity and other forces such as charge), again resulting in a (3N + 6) × (3N + 6) system. The key numerical features and challenges of the method of regularized stokeslets are exhibited by the resistance and mobility problems, which will therefore be our primary focus. (see [12], contained case, equation (2.7)). Reducing the O(e) regularization error by reducing e therefore increases the O(e −1 h 2 ) stokeslet quadrature error, necessitating refinement of the discretization length h. To reduce e by a factor of R requires indicatively reducing h by a factor of ffiffiffi ffi R p , hence increasing the number of surface points and therefore degrees of freedom N by a factor of R. The cost of assembling the dense linear system then increases by a factor of R 2 , and the cost of a direct linear solver by a factor of R 3 . This calculation shows that, for example, improving from a 10% relative error to a 1% relative error may indicatively incur a cost increase of 1000 times. There are several approaches available already to address this issue, which involve a range of computational complexities: the fast multipole method [13], boundary element regularized stokeslet method [14] and the nearest-neighbour discretization [15], for example. In the next section, we will describe and analyse a very simple technique which alone, or potentially in combination with the above, improves the order of the regularization error, thereby enabling a coarser e and hence alleviating the quadrature error. We will then briefly review an alternative 'coarse' approach, the nearest-neighbour method, a benchmark with similar implementational simplicity. Numerical experiments will be shown in the Results ( §5), and we close with brief Discussion ( §6).
Richardson extrapolation in regularization error
Consider the approximation of a physical quantity (e.g. moment on a rotating body) which has exact value M Ã . The value of this quantity calculated with discretization of size h and regularization parameter e is denoted where E r (e) is the regularization error associated with the (undiscretized) integral equation, and E d (h; e) is the discretization error, which as indicated also has an indirect dependence on e via the quadrature.
Recall that and where E f (h) is the error associated with the force discretization and E q (h; e) is the quadrature error. The analysis below will focus on the situation in which the regularization parameter e is not excessively small, so that the quadrature error (h 2 /e) is subleading and hence the discretization error has minimal dependence on e, thus E d (h; e) ≈ E d (h; e 0 ) for some representative value e 0 . Writing we may then expand, Applying the matrix inverse, royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108 Hence, the estimate e M(e 1 , e 2 , e 3 ; h) : provides an approximation to M Ã that has error This improvement in order of accuracy comes at a small multiplicative cost associated with solving the problem three times; however, as these are three independent calculations they are ideally placed to exploit parallel computing architecture, thus reducing the additional computational cost.
Comparison with the nearest-neighbour regularized stokeslet method
Before carrying out numerical experiments, we will briefly recap a different strategy to address the e-dependence of the linear system size which we have developed and described recently, in order to provide a benchmark with similar implementational simplicity. The nearest-neighbour version of the regularized stokeslet method [16] aims to remove the e-dependence of the linear system size. This change is achieved by separating the degrees of freedom for traction from the quadrature by using two discretizations: } for the traction and a finer set {X X X X X [1], . . . , X X X X X[Q]} for the quadrature. If these sets are identical, the method reduces to the familiar Nyström discretization. In general, choosing N < Q leverages the fact that the traction is more slowly varying than the near-field of the regularized stokeslet kernel. Discretizing the integral equation (1.11) on the fine set gives Based on the observation that the traction f k (X X X X X[q]) and associated weighting w[q]dS(X X X X X[q]) are slowly varying, the method employs degrees of freedom F k [n] in the neighbourhood of each point of the coarse discretization, so that where ν[q, n] is a sparse matrix defined so that ν[q, n] = 1 if the closest coarse point to X X X X X[q] is x x x x x[n], and ν[q, n] = 0 otherwise. A detail that was not addressed in our recent papers ( [15,17], for example) is that the closest coarse point to a given quadrature point may not be uniquely defined. Moreover, it is occasionally possible that, for sufficiently distorted discretizations, a coarse point may have no quadrature points associated with it at all, resulting in a singular matrix. In the former case, the weighting may be split between two or more coarse points, so that the sum of each row of ν[q, n] is still equal to 1. In the latter case, the coarse point may be removed from the problem, or (better) the quadrature discretization refined.
The approximation (4.2) leads to the linear system, The computational complexity of the system is given by the 3N × 3Q function evaluations required to assemble the stokeslet matrix, followed by the O(N 3 ) solution of the dense linear system (for direct methods). The nearest-neighbour method is subject to similar O(e) regularization error and O(h f ) discretization error (where h f is characteristic of the force point spacing) as the Nyström method. Analysis of the quadrature error associated with collocation [12] identifies two dominant contributions: (i) Contained case: Quadrature centred about a force point which is also contained in the quadrature set is subject to a dominant error term O(e À1 h 2 q ), where h q is the spacing of the quadrature points; the Nyström method described above is a special case of this, with h q = h; royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108 (ii) Disjoint case: Quadrature centred about a force point which is not contained in the quadrature set is subject to a dominant error term O(h q /δ) 2 h q ), where δ > 0 is the minimum distance between the force point and quadrature points. This term does not appear in the Nyström method error analysis. The term is written in this form because δ is typically similar in size to h q for a given quadrature set, so with a little care, h q /δ behaves as a multiplicative constant.
For contained force and quadrature discretizations (i), the cost of quadrature is still an important consideration. Reducing e by a factor of R, necessitates reducing h 2 q by a factor of R, and hence increasing the number of quadrature points-and associated matrix assembly cost-by a factor of R. Therefore, any improvement to the order of convergence of the regularization error will result in a corresponding improvement in the reduction of quadrature error.
However, when disjoint force and quadrature discretizations (ii) are employed, the nearest-neighbour method is able to entirely decouple the strong dependence of the degrees of freedom (tied only to h f ) on the regularization parameter e and quadrature discretization h q . The nearest-neighbour method, therefore, provides a relatively efficient and accurate implementation of the regularized stokeslet method that, with minor care in the construction of the discretization sets, can be used as a benchmark. In the following section, we will assess the Richardson extrapolation approach against analytic solutions for two examples of the resistance problem, and against the nearest-neighbour method for an example mobility problem.
Results
We now turn our attention to the application of Richardson extrapolation to a series of model problems, comprising the calculation of: For each problem, we use the minimum distance between any two force points in the discretization as our comparative lengthscale h. For the [NyR] method, results are shown against the smallest value of the regularization parameter (e 1 ) used in the calculation. Simulations are performed with GPU acceleration (see [18]) using a Lenovo Thinkstation with an NVIDIA Quadro RTX 5000 GPU. Each of the test problems that we consider, however, are easily within the capabilities of more modest hardware. royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108
The grand resistance matrix of a rigid sphere
The [Ny] method is found to achieve 1% relative error for a select number of parameter pairs (e, h). This is strongly dependent, however, on the 'dip' in error which appears as h is decreased for a given e (evident in figure 1e) and is a consequence of the balance between the opposite-signed regularization and quadrature errors; the small h plateau remains above 1% error for each choice of e. By contrast, the [NyR] method is able to significantly reduce the error in the plateau (figure 1f ), resulting in sub-1% errors for e as large as 0.2. Indeed with e = 0.2, the range of values of h capable of producing acceptably accurate results extends from h = 0.00077 to h = 0.0076. As a result of the reduction in regularization error, brought about by the [NyR] extrapolation, this method is able to achieve a minimum relative error of 0:05% compared with 0:6% for the [Ny] method, and moreover, accurate performance no longer depends on a precise interplay between h and e. In the simulations we performed, the [NyR] method was able to attain very accurate results (0:1% error) in 250 s of walltime.
The grand resistance matrix of a rigid prolate spheroid
To assess the performance on a system involving a modest disparity of length scales, the second model problem is the calculation of the grand resistance matrix for a prolate spheroid of major axis length 5 and minor axis length 1. Moreover, prolate spheroids are often used as models for both entire microscopic swimming cells, and for their propulsive cilia and flagella, and so provide an informative test geometry. The exact solution in the absence of other bodies is well known (e.g. [19]). Details of the discretization of the prolate spheroid are provided in appendix B.1. A sketch of the discretization and plot of sDOF as h is varied are shown in figure 2a,b.
Similarly to the case of the unit sphere, the [Ny] method is able to achieve a minimum error of 0:8% for the smallest e in this study and a specific choice of h within the error dip ( figure 2c,e ). For each choice
The motion of a torus sedimenting under gravity
As a final test case, we simulate the mobility problem of a torus sedimenting under the action of gravity (for detailed set-up and discretization, see appendix B.2). In the absence of an exact solution to this problem, we compare the distance travelled in the vertical direction after the system (equations (B 6)-(B 8)) are solved for t ∈ [0, 98.7]. We compare the results obtained with the [Ny] and [NyR] methods with those from a simulation using the nearest-neighbour method ([NEAREST]) with a refined force discretization, disjoint force and quadrature discretizations and e = 10 −6 . Figure 3a- The results for the smallest choice of regularization parameter, e = 0.01, are not converged with h, consistent with our analysis in §3 focusing on moderate values of e for which the quadrature error is subleading.
Discussion
This article considered the implementation of the regularized stokeslet method, a widely used approach in biological fluid dynamics for computational solution of the Stokes flow equations. An inherent challenge is the strong dependence of the degrees of freedom on the regularization parameter e, which necessitates an inverse-cubic relationship between the linear solver cost and the regularization parameter.
Here, we have investigated a simple modification of the widely used Nyström method, by employing Richardson extrapolation; performing calculations with three, coarse values of e and extrapolating to significantly reduce the order of the regularization error. The method was compared with the original Nyström approach on three test problems: calculating the grand resistance matrices of the unit sphere and prolate spheroid, and simulating the motion of a torus sedimenting under gravity. Investigation of these model problems has highlighted two significant phenomena, the first of which is well known but is worth repeating: (i) obtaining an acceptable level of error using the Nyström method is strongly dependent on being within the region where the (opposite-signed) regularization and quadrature errors exhibit significant cancellation, a phenomenon which has sensitive dependence on the discretization h as e is varied. (ii) The improvement in the order of regularization error provided by Richardson extrapolation is able to significantly and robustly reduce errors for simulations with royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108 (relatively) large choices of e, enabling highly accurate results with relatively modest computational resources. This advantage is (by design) only maintained for these coarse values of e, so that the regularization error is subleading. Another approach which improves the order of convergence of the (important) local regularization error is given by Nguyen & Cortez [8], although the resulting regularized stokeslets may not be exactly divergence-free. As discussed above, there are several existing approaches to improving the efficiency and accuracy of regularized stokeslet methods. The best approach in terms of strict computational complexity is the use of fast methods such as the kernel independent fast multipole method, which enables the approximation of the matrix-vector operation required for iterative solution of the linear problem [13,20], resulting in a O(N log N) method-although with somewhat greater implementational complexity. Another formulation is to borrow from the boundary element method developed for the standard singular stokeslet formulation [14], which has been applied to systems such as embryonic left-right symmetry breaking [21] and bacterial morphology [22]. The boundary element approach decouples the quadrature from the traction discretization and hence degrees of freedom of the system, enabling larger problems to be solved, although again at the expense of greater complexity through the need to construct a true surface mesh, with a mapping between elements and nodes. The nearestneighbour discretization [15] retains much of the simplicity of the Nyström method, while separating the quadrature discretization from the degrees of freedom. Provided that the discretizations do not overlap, we still find this method to be an optimal combination of simplicity and efficiency. The Richardson approach does not avoid the need for the regularization parameter to not exceed the length scales characterizing the physical problem, for example the distance between objects. In this respect, the nearest-neighbour approach is advantageous because of its ability to accommodate smaller values of the regularization parameter.
In this work, we have focused on demonstrating how a numerically simple modification to the, already easy-to-implement, Nyström method can provide excellent improvements by employing coarse values of the regularization parameter e. This approach can be considered complementary to the nearest-neighbour method in its coarse philosophy and style: both methods are figuratively coarse in their simplicity, and literally coarse in their approach of increasing numerical parameters. The Richardson approach allows increases in the regularization parameter; the nearest-neighbour approach allows increase the force discretization spacing h f . Either method enables more accurate results to be achieved with greater robustness and for lower computational cost. Moreover, both have the advantage of being formulated in terms of basic linear algebra operations, and therefore can be further improved through the use of GPU parallelization with minimal modifications [18]. The choice of which method to use is a matter of preference; the Richardson approach has the advantage of being immediately adoptable by any group with a working Nyström code, alongside the repeated calculations being embarrassingly parallel; the nearest-neighbour approach has the advantage of completely removing the dependence of the system size on e.
Accessible algorithmic improvements such as these provide the improved ability to solve a plethora of problems in very low Reynolds number hydrodynamics. Various potential application areas include microswimmers such as sperm [23,24], algae and bioconvection [25][26][27][28][29], mechanisms of flagellar mechanics [30,31], squirmers [32,33] and bio-inspired swimmers [34][35][36]. Stokeslet-based methods have been employed since the work of Gray and Hancock [6] in the 1950s; they continue to provide ease of implementation, efficiency and most importantly physical insight into biological systems. The location of points on the prolate spheroid, aligned with the x-axis, can be expressed in terms of the prolate spheroidal coordinates, as x ¼ a cosh m cos n, ( B 1 ) y ¼ a sinh m sin n cos f (B 2) and z ¼ a sinh m sin n sin f, where a and c are the major-and minor-axes lengths, respectively. We first discretize ν into n uniformly spaced points, providing a discretization in x which is slightly more dense in regions of higher curvature. For each choice of ν i (i ∈ [1, n]), we discretize ϕ into m i linearly spaced points, where the choice royalsocietypublishing.org/journal/rsos R. Soc. Open Sci. 8: 210108 ensures that each ring is approximately evenly discretized with spacing h. Here, dÁe represents the ceiling function.
B.2. A torus sedimenting under gravity
The equations of motion for a torus sedimenting under gravity are given by and ðð @D e ikj X k f j (X) dS X ¼ 0, (B 8) where repeated indices are summed over i ∈ [1,2,3], U and V are the translational and rotational velocities of the torus, ∂D defines the surface of the torus, the central-and tube-radii of the torus are given by R and r respectively, and e ijk is the Levi-Civita symbol. The term on the right-hand side of equation (B 7) derives from the (dimensionless) effect of gravity. The motion of the torus can be expressed as a system of nine ordinary differential equations for the time derivatives of the torus position x 0 and basis vectors b b b b b (1) and (2) ). More details of how this 'mobility problem' is solved can be found in [15]. While this problem could be further constrained by enforcing that the angular velocity is zero (due to the symmetry of the torus), we focus on solving for the full rigid body motion. The mobility problem is solved using the [Ny], [NyR] and [NEAREST] methods, with results given in §5.3. Points on the torus surface can be written as x ¼ (R þ r cos u) cos f, ( B 9 ) y ¼ (R þ r cos u) sin f (B 10) and z ¼ r sin u, (B 11) for u, f [ [0, 2p). We discretize θ into n ¼ d2pr=he linearly spaced points, ensuring points on each ring are approximately evenly spaced with lengthscale h. For each θ i (i ∈ [1, n]), we discretize ϕ into m i linearly spaced points via resulting in an approximately evenly spaced discretization for the torus with lengthscale h. For simulations with the [NEAREST] method, a fine quadrature discretization is created following the same process with lengthscale h q = h/4. To ensure disjoint force and quadrature discretizations in this case, a filtering step is performed to remove any quadrature points which lie within a distance h q /10 from their nearest force point. | 6,162.8 | 2021-01-22T00:00:00.000 | [
"Mathematics"
] |
Evidence for low density holes in Jupiter’s ionosphere
Intense electromagnetic impulses induced by Jupiter’s lightning have been recognised to produce both low-frequency dispersed whistler emissions and non-dispersed radio pulses. Here we report the discovery of electromagnetic pulses associated with Jovian lightning. Detected by the Juno Waves instrument during its polar perijove passes, the dispersed millisecond pulses called Jupiter dispersed pulses (JDPs) provide evidence of low density holes in Jupiter’s ionosphere. 445 of these JDP emissions have been observed in snapshots of electric field waveforms. Assuming that the maximum delay occurs in the vicinity of the free space ordinary mode cutoff frequency, we estimate the characteristic plasma densities (5.1 to 250 cm−3) and lengths (0.6 km to 1.3 × 105 km) of plasma irregularities along the line of propagation from lightning to Juno. These irregularities show a direct link to low plasma density holes with ≤250 cm−3 in the nightside ionosphere.
J ovian whistlers detected by the Voyager 1 plasma wave instrument 1 provided incontrovertible evidence of lightning at Jupiter 2 . The interpretation of the Jovian whistlers as lightning was based on the comparison to terrestrial lightninginduced whistlers 3 , and the independent observations of optical lightning flashes by Voyager 1 4 . A lightning-induced non-dispersed electromagnetic pulse was observed at Jupiter by the Galileo Probe in a magnetic field waveform in the frequency range from 10 Hz to 100 kHz 5 . Since their discovery intense electromagnetic impulses induced by Jupiter's lightning have been recognised to produce both low-frequency dispersed whistler emissions 2,6,7 and non-dispersed radio pulses 5,8 . Jovian whistlers appear between a few tens of Hz and 20 kHz 6,7 , propagating below the local electron cyclotron frequency f ce or the local electron plasma frequency f pe , whichever is lower according to the definition of the whistler mode 9 . Recently, Juno has also detected lightning-induced radio pulses called sferics at 600 MHz and 1.26 GHz 8 . However, Voyager radio observations in a frequency range from 20 kHz to 41 MHz reported no detections of radio pulses in the Jovian inner magnetosphere 10 . Non-detections of this kind were interpreted as strong radio absorption in the Jovian ionosphere 10 .
Here, we present dispersed millisecond pulses with a lower frequency cutoff between 20 and 150 kHz recorded by the Juno radio and plasma wave (Waves) instrument 11 during eight perijove passes (closest approaches to Jupiter) from perijove 1 (PJ1) on 27 August 2016, through PJ9 on 24 October 2017 [12][13][14][15] . In accounting for the dispersion curves of these pulses, we use a free space ordinary (O) mode straight-line propagation model (Methods and Supplementary Fig. 1), which assumes the presence of plasma density irregularities along Juno's line of sight. Because the occurrence positions of these pulses are collocated with those of Jovian lightning-induced whistlers 7 and 600-MHz sferics 8 independently detected by Juno, these irregularities correspond to an ionospheric plasma density less than 250 cm −3 . On the basis of the theory of lightning-induced microsecond trans-ionospheric pulse pairs on Earth [16][17][18] , we suggest that the upper limit of the vertical height between the thunderstorm and the reflection layer in the Jovian atmosphere might be less than 500 km.
Results
Observations of Jupiter dispersed pulses. We have carried out a survey of the Juno Waves burst mode data from the Low Frequency Receiver High (LFR-Hi) channel in the form of frequency-time spectrograms below 150 kHz (Methods) on PJ1 and PJ3 to PJ9. We found 445 instances of unusual discrete, dispersed pulses within 210 snapshots out of the total 58,542 available snapshots acquired below 5.5 Jovian radii (R J , 1 R J = 71,492 km). All of the pulses were detected while Juno was at altitudes between 9790 km and 316,000 km above the 1 bar level. Figure 1 illustrates various types of spectral structures showing dispersion such as a pair of pulses (Fig. 1a), a train of four discrete pulses (Fig. 1b), a long dispersed pulse (Fig. 1c), and a short dispersed pulse (Fig. 1d). To the best of our knowledge, none of the previous literature reports such pulses, so we call them Jupiter dispersed pulses (JDPs), hereafter.
In the framework of cold plasma theory 9 , there are two observations that lead to the conclusion that JDPs propagate in the free left-hand ordinary (L-O) mode. The first is that, while they generally occur below f ce (98% of the time), they can be found above f ce (about 2% of the time). The small percentage of cases > f ce is at least partly due to the upper frequency limit of the Waves LFR-Hi band, 150 kHz. Emission above f ce eliminates the whistler mode and leaves either the Z or L-O mode. Another observation is that JDPs can be found above the maximum Z-mode frequency, the upper hybrid frequency f uh ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ; for example, the maximum frequency of the JDP in Fig. 1c is at least 150 kHz, well above both f ce = 126 kHz and f uh = 132 kHz (assuming f pe = 40 kHz as the highest intensity of the smooth emission below the JDP).
To further understand the nature of JDPs, we have manually digitised spectral shapes of all 445 detections. In Fig. 2a, a histogram of JDP detections (left axis) and cumulative probability (right axis) is plotted as a function of duration with a 0.2 ms step. The peak occurs at the bin whose centre is 0.1 ms, and 95% cumulative probability is achieved within 3.2 ms. Figure 2b depicts another histogram for the inter-pulse spacing. It utilises only snapshots containing two or more pulses allowing interpulse spacing. Hence, only 53% of the detections (236 counts) are included. In this distribution, there are two peaks ( Supplementary Fig. 2) appearing at bins with central values of 0.7 ms and 3.3 ms, and 95% cumulative probability (50% probability from the complete set of all detected pulses) occurs within 7.4 ms.
Modelling JDP spectral shapes. Illustrations of the fit results using the O mode straight-line propagation model (Methods and Supplementary Fig. 1) are shown as orange curves in Fig. 1, clearly capturing the spectral morphology of JDPs. Additional support for the fit results comes from a positive correlation between f pe0 estimated from the model and f cutoff measured from the spectrograms (Supplementary Fig. 3). Figure 2c shows the results for length D versus plasma density N e0 of plasma density irregularities along a line of sight from a source location to Juno in a log-log scatter plot. The distributions show a systematic tendency of increases in N e0 as D decreases due to the strong negative correlation of determinations of both parameters in the model ( Supplementary Fig. 4). Nevertheless, these distributions provide possible solutions to characterise the JDP spectral shapes ( Supplementary Fig. 5). Specifically, the length D of the irregularity structures ranges from 0.60 km with a corresponding N e0 of 110 cm −3 through 1.3 × 10 5 km with a corresponding N e0 of 8.0 cm −3 . In Fig. 2d, we re-organise the distributions of D versus Juno altitude. It is clear that the estimated D is mostly smaller than the altitude, which is consistent with a hypothesis that the JDP radio sources are located in Jupiter's atmosphere in the same hemisphere as seen from Juno.
Comparison of whistlers, sferics, and JDPs. We compare the source locations of Jovian whistlers and sferics with the sources of the JDPs. The Juno Waves instrument detected 1627 whistlers between 27 August 2016, and 1 September 2017 (PJs 1 through 8) 7 . Assuming that whistlers propagate parallel to the magnetic field (so-called ducted whistlers) 3 , the footprints of these whistlers can be estimated by mapping from Juno's position along the modelled JRM09 magnetic field lines 19 onto the Jovian atmosphere at 300 km altitude (the bottom edge of the Jovian ionosphere measured by Voyager 2 20 and Galileo 21 ) above the 1bar level. Similarly, the Juno Microwave Radiometer (MWR) instrument 22 originally captured 377 lightning sferics in a narrowband channel at 600 MHz within 100-ms integration intervals 8 . More recently, the sferic catalogue has been revised with 383 detections related to MWR antenna calibration. Figure 3a shows the Jovian whistler source locations using the ducting assumption as the orange plus marks in Jovian System III coordinates. As the MWR boresight pinpoints the lightning source within the beam projected onto the 1 bar surface, the yellow stars are estimates of the 600-MHz sferic source locations. The JDP vertically projected locations are shown as blue circles in Fig. 3a. Also, the latitudinal histogram of the detection rate with 5°bins for JDPs and whistlers and with 0.5°bins for sferics, is included on Fig. 3b using corresponding colours. The JDP, whistler and sferic distributions are similar in that both occurrence rates are higher in the northern hemisphere than in the southern hemisphere. The longitude range of 0°to 210°in the southern hemisphere shows no JDPs, contrary to the whistler and sferic distributions. Even if we consider the limited LFR-Hi observation coverage, this trend remains ( Supplementary Fig. 6). Hence, the lack of JDPs could be due to a dense ionosphere between the JDP radio source and Juno.
Discussion
In addition to two concurrent JDP-sferic events ( Supplementary Fig. 7), the JDP source locations are similar to those of lightninginduced whistlers and 600-MHz sferics, suggesting JDPs are related to lightning. Given that JDPs are driven by Jupiter's lightning, it is challenging to address the question of how the lowfrequency radio signals propagate from the atmosphere to the magnetosphere through the dense ionosphere. At Jupiter, the peak of the ionospheric plasma frequency is about 1-5 MHz 20,21 , which is much higher than the frequency of the emissions studied in this paper. There are two possible interpretations for the JDP detections. The first is that the lightning-induced waves start out in the L-O mode, couple into the Z mode in the ionosphere, and then couple back into the L-O mode in the topside ionosphere. However, it is likely unfavourable due to a very inefficient conversion of the two mode-couplings. Another interpretation is that the sferic signal escapes through low electron plasma density holes (<250 cm −3 ) in the ionospheric layer before reaching Juno. If we follow this interpretation, our estimated N e0 is a model of the upper bound of ionospheric plasma density irregularities below Juno. Figure 4 shows N e0 of the JDP detections plotted as a function of Jovian latitude and longitude. Recall that, while the 600-MHz sferics propagate freely through the dense ionosphere, the JDPs can be seen only when there is a low density path through the ionosphere that allows these pulses to reach Juno's position. While there is some uncertainty in the exact JDP radio source locations, we assume that the JDP radio sources can be vertically projected from Juno onto the Jovian atmosphere. The low plasma density holes in the ionosphere tend to appear more in the northern hemisphere than in the southern hemisphere. No JDPs are observed around the Jovicentric equator because the local plasma frequency of the Jovian ionosphere tends to exceed the upper recordable frequency of 150 kHz at latitudes from −10°to 25°and the JDP L-O mode waves cannot propagate in this region.
Our current knowledge of Jovian ionospheric profiles relies heavily on a radio occultation technique 23 , as it integrates the plasma density in the transverse direction toward Earth on Jupiter's day side. It is also known that the peak of the ionosphere estimated by Voyager 2 20 and Galileo 21 varies significantly in altitude and plasma density. Another indirect observational study of the Jovian ionosphere was carried out via dispersion analysis using Juno's detections of lightning-induced whistlers. In this analysis, for some cases, it was necessary to reduce the ionospheric plasma density model by 10 to 30% 7,24 . In other words, the Jovian ionosphere changes dynamically. Juno's first nine orbits used in this study were concentrated near the terminator (Supplementary Fig. 8). It is possible that the actual radio locations of JDPs observed by Juno and postulated ionospheric holes are on the night side, where a different recombination process The orange plus marks indicate the whistler footprints that were mapped along the JRM09 magnetic field lines 19 onto the Jovian atmosphere at altitude of 300 km above the 1-bar level. The yellow stars are the MWR boresights at the 1-bar level for the detections of sferics at 600 MHz. These data were taken from the Juno Waves whistler catalogue 3 and the Juno MWR sferic catalogue 5 from perijoves 1 through 8. Note that the region between 200°and 280°w here JDPs present were sampled from the Juno perijove 9 orbit where sferic and whistler observations have not been completed. The Jovian image was provided by NASA/JPL-Caltech/SSI/SwRI/MSSS/ASI/INAF/JIRAM/Björn Jónsson (http://www.planetary.org/multimedia/space-images/jupiter/ merged-cassini-and-juno.html). b Their latitudinal profiles of detection rate are shown using corresponding colours. In addition, the comparison of JDPs with the previous optical detections of lightning is shown in Supplementary Fig. 10 ARTICLE NATURE COMMUNICATIONS | https://doi.org/10.1038/s41467-019-10708-w from the day side is anticipated 23 . In addition, because the natural presence of ionospheric holes has been widely recognised in Venus 25 , Mars 26 , and Saturn 27,28 , the similarity to Jupiter can be inferred.
The frequency-time spectral shapes of JDPs (see especially Fig. 1a) are reminiscent of those of the trans-ionospheric pulse pairs (TIPPs) detected by Earth-orbiting satellites [16][17][18] . TIPPs are pairs of dispersed pulses induced by short duration intracloud lightning discharges and observed in a frequency band from 25 to 80 MHz with a duration of a few microseconds and a pulse-topulse interval of tens of microseconds. An interpretation for the appearance of a pair of pulses is that the radio signal branches into a direct pulse as the first pulse and an indirect pulse via ground reflection as the second pulse 17 . The inter-pulse interval is the differential travel time equal to the combination of twice the lightning source distance from the ground and the angular separation between the first and the second pulses, divided by the speed of light. Unlike Earth, Jupiter has no ground but has deeper layers, including the hypothetical possibility of a reflection layer well below the water clouds at the 5 bar level where we place the anticipated location of the source lightning discharges. Applying this interpretation to the JDP inter-pulse intervals of 0.7 ms and 3.3 ms, the apparent distances are respectively 100 and 500 km, which include distance-dependent angular separation but give an upper limit of the vertical distance between the lightning thunderstorm and the reflection layer. Another possible scenario for the observed inter-pulse intervals can be linked to individual strokes with typical repetition periods at 0.7 and 3.3 ms. The synoptic observations of JDPs from Juno, in combination with analyses of Jovian whistlers and sferics, will improve our understanding of the physical process of lightning discharges in Jupiter's atmosphere where in-situ measurements are limited.
Methods
Juno waves data used in this study. One of the instruments onboard Juno is a radio and plasma wave instrument (Waves) 11 , designed to monitor the electric fields of waves from 50 Hz to 41 MHz with an electric dipole antenna and the magnetic fields of waves from 50 Hz to 20 kHz with a magnetic search coil sensor using three on-board receivers. One of the receivers is the Low Frequency Receiver (LFR), recording three different components: one low-frequency (LFR-Lo) electric field component and one magnetic field component both from 50 Hz to 20 kHz and one high-frequency (LFR-Hi) electric field component from 10 kHz to 150 kHz. The LFR-Hi burst mode used in this study obtains 6144 points with a temporal resolution of 85 microseconds in a 16.384-ms waveform snapshot once per second. Using a 256-point fast Fourier transform (FFT) on the ground, spectral data can be obtained covering 10 to 150 kHz with a spectral resolution of 1.5 kHz. In addition, we obtain the local electron cyclotron frequency converted from the measurements of the magnetic field recorded by the Juno's magnetometer 29 .
O mode straight-line propagation model. In accounting for the frequencydependent dispersion curve of JDPs, we use an O mode straight-line propagation model in which the maximum delay occurs in the vicinity of the O mode cutoff frequency, the electron plasma frequency f pe0 in kHz, in a plasma density irregularity along Juno's line of sight. Supplementary Fig. 1 shows the modelled geometry of Juno, a radio source, and a plasma density irregularity in Jupiter's ionosphere or inner magnetosphere, where an enhanced electron plasma density N e0 = (f pe0 /8.98 kHz cm 3/2 ) 2 in cm −3 with a length of D km is located along a straight line from the source through Juno at a distance of L km. The observed time t(f) for JDP can be expressed via a group delay 30 as where D h and D l are the straight-line distances to the upper and lower boundaries, respectively, of the plasma density irregularity, v g is the group velocity of the O mode, c is the speed of light, t 0 is the wave generation time, f is the observed frequency, and C is ðL À DÞ=c þ t 0 . By using a non-linear least-squares fit of the digitised t(f) points, three free parameters, N e0 , D, and C have been estimated for each JDP. It is important to note that C is just an offset due to a lack of determination of the exact source location because we cannot uniquely determine L and t 0 . But N e0 and D provide beneficial information on the structure of the plasma density irregularities. Eight examples of simulated dispersed pulses are displayed in Supplementary Fig. 5.
Data availability
The Juno data used in this study are publicly accessible through the Planetary Data System (https://pds.nasa.gov). The catalogues that support the findings of this study are available from the corresponding author upon reasonable request. | 4,214 | 2019-06-21T00:00:00.000 | [
"Physics",
"Geology"
] |
Effect of VERO pan‐tilt motion on the dose distribution
Abstract Tumor tracking is an option for intra‐fractional motion management in radiotherapy. The VERO gimbal tracking system creates a unique beam geometry and understanding the effect of the gimbal motion in terms of dose distribution is important to assess the dose deviation from the reference conditions. Beam profiles, output factors (OF) and percentage depth doses (PDD) were measured and evaluated to investigate this effect. In order to find regions affected by the pan‐tilt motion, synthesized 2D dose distributions were generated. An evaluation of the 2D dose distribution with the reference position was done using dose difference criteria 1%–4%. The OF and point dose at central axis were measured and compared with the reference position. Furthermore, the PDDs were measured using a special monitoring approach to filtering inaccurate points during the acquisition. Beam profiles evaluation showed that the effect of pan‐tilt at inline direction was stronger than at the crossline direction. The maximum average deviation of the full width half maximum (FWHM), flatness, symmetry, penumbra left and right were 0.39 ± 0.25 mm, 0.62 ± 0.50%, 0.76 ± 0.59%, 0.22 ± 0.16 mm, and 0.19 ± 0.15 mm respectively. The ÔF and the measured dose average deviation were <0.5%. The mechanical accuracies during the PDD measurements were 0.28 ± 0.09 mm and 0.21 ± 0.09 mm for pan and tilt and pan or tilt position. The PDD average deviations were 0.58 ± 0.26 % and 0.54 ± 0.25 % for pan‐or‐tilt and pan‐and‐tilt position respectively. All the results showed that the deviation at pan and tilt position are higher than pan or tilt. The most influences were observed for the penumbra region and the shift of radiation beam path.
the patient and estimate the target motion coordinate during treatment. 1,2 The predicted target coordinates are passed to the gimbal head controller of the accelerator, which adapts accordingly to compensate the target motion. Eventually, the reduced intra-fractional uncertainties will potentially shrink CTV-PTV margins. 3 In comparison to other motion mitigation strategies, the treatment time can potentially be reduced since the beam is delivered continuously while the target is in motion. [1][2][3][4][5][6][7][8][9][10] Instead of moving its gantry or multi leaf collimator (MLC) leaves for tracking, VERO swings the gimbal head. It has its own center of rotation, which is located at 40 mm below the source. 1,2 Therefore, the geometry of the beam during tracking is different from a common oblique beam, which is created by moving the linac gantry. The gimbal rotation is currently not supported by available treatment planning systems (TPS) [3][4][5][6][7][8] and thus potentially leads to an inaccurate treatment delivery. Such systematic errors will not be noticeable during the dose calculation. Currently, the gimbal tracking dose calculation is performed on a stationary CT without considering the gimbal's motion. Dose calculation is thus relying only on shifting of the target into the radiation field, 3 which is not giving a similar radiation path of the beam. Some dosimetry studies showed that the gimbal motions will not affect the beam profile characteristics, i.e., beam profile and penumbra agreed within 1%/1 mm. 5,6 Furthermore, deviations in output factor (OF), percentage depth dose (PDD) and 2D dose distribution were not observed. 5 All the previous studies used films as their primary detectors. 5,6 The aim of this work was to investigate the effect of pan-tilt motions during treatment on the delivered dose distribution in a water phantom. However, the situation is more complex in the real clinical condition, since the patient surface, tissue density, and tumor motion direction will greatly influence the dose calculation and the treatment accuracy during the treatment. A comprehensive evaluation regarding the effect of the VERO's gimbal motion on fundamental dosimetry properties such as beam profile, OF, and PDD was performed to provide a better understanding of the gimbal motion effects on the delivered dose. Furthermore, it will help medical physicists to estimate the accuracy of treatment delivery. [11][12][13][14][15] Especially, if the tracking treatment is combined with other complex treatments such as intensity modulated radiotherapy (IMRT), or wavearc 16 treatments.
2.A | Measurement geometry and setup
As shown in Fig. 1, the VERO gimbal can be swung to inline/ tilt (A ! C) and crossline/pan (A ! B) direction using its center of rotation (COR). A target tracking utilizing the gimbal motion creates a unique beam geometry as shown in Fig. 2. Unlike the common oblique beam geometry, the gimbal geometry creates a longer source surface distance (SSD) and a larger effective field size (FS). The gimbal can track a moving target up to a maximum tracking distance TD ð Þ ¼ 41:9 mm away from the isocenter at a pan or tilt directions, which is equal to an angle (a) of pan or tilt of 2.5 degree (Fig. 1).
Moreover, a maximum gimbal position at a pan and tilt position (A ! D) will result in a relative angle of the gimbal to the water surface of 3.5 degree (Fig. 1). This creates an SSD increase of 1.83 mm compared to the reference. Since beam profile, OF and PDD are influenced by the FS and SSD, 17,18 both parameter changes could leads to a deviation of the reference dosimetry parameters.
All measurements in this study were carried out using BLUE PHANTOM 2 (IBA Dosimetry GmbH, Schwarzenbruck, Germany) and a microDiamond single crystal detector (T60019, PTW-Freiburg GmbH, Freiburg, Germany). The reference condition was defined at pan and tilt angle 0 o . Moreover, the PDDs and profile measurements were done using an output resolution of 1 mm and the field detector reading was normalized at a depth of 15 mm.
The central axis (CAX) position played an important role during the measurement since it was used as a reference to determine the scanning region. Moreover, it ensured that the scanning profile was always intersected at the CAX position ( Figs. 1 and 2). Therefore, the PDD scan direction, as well as the OF depth, were always along the CAX beam path. The coordinates of the CAX at a certain depth (d) were determined by calculating the tracking angle (a) based on the TD of the gimbal at pan-tilt angle (Fig. 1) using the COR SSD at the reference position, which was 960 mm: The gimbal center of rotation is located 40 mm below the radiation source, which creates an SSD 960 mm without pan or tilt.
2.B | Beam profile and output factor
Beam profiles were measured as described in Table 1
2.C | Percentage depth dose
A similar approach as the OF measurements was applied to determine the start-stop position for the PDD measurements. The start and stop positions on the CAX were calculated using Eqs. (1) and (2) at a depth of 300 mm and 0 mm respectively. The 3D dose scan mode was used for the PDD measurement.
A dry run test was performed prior the measurement to ensure a detector position at CAX of the beam. The test also functioned as detector positioning consistency and mechanical movement quality control of the BLUE PHANTOM 2 during the 3D dose scan mode. After the initial dry run test, it was found that the mechanical movement of the detector arms was not always smooth, which could create a positioning inaccuracy. This Additionally, r was calculated by projecting the measurement directional vector A 2 B 2 ! into the scanning line directional vector The distance of r was determined by dividing the crossproduct of both vectors with the magnitude of the scanning vector The effective measurement depth (d eff: ) was determined by calculating its relative distance to the CAX coordinate: Measurement point coordinate at inline (YÞ, crossline X ð Þ direction and depth d ð Þ were obtained from the measurement software output.
2.D | 2D dose distribution synthesis
Beam geometry properties for the beam profile, PDD and OF measurements at pan-tilt position, with a being the pan-tilt angle according to the tracking distance (TD). The beam profiles, PDDs and OFs were measured from point F 1 to F 2 , B 1 to B 2 , and at A 1 respectively. The isocenter(ISO 1 and ISO 2 ) is separated with the target plane during tracking, due to the fix distance of the isocenter. The PDD assessments were obtained by comparing the dose difference at each depth according to Table 1 to the reference condition. The comparison was performed at depths beyond 15 mm or the build-up region to avoid large uncertainties of the depth dose distribution close to the water surface due to electron scattering created by the beam-shaping aperture above the water surface. 20,21 The calculated 2D dose distributions from inline and crossline measurements were assessed pixel-by-pixel. The approach was performed to study the effect of pan-tilt movements on the dose distribution profile and to identify the region that is mostly influenced by the movement. The assessment was conducted using full field measurements using a 2% threshold as the limit of the radiation field.
2.E | Data analysis
Secondly, the evaluation was done using 80% of the physical field size at the corresponding depth of measurement, which represents the effective field size for treatment.
3.A | Profile characteristics and output factor
Calculation of beam profile characteristics, i.e., FWHM, flatness, symmetry, and penumbra were calculated and compared with the reference condition. Figure 3 is an example of the D at pan-tilt positions À41.9 mm and 41.9 mm. The penumbra left and right shows similar patterns and values, therefore only the left penumbra is shown in Fig. 3.
Similar results were found at other pan-tilt position as seen in The OF deviation shows a decreasing trend from 10 mm 9 10 mm to 100 mm 9 100 mm field size and then an increasing trend toward 150 mm 9 150 mm. The dose measured during the OF measurements during pan-tilt were lower than at the reference position. The average deviation of the measured dose at pan and tilt is slightly higher than at pan or tilt position, which was less than 0.1%.
3.B | Mechanical movement monitoring
Unlike the profile measurements, the PDD measurements were more complex with respect to phantom mechanical movements since the detector positioning involved more than one motor. The measurement involving one motor movement did not show any positioning deviation since vector A 2 B 2 ! and B 1 B 2 ! will always overlap with a resulting deviation of r ¼ 0 (Fig. 2). Moreover, the other two motors will lock itself, as a result the remaining motor movement will always be on its vector direction.
The dry run test showed unsmoothed mechanical movement since the motion was based on more than one motor. Maintaining the distance r of the measured point to the nominal scanning line less than 0.5 mm resulted in a mean positioning accuracy of 0.21 AE 0.09 mm and 0.28 AE 0.09 mm for 2 and 3 axes movements during the PDD measurement (Fig. 6), respectively. The PDD average deviations at all field sizes at pan-tilt position were less than 1% (Fig. 8). Figure 8 also shows that the average deviations at pan and tilt are higher than at pan or tilt position, since the SSD is longer than the pan or tilt position.
3.D | The 2D dose distribution
The synthesized dose distribution resulting from multiplying an inline and crossline dose profile at a corresponding depth has been compared with the calculated dose profile from the treatment planning system. The comparisons showed the smallest and largest mean difference between the synthesized and calculated dose profile of À0.1 AE 1.9% and 0.7 AE 0.7% respectively. The synthesized 2D dose distributions comparisons with the reference were evaluated using a pixel-to-pixel comparison and a criteria 1%-4% dose deviation was used to evaluate the dose distribution. Figure 9 shows that most of the profiles had less than 95% of the point that passed the criteria.
The pixel-by-pixel comparisons showed that the mean dose deviation was less than 1% with high dose deviations at the penumbra regions (Fig. 10), resulting in high standard deviations of its mean dose difference. Removing the penumbra region by comparing only 80% of the full field size for further evaluation showed that the and tilt were higher than pan or tilt as shown in Table 2, and the values show a decreasing trend as the depth increases.
3.E | The radiation path
The radiation paths of pan-tilt are shown in Fig. 11. The position of the dose distribution is shifted according to the pan-tilt position which depends on the pan-tilt angle as described in Eqs. (1) and (2).
The radiation path at CAX is shifted compare with its reference positions, as well as the dose distribution above and below the SAD
| DISCUSSION
In this study, a comprehensive investigation regarding the effect of gimbal movements and its impact on the dose distributions has been conducted. The dose distribution parameters being observed were 2D dose distribution, OF, and PDD. Observation of the dose distribution deviation could help physicists to understand the effect of gimbal movements on the delivered dose.
We measured the dose distribution at maximum pan-tilt position according to Table 1. However, there were no sufficient data F I G . 8. The PDD dose average deviation from its reference condition at all pan-tilt positions for each field size.
F I G . 9. The percentage of passed pixels at depth 50 mm using 1%-4% dose difference criteria for pan and tilt (a) and pan or tilt (b). To ease readability the data points were shifted at each field size entry. Beam profile comparison showed that the effect of pan-tilt movement at inline directions was stronger than in crossline direction (Fig. 3). Kamino et al. 2 The gimbal swing created an extended SSD and a larger field size at the water surface as shown in Fig. 2. The SSD will extend by 0.9 mm and 1.8 mm at maximum pan or tilt and pan and tilt respectively. Such movements can create a systematic increase of dose uncertainties during treatment delivery. The value of the extended SSD is close to the AAPM TG 142 17 error tolerance for the optical distance indicator (ODI) which is 2 mm. Several studies showed that a combination of extended SSD and larger field size would contribute to changes in OFs, PDD curves, and 2D dose distributions. 17,18 Relative OFs were calculated by dividing the detector reading of each field size measurement to that of the reference field size of 100 mm 9 100 mm at each pan-tilt measurements. Therefore, the OF deviation at 100 mm 9 100 mm was always zero and the deviation decreases as the field sizes changed toward the reference field size of 100 mm 9 100 mm. Due to the small deviation, the shape of the OF curves was also similar with the reference condition. Therefore, the scatter patterns that occurred during the measurements were similar with its reference condition. 18 The tolerable deviation of OFs with the reference condition was 1% 17 and the maximum deviation value of (À0.11 AE 0.33)% was still below the tolerance value. The measured dose during the OF measurement also showed a lower dose value, since the SSD was further than the reference condition. The differences were less than 0.5% except for After filtering and correcting, 22 the measured PDD data points using Eqs. (3) and (4) it was found that all PDD curves beyond the buildup region were higher than the references. The average deviations for all field sizes are 0.58 AE 0.26% and 0.54 AE 0.25% for the pan-or-tilt and pan-and-tilt position respectively. This result confirms that the gimbal movements affect the PDD curves. Figure 8 showed that combinations of the extended SSD and beam angle change caused this increase. 18 The mean deviation of the PDD curve at pan and tilt is higher than pan or tilt, which shows that pan-tilt move- The 2D dose profile evaluation using the pixel-to-pixel approach showed that only 30% of the 320 fields had > 95% points that passed dose difference criteria 1%-4%. Figure 10 showed that the penumbra region was the main contributor to the failure. The pan- tilt movements will influence the dose distribution in the penumbra region, which could create a dose difference up to 8%. Nevertheless, the mean dose deviations from the pixel-to-pixel comparison were less than 1%. Excluding the penumbra region and considering only 80% of the field size during evaluation, resulted in a similar mean dose deviation at a lower standard deviation. This concluded that the source of the high standard deviation was coming from the penumbra region. Table 2 shows that the mean deviation at pan and tilt is also higher than at pan or tilt position since the beam angle is higher at pan and tilt position. which is less than 1 mm in term of distance deviation (Fig. 3). These results indicated that the dose distribution at the target edge region and the OAR beyond the target depth would be affected the most.
Increasing the CTV-PTV margin could increase dose coverage at the CAX plane during tracking but would be contra productive with the aim of tracking treatment purpose.
The effect of gimbal motion during tracking in the current study was performed at the maximum position and could be considered a worst-case scenario. Therefore, further studies regarding the effect of gimbal motion in a real treatment setting are very important to assess the accuracy of the calculated and delivered dose distribution during the entire motion.
The pan-tilt motion effect measured in this study is limited to the ideal conditions in a water phantom. The difference of each fundamental dosimetry parameter to the reference is less than 1%. The pan-tilt motion effect measured in this study is limited to the ideal conditions in a water phantom. The difference of each fundamental dosimetry parameter to the reference is less than 1%. However, the conditions are much different in a real clinical condition due to the tumor motion direction and the shape of the patient which are not ideally represented by the phantom. The act of shifting the beam alone will not give the accurate dose calculation. The target motion during treatment will influence the dose coverage within the target and its surrounding OAR. For example, the target motion in parallel direction will create over-and under-dose of the target due to the target being no longer at the SAD point.
The dose located in the target plane was not suffering a large dose difference at ring and gantry 0 o and the dose difference was less than 1%. However, the dose above and below the target shows more dose deviation due to the path difference, and the separation of the isocenter and the target dose (Fig. 2). The edge of the beam was suffering large deviation ( Fig. 11) even though the beam shifting was applied. Taking the depth of the target as the reference CAX, the shift of the beam position above or below the target could be
4.A | Future study
More detailed works to simulate the real clinical situation are required to estimate a dose calculation closer to the real situation.
New CT datasets and MU distributions corrected for the pan-tilt position cannot be generated manually but require dynamic adjustment as part of the dose calculation. Nevertheless, this fundamental data can illustrate the effect of the pan-tilt motion in the ideal situation and can be used as a precaution how to implement the pan-tilt motion for the treatment in the absence of appropriate 4D TPS.
A feasibility study regarding the implementation of the pan and tilt motion in a TPS using image transformations of the CT data outside the TPS while implementing the ring and gantry rotations within the TPS was done. 26 Full implementation of the approach in a TPS requires transformation of the CT dataset according to the pan and tilt orientation but does not need any modification of the TPS dose calculation algorithm, which makes the implementation is much easier.
| CONCLUSION S
The dose deviation for pan and tilt motion are higher than for gimbals moved in pan or tilt due to a longer SSD and a higher pan-tit PRASETIO ET AL.
| 153 angle relative to the water surface. The impact of VERO gimbal movement on 2D dose profile, OF and PDD were less than 1%, 0.5%, and 0.5% respectively. The penumbra region is greatly influenced, at the investigated maximal gimbal motion with dose differences up to 8% against the reference position. There is also a shift of the radiation path that depends on the depth relative to the isocenter, which can influence OARs distal to the target volume.
Considering the gimbal motion in the dose calculation would be beneficial to improve the accuracy of treatment delivery.
ACKNOWLEDGMENTS
The presented work was performed by the first author HP in fulfillment of the requirements for obtaining the degree "Dr. rer.
CONF LICT OF I NTEREST
The authors declare that there are no conflicts of interest in connection with this work. | 4,912.6 | 2017-06-06T00:00:00.000 | [
"Physics",
"Medicine"
] |
The Synergistic Enhancing-Memory Effect of Donepezil and S 38093 (a Histamine H3 Antagonist) Is Mediated by Increased Neural Activity in the Septo-hippocampal Circuitry in Middle-Aged Mice
Donepezil, an acetylcholinesterase inhibitor, induces only moderate symptomatic effects on memory in Alzheimer’s disease patients. An alternative strategy for treatment of cognitive symptoms could be to act simultaneously on both histaminergic and cholinergic pathways, to create a synergistic effect. To that aim, 14 month old C57/Bl6 mice were administered per oesophagy during nine consecutive days with Donepezil (at 0.1 and 0.3 mg/kg) and S 38093 (at 0.1, 0.3, and 1.0 mg/kg), a H3 histaminergic antagonist developed by Servier, alone or in combination and tested for memory in a contextual memory task that modelized the age-induced memory dysfunction. The present study shows that the combination of Donepezil and S 38093 induced a dose-dependent synergistic memory-enhancing effect in middle-aged mice with a statistically higher size of effect never obtained with compounds alone and without any pharmacokinetic interaction between both compounds. We demonstrated that the memory-enhancing effect of the S 38093 and Donepezil combination is mediated by its action on the septo-hippocampal circuitry, since it canceled out the reduction of CREB phosphorylation (pCREB) observed in these brain areas in vehicle-treated middle-aged animals. Overall, the effects of drug combinations on pCREB in the hippocampus indicate that the synergistic promnesiant effects of the combination on memory performance in middle-aged mice stem primarily from an enhancement of neural activity in the septo-hippocampal system.
INTRODUCTION
The "cholinergic hypothesis" in aging or Alzheimer's disease is based on the correlation between the memory impairment and the decrease of the cholinergic function in the brain (Bartus et al., 1982;Johannsson et al., 2015). Such correlation has also been observed in aged rodents (Fu et al., 2014;Lim et al., 2015). The first clinical approach has thus consisted in inhibiting the decrease of acetylcholine (ACh) by blocking its degradation by acetylcholinesterase (AChE) in the synaptic cleft (Marighetto et al., 2008). Acetylcholinesterase inhibitors (AChEI), such as Donepezil, are currently used in this way but are however modestly effective in AD with only moderate cognitive improvements (Lockhart and Lestage, 2003;Francis et al., 2010).
In addition to deficits in ACh, histamine neurotransmission is also reportedly diminished in elderly or in AD (Fernández-Novoa and Cacabelos, 2001). Histamine has raised interest for its implication in memory and attention (Witkin and Nelson, 2004;Schwartz, 2011). Among the four types of histaminergic receptors, the H3 subtype, mainly presynaptic, is expressed on neurons in the central nervous system (CNS), particularly in brain areas involved in cognitive processes and arousal. Expressed on histaminergic neurons, its activation leads to the inhibition of the synthesis and release of histamine (Arrang et al., 1983) and also negatively regulates the release of other neurotransmitters such as ACh when H3 is expressed on heterologous nerve endings (Blandina et al., 1996;Brown et al., 2001). Thus, it has been argued that H3 antagonists, which could hamper the constitutive negative feedback of H3 receptors on the release of these neurotransmitters, would be valuable in correcting cognitive deficiencies (Fox et al., 2003;Ligneau et al., 2007;Femenía et al., 2015).
To that aim, S 38093 was developed by Servier. S 38093 is an inverse agonist/antagonist of H3 receptors, which has shown procognitive properties at a mean pharmacological dose of 0.3 mg/kg (Panayi et al., 2014). Indeed, it improves performance in episodic-like memory paradigm both in adult rats (object recognition with natural forgetting or scopolamine-induced amnesia) and aged mice (relational memory task). It is also effective in working memory paradigms in middle-aged mice (spontaneous alternation or concurrent serial alternation) and in aged monkeys (delayed matching to sample task). These effects are thought to be mediated by an enhanced release of neurotransmitters especially ACh and histamine, which are indeed observed by microdialysis in the prefrontal cortex (PFC) and the hippocampus of rats after S 38093 administration (Panayi et al., 2014) An interesting alternative for treatment of cognitive decline could be to act simultaneously on both histaminergic and cholinergic pathways, to create a synergistic effect. Indeed, combined treatments can be more effective than compounds alone and can allow using lower doses of each compound, i.e., minimizing the potential negative side effects. Therefore, the aim of the present study was to investigate in a first experiment the effect of the chronic administration of S 38093 and Donepezil, alone or in combination, in a model of contextual memory impairments in middle-aged mice . In a second experiment, we measured the cAMP response element binding protein (CREB) phosphorylation as a marker of intracellular PKA activation and increased neuronal activity after behavioral testing. Indeed, it has been shown that memory consolidation relies on PKA activation and subsequent CREB phosphorylation in the hippocampus (Bernabeu et al., 1997;Colombo et al., 2003;Baudonnat et al., 2011) whereas cognitive abilities involving mPFC are impaired by PKA activation (Runyan and Dash, 2005;Barsegyan et al., 2010). Therefore, CREB appears as a point of convergence for the intraneuronal kinase/phosphatase balance, and reflects neuronal activity sustaining memory processes (Benito and Barco, 2010).
Animals
Animals were 12 months-old mice of the C57/Bl6 inbred strain obtained from Charles River (L'Arbresle, France). They were housed in collective cage in the colony room (12 h lightdark cycle) until they were 14 months. Three weeks before the experiments, they were housed individually. All procedures were carried out during the light phase of the cycle, between 08:00 a.m. and 12:00 a.m. Three days before the acquisition phase of memory testing and during the remaining behavioral phase, all subjects were maintained at 85-90% of their ad libitum body weight. All experiments were performed in accordance with the local Ethics Committee for Animal Experiments and the European Communities Council Directive of 1st February 2013 (2010/63/UE).
Memory Test
The memory task and apparatus has been fully described previously (Chauveau et al., 2009). The contextual serial discrimination (CSD) task is based on two successive discriminations in a four-hole board which can be retrieved with the help of specific temporal and contextual cues associated with each of them. We already showed that unlike young mice, middle-aged mice showed a deficit in this task (Béracochéa et al., 2007(Béracochéa et al., , 2011Tronche et al., 2010).
Acquisition Phase
The acquisition phase took place in room A where animals learned two consecutive spatial discriminations (D1 and D2; Figure 1A) which differed by the color and texture of the floor and were separated by a 2-min delay interval. For both D1 and D2, 10 20-mg food pellets were available during the 6-min exploration sessions; for D2 specifically, the baited hole was consistently located in the opposite symmetrical hole. Environmental cues made of colored paper sheets were positioned at 1.00 m above the board. At the end of the acquisition phase, mice returned in the animal's room for 24 h. Animals retained for the test phase in the present study for both Experiments 1 and 2 have eaten at least 7-8 pellets/10 during both acquisitions.
Test Phase
Mice were replaced on the D1 floor in the board without any pellet in the apparatus and were allowed to freely explore for 6 min during which the number of head-dips in each hole was counted. This allowed measures of the % of "correct responses" (head-dips into the hole previously baited on the same floorcontext), the % of "interference responses" (head-dips into the hole previously baited at D2, on the other floor-context) FIGURE 1 | (A) Contextual serial discrimination: at the acquisition phase, mice performed two consecutive spatial discriminations varying by the color and texture of the floor, i.e., D1: Discrimination 1 and D2: Discrimination 2. For each discrimination, only one hole out of the four holes of the apparatus was baited (hashed circles). The two discrimination were separated by a 2 min delay interval during which animals are placed in room B. A 24-h delay was interpolated between the acquisition and test phases, during which mice were returned in the colony room. 1 h prior to acquisition and test phases, mice received a per se administration of the compounds or vehicle solution in a chamber placed in a room (room C) different from the one in which the behavioral experiments was conducted (room A). Subsequently, mice were submitted to the test phase in which they were replaced on the floor of the first discrimination without any food pellet in the apparatus. (B) Two types of responses were calculated: (i) correct responses corresponding to head-dips into the hole baited at the acquisition of the first discrimination (D1), on the same floor context; (ii) interference responses corresponding to head-dips into the hole baited at the other (second) discrimination (D2). These two parameters allow calculus of the SCM score.
( Figure 1B) and of the "strength" of "contextual memory" score (SCM) (% correct responses -% interference responses). Thus, since correct responses are based on the use of the internal context (color of the floor) and interferent responses are based on the use of spatial allocentric cues previously associated with the other floor, thus the trend of SCM score toward a positive difference represents the gain of internal contextual memory, at the expense of allocentric spatial one. in the paper) was used in this study. Its chemical formula is C 17 H 24 N 2 O 2 , HCl (Figure 2).
S 38093 and Donepezil were diluted in purified water. Mice were allocated to administration of vehicle (purified water), S 38093 (S1: 0.1 mg/kg; S2: 0.3 mg/kg; S3: 1.0 mg/kg) or Donepezil (Don1: 0.1 mg/kg; Don2: 0.3 mg/kg), or a combination of both S 38093 and Donepezil at the same doses with N = 12 in each group. S 38093 and Donepezil doses were determined according to previous data Panayi et al., 2014). Before behavioral testing, all mice received for nine consecutive days a daily esophageal administration of S 38093, Donepezil, vehicle or S 38093 + Donepezil combinations, administered at a volume of 10 mL/kg. The two last administrations were delivered 1 h before the acquisition and test phases.
Pharmacokinetics Study
For the pharmacokinetics study, an additional day of treatment (day 10) was done for blood sampling to measure concentrations of S 38093 and Donepezil in plasma, on the combinations groups where either the most important synergistic effect was observed, or containing the highest doses of S 38093 and Donepezil. Then, blood sampling (250 µL /sample) was performed on five groups (N = 3 animals per group): S 38093 (0.1 and 1.0 mg/kg); Donepezil (0.3 mg/kg); combination of Donepezil (0.3 mg/kg); and S 38093 (0.1 and 1.0 mg/kg). The sampling times are 30 min, 1, 2, 4, and 6 h. In each mouse, two time points of blood sampling were performed. Plasma was extracted from the blood sample by centrifugation (+4 • C, 3000 g, 10 min) and stored at −80 • C. Then, plasma samples were sent in dry ice to MDS (MDS Pharma Services, Switzerland) for the pharmacokinetic analysis.
S 38093 and Donepezil were independently measured by liquid chromatography using tandem mass spectrometry detection (LC/MS-MS). Prior to analysis, S 38093 was extracted from 25 µL of sample by solid phase extraction on an Isolute 96 CBA SPE 50 mg cartridge while Donepezil was extracted by liquid/liquid extraction from a second 25 µl aliquot. The limit of quantification was 0.3 ng/mL for S 38093 and 0.1 ng/mL for Donepezil.
Animals were sacrificed by cervical elongation immediately after either behavioral testing or plasma sampling.
Behavioral Testing
Food deprivation, drug administration and behavioral testing procedures were similar as in Experiment1. We performed in Experiment 2 an immunohistochemical study on the combination groups where either the most important synergistic effect was observed (Don2+S1 and Don2+S2), or Vehicle, S 38093 and Donepezil alone (Vehicle, S1, S2, and Don2). Independent groups of mice (N = 10 per group) were used. An additional group composed by young vehicle mice (4-5 months young-vehicles, N = 10) was added for determination of the aging effect on both memory and phosphorylated CREB immunoreactivities, as compared to middle-aged Vehicles. For the immunohistochemical study, animals submitted to D1 memory testing were compared to "naïve" mice isolated in the colony room (with N = 5 per group; Naïve condition) and that underwent the food deprivation procedure and drug treatments as behaving animals (Test condition).
Immunohistochemistry
Thirty minutes after completion of the test session, mice were deeply anesthetized (Avertin, 10 mL/kg intraperitoneally i.p.), and perfused transcardially with an ice-cold solution of 4% paraformaldehyde in phosphate buffer (0.1 M, pH 7.4). After perfusion, brains were removed and post-fixed overnight in the same fixative at 4 • C. Brains were then put in a sucrose solution (30% in Tris Buffer 0.1 M, pH 7.4) during 24 h. They were then frozen and cut in 50-mm coronal free-floating sections with a freezing microtome (Leica) to proceed to immunochemistry.
FIGURE 3 | Effects of S 38093, Donepezil and combinations on SCM scores in middle-aged mice (Experiment 1). SCM scores are expressed as mean ± SE. SCM scores are obtained by calculation (% of correct responses -% of interferent responses). As can be seen, the lowest S 38093 dose combined with the highest Donepezil dose (DON 2 + S1), and inversely (DON 1 + S3), significantly increased SCM scores as compared to Vehicle and compounds alone. * * p < 0.01 and * * * p < 0.001 as compared to vehicle; ## p < 0.05 and ### p < 0.01 respectively as compared to compounds alone.
Total CREB (tCREB) and phosphorylated CREB (pCREB) immunostainings were performed as described in full previously (Vandesquille et al., 2013;Dominguez et al., 2014Dominguez et al., , 2016. Countings were made in the following brain regions according to Paxinos and Franklin (2001) atlas: the CA1 of the dorsal (dCA1) and ventral (vCA1) hippocampus, the prelimbic cortex (PL), the dorsal striatum (St), the basolateral amygdala nucleus (BLA) and the medial septum nucleus (MS). Digital images were captured at 10X magnification using an Olympus (BX50) and an imaging analysis system (ImageJ R ). At least six serial sections for each brain region were analyzed using a computerized image analysis system (Visiolab 2000 R , Biocom, and V4.50). Quantification was expressed as mean number of positive nuclei per mm 2 .
Statistical Analysis
Behavioral data were analyzed by one-way or two-way factorial analyses of variance, followed when adequate, by post hoc (Dunnett test) comparisons using the least significant difference test, with a p < 0.05 statistical threshold. Data were represented as mean ± standard-error of the mean. For correlation analyses, the Spearman's correlation coefficient, R, was determined. For immunohistochemistry, in so far as no difference was found in the naïve condition, data were analyzed similarly using the rough number of immunopositive cells , with a p < 0.05 statistical threshold.
Effect of the Compounds and Their Combinations on Age-Related Memory Deficit
Body weights among the groups were ranged from 28.9 ± 3.5 to 32.4 ± 4.3 g and no significant between-groups difference was observed [F(11,132) < 1.0]. During the food restriction period, The PK parameters of the co-administration of S 38093 (0.1 and 1.0 mg/kg) with Donepezil (0.3 mg/kg) are in the gray columns.
Frontiers in Pharmacology | www.frontiersin.org both vehicles and drug-treated mice eaten all their allocated daily amount of dry food. All animals retained for the test session have eaten at least 7/8 pellets out of 10 at both the first and second acquisitions.
Acquisition phase
No significant difference was observed among the groups on the total number of head-dips as well as on the % exploration of the baited hole both at acquisitions 1 and 2 (p > 0.10 in all analyses).
Test phase
Total number of head-dips. The total number of head-dips ranged from 31.2 ± 9.4 to 62.3 ± 9.4 and no significant between-groups difference was observed
Pharmacokinetics Interaction
Results have been summarized in Table 1. The inter-individual variability on S 38093 plasma concentrations was moderate. Cmax was reached 0.5 h after dosing and the elimination half-life was between 1.5 and 2.4 h. Both exposure and C max increase in a dose proportional manner within the same treatment regimen. Variability of Donepezil plasma concentrations following concomitant administration of 0.3 mg/kg was moderate. Maximal Donepezil plasma concentrations were observed between 0.5 and 1 h after administration and were similar with and without co-administration of S 38093. There was no increase of neither S 38093 nor Donepezil plasma exposure when administered as a combination.
Experiment 2: Study of the Effect of the Compounds on Age-Related Memory Deficit and CREB Phosphorylation
Body weights among the groups were ranged from 25.9 ± 2.8 g (young adult mice) to 31.6 ± 4.1 g and no significant betweengroups difference was observed (p < 0.12). During the food restriction period, both vehicles and drug-treated mice eaten all the allocated daily amount of dry food. As in Experiment 1, all animals retained for the test session have eaten at least 7/8 pellets out of 10 at both the first and second acquisitions.
Effects of Aging on SCM Score and pCREB Activity Behavior Acquisition phase. No significant difference was observed among the groups on the total number of head-dips as well as on the % exploration of the baited hole both at acquisitions 1 and 2 [F(1,18) < 1.0 in all analyses; data not shown].
Immunohistochemistry
Effects of aging on tCREB and pCREB immunoreactivity. Results are presented in Figure 4B. One way ANOVA evidenced no significant difference (p > 0.10 in all analyses) between young adult and middle-aged mice on the number of tCREB immunopositive cells in naïve and test conditions, whatever the brain areas counted (data not shown). Concerning pCREB, one way ANOVA also evidenced no significant between-groups difference in naïve condition on pCREB scores whatever the brain areas counted (p > 0.10 in all analyses; data not shown).
Effects of Treatments on SCM Scores and pCREB Activity in Middle-Aged Mice Behavior
The group of young adult mice has been discarded from further statistical analyses since the effects of the compounds alone or in combination have been studied only in middle-aged animals.
Acquisition phase. No significant difference was observed among the groups on the total number of head-dips as well as on the % exploration of the baited hole both at acquisitions 1 and 2 [F(1,54) < 1.0 in all analyses; data not shown].
Strength of contextual memory. Data are represented in Figure 5. ANOVA evidenced significant effects of Donepezil [F(1,54) = 48.88; p < 0.0001], of S 38093 [F(2,54) = 7.49; p = 0.0013] and of the interaction between Donepezil and S 38093 [F(2,54) = 5.57; p = 0.0063]. Donepezil (+7.48 ± 3.60) and S 38093 at the doses of 0.3 mg/kg (+5.19 ± 6.68) and 0.1 mg/kg (−14.3 ± 5.76) produced no significant modifications of the scores as compared to vehicle (−18.14 ± 7.9; NS in FIGURE 4 | (A) Effects of aging on SCM scores (Experiment 2). As can be seen, 14 months-old middle-aged mice exhibited a significant reduction of SCM score as compared to 5 months-old young-adult mice. Between-groups difference: * * * p < 0.001. (B) Effects of aging on pCREB immunoreactivities. Counting was made in the PFC, dCA1, vCA1, medial septum, dorsal striatum, and BLA. No significant differences were observed between the groups in all brain areas in naïve condition (data not shown). Middle-aged mice showed reduced behavioral testing-related pCREB immunoreactivities in the dCA1, vCA1, and medial septum as compared to young adult mice. In contrast, a significant increase of pCREB was observed in the BLA in middle-aged animals, as compared to young ones. Results are expressed as mean number of positive pCREB nuclei per mm 2 . * p < 0.05, * * p < 0.01. all comparisons). However, the higher positive scores were observed in groups receiving the combinations of Donepezil and S 38093 at the doses of 0.1 mg/kg (+43.59 ± 7.10; p < 0.001 versus vehicle and p < 0.01 versus the Donepezil group) and 0.3 mg/kg (+25.73 ± 3.96; p < 0.01 versus vehicle; NS versus Donepezil). Thus, only the combination of Donepezil and S 38093 at 0.1 mg/kg produced a significant increase of SCM score as compared to both vehicle-treated mice and Don 0.3 mg/kg.
In contrast, significant differences were observed in test condition (Figure 5). More specifically, middle-aged mice exhibited less immunopositive cells as compared to young animals in the dCA1 (−47.5%; p = 0.007), the vCA1 (−56%; p = 0.0037), and the MS (−66.2%; p = 0018). A weak increase of immunopositive cells is observed in the BLA of middle-aged mice (+32.7%; p = 0.039). No significant between-groups difference was observed in the PFC and the dorsal striatum.
Effects of treatments on tCREB and pCREB immunoreactivity in middle-aged mice
Total CREB. Analysis of variance analyses showed no significant between-groups difference, no treatments effect and no significant interaction between drugs were found both in naïve and test conditions (p > 0.10 in all analyses; data not shown).
Phosphorylated CREB. In naïve condition, no significant between-groups difference, no treatments effect and no significant interaction between drugs was found whatever the brain structure counted (p > 0.10 in all analyses; data not shown).
In test condition (Figure 6A), no significant between-groups difference and no significant interaction between drugs were observed in the BLA, the dorsal striatum and the vCA1 (p > 0.10 in all analyses). However, donepezil induced a significant increase of immunopositive cells in the dCA1 [F(1,54) = 11.2; FIGURE 5 | Effects of S 38093, Donepezil and combinations on SCM scores in middle-aged mice (Experiment 2). SCM scores are expressed as mean ± SE. SCM scores are obtained by calculation (% of correct responses -% of interferent responses). As can be seen, the lowest S 38093 dose combined with Donepezil significantly increased SCM scores as compared to Vehicle and compounds (Donepezil and S 38093) alone. Combination of Donepezil and the higher S 38093 dose induced a significant increase of SCM score as compared to vehicle only. * * p < 0.01 and * * * p < 0.001 as compared to vehicle; ##p < 0.01 as compared to Donepezil and S 38093 alone. p < 0.001] and in the MS [F(1,54) = 18.3; p < 0.001]. S 38093 at both doses has no significant effect in these two brain areas. Two way ANOVA also showed that S38093 and Donepezil have a significant interaction effect in the dCA1 [F(2,54) = 3.57; p < 0.03] and the interaction is near from statistical significance in the MS [F(2,54) = 3.0; p = 0.058]. Figure 6B illustrated changes in pCREB activity in the dCA1 of the hippocampus in the different groups in naïve and test conditions. Calculus in percentage of the enhancement of pCREB in the combination groups as compared to compounds alone or vehicles were performed. The main findings are as follows: In the BLA, Dunnett post hoc test evidenced a significant increase of pCREB expression in mice receiving S 0.1 mg/kg +Don 0.3 mg/kg (+60.9%) in comparison with S 0.1 mg/kg alone (p < 0.01).
In the dCA1, results showed a significant increase of pCREB expression in mice receiving the combination S 0.1 mg/kg+ Don 0.3 mg/kg (+129.4%) and in mice receiving S 0.3 mg/kg+ Don0.3 mg/kg (+64.7%; p < 0.01) as compared to vehicle. More importantly, for the group receiving the combination S 0.1 mg/kg+Don0.3 mg/kg, a significant increase of pCREB expression of +192.7% has been evidenced as compared to S 0.1 mg/kg alone (p < 0.01) and of +137.1% in comparison with Donepezil alone (p < 0.01).
In the MS, Donepezil alone as well as the combination S 0.1 mg/kg+ Don 0.3 mg/kg induced a significant increase of pCREB expression of +174.3% and +214.1% respectively as compared to vehicle. A significant increase of +380.7% has also been evidenced between S 0.1 mg/kg alone and the combination S 0.1+ Don 0.3 mg/kg.
Synergistic Effects between Donepezil and S 38093 on Contextual Memory in Middle-Aged Mice
As regards SCM scores, vehicle-treated middle-aged mice showed a negative score on the retention of D1 (indicating an increase of interferent responses) in sharp contrast to young adult mice which exhibited a very positive score. In contrast, compounds alone or in combination (excepted for the combinations of the two lowest and the two highest doses of each compound) reverse this age-related memory pattern, as those mice have a substantial memory of D1 but not of D2 as compared to vehicle-treated animals. Since it has been shown that memory of D1 is hippocampus-dependent (Chauveau et al., 2008(Chauveau et al., , 2010 whereas memory of D2 is dependent on the PFC activity (Chauveau et al., 2009), it is thus of importance to verify that any promnesiant impact of the combinations of compounds on D1 did not alter memory of D2. This effect, statistically significant for the combinations with the exception quoted above, is the index of a substantial contextual memory-enhancing effect since the increase of the correct response is accompanied by a concomitant decrease of the interference one. Interestingly, the more powerful combination is observed with the lowest FIGURE 6 | Effects of S 38093, Donepezil and combinations on pCREB immunoreactivities in middle-aged mice. (A) Counting was made in the PFC, dCA1, vCA1, medial septum, dorsal striatum, and BLA. No significant differences were observed between the groups in all brain areas in naïve condition (data not shown). As can be seen, the lowest S 38093 dose combined with Donepezil significantly increased pCREB immunopositive cells as compared to vehicles and compounds (Donepezil and S 38093) alone. A significant increase of immunopositive cells was also observed in the BLA with the same combination as compared to compounds alone (Donepezil or S 38093). Combination of Donepezil and the higher S 38093 dose induced a significant increase of pCREB immunopositive cells as compared to vehicles only. * p < 0.05 and * * * p < 0.001 as compared to vehicles; ##p < 0.01 as compared to Donepezil and • p < 0.05 as compared to S 38093 alone. (B) Representative photomicrographs showing pCREB immunoreactivities in the dCA1 of the hippocampus in young adult and middle-aged mice after administration of S 38093 (0.1 and 0.3 mg/kg), Donepezil (0.3 mg/kg) and combinations both in naïve and test conditions. Magnification X20, Scale bar: 50 µm.
S 38093 dose + Donepezil 0.3 mg/kg since this combination induced a significant enhancement of contextual memory as regards to both Donepezil and S 38093 alone. It is noteworthy that combination of S 38093 with low pro-cognitive doses of any other compound currently approved for moderate-to-severe AD [i.e., others AChEI (rivastigmine and galantamine) and memantine, an uncompetitive NMDA (N-methyl-D-aspartate) antagonist] also leads to higher reversal of age-related memory deficit in this task compared to compounds alone (unpublished results, data not shown). Moreover, there was a very good correlation between the increase of percentage of correct responses and the decrease of percentage of interferent responses, confirming the specific effect of each compound and their combinations on contextual memory (data not shown).
Interestingly, a synergistic effect between a H3 antagonist and an anti-cholinesterase inhibitor has also been described in cognitive impairment associated with scopolamine in healthy young subjects (Cho et al., 2011). Our data showing synergistic effects between the H3 antagonist S 38093 and drugs approved in AD are innovative since, to our knowledge, that has never been described for other H3 compounds nor procognitive compounds on natural model of age-induced amnesia in rodents.
As the two compounds were administered simultaneously, it was important to assess a potential pharmacokinetic interaction, which could have faked a synergistic effect by increasing the blood exposure. Our study confirmed that the synergistic effect of the combination of the two compounds is not due to a pharmacokinetic interaction between them. Moreover, the safety of S 38093 and Donepezil, alone or in combination, was tested using the primary observation (IRWIN) test in mice and results showed that the combination of pharmacological doses of both compounds did not induce any observable clinical signs, similarly as compounds administered alone (data not shown).
Synergistic Effects between Donepezil and S 38093 on pCREB Immunoreactivity in the Dorsal Hippocampus and the Medial Septum
Aging reduced pCREB in the dCA1, the vCA1 and the MS as compared to young vehicle, whereas an increase of pCREB is observed in the BLA. These data confirm that alterations of the hippocampus activity induced contextual memory deficit in middle-aged animals (Béracochéa et al., 2011). Interestingly, we also previously showed that the BLA and the mPFC are importantly involved in the memory retrieval of D2 (Chauveau et al., 2009;Dominguez et al., 2014). Thus, the increase of pCREB in the BLA during memory retrieval of D1 could reflect an abnormal concomitant recruitment of a BLA-PFC network in middle-aged animals which could increase interference (D2) responses.
Whereas no significant between-groups difference is observed in the naive condition, S 38093 and Donepezil have a significant synergistic effect in the dCA1 and near significant effect in the MS, as compared to each compound alone. More precisely, the combination of S 0.1 mg/kg and Donepezil 0.3 mg/kg reverses in the dCA1 the age-induced hypo-phosphorylation of pCREB. This combination is also the most efficacious in reversing the ageinduced memory retrieval deficit for D1, as previously reported.
Hypothesis on the Mechanism of Action
Immunohistochemical data show that the combination of S 0.1 mg/kg and Don 0.3 mg/kg increases pCREB in structures of the cholinergic septo-hippocampal loop. Indeed, the medial septal area provides most of the cholinergic innervation of the hippocampus (Jakab and Septum, 1995;Mamad et al., 2015).
Thus, one hypothesis regarding the mechanisms of action of the compounds is an effect on the hippocampal cholinergic system. Indeed, in separate experiments, we showed that the acute as well as chronic administration of S 38093 in rats, by antagonizing presynaptic H 3 receptors, has been shown to rapidly and dose-dependently increase the release of ACh in the ventral hippocampus and the PFC of rats (intracerebral microdialysis, see supplemental data and Panayi et al., 2014). On the other hand, the well-known AChEI Donepezil, given alone, also increased ACh level in the synaptic cleft after acute administration. Previous studies have also confirmed that the increase in ACh levels induced by Donepezil in the cortex and hippocampus of rats are maintained after chronic administration (Scali et al., 2002). Either action could result in the observed enhancement of contextual memory with compounds alone, whereas the stronger effect of the combinations could be attributed to a synergistic effect on the hippocampal cholinergic system, including the additive effect on ACh release of the compounds observed in microdialysis when S 38093 and Donepezil were administered together (Supplemental Data, see Figures 1 and 2). Thus the microdialysis experiment evidenced the potential capacity of S 38093 and Donepezil to enhance the cholinergic activity within the hippocampo-PFC network that is substantially implicated in the CSD task.
Even if the effect of the compounds were evaluated in the PFC and the ventral hippocampus in supplemental data, it was also demonstrated that H3 antagonists including S 38093 (data no shown) as well as Donepezil were also able to significantly increase ACh levels in other brain areas and in particular dorsal hippocampus (Medhurst et al., 2007;Herrik et al., 2016), a critical region for memory processes, in which we have demonstrated pCREB increases in the present study.
Mice treated with the combination of the two highest doses of S 38093 and Donepezil showed impairments in the retrieval of D1. It could be hypothesized that this combination induces an important increase of acetylcholine levels both in the hippocampus and PFC, which could alter the interaction between these two areas during the testing of D1, leading to memory impairment. Indeed, the concomitant recruitment of the PFC by acetylcholine in mice receiving the drug combinations could enhance interference (D2) responses (Chauveau et al., 2009), at the expense of the dHPC-dependent one (D1).
CONCLUSION
The present study shows that the combination of the two memory-enhancing compounds, Donepezil and the H 3 antagonist S 38093, can lead to synergistic memory-enhancing effects, with a statistically higher size of effect never obtained with any memory-enhancing compounds alone without any pharmacokinetic interaction between both compounds. The memory-enhancing effect of the S 38093 and Donepezil combination is mediated by its action on the septo-hippocampal circuitry, since it canceled out the hypo-phosphorylation of pCREB in both the dCA1 and the medial septum that is observed in vehicle-treated middle-aged mice. Given the known procholinergic effects of histaminergic H3 inverse agonists and Donepezil (Schwartz and Lecomte, 2016) and our own data drawn from the microdialysis experiment (supplemental data), it could be suggested that the synergistic effects of the combination of S 38093 and Donepezil on memory performance in middleaged mice could stem from an enhancement of the septohippocampal cholinergic system. | 7,596.4 | 2016-12-22T00:00:00.000 | [
"Biology",
"Psychology",
"Medicine"
] |
Research on Dynamic Game Model of Enterprise Green Technology Innovation Driving Force
As environmental pollution problem becomes increasingly serious,the driving force of green technological innovation from the government and public consumers is currently strengthening. Hence relationship of mutual restraint and interaction exists among the government,� innovative enterprises, and general consumers are quite concerned. This paper proposes the tripartite evolutionary game model of the government,enterprises, and public consumers,then applies dynamic game evolution theory to build the third-party game payoff matrix, analyses the influences of enterprises, governments and consumers decision on green technology innovation through evolution model, the results show that pollution resistance measures of consumers will promote the diffusion of green technology innovation to some extent, high pollution regulation of governments make contributions to the green technology innovation as well.
Introduction
With Green wave of the global economy, a growing number of Enterprises realized that green technology innovation is not only to protect the environment but also the social responsibility of Enterprises.To gain a competitive advantage in the market, most of the Enterprises have to achieve the dual goals of economic performance and environmental performance.For macro ecological governance, to coordinate the contradiction between economic development and ecological environment, the Government actively supports the green management enterprises, such as provide financial subsidies and support policies for developing green products.Majority research shows that the green management behavior of Enterprises is influenced by government, policy, market, cost, law, consumer, etc. [1] , According to the institutionalism theory [2] , the organization is embedded in the institution environment, and the organization can obtain the legitimacy of social by isomorphism of institutional environment.However, most Enterprises show negative attitude to the high barriers of green technology innovation and the cost of RD [3] , additionally, low public awareness of green technology innovation reduces the development and diffusion efficiency of enterprises green technology innovation [4] .Therefore, how to eliminate the old RD technology and promote the enterprise to carry on the green technology innovation under the public participation and the government environmental regulation is of great practical significance.
Literature review
At present, the research on the driver force of green innovation mostly takes strategic management theory as the theoretical framework, and probes into the motivation and influence of enterprise's implementing environment strategy from the perspective of resource-based theory.Zhong Liu believes that under the influence of institutional environment, the institutional driving force of enterprise green technology innovation consists of regulatory, normative and cognitive pressures [5] .Zhang Haijiao stress green management must integrate by the whole process of production and management and adhere to the competitive green management strategy to establish the green competitive advantage of enterprises [6] .Zhu Qinghua suggests the external institutional pressure is the reason that the enterprise carries on the internal green practice, the enterprise's green supply chain practice also enhances the enterprise's economic performance and the operation performance indirectly [7] .However, the heterogeneity of green innovation in industry shows that when the institutional pressure restraints of the enterprise's relatively strong, the enterprise's strategic response either chooses the breakthrough or passively subject to it, therefore the Enterprise green technology innovation practice heterogeneity is the result of triple institutional pressure synergy.From the previous literature, Kammerer points out Consumer environmental awareness, and government regulation can promote the application and technological innovation of environmental products [8] , Wang Bingcheng suggests that direct factors as such market demand and national legislation promote technology innovation and diffusion of green products, indirect factors including consumer income and product intellectual property rights [9] .Pujari discusses the role of market demand, technology, product life cycle and their coordination relationship in green product innovation activities through hierarchical regression method [10] .Cantonos proposes a green product diffusion network model based on the heterogeneity of consumers preference [11] .On the whole, the existing research only focuses on the interaction between the government, consumers and the enterprises, and ignores the influence of the three parties combination driving force on green technology innovation.In fact, The enterprise embedded in a complex institutional environment is the connection point of the multi-stakeholder relationship, its green technology innovation strategy is influenced by the government and consumers, and the interaction of three parties will cause the change of the green technology innovation behavior of the enterprise.Based on this, this research applies institutional theory to construct the three-party evolutionary game model of government, enterprise, and public consumers, and analyze the influence of the evolution behavior of the three in different stages on the Green technology innovation strategy.
Dynamic Game of Enterprise Green Technology Innovation
(1) Green Technology Innovation Game Model Conditions Given the strong regulatory pressure of the Chinese government, and the complexity of the technology on green innovation and the long-term return on investment, in order to realize the dynamic equilibrium between the enterprise and the government, we assumed that the government should try to provide the enterprise with the policy support especially for the small and medium-sized enterprises, and the consumers should stimulate the enterprises to carry out green innovation by actively buying green products.Condition 1, the government, enterprise and consumer for Enterprise technology green innovation consists of tripartite game, the government's regulation means includes publicizes the social environmental protection consciousness, the green innovation subsidy, the pollutant emissions right transaction, its execution strength factor is α, β, γ, its corresponding regulation cost respectively is αI, βJ, γK.Condition 2, Enterprises, consumers and government have two options, enterprises can choose whether to implement green innovation, the government can choose whether to conduct environmental regulation, consumers can choose whether to buy green products, x, y, z indicate the probability of positive attitude of three parties, the probabilities are all the function of the period t, and belongs to [0, 1].Condition 3, the profitability of before starting green technology innovation is set to P, the income of the enterprise is ΔP after the green innovation, and the green technology innovation income is ΔP 1 when the government regulates and the consumers purchase green products.The enterprise green innovation income for ΔP 2 when government does not regulate but consumers buy green products, the enterprise green Innovation income for ΔP 3 when government regulation but consumers do not have green product purchasing power, the enterprise Green Innovation income is 0 if the government does not regulate and consumers do not have green product purchasing power, The investment cost of green technology innovation is Ci, the government's environmental regulation income and loss are Pg, Sg, consumers buy green products creates environmental benefits is Pc, and losses for Sc if consumers don't buy it, consumer's resistance to enterprise pollution is Rm.
(2) Green Technology Innovation Game Model Evolution After enterprises adopt green innovation, there will be two kinds of situations in the market, The number of enterprises that don't carry on green technology innovation set to T 0 , and the number of enterprises that adopt green innovation in the market set to T N , R 1 and R 2 represent the Government's support strength for green technology innovation and the restrain strength for non-green innovation enterprise technology respectively.σ n0 =K n /K 0 indicates the barrier factor of backward technology to green innovation, it explains the purchasing power of consumers for non-green products, σ 0n =K 0 /K n indicates the renewal coefficient of green innovation to backward technology, it means the market purchase power of green products.Thus, we get the enterprises evolution state since they adopt green innovation in the period, it as follows: Under the different game strategies conditions, the three-party game matrix of enterprises, consumers, and governments as follows:
P -S g -S c
With the development and progress of technology, the game of green technology innovation of enterprises will evolve with the time before they reach the equilibrium state, because of the information asymmetry between the game parties, the strategy decision of either party can only be based on its own experience and the strategy of other parties.According to the probability of three strategy alteration x, y, z, From the above tables, we get the Replicator dynamics equations of enterprises, government, and consumers as follows, After the continuous cooperation and confrontation, the final three-party Nash equilibrium obtained.
The game evolution of consumer game depends on linear equations R m2 y-R m1 y-R m2 =0.
To fully demonstrate the equilibrium state model of the diffusion of green technology innovation in the market, we assume dt dT n = t dT d 0 ,then we obtain as following, At this point, we can see four-game equilibrium points in the model, namely A(0,0)、B(0 ]. Point D is the Nash equilibrium solution to be explored in this paper.
When the enterprise's green technology innovation product market share is low, the previous backward technology will restrain the enterprise to adopt the new environmental protection technology, when the green technology innovation gradually diffuses along with the green product market demand level enhancement, the whole game system finally reach the equilibrium state.We assume the linear P and linear Q as following, With continual learning experience and practice of game parties, as long as governments impose regulations on and consumers resists companies that produce non-green products, the number of companies that had previously used backward technology would fall down, then the number of green innovation manufactures grow, as shown in Figure 1 below, the game parties out of line P and Line Q gradually approach to the aggregation around two lines, the final evolution of the game will end with the green products occupy the market.From the above-mentioned game equilibrium solution 3 and solution 4, we can learn that, In the evolution of the 3 parties game of government, consumer, and enterprises, three results can be derived in the end, namely green technology innovation product will spread to the entire market, green technology innovation product coexist with non-green products, and non-green producers dominate the markets.
Conclusions
This paper applies dynamic game evolution theory to build the third-party game payoff matrix, analyses the influences of enterprises, governments and consumers decision on green technology innovation through evolution model, the results show that pollution resistance measures of consumers will promote the diffusion of green technology innovation to some extent, high pollution regulation of governments make contributions to the green technology innovation as well.We conclude following aspects, firstly, the government should choose reasonable environmental regulatory measures to lower the cost of environmental regulation.Secondly, consumers have an extensive and intensive influence to play for the promotion of green technological innovation; they should stop consuming non-green products to continuously push forward the development of green innovation technologies.Thirdly,enterprises should make every efforts to research and develop green technology products to reduce environmental pollution and consumption of energy.
Fig.
Fig. 1 E e process, if and consu y lead to ackward tec vation in ma
Table 2
Trilateral Game Matrix of Consumer Boycott but without Government Regulation
Table 3
Trilateral Game Matrix of Regulation but without Consumer Boycott | 2,529.8 | 2017-10-10T00:00:00.000 | [
"Environmental Science",
"Economics",
"Business"
] |
Blue-noise sampling for human retinal cone spatial distribution modeling
This paper proposes a novel method for modeling retinal cone distribution in humans. It is based on Blue-noise sampling algorithms being strongly related with the mosaic sampling performed by cone photoreceptors in the human retina. Here we present the method together with a series of examples of various real retinal patches. The same samples have also been created with alternative algorithms and compared with plots of the center of the inner segments of cone photoreceptors from imaged retinas. Results are evaluated with different distance measure used in the field, like nearest-neighbor analysis and pair correlation function. The proposed method can effectively describe features of a human retinal cone distribution by allowing to create samples similar to the available data. For this reason, we believe that the proposed algorithm may be a promising solution when modeling local patches of retina.
Introduction
Sampling is the reduction of a continuous signal into a discrete one, or the selection of a subset from a discrete set of signals.For sampling to be effective, samples should be uniformly distributed in a way that there are no discontinuities; but at the same time, regular repeating patterns should be avoided, to prevent aliasing.In the human retina, the mosaic of the cone photoreceptor cells samples the retinal optical projection of the scene, achieving the first neural coding of the spectral information from the light that enters the eye.To solve the sampling problem, the human retina has adopted an arrangement of photoreceptors that is neither perfectly regular nor perfectly random.Local analysis of foveal mosaics shows that cones are arranged in hexagonal or triangular clusters, but extending this analysis to larger areas shows characteristics such as parallel curving and circular rows of cones associated with rotated local clusters.
There are different theories regarding the regularity and development of the cone cells mosaic.Wassle and Riemann [61] proposed two models based on mechanisms that assume the self-regulation of an original random pattern, one with a repulsive force acting between nerve cells and the other based on competition for territory for each neighboring cell.Later, Yellott [67] discovered that the photoreceptors in the human retina, especially the cones, are distributed conforming to a Poisson disk distribution.He performed spectral analysis to an array of cones treated as sampling points and observed that the spectral properties of cones mosaic are representative of a Poisson disk array, with the additional restriction of a minimum distance between the center of the cells and their nearest neighbors, because of the size of the cells.This was confirmed by Galli-Resta et al., which investigated the spatial features of the ground squirrel retinal mosaics, suggesting that a minimal-spacing rule d min in conjunction with an adequate density of receptors can adequately describe the array of rods and S cones [27].Poisson disk distribution is now regarded as one of the best sampling patterns, by virtue of its blue-noise power spectrum [38].
It is still unclear how the spatial distribution and mean density of cones can affect the sampling of a retinal image [17].An interesting evidence of this open issue is the experi-ment from Hofer [33] which tested the perception of stimuli of small spatial scale.Showing brief, monochromatic flashes of light of half the diameter of a cone size on previously characterized retinal areas of the subjects, they described the same stimuli with a large number of hue categories, including white, blue and purple, indicating that the stimulation of two different cones with the same photopigment results in different color sensations, even with no stimuli in different regions of the retina or on other wavelength-sensitive cones.
In this study, we examined the Nearest Neigbour (NN) regularity index of the population of cones in images of real human retina.We then compared the results to another measure of spatial patterning, the Pair Correlation Function.The goal of this paper is to show that the sampling properties of the cone photoreceptor mosaic can be modeled by a bluenoise algorithm, and that they can be used to generate sampling arrays with the same features of the retinal cone mosaics.More specifically, we want to identify an algorithm capable of generating sampling arrays with the same range of densities in the retina, and use specific metrics to compare the spatial and spectral properties of the cones distribution.
Retinal and Cone sampling modeling
The most recent works on retinal modeling are focused on the neural behavior [63,45,44]; for example, Virtual Retina by Wohrer and Kornprobst is a large scale simulation software that transforms a video input into spike trains, designed with a focus on nonlinearities, implementing a contrast gain control mechanism.
However, there have not been many attempts at modeling the cone sampling array.The first known sampling model for positioning cones in the retina with the same qualities as the human sampling was described by Ahumada [4].It works by placing cones, which are surrounded by circular disks representing their region of influence, starting from the center of the retina, and then applying a random jitter to each point.There is an attempt to generate a space-varying parameter model, to extend the modeling capabilities past the foveola, by varying with the eccentricity the mean radius of the cone disk, the standard deviation of the cone disk radius, and the standard deviation of postpacking jitter; but ultimately those parameters seem to be only fit for the foveola.
After their studies on human photoreceptor topography, Curcio and Sloan continued in Ahumada's direction proposing a model of cones distribution based on regular arrays subjected to a spatial compression and a jitter, to fit the actual cones mosaic [14].Their analysis was based on the distribution of distance and angles of neighboring cones, com-paring real mosaics with artificially generated ones, and evidencing anisotropies in retinal cell spacing.
Another attempt at modeling the sampling properties of the cone mosaic was proposed by Wang [60], which created a polar arranged array of cones and jittered the points according to the standard deviation of a Gaussian distribution, constrained by a minimal spacing rule.The comparison of power spectrum of the human foveal cones and the generated sampling arrays show similarities, and the generated arrays exhibit some basic features of the mosaic of foveal cones.
In Deering's [16] human eye model, cones are modeled individually as a center points surrounded by points that define a polygon constituting the boundaries of the cell, each photoreceptor is then subjected to attractive and repulsing forces to adjust its position.This retinal synthesizer is then validated by calculating the neighbor fraction ratio and by empirically measuring the cone density in cells/mm 2 and comparing it from data from Curcio et al. [15].
Blue Noise Distributions
Coined by Ulichney [58], the term blue noise refers to an even, isotropic, yet unstructured distribution of points.Blue noise was first recognized as crucial in dithering of images since it captures the intensity of an image through its local point density, without introducing artificial structures of its own.It rapidly became prevalent in various scientific fields, especially in computer graphics, where its isotropic properties lead to high-quality sampling of multidimensional signals, and its absence of structure prevents aliasing.It has even been argued that its visual efficacy (used to some extent in stippling and pointillism) is linked to the presence of a blue-noise arrangement of photoreceptors in the retina discovered by Yellott [67].Over the years, a variety of research efforts targeting both the characteristics and the generation of blue noise distributions have been conducted in computer graphics.
Arguably the oldest approach to algorithmically generate point distributions with a good balance between density control and spatial irregularity is through error diffusion [26,58], which is particularly well adapted to low-level hardware implementation in printers.Concurrently, a keen interest in uniform, regularity-free distributions appeared in computer rendering in the context of anti-aliasing [12].Cook [11] proposed the first dart-throwing algorithm to create Poisson disk distributions, for which no two points are closer together than a certain threshold.Considerable efforts followed to modify and improve this original algorithm [43,41,36,7,28].Today's best Poisson disk algorithms are very efficient and versatile [20,22], even running on GPUs [62,6,66].
Thanks to the pioneering work by Dippé and Wold [18], Mitchell [42], Cook [11], Shirley [56], the computer graph-ics community became sensitive to the fact that noise and aliasing are tightly coupled to sampling.A large variety of optimization-based approaches has been proposed since then.Two main optimization-based approaches have been developed and presented in numerous papers: (1) on-line optimization [41,21,40,5,6,23,8,54,53,25,31,68,48,32,49], and (2) off-line optimization [47,37,46,59,2,1], where the nearoptimal solution is prepared in form of lookup tables, used in runtime.The present work uses as reference the approach called Blue Noise Through Optimal Transport (BNOT), developed by de Goes et al. [31], because it allows to achieve the best Blue Noise distribution known today.
In an effort to allow fast blue noise generation, the idea of using patterns computed offline was raised in [18].To remove potential aliasing artifacts due to repeated patterns, Cohen et al. [9] recommended the use of non-periodic Wang tiles, which subsequently led to improved hierarchical sampling [37] and a series of other tile-based alternatives [47,39,46,3].Wachtel et al. [59] propose a tile-based method that incorporates spectral control over sample distributions.More recently, Ahmed et al. [1] proposed a 2-D square tilebased sampling method with one sample per tile and controllable Fourier spectra.However, all precalculated structures used in this family of approaches rely on the offline generation of high-quality blue noise.
Methods
The cone mosaics used for this work are from previously published images of patches of real human retinas, as shown in the leftmost boxes of Figures 2 through 5; they were acquired from the pdf versions of the papers or html, if available, and saved as png images.The pictures are from different subjects of various ages and were obtained with different techniques, from histological tissue prepared for electronic microscopic imaging in [15,35,13,30], to the most recent in vivo imaging techniques, using adaptive optics like deformable mirrors coupled with a wavefront sensor to compensate for the ocular aberrations of the eye [51,57,55,64].The x and y coordinates of the cells inner segments were manually plotted using WebPlotDigitizer [50].This preliminary work has been based on a relatively small dataset sue to the difficulty of finding wide collections of retinal images.We understand these difficulties related also with problem of the use of different imaging techniques and tissue preparation and we hope to have larger datasets in the future.When analyzing the points distribution, the distance between the cone centers was converted in real µm on the retina by multiplying them with the appropriate scale factor of the image, determined by the size of the sample window's side.Conversion from degrees was performed according to the model from [19], with one degree of visual angle equal to 288 µm on the retina.Cone spacing values are compatible with Wyszecki and Styles [65], with the exception of the data from [30] exhibiting lower values, probably due to post mortem shrinkage.Retinas 6, 7-A and 7-B have been cropped during analysis because they didn't fully fill the sampling window, and would have included uncharacterized areas.
Analysis of point process
In this section, we briefly introduce basic notions from Stochastic Point Processes [34].A point process S is a stochastic generating point in a given domain Ω (here, [0, 1) s ).We denote by P n := {x (1) , x (2) and isotropic if any rotation or translation of S have the same statistical properties.We also define the density of a point set as the average number of samples inside a region B of volume V B around a sample x.
This density is constant for isotropic and stationary point processes.A sampler generating sets with a non constant density is sometimes called a non-uniform sampler.To characterize isotropic stationary point processes, the Pair Correlation Function (PCF) is a widely used tool.Such function is a characterization of the distribution of pair distances of a point process.Oztireli [48] devised a simplified estimator for this measure in the particular case of isotropic and stationary point processes.The PCF of a pointset P n in the unit domain [0, 1) s is given by where d(x (i) , x ( j) ) is a distance measure between x (i) and x ( j) .The factor k σ is used to smooth out the function.Oztireli relies on this smoothing to assume ergodicity for all sets.He uses the Gaussian function as a smoothing kernel, but one could use a box kernel or a triangle kernel instead.To compute a PCF, we use this estimator with 3 parameters, the minimal r, r min , the maximal r, r max and the smoothing value σ .Those values are usually chosen empirically.Note that as the number of samples increase, the distances between samples will be very different for similar distributions.To alleviate this, we normalize the distances in our estimations using the maximal possible radius for n samples ( [29], Eq (5)).
In Figure 1, we illustrate how the PCF of several point processes captures the spectral content of the point distribution: a pure uniform sampling, Greean-Noise and Pink-Noise samplers obtained using [3], a jittered sampler (for N samples, subdivision of the domain into regular √ N × √ N square tile and a uniform random sample is drawn in each tile), a Poisson-Disk sampler [7] and a Blue-noise sampler (BNOT) [31].
Results and discussion
Regularity index, or conformity ratio is a quantitative method used for assessing spatial regularity of photoreceptor distributions [61,24,10].A k-d tree structure has been used to find the nearest neighbor for each point, the euclidean distance was calculated for each pair found this way and all the results are classified in histograms.Each distribution of nearest neighbours can be described by a normal Gaussian distribution described by the equation where µ is the mean of the distribution and σ the standard deviation of the measurements.The regularity index is expressed by the ratio of the mean µ by the standard deviation σ .This index is reported to be 1.9 for a full random sampling and the more regular the arrangement, the higher the value, usually 3-8 for retinal mosaics.Regularity indexes for retinal data are shown in Table 1.In contrast with previous claims, our calculated indexes range from 8 to 12.In the lower bound there is data obtained from [13], which instead of a retinal image shows the marked locations of the inner segments of photoreceptors; meanwhile in the upper bound, close to 12, most of the data is from foveal centers in [30], with the exception of retina 8-G, where the different sizes of the photoreceptor profiles reflect different levels of sectioning through the inner segments.
The indexes for data generated with Green noise, Pink noise and BNOT samplers are presented in the same table.As expected, the indexes for Green and Pink noise are assimilable to those of a full random sampling, in fact they are even lower, averaging 1.3 and 1.4 respectively; meanwhile, for the BNOT data, the indexes values are much higher, more than the double of the highest values for retinal RIs.It is not very surprising that, thanks to the the uniformity optimization of BNOT, the indexes are this high; but still very far from the infinite RI of regular lattices.Given the fact that fully regular hexagonal or square patterns are proven to have poor sampling properties and therefore not suitable for simulating cones distribution, in the scope of this paper a higher RI indicates that BNOT is better at generating point processes than the other analyzed point processes.A more recent and reliable method for assessing the goodness of these processes is the previously mentioned Pair Correlation Function.In Table 2, we present the l ∞ distance, between our generated point sets and the measured PCF.From two PCFs ρ and ρ 2 , we denote their l ∞ distance as the maximal distance between the two functions: where r is a given radius.We rely on this measure as it was already used in [48] to compare PCFs, two distributions can be considered the same if this difference is under 0.1.The closest results are from comparison with BNOT and Dart Throwing samplers, moreover, the higher the measured RI for the retinal distribution of photoreceptors, the lower the distance from BNOT PCF.The opposite happens when comparing with Dart throwing algorithm, the closer to the reported RI of 8, the lower the l ∞ distance.This evidences that not only the indexes are actually higher than the ones previously measured, but also that the most effective method to simulate these distributions comes from Blue-noise samplers.
Conclusions
Blue noise sampling can describe features of a human retinal cone distribution with a certain degree of similarity to the available data and can be efficiently used for modeling local patches of retina.We hope this work can be useful to understand how spatial distribution affects the sampling of a retinal image, or the mechanisms underlying the develop- ment of this singular distribution of neuron cells and the implications it has on human vision.Given the nature of bluenoise algorithms, it should be possible to develop an adaptive sampling model that spans the whole retina.However, there would be issues in validating the cone sampling, since imaging of the whole retina is difficult to obtain and analyze.All validation in fact should also be local.Future works will explore the possibility of applying a smooth sampling across the retina to obtain an adaptive sampling, given the PCF and spectra of local patches, the patches can be reproduced [69] and correlated with a heat map that represents interpolation in space [52].
Figure 1
Figure 1 PCF of various 2-D samplers.Fist row from left to right: Realizations of 1024 samples from a uniform (a), a Green-Noise sampler (b), a Pink-Noise sampler (c), a jittered (d), a Poisson-disk (e) and a Blue-Noise sampler ( f ).The second row shows the Fourier specrtum (power spectrum) of each sampler ((g) − (l), spectrum computed on 4096 samples).The PCFs capture the spectral content of each sampler as shown in (m).
Figure 2
Figure 2 From left to right: The picture of the patch of retina, the point samples extracted from the cones' location, Nearest neighbor analysis with mean and standard deviation, Pair Correlation Function.a-d.Images from Scoles et al. [55] e. Image from Roorda & Williams [51].
Figure 3
Figure3From left to right: The picture of the patch of retina, the point samples extracted from the cones' location, Nearest neighbor analysis with mean and standard deviation, Pair Correlation Function.Images from Curcio et al.[15]
Figure 4 Figure 5
Figure 4 From left to right: The picture of the patch of retina, the point samples extracted from the cones' location, Nearest neighbor analysis with mean and standard deviation, Pair Correlation Function.a-d.Images from Jonas et al. [35] e. Image from Curcio et al.[13]
Figure 6
Figure6From left to right: The picture of the patch of retina, the point samples extracted from the cones' location, Nearest neighbor analysis with mean and standard deviation, Pair Correlation Function.Images from Gao & Hollyfield[30] realization of a point process with n samples.A point process S is stationary if it is invariant by translation, and isotropic if it is invariant by rotation.More formally, if we assume P a probability measure, S is stationary if ∀x ∈ R s | 4,461.8 | 2019-06-11T00:00:00.000 | [
"Computer Science"
] |
Critical Model Insight into Broadband Dielectric Properties of Neopentyl Glycol (NPG)
This report presents the low-frequency (LF), static, and dynamic dielectric properties of neopentyl glycol (NPG), an orientationally disordered crystal (ODIC)-forming material important for the barocaloric effect applications. High-resolution tests were carried out for 173K<T<440K, in liquid, ODIC, and solid crystal phases. The support of the innovative distortion-sensitive analysis revealed a set of novel characterizations important for NPG and any ODIC-forming material. First, the dielectric constant in the liquid and ODIC phase follows the Mossotti Catastrophe-like pattern, linked to the Clausius–Mossotti local field. It challenges the heuristic paradigm forbidding such behavior for dipolar liquid dielectrics. For DC electric conductivity, the prevalence of the ‘critical and activated’ scaling relation is evidenced. It indicates that commonly applied VFT scaling might have only an effective parameterization meaning. The discussion of dielectric behavior in the low-frequency (LF) domain is worth stressing. It is significant for applications but hardly discussed due to the cognitive gap, making an analysis puzzling. For the contribution to the real part of dielectric permittivity in the LF domain, associated with translational processes, exponential changes in the liquid phase and hyperbolic changes in the ODIC phase are evidenced. The novelty also constitutes tgδ temperature dependence, related to energy dissipation. The results presented also reveal the strong postfreezing/pre-melting-type effects on the solid crystal side of the strongly discontinuous ODIC–solid crystal transition. So far, such a phenomenon has been observed only for the liquid–solid crystal melting transition. The discussion of a possible universal picture of the behavior in the liquid phase of liquid crystalline materials and in the liquid and ODIC phases of NPG is particularly worth stressing.
It is claimed that ODIC mesophase appears for materials composed of globular or pseudo-globular molecules, where the free orientation of the molecules seems to be an inherent feature [12,13,16,19].Nevertheless, plastic crystal mesophases are often also observed for molecules with a topology/shape that allows free rotation about any axis of their symmetry [14][15][16][17][18][19][20].For molecules with an elongated, rod-like shape, a rotor/rotatory plastic crystal phase, where the 'free' rotations are associated with the dominant long axis of the molecule, can appear [21][22][23][24][25].For disc-like molecules, a 'free' rotation related to the axis that is approximately perpendicular to the molecular surface can lead to the formation of ODIC mesophase.An example with an extensive research record is cyclooctanol and related compounds or their mixtures [16][17][18]20].
Neopentyl glycol (NPG) is the ODIC-forming compound of particular technological significance.It is the basis for resinous coatings, a component of lubricants and greases.It is used in the textile industry and is vital in the pharmaceutical and food industries [31,32].The global market for NPG is worth ~USD 2.5 billion in 2024 [32].
Recently, it has been shown that NPG can be a 'breakthrough' material for the new generation of coolers or air-conditioners based on the barocaloric effect of innovative implementations [33][34][35][36][37][38][39].The existence of ODIC-crystal discontinuous phase transition at near room temperature in NPG has been known for a long time.Four years ago, it was shown that it is associated with a colossal entropy change: ∆S = 300 − 500 J/Kkg, increasing with compressing [33,34].This value is even higher than for omnipresent devices exploring vapor-liquid transition [33][34][35][36].However, innovative NPG-based devices may have qualitative advantages, as follow: (i) minimal pollution threads for the environment, (ii) no impact on global warming, (iii) lesser usage of energy than for existing refrigeration technologies, and (iv) the possibility of 'cool' storage, with virtually no energy consumption.
Extensive experimental and modeling studies have been carried out for NPG and related systems, especially since 2019 [33][34][35][36][37][38][39].Surprisingly, the discussion regarding dielectric properties is limited, although broadband dielectric spectroscopy (BDS) is the essential research method for ODIC-forming materials.In 1997, Tamarit et al. [40] presented the evolution of the primary relaxation time in ODIC phase of NPG for temperatures from ∼ 353 K to 305 K, i.e., covering ca.50% of the ODIC phase range.The slightly nonlinear changes in the Arrhenius scale plot, ln τ(T) vs. 1/T, reveals the Super Arrhenius (SA) dynamics, considered the universalistic feature for pre-vitreous dynamics.In subsequent reports, the parameterization via the Vogel-Fulcher-Tamman (VFT) dependence, i.e., the replacement equation for the general SA relation, was shown as [41,42]: where the left part is for the general SA equation, with the apparent (temperature-dependent) activation energy, E a (T).It reduces to the basic Arrhenius pattern for E a (T) = E a = const.The right part is for the VFT replacement equation.Equation ( 1) is for the supercooled liquid-like temperature domain, T > T g , where T 0 < T g is the extrapolated VFT singular temperature and T g is the glass temperature, which can be estimated via the empirical condition: τ T g = 100s.D T is the fragility strength parameter, and D T T 0 = const [26,27].The VFT equation is the commonly used dependence for describing the pre-vitreous dynamics, including the vitrifying ODIC phase.Notwithstanding, starting from the year 2006, the prevalence of the critical-like parameterization in the ODIC phase was evidenced [14][15][16][17][18]: where T C < T g is the extrapolated singular temperature, and the exponent φ = 9 − 15 for different ODIC-forming materials.It is notable that the primary relaxation-time-focused BDS studies in NPG constitute a particular experimental challenge since they require multi-GHz-range measurements carried out in a relatively volatile and sensitive contaminated material.
The currently most-often-recalled report for BDS studies in NPG was published in 2021 [43].It was related to frequencies, f ≤ 1 MHz, and covered liquid, ODIC (called Phase I), and crystal (called Phase II) phases in the temperature range from 416 K to 293 K.The report focused on the evolution of DC electric conductivity, for which the portrayal via the parallel of VFT Equation (1) was shown [43]: where A, B = const, and T V is the extrapolated singular temperature.
In [43], the description via the above relation was evidenced in the liquid phase for the range covering ∼ 10 K and in the ODIC phase (denoted as Phase I) in the domain covering ∼ 60 K, namely, starting 22 K below the melting temperature and terminating ∼ 7 K before the transition to the low-temperature Phase I I.For the latter, the basic Arrhenius dynamics (E σ a = const) is reported.In [43], also, spectra of imaginary parts of dielectric modulus, M ′′ ( f ), and electric impedance, Z ′′ ( f ), for four temperatures were superposed and discussed.The authors of [43] concluded: '. . .In the plastic crystalline phase, the proton hopping mechanism is most likely the underlying ion-conducting mechanism because of the rotational disorder and intrinsic defects (vacancies. ..) of the NPG molecules.In the ordered crystalline phase, the proton conduction is presumed to follow the proton hopping mechanism as determined from the localized relaxation and the temperature dependence of σ DC (Arrhenius behavior)'.
The in-depth insight into dielectric properties is essential for ODIC-forming materials since their properties are shaped by more-or-less freely rotating molecules translationally frozen in a crystalline network.The meaning of dielectric insight is strengthened by the coupling of rotating molecules to permanent dipole moments.For NPG, such essential evidence is surprisingly limited, in fact, to [40,41,43].These reports generally focus on the plastic ODIC phase, suggesting the VFT portrayal.It is an 'extremely flexible' functional portrayal often used to describe dynamics in glass-forming systems.
Nevertheless, its fundamental significance can be questioned, and it should be considered rather as an effective 'tool'.It is particularly evident for symmetry-limited glass formers, to which ODIC-forming materials belong.There is also temperature-limited evidence for dielectric constants, but only in the ODIC-phase-restricted temperature range, which can be questioned, as shown below.All these indicate a grand cognitive gap for basic NPG properties, a material valuable for significant innovative devices.
This report aims to fill the cognitive gap regarding NPG dielectric properties.Below, the results of high-resolution BDS studies in NPG for an extreme temperature range (173 K < T < 440 K), i.e., covering all phases of NPG, for frequencies up to f < 10 MHz are presented.Tests and analysis are focused on the static and low-frequency (LF) domains.We emphasize the latter since scaling relations describing this domain remain a cognitive puzzle.The presented results include the distortion-sensitive insight into electric conductivity behavior, revealing new scaling patterns in liquid and ODIC phases.Temperature changes in the dielectric constant indicate the Mossotti Catastrophe behavior, generally considered as 'forbidden' for liquid polar dielectrics.The evidence of such behavior covers the liquid phase and the ODIC mesophase, with the orientational freedom of permanent dipole moments.
Materials and Methods
Studies were carried out in neopentyl glycol, i.e., 2,2-dimethyl-1,3-propanediol, presented also as C 5 H 12 O 2 or (CH 3 ) 2 C(CH 2 OH) 2 ).The structure is shown schematically in Figure 1.The compound was purchased from Sigma Company and used as delivered.Broadband dielectric spectroscopy (BDS) studies were conducted using the Novocontrol Alpha Analyzer in the frequency range from 1 Hz to 10 MHz, with U = 1 V of the measuring field.It enabled 5-6-digit resolution.Samples were placed in the flat-parallel capacitor, made from gold-coated Invar, with a quartz ring as the spacer.The latter enabled the observation of the filling, which is significant in avoiding gas bubbles.The gap between plates was equal to 0.3 mm.The gap was supported by the quartz ring, as shown in [44], so it did not impact the measurement area between the capacitor plates.Such a design also enabled avoiding gas bubbles that can bias the results.The capacitor plated was made from gold-coated Invar.Generally, for the Alpha Analyzer, one can use voltages from 0.1 V to 40 V, but the optimal resolution reaching even 6 significant digits is related to U = 1 V.It was possible to use such voltage in the given experiment due to the macroscale gap between capacitor plates, d = 0.3 mm, with diameter 2r = 20 mm, yielding E ≈ 33 V/cm.For comparison, for micrometric gaps often used in dielectric studies, the intensity is essentially higher, namely, for d = 10 µm and the lowest possible voltage U = 0.1 V, one obtains E = 1000 V/cm.Such intensities are in the domain of nonlinear dielectric effects, and the question arises of the biased impacts of gas bubbles or dust parasitic impurities.With such weak intensities of the measurement electric field, as applied in the given report, no influence on the dielectric constant could be detected despite the extreme sensitivity and resolution of the Alpha Analyzer.
Figure 1.The compound was purchased from Sigma Company and Broadband dielectric spectroscopy (BDS) studies were conducted usin Alpha Analyzer in the frequency range from 1 Hz to 10 MHz, with U uring field.It enabled 5-6-digit resolution.Samples were placed in the itor, made from gold-coated Invar, with a quartz ring as the spacer.Th observation of the filling, which is significant in avoiding gas bubble plates was equal to 0.3 mm.The gap was supported by the quartz rin so it did not impact the measurement area between the capacitor pl also enabled avoiding gas bubbles that can bias the results.The capacit from gold-coated Invar.Generally, for the Alpha Analyzer, one can 0.1 V to 40 V, but the optimal resolution reaching even 6 significant U = 1 V.It was possible to use such voltage in the given experiment du gap between capacitor plates, d = 0.3 mm , with diameter 2 r = 20 33 V/cm.For comparison, for micrometric gaps often used in dielectri sity is essentially higher, namely, for d = 10 μm and the lowest po 0.1 V, one obtains E = 1000 V/cm.Such intensities are in the domain tric effects, and the question arises of the biased impacts of gas bubbl impurities.With such weak intensities of the measurement electric fiel given report, no influence on the dielectric constant could be detected sensitivity and resolution of the Alpha Analyzer.First, the capacitor was heated and filled with liquid NPG, which contact between the sample and the electrodes when cooling to subse First, the capacitor was heated and filled with liquid NPG, which guaranteed optimal contact between the sample and the electrodes when cooling to subsequent phases.Temperatures ranging from 173 K to 440 K were tested, with the support of a Novoconrol Quattro thermostatic system yielding the control from 0.02 K to 0.1 K, depending on the temperature range.Figures 1 and 2 show obtained frequency-related spectra for the sequence of tested temperatures presented as the complex dielectric permittivity, ε * = ε ′ + iε ′′ , and transformed to the complex electric conductivity representation.Obtained spectra are presented in Figures 1-3.Distortions at the frequency limit were related to borders of the allowed impedance measurement for the spectrometer.The derivative analysis of data was supported by the subsequent numerical filtering using the Savitzky-Golay principle [45].This report focuses on broadband dielectric properties of ODIC-forming NPG, which are essential because of the origins of the ODIC phase.As noted in the Introduction Section, they are surprisingly limited for NPG.Extensive material characterization regarding structural insight (XRD), Raman spectroscopy, DSC/DTA, etc., is presented in [33,34,38,39].We did not explore these results in this report since they are insignificant to the presented reasoning.In the figures presented below, subsequent tested phases are additionally noted as 'Phase I' and 'Phase II' to support correlation with the results presented in [43].
2024, 17, x FOR PEER REVIEW
iε , and transformed to the complex electric conductivity represent tra are presented in Figures 1-3.Distortions at the frequency limit w of the allowed impedance measurement for the spectrometer.The data was supported by the subsequent numerical filtering using the ciple [45].This report focuses on broadband dielectric properties of which are essential because of the origins of the ODIC phase.As not Section, they are surprisingly limited for NPG.Extensive material ch ing structural insight (XRD), Raman spectroscopy, DSC/DTA, [33,34,38,39].We did not explore these results in this report since th the presented reasoning.In the figures presented below, subsequen ditionally noted as 'Phase I' and 'Phase II' to support correlation with in [43].
where ω = 2π f .Note that the DC electric conductivity is related to the horizontal behavior, which is absent in the solid crystal phase.
Obtained BDS Spectra in NPG
Figures 1 and 2 show the results of BDS measurements presented as the complex dielectric permittivity frequency scans for subsequent temperatures.They are indicated in colors, changing from red in the liquid state to orange-green in ODIC phase and blue-violet in the solid crystal phase.Gaps related to liquid-ODIC and ODIC-crystal phase transitions are also visible.In Figure 1, the structure of neopentyl glycol is also presented.Significant domains of spectra are indicated in Figures 1 and 2. Spectra are split into three parts, related to the liquid, ODIC, and crystal phases, from top to bottom, with 'gaps' related to liquid-ODIC and ODIC-crystal phase transitions.
Other representations of electric impedance output detected in BDS studies can be convenient for some systems.For systems with the dominant impact of translational processes, the complex conductivity presentation, σ * = σ ′ + iσ ′′ , offers a better insight.Notable is the link [46,47]: where ε ∞ is for the infinite frequency, where only the atomic and electronic polarization contribute to the real part of dielectric permittivity, i.e., the contribution from permanent dipole moments dominates in the static domain, and the contribution from translational processes (LF domain) is absent.The circular frequency ω = 2π f , and ε 0 ≈ 8.854pFm −1 is the vacuum (free space) permittivity.This is the case of NPG in the tested frequency range, as visible for ε ′′ ( f ) in Figure 2. Figure 3 shows spectra for the real part of electric conductivity, obtained via data presented in Figure 2, transformed via Equation (4).The horizontal part in Figure 3 defines the DC electric conductivity domain: σ ′ = σ DC = σ.Notably, the DC electric conductivity domain appears only in the liquid and ODIC phases, and there is no such behavior in the solid crystal phase, which shows the non-horizontal pattern of changes.
The Temperature Evolution of Dielectric Constant-Basic Reference
The dielectric constant is a fundamental characterization of dielectric properties of materials, introduced already by Michael Faraday [48].This discovery and subsequent pioneering works by Ottaviano Mossotti [49] and Rudolf Clausius [50] led to the formation of Dielectric Physics and related material engineering topics.
Figure 4 presents the dielectric constant changes in NPG, covering the liquid, ODIC, and solid crystal phases down to 173 K.The eye inspection of temperature evolution in the central part of Figure 4 can suggest linear changes in ε(T), except for 'weak' premelting/post-freezing-type effects in the ODIC phase, just below the transition to the liquid phase.Passing Liquid − ODIC and ODIC − Crystal is manifested by step-type changes in the dielectric constant: (i) for the liquid-ODIC transition, ∆ε ≈ 2.16 or ∆ε ≈ 0.73, if the impact of the pre-melting effect is omitted, and (ii) for the ODIC − Crystal transition, ∆ε ≈ 14.3.Notably, the dielectric constant value, ε ≈ 3.06, at the onset of the crystal phase agrees with those noted for crystalline dielectric materials with frozen translational and orientational freedom [46].Timmermans, in his classic report [9], indicated a small valu for the liquid-plastic crystal mesophase transitions, usually Timmermans, in his classic report [9], indicated a small value of the entropy change for the liquid-plastic crystal mesophase transitions, usually ∆S < 20 JK −1 mol −1 , as its characteristic feature.It is ca.10× less than that typically noted for the liquid-crystal transition [2,5].It is notable that a similar ratio takes place when comparing ∆ε changes for Liquid − ODIC and ODIC − Crystal transition in NPG.Worth recalling is the recent report that discussed different contributions to entropy changes for a discontinuous melting transition [51]: The term ∆s 1 is associated with internal energy, the term ∆s 2 is related to compressibility, and ∆s 3 is related to dielectric constant changes.One can expect that ∆s 2 and ∆s 3 can be particularly important for ODIC-forming materials.They refer to the 'softness' of the ODIC mesophase and the large 'jump' of ∆ε.
Notably, a similar sequence of ∆S values can be concluded for isotropic liquid-LC mesophase and LC mesophase-crystal transitions [5].First, it is related to the weakly discontinuous phase transition behavior associated with critical-like, pre-transitional effects in the liquid phase [4,5,8,52].Such a phenomenon has recently been shown for ODICforming cyclooctanol, based on the dielectric constant, nonlinear dielectric effect, and Kerr effect investigations [20].Finally, the model for the common description of the liquid-LC mesophase and liquid-ODIC mesophase was proposed [20].
The high resolution of BDS measurements and resulting dielectric constant values enabled a subtle insight, shown in the inset in Figure 4 for crystal (Phase II).It also revealed a slight but detectable 'jump' for ε(T) changes, ca. 10 K below the transition to the solid crystal phase, and the continuous change characterized by dε(T)/dT > 0 ← dε(T)/dT < 0 .Generally, such behavior is linked to the crossover, indicating parallel ← antiparallel arrangement of dipole moments [46].
The pattern and values presented in Figure 4 correlate with the results reported by Tamarit et al. [40], which covered about 50% of the temperature range in ODIC phase of NPG.In the mentioned report, the Kirkwood-Frölich-Onsager model [46,47,53,54], generally developed for liquid dielectrics, was recalled to discuss the behavior in the ODIC phase.Their output relations do not enable portraying ε(T) temperature evolution, offering a discussion of the isothermal, concentration-dependent behavior for a dipolar component dissolved in a non-dipolar solvent.Another possibility is the tests of the tendency toward dipole-dipole parallel or antiparallel arrangement via the Kirkwood factor, expressing the short-range dipole-dipole correlations [40,46,47,55]: where k B is the Boltzmann constant, ε 0 = 8.854 C 2 J −1 m −1 denotes vacuum electric permittivity, V m denotes the molar volume of the molecule, ε ∞ is dielectric permittivity in the high-frequency limit, where the impact of the permanent dipole moment is absent, and µ is the permanent dipole moment.
Values of g > 1 indicate the preference for the parallel and g < 1 the antiparallel arrangement of neighboring dipole moments [40,46].In [40], for NPG, the value g = 1 was presented in the ODIC phase close to the transition to the crystal phase, and g = 0.7 in the middle of the ODIC phase.
The Temperature Evolution of the Dielectric Constant and the Mossotti Catastrophe
Figure 5 shows the re-analysis of data from Figure 4, presenting them as the reciprocal of dielectric susceptibility, χ = ε − 1.It reveals a superior linear behavior in the liquid and ODIC phases, suggesting the following critical-like scaling pattern: where T * is the singular, 'critical' temperature related to χ −1 (T * ) = 0, the amplitude A = const, and then A −1 T * = const.
Figure 5 shows the re-analysis of data from Figure 4, presenting th cal of dielectric susceptibility, = − 1.It reveals a superior linear beh and ODIC phases, suggesting the following critical-like scaling pattern: where * is the singular, 'critical' temperature related to ( * ) = 0, th , and then * = .This result proves that the 'linear' behavior in Figure 4 is only virtu to the 'damping impact' of the scale.A similar behavior was reported f forming material, cyclooctanol [20].
A similar temperature dependence as in Equation ( 7 This result proves that the 'linear' behavior in Figure 4 is only virtual, appearing due to the 'damping impact' of the scale.A similar behavior was reported for another ODIC-forming material, cyclooctanol [20]. A similar temperature dependence as in Equation ( 7) appears for dielectric systems due to the Clausius-Mossotti local field model [44,45,51,52,54].It considers the effective local field acting on a molecule within the dielectric by locating it in the center of a semimacroscopic cavity surrounded by a homogeneous dielectric.Generally, it is described as follows [46,47,56]: where E is the external electric field, E 2 is the electric field created by elements/molecules close to the given molecule, and E 1 results from charges situated on the surface of the cavity.
For dielectric materials with a random distribution of elements/molecules (gases and liquids) or a regular crystalline lattice (solids), one can assume E 2 = 0, and the relatively simple consideration for the remaining contribution yields: where P is for the polarization and ε 0 = 8.854 pFm −1 denotes the vacuum electric permittivity.
The above relation can be considered in two equivalent forms, taking into account that the number of polarizable molecules in a unit volume N = N A ρ/M, where M denotes the molar mass and ρ is for the density [44,54]: Equation ( 11) defines the molar polarizability, II, or the molar refraction, R, for lightrelated frequencies, where the Maxwell dependence obeys: ε = 1 + χ = n 2 , where n denotes the refractive index.Values of II and R are significant practical tools in chemical physics applications.
von Hippel [56] noted that for dipolar dielectrics, the contribution to the dielectric constant from electronic (α e ) and atomic (α a ) polarizations, expressed via ε ∞ , is minimal in comparison to the impact of permanent dipole moments.He accepted Debye's estimation for the latter, which yielded α P = (α a + α e ) + α dip = α ind.+ µ 2 /3k B T ≈ µ 2 /3k B T, and after the substitution to Equation ( 12), the following relation was obtained [56]: where T C = Nµ 2 /9ε 0 k B and N = N A ρ/M.This is the famous 'Mossotti Catastrophe', suggesting that in an arbitrary system composed of permanent dipole moments, non-interacting or weakly interacting, a singularity resembling the Weiss-type pre-transitional effect, known for the paraelectric phase in the way toward the ferroelectric state, appears.von Hippel presented a famous example of water, where he estimated T C ≈ 1520 K, which suggests a ferroelectric state for lower temperatures.Finally, von Hippel concluded [56]: 'Hence water should solidify by spontaneous polarization at high temperature, making life impossible on this earth!'.This picturesque example has often been cited in subsequent decades to illustrate the consequences of exceeding a model's assumptions, leading to paradox predictions, absent in nature.To avoid this paradox, Onsager developed the model in [57], explicitly considering the Debye concept, considering short-range interactions by a different cavity definition, and introducing the reaction field associated with the feedback interaction between the cavity and the permanent dipole moment.This approach was further developed by Kirkwood, Frölich, and followers, leading to the agreement with experiments in real dielectric liquids [46,47,[53][54][55][56][57].The problem, however, constitutes limited possibilities for describing temperature changes.In analyzing experimental data, the Kirkwood factor discussion plays a leading role (Equation ( 6)).Consequently, a 'paradigm' emerged that the Clausius-Mossotti model obeys only for a gaseous or non-dipolar liquid dielectric [46,47,[53][54][55][56][57][58][59][60][61][62].However, there are numerous solid-state systems where this model is widely accepted, e.g., in solid or liquid crystalline ferroelectric materials or relaxor ceramics [60][61][62][63][64][65][66][67][68].This issue is resumed and developed in the authors' recent report (ADR and SJR) [69].
It should be noted that von Hippel [56] overlooked a primary problem.Namely, he assumed ρ = 1 gcm −3 for the density of water, also at the extreme temperatures T > 1500 K.This is possible only under multi-GPa pressure, which may lead to the appearance of exotic states of matter.
Results presented in Figure 5 for NPG and the recent evidence for cyclooctanol [20] explicitly show that dipolar liquid or quasi-liquid systems with the Mossotti Catastrophe behavior exist.This can be considered as the cancellation of von Hippel's 'catastrophic paradigm', which significantly changes some basics of Dielectrics Physics, such as the canonic picture presented in classic monographs [46,47,[53][54][55][56].
The Link to the Pre-Transitional, Fluctuation-Related Behavior
In [20], it was indicated that one can consider the appearance of pre-transitional fluctuations both in the liquid and the ODIC mesophase, namely, with the ODIC-like order and the 'chaotic surrounding' in the liquid phase and in the ODIC phase related to the 'frozen' orientational arrangement within the orientationally free quasi-liquid mesophase surrounding.It led to the following relation for 'static' dielectric susceptibility [20,70]: where χ T is the compressibility, in the given case, meaning susceptibility related to the coupled-order parameter, and ∆M 2 V is the mean square of the order parameter fluctuations, i.e., the metric of fluctuations of the local order parameter, which is related to the difference in dielectric constants between fluctuations recalling features of the next phase and the surrounding in the specified type of materials.
For the the liquid phase and the adjacent ODIC mesophase, one can assume ∆M 2 V = const, which directly yields the same temperature dependence as Equation ( 7), in agreement with Figure 5.The model implemented in [20] recalls the authors' analysis of nonlinear dielectric properties on approaching the critical consolute point and the isotropic-nematic transition in LC materials [70].
One can conclude that for dipolar ODIC-forming materials, both in the liquid and quasi-solid mesophase, the Clausius-Mossotti local field model, and its crucial output, the Mossotti Catastrophe (Equation ( 13)) can be obeyed.This is due to the possibility of the free orientation of permanent-moment dipoles and the practical lack of short-range interactions associated with their translational 'localization'.This means that the canonic condition of the Clausius-Mossotti local field model is fulfilled.
Considering the results of the current report and that presented in [20,70], the need arises for an exceptional universal model description linking ODIC-forming materials in the liquid and ODIC phases, liquid crystalline materials in the isotropic liquid phase, and maybe the homogeneous phase of critical binary mixtures, particularly under the strong electric field inducing the uniaxial anisotropy.
The Evolution of Electric Conductivity and DC Electric Conductivity
DC electric conductivity is the metric of the ability to conduct a direct, in-phase electric current in a specific material.Heuristically, DC conductivity can be treated as a dynamic equivalent of the dielectric constant since it is also a frequency-independent quantity over a wide (low and static) frequency range.This is shown in Figure 3, where the horizontal domain for the frequency dependence of the imaginary part of electric conductivity, DC electric conductivity, σ DC = σ ≈ const.Such behavior appears only in the liquid and ODIC phases, and it is absent in the solid crystal phase.
Figure 6 shows temperature dependencies of electric conductivity for a set of frequencies.Notable is the overlapping in the liquid and ODIC phases for less than 5 MHz, which agrees with the frequency domain of DC electric conductivity in Figure 3, as discussed.Rising distortions appear for higher frequencies, which can be considered the impact of relaxation processes.The split in the crystal phase confirms the lack of canonic DC electric conductivity in this region.
cussed.Rising distortions appear for higher frequencies, which can be pact of relaxation processes.The split in the crystal phase confirms the electric conductivity in this region.As mentioned in the Introduction Section, in [43], the VFT portra for the DC electric conductivity in all phases of NPG, namely, for the tested range .= 10 , for the ODIC 'mesophase' in the tested ran and for the crystal phase in the tested range .= 17 .The VFT dated by the linear dependence for the so-called 'Stickel plot' ( originally developed to test pre-glassy changes in the primary relax Figure 7 presents the temperature dependence of DC electric conduc and ODIC phases based on data presented in Figure 6.Results are p Arrhenius-type scale, σ vs. 1 ⁄ , i.e., the standard representation cesses, at which the basic Arrhenius pattern with constant activation en a linear dependence.There is no such behavior in Figure 7.As mentioned in the Introduction Section, in [43], the VFT portrayal was suggested for the DC electric conductivity in all phases of NPG, namely, for the liquid phase in the tested range ∆T liq.= 10 K, for the ODIC 'mesophase' in the tested range ∆T ODIC = 59 K, and for the crystal phase in the tested range ∆T Cryst.= 17 K.The VFT portrayal was validated by the linear dependence for the so-called 'Stickel plot' (dlnσ DC /dT) −1/2 vs. T, originally developed to test pre-glassy changes in the primary relaxation time [71][72][73].Figure 7 presents the temperature dependence of DC electric conductivity in the liquid and ODIC phases based on data presented in Figure 6.Results are presented using the Arrhenius-type scale, lnσ −1 vs. 1/T, i.e., the standard representation for dynamic processes, at which the basic Arrhenius pattern with constant activation energy manifests via a linear dependence.There is no such behavior in Figure 7.
The inset in Figure 7 presents the temperature evolution of the reciprocal of apparent activation enthalpy, H a (T), proportional to the so-called steepness index.Notable is the link to the 'technical Stickel plot' [71][72][73][74][75][76].It covers all NPG phases tested in the given report.The emerging linear behavior validates the following dependence [75,76]: where H = const, and T + is the extrapolated singular temperature related to the [H a (T + )] −1 = 0, The inset in Figure 7 presents the temperature evolution of the r activation enthalpy, (), proportional to the so-called steepness link to the 'technical Stickel plot' [71][72][73][74][75][76].It covers all NPG phase report.The emerging linear behavior validates the following depend where = , and is the extrapolated singular temper ( ) = 0, = condition.
Such 'universal' behavior was first noted in [76] for the so-called the primary relaxation time, which is proportional to the related ap thalpy or, equivalently, the steepness index.It directly leads to the 'a equation formulated by Aleksandra Drozd-Rzoska [76].For the DC considered in this report, it has the form: Such 'universal' behavior was first noted in [76] for the so-called apparent fragility of the primary relaxation time, which is proportional to the related apparent activation enthalpy or, equivalently, the steepness index.It directly leads to the 'activated and critical' equation formulated by Aleksandra Drozd-Rzoska [76].For the DC electric conductivity considered in this report, it has the form: where t = (T − T + )/T, T + is the extrapolated singular temperature, the pre-factor C Γ = const, and the exponent Γ = const.
The parameterizations of experimental data in Figure 7 are related to Equation (16).They are supported by parameters obtained by the analysis presented in the inset in Figure 7, in agreement with Equation (15).It covers the liquid phase in the range ∆T liq.≈ 30 K, and ∆T ODIC ≈ 85 K in the ODIC phase.
It is worth stressing that the authors' recent work explicitly evidenced that the VFT relation is fundamentally justified only for a limited number of systems exhibiting 'glassy' dynamics, and this group does not include ODIC-forming materials [75].
Low-Frequency Behavior of Dielectric Permittivity and the Loss Factor
Temperature dependencies of the real and imaginary parts of dielectric permittivity in the low-frequency domain remain a puzzling issue, if not a cognitive gap [43,46,47,56].
Figure 8 present ε ′ ( f , T) and ε ′′ ( f , T) temperature evolutions in NPG for selected fre- quencies covering the static and the LF domains.For ε ′ ( f , T), experimental data overlapped for frequencies in the static domain for the liquid and ODIC phases.In the LF domain, a 'fan' of temperature dependencies appeared.For ε ′ ′( f = const, T) dependencies, the overlapping of data for different frequencies using the definition of DC electric conductivity was possible: ε ′′ ( f , T)ω = σ DC (T)/ε 0 .
Figure 8 focuses also on the temperature evolution of the magnitudes linking the above contributions, namely, the dissipation factor, D = tanδ( f , T) =ε ′′ ( f , T)/ε ′ ( f , T).This mag- nitude is commonly used in engineering applications but scarcely in fundamental analysis of dielectric materials.It estimates the power loss under the action of the external electric field, which can be converted into heat, namely [47,[78][79][80][81][82][83]: It is often discussed via the quality factor, Q = 1/D.It is a significant metric of the dielectric materials' property commonly referred to in engineering applications.However, it is hardly considered in fundamental studies.In this context, it is worth recalling that: Figure 8 shows the temperature dependences of the dissipation factor, D(T), using data presented above.It is notable that ε ′ (T) and ε ′′ (T) evolutions were presented using the semi-log scale since their values changed by almost six decades for the discussed frequencies.This range was qualitatively reduced when considering the ratio of ε ′ (T) and ε ′′ (T), so the linear scale could be informative for the dissipation factor.
The characteristic feature was the dissipation maximum in the ODIC phase, for the lowest tested frequency occurring near the ODIC − Crystal transition and shifting toward the Liquid − ODIC transition when the frequency increased.For frequencies larger than ∼ 5 kHz, only a dissipation decay in the ODIC mesophase upon cooling occurred.The dissipation was negligible in the Crystal phase.Negligible dissipation also appeared in the ODIC and liquid phases for f > 100 kHz.
The issue of a reliable scaling relation, which can describe the behavior of the real part of dielectric permittivity, remains a challenge, which constitutes a fundamental cognitive gap and creates a problem because of the significance of this domain in applications [45,[84][85][86][87][88][89][90][91].Figures 9 and 10 are related to a possible parameterization of the real part of dielectric permittivity, focusing solely on the LF contribution.It is realized by testing the magnitude, ∆ε ′ ( f , T) = ε ′ ( f , T) − ε(T), obtained by subtracting the static part, ε ′ = ε(T), from the total value, ε ′ ( f , T). Figure 9 presents ∆ε ′ ( f , T) changes in the semi-log scale, revealing a linear pattern.This linear dependence appearing in the liquid phase indicated the following temperature parameterization: where ∆ f 0 is the pre-factor, related to the specified frequency, f , and amplitude F = const.
overlapped for frequencies in the static domain for the liquid and ODIC phase domain, a 'fan' of temperature dependencies appeared.For ′( dependencies, the overlapping of data for different frequencies using the defin electric conductivity was possible: (, ) = () ⁄ .Temperature evolutions of the real (ε ′ ( f )) and imaginary (ε ′ ′( f )) parts of dielectric permit- tivity and for the dissipation factor, D( f ) = tanδ( f ) = ε ′ ′( f )/ε ′ ( f ), for selected frequencies in tested phases of NPG.Arrows indicate phase transition.ε ′ ( f )) and ε ′ ′( f ) are in the semi-log scale, and tanδ( f ) is exclusively in the linear scale.Results are presented as curves linking experimental data points to support the view.Note the strong pre-melting/postfreezing-type effects on the solid crystal side of the strongly discontinuous phase transition.These effects disappeared for the evolution of the energy dissipation factor.
Figure 10 shows that such parametrization was absent in the ODIC mesophase.However, the presentation of the same data using the following scaling: [∆ε( f , T)] −1 vs. T, revealed explicit linear changes.It led to the following parameterization: where T f is the singular temperature obtained from the extrapolation of the emerging temperature dependence in the ODIC phase via the condition ∆ε f , T f −1 = 0. 19)).The plot is data presented in Figure 6, using the frequency = 5 as the reference for determ dielectric constant, namely, ( = 5 ) = = .
Such behavior was recently noted in the isotropic liquid phase of liquid c nematogenic 4-methoxybenzylidene-4'-butylaniline (MBBA) [77].Figure 10 shows that such parametrization was absent in the ODIC mesoph ever, the presentation of the same data using the following scaling: (, )
Conclusions
The report discussed the low-frequency, static, and dynamic dielectric properties of neopentyl glycol, an ODIC-forming material of growing importance in applications that still requires fundamental insight support.The nature of ODIC-forming systems indicates that dielectric studies are essential for explaining and modeling their properties.
The basic dielectric property is the dielectric constant, related to the static frequency domain.It was shown that its evolution in the liquid and ODIC phases followed a pattern reminiscent of the Mossotti Catastrophe pattern, which follows directly from the Clausius-Mossotti local field concept.Formally, it is 'forbidden' for dipolar liquid dielectrics.From this report on neopentyl glycol and the recent study [20] on cyclooctanol, such a description is possible for ODIC-forming, dipolar dielectric materials.It obeys both the liquid and ODIC phases.
The dielectric constant is related to the real part of dielectric permittivity.DC electric conductivity can be considered its specific equivalent for dynamic properties and related to the imaginary part of dielectric permittivity.This work showed critical-like changes in apparent activation enthalpy associated with electric conductivity.This characterization led directly to the description of changes in electrical conductivity using the critical and activated Drozd-Rzoska dependence (Equation ( 16)) [77].The work also attempted to search for patterns of scaling temperature changes for changes in electrical conductivity in the low-frequency area.The description of extreme changes in the ε′( f , T) value remains a challenge for dielectric physics.This work showed the emergence of two simple and well-defined scaling relations in the ODIC and liquid phases.In the latter case, it was consistent with that recently reported for the isotropic liquid phase of a nematogenic liquid crystalline material.This report for the ODIC mesophase and recent work on nematic mesophases [20] also showed highly characteristic, pre-transitional-like changes in the dissipation factor, a quantity combining the real and imaginary parts of the dielectric permittivity.We emphasized this issue.Finally, the authors would like to indicate the relatively small step changes in the dielectric constant and tgδ for the phase transition from the isotropic liquid to the ODIC phase, in comparison to 'huge' step changes for the phase transition from the ODIC phase to the crystalline phase.Such a mutual relationship is consistent with Timmermans 'classic' observation [9], suggesting a weak and strong sequence of phase transitions based on specific heat studies.
For ODIC-forming materials, dielectric studies are essential due to their association with translational freezing and freely rotating molecules and coupled permanent dipole moments.This report showed that the picture emerging from dielectric studies in NPG, which should be considered a specific representative of ODIC-forming materials, differs from what has been suggested so far.It showed virtually exotic features, such as Mossotti Catastrophe-type behavior.The pre-melting/postfreezing-type effects on the solid crystal side of the discontinuous transition or the possible link between the behavior in the liquid and ODIC mesophase transition is worth stressing.All these results can suggest opening a new multitude of cognitive pathways for further studies.
Figure 1 .
Figure 1.Frequency scans of real ( ()) contributions to dielectric permittiv acteristic temperatures are recalled in the figure for the orientation in the teste lar structure and relevant frequency domains are also shown.The static domai region of the real part of dielectric permittivity, where a frequency shift does of ().Below the static domain is the low-frequency (LF) domain.
Figure 1 .
Figure 1.Frequency scans of real (ε ′ ( f )) contributions to dielectric permittivity in NPG.The charac- teristic temperatures are recalled in the figure for the orientation in the tested range.The molecular structure and relevant frequency domains are also shown.The static domain is for the horizontal region of the real part of dielectric permittivity, where a frequency shift does not change the value of ε ′ ( f ).Below the static domain is the low-frequency (LF) domain.
Figure 2 .
Figure 2. Frequency scans of the imaginary () to dielectric permittivity structure and relevant frequency domains are also shown.The characteris trum are indicated.The low-frequency part of () is related to DC el () = (), = 2πf.Note that the DC electric conductivity exists phases, but it is absent in the solid crystal phase.
Figure 2 .
Figure 2. Frequency scans of the imaginary ε ′′ ( f ) to dielectric permittivity in NPG.The molecular structure and relevant frequency domains are also shown.The characteristic domains of the spectrum are indicated.The low-frequency part of ε ′′ ( f ) is related to DC electric conductivity: σ = σ(DC) = ε 0 ωε ′′ ( f ), ω = 2πf.Note that the DC electric conductivity exists in the liquid and ODIC phases, but it is absent in the solid crystal phase.
Figure 3 .
Figure 3. Frequency-related behavior of the real part of electric conductivity in the liquid, ODIC (Phase I), and crystal (Phase II) phases, based on data presented in Figure 2, namely, σ ′ = ε 0 ωε ′′ ( f ),where ω = 2π f .Note that the DC electric conductivity is related to the horizontal behavior, which is absent in the solid crystal phase.
s 2024, 17, x FOR PEER REVIEW
Figure 4 .
Figure 4. Temperature changes in the dielectric constant in subsequent p indicate the solidification to the plastic crystal phase at = 399.5 and dered crystal (ODIC)-solid crystal (Cr.) transition at = 307.75 solid crystal phase, showing hallmarks of 'hidden' transitions at = 3 Such evidence was possible due to extreme resolution in BDS measurem
Figure 4 .
Figure 4. Temperature changes in the dielectric constant in subsequent phases of NPG.The arrows indicate the solidification to the plastic crystal phase at T f = 399.5K and the orientationally disordered crystal (ODIC)-solid crystal (Cr.) transition at T Cr−PCr = 307.75K.The inset focuses on the solid crystal phase, showing hallmarks of 'hidden' transitions at T 1 = 342.4K and T 2 = 232.5 K.Such evidence was possible due to extreme resolution in BDS measurements.
Figure 5 .
Figure 5. Temperature evolution of the reciprocal of dielectric susceptibility, lin constant via: = − 1.The plot is based on () data presented in Figure 4.
Figure 5 .
Figure 5. Temperature evolution of the reciprocal of dielectric susceptibility, linked to the dielectric constant via: χ = ε − 1.The plot is based on ε(T) data presented in Figure 4.
Figure 6 .
Figure 6.Temperature dependence of the real part of electric conductivity fo indicated in the plot.Arrows show subsequent phase transitions.Results are linking experimental data points-to support the view.Note the overlapping cies for a set of frequencies in the liquid and ODIC phases.It disappears for hig the rising impact of the dielectric relaxation process.The mentioned overlap solid crystal phase, reflecting the lack of DC electric conductivity for this phas
Figure 6 .
Figure 6.Temperature dependence of the real part of electric conductivity for a set of frequencies indicated in the plot.Arrows show subsequent phase transitions.Results are presented as curves linking experimental data points-to support the view.Note the overlapping of σ(T) dependencies for a set of frequencies in the liquid and ODIC phases.It disappears for high frequencies due to the rising impact of the dielectric relaxation process.The mentioned overlapping is absent in the solid crystal phase, reflecting the lack of DC electric conductivity for this phase.
Figure 7 .
Figure 7. Temperature dependence of the DC electric conductivity reciprocal in the liquid and ODIC phases of NPG, using the Arrhenius scale.The inset shows the evolution of the apparent activation enthalpy reciprocal, as defined in the plot and Equation (14).The curves following experimental data in the central part of the plot are related to Equation (15) with the following parameters: C Γ = 0.19 × 10 −4 Scm −1 , T C = 120 K, and the exponent Γ = 3.40 (green curve, ODIC phase), and C Γ = 0.18 × 10 −4 Scm −1 , T C = 357 K, and the exponent Γ = 0.63 (light-green curve, liquid phase).
Figure 8 .
Figure 8. Temperature evolutions of the real ( () ) and imaginary ( ′() ) parts permittivity and for the dissipation factor, () = () = ′() () ⁄ , for selected in tested phases of NPG.Arrows indicate phase transition. ()) and ′() are in scale, and () is exclusively in the linear scale.Results are presented as curves lin mental data points to support the view.Note the strong pre-melting/postfreezing-typ the solid crystal side of the strongly discontinuous phase transition.These effects disa the evolution of the energy dissipation factor.
Figure 8 .
Figure 8. Temperature evolutions of the real (ε ′ ( f )) and imaginary (ε ′ ′( f )) parts of dielectric permit- tivity and for the dissipation factor, D( f ) = tanδ( f ) = ε ′ ′( f )/ε ′ ( f ), for selected frequencies in tested phases of NPG.Arrows indicate phase transition.ε ′ ( f )) and ε ′ ′( f ) are in the semi-log scale, and tanδ( f ) is exclusively in the linear scale.Results are presented as curves linking experimental data points to support the view.Note the strong pre-melting/postfreezing-type effects on the solid crystal side of the strongly discontinuous phase transition.These effects disappeared for the evolution of the energy dissipation factor.
aterials 2024 ,Figure 9 .
Figure 9.The temperature behavior of the low-frequency (LF) contribution of the real p lectric permittivity in NPG, presented via the semi-log scale.Arrows indicate phase transi ear changes in the liquid phase validate exponential changes (Equation (19)).The plot is data presented in Figure6, using the frequency = 5 as the reference for determ dielectric constant, namely, ( = 5 ) = = .
Figure 10 .
Figure 10.The temperature behavior of the reciprocal of the low-frequency (LF) contribu real part of dielectric permittivity in the ODIC phase of NPG.Arrows indicate phase trans
Figure 9 .
Figure 9.The temperature behavior of the low-frequency (LF) contribution of the real part of dielectric permittivity in NPG, presented via the semi-log scale.Arrows indicate phase transitions.Linear changes in the liquid phase validate exponential changes (Equation (19)).The plot is based on data presented in Figure6, using the frequency f = 5 MHz as the reference for determining the dielectric constant, namely, ε ′ ( f = 5 MHz) = ε static = ε.
Figure 9 .
Figure 9.The temperature behavior of the low-frequency (LF) contribution of the real p lectric permittivity in NPG, presented via the semi-log scale.Arrows indicate phase trans ear changes in the liquid phase validate exponential changes (Equation (19)).The plot i data presented in Figure6, using the frequency = 5 as the reference for determ dielectric constant, namely, ( = 5 ) = = .
Figure 10 .
Figure 10.The temperature behavior of the reciprocal of the low-frequency (LF) contribu real part of dielectric permittivity in the ODIC phase of NPG.Arrows indicate phase tran ear changes in the liquid phase validate the behavior outlined by Equation (20).The pl on data presented in Figure 2, using the frequency = 5 as the reference for deter dielectric constant, namely, ( = 5 ) = = .
Figure 10 .
Figure 10.The temperature behavior of the reciprocal of the low-frequency (LF) contribution to the real part of dielectric permittivity in the ODIC phase of NPG.Arrows indicate phase transition.Linear changes in the liquid phase validate the behavior outlined by Equation (20).The plot is based on data presented in Figure 2, using the frequency f = 5 MHz as the reference for determining the dielectric constant, namely, ε ′ ( f = 5 MHz) = ε static = ε. | 11,376.4 | 2024-08-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Text segmentation of health examination item based on character statistics and information measurement
: This study explores the segmentation algorithm of item text data, especially of single long length data in health examination. In the specific implementation, a large amount of historical health examination data is analysed. Using the method of character statistics, the connection tightness values T AB s between two adjacent characters are calculated. Three parameters, the candidate number N , the best position BP, and balance weight BW are set. The total segmentation indexes SIs are calculated, thus determined the segmentation position Pos. The optimal parameter values are determined by the method of information measurement. Experimental results show that the accuracy rate is 78.6% and reaches 82.9% in the most frequently appeared text item. The complexity of the algorithm is O ( n ). Using no existing domain knowledge, it is very simple and fast. By executed repeatedly, it is convenient to obtain the characteristics of each single item of text data, furthermore, to distinguish respective express preference of different physicians to the same item. The assumption is verified that without professional domain knowledge, a large amount of historical data can provide valuable clues for the text understanding. The results of this research are being applied and verified in the following research works in the field of health examination.
Introduction
Health information collection is the first step in the trilogy of health management and disease preventive treatment in traditional Chinese medicine (TCM), which is the basis of subsequent health risk assessment and health intervention [1]. Health examination data is the most important source of health information, which plays a pivotal role in the health management industry chain in China. At present, a large amount of health examination data has been obtained [2], among which, precious data of unstructured text type is difficult to be used for automatic health assessment. Up to now, text data analysis and evaluation are mainly performed by artificially string matching; while, they are lack of automation and intelligence due to the difficulty in comprehension and meticulousness, and also necessary to be checked manually leading to the low efficiency.
In China, Health examination becomes popular after SARS in 2003. With the development of social economy, the improvement of people's living standards, and people's increasing attention to their own health, the health examination industry has developed rapidly. This job is not only medical work, but also closely related to the commercial operation. A large number of records have been accumulated over the past 10 years. These records are not as strict or formal as the clinical medical files, especially the text type data. Mixed using and abusing of traditional Chinese medicine and Western Medicine terminology, colloquial expression, vague concept, and so on, the insufficiency of these misleads to a poor quality health examination records. It is difficult to analyse and utilise these text data, and there are few research specially carried out on them. However, these physical examination data records the changes of the health for people, especially those who have regular annual physical examination and have an important potential value.
Our team is carrying out several researches related to health examination: construction of knowledge graph in the field of health examination, development of special input method for health examination results, design of intelligent and automated method for health examination results evaluation, visualisation of health examination results, and so on. All these researches need to deal with the analysis of health examination result of text type. In the previous attempts, we found that the tools and methods of clinical text analysis are not so applicable. Although the health examination data is not standardised, and there are a large number of individual categories and items, each specific item has its unique characteristics, in which the expression of information is limited to a very limited range. What we need is the characteristics of each single item of text data in health examination, furthermore, respective express preference of different physicians to the same item.
No relevant research results of the characteristics analysis of the text item data in health examination have been found. As a result, an algorithm of the starting point of the above study is needed and it should be as simple as possible. Any existing domain knowledge is not used for the time being to avoid too much restriction on the algorithm and results due to the purpose of the algorithm is text features and knowledge discovery. Similarities and differences from the large sample of item data have been selected for clues. The algorithm must be simple enough and can be executed repeatedly. It will run repeatedly for a large number of existing and continuously emerging item data. Perhaps, personal data by different physicians need to be analysed in real time. The algorithm does not pursue the perfect result at one time, which will be continuously verified and improved in the use and interaction with the doctor. The algorithm will be upgraded to make use of the verified knowledge to improve the ability of text analysis.
This study is such a simple starting algorithm. It is conducted by analysing a large amount of historical health examination data with character statistics and information measurement used. The goal is to search for the inherent law of the specific field jargon, and to explore appropriate algorithm and tool for encoding and analysis of text data in health examination. It will provide a basis for follow-up researches.
Related work
There is a large amount of health information in form of natural language, which is difficult to be analysed and utilised. The analysis of medical texts for the purpose of information extraction and knowledge discovery has been the focus of the research. Spasić reported KneeTex (a system for information extraction of knee pathology from MRI reports) which is modelled by a set of sophisticated lexico-semantic rules with minimal syntactic analysis in combination with the ontology [3]. Nguyen assessed the utility of Medtex on automating cancer registry notifications from pathology HL7 messages [4]. Koopman automatically extracted ICD-10 classification information of cancers from free-text death certificates [5]. Yepes used the technology of machine learning to improve the performance of Mesh keyword indexing program such as MTI [6]. Chard leveraged cloud-based approaches to solve the problem of poor accessibility, scalability, and flexibility of natural language processing (NLP) systems on processing medical text [7]. Botsis demonstrated a multilevel text mining approach for automatic rule-based text classification of adverse event reports that could potentially reduce human workload [8]. Li reported the research on information extraction based on domain ontology, which can improve the computer's ability of information extracting and knowledge discovering from electronic medical records in Chinese [9]. Nishmoto constructed a medical dictionary for ChaSen from unified medical language system (UMLS) believing that retrieval of transitional probability would improve the accuracy of parsing compound medical terms [10]. Zhou proposed a method and a prototype system for discovering implicit temporal assertions in medical text by applying discourse analysis as well as semantic and syntactic analysis, and by generating heuristic rules that encode the discovered domain and linguistic knowledge [11]. Yetisgenyildiz improved the efficiency of MEDLINE document classification by medical phrases extracting based on the medical knowledge base and NLP [12]. Niu treated analysis of the polarity information of clinical outcomes as a classification problem, which could be solved by NLP and supervised machine learning [13]. Travers evaluated an emergency medical text processor, a system for cleaning chief complaint text data [14]. There are many similar researches in China, in which Chinese word segmentation methods are used [15][16][17], and the research field is extended to traditional Chinese medicine [18][19][20][21][22].
As mentioned above, the current researches and applications on medical text processing are based on NLP, like lexical, syntactic, and semantic analysis. Ontology, knowledge base, and other medical expertise in specific areas are often used. The goals are to extract a small amount of specific information. It is difficult to use medical NLP and it is difficult to obtain and maintain comprehensive domain knowledge; furthermore, it is difficult for the specific researches to be extended to related fields. Reports on analysis of text data in health examination are rare.
These related works utilise specific domain knowledge to extract a small number of information of specific purpose from a large number of raw data. The obtained information has limited amount, and may cause important information omissions, which is not suitable for the analysis of health examination data and the discovery of unknown knowledge and rules.
Data source
The data used in this paper came from a health examination department in a top-level first grade hospital in Wenzhou, Zhejiang, China. The work of health examination has been carried out for 20 years there. Health examination software was introduced at the end of 2009 and electronic data has been saved for more than 7 years from then on with about 20,000 people per year. The software is developed by a Hangzhou medical software company, who has a relatively high market share. The data shows the common data condition in Chinese health examination.
Data status
Health examination results of 130,028 people have been stored in the database. There are 11,380,790 rows in the detailed data table, and 599 items are involved. The items can be divided into three types according to the health examination methodslaboratory test type, physical examination type, and instrument check type. The results are saved as numeric or textual data, as shown in Table 1.
Laboratory results are mainly of numerical type, and data of text type is very short, with an average of 2.3 and all in 5 characters. Also, they have strictly limited range for input choices, with only an average of six kinds. Two-thirds of the physical examination results are text type. They are also mainly short, while the numbers of input freedom vary greatly with no more than ten kinds and sometimes are very high. Instrument check results are mainly text type, their length and input freedom increase significantly, as shown in Table 1 and Figs. 1 and 2.
Problems
The difficulty degrees of health item data to be analysed and utilised vary greatly according to the data types. Numerical results can be used most easily, because they always have reference ranges, according to which a given result is confirmed as normal or not, even to get its degree abnormality. Most laboratory test results and some physical examination indicators are in this category. Text results of shorter length and limited degrees of input freedom are not so difficult because the possible results can be listed easily and assessed separately. All the laboratory test results and lots of physical and instrument ones are this type. It turns to be the most difficult one for the text data of long length and high input freedom degree since there are no strict format specifications and can be input arbitrarily.
Current measures include the following series for the analysis and utilisation of long text data: all the data are ignored directly, just not used; in addition to these original data, the physicians are also required to input a thumbnail copy which can be assessed relatively easily, leading to duplication of work and increase of medical staff burden; manual reading and analysis; natural languages are too flexible and complex by keywords matching and it is difficult to list all the keywords comprehensively without strictly input constraints, which causes the necessity of manual review. The problem of regular expressions is the same as keywords. These methods are lack of automation and intelligence resulting in low efficiency.
In order to make better utilisation of these texts, it is necessary to analyse the structures and rules of the data. The large amount of historical data accumulated in the physical examination system can play an important role. In this study, we explore the methods of long text data analysing and provide methods and tools for encoding based on the historical health examination data, compression, structuring, analysis, and assessment, thus achieving more automatic and intelligent health assessment.
Data processing algorithm
Natural languages have very high freedom degrees of expression, especially in Chinese. However, when applied to a specific context, the degree of freedom is limited. A health examination item describes a single physiological or test outcome, its degree of freedom was obviously stricter. In the 347 types of health examination items with input freedoms of 4, between 5 and 64, and more than 256, account for 32.3, 79.3, and 9.5%, respectively. Higher degree of freedom resulted in longer text length; while there must exist context domain constraints and unique language fingerprints like character frequencies, word frequencies, and their connection rules.
To use analysis and evaluation in a better way, the long unstructured texts should first be segmented, encoded, and structured. The information in long unstructured text includes each short sentence and their permutation sequence. First of all, the short sentences need to be analysed and segmented and each sentence can be regarded as a piece of basic information, including the item name and the corresponding value. Take the sentence 'Intrahepatic light spots are thickening and disorder' as an example, 'Intrahepatic light spots' should be regarded as its item name and '(are thickening and disorder)' as its value; and after the segmentation, the sentence can be easier to be encoded and classified, being ready for analysis and evaluation.
Based on the assumptions above, this study employed the large amount of historical health examination data and constructs a text analysis algorithm with the character statistics and information measurement used. The algorithm is developed by C# language, and exemplified by the B ultrasound results of liver as an example, as described below.
Data preparation
To avoid the impact on medical online services, the 11,380,790 rows of data are exported into a Microsoft LocalDB database with the table name 'ExaminItemResults'. The main column information is shown in Table 2. Liver B ultrasound data is one of the most common type of long text, with the examination item number '050001', and a total of 82,772 rows saved.
Data loading and numerical substitution
In order to merge the same results, the structured query language (SQL) aggregation statement is used as code 1. About 12,941 results are returned from the database, in which the default normal results occur most frequently, and the count is 41,383. There are many measured value in the texts, such as the size of liver or liver cyst, and the figures will affect the classification. So, a regular expression is used to identify and replace all the figures with a placeholder ' ┻ ', then the number of result kinds reduces to 7438. As shown the regular expression below:
Character frequency counting and segmentation
The connection tightness values T AB s between two adjacent characters A and B are calculated as follows: first of all, three frequencies are counted, F A* represents the frequency of arbitrary two adjacent characters that start with A, F *B end with B, F AB start with A and end with B. Three candidate formulas (1a)-(1c) are shown below. By comparison, (i)-(iii) shows the best performance By adding an end tag to each sentence, the same number as the containing characters of T AB s can be counted. Then the T AB s are sorted in ascending order, and the first NT AB s are chosen and used to segment the sentence. All the front parts are counted and sorted in descending order. Then each T AB gets its own front part order FO. Setting a new parameter BP, which means best position, the balance indexes BIs can also be calculated for each position of (2) where Pos represents the split position of the sentence and Len is the length of the sentence. Setting another parameter BW, balance weight, the sum split indexes SIs of each candidate position can be calculated In each sentence, the segment position with the largest SI is chosen finally.
Determination of the optimal value of parameters N, BP, and BW
Each sentence can be classified according to its front segmentation after segmented, which represents the problem KEY the sentence describes and the latter part represents the CHOICE the sentence makes about the KEY. A dictionary is then built, where stores all the N KEYs and all the M CHOICEs for each KEY, and the storage space of the dictionary SD can be calculated as follows: Two parts are required for encoding and storing each sentence, the first is for the KEY code, and the second for CHOICE code. The storage space for all detailed sentences SS and the total storage space ST are calculated as follows: Different SD, SS, and ST can be calculated according to different parameter values of N, BP, and BW. By sorting SD in ascending order, the order rank of SD, SS, OSD, and OST can be obtained. OAVG is the average of OSD and OST. The optimal parameter values, 2, 9, 0.8 are determined by the minimum OAVG, as shown in Table 3 3 Experimental results and analysis
Segmentation results
Experimental results above show that the segmentation results of sentences have been used for 10 or more times and the accuracy rate is 78.6%. As shown in Table 4, the weighted accuracy rate is 80.3%, which reaches 82.9% in the results of the most frequently appeared (more than 100 times) long text.
Algorithm efficiency
The algorithm is of high execution efficiency; the complexity is O(n) according to the data row count n. In the VS.Net 2015 development environment, a demo has been developed with the usage of C# language and WPF interface. Regardless of the time of loading data from the database, it takes 170 ms for the first time to run in x64 Win10, i5-4590 CPU, 4G memory debugging environment, and only 90 ms for later time. The determination of the optimal values for N, BP, and BW requires up to 810 times execution cycles of the algorithm, consuming about 46,912 ms.
Limitations and further improvement
This algorithm accomplishes the segmentation of historical text data in health examination, and is only based on character statistics and information measurement without manual intervention. It runs fast and efficiently, and achieves the expected ideal results. However, there are limitations of the algorithm because the accuracy still needs further improvement. The possible reasons include: (i) the results are input arbitrarily causing irregularities and errors; (ii) some sentences of results have inadequate frequencies to display the language clue needed by the algorithm; (iii) some sentences do not match the assumed KEY-CHOICE pattern; (iv) the syntax and semantics are too complicated in Chinese; (v) the algorithm only measured and compared the connection tightness is two characters. In order to improve the segmentation accuracy, future work may be performed as follows: (i) introducing professional Chinese word segmentation and other NLP tools; (ii) maintaining custom dictionary to justify abnormal T AB s; (iii) standardising physician input operation, and screening data of high quality; (iv) considering connection tightness within more than two characters.
Practical application
This algorithm has achieved the expected goals, laying a good foundation for the follow-up period of work and research. Based on this algorithm, several research in our team is progressing smoothly.
By this algorithm, we obtain the structure characteristic of all the individual text item data, and construct mini knowledge graphs for each item. Physicians can use these mini graphs for the input of text item data. The application of text segmentation results greatly reduces the degree of input freedom, so the input method is to slide the finger on the touch screen. As the algorithm can analyse and treat physician's personal preferences, it can greatly improve the convenience and speed of Chinese character input. In the process of using this input method, the accurately segmented results are often touched; while the poor ones are seldom or never touched. The accuracy of segmentation can be judged through the use and interaction with physicians. In the later period, we will develop a new algorithm to judge the segmentation results using the physician's interaction information, and help this algorithm to improve the ability of text segmentation.
Also by this algorithm, the original unstructured textual data can be structured, and greatly reduces the difficulty of analysis of health examination text data. This algorithm reduces the freedom degree of text data in health examination, thus reduces the difficulty of analysis. Therefore, this algorithm also contributes a lot to the work of setting up an intelligent and automated method for health examination results evaluation.
The above studies will be reported later.
Conclusion
This study employs historical health examination data, and makes the long text segmentation in health examination based on character statistics and information measurement. The assumption is verified that without professional domain knowledge, a large amount of historical data can provide valuable clues for the text understanding. The toolkit can be used in automatic data analysis, encoding, lossless compression, encryption, structured storage, and information classification, which thus can make health assessment more automatic and intelligent. The results of this research are being applied and verified in the works of our team, such as the construction of knowledge graph in the field of health examination, development of special input method for health examination results, design of intelligent and automated method for health examination results evaluation, visualisation of health examination results, and so on. Possible applications of the algorithm include: (i) implement the automatic encoding and compression for text data. In the experiment above, each liver B ultrasound result only needs to be stored in an average of 5.89 bytes, which is significantly reduced compared to original 56 bytes. The compression is lossless and loyal to the physician's input; thus the data can be completely recovered. Compression can greatly reduce the pressure load of the network and database system; (ii) with this encoding used, a certain degree of encryption can be achieved to improve the safety of medical information; (iii) with this encoding used, the texts are better structured and greatly reduced in freedom, thus can lead to better information classification, evaluation, and analysis.
Acknowledgments
This project is supported by the Health and Family Planning Commission of Zhejiang Province, Wenzhou Science & Technology Bureau, and Wenzhou People's Hospital. | 5,174.4 | 2018-03-05T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Different brain networks underlying intelligence in Autism Spectrum Disorders and Typically Developing Children
There has been sustained clinical and cognitive neuroscience research interest in the neural basis of intelligence. The characterisation of brain structure and function underlying cognitive performance is necessary to understand the neurodevelopment of intelligence across the lifespan, and how associated neural correlates could be perturbed in atypical populations. As most work in this area has focused on neurotypical adults, the nature of functional brain connectivity underlying intelligence in paediatric cohorts with or without abnormal neurodevelopment requires further investigation. We use network-based statistics (NBS) to examine the association between resting-state functional Magnetic Resonance Imaging (fMRI) connectivity and fluid intelligence ability in male children with Autism Spectrum Disorders (ASD; M=10.45, SD=1.58 years, n=26) and in matched controls (M=10.38, SD=0.96 years). Compared to typically developing controls strictly matched on age, sex and fluid intelligence scores, boys with ASD displayed a subnetwork (network size=24, p=.0373, FWE-corrected) of significantly increased associations between functional connectivity and fluid intelligence performance. Between-group differences remained significant at a higher edge threshold of t=4 (size=6, p=.0425, FWE-corrected). Results were validated in independent-site replication analyses representing a similar male cohort with ASD (network size=14, p=.0396, FWE-corrected). Regions implicated in atypical ASD fluid intelligence connectivity were the angular gyrus, posterior middle temporal gyrus, occipital and temporo-occipital regions. Across all sites, within-group analyses failed to identify functional connectivity subnetworks associated with fluid or general intelligence performance in matched typically developing males. Findings suggest a prematurely accelerated but aberrant development of fluid intelligence neural correlates in young ASD males, possibly as a compensation mechanism that supports equivalent task performance to controls. The absence of whole-brain network correlates of general and fluid intelligence in young neurotypical males may represent the shift from local to global integration in the development of cognitive ability.
Introduction
Fluid intelligence, or fluid reasoning, refers to the broad cognitive ability to solve novel problems and is typically estimated from composite scores of non-verbal or abstract tests (Barbey, Colom, Paul, & Grafman, 2014;Reynolds & Keith, 2017;Schneider & McGrew, 2012). As a cognitive construct, fluid intelligence is distinct from but strongly correlated with general intelligence (Blair, 2006). The neural architecture of human cognition likely comprises of complex large-scale networks with dynamic interactions and co-functioning between distributed cortical and subcortical regions (Bressler & Menon, 2010;Petersen & Sporns, 2015). However, associated neural correlates could be differentially expressed in neurodevelopmental conditions (Gray, Chabris, & Braver, 2003;Kosslyn et al., 2002).
The autism spectrum disorders (ASD) are a group of heterogeneous neurodevelopmental conditions associated with deficits in social communication, social interaction, and restricted and repetitive behaviours. Compared to typically developing controls, fluid intelligence in children and adults has been suggested to be increased in Asperger's disorder, an associated ASD condition, as well as elevated within ASD groups relative to crystallized intelligence scores on verbal tasks (Ehlers et al., 1997;Happé, 1994;Hayashi, Kato, Igarashi, & Kashima, 2008). The observed strengths in ASD fluid ability were attributed to theorized disorder-specific deficits such as weak central coherence in autism, although others have argued that measured performance in ASD represent valid estimates of fluid intelligence (Dawson, Soulières, Gernsbacher, & Mottron, 2007). Early studies have reported deficits in executive functioning in ASD but were limited in methodology and sampling (Pennington & Ozonoff, 1996). An important note is that these findings should not be taken as indicative of any diagnostic profile for ASD, more so given the highly heterogeneous and variable nature of the condition (Ehlers et al., 1997). On the other hand, failure to account for variability in fluid intelligence performance in ASD can contribute to estimation errors of group effects in brain-behaviour models (Hazlett, Poe, Gerig, Smith, & Piven, 2006). Individuals with ASD demonstrate an atypical reliance on enhanced visuospatial processes in extrastriate and parietal regions when engaging in fluid tasks (Koshino et al., 2005;Mottron et al., 2013). Increase in fluid task complexity modulated stronger activity in occipital and temporal regions in ASD, coupled with higher connectivity between major lobar regions (superior frontal gyrus, superior parietal lobe, inferior temporal gyrus, middle and inferior occipital gyrus; Simard, Luck, Mottron, Zeffiro, & Soulières, 2015;Soulières et al., 2009). Connectivity to prefrontal cortical areas observed in controls during fluid tasks were either altered or absent in ASD, suggesting aberrant functional segregation and integration in neural mechanisms underlying ASD fluid intelligence ability that are primarily characterized by increased occipito-parietal and temporal activity. (Sahyoun, Belliveau, Soulières, Schwartz, & Mody, 2010;Yamada et al., 2012). ASD performance on visual search tasks show a similar pattern of atypically increased occipito-temporal but absent prefrontal activity, indicating a predisposition for local rather than global processing that could explain differences in brain activity and connectivity related to higher level cognition in this population. The atypical emphasis on constituent features in ASD could however be an efficient strategy for the processing of complex stimuli, consistent with observations of increased performance on fluid tasks in individuals with ASD (Ring et al., 1999).
In contrast, fluid intelligence performance in neurotypical individuals involve broad recruitment across frontal, parietal, temporal and occipital cortices, as well as subcortical striatal and thalamic regions (Burgaleta et al., 2014;Geake & Hansen, 2010;Gong et al., 2005;Kroger et al., 2002;Perfetti et al., 2009;Prabhakaran, Smith, Desmond, Glover, & Gabrieli, 1997). Lateral prefrontal and parietal regions could mediate between-subject variability in the association between fluid intelligence and task performance (Gray et al., 2003). Psychometrically unidimensional tasks with high factor loadings on fluid intelligence also share similar patterns of associations with superior frontal, inferior and posterior parietal and temporal-occipital regions (Ebisch et al., 2012). Overall, it is not surprising that these pattern of findings are consistent with the parieto-frontal integration theory (P-FIT) of general intelligence neural correlates, given that fluid intelligence ability is related to a higher-order general intelligence factor (Carroll, 1993;Colom et al., 2009;Jung & Haier, 2007;Reynolds & Keith, 2017). The recent voxel-based meta-analysis of Basten, Hilger, and Fiebach (2015) on brain structural and functional correlates of intelligence lends further support to the P-FIT hypothesis.
Functional connectivity refers to the temporal dependency between the time series of measured neurophysiological signals and can express network mechanisms of high level cognitive processes (Biswal, Zerrin Yetkin, Haughton, & Hyde, 1995). Resting-state or intrinsic functional connectivity provide data about the functional architecture of the brain that also correspond to individual differences during task-dependent active states (Smith et al., 2009;Tavor et al., 2016). Intrinsic functional connectivity profiles have been shown to predict fluid intelligence ability, although existing investigations on brain networks in cognition are mostly limited to general intelligence in typically developing adult populations (Finn et al., 2015;Haász et al., 2013;Malpas et al., 2016;Penke et al., 2012).
Previous investigations on the neural correlates of ASD fluid intelligence ability have mainly relied on functional Magnetic Resonance Imaging (fMRI) task-based paradigms using blood oxygen level dependent (BOLD) as an estimate of brain activity to infer the role of local brain regions. Consequently, current knowledge about the neural basis of cognition for different intelligence constructs is limited especially in paediatric populations, and the neural correlates of fluid intelligence in children with ASD are not well-defined. The common use of a priori specified seed-target correlations to examine brain-cognition relationships may be associated with a bias for the identification of task-positive regions, and there is a need for network-based investigations across the whole-brain to be integrated with localization-focused findings on the neural mechanisms of fluid intelligence. Approaches that investigate brain-wide activity related to intelligence have recently been recommended (Basten et al., 2015;Langeslag et al., 2013). The nature and developmental trajectory of whole-brain fluid intelligence connectivity networks in ASD therefore merits further investigation. Given that ASD fluid task performance is characterized by aberrant activity in local anatomical regions-of-interest, we expect wholebrain intrinsic functional connectivity networks associated with fluid intelligence to be altered in the ASD group in comparison to typically developing controls matched on age, sex and fluid intelligence ability.
Participants
Data was obtained from the Kennedy Krieger Institute (KKI, ABIDE-II) sample (n=148) from the Autism Brain Imaging Database Exchange (ABIDE I and II; Di Martino et al., 2014). Full protocol details for sampling, image acquisition and phenotyping are available for public access 1 .Participants in the KKI sample were recruited as part of a study run by the Center for Neurodevelopment and Imaging Research (CNIR) at the KKI. All eligible participants received an MRI scan and cognitive assessment with the Wechsler Intelligence Scale for Children (Fourth Edition, WISC-IV; Fifth Edition, WISC-V). Handedness was assessed using the Edinburgh Handedness Inventory. Inclusion criteria were an age range of 8 years and 0 months to 12 years, 11 months and 30 days, and WISC-IV or WISC-V Full Scale Intelligence Quotient >80. For participants with a discrepancy of 12 points or more across indexes, the Verbal Comprehension Index (VCI), and the Perceptual Reasoning Index (PRI) score (or Visual Spatial Index and Fluid Reasoning Index in the WISC-V) had to be greater than 80 points, with the lowest index score above 65 points. Diagnosis of ASD was determined using the Autism Diagnostic Interview-Revised (ADI-R), Autism Diagnostic Observation Schedule-Generic (ADOS-G) module 3 or the ADOS-2 module 3. Instruments were administered by psychologists with graduate training. ASD classification criteria was based on the ADOS-G and/or ADI-R and clinical assessment by an expert paediatric neurologist with extensive experience in autism diagnosis. ASD participants were excluded if they had an identifiable cause of autism. For the control group, participants with a history of developmental or psychiatric disorders or with a first-degree relative with ASD were excluded. For all participants, exclusion criteria were the presence or history of a neurological disorder, major visual impairment, history of alcohol or substance use, and a developmental level of 3 or above on the Physical Development Scale.
For the present study, inclusion criteria applied to the KKI sample were male participants satisfying DSM-IV-TR 2 Pervasive Developmental Disorder criteria (Autistic Disorder, Asperger's or Not Otherwise Specified), assessed with the WISC-IV, and with MRI data acquired under the same scanning protocol. Continuous variables in the phenotype data were demeaned. Non-parametric propensity matching was conducted using the MatchIt package (Ho, Imai, & King) in the R environment (Team, 2014). Male participants with ASD were matched with TD controls on the following variables: sex, age in years [ASD: M=10.45 (SD=1.58); TD: M=10.38 (SD=0.96)], and PRI score from the WISC-IV [ASD: 108.65 (13.57); TD: 108.83 (14.14)]. The matching procedure resulted in a final sample of 50 male participants (ASD: n=26; TD: n=24).
Image Processing and Analysis
Functional connectivity analysis and visualizations were generated with the Functional Connectivity Toolbox v.16.b (Whitfield-Gabrieli & Nieto-Castanon, 2012) pipeline, Matlab R2010b (The MathWorks, Inc., Natick, MA, USA) and NeuroMArVL (http://immersive.erc.monash.edu.au/neuromarvl/). The initial 4 functional volumes per session were removed to account for T1 saturation effects. Slice-timing correction and firstvolume realignment (using a six rigid-body parameter spatial transformation) were applied to adjust for temporal and motion artefacts. Functional volumes were normalized to MNI-space, and smoothed with a full-width half-maximum Gaussian kernel of 8mm. Structural images were co-registered and segmented into grey-matter, white-matter and cerebrospinal fluid for later use in the removal of physiological noise from the functional volumes. In the first-level BOLD model, identified outliers from head motion parameters and global signal intensities (Artifact Detection Tool scrubbing) were regressed from the BOLD signal. Using the aCompCor approach (Behzadi, Restom, Liau, & Liu, 2007), confounds from non-neuronal sources such as cardiac, respiratory and physiological activity were removed. The residual BOLD time series were detrended and band-pass filtered (0.008-0.09Hz) to reduce noise in the detection of gray-matter signals.
Regions-of-interest (ROI) were defined using the FSL Harvard-Oxford Atlas (http://www.fmrib.ox.ac.uk/fsl/) for cortical and subcortical areas, and the Anatomical Automatic Labelling (AAL) atlas (Tzourio-Mazoyer et al., 2002) for cerebellar regions, resulting in 132 ROIs. The mean BOLD time series for all voxels in each ROI were extracted to compute pairwise correlations between all ROIs with the Fisher r-to-z transformation to construct a 132x132 connectivity matrix. Brain networks showing between-group differences in functional connectivity were identified with network-based statistic (NBS; Zalesky, Fornito, & Bullmore, 2010). Fluid intelligence performance scores (PRI) were regressed onto individual edges in the functional connectivity matrix and one-way ANCOVA covariate models were used to test for between-group differences in functional connectivity associated with cognitive performance scores (PRI by group interaction), or between-group differences in functional connectivity. Handedness and age were included as covariates for all analyses. To identify connected subnetworks, a breadth first search (Ahuja, Magnanti, & Orlin, 1993) was performed among connections surviving a t-statistic threshold of at least t=3.0 and permuted to generate a null distribution of largest network sizes. Each permutation randomly reassigns group labels and identifies the size of the largest interconnected subnetwork. The family-wise error (FWE) corrected p-value for a given subnetwork of size m reflects the proportion of permutations for which the largest subnetwork size is equal to or greater than m. The FWE rate is therefore controlled nonparametrically through the use of a randomized null distribution of maximum component size. Finally, subnetworks with a corrected p-FWE<0.05 value were retained.
Validation
To investigate if results from the present study (age range: 8 to 13 years) could be generalized to similar or older age cohorts, the above analyses were replicated on independent samples of data from other ABIDE sites. The University of Utah School of Medicine (USM, ABIDE-I), NYU Langone Medical Center (NYU, ABIDE-I) and Georgetown University (GU, ABIDE-II) sites were selected based on cohort age range and adequate sample size for analysis. This resulted in three independent samples representing a broad range of age cohorts (GU: 8 to 13 years; USM: 15 to 24 years; NYU: 6 to 39 years) of individuals with ASD and typically developing controls. Table 1 provides descriptive statistics for all samples used for present analyses. Diagnostic criteria, imaging acquisition protocols and parameters differed between sites. To investigate the findings of a general intelligence subnetwork in neurotypical adults (Malpas et al., 2016) but in younger cohorts with or without ASD, we further ran the above analyses to identify between-group differences in functional connectivity networks that were associated with estimate scores of general intelligence ability.
Results
Between-group differences in the association of resting-state fMRI subnetwork connectivity with fluid intelligence performance was identified in ASD males aged 8 to 13 years (network size=24 links, t-statistic threshold=3.5, p=.0373, FWE-corrected), with higher strength in ASD compared to typically developing matched controls (Figure 1). The fluid intelligence subnetwork involved regions in the left temporo-occipital middle temporal gyrus, left posterior middle temporal gyrus, bilateral paracingulate gryus, posterior cingulate gyrus, right frontal pole, right inferior frontal gyrus pars triangularis, bilateral angular gyrus, left lateral occipital cortex (superior division) and precuneus ( Table 2). Between-group differences remained significant at a higher network edge threshold (t=4, size=6, p=.0425, FWE-corrected) with nodes of the left temporo-occipital middle gyrus, bilateral angular gyrus, precuneus, posterior cingulate gyrus. Within-group edge associations in ASD also survived higher thresholding (t=4.5, size=48, p-FWE=0.0017). No networks associated with fluid intelligence were found in the matched control group, even when initial statistical thresholding was relaxed (t=2.5).
A B
T-statistic threshold: t=3.5 T-statistic threshold: t=4 C1 C2 Repeat analyses for replication in independent samples from different sites presented a similar pattern of findings in the same age cohort (site GU; age range 8 to 13 years), showing a fluid intelligence subnetwork with increased association in ASD compared to controls (network size=14 links, t-statistic threshold=3.5, p=.0396, FWE-corrected, alternate hypothesis: ASD>controls; Figure 2). Implicated regions were the bilateral occipital pole, right temoporo-occipital middle temporal gyrus, right anterior middle temporal gyrus, left posterior middle temporal gyrus, right angular gyrus and the cerebellum (Table 3). No fluid intelligence subnetworks were found in the matched control group, consistent with initial findings. Across replicated analyses from independent sites in age-matched samples, the right angular gyrus, left posterior middle temporal gyrus, occipital and temporo-occipital regions were consistently implicated in fluid intelligence subnetwork differences.
Findings failed to replicate in older age cohorts from two independent sites of ages 15 to 24 years (site USM; p>.05, FWE-corrected) and ages 6 to 39 years (site NYU; p>.05, FWEcorrected). Fluid intelligence subnetworks were not identified in older cohorts with ASD, even when the initial thresholding of pairwise functional connectivity links and FWEcorrected p-values were relaxed. No intrinsic connectivity subnetworks associated with general intelligence were identified in cohorts across all sites in our study.
Discussion
This is the first study to investigate the neural correlates of intelligence in paediatric cohorts using a sophisticated functional connectivity modelling technique with multi-site replication. We identified a resting-state functional connectivity subnetwork of fluid intelligence that showed atypical increased strength of association with fluid intelligence ability in young ASD males relative to matched controls. Findings were replicated in an independent sample of a similar matched cohort. This subnetwork was not found in older ASD samples, or in typically developing controls across all age groups in the present investigation. In the context of inter-site differences in sampling, phenotyping, diagnostic classification, MRI scanner and acquisition parameters, and the inherent heterogeneity within ASD neurobiology, the consistency of findings with replication support the robustness of a dysfunctional fluid intelligence subnetwork in individuals with ASD between 8 to 13 years of age. A unique strength of this study is the use of a network-based analysis approach to identify fluid intelligence intrinsic subnetworks controlling for wholebrain multiple comparisons, together with replication of analyses in independent samples for validation. Controls were further matched on age, sex and fluid task performance for equivalency.
Primary nodes implicated in atypical ASD fluid intelligence connectivity were the angular gyrus, posterior middle temporal gyrus, occipital and temporo-occipital regions. These regions remained consistent across independent-site replication and increased network thresholding. The strength of association between fluid intelligence and functional connectivity was greater in ASD compared to controls. Typical dorsolateral prefrontal involvement in fluid performance was notably absent in relation to fluid intelligence ability in children with ASD in replicated results when compared to controls, and did not survive increased thresholding in the initial analyses.
These findings are consistent with investigations of localized functional BOLD activity during fluid task performance that report atypical increases in occipito-temporal activity coupled with decreased prefrontal activation in ASD (e.g. Yamada et al., 2012). A recent meta-analysis of grey-matter abnormalities in paediatric ASD reported grey-matter alterations in the right angular gyrus, left inferior occipital gyrus and right inferior temporal gyrus, as well as in frontal, medial parietal and cerebellar regions. Increased greymatter volume of the right angular gyrus was further associated with increased severity of repetitive behaviours, a core symptom in ASD (Liu et al., 2017). The distinct overlap of specific patterns of grey-matter abnormalities with our present study in similar age cohorts could suggest a structural morphometric basis for the identified atypical fluid intelligence functional connectivity subnetwork in children with ASD. Independent component analysis of task-based fMRI metadata in previous work on neurotypical subjects show that temporo-occipital and inferior parietal regions constitute a cluster of intrinsic connectivity networks related to visual perception of complex stimuli higher-level visual processing, visual tracking, mental rotation and spatial discrimination (Laird et al., 2011). The angular gyrus and temporal-occipital cortex also demonstrate shared activity even among different fluid intelligence tasks, suggesting their role as potential neural correlates of fluid intelligence ability (Ebisch et al., 2012). In task-free states, connectivity between the angular gyrus and occipital regions form part of a temporally independent functional mode (Smith et al., 2012). Functionally, the angular gyrus serves as a cross-modal hub that combines and integrates multisensory information for attentional reorientation to critical information, comprehension of environmental events, manipulation of mental representations and problem solving (Seghier, 2013). The role of the angular gyrus according to the P-FIT hypothesis of information processing stages involves integration and abstraction of information, followed by parietal-frontal interactions that support problem solving and evaluation of solutions (Colom et al., 2009;Jung & Haier, 2007). That our present findings cohere with known resting-state and taskdependent functional connectivity networks in relation to fluid ability in typical controls suggest that components of the dysfunctional fluid intelligence subnetwork subserve similar functions in ASD, but are susceptible to alterations in local activity and global connectivity.
The dysfunctional fluid intelligence subnetwork in 8 to 13 year-old males was absent in older ASD cohorts. Age-dependent alterations in brain structure and function with interand intragroup heterogeneity are characteristic of atypical neurodevelopment in ASD (Uddin, Supekar, & Menon, 2013). A pattern of early increased functional connectivity followed by a decline in later stages has been reported in other disorders such as schizophrenia, possibly related to dysregulation of brain activity due to aberrant neurodevelopment of structural connectivity of hub regions in the association cortices (Fornito & Bullmore, 2015). Similarly in ASD, significant hypoactivation of the middle frontal gyrus during nonsocial tasks in children compared to adults suggest age-dependent trajectories of atypical changes in task neural correlates, and may account for discrepant findings between age cohorts in the present study (Dickstein et al., 2013). As dysfunctional subnetworks related to general intelligence were not detected in our ASD cohorts, the agedependent aberrant network structure of cognitive correlates may be specific to fluid intelligence ability in this population.
Intelligence in Typically Developing Children
Using the network-based analysis pipeline, a single subnetwork broadly distributed across regions in fronto-parietal and default-mode resting-state networks are associated with general intelligence in neurotypical adults (Hearne, Mattingley, & Cocchi, 2016;Malpas et al., 2016). The developmental trajectory of global neural architectures underlying intelligence ability in younger populations is less defined. Based on findings with replication in our present study, the absence of both general and fluid intelligence intrinsic connectivity subnetworks in typically developing boys suggests that whole-brain network correlates of intelligence ability in younger male cohorts are not yet robustly connected enough to be detected with fMRI. Typical brain maturation from infancy through adolescence involves a decrease in shortrange functional connectivity coupled with increases in long-range connections that reflect increased integration and segregation of brain networks with age. (Richmond, Johnson, Seal, Allen, & Whittle, 2016). The development of network topology is characterized by the reorganization of functional networks with a shift from local anatomical clustering to distributed function-dependent segregation, through which specific cognitive abilities coevolve with modular specialization and selective cross-network integration (Fair et al., 2009;Grayson & Fair, 2017). The absence of a whole-brain intrinsic functional connectivity subnetwork of intelligence in our analyses of typically developing children could reflect the ongoing shift from local to global networks observed in adulthood. In neurotypical adults, individual variation in general intelligence is associated with a large functional subnetwork, diffuse white-matter organization and increased global network efficiency (Li et al., 2009;Malpas et al., 2016;van den Heuvel, Stam, Kahn, & Hulshoff Pol, 2009). Consistent with this hypothesis are findings of intelligence-related differences in nodal but not global brain network properties in children between 5 to 19 years of age (Wu et al., 2013). Functional network properties however showed age and sex differences, highlighting the need for cohort-specific groups to investigate network connectivity and age-dependent developmental trajectories of brain-behaviour associations with cognitive ability. (Shaw et al., 2006).
In contrast, both global and local network properties of structural connectivity estimates of axonal white-matter tracts are related to fluid intelligence measures in children aged 6 to 11 years. Better performance on measures of perceptual reasoning was associated with greater communication capacities of structural networks from both whole-brain and specific regions (Kim et al., 2016). Because anatomical networks determine pathways of neuronal signalling, structural connectivity drives and constrains functional connectivity throughout development (Petersen & Sporns, 2015;Vertes & Bullmore, 2015). Our findings could suggest that structural network development precedes the complete deployment of global intrinsic functional connectivity networks underlying intellectual ability in children and adolescents. Cross-modal network analysis that integrates structural and functional data will be necessary to delineate the mechanisms of cognitive development and their abnormal nature and trajectory over time in neurodevelopmental disorders (Grayson & Fair, 2017).
Implications
There are several key points to draw from our analyses across distinct age and disorderspecific samples. Importantly, between-group differences in intrinsic connectivity networks underlying fluid intelligence performance remain significant between ASD and controls even when groups were matched on fluid intelligence ability. The degree of association between subnetwork connectivity and fluid ability observed in ASD is therefore more likely related to disorder-specific effects, rather than differences in fluid performance ability (Gray et al., 2003;Perfetti et al., 2009). According to the neural efficiency hypothesis, differential cortical activity may be observed among subjects with discrepant neural resources in relation to cognitive performance. The degree of brain activity associated with cognitive processing could therefore be interpreted as a measure of neural efficiency that varies as a function of individual ability or task complexity, although findings have been ambiguous and may be region-specific (Haier et al., 1988;Neubauer & Fink, 2009;Perfetti et al., 2009). Abnormally increased activation of brain regions during cognitive tasks in atypical populations may indicate a mechanism of neural compensation, and often involves mediation by hub nodes that integrate multiple neural systems, such as the angular gryus in the parietal association cortex. Dedifferentiation, the failure of neural processes to specialize due to neurodevelopmental abnormalities could also underlie early aberrant increases in hub activity (Fornito, Bullmore, & Zalesky, 2017). Consistent with this framework, our results complement previous findings of increased BOLD signal changes with increased fluid task difficulty in the inferior parietal lobule including the angular gyrus, and the left temporo-occipital junction in healthy individuals (Preusse, van der Meer, Deshpande, Krueger, & Wartenburger, 2011). Increased resting-state connectivity was also associated with higher intelligence scores (Hearne et al., 2016). Given the aberrant brain structure and function in ASD, atypically increased strength of association in the ASD fluid intelligence subnetwork may reflect a compensatory effect to achieve the same level of fluid task performance as ability-matched controls in our analyses.
Consequently, the common approach of controlling for performance variables based on matching of test scores may still remain fallible to sources of variation across different scales of brain structure and function. The general assumption behind matching groups on performance variables is that the neural architecture supporting covariates of interest also remain equivalent across groups, and therefore presumably do not contribute to variation in comparison analyses. However, observed between-group differences in neuroimaging measures in atypical neurodevelopmental conditions could be explained by variation at the level of associated neural correlates of cognitive ability, such as altered subnetwork connectivity underlying ASD fluid intelligence performance as we have shown. Matching groups on general intelligence ability without careful considerations could introduce artefactual differences in case-control comparisons biased by differential associations with neural correlates in ASD (Lefebvre, Beggiato, Bourgeron, & Toro, 2015). The critical point is that group differences in brain structure and function should demonstrate covariation with variables of interest such as clinical symptom severity, beyond rudimentary matching on performance variables as a method for confound control (Picci, Gotts, & Scherf, 2016).
The strength of the present study design based on variable matching also comes with inherent limits to generalizability of findings. The degree and structure of neural correlates of cognition could vary as a function of both task complexity and individual differences in intelligence ability (Gray et al., 2003;Khundrakpam et al., 2017;Perfetti et al., 2009;Preusse et al., 2011). ASD research is unfortunately biased towards the sampling of highfunctioning individuals, likely under-representing subgroups with lower nonverbal intelligence ability. The proportion of cohorts with neuroimaging data available is even smaller (Jack & Pelphrey, 2017). Interpretations of findings are generally limited to sampled cohorts, and extension of assumptions to understudied ASD populations should be done with caution. This is reflected in ASD samples in our study with intelligence ability in the average range. Previous work has shown that intelligence estimates in ASD based on the Raven's Progressive Matrices (RPM) were higher than scores derived from the Wechsler intelligence scales, suggesting an under or over-estimation of intelligence in this population (Dawson et al., 2007). However, others have postulated that findings may only be specific to ASD subgroups with low intelligence ability, emphasizing the need to consider the implications of individual differences in task performance in clinical research (Bölte, Dziobek, & Poustka, 2009).
With the wide range of tasks and instruments used to interrogate the neural mechanisms underlying cognition, test construct validity should be carefully evaluated in the selection of dependent variables. That is, the validity of neuroimaging and psychometric task measures of brain structure, function and performance should warrant equal consideration when investigating the neural correlates of cognition. For our study, we relied on full and abbreviated forms of the Wechsler intelligence scales with established construct validity and reliability in both typically developing and ASD populations (Minshew, Turner, & Goldstein, 2005;Scott, Austin, & Reid, 2007;Weiss, Keith, Zhu, & Chen, 2013). Others have suggested that the abbreviated form may overestimate nonverbal intelligence ability, and the PRI has also been recently separated into two independent factors representing fluid intelligence and visual processing in the latest iteration of the WISC (Axelrod, 2002;Reynolds & Keith, 2017). Varying definitions and measurement of cognitive constructs might account for inter-site differences in findings, such as the prominent occipital mediation in ASD that we observed in replication analyses. The nomenclature of brain regions also tend to differ between studies depending on the cognitive domain of interest (Seghier, 2013). Despite discrepancies in task, image acquisition and site, in addition to the heterogeneous nature of ASD, the consistent finding of a single atypical subnetwork associated with fluid intelligence in independent ASD samples is remarkable evidence that the neural correlates of fluid performance are altered in this population.
A final consideration are the constraints of NBS. The technique yields increased power over link-based FWE control to detect connected components with whole-brain multiple comparisons, but at the cost of localizing resolution for independent links (Zalesky et al., 2010). We have thus refrained from directly interpreting individual edge links in the identified subnetworks. Connectivity strength and network topology are distinct properties of the brain connectome that can demonstrate mutually exclusive perturbations (Hong et al., 2013). While we have focused on between-group differences in functional connectivity, these findings establish a framework for subsequent investigations into the multi-scale configuration of connections fundamental to cognition. Apart from identifying fundamental units or collective features of network topology, graph-based measures allow the identification of intermediate mesoscale structures through community detection techniques and across multiple timescales. Importantly, the function of network nodes may differ depending on the scale of analysis (Betzel & Bassett, 2016). The characterization of the neural architecture of intelligence in both typical and atypical populations will require an appreciation of brain structural and functional network topology across multiple scales, and their integration with valid measurement of cognitive constructs of interest.
Conclusion
We demonstrate preliminary evidence with replication for an atypical intrinsic connectivity brain network underlying fluid intelligence in male ASD children matched on fluid task ability to controls. Together with the absence of such a network in typically developing children, the neural architecture of fluid intelligence in ASD children may involve prematurely accelerated but aberrant network integration of distributed regions to support equal task performance with same-aged peers. There is potential for longitudinal investigations to delineate inter-and intra-individual variation and between-sex differences in the neurodevelopment of cognitive ability across different populations. | 7,222.6 | 2017-05-30T00:00:00.000 | [
"Psychology",
"Biology"
] |
A Unified Picture of Short and Long Gamma-Ray Bursts from Compact Binary Mergers
The recent detections of the ∼10 s long γ-ray bursts (GRBs) 211211A and 230307A followed by softer temporally extended emission (EE) and kilonovae point to a new GRB class. Using state-of-the-art first-principles simulations, we introduce a unifying theoretical framework that connects binary neutron star (BNS) and black hole–NS (BH–NS) merger populations with the fundamental physics governing compact binary GRBs (cbGRBs). For binaries with large total masses, M tot ≳ 2.8 M ⊙, the compact remnant created by the merger promptly collapses into a BH surrounded by an accretion disk. The duration of the pre-magnetically arrested disk (MAD) phase sets the duration of the roughly constant power cbGRB and could be influenced by the disk mass, M d . We show that massive disks (M d ≳ 0.1 M ⊙), which form for large binary mass ratios q ≳ 1.2 in BNS or q ≲ 3 in BH–NS mergers, inevitably produce 211211A-like long cbGRBs. Once the disk becomes MAD, the jet power drops with the mass accretion rate as Ṁ∼t−2 , establishing the EE decay. Two scenarios are plausible for short cbGRBs. They can be powered by BHs with less massive disks, which form for other q values. Alternatively, for binaries with M tot ≲ 2.8 M ⊙, mergers should go through a hypermassive NS (HMNS) phase, as inferred for GW170817. Magnetized outflows from such HMNSs, which typically live for ≲1 s, offer an alternative progenitor for short cbGRBs. The first scenario is challenged by the bimodal GRB duration distribution and the fact that the Galactic BNS population peaks at sufficiently low masses that most mergers should go through an HMNS phase.
INTRODUCTION
Gamma-ray bursts (GRBs) can originate from at least two distinct astrophysical systems: the collapse of massive rotating stars ("collapsars"; Woosley 1993;MacFadyen & Woosley 1999) and compact binary mergers (Eichler et al. 1989;Paczynski 1991).These two event classes are commonly associated with long GRBs (lGRBs) and short GRBs (sGRBs), respectively.Their durations follow log-normal distributions, with mean values of ∼ 30 s for lGRBs and ∼ 0.5 s for sGRBs (Kouveliotou et al. 1993;McBreen et al. 1994).The overlap of the two distributions poses a challenge to a clear distinction between the classes (Bromberg et al. 2013), particularly for bursts lasting between ∼ 1 s and ∼ 30 s (Nakar 2007).A more accurate burst classification can be obtained when the GRB is followed by optical emission from the astrophysical site: supernova Ic-BL (Galama et al. 1998;Hjorth et al. 2003) or kilonova from a compact object merger (Li & Paczyński 1998;Metzger et al. 2010<EMAIL_ADDRESS>et al. 2013).Being the most luminous events in the sky, GRBs are detected out to large distances, and in part because of their bright synchrotron afterglows, are infrequently accompanied by detectable thermal optical counterparts.
The recent detection of optical/infrared kilonova signals following two ∼ 10 s-long bursts in GRB 211211A (Rastinejad et al. 2022;Troja et al. 2022;Yang et al. 2022) and GRB 230307A (Levan et al. 2023a;Sun et al. 2023;Yang et al. 2023) has reignited interest in the origin of long-duration GRBs that are not associated with collapsars (see also Gal-Yam et al. 2006;Della Valle et al. 2006;Bromberg et al. 2013;Lü et al. 2022;Levan et al. 2023b), but likely originating from compact binary mergers (cbGRBs).Such long durations would at least naively be unexpected in binary mergers insofar that the accretion timescales responsible for the jet launching are expected to be of order of seconds (e.g., Narayan et al. 1992).The long-duration cbGRB (lbGRBs) events may constitute a third type of GRB population.Indeed, a closer examination of the GRB duration distribution reveals that it is best fit with three log-normal distributions (Horváth & Tóth 2016;Tarnopolski 2016).These distribu-tions potentially correspond to three distinct populations: (i) collapsar lGRBs with T 90 ≳ 30 s, (ii) short-duration cbGRBs (sbGRBs) from binary mergers with T 90 ≲ 1 s, and (iii) lb-GRBs 211211A and GRB 230307A-like events from binary mergers, lasting T 90 ∼ 10 s.Below we adhere to the conventional assumption that sbGRBs are more common than lbGRBs (Yin et al. 2023).However, we note that three lognormal distribution fits suggest otherwise (Horváth & Tóth 2016), so we do not consider the rates to be a stringent constraint.
It is tempting to associate the two cbGRB classes with the two types of compact binary mergers: black hole (BH) and neutron star (NS), and binary NS (BNS) systems.Based on the two BH-NS mergers detected during the LVK O3b run, the BH-NS merger rate was constrained to be R BHNS = 45 +75 −33 Gpc −3 yr −1 if these two events are representative of the entire population, versus R BHNS = 130 +112 −69 Gpc −3 yr −1 for a broader BH-NS population (Abbott et al. 2020).In comparison, the rate of BNS mergers was found to be R BNS = 320 +490 −240 Gpc −3 yr −1 (Abbott et al. 2021).Therefore, if the two detected BH-NS events are representative, BH-NS mergers are likely to be significantly rarer than BNS mergers, similar to the scarcity of lbGRBs compared to sbGRBs.In the case of a broader BH-NS population, other merger properties such as larger mass ratios, significant spin-orbital misalignment, and low BH spins need to be considered (Belczynski et al. 2008), all of which would result in less massive disks and the associated challenges in launching a relativistic jet (e.g., Kyutoku et al. 2015).Regardless of the BH-NS merger rate, the fraction of this population that yields electromagnetic emission is thus likely to be negligible compared to BNS mergers (Fragione 2021;Sarin et al. 2022;Biscoveanu et al. 2023).
The main cbGRB emission phase is often accompanied by additional light curve components.For example, in lb-GRB 211211A, the variable hard burst that lasted ∼ 10 s was preceded by an oscillating precursor flare (Xiao et al. 2022), and followed by a smoother and softer γ/X-ray emission for ∼ 100 s (Gompertz et al. 2023), referred to as the "extended emission" (EE; Norris & Bonnell 2006;Perley et al. 2009).The prolonged EE, which is more commonly seen in association with lbGRBs (Norris et al. 2011;Kaneko et al. 2015), accompanies the main signal in ∼ 25% − 75% of cbGRBs (Norris & Gehrels 2008;Norris et al. 2010;Kisaka et al. 2017).It is generally characterized by two components: an initial roughly flat "hump" (Mangano et al. 2007;Perley et al. 2009), followed by a power-law decay ∼ t −2 (Giblin et al. 2002;Kaneko et al. 2015;Lien et al. 2016).Any cbGRB model linked to the underlying physics of binary mergers must therefore explain the entire emission signal, including precursor flares and EE phases.
In this paper, we review recent first-principles simulations, and how they constrain the origins of the different types and phases of cbGRB light curves.In particular, we present a framework for connecting the binary merger population with the entire spectrum of cbGRB observations.The paper is structured as follows.In §2 we argue that while lGRB jets are powered by magnetically arrested disks (MADs), BHpowered cbGRB jets are generated before the disk enters a MAD state.In §3 we show that the formation of a massive disk ( M d ≈ 0.1 M ⊙ ) around the post-merger BH inevitably powers lbGRBs such as GRB 211211A.In §4 we present two self-consistent models as the origin of sbGRBs: promptcollapse BHs forming low-mass disks and hypermassive NSs (HMNSs); we describe why we favor these two scenarios over alternatives, such as delayed-collapse BHs, supramassive NSs (SMNSs), white dwarf (WD) mergers/accretion induced collapse (AIC), and neutrino-driven jets.In §5 we discuss the origin of the precursor and EE of cbGRBs, compare the models with observables, and deduce that sbGRBs are likely powered by HMNSs, whereas lbGRBs are powered by BHs with massive disks.We summarize and conclude in §6.
COLLAPSAR GRBS VS. CBGRBS: TO BE MAD OR
NOT TO BE MAD Long GRBs and cbGRBs take place in very different astrophysical environments, leading to distinct conditions for their occurrence and potentially differing central engines that drive these events.A recent study by Gottlieb et al. (2023a) demonstrated that lGRB jets are launched from BHs once the accretion disk becomes MAD.The reason for this is that a successful jet launching requires the Alfvén velocity to surpass the free-fall velocity of the inflowing gas, allowing magnetohydrodynamic waves to escape from the BH ergosphere and form the emerging jet (Komissarov & Barkov 2009).In other words, a sufficiently powerful magnetic flux empowers a BH to launch jets in defiance of the inward motion of the surrounding stellar envelope.Numerical simulations (Gottlieb et al. 2022a) have confirmed that this process is sustained once the disk becomes MAD, occurring when the dimensionless magnetic flux on the BH reaches a threshold of ϕ ≡ Φ( Ṁr2 g c) −1/2 ≈ 50, where r g is the BH gravitational radius, Φ is the dimensional magnetic flux, and Ṁ is the mass accretion rate (e.g., Tchekhovskoy 2015).The BZ-jet power is determined by (Blandford & Znajek 1977;Tchekhovskoy et al. 2011): where r H is the radius of the BH horizon, and f (a) is the functional dependency on the BH spin.This relation can also be expressed in terms of the dimensionless magnetic flux ϕ: where the jet launching efficiencies are defined as: where η a is the maximum efficiency for a given BH spin calibrated by Lowell et al. (2023).In a MAD state η ϕ = 1, and thus Eq. ( 2) shows that the jet launching efficiency depends only on a.This implies that the lGRB timescale is governed either by the BH spin-down timescale, ȧ (Jacquemin-Ide et al. 2023), or by the accretion timescale (e.g., Gottlieb et al. 2022a).
In contrast to collapsars, where the newly-formed BH is embedded in a dense massive stellar core, binary mergers take place in a considerably less dense environment surrounding the central engine.Consequently, jets can emerge well before the disk reaches a MAD state at T MAD .Numerical simulations incorporating self-consistent models of binary mergers, capable of launching these jets, have verified this expectation (e.g., Hayashi et al. 2022Hayashi et al. , 2023)).These simulations show that the compactness of the post-merger disk allows for the dimensional magnetic flux to rapidly accumulate on the BH 1 , resulting in a constant jet power, P j (t < T MAD ) ∼ Φ ∼ const (Eq.( 1)).Due to the decaying mass accretion rate, the dynamical importance of the magnetic field (as measured by the dimensionless magnetic flux ϕ ∝ Φ Ṁ−1/2 ) grows with time.Once ϕ ≈ 50 is reached, the disk enters a MAD state, which saturates the jet launching efficiency η ϕ ≈ 1.Thereafter, the jet power follows the declining mass accretion rate, P j (t > T MAD ) ∝ Ṁ following Eq.(2).
Unlike collapsars, the disks formed from binary mergers do not have an external supply, resulting in their steady depletion and a continuous decrease in the BH mass accretion rate.In fact, at t ≳ 0.1 s, the mass accretion rate Ṁ follows a single power-law decay without a characteristic timescale relevant to cbGRBs (which in the collapsar case is set by the structure of the progenitor star).This implies that, in contrast to lGRBs where jet launching persists during the MAD phase of the disk and its timescale is set by Ṁ or ȧ, in mergers 2 it is the MAD transition at T MAD (dictated by M d and Φ) that eventually causes the jet power to decay, thus setting the cbGRB duration, as we now describe.
3. LBGRBS FROM BHS WITH MASSIVE DISKS Gottlieb et al. (2023b) presented first-principles simulations of a BH-NS merger with mass ratio q = 2, which results in a rapidly spinning BH with a ≃ 0.86.A substantial accretion disk of mass M d ≈ 0.15 M ⊙ formed around the BH, resulting in a high initial accretion rate Ṁ ∼ M ⊙ s −1 .We find a similar outcome here for simulations of a BNS merger of component masses 1.06 M ⊙ and 1.78 M ⊙ , initialized from the endpoint of the merger simulations of Foucart et al. (2023).In that system, the remnant promptly collapses to a BH with a = 0.68, surrounded by a disk with M d ≈ 0.1 M ⊙ (see Appendix §A for the full numerical results of the BNS merger simulations).
Eq. (2) shows that the jet power depends on both the mass accretion rate and the magnetic flux on the BH, Φ. Binary compact mergers produce small accretion disks that promptly feed the available magnetic flux onto the BH3 .Because Φ hardly changes thereafter during the subsequent accretion phase, this results in a constant jet power P j ∼ const with a magnitude that depends on the disk's poloidal field strength.This is demonstrated in Figure 1, which depicts the jet power as a function of time for different values of Φ and If the initial plasma beta in the disk is low (leading to large Φ), then the jet launching efficiency is high, and the jet starts with too much power compared to sbGRB luminosities.In such cases, the dimensionless magnetic flux on the BH quickly saturates and the disk becomes MAD, ending the (Gottlieb et al. 2023b) and the 5 BNS merger simulations presented here, all of which generate massive disks M d ≈ 0.1 M ⊙ .The purple line delineates the logarithmic average of these mass accretion rates, which constitutes the maximum jet power assuming η a = 1 corresponding to a BH spin a ≈ 0.87 (left vertical axis).Turquoise lines illustrate schematically the jet power evolution for different assumptions about the dimensional magnetic flux threading the BH, Φ, and the corresponding total jet energy, E j .Since the magnetic flux on the BH is likely accumulated early and hence remains nearly constant before the disk transitions to MAD, the jet power, P j , is also predicted to be roughly constant at these times.However, once the dimensionless magnetic flux saturates in the MAD state, the jet power saturates at P j = Ṁc 2 and thus follows the mass-accretion rate Ṁ ∝ t −2 thereafter (we have extrapolated P j by a dashed line to later times).The yellow (blue) region outlines the estimated average jet power and duration T 90 (T 50 ) of the sbGRB (lbGRB) population based on prompt emission and afterglow observations (see text).While the jets from such massive disks are either too powerful, or operate for too long, compared to the sbGRB population, BH accretion from such massive disks nicely matches the observed properties of lbGRBs.
constant jet power phase.This translates to a relatively short and exceedingly luminous cbGRB (see e.g. the top turquoise line in Fig. 1).This outcome challenges the model of Gao et al. (2022), which suggests that a strong magnetic field can halt accretion to prolong the cbGRB duration.
If instead, the initial plasma beta in the disk is high (low Φ) or the initial magnetic field configuration is predominantly toroidal (see e.g., Appendix §A), then the jet launching efficiency is low and the jet can generate a luminosity characteristic of sbGRBs.Over time, the efficiency increases due to the development of a global poloidal magnetic field and the decrease in the mass accretion rate that follows 4 Ṁ ∼ t −2 , 4 Energy injection from alpha-particle recombination can also act to steepen the mass accretion power-law, after neutrino cooling is no longer important, at t ≳ 1 s (Metzger et al. 2008a;Haddadi et al. 2023).
as was also found in other numerical simulations (Fernández et al. 2015(Fernández et al. , 2017(Fernández et al. , 2019b;;Christie et al. 2019;Metzger & Fernández 2021;Hayashi et al. 2022), where the normalization of the mass accretion rate is set by M d .When the disk finally becomes MAD at T MAD , the efficiency stabilizes at η ϕ ≈ const, and Eq. ( 2) reads P j ∼ Ṁ ∼ t −2 (see e.g. the bottom turquoise lines in Fig. 1).The two phases of P j (t < T MAD ) ∼ P 0 and P j (t > T MAD ) ∼ t −2 are generic for BH-powered cbGRB jets.This motivates future analytic and numerical models to consider such temporal evolution of the jet power, with two free parameters: T MAD , determined by the values of ϕ, and P 0 , determined by Φ.
We stress that a roughly constant jet power does not imply a constant γ-ray luminosity.Firstly, as shown in Fig. 4(d) in Appendix A, the jet power itself exhibits temporal variability, particularly for the initially toroidal configurations, owing to the stochastic nature of the dynamo process.Secondly, different portions of the jet undergo different levels of mixing and mass entrainment by the surrounding environment, leading to fluctuations in the baryon loading, magnetization, and Lorentz factor.These variations likely translate to a range of radiative efficiencies.This implies that even though the jet power remains roughly constant on average (consistent with the observed lack of temporal evolution in the statistical properties of GRB light curves throughout the burst; e.g., McBreen et al. 2002), different light curves can exhibit different shapes and variability, depending on the specifics of the merger.
Constraints from cbGRB observations
To compare the predictions of numerical simulations with observational data, we need to deduce the true jet properties from observations.The observed duration of the γ-ray emission from cbGRB, T 90 , varies depending on the detectors used (Bromberg et al. 2013), and whether the GRB duration distribution is modeled assuming 2 (lGRB and cbGRB) or 3 (lGRBs, sbGRBs, and lbGRBs) populations.To estimate the range of T 90 for sbGRBs, we refer to the lowest and highest T 90 values found among 2 and 3 Gaussian fits to Fermi and BATSE duration distributions in Tarnopolski ( 2016) and find: 0.38 s ≤ T 90 ≤ 0.85 s.For lbGRBs, we take the prompt emission durations of recent events GRB 211211A and GRB 230307A as boundaries: where T 50 = 12.1 s (Tamura et al. 2021) and T 50 = 9.2 s (Svinkin et al. 2023), respectively.The use of T 50 instead of T 90 , in this case, is motivated by the comparable radiated energies of the prompt burst and EE phases (Kaneko et al. 2015;Zhu et al. 2022), rendering T 50 a more accurate estimate for the prompt duration.
The characteristic jet power of cbGRBs can be estimated as: where E iso,γ is the isotropic equivalent γ-ray energy, f b is the beaming fraction, and ϵ γ is the radiative efficiency, of the γ-ray emission.We take E iso,γ ≈ 2 × 10 51 erg for sbGRB (Fong et al. 2015), while for lbGRB we adopt values E iso,γ ≈ 5.3 × 10 51 erg (Yang et al. 2022) and E iso,γ ≈ 1.5 × 10 52 erg (Levan et al. 2023a) measured for GRB 211211A and GRB 230307A, respectively.We adopt a range of beaming factors 0.01 ≤ f b ≤ 0.11 (Fong et al. 2015), corresponding to a true γ-ray jet energy for sbGRB of E obs,γ ≈ 2×10 49 −2×10 50 erg (Fong et al. 2015).Early estimates of the γ-ray efficiency in lGRBs found ϵ γ ≈ 0.5 (Panaitescu & Kumar 2002), but later analyses by Beniamini et al. (2015Beniamini et al. ( , 2016) ) suggested a lower value of ϵ γ ≈ 0.15.Berger (2014) found that the ratio of cbGRB prompt to afterglow energy is higher by an orderof-magnitude compared to lGRBs, indicating a potentially higher ϵ γ for cbGRBs.Nevertheless, this discrepancy might be attributed to the brighter afterglow emission arising from the denser large-scale environments surrounding the massive star progenitors of lGRBs.It thus remains unclear whether the difference between lGRBs and cbGRBs results from variations in the external medium, or is intrinsic (i.e., attributed to higher ϵ γ in cbGRBs) due to e.g.substantial wobbling jet motion in collapsar jets (Gottlieb et al. 2022b).We thus consider a range of 0.15 ≤ ϵ γ ≤ 0.5 in our estimates.
Figure 1 compares theoretical and numerical estimates of the jet power with cbGRB observations.The right vertical axis shows the characteristic evolution of the BH accretion rate as a function of time after the merger (purple line), which we have obtained by averaging the results of BH-NS merger simulations by Gottlieb et al. (2023b) and the BNS merger simulations presented here (gray lines), all of which produce massive disks with M d ≈ 0.1 M ⊙ .The jet power, displayed on the left vertical axis, is expected to be roughly constant at early times, insofar that most of the magnetic flux Φ accumulates on the BH quickly.However, as the accretion rate drops, the dimensionless magnetic flux ϕ ∝ Ṁ−1/2 increases with time, until the disk enters a MAD state and the jet efficiency η ϕ ≈ 1 saturates.After this point, the jet power P j ≈ η a Ṁc 2 (Eq.( 2)) tracks the decaying mass-accretion rate P j ∝ t −2 , marking the cbGRB characteristic timescale as that at which the disk goes MAD.
In order to achieve the characteristic jet powers required to explain sbGRBs (yellow region), the magnetic flux needs to be Φ ∼ 10 27.5 G cm 2 (bottom turquoise lines).However, for such a flux, the accretion disk can only enter a MAD state after several seconds, significantly longer than the sbGRB duration, T MAD ≫ T 90 .On the other hand, flux at roughly this same level Φ ≲ 10 27 G cm 2 leads to a jet which naturally achieves both the correct power and duration of the lbGRB class (blue region).
We conclude that for relatively large disk masses M d ≳ 0.1 M ⊙ (consistent with that required to produce the kilonova ejecta in GW170817; e.g., Perego et al. 2017;Siegel & Metzger 2017), the resultant jets exhibit either excessively high power (if the seed magnetic flux threading the disk is large) or lower power with extended duration of activity (if the seed flux is weaker).The former is ruled out observationally, implying that massive disks must give rise to lbGRBs.Therefore, if the jet in GW170817 was powered by a BH surrounded by a massive disk, then the inferred jet energy, E j ≈ 10 49 − 10 50 erg (Mooley et al. 2018) indicates that the jet was not a luminous cbGRB but rather a lbGRB (e.g., the bottom turquoise line in Fig. 1).Unfortunately, because the jet was ∼ 20 • off-axis (Mooley et al. 2018), the bulk of the gamma-ray emission was beamed away from Earth, precluding a direct measurement of the jet duration.
Disfavored solutions
Here we explore potential caveats to the conclusions of the previous subsection.However, finding reasons to disfavor each, we shall ultimately conclude that BHs surrounded by massive disks remain the most likely explanation for lb-GRBs.
Lower post-merger BH spins
According to Eq. ( 2), one potential way to reduce the jet power is to decrease the maximum efficiency η a by considering a lower post-merger BH spin for an otherwise similar magnetic flux.For example, a BH spin of a ≈ 0.4 yields maximum efficiency of only η a ≈ 0.1 (Lowell et al. 2023).This would allow BHs with massive disks to power sbGRBs provided the BH spin obeyed a ≲ 0.4.However, this requirement conflicts with the results of numerical relativity simulations, which find post-merger BH spins 0.6 ≲ a ≲ 0.8 (Kiuchi et al. 2009;Kastaun & Galeazzi 2015;Sekiguchi et al. 2016;Dietrich et al. 2017) for BNS mergers, corresponding to 0.3 ≲ η a ≲ 0.7.BH-NS mergers result in comparable or slightly higher remnant BH spins, at least for systems leading to the formation of massive accretion disks (Foucart et al. 2011(Foucart et al. , 2013(Foucart et al. , 2014(Foucart et al. , 2017(Foucart et al. , 2019;;Kyutoku et al. 2011Kyutoku et al. , 2015;;Kawaguchi et al. 2015).Appealing to a lower BH spin can thus only reduce the jet energy by a factor of ≈ 2 compared to our estimates assuming η a ≈ 1.
Delayed jet launching
As the magnetic field in post-merger accretion disks is anticipated to be predominantly toroidal (e.g., Ruiz et al. 2018), a jet of significant power may only be launched after a dynamo process in the disk generates a sufficiently strong global poloidal field.If the seed magnetic field is weak, the jet onset might be delayed for several seconds (see e.g., Hayashi et al. 2023), thus operating for only a brief period before the disk transitions into a MAD state.This would make it possible for a BH with a massive disk to produce a sbGRB.Nevertheless, it is unlikely that this scenario can serve as a generic explanation for sbGRBs, as fine-tuning is required to launch the jet only briefly after ∼ 10 s, just before the disk reaches a MAD state, in order to achieve T 90 ≲ 1 s.
Misestimating the cbGRB duration
Another possible caveat worth exploring is whether the jet duration could be inferred incorrectly from observations.Such an erroneous estimation could occur while (i) converting from the engine activity duration to T 90 , or (ii) due to uncertainties in observations: (i) If the interaction of the jet with the external medium is sufficiently strong to decelerate the jet head to sub-relativistic velocities, the radial extent of the jet can become significantly shorter than T MAD /c, leading to an observed GRB duration considerably shorter than the MAD timescale over which the jet is launched.However, for typical properties of merger ejecta and cbGRB jet energies, the jet head exhibits at least mildly relativistic motion from the onset (Gottlieb & Nakar 2022), supporting the usual assumption that the GRB duration follows the activity time of the jet (i.e., T 90 ∼ T MAD ).
(ii) In collapsars, the physics of jet propagation (Bromberg et al. 2011) and the observed GRB duration distribution (Bromberg et al. 2012) support a substantial fraction of jets being choked inside the star (see also Gottlieb et al. 2022a).Some jets may operate just long enough to break out of the star and power a short-duration GRB (Ahumada et al. 2021;Rossi et al. 2022).If collapsar jets outnumber those originating from binary mergers within the sGRB population, this could in principle lead to underestimates of the typical duration of binary merger jets.However, while such an increase in the inferred T 90 of binary merger jets could potentially alleviate the tension in accounting for sbGRB from massive BH disks, it provides no natural explanation for the bimodal distribution of GRB durations.
Prompt-collapse Black Holes
When the total mass of a BNS exceeds a critical threshold M tot ≳ 2.8 M ⊙ , the remnant created by the merger promptly collapses into a BH surrounded by an accretion disk (Bauswein et al. 2013), the mass of which depends sensitively on the binary mass ratio.For unequal mass ratios (q ≳ 1.2), as characterized by our BNS merger simulations, the lighter NS is disrupted, resulting in a massive accretion disk, M d ≈ 0.1 M ⊙ .By contrast, prompt-collapse mergers with q ≈ 1 generate significantly smaller disk masses, M d ≲ 10 −2 M ⊙ (see Shibata & Hotokezaka 2019, for a review).Assuming the mass accretion rate to scale linearly with the disk mass, and Φ to be largely independent of M d , then disk masses of M d ≲ 10 −2 M ⊙ could power jets consistent with sbGRB observations.This implies that sbGRB can in principle be powered through massive BNS mergers with M tot ≳ 2.8 M ⊙ and q ≈ 1.In BH-NS mergers, similarly low disk masses of M d ≲ 10 −2 M ⊙ are possible for high binary mass ratios, q ≫ 1, low pre-merger BH spin, or large spin-orbit misalignment (Foucart et al. 2018).
The region M tot > 2.8 M ⊙ in Figure 2 overviews this scenario.Low disk masses, such as those produced by equal mass BNS mergers that undergo prompt BH formation (bottom yellow region) or high mass ratio BH-NS mergers (top right yellow region)5 , giving rise to sbGRBs.The opposite case of mergers forming massive BH disks then power lb-GRBs (blue region).If BHs power all cbGRB jets, then it is expected that the cbGRB duration spectrum will be continuous via the disk mass distribution.This seems to be in tension with the observed bimodal distribution.This scenario also poses an additional requirement on the rates given that most cbGRBs arise from BNS mergers.If sbGRBs are more common than lbGRBs, this would require that q ≈ 1 BNS mergers (sbGRBs) should be more common than unequal mass ratio BNS mergers (lbGRBs).While consistent with the mass ratio distribution of the Galactic BNS population being narrowly concentrated around q ≲ 1.2 (Vigna-Gómez et al. 2018;Farrow et al. 2019), this picture is in tension with the BNS masses being below the expected prompt collapse threshold ≈ 2.8 M ⊙ , as we now discuss.
Long-lived HMNSs
Observations of Galactic BNSs indicate an average NS mass of M NS ≈ 1.33 M ⊙ (Özel et al. 2012;Kiziltan et al. 2013;Özel & Freire 2016;Farrow et al. 2019).If representative of the extragalactic merger population as a whole, this relatively low mass suggests that most mergers will not undergo a prompt collapse into a BH given current constraints on the NS Equation of State (EoS) (e.g., Margalit & Metzger 2019).Furthermore, larger Fe cores are generally expected to result in both more energetic explosions and greater NS natal kicks, resulting in a correlation between these two properties (Tauris et al. 2017).Since large kicks tend to unbind the binary, this makes less massive BNS systems more likely to eventually merge compared to their more massive counterparts.
The merger of BNS systems with M tot ≲ 2.8 M ⊙ results in the formation of a highly magnetized differentially rotating HMNS, which only collapses into a BH after some de-lay (e.g., Shibata & Taniguchi 2006;Kastaun & Galeazzi 2015;Hanauske et al. 2017).As a result of amplification of the magnetic field via differential rotational and instabilities, such HMNSs have the potential to produce energetic jets that could be the source of sbGRBs (Kluźniak & Ruderman 1998).One challenge to this scenario is that the polar outflows from HMNS are subject to baryon contamination of ∼ 10 −4 M ⊙ str −1 driven by strong neutrino heating from the atmosphere just above the surface (Thompson et al. 2001;Dessart et al. 2009;Metzger et al. 2018), which for jets of sbGRB energies limits their bulk Lorentz factors to Γ ≲ 10 ( Metzger et al. 2008b).While relatively low, such Lorentz factors might be nevertheless compatible with constraints based on compactness arguments in cbGRBs (Nakar 2007) 6 .
Comparing the observed properties of cbGRBs with the energy output and lifetime of HMNSs is challenging due to the sensitivity of the latter to several theoretically uncertain properties of the post-merger system.The lifetime of the HMNS is governed by various physical processes, including neutrino cooling and angular momentum transport, the timescales for which in turn depend on factors such as the strength of the remnant's large-scale magnetic field, the saturation level of various magnetohydrodynamic instabilities giving rise to turbulent transport, and the initial distribution of angular momentum and temperature (Margalit et al. 2022).The complexity of incorporating all of these physical processes into long-term simulations, on top of uncertainties in the EoS, renders the lifetimes of HMNSs highly uncertain (Hotokezaka et al. 2013a;Dietrich et al. 2017).
More massive binaries in general produce HMNSs with shorter lifetimes (Shibata & Taniguchi 2006;Bauswein et al. 2013).For binaries with M tot ≈ 2.7 M ⊙ the HMNS lifetime is primarily governed by angular momentum transport and the specific EoS (Hanauske et al. 2017).For less massive HMNSs, the collapse is dictated by either angular momentum transport with a timescale of T HMNS ∼ 0.1 s, or if the HMNS is partially thermally supported (Hotokezaka et al. 2013a;Kaplan et al. 2014), by neutrino cooling with a timescale of T HMNS ∼ 1 s (Sekiguchi et al. 2011).The binary mass ratio also plays a role, with greater asymmetry resulting in a longer HMNS lifetime due to increased angular momentum support (Dietrich et al. 2017).
Short cbGRB
No Disks
BNS
No Binaries
Long cbGRB
Figure 2. The outcomes of compact object mergers and their ability to power various cbGRBs sub-classes as a function of the binary mass ratio and total mass.lbGRBs occur in high M tot and high q BNS mergers that form a massive BH disk, or in high pre-merger BH spin and low mass ratio BH-NS mergers (blue region).sbGRBs may arise either from equal mass ratio BNS mergers (bottom yellow region) and low pre-merger BH spin/high mass ratio BH-NS mergers (top yellow region), or by HMNS formed in BNS mergers with M tot ≲ 2.8 M ⊙ (left yellow region).The absence of evidence for distinct sub-classes of sbGRBs suggests that either BHs or HMNSs are likely to be the sole origin of these events, i.e.only one of the proposed sbGRB scenarios is correct.The Galactic BNS mass distribution, the bimodal GRB duration distribution, and GW170817 observations favor HMNSs as the engine of sbGRB jets.
Siegel 2023; Kiuchi et al. 2023).On the other hand, Most & Quataert (2023) found for a similar magnetic field and binary mass that the jet emission is lower by several orders of magnitude compared to other simulations.Furthermore, the HMNS lifetime varies greatly among those simulations, from T HMNS ∼ 10 ms to T HMNS ≳ 1 s, demonstrating the uncertainty in the HMNS lifetime, even when similar magnetic fields and M tot are considered (Ruiz et al. 2016(Ruiz et al. , 2020;;Ciolfi et al. 2019;Ciolfi 2020;Aguilera-Miret et al. 2023;Most & Quataert 2023;Kiuchi et al. 2023).The specific properties of the binary and the EoS, thus play a crucial role in determining the characteristics of HMNSs.
Perhaps the tightest constraint on the properties of HMNSs comes through the interpretation of the first multi-messenger BNS system, GW170817, characterized by M tot ≈ 2.75 M ⊙ and q ≲ 1.3 (Abbott et al. 2019).GW170817 provided valuable insights into the EoS of dense matter (Radice et al. 2018b), and supported the existence of a transient HMNS phase (Margalit & Metzger 2017;Shibata et al. 2017;Rezzolla et al. 2018).The large quantity of slow-moving ejecta inferred from the kilonova, argues against a prompt collapse of the BH but is consistent with the expectation of disk outflows from a merger accompanied by a HMNS phase.The low inferred abundance of lanthanides in the ejecta (e.g., Kasen et al. 2017) supports strong neutrino irradiation of the disk by the HMNS (e.g., Metzger & Fernández 2014;Kasen et al. 2015;Lippuner et al. 2017).These findings thus point towards the requirement of a sufficiently stiff EoS, capable of supporting the formation of an HMNS from the GW170817 merger with M tot ≈ 2.75 M ⊙ .The HMNS could have persisted for the Alfvén crossing timescale of ∼ 1 s (Metzger et al. 2018), sufficiently long to power a sbGRB.Based on a suite of merger simulations targeted towards GW170817, Radice et al. (2018a) found that the remnant indeed most likely possessed enough angular momentum to prevent a collapse and to form a long-lived HMNS, even for M tot ≈ 2.75 M ⊙ .
The region M tot < 2.8 M ⊙ in Figure 2 summarizes this alternative scenario, in which sbGRBs arise from transient jets powered by moderately long-lived HMNSs formed from relatively low-mass binaries (left yellow region).In this scenario, all prompt-collapse BHs give rise to lbGRBs, where dimensional analysis suggests that M d determines the jet power ( §5.2).
Delayed-collapse Black Holes
In BNS mergers where the combined mass is M tot ≲ 2.8 M ⊙ , the collapse of the HMNS into a BH may introduce a delayed launching of BZ-jets, which could potentially contribute to the cbGRB populations.When the BH formation is preceded by a transient phase of a HMNS, the disk mass depends on T HMNS .If the HMNS collapses within a few ms, the system evolves in a similar way to promptcollapse BHs.A longer-lasting HMNS with T HMNS ≳ 10 ms allows for a greater opportunity for the post-collapse disk to grow through angular momentum transport to M d ≈ 0.1 M ⊙ (e.g., Hotokezaka et al. 2013a).However, a longer-lived HMNS also provides an opportunity for the disk to lose mass prior to the BH formation.The disk continuously expands due to viscous angular momentum transport by the differentially rotating HMNS and viscous heating by magnetorotational instabilities (MRI) in the disk.Once neutrino cooling becomes subdominant to viscous heating, the disk expels winds, thereby reducing its mass (see, e.g., Siegel & Metzger 2018;Fernández et al. 2019b).In cases where vigorous viscous heating prompts rapid expansion, a substantial portion of the disk mass might be lost within T HMNS (Fujibayashi et al. 2018(Fujibayashi et al. , 2020)).
The post-HMNS collapse disk mass remains elusive due to uncertainties pertaining to variables such as the magnetic field and effective viscosity in the disk, T HMNS , and other contributing factors.Given the significant impact of the disk mass on determining the cbGRB type, the role of delayedcollapse BHs remains uncertain7 .Two possibilities exist: (i) If the disk mass is appreciably reduced by viscous heating prior to BH formation, then the BZ-jet might be less luminous compared to the preceding HMNS-powered jet that generated the sbGRB.In such instances, the jets launched by delayed-collapse BHs could serve as sources of EE once they transition into the MAD state.(ii) If the viscous heating is insufficiently strong to remove the bulk of the disk mass on T HMNS timescale, the BH forms with a massive disk.As outlined in §4.1, such disks are likely to give rise to lb-GRBs.If this configuration characterizes the standard picture of HMNSs, the lbGRBs would supersede the observational imprint of HMNS-powered jets, indicating that all cbGRBs are powered by BHs.Interestingly, this perspective forecasts that BNS mergers with M tot ≲ 2.8 M ⊙ lead to lbGRBs, implying that lbGRBs are more common than sbGRBs.
Long-lived SMNSs
For particularly low-mass binaries M tot ≲ 2.4 M ⊙ , a very long-lived rigidly rotating SMNS with M d ≈ 0.1 M ⊙ can form (Giacomazzo & Perna 2013;Foucart et al. 2016).Similar to the HMNS case, the early stages after the formation of a SMNS can in principle give rise to moderately relativistic outflows with Γ ∼ 10 (e.g., Metzger et al. 2008b).However, SMNSs can live for t ≫ 1 s before collapsing, and thus may generate a relativistic wind that reaches Γ ≳ 100 as the rate of neutrino-driven mass-ablation from the SMNS surface decays (e.g., Thompson et al. 2004;Metzger et al. 2008b).Relativistic magnetohydrodynamic (MHD) (Bucciantini et al. 2012) and numerical relativity (Ciolfi et al. 2017;Ciolfi 2020;Ruiz et al. 2020) simulations have demonstrated that longlived magnetars are potentially capable of powering cbGRB jets.Such jets could be compatible with energy injection into cbGRB afterglows (Zhang & Mészáros 2001), and the latetime spin-down luminosity of the magnetar obeys ∼ t −2 , also consistent with the observed decay evolution of the EE (Metzger et al. 2008b;Bucciantini et al. 2012;Gompertz et al. 2013).
The kilonovae which accompanied the two recent lb-GRBs, GRB 211211A and GRB 230307A, support relatively slow outflows ( v ej ≲ 0.1c) containing high-opacity material consistent with significant lanthanide/actinide enrichment (Rastinejad et al. 2022;Levan et al. 2023a;Barnes & Metzger 2023).While both these properties are consistent with the disk outflows from a BH accretion disk (e.g., Siegel & Metzger 2017;Fernández et al. 2019b), the ejecta velocities are too low compared to those expected following substantial energy injection from the magnetar wind (Bucciantini et al. 2012).Sustained neutrino irradiation of the disk outflows from the hot stable neutron star remnant, also precludes significant heavy r-process material (e.g., Metzger & Fernández 2014;Kasen et al. 2015;Lippuner et al. 2017).
Additional arguments which disfavor SMNSs as the progenitors of the majority of the cbGRBs include: (i) lack of evidence for a significant injection of rotational energy from the magnetar based on the late radio afterglow emission (Metzger & Bower 2014;Horesh et al. 2016;Schroeder et al. 2020;Beniamini & Lu 2021); (ii) the BNS mass distribution favors HMNSs as the common remnant of a BNS merger, and recent results by Margalit et al. (2022) show that accretion can shorten the SMNS lifetime such that it is closer to T HMNS , reducing the parameter space capable of generating long-lived magnetars.In light of the viability of the massive BH disk scenario, the above arguments disfavor the model suggested by Metzger et al. (2008b), Sun et al. (2023), in which lbGRBs with EE are powered by long-lived magnetars.
Binary WD merger and AIC
The formation of a magnetized NS does not require a merger that involves a pre-existing NS.Instead, it may originate from the gravitational collapse of a WD in a binary system (Taam & van den Heuvel 1986).The secondary star for AIC can either be a merging WD companion, or a nondegenerate donor (e.g., Duncan & Thompson 1992;Usov 1992;Yoon et al. 2007).The resulting newly formed NS can be a magnetar if the magnetic field of the progenitor WD is very strong and is amplified by flux freezing during the collapse (see e.g., Burrows et al. 2007) or after the collapse through magnetic winding or other dynamo action after the merger/collapse.Magnetars formed from AIC may potentially act as central engines for cbGRBs (Usov 1992;Metzger et al. 2008b).
Accreting WDs are generally considered to lose much of their angular momentum during their evolution (e.g., through classical nova eruptions), ultimately becoming slow rotators (Berger et al. 2005).In the case of binary WD mergers, the angular momentum budget is much higher initially; however, the most massive mergers capable of undergoing AIC ultimately produce an NS with a mild rotation period of ∼ 10 ms, due to angular momentum redistribution during the postmerger phase prior to collapse (Schwab 2021).Such slowly rotating magnetars have a limited energy reservoir and would not be accompanied by an appreciable accretion disk.
AIC occurs when a massive oxygen-neon WD accretes matter from a companion star until it reaches the Chandrasekhar limit and collapses into an NS (e.g., Nomoto & Kondo 1991; however, see Jones et al. 2016).During the collapse process, conservation of angular momentum may lead to the formation of a rapidly spinning NS surrounded by a disk (Bailyn & Grindlay 1990).Additionally, the fast and differential rotation in the newly formed NS results in a substantial amplification of the magnetic field (Dessart et al. 2007), which may result in a millisecond magnetar.However, the AIC faces similar challenges as the SMNS scenario ( §4.4).For example, neutrino irradiation from the long-lived magnetar will increase the electron fraction in the disk outflows (e.g., Metzger et al. 2009;Darbha et al. 2010), leading to inconsistencies with the lanthanide-rich ejecta inferred from the kilonova emission from GRB 211211A and GRB 230307A.
Another scenario involving WDs is an NS-WD merger (Fryer et al. 1999;King et al. 2007), which was proposed as origins of GRB 211211A (Yang et al. 2022) and possibly GRB 230307A (Sun et al. 2023).It is argued that the burst duration scales with the accretion timescale, which in turn scales inversely with the density of the companion star for an accretion-powered engine, favoring a WD.However, as we have shown in §3, the burst timescale depends on the disk mass and the magnetic flux threading the BH and does not necessarily require a low-density WD to prolong the accretion timescale.In fact, we find that after t ∼ 100 ms, the mass accretion rate follows a single power-law profile, indicating that there is no accretion timescale relevant to cb-GRBs.Additionally, proton-rich matter accreted from the disrupted WD is unlikely to reach high enough densities to produce neutron-rich outflows capable of generating any significant r-process material, much less the relatively heavy lanthanides (Metzger 2012;see Fernández et al. 2019a for simulations of the post-merger disk evolution and nucleosynthesis).The NS-WD merger scenario thus faces difficulties in explaining the observed kilonova emission (see Barnes & Metzger 2023, and references therein).
Neutrino annihilation
The high accretion rates anticipated in post-merger disks give rise to strong neutrino emission.Efficient annihilation of neutrinos and anti-neutrinos can generate relativistic jets that may power cbGRBs (e.g., Woosley 1993).These jets are expected to operate as long as the accretion rate is Ṁ ≳ 10 −2 M ⊙ (Popham et al. 1999).This requirement implies that massive disks are necessary (e.g., Leng & Giannios 2014) to enable jet launching for T 90 ≲ 1 s.If the initial magnetic field in the disk is predominantly toroidal, then BZ-jet may follow the neutrino-driven jet after t ≳ 1 s (e.g., Christie et al. 2019;Gottlieb et al. 2023b), and power the late EE (Barkov & Pozanenko 2011).This scenario cannot explain lbGRBs, and as we now argue, is also disfavored as the origin of sb-GRBs.
The main limitation of neutrino-driven jets lies in their available energy (Leng & Giannios 2014;Just et al. 2016).In BNS mergers, where a significant amount of ejecta is expected along the polar axis, these low-energy jets would fail to break out and generate a cbGRB (Just et al. 2016).Furthermore, the mass distribution of the Galactic BNS population suggests that most post-merger remnants are HMNSs.The large amount of mass in the HMNS atmosphere ( §4.2) would load neutrino-driven jets with baryons, hindering their ability to achieve relativistic velocities (Dessart et al. 2009).Consequently, such jets would be incapable of producing cbGRBs.
ORIGIN OF THE PRECURSOR FLARE AND EXTENDED EMISSION, AND COMPARISON OF BH-POWERED AND HMNS-POWERED JETS
Figure 3 utilizes the light curves of lbGRB 211211A (black) and sbGRB 930131A (gray) to illustrate the connection between the underlying physics of the compact object (orange labels) and the various phases observed in the cb-GRB light curve (yellow for sbGRBs, blue for lbGRBs, and green for preceding and succeeding phases).A sbGRB can be powered by either a BH with a light accretion disk or a long-lived HMNS before it eventually collapses into a BH.A lbGRB is fueled by a BH surrounded by a massive accretion disk, as the dimensionless magnetic flux threading the BH steadily accumulates.The origin of the precursor flare and the EE are discussed below.
Up to this point, we have presented both HMNS-powered and BH-powered jets as potential contributors to sbGRBs.However, there is no evidence indicating the existence of two distinct sub-populations among sbGRBs, suggesting that only one of these engines is responsible for producing the majority of sbGRBs.Table 1 summarizes the origin of sb-GRBs and lbGRBs, as well as the outcomes of the different types of mergers, as predicted in both scenarios.We denote the scenario in which HMNSs power sbGRBs and BHs power lbGRBs by the "hybrid" scenario.The scenario in which all cbGRBs are powered by BHs, with the GRB duration increasing with the disk mass, is denoted by "all-BH" scenario.Both scenarios predict the formation of a lb-GRBs when the BH is surrounded by a massive disk.When a less massive disk is present (in nearly-equal mass ratio BNS mergers with M tot ≳ 2.8 M ⊙ , or in BH-NS mergers with either high q or low a), the all-BH scenario predicts a sbGRB signal, whereas the hybrid scenario predicts a lbGRB signal.When M tot ≲ 2.8 M ⊙ , the cbGRB duration in the all-BH scenario depends on the uncertain post-HMNS collapse disk mass (see §4.3).
In the all-BH scenario, the cbGRB duration spans a continuous spectrum, whereas, in the hybrid scenario, the BHpowered lbGRBs comprise a separate class.Therefore, the hybrid scenario offers a natural distinction between sbGRBs Event type Scenario: Table 1.Summary of the mapping between the Hybrid and All-BH scenarios and associated cbGRB classes.
powered by HMNSs and lbGRBs powered by BHs.Furthermore, the hybrid scenario finds support from the bimodal cbGRB duration distribution, the mass distribution of BNS systems, as well as from observations and simulations of GW170817.In the following subsections, we show that the hybrid scenario is also more compatible than the all-BH scenario with all phases of the cbGRB light curve.
Precursor flare
Each of the proposed hybrid and all-BH scenarios postulates a different physical origin for the precursor flare before the rise of the main burst.In the hybrid scenario, Most & Quataert (2023) demonstrated how the differentially rotating HMNS builds loops with footpoints at different latitudes on its surface.The resultant twist in the loop causes it to become unstable, inflate and buoyantly rise, forming a bubble that is entirely detached from the HMNS surface, and erupting after reconnecting (e.g., Carrasco et al. 2019;Mahlmann et al. 2023;Most & Quataert 2023).This behavior powers quasi-periodic flares prior to the jet formation.
For BH-powered jets, Gottlieb et al. (2023b) showed that if the seed magnetic field in the disk is toroidal, as expected in binary systems, then the stochastic accumulation of incoherent magnetic loops on the horizon can lead to a short burst of energy (see model T s in their figure 1(d)), which may constitute the precursor flare.As more flux reaches the BH, the stochastic field cancels out by virtue of contribution of loops of different polarity.Consequently, the total flux drops to zero, before starting to build a large-scale poloidal field through the dynamo process and power the cbGRB emission.Due to the stochastic nature of the accumulated flux, the flare energy is expected to be very weak, and the resultant outflow may not be able to punch through the optically thick disk wind and/or dynamical ejecta (Gottlieb et al. 2023b).Therefore, the emergence of such precursor flares in the all-BH scenario may require fine-tuning.Nevertheless, it is possible that the precursor in the all-BH scenario is also powered by a short-lived HMNS before it collapses into a BH on a ∼ 10 ms timescale.
Main cbGRB burst
Dimensional analysis suggests that Φ ∼ √ M d , thus M d ∼ Φ 2 ∼ P 2 j , while the dimensionless magnetic flux ϕ is independent of M d .This is also supported by the fact that the saturation level of the amplified ordered field in the disk seems to scale with the turbulent disk pressure, which in turn likely scales with Ṁ.This implies that reducing the disk mass results in a lower jet power, rather than shortening the cbGRB duration, which scales with the dimensionless magnetic flux (see §3).Therefore, unless there is an intrinsic correlation be-tween M d and ϕ, the variation in M d does not naturally yield the variation in the cbGRB duration.This favors BHs with less massive disks to power weaker lbGRBs, and sbGRBs as a distinct cbGRB population, which emerges from HMNSs.
Extended emission
Following the main hard burst, the softer EE phase commences.In both hybrid and all-BH scenarios, an accretion disk forms and is present at the time of the EE.Once the disk enters the MAD state, the jet power evolves in accordance with the mass accretion rate, P j ∼ t −2 , similar to the observed temporal evolution of the EE decay.The preceding flat EE hump is thus generated by the constant power jet, just before the disk transitions to a MAD state.The EE may end once the disk is overheated after ∼ 100 s, and evaporates on this timescale (Lu & Quataert 2023).This evolution of a constant jet power followed by a t −2 decay for another order-of-magnitude in time naturally results in a comparable energy content between the cbGRB prompt emission and the EE, as suggested by observations (Kaneko et al. 2015).
Any cbGRB model must account for two observational constraints related to EE: (i) the EE is observed in ∼ 25% − 75% of cbGRBs and is commonly found in lbGRBs (Norris et al. 2011;Kaneko et al. 2015;Kisaka et al. 2017).Considering that all disks eventually transition to a MAD state where P j ∼ t −2 , this correlation might be attributed to an observational bias if lbGRBs exhibit brighter EE. (ii) The EE likely emerges ∼ 10 s after the onset of the prompt emission.This implies that if the EE follows a sbGRB where T 90 ≪ 10 s, there must be a quiescent period between the prompt and the EE phases (e.g., Perley et al. 2009).
The all-BH scenario, which posits that both cbGRB types are powered by BHs, encounters difficulties in explaining either of the constraints mentioned above: (i) If sbGRBs are powered by BHs, then the cbGRB duration would be determined by the disk mass, while the jet power would depend on the magnetic flux.Consequently, there would be no obvious correlation between the cbGRB duration and jet power, and thus with the EE power.Therefore, the all-BH scenario does not explain the observed correlation between lbGRBs and the EE.(ii) As described in §3, BHs launch jets with a constant power followed immediately by the EE decay once the disk transitions to a MAD state.Therefore, no quiescent times would be expected to emerge between the prompt emission and the EE phase.
If sbGRBs are powered by HMNSs, both observational constraints can be accounted for, provided that at the time of BH formation the disk mass is M d ≲ 10 −2 M ⊙ (see §4.3).In this scenario, HMNSs power the sbGRB while the postcollapse BZ-jet generates the EE hump followed by the EE decay.(i) Low-mass disks contain a reduced energy reservoir available for the post-HMNS collapse BZ-jet.As a result, a significant fraction of sbGRB-associated EE may fall below the detection threshold, increasing the likelihood of detecting an EE associated with lbGRBs.(ii) Following the HMNS collapse and before the launch of the BZ-jet, a quiescent time between the prompt emission and the EE emerges.
CONCLUSIONS
The discoveries of ∼ 10-s long prompt emission in lbGRBs 211211A (Rastinejad et al. 2022) and 230307A (Levan et al. 2023a), followed by softer EE signals, suggest that the cb-GRB population can be divided into two classes: sbGRBs (T 90 ≲ 1 s) and lbGRBs (T 50 ∼ 10 s).However, the underlying physics that differentiates these classes and the origin of the prolonged EE are poorly understood.Moreover, drawing inferences about the astrophysical properties of binary mergers from cbGRB observables poses a formidable challenge.In this paper, we have developed a novel theoretical framework that connects different binary merger types to the distinct sub-populations of cbGRB and to the different components in their light curves.
In collapsars, the presence of a dense stellar core surrounding the BH hinders the launching of jets when the accretion disk is not in a MAD state.This implies that for lGRBs, the jet operates in a MAD state at all times, and the characteristic lGRB duration can be set by either the mass accretion rate or by the BH spin-down timescale.By contrast, in binary systems where the environment is less dense, the conditions allow for the launching of the jet before the disk enters the MAD state.Due to the compactness of the disk, the dimensional magnetic flux, Φ, quickly accumulates on the BH, resulting in a roughly constant jet power before the transition to MAD occurs.After the accretion disk enters the MAD state, the jet power follows the mass accretion rate of P j ∼ Ṁ ∼ t −2 , signaling the end of the prompt emission phase.This behavior is consistently observed in all first-principles simulations and should be considered when modeling cbGRB jets.In this jet power evolution model, there are two free parameters: (i) the time of the transition to a MAD state, T MAD , which determines the cbGRB duration and is influenced by ϕ; (ii) the magnitude of the constant jet power, which is governed by Φ.
BNS mergers with an unequal binary mass ratio and BH-NS mergers with a moderate mass ratio and high pre-merger BH spin can lead to the formation of a massive accretion disk with a mass of M d ≳ 0.1 M ⊙ , as was inferred from GW170817 observations (e.g., Perego et al. 2017;Siegel & Metzger 2017).Depending on Φ (as illustrated in Fig. 1), such a massive disk can give rise to either extremely bright sbGRB, or lbGRB.Analyzing the sbGRB and lbGRB observational data, we conclude that these massive disks inevitably power long-duration signals, and thus are most likely the progenitors of lbGRBs such as GRB 211211A and GRB 230307A.
The nature of the resultant central engine is determined by the total mass of the binary system.In the case of a BH-NS merger, or a BNS merger with M tot ≳ 2.8 M ⊙ , the immediate merger product is a BH (e.g., Bauswein et al. 2013).As mentioned, the mergers of BNS with a high mass ratio, and of BH-NS with a moderate mass ratio and a high premerger BH spin, form a massive accretion disk and therefore are the progenitors of lbGRBs.In other mergers, the resultant BH disk is less massive, and if Φ is weakly dependent on M d , a sbGRB jet can be generated.Regardless of the disk mass, the disk ultimately becomes MAD.Before the transition is complete, the constant jet power may give rise to the EE hump.When the MAD state commences, the jet power follows P j ∼ t −2 and thus aligns with the observed temporal evolution of the EE decay (Giblin et al. 2002;Lien et al. 2016).While this interpretation of cbGRBs powered by BHs provides an explanation for sbGRBs and lbGRBs, it faces challenges in explaining various observational features in cbGRB light curves, including flares observed before the prompt emission, the correlation between the EE and lbGRBs, and the quiescent time observed between the prompt emission and the EE.Most importantly, the Galactic BNS population suggests that most binary systems have M tot ≲ 2.75 M ⊙ (e.g., Özel et al. 2012;Kiziltan et al. 2013), where a prompt collapse into a BH is not anticipated.
In BNS mergers with M tot ≲ 2.8 M ⊙ , the product of the merger is a HMNS (e.g., Margalit & Metzger 2019).Both analytic and numerical studies demonstrated that HMNSs are capable of generating relativistic jets that power cb-GRBs (e.g., Metzger et al. 2008b;Kiuchi et al. 2023).The best-studied event in this mass range is the multi-messenger GW170817 with M tot ≈ 2.75 M ⊙ .The associated kilonova signal observed in GW170817 supports the formation of a long-lived ( T HMNS ≲ 1 s) HMNS (Metzger et al. 2018;Radice et al. 2018b).This timescale is sufficiently long to power sbGRBs.Unlike BHs, HMNSs can naturally produce precursor flares (Most & Quataert 2023), and account for the quiescent time between the prompt and the EE by virtue of the transition from HMNS-powered to BH-powered jets.It can also explain why sbGRBs are infrequently followed by EE (Norris et al. 2011).The reason is that if the accretion disk has lost mass through disk winds such that at the time of BH formation it is less massive, then the BZ-jets that form after the HMNS collapse are likely weaker compared to lb-GRBs powered by prompt-collapse BHs.Therefore, the EE in sbGRB events is fainter, such that EE are commonly observed following lbGRBs.
Various constraints, from kilonova observations to radio constraints on late-time rotational energy injection, favor prompt-collapse BH-powered jets and HMNS-powered jets over models that include long-lived magnetars, WDs, or neutrino-driven jets.While we thus find it likely that BHs with massive disks are responsible for lbGRBs, we are less certain about the origin of the shorter sbGRB population.A priori, both BH-powered jets (BH-NS mergers or BNS mergers with M tot ≳ 2.8 M ⊙ and q ≲ 1.2) and HMNSpowered jets ( M tot ≲ 2.8 M ⊙ ) remain viable possibilities (Fig. 2).However, the lack of evidence for two distinct sub-classes among the sbGRB population, suggests that one of these channels dominates.We find several reasons to prefer transient HMNSs over low-disk mass BHs in this case.
A key distinction between the all-BH and hybrid scenarios lies in the cbGRB duration distribution.BH-powered jets should exhibit a continuous spectrum from sbGRBs to lbGRBs, scaling with the binary mass ratio.Conversely, if HMNSs are the progenitors of sbGRBs, they differ intrinsically from BH-powered lbGRBs, proposing two distinct cb-GRB classes.The recent joint detections of cbGRBs with kilonovae provide an exciting opportunity to assemble a sizable sample of confirmed cbGRB events.Analyzing this collection could shed light on whether kilonova-associated sbGRBs and lbGRBs form a continuous spectrum or represent distinct classes.This, in turn, may enable us to deduce whether HMNSs, BHs, or both, serve as the primary progenitors of sbGRBs.
Figure 1 .
Figure1.The jet power evolution of post-merger accretion disks for varying levels of magnetic flux ranging from non-MAD to MAD.Gray lines show the post-merger mass accretion rate evolution (right vertical axis) obtained for 4 BH-NS merger simulations(Gottlieb et al. 2023b) and the 5 BNS merger simulations presented here, all of which generate massive disks M d ≈ 0.1 M ⊙ .The purple line delineates the logarithmic average of these mass accretion rates, which constitutes the maximum jet power assuming η a = 1 corresponding to a BH spin a ≈ 0.87 (left vertical axis).Turquoise lines illustrate schematically the jet power evolution for different assumptions about the dimensional magnetic flux threading the BH, Φ, and the corresponding total jet energy, E j .Since the magnetic flux on the BH is likely accumulated early and hence remains nearly constant before the disk transitions to MAD, the jet power, P j , is also predicted to be roughly constant at these times.However, once the dimensionless magnetic flux saturates in the MAD state, the jet power saturates at P j = Ṁc 2 and thus follows the mass-accretion rate Ṁ ∝ t −2 thereafter (we have extrapolated P j by a dashed line to later times).The yellow (blue) region outlines the estimated average jet power and duration T 90 (T 50 ) of the sbGRB (lbGRB) population based on prompt emission and afterglow observations (see text).While the jets from such massive disks are either too powerful, or operate for too long, compared to the sbGRB population, BH accretion from such massive disks nicely matches the observed properties of lbGRBs.
Figure 3 .
Figure3.An illustration of how the underlying physics of the merger product (orange) in the hybrid and all-BH scenarios (red) translates into different phases in the cbGRB light curves: sbGRB (yellow), lbGRBs (blue) and preceding and succeeding phases (green).Representations of the light curves of the lbGRB 211211A(Rastinejad et al. 2022) and sbGRB 930131A(Kouveliotou et al. 1994) are shown in black and gray, respectively, in a log-log scale. | 14,435.4 | 2023-08-31T00:00:00.000 | [
"Physics"
] |
Development of an informative web application for the promotion of ecotourism: A case study
INTRODUCTION: En a global context where technology is essential for the search of information and decision-making by tourists, the creation of digital platforms is a key strategy to promote sustainable tourism practices. OBJECTIVES: We aim to develop an informative web application aimed at promoting ecotourism in Lake Cuipari, using it as a case study. METHODS: The software development followed the phases established in the Agile methodology Extreme Programming: i) Exploration, ii) Planning, iii) Interactions, iv) Production, and v) Maintenance. To ensure the quality of the software product, we applied black-box testing. RESULTS: We successfully developed a functional informational web application with two panels, administrative and visitor. The web application allows users to learn about the location of Lake Cuipari, explains access conditions, and provides information about native species in and around the lake. These species are categorized into birds, amphibians, and fish, with academic, scientific, and tourist interest. CONCLUSION: The informational web application serves as a digital platform that enables the municipality and the local community to promote ecotourism in Lake Cuipari for the sake of its preservation and sustainability. This is achieved through the increased provision of information for potential tourists.
Introduction
The COVID-19 pandemic has caused a global crisis that has affected human activities.One of the most impacted sectors is tourism, as it faces the challenge of creating conditions that ensure the health of tourists and provide them with the necessary confidence for a gradual return to activity [1].
To reactivate tourism, it is necessary to conceive innovative ideas and business plans that can meet current needs [2].Among these are strategies related to ecotourism, also known as landscape tourism or ecological tourism, which focuses on offering tourist activities in natural environments while preserving the environment.This approach becomes relevant in a post-pandemic context, as tourists show a growing interest in participating in outdoor activities [3].
According to the World Tourism Organization, Peru is a globally iconic destination due to its scenic, cultural, and gastronomic attractions, being considered a potential sector for economic development.According to the Ministry of Foreign Trade and Tourism, between January and June 2023, 1.1 million international tourists entered the country, surpassing the 200,000 tourists per month threshold [4].
However, in the province of Alto Amazonas, Loreto region, despite the existence of potential natural, cultural, and historical tourist destinations such as Lake Cuipari, the Apangurayacu community, the petroglyphs of Kumpanama, among others; very few are promoted by local authorities and communities [5], leading to a missed opportunity for ecotourism development in this area of the Peruvian Amazon.
As such, this research focuses on the low promotion of ecotourism in Lake Cuipari, located in the district of Teniente Cesar López Rojas, 45 km from the city of Yurimaguas, the capital of the Alto Amazonas province.This is reflected in the low influx of visitors to the lake despite having tourist attractions sought after by national and international tourists [6], such as wildlife observation (birds, amphibians, etc.), canoeing, camping, artisanal fishing, among other activities.
We identified that the low promotion of ecotourism in Lake Cuipari is mainly due to a lack of familiarity with digital dissemination tools.Both the authorities and the residents of the local community have lagged behind in the use of technologies such as web platforms, mobile applications, or social media, limiting the visibility and reach of the tourist offerings [7,8].
The low promotion of this natural resource leads to the loss of economic opportunities for the community and region, limits the generation of local employment, contributes to a lack of environmental awareness among visitors, and results in the absence of incentives for its conservation [9], posing a threat to the sustainability and preservation of the lake.
The purpose of this study is to develop an informative web application aimed at promoting ecotourism in Lake Cuipari as a case study.This tool aims to facilitate its use by both authorities and the local community, enabling the effective promotion of ecotourism through digital channels targeted at potential national and international visitors.
Methodology
We applied the agile software development methodology Extreme Programming (XP), given its focus on flexibility and collaboration.XP allows continuous adaptation as new needs are identified and feedback is collected for the development of web applications [10].Moreover, it encourages constant communication and collaboration among the members of the development team, which is essential for success in a project involving multiple stakeholders [11].In this regard, we carried out the following phases:
Exploration
It involved identifying the project requirements based on the needs of the stakeholders, including the authorities of the District Municipality of Teniente César López Rojas, the community surrounding Lake Cuipari, and the authors of this research.
The work meetings were conducted on-site, successfully concluding the exploration phase with the definition of the purpose of the informative web application: Promote ecotourism in Lake Cuipari.Simultaneously, the web application will serve as a digital dissemination medium to increase the visibility and reach of the tourist activities offered at the lake, attracting both national and international tourists.
Planation
During this phase, we established user stories to understand the functional requirements of the web application.We used technical language that could be understood by the project development team.For each requirement, we assigned a priority of "high," "medium," or "low," allowing flexibility in the production of the web application based on the prioritized needs of stakeholders and testing the system as it is developed.Table 1 shows the registered user stories.On the other hand, in this phase, we also selected the client-server system architecture, which allows users, such as tourists interested in obtaining information about Lake Cuipari, to connect easily through their web browsers, while the server handles processing requests and providing the necessary resources and data.This architecture facilitates scalability, maintenance, and system updates, in addition to enabling faster and more efficient access to information [12,13], enhancing the user experience and contributing to the success of the ecotourism promotion initiative.
Interactions
Interaction is a fundamental process that involves continuous and close communication between developers and end-users of the web application.In XP, active collaboration with users is encouraged throughout the development cycle, including the planning, design, coding, and testing phases.User interaction involves obtaining constant feedback on the functionalities of the web application, ensuring that the needs and expectations of users defined in user stories are being met.This allows for agile adjustments and improvements, ensuring that the final product meets the real needs of users.
This led us to define 12 interactions according to the modules and sub-modules to be developed from the administration panel (back end), which will be reflected in the visitor panel (front end).Additionally, the development of the web application was planned for three months of work (Table 3).After completing the creation of forms and integration with the backend, we conducted black-box testing to verify the programmed functionalities and ensure their correct availability and security.Subsequently, the system was deployed on a server with 1 GB of RAM and 20 GB of disk space, running on the Ubuntu 22.04 operating system with a Nginx version 1.18.0 proxy server.Additionally, we used Passenger 6.0.18 to run the Ruby on Rails services.
Interaction
To access the software, we configured the domain lagocuipari.comfor the main page, while the administrative application is available at apps.lagocuipari.com,both linked through the server's IP address.
Maintenance
The maintenance phase reflects the agile development philosophy, emphasizing adaptability and responsiveness to changes.It is a continuous phase that extends throughout the entire software lifecycle and relies on constant user feedback to ensure that the software effectively meets its objectives.
In this stage, we provide support to administrative users to help them understand the functioning of the web application and address their questions and issues.Additionally, we identify and correct any defects that endusers report in the production web application.Finally, we take into account user feedback to make continuous adjustments and improvements to the web application.
Administrator Panel
Based on the XP methodology, we successfully developed an informative web application to promote ecotourism in Lake Cuipari, considering agile and collaborative practices within the production team.The system includes an administrative panel that allows data and information management through forms, which will be visible on the visitor panel.Figure 1 shows the Login interface of the web application.
Figure 1. Login interface
Figure 2 highlights the developed modules: Security and Maintenance.In the first module, the submodules for Users, Profiles, and Access are programmed.In the second module, the submodules for Species Class, Species, Species Family, Order, Allies, Photo Gallery, Project Information, and Technical Team are programmed.
Visitor Panel
Below, we present the visitor panel based on the programmed modules.In general terms, the web application aims to promote ecotourism in Lake Cuipari by showcasing information about native species and available activities in the area.Figure 5 shows the main interface that a tourist will encounter when accessing the web portal.
Figure 5. Main interface of the visitor panel
Regarding Figure 6, it displays information about the technical team of the project promoting ecotourism in Lake Cuipari.Users will be able to access information about the professionals responsible for managing the portal and connect with them through their contact links.
Figure 6. Technical team interface
One of the highlights of the informative web application is to provide data about the inventoried species in Lake Cuipari and its surrounding areas.By disseminating information about birds, we aim to encourage the possibility for tourists to engage in birdwatching at the lake.Similarly, amphibian species generate interest from both an academic and scientific perspective, in addition to providing tourist attractions (Figure 7).
Figure 7. Interface of inventoried species
Furthermore, the informative web application details the routes to access Lake Cuipari, presents a location map, and provides information about weather conditions and available means of transportation, allowing tourists to plan their visits more effectively (Figure 8).
Figure 8. Interface of Lake Cuipari location
It is crucial to emphasize that the information gathered and presented in the web application is the result of a collaborative and integrated effort with the community surrounding Lake Cuipari.The goal of promoting ecotourism is primarily aimed at contributing to the socioeconomic development of both the locality and the region.
Discussion
Informative web applications are crucial in the current era due to their ability to provide global and instant access to relevant information [14].These tools facilitate informed decision-making by offering updated and accessible data in various sectors, such as tourism, education, and health.Additionally, they play a key role in promotion and marketing, enabling businesses and destinations to showcase products and services attractively [15].
In the contemporary tourism industry, the diversity of services and products available has seen an increase, offering users a wide range of options ranging from travel packages to hotels and tourist attractions.This exponential growth in offerings, while providing travelers with a variety of alternatives, also poses challenges as the abundance of choices leads users into a difficult situation to identify and select exactly what they need.In this scenario, accurate and accessible information is crucial [16].
Therefore, informative web applications, for example, can act as tools by providing users with updated and detailed data on various available options, thus facilitating informed and personalized decision-making.The growing complexity of the tourism landscape emphasizes the importance of solutions that simplify the search and selection of services, ensuring a more satisfying experience tailored to the individual needs of travelers.
Similarly to the research conducted by Fararni et al. [17], Chai-Arayalert [18], and Nishanbaev [19], the implementation of technological solutions, especially disruptive ones, in the tourism sector is key to meeting the expectations and needs of visitors.This study demonstrates that the development of an extensive informative web application expands the means of communication for the ecotourism offerings provided by Lake Cuipari, which are potentially attractive to national and international tourists seeking to promote environmental sustainability and protect natural resources.
Like any research, this current study has limitations, specifically regarding the web application, as it is only available in one language (spanish), necessitating the addition of english and portuguese options.Additionally, there is a need to integrate digital analytics to understand how many users navigate the website, their geographical origins, and other demographic data.This information will help in better promoting ecotourism at Lake Cuipari.Finally, the web application should be promoted by the government and the local community to ensure it can have a positive impact on regional and international tourism promotion.
Conclusions
We managed to implement an informative web application designed to boost ecotourism at Lake Cuipari.The application was developed efficiently thanks to the agile methodology employed (XP), which facilitated a systematic and collaborative process within the production team.
The launch of this application provides the authorities of the municipality of Teniente Cesar López Rojas and the local community with a digital tool for disseminating information about the various ecotourism activities available at the lake.In a digitized era where tourists actively seek information on the internet to make informed decisions, this application stands as an essential and necessary technological resource.
As the web application solidifies its presence online, it is expected to play a crucial role in the long-term preservation and sustainability of Lake Cuipari.By providing access to detailed information about ecotourism activities, it fosters greater environmental awareness and promotes a sustainable approach in visitors' interaction with the environment.This initiative not only addresses current tourists' demands but also significantly contributes to the joint effort to preserve the natural beauty and biodiversity of Lake Cuipari.
Figure 3 .
Figure 3. Example from the Species submodule Figure 4 also showcases the form for the Project Team submodule of the informative web application.In this case,
Figure 4 .
Figure 4. Example from the Project Team submodule this stage, we defined the roles of the project development team, which we describe in Table2.
We developed the software in a Ruby on Rails 7.0 working environment, serving as the backend connected to a MySQL database version 8.0.This framework adopts the Model-View-Controller (MVC) design paradigm, where the model handles interaction with the database tables (model = table), and the controller manages data and maintenance through CRUD operations.Additionally, we used Vue.js for the frontend, where forms were created, integrated with the backend through an internal web service interface. | 3,288.6 | 2023-11-28T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Acoustic-feedback wavefront-adapted photoacoustic microscopy
Optical microscopy is indispensable to biomedical research and clinical investigations. As all molecules absorb light, optical-resolution photoacoustic microscopy (PAM) is an important tool to image molecules at high resolution without labeling. However, due to tissue-induced optical aberration
INTRODUCTION
Optical microscopy is crucial to many science and engineering fields.As all molecules absorb light, photoacoustic (PA) microscopy (PAM) is an important technique that employs non-ionizing photons and low-scattering ultrasound to image molecules.By offering universal optical absorption contrast, deep penetration, and label-free capability, PAM is suitable for a wide range of biomedical applications, such as functional imaging of blood oxygenation, tracking of circuiting tumor cells, lipid imaging, and label-free histology [1][2][3][4][5][6].PAM involves both optical excitation and acoustic detection, thereby providing a great diversity of embodiments.When biological tissue is probed with a pulsed and focused light beam, acoustic waves are generated due to absorption-induced heat generation, thus revealing cellular or even sub-cellular structures with high spatial resolution determined by the optical diffraction limit.This modality, termed optical-resolution photoacoustic microscopy (OR-PAM), was first developed in 2008 to image microvasculatures in mice in vivo [7].However, biological tissue is inherently heterogeneous, which distorts the optical wavefront and causes optical aberration as light propagates through.Since the performance of OR-PAM strongly relies on the quality of the excitation focus, it inevitably suffers from degradations in both PA amplitude and spatial resolution while imaging at depths.
OR-PAM relies on optical focusing to achieve high lateral resolution and acoustic time-of-arrival to achieve high axial resolution; it produces a one-dimensional (1D) depth-resolved image (an Aline) per laser pulse.Therefore, the quality of the excitation optical focus is crucially important to the performance of OR-PAM.The first attempt to introduce AO into OR-PAM was in 2010, which aims to correct system-generated aberration [43].By employing an SH sensor to directly measure the backscattered light from a white paper and a deformable mirror (DM) to compensate for the system-generated aberration, significant improvements in both signal strength and lateral resolution were demonstrated [43].In 2022, AO was employed to compensate for the spherical aberration caused by the mismatch of refractive indices between water and a target sample, by using a three-layer liquid crystal device optimized for correcting this specific aberration mode [44].Despite these efforts, however, using AO to correct tissue-induced aberration in OR-PAM has not been demonstrated.This situation may be due to the lack of appropriate guide stars in PAM.On one hand, back-scattered excitation light does not function well, as the back-reflected light may come from different regions from the focal volume.On the other hand, the widely used fluorescent signals do not naturally exist in PAM.To fill this void, we develop acousticfeedback wavefront-adapted PAM (AWA-PAM), which employs acoustic feedback to correct for tissue-induced optical aberration in OR-PAM.In contrast to other AO assisted three-dimensional (3D) optical microscopy techniques that generally abandon the correction for defocus to avoid axial shifting [45], AWA-PAM delightfully takes the correction of defocus into consideration.This choice is because the depth information of PAM is uniquely determined by the time of arrival of the acoustic waves.In this condition, optimizing the depth of the focal plane for each A-line can considerably benefit the imaging of small features at different depths across the field of view.To demonstrate the feasibility of AWA-PAM, we built a microscope system by integrating a liquid crystal based spatial light modulator and an OR-PAM system.The tissue-induced aberration was dynamically compensated point-bypoint by optimizing the phase map displayed on the SLM with a greedy algorithm.We will show in the next section that AWA-PAM effectively corrects for tissue-induced aberration when imaging in vivo zebrafish embryos and mouse ears and significantly improves the image quality, revealing microstructures that are indiscernible with conventional PAM.
A. Principle of AWA-PAM
We start by describing the operational principle of AWA-APM, which is schematically shown in Fig. 1.In conventional PAM, as shown in Fig. 1(a), pulsed light is focused into the biological tissue to locally induce ultrasonic waves, which are subsequently detected by a focused ultrasonic transducer through confocal geometry.The peak-to-valley value of the measured ultrasonic wave, defined as the PA amplitude, is generally used to reflect the local absorption of the tissue within the optical focus.In this condition, the full width at half maximum (FWHM) size of the optical focus determines the lateral resolution.In practice, however, optical aberration aggregates as the excitation light propagates due to the heterogeneity of biological tissue, which broadens the focus and deteriorates the resolution.Moreover, as light is not tightly focused and energy is not highly concentrated, the measured PA amplitude also decreases for small features.As a result, tissue-induced aberration deteriorates both lateral resolution and signal strength.
Inspired by the effectiveness of adaptive optics in fighting against tissue-induced optical aberration in optical microscopy, we developed AWA-PAM to effectively mitigate this issue, as shown in Fig. 1(b).The key enabling point is to modulate the excitation light by using an SLM before it enters the tissue.One would expect that tissue-induced aberration can be compensated by modulating the incident wavefront, resulting in a sharp focus at depths.Due to the special detecting scheme with ultrasound, AWA-PAM adopts an indirect wavefront sensing approach by employing acoustic feedback to estimate the desired phase map through a greedy optimization algorithm.To begin with, the SLM displays a planar phase map such that AWA-PAM effectively functions as conventional PAM.Then, a series of ordered Zernike polynomials are loaded to the SLM in order.These Zernike polynomials are orthogonal to each other over circular pupils [46], and each one effectively represents one type of optical aberration.A feedback loop is established among the SLM, the ultrasonic transducer, and the computer.The corresponding coefficients for each order of the Zernike polynomials are traversed and determined in an ergodic manner [47,48].This feedback loop guarantees continuous enhancement in PA amplitudes after correcting aberrations for each order, leading to the formation of a high-quality focus eventually.The AWA-PAM is relatively simple and does not require direct optical wavefront sensing to determine tissue-induced aberration, which is particularly suitable for PAM with ultrasonic detection.This process is essentially similar to that of feedback-based wavefront shaping with acoustic feedback [49][50][51][52].However, the scattering process inside biological tissue is almost random, which requires a large number of modes to be tested to generate a focus with acceptable quality.Thus, this inefficient method makes it almost impossible to implement for in vivo studies.In comparison, optical aberrations are known to be well-represented by the Zernike polynomials.In this condition, the feedback process is quite efficient, as one needs to consider only a few low-order Zernike polynomials to account for optical aberration.Therefore, point-by-point compensation for spatially inhomogeneous optical aberration can be realized even for live tissue.Furthermore, some of the previous works allow only focusing light to indefinite locations and are not suitable for 3D imaging, unlike AWA-PAM.
B. Compensating for System Aberration
We first compensate for the system aberration by imaging a carbon fiber.The experimental setup of AWA-PAM is schematically shown in Fig. 2(a) with a detailed description in Appendix A. To guarantee sufficient accuracy under static conditions, the step size for traversing the coefficients for a given Zernike mode was set to 0.2 π rads, and the range was from −10 π to +10 π rads.Phase wrapping was performed.Using PA amplitudes as feedback, Fig. 2(b) shows the signal enhancement contributed from each order.Among them, the 1st order (i.e., piston) has no effect, while the 2nd and 3rd orders that correspond to tip and tilt are the two largest contributors.These two corrections mainly account for the misalignment of the SLM.Besides, the correction for the 4th order that corresponds to defocus helps, possibly due to correction for the curvature of the SLM, the use of a lens pair and depth-induced aberration.Corrections for the Zernike polynomials from the 5th order to the 8th order also provide certain contributions.In contrast, the Zernike polynomials after the 10th order have negligible effects, indicating that high-order aberration does not exist in the current system.Overall, compensating for system aberration effectively enhances PA amplitudes by about three times.The image of the focus shown in Fig. 2(c) confirms that, in free space, the lateral resolution can be brought back to the theoretically predicted ∼3 µm after correcting for the system aberration.Since systemgenerated aberration is static, a fixed phase map corresponding to correcting the system aberration is always used as the initial phase map before subsequent wavefront optimization to correct for sample-induced aberration.
C. Imaging Spinal Cords of Zebrafish Embryos In Vivo
The feasibility of AWA-PAM to compensate for tissue-induced aberration was demonstrated through imaging spinal cords of zebrafish embryos in vivo.The zebrafish used for imaging is two days after fertilization.After anesthetizing the zebrafish, we first performed whole-body imaging through conventional PAM, which is shown in Fig. 3(a).Then, we proceeded to execute AWA-PAM to demonstrate its superiority in revealing structures that were hindered by optical aberrations.Given that imaging the whole fish is time-consuming, wavefront correction was performed on a smaller region that mainly contains the spinal cord, which is denoted by the white dashed square in Fig. 3(a).To avoid causing artifacts due to beam drifting, we restrict the orders of the Zernike polynomials from 4 to 10 (i.e., excluding tip, tilt, and piston) during in vivo experiments.The step size of the coefficient for traversing each Zernike order was set to π rads, and the optimization range was from −4π to +4π rads.The isoplanatic patch size was measured to be 15 × 15 µm 2 (with the quantification procedures described in Supplement 1, Note 1) and we used the same corrective wavefront for the measurements taken within the isoplanatic patch.Higher Zernike orders, smaller step sizes, finer isoplanatic patches, and larger optimization ranges can be targeted, which is at the cost of increased imaging time and will be discussed later.Figures 3(b) and 3(c) show the obtained 3D image and its corresponding 2D maximum amplitude projection (MAP) image of the spinal cords of the zebrafish embryo by employing AWA, respectively.As a comparison, the obtained 3D image and 2D MAP image for the same region without using AWA correction are shown in Figs.3(d) and 3(e), respectively.It is clear that AWA-PAM provides richer structural information than that obtained without AWA correction.Notably, AWA correction includes the adjustment of the 4th order of the Zernike polynomials that represents defocusing, allowing spinal cords at different depths to be identified.Such an implementation presents a key difference between AWA-PAM and other AO-assisted 3D optical microscopy.For example, the spinal cord enclosed within the white dashed box in Fig. 3(c) could hardly be seen without AWA correction [Fig.3(e)] but becomes visible with the assistance of AWA correction.The depth information for the 2D MAP image acquired with AWA correction [Fig.3(c)] was extracted based on the time-of-flight information of the ultrasonic wave and provided in Fig. 3(f ).This image shows that different spinal cords are indeed located at different depths.Detailed information on the optimization performance and phase maps at different locations can be found in Supplement 1, Note 2.
D. Imaging Microvascular Structures of Mouse Ear In Vivo
The effectiveness of AWA-PAM was also demonstrated by imaging microvascular structures of mouse ears in vivo.The mouse was anesthetized by isoflurane during the entire experiment, and the ear to be imaged was in a natural state, i.e., without being pressed or flattened, which can otherwise restrain blood flow.The same setting used for imaging zebrafish embryos was used here.Figures 4(a These observations demonstrate the superior performance of AWA-PAM over conventional PAM in revealing microstructures that are wiped out by optical aberrations.Detailed information on the optimization performance and phase maps at different locations can be found in Supplement 1, Note 3.
DISCUSSION
In principle, AWA-PAM relies on the assumption that the tightest focus produces the strongest PA amplitude.Unlike two-photon fluorescent microscopy, which employs nonlinearity, PA is generally considered a linear process.Nonetheless, when imaging capillaries in practice, Supplement 1, Note 4 suggests that the lateral resolution degrades to tens of microns at the depths of a few hundred microns.Since the lateral resolution without AWA correction is larger than the diameters of many capillaries, AWA-PAM can increase the PA amplitude and achieve a tighter focus, thus revealing more microscopic features.Moreover, even for very big vessels shown in Fig. 4, we found a roughly 20%-30% signal enhancement by using AWA correction (detailed in Supplement 1, Note 5).This observation is likely because a tighter focus produces more high-frequency components, which leads to larger signal strengths in OR-PAM where high-frequency ultrasonic transducers are generally employed [53][54][55].It is also worth mentioning that the optical focusing depth was adjusted in AWA-PAM, which is very different from the procedures in conventional AO-assisted 3D microscopy.Such an operation takes advantage of the acoustic detection in PAM and the sparsity of the sample along the depth direction, which is generally true for many biological tissues including vasculatures in mouse ears.In this context, the imaging depth range of AWA-PAM can reach acoustic depth of focus, which is much larger than the optical depth of focus.In short, since PAM detects ultrasound instead of light, using acoustic signals rather than optical signals as a feedback metric is a natural choice.Moreover, the generation of acoustic signals is a rather complicated process that involves light, heat, and sound; the validity of AWA-PAM also relies on the fact that the distribution of the acoustic property of the sample is much more uniform than that of the optical property.
The imaging speed of AWA-PAM can be further improved in future studies.In the current practice, seven measurements were made to determine the optimum coefficients for each Zernike order.With certain prior knowledge of the extent of aberration and a sufficiently high signal-to-noise ratio, 3N or even 2N + 1 measurement could be enough to determine the coefficients of N orders [41,45].Such a decrease in the number of measurements, albeit at the cost of determination accuracy, can effectively improve the imaging speed.Second, for the current imaging system, the bottleneck of the system speed is the low refreshing rate of the SLM (60 Hz).Consequently, optimizing the coefficients of the Zernike polynomials (locating the inflection point) leads to an averaged A-line scanning rate of 1 Hz.Thus, taking the imaging process of mouse ears as an example, acquiring a two-dimensional image of 1 mm 2 takes about 37 min.Although this duration is much longer than that consumed by conventional PAM, AWA-PAM can be further sped up by using micro-electro-mechanical-system-based SLMs that operate up to hundreds of kHz [56][57][58][59], allowing twodimensional images of 1 mm 2 to be formed within ten seconds.Imaging speed can be further improved by choosing an adaptive scanning strategy.For example, by tracking the vessels' profile [60], one can perform fine compensations for optical aberration only in those regions that are of particular interest.Given that more than 60% of the area is informationless, the imaging time could be further reduced with this strategy.To demonstrate the principle, the current system was built in transmission mode.A reflection-mode system will be built in the future to adapt AWA-PAM for various applications [61][62][63].
In this study, AWA-PAM was demonstrated using an imaging system with a numerical aperture (NA) of 0.1 (which is the most widely used value), leading to a 3 µm resolution under the aberration-free condition.The principle of AWA-PAM works for different NAs, and the adaptive correction should be more critical for larger NAs.While it is difficult to characterize resolution enhancement for in vivo studies, this enhancement may be quantified through the employment of tissue-mimicking phantoms with controlled thickness.As detailed in Supplement 1, Note 4, experimental results show that AWA-PAM effectively improves the lateral resolution at all depths tested.In particular, it achieves nearly two-fold improvement in lateral resolution in comparison to conventional PAM at a depth range of 150-200 µm.Nonetheless, since scattering phantoms are mesoscopic homogeneous and do not induce strong optical aberration, we consider the obtained values may be conservative compared to the ones achieved during in vivo experiments.Moreover, as illustrated in Supplement 1, Note 4, AWA-PAM exclusively operates within the quasi-ballistic regime, unable to address optical scattering.Beyond imaging depths of 1 mm in soft tissue, AWA-PAM diminishes its efficacy, and the resolution eventually becomes acoustically determined.
In conclusion, we have developed AWA-PAM to compensate for tissue-induced aberration dynamically.Endogenous optical absorptive objects are employed as internal guide stars to provide acoustic feedback in order to characterize spatially inhomogeneous optical aberration.The feasibility of AWA-PAM was experimentally demonstrated on both zebrafish embryos and mouse ears in vivo.Considerable improvements in both lateral resolution and signal strength were observed, showing that AO effectively helps PAM to generate a more intensified and localized focus at depths.We envision that the demonstrated approach will serve as a useful tool for label-free optical imaging in the quasi-ballistic regime.
Experimental Setup
A pulsed laser (Changchun New Industries, MPL-H-532-30 µJ) operating at 532 nm was used as a light source.A halfwave plate (Thorlabs, WPH05ME-532) and a polarizing beam splitter (Thorlabs, PBS532) controlled the power being dumped into the system and set the output light to be horizontally polarized.The laser has pulse-to-pulse intensity fluctuations up to 10%, which can potentially contaminate the feedback signal.To address this issue, a photodiode (Thorlabs, SM1PA1A) was employed to monitor the intensity of the reflected light from the polarizing beam splitter, and the measured value was used for normalization.Then, the laser beam was expanded through a pair of lenses
Fig. 1 .
Fig. 1.Schematics of the operational principle of AWA-PAM.(a) In conventional PAM, pulsed light is focused into the biological tissue to locally induce ultrasonic waves, which are measured by a focused ultrasonic transducer.The heterogeneity of the biological tissue distorts the wavefront of the focused light, decreasing the signal strength and resolution.(b) In AWA-PAM, a spatial light modulator modulates the wavefront of the pulsed excitation light to compensate for tissue-induced aberration.The phase map displayed by the SLM is the inverse phase of the distorted wavefront, thereby nullifying the distortion and creating a sharp focus at depths.To obtain the desired phase map, a feedback loop between the SLM, the computer, and the ultrasonic transducer is established to optimize the PA amplitude.
) and 4
(b) show the 3D microvascular images of the mouse ear from two different views obtained with AWA correction, and its corresponding 2D MAP image is also shown in Fig.4(c).In comparison, 3D images and corresponding 2D MAP image obtained without AWA correction are illustrated in Figs.4(d)-4(f ).Although big vessels can be identified in both images, the AWA correction effectively helps to identify microvascular structures at depths.For example, by scrutinizing the regions highlighted by the white dashed boxes in Fig.4(c), one could hardly see similar features in the same regions in the image obtained without AWA correction [Fig.4(f)].Moreover, by comparing the MAP images belonging to different categories, we note that signal strengths obtained with AWA correction are generally stronger than those obtained without AWA correction.Furthermore, Figs.4(g) and 4(h) plot the profiles along the two white dashed lines labeled in Figs.4(c) and 4(f ).A direct comparison shows that richer structural information can be observed with AWA correction (the red solid curve).In contrast, the 1D profiles obtained without AWA correction, denoted by the dashed blue curves, are almost informationless.As before, for the 2D MAP image with AWA correction activated, the depth information based on the time-of-flight information of the ultrasonic wave is provided in Fig.4(i).As we can see from the figure, microvascular structures that are hardly seen in Fig.4(f ) are at a depth that is distinctively different from the focal plane (13.1 mm).
Fig. 3 .
Fig. 3.In vivo imaging of zebrafish embryos.(a) Whole-body image of the zebrafish embryo, captured through conventional PAM.The white dashed square denotes the area of interest with spinal cords.Scale bar: 500 µm.(b) 3D image and (c) its 2D MAP image of the area highlighted in (a) obtained with AWA correction.Image sizes: 0.60 × 0.50 × 0.45 mm 3 (x , y , z).Scale bar, 100 µm.(d) 3D image and (e) its 2D MAP image of the area highlighted in (a) obtained without AWA correction.Image sizes: 0.60 × 0.50 × 0.45 mm 3 (x , y , z).Scale bar, 100 µm.(f ) Depth information of the microstructures in the 2D MAP image in (c).Scale bar, 100 µm.
Fig. 4 .
Fig. 4. In vivo imaging of mouse ears in the natural state.(a), (b) Two different views of the 3D microvascular images obtained with AWA correction.Image sizes: 1.00 × 1.00 × 0.40 mm 3 (x , y , z).(c) Corresponding 2D MAP image obtained with AWA correction.Scale bar, 200 µm.(d), (e) Two different views of the 3D microvascular images obtained without AWA correction.Image sizes: 1.00 × 1.00 × 0.40 mm 3 (x , y , z). (f ) Corresponding 2D MAP image obtained without AWA correction.Scale bar, 200 µm.(g), (h) Line profiles of the dashed lines in (c) and (f ) for the microvascular structures obtained with (red) and without (blue) AWA correction.Scale bars, 200 µm.(i) Depth information of the microvascular structures in the 2D MAP image in (c).Scale bar, 200 µm. | 4,832.8 | 2024-01-08T00:00:00.000 | [
"Medicine",
"Engineering",
"Physics"
] |
A high-quality chromosome-level genome assembly reveals genetics for important traits in eggplant
Eggplant (Solanum melongena L.) is an economically important vegetable crop in the Solanaceae family, with extensive diversity among landraces and close relatives. Here, we report a high-quality reference genome for the eggplant inbred line HQ-1315 (S. melongena-HQ) using a combination of Illumina, Nanopore and 10X genomics sequencing technologies and Hi-C technology for genome assembly. The assembled genome has a total size of ~1.17 Gb and 12 chromosomes, with a contig N50 of 5.26 Mb, consisting of 36,582 protein-coding genes. Repetitive sequences comprise 70.09% (811.14 Mb) of the eggplant genome, most of which are long terminal repeat (LTR) retrotransposons (65.80%), followed by long interspersed nuclear elements (LINEs, 1.54%) and DNA transposons (0.85%). The S. melongena-HQ eggplant genome carries a total of 563 accession-specific gene families containing 1009 genes. In total, 73 expanded gene families (892 genes) and 34 contraction gene families (114 genes) were functionally annotated. Comparative analysis of different eggplant genomes identified three types of variations, including single-nucleotide polymorphisms (SNPs), insertions/deletions (indels) and structural variants (SVs). Asymmetric SV accumulation was found in potential regulatory regions of protein-coding genes among the different eggplant genomes. Furthermore, we performed QTL-seq for eggplant fruit length using the S. melongena-HQ reference genome and detected a QTL interval of 71.29–78.26 Mb on chromosome E03. The gene Smechr0301963, which belongs to the SUN gene family, is predicted to be a key candidate gene for eggplant fruit length regulation. Moreover, we anchored a total of 210 linkage markers associated with 71 traits to the eggplant chromosomes and finally obtained 26 QTL hotspots. The eggplant HQ-1315 genome assembly can be accessed at http://eggplant-hq.cn. In conclusion, the eggplant genome presented herein provides a global view of genomic divergence at the whole-genome level and powerful tools for the identification of candidate genes for important traits in eggplant.
Introduction
The large family Solanaceae contains over 3000 plant species that are adapted to a wide range of geographic conditions, including eggplant (Solanum melongena), tomato (S. lycopersicum), potato (S. tuberosum), tobacco (Nicotiana tabacum) and petunia (Petunia inflata). Asian eggplant (S. melongena L.), also known as brinjal or aubergine, is a vegetable crop widely grown across Southeast Asian, African, and Mediterranean countries 1 .
Eggplant is the third most widely grown solanaceous vegetable after potatoes and tomatoes, with a global total production of~54.08 million tons in 2018 (FAOSTAT; http://faostat3.fao.org). Approximately 90% of eggplants are produced in Asia, mainly in China and India, with Indonesia, Turkey, Egypt, the Philippines and Iran grow-ing~1% of the world's total production 1 (Fig. 1).
Unlike tomato and potato, which are both New World representatives of the genus Solanum 2 , eggplant is an Old World crop belonging to subgenus Leptostemonum 3 (the "spiny solanums"). Two other Solanum species, Ethiopian/scarlet eggplant (S. aethiopicum L.) and African/ Gboma eggplant (S. macrocarpon L.), are also called eggplants, and their fruits and leaves are used for food and medicine. There are obvious local preferences for eggplant fruits, which may be either elongated or round, with colors from dark purple to light green. The domestication history of eggplant has been under debate and presumably started in Africa, with radiation to Asia; however, relationships among the African species and their Asian relatives are not well resolved 4 . The two most commonly hypothesized regions of origin are India and southern China/Southeast (SE) Asia, which have equally old written records of eggplant use for~2000 years 4 . Both regions have vastly diverse landraces, close wild relatives and candidate progenitors of eggplant. A recent study proposed that S. insanum is the wild progenitor, which split into an Eastern and Western group, with domesticates derived from the Eastern group 5 . Eggplants exhibit highly diverse variations in growth habits, biotic and abiotic resistance, and fruit and leaf morphology among local landraces and wild relatives. Identification of candidate genes/gene families controlling these differences will provide insight into the genetic mechanisms of agronomically important traits, as well as resources for eggplant breeding.
Genome sequencing is a powerful tool in plant genetics and genomics research. The genome of Arabidopsis thaliana was sequenced and published in 2000, representing the first plant genome. Since then, the development of genome sequencing technologies has resulted in multitude of plant genomes in recent years, including those of many horticultural crops [6][7][8][9][10][11][12][13][14][15] . Traditionally, the majority of research in Solanum crops has focused on potato and tomato, for which genomes have been published 9,10 . The first genome sequence of S. melongena was published in 2014, with 85,446 predicted genes and an N50 of 64 kb 13 . However, this draft assembly is not at the chromosome level and is highly fragmented, containing 33,873 scaffolds and covering only 74% of the eggplant genome. An improved S. melongena genome of the inbred line 67/3 using Illumina sequencing and single-molecule optical mapping was then published 16 . In addition, the genome of the African eggplant S. aethiopicum, a close relative of S. melongena, has also been published 17 . However, these eggplant genomes were all sequenced with next-generation sequencing (NGS) technologies using short reads, whereas genome sequence data derived from third-generation sequencing with long reads are still not publicly available. Here, we report a high-quality chromosome-level eggplant genome using nextgeneration Illumina sequencing and third-generation Nanopore sequencing combined with 10X genomic and Hi-C technologies, with a contig N50 of 5.26 Mb and a scaffold N50 of 89.64 Mb.
Genome sequencing, assembly, and assessment
The genome size of the eggplant inbred line HQ-1315 is 1205.25 Mb, with a heterozygosity rate of 0.15%, as assessed by k-mer analysis based on 93.33 Gb Illumina HiSeq data. The estimated proportion of repeat sequences was~69.60%.
A high-quality eggplant genome (hereafter S. melongena-HQ) was assembled with a genome size of~1.1 Gb and contig N50 of 5.26 Mb. We used a combination of Illumina HiSeq, Nanopore sequencing, and 10X Genomics sequencing technologies to sequence and assemble the eggplant genome; with the assistance of the Hi-C technique, a chromosome-level genome assembly was generated. A total of 114.45 Gb reads were obtained from Illumina HiSeq, including 93.33 Gb data for k-mer analysis and 21.12 Gb of additional read data, the average coverage of which was 94.96×; Nanopore sequencing generated 129 Gb data with 107.03× coverage. These data were used for preliminary assembly, producing a total contig length of 1159. 53 13 . Finally, with the Table 1). Detailed information on the stepwise assembly of the genome is shown in Table S1. The GC content in the eggplant genome is 35.94%, similar to that of Arabidopsis 18 (36.06%), tomato 10 (34.05%) and celery 15 (35.35%) but lower than that of rice 19 (43.57%) and tea plant 20 (42.31%).
The quality of the eggplant genome assembly was further assessed ( Supplementary Fig. S1). The alignment rate of all short reads to the genome was~99.48%, covering 91.24% of the genome. The heterozygous and homozygous SNP ratios were calculated to be 0.0253% and 0.0014%, respectively, indicating a high single-base accuracy rate for the genome assembly. The integrity of the assembled genome was assessed by the Core Eukaryotic Genes Mapping Approach (CEGMA); 237 genes were assembled from 248 core eukaryotic genes (CEGs), accounting for 95.56% of the total and reflecting that the sequence assembly was relatively complete. The statistical results of BUSCO evaluation of the eggplant genome showed that 2,190 homologous single-copy genes were assembled and that 94.2% of all single-copy genes were assembled.
Genome annotation
For the annotation of the eggplant genome, we used a combination of gene prediction strategies, including de novo, homology, and transcriptome-based predictions. RNA from five different tissues, including root, stem, leaf, flower and fruit, was extracted for next-generation transcriptome sequencing and full-length transcriptome sequencing. A total of 36,582 coding genes were predicted, with an average of 4.31 exons per gene and an average transcript length of 4095.69 bp. Repetitive sequence annotation results showed that 70.09% of the eggplant genome is repeat sequences, with a size of 811.14 Mb. Most of the repeat sequences are long terminal repeat (LTR)-type retrotransposons, which account for 65.80%; 1.54% is the long interspersed nuclear element (LINE) type, and DNA transposons account for only 0.85%. In addition, 5929 noncoding RNAs were detected in the eggplant genome, including 268 miRNAs with an average length of 127.81 bp as well as 2549 tRNAs, and 554 snRNAs (Supplementary Table S2).
Evolution of the S. melongena genome A total of 9 sequenced Solanaceae genomes were analyzed to reveal the evolution of the eggplant genome, including Nicotiana tabacum, Capsicum annuum, Petunia inflata, S. tuberosum, S. lycopersicum, S. aethiopicum, S. melongena-HQ, and two other S. melongena genomes, S. melongena-NS 13 and S. melongena-67/3 16 . Phylogenetic analysis indicated that eggplant is closer to potato and tomato than pepper (Fig. 3a), diverging from the common ancestor~14.4 Mya (Fig. S2). The group of three Solanum species (eggplant, potato and tomato) is sister to pepper, diverging~18.5 Mya. Among the different eggplants, S. melongena-HQ and its close relative S. aethiopicum diverged from a common ancestor~2.4 Mya (Fig. S2). Moreover, S. melongena-HQ is more closely related to the European eggplant variety S. melongena-67/3 than the Japanese eggplant cultivar S. melongena-NS, all of which are distant from S. aethiopicum (Fig. 3a).
There were 32,529 gene families in total according to clustering results. Among the nine genomes, 6087 gene families are common, of which 463 single-copy gene families are common to each genome (Fig. 3b). The corresponding clustering results for S. melongena-NS, S. melongena-67/3, S. aethiopicum, and S. melongena-HQ were extracted to draw a Venn diagram, which showed that the four eggplant genomes have 11,123 genes ( Fig. 3c). Compared with other eggplants, S. melongena-NS has the most unique genes (1,256 genes), followed by S. aethiopicum with 1226 unique genes; S. melongena-67/3 has only 295 unique genes. In addition, S. melongena-HQ has a total of 563 accession-specific gene families containing 1009 genes (Fig. 3c, Supplementary Table S3). We performed GO and KEGG enrichment analyses on accession-specific gene families of S. melongena-HQ (Supplementary Table S3) and found them to be mainly involved in the processes of metabolism, biosynthesis and modification of proteins/nucleic acids.
Whole-genome duplication (WGD) events in the S. melongena-HQ genome were detected based on the rate of fourfold degenerative third-codon transversion (4DTv) of paralogous gene pairs among S. melongena-HQ, A. thaliana and four other Solanaceae species. As illustrated in Fig. 4, A. thaliana and S. melongena-HQ had one peak value at~0.72, indicating an ancient WGD before the divergence of asterids and rosids. S. melongena-HQ had only one WGD event common to Solanaceae species at 0.30, whereas there was no recent WGD after species differentiation. Among Solanaceae crops, S. melongena-HQ first diverged from pepper at~0.1, followed by tomato at~0.08, and then S. tuberosum at~0.06. The two eggplants S. aethiopicum and S. melongena-HQ diverged from each other quite recently compared with other species.
Expansion and contraction of gene families
The 9 sequenced Solanaceae genomes were analyzed to reveal the dynamics of gene family evolution in the eggplant genome. A total of 32,522 most recent common ancestor (MRCA) gene families were found (Fig. 3d). Table S4). The expanded and contracted genes were also annotated by GO and KEGG analyses (Supplementary Table S4). The KEGG pathway plantpathogen interaction showed the most contracted genes (25 genes), which may be related to reduced resistance in cultivated eggplant.
Comparative genomic analysis
Synteny analysis showed that the S. melongena-HQ genome exhibits high collinearity with that of S. melongena-67/3, with a total of 19,620 gene pairs and 178 syntenic blocks. Chromosome E01 in these two eggplant genomes is in the same direction but inverted compared with tomato chromosome 1. There is one missing block in S. melongena-67/3 chromosome E02, which exists between S. melongena-HQ and tomato and between tomato and pepper. Similar missing segments were also found for corresponding chromosomes 5 and 9. Chromosomes 4, 5, 10, 11, and 12 have undergone more complex chromosome rearrangements, such as translocations and inversions, during evolution among eggplant, tomato and pepper, as reflected by an increased number of syntenic blocks. We identified a total of 18,337 gene pairs and 151 syntenic blocks between S. melongena-HQ and tomato. S. melongena-HQ chromosome E04 was partly aligned to tomato chromosomes 4, 10 and 11; some of the genes on S. melongena-HQ chromosome E05 were aligned to tomato chromosome 12. Genes from S. melongena-HQ chromosome E10 were aligned to S. lycopersicum chromosomes 3, 5 and 12. Similar collinearity was also detected among the genes from corresponding chromosomes 11 and 12 between S. melongena-HQ and S. lycopersicum (Fig. 5). Pairwise comparisons are presented in Supplementary Figs. S3-S5.
Although the overall genome lengths of S. melongena-HQ and S. melongena-67/3 are not significantly different, the length of each chromosome differ significantly ( Table 2). The total sizes of the two eggplant genomes are 1073. 14 that of S. melongena-67/3. The differences in the lengths of other chromosomes, E04, E08, E06, E10, E11, and E12, are between 19.30 and 28.93 Mb. Despite the minor differences in total genome size between the two assembled eggplant genome versions themselves, the differences in chromosome length between the two assembled versions are significant. This result may have been caused by different sequencing technologies (second vs third generation) and assembly strategies (linkage map vs Hi-C).
We then compared S. melongena-HQ with two previously sequenced eggplant genomes, those of European eggplant S. melongena-67/3 and African eggplant S. aethiopicum, to investigate genomic divergence among them (Fig. 6a). Three types of variations were analysed, including single-nucleotide polymorphisms (SNPs), insertions/deletions (indels) and structural variants (SVs). We detected 2,189,112 SNPs, 512,849 indels, and 741 large SVs between S. melongena-HQ and S. melongena-67/3. In contrast, 22,092,994 SNPs, 1,988,560 indels, and 7,362 large SVs were identified between S. melongena-HQ and S. aethiopicum. Between S. melongena-HQ and S. melongena-67/3, the 512,849 indel mutations involve 14,756 genes, which were annotated using GO and KEGG (Supplementary Table S5). The 741 SVs correspond to 211 genes, among which 60 were functionally enriched by GO analyses (Supplementary Table S5). For S. melongena-HQ and S. aethiopicum, 3,066 genes are associated with large SVs, among which 1,370 and 350 genes were functionally enriched according to GO and KEGG analysis, respectively (Supplementary Table S6). There are 90 genes involved in antibiotic biosynthesis networks according to the KEGG enrichment results, and 16 genes related to the citrate cycle (TCA cycle). It has been proposed that the African eggplant S. aethiopicum has better disease resistance and drought tolerance than cultivated S. melongena-HQ 17 . Therefore, these genes will provide valuable resources for resistance improvement in eggplant breeding.
We further investigated SV abundance in potential regulatory regions of protein-coding genes; different types of indel variation suggest different patterns of SV accumulation (Fig. 6b). There were more deletions than insertions between S. melongena-HQ and S. aethiopicum. However, insertions and deletions between the two S. melongena genomes were similar in both coding and noncoding areas, with the two lines basically coinciding. Higher insertion-deletion variations were observed in transcription start site (TSS) and transcription terminal site (TTS) regions of S. melongena-HQ and S. aethiopicum, and the variation in the gene coding regions was found to be slightly higher than that in noncoding regions. In contrast, variations in coding regions were lower than those in noncoding region between cultivated eggplants.
NBS gene family and transcription factor analysis
Nucleotide-binding site-leucine-rich repeat (NBS-LRR) proteins constitute the largest family of resistance (R) proteins and play significant roles in defense against pathogens. The NBS protein family was systematically analysed in five plants of the Solanaceae family. In S. melongena-HQ, 301 NBS genes were identified as involved in seven types (Table 3; Supplementary Table S7), whereas only 250 genes were identified in S. melongena-67/3 as involved in eight types. S. aethiopicum has outstanding resistance to various pathogens, including Fusarium, Ralstonia and Verticillium 21,22 , with 436 NBS genes involved in ten types. Accordingly, S. aethiopicum has been routinely used to improve disease resistance in S. melongena. S. lycopersicum was found to possess 223 NBS genes. In terms of transcription factors, for S. melongena-HQ, a total of 1970 transcription factors divided into 64 categories, the top three of which were APETALA2/ ethylene responsive factor (AP2/ERF, 150), cysteine 2histidine 2 type zinc finger gene (C2H2, 137) and basic helix-loop-helix (bHLH, 135) were identified. The v-myb avian myeloblastosis viral oncogene homolog superfamily (MYB) has 127 transcription factors. Detailed information on the number and gene sequences of each transcription factor, including S. melongena-67/3, S. aethiopicum and S. lycopersicum, is shown in Supplementary Table S8.
Candidate gene identification for fruit length and QTL hotspots in eggplant
Eggplants display extensive variations in fruit morphology among landraces and wild relatives. There are obvious local market preferences for fruit shape (i.e., oval, round or linear) according to different consuming habits; thus, the fruit length, diameter and shape index of eggplants show significant differences (Fig. 1). The immature fruits of HQ-1315 are generally~35 cm in length and 3 cm in diameter, and it is a long (elongated type) eggplant. An F 2 population containing 129 individuals was obtained from a cross between HQ-1315 (P 1 ) and the short round eggplant 1815 (P 2 ; Fig. 7). Bulked segregant analysis (BSA) and quantitative trait locus (QTL) analysis on eggplant fruit length were then conducted using the S. melongena-HQ genome (Fig. 7). F 2 plants with extremely long and short fruits were selected and pooled for genome sequencing. Resequencing P 2 generated 23.41 Gb of data, and sequencing of the two extreme pools yielded 41.52 Gb for the extreme long pool and 40.05 Gb for the extreme short pool. The average length (L), diameter (D), and fruit shape index (L/D) of three fruits from each F 2 individual were measured to determine the value for the individual plant (Supplementary Table S9). Based on genotyping 48 Mb region, and they may also play potential roles in controlling fruit size.
Based on the QTL results of previous studies and the available marker sequence information, we anchored these markers to our latest reference genome to investigate QTL hotspots in eggplant [23][24][25][26][27][28][29][30] . A total of 210 linkage markers related to 71 traits, including fruit-related traits (i.e., fruit size and color), leaf morphology, and nutrient components, were anchored (Fig. 8, Supplementary Table S10). Except for the linkage markers for Fusarium resistance in Miyatake et al. 29 , most of the markers were mapped to physical locations on corresponding chromosomes. We summarized the regions with clustered linkage markers or traits and finally obtained 26 QTL hotspots, with two to three on each chromosome.
Eggplant Genome Database
We constructed an advanced, intuitive, and userfriendly Eggplant Genome Database using genome assembly and annotation data (Fig. 9). Eggplant Genome Database consists of three main modules. The browse module has links to information for 36,582 genes, including start/end locations and chromosome information. KEGG, Pfam, GO, NR, and Swiss-Prot database annotation information can be easily accessed by clicking the gene ID, as can the coding sequence (CDS) and protein sequence information corresponding to each gene. The BLAST module aligns sequences to the genome, gene, and protein databases to obtain the required information for users. The eggplant genome assembly, as well as genome gff, CDS, protein, and other data files, can be downloaded using the download module. Eggplant Genome Database provides access to various types of data, allowing researchers and breeders to browse, search, and download information for genomics studies and breeding. The online database can be accessed at http://eggplanthq.cn/.
Discussion
Genome sequencing technologies have undergone tremendous improvement during the past decades, resulting in substantial advances in the availability of plant genomes. Since the publication of the first plant genome, Arabidopsis thaliana, using whole-genome shotgun sequencing, over 200 plant genomes have been published 31 (www.plabipd.de). However, genome sequencing of plant species with large genome sizes and high repetitive sequence contents remains difficult 32 . Compared with the short reads produced by NGS technologies, long reads with kilobase-length DNA fragments are extremely efficient in resolving repetitive regions and facilitating genome assembly. Several new technologies have been developed based on long reads, such as synthetic long reads, long PacBio reads, and optical mapping, and these methods have been applied to Arabidopsis 33 , tomato (3.0 genome release; www.solgenomics.net) and maize 34 . Nevertheless, long-read sequencing technologies are still costly and rely on the previous extraction of high-quality DNA. Oxford Nanopore is a recently developed long-read sequencing technology that can greatly reduce the sequencing costs and generate gigabases of sequence data from a single flow cell 35 . Hi-C proximity ligation is another driving technology that may help in the assembly of fragmented plant genomes at the chromosome level 36 . In the present study, we combined 114.45 Gb Illumina short reads with 129 Gb long reads from Nanopore sequencing and~113.46 Gb 10X Genomics data to generate a high-quality eggplant genome, with a contig N50 of 5.26 Mb and a scaffold N50 of 89.64 Mb. With the assistance of 131.73 Gb Hi-C data, 12 eggplant pseudochromosomes were obtained, with a total size of Supplementary Table S10 ~1.07 Gb, covering 92.72% of the eggplant genome. Both contig N50 and scaffold N50 were significantly improved compared with those of previously published S. melongena genomes 13,16 . The number of scaffolds obtained was 10,383 for S. melongena-67/3 and 33,873 for S. melongena-NS; we assembled 2,263 scaffolds. A total of 36,582 protein-coding genes were detected in the present study, similar to the~35,000 genes annotated in other sequenced diploid Solanaceae genomes.
Eggplant belongs to the genus Solanum and the family Solanaceae, which comprises over 3000 species adapted to a wide range of environments, including nine with sequenced genomes, i.e., potato 9 , tomato 10 , pepper 11,12 , tobacco 37 , petunia 38 , and four eggplants 13,16,17 (S. melongena-HQ, S. melongena-NS, S. melongena-67/3, and S. aethiopicum). The Old World subgenus Leptostemonum comprises~500 species and 30 sections, including half of the economically important crops 1 . The brinjal eggplant S. melongena belongs to section Melongena, whereas the closely related species, the scarlet eggplant S. aethiopicum, belongs to section Oliganthes. We found 6,087 gene families in common in the nine genomes, among which we identified 463 single-copy gene families (Fig. 3). S. melongena and S. aethiopicum diverged from each other 2.4 Mya (Fig. S2). In addition, comparative genomics were performed among three sequenced eggplant genomes, S. melongena-HQ, S. melongena-67/3 and S. aethiopicum, and three types of variations (SNPs, indels and SVs) were characterized. As expected, S. melongena-HQ has significantly higher numbers of SNPs (22,092,994), indels (1,988,560) and SVs (7362) when compared with S. aethiopicum than compared with S. melongena-67/3 (Fig. 5). SVs consist of deletions and insertions that may result in divergent gene expression and phenotypes [39][40][41][42] . Interestingly, asymmetric SV accumulation was found in potential regulatory regions of protein-coding genes among the different eggplants, with more deletions than insertions between S. melongena-HQ and S. aethiopicum. In contrast similar insertion and deletion levels were observed between the two S. melongena genomes. This phenomenon has also been detected between two subgenomes of the allotetraploid peanut 42 . Overall the genome sequence of the linear eggplant HQ-1315 and comparative genomic information of S. melongena with that of related species allowed for the identification of genomic divergence at the wholegenome level, and the findings provide genomic tools for the improvement of agronomic traits in eggplant.
Stress resistance and fruit morphology (i.e., shape and color) are important traits during eggplant domestication that are vastly different among cultivated S. melongena varieties and closely related species. S. aethiopicum is mostly grown in tropical Africa, with outstanding disease resistance to various pathogens, such as Fusarium and Verticillium and is cross-compatible with S. melongena 43,44 . We identified 301 NBS-LRR genes in S. melongena-HQ and 250 NBS-LRR genes in S. melongena-67/ 3. As expected, S. aethiopicum has a higher number of disease resistance genes, with 436 genes involved in ten types. S. melongena-NS (Japanese eggplant) and S. melongena-67/3 (European eggplant) both have dark-purple fruits, with elliptical, oval or round shapes, whereas S. melongena-HQ has unusually linear-shaped fruits with a bright-purple color (Fig. 1). We constructed an F 2 segregating population and performed QTL mapping analysis on eggplant fruit length using the S. melongena-HQ genome (Fig. 7). A QTL interval for fruit length was identified within a 71.29-78.26-Mb region on chromosome E03, with a 99% confidence interval. Gene prediction was conducted by homology comparison based on the syntenic relationship between eggplant and tomato, which yielded 11 homologous genes for fruit size on eggplant chromosome E03. Combining these results with the identification of the QTL region FS3.1 in our previous study 30 , we propose that Smechr0301963 (the ortholog from S. melongena-67/3 is SMEL_003g182360), a gene potentially orthologous to SUN gene family members, is a key candidate gene for regulating eggplant fruit length.
Eggplant research is far behind that of other Solanaceae crops (i.e., tomatoes, peppers, and potatoes) and important crops such as cucumber. For QTL mapping research, previous studies have often used tomato genomes for collinear comparisons because of the lack of high-quality eggplant reference genomes [25][26][27]45,46 . Our study provides a high-quality eggplant genome that has wide applications in eggplant genetics and genomics studies, such as marker development, gene detection and chromosome evolution. In the present study, we detected QTL hotspots based on published QTL mapping results and marker information [23][24][25][26][27][28][29][30] , with 210 markers associated with 71 traits anchored to the S. melongena-HQ reference genome ( Fig. 8; Supplementary Table S10). We identified and summarized 26 QTL hotspots, providing a valuable reference and basis for further exploration of regulatory genes controlling important traits in eggplant.
Materials and methods
Plant materials, DNA extraction, and genome sequencing The eggplant cultivar HQ-1315 was selected for wholegenome sequencing; it is a high-generation self-crossbred inbred line with elongated purple fruits. HQ-1315 is an important parental material derived from the Vegetable Institute of Zhejiang Academy of Agricultural Sciences. The HQ-1315 plants were grown in a greenhouse at Qiaosi of Zhejiang Academy of Agricultural Sciences (Hangzhou, China) under standard conditions. DNA was extracted from the young leaves of HQ-1315 for genome sequencing using DNA Secure Plant Kit (TIANGEN, China) and broken into random fragments. Four kinds of DNA sequencing libraries were constructed, including a 350-bp insert size library, Nanopore library, 10× Genomics library, and Hi-C library, according to the manufacturers' instructions. The genome was sequenced using Illumina NovaSeq PE150 and Nanopore PromethION according to standard Illumina (Illumina, CA, USA) and Nanopore (Oxford Nanopore Technologies) protocols at Novogene.
To estimate the eggplant genome size, k-mer distribution analysis was used, and 17-nt k-mers were employed to determine abundance with 93.33 Gb of paired-end reads. SOAPdenovo software was used to splice and assemble the reads into scaffolds with 41-nt k-mers.
Genome assembly and evaluation
We used wtdbg2 software 47 to assemble the noncleaned raw reads from Nanopore sequencing according to the Fuzzy Bruijn Graph (FBG) algorithm. To derive each point, a 1024-bp sequence was selected from the reads, and the points were connected to construct the FBG figure using gapped sequence alignments. Finally, a consensus sequence was obtained. We polished the consensus sequence three times with Nanopore reads using Racon software 48 . The split size was 50, and the other parameters were set to defaults. Paired-end clean reads obtained from the Illumina platform were aligned to the eggplant assembly using BWA software 49 (v0.7.17). Postprocessing error correction and conflict resolution of the assembly were performed using the Pilon tool with default parameters. The fragScaff software 50 was applied for 10X Genomics scaffold extension. Linked reads generated from the 10X Genomics library were aligned to the consensus sequence of the Nanopore assembly to obtain long scaffolds. The consensus sequences were filtered, and only those with linked-read support were used for subsequent assembly. Then, clean Hi-C data were aligned to the primary draft assembly using BWA software v0.7.17 49 . SAMtools 51 was utilized to remove duplicates and nonaligned reads, and only read pairs with both reads in the pair aligned to contigs were considered for scaffolding. Ultimately, 12 superscaffolds (pseudochromosomes) were assembled from corrected contigs using LACHESIS software 52 .
To evaluate the accuracy of the assembly, short reads were blast searched against the genome using BWA software 49 . CEGMA (http://korflab.ucdavis.edu/datasets/ cegma/) was used to assess the completeness of the eggplant genome assembly, and BUSCO v4 53 analysis was performed to further evaluate the assembled genome.
Transcriptome sequencing and gene annotation HQ-1315 plants were grown in a greenhouse at Qiaosi of Zhejiang Academy of Agricultural Sciences (Hangzhou, China) under standard conditions. RNA from five different tissues (root, stem, leaf, flower, and fruit) was extracted for next-generation transcriptome sequencing and full-length transcriptome sequencing using Illumina NovaSeq PE150 as an auxiliary annotation. Transcriptome read assemblies were generated with Trinity 54 (v2.1.1) for gene annotation.
To optimize the gene annotation, RNA-seq reads from different tissues were aligned to genome fasta sequences using TopHat 55 (v2.0.11) with the default parameters to identify exon regions and splice positions. The alignment results were then applied as input for Cufflinks 56 (v2.2.1) with default parameters for genome-based transcript assembly. A nonredundant reference gene set was generated by merging genes predicted by three methods with EvidenceModeler 57 (EVM, v1.1.1) using PASA 58 For structural annotation, ab initio prediction, homologybased prediction, and RNA-seq assisted prediction were used to annotate gene models.
Repeat annotation
A combined strategy based on homology alignment and a de novo search was used in the repeat annotation pipeline to identify repetitive elements in the eggplant genome. Tandem repeats were extracted using TRF (http://tandem.bu.edu/trf/trf.html) by ab initio prediction. For homolog-based prediction, the Repbase TE library and TE protein database were employed to search against the eggplant genome using RepeatMasker 64 (version 4.0) and RepeatProteinMask, respectively, with the default parameters. For de novo-based approach prediction, a de novo repetitive element database was built with LTR_FINDER 65 , RepeatScout 66 , and RepeatModeler 67 , also with default parameters.
Homolog prediction
A total of five species were included in homolog prediction: S. tuberosum, S. melongena, S. lycopersicum, C. annuum, and N. tabacum. Sequences of homologous proteins were downloaded from NCBI and aligned to the genome using tBlastn 68 (v2.2.26; E-value ≤ 1e − 5). The matching proteins were then aligned to the homologous genome sequences using GeneWise 69 (v2.4.1) software to produce accurate spliced alignments, which were applied to predict the gene structure contained in each protein region.
Functional annotation
The functions of protein-coding genes were assigned according to the best match by aligning the protein sequences against the Swiss-Prot database using Blastp 70 , with a threshold of E-value ≤ 1e −5 . Protein motifs and domains were annotated by searching against the Pro-Dom 71 , Pfam 72 (V27.0), SMRT 73 , PANTHER 74 , and PROSITE 75 databases using InterProScan 76 (v5.31). GO IDs 77 for each gene were assigned according to the corresponding InterPro entry. Protein functions were predicted by transferring annotations from the closest BLAST hit (E-value < 10 −5 ) in the Swiss-Prot and NR databases. We also assigned a gene set to the KEGG pathway database 78 (release 53) and identified the best matched pathway for each gene.
Gene family construction and expansion/contraction analysis
Protein sequences predicted from the S. melongena-HQ eggplant genome and eight other sequenced Solanaceae genomes, S. tuberosum, S. lycopersicum, S. melongena-NS, S. melongena-67/3, S. aethiopicum, C. annuum, P. inflata, and N. tabacum, were used for gene family clustering. The gene set from each species was filtered according to the three steps described by Sun et al. 13 , with slight changes. The genes encoding proteins of fewer than 50 amino acids were filtered out. The gene families of the four eggplant genomes (S. melongena-HQ S. melongena-NS, S. melongena-67/3, and S. aethiopicum) were extracted for Venn diagram analysis to identify species-specific gene families in S. melongena-HQ. GO and KEGG annotation was performed to investigate the functions of those speciesspecific genes.
The expansion and contraction of gene families were analyzed by comparing family sizes between the MRCA and each of the nine sequenced Solanaceae genomes using CAFE 82 . The corresponding p-value for each lineage was calculated using conditional likelihoods, and families with a p-value of 0.05 were considered significantly expanded or contracted. The expanded and contracted genes were also analysed by GO and KEGG annotation.
Phylogenetic analysis MUSCLE 83 (http://www.drive5.com/muscle/) was used to align single-copy genes from representative Solanaceae genomes, and the results were combined to generate a superalignment matrix. Using RAxML 84 (http://sco.h-its. org/exelixis/web/software/raxml/index.html), a phylogenetic tree of the nine sequenced Solanaceae genomes was constructed with the maximum likelihood (ML) algorithm and 1000 bootstrap replicates. P. inflata was designated as the outgroup. To determine divergence times based on the phylogenetic tree, the MCMCTree program implemented in PAML5 software 85 was used. Divergence time calibration information was obtained from the TimeTree database (http://www.time.org/).
Detection of WGD events
Protein sequences from S. melongena-HQ, S. aethiopicum, S. lycopersicum, S. tuberosum, C. annuum, and A. thaliana were used for BLASTP (E-value < 1e−05) searches within or between genomes to identify syntenic blocks, after which syntenic blocks were searched using MCScanX (http://chibba.pgml.uga.edu/mcscan2/) software according to the locations of the genes and the blast results. Muscle multiple sequence alignment was performed on the paralogous genes in the syntenic blocks, and the results of the protein alignment were used as templates to generate CDS alignment results. Finally, 4DTv values were calculated according to the comparison results, and a frequency distribution diagram of the 4DTv values and gene pairs was drawn.
Identification of SNPs, indels, and SVs
The genome sequence of S. melongena-HQ was aligned to that of S. melongena-67/3 and S. aethiopicum using BWA v0.7.17 49 using default parameters. Picard tools v1.9.4 (https://broadinstitute.github.io/picard/) was applied to sort the alignment result sequence alignment map (SAM) files. SNPs and indels were called using Genome Analysis Toolkit 86 , and related genes were called according to genome position using an in-house Perl script.
Clean reads of S. melongena-HQ were aligned to those of S. melongena-67/3 and S. aethiopicum using BWA v0.7.17 49 with default parameters. BreakDancerMax-0.0.1r61 was used for genome-wide detection of SVs with default parameters 87 . Deletion and insertion structure variations <10 bp or >10 kb in length were discarded. For the identification of SV genes, any gene with SVs in the main body or upstream/downstream regions was defined as an SV gene; otherwise, it was defined as a non-SV gene.
Identification of the NBS gene family and transcription factors
Most NBS-encoding genes in eggplant were identified based on NB-ARC (NBS) conserved domains that are shared within the gene family and have relatively conserved NBS domains. The latest Markov model for the NBS transcription factor PF00931 was downloaded from the Pfam database (http://pfam.xfam.org/). The HMMER program was used to search for proteins containing this domain against the annotated protein database using the PF00931 domain as a query, with a cutoff E-value of 1e−4. To annotate the maximum number of NBS genes in the genomes, we also used the obtained NBS protein sequences for homologous annotation of genome sequences. tBlastn was applied for homology comparison, and the upper and lower segments of the comparison region were expanded by 5 kb each. Genewise software was then used for gene structure prediction, and homologous protein sequences were screened with PF00931. For the identification of transcription factors, iTAK-1.5alpha software was utilized to predict transcription factors among the longest transcribed translated protein sequences of each species.
QTL-seq
An F 2 population with 129 individuals was generated from a cross between HQ-1315 (linear-long fruits) and 1815 (round fruits), and phenotypic data on eggplant fruit length (L), diameter (D) and fruit shape index (L/D) were collected. Three mature fruits of each individual plant were selected for measurement; plants with extremely long/short fruits were selected and pooled according to the fruit length statistics. Equal amounts of DNA from the young leaves of 20 extreme individuals in each pool were mixed and sequenced. GATK 3.8 software was used to improve multiple-sample SNP and indel detection, and VariantFiltration was applied for filtering 86 . The SNP index was calculated with QTL-seq 88 methods. Indel markers that were exactly the same as those of the parent were assigned an indel-index of 0, with those completely different from the parent assigned an indel-index of 1. To intuitively reflect the distribution of all indices on the chromosome, the SNP index and indel index were combined to obtain Δ(all-index). Any interval with an aΔ(allindex) value higher than the threshold at the 95% confidence level was selected as a candidate interval. SNPs and indels were annotated using ANNOVAR 89 . | 8,492.4 | 2020-09-21T00:00:00.000 | [
"Biology",
"Agricultural And Food Sciences"
] |
Sexual Dimorphism Within Brain Regions Controlling Speech Production
Neural processing of speech production has been traditionally attributed to the left hemisphere. However, it remains unclear if there are structural bases for speech functional lateralization and if these may be partially explained by sexual dimorphism of cortical morphology. We used a combination of high-resolution MRI and speech-production functional MRI to examine cortical thickness of brain regions involved in speech control in healthy males and females. We identified greater cortical thickness of the left Heschl’s gyrus in females compared to males. Additionally, rightward asymmetry of the supramarginal gyrus and leftward asymmetry of the precentral gyrus were found within both male and female groups. Sexual dimorphism of the Heschl’s gyrus may underlie known differences in auditory processing for speech production between males and females, whereas findings of asymmetries within cortical areas involved in speech motor execution and planning may contribute to the hemispheric localization of functional activity and connectivity of these regions within the speech production network. Our findings highlight the importance of consideration of sex as a biological variable in studies on neural correlates of speech control.
INTRODUCTION
Speech production is a complex motor behavior that requires the involvement of several brain regions and their respective networks, which collectively support different aspects of auditory and phonological processing, sensorimotor integration (SMG), executive function, motor planning and execution . Contrary to the empirical notion of left-hemispheric lateralization of brain activity during speech production, several recent studies defined a bilateral functional and structural distribution of the large-scale speech network (Simonyan et al., 2009;Morillon et al., 2010;Gehrig et al., 2012;Silbert et al., 2014;Simonyan and Fuertinger, 2015;Kumar et al., 2016). Within this network, a hemispheric lateralization of functional activity and connectivity was found to be a feature of selected brain regions and their subnetworks. While these studies refined our understanding of the hemispheric lateralization of speech production, its potential physiological underpinnings remain poorly understood. A recent multimodal study combining functional MRI (fMRI), intracranial electroencephalographic (EEG) recordings and large-scale neural population simulations based on diffusion-weighted MRI has demonstrated a direct modulatory role of dopaminergic neurotransmission on a functional lateralization of nigrostriatal and nigro-motocortical pathways involved in speech production (Fuertinger et al., 2018). Given the previous reports of sex differences in perceptual aspects of speech and language neural representations (Binder et al., 1995;Frost et al., 1999;Kansaku and Kitazawa, 2001;Clements et al., 2006), it is plausible to assume that another factor contributing to cortical hemispheric lateralization during speech production may be rooted in sexspecific differences of structural brain organization. Along these lines, it has been suggested that females have a more bilateral language representation, while language processing is mostly leftlateralized in males (McGlone, 1980;Dorion et al., 2000;Gur et al., 2000). For example, males show left-hemispheric activation during phonological tasks, while females show largely bilateral activity (Shaywitz et al., 1995). Male stroke patients have been reported to exhibit verbal impairments more frequently after lesions of the left hemisphere than females (McGlone, 1980;Hier et al., 1994), although sex differences were not replicated in other stroke studies involving unilateral lesions (Basso, 1992;Pedersen et al., 1995Pedersen et al., , 2004. Several studies, including large meta-analyses, have also failed to identify sex-specific differences in brain lateralization (Binder et al., 1995;Frost et al., 1999;Kansaku and Kitazawa, 2001;Sommer et al., 2004;Kitazawa and Kansaku, 2005;Clements et al., 2006;Wallentin, 2009;Kong et al., 2018). However, it should be noted that these studies have primarily focused on perceptual and cognitive aspects of speech and language processing and have not specifically examined the motor aspects of speech control. Inconsistencies in findings might also stem from high functional heterogeneity that characterizes large atlas-based macroanatomic labels as used in previous studies. Therefore, to circumvent these limitations and to focus on the speech production system, we examined the presence of sex differences in cortical thickness (CT) in brain regions that are functionally active during real-life speech production in healthy males and females. We hypothesized that hemispheric lateralization of regional brain activity during speech production may, in part, be explained by sex-specific asymmetry in cortical morphology within the speech controlling network.
Study Subjects
A total of 109 subjects participated in the study, including 59 healthy females (mean age 50.4 ± 10.5 years) and 50 agematched healthy males (mean age 51.9 ± 9.3 year). All subjects were monolingual native English speakers, right-handed as determined by the Edinburgh Handedness Inventory (Oldfield, 1971), had normal cognitive performance and lexical verbal fluency as determined by the Mini-Mental State Examination (Cummings, 1993), and had no history of speaking, hearing, psychiatric or neurological problems. There were no differences in mean age and the education level between the male and female groups (p > 0.46). This study was carried out in accordance with the recommendations of the Internal Review Board of Massachusetts Eye and Ear Infirmary. All subjects gave written informed consent in accordance with the Declaration of Helsinki.
Image Acquisition
All subjects underwent high-resolution MRI on 3.0 T Philips scanner with an 8-channel Sense head coil. An anatomical scan was acquired in all subjects using a T1-weighted MPRAGE sequence (flip angle = 8 • , TR = 7.5 ms, TE = 2 ms, FOV = 210 × 210 mm 2 , 172 slices with an isotropic voxel size of 1 mm 3 ). Among these, 16 females (mean age 50.9 ± 9.6 years) and 13 age-matched males (mean age 52.3 ± 9.0 years) participated in an additional whole-brain fMRI scan using a gradient-weighted echo planar imaging (EPI) pulse sequence and blood oxygen level dependent (BOLD) contrast (TR = 10.6 s, including an 8.6 s delay for listening to and production of the task and 2 s for image acquisition, TE = 30 ms, flip angle = 90 • , 36 contiguous slices, slice thickness = 4 mm, matrix size = 64 × 64 mm, FOV = 240 × 240 mm 2 ). A sparse-sampling event-related fMRI design was used to minimize scanner noise, task-related acoustic interferences, and orofacial motion (Gracco et al., 2005;Blackman and Hall, 2011;Adank, 2012).
Subjects were instructed to listen to an auditory sample of eight different English sentences (e.g., "Jack ate eight apples, " "Tom is in the army") delivered one at a time by the same female native English speaker through MR-compatible headphones within a 3.6 s period. When cued by an arrow, subjects produced the task (i.e., repeated the sentence once) within a 5 s period, which was followed by a 2 s whole-brain volume acquisition (Figure 1). Rest periods without any auditory input or task production were incorporated as a baseline condition. Each subject completed four functional runs, consisting of 24 task and 16 resting conditions.
Anatomical MRI
Whole-brain T1-weighted images were analyzed using the automated "recon-all" function implemented in FreeSurfer software. Briefly, the processing included motion correction, intensity normalization, skull-stripping, volumetric registration with labeling, tissue segmentation, and gray-white interface and pial surface delineation. Cortical parcellation was performed using the Destrieux atlas, which assigned neuroanatomical labels to each location on the cortical surface while incorporating geometric information derived from the subject's cortical model (Fischl, 2012). All cortical parcellations were visually inspected for accuracy and, if necessary, corrected manually.
Functional MRI
Image analysis was performed using the standard afni_proc.py pre-processing pipeline in AFNI software, which included removal of spikes, registration, alignment of the EPI volume to the anatomical scan, spatial normalization to the AFNI standard Talairach-Tournoux space, spatially smoothed with a 4-mm Gaussian filter, scaling of each run mean to 100 for each voxel, and motion scrubbing. A task regressor was convolved with a canonical hemodynamic response function and entered into a multiple regression model to predict the observed BOLD response during speech production. Group analysis was carried out using a two-sided one-sample t-test. The statistical threshold was set at a voxel-wise and cluster-wise corrected p ≤ 0.001, with minimal cluster size of 100 voxels using AFNI's 3dClustSim.
FIGURE 1 | Schematic illustration of the experimental fMRI design. The subject fixated on the cross and listened to the acoustically presented sentence for a 3.6-s period. Sentences were pseudorandomized and presented one at a time. No stimulus was presented for the baseline resting condition, during which the subject fixated on the cross. An arrow cued the subject to initiate the task production within a 5-s period, which was followed by a 2-s period of image acquisition.
Cortical Regions-of-Interest
Consistent with the previous studies of neural activity during speech production (e.g., Tourville and Guenther, 2003;Simonyan and Fuertinger, 2015;Simonyan et al., 2016;Basilakos et al., 2018;Kearney and Guenther, 2019), the cortical regions-of-interest (ROIs) included the precentral, postcentral and inferior frontal gyri, supplementary motor area, middle cingulate cortex, supramarginal (SMG), superior temporal (STG) and Heschl's gyri, and insula (Figure 2A). Following the extraction of parcellated Destrieux atlas-based ROIs, a further delineation of these regions included their restriction to areas activate during speech production ( Figure 2B). For this, the group mean activity map during speech production was binarized, warped into MNI space using AFNI's 3dWarp, transformed from the volumetric space to the surface space using AFNI's 3dVol2Surf and conjoined with atlas-based ROIs, resulting in speech-specific cortical ROIs (Figure 2C).
In each subject, the mean CT measure was extracted from each speech-specific cortical ROI using Freesurfer's mri_segstats. Multivariate analysis of covariance, accounting for age as a covariate, was used to examine between-group differences in CT measures within each right and left hemisphere. Separately, within-group differences in CT measures between hemispheres were examined using paired t-tests. Statistical significance was FIGURE 2 | (A) Visualization of atlas-based anatomical regions-of-interest (ROIs) within the speech production network based on the Destrieux atlas parcellation, including the precentral, postcentral and inferior frontal gyri, supplementary motor area, middle cingulate cortex, supramarginal, superior temporal and Heschl's gyri, and the insula. (B) Group statistical map of whole-brain activation during speech production across males and females. Color bar represents the t-score at p ≤ 0.001. (C) Speech-specific cortical ROIs derived from conjoining the atlas-based anatomical ROIs with the binarized map of speech-related brain activity. The ROIs are color-coded based on their anatomical affiliation and displayed on the FreeSurfer average template.
Bonferroni-corrected by the number of ROIs used in the analysis and set at p < 0.005.
RESULTS
Both males and females exhibited a typical pattern of cortical activity during speech production, which involved primary sensorimotor, premotor, inferior frontal, middle cingulate, auditory, inferior parietal and insular regions (Figure 2A), in agreement with other studies investigating speech production (e.g., Tourville and Guenther, 2003;Fuertinger et al., 2015;Simonyan et al., 2016;Basilakos et al., 2018;Kearney and Guenther, 2019). For further analysis, this activity was restricted to the a priori delineated cortical structural ROIs, as outlined above and illustrated in Figure 2. Analysis of regional CT showed that females had significantly greater left Heschl's gyrus compared to males (p = 0.002) (Figure 3 and Table 1). None of other cortical regions showed significant differences in CT between the male and female groups (p ≥ 0.11).
DISCUSSION
Our study demonstrated the presence of speech-specific sexual dimorphism in CT of primary auditory cortex within the Heschl's gyrus. In addition, structural hemispheric asymmetry both in males and females was identified in selected brain regions controlling speech motor execution (precentral gyrus), auditory processing (STG) and sensorimotor integration (SMG).
Auditory cortex within the Heschl's gyrus is known to encode short-latency temporal features of auditory stimuli that have repetition rates within the range of the fundamental frequency of human voice (Belin et al., 1998;Price, 2000;Zatorre, 2001;Scott and Wise, 2004;Brugge et al., 2008Brugge et al., , 2009Warrier et al., 2009;Chevillet et al., 2011;Nourski and Brugge, 2011;Kusmierek et al., 2012). Distinct functional parcellations of core and noncore auditory areas within the Heschl's gyrus process natural human vocalizations and pitch perturbations in the auditory feedback (Behroozmand et al., 2016). Earlier lesion studies have demonstrated that damage to the left auditory cortex often results in deficits of temporal processing, manifesting as a speech FIGURE 3 | Boxplot shows mean cortical thickness (in mm) and standard error in each speech-specific cortical region-of-interest in males and females. Asterisk ( * ) depicts statistically significant differences between males and females as well as within each male and female group.
Regions-of-interest
Mean ± Standard Error CT P Statistically significant differences between males and females as well as within each male and female group are shown in bold.
Frontiers in Neuroscience | www.frontiersin.org disorder (Damasio and Damasio, 1980;Phillips and Farmer, 1990). Along these lines, our finding of greater CT in the left Heschl's gyrus in females than males suggests that structural enhancement of this region might be associated with sexspecific differences in processing of auditory cues during speech production as well as contribute to increased prevalence of speech and language developmental disorders in males (Shriberg and Kwiatkowski, 1994;Law et al., 1998;Keating et al., 2001). We further found between-hemispheric rightward asymmetry of the STG in males but not females. This finding is in line with earlier studies that suggest the influence of genes involved in steroid hormone receptor activity in this region. Specifically, testosterone and progesterone may exert opposing effects on the STG structural organization by promoting its rightward asymmetry in males and forging its structural symmetry in females, respectively (Geschwind and Galaburda, 2003;Guadalupe et al., 2015). This is consistent with the hypothesis that region-specific sexual dimorphisms might be related to factors affecting in utero and early postnatal sexual differentiation of the neural system (Goldstein, 2001).
In both males and females, a characteristic feature of CT organization within the speech production network was its rightward asymmetry of the SMG and leftward asymmetry of the precentral gyrus, encompassing primary motor and premotor cortical areas. The SMG is involved in higher-order processing and plays an important role in the coordination of speech-motor learning, sensorimotor adaptation, phonological decisions, auditory error recognition, and speech onset monitoring (Price et al., 1997;McDermott et al., 2003;Shum et al., 2011;Sliwinska et al., 2012;Deschamps et al., 2014;Kort et al., 2014;Fuertinger et al., 2015). In line with a recent study showing involvement of the right SMG in the prosodic and paralinguistic aspects of speech production (Lindell, 2006), our results suggest that rightward asymmetry of this region may be important for higher-order integration of phonological processing in both males and females. Similarly, leftward CT asymmetry in the precentral gyrus, specifically encompassing its speech motor cortex, may be linked to the general lefthemispheric dominance of this region in the fulfillment of motor tasks in right-handed males and females. This finding also substantiates the left-hemispheric dominance of functional network originating from the laryngeal motor cortex (Lindell, 2006;Simonyan et al., 2009).
Putting the current findings in context with the previous literature, it is important to note that earlier investigations of CT asymmetry have used large atlas-based brain regions that were not confined to speech-related brain activity. This might have led to the mixed reports of both left-and righthemispheric lateralization of the precentral gyrus and STG in both males and females (Luders et al., 2006;Guadalupe et al., 2015;Kong et al., 2018). Additionally, some studies have reported left-hemispheric asymmetry of CT and regional surface area in the SMG (Lyttelton et al., 2009;Koelkebeck et al., 2014;Plessen et al., 2014;Maingault et al., 2016), while others have found no such differences in this region (Luders et al., 2006;Koelkebeck et al., 2014;Kong et al., 2018). While these inconsistencies might indicate the absence of population-level CT asymmetries (Kong et al., 2018), they may also stem from a failure to account for sex differences in structural organization of the speech production network.
In summary, this study provides evidence for the existence of sex-specific structural dimorphisms within the cortical speech production circuitry. Our findings highlight the importance of the inclusion of sex as a biological variable in research on neural correlates of speech control. Furthermore, our data suggest that examination of speech-specific cortical morphology benefits from restricting analysis to anatomical areas that are functionally active during this complex behavior.
DATA AVAILABILITY
All datasets generated for this study are included in the manuscript and/or the supplementary files.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Internal Review Board of Massachusetts Eye and Ear Infirmary. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Internal Review Board of Massachusetts Eye and Ear Infirmary.
AUTHOR CONTRIBUTIONS
KS collected the data. KS, LdLX, and SH designed the study and statistical methods. KS critically reviewed the manuscript and obtained funding. LdLX and SH analyzed the data and drafted the manuscript.
FUNDING
This study was funded by the National Institute on Deafness and Other Communication Disorders, National Institutes of Health (grants R01DC011805 and R01DC012545 to KS). | 3,847.2 | 2019-07-30T00:00:00.000 | [
"Biology",
"Psychology"
] |
A Computational Analysis to Burgers Huxley Equation
: The efficiency of solving computationally partial differential equations can be profoundly highlighted by the creation of precise, higher-order compact numerical scheme that results in truly outstanding accuracy at a given cost. The objective of this article is to develop a highly accurate novel algorithm for two dimensional non-linear Burgers Huxley (BH) equations. The proposed compact numerical scheme is found to be free of superiors approximate oscillations across discontinuities, and in a smooth flow region, it efficiently obtained a high-order accuracy. In particular, two classes of higher-order compact finite difference schemes are taken into account and compared based on their computational economy. The stability and accuracy show that the schemes are unconditionally stable and accurate up to a two-order in time and to six-order in space. Moreover, algorithms and data tables illustrate the scheme efficiency and decisiveness for solving such non-linear coupled system. Efficiency is scaled in terms of L 2 and L ∞ norms, which validate the approxi-mated results with the corresponding analytical solution. The investigation of the stability requirements of the implicit method applied in the algorithm was carried out. Reasonable agreement was constructed under indistinguishable computational conditions. The proposed methods can be implemented for real-world problems, originating in engineering and science.
Introduction
This paper describes the multiplex schemes solution for two dimensional non-linear Burgers Huxley equation. Such an equation serves as the coupling between the Z xx , Z yy diffusive terms and Z(Z x + Z y ) the convectional phenomena. This equation is of high importance for showing a prototype model describing the interaction between reaction mechanisms, convection effects and diffusion transports. It is the combination of both Burgers & Huxley phenomena with non-linear term means reactions kind of characteristics behaviour, to capture some features of uid turbulence which caused by the effects of convection & diffusion [1][2][3]. It is a quantitative paradigm which deals with the ow of electric current through the surface membrane of a giant nerve bre. Nerve pulse propagation in nerve bres and wall motion in liquid crystals. Recently research has been measured to investigate two dimensional Burgers Huxley phenomena for understanding the various physical ows in uid theory [4][5][6] which leads to implementing a novel methodology for studying new insights [7,8]. It is worth mentioning that there is a vast amount of different approaches available in the literature to calculate the solutions of non-linear systems of partial differential equations. Seeking the Burgers Huxley equations numerical solution, wavelet collocation methods for the solution of Burgers Huxley equations [9] have already been studied in combination with variational iteration technique [10,11]. Moreover, the propagation of genes (Burger & Fisher) and Reaction-Diffusion (Gray Scott) models [12,13] investigated largely by the technique of computation [14]. On the other hand, optimal homotopy asymptotic & homotopy perturbation method was carried out to nd the approximate solution of Boussinesq-Burgers equations [15]. Finally, some novel techniques also take into account like chaos theory [16], nonlinear optics and fermentation process [17,18]. Wazwaz obtained the solitary wave solutions of one dimensional Burgers Huxley equation using tanh-coth method [19]. Hashim et al. [20,21] using Adomian Decomposition Method. Molabahrami et al. [22] used the homotopy analysis method to nd the solution of one dimensional Burger Huxley equation also E mova et al. [23] nd the travelling wave solution of such equation. Batiha et al. [24] used Hope-Cole transformation with Gao et al. [25] nd the exact solution of the generalized Burgers equation.
This research aims to deal with higher-order compact schemes with the nite difference methodology [8]. Our primary focus is to attain a compatible scheme which is highly ef cient and easy to implement with better accuracy. Although, Burgers Huxley equation can be in three dimensions still some features kept unexplored in the two-dimensional scenario. Let us explorer some new insights in BH equation which consists of the two-dimensional domain which can be written as: where Z = Z(l, m, t) is the unknown velocity & (l, m, t) ∈ × (0 T]. Laplacian can be de ned as with two dimensional behavior, is a non-linear reaction term. The coef cient ξ , η are advection and reactions coef cients accordingly with 0 < β < 1 & µ > 0. These parameters describe the interaction between reaction mechanisms, convection effects & diffusion transports [26,27]. Let us consider the initial condition, which can be seen from the upcoming Eq. (12). The Dirichlet boundary conditions are given by, where is a rectangular domain in R 2 & Z 0 , p 1 , p 2 , q 1 , q 2 are given suf ciently smooth functions, and Z(l, m, t) may represent unknown velocity, whereas Z.Z l , Z.Z m represents convection terms along with linear diffusion Z ll , Z mm . Such phenomena perpetuate the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon [28,29].
More generally, it is a challenging task for determining and preservation of physical properties like accuracy, stability, convergence criteria and design ef ciency for the given two-dimensional problem. This equation can be an effective procedure for the solution of various deterministic problems in physics, biology and chemical reactions. Also, deals in the investigation of the growth of colonies of bacteria consider population densities or sizes, which are non-negative variables. Most non-linear models of real-life problems are still very challenging to solve either numerically or theoretically. There has recently been much attention devoted to the search for better and more ef cient solution methods for determining a solution, analytical or numerical, to non-linear models [30,31]. In [31][32][33][34] authors present a method used to solve partial equations with the use of arti cial neural networks and an adaptive strategy to collocate them. To get the approximate solution of the partial differential equations Deep Neural Networks (DNNs) has been used, which shows impressive results in areas such as visual recognition [35]. Recently in [36], authors develop a numerical method with third-order temporal accuracy to solve time-dependent parabolic and rst-order hyperbolic partial differential equations. We focused on elaborating further by comparing analytical and numerical techniques.
Tanh-Coth Method
The dynamical balance between the non-linear reaction term and diffusive effects which constitute stable waveform after colliding with each other. In (1) the negative coef cients of Z ll , Z mm and Z 3 follow the physical behaviour of two dimensional BH Eq. (1). Such an equation can be converted into the non-linear ordinary differential equation which is as follows: Let σ = x − et, the wave variable which balances the non-linear reaction term (P µ,λ (Z)) where µ, λ are index values and diffusion transport (the highest derivative involved), we have M + 2 = 3, MM = 1. This enables us to set: Put M = 1 in (6) we get Let Y = tanh(γ σ ), and σ = ((x + y) − et) Then, Substitutes aforementioned in Eq. (6), we have the following solution to (7) 1 Arranging the coef cients of Y i , i ≥ 0, and equating these coef cients to zero, the system of algebraic equations in a 0 , a 1 , b 1 , γ and e are obtained. By solving the following set of the algebraic system of equations, we have the following form: In Eq. (10), the solution is of the form: Case 2: We found that a 1 = 0, From Cases 1 and 2, the kink solution is of the form: Now by solving (1) using the tanh-coth method, the analytical solution (kink solution ) is in a compact form in both cases is as follows: with initial condition: where Z is the unknown velocity, and γ & σ are wavenumbers which are developed during the solution of BH equation.
Description of Compact Schemes
Let us discretize the spatial domain which consists of N and M positive integers, such that h l and h m present step sizes along with l and m directions, respectively [37]. The spatial nodes can be denoted by l i , m j , namely, For the temporal domain, let us take dt as time-step discretization, τ = T/dt, with t n = Nτ . Also . . , w dt ) T , for any w ∈ ZZ τ , with some more notations: for n = 0, 1, 2, . . . , dt − 1.
Implementation Procedure:
Let us we divide (1) into two parts such as: Now considering one-dimensional steady convection-diffusion equation in the following form: where α 1 , α 11 are the constants while β 1 , β 11 are the convective velocities. F is the smooth functions of l and m may represent the reaction, vorticity. Now the three-point scheme is as follows: Now applying the Taylor series expansion to Eq. (14) we have the following results: where 0 ≤ n ≤ dt − 1 and the truncation error is By adding Eqs. (17) and (18) we have the following form (1) which yields: where Residual Apply Crank-Nicholson time discretization, which leads to: represents the truncation error [36][37][38][39]. The existence and uniqueness of the solutions of the scheme (21) can be easily found by positive de nite property. By applying operators and simplifying the (21) in compact form, The scheme is a system of linear equations based on variable Z n i, j , then after applying operators 21 can be written in the following way: where T 1 , T 11 , T 2 , T 22 , ff 1 , . . . , are all constants coef cients of Z n+1 i, j , Z n i, j and ff which includes α 1 , α 11 , β 1 , β 11 , h l , h m , τ and constant values.
The matrix form of the compact scheme is as follows By calculating and simplifying the terms, we have the following tridiagonal matrix if of the form: .
where D 11 matrix is the same as the matrix D 33 . The Eq. (23) is a tridiagonal block matrix.
The matrix we have generated is diagonally dominant and can solve through Thomas algorithm. Which authenticate the consistency & accuracy of the solution of the form
Description of Six order Compact Finite Difference Scheme
For complex systems the results will be dependent on the formation of the mesh, We apply higher-order compact scheme at the system in Eq. (1) with a uniform mesh at l = m = h l = h m . Scheme description is as follows:
First Boundary Point 1:
At the rst boundary point, the six order compact scheme is of the following form.
Above system in Eq. (26), the coef cients can be found by matching Taylor's series expansion comparing with various orders up to order O 7 , as a result, construction of the linear system is obtained. By constructing the linear system values of d s, which can be solved in the usual way to get the following along l direction, d 1 [36,37]. Others ones can found in the same way. [36][37][38][39]. Others ones can found in the same way.
Nth Boundary Point:
At Nth boundary point of six order compact scheme is of the following way: Above system in Eq. (28), by constructing the linear system values of d s, which can be solved in the usual way as done in boundary point 1 and 2.
Implementation Algorithm:
By arranging Eqs. (26)- (28) in the following algorithm: where P are mentioned in Eq. (1), also matrices A and B are N m × N m sparse with triangular nature along C and D are N l × N l sparse with triangular in shape.
Theorem:
The truncation error in the compact six order nite difference scheme for equations in the system (1) is,
Error Analysis
The convergence benchmark, ef ciency and accuracy of the proposed scheme in terms of norms can be de ned as: where
Stability Analysis
The stability is concerned with the growth or decay of the error produced in the nitedifference solution. For the representation of theoretical analysis, we set P = 0 in Eq. (1). Assuming the boundary conditions are accurately propagating, we can apply the Fourier analysis method to our proposed equation.
De nition : For a time-dependent PDE, the corresponding difference scheme is stable in the norm . if there exists a constant M such that e n ≤ M e 0 , for all n∇t ≤ t F where M is independent of ∇t, ∇x and initial condition e 0 .
Following the Von Neumann stability analysis criteria, x the non-linear terms so that for linear stability, the numerical solution can be displayed in the following way: where is the amplitude at time level n, √ −1 is called the imaginary unit. l , m leads to wave number in l, and m directions with l h l , m h m are phase angles. The ampli cation factor is de ned by By using Eqs. (30) and (35) and dividing by r.h.s of Eq. (35) and simplifying, we have the following form: where I = √ −1 and Eq. (37) is of the following way: where R & S are the compact forms of Eq. (37). For stability, it has to satisfy the following condition: After simpli cation to an aforementioned condition which holds true. Therefore, |E| ≤ 1 [38][39][40][41]. Hence the scheme is unconditionally stable.
Experimental Results
The novel numerical scheme is compared with the analytical results of Eq. (1) by using tanhcoth method. For this objective, we consider the same parameter α = µ = η = 1, and varying β. Numerical and analytic solutions are compared and justi ed in term of error norms to magnify the importance of higher accuracy.
Furthermore, to avoid turbulence, by varying β values in the Tab. 1 with grid size (15 × 15), dt = 0.001 and grid space =0.3125 with respect to time = 1 is observed. Improvement in accuracy is noted by varying the values of β parameter. Also, the BH equation produced the best results by using six order compact nite difference scheme. At different β values, Tab. 2 indicates error which increased at a very low rate by changing the values of β from high to low which make the comparison to previous work give authentication for accuracy [34]. The truncation error is calculated in Tab. 3, using L 2 , Relative error and L ∞ with xed grid size (31 × 31). By changing time steps dt = 0.001 with the same grid size showed results in the Tab. 4. The approximate results using six order compact scheme correspond to error norm are shown in the Tab. 6. In this, table the comparison of fourth-order and six order are analyzed by re ning the temporal space, which shows this scheme is better than the corresponding fourth-order. In the Tab. 7 six order and fourth-order compact nite difference scheme comparison is carried out which measured in term of L ∞ norm. Different parameters are also observed under the same scheme. In the Tab. 8 scheme ef ciency encountered using L ∞ , L 2 & Relative error norms. Graphical representation of numerical schemes on BH equation is observed. Comparison of analytical and numerical results by using fourth and sixth-order compact nite difference scheme has been analyzed. At t = 2, β = 0.1, dt = 0.0001, grid = (21 × 21) can be seen from the Fig. 1. While six order scheme at β = 0.1 with time-space dt = 0.0001 and grid space (21 × 21) is seen from the Fig. 2 which shows more accurate and re ne results as compared with Fig. 1 using the same parameter. In Figs. 3 and 4, analysis shows that the error norm using fourth-order scheme at β = 0.001. While in Fig. 5, we choose β = 0.0001 using a higher-order scheme to analyze error pro le at grid size (51 × 51). In summary, it is aspirant from the gures and tables; the analytical and numerical solutions are best tted with generation encrypting. In the end, the novel six order compact scheme is the best agreement with the analytical solution.
Comparison between approximation and analytical solutions is made at the nal time of computation time = 2 s at the critical point (1, 1) using fourth-order compact scheme at grid size (15 × 15). Comparison between approximation and analytical solutions is made at the nal time of computation time = 1 at the critical point (1, 1) using fourth-order compact scheme at grid size (31 × 31). Tab. 3 shows error pro le data by using fourth-order compact scheme at gridsize = (31 × 31) for unknown value Z(l, m, t). Selftime: is the time spent in a function excluding the time spent in its child functions while Totaltime is the time to execute the algorithm.
Tab. 4 shows error pro le data by using Six order compact scheme at gridsize = (31 × 31) for unknown value Z(l, m, t).
Central Processing Unit Performance
A combinatorial logic circuit executes the mathematical operation for each function in the algorithm within the central processing unit. To establish the platform of CPU performance along physical memory transmission capacity is observed when the higher-order compact scheme is developed by using MATLAB software [35,39,42,43]. By increasing the grid size, the number of calculations is increased, and it is dif cult to overcome such issue which can take a longer time to execute. Because of numerical schemes ef ciency, the computational experiment is done on two different computer machines like Lenovo 6th generation having 2.4 GHz 8 cores and 16 GB memory along 5th generation Dell machine having 4 physical cores and 16 logical cores. Different feathers involved in two computational experiments can be analyzed from the following data tables. Tab. 7 shows results for the different grid using 6th order compact scheme on Lenovo CPU oriented computational machine (MATLAB software).
Tab. 8 shows results for the different grid using 6th order compact scheme on DELL CPU oriented computational machine (MATLAB software). Comparison is performed to analyze Dell with Lenovo machines with both clock rate performance and relative performance. Thus MATLAB handles problems with care, and we can analyze results at each point of the loop and any iteration during computations.
Conclusion
Higher-order schemes for determining the two dimensional Burgers Huxley equation was developed in this paper. As it was not studied before by using such schemes of diffusive dissipation of errors. We came to know that the BH equation in two dimensional which is studied to nd ef ciency, accuracy and stability and by comparing with analytical and numerical approaches in terms of L 2 , L ∞ & relative errors. It is evident from the fact that computed numerical experiments of two dimensional Burgers Huxley equation, solutions obtained by fourth and six order schemes are in good agreement with the analytical solutions. Figures and tables clearly show the tendency of fast and monotonic convergence of the results toward the analytical solution. Also, the computational discretization of the proposed model results in a sparse tridiagonal structure of the matrix, which can be overcome by the Thomas algorithm. Results lead to a remarkable improvement in accuracy, ef ciency and computer performance which can be seen from data tables.
Funding Statement:
The authors received no speci c funding for this study.
Con icts of Interest:
The authors declare that they have no con icts of interest to report regarding the present study. | 4,578 | 2021-01-01T00:00:00.000 | [
"Mathematics"
] |
Optimization of Cross-Border e-Commerce Logistics Supervision System Based on Internet of Things Technology
,
Introduction
In recent years, the application of cross-border e-commerce is more and more widespread, and cross-border electronic logistics is also closely followed. One of the most important advantages of cross-border e-commerce is that it is able to break through the national boundaries of the traditional commerce; e-commerce may conduct business transactions between different countries, change the traditional offline trading patterns into online, and make the process more convenient and quick, combining the way of traditional logistics and e-commerce [1]. Cross-border e-commerce makes the circulation of goods at home and abroad much smoother, breaking the traditional commodity trading time and regional restrictions, but there are still some problems in the process of commodity transportation, because the national customs are very strict on the inspection of crossborder goods. erefore, the cross-border electronic logistics will face very strict customs screening and will also face different problems raised by the customs agencies of various countries. But in recent years, the cross-border e-commerce industry in China has been making continuous progress, which has also led to the development of the national economy to a large extent. erefore, the country has launched a variety of supporting policies, which undoubtedly makes the development of cross-border e-commerce quite faster. e development of cross-border e-commerce needs the support of fast and safe logistics delivery and requires a higher level of logistics service to improve competitiveness.
e main logistics problems faced by small-and mediumsized enterprises in the development of cross-border e-commerce in Quanzhou mainly include the following aspects: the high logistics cost, weakening the price advantage of cross-border e-commerce products; the long logistics links affecting the timeliness of cross-border e-commerce; the complex logistics process increasing the risk of cross-border e-commerce; the low level of logistics informatization affecting the customer experience of crossborder e-commerce; the shortage of cross-border logistics talents affecting the rapid development of cross-border e-commerce [2]. erefore, this paper aims to analyze the development status of cross-border e-commerce and the application of Internet of ings technology, find out the logistics problems faced by enterprises in the development of cross-border e-commerce, and try to find out the countermeasures to solve these problems. It is hoped that this study can provide theoretical reference for relevant research and provide reference for the rationalization of cross-border e-commerce logistics. e rest of this article is organized as follows: Section 2 discusses the relevant work. Section 3 elaborates the crossborder e-commerce logistics supervision system based on Internet of ings technology. Section 4 presents functional optimization and testing of cross-border e-commerce logistics supervision system. Section 5 summarizes the text. Under the framework of the monitoring system, this paper realizes the functions of group intelligent contract, legal anonymous identity authentication, intelligent transaction matching, abnormal data analysis and detection, privacy protection, and traceability. en the security, controllability, and operating efficiency of the framework are verified by security analysis and transaction monitoring software.
e measurement results show that the proposed crossborder logistics supervision system is secure and controllable and can protect the privacy of e-commerce logistics users and data, prevent forgery and fraud, and achieve the auditability and traceability of user behavior and user data.
Related Work
e development of cross-border e-commerce cannot be separated from the support of cross-border logistics. Only when the two develop together can they achieve a win-win situation. According to the collation and analysis of the published literature, it is found that experts and scholars mainly conduct research from two aspects in the research process: Ma et al. [3] analyzed the commonly used logistics modes and characteristics of cross-border e-commerce in China. Li et al. [4] analyzed the diversified demands of crossborder e-commerce and its development status in China and proposed a logistics and transportation mode suitable for cross-border e-commerce. Xie et al. [5] chose India as the research object, discussed the mode adopted in the development of cross-border e-commerce, analyzed the problems and countermeasures in the development of cross-border e-commerce, and laid a foundation for the development of cross-border e-commerce logistics mode in other countries. Taking China as the research object, Zhang et al. [6] discussed in detail the e-commerce development mode that can be adopted by small foreign trade agency enterprises in the development process and analyzed the development status and existing problems of their cross-border e-commerce. At the same time, the countermeasures that can be taken in the process of its development are given. It is concluded that foreign experts and scholars pay great attention to the analysis of the development model of e-commerce in the research process and analyze the existing problems and countermeasures in the use of the existing model. ese analyses provide reference for the development of cross-border e-commerce model of freight forwarders in China. Secondly, as for the research on the development strategy of crossborder e-commerce logistics, Sun et al. [7] analyzed the crossborder logistics mode that can be adopted in China's tea and clothing industries. Porambage et al. [8] proposed that crossborder e-commerce enterprises should form a good cooperative relationship with cross-border logistics enterprises, so as to ensure the logistics and transportation quality of crossborder e-commerce commodities under the B2C mode, reduce logistics and transportation costs, and improve logistics and transportation efficiency. Mohammed et al. [9] analyzed the commodity transport channels adopted by cross-border logistics enterprises and used empirical methods to verify the application of e-commerce platforms in the development process of cross-border e-commerce. Cao et al. [10] analyzed win-win cooperation approaches between cross-border e-commerce enterprises and logistics enterprises and pointed out that the Internet is the key to maintain their relationship. It can be concluded that when foreign experts study crossborder e-commerce logistics, they make targeted analysis on its existing problems and put forward specific solutions. ese solutions can provide reference for the sustainability of crossborder e-commerce business development of Chinese freight forwarding enterprises.
ere are also a large number of researches on crossborder e-commerce logistics supervision based on the Internet of ings technology in the literature. rough sorting and analysis, the research are mainly carried out from the following three aspects: Firstly, on the research of cross-border e-commerce and international logistics supervision mode, Xiao et al. [11] take the third-party logistics as the research object, analyze the development mode adopted in the context of cross-border e-commerce and the problems existing in the existing mode, and then put forward the countermeasures that can be taken. Mou et al. [12] chose Company T as the research object to analyze the logistics regulatory operation mode under the background of cross-border e-commerce and discussed from the aspects of its monitoring system, logistics service, operation technology, and transportation economy. Ai et al. [13] chose foreign trade enterprises as the research object to analyze their logistics and transportation supervision mode under the background of cross-border e-commerce. Secondly, regarding the research on the current situation of crossborder e-commerce logistics and the problems faced in supervision, Zhang et al. [14] take cross-border logistics enterprises as the research object, analyze the business models they can adopt and the regulatory problems they face in the process of economic globalization development, and then put forward corresponding solutions, so as to provide better logistics and transportation services for the development of cross-border e-commerce. Li [15] selected China's Heilongjiang province as the research object to analyze the problems in its cross-border commodity transportation process and the supervision scheme, that is, to further optimize the cross-border commodity logistics 2 Complexity and transportation process through the introduction of information technology and personnel training. irdly, on the research of cross-border e-commerce logistics supervision strategies, Li et al. [16] proposed the necessity and advantages of the development of third-party logistics through the analysis of the current development of crossborder e-commerce in China and proposed specific supervision strategies. Sun et al. [17] also chose the third-party logistics as an example to discuss the relevant situation of its development under the background of cross-border e-commerce and analyzed its development problems from internal and external aspects. Finally, combined with the specific situation of the development of the third-party logistics of cross-border e-commerce in China, the optimization strategies that can be adopted are proposed.
Construction of Basic Platform.
To achieve effective electronic logistics supervision, it is necessary to consider the construction of a reasonable cross-border e-commerce and logistics collaborative platform. ere is a mutual promotion and inseparable relationship between cross-border e-commerce and logistics. e development of cross-border e-commerce cannot be separated from the support and guarantee of logistics, and the development of logistics also needs the strong promotion of cross-border e-commerce. Although cross-border e-commerce and logistics belong to two different industries, they are actually a whole and closely linked to each other. e coordinated development of the two has an important impact and significance on the economic growth and development of a region and even a country. According to the composite system theory, crossborder e-commerce and logistics belong to two systems, but they are contained in a large composite system, and belong to two subsystems. e two subsystems are affected by a variety of factors, which can affect the structure, behavior, and function of the system and can be called order parameters. e size of these factors reflects the operating state and order degree of the subsystem in different stages. Based on the construction and research of the synergistic degree model, this paper constructs the synergistic development model of cross-border e-commerce and logistics.
Construction of Subsystem Order Degree Model.
Suppose that the composite system of cross-border e-commerce and logistics is S i , where i � 1, 2, that is, S i � {S 1 , S 2 }, S 1 and S 2 , respectively, represent the subsystem of crossborder e-commerce and logistics in the composite system. e order parameters in the subsystem development process can be used e i � (e i1 , e i2 , . . ., e ij ), where j � 1,2,3, . . ., n, α ij ≤ e ij ≤ β ij , α ij and β ij are, respectively, the upper and lower limits of the order parameter index data of the critical point of system stability [18]. en the value range of the order parameter of the cross-border e-commerce subsystem is α 1j ≤ e 1j ≤ β 1j , and the value range of the order parameter of the logistics subsystem is α 2j ≤ e 2j ≤ β 2j . e order parameter is the degree of system ordering and the measure of the collaborative development of subsystems. Its size represents the degree of macroorder. When the order parameter is zero, the system is disordered [19]. When E i1 , E i2 , . . ., E ij represents the benefit index, the larger the value is, the higher the degree of order is; the smaller the value is, the lower the degree of order is. When E i1 , E i2 , . . ., E ij represents the cost index, the larger the value is, the lower the degree of order is; and the smaller the value is, the higher the degree of order is. erefore, the order degree of order parameter component index e ij of the subsystem can be defined by its efficiency function, that is, the contribution degree of each order parameter index in the subsystem, as shown in Starting from the whole composite system, the overall coordination and development of the system is not only determined by the contribution of order parameter components in the subsystem, but also determined by the integration and integration among order parameters. erefore, the calculation of subsystem order degree can be realized by the integration of the function μ (e ij ) of order parameter e i [20]. erefore, the order degree of the statutory subsystem can be calculated by linear weighting, as shown in where u i (e i ) in the formula represents the subsystem order degree, and 0 ≤ u i (e i ) ≤ 1. e larger the value is, the greater the contribution will be and the higher the subsystem order degree will be. w j is the weight coefficient of each order parameter component index, namely, the relative weight of each order parameter, and refers to the role of each order parameter in the ordered development process of the subsystem.
Establishment of the System Synergy Degree
Model. e development of cross-border e-commerce and logistics is a process of dynamic evolution, and the relationship and function between various subsystems are not invariable [21][22][23]. erefore, the degree of order and synergistic effect in different periods should be dynamically measured with the development [24,25]. Assume that the order degree of subsystem Si of the composite system of cross-border e-commerce and logistics is u t0 i (e i ) at the initial stage of t 0 and u t1 i (e i ) at the moment of T1 in the development and evolution process and then in the process of development from t 0 to t 1 . e synergistic degree of the composite system composed of two subsystems, cross-border e-commerce and logistics, is U, and the calculation of U is shown in the following formula: It can be seen from the formula that the degree of synergy between cross-border e-commerce and logistics system is obtained by the change of the order degree of subsystems, and the value range of its degree of synergy U is [−1, 1]. e closer it is to 1, the higher the degree of synergy between cross-border e-commerce and logistics will be, and the closer it is to −1, the lower the degree of synergy of the system will be. e synergy model constructed above fully analyzes the orderly development of the two subsystems and provides a systematic basis for the establishment of an effective and feasible supervision system.
Structure of Cross-Border E-Commerce Logistics Network.
Based on the characteristics of cross-border e-commerce, the structure of cross-border logistics network is divided into three stages: logistics of the exporting country, international logistics, and logistics of the importing country. e nodes are the sellers of cross-border e-commerce, the transit warehouse of the exporting country, the transit point of international logistics, the bonded warehouse of the importing country, and consumers, respectively [8,26,27]. e connection between the nodes is the line connection between the nodes. e five types of nodes in the cross-border e-commerce logistics network are (1) cross-border e-commerce sellers, including traditional enterprises, e-commerce enterprises, and individual merchants; (2) transit warehouse of the exporting country, where the exported commodities are gathered through the domestic logistics link to complete the process of commodity inspection and customs declaration; (3) international logistics transit points, including ports and airports, corresponding to water and air transport modes, respectively; (4) the bonded warehouse of the importing country, where the customs declaration and commodity inspection are completed, and some merchants keep their goods in the bonded warehouse of the customs, which is equivalent to the distribution center; (5) consumers. After the last kilometer of distribution, the goods finally reach the consumers, and the whole cross-border e-commerce logistics process is completed. e physical network topological structure formed by these nodes and the connections between nodes and the goods flow on the connections between nodes constitute the cross-border e-commerce logistics network system.
Cross-border e-commerce logistics network is composed of nodes and edges with different natures. To build a crossborder e-commerce logistics network model, these nodes and edges need to be abstracts into homogeneous nodes and edges of the network. e logistics network model of crossborder e-commerce is described as In the formula, G is the entire cross-border e-commerce logistics network; V is the node set of cross-border e-commerce logistics network, including cross-border e-commerce sellers, transit warehouse of exporting country, transit point of international logistics, bonded warehouse of importing country, consumers, and other practical nodes. If V � n, it means that there are n nodes in the network, V � {v 1 ,v 2 ,v 3 ,. . ., v n }. E is the side of cross-border e-commerce logistics network, including urban roads, sea routes, air routes, and highways. D is the set of section distance, D � d ij |i, j ∈ n , is the set of load flow, and is also the set of edge weight. In this way, the cross-border e-commerce logistics network is transformed into a weighted undirected connected graph including cross-border e-commerce sellers, transit warehouses of exporting countries, international logistics transit points, bonded warehouses of importing countries, consumers, and logistics roads, as shown in Figure 1.
Cross-border e-commerce logistics network is distributed in the global scope, which has the characteristics of small batch, multifrequency, long distance, and being easy to be affected by spatial geographical conditions. At the same time, cross-border logistics passes through many intermediate nodes and has a long logistics cycle. In the cross-border e-commerce logistics network, nodes mainly realize different functions such as warehousing, packaging, and customs declaration. From the perspective of network structure, the degree of different nodes is very different, which has the characteristics of nonuniform distribution.
In the process of cross-border e-commerce logistics, there are five types of nodes, including cross-border e-commerce seller, transit warehouse of exporting country, transit point of international logistics, bonded warehouse of importing country, and consumer. When constructing network model with complex network theory, the nodes are abstracted into homogeneous network nodes without considering the heterogeneity of nodes. erefore, this section takes the differences of nodes into consideration and divides the nodes of cross-border e-commerce logistics network into the following three categories: Class I nodes: terminal nodes, including nodes of cross-border e-commerce sellers and consumers. In comparison, the business volume of such nodes is small and the degree of such nodes is small. e second is Class II node: midtransition logistics node, including transit node in the international transportation process such as transit port, airport, and railway station, as well as transit warehouse node in the exporting country, which realizes logistics transit activities such as functional storage, cargo collection, and distribution.
is class of nodes is of medium degree. e third is Class III node: middle transition function node. is kind of node refers to the customs and realizes the national inspection and commodity inspection of goods. When the goods need to be stored in the bonded warehouse, this kind of node also realizes the storage function. is class of nodes has the greatest degree. Connected nodes, the main 4 Complexity function of transport cross-border electricity logistics network nodes between single transport cargo is small, but the current frequency of relatively more, and the frequency of different logistics process between different nodes, this leads to cross-border electricity logistics network, the edge load capacity is also different, this will be represented by the edge of the weight. At the same time, the weight of the node will also be expressed by the weight of the edge. e cross-border e-commerce logistics network is similar to the general logistics network in that the flow of goods is b i -directional, so the directional problem is not considered in the cross-border e-commerce network. To sum up, the cross-border e-commerce logistics network constructed in this paper is the undirected weighted network graph G � V, W, R { }, as shown in Figure 2, which represents the nodes in the network, represents the risk grade coefficient on the edge, and represents the weight of the edge in the network. Adjacency matrix is established according to network graph G, as detailed below: where a ij and b ij , respectively, represent the values of the adjacency matrix in the horizontal and vertical directions, where i � 1,2,3, . . ., n; j � 1, 2, 3, . . ., n.
Construction of Cross-Border E-Commerce Logistics Supervision System.
is paper focuses on the theoretical framework of cross-border logistics monitoring and supervision and gives the corresponding implementation methods. e hierarchical and multilayer intelligent logistics regulatory framework is shown in Figure 3.
Each node on the chain usually has a corresponding industry, such as logistics and financial entities. Participants join the network through authorization and form a stakeholder alliance, which maintains the operation of the block chain. erefore, as a general solution, this framework is applicable not only to the field of crowd sourced logistics, but also to other industrial areas such as inclusive finance. e whole transaction regulatory framework deploys two levels of regulatory points. Among them, the supervisory point I is the top-level national authorized certification center of the supervisory structure, which is the trust root of the intelligent service transaction regulatory framework to deploy and implement the supervision module of enterprise implantation to the alliance chain of the registration marrow. All industries or organizations to the national authorized certification center to apply for the deployment of the implementation of the alliance chain need to provide the access proof including enterprise legal person, qualification certificate, and other entity information and sign the monitoring implant informed agreement section. Monitoring and implantation of informed agreement are a legal effect of the alliance chain implementer to accept the supervision of the superior competent department of the National Authorized Certification Center. It shows that all the transaction data and operation logs are authorized by the National Authorized Certification Center real-time access.
Logistics Supervision Methods Used by the System.
is section takes the modern logistics service industry as an example to introduce the transaction supervision method supported by the hierarchical and multilayer intelligent logistics service regulatory framework. As shown in Figure 4, where 19 tags have been added, the implementation of the whole approach involves two aspects: (1) Supervision of logistics service platforms by the National Certification and Authorization Center: the intelligent service platform first needs to submit the qualification to the national authorized certification 6 Complexity center (AC). e National Authorized Certification Center shall conduct a comprehensive assessment on whether the logistics service platform has the qualifications after necessary identity, qualification, and reputation retrieval. If it has the service access qualification, it will be issued with the corresponding authorization license (license) to make it become a legal intelligent transaction service node. Strapped home authorization certification center to the logistics service platform issued the authorization license and the service platform industry alliance chain implants the corresponding functional level of regulatory functions. e logistics service process refers to the use of chain plug-ins embedded in the logistics service platform and industry alliance supervision functions by the national authorized certification center for data collection and online analysis. It captures the abnormal behavior in the process of the trade service platform in real time, such as the data on the behavior of prohibited items on the national chain.
(2) e logistics service platform has great influence on the main participants of logistics: logistic service requesters (Requester) and workers (Workers) register in the logistics service platform, and the service platform checks their identity, reputation, etc. (if necessary, they need to seek help from the National Reputation Center through the National Authorization Center). Requester (specifically and successfully registered) submits a logistics task request to the logistics service platform, and the server triggers a smart contract (running a specific task matching algorithm) to automatically match the logistics request with the Worker and submits the matching result to Miner. e service platform will reward and punish workers according to the completion of logistics tasks.
Implementation of the Regulatory
Framework. e software architecture of the cross-border e-commerce logistics supervision system is shown in Figure 5. e middle part is the intelligent logistics service center. It is divided into the overall architecture design, hierarchical multilayer supervision design, and monitoring node function design. e overall software architecture design includes the following functions and research contents: (1) regulation area division and node deployment, aiming at improving the efficiency, accuracy, and expansibility of regulation; (2) design of regulatory interface for intelligent service trading, that is, different regulatory access modes adopted for different types of energy service trading systems. e functions of supervision node mainly include object identity authentication, transaction record storage, abnormal behavior gun detection, supervision action implementation, supervision log recording, and other functions.
Based on the above functional requirements, the intelligent service trading regulatory framework platform development includes verification of the built intelligent service trading regulatory platform availability and efficiency. At the same time, it verifies the availability and efficiency of the service transaction operation and supervision scheme focused on the supervision platform and the traceability scheme oriented to the multimodal transaction object, and then it comprehensively verifies the stability, real time, and accuracy of the intelligent service transaction supervision platform.
Risk Resistance Testing and
Optimization. Now, it is assumed that there are 2 international logistics transit points, 8 cross-border e-commerce seller and exporter transit warehouses, and 10 cross-border e-commerce bonded warehouses and demand points.
is network is abstracted into an undirected network weighting diagram with 20 nodes and 34 edges by using the representation method of complex network (as shown in Figure 6). e simulation experiment process is as follows: in the random destruction simulation experiment, one node or edge is deleted randomly each time until all edges or nodes are deleted, and the results of 8 experiments are averaged. In the simulation experiment of intentional attack, the importance of nodes or edges is sorted according to the number of nodes or edges, and one node or edge is deleted each time from high to low. When there are multiple nodes or edges with the same number of nodes or edges, it is determined by random selection until all nodes or edges are deleted.
As can be seen from Figure 6, in the case of random destruction, the proportion of nodes deleted in the network reaches about 80%, and the network risk efficiency drops to 0, indicating that the network is in a state of paralysis at this time. However, in the case of deliberate attack, the proportion of nodes deleted in the network is about 60%, and the network is in a state of paralysis. It can also be seen that intentional attacks are more destructive than random ones. All these indicate that the cross-border e-commerce logistics network is vulnerable to deliberate attacks and robust to random destruction, which proves that the cross-border e-commerce logistics network is a scale-free network. e loss caused by random sabotage and deliberate attack to cross-border e-commerce logistics network is almost the same. With the increase of the proportion of edge deletion, the loss caused by deliberate attack to network increases sharply. When the proportion of edge deletion reaches about 80%, the loss caused by deliberate attack and deliberate attack tends to be the same. Again, intentional attacks are more destructive than random ones. Moreover, the stability of the network is maintained by the important edges. In the actual security maintenance of cross-border e-commerce logistics network, special attention should be paid to the protection of important traffic lines. Keeping the lines unblocked can effectively maintain the network structure, thus reducing the occurrence of network losses.
Optimization of Supply Chain Supervision.
In terms of supply chain management, compared with traditional trade, the supply chain of cross-border e-commerce has the characteristics of fewer links in the whole supply chain process and convenient transaction mode. ere are big differences in the way they are regulated. e import and export of cross-border e-commerce mainly include four regulatory modes: "bonded import online," "direct purchase import," "general export," and "export from special regions." As shown in Figure 7, crossborder e-commerce companies generally sell goods directly to consumers through manufacturers or brand owners, and there are only international logistics, overseas warehouse, bonded warehouse, and customs clearance procedures in the middle. In addition to the above process nodes, traditional trade generally involves general distributors, provincial distributors, regional distributors, and retailers. ere are many transaction links and the overall cost of supply chain is high. Domestic consumers in cross-border electric business platform after order, the merchants of platforms, or the electric business platform will be the order of the order information, payment information, and logistics waybill information submitted to the customs special cross-border electricity cross-border import customs clearance formalities declaration system (i.e., cross-border business model declaration), and then the customs are in accordance with the terms of this commodity category such as tax rate and tax department. Import and retail tax shall be levied on the goods in the order, and automatic verification and cancellation shall be carried out on the commodity account book of the customs after the relevant information is verified and released. e general trade mainly includes import and export declaration, inspection, payment of taxes and fees, goods release four supervisory links. For general trade import and export declaration, the first is the enterprise or individual, in accordance with the relevant law within the prescribed time limits, in the import and export port or close condition, the corresponding electronic customs clearance or paper declaration, declare to the customs import and export commodities actual situation, including the information such as quantity, price, countries, and regions, and then the customs to declare documents for review process.
en, it is necessary to cooperate with the customs for inspection. In order to determine whether the corresponding import and export commodities match with the declaration documents, or to determine the geographical and physical attributes of the commodities, the relevant commodities need to be checked. In this way, we can judge whether the enterprises or individuals who declare are cheating or not. At the same time, the customs inspection also provides reliable data support for the reasonable taxation of the customs.
Orderly Degree Test of the Supervision System.
In the two subsystems of cross-border e-commerce and logistics, the influence degree of each order parameter index on the system is different. In order to improve the accuracy and effectiveness of their collaborative supervision system, different weights should be given to each order parameter index. e weighting method mainly includes the subjective weighting method and the objective weighting method, and the objective weighting method has the principal component analysis method, standard deviation method, entropy weight method critic method, etc. [28,29]. e index of order parameters of crossborder e-commerce and logistics in Henan province is calculated by standard deviation formula. In this paper, SPSS18.0 software was used to carry out correlation analysis on the order parameter indexes of the cross-border e-commerce subsystem and logistics subsystem of a city, and the correlation coefficient matrix was obtained. e weights of the order parameter indexes of the cross-border e-commerce subsystem and logistics subsystem of the city were, respectively, calculated. e results are shown in Figure 8. In addition, the overall synergy degree between crossborder e-commerce and logistics is calculated according to the composite system synergy degree model. e collaborative development trend of cross-border e-commerce and logistics in Henan province plotted by the data results of the order degree of cross-border e-commerce and logistics subsystem and their overall synergy degree is shown in Figure 8.
Conclusion
is paper proposes a regulatory framework for crossborder e-commerce based on the Internet of ings, aiming at the typical industry of crowd sourced logistics in modern service industry. e system adopts a two-level supervision model: the first level is the supervision of the national authorization center on the logistics service platform and the second level is the supervision of the logistics service platform on the logistics participants. en, with the business logic of crowd sourcing logistics as a reference, the implementation method of e-commerce logistics transaction regulatory framework is elaborated in detail, and the closedloop security analysis of the scheme is carried out according to the chronological sequence. e analysis results show that the proposed hierarchical and multilayer cross-border e-commerce logistics regulatory framework is safe and controllable. Finally, the corresponding software design architecture is proposed, which realizes the functions of node analysis, optimization of system network structure, implementation and supervision of logistics service based on Internet of ings, and traceability of multimodal transaction objects. In the case of random destruction, the proportion of nodes deleted in the network reaches about 80%, and the network risk efficiency drops to 0. With the continuous development of e-commerce, cross-border logistics should also keep up with the pace of e Times and constantly improve the logistics and transportation system. Meanwhile, cross-border logistics involves multiple logistics companies, which can exchange experience, communicate, and coordinate with each other continuously, so that goods can reach customers more quickly, and accelerate the development speed of electronic business and optimization.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 7,585.2 | 2021-05-27T00:00:00.000 | [
"Computer Science",
"Business",
"Engineering"
] |
Quenched vs Annealed: Glassiness from SK to SYK
We show that any SYK-like model with finite-body interactions among \textit{local} degrees of freedom, e.g., bosons or spins, has a fundamental difference from the standard fermionic model: the former fails to be described by an annealed free energy at low temperature. In this respect, such models more closely resemble spin glasses. We demonstrate this by two means: first, a general theorem proving that the annealed free energy is divergent at low temperature in any model with a tensor product Hilbert space; and second, a replica treatment of two prominent examples which exhibit phase transitions from an"annealed"phase to a"non-annealed"phase as a function of temperature. We further show that this effect appears only at $O(N)$'th order in a $1/N$ expansion, even though lower-order terms misleadingly seem to converge. Our results prove that the non-bosonic nature of the particles in SYK is an essential ingredient for its physics, highlight connections between local models and spin glasses, and raise important questions as to the role of fermions and/or glassiness in holography.
What the SYK model does not exhibit is spin glass physics [25][26][27]. This is surprising, because the SYK Hamiltonian bears a striking similarity to the quintessential mean-field models of spin glass theory [28][29][30]. Both the SYK and spin glass models are defined by randomstrength interactions among all degrees of freedom. Here we show that the essential difference is the fermionic nature of the particles in SYK: any model with strictly local degrees of freedom will share much more in common with spin glasses.
This result is relevant because interest in SYK physics has spread to generalizations of the original model. To name a few: including multiple flavors of fermions [31], using bosonic particles [2,32,33], using spins [34,35], forming lattices of SYK models [36,37], and introducing supersymmetry [38,39]. With the analysis presented in this paper, we are able to immediately identify large classes of such models in which the potential for glassiness must be carefully addressed.
On the spin glass side, all-to-all disordered models have featured prominently for decades. Sherrington and Kirkpatrick first introduced a system of Ising spins with infinite-range random interactions which exhibits an intricate spin glass phase [28,40,41]. The model has been extended in numerous directions, both classical and quantum, many of which are central to the field in their own right: p-body interactions [29,42,43], spherical spins [44][45][46], Potts spins [47,48], Heisenberg interactions [1,2,30,49], and transverse fields [50][51][52], among many others. These variants all share certain phenomena which unite them as spin glasses. As one lowers the temperature, the system first undergoes a "dynamical" transition at temperature T d , below which dynamical correlation functions never fully decay. The system experiences a further "static" transition at a potentially lower T s , below which one can detect frozen magnetization patterns in the equilibrium Gibbs distribution. Certain systems undergo a third "Gardner" transition at an even lower T g , below which the magnetization patterns become more complex, with sub-patterns and so on. For pedagogical expositions of the physics, see Refs. [53][54][55].
All indications are that the SYK model does not show any such behavior [1,[25][26][27]56]. This raises multiple questions, chief among which is simply: which generalizations of the SYK model do have spin glass phases? Presumably such glassiness would rule out any connection to quantum gravity (although that is itself an important open question). It has long been known that the bosonic variant of SYK is a spin glass [2,3,32], yet this is merely one model out of the multitude which could arise. A recent numerical study on small systems found evidence suggesting that the hard-core bosonic variant is a spin glass as well [33]. Beyond this, the question has remained unexplored. There has been no general framework for understanding when all-to-all disordered systems behave as spin glasses rather than SYK.
This paper aims to fill that gap. On a technical level, generic models can be analyzed in two ways, and we address both. The first relies on the replica formalism: one expresses the moments of the partition function as a path integral and uses standard mean-field techniques to obtain the free energy [53][54][55]. One can circumvent replicas by making the "annealed" approximation, namely replacing the partition function by its first moment at the outset. The second approach is to organize the diagrammatic expansion of the propagator in powers of system size N . One averages each term over the disorder and finds that a summable set of diagrams (the so-called "melons" in SYK) gives the leading-in-N contribution. This ultimately gives the same results as the annealed approximation. Even though the annealed approximation appears to be correct for the SYK model, it is in general extremely unreliable at low temperature. Indeed, breakdown of the annealed approximation is often what signals entry into a spin glass phase. We will be studying this breakdown and its consequences in generic all-to-all disordered systems.
In Sec. II, we introduce our notation and the specific models which will serve as our examples. In Sec. III, we prove that the annealed approximation cannot hold at low temperature in any model for which the Hilbert space is a tensor product. This includes bosons (soft-& hardcore), spins, distinguishable particles, etc. It shows that all such models are fundamentally different from SYK. In Sec. IV, we then give a more detailed and transparent analysis of the hard-core bosonic and quantum p-spin models. Despite the models not being fully solvable, we show that each undergoes a transition from an annealed phase at high temperature to a non-annealed phase at low temperature. Lastly, in Sec. V, we use a concrete example to demonstrate the difficulty in obtaining such results through a 1/N expansion.
II. MODELS AND DEFINITIONS
Here we define the models of interest, starting with the original SYK model and then introducing various modifications. All of the models discussed here are in fact ensembles of Hamiltonians given by Gaussian random couplings. We also give a very brief description of the replica method. More detailed accounts can be found in the references.
• Fermionic models: The original SYK model is defined using N Majorana (i.e., Hermitian) fermion operatorsγ i . Note that the Hilbert space of the theory has dimension 2 N/2 . The Hamiltonian, which has an even integer q as a parameter, is where the couplings J i1···iq are independent Gaussian random variables with mean zero and variance One can also consider the analogous complex SYK model, where the Majorana operators are replaced by complex fermionsĉ i andĉ † i (p ≡ q/2 of each): Here and throughout, we use a convenient notation in which the multi-index I represents a set of p indices i 1 < · · · < i p arranged in increasing order.
Thus H cSYK consists of all possible p-body interactions. The couplings J II are again independent Gaussians, but now complex with and such that J I I = J * II . One reason for considering H cSYK as opposed to H SYK (or vice-versa) is that H cSYK has a conserved particle number.
• Bosonic models: The bosonic SYK model simply replaces the fermionic operatorsĉ i with bosonic operatorsb i : The couplings J II remain exactly as in Eq. (4). An issue with this definition is that in the grandcanonical ensemble, where the number of particles is unlimited, H bSYK is unbounded from below. One could therefore work at fixed particle number, as past works on the bosonic SYK model have done [1,2,32], or one could interpret theb i as hard-core bosons [33] (i.e., exclude double occupancies on sites). Either choice guarantees that the model has a definite ground state. We shall do the latter: in addition to being more interesting (in the sense that much less is known about it), the hardcore model has the benefit of having a well-defined grand-canonical ensemble.
• Spin models: The quantum p-spin model consists of all-to-all p-body interactions among spinŝ σ α i (α ∈ {x, y, z}): where I is the same multi-index as before and A = {α 1 , · · · , α p }. We will use spin-1/2, but our results apply to any spin. The couplings J A I are real Gaussians with variance Connections between this spin model and SYK have recently been explored in Refs. [34,35].
It should be stressed that our conclusions are in no way restricted to these models. We focus on those listed here solely for the sake of concreteness and current relevance. Regardless of the model, one is always faced with the question of how to treat the random couplings (the "disorder"). Assuming the ultimate goal is to calculate the statistics of physical observables, an important quantity is the "quenched" free energy: where E[ · ] denotes the average over random couplings and Tr[ · ] is the usual sum over states. Derivatives of f (β) clearly give the disorder-averaged values of observables, exactly as the free energy does in non-random systems. f (β) is extremely difficult to evaluate, even for classical systems. The replica method is one of the few ways to make analytic progress. It is based on the identity E ln Tre −βH = lim One evaluates the average on the right-hand side for integer n, interpreting (Tre −βH ) n as the partition function for n uncoupled "replicas" of the system, i.e., where {Ψ} is a complete set of states. In the first line, the operators and states live in the original Hilbert space H, whereas in the second line, they live in the product space H ⊗n . Assuming one can obtain an analytic expression for the disorder average of Eq. (10), one then pretends that n is an arbitrary real number and takes the n → 0 limit. This technique is clearly not rigorous. It has nonetheless been tremendously successful in the study of disordered systems [53][54][55].
A drastic but useful approximation which avoids replicas entirely is to interchange the disorder average and logarithm in the definition of the quenched free energy (and then take the average inside the trace). This gives the "annealed" free energy: Note that derivatives of f (ann) (β) do not correspond to physical quantities. One often finds that f ∼ f (ann) at high temperature but that f (ann) gives patently incorrect results at low temperature (see Sec. III). The SYK model seems to be the only known non-trivial counterexample.
As an aside, the terminology "quenched" versus "annealed" comes from metallurgy, and refers to whether fluctuations in the disorder (accounted for by E[ · ]) are treated on the same footing as thermal fluctuations in the degrees of freedom (accounted for by the trace). Eq. (8) treats the disorder as fixed when computing observables and only afterwards averages over disorder, whereas Eq. (11) sums over fluctuations in both simultaneously.
III. BREAKDOWN OF THE ANNEALED APPROXIMATION IN TENSOR PRODUCT MODELS
Here we prove a general result: the annealed free energy cannot be correct at low temperature for any all-toall model with a tensor product structure. Specifically, consider any N -particle Hamiltonian of the form where I denotes sets of p particles and A denotes sets of p indices from some group of size k, and J A I is Gaussian with We shall take the operators O α i to be Hermitian, but models such as complex SYK involving non-Hermitian operators can be treated in the exact same manner. The only restriction we place on the operators is that they obey a tensor product structure: the Hilbert space H is a tensor product H 1 ⊗ · · · ⊗ H N andÔ α i is shorthand for 1 1 ⊗ · · · ⊗Ô α i ⊗ · · · ⊗ 1 N . The quenched and annealed free energies are, respectively, We prove that there is a finite β * such that for β > β * , A. Warm-up Let us first consider a classical model, for which the annealed free energy is easily computed. The Sherrington-Kirkpatrick (SK) model mentioned in the introduction is with Ising spins σ z i and Var[J ij ] = 1/N . A simple calculation gives and thus Yet if Eq. (18) were the correct expression for the average free energy, then the average energy per spin would be = −β/2 and the average entropy would be s( ) = ln 2− 2 . This cannot be, since the entropy in a discrete configuration space is non-negative: the number of configurations Ω( ) within a small energy window around is a non-negative integer, thus lim N →∞ N −1 ln Ω( ) is either −∞ or non-negative. The annealed free energy of the SK model must be invalid for < − √ ln 2, i.e., β > 2 √ ln 2.
B. Generic tensor product models
The statement that the entropy must be non-negative applies equally well to quantum systems: simply replace the word "configurations" by "energy eigenstates". For any Hamiltonian H g of the form in Eq. (12), we give an upper bound to f (ann) which diverges to −∞ as T ≡ 1/β → 0. It follows that the annealed entropy, being −∂f (ann) /∂T , must diverge to −∞ as T → 0, and thus cannot be correct below a certain temperature. See Fig. 1 for a sketch of the situation.
Since the variousÔ α may not commute for different α, we cannot directly evaluate the annealed free energy as for the SK model. Yet we always have Jensen's inequality: for any quantum state |Ψ . Summing Eq. (19) over a complete set of states, averaging over disorder, and taking the logarithm, we find that Note that Eq. (20) holds for any basis |Ψ used on the right-hand side.
The tensor product structure allows us to use a product basis, i.e., Furthermore, since the operatorsÔ α are not identically 0, there must be some single-particle state |ψ * i for which at least for some α. Use this |ψ * i as a basis state. Then where the omitted terms (coming from A = {α, · · · , α}) are positive. Inserting into Eq. (20) gives our final bound: Clearly f (ann) → −∞ as β → ∞, as claimed.
The divergence of f (ann) at low temperature is not limited to Gaussian disorder. The Gaussian coupling distribution was used only to evaluate the average in Eq. (20), and an analogous bound can be obtained for any other distribution. For example, suppose each J A I has some alternate probability density P (J) for which the mean is zero and the variance is still given by Eq. (13). Assume P (J) falls off faster than exponentially for J 2 Var[J], so that we can safely expand inside the average: We can proceed with the proof as before and obtain the same Eq. (24), for any such P (J).
For the sake of concreteness, we next consider some specific models.
C. Example: Quantum p-spin
A natural basis to use for the quantum p-spin Hamiltonian (Eq. (6)) is theσ z i eigenstates | ↑ and | ↓ . Both states have expectation values | σ z (20) gives the bound The extra term compared to Eq. (24) comes from summing over the 2 N basis states, which we neglected for simplicity in the general treatment.
D. Example: Hard-core bosonic SYK
In this case (Eq. (5)), we have that
E. Example: Complex fermionic SYK
Note that the bound in Eq. (20) always applies, regardless of whether the Hilbert space is a tensor product or not. In particular, it holds for the fermionic SYK model (both real and complex), for which the annealed free energy seems to be correct at all temperatures. It is informative to see how this result is consistent with the bounds obtained through Eq. (20), in contrast to the examples above.
Given the similarity between hard-core bosons and fermions, and given that the states (|0 ± |1 )/ √ 2 yielded a free energy diverging at low temperature in the former, let us consider the analogous basis in the fermionic Hilbert space: Here the state index Ψ ∈ {0, 1} N is denoted as a vector ψ, and similarly for s. Starting from Eq. (20), we need to evaluate Note that to leading order in N , none of the i k are equal to any of the j l . A given term vanishes unless s i1 = · · · = s ip = 0, s j1 = · · · = s jp = 1. Furthermore, we need s k = s k except for k ∈ {i 1 , · · · , i p , j 1 , · · · , j p }, in which case s k = 1 − s k . Thus (−1) ψ·( s+ s ) = (−1) ψi 1 +···+ψi p +ψj p +···+ψj 1 , which can be taken outside the sum.
Additional minus signs come from rearranging the fermion operators. First note that factors ofĉ i1ĉ † i1 ,ĉ j3ĉ † j3 , etc., in which the two matching operators are adjacent, can be treated as the identity: as a pair, they commute past all other operators, andĉ iĉ † i |0 = |0 . Thus owing to the initial order of theĉ † l in Eq. (29) (the index increases from left to right), one can convince oneself that we obtain a factor of −1 for each k less than i 1 , each k less than i 2 , each l less than j 1 , each l less than j 2 , and so on. But now suppose that j 1 ≤ i 1 − 2. Each choice of s can be associated with an r according to s j1−1 = 1 − r j1−1 , s j1+1 = 1 − r j1+1 , with all other s l = r l . The two vectors give contributions differing by exactly one minus sign, and therefore sum to 0.
The lesson is that Eq. (30) evaluates to 0 unless every i & j index is adjacent to another, e.g., i 1 = j 1 − 1 or i 2 = j 1 + 1. Yet this restricts the number of free indices for us to sum over, and we needed all 2p to be free in order to obtain an extensive bound on f (ann) (a factor of N 2p to compensate for N −(2p−1) from the coupling variance). In the thermodynamic limit, the only bound we obtain in this case is The right-hand side does not diverge as β → ∞, and f (ann) cSYK has the potential to remain correct even at zero temperature.
IV. REPLICA ANALYSIS FOR SPECIFIC MODELS
Much additional insight comes from considering the replica analysis in detail for specific models. We shall focus on the hard-core bosonic and p-spin models. Yet keep in mind that even though we limit ourselves to these two for ease of presentation, our analysis is in fact much more general. It can be applied with minimal modifications to any model which admits a path integral representation, even if local constraints on the fields are required.
Furthermore, the replica analysis allows us to make conclusions about the high-temperature behavior of the models. Indeed, we show that the hard-core bosonic and p-spin models undergo genuine phase transitions: for each, there exists a β c such that the free energy equals f (ann) for β < β c and does not for β > β c . We are able to say this without needing to calculate the precise functional behavior of f (ann) .
A. Hard-core bosonic SYK
The hard-core bosonic SYK model is given by Eq. (5), reproduced here: To construct a path integral representation of the partition function, we express each hard-core boson operator b i as a pair of fermions together with a constraint: The partition function is then with Note that we enforce the constraint by way of a Lagrange multiplier µ i on each site. Also note that we use a slightly non-standard definition of imaginary time: τ ranges from 0 to 1 for all β. This will be convenient in what follows. As described in Sec. II, we now evaluate E[Z n bSYK ]. To save space, we give the steps of the calculation in Appendix A. The method is standard, and analogous calculations can be found in, e.g., Refs. [3,51,54,55]. The result is a path integral over an order parameter G rr (τ, τ ) and Lagrange multiplier F rr (τ, τ ): where The indices r and r denote different replicas: r, r ∈ {1, · · · , n}.
In the thermodynamic limit, Eq. (36) is dominated by the saddle-point value, whose location is determined by the equations where · eff denotes an expectation value using the effective action of Eq. (38). From the analysis in Appendix A, one can show that in physical terms, the solution G rr (τ, τ ) to Eqs. (39) and (40) is simply the equilibrium Green's function for the hard-core bosons in the original model: where T denotes time ordering.
Thus far, all calculations have been exact. We cannot proceed any further in full generality, since (unlike in the SYK model) the remaining action for h and a is not quadratic. Nonetheless, we shall use this starting point to both confirm that the annealed free energy diverges at low temperature and show that it is correct at high temperature.
Low temperature
If we set n = 1 in Eq. (36), then we in fact have an expression for the annealed free energy: In terms of the order parameter G(τ −τ ) (note the lack of replica indices and that we have assumed time translation invariance), the expression for Φ 1 is and the saddle-point equations are At low temperature, the maximizer of Eq. (43) is static, i.e., independent of τ . We show this self-consistently. Given a τ -independent F , we can perform a Hubbard-Stratonovich transformation on the remaining path integral: The path integral on the second line is precisely that of a single hard-core boson with z-dependent Hamiltonian We at last have a tractable expression. Eq. (45) becomes The right-hand side must be independent of τ in order to be consistent, and this is indeed what happens at low temperature: for τ 1/β ∼ 0, we have Returning to Eq. (43), the annealed free energy is where the ellipses denote terms subleading in β.
Of course, F (τ ) is not strictly static at low but nonzero temperature. Rather, it has a static component F and a correction ∆F (τ ), where the correction is nonnegligible only for τ 1/β. The presence of ∆F (τ ) does not change the fact that the correlation time is O(1/β), and thus this ansatz for F (τ ) is fully self-consistent. Furthermore, ∆F (τ ) only gives subleading corrections to f (ann) . The expression shown in Eq. (50) is correct to leading order.
Finally, note that while we obtained the annealed free energy by setting n = 1 in Φ n [G, F ] (Eq. (37)), the same expression results from instead setting all inter-replica order parameters to 0: set G rr = F rr = 0 for r = r , and take the n → 0 limit as prescribed (see Sec. II). Since we know that this expression cannot be correct at low temperature, it follows that the true equilibrium value of the order parameter, whatever it may be, cannot be diagonal in replica indices.
The conclusion is that in the hard-core bosonic SYK model, the autocorrelation function G(τ ) develops a static component as T → 0 which is responsible for the divergent annealed free energy. This in turn implies that the system must no longer be replica-diagonal. It also suggests why the fermionic model should behave differently: there, G(τ ) cannot have a non-zero static component because the Fourier transform only has weight on odd multiples of π.
High temperature
To show that the annealed free energy is correct at high temperature, we place a bound on the probability of a random disorder realization having free energy other than f (ann) . Write the (random) partition function Z as ZE[Z]. Chebyshev's inequality states that We shall show that for β less than a certain value, it follows from Eq. (51) that f bSYK = f (ann) bSYK with probability approaching 1 in the thermodynamic limit.
The second moment of Z bSYK is obtained by setting n = 2 in Eq. (36). We have two order parameters: the intra-replica correlator G rr (τ, τ ) ≡ G(τ −τ ) (r ∈ {1, 2}) and the inter-replica correlator G 12 (τ, τ ) ≡ Q, which we take to be static. Thus The saddle-point equations are We immediately have one solution to the saddle-point equations. Denote the solution to the annealed equations, Eqs. (44) and (45), by G eq (τ ) and F eq (τ ). It is self-consistent to set in Eqs. (55) through (58). Note that this is not a trivial statement: it relies on the fact that in the absence of any inter-replica coupling, each replica has a separate U (1) symmetry which ensures h r (τ ) * a r (τ ) = 0. If the action were to include an explicit U (1)-breaking term, then Q = 0 would not be a valid solution.
The question is now whether Q = 0 is the dominant solution to the saddle-point equations, i.e., that which maximizes Φ 2 . To address this carefully, in Eq. (54), denote every part of the expression except for Q 2p and λQ by A 2 [G, F ; λ]. This lets us write the second moment of Z bSYK as follows: This is easiest to see starting from Eq. (A4) in Appendix A. As a result, we can write We have already established that Λ 2 [0] = 0, by virtue of Q = 0 satisfying the saddle-point equations. Furthermore, evaluating Eq. (61) by saddle-point demonstrates that Λ 2 [Q] is the Legendre-Fenchel transform of A 2 [G eq , F eq ; λ] and must therefore be convex [57]. Thus Q = 0 is the unique minimum of Λ 2 [Q]. Finally, a direct calculation starting from Eq. (61) gives i.e., the curvature of Λ 2 remains non-zero even as β → 0. These observations imply that β 2 Q 2p −Λ 2 [Q], although not itself concave, has its global maximum at Q = 0 for β less than a certain non-zero value. Furthermore, for such β, This establishes what we claimed: there exists a critical temperature above which Var[Z bSYK ]/E[Z bSYK ] 2 → 0 in the thermodynamic limit and f bSYK = f (ann) bSYK .
B. Quantum p-spin
The quantum p-spin model is given by Eq. (6), reproduced here: Our treatment of it will be very similar to that of the bosonic SYK model, and we include it here to highlight the generality of the method. For that reason, we will present only the results of each step, and leave the details to be filled in by analogy with Sec. IV A. We express the partition function in terms of spin coherent states |Ω . In fact, the only features of the states that we need are the identities [58,59] (67) The integrals are over the unit sphere, and Ω x i = sin θ i cos φ i , etc. The partition function is (68) In this notation, we are including the overlaps between coherent states in the integration measure, i.e., 2π We will never need to express the overlaps in continuum notation.
The replicated, disorder-averaged partition function is The saddle-point equations are then The notation is the same as before: r and r are replica indices, and · eff denotes an expectation value using the effective action of Eq. (72).
Low temperature
The n = 1 action is We again make a static ansatz for F (τ ), which will turn out to be consistent at low temperature. A Hubbard-Stratonovich transformation gives The remaining path integral is that of a single spin-1/2 in a magnetic field proportional to h, thus we can evaluate the Ω(τ ) · Ω(τ ) correlator directly: which is simply G ∼ 1/27 in the limit β → ∞ with τ 1/β. Finally, the annealed free energy is which indeed diverges at low temperature.
High temperature Using the same notation as in Sec. IV A, the n = 2 action is One saddle-point of Φ 2 is at where G eq (τ ) and F eq (τ ) are the order parameters which maximize Φ 1 . Without any coupling between the two replicas in Eq. (79), Ω α 1 (τ )Ω α 2 (τ ) = Ω α 1 (τ ) Ω α 2 (τ ) , and Ω α r (τ ) = 0 owing to the (statistical) symmetry of the original Hamiltonian. By establishing that this is the dominant saddle-point at high temperature, we show that with probability 1, exactly as done for the hard-core bosonic model.
where A 2 consists of all the remaining terms in Φ 2 . By the same arguments as in Sec. IV A, we have that: (83) Thus for β less than some non-zero value, E[Z 2 p ] is dominated by Q = 0 and The free energy then agrees with the annealed value [60].
V. THE DANGER IN 1/N EXPANSIONS
The preceding sections have been in the spirit of the replica formalism, but there is another technique for studying all-to-all disordered models: expanding the free energy in powers of system size N . Many studies of the SYK model and its variants have taken the latter approach [31,35,61]. Although in principle one could obtain all of the above results through a 1/N expansion, the correct low-temperature physics cannot be identified without taking subtle issues of convergence seriously. The purpose of this final section is to present an example of the issues which arise, as an argument in favor of the replica method over 1/N expansions.
First consider the structure of a 1/N expansion, say for the quantum p-spin model for concreteness. Suppose one wishes to compute the moments of the partition function, E[Z n p ]. We can expand the exponentials: (−β) L1+···+Ln L 1 ! · · · L n ! Tr 1 · · · Tr n E H L1 p,1 · · · H Ln p,n .
Note that in the second line, the n replicas are considered as separate degrees of freedom, each with its own trace. However, the L r factors of H p,r all involve the same spins of replica r, and every factor contains the same Gaussian couplings J A I . Since H p is linear in the couplings, the product H L1 p,1 · · · H Ln p,n is a sum of products of Gaussians, and the disorder average is given by all pairwise contractions according to Wick's theorem. These features are all naturally expressed in terms of chord diagrams [34,35], which we describe in Appendix B. Evaluation of Eq. (85) is then reduced to a sum over chord diagrams. Each diagram comes with a power of N , which allows the sum to be organized as a 1/N expansion.
We further show in Appendix B that, assuming all L r N , the diagrams having contractions between replicas are subleading. In other words, the disorder average factors to leading order: Since in the thermodynamic limit L r N for any fixed L r , the naive conclusion would be that the entire sum factors, and thus E[Z n p ] ∼ E[Z p ] n . In particular, The error is in assuming that the dominant terms of Eq. (85) have L ∼ O(1). Since we expect the energy to be extensive, i.e., H p ∼ O(N ), the expansion of e −βHp should be dominated by L ∼ O(N ). We must at the very least include such L in our evaluation of Eq. (85).
The non-commutativity of the operators in the quantum model makes it difficult to be any more quantitative. Thus instead consider the simpler classical model: where the sum is again over all multi-indices of p spins, and Var[J I ] = p!/2N p−1 . Note that p = 2 is precisely the SK model described in Sec. III (Eq. (16)). Every statement made above about the quantum pspin model can also be made about the classical model, and in the classical model we can confirm our suspicion that the breakdown of the annealed approximation appears only at O(N )'th order in the 1/N expansion. We start with A term of Eq. (88) with given (L 1 , L 2 ) contains L 1 + L 2 factors of the Gaussian couplings, which are contracted in pairs. Some contractions will connect spins on the first replica to spins on the second. By organizing the expansion in terms of the number L of such pairings, as detailed in Appendix B, we have that where in the first line σ z Ij r ≡ i∈Ij σ z ir (r is the replica index), and in the second line the nested sum has been factored. We also used that E[Z cl ] = e N (ln 2+β 2 /4) .
To proceed, write the trace as with The integral over Q can be evaluated by saddle point, leaving us with a single sum over L.
First consider L ∼ O(1) with respect to N . The saddle point is at Q = 1/2, and the integral goes as (pL/2 − 1)!!N −pL/2 . We have that Every term in the sum over L is subleading in N , for any β (except if p = 2, in which case see [60]). This would seem to say that E[Z 2 cl ] ∼ E[Z cl ] 2 , even though that cannot possibly be the correct result at low temperature.
However, consider L ∼ O(N ). The saddle point Q * is now given by the equation where l ≡ L/N , and the sum over L can be written (ignoring sub-exponential prefactors) Yet if β is large, the maximum of g(l) is positive. The fluctuations in Z cl become greater than the mean, and we can no longer claim that the annealed free energy is correct.
We have shown that a 1/N expansion of the partition function (and thus of the free energy) converges at small β but diverges at large β. Furthermore, note that the saddle point of Eq. (94) is at l * ∼ β 2 /2 for large β, i.e., L * ∼ N β 2 /2. Were one to take the N → ∞ limit be-fore resumming the series, one would miss the divergence entirely, and indeed overlook much of what makes these spin glass models interesting.
VI. CONCLUSION
We have demonstrated that the annealed approximation breaks down at low temperature in any all-to-all disordered model with finite-body interactions and a tensor product Hilbert space. This encompasses many in the family of SYK-like models, such as the bosonic variants and the quantum p-spin model. Furthermore, we have shown that, at least in the hard-core bosonic and quantum p-spin models (although the technique can easily be generalized), the partition function is self-averaging at high temperature. Thus we have identified two distinct phases: one in which the free energy equals the annealed value, and one in which it does not. These results were obtained using rigorous bounds on the annealed free energy and the replica technique. Note that we did not rely on any of the more cryptic aspects of the replica method (taking the number of replicas to 0 and maximizing rather than minimizing the free energy). Finally, we have highlighted the subtleties that come with applying 1/N expansions to such models.
Strictly speaking, these results are not enough to prove that the models are spin glasses at low temperature. Spin glass order is characterized by an overlap matrix in which the permutation symmetry is broken ("replica symmetry breaking"), whereas we have shown only that the matrix cannot be diagonal. In more physical terms, a spin glass has multiple low temperature states, whereas we have shown only the existence of some low temperature state distinct from the high temperature state.
That said, the results established here do force one to confront the issue of glassiness. The standard annealed approximation cannot accurately describe the effects of disorder in any tensor product model, and one must use an approach which allows for non-diagonal and potentially symmetry-broken replica order parameters. In particular, this statement applies to many models of current interest in the context of SYK physics. Whether replica symmetry is broken or merely non-diagonal in any specific model is an interesting open question which requires further analysis.
As for the relevance of these models to holography, it is still possible that some might have gravitational duals despite the breakdown of the annealed approximation. The precise dynamics cannot be exactly as in fermionic SYK, since that model is described by the annealed approximation, but a more complex gravitational theory is not ruled out. It is also possible that glassiness and gravitational dynamics can coexist in an interesting way, e.g., Refs. [62][63][64]. These are all important questions that remain to be investigated.
There is one potential way for the annealed free energy to remain accurate at low temperature even in tensor product models: have an interaction degree which increases with system size. Note that every bound obtained here no longer diverges if the p → ∞ limit is taken before the T → 0 limit. This does not prove that the annealed approximation holds, but we cannot claim that it must break down in such models. One example is the double-scaling limit studied in Refs. [34,35,65,66], where p ∼ √ N . It was argued that the quantum p-spin model has a Schwarzian density of states in this limit. In view of our results, it would clearly be desirable to have a more detailed understanding of the low-energy physics for general p.
Finally, it is interesting to note that every system currently known to have a simple gravitational dual includes fermionic degrees of freedom. This could be a streetlight effect, perhaps related to the difficulty of reliably studying non-supersymmetric theories at strong coupling. However, here we have uncovered a general result preventing a wide class of bosonic theories from exhibiting the simplest kind of gravitational dynamics known to occur in a corresponding fermionic theory. Perhaps this is one example of a general class of constraints which places purely bosonic theories of gravity into the swampland [67].
We present the details for the hard-core bosonic model. The quantum p-spin model and others proceed analogously. Beginning from Eq. (33) The sums over spins/multi-indices now come alongside sums over replica indices r, r ∈ {1, · · · , n}. Note that, to leading order in N , · h i p r (τ ) * a i p r (τ )a i p r (τ ) * h i p r (τ ) · · · h i 1 r (τ ) * a i 1 r (τ )a i 1 r (τ ) * h i 1 r (τ ) In the large-N limit, the path integral is dominated by a specific value of G rr (τ, τ ). We determine this saddle point by introducing a Lagrange multipler F rr (τ, τ ), and thus have where S (eff) is given by Eq. (38). The path integral over h, a, and µ now factors among the N different sites i, and we obtain Eqs. (36) and (37): Φ n [G, F ] ≡ β 2 2 rr 1 0 dτ dτ G rr (τ, τ ) p G r r (τ , τ ) p − F rr (τ, τ )G r r (τ , τ ) + ln r Dh r Da r Dµ r e −S (eff) [h,a,µ] .
(37) Note that the remaining integration over h, a, and µ is for a single site. In return, the action for that site has couplings between different replicas and times.
Note that for the l = 0 terms, the {IA} sum factors into two separate sums, one for each circle. Furthermore, the sums over k 1 and k 2 are then precisely those that gave us E[Z p ]. Thus For l = 1, η {IA} = 0. This is because eachÔ A I is traceless, and the two operators paired between the circles are each left unpaired in their respective traces.
For l = 2, let us count the powers of N . Assume that k 1 , k 2 ∼ O(1) as well, so that we can ignore minus signs as discussed above. Yet we do still need to ensure that every factor ofσ α i occurs in pairs to survive the trace. This restricts the number of sums over spin indices in {IA} to pk 1 + pk 2 + p: each of the multi-indices within each circle can be summed freely, but the two which connect the circles must have every index paired with each other. The counting for other l ∼ O(1) is analogous, and once we include the number of contractions, we have Note that, at least for p > 2, all l = 0 are suppressed by powers of N relative to l = 0. If we were to naively sum this expression over all (l, k 1 , k 2 ), we would be led to believe that Var[Z p ]/E[Z p ] 2 → 0 as N → ∞, regardless of β. Yet we have proven in the main text that this cannot be true. The resolution, as also discussed in the main text, is that Eq. (B10) holds only for l, k 1 , k 2 ∼ O(1), whereas we need to sum over all values at fixed N . Once (l, k 1 , k 2 ) become comparable to N , not only do anticommuting operators begin to matter, but the combinatorics of the chord diagrams changes. This second point is demonstrated explicitly in the main text. | 9,917.6 | 2019-11-26T00:00:00.000 | [
"Physics"
] |
Enrichment of Copper, Lead, and Tin by Mechanical Dry Processing of Obsolete Printed Circuit Board Residues
Waste printed circuit boards (WPCB) residues were mechanically processed to concentrate their metal content aiming to reduce costs of the subsequent recovery of copper, tin, and lead. A fully dry route was proposed to avoid the generation of liquid effluents that would require additional treatment. Firstly, 10.7% of the residue was segregated by magnetic separation; the remaining nonmagnetic fraction was comminuted and sieved. Ceramic and polymer materials (1/3 of the total weight) concentrated in the finer and the coarser size fractions, while metallic materials (2/3 of the total weight) concentrated in the intermediate size fraction (90.7, 94.5, and 95.6% of the total copper, tin, and lead contained in the milled WPCB, respectively). The fractions between 0.3–1.20 mm were submitted to gravity separation using a zig-zag air classifier; enrichment of copper (from 43±11% to 68±5%), tin (from 10±3% to 17±1%), and lead (from 4±1% to 6.4±0.5%) were obtained.
Introduction
The increase in demand for electrical and electronic equipment and shorter life span brings to a global sustainability concern. Waste electrical and electronic equipment (WEEE) constitute urban mineral resources for recovery of different metals. Their disposal as open-pit waste, incineration and/ or landfill generates hazardous by-products and gaseous emissions, which impacts the environment, human health and the sustainable economy. The core part of WEEE constitute the waste printed circuit board (WPCB), which stands out as one of the most difficult component to treat and most valuable part to recycle 1-2 . They contain approximately 23% of polymers, 38% of metals, and 49% of ceramic materials, however, such composition change considerably depending on the electronic device type, model, year of fabrication, etc [3][4] . The main metals present in WPCB include copper, lead, and tin. Other elements can also be found in relatively lower contents as is the case of iron, nickel, zinc, as well as precious metals like gold, silver, and palladium.
Different strategies based on reuse, recovery and recycling of WPCB are under development worldwide. Pyrometallurgy, hydrometallurgy, biometallurgy, or hybrid processing routes recycling techniques have been proposed to recover their valuable components [5][6][7][8][9][10] . Such treatments may contribute to reduce the environmental impacts caused by the extraction of high-valued materials as well as to eliminate the discharge of highly toxic materials from nature. As generally seen in WPCB, the content of some metals may surpass the typical values found in their respective ores 1-2 . More efficient routes normally include mechanical processing of WPCB with the first phase aiming to reduce the volume of material to be treated, however effluents and residues can be generated thus requiring adequate environmental conditioning. Even those, all routes present technical limitations. For instance, the formation of volatile organic compounds like dioxins during smelting operations of WPCB will require off-gas treatment equipment to avoid toxic gas emissions in pyrometallurgical processing, the handling and disposal of strongly acidic solutions must be attempted in hydrometallurgical operations and the time required for decomposition in biological routes must be considered. In fact, the key of a valuable metal recovery process relies on a robust separation technique that can separate different metal components from a complicated metal bearing mixture in a low cost operation and without producing further negative impacts to environment 11 .
Independently of the chosen route, the metal recovery efficiency may increase if WPCB are preprocessed by physical methods, which are based on differences of magnetic, density, or size properties, aiming at concentrating the metal fraction through its segregation from the polymer and ceramic fractions [12][13][14][15][16] . For instance, He and Duan 17 verified that metal components in WPCB are mainly distributed in sieving products with larger fractions; in the analyzed samples, the optimum size fraction to recover the metals from WPCB was 0.074-0.250 mm, which ensured metal concentration recovery efficiency greater than 84% using reverse flotation. Zhang et al. 18 included mechanical processing (comminution, sieving, and magnetic separation) as pretreatment of WPCB before triboelectric separation; the recovery of the nonmetallic products increased from 24.88% to 35.36%. Also, in the study of Xia et al. 19 20 used grinding, magnetic separation, milling, and sieving to treat WPCB. The processed material was submitted to an electrostatic corona separation to obtain metal and nonmetal fractions. The residual metals present in the nonmetallic fraction were processed in a fluidized bed. The metals recovery in the size fractions of 1.0-0.5 mm, 0.50-0.25 mm, and <0.25 mm were 86.39%, 82.22%, and 76.63%, respectively. In the present study, the mechanical processing methods commonly used in ore treatment were applied to separate non-metallic (plastic and ceramics) and metallic fractions of WPCB, aiming to concentrate copper, lead, and tin in the metallic fraction. The proposed route included comminution and magnetic separation, followed by a zig-zag air classifier; therefore no water was required to separate the metallic and non-metallic fractions of the residue. The advantage of the fully dry route proposed in the present work is to avoid the generation of liquid effluent streams that would need to be adequately treated, besides providing a very fast separation if compared to other physical separation techniques.
Collection and manual dismantling
In order to develop a route to process different types of printed circuit boards, WPCB from obsolete computers of distinct types, models, and fabrication years were collected at Federal University of Rio de Janeiro campus. The material was manually dismantled to remove capacitors, resistors, CPU (central processing unit), fans, heat-dissipators, and other soldered components that could bend during comminution and reduce the liberation efficiency of metal, ceramic, and polymer fractions. In addition, components containing precious metals like gold, silver and palladium were separated to specific treatment 21 . After manual dismantling, approximately 3.6 kg of WPCB were obtained, thus corresponding to 86.5% of the total mass, which was subjected to the subsequent mechanical processing.
Comminution
Firstly, the WPCB were cut into small pieces (below 10 cm length and 5 cm width) using a laboratory-scale shredder (Fragmaq, FT/75 model, 7.5 hp). Then, milling of the cut material was run using a Wyllie knife mill (CIENLAB, CE-430 model, 2 hp, exit opening 2.5 mm).
Magnetic separation
A laboratory-scale magnetic separator designed at UFRJ with a 3000 G magnetic field produced by a rare earth magnet was used. The operation was performed under dry conditions (17 rpm rotation, 80 g/min feed rate). The material was fed to the conveyor belt of the equipment, being separated into two fractions: the magnetic fraction retained by the magnetic field and the nonmagnetic fraction, which was submitted to the subsequent mechanical processing step to concentrate copper, tin, and lead.
Sieving
The nonmagnetic fraction was screened by sieves with openings of 2, 1.2, 0.85, 0.71, 0.5, 0.3, and 0.125 mm (−9# to +115# mesh Tyler) and bottom. The sample was placed at the top sieve and the set of sieves was shaken for 15 min, which was found adequate to provide complete separation for the sieve shaker used (Ro-tap). The fractions from each sieve were weighed and submitted to the gravity pneumatic separation step; samples (1 g) were withdrawn for chemical analysis.
Zig-zag pneumatic separation
The pneumatic separation tests were performed using a laboratory-scale zig-zag air classifier, aiming to concentrate the metallic content of the nonmagnetic fraction of each size fraction obtained in the sieving step. The advantage of using an air classifier is to avoid the generation of liquid effluents in the mechanical processing of WPCB. In addition, given the equipment has no internal parts, it quickly performs metal separation requiring very low energy consumption. The operation of the zig-zag air classifier is schematically shown in Figure 1. For each test, samples of approximately 10 g of the waste were fed into the classifier. The upward air flowing in a counter current mode was provided by a blower (B-air Blowers, KP-1200 model, maximum flow rate = 18.5 m³/h) which immediately drags the light fraction to the upper direction, while the heavy fraction is directed by gravity to the bottom exit of the classifier. Samples (1 g) of both fractions were withdrawn for chemical analysis. The air flow rate was defined using Schytil's diagrams.
In the zig-zag air classifier, particles are classified based on their falling behavior in the channel in which air stream flows upward. To determine the occurrence of fluidization in the equipment for a given operating condition, the Schytil's diagram for gas/solid suspensions was plotted for each particle size and particles desity. This diagram is, a log-log plot of the dimensionless numbers Froude (Fr = v²/ gd) versus Reynolds (Re = Re = rvd/µ), assuming constant velocity (Eq. 1) and constant particle diameter (Eq. 2) as given by Pacheco 22 : which are obtained by substituting both dimensionless number definitions and expliciting the particle diameter (d) or the velocity (v), respectively. In the equations, ρ and µ are density and viscosity of fluid, respectively, and g is the acceleration due to gravity. Despite not considering the particle shape, the Schytil's diagram represents the conditions The relative density of the samples for each fraction size after sieving was determined using the picnometric method. Scanning electron microscopy (SEM) images were obtained to analyze the morphology structure of the samples using a JEOL JSM-6460 LV equipment, which was equipped with an energy dispersive spectrometer (EDS) to obtain a semiquantitative elemental composition of the samples. Eventually, given the heterogeneity of the samples, the content of copper, lead, and tin of each size fraction after sieving step was determined using an atomic absorption spectrophotometer (Shimadzu, AA 6800 model) after previous digestion of the samples (1 g) in 20 mL of aqua regia (3 HCl : 1 HNO 3 , both reagents analytical grade, Vetec).
Comminution, magnetic separation, and sieving steps
Approximately, 3.6 kg of WPCB were obtained after manual dismantling, being first subjected to the comminution step using a shredder and a knife mill, followed by magnetic separation, and then by sieving steps. Table 1 summarizes the total mass balances of these steps. The mass loss in the comminution step was negligible, below 1%. This step was relatively easy to perform because the collected material was previously dismantled; in fact, the components that could bend during crushing and milling operations (capacitors, resistors, CPUs, fan/cooler, heatdissipator, and soldered components) were manually removed. Moreover, as the WPCB metal fraction is usually covered by ceramic and polymer materials 5 , the comminution step resulted in the liberation of such fractions. In particular, the mass difference (10.7%) verified in the magnetic separation step corresponds to the magnetic fraction originally present in the WPCB, basically comprising electronic components that remain in the waste 18 . No mass loss was verified in the sieving step. that will result in a fluidized bed rather than a fixed bed remaining or going pneumatic transport.
From the Ergun equation, used to calculate the pressure drop across a randomly packed bed of spherical particles for laminar and turbulent flow conditions, equations determining the domain between fixed and fluidizing beds (Eq. 3), and the domain between fluidizing bed and pneumatic transport (Eq. 4) can be derived 23 : . Re Fr 150 1 1 75 where ρ s is the solid density, ε is the void fraction, and C d is the drag coefficient whose value depends on Reynolds number.
Characterization of the solid fractions
Samples of the material were withdrawn along the treatment route for physical and chemical characterization. The size distribution of the WPCB obtained in the sieving step is shown in Figure 2. Approximately 66% of the material concentrate in size fractions higher than 0.85 mm, corroborating previous studies 12, 24 , whereas less than 10% of the sample consists of fine material (<125 mm).
Samples of each size fraction were digested in aqua regia, resulting in two main fractions: 51% of soluble (predominance of metals -36.9% copper, 6.6% tin, and 2.6% lead) and 49% of insoluble (predominance of polymers and ceramics) materials. Such figures are the average values calculated for all fractions studied. As shown in Figure 3, the amount of soluble and insoluble materials changed with size particle. Soluble materials concentrate in some intermediate fractions (0.71-2 mm) and insoluble materials concentrate in the finest fractions (<0.3 mm), since ceramic and polymeric materials are relatively easier to be crushed, and in the coarsest fraction (>2 mm), due to certain irregularity in size, whereas approximately the same content of soluble and insoluble materials was also verified in the remaining intermediate fractions (0.3-0.71 mm).
to EDS analysis ( Figure 5), the presence of nickel, zinc, silicon, and aluminum was also identified in the samples.
SEM images of WPCB samples, shown in Figure 5, reveal that the geometrical properties of particles are very heterogeneous in terms of size, size distribution, and shape. The geometric properties of particles affect the particle flow behavior through an interaction with the gas medium as exhibited by the drag force, the distribution of the boundary layer on the particle surface, and the generation and dissipation of wake vortices 25 . In addition, the chemical composition of samples also changes considerably with size fraction, as verified in AA and EDS analysis, affecting the average density of particles. As the flow characteristics of solid particles in a gas-solid suspension varies with the geometric and material properties of the particles, and considering the extremely irregular shape of the particles of WPCB, a zigzag air classifier was chosen because its particular geometry subjects both streams of moving particles (the one carried upward by the up flowing air current and the one moving downward along the lower wall) to a renewed classification at the end of each stage. The distribution of copper, lead, and tin comprising the soluble fractions of WPCB for different particle sizes is shown in Figure 4. In terms of percentage, such metals concentrate mainly in the intermediate size fractions (0.125-2 mm). For copper, contents higher than 25% were obtained in this size fraction range, reaching 60.8% in the size fraction 0.85-1.2 mm. For tin, the average content was 10 ± 3% in the size fraction 0.3-1.2 mm, reaching 14.6% for the 0.85-1.2 mm size fraction. For lead, an average content of 3 ± 1% was obtained in the size fraction 0.125-2 mm reaching 5.2 ± 0.3% at size fraction 0.71-1.2 mm, with only 0.8% in the finer (<0.125 mm) and 0.02% in the coarser (>2 mm) fractions. Based on this analysis, in 3188 g of sieved WPCB (Table 1), there are approximately 18.8% of copper (599.7 g), 3.4% of tin (107.2 g), and 1.3% of lead (42.6 g) to be recovered. Consequently, the mass difference from the soluble fraction results in 27.5% of the initial material (876 g); according
Zig-zag pneumatic separation step
Based on the previous steps and aiming at designing a processing route to treat WPCB, the sample could be divided into the following three main size fractions: finer (<0.3 mm), intermediate (0.3-2.0 mm), and coarser (>2.0 mm). For the zig-zag separation step, solely the intermediate size fraction was chosen for investigation because it has approximately 2/3 of the total weight and concentrates 90.7% of copper, 94.5% of tin, and 95.6% of lead present in the WPCB residue. The finer fraction (<0.3 mm, representing 15.9% of the total weight) and the coarser fraction (>2.0 mm, representing 21.5% of the total weight) contained mostly insoluble materials (67% of this fraction weight consist of ceramics and polymers), with only 9.3% of copper, 5.5% of tin, and 4.4% of lead originally present in the sample. Table 2 summarizes the calculated and the experimental flow rate of air used in the zig-zag air classifier for each particle size of the intermediate size fraction (0.3-1.2 mm), including their relative density. The decrease in the values of the relative density observed in the smaller fractions is directly related to its composition, since these fractions present high levels of ceramics and polymers, as shown in Figure 3.
velocities were attributed to variations in porosity and shape of particles not considered in the calculations.
The total weight of light and heavy streams obtained in the zig-zag air classifier operated using the experimental air flow rate (given in Table 2) is shown in Figure 7. The light stream corresponds to the WPCB particles dragged upward with the air stream, which is mostly constituted by ceramic and polymeric materials. The heavy stream corresponds to the WPCB particles deposited in the collection box, which is mostly constituted by metals. The weight increase in the light stream (and consequent reduction on the weight of the heavy stream) when finer fractions of particles are treated fairly agree with the soluble (metal) and insoluble (ceramic and polymer) contents shown in Figure 3. The measured relative density of each size fraction was used to calculate the range of flow rate of air to fluidize the particles in the zig-zag air classifier using the Schytil's diagram, assuming spherical shape particles 26 . A typical diagram is shown in Figure 6. As expected, lower flow rates of air are required for fluidization when smaller particles are treated. In the range of air flow rate shown in Table 2 (fluidization region), the separation between light and heavy particles is expected to occur quickly, since the drag is almost instantaneous, less than 20 seconds. Below the calculated range (fixed bed region), all particles tend to flow downward in the equipment because the air flow is not enough to drag the light particles; alternatively, when higher flow rates of air are used (pneumatic transport region), all particles may be dragged upward by the flow of air, including the heavy particles. Therefore, in both cases, no gravity separation is expected to occur. The experimental flow rate of air differed a little from those calculated, as shown in Figure 5, because the particles shape is nonspherical and extremely irregular; the theoretical porosity was used in the calculations and this may also reveal some effect. In particular, considering the flatted needle shape of the particles, a smaller flow rate of air is expected to fluidize them in the zig-zag air classifier, when compared to the fully spherical particles. A similar result was obtained by Sagratzki 27 using Schytil's diagram, whose deviations between theoretical and experimental air The soluble fractions metals (copper, lead, and tin) content of both streams is shown in Figure 8. Metals were concentrated in the heavy streams as expected: in the size fraction 0.85-1.2 mm, the content of copper increased from 60.7% to 72.5%, tin from 14.5% to 18.4%, and lead from 5.5% to 7.1%; in the size fraction 0.71-0.85 mm, the content of copper increased from 47.5% to 57.3%, tin from 12.2% to 17.9%, and lead from 4.9% to 6.6%; in the size fraction 0.5-0.71 mm, the content of copper increased from 29.2% to 72.2%, tin from 6.8% to 14.6%, and lead from 2.3% to 6.4%; and in the size fraction 0.3-0.5 mm, the content of copper increased from 35.4% to 69.2%, tin from 6% to 15.7%, and lead from 2.4% to 5.4%. The heavy stream corresponds to 63.5% of total mass weight treated in the zig-zag air classifier, while the light stream corresponds to 36.5% in mass weight. The separation between soluble (metal) and insoluble (ceramic and polymer) materials was quite fast and efficient. In the light stream, 72.6% of the mass weight comprised insoluble materials, whereas 27.4% comprised soluble materials with low metal content of copper, tin, and lead. In the heavy stream, 9.8% of the mass weight comprised insoluble materials, whereas 90.2% of the mass weight comprised soluble materials, concentrated in copper, tin, and lead.
•
Insoluble materials concentrated in the finer (65% in 0.125-0.3 mm and 77% in <0.125 mm) and the coarser (63% in >2 mm) size fractions of the residue. These fractions correspond to approximately one third of the total weight to be treated. In terms of metal content, these fractions contain 9.3%, 5.5% and 4.4% of total copper, tin, and lead contained in the milled WPCB, respectively; • Soluble materials concentrated (from 47% to 74%) in the intermediate size fraction (between 0.3-2 mm), corresponding to approximately 2/3rd of the total weight to be treated, which contains 90.7%, 94.5%, and 95.6% of the total copper, tin, and lead contained in the milled WPCB, respectively; • The fractions between 0.3-1.20 mm were submitted to gravity separation using a zig-zag air classifier, which ensured efficiency and celerity in this step. Considering only these size fractions, the content of copper increased from 43 ± 11% to 68 ± 5%, tin increased from 10 ± 3% to 17 ± 1%, and lead increased from 4 ± 1% to 6.4 ± 0.5%; • The zig-zag classifier has shown to be a sustainable and energy saving alternative for the concentration of metals from WPCB. It is easy to use, has no internal parts, fast, very economic and environmental friendly using air at room temperature.
Conclusions
WPCB were mechanically treated (manual dismantling, comminution, magnetic separation, sieving, and gravity separation by air) aiming at separating the non-metallic fraction and to concentrate the metal content for subsequent recovery of copper, tin, and lead. The proposed route has generated no liquid or gaseous effluent to the environment.
The following conclusions can be drawn: • Approximately 10.7% of the obsolete WPCB (excluding capacitors, resistors, CPUs, fan/cooler, heat-dissipator, soldered components and others containing precious metals like gold, silver and palladium that were manually dismantled) was comprised of magnetic materials; • The milled WPCB evaluated in this work consisted of approximately 50% of soluble (metal) materials and 50% of insoluble (ceramic and polymer) materials. The metals content in the residue was 18.8% of copper, 3.4% of lead, and 1.3% of tin.
Minor content of other metals like nickel, iron, zinc, and aluminum were also identified; | 5,056.8 | 2019-01-01T00:00:00.000 | [
"Materials Science"
] |
A Hybrid Matching Network for Fault Diagnosis under Different Working Conditions with Limited Data
Intelligent fault diagnosis methods based on deep learning have achieved much progress in recent years. However, there are two major factors causing serious degradation of the performance of these algorithms in real industrial applications, i.e., limited labeled training data and complex working conditions. To solve these problems, this study proposed a domain generalization-based hybrid matching network utilizing a matching network to diagnose the faults using features encoded by an autoencoder. The main idea was to regularize the feature extractor of the network with an autoencoder in order to reduce the risk of overfitting with limited training samples. In addition, a training strategy using dropout with random changing rates on inputs was implemented to enhance the model's generalization on unseen domains. The proposed method was validated on two different datasets containing artificial and real faults. The results showed that considerable performance was achieved by the proposed method under cross-domain tasks with limited training samples.
Introduction
Mechanical fault diagnosis plays a significant role in modern industry. Failures of machines are likely to result in an entire mechanical system collapse and production line downtime, as well as serious economic losses. Timely and accurate fault diagnosis has become an indispensable technology in modern industries to ensure the safe and reliable operation of mechanical systems [1][2][3].
Recently, deep learning has achieved considerable progress in computer vision [4,5], speech and natural language processing [6], product defect detection [7], and road planning [8]. Expectedly, an increasing number of researchers have applied deep learning techniques to fault diagnosis and proposed intelligent fault diagnosis methods [9][10][11][12][13][14][15][16]. Hasan et al. [17] proposed an explainable AI-based model for bearings fault diagnosis. Sun et al. [18] developed a sparse autoencoder-based deep neural network for the fault diagnosis of induction motors, which realized accurate fault prediction. Li et al. [19] designed a two-layer Boltzmann machine to develop representations of the statistical parameters of wavelet packet transform for gearbox fault diagnosis. Ding et al. [20] applied a deep convolutional neural network (CNN) by using wavelet packet energy as the input to develop a bearing fault diagnosis system, with which they obtained reasonable fault detection performance. Zhang et al. [21] proposed a method based on deep learning that uses raw temporal signals as input, which achieved high accuracy under noisy conditions. Qiao et al. [22] built a dualinput model and achieved satisfactory antinoise and load adaptability based on a CNN and a long short-term memory neural network. e deep learning methods have discarded the traditional time-consuming and unreliable manual analysis, improving the efficiency of fault diagnosis [23][24][25][26][27][28] considerably.
Traditional deep learning methods can only achieve satisfactory results when the training set (source domain) and the test set (target domain) are in the same data distribution. In practical applications, however, due to the complexity of the working conditions of the mechanical system (load, motor speed, etc.), the training set and the testing set may have distinct distributions. e predictive performance of the deep learning models is greatly affected by these facts. To face this challenge, some transfer learning algorithms have been proposed to enhance the domain adaptability of the model. Zhang et al. [21] presented a novel algorithm based on deep learning to alleviate the degradation of the performance of intelligent fault diagnosis under noisy environments and different working loads. Yao et al. [29] designed a new model based on a Stacked Inverted Residual Convolution Neural Network to ensure the accuracy of the model in noisy environments. Hu et al. [30] proposed a data augmentation algorithm and presented a self-adaptive neural network to boost models' generalization ability. Lu and Yin [31] developed a transferable common feature space mining algorithm to extract the common features from multidomain data. Wu et al. [32] constructed a few-shot transfer learning method in variable conditions. Wei et al. [33] proposed multiple source domain adaptation methods to extract condition-invariant features for fault diagnosis.
Aside from the obstacle posed by cross-domain tasks, a limited training set is another challenge that restricts the practical application of deep learning fault diagnosis algorithms. Most of the deep learning methods require a large amount of labeled data for model training. However, in actual industrial application scenarios, collecting a huge amount of labeled data for every type of failure under each working condition poses a considerable challenge. To address this problem, some studies on mechanical fault diagnosis using limited labeled training data have been conducted. Wang et al. [34] presented an integrated fault prognosis and diagnosis method for the predictive maintenance of turbine bearings, which achieved reasonable performance under limited labeled data. Zhang et al. [35] applied the few-shot approach for fault diagnosis and designed an artificial neural network based on a Siamese network, achieving interesting results with limited data. Li et al. [36] designed a meta-learning fault diagnosis method (MLFD) framework using model-agnostic meta-learning, which has performed excellently under complex working conditions. Hang et al. [37] applied a two-step clustering algorithm and principal component analysis to improve classification performance in the case of unbalanced highdimensional data. Li et al. [38] proposed a deep, balanced domain adaptation neural network, which achieved satisfactory results with limited labeled data. Duan et al. [39] proposed a novel data description support vector based on deep learning for unbalanced datasets.
As two important research directions of fault diagnosis, improving the model's generalization to new domains and performance under limited training samples has made good progress, respectively. However, the reports of studies combining these two directions are relatively rare to find. In this study, to achieve domain generalization under limited training samples, we proposed a hybrid matching network (HMN) designed by connecting a prototypical network to the bottleneck of an autoencoder for fault diagnosis to unseen domains with limited training samples.
Our model mainly consists of two parts: (1) the autoencoder regularizing the feature extractor of the model to reduce the risk of overfitting and (2) the matching network achieving the measurement of samples similarity. Besides, a novel strategy is implemented in the training process to improve the model's domain generalization.
e main contributions of this study can be summarized as follows: (1) A novel fault diagnosis method based on matching network and autoencoder, known as HMN, was proposed to face the cross-domain scenarios. In the tasks, the model was training on the source domain with limited data and testing on the unseen target domains without access to their distributions. (2) Dropout on the input layer with randomly changing rates was employed to improve the generalization ability of the model. Autoencoder was built to reduce the risks of model overfitting with limited training samples by regularizing the feature extractor of the network. e rest of the paper is organized as follows. Autoencoder and prototypical networks are introduced in Section 2. Section 3 describes the proposed method in detail. Section 4 presents the experiments, results, and discussion. Finally, the conclusions are drawn in Section 5.
Autoencoder and Prototypical Network
2.1. Autoencoder. Autoencoder, an unsupervised learning method, uses a neural network to implement the representation learning task. Specifically, a neural network architecture designed to impose a bottleneck layer forces a compressed knowledge representation of the original input.
As shown in Figure 1, the autoencoder is mainly composed of two parts: an encoder and a decoder. e encoder function, which is denoted as f θ , enables the efficient computation of a feature vector h � f θ (x) from an input vector x. It is important to note that the dimensions of h are usually lower than the dimensions of x. Another parameterized function g θ , known as the decoder, maps the feature vector back to the input space, generating a reconstruction vector x � g θ (h).
A simplified autoencoder structure can be represented as a fully connected neural network with three layers, i.e., an input layer, a bottleneck layer, and an output layer. e parameter sets of the encoder and the decoder are trained simultaneously when performing the task of reconstructing the input as much as possible, i.e., minimizing reconstruction error L(x, x) which is usually described by MSE over training examples. For a training set x (i) n i�1 , the reconstruction error of MSE is expressed as follows: 2 Computational Intelligence and Neuroscience If the input is normalized to [0, 1], the cost function can be described as binary cross-entropy, which comes in the form below: where x (i) j and x (i) j represent the j th element of x (i) and x (i) , respectively, n and m represent the batch size and the dimension of x, respectively.
Using penalizing parameters based on reconstruction errors, the network can learn about the most important attributes of the input data and how to best reconstruct the input from the feature vector. [40] have been proposed for few-shot learning, which requires only a small amount of training data with limited information, as compared to traditional machine learning methods requiring a large amount of data to train a model for good results. As shown in Figure 2, the classification task can be achieved by comparing the distances with mean representations of each class in the metric space produced by Prototypical Networks.
Prototypical Networks. Prototypical Networks
Specific to a few-shot task, given a support set that has M labeled samples S � ( } is the label of each class, S k describes the set labeled with k. A representation c k , or prototype, of each class is computed by meaning the support points belonging to class k: where f θ is an embedding function with learnable parameters θ. For a function computing distance d � R D × R D ⟶ [0, +∞), distribution of a query point x q over distances to all prototypes of each class in the metric space is computed by prototypical networks: Train the network by minimizing L(θ) � − logp θ (y � k | x q ), the loss of the k class.
Methods
e proposed HMN for fault diagnosis is described in detail in this section. As shown in Figure 3, our model has both one-input and two-output configurations. One of the outputs was the reconstruction of the input, and the other was the prediction of health conditions using a prototypical network. e details of the model are illustrated in Table 1.
Data Preprocessing.
e proposed model used the shorttime spectrogram as a 2D input. Firstly, as shown in Figure 4, the sliding window of 2048 points generated the samples. Secondly, STFT used a fixed-length nonzero window function to slide along the time axis, truncating the source signal into segments of equal length. Assuming that these segments are stable, Fourier transform can be used to obtain the local frequency spectra of the segments. And finally, these local frequency spectra were recombined along the time axis to obtain a 2D time-frequency graph. e formula is presented in equation (5) as below: where x(t) is the original timing signal and g(t − τ) is the window function applied as the center point at time τ. In this study, the Hann window was used. To speed up the convergence of the model, we converted 2D spectrogram into a grayscale image with a value between 0 and 1. is process can be expressed as follows: where |X(τ, ω)| is the element magnitude, |X(τ, ω)| min and |X(τ, ω)| max represent the minimum and maximum magnitude, respectively. Finally, the normalized spectrogram Computational Intelligence and Neuroscience X ′ (τ, ω) was compressed into 64×64 time-frequency graphs as the input of the model.
Random Dropout on Input.
Dropout is a technique proposed in [41] to prevent the deep neural nets from overfitting. e key idea is to randomly deactivate the units along with their connections from the network with probability p during training, preventing units from coadapting too much. Applying dropout amounts to sampling a "thinned" network from the original one during training. During the testing phase, dropout is disabled, which can be seen as an average of the predictions of many "thinned" networks. e networks trained with dropout usually have much better generalization ability on supervised learning tasks. e deactivated units affect all the ones in the network, including the layers with dropout. Dropout applied in the lower layers can also be seen as providing noisy inputs for the higher layers. It can be interpreted as a method of data augmentation by adding noise to its hidden layers.
Adding noise with a specific distribution was not enough. Inspired by [21], we randomly changed the dropout rate during the training to obtain noise with the uncertain feature. Specifically, in each batch of training, the dropout rate was a random value between 0.1 and 0.9. e visualization of the operation is illustrated in Figure 5.
Here * denotes an elementwise product. r i is a vector whose elements follow independent Bernoulli random variable which has a probability p.
x and x are the raw input and the interfered output of x.
e purpose of adding dropout to the input layer was to add masking noise to the input, making the model insensitive to disturbance and improving the domain generalization of the model.
Feature Extraction.
To make full use of unlabelled information, an autoencoder was designed for feature extraction. In the encoding stage, the 2D time-frequency images first passed through a set of 2D convolutional layers. e 2D convolutional layers captured the localized features of the image well due to its translation invariance. To obtain more diverse features at the same feature level, the weights in the convolutional layer were designed as a series of 2D filters. Each filter convolves independently across the input feature map in the forward pass, obtaining the output of one of the convolution layer's channels. Generally, the computing of the convolutional layer l is expressed as follows: where * operator denotes the convolution of the channel i of the feature matrix Z l− 1 i and the kernel W l i,c , which produces the feature map Z l c of the c th channel of the layer l. b l c is the bias of c th channel in the layer l. e f l (·), a nonlinear activation function using RELU in this study is implemented on the final output of the convolution network. e encoder and decoder were designed in a symmetrical form. To reconstruct the coding of the bottleneck layer to the same size as the input time-frequency image, a transposed convolution layer was used in the decoder to unsampled the feature map. Following [42], the encoder contained four convolution layers and two fully connected layers, while the decoder contained four transposed convolution layers and two fully connected layers.
Training of the Proposed Model.
e two outputs of the model correspond to two different losses, including the reconstruction loss L r computed by the autoencoder and the classification loss L c computed by the prototype network. In the training process, L r and L c are minimized. e total loss in the model training can be described as follows: where the hyperparameter α is the weight coefficient used to adjust the weights of different losses. In the training process, the network is optimized with an Adam optimizer which sets the learning rates for each parameter adaptively. e steps of the proposed training algorithm are listed in Algorithm 1.
Experiment Description.
To verify the validity of our method, experiments are carried out on two bearing datasets selected from the Case Western Reserve University (CWRU) bearing datasets [43] and Paderborn bearing dataset [44]. We assume the source domain contains limited labeled samples and set 6, 10, 15, 50, 100, 200, 300, 500, 600 training samples per class to test the performance of the proposed method. Fivefold cross-validation is applied to the experiments. e test platform uses an Ubuntu 18.04 + Python 3.6 + Pytorch with an Intel ® CORE ™ i7-9750H CPU and a Nvidia GTX 1080Ti GPU.
Comparison Methods and Evaluation Metrics.
To verify the advantages of the proposed model, as shown in Table 2, several popular models are compared, using three types of time series input methods (Siamese-based CNN [35], PSDAN [45], and WDCNN [46]) and three types of time-frequency input methods (SCNN, HCAE [42],s and DeIN [47]). e Siamese-based CNN was designed by [35]. PSADAN was an adversarial domain adaptation method. WDCNN, in which a wide convolution kernel was used in the front of the network, was proposed in [46]. DeIN was proposed in [47]. SCNN is a common CNN that follows a softmax at the end of the same structure with the encoder of HMN. e HCAE was proposed in [42]. e HMN model was proposed by our team. All the models are trained in the source domain and tested in the unseen target domain. For the sake of fair comparison, the hyperparameters of models are carefully selected.
Computational Intelligence and Neuroscience
Several evaluation indicators are used to evaluate the performance of the proposed model in the following aspects: (1) accuracy, (2) precision, (3) F1 score (F1), and average F1 score (αF1). Precision, F1, and αF can be obtained using the following equations: where TP, FP, and FN represent true positive, false positive, and false negative, respectively.
Data Description.
In the CWRU bearing datasets [43], the 12k drive end fault data were selected as the original experimental data. Four types of faults, i.e., normal, ball fault, inner race fault, and outer race fault, were found in these data, as shown in Table 3. Each fault type had three different subtypes, i.e., 0.007 inches, 0.014 inches, and 0.021 inches. us, there were altogether 10 different types of fault. Signals of all fault types are shown in Figure 6. Each type of fault had three different loads, i.e., 1, 2, and 3 hp (motor speed of 1772, 1750, and 1730 RPM), as illustrated in Table 4. During data collection, each sample was collected from a vibration signal, as shown in Figure 7. Half of the signals were used to generate training data, and the remaining signals were used to generate the test set. As shown in Initialize: weight coefficient α � 0.5, the batch size is set to 8, the learning rate η is set to 0.0001, and the epoch is set to 300 for n � 0,. . ., epoch do for i � 0,. . .,steps do input a batch samples from the source domain random sampling p from Uniform (0.1, 0.9) dropout on inputs with rate p compute prototypes c k L � L c + αL r , θ ⟵ θ − η(zL c /zθ + αzL r /zθ) end for end for ALGORITHM 1: e proposed training algorithm. Train 600 600 600 600 600 600 600 600 600 600 1 Test 25 25 25 25 25 25 25 25 25 25 Dataset B Train 600 600 600 600 600 600 600 600 600 600 2 Test 25 25 25 25 25 25 25 25 25 25 Dataset C Train 600 600 600 600 600 600 600 600 600 60 0 3 Test 25 25 25 25 25 25 25 25 25 25 6 Computational Intelligence and Neuroscience Figure 4, the training samples were generated using 2048 points sliding window with 80 points overlapping steps. e test set samples passed through sliding windows in the same size, but the samples were generated without overlapping.
We set the data under different working conditions as experimental data. Datasets A, B, and C correspond to different working conditions with loads of 1, 2, and 3 hp, respectively. Each dataset contained 6000 training samples and 250 test samples. Figure 8 illustrates the accuracy of all methods of training with various amounts of samples. With outstanding performance, HMN is evidently superior to the other approaches. We can find that cross-domain task C to A is the most difficult, in which even with sufficient training samples, the accuracy of four compared methods does not reach 90%, but the proposed model still achieves satisfactory results.
Results and Analysis.
e results of training with 6 samples per class were observed. e classification accuracies of the cross-domain tasks are shown in Table 5. e best performance was achieved using HMN among all the methods in all the scenarios. Specifically, HMN achieved an accuracy of 92.65% in C-A, which was 34.61%, 21.57%, 26.38%, 19.32%, 40.21%, and 27.09% higher than DeIN, Siamese To further evaluate the effectiveness of the proposed method, we observed the effects of the autoencoder and random dropout in improving model's performance through the loss curve. Figures 9 and 10 show the loss curves in cross-domain task C-A with 6 training samples per class. Computational Intelligence and Neuroscience
Computational Intelligence and Neuroscience
As shown in Figure 9, training losses containing reconstruction loss L r and classification loss L c are considered to originate from equation (9), with testing losses set to classification loss L c . According to equation (9), when α is set to 0, the autoencoder does not work. A greater α indicates a higher weight of autoencoder during the training process. As α increases from 0 to 0.2, the testing loss converges to a smaller value. e testing loss's convergence process is smoother when α equals to 0.5. is demonstrates how the autoencoder branch may prevent overfitting and improve the model's performance.
As shown in Figure 10, when the HMN does not employ random dropout on input, the convergence value of the testing loss is greater than 3; however, when random dropout is used, the convergence value of testing loss drops to less than 1, and the curve descends more smoothly. e effect of random dropout on input in improving the model's cross-domain generalization is demonstrated. (1) (2) (3) (4) (5) Figure 11: Test rig of Paderborn bearing dataset. Figure 11, the test rig [44] consists of five modules: (1) electric motor, (2) torquemeasurement shaft, (3) rolling bearing test module, (4) flywheel, and (5) load motor. Bearings with different state types were installed in the test module to obtain experimental data. Fault types of bearings come from artificial and real damages.
In the basic setting of operating condition, the test platform ran at n � 1500 rpm with a load torque of M � 0.7 Nm and a radial force on the bearing of F � 1,000 N. Other settings were set up by changing the parameters one by one to M � 0.1 Nm and F � 400 N (named D, E, F, respectively, shown as Table 8.
e bearings with 32 different states were operated under different working conditions, including 14 states with natural damages from accelerated lifetime tests, 12 states with artificial damage, and 6 states with health data.
Each bearing under a load setting is measured with a vibration signal of about 4s at a 64 kHz sampling rate. In the experiment, datasets contained signals obtained from healthy bearings, artificially damaged bearings, and naturally damaged bearings. All bearings of different fault types were running under three different loads at a speed of 1500 rpm. e datasets filenames selected are shown in Table 9. e details of the datasets selected are listed in Table 10. Each dataset contains 1800 training samples and 120 test samples.
Results and Analysis.
By performing the same implementation, Figure 12 compares our method with the compared approaches in terms of the accuracy of different e results show that our method outperformed the other six stat-of-the-art methods in all the scenarios. Table 11 illustrates the cross-domain tasks accuracy of different methods with 6 training samples per class. e proposed method outperformed all comparative methods by 6.87%-41.26% on average. Tables 12 and 13 compare the methods in terms of precision, F1, and αF1 in the cross-domain task E-D with 6 training samples per class.
e results also show that our method outplay the alternatives.
Conclusions
A novel HMN was proposed for cross-domain fault diagnosis with limited training samples. We improved the model's diagnostic performance in two ways: (1) a novel deep learning structure combining autoencoder and matching network was built, (2) a random dropout strategy adding random disturbance into the inputs during the training process was developed to enhance the model's domain generalization. In Section 4, we present the experimental results showing that the proposed method has better domain generalization ability with limited training samples compared with the state-of-theart approaches.
However, the method proposed in this study still has some restrictions. For example, the method is limited to cross-domain tasks between different working conditions on the same device. However, cross-domain across multiple devices makes intelligent fault diagnosis algorithms more realistic. In addition, HMN can only perform classification tasks, limiting the model's potential to multitask. In future work, we will further optimize HMN and employ it in more complex cross-domain fault diagnosis scenarios and multitask learning.
Data Availability
e data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
e authors declare no conflicts of interest. | 5,824 | 2022-07-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Visualizing Partially Ordered Sets for Socioeconomic Analysis
In this paper, we develop a visualization process for partial orders derived from considering many numerical indicators on a statistical population. The issue is relevant, particularly in the field of socio-economic evaluation, where explicitly taking into account incomparabilities among individuals proves much more informative than adhering to classical aggregative and compen-sative approaches, which collapse complexity into unidimensional rankings. We propose a process of visual analysis based on a combination of tools and concepts from partial order theory, multivariate statistics and visual design. We develop the process through a real example, based on data pertaining to regional competitiveness in Europe.
Introduction
A detailed study on the economic competitiveness of European regions has been recently published by the Joint Research Centre (Annoni & Dijkstra 2013), to provide insights into the differences and the similarities of regional economic performances.A composite indicator, named RCI (Regional Competitiveness Index), has been computed based on a set of 73 elementary indicators, selected as relevant from a socio-economic point of view1 .The RCI computation proceeds through a hierarchical process, where elementary indicators are weighted and progressively aggregated in so-called "pillars", until a single composite indicator is obtained.Once RCI is computed, inter-regional comparisons can be made and a final competitiveness ranking is achieved.This kind of aggregative process is prototypical of the way economists and social scientists usually address the assessment of multidimensional socio-economic issues, like competitiveness, well-being, quality-of-life and the like.In fact, multidimensional assessments are very often designed with the aim to return clear and unambiguous rankings of statistical units.Common practice shows that this can be achieved only at the cost of losing a great deal of information.Competitiveness, well-being, quality-of-life and many other similar socio-economic topics are complex, multidimensional, full of ambiguities, nuances and uncertainties.Turning them into unidimensional rankings is burdensome and not necessarily leads to clearly interpretable results.In essence, the problem resides in the fact that these issues are truly multidimensional.This is often confirmed by the absence of strong interrelations among elementary indicators, so that multidimensionality-reducing tools based on correlations (e.g.structural equation models) prove mostly ineffective in achieving any meaningful synthesis.Likewise, it must be noted that RCI primarily aims at measuring the level of competitiveness, despite no natural scale against which can be compared.More properly, competitiveness of a region can be compared to that of other regions, rather than assessed on an absolute scale.Due to multidimensionality, however, such comparisons generally do not lead to complete rankings but to partial orderings, since conflicting indicators in regional competitiveness profiles lead to incomparabilities.The impossibility of obtaining meaningful and unambiguous rankings is typical of multi-criteria decision problems and the relevance of taking this feature into account has been also noted by Nobel prize Sen, in his book on inequality (Sen 1992).It is thus very important for social scientists to get acquainted with this kind of data structures, that is, in technical terms, with partially ordered sets (Barthélemy, Flament & Monjardet 1982).In fact, one can easily figure out the consequences in policy decisions, when a policy-maker looks at regional competitiveness (or well-being, or quality of life. . . ) in terms of unidimensional rankings, without realizing that different and incomparable competitiveness patterns do exist.Partially ordered sets have their drawbacks too, in that metric information gets lost.But this issue can be, at least partially, solved by exploiting suitable visualization tools, as shown below (see also Al-Sharrah (2014), for analogous attempts to introduce metric information in a partial order context).Generally speaking, the mathematical theory of partially ordered sets is well-established, but its application to socio-economic problems is at a beginning stage (Fattore, Bruggemann & Owsiński 2011, Fattore, Maggino & Greselin 2011, Fattore, Maggino & Colombo 2012).This motivates the present attempt to develop graphical and software tools devoted to the visualization of partial orders, to incline social scientists towards this way of looking at socio-economic data.The paper is organized as follows.In Section 2, we describe the structure of RCI more deeply and introduce the example used to illustrate the visualization tool.In Section 3, we present some elements of partial order theory and introduce Hasse diagrams, the basic visualization tool for partial orders.In Section 4, we provide some details on Self-Organizing Maps, the tool used to cluster statistical units prior to visualization.Section 5 develops the visualization tool.Section 6 provides a conclusion.
Regional Competitiveness Data
The Regional Competitiveness Index (RCI) proposed by the Joint Research Centre in its 2013 Report aims at providing a synthetic measure of the socioeconomic attractiveness of 262 European regions, mainly at NUTS 2 level.To build RCI, 73 elementary indicators2 are first aggregated into 11 so-called "sub-pillars"; in turn, these are aggregated into 3 "pillars", whose final aggregation produces the RCI index.Each aggregation step is performed through simple weighted means (see Annoni & Dijkstra (2013) for details).A scheme of the index architecture is represented in Figure 1.
The structure of pillars in terms of subpillars is as follows: 1. Basic pillar.Subpillars: (i) Institutions, (ii) Infrastructure, (iii) Health, (iv) Macroeconomic stability, (v) Basic education.This paper is not devoted to the analysis of the RCI in itself, so we focus just on data pertaining to one pillar (the Basic pillar), in order to show the visualization tool in action, on a simpler example.Regional Basic pillar scores are built as averages of the corresponding 5 subpillars scores.Such 5 scores, seen as an ordered set of indicator values, constitute what we call the "profile" of the region.If one attempts to compare regions based on their profiles, a lot of "undecidable" comparisons occur, whenever a profile is higher than another on a subpillar and it is lower on another.As explained in the next Section, the set of profiles is technically a "partially ordered set".All of these undecidable comparisons disappear when the aggregated Basic pillar score is computed, but at the cost of losing much information on differences in competitiveness profiles.
If x ≤ y or y ≤ x, then x and y are called comparable, otherwise they are said incomparable, written x || y.A partial order P where any two elements are comparable is called a chain or a linear order.On the contrary, if any two elements of P are incomparable, then P is called an antichain.Given x, y ∈ P , y is said to cover x (written x ≺ y) if x ≤ y and there is no other element z ∈ P such that x ≤ z ≤ y.A finite poset P (i.e. a poset defined on a finite set of elements) can be easily depicted by means of a Hasse diagram.Hasse diagrams are graphs drawn according to the following two rules: (i) if x ≤ y, then node y is placed above node x; (ii) if x ≺ y, then an edge is inserted linking node y to node x.By transitivity, x ≤ y in P , if and only if in the Hasse diagram there is a descending path linking the corresponding nodes.An example of a Hasse diagram is included in Figure 2. In the case of partial orders built upon a large set of numerical profiles, classical Hasse diagrams have two main drawbacks.First, in general the resulting graph is very cumbersome and hardly readable, due to the "density" of nodes and edges; secondly, any metric information is absent (as in the concept of partial order itself), since Hasse diagrams just represent comparabilities and incomparabilities among statistical units (later, we will take advantage of the flexibility in which Hasse diagrams can be drawn, to graphically introduce some metric information).Consider, for example, the Hasse diagram of 262 European regions assessed on the competitiveness covariates previously introduced (Figure 3).As can be seen, the diagram is very complicated.Moreover, Euclidean (i.e.visual) distances between nodes do not imply any similarity between regional profiles.Only comparabilities and incomparabilities are meaningful, but from the diagram one cannot assess whether these are due to large or small (and possibly statistically non-significant) differences between corresponding components of statistical unit profiles.Although the diagram of Figure 3 is very cumbersome, at the same time it reveals a large number of incomparabilities among regions (red dots).As above mentioned, these incomparabilities disappear in the aggregation leading to the Basic pillar index.It should be quite clear that a great deal of information gets lost in this unidimensional reduction (and similarly in the whole aggregative computation of the RCI).It is our opinion that information pertaining to incomparabilities, i.e. to competitiveness patterns, should be preserved and conveyed to those who address the topic.Some complexity reduction is indeed necessary, to make the diagram of Figure 3 more readable.This is the reason why we implement a clustering analysis process, namely a Self-Organizing Map, prior to Hasse diagram visualization (Tsakovski & Simeonov, 2008, 2011).
Self-Organizing Maps
As already suggested by other authors (Bruggemann & Carlsen 2014), to simplify large Hasse diagrams, the original dataset has to be preliminary processed through some cluster analysis.After clusters have been generated, a representative for each is selected and a (smaller) Hasse diagram is built on these elements.In Bruggemann & Carlsen (2014), a hierarchical cluster analysis is implemented.
Here we prefer to rely on a more sophisticated tool, namely Self-organizing map (Kohonen 2001).The Self-Organizing Map (SOM) can be viewed as a non-linear projection of a multidimensional density distribution on a bidimensional grid, such that the topology of the input space is preserved as much as possible.The main advantage of SOMs over classical clustering algorithms (e.g.hierarchical cluster analysis or k-means) is that it can fit complex frequency distributions in an adaptive way.The resulting clusters are arranged on a regular euclidean grid in such a way that regions next to each other in the input space are mapped to clusters next to each other in the grid.The grid is thus a planar "image" of the input space.Notice that in the application proposed in the paper, a limited number of clusters will be produced, since the aim is to obtain an easy-to-read visualization.In this respect, SOMs are not directly used as a visualization tool, but for their effectiveness in extracting clusters, "exploring" the input density in an effective way.SOMs are implemented in many programming languages.Here we rely on R3 package "kohonen" that provides an effective and easy-to-use tool for practical computations.As explained in the package documentation (Wehrens & Buydens 2007), and leaving aside more technical details, to apply the SOM algorithm, one must previously determine the number of clusters and define how they will be arranged in the bidimensional rectangular grid.Usually one performs several attempts, balancing between two conflicting needs, namely having a number of clusters (i) large enough to assure for their internal homogeneity, but (ii) not too large, to avoid losing interesting density patterns.All in all, setting the right grid and the right number of clusters is an empirical task.After the grid is defined, to each cluster an initial reference profile (called "codebook" in the SOM literature) is associated, randomly extracted from the dataset.Then the SOM algorithm is launched.As the algorithm proceeds, codebooks are updated until a smooth map is obtained, where final codebooks are arranged in an ordered fashion.We refer to specialized literature (Kohonen 2001) for details on the SOM algorithm and limit ourselves to some examples, so as to show what kind of outputs are provided.Consider the data pertaining to the Basic pillar of the Regional Competitiveness Index.As a first example, we cluster European regions into 9 clusters arranged in a 3×3 square grid.Clusters are depicted as circles and the corresponding codebooks are represented by the colored slices within each (this is the standard output of R package "kohonen").The larger the radius of a slice, the higher the corresponding profile component.Statistical units are then assigned to the cluster whose codebook is most similar to their profile.Figure 4 reports the result of the computations.The left map reproduces the clusters and their codebooks; the right map associates each statistical unit (represented as a dot) to its cluster (notice that some jittering has been added, so as to avoid dot overlapping and give a visual impression of the number of units in the clusters).Similar computations have been performed increasing the number of clusters of the square grid to 5 × 5 = 25 and 6 × 6 = 36.Results are depicted in Figures 5 and 6.Some remarks are in order.In each example, clusters are arranged on the square grid in such a way that similar codebooks are placed next to each other.This is the main effect of the self-organization process implemented by the SOM.As the number of clusters increases, the SOM reproduces more nuances "selecting", in an adaptive way, which part of the input density to reproduce with more details.Notice that the map orientation has no absolute meaning and that some clusters may be empty.This is not a fault of the algorithm, but a consequence of SOM topology preserving nature.Codebooks of empty clusters may be seen as "bridges" between densely populated regions, needed to preserve the smoothness of the map.
The Visualization Tool
The principal aim of a visualization tool for multidimensional and partially ordered datasets is to provide a direct representation of the data structure, reducing its complexity, but retaining the essential patterns in it.Hasse diagrams, the "official" partial order graphical representation, convey a great deal of information on the partial order structure of the data, but they are not easy to read as the number of elements increases and, as noticed before, do not provide any metric information, when this is available in the original data.Cluster analysis, on the other side, reduces the complexity of the data, but it is not designed to preserve information on comparabilities and incomparabilities.Following a suggestion Bruggemann & Carlsen (2014), we combine Hasse diagrams and Cluster analysis in a complexityreduction process, producing a visual output allowing final users to jointly grasp the partial order and the metric structure of the data.The visualization process is composed of three main steps: 1. Reducing dataset complexity through a clustering process based on a SOM.
2. Building a classical Hasse diagram on the population of clusters, that is on SOM codebooks.
3. Visually adding information on statistical units and clusters (particularly, information pertaining to the value of the covariates).We now build the visualization step-by-step, on the RCI data pertaining to the Basic pillar.To make things easier, we reduce the population of 262 European regions to 9 clusters.Running the SOM algorithm identifies 9 codebooks, represented by the colored slices in the circles of Figure 4.As a second step, we draw the corresponding Hasse diagram on the codebooks, keeping the same color codes (see Figure 7).Considering this image and Figure 3, we can see that clusters are partially ordered and that there are many incomparabilities, i.e. essentially different competitiveness patterns.has a metric meaning, at least along the vertical axis.Notice that moving clusters vertically as done in Figure 8 does not affect the global partial order relation: if a cluster is "greater than" another in the original Hasse diagram, then it is also "greater than" another in the modified one.The same kind of diagram can be obtained for other components.Sometimes, clusters overlap, as in the case of the Health subpillar (Figure 9).Although not pretty from a visual standpoint, this indeed conveys some information, i.e. that some clusters may be very similar with respect to a component of the competitiveness profile.In our computations, all of the five components of the Basic pillar are scaled to 0 − 1 (simply subtracting the minimim and dividing by the range), so that y distances in the Hasse diagrams are comparable and one can get an impression of the differences among score distributions of the five subpillars, at cluster level.This is made easier if one arranges all of the diagrams side by side, as in Figure 10, where the vertical axis of the first Hasse diagram reports the profile mean of each cluster.Visual comparison reveals many features of the data.For example, one sees that the Hasse diagram reporting the mean value of the cluster profiles is very similar to the diagram reporting the Institutions value.This is somewhat interesting, since Institutions data are collected at national and not at regional level (i.e. each region in the same Country shares the same Institutions score).So it seems that the mean profile value has the same behavior of a national feature (at least at cluster level) and that the metric structure associated to Hasse diagrams is the same for both the mean and the Institutions scores.It may also be observed that the Hasse diagram relative to Infrastructure has a "metric shape" very different from the others, with great variations in score levels.It is also interesting to look at the different vertical positions of clusters in different diagrams.These reveal the existence of conflicting indicators, explain the existence of incomparabilities and help in assessing whether due to big or small differences in score values.These are some suggestion arising from this kind of visualization and that may deserve further scrutiny, through more technical, and less intuitive, statistical procedures and data analysis.As usual, visualizations give the hint and suggest interesting directions to investigate.Note 1.Other graphical features could be used to add additional information to the modified Hasse diagrams.One could link the dimension of circles to other covariates or use the background color to plot the value of other profile components or even plot dot clouds in the circle areas (as in Figure 4) to give an idea of population distribution.Benchmarks (e.g. the population mean value of the subpillar measured on the y axis) may be graphically added to the diagrams, to see whether a cluster is below or above it.Alternatively, the values of two covariates could be jointly considered, moving nodes both vertically and horizontally, to produce a bivariate "metric" Hasse diagram (i.e.combining a Hasse diagram and a scatterplot).All of these options can be easily implemented with many software languages, also adding interactions to ease user experience.Here we limit ourselves to identifying basic visualization structures that may be improved using classical infovis tools.
Conclusion
In this paper, we have proposed a simple way to visualize partially ordered datasets.Partial orders arise typically when multicriteria evaluation problems are addressed.They constitute an alternative to classical aggregative compensative procedures, that solve multidimensional evaluation problems computing unidimensional rankings, usually through some composite indicator.Admittedly, the final output is more complex than a simple ranking, but at the same time it is much more informative, reflecting the true nature of the data and helping final users to realize the existence of complex patterns in the data.The procedure integrates the Self-organizing map with Hasse diagrams and simple graphic design.It is planned to develop an R package to make the visualization tool freely available, adding also some interactive functions.The proposed way to combine partial orders and metric information is indeed quite simple.More sophisticated approaches could be explored.In particular, it would be very interesting to try to integrate partial order structures within the SOM algorithm, so as to get the final Hasse diagram through an adaptive process.The application of partial order theory to socio-economic evaluation problems is still at an early stage, although some methodologies have been already proposed, mainly in connection with multidimensional poverty evaluation (Fattore, Bruggemann & Owsiński 2011, Fattore, Maggino & Greselin 2011, Fattore et al. 2012).An R package, named PARSEC (PARtial order in Socio-EConomics Fattore & Arcagni, 2014), is also being re-leased to the scientific community.The proposed visualization enriches the set of tools available to researchers, and we hope this will promote the use of partial orders in socio-economic studies.
Figure 1 :
Figure 1: Global architecture of the RCI.Circles in the bottom represent elementary indicators.Rectangles represent aggregation of indicators.
Figure 2 :
Figure 2: Example of a Hasse diagram.
Figure 3 :
Figure 3: Hasse diagram for the Basic pillar data. | 4,562.6 | 2014-07-01T00:00:00.000 | [
"Computer Science"
] |
Bottom Ash of the Largest Kuzbass Coal Power Plants: Secondary Use Possibility
Kemerovo district coal power plant, Tom-Usinskaya district coal power plant and Belovo district coal power plant are the largest coal power plants in Kuzbass and during the combustion of coal they generate annually about 1600 tons of coal ash which consists of fly ash and bottom ash. Almost all the generated ash is disposed into ash dumps except a small quantity of fly ash (3.5%) that is effectively utilized. Therefore, secondary use of the bottom ash can be a sustainable solution for reducing its byproducts and overcoming the scarcity of raw materials required for construction work. Therefore, the main aim of this research was to determine the chemical composition and granulometric properties of bottom ash to find out the possibility of using it as raw material for the building materials production. A series of laboratory experiments were conducted to determine basicity index, activity index, average grain density, bulk density, true density and grain size distribution. The experimental results reveal that the particle size of ash is predominantly sand-sized while containing some silt-sized and rubble-sized fractions as well. The studied bottom ash has a low basicity and activity index, respectively, does not have independent hydraulic activity. Thus, bottom ash of the largest Kuzbass coal power plants can be used as raw material for the building materials production.
Introduction
The Russian power industry annually produces not only heat and electricity, but also more than 25 million tons of ash and slag waste [1]. Ash dumps occupy large areas of valuable urban or suburban land and degrade the environment. The ash and slag waste dumps area are located is more than 20000 hectares today. On the other hand, vast experience of ash and slag secondary use has accumulated in the in the world. Countries such as England and Germany use the whole annual volume of ash and slag waste, Chinamore than 80%, Poland -up to 80%, the USA -about 70% [2]. Unfortunately, Russia lags behind the listed countries by ash and slag processing volume.
Kemerovo region is the one of the leading regions in the concentration of the coal power plants and at the same time in the volume of ash and slag waste. The large amount of coal deposits contributes high prevalence of power plants working on solid fuels. The Kuzbass coal power plants scattered throughout the region. Kemerovo district coal power plant, Tom-Usinskaya district coal power plant and Belovo district coal power plant are the largest coal power plants in Kuzbass. They are respectively located in the north, south and center of the region (Fig. 1).
Fig. 1. Location of the largest Kuzbass coal power plants.
They are annually generates about 1600 tons of coal ash which consists of fly ash and bottom ash. Most of bottom ash is disposed into ash dumps. Almost all the generated fly ash is effectively utilized. But it is only a small part of the all generated ash (3.5%) [3].
Bottom ash of coal power plants secondary using in construction is the most effective solution to this problem. Therefore, the possibility of using bottom ash of the largest Kuzbass coal power plants as raw for the production of building materials production was considered.
Materials and methods
The bottom ash of Kemerovo district coal power plant, Tom-Usinskaya district coal power plant and Belovo district coal power plant was used for the experiment. Samples were taken at several characteristic points in the ash dump of each power plant.
Each sample was examined using an inductively coupled mass spectrometer Agilent 7500cx and an inductively coupled plasma optical emission spectrometer iCAP 7400 Duo. The level of ash activity was evaluated by a basicity index representing the ratio of the basic oxides to the amount of acid oxides contained in sample: and an activity index representing the of alumina to silica: Granulometric properties were determined for a mixture of all each power plant https://doi.org/10.1051/e3sconf /202131502012 E3S Web of Conferences 315, 02012 (2021) VI th International Innovative Mining Symposium samples. Bulk density was defined as the ratio of ash mass to vessel volume: = Average grain density was determined by a method based on measuring the volume of grains in dry quartz sand: = True density was determined by the volume of distilled water displaced by ash from the pycnometer by boiling: Determination of the grain size composition was carried out by sieving and weighing the samples on a standard set of sieves. (table 3), while the unit surface area ranges from 1900 to 3714 cm 2 /g [4]. As can be seen from the table 4, the bottom ash of Belovo district coal power plant contains only 9.6% of fractions of 5-20 mm, particles equal to or less than 0.14 mm are 46.5%. Ash within the fractions of 0.315-5 mm fits into the optimal zone of the grain size composition of the sands, the excess of fraction <0.14 in it can be considered as a micro filler. According to the grain size composition, this aggregate can be classified as sands intended for lightweight concrete.
Results and Discussion
The bottom ash of Kemerovo district coal power plant contains 34.7% of coarse fractions (more than 5 mm), the presence in its composition of particles that have passed through a 0.14 mm sieve is 6%. This aggregate does not meet the grain size requirements for porous sands and can be considered only as a mixture of fine and large grains.
The bottom ash of Tom-Usinskaya district coal power plant contains almost 50% of dusty fractions, and the volume of particles larger than 1.25 mm is only 18.6%. This fine aggregate can be considered as a mixture of dusty ash and fine sand.
Conclusion
The results of the research show that the bottom ash of Belovo district coal power plant and Tom-Usinskaya district coal power plant can be used as a filler for various types of lightweight concrete. The bottom ash of Kemerovo district coal power plant should be considered as a possible aggregate for heavy concrete.
Thus, bottom ash of the largest Kuzbass coal power plants can be used as raw for the building materials production. | 1,435 | 2021-01-01T00:00:00.000 | [
"Geology"
] |
Modified Vogel’s approximation method for transportation problem under uncertain environment
The fuzzy transportation problem is a very popular, well-known optimization problem in the area of fuzzy set and system. In most of the cases, researchers use type 1 fuzzy set as the cost of the transportation problem. Type 1 fuzzy number is unable to handle the uncertainty due to the description of human perception. Interval type 2 fuzzy set is an extended version of type 1 fuzzy set which can handle this ambiguity. In this paper, the interval type 2 fuzzy set is used in a fuzzy transportation problem to represent the transportation cost, demand, and supply. We define this transportation problem as interval type 2 fuzzy transportation problems. The utility of this type of fuzzy set as costs in transportation problem and its application in different real-world scenarios are described in this paper. Here, we have modified the classical Vogel’s approximation method for solved this fuzzy transportation problem. To the best of our information, there exists no algorithm based on Vogel’s approximation method in the literature for fuzzy transportation problem with interval type 2 fuzzy set as transportation cost, demand, and supply. We have used two Numerical examples to describe the efficiency of the proposed algorithm.
Introduction
The fuzzy transportation problem is one of the most well-known optimization problems in the field of fuzzy set and system.This problem appears in many real-life applications, e.g., computer networks, routing, shortest path problems [1][2][3][4][5][6][7][8], communication, etc.It has been researched extensively in many engineering fields such as electronics engineering, electrical engineering, and computer science in terms of effective algorithms.
The supply and demand costs are considered as real numbers, i.e., crisp numbers in classical transportation problems.It computes a solution on the base of demand and supply.It has been applied in many fields, including optimal control, inventory, logistics management, and supply chain management.Many researchers have used fuzzy variables/numbers (especially triangular fuzzy number and trapezoidal fuzzy number) to express the approximate intervals, linguistic terms, and unequally possible data set.Zimmermann [9] has introduced a fuzzy linear programming model.It has applied to solve different fuzzy transportation problem (FTP) [10][11][12][13][14]. Chanas et al. [15] proposed a fuzzy linear programming model to determine the solution of an FTP with fuzzy supply and demand, but transportation costs are in real number.Dinagar and Palanivel [16] described an FTP where demand, supply, and transportation costs are trapezoidal fuzzy numbers.Kaur and Kumar [17] have introduced an algorithmic for solving the fuzzy transportation problem.Some researchers used the rough set to handle the uncertainty of transportation problems.Liu [18] has initiated the concept of rough variables to manage the uncertainty of the problem.Xu and Yao [19] have proposed an algorithmic approach for solving the two-person zero-sum matrix games with payoffs as rough variables.Kundu et al. [20] introduced a solid transportation model with crisp and rough costs.Some other researchers [21][22][23][24][25][26][27][28][29][30] also have studied this transportation problem in a fuzzy environment.
Usually, human perception [31,32] is used to evaluate the degree of membership of an ordinary fuzzy set, which is a crisp value.However, it may not be possible to find an exact membership degree using a type-1 (ordinary) fuzzy set, i.e., fuzzy variable/number because of various types of complications, insufficient information, noises, multiple sources of available data.The type 2 fuzzy set is an extension of a fuzzy set, and it can be used to solve them.Type 2 fuzzy set (T2FS) [33,34] has also been proposed by Zadeh [33] as an extension of type 1 fuzzy set (T1FS), about 10 years after he has introduced T1FS.Zadeh [33] has described the T2FS as the fuzzy set, which is a mapping from U to [0, 1] with the membership function of this set, classified as type 1.The uncertainty associated with the linguistic description of information [35][36][37][38][39] is not represented properly by T1FS due to incorrectness of human perception in the evaluation of membership degrees having crisp values.Mendel and Karnik [40] have enhanced the number of degrees of freedom for fuzzy sets.They have described the idea to add at least one higher degree to T1FSs.It provides a measurement of dispersion for a certain membership degree of T1FS.Hence, T2FS is the extension of the T1FS to a higher degree.T2FSs have a degree of membership that is itself determined by T1FSs.The membership function of T2FS is known as secondary membership functions.T2FS enhances the number of degrees of immunity to handle the ambiguity of the problem T2FS and has a better ability to cover inexact information is logically appropriate behavior.Since the generalized T2FSs are demanding for computation, most of the researchers use interval type-2 fuzzy set (IT2FS) in practical fields [41][42][43][44][45][46][47][48].Computation in IT2FS is more manageable compared to generalize T2FS.Both IT2F membership function and generalized fuzzy membership function are three dimensions, but the secondary membership value of the IT2F membership function is all-time equal to 1.
Let A is a type-1 fuzzy set and à is an interval type-two fuzzy set as displayed in Figs. 1 and 2, respectively.For a certain value of x, say x i , a single membership value r 1 is obtained in A. However, there is an interval of membership degree between r 1 and r 2 in à for the same value of x i .
The motivation of this paper is to present an algorithmic approach for the transportation problem, which will be simple enough and efficient in real-world situations.In transportation problems, transportation parameters (e.g., demands, supplies, transportation costs) are not always crisp, and that parameters could be uncertain due to several reasons.Therefore, computing the exact parameters in such scenarios could be challenging.Fuzzy can be used in transportation problems to handle this type of uncertainty, and many researchers have described this transportation problem with type 1 fuzzy variables.T2FS extends the degrees of freedom to present uncertainties, and it increases the capacity to deal uncertain/fuzzy/inexact information of any real-life problem in a logically appropriate manner.The main objective of this paper is to consider transportation problems with T2FSs.In this paper, we have mainly investigated the following things: 1. We propose an algorithm to solve the fuzzy transportation problem based on Modified Vogel's approximation method (MVAM), where the costs are trapezoidal IT2FSs.2. We introduce a linear programming problem (LPP) method for solving this problem.
The rest of our paper is arranged as follows.
In Sect.2, we briefly describe some ideas about the fuzzy set, T1FS, T2FS, IT2FS and centroid-based ranking method [49].In Sect.3, we introduce the interval type 2 fuzzy transportation problems and some algorithms with flowcharts to solve this problem.In Sect.4, the two numerical examples are illustrated to describe our proposed algorithm.We present the conclusion in Sect. 5.
Preliminaries
Definition 1 Modification form of classical set is called the fuzzy set, where the elements have various degrees of membership.In the classical set, the logic is based on two truth values, either it will be true or it will be false.It is sometimes insufficient when relating human thoughts.Fuzzy logic can use the whole interval between 1 (true) and 0 (false) for better result.A fuzzy set accommodate its members with different membership degrees in the interval [0, 1].Let x be an element of X, then a fuzzy set à in X is a set of ordered pairs in which the value of Ã(x, u) lies between 0 and 1.Here, x is primary variable, T x is primary membership function of x, u is the secondary variable and Ã(x, u) is the secondary member function of x.
Definition 3 [50] : IT2FS is a simpler version of T2FS.IT2FS has uniform shading over the footprint of uncertainty (FOU).A T2FS with all Ã(x, u) =1 is named an IT2FS.Let à represent an IT2FS, then it is described as where primary variable is x,the primary membership of x is T x an interval in [0,1], the secondary variable is u and the secondary membership function at x is ∫ u∈T x 1∕u. (1) In this paper, we initiate an algorithmic problem for solving Transportation problem using IT2FS.Heights of the lower and upper membership functions of IT2FS represent an IT2FS of a reference point.We consider trapezoidal IT2FS in our algorithm.A trapezoidal IT2FS à is shown in Fig. 3.The shaded region is the FOU.It is bounded by a lower membership function (LMF) i x i and an upper membership function (UMF) i x i .The LMF and UMF have represented type-1 fuzzy sets.
The result of Addition is also a IT2FS.Multiplication [51] operation ( ⊗ ) between the two trapezoidal IT2FSs Ã1 and Ã2 is defined in (6) as follows.
(5) The result of Multiplication is also a IT2FS.
Definition 5 [52]: The centroid value,i.e., C( B) of an IT2FS B is the union of the centroid values of all its embedded type 1 fuzzy set B e as follows.
Here ∪ represents the union operation and It is expressed in [40,[52][53][54] that c l B and c r B can be described as follows: Here R and L are right and left switch points.First, we calculate the centroids for IT2FS B , then we find out the average centroid Here centroid-based ranking value [49] of IT2FS B is C( B).
Definition 6 [55]: Let B1 and B2 be considered two interval type 2 fuzzy sets (IT2FSs).Then The value of IT2FS B1 is greater than
Mathematical statement
Suppose that there are p numbers of sources and q destination.Let si represent the fuzzy numbers of sources i ( i = 1, 2, 3, … , p ) and let dj represent the fuzzy numbers of destinations j ( j = 1, 2, 3, … , q).Mathematical model of fuzzy transportation problem is given below: subject to where cij is the interval type 2 fuzzy set that represents the transportation cost for one unit from source node i to the destination node j and zij is the interval type 2 fuzzy set of units transported from source i to destination j. si is the sup- ply at source i and dj is the demand at destination j.
Proposed algorithm 1
The Vogel's approximation method algorithm is a wellknown algorithm to compute the transportation problem.In this article, we modify the VAM algorithm to handle the transportation problem in a fuzzy environment.Most of the cases, researches use T1FS or fuzzy number as transportation cost value.T1FS is unable to handle the ambiguity due to the inaccuracy of human conception.An interval type-2 fuzzy set (IT2FS) is an extension of the type 1 fuzzy set.IT2FS can handle this ambiguity.The flowchart of modified Vogel's approximation method is shown in Fig. 4.
Step 1 : In this problem, the cell cost, demand, and supply are considered as trapezoidal IT2FSs.The centroid value of IT2FS, i.e., the real values of each cell, demand, and supply have computed using Eq. ( 13).These are used for computation purposes.
Step 2: For the given transportation table, determine the cost of the penalty, distinction between minimum ( 14)
Proposed Algorithm 2
Step 1 : Formulate the given transportation problem into linear programming method using Eqs.( 14) and (17).
Step 2 : Using definition of centroid ranking (Definitions 5 and 6), write the standard linear model as given below.
subject to where Step 3: We use Definition 4 in Eq. ( 19) and find out the crisp standard model.Step 4: Solve the given problem using the standard linear programming technique and find out the optimal value and objective value.
Step 5: We get the fuzzy optimal cost to use this value in Eq. ( 14).
The flowchart of the proposed LPP method is shown in Fig. 5.
Modified MODI method
Optimality tests always conduct depend upon the initial basic feasible solution of a transportation problem, where the value of m + n − 1 is equal to the number of non-nega- tive unoccupied cells and n is the number of columns and m is the number of row.These all allocated cells stay in an independent position.This type of method always uses for better solutions than the initial basic feasible solution.The flowchart of the modified MODI (MMODI) method is shown in Fig. 6.
Step 1: Calculate u i and v j with the expression u i + v j = c ij for each occupied cell.
Step 2: Consider the value of u i or v j equal to 0 to any row or column, respectively, with having maximum no
Start
The cell value, demand, and supply are considered as trapezoidal IT2FSs.
Determine the cost of the penalty, distinction between minimum cost and next minimum cost of every column and row.
Select the largest penalty value from all column and row difference, which was found out and select the corresponding row or column.
Allocate the minimum value corresponding row or column which was selected and eliminate row or column that has been adjusted.of allocation.If it is more than one then choose any one of them arbitrarily and calculate rest of all u i and v j for all rows and columns, respectively.
Step 3: Compute the value of every unoccupied cell with the equation xij = c ij − ( u i + v j ).Case I: Have an unique solution and it is an optimal solution if all � x ij > 0. Case II: Have an alternative solution and it is optimal if all � x ij >= 0. Case III: It has no optimal solution if � x ij < 0 .For that type, the case considers the next step for the optimal solution.
Step 4: If case III occurs, then select the maximum negative value of xij and form a close loop with occu- pied cell and assign "+" and "−" sign alternately.
Find out the minimum allocation with a negative sign.It will be added to the positive allocation cell and subtract from the negative allocation cell.into linear programming method using expression Equation ( 14) to Equation (17).
Write the standard linear model using the Definition 5 and Definition 6.
Find out the crisp standard model using Definition 4.
Solve the given problem and find out the optimal value and objective value using standard linear programming technique.
Numerical illustration
We have used two examples for demonstrating our modified Vogel's approximation method to solve FTP with IT2FS cost, i.e., transporting each unit from source to destination.
Numerical Example I
Now we are consider first example with three supply and four demand base problem and solve it using our proposed modified Vogel's approximation method.
Example 1
We have taken IT2FSs from [56] and which are indexed in Tables 1 and 2. In Table 1, IT2FSs transportation cost is indexed, and in Table 2 IT2FSs supply and demand indexed.
Solution: We calculate the centroid based rank of cell cost, demand, and supply using Definitions 5 and 6.
In Table 3, the costs of transportation, supplies, and demands represent as centroid-based ranking values and Table 4 represents the final allocation table of Example I, respectively.
Step 1: Select the first row S1 in Table 3 for finding BFS using the proposed algorithm.
In Table 4, the total number of source (m) is 3, the total number of destination (n) is 4, and total number of nonnegative allocation 6 is equal to m + n − 1 = 3 + 4 − 1 = 6.
So, it has a basic feasible solution.The total cost can be computed by multiplying the units assigned to the allocated value of each cell with the concerned transportation cost of respective cell. Therefore
Compute optimal value of Example I using MMODI method
To calculate the optimal value, we use the MMODI method.Tables 5 and 6 represent the initial and final allocation table of MMODI method, respectively.In Table 5, In Table 6, all Cij are positive.So, there is an optimal solution and this optimal solution is displayed in Table 7
Numerical Example II
We considered another example with six supply and eight demand base problems and solve it modified Vogel's approximation algorithm same as the first one.
Example 2
We have taken IT2FSs as transportation cost which are indexed in Tables 8 and 9, assigned to the IT2FSs supply and demand.
Results and discussion
In this study, we have worked on two transportation problems.In the first problem, there are three sources and four destination nodes.We have used IT2FSs to represent those costs.The Lingo software is used to solve this problem.In Example 1, we get IT2FSTP cost is ((33.6337,56.7825, 76.0925, 104.6313), (57.002, 65.8038, 65.8038, 72.5788, 0.27)) and the predicted optimal transportation cost is 67.0683.Here, it is clear that our predicted cost is within the IT2FS fuzzy.The efficiency of our proposed algorithm is shown in Fig. 7.
Conclusion
The VAM algorithm is a common algorithm to solve the transportation problem.In this paper, the classical VAM algorithm is modified to solve the fuzzy transportation problem fuzzy.We represent all the demand, supply, and transportation costs as IT2FSs.The idea of our proposed algorithm is elementary and effective to apply for realworld scenarios, e.g., management, transportation system, and many other network optimization problems.Here, we present two small numerical examples to demonstrate our proposed algorithm.Therefore, as future research, we have to solve a large scale practical transportation problem using the proposed algorithm.Furthermore, we will try to modify our proposed algorithm for the Pythagorean fuzzy set [57] and interval type-2 intuitionistic fuzzy sets [58][59][60][61].
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.To view a copy of this licence, visit http://creat iveco mmons .org/licenses/by/4.0/.
Fig. 3
Fig. 3 Trapezoidal interval type 2 fuzzy set (IT2FS) Ãi with footprint of uncertainty (FOU) (color figure online) cost and next minimum cost of every column and rows.Step 3: Select the largest penalty value from all column and row difference, which is found out in Step 2 and select the corresponding row or column.Step 4: In this step, select the least cost in row or column which is identified in Step 3. Step 5: This step allocates minimum value corresponding row and column value which is select in Step 4. Step 6: Depending upon this cell which column or row has adjusted, this column or row has removed.Step 7: Same methodology from Step 2 to Step 6 has been applied to the rest of all the unallocated cells until all demands and supplies have been adjusted.
Since demand value adjusts with cell allocation value, eliminate the first column say D1 Step 5: For creating a new table, we should be use above steps.Step 6: The same procedure is used in the rest of the table , basic feasible solution of the problem = 2.32 * 2.19 + 7.25 * 3.22 + 5.19 * 2.19 + 8.12 * 1.91 + 2.13 * 3.91 + 2.13 * 2.59 = 69.1461.
Table 10 represents the corresponding ranking values and Table 11 represents the required optimal solution of Example II.
Table 1
The cell costs of the transportation problem represented as IT2FSs for Example I
Table 2
The supplies and demands of the transportation problem represented as IT2FSs for Example I
Table 3
Initial allocation table of the IT2F transportation problem for
Table 4
Final allocation table of the IT2F transportation problem for Example I
Table 5
Initial allocation table for MMODI method
Table 6
Final allocation table of MMODI method
Table 8
The cell costs of the transportation problem represented as IT2FSs for Example II | 4,629.8 | 2020-06-29T00:00:00.000 | [
"Mathematics",
"Engineering",
"Computer Science"
] |
Comparative analysis of the honeycomb and thin-shell space antenna reflectors
. Parabolic three-layered reflectors from polymer composite materials with the aluminium honeycomb fillers became widely used in space communication systems in the past decades. There are technological possibilities for creating reflectors in the form of thin-walled ribbed shell with the lower linear density than that of the three-layered structures. The paper presents the results of the temperature and stress-strain analysis for the two types of structures, which could help to select the variant with the best performance characteristics.
Introduction
The state-of-the-art space antenna reflectors are essentially parabolic shells with various types of reinforcement that enable the required stiffness. The most frequent design layout is a three-layered shell with the honeycomb reinforcement, less frequent is a thin shell with the reinforcing ribs over the convex surface [1]. The common features for all structures are, firstly, the requirement for the high shape and dimensional stability under condition of varying temperature as a result of the spacecraft entering the Earth shadow and, secondly, low linear density (the mass in relation to the surface area). The design of the reflectors with the honeycomb filler meets the most stringent requirements with the regard to the shape and dimensional stability, while having a relatively small linear density in the range of 3.5-5.0 kg/m 2 [2]. Sources [3] and [4] presented the results of simulating the thermal and stress-strain behavior of several reflector variants with the concave shell ribbed reinforcement for the geostationary Earth orbit (GEO) conditions, which proved the possibility of the designs with the small linear density and high stiffness [4]. After the analysis of various reinforcement patterns [5], the "five-point star" pattern was singled out, as being superior to the other variants with respect to stiffness and linear density.
This research aims to compare the characteristics of the space antenna reflector layouts with the honeycomb filling and ribbed reinforcement. In order to achieve this aim, the following objectives were fulfilled consecutively: x Geometric models were developed; x Thermal physical characterisctics of the honeycomb filling were determined; x Temperature and stress-strain state at GEO was determined.
Geometric models development
The comparative analysis involved building geometric models of the design with the honeycomb filling ( Fig. 1) and the ribbed reinforcement (Fig. 2). The shells in the both variants were made from carbon fibre composite.
The temperature and stress-strain simulation was performed using the finite-element method in the Siemens NX PLM software package. The carbon fibre composite was assumed to have the following thermal physical and mechanical properties: thermal conductivity coefficient 1000 J/(kg•K); density 1550 kg/m 3 ; emissivity 0.85; absorptivity 0.735; CLTE 5.27•10 -7 , Young's modulus 140 GPa, Poisson ratio 0.3.
In the honeycomb filled design each carbon fibre composite shell was 0.6 mm thick, the honeycomb layer was 25 mm thick.
Honeycomb filler characterization
For the finite-element analysis of the thermal state, the information about the thermal physical and mechanical properties of the honeycomb fillers is essential. The unified reference data for the honeycomb materials are not available, which necessitates conducting thermal physical and mechanical characterization for each design layout.
The mechanical characteristics of the honeycomb materials were determined by menas of the e-Xstreme Digimat 6.0. The input data included the АМг-2 alloy properties, in particular, Young's modulus 71 GPa, density 2680 kg/m 3 ; the honeycomb characteristics: hexagonal pattern, 0.015 mm wall thickness, 25 mm filler height. The mechanical properties values for the honeycomb material are given in Tables 1 and 2. The thermal physical properties in the longitudinal direction were determined using a 3D model of the honeycomb elementary cell, with 0.015 mm wall thickness and 25 mm height. Hexagonal polygonal bodies 0.6 mm thick were added above and below it to simulate the carbon fibre shell. After that a finite-element model was created and the boundary conditions were specified in the form of temperature and radiative heat transfer inside the cell.
The Siemens NX Nastran was used to conduct the finite-element analysis for 15 calculating cases simulating various operating temperatures at 20 °C interval: from minus 120 °C to plus 160 °C. The heat flux density through the elementary honeycomb cell was determined in accordance with Fourier law: ; (1) where q is the heat flux density; λ eff is the effective heat conductivity, T 1 is the temperature on the upper shell surface, T 2 is the temperature on the lower shell surface. Temperatures T 1 , T 2 for each case were specified with a 2 °C difference. For example, for the 60 °C case the temperature T 1 constituted 59 °С, T 2 -61 °С. After the heat density flux was determined, the honeycomb material thermal conductivity was estimated ; (2) . (3) where λ 1 , λ 3 are the shell thermal conductivities; λ 2 is the filler thermal conductivity. The estimation of the thermal efficiency in the transverse direction employed the layout similar to that for the longitudinal direction. The thermal load conditions were identical to the estimation of the longitudinal thermal conductivity.
The thermal conductivity of the honeycomb filler in the longitudinal and transverse directions is presented in Fig. 3. Fig. 3. Thermal efficiency of the honeycomb material as a function of temperature, longitudinal (1) and transverse (2) directions.
Determining the stress-strain behavior of the honeycomb filler structure
The temperature gradient was determined by means of Space Systems Thermal solver from the Siemens NX software package. The research involved analyzing 24 moments of operation in the GEO. The largest gradient coincided with the moment when the reflector axis of rotation was at 150 ° relative to the Earth-Sun axis. The thermal calculation data were used as the reference data for stress-strain calculation. The simulation results are presented in Fig. 4. Fig. 4. Thermal (left) and stress-strain (right) behavior of the three-layered structure at the moment, when the reflector axis of rotation was at 150°angle the Earth-Sun axis at GEO.
Determining the stress-strain behavior of the ribbed reinforcement structure
To estimate the stress-strain behavior of the ribbed reinforcement structure, 24 moments of the work at GEO were analyzed. The largest temperature gradient coincided with the moment when the reflector axis of rotation was at 120° relative to the Earth-Sun axis. The thermal calculation data were used as the reference data for stress-strain calculation. The simulation results are presented in Fig. 5. 5. Thermal (left) and stress-strain (right) behavior of the ribbed reinforcement structure at the moment, when the reflector axis of rotation was at 120°angle to the Earth-Sun axis at GEO. Table 3 presents the comparison of the design layout characteristics under investigation. As is evident from Table 3 the honeycomb filler structure is characterized by a more uniform distribution and a lower temperature gradient. The three-layered structure had 0.238 maximum displacement, while the mean displacement across the surface is in the range of 0.04 to 0.06 mm. The maximum displacement for the ribbed reinforcement structure constituted 0.049. However, the ribbed reinforcement layout has a clear advantage with the mass twice as low as that of the honeycomb filler layout. Therefore, the ribbed reinforcement layout is selected for the further development.
Analysis
Some results of this work were obtained in the framework of the project №1864 under the state commission №2014/104 for the public research activity as part of the commission by the Ministry of education and science of the Russian Federation. | 1,717.8 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Effect of Doppler Shift on the Performance of Multicell Full-Duplex Massive MIMO Networks
This paper studies the uplink and downlink achievable rates for large-scale multiple-input–multiple-output (MIMO) systems, assuming perfect and imperfect channel state information under the effect of Doppler shift. Different from the previous relevant work, we consider a multiuser system scenario, where a full-duplex mode worked in the multicell base stations, and maximum-ratio combining/maximum-ratio transmission is applied at the receiver end. Then, we derive the asymptotic uplink and downlink sum rate by utilizing the large number theorem and considering the performance comparison between the perfect and imperfect channel. In addition, the impact of the Doppler shift on the system performance (e.g., the uplink and downlink sum rates) is analyzed via the simulation results.
I. INTRODUCTION
D UE to its high-frequency spectrum and energy utilization, security, and robustness, large-scale multiple-inputmultiple-output (MIMO) systems are well suited for the future broadband, digital network architectures for Internet of Things (IoT) and cloud service interconnection. It is also true that many scholars regard it as one of the key technologies of fifthgeneration (5G) mobile communication and have conducted in-depth research on it. Dr. Marzetta of the Seoul Laboratories conducted a detailed analysis of the spectrum utilization and system throughput of large-scale MIMO systems [1], and proposed the main factors constraining the development of the large-scale MIMO such as pilot pollution. In [2] and [3], Jensen's inequality and other mathematical tools were used to obtain the user reachable rate for a finite number of base station (BS) antennas, and then, used it to analyze the spectrum efficiency and throughput of the system. In detail, Ngo explained another important advantage of the massive MIMO in reducing the transmit power [4]. When the antenna size is large, the transmit power can be reduced by an order of magnitude or even more. In [4], the relation between the transmit power reduction degree and the BS antenna number (M ) is investigated. In addition, it is pointed out that if the BS can obtain the ideal channel state information (CSI), the transmit power per user can be reduced 1/M , and if the BS needs to estimate the channel, the transmit power per user can be reduced to 1/ √ M . On the other hand, the full-duplex technology can increase the spectrum efficiency of the system by nearly twice, while the premise is that it can effectively remove or suppress selfinterference (SI) (e.g., the cofrequency interference) [5], [6]. Because of the extreme performance requirements in the 5G wireless networks, the use of a single item often fails to meet these demands. Thus, scholars usually combine multiple techniques to improve the system performance. In full-duplex largescale antenna systems, the spectral efficiency of the system increases with the number of antennas. When the number of antennas is large enough, since the linear processing method effectively suppresses the SI, the spectral efficiency of the entire smart worker almost doubles that of the time-division duplex half-duplex system. Therefore, the combination of the full-duplex and large-scale antenna technologies can well satisfy the requirements of the future 5G communications, which has been studied widely in [7]- [9]. In [9], multiple pairs of users exchanged information with the aid of full-duplex relays with large-scale antennas, and proposed an optimal power allocation algorithm to minimize the total energy consumed by the system. Ngo et al. studied the linear processing method of the full-duplex one-way relay system to improve the energy efficiency [8]. Zhang studied the effect of the power scaling on the spectrum and energy efficiency in a large-scale two-way relay system [7]. Due to the effects of Doppler shifts, the channel causes channel aging. To the best of the authors' knowledge, the effects of Doppler shifts have not been fully demonstrated in the previous work on massive MIMO been studied in other MIMO cellular configurations, such as in multicell transmission [9], the obtained results cannot be directly applied to massive MIMO systems. Previously, we have studied the influence of the Doppler shift in a single-cell scenario on a full-duplex massive MIMO communication system. However, in contrast to the single-cell scenario, the pilot pollution caused by the reuse of pilot sequences between different cells must be considered in the multicell scenario. For the multicell downlink transmission, the authors in [10]- [13] analyzed the spectral efficiency of multicells and the impact of pilot pollution. Using pilot design [14], pilot allocation [15], and precoding [16] can effectively reduce the impact of pilot pollution. For the multicell uplink transmission, this paper proposes an optimal linear receiver by maximizing the signal-to-interference-plus-noise ratio (SINR) and deduces the achievable rates in closed forms.
The rest of this paper is organized as follows. The system model is introduced in Section II. Section III mainly analyzes the uplink and downlink rate. Section IV shows the simulation results. Section V is the summary of the full paper.
Notations: The symbols used in this paper are as follows: (A) T , (A) H , tr (A), A , and E {•} denote the matrix transpose, conjugate transpose, matrix trace, the Euclidean norm, and expectation, respectively; [A] nn denotes the n × n diagonal entry of the matrix A; I M represents the M × M identity matrix.
x ∼ CN (0, σ ij I) represents that x is a circularly symmetric complex-Gaussian vector whose entry's mean and variance are 0 and σ ij , respectively.
II. SYSTEM MODEL
This paper studies a massive MIMO system with L cells in the full-duplex mode as shown in Fig. 1. Each cell consists of K full-duplex single-antenna users and a full-duplex BS equipped with N antennas. Transmission is considered over frequency-flat fading channels. In this paper, an additive white Gaussian noise is considered at each transmit antenna through a dynamic range model, in which the variance is κ (κ 1) times the power of the transmit signals. Here, κ is the dynamic range parameter. Compared with the thermal noise of the transmitter, the fullduplex transmitter noise will be propagated on the SI channel and turn into complicated. However, compared with the receiver thermal noise, the influence of the transmitter noise transmitted on the uplink/downlink channel can be ignored [17].
Here, G u,jl [0] is supposed as an N × K matrix that denotes the uplink channel matrix from the users in the lth cell to the jth BS at time 0, which means the time as the all symbols are transmitted in the training phase. G d,jl [0] is supposed as an N × K matrix denoting the downlink channel matrix from the jth BS to the users in the lth cell. The propagation channel model in our system considers both small-scale fading caused by the multipath and the large-scale fading caused by the shadowing effect. Next, the channel vector of uplink and downlink is denoted as where H φ,jl [0] ∈ C N ×K represents the small-scale fading channels and its elements obey the independent and identically distribution (i.i.d.) CN (0, 1), and D φ,jl is the large-scale fading diagonal matrix with diagonal term [D φ,jl ] n = β φ,jln , which denotes the large-scale fading between the nth uplink/downlink user in the lth cell and the jth BS. Let g φ,jlk [0] be the kth column of the matrix G φ,jl [0].
A. Uplink Transmission
The N × 1 uplink signal vector received by the jth full-duplex BS at time n(n = 0) is y u,j [n] . And the receiver noise is shown as n u,j [n] containing i.i.d CN(0, σ 2 ) entries. As we all know, if the jth BS knows the SI channel and downlink signal, the SI cancellation can be performed. So, (2) can be written as
B. Downlink Transmission
In the lth cell, the users will receive the signals, which can be expressed as a K × 1 vector y d,l [n], shown as where F lj [n] is a K × K matrix that denotes the user-user interference channel from K uplink users in the jth to K downlink users in the lth cell and the large-scale fast fading channel coefficient between the ith uplink user in the jth cell and the kth downlink user in the lth cell is
A. Perfect CSI
We first consider the case that the BS has perfect CSI. We suppose that the users begin to move at time n, so the autoregressive model can be applied as [18] In this expression, α [n] = J 0 (2πf D T s n) denotes a temporal correlation parameter. J 0 () denotes the first kind Bessel function with zero order, the maximum Doppler shift is written as f D = vf c /c, where v denotes the relative velocity of the users, f c denotes the carrier frequency, and c denotes the light speed.
. Then (5) can be rewritten as 1) Achievable Uplink Rate: The uplink will be first analyzed in this part. The jth BS receive signals from users in the lth cell, which include interference signals. Therefore, we apply maximum-ratio combining (MRC) detector for detecting the uplink signal. Thus, a K × 1 signal vector can be obtained as By substituting (6) into (7), (7) can be rewritten as The M × 1 downlink signal vectors x d,l [n] will be transmitted by the lth BS by precoding the downlink messages using the maximum-ratio transmission (MRT) where E(s d,l s H d,l ) = P d I K . So, by substituting (9) into (8), the uplink signal from the user k in the lth cell received by the jth BS is given by (10) at the bottom of this page.
Let both sides of the equal sign be divided by √ N . Then So, we can get the power of I u,1 as Continuously, the power of I u,2 can be obtained as (14) can be rewritten as Nβ u,jjk .
(15) Based on Lemma 1, we know Substituting (16) into (15), the power of I u,2 can be obtained as The power of I u,3 will be introduced as (18) can be rewritten as Similarly from Lemma 1, we get Therefore, the power of I u,3 is Also, we can write the power of I u,4 as Using the same method in (18) So, the power of I u,4 can be obtained as At last, the power of I u,5 can be shown as Combining (13), (17), (21), (26), and (27), the uplink rate of the user k in the lth cell is given as in (28) at the bottom of this page.
2) Achievable Downlink Rate: Similarly, the kth full-duplex user in the lth cell can exercise the SI cancellation by taking away SI from the received signal in (2). Therefore, we can get The power of I d,1 is as follows: .
Using the same method in (20), the power of I d,2 is obtained as We can find that I d, 3 is similar to I d,2 , so the power of I d, 3 is Since each user is on the move, we still utilize an autoregressive model for the channels between the users. So, I d,4 in (31) can be rewritten as Finally, it is easy to get the power of I d, 4 as Substituting (33), (36), (37), and (39) into (32), the downlink rate of the user k in the lth cell is given by (40) at the bottom of this page.
B. Imperfect CSI
In MIMO networks, to perform uplink and downlink beamforming, the BS must get the uplink and downlink channel information to perform coherent detection and precoding in the uplink and downlink, respectively. Here, the ergodic achievable rates with channel estimation error will be derived. The uplink and downlink channels are estimated by the uplink training sequences in this system, therefore, the pilot overhead is only proportional to the number of users. The uplink and downlink data transmissions begin simultaneously after the uplink training.
During the uplink training period, K mutually orthogonal pilot sequences of the length τ (τ K) symbols are adopted to estimate the channel between each BS and its associated users within the coherent interval of T . L cells will reuse the same set of pilot sequences. Due to the nonorthogonality of the reused pilot, the channel estimate will be destroyed by the pilot pollution [20]. Denote an average channel training power at each user by P p , which is dependent on the length of the pilot sequence.
To acquire an M -dimensional vector y P p ,jk [0], the jth BS will associate the received signal from the uplink training with the pilot sequences assigned for the kth user. So, y P p ,jk [0] can be obtained as (41) The MMSE channel estimate of the kth user in the jth cell can be obtained aŝ Since the MMSE estimator has orthogonality, the real channel can be divided into the estimated channel and channel estimation errors. Therefore, one can get where . Also, the user is supposed to start moving at time n (n = 0), applying the autoregressive model as Thus, the power of I u,1 is Let . Then the power of is I u,2 can be written as Therefore, the power of I u,2 is Thus, the power of I u, 3 can be obtained as Let . The power Thus, the expression in (56) can be rewritten as we can get the power of I u,4 as Finally, we can easily know 2) Achievable Downlink Rate: The signal received by the kth full-duplex user in the lth cell, where k ∈ K is written as So, using Lemma 1, the power of I d,2 is Also, the power of I d, 3 is Since each user is on the move, we also use an autoregressive model for the channels between the users. Thus, we can turn I d,4 in (64) into Finally, it is easy to get the power of I d, 5 as Therefore, the downlink rate of the user k in the lth cell is obtained as (73), shown at the bottom of this page.
IV. SIMULATION RESULTS
We consider seven cells, where the radius of each cell is 1000 m, and all the users (K = 4) are uniformly distributed within the cell. Each cell has a full-duplex BS. Assuming that each cell has a guard range of r 0 = 100 m that means the distance between the nearest user and the BS. The large-scale fading can be modeled as β k = z k / (r k /r 0 ) η . In this expression, z k is a log-normal random variable with standard deviation σ representing the shadow fading effect and the distance between the kth user and the BS is denoted by r k (100 ≤ r k ≤ 1000). η is the path loss in the system [20]. In simulation, we use σ = 8 dB and η = 3.8. In Doppler frequency shift factor, we use the carrier frequency f c = 2.5 GHz, the channel sampling interval T s = 5 ms, the dynamic range parameter κ =0.013 and an average power constraint P d =10 dB. AS the user rate v = 3 km/h, we can get η = 0.9881; as the user rate v = 250 km/h, also we can get η = 0.0204. So, it is easy to see that the Doppler shift is getting worse with the increase of users velocity. Fig. 2 shows uplink rate versus the transmit power of the user. From the curve, we can see that the rate increases with the increasing transmit power of the user. And when the transmission power is fixed, increasing the number of antennas can increase the rate. Therefore, we can add the number of antennas to reduce the transmit power of the user. Figs. 3 and 4 present the uplink and downlink sum rate versus the normalized Doppler shift with perfect CSI. From Fig. 3, we can easily find that uplink sum rate will deduce as the normalized Doppler shift increases, especially in the second peak. Also, when the normalized Doppler shift increases (i.e., user speed increases), deploying more antennas can make up for the reduced rate. Besides, the uplink can resist the decline better than the uplink. Similarly, Fig. 5 shows the sum rate versus the normalized Doppler shift with perfect CSI. And the same conclusion can be drawn. Figs. 6 and 7 present the uplink and downlink sum rate versus the normalized Doppler shift with imperfect CSI. As is shown in the figure, uplink sum rate will deduce with the normalized Doppler shift increasing, especially in the second peak. Also, when the normalized Doppler shift increases (i.e., user speed increases), deploying more antennas can make up for the reduced rate. Besides, the uplink can resist the decline better than the uplink. Fig. 8 shows the sum rate versus the normalized Doppler shift with imperfect CSI. And the same conclusion can be drawn. Fig. 9 shows the sum rate changing with the uplink transmit power. Obviously, as the transmit power gets larger, the rate increases. And, increasing the number of antennas can further increase the rate. Fig. 10 provides a contrast for the uplink sum rate under perfect and imperfect channel. From the picture, we can easily find the influence of the imperfection of the channel. Moreover, the bigger the value of time n, the more serious the effect of the Doppler shift on the performance of the system. As is shown in Fig. 5, the rate of the imperfect channel is nearly half that of perfect channels under the same conditions.
V. CONCLUSION
This paper mainly discusses the performance of the fullduplex system with effects of Doppler shift. Both perfect and imperfect CSI are, respectively, considered. We use an autoregressive model, to model the effects of the Doppler shift. In this paper, we assume a scenes with L cells. Then, we apply MRC/MRT to optimize the system performance. Through simulation, we find that when the normalized Doppler shift increases (i.e., user speed increases), deploying more antennas can make up for the reduced rate and as the transmit power gets larger, the rate increases. Also, increasing the number of antennas can further increase the rate. | 4,271.4 | 2020-06-01T00:00:00.000 | [
"Computer Science"
] |
The use of polyazolidineammonium and dimethyl-sulfoxide antigen Yersinia pseudotuberculosis to obtain hyperimmune serum
The use of polyazolidinammonium modified with iodine hydrate ions (PAAG) as an adjuvant made it possible to obtain rabbit hyperimmune blood serums for dimethyl-sulfoxide antigen (DA) of a pseudotuberculosis microbe with generic specificity. Antibody titers in ELISA with Y. pseudotuberculosis and Y. enterocolitica cells amounted to 1:25600-1:12800, and with cells of other genera of intestinal microflora 1:100-1:400. The optimal immunizing dose for obtaining hyperimmune yersiniosis serum was a dose of 2 mg DA of Y. pseudotuberculosis per rabbit. Such a dose made it possible to obtain hyperimmune sera with a high titer of specific antibodies with a small consumption of antigen. The optimal concentration of PAAG solution for hyperimmunization of Y. pseudotuberculosis DA rabbits was 1%.
Introduction
Intestinal yersiniosis is registered in many countries of the world and occurs in pigs with a large livestock population. In addition to pigs, the circulation of Yersinia enterocolitica (Y. enterocolitica) is detected in other domesticated animals and birds. However, pigs are the main source of Y. enterocolitica for human infection [1][2][3][4][5][6][7].
There is less information on the circulation of the pseudotuberculosis microbe in animals than on the circulation of the causative agent of intestinal yersiniosis. Pseudotuberculosis in animals occurs sporadically or in the form of small outbreaks. Infection of people with Yersinia pseudotuberculosis (Y. pseudotuberculosis) occurs through an alimentary route mainly through plant products and the role of animals in human infection is not clear [1][2][3][4].
Studies by a number of scientists indicate the possibility of simultaneous circulation of both pathogens in the intestines of pigs [3,4]. At the same time, diagnostic preparations are in demand, allowing simultaneous indication of Y. enterocolitica and Y. pseudotuberculosis. Such drugs are based on hyperimmune serums with generic specificity.
Hyperimmune sera are obtained by repeated immunization of animal producers with a mixture of antigen and adjuvant. Dimethyl-sulfoxide antigen (DA) can be used to obtain blood serums with generic specificity. This antigen was first studied in Mycobacterium tuberculosis [8]. We then isolated and studied DA of Y. enterocolitica and Y. pseudotuberculosis, as well as antibodies to them [9,10]. The antibodies obtained from Y. enterocolitica allowed us to create two diagnostic test systems based on them [11,12], the successful tests of which showed the potential for further research in this area.
Recently, synthetic polyelectrolytes have gained popularity as adjuvants. The simplicity of chemical synthesis, solubility in water, and the ability to form conjugates with antigen particles have opened up prospects for their use as adjuvants [13,14]. One of the representatives of this group of chemical compounds is polyazolidinammonium modified with iodine hydrate ions (PAAG). It has a wide range of antimicrobial properties [15] and is safe for warm-blooded animals [16]. A complex adjuvant consisting of PAAG and microparticles of calcium carbonate was developed for vaccination of animals [17]. The possibilities of using PAAG as an adjuvant for obtaining hyperimmune blood serum were first studied by us when immunizing rabbits with lipopolysaccharide and disintegrated membranes of Y. pseudotuberculosis [18,19]. This experiment showed the promise of using PAAG for hyperimmunization. The aim of our study was to determine the possibility of using PAAG in combination with DA Y. pseudotuberculosis to obtain rabbit hyperimmune pseudotuberculous serum.
Experiment design: 1. Multiple immunization of rabbits with different doses of DA in combination with PAAG to determine the optimal immunizing dose of Y. pseudotuberculosis DA.
2. Multiple immunization of rabbits with DA using various concentrations of PAAG to determine the concentration of the drug with the highest adjuvant properties.
3. The study of the specificity of the obtained hyperimmune sera in ELISA. 4. Based on the analysis of the results of ELISA and leukocyte counting, it is possible to draw a conclusion about the effectiveness of the use of PAAG in combination with DA Y. pseudotuberculosis.
Materials and methods
To obtain DA, a microbial culture of Y. pseudotuberculosis III O:3 serovariants from the museum collection of pathogenic microorganisms of the Federal Research Institution of Health Protection and Health Research of Russia "Microbe" was used, which has characteristic morphological, cultural, biochemical and serological properties.
The method for obtaining the DA of a pseudotuberculosis microbe consisted in treating the dry acetone microbial mass of bacteria with dimethyl-sulfoxide, followed by taking the liquid, releasing it from dimethyl-sulfoxide and lyophilization [9].
Immunization of male rabbits weighing 2.5 kg of the Chinchilla breed was carried out subcutaneously along the back at 3-4 points in a volume of 1 ml of the mixture. The ratio of adjuvant to antigen solution was 1:1. There were 5 immunizations with an interval of 2 weeks. Blood for the study was taken from the ear vein in a volume of 5 ml a day before the introduction of the antigen, starting with 1 immunization.
The resulting hyperimmune rabbit blood serum was studied by the method of solidphase indirect ELISA.
The number of leukocytes, lymphocytes and granulocytes in the blood of rabbits was determined on a hematological analyzer. As
Results
To determine the optimal immunizing dose of DA for Y. pseudotuberculosis, rabbits were divided into 6 experimental and 6 control groups, 3 rabbits in each group. Animals from each of the 6 groups were immunized five times with one of the DA doses: 0.2, 1, 2, 4, 8, or 16 mg / animal. Before immunization, the rabbits of the experimental groups were mixed with the antigen 1:1 with 1% solution of PAAG (DA + PAAG), and the rabbits of the control groups were mixed with physiological saline (DA + PS). The obtained blood serum was examined by ELISA in reaction with DA Y. pseudotuberculosis (20 μg / ml) ( Table 1). As can be seen from table 1, in the control groups, the increase in antibody titer is directly proportional to the increase in the dose of DA and the number of immunizations.
In the experimental groups, the action of PAAG cancels the dependence of the antibody titer on the dose of DA in the range of 2-16 mg / rabbit, and at doses of 0.2-1 mg / rabbit this dependence is not as pronounced as in the control groups.
The use of PAAG allows in experimental groups to obtain, after 5 immunizations, serums with a higher antibody titer than in the control: when immunized with doses of 0.2-2 mg / rabbit DA, the values of the experimental group titers are 8 times higher than the control titer, with a dose of 4 mg / rabbit DA -4 times, with a dose of 8 mg / rabbit -2 times. However, with a significant dose of DA (16 mg / rabbit), the effect of PAAG on the antibody titer is absent.
The highest antibody activity is serum obtained from rabbits immunized with DA in doses of 2-16 mg / rabbit using PAAG. The titers of these sera were 1:409600.
In rabbits immunized with various doses of DA Y. pseudotuberculosis, after 5 immunizations, blood was additionally examined to calculate the total number of leukocytes (WBC), as well as their two types: lymphocytes (L) and granulocytes (G) ( Table 2). As can be seen from table 2, an increase in the number of leukocytes is affected by an increase in the immunizing dose of DA Y. pseudotuberculosis, as well as the use of PAAG. PAAG has a more pronounced stimulating effect on lymphocytes than on granulocytes.
The optimal concentration of PAAG was determined by immunization of 3 groups of rabbits, which were injected with 2 mg of DA Y. pseudotuberculosis in a mixture with various concentrations of PAAG (0.2%, 1%, 5%) in a 1:1 ratio. Rabbits were immunized as described above. The blood serum obtained after 5 immunizations was tested by ELISA in response to DA Y. pseudotuberculosis (20 μg / ml).
The specificity of the sera obtained after 5 immunizations was studied in ELISA with formalized bacterial cells. The results are shown in table 3. Table 3. The results of the determination of the specificity of the blood serum of rabbits immunized with DA Y. pseudotuberculosis and PAAG. In the study of the specificity of rabbit blood serum obtained using PAAG and DA Y. pseudotuberculosis, showed high titers of antibodies with cells of Y. pseudotuberculosis and Y. enterocolitica, as well as low titers with cells of other genera of intestinal microflora and brucella, which indicates the genus specificity of the data serums.
Discussion
For hyperimmunization of rabbits with DA Y. pseudotuberculosis and PAAG, 2 mg DA / rabbit should be used as the main immunizing dose, as such a dose allows for a small consumption of antigen to obtain hyperimmune serum with a high titer of specific antibodies. However, it should be noted that when using high doses of DA (16 mg / rabbit), the use of an adjuvant is not required, because the composition of DA Y. pseudotuberculosis includes proteins that aggregate in concentrated solutions, increasing their antigenicity.
The mechanism of action of PAAG on antibody genesis is apparently associated mainly with an increase in the rate of antigen presentation, lymphocyte differentiation, and to a lesser extent due to an increase in the number of lymphoid cells. This assumption is indicated by high titers of specific antibodies with a relatively low number of lymphocytes in the blood.
The optimal concentration of PAAG solution for immunization is 1%. Exceeding this concentration by 5 times (5% PAAG solution) is accompanied by the appearance of a smell of iodine. An excess of which, apparently, negatively affects the properties of DA and the local reaction of the body, which leads to a decrease in the titer of specific antibodies.
The results of ELISA with various bacteria indicate the yersiniosis specificity of the | 2,245.6 | 2020-01-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Effect of Synthesis of the Starting Powders on the Properties of Cu-Ti-TiB 2 Alloy Obtained by Laser Melting
: A comparison was made between layer-by-layer laser melting (LM) of two types of feedstock powders: (1) elemental powder blend and (2) mechanically alloyed powder. LM was done by Nd:YAG laser at 1064 nm (max. average power 100 W) in argon ambience. Samples synthesized were Cu-Ti-TiB 2 rectangular tracks (20×6×1 mm), and input parameters of the process: powder layer thickness 100-250 µm, hatch spacing 1 mm, pulse length 4 ms, energy 4 J, pulse repetition rate 20 Hz. Part of the samples was heat-treated in argon at 900 °C, 10 h. Structural characterization of the samples was done using light microscope and scanning electron microscope (SEM). Chemical analysis of the as-obtained laser melted samples was done by inductively coupled plasma-atomic emission spectrometry (ICP-AES). It was established that the microstructure of LM samples was comprised of Cu-Ti and Cu-B solid regions, and in situ formed microparticles of primary TiB 2 . Only after high-temperature thermal treatment has the secondary TiB 2 occurred. Tensile tests showed much higher strengthening in heat-treated samples with mechanically alloyed powder as starting material, where the formation of secondary TiB 2 nanoparticles was considerable.
Introduction
Laser melting (LM) is a unique additive manufacturing (AM) technology for production of complex-shaped objects with mechanical properties comparable to bulk material. Also, LM is one of few Rapid Prototyping (RP) techniques used for obtaining of composite materials [1][2][3]. SLM is controlled through set of parameters comprising process parameters [4], properties of powders subjected to radiation [5] and physico-chemical parameters of the process [6]. It is considered that the most important parameters in the process of laser melting are: laser power, scanning speed, hatch distance and layer thickness. These parameters were the ones mostly studied in the investigations related to this area. However, in order to obtain 3D compact of desired properties, numerous other parameters are not to be neglected. Aside from parameters related with laser and interaction process between laser beam and powder, there was a lot of research on parameters connected with starting material, i.e. their effect on microstructural, physical and mechanical characteristics of the final product. Particularly, the influence of shape [7] and size distribution [8] of the individual powder particles was studied, as well as the influence of powder tapped density and surface oxidation degree on the properties of LM compact [9]. Apart from these parameters, one of the important conditions for obtaining compacts with desired properties in metal alloys and composites is certainly preparation of powders for laser melting. Starting materials for production of 3D compacts by laser melting can be mixtures of elementary powders, prealloyed powders of the appropriate composition, or coated powders where particles of one metal are coated with particles of another metal [10]. All of these three ways have their advantages and disadvantages depending on the nature and composition of the constituents comprising alloy or composite. The advantage of using prealloyed powders is a homogeneous chemical composition of the material. The downside is a weaker control of the melting process due to dependence of the laser melting parameters (temperature, viscosity and surface tension) on the bulk composition. In mixed powders, a better control is achieved regarding viscosity and surface tension compared to prealloyed powders, however the wetting is poorer, as well as the kinetics of liquid phase spreading which is generally longer that the melt pool life time. Coated powders provide better bonding and higher absorption of laser energy. On the other hand, problem which can occur with these powders is a sublimation of impurities in the coating.
The aim here was to investigate the effect of the mixture of elementary powders, i.e. mechanically alloyed powders of the same composition, on the formation of TiB 2 reinforcements in copper matrix during the process of obtaining Cu-Ti-TiB 2 composite by laser melting. Copper is a material of choice when high thermal and electrical conductivity are required, and various copper-based composites are developed for improving its mechanical properties [11]. Composite Cu-Ti-TiB 2 was selected due to its potential, considering its properties [12], as well as future applications in military industry [13] and nuclear technique [14,15]. This multiple (precipitation and dispersion) strengthened material is superior in structural stability to the precipitation hardenable alloys (such as Cu-Ti, Cu-Cr, Cu-Zr, etc.), because the second phase (TiB 2 ) has no tendency to dissolve at high temperatures [16], a characteristic of the precipitation hardenable alloys . Also, since the second phase is inert, it reduces electrical conductivity only to the extent that it reduces the cross-section of the material. Thus, electrical conductivities of the order of 80-95 % IACS can be achieved [17].
Materials and Experimental Procedures
Starting powders employed in experiments were: water atomized copper, titanium produced by hydride-dehydride processes (both size <63 µm, 99.5 % purity), and amorphous boron obtained by reduction of boron oxide (size <0.08 µm, 97 % purity). Feedstock powders used were: (1) elemental powder blend, and (2) mechanically alloyed (MA) powder, with the goal of comparing their behavior during SLM. In case of powder blend, the ratio was Cu-1Ti-0.35B (wt.%) and homogenization was done for 1 h. The second powder was obtained by mixing for 1 h binary powders Cu-2wt%Ti and Cu-0.7wt%B which were first separately mechanically alloyed for 24 h. MA was done in argon using steel balls (diameter 6 mm), with ball/powder ratio 5:1 and stirring speed 330 rpm. Experiments were done in argon ambient, and rectangular samples of Cu-Ti-TiB 2 (20×6×1 mm) were obtained in layer-by-layer manner using Nd:YAG millisecond laser. Process parameters employed were as follows: pulse repetition rate 20 Hz, pulse duration 4 ms, energy ~4 J, hatch distance 1 mm, and layer thickness ~100 µm/250 µm for mixed/MA powders. Some picese were thermally treated at 900 °C for 10 h in Ar. Chemical analysis of the as-obtained laser melted samples was done using inductively coupled plasma-atomic emission spectrometry (ICP-AES). Concentration of TiB 2 compound was determined from the internal standards formed for the referent samples of this compound.
Microstructure of the starting powders and LM samples was investigated by optical microscope (OM), as well as scanning electron microscope (SEM) connected with energy dispersive x-ray spectroscope (EDS). Metallographic preparation for OM comprised grinding, polishing and etching in KLEM III solution (40 g K 2 S, 11 ml N 2 S 2 O 3 and 100 ml H 2 O). Density of the obtained samples was measured by Archimedes method in water. Ultimate tensile strength and elongation to fracture were tested on Instron universal testing machine (crosshead speed 0.5 mm/min, room temperature). Fractured surfaces were also observed with OM and SEM.
Results and Discussion
Morphology of the mixed powder for LM is shown in Fig. 1. The majority of Cu particles have irregular shape with the presence of some smaller rounded particles. Titanium particles were also irregularly shaped with sharp edges. It can be seen that the particles of alloying elements were uniformly distributed around the base Cu particles, i.e. the mixture was dominantly homogeneous which is critical for uniform absorptance of laser beam and melting. However, some agglomerations consisting of small particles can be clearly observed in given images. EDS analysis showed the presence of certain, not significant amount of oxygen on the surface of copper and titanium particles. with characteristic layers formed due to deformation, fracturing and welding of soft particles [18] leading to their microforging (Fig. 2a, b). Chosen milling parameters enabled alloying of copper particles with titanium and boron to a significant degree, which was confirmed with SEM images (Fig. 2c, d) and EDS quantitative analysis. The presence of boron could be identified only at higher magnification. Compared to mixed powders, number of agglomerated boron particles in mechanically alloyed Cu-B particles was lower.
Through LM process Cu-Ti-TiB 2 3D samples (~15 layers) were obtained, Fig. 3. Laser melting process was conducted so that in both cases (mixed and mechanically alloyed powders) building parameters of synthesis, as well as the content of alloying elements was the same. The overall porosity in laser melted samples from mixed powder was about 15 %. Due to in situ formation of TiB 2 particles and their rapid growth surface of the formed layer had prominent roughness, which led to a lot of empty space between melted layers, Figs. 4a and 4b. In laser melted samples with mechanically alloyed powder, Figs. 4c and 4d, there was not enough time for larger number of primary TiB 2 particles to form, nor for higher degree of their coarsening. Particles were mostly formed on solid region boundaries. For that reason, melted layers were flatter and the overall porosity was lower, around 8 %.
Temperatures during interaction of laser beam with powder particles of copper alloys under these experimental conditions can reach 4000-5000 K, which enables formation of TiB 2 dispersoids by rapid solidification from these temperatures [19]. In compacts where the starting material was comprised of blended powders, preferential locations for formation of these ceramic particles were between base metal particles, although they could be formed also on the starting titanium particles. It can be noted from Figs. 4a and 4b that considerably large (even up to 10 µm) TiB 2 particles were formed. Formation of ceramic particles this big in the course of laser melting, as well as their wider size distribution, is characteristic for in situ obtaining of TiB 2 in liquid phase [20]. Most frequent locations for the formation of primary TiB 2 particles in compacts where the starting material were mechanically alloyed powders, were borders between Cu-Ti and Cu-B solid regions, where alloying elements are closest to one another. Formed TiB 2 particles were significantly smaller (submicron size and somewhat above 1 µm, Fig. 4c and 4d) than in previous case, due to longer diffusion paths titanium and boron atoms had to pass in order to form diborides. Elongations were 1.5-6 %.
Obtained tensile strength values in both laser melted samples were low, which was expected due to significant presence of pores and cracks. Based on pore shape, conclusion regarding their origin could be made. Most common reasons for the formation of pores in the structure were weaker chemical bond, especially larger, TiB 2 particles and copper matrix, their falling out during preparation of samples for tensile tests (irregular pore shape, Fig. 4a, b) and vaporization of the base metal during laser melting (spherical pore shape, Fig. 4b).
Residual stresses characteristics for laser melting process (rapid heating and cooling cycles) are the cause for the occurrence of cracks in the structure of these samples (Fig. 4d). Of course, the number of present microstructural defects could be lowered or completely eliminated through appropriate treatments prior to and after laser melting (e.g. by substrate heating before laser melting process or sintering under pressure to reduce or eliminate porosity). Optimization of the process parameters was not priority in this work, but the focus was on the formation of primary and secondary TiB 2 particles and exclusively their influence on the tensile properties of Cu-Ti-TiB 2 alloy. Somewhat higher UTS value in laser melted sample with mechanically alloyed powder is a consequence of better density and the presence of smaller number of coarser primary TiB 2 particles. By heat treatment at 900 °C, 10 h, we wanted to initiate the formation of secondary TiB 2 nanoparticles. Chosen heat treatment conditions were, according to literature [21], optimum for the formation of secondary TiB 2 particles. We noticed subtle reduction of porosity in both samples (from 15 to 12 % in samples with blended powder, i.e. from 8 to 6 % in samples with mechanically alloyed powder). Still high pore content and large number of coarse primary formed TiB 2 particles did not allow for a more significant increase in UTS, as well as ductility (from 1.5 to 2 %) in heat treated samples where blended powder was used as a starting material. The discussion was supported by fractographic and chemical analysis. Fig. 6 shows longitudinal and transversal section of the fracture in laser melted samples after the heat treatment. In both longitudinal (crack propagation direction, Fig. 6a) and transversal section (fracture surface, Fig. 6b) of the fracture in the laser melted samples where blended powder was a starting material , large TiB 2 particles could be observed as a preferential spot for the formation and propagation of cracks. The obtained value for UTS was an indicator that there were very few free Ti and B atoms in this sample for the formation of secondary TiB 2 particles. This was also confirmed by chemical analysis (Tab. I), which shows almost unchanged content of TiB 2 particles before and after the heat treatment. In Fig. 6c, showing the direction of crack propagation, the presence of secondary crack is also observed, which implies better fracture resistivity of the tested sample compared to the sample from the blended powder. In relation to samples from blended powders with prominently brittle fracture surfaces, in samples from mechanically alloyed powders fracture surfaces showed the presence of ductile areas with characteristics dimples in certain parts of the structure (Fig. 6 d). On the fracture surface, in higher magnification, submicron, primary TiB 2 particles could be identified.
After the heat treatment of the samples obtained from mechanically alloyed powder, tensile test showed completely different result, Fig. 5. Increase in the UTS was almost 70 %, while the ductility increased from 3 to 6 %. UTS value is most probably due to occurrence of secondary TiB 2 particles resulting from the reaction between Cu 4 Ti and B [22]. These particles represent obstacles to the movement of dislocations in the compact, and their higher content in matrix brings a noTab. strength increase. Another factor that should be taken into account is the relaxation of residual stresses present in the SLM part through heating and retaining at 900 °C, which also has a positive effect on the UTS value. Of course, formed nanoparticles of secondary TiB 2 phase could not be observed by optical and scanning electron microscope, Fig. 4, but their presence was identified with chemical (Tab. I) analysis. ICP-AES analysis has shown considerbly higher content of TiB 2 phase particles after the heat treatment than prior to exposure to temperature of 900 °C for 10 h of the SLM compacts from mechanically alloyed powders.
Conclusion
Synthesis of Cu-Ti-TiB 2 alloy was conducted through laser melting of (i) blended Cu, Ti and B powders, i.e. (ii) mixed mechanically alloyed Cu-Ti and Cu-B powders. Microstructure of the obtained pieces contained solid regions Cu-Ti and Cu-B, with microparticles of primary TiB 2 formed in situ. It was necessary to conduct a thermal treatment (900 °C, 10 h) for the occurrence of secondary, nanoparticles of TiB 2 , which was detected using ICP-AES analysis and tensile testing of 3D parts. Tensile tests have shown that the strengthening was much higher in heat-treated samples with mechanically alloyed powder as the starting material, due to a more significant formation of secondary TiB 2 nanoparticles. | 3,504.2 | 2020-01-01T00:00:00.000 | [
"Materials Science"
] |
Ensemble System of Deep Neural Networks for Single-Channel Audio Separation
: Speech separation is a well-known problem, especially when there is only one sound mixture available. Estimating the Ideal Binary Mask (IBM) is one solution to this problem. Recent research has focused on the supervised classification approach. The challenge of extracting features from the sources is critical for this method. Speech separation has been accomplished by using a variety of feature extraction models. The majority of them, however, are concentrated on a single feature. The complementary nature of various features have not been thoroughly investigated. In this paper, we propose a deep neural network (DNN) ensemble architecture to completely explore the complimentary nature of the diverse features obtained from raw acoustic features. We examined the penultimate discriminative representations instead of employing the features acquired from the output layer. The learned representations were also fused to produce a new features vector, which was then classified by using the Extreme Learning Machine (ELM). In addition, a genetic algorithm (GA) was created to optimize the parameters globally. The results of the experiments showed that our proposed system completely considered various features and produced a high-quality IBM under different conditions.
Introduction
Both signal processing and neutral network researchers have paid a lot of attention to source separation (SS) in recent years.Source separation refers to the ability to separate a mixed signal into distinct components.Separating target speech from mixed signals is crucial for several applications, including speech communication and automatic speech recognition.From an application viewpoint, conducting speech separation by utilizing a single recorder is frequently the preferred method.To solve this difficult issue, several solutions have been proposed.The recovery (separation) of several audio sources that have been mixed into a single-channel audio signal, such as many persons talking over each other, is the challenge of single-channel audio source separation.Many methods have been suggested to solve the Single-Channel Source Separation (SCSS) issue.One of the main methods, Computational Auditory Scene Analysis (CASA), attempts to emulate the human auditory system in order to identify a variety of sound sources based on distinctive individual qualities [1,2].
A deep-neural-network-based ensemble system is suggested in this study, and 'wide' and 'forward' ensemble systems are used to comprehensively examine the complimentary properties of various characteristics.Additionally, the penultimate representations are looked into rather than the characteristics learnt from the output layer.The Extreme Learning Machine classification of the final embedded features produces binary masks to separate the mixed signals.The experimental findings show that the suggested ensemble system can produce a high-quality binary mask in a variety of settings.
The contributions of this paper are as follows: The Ideal Binary Mask (IBM) is estimated by using a DNN ensemble audio separation method to separate the premixed signal.Each DNN in the proposed system is trained with raw acoustic features by using a layer-wise pretraining approach.Various DNNs can extract different meaningful representations with different initializations.The multiview spectral embedding (MVSE) is used to embed the output of the penultimate layer of each individual DNN into a low-dimensional embedding [3][4][5].The objective is to extensively investigate the aspects that complement the previously studied ones."DNN Ensemble Embedding (DEE)" is the name of the first module.DNN Ensemble Stacking (DES) is the second module, which is a stack of DNN ensembles.The embedded features from the bottom module are concatenated with raw acoustic features to create a new feature set for each individual DNN in this module.
The DNNs in the system have the same design but different initializations for simplicity.By ensembling and stacking the input data, the proposed ensemble system is capable of completely exploring the complementary characteristics of the data and therefore generalize the learned representations with greater robustness and discriminative features than an individual DNN.As a result, even with limited training examples, the suggested system may still perform effectively.The Extreme Learning Machine (ELM) classifier is able to classify the time-frequency (TF) unit more accurately by using the learned discriminative characteristics of the ensemble system, and therefore the estimated IBM is more precise for source separation.Finally, a genetic algorithm is used to finetune the entire system settings in order to regularize any outliers learned by the DNNs and create a smooth map to increase the classification accuracy.Experiments were carried out on a limited training dataset, and the testing results showed that our proposed system could achieve a high separation performance.The proposed method has a high learning speed and high accuracy and lower computational complexity, and the separation performance is improved.
The remainder of this paper is organized as follows: The related work is presented in Section 2. The learning system is introduced in Section 3. Section 4 presents the proposed approach to generate acoustic features and the ensemble and stacking of deep neural networks.Section 5 discusses the experimental results and compares the obtained performance with other contending methods.Finally, the conclusion is drawn in Section 6.
Related Work
In [6], a single-channel audio source separation (SCASS) task was tackled by using a couple of stages in order to separate the sound sources, which was achieved by exploring the interference from other sources and other distortions.From the mixed signal, the sources were separated in the first stage, while deep neural networks (DNNs) were used to minimize both the distortions and the interference between the separated sources in the second stage.In the second stage, two techniques were used to employ the DNNs to increase the quality of the separated sources.Each separated source was improved separately by using a trained DNN that was employed by using the first technique, whereas all the separated sources were improved collectively by using a single DNN that was employed by using the second method.These enhancement techniques utilizing DNNs resulted in the attainment of separated sources with low interference and distortion.Additionally, the DNN-based enhancement approaches have been compared with the Non-Negative Matrix Factorization (NMF)-based enhancement, and the results demonstrated that utilizing DNNs for enhancement is more effective than using NMF.
In [7], a deep-neural-network-based gender-mixture detection method was presented to conduct unsupervised speech separation on mixtures of sound from two unseen speakers in a single-channel situation.A thorough amount of experiments and analyses were carried out, including comparisons between different mixture combinations and the relevance of DNN-based detectors.The results showed that the DNN-based strategy outperformed state-of-the-art unsupervised approaches without requiring any particular knowledge about the mixed target and interfering speakers that were being separated.A stacked Long Short-Term Memory (LSTM) network was suggested in [8], based on the single-channel Blind Source Separation of a spatial aliasing signal by using a deep learning approach.The results showed that when compared to classical techniques (Independent Component Analysis (ICA), NMF, and other deep learning models), the model had a strong performance in both pure and also noisy environments.In addition, a one-shot single-channel source separation problem was presented in [9].Based on a mix of separation operators and domain-specific information about sources, a unique adaptive-operator-based technique to derive solutions was achieved.This method is capable of separating sparse sources and also AM-FM sources.In addition, in both noiseless and noisy environments, this technique outperformed identical state-of-the-art solutions.
In [10], a multichannel audio source separation task was proposed by using Gaussian modeling and a spectral model of a generic source that could be previously learned by NMF.The Expectation-Minimization (EM) method was presented in this work for parameter estimation.In order to properly restrict the intermediate source variances calculated in each EM iteration, a source variance separation criterion was exploited.Experiments using the Signal Separation Evaluation Campaign (SiSEC) benchmark dataset have proven the efficacy of the suggested technique when compared to the current state-of-the-art techniques.Moreover, [11] produced a Multichannel Non-Negative Matrix Factorization (MNMF) based on Ray Space for audio source separation.The findings demonstrated that the Ray Space is appropriate when using the MNMF algorithm and that it is successful in real-world settings.Additionally, for the single-channel speech separation problem, the multihead self-attention was proposed in [12], whereby the authors used a deep clustering network approach.To boost the performance even further, the density-based canopy K-means method was used.In addition, the training and evaluation for this system were achieved by using the Wall Street Journal dataset (WSJ0).
Experiments have demonstrated that when compared to several advanced models, the new method outperforms them.Other works such as [13] adopted a Generative Adversarial Networks (GANs) technique for convolutive mixed speech separation in a single channel.In this work, the dereverberation and separation of speech and interference are the two phases in the separation process.Moreover, reverberation suppression and target speech improvement are two elements of the proposed network.Furthermore, an improved Cycly GAN was utilized in order to dereverberate the target speech and interference, while a differential GAN was exploited for speech enhancement.Consequently, according to simulation findings, this study achieved an excellent recognition rate and separation performance in long and severe reverberation environments.
Other researchers have employed a deep learning system that is completely convolutional in time-domain audio separation for time-domain speech separation from end to end [14].The convolutional time-domain audio separation network (Conv-TasNet) creates a speech waveform representation that is optimized in order to separate individual speakers by using a linear encoder.In addition, the encoder output is subjected to a series of weighting functions (masks) to accomplish the speaker separation.Moreover, by using a linear decoder, the modified encoder representations are inverted back to the waveforms.The proposed ConvTasNet system outperforms earlier time-frequency masking approaches as well as various ideal time-frequency magnitude masks, with a substantially smaller model size and lower minimum latency, which makes it a good fit for both real-time and offline speech separation applications.In [15], a deep multimodal architecture for multichannel target speech separation is presented.The multimodal framework takes advantage of a variety of target-related data, such as the target's physical position, lip movements, and voice characteristics.Within the framework, robust and efficient multimodal fusion methods are presented and studied.Experiments were evaluated on a large-scale audio-visual dataset obtained from YouTube, and the findings demonstrated that the proposed multimodal framework outperformed both single and bimodal speech separation techniques.
In [16], Blind Source Separation (BSS) approaches were adopted, namely the Singular Spectrum Analysis (SSA) algorithm, to solve the challenge of eliminating drone noise from single-channel audio recordings.This work introduced an algorithm optimization with an O(nt) spatial complexity where n was the number of sources to reconstruct and t was the signal length.Several tests were carried out to validate the technique, both in terms of accuracy and performance.The suggested method was successful at effectively separating the sound of the drone and the sound of the source.Furthermore, the Wavesplit is presented in [17], which is a neural network for source separation.This system derives a representation for each source from the input mixed signal and estimates the separated signals based on the inferred representations.In addition, Wavesplit uses clustering to infer a collection of source representations, which solves the separation permutation issue.In comparison to previous work, the suggested sequence-wide speaker models enable a more robust separation of long, difficult recordings.On clean mixes of two or three speakers, in addition to noisy and reverberated situations, Wavesplit redefines the state-of-the-art techniques.Moreover, On the new LibriMix dataset, a modern benchmark was established.
The authors of [18] suggested the use of the ICA approach based on time-frequency decomposition in order to decouple a single-channel source from a single mixed signal.The paper introduced a novel concept of integrating the statistically independent timefrequency domain (TFD) components of the mixed signal generated by ICA in order to reconstruct real sources.The evaluations showed that automatic signal separation necessitates qualitative information about the constituent signals' time-frequency properties.The authors of [19] proposed an unsupervised speech separation algorithm based on a mix of Convolutional Non-Negative Matrix Factorization (CNMF) with the Joint Approximative Diagonalization of Eigenmatrix (JADE).Furthermore, an adaptive wavelet transform-based speech enhancement approach is presented, which can improve the separated speech signal adaptively and effectively.The goal of the suggested technique is to produce a generic and efficient speech processing technique that can be used on the data collected by speech sensors.According to the findings of the experiments, the suggested approach can be used to successfully extract the target speaker from mixed speech after a small training sample of the TIMIT speech sources is used.The algorithm is very generic and robust and capable of processing speech signals obtained by most speech sensors in a technically sound manner.
In [20], SCSS was used to separate multi-instrument polyphonic music that was conditioned by external data.In [21], a Discriminative Non-Negative Matrix Factorization (DNMF) is suggested for a single-channel audio source separation task.In [22], the underdetermined single-sensor Blind Source Separation (BSS) issue with discrete uniform sources with known finite support and complicated normal noise is discussed.In addition, the DNN approach was also exploited in [23][24][25] to be employed for single-and multichannel speech and audio source separation.However, other researchers [26][27][28][29] have adopted different algorithms in terms of speech separation.
Overview of the System
The proposed system is depicted in Figure 1 and is divided into four phases: DNN training, multiview spectral embedding, ELM classification, and global optimization.To provide the training data, raw acoustic features were extracted from source signals.This was then used to train each DNN in each frequency channel individually.MVSE was then used to merge the penultimate layer's learned features into a complementary features vector.The acquired features vector was then input into the second module, which extracted more robust and discriminative information, before classifying each TF unit into the speech domain or nonspeech domain with the ELM classifier.Finally, in order to optimize the parameters globally, a genetic approach was developed.The optimal ensemble system was used to classify each TF unit of the mixed signal in order to create binary masks (BM) for testing.By weighting the mixed cochleagram via the mask and correcting the phase shifts produced through Gammatone filtering, the predicted time-domain sources were resynthesized by applying the method described in [30].
The following is a description of the proposed framework's architecture: On the equivalent rectangular bandwidth rate scale, the mixed signal with a sampling frequency of 16 kHz is put into a Gammatone filter bank with a 64 channel [31], with center frequencies evenly spread from 50 Hz to 8000 Hz.Each filter channel's output is split into time frames with an overlap of 50% between successive frames.
A Gammatone filter bank is often used in single-channel audio separation tasks to model the cochlear filtering that occurs in the human ear.The cochlea in the inner ear contains thousands of hair cells that are sensitive to different frequencies of sound.These hair cells act as bandpass filters that decompose the incoming sound into its constituent frequency components.A Gammatone filter bank is a set of bandpass filters that are designed to mimic the frequency selectivity of the cochlear hair cells.The filters are based on the Gammatone function, which is a mathematical model of the impulse response of the auditory system.By applying a Gammatone filter bank to the mixed audio signal, we can decompose the signal into a set of frequency components that correspond to different regions of the cochlea [32].
This frequency decomposition can be useful in audio separation tasks because it allows us to isolate specific frequency components that correspond to different sources of sound.For example, if we are trying to separate a speech signal from a noisy background, we can use a Gammatone filter bank to isolate the frequency components that correspond to the speech signal and attenuate the components that correspond to the background noise.Overall, the Gammatone filter bank is a powerful tool for modeling the human auditory system and can be used to improve the performance of single-channel audio separation algorithms.The cochleagram [32] is formed by establishing the TF units of all the filter outputs.Then, we can classify each TF unit to its identical domain in order to estimate the BM, which is our aim.
However, the spectral characteristics of the source signals in various channels might be quite varied.As a result, we trained a subband classifier for each channel to make the decision.Because of its low computational complexity and high classification performance, we chose the ELM classifier [33][34][35][36].For each TF unit, several features were extracted in order to conduct the classification.15-Dimensions (15-D) of an Amplitude Modulation Spectrogram (AMS), 13-D of the Relative Spectral Transform and Perceptual Linear Prediction (RASTA-PLP), and 31-D of the Mel-Frequency Cepstral Coefficients (MFCCs) make up the feature set.
A features vector was created by concatenating the extracted features.We propose pooling many DNNs and establishing an ensemble system of DNNs to learn more discriminative and robust representations instead of sending the features vector straight into the classifier.Additionally, each individual DNN's penultimate layer was embedded to investigate the complementary nature of the learned representation in order to increase the classification robustness, and as a result, the separation performance is also improved.At the top of the first module, a second module was stacked to extract more robust and discriminative representations for the classification.A genetic method was also created to identify the best coefficients for all DNNs and ELMs, resulting in more consistent estimates.We used the traditional frame-level acoustic feature extraction for each Gammatone filter channel's output to gain the features of each TF unit, and the concatenated features vectors were used as the raw acoustic feature set, which was input into the DNN ensemble system.
The envelope of the mixture signal was calculated by using full-wave rectification and then decimated by a factor of four to generate the 15-D AMS.To create a 256-point Fast Fourier Transform (FFT), the decimated envelope was split into overlapping segments, and then Hanning windowing and zero padding were applied.To create the 15-D AMS [37], the FFT magnitudes were multiplied by uniformly spaced 15 triangular-shaped windows across the 15.6-400 Hz band.The spectral amplitude was compressed by using a static nonlinear transformation to create the 13-D RASTA-PLP.Each converted spectral component's temporal trajectory was filtered and extended again, and then a traditional PLP analysis was performed [38,39].A short-time Fourier transform with a Hamming window was used to obtain the 31-D MFCC, which was then warped to the Mel scale; after that, a log operation with a discrete cosine transform was used [40][41][42][43].
In addition, the delta features of the RASTA-PLP were also exploited to benefit the speech separation [38].As a result, the original features of the RASTA-PLP were concatenated with their first-and second-order delta features (which are denoted by and ) to generate a combined features vector in order to learn features and classification.Finally, 85-dimensional raw acoustic features were produced from a collection of the following features: 15-D AMS, 13-D RASTA-PLP, 13-D RASTA-PLP, 13-D RASTA-PLP, and 31-D MFCC.
The Proposed Ensemble System Using DNN
Two modules with DNNs are introduced in this part.In the case of a mixed signal, the acoustic features are extracted for each TF unit in the cochleagram represented as {x n } N n=1 , where N is the number of frames.
DNN Ensemble Embedding (DEE)
Assume there are M DNNs in the DEE, where M is greater than one.An output layer, as well as a number of nonlinear hidden layers, are present in each DNN.
DNN Training
The m-th DNN learns a mapping function that can be expressed as in Equation (1).
where ξ = 1, . . ., Ξ indicates the number of hidden layers; w mξ is the weight linking the ξ-th hidden layer and the one above it; f m(•) indicates the output activation function; and g mξ (•) indicates the activation function of the ξth hidden layer.
The activation function that we chose is the sigmoid function.It is worth noting that each DNN in the same module had a different weight parameter W = {w m } M m=1 .The network was pretrained by utilizing the Restricted Boltzmann Machine (RBM) in a greedy layer-wise style, followed by back-propagation finetuning.We used the raw acoustic features that were extracted as the training data.The Gaussian-Bernoulli RBM (GBRBM) was used to train the first layer, and its energy function can be defined as where h v and v φ are both vth and φth units of the hidden layer and visible layer, respectively; c v denotes the bias of the vth hidden unit; b φ denotes the bias of the φth visible unit and the weight between the φth visible units; and the vth hidden unit is w φv .For all the remaining layers, Bernoulli-Bernoulli RMBs are used: The RBM is a generative model in which the parameters are improved by using a stochastic gradient descent on the training data's log likelihood [44].
where • indicates the expected outcomes under the distribution provided by the following subscript.
x ∞ denotes the equilibrium distribution defined by the RBM while x 0 indicates the distribution of the data.The DNN is initialized by using the learned parameters from a stack of RBMs.This empirical approach of initialization has been created to assist the subsequent backpropagation finetuning, and it is often crucial when training a deep network with numerous hidden layers [45].Finally, the back-propagation method is used to finetune the whole network.After the network has been adequately finetuned, the penultimate layer activations represented as P m are regarded as the learned intermediate representations instead of the final layer activations of the DNN [46][47][48][49].
Spectral Embedding in Multiple Views
In M DNNs, the learned intermediate representations P = {P m ∈ d m × n } M m=1 are fed into a Laplacian multispectral graph to investigate the complimentary characteristics [4].Varying representations have different strengths, which might lead to different mistakes in the separation system [5].
MVSE is a technique used to take advantage of complementary representations and exploit the strengths of specific representations.Assume that P = [P m1 , P m2 , . . ., P mn ] d m × n } is the m-th learned representation, and consider p mj as an arbitrary point and that its k associated points are in the same features set (for example, nearest neighbors) p mj1 , p mj2 , . . ., p mjk ; the patch of p mj is defined as P mj = [p mj , p mj1 , p mj2 , . . ., p mjk ] ∈ d m × (k+1) , where v represents the dimension of the intended embedding and is a predetermined number.The component optimization for the jth patch on the mth feature set is used in the projected low-dimensional space to preserve the locality.This part is arg min where µ mj is a column vector that has a k-dimension and is weighted by (u mj ) i = (exp −||p mj −p mji ||2 /γ), and the width of the neighborhoods is controlled by γ; as a result, we can reformulate the part optimization to arg min where the trace operator is tr(•) and encodes the jth patch's objective function on the mth learned representation.
A suitably smooth, low-dimensional-embedding R mj can be constructed by maintaining the inherent structure of the jth patch on the mth learned representation.The DNN ensemble extracts multiple features with varying mapping parameters that may contribute differently to the final low-dimensional embedding.A collection of non-negative weights α = [α 1 , • • • , α m ] is imposed on the portion optimizations of various DNNs independently to investigate the complementary characteristics of different extracted features.The P mj plays a more important role in learning how to obtain the low dimensional embedding R mj as α m grows larger.The component optimization for the jth patch is represented as the sum of all the m-th learned representations and can be formulated as arg min There is a low-dimensional embedding R mj for each patch P mj .By supposing that the coordinate for R mj = r mj , r mj1 , r mj2 , • • • , r mjk is chosen from the global coordinate R = r 1 , r 2 , r 3 , • • • , r n , all R mj can be integrated as one, i.e., R mj = RV mj , where V mj ∈ n× (k+1) is the matrix employed in a patch in the original high-dimensional space to encode the spatial relation of the samples.Consequently, Equation ( 7) can be rewritten as arg min The global coordinate alignment is calculated by adding all the optimization parts together and is expressed as where the alignment matrix for the mth learned representations is L m ∈ n×n , and it is also defined as L m = ∑ N j=1 V mj L mj (V mj ) T .The restriction RR T = I is used to determine R in a unique way.The coefficient for managing the interdependency between various perspectives is the Exponent , which should satisfy ≥ 1.We constructed a symmetric and positive semidefinite normalized graph Laplacian L sys by conducting a normalization on L m .L sys is defined as (10) where , and it is called a degree matrix.
Equation ( 9) is a nonconvex nonlinear optimization problem with nonlinear constraints, and the best solution can be found by using an iterative technique such as the Expectation Maximization (EM) technique [50].Both R and α are updated iteratively in an alternating style by the optimizer.
Step 1: Fix R to update α By using a Lagrange multiplier λ and taking into account the restriction ∑ M m=1 α m = 1, the Lagrange function can be written as The solution for α m can be obtained by When R is fixed, then Equation (12) gives the global optimal α.
Step 2: Fix α to update R The optimization problem in Equation ( 9) is equivalent to min where L = ∑ M m=1 α m L sys .When α is fixed, Equation ( 9) has a global optimum solution according to the Ky-Fan theorem [51].The optimal R is given as the eigenvectors related to the lowest d eigenvalues of the L matrix.After obtaining the embedded feature R, then the raw acoustic features will be concatenated with it to produce a new feature vector, as the raw acoustic features can offer global information that can aid in mask estimation.The updated feature vector will be sent into the ensemble stacking in the second module.
DNN Ensemble Stacking (DES)
A second DNN ensemble is stacked on top of the first in this module.The first DNN ensemble is considered a lower module, whereas the second ensemble is considered a higher module.As input to the upper module, the embedded features of the lower module with the raw features are concatenated.This enables the extraction of higher-order and more robust discriminative features.DES is a masking-based module, unlike the previous module, in which DNNs are trained by using pretraining, and then supervised finetuning is applied.In order to learn feature encoding, DES includes training Z > 1 DNNs, which is indicated as φ Z at this stage.The z-th DNN's learning procedure can be represented as where φ is the result of concatenating the embedded and raw acoustic features of the lower module.In DES, each single DNN learns a masking function.In the output layer, linear, softmax, and sigmoid functions are common activation functions.We selected the softmax function for the output layer because the training objective was the IBM, which has a value of either 0 or 1, and the softmax function is an extension of the logistic function, whose output reflects a categorical distribution: where p(y = j | x) indicates the predicted probability for the jth class given a sample vector x and a weighting vector w.The combined features set is utilized as training data for the first GBRBM, whose hidden activations are subsequently used as new training data for the second RBM and so on.To obtain the internal discriminative representations, the pretrained GBEBM, RBMs, and softmax layer are merged and finetuned with labeled data.The softmax classifier is trained during the first 10 iterations of the module while it is being finetuned.The outputs of the DES's penultimate layer are then sent into a multispectral graph Laplacian to investigate the complementary property once the network has been finetuned.In the following stage, ELM is used to classify the concatenation for both the embedded features with raw features.
ELM-Based Classification
We utilized ELM [33] to classify the TF units into the target domain or interference domain at this step by using the concatenated features.For a single-layer feed-forward neural network, ELM is suggested.With K hidden nodes, the ELM model may be written as where x φ is the input vector and t φ denotes the output.The parameters of the activation function of the kth hidden node are u k and v k , and the output of the kth hidden node with respect to the kth input is S(x φ , u k , v k ).The output weight of the kth hidden node is β k .The Equation ( 16) can be formulated as where , and the hidden output matrix S can be written as The parameters are learned in two phases by using an ELM: random feature mapping and linear parameter solution.By using the activation function s • with randomly initialized parameters, the input data are projected into a feature space in the first step.
The ability of the randomly initialized parameters to approximate any continue function has been demonstrated [33,36].As a consequence, the output weight β is the lone parameter that has to be computed, which can be estimated by using the following formula: where S ♠ is the Moore-Penrose generalized inverse.
Global Optimization with a Genetic Algorithm
The last stage in this research involved using a genetic algorithm to optimize the weights α = α 1 , • • • , α M and τ = τ 1 , • • • , τ Z globally based on the estimation error, as shown in Figure 1.Essentially, a genetic algorithm includes a population containing a certain number of people.Every individual in a population has the potential to solve the optimization problem.As a result, a new generation is created by the use of selection, crossover, and mutation among individuals.This procedure is performed numerous times until a new individual offers the best solution to the problem.Based on our research, the generated DNN ensemble and stacking system were finetuned by employing genetic algorithms in the next steps:
(i) Defining the fitness function
The developed genetic algorithm's fitness function in this step was to reduce the mean square error between the real TF unit value T and the estimated value S β: (ii) Determining the initial population of chromosomes L 0 For both steps, the initial population size was selected as 1000 chromosomes (individuals) for this genetic algorithm.These initial chromosomes represent the first generation.(iii) Encoding Each chromosome in the population was encoded by using binary strings of 0 s and 1 s.Every α m was represented as a 10-bit string of binary numbers (0 s and 1 s) in the DEE step.Similarly, each chromosome (individual) refers to Z weights in the DES step.As a consequence, each chromosome was represented by Z × 10 bit strings.(iv) Boundary conditions The boundary conditions were set in both stages such that every element The fitness function was used to test each chromosome in the first generation (L 0 ) to calculate how effectively it solved the optimization issue.The chromosomes that performed better or were more fit were passed on to the next generations.They were wiped out otherwise.Crossover occurred when two chromosomes swapped some bits of the same region to produce two offspring, whereas mutation occurred when the bits in the chromosome were turned over (0 to 1 and vice versa).The occurrence of mutation was determined by the algorithm's mutation probability (ρ) as well as a random number generated by the computer (ω).We set the ρ value to 0.005 in this stage.The mutation operator can be defined as follows: (vi) Until the best chromosome was attained, the processes of selection, crossover, and mutation were repeated.
Finally, with regard to the input data, a binary mask was created, and by weighting the mixture cochleagram, the estimated time-domain sources were resynthesized by using the mask.
Experimental Results and Discussion
The proposed separation technique is evaluated with recorded audio signals in this section.The simulation was achieved by using the MATLAB codes that were running on a PC with an Intel Core i5 processor running at 3.20 GHz and 8 GB of RAM.We used voice data from the 'CHiME' database [52], which has data from 34 speakers, and each speaker has 500 utterances.For the training data, ten utterances were chosen at random and mixed with music [53] at 0 dB.The test set was made up of 25 different utterances from the same speaker's training data mixed with the same music at 0 dB.Unless otherwise specified, we used data from the same speaker for both training and testing, i.e., a speaker-dependent setup.We started by extracting each channel's basic acoustic features.Then, before being fed into the system, we applied normalization to the extracted features until we achieved a mean and unit variance of zero [54].As the first layer, the GBRBM was trained between the visible layer and the first hidden layer for each DNN in the system, whereas the higher layers were built by using RBM pretraining data.For pretraining, we used 50 epochs of gradient descent, and to finetune the whole network, we used 50 epochs of gradient descent.The GBRBM's learning rate was set to 0.001, whereas RBM's learning rate was set to 0.01.The first 5 epochs' momentum was set to 0.5, while the rest of the epochs' momentum was set to 0.9.A somewhat modest DNN with two hidden layers was used because of its performance and computational complexity.The small number of adjustable network parameters allows for fast, scalable training with a satisfactory performance.The size of the nearest neighbors in the MVSE was set to be 10.The embedded feature dimension was set at 50.When training the ELM classifier, the embedded features were always mixed with raw acoustic features.The proposed system was compared with different machine learning approaches, such as Support Vector Machine (SVM)-based, ELM-based, DNN-based, and DNN-ELM-based approaches.
The fusion technique was exploited via concatenation to merge the raw acoustic features with their first and second delta features, which were used to train the SVM and ELM for the SVM-based and ELM-based techniques.A total of 50 epochs were used for both minibatch gradient descents for the RBM pretraining and for network finetuning to train the DNN-based approach.The output of the DNN's final hidden layer was used to train an ELM by using DNN-ELM-based approaches.By using the raw acoustic features of each TF unit, all four approaches were used to train a classifier for each channel.In addition, as a comparison approach, we used the Itakura-Saito NMF (IS-NMF) [5] and NMF2D [32] algorithms.IS-NMF has already been proven to accurately capture the semantics of audio and to be more appropriate for representation than the regular NMF [55].MGD IS-NMF-2D [32], which was recently presented, delivers promising separation results for music mixtures and is regarded as a competitive solution to solving separation difficulties, where MGD is the Multiplicative Gradient Descent (MGD).
Optimizing the Number of DNNs
The ensemble of DNNs is the initial module.We compared the separation performance according to the number of DNNs to calculate the number of DNNs in each module.We initially evaluated the separation performance for the set 1 DNN in DEE and DES (referred to as 1DEE-1DES).Then, we evaluated the performance of the set 2 DNNs in DES and 1 DNN in DEE (2DES-1DEE).The experiments were carried out until all the settings (5DEE-5DES) were evaluated.Figure 2 depicts the separation findings.In the trials, we trained a different number of DNNs by using the same training data.The Short-Time Objective Intelligibility (STOI) [56] is an evaluation metric that is used to evaluate the Objective Speech Intelligibility (OSI) of time-domain signals.The STOI scores are closely associated with speech intelligibility scores, according to empirical evidence.The expected intelligibility improves as the STOI value rises.Adding a second DNN in DEE and DES increases the separation performance over employing a single DNN in each module, as seen in Figure 2. When one DNN is added to each module, the performance increases dramatically when compared to when only one DNN is used in each module.Not only that, but after adding the DNNs, it was observed that the improvement became more significant.With more DNNs in the DES module, this is amplified even further.With 4 DNNS and 3 DNNs from the first and second modules, respectively, the greatest attainable STOI is 0.82.However, with five DNNs or more, the improvement in the separation performance becomes less significant.This might be because more DNNs cannot extract additional discriminative features that would increase the separation performance.We employed various metrics to evaluate the proposed learning system, such as a Perceptual Evaluation of Speech Quality (PESQ) and Signal-to-Distortion Ratio (SDR), to further study the usefulness of the number of DNNs in the learning system.Figure 3a,b depicts the results.The separation performance when the 4DEE-3DES was used improved when compared to when the 4DEE-2DES and 3DEE-3DES were used, as shown in Figure 3a.Although 5DEE-5DES had the greatest PESQ, the improvement was less substantial when compared to 4DEE-3DES.Figure 3b shows that the separation performance of 4DEE-3DES was 11.82 dB, which was much superior to the performance of a single DNN in each network module.To summarize, the separation performance improved as the number of DNNs in each module increased; however, the improvement was less noticeable after 4 DNNs were in each module, which means that using 4 DNNs in DEE and 3 DNNs in DES is a decent decision given the computational complexity of the network.
Speech Separation Performance
We compared the separation performance of our proposed strategy with the performance of selected approaches for various mixtures in order to demonstrate its effectiveness.A total of 10 utterances were selected randomly from males and females to create the training set.At 0 dB, the selected utterances from the SNR training data were mixed with guitar and bass music.For the testing data, 30 utterances were created differently than the training data mixed with guitar and bass music at 0 dB SNR in order to test our system.
From each TF unit, a feature set of 85 dimensions (85-D) was extracted from the training and testing data for preprocessing.In this experiment, the Signal-to-Distortion Ratio (SDR), which includes the Signal-to-Interference Ratio (SIR) and Signal-to-Artifacts Ratio (SAR), was used to evaluate the separation performance.The following methods were selected for comparison: Itakura-Saito Non-Negative Matrix Factorization (IS-NMF), Non-Negative Two-Dimensional Matrix Factorization (NMF2D) based on an Extreme Learning Machine (ELM) and based on a deep neural network (DNN), and Ideal Binary Mask (IBM).The IS-NMF was used in conjunction with a clustering approach, whereby the mixed signal was factorized into ℵ = 2, 4, • • • , 10 components and then the ℵ components were clustered to each source by using a grouping method.For comparison, the best value of the outcome of each case of the ℵ different configurations was kept.The mixed signal spectral and temporal features were factorized in the nonuniform TF domain created by the Gammatone filter bank for MGD IS-NMF-2D, where the MGD is the Multiplicative Gradient Descent.To separate the mixed signal, the obtained features were employed to produce a binary mask.The mask was produced directly from the speech and music by using the IBM technique.According to Figure 4, the SDR performance varied significantly depending on the separation approaches used.The ELM-based technique had an average SDR of 7.47 dB for the mixtures, whereas the NMF-2D method had an average SDR of 8.37 dB, and the DNN delivered an average SDR of 9.83 dB.However, our proposed method had an average SDR of 11.09 dB, and the IBM had an average SDR of 12.66 dB.It is worth noting that the DNN-based techniques and our proposed system's outcomes outperformed the ELM-based approach.This is attributed to the deep architecture's classified features, which are more discriminative than shallow networks.It is also worth noting that both the DNN and the proposed system had a high SDR performance.Furthermore, the proposed technique consistently outperformed the DNN in terms of the performance.This supports our findings that the proposed system can extract more complementary features than a single DNN.It also demonstrated that the higher layers of deep architecture represent more abstract and discriminative features than the lower ones.To further analyze the separation performance of the proposed approach, an experiment was conducted with a mixture of a female voices mixed with guitar music at 0 dB.
Generalization under Different SNR
This section describes the experiments that were performed to evaluate the effectiveness of the proposed method under different SNR conditions.The training set comprised mixtures at a single input SNR, and the system was evaluated on mixtures with various SNRs to generalize the SNR.To create the test data, 10 utterances of a speaker were chosen and mixed with the music at 0 dB SNR, whereas 20 utterances of the same speaker were chosen and combined with the same music at SNRs ranging from −6 dB to 6 dB with a 3 dB increase.ELM-based, SVM-based, DNN-based, and DNN-ELM-based algorithms were selected for comparison purposes.Figure 6 shows a comparison of several separation approaches in terms of the output of the Short-Time Objective Intelligibility (STOI).There were several observations to consider.Originally, deep architectures such as DNN, DNN-ELM, and the proposed technique significantly outperformed shallow architectures such as the ELM and SVM across a wide range of input SNRs.When compared to ELM, the proposed technique resulted in an average STOI improvement close to 24%.The proposed technique achieved a 29% improvement, especially at −6 SNR.This was due to the ability of deep architecture to extract the features by using a multilayer distributed feature representation, with higher levels representing more abstract and discriminative features.As a result, the Binary Mask (BM) created by deep architectures was more precise than those generated by shallow architectures.In addition, DNN-ELM produced higher SNR results than the DNN.This was because of the assistance of the ELM classifier.Although the outputs of the DNN already created an estimated BM, the ELM could produce additional features extracted from the DNN outputs and categorize them to their corresponding domain with a higher accuracy.Finally, among the deep architectures, the proposed technique produced the best STOI result.It is also worth noting that the separation performance was not affected dramatically by the SNR.The proposed approach showed increased robustness when compared to other techniques, as the STOI index changed relatively slightly because DNN ensembles with multiview spectral embedding can extract more beneficial complementary and robust features.In addition, the embedded features in the stacking module were more discriminative than in the lower module.Moreover, the genetic algorithm was utilized to globally improve the parameters in order to obtain a higher level of classification accuracy.The SDR performance was plotted in order to further analyze the effectiveness of the proposed technique.To compare, we used deep architectures to learn and categorize the input signals, including the DNN and DNN-ELM.
Generalization to Different Input Music
We conducted tests to show the generalization capabilities of our proposed system.In the testing set, the interfering music differed from that in the training set, but the testing speech (which differed from the training speech) was from the same speaker.The system was evaluated by using a blend of speech and unseen music, whereby the training set included signals mixed with a piece of music at 0 dB.To train the proposed system, we randomly selected 10 male and female utterances from the 'CHiME' dataset and mixed them with guitar music at 0 dB SNR in order to produce the training set.The features set included 85-D raw acoustic features.To evaluate our system, 30 male and female utterances that were different from those in the training data were selected and mixed with bass and piano music at 0 dB.During the preprocessing, for each TF unit, the feature set with 85-D of the testing data was extracted and then normalized to a mean and unit covariance of zero.The ELM-based, DNN-based, and IBM approaches were selected for comparison.Figure 8 depicts the comparative result.First, despite the fact that the proposed approach was trained with the selected music, its applicability to different music mixtures resulted in a good performance, as shown in Figure 8.The bass and female mixture's SDR performance was 10.67 dB.It should also be highlighted that the proposed technique outperformed the ELM-based method substantially.The reason for this is that the deep architecture could extract more separable features, which increased the classification accuracy when estimating the binary mask.The proposed approach also outperformed the DNN-based technique, which implied that the DNN ensembles and stacking could give more comprehensive information than a single network.Although the IBM approach produced the highest overall outcomes, the proposed technique produced results that were almost as good as the IBM method.In terms of the SDR performance, the proposed technique achieved 10.12 dB, while ELM achieved 5.23 dB, DNN achieved 7.06 dB, and IBM achieved 12.67 dB. Figure 9 shows the time-domain findings for a blend of recovered speech and recovered bass music.
Generalization to Different Speaker
We conducted trials with different speakers to further evaluate the efficacy of the proposed technique.The training data came from one speaker, while the testing data came from another speaker.Speech was mixed with music for the training set, and the system was evaluated by using mixtures of speeches from different speaker mixed with the same music.The training dataset comprised 10 utterances from a speaker mixed with guitar music at 0 dB, whereas the testing dataset comprised another 10 utterances from a different speaker mixed with the same music at 0 dB.It is worth noting that the selected speeches by various speakers were also different.Figure 10 depicts the SDR performance.Although the proposed system was trained with different speeches, the separation performance stayed robust with little fluctuation.When music and utterances from speaker 2 were mixed, the SDR performance was 9.97 dB.The DNN, on the other hand, provided 6.85 dB.The original speech and recovered speech are displayed in Figure 11 to further demonstrate the separation performance of the proposed technique.When compared to the original speech, it can be noted that the recovered speech was quite similar to it, demonstrating the capabilities of our proposed technique.
Comparisons with the Baseline Result
Table 1 shows the comparison of the suggested approach's computational effectiveness and efficiency when the MLP was trained by using the back-propagation methodology and the DNN was trained by using the Restricted Boltzmann Machine (RBM) pretraining method.Since the hierarchical structure allows for the extraction of higher-order correlations between the input data, the MLP was chosen as the baseline for the deep architecture.
The MLP may, however, become trapped at local minima quite readily.By using the layer-wise pretraining strategy, the DNN has made promising progress when compared to the MLP [57].The DNN is, however, supplemented by a high computational complexity and significant time consumption.The Deep Sparse Extreme Learning Machine (DSELM) discussed before is an alternative, and its performance will be contrasted in terms of its training duration and testing precision.In order to train the deep frameworks, we chose 400 utterances from each man and female together with guitar and bass music [53], whereas 50 utterances that were not part of the training set were chosen as the testing data.The input data were standardized to a mean and unit variance of zero before being used to train the MLP and DNN.A total of 50 epochs were used for the back-propagation training of MLP.For the DNN, we employed 50 iterations of gradient descent to pretrain the RBM, which serves as the network's fundamental building block, and 50 iterations to finetune the whole network.We utilized a learning rate of 0.001 to train the first Gaussian-Bernoulli RBM and a learning rate of 0.01 to train the previously mentioned Bernoulli-Bernoulli RBM.
The findings shown in Table 1 show how the DSELM compared to the MLP and DNN in terms of the training time and testing accuracy.The frame of the magnitude spectrogram of the speech and music was the input for these designs.It should be noted that when using the same training data, the DSELM executed far more quickly than the MLP and DNN.This is mostly attributable to the DSELM's straightforward training process without gradual finetuning.This is in contrast to the MLP and DNN, which require repetitive backpropagation algorithm training and repeated finetuning before the network is ready for use, respectively.Additionally, before training and testing the MLP and DNN, the input data have to be normalized to a mean and unit covariance of zero.Our proposed approach, on the other hand, does not require additional data preprocessing, which is one of its advantages over the MLP and DNN.Data preprocessing may introduce bias in the estimation of the mixing gains.Referring to Table 1, it is generally noted that the DSELM not only outperformed the MLP and DNN in terms of the training time, but also in terms of the testing accuracy.For all types of mixtures, the MLP and DNN delivered average accuracies of 93.57% ± 0.4% and 97.02% ± 0.2% while the DSELM had an average accuracy of 98.78% ± 0.2%.In addition, the proposed method had a high learning speed and high accuracy and lower computational complexity, and the separation performance was improved.
In addition, Single-Channel Source Separation (SCSS) is a challenging problem in signal processing.It involves separating multiple sources that are mixed together in a single channel.One of the main challenges in SCSS is dealing with interference, which refers to the presence of other sources in the same channel that can make it difficult to separate the desired source.Reducing interference response times can be important in some SCSS research, especially in applications where real-time processing is required.For example, in speech enhancement applications, reducing the interference response times can help improve the quality of the separated speech signal by reducing the delay between the original speech signal and the processed signal.
However, for other SCSS research, reducing the interference response times may not be as important.For example, in some music-source separation applications, the goal may be to separate the sources offline without the need for real-time processing.In this case, the processing time is less important than the quality of the separated sources.In short, the importance of reducing interference response times in SCSS research depends on the specific application and the requirements of the system.Furthermore, the real-time processing of audio signals requires low latency and efficient algorithms.However, this may not be the primary concern in all applications of Single-Channel Source Separation.For example, in some offline applications such as audio restoration or audio forensics, the processing time is less critical compared to the quality of the separated sources.In this work, the interference response time was not a priority.
Conclusions
The motivation for this study was the fact that although the machine learning algorithms used to estimate the optimum binary mask have had considerable success at tackling single-channel audio separation difficulties, their performance level remains undesirable.An ensemble system of DNNs with stacking was proposed in this paper.By using varying initializations of each DNN in the module, the DNN ensemble system extracted various features.Furthermore, by analyzing each DNN's complementary attribute, the system could extract the most discriminative features, which consequently improved the binary mask estimate accuracy.The activation of the penultimate layer of each DNN enabled the learning of distributed and hierarchical representations.Our experiments revealed that the proposed technique resulted in a considerably better separation performance compared with conventional methods.The proposed method had a high learning speed and high accuracy and lower computational complexity, and the separation performance was improved.
In future work, we will try to investigate areas such as informed source separation and deep reinforcement learning.
Abbreviations
The following abbreviations are used in this manuscript:
Figure 1 .
Figure 1.The architecture of the proposed work.
Figure 3 .
Figure 3.The performance of Perceptual Evaluation of Speech Quality (PESQ) (a) and Signal-to-Distortion Ratio (SDR) (b).
Figure 5
Figure 5 depicts the original speech, music, mixture, and separation results.The speech had an SDR of 11.69 dB, whereas the music had an SDR of 9.16 dB.
Figure 7
Figure 7 depicts the findings of the comparison and shows that our proposed method outperformed the DNN and DNN-ELM over a wide range of input SNRs.The ability of the proposed approach to extract more discriminative features than a single DNN was demonstrated.
Figure 8 .
Figure 8. Signal-to-Distortion Ratio (SDR) with unmatched Bass and Piano music.
Figure 9 .
Figure 9. Separation performance based on different input music.
Figure 11 .
Figure 11.Original speech and recovered speech.
Table 1 .
The comparative result between the proposed approach and the baseline result.
Part mapping of patch P mj R mj Part embedding of patch P mj v Dimension of embedded features µ mj k-dimensional column vector of jth patch on the mth feature set | 12,209.4 | 2023-06-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Contraction and Deletion Blockers for Perfect Graphs and $H$-free Graphs
We study the following problem: for given integers $d$, $k$ and graph $G$, can we reduce some fixed graph parameter $\pi$ of $G$ by at least $d$ via at most $k$ graph operations from some fixed set $S$? As parameters we take the chromatic number $\chi$, clique number $\omega$ and independence number $\alpha$, and as operations we choose the edge contraction ec and vertex deletion vd. We determine the complexity of this problem for $S=\{\mbox{ec}\}$ and $S=\{\mbox{vd}\}$ and $\pi\in \{\chi,\omega,\alpha\}$ for a number of subclasses of perfect graphs. We use these results to determine the complexity of the problem for $S=\{\mbox{ec}\}$ and $S=\{\mbox{vd}\}$ and $\pi\in \{\chi,\omega,\alpha\}$ restricted to $H$-free graphs.
Introduction
A typical graph modification problem aims to modify a graph G, via a small number of operations from a specified set S, into some other graph H that has a certain desired property, which usually describes a certain graph class G to which H must belong. In this way a variety of classical graph-theoretic problems is captured. For instance, if only k vertex deletions are allowed and H must be an independent set or a clique, we obtain the Independent Set or Clique problem, respectively. Now, instead of fixing a particular graph class G, we fix a certain graph parameter π. That is, for a fixed set S of graph operations, we ask, given a graph G, integers k and d, whether G can be transformed into a graph G by using at most k operations from S, such that π(G ) ≤ π(G) − d. The integer d is called the threshold. Such problems are called blocker problems, as the set of vertices or edges involved "block" some desirable graph property, such as being colourable with only a few colours. Identifying the part of the graph responsible for a significant decrease of the parameter under consideration gives crucial information on the graph. Blocker problems have been given much attention over the last few years, see for instance [2,3,4,13,39,40,43,45]. Graph parameters considered were the chromatic number, the independence number, the clique number, the matching number, the weight of a minimum dominating set and the vertex cover number. So far, the set S always consisted of a single graph operation, which was a vertex deletion, edge deletion or an edge addition. In this paper, we keep the restriction on the size of S by letting S consist of either a single vertex deletion or, for the first time, a single edge contraction. As graph parameters we consider the independence number α, the clique number ω and the chromatic number χ.
Before we can define our problems formally, we first need to give some terminology. The contraction of an edge uv of a graph G removes the vertices u and v from G, and replaces them by a new vertex made adjacent to precisely those vertices that were adjacent to u or v in G (neither introducing self-loops nor multiple edges). We say that G can be kcontracted or k-vertex-deleted into a graph G , if G can be modified into G by a sequence of at most k edge contractions or vertex deletions, respectively. We let π denote the (fixed) graph parameter; as mentioned, in this paper π belongs to {α, ω, χ}.
We are now ready to define our decision problems in a general way: Contraction Blocker(π) Instance: a graph G and two integers d, k ≥ 0 Question: can G be k-contracted into a graph G such that π(G ) ≤ π(G) − d?
Deletion Blocker(π) Instance: a graph G and two integers d, k ≥ 0 Question: can G be k-vertex-deleted into a graph G such that π(G ) ≤ π(G)−d?
If we remove d from the input and fix it instead, then we call the resulting problems d-Contraction Blocker(π) and d-Deletion Blocker(π), respectively.
d-Contraction Blocker(π)
Instance: a graph G and an integer k ≥ 0 Question: can G be k-contracted into a graph G such that π(G ) ≤ π(G) − d?
d-Deletion Blocker(π) Instance: a graph G and an integer k ≥ 0 Question: can G be k-vertex-deleted into a graph G such that π(G ) ≤ π(G)−d?
The goal of our paper is to increase our understanding of the complexities of Contraction Blocker(π) and Deletion Blocker(π) fo π ∈ {ω, χ, α}. In order to do so, we will also consider the problems d-Contraction Blocker(π) and d-deletion blocker(π).
Known Results and Relations to Other Problems
It is known that Deletion Blocker(α) is polynomial-time solvable for bipartite graphs, as proven both by Bazgan, Toubaline and Tuza [3] and Costa, de Werra and Picouleau [13]. The former authors also proved that Deletion Blocker(α) is polynomial-time solvable for cographs and graphs of bounded treewidth. The latter authors also proved that for π ∈ {ω, χ}, Deletion Blocker(π) is polynomial-time solvable for cobipartite graphs. Moreover, they showed that for π ∈ {ω, χ, α}, Deletion Blocker(π) is NP-complete for the class of split graphs, but becomes polynomial-time solvable for this graph class if d is fixed.
By using a number of example problems we will now illustrate how the blocker problems studied in this paper relate to a number of other problems known in the literature. As we will see, this immediately leads to new complexity results for the blocker problems.
1. Hadwidger Number and Club Contraction. The Contraction Blocker(α) problem generalizes the well-known Hadwiger Number problem, which is that of testing whether a graph can be contracted into the complete graph K r on r vertices for some given integer r. Indeed, we obtain the latter problem from the first by restricting instances to instances (G, d, k) where d = α(G) − 1 and k = |V (G)| − r. Note that the diameter and independence number of K r are both equal to 1. Hence, one can also generalize Hadwiger Number in another way: the Club Contraction problem (see e.g. [21]) is that of testing whether a graph G can be k-contracted into a graph with diameter at most s for some given integers k and s. As such, Contraction Blocker(α) can be seen as a natural counterpart of Club Contraction.
2. Graph transversals. Blocker problems generalize so-called graph transversal problems. To explain the latter type of problems, for a family of graphs H, the H-transversal problem is to test if a graph G can be k-vertex-deleted, for some integer k, into a graph G that has no induced subgraph isomorphic to a graph in H. For instance, the problem {K 2 }-transversal is the same as Vertex Cover. Here are some examples of specific connections between graph transversals and blocker problems.
-Let H be the family {K p | p ≥ 2} of all complete graphs on at least two vertices. Then H-transversal is equivalent to Deletion Blocker(ω) restricted to instances (G, d, k) with d = ω(G) − 1. -In our paper we will prove that for a graph G with at least one edge and an integer k ≥ 1, the instance (G, ω(G) − 1, k) is a yes-instance of Deletion Blocker(ω) if and only if (G, k) is a yes-instance of Vertex Cover. -The Odd Cycle Transversal problem is to test whether a given graph can be made bipartite by removing at most k vertices for some given integer k ≥ 0. This problem is NP-complete [31], and it is equivalent to Deletion Blocker(χ) for instances (G, d, k) where d = χ(G) − 2. -The d-Transversal or d-Cover problem [13] is to decide whether a graph G = (V, E) contains a set V that intersects each maximum set satisfying some specified property π by at least d vertices. For instance, if the property is being an independent set, 1-Transversal is equivalent to 1-Deletion Blocker(α).
3. Bipartite Contraction. The problem Bipartite Contraction is to test whether a graph can be made bipartite by at most k edge contractions. Heggernes et al. [26] proved that this problem is NP-complete. It is readily seen that 1-Contraction Blocker(χ) and Bipartite Contraction are equivalent for graphs of chromatic number 3.
Maximum induced bipartite subgraphs. The Maximum Induced Bipartite
Subgraph problem is to decide if a given graph contains an induced bipartite subgraph with at least k vertices for some integer k. Addario-Berry et al. [1] proved that this problem is NP-complete for the class of 3-colourable perfect graphs. We observe that, for 3-colourable graphs, 1-Deletion Blocker(χ) is equivalent to Maximum Induced Bipartite Subgraph.
5.
Cores. The two problems 1-Deletion Blocker(α) and 1-Deletion Blocker(ω) are equivalent to testing whether the input graph contains a set of S of size at most k that intersects every maximum independent set or every maximum clique, respectively. If k = 1, these two problems become equivalent to testing whether the input graph contains a vertex that is in every maximum independent set, or in every maximum clique, respectively. In particular, the intersection of all maximum independent sets is known as the core of a graph. Properties of the core have been well studied (see, for example, [25,29,30]). In particular, Boros, Golumbic and Levit [7] proved that computing if the core of a graph has size at least is co-NP-hard for every fixed ≥ 1. Taking = 1 gives co-NP-hardness of 1-Deletion Blocker(α).
6. Critical vertices and edges. The restriction d = k = 1 has also been studied when π = χ. A vertex of a graph G is critical if its deletion reduces the chromatic number of G by 1. An edge of a graph is critical or contraction-critical if its deletion or contraction, respectively, reduces the chromatic number of G by 1. The problems Critical Vertex, Critical Edge and Contraction-Critical are to test if a graph has a critical vertex, critical edge or contraction-critical edge, respectively. We note that the first two problems are the restrictions of Contraction Blocker(χ) and Deletion Blocker(χ) to instances (G, d, k) where d = k = 1. Complexity dichotomies exist for each of the three problems on H-free graphs, and moreover the latter two problems are shown to be equivalent [36]. Graphs with a critical (or equivalently contraction-critical) edge are also called colour-critical (see, for instance, [41]).
Due to links to problems as the ones above, it is of no surprise that many results for blocker problems are known implicitly in the literature already in various settings. For example, Belmonte et al. [5] proved that 1-Contraction Blocker(∆), where ∆ denotes the maximum vertex-degree, is NP-complete even for split graphs. We make use of several known complexity results for some of the related problems stated above for proving our results.
Our Results
In Section 1.1 we mentioned that Deletion Blocker(π) is known to be NP-complete for π ∈ {α, ω, χ} even when restricted to special graph classes. Non-surprisingly, Contraction Blocker(π) is NP-complete for π ∈ {α, ω, χ} as well (this follows from our results in Section 8, but it is also easy to show this directly). Due to the above, it is natural to restrict inputs to some special graph classes in order to obtain tractable results and to increase our understanding of the computational hardness of the problems. Note that it is not always clear whether Contraction Blocker(π) and Deletion Blocker(π) belong to NP when restricted to a graph class G. However, when G is closed under edge contraction or vertex deletion, respectively, and π can be verified in polynomial time, then membership of NP holds: we can take as certificate the sequence of edge contractions or vertex deletions, respectively.
Part I. In the first part of our paper we focus on the class of perfect graphs and a number of well-known subclasses of perfect graphs. Most of these classes are not only closed under vertex deletion but also under edge contraction. This enables us to get unified results for the cases π = ω and π = χ (note that ω = χ holds by definition of a perfect graph). Another reason for considering subclasses of perfect graphs is that α, ω, χ can be computed in polynomial time for perfect graphs; Grötschel, Lovász, and Schrijver [23] proved this for χ and thus for ω, whereas the result for α follows from combining this result with the fact that perfect graphs are closed under complementation. This helps us with finding tractable results or at least with obtaining membership of NP (if in addition the subclass under consideration is closed under edge contraction or vertex deletion). Table 1 gives an overview of the known results and our new results for the classes of perfect graphs we consider. We have unified results for the cases π = ω and π = χ even for the perfect graph classes in this table that are not closed under edge contraction, namely the classes of bipartite graphs; C 4 -free perfect graphs with clique number 3; and the class of perfect graphs itself. As the class of perfect graphs is not closed under edge contraction we could for perfect graphs only deduce that the three contraction blocker problems are NP-hard (even if d = 1). As the class of cographs coincides with the class of P 4 -free graphs (where P r denotes the r-vertex path) and split graphs are P 5 -free, the corresponding rows in Table 1 show a complexity jump of all our problems for P t -free graphs from t = 4 to t = 5. Recall also from Section 1.1 that the Hadwiger Number problem is a special case of Contraction Blocker(α)) As such, our polynomial-time result in Table 1 for Contraction Blocker(α) restricted to cographs generalizes a result of Golovach et al. [21], who proved that the Hadwiger Number problem is polynomial time solvable on cographs.
Part II. In the second part of our paper we give several dichotomy results. First we give, for π ∈ {α, ω, χ}, complete classifications of Deletion Blocker(π) and Contraction Blocker(π) depending on the size of π, that is, we prove the following theorem. Theorem 1. The following six dichotomies hold: (i) Contraction Blocker(α) is polynomial-time solvable for graphs with α = 1 and 1-Contraction Blocker(α) is NP-complete for graphs with α = 2; (ii) Contraction Blocker(χ) is polynomial-time solvable for graphs with χ = 2 and 1-Contraction Blocker(χ) is NP-complete for graphs with χ = 3; (iii) Contraction Blocker(ω) is polynomial-time solvable for graphs with ω = 2 and 1-Contraction Blocker(ω) is NP-complete for graphs with ω = 3; (iv) Deletion Blocker(α) is polynomial-time solvable for graphs with α = 1 and 1-Deletion Blocker(α) is NP-complete for graphs with α = 2; (v) Deletion Blocker(χ) is polynomial-time solvable for graphs with χ = 2 and 1-Deletion Blocker(χ) is NP-complete for graphs with χ = 3; (vi) Deletion Blocker(ω) is polynomial-time solvable for graphs with ω = 1 and 1-Deletion Blocker(ω) is NP-complete for graphs with ω = 2; In particular we extend the hardness proof of Theorem 1 (iii) in order to obtain the hardness result for C 4 -free perfect graphs with ω = 3 in Table 1. We note that some of the results in Table 1, such as this result, may at first sight seem somewhat arbitrary. However, we need the result for C 4 -free perfect graphs with ω = 3 and other results of Table 1 to prove our other results of the second part of our paper. Namely, by combining the results for subclasses of perfect graphs with other results, we obtain complexity dichotomies for our six blockers problems restricted to H-free graphs, that is, graphs that do not contain some (fixed) graph H as an induced subgraph. These dichotomies are stated in the following summary; here, P r is the r-vertex path, C 3 is the triangle, and the paw is the triangle with an extra vertex adjacent to exactly one vertex of the triangle, whereas ⊆ i denotes the induced subgraph relation and ⊕ denotes the disjoint union of two vertex disjoint graphs.
Theorem 2. Let H be a graph. Then the following holds: Statements (i), (ii), (iii), (v), (vi) of Theorem 2 correspond to complete complexity dichotomies, whereas there is one missing case in statement (iv). In particular we note that statements (v) and (vi) do not coincide for disconnected graphs H. We also observe from Theorem 2 (i) that Deletion Blocker(α) is computationally hard for triangle-free graphs; in fact we will show co-NP-hardness even if d = k = 1. This in contrast to the problem being polynomial-time solvable for bipartite graphs, as shown in [3,13] (see also Table 1).
Paper Organization
Section 2 contains notation and terminology. Sections 3-7 contain the results mentioned in Part I. To be more precise, Section 3 contains our results for cobipartite graphs, bipartite graphs and trees. In Sections 4 and 5, we prove our results for cographs and split graphs, respectively. In Section 5 we also show that our NP-hardness reduction for split graphs can be used to prove that the three contraction blockers problems, restricted to split graphs, are W[1]-hard when parameterized by d. The latter result means that for split graphs these problems are unlikely to be fixed-parameter tractable with parameter d. In Sections 6 and 7 we prove our results for interval graphs and chordal graphs, respectively. Sections 8 and 9 contain the results mentioned in Part II. In Section 8 we first prove dichotomies for the three contraction blocker and three vertex blocker problems when we classify on basis of the size of π ∈ {α, χ, ω}. In the same section, we modify the hardness construction for 1-Contraction Blocker(ω) to prove that 1-Contraction Blocker(ω) is NP-complete even for C 4 -free perfect graphs with ω = 3. In Section 9 we prove Theorem 2.
Section 10 contains a number of open problems and directions for future research.
Preliminaries
We only consider finite, undirected graphs that have no self-loops and no multiple edges; we recall that when we contract an edge no self-loops or multiple edges are created. We refer to [14] or [47] for undefined terminology and to [15] for more on parameterized complexity. Let G = (V, E) be a graph. For a subset S ⊆ V , we let G[S] denote the subgraph of G induced by S, which has vertex set S and edge set {uv ∈ E | u, v ∈ S}. We write H ⊆ i G if a graph H is an induced subgraph of G. Moreover, for a vertex v ∈ V , we write For a set {H 1 , . . . , H p } of graphs, a graph G is (H 1 , . . . , H p )-free if G has no induced subgraph isomorphic to a graph in {H 1 , . . . , H p }; if p = 1 we may write H 1 -free instead of (H 1 )-free. The complement of G is the graph G = (V, E) with vertex set V and an edge between two vertices u and v if and only if uv / ∈ E. Recall that the contraction of an edge uv ∈ E removes the vertices u and v from a graph G and replaces them by a new vertex that is made adjacent to precisely those vertices that were adjacent to u or v in G. This new graph will be denoted by G|uv. In that case we may also say that u is contracted onto v, and we use v to denote the new vertex resulting from the edge contraction. The subdivision of an edge uv ∈ E removes the edge uv from G and replaces it by a new vertex w and two edges uw and wv.
Let G and H be two vertex-disjoint graphs. The join operation ⊗ adds an edge between every vertex of G and every vertex of H. The union operation ⊕ takes the disjoint union of G and H, that is, . We denote the disjoint union of p copies of G by pG. For n ≥ 1, the graph P n denotes the path on n vertices, that is, V (P n ) = {u 1 , . . . , u n } and E(P n ) = {u i u i+1 | 1 ≤ i ≤ n − 1}. For n ≥ 3, the graph C n denotes the cycle on n vertices, that is, V (C n ) = {u 1 , . . . , u n } and E(C n ) = {u i u i+1 | 1 ≤ i ≤ n−1} ∪ {u n u 1 }. The graph C 3 is also called the triangle. The claw K 1,3 is the 4-vertex star, that is, the graph with vertices u, v 1 , v 2 , v 3 and edges uv 1 , uv 2 , uv 3 .
Let G = (V, E) be a graph. A subset K ⊆ V is called a clique of G if any two vertices in K are adjacent to each other. The clique number ω(G) is the number of vertices in a maximum clique of G. A subset I ⊆ V is called an independent set of G if any two vertices in I are non-adjacent to each other. The independence number α(G) is the number of vertices in a maximum independent set of G. For a positive integer k, a k-colouring of G is a mapping c : V → {1, 2, . . . , k} such that c(u) = c(v) whenever uv ∈ E. The chromatic number χ(G) is the smallest integer k for which G has a k-colouring. A subset of edges M ⊆ E is called a matching if no two edges of M share a common end-vertex. The matching number µ(G) is the number of edges in a maximum matching of a graph G.
The Coloring problem is that of testing if a graph has a k-colouring for some given integer k. The problems Clique and Independent Set are those of testing if a graph has a clique or independent set, respectively, of size at least k. The Vertex Cover problem is that of testing if a graph has a vertex cover of size at most k. We need the following lemma at several places in our paper. Lemma 1 ( [42]). Vertex Cover is NP-complete for C 3 -free graphs.
An interval graph is a graph such that one can associate an interval of the real line with every vertex such that two vertices are adjacent if and only if the corresponding intervals intersect. A graph is cobipartite if it is the complement of a bipartite (2-colourable) graph. A graph is chordal if it contains no induced cycle on more than three vertices. A graph is a split graph if it has a split partition, which is a partition of its vertex set into a clique K and an independent set I. Split graphs coincide with (2P 2 , C 4 , C 5 )-free graphs [18]. A P 4 -free graph is also called a cograph.
A graph is perfect if the chromatic number of every induced subgraph equals the size of a largest clique in that subgraph. A hole is an induced cycle on at least five vertices and an antihole is the complement of a hole. A hole or antihole is odd if it contains an odd number of vertices. We need the following well-known theorem of Chudnovsky, Robertson, Seymour, and Thomas. This theorem can also be used to verify that the other graph classes in Table 1 are indeed subclasses of perfect graphs.
Theorem 3 (Strong Perfect Graph Theorem [9]). A graph is perfect if and only if it contains no odd hole and no odd antihole.
Cobipartite Graphs, Bipartite Graphs and Trees
We first consider the contraction blocker problems and then the deletion blocker problems.
Contraction Blockers
Our first result is a hardness result for cobipartite graphs that follows directly from a known result. Proof. Golovach, Heggernes, van 't Hof and Paul [21] considered the s-Club Contraction problem. Recall that this problem takes as input a graph G and an integer k and asks whether G can be k-contracted into a graph with diameter at most s for some fixed integer s. They showed that 1-Club Contraction is NP-complete even for cobipartite graphs. Graphs of diameter 1 are complete graphs, that is, graphs with independence number 1, whereas cobipartite graphs that are not complete have independence number 2.
We now focus on π = χ and π = ω. For our next result (Theorem 5) we need some additional terminology. A biclique is a complete bipartite graph, which is nontrivial if it has at least one edge. A biclique vertex-partition of a graph G = (V, E) is a set S of mutually vertex-disjoint bicliques in G such that every vertex of G is contained in one of the bicliques of S. The Biclique Vertex-Partition problem consists in testing whether a given graph G has a biclique vertex-partition of size at most k, for some positive integer k. Fleischner et al. [17] showed that this problem is NP-complete even for bipartite graphs and k = 3.
We are now ready to prove Theorem 5.
Proof. Since cobipartite graphs are perfect and closed under edge contractions, we may assume without loss of generality that π = χ. The problem is in NP, as Coloring is polynomial-time solvable on cobipartite graphs and then we can take the sequence of edge contractions as certificate. We reduce from Biclique Vertex-Partition. Recall that this problem is NP-complete even for bipartite graphs and k = 3 [17]. As the problem is polynomial-time solvable for bipartite graphs and k = 2 (see [17]), we may ask for a biclique vertex-partition of size exactly 3, in which each biclique is nontrivial. Let (G, 3) be an instance of Biclique Vertex-Partition, where G is a connected bipartite graph on n vertices that has partition classes X and Y . We claim that G has a biclique vertex-partition consisting of three non-trivial bicliques if and only if G can be (n − 6)-contracted into a graph G with First suppose that G has a biclique vertex-partition S of size 3. Let S 1 , S 2 , S 3 be the three (nontrivial) bicliques in S. Let A i , B i be the two bipartition classes of S i for i = 1, 2, 3. So, in G, we have that A 1 , A 2 , A 3 , B 1 , B 2 , B 3 are six cliques that partition the vertices of G, and moreover, there is no edge between a vertex of A i and a vertex of B i , for i = 1, 2, 3. In G we contract each clique A i to a single vertex that we give colour i, and we contract each clique B i to a single vertex that we give colour i as well. In this way we have obtained a 6-vertex graph G (so the number of contractions is n − 6) with a 3-colouring. Thus, χ(G ) ≤ 3. Now suppose that G can be (n − 6)-contracted into a graph G with χ(G ) ≤ 3. We first observe that the class of cobipartite graphs is closed under taking edge contractions; indeed, if e is an edge connecting two vertices of the same partition class, then contracting e results in a smaller clique, and if e is an edge connecting two vertices of two different partition classes, then contracting e is equivalent to removing one of its end-vertices and making its other end-vertex adjacent to every other vertex in the resulting graph.
As the class of cobipartite graphs is closed under taking contractions, G is cobipartite. As cobipartite graphs have independence number at most 2, each colour class in a colouring of G must have size at most 2. Consequently, G must have exactly six vertices a 1 , a 2 , a 3 , b 1 , b 2 , b 3 such that a 1 , a 2 , a 3 form a clique, b 1 , b 2 , b 3 form a clique, and moreover, a i and b i are not adjacent, for i = 1, 2, 3. This means that we did not contract an edge uv with u ∈ X and v ∈ Y (as the resulting vertex would be adjacent to all other vertices). Hence, we may assume without loss of generality that for i = 1, 2, 3, each a i corresponds to a set of vertices A i ⊂ X (that we contracted into the single vertex a i ) and that each b i corresponds to a set of vertices B i ⊂ Y (that we contracted into the single vertex b i ). As each pair We now assume that d is fixed. We show that d-Contraction Blocker(π) becomes polynomial-time solvable on cobipartite graphs for π ∈ {χ, ω}. For π = χ, we can prove this even for the class of graphs with independence number at most 2, or equivalently, the class of 3P 1 -free graphs, which properly contains the class of cobipartite graphs.
Theorem 6. For any fixed d ≥ 0, the d-Contraction Blocker(χ) problem can be solved in polynomial time for 3P 1 -free graphs.
Proof. Let G be a graph with α(G) ≤ 2. Consider a colouring with χ(G) colours. The size of every colour class is at most 2. Hence every subgraph of G induced by two colour classes has at most 4 vertices, and as such has a spanning forest with in total at most 3 edges. This means that we can contract two colour classes to an independent set (that is, to a new colour class) by using at most 3 contractions. This observation gives us the following algorithm. We guess a set of at most 3 contractions. Afterward we decrease d by 1 and repeat this procedure until d = 0. For each resulting graph G we check whether χ(G ) ≤ χ(G) − d. If so, then the algorithm returns a yes-answer and otherwise a no-answer.
Let m be the number of edges of G. Then the total number of guesses is at most m 3d , which is polynomial as d is fixed. Because Coloring is polynomial-time solvable on graphs with independence number at most 2 and this class is closed under edge contractions, our algorithm runs in polynomial time.
Proof. For π = χ this follows immediately from Theorem 6. As cobipartite graphs are perfect and closed under edge contraction, we obtain the same result for π = ω.
We now consider the class of bipartite graphs. If π ∈ {χ, ω}, then Contraction Blocker(π) is trivial for bipartite graphs (and thus also for trees). To the contrary, for π = α, we will show that Contraction Blocker(π) is NP-hard for bipartite graphs. The complexity of d-Contraction Blocker(α) remains open for bipartite graphs. Bipartite graphs are not closed under edge contraction. Therefore membership to NP cannot be established by taking a sequence of edge contractions as the certificate, even though due to König's Theorem (see, for example, [14]), Independent Set is polynomialtime solvable for bipartite graphs.
Proof. We know from Theorem 4 that 1-Contraction Blocker(α) is NP-complete on cobipartite graphs. Consider a cobipartite graph G with m edges and an integer k, which together form an instance of 1-Contraction Blocker(α). Subdivide each of the m edges of G in order to obtain a bipartite graph G . We claim that (G, k) is a yes-instance of 1-Contraction Blocker(α) if and only if (G , α(G ) − 1, k + m) is a yes-instance of Contraction Blocker(α).
First suppose that (G, k) is a yes-instance of 1-Contraction Blocker(α). In G we first perform m edge contractions to get G back. We then perform k edge contractions to get independence number . Then there exists a sequence of k +m edge contractions that transform G into a complete graph K. We may assume that K has size at least 4 (as we could have added without loss of generality three dominating vertices to G without increasing k). As K has size at least 4, each subdivided edge must be contracted back to the original edge again. This operation costs m edge contractions, so we contract G to K using at most k edge operations. Hence, (G, k) is a yes-instance of 1-Contraction Blocker(α). This proves the claim and hence the theorem.
We complement Theorem 7 by showing that Contraction Blocker(α) is lineartime solvable on trees. In order to prove this result we make a connection to the matching number µ of a graph.
Theorem 8. Contraction Blocker(α) is linear-time solvable on trees.
Proof. Let (T, d, k) be an instance of Contraction Blocker(α), where T is a tree on n vertices. We first describe our algorithm and prove its correctness. Afterwards, we analyze its running time. Throughout the proof let M denote a maximum matching of T .
First suppose that d ≤ n−2µ(T ). There are exactly σ(T ) = n−2µ(T ) vertices that are unsaturated by M . Let uv be an edge, such that u is unsaturated. As M is maximum, v must be saturated. Then, by contracting uv, we obtain a tree T such that µ(T ) = µ(T ). It follows from the above that α(T ) = α(T )−1. Say that we contracted u onto v. Then in T we have that v is saturated by M , which is a maximum matching of T as well. Thus, if d ≤ n − 2µ(T ), contracting d edges, one of the end-vertices of which is unsaturated by M , yields a tree T with µ(T ) = µ(T ) and α(T ) = α(T ) − d. Since an edge contraction reduces the independence number by at most 1, it follows that this is optimal. Hence, as Now suppose that d > n − 2µ(T ). Suppose that we first contract the n − 2µ(T ) edges that have exactly one end-vertex that is unsaturated by M . It follows from the above that this yields a tree T with µ(T ) = µ(T ) and α(T ) = α(T ) − (n − 2µ(T )). Since T does not contain any unsaturated vertex, M is a perfect matching of T . Then, contracting any edge in T results in a tree T with µ(T ) = µ(T ) − 1 and thus, α(T ) = α(T ). If we contract an edge uv ∈ M , the resulting vertex uv is unsaturated by M = M \ {uv} in T . Hence, as explained above, if in addition we contract now an edge (uv)w, we obtain a tree T with α(T ) = α(T ) − 1 and µ(T ) = µ(T ). Repeating this procedure, we may reduce the independence number of T by d with n − 2µ(T ) + 2(d − n + 2µ(T )) = 2(d + µ(T )) − n edge contractions. Below we show that this is optimal.
Suppose that we contract p edges in T . Let T be the resulting tree. We have α(T ) + µ(T ) = n−p. As µ(T ) ≤ 1 2 (n−p), this means that α(T ) ≥ 1 2 (n−p). If p < 2(d+µ(T ))−n we have − p 2 > −(d + µ(T )) + n 2 , and thus So at least 2(d + µ(T )) − n edge contractions are necessary to decrease the independence number by d. It remains to check if k is sufficiently high for us to allow this number of edge contractions.
As we can find a maximum matching of tree T (and thus compute µ(T )) in O(n) time by using the algorithm of Savage [44], our algorithm runs in O(n) time.
Remark 1. By König's Theorem, we have that α(G) + µ(G) = |V (G)| for any bipartite graph G, but we can only use the proof of Theorem 8 to obtain a result for trees for the following reason: trees form the largest subclass of (connected) bipartite graphs that are closed under edge contraction, and this property plays a crucial role in our proof.
Deletion Blockers
We first show that all three deletion blocker problems are polynomial-time solvable for bipartite graphs (and thus for trees). It is known already that Deletion Blocker(α) is polynomial-time solvable for bipartite graphs [3,13]. Hence it suffices to prove that the same holds for Deletion Blocker(π) when π ∈ {χ, ω}. In order to do so we need the following relation between 1-Deletion Blocker(ω) and Vertex Cover. Proposition 1. Let G be a graph with at least one edge and let k ≥ 1 be an integer.
Proof. Let G = (V, E) be a graph with |E| ≥ 1. Thus, ω(G) ≥ 2. Let k ≥ 1 be an integer. First suppose that (G, k) is a yes-instance of Vertex Cover, that is, G has a vertex cover V of size at most k. So, every edge of G is incident to at least one vertex of V . Then, deleting all vertices of V yields a graph G with no edges. This means that ω(G ) ≤ 1, and thus (G, ω(G) − 1, k) is a yes-instance for Deletion Blocker(ω). Now suppose that (G, ω(G) − 1, k) is a yes-instance of Deletion Blocker(ω). Then there exists a set V ⊆ V of size |V | ≤ k such that ω(G − V ) ≤ 1. This implies that G − V has no edges. Thus V is a vertex cover of G of size at most k. So, (G, k) is a yes-instance for Vertex Cover.
Proposition 1 has the following corollary, which we will apply in this section and at some other places in our paper.
Corollary 2. Let G be a triangle-free graph with at least one edge and let k ≥ 1 be an integer. Then (G, k) is a yes-instance of 1-Deletion Blocker(ω) if and only if (G, k) is a yes-instance of Vertex Cover.
We are now ready to prove the following result.
Proof. As bipartite graphs are perfect and closed under vertex deletion, the problems Deletion Blocker(ω) and Deletion Blocker(χ) are equivalent. Therefore, we only have to consider the case where π = ω. As bipartite graphs have clique number at most 2, Deletion Blocker(ω) and 1-Deletion Blocker(ω) are equivalent. As bipartite graphs are triangle-free, we can apply Corollary 2. To solve Vertex Cover on bipartite graphs, König's Theorem tells us that it suffices to find a maximum matching, which takes O(n 2.5 ) time on n-vertex bipartite graphs [27].
We now consider the the class of cobipartite graphs. It is known that Deletion Blocker(π) is polynomial-time solvable on cobipartite graphs if π ∈ {ω, χ} [13]. Hence we only have to deal with the case π = α. For this case we prove the following result, which follows immediately from Theorem 9.
Theorem 10. Deletion Blocker(α) can be solved in polynomial time on cobipartite graphs.
Cographs
It is well known (see for example [8]) that a graph G is a cograph if and only if G can be generated from K 1 by a sequence of operations, where each operation is either a join or a union operation. Recall from Section 2 that we denote these operations by ⊗ and ⊕, respectively. Such a sequence corresponds to a decomposition tree T , which has the following properties: 1. its root r corresponds to the graph G r = G; 2. every leaf x of T corresponds to exactly one vertex of G, and vice versa, implying that x corresponds to a unique single-vertex graph G x ; 3. every internal node x of T has at least two children, is either labeled ⊕ or ⊗, and corresponds to an induced subgraph G x of G defined as follows: A cograph G may have more than one such tree but has exactly one unique tree [11], called the cotree T G of G, if the following additional property is required: 4. Labels of internal nodes on the (unique) path from any leaf to r alternate between ⊕ and ⊗.
Note that T G has O(n) vertices. For our purposes we must modify T G by applying the following known procedure (see for example [6]). Whenever an internal node x of T G has more than two children y 1 and y 2 , we remove the edges xy 1 and xy 2 and add a new vertex x with edges xx , x y 1 and x y 2 . If x is a ⊕-node, then x is a ⊕-node, and if x is a ⊗-node, then x is a ⊗-node. Applying this rule exhaustively yields a tree in which each internal node has exactly two children. We denote this tree by T G . Because T G has O(n) vertices, modifying T G into T G takes linear time.
Corneil, Perl and Stewart [12] proved that the problem of deciding whether a graph with n vertices and m edges is a cograph can be solved in time O(n+m). They also showed that in the same time it is possible to construct its cotree (if it exists). As modifying T G into T G takes O(n + m) time, we obtain the following lemma. For two integers k and we say that a graph G can be (k, )-contracted into a graph H if G can be modified into H by a sequence containing k edge contractions and vertex deletions. Note that cographs are closed under edge contraction and under vertex deletion. In fact, to prove our results for cographs, we will prove the following more general result.
Theorem 11. Let π ∈ {α, χ, ω}. The problem of determining the largest integer d such that a cograph G with n vertices and m edges can be (k, )-contracted into a cograph H with π(H) ≤ π(G) − d can be solved in O(n 2 + mn + (k + ) 3 n) time.
Proof. First consider π = α. Let G be a cograph with n vertices and m edges and let k, be two positive integers. We first construct T G . We then consider each node of T G by following a bottom-up approach starting at the leaves of T G and ending in its root r.
Let x be a node of T G . Recall that G x is the subgraph of G induced by all vertices that corresponds to leaves in the subtree of T G rooted at x. With node x we associate a table that records the following data: for each pair of integers i, x is a ⊕-node. Let y and z be the two children of x. Then, as G x is the disjoint union of G y and G z , we find that α(G x ) = α(G y ) + α(G z ). Hence, we have Since x is a ⊗-node, G x is connected and as such has a spanning tree T . If i + j ≥ |V (G x )| and j ≥ 1, then we can contract i edges of T in the graph G x followed by j vertex deletions.
As each operation will reduce G x by exactly one vertex, this results in the empty graph.
. From now on assume that i + j < |V (G x )| or j = 0. As such, any graph we can obtain from G x by using i edge contractions and j vertex deletions is non-empty and hence has independence number at least 1. Let y and z be the two children of x. Then, as G x is the join of G y and G z , we find that α(G x ) = max{α(G y ), α(G z )}. In order to determine d(i, j, x) we must do some further analysis. Let S be a sequence that consists of i edge contractions and j vertex deletions of G x such that applying S on G x results in a graph H x with α(H x ) = α(G x ) − d(i, j, x). We partition S into five sets S e y , S e z , S e yz , S v y , S v z , respectively, as follows. Let S e y and S e z be the set of contractions of edges with both end-vertices in G y and with both end-vertices in G z , respectively. Let S e yz be the set of contractions of edges with one end-vertex in G y and the other one in G z . Let a y = |S e y | and let a z = |S e z |. Then |S e yz | = i − a y − a z . Let S v y and S v z be the set of deletions of vertices in G y and G z , respectively.
We distinguish between two cases. First assume that S e yz = ∅. Then a y + a z = i. Let H y be the graph obtained from G y after applying the subsequence of S, consisting of operations in S e y ∪ S v y , on G y . Let H z be defined analogously. Then we have where the second equality follows from the definition of S. Now assume that S e yz = ∅. Recall that i + j < |V (G x )| or j = 0. Hence α(H x ) ≥ 1. Our approach is based on the following observations.
First, contracting an edge with one end-vertex in G y and the other one in G z is equivalent to removing these two end-vertices and introducing a new vertex that is adjacent to all other vertices of G x (such a vertex is said to be universal).
Second, assume that G y contains two distinct vertices u and u and that G z contains two distinct vertices v and v . Now suppose that we are to contract two edges from {uv, uv , u v, u v }. Contracting two edges of this set that have a common end-vertex, say edges uv and uv , is equivalent to deleting u, v, v from G x and introducing a new universal vertex. Contracting two edges with no common end-vertex, say uv and u v , is equivalent to deleting all four vertices u, u , v, v from G x and introducing two new universal vertices. Because the two new universal vertices in the latter choice are adjacent, whereas the vertex u may not be universal after making the former choice, the latter choice decreases the independence number by the same or a larger value than the former choice. Hence, we may assume without loss of generality that the latter choice happened. More generally, the contracted edges with one end-vertex in G y and the other one in G z can be assumed to form a matching. We also note that introducing a new universal vertex to a graph does not introduce any new independent set other than the singleton set containing the vertex itself.
We conclude that each edge contraction in S e yz may be considered to be equivalent to deleting one vertex from G y and one from G z and introducing a new universal vertex. If one of the two graphs G y or G z becomes empty in this way, then an edge contraction in S e yz can be considered to be equivalent to the deletion of a vertex of the other one. Finally, if both sets G y and G z become empty, then we can stop as in that case H x has independence number 1 (which we assumed was the smallest value of α(H x )).
By the above observations and the definition of S we find that Hence we can do as follows. We consider all tuples (a y , b) with 0 ≤ a y ≤ i and 0 ≤ b ≤ j and compute max{α( Let α x be the minimum value over all values found. We then consider all tuples (a y , a z , b) with a y ≥ 0, a z ≥ 0, a y + a z ≤ i and 0 ≤ b ≤ j and compute max{1, Let α x be the minimum value over all values found. Then After reaching the root r, we let our algorithm return the integer d(k, , r). By construction, d(k, , r) is the largest integer such that G = G r can be (k, )-contracted into a graph H with α(H) ≤ α(G) − d(k, , r). We are left to analyze the running time.
Constructing T G can be done in O(n + m) time by Lemma 2. We now determine the time it takes to compute one entry d(i, j, x) in the table associated with a node x. It takes linear time to compute the independence number of a cograph 1 . The total number of tuples (a y , b) and (a y , a z , b) that we need to consider is O((k+ ) 3 ). Note that the table associated with a node x has O((k+ ) 2 ) entries but that we only have to compute α(G x ) once. Hence, it takes O(n + m + (k + ) 3 ) time to construct a table for a node. As T G has O(n) vertices, the total running time is O(n + m) + O(n(n + m + (k + ) 3 )) = O(n 2 + mn + (k + ) 3 n). Now consider π = χ. Note that we cannot consider the complement of a cograph (which is a cograph) because an edge contraction in a graph does not correspond to an edge contraction in its complement. However, we can re-use the previous proof after making a few modifications. Let G be a cograph with n vertices and m edges and let k, be two positive integers. We follow the same approach as in the proof for π = α. We only have to swap Cases 2 and 3 after observing that χ(G x ) = max{χ(G y ), χ(G z )} if x is a ⊕-node with y and z as its two children and χ(G x ) = χ(G y ) + χ(G z ) if x is a ⊗-node. We can use the same arguments as used in the proof for π = α for the running time analysis as well; we only have to observe that it takes O(n + m) time to compute the chromatic number of a cograph (using the same arguments as before or by using another algorithm of [10]).
Finally consider π = ω. As cographs are perfect and closed under edge contractions, the proof follows immediately from the corresponding result for π = χ. Corollary 3. For π ∈ {α, χ, ω}, both the Contraction Blocker(π) problem and the Deletion Blocker(π) problem can be solved in polynomial time for cographs.
Split Graphs
A split partition (K, I) of a split graph is minimal if I ∪ {v} is not an independent set for all v ∈ K, in other words every vertex v ∈ K is adjacent to some vertex u ∈ I. Note that for a minimal split partition (K, I) we have α(G) = |I|. A split partition (K, I) is maximal if K ∪ {v} is not a clique for all v ∈ I, in other words every vertex v ∈ I is non adjacent to at least one vertex u ∈ K. Note that for a maximal split partition (K, I) we have ω(G) = χ(G) = |K|. We first show the following result. First suppose that |I | ≤ d. For (G, k) to be a yes-instance, G must be contracted into a graph G with α(G ) ≤ α(G) − d = |I | + |I | − d ≤ |I |. This means that we must contract D into the empty graph, which is not possible. Hence, (G, k) is a no-instance in this case. Hence, we may assume without loss of generality that |I | ≥ d + 1.
Suppose that k ≥ d + 1. If k ≥ |I |, then we contract every vertex of I onto a neighbour in K. In this way we have k-contracted G into a graph G with α(G ) = |I | + 1 = |I | + |I | − (|I | − 1) ≤ |I | + |I | − d = α(G) − d. So, (G, k) is a yes-instance in this case. If k ≤ |I | − 1, we contract each vertex of an arbitrary subset of k vertices of I onto a neighbour in K. In this way we have k-contracted G into a graph G with α(G ) ≤ |I | − k + 1 + |I | ≤ |I | + |I | − d = α(G) − d. So, (G, k) is a yes-instance in this case as well.
If k ≤ d, then we consider all possible sequences of at most k edge contractions. This takes time O(|E(G)| k ), which is polynomial as d, and consequently k, is fixed. For every such sequence we check in polynomial time whether the resulting graph has stability number at most α(G) − d. As split graphs are closed under edge contraction and moreover are chordal graphs, the latter can be verified in linear time (see [22]).
The only graph with chromatic number at most 0, is the empty graph. However, a non-empty graph cannot be contracted to an empty graph. Hence, (G, k) is a no-instance in this case.
For (G, k) to be a yes-instance, G must be k-contracted into a graph G with χ(G ) ≤ χ(G)−d = 1. Hence, every connected component of G must consist of exactly one vertex. If G has no connected components with edges, then (G, k) is a yes-instance. Otherwise, because G is a split graph, G has exactly one connected component D containing one or more edges. In that case, (G, k) is a yes-instance if and only if k ≥ |V (D)| − 1; this can be checked in constant time.
First, assume that k < d. Because every edge contraction reduces the chromatic number by at most 1, (G, k) is a no-instance.
Second, assume that k = d. We consider all possible sequences of at most k edge contractions. This takes time O(|E(G)| k ), which is polynomial as d, and consequently k, is fixed. For every such sequence we check in polynomial time whether the resulting graph has chromatic number at most χ(G) − d. As split graphs are closed under edge contractions and moreover are chordal graphs, the latter can be verified in polynomial time (see [22]).
Third, assume that k > d. We claim that (G, k) is a yes-instance. This can be seen as follows. Let (K, I) be a maximal split partition of G.
If k < |K|, then we contract k arbitrary edges of K. The resulting graph G has a split partition (K , Note that the latter equality follows from our assumption that (K, I) is maximal. Now suppose that k ≥ |K|. We contract |K| arbitrary edges of K. The resulting graph G has chromatic number χ(G ) = 2 ≤ χ(G) − d. Hence, in both cases, we conclude that (G, k) is a yes-instance.
Finally consider π = ω. We use the previous result combined with the fact that split graphs are perfect and closed under edge contractions.
In our next theorem we give two hardness results which, as explained in Section 1, show that Theorem 12 can be seen as best possible. In their proofs we will reduce from the Red-Blue Dominating Set problem. This problem takes as input a bipartite graph G = (R∪B, E) and an integer k, and asks whether there exists a red-blue dominating set of size at most k, that is, a subset D ⊆ B of at most k vertices such that every vertex in R has at least one neighbour in D. This problem is NP-complete, because it is equivalent to the NP-complete problems Set Cover and Hitting Set [20]. The Red-Blue Dominating Set problem is also W[1]-complete when parameterized by |B|−k [24]. Belmonte et al. [5] reduced from the same problem for showing that 1-Contraction Blocker(∆) is NPcomplete and W[2]-hard (with parameter k) for split graphs, but the arguments we use to prove our results are quite different from the ones they used.
Proof. The problem is in NP for π ∈ {α, χ, ω}, as split graphs are closed under edge contraction and the three problems Clique Coloring and Independent Set are readily seen to be polynomial-time solvable on split graphs; hence, we can take the sequence of edge contractions as the certificate. Recall that we reduce from Red Blue Dominating Set in order to show NP-hardness and W[1]-hardness with parameter d.
First consider π = α. Let G = (R ∪ B, E) be a bipartite graph that together with an integer k forms an instance of Red-Blue Dominating Set. We may assume without loss of generality that k ≤ |B|. Moreover, we may assume that every vertex of R is adjacent to at least one vertex of B. We add all possible edges between vertices in R. This yields a split graph G * with a split partition (R, B). Because every vertex in R is assumed to be adjacent to at least one vertex of B in G, we find that (R, B) is a minimal split partition of G * .
Because Red-Blue Dominating Set problem is NP-complete [20] and W[1]-complete when parameterized by |B| − k [24], it suffices to prove that G has a red-blue dominating set of size at most k if and only if (G * , |B|−k) is a yes-instance of (|B|−k)-Contraction Blocker(α). We prove this claim below.
First suppose that G has a red-blue dominating set D of size at most k. Because k ≤ |B|, we may assume without loss of generality that |D| = k (otherwise we would just add some vertices from B \ D to D).
In G * we contract every u ∈ B \ D onto a neighbour in R. In this way we (|B| − k)contracted G * into a graph G . Note that G is a split graph that has a split partition (R, D). Because every vertex in R is adjacent to at least one vertex of D in G by definition of D, it is adjacent to at least one vertex of D in G * . The latter statement is still true for G , as contracting an edge incident to a vertex u ∈ B is equivalent to deleting u. Hence, (R, D) is a minimal split partition of G , so α(G ) = |D|. Because (R, B) is a minimal split partition of G * , we have α(G * ) = |B|. This means that α(G ) = |D| = |B| − (|B| − |D|) = α(G * ) − (|B| − k). We conclude that (G * , |B| − k) is a yes-instance of (|B| − k)-Contraction Blocker(α). Now suppose that (G * , |B| − k) is a yes-instance of (|B| − k)-Blocker(α), that is, G * can be (|B| − k)-contracted into a graph G such that α(G ) ≤ α(G * ) − (|B| − k). Recall that α(G * ) = |B|. Hence, α(G ) ≤ k. Let p be the number of contractions of edges with one end-vertex in B. Note that any such contraction decreases the size of the independent set B by exactly one. If p < |B|−k, then G contains an independent set of size |B|−p > k, which would mean that α(G ) > k, a contradiction. Hence, p ≥ |B| − k, which implies that p = |B| − k as we performed no more than |B| − k contractions in total. Let D denote the independent set obtained from B after all edge contractions. Then we find that Hence, |D| = α(G ), which means that (D, R) is a minimal split partition of G . This means that every vertex of R is adjacent to at least one vertex of D in G . Because all our contractions were performed on edges with one end-vertex in B, we have only removed vertices from G * , that is, G is an induced subgraph of G * . Hence, every vertex of R is adjacent to at least one vertex of D in G . Consequently, D is a red-blue dominating set of G with size |D| = k. Now consider π = χ. Let G = (R∪B, E) be a bipartite graph that together with an integer k forms an instance of Red-Blue Dominating Set. We may assume without loss of generality that k ≤ |B|. Moreover, we may assume that every vertex of R is adjacent to at least one vertex of B.
We take the bipartite complement of G, that is, we construct the bipartite graph with partition classes R and B, and we add an edge between any two vertices u ∈ R and v ∈ B if and only if uv / ∈ E. Then, we add all possible edges between vertices in B. Finally we add a new vertex x to the graph. We make x adjacent to all vertices of B ∪ R. This yields a split graph G * with a split partition (B ∪ {x}, R). Because every vertex in R is assumed to be adjacent to at least one vertex of B in G, it is non-adjacent to at least one vertex of B in G * . Hence, (B ∪ {x}, R) is a maximal split partition of G * (we will explain the role of vertex x in our construction later). Similarly to the previous case, we claim that G has a red-blue dominating set of size at most k if and only if (G * , |B| − k) is a yes-instance of (|B| − k)-Contraction Blocker(χ). We prove this claim below.
First suppose that G has a red-blue dominating set D of size at most k. Because k ≤ |B|, we may assume without loss of generality that |D| = k (otherwise we would just add some vertices from B \ D to D).
In G * we contract every u ∈ B \ D onto x. In this way we (|B| − k)-contracted G * into a graph G . Note that G is a split graph that has a split partition (D ∪ {x}, R). Because every vertex in R is adjacent to at least one vertex of D in G by definition of D, it is non-adjacent to at least one vertex of D in G * . The latter statement is still true for G , as no vertex of D ∪ R was involved in any of the edge contractions performed. Hence, (D ∪ {x}, R) is a maximal split partition of G , so χ(G ) = |D| + 1. Because (B ∪ {x}, R) is a maximal split partition of G * , we have χ(G * ) = |B| + 1. This means that χ(G ) = |D| + 1 = k + 1 = |B| + 1 + k + 1 − (|B| + 1) = χ(G * ) − (|B| − k). We conclude that (G * , |B| − k) is a yes-instance of (|B| − k)-Contraction Blocker(χ). Now suppose that (G * , |B| − k) is a yes-instance of (|B| − k)-Blocker(χ), that is, G * can be (|B| − k)-contracted to a graph G such that χ(G ) ≤ χ(G * ) − (|B| − k). Recall that χ(G * ) = |B| + 1. Hence, χ(G ) ≤ k + 1. Let p be the number of contractions of edges between two vertices of B ∪ {x}. Note that any such contraction decreases the size of the clique B ∪ {x} by exactly one. If p < |B| − k, then G contains a clique of size |B| + 1 − p > k + 1, which would mean that χ(G ) > k + 1, a contradiction. Hence, p ≥ |B| − k, which implies that p = |B| − k as we performed no more than |B| − k contractions in total. Let B denote the clique obtained from B ∪ {x} after all edge contractions. Then we find that Hence, |B | = χ(G ), which means that (B , R) is a maximal split partition of G . This means that no vertex of R is adjacent to all vertices of B in G . We may assume without loss of generality that x ∈ B , as we can view any edge contraction of an edge between a vertex u ∈ B and x as a contraction of u onto x. Furthermore, suppose we performed a contraction of an edge uu with u, u ∈ B, say we contracted u onto u . We change this by contracting u onto x instead. Because x is adjacent to all vertices of B ∪ R in G, we find that x is adjacent to all vertices (except to itself) of G and of any intermediate graph that we obtained while contracting G into G . Hence, contracting u onto x is equivalent to deleting u. As such, contracting u onto x does not lead to a vertex v ∈ R becoming adjacent to all vertices of B . Consequently, the size of a maximum clique in the modified graph is also equal to |B | = χ(G ). As we can do the same for any other contraction of an edge between two vertices in B, we may assume without loss of generality that every edge contraction is a contraction of a vertex of B onto x.
Let D = B \ {x} ⊆ B. As noted, contracting a vertex of B onto x is the same as deleting such a vertex of B from the graph. Hence, every vertex of D has exactly the same neighbours in G as it has in G * . Because every vertex in R is adjacent to x but not to all vertices of B = D ∪ {x}, we find that every vertex in R is non-adjacent to at least one vertex of D in G , and consequently, in G * . Because x ∈ B and |B | = k + 1, we find that |D| = k. We conclude that D is a red-blue dominating set of G with size |D| = k.
Finally, consider π = ω. As split graphs are perfect and closed under edge contractions, this case follows directly from the previous case where π = χ.
Regarding the Deletion Blocker(π) problem, for π ∈ {α, χ, ω}, we know from [13] that it is NP-complete. In the same paper it was shown that if d is fixed, all three problems become polynomially solvable.
Interval Graphs
Let G = (V, E) be an interval graph with n vertices and m edges that corresponds to a set of intervals I = {I 1 , I 2 , . . . , I n } on the real line. Let V = {v 1 , . . . , v n } be such that vertex v i corresponds to interval I i for i = 1, . . . , n. Note that the class of interval graphs is closed under edge contraction. Indeed, contracting an edge v i v i corresponds to removing the intervals I i and I j and adding a new interval I ij = I i ∪ I j . It is well known (see e.g. [19]) that G has at most n maximal cliques which can be linearly ordered in O(n + m) time so that the maximal cliques containing a vertex v i appear consecutively for i = 1, . . . , n.
We first prove a useful lemma for the class of C 4 -free graphs, which contains the class of interval graphs as a proper subclass. Lemma 3. Let G = (V, E) be a C 4 -free graph and let v 1 v 2 ∈ E. Let G|v 1 v 2 be the graph obtained after the contraction of v 1 v 2 and let v 12 be the new vertex replacing v 1 and v 2 . Then every maximal clique K in G|v 1 v 2 containing v 12 corresponds to a maximal clique K in G and vice versa, such that (a) either |K| = |K | and K \ {v 12 Moreover, every other maximal clique in G|v 1 v 2 is a maximal clique in G and vice versa.
Proof. Let A 1 (resp. A 2 ) be the set of neighbours of v 1 (resp. v 2 ) that are nonadjacent to v 2 (resp. v 1 ). Let A 3 be the set of vertices adjacent to both v 1 and v 2 . Now consider a clique K in G|v 1 v 2 containing v 12 . As G is C 4 -free, we find that G, and hence G|v 1 v 2 , contains no edge between a vertex in A 1 and a vertex in A 2 . Therefore we are in exactly one of the following cases: (i) K contains one or more vertices from both A 1 and A 3 but no vertices from A 2 ; (ii) K contains one or more vertices from both A 2 and A 3 but no vertices from A 1 ; (iii) K contains one or more vertices from A 1 but no vertices from A 2 and A 3 ; (iv) K contains one or more vertices from A 2 but no vertices from A 1 and A 3 ; (v) K contains one or more vertices from A 3 but no vertices from A 1 and A 2 .
Suppose we are in case (i). Since K is maximal, it follows that (K \ {v 12 }) ∪ {v 1 } is a maximal clique in G and thus outcome (a) holds. By symmetry, if we are in case (ii), outcome (b) holds. Assume now that case (iii) occurs. Since K is maximal, it follows that (K \ {v 12 }) ∪ {v 1 } is a maximal clique in G and thus outcome (a) holds. By symmetry, we conclude that if case (iv) occurs, outcome (b) holds. Finally, suppose that we are in case (v). Then (K \ {v 12 }) ∪ {v 1 , v 2 } is a maximal clique in G and thus outcome (c) holds.
Lemma 3 tells us that if we contract an edge e in a C 4 -free graph, every maximal clique containing both end-vertices of e will have its size reduced by exactly one in the resulting graph, and moreover, the size of every other maximal clique of the original graph will remain the same and we do not create any new maximal clique. Lemma 4. Let G = (V, E) be an interval graph and let d ≥ 0 be an integer. Let K 1 be the first maximal clique of size strictly greater than ω(G) − d starting left on the real line, and let I x , I y be the intervals with the rightmost right endpoints among all intervals corresponding to the vertices in K 1 . Let B ⊆ E be a set of edges such that the graph G obtained from G after having contracted all edges from B satisfies ω(G ) ≤ ω(G) − d.
and such that the graph G obtained from G after contracting all edges in B satisfies Proof. We first note that, by their definition, x and y are contained in all maximal cliques of size strictly greater than ω(G) − d that contain at least two vertices of K 1 . Moreover, contracting the edge xy instead of another edge v 1 v 2 of K 1 does not create cliques of larger size, due to Lemma 3.
Lemma 4 tells us that if for an interval graph the answer of the Contraction Blocker(ω) problem is yes, then there always exists a set B ⊆ E with |B| ≤ k such that ω(H) ≤ ω(G) − d, where H is the graph obtained from G by contracting the edges of B, and xy ∈ B where x, y belong to the first maximal clique K in G with size strictly greater than ω(G) − d starting left on the real line and such that I x , I y have the rightmost right endpoints among all intervals corresponding to vertices in K. Since interval graphs are closed under edge contractions, we can use this property recursively to obtain a polynomial-time algorithm for Contraction Blocker(π), with π ∈ {χ, ω}, in interval graphs.
Proof. Since interval graphs are perfect and closed under edge contractions, we may assume without loss of generality that π = ω. Let G = (V, E) be an interval graph and let d ≥ 0 be an integer. Our algorithm goes as follows. Let K 1 be the first maximal clique of size strictly greater than ω(G) − d starting left on the real line. By Lemma 4, we know that if there exists a solution, then there exists one in which we contract the edge xy where x, y ∈ K 1 are such that the corresponding intervals I x , I y have the rightmost right endpoints among all intervals corresponding to vertices in K 1 . So we contract the edge xy. Since the resulting graph is still an interval graph, we may repeat our procedure. We consider again the first maximal clique of size strictly greater than ω(G) − d starting left on the real line and contract the edge whose end-vertices correspond to the intervals with the rightmost right endpoints among all intervals corresponding to vertices in that clique. We continue like this until there is no more maximal clique of size strictly greater than ω(G) − d in the graph.
The correctness of our algorithm follows from Lemmas 3 and 4. Indeed, by Lemma 3 we know that our choice of the edges that we contract is such that at each step there is at least one maximal clique of size strictly greater than ω(G) − d whose size is reduced by one and furthermore, we do not create any new maximal clique. Since an interval graph on n vertices contains at most n maximal cliques, it follows that our algorithms stop after at most nd steps. Since all maximal cliques of an interval graph can be found in time O(n + m), where m is the number of edges, we then find that our algorithm runs in time O(nd(n + m)). Finally, Lemma 4 ensures that the set of edges we choose to contract has minimum size.
The proof of Theorem 14 can be readily adapted to show polynomial-time solvability of the Deletion Blocker(π) problem on interval graphs for π ∈ {χ, ω}.
We recall that for π = α the complexity of both problems is open for interval graphs.
Chordal Graphs
The following result shows that Theorem 14 cannot be generalized to chordal graphs.
Proof. Since chordal graphs are perfect and closed under taking edge contractions, we may assume without loss of generality that π = ω. As Clique is polynomial-time solvable on chordal graphs, this means that the problem is in NP (take the sequence of edge contractions as the certificate). We reduce from Vertex Cover, which is well known to be NP-complete (see [20]).
Let G = (V, E) be a graph that together with an integer k forms an instance of Vertex Cover. From G we construct a chordal graph G as follows. We introduce a new vertex y not in G. We represent each edge e of G by a clique K e in G of size |V | so that K e ∩ K f = ∅ whenever e = f . We represent each vertex v of G by a vertex in G that we also denote by v. Then we let the vertex set of G be V ∪ e∈E K e ∪ {y}. We add an edge between every vertex in K e and a vertex v ∈ V if and only if v is incident with e in G. In G we let the vertices of V form a clique. Finally, we add all edges between y and any vertex in V ∪ e∈E K e . Note that the resulting graph G is indeed chordal. Note also that ω(G ) = |V | + 3 (every maximum clique consists of y, the vertices of a clique K e and their two neighbours in V ).
We claim that G has a vertex cover of size at most k if and only if G can be kcontracted to a graph H with ω(H) ≤ ω(G ) − 1. First suppose that G has a vertex cover U of size at most k. For each vertex v ∈ U , we contract the corresponding vertex v in G to y. As |U | ≤ k, this means that we k-contracted G into a graph H. Since U is a vertex cover, we obtain ω(H) ≤ |V | + 2 = ω(G ) − 1.
Now suppose that G can be k-contracted to a graph H with ω(H) ≤ ω(G ) − 1. Let S be a corresponding sequence of edge contractions (so |S| ≤ k holds). By Lemma 3 and the fact that chordal graphs are closed under taking edge contractions, we find that no contraction in S results in a new maximum clique. Hence, as we need to reduce the size of each maximum clique K uv ∪ {u, v, y} by at least 1, we may assume without loss of generality that each contraction in S concerns an edge with both its end-vertices in V ∪ {y}. We construct a set U as follows. If S contains the contraction of an edge uy we select u. If S contains the contraction of an edge uv, we select one of u, v arbitrarily. Because each maximum clique K uv ∪ {u, v, y} must be reduced, we find that U ⊆ V is a vertex cover. By construction, |U | ≤ k. This completes the proof. Similar arguments as in the above proof can be readily used to prove the following result, which shows that Theorem 15 cannot be generalized to chordal graphs.
8 Six Dichotomy Results and C 4 -free Perfect Graphs with ω = 3 In this section we first prove that for π ∈ {α, χ, ω} the contraction and deletion blocker problems become very quickly NP-hard when we increase π, that is, we prove Theorem 1.
(i) The problem is trivial if α = 1. As cobipartite graphs have independence number at most 2, we can apply Theorem 4 to obtain NP-completeness if α = 2.
(ii) The problem is trivial if χ ≤ 2. We now consider the class of graphs with χ = 3. Recall that the problem Bipartite Contraction is to test whether a graph can be made bipartite by at most k edge contractions. It is readily seen that 1-Contraction Blocker(χ) and Bipartite Contraction are equivalent for graphs of chromatic number 3. Heggernes, van 't Hof, Lokshtanov and Paul [26] observed that Bipartite Contraction is NP-complete by reducing from the NP-complete problem Edge Bipartization, which is that of testing whether a graph can be made bipartite by deleting at most k edges. Given an instance (G, k) of Edge Bipartization, they obtain an instance (G , k ) of Bipartite Contraction by replacing every edge in G by a path of sufficiently large odd length. Note that the resulting graph G has chromatic number 3 (assign colour 1 to the vertices of G and give the new vertices colours 2 and 3).
(iii) The problem is trivial if ω ≤ 2. We now consider the class of graphs with ω = 3. We use a polynomial reduction from the problem ONE-IN-3-SAT, which is well known to be NP-complete (see [20]). This problem has as input a set X = {x 1 , . . . , x n } of n boolean variables and a collection C = {c 1 , . . . , c m } of clauses over X ∪X such that |c i | = 3 for i = 1, . . . , m. The question is whether there a truth assignment for X such that each clause of C contains exactly one true literal. Let I = (X, C) be an instance of ONE-IN-3-SAT. We construct an instance (G, n+m) of 1-Contraction Blocker(ω), where G is constructed as follows (see Fig. 1 for an example): -For each variable x ∈ X, introduce five vertices forming a triangle and a square sharing exactly one edge. This yields the gadget for the variable x, where the two edges that do not belong to the square correspond to the two literals x andx. -For each clause c i ∈ C, introduce three vertices forming a triangle T i . This yields the gadget for the clause c i , where each edge corresponds to one of the three literals forming c i . -For every edge of a triangle T i corresponding to a literal λ, link its two end-vertices by a matching to the two end-vertices of the edge corresponding to λ in the variable gadget.
Observe that (G, n + m) can be obtained in polynomial time. Moreover, ω(G) = 3 and G contains exactly n + m disjoint triangles. Thus, in order to obtain a graph G from G with ω(G ) = 2, we need to contract at least one edge from each of these triangles. We claim that I is a yes-instance of ONE-IN-3-SAT if and only if (G, n+m) is a yes-instance of 1-Contraction Blocker(ω).
First suppose that I is a yes-instance. For each variable x which is true (resp. false), we contract the edge corresponding to the literal x (resp. the literalx) in the triangle of the variable gadget; for each clause c i , we contract the unique edge of the clause gadget corresponding to the literal which is set to true (see Fig. 2). Thus we contract exactly n + m edges, one in each of the n + m disjoint triangles. For each clause gadget in G, the unique contracted edge is linked to the unique contracted edge in the variable gadget corresponding to the true literal. Hence the four original vertices are transformed into two adjacent vertices.
We claim that no new triangles are created by performing the n+m edge contractions. Indeed, when contracting an edge from a clause gadget, we do create a triangle T one edge of which belongs to a variable gadget. But by construction, this edge will necessarily be contracted as well. Thus this triangle T is transformed into a single edge. Hence ω(G ) = 2, which means that (G, n + m) is a yes-instance.
Suppose now that (G, n + m) is a yes-instance. This means that we can obtain a graph G with ω(G ) = 2 by contracting n + m edges of G. Since G contains exactly k disjoint triangles, we must, as already mentioned before, contract exactly one edge in each of these triangles. Furthermore, in a variable gadget we must contract an edge not belonging to the square, as otherwise a new triangle is created and hence we would need more than n + m contractions, a contradiction. Let e be an edge in a variable gadget that is contracted. Suppose that e corresponds to a literal λ. In G, e is contained in some squares containing edges of clause gadgets which correspond to λ. Thus, after this contraction, we create new triangles each containing an edge of a clause gadget corresponding to λ. It follows that we must contract the edges in the clause gadgets corresponding to the literal λ, otherwise triangles will remain in G . Since we use n + m edge contractions, exactly one edge in each clause gadget is contracted. Hence, by assigning the value true to the literal corresponding to the edge contracted in each variable gadget, one literal has value true and the other two have value false in each clause. This yields a positive answer for I, so I is a yes-instance.
(iv) & (vi) Both problems are trivial if π ∈ {α, ω} has value 1. Now consider the class of graphs with ω = 2, or equivalently the class of triangle-free graphs. Since Vertex Cover is NP-complete for triangle-free graphs by Lemma 1, we conclude from Corollary 2 that 1-Deletion Blocker(ω) is NP-complete for triangle-free graphs. The remainder of statement (iv) follows immediately after recalling that 1-Deletion Blocker(α) can be solved by taking the complement of the input graph and solving 1-Deletion Blocker(ω) instead.
(v) First consider the class of graphs with χ = 2, which coincides with the class of bipartite graphs. Then the problem becomes equivalent to Independent Set, which is polynomialtime solvable for bipartite graphs (due to König's Theorem; see, for example, [14]). Now consider the class of graphs with χ = 3. Recall that the Maximum Induced Bipartite Subgraph problem is to test if a given graph contains an induced bipartite subgraph with at least k vertices for some integer k and that this problem is NP-complete even for the class of 3-colourable perfect graphs [1]. As for 3-colourable graphs 1-Deletion Blocker(χ) is equivalent to Maximum Induced Bipartite Subgraph, we find that 1-Deletion Blocker(χ) is NP-complete for graphs with chromatic number 3.
We have proven each of the six claims and thus have proven the theorem.
We note that the graph G in the proof of Theorem 1 (iii) contains no induced diamond (the complete graph K 4 on four vertices minus an edge) and no induced butterfly (the graph with vertices a, b, c, d, e and edges ab, bc, ca, cd, de, ec). As a graph G is K 4 -free if and only if ω(G) ≤ 3, we have in fact proven the following.
We use Theorem 1 (iii) to prove the following hardness result.
Proof. As before, the problem is readily seen to be in NP. Let π = ω, or equivalently, π = χ. We adapt the construction used in the proof of Theorem 1 (iii) by doing as follows for each edge e of the graph G in this proof. First we subdivide e. This gives us two new edges e 1 and e 2 . We introduce two new non-adjacent vertices u e and v e and make them adjacent to both end-vertices of e 1 . Denote the resulting graph by G * . Note that we got rid of all the induced C 4 s while not creating any new induced C 4 in this way. Hence G * is C 4 -free. Moreover, we did not introduce any clique on four vertices. Hence, as ω(G) = 3, we also have ω(G * ) = 3. The vertices of the original graph together with the subdivision vertices form a bipartite graph on top of which we placed a number of triangles. Hence, G * contains no odd hole and no odd antihole. By Theorem 3, G * is perfect.
We increase the allowed number of edge contractions accordingly and observe that, because of the presence of the vertices u e and v e for each edge e, we are always forced to contract the edge e 1 , which gives us back the original construction extended with a number of pendant edges (which do not play a role). Note that we have left the class of C 4 -free perfect graphs after contracting away the triangles, but this is allowed.
We recall that Contraction Blocker(α) is still open for the class of C 4 -free perfect graphs as well as Deletion Blocker(π) for π ∈ {α, χ, ω}, even if d is fixed.
H-free Graphs
In this section we prove our complexity results for the six blocker problems restricted to H-free graphs, that is, we prove Theorem 2. To summarize, for π ∈ {α, ω, χ} we are able to give a dichotomy both for the contraction and deletion blocker problem except for one open case for the contraction blocker problem when π = ω. We first consider π = α, then π = ω and then π = χ.
When π = α
We call a vertex forced if it is in every maximum independent set of a graph [13]. Recall that the set of all forced vertices is called the core of a graph and that Boros, Golumbic and Levit [7] proved that computing whether the core of a graph has size at least k is co-NP-hard for every fixed k ≥ 1. As a special case of their result, the problem of testing the existence of a forced vertex is co-NP-hard. We prove that the latter problem, or equivalently, Deletion Blocker(α) with d = k = 1, stays co-NP-hard even for graphs of girth p + 1, or equivalently, (C 3 , . . . , C p )-free graphs, for any constant p ≥ 3 ((the girth of a graph is the length of a shortest cycle in it).
Proof. Let G be a graph. We pick one of its edges uv and subdivide uv twice, that is, we replace the edge uv by two new vertices x and y and edges ux, xy, yv. We let G denote the resulting graph. Note that α(G ) = α(G) + 1 (see also [42]). We claim that G has a forced vertex if and only if G has a forced vertex.
First suppose that G has a forced vertex s. Then s is also a forced vertex of G . In order to see this consider a maximum independent set I of G . For contradiction, suppose that I does not contain s. Recall that I has size α(G) + 1. If x is in I , then its neighbour y is not in I , and thus I \ {x} is a maximum independent set of G that does not contain s, a contradiction. Hence x is not in I , and for the same reason y is not in I either. Then u is in I , as otherwise we could put x in I to get a larger independent set than I . However, we now find that I \ {u} is a maximum independent set of G that does not contain s, a contradiction. Hence s belongs I . We conclude that s is a forced vertex of G as well. Now suppose that G has a forced vertex s. First suppose s ∈ {x, y}, say s = x. Then v is a forced vertex of G. In order to see this consider a maximum independent set I of G. For contradiction, suppose that I does not contain v. Then I ∪ {y} is a maximum independent set of G not containing s = x, a contradiction. Hence s does not belong to {x, y}, so s must be in G. Then s is also a forced vertex of G. In order to see this consider a maximum independent set I of G. For contradiction, suppose that I does not contain s. As u and v are adjacent in G, not both of them are in I. Assume without loss of generality that u is not in I. Then I ∪ {x} is a maximum independent set of G that does not contain s, a contradiction. We conclude that s is a forced vertex of G.
We now subdivide each edge of G a sufficiently number of times (say p times) so that the resulting graph G is (C 3 , . . . , C p )-free. By repeatedly applying the above claim, we find that G has a forced vertex if and only if G has a forced vertex. As deciding whether a graph has a forced vertex is co-NP-hard [7], the result follows.
Before we present our two complexity dichotomies for π = α we need one additional observation.
Proof. As H is 3P 1 -free, H contains at most two connected components. Suppose H contains exactly two connected components. Then, as H is 2P 2 -free, at least one of these components must be a P 1 . As H is 3P 1 -free, this means that H is an induced subgraph of P 1 ⊕ P 2 , so H ⊆ i P 4 . Suppose H is connected. As H is 3P 1 -free, H contains no claw and no path on more than five vertices. Hence, H ⊆ i P 4 .
We are now ready to present our first dichotomy.
Theorem 20. Let H be a graph. If H ⊆ i P 4 , then Deletion Blocker(α) is polynomialtime solvable for H-free graphs, otherwise it is NP-hard or co-NP-hard for H-free graphs.
Proof. Let H be a graph. If H ⊆ i P 4 , then we use Corollary 3 to obtain polynomial-time solvability. Suppose H is not an induced subgraph of P 4 . If H contains an induced cycle C r for some r ≥ 3, then we pick p = r + 1 and apply Theorem 19 to obtain co-NPhardness even if d = k = 1. Note that for r = 5, we could have applied Theorem 13 to obtain NP-hardness, as split graphs are C 5 -free. Similarly, if r ≥ 6, then H contains an induced 2P 2 and we could have applied Theorem 13 (as split graphs are 2P 2 -free) to obtain NP-hardness as well.
Now assume that H is forest. As H is not an induced subgraph of P 4 , by Lemma 5 either 2P 2 ⊆ i H or 3P 1 ⊆ i H. If 2P 2 ⊆ i H, then we apply Theorem 13 again to obtain NP-hardness. If 3P 1 ⊆ i H, then we use Theorem 1 (iv) to obtain NP-hardness even if d = 1, after observing that a graph G is 3P 1 -free if and only if α(G) = 2.
Remark 2.
Recall that H-free graphs are closed under vertex deletion. Hence, Deletion Blocker(α) for H-free graphs will be in NP if we can solve Independent Set for Hfree graphs in polynomial time; in that case we can take a sequence of vertex deletions as certificate. To give an example, Independent Set is polynomial-time solvable for P 5 -free graphs [32]. Hence, for P 5 -free graphs, Deletion Blocker(α) is not only NPhard (which, as argued in the proof of Theorem 20, follows from Theorem 12) but even NP-complete.
We now consider the edge contraction variant and present our second dichotomy.
Theorem 21. Let H be a graph. If H ⊆ i P 4 , then Contraction Blocker(α) is polynomial-time solvable for H-free graphs, otherwise it is NP-hard for H-free graphs.
Proof. Let H be a graph. If H is an induced subgraph of P 4 , then we use Corollary 3 to obtain polynomial-time solvability. Now suppose that H is not an induced subgraph of P 4 . If H contains an induced cycle that is odd, then we use Theorem 7 to obtain NPhardness. If H contains an induced cycle that is even, then H either contains an induced C 4 or, if the even cycle has at least six vertices, an induced 2P 2 . This means that we can use Theorem 13 to obtain NP-hardness after recalling that split graphs are (2P 2 , C 4 )-free. Assume H contains no cycle. Then H is a forest. If H contains an induced 3P 1 , then we use Theorem 1 (i) to obtain NP-hardness even if d = 1, after observing that a graph G is 3P 1 -free if and only if α(G) = 2. Assume H is 3P 1 -free. Then 2P 2 ⊆ i H by Lemma 5, which means we can use Theorem 13 again to obtain NP-hardness.
When π = ω
The complexity dichotomy for Deletion Blocker(ω) follows immediately from Theorem 20 after making two observations. First, Deletion Blocker(ω) for H-free graphs is equivalent to Deletion Blocker(α) for H-free graphs. Second, the graph P 4 is selfcomplementary, that is, P 4 = P 4 .
Theorem 22.
Let H be a graph. If H ⊆ i P 4 , then Deletion Blocker(ω) is polynomialtime solvable for H-free graphs; otherwise it is co-NP-hard or NP-hard for H-free graphs.
We now consider the Contraction Blocker(ω) problem for H-free graphs. We start by giving a sufficient condition for computational hardness. Let G be a graph class with the following property: if G ∈ G, then so are 2G and G ⊕ K r for any r ≥ 1. We call such a graph class clique-proof.
Theorem 23. If Clique is NP-complete for a clique-proof graph class G, then Contraction Blocker(ω) is co-NP-hard for G, even if d = k = 1.
Proof. Let G be a graph class that is clique-proof. From a given graph G ∈ G and given integer ≥ 1 we construct the graph G = 2G ⊕ K +1 . Note that G ∈ G by definition and that ω(G ) = max{ω(G), + 1}. It suffice to prove that ω(G) ≤ if and only if G can be 1-contracted into a graph G * with ω(G * ) ≤ ω(G ) − 1.
Now suppose that G can be 1-contracted into a graph G * with ω(G * ) ≤ ω(G ) − 1. As contracting an edge in one of the two copies of G in G does not lower the clique number of G , the contracted edge must be in the K +1 , that is, G * = 2G ⊕ K . As this did result in a lower clique number, we conclude that ω(G ) = ω(K +1 ) = + 1 and ω(G * ) = ω(2G ⊕ K ) = max{ω(G), } = . The latter equality implies that ω(G) ≤ .
We need a number of special graphs, namely the cobanner, bull, the aforementioned butterfly and the paw (the graph P 1 ⊕ P 3 ), which are all displayed in Figure 3. We also need the following lemma from Poljak.
We use Lemma 6 in the proof of our next lemma.
Lemma 7.
Let H be a connected graph. If H is neither an induced subgraph of P 4 nor of the paw, then 1-Contraction Blocker(ω) is NP-hard or co-NP-hard for H-free graphs.
Proof. Let H be a connected graph that is neither an induced subgraph of P 4 nor of the paw. If H contains an induced C 4 , use Theorem 18. If H contains an induced K 4 , diamond or butterfly, use Corollary 4. If H contains an induced K 1,3 , C 5 , P 5 , bull or cobanner, use Lemma 6 with Theorem 23. So from now on we may assume that H is (C 4 , C 5 , P 5 , K 1,3 , K 4 , diamond, bull, butterfly, cobanner)-free. Below we show that this leads to a contradiction. First suppose that H contains no cycle. Then, as H is connected, H is a tree. Because H is K 1,3 -free, H is a path. Our assumption that H is neither an induced subgraph of P 4 nor of the paw implies that H contains an induced P 5 , which is not possible as H is P 5 -free. Now suppose that H contains a cycle C. Then C must have exactly three vertices, because H is (C 4 , C 5 , P 5 )-free. As H is not an induced subgraph of the paw, we find that H contains at least one vertex x not on C. As H is connected, we may assume that x has a neighbour on C. Because H is (diamond, K 4 )-free, x has exactly one neighbour on C.
Let v be this neighbour. Hence, H contains an induced paw (consisting of x, v and the other two vertices of C). As H is not an induced subgraph of the paw and H is connected, it follows that H contains a vertex y / ∈ V (C) ∪ {x} that is adjacent to a vertex on C or to x.
Suppose that y is adjacent to a vertex of C. Then, as H is (diamond, K 4 )-free, y has exactly one neighbour u in C. If u = v then H either contains an induced claw (if x and y are non-adjacent) or an induced butterfly (if x and y are adjacent). Since, by our assumption, this is not possible, it follows that u = v. Then, because H is bull-free, we deduce that x and y are adjacent. However, then the vertices, u, v, x, y form an induced C 4 , which is not possible as H is C 4 -free. We conclude that y is not adjacent to a vertex of C, so y must be adjacent to x only. However, then H contains an induced cobanner, a contradiction. This completes the proof of Lemma 7.
A graph G is complete multipartite if V (G) can be partitioned into k independent sets V 1 , . . . , V k for some integer k, such that two vertices are adjacent if and only if they belong to two different sets V i and V j . We need a result of Olariu on paw-free graphs.
We are ready to present our result for Contraction Blocker(ω) restricted to Hfree graphs. This is the only result where we do not have a dichotomy due to one missing case.
Theorem 24. Let H = C 3 ⊕P 1 be a graph. If H ⊆ i P 4 or H ⊆ i paw, then Contraction Blocker(ω) is polynomial-time solvable for H-free graphs, otherwise it is NP-hard or co-NP-hard for H-free graphs.
Proof. First assume that H is connected. If H is an induced subgraph of P 4 then we use Corollary 3. If H is an induced subgraph of the paw, then we know from Lemma 8 that G is either C 3 -free or complete multipartite. In the first case one must contract all the edges of an H-free graph in order to decrease its clique number. Hence Contraction Blocker(ω) is polynomial-time solvable for C 3 -free graphs. In the second case H is P 4free, so we can use Corollary 3 again. If H is neither an induced subgraph of P 4 nor of the paw, then we use Lemma 7. Now assume that H is not connected. If H contains a connected component that is not an induced subgraph of P 4 or the paw then we use Lemma 7 again. Assume that each connected component of H is an induced subgraph of P 4 or the paw. If 3P 1 ⊆ i H or 2P 2 ⊆ i H then we use Theorem 5 or Theorem 13, respectively. Hence, H ∈ {2P 1 , P 2 ⊕P 1 , C 3 ⊕P 1 }. In the first two cases H ⊆ i P 4 and thus we can use Corollary 3, whereas we excluded the last case.
When π = χ
Recall that Deletion Blocker(χ) and Contraction Blocker(χ) are called Critical Vertex and Contraction-Critical Edge, respectively, if d = k = 1. We need the following result announced in [36]; see [35] for its proof.
Theorem 25 ([36]). If a graph H ⊆ i P 4 or of H ⊆ i P 1 ⊕P 3 , then Critical Vertex and Contraction-Critical Edge restricted to H-free graphs are polynomial-time solvable, otherwise they are NP-hard or co-NP-hard.
We also need the following result of Král', Kratochvíl, Tuza, and Woeginger. Theorem 26 ( [28]). Let H be a graph. If H ⊆ i P 4 or of H ⊆ i P 1 ⊕P 3 , then Coloring is polynomial-time solvable for H-free graphs, otherwise it is NP-complete for H-free graphs.
We also need the following lemma.
Proof. Let G = (V, E) be a 3P 1 -free graph with |V | = n and let k ≥ 1 be an integer. Consider an instance (G, k, d) of Deletion Blocker(χ). We proceed as follows. Consider an optimal colouring of G. Since G is 3P 1 -free, the size of each colour class is at most 2. Moreover, the number of colour classes of size 1 is the same for every optimal colouring of G. Let be this number. Hence, there are n− 2 colour classes of size 2 and χ(G) = + n− 2 . Now (G, k, d) is a yes-instance if and only if we can obtain a graph G from G by deleting at most k vertices such that χ(G ) ≤ χ(G) − d = + n− 2 − d. Since G is also 3P 1 -free, the colour classes in any optimal colouring of G have size at most 2 and thus, G contains at most 2( + n− 2 − d) = n + − 2d vertices. In other words, we need to delete at least 2d − vertices from G in order to get such a graph G . As such, (G, k, d) is a no-instance if k < 2d − .
Next we will show that if k ≥ 2d − , then (G, k, d) is a yes-instance and this will complete the proof. If d ≤ , we delete d vertices representing colour classes of size 1. If d > , we delete the vertices representing the colour classes of size 1 and 2(d − ) vertices of d − colour classes of size 2. In this way we obtain a graph G whose chromatic number is exactly χ(G) − d.
Due to the above, all we need to do is check if k ≥ 2d − . This can be done in polynomial time, since we can compute in polynomial time due to Theorem 26.
Two disjoint subsets of vertices in a graph are complete if there is an edge between every vertex of A and every vertex of B. Lemma 8 implies the following lemma, which we use together with Corollary 3 and Lemma 9 to prove Lemma 11. Lemma 10. The vertex set of every (P 1 ⊕ P 3 )-free graph G can be decomposed into two disjoint sets A and B such that G[A] is 3P 1 -free, G[B] is P 4 -free and A and B are complete to each other.
Proof. Let G = (V, E) be a (P 1 ⊕ P 3 )-free graph. Then G is P 1 ⊕ P 3 -free. By Lemma 8 every connected component of G is triangle-free or complete multipartite. Let A be the union of the vertices of all triangle-free components. Then G[A] = K 3 -free, so G[A] is 3P 1 -free. Let B = V \ A. As every component of G[B] is complete multipartite, G[B] is P 4 -free. As P 4 = P 4 , this means that G[B] is P 4 -free. Moreover, A and B are complete to each other in G.
Proof. Let (G, d, k) be an instance of Vertex Deletion Blocker(χ), where G = (V, E) is (P 1 ⊕ P 3 )-free. By Lemma 10, the vertex set of G can be decomposed into two disjoint sets A and B such that G 1 = G[A] is 3P 1 -free, G 2 = G[B] is P 4 -free and A and B are complete to each other. The latter implies that χ(G) = χ(G 1 ) + χ(G 2 ). Moreover, this property is maintained when deleting vertices from V . For each pair (k 1 , k 2 ) with k 1 + k 2 = k we check by how much we can decrease χ(G 1 ) using at most k 1 vertex deletions and by how much we can decrease χ(G 2 ) using at most k 2 vertex deletions. We can do this in polynomial time by Corollary 3 and Lemma 9, respectively. We keep track of the maximum sum of these values. In the end, we are left to check if the value found is at least d or not. Since the number of pairs (k 1 , k 2 ) is at most k, the total running time is polynomial.
We can now state and prove the following two dichotomies. Theorem 27. Let H be a graph. Then the following holds: -If H ⊆ i P 1 ⊕ P 3 or P 4 , then Deletion Blocker(χ) for H-free graphs is polynomialtime solvable, and it is NP-hard or co-NP-hard otherwise.
-If H ⊆ i P 4 , then Contraction Blocker(χ) for H-free graphs is polynomial-time solvable for H-free graphs, and it is NP-hard or co-NP-hard otherwise.
Proof. Let H be a graph. If H is neither an induced subgraph of P 4 nor of P 1 ⊕ P 3 , then for both problems we can apply Theorem 25. If H ⊆ i P 4 , then for both problems we apply Corollary 3. In the remaining case H = 3P 1 or H = P 1 ⊕ P 3 . Then applying Lemma 11 gives us the desired dichotomy for Deletion Blocker(χ). And applying Theorem 5 gives us the desired dichotomy for Contraction Blocker(χ) after recalling that cobipartite graphs are 3P 1 -free.
After proving Theorem 27 we have shown all six cases of Theorem 2. Note that, unlike the case d = k = 1 (see Theorem 25), the complexity dichotomies of the problems Contraction Blocker(χ) and Deletion Blocker(χ) restricted to H-free graphs are different when H is disconnected.
Future Work
We aim to solve the blank entries in Table 1. In particular, we pose the following open problems: Q1. Determine the complexity of Contraction Blocker(α) for interval graphs.
Q2. Determine the complexity of Deletion Blocker(α) for interval graphs.
We observe that the complexity of the two problems in Q1 and Q2 is unknown for interval graphs even if d is fixed. We also aim to research the complexity of 1-Contraction Blocker(α) for bipartite graphs and chordal graphs, and the complexity of 1-Deletion Blocker(α) for perfect graphs and chordal graphs.
In addition to the above it would be interesting to generalize our results for the blocker problems restricted to H-free graphs in Section 9 to families of more than one forbidden induced subgraph H. However, we still need to complete one stubborn remaining case for one problem: Q3. Determine the complexity of Contraction Blocker(ω) for (C 3 ⊕ P 1 )-free graphs.
We observe that it is not difficult to construct graph classes for which a blocker problem is tractable, but the original problem is NP-complete. However, we do not know of such examples of hereditary graph classes. Hence it would be interesting to solve the following question.
Several computationally hard cases of our dichotomies for H-free graphs in Theorem 2 hold in fact even when d = 1 or d = k = 1. In particular, from Theorems 25 and 27 we immediately deduce that if H ⊆ i P 1 ⊕ P 3 or P 4 , then 1-Deletion Blocker(χ) for Hfree graphs is polynomial-time solvable, and NP-hard or co-NP-hard otherwise. However, for the other five variants we still have a number of missing cases to solve.
Finally, we aim to determine a dichotomy with respect to H-free graphs for the variant (π ∈ {α, ω, χ}), where S consists of other graph operations, for instance when S consists of an edge deletion. This variant has been less studied than the vertex deletion and edge contraction variant. The reason for this is that no class of H-free graphs is closed under edge deletion, whereas such a class is closed under vertex deletion, and in the case when H is a linear forest, under edge contraction as well. For π = χ we are close to a dichotomy. Recall that an edge of a graph is critical or contraction-critical if its deletion or contraction, respectively, reduces the chromatic number of G by 1. It is known that an edge is contraction-critical if and only if it is critical [36]. Hence by Theorem 25 we only need to consider the cases where H ⊆ i P 4 or H ⊆ i P 1 ⊕ P 3 . Bazgan et al. [2] showed that the edge deletion variant for chromatic number is polynomial-time solvable for threshold graphs, that is, for (C 4 , 2P 2 , P 4 )-free graphs, and NP-hard for cobipartite graphs, and thus for 3P 1 -free graphs. This means that the only two open cases for chromatic number are when H = P 1 + P 2 or H = P 4 . | 26,747 | 2017-06-27T00:00:00.000 | [
"Mathematics"
] |
Rankin-Selberg local factors modulo $\ell$
After extending the theory of Rankin-Selberg local factors to pairs of $\ell$-modular representations of Whittaker type, of general linear groups over a non-archimedean local field, we study the reduction modulo $\ell$ of $\ell$-adic local factors and their relation to these $\ell$-modular local factors. While the $\ell$-modular local $\gamma$-factor we associate to such a pair turns out to always coincide with the reduction modulo $\ell$ of the $\ell$-adic $\gamma$-factor of any Whittaker lifts of this pair, the local $L$-factor exhibits a more interesting behaviour; always dividing the reduction modulo-$\ell$ of the $\ell$-adic $L$-factor of any Whittaker lifts, but with the possibility of a strict division occurring. In our main results, we completely describe $\ell$-modular $L$-factors in the generic case. We obtain two simple to state nice formulae: Let $\pi,\pi'$ be generic $\ell$-modular representations; then, writing $\pi_b,\pi'_b$ for their banal parts, we have \[L(X,\pi,\pi')=L(X,\pi_b,\pi_b').\] Using this formula, we obtain the inductivity relations for local factors of generic representations. Secondly, we show that \[L(X,\pi,\pi')=\mathbf{GCD}(r_{\ell}(L(X,\tau,\tau'))),\] where the divisor is over all integral generic $\ell$-adic representations $\tau$ and $\tau'$ which contain $\pi$ and $\pi'$, respectively, as subquotients after reduction modulo $\ell$.
Introduction
Let F be a locally compact non-archimedean local field of residual characteristic p and residual cardinality q, and let R be an algebraically closed field of characteristic ℓ prime to p. In this article, following Jacquet-Piatetskii-Shapiro-Shalika in [7] for complex representations, we associate local Rankin-Selberg integrals to pairs of R-representations of Whittaker type ρ and ρ ′ of GL n (F ) and GL m (F ), and show that they define L-factors L(ρ, ρ ′ , X) and satisfy a functional equation defining local γ-factors.
In particular, we define local factors for ℓ-modular representations. The theory of ℓ-modular representations of GL n (F ) was developed by Vignéras in [15], culminating in her ℓ-modular local Langlands correspondence for GL n (F ), c.f. [18], which is characterised initially on supercuspidal ℓ-modular representations by compatibility with the ℓ-adic local Langlands correspondence. The possibility of characterising such a correspondence with ℓ-modular invariants forms part of the motivation for this work. Indeed, already for GL 2 (F ) this is an interesting question, answered in this special case by Vignéras in [17].
We show that an L-factor attached to ℓ-adic representations of Whittaker type is equal to the inverse of a polynomial with coefficients in Z ℓ , allowing us to define a natural reduction modulo-ℓ map on the set of ℓ-adic L-factors. Furthermore, for ℓ-modular representations π and π ′ of Whittaker type of GL n (F ) and GL m (F ), there exist ℓ-adic representations τ and τ ′ of Whittaker type of GL n (F ) and GL m (F ) which stabilise natural Z ℓ -lattices Λ and Λ ′ in their respective spaces such that the ℓ-modular representations induced by the actions of τ and τ ′ on Λ ⊗ Z ℓ F ℓ and Λ ′ ⊗ Z ℓ F ℓ are isomorphic to π and π ′ . Our first main result is a comparison between the L-factors and local γ-factors defined by these two reduction modulo-ℓ maps. Theorem.
2. Let θ be an ℓ-adic character of F . The local γ-factor associated to L(π, π ′ , X) and to the reduction modulo-ℓ of θ is equal to the reduction modulo-ℓ of the local γ-factor associated to L(τ, τ ′ , X) and θ.
A particularly interesting case is the L-factor associated to a pair of irreducible cuspidal representations of GL n (F ). For R-representations ρ and ρ ′ of GL n (F ), we write n(ρ, ρ ′ ) for the number of unramified characters χ of GL n (F ) such that ρ ≃ χ ⊗ (ρ ′ ) ∨ . Let τ and τ ′ be integral cuspidal ℓ-adic representations of GL n (F ) and let π and π ′ denote their reductions modulo-ℓ. In our second main result we examine the L-factor L(π, π ′ , X). Theorem.
This work further develops the theory of ℓ-modular local L-factors of Mínguez in [10]. In particular, we use his results on Tate L-factors modulo-ℓ. Recently, Moss in [13] has studied Lfactors attached to representations of GL n (F )× GL 1 (F ) in a more general setting, and has given partial results concerning the GL n (F )×GL n−1 (F ) convolution in [14]. In a further investigation, we intend to study the local factors associated to generic segments, in terms of the local factors associated to cuspidal representations, as well as the inductivity relation satisfied by the local factors.
Preliminaries
Before embarking on the study of local L-factors in positive characteristic, we introduce the basic theory and background on representations of the general linear group. In particular, starting with results given in the standard reference [15], we show how integration behaves with respect to group decompositions. Indeed, this deserves checking as not all formulae follow from mimicking the proofs in the characteristic zero setting, due to the presence of compact open subgroups of measure zero. Additionally, we review the theory of ℓ-adic and ℓ-modular representations of Whittaker type and reduction modulo-ℓ, drawing on results originally in [16], but our exposition will be influenced by the recent generalisation to inner forms of general linear groups in [11].
Notations
Let F be a locally compact non-archimedean local field of residual characteristic p with absolute value | |. Let o denote the ring of integers in F , p = ̟o the unique maximal ideal of o, and q the cardinality of k = o/p.
Let R be a commutative ring with identity of characteristic ℓ not equal to p. If R contains a square root of q, we fix such a choice q 1/2 .
Let M n,m = Mat(n, m, F ), M n = Mat(n, n, F ), η be the row vector (0, . . . , 0, 1) ∈ M 1,n , G n = GL(n, F ). We write ν for the character | | • det. Let G k n = {g ∈ G n , |g| = q −k } (and more generally X k = X ∩ G k n for X ⊂ G n ), B n the Borel subgroup of upper triangular matrices, A n the diagonal torus, N n the unipotent radical of B n .
We fix a character θ from (F, +) to R × and, by abuse of notation, we will denote by θ the character If λ is a partition of n, P λ is the standard parabolic subgroup of G n attached to it, M λ the standard Levi factor of P λ , and N λ its unipotent radical. If t + r = n, we let and H t,r = G t U t,r . By restriction, θ defines a character of U t,r . We let P n = H n−1,1 denote the mirabolic subgroup of G n .
We denote by w n the antidiagonal matrix of G n with ones on the second diagonal, and if n = r+t, we denote by w t,r the matrix diag(I t , w r ). Notice that our notations are different from those of [8] for U r,t , H t,r , and w t,r .
If φ ∈ C ∞ c (F n ), we denote by φ its Fourier transform with respect to the θ-self-dual R-Haar measure dx on F n satisfying dx(o) = q −s/2 , where s satisfies that θ | p s is trivial, but θ | p s−1 is not.
For G a locally profinite group, we let R R (G) denote the abelian category of smooth Rrepresentations of G. All R-representations henceforth considered are assumed to be smooth. For π an R-representation with central character, for example an R-representation of G n parabolically induced from an irreducible R-representation, we denote its central character by c π .
Let Q ℓ be an algebraic closure of the ℓ-adic numbers, Z ℓ its ring of integers, and F ℓ its residue field which is an algebraic closure of the finite field of ℓ elements. By an ℓ-adic representation of G we mean a representation of G on a Q ℓ -vector space, and by an ℓ-modular representation of G we mean a representation of G on a F ℓ -vector space. For H a closed subgroup of G, we write Ind G H for the functor of normalised smooth induction from R R (H) to R R (G), and write ind G H for the functor of normalised smooth induction with compact support.
We assume that our choice of square roots of q in F ℓ and Q ℓ are compatible; in the sense that the former is the reduction modulo-ℓ of the latter, which is chosen in Z ℓ .
R-Haar Measures
Let R be a commutative ring with identity of characteristic ℓ and let G be a locally profinite group which admits a compact open subgroup of pro-order invertible in R. We let f is locally constant and compactly supported}, (we sometimes write this, more simply, as C ∞ c (G) according to the context). A left (resp. right ) R-Haar measure on G is a non-zero linear form on C ∞ c (G, R) which is invariant under left (resp. right) translation by G. If µ is a left (or right) R-Haar measure on G and f ∈ C ∞ c (G, R), we write By [15, I 2.4], for each compact open subgroup K of G of pro-order invertible in R there exists a unique left R-Haar measure µ such that µ(K) = 1. The volume µ(K ′ ) = µ(1 K ′ ) of a compact open subgroup K ′ of G is equal to zero if and only if the pro-order of K ′ is equal to zero in R.
In the present work, the modulus character of G is the unique character δ G : G → R × such that, if µ is a left R-Haar measure on G, δ G µ is a right R-Haar measure on G. More generally, if H is a closed subgroup of G, we let δ = δ −1 G | H δ H , and be the space of functions from G to R, fixed on the right by a compact open subgroup of G, compactly supported modulo H, and which transform by δ under H on the left (we sometimes write this as C ∞ c (H\G, δ)).
for dh a right R-Haar measure on H. It is proved in [15, I 2.8] that the map f → f H is surjective, and that there is a unique, up to an invertible scalar, non-zero linear form d H\G g on C ∞ c (H\G, δ, R), which is right invariant under G. We call such a non-zero linear form on C ∞ c (H\G, δ, R) a δ-quasi-invariant quotient measure on H\G and, for f ∈ C ∞ c (H\G, δ, R), we write For the remainder of this section, let G denote a unimodular locally profinite group. Suppose that B is a closed subgroup of G, K is a compact open subgroup of G such that G = BK, and K 1 is a normal compact open subgroup of K with pro-order prime to ℓ.
Lemma 2.2. Let dg be an R-Haar measure on G. There exist a right R-Haar measure db on B and a right K-invariant measure dk on K ∩ B\K such that, for all f ∈ C ∞ c (G, R), we have Proof. We observe first that the map φ → φ | K is a vector space isomorphism between C ∞ c (B\G, δ B , R) and C ∞ c ((K ∩ B)\K, R). It is injective because G = BK. To show surjectivity, we recall that the characteristic functions 1 (K∩B)kU , with U a compact subgroup of K of pro-order invertible in R and k ∈ K, span C ∞ c ((K ∩ B)\K, R). Moreover, 1 B kU belongs to C ∞ c (B\G, δ B , R), and a computation shows Remark 2.3. Let K n = GL n (o F ). By the Iwasawa decomposition, we have G n = B n K n . Let µ Gn be an R-Haar measure on G n . If ℓ = 0, or more generally ℓ ∤ q − 1, then, for all f ∈ C ∞ c (G n , R), we have for good choices of a left R-Haar measure db on B n and an R-Haar measure dk on K n . As noticed by Mínguez in [10], this is no longer true in general. More precisely, it is not true when ℓ | q − 1 as the restriction of an R-Haar measure on K to C ∞ c ((K n ∩ B n )\K n ) is zero. That is why we use a right invariant measure on K ∩ B\K in Lemma 2.2.
Let K be a compact group, K 1 an open subgroup of K of pro-order invertible in R and P be a closed subgroup of K. Proof. To prove this, we introduce the R-Haar measure λ : f → µ(f P ) on K. By computation, as in the proof of Lemma 2.2, we have 1 P sK 1 = dp(P ∩ sK 1 s −1 )1 P sK 1 = dp(P ∩ K 1 )1 P sK 1 , for s in K. In particular, with t = 1/dp(P ∩ K 1 ), we have This implies where t ′ = tλ(K 1 ) ∈ R * , and the result follows.
Remark 2.5. An example of this we need is when K = K n , K 1 = K n,1 the pro-p unipotent radical of K n , P = P n ∩ K n , and R is of positive characteristic ℓ not equal to p. In this case, Let Q, L and N be closed subgroups of G such that Q = LN and L normalises N . Suppose that there exists a compact open subgroup K 1 of G of pro-order invertible in R such that Let dl be an R-Haar measure on L and dn be an R-Haar measure on N .
Lemma 2.6. Let f ∈ C ∞ c (Q, R). There exists a unique right R-Haar measure dq on Q such that Proof. As L normalises N , we see that L N f (nl)dldn is a right Q-invariant linear form on C ∞ c (Q, R). But as Q ∩ K 1 = (N ∩ K 1 )(M ∩ K 1 ), it is easy to see that L N 1 Q∩K 1 (nl)dldn is non-zero.
Remark 2.7. A typical instance is when G = G n , Q = LN is a standard parabolic subgroup of G n , and K 1 = K n,1 .
We have the following corollary to Lemmata 2.2 and 2.6.
Corollary 2.8. Let dg be an R-Haar measure on G. There exist an R-Haar measure da on A n , and a right K n -invariant measure dk on (K n ∩ B n )\K n such that, for all f ∈ C ∞ c (N n \G n , R), we have From Iwasawa decomposition, we also have G n = P n Z n K n . We use the following integration formula, which is proved in a similar fashion.
Corollary 2.9. Let dg be an R-Haar measure on G. There exist an R-Haar measure dz on Z n , a δ-quasi-invariant quotient measure dp on N n \P n , and a right K n -invariant measure dk on Henceforth, equalities involving integrals will be true only up to the correct normalisation of measures.
Derivatives
Henceforth, we suppose that R is an algebraically closed field. Following [2], we define the following exact functors: , extension by the trivial representation twisted by ν 2. The identity functor 1 : We recall the classification of irreducible R-representations of P n .
Let π be an R-representation of G n . The zeroth derivative π (0) of π is π. Let τ = π | Pn and set Lemma 2.12 ([1]). Let π be an R-representation of finite length. Then the dimension of π (n) is finite and equal to the dimension of Hom Nn (π, θ).
The derivatives of a product are given by the Leibniz rule. ). Suppose π is an R-representation of G n and ρ is an R-representation of G m , then (π × ρ) (k) has a filtration with successive quotients π (i) × ρ (k−i) , for 0 i k.
Finally, we will use several times the following proposition.
Proposition 2.14 ([2, Proposition 3.7]). Let ρ and ρ ′ be R-representations of G n and τ and τ ′ be R-representations of P m , we have
Parabolic induction, integral structures and reduction modulo-ℓ
Let Q be a parabolic subgroup of G n with Levi factor L. We write i Gn Q for the functor of normalised parabolic induction from R R (L) to R R (G n ). If τ = π 1 ⊗ π 2 ⊗ · · · ⊗ π r is a smooth Rrepresentation of L (m 1 ,...,mr) with i m i = n, we will use the product notation π 1 × π 2 × · · · × π r for the induced representation i Gn P (m 1 ,...,mr ) (τ ). An R-representation of G n is called cuspidal if it is irreducible and it does not appear as a subrepresentation of any parabolically induced representation.
An ℓ-adic representation (π, V ) of G is called integral if it has finite length, and if V contains a G-stable Z ℓ -lattice Λ. Such a lattice is called an integral structure in π. A character is integral if and only if it takes values in Z ℓ . By [15,II 4.12], a cuspidal representation is integral if and only if its central character is integral.
If π is an integral ℓ-adic representation with integral structure Λ, then π defines an ℓ-modular representation on the space Λ ⊗ Z ℓ F ℓ . By the Brauer-Nesbitt principle [19,Theorem 1], the semisimplification, in the Grothendieck group of finite length ℓ-modular representations, of (π, Λ ⊗ Z ℓ F ℓ ) is independent of the choice of integral structure in π and we call this semisimple representation r ℓ (π) the reduction modulo-ℓ of π. We say that an ℓ-modular representation π lifts to an integral ℓ-adic representation τ if r ℓ (τ ) ≃ π, we will only really use this notion of lift when π is irreducible.
Let H be a closed subgroup of G, σ be an integral ℓ-adic representation of H, and Λ be an integral structure in σ. By [15, I 9
Representations of Whittaker type
Before defining representations of Whittaker type we recall that the irreducible R-representations of G n satisfying Hom Nn (π, θ) = 0 are called generic.
ρ are said to be equivalent if they have the same length, and ν a ρ ≃ ν a ′ ρ ′ . Hence, as noticed in [11, 7.2], the segment [a, b] ρ identifies with the cuspidal pair and the equivalence relation on segments is the restriction of the classical isomorphism equivalence relation on cuspidal pairs. To such a segment ∆, in [11, Definition 7.5] the authors associate a certain quotient L(∆) of ν a ρ × ν a+1 ρ × · · · × ν b ρ. The representation L(∆) in fact determines ∆, as its normalised Jacquet module with respect to the opposite of N (m,...,m) is equal to r ∆ = (M (m,...,m) , ν a ρ ⊗ ν a+1 ρ ⊗ · · · ⊗ ν b ρ) according to [11,Lemma 7.14]. The conclusion of this is that the objects ∆, L(∆), and r ∆ determine one another, and hence we call L(∆) a segment and, by abuse of notation, we write ∆ for L(∆), in order to lighten notations. We say that ∆ precedes ∆ ′ if we can extract from the sequence (ν a ρ, . . . , a subsequence which is a segment of length strictly larger than both the length of ∆ and the length of ∆ ′ . We say ∆ and ∆ ′ are linked if ∆ precedes ∆ ′ or ∆ ′ precedes ∆. Let ρ be a cuspidal R-representation of G m . In [11], a positive integer e(ρ) is attached to ρ, For k ∈ N, we denote by St(k, ρ) the normalised generalised Steinberg representation associated with ρ, i.e. the unique generic subquotient of ν (1−k)/2 ρ×· · ·×ν (k−1)/2 ρ. By [11,Remarque 8.14], the representation St(k, ρ) is equal to the segment ∆ associated to the sequence [(1 − k)/2, (k − 1)/2] ρ if and only if k < e(ρ). In this case, we say that ∆ = St(k, ρ) is a generic segment. Note that, our notation St(k, ρ) differs from that used in [11] and [16] (more precisely, our St(k, ρ) corresponds to St(k, ν (1−k)/2 ρ) in those references).
We will study L-factors of representations of Whittaker type.
Let π be a representation of Whittaker type. By Lemmata 2.12 and 2.13, the space Hom Nn (π, θ) is of dimension 1, and we denote by W (π, θ) the Whittaker model of π, i.e. W (π, θ) denotes the image of π in Ind Gn Nn (θ). Note that, a representation of Whittaker type may not be irreducible, however, it is of finite length. In fact, thanks to the results of Zelevinsky (c.f. [20]) in the ℓ-adic setting, and by [16,Theorem 5.7] (a more detailed proof of which can be found in [11, Theorem 9.10 and Corollary 9.12]) in the ℓ-modular setting, the irreducible representations of Whittaker type of G n are exactly the generic representations.
Remark 2.16. According to [16,Theorem V.7], if π = ∆ 1 × · · · × ∆ t is a representation of Whittaker type of G n , then π is irreducible if and only if the segments ∆ i and ∆ j are unlinked, for all i, j ∈ {1, . . . t} with i = j.
If π is a smooth representation of G n , we denote by π the representation g → π( t g −1 ) of G n . Let τ be an ℓ-adic irreducible representation of G n , then τ ≃ τ ∨ , by [6]. Hence when ∆ is an ℓ-modular generic segment of G n , it lifts to an ℓ-adic segment D according to the discussion before Definition 2.15, and as D ≃ D ∨ , we deduce by reduction modulo-ℓ, c.f. [15, I 9.7], that ∆ ≃ ∆ ∨ . If π = ∆ 1 × · · · × ∆ t is a representation of G n of Whittaker type, we have π = ∆ t × · · · × ∆ 1 = ∆ ∨ t × · · · × ∆ ∨ 1 , and we deduce that π is also of Whittaker type. In order to state the functional equation for L-factors of representations of Whittaker type, we will need the following lemma: Lemma 2.17. Let π be a representation of Whittaker type of G n , then π is of Whittaker type and the map W → W , where W (g) = W (w n t g −1 ), is an R-vector space isomorphism between W (π, θ) and W ( π, θ −1 ).
Rankin-Selberg local factors for representations of Whittaker type
The theory of derivatives ([1] and [2]) being valid in positive characteristic (see Subsection 2.3) and equipped with the theory of R-Haar measures (see Subsection 2.2), means we can now safely follow [7] to define L-factors and ε-factors. However, as we do not have a Langlands' quotient theorem at our disposal, which would allow us to associate to an irreducible representation of G n , a unique representation with an injective Whittaker model lying above it, we restrict our attention to representations of Whittaker type (see Subsection 2.5).
Definition of the L-factors
We first recall the asymptotics of Whittaker functions obtained in [8, Proposition 2.2]. We write Z i for subgroup {diag(tI i , I n−i ), t ∈ F × } of G n . The diagonal torus A n of G n is the direct product Z 1 × · · · × Z n .
Lemma 3.1. Let π be a representation of Whittaker type of G n . For each i between 1 and n − 1, there is a finite family X i (π) of characters of Z i , such that if W is a Whittaker function in W (π, θ), then its restriction W (z 1 , . . . , z n−1 ) to A n−1 = Z 1 × · · · × Z n−1 is a sum of functions of the form The proof of Jacquet-Piatetskii-Shapiro-Shalika in [op. cit.] applies mutatis mutandis for ℓmodular representations.
Remark 3.2. For 1 ≤ i ≤ n − 1, we can take X i (π) to be the family of central characters (restricted to Z i ) of the irreducible components of the (non-normalised) Jacquet module π N i,n−i . We denote by X n (π) the singleton {ω π }. We denote by E i (π), the family of central characters (restricted to Z i ) of the irreducible components of the normalised Jacquet module π N i,n−i , for 1 ≤ i ≤ n−1, and let E n (π) = X n (π). The family E i (π) is obtained from X i (π) by multiplication by an unramified character of Z i , in particular, if R = Q ℓ , the characters in E i (π) are integral if and only if those in X i (π) are integral.
Proposition 3.3. Let π be a representation of Whittaker type of G n , and π ′ a representation of Whittaker type of G m , for m ≤ n.
Then for every k ∈ Z, the coefficient is well-defined, and vanishes for k << 0. When m = n − 1, we will simply write c k (W, W ′ ) for c k (W, W ′ ; 0). • The case n = m. Under the same notation as Proposition 3.3, we define the following formal Laurent series • The case m ≤ n − 1. Under the same notation as Proposition 3.3, we define the following formal Laurent series When m = n − 1, we will simply write I(W, W ′ , X) for I(W, W ′ , X; 0).
The L-factors we study are defined by the following theorem.
Theorem 3.5. Let π be a representation of Whittaker type of G n , and π ′ a representation of Whittaker type of G m , for 1 ≤ m ≤ n.
• If n = m, the R-submodule spanned by the Laurent series I(W, W ′ , φ, X) as W varies in W (π, θ), W ′ varies in W (π ′ , θ −1 ), and φ varies in C ∞ c (F n ), is a fractional ideal of R[X ±1 ], and it has a unique generator which is an Euler factor L(π, π ′ , X).
• If 1 ≤ m ≤ n−1, fix j between 0 and n−m−1. The R-submodule spanned by the Laurent series I(W, W ′ , X; j) as W varies in W (π, θ), W ′ varies in W (π ′ , θ −1 ), is a fractional ideal of R[X ±1 ], is independent of j, and it has a unique generator which is an Euler factor L(π, π ′ , X).
Proof. We treat the case m ≤ n − 2, the case m ≥ n − 1 is totally similar. First we want to prove that our formal series in fact belong to R(X). In this case, the coefficient c k (W, W ′ ; j) is equal to which, by smoothness of W and W ′ , we can write as a finite sum it is thus enough to check that belongs to R(X). Following the proof of [7] we see that, by Lemma 3.1, they belong to 1 P (X) R[X ±1 ], where P (X) is a suitable power of the product over the unramified characters χ i 's in E i (π) for 1 ≤ i ≤ n and the unramified characters µ j 's in E j (π ′ ) for 1 ≤ j ≤ m of the Tate L factors L(χ i µ j , X). By [10], this factor is equal to 1 if R = F ℓ , and q ≡ 1[ℓ], and is equal to 1/(1 − χ i µ j (̟)X) otherwise. The other properties follow immediately from [7].
The proof of Theorem 3.5 implies the following corollary.
Proof. By our assertion at the end of the proof of Theorem 3.5, the polynomial Q = 1/L(π, π ′ , X) divides (in R[X ±1 ], hence in R[X]) a power of the product P of the polynomials 1/L(χ i µ j , X) over the set of unramified characters χ i in E i (π) for 1 ≤ i ≤ n and unramified characters µ j 's in E j (π ′ ) for 1 ≤ j ≤ m. We already noticed that P must be 1 if R = F ℓ and q ≡ 1[ℓ], which proves our assertion in this case. In general, P belongs to Z ℓ [X], with constant term 1, as so do the polynomials 1/L(χ i µ j , X). Let
The functional equation
We have defined Rankin-Selberg L-factors of pairs of representations of Whittaker type, we now need to show that these satisfy a local functional equation. By identifying F n with M 1,n , the space C ∞ c (F n ) provides a smooth representation ρ of G n , with G n acting by right translation. We also denote by ρ the action by right translation of G n on any space of functions. For a ∈ R[X ±1 ], we denote by χ a the character in Hom(G n , R[X ±1 ] × ) defined as: g → a v(det(g)) , in particular ν = χ q 1/2 is the absolute value of the determinant.
Let π be a representation of Whittaker type of G n , and π ′ be a representation of Whittaker type of G m . If m = n, we write for the space of R-linear maps, L : for the space of R-linear maps, L : π × π → R[X ±1 ], satisfying for all W ∈ W (π, θ), W ′ ∈ W (π ′ , θ −1 ), h ∈ G k m , and u ∈ U m+1,n−m−1 . We denote by C ∞ c,0 (F n ) the subspace of C ∞ c (F n ) which is the kernel of the evaluation map Ev 0 : φ → φ(0).
The proof in the complex case of Jacquet-Piatetskii-Shapiro-Shalika in [7] is long. Some results obtained in the complex case [op. cit.] using invariant distributions can be obtained quicker using derivatives which is how we proceed.
We have an exact sequence of representations of G n We tensor this sequence by π ⊗ π ′ and, as π ⊗ π ′ is flat as an R-vector space, we obtain By considering central characters, it is clear that the space Hom Gn (π ⊗ π ′ , χ X ) = 0. Applying Hom Gn ( , χ X ) which is left exact, we obtain that Hom is of rank at most 1. Now, we have an isomorphism between C ∞ c,0 (F n ) with G n acting via right translation and ind Gn Pn (δ
1/2
Pn ), hence we have by Frobenius reciprocity. Now, by the theory of derivatives (see Subsection 2.3), π and π ′ , as P n -modules, are of finite length, with irreducible subquotients of the form (Φ + ) k Ψ + (ρ), for ρ an irreducible representation of G n−k−1 and k between 0 and n − 1. Moreover, (Φ + ) n−1 Ψ + (1) appears with multiplicity 1, as a submodule. By Proposition 2.14, the space is zero, except when j = k, in which case it is isomorphic to Hom G k (ρ ⊗ ρ ′ , χ X ν −1 )). If ρ and ρ ′ are irreducible and k ≥ 1, by considering central characters, the space ]. This ends the proof in the case n = m, as R[X ±1 ] is principal.
We now consider the case m ≤ n − 1. Again, the space D(π, π ′ ) is nonzero as it contains the map (W, W ′ ) → I(W, W ′ , X)/L(π, π ′ , X), we will show that it injects into R[X ±1 ], which will prove the statement. Let L be in D(π, π ′ ), by definition, the map L factors through τ × π ′ , where τ is the quotient of π by its subspace spanned by π(u)W − θ(u)W for u ∈ U m+1,n−m−1 and W ∈ W (π, θ). Hence τ is nothing other than the space of (Φ + ) n−m−1 (π). Taking into account the normalisation in the definition of the derivatives, we obtain the following injection: We next prove the following lemma.
is a P m+1 -module of finite length, and as π is of Whittaker type, it contains (Φ + ) m−1 Ψ + (1) as a submodule, the latter's multiplicity being 1 as a composition factor. By the theory of derivatives, all of the other irreducible subquotients are either of the form Ψ + (σ), with σ an irreducible representation of G m , or of the form Φ + (σ), with σ an irreducible representation of P m of the form Proof of the Lemma. As π ′ is of Whittaker type, its restriction to P m is of finite length, with irreducible subquotients of the form (Φ + ) m−k−1 Ψ + (µ), for µ an irreducible representation of G k . Moreover, the representation (Φ + ) m−k−1 Ψ + (1) occurs with multiplicity 1, and is a submodule. If σ is an irreducible representation of P m of the form (Φ + ) m−j−1 Ψ + (σ ′ ), with σ ′ a representation of G j , for some j ≥ 1, then is zero by Proposition 2.14 if j = k (in particular if k = 1). Moreover, if k = j, by the same Proposition, we have which is zero, by considering central characters. Hence we have proved the first part of the lemma. If σ = (Φ + ) m−1 Ψ + (1), reasoning as above, we see at once that , the latter space being isomorphic to R[X ±1 ], and this completes the proof of the lemma.
All in all, we deduce that , and this ends the proof of the proposition.
Remark 3.10. Notice that all the injections defined in the proof of Proposition 3.7 are in fact isomorphisms. This could be viewed directly, or we can simply see that after composing all of them we obtain an isomorphism.
We are now in a position to state the local functional equation and define the Rankin-Selberg ε-factor of a pair of representations of Whittaker type. We recall that an invertible element of R[X ±1 ] is an element of the form cX k , for c in R × , and k in Z.
Corollary 3.11. Let π be a representation of Whittaker type of G n , and π ′ be a representation of Whittaker type of G m .
Proof. It is a consequence of Proposition 3.7 if n = m, and if m ≤ n − 1 with j = 0, as the functionals on both sides of the equality belong, respectively, to D(π, π ′ , C ∞ c (F n )) and D(π, π ′ ). For j = 0, it follows from the case j = 0 as in the complex setting, c.f. [8].
Compatibility with reduction modulo-ℓ
Let π = ∆ 1 × · · · × ∆ t be an ℓ-modular representation of Whittaker type of G n . As in Section 2.5, for 1 i t, we can choose integral ℓ-adic segments D i and integral structures Λ i in D i such that Λ i ⊗ Z ℓ F ℓ ≃ ∆ i . Moreover, the ℓ-adic representation τ = D 1 × · · · × D t is an integral representation of Whittaker type, and Λ = Λ 1 × · · · × Λ t is an integral structure in τ satisfying We denote by W e (τ, θ) the functions in W (τ, θ) with integral values. We will need the following result concerning integral structures in ℓ-adic representations of Whittaker type.
Proof. If τ is generic, it is shown in [19,Theorem 2] that W e (τ, θ) is an integral structure in W (τ, θ), such that W (π, r ℓ (θ)) = W e (τ, θ)⊗ Z ℓ F ℓ . We use this result together with the properties of parabolic induction with respect to lattices, and a result from [4] about the explicit description of Whittaker functionals on induced representations.
A function f in τ = W (D 1 , θ) × · · · × W (D t , θ), by defintion of parabolic induction, is a map from G n to W (D, θ), i.e. for g ∈ G n , f (g) ∈ W (D, θ) identifies with a map from M to Q ℓ , so we can view f as a map of two variables from G n × M to Q ℓ , and similarly, we can view the elements in π = W (∆ 1 , θ) × · · · × W (∆ t , θ) as maps from G n × M to F ℓ . In [4,Corollary 2.3], it is shown (for minimal parabolics, but their method works for general parabolics), that there is a Weyl element w in G n , such that if one takes f ∈ τ , then there is a compact open subgroup U f of U which satisfies that for any compact subgroup U ′ of U containing U f , the integral This assertion is also true for π with the same proof, for the same choice of w, we write µ(h) = U h(wu, 1 M )r ℓ (θ) −1 (u)du for h ∈ π. Both λ and µ are nonzero Whittaker functionals on τ and π, respectively, and λ sends L to Z ℓ for a correct normalisation of du. We can moreover suppose, for correct normalisations of the ℓ-adic and the ℓ-modular Haar measures du, that µ = r ℓ (λ). The surjective map w : τ → W (τ, θ) which takes f to W f , defined by W f (g) = λ(τ (g)f ), sends L to W e (τ, θ). Similarly, for h ∈ π, if we set W h (g) = µ(π(g)h), then the map π → W (π, r ℓ (θ)), taking h to W h , is surjective, and we have r ℓ (W f ) = W r ℓ (f ) . From this, we obtain that W = w(L) is a sublattice of W e (τ, θ) (see [15], I 9.3), and r ℓ (W) = W (π, θ).
Let π and π ′ are integral ℓ-adic representations of Whittaker type of G n and G m . By Corollary 3.6, we already know that L(π, π ′ , X) is the inverse of a polynomial with integral coefficients, even without the integrality assumption. With the integrality assumption, we now consider the associated ε-factor.
Lemma 3.13. The factor ε(π, π ′ , θ, X) is of the form cX k , for c a unit in Z ℓ .
If P is an element of Z ℓ [X] with nonzero reduction modulo-ℓ, we write r ℓ (P −1 ) for (r ℓ (P )) −1 . We now prove our first main result.
Let π and π ′ be ℓ-modular representations of Whittaker type of G n and G m . Let τ and τ ′ be ℓ-adic representations of Whittaker type τ and τ ′ of G n and G m with integral structures W e (τ, θ) and W e (τ ′ , θ), as in Lemma 3.12.
Proof. We give the proof for m ≤ n − 1, and j = 0, the other cases being similar. By definition, one can write L(π, π ′ , X) as a finite sum i I(W i , W ′ i , X), for W i ∈ W (π, θ) and W ′ i ∈ W (π ′ , θ −1 ). By Theorem 3.12, there are Whittaker functions W i,e ∈ W e (τ, µ) and . By Remark 2.1, we have L(π, π ′ , X) = r ℓ ( i I(W i,e , W ′ i,e , X)). As i I(W i,e , W ′ i,e , X) belongs to we obtain that L(π, π ′ , X) belongs to r ℓ (L(τ, τ ′ , X))F ℓ [X ±1 ]. This proves the first assertion. The equality for γ factors follows the functional equation, and Remark 2.1.
L-factors of pairs of cuspidal representations.
We introduce the terminology of [5] on exceptional poles of Rankin-Selberg L-functions. We will not, however, make full use of this machinery in the following, as we will specialise to L-factors of pairs of cuspidal representations. In this case, following [7] or [5] is completely equivalent. However, for a further inquiry of L-factors of generic segments, we will use the full theory. As we intend to study in more detail the L-factors of representations of Whittaker type in the near future, we already introduce the terminology in this section.
Exceptional poles
We now recall the notion of exceptional pole, due to Cogdell and Piatetski-Shapiro ( [5]). Let R be an algebraically closed field. Let π and π ′ be a pair of R-representations of Whittaker type of G n , W ∈ W (π, θ) and W ′ ∈ W (π ′ , θ −1 ), and φ ∈ C ∞ c (F n ). Suppose that x as is a pole of order d of L(π, π ′ , X). As R[X] is a principal ideal domain, we can write the partial fraction expansion of I(W, W ′ , φ, X) at x as The map T x is a non-zero trilinear from W (π, θ) × W (π ′ , θ −1 ) × C ∞ c (F n ) to R.
Definition 4.1. Let π and π ′ be R-representations of Whittaker type of G n and G m , and let W and W ′ belong to W (π, θ) and W (π ′ , θ −1 ), respectively.
• If n = m, we say that a pole x in R of the rational map L(π, π ′ , X) is an exceptional pole if and only if the trilinear form T x , vanishes on W (π, θ) × W (π ′ , θ −1 ) × C ∞ c,0 (F n ), i.e. admits a factorisation of the form T x (W, W ′ , φ) = B x (W, W ′ )φ(0).
• If m < n, we say the local factor L(π, π ′ , X) has no exceptional poles.
Thanks to a change of variable in I(W, W ′ , φ, X), one sees that if x is an exceptional pole of L(π, π ′ , X), then B x satisfies for any g in G n . We deduce the following property.
Proposition 4.2. If π and π ′ are irreducible R-representations of Whittaker type of G n and G m , i.e. generic R-representations, and x is an exceptional pole of L(π, π ′ , X), then π ′ ∨ ≃ χ −1 x π.
Remark 4.3. When π and π ′ are complex or ℓ-adic representations, the converse of Proposition 4.2 is true, and is proved in [9,Proposition 4.6]. This is no longer the case for ℓ-modular representations, in cases when q n ≡ 1[ℓ], as we shall see later. In fact, we can already see this when q ≡ 1[ℓ], as L(π, π ′ , X) is always equal to 1 in this case. This makes the case q n ≡ 1[ℓ] pathological for the computation of L factors of pairs.
We now introduce an auxiliary Euler factor.
As before, we have the following result.
Remark 4.6. When π is cuspidal, as W | Pn has compact support modulo N n (hence W | G n−1 has compact support mod N n−1 ), the Laurent series I (0) (W, W ′ , X) only has finitely many nonzero terms, hence is a Laurent polynomial. In particular, the factor L (0) (W, W ′ , X) is equal to 1.
The following property follows from Corollary 2.9.
Remark 4.8. When π and π ′ are complex or ℓ-adic representations, the factor L (0) (π, π ′ , X) has simple poles, as L(c π c π ′ , X n ) does. This latter assertion is not true for ℓ-modular representations, when n is not prime to ℓ. The first assertion is not true either, as we shall see for example when q n ≡ 1[ℓ]. In any case, we always have L(π, π ′ , X) = L (0) (π, π ′ , X)L (0) (π, π ′ , X), and L (0) (π, π ′ , X) can be expressed in terms of the L-factors of the derivatives of π and π ′ (see [5], at least for generic segments, this fact remains true modulo-ℓ). So in the characteristic zero case, the factor L (0) (π, π ′ , X) is well understood, as it has simple poles, and those are the exceptional poles, which can be determined thanks to Proposition 4.2 and Remark 4.3. In the ℓ-modular setting, we already said that in Remark 4.3 that the converse of Proposition 4.2 is no longer true in general. Moreover, L (0) (π, π ′ , X) might have poles which are not simple anymore. These are the two sources of complications modulo-ℓ, the first being the most problematic.
L-factors of pairs for cuspidal representations
We will study in more detail the L-factors of pairs of cuspidal representations. We will express such factors in terms of the Tate L-factors of the unramified characters fixing these cuspidal representations. Before we can do this, we need to recall the following result of Bernstein in [3] for ℓ-adic representations. Theorem 4.9. Let π and π ′ be irreducible ℓ-adic representations of G n , if π ′ ≃ π ∨ (i.e. Hom Gn (π ⊗ π ′ , Q ℓ ) = {0}), Then we have Hom Pn (π ⊗ π ′ , Q ℓ ) = Hom Gn (π ⊗ π ′ , Q ℓ ), i.e. any P n -invariant bilinear pairing between π and π ′ is in fact G n -invariant.
Let ρ and ρ ′ be cuspidal representations of G n and G m , with m ≤ n. First, we observe that if m < n, as the restriction to P n of any W in W (ρ, θ) has compact support modulo N n (c.f. [17]), the integrals of the form I(W, W ′ , X) for W ′ ∈ W (ρ ′ , θ) are in fact in R[X ±1 ]. In particular, if m < n, then L(ρ, ρ ′ , X) is trivial. Proposition 4.10. If m < n, then L(ρ, ρ ′ , X) is equal to 1.
Hence the interesting case is when n = m. We will use the following ℓ-modular version of Bernstein's result.
Proof. Let τ be a cuspidal ℓ-adic representation of G n with reduction modulo-ℓ equal to π. Any W and W ′ lift to V and V ′ in W e (τ, θ) and W e (τ ∨ , θ −1 ). As C : (V, V ′ ) → Nn\Pn V (p)V ′ (p)dp is P n -invariant, it is G n -invariant by theorem 4.9. But C takes integral values on W e (τ, θ) × W e (τ ∨ , θ −1 ), and r ℓ (C) = B, hence B is G n -invariant.
Let R be Q ℓ or F ℓ , and let R u (G 1 ) denote the set of unramified R-characters of G 1 . Let τ be a integral cuspidal ℓ-adic representation of G n , and π be the reduction modulo-ℓ of τ . Denote by R(τ ) and R(π) the respective cyclic subgroups of R u (G 1 ) fixing τ and π by twisting, and denote by n(τ ) and n(π) their respective orders. We recall, that by looking at the central characters of τ and π, the integers n(τ ) and n(π) both divide n. It follows, from the Bushnell-Kutzko construction of all irreducible cuspidal representations via types given in [15,III 5], that the map r ℓ from R(τ ) to R(π) is surjective, with kernel the ℓ-part R ℓ (τ ) of R(τ ). Hence we can write ℓ dπ = |R(τ )|/|R(π)|, with d π the multiplicity of ℓ as a factor of |R(τ )|, which is independant from τ (c.f. [12,Remark 3.21] for more details about these assertions).
By Proposition 4.2, we know that if x is a pole of L(π, π ′ , X), then it will be of the form x = χ(̟) for some χ ∈ R(π, π ′ ). In this case, we want to compute the order of x as a pole of L(π, π ′ , X). We can suppose that π ′ ≃ π ∨ , and look at the pole at x = 1. | 11,884.6 | 2014-08-22T00:00:00.000 | [
"Mathematics"
] |
Numerical Analysis of a CZTS Solar Cell with MoS2 as a Buffer Layer and Graphene as a Transparent Conducting Oxide Layer for Enhanced Cell Performance
Copper zinc tin sulfide (CZTS) can be considered an important absorber layer material for utilization in thin film solar cell devices because of its non-toxic, earth abundance, and cost-effective properties. In this study, the effect of molybdenum disulfide (MoS2) as a buffer layer on the different parameters of CZTS-based solar cell devices was explored to design a highly efficient solar cell. While graphene is considered a transparent conducting oxide (TCO) layer for the superior quantum efficiency of CZTS thin film solar cells, MoS2 acts as a hole transport layer to offer electron–hole pair separation and an electron blocking layer to prevent recombination at the graphene/CZTS interface. This study proposed and analyzed a competent and economic CZTS solar cell structure (graphene/MoS2/CZTS/Ni) with MoS2 and graphene as the buffer and TCO layers, respectively, using the Solar Cell Capacitance Simulator (SCAPS)-1D. The proposed structure exhibited the following enhanced solar cell performance parameters: open-circuit voltage—0.8521 V, short-circuit current—25.3 mA cm−2, fill factor—84.76%, and efficiency—18.27%.
Introduction
At present, the world's energy demand is greatly dependent on fossil fuels. However, this limited stock of fossil fuels will run out very soon. Moreover, it is not environmentally friendly because it emits greenhouse gases. Hence, if alternative or renewable sources are not explored, the world will face a severe energy crisis in the upcoming future. Renewable energies are clean sources of energy, as they emit no harmful elements. Among the renewable energy sources, solar is the fastest growing because of its easy adoption. Solar cells, commonly noted as PV cells or photovoltaic cells, transform sun-powered energy directly into electricity. There are many forms of solar cells. Silicon (Si) solar cells are today's most widely used cells, as they are available at a reasonable price and can offer a good efficiency of 26.7% against an intrinsic limit of 29% [1,2].
Thin-film solar cells are another type of photovoltaic cell where a very thin layer of semiconductor materials, for instance, copper indium gallium diselenide (CIGS) and cadmium telluride (CdTe), is used. They are flexible and lightweight due to their attractive thickness. Hence, they are more economical than Si solar cells [3]. Though group III-V solar cells show a more attractive efficiency, they have higher manufacturing costs. Moreover, research has been conducted on organic, quantum dots, and perovskite solar cells to offer more efficient solar cells. Thin film solar cells have gained popularity due to their low material consumption. Though CdTe-and CIGS-based solar cells are industrially successful [4], there are some restrictions on using them. Indium in CIGS is a rare material [5] and tellurium in CdTe is not only rare but also mildly toxic [6]. This is why looking for a cost-effective, environment-friendly, naturally abundant material that provides an adequate conversion efficiency absorber layer is of great interest. Ultra-thin copper zinc tin sulfide (Cu 2 ZnSnS 4 ), also known as CZTS, can be a good choice that may fulfill all these requirements. Its components materials are non-toxic and earth-abundant. Furthermore, CZTS-based solar cells are highly efficient and reliable [7].
CZTS-based solar cell shows good optical and electrical properties with an absorption coefficient of 10 4 cm −1 , an electrical resistivity of around 10 −2 ohm cm, and a band gap of 1.54 eV at room temperature [8,9]. According to the Shockley-Queisser limit, CZTS-based solar cells have a theoretical 32.4% conversion efficiency limit [10]. The highest power conversion efficiency of 12.2% was obtained recently for CZTS-based solar cells, which is significantly less than the theoretical limit. Therefore, there is still a huge scope to improve the efficiency and other parameters [11]. A numerical study showed that a solar cell with an Al:ZnO/i-ZnO/CdS/CZTS/Mo structure exhibits an efficiency of 15.84% [12], where Al-ZnO acts as a transparent conducting oxide (TCO) layer, CdS as a buffer layer, and Mo as the back contact. The intrinsic ZnO layer was grown between the buffer and TCO layer to ensure that the defective region of the absorber layer did not suppress the open-circuit voltage [13]. Conversely, ZnO/InSe/CZTS-and i-ZnO/MoS 2 /CZTS/Mo-based structures showed conversion efficiencies of 16.30% [7] and 17.03% [14], respectively, as obtained from simulation analyses. Furthermore, a maximum efficiency of 17.6% was obtained from FTO/ZnO/CdS/CZTS baseline solar cells [15]. Very recently, we numerically investigated and reported a solar cell efficiency of 17.14% for the graphene/ZnO/CZTS/Ni structure [16], which showed further enhancement of the conversion efficiency. In this CZTS solar cell structure, graphene, ZnO, and Ni are employed as a TCO layer, buffer layer, and back contact, respectively. In this current work, we introduced molybdenum disulfide (MoS 2 ), considering their promising features as a buffer layer material for further investigation.
The buffer layer's role in solar PV is to reduce the defects and interfacial imperfections caused by the window layer and to make sure that the absorber layer and the window layer have the right band alignment. From the literature, it was found that cadmium sulfide (CdS) is usually used as the buffer layer material. However, this contains the toxic element cadmium (Cd) and it generates a significant amount of waste through the deposition process [17]. On the other hand, molybdenum disulfide (MoS 2 ) belongs to the two-dimensional transition metal dichalcogenides (TMDCs) group, which exhibits excellent optical (absorption coefficient is 10 5 -10 6 cm −1 ), mechanical, and electronic properties with a suitable bandgap of 1.3 eV. It has attractive optoelectronic properties, such as a tunable work function, an adjustable bandgap with reduced thickness, great mobility of 50 cm 2 V −1 s −1 , and potential interaction with light [18]. Moreover, graphene and MoS 2 composites provide more active sites and improve the conductivity [19]. This feature also motivated us to incorporate MoS 2 as a buffer layer with the TCO layer of graphene. Moreover, graphene is employed over conventional indium tin oxide (ITO) because they are more flexible, transparent, and economical. Furthermore, they provide attractive optical, mechanical, and thermal properties [20]. It has greater transparency of approximately 2.3% light absorption, a higher melting point of approximately 5000 K, and higher thermal conductivity of approximately 10 3 Wm −1 K −1 [21]. In addition, at room temperature, suspended graphene has a higher carrier mobility of 2 × 10 5 cm 2 V −1 s −1 .
In this research work, a highly efficient CZTS solar cell was designed by incorporating graphene (GnP) as a transparent conducting layer and MoS 2 as a buffer layer. A systematic and thorough investigation was conducted using the solar cell simulator named SCAPS-1D. The influences of the buffer layer, thickness variation of CZTS, the doping concentration in CZTS, and thermal stability were analyzed and subsequently discussed in the following subsections for the proposed solar cell structure.
Structure of CZTS Solar Cell
Photons from solar radiation have different energies, where these energies can be turned into electricity by stacking different bandgap materials. The material's bandgap should be big enough to absorb high-energy photons. Therefore, a layer with a bigger gap is used at the front. Usually, two types of solar cell structures are used: heterojunction and multi-junction. Figure 1 shows the proposed structure of a CZTS solar cell structure (GnP/MoS 2 /CZTS/Ni/glass substrate), which is based on a heterojunction solar cell. concentration in CZTS, and thermal stability were analyzed and subsequently discussed in the following subsections for the proposed solar cell structure.
Structure of CZTS Solar Cell
Photons from solar radiation have different energies, where these energies can be turned into electricity by stacking different bandgap materials. The material's bandgap should be big enough to absorb high-energy photons. Therefore, a layer with a bigger gap is used at the front. Usually, two types of solar cell structures are used: heterojunction and multi-junction. Figure 1 shows the proposed structure of a CZTS solar cell structure (GnP/MoS2/CZTS/Ni/glass substrate), which is based on a heterojunction solar cell.
Transparent Conducting Oxide or Window Layer
In optoelectronic and photovoltaic devices, a transparent conducting oxide (TCO) layer constructed of doped metal oxide is usually used. This layer is also known as the window layer. It has the lowest carrier concentration of 10 20 cm −3 , the highest conductivity of larger than 10 3 Scm −1 , and better than 80% transparency [22]. Given its desirable optical and electrical characteristics, high conductivity, and transparency (>90%), graphene was chosen as the window layer in this instance [23]. Moreover, we preferred graphene to the widely used window layer material indium tin oxide (ITO) because ITO needs expensive fabrication methods and it contains the rare element indium (In) [24]. It offers improved thermal stability compared with other traditional conducting oxide layers. Furthermore, the bandgap of graphene can be tuned with the addition of dopants such as boron or copper [25]. In our previous work, we optimized the thickness of graphene as a window layer at 2 nm, which ensured greater transparency [16]. Thus, the thickness of the window layer as 2 nm is deliberately used throughout the simulation of the proposed solar cell structure.
Buffer Layer
To form a p-n junction with the absorber layer, a buffer layer (single or double) is used. For the buffer layer, a more extensive bandgap layer is required to ensure maximal light transmission, little absorption loss, and minimal recombination loss or to convey the most photo-generated carriers possible to the outer circuit. Additionally, it has the ideal thickness to provide low series resistance. The open-circuit voltage (Voc) in solar cells is improved significantly by this buffer layer [26,27]. Here, molybdenum disulfide (MoS2) is selected as the buffer layer material. It was anticipated that the performance of solar cells can be increased by using an n-type semiconductor with a narrow direct bandgap of 1.3 eV compared with the graphene/ZnO/CZTS/Ni structure [16], where MoS2 will be used as
Transparent Conducting Oxide or Window Layer
In optoelectronic and photovoltaic devices, a transparent conducting oxide (TCO) layer constructed of doped metal oxide is usually used. This layer is also known as the window layer. It has the lowest carrier concentration of 10 20 cm −3 , the highest conductivity of larger than 10 3 S cm −1 , and better than 80% transparency [22]. Given its desirable optical and electrical characteristics, high conductivity, and transparency (>90%), graphene was chosen as the window layer in this instance [23]. Moreover, we preferred graphene to the widely used window layer material indium tin oxide (ITO) because ITO needs expensive fabrication methods and it contains the rare element indium (In) [24]. It offers improved thermal stability compared with other traditional conducting oxide layers. Furthermore, the bandgap of graphene can be tuned with the addition of dopants such as boron or copper [25]. In our previous work, we optimized the thickness of graphene as a window layer at 2 nm, which ensured greater transparency [16]. Thus, the thickness of the window layer as 2 nm is deliberately used throughout the simulation of the proposed solar cell structure.
Buffer Layer
To form a p-n junction with the absorber layer, a buffer layer (single or double) is used. For the buffer layer, a more extensive bandgap layer is required to ensure maximal light transmission, little absorption loss, and minimal recombination loss or to convey the most photo-generated carriers possible to the outer circuit. Additionally, it has the ideal thickness to provide low series resistance. The open-circuit voltage (V oc ) in solar cells is improved significantly by this buffer layer [26,27]. Here, molybdenum disulfide (MoS 2 ) is selected as the buffer layer material. It was anticipated that the performance of solar cells can be increased by using an n-type semiconductor with a narrow direct bandgap of 1.3 eV compared with the graphene/ZnO/CZTS/Ni structure [16], where MoS 2 will be used as a buffer layer instead of ZnO. Graphene/MoS 2 composites have overcome the shortcomings of their respective counterparts owing to their beneficial physical or chemical properties. They constitute heterostructures in certain ways, where molybdenum disulfide (MoS 2 ) and graphene have integral physical properties and possess parallel lattice structures. This mitigates the shortcomings of the respective counterparts and optimizes photovoltaic solar cell performance [19].
Absorber Layer
Another crucial and integral component of a solar cell that absorbs energy from natural or artificial light is the absorber layer. An effective absorber layer should absorb the radiation at wavelengths in the visible portion of the electromagnetic spectrum because the majority of the light energy is found here. Copper zinc tin sulfide (Cu 2 ZnSnS 4 ) or CZTS thin film was taken into consideration as an absorber layer in the proposed structure because of its appealing absorption coefficient (>10 4 cm −1 ) and good physical and electrical characteristics (bandgap of approximately 1.4 to 1.6 eV) [28]. They also solely offer an efficiency of more than 20% and are earth-friendly and abundant [25].
Back Contact
The back or rear contact plays a critical function in improving performance metrics and solar cell efficiency. Nickel (Ni) performs better than other materials when used as a back contact. The energy needed to remove electrons from the metal surface is known as the work function, which is 5.15 eV for nickel [29]. The performance of the device can be greatly enhanced by a stable ohmic contact, which can lower the back contact interface recombination. CZTS has a bandgap of 1.54 eV and an electron affinity of 4.5 [7]. As a result, a metal with a greater work function is required for a static ohmic contact, and Ni has just that.
Soda Lime Glass Substrate
Thin-film solar cells also need parts that are made of a substrate. The electron-hole recombination at the grain borders is prevented by the diffusion of Na from the soda-lime glass (SLG) substrate to the absorber layer's grain boundaries. Because of this, alkali metal oxides (Na 2 O and K 2 O) were studied and made in a lab [30]. This SLG substrate, which is smooth and provides the thin-film solar cell mechanical support, is widely utilized for thin-film deposition. It is comparably inexpensive, chemically stable, and very useful in solar cell research.
Mathematical Modeling
In this work, simulation was carried out using SCAPS (version 3.3.0.9). SCAPS-1D (a Solar Cell Capacitance Simulator), developed by Burgleman et al. [31], is used to carry out simulations of solar cell structures; it is a one-dimensional solar cell simulator that enables the simulation of up to seven layers. The SCAPS-1D simulator helps to perform quick simulations and come up with batch calculations. It provides a user-friendly interface and helps to load and save all settings and data easily [32]. SCAPS is designed to simulate practical situations; hence, convergence failure and superficial output may occur when unrealistic parameters are input.
SCAPS usually performs simulations based on three groups of equations. The first one is transport equations for electrons and holes, which can be written as where J n , J p , G n , G p , U n , and U p represent the electron current density, hole current density, the generation rate of electrons, the generation rate of holes, recombination rate of electrons, and recombination rate of holes, respectively. The second one is the Poisson equation, which is represented as where φ(x), p(x), n(x), E o , E r , q, N d , N a , ρ n , and ρ p are the electrostatic potential, hole concentration, electron concentration, vacuum permittivity, relative permittivity, electric charge, charge impurities of donor, charge impurities of the acceptor, electron distribution, and hole distribution, respectively. Finally, the third group of equations contains the drift and diffusion equations, which can be formulated as where D n , µ n , D p , and µ p are the diffusion coefficient of electrons, mobility of electrons, the diffusion coefficient of holes, and mobility of holes, respectively [32,33]. All these equations are used to calculate solar cell performance parameters via the SCAPS simulator.
Numerical Simulation and Device Modeling
Numerical modeling and simulation are required before the production process to ensure the performance and stability of the proposed cell. The simulation settings of the layers determine the cell's performance. Different characteristics of a layer, such as the thicknesses of buffer and absorber layers and the doping concentration of the absorber layer were varied accordingly to study the cell's performance. The influence of temperature was also investigated to assess the cell's endurance and thermal stability. Three layers (TCO, buffer, and absorber) of the proposed structure were modeled using the SCAPS 3309 tools. Necessary electrical and optical parameters of different layer materials were obtained from the literature [13,15,21,23,[34][35][36][37][38][39][40] for their reasonable estimation during the simulation, as described in Table 1.
Effect of Buffer Layer Thickness
With a bandgap of about 1.3 eV, MoS 2 was used as a buffer layer to ensure that the majority of incident light was directed toward the junction. The n-layer thickness should ideally be as thin as possible to improve the device's series resistance. A reduced buffer layer thickness results in better short-circuit current density and minimal absorption in the blue region of the AM1.5 sun spectrum.
Simulations were carried out by varying the MoS 2 layer thickness from 0.02 µm to 0.18 µm, and the effects on performance metrics were noted. The simulated results are shown in Figure 2. It was evident from the simulation findings that the thickness of MoS 2 had a considerable impact on both the short-circuit current density (J sc ) and efficiency (η). In this context, the optimized thickness of MoS 2 was chosen to be 0.04 µm since above this thickness, the efficiency did not increase significantly, and a higher thickness increased the number of ionizing photons, resulting in more carriers. Moreover, the J-V curve for the variation of buffer layer thickness is shown in Figure 3. It was found that both the shortcircuit current density and open-circuit voltage increased with the buffer layer thickness. Hence, the efficiency increased. This was because the MoS 2 layer ensured a good p-n junction with the p-type CZTS absorber layer. ideally be as thin as possible to improve the device's series resistance. A reduced buffer layer thickness results in better short-circuit current density and minimal absorption in the blue region of the AM1.5 sun spectrum.
Simulations were carried out by varying the MoS2 layer thickness from 0.02 μm to 0.18 μm, and the effects on performance metrics were noted. The simulated results are shown in Figure 2. It was evident from the simulation findings that the thickness of MoS2 had a considerable impact on both the short-circuit current density (Jsc) and efficiency (η). In this context, the optimized thickness of MoS2 was chosen to be 0.04 μm since above this thickness, the efficiency did not increase significantly, and a higher thickness increased the number of ionizing photons, resulting in more carriers. Moreover, the J-V curve for the variation of buffer layer thickness is shown in Figure 3. It was found that both the short-circuit current density and open-circuit voltage increased with the buffer layer thickness. Hence, the efficiency increased. This was because the MoS2 layer ensured a good p-n junction with the p-type CZTS absorber layer.
Effect of Absorber Layer Thickness
One of the major goals of this research was to improve cell performance by preserving the material and optimizing the CZTS absorber layer thickness. Keeping this in mind, we simulated the CZTS solar cell structure and the results are presented in Figure 4, where the absorber layer thickness varied from 0.5 µm to 4 µm.
Effect of Absorber Layer Thickness
One of the major goals of this research was to improve cell performance by preserving the material and optimizing the CZTS absorber layer thickness. Keeping this in mind, we simulated the CZTS solar cell structure and the results are presented in Figure 4, where the absorber layer thickness varied from 0.5 μm to 4 μm. Figure 4 also presents the effect of absorber layer thickness on the key performance indicators of a solar cell. These include the open-circuit voltage (Voc) in V, short-circuit current density (Jsc) in mAcm −2 , fill factor (FF) in percent, and power conversion efficiency (η) in percent.
As shown in Figure 4, the efficiency (η) of the designed cell increased significantly as the CZTS absorber layer's thickness increases up to 2 μm. After that, increasing the thickness of the absorber layer did not effectively boost the efficiency. However, the other parameters, such as Voc, Jsc, and FF had an increasing trend with the increase in the absorber layer thickness. Hence, 2 μm was considered to be the optimized absorber layer thickness that can contribute to obtaining optimal efficiency. Moreover, considering a thinner absorber layer could reduce the fabrication cost of CZTS solar cells. We also added the J-V characteristic curve for the variation in absorber layer thickness, as shown in Figure 5. This figure indicates that the short-circuit current density and open-circuit voltage had a significant effect on the increase in absorber layer thickness. It was hypothesized that a thicker absorber layer allows more photons to enter, resulting in more electron-hole pair generation. As shown in Figure 4, the efficiency (η) of the designed cell increased significantly as the CZTS absorber layer's thickness increases up to 2 µm. After that, increasing the thickness of the absorber layer did not effectively boost the efficiency. However, the other parameters, such as V oc , J sc, and FF had an increasing trend with the increase in the absorber layer thickness. Hence, 2 µm was considered to be the optimized absorber layer thickness that can contribute to obtaining optimal efficiency. Moreover, considering a thinner absorber layer could reduce the fabrication cost of CZTS solar cells. We also added the J-V characteristic curve for the variation in absorber layer thickness, as shown in Figure 5. This figure indicates that the short-circuit current density and open-circuit voltage had a significant effect on the increase in absorber layer thickness. It was hypothesized that a thicker absorber layer allows more photons to enter, resulting in more electron-hole pair generation.
Effect of Doping Density of CZTS Absorber Layer
Using the SCAPS-1D simulation software, many trials were conducted to determine how different doping concentrations of the CZTS absorber layer in solar cells could be used. The doping density was varied between 1 × 10 11 cm −3 and 1 × 10 18 cm −3 to investigate their effect on the solar cell parameters, as shown in Figure 6. From Figure 6, it is found that with the increase in CZTS doping concentration, except for the short-circuit current density (Jsc), all other parameters (Voc, FF, and η) increase, indicating their dependence on the doping density of the CZTS absorber layer. The decrease in Jsc was due to the increase in the recombination of photogenerated carriers. Alternatively, the relationship between the open-circuit voltage (Voc) and short-circuit current density (Jsc) with doping density could be explained by the following equations.
Considering a solar cell with a p-n junction diode, the well-known diode equation can be written as where I, I0, IL, q, V, k, and T denote the net current flowing through the junction, the diode leakage current density in the absence of light, load current, electron charge, the voltage across the p-n junction, Boltzmann's constant, and temperature, respectively. Furthermore, I0 can be represented by where A, Dn, Dp, Ln, Lp, NA, ND, and ni signify the cross-sectional area of the p-n junction,
Effect of Doping Density of CZTS Absorber Layer
Using the SCAPS-1D simulation software, many trials were conducted to determine how different doping concentrations of the CZTS absorber layer in solar cells could be used. The doping density was varied between 1 × 10 11 cm −3 and 1 × 10 18 cm −3 to investigate their effect on the solar cell parameters, as shown in Figure 6. From Figure 6, it is found that with the increase in CZTS doping concentration, except for the short-circuit current density (J sc ), all other parameters (V oc , FF, and η) increase, indicating their dependence on the doping density of the CZTS absorber layer. The decrease in J sc was due to the increase in the recombination of photogenerated carriers. Alternatively, the relationship between the open-circuit voltage (V oc ) and short-circuit current density (J sc ) with doping density could be explained by the following equations.
Considering a solar cell with a p-n junction diode, the well-known diode equation can be written as where I, I 0 , I L , q, V, k, and T denote the net current flowing through the junction, the diode leakage current density in the absence of light, load current, electron charge, the voltage across the p-n junction, Boltzmann's constant, and temperature, respectively. Furthermore, I 0 can be represented by where A, D n , D p , L n , L p , N A , N D, and n i signify the cross-sectional area of the p-n junction, the diffusion coefficient of the electron, the diffusion coefficient of the hole, the diffusion length for an electron, the diffusion length for a hole, the concentration of acceptor atoms, the concentration of donor atoms, and intrinsic carrier concentration, respectively. In contrast, the mathematical equation for the open-circuit voltage (V oc ) is given as Putting V = 0 into Equation (6) provides the short-circuit current (I sc ) from which the short-circuit current density (J sc ) can be evaluated. It is seen from the above equations that V oc and J sc are strongly dependent on the carrier doping density. As shown in Figure 6, above a carrier density of 0.1 × 10 17 cm −3 , the solar cell parameters did not improve significantly. Hence, the optimized value of doping density was 0.1 × 10 17 cm −3 for the proposed cell structure. Figure 7 shows the J-V curve that was generated by varying the doping density of the absorber layer. From this figure, it can be seen that increasing the doping concentration decreased the short-circuit current density. Since the opencircuit voltage increased with the doping concentration, there was an improvement in the overall efficiency. and Jsc are strongly dependent on the carrier doping density. As shown in Figure 6, above a carrier density of 0.1 × 10 17 cm −3 , the solar cell parameters did not improve significantly. Hence, the optimized value of doping density was 0.1 × 10 17 cm −3 for the proposed cell structure. Figure 7 shows the J-V curve that was generated by varying the doping density of the absorber layer. From this figure, it can be seen that increasing the doping concentration decreased the short-circuit current density. Since the open-circuit voltage increased with the doping concentration, there was an improvement in the overall efficiency.
Effect of Temperature
The efficacy of solar cells is typically negatively impacted by increased temperatures [35]. Here, the working temperature was considered to be 300 K (27 °C). Solar cells are used outside, where temperature fluctuations might affect the output. Excessive heat may degrade the performance as well. The thermal stability of the CZTS-based solar cell was explored within the temperature range of 290 K to 380 K. This helped to examine the performance of the designed solar cell at various operating temperatures, as shown in Figure 8.
The drop in the efficiency (η) of the cell was due to the decrease in the open-circuit voltage (Voc) with increasing cell temperature. It was observed that current density (Jsc) was almost unchanged throughout the temperature range, indicating no impact with temperature variation. A solar cell's thermal stability is indicated by the temperature coefficient, which also demonstrates how the solar cell output varies with temperature. From Figure 8, it can be illustrated that there was a declining marginal trend for other output parameters (Voc, FF, and η). However, these were not greatly altered. Therefore, it can be concluded that the designed structure exhibited improved thermal stability.
Effect of Temperature
The efficacy of solar cells is typically negatively impacted by increased temperatures [35]. Here, the working temperature was considered to be 300 K (27 • C). Solar cells are used outside, where temperature fluctuations might affect the output. Excessive heat may degrade the performance as well. The thermal stability of the CZTS-based solar cell was explored within the temperature range of 290 K to 380 K. This helped to examine the performance of the designed solar cell at various operating temperatures, as shown in Figure 8.
The drop in the efficiency (η) of the cell was due to the decrease in the open-circuit voltage (V oc ) with increasing cell temperature. It was observed that current density (J sc ) was almost unchanged throughout the temperature range, indicating no impact with temperature variation. A solar cell's thermal stability is indicated by the temperature coefficient, which also demonstrates how the solar cell output varies with temperature. From Figure 8, it can be illustrated that there was a declining marginal trend for other output parameters (V oc , FF, and η). However, these were not greatly altered. Therefore, it can be concluded that the designed structure exhibited improved thermal stability.
Quantum Efficiency (QE)
Quantum efficiency can be known as the proportion of collected charge carriers to photons incident onto a solar cell. Due to the generation of an electron-hole pair from each photon, the quantum efficiency could theoretically be 100%. However, this does not occur in actual cells owing to many types of losses, such as buffer and window layer absorption, absorber layer absorption restriction, deep penetration, and recombination loss [35,40]. The quantum efficiency of the simulated solar cell structure is depicted in Figure 9. It demonstrates that the designed structure achieved a quantum efficiency of close to 90%. Therefore, it can be stated that the designed solar cell structure could maximize the utilization of solar irradiance. Micromachines 2022, 13, x 12 of 16 Figure 8. Effe ct of te mpe rature versus Voc, Jsc, FF, and η.
Quantum Efficiency (QE)
Quantum efficiency can be known as the proportion of collected charge carriers to photons incident onto a solar cell. Due to the generation of an electron-hole pair from each photon, the quantum efficiency could theoretically be 100%. However, this does not occur in actual cells owing to many types of losses, such as buffer and window layer absorption, absorber layer absorption restriction, deep penetration, and recombination loss [35,40]. The quantum efficiency of the simulated solar cell structure is depicted in Figure 9. It demonstrates that the designed structure achieved a quantum efficiency of close to 90%. Therefore, it can be stated that the designed solar cell structure could maximize the utilization of solar irradiance.
Current Density-Voltage (J-V) Characteristics
The designed structure of graphene/MoS2/CZTS/Ni solar cell exhibits enhanced current density-voltage (J-V) characteristics, as revealed in Figure 10. In Figure 10, the proposed structure shows superior J-V properties and performance. The reason behind the
Current Density-Voltage (J-V) Characteristics
The designed structure of graphene/MoS 2 /CZTS/Ni solar cell exhibits enhanced current density-voltage (J-V) characteristics, as revealed in Figure 10. In Figure 10, the proposed structure shows superior J-V properties and performance. The reason behind the better performance of the solar cell with MoS 2 as the buffer material is its higher absorption coefficient and, consequently, more photogenerated carriers. Table 2 provides a summary of the optimal results attained in this investigation. This table also includes other solar cell performance parameters that were obtained from the numerical study that are available in the literature for further comparison. This table shows that our obtained results were comparable to the literature's experimental values. From this, we can infer that the SCAPS-1D simulator is a capable tool for forecasting solar cell behavior in actual scenarios. Figure 9. Quantum e fficie ncy versus wave length.
Current Density-Voltage (J-V) Characteristics
The designed structure of graphene/MoS2/CZTS/Ni solar cell exhibits enhanced current density-voltage (J-V) characteristics, as revealed in Figure 10. In Figure 10, the proposed structure shows superior J-V properties and performance. The reason behind the better performance of the solar cell with MoS2 as the buffer material is its higher absorption coefficient and, consequently, more photogenerated carriers. Table 2 provides a summary of the optimal results attained in this investigation. This table also includes other solar cell performance parameters that were obtained from the numerical study that are available in the literature for further comparison. This table shows that our obtained results were comparable to the literature's experimental values. From this, we can infer that the SCAPS-1D simulator is a capable tool for forecasting solar cell behavior in actual scenarios.
Conclusions
Using the SCAPS-1D program, we numerically analyzed CZTS solar cells by considering MoS 2 as the buffer layer and graphene as the transparent conducting oxide (TCO) or window layer. Our objective was to select MoS 2 as the buffer material for the CZTS absorber layer. The simulation results revealed that MoS 2 as a buffer layer was suitable for the CZTS absorber layer. We then investigated the influence of the absorber layer's thickness and doping density on the selected heterojunction structure. We were able to obtain the best open-circuit voltage (V oc ), short-circuit current density (J sc ), fill factor (FF), and efficiency (η) by optimizing these two factors. The optimized values for the buffer layer thickness, absorber layer thickness, and doping density were 40 nm, 2 µm, and 0.1 × 10 17 cm −3 , respectively. These adjusted settings allowed us to increase the efficiency of each heterojunction significantly. Indeed, we obtained a significantly higher efficiency of 18.27% (V oc = 0.8521 V, J sc = 25.3 mA cm −2 , and FF = 84.76%) for the proposed CZTS solar cell structure. The obtained results were satisfactory and may be employed experimentally to fabricate actual graphene/MoS 2 /CZTS-based solar cells in the future. | 7,891.8 | 2022-08-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
A New Method for Parameter Sensitivity Analysis of Lorenz Equations
A new method for parameter sensitivity analysis of Lorenz equations is presented. The sensitivity equations are derived based on the staggered methods. Experimental results indicate that it is possible to determine effects of parameters on model variables so that we can eliminate the less effective ones. Robustness can also be verified in some confidence intervals by simply looking at the corresponding phase portraits. This enables us to control the system. Although the stability properties of the Lorenz equations are studied extensively, to the best knowledge of the authors, the PSA of Lorenz equations has not been considered which is the main goal of this paper.
Introduction
Parameter sensitivity analysis (PSA) of large-scale differential algebraic systems is important in many engineering and scientific applications, including biology, chemistry, and economics.Problems such as population dynamics, network modeling, and chemical reactors coming from different branches of science have many parameters whose values may not be known accurately.Infinitesimal changes in most of these model input parameters change the future behavior of the systems partially or completely.Consequently, one can observe an uncontrolled and chaotic behavior of the system.In the present day, one has an opportunity to adjust these parameter values accordingly and make some list of parameters with respect to their effect on model.For instance, if a parameter is less effective than the other parameters, the designer of the model can eliminate that parameter.The analysis of this effectiveness is called parameter sensitivity analysis.Consequently, algorithms which perform PSA in an efficient and rapid manner are invaluable to researchers in many fields.
In this paper, a new method for parameter sensitivity analysis of Lorenz equations is presented.The sensitivity equations are derived based on the staggered methods.
Experimental results indicate that it is possible to determine effects of parameters on model variables so that we can eliminate the less effective ones.Robustness can also be verified in some confidence intervals by simply looking at the corresponding phase portraits.This enables us to control the system.Although the stability properties of the Lorenz equations are studied extensively, to the best knowledge of the authors, the PSA of Lorenz equations has not been considered which is the main goal of this paper.
The structure of this paper is as follows.In Section 2, we overview the concept of parameter sensitivity analysis.In Sections 3 and 4, we study the chaotic and sensitivity analysis of Lorenz equations.We complete the paper by some simulation results.
Parameter Sensitivity Analysis
It is difficult to construct a model without any parameter.In fact, in reality, the problems coming from different branches of science such as engineering, biology, ecology, and meteorology have many parameters.With the help of faster computers of today, one has a chance to adjust them and make some list of parameters with respect to their effect on model.If a parameter is less effective than the others we, the designer of the model, can eliminate it.The analysis of this effectiveness is called "sensitivity." When qualitative estimates of sensitivity are desired, a mathematical model of phenomena is desired or at least a relationship is required.Infinitesimal changes in all (or some) of the model input parameters change the future of mathematical design partially or completely (in some cases).The important thing here is the sensitivity of a single component compared to other input variables changed a little bit simultaneously.By saying single component we mean the parameters in the model whose values may not be accurately known.However, such a model brings the questions concerning stability, optimality, sensitivity, and so forth.In this work we concentrated on only the sensitivity analysis of a concrete example, namely, Lorenz equations.
PSA generates essential information for parameter estimation, optimization, control, model simplification, and experimental design.In the literature, staggered direct method, simultaneous corrector method, adjoint method, and staggered corrector method are some of the well-known methods for parameter sensitivity analysis.We can give [1] as a general reference for most of these methods.Some popular software packages for the same task can be listed as ASAP, DASPK, and DASKADJOINT.
In the theory of PSA, another important concept is the index structure which could be defined as the number of differentiations needed for transforming a DAE to an ODE.Intuitively, it is clear that all ODEs have index 0. What defines the index is up to the constraints given in systems.For example, let us consider a simple predator-prey model where is differentiable.Taking derivative of constraint equality, we get = 3 − , and 2 + = () ⇒ = ( () − 3 + )/2.Hence, new ODEs are given as To obtain this, it takes one differentiation.Thus, the model has index 1.
In order to capture the main idea of the PSA, let us consider the general form of the parameter-dependent DAEs given by where ∈ and ∈ .It is not always the case but assume that we have index 0 or 1 DAEs and convert (3) to explicit form of ODEs: where (, , ) ∈ × × .Sensitivity analysis requires the calculation of the term, namely, , defined as the derivative of with respect to ; that is, := / .Since we are interested in partial derivatives, we can treat one parameter after another, while keeping the remaining ones fixed.Therefore, the derivative of (4) with respect to parameter is (5) Replacing = / into the right-hand side of (5), the th sensitivity equation becomes where . . .
The initial condition takes the form
Lorenz Equations
The Lorenz equations invented by E. N. Lorenz, a meteorologist and a pioneer of chaos theory, are typical examples of equations for system of differential algebraic equations that can be written as where > 0 is Rayleigh number, is Prandtl number corresponding to temperature difference between two horizontal plates in convection problem, and is a positive number.These equations arise in studies of convection and instability in planetary atmospheres, models of lasers and dynamos, and so forth.Although the stability and bifurcation properties of the Lorenz equations are studied in the literature [2,3], to the best knowledge of the authors, the parameter sensitivity analysis of Lorenz equations has not been considered so far which is the main goal of this paper.The Lorenz equations are nonlinear due to the terms and .They are also symmetric equations, because the equations are invariant under (, ) → (−, −).Thus, if ((), (), ()) is a solution of Lorenz equations, so is (−(), −(), ()).System of the Lorenz equations is dissipative.In other words, volumes in phase-space contract under the flow and and are usually known as dissipation parameters.Next, we compute the fixed points of Lorenz equations.Letting each term of (11) be equal to 0, that is, letting ẋ = ẏ = ż = 0, we get the following identities: (13) In this study, behaviors dependent on initial conditions are not studied, and they are fixed as 0 = 0, 0 = 1, and 0 = 0.In the following figures different trajectories are given with respect to different values.
The behavior in Figure 1 continues up to a value of = 24.08.After that, it becomes more complicated and chaotic; for example, for = 27 some periodic and aperiodic motions are observed as seen in Figure 2.
Further explanations of these and stability features of Lorenz equations might be seen, for instance, at [4].In the next section, we study parameter sensitivity analysis of Lorenz equations.
Parameter Sensitivity Analysis of Lorenz Equations
Let us write the Lorenz equations having some initial conditions in the following way: Our new variable defined in (7) is given as follows: where Finally, the sensitivity equations take the form Note that the initial conditions do not depend on parameters, so the new initial conditions = 0 for all , = 1, 2, 3.In other words, the initial conditions for the new variable are given as 0 = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] .(18) In the next section we are making some simulations in order to visualize the results.
Computational Results
In order to visualize the results, we made many different simulations as phase portraits, sensitivity analysis, and relation between the components.In this section, we present only some of the simulation results.
After integrating the sensitivity equations, we get 's as a function of time so that one can analyze the change in the solution with respect to perturbations in the parameters.In these experiments, for time interval = [0100], we solved the system by Matlab ODE solver, namely, ode45, based on an explicit Runge-Kutta Method.After integrating the sensitivity equations, we get 's as a function of time so that one can analyze the sensitivity in phase portraits.
For = 30, the qualitative behaviors of the system might be seen in Figure 3.
Before entering the chaotic region which starts from the value = 24.08,all nine sensitivity components demonstrate the same behavior.For example, taking = 20, we have well stable solutions, but the relation between the sensitivity variables is highly nonlinear, that is, a significantly important result for this well-known system.This is illustrated in Figure 4.
Remember that 21 and 31 represent sensitivity components of 1 with respect to 2 and 3 .Figure 4(b) tells us that altering 2 and 3 in a simultaneous manner can affect the controllability of the system completely.In Figure 5, phase portrait for 31 and 1 is given.
In the chaotic region, what happened to the sensitivity equations is that, first, they seemed to be very complicated and irregular when they are considered as a function of time as seen in Figure 6.
However, in the phase portraits of 22 and 13 , we obtained a completely linear relation between them as seen in Figure 7.
Conclusion
When qualitative estimates of sensitivity are desired, a mathematical model of phenomena is desired or at least a relationship is required.However, such a model brings the questions concerning stability, optimality, sensitivity, and so forth.In this work, we concentrated only on the PSA of Lorenz equations.As we saw in the small application, it is possible to determine effects of parameters on model variables so that we can eliminate the less effective ones.Robustness can also be verified in some confidence intervals by just looking at the phase portraits.This enables us to control the system.This method is efficient if the number of variables is much more than that of the parameters.In a future work, we plan to study PSA for Van der Pool equations.
Figure 2 :
Figure 2: For = 27.The in (a) and in (b) are periodic, whereas in (c) is aperiodic.(d) space for critical values 1 and 2 .
Figure 3 :
Figure 3: For = 30, the behavior of the systems at - plane. | 2,479.4 | 2013-09-16T00:00:00.000 | [
"Engineering",
"Physics"
] |
Wideband multi-stage CROW filters with relaxed fabrication tolerances
: We present wideband and large free spectral range optical filters with steep passband edges for the selection of adjacent WDM communication channels that can be reliably fabricated with mainstream silicon photonics technology. The devices are based on three cascaded stages of coupled resonator optical waveguides loaded on a common bus waveguide. These stages differ in the number of resonators but are implemented with exactly identical unit cells, comprised of a matched racetrack resonator layout and a uniform spacing between cells. The different number of resonators in each stage allows a high rejection in the through port response enabled by the interleaved distribution of zeros. Furthermore, the exact replication of a unique cell avoids the passband ripple and high lobes in the stopband that typically arise in apodized coupled resonator optical waveguide based filters due to fabrication and coupling induced variations in the effective path length of each resonator. Silicon photonics filters designed for the selection of 9 adjacent optical carriers generated by a 100 GHz free spectral range comb laser have been successfully fabricated with 248 nm DUV lithography, achieving an out-of-band rejection above 11 dB and an insertion loss of less than 0.5 dB for the worst channels.
Introduction
Optical notch filters are a key component in optical communication systems. For example, they can serve for channel selection in a wavelength division multiplexed (WDM) communication system or to reduce amplified spontaneous emission (ASE) after optical amplification. Here, we aim at wideband notch filters for the selection of a predetermined number of optical carriers generated by a comb source in a WDM transmitter. In this system, several optical carriers are sourced by a single semiconductor mode-locked laser (MLL) on a 100 GHz grid and are further modulated by silicon photonics resonant ring modulators (RRMs) prior to being jointly amplified by a semiconductor optical amplifier (SOA) [1,2]. In this specific context, extinction of unused comb lines is of paramount importance to avoid excessive saturation of the SOA and to optimally allocate the available SOA output power to active channels. In order for the wideband optical notch filter to serve its purpose, it needs to have a wide enough passband to let through the selected optical carriers (900 GHz), while maintaining a transfer function with sufficiently steep edges and a wide enough stopband of at least 900 GHz to fully cut off adjacent unwanted MLL comb lines. Moreover, in order to avoid burdening the optical power link budget, maintaining low insertion losses is critical. Requirements on stopband extinction are discussed in more details towards the end of the paper, however it can already be said here that while a guaranteed extinction of at least −16 dB throughout the stopband would be ideal for the target application, the −11 dB worst case extinction shown in the experimental section proved fully adequate due to averaging of the transmitted power across the rejected comb lines. In order to monolithically integrate as much of the required optical functionality as possible on a single transceiver chip, the optical filters are developed in silicon photonics technology.
A number of approaches have been followed to implement integrated optical filters with a flat transfer function and/or steep passband edges. Athermal flattop filters have for example been obtained with multi-stage Mach-Zehnder interferometers with asymmetric waveguide cross-sections [3]. Grating based approaches have also yielded high performance [4][5][6], in particular in terms of very high rejection levels critical for quantum optics applications such as heralded photon generation [7], but tend to require lower critical dimensions and higher precision lithographic resolution and are thus most suited to fabrication with electron beam lithography. Moreover, they tend to be long devices, potentially increasing the required power levels for thermal tuning. Here, we base our approach on coupled resonator optical waveguides (CROW) consisting in several coupled racetrack resonators [8,9]. While this is a well-known approach for compact filter implementation, it is also associated with a number of difficulties in its reduction to practice: An important challenge associated with the implementation of optical filters in silicon photonics technology resides in the spectral shifting of the transfer function of individual filter elements, here the resonance frequency of individual rings, due to fabrication tolerances [10] or due to coupling-induced resonance frequency shifts [11]. While independent dynamic tuning of the ring's resonances can in principle address this problem [12], independent tuning of a large number of optical elements is a very complex problem. Just the acquisition of necessary feedback signals and interpreting them in terms of the required tuning adjustments can be very difficult. Independent thermal tuning of close by elements can furthermore be very challenging due to thermal cross-talk [13]. On the other hand, tuning of the entire filter transfer function with a single control signal, for example by globally heating up the filter, while being undesirable from the perspective of power consumption, remains a tractable problem for sufficiently compact devices.
Early attempts at implementing CROWs with a large number of coupled resonators suffered from the mismatch in resonance frequencies arising as a consequence of fabrication tolerances. While the resulting frequency spread has been considerably improved by moving to more advanced technology nodes [10], the remaining resonance frequency variations still need to be addressed. A solution consists in increasing the coupling between the resonators to the extent that the resulting increase of the resonators' linewidths exceeds the spread in resonance frequencies [14]. In other words, the time required to couple light from one resonator to the next has to be shorter than the time required for the phase of light stored in two resonators to significantly dephase as a consequence of their resonance frequency mismatch. This requirement on resonator-to-resonator coupling strength constrains the minimal width of the CROW filter stop-or passband, depending on the configuration of the device (through or drop port configurations described below). This problem of resonance frequency repeatability is further exacerbated by systematic effects when the coupling strength between adjacent resonators is purposefully modified in order to obtain specific transfer functions [15][16][17][18][19][20]. Due to proximity effects associated to lithography or microloading effects during etching [21], modifications in the layout introduced to modify the coupling strength will also result in small systematic modifications in the waveguide width leading to a shift in the resonance frequency. Moreover, modified coupling strengths also result in the modification of the coupling induced resonance frequency shift [11]. While in principle these systematic biases can be determined and compensated for, this might not only require several experimental iterations to get the device right, but also an extremely stable process for this calibration procedure to converge. Filters relying on a single non-apodized CROW on the other hand, while featuring good extinction when operated in drop port configuration, as defined below, require a large number of resonators to achieve a steep rolloff. When operated in through port configuration, they feature the lowest passband insertion losses, but suffer from side lobes in the stopband limiting the maximum extinction. Several techniques can be applied to apodize the coupling strength between resonators, such as varying the spacing between resonators on a nanometric scale, or offsetting the resonators one relative to the other so as to vary the length of the coupling junction [22,23]. While the latter alleviates difficulties related to the necessity of shifting the resonator positions by very small amounts -placement accuracies being ultimately limited by the design grid on which the mask is defined -problems related to proximity effects and coupling induced resonance frequency shifts remain. An additional difficulty, a coupling strength dependent modification of the phase of the resonator-to-resonator coupling coefficient, is introduced, constraining design optimization [24]. While the high resonator-to-resonator coupling strengths required to achieve the wide pass-and stopbands targeted for our application help to overcome variability in resonance frequencies, in particular the residual non-systematic resonance shifts still present in our device, fairly large systematic resonance shifts need to be addressed by design.
The approach followed in this paper is to implement CROW filters without apodization, but to cascade several stages each composed of a different number of resonators. As will be described in the following, this results in a compact filter implementation small enough to be globally thermally tuned, while achieving the targeted transfer function without tuning of individual rings. Since all the resonators are exactly identical, down to the snapping on the design grid and including the layout of their nearest neighbors, proximity effects and coupling induced resonance frequency shifts become a non-issue. While local process variability remains, with the std. dev. of the resonance frequency shift for close by rings extracted to be on the order of 0.3 nm in the 248 nm deep ultra-violet (DUV) line used here, it is seen as being significantly smaller than the resonance frequency shifts observed here in test structures due to systematic effects resulting from layout changes introduced to vary the resonator-toresonator coupling strengths. While we are varying the resonator-to-resonator gap in order to apodize the coupling strength in these test structures and while the magnitude of systematic effects might be different when using the offsetting technique instead, such systematic effects are likely to remain a limiting factor in apodized CROWs if they are not addressed by design, particularly in view of progress made in intra-die (non-systematic) device repeatability with more advanced 193 nm lithography, with reported variability as low as 0.15 nm (std. dev.) for close by devices [10].
In section 2 we will review single stage CROW filters with and without resonator-toresonator spacing apodization, as already reported in the literature, and exemplify difficulties in their implementation associated to process biases and minimum feature size. In section 3, we will introduce our designs relying on exactly identical unit cells, exemplify how this facilitates design, and report experimental results. , so that the maximum achievable FSR is limited by bending losses and junction lengths required to achieve targeted coupling strengths. As depicted in Fig. 1(a), the racetrack resonator is comprised of two 180° waveguide bends with radius R connected by two straight sections with length S L that also constitute the directional couplers between the resonators. Thus, the total resonator length is given by
Single stage CROW filters
. In order to facilitate the filter implementation, all individual resonators in a CROW typically share a common length and therefore a similar FSR. Highest required coupling strength (corresponding to κ 1 = κ N+1 , the coupling between the bus waveguide and the first resonator as well as the last resonator and the drop waveguide), as a function of the ratio between the width of the passband (BW P ) and the Free Spectral Range (FSR) for the implementation of a 5 th order apodized CROW filter. BW P is shown both for a CROW filter in drop port configuration (lower x-axis) and a CROW filter in through port configuration (upper x-axis). The red, green and blue curves refer to Chebyshev responses with passband ripple levels of respectively 0.1 dB, 0.5 dB and 1 dB (-16.4 dB, -9.6 dB and -6.86 dB side lobe levels in through port configuration). The black curve corresponds to a Butterworth filter response with a maximally flat drop port passband response. (c) Spectral response over a whole FSR of the Chebyshev filters in drop (solid lines) or through (dashed lines) configurations with a ratio BW P /FSR of 0.3.
Typical specifications for CROW filter designs include the bandwidth of the passband BW P , the bandwidth of the stopband BW S (or equivalently the FSR, since FSR ≈BW P + BW S ), the maximum allowed insertion loss in the passband (including the ripple attenuation), the minimum rejection in the stopband (including the required side lobe suppression level) as well as the width of the transition band (related to the roll-off steepness and filter order) [9].
CROW filters can be used either in through port or in drop port configuration, respectively referring to the through port or to the drop port being connected to the downstream portion of the system. Both configurations have their strengths and weaknesses in terms of stopband rejection, passband ripples, insertion losses, and required maximum coupling strengths. In particular, in the through port configuration the width of the stopband scales with the coupling strength, while in the drop port configuration the width of the passband scales with the coupling strength. The through port configuration tends to have less insertion losses, as in the drop port configuration the passband suffers from ripples and the light has to pass through the CROW in order to be transmitted. In the through port configuration, however, the stopband suffers from high side lobes, characterized in the following by the side lobe suppression level (SSL).
At the drop port, CROWs with a uniform coupling strength between resonators (κ i 2 = κ 2 , with i = 1, 2, … N + 1, wherein the squared parameters represent power coupling coefficients [25]) present a large rejection in the stopband but significant ripples in the passband. On the other hand, the complementary spectral response at the through port exhibits a flat and low loss passband. However, the filter rejection level is then hindered by prominent side lobes in the stopband. The width of the filter passband (drop port) or the width of the filter stopband (through port), as well as the level of passband ripples (drop port) or side lobes (through port) are mainly determined by κ 2 and are almost independent of the number of resonators N forming the CROW (filter order). On the other hand, increasing the filter order reduces the spectral width of the passband ripples (drop port) or side lobes (through port) and leads to a steeper roll-off.
Tradeoffs in the design of single stage, apodized CROW filters and limitations in the largest achievable FSR and bandwidth
In this section, we will review the tradeoffs in the design of single stage apodized CROW filters. In particular, we will describe how increasing the FSR (requiring a reduction of the resonator circumference) and increasing the BW P /FSR or BW S /FSR ratios, respectively in drop or through port configurations (requiring a high coupling strength, a minimal junction length and thus constraining the minimum resonator length) result in opposite requirements. This is particularly constraining in the design targeted here, in which both BW P and BW S are required to be above 900 GHz (hence swapping between a drop port and through port configuration does not significantly modify the requirements on the maximum coupling strength, even though it does have important consequences in terms of insertion losses vs. worst case extinction). The CROW filter thus needs both a high FSR of at least ≈1800 GHz, while at the same time also requiring high coupling strengths so that BW P /FSR (respectively BW S /FSR) is on the order of ≈0.5. In the following modeling, waveguides are assumed to be dispersionless -a simplification justified here by the intent of exemplifying design tradeoffs rather than designing a concrete device -so that quantitative results can be simply described with frequencies / wavelengths given as a percent of FSR (since the FSR scales as 1/L and the splitting between the resonances scales as κ 2 /L, the ratio of bandwidths to FSR can be given as a function of κ 2 only, independently of the assumed FSR).
The established approach for simultaneously achieving a flat response in the passband and a high rejection in the stopband consists in setting different coupling strengths between resonators (coupling apodization). With this aim, several synthesis algorithms have been proposed to determine the coupling values that implement typical filter responses such as Butterworth or Chebyshev [9,15]. In general, for a filter order N (selected to meet a target roll-off at the transition band), the coupling coefficients between resonators follow a symmetric distribution with κ 1 = κ N+1 , κ 2 = κ N , etc., wherein κ 1 is the amplitude coupling coefficient between the (input) bus waveguide and the first resonator, κ 2 is the coupling between the first and the second resonator, κ N is the coupling between the before last and the last resonator and κ N+1 is the coupling between the last resonator and the drop waveguide. Furthermore, the strongest coupling is typically implemented between the external resonators and the bus and drop waveguides (κ 1 and κ N+1 ). Coupling coefficients are successively reduced as one moves deeper into the structure, until a minimum value is reached between the central resonators. Figure 1(b) shows the highest required coupling strengths (κ 1 2 = κ N+1 2 ) for 5th order filters as a function of the ratio between the passband width (BW P ) in either drop or through port configurations and the FSR. The different curves correspond to filters in drop port configuration with different values of ripple attenuation in the passband, respectively filters in through port configuration with different SSL. Given an initial CROW filter design in drop port configuration, the bandwidth of its passband can be widened by i) increasing the FSR and/or ii) increasing the coupling. Figure 1(c) shows that, alternatively, the passband ripples (drop port configurations) or the SSL (through port configuration) can be decreased by increasing the maximum coupling strength at a fixed BW P /FSR. Thus, in our target application requiring wide BW P and BW S , high coupling strengths are required both to achieve the targeted BW P /FSR ≈0.5 as well as to achieve low passband ripples (drop port configuration) or low SSL (through port configuration). In addition, a large FSR is required. This leads to a number of difficulties: First, the maximum FSR achievable by means of a reduction in R is limited by the bending loss level, which eventually penalizes the insertion loss at the drop port. The high confinement of fully etched single-mode waveguides in silicon-on-insulator technology (220 nm core thickness) allows very compact designs with radii down to 3 µm without a significant performance degradation due to bending losses at the scale relevant to the performance of the device specifications targeted here.
On the other hand, stronger coupling coefficients have to be implemented by reducing the gap between the resonators if the FSR is not to be reduced. Here, the minimum value is limited by the resolution of the fabrication process. The CROW filters presented in this work have been designed to be compatible with fabrication with 248 nm DUV lithography and a minimum feature size of 180 nm. A thinner waveguide could also enhance the coupling strength, but at the price of higher radiative losses arising from the bending or complications resulting from adiabatic mode conversion inside the resonators [26].
Once both gap and waveguide width are fixed, the remaining alternative for a wider passband consists in increasing the coupling strength by elongating the coupler section length S L . As a drawback, this approach also limits the FSR which further increases the required κ 1 to reach a target passband width.
Since, in drop port configuration, the design performance in terms of passband ripple also depends on the coupling strength (see different colored curves in Figs. 1(b) and 1(c)), this leads to stringent limitations when trying to meet all specifications simultaneously. This is particularly constraining for applications requiring both large FSR values and a wide passband width. These limitations may seem to be less constraining in case of CROW filters in through port configuration since, additionally to the characteristic flat passband, their passband width scales inversely with the coupling strength. However, in this case the challenge typically consists in achieving the required stopband width, which again scales directly with the coupling strength. Furthermore, the reduction of the SSL for a higher stopband rejection by means of coupling apodization further increases the required coupling strength with the concomitant limitation in the maximum achievable FSR. Since in our case the requirement ended up being BW P /FSR ≈BW s /FSR ≈0.5, the choice of drop port vs. through port configuration would be driven by other considerations (compare both configurations in Fig. 1(b)), such as a minimization of passband insertion losses favoring a through port configuration.
Fabrication challenges related to coupling apodization and optical lithograhy
CROW filters with coupling apodization are typically implemented by changing the gap in the coupling sections between resonators while maintaining a common racetrack resonator layout [17][18][19][20] or by offsetting resonators relative to one another [22,23]. The common racetrack length is intended to ensure matching between the round-trip phases of each individual resonator, assuming deviations in the silicon device layer film thickness and process variations are sufficiently small across the device. This matching is essential in order to get the desired transfer function with the targeted distribution of filter poles and zeros. However, coupling strength apodization also results in systematic variations of the resonance wavelength due to either process proximity effects [10,21] or due to coupling-induced resonance frequency shifts [11], as well as, in case of the offsetting technique, variations in the resonator-to-resonator coupling phase [24], so that significant additional challenges arise that need to be addressed for a successful fabrication with optical lithography.
In both schemes, in case obtaining a large FSR is a high priority, the gaps for the first and last resonators may be set to the minimum value that can be safely resolved, allowing a minimization of the coupling length L S . In this section, we are taking a closer look at the random process variations and systematic resonance frequency shifts occurring when coupling apodization is obtained by varying the resonator-to-resonator spacing, in which case the resonator-to-resonator spacing is successively increased for the inner resonators.
First, as a consequence of the chosen technique, the practical implementation of the targeted coupling strengths requires resolving the different gap sizes with an accuracy of a few nanometers. As an example, Fig. 2(a) shows the required coupling strengths for the implementation of a 7 th order Chebyshev filter with a maximum ripple of 0.1 dB, an FSR of 14 nm and a BW P /FSR ratio of 0.47. The corresponding required gaps (marked with green circles) have been determined with 3D FDTD simulations considering a resonator design that meets specifications with R = 3 µm, L S = 9.7 µm and a minimum gap of 200 nm (silicon-oninsulator waveguide with a 220 nm thickness and a 400 nm width, clad on all sides by SiO 2 ). Based on these simulation results, we determined that this CROW filter requires gaps of 200, 239, 275, 281, 281, 275, 239, and 200 nm. An accurate fabrication of these gaps is already challenging in terms of mask layout irrespectively of process biases, since structures have to be snapped onto the design grid if one wants to ensure minimal changes to the resonators. As already described above, the problem of required nanometric resolution can be alleviated by an alternative apodization scheme, the longitudinal offset technique, in which individual resonators are offset one relative to the other in order to vary the coupling strength [22,23]. However, this does not resolve other issues related to systematic resonance frequency shifts as proximity effects and coupling induced frequency shifts continue to play a role, while a new difficulty, phase offsets in the resonator-to-resonator coupling coefficients, is introduced [24].
The resolution of the waveguide width is affected by optical proximity effects, which introduce nanometric deviations that prevent the exact matching of the optical path lengths of the individual resonators. These process biases depend on whether the waveguide is isolated or in proximity to another one (line pair) [27]. Furthermore, in the case of line pairs, the waveguide cross-section is affected by spacing dependent process biases. Since the lengths of the line pairs vary from resonator to resonator in the offsetting technique, this problem remains relevant also in that case even though the resonator-to-resonator spacing remains constant.
In order to experimentally illustrate this problem, we have fabricated a set of test structures consisting in single racetrack resonators that have a common resonator length (R = 4 µm, L S = 6.7 µm) but different gaps between the bus/drop waveguide and the resonator. The test structures with gaps of 200, 250 and 300 nm were replicated and interspersed in order to differentiate the magnitude of the resonance shifts due to local process variations and local variations of the film thickness from the magnitude of systematic effects, so as to conclusively identify the dominant source of mismatch.
Test structures in 5 different chips from the same wafer were spectrally characterized with a tunable laser. They showed significant shifts in the resonance wavelength around 1550 nm as a function of the gap size. The results are plotted in Fig. 2(b). Resonance shifts are taken relative to the mean resonance wavelength for the structures with 200 nm gap averaged over the same chip, in order to normalize out the effect of longer range film thickness variations and process biases between different chips. Thus, the data serves to visualize the dominant sources of mismatch for nearby structures (same chip). The rationale behind this is that the global shift of the resonance across all the rings of a given device is not the issue here, as it is deemed to be fine-tuned by global thermal control across the entire device. Rather, the issue consists in mismatch between rings of a same filter stage, as a correction of the latter would require individual trimming of the rings.
The high magnitude of the measured systematic resonance shifts is attributed to variations in the effective path lengths resulting from optical proximity effects, as well as coupling induced resonance shifts. Assuming similar systematic biases, the implementation of the 7th order Chebyshev filter described above would result in a deviation of the individual resonances in a range of more than 4 nm, which would very significantly deteriorate the filter response ( Fig. 2(c)). The random variations recorded within a given device category on the same chip, in an area of 4 mm by 2 mm, is significantly smaller. Within an actual CROW stage, this random variation is expected to be even smaller as the rings are much closer to each other as compared to being spread across a chip. The resulting variation is estimated in section 3.2. Since mismatch between resonators featuring different gaps is deterministic across the different measured chips, it appears to be feasible to apply optical proximity correction in the mask for compensation of the process biases [27] combined with optical design for a correction of the coupling induced frequency shifts. However, it should also be noted that fine tuning such a compensation strategy might require several experimental iterations and an extremely stable process.
For all these reasons, it appeared highly desirable to find an alternative filter topology that allows keeping both the racetrack resonators and their spacing constant throughout the device, as this is undoubtedly the safest strategy when a small number of design iterations are allowable in the overall design of an already highly complex system such as an integrated transceiver chip [1,2].
Multi-stage CROW filters with constant spacing
In order to alleviate the stringent fabrication requirements associated to CROW filters with coupling apodization, we propose an optical filter that relies on the cascaded combination of three stages of CROWs loaded on a common bus waveguide. All three are implemented with exactly identical unit cells, comprised of a matched racetrack resonator layout with equal radius and racetrack length, and a uniform spacing (uniform coupling κ 2 ) between resonators, chosen for all the racetrack resonators to be equally snapped to the design grid (see Fig. 3(a)). Moreover, we find that for the specifications of our target application resulting in BW P /FSR ≈BW S /FSR ≈0.5, the required coupling strength κ 2 is reduced relative to the maximum required coupling strength 2 1 κ of an apodized single stage CROW filter of identical specifications, so that design constraints relative to maximum achievable FSR are reduced.
Design
In this section, the main modeling tradeoffs are first exemplified. As in section 2.1, assuming a wavelength independent FSR (no waveguide dispersion) and coupling strength κ 2 , we obtain perfectly periodic transfer functions, so that these high-level modeling results are not reported at a specific wavelength, but rather stop-and passbands are reported as a function of the FSR. CROW filters are further modeled assuming waveguide losses of 20 dB/cm, consistent with excess losses extracted from single stage add-drop multiplexers based on the same racetrack geometry. These waveguide losses are ascribed to excess losses in the coupling junctions of ≈0.04 dB per junction, effectively split over the ring's circumference. The combined filter response can be easily determined by cascading the through port transfer functions of the three stages.
As shown in Fig. 3(b), the gradual increment in the CROW filter order of the cascaded stages leads to an interleaved distribution of the zeros inside the stopband (green, red and blue solid curves) which allows a significant reduction in both the side lobes' level and their spectral width in the combined filter response (black dashed curve). This results from the fact that the width of the stopband is a function of the coupling strength (κ 2 ) which is maintained constant throughout each of the stages, so that they have very similar stopbands. Since the transfer function zeroes are equally distributed throughout the stopband, and have a different number for each stage, this results in interleaving. While each stage has, individually, comparable SSL, once the interleaved transfer functions are combined a significantly improved SSL is obtained. In the proposed device, the exact replication of a unique cell avoids the deleterious effects (excess passband ripple or high lobes in the stopband) that typically arise in apodized CROWs due to unwanted systematic variations in the effective path length of each resonator. Non-systematic variations due to process and film thickness variations remain of course and are partially alleviated by the strong resonator-to-resonator coupling required to achieve the targeted pass-and stopbands.
Since the multi-stage CROW filter uses a cascaded through port configuration, there exists an inverse relation between the 3 dB passband width and the coupling strength (κ 2 ). Figure 4(a) shows the effect of increasing κ 2 on the combined filter response in a three stage CROW with N = 3, 4, and 5 resonators in each stage, respectively. On the one hand, a stronger coupling reduces both the SSL and the passband width. On the other hand, coupling strengths above 0.8 (BW P /FSR < 0.35) already deteriorate the flatness in the passband and introduce insertion losses above 0.5 dB at the center wavelength. Increasing the filter order of each of the stages could compensate for this last effect but at the price of a higher SSL (see Fig. 4(b)) as well as a larger, harder to globally tune structure. The introduction of an additional stage (for example filter with N = 3, 4, 5, and 6 resonators) could achieve both lower SSL and lower insertion loss at the center of the passband. These trade-offs are depicted in Figs. 4(c) and 4(d) where it can be seen that i) there is a minor dependence of the passband width on the filter order and the number of stages, ii) the SSL gets worse with higher filter orders but can be improved by cascading additional stages, iii) the insertion loss at the central wavelength depends on the coupling strength and is mainly given by the stage with the lowest order. All these characteristics make the proposed filter configuration particularly attractive for applications requiring ratios of BW P /FSR between 0.35 and 0.55, SSLs between −12 and −20 dB, as well as very low insertion loss in the passband. We focused our design on a filter aimed at 950 GHz (7.6 nm) passband (BW P ), for the selection of 9 adjacent lines of a 100 GHz FSR comb laser (with the passband chosen somewhat above the minimum required to avoid incurring insertion losses due to the rollingoff of the transfer function at the outlying carriers). In order to introduce enough extinction in the rest of the comb lines on both sides of the passband, the FSR of the filter should verify 2⋅FSR -BW P > BW comb , with BW comb the width of the comb laser spectrum in which the lines exhibit significant power (> −10 dBm). Since the comb lasers of interest for our applications have typical values of BW comb between 15 and 20 nm, we designed the FSR of the CROW filter to be larger than 14 nm (≈1748 GHz).
We selected a CROW filter configuration based on three stages, consisting in 12 identical racetrack resonators with R = 4 µm and L S = 6.7 µm, distributed across three stages according to the orders N = 3, 4 and 5. As previously, 400 nm wide waveguides are etched into the 220 nm silicon device layer of Silicon-on-Insulator (SOI) material followed by the deposition of a PECVD oxide. The gaps between resonators are set to 200 nm in order to achieve the target coupling strength around a target wavelength of 1570 nm (around which the MLL spectrum is centered). These dimensions can be easily resolved by 248 nm DUV optical lithography. The filter achieves the required passband width (7.6 nm) by means of a coupling strength of κ 2 = 0.55 at 1565 nm wavelength, as well as an FSR of 14.5 nm / 1810 GHz (BW P /FSR = 0.53). Figures 4(c) and 4(d) show that the selected design presents negligible insertion loss at the center of the passband and an SSL better than −12.6 dB.
It should also be noted that in order to reach BW P /FSR ≈0.5 as required in our application (constrained by the fact that the FSR cannot be increased indefinitely in order to compensate for a low BW P /FSR), a coupling strength κ 2 of ≈0.5 is needed, while in the apodized CROW filter designs described in section 2.1 a coupling strength of κ 2 ≈0.85 would have been required to obtain a similar (and slightly worse) SSL of −9.6 dB (see the green curve in Fig. 1(b) corresponding to 0.5 dB ripple in the drop port passband and −9.6 dB SSL in the through port configuration). Thus, while this does not allow to conclude on a general comparison of the maximum required coupling strength in the most general case, one can conclude that for our specific wideband filter application the cascaded CROW filter configuration relaxes the requirements on the maximum coupling strength, deconstraining either the minimum circumference of the rings (allowing a higher FSR) or the minimum spacing between the rings (facilitating fabrication).
Experimental results
The devices were fabricated in the standard silicon photonics fabrication process of IME A*STAR with 248 nm DUV optical lithography. Figure 5 shows micrographs of (a) a chip with the fabricated passive filter and (b) a chip in which the resonators have been overlaid with TiN heaters allowing independent tuning of the three CROW filter stages (but not independent tuning of individual rings). As can be seen in (a), not only were the racetrack unit cells and the distance between racetrack resonators kept constant throughout the device, a number of dummy structures were added in the layout of the waveguiding layer in order to minimize process micro-loading effects associated e.g. to etching [21]. Gratings couplers were connected to all ports of the device (including drop ports of individual stages) for complete optical characterization. All measurements reported in the following were made with the chip temperature stabilized to 25°C with a Peltier element.
First, we measured the spectral response of 5 passive filters corresponding to 5 different chips (D1-D5) randomly selected from the same wafer. The individual stages were measured with help of the monitor ports connected to the drop waveguides (labeled in Fig. 5), so that they could be independently characterized even though they are cascaded with each other on the main waveguide bus. Since the light is injected through one monitor port and collected through another monitor port, the transfer function of the stage is collected in through port configuration, with a transfer function that is nominally identical to the one obtained if it had been individually accessed through the main bus waveguide (assuming that non-uniform process variations throughout the CROW stage does not significantly impact the symmetricity of the transfer functions; even then, it would not change the overall statistics of the analysis).
For each structure, the optical characterization of the three individual stages showed a good alignment between the central wavelengths of their respective passbands, with the overall std. dev. of the wavelength misalignment evaluated as 0.15 nm. This number is below the std. dev. of the resonance frequency of individual rings, since the transfer function of entire stages already corresponds to an average. Factoring in the number of rings in each stage, this corresponds to a ring resonance std. dev. of ≈0.3 nm (close by rings within one device). This fact indicates a good reproducibility in the fabrication of the resonators with uniform gap in contrast to the fabrication of conventional CROW filters with gap apodization. Nevertheless, a residual mismatch between resonators still led to some penalization of the SSL (−8.2 dB in the worst measured chip) in comparison with the design value (−12.6 dB). The reduction in SSL seen in the experimental data can be modeled well assuming the resonances of the individual rings to be misaligned with the 0.3 nm std. dev. estimated above. On the other hand, the measured FSR and BW P were consistent with the expected design values. Variations in the central wavelength of the passband across chips are mainly attributed to the non-uniformity in the wafer core layer thicknesses. The measurement results are summarized in Table 1. Next, we measured the spectral response of five CROW filter structures with TiN heaters, corresponding to another five different chips (D6-D10). The initial characterization of the individual stages without thermal tuning showed a larger misalignment between the central wavelengths of their passbands (0.32 nm standard deviation) in comparison to the previous passive filter structures. We attribute these increased misalignments to larger nonuniformities resulting from the larger distance between the stages in the filter layout as made necessary to fit the electrical signal lines (see Fig. 5) as well as to the additional processing steps that may further increase variability. As depicted in Fig. 6(a), the wavelength misalignments between stages increase the SSL (e.g. −6 dB for the filter in D9). In these structures, the thermal phase shifters allowed for the correction of the fabrication induced deviations between stages. The heaters exhibit a resistance of around ≈80 Ω per resonator, with measured resistances of respectively 24.24 Ω, 20.88 Ω and 17.57 Ω for the 3, 4, and 5 resonator stages. The required tuning power is ≈2.6 mW/nm/resonator and 31 mW/nm for the whole structure. The measurement results of the filter structures with heaters are summarized in Table 2. It should be noted that, even after thermal tuning of the three CROW stages, the residual SSL is still slightly worse than the −12.6 dB design value. This points to the fact that, after tuning, the SSL remains limited by variability of the ring wavelengths inside a single CROW stage, that is not compensated for. As previously, the assumed 0.3 nm ring resonance std. dev. within a single stage allows mimicking the experimental results.
Notably, the device on D8 exceeds SSL performance expectations. Since BW P is also an outlier for this device (7.3 m vs. 7.7 nm for the other devices), it appears very likely that a higher coupling strength is the root cause for both deviations in this specific instance.
Finally, Fig. 7 shows the normalized spectrum of a typical comb laser used as a multicarrier light source for WDM communications in [1,2] before (red dashed line) and after being filtered by the proposed three stage CROW filter (solid blue line). In this example, it can be seen that the filter allows for the selection of 9 adjacent lines and introduces a small insertion loss to each one (less than 0.5 dB in the worst channels closest to the edges of the passband). On the other hand, the comb lines outside of the passband undergo a rejection of higher than 12 dB (worst case extinction in the stopband). Since outer comb lines already carry less power by nature of being at the periphery of the MLL's gain spectrum (with a cumulative power −4.2 dB below the power carried by the 9 central lines), and since some lines also overlay with the zeros of the transfer function rather than with the side lobe maxima, after filtering the total power carried by the peripheral comb lines is actually 20 dB below the power carried by the 9 central lines intended to be used as carriers. Thus, the power overhead entering the SOA is significantly reduced.
While a 20 dB rejection might seem large, one has to take into account that the average power of the 9 central carriers undergoes an additional extinction of −13.8 dB after modulation by RRMs in the intended WDM system [2], as a consequence of low drive voltages of 2 V pp sourced by chip scale driver electronics combined with a requirement for high extinction. This results in the peripheral lines carrying −6 dB of the average power of the 9 optical carriers at the entrance of the SOA. A guaranteed worst case extinction of 16 dB applied uniformly across the entire stopband would also have resulted in a guaranteed overall power ratio slightly better than −6 dB at the entrance of the SOA and would thus be adequate even in the worst-case alignment. While the worst case −11.2 dB extinction of the present filter would result in an overall power ratio of -1.6 dB at the entrance of the SOA if applied across the entire stopband and would thus be marginal in a worst-case yield analysis, the actual extinction is much higher as described above. Moreover, this scenario, in which all or most of the comb lines coincide with the side lobe maxima of the filter, is actually impossible due to the comb spectrum FSR differing from the spacing between the zeroes. Moreover, it should be noted that some amount of unmodulated power entering the partially saturating SOA can be beneficial in order to stabilize its gain [2] and reduce cross-gain modulation, so that an overall power ratio of −6 dB is considered fully adequate here. Fig. 7. Normalized spectrum of a typical comb laser used as a multi-carrier light source for WDM communications in [1,2] before (red dashed line) and after transmission through the proposed three stage CROW filter (blue solid line). The filter response in also plotted with a black solid line.
Conclusions
We have shown novel wideband optical filters based on three cascaded CROW stages that combine low insertion losses, low passband ripples, steep passband edges and reasonable extinction without requiring independent thermal tuning of individual resonators. Moreover, the proposed solution avoids issues associated to proximity effects during fabrication and to coupling induced resonance shifts during the design phase that typically play an important role in apodized CROW filters. Fabrication of the demonstrated three stage CROW filter with 248 nm DUV optical lithography resulted in adequate performance for the targeted WDM architecture: The fabricated filters exhibit a wide passband of 7.7 nm and an FSR of 14.7 nm. Measured passive filters feature a side lobe suppression level better than −8.2 dB. Additionally, the incorporation of thermal tuners for the adjustment of the individual stages allows an improvement of the side lobe suppression level to better than −10 dB across all measured devices. These filters were developed specifically to be part of integrated transceivers using comb light sources for the selection of 9 adjacent carriers located on a 100 GHz grid. Very low loss is applied to the selected comb laser lines (<0.5 dB). Out-of-band comb lines are attenuated by an average −16 dB, since some of the rejected comb lines fall in between side lobe maxima (the FSR of the MLL differs from the spacing between zeros in the filter transfer function).
By varying the common coupling strength between resonators throughout the device and by varying the number of stages and/or the number of resonators per stage, different requirements in terms of passband width, edge steepness, and side lobe suppression levels can be reached without having to recharacterize process biases at every design cycle, reducing risk and development time. | 10,383.8 | 2018-02-19T00:00:00.000 | [
"Engineering",
"Physics"
] |
An Integrative Approach to Building Peace Using Digital Media
The purpose of this article is to offer scholars and practitioners a more coherent and holistic starting point for asking questions about information and communication technologies for peacebuilding than has been available so far. A transdisciplinary proposal is made that applies critical pedagogy of peace education to the way that digital media can be used to build peace in communities and societies. This argument is further underpinned by insights from cognitive science and social psychology. The concept of sociotechnical consciousness is developed, which describes what it is like to be experiencing a sociotechnical system. We conclude that, to deploy digital media as part of peacebuilding initiatives, the media’s impact on individuals and groups deserve as much consideration as the content that is delivered via these media. This has important implications for how to design and use media in peacebuilding contexts.
the employed technologies; more specifically, the authors are interested in the role technologies play with regards to empowerment, democratisation, and inclusivity. This focus is partially due to their concern with diplomacy and the intention to provide recommendations to the European Union (Gaskell et al., 2016).
The research documented in the current paper follows on from this work but has a somewhat different focus. The Isôoko project (a Horizon 2020 project funded by the European Union) focuses on ICTs (in particular, digital platforms) for peacebuilding (in particular, peace education) in East Africa (https://i sooko.eu). The situation described above, with regards to empirical evidence, positive bias, and ethical challenges, applies to this domain as much as to any other within the broader field.
The purpose of this article is to offer scholars and practitioners a more coherent and holistic starting point for asking questions about ICTs for peacebuilding (ICT4Peace; Gaskell et al., 2016). With that intention, this article explores the theoretical merit of ICTs (and, in particular, digital media) in the domain of peacebuilding (and, in particular, peace education). As there is an identified lack in conceptualisation and theory in the field, an attempt is made to offer some broad concepts and ideas that will introduce further questions for consideration through empirical study and monitoring and evaluation efforts. Finally, some general implications for wider debates around human development and particular implications for specific actors in the development and peacebuilding field are suggested.
Background
There are many different understandings of the aims of peacebuilding and peace education, but the creation and transformation of relationships are usually part of it (Lederach, 2010;UNESCO-IICBA, 2017). The same is true for peace education, of which there are a number of definitions, for example: "Peace education is the process and practice of developing non-violent skills and promoting peaceful attitudes and learning to pinpoint the challenges of achieving peace" (UNESCO-IICBA, 2017, p. 4). "Peace education in present times aims for the transformation of human consciousness in all aspects of "peace learning toward the development of the full spectrum of the peacebuilder in everyone-inner and outer, personal and professional; and the development of peace systems-local to global" (Lum, 2010, p. 122).
What these definitions usually have in common is the intention to use education to change people in a way that makes peace more likely than would have otherwise been the case. In many cases, the aim is to affect behavioural changes (through changing someone's attitudes, beliefs, or skills) that leads to more peaceful social dynamics (Lum, 2010;UNESCO-IICBA, 2017). However, Zembylas and Bekerman (2013) have argued that peace education also suffers from weak theoretical foundations. They argue that within the field there is a lack of reflection regarding the assumptions and premises that lend legitimacy to peace education ideas and practices.
These conceptual deficiencies as regards both peace education (discussed in detail below) and ICT4-Peace (identified by the WOSCAP study and outlined above) mean that the Isôoko project, and other endeavours in the field, lack the sound theoretical foundations that are necessary for forming appropriate research questions and for developing effective practice. At the very least, addressing these deficiencies would make a contribution to diversifying conceptual and theoretical approaches in peace education and ICT4Peace.
Changing Behaviours to Build Peace
As indicated above, a primary aim of peacebuilding is to change relationships, and peace education is intended to contribute to this by working through education. In peace education, there is a strong focus on the individual, in line with the traditional understanding of education, and social changes are achieved primarily through the individual. The emphasis on behaviour change (influenced by attitudes, values, etc.) is an indicator of this.
The following discussion will therefore proceed at two levels. It will start by addressing the individual and enquiring into what we know about behaviour change and the interventions that may result in such changes. It will then consider a range of psychological and environmental factors before moving on to consider the social level of peace education via digital media and its impact.
Behaviour Change Interventions
As outlined above, behaviour change is an explicit objective of peacebuilding and education and may be understood as a change in a pattern of behaviour such as smoking, drinking or level of physically [sic] activity. However, it can also refer to one-off behaviours such as making a blood donation, and to forestalling a change in a behaviour pattern such as preventing uptake of smoking. (West & Michie, 2016, p. 5) Thus, a "behaviour change intervention" is a service, product, or activity that an actor uses to change the default behaviour of another actor. Many peacebuilding interventions are behaviour change interventions aiming for concrete changes in how individuals or groups treat each other.
Many peacebuilding interventions are behaviour change interventions aiming for concrete changes in how individuals or groups treat each other.
Human beings change their behaviour in a variety of imaginable scenarios-our motivations may change (relative to competing behaviours), or our capability or opportunity to engage in alternative behaviours may increase-and these aspects can be influenced through intervention. People can be supported by removing social, physical, or psychological barriers to behaviour change; environments conducive to desirable behaviours can be created; examples can be provided that help people feel, think, or act in more desirable ways; and interventions can be undertaken to support them in seeing why and how a certain change should be made (West & Michie, 2016).
Some of these observations indicate how behaviour can change due to intentional/deliberate changes to a person's mental state (e.g., seeking out new information that makes alternatives more attractive or that changes a person's motivation). In this scenario, the individual concerned can exercise its agency and change through intrinsic processes. Other aspects of the above indicate the possibility of unintentional behaviour change. For example, people being exposed to changes in their environment might have to adjust their behaviour as a result. Furthermore, behaviour changes might be deliberately one-off, revert (for whatever reason) to original patterns or be sustained for a (in)definite period of time (West & Michie, 2016). This suggests that to some extent our behaviour is determined by people consciously engaging with the world and either intentionally or unintentionally changing their behaviour in response to a stimulus.
However, unintentional behaviour change is more encompassing than that. A good example of this is the role of emotions in determining our behaviour. Our emotions have a critical impact on the way in which we ascribe value to events that we experience. They influence us in judging how desirable an event or situation is to us. "Emotion provides the principal currency in human relationships and as well as the motivational force for what is best and worst in human behaviour" (Dolan, 2002(Dolan, , p. 1191. Emotions also exert a powerful influence over our reasoning, partly due to their embodied nature, but also due to the fact that-compared to other psychological states-our emotions are less influenced by our attentions and because of the effect they have on all aspects of cognition. Research shows how emotional triggers can influence our visual perception, direct our attention, and influence a whole range of other cognitive processes (Dolan, 2002).
Behaviour and Emotions in Conflict Contexts
It is thus no surprise that emotions have been found to play a critical role in contexts of long-term violent conflict (e.g., the Middle East). Halperin (2014) outlines that "extreme emotional phenomena, such as hatred, contempt, and humiliation, which in most aspects of our lives are considered almost illegitimate, constitute the dominant feelings held by many of those living in areas of intractable conflict" (p. 68). In (post-)conflict contexts, people experience emotions with high intensity, and their charged contexts lead to increasing personal sensitivity. Emotions triggered in this way can result in a continuation of conflict and constitute strong psychological barriers to resolving conflict in peaceful ways.
Whilst this has long been recognised, the way that this recognition has influenced research and practice in conflict studies and peacebuilding is highly questionable, from the standpoint of psychology. Halperin (2014) argues that emotions have been treated as "monolithic packages of intergroup negative affect" (p. 68), and whilst such a view at least involves an acknowledgement of the importance of emotions, it is ineffective (or even counterproductive) with regards to supporting conflict resolution and reconciliation.
Many peace education (and conflict resolution) programmes explicitly present reducing (intergroup) hatred, anger, or fear, as well as increasing hope and empathy, as an objective. The Isôoko project is a case in point here. Due to the partners we work with, "empathy" is a core objective of the interventions we deliver; added to this are attitudes like "personal responsibility" and skills such as "critical thinking." Conducting participatory design workshops in Kenya and Rwanda highlighted the fact that such abstract ideas are difficult to work with in practice.
Assessing this through the lens of social psychology leads to the insight that such abstract goals constitute oversimplifications. Before a programme can successfully work with emotions to achieve positive outcomes, it needs to determine the specific contribution that discrete emotions make in its broader sociopolitical context (Halperin, 2014). Considering the complexity of behaviour change from a psychological viewpoint, and the importance of taking into account distinct emotional states in behaviour change, we may conclude that the foundations of behaviour are broader than simply sensing one's environment (e.g., by accessing information) and processing it rationally. Behaviour is critically influenced by how we experience being in the world rather than being determined solely by what we know about the world.
Using this insight to reflect on ICTs in the context of peacebuilding (ICT4Peace) highlights the need to look beyond the content we disseminate via these technologies. It becomes necessary to determine people's distinct emotional states as conditioned by both content and technologies. This is necessary to prevent further harm and trauma but is also a prerequisite for sustainable changes in individual and collective states of human development.
Consciousness and Peacebuilding
As peace builders, we must take individuals' experiences seriously because monolithic approaches are misleading and in some cases counterproductive.
If we are concerned with people's experiencing of their world then, by definition, we are concerned with the phenomenon of consciousness. A discussion of consciousness has direct implications for our comprehension of psychological states that contribute to continuing dehumanisation and conflict, as well as for our approaches to intervening in such situations.
As peace builders, we must take individuals' experiences seriously because monolithic approaches are misleading and in some cases counterproductive.
According to Stangor, who provides a particularly succinct account of the connections: Our experience of consciousness is functional because we use it to guide and control our behaviour, and to think logically about problems. Consciousness allows us to plan activities and to monitor our progress toward the goals we set for ourselves. And consciousness is fundamental to our sense of morality-we believe that we have the free will to perform moral actions while avoiding immoral behaviours. (Stangor, n.d.) Nevertheless, it is becoming increasingly clear that we regularly perform relatively complex behaviours (such as driving a car) without being conscious of doing so and that there is a wide range of behaviours over which we have little to no control. For example, psychologists differentiate between explicit (conscious) and implicit (unconscious) memory and between controlled (conscious) and automatic (unconscious) behaviour (Stangor, n.d.).
So far, we have indicated that consciousness is relevant to our enquiry by showing that it influences our behaviour. We have pointed out that our lived experience plays a crucial role in determining our behaviours. It was highlighted that and how emotions condition our behaviour in complex ways that require specific attention to discrete emotional states. Furthermore, it was argued that a large percentage of human behaviours are (to varying degrees) unconscious.
Experiences and mental states work at different levels in our individual consciousness; they span mental and physical layers (Damasio, 2000(Damasio, , 2018. Trauma, as well as recovery, are processes that we can work with and influence. Behaviour change interventions are one form of such influencing. Emotions (and emotion regulation) play critical roles in these processes, and we should not underestimate the importance of personal experiences, situations, and context.
Building Peace by Changing Who People Are
Given these psychological considerations, working to "increase empathy" in a population looks overly simplistic and even risks re-traumatisation of some individuals, as the interventions do not sufficiently take into account individuals' situations. In the following, we will develop the hypothesis that taking an integrative approach to peacebuilding (via digital media) holds promise for lessening the complications of group/societal interventions and, at the same time, deepen their transformative potential. To develop an integrated approach for building peace using digital media, the below section will consider lessons from peace education and other fields to strengthen our understanding of ICT4Peace.
Changing who people are through systematic intervention. Education systems are systematic interventions in a society to enable the formation of people who develop to be members of that society. A common distinction drawn in peace education literature and practice is that between additive approaches and integrative approaches to peace education. In additive peace education, the knowledge, values, and skills relevant to peace are taught in specific subjects that are added to the curriculum. In integrative peace education, the knowledge, values, and skills relevant to peace are integrated into the curriculum as a whole (and beyond). In the Kenyan context, for example, Lauritzen (2016) states that UNESCO argued that peace education has to move beyond the mere teaching of peace as a subject, and address the violent school cultures, the organisation of schools, and the policies guiding the system. Addressing such a range of layers in the system requires an integrative rather than an additive approach. (p. 324) Integrative peace education promotes the frequent practice of transferable skills and the development of environments that are more conducive to the behaviours that peace builders like to see, develop, and enact. In contrast to additive peace education, integrative peace education does not consider peace education to be (primarily) a matter of content (and thus as a separate subject) but as something else (noncontent).
Despite this, Zembylas and Bekerman highlight that common integrative peace education approaches, understanding of valid knowledge, values, and skills, is often monolithic and centrally defined. "Therefore, the perspective of an integrative theory does not necessarily provide any fundamental educational rationale for peace education, other than claiming ipse facto [sic] that there are universal notions of problems and solutions with little attention to locality and contextualization issues" (2013, p. 199).
Why might this be an issue? In the light of the psychological literature discussed above, it is clear that individual people's experiences are crucial in any attempts to change behaviour. Even though they do not discuss the individual level, Zembylas and Bekerman (2013) argue that at the social level, people's situation and context need to be taken into account in any kind of peacebuilding and education intervention, and call for the prevention of "one size fits all" approaches that are likely to entrench and exacerbate existing inequalities. As an alternative, they set out a "critical peace education" paradigm, in which they suggest basing peace education on four interlinked foundations: "reinstating the materiality of 'things' and practices; reontologizing research and practice in peace education; becoming 'critical experts of design'; and, engaging in critical cultural analysis" (2013, p. 203). With regards to content, within the framework of critical peace education, they argue that the creation of meaning is a reflective accomplishment that individuals co-author in-context and in-process. Individuals' reflections concern themselves, the process, and the context. Thus, "actors are being constituted and constituting the environments in which and for which they have to make sense. We posit that raising actors to consider all these (indeed) complex aspects might possibly 'slower' their 'progress,' yet deepen their humanity" (Zembylas & Bekerman, 2013, p. 204).
From this perspective, it becomes clear that content is merely an opportunity for practice; it is practising itself that leads to transformational change (such as the deepening of humanity). This focusing (away from content and) on practice is common in other approaches to critical pedagogy. Freire (1996), in his well-known work, Pedagogy of the Oppressed, disapproves of "traditional" ideas in education and describes them as systems of indoctrination that aim at the standardisation of consciousness (with a political interest). In contrast to this, he develops a critical pedagogy, based on critical theory, which aims at the awakening and development of a critical consciousness. He, and others since, approached education as an ontological, and not merely epistemological, enterprise (i.e., concerning being rather than just knowledge). In contrast to what is sometimes described as the banking system of education, where the transfer of content is the main priority, critical pedagogy considers education to be working with the whole person with the aim of developing, through practice, transferable skills (amongstother capacities) such as critical thinking.
What critical pedagogy contributes, both in general and as applied to peace education, is at least threefold. First, education is an example of a system/intervention/environment that transforms our consciousness at individual and collective levels. The above example shows that perspectives in pedagogy reflect views on how different approaches focus on different levels and priorities in personal (and collective) development (e.g., critical consciousness, critical thinking, or content). Second, there is a call to reontologise education. With regards to peace education, Zembylas and Bekerman (2013) argue . . . for the need to reontologize what has been epistemologized; that is, we emphasize the need to materialize abstractions and ask about their consequences in everyday life. In other words, we are asking whether and how (if it is possible) we can reontologize educational rhetoric about peace and conflict. (p. 205) Third, there is a focus on practice rather than knowledge.
A focus on practice, and with it a call for a reontologising, is also found in other theories of learning that have become more prominent in the last 3 decades. The focus on practice is strong in Lave and Wenger's (1991) theory of communities of practice and the underpinning concept of situated (and/ or social) learning. Their theory focuses on social practising and acknowledges individuals to be members of a sociocultural community in which knowing is an activity undertaken under specific circumstances by specific people. This view of "knowing" is what reontologises. It situates a person in the world and forms that person's whole experience of the world; the interrelationship between person and world is of central relevance to knowing and learning. Their social theory of learning (Figure 1) is in line with the focus on (Wenger, 1999, p. 5). experience above and highlights the role of practice (and further aspects of situatedness and embeddedness) as elemental in learning processes. Polanyi (1967), in the field of knowledge management, proposed similar ideas of knowing when publishing his work on "the tacit dimension." Tacit knowledge and knowing, in his understanding, are processes that are fundamentally about the whole person, so much so that it is often difficult to externalise (i.e., communicate via language) what it is we know or why we behave in the way we do. It is difficult (and in many situations impossible) to externalise what we know since in that process (of attempting to externalise it), we are taking it out of the context of our own knowledge (knowing) and of who we are (being), and thus, it loses the meaning it had for us. With tacit knowing, we are back at an understanding of learning that is somewhat in line with the implicit (unconscious) memory that psychologists refer to. In the cases of both tacit knowing and implicit memory, scholars argue that one of the crucial identifying features is the practical difficulty or impossibility of externalising what is known. They also share an emphasis on this type of knowledge being acquired, for the most part, by practice and participation rather than consumption of information.
When aiming at changing practice or behaviour, pedagogical approaches that work with non-content aspects of learning are at the very least highly relevant and may be the approaches most likely to open spaces for personal and/or collective transformational change. In peace education, as well as in pedagogy generally, there are various debates around prioritising working with different dimensions and aspects of human consciousness. If we regard the education system as a technology, which is possible when using a broad conception of technology (e.g., Kelly, 2010), we can draw explicit parallels between education and other technology-related theories (like ICT4Peace). (1971) co-developed pedagogical approaches that understand "teaching as a subversive activity," as well as the theory of technopoly and our understanding of media ecology.
Non-content and technologies. Postman
Media ecology looks into the matter of how media of communication affect human perception, understanding, feeling, and value; and how our interaction with media facilitates or impedes our chances of survival. The word ecology implies the study of environments: their structure, content, and impact on people. An environment is, after all, a complex message system which imposes on human beings certain ways of thinking, feeling, and behaving. (Postman, n.d.) The concept of media ecology is based on the work of McLuhan (2001), who is most famous for his argument that "the medium is the message" (a phrase he coined in 1964), in which he emphasised the importance of the non-content domain in (media) communications. The characteristics of the medium influence the content, but more importantly he suggested that the medium itself (more so than the content) changes individuals and societies. With a broad definition of technology, moving from the study of mass communications and media to other technologies is a single conceptual step. Important contributions to our argument can be identified in the domain of philosophy of technology, a crucial concept to highlight being that of "technical mediation." Technical mediation is concerned with "how technology mediates human existence" (Dorrestijn, 2012, p. 16). Technology, in the philosophy of technology, is generally not seen as a neutral thing but rather as inherently politicised and moralised, as in the work of, for example, Latour and Venn (2002) and Verbeek (2006Verbeek ( , 2011. Without getting too deeply into these arguments, it can be safely assumed that technology does influence human behaviour, and from the same literature, we can conclude that technology influences our morals and values. This body of research also raises the question of whether designers of technology should intentionally design technologies in a way that encourages certain behaviours that promote, for example, safety or sustainability (or peace; Dorrestijn, 2012).
Again, the Isôoko project can serve as a case for illustrating the relevance of the above. Besides the non-governmental organisations (NGOs) and community-based organisations (CBOs), there are also technology partners involved in the project. What they offer to the Isôoko project are crowdsourcing technologies and digital infrastructure (hardware and software) for the consumption and interaction with digital content. As these technologies had already been deployed (partially or fully) in the context of our work (e.g., Kenya and Rwanda), a question arose around how these technologies were already mediating human existence (prior to the Isôoko project conducting behaviour change interventions). During the design processes of our trials and pilots, it became apparent that to assess any changes to human behaviour (resulting from the piloted interventions), findings will have to be produced that illustrate how the technologies mediate people's experiences.
In the context of this article, we are raising the questions: How do digital technologies affect our moral life, how do they mediate our existence, how do they change our experiences, and how do they change our behaviour in ways that go beyond the content dimensions?
How digital technologies affect behaviour is studied in psychology, amongst other fields. For example, digital behaviour change interventions are ICT products or services that promote behaviour changes. West and Michie (2016) argue that digital interventions are no substitute for other interventions (such as persuasive media campaigns, punishment of behaviours, incentivising more desirable behaviours, etc.) but make their distinct contribution by focusing "on amplifying or adding to them by increasing users' abilities to put decisions to change behaviour into effect, and to sustain the new behaviour" (West & Michie, 2016, p. 2).
In any intervention, West and Michie distinguish between content and delivery. Delivery can encompass a range of aspects that are not addressed explicitly in their work. However, the relevant challenge they set is: (West & Michie, 2016, p. 2) This challenge echoes the lack of theoretical foundations and empirical evidence that we have identified in the field of ICT4Peace.
An Integrative Approach to Building Peace Using Digital Media
In what follows, we will attempt to offer a response to this challenge by drawing together the different arguments and insights we have explored so far, and by doing so, we offer a holistic theoretical framing for ICT4Peace.
We have seen that in peacebuilding, people's consciousness is of primary concern. How we experience our world is crucial for the values and morals we hold and the behaviours we display. Education is a systematic approach of working with people's consciousness; critical pedagogy makes this clear and, furthermore, outlines pedagogical philosophies and approaches that can develop critical consciousness in human beings. These threads open the field to different understandings of learning and the importance of practice. In practice, the content becomes secondary, and processes and structures become the focus of attention. The structures and processes in which we are embedded (the environments that we experience) have a direct impact on our experiences and, over time, mediate the way we perceive and engage with the world. Technologies, whether in the form of an education system or a digital medium, shape our practices and consciousness in important ways.
It becomes evident that the boundaries between systems that mediate human experiences are fluid when observed through the integrative lens we propose; this is the case, for example, with the conceptual boundary between human development and peacebuilding. However, this does not mean that these concepts merge or that either loses its value. The integrative approach suggests that boundaries between, for example, peacebuilding and development can only be drawn within specific contexts and situations.
Based on the above, it is evident that, when people experience the same technological environment, there will be overlapping influences that the processes and structures embedded in the technology will have on individuals; interaction patterns between technology and people will, inevitably, emerge. Thus, technologies (like education systems) have a standardising effect on our consciousness (due to them mediating and patterning our experiences and practices). This leads us to the fundamental insight that an integrative approach to building peace using digital media is needed. The aim of an integrative approach to building peace using digital media is the empowerment of people to determine how digital technologies mediate their experiences and practices.
In conflict studies and peacebuilding, we are concerned with individual actions and behaviour, as well as the social phenomenon; we are concerned with both individual consciousness and a type of collective consciousness. The type of collective consciousness applicable to the argument we will term sociotechnical consciousness. Digital media are generally used by more than one person, which means that not only are the experiences we make and the practices we engage in mediated by the digital spaces we inhabit but also by the social networks we are embedded in. This is well established in sociotechnical theory.
Sociotechnical consciousness is an emergent process of a sociotechnical system that gives expression to what it is like to be experiencing that system. This process leads to patterning and to emergent (transient) patterns of which people are mostly unconscious; we are mostly unconscious of the ways in which, for example, technologies pattern our experiences because we are usually occupied with focusing on the content (rather than the environment). When directing our consciousness at the patterns or at the process itself, through collective practising, we can make collectively conscious attempts at changing these patterns or even the emergent process of sociotechnical consciousness itself; by redirecting our awareness and consciousness at the patterns we experience when, for example, we use a certain technology, we can redesign (or appropriate) those technologies to suit our intentions. In this case, the non-content dimensions (e.g., processes and structures) become the content dimension of our sociotechnical consciousness.
Sociotechnical consciousness is an emergent process of a sociotechnical system that gives expression to what it is like to be experiencing that system. For example, as discussed above, some scholars consider conventional education processes and structures to be oppressive and to lead to a standardisation of consciousness. Instead, they promote different pedagogical approaches that ask us to direct our consciousness at the educational processes and structures (non-content) that influence us and realise that we need to shift our practices for a more critical consciousness (as in the case of critical pedagogy) to emerge (in ourselves and our collectives). This is transformative sociotechnical change; this is a fundamental qualitative shift in our collective experiencing of the sociotechnical system.
In the ICT4Peace context, the Isôoko project can serve as an example. The co-design process that led to the development of our digital behaviour change interventions engaged NGOs, CBOs, and service recipients in activities that elicited their views on "how can we use the technologies available to us to support peacebuilding in your community?" This acknowledged the importance of people's contexts and situations, and it enabled the practising of co-creation and critical thinking. However, it did not include a collective reflection on what it is like to be experiencing the technologies as they exist. The values and practices encoded in the ICTs have remained (for the most part) unquestioned, yet they will mediate human experiences of any behaviour change intervention undertaken via these technologies. This example shows how the integrative approach proposed in this article can be aspirational for practitioners in ICT4Peace. Further aspects of this are outlined below.
An integrative approach to building peace using digital media will avoid the mass distribution of contents to passive audiences. Such mass distributions are merely epistemological interventions that lead to a standardisation of consciousness. Neither does the integrative approach call for the personalisation of content by third-party agents (human and/ or algorithmic). Instead, it calls for the facilitation of opportunities to practice critical engagement with digital media and content as people-in-the-world.
An integrative approach to building peace using digital media will avoid the mass distribution of contents to passive audiences.
Integrative digital behaviour change interventions co-create environments with people that enable us (designer users) to change who we are by practising, belonging, experiencing, and doing. This process (and its accommodating structures) leads to the emergence of a sociotechnical consciousness that is qualitatively different from content-focussed endeavours (that standardise consciousness). The intended result shifts from people who know certain things to people who experience and participate in their world in certain ways (e.g., critically, peacefully). When offering up and facilitating such processes, the domain of taken for granted (and unconscious) processes and structures that condition our behaviours shrinks proportionally with the literacy that is generated in the process of human development across all ontological domains. Research in the diverse fields discussed above-such as critical pedagogy, media ecology and technopoly, technical mediation, mediatisation, and so on-all point towards this being the case. Our technological environments continually mediate and can transform our sociotechnical consciousness. An integrative approach to building peace using digital media facilitates people becoming conscious of the very ways in which their consciousness is mediated by the technologies they engage with.
Transformation for Peace?
The above discussion enables us to synthesise a range of findings that are of importance to the theory and practice of peacebuilding. peace education, and the use of digital media for such ends. First, research from critical pedagogy and critical peace education implies that whole person, person in the world, ontological, and non-content approaches are of key importance for peace education and digital behaviour interventions.
Second, peacebuilding is about who people (individually and collectively) are rather than what they know. What they know is just one aspect of the person in the world and offers limited potential for behaviour change, as behaviour is based on who people are rather than on what they know. This shifts the focus from the content we distribute via ICTs towards the values and practices they encode.
Third, trauma's impact on people goes beyond what they are conscious of (the contents of consciousness), for example, what they think about, reflect on, or feel. The impacts of people's experiences embed themselves into the very processes of consciousness and cognition (non-content) and pattern our lived experience from there on. ICT4Peace is thus about how technologies mediate our lived experiences; how they pattern the practising of certain skills, communicating, and consuming information in certain ways, and, more generally, influence us as people-in-the-world in immeasurable ways of which we are usually unconscious.
Fourth, empathy, for example, in peace education is not about people feeling empathetic (towards someone in a certain situation) but about being more empathetic. The difference here is that the former is a content of consciousness (what it is like to experience being with that person in that moment) and the latter an increased general capacity (a qualitative change to consciousness). 1 This calls for an ontological perspective.
Fifth, working with consciousness from an ontological perspective shifts the focus from content to practice. Content only leads to transformational change if embedded in practice. Sociotechnical systems embody and pattern practices that mediate our sociotechnical consciousness. In other words, the ontological perspective highlights the circular relationship between the changing individual (and groups) and the mediating environments.
Sociotechnical systems embody and pattern practices that mediate our sociotechnical consciousness.
Sixth, human relationships (which are at the core of peacebuilding) are one mode in which the noncontent domain of sociotechnical consciousness expresses itself. It is who we speak to and the quality of our relationships with them that matter more than what we speak about. For ICT4Peace, this means that addressing the destructive influence of existing social media is as (if not more) important for peacebuilding and human development as the design and implementation of ICT4Peace interventions.
Integrative Approaches and Design
Beyond these considerations, important questions remain. One of them was raised above, in the context of the philosophy of technology, namely: Should designers of technology design technologies intentionally in a way that encourages certain behaviours and practices, for example, to promote safety or sustainability (Dorrestijn, 2012).
In addressing this question, we take our cue from critical pedagogy. As technology is mediating our existence already, we should direct our sociotechnical consciousness at this process of mediation and endeavour to influence the process at the non-content level of technologies and the transformative effect they have on us. We need to identify the patterning that results from the use of technology and develop a critically conscious engagement with technology, something that Dorrjestin (2012) and others (e.g., Kelly, 2010) have called for.
Contrary to dominant, modernistic, approaches in moral philosophy, the framework of technical mediation and technology, allows one to give an account of the ethical subject which is not in opposition to the influences of technology. Instead, the focus is on the emergence, self-constitution of the ethical subject through practices of coping with its own conditioning circumstances. (Dorrestijn, 2012, p. 159) Furthermore, Dorrjestin (2012) calls for designers to consider how the technologies they design mediate human experience and existence and offers a framework as a tool for designers. The theoretical foundations outlined in this article add a variety of further considerations for design processes, such as a focus on practice, different modes of learning, the importance of experiencing, and others.
The design issues raised above also align with conversations in the responsible innovation domain. One of the central non-content considerations in that domain concerns values; there is a range of "design-for-values" approaches, all of which seek to link the embeddedness of values in technology with responsible design and innovation processes. One example, for illustration purposes, is valuesensitive design (Umbrello, 2018;van den Hoven, 2013). Value-sensitive design may be defined as "a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process" (Friedman et al., 2006, p. 349), and it underlines how the (moral) values of designers manifest in designed artefacts (van den Hoven, 2007).
Some of the work in this domain has started to point in directions not dissimilar to the foundations we have been laying in this article. This is exemplified in the work of Rychwalska and Roszczynska-Kurasinska (2017), who argue that groups are able to change their governance and aims in response to emergent patterns of interactions between individuals. The systemic design that mediates their experience can be redesigned when informed by the emergent collective awareness of the group.
This article suggests that the phenomenon of sociotechnical consciousness is at the core of such explorations in theory and practice. Alongside this, value-sensitive design, for example, is highly relevant to ICT4Peace because human (moral) values are seen as of key importance to behaviour in conflict contexts. However, as shown above, this is merely one domain that can be considered in the design process.
Conclusions
The foundations of an integrative approach to building peace using digital media set out above have fundamental implications for ICT4Peace and beyond. Bringing together contemporary understandings of consciousness, cognitive science, behavioural psychology, technology, and education and pedagogy, with an eye on how to promote peaceful societies, we have identified a need to focus on non-content domains, the patterning of sociotechnical consciousness, the ways our environments mediate our experiences, the processes by which we (individually and collectively) critically engage with our environments, and the processes by which environments/spaces come into being (and are appropriated).
The underpinning process has been termed sociotechnical consciousness. Sociotechnical consciousness is an emergent process of a sociotechnical system, which gives expression to how it is like to be experiencing that system. The process leads to patterning and the emergent (transient) patterns we are generally unconscious of (because they are taken for granted or broadly accepted as being the norm). In integrative approaches to building peace using digital media, we facilitate people directing their sociotechnical consciousness at the patterns and/or the process itself; through collective practising, we can make conscious attempts at changing the patterns or/the process itself.
The values, attitudes, and skills that we hold are continually reinforced and/or transformed through practice and repetition (within the context of meaning, community, and identity). For effective peacebuilding, we need to design our lives in ways that embed the practice of relevant values, attitudes, and skills into the (sociotechnical) environments we experience. As these skills, attitudes, and values transcend the content with which we can engage (such as history, natural science, neighbourhood matters, etc.), we can partially avoid re-engaging with traumatising content in the development of critical thinking, active listening, mediation, emotion regulation, personal responsibility, or mutual understanding.
For effective peacebuilding, we need to design our lives in ways that embed the practice of relevant values, attitudes, and skills into the (sociotechnical) environments we experience.
Sociotechnical consciousness is both an emergent phenomenon in a sociotechnical system and a collective capability that can be enhanced and developed. The integrative approach explored here suggests that ICT4Peace requires engaging as critical makers with and within the media that condition our experiences. Thus, the facilitation of the integrative approach towards building peace using digital media lies within the remit of any designers and developers of digital media, journalists, and other influencers using digital media for communication, as well as anyone else whose experience is mediated by these technologies. As, depending on the context, this may include large proportions of communities and populations, their representative bodies (be they governmental or non-governmental) have a responsibility for ensuring that the ICTs that mediate our experiences foster peace amongst us.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project has received funding from the European Union's Horizon 2020 Research and Innovation programme under the grant agreement: No 779793. | 9,571.8 | 2020-10-30T00:00:00.000 | [
"Education",
"Political Science",
"Computer Science"
] |
Formation of hybrid higher-order cylindrical vector beams using binary multi-sector phase plates
Nowadays, the well-known cylindrical vector beams (CVBs) – the axially symmetric beam solution to the full-vector electromagnetic wave equation – are widely used for advanced laser material processing, optical manipulation and communication and have a great interest for data storage. Higher-order CVBs with polarisation order greater than one and superpositions of CVBs of various orders (hybrid CVBs) are especially of interest because of their great potential in contemporary optics. We performed a theoretical analysis of the transformation of first-order CVBs (radially and azimuthally polarised beams) into hybrid higher-order ones using phase elements with complex transmission functions in the form of the cosine or sine functions of the azimuthal angle. Binary multi-sector phase plates approximating such transmission functions were fabricated and experimentally investigated. The influence of the number of sectors and a height difference between neighbouring sectors, as well as the energy contribution of the different components in the generated hybrid higher-order CVBs were discussed in the context of polarisation transformation and vector optical field transformation in the focal region. The possibility of polarisation transformation, even in the case of weak focusing, is also demonstrated. The simple structure of the profile of such plates, their high diffraction efficiency and high damage threshold, as well as the easy-to-implement polarisation transformation principle provide advanced opportunities for high-efficient, quickly-switchable dynamic control of the generation of structured laser beams.
threshold and efficiency of commercially available solutions, which somewhat limits the use of SLMs with high power lasers, for example, it requires additional SLM cooling systems 48 .
Previously, we demonstrated the conversion of an azimuthally polarised beam to a radially polarised beam, and vice versa, by introducing a higher-order vortex phase singularity into an investigated CVB 17 . In this paper, we demonstrate that interrelation of polarisation with the phase of the light field can be used for the transformation of an order of cylindrical polarisation and generation of hybrid CVBs and their superpositions. In contrast to the above-mentioned complex techniques, we propose to use easy-to-manufacture two-level pure-phase diffractive optical elements, the so-called binary multi-sector phase plates, fabricated on the fused silica substrate with a high damage threshold to realise the transformation of CVBs. The numerical simulation and experimental results obtained demonstrate the efficient formation of nth-order CVBs and their superpositions using n-sector phase plates even under conditions of weak focusing (NA < 0.7).
Results
Theoretical analysis. For the description of the focusing of CVBs in both the paraxial and non-paraxial cases, the Debye approximation is widely used 25 . In this case the electric field components of a monochromatic electromagnetic wave can be calculated as follows: [ where (ρ, ϕ, z) are the cylindrical coordinates in the focal region, (θ, φ) are the spherical angular coordinates of the focusing system's output pupil, α is the maximum value of the azimuthal angle related to the system's numerical aperture (NA), so changing the value α makes it possible to vary the sharpness of focusing, B(θ, φ) is the transmission function, T(θ) is the pupil's apodization function (equal to θ cos for aplanatic systems), k = 2π/λ is the wavenumber, λ is the wavelength, f is the focal length, c x (φ) and c y (φ) are the polarisation coefficients of the incident radiation. It is evident that for small values of NA, the z-component of the electric field becomes insignificant.
Different types of CVBs with the polarisation order p and the inner polarisation rotation of the beam φ 0 can be described by the following generalised expression 36 : 0 0 For light fields described by Eq. (2) and for radially-symmetric light fields (B(θ, φ) = R(θ)), Eq. (1) can be written as follows: As follows from Eq. (4), the z-component of a tightly focused electromagnetic field is not zero at the optical axis only in the case when p = 1, and φ 0 = 0 or π, that is, in the case of radial polarisation. The z-component completely disappears in the case when p = 1, φ 0 = π/2 or 3π/2, that is, in the case of azimuthal polarisation. For other cases of CVBs, the z-component has cosine or sine dependence on the angle ϕ: for example, for a CVB with p = −1 and φ 0 = 0 E z (ρ, ϕ, z) ∝ cos(2ϕ) and for a CVB with p = −1 and φ 0 = π/2 E z (ρ, ϕ, z) ∝ sin(2ϕ) (see Figs 1 and 2). Such negative-order CVBs can be obtained by passing the positive-order CVB through a half waveplate [49][50][51][52] . In the case of negative-order CVBs, light field patterns generated after passing through a rotating linear polariser rotate in the direction opposite that of rotation in the case of positive-order CVBs 49 . In contrast to the positive-order CVBs, the energy contribution of the formed z-component of the tightly focused negative-order CVBs is always less than the energy contribution of the x-and y-components (see Fig. 2).
Generation of hybrid CVBs using multi-sector phase plates. The interrelation of polarisation with the phase of light field allows the use of diffractive optical elements for the transformation of first-order or lower-order CVBs into higher-order CVBs (p > 1). The easiest way to increase the polarisation order is to increase the multiplicity of the angle in Eq. (2). A partial solution of this problem is possible by multiplying the ScIentIfIc RepoRtS | (2018) 8:14320 | DOI:10.1038/s41598-018-32469-0 initial cylindrically polarised electromagnetic field by the cosine or sine function of a multiple angle. For example, a combination of the radially polarised beam with the cosine function of the angle ϕ has the following form: It is clear that the result of Eq. (5) is in fact a superposition of two laser beams -a beam linearly polarised in the x-direction and a second-order CVB. For a combination of the first-order radial polarisation with the sine function of the angle ϕ, the following result is obtained: Rad Lin y p 2, /2 0 that is, a superposition of two laser beams with polarisations orthogonal to those obtained in Eq. (5), namely, a beam linearly polarised in the y-direction and a second-order CVB with the inner polarisation rotation of π/2. In general, an increase in the multiplicity of the angle for the used cosine and sine functions leads to the following transformation: The well known binary multi-sector phase plates 53,54 can be used for generation of functions approximating the above considered trigonometric functions [55][56][57] . The expansion in a Fourier series of the transmission function of a phase plate f(ϕ) = exp[iψ(ϕ)], where ψ(ϕ) is a phase of the phase plate, has the following form: Note the contribution to the intensity near the optical axis comes from terms with the low indices n 56,57 ; other terms will change just the off-axis distribution because of higher frequency.
Several configurations of the binary phase plates will now be considered. Then, the transmission function of the phase plate has the following form: From Eq. (12), it is evident that such phase plate simultaneously generates a set of trigonometric functions with decreasing weight factors. In this case, if ϕ 1 − ϕ 2 = π, then the free term a 0 is absent. In particular, when ϕ 1 = 0 and ϕ 2 = π, Eq. (12) is transformed to the following: n 0 that is, the phase plate corresponds to a sum of the sine functions of the odd orders. Taking into account the decreasing weight factors, as well as the concentration of the energy in the focal region near the optical axis, we can assume that such a phase plate substantially corresponds to the sin ϕ function and can be used for transformations described by Eqs. (5)-(8). It is evident, when the phase plate rotates by 90 degrees, we get an analogous sum of cosine functions with cos ϕ as the main term.
N-sector binary phase plate with phase values of ϕ 1 and ϕ 2 for different sectors
The general formula for an N-sector binary phase plate has the following form: As follows from Eq. (14), the first term of the series corresponds to the sin(Nϕ) function, while the other harmonics have orders changing by 2N and proportionally decreasing weight factors. The presence of the free term (for ϕ 1 − ϕ 2 ≠ π) leads to the presence of an additional term corresponding to the initial polarisation state in the polarisation superpositions described by Eqs. (5)- (8). In order to estimate the energy contribution of the free term and the first (main) term in the series, we assume that ϕ 1 = 0, then the energy contribution of the free term equals ϕ | | = a 4cos ( /2) . Thus, it is possible to change the ratio between the initial polarisation and the formed one by varying the value ϕ 2 . In particular, the equal energy contributions of these two components is when ϕ = ≈°π 2arctan( ) 96 2 2 2 . It is evident that the average value of ϕ N sin ( ) 2 equal to 0.5 does not change when N is replaced with a multiple of it, so the total fraction of the total sum in Eq. (14) is , taking into account that the sum of the series in brackets is π 2 /8. Thus, the energy contributions of the free term and the total sum of the series are the same when ϕ 2 = 90° = π/2. Figure 3 visualises the results of replacing the trigonometric functions cos(mϕ) or sin(mϕ) by a binary multi-sector phase plate. The distributions formed in the focal region are very similar in structure. Some differences are observed only in the peripheral area 56 .
Numerical modelling. Figure 4 shows the modelling results for tight focusing (NA = 0.99) of a laser beam with radial polarisation passing through binary multi-sector phase plates with ϕ 1 = 0, and ϕ 2 = π or π/2. It is clear that the modelling results are in good agreement with the presented theoretical analysis. In the case of a two-sector phase plate with ϕ 2 = π (the first column of Fig. 4), whose action analogous to the action of the sin ϕ function, an initial radially polarised CVB is transformed into a sum of a beam linearly polarised in the y direction e _ Lin y and a second-order CVB When this phase plate is rotated 90 degrees (the second column of Fig. 4), its action is similar to that of the cos(ϕ) function -that is, a superposition of laser beams with different polarisations (a linearly x-polarised beam e _ Lin x and the second-order CVB φ = = e p 2, 0 0 ) is generated. As follows from Eq. (14) and the modelling results, the presence of the two-sector phase plate with ϕ 2 = π/2 (the third column of Fig. 4) leads to a situation when half of the initial light energy does not change its initial polarisation state (radially polarised beam e Rad ) and half of the initial light energy is transformed to a superposition of a linearly y-polarised beam and the second-order CVB φ = = e p 2, 0 0 . A four-sector phase plate with ϕ 2 = π (the fourth column of Fig. 4), whose action is analogous to the sin(2ϕ) function, allows to generate the superposition of CVBs: . − . . In this case, the central part of the generated light pattern has circular polarisation and the peripheral part of the pattern has the polarisation analogous to the polarisation formed by the 4-sector phase plate with ϕ 2 = π. Finally, the presence of a 8-sector phase plate with ϕ 2 = π or π/2 (the sixth and seventh columns of Fig. 4) has a hybrid polarisation state, partially radial and partially circular. When ϕ 2 = π, the azimuthal polarisation is substantial.
Analogous modelling results obtained when using a light field with azimuthal polarisation as the initial field are shown in Experiments. The optical setup for the experimental investigation of the higher-order hybrid CVBs generation and polarisation transformation is shown in Fig. 6A. The input laser beam was extended and spatially filtered by a system composed of a microobjective MO1 (10×, NA = 0.2), a pinhole PH (aperture size of 40 μm), and a lens L1 (focal length of 150 mm). The collimated linearly polarised laser beam with a Gaussian profile of intensity distribution (waist diameter is approximately 3 mm) was transformed into a "donut"-shaped first-order radially/ azimuthally polarised laser beam using a commercially available S-waveplate (Altechna, clear aperture diameter of 4 mm). Then, a wavefront of the formed laser beam was modulated using the fabricated 2, 4, or 8-sector phase plate. The 2, 4, and 8-sector phase plates with a diameter of 4 mm were manufactured on surfaces of 2-mm thick fused silica plates. Two variants of each of the sector plates, with a height difference between neighbouring sectors of approximately 290 ± 20 and 580 ± 20 nm corresponding to π/2 and π-phase shift at 532 nm, were manufactured (see an example of the manufactured 8-sector phase plate with relief steps with the height h = 580 ± 20 nm and side-wall inclination angle of 5 ± 1 degrees in Fig. 6B). A combination of two lenses, L2 (f 2 = 250 mm) and L3 (f 3 = 150 mm), and a diaphragm was used for spatial filtering of the modulated laser beam. Finally, the generated higher-order hybrid CVB was focused by microobjective MO2 (40×, effective numerical aperture NA eff = 0.
Conclusion and Discussion
We conducted a theoretical analysis of the generation of hybrid cylindrical polarisation states of the light field using optical elements having a transmission function defined by the cosine or sine function of the azimuthal angle in the polar coordinate system. Easy-to-manufacture binary phase elements in the form of the multi-sector phase plate allowed the approximation of these functions and to experimentally realise the polarisation conversion from low-order radially/azimuthally polarised beams into higher-order superpositions. Such transformation does not depend on the numerical aperture of the focusing optical system and can also be performed under conditions of weak focusing (NA < 0.7). In the latter case, the difference between the results obtained for the transformation of low-order radial or azimuthal polarisation is only in the rotation of the light pattern in the focal plane. The experimental results obtained with the help of multi-sector phase plates manufactured on surfaces of fused silica substrates are in good agreement with the numerical modelling results, providing evidence of the proposed technique for the generation of hybrid cylindrical vector beams.
The proposed method evidently does not allow one to generate any polarisation state as it can be done with micro-structured q-plates. However, in our opinion, the main advantage of the described polarisation transformation approach in comparison with well known techniques for the generation of higher-order CVBs using structured q-plates or single/double spatial light modulators supporting implementation of multi-level phase functions is the simplicity of the transmission function of the binary multi-sector phase elements. Due to this, such elements can be realised not only as 'static' phase plates, but also with the help of low cost binary spatial light modulators with low resolution for a higher frame rate for fast switching between different polarisation states. In addition, the utilised multi-sector phase plates have a high-damage threshold which allows the use of these plates with a high power laser.
Methods
Binary multi-sector phase plate manufacturing process. A technological process comprising lithography and plasma etching was utilised to manufacture the DOEs. This process consists of the following steps: 1) Hardmask's direct laser writing in the chromium thin film (45 nm) on the fused silica substrate (UV Fused Silica (JGS3)) produced by circular laser writing system CLWS-200S (Del Mar Photonics, Inc.). The chromium thin film exposed by focused laser radiation oxidises into Cr 2 O 3 . Planar resolution is 1 μm. 2) Unexposed chromium is removed using a hexacyanoferrate (III) potassium (K 3 [Fe(CN) 6 ]) solution within 5 minutes. | 3,766 | 2018-09-25T00:00:00.000 | [
"Physics"
] |
A Ship Detection Model Based on Dynamic Convolution and an Adaptive Fusion Network for Complex Maritime Conditions
Ship detection is vital for maritime safety and vessel monitoring, but challenges like false and missed detections persist, particularly in complex backgrounds, multiple scales, and adverse weather conditions. This paper presents YOLO-Vessel, a ship detection model built upon YOLOv7, which incorporates several innovations to improve its performance. First, we devised a novel backbone network structure called Efficient Layer Aggregation Networks and Omni-Dimensional Dynamic Convolution (ELAN-ODConv). This architecture effectively addresses the complex background interference commonly encountered in maritime ship images, thereby improving the model’s feature extraction capabilities. Additionally, we introduce the space-to-depth structure in the head network, which can solve the problem of small ship targets in images that are difficult to detect. Furthermore, we introduced ASFFPredict, a predictive network structure addressing scale variation among ship types, bolstering multiscale ship target detection. Experimental results demonstrate YOLO-Vessel’s effectiveness, achieving a 78.3% mean average precision (mAP), surpassing YOLOv7 by 2.3% and Faster R-CNN by 11.6%. It maintains real-time detection at 8.0 ms/frame, meeting real-time ship detection needs. Evaluation in adverse weather conditions confirms YOLO-Vessel’s superiority in ship detection, offering a robust solution to maritime challenges and enhancing marine safety and vessel monitoring.
Introduction
Ship image detection technology is widely applied in various domains, such as maritime ship monitoring, shipping supervision, and maritime cruise search and rescue.However, in practical applications, different lighting conditions, complex backgrounds at sea, and stormy weather all increase the difficulty of ship detection [1].This places higher demands on both accuracy and real-time ship detection.As computer vision technology rapidly advances, its applications are becoming increasingly widespread.These techniques have been gradually applied to ship detection and identification [2], which provides a new direction for maritime ship detection.
Ship image detection methods fall into traditional and deep learning methods.Traditional methods generally adopt support vector machine (SVM), histogram of oriented gradients (HOG), local binary pattern, and other algorithms for ship feature extraction and detection.For example, Feng et al. [3] present a multi-branch SVM approach to enhance the rapid detection of moving ships by incorporating effective multi-scale features.However, waves or changes in the background introduced higher computational costs to the model.Shi et al. [4] proposed an extended HOG method for detecting actual ships in candidate regions by computing a histogram of oriented gradients of local image regions.This method has the advantage of geometric invariance but increases the computational time, making it unsuitable for real-time applications.Zhu et al. [5] introduced a new texture operator to Sensors 2024, 24, 859 2 of 24 enhance feature extraction capabilities.However, in environments characterized by clouds, sea waves, and clutter, the method failed to extract detailed ship information, thereby reducing ship detection performance.Traditional methods exhibit limited portability and robustness across diverse scenes, and they are often susceptible to interference from complex backgrounds, noise, and low-light conditions [6].Their real-time performance and accuracy are insufficient to meet task requirements.In the context of ship detection under adverse weather conditions, a series of methods for improving detection models have been proposed in references [7,8].These methods are applied in complex maritime vessel monitoring systems under haze and low visibility conditions.The authors employ a direct detection technique.Another approach involves a two-step process: preprocessing the image to remove haze, and then conducting ship recognition.Song et al. [9] introduced a method for ship detection in hazy marine remote-sensing images.This method uses color polarization classification and haze concentration clustering to balance the remote sensing image (RSI) color and eliminate haze interference.The subsequent recognition of the processed image reduces the difficulty, but this method results in the loss of more image details.Liu et al. [10] proposed a novel image-dehazing algorithm based on color prior knowledge.This method achieves a higher accuracy in ship detection in thin cloud and mist environments, albeit with an increased computational burden due to the intricate preprocessing steps.
In contrast, deep learning possesses formidable feature learning capabilities, rendering it the prevailing approach in current ship detection technology.Object detection techniques in deep learning can be classified into two-stage and one-stage methods.For the two-stage detection method, Escorcia-Gutierrez et al. [11] introduced an enhanced Mask R-CNN model for improved recognition and classification of small ships in shipping.However, there remains a significant error in locating the ship's contour edge region, indicating the necessity for further model accuracy improvement.Yu et al. [12] proposed an enhanced R-CNN method called Ship R-CNN, which improves ship detection accuracy in scenarios with complex backgrounds and minimal differentiation between ships and distant shores.However, this method did not account for ship recognition under nighttime conditions.Li et al. [13] improved the Cascade R-CNN method for more accurate small-ship detection.However, it faces efficiency challenges in recognizing redundant features.In the one-stage detection methods, the most representative algorithm is the You Only Look Once (YOLO) family.The algorithms employ a direct regression approach to make predictions over the entire image, effectively improving detection speed while maintaining accuracy, and are thus widely used in ship detection.Specifically, Guo et al. [14] presented LVENet, an enhancement network for improved low-light maritime vessel detection by enhancing image channel luminance.However, the network does not account for the challenges of rainy and foggy weather conditions, and its model exhibits limited generalization.Guo et al. [15] improved the deblurring and defogging performance of the model by enhancing the fused image feature information, but their method is limited by the dark light environment and noise interference, which increases the uncertainty of the prediction results.
In summary, although some research results have been achieved in ship image detection under complex sea conditions, challenges persist.First, the presence of various ship types with significant size variations between classes and small target scales poses difficulties for target detection.Secondly, adverse factors like sea haze, uneven illumination, and low visibility in complex backgrounds can degrade imaging quality.Extracting effectively ship features from the ocean background remains a challenge for algorithms.This paper presents a ship detection model tailored for complex sea state images, with three key contributions.
1.
We propose an improved real-time ship detection model based on YOLOv7 (YOLO-Vessel), specifically designed to address ship detection challenges in the complex sea conditions mentioned above.
2.
A backbone network called Efficient Layer Aggregation Networks and Omni-Dimensional Dynamic Convolution (ELAN-ODConv) with strong feature extraction capability is designed to reduce false and missed detections.Then, a network termed Efficient Layer Aggregation Networks Head and Space-to-depth and Convolution (ELANH-SPDC) is introduced at the head to achieve fine-grained detection and identification of ships.In addition, a new prediction network structure named ASFFPredict is designed, which adaptively learns each feature layer's weights and can fuse each scale's feature information more efficiently.
3.
To adapt ship detection under different adverse weather conditions, this paper proposes a ship dataset under adverse weather conditions, which is then artificially synthesized using physical haze, rain, snow, and low light algorithms, and experiments are conducted in real scenarios to verify the detection accuracy and operation efficiency of this model.
In this paper, Section 2 introduces some related research work.Section 3 describes the presented ship detection model.Section 4 analyzes the experimental performance of the YOLO-Vessel model and showcases ship detection results in real environments.Section 5 concludes this article.
Related Work
The YOLO series of models has garnered extensive attention in recent years, and researchers have achieved a series of advancements in ship detection research based on the YOLO framework.For example, Yao et al. [16] employed the YOLOv8 model for multiclass ship detection, improving ship recognition accuracy.However, their dataset was not comprehensive enough and lacked training data for large-sized vessels.Furthermore, Zhao et al. [17] proposed a detection model named YOLOv7-sea, incorporating attention mechanisms to enhance focus on regions containing vessels of interest.However, this approach fails to extract multi-scale ship features and may lead to erroneous detections.To tackle inadequate feature extraction, dynamic convolutions have gradually found applications across various domains in deep learning.The representative omni-dimensional dynamic convolution (ODConv) [18] is a novel convolutional operation capable of dynamically adjusting convolution kernels to effectively capture multi-dimensional features in data, thereby enhancing the model's performance in detection tasks.Cheng et al. [19] integrated dynamic convolution modules into shallow networks, enhancing the model's efficiency in ship recognition under complex backgrounds.However, this approach encounters issues such as false positives for small-sized vessels and prolonged model training times.Complex maritime ship target recognition often results in false positives, and due to intricate backgrounds and noise interference, it may also fail to recognize small-sized vessels.Chen et al. [20] presented a multi-scale ship detection model for complex scenes.They incorporated the ASPP module to expand the receptive field while reducing feature loss for small-sized vessels.However, this model did not take into account the time cost.
To further improve the detection of small ships in complex sea conditions, SPDC has unique advantages in detecting small targets and low-resolution images.The SPDC structure consists of space-to-depth and convolution [21], where space-to-depth is a transformation layer that downsamples the feature maps in the CNN using image transformation techniques while retaining all the channel information to enhance small-size feature extraction.Ma and Pang et al. [22] presented an SP-YOLOv8s detection model that enhances the fine-grained feature information during downsampling, improving the accuracy of detecting small objects.However, this gain in accuracy comes at the cost of increased computational complexity.Multi-scale fusion networks have found widespread application in deep learning models.Zhang et al. [23] presented an improved model built upon YOLOv7-tiny.This model integrates multi-scale residual modules, enhancing ship detection performance in complex water surface environments.However, its performance may degrade when detecting target vessels at more minor scales.
To enhance multi-scale feature information, the adaptive spatial feature fusion (ASFF) mechanism dynamically tunes the weights assigned to feature maps.This dynamic adjustment empowers the model to get information at varying scales and hierarchical levels [24], resulting in a more comprehensive feature fusion [25].Guo et al. [26] proposed a lightweight LMSD-YOLO model to create a real-time maritime vessel detection model with a reduced parameter count.The model achieves an adaptive fusion of multi-scale features.However, its feature extraction capability falls short in low-visibility images and noise interference, leading to potential false negatives for small vessels.
The current YOLO algorithm has been developed to version 8 (YOLOv8).Compared to the previous version, YOLOv7, YOLOv8 introduces the convolution to fusion structure, reducing the number of convolution modules, resulting in faster detection speeds.However, this speed enhancement comes at the cost of sacrificing some detection accuracy.Consequently, YOLOv8 might exhibit reduced ship detection accuracy in complex environments.YOLOv7 incorporates the efficient layer aggregation networks (ELAN) structure to facilitate multi-branch gradient flow feature extraction [27].This design enhances the model's detection performance, making it better suited for ship detection in complex maritime conditions.The YOLOv7 consists primarily of four parts: input, backbone network, head network, and prediction network.The ELAN [28] combines VoVNet and CSPNet [29].It enables the deep network to converge more effectively without changing the original model structure gradient propagation path and continuously enhances its learning capability.The head network enhances its feature fusion capability with the SPPCSPC module and path aggregation network (PANet).
Given the challenges related to work poses and the difficulty of detecting small vessels in complex environments, further improvements are needed for the YOLOv7 model.Dynamic convolutions offer advantages in capturing multi-dimensional features, and SPDC is adept at enhancing small object detection at low resolutions.Both of these techniques contribute to improving feature information extraction.ASFF can effectively merge multiscale features to enrich feature detection information.Therefore, this study introduces dynamic convolutions, SPDC, and ASFF networks to the basic YOLOv7 model to enhance detection outcomes.
Proposed Detection Framework
One needs to strengthen the network's feature extraction capabilities and optimize information flow to enhance the model's effectiveness in detecting ships under challenging sea conditions.Therefore, this paper optimizes the backbone, head, and prediction network components of YOLOv7 by leveraging the advantages of ODConv, SPDC, and ASFF.As illustrated in Figure 1, the YOLO-Vessel model comprises three main components: the ELAN-ODConv backbone network structure, ELANH-SPDC head structure, and ASFFPredict prediction network structure.
Backbone Network
The YOLO-Vessel backbone network incorporates CBS, ELAN-ODConv, and MP modules.The CBS module comprises a convolution layer, with the SiLU activation function applied afterward, followed by the addition of a batch normalization (BN) layer.The ELAN-ODConv module includes the CBS module and the ODConv unit.This design enhances the network's learning capabilities by extending and merging bases while preserving the original gradient path.In each ELAN-ODConv structure, downsampling is achieved by compressing the feature map scale using a 3 × 3 convolution kernel with a stride of 1 and zero padding.Then, the feature map passes through two branches: one enters the CBS module, and the other enters multiple CBS modules and an ODConv structure.Finally, the outputs of the two branches are operated with Concat and partially transformed using a 1 × 1 convolution module to improve the learnability of the model.As shown in Figure 2, the design idea of ODConv is to generate a new feature map by performing elementwise multiplication and addition of four convolution kernels, each of the same size and dimension, while considering their corresponding attention weights, and finally, by using a convolution operation.
Backbone Network
The YOLO-Vessel backbone network incorporates CBS, ELAN-ODConv, and MP modules.The CBS module comprises a convolution layer, with the SiLU activation function applied afterward, followed by the addition of a batch normalization (BN) layer.The ELAN-ODConv module includes the CBS module and the ODConv unit.This design enhances the network's learning capabilities by extending and merging bases while preserving the original gradient path.In each ELAN-ODConv structure, downsampling is achieved by compressing the feature map scale using a 3 × 3 convolution kernel with a stride of 1 and zero padding.Then, the feature map passes through two branches: one enters the CBS module, and the other enters multiple CBS modules and an ODConv structure.Finally, the outputs of the two branches are operated with Concat and partially transformed using a 1 × 1 convolution module to improve the learnability of the model.As shown in Figure 2, the design idea of ODConv is to generate a new feature map by performing element-wise multiplication and addition of four convolution kernels, each of the same size and dimension, while considering their corresponding attention weights, and finally, by using a convolution operation.ODConv is a more generalized form of dynamic convolution, with its computational form as follows: , , and are three newly introduced attention weights, representing the weights associated with the spatial position of the convolution kernel, input channels, and output channels, respectively. is the attention weight corresponding to the number of convolutional kernels.
Figure 2 shows that ODConv learns the four attention weights of the convolutional kernel in parallel along four dimensions.The input feature map is first subjected to a global average pooling (GAP) operation and then passed through a Fully Connected (FC) Layer-ReLU Activation Layer-Fully Connected (FC) Layer structure.Finally, a set of ODConv is a more generalized form of dynamic convolution, with its computational form as follows: A 1 , A 2 , and A 3 are three newly introduced attention weights, representing the weights associated with the spatial position of the convolution kernel, input channels, and out-Sensors 2024, 24, 859 6 of 24 put channels, respectively.A 4 is the attention weight corresponding to the number of convolutional kernels.
Figure 2 shows that ODConv learns the four attention weights of the convolutional kernel in parallel along four dimensions.The input feature map x is first subjected to a global average pooling (GAP) operation and then passed through a Fully Connected (FC) Layer-ReLU Activation Layer-Fully Connected (FC) Layer structure.Finally, a set of attention weights {A 1 , A 2 , A 3 , A 4 } is obtained at the output of the sigmoid and softmax activation function layers.Specifically, A 1 , A 2 , and A 3 are generated by the Sigmoid activation function, i.e., This function maps input values to the interval (0,1).A 4 is generated by the Softmax activation function, i.e., Softmax(x Here, x represents the ith element of the input vector.Softmax produces a set of probability values, introducing normalized constraints to simplify the learning of A 4 .The final weighted values of each group of convolutional kernels are used to generate output features.Compared with ordinary dynamic convolution, which only considers the single factor of the number of convolution kernels, ODConv adds multiple-dimensional information so that the input features can obtain rich contextual information.
In this study, ODConv is employed to achieve the goal of a more precise model without increasing the network's width and depth.
Head Network
YOLO-Vessel's head network consists of the PANet structure, the SPPCSPC module, and the ELANH-SPDC module.The feature information extracted from the backbone network is generated in the p3_in, p4_in, and p5_in layers at the three scales of 80 × 80, 40 × 40, and 20 × 20, respectively, and output to the head network.Subsequently, the 20 × 20 scale feature map is first upsampled using the SPPCPSC module to perform a Concat operation with the 40 × 40 scale feature map.Then, the output features are upsampled to
Head Network
YOLO-Vessel's head network consists of the PANet structure, the SPPCSPC module, and the ELANH-SPDC module.The feature information extracted from the backbone network is generated in the p3_in, p4_in, and p5_in layers at the three scales of 80 × 80, 40 × 40, and 20 × 20, respectively, and output to the head network.Subsequently, the 20 × 20 scale feature map is first upsampled using the SPPCPSC module to perform a Concat operation with the 40 × 40 scale feature map.Then, the output features are upsampled to perform a second fusion with the 80 × 80 scale feature map to further the fusion between adjacent scales of the feature map.However, in the multiscale structure, as the depth of the network grows, the location information of the feature map is weakened, and the resolution of the map gradually decreases, which leads to the degradation of the model's detection performance.Accordingly, we introduce an ELANH-SPDC module, incorporating the SPDC module into the head network's tail.This enhancement aims to sharpen the model's attention towards low-resolution and small objects, particularly in remote sea areas, while also boosting recognition performance for low-resolution feature maps.
As shown in Figure 4, this study incorporates the SPDC module into three head network positions: p3_out, p4_out, and p5_out.When the feature maps enter the Head, the channel dimension reaches its maximum while the network resolution reaches its minimum.Introducing features learned by SPDC at this stage is crucial for detecting low-resolution features.Experimental results (as detailed in Section 4.5.1)indicate that inserting SPDC at position p5_out in the head network yields the optimal detection performance for the model.Therefore, the SPDC designed at position p5_out is employed to create the ELANH-SPDC module, which is then integrated into the overall head network structure.The ELANH-SPDC module is the main feature extraction module located at the output position of p5_out in the head network, which divides the gradient stream into network paths of different lengths.The Concat operation fuses the features of each branch, ultimately replacing the original 1 × 1 standard convolution with the SPDC module to obtain more effective feature information.This approach preserves more fused, detailed features and optimizes the model's detection accuracy.We set as the original feature map, as the intermediate feature map, as the final feature map, as the subfeature map, as the feature map scale scaling factor, as the feature map length and width dimension values, as the feature map depth value, and as the convolution kernel depth value.Space-to-depth cuts a feature map of size × × into a series of subfeature maps, and each subfeature map , is formed by all entries (, ) of + and + divided by scale.The SPDC calculation equation is as follows: , = 0: : , 0: : , , = 1: : , 0: : , … = − 1: : , 0: : ; The ELANH-SPDC module is the main feature extraction module located at the output position of p5_out in the head network, which divides the gradient stream into network paths of different lengths.The Concat operation fuses the features of each branch, ultimately replacing the original 1 × 1 standard convolution with the SPDC module to obtain more effective feature information.This approach preserves more fused, detailed features and optimizes the model's detection accuracy.We set X as the original feature map, X 1 as the intermediate feature map, X 2 as the final feature map, f as the subfeature map, scale as the feature map scale scaling factor, S as the feature map length and width dimension values, C 1 as the feature map depth value, and C 2 as the convolution kernel depth value.Space-to-depth cuts a feature map X of size S × S × C 1 into a series of subfeature maps, and each subfeature map f x,y is formed by all entries X(i, j) of i + x and j + y divided by scale.The SPDC calculation equation is as follows: f 0,0 = X[0 : S : scale, 0 : S : scale], f 1,0 = X[1 : S : scale, 0 : S : scale], . . .f scale−1,0 = X[scale − 1 : S : scale, 0 : S : scale]; . . .
The graphical process is given in Figure 5, where four subfeature maps f 0,0 , f 0,1 , f 1,0 , f 1,1 are obtained when scale = 2, each with size (S/2, S/2, C 1 ), which is equivalent to twice the downsampling of the original feature map X.Then, all the subfeature maps are connected along the channel dimension to obtain the intermediate feature map X 1 , where the X 1 spatial dimension is reduced by a scale factor, and the channel dimension is increased by a scale factor.The space-to-depth layer converts the original feature map X(S, S, C 1 ) into an intermediate feature map X 1 S/scale, S/scale, scale 2 C 1 with feature discrimination information.A convolutional layer with a C 2 filter is added in Figure 5 to achieve further transformation from the intermediate feature map X 1 to the final feature map X 2 .The step size of this convolutional layer is set to 1 to retain the maximum amount of discriminative feature information.Therefore, we introduce the SPDC structure into the ELANH structure of the head network, which can effectively improve the detection performance of the model for low-resolution and small ships at sea.
Prediction Network
The YOLO-Vessel model uses a multiscale prediction method in the prediction network part, where the input 640 × 640 scale images are downsampled in the backbone structure by factors of 8×, 16×, and 32×.Then, the output of the head network yields 80 × 80 scale feature maps p3_out, 40 × 40 scale feature maps p4_out, and 20 × 20 scale feature maps p5_out.Among them, p3_out is used for small-scale target detection, p4_out is used for medium-scale target detection, and p5_out is used for large-scale target detection.As depicted in Figure 6, this paper introduces an adaptively spatial feature fusion network structure named ASFFPredict.Utilizing ASFF, the network learns optimal fusion weights to accentuate feature layers that contain more information about small targets.Subsequently, features from each feature layer are fused to ensure that elements with higher weights dominate the expression in the resulting feature map.The introduction of ASFF between the head network and YOLO head aims to enhance the model's detection performance for small maritime targets, providing an innovative solution for ship detection problems in maritime regions.
Prediction Network
The YOLO-Vessel model uses a multiscale prediction method in the prediction network part, where the input 640 × 640 scale images are downsampled in the backbone structure by factors of 8×, 16×, and 32×.Then, the output of the head network yields 80 × 80 scale feature maps p3_out, 40 × 40 scale feature maps p4_out, and 20 × 20 scale feature maps p5_out.Among them, p3_out is used for small-scale target detection, p4_out is used for medium-scale target detection, and p5_out is used for large-scale target detection.As depicted in Figure 6, this paper introduces an adaptively spatial feature fusion network structure named ASFFPredict.Utilizing ASFF, the network learns optimal fusion weights to accentuate feature layers that contain more information about small targets.Subsequently, features from each feature layer are fused to ensure that elements with higher weights Sensors 2024, 24, 859 9 of 24 dominate the expression in the resulting feature map.The introduction of ASFF between the head network and YOLO head aims to enhance the model's detection performance for small maritime targets, providing an innovative solution for ship detection problems in maritime regions.We select three layers of feature map output from the head network for fusion.In Figure 6, ( = 80 × 80 × 256), ( = 40 × 40 × 512), and ( = 20 × 20 × 1024) denote the feature maps involved in adaptive fusion.Direct feature fusion cannot be performed since these three feature maps have different scales and channels.Therefore, the resolution and channels of each feature layer need to first be adjusted to be the same.Taking layer fusion as an example, the fused feature map is labeled , and then, the three spatial weights from , , and to are labeled , , and , respectively.The expressions are as follows: where + + = 1, , , ∈ 0,1 , and , , and are normalized scalars that are calculated using the Softmax function.The expressions are as follows: → and → denote the feature maps with transformed scales, which are transformed from the feature maps of the layer to the layer and from the feature maps of the layer to the layer.In this process, a 3 × 3 convolution is first used to downsample the layer four times and the layer two times, thus adjusting the feature maps of all layers to a 20 × 20 scale size.Then, the other feature layers are fused and upsampled using nearest neighbor interpolation.Next, the feature channels of → and → are adjusted to 1024 using 1 × 1 convolution, and the feature map scale is kept constant.Finally, → and the transformed → and → are subjected to the Concat operation, and each of the three feature maps are multiplied by their respective weights; the results are then summed to get the feature map .The weight parameters are learned from the convolutional layer output using gradient back-propagation, and the weights can be adaptively adjusted in the feature fusion process.The adaptively spatial feature fusion process achieves a better fusion of features at different scales, effectively recognizes small and multiscale objects, and enhances the capability for maritime vessel detection.We select three layers of feature map output from the head network for fusion.In Figure 6, L 0 (L 0 = 80 × 80 × 256), L 1 (L 1 = 40 × 40 × 512), and L 2 (L 2 = 20 × 20 × 1024) denote the feature maps involved in adaptive fusion.Direct feature fusion cannot be performed since these three feature maps have different scales and channels.Therefore, the resolution and channels of each feature layer need to first be adjusted to be the same.Taking L 2 layer fusion as an example, the fused feature map L 2 is labeled P 2 , and then, the three spatial weights from L 0 , L 1 , and L 2 to P 2 are labeled α 2 , β 2 , and γ 2 , respectively.The expressions are as follows: where , and α 2 ij , β 2 ij , and γ 2 ij are normalized scalars that are calculated using the Softmax function.The expressions are as follows: L 0→2 ij and L 1→2 ij denote the feature maps with transformed scales, which are transformed from the feature maps of the L 0 layer to the F 2 layer and from the feature maps of the L 1 layer to the F 2 layer.In this process, a 3 × 3 convolution is first used to downsample the L 0 layer four times and the L 1 layer two times, thus adjusting the feature maps of all layers to a 20 × 20 scale size.Then, the other feature layers are fused and upsampled using nearest neighbor interpolation.Next, the feature channels of L 0→2 ij and L 1→2 ij are adjusted to 1024 using 1 × 1 convolution, and the feature map scale is kept constant.Finally, L 2→2 ij and the transformed L 0→2 ij and L 1→2 ij are subjected to the Concat operation, and each of the three feature maps are multiplied by their respective weights; the results are then summed to get the feature map P 2 .The weight parameters are learned from the convolutional layer output using gradient back-propagation, and the weights can be adaptively adjusted in the feature fusion process.The adaptively spatial feature fusion process achieves a better fusion of features at different scales, effectively recognizes small and multiscale objects, and enhances the capability for maritime vessel detection.
Dataset Preparation and Data Augmentation
The datasets we use are derived from two public datasets and one handcrafted dataset, including the public ship dataset SeaShips7000 [30] and the ship dataset provided by the 2nd International Challenge for Intelligent Perception of Marine Targets in 2021, with 200 and 4300 images, respectively.The mixed public dataset covers six ship categories: bulk carrier, sailboat, container ship, yacht, cruises, and fishing boat.This dataset is divided into an 8:1:1 ratio for training, validation, and testing.There are 415 pictures in the homemade dataset collected using a fixed shooting device in the Yangtze River waters of the Chongqing section.The shooting device used a Hikvision 23× zoom monitoring dome with a resolution of 2560 pixels (horizontal) × 1440 (vertical) pixels and shot ship images containing three real scenes of normal, rain, haze, and dawn from different positions and angles.Due to the lack of snowy weather, the ship images in snowy weather come from real snowy ship pictures collected by web crawlers.The homemade dataset includes ship pictures captured in various weather conditions: normal clear, rain, haze, snow, and dawn.Each weather type constitutes 20% of the dataset, allowing the model's performance in detecting ships under real-world environmental conditions to be evaluated.
Usually, images acquired in complex adverse weather are visually richer and better match the actual complex sea-going ship conditions.However, acquiring large quantities of realistic ship images in adverse weather conditions in real sea environments is challenging, which makes it necessary to use synthetic images.In this paper, the above-mixed public dataset is artificially synthesized with images of ships under severe weather conditions [31], and the synthesized dataset is named "SeaShips_weather."It is a synthesis of rain, haze, snow, and dawn weather images based on the RGB layer stacking algorithm, atmospheric scattering model, and retinex theory, as shown in Figure 7.
1.
Rain patterns with different tilt angles of −45, 0, and 45 degrees are randomly added to the preprocessed image to synthesize the rain ship image.The expressions of synthetic rain are as follows: A(x, y) = N(x, y) + M(x, y) + δ (7) A(x, y) denotes the synthesized rain image, (x, y) denotes any pixel in the image, N(x, y) denotes the original image, M(x, y) denotes the rain pattern layer, and δ denotes a random luminance value, as shown in Figure 7a.
2.
The haze can significantly degrade image quality during detection.To simulate ship scenes in such conditions, we employ an atmospheric scattering model to synthesize hazy sky images.The formula for creating artificial haze is as follows: I(x, λ) is the synthesized dense haze image, R(x, λ) is the original image, the parameter x is any pixel in the image, λ is the wavelength of light, L ∞ is the value of scattered atmospheric light at infinity, and g(λ, x) = e −β(λ)d(x) is the light propagation function, in which β is the atmospheric scattering factor and d(x) is the distance of the target object.Adjusting the atmospheric scattering factor β ∈ {0.02, 0.04, 0.06} synthesizes the sea haze images of different degrees, as shown in Figure 7b.
3.
As depicted in Figure 7c, the snowflake texture is randomly added to the original image by adjusting the snow amount value r ∈ {1, 3, 5} to synthesize a ship image on a snowy day.The expression of the synthesized snowflake is as follows: B(x, y) = E(x, y) + R(x, y, r) + β (9) B(x, y) denotes the synthesized snow day image, (x, y) denotes any pixel in the image, E(x, y) denotes the original image, R(x, y) denotes the rain pattern layer, and β denotes the random luminance value in the image.4.
Dawn weather tends to cause low brightness, low contrast, and detail loss in the image.
The ship image under dawn weather is synthesized using the retinex algorithm, as shown in Figure 7d, where the attenuation coefficient φ ∈ {0.25, 0.55, 0.85} changes the image brightness value.The expression of the synthesized dawn image is as follows: P(x, y) is the synthesized dawn image, Q(x, y) is the original image, and L(φ) is the spatially smoothed luminance function.
The datasets we use are derived from two public datasets and one handcrafted dataset, including the public ship dataset SeaShips7000 [30] and the ship dataset provided by the 2nd International Challenge for Intelligent Perception of Marine Targets in 2021, with 200 and 4300 images, respectively.The mixed public dataset covers six ship categories: bulk carrier, sailboat, container ship, yacht, cruises, and fishing boat.This dataset is divided into an 8:1:1 ratio for training, validation, and testing.There are 415 pictures in the homemade dataset collected using a fixed shooting device in the Yangtze River waters of the Chongqing section.The shooting device used a Hikvision 23× zoom monitoring dome with a resolution of 2560 pixels (horizontal) × 1440 (vertical) pixels and shot ship images containing three real scenes of normal, rain, haze, and dawn from different positions and angles.Due to the lack of snowy weather, the ship images in snowy weather come from real snowy ship pictures collected by web crawlers.The homemade dataset includes ship pictures captured in various weather conditions: normal clear, rain, haze, snow, and dawn.Each weather type constitutes 20% of the dataset, allowing the model's performance in detecting ships under real-world environmental conditions to be evaluated.
Usually, images acquired in complex adverse weather are visually richer and better match the actual complex sea-going ship conditions.However, acquiring large quantities of realistic ship images in adverse weather conditions in real sea environments is challenging, which makes it necessary to use synthetic images.In this paper, the abovemixed public dataset is artificially synthesized with images of ships under severe weather conditions [31], and the synthesized dataset is named "SeaShips_weather."It is a synthesis of rain, haze, snow, and dawn weather images based on the RGB layer stacking algorithm, atmospheric scattering model, and retinex theory, as shown in Figure 7.
Experimental Environment
The experiments were conducted with the following configurations: an Ubuntu 18.04.6operating system, a Tesla V100 GPU with 32GB of memory (Austin, TX, USA), and an Intel(R) Xeon(R) Platinum 8163 CPU (Santa Clara, CA, USA).To accelerate the computations, we employed CUDA 10.2 and cuDNN 7.6.5.The training of our proposed model and the comparison models was carried out using the PyTorch 1.7.1 framework.
Experimental Setup
The crucial details of the key parameters for training the network model in this study are as follows: the input image size is set to 640 × 640, the initial learning rate is 0.001, momentum is 0.937, weight decay is 0.0005, the batch size is 4, each training epoch duration is 62 s, Mosaic is set to 1.0, and the optimizer employs stochastic gradient descent (SGD) with cosine learning rate decay strategy.Other parameters adopt default values from YOLOv7.Mosaic data augmentation enriched the training dataset by introducing four images simultaneously, which underwent flipping, zooming, and splicing operations to diversify the detection scenarios.The training process comprised 150 epochs, totaling 135,000 iterations.
Evaluation Index
To quantitatively evaluate the detection effectiveness of the proposed model, seven evaluation metrics are introduced to examine the performance, including precision (P), recall (R), average precision (AP), mean average precision (mAP), giga floating point operations per second (GFLOPS), inference time (Infer), and F-Measure (F1).Their calculation formulae are shown below.
where P(r) denotes the P-R curve, m denotes the number of detected ship categories, N TP denotes the number of correctly detected ships, N FP denotes the number of misdetected ships, and N FN denotes the number of undetected ships.
Impact of SPDC Integration on Network Performance
Incorporating the SPDC module into the YOLOv7 model enables effective learning of ship features under complex weather conditions, which is particularly crucial for detecting low-resolution images.A meticulous evaluation and pre-selection of three potential scenarios was conducted to determine the optimal location for introducing SPDC into the network.The objective was to identify the optimal scenario that achieves the highest performance regarding mAP and F1 metrics while minimizing the computational complexity of the model's GFLOPS.The first scenario involves introducing SPDC at the p3_out output position of the head network, the second scenario selects the p4_out output position, and the third scenario opts for the p5_out output position.As shown in Table 1, through a comparative analysis of these scenarios, the performance is optimal when SPDC is positioned at p5_out.Despite a slight increase in inference time, the model achieves an mAP of 77.6%, F1 of 76, and a significant reduction in model complexity, with GFLOPS reaching 94.8.This section analyzes the performance of applying ODConv at different positions in the ELAN module of YOLOv7.The ELAN layer in YOLOv7 consists of seven positions for ordinary convolutions.The third, fifth, sixth, and seventh positions are selected as potential locations for replacing ordinary convolutions with ODConv.As illustrated in Figure 3, four positions in the ELAN module are labeled as a, b, c, and d, representing four improved modules.The comparative detection results are presented in Table 2.The experimental results indicate that the YOLOv7-ODConv-c model achieves better performance at position c.Compared to the YOLOv7 model, the YOLOv7-ODConv-c model shows a slight increase in inference time due to the increased learning weight dimensions of ODConv.However, it achieves an improvement of 1.6% in mAP, a 2% increase in F1, and a reduction in GFLOPS to 100.9.In summary, with a marginal sacrifice in speed, the YOLOv7-ODConv-c model significantly enhances ship detection accuracy.
Ablation Experiment
This section verifies the effectiveness of ELAN-ODConv, ELANH-SPDC, and ASFF-Predict for ship detection tasks in adverse weather conditions at sea.Based on the YOLOv7 model, three different network models were constructed sequentially, introducing new modules combined with varying network structures, as shown in Table 3.Here, √ indicates the incorporation of the corresponding improvement module, while × suggests the absence of the respective improvement module.
Model SPPFCSPC ELANH-SPDC ELAN-ODConv ASFFPredict
YOLO-ES introduces a single new module, YOLO-OS introduces two new modules, and YOLO-Vessel incorporates all three new modules.
We conducted several experiments on a synthetic weather ship dataset.Figure 8 shows the mAP, precision, and recall curves for all detectors during model training.All curves rise gently and converge quickly, indicating that the model is well-trained and not overfitted.As depicted in Figure 8a,b, the overall training curves exhibit minimal fluctuations.However, Figure 8c,d reveal more significant volatility during the ascending phase of the overall training curve, although these fluctuations minimally impact detection accuracy.Due to the introduction of ODConv and SPDC, the model's detection capability has been enhanced.YOLO-Vessel improves precision from 82.8% to 83.7% compared to the original YOLOv7 model.Additionally, after training for 150 epochs, the recall<EMAIL_ADDRESS>and mAP@0.5:0.95indices reach 70.2%, 74.5%, and 53.6%, respectively.The model performs well in detecting ship images.During the individual testing process for each image, predictions and actual values for each category were recorded, resulting in a confusion matrix, as shown in Figure 9.It can be observed that there are misclassifications between cargo ships and cruise ships, primarily due to the similar features of these two types of vessels and the blurriness of ships in adverse maritime weather conditions, making them easily confused.To address this, a "no-vessel" category has been added to the detection, representing the absence of any vessel.Fishing boats also experience misclassifications, mainly due to their small size.In complex maritime weather conditions, interference from background noise can easily result in misidentifying fishing boat classes as "no-vessel" classes.Therefore, adverse weather conditions significantly impact the detection of ships.During the individual testing process for each image, predictions and actual values for each category were recorded, resulting in a confusion matrix, as shown in Figure 9.It can be observed that there are misclassifications between cargo ships and cruise ships, primarily due to the similar features of these two types of vessels and the blurriness of ships in adverse maritime weather conditions, making them easily confused.To address this, a "no-vessel" category has been added to the detection, representing the absence of any vessel.Fishing boats also experience misclassifications, mainly due to their small size.In complex maritime weather conditions, interference from background noise can easily result in misidentifying fishing boat classes as "no-vessel" classes.Therefore, adverse weather conditions significantly impact the detection of ships.During the individual testing process for each image, predictions and actual values for each category were recorded, resulting in a confusion matrix, as shown in Figure 9.It can be observed that there are misclassifications between cargo ships and cruise ships, primarily due to the similar features of these two types of vessels and the blurriness of ships in adverse maritime weather conditions, making them easily confused.To address this, a "no-vessel" category has been added to the detection, representing the absence of any vessel.Fishing boats also experience misclassifications, mainly due to their small size.In complex maritime weather conditions, interference from background noise can easily result in misidentifying fishing boat classes as "no-vessel" classes.Therefore, adverse weather conditions significantly impact the detection of ships.
Next, the results of different improved models of YOLOv7 were compared, as shown in Table 4. Next, the results of different improved models of YOLOv7 were compared, as shown in Table 4.The following conclusions were drawn from Table 4: 1. YOLO-Vessel is the best performer in the YOLO series regarding mAP.Compared with the original YOLOv7, its mAP performance improved by 2.3%.It can be observed that the original YOLOv7 has the lowest mAP value and unsatisfactory detection results.Compared with the original YOLOv7 model, all three improved network models have improved mAP.Among them, the overall mAP values for ship detection increased by 1.6%, 1.7%, and 2.3%, respectively.The analysis shows that YOLO-ES demonstrates higher accuracy in recognizing cruise ships.Additionally, the improved models offer better recognition performance for small to medium-sized sailboats, yachts, and fishing boats.Compared to YOLOv7, YOLO-ES exhibits improved recognition of fishing boats, with an increase in AP value by 0.9%, indicating that the ELANH-SPDC structure can enhance the feature information for low-resolution targets during feature fusion, demonstrating exemplary performance in detecting low-resolution targets in complex backgrounds.Continuing to introduce the ODConv structure into the model, YOLO-OS is more advantageous in fishing boat detection accuracy, with AP values improved by 1.0% compared to YOLO-ES.This demonstrates that combining the ELAN-ODConv and ELANH-SPDC structures enhances the model's performance in detecting small targets.YOLO-Vessel The following conclusions were drawn from Table 4: 1. YOLO-Vessel is the best performer in the YOLO series regarding mAP.Compared with the original YOLOv7, its mAP performance improved by 2.3%.It can be observed that the original YOLOv7 has the lowest mAP value and unsatisfactory detection results.
Compared with the original YOLOv7 model, all three improved network models have improved mAP.Among them, the overall mAP values for ship detection increased by 1.6%, 1.7%, and 2.3%, respectively.The analysis shows that YOLO-ES demonstrates higher accuracy in recognizing cruise ships.Additionally, the improved models offer better recognition performance for small to medium-sized sailboats, yachts, and fishing boats.Compared to YOLOv7, YOLO-ES exhibits improved recognition of fishing boats, with an increase in AP value by 0.9%, indicating that the ELANH-SPDC structure can enhance the feature information for low-resolution targets during feature fusion, demonstrating exemplary performance in detecting low-resolution targets in complex backgrounds.Continuing to introduce the ODConv structure into the model, YOLO-OS is more advantageous in fishing boat detection accuracy, with AP values improved by 1.0% compared to YOLO-ES.This demonstrates that combining the ELAN-ODConv and ELANH-SPDC structures enhances the model's performance in detecting small targets.YOLO-Vessel surpasses YOLOv7, YOLO-ES, and YOLO-OS in detecting fishing boats, highlighting the ASFF network's effectiveness in improving detection performance for the IDetect head.In addition, YOLO-Vessel also demonstrates an advantage in detecting large-scale vessels such as bulk carriers and container ships.Its AP values for these categories are improved by 3.7% and 2.7%, respectively, compared to YOLOv7.This indicates that the combination of ELANH-SPDC structure, ELAN-ODConv structure, and ASFFPredict structure can fully learn more visual features of ships and thus improve the model's performance of ship detection in adverse weather, especially detecting critical hull parts and reducing the number of false alarms.The final YOLO-Vessel has good performance for overall ship detection and can significantly improve the accuracy of ship detection under adverse weather conditions at sea. 2.
In terms of inference speed, the YOLO-ES model achieves the fastest detection speed.However, with the introduction of the ELAN-ODConv module, the network's inference speed slightly decreases, but the model's accuracy improves.Therefore, the YOLO-Vessel model trades off accuracy and inference speed, compensating for the slight reduction in speed to enhance detection accuracy.
3.
Regarding model computation, compared to the original model, YOLO-ES, YOLO-OS, and YOLO-Vessel reduced GFLOPS by 8.4, 10.7, and 2.4, respectively.The significant reduction in computational demands alleviates the computational burden on the machine.Furthermore, the YOLO-Vessel model incorporates adaptively spatial feature fusion and dynamic convolution techniques for ship detection, enhancing detection performance.It achieves the highest F1 score in ship image detection, surpassing the original model by 3%.
Figure 10 illustrates the Precision-Recall (P-R) curve.The area enclosed by the P-R curve and the coordinate axes in the image represent the mAP value.It can be observed that the mAP value of the YOLO-Vessel is higher than that of other improved models.The improved model shows a slight enhancement at different recall rates, indicating the effectiveness of the proposed improvements in enhancing the ship detection performance of the model.
significantly improve the accuracy of ship detection under adverse weather conditions at sea. 2. In terms of inference speed, the YOLO-ES model achieves the fastest detection speed However, with the introduction of the ELAN-ODConv module, the network's inference speed slightly decreases, but the model's accuracy improves.Therefore, the YOLO-Vessel model trades off accuracy and inference speed, compensating for the slight reduction in speed to enhance detection accuracy.3. Regarding model computation, compared to the original model, YOLO-ES, YOLO-OS, and YOLO-Vessel reduced GFLOPS by 8.4, 10.7, and 2.4, respectively.The significant reduction in computational demands alleviates the computational burden on the machine.Furthermore, the YOLO-Vessel model incorporates adaptively spatial feature fusion and dynamic convolution techniques for ship detection enhancing detection performance.It achieves the highest F1 score in ship image detection, surpassing the original model by 3%.We also validate the effectiveness of introducing SPDC, ODConv, and ASFF.We employ gradient-weighted class activation mapping (Grad-CAM) to visualize heatmaps for the four models, explaining the significance of feature capture in the improved models.In the Grad-CAM heatmaps, deeper colors indicate regions contributing more to ship detection in the heatmap.Figure 12a displays the original image of the target to be detected.As depicted in Figure 12b, the heatmaps of YOLOv7 exhibit poor ship detection performance under adverse weather conditions, with instances of heatmap region misalignment or sparsity.In Figure 12c, introducing the ELANH-SPDC module mitigates the impact of adverse weather or noise to some extent, capturing the approximate location of ships and reducing interference in learning ship features under low resolution.Figure 12d illustrates the continued improvement by introducing the ELAN-ODConv, enhancing the backbone network's ability to extract ship features and reducing the number of irrelevant regions in the heatmap.Finally, in Figure 12e, introducing the ASFFPredict structure further refines the localization of ship targets, narrowing the heatmap range and indicating an increased focus on ship features.We also validate the effectiveness of introducing SPDC, ODConv, and ASFF.We employ gradient-weighted class activation mapping (Grad-CAM) to visualize heatmaps for the four models, explaining the significance of feature capture in the improved models.In the Grad-CAM heatmaps, deeper colors indicate regions contributing more to ship detection in the heatmap.Figure 12a displays the original image of the target to be detected.As depicted in Figure 12b, the heatmaps of YOLOv7 exhibit poor ship detection performance under adverse weather conditions, with instances of heatmap region misalignment or sparsity.In Figure 12c, introducing the ELANH-SPDC module mitigates the impact of adverse weather or noise to some extent, capturing the approximate location of ships and reducing interference in learning ship features under low resolution.Figure 12d Figure 13 compares ship image detection performance under challenging conditions, including adverse weather, small target occlusion, and low lighting.This comparison involves YOLOv7, YOLO-ES, YOLO-OS, and YOLO-Vessel.As shown in Figure 13, YOLOv7 has problems with missed and false ship detection in all types of images.The same is true for YOLO-EO and YOLO-OA, and problems include, for example, insufficient accuracy in identifying frame positioning.However, YOLO-Vessel can successfully perform the identification task and locate the target vessel more accurately in the images.Figure 13 compares ship image detection performance under challenging conditions, including adverse weather, small target occlusion, and low lighting.This comparison involves YOLOv7, YOLO-ES, YOLO-OS, and YOLO-Vessel.As shown in Figure 13, YOLOv7 has problems with missed and false ship detection in all types of images.The same is true for YOLO-EO and YOLO-OA, and problems include, for example, insufficient accuracy in identifying frame positioning.However, YOLO-Vessel can successfully perform the identification task and locate the target vessel more accurately in the images.In the rainy and hazy conditions shown in Figure 13, YOLOv7 has missed ships because of the small size of individual vessels and the less distinct hull outline, which is very close to the texture of the background.In addition, in the snow and dawn conditions shown in Figure 13, the detection performance remains poor because YOLOv7 does not locate the target size well, and hence the confidence level of detecting the ship targets is low.In contrast, YOLO-ES has good detection results and can detect multiple and small ships simultaneously, indicating the effectiveness of ELANH-SPDC, but it detects frame calibration imprecisely.In Figure 13, YOLO-OS and YOLO-Vessel can successfully identify the small target position, demonstrating the efficacy of ASFFPredict and ELANH-ODConv.In addition, YOLO-Vessel can identify more small target ship positions with smaller objects, proving that YOLO-Vessel is more robust in images of ships under extreme adverse weather conditions.
Comparison with Other Algorithms
To further evaluate the detection performance of YOLO-Vessel, this paper compares the algorithms with several mainstream algorithms, employing the same synthetic ship dataset, and including Faster R-CNN, Fast R-CNN, Mask R-CNN, Cascade R-CNN, SSD, YOLOv3, YOLOv4, YOLOv5, YOLOv7, and YOLOv8.The experimental results in Table 5 demonstrate that the proposed YOLO-Vessel outperforms other mainstream algorithms.Specifically, the performance of Faster R-CNN, Fast R-CNN, Mask R-CNN, Cascade R-CNN, SSD, YOLOv3, and YOLOv4 is significantly worse than that of other algorithms, and it is usually challenging to perform ship detection robustly at sea level in adverse weather still needs to be higher.YOLOv4 misses ship detection in the snowy environment, while YOLOv3 and YOLOv5 have better feature extraction ability, but there is a false detection situation, which may mistakenly detect the background as a ship.In contrast, the YOLO-Vessel model improves the prediction network by using ASFF, enabling it to fuse targets' structural features at different scales, detect more ships in complex sea conditions, and achieve better detection performance.Compared with the YOLOv7 algorithm, YOLO-Vessel can extract different types of ship features under different adverse weather conditions and perform target detection for six common types of ships with good recognition results.From the ship detection results in rainy, hazy, snowy, and dawn weather, we can conclude the following: 1.As shown in the rain of Figure 17, YOLOv7 mistakenly identifies the house building background as a ship in a rainy environment; in the same case, as shown in the snow in Figure 17, YOLOv7 incorrectly detects street light as some other vessels in snowy weather, while YOLO-Vessel can correctly detect the ship category and location with 94% confidence, which solves the complex background interference problem.2. As shown in the haze of Figure 17, under hazy weather conditions, YOLOv7 successfully detected large bulk carriers and cruise ships near the shore but failed to detect small target vessels in the distance.In contrast, YOLO-Vessel significantly improved recognition of small ships in haze weather, effectively reducing the probability of missed detections.3.As shown in the dawn of Figure 17, in the dawn environment, YOLOv7 only detects one ship, and the YOLO-Vessel model avoids missed detection and detects all the cruise ships and bulk carrier in the picture.
In conclusion, YOLO-Vessel outperforms YOLOv7 on the real adverse weather dataset, demonstrating its superiority in challenging conditions and further validating the effectiveness of YOLO-Vessel for ship detection in complex backgrounds.Compared with the YOLOv7 algorithm, YOLO-Vessel can extract different types of ship features under different adverse weather conditions and perform target detection for six common types of ships with good recognition results.From the ship detection results in rainy, hazy, snowy, and dawn weather, we can conclude the following: 1.
As shown in the rain of Figure 17, YOLOv7 mistakenly identifies the house building background as a ship in a rainy environment; in the same case, as shown in the snow in Figure 17, YOLOv7 incorrectly detects street light as some other vessels in snowy weather, while YOLO-Vessel can correctly detect the ship category and location with 94% confidence, which solves the complex background interference problem.
2.
As shown in the haze of Figure 17, under hazy weather conditions, YOLOv7 successfully detected large bulk carriers and cruise ships near the shore but failed to detect small target vessels in the distance.In contrast, YOLO-Vessel significantly improved recognition of small ships in haze weather, effectively reducing the probability of missed detections.
3.
As shown in the dawn of Figure 17, in the dawn environment, YOLOv7 only detects one ship, and the YOLO-Vessel model avoids missed detection and detects all the cruise ships and bulk carrier in the picture.
In conclusion, YOLO-Vessel outperforms YOLOv7 on the real adverse weather dataset, demonstrating its superiority in challenging conditions and further validating the effectiveness of YOLO-Vessel for ship detection in complex backgrounds.
Conclusions
In addressing the challenges posed by adverse weather, complex background interference in ship detection images, and multiscale ship target detection, this paper introduces ODConv to optimize the YOLOv7 model.It fully uses four-dimensional weights to learn image features, achieving efficient feature extraction and addressing the challenges of missed and false detections in complex backgrounds.Moreover, the model's detection accuracy is enhanced by introducing the ELANH-SPDC module, guided by SPDC, in the head network to preserve finer feature details.Furthermore, incorporating ASFFPredict into the detection head allows for aggregating more semantic information, enabling more comprehensive feature fusion.This effectively tackles the issues related to detecting ships at various scales, addressing the uneven distribution of target features and enhancing the detection performance for smaller targets.Under the sample constraint, this paper improves the robustness of YOLO-Vessel for target detection in adverse weather using a mixed weather ship image training mechanism.Compared with other YOLO series models, the YOLO-Vessel method trades off complexity and detection accuracy.It can detect many maritime vessels in real-time with high accuracy.In general, the mAP of YOLO-Vessel reaches 78.3%, the detection speed is 8.0 ms/frame, and the GFLOPS is 100.8,thus satisfying real-time ship detection at sea.In our future research, we plan to explore lightweight techniques for the primary backbone of the model to reduce parameter count and model size.This may include methodologies such as model pruning, quantization, and utilizing lightweight convolution modules.We aim to develop models better suited for deployment on mobile devices, enhancing their applicability to maritime ship detection tasks.
Figure 1 .
Figure 1.Entire structure of the YOLO-Vessel.
Figure 3 presents four variations of ELAN-ODConv types, namely ELAN-ODConv-a, ELAN-ODConv-b, ELAN-ODConv-c, and ELAN-ODConv-d, designed with ODConv at different positions in the backbone network of the YOLOv7-Vessel.Experimental results (as detailed in Section 4.5.2) indicate that replacing the ordinary convolution module with ODConv at position c within the ELAN module results in the maximum mAP value.Therefore, ELAN-ODConv-c is selected as the preferred ELAN-ODConv configuration.Sensors 2024, 24, x FOR PEER REVIEW 7 of 26 network of the YOLOv7-Vessel.Experimental results (as detailed in Section 4.5.2) indicate that replacing the ordinary convolution module with ODConv at position c within the ELAN module results in the maximum mAP value.Therefore, ELAN-ODConv-c is selected as the preferred ELAN-ODConv configuration.
Figure 3 .
Figure 3. Structure of ELAN-ODConv.The ODConv is introduced into the ELAN module, where positions a, b, c, and d are highlighted in red, representing the locations where ordinary convolutions are replaced with ODConv.
Figure 3 .
Figure 3. Structure of ELAN-ODConv.The ODConv is introduced into the ELAN module, where positions a, b, c, and d are highlighted in red, representing the locations where ordinary convolutions are replaced with ODConv.
Sensors 2024, 24, x FOR PEER REVIEW 8 of 26 to create the ELANH-SPDC module, which is then integrated into the overall head network structure.
Figure 4 .
Figure 4.The head network is equipped with the ELANH-SPDC module.
Figure 4 .
Figure 4.The head network is equipped with the ELANH-SPDC module.
Sensors 2024, 24, x FOR PEER REVIEW 9 of 26 of discriminative feature information.Therefore, we introduce the SPDC structure into the ELANH structure of the head network, which can effectively improve the detection performance of the model for low-resolution and small ships at sea.
Figure 6 .
Figure 6.The process of adaptively spatial feature fusion.
Figure 6 .
Figure 6.The process of adaptively spatial feature fusion.
Figure 7 .
Figure 7. Synthesized ship images in adverse maritime weather conditions.(a) Rainy weather images, including the original and rainy images with rain streak angles of −45 degrees, 0 degrees, and 45 degrees.(b) Haze images, including the original image and images with haze concentrations of 0.02, 0.04, and 0.06.(c) Snowy weather images, including the original and snowy images with 1,
Figure 7 .
Figure 7. Synthesized ship images in adverse maritime weather conditions.(a) Rainy weather images, including the original and rainy images with rain streak angles of −45 degrees, 0 degrees, and 45 degrees.(b) Haze images, including the original image and images with haze concentrations of 0.02, 0.04, and 0.06.(c) Snowy weather images, including the original and snowy images with 1, 3, and 5 snow levels.(d) Dawn images, including the original and dawn images, have low light coefficients of 0.25, 0.55, and 0.85.
Figure 8 .
Figure 8. Performance comparison of training different improved models on the synthetic weather ship dataset.
Figure 8 .
Figure 8. Performance comparison of training different improved models on the synthetic weather ship dataset.
Figure 8 .
Figure 8. Performance comparison of training different improved models on the synthetic weather ship dataset.
Figure 9 .
Figure 9. Confusion matrix for the YOLO-Vessel model.
Figure 9 .
Figure 9. Confusion matrix for the YOLO-Vessel model.
Figure 10
Figure 10 illustrates the Precision-Recall (P-R) curve.The area enclosed by the P-R curve and the coordinate axes in the image represent the mAP value.It can be observed that the mAP value of the YOLO-Vessel is higher than that of other improved models.The improved model shows a slight enhancement at different recall rates, indicating the effectiveness of the proposed improvements in enhancing the ship detection performance of the model.
Figure 10 .
Figure 10.Precision-Recall curves for different improved models.As Figure 11 demonstrates the comparison of the loss values of the original model with different improvements.Specifically, the loss curve of the YOLO-Vessel exhibits a swifter decline during initial training and converges more stably in the final phase compared to the other models, indicating that ELAN-ODConv and ELANH-SPDC can effectively extract features, resulting in a faster decrease in loss values and optimal model detection performance.
Figure 11 .
Figure 11.Comparison of loss values for different improved models of YOLOv7.
Figure 11 .
Figure 11.Comparison of loss values for different improved models of YOLOv7.
26 Figure 12 .
Figure 12.The heat map of the improved model of YOLOv7 introduces different modules for detecting ships.
Figure 13
Figure13compares ship image detection performance under challenging conditions, including adverse weather, small target occlusion, and low lighting.This comparison involves YOLOv7, YOLO-ES, YOLO-OS, and YOLO-Vessel.As shown in Figure13, YOLOv7 has problems with missed and false ship detection in all types of images.The same is true for YOLO-EO and YOLO-OA, and problems include, for example, insufficient accuracy in identifying frame positioning.However, YOLO-Vessel can successfully
Figure 12 .
Figure 12.The heat map of the improved model of YOLOv7 introduces different modules for detecting ships.
Figure 12 .
Figure 12.The heat map of the improved model of YOLOv7 introduces different modules for detecting ships.
Figure 13 .
Figure 13.The experimental comparison of YOLO-Vessel with YOLOv7, YOLO-ES, and YOLO-OS models is presented in maritime vessel detection.From top to bottom, they represent vessel detection in rainy, hazy, snowy, and dawn weather conditions.
Figure 13 .
Figure 13.The experimental comparison of YOLO-Vessel with YOLOv7, YOLO-ES, and YOLO-OS models is presented in maritime vessel detection.From top to bottom, they represent vessel detection in rainy, hazy, snowy, and dawn weather conditions.
Figure 14 .
Figure 14.From top to bottom: representing the results of complex background ship detection at sea in rainy, hazy, snowy, and dawn weather, respectively.Figure 14.From top to bottom: representing the results of complex background ship detection at sea in rainy, hazy, snowy, and dawn weather, respectively.
Figure 14 .
Figure 14.From top to bottom: representing the results of complex background ship detection at sea in rainy, hazy, snowy, and dawn weather, respectively.Figure 14.From top to bottom: representing the results of complex background ship detection at sea in rainy, hazy, snowy, and dawn weather, respectively.Sensors 2024, 24, x FOR PEER REVIEW 22 of 26
Figure 15 .
Figure 15.From top to bottom: the results of multi-objective ship detection at sea representing rainy, hazy, snowy, and dawn weather, respectively.
Figure 16 .
Figure 16.From top to bottom: small target ship detection results at sea representing rainy, hazy, snowy, and dawn weather, respectively.4.5.5.Experiments on Realistic Ship Detection This section further demonstrates the YOLO-Vessel model's detection performance in a realistic environment, as illustrated in Figure 17.
Figure 15 .
Figure 15.From top to bottom: the results of multi-objective ship detection at sea representing rainy, hazy, snowy, and dawn weather, respectively.
Figure 15 .
Figure 15.From top to bottom: the results of multi-objective ship detection at sea representing rainy, hazy, snowy, and dawn weather, respectively.
Figure 16 .
Figure 16.From top to bottom: small target ship detection results at sea representing rainy, hazy, snowy, and dawn weather, respectively.4.5.5.Experiments on Realistic Ship Detection This section further demonstrates the YOLO-Vessel model's detection performance in a realistic environment, as illustrated in Figure 17.
Figure 16 .
Figure 16.From top to bottom: small target ship detection results at sea representing rainy, hazy, snowy, and dawn weather, respectively.
4. 5 . 5 . 26 Figure 17 .
Figure 17.Ship detection experiments in authentic adverse weather images.From left to right: ship detection results for (a) YOLOv7 and (b) YOLO-Vessel in rainy images, haze, snow, and dawn weather, respectively.
Figure 17 .
Figure 17.Ship detection experiments in authentic adverse weather images.From left to right: ship detection results for (a) YOLOv7 and (b) YOLO-Vessel in rainy images, haze, snow, and dawn weather, respectively.
Table 1 .
Detection results of SPDC applied at different positions in the YOLOv7 head network.
Table 2 .
Detection results of replacing ordinary convolutions with ODConv at different positions in the ELAN module.
Table 3 .
YOLOv7 models with different improvements.
Table 4 .
Performance comparison of the original model with different improved models.
Table 4 .
Performance comparison of the original model with different improved models. | 14,732.2 | 2024-01-28T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
An Improved Grain Growth Model and Its Application in Gradient Heat Treatment of Aero-Engine Turbine Discs
A new grain growth model was developed by introducing the ultimate grain size to the traditional model. The grain growth behavior and its ultimate size under the Zenner pinning force are also discussed. This model was applied to the nickel-based superalloy and integrated into an FEM code. The grain evolution of a forged third-generation powder superalloy heat treated at different temperatures and holding times was studied. A gradient heat treatment setup was designed and implemented for a full-size turbine disc based on the model prediction to meet the accurate dual-microstructure requirements of an advanced aero-engine turbine disc design. The predicted temperature was validated by thermal couple measurements. The relative error between the prediction and the measurements is less than 2%. The metallographic examination of the whole turbine disk through sectioning showed that the grain size was ASTM 7-8 at the rim area and ASTM 11-12 at the bore region, which agrees well with the prediction. The predicted values of the three measurement areas are ASTM 12.1, ASTM 9.1, and ASTM 7.1, respectively, with a maximum error of 5% compared to the measured values. The proposed model was validated and successfully applied to help manufacture a dual-microstructure aero-engine turbine disc.
Introduction
Grain size is a very important factor in determining the properties of polycrystalline materials, such as tensile and creep properties.Hence, it is critical to understand and further predict grain growth during manufacturing processing.There are several ways to predict grain growth, such as phase field and cellular automata, which mainly focus on the change of microstructure and morphology in the process of grain growth at the microscale [1,2].Due to its low computational efficiency, it is normally hard to apply to large structural parts.The deterministic kinetics model for grain growth is widely used in the prediction of grain size with high efficiency and reasonable accuracy.
A simple model for isothermal grain growth was proposed by Beck, which can be expressed as [3]: By introducing an initial grain size D 0 , it can be rewritten as: where D, t, and n are grain size, annealing time, and time-dependent exponent, respectively.The parameter K is a temperature-dependent constant and can be calculated as: where K 0 , Q, R, and T are a constant, grain growth activation energy, gas constant, and annealing temperature, respectively.Burke and Turnbull [4] proposed the grain growth exponent n = 2 by combining the K factor with the surface energy of the boundary and the grain atomic volume.The kinetics of austenite grain growth under isothermal conditions is frequently explained by the Arrhenius constitutive relationship, which is based on a thermally activated atomic movement process.Sellars and Whiteman [5] suggested: where A is the generalized mobility constant, a large value of n up to 10 was utilized.Du [6] suggested that a holding time exponent m should not be ignored and proposed an equation of: Based on the above models, researchers have conducted a lot of valuable work.Zhong et al. [7] fitted the grain growth equation of oil well steel pipe under different insulation temperatures based on the Beck empirical formula and applied it to industrial production.Tian G et al. [8] fitted the kinetic model through isothermal grain coarsening tests and applied it to the heat treatment of powder turbine disks to predict the grain size.More studies have been conducted on the values of n and Q in the kinetic model under different materials and temperatures, in order to obtain better prediction results [9][10][11][12].Unfortunately, most models above are limited to grain growth at constant temperature and cannot overcome the drawback that the grain grows continuously with time based on the model calculation.
Due to the formation of a second phase and impurities, the grain can normally only grow to an ultimate size for most alloys.The main driving force behind grain growth is the grain boundary decreasing due to the grain size increasing in polycrystalline metal materials.There is no inherent relationship between the grain size and the particle pinning characteristics of polycrystalline metals, such as nickel-based powder metallurgy superalloys which normally contain precipitate-strengthening particles [13].The particle size and its distribution may remain relatively unchanged, regardless of the size of the matrix grains.However, the size and volume fraction of the precipitation particles located at grain boundaries play a crucial role in limiting grain boundary migration and affecting the kinetics of grain growth.As the heat treatment temperature and holding time increase, the precipitates experience a decrease in volume fraction and an increase in size.As such, the Zenner pinning force generated by these particles decreases in accordance with the following equation [14]: where C is a constant, γ represents the grain boundary energy, f and r denote the volume fraction and radius of undissolved coarse particles, respectively.For nickel-based powder metallurgy superalloys, with the heat treatment temperature increasing, the volume fraction of γ phase at the grain boundary decreases.As a result, the grain boundary is pinned less effectively.With prolonged holding time at elevated temperature, according to the mechanism of Oswald ripening, the radius of γ phase at the grain boundary increases with the dissolution of small particles, or by the mechanism of precipitate agglomeration [15], the γ phase at the grain boundary aggregates and connects, which also reduces the pinning force.Thus, the particle pinning forces determine the final grain size more effectively than the driving forces.
In polycrystalline alloys, grain boundaries refer to the interfaces between individual grains.The presence of these boundaries increases the overall free energy of the alloy.During heat treatment, grain boundary energy decreases as grain size increases, promoting grain growth.Crystalline thermodynamic stability also contributes to such growth, as smaller grains tend to be less stable thermodynamically.Thus, increasing grain size reduces the instability and allows the crystal coarsening to reach a more stable state.During subsolvus heat treatment, grain growth stops when precipitation pinning forces at the grain boundary are stronger than the driving force for grain growth.Even for heat treatment at super-solvus temperature without precipitates, grains will not grow forever [16].An ultimate grain size threshold exists, which should be incorporated into the grain growth model.
To consider the Zener pinning effect in the grain growth model, an improved one was proposed after modifying Equation (4) by introducing a limit grain size term and integrating it into an FEM code.The code was applied to simulate the heat treatment process of a turbine disc made with a third-generation nickel-based powder superalloy, FGH4113A [17,18].The model parameters were calibrated through isothermal grain coarsening experimental results.The proposed model was validated and successfully applied to help manufacture a dual-microstructure aero-engine turbine disc.
Grain Growth Model Development
If the Zener pinning effect were considered, the rate of growth would decrease to zero as the grain size approached the ultimate size.To simplify the following mathematical derivation, the exponential grain size D n is treated as a variable, and Equation ( 4) can be rewritten in a differential form: If the ultimate grain size D n ult is known, instead of constant grain growth rate ( ∆t ), it is assumed that the growth rate is linearly related to the difference between the ultimate size and the current size.Then the incremental form of the new equation is: If the monotonicity of grain growth is considered: Equation ( 9) can be applied to calculate the grain growth during the continuous heating process.In order to obtain the model parameters based on isothermal grain growth experiments, the proposed model can be expressed in an integral form: Like the original model, the parameter K ult at different temperatures also can be expressed as: The derivation process of the model is shown in Figure 1.The proposed model, based on the Zener pinning effect, describes how the grain size approaches the ultimate size.The growth rate slows down due to the increasing influence of pinning obstacles.To simplify the problem, the proposed grain growth rate is derived from the original model by considering the linear influence of the difference between the ultimate size and the current size.The incremental form of equation (Equation ( 8)) can be rewritten as follows: The proposed model, based on the Zener pinning effect, describes how the grain size approaches the ultimate size.The growth rate slows down due to the increasing influence of pinning obstacles.To simplify the problem, the proposed grain growth rate is derived from the original model by considering the linear influence of the difference between the ultimate size and the current size.The incremental form of equation (Equation ( 8)) can be rewritten as follows: This equation has the same form to the original model (Equation ( 7)), when: when the ultimate size is much larger than the grain size, it will have little influence on the grain growth rate.Let K ult = K D n ult , Equations ( 8) and ( 10) can be expressed by K that is from the original model as follows: Based on Equations ( 10) and ( 16), Figure 2 compares the two models by setting the same K value.Figure 2a shows how the grain size deviates from the original model to the current proposed one.Figure 2b presents the influences of different ultimate grain sizes on grain growth.It is found that the curve of the proposed model approaches the original model if the ultimate size is large enough.The two curves overlap when D ult = 10,000 µm for this calculation.According to Equation ( 6), the limit term is affected by the grain boundary energy, the volume fraction, and the radius of the second phase.Many researchers have studied at the microscale, but these models are too complicated to be applied in heat treatment analysis.Since the influencing factors are all temperature dependent, the limit terms can be simplified to a function expressed in terms of temperature.It is proposed as follows: According to Equation ( 6), the limit term D ult is affected by the grain boundary energy, the volume fraction, and the radius of the second phase.Many researchers have studied at the microscale, but these models are too complicated to be applied in heat treatment analysis.Since the influencing factors are all temperature dependent, the limit terms can be simplified to a function expressed in terms of temperature.It is proposed as follows: where, a and b are material parameters, T s is the solvus temperature of the pining phase, T is the heat treatment temperature.Once the temperature exceeds T s , the pining effect disappears, and the grain will grow continuously following the original model.E. A, Holm et al. [19] used large-scale polycrystalline molecular dynamics simulation to study the influence mechanism of the smooth grain boundary on grain size.The simulation results can be plotted by Equation ( 17) with parameter fitting, as shown in Figure 3a.Song et al. [16] studied the ultimate grain size that was influenced by the multiple pinning effects of the second-phase particles.Below the γ phase solvus temperature, γ has a strong inhibitory effect.While beyond the solvus temperature, oxides and carbides play a dominant role.Equation ( 17) can be applied to each pining mechanism as well, and the fitting results are presented in Figure 3b.According to Equation ( 6), the limit term is affected by the grain boundary energy, the volume fraction, and the radius of the second phase.Many researchers have studied at the microscale, but these models are too complicated to be applied in heat treatment analysis.Since the influencing factors are all temperature dependent, the limit terms can be simplified to a function expressed in terms of temperature.It is proposed as follows: where, and are material parameters, is the solvus temperature of the pining phase, is the heat treatment temperature.Once the temperature exceeds , the pining effect disappears, and the grain will grow continuously following the original model.E. A, Holm et al. [19] used large-scale polycrystalline molecular dynamics simulation to study the influence mechanism of the smooth grain boundary on grain size.The simulation results can be plotted by Equation ( 17) with parameter fitting, as shown in Figure 3a.Song et al. [16] studied the ultimate grain size that was influenced by the multiple pinning effects of the second-phase particles.Below the ' phase solvus temperature, ' has a strong inhibitory effect.While beyond the solvus temperature, oxides and carbides play a dominant role.Equation ( 17) can be applied to each pining mechanism as well, and the fitting results are presented in Figure 3b.
Experimental Materials and Methods
In order to obtain the needed parameters of the equation, extensive isothermal grain growth experiments were designed and performed.
The alloy used in the experiment is the third-generation nickel-based powder superalloy, FGH4113A.Table 1 gives the nominal composition.The main process route of the alloy is: vacuum induction melting (VIM) + argon atomization (AA) + hot isostatic pressing (HIP) + hot extrusion (HEX) + isothermal forging (IF).The master alloy ingot is melted in a vim-80II 500 kg vacuum induction melting furnace with a working vacuum of 10 −3 Pa.The argon atomization powder production process adopts VIGA (100 kg) equipment.The powders were sieved to 270 mesh, vacuum degassed, filled into a container, sealed, and HIPped to a cylindrical part.The HIP process condition was to raise the temperature and pressure to 1150 • C and 150 MPa in 4 h, hold for 4 h, and then cool down with the furnace.Then a Φ105 × 1040 mm bar was extruded by a 5000 t horizontal extruder at an extrusion temperature of 1120 • C, an extrusion speed of 25 mm/s, and an extrusion ratio of 5:1.Finally, a Φ200 mm experimental disk blank was made by a 3000 t vertical die forging hydraulic press.
In order to establish the relationship between heat treatment temperature and grain size, extensive isothermal grain growth tests were performed with a KSL-1400 muffle furnace.The temperatures were set at 1060 • C, 1120 • C, 1160 • C, and 1180 • C, respectively, with different soaking times followed by air cooling as shown in Table 2.The grain sizes of the test pieces were measured to study the effect of heat treatment parameters.The intercept method was used to count the grain size grade of the test piece, and the measurement standard is in accordance with GB/T6394-2017 [20].
Experimental Results
The initial forged-state microstructure is shown in Figure 4.The grain structure is rather uniform after extrusion and forging.The microstructure evolution after heat treatment at different temperatures and soaking times can be found in Figure 5.The grain size of each test piece was measured and plotted in Figure 6.The measurement error is within a 95% confidence interval.Grains grow relatively slowly when the heat treatment temperature is between 1060 • C and 1120 • C. The grain size increases from 4.8 µm to 8.4 µm only when the temperature is 1120 • C, even after 240 min.The grain size remains unchanged if the temperature is 1060 • C.However, if the temperature is 1160 • C or 1180 • C, the grains grow much faster even after 15 min of holding, as large as 12.9 µm and 16.3 µm, respectively.The growth rate slows down gradually with the increase in holding time.After holding for 2 h, the grain sizes are 17.1 µm and 21.8 µm for those two heat treatment temperatures.Thermodynamic calculation shows that the γ' solves temperature of the alloy is 1150 • C.This is one of the reasons why the grain grows relatively slowly when the heat treatment temperature is lower than 1150 • C. The grain growth is impeded by γ' phase due to the pinning effect.However, when the temperature is higher than 1150 • C, the grains grow rapidly because of the dissolution of the γ'.As also shown in Figure 6, the grain will not grow forever with the increase in holding time.The grain size will reach an equilibrium state for each heat treatment temperature, which is consistent with the previous discussion results.temperatures.Thermodynamic calculation shows that the ` solves temperature of the alloy is 1150 °C.This is one of the reasons why the grain grows relatively slowly when the heat treatment temperature is lower than 1150 °C.The grain growth is impeded by ` phase due to the pinning effect.However, when the temperature is higher than 1150 °C, the grains grow rapidly because of the dissolution of the `.As also shown in Figure 6, the grain will not grow forever with the increase in holding time.The grain size will reach an equilibrium state for each heat treatment temperature, which is consistent with the previous discussion results.
Parameter Calibration of Grain Growth Model
Since there is no obvious grain size change in the experimental data at 1060 °C, it is believed that the initial grain size exceeds the limit grain size.Therefore, only the experimental data of 1120 °C and 1180 °C are used in the equation parameter calibration.In
Parameter Calibration of Grain Growth Model
Since there is no obvious grain size change in the experimental data at 1060 • C, it is believed that the initial grain size exceeds the limit grain size.Therefore, only the experimental data of 1120 • C and 1180 • C are used in the equation parameter calibration.In order to calculate the saturated grain size in Equation ( 17), parameters a and b need to be determined.
In this example, the grain size of samples that experienced 720 min heat treatment was assumed to be the ultimate grain size D ult .Equation (17) was fitted in Figure 7 based on the measured grain sizes, and the material parameters are derived a = −114.5628,b = −18.6934,Ts = 1598K (assuming to be the melt temperature).Then Equation ( 10) can be written as follows: After experimental data fitting by Equation ( 18), the material parameters can be determined as = 2.06547 × 10 15 and = 5.15704 × 10 5 , = 2 , and the fitted curves are plotted in Figure 8.The proposed model exhibits an obviously higher accuracy than the original model.Moreover, as an example, the comparison between the original model and the current proposed model is shown in Figure 9.It shows the improvement of the current proposed model.Then Equation ( 10) can be written as follows: After experimental data fitting by Equation ( 18), the material parameters can be determined as A ult = 2.06547 × 10 15 and Q ult = 5.15704 × 10 5 , n = 2, and the fitted curves are plotted in Figure 8.The proposed model exhibits an obviously higher accuracy than the original model.Moreover, as an example, the comparison between the original model and the current proposed model is shown in Figure 9.It shows the improvement of the current proposed model.Then Equation ( 10) can be written as follows: After experimental data fitting by Equation ( 18), the material parameters can be determined as = 2.06547 × 10 15 and = 5.15704 × 10 5 , = 2 , and the fitted curves are plotted in Figure 8.The proposed model exhibits an obviously higher accuracy than the original model.Moreover, as an example, the comparison between the original model and the current proposed model is shown in Figure 9.It shows the improvement of the current proposed model.
Grain Size Calculation Scheme
In order to calculate the grain size from temperature evolution, the Scheil superposition method is used to discretize the continuous change of temperature into a discontinuous isothermal process with multiple incremental steps.In step m, the discrete isothermal grain growth increment is calculated at the current temperature: The grain size at the end of step m can be calculated as: Repeat the operation until the completion of the whole heat treatment process, as shown in Figure 10.
Grain Size Prediction of a Dual Microstructure Turbine Disk and Its Validation
The turbine disk is one of the critical components of an aero-engine turbine.Its performance directly determines the overall performance of the engine.During the service of a high-performance aero-engine turbine, the temperature at the rim area is high, and the coarse-grained structure is preferable to ensure high durability and creep performance.The operating temperature of the bore region is relatively low but subject to large torsion and centrifugal force, which requires a fine grain structure to provide excellent tensile and fatigue strength.The dual-performance powder disk can maximize the performance potential of the material and better meet the actual working conditions of different parts of the turbine disk [21].Simulation is the most effective and efficient method to help design
Grain Size Calculation Scheme
In order to calculate the grain size from temperature evolution, the Scheil superposition method is used to discretize the continuous change of temperature into a discontinuous isothermal process with multiple incremental steps.In step m, the discrete isothermal grain growth increment is calculated at the current temperature: The grain size at the end of step m can be calculated as: Repeat the operation until the completion of the whole heat treatment process, as shown in Figure 10.
Grain Size Calculation Scheme
In order to calculate the grain size from temperature evolution, the Scheil superposition method is used to discretize the continuous change of temperature into a discontinuous isothermal process with multiple incremental steps.In step m, the discrete isothermal grain growth increment is calculated at the current temperature: The grain size at the end of step m can be calculated as: Repeat the operation until the completion of the whole heat treatment process, as shown in Figure 10.
Grain Size Prediction of a Dual Microstructure Turbine Disk and Its Validation
The turbine disk is one of the critical components of an aero-engine turbine.Its performance directly determines the overall performance of the engine.During the service of a high-performance aero-engine turbine, the temperature at the rim area is high, and the coarse-grained structure is preferable to ensure high durability and creep performance.The operating temperature of the bore region is relatively low but subject to large torsion and centrifugal force, which requires a fine grain structure to provide excellent tensile and fatigue strength.The dual-performance powder disk can maximize the performance potential of the material and better meet the actual working conditions of different parts of the turbine disk [21].Simulation is the most effective and efficient method to help design
Grain Size Prediction of a Dual Microstructure Turbine Disk and Its Validation
The turbine disk is one of the critical components of an aero-engine turbine.Its performance directly determines the overall performance of the engine.During the service of a high-performance aero-engine turbine, the temperature at the rim area is high, and the coarse-grained structure is preferable to ensure high durability and creep performance.The operating temperature of the bore region is relatively low but subject to large torsion and centrifugal force, which requires a fine grain structure to provide excellent tensile and fatigue strength.The dual-performance powder disk can maximize the performance potential of the material and better meet the actual working conditions of different parts of the turbine disk [21].Simulation is the most effective and efficient method to help design the gradient heat treatment scheme by accurately predicting the temperature field of the disk in order to have the right grain size distribution for the dual performance requirement [22,23].
Gradient Heat Treatment Setup Design
In order to let the rim and bore of the disc have different grain sizes, it is needed to form a temperature gradient on a single disc.There are many ways to achieve this [24][25][26][27][28][29].Among them, only heating the disk edge by adding an endothermic block to the core is easy to implement.The key is how to control the effective heating time and design the size of the thermal insulation device and endothermic block to meet the temperature requirements of each region.The setup and processing parameters can be obtained with the help of an accurate simulation.
The setup of the disk heat treatment is presented in Figure 11.The simulation model with tooling is illustrated in Figure 11a.The gray part is the insulation, and the black part is the disc body.Because the disc and shaft are together in this case, there is no need to add additional heat-absorbing blocks, which are normally needed to prevent the temperature from rising at the bore area.And the main dimensions of the overall model are shown in Figure 11b.
The heat transfer can be expressed by the following formula: where Flux corresponds to an external flux.h(T − T a ) corresponds to convective heat transfer between the part surface and the medium, where h is the convective heat transfer coefficient and T a is the environment temperature.σε T 4 − T 4 a is the radiation heat transfer term.The radiation heat exchange is proportional to the fourth power of the temperature difference between surface and ambient, where σ is the Boltzmann constant and ε is the emissivity.After tedious tuning of parameters by validating predictions against thermal couple measurements, the radiation heat transfer coefficient between the outer surface and the furnace is determined to be 0.5, the convection heat transfer coefficient is 10 W/(m 2 •K), the contact heat transfer coefficient between the disc and the thermal insulation tooling is 50 W/(m 2 •K), and the furnace wall temperature is 1200 • C.
The FEM model was built in ABAQUS 2020.The model adopts quadratic quadrilateral axisymmetric elements (DCAX8).The mesh size is 4 mm.The chamfer is locally densified, and the size is about 1 mm.The specific parameter settings are shown in Figure 11c.The grain size was calculated in the user subroutine UVARM based on the proposed model.
The thermal insulation tooling is made of heat-resistant fiber.The main thermophysical parameters of the materials used in the calculation are shown in Figure 12.The density of FGH4113A is 8300 kg/m 3 .The density of thermal insulation material is assumed to be constant at 450 kg/m 3 , and the specific heat capacity is 0.5 kJ/(kg K). the gradient heat treatment scheme by accurately predicting the temperature field of the disk in order to have the right grain size distribution for the dual performance requirement [22,23].
Gradient Heat Treatment Setup Design
In order to let the and bore of the disc have different grain sizes, it is needed to form a temperature gradient on a single disc.There are many ways to achieve this [24][25][26][27][28][29].Among them, only heating the disk edge by adding an endothermic block to the core is easy to implement.The key is how to control the effective heating time and design the size of the thermal insulation device and endothermic block to meet the temperature requirements of each region.The setup and processing parameters can be obtained with the help of an accurate simulation.
The setup of the disk heat treatment is presented in Figure 11.The simulation model with tooling is illustrated in Figure 11a.The gray part is the insulation, and the black part is the disc body.Because the disc and shaft are together in this case, there is no need to add additional heat-absorbing blocks, which are normally needed to prevent the temperature from rising at the bore area.And the main dimensions of the overall model are shown in Figure 11b.
The heat transfer can be expressed by the following formula: where corresponds to an external flux.ℎ( − ) corresponds to convective heat transfer between the part surface and the medium, where ℎ is the convective heat transfer coefficient and is the environment temperature.( 4 − 4 ) is the radiation heat transfer term.The radiation heat exchange is proportional to the fourth power of the temperature difference between surface and ambient, where is the Boltzmann constant and is the emissivity.After tedious tuning of parameters by validating predictions against thermal couple measurements, the radiation heat transfer coefficient between the outer surface and the furnace is determined to be 0.5, the convection heat transfer coefficient is 10 W/(m 2 •K), the contact heat transfer coefficient between the disc and the thermal insulation tooling is 50 W/(m 2 •K), and the furnace wall temperature is 1200 °C.
The FEM model was built in ABAQUS 2020.The model adopts quadratic quadrilateral axisymmetric elements (DCAX8).The mesh size is 4 mm.The chamfer is locally densified, and the size is about 1 mm.The specific parameter settings are shown in Figure 11c.The grain size was calculated in the user subroutine UVARM based on the proposed model.
The thermal insulation tooling is made of heat-resistant fiber.The main thermophysical parameters of the materials used in the calculation are shown in Figure 12.The density of FGH4113A is 8300 kg/m³.The density of thermal insulation material is assumed to be constant at 450 kg/m³, and the specific heat capacity is 0.5 kJ/ (kg K).
Heating Process of Gradient Heat Treatment
In order to obtain the maximum temperature gradient, hot loading is usually used.When the furnace temperature reaches 1200 °C, the whole setup (disc and insulation) is loaded into the furnace.The temperature field distribution is shown in Figure 13 after holding for about 2 h and 55 min.The dotted line position is the boundary of the thermal insulation tooling.It can be found that the temperature on the outside of the dotted line is above 1150 °C, which is higher than the solvus temperature of the material, while the temperature on the inside of the dotted line is lower than the solvus temperature of the material, which is shown on the left part of Figure 13.As a consequence, γ` phase on the rim area will be dissolved so that the grains can grow.While the temperature in the core area is low, only the intracrystalline γ` phase will dissolve, and the grain size will remain basically unchanged, as shown on the right part of Figure 13 by the current model prediction.
Heating Process of Gradient Heat Treatment
In order to obtain the maximum temperature gradient, hot loading is usually used.When the furnace temperature reaches 1200 °C, the whole setup (disc and insulation) is loaded into the furnace.The temperature field distribution is shown in Figure 13 after holding for about 2 h and 55 min.The dotted line position is the boundary of the thermal insulation tooling.It can be found that the temperature on the outside of the dotted line is above 1150 °C, which is higher than the solvus temperature of the material, while the temperature on the inside of the dotted line is lower than the solvus temperature of the material, which is shown on the left part of Figure 13.As a consequence, γ` phase on the rim area will be dissolved so that the grains can grow.While the temperature in the core area is low, only the intracrystalline γ` phase will dissolve, and the grain size will remain basically unchanged, as shown on the right part of Figure 13 by the current model prediction.
Heating Process of Gradient Heat Treatment
In order to obtain the maximum temperature gradient, hot loading is usually used.When the furnace temperature reaches 1200 • C, the whole setup (disc and insulation) is loaded into the furnace.The temperature field distribution is shown in Figure 13 after holding for about 2 h and 55 min.The dotted line position is the boundary of the thermal insulation tooling.It can be found that the temperature on the outside of the dotted line is above 1150 • C, which is higher than the solvus temperature of the material, while the temperature on the inside of the dotted line is lower than the solvus temperature of the material, which is shown on the left part of Figure 13.As a consequence, γ' phase on the rim area will be dissolved so that the grains can grow.While the temperature in the core area is low, only the intracrystalline γ' phase will dissolve, and the grain size will remain basically unchanged, as shown on the right part of Figure 13 by the current model prediction.
Due to the high solution temperature on the rim, the grains grow rapidly, and the grain size is at the level of ASTM 7-8, while the grain size of the bore part is at the level of ASTM 11-12, which is basically maintained at the level of the initial forged state grain size.The transition region is small and rather smooth near the thermal insulation tooling position.The dual microstructure forms dual properties for the disc, which will better meet the operation requirements.Such results also align well with some advanced turbine discs manufactured by those well-known aero-engine OEMs.For example, General Electric Aviation Engine has manufactured a dual-performance turbine disk made of René 104 alloy.
The report [30] shows that the grain size levels of its rim are ASTM 6-7, the grain size grade of its hub is about ASTM 11, and the transition zone is ASTM 8-10.NASA's Glenn Research Center has also manufactured a dual-energy turbine disk of ME209 alloy by a low-cost method [31].The grain size of the wheel hub is ASTM 11-12, and the rim is about ASTM 5. Due to the high solution temperature on the rim, the grains grow rapidly, and the grain size is at the level of ASTM 7-8, while the grain size of the bore part is at the level of ASTM 11-12, which is basically maintained at the level of the initial forged state grain size.The transition region is small and rather smooth near the thermal insulation tooling position.The dual microstructure forms dual properties for the disc, which will better meet the operation requirements.Such results also align well with some advanced turbine discs manufactured by those well-known aero-engine OEMs.For example, General Electric Aviation Engine has manufactured a dual-performance turbine disk made of René 104 alloy.The report [30] shows that the grain size levels of its rim are ASTM 6-7, the grain size grade of its hub is about ASTM 11, and the transition zone is ASTM 8-10.NASA's Glenn Research Center has also manufactured a dual-energy turbine disk of ME209 alloy by a low-cost method [31].The grain size of the wheel hub is ASTM 11-12, and the rim is about ASTM 5.
Prediction Validation
Temperature measurement was carried out in order to validate the prediction as well as the effectiveness of the design based on the prediction.The thermal couples were inserted at the points shown in Figure 14a (P1-P2-P3-P4), where P1, P2, and P3 are 1/2 of the thickness and P4 is 40 mm from the bottom.
The setup picture is shown in Figure 14b.The heating equipment is a trolley furnace, and the N-type thermocouples are inserted at the position shown in Figure 14a.The outline dimensions of the powder turbine disk and thermal insulation tooling are already shown in Figure 11b.The furnace is heated based on the curve in Figure 14c.When the furnace temperature reaches the predetermined temperature of 1200 °C, the setup shown in Figure 12b is loaded.After holding another 2 h and 55 min, the setup was discharged.Thermal couples recorded the thermal history throughout the entire process.The whole experimental scheme is shown in Table 3.The same practice had been repeated several times.The results show excellent repeatability for such an operation.
The comparison of temperature variation between simulation and measurement is shown in Figure 14c, while the solid lines are from measurement and the dotted lines are from prediction.The calculated temperature field agrees very well with the measurement through the whole temperature history for all the positions studied.The relative error between the calculation and measurements at the discharging moment is shown in Table 4.
Prediction Validation
Temperature measurement was carried out in order to validate the prediction as well as the effectiveness of the design based on the prediction.The thermal couples were inserted at the points shown in Figure 14a The setup picture is shown in Figure 14b.The heating equipment is a trolley furnace, and the N-type thermocouples are inserted at the position shown in Figure 14a.The outline dimensions of the powder turbine disk and thermal insulation tooling are already shown in Figure 11b.The furnace is heated based on the curve in Figure 14c.When the furnace temperature reaches the predetermined temperature of 1200 • C, the setup shown in Figure 12b is loaded.After holding another 2 h and 55 min, the setup was discharged.Thermal couples recorded the thermal history throughout the entire process.The whole experimental scheme is shown in Table 3.The same practice had been repeated several times.The results show excellent repeatability for such an operation.As shown in Figure 14c The comparison of temperature variation between simulation and measurement is shown in Figure 14c, while the solid lines are from measurement and the dotted lines are from prediction.The calculated temperature field agrees very well with the measurement through the whole temperature history for all the positions studied.The relative error between the calculation and measurements at the discharging moment is shown in Table 4.After dissecting the disc, a metallographic examination was performed.The grain size was measured.The comparison results between prediction and measurement are shown in Figure 15.The path in the radial direction is defined, and its horizontal position is half the thickness of the disc edge.The prediction agrees very well with the measurement from bore to rim as well as the transition zone.The metallographic structures at positions of a, b, and c are shown in Figure 16.Position a is located in the bore with fine grains.Due to the low temperature during heat treatment, the grain size changes slightly.However, position c is located at the edge of the plate, and its grain size is coarse due to the sup-solvus temperature it went through.Position b is located at the edge of the heat shield, which is the transition zone, and its grain size is in between.The grain sizes of the three areas are ASTM 11.8, ASTM 9.0, and ASTM 6.8, respectively, which match well with the predicted values of ASTM 12.1, ASTM 9.1, and ASTM 7.1.The comparison results are shown in Table 5.
slightly.However, position is located at the edge of the plate, and its grain size is coarse due to the sup-solvus temperature it went through.Position is located at the edge of the heat shield, which is the transition zone, and its grain size is in between.The grain sizes of the three areas are ASTM 11.8, ASTM 9.0, and ASTM 6.8, respectively, which match well with the predicted values of ASTM 12.1, ASTM 9.1, and ASTM 7.1.The comparison results are shown in Table 5. measurement from bore to rim as well as the transition zone.The metallographic structures at positions of , , and are shown in Figure 16.Position is located in the bore with fine grains.Due to the low temperature during heat treatment, the grain size changes slightly.However, position is located at the edge of the plate, and its grain size is coarse due to the sup-solvus temperature it went through.Position is located at the edge of the heat shield, which is the transition zone, and its grain size is in between.The grain sizes of the three areas are ASTM 11.8, ASTM 9.0, and ASTM 6.8, respectively, which match well with the predicted values of ASTM 12.1, ASTM 9.1, and ASTM 7.1.The comparison results are shown in Table 5.
Conclusions
In this paper, a grain growth model that considers the Zener pining effect was developed.It was validated by a full-size turbine disc after gradient heat treatment.The following conclusions can be drawn:
Figure 1 .
Figure 1.Derivation process of the model.
Figure 2 .
Figure 2. Comparisons of the original model and the proposed model.(a) Different models approaching the ultimate size; (b) the influences of different ultimate grain sizes.
Figure 2 .
Figure 2. Comparisons of the original model and the proposed model.(a) Different models approaching the ultimate size; (b) the influences of different ultimate grain sizes.
Figure 2 .
Figure 2. Comparisons of the original model and the proposed model.(a) Different models approaching the ultimate size; (b) the influences of different ultimate grain sizes.
Figure 6 .
Figure 6.Grain size grows with time.
Figure 8 .
Figure 8.The improved grain growth model prediction by parameter calibration.
Figure 8 .
Figure 8.The improved grain growth model prediction by parameter calibration.Figure 8.The improved grain growth model prediction by parameter calibration.
Figure 8 .Figure 9 .
Figure 8.The improved grain growth model prediction by parameter calibration.Figure 8.The improved grain growth model prediction by parameter calibration.
Figure 10 .
Figure 10.Illustration of the iterative calculation algorithm.
Figure 9 .
Figure 9.Comparison between the original model and the current proposed model.
Figure 9 .
Figure 9.Comparison between the original model and the current proposed model.
Figure 10 .
Figure 10.Illustration of the iterative calculation algorithm.
Figure 10 .
Figure 10.Illustration of the iterative calculation algorithm.
Figure 11 .
Figure 11.Heat treatment model with thermal insulation.(a) 3D model section; (b) main dimensions of the model in mm; (c) finite element model and boundary parameters.
Figure 11 .Figure 12 .
Figure 11.Heat treatment model with thermal insulation.(a) 3D model section; (b) main dimensions of the model in mm; (c) finite element model and boundary parameters.
Figure 12 .
Figure 12.Thermal physical properties of materials in the calculation.
Materials 2023 , 18 Figure 13 .
Figure 13.Predicted temperature and grain level distribution of the disk.
Figure 13 .
Figure 13.Predicted temperature and grain level distribution of the disk.
Figure 14 .
Figure 14.Experimental validation.(a)layout of measuring points; (b) experimental setup; (c) heating process; (d) comparison on temperature between experiment and calculation.
Figure 14 .
Figure 14.Experimental validation.(a)layout of measuring points; (b) experimental setup; (c) heating process; (d) comparison on temperature between experiment and calculation.
Figure 15 .Figure 16 .
Figure 15.Predicted and measured grain size along the path.
Figure 15 .
Figure 15.Predicted and measured grain size along the path.
Figure 15 .Figure 16 .
Figure 15.Predicted and measured grain size along the path.
Figure 16 .
Figure 16.Metallographic structure of different positions.(a) position a at the bore; (b) position b at the transition zone; (c) position c at edge of the plate.
( 1 )
A new grain growth model has been developed by combining the traditional model and the Zener pinning effect.This model can reflect how the ultimate grain size influences grain growth.(2)A simplified temperature-dependent ultimate grain size model has been proposed.It can be applied flexibly to materials exhibiting both single and multiple pinning mechanisms.(3) Isothermal grain growth experiments were conducted to study the grain growth of a nickel-based superalloy and to validate the proposed model.
Table 4 .
Comparison of temperature between prediction and measurement.
Table 5 .
Comparison of grain size level at measuring positions.
Table 5 .
Comparison of grain size level at measuring positions.
Table 5 .
Comparison of grain size level at measuring positions. | 9,815.4 | 2023-10-01T00:00:00.000 | [
"Materials Science"
] |
Tim17p Regulates the Twin Pore Structure and Voltage Gating of the Mitochondrial Protein Import Complex TIM23*
The TIM23 complex mediates import of preproteins into mitochondria, but little is known of the mechanistic properties of this translocase. Here patch clamping reconstituted inner membranes allowed for first time insights into the structure and function of the preprotein translocase. Our findings indicate that the TIM23 channel has “twin pores” (two equal sized pores that cooperatively gate) thereby strikingly resembling TOM, the translocase of the outer membrane. Tim17p and Tim23p are homologues, but their functions differ. Tim23p acts as receptor for preproteins and may largely constitute the preprotein-conducting passageway. Conversely depletion of Tim17p induces a collapse of the twin pores into a single pore, whereas N terminus deletion or C terminus truncation results in variable sized pores that cooperatively gate. Further analysis of Tim17p mutants indicates that the N terminus is vital for both voltage sensing and protein sorting. These results suggest that although Tim23p is the main structural unit of the pore Tim17p is required for twin pore structure and provides the voltage gate for the TIM23 channel.
Because more than 95% of the ϳ700 yeast mitochondrial proteins are encoded in the nucleus, newly synthesized proteins need to be translocated to their final destinations in the outer and inner membranes, the matrix, or the intermembrane space (1). Three multisubunit complexes or translocases mediate this routing of preproteins. All precursor proteins cross the outer membrane through the translocase of the outer membrane (TOM). 3 The TIM22 and TIM23 complexes are two translocases in the inner membrane (for reviews, see Refs. [2][3][4][5][6][7]. Mul-tipass membrane proteins carrying internal targeting signals, e.g. phosphate carrier, are inserted into the inner membrane by the TIM22 complex. Those preproteins with cleavable N-terminal presequences that are destined for the matrix or the inner membrane are transported by the TIM23 translocase. The TIM23 complex is formed by at least three integral membrane proteins including Tim23p, Tim17p, and Tim50p. Preproteins are recognized in the intermembrane space by the large C-terminal domain of Tim50p. This domain interacts with Tim23p and guides the precursor protein to the pore of the complex (8 -10). Tim23p is embedded in the inner membrane and putatively forms the translocation pore of the complex. Tim23p also contains a hydrophilic domain of about 100 amino acids exposed to the intermembrane space that has receptorlike properties for the recognition of preproteins (11)(12)(13)(14). For complete translocation, preproteins need the driving force of the presequence translocase-associated motor or PAM complex, which contains mtHsp70 (matrix heat shock protein), Mge1, Pam16p/Tim16p, Pam18p/Tim14p, and Tim44p (15)(16)(17)(18)(19)(20).
Until recently, little was known of the function of the integral membrane protein Tim17p. The high degree of homology of Tim17p with Tim23p led to speculation that these two proteins form the protein-translocating channel (21)(22)(23). It has also been hypothesized that Tim17p might form a Tim23p-independent channel that mediates incorporation of proteins into the inner membrane by a stop-transfer mechanism (24). Recently Tim17p was shown to be essential for both sorting of proteins into the inner membrane and translocation of precursor proteins into the matrix where Tim17p may provide a link between the TIM23 and PAM complexes (2). Moreover Tim17p specifically interacts with the purified N-terminal domain of Pam18/Tim14p, which is exposed to the intermembrane space (2).
At the core of each of the translocases is a channel, or pore, that provides the aqueous pathway for the transit of unfolded proteins. The TOM and TIM22 complexes were found to have twin pore structures by single particle analysis (25,26); this approach has not yet been successfully applied to the TIM23 complex. In previous electrophysiological studies, the channel activities associated with the TOM and TIM23 complexes were found to be remarkably similar (13,27). In this study, we provide evidence that the TIM23 channel has a twin pore structure, like the TOM and TIM22 channels, and that Tim17p is vital to maintaining this structure. Analysis of several mutants revealed that the N terminus of Tim17p acts as the voltage sensor for the TIM23 complex.
Isolation of Mitochondria and Preparation of Proteoliposomes-A mutant strain of Saccaromyces cerevisiae,
Tim17(Gal10), in which the expression of TIM17 gene is controlled by a Gal10 promoter was used (28). Cells were cultivated at 30°C on standard defined lactate medium with 2% glucose in the presence or absence of 1% galactose for 24 h as described by Milisav et al. (28). Three additional Tim17 mutants were analyzed and include versions lacking the C-terminal 24 (Tim17⌬C) or N-terminal 11 (Tim17⌬N) amino acids, double point mutant D4R/D8K (Tim17DD3 RK), and a fourpoint mutant with an additional replacement at amino acids 80 and 83 (Tim17DD3 RK/KR3 DD). Cells were grown at 30°C on semisynthetic lactate medium as described by Meier et al. (29).
Patch Clamping Techniques-Patch clamp experiments were carried out on reconstituted TIM23 channels of proteoliposomes containing purified mitochondrial inner membranes (13,32). Briefly membrane patches were excised from giant proteoliposomes after formation of a gigaseal using microelectrodes with ϳ0.4-m-diameter tips and resistances of 10 -30 megaohms. Unless otherwise indicated, the solution in the microelectrode and bath was 150 mM KCl, 5 mM HEPES, pH 7.4, at ϳ23°C. Voltage clamp was established with the inside-out excised configuration (35) using a Dagan 3900 patch clamp amplifier in the inside-out mode. Voltages across excised patches were reported as bath potentials. The open probability, P o , was calculated as the fraction of the total time the channel spent in the open state from total amplitude histograms generated with WinEDR Software (courtesy of J. Dempster, University of Strathclyde, Glasgow, UK) from 20 -40 s of current traces. V 0 is the voltage at which the channel spends half of the time open (P o is 0.5). Mean open time was measured by analyzing Ͼ1000 transition events per patch. Filtration was 2 kHz with 5-kHz sampling for all analysis and currents traces shown unless otherwise stated. Simulations were generated by WinEDR Software as described previously (27) by providing single channel parameters including transition amplitude, mean open and closed times, and designating five openings/ burst for each data set. The distribution of time spent in each of the three states (O, two open (P O1O2 ϭ P o 2 ); S, one open and one closed (P O1C2 or C1O2 ϭ 2P o (1 Ϫ P o )); and C, two closed (P C1C2 ϭ (1 Ϫ P o ) 2 )) was fit to the open state of two independent channels. Permeability ratios were calculated from the reversal potential in the presence of a 150:30 mM KCl gradient as described previously (13). Peptides were introduced by perfusion of the 0.5-ml bath with 3-5 ml of medium. Flicker rates were determined from 20 -40 s of current traces as the number of transition events/s from the open to lower conductance states with a 50% threshold of the predominant event (ϳ250 pS).
The pore size was estimated using the polymer exclusion method (36,37). The transition size and peak conductance in the presence of a series of polyethylene glycols (PEGs; molecular mass, 200 -8000 Da) was determined. PEG solutions were 15% (w/v) in 150 mM KCl, 5 mM HEPES, pH 7.4, and were added to the bath by perfusion. The radius of the pore of the TIM23 channel was also estimated from the peak conductance assuming a pore length of 7 nm (38). That is, R pore ϭ (l ϩ (a)/2) (/a 2 ) where R and are the resistivity of the pore and solution, l is pore length, and a is pore radius (39).
Immunoblotting-Mitochondrial proteins were separated by SDS-PAGE (40) and electrotransferred (32) onto polyvinylidene difluoride membranes. Indirect immunodetection used chemiluminescence (ECL by Amersham Biosciences) using horseradish peroxidase-coupled secondary antibodies. Membrane proteins (0.5-12 g/line) were decorated with antibodies against Tim23p, Tim17p, and Tim44p (gift of M. Brunner and I. Milisav). Scion imaging and densitometry were used to semiquantify the amount of Tim17p and Tim23p from the signal intensities of bands on Western blots that were normalized relative to 1 g of total protein of cells grown in the presence of galactose.
Peptides-Peptides were prepared by the New York State Department of Health Wadsworth Center Peptide Synthesis Core Facility (Albany, NY) using an Applied Biosystems 431A automated peptide synthesizer as described previously (13). The presequence peptides used were based on amino acids 1-13 and 1-22 from the N terminus of cytochrome oxidase subunit IV of S. cerevisiae (yCoxIV-(1-13) and yCoxIV-(1-22)) and a synthetic mitochondrial presequence, SynB2 (46). Peptides were subjected to mass spectroscopy to determine impurities and proper composition and were typically Ͼ90% pure.
RESULTS
Considerable evidence links TIM23 channel activity to protein import and the TIM23 complex. Antibodies against Tim23p specifically block TIM23 channel activity in patch clamp experiments and protein import in mitoplasts (13). A tim23.1 strain that is import-deficient displays altered TIM23 channel activity (13). TIM23 channel activity is reversibly regulated by synthetic presequence peptides (13,33,41,42). The frequency of detecting TIM23 channels is directly coupled to the amount of Tim17p (supplemental Fig. S1) or Tim23p (47) present in the membrane. Moreover the single channel properties of TIM23 are surprisingly similar to those of TOM channel, the import channel of the outer membrane (27).
To determine whether TIM23 channels have a twin pore structure like that of TOM channels, the architecture of the TIM23 channel was investigated by characterizing normal and Tim17 mutant strains of yeast with patch clamp techniques. Mitochondrial inner membranes were purified and fused with small liposomes to form giant proteoliposomes. Membrane patches were excised from proteoliposomes with a micropipette, and the conductance (which is proportional to the resistance to the current flow of ions and the pore size) was measured at various voltages in the presence and absence of molecules that can size the channel pore. Single channel properties were routinely measured to verify TIM23 channel identity. Table 1) (13,27).
The Putative Twin Pore Structure of the TIM23 Channel-The single channel behavior can shed light on the pore structure of the TIM23 channel. TIM23 behavior could be described by any of the three models in Fig. 1, C-E. Changes in the radius of the "iris-like" pore of (43). Alternatively the TIM23 channel may have two pores of equal size (Fig. 1, D and E). In both cases, the two pores are open when the conductance is ϳ1000 pS, one pore is open and one is closed when the conductance is 500 pS, and both are closed when the conductance is 0 pS. The transition size corresponding to opening or closing of one pore in current traces is typically 500 pS (Fig. 1, A and B). The difference between the models of Fig. 1, D and E, is that gating of the twin-sized pores of Fig. 1D is independent and that of Fig. 1E is cooperative.
The model with a single pore that changes diameter (iris-like) can be distinguished experimentally from the twin pore models by differences , and closed states, respectively. C-E, three models of TIM23 channel pore structure are shown. Model C has an iris-like single pore whose conductance changes with radius. Model D has two equal sized pores that gate independently. Either channel 1 or 2, but not both, would be open in the substate. Model E has twin pores of equal size that cooperatively gate. Open and filled circles indicate open and closed pores, respectively. F-K, the pore size of the TIM23 channel was measured using the polymer exclusion method. The relative transition size (F and G) or peak conductance (H) of TIM23 channels from wild-type (F) and Tim17(Gal10)ϩGal (G and H) strains are shown in the presence (g) and absence (g 0 ) of various PEGs. The PEG molecular mass range (radius) was: 200 (0.5 nm), 400 (0.7 nm), 600 (0. Table 1) (13,27).) Assuming a pore length of 7 nm (38), this approach indicates that the iris-like single pore would have an open state radius of 1.43 Ϯ 0.08 nm and half-open radius of 0.93 Ϯ 0.03 nm ( Table 2). Both twin pores are predicted to have a radius of 0.93 Ϯ 0.03 nm. The polymer exclusion method was then used to measure the pore size to determine whether the radius actually changed during transitions between the open and half-open states.
The polymer exclusion method is a common means of measuring pore sizes and is based on an observed decrease in conductance when non-electrolytes, e.g. PEG of various molecular weights and known radii, enter the pore of the channel (3,36,37,44). The presence of non-electrolytes in the pore reduces the "room" available for the current-carrying electrolytes K ϩ and Cl Ϫ , reducing the conductance. Impermeable non-electrolytes have no effect on the conductance once corrected for differences in the conductivity of media.
Both the transition size and the peak conductance of TIM23 channels decreased in the presence of 200 -600-Da PEGs (0.5-0.8 nm) indicating that these PEG were permeable (Fig. 1, F-H). Neither were affected by the 1000 -8000 molecular weight PEG (0.94 -3.05 nm) indicating that these PEGs were not permeable in either the half-open or open state. A pore radius of 0.81-0.94 nm was calculated from the transition size (Fig. 1, I and J), and 0.90 -0.94 nm was calculated from the peak conductance data ( Fig. 1K and Table 2). Both values are similar to the 0.93 Ϯ 0.03-nm radius estimated from the transition size by the method of Hille (39). However, these values are significantly different from 1.43 Ϯ 0.08 nm predicted from the peak conductance (Table 2). Hence the TIM23 channel is not an iris-like single pore (Fig. 1C) because the fully open and half-open states have the same permeability for various sized PEGs, i.e. their radii are the same. Further investigations were needed to distinguish between the two twin pore models.
Do the twin pores open and close independently (Fig. 1D) or cooperatively (Fig. 1E)? If multiple, independent channels are in a membrane patch, the total amplitude histograms should fit a binomial distribution. This is not the case for the TIM23 channel. That is, distributions simulated to fit the occupancy of two open and independent 500 pS channels (O) poorly fit the observed occupancy for the two closed (C) or one open/one closed (S) substates (Fig. 1, L and M). The closed state is almost never occupied in Fig. 1L, and the substrate is occupied much less in Fig. 1M than that predicted by a binomial distribution. Because the distribution of occupancy of the three states in the total amplitude histogram is not binomial at voltages ՆϮ30 mV, the twin pores are not independent. Instead they cooperatively gate. (Another example of cooperative gating is in the amplitude histogram of Fig. 5A, upper panel.) Hence the twin pore model of Fig. 1E accounts for the observed cooperative gating and the pore size determinations for the TIM23 channel.
In summary, the single channel behaviors of the TIM23 and TOM channels are nearly identical and consistent with a twin pore structure that cooperatively gates as described above. Although the TOM pore is composed of only Tom40p, it is not known whether Tim23p forms the TIM23 pore alone or in conjunction with Tim17p. Several mutants were examined to determine the role of Tim17p in the twin pore structure and gating of the TIM23 channel. Depletion of Tim17p Modifies the Architecture of the TIM23 Pore-Depletion of Tim17p by the removal of galactose from the growth medium of Tim17(Gal10) yeast did not significantly modify the expression levels of some other components of the TIM23 complex as shown in the immunoblot of Fig. 2A. But these ϪGal mitochondria were protein import-incompetent as they imported significantly less Su9-(1-69)-DHFR than mitochondria of the same strain grown in the presence of galactose (Fig. 2B) (28).
The effects of depleting this protein on TIM23 channel activity were then determined by patch clamping proteoliposomes containing inner membranes from mitochondria with normal or depleted levels of Tim17p. As described above, the TIM23 channel activity from this strain grown with galactose is identical to that of wild-type mitochondria (Figs. 1, A and B, and 2C and Table 1) (13,27). However, striking differences were seen in the TIM23 channels after depletion of Tim17p. (Note that a few normal TIM23 channels were detected after depletion of Tim17p, but as shown in supplemental Fig. S1, a low level of Tim17p expression through the leaky Gal promoter could account for these channels in Tim17gal10-Gal mitochondria.) Tim17p-depleted channels had a conductance of ϳ700 pS (Fig. 2D). The usual flickering between open, sub-, and closed states was eliminated, and the mean open time was dramatically longer than wild type (Table 1). This activity showed no voltage dependence as the channel remained open regardless of voltage (Fig. 2F). Thus, the gating properties of TIM23 channels were abolished by depletion of Tim17p.
Wild-type TIM23 channels flicker (downward deflections in the current traces at positive voltages) more rapidly between conductance levels in the presence of synthetic signal peptides whose sequences mimic that of the targeting region of many mitochondrial preproteins, e.g. yCoxIV-(1-13) and yCoxIV-(1-22) (45). This increase in flickering is reversible and voltage-dependent (45) and likely corresponds to peptide translocation (3). SynB2 is a control peptide whose charge and secondary structure is similar to that of signal peptides. However, SynB2 does not modify TIM23 channel activity, and the sequence does not target preproteins to mitochondria (45,46).
Although the TIM23 channels depleted of Tim17p were still affected by signal peptides, the response was markedly different from that of wild-type and ϩGal channels. The current traces and histograms of mitochondria of Tim17(Gal10)ϩGal show the same rapid flickering as those of wild-type strains (Fig. 2, G and H) (45). In contrast, signal peptides induced a closure of Tim17p-depleted channels (Fig. 2, G and H) that was not reversible as repeated replacement of the bath solution to remove the peptides did not reopen the channel. Therefore, signal peptides were recognized, but the normal response of flickering was not induced. This finding indicates that Tim17p is not requisite for recognition of signal peptides, but this com-ponent is essential, either directly or indirectly, for the increase in flickering normally induced by signal peptides.
Depletion of Tim17p disrupted the twin pore structure of the TIM23 channel. We estimated the pore size using the polymer exclusion method from the peak conductance because Tim17p-depleted channels showed no transitions in current traces regardless of voltage (Fig. 2D). The peak conductance of Tim17p-depleted channels decreased in the presence of 200 -1450-Da PEGs indicating that these PEGs were permeable (Fig. 3A); normal TIM23 channels were permeable to 200 -600-Da PEG (Fig. 1, F-H). These data show that the pores of the Tim17p-depleted channels were slightly larger than normal as they also allowed permeation of 1000-and 1450-Da PEGS, whereas the wild-type channels did not. Plots of the second derivative of the relative conductance and polymer radius predicted a pore size of 0.81-0.94 nm for normal TIM23 channels (Fig. 1, I-K) and a slightly larger 0.95-1.14-nm pore for those depleted of Tim17p ( Fig. 3B and Table 2). Interestingly these channels maintained their poor ion selectivity (Table 1). These data indicate that TIM23 channels depleted of Tim17p are single, not twin, pores because no transitions to half-open states were seen in the current traces, and the pores sizes estimated by the methods of Hille (39) and polymer exclusion agree ( Fig. 3E and Table 2). In summary, Tim17p is essential to the twin pore structure of TIM23 channels as depletion of this protein resulted in channels that were single pores and voltage-independent.
Tim17p Is Essential for the Voltage Dependence of TIM23 Channels-TIM23 channels are voltage-dependent and usually occupy the fully open states at low potentials regardless of polarity. The V 0 (voltage at which the channel spends half of its time open) is ϳ50 mV for positive potentials, and the gating charge for positive voltages is about Ϫ4 ( Table 1). The voltage dependence is asymmetrical as TIM23 channels close at lower positive than negative potentials. The voltage gate of TIM23 channels has not been identified previously.
The C terminus of Tim17p is important in stability of the twin pores but not the voltage dependence. A truncation mutant in which the last 24 amino acids of Tim17 were deleted (Tim17⌬C) was characterized. The Tim17⌬C channels maintained a twin pore structure and many of the single channel parameters of wild-type TIM23 channels, including voltage dependence. However, multiple substates (S* in figures) were seen, and the transition sizes were variable, most frequently 300 -600 pS instead of the usual 500 pS. Hence this mutation modified the twin pore structure (Table 3 and Fig. 4B). However, Tim17⌬C yeast grow normally (29) and are import-com-FIGURE 3. Depletion of Tim17p modifies the size of the pore and the cooperative gating characteristics of TIM23 channels. A and B, estimation of the pore sizes of the TIM23 channels of the Tim17(Gal10) grown without galactose (Ϫgal) was done by the polymer exclusion method. A, the relative peak conductance is shown in the presence (g) and absence (g 0 ) of various PEGs as described in Fig. 1. B, the second derivative of the data of A reveals the restriction radii for small and large non-electrolytes. C, a total amplitude histogram is shown for 30 s of the current trace at ϩ30 mV shown in D. D, single channel current traces recorded at ϩ30 mV from proteoliposomes containing inner membranes from Tim17(Gal10)ϪGal. The dotted line corresponds to I ϭ 0 pA. Other conditions are as in Fig. 1. E, model of TIM23 channel shows that the "twin pore" structure that cooperatively gates becomes a "single pore" after depletion of Tim17p. O corresponds to the open state. Unlike the C terminus, the N terminus of Tim17p is highly conserved. Several mutants, including N-terminal deletions, severely diminish the import of preproteins that depend on the TIM23 complex for translocation (29). Deletion of the first 11 amino acids of Tim17p (Tim17⌬N) generated two different channel activities (Figs. 4, C and D, and 5). As shown in Table 3, most of the Tim17⌬N channels behaved like the Tim17p-depleted channels (Tim17 (Gal10)ϪGal, Fig. 3), suggesting that Tim17p may have failed to incorporate into the TIM23 complex. The single pore channels had no transitions and no voltage dependence (Figs. 4C and 5B). Like the Tim17p-depleted channels, the size of the single pores of the Tim17⌬N channels was slightly larger (0.92-1.21 nm) (Fig. 5C) than the wild type (0.81-0.91 nm). The remaining Tim17⌬N channels more closely resembled the Tim17⌬C channels as the transition sizes were variable, and some other characteristics, i.e. peak conductance and effects of signal peptides, were not modified (Figs. 4 and 5). Like wild type, these Tim17⌬N channels displayed cooperative gating as the amplitude histograms were not easily fit by binomial distributions (Fig. 5A). However, in contrast to wild-type and Tim17⌬C channels, these Tim17⌬N channels were not voltage-dependent (Fig. 4). Hence the N terminus of Tim17p is necessary for normal voltage gating of TIM23 channels.
Further studies were undertaken to identify the voltage gate. Previously two highly conserved, negatively charged residues at positions 4 and 8 of Tim17p were changed to positively charged residues (29); this mutant is referred to as Tim17DD3 RK (Fig. 6A). This mutation caused a severe growth defect and suppression of protein import. Both growth and protein import were improved by the additional replacement of two conserved positively charged residues between transmembrane domains 2 and 3 by aspartate residues (positions 80 and 83) in the Tim17DD3 RK/KR3 DD mutant (29). We characterized the TIM23 channel activity of these two mutants (Fig. 6). For the most part, Tim17DD3 RK channels were modified twin pores with variable transition sizes and cooperative gating (Fig. 6B and Table 3). Importantly the voltage dependence of Tim17DD3 RK channels was lost like that of the Tim17⌬N channels. Hence the two aspartates in the N terminus of Tim17p form, at least in part, the voltage sensor for the TIM23 channel. Despite the fact that Tim17DD3 RK/ KR3 DD yeast grow normally and recover much of their protein import function (29), these channels do not behave normally. The Tim17DD3 RK/KR3 DD channels display modified twin behavior with transitions of variable sizes ( Fig. 6C and Table 3). Unlike the previous mutants, these channels displayed an inverted voltage dependence (Fig. 6C), i.e. substates were occupied at low voltage, and the pores fully opened (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13). O, S*, and C correspond to the open, multiple sub-, and closed states, respectively. C, the polymer exclusion method was used to estimate the size of the TIM23 channel single pore from the Tim17⌬N strain. The left panel shows the relative peak conductance in the presence (g) and absence (g 0 ) of various PEGs. The right panel shows the second derivative of the data of C corresponding to the restriction radii for small and large non-electrolytes. Because of the multiplicity of substates, the same approach could not be applied to channels showing the modified twin pore behavior.
with high voltage of either polarity. Thus, reinstating two essential negative charges on the intermembrane space face of Tim17p restored some voltage dependence (although it is typically inverted) but does not cure the modified twin pore structure.
When applied to Tim17 mutants, signal peptides induced rapid flickering of modified twin pore channels and irreversible closure of single pores channels (Table 3). A wild-type increase in flickering was induced by signal peptides (yCoxIV-(1-13)) in Tim17⌬N (Fig. 5A), Tim17DD3 RK, Tim17DD3 RK/ KR3 DD, and Tim17⌬C channels (not shown) displaying modified twin pore behavior. In contrast, signal peptides induced irreversible closure like that of the Tim17p-depleted channels (Fig. 2E) if the Tim17⌬N (Fig. 5B), Tim17DD3 RK, and Tim17DD3 RK/KR3 DD channels exhibited single pore behavior (not shown). In agreement with the data obtained with Tim17p-depleted channels, the N terminus of Tim17p is not essential for signal peptide recognition, but this domain is intimately involved in the response of TIM23 channels to signal peptides.
In summary, the N terminus of Tim17p is vital to the twin pore structure. Two negative charges at positions 4 and 8 are essential to the normal voltage dependence of TIM23 channels. Truncation of the C terminus resulted in changes in pore structure that had little effect on import competence or growth. Interestingly signal peptides were recognized and modified the channel activities regardless of mutations in Tim17p.
DISCUSSION
The TIM23 complex is responsible for translocation across or insertion into the inner membrane for many mitochondrial preproteins. Here we showed that the TIM23 complex has a twin pore structure, i.e. two equal sized pores that cooperatively open and close. Previously single particle analysis showed that the TOM and TIM22 complexes also have twin pores (25,26,45) suggesting a common principle in the working mechanism of the mitochondrial translocases. An important issue in the field is to define the mechanism(s) that requires a regulated twin pore structure for efficient protein import for all three of these translocases. Studies of both native and reconstituted membranes have found that TOM and TIM23 channels have an open state of 1000 pS and a half-open state of 500 pS, but the pore structure for TIM23 had not been described previously (3,4,13,27,33,34,45). Measurements of the pore sizes of the open and half-open states were used to establish the twin pore structure of the TIM23 channel. The finding that the open and halfopen states have the same permeability to various PEGs, and hence have the same radius, is strong evidence supporting this assignment. The twin pore structure was further probed in this study by examination of a variety of mutants, most of which were characterized previously using biochemical approaches (12,28,29). Patch clamp techniques allowed characterization of individual TIM23 channels and distinguished modified twin and single pore behaviors in various mutants (Fig. 4 -6), behaviors that were not resolved by other techniques.
Most of TOM and TIM23 channel characteristics (like size and voltage dependence) are almost the same (3,27). Both are presequence-sensitive and translocate preproteins. However, TOM channels have a -barrel structure composed only of Tom40p, whereas TIM23 channels are composed of ␣-helices mostly due to Tim23p and perhaps in part due to Tim17p.
The role of Tim17p in the TIM23 complex is a subject of speculation. Previously Tim23p and Tim17p were thought to form the pore of the channel because of their high degree of sequence homology and other studies (11,12,23,48). However, the involvement of Tim17p in the pore was largely dismissed after reports showing that recombinant Tim23p formed channels (14). It has also been suggested that Tim17p might form a second, independent channel (24). Unfortunately difficulties in producing other individual recombinant subunits of the TIM23 translocase like Tim17p have excluded their electrophysiological characterization. In this study, Tim17p was found to play a principal role in regulating pore structure and voltage gating of the TIM23 channel. Nevertheless no novel channel activity could be attributed to Tim17p in a Tim23-depleted strain suggesting that Tim17p did not form pores in the absence of Tim23p (47).
The functional status of Tim17p is crucial to the twin pore structure of the TIM23 channel. The Tim17⌬C mutants were viable but displayed a modified twin pore phenotype with wildtype voltage dependence. As this mutant is protein importcompetent, the modified twin pore structure is functionally operational. Modified twin pore behavior was also seen in Tim17DD3 RK, Tim17DD3 RK/KR3 DD, and some of the Tim17⌬N channels. Although a variety of substates were observed, transitions of 300 pS were common in channels expressing the modified twin pore structure. Transitions of 300 pS would correspond to a pore radius of around 0.7 nm calculated by the method of Hille (39) (see "Experimental Procedures"). Although a single ␣-helix (ϳ0.6-nm radius) could slip through a 0.7-nm pore, the decrease in the pore size may compromise but not prohibit translocation. Previous studies indicate that Tim17p interacts with the hydrophobic C-domain of Tim23p (23). Integrity of both the N and C termini of Tim17p may be crucial to its stable interaction with Tim23p. In contrast, single pore behavior was typically observed in the absence of competent Tim17p for all of the Tim17p-depleted, most of the Tim17⌬N, and some of the Tim17DD3 RK and Tim17DD3 RK/KR3 DD channels. Hence the N terminus of Tim17p may provide a "pivotal switch" necessary to maintain normal twin pore structure. The single pore structure would not preclude translocation as it is slightly larger than the normal pore. However, targeting peptides cause an irreversible closure of the single pores. These "plugged" channels would be incompetent to translocate proteins.
Are these mutant phenotypes the result of instability in Tim17p or assembly of the TIM23 complex? Reconstitution procedures may weaken the stability of the complex, which may result in a dissociation of some components. However, studies with native membranes showed that Tim17p-depleted complexes contain at least Tim50p, Tim44p, Tim23p, and Tim14p (19), whereas the Tim17⌬N complexes contain at least Tim50p, Tim44p, Tim23p, Tim17p, Tim16p, and Tim14p (29). Therefore, the two phenotypes may not be the result of a loss of associated proteins but rather may be attributable to the ability of Tim17p to stabilize the twin pore structure. Modified twin pore structure may be intermediate in the collapse to a single pore structure in the Tim17⌬N channels.
Patch clamping mitoplasts revealed the in situ conductance and voltage dependence of the TIM23 channel that is, for the most part, maintained after reconstitution into proteoliposomes by dehydration-rehydration (Figs. 2 and 4 and Refs. 13, 33, 34, and 45). The voltage gating of the TIM23 channel is an essential characteristic and is responsible for maintaining a closed state in the absence of import. Opening TIM23 channels in the absence of import would depolarize the potential and uncouple oxidative phosphorylation. Binding a presequence to the complex shifts the voltage dependence and opens the TIM23 channel (41,42). Similarly calcium-activated K ϩ channels shift their V 0 (voltage at which the channel spends half of the time open) upon ligand binding so that the open probability changes and the channels, which are always closed, now open (49). Hence binding of a ligand, like signal peptide, would enable a TIM23 channel, which is normally closed at physiological potentials, to briefly open while the presequence is inserted into the pore (41,42).
Conversely, depletion of Tim17p or deletion of its N terminus, which eliminated or severely compromised protein import, usually rendered the complex a single open pore with essentially no voltage dependence at Ϯ60 mV (Figs. 4 and 5). This finding is consistent with the reduced membrane potential estimates made by Meier et al. (29) as open channels would cause depolarization. Furthermore the modified twin pore channels of the Tim17DD3 RK strain did not show voltage dependence (Fig. 6B). These findings suggest that Tim17p, in particular the aspartates at positions 4 and 8, are vital to the normal voltage sensing of TIM23 channels. Although inverted, a voltage dependence was observed when two negatives charges were replaced at positions 80 and 83, which is similar to the voltage dependence of recombinant Tim23p (see below). This inverted voltage dependence would render the channel open at physiological potentials, which would depolarize the mitochondria. Interestingly this strain inserted inner membrane proteins as well as wild type, although translocation remained reduced. This finding, although speculative, suggests there is yet another means of preventing stochastic opening of TIM23 channels. A recent report proposes that Tim50p maintains the permeability barrier of mitochondria by closing the TIM23 channel, which is only activated when presequences need to be translocated (50). Accordingly these essential negative charges may be vital to appropriate association of other translocon components, e.g. Tim50p, PAM18/Tim14p, and/or Tim21p (2). Thus, it appears that TIM23 channels require both a twin pore structure and normal gating properties for efficient import of proteins destined for the matrix. All of the Tim17 mutants examined in this study retained some peptide sensitivity; twin pore channels flickered, whereas single pores closed in the presence of signal peptides. These findings indicate that Tim17p is not integral to presequence recognition in agreement with studies assigning this role to the N terminus of Tim23p (14,23).
The intimate role of Tim23p in the translocation pathway was shown in patch clamp experiments by the effects of a point mutation and antibodies on Tim23 channel activity (13,14). Importantly seminal studies found that recombinant Tim23p formed channels in planar bilayers (14). Nevertheless significant differences exist between the channel activities of recombinant Tim23p channels or purified TIM23 complexes and the channel activities recorded from the native inner membrane by patch clamping mitoplasts or proteoliposomes. These differences include significantly smaller peak conductance and transition sizes, longer mean open times, and lack of voltage dependence for recombinant Tim23p (14,50). Furthermore Truscott et al. (14) reported no significant changes in channel activity in planar bilayers after loss of Tim17p by destabilization of the complex in the Tim23.2 mutant, results strikingly different from the results presented here (Figs. 2-6). These marked differences in channel behavior are likely due to reconstitution and/or the influence of other components, almost certainly Tim17p and possibly Tim50p whose role in closing TIM23 was recently reported (50). Differences in membrane curvature and/or fluidity may also be contributing factors as reported in other systems (51,52). For example, the conductance of colicin E1 channels changes from 60 to 600 pS depending on the lipid environment (51).
Both genetic and biochemical data indicate that equimolar amounts of Tim23p and Tim17p are organized in a subcomplex (22). Modifying this stoichiometry by depletion of either protein abolishes protein import (28). Tim17p and Tim23p are closely related homologues, but the function of these subunits differs. Tim23p is crucial for pore formation and largely consti-tutes the preprotein-conducting channel in the inner membrane. Depletion of Tim17p induces a collapse of the normal twin pore structure into a single pore. Analysis of Tim17 mutants identified the N terminus as vital for normal gating and the twin pore behavior. These results show that the TIM23 channel is formed by a mainly structural subunit, Tim23p, with its voltage gate Tim17p, which together in a concerted action, mediate presequence-gated protein translocation into mitochondria through a twin pore structure. | 8,386 | 2006-12-05T00:00:00.000 | [
"Biology"
] |
Rockfall and Rainfall Correlation in the Anaga Nature Reserve in Tenerife (Canary Islands, Spain)
Rockfalls are frequent and damaging phenomena that occur on steep or vertical slopes, in coastal areas, mountains and along coastal cliff. Water, in different forms, is the most common triggered factor of rockfalls. Consequently, we can consider that precipitation is the most influential factor for slope instabilities and it influences almost all other water parameters. Besides, the specific geology of the Anaga nature reserve in the volcanic island of Tenerife, together with its steep landscape, contributes to the instability of the slopes and frequent rockfalls. Recently, due to climate change and global warming, the annual precipitation/rainfall has declined but the number of heavy storms, associated with intense rainfall and strong winds, events that exceed precipitation thresholds in a brief period has increased which triggers slope movements. This paper describes the analysis of information on rainfall-induced rockfalls in Anaga, Tenerife (Canary Islands), to forecast rock failures of social significance and to improve the capability to respond and emergency decision making. To define reliable thresholds for a certain area, we analized information during the period 2010–2016, reconstructed the rockfall events, and statistically analyzed the historical rainfall conditions that led to landslides. The summary graph correlating precipitation to the probability of occurrence of an event was plotted. Statistical and probability graphs were made with the direct relationship between the number of rockfall events and total rainfall in that period by examining the maximum daily precipitation, not only on the day of the event but up to 3 days before. Hence, the results of this study would serve as a guide for the possible forecasting of rainfall-induced rockfalls, especially for road maintenance services, so that they can be on alert or mobilize the necessary resources in advance depending on the intensity of the expected rainfall. We have determined the correlation between the probability of occurrence of a rockfall event in a natural reserve (Anaga, island of Tenerife, Canary Islands) and the expected rainfall intensity. We have observed the time delay between the occurrence of rainfall and rock falls, corroborated by experience in this area, between the day of the event and the day of the maximum rainfall associated with it. We have provided a tool to be used by the Civil Protection and Emergency and Road Maintenance and Conservation Services of the island of Tenerife as part of their management to mobilise the necessary resources or means or to adopt traffic limitations or restrictions depending on the level of alert decreed for adverse meteorological phenomena related to rainfall. We have determined the correlation between the probability of occurrence of a rockfall event in a natural reserve (Anaga, island of Tenerife, Canary Islands) and the expected rainfall intensity. We have observed the time delay between the occurrence of rainfall and rock falls, corroborated by experience in this area, between the day of the event and the day of the maximum rainfall associated with it. We have provided a tool to be used by the Civil Protection and Emergency and Road Maintenance and Conservation Services of the island of Tenerife as part of their management to mobilise the necessary resources or means or to adopt traffic limitations or restrictions depending on the level of alert decreed for adverse meteorological phenomena related to rainfall.
Introduction
Rockfalls are, by definition, a type of landslide involving abrupt downward movement of rock or soil, or both, that detach from steep slopes or cliffs (Highland and Bobrowsky 2008). The falling mass may break on impact, start rolling on steeper slopes, and continue rolling until the terrain flattens. Rockfalls are frequent and damaging phenomena that occur on steep or vertical slopes, in coastal areas, mountains, and along rocky banks of rivers and streams (Langping and Hengxing 2015). The volume of material in a rockfall can vary considerably, from individual rocks or clumps of soil to massive blocks thousands of cubic meters in size (Margottini et al. 2013).
Water, whether it is solid water (ice, snow), whether it is in liquid form such as rain, groundwater, melting ice, etc.; or whether it is water pressure, water energy (undercutting of slopes by natural processes such as streams, rivers, and ocean/sea waves), seismic activity, or anthropogenic activities (burst water pipes, and similar), etc., is the most common cause of rockfalls (Ansari et al. 2015;de Vallejo et al. 2020a, b;Hibert et al. 2011;Hürlimann et al. 1999;Keefer 2002;Mateos et al. 2020;Saroglou 2019;Uchimura et al. 2010;Wieczorek and Jäger 1996).
At regional scales, empirical approaches to forecast the occurrence of rainfall-induced landslides depend on accuracy defining rainfall thresholds. In recent years, several authors have proposed different methods for calculating rainfall thresholds through statistical analysis of empirical triggering rainfall distributions. These methods include cumulative rainfall amount versus rainfall duration, or average rainfall intensity versus rainfall duration. However, these precipitation thresholds include numerous uncertainties that limit their application in early warning systems (Rosi et al. 2020;Melillo et al. 2018;Guzzetti et al. 2020).
The empirical estimation of precipitation thresholds is affected by different uncertainties linked to: (i) the availability of quality information concerning rainfall measurements, with numerous parameters of intensity, duration, daily and even hourly data; (ii) the existence of a good inventory of rockfalls that have occurred, specifically with the date of their occurrence; (iii) the characterization and identification of the rainfall event responsible for the landslide. For this reason, it is difficult to find in the literature case studies with a good definition of the triggering rainfall thresholds. However, we can consider that rainfall is the factor that most influences the instability of the slopes, and it can influence almost all the other water parameters mentioned above (Ayonghe et al. 1999;Contino et al. 2017;Vessia et al. 2020). In addition, there is a great danger from rockfalls triggered by rain, especially on volcanic, unstable slopes (Barbano et al. 2014;Kimura and Kawabata 2015;Smerekanicz et al. 2008). Furthermore, evidence of the major influence of rain was provided in the previous study developed in the island of Tenerife by Jiménez and García-Fernández (2000). The study concluded that there is a correlation, with more than 99% certainty, between intense rainfall periods and the temporal distribution of local microearthquake activity in Tenerife.
Moreover, climate change poses risks to human and natural systems, and the processes of slope instabilities are part of those risks (Komori et al. 2018;Lollino et al. 2015). The result of climate change and global warming is less rainfall in annual rainfall count, but at the same time there are more severe weather events that exceed precipitation thresholds and trigger slope movements, so there will be an increase in precipitation in concentrated events (Luo et al. 2017;Mateos et al. 2020). Although both factors have an influence, for triggering the mechanisms of soil and rock breakage that move the material on the slopes, exceeding the precipitation thresholds (punctual intense precipitation) is more important than the number of days of rainfall per year (accumulated annual precipitation) (Bello-Rodríguez et al. 2019;Hendrix and Salehyan 2012;Hernández González et al. 2016;Hernandez et al. 2018).
In order to define reliable thresholds for a certain area, we need to reconstruct the rockfall events and statistically analyze the historical rainfall conditions that caused landslides. The data should serve as a guide for the possible prediction of rainfall-induced rock slides.
Rock curtains or other slope covers, protective covers over roadways, retaining walls to prevent rolling or bouncing are used to mitigate unstable slopes. Although rock bolts or other similar types of anchoring are used to stabilize cliffs, some landslides cannot be mitigated, making the importance of predicting failures even greater (Gutiérrez et al. 2010;Mateos et al. 2020). Primarily for the safety of the inhabitants but also for preserving infrastructures such as roads and buildings. (Guzzetti et al. 2007(Guzzetti et al. , 2008Miklin et al. 2016;Peruccacci et al. 2017;Valenzuela et al. 2018Valenzuela et al. , 2019Vennari et al. 2014).
When it comes to protecting human lives, there can never be sufficient research to ensure safety and prevent catastrophic events. For Gran Canaria and Tenerife (Canary Islands) a research group of the Research Institute for Geohydrological Protection (IRPI) and the Geological Survey of Spain (IGME) analyzed rainfall-induced rockfalls based on CTRL-T algorithm exploiting continuous rainfall measurements, and landslide information . Therefore, in this article, we will show a new advance about correlation of rockfall and rainfall events from a historicalstatistical perspective in the area of Anaga, Tenerife.
Study Area: Anaga, Tenerife (Canary Islands, Spain)
The Canary archipelago in the Atlantic Ocean consists of eight islands with a total area of about 7500 km 2 . Tenerife is the largest (2057 km 2 ) and the most populated (966,000 inhabitants and 13.2 million visitors in 2019) island in the center of the Canary archipelago ( Fig. 1) (data from the Spanish National Statistical Institute).
Tenerife not only occupies a central position within the archipelago but also represents an intermediate evolutionary stage relative to the eastern and the western islands of the island chain. It is home to the third-largest volcano in the world, Pico del Teide. If you take the seafloor as the base of the volcano and not the sea level, then the Teide rises more than 7000 m in height (3718 masl) Troll and Carracedo 2016).
Tenerife is mainly a basaltic shield, which represents about 90% of the volume of the island (Hürlimann et al. 1999). It lies on the Jurassic (150-170 Ma) oceanic lithosphere and was constructed via Miocene-Pliocene shields that now form the vertices of the island (Fullea et al. 2015). The shields were unified into a single edifice by later volcanism that continued in central Tenerife from about 12 to 8 million years ago and was followed by a period of dormancy. Rejuvenation at approximately 3.5 Ma is recorded by the central Las Cañadas volcano. During this period magmatic differentiation processes occurred, leading to an episode of felsic and highly explosive felsic volcanism ( Fig. 1) (Martı́ and Wolff 2000;Troll and Carracedo 2016).
The Anaga massif belongs to Series I (Middle-Upper Miocene) and, due to erosion, this massif currently has a steep orography (Fig. 2) with steep slopes (Marinonia and Gudmundssonb 2000). The natural reserve of Anaga is a protected area due to its richness in flora and fauna, as well as archaeological sites (Jiménez-Gomis et al. 2019). The orography and altitude favor rainfall in this area of the island, a phenomenon that increases the probability of rock falls in Anaga (Fig. 3).
The Anaga massif has an average altitude of 850 m above sea level and, in addition, a phenomenon known as the "Foëhn effect" occurs here, which is produced by the warm, humid winds that blow frequently from the northwest to northeast, producing a layer of stratocumulus on the higher ground, often accompanied by drizzle (Santana 2014). Subsequently, the air descends, losing its watery content on the opposite slope. This causes a constant humidity in Anaga, which leads to the capture of horizontal precipitation that (Troll and Carracedo 2016) produces the Foëhn phenomenon (Kalivodová et al. 2020). Therefore, the geographical situation of Anaga, located in the northeast of Tenerife, its altitude and the constant humidity on the slopes, make this area of Tenerife one of the areas with the highest rainfall compared to the rest of the island (Diez-Sierra and del Jesus 2020).
High gradient slopes spread over large areas of the islands, and two antagonistic processes are involved in their formation, namely erosion and the formation of lavas, scoria, and pyroclastic layers. Erosion or mass wasting processes occur on previously unstable slopes. Thus, the northern part of the island is characterized by narrow and deep ravines that contribute to intense slope activity (del Potro and Hürlimann 2008; Melillo et al. 2020). Usually, these landscapes are associated with the oldest basaltic outcrops of the islands (along the deep ravines and coastal cliffs of the "Anaga" and "Teno" massifs, the wall of the "Cañadas" caldera, head and edges of the "Güimar" and "La Orotava" valleys in Tenerife Island) (Fig. 1). In these locations the slopes dip at angles ranging from 50° to 65° and from 26° to 30° and are nearly vertical in the cliff areas. Most of these areas are the result of earlier landslides and consequently, there is an early occurrence of large mass wasting processes (González de Vallejo et al. 2008;Ledo et al. 2015). The accumulation of volcanic lava flows and interbedded pyroclastic layers is the result of cycles of continuous volcanic eruptions that can result in the the build-up of large steeped edifices with poorly stabilized slopes and high risk of landslides. The growth of such unstable volcanic edifices may occur over previously collapsed areas, filling the resulting deeply eroded depressions prone landslides. Steep slopes created by the accumulation of lavas and pyroclasts are thought to be zones of potential landslide hazard. An example of these steep areas is the "Teide stratovolcano", whose flank inclination varies from 25° to 30°. Teide has slopes higher than 1000 m and conditions close to the limit where the slope gradient exceeds the friction angle of their rock massifs (del Potro and Hürlimann 2008; Martı́ and Wolff 2000;Rodríguez-Losada et al. 2009). These areas can be prone to extremely large landslides if cohesion decreases rapidly. The decrease can be due to shallow magma injection, fluid injection, or groundwater pressure (Herrera and Custodio 2014;Kimura and Kawabata 2015;Rodríguez-Losada et al. 2009). Additional factors such as pre-existing fracture zones also increase the risk of landslides (Hibert et al. 2011).
The steep orography and climatic diversity of Tenerife have resulted in a variety of landscapes and geographical formations. The climate of Tenerife is subtropical oceanic; the minimum and maximum annual average temperatures are about 15 °C in winter and 24 °C in summer. The annual rainfall ranges from 100 to 900 mm, being the northern slope the one that receives the highest volume of rainfall, as can be seen from the following image taken from the document CLIMCAN-010 of the Government of the Canary Islands (Fig. 4). Besides, Tenerife offers a wide variety of microclimates controlled by altitude and winds (Bechtel 2016;Hernández González et al. 2016;Köhler et al. 2006).
Methodology
The methodology followed for data collection was: 1. Collec data on all events classified as "Rockfalls" that occurred during the period 2010-2016 (specifically from 01/08/2010 to 05/05/2016) that were addressed by the personnel assigned to the Contracts of Integral Conservation of Roads (North, South, West and Anaga Sectors), promoted by the Cabildo Insular de Tenerife and conducted by external companies. This information was compiled from the management system implemented in the Organic Unit of Integral Conservation (Cabildo de Tenerife), through the computer application GCC.2 "Gestor de Conservación de Carreteras Versión 2". 2. The representativeness of the data in relation to the totality of the administered roads (Insular and of Regional Interest), a total of 584 kms was included out of the esti-mated total of 1378 kms that in that period were available in the entire island (Table 1). However, all the most important main roads of the island (highways, multi-lane roads and conventional roads with the highest traffic) are included; thus, their representativeness of the island's road infrastructure is considered as sufficient. 3. Review of all the reports of incidents collected, checking through the attached graphic information for possible errors, as well as the magnitude of the event, discarding for the study those related to very small surface detachments and with no effect on traffic. 4. Compilation of all meteorological data during that period (2010-2016) from all existing raingauges in the study area. The information published by AEMET 1 and AgroCabildo 2 was used, selecting the gauge closest to each incident. 5. Study the relationship between events and precipitation by calculating probabilities. In the case of Anaga Sector, due to previous experience, it was known that there was a certain delay between the day on which the precipitation occurred and the day on which the event occurred. Therefore, the study was done by checking the maximum daily precipitation, not only on the day of the event, but up to 3 days before. Figure 5 shows the summary graph of the statistical study conducted on the relationship between the daily level of precipitation and landslide events occurring on the roads of the Anaga Sector. A significant observation is the delay between the day of the event and the day the maximum precipitation occurred, up to a maximum of 3 days before the event. The results indicate:
Results and Discussion
• 39% of the maximum precipitation took place on the day of the landslide; • 26% the day before; • 15% two days before; • 20% three days before.
Therefore, only 40% of the events seem to be associated with the maximum precipitation occurring on the day of the landslide. This fact corroborates the experience in this Sector, which confirms that after an episode of rainfall of a certain intensity, events tend to occur not only on the same day but also on the following days without new rainfall in most cases. Figure 6 shows the total number of events in Anaga and the monthly accumulated rainfall. Thus, the results highlight the direct relationship between the number of events and the total amount of rainfall in that period.
The implications of this study can be used in emergency prevention management, especially for road maintenance services, so that they can be on alert or mobilize the necessary resources in advance depending on the intensity of the expected rainfall. In this sense, a graph (Fig. 7) has been prepared for the Anaga Sector, which relates the level of expected rainfall-type of alert with the probability of at least one event occurring on that day. The alert system is similar to the meteorological alert system used by the AEMET (yellow, orange and red levels whose thresholds for Tenerife are 60/100/180 mm in 12 h), with the following results: • A yellow alert (or pre-alert situation according to the DGSE of the Government of the Canary Islands) indicates the probability of at least one event occurring in the Anaga Sector ranges between 70 and 90%. • An orange alert indicates a probability ranging between 70 and 90%. • A red alert indicates a probability of 100%.
Conclusions
Intense rainfall modifies hydrogeological conditions and water levels. Surface movements, predominantly of soils and altered materials, can be triggered. These movements can include new landslides or debris flows, reactivation of old landslides and rockfalls.
From the analysis of the data, it can be concluded that there is a direct relationship between accumulated rainfall and the occurrence of instabilities. The greatest probability for these events applies to when precipitation is greatest. This relationship can be uses as a predictive tool in emergency management, especially for road maintenance and conservation service. Likewise, it has been possible to verify in a certain Sector that in a relevant percentage, the events do not occur in a rhythmic manner on the same day that the maximum rainfall occurs but until several days later. This circumstance is related to the geomorphology (steep reliefs) and the type of material (strongly altered and weathered on the surface) that makes up the slopes in Anaga area.
Finally, it should be noted that this study has been based on data provided by the Road Conservation Organic Unit. These data apply only to landslides and rockfalls that occur from slopes adjacent to roads the Unit is responsible for and for events assessed by the Unit. Other events may have occurred in this area but may not have affected a road or may not have been assessed by the Unit. Therefore, the values obtained should be considered as a minimum threshold.
Acknowledgements The authors thank the Technical Service of Roads and Landscape of the Island Council of Tenerife for the information and documentation considered in this study. This work has been carried out with the financial support of the Interreg Atlantic Area Programme through the European Regional Development Fund under grant agreement N° EAPA_884/2018 (AGEO project). This contribution reflects only the authors' view and that the European Union is not liable for any use that may be made of the information contained therein. Likewise, we would like to thank the previous work developed by the technical team drafting the MACASTAB Project as well as the University of La Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 4,935 | 2022-01-08T00:00:00.000 | [
"Environmental Science",
"Geology",
"Geography"
] |
Prevalence and Impact of Minority Variant Drug Resistance Mutations in Primary HIV-1 Infection
Objective To evaluate minority variant drug resistance mutations detected by the oligonucleotide ligation assay (OLA) but not consensus sequencing among subjects with primary HIV-1 infection. Design/Methods Observational, longitudinal cohort study. Consensus sequencing and OLA were performed on the first available specimens from 99 subjects enrolled after 1996. Survival analyses, adjusted for HIV-1 RNA levels at the start of antiretroviral (ARV) therapy, evaluated the time to virologic suppression (HIV-1 RNA<50 copies/mL) among subjects with minority variants conferring intermediate or high-level resistance. Results Consensus sequencing and OLA detected resistance mutations in 5% and 27% of subjects, respectively, in specimens obtained a median of 30 days after infection. Median time to virologic suppression was 110 (IQR 62–147) days for 63 treated subjects without detectable mutations, 84 (IQR 56–109) days for ten subjects with minority variant mutations treated with ≥3 active ARVs, and 104 (IQR 60–162) days for nine subjects with minority variant mutations treated with <3 active ARVs (p = .9). Compared to subjects without mutations, time to virologic suppression was similar for subjects with minority variant mutations treated with ≥3 active ARVs (aHR 1.2, 95% CI 0.6–2.4, p = .6) and subjects with minority variant mutations treated with <3 active ARVs (aHR 1.0, 95% CI 0.4–2.4, p = .9). Two subjects with drug resistance and two subjects without detectable resistance experienced virologic failure. Conclusions Consensus sequencing significantly underestimated the prevalence of drug resistance mutations in ARV-naïve subjects with primary HIV-1 infection. Minority variants were not associated with impaired ARV response, possibly due to the small sample size. It is also possible that, with highly-potent ARVs, minority variant mutations may be relevant only at certain critical codons.
The oligonucleotide ligation assay (OLA) is an HIV-1 drug resistance assay that is more sensitive than consensus sequencing and can detect mutations at select codons when they occur in as few as 2-5% of the viral quasi-species [35][36][37][38]. This study evaluated the prevalence of mutations detected by OLA and the impact of minority variants on responses to ARV therapy in a cohort of subjects with primary HIV-1 infection.
Patient population
Characteristics of the University of Washington Primary Infection Clinic (PIC) cohort have previously been described [39][40][41]. For this project, we selected a subgroup from among 201 subjects in the cohort who acquired HIV-1 after highly active ARV therapy became widely available in 1996. We preferentially selected subjects who 1) had enrolled in the cohort within one month of their estimated date of HIV-1 infection (defined as the date of onset of seroconversion symptoms or, for asymptomatic individuals, the midpoint between dates of the last negative and first positive HIV-1 tests), 2) had results of a pretreatment HIV-1 drug resistance test (consensus sequencing) already available, and/or 3) initiated ARV therapy within six months of study enrollment. We performed consensus sequencing and sensitive drug resistance testing to determine HIV-1 genotype on the first available (i.e. baseline) plasma and peripheral blood mononuclear cell (PBMC) specimens that had been collected no more than seven days after the start of ARVs. Thirty-four subjects had consensus sequencing performed as part of clinical research evaluations prior to our undertaking this analysis; results of resistance testing performed specifically for this study were not used to guide selection of ARV therapy. This study was approved by the University of Washington Institutional Review Board, and all subjects gave written consent for participation in the cohort.
HIV-1 RNA quantification in blood plasma
Specimens collected between 1996 and 2002 were initially tested with branched DNA (bDNA) assays with lower limits of detection of 50 and 500 copies/mL (Chiron Corporation; Emeryville, CA). When specimens were available, results censored at 500 copies/mL were re-tested using an ultra-sensitive reverse transcription polymerase chain reaction (RT-PCR) assay (Roche; Branchburg, NJ) or an independently-validated real-time RT-PCR amplification assay with lower limits of detection equal to 50 copies/mL [42]. Since 2002, all specimens have been evaluated by an RT-PCR assay.
RT-PCR and PCR for genotyping of HIV-1 pol RNA was extracted from plasma and reverse transcribed, as previously described [35]. DNA was extracted from PBMCs using the Puregene Cell and Tissue kit (Gentra Systems, Inc.; Minneapolis, MN) according to manufacturers' instructions. Nested PCR was performed as previously described [37] with different primers. Briefly, first-round PCR of cDNA or DNA was carried out in a 50-ml reaction mixture containing 10-ml of cDNA or $1 mg DNA, and second-round PCR contained 2-ml of the first-round product. First round primers were PRA and RTA; second round primers were PRB and RT3 [43]. If no amplicon was produced, we used alternate primer sets NE10 and NE11. We visualized the amplicon, a 1,193-bp DNA fragment extending from amino acid 1 in protease to amino acid 230 in reverse transcriptase, in a 1% agarose gel with ethidium bromide staining. Samples with a visible band of the correct size were used for sequencing and OLA.
Consensus sequencing
PCR amplicons were purified and sequenced as previously described [37] using sequencing primers that were identical to those used for second-round PCR. Sequences were analyzed with Sequencher, version 4.2 (Gene Codes Corp; Ann Arbor, MI), and submitted to the Stanford HIV-1 Sequence Analysis Program [44] to identify mutations. For quality assurance, all genotypes generated for this study were aligned with ClustalW, v1.81 and reviewed using a neighbor-joining phylogenetic tree to monitor for cross-contamination.
Oligonucleotide Ligation Assay (OLA)
Amplicons submitted to consensus sequencing were evaluated by OLA for mutations in the region encoding reverse transcriptase (K65R, K70R, L74V, M184V, T215F/Y, K103N, Y181C, and G190A) and protease (D30N, I50V, V82S/A/T, I84V, N88D, and L90M). Results for M41L are not included, as oligonucleotide probes for this codon were not optimized when the laboratory work for this project was completed. OLA was performed as previously described [35][36][37]. All subjects' samples and controls were analyzed in duplicate. We classified samples as mutant if the mean optical density (OD) of duplicates at 490 nm was greater than the OD of the 5% mutant control or 2.5 times the OD of the wild-type control. If the specimen was not classified as mutant and the mean wild-type OD was under 50% of the wild-type control OD, we classified the specimen as ''indeterminate.''
Statistical analysis
We used McNemar's exact tests to compare the number of subjects with transmitted drug resistance mutations detected by OLA and consensus sequencing of plasma, by OLA and consensus sequencing of PBMCs, and by OLA of plasma to OLA of PBMCs. Multivariable regression models explored factors potentially associated with risk of transmitted drug resistance and included year of HIV-1 acquisition (divided into quartiles) to evaluate for evidence of a secular trend.
We used 2-sample t-tests, non-parametric tests, and regression analyses where appropriate to compare mean baseline (i.e. first visit) CD4 + T-cell count, baseline HIV-1 RNA level, and median viral ''set point'' among subjects with minority variant mutations and subjects without detectable transmitted drug resistance. We estimated set point using the HIV-1 RNA level obtained closest to 150 days (between 120 and 730 days) following HIV-1 infection from subjects who had not yet received ARV therapy [45].
We conducted time-to-event analyses using Cox proportional hazard regression models with maximum likelihood estimation to compare time to virologic suppression (defined as the first HIV-1 RNA below 50 copies/mL) among subjects receiving highly active antiretroviral therapy. These analyses were adjusted for pretreatment HIV-1 RNA level closest to and within 30 days before the start of ARVs. We excluded subjects from the time-to-event analyses if they received only single or dual nucleoside reverse transcriptase inhibitor (NRTI) therapy or if they had any mutations detected by consensus sequencing because pre-treatment resistance testing could have guided selection of ARVs. The Stanford University HIV Drug Resistance Database (http:// hivdb.stanford.edu; accessed December 29, 2009) was used to predict the number of active agents in regimens; an ARV agent was considered inactive if subjects had a mutation associated with intermediate or high-level HIV-1 drug resistance to that ARV. Subjects were divided into three groups: 1) subjects without any HIV-1 drug resistance mutations detected by consensus sequencing or OLA, 2) subjects with minority variant mutations treated with ARV regimens with three or more active ARV agents, and 3) subjects with minority variant mutations treated with ARV regimens with fewer than three active agents. Virologic failure was defined as: 1) failure to suppress HIV-1 RNA levels to below 50 copies/mL within 240 days after initiation of ARVs, 2) switch of ARV agents due to a perceived inadequate response to therapy, or 3) viral rebound to greater than 500 copies/mL on two consecutive measurements following successful suppression of HIV-1 RNA levels to below 50 copies/mL. All statistical analyses were performed using Stata9 software (StataCorp LP, College Station, TX).
Results
Demographics and other baseline characteristics of the 99 subjects are shown in Table 1. All subjects were men, and 98% of subjects reported sex with men as their risk for HIV-1 acquisition. Subjects who experienced symptoms consistent with the acute retroviral syndrome (92%) were over-represented in this analysis compared to the entire PIC cohort (84%). All subjects acquired HIV-1 subtype B infection.
We performed HIV-1 drug resistance testing on plasma and PBMC specimens that had been obtained a median of 29 (IQR 19-66) and 31 (IQR 19-66) days after HIV-1 infection; all specimens were collected within six months of infection. Consensus sequencing and OLA detected HIV-1 drug resistance mutations (in either plasma or PBMCs) in 5% and 27% of 99 subjects, respectively. There was no evidence of a trend in incidence of transmitted drug resistance over time, although resistance was more common among subjects infected after May 2000 (31%) compared to those infected prior to this date (16%), and we did not detect non-nucleoside reverse transcriptase inhibitor (NNRTI) resistance among subjects infected prior to this date.
Compared to consensus sequencing, OLA detected significantly greater number of subjects with resistance mutations in both plasma (p = .0005) and PBMCs (p = .002) ( Table 2). OLA performed on plasma and PBMCs detected similar numbers of subjects with drug resistance mutations, but concordance of results was low.
Consensus sequencing detected one subject with M184V in PBMCs only, one subject with M41L and T215D, one subject with T215D and L90M, and two subjects with G190A. The mutations most commonly identified by OLA in reverse transcriptase were M184V (n = 9) and T215Y (n = 5) and in protease were I84V (n = 5) and N88D (n = 5). K103N was identified in only one subject and by OLA only. With use of OLA, detection of NRTI resistance mutations increased from 3% to 13% of subjects, detection of NNRTI resistance mutations increased from 2% to 8% of subjects, detection of protease inhibitor (PI) resistance mutations increased from 1% to 13% of subjects, and detection of multi-drug resistant HIV-1 increased from 1% to 6% of subjects.
Compared to subjects without detectable mutations, there were no differences in the CD4 + T-cell counts or HIV-1 RNA levels at presentation among subjects having at least one mutation detected by OLA or subjects with mutations conferring resistance to NRTIs, NNRTIs, or PIs. Among 24 subjects in this study who remained untreated at a median of 146 days following HIV-1 infection, the median viral ''set point'' was 4.5 (IQR 3.7-4.8) log 10 copies/mL among five subjects with minority variant mutations and 4.5 (IQR 4.1-5.3) log 10 copies/mL among 19 subjects with no detectable drug resistance mutations (p = .6).
Eighty-nine (90%) of the 99 subjects initiated ARV therapy a median of 48 (IQR 24-107, range 5-1092) days after HIV-1 infection ( Table 3). Mean CD4 + T-cell counts (498 versus 486 cells/mm 3 , p = .8) and HIV-1 RNA levels (5.1 versus 4.9 log 10 copies/mL, p = .5) at the start of ARV therapy did not differ between subjects without detectable mutations and those with minority variant mutations. Similarly, we found no association between any class of drug resistance mutation and CD4 count or HIV-1 RNA level at the start of ARV therapy.
The five subjects with mutations detected by consensus sequencing and two other treated subjects without follow-up were excluded from survival analyses. Median time to HIV-1 RNA less than 50 copies/mL was 110 (IQR 62-147) days for 63 treated subjects without detectable mutations, 84 (IQR 56-109) days for ten subjects with minority variant mutations treated with three or more active ARVs, and 104 (IQR 60-162) days for nine subjects with minority variant mutations treated with fewer than three active ARVs (p = .9) (Figure 1). After adjustment for HIV-1 RNA levels at the start of ARV therapy, time to virologic suppression was similar (aHR 1.2, 95% CI 0.6-2.4, p = .6) for subjects with minority variant mutations treated with at least three active agents and for subjects with minority variant drug resistance mutations who received fewer than three active ARV agents (aHR 1.0, 95% CI 0.4-2.4, p = .9) compared to subjects without drug resistance mutations. The eighty-seven treated subjects, including the five subjects with mutations identified by consensus sequencing, were followed for a median of 4.4 (IQR 2.6-7.7) person-years following the start of ARV therapy. Only four (5%) subjects experienced virologic failure. One subject ( Table 3 ID #95816) did not have resistance testing performed prior to starting ARV therapy. After virologic failure, G190A was identified in his baseline specimen by both consensus sequencing and OLA. The second subject with virologic failure (ID #26973) had no major mutations identified by consensus sequencing. His HIV-1 RNA level decreased to 2.0 log 10 copies/mL before it quickly rebounded; T215Y was detected by only OLA in his PBMCs from baseline. Drug resistance mutations were not identified at baseline in two other subjects who experienced virologic failure after receiving ARV therapy for five months and five years;
Discussion
The results described here represent one of the most comprehensive surveys of minority variant drug resistance in primary HIV-1 infection in terms of the number of mutations studied. We detected drug resistance mutations in 27% of a male cohort who acquired HIV-1 infection after 1996. Despite a high prevalence of minority variant drug resistance mutations, this was not associated with a difference in viral set point or in the virologic response to ARV therapy among treated subjects.
The finding that a sensitive assay detected HIV-1 drug resistance mutations in a greater number of subjects compared to consensus sequencing is consistent with other studies of ARVnaïve subjects with primary [21,32] and established HIV-1 infection [24,46]. Although the high prevalence of minority variants in our subjects is somewhat incongruous with the previously-held belief that sexual transmission of HIV infection is predominantly monophyletic, more recent data have suggested that men who have sex with men frequently acquire multiple variants [47]. It is also possible that OLA detected variants that had been spontaneously generated by random misincorporation of base pairs during reverse transcription of HIV RNA. However, although possible, it is unlikely that, without ARV selection pressure, mutations could be generated with sufficient frequency to reach a level detectable by OLA in nearly one quarter of our subjects [48]. It is also conceivable that false positive results contributed to the estimated prevalence of drug resistance in our subjects. Although ligase binding is highly specific [49], false positive results could have occurred due to high background in the EIA portion of the assay for isolated specimens.
In contrast to our previous study of ARV-experienced persons with chronic HIV-1 infection [37], OLA of PBMC DNA did not detect a greater number of persons with mutations compared to OLA of plasma RNA. All minority variant drug resistance mutations detected in this study were identified in only one component of blood (i.e. either plasma or PBMCs but not both). These specimens had concentrations of mutant virus that were close to the limit of detection of the assay, and thus detection in plasma or PBMCs was likely stochastic. We suspect that the reason the numbers of persons who had mutations detected in one component or the other were similar was due to collection of specimens during primary infection with insufficient time lapse for wild-type viruses to have overgrown less fit mutants in plasma, where virus turnover occurs more rapidly. One of the strengths of this work is that we studied subjects close to the time of HIV acquisition, as outgrowth of some wild-type viruses can occur very quickly [50,51]. We also cannot exclude the possibility that minority variants were spontaneously generated, as mentioned above.
Similar to other studies [32][33][34], we found that low-level mutations did not appear to affect the time to virologic suppression following initiation of ARV therapy in treated subjects. In contrast, studies of persons with established HIV-1 infection have shown an increased risk of virologic failure associated with minority variant drug resistance mutations [23,24,26,27,29,30], particularly with NNRTI mutations [28][29][30][31]. The high rate of treatment success among our subjects was similar to another observational study of subjects with primary HIV-1 infection [52], but as a result only 4% of subjects were observed to have virologic failure and our study was underpowered to detect differences in clinical outcomes. Although the median follow-up time in our study was longer than other studies that observed higher rates of virologic failure, it is possible we would have seen differences in rates of virologic failure if subjects had remained on treatment and in follow-up or if we had studied more subjects.
Another likely explanation for our failure to identify negative consequences from minority variant mutations was the lack of uniformity of the impact of HIV-1 drug resistance mutations across regimens and the use of ARV therapy with a high genetic barrier to resistance to the mutations we observed. Much prior research on this topic has focused on NNRTI mutations, which have a lower genetic barrier to resistance. Of the two subjects in our cohort with NNRTI mutations who were treated with NNRTI-based regimens, the one who had high concentrations of mutant experienced virologic failure (ID#95816). The second subject (ID#44378) had the Y181C mutation detected only by OLA; he was treated with nevirapine and had an initial .3 log 10 copies/mL decrease in his HIV-1 RNA level, but he discontinued medications after forty-four days due to rash. It is possible that the clinical impact of minority variant drug resistance mutations may be modified by the relative concentration of the mutant virus at specific codons [23,30,53]. In one recent study, subjects who had NNRTI resistance mutations detected in 1-20% of the viral population had a lower risk of virologic failure following initiation of ARV therapy compared to subjects who had NNRTI resistance mutations detected in greater than 20% of the population, but both groups had a greater risk of virologic failure compared to subjects who did not have NNRTI resistance mutations [23]. These authors did not observe a similar relationship between risk of virologic failure and variation in concentrations of NRTI or PI mutations. Another recent study suggested that the K103N mutation was associated with an increased risk of virologic failure when these viruses were present in amounts greater than 2000 copies/mL [54]. However, like many previously published studies, we did not quantify the amount of virus used for drug resistance assays and therefore cannot report the precise concentrations of minority variants that were detected. In this table, the subset of subjects who received antiretroviral (ARV) therapy are grouped based on whether they had drug resistance detected by consensus sequencing (Group I), drug resistance detected by OLA but who received at least three active ARV agents (Group II), or drug resistance detected by OLA who received fewer than three active agents (Group III). ARVs are highlighted in grey if subjects had mutations conferring at least intermediate level resistance to that ARV. K70R, L74V, T215F, and V82S/T were not detected in any treated subjects. CS: consensus sequencing; OLA: oligonucleotide ligation assay; VL: viral load (HIV-1 RNA level); ARV: antiretroviral; VF: virologic failure; IDV: indinavir, HU: hydroxyurea, ABC: abacavir, EFV: efavirenz, NVP: nevirapine, r-: ritonavir-boosted, LPV: lopinavir, ATZ: atazanavir; DNS: did not suppress. 1 Log 10 copies/mL. 2 Antiretroviral medications were switched on day 5 due to side effects. 3 Subject #69234 subsequently discontinued medications two months later due to adherence difficulties. 4 OLA probes did not test for M41L and T215D. 5 OLA on PBMC for subject 56710 yielded indeterminate results for T215Y. It is also possible that the clinical impact of minority variant transmitted drug resistance mutations may be further modified by the persistence or decay in concentration of the mutant virus. In several studies, receipt of single dose nevirapine (SD-NVP) has been associated with poor subsequent response to NVP-based ARV therapy when treatment was initiated within six months of the SD-NVP [55,56]. It is plausible that the interaction between delayed initiation of ARV therapy following SD-NVP and reduced risk of virologic failure is mediated by decay in the concentration of HIV-1 drug resistant variants over time. In the study by Jourdain et al. [56], risk for virologic failure was associated with detection of mutations by OLA at the time of initiation of ARVs but not with detection of mutations by consensus sequencing ten days post-partum. How the specific mutations, threshold levels, dynamics of decay, and timing and type of ARV therapy all interact to modify the effect of transmitted drug resistance remains to be determined. A longitudinal study is ongoing that will quantify minority variants over time in a greater number of subjects with primary HIV-1 infection.
Although OLA is more sensitive than consensus sequencing, more sensitive assays such as allele-specific PCR [57,58] and parallel allele-specific sequencing (PASS) [59] can detect HIV-1 drug resistance mutations in as little as 0.01-1% of the viral population if sufficient numbers of viruses are studied. Had we used one of these assays, it is possible that we would have estimated that the prevalence of HIV-1 drug resistance was even greater, and we might have found a relationship between drug resistance and virologic response to therapy. However, in our hands, OLA and pyrosequencing produce similar results when mutant viruses are at concentrations greater than 2% of the viral population [60], and the clinical relevance of minority variants at even lower concentrations is even less clear. At the highest sensitivity, it is possible to detect and misclassify random mutations, as some mutations were detected among HIV-infected persons even prior to the availability of ARVs [48,61]. Advantages of OLA include the greater specificity compared to PCR-based methods [62], reagents anneal at relatively low temperature (37uC) and therefore tolerate polymorphisms in the region of the probe, OLA uses relatively less costly equipment than allele-specific PCR and pyrosequencing, and oligonucleotides have been adapted to non-B subtypes [63].
In conclusion, results from this study reinforce findings of others that consensus sequencing significantly underestimates the point prevalence and possibly the ''persistence'' of transmitted HIV-1 drug resistance mutations. However, additional data are still needed to precisely determine the clinical impact of different drug resistance mutations at different concentrations. If minority variants are clinically important, use of more-sensitive assays might aid in the selection of potent ARV regimens with greatest chance of success. On the other hand, if minority variant drug resistance mutations have minimal clinical impact in ARV-naïve individuals, the detection of minority variants might lead care providers to prescribe complex first-line ARV regimens with a high pill burden and frequent dosing. Higher complexity of ARV regimens could reduce patient adherence and lead to a paradoxical increase in the prevalence of drug resistance. Given the uncertainty regarding the clinical impact of minority variant mutations and the fact that many people with these mutations still have excellent responses to therapy, prospective randomized trials that include cost-effectiveness analyses should be completed prior to the adoption of more-sensitive assays for clinical care. | 5,521.6 | 2011-12-16T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Impacts of Non-Renewable Energy Consumption and Education Expenditure on CO2 Emission Intensity of Real GDP in China
With the economic development, China has become the world's largest CO2 emitter. Given that climate warming has increasingly become the focus of the international community, Chinese government committed to reducing its CO2 emission intensity substantially. Prior studies nd that the evolution of economic structure and technological progress can reduce CO2 emissions, but lack of considering CO2 emissions and output as a whole. In addition, the role of education expenditure is relatively overlooked. This paper contributes to the literature by examining the link of CO2 emission intensity, non-renewable energy consumption and education expenditure in China during 1971-2014. We use the ARDL approach and nd that in the long run, every 1% increase in non-renewable energy consumption results in a 0.92% increase in CO2 intensity, while every 1% increase in operational education expenditure reduces the CO2 intensity by 0.86%. In the short term, 36% of the deviation from the long run equilibrium is corrected in the next period.
Introduction
It has reached a common consensus that carbon dioxide emissions (CO2) are the main cause of global warming. Burning fossil fuels (no-renewable energy) and producing cement are the two primary sources of CO2 emissions. The United Nations General Assembly adopted the "United Nations Framework Convention on Climate Change" (UNFCCC) on May 9, 1992. The goal of the convention is to maintain the concentration of greenhouse gases in the earth's atmosphere at a level that "human activities do not interfere with systemic hazards in the climate". According to its principle of "common but differentiated responsibilities", different obligations and ful lling procedures are stipulated for developed and developing countries, as well as for the least developed countries.
Economic growth and CO2 emissions are the two inseparable sides of the coin of human economic activities. Thus, limiting CO2 emissions will inevitably have a negative impact on economic growth in the historical stage where human economic activities still rely mainly on burning fossil fuels to generate energy. This has led to erce controversies between the developing countries such as China and the developed countries such as the United States, based on their respective economic and political considerations. A central question is what indicators should be used to measure the reduction of CO2 emissions. Gross CO2 emissions, total historical CO2 emissions, CO2 emissions per capita and historical CO2 emissions per capita are the common indicators proposed by each party respectively. With the economic success, China has become the world's largest CO2 emitter regardless of how emission is measured, imposing China under enormous international pressure in the climate change negotiations.
However, according to the World Bank's classi cation of countries by income level, China is a country of middle-high income since 2019 On the one hand, China needs to keep prioritizing economic growth to increase national income. On the other hand, it also needs to pay more attention to the quality of living, which might be deteriorated by CO2 emissions. Therefore, Chinese government attaches great importance to promoting the work of CO2 emissions reduction.
CO2 intensity of real GDP is arguably a better index than other indicators since it can take production and CO2 emission as a whole to re ect the nature of human economic activities. In fact, Chinese government has promised to cut it by 60-65% of 2005 level by 2030 (Yang, Xia, Zhang & Yuan, 2018). Many economic variables have effects on the formation of real GDP, such as physical capital, education expenditure, energy consumption, population, foreign trade, industrial structure and the level of urbanization. Among these variables, energy consumption, foreign trade, industrial structure and the level of urbanization have effects on CO2 emissions as well, and were studied by researchers repeatedly. However, the impact of education expenditure or closely related human capital on CO2 emissions has not attracted enough attention from the academic community. Education expenditure might at least take the upgrading of industrial structure as a carrier among other mechanisms to impact on CO2 emissions. To ll this literature gap, we employ the ARDL approach to study the possible long-run and short-run relationship among CO2 intensity of real GDP, non-renewable energy consumption and education expenditure.
Literature Review
A seminal study on the relationship between economic growth and environmental pollution is Forster (1972), which pioneer the introduction of pollution stock in the production function of the neoclassical economic growth model, and proposes that the cause of pollution is the use of capital. Grossman & Krueger (1991) nd that there is an inverted U-shaped relationship between the three air quality indicators of sulfur dioxide, dust and suspended particles and income, using data from 42 countries. Arrow, Bolin, Costanza, Dasgupta, Folke & Holling, et al. (1995) further believes that there is an inverted U-shaped relationship between environmental pollution and economic growth. Since the famous Kuznets curve also exhibits an inverted U-shape in economic theory, the inverted U-shaped relationship between environmental pollution and economic growth is called the environmental Kuznets curve (EKC), which argues that with the economic development of a country, the level of environmental pollution increases rst, and then begins to decline after the economy reaches a certain critical level.
The empirical research based on EKC is very rich, most of which focus on testing whether it exists or whether it shows an inverted U-shaped relationship. Selden & Song (1994) use cross-country panel data to study the relationship between four important air pollutants and GDP per capita, and the results show that the inverted U-curve relationship holds between them. Fodha & Zaghdoub (2010) With the intensi cation of global warming, the emission of greenhouse gases, especially CO2, has become the focus of environmental pollution. Therefore, studies on EKC are increasingly using CO2 emissions as a proxy for environmental pollution. The increase in non-renewable energy consumption directly causes the increase in CO2 emissions (Huang, Hwang & Yang, 2008;Lapinskienė, Peleckis & Nedelko, 2017;Lapinskienė, Peleckis & Slavinskaitė, 2017;Belke, Dobnik & Dreger, 2011;Wu, Xu, Ren, Hao & Yan, 2020). The explanation of this fact from a chemical view is straightforward: the carbon component of non-renewable energy is converted into carbon dioxide during the combustion process. In this context, other scholars have found that the industry structure and technological level can reduce the CO2 emissions from non-renewable energy consumption (Al-mulali, Lee, Mohammed & Sheau-Ting, 2013; Han & Chatterjee, 1997;Lantz & Feng, 2006;Hogan & Jorgenson, 1991;Sohn, 2007;Deng, Alvarado, Toledo & Caraguay, 2020). The former ndings of industry structure and CO2 emissions are also con rmed in studies using China as a context (Zhou, Zhang & Li, 2013;Zhang, Liu, Zhang & Tan, 2014;Wang, Wu, Sun, Shi, Sun &Zhang, 2019;Guan, Meng, Reiner, Zhang, Shan & Mi et al., 2018), as well as technological level and CO2 emissions (Ang, 2009;Wang, Zeng & Liu, 2019;Yunfeng & Laike, 2010). Some scholars suggest that education plays a key role in the evolution of industry structure and technological progress (Keep, 2012;He, Zheng, Cheng, Lau & Cheng, 2019;Hansmann, 2012;Atkinson & Mayo, 2010;Adams & Demaiter, 2018). Following this reasoning, education should also play an important role in reducing CO2 emissions.
However, there are very scarce studies that integrate education and CO2 emissions into a uni ed framework. Li & Ouyang (2019) is the research that most directly related to our paper, which studies the dynamic impacts of nancial development, human capital, and economic growth on CO2 emission intensity in China for the period of 1978-2015 using the ARDL approach. Yet, Li & Ouyang (2019) does not take into consideration non-renewable energy consumption as a key factor in affecting both CO2 emissions and real GDP. Our study adds to the understanding of the literature by including non-renewable energy consumption in the framework.
Methodology
In this article, the approach of autoregressive distributed model (ARDL) is used to capture the long-run and short-run relationship among our concerning variables since it has the following advantages (Pesaran & Shin, 1998;Pesaran, Shin & Smith, 2001): First, this approach can be used to test if there exists a level or co-integrating relationship among the variables irrespective of whether the regressors are purely I(0), purely I(1) or mutually cointegrated. Second, the coe cient can be easily estimated by the ordinary least square method (OLS). Third, it can estimate the long-run and short-run relationship simultaneously thought a simple linear transformation of the coe cient estimated from the OLS method.
Last but not least, consistent and unbiased estimations of the underlying regressors for a small sample can be obtained, which is particularly suitable for our research (44 observations).
Similar to the research that has considered the impact of nancial development and human capital on CO2 intensity in China (Li & Ouyang, 2019), the ARDL models of this paper are as follow: Where notation Ln stands for the logarithm of relevant variables; a 0 is the intercept term; CO2_PG (CO2 PER GDP) represents CO2 emission intensity of real GDP, p stands for the maximum lags of it, and α i is the coe cient for its each lagged term; EN_PC (ENERGY PER CAPITA) represents non-renewable energy consumption per capita, q 1 stands for the maximum lag of it, and β j is the coe cient for its each lagged term; EDU_PC (EDUCATION EXPENDITURE PER CAPITA) represents education expenditure per capita, q 2 stands for the maximum lag of it, and β k is the coe cient for its each lagged term; ε t is an error term of covariance stationary and is not serial correlated.
To begin with, we test the long-run relationship between CO2 emission intensity of real GDP and nonrenewable energy consumption per capita. Afterwards the LnEDU_PC term is included in equation (2) to test if there is a long-run relationship among the three variables. Pesaran et al. (2001) have proposed an F-bounds test to check the possible long-run relationship in levels for an ARDL model. The null hypothesis is that there is no level relationship between a dependent variable and the regressors, under this null hypothesis, the asymptotic distributions of the statistics are nonstandard. They provided two sets of asymptotic critical values, one is for the situation that all regressors are purely I(0) named lower bound, and the other is for the situation of purely I(1) named upper bound.
The test results are classi ed into three cases: First, if the F-statistic exceeds the upper bound, then the null hypothesis can be rejected, which means there is a level relationship. Second, if the F-statistic is smaller than the lower bound, the null hypothesis cannot be rejected. Third, if the F-statistic is between the upper and lower bounds, the conclusion will be inde nite.
Data
The data used in this research are extracted from the World Development Indicators (WDI) provided by World Bank (2020). CO2 emissions intensity of real GDP (CO2_PG) measured by kilograms per US dollar, is original in WDI.
[1] Non-renewable energy consumption per capita (EN_PC) measured by kg of oil equivalent per capita, is also original in WDI.
[2] Education expenditure per capital (EDU_PC) measured by US dollars per capita, is not original in WDI. Since only the total educational expenditure is provided and measured by current US dollar,[3] therefore, we rst convert it to be measure by constant 2010 US dollar using a GDP de ator generated through dividing the GDP (current US dollar) by GDP (constant 2010 US dollar), and then divide it by the corresponding population.
According to the metadata provided by WDI, it is important to emphasize the data caliber of the variables used in this research as follow: CO2 emissions refers speci cally to anthropogenic CO2 emissions. They result primarily from fossil fuel combustion and cement manufacturing. In combustion, different fossil fuels release different amounts of CO2 for the same level of non-renewable energy use: oil releases about 50 percent more CO2 than natural gas, and coal releases about twice as much. Cement manufacturing releases about half a metric ton of CO2 for each metric ton of cement produced. Data of CO2 emissions in WDI includes gases from the burning of fossil fuels and cement manufacture, but excludes emissions from land use such as deforestation. They are often calculated and reported as elemental carbon, and were converted to actual CO2 mass through multiplying them by 3.667 (the ratio of the mass of carbon to that of CO2).
Non-renewable energy consumption refers to the use of primary energy before transformation to other end-use fuels, which is equal to indigenous production plus imports and stock changes, minus exports and fuels supplied to ships and aircraft engaged in international transport.
Education expenditure refers to the current operating expenditures in education, including wages and salaries but excluding capital investments in buildings and equipment.
Unit root tests
To apply the ARDL approach, it has to ensure that the integrating orders of all the variables included in the model are less than two. Therefore, the ADF (Dickey & Fuller, 1981), PP (Phillips & Perron, 1988) and KPSS (Kwiatkowski, Phillips, Schmidt & Shin, 1992) unit root tests are employed to examine the stationarity of the variables. *, **, and *** means that the hypothesis can be rejected at 10%, 5% and 1% level respectively.
Pre x D. means the rst difference of a corresponding variable
Before the ADF unit root testing, the maximum lags of the serials have to be selected. We determine the maximum lag orders of the sequences according to the principle of minimizing AIC information criteria. In addition, whether the intercept or trend term are included in the testing equation will also affect the test results. Table 1 summarizes the results of ADF unit root tests of variables LnCO2_PG, LnEN_PC and LnEDU_PC in their levels and 1st differences.
LnCO2_PG is stationary at the signi cance level of 10% if the intercept term and the trend term are both excluded from the testing equation but non-stationary in the other two cases, which implies that the stationarity of LnCO2_PG might depend on whether its data generating process includes a deterministic term. However, it is stationary in its 1st difference under all the situations, which makes it meet the prerequisites of ARDL approach. LnEN_PC is a differential stationary process under all the situations. LnEDU_PC is non-stationary in its level, but stationary in its 1st difference except for the situation that both the intercept and trend term are excluded from the testing equation.
Because the power of the ADF unit root test is relatively weak, these results require us to further test the stationarity of the rst-order difference form of the variable LnEDU_PC. In addition, if all the variables of interest are differential stationary processes, the conventional approach of Johansen (1991) can be employed to verify the co-integrating relationship among them, which makes the ARDL approach not the unique option. Therefore, the KPSS unit root test is used to check the stationarity of LnCO2_PG in its level form. Table 2 reports the results of PP unit test for LnEDU_PC, which implies that LnEDU_PC is a differential stationary process under all the situations. *, **, and *** means that the hypothesis can be rejected at 10%, 5% and 1% level respectively.
Pre x D. means the rst difference of a corresponding variable
Tables 3 shows the KPSS unit test results of variable LnCO2_PG, which con rms that it is not clear whether the variable is stationary in its level form but consistently stationary in its rst-order difference form. This suggests the necessity of applying ARDL approach to examine the co-integrating relationship among the variables.
F-bounds test and long-run relationship
Intuitively, there might be a co-integrating relationship between the variables of LnCO2_PG and LnEN_PC. Therefore, the ARDL approach is applied to equation (1) to verify if there exists a long-run relationship between them at rst. The optimal lag structure is selected using the AIC criterion, which results in an ARDL(3,2) model. Table 4 reports the results of F-bounds test applying to equation (1). The F-statistic is below the critical value of I(0) even at the signi cance level of 10%, regardless of asymptotic sample size or the actual sample size, which means that there is no co-integrating relationship between the two variables, and implies that attempting to decrease the CO2 intensity of real GDP simply by reducing the consumption of non-renewable energy is futile. Although reducing non-renewable energy consumption can reduce CO2 emissions, it will also reduce economic output. Therefore, the impact of just reducing non-renewable energy consumption on CO2 intensity depends on the relative degree of the above two reductions. However, the effect of non-renewable energy consumption on the two also depends on many other factors, such as industrial structure, technological level, etc., and education has played a key role in the changes of these other factors. Under the condition that the average education level of society remains unchanged, these factors will not change much. Then, the CO2 emissions and output changes caused by the changes in non-renewable energy consumption will be relatively xed, so that there is no cointegration relationship only between CO2 intensity of real GDP and non-renewable energy consumption Therefore, we proceed to examine whether there is a long-run relationship among the three variables of LnCO2_PG, LnEN_PC and LnEDU_PC. The AIC criterion is used to determine the optimal lag structure again, and selects an ARDL(3,2,2) model. Table 5 shows the results of F-bounds test applying to equation (2). The value of F-statistic is 16.56, which exceeds the critical value of I(1) at the 1% level under both the asymptotic and actual sample sizes.
This implies that there exists a co-integrating relationship among them in the long term, which con rms our explanation for the results of the former ARDL(3,2) model. Table 6 presents the result of the co-integrating equation with three variables. The coe cients of LnEN_PC and LnEDU_PC are 0.92 and -0.86 respectively, which means that in the long run when the operational education expenditure remains hold, every 1% increase of non-renewable energy consumption per capita will lead to a 0.92% increase in CO2 intensity of real GDP, while operational education expenditure has a negative impact of 0.86% on CO2 intensity of real GDP. Generally, the coe cients are both statistically and economically signi cant, and have the expected signs.
Furthermore, the relative larger coe cient of LnEN_PC means that reducing CO2 intensity of real GDP in the long run needs the percentage increase in operational education expenditure per capita exceed it in non-renewable energy consumption per capita. This is consistent with the data used in this paper, that is, during the sample period, the former increased by 4.81 times, while the latter increased by 32.87 times. It might be an important reason for the decrease in CO2 intensity of real GDP in China over time.
T-bounds test and short-run relationship
After the long-run co-integrating relationship is examined by F-bounds test, Pesaran et al. (2001) proposed a t-bounds test, and suggest to apply it to further con rm the level relationship among the variables. Note: * means that the p-value is incompatible with the standard t-distribution.
Pre x D. means the rst difference of a corresponding variable, and su x (-1) means the rst lag term of a corresponding variable. Table 7 reports the results of an error correction regression (ECM) derived from the ARDL(3,2,2) model. It shows that most coe cients are statistically signi cant at 1% level, with the coe cients of D(LnEN_PC(-1)) and D(LnEDU_PC(-1)) are statistically signi cant at 2% and 5% level respectively. However, the statistical signi cance of the coe cient of the rst order lagged error correction term CoinEq(-1) cannot be inferred by the standard t-distribution. Table 8 shows the critical values of the t-statistic for the coe cient of the rst order lagged error correction term CoinEq(-1). Pesaran et al. (2001) use an example to demonstrates that if the absolute value of the t-statistic exceeds the absolute value of I(1) critical value, it can be con rmed that there is a level relationship among the variable include in the ARDL model and the coe cient of the rst order lagged error correction term is signi cant at the corresponding signi cant level. Therefore, it can be con rmed that the term CoinEq(-1) in our results is statistically signi cant at 1% level. The coe cient of the error correction term -0.36 is also very signi cant economically, and means that any deviation of the cointegration relationship among the three in the short term will be corrected by 36% in the next period, which is a fast correction speed.
Residual diagnostic and stability test
To further check the stability of the long run co-integrating relationship and the short run error correction term parameters, three conventional methods are used, which are Breusch-Godfrey serial correlation LM test, Cumulative Sum Recursive Residuals (CUSUM) test, and Cumulative Sum of Squares of Recursive Residuals (CUSMSQ). Note: the null hypothesis is that there is no serial correlation in the residual Table 9 reports the results of Breusch-Godfrey serial correlation LM test, since the p values of the Fstatistic and Chi square-statistic are 0.27 and 0.15 respectively, it can be inferred that the residuals are not serial correlated, which means that our ARDL model is well speci ed.
Conclusion And Policy Implications
This paper examines the relationship among CO2 intensity of real GDP, non-renewable energy consumption, and operational education expenditure. Our results show that there is no co-integration relationship between CO2 intensity of real GDP and non-renewable energy consumption, while after the introduction of operational education expenditure, a co-integration relationship appears among the three variables. In the long run, every 1% increase in non-renewable energy consumption results in a 0.92% increase in CO2 intensity of real GDP. In contrast, every 1% increase in operational education expenditure reduces the CO2 intensity of real GDP by 0.86%. In the short term, 36% of the deviation from the long run equilibrium is corrected in the next period. Based on the results of the empirical research, we can draw several important conclusions and make important policy recommendations as follow: First and foremost, as long as the increase in operational educational expenditure exceeds the increase in non-renewable energy consumption, CO2 intensity of real GDP will decrease in the long run. This means that in the development stage when economic activities are still highly dependent on non-renewable energy sources, the Chinese government should continue to vigorously increase expenditures on public education, particularly improving the salary of teachers.
Second, the increase in non-renewable energy consumption will result in an increase in CO2 intensity of real GDP. Therefore, gradually increasing the proportion of clean energy consumption in the energy nexus Plot of Cumulative Sum of Recursive Residuals | 5,262.6 | 2021-02-24T00:00:00.000 | [
"Economics"
] |
Projection-based integrators for improved motion control: Formalization, well-posedness and stability of hybrid integrator-gain systems ✩
In this paper we formally describe the hybrid integrator-gain system (HIGS), which is a nonlinear integrator designed to avoid the limitations typically associated with linear integrators. The HIGS keeps the sign of its input and output equal, thereby inducing less phase lag than a linear integrator, much like the famous Clegg integrator. The HIGS achieves the reduced phase lag by projection of the controller dynamics instead of using resets of the integrator state, which forms a potential benefit of this control element. To formally analyze HIGS-controlled systems, we present an appropriate mathematical framework for describing these novel systems. Based on this framework, HIGS-controlled systems are proven to be well-posed in the sense of existence and forward completeness of solutions. Moreover, we propose two approaches for analyzing (input-to-state) stability of the resulting nonlinear closed-loop systems: (i) circle-criterion-like conditions based on (measured) frequency response data, and (ii) LMI-based conditions exploiting a new construction of piecewise quadratic Lyapunov functions. A motion control example is used to illustrate the results.
Introduction
For the control of linear motion systems, linear time invariant (LTI) control theory is appealing due to its well-understood and straightforward controller design with guaranteed stability and performance properties. However, LTI control designs suffer from fundamental performance limitations such as Bode's gain-phase relationship and the waterbed effect due to Bode's sensitivity integral (Freudenberg, Middleton, & Stefanpoulou, 2000;Seron, Braslavsky, & Goodwin, 1997). In the context of closed-loop performance, this typically results in design trade-offs.
Inspired by the advantages of reset control, in this paper we are interested in formalizing, as an alternative to reset control, a new nonlinear integrator referred to as the hybrid integrator-gain system (HIGS), which offers the same phase advantages, but without the need for hard resets of the (integrator) state.
The HIGS is designed to keep its input-output relation bounded in the sector [0, k h ], where k h ∈ R >0 denotes the gain parameter, thereby inheriting the hinted phase advantage of reset control (as the input and output of the HIGS have the same sign). However, the HIGS avoids resetting the integrator state, and exploits projection of the (controller) dynamics in a manner resulting in continuous control signals. In particular, a HIGS element acts as a linear integrator as long as its input-output pair lies inside the mentioned sector (called the 'integrator mode'). At moments when the sector condition tends to be violated, the vector field of the HIGS element is altered via projection in such a way that the resulting trajectories stay on the boundary of the sector. This second mode of operation is referred to as the 'gain mode' of the HIGS, explaining the terminology of hybrid integrator-gain systems. Interestingly, upon switching from integrator to gain mode, the integrator buffer is preserved as much as possible while respecting the sector condition, instead of being completely depleted by resets. This leads to increased potential for improving closed-loop performance for this hybrid integrator in comparison to, for instance, the Clegg integrator.
In this paper, we formalize the above idea based on extensions of so-called projected dynamical systems (PDS) (Dupuis & Nagurney, 1993;Henry, 1973;Nagurney & Zhang, 2012), a class of discontinuous dynamical systems introduced in the early 1990s. PDS are described by differential equations of which the solutions are restricted to a constraint set. At moments when the solutions tend to leave this set, the vector field of the system is changed by means of projection so that the solutions remain inside the constraint set. Although the PDS philosophy resembles that of the HIGS, there are essential differences that prevent direct description of the HIGS as a PDS. First of all, the constraint set of the HIGS (the sector), does not satisfy the regularity requirements that the PDS framework commonly requires, see, e.g., Henry (1973). Secondly, in the case of PDSs, the complete vector field is projected on (the tangent cone of) the constraint set. In the context of control, however, when considering a HIGS element in feedback interconnection with a physical plant, it is only possible to project the dynamics of the controller (HIGS) and not the full dynamics (including the plant dynamics). This calls for important generalizations of PDS, as provided in this paper, for which we coin the term extended projected dynamical systems (ePDS). Based on this new ePDS framework, which naturally captures the design philosophy of HIGS, we provide a formal mathematical description of HIGS-controlled systems. Interestingly, the representation used in our preliminary work (Deenen, Heertjes, Heemels, & Nijmeijer, 2017) follows from the ePDS-based formulation of HIGS-controlled systems. Furthermore, we establish the well-posedness of HIGS-controlled systems in the sense of existence and forward completeness of solutions, under mild assumptions as generally satisfied for linear motion systems.
In our preliminary work (Deenen et al., 2017), a circlecriterion-like argument is presented for stability analysis of HIGS-controlled systems (however, without a proof of the stability and without a well-posedness proof). Clearly, this circle-criterion approach offers great advantages in terms of easyto-check graphical conditions based on accurate and quickly measurable frequency response functions. A potential drawback, however, is that it may yield conservative bounds on closed-loop stability due to (i) the underlying use of a common quadratic Lyapunov function for a piecewise linear closed-loop system (Sontag, 1981), and (ii) solely using the hybrid integrator's sector-boundedness instead of its complete nonlinear dynamic behavior. Furthermore, its application is limited to control configurations where a stable LTI system is used in feedback connection with a HIGS element. One of the key contributions of this paper, next to providing a full proof of the circle-criterion-like stability condition for the first time, is to also provide less conservative stability conditions in terms of linear matrix inequalities (LMIs). This approach is inspired by Aangenent et al. (2009) and Zaccarian et al. (2005Zaccarian et al. ( , 2011 for reset control systems, where LMIs guarantee stability based on a piecewise quadratic (PWQ) Lyapunov function by partitioning the two-dimensional inputoutput plane of the reset element into double cones with the apices at the origin. The HIGS differs from the previously considered reset integrators in the sense that its switching dynamics are determined in a three-dimensional space. Therefore, an extension of the planar approach is proposed that uses volumetric partitions in a spherical coordinate system leading to LMI-based (input-to-state) stability conditions for HIGS-controlled systems. Both methods will be used to verify stability of an LTI motion system in feedback with a HIGS to illustrate, on the one hand, the practical convenience of the circle-criterion-like approach, and, on the other hand, the increased potential of the LMI-based conditions in terms of reduced conservativeness regarding parameter bounds.
The remainder of this paper is organized as follows. Section 2 contains preliminaries and notation. The HIGS is described in Section 3 and proven to be well-posed in Section 4. In Section 5, the LMI-based closed-loop stability conditions and their derivation are discussed, followed by the circle-criterion-like frequency-domain stability conditions in Section 6. In Section 7, these stability conditions are compared using an illustrative motion control example. Section 8 states the conclusions.
A function w : I → R nw , with I ⊆ R is called a Bohl function, denoted by w ∈ B I , if there exist matrices H ∈ R nw ×n F , F ∈ R n F ×n F , and a vector v ∈ R n F such that w(t) = He Ft v for all t ∈ I.
Note that piecewise Bohl functions can be discontinuous, but they are continuous from the right in the sense that for each T ∈ R ≥0 it holds that w(T ) = lim t↓T w(t).
An absolutely continuous (AC) function
Definition 2.4 (Rockafellar & Wets, 1998). The tangent cone to a set S ⊂ R n at a point ξ ∈ S, denoted by T S (ξ ), is the set of all vectors v ∈ R n for which there exist sequences {ξ i } i∈N ∈ S and
System description
In this section we consider the closed-loop system setup in Fig. 1, consisting of a linear time-invariant (LTI), single-input single-output (SISO) plant G interconnected with a (SISO) HIGS element H . The plant G contains the linear part of the closedloop system including the plant to be controlled and possibly an LTI controller, given by the state-space representation G : with states x g taking values in R ng , performance output e in R, control input v in R and exogenous disturbances and references denoted by w taking values in R nw . Moreover, the realization (A g , B gv , C g ) is assumed to be minimal. As our key area of application involves motion systems containing floating masses, the following assumption is typically satisfied.
Assumption 3.1. The LTI system G as in Fig. 1 is such that C g B gw = 0 and C g B gv = 0.
The HIGS element H has as its preferred mode of operation the linear integrator dynamicṡ where the state x h takes values in R, the (HIGS) input e and the (HIGS) output u both take values in R and ω h ∈ [0, ∞) denotes the integrator frequency. This mode of operation of the HIGS is referred to as the integrator mode. The integrator mode (2) can only be used as long as the input-output pair (e, u) of H remains inside where k h ∈ (0, ∞) denotes the gain parameter of H . Note that (e, u) ∈ F implies equal sign of the input e and the output u of the HIGS as eu ≥ 0, see Fig. 2. At moments when the input-output pair (e, u) of H tends to leave the sector F we will ''project'' the integrator dynamics in (2) such that (e, u) ∈ F remains true along the trajectories of the system. We will formalize this operation of the HIGS in the upcoming subsections.
Projection-based representation
To mathematically introduce the operation of the HIGS, we directly use the interconnection of the HIGS element H and the linear system G described by (1), resulting in a closed-loop system as in Fig. 1 where x g and x h are the states of G and H , respectively and thus n = n g + 1. Note that the constraint (e, u) ∈ F translates to x ∈ S with S = K ∪ −K, where K is a polyhedral cone given by . In fact, F 1 x = k h e − u and F 2 x = u such that (e, u) ∈ F if and only if x ∈ S. When H operates in the integrator mode, by combining (1) and (2) we obtain the state space representation for the HIGS-controlled system in Fig. 1, given bẏ where y = [e u] ⊤ , and As indicated above, when the state trajectory tends to leave the set S, which in terms of Definition 2.4 happens when for x(t) ∈ S, the vector field of (6), is altered by partial projection such that the resulting trajectory remains inside S. Using this perspective, we can formally introduce the HIGS-controlled system where for x ∈ S and f ∈ R n Π S,E (x, f ) := arg min with Π S,E : S × R n → R n an operator, which projects the dynamics f onto the tangent cone of the set S at point x, in the direction imE. In the case of (9), E = [0 ⊤ np×1 1] ⊤ such that the correction of the dynamics (6) is only possible for the dynamics of the HIGS and not for the (physical) plant dynamics (1), which can clearly not be modified (we cannot directly modifyẋ g ). Note that the projection operator Π S,E , is well-defined in the sense that it provides a unique outcome for every x ∈ S and each f ∈ R n , in the setting considered here (see . The model (10) resembles so-called projected dynamical systems (PDS) (Henry, 1973;Nagurney & Zhang, 2012) given bẏ where f : R n → R n is a general vector field and S ⊆ R n is a constraint set. Our representation (10) differs from (11) in two essential ways. First of all, we have partial projection of dynamics as a result of using the matrix E, the image of which specifies the direction of projection. It should be noted that the matrix E is not limited to the choice made in (9) and should be chosen depending on the specific case under consideration. Secondly, in the PDS literature, the PDS (11) is shown to be well-defined for constraint sets that satisfy certain regularity conditions. In particular, Henry (1973) and Nagurney and Zhang (2012) restrict the constraint sets to be convex, while in Hauswirth, Bolognani, and Dorfler (2021) (and some references therein) convexity is relaxed to Clarke regularity and prox-regularity of the constraint set for existence and uniqueness of Carathéodory solutions, respectively. However, the constraint set S considered here does not satisfy any of the above mentioned regularity requirements, cf. (4). In addition, note that (10) is a generalization of (11) since by taking imE = R n and restricting S to satisfy the required conditions, one recovers the classical PDS as in (11). In fact, for these reasons we refer to the class of systems (10) as extended projected dynamical systems (ePDS).
Remark 3.1. Note that we could extend the dynamics (9) that are currently defined for initial states x(0) ∈ S, such that they are also defined for x(0) / ∈ S. In case x(0) ̸ ∈ S, we can use to reset the state to a state inside S. Note that this reset only occurs at the initial time and not afterwards, as the state never leaves S for time t ∈ R >0 .
Remark 3.2.
It is easy to see that (9) satisfies This symmetry property will prove to be useful in Section 4, in showing well-posedness of the system.
Discontinuous PWL model
In this subsection we reformulate (9) as an equivalent piecewise linear (PWL) model. To explicitly compute (9), we first note that where (7), (12), and Assumption 3.1, we obtain that The proof of the statement above can be established by comparing the algebraic expressions of (12) and (13) for states lying in the interior of S where F 1 x > (<)0 and F 2 x > (<)0, and its boundaries where F 1 x = 0 or F 2 x = 0. Due to space limitations a complete proof is omitted here. As a result of the discussion above, S 1 is the region where the integrator mode of H is active. Moreover, when (8) holds. Based on (9) and (12) (with f = A 1 x + Bw and S as in (4)), when x ∈ S 2 , by solving (9), through manipulating (10) (see equation (17) in Sharif et al., 2019) and resorting to the Karush-Kuhn-Tucker (KKT) optimality conditions for constrained optimization (Boyd & Vandenberghe, 2004), we obtaiṅ We refer to (15) as the gain mode dynamics. By considering both modes of operation (given by (6) and (15)) and their corresponding regions, we obtain the explicit discontinuous PWL model for (9). Note that S 1 has a non-empty interior while S 2 does not (it is part of the lower-dimensional sub-space kerF 1 ). The matrices A 1 , B and, C have been explicitly computed in (7). We can also compute A 2 from (15) aṡ As a result of Assumption 3.1, this simplifies tȯ and thus for (16) we have Hence, (16) with (7) and (18) is an explicit PWL formulation of the HIGS controlled system in Fig. 1.
Remark 3.3.
As observed from the expressions of S 1 and S 2 the switching in (16) is based on where F 2 x = x h = u is the output of the HIGS element (input u to the linear system G in Fig. 1) and which is a function of e (output of the linear plant) and u (output of H ). Lastly, F 1 (A 1 x) = k hė − ω h e is a function ofė, the first derivative of the plant output, and the plant output e. Hence, the regions S 1 and S 2 can be fully described in terms of e,ė and u.
Indeed, one has x ∈ S 2 when (ė, e, u) ∈ F 2 , where where F is as defined in (3).
A graphical illustration of the regions F 1 and F 2 is provided in Fig. 3. As a result, an (internally) equivalent representation of (16) is given by Σ : We will use (9), (16), and (21) interchangeably.
From (16), we also see that we are dealing with a discontinuous differential equation, which makes proving (global) existence of solutions, given an initial state x 0 and external signal w, a difficult problem, since typical continuity properties used for studying differential equations/inclusions (such as upper-semicontinuity of the right-hand side cf. Aubin & Cellina, 1984) are not fulfilled, see also Cortes (2008).
Well-posedness analysis
In this section we show that the HIGS-controlled system (16) is well-posed in the sense of global existence of solutions. To this end, we first prove in Section 4.1 that (16) is locally well-posed, i.e., for each initial state x(0) ∈ S and exogenous signal of interest w, the system admits a solution on [0, ϵ] for some ϵ > 0. We select here the class of exogenous signals (disturbances, references, etc.) to be of piecewise Bohl (PB) nature (see Definition 2.2). Note that sines, cosines, exponentials, polynomials, and their sums are all Bohl functions, thereby showing that the class of PB functions is sufficiently rich to accurately describe (deterministic) disturbances frequently encountered in practice. In particular, any piecewise constant signal is PB, and thus this class of functions can approximate any measurable function arbitrarily closely. 1 Building on the local existence results of Section 4.1, in Section 4.2 we prove that all (maximal) solutions are forward complete, i.e., are defined for all times t ∈ R ≥0 .
To make this discussion precise, we will formalize the solution concept. (16) holds almost everywhere in T.
The solutions in Definition 4.1 are Carathéodory-type solutions, see also Cortes (2008) for more details regarding solution concepts for discontinuous dynamical systems.
Local well-posedness
Definition 4.2. We call the HIGS-controlled system (16) locally well-posed if for all x 0 ∈ S and w ∈ PB, there exists an ϵ > 0 such that the system admits a solution on [0, ϵ] with initial state x 0 and input w. Proof. Take x 0 ∈ S and w ∈PB. Without loss of generality we can take w ∈ B [0,ε] by selectingε > 0 sufficiently small. Hence, w can be represented as for some matrices H w ∈ R nw ×n Fw , F w ∈ R n Fw ×n Fw and a vector v w ∈ R n Fw . In other words, w is generated (on [0,ε]) by the exo-systeṁ Combining this exo-system with (16) yieldṡ as an equivalent description of (16) with w as in (22) 1 An interesting future research direction is establishing the existence of solutions for larger classes of input signals.
For proving local well-posedness of (24) (and thus of (16) with w as in (22)), we define the set In fact, sinceŜ int ⊆Ŝ 1 , we conclude that e 1 tx 0 is also a solution to (24) on a non-trivial time window [0, ε] for some 0 < ε ≤ε. Next, we will also show that for eachx 0 ∈Ŝ \Ŝ int , a local solution to (24) exists, so that it is established that for all x 0 ∈Ŝ a local solution exists. In order to do so, we first rewrite (25) in a more algebraic form. Using the definition ofŜ, one can rewrite (25) aŝ By using the Taylor series expansion of e 1 t together with the Cayley-Hamilton theorem, the characterizationŜ int which can be verified based on the expressions ofF 2 , 1 , andF 1 .
Forward completeness
Hence, a maximal solution is a solution that cannot be prolonged (is not a strict prefix of another ''larger'' solution for the same input). (16) for w ∈ PB are forward complete.
Theorem 4.2. All maximal solutions to HIGS-controlled system
Proof. Consider a maximal solution x : T → R n of (16) for initial state x 0 ∈ S and w ∈ PB. We will show that if T is equal to [0, T ] or [0, T ) with T ∈ R ≥0 a finite number, the left-limit x(T ) := lim t↑T x(t) ∈ S exists, and we can exploit the local existence result to prolong x to a solution on [0, T + ε).
This would contradict the maximality of the solution and thus T = R ≥0 , hence x has to be forward complete.
To show the existence of lim t↑T x(t), let us remark that if
T is equal to [0, T ], the solution is AC on [0, T ] and thus the left-limit trivially exists. So, the exciting case to handle is [0, T ). By Definition 2.2, w can be represented on [t i , T ] as in (22) for some t i < T (in fact, t i is the largest value in the set {t k } k∈N , which is strictly smaller than T ). Thus, (16) can be equivalently written as (24) on [t i , T ]. This implies the existence of a constant M ∈ R such that the vector field of (24) satisfies the linear growth condition ∥ΠŜ ,Ê (x, 1x )∥ ≤ M∥x∥, for allx ∈ R n+n Fw , because ΠŜ ,Ê (x, 1x ) ∈ { 1x , 2x }. As a result of (32), By applying Gronwall's Lemma (Khalil, 2002), one concludes that Hence, the solution x(t) is Lipschitz continuous on t ∈ [t i , T ], and thus also absolutely continuous and uniformly continuous. Thereby, the limitx(T ) := lim t↑Tx (t) exists, as required. □ Since we proved local existence of solutions and forward completeness of all maximal solutions, it is concluded that for each initial state x 0 ∈ S and w ∈ PB a global solution exists on [0, ∞) and all solutions can be extended to be defined on [0, ∞).
Time-domain stability analysis
In this section, we present a Lyapunov-based stability analysis for HIGS-controlled systems. In particular, an input-to-state stability (ISS) condition in terms of LMIs is proposed that guarantees the existence of a PWQ Lyapunov function (Johansson & Rantzer, 1998), in which a novel partitioning is used to reduce conservativeness.
Definition 5.1 ( 2 Khalil, 2002). The closed-loop system (16) is said to be input-to-state stable (ISS), if there exist a KL-function β and a K-function γ such that for any initial state x(0) ∈ S and any bounded w ∈ PB, any corresponding solution x : .
Three-dimensional partitioning
For the system given by (16) (or equivalently (21)), numerically tractable stability conditions can be formulated using LMIs. A piecewise quadratic Lyapunov function (Johansson & Rantzer, 1998) is pursued inspired by Aangenent et al. (2009) and Zaccarian et al. (2005Zaccarian et al. ( , 2011 for reset control systems. In the cited works, however, the flow set is partitioned only in the input-output plane of the reset element, as its nonlinear behavior is captured within this two-dimensional space. As explained in Remark 3.3, the HIGS' switching dynamics, by contrast, are determined in the (ė, e, u)-space, requiring a novel three-dimensional partitioning to reduce conservativeness. To ensure a partitioning of the (ė, e, u)-space such that the edges of (some of) the resulting regions exactly coincide with the boundaries of regions where different modes are active, first note that all boundaries of the regions F 1 and F 2 pass through the origin (see Remark 3.3 for analytic expressions of F 1 and F 2 ). Hence, a spherical coordinate system in the (ė, e, u)-space can be used to realize such a partitioning. That is, the azimuthal angle θ and polar angle φ can be used to divide this space into polyhedral (double) cones, see C ij in Fig. 4(a), which will be used to partition F 1 , the region where the integrator mode is active. For F 2 (the region where the gain mode is active), which is a subset of the plane u = k h e, a partitioning using the spherical coordinate system is possible using φ (and fixed θ), yielding regions such as depicted by T j in Fig. 4(b).
The construction of the N × M polyhedral cells is as follows. Define the N + 1 azimuthal angles and the M + 1 polar angles where φ M 1 = arctan(k h /(ω h cos(θ N ))). The angle θ N is chosen specifically such that it describes the sector boundary u = k h e. Similarly, the angle φ M 1 is defined such that the vector at angular coordinates (θ N , φ M 1 ) (red vector in Fig. 4(b)) coincides with the dynamics switching boundary at the intersection of the planes ω h e = k hė and the plane u = k h e, at which the HIGS switches back from gain mode to integrator mode.
Let the subsets C ij and T j , depicted in Fig. 4, that partition the regions where HIGS' integrator mode and gain mode are active, respectively, be given by where the inequalities in (35a) hold element-wise, and These matrices follow from the patching hyperplanes described where the latter is obtained by computing the cross product of the unit vectors with angular coordinates (θ i−1 , φ j ) and (θ i , φ j ), i.e., the two unit vectors that individually span the two region corners (successive in θ-direction) shared by the regions C ij and C i(j+1) , and together spanning the common boundary plane between these regions. Note that as a result of the symmetry of the system with respect to the origin of the (ė, e, u) space, the subsets C ij as in (35a) are defined such that if z ∈ C ij then −z ∈ C ij .
For the actual partitioning, let the index sets be defined as N = {1, 2, . . . , N} and M = M 1 ∪ M 2 , where M 1 = {1, . . . , M 1 } and M 2 = {M 1 + 1, . . . , M}. The closure of the integrator mode flow set F 1 = F can then be partitioned by C ij for i ∈ N , j ∈ M, and the gain mode flow set closure F 2 ⊂ F 1 by T j for j ∈ M 2 , i.e., Note that T j captures the boundary plane of C Nj where u = k h e, hence T j ⊂ C Nj .
LMI-based stability condition
In addition to N and M, defineÑ = N \{N} andM = M\{M}, and let S n and S n ≥0 denote the sets of n × n symmetric matrices, the latter consisting of nonnegative elements only. The following result then states a sufficient condition for ISS of a closed-loop system with HIGS as described by (16) (or (21)).
Theorem 5.1. If there exist matrices U ij , W ij ∈ S 4 ≥0 for i ∈ N , j ∈ M, and V j ∈ S 2 ≥0 for j ∈ M 2 such that P ij ∈ S n for i ∈ N , j ∈ M satisfy the LMIs wherê with C ij and T j from (36), arbitrary scalars v j,1 , v j,2 , v j,3 ∈ R,C as in (21a(i)) and whereΘ i,⊥ ,Φ ij,⊥ ∈ R n×(n−1) andΦ M,⊥ ∈ R n×(n−2) are matrices of full column rank such that im( respectively, then the system in (16) (or equivalently (21)) is ISS.
Proof. Since z =Cx (21a(i)), we can define the setŝ to describe the polytopic partitions in the state space of the closed-loop system. For this system, consider the radially un- To prove ISS, we will show that V is an appropriate ISS Lyapunov function. By (35a) and (46a) together with the nonnegativity of the elements in W ij ∈ S 4 ≥0 , it holds that As a result, using the S-procedure (41) implies that thereby ensuring positive definiteness of V . Next, using Finsler's lemma continuity of V is imposed over the boundary between two cells connected in azimuthal direction by (42) and in polar direction by (43). Since φ 0 = 0 and φ M = π, we also require (44) to ensure continuity of V over the boundary between the first and last regions in polar direction. Note that the former two constraints ensure continuity over hyperplanes of dimension n − 1, whereas the latter does so in only n − 2 dimensions. This is due to the fact that in (ė, e, u)-space, all region boundaries are double-conical subsets of planes, except for the boundary line between the first and Mth region in φ-direction, i.e., on the intersection C i1 ∩ C iM . In fact, since φ 0 = 0 and φ M = π this boundary coincides with theė-axis, i.e., at e = u = 0, from which it is easily seen that this leaves only n − 2 directions in the state space of the closed-loop system in which continuity over this boundary must be ensured. Hence, V is a locally Lipschitz continuous function.
Inspecting the time derivative of V in integrator mode, note that due to (35a), (46a), and U ij ∈ S 4 ≥0 it holds that which via the S-procedure ensures that (39) yields if x ∈Ĉ ij , for some ε ij,1 > 0 and (i, j) ∈ N × M by strictness of the matrix inequalities. For the gain mode, let us first observe that (35b), (46b), and V j ∈ S 2 ≥0 , imply forV j as in (45) with arbitrary v j,1 , v j,2 , v j,3 ∈ R. Simultaneously employing the S-procedure and Finsler's lemma results in (40) to ensure if x ∈T j , with ε Nj,2 > 0 and j ∈ M 2 by strictness of the LMIs in (40). Combining (50) and (52) and applying Young's inequality yields almost everywhere the upper bound on the time derivative of V over both modeṡ with constants ε = min (i,j,q) ε ij,q − 1 δ > 0 and ρ = δ max (i,j) ∥P ij B∥, where (i, j, q) for sufficiently large δ > 0. Thus, V is an appropriate PWQ ISS Lyapunov function by which the closed-loop system with HIGS is ISS. □ Remark 5.1. The constraints (42) and (43) are sufficient conditions for continuity of V , as they in fact demand continuity over the entire (n − 1)-dimensional hyperplanes rather than only over the angularly bounded subset of such a hyperplane shared by two neighboring partitions. By contrast, (44) is a necessary and sufficient condition, requiring continuity of V only where it is truly needed. Moreover, for (44) one may remark that it would suffice to only impose this condition for any single i ∈ N , provided that (42) is satisfied, since theė-axis is a common boundary for all regionsĈ i1 andĈ iM , i ∈ N . In fact, the HIGS' dynamics are such that every crossing of this boundary, i.e., every zero crossing of e, leads to trajectories traveling fromT M intô C 11 (except for trajectories traveling through z = 0, for which continuity of V is already guaranteed by both (42) and (43)), and hence it would be sensible to require (44) only for i = N.
Discussion
The main strength of the proposed LMI-based conditions is that the discontinuous PWL dynamics are explicitly incorporated, and that the flexibility of a PWQ Lyapunov function is used to reduce conservativeness. Moreover, since the conditions pose a convex optimization problem, they can efficiently be solved by numerical algorithms. Furthermore, this LMI-based approach is general in nature in the sense that it makes no restrictive demands on G (only Assumption 3.1 is required), as opposed to the approach presented in Section 6, which requires stability of the linear system G . Hence, the LMI-based approach is in principle applicable to any HIGS-controlled (motion) system that can be written in the form (16). However, being an LMI-based stability analysis, it has two aspects that might be experienced as less desirable. First, it requires an accurate parametric state-space model, which for high-precision industrial (motion) systems may not be straightforward to obtain. Second, if infeasible, the evaluated conditions provide no direction to the control engineer on how to (re)design the controller or how to guarantee robustness margins.
Frequency-domain conditions
In this section we discuss circle-criterion-like conditions used in Deenen et al. (2017), which similarly to the circle criterion (Khalil, 2002) exploits the sector-boundedness of the HIGS' input-output behavior to enable nonparametric stability analysis in the frequency domain.
Circle-criterion-like condition
Similar to the classical circle criterion, we inspect the frequency response function of G that connects the HIGS' input and output in the loop of Fig. 1, given by in relation to the HIGS' input-output sector. The following theorem states the sufficient condition for ISS.
Theorem 6.1. Consider the system in Fig. 1 described by (16) with fixed ω h ∈ R >0 and k h ∈ R >0 . This system is ISS in the sense of Definition 5.1 if the following conditions are satisfied: (I) The system matrix A g of (1) is Hurwitz; (II) The transfer function G ev (s) as in (54) Proof. The proof is based on modifications of the circle criterion as proposed in van Loon et al. (2017). Different from van Loon et al. (2017), however, is the absence of an explicit additional detectability condition when considering the scalar-state HIGS, and the fact that, besides for the integrator dynamics, the Lyapunov function must be proven to decrease for an additional set of flow dynamics resulting from the gain mode. The proof is divided into the following steps: (1) Initially, the internal dynamics of H are disregarded. Using the circle criterion, the sector-boundedness of its input-output pair (e, u) is exploited to prove ISS of G with respect to w by construction of a quadratic ISS Lyapunov function (Sontag, 1995) V g via the Kalman-Yakubovich-Popov (KYP) lemma (Khalil, 2002). (2) A quadratic Lyapunov-like function V h is constructed for the HIGS in isolation, and an upper bound on its time derivative is found through explicit use of the sector condition and mode constraints, showing that the hybrid element is a (state) strictly passive system. (3) The functions V g and V h constructed in the previous two steps are combined into a (common) quadratic ISS Lyapunov function V c for the closed-loop system including the HIGS to prove the theorem.
Step 1: The KYP Lemma (Khalil, 2002), shows that the conditions (I) and (II) and minimality of (A g , B gv , C g ) imply the existence of a positive definite matrix P g ∈ S ng , a matrix L, and a positive constant ε g that satisfy Hence, the Lyapunov function V g (x g ) = x ⊤ g P g x g satisfies λ(P g )∥x g ∥ 2 ≤ V g (x g ) ≤ λ(P g )∥x g ∥ 2 , where λ(P g ) and λ(P g ) denote the minimum and maximum eigenvalues of the matrix P g ≻ 0, respectively. Following the derivation in Step 1 of the proof of Theorem 6 in van Loon et al. (2017), in which the sector condition eu ≥ 1 k h u 2 from (3) is explicitly used twice, we find that the time derivative of V g along solutions of (1) satisfies almost everywherė where c 1 = ε g λ(P g ) − 1 δ 1 > 0 for sufficiently large δ 1 , and c 2 = δ 1 (λ(P g )∥B gw ∥) 2 > 0, from which we conclude that V g is indeed an ISS Lyapunov function for G with respect to w.
Step 2: Consider the quadratic Lyapunov function where c 3 = 1−δ 2 ω h > 0 with 0 < δ 2 < 1, for the isolated HIGS. In integrator mode (2), the corresponding time derivative is given bẏ which using u = x h and the sector condition u 2 ≤ k h eu from (3) can be rewritten aṡ where c 4 = δ 2 k h > 0 such that the last equality holds. Note that (60) shows that the integrator mode of the isolated HIGS is (state) strictly passive (Khalil, 2002). For the gain mode, let us first note that substitution of u = k h e into the quadratic condition ω h e 2 > k hė e in (19) yields We consecutively use (2), (61), and the gain mode constraint u = k h e from (19) to rewrite the time derivative of V h in gain mode aṡ which is equal to (60), and thereby denotes the uniform upper bound onV h (almost everywhere) over both the integrator and gain mode, showing strict passivity of H . Employing Young's inequality and ∥u∥ ≤ k h ∥e∥ from (3), we find almost everywherė where (1) gives almost everywherė where c 6 = c 5 ∥C g ∥ 2 > 0.
Step 3: For the closed-loop system consisting of the interconnection of (16) (or equivalently (21)) as depicted in Fig. 1 with state where P c = , and 0 < µ < c 1 c 6 . V c is positive definite and radially unbounded according to An upper bound for the time derivative is given bẏ almost everywhere, with ε c = min(c 1 − µc 6 , µc 4 ) > 0 and ρ c = c 2 > 0.
Consequently, we conclude that the system is ISS in the sense of Definition 5.1. □
Discussion
Condition (I) of Theorem 6.1 restricts the theorem's applicability to stable systems G . For motion systems with floating masses, i.e., where the transfer function of the open-loop system contains poles at s = 0, this means that G ev (s) may only describe an already stabilized closed-loop system. Another weakness of this approach with respect to Theorem 5.1 is the fact that this approach potentially yields a conservative estimate on stability for two main reasons. First, the Lyapunov function that underlies the circle criterion is a common quadratic one. Second, the actual nonlinear behavior of the HIGS is not taken into account, but instead condition (II) only considers the sector following from the HIGS gain k h . In fact, from the Nyquist diagram of the linear system k h G ev we find that condition (II) directly leads to k h,cc ≤ k h,gain (which becomes an equality in case the largest negative real part of G ev is on the real axis), where k h,cc denotes the smallest upper bound on the gain k h that satisfies condition (II) of Theorem 6.1, and k h,gain denotes the supremum of the gain value by which the closed loop G ev /(1 + k h G ev ) is stable. In other words, using Theorem 6.1 one can only guarantee stability for gain values k h that also render the individual gain mode subsystem stable.
Another indication of the circle-criterion-like approach being more conservative than the LMI-based conditions is found by the inequality in (67), which implies that any Lyapunov function V c (x) = x ⊤ P c x that follows from satisfying the conditions in Theorem 6.1 also satisfies the conditions (39)-(41) of Theorem 5.1 with N = M = 1 and P 11 = P c (rendering the continuity constraints trivial), and with U 11 , V 1 , and W 11 being zero matrices. Conversely, satisfying the conditions in Theorem 5.1 need not imply that the conditions in Theorem 6.1 are satisfied, hinting that the former may possibly guarantee stability in cases where the latter cannot.
The main strength of the circle-criterion-like approach lies in its convenience of application and compatibility with current industrial practice for nonparametric frequency-domain stability evaluation during the controller design process. That is, classical linear loopshaping techniques can be used to design a linear controller contained in L that stabilizes G ev , thereby satisfying condition (I). Also, for typical motion systems and as formalized by Assumption 3.1, it holds that G ev (jω) → 0 for ω → ∞, implying (55) of condition (II) is satisfied. Thus, only (56) of condition (II) remains to be verified, which boils down to a graphical evaluation of (measured) frequency response data of G ev with respect to the design parameter k h in a Nyquist diagram. As such, this provides the control engineer with frequency-specific information regarding violations and robustness margins toward satisfying condition (II), which can directly be used for controller redesign.
Remark 6.1. Let us briefly elaborate on robustness issues of a discontinuous differential equatioṅ with x(t) ∈ R n , w(t) ∈ R nw , and X ⊆ R n , just as our HIGS-controlled system (9). As discussed in Goebel, Sanfelice, and Teel (2012), in order to obtain robust (with respect to arbitrary small state perturbations) stability guarantees, it is important to consider the Krasovskii regularization of (68), defined aṡ where B(x(t), δ) is the open ball of radius δ around x(t) and for a set Ω ⊆ R n , co(Ω) denotes its closed convex hull and X denotes the closure of X . In case of (16), its Krasovskii regularization is given aṡ It can be shown that Theorems 5.1 and 6.1 hold for (69) as well, and thus stronger stability guarantees for (16) can be obtained including state perturbations (see Goebel et al., 2012 for more details).
Illustrative example
In this section, we demonstrate how the two stability analysis approaches of Sections 5 and 6 can be used to evaluate stability of a typical control system including a HIGS. The reader interested in time and frequency domain simulations/experiments of successful applications of HIGS-based control, is referred to Deenen et al. (2017), van den Eijnden, Heertjes, and Nijmeijer (2019) and Heertjes, van den Eijnden, Sharif, Heemels, and Nijmeijer (2019). Moreover note that in the recent work (van den Eijnden, Heertjes, Heemels, & Nijmeijer, 2020) it is shown how HIGS-based controllers overcome fundamental limitations of LTI control.
System description
Consider the SISO motion control tracking problem depicted in Fig. 5, where the LTI open-loop system L represents the series interconnection of a single-mass plant P and the stabilizing (nominal) linear controller C consisting of a PD-controller and first-order lowpass filter. The corresponding Laplace transforms are given by with mass m = 1 kg, PD-controller parameters k p = 1 N/m and k d = 0.2 Ns/m, and lowpass corner frequency ω lp = 7 rad/s, resulting in a bandwith of 1 rad/s. The plant output y l ∈ R is the position of the mass, which must track the reference r ∈ R. Using an appropriate (non-causal) feedforward signal u f the effects of r are assumed to be fully canceled in the closed loop. As a result, the tracking error is given by e = −y l , and the exogenous input d ∈ R represents only the disturbance effects. Furthermore, corresponding to Fig. 1, the linear part G (red dashed box) representing the baseline linear control system is in feedback with the HIGS H , the latter generating the control signal u = −v ∈ R. In particular, the transfer function G ev (s) is given by in which we recognize the complementary sensitivity function, which similarly to L (s) has relative degree two, thereby satisfying Assumption 3.1.
LMI-based stability analysis
Evaluation of the conditions in Theorem 5.1 requires a state-space representation of G as in (1). To this end, consider first the state-space model of the open-loop system L given by L : with states denote the controller and plant states, respectively, and corresponding matrices To obtain a state-space description of G in the form of (1), we close the loop using e = −y l and u = −v, resulting in G : where additionally x g = x l is substituted as no pole-zero cancellation is found to occur by closing the loop. An extended closed-loop state-space representationG follows from augmenting the output in (74) toẽ = [ė e] ⊤ , which using the fact that C l B l = 0 (by Assumption 3.1) results in the matrices with which we can construct closed-loop system (21). To demonstrate the added benefit of higher-dimensional partitioning of the state space, we evaluate stability using three different sets of LMI conditions: (1) Without partitioning, which results in a common quadratic Lyapunov function. The corresponding LMIs are given by (39)-(41) for N = M = 1.
(3) Volumetric partitioning using all conditions stated in Theorem 5.1, implying the existence of a three-dimensionally PWQ Lyapunov function. For fair comparison, we choose the same number of regions N = 10 in the azimuthal direction as in the previous case, but now also use M 1 = 3 and M 2 = 2 to partition the state space in polar direction.
The LMI-based conditions are solved using the YALMIP toolbox (Lofberg, 2004) with SDPT3 (Tutuncu, Toh, & Todd, 2003) in MATLAB. In the implementation, the right-hand sides of (39) and (40) are tightened to −ϵ 1 I, and the right-hand side of (41) is set to ϵ 1 I, where ϵ 1 = 10 −3 is sufficiently large with respect to the machine precision such that the resulting LMIs may be solved in a non-strict manner for solver compatibility. Furthermore, similar to Remark 4 in Zaccarian et al. (2011), the equality constraints (42)-(44) are replaced by auxiliary inequality constraints with a small tolerance of ϵ 2 = 10 −8 to reduce numerical problems. Moreover, a balancing state transformationx g = T g x g with T g ∈ R ng ×ng is applied to the linear system with matrices (75) to improve numerical conditioning. Stability is evaluated using Theorem 5.1 for a grid of HIGS parameter values (k h , ω h ), the results of which are shown in Fig. 7 and will be discussed in more detail in Section 7.4.
Frequency-domain stability analysis
To evaluate stability of the closed-loop system using the circle-criterion-like approach, we simply inspect the complementary sensitivity function (71). In verifying the conditions of Theorem 6.1, we first observe that condition (I) is satisfied by design of the nominal controller C , as can also be seen from Fig. 6(a) using the Nyquist criterion. Next, since G ev (s) has a relative degree of two, it holds that G ev (jω) → 0 as ω → ∞, meaning (55) in condition (II) is satisfied for any k h > 0. For (56) in condition (II), we inspect the Nyquist diagram of (71) shown in Fig. 6(b), from which it follows that the closed-loop system including HIGS is guaranteed to be ISS by Theorem 6.1 for any ω h ∈ (0, ∞) and k h < k h,cc = 0.12, as indicated in Fig. 7 by the region to the left of dashed red line.
Comparison
For a grid of HIGS parameters (k h , ω h ), Fig. 7 visualizes the range of parameter values for which the different approaches are able to guarantee closed-loop stability of the HIGS-controlled example system (70). In addition, the figure shows the parameter range for which the system has been concluded to be stable on the basis of time-series simulation studies. Fig. 7 illustrates the conservativeness associated with this frequency-based approach, which can only guarantee stability for HIGS gains up to k h,cc = 0.12 (dashed red line). This conservativeness is thought to stem partly from inadequately accounting for the true nonlinear dynamics determined by the parameter pair (k h , ω h ), and instead using an upper bound based only on k h , which is evident in Fig. 7 as the stable parameter region according to Theorem 6.1 does not depend on ω h .
The LMI-based approach without partitioning, on the contrary, does explicitly include the nonlinear closed-loop dynamics. Consequently, the corresponding range of verifiably stable parameter values (black dots) increases with respect to the circle criterion, see Fig. 7(a). Comparing to the simulation-based stable region (gray), however, a considerable degree of conservativeness remains due to the LMIs demanding the existence of a common Lyapunov function. In Fig. 7(b), it is shown that the planar partitioning as described by Theorem 5.1 for M = 1 is able to partly alleviate this problem, resulting in a significantly larger set of parameters for which stability can be guaranteed. For parameter pairs close to the edge of the stable region, also this approach is consistently unable to guarantee stability. Finally, Fig. 7(c) illustrates the potential of the conditions in Theorem 5.1 in terms of reducing the conservativeness by extending the partitioning to three dimensions. The resulting range of (k h , ω h )-values for which closed-loop stability can be concluded on the basis of LMI conditions approximately coincides with the stable parameter region found by time-series simulation . Clearly, the LMI-based analysis outperforms the circle-criterion-like approach in terms of conservativeness. Nevertheless, the latter may sometimes be the preferable option due to its conditions being easier to verify. This is especially true in practice, in case accurate state-space models such as (72) are not available. The other potential drawbacks of the LMI-based approach are caused by the conditions being evaluated qualitatively, numerically, and for only a single parameter pair (k h , ω h ) at a time. Moreover, using numerical solvers may cause sensitivity to numerical inaccuracies, especially those related to the continuity constraints (42)-(44). In particular, if ϵ 2 is chosen too small, the solver may be unable to find a feasible solution for some values (k h , ω h ) within the stable region, while for ϵ 2 too large, increasingly many false positive conclusions on stability occur outside the stable region. Moreover, the suitable tolerance values may depend on the number of partitions N, M 1 , and M 2 , which in turn affects the number of LMIs and thereby the required solver time. The process of finding a suitable partitioning and corresponding tolerance values, combined with the fact that each value of the pair (k h , ω h ) must be evaluated individually, renders this approach more time-consuming compared to the circle-criterion-like analysis.
To reduce the numerical sensitivity of the LMI-based approach in this example, we tuned the PD-controller with reduced robustness margins, allowing for a fair comparison of the conservativeness of the different approaches (i.e., independent of numerical issues).
Conclusion
In this paper, we have introduced the formalization of the hybrid integrator-gain system (HIGS). The HIGS is a nonlinear integrator that projects its dynamics onto a sector, thereby keeping the sign of its input and output the same while maintaining a continuous control signal. We have presented an appropriate mathematical framework for the formal description of HIGS-controlled systems based on generalizations of projected dynamical systems, called extended projected dynamical systems (ePDS), which naturally describes the main design philosophy behind the HIGS. The ePDS framework was used in showing the fundamental property of well-posedness of HIGS-controlled systems in the sense of existence and forward completeness of solutions, thereby laying down a mathematical framework for formal studies of HIGS-based controllers. Moreover, two approaches for analyzing closed-loop stability of a motion system including a HIGS have been presented. The first involves LMI-based conditions that guarantee ISS of the closed-loop system via a PWQ Lyapunov function. Its main strength lies in a novel three-dimensional partitioning of the state space specifically tailored to the HIGS' dynamics, which reduces conservativeness of the conditions to a degree similar to what would be expected from a necessary condition on closed-loop stability. The second approach involves a circle-criterion-like analysis. Although potentially more conservative and only applicable to certain (common) feedback configurations, this approach allows for a nonparametric frequency-domain evaluation of input-to-state stability of the closed-loop system including HIGS. Both methods have been demonstrated on a motion system. Since their strengths and weaknesses are largely complementary, together they form a powerful set of tools for the stability analysis of HIGS-controlled systems. | 12,384.4 | 2021-01-01T00:00:00.000 | [
"Mathematics"
] |
Tourism Dynamics and Sustainability: A Compared Analysis in Mediterranean Islands. Evidence for Post-Covid-19 Strategies
: Tourism may not sustainably support territories with limited natural resource stock as islands. The volume in visitor arrivals and the industry investments can increase the pressure even beyond sustainable levels. There is an evident and unresolved tension between these two great po-larities, sustainability and economic growth driven by tourism. The aim for policymakers is to find an acceptable equilibrium between these two dimensions. This paper investigates tourism evolution between 2007 and 2019 in 15 Mediterranean islands, comparing tourism pressures through statistical indicators. The analysis will compare tourism demand and supply trends in these contexts. The performances will be evaluated to identify the Islands positioning between sustainability needs and tourism development opportunities while considering post-covid-19 challenges.
Introduction
The consideration of tourism as a development driver is still under discussion because the efforts to enhance local benefits and competitiveness in tourism seem controversial from a sustainable perspective. Despite this consideration, the potential economic growth of tourism is documented in international literature, as highlighted in several recent studies. In general, when tourist activity grows, visitors increase and spend more money in a destination, leading to an increase in the GDP and economic growth [1][2][3][4][5][6][7][8][9][10][11].
With regard to insular contexts, the need to consider the peculiarity of these territories emerges. Tourism in islands is not a solved question because islands have a limited natural resource stock, so the increase in visitor arrivals can put pressure on the use of these to their viability limit, even beyond sustainable levels. Studies on the impact of tourism on island destinations worldwide have shown both positive and negative externalities generated by tourism in these contexts [12,13]. The increase of tourism flows could have unexpected detrimental impacts on environments and local communities, deriving from the excess of tourism, called overtourism [14,15,16]. Monitoring tourism impacts is fundamental to avoid negative effects on environment and residents [17], and finding new opportunities for the local industry's expansion [18,19,20,21]. In terms of sustainability [2222] (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20), for coastal or island areas is not easy being the territorial target for significant tourism flows. In this way, Mediterranean islands' environmental and cultural images can act as magnets attracting many tourists (i.e. overnight visitors). But, the arrival of consistent tourist flows could alter the fragile insular ecological equilibrium, negatively affecting those natural and cultural resources that have initially aroused tourists' interest in the knowledge of that place and ultimately could cause the displacement of tourists to the islands. That's the "paradox of tourism in the islands" [23] (131-143), and this is even more significant in the Mediterranean space. Tourism appears as an essential part of the local economy [13], being perceived generally as one of the few economic development opportunities available in the insular context and the only natural economic alternative (at the production level, economic activity, and income) capable of responding to the socio-economic needs of its inhabitants. Given that, the need to contain and eliminate negative effects on the environment and residents emerges.
The unresolved tension between these two great polarities, sustainability, and economic growth, is still far from an acceptable equilibrium. In this sense, a current of critical thinking [2424,2525] rejects the use of the term "sustainable tourism," suggesting that its use can be instrumented by political actors whose fundamental objective is somehow "green" which is simply economic growth. Adequate implementation of sustainable tourism [26] must emphasize the systematic management of environmental degradation, the generation of economic benefits for the receiving communities, and residents' perception 27, 2828]. This paper will explain how sustainable tourism was an accepted and practical reality in Mediterranean islands and what tourism development could be undertaken in these contexts. A sample of the Mediterranean island territories belonging to the European Union was included in the proposed analysis to explore this possibility.
After examining the features of tourism demand and supply in each insular context, the paper analyses four indicators comparing the results in the different observed islands. The positioning of each context, depending on the combination of the values obtained by the indicators selected in the two years, is observed. This allows us to also consider the evolution over the period observed.
The proposed analysis concerns the performances recorded before the COVID19 pandemic. The effects of COVID 19 on the economy of the Mediterranean islands, especially in terms of tourism, could be defined disastrous, with tourist activity on many islands having been reduced by almost 80%. The combined effect of restrictions (curfews, lockdowns, closing of theatres, discos, the closure of hotels and restaurants) with travel difficulties (border closures, shortage of air and maritime connections, airport closures) and the fear of being infected or becoming infected has caused a tremendous crash of the demand, causing a severe economic crisis worldwide and especially in islands, whose economy depends mainly on tourism. From a sustainability perspective, COVID 19 should be an opportunity to rethink tourism in the Mediterranean islands more consciously, by achieving a balanced equilibrium between the policies to increase tourism industry and public policy to contain and protect the islands territories.
Insularity condition and tourism
Isolation determines islands' social, cultural, political, and economic life. Historically, being isolated from the outside world, the islands appeared to be considered autarkic societies, without social and economic dynamism and with few commercial relations. Hence this nineteenth-century idea of the islands as ultra-conservative, immovable, and atavistic societies reluctant to change, whose distrustful island population hardly interacts with outsiders. A typical romantic idea but whose influence still continues today [2828].
Separation and unavoidable "territorial discontinuity" affect the life of the islands by questioning their external accessibility, both for those who intend to leave and those who intend to enter the island, since the external mobilization of people can only be carried out through air and maritime transport units. Likewise, uncertainty is generated in essential aspects of island life, such as providing necessities.
Insularity [30,31] requires a port infrastructure adequate to current needs and improved to meet demand expansions. Ports are needed for the reception of vessels, equipped with means for loading, disembarking, and storing goods, with devices for customs control. Passengers' entry and exit must also be foreseen. Likewise, airports and other connected infrastructures are essential for the accessibility to islands and, from the tourist perspective, currently even more important. Itineraries established in island transport may be affected by adverse weather and maritime conditions for navigation or by specific over-demands and thus generate discomfort in the mobilization of ordinary users and consumers. Moreover, critical marine phenomena destroy port facilities, coastal roads, and homes. Therefore, insularity can be considered according to two complementary dimensions. The former is related to the physical vulnerability of the islands in spatial terms (isolation, small size or smallness, scarcity of resources) in relation to specific characteristics associated with the physical and geographical features of these contexts. This dimension is persistent in economic-commercial or economic development analysis on the islands. While the latter dimension, the "islandness" [31, 32, presents a rather metaphysical cut, reflects feelings common to all islanders, based on the isolation inherent to the insular, usually in line with solid senses of roots and community.
According to the former dimension, territorial discontinuity increases the costs of external supply products and export goods caused by the mobilization and storage of shipments and landings. In this respect, we talk about the costs of insularity, originating over time a whole literature on the nature of such costs [33,35,12], on the way to measure, calculate and evaluate them [36,36,37] and more recently, how to compensate the excess costs caused by remoteness, insularity, and ultra-peripherality of the island territories [38,39].
We wonder whether it might be more expensive to consume and produce on an island than to do it on the mainland. According to Manera and Garau [28], the natural environment where human activity takes affects and conditions it. For this reason, the costs of insularity are evident since the smaller the territory, the greater the cost of human activity. Moreover, the further the part is from world economic flows, the more the costs increase [28]. From this perspective, the cost of accessing the market is much higher in the case of island economies: if we consider the transport of goods, for example, this is between two and four times more expensive than on the mainland. For this reason, the transfer of raw materials, the higher costs of storing stocks, the degradation of perishable products, and delays due to adverse weather conditions are critical factors which directly affect the competitiveness of island productions [28].
All these factors related to insularity and the verification of their simultaneous presence in these territories have led to the emergence of insular vulnerability [40,41]. In their economic development process [Error! Reference source not found.], the islands start from a situation characterized by a multiplicity of handicaps and physical, financial, and sociocultural weaknesses that cannot be avoided; therefore, a specific policy design is needed. The open debate in the European Union on insularity, its costs, and the way to face them are far from reaching a conclusion.
For island contexts, tourism represents, in this sense, the only policy option to overcome the structural constraints imposed by the small size of their economies and the insular physical conditions.
From an economic point of view, many islands have simply insufficient domestic market demand for a good or service to enable local firms to achieve any efficiencies or economies of scale. However, in the case of tourism, the demand is imported (Incoming Tourism), and thus, the market size can change, and increase, due to the possibility to attract external visitors. In this way, a local firm operating on an island could have a larger market than the local context for its goods and services. Then, they may begin to achieve economies of scale and efficiencies thanks to the tourist flows [43] (453-465). Therefore, islands firms can face the problem of the small size of the local market thanks to the demand deriving from the incoming tourists. Moreover, tourists are high spending people, so the incomes for local enterprises will increase more than proportionally. Given that, insular economies are almost totally based on tourism and related activities.
Another condition which affects islands is the geographical distance, which limits the accessibility to a destination with consequences for tourism flows, which are consequently affected by the higher cost of transport and the difficulty to reach them. Then, also in tourism, the need to consider the costs of insularity in the economic development dynamics arises. Island destinations represent a unique cluster, where tourism development and sustainability issues are connected and represent crucial aspects for the local economy and well-being 44]
The survey
Islands are defined as natural land extensions surrounded by water above the water level at high tide [45] (147-154). This geographic element differentiates them and identifies them from other territorial realities (such as peninsulas, capes, or promontories). Then, both characteristics, isolation and separation, define the island's nature and the basis of its insular condition, i.e., the fact of being an island or "insularity", a defining characteristic of islands, based on isolation and geographic discontinuity. In the selection of the sample observed, we considered the definition of "island" provided by Eurostat [46] as follow: • Have an area greater than 1 km 2 • Are at least 1 km away from the mainland • Do not have bridge connections to the mainland • Have a stable population of at least 50 people However, here the two islands of Cyprus and Malta, excluded by the European body since their respective capital cities fall within their territories, are analyzed.
This indicates a first difference between the institutional contexts examined, namely the Mediterranean: island states, autonomous regional islands, and coastal islands, which belong to a region situated on the mainland. Mediterranean islands were classified according to their size and density. The geographical dimension and population are not only featured from a geographical point of view, but they are issues from which tourism impacts cannot be separated.
In this survey a clusterizzation of islands and archipelagos according the 4 following categories was carried out: 1. Micro Islands = 0 Km 2 > Island area <1,000 km 2 2. Small Islands = 1.001 Km 2 > Island area <5.000 Km 2 3. Medium Islands = 5.001 Km 2 > Island area <10.000 Km 2 4. Large Islands = Island area> 10,001 km 2 The sample of Mediterranean islands (Table 1) was observed in order to analyze the leading type of tourism and sustainability dimensions.
Given this evidence, the need to investigate the performances recorded by islands in the tourism sector arises. Comparing the results obtained with those of other islands could be further relevant to define the best strategies to reach a sustainable development through tourism [26,47,48].
Islands evolutionary analysis
If we consider hotels and other facilities, the Mediterranean islands counted 24.416 (2019) accommodations and 1.813.269 beds. The distribution of the tourist supply is not uniform in all the islands; one should think that, for example, the Balearic Islands on their own contribute to 25,8% of the total availability in the Mediterranean islands in terms of beds. The Spanish archipelago is the first, counting more beds than Sardinia and Sicily, although characterised by a territorial extension equal to one-fifth of Sicily, which is the largest Mediterranean island.
The highest proportion of tourist accommodation structures is recorded in Sicily (30,6%), followed by Sardinia with 23,4% accommodation establishments.
A further comparison can be made by considering the size of the structures. The hotel accommodation class provides the highest number of beds (1.355.348, in 2019) throughout the Mediterranean, albeit with apparent differences from island to island. The largest hotels are in the Balearic Islands, the Maltese archipelago, the Dodecanese Islands, Sardinia, Crete, Cyprus, and Ionian islands. These contexts have hotels that provide an average of no less than 100 beds, according to a range including the 264 beds in the Balearic Islands and 103 in the Ionian islands. The other accommodation facilities are smaller than the previous one, except for Cyprus and Corse, equipped with a small number of large structures with an average size of 494 beds and 308 beds in each establishment. This figure is not surprising since the main kind of other facilities in these contexts is camping. The number of establishments and beds is not enough to explain the actual development of tourism in the Mediterranean islands.
In 2019, the tourism flow which affected the Mediterranean islands totaled 43.819.664 arrivals, +53% than to 2007, and 215.899.617 overnight stays, +34% compared to 2007. Also, demand flows distribution was not equal in all the contexts examined, as highlighted by the fact that 52% of arrivals are due to three contexts (Balearic Islands, Sicily, Crete), and 56% of overnights can be attributed to the Balearic Islands, Crete, and the Dodecanese. In both components of demand, the superiority of the Spanish archipelago arises. Indeed, it represents almost 30% of arrivals to Mediterranean islands and 32% of the total overnight stays corresponding to more than 68 million nights. They are left behind all the other contexts with a significant gap. Indeed, if Crete, second in the overnight stays, has a hole from the Spanish archipelago of more than 7.000.000 arrivals, Sicily, which has the second-highest number of admissions, is separated from the first position by more than 50,000,000 nights. Source: Observatory on Tourism in the European Islands -OTIE On average, the length of stay across the area was 4,7 days. Some differences should be noted beyond the individual contexts that can be highlighted according to four categories in which the islands have been divided. The average length of stay is quite similar for small islands, micro-islands, and medium islands (5 days) and coherent with the general average shown. A lower length of stay on average lower is recorded in more extensive contexts (3,7 days in the large islands). In that respect, a reflection is needed. Let us suppose the result regarding small islands is due in part to 3,2 days of Cyclades and four days of Evia, excluding which the category would have an average of 5,6 days. In that case, the large islands appear unable to restrain their guests for longer than the weekend, especially Sicily with its three days. This shows that broader contexts, which would suggest a more significant presence of tourist attractions or a more significant number of sights, things to do and places to visit, fail to become holiday destinations probably due to a lack of diversification of the supply.
Even within the same year, the flows were not evenly distributed in all island contexts. In general, tourist movements are more concentrated around May to September. Some isolated cases of seasonality extended from April to October, indicating the presence of a type of resort tourism, which makes the Mediterranean one of the favourite locations for the summer holiday.
Overall, both arrivals and nights have registered positive inflexions in the islands of the Mediterranean, in the ten years from 2007-2019, with increases which are more significant than 15.000.000 for appearances and more than 50.000.000 for presences.
Evaluating the overall result in the observed years, the scenarios have recorded an increase of 53% in arrivals and 34% in the overnight stay, highlighting a tendency towards more numerous but short travel. The best performances have been recorded by the Greek Islands, Malta, and Sardinia, which show an increase greater than 50% in arrivals, and Greek Islands and Corse with an increase greater than 70% of overnights.
Data Analysis and results
The first indicator analyzed is the territorial density index. It allows you to assess how many beds are available per km2. The first interesting result concerns the Maltese archipelago, which shows the highest concentrations of beds on their territory in 2019, followed by the Italian Tuscan archipelago. Lower index values are found in two Greek islands (the North-Eastern Aegean Islands and Evia) and the two large Italian islands. Another indicator to be considered is the Occupancy Rate. The numerator indicates the number of visitors' overnight stays. The denominator is the potential number of overnights stays, i. e. the total number of available beds in that year.
This index expresses the efficiency of the management in terms of the ability to maximise the occupancy of the accommodation establishment.
Malta highlights the best value of this rate in both years observed. The last indicator concerning the supply structure considers the average size of the accommodation establishments in each insular context examined. In general, the size of the accommodation establishments is relatively stable from 2007 to 2019. The most significant structures are in Malta, Corse, the Balearic Islands and Dodekanisa, with an average size higher than 150 beds. The territorial exploitation index measures the pressure on the environment from tourist and resident populations from the demand side. It relates the impact of tourist arrivals and residents to the territory's total area. Its value can be regarded as an indirect measure of stress that tourists and residents carry on the infrastructures of the region. Comparative analysis of tourist industry/performances in Mediterranean islands requires a selection and evaluation of a set of indicators for all the territorial contexts considered. The table below shows, for each island, the values of the chosen indicators in the two years observed. Table 5 shows the variation of 2019/2007 for each observed statistical indicator for each island cluster. Corsica, Cyprus, and the Tuscan Islands reduced the Territorial Exploitation Index and, therefore, tourist pressure on the territory. The TEI indicator shows a general increase in territorial pressure in all the islands. The most significant increase is for the Dodecanese, the Cyclades, the Ionian Islands, and Sardinia. In terms of Occupancy Rate, the Cyclades recorded the best increase in the observed period. The concentration of beds is relatively stable, except that for Dodekanisa which show a higher density and a greater average size in 2019 than in 2007.
Since the obtained values are quite different from each other, we proceed with standardising the data based on the territorial extension of the islands and the maximum value recorded for each index. This normalisation leads to a more equal comparison between the different contexts with several structural differences. They are normalised on the maximum value recorded for each islands cluster dimension. It does not express the maximum value of the indicator in absolute terms.
The islands comparative analysis considers the relationship between tourist pressure on the destination, measured by the Territorial Exploitation Index (TEI), with the three other statistical indicators concerning the structural endowment. This allow to analyse the positioning in respect the two dimension simultaneously observed. The graphs show the positioning of the islands according to the relations observed between the two statistical indicators in 2007/2019. The first graph compares the relationship and evolution between TEI and the Occupancy Rate (OR). Low TEI characterises the second quadrant's optimal positioning, where the high OR levels indicate an excellent tourism industry performance and a contained pressure on islands. Considering a dynamic view, data show a general trend to the first quadrant: an increase in efficiency, as the rate of beds occupancy increases, with a reduction in sustainability expressed by the rise in the pressure on the island. This is especially true in the case of Sardinia and the Dodecanese, which pass from the second to the first quadrant during the observed period. Although it remains in the 1st quadrant, Cyprus significantly reduced its tourist pressure while maintaining production efficiency. In 2019, the Cyclades Islands improved both parameters and moved from the third to the second quadrant.
Precisely, small and micro islands lie in the second and third quadrants both in 2007 and 2019, at least moving within them. For example, the Tuscan islands and the Ionian archipelago improve their results by reducing the TEI value. Among these islands, the only exception is the Dodekanisa, which moves from the second quadrant (the best position) to the first one, getting worse in sustainability. The Balearic archipelago and Malta maintain Sicily's position, the largest Mediterranean island.
Large and medium islands are between the first and the last quadrant, highlighting less attention to socio-environmental issues. Cyprus and Corse reduced the TEI from 2007 to 2019, while Sardinia moved from the best quadrant to the first one, improving the efficiency at the expense of sustainability.
The second analysis concerns the relationship between the TEI and the structural characteristics of the tourism supply, described by the average size indicator (AS) of the accommodation facilities. In this case, the desirable positions are indifferently the II and III quadrants, connected to a low socio-environmental impact. Dodecanese and Sardinia worsen the performance related to environmental pressure and move from the second to the first quadrant. In parallel, in 2019, Corse Island showed more significant attention to sustainability positioning almost at the border between the I and the II quadrant. Besides, Cyprus improved its performances by reducing the socio-environmental impact, as revealed by moving from the top side of the last towards the lower part of the first quadrant. Graph 3 compares TEI with the Territorial Density Index (TDI), which concerns the territorial concentration of tourist supply in terms of beds. The best positioning is the III quadrant, characterized by low beds on the territory and low-pressure levels.
The small and micro islands have the lowest values of the TEI and are in the II and II quadrants, except for Malta and Balearic, which follow the large islands. Dodekanisa, among the small contexts, and Sardinia, among the large ones, move from the II to the I quadrant and both worsen their position in terms of density of beds and TEI. Cyprus improves its performance moving from the top side of the first quadrant to the lower part of the last one, with a lower TEI and density of beds for square kilometer. Corse stays within the first quadrant reducing socio-environmental impact, moving towards the lowest part of the area. Six island contexts take up the win-to-win position in the two observed years (quadrant III). These are small and micro contexts, highlighting once again greater environmental attention than larger ones.
Discussions. Policy implication for Mediterranean islands
Fifteen insular contexts belonging to six different countries, Cyprus, Greece, Italy, Malta, France, and Spain, were compared to highlight general findings and specific features.
Insular contexts are different in geo-demographic and institutional dimensions and terms of tourism development.
The various combination of territorial extension, population, and tourism industry characteristics leads to different socio-environmental impacts and levels of efficiency in managing the tourism industry in two different periods of time.
The distribution of the tourist supply is not uniform in all the islands. The Spanish archipelago is the first in terms of beds, counting more beds than Sardinia and Sicily, although characterised by a territorial extension equal to one-fifth of Sicily, which is the largest Mediterranean island. Here we find the highest portion of tourist accommodation structures (30,6%), followed by Sardinia with 23,4%.
Considering the size of the structures, the highest number of beds is in hotel accommodations (1.355.348, in 2019). With more than 100 beds, the largest hotels are in the Balearic Islands, the Maltese archipelago, the Dodecanese Islands, Sardinia, Crete, Cyprus, and Ionian islands.
The other accommodation facilities are smaller than the previous one, except for Cyprus and Corse, equipped with a small number of large structures with an average size of 494 beds and 308 beds for establishment. This figure is not surprising given that the main kind of other facilities in these contexts is camping. In 2019, both arrivals and overnights increased in the islands of the Mediterranean basin (+53% and +34%, respectively). Remarkably, 52% of arrivals are due to the Balearic Islands, Sicily, Crete, and 56% of overnights can be attributed to the Balearic Islands, Crete, and the Dodecanese. The Spanish archipelago, in itself, represents almost 30% of arrivals to Mediterranean islands and 32% of the total overnight stays corresponding to more than 68 million nights. Considering the variation in the observed period, the best performances have been recorded by the Greek Islands, Malta, and Sardinia, which show an increase greater than 50% in arrivals, and Greek Islands and Corse with an increase greater than 70% of overnights.
Malta shows the highest TEI and TDI values in sustainability and socio-environmental impact.
By focusing on the deviations recorded by each index during the period 2007-2019, the best and worst cases can be highlighted. Corsica, Cyprus, and the Tuscan Islands show to have reduced the Territorial Exploitation Index and, therefore, tourist pressure on the territory. The islands that experienced the most significant increase in this indicator are the Dodecanese, the Cyclades, the Ionian Islands, and Sardinia. In terms of Occupancy Rate, the Cyclades recorded the best increase in the observed period (+0,5). The Comparing the TEI index with the other three indicators selected, greater attention to sustainable aspects in the small contexts can be observed. Large islands always appear in the quadrant corresponding to the higher socio-environmental pressure.
In general, the Cyclades, the Ionian islands and the North-Eastern Aegean Islands are always in the win-to-win quadrant. On the other hand, large and medium insular contexts show always low sustainability positions. Balearic and Malta, among the small and micro contexts, show the same positioning. Sardinia began with a sustainable approach in 2007, moving towards the I quadrant in 2019, getting worse in terms of socio-environmental impact.
Conclusions. Islands tourism policy implications
Islands are considered fragile territories due to the physical and economic limited resources and an unstable environmental balance. Sustainability aspects are always regarded as central for those territories, and at the same time, the need to support local economies through tourism is considered essential. The paper compared Mediterranean islands performances by using statistical indicators considering island clusters. The analysis shows that islands are characterized by a model of tourist development that has encouraged the construction of large hotels with a high average number of beds per establishment, thus creating sizeable and prominent tourist destinations.
The need to increase the number of tourism establishments, number of beds, and the need to rise in efficiency measured by beds occupancy resulted in a rise in islands pressure between 2007/2019. Analysis results are more evident for large and medium Mediterranean islands and in the case of large archipelagos. Due to this comprehensive tourism policy, the pressure on the islands is increasingly attracting more visitors to islands with an increase in tourists and overnights. Conversely, small and micro islands kept a contained pressure in 2007/2019 by choosing a small establishment dimension.
The analysis could further consider other external factors that influenced the increase of tourist supply: territorial dimensions, ability to attract investment, size of flows, and different time stages of these destinations' life cycles.
From the results obtained island dimensions show a natural limitation in tourism investments. Large and medium islands and archipelago offer a development model based on the tourism industry model, increasing the industry following the increase in tourism demand before the covid-19 pandemic. The rise in island pressure was not considered a limitation, and the expansion of the market supported the economic growth in the industry and local economy. Small and micro islands followed a more balanced model, by following the demand increase which adopted policies to keep a moderate level of pressure and islands sustainability.
The analysis represents a starting point for further studies and insights. Mediterranean islands need to address strategic development policy to ensure economic efficiency and at the same time respecting local environment and culture. In this context, new technologies as well as European strategies could support the management to take action on specific issues, like urban and environmental planning, mobility, smart cities, waste and water management, energy consumption, promotion of local culture, tourists' flows management.
Furthermore, advances in ICT help improve destination management and promotion at the same time arising visitors' awareness towards a tourism which respect local people and resources [49].
Author Contributions:
The authors equally contribute to each section of the paper. They have read and agreed to the published version of the manuscript. | 7,000 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Quantitative Study of Different Forms of Geometrical Scaling in Deep Inelastic Scattering at HERA
We use recently proposed method of ratios to assess the quality of geometrical scaling in deep inelastic scattering for different forms of the saturation scale. We consider original form of geometrical scaling (motivated by the Balitski-Kovchegov (BK) equation with fixed coupling) studied in more detail in our previous paper, and four new hypotheses: phenomenologically motivated case with $Q^2$ dependent exponent $\lambda$ that governs small $x$ dependence of the saturation scale, two versions of scaling (running coupling 1 and 2) that follow from the BK equation with running coupling, and diffusive scaling suggested by the QCD evolution equation beyond mean field approximation. It turns out that more sophisticated scenarios: running coupling scaling and diffusive scaling are disfavored by the combined HERA data on $e^+p$ deep inelastic structure function $F_2$.
Introduction
Geometrical scaling (GS) has been introduced in Ref. [1] in the context of low x Deep Inelastic Scattering (DIS). It has been conjectured that γ * p cross-section σ γ * p (x, Q 2 ) = 4π 2 α em F 2 (x, Q 2 )/Q 2 which in principle depends on two independent kinematical variables Q 2 and W (i.e. γ * p scattering energy), depends only on a specific combination of them, namely upon called scaling variable. Bjorken x variable is defined as and M p denotes the proton mass. In Ref. [1], following Golec-Biernat-Wüsthoff (GBW) model [2], function Q s (x) -called saturation scale -was taken in the following form Here Q 0 and x 0 are free parameters which can be extracted from the data within some specific model of DIS, and exponent λ is a dynamical quantity of the order of λ ∼ 0.3. In the GBW model Q 0 = 1 GeV/c and x 0 = 3 × 10 −4 .
In our previous paper [3] (see also [4]) we have proposed a simple method of ratios to assess in the model independent way the quality and the range of applicability of GS for the saturation scale defined in Eq. (1.3). Here we follow the same steps to test four different forms of the saturation scale that have been proposed in the literature.
Geometrical scaling is theoretically motivated by the gluon saturation phenomenon (for review see Refs. [5,6]) in which low x gluons of given transverse size ∼ 1/Q 2 start to overlap and their number is no longer growing once Q 2 is decreased. This phenomenon -called gluon saturation -appears formally due to the nonlinearities of parton evolution at small x given by so called JIMWLK hierarchy equations [7] which in the large N c limit reduce to the Balitsky-Kovchegov equation [8]. These equations admit traveling wave solutions which explicitly exhibit GS [9]. An effective theory describing small x regime is Color Glass Condensate [10,11].
Gluon saturation takes place for Bjorken x much smaller than 1. Yet in Ref. [3] we have shown that GS with saturation scale defined by Eq. (1.3) works very well up to much higher values of x, namely up to x ∼ 0.1. In this region GS cannot be attributed to the saturation physics alone. Indeed, it is known that GS scaling extends well above the saturation scale both in the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi [12] (DGLAP) [13] and Balitsky-Lipatov-Fadin-Kuraev [14] (BFKL) [15] evolution schemes once the boundary conditions satisfy GS to start with. It has been also shown that in DGLAP scheme GS builds up during evolution for generic boundary conditions [16]. Therefore in the kinematical region far from the saturation regime where, however, no other scales exist (e.g. for nearly massless particles) it is still the saturation scale which governs the behavior of the γ * p cross-section.
The form of saturation scale given by Eq.(1.3) is dictated by the asymptotic behavior [9] of the Balitsky-Kovchegov (BK) equation [8], which is essentially the BFKL equation [14] supplied with a nonlinear damping term. It has been first used in the papers by K. Golec-Biernat and M. Wüsthoff [2] where the saturation model of inclusive and diffractive DIS has been formulated and tested phenomenologically.
Since the original discovery of GS in 2001 there have been many theoretical attempts to find a "better" scaling variable which is both theoretically justified and phenomenologically acceptable. An immediate generalization of the saturation model of Refs. [2] has been done in Ref. [17] where DGLAP [12] evolution in Q 2 has been included. Although the exact formulation of DGLAP improved saturation model requires numerical solution of DGLAP equations, one can take this into account phenomenologically by allowing for an effective Q 2 dependence of the exponent λ = λ phn (Q 2 ) which is indeed seen experimentally in the low x behavior of F 2 structure function (see e.g. Refs. [17,18] and Fig. 1). This piece of data can be relatively well described by the linear dependence of λ phn (Q 2 ) on log Q 2 leading to the scaling variable of the following form In another approach to DIS at low x one considers modifications of BK equation through an inclusion of the running coupling constant effects. Depending on the approximations used two different forms of scaling variable have been discussed in the literature [9]: and [19] where subscripts "rc" refer to "running coupling". Note that from phenomenological point of view (1.6) is in fact a variation of (1.4) where a different form of Q 2 dependence has been used. Finally, generalization of the BK equation beyond a mean-field approximation leads to so called diffusive scaling [20] characterized by yet another scaling variable: These different forms of scaling variable (except (1.4)) have been tested in a series of papers [21][22][23] where the so called Quality Factor (QF) has been defined and used as a tool to assess the quality of geometrical scaling. In the following we shall use the method developed in Refs. [3,4] to test hypothesis of GS in scaling variables (1.4)-(1.7) and to study the region of its applicability using combined analysis of e + p HERA data [24]. We shall also compare our results with earlier findings of Refs. [21][22][23].
Our results can be summarized as follows: more sophisticated scenarios i.e. running coupling scaling and diffusive scaling are disfavored by the combined HERA data on e + p deep inelastic structure function F 2 . In contrast, phenomenologically motivated case with Q 2 dependent exponent λ and the originally proposed form of the saturation scale [1] with fixed λ exhibit high quality geometrical scaling over the large region of Bjorken x up to 0.1. The fact that GS is valid up to much larger Bjorken x's than originally anticipated has been already used in an analysis of GS in the multiplicity p T spectra in pp collisions [25].
In Sect. 2 we briefly recapitulate the method of ratios of Ref. [3] and define the criteria for GS to hold. In Sect. 3 we present results for 4 different scaling variables introduced in Eqs.(1.4)-(1.7). Finally in Sect. 4 we compare these results with our previous paper [3] and with the results of Refs. [21][22][23].
Method of ratios
Throughout this paper we shall use model-independent method used in Refs. [3,4] which was developed in Refs. [26] to test GS in multiplicity distributions at the LHC. Geometrical scaling hypothesis means that where for simplicity we define σ γ * p as for different x i 's, evaluated not in terms of Q 2 but in terms of τ , should fall on one universal curve. This means in turn that if we calculate ratio of cross-sections for different Bjorken x i 's, each expressed in terms of τ , we should get unity independently of τ . This allows to determine parameter governing x dependence of τ by minimizing deviations of these ratios from unity. Generically we denote this parameter as α, although for each scaling variable (1.4) -(1.7) it has a different meaning: α = β, µ, ν and κ for Q 2 -dependent, running coupling (1 and 2) and diffusive scaling hypotheses, respectively. Following [3,4] we apply here the following procedure. First we choose some x ref and consider all Bjorken x i 's smaller than x ref that have at least two overlapping points in Q 2 (or more precisely in scaling variable τ ). Next we form the ratios By tuning α one can make R x i ,x ref (α; τ k ) = 1 ± δ for all τ k with accuracy of δ for which following Ref. [3] we take 3%. For α = 0 points of the same Q 2 but different x's correspond generally to different τ 's. Therefore one has to interpolate the reference cross-section ). This procedure is described in detail in Refs. [3,4].
In order to find optimal value of parameter α that minimizes deviations of ratios (2.3) from unity we form the chi-square measure where the sum over k extends over all points of given x i that have overlap with x ref and Finally, the errors entering formula (2.5) are calculated using where ∆σ γ * p (τ (x, Q 2 )) are experimental errors (or interpolated experimental errors) of γ * p cross-sections (2.2). For more detailed discussion of errors see Ref. [3]. In this way, for each pair of available Bjorken variables (x i , x ref ), we compute the best value of parameter α, denoted in the following by a subscript min: 1 α min (x i , x ref ) and the corresponding χ 2 . For GS to hold we should find a region in ( and the corresponding χ 2 x i ,x ref is small. We shall also look for possible violations of GS in a more quantitative way. In order to eliminate the dependence of α min (x, x ref ) on the value of x, we introduce averages over x (denoted in the following by . . . ) minimizing the following chi-square function: which gives the best value of α denoted as α min (x ref Data points as in Ref. [17], see also [18].
Since GS is expected to work for small x's, the "average" value of scaling parameter To quantify further the hypothesis of geometrical scaling we form yet another chisquare function which we minimize to obtain α min (x cut ) . Equation (2.8) allows us to see how well one can fit α min (x ref ) with a constant α up to x ref = x cut . Were there any strong violations of GS above some x 0 , one should see a rise of α min (x cut ) once x cut becomes larger than x 0 .
Results
Let us now come back to the discussion of different scaling variables defined in Eqs. (1.4) -(1.7). All of them depend on one variational parameter, which we constrain analyzing ratios (2.3) for combined HERA e + p DIS data [24].
In the case of Q 2 -dependent exponent λ phn (1.4), however, there are in fact two parameters, one of them (λ 0 ) being fixed using our previous analysis of Ref. [3] where we have shown that GS scaling works very well with constant λ = λ 0 : λ 0 = 0.329 ± 0.002.
(3.1) On the other hand looking at low x behavior of the F 2 structure function it has been shown that [17,18]: where λ phn (Q 2 ) can be well parametrized as (for Q 2 in (GeV/c) 2 ) as depicted in Fig. 1. Taking therefore scaling variable in the form of (1.4) with λ 0 = 0.329 we test in fact consistency of the slopes β as extracted from Fig. 1 and by the procedure described in Sect. 2. Note that this is therefore a kind of perturbative two parameter fit, and as such it has a different status than the remaining Ansätze for the scaling variable (1.5) -(1.7). Similar remarks apply to the running coupling rc2 case (1.6), where the scale of the logarithm Q 2 ν has been fixed at 0.04 (following e.g. Ref. [22]). Then for all points Q 2 > Q 2 ν and τ rc2 decreases with rising ν. Let us first examine 3 dimensional plots of α min (x, x ref ) (note again that α = β, µ, ν or κ, depending on the scaling variable). For GS to hold there should be a visible plateau of α min over some relatively large part of (x, x ref ) space (recall that by construction x < x ref ). Looking at Fig. 2 one has to remember that the values of α min (x, x ref ) are subject to fluctuations that will be "averaged over" when we discuss more "integrated" quantities α min and α min . Note that statistical errors of α min (x, In the case of Q 2 -dependent scaling variable (1.4) (Fig. 2.a) and for the running coupling case (1.5,1.6) (Figs. 2.b, c), the values of parameters β, µ and ν rise steeply for large x's, whereas for diffusive scaling parameter κ is falling down rapidly. More closer look reveals that for running coupling rc1 case ( Fig. 2.b) there is in fact no distinct plateau, one can also see a systematic rise of µ min in a region of very small x's. Similarly for the diffusive scaling ( Fig. 2.d) we see rather systematic growth of κ min for small x's with possible plateau in a small corner of very low x's. At first glance no plateau is neither seen for β min (x, x ref ) (Fig. 2.a). However -as will be shown in the following -because of considerable statistical uncertainties within the scale used in Fig. 2.a, very good description of GS with constant β is still possible.
It is interesting to look at 3 dimensional plots of the corresponding χ 2 values (2.5) shown in Fig. 3. Recal that for GS to hold one should observe small values of χ 2 (α min ) in the same region where α min is constant. This happens for τ phn (Fig. 3.a) where χ 2 oscillates around 1 not exceeding 2 even for large values of x. Similarly τ rc1 (Fig. 3.b) stays smaller than 2 up to x ∼ 10 −2 where χ 2 jumps above 2. In this region, however, parameter µ is steadily decreasing with x. In contrast, in the case of τ rc2 (Fig. 3.c) and τ ds χ 2 (Fig. 3.d) χ 2 have pronounced fluctuations and a plateau (if at all) is visible only below x ∼ 10 −3 . However, in this region parameter ν (corresponding to Fig. 3.c) rises with x, whereas κ (corresponding to Fig. 3.d) exhibits rather strong fluctuations.
Due to different functional dependence of the saturation scales entering Eqs. (1.4) -(1.7) variations of parameters β, µ, ν and κ differently influence pertinent scaling variable τ . Therefore -before we turn to average quantities . . . and . . . displayed in Fig. 5 let us define effective exponents λ eff : which depend on fitting parameters β, µ, ν and κ. In Fig. 4 we plot these effective powers as functions of x and Q 2 for the values of the parameters β min , µ min , ν min and κ min fixed at the end of this Section. In order to find the scale relevant for a parameter entering definition of a given scaling variable τ (1.4) -(1.7), for each scaling hypothesis separately we have varied this parameter around the best value by ± and required that |λ eff (α min ± ; x, Q 2 ) − λ eff (α min ; x, Q 2 )| = 1 (3.5) for some typical values of x = 0.0001 and Q 2 = 10 GeV 2 /c 2 . In this way in each case the value of provides the reference scale for each variational parameter α = β, µ, ν or κ. Therefore looking at Fig. 5 one should bear in mind that the span of the vertical axis corresponds to the variation of the effective exponent ∆λ eff ∼ ±1 around its best value. Looking at Figs. 5 we see immediately that the best scaling properties are exhibited by parameter β of Q 2 -dependent scaling variable τ phn (1.4). Parameter β is well described by a constant β 0 = β min (0.08) = 0.02 ± 0.001 (3.6) over 3 orders of magnitude in x. We have used the value of maximal x cut = 0.08, since it was the value of x cut for which λ 0 = 0.329 has been extracted in Ref. [3], althoughas clearly seen from Fig. 5.a -GS in variable τ phn works well up to x 0.2. There is an impressive agreement between both averages β min and β min , however the value (3.6) is five times smaller than expected from the fit to low x behavior of F 2 structure function (3.3). For comparison in Fig. 6 we present the plot from Ref. [3] where λ min and λ min for scaling hypothesis with constant λ (i.e. for β = 0) are shown. We see that the quality of a fit with a constant λ is only a little worse than GS in τ phn but in general much better than in the case of the remaining scaling variables (1.5) -(1.7).
Indeed, for the running coupling constant rc1 case (1.5) we see in In this case, however, one should bear in mind that more "differential" measure of GSχ 2 (ν min ) -shown in Fig. 3.c does not support hypothesis of GS above x ∼ 10 −3 . Finally, in the case of diffusive scaling (1.7) we can hardly conclude that GS is really seen; although it is possible to find constant behavior of κ min and κ min below x ∼ 10 −3 with κ 0 = κ min (0.0013) = 0.449 ± 0.012. (3.9) Note, that the errors in Eqs. (3.6) -(3.9) are purely statistical (for discussion of systematic uncertainties see [3]).
Summary and Conclusions
In this paper we have applied the method developed in Refs. [3,4] to assess the quality of geometrical scaling of e + p DIS data on F 2 as provided by the combined H1 and ZEUS analysis of Ref. [24]. In a sense our analysis is in a spirit of previous works [21,22] and especially Ref. [23] where the same set of data has been analyzed by means of so called quality factor. Although the authors of Ref. [23] applied kinematical cuts 4 ≤ Q 2 ≤ 150GeV 2 , x ≤ 0.01 our results for scaling parameters given in Eqs. (3.1) and (3.7) -(3.9) are in good agreement with their findings. For completeness let us quote their results (note that they did not consider logarithmic Q 2 dependence of τ phn ): µ 0 = 1.61 (rc1), ν 0 = 2.76 (rc2) and κ 0 = 0.31 (ds). Difference in κ 0 can be explained by applied kinematical cuts, indeed, if we take maximal x cut = 0.01 we obtain κ min (0.01) = 0.301 ± 0.006 in agreement with [23].
Despite the fact that we have been able to find some corners of phase space where geometrical scaling in variables (1.5) -(1.7) could be seen, it is absolutely clear that the best scaling variable is given by (1.4) (or even by a constant λ of Eq. (3.1)), whereas diffusive scaling hypothesis is certainly ruled out. This is quite well illustrated in Fig. 4 where effective exponent λ eff for scaling variable (1.7) changes sign for small Q 2 . This is the reason why in Ref. [23] a cut on low Q 2 has been applied. Similar argument applies for the running coupling rc2 case (1.6) which blows up for small Q 2 . Because of that χ 2 x i ,x ref functions have no minima for very low x i and x ref (points with small x have also small Q 2 ). Therefore the only candidate for scaling variable is running coupling rc1 case (1.5). Nevertheless, comparing Fig. 5.b with Fig. 6 where we plot results for GS scaling with constant exponent λ, we see that both by quality and applicability range, the original form of scaling variable does much better job than (1.5). Although our results for best values of parameters entering definitions of scaling variables (1.5) -(1.7) are in agreement with Refs. [21][22][23] we do not confirm their conclusion that only diffusive scaling is ruled out while for other forms of scaling variable geometrical scaling is of similar quality. It is of course perfectly possible that the HERA data are not "enough asymptotic" and geometrical scaling in one of the variables defined in Eqs. (1.5) -(1.7) will show up at higher energies and lower Bjorken x's. | 4,897.2 | 2013-02-18T00:00:00.000 | [
"Physics"
] |
HEAT TRANSFER AND FLOW PROFILES IN ROUND TUBE HEAT EXCHANGER EQUIPPED WITH VARIOUS V-RINGS
This study numerically investigates pressure loss, heat transfer and thermal efficiency in round tube heat exchangers attached with various types of Vrings. A typical type A V-ring is compared with two types of modified V-rings (type B and C). The impacts of blockage ratios, b/D = 0.05, 0.10, 0.15 and 0.20 for all V-ring types in the turbulent region are discussed (Re = 3000 – 20,000). Flow directions in the round pipe attached with the V-rings are varied. The V-apex setting downstream is referred to as “V-Downstream, while the V-apex setting upstream is referred to as “V-Upstream”. The flow and heat transfer profiles in the tested section are analyzed using the finite volume method (a commercial code with the SIMPLE algorithm). The thermal performance of the tested tube is measured in terms of dimensionless variables: thermal enhancement factor (TEF), Nusselt number (Nu) and friction factor (f). Numerical results reveal that type B and C V-rings can reduce pressure drop compared with type A V-ring. Additionally, the VUpstream of type C V-ring yields the maximum TEF of 3.10 at b/D = 0.05.
INTRODUCTION 1
Increasing energy demand has driven the need to improve the efficiency of heat exchanger systems in industrial plants. There are many ways to improve thermal performance. Adding external power such as vibration to the heating/cooling systems to increase heat transfer is one example. This heat transfer augmentation with additional power is called "active method". This method has a tendency to increase operation cost. Another way to increase the ability to transfer heat is called "passive method". It attaches vortex generators such as ribs (Eiamsa-ard et al. (2019)), baffles (Phila et al. (2020)), twisted tapes (Piriyarungrod et al. (2018)), etc. to create vortex flows/swirling flows. The passive method has been adopted extensively because it does not increase operation cost. Hence, this study also selects the passive method as a mean to increase thermal performance.
Much work has been done to examine the use of passive methods to enhance thermal performance of various types of heat exchanges. For examples, examined the heat transfer enhancement in a square section attached with V-shaped ribs and nanofluid. The 45 o V-shaped rib was compared with the 60 o V-shaped rib. They found that the 45 o V-shaped rib resulted in lower entropy than that of the 60 o Vshaped rib and concluded that the bigger rib height with smaller pitch spacing performed lower exergy destruction and increased the second law efficiency. Bahiraei et al. (2019) numerically studied the thermohydraulic performance of Cu-water nanofluid in a square duct placed with 90 o V-shaped ribs. The effect of rib configurations: rib heights and rib pitches were compared. They showed that the Nusselt number increased by 28.3% when the rib height was increased from 2.5 to 7.5 mm with the pitch distance of 50 mm. Matsubara et al. (2020) performed direct simulation of entry regime heat transfer in a channel attached with ribs at Re = 20,460. They found that the disturbed thermal boundary enhanced heat transfer at the entry regime of the ribbed channel. Li et al. * Corresponding author. Email<EMAIL_ADDRESS>(2020) studied heat transfer and flow characteristics in a microchannel with solid and porous ribs. The thermal performance in the microchannel attached with the ribs was found to be greater than the mirochannels with no ribs. examined nanofluid in a channel equipped with conical ribs using the second law analysis. The influence of rib arrangements and nanoparticle shapes on flow and heat transfer patterns were measured. Jiang et al. (2020) investigated the fluid flow and heat transfer mechanisms of two-phase flow in a rectangular section with column-row-ribs. studied the effect of ribs on heat transfer at the entrance of a pin-fin array (Re = 7000 -40,000). Three configurations of ribs: 60 o rib, V-shaped rib and W-shaped rib, were assessed. They found that the entrance effect not only enhanced the heat transfer rate, but also decreased the pressure drop. Li et al. (2019) evaluated pressure loss and heat transfer of turbulent flow in a channel with miniature structured ribs on one wall. Their results showed that the averaged Nusselt number and the overall Nusselt number are 2.2 -2.6 and 2.9 -3.3 times, respectively, greater than those of the smooth channel. numerically studied pressure loss and heat transfer of a pin-fin array attached with rib turbulators. They reported that the rib produces the secondary flow which accounts for heat transfer enhancement. They also found that the 90 o rib performs the best. Jedsadaratanachai et al. (2015) simulated the thermo-hydraulic performance in a circular pipe fitted with 45 o V-baffles. The V-baffles in the form of inline arrangement were attached in the pipe. The effects of blockage ratios and flow directions in the Re range of 100 -2000 (laminar regime) were examined. It was found that the disturbed thermal boundary layer enhanced heat transfer and thermal efficiency in the pipe. The maximum thermal enhancement factor was 3.2. Jedsadaratanachai and Boonloi (2014) selected 30 o double V-baffles to augment heat transfer coefficient in a square channel. The effects of blockage ratios and pitch spacing in the laminar region with Re between 100 and 1200 were examined. They found that the increased blockage ratio with the declined pitch spacing led to the augmentation of the heat transfer rate.
Frontiers in Heat and Mass Transfer
Available at www.ThermalFluidsCentral.org 2 Boonloi and Jedsadaratanachai (2016) studied the forced convective heat transfer and pressure loss of a square channel with discrete combined baffles. The impact of flow path and baffled height for the thermal performance were examined in the Re range of 5000 -20,000. They reported that the heat transfer in the channel with the discrete combined baffle was 2.8 -6 times better than that of the smooth channel. Boonloi and Jedsadaratanachai (2015) reported that the 30 o and 45 o wavy ribs in a square channel had the best thermal performance of 1.47 and 1.52, respectively, at Re = 3000. This research focuses on the V-ring in the round pipe heat exchanger as the V-shaped turbulators have been found to increase heat transfer performance (Jedsadaratanachai and Boonloi (2017), Boonloi and Jedsadaratanachai (2019a, b), Boonloi and Jedsadaratanachai (2018a, b), Boonloi and Jedsadaratanachai (2014)). The V-ring and discrete V-ring are attached in the round pipe heat exchanger to change the flow structure. The V-ring and discrete V-ring will generate vortex flows which in turn disturb the thermal boundary layer on the tube wall. The disruption of the thermal boundary layer can significantly increase heat transfer and thermal efficiency. The configurations, sizes and placement of the V-ring and discrete V-ring in the round pipe heat exchanger are analyzed. The numerical investigation is chosen to study fluid flow and heat transfer patterns. Insights about the mechanisms in the heat exchanger is the key to thermal performance improvement in heat exchangers.
PHYSICAL DOMAIN OF THE ROUND PIPE ATTACHED WITH VARIOUS V-RINGS
The round pipe heat exchangers attached with three V-ring types are depicted in Fig. 1, while the periodic modules are shown in Fig. 2. The general configuration of the V-ring with an additional bar at the middle is called "type A". The additional bar can help to improve the rigidity of the turbulator when inserted in the tested section. The bar will also enhance the flow mixing. The discrete V-rings are known as "type B" and "type C". The purposes for the creation of type B and C discrete Vrings are to facilitate the turbulent blending of the fluid flow and reduce pressure loss. The reduction of pressure loss and the increment of turbulent mixing may enhance thermal efficiency. The tube diameter for all models is fixed at 0.05m. The V-ring height is represented with "b". The ratio of the V-ring height to the round pipe diameter, b/D, is varied in the range of 0.05 -0.20. The middle bar at the central line of the tested section is fixed at 0.05D in all configurations. The middle bar should not be larger than 5% of the tube diameter because it may bring high pressure loss. The flow attack angle of the V-ring is set at 30 o for all domains. According to past research, this angle provides optimum values of both heat transfer and pressure drop in the tested section , Boonloi and Jedsadaratanachai (2015)). The periodic length of the round pipe attached with various V-ring types is "L" and set to be equal to the round pipe diameter, L = D. The length between the V-rings, P, is set to be equal to the tube diameter (P/D = 1).
The flow directions in the tested tube are V-Downstream and V-Upstream. The flow setting with V-apex directing downstream is called "V-downstream", while the reverse setting is called "V-Upstream". The turbulent flow region with the Re number between 3000 and 20,000 is selected in this study.
ASSUMPTIONS, INITIAL CONDITION AND BOUNDARY CONDITIONS
Numerical analysis of three-dimensional, steady state fluid flow and heat transfer in the round tube heat exchanger attached with V-rings is performed under the following assumptions.
Forced convective heat transfer in the tested section is considered, while radiation heat transfer and natural convection are ignored.
The tested fluid is air and its thermal properties are assumed to be constant at the average bulk mean temperature (300K).
Body force and viscous dissipation are negligible. Both inlet and outlet boundaries are assigned with a periodic condition.
Constant heat flux along the tube wall is set at 600 W/m 2 , while the V-ring is set as an insulator.
A no-slip boundary condition is applied for all surfaces.
MATHEMATICAL FOUNDATION AND NUMERICAL METHOD
Based on the assumptions, the governing equations are presented below. Energy equation: Continuity equation: The Reynolds-averaged approach to turbulent modeling requires that the Reynolds stresses, illustrates the Boussinesq hypothesis which relates the Reynolds stresses to the mean velocity gradients.
K is turbulent kinetic energy, which can be determined by ij is a Kronecker delta. An advantage of the Boussinesq hypothesis approach is the relatively low computational cost associated with the computation of the turbulent viscosity ( ). The RNG k-ε turbulent model is an example of the two-equation models that use the Boussinesq hypothesis. The RNG k-ε model is derived from the instantaneous Navier-Stokes equation using the "renormalization group" (RNG) method. The steady state transport equations are presented in Eqs. (6) and (7) ( 3 Fig. 1 The round pipe heat exchanger attached with various V-rings.
The governing equations were discretized by the QUICK scheme together with SIMPLE pressure-velocity decoupling algorithm. The numerical problem of the round pipe attached with V-rings is solved with the finite volume method. When the normalized residual values were less than 10 −5 for all variables, and less than 10 −9 for the energy equation, the solutions were known as convergent.
The important dimensionless variables: Reynolds number, friction factor, local Nusselt number, average Nusselt number and thermal performance enhancement factor are shown in Eqs. (10), (11), (12), (13) and (14), respectively: The Nusselt number and friction factor for the plain pipe tube are represented with Nu0 and f0, respectively.
NUMERICAL VALIDATIONS
The numerical validations or the model validation are important for the numerical investigation. The validation results ensure accuracy of the numerical results. The numerical validation of the present work can be divided into 4 parts: smooth pipe validation, grid independence, compared with the experimental results and periodic condition test. Fig. 3 shows the plots of the smooth pipe validations on heat transfer and friction loss. The Nusselt number and friction factor of the present investigation are compared with the correlations (Cengel and Ghajar (2015)). The Nusselt number and friction factor correlations of the smooth tube are presented in Eqs. (15) and (16), respectively. The difference between the correlations and the results from this study, namely, the Nusselt number and the friction factor values, was less than 10% and 3%, respectively. 8 Therefore, for the sake of computational time and result accuracy, the grid cell number of 180000 is chosen for all numerical models of the round pipe heat exchangers attached with the V-rings.
In Figs. 4a and 4b, the Nusselt number and friction factor from the computational domain of the round pipe heat exchanger attached with the general type of orifice (d/D = 0.5, P/D = 6) are compared with the experimental results (Kongkaitpaiboon et al. (2010)). The Nusselt number and friction factor values from our numerical simulation differ from the experimental results by 10.7% and 14%, respectively.
Because the periodic boundary is applied to the inlet and outlet boundaries, the periodic concepts on flow and heat transfer for the round pipe heat exchanger attached with the V-ring are checked. Figs. 5a and 5b show the plots of the u/u0 and Nux/Nu0 vs the Reynolds numbers, respectively, for the round pipe heat exchanger attached with the type A V-ring (b/D = 0.15, V-Downstream, Re = 10,000). The periodic profiles of velocity and heat transfer can be found at the 4 th and 3 rd module, respectively. This suggests that the periodic boundary is an appropriate condition for the round pipe attached with V-rings.
Based on the results in this section, it can be summarized that the created model of the round pipe heat exchanger attached with V-rings is a reliable mean to determine friction loss, heat transfer, thermal efficiency and mechanisms in the tested section.
NUMERICAL RESULTS
Discussions of numerical results are divided into two sections. First, the mechanisms in the round pipe attached with the V-ring are explained. Insights into the flow and heat transfer profiles in the heat exchanger pipe is crucial for suitable configurations of the V-rings to improve heat transfer. Second, pressure drop, thermal efficiency and the heat transfer rate in the heat exchanger pipe attached with the V-rings are assessed by dimensionless parameters: friction factor, thermal enhancement factor and Nusselt number, respectively.
Flow and heat transfer patterns
In this section, the streamlines in y-z axis and longitudinal airstreams are selected to describe the flow configuration in the round pipe heat exchanger attached with the V-rings. The fluid temperature distributions in y-z planes and local Nusselt number distributions on the pipe surface are simulated to show the heat transfer behavior in the tested tube. In the type A model, four major vortex cores can be observed in both flow directions. The symmetric flow at the upper-lower part and left-right part is clearly seen due to the symmetric configuration. Changing flow direction causes the vortex rotation to be in the opposite direction. In the case of the V-Downstream, the small vortices which are created by the additional bar are obviously observed in all transverse planes.
In the type B model, the four main vortex cores are seen for all directions along the tested section. The symmetric flow at the upper and lower regime is found. The flow rotation changes when changing flow directions. The additional bar can also produce small vortices in the tested section. In the type C model, the V-rings create four vortex cores along the tested section for both flow directions. The symmetric flow at the left and right section is observed. However, the small vortices, which are produced by the additional bar, cannot be detected. The fluid flow in the pipe heat exchanger attached with the V-rings for the type A, B and C is shown in Figs. 7a, 7b and 10 the tube wall. The impinging flows near the tube surface reduce the thermal boundary layer thickness.
In the type A model, the reduction of the red contour is found at the left-right zone for the V-Downstream, but for the V-Upstream, it is seen at the upper-lower regime. In the type B model, the disturbance of the thermal boundary layer for both arrangements follow a similar pattern.
In the type C model, the lower red layer is seen at the left-right section for the V-Downstream, but for the V-Upstream, it is seen at the upperlower section. The comparison of the temperature distributions in y-z axis for each plane at various cases can be seen in Fig. 10. The variation of heat transfer characteristics in the tested pipe is a result of different flow structure caused by various V-ring types. The local Nusselt number distributions on the heat transfer surface are another indicator of the impingements of the vortex flows. Figs. 11a, 11b and 11c illustrate the local Nusselt number distributions on the tube wall of the round pipe heat exchanger attached with the type A, B and C V-rings, respectively. These figures show that heat transfer is higher in the pipe heat exchanger with the V-rings compared with all the other cases with the smooth pipe. The impinging regime on the pipe wall is seen in all types of V-rings and flow arrangements. An increase in the Vring height enhances the vortex strength, which in turn leads to greater heat transfer. For all types of V-rings and flow arrangements, the lowest heat transfer is observed in the condition which b/D = 0.05, whereas the highest heat transfer is observed in the condition which b/D = 0.20. Regardless of V-ring types, the maximum heat transfer occurs at the leftright section for the V-Downstream, but for the V-Upstream, it occurs at the upper-lower section. This shows that the V-ring can induce the longitudinal flows which impinge at the left-ring part of the tested section for the V-Downstream, but for the V-Upstream, this happens at the upper-lower section. The plots of flow structures and heat transfer characteristics of different configurations of V-rings in the tested section cannot be used to conclude vortex strength and heat transfer. Therefore, the value of Nusselt number is chosen to determine heat transfer. Heat transfer depends on the vortex strength and flow mixing. The friction factor is selected to assess pressure loss in the tested section. Because the attachment of the V-rings in the tube heat exchanger not only enhances heat transfer but also reduces friction loss, the thermal enhancement factor at similar pumping power is presented to discuss the benefit of the V-rings. The friction factor, Nusselt number and thermal enhancement factor values are presented in the next section.
Thermal performance analysis
The thermal performance analysis in the round pipe heat exchanger attached with various types of V-rings is divided into three sections: heat transfer, pressure loss and efficiency. The heat transfer rate of the tested tube can be presented in terms of Nusselt number ratio (Nu/Nu0), while the pressure loss and efficiency can be presented in the form of friction factor ratio (f/f0) and thermal enhancement factor (TEF), respectively. The Nu/Nu0 versus Re at different b/D values for all V-ring types is plotted in Figs. 12a, 12b and 12c. In the case of V-Downstream, the Nu/Nu0 in the tested tube is 2.42 -5.10, 2.48 -4.08 and 2.19 -5.67 for type A, B, and C V-rings, respectively. In the case of V-Upstream, the Nu/Nu0 is 2. 93 -5.66, 2.88 -5.30 and 2.89 -5.73, for type A, B, and C V-rings, respectively. In the case of type A V-ring, Nu/Nu0 decreases with an increase in Reynolds numbers, except when b/D = 0.05 for the V-Downstream. When 0.05 ≤ b/D ≤ 0.10, greater heat transfer is observed in the V-Upstream compared with the V-Downstream for all Reynolds numbers. At b/D = 0.15, the V-Upstream, compared with the V-Downstream, results in larger Nusselt number when Re < 10,000, but lower when Re > 10,000. At b/D = 0.20 and Re = 3000, the V-Upstream provides greater heat transfer than the V-Downstream.
In the case of type B V-ring, the heat transfer rate reduces with an increase in Reynolds numbers, except for b/D = 0.05 for the V-Downstream. At b/D = 0.05, the Nusselt number of the V-Upstream is greater than the V-Downstream for all Reynolds numbers. At b/D = 0.10, the Nu/Nu0 of the V-Upstream is greater than the V-Downstream when Re < 17,000, but slightly lower when Re > 17,000. At b/D = 0.15, heat transfer in V-Upstream is greater than the V-Downstream when Re > 13,000, but slightly lower when Re ≥ 13,000. At b/D = 0.20, the V-Upstream results in greater heat transfer compared with the V-Downstream when Re > 17,000.
In the case of type C V-ring, the Nu/Nu0 decreases with an increase in Reynolds numbers, except for b/D = 0.05 for the V-Downstream. At b/D = 0.05 and 0.10, the V-Upstream results in greater heat transfer compared with the V-Downstream for all Reynolds numbers. At b/D = 0.15, the Nu/Nu0 of V-Upstream is greater than the V-Downstream when Re > 17,000. At b/D = 0.20, Nu/Nu0 for both directions are found to be similar.
11
The improvement in heat transfer rate of the tested section is due to the optimum helical pitch length of the vortex flow or swirling flow. Moreover, the vortex strength and the disturbance of the thermal boundary layer on the heat transfer surfaces are main factors for the enhanced heat transfer rate and thermal performance.
Figs. 13a, 13b and 13c show the variation of f/f0 with the Reynolds number at various ratios of b/D and flow directions for type A, B and C, respectively. As shown in the figures, f/f0 increases with an increase in Reynolds number and b/D for all cases. The highest friction loss value is observed when b/D = 0.20, but when b/D = 0.05, the opposite trend is observed for all V-ring types and flow arrangements. In the case of the V-Downstream, the attachment of the V-rings, compared with the plain pipe, produces 4.66 -100.68, 4.17 -63.71 and 3.28 -68.38 times higher friction loss for type A, B and C, respectively. As for the V-Upstream, the addition of the type A, B and C V-rings, compared with the plain pipe, produces 5. 00 -96.55, 4.73 -74.21 and 4.35 -104.11 times more friction loss, respectively. The V-rings with the V-Upstream, compared with the V-Downstream, increases more pressure loss, except for type A when b/D = 0.20. The differences of friction factors for type B and C, especially at high b/D value, can be observed.
12
The relationships between the TEF and the Reynolds number at different b/D and Re values are plotted for type A, B and C V-rings in Figs. 14a, 14b and 14c, respectively. As shown by the plots, the TEF decreases with increasing Reynolds numbers for all cases. In the case of the V-Downstream, the optimum TEF is observed when b/D = 0.10. The TEF observed are 1.66, 1.68 and 1.95 for type A, B and C, respectively, at Re = 3000. In the case of the V-Upstream, the optimum TEF is observed when b/D = 0.05. The TEF observed are 2.98, 2.74 and 3.10 for type A, B and C, respectively, at Re = 3000. The results suggest that in the case of V-Downstream and V-Upstream when b/D = 0.10 and 0.05, respectively, the optimum ratio between the Nusselt number and friction factor can be obtained at similar pumping power.
Figs. 15a, 15b, and 15c show the plots comparing Nu/Nu0, f/f0 and TEF obtained from the present results with those from the published works. As shown in the figures, the Nusselt number of the present work is not larger than other types of vortex generators, but the friction loss of the present work is very low, especially at b/D = 0.05. Therefore, the Vrings (types A, B and C at b/D = 0.05 of V-Upstream) offers the highest TEF as compared with the published works.
CONCLUSION
The numerical analysis of flow topology and heat transfer in the round pipe heat exchanger attached with various configurations of the V-rings is performed. The flow blockage ratios, V-ring types and flow directions are considered for the turbulent flow regime with the Reynolds number based on the inlet condition ranging between 3000 -20,000. The conclusion drawn from the numerical results is as follows.
The round pipe heat exchange with V-rings induces greater heat transfer compared with the pipe with no V-rings. The V-rings produce vortex flows which disturb the thermal boundary layer. The thermal boundary layer disturbance leads to the enhancement of heat transfer. The vortex flows also help to improve fluid blending in the pipe, another contributing factor to heat transfer improvement.
In view of heat exchanger design, to achieve the optimum TEF, the ratio of b/D = 0.10 and 0.05 of the V-rings are recommended for the V-Downstream and V-Upstream directions, respectively. In the present study, the best TEF is found to be approximately 3.10 for type C at Re = 3,000 and b/D = 0.05. Compared with the pipe without V-rings, heat transfer is approximately 2.19 -5.73 times greater, depending on b/D, Re, flow directions and V-ring types.
The V-rings also provide higher TEF than other types of vortex generators. Moreover, the structure of the V-rings makes them easy to install and maintain in round tube heat exchangers. | 5,805.8 | 2022-01-27T00:00:00.000 | [
"Physics",
"Engineering"
] |
The simplest description of charge propagation in a strong background
Exploiting the gauge freedom associated with the Volkov description of a charge propagating in a plane wave background, we identify a new type of gauge choice which significantly simplifies the theory. This allows us to develop a compact description of the propagator for both scalar and fermionic matter, in a circularly polarised background. It is shown that many of the usually observed structures are gauge artefacts. We then analyse the full ultraviolet behaviour of the one-loop corrections for such charges. This enables us to identify and contrast the different renormalisation prescriptions needed for both types of matter.
Introduction
Very early in the development of quantum electrodynamics, QED, it was understood that the interaction of light with matter was best described in a way that introduced extra, unphysical, degrees of freedom, [1] [2]. The expected two components of the photon, at each spacetime point, were embedded in the four components of the vector potential, as these were needed to formulate the interaction 1 with matter. The recovery of physical results then followed from the gauge invariance of QED. Gauge fixing allows for a direct recovery of the physical dynamics of the theory. This is the case both for interactions in the vacuum and in a background. For example, in the Volkov description of the propagation of matter through a plane wave background, [3], there is also an implicit gauge fixing for the background field.
Counting degrees of freedom in such gauge theories is complicated by the Lorentzian signature of spacetime. The naive expectation would be that two gauge fixing conditions are needed to remove the two extra degrees of freedom, but in practice a single covariant gauge suffices to define photon propagators and hence S-matrix elements. Unitarity arguments can then show that suitably defined crosssections between appropriate states correspond to physical results with the correct degrees of freedom.
For propagation in a background, the gauge freedom in describing the background potential is, as noted above, often implicit in the formalism. That is, the explicit form of the potential implies that a gauge fixing condition has been used. So for the plane wave situation described by the Volkov solution, the scalar product of the null momentum, pointing along the beam, with the background potential vanishes. This is essentially a light cone gauge choice for the potential. There is still some residual gauge freedom in the choice of the background potential, but there is no fundamental requirement for adding an additional gauge fixing condition on the background. However, the Volkov solution is very complicated and disentangling physics from flotsam is a challenge. In this paper we will argue that a specific choice of additional gauge fixing on the background can significantly simplify the description of both the classical and quantum propagation of a charge through the background. We shall see that this holds for both weak and strong backgrounds, and leads to clear renormalisation conditions on the fields and physical parameters. This will be shown for both scalar and fermionic matter, and in this way we will be able to highlight and contrast some of the simple results found here for the renormalisation of both theories through the use of our additional gauge fixing condition on the background.
The approach taken here is perturbative in the strength of the background, and in that way we can build upon the familiar and precise language of perturbative quantum field theory. We will thus be able to explicitly introduce counterterms and renormalise using standard field theory constructions. That this can be done for both types of matter and for both weak and strong backgrounds, points to the great utility of imposing our additional gauge fixing condition on the background. This paper will take the background to be circularly polarised, as that choice will lead to the simplest possible expressions for the propagator, especially in the context of scalar matter.
The plan of this paper is to first discuss, in section 2, the background gauge freedom and introduce the additional, momentum gauge choice on it. Then, in section 3, we couple the background to scalar matter. The great utility of the momentum gauge is demonstrated here as we will be able to present a full and self contained account of the propagation, and its one-loop ultraviolet corrections, of such matter in both a weak and strong background. As well as being of great interest in its own right, this will allow us to introduce some of the key arguments and notation that will then be refined when we treat the case of fermionic matter in the rest of the paper.
In section 4 we consider fermionic matter, and introduce the key ingredients needed to describe its interaction with the background. Then, in section 5, we derive the full tree level fermionic propagator in the background, and see again how the additional momentum gauge choice greatly simplifies this derivation. The oneloop corrections to the propagator in a weak background are presented in section 6, and these are extended to the full, strong background, one-loop calculations in section 7. Armed with these results, we then go on to discuss the renormalisation of both the scalar and fermionic theories in section 8. We then conclude this paper in section 9, where we also discuss how the approach taken here can be extended to other polarisation choices for the background. Some key technical results are given in appendices.
Background field gauge freedom
The real classical potential, A µ c , describing a circularly polarised background is most conveniently written as the sum of two conjugate fields: where the complex potential is given by Here k µ is the null momentum characterising the plane wave background, a µ 1 and a µ 2 are orthogonal, real, spacelike vectors which satisfy the common normalisation condition that a 2 := a 1 · a 1 = a 2 · a 2 < 0. This complex potential also satisfies the null gauge condition which is equivalent to the two real conditions that k · a 1 = k · a 2 = 0. It should be noted that choosing a particular direction along which the background points introduces both a directional, x, and momentum, k, dependence to the potential A µ , as is clear from the final term in (2). We shall soon see, though, that this potential is essentially the background interaction term in a perturbative approach to the system, and it is for that reason that we suppress its explicit dependence on these variables. However, as discussed in [4], and shown later here at all orders in perturbation theory, this will still lead to a multiplicative, momentum space renormalisation procedure.
It is important to also note that there is still a residual gauge freedom in the potential, since A µ + Λk µ also satisfies the null gauge condition (3), for arbitrary Λ with the same spatial dependence as A µ . In terms of the real potentials, this gauge freedom is a µ 1 → a µ 1 +Λ 1 k µ and a µ 2 → a µ 2 +Λ 2 k µ , where then Λ = 1 2 e(Λ 1 +iΛ 2 )e −ix·k . It is helpful to be a bit more explicit about this residual gauge freedom. If we write k µ = (k 0 , 0, 0, k 0 ), then all the conditions on the real potentials are satisfied by writing where the common amplitude normalisation is a 2 = −α 2 − β 2 , and the sign ambiguity reflects left or right polarisation choices, as discussed in [5].
It is tempting to think of the two parameters α and β in (4) as the natural representation of the true degrees of freedom for the background. Indeed, for light by light scattering, such an identification is sensible. But it is not necessarily the best representation of the background when matter is present. We now introduce a new characterisation of the true degrees of freedom that is much better suited to calculations involving a charge propagating through the background.
Consider a charge of mass m that has associated with it a timelike momentum p µ describing its propagation through the background. This momentum may, or may not, be taken to be on-shell. But, given that k µ is the fixed null momentum associated with the background, we can ensure that p · k = 0. What is more, in this plane wave description, the momentum p is interpreted as an external momentum and thus is not integrated over in any loop calculation associated with the propagation of the charge. So we are able to ensure that p · k will never vanish in both the tree level propagator and its loop corrections.
We now impose an additional gauge condition on the background potential by requiring that, as well as (3), we have In terms of the explicit representation (4), this momentum gauge condition fixes the residual gauge freedom so that For example, if the charge was static, or moving solely along the z-axis, so that p = (p 0 , 0, 0, λp 0 ) with |λ| < 1, then Λ 1 = Λ 2 = 0, and we have the very natural representation mentioned earlier with a µ 1 and a µ 2 only having components in the transverse directions to the background.
But now, suppose the particle was moving along the x-axis. The momentum can then be written as p = (p 0 , p 1 , 0, 0), and we impose the timelike requirement that p 2 = p 2 0 − p 2 1 > 0. Then we find, from (4), that We shall see that these simple examples, and their full timelike extensions, give a computationally efficient way to characterise the background field for such a propagating charge.
Note that the above mentioned static class of representations of the background potential could also be characterised by the additional light cone condition that k · A = 0, wherek points along the dual light cone direction:k µ = (k 0 , 0, 0, −k 0 ). This choice also ensures that k ·k = 0 as it is just the λ → −1 limit of our earlier static class. In applications to light by light scattering, there is also great utility in this additional gauge choice, see for example [6], [7] and [8]. However, in the context of particle propagation, we shall see that focusing on the light cone structure obstructs the rich interplay between the lightlike background and the timelike particle dynamics inherent in this system. Exploiting this will lead to significant computational advantages and clearer physical insight in to this complex but important system.
Before concluding this general introduction to the kinematics of our system, it is worth noting that the above discussion of the momentum gauge choice assumes that it is sensible to talk of the charge as having a given momentum, p. Obviously, in the context of scattering, the momentum will change. Any such measurable scattering is not an ambiguity in the formalism, and the momentum gauge can still be used for at least, say, the incoming particle. However, even in the context of simple charge propagation, with no additional external interactions, the background itself obscures any idea of an unambiguous particle momentum.
So, although we have characterised the charge as having momentum p, the fact that it is propagating in a background means that the actual momentum is ambiguous. More precisely, we should allow for its momentum to be of the form p + nk, where the integer n counts the number of absorptions from the background minus the number of emissions degenerate to the background.
It is important, though, to note that this change in the momentum will not affect the overall gauge fixing conditions being proposed here since, from (3) and (5), we also have (p + nk) · A = 0, for all possible values of n.
We now begin our analysis of the propagation of matter through this background. Although our primary interest is in fermionic matter, we shall start with the much simpler case of scalar matter. Our choice of polarisation and momentum gauge now becomes particularly effective, and the transition to intense backgrounds will be almost immediate. This will be a good test case and help motivate the key definitions needed for the more complex fermionic structures that will be the main focus of this paper.
Scalar matter
The quadratic nature of the Lagrangian for scalar QED means that the matter interacts with photons via either a three or four point vertex. The Feynman rules for these vertex contributions are given by the truncated 1 diagrams, i.e., Green functions with external lines removed as signified by the small bars on them: Dotted lines are used here to represent the scalar propagators while wavy lines correspond to the photons. These rules are equivalent to those derived in, for example, section 6-1-4 of [10]. When the photon is taken to be degenerate with the plane wave background, these vertex terms can be contracted with suitably normalised products of the background potential, (1), to give the background interactions with the scalar matter in terms of either absorptions or emissions along with the various mixed absorption or emission, seagull interactions Our choice of momentum gauge, (5), then greatly simplifies these interactions as both the absorption, (9), and emission, (10), interactions vanish. In addition, our choice of circular polarisation, (2), means that both A · A = 0 and A * · A * = 0, and we quickly see that the only surviving seagull interaction is the momentum conserving, p = p, one with Feynman rule From the discussion preceding (3), we see that the background amplitude satisfies a spacelike normalisation condition, a 2 < 0. It is thus useful to introduce the positive scalar quantity, m > 0, defined by Hence we see that, by using the momentum gauge (5), the sole surviving interaction of the scalar matter with the circularly polarised background is given by the simple seagull term: Note that we have introduced in this last expression a useful, compact notation for the scalar propagator that emphasises its mass dependence, so that Two such seagull interactions are then given by From this we see that multiple interactions with the background are now simple products of these seagulls. So r ≥ 0 such interactions can be represented as Note that r = 0 here corresponds to the scalar propagator P(m 2 ). When r = 1 we will often omit the label, as in (14). Summing this last result over all possible values for r will then describe the physical propagation of the scalar charge through the background. It is important to be able to distinguish the resulting all orders propagation from the usual, perturbative, vacuum propagation of the charge. The convention adopted for a long time, as can be seen in section 105 of [11], was to use a thicker line to represent the propagator in the background. However, there has been a trend in recent years to make visually clearer the distinct types of propagation being considered in these complex systems. This can be seen in, for example, Figure 6 in [12] and Figure 1 in [13], where a double line was used to distinguish propagation in the background.
We hence define the scalar double line propagator of momentum p, in the momentum gauge (5), by Thus we have the simple result, which follows immediately from (17), that So, from this strong field summation of the interactions with the background, it is clear that the only impact of the background on the scalar particle is that its mass has increased to m 2 * := m 2 + m 2 . This is surprisingly simple. Normally this propagator would involve an infinite sum over different poles (sidebands) and, most strikingly, break translational invariance. None of these complications are present due to our momentum gauge choice.
Having constructed the tree level propagator for the scalar particle in our momentum gauge fixed background, we now consider its one-loop corrections and the associated ultraviolet structures. This will require an additional gauge fixing choice to be made, but now its role is to allow for the construction of the photon's propagator within the loop, rather than the description of the charge's propagation through the background.
In order to understand the impact of this gauge choice on the one-loop structure, we will take the photon's propagator, D µν (s), to be in the full Lorentz class of gauges, as given by We recall that this class includes the Feynman gauge, where ξ = 1, and Landau (or Lorenz) gauge when ξ = 0. In addition to gauge fixing, loops require a method to regularise the ultraviolet sector, and for that we adopt dimensional regularisation. So we take the spacetime dimension to be D = 4−2ε, with ε > 0, and introduce a mass scale, µ, to maintain the canonical dimensions for the loop integral and renormalised fields.
As expected, loop corrections to the scalar propagator in the background will contain ultraviolet divergences, but these are now worse than those encountered in the fermionic theory. In addition to the logarithmic divergences, we now also get quadratic ones. The great attraction to using dimensional regularisation is that it deals with these polynomial types of ultraviolet structures in a very efficient way by putting them equal to zero. But some care is needed in doing that as we can also encounter other classes of divergences that can interfere with this prescription.
The simplicity of dimensional regularisation can be seen most dramatically in the seagull loop diagram given by By simple power counting, we see that the loop integral in (21) diverges quadratic in the large s, ultraviolet sector. But, within dimensional regularisation, this integral can be evaluated for some D < 4 and then analytically continued back to four dimensions. The end result is that the integral vanishes. Thus there is no contribution to the propagator from the one-loop term (21).
The non-vanishing one loop correction to the scalar propagator is thus given by the self-energy term where the scalar's self-energy is given at one-loop by and we have suppressed for brevity the i prescription for the poles of the various propagators within the loop.
Again we see quadratic divergences here coming from the s 2 term in the first numerator and the s 2 s 2 factor in the second. But in addition, the second term also produces a subleading, logarithmic divergence. Factorising the numerators so as to cancel terms in the denominators leads to finite parts, which we are not considering in this paper, plus the ultraviolet divergent terms: The important point to note here is that, as with the seagull, the quadratically divergent term is purely ultraviolet in nature, and can thus be robustly set to zero within dimensional regularisation. But the final, logarithmic divergence, tadpole is more subtle as it diverges in both the ultraviolet (regularised by taking ε > 0) and infrared (regularised by taking ε < 0) regimes. Extracting its ultraviolet divergence now leads to a non-vanishing, gauge dependent contribution to the self-energy for the scalar particle. The end result is that the self-energy term (22) has a gauge dependent ultraviolet pole contribution, −iΣ UV s se , which can be written in terms of the mass and inverse propagator as The vertex correction to the lowest order scalar interaction with the background, (14), is then given by where now the loop correction to the induced mass is The extra scalar propagator in the loop here ameliorates the divergences seen in the related self-energy (23), so that we now only have logarithmic terms to deal with. Thus, by naive power counting, both terms in (27) will contribute the same ultraviolet pole, but the second term will have an additional multiplicative factor of ξ − 1. When combined we see that the overall ultraviolet pole is proportional to the gauge fixing parameter ξ. Thus, writing this ultraviolet, double pole correction as −iΣ UV m 2 , we have at one-loop the gauge dependent result that The one-loop ultraviolet results (25) and (28), and the expansion of the double line propagator (18), are the only ingredients needed for building the full, one-loop, ultraviolet corrections to the scalar propagator in the background. This claim might seem surprising as we have only considered a single seagull interaction, but we note that if a loop straddles more than one background interaction, then it is ultraviolet finite by simple power counting. This follows immediately from the extension of (27) to that situation, where the power of (s 2 −m 2 ) in the denominator will then be greater than two. This means that we only need to consider loops spanning single background seagulls in order to extract the ultraviolet terms.
An immediate consequence of this, single seagull within a loop, result is a simple inductive characterisation of the loop corrections to multiple seagulls. We thus have the ultraviolet, loop factorisation identity that, for r ≥ 0, From this it follows by simple induction that if we define ∆(r), for r ≥ 1, by Writing ∆ UV (0) = −iΣ UV s se P(m 2 ), we then see that summing over all such degenerate processes yield the strong field, one-loop corrections Recognising in this expression the double line expansions (18) squared, allows us to write (32) as the double line self-energy: This last result is a very succinct and attractive summary of the ultraviolet, one-loop structure of the scalar QED propagator in our background. Although it was built up perturbatively in the background interactions, it is an all orders result and thus valid for both weak and strong background fields. Through the use of the momentum gauge to remove irrelevant clutter, we have recovered a very simple result that clearly identifies the ultraviolet divergences that need renormalising in this scalar theory. Indeed, we see here a direct and simple link between the double line representation of a loop contribution and the precise algebraic structure of the corresponding Green function.
Having obtained this compact result for the scalar theory in the background which is structurally identical to that found in a vacuum, we can now introduce counterterms and renormalise using the familiar techniques of QED. But, before doing so, we shall first analyse what happens with fermionic matter. We will then return to discuss the renormalisation of both theories in section 8.
Fermionic matter
The impact of the plane wave background on fermionic matter includes a mass shift to m 2 * , first identified in [3] and [14], which was later seen to have a more subtle, matrix structure in [15]. The background also generates an infinite class of sidebands that permeate the theory [16] [17][5] [4], resulting in momentum shifts and additional spacetime phases. We will see, though, that the use of the momentum gauge streamlines the route to the induced mass and also reduces the sidebands to a finite number.
To lay the groundwork for these results, and their one-loop extensions, we will first, in this section, introduce the matrix structures associated with the fermions and identify the key simplifications that follow from the use of the momentum gauge and our choice of polarisation. We will then apply these results in the following sections to both the tree and one-loop description of the full propagator in our background.
The absorption by an electron of a photon from the background is now characterised by the absorption matrix, A, which is given in terms of the complex potential, A µ , by The dual matrix, E, describing the emission of a photon degenerate to the background, is then given by These are the fermionic counterparts to the scalar terms (9) and (10), but they do not now vanish in the momentum gauge. Indeed, these are now the only interactions with the background for the fermion, as there is no equivalent to the seagull term that was central to the scalar theory.
In terms of these absorption and emission matrices, the gauge conditions (3) and (5) imply the anti-commutation results that and Just as we did for the scalar field, it is useful to introduce a compact notation for the fermionic propagator, but now it needs to incorporate the degeneracy induced by the background that was alluded to earlier. We thus define, for integer n, the shifted fermionic propagator, P(m) n , by Diagrammatically, these fermionic propagators will be represented by the usual plane line. Thus the fundamental absorption process from the background is now given by while the emission process to the background is 13 The linear mass dependence and the extra subscript in (38) will help to distinguish this propagator from the scalar one, P(m 2 ), introduced in (15). Obviously, but very importantly, the big technical difference between P(m) n and P(m 2 ) that must be kept in mind is that the fermionic one is a matrix.
The algebraic complexities of the fermionic theory mean that it will often be useful to abbreviate this fermionic propagator by suppressing the explicit mass term, as in P(m) n → P n . Indeed, we will use this condensed notation for the fermionic propagator in what follows and only reintroduce the more explicit form after equation (69).
From our discussion of the scalar theory, we know that the propagator P 0 describes both the usual free propagator of momentum p and also the propagator where the number of absorptions equals the number of emissions. Obviously, a similar degeneracy will arise for each propagator, P n . But, in addition, a mismatch between the number of emissions and absorptions will result in a shift in the value of n. This process is described using the interactions (34) and (35) in the standard way, by considering the vertex term P n+1 AP n for an absorption and its dual P n EP n+1 for an emission. However, what is not standard from a field theory point of view is the fact that these interactions can be rewritten as the difference of two distinct propagators, and that this holds at all orders in the background interactions. This Ward type of identity leads to the sideband description of the charge that was first described in [16] and further refined in [4].
In a perturbative framework, the emergence of sidebands is simply a partial fraction expansion of the absorption, (39), and emission, (40), interactions. This quickly leads to the key absorption and emission identities that P n+1 AP n = IP n − P n+1 I and P n−1 EP n = P n−1 O − OP n .
The existence of this partial fraction decomposition only relies on the light cone property of k µ and the null gauge condition (3). But the form of the 'In' factor, I, and the 'Out' factor, O, is sensitive to the momentum gauge choice, (5). We now find that while its dual 'Out' matrix is We have now introduced all the basic variables needed to build up a description of the electron propagating through the background. The way to proceed, that makes the transition to the loop corrections most transparent, is to now shift from the language of absorptions, (34), and emissions, (35), to that of the 'In', (42), and 'Out', (43), factors. This perturbative refocussing of the formalism for fermions will be the topic of the next section. Prior to embarking on that, though, it is useful to conclude this section with a summary of the key new simplifications that follow from our choice of circular polarisation, (2), and the additional momentum gauge condition, (5).
The immediate impact of using a circular polarisation is that polynomials in the interactions become trivial. In particular, we have already noted that from (2), A · A = A * · A * = 0. This means that the scalar term v and v * that play an important role in the full elliptical class of polarisation, see [5], now vanish: In terms of the absorption and emission matrices, these polarisation dependent simplifications become A 2 = 0 and E 2 = 0 .
These last two results are easily extended using the momentum gauge conditions (37), so that A / pA = 0 and E / pE = 0 .
Written in terms of the propagators (38), these last identities become AP n A = 0 and EP n E = 0 .
For the 'In' and 'Out' factors we have similar algebraic properties to (45), but now these are polarisation independent and just follow from the momentum gauge choice, so that I 2 = 0 and O 2 = 0 .
In addition, the momentum gauge choice also implies the polarisation independent, trivial mixed products of these factors: The choice of circular polarisation combines with the momentum gauge to now give momentum insertion identities similar to (46), namely I / pI = 0 and O / pO = 0. These simple results have two very important refinements that will be repeatedly used and extended in our analysis. If we first view the momentum term here as coming from the propagator (38), then we have It is important to note that the mixed identities in (49) are not reflected in the product of an absorption and an emission matrix. Indeed, from (2), we have A * · A = 1 2 e 2 a 2 , which does not vanish. This result is actually polarisation independent, see [5]. In terms of the absorption and emission matrices, we write this last key identity as where the oxymoronic, 'mass null vector' is defined by Note that from the scalar mass definition (13), we have the vector mass identity that m 2 = 2p · M.
Finally, we note that the propagator identities in (50) have the immediate mass insertion generalisations that, for r ≥ 0,
Tree level propagation
Loop corrections to the propagation of the charge are most readily introduced via a perturbative formulation of the tree level results. We shall now develop such a description, taking full advantage of the simplifications that follow from using the momentum gauge. Armed with these results, we shall then be ready to add one-loop corrections to these all orders interactions with the background.
Building upon our definition of the scalar double line propagator, (18), we define the fermionic double line propagator, of momentum p, to be the sum over all possible tree level, perturbative interactions with the background that start with momentum p, The notation being introduced here is the generalisation of the scalar result (18) to the situation where we have both absorptions and emissions from the background. So the sum is now over all processes with r 1 absorptions and r 2 emissions, degenerate to the background. In contrast to the scalar theory, each such process will in general correspond to multiple Feynman diagrams. Note that momentum conservation implies that the out going momentum is p = p + (r 1 − r 2 )k. The single absorption process, (39), but in which the charge with initial momentum p absorbs a background photon, is given in terms of the propagator, (38), by the vertex contribution P 1 AP 0 . Thus we have, from (41), the sideband representation of this interaction: In a similar way, the single emission process, (40), becomes If we now consider two absorptions, then we get from (47) the vanishing result that 2 0 This last example can be easily generalised so that if the difference between the number of absorptions and emissions is greater than one, then the contribution to the double line propagator, (57), vanishes: This key vanishing result follows from both our choice of momentum gauge and polarisation. The proof is straightforward since all perturbative contributions in (61) must now include parts where we have either two consecutive absorptions or two consecutive emissions, separated by an appropriate propagator P n . These then vanish by the identities (47). From the vanishing result (61), we see that the only other non-vanishing contributions to the propagator (57) arise when the absorptions alternate with the emissions. The lowest order terms of this form are given by the the processes whereby the electron first absorbs a photon and then emits back into the background, or first emits then absorbs from the background: P 0 AP −1 E + EP 1 A P 0 . The central factor here can be seen as a propagator insertion into the mass term (52). One quickly finds that the momentum gauge implies the polarisation independent result that Hence we see that This is the fermionic version of the scalar seagull term (14). Again we note that the fermionic theory has even sidebands associated with this central term.
The lowest order results (58), (59) and (63) can now be extended to all orders in the background interaction. Key to that extension is the following factorisation result, that is derived in Appendix A, valid for r 1 ≥ 0 and r 2 ≥ 0: Given the mass generating term in (63), the factorisation in (64) will spawn background induced mass terms into all the sidebands seen in (58), (59) and (63). Exploiting this factorisation then quickly leads to the results, also discussed in Appendix A, that for all r ≥ 0: and r+1 r+1 Using the vanishing result (61) in the double line definition (57), we see that the double sum over interactions becomes the single sum: The terms being summed over here are explicitly given by the previous key results (65), (66) and (67).
To interpret this representation of the double line propagator for fermionic matter, it is helpful to reinstate the explicit mass dependence of the propagator, so that P n → P(m) n . Mimicking the scalar argument in (19), if the fermionic mass m is now shifted by the matrix term / M, then we get the expansion where in (69) the single propagator term can also be factored out to the left. Using this mass shift identity allows us to rewrite in a very succinct way the all orders tree level result (68) as the core sideband expansion: Note that in this all orders, tree level expression for the fermionic double line propagator, the upper terms have spacetime dependence inherited from the 'In' factor of e −ix·k , the central terms have no such spacetime factors, while the lower terms inherited from the 'Out' factor an e ix·k dependence. It is also useful to note that the fermionic propagator P(m + / M) n in these last few expressions can also be partially written in terms of the scalar mass m, introduced in (13), as 19 This representation makes clear the new pole structure in the sidebands and highlights the fundamental difference in this fermionic theory due to the vector nature of the induced mass term in the numerator. The expression (70) for the all-orders, tree level, fermionic propagator in the background is surprisingly compact, with a very manageable number of core sidebands. In Appendix B we show how this formulation of the fermionic double line propagator relates to the more standard discussions found in the literature.
One-loop correction in a weak background
Having constructed the tree level, double line fermionic propagator in (68) and (70), we now want to incorporate into these sideband expressions their one-loop corrections. Just as for the scalar theory, this will be built up perturbatively over the interactions with the background. So we will start our analysis in this section by considering a weak background and hence looking at the loop corrections to the lowest order background terms introduced earlier: the single absorption (58), the single emission (59), and both processes with a single absorption and emission (63). These lowest order one-loop corrections were first discussed in [4], and extended to the full Lorentz class of gauges in [18]. Now we shall exploit the simplifications that arise due to our momentum gauge choice, (5), to give a more direct account of these results for a circularly polarised background.
Using a simple notational extension of the scalar self-energy term introduced in (22), but now allowing for a sideband momenta of p + nk, we take the fermionic self-energy to be given by the expression Note, though, that by translational invariance, we can focus on the structure of the central sideband here, with n = 0, and then replace p by p + nk for the more general result in what follows. We thus have, within our Lorentz class of gauge fixings for the loop, In contrast to the equivalent scalar version, (23), there are now no quadratic ultraviolet divergences, but there are still linear and logarithmic ones to be identified.
Simple power counting arguments now quickly show that Thus the fermionic version of the ultraviolet scalar result (25), adapted to the n th sideband, is the self-energy expression that An analysis of the vertex corrections in the fermionic theory is more involved than in the scalar theory, due to the associated changes in sideband structures related to whether we have an absorption or an emission or both. To unpick this we start with the vertex correction to the fundamental absorption process (58).
The vertex correction to the absorption of an in-coming background photon is given by where Again, simple power counting arguments quickly show that Recalling our definition of the absorption matrix, (34), this quickly leads to the ultraviolet, in-coming, vertex contribution Using the sideband representation of the absorption matrix, (41), allows us to rewrite this vertex contribution in terms of consecutive sideband self-energies, To understand the significance of the one-loop results, (75) and (80), we now consider the full, one-loop corrections to the single absorption process as given by the two self-energy and one vertex corrections: This we recognise as the naively expected, one loop self-energy corrections to the sidebands in (58).
In a very similar way, the leading one-loop vertex correction to the dual emission process, (59), is given by where the out-going version of the in-coming vertex, (79), and its sideband representation (80), are now Hence we quickly see that the one-loop corrections to the single emission process are: This is also the expected, one-loop self-energy corrections to the absorption sidebands in (59).
Loop corrections to processes with a mixture of emissions and absorptions introduce an additional, but gauge dependent, ultraviolet divergence associated with the background induced mass term, (53). This was first identified at lowest order in the background, for Feynman gauge, in [4] and then extended to the full Lorentz class in [18]. We have already seen here, in expression (63), that the identification of the background induced mass is simplified by the use of the momentum gauge. Now we will see how that gauge also streamlines the discussion of this new ultraviolet correction.
The one-loop corrections to the lowest order mixed absorption and emission process, (63), can clearly spawn simple self-energy terms. But these corrections can also straddle more complex, interaction structures associated with the background. Indeed, the vertex term here is now a mixture of emissions and absorptions, and the loop corrections are thus more involved. Focusing, though, on the ultraviolet structure leads to a simple factorisation of these vertex corrections, the details of which are discussed in Appendix C. One finds The first thing to note about this expression is that the inverse propagator term here does not match the double line propagators that surround these corrections in (33). Under the simple mass shift, P(m 2 ) −1 = P(m 2 + m 2 ) −1 − im 2 , we get the full inverse propagator plus additional m 2 terms that now combine, in a very attractive way, with the mass term to give Note that this last expression can be interpreted as the usual ultraviolet pole of the self-energy for a scalar particle of mass m 2 * = m 2 + m 2 . Now introducing the tree-level, but not free, bare mass m 2 * and the, Volkov field, wave function renormalisation, allows for the expansion in terms of mass, δ m 2 * , and Volkov wave function, δ 2 , counterterms so that From (96) we can now read off the strong field renormalisation conditions that The most striking thing about this result is its simplicity. In terms of the strong field variables, m 2 * and P(m 2 * ) −1 , we have the same multiplicative structure familiar from the scalar matter in a vacuum. If we were not in the momentum gauge, then many additional gauge artifacts would obstruct this simple result.
For the fermionic theory, this argument needs to be repeated in each of the sidebands, with propagator P n , for n either 0 or ±1. We quickly see that for the n th such sideband, Again, the inverse propagator here needs to match the terms multiplying it, so we make the replacement P(m) −1 n = P(m + / M) In contrast to the scalar result above, now we see only a renormalisation of the vacuum mass, m, but the wave function term is still with respect to the full, strong field normalisation.
Thus, in this fermionic theory, we need to introduce just a vacuum mass counterterm, δ m , along with the strong field, wave function term, δ 2 . Then, written in terms of renormalised fields, we have The renormalisation conditions that follow from this and (96) are now The differences revealed in this section in the renormalisation conditions needed for scalar, (98), and fermionic, (102), matter are surprising. That the gauge fixing conditions enter differently is not itself unexpected, but the fact that different classes of counterterms are needed seems unexpected. In particular, the contrast between the full mass counter term for scalars and only the vacuum mass counter term for fermions seems unexpected. It is not clear, a priori, why this should be the case. Especially given the fact that both types of matter have the same, strong field, counterterm structure for the wave function renormalisation associated with the full propagators (19) and (70).
Conclusions
There is huge theoretical and experimental interest in particle physics in an intense laser background. Much of the theoretical work builds upon the Volkov solution, where there is known to be a mass shift and a loss of translational invariance, both induced by the background. The Volkov solution has built into it a gauge freedom. In this paper we have introduced an additional gauge fixing condition on the background, which we call the momentum gauge, which dramatically simplifies the description of charged matter propagation.
For scalar matter, in a circularly polarised background, we have seen in this paper that only one type of background interaction survives in this gauge. This interaction respects translation invariance, and can be easily summed to all orders. As the background non-translational invariance has been gauged away, all the familiar tools from vacuum scalar QED could be deployed. The strong field solution developed here exhibits the background induced mass shift, but none of the normally expected sideband structures, which are revealed here to be gauge artefacts, even at one-loop.
For fermionic matter, a small number of sidebands persist in the momentum gauge. This corresponds to a limited violation of translational invariance. Despite this, we have been able to develop momentum space techniques to construct the propagator and its one-loop, ultraviolet corrections. The background induced mass term has a matrix structure that is common to each sideband. We emphasise that in this momentum gauge, the infinite tower of sidebands has reduced to just seven terms for fermionic matter and only one for the scalar theory.
Our analysis of the renormalisation has further revealed a difference in the counterterm structures needed in both theories. For the scalar matter, (96), we have renormalisation of the shifted mass for the full propagator, and of the residue of the pole at this shifted mass. For fermionic matter, (100), the vacuum mass is renormalised, rather than the shifted mass, but we saw that it is the residue of the shifted mass that acquires a wave function renormalisation.
Our focus on circular polarisation for the background field has led to particularly simple results for both the tree level and one-loop renormalisation of these theories. The most immediate impact of widening the class of polarisations is that the terms v and v * , as defined in (44), no longer vanish, and one will get Bessel functions of these terms as factors in the sideband structure. This was alluded to in equation (109) of this paper. These added effects from the background will impact on our results for both scalar and fermionic matter. However, the evidence from [4] and [18] is that these terms do not acquire any one-loop corrections. We conjecture that this observation will also hold in a strong background for both types of matter. The circular polarisation case considered in this paper seems to represent the simplest configuration that captures the essential physics of the loop corrections in a plane wave background, for both types of matter.
Future work will also include an analysis of the finite parts and the infrared structures associated with charged matter propagating in the plane wave background. For scattering processes the momentum gauge can be applied to one leg, thus simplifying some of the one-loop structures, and the details of this will be presented elsewhere.
Acknowledgements
We thank Tom Heinzl for discussions.
A Perturbative factorisation results
In this appendix we collect together the details of the arguments that lead to the key perturbative factorisation result (64), and then the explicit expressions (65), (66) and (67) that follow from it. The simplicity of these results all depend critically on our choice of gauge and polarisation.
Although the factorisation result has been stated quite generally in (64), the vanishing result (61) means that we only need to consider three non-trivial cases corresponding to a net absorption (r 2 = r 1 − 1), a balanced interaction (r 2 = r 1 ) or a net emission (r 2 = r 1 + 1).
A net absorption means that there will be one extra power of the absorption vertex (34) over the emission vertex (35). The vanishing results (47) means that these vertices must alternate and hence, for r ≥ 0, r+1 r p p + k = P 1 A(P 0 EP 1 A) r P 0 .
Hence we recover the factorisation identity that, for r > 0, The dual emission version of this factorisation identity can be shown in a very similar way. For the balanced case there are now two contributing terms when we have r > 0 absorptions and emissions: (P 0 EP 1 A) r P 0 + (P 0 AP −1 E) r P 0 . But, using the vanishing identities again, this can be written as (P 0 EP 1 A + P 0 AP −1 E) r P 0 , from which the factorisation result immediately follows. These factorisation results now allow for an inductive derivation of the key identities (65), (66) and (67), where the base cases have already been seen in (58), (59) and (63). In fact, we only need to show (67), as the other two then then follow using repeated applications of the factorisation results.
Assuming the identity (67) holds for r ≥ 1 absorptions and emissions, the factorisation result then allows us to write Expanding the right hand side of (106) yields nine potential terms but this quickly reduce to three by using the trivial identities that / MI = / MO = 0, along with (51).
Thus we get Now we can use the identities (55) and (56) to deduce the claimed result (67), for all r ≥ 0.
B Derivation of Ritus matrices
The compact tree level result (70) is of interest in its own right since there are other approaches to this double line propagator that are not based on perturbation theory, and look very different. As a check of the results developed here, we now show how the more familiar Ritus matrices [19] reduce to the terms in (70) for our choice of circular polarisation and use of the momentum gauge. In order to trace the consequences of the assumptions made in this paper, we first consider the more general elliptic class of polarisations, which includes circular and linear polarisation as limiting cases. In equation (44) of [5], a suitable timeordered product of Volkov fields was calculated and, written here in terms of a double line, shown to be equal to p = n,r e i(n+r)x·k J n+r (Ω 1 , v , Ω 2 )P(m + / M) n J n (Ω 1 , v , Ω 2 )e −inx·k . (108) The notation used here is more refined than that used in [5], and is essentially found in the discussion leading to equation (66) of [4]. The normalising functions, J n , are the elliptic class of generalised Bessel functions with first and last arguments Ω 1 = −(I + O ) and Ω 2 = −i(I − O ). The 'In' and 'Out' terms here are written with primes to signify that they do not include the final exponential factors in the complex potential (2), used in this paper. But we note that the terms in (108) are multiplied by the exponential e irx·k , which we have factorised to match the order of the normalising Bessel functions. These factors can then be reabsorbed into Following the discussion related to figure 9 in [4], the ultraviolet pole of the oneloop version of the bracketed expression on the right in (113) can easily be found by the simple algebraic replacement: P n → P n + P n (−iΣ UV f se (n))P n ; A → A − iΣ | 11,671.8 | 2020-11-07T00:00:00.000 | [
"Physics"
] |
Balanced homodyne detection of Bragg microholograms in photopolymer for data storage
Wavelength multiplexed holographic bit oriented memories are serious competitors for high capacity data storage systems. For data recording, two interfering beams are required whereas one of them should be blocked for readout in previously proposed systems. This makes the system complex. To circumvent this difficulty and make the device simpler, we validated an architecture for such memories in which the same two beams are used for recording and reading out. This balanced homodyne scheme is validated by recording holograms in a Lippmann architecture. © 2007 Optical Society of America OCIS codes: (210.2860 ) Holographic and volume memories ; (090.7330) Volume holographic gratings References and links 1. H. J. Coufal, D. Psaltis, and G. T. Sincerbox, eds., Holographic Data Storage, Springer, Series in Optical Sciences (Springer-Verlag, 2000). 2. G. J. Steckman, A. Pu, D. Psaltis, “Storage density of shift-multiplexed holographic memory,” Appl. Opt. 40, 3387-3394 (2001). 3. S. S. Orlov, W. Phillips, E. Bjornson, Y. Takashima, P. Sundaram, L. Hesselink, R. Okas, D. Kwan, and R. Snyder, “High-transfer-rate high-capacity Holographic disk data-storage system,” Appl. Opt. 43, 4902-4914 (2004). 4. K. Anderson and K. Curtis, “Polytopic multiplexing,” Opt. Lett. 29, 1402-1404 (2004). 5. H. Fleisher, P. Pengelly, J. Reynolds, R. Schools, and G. Sincerbox, “An optically accessed memory using the Lippmann process for information storage,” Optical and Electro-Optical Information Processing (MIT Press, 1965). 6. S. Orlic, S. Ulm, and H.-J. Eichler, “3D bit-oriented optical storage in photopolymers,” J. Opt. A: Pure Appl. Opt. 3, 72–81(2001). 7. A. Labeyrie, J. P. Huignard, and B. Loiseaux, “Optical data storage in microfibers,” Opt. Lett. 23, 301-303 (1998). 8. R. R. McLeod, A. J. Daiber, M. E. McDonald, T. L. Robertson, T. Slagle, S. L. Sochava, and L. Hesselink, “Microholographic multilayer optical disk data storage,” Appl. Opt. 44, 3197-3207 (2005). 9. I. Sh. Steinberg, “Multilayer recording of the microholograms in lithium niobate,” in Photorefractive Effects, Materials and Devices, Vol. 99 of OSA Trends in Optics and Photonics Series (Optical Society of America, 2005), 610–615. 10. M. Dubois, X. Shi, C. Erben, K.-L. Longley, E.-P. Boden, and B.-L. Lawrence, “Characterization of microholograms recorded in a thermoplastic medium for three-dimensional optical data storage,” Opt. Lett. 30, 1947-949 (2005). 11. G. Maire, G. Pauliat, and G. Roosen, “Homodyne detection readout for bit-oriented holographic memories,” Opt. Lett. 31, 175-177 (2006). 12. J.-J. Yang and M.-R. Wang, “White light micrograting multiplexing for high density data storage,” Opt. Lett. 31, 1304-1306 (2006). 13. R. Jallapuram, I. Naydenova, S. Martin, R. Howard, V. Toal, S. Frohmann, S. Orlic, and H.-J. Eichler, “Acrylamide-based photopolymer for microholographic data storage,” Opt. Mater. 28, 1329-1333 (2006). #77432 $15.00 USD Received 28 November 2006; revised 26 January 2007; accepted 29 January 2007 (C) 2007 OSA 5 March 2007 / Vol. 15, No. 5 / OPTICS EXPRESS 2234 14. A. Murciano, S. Blaya, L. Carretero, R. F. Madrigal, and A. Fimia, “Holographic reflection gratings in photopolymerizable solgel material,” Opt. Lett. 31, 2317-2319 (2006). 15. J. M. Bendickson, J. P. Dowling, and M. Scalora, “Analytical expressions for the electromagnetic mode density in finite, one-dimensional, photonic band-gap structures,” Phys. Rev. E 53, 4107-4121 (1996). 16. H. Kogelnik, “Coupled wave theory for thick hologram gratings,” Bell Syst. Tech. J. 48, 2909-2947 (1969). 17. J. Shamir, “Paradigms for bit-oriented holographic information storage,” Appl. Opt. 45, 5212-5222 (2006).
Introduction
Holographic data storage is a serious candidate for the next generation of optical data storage devices with potential capacities exceeding one Terabyte for a 12 cm disk [1].Most studies have been conducted on page-oriented systems.In these systems the holograms are multiplexed inside the volume of the recording material with, commonly, angular, shift or phase multiplexing [1].Each stored hologram represents one page of typically 10 6 pixels.Each hologram being recorded and reconstructed at once, these devices are per se massively parallel [2][3][4].
Bit oriented holographic storage systems are less extensively studied.They nevertheless also present very attractive features such as a better compatibility with conventional surface storage devices [5][6][7][8][9][10][11][12][13][14].Each hologram now represents one bit of data and the recording and readout of these data are commonly achieved bit after bit, although a given amount of parallelism was already demonstrated by simultaneously using several wavelengths [5,12].Furthermore, the capacity of this holographic approach favourably compares with the capacity of the page-oriented approach [8].One-bit holograms are most often recorded by two counterpropagating beams focused inside the medium.This hologram is detected by the Bragg reflection of a reading beam.The wavelength selectivity of reflection holograms allows to record several multiplexed microholograms in the same location, each microhologram being recorded and readout at a given wavelength.In most systems, for recording, only a single beam is sent onto the disk, the counter-propagating beam being provided by a reflection of the reading beam onto a reflective unit placed on the other side of the disk.Most often, this reflective unit is not a part of the disk but lies at a distance beneath the disk [6,8,10].Conversely, in Lippmann structures, this reflective unit is a mirror set in contact with the recording layer.This compact structure increases the stability of the system and reduces the required coherence length of the recording source: recording of such microholograms with a white light source followed by a monochromator has even been successfully demonstrated [5,12].
Nevertheless, whatever the reflective unit is, for most demonstrated micrograting holographic data storage systems, the reflection on this unit should be prevented during readout in order to detect the diffracted beam only.
Alternatively, we recently proposed a homodyne detection scheme in which the reflection onto the reflective unit is not modified during readout [11].This homodyne detection is especially attractive for Lippmann data storage approaches in which the mirror is in contact with the sensitive layer and is thus part of the recording structure (optical disk).Therefore, during data readout, the beam diffracted by the recorded hologram interferes with the beam reflected onto the reflective unit.During the homodyne readout, the hologram modifies the interference state: it may increase or decrease the detected signal or just modify the phase of the optical signal according to the relative phase between the two interfering beams.It should be noted that, besides its simplicity, homodyne detection also increases the amplitude of the detected signal compared to the conventional intensity detection of the diffracted signal.The cost to pay for these advantages is the important DC component of the signal that corresponds to the signal reflected onto the reflective unit.In case of low diffraction efficiencies, fluctuations of this DC component (for instance due to power fluctuations of the laser) could mask the signal variations originating from the weak beam diffracted by the hologram.
We previously validated this homodyne detection in a scheme where the mirror was not glued onto the sensitive layer (a photorefractive crystal in this case) [11].The small air gap between the 50% mirror and this layer allowed us to make the mirror oscillate at frequency ω with an amplitude much smaller than the optical wavelength.With a lock-in amplifier set at frequency ω , we were able to extract the beating signal between the two interfering beams, which makes the detection insensitive to the strong DC component.Although this first demonstration validated the principle of the homodyne detection, it is not transposable to a realistic disk system in which the mirror would be in contact with the sensitive layer.In this communication we investigate a new scheme for homodyne detection that is compatible with Lippmann data disks.
Principle of the balanced homodyne detection
The principle of the detection scheme we investigate in this communication is illustrated in Fig. 1.The Lippmann mirror, whose reflectivity should be below 100%, and is typically about 50% in our experiments, is set in contact with the sensitive layer.The recording beam is focused onto this mirror through the sensitive layer so that it interferes with its reflection on the mirror.These interferences record the Bragg hologram.Several gratings can overlap in the same location by tuning the wavelength of the optical source.For reading out, we propose to probe the grating by the beam used for recording, without modifying the set-up.Two signals, S R and S T , respectively corresponding to the reflected beam and the transmitted beam, are detected.They can be expressed as: with I the reading intensity, α and β proportionality constants taking into account the gain of the photodiodes, R and T the reflectivity and transmission coefficients of the structure "hologram-mirror".In the presence of hologram, R and T differ from the reflectivity R 0 and transmission T 0 of the mirror alone.For a lossless system we get: with ΔR being the variation of reflectivity induced by the hologram.Although not mentioned, ΔR is a function of the readout wavelength.The signal of interest, proportional to ΔR , is easily extracted by performing a balanced detection between the two normalized signals: The two normalization coefficients can be experimentally determined by adjusting the gains of the two photodiodes in order to nullify S balanced in the case when no hologram is present.
Experimental set-up
The experimental set-up is depicted in Fig. 2. It is made of two identical optical heads.Each head is fed by a single-mode polarization-maintaining optical fibre.The beams from the fibres are first collimated by aspheric lenses L 1 (focal length 4 mm) and then focused again by aspheric lenses L 2 (focal length 11mm) onto the Lippmann mirror.The beam waist on the mirror is about 5 μm .We do not use the fibres to collect the beams transmitted and reflected by the Lippmann sample.Indeed, the structure of these beams differs from the nearly-gaussian shape of the mode of the fibres: light coupling of these beams inside the fibres is quite inefficient.Therefore inside each couple of lenses L 1 − L 2 is inserted a beam splitter to respectively extract the reflected and transmitted signals.The Lippmann sample is held on a x _ y piezo-translation stage whose excursion range is 100 × 100 μm 2 .This piezo-translation stage is itself mounted on a rotation stage whose rotation axis is along the y axis and lies in the plane of the Lippmann mirror.This rotational motion is used to probe the angular dependence of the hologram diffraction efficiency.The Lippmann sample is shown in Fig. 3.The hologram is recorded by photopolymerization.This photopolymerizable layer is a solvent-free mixture of a photoinitiator (Eosin Y), of an amine as a cosensitizer (N-methyl diethanolamine) and of a liquid monomer base (pentaerythritol triacrylate).It is embedded between two glass plates, a substrate and a superstrate.Its thickness, 160 ± 5 μm , is defined by metallic spacers.The external faces of the glass plates are anti-reflection coated to minimize the parasitic reflections that could interfere with the homodyne signal.On its facet in contact with the sensitive layer, the substrate glass plate received a coating to act as a mirror.Its intensity reflectivity is about 60%.For the glass material, we selected BK7 as its refractive index is close to the refractive index of the formulation.This refractive index matching minimizes the Fresnel reflections at the interface polymer-superstrate.The thickness of each glass plate is 1.6 mm .This thickness was selected large enough so that the beams reflected by the residual reflection onto the antireflection coatings do not properly overlap with the beams reflected and transmitted by this mirror.These reflections do not corrupt the homodyne signal.For recording and probing the holograms, we used two Fabry-Perot diodes from Nichia.The beams from these two diodes can be injected inside the two optical fibres to feed the recording set-up either from the rear side or the front side (see Figs. 2 and 3).One diode emits around 473 nm and the other one around 475 nm.Their wavelengths can be slightly tuned with the temperature, but we did not use this possibility as this tuning is not continuous and also as it considerably changes the spectral width of the emitted beam.For the experiments reported below, the diodes usually oscillate on a few longitudinal modes with a spectral width of about 0.5 nm.The coherence length is thus larger than the photosensitive layer thickness, so that the holograms are recorded with a maximum modulation ratio over this whole 160 μm thickness.
Recording of reflection holograms with plane-waves
As we will show in a next section, the sign of the modifications of the R and T coefficients of the Lippmann structure [Eq.(1) and Eq. ( 2)] resulting from the hologram recording with focused beams differ from those obtained with planes waves.To better understand these latter results, we first tested the recording of reflection holograms using "plane-waves" and probed these holograms at various angles to simulate the plane wave components of a focused beam.This first approach also allowed us to determine the best exposition parameters (exposure intensity and time).
In order to record holograms with "plane-waves", we removed lenses L 2 (see Fig. 2) from the optical set-up.Without these lenses, the beams launched from the optical fibres are collimated by lenses L 1 .The waist of these beams onto the Lippmann structure is about ω ≈ 300 μm so that their associated Rayleigh length is about 90 cm (taking into account the refractive index of the photopolymer).These beams can thus be considered as crude approximations of plane waves through the 160 μm of the sensitive layer.
The photopolymerizable formulation we are using suffers from a very large shrinkage that may fully perturb the hologram recording and reading out (ca 10% for photocuring experiments with these acrylates).Usually, optical shrinkage originates from a variation of the material thickness and from a change of the polymer refractive index following the photopolymerization.In our experiment, because of the presence of the thick glass plates, exposing the material over a small area does not change the thickness of the active layer.It just changes its refractive index.
In an independent experiment we previously determined that this shrinkage corresponds to a refractive index change of about Δn max ≈ 0.012 .The consequence of this refractive index change is, at least, two-fold: 1) it modifies the optical wavelength during recording and thus blurs the fringes; 2) it creates a short focal length refractive index lens that alters the beam propagation.
These two effects prevent the efficient recording and readout of microholograms.The first point could be alleviated by using an exposure time much shorter than the complete photopolymerization duration, which is typically of a few seconds.Nevertheless, this refractive index change would still change the Bragg wavelength, modification that cannot be compensated in our set-up.
In order to circumvent these two problems, we used a strong uniform pre-exposition to consume most of the refractive index dynamic range.This pre-exposition is conducted by sending a beam from the rear fibre to the rear-side of the Lippmann structure in order to avoid recording gratings inside the photopolymer (see Fig. 3).The energy incident onto the photopolymer during the pre-exposition is about 0.2 J cm 2 using an optical intensity of about 3 mW cm 2 .From our previous measurements on the photopolymer, we estimate that this pre-exposition consumes most of the available refractive index change.Consequently, we evaluate that the maximum index change that could appear during hologram recording is smaller than Δn expo ≈ 0.1% corresponding to a shift of the Bragg wavelength smaller than Δλ = −λ Δn n ≈ 0.3 nm .Such a shift is smaller than the expected Bragg selectivity.
The pre-exposition beam is then blocked and we proceeded with the recording of the grating by sending a beam through the front-side of the Lippmann structure and at normal incidence.We wrote this grating with a beam of ω ≈ 300 μm radius during 30 s with an optical intensity around 10 mW cm 2 at 473 nm.
Detection of the grating can be achieved either by analyzing the reflection or the transmission of the structure.Similarly, this reflection (or this transmission) can be equivalently probed by the rear-side or the front-side of the structure.It is indeed known that the intensity reflectivities of such periodic structures are the same from both the forward and backward directions [15].For the sake of experimental convenience, we probed this recorded grating by sending the beam from the same laser diode on the rear-side and by detecting the transmitted signal.By doing so, we avoided recording of any further gratings during this readout.
Because of the fixed wavelength of our laser source, we tested this grating by rotating the sample around the y axis and by detecting the transmitted beam versus this rotation angle.The measured signal detected by photodiode PD1 is shown in Fig. 4 versus the angle of refraction.This signal presents a fast oscillating feature that originates from the Fabry-Perot interferences between the homodyne signal and the residual reflection onto the anti-reflection coating of the superstrate.The mean of these oscillations represents the desired homodyne signal versus the angle.
In order to analyze this signal, we modeled the light transmission through the Lippmann structure, using coupled wave equations [16].For this modeling, we assumed a zero optical shrinkage (as a result of the very strong pre-irradiation) and a positive value of the refractive index modulation, i.e. that the bright fringes of the interference pattern correspond to an increase of the refractive index above its mean value.The only fitting parameter we used for this modeling is this amplitude of the refractive index modulation of the grating.We fitted it to the value δn ≈ 1.110 −4 .It is worth noting that this value for δn corresponds to a modification of the transmission of about 6%, while it would have corresponded to a diffraction efficiency of about only 1% if this homodyne detection were not used.These values highlight the signal gain obtained with homodyne detections [11].
Except for the fast oscillating part corresponding to the parasitic Fabry-Perot as explained above, the agreements between the experiment and the modeling are satisfactory.The main features of these curves can be analyzed as follows.
• For large incident angle, the diffraction efficiency of the Bragg grating vanishes because of Bragg selectivity, and the transmission of the sample is equal to the transmission of the mirror alone, about 40%.• At normal incidence, the transmission of the structure is also close to the mirror transmission, and consequently, the reflectivity of the structure is close to the mirror reflectivity.This originates from the π 2 phase difference between the diffracted beam and the reflected beam.This π 2 phase shift comes from diffraction over a refractive index grating which is in-phase with the recording interference pattern.This feature confirms that our pre-exposure schedule makes shrinkage negligible and that the Bragg wavelength is equal to the recording wavelength.
• When the angle starts to depart from 0°, we observe an increase of the signal (corresponding to a decrease of the reflectivity).This increase originates from the additional phase shift resulting from the off-Bragg diffraction process [15].This is an increase of the transmitted signal, which is coherent with the positive sign of the refractive index modulation δn .• The Bragg angular selectivity half-width is slightly larger than the theoretical one, which indicates that the grating is slightly non-uniform through the sample thickness.
All these features confirm our correct mastering of the recording medium.Nevertheless, the Fabry-Perot effect is clearly detrimental to the correct operation of the signal detection.The straightforward way to avoid it is to use focused beams so that the wavefront of the homodyne signal beams (reflected on the mirror and diffracted by the hologram) strongly differs from this parasitic reflected beam.This experiment is the subject of the next section.
Balanced homodyne detection of Lippmann microgratings
In order to get rid of the oscillation of the homodyne signal originating from the spurious reflections, we have been using focused beams.Therefore, lenses L 2 were inserted in the two optical heads.The beam waist on the Lippmann mirror is now about ω ≈ 5 μm.This value corresponds to a Rayleigh length of about 249 μm.This value is larger than the sensitive layer, so that relatively uniform holograms can be recorded over its 160 μm thickness.Nevertheless, it remains much smaller than the superstrate thickness so that the radius of curvature of the beam reflected onto the mirror and the one reflected onto the anti-reflection facet of the superstrate are quite different.This difference of radii of curvature minimizes the detrimental Fabry-Perot effect observed above with plane-waves.
In order to minimize the optical shrinkage, we pre-exposed the sensitive layer as done previously, i.e. from the rear-side of the Lippmann structure.By scanning the optical spot with the piezoelectric translation stage, we uniformly exposed an area of 100 ×100 μm 2 during 145 s and with an optical power of 0.6 μW .Then, from the front-side, we recorded a hologram at normal incidence during 0.5s with an optical power of 320 μW and with the laser diode emitting at the wavelength of 475 nm.This large energy totally consumes the remaining refractive index change.During this recording, the piezo-translation stage is fixed.Subsequently, in order to definitively stabilize the hologram (i.e. to reach full completion of the chemical reaction), we proceeded with a post-irradiation from the rear-side similar to the pre-exposition.
The hologram is then readout from the rear-side with the diode used for recording and set at 475 nm.The transmitted and reflected signals are detected by photodiodes PD1 and PD2 while scanning the spot over an area of 100 ×100 μm For the sake of comparison, we also performed the same readout procedure by using the second laser diode set at a wavelength of 473 nm.The corresponding measurements are plotted in the right of Figs.5(a), 5(b), and 5(c), which represent respectively the homodyne transmitted signal, the homodyne reflected signal and the balanced signal.The small bumps in the corners of the scanning area correspond to the limit of the pre-exposition area and thus to non-uniform exposition zones.They should thus not be considered.
In order to test the reproducibility of our measurements, we repeated several times and at different locations such hologram recordings and readouts.All the obtained results are very similar: using the same wavelength for the readout as the one used for recording, the presence of the hologram always increases the transmitted beam and decreases the reflected beam.
The width of the detected signal results from a convolution of the hologram width with the beam width.In the experiment reported in Fig. 5, we estimate the hologram radius to be around 25 μm .This width is larger than the beam waist and results from the very large energy used to expose the hologram.With lower energies we obtained smaller widths, down to a radius of about 5 μm , but these holograms are more difficult to control and to characterize with our photopolymer.
One clearly sees the presence of the hologram in the left of Fig. 5 whereas the signal is much lower when reading at the smaller wavelength in right of Fig. 5.We also performed other experiments in which the holograms were written at the smaller wavelength of 473 nm and readout at both wavelengths.In this last case, we also clearly observed the presence of the hologram at 473 nm, while at 475 nm we observed weaker signals which are opposite in sign to the signals detected at 473 nm (that is a small increase of the reflectivity and small decrease of the transmission).By temperature tuning the diodes, we were also able to probe the gratings at some other wavelengths.From these experiments we estimate that the Bragg wavelength selectivity of our homodyne detection for those holograms is about 2 nm.Although this estimation is very crude, it is much larger than the Bragg selectivity half-width, Δλ = λ 2 (2n e) ≈ 0.5 nm, computed from the successful plane-wave approach used to analyse the results presented in the previous section.This result is not surprising.It is indeed known that the Bragg selectivity half-width of microgratings is much larger than the Bragg selectivity of plane-wave gratings [17].This increase of the Bragg selectivity half-width is also visible from the experiments reported in Ref.This increased Bragg selectivity half-width and also the sign of the homodyne signal can be interpreted as follows.The recorded hologram is wider than the probe beam.On the scale of the probe beam, and in a first crude approximation, it can be considered as infinite.This probe beam is a Gaussian focused beam with a waist of about ω ≈ 5 μm, which corresponds to a half-angular spread of: θ 1 e = λ (nπ ω) ≈ 20 mrad ≈ 1.2°.This probe beam thus contains a full spread of plane-waves.Referring to Fig. 4, one sees that each of them probes the grating with a different refraction angle and thus experiences a different transmission.For the plane wave components with a refraction angle very close to zero, the transmission is not modified by the hologram, however, for plane-waves with a larger angle, we observed a slight increase of the transmission.In our set-up we integrate all these plane-wave components onto the photodiodes so that, overall, we observe an increase of the transmitted signal.
Nevertheless, the correct operation of the homodyne detection only requires that the reflected and transmitted homodyne signals varies with opposite signs in presence of a hologram; no matter which one increases.Indeed, Fig. 5 illustrates the perfect behaviour of this proposed balanced detection scheme.The two homodyne signals [Figs. 5(a) and 5(b) left] vary in opposite directions so that the balanced signal computed using Eq.(3) allows to double the signal corresponding to the presence of the grating while cancelling out the DC components due to the reflectivity R 0 and transmission T 0 of the mirror alone.By reading out at a wavelength 2 nm larger than the Bragg wavelength, one clearly sees in the right of Fig. 5 that the balanced signal vanishes.
Conclusion
We have validated a balanced detection scheme for microholograms.This scheme is particularly convenient with the Lippmann architecture as the Lippmann mirror does not need to be removed, nor masked, during the data readout.Furthermore, this homodyne detection considerably enhances the amplitude of the detected signal.This point is of first importance for high capacity holographic data storage in which multiplexing a large number of holograms considerably decreases the retrieved signals.Moreover, because it efficiently removes the DC component of the unbalanced homodyne signal, this balanced detection is more robust to the power fluctuations of the optical source than a simple unbalanced detection.This is all the more important since the power emitted by tunable sources may vary with the wavelength.
Fig. 1 .
Fig. 1.Principle of balanced homodyne detection.PD1 and PD2 are two photodiodes providing the two homodyne signals, which are subtracted to get the balanced signal.
Fig. 2 .
Fig. 2. Scheme of the experimental set-up used to validate the balanced homodyne detection.BS are beam splitters, PD1 and PD2 photodiodes.
Fig. 3 .
Fig. 3. Structure of the Lippmann structure.The pre-and post-exposures and the hologram readout are achieved by sending the beam from the rear-side, while the hologram recording is made from the front-side.
Fig. 4 .
Fig. 4. Experimental (oscillating line) signal transmitted by the hologram-mirror structure, versus the refraction angle.This curve is obtained by rotation of the Lippmann sample along the y axis (see Fig. 2).The black solid line is the theoretical fit (see text).
2 .
The scan of the homodyne transmitted and reflected signals are shown in the left of Figs.5(a) and 5(b) respectively.Figure 5(c) left represents the balanced signal.To plot these figures, we normalized the signals to unity in the absence of the hologram.
Fig. 5 .
Fig. 5. Signals detected by scanning the piezoelectric translation stage along the x and y axes: a) homodyne transmitted signal; b) homodyne reflected signal; c) balanced signal.The left images are obtained with the diode at the wavelength of 475 nm used for hologram recording, and the right images with the diode at the wavelength of 473 nm , i.e. 2 nm below the wavelength used for recording. | 6,435.8 | 2007-03-05T00:00:00.000 | [
"Physics"
] |
Shift symmetries, soft limits, and the double copy beyond leading order
In this paper, we compute the higher derivative amplitudes arising from shift symmetric-invariant actions for both the non-linear sigma model and the special galileon symmetries, and provide explicit expressions for their Lagrangians. We find that, beyond leading order, the equivalence between shift symmetries, enhanced single soft limits, and compatibility with the double copy procedure breaks down. In particular, we have shown that the most general even-point amplitudes of a colored-scalar satisfying the Kleiss-Kuijf (KK) and Bern-Carrasco-Johansson (BCJ) relations are compatible with the non-linear sigma model symmetries. Similarly, their double copy is compatible with the special galileon symmetries. We showed this by fixing the dimensionless coefficients of these effective field theories in such a way that the arising amplitudes are compatible with the double copy procedure. We find that this can be achieved for the even-point amplitudes, but not for the odd ones. These results imply that not all operators invariant under the shift symmetries under consideration are compatible with the double copy.
I. INTRODUCTION
In recent years, there has been a resurgent interest in exploring the infrared behavior of field theories and its implications (see e.g. [1] and references therein). While most of the attention has been devoted to gauge theories, interesting results have also been derived regarding the infrared structure of scalar effective field theories (see e.g. [2][3][4][5][6][7][8][9].) For instance, Lorentz-invariant scalar field theories have been classified in [2,3] according to their soft behavior and their numbers of derivatives per field. Among these, there are three interacting theories-the U(N) non-linear sigma model (NLSM), the Dirac-Born-Infeld (DBI) theory, and the special galileon (SGal) [2,[10][11][12]-whose effective Lagrangians at lowest order in the derivative expansion each contain a single free parameter. These theories arise naturally in the Cachazo-He-Yuan (CHY) representation [11][12][13][14], and are known collectively as "exceptional scalar theories".
Exceptional scalar theories display two noteworthy properties at leading order. First, their scattering amplitudes have an enhanced single soft limit, which follows from the invariance of the actions under non-linearly realized symmetries. Because of this feature, higher-point amplitudes can be obtained recursively from lower-point ones using a modified version of the Britto-Cachazo-Feng-Witten (BCFW) recursion relation [15][16][17]. The second interesting property of exceptional scalar theories is that they are part of a web of theories related to each other by different implementations of color-kinematics replacements [12,[18][19][20][21][22]]-see Figure 1 in [23] for a pictorial summary of these relations.
One of these color-kinematics relations is an especially relevant one that is known as the double copy. The double copy construction relates colored-theories which satisfy the colorkinematics duality with their "kinematic square" [24][25][26] (for a pedagogical review see [27].) The best-known version of this relation constructs gravitational 1 scattering amplitudes as the double copy of Yang-Mills (YM) scattering amplitudes. A similar double copy construction connects two of the exceptional scalar theories mentioned above giving rise to a relation that can be summarized as NLSM 2 = SGal. From this perspective, the NLSM and SGal can be thought of as scalar analogs of YM and gravity. In fact, the origin of such correspondence has been explored in different settings and can be understood as following from YM 2 = gravity after performing a "dimensional reduction" to extract the longitudinal modes [12,20].
Recently, it has been shown that the double copy holds not only for scattering amplitudes, but also for both exact and perturbative classical solutions .
At lowest order in the derivative expansion, exceptional scalar theories can be equivalently defined through their symmetries, their enhanced single soft limits, and color-kinematics dualities. Importantly though, the inclusion of higher-order operators spoils this equivalence.
For instance, it is clear that corrections with a large enough number of derivatives per field will not modify the soft limit, regardless of whether or not they preserve the symmetries.
However, the status of color-kinematics duality is a priori less clear. In this paper, we focus on the NLSM 2 = SGal relation and explore the extent to which higher derivative corrections to these theories, consistent with their symmetries, are compatible with color-kinematics duality.
The analogous question has previously been asked for the YM 2 = gravity correspondence, and the higher-order operators of YM and their compatibility with the double copy has been explored in [53][54][55]. While the F µ ν F ν λ F λ µ term was shown to be compatible with the double copy, not all the O(F 4 ) contributions are compatible-not even the ones arising from the low energy limit of string theory. It is presently unknown whether there are hidden symmetries which only give rise to higher-order corrections that satisfy the color-kinematics duality.
Higher-order corrections to the NLSM amplitudes have been computed by several different methods. These constructions do not rely on the symmetries of the NLSM but instead focus on satisfying the color-kinematics duality or on the infrared behavior of the theory.
One construction, [56], consists of a rewriting of the open string amplitude in terms of a function called the Z-function involved in a Kawai-Lewellen-Tye (KLT)-like product with the YM amplitude. The Z-function behaves as a doubly-ordered partial amplitude and satisfies the Kleiss-Kuijf (KK) [57] and Bern-Carrasco-Johansson (BCJ) [24] relations. By taking the Abelian and α ′ → 0 limits, the Z-function reduces to the NLSM partial amplitudes. 2 Given this, it has been proposed that the α ′ -corrections correspond to the higher-order corrections to the NLSM. It is interesting to note that all odd-point amplitudes arising from this construction vanish. The theory giving rise to these amplitudes has been dubbed the Abelian Z-theory. A second construction [58] starts from the most general color-ordered scalar 4-point amplitude up to 8th order in derivatives and imposes cyclicity, the KK rela-tions, and the BCJ relations; all these requirements are highly constraining and completely fix the scattering amplitude. In fact, this 4-point amplitude coincides with that of the Abelian Z-theory. The authors of [58] also considered the 5-point amplitude, and showed that, while the contribution coming from the NLSM Wess-Zumino term does not satisfy the BCJ relations, there is a contribution at 14th order in derivatives that is compatible with the double copy prescription. Similarly, the 6-point function was computed up to 6th order in derivatives. A third method [59], assumes the pion double soft theorems [4,7,60] to compute the higher-order corrections, and finds the same results as the Abelian Z-theory plus an additional correction to the 4-point amplitude at order O(p 4 ) which does not obey the BCJ relations. Earlier work along this lines was previously performed in [61][62][63], and more recently in [2,3,58,64] . This method has now been dubbed the soft bootstrap. The soft bootstrap consists of constructing a modified BCFW recursion relation for scattering amplitudes based on the degree σ of its soft theorem, which is defined by with S n = 0 a "soft factor" involving the first n − 1 momenta. Recently, it was shown that the soft bootstrap approach can be extended to O(p 4 ) for the NLSM [65]. In [65], higher-point amplitudes at O(p 4 ) were obtained by defining soft blocks for 4 and 5 pions and using these as seeds in the soft bootstrap. As well as single-trace amplitudes, multi-trace amplitudes were also constructed, and both the SU(N) and SO(N) NLSMs were considered.
Nevertheless, the extension to O(p 6 ) and higher is not completely obvious. Lastly, another way of obtaining the higher derivative corrections to the NLSM is through the "extraction" of the longitudinal modes of YM, i.e. using the techniques of [19,20]. This was done in [66], where the leading order Lagrangian of the Abelian Z-theory was found from a dimensional reduction of the F µ ν F ν λ F λ µ YM term. Higher-order corrections to the SGal amplitudes have previously been considered in the literature, for instance in [58] by using the soft bootstrap. Using this method, one can compute the higher derivative corrections to a theory from the leading order amplitudes by assuming the single soft limit. It has been shown that the special galileon is the only interacting theory satisfying the soft limit with σ = 3 non-trivially [5,9,67]. This limit is not only satisfied non-trivially by its leading order amplitude, but also by several higher derivative corrections. It is important to note that not all higher-order amplitudes can be constructed using the soft bootstrap approach. This limitation follows from the fact that the single soft limit can be trivially satisfied at sufficiently high order; for a term in the Lagrangian of the form ∂ m φ n , the soft limit of degree σ becomes trivial if σ ≤ m/n. Similarly, one should notice that satisfying a single soft limit does not imply that the amplitude comes from a shift symmetric theory 3 . For example, a term such as (∂∂∂π) 4 would lead to an amplitude with soft degree σ = 3, but it is not invariant under the special galileon symmetries. We should also note that the corrections computed by using the soft bootstrap method include a non-vanishing 5-point amplitude. A second approach consists of finding the special galileon corrections as the double copy of the NLSM corrections. By considering the double copy of the Abelian Z-theory one obtains the even-point special galileon higher-order amplitudes from [58]. Finally, a third approach towards computing the higher derivative operators invariant under the special galileon symmetry was followed in [68]; the invariant Lagrangian was constructed up to quartic order in the Galileon field through a brane construction similar in spirit to [69,70].
From these results, it is clear that the definitions of the exceptional scalar theories through their enhanced single soft limits, through their symmetries, or through the double copy, are only equivalent at leading order, and that this equivalence breaks down when including higher-order operators. In this paper, we will explore the definition of these theories as given by their shift symmetries. We will not only compute the on-shell scattering amplitudes, but we will find the shift symmetric Lagrangians giving rise to them. The Lagrangian is relevant for calculations such as the classical perturbative double copy in [23]. We will rely on a coset construction [71][72][73] to write down the most general higher derivative corrections that are compatible with the SGal and NLSM symmetries. We will then constrain the NLSM coupling constants by demanding that the on-shell scattering amplitudes satisfy the KK and BCJ relations in order to be able to construct the double copy. Here, we follow the approach of [53] and assume that the double copy for higher order operators follows in the same way as it does for the leading order ones. Our goal is to understand whether the double copy of the higher-order corrections to the NLSM obtained this way corresponds to (a subset of) all possible higher-order corrections to the SGal theory. A pictorial summary of our results is provided in Figure 1.
The rest of this paper is organized as follows. In Sec. II, we give a short review of the coset construction which will be used to build the higher derivative corrections to the NLSM and the SGal. In Sec. III, we analyze the higher derivative corrections to the SU(N) × SU(N) → SU(N) NLSM in the large N limit, and in Sec. IV we explicitly construct the higher-order Lagrangian of the SGal. In Sec. V we explore the extent to which the higher derivative corrections introduced in the previous two sections are compatible with color-kinematics duality. Finally, we discuss our results and conclude in Sec. VI.
II. SHORT REVIEW OF THE COSET CONSTRUCTION
We begin by giving a brief review of the coset construction [71] for spontaneously broken space-time symmetries [72,73]. This construction is a method that allows the systematic construction of an effective field theory Lagrangian for Goldstone modes solely based on the knowledge of the symmetry breaking pattern. For recent, more detailed discussions see also [74][75][76][77].
Consider a system whose ground state spontaneously breaks a symmetry group G, which contains the Poincaré group as a subgroup, down to a subgroup H. In general, H may correspond to internal, space-time, or a mixture of both types of symmetries. We will denote the broken generators by X α , the unbroken translations by P a , and the remaining unbroken symmetry generators by T A . The effective action for the Goldstone bosons realizes both the unbroken translations and the broken symmetries non-linearly, while the other unbroken symmetries are implemented linearly and are therefore manifest.
The starting point of a coset construction is a dramatization of the most general symmetry transformation that is generated by the broken generators together with an unbroken translation: 4 Ω(x, π) = e x a Pa e π α Xα . (II.1) Since Ω is defined only up to an overall unbroken symmetry transformation it is an element of a coset, hence the name of this construction. From this, one can define the Maurer-Cartan form This is an element of the algebra, and as such it can be written as a linear combination of all the generators. The coefficients of this expansion can be calculated explicitly using the algebra of G, the Baker-Campbell-Hausdorff formula, and various identities involving matrix exponentials. The coefficients can be conveniently parametrized as follows: It can be shown [73] that the components E a µ play the role of a vielbein, in the sense that the volume element det(E) d d x is a scalar under G. One can also check that the quantities ∇ a π α -usually referred to as "covariant derivatives" of the Goldstone modes-transform under G as a (possibly reducible) linear representation of H. Thus, contractions of such covariant derivatives that are manifestly invariant under H are also secretly invariant under the full group G. Finally, the quantities A B a transform as the components of a connection, and can be used to introduce a covariant derivative as follows: This definition allows us to calculate higher-order covariant derivatives of the Goldstones or, for that matter, covariant derivatives of any field that is charged under H.
We can now used the building blocks introduced above to write down the most general effective action for the Goldstone modes, which schematically takes the following form: where all the indices are contracted in such a way as to preserve the unbroken symmetries.
If only internal symmetries are broken, the number of Goldstone modes is equal to the number of broken generators-this is the usual Nambu-Goldstone theorem [78,79]. However, when some of the symmetries that are spontaneously broken are space-time symmetries, one can usually obtain a non-linear realization of the symmetries that involves fewer fields [80].
Specifically, if commutation with some unbroken translation P relates two multiplets (under H) X and X ′ of broken generators, i.e.
[P, X ′ ] ⊃ X, (II.6) then one can eliminate the Goldstones that would be naively associated with X ′ and express them in terms of Goldstones of X and their derivatives. This is done by imposing a set of so-called "inverse Higgs constraints" [81], which amount to setting to zero (a subset of) covariant derivatives of the Goldstones of X in the same representation as the Goldstones of X ′ . Given the transformation properties of the Goldstone covariant derivatives, this procedure can be shown to preserve all the symmetries-including the ones that are nonlinearly realized.
III. HIGHER-ORDER LAGRANGIAN FOR THE NON-LINEAR SIGMA MODEL
In this section, we will consider a NLSM corresponding to the symmetry breaking pattern is a simple, compact, and internal symmetry group. For simplicity we will also restrict our attention to d = 4 spacetime dimensions. We will first derive the main building blocks of the effective Lagrangian using a coset construction, and discuss two different choices of coset parameterizations. Then, we will focus on the particular case where G = SU(N), and write down all possible higher derivative corrections up to O(∂ 8 ) in the large-N limit. In this limit, our results will also apply to G = U(N).
A. Coset construction and lowest-order effective Lagrangian
Let us choose the broken generators X α that appear in the coset parametrization (II.1) to be the generators of, say, G L . Then the components of the Maurer-Cartan form in (II.1) read where f αβγ are the structure constants of the group G, and U αβ is the adjoint representation of the abstract group element e π α Xα . To derive the result above, we used the fact that, in the adjoint representation, the generators X α are normalized as: Note, however, that Eq. (III.1) follows exclusively from the algebra of the group and the symmetry breaking pattern, and it is valid in any representation.
The coset vielbein is trivial because the broken generators are all internal. Therefore, the covariant derivatives of the Goldstones π α are simply Moreover, the Maurer-Cartan form does not have components along the unbroken generators, and therefore the coset covariant derivatives defined in (II.4) reduce to ordinary partial Because the commutators of the broken generators X α with the unbroken generators the covariant derivatives ∇ µ π α transform in the adjoint representation under G diag . The effective Lagrangian must be manifestly invariant under all unbroken symmetries, and therefore up to quadratic order in derivatives it must be 5 where F is the symmetry breaking scale, and the factor of 1/8 has been added for later convenience. At lowest order in the Goldstones, the covariant derivatives are equal to ordinary derivatives, i.e. ∇ µ π α ≃ ∂ µ π α + O(π∂π), and thus, the canonically normalized fields are φ α ≡ F π α /2. Higher derivative corrections to the Lagrangian (III.5) contain either higher powers of ∇ µ π α , or additional ordinary derivatives (as opposed to covariant, because the coset connection in (III.1) vanishes).
One of the advantages of the coset construction is that it does not rely on a specific representation of G L ×G R . This makes it explicit that the dynamics of the Goldstone modes depends solely on the symmetry breaking pattern, and not on the particular representation of the order parameter that realizes it. However, it can be instructive to rewrite the lowest order Lagrangian (III.5) that we obtained from the coset construction by assuming a particular representation. This will allow us to recast our result in a form that the reader might be more familiar with.
To this end, we notice that Eqs. (III.1) and (III.3) the following identity must be valid in any representation: with U IJ ≡ (e π α Xα ) IJ . In an arbitrary representation of an arbitrary group, the X α 's are normalized according to where T is the index of the representation. For instance, the indices of the fundamental representations of SU(N) and SO(N) are respectively equal to 1/2 and 2 [82]. Using the 5 We are working with a metric with "mostly minus" signature.
result (III.6) together with the normalization condition (III.7), it is easy to show that the lowest order Lagrangian (III.5) can be rewritten as In the particular case of the fundamental representation of G = SU(N), this reduces to the standard expression for the lowest order Lagrangian in chiral perturbation theory [83]. Another natural choice for the broken generators is are the generators of G L,R . It is easy to see that the components of the Maurer-Cartan form in this case read By rewriting the right-hand side of this equation in terms of broken (X α ) and unbroken (T α = 1 √ 2 (J L α + J R α )) generators, we can read off the coset covariant derivatives and connections in this new parametrization: The effective Lagrangian at lowest order in the derivative expansion is still (III.5), but now with a slightly different expression for ∇ µ π α . Higher derivative corrections involve higher In what follows, we will use this alternative coset parametrization to write down all non-redundant contributions to the NLSM effective Lagrangian up to eighth order in derivatives. This will enable us to leverage results that have already been derived in the context of chiral perturbation theory [84][85][86][87][88][89].
C. Higher-derivative corrections for G = SU (N ) We will now specialize our analysis to the case where G = SU(N) and work in the large-N limit. This will allow us to focus directly on those terms that are relevant for the double copy construction-see Sec. V A for more details-and as an added bonus will also reduce the overall number of terms we need to include in the Lagrangian. Moreover, we will omit redundant terms that can be eliminated by a field redefinition (because these are proportional to the lowest order equations of motion), by performing integrations by parts, or by using the Bianchi and Levi-Civita identities summarized in Appendix A.
Another property that can be used to simplify the Lagrangian after expanding in powers of the Goldstone fields is the SU(N) completeness relation where the first term, which would not be present for G = U(N), leads to terms that are subleading in the large-N limit. For particular values of N there exist additional trace relations that can further reduce the basis of operators in the Lagrangian, but since we are interested in results that have more general validity we will not employ these here.
In order to make our notation a little more more compact, we will work with a particular representation of SU(N)-the fundamental representation-and we will define the quantity We can then express the lowest order effective Lagrangian (III.5) directly in terms of u µ as follows: Once again, the canonically normalized field is φ α = F π α /2.
The next-to-leading order correction to this Lagrangian contains four derivatives and an arbitrary number of Goldstone fields, and reads [84] L (4) where the c i 's are constant dimensionless coefficients, and the ellipsis represents terms with more than one trace, which are negligible in the large N limit [90]. In the particular case of N = 3, the first term is redundant and can be expressed as a combination of the second one with terms involving more than one trace [83]; for N = 2 the second term is also redundant, and therefore all terms with four derivatives can be written as multi-trace terms [83].
At fourth order in derivatives, there is an additional single-trace term that can be added to the Lagrangian. This is the Wess-Zumino-Witten (WZW) term [91,92], and unlike the terms in (III.14) it is invariant under G only up to a total derivative. This term can be built by extending the base manifold to 5 dimensions, and introducing the invariant, exact 5-form Up to an overall coefficient, the integral of the 4-form β over the space-time manifold is the WZW term. It is the only 4-derivative term in the Lagrangian that gives rise to oddpoint functions. For instance, at leading order in an expansion in canonically normalized Goldstone fields it reads Notice that the WZW term vanishes for N = 2, whereas for N = 3 the coefficient c is famously quantized [92]. Moreover, this term breaks the Z 2 symmetry φ → −φ, also known as intrinsic parity. Thus, this term (and others) can in principle be omitted, if desired, by requiring that such a symmetry be preserved.
The 6-derivative corrections to our NLSM Lagrangian read [85][86][87][88] where h µν = ∇ µ u ν +∇ ν u µ , and the ellipsis again denotes multi-trace contributions which are negligible in the large-N limit. Moreover, the terms proportional to the e i coefficients break intrinsic parity, just like the Wess-Zumino term does, and give rise to odd-point amplitudes.
Finally, at 8th order in the derivative expansion we have [89] L where the ellipsis now denotes multi-trace terms, terms whose leading contribution in an expansion in powers of fields contains more than four Goldstones, and odd intrinsic parity terms. 6 In what follows we will not need these terms, since we will be calculating the 5-and 6-point functions only up to O(p 6 ).
IV. HIGHER-ORDER LAGRANGIAN FOR THE SPECIAL GALILEON
We now turn our attention to the higher derivative corrections to the special galileon.
Our goal is to find the most general action invariant under the SGal symmetries in four space-time dimensions. These symmetries act on the SGal field as [10] δ c π = c (IV.1a) where c, b µ , and s µν (the latter being traceless and symmetric) are the parameters of the symmetry transformations, while α is a constant that is convenient to introduce for normalization purposes. If π is a canonically-normalized field, then α must have dimensions of (mass) −3 , i.e. α ≡ 1/Λ 3 . While ordinary galileons are only invariant under the first two shift symmetries [93], the special galileon also satisfies the third one [10]. The fact that δ s π ∼ x 2 endows the leading order special Galileon field with a particularly soft infrared behavior [2].
A. Coset construction and lowest-order effective Lagrangian
As is the case for any theory with non-linearly realized symmetries, the SGal theory can also be obtained from a coset construction. This was first carried out in four dimensions in [5], and later extended to arbitrary dimensions in [94]. We will now briefly review this construction, and in the next subsection we will use it to systematically write down higherderivative corrections in four dimensions.
The symmetry transformations (IV.1) are associated, respectively, with some generators C, Q a , and S ab , which, together with the generators of the Poincaré group (P a and J ab ) 6 The remaining even-parity single-trace terms at eighth order in derivatives are the terms 45-66 and 119-135 listed in the supplemental material http://home.thep.lu.se/∼bijnens/chpt/basis.pdf of [89].
satisfy the following algebra [10]: The coset parametrization is, as usual, the most general symmetry transformation that is The generators of Lorentz transformations, J ab , are instead realized linearly, which means that Lorentz invariance of the Lagrangian will be manifest. The Maurer-Cartan form can be calculated using the algebra (IV.2). It takes the form Notice that, despite appearances, these building blocks only depend on even powers of α. This is because the algebra depends on α 2 , not on α. Moreover, one can always eliminate α 2 from the algebra by an appropriate rescaling of the generators, and therefore only its sign is really physical.
Since we are considering a space-time algebra, the number of broken symmetries does not correspond to the number of Goldstone bosons, and we can apply inverse Higgs constraints that allow us to eliminate some of these modes. In particular, we can demand that and solve these equations to express ξ a and σ ab in terms of derivatives of π as follows [94]: Of course, this simply reflects the fact that we only need a single field π to non-linearly realize the special Galileon symmetries, as shown in Eq. (IV.1).
At lowest order in the derivative expansion, the Lagrangian for any Galileon field (not just the special one) is invariant under the symmetries only up to a total derivative, i.e. the leading terms are WZW terms [74]. In the particular case of the special Galileon, other than the tadpole, there is only one such term. Following the standard procedure to write down WZW terms [95], it can be built by considering the exact 5-form [5,94] dβ ≡ n even (IV.12) Up to an overall constant, the coefficient of the 4-form β is equal to the leading order Lagrangian for the special galileon [10]: (IV.13)
B. Higher-derivative corrections
Higher order terms in the Lagrangian for the special Galileon are exactly invariant under all the symmetries. These can be built using the following ingredients: 1. The components of the Goldstones' covariant derivatives that have not been set to zero by imposing inverse Higgs constraints. A priori, these would be ∇ a ξ a , ∇ [a ξ b] , and ∇ a σ bc . However, after solving the inverse Higgs constraint (IV.10b) one finds that ∇ [a ξ b] = 0 [94]. Thus, the only non-trivial components are ∇ a ξ a and ∇ a σ bc .
3. The determinant of the coset vierbein E µ a , to make the integration measure in the action invariant under the non-linearly realized symmetries.
Based on the building blocks listed above, we conclude that the most general action for the special Galileon must take the form where ∆L contains all possible Lorentz-invariant combinations of its arguments.
In Sec. V B, we will use this Lagrangian to study the scattering amplitudes of the special Galileon. In order to be exactly invariant under the standard galileon symmetry, all higher derivative corrections in ∆L must have at least two derivatives acting on each field π. Hence, we will write ∆L = ∞ n=0 ∆L (2n) , where the superscript 2n refers to the number of additional derivatives. For example, keeping in mind that ∇ξ ∼ O(0) and ∇σ ∼ O(1) according to this derivative counting, the first two contributions to ∆L are where A and B i are functions of ∇ a ξ a that admit a Taylor expansion around zero. Notice that higher coset covariant derivatives cannot be integrated by parts as one might naively expect. Therefore, say, the first two terms in (IV.16) are independent structures that are both allowed by the symmetries.
In order to calculate the 4-point function at O(p 12 ) we only need to consider operators in ∆L (0) , ∆L (2) and ∆L (4) that can give rise to quartic self-interactions. To calculate the 5-point function at O(p 10 ) and the 6-pt function at O(p 12 ), we also include in ∆L (0) those operators that contribute at fifth and sixth order in the fields. To obtain explicit expressions 7 With normalization conventions for the generators, (J ab ) cd = η ac η bd − η ad η bc .
As a result, the only operators that are relevant to our calculations are A few comments are in order at this point. First, we have omitted from ∆L (4) those operators that, despite being linearly independent from the ones shown, would yield redundant interactions at quartic order. Second, it is easy to see that this Lagrangian will give rise to higher derivative corrections to the 2-point function of the form π n π. From an EFT viewpoint, these terms should be treated perturbatively as one does with any other higher-derivative interaction, and not used to modify the propagator. (See for instance footnote 1 in [96] for a brief discussion of this point.) Finally, the second operator in ∆L (0) gives rise to a cubic vertex. Nevertheless, this vertex does not contribute to the scattering amplitudes since it vanishes when one leg is on-shell 8 . Similarly, higher derivative 3-point vertices that do not vanish when one leg is on-shell (such as the 8th derivative ones arising from ∇ b ∇ b ∇ a ξ a , and ∇ a ∇ b σ ab ), do not spoil the single soft limit due to the large number of momentum factors involved in them. In fact, it has been argued in [97] that using the leading order equations of motion one can show that these operators shouldn't contribute to the scattering amplitudes.
V. COMPATIBILITY WITH THE DOUBLE COPY
In this section we will analyze the corrections to the 4-, 5-and 6-point amplitudes of the NLSM and SGal that follow from the higher derivative operators introduced in the previous two sections. We will be particularly interested in understanding the extent to which these corrections are compatible with the double copy procedure.
A. NLSM Scattering Amplitudes
We do this first for the NLSM introduced in Sec. III C are compatible with the double copy procedure. To this end, we will expand the operators in Eqs. (III.14), (III. 16 An important point to notice is that, in order to be compatible with color-kinematics duality, the color structure of the scattering amplitudes must satisfy Jacobi identities. This is a necessary but not sufficient condition to guarantee the existence of the double copy, since one also needs the correct kinematic behavior. Focusing on the color factors arising from the higher-order corrections to the NLSM, one sees that multi-trace color factors can arise at tree level. Crucially, for a general SU(N) group these are not related to the single-trace color factors, and the color factors associated with multi-trace operators in the Lagrangian would not necessarily satisfy Jacobi identities. Whether or not these terms are compatible with a (modified) double copy procedure is still unknown. For examples in which multi-trace terms are analyzed and generalized BCJ relations are considered see [98][99][100]. From now on, we will neglect the multi-trace terms, noting that, as we discussed in Sec. III C, the large-N limit makes our approach self-consistent. Restricting our attention to single-trace operators, we see that the corresponding amplitudes can be cast in the form The existence of a double copy also requires the color-ordered amplitudes to have a special kinematic structure. In fact, we must demand that they satisfy the KK [57] and BCJ [24] relations, which can be expressed respectively as these relations was shown in [101]. Imposing that the conditions above are satisfied places constraints on the dimensionless coefficients that appear in Eqs. (III.14), (III.16), (III. 17) and (III.18), as we will now discuss.
Let us start by considering the color-ordered 4-point amplitude. The most general form it can take while satisfying the KK and BCJ relations up to eighth order in derivatives is [58] A 4 [1, 2, 3, 4] = C 2 where s, t and u are the usual Mandelstam variables, and C i are constants with the subscript "i" labeling the powers of momenta in the corresponding term. As we already alluded to in the introduction, this amplitude corresponds to that of the Abelian Z-theory [56]. The first term in particular follows directly from the lowest order NLSM Lagrangian in Eq. (III.13).
We would like to understand what constraints need to be imposed on the coefficients of higher order corrections to recover an amplitude of the form (V.4). At the 4-derivative level, the contribution arising from the terms in Eq. (III.14) is of the form This satisfies the KK relations above if c 1 = −c 2 , but the BCJ relation cannot be satisfied.
We must therefore set c 1 = c 2 = 0, which is consistent with the fact that (V.4) does not contain any term quartic in momenta. Although there is no 1/F 4 correction that is compatible with color-kinematics duality, it is interesting to point out that there exists a 1/F 4 correction that satisfies the NLSM double soft limit and reads A 4 ∝ st/F 4 [59]. One should note that this amplitude cannot be obtained from Eq.(III.14). When it comes to the 6-and 8-derivative corrections, one can show that they satisfy both the KK and BCJ relations only if d 3 = 2(d 1 + d 2 ), g 1 + g 2 = 0 and g 3 + 2g 4 = 0 .
Moving on to the 5-point amplitude, we must require that all the contributions with less than 14 derivatives vanish. This is because, as discussed in the introduction, the leading color-ordered 5-point amplitude that is compatible with color-kinematics duality is known to have 14 derivatives [58]. This means that the coefficient in front of the Wess-Zumino term must vanish. Similarly, we must have e 1 = e 2 = 0 in Eq. (III.17).
It is also interesting to explore whether the 14th derivative order 5-point amplitude which is compatible with color-kinematics duality can be obtained from a Lagrangian satisfying the NLSM symmetries. In order to make some progress towards this question, we will make a few extra assumptions. Assuming that the pions are pseudoscalars and that the theory is invariant under Parity, φ a (t, x) → −φ a (t, −x); it has been shown that only terms with odd number of Levi-Civita tensors have odd numbers of Goldstones [92]. In this case, the general form of the 5-point NLSM amplitude is where Γ is some scalar function constructed from the Goldstone momenta. Rather than Furthermore, this amplitude is equal to the Abelian Z-theory result if
B. SGal Scattering Amplitudes
We now consider the scattering amplitudes arising from the higher derivative special galileon Lagrangian. Explicit expression for the 4-and 6-point amplitudes up to O(p 12 ) can be found in Appendix D.
Before turning our attention to the double copy, it is worth discussing briefly the single soft limit of the 4-point amplitude, which reads The fact that the term with 8 derivatives is not present comes from a non-trivial cancellation happening in det(E). This cancellation is crucial to have a soft theorem with degree σ = 3.
By comparing our results with the ones obtained with the soft bootstrap method [58], we find agreement. The term s 6 + t 6 + u 6 receives contributions proportional to the coefficients c 2 , c 3 and c 4 . We have checked explicitly that these coefficients can enter the 6-point amplitude without affecting the enhanced soft limit. In fact, the authors of [58] agree that such term is possible. 10 We note that the leading contribution to the 5-point amplitude arising in the soft bootstrap case at O(p 14 ) does not come from a Lagrangian with special galileon symmetry. This amplitude could arise from terms such as ǫ bcde ∇ b ∇ c ∇ d ∇ e ∇ a ξ a and ǫ bcde ∇ b ∇ c ∇ d ∇ a σ e a , nevertheless the resulting amplitude vanishes. As a matter of fact, up to the 14th derivative order we have checked that all contributions to the 5-point amplitude vanish. This is consistent with the results found in [97]. While a proof for all derivative orders is unavailable, these results seem to indicate that odd-point amplitudes arising from a special galileon invariant theory vanish on-shell.
We now compare the special galileon amplitudes with the double copy of the most general colored ordered scalar amplitudes satisfying the KK and BCJ relations. While the even-point amplitudes correspond to the NLSM ones with dimensionless coefficients constrained as in the previous section, we will also include for completeness a 5-point amplitude A * 5 at 14th derivative order which does not arise from a parity invariant NLSM Lagrangian, and yet enjoys the same single soft limit. Using these building blocks, we can construct the KLT double copy as follows: where P(2, 3) denotes all the permutations of the momenta p 2 and p 3 , and so on.
By comparing the scattering amplitudes obtained from the SGal Lagrangian with the ones obtained from the KLT double copy shown above, we find that we need to set c 2 = c 3 = c 4 = 0 since the term s 6 + t 6 + u 6 does not arise in the double copy. This shows that, by constraining the coefficients of the allowed operators in both the NLSM and the SGal, we can maintain their relation through the double copy. At this point, we lack a compelling argument which explains these constraints, but we discuss some possibilities in the next section. To conclude, we should mention that the leading order 5-point amplitude that can be obtained as the double copy of a colored scalar arises at O(p 32 ). Understanding whether this could arise from a special galileon invariant action is beyond the scope of this paper, but it would seem implausible given that all the computed odd-point amplitudes vanish on-shell.
VI. DISCUSSION AND CONCLUSIONS
We have constructed the higher derivative Lagrangians for both the non-linear sigma model and the special galileon by using building blocks given by the coset construction. The explicit form of these Lagrangians would be particularly important to calculate the radiation emitted at higher orders in the context of the classical perturbative double copy of [23].
Here, however, we focused on the on-shell scattering amplitudes arising from these shift- we are not aware of any symmetry that would enforce these constraints. Moreover, we have not explored whether these tunings happen to be technically natural. However, it is worth noticing that the constrained amplitudes still admit more than one free parameter.
In principle, it might seem surprising that an amplitude with more than one free parameter satisfies the KK and BCJ relations, but we believe that this is due to the fact that, when this happens, the σ = 1 soft limit is trivially satisfied. It is also relevant to mention that the Abelian Z-theory amplitudes correspond to a subset of the constrained NLSM amplitudes involving only one free parameter. When combined, these results show that, at least up to the derivative order we have considered, the most general colored-scalar theory compatible with color-kinematics duality is not merely a subset of the U(N)-NLSM.
We have also explicitly constructed the higher-order Lagrangian invariant under the special galileon symmetries, and have used this to understand the disagreement between the different definitions of the special galileon. It was previously shown that the even-point amplitudes of a scalar field with soft degree σ = 3, except for the s 6 + t 6 + u 6 term, match the scattering amplitudes obtained as the double copy of the most general colored scalar satisfying the KK and BCJ relations [58]. In [58], it was also shown that there is a 5-point amplitude with soft degree σ = 3 but too few momenta to arise from the double copy. This is the first instance in which the definitions of the special galileon based on its single soft limit or the double copy procedure have turned out to be inequivalent; in other words, the most general scalar field amplitudes with a soft degree σ = 3 do not correspond to the double copy of the most general colored scalar satisfying the KK and BCJ relations. In order to restore the equivalence, one could only consider even-point amplitudes and remove the s 6 + t 6 + u 6 term from the 4-point amplitude.
In our construction, we are able to constrain the dimensionless coefficients on both the NLSM and the SGal side in order to maintain their relation through the double copy. This is possible since only the even terms (up to the computed derivative orders) on the NLSM side satisfy the KK and BCJ relations, this matches the fact that the only non-vanishing amplitudes of the SGal are the even ones. It would be interesting to analyze the origin on the constraints set on the Wilsonian coefficients of these EFTs. A possibility worth exploring is if these constraints are related to the positivity bounds of EFTs that allow for a local, analytic, unitary UV completion [102,103], or other unitary conditions such as those in [104][105][106].
We have also discussed whether the 5-point amplitude arising as the double copy of the 14th derivative color-ordered amplitude which satisfies KK and BCJ relations could come from a theory with the SGal symmetries. We do not construct this amplitude since its calculation through Feynman rules seems intractable. Developing amplitude methods along the lines of the soft bootstrap method applied in [65] that can compute higher order corrections appears to be a more promising approach. Nevertheless, it seems unlikely that odd-point amplitudes arise from the SGal invariant action; a complete proof could follow the lines of the analysis in [97].
As summarized in Fig.1, the results for both the NLSM and the SGal higher order amplitudes tell us that the definitions of the exceptional scalar theories based on their symmetries, single soft limits, or double copy relations are not equivalent beyond leading order.
As we mentioned in the introduction, there are various methods for computing the higher derivative on-shell scattering amplitudes, but only a few that also obtain the corresponding Lagrangians. Given this, it would be interesting to explore whether the most general higher derivative corrections compatible with the double copy can be obtained as a dimensional reduction of higher-order operators of Yang-Mills theories and gravity, in the spirit of [19,20,66]. When considering the alternative coset parametrization for the NLSM of section III B, we have a non-zero connection given by Eq. (III.10b). The geometric structure of the coset space allows us to define a field strength, Γ µν , corresponding to this connection by This field strength satisfies the Bianchi identity which is useful in simplifying the NLSM Lagrangians.
On the other hand, there are identities that specifically help us to simplify the odd intrinsic parity terms. These are Levi-Civita identities which follow from the fact that, in 4d, a completely antisymmetric tensor with 5 indices is zero, that is, Contracting a tensor T αβγρτ η in every possible way with the one above leads to the (independent) identities: When the tensor T αβγρτ η is constructed out of u µ and ∇ µ , these identities can be used to simplify the NLSM Lagrangian.
As a cross-check of our calculations, we have verified that these amplitudes have the correct infrared behavior by computing the double soft limit of the 6-point amplitude.
In what follows, we will denote by A (j) n [1, . . . , n] the O(p j ) contribution to the n-point on-shell color-ordered amplitude. With this notation, the 4-point color-ordered amplitude of the NLSM up to the eighth derivative order is given by a sum of the following terms: where s, t, and u are the usual Mandelstam variables defined as s = (p 1 + p 2 ) 2 , t = (p 1 + p 3 ) 2 , u = (p 1 + p 4 ) 2 . (B.5) The 5-point partial amplitude up to sixth derivative order is given by: A The Abelian Z-theory amplitudes can be found in [56,107]. These amplitudes coincide with the most general color-ordered amplitudes satisfying the KK and BCJ relations found in [58]. For completeness, we report here the results for the 4-point amplitude up to eighth derivative order, and for the 6-point amplitude up to sixth derivative order: A 4 [1, 2, 3, 4] = C 2 F 2 t + C 6 F 6 t(s 2 + t 2 + u 2 ) + C 8 F 8 t(stu) + · · · , (C.1) | 10,865.4 | 2019-08-20T00:00:00.000 | [
"Physics"
] |
High-Performance Polyamide Reverse Osmosis Membrane Containing Flexible Aliphatic Ring for Water Purification
A reverse osmosis (RO) membrane with a high water permeance and salt rejection is needed to reduce the energy requirement for desalination and water treatment. However, improving water permeance while maintaining a high rejection of the polyamide RO membrane remains a great challenge. Herein, we report a rigid–flexible coupling strategy to prepare a high-performance RO membrane through introducing monoamine with a flexible aliphatic ring (i.e., piperidine (PPR)) into the interfacial polymerization (IP) system of trimesoyl chloride (TMC) and m-phenylenediamine (MPD). The resulted polyamide film consists of a robust aromatic skeleton and soft aliphatic-ring side chain, where the aliphatic ring optimizes the microstructure of polyamide network at a molecular level. The obtained membranes thereby showed an enhanced water permeance of up to 2.96 L·m−2 h−1 bar−1, nearly a 3-fold enhancement compared to the control group, meanwhile exhibiting an ultrahigh rejection toward NaCl (99.4%), thus successfully overcoming the permeability–selectivity trade-off limit. Furthermore, the mechanism of the enhanced performance was investigated by molecular simulation. Our work provides a simple way to fabricate advanced RO membranes with outstanding performance.
Introduction
The shortage of freshwater is posing threats to the sustained development of both developed and developing countries [1,2]. Membrane filtration technologies, particularly reverse osmosis (RO), are extensively used to purify water (remove dissolved salts from water) [3]; nonetheless, the energy requirement remains high [4,5]. Numerical calculation studies have confirmed that the operation pressure and energy consumption applied to seawater RO and brackish RO can be decreased significantly when using an RO membrane with an enhanced water permeance and salt rejection [6,7]. Unfortunately, the improvement in membrane water permeance is generally accompanied by a compromised salt rejection due to the trade-off limitation of permeability and selectivity [8,9].
The state-of-the-art RO membrane isa thin-film composite (TFC) membrane containing an ultrathin dense separation layer and a porous substrate [10]. The separation layer, which mainly determines the separation performance, is synthesized by interfacial polymerization (IP) between amine dissolved in water and acyl chloride dissolved in an organic solvent [11]. During the IP process, amine molecules diffuse into the organic phase, and then react with trimesoyl chloride molecules [12]. Such a diffusion-reaction process results in a cross-linked polyamide film covering on the substrate. It is generally recognized that the polyamide film has two kinds of pores, i.e., network pores (free volumes within molecular chains) and aggregate pores (interspaces between the aggregates) [13]. These sub-nanometer pores provide a diffusion path for water molecules during the RO separation process. To this date, the combination of m-phenylenediamine (MPD) and trimesoyl chloride (TMC) is most commonly used for fabricating RO membranes. This MPD-TMC chemistry (well known as FT-30 chemistry [14]) is the basis of most commercial TFC RO membranes, although various novel materials have been studied over the past few decades.
Obtaining fresh water from seawater and wastewater relies on the selective permeation of water through the polyamide layer. However, water transport is heavily restricted by the polyamide chains in a tortuous manner [15]; one of the reasons for this is the efficient packing of molecular chains during the IP process. The rigid plane structure of benzene rings within MPD/TMC and their interaction (e.g., π-π stacking) leads to a dense polyamide network, which is short of sufficient interconnected free volumes. Thus, extensive effort has been dedicated to creating more water transport channels in the polyamide layer to improve the membrane performance. One commonly used strategy to tailor the microstructural properties of the active layer is to incorporate nanomaterials (e.g., zeolite, graphene oxide, carbon nanotubes, and metal organic frameworks) into the layer [16][17][18][19][20]. Although more water transport channels can be obtained by creating interfacial voids or introducing the intrinsic voids of nanomaterials, the low affinity with the polymer matrix usually facilitates the formation of unselective defects, causing decreased salt rejection [21]. Most recently, Culp et al. revealed the relationship between the nanoscale polyamide structure and membrane performance and demonstrated that significance of controlling nanoscale polyamide homogeneity (increasing uniformly distributed angstromscale free volume polyamide) [22]. However, due to the rapid and uncontrolled reaction at the nanometer-thick interface region, achieving the desired polyamide properties with optimized microstructures remains a scientific and technological challenge.
In this work, the RO membrane with an enhanced water permeance and salt rejection was prepared by designing the polyamide structure at the molecular level. As shown in Figure 1, a monoamine with aliphatic ring (e.g., piperidine (PPR)) was introduced into the conventional IP system of TMC and MPD to tailor the intrinsic properties of polyamide layer. Different from insoluble nanomaterials, monoamine uniformly dissolves in the aqueous phase and then diffuses into the organic phase to react with TMC ( Figure 1a). Due to the self-limitation of single reactive groups, the participation of monoamine will not replace MPD in a cross-linked polyamide. The resulting polymer consists of a robust aromatic polyamide skeleton and soft aliphatic-ring side chain (Figure 1b). Due to the regulation of aliphatic ring on microstructure of the polyamide network (Figure 1c), the obtained RO membrane showed an enhanced water permeance and NaCl rejection. In this work, the fabricated RO membrane containing robust aromatic ring and soft aliphatic ring is named a rigid-flexible coupling RO (RFRO) membrane.
Materials
The polymer substrate used to prepare the TFC RO membranes was a PSF polysulfone (PSF) ultrafiltration membrane, which was purchased from Beijing OriginWater Membrane Technology Co., Ltd. (Beijing, China). The commonly used monomer to build the polyamide separation layer, i.e., m-phenylenediamine (MPD, 99%), and trimesoyl
Materials
The polymer substrate used to prepare the TFC RO membranes was a PSF polysulfone (PSF) ultrafiltration membrane, which was purchased from Beijing OriginWater Membrane Technology Co., Ltd. (Beijing, China). The commonly used monomer to build the polyamide separation layer, i.e., m-phenylenediamine (MPD, 99%), and trimesoyl chloride (TMC, 98%), were purchased from TCI Chemicals (Tokyo, Japan). The additives to prepare RFRO membranes, i.e., PPR, and the additives used in the control group, i.e., piperazine (PIP) were purchased from Aladdin (Shanghai, China). The solute to test the membrane performance, i.e., sodium chloride (NaCl, 99.5%), was purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China). Deionized (DI) water produced by a two-stage RO system was used throughout all experiments.
Preparation of RO Membranes
The TFC RO membranes were prepared by the IP method. For the fabrication of the conventional RO membrane as the control, the PSF ultrafiltration substrate was immersed in a 2 w/w% MPD aqueous solution for 5 min, and the excess MPD solution was removed with an air gun until no water droplet was observed. Then, the 0.1 w/w% TMC dissolved in n-hexane solution was carefully poured onto the substrate saturated by aqueous phase to generate the IP reaction and was held for 30 s. Subsequently, the resulting membrane was washed with pure n-hexane. Finally, the obtained membrane was placed in an oven at 60 • C for 2 min to form a stable cross-linked structure. For the fabrication of RFRO membranes, a certain amount of monoamine with an aliphatic ring (i.e., PPR) was added into the MPD aqueous solution, and the resulting mixed solution was used as an aqueous phase for IP. The other steps and conditions were the same as the fabrication of a conventional RO membrane. In the following discussion, the PPR concentration was the mass concentration of PPR in the aqueous phase.
Characterization
The chemical compositions of the substrate and RO membranes were characterized by attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy (NICO-LETTM iS10 spectrometer, Thermo Fisher Scientific, Waltham, MA, USA) and X-ray photoelectron spectroscopy (XPS, Thermo Scientific K-Alpha, USA) tests. The surface morphologies were measured by a scanning electron microscope (SEM, TESCAN MIRA LMS, CZ, Brno, Czech Republic) and an atom force microscope (AFM, Shimadzu SPM-9700, Kyoto, Japan) with contact model, and the surface roughness was evaluated by the parameter of room mean surface roughness (R q ). The cross-sectional structural properties and internal nano-sized voids of the RO membranes were charactered by transmission electron microscopy (TEM, JEM 1200EX, JEOL, Tokyo, Japan). The TEM test samples were prepared by ultra-thin-sectioning according to the following procedures: the RO membrane was cut into a strip shape, and then immersed in a series of water/ethanol solutions to dehydrate the membrane. Subsequently, the dried membrane was embedded in LR resin and held for curing. Finally, the embedded membrane was sliced into ultrathin sections (this thickness was about 80 nm) to obtain the sample for TEM measurement.
Performance Evaluation
The separation performance of the fabricated membranes was evaluated by water permeance, NaCl rejection rate (2000 mg/L NaCl solution), and long-time stability. The desalination tests were carried out based on a laboratory-scale cross-flow apparatus with six filtration cells (each cell had an effective area of 17.7 cm 2 ). The tests were carried out with an operation pressure of 15 bar, a feed flow rate of 4.5 L min −1 , and a test temperature of 25 • C. Each membrane sample was pre-pressed for at least 2 h at 15 bar to reach a (1) and (2), respectively.
where P is the water permeance (L m −2 h −1 bar −1 ), V is the volume of the permeated water (L) collected within time t (h), A is the effective membrane area (m 2 ), and C p and C f represent the salt concentrations of the permeated solution and feed solution, respectively (obtained by a conductivity meter).
Molecular Simulation
To obtain the information about the microstructure of the membrane materials, a molecular dynamic (MD) simulation was applied to build the molecular models of the cross-linked polyamide and to analyze their microscope properties including free volume and chain movability. Briefly, 150 TMC, 175 MPD, and 20 PPR molecules were blended, then the crosslinking simulation was carried out based on a script program in Materials Studio (during this process, a new amide group was created when the distance between acyl chloride in TMC and amine in MPD (or PPR) was below cut-off distance, until 70% of acyl chloride groups were consumed). Finally, the polyamide model was relaxed though an equilibration NPT MD simulation and NVT MD simulation to approach the real state of polyamide film [23]. The free volume properties of the models were analyzed by the Connolly surface method and the motion of the polymer chains was analyzed based on a root-mean-square displacement (MSD) curve of the polymer chain.
Chemical Composition and Structural Properties of the RO Membrane
The chemical composition of the membranes fabricated by IP was analyzed by XPS characterization. Figure 2a shows the resolved C1s XPS spectrum of the fabricated RFRO membrane. Three peaks arising from O=C-N, C-N, and C-H/C-C bonds were observed, demonstrating the formation of polyamide. From the comparison of O1s XPS spectra ( Figure S1), it was found that the carboxyl group content of the polyamide membrane was reduced when the PPR was added in the IP system (from 31% to 19%). This was because the PPR consumed a part of acyl chloride groups of TMC; thus, the unreacted acyl chloride groups that subsequently hydrolyzed into carboxyl groups decreased. IR characterization ( Figure 2b) further showed the difference between the conventional RO membrane and RFRO membrane. Compared with the PSF substrate, the IR spectra of RO membrane showed two absorption peaks at 1608 cm −1 and 1542 cm −1 , which corresponded to the stretching vibration of C=O and bending vibration of N-H in the full-aromatic polyamide groups formed by the polymerization of MPD and TMC. In addition to the above absorption peaks, two new absorption peaks at 1646 cm −1 and 1454 cm −1 were observed in the IR spectra of the RFRO membrane (TMC-MPD/PPR), which resulted from the stretching vibration of C=O in semi-aromatic polyamide formed by PPR and TMC. The above results demonstrate the fact the PPR and MPD co-reacted with TMC during the IP process, and formed the polyamide containing the benzene ring and aliphatic ring.
The structural properties of the fabricated membranes were characterized by SEM and AFM. Figure 3a shows the surface SEM images of the fabricated RO membranes. A typical "ridge-and-valley" morphology containing leaf-like structures and nodular structures was observed on the membrane surfaces, which may have originated from reaction heat [24], the release of gas bubbles [25], and the uneven loading of the amine monomer [26] during IP process. However, compared with the conventional RO membrane, the RFRO membrane showed more and larger leaf-like structures. These structures were expected to increase the membrane surface roughness, which was further proved by AFM measurement. As presented in Figure 3b, the surface roughness of the obtained membranes gradually increased from 36.2 nm to 64.7 nm with the addition of PPR into the aqueous phase for IP. The increase in surface roughness generally has a positive correlation with the effective filtration area, which favors the improvement of membrane permeance [27,28].
( Figure 2b) further showed the difference between the conventional RO membrane and RFRO membrane. Compared with the PSF substrate, the IR spectra of RO membrane showed two absorption peaks at 1608 cm −1 and 1542 cm −1 , which corresponded to the stretching vibration of C=O and bending vibration of N-H in the full-aromatic polyamide groups formed by the polymerization of MPD and TMC. In addition to the above absorption peaks, two new absorption peaks at 1646 cm −1 and 1454 cm −1 were observed in the IR spectra of the RFRO membrane (TMC-MPD/PPR), which resulted from the stretching vibration of C=O in semi-aromatic polyamide formed by PPR and TMC. The above results demonstrate the fact the PPR and MPD co-reacted with TMC during the IP process, and formed the polyamide containing the benzene ring and aliphatic ring. The structural properties of the fabricated membranes were characterized by SEM and AFM. Figure 3a shows the surface SEM images of the fabricated RO membranes. A typical "ridge-and-valley" morphology containing leaf-like structures and nodular structures was observed on the membrane surfaces, which may have originated from reaction heat [24], the release of gas bubbles [25], and the uneven loading of the amine monomer [26] during IP process. However, compared with the conventional RO membrane, the RFRO membrane showed more and larger leaf-like structures. These structures were expected to increase the membrane surface roughness, which was further proved by AFM measurement. As presented in Figure 3b, the surface roughness of the obtained membranes gradually increased from 36.2 nm to 64.7 nm with the addition of PPR into the aqueous phase for IP. The increase in surface roughness generally has a positive correlation with the effective filtration area, which favors the improvement of membrane permeance [27,28]. Furthermore, the cross-sectional structure of the RFRO membrane was characterized by TEM measurements (Figure 4). The result shows the typical composite structure containing a continuous polyamide separation layer and a porous PSF substrate. The "ridgeand-valley" structures of polyamide layer were observed in the cross section, which confirmed the result from SEM measurement. Moreover, the TEM measurement also revealed the internal property of the rough polyamide separation layer. The abundant light grey regions demonstrate that the rough three-dimensional structures were hollow inside; these internal nano-voids are believed to serve as low-resistance pathways for water collected by the polyamide layer [29]. The TEM measurement also showed that the thickness of the polyamide film was about 21 nm. Such an ultrathin membrane thickness contributes to high water permeance because of the ultra-short distance for water permeation [30,31]. In addition, the TEM measurements for the TMC-MPD membrane were conducted to Furthermore, the cross-sectional structure of the RFRO membrane was characterized by TEM measurements (Figure 4). The result shows the typical composite structure containing a continuous polyamide separation layer and a porous PSF substrate. The "ridge-and-valley" structures of polyamide layer were observed in the cross section, which confirmed the result from SEM measurement. Moreover, the TEM measurement also revealed the internal property of the rough polyamide separation layer. The abundant light grey regions demonstrate that the rough three-dimensional structures were hollow inside; these internal nano-voids are believed to serve as low-resistance pathways for water collected by the polyamide layer [29]. The TEM measurement also showed that the thickness of the polyamide film was about 21 nm. Such an ultrathin membrane thickness contributes to high water permeance because of the ultra-short distance for water permeation [30,31]. In addition, the TEM measurements for the TMC-MPD membrane were conducted to compare the variation in membrane thickness. As shown in Figure S2, the thickness of the polyamide layer was about 19 nm, which was slightly less than that of the membrane fabricated with PPR. We speculate that the participation of PPR decelerated the self-termination process of IP, thus leading to a slightly increased membrane thickness.
Separation Performance
The performance of RFRO membranes was evaluated by a cross-flow desa test with a 2000 ppm NaCl solution as feed. Figure 4a shows the variation of wa meance and NaCl rejection versus the addition amount of PPR for fabricating RFR branes. Compared with the conventional RO membrane (the point of 0 PPR conce in Figure 5a), a noticeable performance improvement was observed for the RFR brane prepared with the incorporation of PPR. The water permeance increased fr Lm −2 h −1 bar −1 to 2.96 Lm −2 h −1 bar −1 when 1% PPR was added, a near 3-fold impro Importantly, the rejection toward NaCl increased as well (from 98.1% to 99.4%). T ultaneous promotion in water permeance and salt rejection fully demonstrates the of the rigid-flexible coupling strategy for fabricating a high-performance RO me However, when excess PPR was added (concentration above 1.5%), the separation mance of the obtained membrane would not continuously increase; this phenomen be attributed to the reaction self-limitation of monoamine during IP. Specifically action of PPR and TMC cannot form a polymer film due to the single reactive g PPR; thus, it cannot replace the dominant role of MPD in cross-linked polyamide even with an excess addition of PPR.
Separation Performance
The performance of RFRO membranes was evaluated by a cross-flow desalination test with a 2000 ppm NaCl solution as feed. Figure 4a shows the variation of water permeance and NaCl rejection versus the addition amount of PPR for fabricating RFRO membranes. Compared with the conventional RO membrane (the point of 0 PPR concentration in Figure 5a), a noticeable performance improvement was observed for the RFRO membrane prepared with the incorporation of PPR. The water permeance increased from 1.08 Lm −2 h −1 bar −1 to 2.96 Lm −2 h −1 bar −1 when 1% PPR was added, a near 3-fold improvement. Importantly, the rejection toward NaCl increased as well (from 98.1% to 99.4%). The simultaneous promotion in water permeance and salt rejection fully demonstrates the validity of the rigid-flexible coupling strategy for fabricating a high-performance RO membrane. However, when excess PPR was added (concentration above 1.5%), the separation performance of the obtained membrane would not continuously increase; this phenomenon may be attributed to the reaction self-limitation of monoamine during IP. Specifically, the reaction of PPR and TMC cannot form a polymer film due to the single reactive group of PPR; thus, it cannot replace the dominant role of MPD in cross-linked polyamide chains, even with an excess addition of PPR. Figure 5b shows the comparison of our representative membranes and commercial RO membranes and other advanced RO membranes reported in the recent literature. Our membrane is located in the upper right corner of the figure, demonstrating its performance advantages both in water permeance and salt rejection in comparison with other RO membranes. To further evaluate the potential of the RFRO membranes in practical applications, the operational stability of the membranes was examined by a long-time cross-flow desalination test under an operation pressure of 1.5 MPa. As presented in Figure 5c, the water flux and NaCl rejection varied lightly for the first 5 h, and then remained stable for the following 45 h. In addition, the XPS measurement ( Figure S3) shows that the chemical composition of the membrane remained unchanged after cross-flow filtration test. The above results demonstrate the RFRO membrane has great potential for a long-time operation. Consequently, it can be concluded that the rigid-flexible coupling strategy could be harnessed to fabricate advanced RO membranes. Figure 5b shows the comparison of our representative membranes and commercia RO membranes and other advanced RO membranes reported in the recent literature. Our membrane is located in the upper right corner of the figure, demonstrating its perfor mance advantages both in water permeance and salt rejection in comparison with other RO membranes. To further evaluate the potential of the RFRO membranes in practica applications, the operational stability of the membranes was examined by a long-time cross-flow desalination test under an operation pressure of 1.5 MPa. As presented in Fig ure 5c, the water flux and NaCl rejection varied lightly for the first 5 h, and then remained stable for the following 45 h. In addition, the XPS measurement ( Figure S3) shows that the chemical composition of the membrane remained unchanged after cross-flow filtration test. The above results demonstrate the RFRO membrane has great potential for a long time operation. Consequently, it can be concluded that the rigid-flexible coupling strategy could be harnessed to fabricate advanced RO membranes.
Mechanism Analysis
The RFRO membranes with enhanced performance were fabricated by adding a monoamine with a flexible aliphatic ring (PPR) into the aqueous phase (MPD solution) for IP. The key point for our proposed rigid-flexible coupling strategy was the fact that the added second monomer only had one reactive amine group. To show the necessity of the above character of the additive, the control group was designed where a diamine with flexible aliphatic ring (i.e., PIP) was added into the aqueous phase to fabricate the RO
Mechanism Analysis
The RFRO membranes with enhanced performance were fabricated by adding a monoamine with a flexible aliphatic ring (PPR) into the aqueous phase (MPD solution) for IP. The key point for our proposed rigid-flexible coupling strategy was the fact that the added second monomer only had one reactive amine group. To show the necessity of the above character of the additive, the control group was designed where a diamine with flexible aliphatic ring (i.e., PIP) was added into the aqueous phase to fabricate the RO membrane by the IP method (Figure 6a). Figure 6b shows the performance of the obtained membranes. A significant decrease in NaCl rejection was observed. Although we changed the addition amount, a membrane with an acceptable performance was not obtained. In addition, the SEM measurement ( Figure S4) showed that the membrane fabricated by adding PIP had a surface morphology between a typical RO membrane and a typical NF membrane. Actually, PIP is the most commonly used amine monomer for fabricating a nanofiltration (NF) membrane, which generally has a higher water permeance than an RO membrane, but a much lower NaCl rejection (~30%) [32]. Although PIP has a similar flexible aliphatic ring with PPR, the introduction of PIP did not bring the hoped-for enhanced water permeance, but instead had a terrible performance in relation to rejection. We speculated that the completely different result caused by PIP was due to the following reason: PIP and MPD co-reacted with TMC during the IP process, which changed the skeleton structure of the cross-linked polyamide network, forming a membrane containing TMC/PIP micro-phase and TMC/MPD micro-phase regions. The TMC/PIP micro-phase was actually a "defect" region for the RO desalination process; thus, the resulted membrane exhibited an unsatisfactory performance. nanofiltration (NF) membrane, which generally has a higher water permeance than an RO membrane, but a much lower NaCl rejection (~30%) [32]. Although PIP has a similar flexible aliphatic ring with PPR, the introduction of PIP did not bring the hoped-for enhanced water permeance, but instead had a terrible performance in relation to rejection. We speculated that the completely different result caused by PIP was due to the following reason PIP and MPD co-reacted with TMC during the IP process, which changed the skeleton structure of the cross-linked polyamide network, forming a membrane containing TMC/PIP micro-phase and TMC/MPD micro-phase regions. The TMC/PIP micro-phase was actually a "defect" region for the RO desalination process; thus, the resulted membrane exhibited an unsatisfactory performance. To further investigate the mechanism of the advanced performance offered by adding PPR, a molecular simulation was used to construct the molecular model of a rigidflexible coupling polyamide and to reveal the micro-properties of the materials, including the free volume and motion of the polymer chains. Figure 7a shows the cross-linked molecular model of materials synthesized by TMC, MPD, and PPR, wherein the free volumes calculated with a probe with radius of 1.4 Å are visualized by blue and gray colors. Because the radius of a water molecule is about 1.4 Å, the observed free volumes were al accessible for water. For better comparison, the fractional free volumes (FFVs) of various materials were calculated and the results are presented in Figure 7a. The FFV value of TMC-MPD/PPR polyamide was larger than that of full-aromatic polyamide (TMC-MPD) It is believed that the free volumes serve as pathways for water diffusion in polyamide layer. Thus, the enhanced water permeance of RFRO membrane can be attributed to the increased free volumes, which originated from the regulation effect of the flexible aliphatic ring on the distribution of polyamide chains. In addition, we believe that the regulation effect from the flexible aliphatic ring also reduced the nanoscale defects. Thus, the obtained membrane showed an enhanced NaCl rejection rate. To further investigate the mechanism of the advanced performance offered by adding PPR, a molecular simulation was used to construct the molecular model of a rigid-flexible coupling polyamide and to reveal the micro-properties of the materials, including the free volume and motion of the polymer chains. Figure 7a shows the cross-linked molecular model of materials synthesized by TMC, MPD, and PPR, wherein the free volumes calculated with a probe with radius of 1.4 Å are visualized by blue and gray colors. Because the radius of a water molecule is about 1.4 Å, the observed free volumes were all accessible for water. For better comparison, the fractional free volumes (FFVs) of various materials were calculated and the results are presented in Figure 7a. The FFV value of TMC-MPD/PPR polyamide was larger than that of full-aromatic polyamide (TMC-MPD). It is believed that the free volumes serve as pathways for water diffusion in polyamide layer. Thus, the enhanced water permeance of RFRO membrane can be attributed to the increased free volumes, which originated from the regulation effect of the flexible aliphatic ring on the distribution of polyamide chains. In addition, we believe that the regulation effect from the flexible aliphatic ring also reduced the nanoscale defects. Thus, the obtained membrane showed an enhanced NaCl rejection rate. nanofiltration (NF) membrane, which generally has a higher water permeance than an RO membrane, but a much lower NaCl rejection (~30%) [32]. Although PIP has a similar flexible aliphatic ring with PPR, the introduction of PIP did not bring the hoped-for enhanced water permeance, but instead had a terrible performance in relation to rejection. We speculated that the completely different result caused by PIP was due to the following reason: PIP and MPD co-reacted with TMC during the IP process, which changed the skeleton structure of the cross-linked polyamide network, forming a membrane containing TMC/PIP micro-phase and TMC/MPD micro-phase regions. The TMC/PIP micro-phase was actually a "defect" region for the RO desalination process; thus, the resulted membrane exhibited an unsatisfactory performance. To further investigate the mechanism of the advanced performance offered by adding PPR, a molecular simulation was used to construct the molecular model of a rigidflexible coupling polyamide and to reveal the micro-properties of the materials, including the free volume and motion of the polymer chains. Figure 7a shows the cross-linked molecular model of materials synthesized by TMC, MPD, and PPR, wherein the free volumes calculated with a probe with radius of 1.4 Å are visualized by blue and gray colors. Because the radius of a water molecule is about 1.4 Å, the observed free volumes were all accessible for water. For better comparison, the fractional free volumes (FFVs) of various materials were calculated and the results are presented in Figure 7a. The FFV value of TMC-MPD/PPR polyamide was larger than that of full-aromatic polyamide (TMC-MPD). It is believed that the free volumes serve as pathways for water diffusion in polyamide layer. Thus, the enhanced water permeance of RFRO membrane can be attributed to the increased free volumes, which originated from the regulation effect of the flexible aliphatic ring on the distribution of polyamide chains. In addition, we believe that the regulation effect from the flexible aliphatic ring also reduced the nanoscale defects. Thus, the obtained membrane showed an enhanced NaCl rejection rate. According to the hopping mechanism which describes the diffusion of small molecules in polymers, the small molecules may oscillate in a particular void (free volume) and when a proper pathway is created by the polymer chains moving, it occasionally jumps into a neighboring void [33]. Thus, the motion of polyamide chain is an important factor influencing the diffusion of water in polyamide layer. The motion properties of the polyamide chains with or without PPR ring were investigated by a molecular simulation, where the slope of the MSD-time curve reflected the moveability of polymer chain. As presented in Figure 7b, the slope of TMC-MPD/PPR model was larger than that of TMC-MPD model, which demonstrates that moveability of polyamide chain was enhanced when introducing the flexible aliphatic ring. On the basis of the results of the experimental characterization and molecular simulation, it can be concluded that the enhanced separation performance of RFRO membrane is attributed to the optimized molecular structures (corresponding to enhanced diffusion of water molecules in polyamide layer) and increased the surface roughness (corresponding to increased filtration area).
Conclusions
In this work, we reported the fabrication of high-performance RO membrane through a rigid-flexible coupling molecular design strategy. A monoamine with flexible aliphatic ring was added into the TMC-MPD IP system, resulting in a polyamide film consisting of a robust aromatic skeleton and a soft aliphatic-ring side chain. The introduction of the soft aliphatic ring not only regulated the packing of the polymer network during the IP process to create more free volumes but also to increase the moveability of the polymer chains as demonstrated by the molecular simulation, which favored the diffusion of water molecules in the polyamide layer. In addition, the synthesized RO membrane also has an increased surface roughness, offering a more effective area for water permeation. The obtained RFRO membranes showed a simultaneously enhanced water permeance and salt rejection. This work provided a novel designing scheme for fabricating an RO membrane with an outstanding performance. | 7,152.6 | 2023-02-01T00:00:00.000 | [
"Engineering"
] |
Analysis for DC and RF Characteristics Recessed-Gate GaN MOSFET Using Stacked TiO2/Si3N4 Dual-Layer Insulator
The self-heating effects (SHEs) on the electrical characteristics of the GaN MOSFETs with a stacked TiO2/Si3N4 dual-layer insulator are investigated by using rigorous TCAD simulations. To accurately analyze them, the GaN MOSFETs with Si3N4 single-layer insulator are conducted to the simulation works together. The stacked TiO2/Si3N4 GaN MOSFET has a maximum on-state current of 743.8 mA/mm, which is the improved value due to the larger oxide capacitance of TiO2/Si3N4 than that of a Si3N4 single-layer insulator. However, the electrical field and current density increased by the stacked TiO2/Si3N4 layers make the device’s temperature higher. That results in the degradation of the device’s performance. We simulated and analyzed the operation mechanisms of the GaN MOSFETs modulated by the SHEs in view of high-power and high-frequency characteristics. The maximum temperature inside the device was increased to 409.89 K by the SHEs. In this case, the stacked TiO2/Si3N4-based GaN MOSFETs had 25%-lower values for both the maximum on-state current and the maximum transconductance compared with the device where SHEs did not occur; Ron increased from 1.41 mΩ·cm2 to 2.56 mΩ·cm2, and the cut-off frequency was reduced by 26% from 5.45 GHz. Although the performance of the stacked TiO2/Si3N4-based GaN MOSFET is degraded by SHEs, it shows superior electrical performance than GaN MOSFETs with Si3N4 single-layer insulator.
Introduction
Silicon (Si) is widely used in the semiconductor industry as it is a material with very stable physical properties. However, recognizing the band gap limit, the research on compound semiconductors such as gallium nitride (GaN) that can be used stably at high voltage and high frequency is considered to be an important topic [1][2][3]. The AlGaN/GaNbased high-electron-mobility transistor (HEMT) is suitable for power switching applications. The two-dimensional electron gas (2DEG) formed between AlGaN and GaN layers results in a high switching speed, low on-resistance, large current handling capabilities, and high breakdown voltage [4,5]. In addition, it has long been established as a promising candidate for high-frequency operation because the high saturation velocity of the electrons significantly enhances the transport properties [6]. Overall, although HEMT devices operate in enhancement-mode, the normally off operation is more appropriate for GaN-based transistors to target high-voltage power switching applications for fail-safe requirements and to simplify the design of driving circuits. Methods for normally off operations include gate-recess etching, fluorine plasma ion implantation, the p-type doped gate structure, and the gate-controlled tunnel junction, which has been proven to be capable of normally off operations [7][8][9][10]. Furthermore, HEMTs with a thin gate-insulator have a suppressed leakage current and high reliability due to their improved interface quality [11,12].
However, when the GaN devices operate at a high voltage region, the self-generated heat lowers the maximum power density and accelerates device failure [13]. The selfheating effects (SHEs) cause phonon scattering by increasing channel temperature, which limits the overall performance such as breakdown voltage, gate-leakage current, stability, and negatively sloped saturation curve. Thus, it is important to investigate and analyze models related to thermal behavior [14,15].
In our previous study, we compared the DC characteristics of recessed-gate MIS-HEMTs based on the variation of the Si 3 N 4 , TiO 2 insulator thickness and demonstrated that the application of an appropriate combination of two materials improves the device's DC electrical characteristics [16]. Further, detailed adjustments to the simulation parameters and models were performed, and it was examined that the results were observed to be the same in general. Although SHEs were applied, a thorough investigation of heat generation and RF properties for use in power amplifier applications were not included.
In conclusion, in this study, we compare the DC performance changes in GaN MOSFET using the stacked TiO 2 /Si 3 N 4 dual-or Si 3 N 4 single-layer insulator depending on whether the SHEs is applied and analyze the operation at a large RF frequency, which is expected to change due to dispersion of temperature, considering the thermal mechanism. In terms of SHEs, the most critical factor in determining the level of thermal rise inside a device is the thermal conductivity of each material used for device fabrication. When a device is made of a material with high thermal conductivity, heat generated spontaneously during operation can easily escape to the outside, which suppresses the device's performance degradation [17]. Therefore, for the proposed device structure, we also experiment to explore the tendency and sensitivity of the performance change when materials with different thermal conductivities are used as substrates. Figure 1 shows the cross-section of GaN MOSFET based on AlGaN/GaN heterostructure with a dual-layer insulator comprising Si 3 N 4 /TiO 2 (10/20 nm thickness) under the recessed gate. The single-layer insulator in the device compared with the proposed structure has only a 30 nm thick Si 3 N 4 , and all other details and conditions are the same. The length of the gate head (L G ) is 2 µm and the length from the gate to the source and drain is 5 µm each (L GS and L GD , respectively), which are symmetrical in structure. The AlGaN layer is 25 nm thick on both sides under the insulator (T AlGaN ), and the thickness of the GaN channel is 100 nm (T channel ); GaN and AlGaN also form a 2DEG layer based on the heterostructure. The 2DEG layer under the gate is removed because there is a recessed gate with a 25 nm depth in the center, implying that the channels that were naturally created by the two materials are not formed. Thus, it operates similarly to a MOSFET that forms a channel when a positive voltage is applied, and the normally off operation is possible by shifting the threshold voltage in the positive direction. GaN buffer layer thickness is 2 µm (T buffer ) under the channel and sapphire is used as a substrate.
Materials and Methods
Many studies have been conducted to determine an optimum substrate material and thickness to prevent instability and controllability degradation due to the temperature rise in the device. The method of changing the substrate does not require complicated procedures, is nondestructive, and simple to apply to existing technologies. Diamond, silicon carbide (SiC), Si, and sapphire have been compared as candidates for GaN substrates. Particularly, SiC and diamond, which have low thermal resistance when used with GaN, are suitable materials that can reduce the maximum temperature [18]. Table 1 shows that the thermal conductivity of sapphire and SiC is 35 and 420 W/mK, respectively, meaning SiC is more than 10 times higher; thus, the characteristic change can be confirmed due to the difference in temperature distribution and heat circulation. Therefore, we additionally confirm the electrical properties and RF performance variation when the substrate material was changed from sapphire to SiC with different thermal conductivity. SiC is more than 10 times higher; thus, the characteristic change can be confirmed due to the difference in temperature distribution and heat circulation. Therefore, we additionally confirm the electrical properties and RF performance variation when the substrate material was changed from sapphire to SiC with different thermal conductivity. In this study, various models were applied to include the phenomena that occur during the operation of the device through the ATLAS technology computer-aided design simulation (Silvaco Inc., Santa Clara, CA, USA). Considering the piezoelectric and spontaneous polarization in the 2DEG layer between AlGaN and GaN, the strain due to lattice mismatch was automatically calculated and Shockley-Read-Hall recombination was applied as a physical model. In addition, the device's DC characteristics were derived by adjusting the low and high field mobilities, and we obtained more accurate results by providing interface trap, thermal conductivity, impact ionization, lattice temperature, and permittivity values for each material.
When the device is turned on, a model that spontaneously increases the temperature is used, and this heat generation is explained by lattice heat flow and general thermal environments in the simulation. The equation for calculating the mechanism that changes due to heat generated by the SHEs can be expressed as follows: where C is the heat capacitance per unit volume, TL is the local lattice temperature, k is the thermal conductivity, and H is the heat generation. The peak of temperature and the temperature distribution are calculated and determined through numerical simulation when the temperature of the lattice increases by applying bias. This model calculates the lattice temperature depending on the material and transmission parameters. It also supports In this study, various models were applied to include the phenomena that occur during the operation of the device through the ATLAS technology computer-aided design simulation (Silvaco Inc., Santa Clara, CA, USA). Considering the piezoelectric and spontaneous polarization in the 2DEG layer between AlGaN and GaN, the strain due to lattice mismatch was automatically calculated and Shockley-Read-Hall recombination was applied as a physical model. In addition, the device's DC characteristics were derived by adjusting the low and high field mobilities, and we obtained more accurate results by providing interface trap, thermal conductivity, impact ionization, lattice temperature, and permittivity values for each material.
When the device is turned on, a model that spontaneously increases the temperature is used, and this heat generation is explained by lattice heat flow and general thermal environments in the simulation. The equation for calculating the mechanism that changes due to heat generated by the SHEs can be expressed as follows: where C is the heat capacitance per unit volume, T L is the local lattice temperature, k is the thermal conductivity, and H is the heat generation. The peak of temperature and the temperature distribution are calculated and determined through numerical simulation when the temperature of the lattice increases by applying bias. This model calculates the lattice temperature depending on the material and transmission parameters. It also supports general thermal environment specifications using a combination of realistic heat sink construction, thermal impedance, and specified ambient temperature [23][24][25][26]. In addition, for heat flow, the Neumann boundary condition is set as a default value at all boundaries except for the floor, if the model that controls the movement of heat to the floor is not applied; thus, we must provide the thermal resistance values to aid our calculations. Because the thermal contact is not set on the top to focus on the bottom, which is the path where heat escapes to the outside, the movement of heat in the device is determined to be in the direction of the bottom [27].
Dependence of Heat Generation on Oxide Capacitance
Although SiO 2 as a gate-insulator material can prevent leakage current, it has low transconductance (g m ) and large pinch-off voltage, which causes many problems in terms of scaling the device down. Alternatively, high-k dielectrics, such as TiO 2 , Al 2 O 3 , and HfO 2 , minimize gate-leakage current, increase transconductance, and have high breakdown voltages, making it possible to have a performance suitable for power devices [28,29]. TiO 2 and Si 3 N 4 , which we adopted as high-k gate-insulator materials, have dielectric constants of 80 and 8, respectively. When TiO 2 is used alone as a gate-insulator, although the high dielectric constant can result in better electrical properties, its small band gap generates a large leakage current compared with other high-k materials. Furthermore, sputtering deposition directly on GaN produces poor quality that loses the function of the insulator to prevent leakage. Therefore, it is possible to maintain a high capacitance value by stacking TiO 2 on Si 3 N, and Si 3 N 4 is already frequently used for passivation of GaN devices, so it can solve the difficulties in the process. Deposition is possible through various methods such as in situ deposition in the metalorganic chemical vapor deposition (MOCVD) chamber, plasma-enhanced chemical vapor deposition (PECVD), and low-pressure chemical vapor deposition (LPCVD) [30]. The formula for calculating the capacitance of an insulator composed of two materials is as follows: where ε TiO 2 and ε Si 3 N 4 are relative dielectric constants (ε TiO 2 = 80, ε Si 3 N 4 = 8) of TiO 2 and Si 3 N 4 , respectively; ε 0 is the vacuum permittivity. Using t TiO 2 and t Si 3 N 4 as the thicknesses of TiO 2 and Si 3 N 4 (t TiO 2 = 20 nm, t Si 3 N 4 = 10 nm), respectively, C TiO 2 and C Si 3 N 4 can be calculated; in the case of a dual-layer insulator the accumulation capacitance (C total ) can be calculated from Equation (4), considering that the capacitors are connected in series. The calculated value of C Si 3 N 4 in the device with a single-layer insulator is 236 nF/cm 2 , whereas the C total in the device using a dual-layer insulator is about 590 nF/cm 2 , which is roughly twice as large, implying superior current characteristics. Moreover, it shows that a high capacitance value can be induced under the condition that the ratio of t TiO 2 is greater than t Si 3 N 4 for a constant insulator thickness of 30 nm. Figure 2a,b shows the drain current (I D )-gate voltage (V G ) transfer curves when SHE is applied and not applied to recessed-gate GaN MOSFET devices with the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators, respectively. Without the SHE, when V GS is applied from 4 to 10 V, the maximum I D (I D, max ) in the device using the stacked TiO 2 /Si 3 N 4 dual-layer insulator at bias V DS = 10 V is 743.80 mA/mm, which is over 15% higher than 643.98 mA/mm of the device with the Si 3 N 4 single-layer insulator and their maximum transconductance (g m, max ) are 115. 19 and 93.06 mS/mm, respectively. With the SHE, under the same conditions, I D, max is 555.15 mA/mm when a stacked TiO 2 /Si 3 N 4 duallayer insulator is used, which is 12% higher than 495.61 mA/mm of the device with a Si 3 N 4 single-layer insulator, and g m, max is 87.30, 69.55 mS/mm, respectively. In Figure 2a,b, the V G with the maximum value of transconductance moves in the negative direction under the influence of the SHE, confirming the tendency that the devices with the stacked TiO 2 /Si 3 N 4 dual-layer insulators have a larger value than devices with a Si 3 N 4 single-layer insulator for both characteristics, whether the SHE is applied. As aforementioned, the capacitance value connected in series due to TiO 2 and Si 3 N 4 is much larger than when only Si 3 N 4 is used, and this is a significant factor that has a great influence on the current increase despite the performance degradation caused by SHEs. SHE, under the same conditions, ID, max is 555.15 mA/mm when a stacked TiO2/Si3N4 duallayer insulator is used, which is 12% higher than 495.61 mA/mm of the device with a Si3N4 single-layer insulator, and gm, max is 87.30, 69.55 mS/mm, respectively. In Figure 2a,b, the VG with the maximum value of transconductance moves in the negative direction under the influence of the SHE, confirming the tendency that the devices with the stacked TiO2/Si3N4 dual-layer insulators have a larger value than devices with a Si3N4 single-layer insulator for both characteristics, whether the SHE is applied. As aforementioned, the capacitance value connected in series due to TiO2 and Si3N4 is much larger than when only Si3N4 is used, and this is a significant factor that has a great influence on the current increase despite the performance degradation caused by SHEs.
(a) (b) Before we analyze the effect of heat in this study based on the type and thickness of the material used as a gate insulator, it is necessary to first understand how the device's self-heating system works. In Silvaco ATLAS, the overall mechanism that changes due to heat generation and heat flow is calculated from the joule heating, and the equation is as follows: where H represents the generated heat, J n and J p represent the electron and hole current density, respectively, and E denotes the electric field. The heat generation in the GaN channel, which depends on the current density and electric field, contributes to the determination of the electrical properties according to insulator type; thus, the analysis of the correlation between these two factors is required. Figure 3a,b can play an auxiliary role in the convenient visualization of theoretical content. Figure 3a shows the overall potential distribution for the entire region in the device using the stacked TiO2/Si3N4 dual-layer insulator when a horizontal cutline is drawn along the channel where the 2DEG exists. In the off state (VGS = 0 V) with VDS applied to 20 V, an abrupt change occurs in the drain-side gate edge region, resulting in a large voltage drop. Figure 3b shows that the electric field is close to 0 due to the presence of Si3N4 instead of AlGaN under the recessed gate, and a distribution with a tendency similar to the potential that rapidly rises at the gate edge of the drain side is also shown. This means that it can withstand a strong electric field with the voltage drop which implies that it generates most of the heat and has the highest temperature value in this region [31]. Before we analyze the effect of heat in this study based on the type and thickness of the material used as a gate insulator, it is necessary to first understand how the device's self-heating system works. In Silvaco ATLAS, the overall mechanism that changes due to heat generation and heat flow is calculated from the joule heating, and the equation is as follows: where H represents the generated heat, → J n and → J p represent the electron and hole current density, respectively, and E denotes the electric field. The heat generation in the GaN channel, which depends on the current density and electric field, contributes to the determination of the electrical properties according to insulator type; thus, the analysis of the correlation between these two factors is required. Figure 3a,b can play an auxiliary role in the convenient visualization of theoretical content. Figure 3a shows the overall potential distribution for the entire region in the device using the stacked TiO 2 /Si 3 N 4 dual-layer insulator when a horizontal cutline is drawn along the channel where the 2DEG exists. In the off state (V GS = 0 V) with V DS applied to 20 V, an abrupt change occurs in the drain-side gate edge region, resulting in a large voltage drop. Figure 3b shows that the electric field is close to 0 due to the presence of Si 3 N 4 instead of AlGaN under the recessed gate, and a distribution with a tendency similar to the potential that rapidly rises at the gate edge of the drain side is also shown. This means that it can withstand a strong electric field with the voltage drop which implies that it generates most of the heat and has the highest temperature value in this region [31]. Figure 4a,b show the lattice temperature distribution in a device with the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators, in which heat flow is reflected and particle motions, such as mobility and scattering, are calculated under the bias V DS = 20 V and V GS = 10 V. In both cases, it indicates that the hottest part where the heat is concentrated is the drain-side gate edge region. The peak temperature in the device using the stacked TiO 2 /Si 3 N 4 dual-layer insulator with SHE is 409.89 K, whereas the device using the Si 3 N 4 single-layer insulator rises to 398.75 K, which is 2.7% smaller according to Figure 4c. The field is formed relatively higher along the channel when the stacked TiO 2 /Si 3 N 4 duallayer insulator is used, whereas the gate-edge portion where the strongest electric field is generated appears to be larger in the device using the Si 3 N 4 single-layer insulator as shown in Figure 5. However, there is a more pronounced difference in the current density compared with the electric field for the two types of devices. Therefore, the maximum temperature at the hotspot is higher in GaN MOSFET with the stacked TiO 2 /Si 3 N 4 duallayer insulator and the current density is much larger. Figure 4a,b show the lattice temperature distribution in a device with the stacked TiO2/Si3N4 dual-and Si3N4 single-layer insulators, in which heat flow is reflected and particle motions, such as mobility and scattering, are calculated under the bias VDS = 20 V and VGS = 10 V. In both cases, it indicates that the hottest part where the heat is concentrated is the drain-side gate edge region. The peak temperature in the device using the stacked TiO2/Si3N4 dual-layer insulator with SHE is 409.89 K, whereas the device using the Si3N4 single-layer insulator rises to 398.75 K, which is 2.7% smaller according to Figure 4c. The field is formed relatively higher along the channel when the stacked TiO2/Si3N4 dual-layer insulator is used, whereas the gate-edge portion where the strongest electric field is generated appears to be larger in the device using the Si3N4 single-layer insulator as shown in Figure 5. However, there is a more pronounced difference in the current density compared with the electric field for the two types of devices. Therefore, the maximum temperature at the hotspot is higher in GaN MOSFET with the stacked TiO2/Si3N4 dual-layer insulator and the current density is much larger. Figure 6a,b shows the ID-drain voltage (VD) transfer curves for two types of devices with and without SHE. In the saturation region, as the slope of the curve decreases and the current tends to be constant, when SHE is not applied, whereas the saturation current is degraded when SHE is applied [32]. This phenomenon is because the increasing electric field and current density contribute to heat generation, and as thermal scattering is accelerated, electron mobility is reduced. When VGS = 10 V, ID, max is 883.05 mA/mm for GaN MOSFET with the stacked TiO2/Si3N4 dual-layer insulator without SHE, and the ID decreases to 536.20 mA/mm when the heat is generated. Furthermore, ID, max of GaN MOSFET with a Si3N4 single-layer insulator is 727.71 and 463.98 mA/mm with and without SHE, respectively, and it is confirmed that the electrical performance is lowered by SHE. The lattice has a relatively larger peak temperature value when the stacked TiO2/Si3N4 duallayer insulator is used; however, it still has a higher ID despite severe thermal scattering because the current density is significantly higher in an environment with a constant heat of 300 K. This implies that if the characteristics of both devices are analyzed at the same temperature, additional benefits in terms of current can be obtained when the dual-layer insulator is used. Figure 6a,b shows the I D -drain voltage (V D ) transfer curves for two types of devices with and without SHE. In the saturation region, as the slope of the curve decreases and the current tends to be constant, when SHE is not applied, whereas the saturation current is degraded when SHE is applied [32]. This phenomenon is because the increasing electric field and current density contribute to heat generation, and as thermal scattering is accelerated, electron mobility is reduced. When V GS = 10 V, I D, max is 883.05 mA/mm for GaN MOSFET with the stacked TiO 2 /Si 3 N 4 dual-layer insulator without SHE, and the I D decreases to 536.20 mA/mm when the heat is generated. Furthermore, I D, max of GaN MOSFET with a Si 3 N 4 single-layer insulator is 727.71 and 463.98 mA/mm with and without SHE, respectively, and it is confirmed that the electrical performance is lowered by SHE. The lattice has a relatively larger peak temperature value when the stacked TiO 2 /Si 3 N 4 dual-layer insulator is used; however, it still has a higher I D despite severe thermal scattering because the current density is significantly higher in an environment with a constant heat of 300 K. This implies that if the characteristics of both devices are analyzed at the same temperature, additional benefits in terms of current can be obtained when the dual-layer insulator is used. Figure 7 shows the specific on resistance value extracted from the ID-VD transfer curve when the lattice temperature is increased from 300 K to 600 K to examine the change trend of electrical characteristics under the same lattice temperature. The self-heating model produces temperature dispersion, but in this experiment, the temperature in all regions was set to be the constant. The resistance is calculated from the following equation: Figure 7 shows the specific on resistance value extracted from the I D -V D transfer curve when the lattice temperature is increased from 300 K to 600 K to examine the change trend of electrical characteristics under the same lattice temperature. The self-heating model produces temperature dispersion, but in this experiment, the temperature in all regions was set to be the constant. The resistance is calculated from the following equation:
Temperature Sensitivity Comparison
where R on is on resistance value when a model that sets the constant to 300-600 K is applied under bias V GS = 10 V, W is the width of the device, and L SD is the length from the source to the drain [33]. The difference in resistance for two devices is due to the change in the insulator type, and considering that the parasitic resistance is the same, we can expect that the R channel dominates. Regardless of the type of gate-insulator, the values of R on, sp tend to increase linearly with temperature increases. At 300 K, the resistance of the device with the stacked TiO 2 /Si 3 N 4 dual-layer insulator is 1.42 mΩ·cm 2 , which is about 7.8% smaller than the resistance value of 1.54 mΩ·cm 2 for the device with a Si 3 N 4 single-layer insulator and the resistance values of the GaN MOSFET with the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators are 6.02 and 7.00 mΩ·cm 2 , respectively, which is about 16% larger, at 600 K. Figure 6. ID-VD transfer characteristics with and without self-heating effect in recessed-gate G MOSFET using (a) the stacked TiO2/Si3N4 dual-layer insulator, (b) Si3N4 single-layer insulator. Figure 7 shows the specific on resistance value extracted from the ID-VD transfer cur when the lattice temperature is increased from 300 K to 600 K to examine the change tre of electrical characteristics under the same lattice temperature. The self-heating mod produces temperature dispersion, but in this experiment, the temperature in all regio was set to be the constant. The resistance is calculated from the following equation: where Ron is on resistance value when a model that sets the constant to 300-600 K is a plied under bias VGS = 10 V, W is the width of the device, and LSD is the length from t source to the drain [33]. The difference in resistance for two devices is due to the chan in the insulator type, and considering that the parasitic resistance is the same, we can e pect that the Rchannel dominates. Regardless of the type of gate-insulator, the values of R sp tend to increase linearly with temperature increases. At 300 K, the resistance of the d vice with the stacked TiO2/Si3N4 dual-layer insulator is 1.42 mΩ·cm 2 , which is about 7.8 smaller than the resistance value of 1.54 mΩ·cm 2 for the device with a Si3N4 single-lay insulator and the resistance values of the GaN MOSFET with the stacked TiO2/Si3N4 du and Si3N4 single-layer insulators are 6.02 and 7.00 mΩ·cm 2 , respectively, which is abo 16% larger, at 600 K. Moreover, a Si 3 N 4 single-layer insulator makes the slope of resistance steeper with increasing temperature. At high temperatures, the R on, sp of the recessed-gate GaN MOSFET using the stacked TiO 2 /Si 3 N 4 dual-layer insulator maintains a smaller value than that using the Si 3 N 4 single-layer insulator; thus, it is estimated that carrier movement in the channel will be easier. Figure 8 shows the breakdown voltage characteristics of the recessed-gate GaN MOS-FET when the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators are used. We set the voltage at which the I D = 1 µA/mm as the breakdown voltage and observed the breakdown voltage of the device composed of Si 3 N 4 and TiO 2 and that composed of Si 3 N 4 to be 178 and 158 V, respectively. The device with the stacked TiO 2 /Si 3 N 4 dual-layer insulator had a larger breakdown due to the effective dispersion when a high voltage is applied; thus, it has a stronger ability to withstand the high heat generated by the high voltage from a device operation perspective [16].
set the voltage at which the ID = 1 μA/mm as the breakdown voltage and observed the breakdown voltage of the device composed of Si3N4 and TiO2 and that composed of Si3N4 to be 178 and 158 V, respectively. The device with the stacked TiO2/Si3N4 dual-layer insulator had a larger breakdown due to the effective dispersion when a high voltage is applied; thus, it has a stronger ability to withstand the high heat generated by the high voltage from a device operation perspective [16]. Figure 9a,b shows the current and unilateral gains based on frequency increase in the recessed-gate GaN MOSFET with the stacked TiO2/Si3N4 dual-and Si3N4 single-layer insulators, where cut-off frequency (fT) and maximum oscillation frequency (fmax) values were extracted at high frequency with and without SHE. The RF characteristics analysis is possible using the equation derived from the Y-parameter, and the equation is as follows: f max = f T 4R g · (g ds + 2πf T C gd ) (8) where gm represents the transconductance, Cgs and Cgd are the extrinsic gate-source and gate-drain capacitance (expressed as Cox = Cgs + Cgd), respectively, Rg is the gate resistance, and gds is source-drain conductance. From the ID-VG transfer characteristics curve, the VG at which gm becomes maximum for each case was applied on the RF simulation. It was induced by increasing transconductance and oxide capacitance, since GaN MOSFET with the stacked TiO2/Si3N4 dual-layer insulator aims to improve the DC characteristics; but Equation (5) shows that gm and Cox for fT have opposite relationships, requiring a complex analysis. Table 2 summarizes the capacitance values and fT calculated using the Y-parameter extracted by applying the AC signal model to each case. The cut-off frequency has a Figure 9a,b shows the current and unilateral gains based on frequency increase in the recessed-gate GaN MOSFET with the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators, where cut-off frequency (f T ) and maximum oscillation frequency (f max ) values were extracted at high frequency with and without SHE. The RF characteristics analysis is possible using the equation derived from the Y-parameter, and the equation is as follows: where g m represents the transconductance, C gs and C gd are the extrinsic gate-source and gate-drain capacitance (expressed as C ox = C gs + C gd ), respectively, R g is the gate resistance, and g ds is source-drain conductance. From the I D -V G transfer characteristics curve, the V G at which g m becomes maximum for each case was applied on the RF simulation. It was induced by increasing transconductance and oxide capacitance, since GaN MOSFET with the stacked TiO 2 /Si 3 N 4 dual-layer insulator aims to improve the DC characteristics; but Equation (5) shows that g m and C ox for f T have opposite relationships, requiring a complex analysis. Table 2 summarizes the capacitance values and f T calculated using the Y-parameter extracted by applying the AC signal model to each case. The cut-off frequency has a larger value when a Si 3 N 4 single-layer insulator is used, and the SHE is not applied for GaN MOSFET as shown in Figure 9a. tions in terms of RF characteristics are possible since Cgs is extremely small. The fmax to which the SHE is applied has a larger value than that to which the SHE is not applied in the two types of devices, unlike Figure 9a. Table 2 shows that the values of Cgs and Cgd are reversed based on whether SHE is applied because the value of the VG in the section where gm is the maximum decreases as the temperature increase; therefore, the capacitance value is estimated to be small when that value is applied during AC simulation, which affects these results. However, it is unchanged that the maximum frequency value is larger when GaN MOSFET is used as a Si3N4 single-layer insulator. (c) (d) Figure 9. (a) Cut-off frequency, (b) maximum frequency and dependence of (c) gate to source capacitance and (d) gate to drain on frequency for the recessed-gate GaN MOSFET using the stacked TiO2/Si3N4 dual-layer insulator and Si3N4 single-layer insulator with and without self-heating effect. Figure 10 shows the distribution of the lattice temperature when only the substrate material is changed to SiC under the same device structure and bias conditions. The overall temperature difference relatively reduces and the peak temperature value at the drain side gate-edge of the GaN MOSFET using the stacked TiO2/Si3N4 dual-and Si3N4 singlelayer insulators is 346.75 and 346.79 K, respectively; it exhibits an inverted trend compared to when the substrate is used as a sapphire. We demonstrate that SiC, which has excellent heat-transfer ability, can reduce the self-heating damage by preventing heat generated during device operation from being trapped inside and emitting it to the outside. Because V D and V G are changed by AC signals, unlike in DC signals, which affect the charge of the channel and gate, the C gd and C gs should be separately analyzed; thus, the semiconductor oxide capacitance value can no longer be defined as only C ox . Figure 9c,d shows C gd and C gs in the frequency range of 10-10 11 Hz for the four cases in Figure 9a,b, where C gs is dominant in determining the oxide capacitance value. The C gs is larger for devices with SHE than those without SHE, which is due to a decrease in the thermally activated carriers' detrapping phenomena in the donor layer, resulting in a decrease in the equivalent doping level; thus, smaller C gs is observed at lower temperatures [34]. We confirm from this capacitance analysis that the C gs of devices with a stacked TiO 2 /Si 3 N 4 dual-layer insulator is large, as g m has a significantly large value, whereas the g m of devices with a Si 3 N 4 single-layer insulator is relatively small. However, higher frequency operations in terms of RF characteristics are possible since C gs is extremely small. The f max to which the SHE is applied has a larger value than that to which the SHE is not applied in the two types of devices, unlike Figure 9a. Table 2 shows that the values of C gs and C gd are reversed based on whether SHE is applied because the value of the V G in the section where g m is the maximum decreases as the temperature increase; therefore, the capacitance value is estimated to be small when that value is applied during AC simulation, which affects these results. However, it is unchanged that the maximum frequency value is larger when GaN MOSFET is used as a Si 3 N 4 single-layer insulator. Figure 10 shows the distribution of the lattice temperature when only the substrate material is changed to SiC under the same device structure and bias conditions. The overall temperature difference relatively reduces and the peak temperature value at the drain side gate-edge of the GaN MOSFET using the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators is 346.75 and 346.79 K, respectively; it exhibits an inverted trend compared to when the substrate is used as a sapphire. We demonstrate that SiC, which has excellent heat-transfer ability, can reduce the self-heating damage by preventing heat generated during device operation from being trapped inside and emitting it to the outside.
Heat Transfer Materials
(c) (d) Figure 9. (a) Cut-off frequency, (b) maximum frequency and dependence of (c) gate to sou pacitance and (d) gate to drain on frequency for the recessed-gate GaN MOSFET using the s TiO2/Si3N4 dual-layer insulator and Si3N4 single-layer insulator with and without self-heating Figure 10 shows the distribution of the lattice temperature when only the sub material is changed to SiC under the same device structure and bias conditions. The all temperature difference relatively reduces and the peak temperature value at the side gate-edge of the GaN MOSFET using the stacked TiO2/Si3N4 dual-and Si3N4 s layer insulators is 346.75 and 346.79 K, respectively; it exhibits an inverted trend com to when the substrate is used as a sapphire. We demonstrate that SiC, which has exc heat-transfer ability, can reduce the self-heating damage by preventing heat gene during device operation from being trapped inside and emitting it to the outside. Because the replacement of the substrate material has an effect only when heat fer occurs due to an increase in internal temperature, it does not affect the overall d properties while maintaining a constant 300 K without SHE. Figure 11a,b show th use of SiC increases the ID, max of the GaN MOSFET with the stacked TiO2/Si3N4 dua Si3N4 single-layer insulators to 640.52 and 564.29 mA/mm, respectively, and narrow performance gap with the device without a temperature change. Because the replacement of the substrate material has an effect only when heat transfer occurs due to an increase in internal temperature, it does not affect the overall device properties while maintaining a constant 300 K without SHE. Figure 11a,b show that the use of SiC increases the I D, max of the GaN MOSFET with the stacked TiO 2 /Si 3 N 4 dual-and Si 3 N 4 single-layer insulators to 640.52 and 564.29 mA/mm, respectively, and narrows the performance gap with the device without a temperature change. Figure 12a,b show the current and power gain based on the frequency obtained by applying the changed VG at which gm becomes maximum by changing the substrate to SiC. The RF characteristic remains constant regardless of the substrate material change without temperature rise. When SHE is applied, the fT of devices with the stacked TiO2/Si3N4 dualand Si3N4 single-layer insulators is 4.7 and 6.64 GHz, respectively, which are improved by 18% and 13% compared with GaN on the sapphire. Consequently, the difference from the Figure 12a,b show the current and power gain based on the frequency obtained by applying the changed V G at which g m becomes maximum by changing the substrate to SiC. The RF characteristic remains constant regardless of the substrate material change without temperature rise. When SHE is applied, the f T of devices with the stacked TiO 2 /Si 3 N 4 dualand Si 3 N 4 single-layer insulators is 4.7 and 6.64 GHz, respectively, which are improved by 18% and 13% compared with GaN on the sapphire. Consequently, the difference from the frequency without SHE is also minimized, and f max results are improved, reflecting the same trend.
Heat Transfer Materials
(a) (b) Figure 11. ID-VG transfer characteristics with and without SHE in recessed-gate GaN MOSFET on SiC using (a) the stacked TiO2/Si3N4 dual-layer insulator, (b) Si3N4 single-layer insulator at VDS = 10 V. Figure 12a,b show the current and power gain based on the frequency obtained by applying the changed VG at which gm becomes maximum by changing the substrate to SiC. The RF characteristic remains constant regardless of the substrate material change without temperature rise. When SHE is applied, the fT of devices with the stacked TiO2/Si3N4 dualand Si3N4 single-layer insulators is 4.7 and 6.64 GHz, respectively, which are improved by 18% and 13% compared with GaN on the sapphire. Consequently, the difference from the frequency without SHE is also minimized, and fmax results are improved, reflecting the same trend.
Conclusions
We have analyzed the recessed-gate GaN MOSFET with the stacked TiO2/Si3N4 duallayer insulator for several DC and RF characteristics by using a TCAD simulation. By increasing the oxide capacitance of the Si3N4 and TiO2 combination, we have confirmed that it has a smaller Ron, larger ID and improved gm compared with the device using a Si3N4 single-layer insulator. The breakdown voltage is also relatively high, so it has strength as a power device. Furthermore, RF characteristics including current gain and power gain were evaluated. In addition, the self-heating effects (SHEs) model were reflected in the
Conclusions
We have analyzed the recessed-gate GaN MOSFET with the stacked TiO 2 /Si 3 N 4 duallayer insulator for several DC and RF characteristics by using a TCAD simulation. By increasing the oxide capacitance of the Si 3 N 4 and TiO 2 combination, we have confirmed that it has a smaller R on , larger I D and improved g m compared with the device using a Si 3 N 4 single-layer insulator. The breakdown voltage is also relatively high, so it has strength as a power device. Furthermore, RF characteristics including current gain and power gain were evaluated. In addition, the self-heating effects (SHEs) model were reflected in the simulation, and important changes in DC and RF characteristics occurred. The performance degradation by SHEs is more affected for GaN MOSFETs with the stacked TiO 2 /Si 3 N 4 dual-layer insulators due to its larger electric field and current density. Nevertheless, the dual-layer insulator induces the transistor to have enhanced DC performances. In conclusion, the recessed-gate GaN MOSFETs with the stacked TiO 2 /Si 3 N 4 dual-layer insulator can be expected to be candidates for devices with an attractive ability to deliver high power at high frequency. | 9,854.6 | 2022-01-21T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Hierarchical Structure of Glucosamine Hydrochloride Crystals in Antisolvent Crystallization
: The crystal morphology of glucosamine hydrochloride (GAH) during antisolvent crystallization was investigated in this work. Particles of different shapes, such as plate-like crystals, leaflike clusters, fan-like dendrites, flower-like aggregates, and spherulites, were produced by tuning the type of antisolvents and crystallization operating conditions. The hierarchical structures of GAH crystals tended to be formed in a water + isopropanol mixture. The effects of operation parameters on the polycrystalline morphology were studied, including crystallization temperature, solute concentration, feeding rate of GAH aqueous solution, solvent-to-antisolvent mass ratio, and stirring rate. The evolution process of GAH spherulites was monitored using SEM, indicating a crystallographic branching mode. The crystal habit was predicted to identify the dominant faces. Molecular dynamics simulations were performed and the interaction energy of solute or solvent molecules on crystal surfaces was calculated. The experimental and simulation studies help to understand the branching mechanism and design a desired particle morphology.
Introduction
Hierarchical structures are common in the biomineralization process, which are organized by nanostructured building blocks [1,2]. Seashells, corals, nacre, and eggshells are typical examples that have three-dimensional complex structures consisting of highly ordered nanocrystals [3,4]. These materials possess enhanced mechanical properties [3]. Motivated by exploring unique properties, research from a wide range of fields has been focused on the synthesis of hierarchical structures [5][6][7]. For instance, by combusting inorganic powder mixtures, AIN three-dimensional structures with diverse morphology have been demonstrated: from wildflower-like patterned crystals to multilayer hierarchical structures [5]. The micro/nano-spherulitic hierarchical 2,2 ,4,4 ,6,6 -hexanitrostilbene has been fabricated, which is promising to solve the problems of nanoscale energetic materials in agglomeration and microscale bulk crystals in low activity [6]. Materials with hierarchical structures could find broad applications in batteries [8], ceramics [9,10], catalysis [11,12], sensors [13,14], the food industry [15], pharmaceutics [16], etc. For example, when the flower-like SnO 2 nanocrystal was used as the photoanode in dye-sensitized solar cells, the photoelectric conversion efficiency could be largely enhanced [8]. Flower-like MgO crystals growing in the face of the ceramics create photoluminescence of Mg-cBN ceramics [9]. Photonic crystals with biological hierarchical structures show tunable optical properties by external stimuli [17]. By controlling the crystal phase structure of WO 3 hierarchical spheres, the gas sensing performance is enhanced significantly [14]. There are also growing reports about hierarchical structures of organic or pharmaceutical crystals [18][19][20]. It has been reported that spherical calcium citrate could be prepared via controllable reactive
Antisolvent Crystallization Experiments
The raw GAH was dissolved in water to prepare GAH solution at a certain concentration. Antisolvent crystallization was carried out by introducing aqueous solution of GAH into organic solvent, which was preloaded in a 150 mL jacketed crystallizer. The solutions were kept at a constant temperature by using a water circulation bath (Ministat 230, Huber, Berching, Germany). The feeding rate of GAH solution was controlled by a peristaltic pump. After agitating for another 30 min, the suspensions were filtered, washed, and dried in a vacuum oven at 40 • C. The effects of solvent, temperature, solute concentration, feeding rate, of solvent (water)-to-antisolvent (organic solvent) mass ratio, and stirring rate on the morphology of GAH crystals were investigated. Details about the crystallization conditions are listed in Table 1.
Characterization
Polarized optical microscopy (POM, Olympus BX53M, Olympus Corporation, Tokyo, Japan) and scanning electron microscopy (SEM, SUPRA™ 55, Hitachi Ltd., Tokyo, Japan) were used to characterize the particle morphology. The evolution of GAH hierarchical structures during antisolvent crystallization (E16 Table 1) was also ex situ monitored, where time-controlled samples were observed using SEM. The crystal form was identified by powder X-ray diffraction (XRD, Miniflex 600, Rigaku Corporation, Tokyo, Japan) using Cu Kα radiation (λ = 0.1541 nm). It was operated at 40 kV and 30 mA. The PXRD patterns were collected in a 2θ range from 5 • to 50 • with a step size of 0.02 • at a scanning speed of 8 • min −1 .
Molecular Simulations
The GAH crystal has a space group of P2 1 and Z = 2 in the unit cell (a = 7.147 Å, b = 9.214 Å, c = 7.765 Å, and β = 112.88 • ) [39]. Materials Studio software was applied for molecular simulation using the COMPASS force field [40,41]. The crystal morphology of GAH in a vacuum was simulated using the Bravais-Friedel-Donnay-Harker (BFDH) model, which uses the crystal lattice and symmetry to generate a list of possible growth faces [42]. Molecular dynamics (MD) simulations were carried out to study the interactions between the crystalline plane of GAH and the solution layer. To build a crystal surface of GAH exposed to a solution, the unit cell was cleaved according to the Miller indices and then extended to a supercell. The bulk solution was constructed containing 200 water molecules, 300 isopropanol molecules, and a certain number of solutes. The supersaturation of the solution was set to be 20 and the solute concentration was calculated based on the solubility of GAH in water-isopropanol mixtures at 298.15 K [35]. The solution layer was added to the top of the crystal surface, and a 50 Å vacuum slab was also included in the simulation box. A motion constraint was applied to the crystal surface and the whole simulation system was geometrically optimized. MD simulations were performed for 1000 ps in the NVT ensemble at 298.15 K using a Nose thermostat. The electrostatic interactions were calculated using the Ewald summation method and the van der Waals forces were calculated using the atom-based method. The cutoff radius was set to be 15.5 Å and the time step was 1 fs. The interaction energies between the crystalline surface and solvent or solute in the equilibrium system were calculated using the following expressions: where E surface is the energy of the crystal face, E solvent is the energy of the mixed solvent of water and isopropanol, and E solute is the energy of the solute GAH. E total(surface-solvent) represents the total potential energy in the simulation box removed from solute. E total(surface-solute) represents the total potential energy in the simulation box removed from solvent molecules.
Crystal Form and Morphology of GAH in Different Solvents
Antisolvent crystallization is carried out in aqueous solution mixed with different organic solvents (Table 1 E1-E6). Microscope images of GAH crystals are shown in Figure 1. The crystals grown in water + methanol and water + ter-butanol mixed solvents present a hexagonal plate-like morphology. In water + ethanol mixtures, GAH crystals grow in a pentagonal shape, exhibiting asymmetric growth. Interestingly, dendritic spherulites form when isopropanol is used as the antisolvent. When n-propanol or butanol is used as the antisolvent, irregular aggregates are obtained.
XRD measurements are employed to identify the crystal form of GAH. Figure 2 displays XRD spectra of the GAH single crystal [39], raw materials, and crystals grown from antisolvent crystallization in three binary solvent mixtures. The XRD patterns nicely match with the calculated pattern based on the single crystal structure, indicating the same crystal form. But the peak intensities of GAH samples are different due to the differences in crystal shape and size. For example, in water + methanol, ethanol, isopropanol, or tert-butanol, the most intense band is located at 12.4 • , which is assigned to the (001) face of the GAH crystal. In water + ethanol and water + isopropanol systems, the peak intensity at 25.2 • becomes low, leaving the peak at 24.9 • more prominent. They are related to the and (20-1) faces, respectively. The observed reduction in signals is possibly due to the crystal orientation effect [43]. XRD measurements are employed to identify the crystal form of GAH. Figure 2 plays XRD spectra of the GAH single crystal [39], raw materials, and crystals grown fr antisolvent crystallization in three binary solvent mixtures. The XRD patterns nic match with the calculated pattern based on the single crystal structure, indicating same crystal form. But the peak intensities of GAH samples are different due to the ferences in crystal shape and size. For example, in water + methanol, ethanol, isopropa or tert-butanol, the most intense band is located at 12.4°, which is assigned to the (0 face of the GAH crystal. In water + ethanol and water + isopropanol systems, the p intensity at 25.2° becomes low, leaving the peak at 24.9° more prominent. They are rela to the and (20-1) faces, respectively. The observed reduction in signals is possi due to the crystal orientation effect [43]. Figure 2. (a) XRD patterns of GAH single crystal, raw material, and GAH crystallized in wat methanol, water+ ethanol and water + isopropanol; (b) GAH crystals obtained from water + npanol, water + butanol and water + tert-butanol. XRD measurements are employed to identify the crystal form of GAH. Figure 2 d plays XRD spectra of the GAH single crystal [39], raw materials, and crystals grown fro antisolvent crystallization in three binary solvent mixtures. The XRD patterns nice match with the calculated pattern based on the single crystal structure, indicating t same crystal form. But the peak intensities of GAH samples are different due to the d ferences in crystal shape and size. For example, in water + methanol, ethanol, isopropan or tert-butanol, the most intense band is located at 12.4°, which is assigned to the (00 face of the GAH crystal. In water + ethanol and water + isopropanol systems, the pe intensity at 25.2° becomes low, leaving the peak at 24.9° more prominent. They are relat to the (11-2) and (20-1) faces, respectively. The observed reduction in signals is possib due to the crystal orientation effect [43]. (a) XRD patterns of GAH single crystal, raw material, and GAH crystallized in wate methanol, water+ ethanol and water + isopropanol; (b) GAH crystals obtained from water + n-p panol, water + butanol and water + tert-butanol.
Temperature
To explore the branching behavior of GAH crystals in the water + isopropanol sy tem, the effects of operation conditions are investigated. Crystallizations at different tem peratures varying from 278.15 K to 318.15 K containing saturated GAH in aqueous sol tions are firstly carried out (Table 1 E7-E10). The optical microscopy images show th Figure 2. (a) XRD patterns of GAH single crystal, raw material, and GAH crystallized in water + methanol, water+ ethanol and water + isopropanol; (b) GAH crystals obtained from water + n-propanol, water + butanol and water + tert-butanol.
Temperature
To explore the branching behavior of GAH crystals in the water + isopropanol system, the effects of operation conditions are investigated. Crystallizations at different temperatures varying from 278.15 K to 318.15 K containing saturated GAH in aqueous solutions are firstly carried out (Table 1 E7-E10). The optical microscopy images show that lower temperatures yield particles of flower-like morphology densely assembled by flaky crystals (Figure 3a,b). Spheres are formed at the initial stage of antisolvent crystallization, indicating that high supersaturation could promote the aggregation of nuclei. As the temperature increases, the size of subindividuals increases, but the number of branches reduces (Figure 3c). At 318.15 K, most particles are plate-like crystals ( Figure 3d). Hence, branching of the crystal subunits is more favored at low temperature. Increased temperature would decrease solution viscosity and then weaken the agglomeration of platelet crystals [44].
Moreover, the thermal motion of molecules and mass transfer would be accelerated [15]. Crystal growth becomes more dominated at higher temperature, leading to the formation of larger monodisperse crystals. perature increases, the size of subindividuals increases, but the number of branches duces (Figure 3c). At 318.15 K, most particles are plate-like crystals (Figure 3d). Hen branching of the crystal subunits is more favored at low temperature. Increased tempe ture would decrease solution viscosity and then weaken the agglomeration of plat crystals [44]. Moreover, the thermal motion of molecules and mass transfer would be celerated [15]. Crystal growth becomes more dominated at higher temperature, leading the formation of larger monodisperse crystals.
GAH Concentration
To investigate the effect of solute concentration in an aqueous solution on cry morphology, GAH concentrations from 0.07 to 0.40 g/g H2O are used at 278.15 K (Tab E11-E15). POM graphs of these crystals are presented in Figure 4. Upon the addition GAH aqueous solution at low concentration (0.07 g/g H2O), clusters of plates are observ which might be formed via surface nucleation and oriented crystal growth (Figure Subunits are nucleated on the most dominant face of the plate-like crystals and th growth direction is similar to the mother crystal. It is reported that the most domin crystal growth direction is also the most energetically favorable [43]. The orientation eff could be affected by the solution composition, polarity, and charge density of substra and the intermolecular interaction energy between the solute and surface [45][46][47]. At 0 g/g H2O, crystals present a dendritic morphology (Figure 4b). At larger GAH concen tions, the branches increase and the dendrites become more compact (Figure 4c,d). As GAH concentration increases to 0.40 g/g H2O, spherulites are produced (Figure 4e). A h crystallization driving force is a necessary condition for spherulites [48]. Therefore, wit the experimental concentration range, a more concentrated GAH solution that crea
GAH Concentration
To investigate the effect of solute concentration in an aqueous solution on crystal morphology, GAH concentrations from 0.07 to 0.40 g/g H 2 O are used at 278.15 K (Table 1 E11-E15). POM graphs of these crystals are presented in Figure 4. Upon the addition of GAH aqueous solution at low concentration (0.07 g/g H 2 O), clusters of plates are observed, which might be formed via surface nucleation and oriented crystal growth (Figure 4a). Subunits are nucleated on the most dominant face of the plate-like crystals and their growth direction is similar to the mother crystal. It is reported that the most dominant crystal growth direction is also the most energetically favorable [43]. The orientation effect could be affected by the solution composition, polarity, and charge density of substrates, and the intermolecular interaction energy between the solute and surface [45][46][47]. At 0.14 g/g H 2 O, crystals present a dendritic morphology (Figure 4b). At larger GAH concentrations, the branches increase and the dendrites become more compact (Figure 4c,d). As the GAH concentration increases to 0.40 g/g H 2 O, spherulites are produced (Figure 4e). A high crystallization driving force is a necessary condition for spherulites [48]. Therefore, within the experimental concentration range, a more concentrated GAH solution that creates larger supersaturation promotes heterogenous nucleation and facilitates the formation of spherulites [32].
Feeding Rate
The feeding rate of GAH aqueous solution varies from 0.05 g/min to 2.0 g/min, while other crystallization conditions are the same (Table 1 E16-E20). Figure 5 presents POM micrographs of GAH particles precipitated at different feeding rates. It can be seen that more developed spherulites are formed at lower feeding rates (Figure 5a,b). This condition provides a longer crystallization time for fabricating hierarchical structures. The lamellar bunches spread out, resulting in a curved structure as well as spherulites with hollow cores. When the feeding rate increases to 0.5 g/min, the spherulites become more open, which contain more free space between individual crystallites (Figure 5c). At faster feeding rates like 1.0 g/min and 2.0 g/min, most polycrystalline aggregates present a fan-like shape (Figure 5d,e). A possible reason is that rapid feeding results in a more extensive nucleation in bulk solution, leaving less supersaturation consumed by surface nucleation [44]. To study the influence of the solvent-to-antisolvent mass ratio on the hierarchical
Feeding Rate
The feeding rate of GAH aqueous solution varies from 0.05 g/min to 2.0 g/min, while other crystallization conditions are the same (Table 1 E16-E20). Figure 5 presents POM micrographs of GAH particles precipitated at different feeding rates. It can be seen that more developed spherulites are formed at lower feeding rates (Figure 5a,b). This condition provides a longer crystallization time for fabricating hierarchical structures. The lamellar bunches spread out, resulting in a curved structure as well as spherulites with hollow cores. When the feeding rate increases to 0.5 g/min, the spherulites become more open, which contain more free space between individual crystallites (Figure 5c). At faster feeding rates like 1.0 g/min and 2.0 g/min, most polycrystalline aggregates present a fan-like shape (Figure 5d,e). A possible reason is that rapid feeding results in a more extensive nucleation in bulk solution, leaving less supersaturation consumed by surface nucleation [44].
Feeding Rate
The feeding rate of GAH aqueous solution varies from 0.05 g/min to 2.0 g/min, while other crystallization conditions are the same (Table 1 E16-E20). Figure 5 presents POM micrographs of GAH particles precipitated at different feeding rates. It can be seen tha more developed spherulites are formed at lower feeding rates (Figure 5a,b). This condition provides a longer crystallization time for fabricating hierarchical structures. The lamellar bunches spread out, resulting in a curved structure as well as spherulites with hollow cores. When the feeding rate increases to 0.5 g/min, the spherulites become more open which contain more free space between individual crystallites (Figure 5c). At faster feed ing rates like 1.0 g/min and 2.0 g/min, most polycrystalline aggregates present a fan-like shape (Figure 5d,e). A possible reason is that rapid feeding results in a more extensive nucleation in bulk solution, leaving less supersaturation consumed by surface nucleation [44].
Solvent-to-Antisolvent Mass Ratio
To study the influence of the solvent-to-antisolvent mass ratio on the hierarchica structure, crystallization experiments E21-E25 (Table 1) are performed. At a mass ratio o 1:2, most particles are crystallized in a leaf-like morphology (Figure 6a). As the ratio
Solvent-to-Antisolvent Mass Ratio
To study the influence of the solvent-to-antisolvent mass ratio on the hierarchical structure, crystallization experiments E21-E25 (Table 1) are performed. At a mass ratio of 1:2, most particles are crystallized in a leaf-like morphology ( Figure 6a). As the ratio reduces to 1:3 or 1:7, branching is enhanced and the dendrites develop along multiple directions (Figure 6b,c). This might be the result of the higher supersaturation created by the increased mass fraction of the antisolvent, when the same amount of GAH aqueous solution is added. In this way, flowerlike and asterisk-like structures are formed. When the solvent-to-antisolvent mass ratio decreases to 1:20 and 1:50, the spherulites are still undeveloped and the subunits become smaller, exhibiting a needle-like shape (Figure 6d,e). Therefore, further increased supersaturation produces an excessive crystal nucleus, and the growth of subunits will slow. tals 2023, 13, x FOR PEER REVIEW 8 of reduces to 1:3 or 1:7, branching is enhanced and the dendrites develop along multi directions (Figure 6b,c). This might be the result of the higher supersaturation created the increased mass fraction of the antisolvent, when the same amount of GAH aqueo solution is added. In this way, flowerlike and asterisk-like structures are formed. Wh the solvent-to-antisolvent mass ratio decreases to 1:20 and 1:50, the spherulites are s undeveloped and the subunits become smaller, exhibiting a needle-like shape (Figu 6d,e). Therefore, further increased supersaturation produces an excessive crystal nucle and the growth of subunits will slow.
Stirring Rate
The stirring rate is one of the most important factors affecting solution mixing, n cleation, and collisions of particles [49]. Fast stirring creates a high shear rate and accel ates the movement of the fluid, increasing the possibility of collision among the crysta crystallizer, and mixing propeller [50]. In general, a higher stirring rate will induce crys breakage and inhibit the agglomeration of particles [51], whereas a dendritic morpholo tends to form in a static environment, where crystal growth is limited by diffusion a the growth condition is far from equilibrium [27]. It is essential to study the effect of s ring rate on the morphology of polycrystalline aggregates. Therefore, crystallizations different stirring rates are performed, changing from 100 rpm to 800 rpm (Table 1 E2 E31). Figure 7 illustrates the POM micrographs of these crystals. At a lower stirring r (100 rpm and 200 rpm), fan-like dendrites composed of diamond-shaped platelets are p duced (Figure 7a,b). As the stirring rate increases, the dendrites become more ramifi (Figure 7c,d). When the stirring rate rises to 600 rpm or 800 rpm, spherulites are dev oped, and the size of both spherulites and subindividuals becomes smaller (Figure 7e In this case, the branching mechanism of GAH crystals is not diffusion-limited aggreg tion [26,30]. Fast agitation that enhances particle collision might cause more crystal d fects, inducing more extensive surface nucleation. Under the experimental conditions higher stirring rate could facilitate heterogeneous nucleation and branching.
Stirring Rate
The stirring rate is one of the most important factors affecting solution mixing, nucleation, and collisions of particles [49]. Fast stirring creates a high shear rate and accelerates the movement of the fluid, increasing the possibility of collision among the crystals, crystallizer, and mixing propeller [50]. In general, a higher stirring rate will induce crystal breakage and inhibit the agglomeration of particles [51], whereas a dendritic morphology tends to form in a static environment, where crystal growth is limited by diffusion and the growth condition is far from equilibrium [27]. It is essential to study the effect of stirring rate on the morphology of polycrystalline aggregates. Therefore, crystallizations at different stirring rates are performed, changing from 100 rpm to 800 rpm (Table 1 E26-E31). Figure 7 illustrates the POM micrographs of these crystals. At a lower stirring rate (100 rpm and 200 rpm), fan-like dendrites composed of diamond-shaped platelets are produced (Figure 7a,b). As the stirring rate increases, the dendrites become more ramified (Figure 7c,d). When the stirring rate rises to 600 rpm or 800 rpm, spherulites are developed, and the size of both spherulites and subindividuals becomes smaller (Figure 7e,f). In this case, the branching mechanism of GAH crystals is not diffusion-limited aggregation [26,30]. Fast agitation that enhances particle collision might cause more crystal defects, inducing more extensive surface nucleation. Under the experimental conditions, a higher stirring rate could facilitate heterogeneous nucleation and branching.
Morphological Evolution of GAH Spherulites
To explore the formation mechanism of GAH spherulites, the evolution of polycr talline particles in the antisolvent crystallization process is ex situ monitored using SE (Figure 8). Originating from a hexagonal plate-like crystal, there are lamellae nucleati on the surface of the parent crystal and growing in similar directions (Figure 8a,b). T shape of the lamellae seems asymmetric and is similar to pentagonal. Two sides of trunk are nucleated, forming dendritic crystals (Figure 8c). Branching on the existing mellae with small misorientation angles generates a fan-like morphology (Figure 8d). the lamellae fan out, flower-like crystals with curved structure can be observed (Figu 8e). Intermittent branching of the fans leads to spherulites with holes, and complete sph ulites can be formed when the spaces are filled by lamellae (Figure 8f). This behavior small-angle branching matches the mode of noncrystallographic branching, which is d tinct from crystallographic branching in dendritic snow crystals or diffusion-limited gregation in fractal-like forms [48]. The subunits of polycrystalline aggregates often gr radially via noncrystallographic branching, resulting in three-dimensional spheres [5 In our case, the final evolved particles of GAH looks like a circular layer without bran ing along another axis. The reason might be that the largest face of the plate-like crys overwhelmingly dominates the crystallization, which imposes a strong orientation eff [45].
Morphological Evolution of GAH Spherulites
To explore the formation mechanism of GAH spherulites, the evolution of polycrystalline particles in the antisolvent crystallization process is ex situ monitored using SEM (Figure 8). Originating from a hexagonal plate-like crystal, there are lamellae nucleating on the surface of the parent crystal and growing in similar directions (Figure 8a,b). The shape of the lamellae seems asymmetric and is similar to pentagonal. Two sides of the trunk are nucleated, forming dendritic crystals (Figure 8c). Branching on the existing lamellae with small misorientation angles generates a fan-like morphology ( Figure 8d). As the lamellae fan out, flower-like crystals with curved structure can be observed (Figure 8e). Intermittent branching of the fans leads to spherulites with holes, and complete spherulites can be formed when the spaces are filled by lamellae (Figure 8f). This behavior of small-angle branching matches the mode of noncrystallographic branching, which is distinct from crystallographic branching in dendritic snow crystals or diffusion-limited aggregation in fractal-like forms [48]. The subunits of polycrystalline aggregates often grow radially via noncrystallographic branching, resulting in three-dimensional spheres [52]. In our case, the final evolved particles of GAH looks like a circular layer without branching along another axis. The reason might be that the largest face of the plate-like crystal overwhelmingly dominates the crystallization, which imposes a strong orientation effect [45].
Molecular Simulation Analysis
The predicted crystal shape of GAH by the BFDH model and molecular topology of four faces are shown in Figure 9. The prominent planes are (100), (020), and (001), and the surrounding faces are (10-1), (110), (011), and (11-1), as well as their symmetry-related equivalents. The most important surface is the (001) face, which occupies more than 49% of surface area. This is in agreement with the XRD patterns of the experimental crystals in which the strongest characteristic peak is assigned to the (001) face. The predicted crystal habit matches well with the experimental morphology of the plate-like crystal. During antisolvent crystallization in the water + isopropanol system, heterogeneous nucleation also occurs on the (001) surface. On this face, the molecules are oriented diagonally to the plane with NH 3+ and hydroxyl groups pointing out from the surface. This enables the formation of hydrogen bonding and electrostatic interactions with other molecules. The (100) face is less dominant and exposes hydroxyl groups. The (020) and (0-20) faces are symmetric with respect to the b-axis, but they show different structures on the surfaces.
small-angle branching matches the mode of noncrystallographic branching, which is d tinct from crystallographic branching in dendritic snow crystals or diffusion-limited a gregation in fractal-like forms [48]. The subunits of polycrystalline aggregates often gro radially via noncrystallographic branching, resulting in three-dimensional spheres [5 In our case, the final evolved particles of GAH looks like a circular layer without branc ing along another axis. The reason might be that the largest face of the plate-like crys overwhelmingly dominates the crystallization, which imposes a strong orientation eff [45].
Molecular Simulation Analysis
The predicted crystal shape of GAH by the BFDH model and molecular topology of four faces are shown in Figure 9. The prominent planes are (100), (020), and (001), and the surrounding faces are (10-1), (110), (011), and , as well as their symmetry-related equivalents. The most important surface is the (001) face, which occupies more than 49% of surface area. This is in agreement with the XRD patterns of the experimental crystals in which the strongest characteristic peak is assigned to the (001) face. The predicted crystal habit matches well with the experimental morphology of the plate-like crystal. During antisolvent crystallization in the water + isopropanol system, heterogeneous nucleation also occurs on the (001) surface. On this face, the molecules are oriented diagonally to the plane with NH 3+ and hydroxyl groups pointing out from the surface. This enables the formation of hydrogen bonding and electrostatic interactions with other molecules. The (100) face is less dominant and exposes hydroxyl groups. The (020) and (0-20) faces are symmetric with respect to the b-axis, but they show different structures on the surfaces. On the (020) face, NH 3+ , hydroxyl groups, and Cl − ions are exposed, whereas hydroxyl groups and methylene are protruded on the (0-20) surface. Figure 10a shows that the (001) face has the strongest intermolecular interaction energy with the GAH solute, followed by (020) and (100) faces. The interaction energy of the solute on the (0-20) face is weakest. Solvent molecules also exhibit a much larger interaction energy with the (001) face than with the other three faces (Figure 10b). This suggests that solvent molecules tend to adsorb on the (001) face and the detachment of solvent molecules from the surface becomes more difficult. Consequently, the self-assembly of GAH on the (001) face is hindered and crystal facet growth would be inhibited. This is in agreement with the experimental morphology of plate-like shape. Since solute molecules and ions also prefer to adsorb on the (001) face, they would accumulate on this surface, resulting in a higher local supersaturation and inducing surface nucleation. Moreover, this Figure 10a shows that the (001) face has the strongest intermolecular interaction energy with the GAH solute, followed by (020) and (100) faces. The interaction energy of the solute on the (0-20) face is weakest. Solvent molecules also exhibit a much larger interaction energy with the (001) face than with the other three faces (Figure 10b). This suggests that solvent molecules tend to adsorb on the (001) face and the detachment of solvent molecules from the surface becomes more difficult. Consequently, the self-assembly of GAH on the (001) face is hindered and crystal facet growth would be inhibited. This is in agreement with the experimental morphology of plate-like shape. Since solute molecules and ions also prefer to adsorb on the (001) face, they would accumulate on this surface, resulting in a higher local supersaturation and inducing surface nucleation. Moreover, this strong intermolecular interaction would facilitate oriented crystal growth of newly formed nuclei. On the other hand, the interaction energy of the solute on the (020) face is stronger than the interaction energy on the (0-20) face. This indicates that GAH has a higher tendency to adsorb on the (020) face than on the (0-20) face. Thus, crystal facet growth along the +b direction could be promoted and growth along the -b direction would be inhibited, resulting in the formation of asymmetric crystals. als 2023, 13, x FOR PEER REVIEW 11 of growth along the +b direction could be promoted and growth along the -b direction wou be inhibited, resulting in the formation of asymmetric crystals.
Concluding Remarks
Antisolvent crystallization was shown to be effective at inducing hierarchical stru tures of GAH particles when isopropanol was used as the antisolvent. The particle mo phology varied from fan-like dendrites to flower-like aggregates or spherulites dependin on different crystallization operation parameters. Branches could be increased at low temperature, larger GAH concentration, slower feeding rate, moderately higher solven to-antisolvent ratio, and faster stirring rate. The morphological evolution process of GA spherulites was monitored and indicated that the hierarchical structure was formed v noncrystallographic branching of lamellae. Combined with the predicted and exper mental crystal habit, the (001) face dominated the plate-like shape and heterogenous n cleation occurred on this surface. The intermolecular interaction energy of the solute solvent on crystal faces was calculated. They were both strongest on the (001) face, whic induced surface nucleation under high supersaturation and facilitated oriented cryst growth. The interaction energy of the solute on the (020) face was stronger than that o the (0-20) face, leading to asymmetric crystal growth. Overall, this work has provided a insight into the important roles of crystallization environment and operation condition on the morphological variations. The study on the branching behavior of polycrystallin aggregates and solute-surface interactions would also help design and tune hierarchic structures of organic crystals.
Concluding Remarks
Antisolvent crystallization was shown to be effective at inducing hierarchical structures of GAH particles when isopropanol was used as the antisolvent. The particle morphology varied from fan-like dendrites to flower-like aggregates or spherulites depending on different crystallization operation parameters. Branches could be increased at lower temperature, larger GAH concentration, slower feeding rate, moderately higher solventto-antisolvent ratio, and faster stirring rate. The morphological evolution process of GAH spherulites was monitored and indicated that the hierarchical structure was formed via noncrystallographic branching of lamellae. Combined with the predicted and experimental crystal habit, the (001) face dominated the plate-like shape and heterogenous nucleation occurred on this surface. The intermolecular interaction energy of the solute or solvent on crystal faces was calculated. They were both strongest on the (001) face, which induced surface nucleation under high supersaturation and facilitated oriented crystal growth. The interaction energy of the solute on the (020) face was stronger than that on the (0-20) face, leading to asymmetric crystal growth. Overall, this work has provided an insight into the important roles of crystallization environment and operation conditions on the morphological variations. The study on the branching behavior of polycrystalline aggregates and solute-surface interactions would also help design and tune hierarchical structures of organic crystals.
Data Availability Statement:
The data presented in this study are available on request from the author.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,541.8 | 2023-08-27T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Electrons Break to Photons Even in a Low Voltage Electric Circuit
This paper is the theory of breaking electrons in ordinary circuit elements like resistors and Light-Emitting Diodes (LED’s). Undergoing a change of electron has not been considered in the low voltage circuits so far. As it is shown here, there is a difference current before and after LED and resistors. The possibility of leakage current or escaping of electrons from the circuit to make electrostatic charges also considered and tested for LED. It is concluded that the reverse action of Photoelectric effect (eVo=hf-φ), creating energy from mass of electrons are happened not just in a sophisticated high energy accelerators but in daily life electric circuits. Referring to this paper, a large number of missing electrons break to photons, although the drift velocity of electrons is very low. Under going a change of electrons has been considered in the circuits of these experiments. According to the Kirchhoff second rule called the junction rule, the conservation of charge implies the junction rule, because charge does not originate or accumulate or annihilate at this point. It expresses that in any closed single loop electric circuit there is no source or well of charge besides the power supply and the current remains constant at all points of the circuit. Based on the results of this paper and missing some part of current, the conservation of charge does not show high accuracy in this paper.
Introduction
The interaction between electrons and photons has already been investigated [1] and + → + , Annihilation of electron-positron pair that produces a pair of gamma rays has been studied [2]. We know that photons or γ-rays can undergo three major types of collision. Striking an electron, losing all its energy to it and simply disappearing (Photoelectric effect) [3] and [12], or a photon can collide with an electron and be scattered to one side (Compton Effect). The third way in which a photon can be stopped is through a process called pair production that in this process, mass is created from the photon's energy [9]. The track of electron has already been viewed in the alternating-gradient synchrotron at Brookhaven National Laboratory in US at the Bubble-chamber by photograph of electron-positron pair formation [10]. In this paper the reverse action of photoelectric effect is studied. In pair production electronpositron are produced by -rays or by interacting the incident beam that is composed of antiprotons with a hydrogen nucleus (proton). In reverse process the collision of electronpositron results production of -rays. In all of cases, the Conservation of charge is valid. In the experiments mentioned here, it seems that rule to be violated. Although a battery pack of a pocket calculator with the voltage of 3.0 V delivers a current of 0.17 mA [4] which seems to be low, but for these experiments lower amperage is suggested. For more accurate results it is recommended that: 1) The better digital Multi meters especially more precise Ammeters with the Ranges of ( , ) with Higher Resolutions and Better Accuracy, DC power supply with constant voltage, etc., are needed. The Ammeter is suggested to be with selector of high amperage with the range of Micrometer or Nano ammeter, for example if we could read the current of 5.76 A in the range of , it could be as 5,763,235.07 or at the range of nA may be 5,763,235,072. 3 . Although the three first digits are the same but by increasing the resolution, the differences of measurements appear better. 2) More powerful resistor or light equipment in the circuit to be used to find more amperage differences.
3) The grounding wiring system should have the lowest possible resistant to the ground for measuring the possibility of lowest leakage current of the circuit to the surrounding environment.
Material List
Preparing a circuit with safety measures with the items and material list (Figure 1) including: a) A direct current supply having ranges of 3, 6, 9, and 12 Volts or using a proper DC voltage convertor. b) At least four digital Multimeters including: one Micro ammeter and the other possessing the selectors of Milli ammeter and Millivolt meter. c) A LED bar light about power nominal of 2 Watts. d) A metallic tray as a heat sink and a protective shield.
Preparing the First Test Circuit for Measurements
In this experiment at first two digital ammeters in parallel connected to one side and one digital ammeter connected to another side. For the next step the position of ammeters are interchanged. A voltmeter is also used for measuring the circuit voltage.
Connecting the circuit parts according to Figure 1 and reversing the direction of LED bar light to inside for preventing of damage to eyes from its glare. It is suggested even cover it with a noun combustible semi-transparent sheet for reducing its reflected glare from tray and do not touch the light bar when is on with bare hand for not burning the hand. At the positive side of LED two ammeters are connected parallel to each other and at negative side of light bar an ammeter is connected in series in the circuit. The range of one of ammeter at positive side is A and the other is µA. The range of the negative side is A. After each measurement and the circuit was disconnected for time lapse of 20 seconds and then turned on, 10 seconds waited for stabilizing the circuit parameters, the amperage was measured. The tests repeated for 5 times, then the two ammeters connected to the positive side of light bar moved to the negative side and the ammeter connected the negative side moved to the positive side. The measurements repeated as table1.
Tests and the Tables of the Data
Preparing the tables with the columns of for one side, for the same side and for another side of LED bar, ∆ , , ∆ 10 And carrying out the measurements and noting the results: For a regular circuit and non sensitive ammeters, connecting one ammeter before and another after the resistor, they do not show any difference of the current in a single loop circuit. Although in an ionic solution, both positive and negative charges contribute to the current by moving in opposite directions [5] but current in a copper wire is just related to the moving of electrons. Table 1 As a check on these results we find that the difference between the last column two averages ∆ = 13.315
Analyzing the Results of
For better comparison between the result numbers, the first average voltage is supposed to be equal to the last average voltage and its proportional amperage, power and the average difference of losing power are calculated.
There is possibility that the amperage difference has leaked to the environment. There is possibility that Also charge do accumulate on the surface of conductors. The surface area of the conductors that are used in common circuits are much too small to accumulate a significant amount of charge [6]. The leakage current test (2.7.) is done for it and does not show a measurable leakage current. In 2.9. an example of mathematical analyzing is introduced.
The result shows negative side current is more than the positive side in the single loop circuit. Although based on classical Physics principles could not be a well that electrons fall down it, but is considered that some part of electrons missing. The interpretation could be that at least part of missing electrons break to photons.
Considering the Relation Between Voltage and the Missing Current
Using the same circuit of Figure 1 Let us see if there is a relation between missing current and the applied voltage. If the K's were equal, we could conclude that the Ohm's Law is ruled for the missing current, but they are not equal.
Test for Possibility of Leakage Current
Material List: (Figure2): e) Some proper wires, clips and wire connectors f) Proper metallic pot as a leakage current collector testing device. g) A disconnect switch h) Electrical insulator sheets like polyethylene and Styrofoam. i) Ground wire connection for measuring the possibility of accumulated electrons on metallic pot.
The Leakage Current Test
For considering the possibility of leakage current the circuit shown in figure 2, has been prepared. The LED bar placed inside a metallic pot which electrically insulated from the ground. The Multi meter is set to the range of 200µA which is highest resolution in this Multi meter.
Its ampere side is connected momentary to the pot and the common wire connected to the ground wire connection. With the highest voltage tolerable by the bar light (13.22 V in this case), the voltage difference of pot and ground measured five times that sometimes for a fraction of second was about between 0.2 to 0.3 mV and Multi meter all the times showed current of zero. The waiting time period for possibility of collecting electrons on the surface of pot was 10 minutes before each measurement. The result of possibility the leakage current or escaping electrons from the light bar in these tests is negligible. The conclusion could be that some the missing part of electrons which is the difference between positive and negative sides of light bar have been broken to photons with different frequencies of visible light and infrared, etc.
Measuring of Current Before and After Tiny Electronic Resistors Sides with 1.5V, AA Battery as the Power Supply
Using two kinds of Multi meters with different ranges and also convertor which its voltage fluctuates make the first test results difficult. In this test a single Multi meter with the highest possible range resolution (200 µA) and a single battery AA with nominal voltage of 1.5 Volts is used to cancel the fluctuation of line voltage.
Material and Testing Device List
a) Three electronic resistances connected together in series with total resistance measurement of: > ? 9.81 Ω b) One battery with nominal voltage of 1.5 V, an AA battery c) One digital Multi meter has set to DC Amperage at the range of 200 d) One digital Multi meter has set to DC voltage at the range of 2000 @A e) Wires for connections and clips Connecting the circuit parts according to Figure 3 and measuring the current before and after resistances for five times, yields us the numbers in table 3: Table 3 As we already know, in gases and electrolytes which placed in electric field, all negative and positive ions could move to the opposite charge sides. In solid metals in the room temperature like cooper wires in these tests, the only parts of atom could move in the electric field, are the free electrons. The drift velocity of electrons:
Analyzing the Results of
In a typical copper wire of radius of 0.815 mm carrying current of 1A is: A : ≃ 3.54 $ 10 # @ E ⁄ [7]. Considering the numbers of table 3 and the direction of real current in copper metal in this test that is from negative pole to the positive pole, we obtain: The current equation was given as: ΔH Δ . I, for t = 1 second, we get: Based on the Einstein equation of mass and energy: If all of current difference (6) is changed to energy, then the number of electrons that break down to energy is The energy produced by difference current: >. ∆ . I, for t=1 s (15) 9.81 $ 10 $40$ 10 6 3.924 $ 10 < Q (16) Comparing (14) and (16) shows that all of missing electrons that come from the current difference have not been changed, but a fraction of them have been changed to photons. We find that fraction from dividing (16) to (14): where, X 132.5 and t=1 s, then we obtain: This shows for the mentioned condition of the circuit, about 36 percent of total dissipating power comes from the breaking of electrons that are the difference passing current between two negative and positive sides of resistances with the amount of ∆H 40 $ 10 6 J . The rest of dissipating power could be related to kinetic energy of accelerated electrons passing through the resistances when striking to the target atoms and release photons, etc.
Measuring of Current Before and After Tiny Electronic Resistors Sides with 9V, 6F22 Battery as the Power Supply
Repeating the same experiment is done with the same circuit of figure 2, but instead of 1.5 V, AA, another battery with nominal voltage of 9 V, 6F22 battery has been used. The average difference amperage obtained ∆I = 1941 nA. The average total circuit voltage is 9.656 V. The same above procedure can be followed for getting the percentage of electrons which disappear and break down to photons.
Discussion
Based on the tests, existence of current difference between two sides of an electric element has not been considered so far. The better results come with higher resolution Multi meters and non-fluctuating voltage supply (constant voltage) and also as table 2 shows the rate of difference current increases by increasing the applied voltage to the element. According to the Kirchhoff's Law, The sum of all the currents coming into a junction must be equal the sum of all the currents leaving it and the Conservation of charge implies the junction rule [6], [8] and [12]. There is difference between the current of coming in and getting out of the LED or resistances for all these experiments. According to our existing knowledge light (photons) come from the accelerated free electrons (like electron beam in a vacuum tube with high voltage electrodes which produces X-rays) or changing the orbits of electrons moving around the nuclei in atoms. Annihilation of matter in collision of electronpositron pair in high energy accelerators produces gamma rays, etc. In these tests some missing electrons produce visible light or infra-red. If electrons could break to photons, therefore, Photon could be one of the sub-building blocks of electron, (Figure 4). According to the result of this paper some electrons missing. It requires breaking to energy as the form of photons (upper right side). If electron is composed, it should be contained of at least two particles of like photons, etc. Although missing some electrons in the circuits is the main subject of this paper but photon also has a trace of electromagnetic wave. If photon after travelling a path and producing the electromagnetic field perpendicular to its direction of the travel and still maintain its properties as a photon, therefore it is composed. EMP (Electro-Magnetic Particle) could be one of its composition (up right of figure 4).
Summary and Conclusion
We see in this paper that the tests show the current of one side (negative) at a circuit element like LED or resistor is different than the other side (positive) in a single loop. The Conservation of Charge Law as one of the principles of Classical Physics is incompatible with the observed mentioned experiments with measurements of higher resolution. For more accurate results, digital Multi meters with Ranges of (µ A, n A,..) with Higher Resolutions and Better Accuracy is suggested. The tests can be adopted as an assignment for related courses at the Universities and Technical Colleges lab. The method, material list, and also precautions have been mentioned in the paper. Other cautions and safety measures must be added to the lab order, especially by increasing the circuit voltage and current.
Although based on classical Physics principles could not be a well that electrons fall down it, but is considered that a fraction of electrons missing. The possibility of leakage current is tested in LED light, but no measureable leaking amperage viewed. The interpretation could be that: A large number of missing electrons break to photons and it is happened not just in a sophisticated electron accelerator, but in daily life electric circuit. Electrons break although the drift velocity of electrons is very low. Sensing the hot wire (the wire which carries electricity) without cutting its insulator and only by circuit alert device or clamp ammeter could be the result of breaking some passing electrons and radiating photons (Photons produce Electromagnetic Waves) to the surrounding area. | 3,780 | 2016-12-13T00:00:00.000 | [
"Physics"
] |
A Panglossian Dilemma
Ambient intelligence is a factual phenomenon of increasing magnitude. It also invites intrigued attention as carrier of meanings. Meanings are produced in a variety of contexts, which are here the focus of attention. In order to analyze contextual narratives and their effects, concepts such as intelligence, optimization, rationale, rationality and ambience are discussed. One meaning of ambient intelligence is its indicative contribution to increased unilateral control of the many by the few. Ethical guidelines may be part of prevailing rhetoric, but their success as a self-controlling factor seems fairly unrealistic. Moral confusion is not only related to artificial intelligence, but to the very essence of modern society.
Introduction
Our world is composed of particulars, matters that have extension such as dimension, weight and form. Our lives are also composed of universals, abstractions regarding relative matters such as position and value. Particulars are compulsory to conceptualize when describing the world, universals are indispensable when making particulars and other universals meaningful. Our world exists for us as far as we live, and we live as far as we produce meanings.
Our understanding of the world is constantly deepening due to scientific progress, but the meaning of our lives is not a matter of accumulating knowledge. Every generation and every individual have to work out that for themselves.
Ambient intelligence is a phenomenon that can be described and it is also a carrier of meanings. As rhetoric would advise, the best argument is the inevitable. By skillful descriptions, rhetoric aims are pursued and those subjected to skillful talk eventually think they have figured out everything by themselves. In the pursuit of a critical understanding, conceptual analyses are needed.
Here, an attempt is made to conceptualize contexts that are meaningful for understanding ambient intelligence and the ways we understand it. Intelligence is often associated with the act of optimizing, which in its turn seems to be connected to the broader concept of rational action. Rationality is, however, a function of the context where rational action takes place. The context of rational action has its own rationale, which defines rationality.
Ambient intelligence concerns ambience, and ways we conceive it. In preindustrial built environments, the physical context caused intelligible ambience to emerge. In the industrialized world of constant flux, intelligible and stable conditions are replaced for a dynamic that makes virtue of the constant need for change.
Whatever that ambience is, it must influence the meanings we attribute to ambient intelligence, and it must have a crucial effect on how societies are managed and controlled by artificial intelligence.
Intelligence and optimization
What do we understand by intelligence? Obviously, it indicates problem-solving capacity, but what does that mean? The development of intelligence testing is based on their measuring capacity, but what do they measure? An essence of rationality is optimization, but how, and what, can we optimize?
Intelligence
The lexical meaning of intelligence is the ability to acquire and apply knowledge and skills. The ability to acquire knowledge is evidently linked to personal capacity as individuals are born different. Inherent assets may not be realized due to external factors such as malnutrition, deceases, social instability or injuries. The same reasons that hamper people from acquiring knowledge and skills may also cause obstacles in applying them.
The standard definitions seem to pay less or no attention to the potential targets of intelligence. What are the actual contexts where intelligence works? What is the focus? Is it a question of solving particular problems, or to act successfully over longer time periods in changing conditions to achieve some distant end? Does it include only logic and attention, or emotions as well? Is intelligence part of developing and organizing social and symbolic systems?
The more limited the target of our mental activities is, the easier it is to find out ways of optimizing. This is a standard version of rationality. In the gender-centered world some of us still live in, female prejudices attribute "typical male" approaches to "tube-thinking". Male biases attribute "typical female" approaches to "funnel cake-thinking". Either way, rationality is defined according to particular contexts. Males would be accused of lacking the capability to understand matters related to social complexity. Females would face the blame of lacking capacity to rationalize and optimize. This issue exceeds gender speculations as it is an existential matter. All of us are part of the complexity of this world.
The lexical meaning of artificial intelligence refers to the theory and development of computer systems able to perform tasks normally requiring human intelligence. Visual perception, speech recognition, decision-making and translation between languages are often mentioned examples. Optimizing seems to be an integrated and necessary part of programming and the elaboration of algorithms, which makes "tube-thinking" necessary. When expanded into a "funnel-cake thinking", problems arise. Optimization-based rationality gets complicated or outright impossible.
Testing intelligence
Allegedly the first to create a test, in 1905, was Frenchman Alfred Binet (1857-1911) together with his colleague Théodore Simon (1873-1961) [1]. Binet considered intelligence to be a mixture of mental faculties, emerging in changing conditions and controlled by practical judgement. He did not view intelligence as a fixed capacity. Intelligence could not be measured, only classified. The test categorized the mental age of children, and was a way to assess the mental adequacy of the tested compared to the mental average for persons of the same age. DOI: http://dx.doi.org /10.5772/intechopen.95944 In the USA, eugenicist Henry H. Goddard (1866-1957) got acquainted to the Binet-Simon Scale, and saw it as a way to detect feebleminded people for compulsory sterilization, matching the view of intelligence as genetically inherited. In 1916, Lewis Terman (1877-1956) issued the Stanford-Binet Intelligence Scale, sticking to the view of intelligence as unchangeable.
The pioneer of American behaviorism, Edward Thorndike (1874-1949), defined intelligence in terms of the capability to form neural bonds based on genetic factors as well as experience. J.P. Guilford (1897-1987) maintained that standard IQ tests imply an oversimplified answer, convergent thinking. Creativity on the other hand implies per definition more than one answer to any problem, divergent thinking. He disputed reductionism, and ended up with 180 different types of intelligence, which for practical reasons would limit the use of his method.
In Britten, Charles Spearman (1863-1945) claimed in 1904 that disparate cognitive test scores reflect a single general intelligence factor, and assumed that the psychological g factor would correspond to a biological g factor. This position did not remain uncriticized. Raymond Cattell (1905-1998) developed Spearman's ideas. Fluid intelligence refers to the ability to reason abstractly and perceive relations without previous practice or instructions. Crystalized intelligence generates from experience, learning and accumulated judgement skills. He elaborated a test to assess fluid intelligence by making it culture-fair. His promotion of eugenics has, however, been a cause of critique.
The changing approaches to testing indicate that human intelligence is a controversial matter, and very much embedded in those culture-specific societies from where the theories emerge. Even the fairly recent invention of emotional intelligence (EI) is phrased according to strongly utilitarian guidelines, meaning how to manage emotions to achieve one's goals.
Intelligence testing has historical bonds to biologism and eugenics, which have providing a pseudoscientific basis for racism. Testing reflects the way the overall context of intelligence is conceived. When testing changed from classification to computing, the focus was by necessity narrowed down to matters that could be measured. The perspective should be broadened up as testing intelligence is a moral matter as well. There are many different kinds of utility, and other aspects besides utility to be consider. Is there a happiness-intelligence or only a dissatisfactionintelligences? Are we looking for creativity, many answers to a problem, or are we looking for optimum, the best answer to one problem?
Optimization
When we optimize, we either seek to minimize resources when pursuing defined ends, or alternatively, we try to optimize results within given resources. Both cases require a time table, often broken down into sub-targets on the way to an end. Optimization may also indicate the attempt to minimize time-use within available resources and defined output, or regardless those.
Economic ventures are typical targets of optimizing, but optimization does not necessarily cover all aspects of a single project. Negative externalities, such a depletion of resources, natural hazards, social and cultural costs that are caused by private entrepreneurs, are still often passed over to public administration and tax payers. In addition, even single projects cannot be optimized without a fixed point of reference in time. In hindsight, many owners of projects would recognize that a change of time perspective could have ended in very different results. An optimum is a function of time.
The issue of benefits to optimize may also be viewed in terms of various kinds of markets according to market access (restricted versus non-restricted) and competition within a market (rivalrous versus non-rivalrous). The market for private goods is per definition restricted and rivalrous. One can enter only in case demanded resources are possessed. Optimization is possible and needed for private benefits. The idea of an unrestricted and free market is an abstraction as the very logic of capitalism induces market restrictions and monopolies. If not, there would be no use for anti-trust legislation. Governments and politics can influence the market of private goods mainly indirectly, by implementing laws and regulations.
Club goods indicate restricted and non-rivalrous markets, which the club can optimize according to conceived club-benefits. Markets for common goods are non-restricted, but rivalrous and the common assets are at risk of being depleted, i.e. the "tragedy of the commons" [2]. Because of non-restrictedness, an optimization is impossible, and public government can interfere only indirectly. Public goods are open for all and do not imply rivalry among users. Because of their open access, there is nothing to optimize from the point of view of public government, except for goods that have to be produced and managed. Sunshine is free for all, but public space needs to be built and maintained.
Singular optimizations sustain competition and the destruction of competitors. But what about the overall economic system and the wellbeing of citizens? Optimizing parts may cause an overall disastrous waste of resources. Adam Smith (1723-1790) claimed there is an overall order in the chaos [3]. He proclaimed that the totality of self-interested actions would eventually cause unintended social benefits. A prudent reader may recollect that the "invisible hand" of markets was not all that invisible: Smith worked for the monopoly at the time, the East India Company.
Governing the national economy is now executed according to the same logic as single ventures. It is boiled down to a restricted number of indicators, like the GNP, and aims at optimizing economic growth. Growth is an end in itself, and the focus of public and general interest. In political rhetoric, positive as well as negative growth lend themselves to very far-reaching conclusions as to their alleged effects on human matters.
GNP reflects the sum of its constitutive parts, which are thought to be optimizable. Nonetheless, a considerable part of the economy is no target for optimizing at all. Common and public goods, being related to public interests such as the smooth running of everyday life, care for tax-payers money and public revenues, are optimized by the political system. The "political system" is a very vague term that may reflect anything from particular interests to the whole body of citizens, or even to humanity as listed in human rights. Insofar as politicians optimize their commitments, they usually focus on the lengths of their tenures.
Human intelligence seemed to escape us, but so does artificial intelligence! For the majority of people, GNP and its annual fluctuations is a very poor indicator for quality of life. Nor does the investor-driven use of artificial intelligence for programming maximum revenues at the stock exchange say much about the utility of the exchange for citizens in general. Maybe the question to ask ourselves is not how artificial intelligence can be humanized, but rather why human life has been reduced to forms that can be optimized by artificial intelligence?
Rationale and rationality
To conduct oneself intelligently in a rational manner, one has to relate one's actions to a given context. What is the context and how is it formed? Is it something to be made up from case to case, or is it more general? Does rationality change according to context? How to choose when one has to? Does choice by necessity indicate moral judgement? What is the role of science in all this? DOI: http://dx.doi.org/10.5772/intechopen.95944
Rationale
Rationale refers to controlling principles of opinion, belief, practice, or phenomena. To be rational refers to having reason or understanding, or to something being agreeable to reason. Controlling principles are not perforce agreeable to reason as they may be structural and unintended outcomes of very complicated social processes. Nobody can escape being bound to some sort of overall principles of action, but few can claim to act rationally in every instance.
Dr. Pangloss is a stunning character in Voltaire's novel Candide, published in 1759 [4]. Voltaire (1694-1778) is thought to have used the character for ridiculing Leibnizian optimism. Nonetheless, Dr. Pangloss certainly makes sense as a representative of the breaking times when the traditional teleological world view -the purposefulness of everything -had to confront a causal world view, based on science. But Dr. Pangloss is more than a caricature of naïve optimism, he mirrors an existential dilemma as well.
According to the doctor, "all is for the best", because we live in "the best of all possible worlds". God is the ultimate good so why would not his creation be the best as well? Thus, it is reasonable to claim that everything that occurs is for the best. Dr. Pangloss firmly professed causality within an overall scheme of teleology, thereby reflecting a view of God as the Creator, not as the Intervener. At the time, the existence of God was not questioned, but his nature was.
A problem with Pangloss' ethical position is that everything turns out both acceptable and obligatory, in accordance with the initial ruling of the Creator. It is not Pangloss' fatalism that gives rise to moral doubts, but his opportunism. Actually, his character may be seen as an embodiment of alleged Jesuitical sentiments: End justifies means! If the initial creation is the best of all worlds, then every derivative of that creation, good and bad, is for the eventual good. Only human shortsightedness would blur that post-factum.
As final explanations, the concepts of cause and purpose may appear to us mutually exclusive. But, if we define the purpose of our universe to be causal, there is no contradiction. If the purpose of the universe is defined not to be causal, a contradiction arises. Consequently, to be considered rational we have to avoid thinking and acting in a way that would offend the rationale of our basic guiding principles, whether religious, atheistic or agnostic. Human characters who possess the quality of not being self-contradictory, are thought to have integrity.
We may face another problem as well: What are those entities that generate controlling principles of opinion, belief and practice? Dr. Pangloss was a character of a firmly Christian country of Christian Europe. In a hierarchical manner, any entity can of course be thought of as being part of a greater totality. The Christian solution is to close the hierarchy by referring to this world, the Creation, as the target of human reasoning. The Heaven or Paradise are per definition out of reach, and conceivable only as part of eternity, and so are our understanding of the deeds of the Lord. Any endeavor to bridge the gap may provide ample room for speculation, accompanied by a never-ending stream of self-promoting prophets and wizards.
The Christian world view is by no means unique, rather the contrary. Most of us seek -consciously or unconsciously -to build our identities based on some kind of view of a world that we can and want to live with. Are we free to choose? The gospel of the modern world is: Yes! In reality, experience transmits a more complicated story. Only madmen are able to extrapolate their madness into the big world. The sane ones must go the other way around. Societies and cultures provide rationales, the task of individuals and single ventures is to provide matching thoughts and deeds.
Rationality
In his Utopia, Thomas More (1478-1535) sought to find a rational, explicit and measurable expression for the rationale of Christian society [5]. He was decapitated by his King, Henry VIII, who usurped the religious power of the Pope, and robbed the Catholic Church of its wealth. Maybe the modern world was born in 1535 CE? What are the fundaments of our modern world? Heaven got lost because eternity got lost. Now, our haven (short of the e) is located in this world, but in the future. Remarkably, the end was changed, but the idea of Christian eschatology is still there.
The first to make the switch were the people of the Renaissance. They started to look ahead by looking back. Nonetheless, they applied a conception of time that was linear, albeit opposite to ours. The great discoveries of the early modern time brought about global trade, and in its wake, colonial subjugation, looting and plunder of the Americas, Africa and the East. Economic wealth in Europe brought about a surplus that was reinvested for the sake of further surplus. The future in this world was eventually found.
The corporate form of capitalism that emerged during the 17th century, indicates a rationality narrowed down to optimizing the revenues of single ventures [6]. Over time, some part of the aggregated surpluses has been invested in political ventures labelled charity, corruption or money laundry according to prevailing conjuncture. Concentration of wealth caused by necessity the need for controlling politics, which is now equally obvious in democratic and nondemocratic countries.
During the Renaissance, Antiquity was thought to represent the ultimate achievements of mankind. Social progress is an idea of the 17th century, but the concern was limited to the economy [7]. Towards the turn of the century, a debate in the French Academy between the "Moderns" and the "Antiques" reflected a broader understanding. The issue at stake was the very essence of change: Is all change for the better? After decades, a reasonable conclusion was reached: Quantifiable knowledge can be accumulated, like mathematics and science. Knowledge involving judgement like questions regarding moral and beauty, are skills that individuals acquire, and the knowledge of those cannot be accumulated [8]. There is an endless growth of applicable criteria for making judgement, but that does not indicate improved quality of factual judgements.
Only the Enlightenment of the 18th century, with Voltaire and others, brought to the fore a notion of overall progress, and Dr. Pangloss became a ridiculed figure [9]. He was stuck to the eternal heaven, not the haven of the future. During the heydays of the Enlightenment, progress turned limitless as well as endless, and a purpose in itself. Consequently, the 19th century brought with it progress and regress as ideological and political concepts. In the 20th century, when progress was boiled down to economic growth as indicated by GNP, every economy of the globe could be integrated into a common ranking list with regard to overall output per year and person.
The eventual point of reference is the future of this world. Nevertheless, like the gospel, the future is unverifiable. But it is an offer one cannot refuse as there is nothing to lose, only to gain -except for infidels refusing to give up their integrity. There is a difference between eternity and the future in that the future is even more abstract than eternity. As the case of More shows, his utopia was firmly anchored in Christian ethic. Considering history, it is hard to discern how our future, being a battleground for ideologies and countries of all shadings, has anything to do with particular moral sentiments or ethical considerations.
However, even the haven of future may have an end. When most aspects of human life are increasingly bound up to external order and control, the prospects DOI: http://dx.doi.org /10.5772/intechopen.95944 of single individuals are narrowed down. Now, the wealthiest 10 percent of the global population owns 81.7 percent of global wealth, and the wealthiest 1.0 percent have 45 percent [10,11]. What happens when 0.1 percent of the global population will own everything? The future could then be not to gaze into the future, but to return to the initial state of human history of here and now. Carpe Diem, catch the day! The nucleus of wealth accumulation is now finance. The value of money, when being a commodity exchanged on a market, is subjected to fluctuations determined by supply over demand. With concurrent fiat money, the logic changes insofar as investments do not by necessity concern productive measures at all. Finance becomes a club good. By the financial transactions of the biggest players, the value of existing wealth can be manipulated for the sake of more wealth. When the total amount of indebtedness grows faster than productive output, a further concentration of wealth to the club members seems inevitable. A recent estimate suggests a global debt burden of 272 trillion USD, that is 365 percent of total GDP [12].
Rationality seeks its rationale among available possibilities. In the various phases of human development, options at hand may have increased in absolute terms, but they may decrease further in relative terms. The employed criteria of judgement may still expand and improve over time when based on expanding sets of data. The quality of judgement is up to prudence. Individuals are prudent, not nations, and judgement skills can be improved only during a lifetime.
Moral choice
For half a millennium, European science has been developed to encompass most aspects of life, but still there seems to be no theoretical consensus on judgement. In order to make a judgement, one needs criteria, but to figure out criteria, one needs to make judgements. The idea of "value" is self-referential. To evaluate, we need to evaluate and choose applicable criteria, in absurdum [13]. All of us have to make choices, no matter how informed we are. Most choices are moral ones and based on considerations about right or wrong. Moral considerations are not always manifest, but unavoidable and omnipresent.
The Sisyphus-work of redesigning morality is manifest in the ways scientists and philosophers have tried to grip the task. The initial phase was filled with optimism. The grand utilitarian, Jeremy Bentham (1747-1832) aspired in vain to elaborate a felicific calculus, but it would not have included "natural and imprescriptible rights", which he considered "nonsense upon stilts" [14]. His position is rational as utilitarianism was embedded in the economy and politics of his time. The recognition of human rights would certainly have been obtrusive as human labor was supposed to be a commodity of the marketplace.
John Stuart Mill (1806-1873) expressed the idea that the rules of thumb of everyday morality would get endorsed by the systematic utilitarian method, but such derivations are still on their way. The futility of expecting a feasible algorithm of moral values for global cost-benefit analysis is as obvious as ever before. Utilitarian calculations face many problems. Considering positive effects as benefits seems to be obvious, but what about negative effects? In the short run they are costs, but in a longer perspective they may turn out to be beneficial. By switching the perspective, short term positive effects may later on turn out to be negative.
In all, to judge and weight all moral consequences in terms of benefits and drawbacks is impossible. Moreover, even to weight practical results in terms of benefits and drawbacks is impossible, except for limiting the scope to a short period of time and a narrow place. This means utilitarianism reflects a rationality that is conceivable only within the clearly defined limits of single projects.
The Kantian tradition -stressing principles of conduct -has likewise paid tribute to practicality, and resented the impracticality of utilitarianists. The maxims, such as the Categorical Imperative, are open in a similar way as the utilitarian endeavor for benefits. They require an actor to consider and select relevant maxims to match actions or to select relevant actions to match maxims. A truly thoughtful person may not be able to take any actions at all as uncertainty is our companion.
A somewhat sloppy conclusion would be that sincere moral thinking requires understanding, knowledge and imagination, which is not achieved by applying formulae. The complexity of real-world problems is impossible to compute. We can never consider all things, or all times for that matter. In practice, capitalism, and to some extent representative democracy, mostly set a time front that is as long as an investment period or political tenure. Those may be optimized. The positive and direct effects, and alleged positive externalities, are annealed while negative externalities are easily unrecognized or silenced.
Is there a single point of departure, one perspective from where to assess ideal rationality? The traditional answer is yes, common interest. In practice, hardly any political party would miss to refer to public or common interest. The idea of a common interest is illustrated by the Prisoner's Dilemma [15]. To optimize his situation, the rationally acting suspect would judge his fellow suspects and probably find out that some of them are somewhat irrational, and therefore unreliable. The shortsighted self-interest of some accused would obstruct the possibility to find an optimal solution, common for all. Consequently, the ideally rational player would have to turn less rational, not to lose too much. Is that rational? Nonetheless, it seems to be part and parcel of politics, rhetoric and modern life.
Accumulation of knowledge
Scientific institutions worldwide try to safeguard the academic virtues in order to contribute to the accumulation of knowledge [16,17]. This can be seen as a moral prerogative for science and its global body of researchers. It is also an example of the match between the rationale of science and the rationality of academia. The academic routines include dissertation and publishing of findings, peer reviewing and critical scrutiny, acceptance to prove or disprove arguments regardless the status of the speaker, demand for theoretically anchored hypotheses, reliability of data, application of credible methods, inherent logical consistency of the work, willingness to rework one's findings, etc.
With the increased strategical and commercial impact of science, such traditions are evaporating for the sake of circumscribing and monopolizing the use of knowledge. This is particularly true for breaking research in technology and big pharma in closed institutions, where foreseen benefits are astronomical in terms of revenues and strategic power. In absolute terms, scientists may be more and more knowledgeable, but in relative terms, the opposite prevails as research and development is out of reach for the public, and for most researchers as well.
Ambience
The lexical definition of ambience is a feeling or mood associated with a particular place. Environment is a token of history, and an analysis may bring understanding of the rationale that drives the present development of ambient intelligence. Firstly, ambience relates to perceived integrity of the environment, DOI: http://dx.doi.org /10.5772/intechopen.95944 but in what sense? Secondly, what changes are obvious when comparing the way production of modern urban environment is organized compared with the traditional ways of building and planning? Thirdly, how does urban form indicate the rationale of economics as well as social and political control?
Traditional integrity
Differences in ambience usually play out to the advantage of historical settings. This is not only a matter of opinion, but reflected in the concept of gentrification, which indicates the preservation and upgrading of historical urban settings, and associated with an influx of new inhabitants and soaring real estate prices [18]. Much of travelling and tourism is based on the fact that historical environments offer a kind of ambience that modern urban settings are void of [19].
Why are historical urban environments so sought after? Why do they please people? One reason is that they associate to important historical events, which are integrated into nationalistic rhetoric. A feeling of nostalgia is probably globally present in the sense that it may remind us of childhood, passed times and our identity.
However, there is another and more tangible reason for the attractiveness of historical urban settings. They are results of handicraft, built out of local materials, following local building traditions, erected by local labor force, which generate overall unity. The finest of historical buildings have pursued a very long life [20,21]. Representing handicraft, traditional architecture possesses an additional quality. Details of buildings are to some extent distinguishable at a distance shorter than 300 meters [22]. When one approaches them, new and smaller details unfold at closer distance. Handmade environments offer continuously new excitement for a pedestrian despite the fact that she or he may have lived in the surrounding for decades.
The first cities known to history were built in a way reflecting the rationale of tribal society. Each group and segment of the local society managed and controlled its own territory. The first European cities breaking this pattern were the Greek cities of the Antique at the time when the city states and citizenship emerged. Those cities were unlocked in the sense that all parts except the privately controlled plots became available for the citizenry. Houses continued to be produced by the inhabitants for their own purposes. Plans were laid out in advance and lots were distributed by means of negotiations and consent, not as commodities exchanged on a market. Ideally, the control was executed in a communal way by the citizenry for the citizenry [23].
The earliest indication of the idea of landed value is a map of central Florence of the early 15th century, showing the taxation value of properties [24]. At about the same time, the central perspective was introduced into visual arts as a new innovation. Both of these phenomena indicate a novel way of distancing oneself. The use value of the physical setting acquired an additional exchange value. The central perspective provided the viewer with a position that used to be reserved for celestial figures and the Omnipotent. Economic and visual alienation seem to have occurred in correlation.
The relation between the citizenry and the ruler remained in some sense reciprocal. Even in the case of the Baroque city plans of the 17th and early 18th century, the people had visual access to the palace of the Prince who likewise could see every corner of the city from his palace. The religious justifications of worldly inequalities did not diminish the need for overall community. The ambience of historical urbanism expresses integrity.
Modern disintegration
The birth of modernist architecture coincided with industrialization of construction. The pioneers designed their works in a style mimicking the design of factory produced items, although the buildings were produced by handicraft [25]. An argument that has been reiterated over and over again concerns integrity of architectural expression. Modernists claim that architecture has to be honest [26]. As honesty is a relative matter, it has to be related to something. The true point of reference for modernists is time, the spirit of our time, heading for the futurewhatever that may indicate. The true expression of any era can be confirmed only in hindsight, which would disqualify the assumed spirit of the present and the future as intelligible points of reference. We cannot pretend, if we want to be truthful! By associating architectural expression with the modern rationale of continuous reinvesting and rebuilding, the destruction of historical settings became acceptable and even preferable. The place, locality and history lost their meaning as points of reference for determining environmental values. Integrity is understood in terms of the future, not in relation to the past and the actual place with its local characteristics and traditions. Consequently, modernistic urban settings and architecture have no homeland, and built environment is globally uniformed -like artificial intelligence.
There has been some opposition to these trends, for instance a quest for genius loci, the spirit of the place, for topophilia and for critical regionalism as opposed to global design [27][28][29]. The results are close to neglectable, and do not exceed a limited number of hailed examples. Postmodernism as architectural style is sometimes associated with anti-modernism, but more so it is another expression of modernism. Various approaches that could be summoned under the concept of retro, are also modernistic in the sense that they are integrated parts of modern settings in constant flux, whether exterior or interior.
The Baroque era still expressed reciprocity between controllers and the controlled. This changed only in the late 18th century, when Jeremy Bentham, the utilitarian, introduced the so-called Panopticon for correctional institutions [30]. Due to the design of the precinct, prisoners were constantly surveilled by the guards, who themselves were invisible to the prisoners. Societal control became unilateral. No wonder Bentham ridiculed natural and imprescriptible rights. In a context of unilateral and total control, there can be no room for any inherent right of the subdued, and benefits are much easier to calculate when they concern only those in command. All concurrent systems for urban surveillance are based on the Panopticon principle. Humans are replaced for a huge variety of surveillance technologies, exempt from the controlled.
Planning legislation of the 19th century was still based on the presumption that plot owners would exploit their property for their own needs. In case of purely speculative projects, a developer would have to stick to approved town plans and available plot supply [31]. A century later, planning legislation was turned the other way around to suit large scale speculation in rising land values. Despite the existence of public planning monopolies, developers acquired the right to develop land much as they pleased [32,33]. The development of planning legislation in Sweden is a case in point. Planning is in practice removed from the public to the corporate sphere and made a club good.
Consider the overall shape of urban environment. Historical cities produced in a traditional way, express an endless variety within an overall unity. This is likely to be the most important single factor that makes historical environments so attractive. That is their ambience. Modern settings express the opposite: Monotonous labyrinths within an overall chaos. Consequently, orientation and identification DOI: http://dx.doi.org /10.5772/intechopen.95944 are made almost impossible, and the best, if not only way to orientate is to use electronical equipment for navigation. That is certainly a need of today, but it is a previously unknown need that did not exist when human habitats were laid out in an intelligible way.
Ambient intelligence
Ambient intelligence is described by providing general outlines and jots of selfcriticism, which set the agenda for further discussions [34]. That is not exceptional, but is it credible?
The phenomenon
Ambient intelligence refers to environments that are sensitive and responsive to the presence of people by means of electronics. In harmony with the modern view of our future haven, it was developed as a corporate initiative in the late 1990s to provide a projection on the future. Information and intelligence were supposed to be hidden in the network that connected different devices. The technological framework behind them was thought to gradually disappear into the surroundings until only the user interfaces remain perceivable by users. The parallel to the Panopticon way of unilateral control is striking! The ambient intelligence paradigm builds upon computing, profiling, context awareness, and interaction design. Applied systems and technologies are supposed to be context aware as they recognize individuals and their situational context. Moreover, they are personalized and tailored for individual needs, and adaptive as they can respond to individuals. They also anticipate individual desires without conscious mediation. The parallel to an age-old narrative, the life of the master and his servants, is obvious.
Ambient intelligence is said to rely on user experience, and the advancement in sensor technology and sensor networks. In response to operational obstacles, a design emerged that created new technologies and media around the user's personal experience. The user is asked to give feedback to improve the design. Biohacking may be an example that illustrates the most private sides of such applications, which seem to draw the line between private and public inside the body of the users.
Ambient intelligence requires a number of key technologies to exist. These include unobtrusive, user-friendly hardware and human-centric computer interfaces. Computing infrastructure is characterized by interoperability, networks and service-oriented architecture. Systems and devices must be reliable and secure, achieved through self-testing and self-repairing software and privacy ensuring technology. The promises for the future resemble those of salvation of the afterworld.
Criticism
It is said that any immersive, personalized, context-aware and anticipatory characteristics bring up concerns about the loss of privacy. At the same time, it is claimed that applications of ambient intelligence do not necessarily have to reduce privacy in order to work! In social sciences, the possibility of flaws is a question of probability. Nuclear accidents and related catastrophes offer a realistic analogy. According to safety calculations, nuclear disasters would never happen, because the computed probabilities are neglectable. They still happen! Intrusion is an everyday phenomenon, and it is difficult to imagine that hacking would decrease when information systems expand and get more complicated and difficult to guard.
Power concentration in large organizations, a fragmented, decreasingly private society and hyperreal environments where the virtual is indistinguishable from the real, are said to be the main topics of critics. But what about the sector as a main factor in the general tendency of concentrating wealth and power? What about the major global technology companies, accountable only to themselves? Should not that be addressed as well?
The Santa Claus' list
According to the Information Society and Technology Advisory Group (ISTAG), the following characteristics will permit the societal acceptance of ambient intelligence: Ambient intelligence should facilitate human contact, be oriented towards community and cultural enhancement, help to build knowledge and skills for work, better quality of work, citizenship and consumer choice, inspire trust and confidence, be consistent with long term sustainability-personal, societal and environmental-and with lifelong learning, be made easy to live with and controllable by ordinary people [35].
Consider the global social media platforms of today, applying the principle of unilateral control. Now, literally billions of people produce information about themselves, free of charge, to be sold by gigantic operators to other corporations and public authorities. It is surveillance of a magnitude that used to be unimaginable. Here, the essence of artificial intelligence is exposed. It may provide benefits and joy for the billions while enriching global corporations, tightening the straitjackets of ordinary citizens and providing the database for individualized control as well as manipulation of consumer choices and political commodities [36]. The Santa Claus' list appears equally important and naïve.
Conclusions
It is easy to laugh at Dr. Pangloss' assertion that our noses are shaped to carry spectacles, therefor we use spectacles. But concurrent designers of spectacles may actually think like the doctor, and so may programmers as well. Designers and programmers are professionals, and the rationale of professions is that they reserve for themselves the right to judge what is accountable knowledge. In their practice, evidence-based knowledge and professional judgement are not necessarily kept apart. Drawing up a list of all the good things ambient intelligence should promote resembles Dr. Pangloss' explanation why his friend drowned in the bay of Lisbon: The bay was created for that purpose! An obvious parallel is the tenet of business that economic growth must be pursued for the sake of economic growth, because in the best of worlds there is perpetual economic growth. Technological development is of course a constitutive part of that narrative. That part also includes the (professional) presumption that ethical guidelines are a matter for the sector itself. MIT professor, Dr. Tegmark has pointed out the urgent need for ethical guidelines, elaborated by the sector itself [37]. Kindly expressed, he cannot be familiar with avalanches of financial disasters, instigated by the financial sector for some centuries now, under the auspices of self-regulation.
The fundamental dilemma is not whether to promote ambient intelligence or not. It will be developed anyway. But how to work out ethical rules that would safeguard users from intrusion, fraud, blackmailing, trafficking, abduction of identity, DOI: http://dx.doi.org/10.5772/intechopen.95944
Author details
Christer Johannes Bengs Swedish University of Agricultural Sciences, Sweden *Address all correspondence to<EMAIL_ADDRESS>robbery, or commercial, social and political manipulation, or global surveillance of each and every individual -all the horrors of Pandora's box?
As far as ethical rules are concerned, the problem is not only related to artificial intelligence, but to the very essence of modern society. We are living in a world in constant flux, where uncertainty is said to be increasingly replaced by rational decision making, backed by science and new technology. In the best of worlds, that process would eventually make individual judgement and moral choices obsolete. However, we are not quite there yet, and the outspoken idea of modern societies is not to be judgmental. The contradiction between ideology and reality indicates a vast grey zone, where Pandora's box is wide open. Voltaire and Dr. Pangloss may have died, but the Panglossian dilemma lives! © 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 9,646 | 2021-02-01T00:00:00.000 | [
"Philosophy",
"Computer Science"
] |
Heavy Ion Physics Highlights from ATLAS
An overview of the ATLAS results from Pb+Pb collisions at √ sNN = 2.76 TeV will be presented. The results for hard probes include both single jet and di-jet measurements, W and Z bosons, photons, and high pT charged tracks. Taken together these results provide a compelling picture of the interaction of hard particles in the dense QCD medium. Additionally, ATLAS has measured properties of the bulk particle production including charged particle multiplicity and extensive measurements of the azimuthal particle distributions and correlations. Results shown will be from the ∼ 10 inverse μb−1 of minimum bias recorded in the 2010 LHC heavy ion run, as well as from ∼ 0.15 nb−1 sampled in the 2011 LHC heavy ion run.
Lead-Lead collisions at
√ s NN = 2.76 TeV in the Large Hadron Collider (LHC) provide the opportunity to study strongly interacting matter at the highest temperatures achieved in the laboratory.The ATLAS experiment has a robust heavy-ion program to take advantage of this opportunity.In the 2010 and 2011 LHC runs, yielding approximately 10 μb − 1 and 0.15 nb − 1, respectively, the ATLAS experiment has made a set of measurements that form an emerging picture of the hot dense matter created in a heavy ion collision.These measurements include bulk properties of the system -charged particle multiplicity [1] and extensive measurements of the azimuthal particle distributions and correlations [2] -as well as hard probes such as photons [3], W [4] and Z bosons [5], high pT charged tracks [6], single [7] and di-jet measurements [8], and muons from heavy flavor decays [9].The ATLAS detector [10] at the LHC covers nearly the entire solid angle around the collision point.It consists of an inner tracking detector surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters, and a muon spectrometer incorporating three superconducting toroid magnet systems.The Inner-detector system (ID) is immersed in a 2 T axial magnetic field and provides charged particle tracking in the range |η| < 2.5.The high-granularity silicon pixel detector covers the vertex region and is surrounded by the silicon microstrip tracker and transition radiation tracker.The calorimeter system covers the range |η| < 4.9.Within the region |η| < 3.2, electromagnetic calorimetry is provided by barrel and endcap high-granularity lead liquid-argon (LAr) calorimeters, with an additional thin LAr presampler covering |η| < 1.8.Forward calorimeters (FCal) are located in the range 3.1 < |η| < 4.9.The muon spectrometer comprises separate trigger and high-precision tracking chambers that measure the deflection of muons in a magnetic field generated by superconducting air-core toroids.The precision chamber system covers the region |η| < 2.7 with trigger coverage in the range |η| < 2.4.
Bulk Properties
In the new energy regime available for heavy ion collisions at the LHC, the charged particle multiplicity as a function of centrality may be studied to learn about the initial entropy production in the created hot dense matter.Figure 1 shows dN ch /dη/(N part /2) for several measurements as a function of √ s NN (top) and the multiplicity as a function of N part [1].Several striking characteristics are observable in the figure: it is clear that the multiplicity in central A+A collisions grows faster than in p+p, and that log scaling, which could plausibly describe the data up to RHIC energies, is ruled out by the LHC data (see [11], [12], and [1] as well as references therein for a discussion of the scaling models).A consistent scaling description of the data from low to high energies is not yet clear.
Looking at the centrality distribution, it is noteworthy that the shape of the N part dependence is consistent at both RHIC and LHC energies, and the absolute magnitude is different by a factor approximately 2.15 independent of centrality.
Particle Flow
An important observable used to understand the hot dense medium is the azimuthal anisotropy of particle emission.At low p T ( 4 GeV), this anisotropy results from a pressure-driven anisotropic expansion of the created matter, with more particles emitted in the direction of the largest pressure gradients.The observed azimuthal anisotropy is customarily expressed as a Fourier series in azimuthal angle φ: where Φ n is the n-th order reaction plane and v n are the amplitudes of the Fourier expansion.In typical non-central heavy ion collisions where the nuclear overlap region has an "elliptic" shape (or quadrupole asymmetry) on average, the azimuthal anisotropy is expected to be dominated by the v 2 component.However, it was recently pointed out that the positions of the nucleons in the overlap region can fluctuate to create matter distributions with additional shape components, such as dipole (n = 1), triangular (n = 3), and higher asymmetries.As shown in Figure 2 considering all the Fourier components up to n=6, allows a precise reconstruction of the two particle Δφ correlations [2].This implies that the shape observed in two particle correlations at low p T can be largely attributed to geometric and collective phenomena and does not involve jet quenching.
Electroweak Bosons
In order to understand the energy loss of of color-charge carriers in the medium, a baseline measurement of the production rates of electroweak bosons that do not interact via the strong force is made.Measurements of both photons [3] and Z bosons (Z → ee and Z → μμ) [5] have been made using the 2011 Pb+Pb dataset and the particle yields scaled by the appropriate N coll compared in different centrality bins as shown in Figure 3.As seen in the figure, the particle production of photons and Z bosons scales with N coll confirming that within the experimental precision electroweak bosons are not effected by the medium, and that the binary collision model is correct for non-color charge carrying particles.
Color Sensitive Probes
Unlike the color-neutral electroweak bosons color-charge carrying particles show a marked break from N coll scaling.Inclusive single particle measurements as well as two particle correlations have suggested that jets are being "quenched" in the medium, and much effort has been made to understand the mechanisms of this quenching.The suppression of particle production may be quantified using the nuclear modification factor, R CP : ICFP 2012 where C and P refer to central peripheral event classes, respectively.Figure 4 shows the R CP of inclusive charged particles measured by ATLAS [6], and as expected from previous results displays a clear suppression of particles in central Pb+Pb events.The nuclear modification factor drops to a minimum value comparable to the measurements made at RHIC energies [13] at p T ≈ 7 GeV , before rising to approximately 50% at p T = 30 GeV.No psuedorapidity dependence is observed.
To go beyond the inclusive particle measurement and look more closely at the jets themselves, ATLAS has made the first direct observation of jet quenching by measuring the imbalance of di-jet energies [8].The imbalance is expressed in terms of the asymmetry, A J , of two azimuthally correlated jets: Figure 5 shows the asymmetry and Δφ distributions for Pb+Pb, p+p, and simulated events.In the more peripheral Pb+Pb events as in p+p and simulation, A J is peaked at zero implying no relative modification of the jet energy, i.e. no quenching.However in more central Pb+Pb events A J this is no longer the case, showing that there is a relative quenching of one of the two jets due to the presence of the dense medium.Despite this quenching of the jet energy, Δφ distributions remain consistent even in the most central collisions.In addition to the jet energy imbalance one may measure the nuclear modification factor, R CP , of the fully reconstructed jets rather than inclusive single particles [7]. Figure 6 shows the R CP of jets as a function of N part , clearly demonstrating the suppression of the overall jet yield in addition to the imbalance seen in Figure 5.The suppression observed is independent of the jet p T within the experimental uncertainties.In addition to jets reconstructed using the anti-k t algorithm with R = 0.4, jets were reconstructed using R = 0.2, 0.3, and 0.5.This allows the observation of a slight but significant increase in the magnitude of the suppression with smaller R at low p T .As a further look into the behavior of color charged objects in the medium, the nuclear modification factor of muons from heavy flavor decay is formed.A template fitting method is used to estimate the portion of muons coming from heavy flavor decays and their R PC (R PC = 1/R CP used in order to minimize the impact of fluctuations in the low statistics peripheral event sample) is shown in Figure 7.
The figure shows that although the particle yield is suppressed in central events compared to peripheral events, the shape of the modification in p T is different from that shown in Figure 4 for inclusive charged particles.
ICFP 2012 00036-p.5The ATLAS heavy-ion program has had two successful years of data taking.Among the measurements made, the charged particle multiplicity has shown log √ s NN scaling to be broken, while the centrality shape remains consistent with lower energy.The measurement of the azimuthal distribution of produced particles has shown that initial geometry largely explains correlation function structure, and has greatly increased our knowledge of the medium's early geometry and its subsequent expansion.
In addition the measurement of electro-weak bosons, photons and Z bosons, which do not strongly interact sets a baseline for hard probes that are not effected by the hot and dense medium.The N coll scaling behavior of electro-weak bosons demonstrates that the Glauber binary collision model describes well the collision geometry and the incidence of hard scatterings in the event.Further, these measurements lay the groundwork for the use of photons and Z bosons as a calibration for modified jets in boson + jet events.This baseline of unmodified color neutral objects is complemented by the studies of color charge sensitive objects, among them inclusive charged particles, jets, and muons from heavy flavor decays, whose modification is studied in order to learn about the properties of the medium.The nuclear modification factor of inclusive charged particles shows suppression similar to that seen at RHIC energies at low momentum, but also clearly shows an increase that at higher p T that was not clear in the RHIC data.This momentum dependence is in contrast to the strikingly flat shape in p T of the jet nuclear modification factor and the somewhat different suppression seen in muons from heavy flavor decay.The precise mechanisms of the energy quenching due to the medium are not yet clear, however these measurements are shedding significant light on them.
This research is supported by FP7-PEOPLE-IRG (grant 710398), Minerva Foundation (grant 7105690) and by the Israel Science Foundation (grant 710743).
Figure 1 .
Figure 1.Top: √ s NN dependence of the charged particle density per colliding nucleon pair dN ch /dη/(N part /2) from a variety of measurements in dN ch /dη(η=0) in p+p and p+p collisions (inelastic and non-single diffractive results) and from central A+A collisions, including the ATLAS 0-6% centrality measurement reported here for |η| <0.5 and the previous 0-5% centrality ALICE and CMS measurements (points shifted horizontally for clarity).The curves show different expectations for the √ s NN dependence: results of a Landau hydrodynamics calculation (dotted line), a √ s NN extrapolation of RHIC and SPS data proposed by ALICE (dashed line), a logarithmic extrapolation of RHIC and SPS data (solid line).Bottom: dN ch /dη/(N part /2) vs N part for 2% centrality intervals over 0-20% and 5% centrality intervals over 20-80%.Error bars represent combined statistical and systematic uncertainties on the dN ch /dη(η=0) measurements, whereas the shaded band indicates the total systematic uncertainty including N part uncertainties.The RHIC measurements have been multiplied by 2.15 to allow comparison with the √ s NN = 2.76 TeVresults.The inset shows the N part < 60 region in more detail.Figure is from reference [1].
Figure 2 .
Figure 2. Centrality dependence of Δφ correlations for 3 < p a T , p b T < 4 GeV.A rapidity gap of 2 < |Δη| < 5 is required to isolate the long-range structures of the correlation functions, i.e. the near-side peaks reflect the "ridge" instead of the autocorrelations from jet fragments.The error bars on the data points indicate the statistical uncertainty.The superimposed solid lines (thick-dashed lines) indicate contributions from individual v n,n components (sum of the first six components).Figure is from reference [2].
Figure 3 .
Figure 3. Left: Centrality dependence of the photon yield per event in several p T bins, scaled by the average nuclear thickness function T AA (equivalent to N coll divided by the total inelastic p+p cross section) for that centrality interval.The horizontal axis is the average number of participants N part for each selected centrality interval.Statistical errors are shown by the error bars.Systematic uncertainties on the photon yields are shown by the yellow bands.Figure is from reference [3].Right: Centrality dependence of Z boson yields divided by N coll , measured in |y Z | < 2.5.Results for Z → ee (upward pointing triangles) and Z → μμ (downward pointing triangles) channels are shifted left and right respectively (for visibility) from their weighted average (diamonds) which is plotted at the nominal N part value.The statistical (bars) and systematic (shaded bands) uncertainties are calculated using the appropriately weighted average of the two contributing sources.Brackets show the combined uncertainty including the uncertainty on N coll .The dashed lines are constant fits to the combined results.Figure is from reference [5].
Figure 4 .Figure 5 .Figure 6 .
Figure 4. R CP extracted from the inclusive charged particle distributions in three different η ranges, and three centrality combinations: with 0-5%, 30-40% and 50-60% as numerators and a common peripheral sample (60-80%) as denominator.Statistical errors are shown with vertical lines and the overall systematic uncertainty at each point is shown with gray boxes.Figure is from reference [6].
Figure 7 .
Figure 7. Muons from heavy flavor decay peripheral to central ratio, R PC , as a function of p T for different centrality bins.The points are shown at the mean transverse momentum of the muons in the given p T bin.The error bars include both statistical and systematic uncertainties.The contribution of the systematic uncertainties from N coll and efficiency, which are fully correlated between p T bins, are indicated by the shaded boxes.Figure is from reference [9]. | 3,330.6 | 2014-04-01T00:00:00.000 | [
"Physics"
] |
Shp2 regulates migratory behavior and response to EGFR-TKIs through ERK1/2 pathway activation in non-small cell lung cancer cells
In the clinical treatment of lung cancer, therapy failure is mainly caused by cancer metastasis and drug resistance. Here, we investigated whether the tyrosine phosphatase Shp2 is involved in the development of metastasis and drug resistance in non-small cell lung cancer (NSCLC). Shp2 was overexpressed in a subset of lung cancer tissues, and Shp2 knockdown in lung cancer cells inhibited cell proliferation and migration, downregulated c-Myc and fibronectin expression, and upregulated E-cadherin expression. In H1975 cells, which carry double mutations (L858R + T790M) in epidermal growth factor receptor (EGFR) that confers resistance toward the tyrosine kinase inhibitor gefitinib, Shp2 knockdown increased cellular sensitivity to gefitinib; conversely, in H292 cells, which express wild-type EGFR and are sensitive to gefitinib, Shp2 overexpression increased cellular resistance to gefitinib. Moreover, by overexpressing Shp2 or using U0126, a small-molecule inhibitor of extracellular signal-regulated kinase 1/2 (ERK1/2), we demonstrated that Shp2 inhibited E-cadherin expression and enhanced the expression of fibronectin and c-Myc through activation of the ERK1/2 pathway. Our findings reveal that Shp2 is overexpressed in clinical samples of NSCLC and that Shp2 knockdown reduces the proliferation and migration of lung cancer cells, and further suggest that co-inhibition of EGFR and Shp2 is an effective approach for overcoming EGFR T790M mutation acquired resistance to EGFR tyrosine kinase inhibitors (TKIs). Thus, we propose that Shp2 could serve as a new biomarker in the treatment of NSCLC.
INTRODUCTION
Lung cancer is a leading cause of cancer death worldwide. Lung cancer has been reported to account for 13% of all new tumor cases [1], and 50% of patients with stage I or II non-small cell lung cancer (NSCLC) have been found to develop systemic metastases despite complete resection [2]. Traditional chemotherapy is only modestly effective in the treatment of patients with advanced lung cancer, with the median survival time being only 8-10 months [3]. Recently, targeted therapies involving the use of epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) were reported to be beneficial for NSCLC patients, but the response to EGFR-TKIs was limited mainly to NSCLC patients carrying EGFR mutations (50% of Asian patients and 15% of Western Research Paper www.impactjournals.com/oncotarget patients), and ~20%-30% of these patients failed to respond to the drugs [4]. Furthermore, some of these patients might develop resistance to EGFR-TKIs, frequently through the EGFR T790M mutation or through upregulation of c-MET or other receptors [5]. Consequently, only 10%-20% of NSCLC patients might benefit from EGFR-TKIs treatments, and improved therapeutic approaches are clearly required. Thus, to enable the development of new and effective targeted therapeutic drugs for lung cancer, considerable research effort is currently being devoted toward enhancing our understanding of the mechanisms and molecules that regulate the growth and migratory behavior of lung cancer cells.
Protein tyrosine phosphorylation and dephosphorylation levels in cells are governed by the balanced actions of protein tyrosine kinases (PTKs) and protein tyrosine phosphatases (PTPs) [6]. Whereas PTKs have been widely reported to promote the development of human cancers, PTPs have been mostly regarded as tumor suppressors. However, increasing evidence suggests that certain PTPs can also promote cancer development (i.e., act as oncogenes), such as the PTP Shp2 (src homology 2 domain-containing tyrosine phosphatase 2), which is encoded by PTPN11 [7]. Shp2-the first PTP-superfamily member identified to act as an oncogene-functions in the control of cell proliferation, survival, differentiation, invasion, metastasis, and morphogenesis [8].
Shp2 is a ubiquitously expressed PTP that participates in signaling events proximal to receptor PTKs, such as EGFR and PDGF, insulin, and IGFI hematopoietic receptors [9]. Activated Shp2 has been reported to mediate growth factor-stimulated Ras-ERK1/2 (extracellular signal-regulated kinase 1/2) activation and promote cell growth and survival [7]. How Shp2 mediates Ras-ERK1/2 activation remains incompletely understood, although the activation could involve several potential mechanisms, including the dephosphorylation of a RASGAP binding site on GAB1 or the dephosphorylation of CSK binding sites on PAG/CBP and paxillin [10,11]. Because Shp2 functions in multiple oncogenic receptor PTK pathways, targeting Shp2 could represent a favorable strategy for improving cancer therapies.
Shp2 has now been widely confirmed to represent a promising target in cancer treatment [12][13][14][15], and Shp2 has been recently suggested to function in tumor initiation and to enhance tumor maintenance and progression [16][17][18][19]. However, no previous study has comprehensively evaluated Shp2 function in NSCLC development. We hypothesized that Shp2 expression contributes to NSCLC progression and that Shp2 targeting could serve as a potential treatment for lung cancer. In previous work, we have focused on PTP actions in diverse signaling pathways and diseases [20][21][22]. Here, we specifically investigated the function of Shp2 in lung cancer by manipulating Shp2 expression in lung cancer cell lines.
Shp2 is overexpressed in clinical samples of NSCLC
To assess the potential involvement of Shp2 in NSCLC development, we first examined Shp2 expression in human NSCLC tumor tissues and paired adjacent normal tissues. Immunohistochemical staining of 23 distinct tissue samples revealed that Shp2 expression was significantly higher in lung cancer tissues than in normal lung tissues (P < 0.001) ( Figure 1).
Figure 1: Shp2 expression is increased in non-small cell lung cancer (NSCLC).
Anti-Shp2 antibody staining of (A) normal lung tissue and (B) NSCLC tissue. The IHC semi-quantitative score was derived based on two criteria: the antibody staining intensity was multiplied by the percentage of tumor cells stained. IHC scores for each set of specimens were averaged (N = 23) and statistically analyzed (C). www.impactjournals.com/oncotarget
Shp2 knockdown inhibits tumor growth and enhances cellular response to gefitinib
We examined the functional significance of Shp2 expression in NSCLC cells by knocking down Shp2 expression through RNA interference in the cell lines H1975 and H292 (Figure 2A). In H1975 cells, which are gefitinib resistant, proliferation was decreased following Shp2 siRNA transfection, and the IC50 of the cellular response to gefitinib was reduced from >10 μM to 1.60 μM after Shp2 depletion ( Figure 3A, 3C). Cell proliferation was also markedly inhibited in the case of Shp2-depleted H292 cells, but these cells were only slightly sensitized to gefitinib after Shp2 knockdown ( Figure 3B, 3D). In a complementary set of assays, we examined the effect of Shp2 overexpression in cells ( Figure 2B); whereas expression of Shp2 WT decreased cellular sensitivity to gefitinib in H292 cells ( Figure 3F), the expression did not alter gefitinib sensitivity in H1975 cells ( Figure 3E).
Shp2 knockdown reduces migration in NSCLC cells
Next, Shp2 involvement in the control of NSCLC cell migration was assessed by manipulating Shp2 expression and performing wound-healing and Transwell-migration assays. When Shp2 expression was downregulated in H292 cells (Figure 2A), wound closure was slowed ( Figure 4A) and migration through the Transwell membrane was delayed ( Figure 4B). However, Shp2 WT expression in H292 cells caused no increase in cell migration in either the wound-healing assay (data not shown) or the Transwell-migration assay (data not shown), which suggests that the parental cancer cell line already exhibits maximal migration capacity.
Shp2 expression enhances c-Myc expression
The c-Myc oncogene is frequently overexpressed in several types of cancer, and c-Myc overexpression is associated with poor prognosis in lung adenocarcinoma [23]. We performed immunoblotting to examine c-Myc expression in lung adenocarcinoma cells transfected with Shp2/control siRNA or Shp2 WT /empty vector ( Figure 5A, 5B). Whereas Shp2 knockdown resulted in c-Myc downregulation, Shp2 overexpression led to increased c-Myc expression (relative to control siRNA or vector transfection, respectively). Thus, Shp2 expression promoted c-Myc expression in lung adenocarcinoma cells.
Shp2 promotes epithelial-to-mesenchymal transition (EMT) and Shp2 inhibition suppresses EMT
Based on considering the aforementioned results, we evaluated whether Shp2 affects the expression of the EMTassociated proteins E-cadherin and fibronectin in lung cancer cells. Following transfection of the Shp2-silencing siRNA, E-cadherin expression was increased in H1975 cells and fibronectin expression was decreased in H292 cells ( Figure 5A). By contrast, E-cadherin expression was decreased in cells expressing Shp2 WT ( Figure 5B). These results indicate that Shp2 promotes EMT and that Shp2 inhibition leads to mesenchymal-to-epithelial transition in lung cancer cells.
Shp2 enhances c-Myc expression and EMT potentially through Ras/MAPK signaling
To identify the potential signaling pathways by which Shp2 upregulates c-Myc expression and promotes EMT, we examined the activation of the Ras/MAPK pathway, one of the key signaling pathways stimulated by Shp2 in other cell types. Ras/MAPK pathway activation was assessed by monitoring the phosphorylation (and thus the activation) status of the main downstream effector ERK1/2. Whereas ERK1/2 phosphorylation was higher in lung adenocarcinoma cells overexpressing Shp2 than in vector-control cells, ERK1/2 phosphorylation was lower Transwell-migration assays (D). *P < 0.05 versus controls. www.impactjournals.com/oncotarget in cells transfected with the Shp2 siRNA than in cell transfected with the control siRNA ( Figure 5).
Lastly, to determine whether Shp2 promotes c-Myc expression and EMT through activation of ERK1/2 signaling, we tested the effect of U0126, a small-molecule inhibitor of ERK1/2 ( Figure 6). Western blotting analysis revealed that Shp2 WT -induced upregulation of c-Myc and fibronectin expression and downregulation of E-cadherin expression were abrogated in H292 cells treated with U0126 ( Figure 6B). Accordingly, in H292 cells expressing Shp2 WT , the decreased sensitivity to gefitinib caused by Shp2 overexpression was counteracted by U0126 ( Figure 6A). Moreover, we tested whether U0126 can block the migration of NSCLC cells. Because Shp2 overexpression did not markedly affect cell migration in this study (as noted earlier in this section), we used the parental H292 cells, and we quantified the motility of the cells in the absence or presence of U0126 by using wound-healing and Transwell-migration assays. Our results showed that U0126 was as potent as the Shp2 siRNA in suppressing the migration and invasive behavior of H292 cells ( Figure 6C, 6D).
DISCUSSION
We have demonstrated here that Shp2 was overexpressed in lung cancer tissue samples from a group of NSCLC patients from Beijing, and that in lung cancer cells, Shp2 promoted proliferation, reduced gefitinib sensitivity, and enhanced migration through activation of the ERK1/2 pathway. These results suggest that Shp2 functions as a key positive regulator of lung cancer progression. More than 58 different Shp2 mutations have been identified in various tumors, and in patients with these tumors, normal cell proliferation and migration are disrupted [8]. However, in lung cancer, the Shp2 mutation rate is only 1.81% according to the Catalogue of Somatic Mutations in Cancer databank (www.sanger.ac.uk) [24]. Therefore, our finding here that wild-type Shp2 overexpression enhances tumor growth and migration reveals a broad role of Shp2 in promoting the progression of lung cancer.
The c-Myc oncogene is regarded as a driver oncogene that links growth factor stimulation to proliferation in normal and tumor cells [25]. Our results showed that Shp2 knockdown inhibited c-Myc expression, and, accordingly, proliferation was impaired in Shp2-knockdown cells. Moreover, c-Myc is frequently overexpressed in lung cancer and promotes tumor progression in Raf-or Ras-driven lung cancer, and this is associated with poor prognosis [26,27]. Our results also indicated that ERK1/2 signaling, which occurs downstream of Shp2 and Ras, acted upstream of c-Myc. Thus, Shp2 might regulate lung cancer cell proliferation through the ERK/c-Myc signaling axis. EGFR-TKIs treatment is associated with survival in NSCLC patients harboring EGFR activating mutations [28]. However, despite a highly favorable initial response, drug resistance and tumor progression occur in most patients [29,30]. Because Shp2 functions as a mediator of EGFR signaling, we investigated whether Shp2 affects cellular sensitivity to the EGFR-TKIs gefitinib. In gefitinib-resistant H1975 cells, Shp2 knockdown markedly lowered the gefitinib IC50, this finding indicate that combined inhibition of EGFR and Shp2 might improve drug response in tumors harboring the T790M mutation. Whereas the Shp2 knockdown only slightly affected the gefitinib sensitivity of H292 cells, which are gefitinibsensitive lung cancer cells. Given that Shp2 promotes tumor growth and migration through the activation of the ERK pathway [7], the aforementioned findings raise the intriguing question of whether EGFR-TKIs completely inhibit ERK activity. The T790M mutation causes gefitinib resistance by sterically interfering with gefitinib binding to EGFR, which can result in diminished impaired ERK phosphorylation in H1975 cells relative to the H292 cells that are sensitive to gefitinib [31]. In H1975 cells, Shp2 knockdown reduced ERK phosphorylation by >70%. This result suggests that ERK inhibition by using only EGFR-TKIs might not be effective in certain cases, and that co-inhibition of EGFR and Shp2 might represent a comparatively more effective strategy for the treatment of gefitinib-resistant lung tumors. The finding here that Shp2 overexpression mitigated the effects of gefitinib by enhancing ERK phosphorylation in H292 cells furthers bolsters our conclusion. Moreover, growing evidence indicates that EMT increases resistance to EGFR-TKIs therapy [5,32,33], and our results showed that Shp2 knockdown induced E-cadherin upregulation and fibronectin downregulation, which are characteristic of mesenchymal-to-epithelial transition and have been found to be associated with improved response to EGFR-TKIs treatment.
Metastasis is the main cause of death in the majority of patients with NSCLC [34]. In this study, we evaluated critical stages in the invasive metastasis cascade, including EMT. EMT has been widely demonstrated to be frequently "hijacked" during metastatic progression through the loss of epithelial cell-junction proteins, including E-cadherin, and the gain of mesenchymal markers, such as vimentin and fibronectin [15,35]. We observed similar changes here in lung cancer cells. Notably, Shp2 knockdown suppressed the migration of lung cancer cells, which agrees with previous work indicating that positive expression of Shp2 was closely related to the metastasis of NSCLC to lymph nodes [36]. Thus, interference with Shp2 function might provide an approach for inhibiting cancer cell metastasis. However, we did not detect a major effect of Shp2 overexpression on metastasis in lung cancer cells. Shp2 was found to be expressed at higher levels in the majority of NSCLC specimens than in adjacent normal lung samples [36]. Therefore, one underlying reason for the lack of effect of Shp2 overexpression might be that the metastatic ability of the lung cancer cells was already enhanced maximally.
Our results indicated that Shp2 expression potentially induced the cellular EMT program through activation of the ERK pathway. ERK function in EMT has been examined not only in normal development, but also in cancer metastasis. For example, ERK activation promoted the initiation of epithelial tubule development through morphological changes [37], and ERK2 was specifically implicated as an EMT driver [38]. Considering that U0126 suppressed Shp2-induced EMT and migration of lung cancer cells in this study, drugs that target the ERK pathway could potentially be used for treating Shp2overexpressing lung cancers.
Our finding that increased Shp2 expression promotes the EMT phenotype and c-Myc expression in lung cancer cells suggests that Shp2 can serve as a potential target in lung cancer treatment. We have provided evidence indicating that Shp2 knockdown reduces the proliferation and migration of lung cancer cells and that co-inhibition of EGFR and Shp2 can effectively overcome acquired EGFR-TKIs resistance. These findings provide a rationale for future investigations into the effects of small-molecule inhibitors of Shp2 on lung cancer progression and thus into a promising new target for lung cancer therapy.
Tumor tissue samples
Twenty-three pairs of primary resected NSCLC tumor specimens and control normal tissue samples, which were adjacent to and at least 5 cm from the tumor lesion, were obtained after surgical resection from patients of Peking People's University Hospital. This study was approved by the Institutional Review Board of Peking People's University Hospital.
Immunohistochemistry (IHC)
IHC was performed to evaluate Shp2 expression in paraffin-embedded cancer tissue specimens and adjacent normal specimens (normal controls). Sections were stained with an anti-Shp2 antibody (1:200; sc-280, Santa Cruz Biotechnology, Santa Cruz, CA). The IHC semiquantitative score was derived based on two criteria: Immunoreactivity was classified by estimating the percentage (P) of cancer cells exhibiting characteristic staining (from 0% to 100%) and by estimating the intensity (I) of staining (1, weak; 2, moderate; 3, strong), and then the score was calculated by multiplying the percentage of positive cells by the intensity: Q = P × I (maximum = 3).
Human wild-type Shp2 (Shp2 WT ) cDNA was a generous gift from Prof. Benjamin Peng (Hong Kong University of Science and Technology, Hong Kong SAR, China); the cDNA was cloned into pSP64R1 vector. The siRNAs and the Shp2 WT plasmid were transfected into cells by using Lipofectamine 2000 (Invitrogen Life Technologies, Carlsbad, CA) according to the protocol recommended by the manufacturer. Immunoblotting was performed to assess Shp2 silencing or overexpression at 72 h after transfection.
Cell-proliferation assay
Cell proliferation was measured by performing the CCK8 assay according to the manufacturer's specifications (DOJINDO, Kumamoto-ken, Japan). Briefly, cells seeded in 96-well plates were treated with up to 10 μM gefitinib for 3 days. Subsequently, 110 μL of fresh medium containing 10 μL of CCK8 reagent was added to the wells, and the plates were incubated for 2 h at 37°C. The culture plates were then placed on a microplate reader, and the optical density (OD) was measured at 450 nm. Wells containing only medium were used for background correction. Each experiment was performed at least thrice. Immunoblotting Cells exposed to different treatments were lysed on ice for 1 h in RIPA buffer, and the extracts were used in western blotting analyses. Proteins were resolved using SDS-PAGE and transferred to PVDF membranes (NEN, Boston, MA), which were blocked in TBS containing 5% skim milk (Sigma, St. Louis, MO) and then incubated (overnight, 4°C) with mouse monoclonal antibodies against Shp2 (1:1000; 610622, BD Transduction Laboratories, San Jose, CA), c-Myc (1:1000; sc-40, www.impactjournals.com/oncotarget Santa Cruz Biotechnology), E-cadherin (1:500; 610252, BD Transduction Laboratories), or GAPDH (1:1000; Zhongshan Ltd., Beijing, China); or rabbit polyclonal antibodies against fibronectin (1:250; ab2413, Abcam, Cambridge, UK) or ERK or p-ERK (both 1:1000; Cell Signaling Technology, Danvers, MA). Immunoreactive bands were detected by incubating membranes with HRP-conjugated secondary antibodies (1 h, room temperature) and then with enhanced chemiluminescence substrate.
Cell-migration assays
Firstly, cells were starved for 24h in RPMI medium alone. Wound-healing assay: Confluent cell monolayers in 6-well plates were scratched with a 200-μL pipet tip and then incubated for 24 h in RPMI medium alone. Scratch areas were quantified using Image Pro Plus. Where U0126 was used, cells were plated at sub-confluence and treated on Day 3 with medium containing U0126. Transwell-migration assay: After different treatments, 10 5 cells were resuspended in RPMI medium alone and added to Transwell membranes (diameter: 6.5 mm; pore size: 8 μm; Corning, Corning, NY), which were placed in 24-well-plate wells containing 700 μL of RPMI medium supplemented with 10% FBS. The chambers were incubated for 24 h at 37°C. The cells that migrated to the lower chamber were fixed in 4% paraformaldehyde for 30 min and stained with crystal violet. The invasion-assay results were quantified by counting the cells on the lower surface of the filters by using Image Pro Plus.
Statistical analysis
All data were analyzed using Student's t test and are presented as means ± SD. Gefitinib IC50 values were calculated by fitting a 3-parameter logistic function to normalized data. Differences were considered statistically significant at P < 0.05. All statistical analyses were performed using SPSS version 16.0 software. | 4,419.8 | 2017-08-14T00:00:00.000 | [
"Biology",
"Chemistry",
"Medicine"
] |