Robust artifactual independent component classification for BCI practitioners
Objective. EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain–computer interfaces (BCIs). Approach. Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. Main results. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Significance. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.
Introduction
Artifacts are omnipresent in recordings of the electroencephalogram (EEG) and other brain signals. For neuroscientific or clinical purposes the interpretation of EEG signals depends on relatively clean recordings. Thus, artifact avoidance during measurement and post-hoc artifact removal are important steps to enhance the signal-to-noise ratio (SNR) before scientific interpretation of the data. While task-independent artifacts may mask an existing effect, artifacts systematically locked to an experimental task are even more problematic: they may lead to misinterpretation of the data and spurious results.
The field of the brain-computer interface (BCI) not only makes use of offline analyses, but strives to interpret mental states on a single-trial basis in real-time and in closed-loop scenarios [1]. BCI research is especially sensitive to task-locked artifacts, as the decoding of a user's intent by a BCI system should not rely on task-related non-neural signals. This requirement is most important when conducting research with healthy study participants on a novel paradigm or analysis method which should be transferable to severely motor-impaired patients, because they may not be physically capable of producing those artifacts [2][3][4]. Understandably, the role of artifacts is thus scrutinized during peer-reviewed publication processes.
The exclusive use of brain signals in BCI must typically be dropped when it comes to practical tests with end-users in need, as hybrid BCI approaches [5,6] provide a richer and more reliable control than pure BCIs. Additionally, interest in novel types of studies is growing amongst EEG researchers. Such studies include users (inter-)acting in space [7][8][9] like in collaborative and social paradigms (for a review see [10]), the interaction between users and machines [11] and the nonmedical use of BCI methods [12,13].
From an EEG practitioner's point of view, a fully automatic algorithmic solution for the treatment of artifacts is desirable. It would put the practitioner in control of artifacts, enabling them either to be removed or to have their influence checked. Ideally, this would be realized by a global classifier which could be trained once and then reliably separates multiple types of artifactual components from neural components. The classifier should work robustly across data from different users and across domains. The latter includes changing experimental paradigms and tasks, different preprocessing methods and varying EEG electrode setups. It should do so without any need of re-training, and it should not require separate artifact recordings before it can be applied to novel scenarios.
State-of-the-art IC artifact classification
For an extensive review of artifact reduction techniques in the context of BCI-systems, we refer the reader to [14]. In our work, we concentrate on a class of popular artifact rejection approaches, which decompose the original EEG into independent source components (ICs) using independent component analysis (ICA). This method exploits the assumption that artifactual signal components and neural activity are generated independently. Artifactual ICs are handselected and then discarded. The remaining neural components are used to reconstruct the EEG [15,16].
While assumptions for the application of ICA methods are only approximately met in practice (no systematic coactivation of artifactual and neural activity, linear mixture of independent components (ICs), stationarity of the sources and the mixture, prior knowledge about the number of components), their application usually leads to a good, albeit not perfect separation for common artifacts such as blinks, eye movements or scalp muscles [17][18][19][20]. ICA has successfully been applied to the removal of cochlea implant artifacts [21]. However, gait-related artifacts are reported to remain in most of the ICs in EEG recorded during mobile activities [9,22].
Because a thorough analysis of the achievable separation performance is out of the scope of this paper, we refer the reader to [17,23,24] on the question of which ICA variants are well-suited for artifact rejection. Instead, we focus on practical tools which avoid the time-consuming hand-rating process of ICs by classifying ICs with the help of machine learning methods into artifactual and non-artifactual components. Most approaches concentrate on eye artifacts [25][26][27][28][29][30][31], but automatic classification has also been successful for heart-beat artifacts [28,31], generic discontinuities [29], muscle artifacts [31][32][33][34] and even very specialized artifacts such as cochlear implants [21]. As most of these methods have a supervised basis, to some degree they reflect the specific conditions of the training set. The EEG practitioner is now faced with the question of how well supervised methods generalize to his or her data acquired under novel experimental conditions with different preprocessing.
Unsupervised methods successfully circumvent this problem for example by reverting to automatic thresholding strategies [29]. However, these methods are often limited to the use of one or two features and detect only certain types of artifacts. It is unclear how to extend them to more complex artifacts with a varying physiological fingerprint, such as muscle artifacts. For supervised or template-based approaches, first studies suggest that generalization to novel paradigms is possible [28,30,31,34]; however, efforts have concentrated on eye artifacts [28,30].
Robustness under novel paradigms and electrode setups
In this paper, we take a step forward by analyzing the generalization ability of a state-of-the-art supervised IC classification algorithm which we have recently proposed [34]. It is not restricted to the classification of eye or muscle artifacts, but is equally well suited to detect other artifacts such as loose electrodes. By comparing three strategies, we investigate this multi-artifact classifier with respect to new electrode setups and paradigms. We ask the following questions: How does a change of the electrode setup impact the IC classification performance? Is it necessary to hand-label components of the new data set and retrain the classifier based on those? How strong is the deterioration of IC classification performance without re-training? We investigate these questions for three data sets of 6303 labeled ICs from 35 participants in 3 experimental studies: a reaction time (RT) task embedded in a simulated-driving task, an auditory event-related potential study (ERP-BCI) and a study analyzing continuous EEG data (CNT) of subjects instructed to listen to short stories.
Effect on BCI performance
After having demonstrated the robustness properties of the IC classification, we are interested in the effects of automatic ICA artifact cleaning on the classification of EEG trials in BCI systems. As a first proof-of-concept, Halder et al [33] applied artifact cleaning to data from three participants who performed motor imagery. Depending on whether artifacts were systematically co-activated with the task or not, opposite effects of artifact cleaning on BCI classification performance were demonstrated. To the best of our knowledge, only small data sets of one or two participants have been analyzed since then [35,36].
To fill this gap, we extend our analysis from [34] by investigating the overall effect of ICA artifact cleaning on BCI performance on data of 101 participants with respect to three BCI paradigms: auditory event-related potentials, event-related (de-)synchronization and slow motor-related potentials due to motor imagery tasks.
Software for the EEG practitioner
Last but not least, we make our IC classification software available as an EEGLAB plug-in 'MARA' (Multiple Artifact Rejection Algorithm). EEGLAB [37] is a popular, Matlab-based open-source tool and used by a growing community of EEG researchers. As existing ICA-based plug-ins primarily focus on the detection of eye artifacts [27][28][29], we hope this will deliver a substantial contribution to the community by assisting EEG practitioners with the rejection of multiple types of artifacts.
Processing chain for ICA artifact rejection
The typical process chain for artifact rejection with ICA consists of the following steps: first, a rough pre-cleaning of the data by channel rejection and trial rejection based on variance criteria may be performed. Second, a dimensionality reduction may help to avoid an unnatural splitting of (neural) sources. Unfortunately, the optimal number of components to extract remains unknown and has to be determined either by visual inspection or by a heuristic, such as retaining 99% of the explained variance or a fixed number of components. Third, ICA methods decompose the observed EEG data x into unknown source components s assumed to be mutually independent and following the generative linear model x = A · s. Finally, artifactual source components are identified which allows the EEG signals to be reconstructed without them.
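To make the reconstruction step concrete, the following minimal Python sketch decomposes multichannel data with scikit-learn's FastICA (the studies analyzed below use TDSEP or FastICA), zeros out a list of component indices assumed to be artifactual, and back-projects the remaining components. Variable names, the placeholder data and the choice of 30 components are illustrative and do not reproduce the exact pipeline of the original studies.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(X, artifact_idx, n_components=30):
    """Decompose EEG into ICs, zero the artifactual ones, reconstruct the channels.

    X            : array (n_samples, n_channels), pre-cleaned, filtered EEG
    artifact_idx : indices of components judged artifactual (here assumed given)
    """
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(X)                    # estimated sources s
    S_clean = S.copy()
    S_clean[:, artifact_idx] = 0.0              # discard artifactual components
    X_clean = ica.inverse_transform(S_clean)    # back-project: x = A * s (+ channel means)
    patterns = ica.mixing_                      # columns are the IC scalp patterns (matrix A)
    return X_clean, patterns

# Placeholder data: 5000 samples, 64 channels; components 0 and 3 assumed artifactual
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 64))
X_clean, A = remove_artifact_components(X, artifact_idx=[0, 3])
```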
In manual classification of ICs, experts' ratings are based on a component's time series, its power spectrum and spatial pattern (given by the respective column of A). Unfortunately, ICA frequently results in mixed components containing aspects of both neural and artifactual activity which cannot be rated unambiguously [38]. Consequently, such mixed components tend to be either retained or rejected depending on the specific application. The subjective nature of such expert decisions is reflected by the fact that experts disagree with each other as well as with themselves over time [39]. Nevertheless, the reliability of component classification is often not reported, and if it is, researchers use one of many metrics of inter-rater reliability statistics which are difficult to compare directly (e.g. Krippendorff's alpha in [20], interclass correlation coefficient in [40], degree of association phi in [28], mean-squared error (MSE) or average agreement in [34,39]).
Automatic classification of ICs based on Machine Learning methods offers a well-described algorithm which rates consistently over time. However, this algorithm, too, is of subjective nature in the sense that it is optimized to predict labels similar to those labeling strategies applied by human raters. The performance of the algorithm thus crucially depends on the quality of the training set and its labels. For all our IC data sets, experts were instructed to identify components which are predominantly driven by artifacts.
In this paper, automatic IC classification is realized by a linear pre-trained classifier. It is based on the following six features which were determined in a feature selection procedure described in [34]. One feature aims to detect outliers in the time series of an IC, three features are extracted from the spectrum, and two features extract information from the scalp pattern of an IC, the latter depending directly on the electrode layout.
(i) Current density norm. ICA itself does not provide information about the locations of the sources s. However, ICA patterns can be interpreted as EEG potentials for which the location of the sources can be estimated. We considered 2142 locations arranged in a 1 cm spaced 3D-grid, formulated the forward problem according to [41][42][43] and sought the source distribution with minimal l2-norm (i.e. the 'simplest' solution) [44,45]. Since this source distribution can model cerebral sources only, it is natural that artifactual signals originating outside the brain can only be modeled by rather complicated sources. Those are characterized by a large l2-norm, which we use as a feature.
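As an illustration of this feature, the sketch below computes a minimum-norm source estimate for a single scalp pattern from a hypothetical leadfield matrix and returns the norm of that estimate. The ridge regularization and the log scaling are added here as common practical choices; the exact forward model and regularization used in [34, 41-45] are not reproduced.

```python
import numpy as np

def current_density_norm(pattern, leadfield, ridge=1e-6):
    """l2-norm of the minimum-norm source estimate explaining one IC scalp pattern.

    pattern   : array (n_channels,), one column of the ICA mixing matrix A
    leadfield : array (n_channels, n_sources), forward model for a fixed source grid
    Large values indicate patterns that cannot be explained by simple cerebral sources,
    which is typical for artifactual components.
    """
    L = leadfield
    gram = L @ L.T + ridge * np.eye(L.shape[0])        # small ridge term for numerical stability
    s_min_norm = L.T @ np.linalg.solve(gram, pattern)  # s = L^T (L L^T)^-1 * pattern
    return float(np.log(np.linalg.norm(s_min_norm)))   # log scaling as a practical choice

# Toy example with a random stand-in leadfield (2142 grid points as in the text)
rng = np.random.default_rng(1)
leadfield = rng.standard_normal((104, 2142))
pattern = rng.standard_normal(104)
print(current_density_norm(pattern, leadfield))
```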
Data sets and experimental paradigms
Data sets of four experimental EEG paradigms (named RT, CNT, MI-BCI, ERP-BCI) were available for this study. For three of them, RT, CNT and ERP-BCI, expert-labeled ICs (artifacts versus neural sources) were available. Two data sets (MI-BCI, ERP-BCI) stem from BCI experiments. As the trialwise BCI tasks are known, the estimated single-trial BCI classification performance provides a metric for the influence of a preceding artifact treatment.
RT. For this data set, labeled ICs were available. In a simulated-driving study, participants performed a forced-choice left or right key press RT task upon two auditory stimuli in an oddball paradigm [34]. EEG data was recorded from 121 approx. equidistant sensors and high-noise channels were rejected based on a variance criterion. We selected 43 runs of 10 min duration from eight participants that had 104 electrodes in common. Prior to the IC computation via TDSEP [46], a 2 Hz high-pass filter was applied, and dimensionality was reduced to 30 PCA components. Two experts hand-labeled the resulting 30 ICs per run into artifactual and neural components (1290 labeled ICs altogether). Of these, 840 ICs (28 runs from 5 participants) were used to train a linear classifier C RT to discriminate artifactual from neural components. Another 450 ICs (15 runs from 3 remaining subjects) were available for estimating the generalization performance of C RT . The training set contained 52% of artifactual ICs, the test set contained 59%.
CNT. For this data set, labeled ICs were available. Nine participants continuously listened to audio-visual stories during short runs of an average duration of 3.77 min [40]. The resulting 71 recordings contained 62 EEG channels plus one EOG channel. The recording of each run was appended with a short eyes-closed and eyes-open recording and high-pass filtered at 0.16 Hz. No dimensionality reduction was applied, before ICs were estimated by FastICA [47] on the full set of electrodes. This decomposition yielded 63 × 71 = 4473 components, which were hand-rated by three experts into 47% artifactual and 53% neural source components.
ERP-BCI.
For this data set, labeled ICs as well as labeled BCI-trials were available. In a spatial auditory BCI study which made use of auditory event-related potentials, participants underwent a calibration run of approx. 30 min duration and an online spelling run [48]. In the online run, subjects were asked to write a sentence while auditory and visual feedback was provided. EEG was recorded from 61 electrodes while the participants listened to a rapid sequence of 6 auditory stimuli and were instructed to silently count the number of appearances of a rare target tone.
For the classification of artifacts, data of 18 participants was analyzed. Their EEG signals were band-pass filtered between 0.1 and 40 Hz and the dimensionality was reduced to 30 PCA channels. Subsequently 30 ICs were computed per run using TDSEP. The resulting 540 source components were hand-labeled into 72% artifactual and 31% neural source components.
To assess the influence of artifact correction on the BCI classification performance, data of the 21 BCI novices participating in the first session of the auditory ERP speller study of Schreuder et al [48] was re-analyzed. Their calibration measurement is used to train a shrinkage-regularized linear classifier based on spatio-temporal ERP features [48,49]. BCI performance evaluations are based on the re-analyzed online data of these participants.
MI-BCI.
For this data set, labeled BCI-trials were available, but no labeled ICs. This data set was recorded with 119 EEG channels from 80 healthy BCI novices, who first performed motor imagery tasks (left hand, right hand and both feet) in a calibration run (i.e. without feedback). Every 8 s, the requested BCI task of the current trial was indicated by a visual cue. A CSP-based BCI-classifier (see below) was trained on the labeled calibration trials using the pair of classes which provided best discrimination. During the three online runs of 100 trials each, participants controlled an application which provided continuous visual feedback in the form of a horizontally moving cursor [50].
Motor imagery data can be exploited by two different types of EEG features.
(i) CSP-MI-BCI: the most common strategy makes use of oscillatory features which describe event-related (de)synchronization (ERD/ERS) in the alpha and beta bands of the EEG. After enhancing the SNR of these effects by individual data-driven spatial filters, which are derived by the common spatial patterns (CSP) analysis [51], CSP features can be classified by a shrinkage-regularized linear classifier (a sketch of this feature pipeline is given after this list).
(ii) LRP-MI-BCI: the second strategy is based on slow motor-related potentials (e.g. the lateralized readiness potential (LRP)). Different classes of imagined movements are distinguished with an ERP-type analysis [49,52]: EEG is band-pass filtered between 4 and 8 Hz, before a small number of class-discriminative intervals is determined on the calibration data. The average activity per interval and channel is used as features for a binary shrinkage-regularized linear classifier.
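The following sketch illustrates the CSP plus shrinkage-regularized LDA pipeline of strategy (i) in generic form. It is not the exact implementation used in [50, 51]; the trial dimensions and the synthetic calibration data are placeholders.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X1, X2, n_filters=6):
    """Common spatial pattern filters for two classes of band-pass filtered trials.

    X1, X2 : arrays (n_trials, n_channels, n_samples)
    Returns W (n_channels, n_filters), taken from both ends of the eigenvalue spectrum.
    """
    avg_cov = lambda X: np.mean([np.cov(trial) for trial in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)
    sel = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return eigvecs[:, sel]

def log_var_features(X, W):
    """Log band-power (normalized variance) of spatially filtered trials."""
    Z = np.einsum("cf,tcs->tfs", W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic calibration data: 40 trials per class, 32 channels, 500 samples (illustrative)
rng = np.random.default_rng(2)
X1_cal = rng.standard_normal((40, 32, 500))
X2_cal = rng.standard_normal((40, 32, 500))

W = csp_filters(X1_cal, X2_cal)
F = np.vstack([log_var_features(X1_cal, W), log_var_features(X2_cal, W)])
y = np.r_[np.zeros(len(X1_cal)), np.ones(len(X2_cal))]
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(F, y)  # shrinkage-regularized LDA
```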
While the original online runs were performed with the CSP-MI-BCI classifier, without artifact rejection, the offline re-analysis makes use of both types of features in order to assess the influence of a preceding artifact removal.
Robustness under novel paradigms and electrode setups
For the classification of artifactual IC components, three classification strategies, fixed, adapted and study-specific, were compared on the ERP-BCI and the CNT data set. Figure 1 visualizes the strategies. In the fixed scenario, classifier C RT is trained once on features of labeled ICs of the RT data set, and furthermore applied to ICs of any other data set. Neither hand-labeling of novel ICs nor re-calculation of features or any re-training of the classifier is necessary in this simplest scenario. While hand-labeling of novel ICs is also avoided successfully in the adapted strategy, a channel adaptation on the RT-data is performed by cutting the training patterns to the specific electrode layout of the test data set. Features then need to be re-calculated based on the reduced patterns and a re-training yields the adapted classifier C RT−A (a code sketch of this adaptation step is given below). All steps can be performed automatically and do not require user input. The third strategy, study-specific, requires the effort of experts every time a novel study is performed. The ICs of at least some subjects need to be hand-labeled, before a study-specific classifier (e.g. C CNT or C ERP ) can be trained and applied to novel subjects. Its performance was evaluated by leave-one-subject-out cross-validation.
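Below is a minimal sketch of the adapted strategy, assuming the labeled training patterns, their channel names and some pattern-based feature function are available. LogisticRegression stands in for the regularized linear classifier of [34]; all names, shapes and the stand-in feature are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_adapted_classifier(train_patterns, train_channels, other_features, labels,
                             test_channels, pattern_feature_fn):
    """'Adapted' strategy: restrict training IC patterns to the test montage and retrain.

    train_patterns     : array (n_ICs, n_train_channels), scalp patterns of labeled ICs
    train_channels     : list of channel names of the training study
    other_features     : array (n_ICs, k), channel-independent time/spectrum features
    test_channels      : channel names available in the new study
    pattern_feature_fn : maps a reduced pattern to its pattern-based features (hypothetical)
    """
    keep = [train_channels.index(ch) for ch in test_channels if ch in train_channels]
    reduced_patterns = train_patterns[:, keep]
    pattern_features = np.array([pattern_feature_fn(p) for p in reduced_patterns])
    X = np.hstack([other_features, pattern_features])
    return LogisticRegression(max_iter=1000).fit(X, labels)

# Illustrative call with random data and a trivial stand-in pattern feature
rng = np.random.default_rng(3)
train_chs = [f"ch{i}" for i in range(104)]
test_chs = [f"ch{i}" for i in np.linspace(0, 103, 16).astype(int)]   # a reduced 16-channel montage
clf = train_adapted_classifier(
    train_patterns=rng.standard_normal((840, 104)),
    train_channels=train_chs,
    other_features=rng.standard_normal((840, 4)),
    labels=rng.integers(0, 2, 840),
    test_channels=test_chs,
    pattern_feature_fn=lambda p: [p.max() - p.min()],                # 'range within pattern'-like feature
)
```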
To explore the robustness of the artifact classifier against reduced EEG channel sets, we compared the fixed IC-classifier C RT with the adapted IC-classifier C RT−A on the RT and ERP-BCI test data sets with reduced setups (varying from 16 up to 104 and 61 EEG channels, respectively). All electrode setups were approximately equidistant and covered the whole scalp.
Effect on BCI performance
This offline re-analysis of three BCI paradigms described in section 2.2 compares standard BCI performance with and without a preceding ICA artifact cleaning. In both cases, artifactual channel and trial rejection based on a variance criterion was performed prior to BCI training. Training of the BCI-classifiers is based on the calibration runs only, and BCI performance tests are performed with the online runs of the participants.
ICA artifact cleaning is included in a manner that allows for real-time BCI applications. Prior to TDSEP, we estimated whether a PCA pre-processing to 99% explained variance would be useful via cross-validation on the calibration data. This was the case only for the LRP-MI paradigm. IC components were then derived by TDSEP and classified with the adapted classifier C RT−A on the calibration data. The BCI is set up on the remaining ICs. On the online runs, un-mixing and component rejection are performed according to the demixing determined on the calibration data. The BCI classifier is applied to features extracted from the remaining components of the online runs.

Figure 2 shows the classification error for the fixed classifier C RT and the adapted classifier C RT−A for different channel setups on both the RT and the ERP-BCI test sets. On the RT test data with the full 104 channel setup, a classifier using all six features achieves an MSE of only 9.3%, which slightly outperforms the use of only four pattern-independent features (12.4% MSE). While C RT generalizes robustly over the range of 104 to 48 electrodes in the RT test sets, its error increases up to 31.8% for the smallest set of 16 electrodes. On the ERP-BCI data set, the use of only four pattern-independent features is already outperforming the fixed classifier C RT on the full 61 electrode setup. Classification performance of C RT then breaks down to 50% on the smallest set of 16 electrodes. In both the RT and the ERP-BCI data set, the drop in overall performance is due to the bad performance of both pattern-based features of over 50%. For the adapted strategy (i.e. re-training the classifier on the patterns cut to the specific electrode setup), the error of the pattern features (range within pattern and current density norm) was much less pronounced in both data sets. The overall error of C RT−A for 16 electrodes remained at 11.3% on the RT data set (compared with 9.3% on 104 channels) and at 15.9% for the ERP-BCI data set (compared with 13.3% on 61 channels). In both data sets, we slightly gain from using the pattern features. On the reduced electrode setup, the classifier weight of the range in pattern dropped, while the weight for current density norm remained stable.
Robustness under novel paradigms
The results for the three proposed classification strategies on the three labeled IC data sets are summarized in table 1. The adapted classifier C RT−A (trained on the RT data set cut to the specific electrode montage of the ERP-BCI or CNT data set) achieves an error of 13.3% on the ERP-BCI data and an error of 14.0% on the CNT data set.
The classification performance can be improved by a retraining on labeled data from the same study, but the effect is small. We observe an error of 9.3% on the RT data set, an error of 9.6% on the ERP-BCI data set and an error of 13.1% on the CNT data set. This improved performance is due to two effects: first, adjusting feature thresholds for the specific study may improve the performance of each feature. For example, a retraining of the 8-13 Hz feature of the CNT data set decreased its error from 33.3% to 18.0%. Second, feature weights adjust such that more discriminative features obtain a higher weight. Interestingly, after re-training both C ERP and C CNT primarily use one of the two pattern features-C ERP focuses mostly on the current density norm feature, while C CNT is strongly based on the range within pattern feature.
Effect on BCI performance
The upper plots of figure 3 show scatter plots of BCI performance with and without preceding ICA artifact cleaning for the three analyzed BCI paradigms. For ERP-BCI, BCI performance decreased slightly from 69.4% to 68.3% (t(20) = −2.43, p = 0.03, d = 0.21). On average, 44 components were retained and 16 artifactual components were removed. There was no significant change in overall MI-CSP performance. The strongest changes were observed for the MI-LRP paradigm, which is most prone to eye artifacts due to the focus on low-frequency signal components. Note that as feedback was provided with a moving cursor, eye activity may be correlated with the two classes. On average, nine components were retained and ten artifactual components were removed. While the mean BCI accuracy remained constant at ≈60% (t(79) = 0.23, p = 0.82, d = 0.03), the performance of each participant varied considerably. The lower plots of figure 3 highlight the effect of the artifact rejection for two example participants. Without artifact rejection, both participants mainly use eye artifacts for BCI control (frontal class-discriminative activation). The effect of artifact removal can be twofold. For participant A, eye artifacts obstruct the underlying neural activity, and the system's accuracy improved upon artifact cleaning from 66.3% to 73.6% due to an improved signal-to-noise level. In participant B, very little class-discriminant activity remained after the eye activity was removed. BCI classification dropped considerably from 91.3% to 64.0%.

Table 1. Feature weight vectors w and test errors (MSE) for three data sets (RT, ERP-BCI and CNT) and three classification strategies (fixed classifier C RT , adapted classifier C RT−A and study-specific classifiers C ERP , C CNT ). Test errors are reported for the 6 single features and for the combined classification. The fixed classifier is trained on the RT train data set. The adapted classifier is trained on the RT train data set cut to the specific electrode montage. The study-specific classifiers are trained on data from the same study and evaluated with leave-one-subject-out CV.
Discussion
To summarize, we have analyzed the robustness properties of our recently proposed artifact classification method and proposed a strategy to handle a wide range of electrode setups. The proposed adapted strategy fully automates the time-consuming rating of artifactual ICs and reliably identified multiple types of artifacts from 35 participants and 3 EEG paradigms.
IC classification performance of three strategies was evaluated against expert ratings. We showed that our simplest automatic fixed strategy (train the classifier once, then apply to other setups) exhibits sensitivity to drastically reduced electrode setups. As a solution, we proposed the adapted strategy which recomputes the training features based on the specific electrode montage of the test sets. Using this relatively inexpensive strategy-no hand-labeling is involved-artifact classification generalizes well even on very reduced electrode setups.
For comparison reasons, a re-training of the classifier using labor-intensively gained hand-labeled ICs from every new study was analyzed (strategy study-specific). While avoiding some generalization issues in theory, it is prohibitively expensive in most practical situations and only achieved a performance gain of a few per cent compared with the adapted strategy.
We therefore recommend the adapted strategy for artifact classification. It generalized robustly even to completely novel EEG paradigms, with its IC classification performance (13.3% MSE on auditory ERP data and 14.0% MSE on auditory listening data) staying on a similar level as inter-expert disagreements (often above 10% [34,39]). This classification error is remarkably low given that the studies have been recorded with half the number of electrodes, used different ICA methods and contained different proportions of artifactual components.
We provide the ready-to-use artifact classifier to the community as an open-source EEGLAB plug-in called MARA (multiple artifact rejection algorithm). MARA automatically adapts to novel channel setups and its output is designed to support the experimenter in his or her decisions: a semi-automatic mode allows for visual inspection of components and for changing the classifier's proposed ratings. Figure 4 shows an example screen shot of the visual inspection menu. The plug-in is published under the General Public License (GPL) and can be downloaded from www.user.tu-berlin.de/irene.winkler/artifacts/.
BCI practitioners may find the application of MARA on BCI data sets of particular interest. We used the adapted strategy to analyze how ICA artifact cleaning impacts on single-trial BCI performance of three different BCI paradigms. In all three paradigms, we were able to remove artifactual activity while maintaining the average BCI performance.
On the single subject level the effect of artifact cleaning depends on whether artifacts mask the relevant neural activity or serve as a control signal for BCI. While artifact cleaning had little influence on an auditory ERP speller and on oscillatory motor imagery data analyzed with CSP, we observed strong effects for a paradigm known to be heavily affected by eye artifacts, the use of slow motor-related potentials. Here our analysis suggests that artifact removal by MARA or similar tools may drastically improve the safety and reliability of results, as they guarantee that rejected artifacts are not utilized mistakenly to control the BCI system.
"Computer Science"
] |
Central Bank transparency and inflation targeting: A policy performance analysis for Turkey
In the wake of the recent global financial crisis, growing emphasis has been put on central bank reliability and transparency, especially for developing countries. In our paper, we built a macroeconomic model to test the impact of central bank reliability on the bank's inflation targeting performance. To this end, we utilized the indexes developed by Cukierman et al. (1993) and Dinçer and Eichengreen (2013) to calculate a measure of Turkish Central Bank reliability for the past two decades, and ran a multi-equation time series regression to analyze the effect of institutional independence and transparency on inflation targeting performance, while controlling for other macroeconomic indicators that impact inflation. We found that a higher degree of perceived transparency of the central bank has a positive effect on the inflation targeting performance. Also, transparency seems to be a better proxy for the overall institutional change the Turkish economy experienced in the last two decades.
Introduction
Milton Friedman and Anna Schwartz famously stated that "inflation is always and everywhere a monetary phenomenon" in their now classic book titled "A Monetary History of the United States, 1867-1960", which laid most of the fault for the Great Depression on the Federal Reserve System (Friedman and Schwartz, 2008). It is of course not surprising that such a strong claim came from the two most prominent advocates of monetarism; after all, at the very core of monetarism lies the age-old quantity theory of money. The modern form of the quantity theory stands on a definitional relationship that equates the total amount of money in circulation multiplied by the velocity of money to the nominal output level in a given period. The origin of this definitional relationship goes back to David Hume (1758) and John Stuart Mill (1848) and it was originally formulated algebraically by Irving Fisher (2006); the algebraic expression of the relationship is known as Fisher's equation of exchange.
While there is little or no disagreement on the validity of the equation of exchange, there has been a long tradition of disagreement on the direction of causality underlying the equation among major economists. According to Karl Marx (1970), the determinative elements of the equation are the quantity and the price of commodities; the quantity of money and velocity merely shadow them. Keynes and his followers, on the other hand, argued that the amount of money is determined by the aggregate demand in the economy; the people decide how much money to hold depending on their purchasing power. In "A Tract on Monetary Reform" (1924) Keynes derives his own version of the equation and claims that a change in the amount of money in circulation will result in a change in the public's willingness to hold wealth in the form of cash or checking accounts, at least in the short run. Therefore, an increase in the total amount of money in the economy will not immediately result in a proportional increase in the price level, and can potentially affect real variables, according to Keynes.
Empirical evidence only partially supports Friedman's and Schwartz's claim, and only in the long run. Grauwe and Polan's (2005) massive study, encompassing a sample of 160 countries over a period of 30 years, shows that the monetarist argument has a certain degree of validity in the long run when all countries in the sample are considered. However, they also conclude that the robust linkage between inflation and money supply is "almost wholly due to the presence of high-(or hyper-) inflation countries in the sample. The relationship between inflation and money growth for low-inflation countries (on average less than 10% per annum over the last 30 years) is weak." However, generalizing a positive correlation between money supply and inflation has its limits, even if one focuses only on the experiences of high- or hyper-inflation economies. Turkey has been a famous example of this point: Turkey met with high CPI inflation rates (over 40%) in the aftermath of the first oil crisis in 1977, and the inflation rate rarely fell below 40% until 2002. Figure 1 shows the CPI inflation rates for the Turkish economy; as can be seen, the 25 years between 1977 and 2002 witnessed inflation reaching record peaks as high as 120% during major economic and/or political crises. However, even though Turkey might seem an ideal candidate for Grauwe and Polan's (2005) group of high-inflation countries given its consistent and persistent problems with price stability, it remains an exception to the rule: Vuslat Us's (2004) analysis focuses on Turkey and finds zero correlation between money supply (M2) and inflation over a period of 30 years; according to Us, the depreciation of the Turkish Lira and high public sector prices are the culprits behind the high and ever-increasing inflation rate.
Us concludes his analysis by stating: "The continued success of the fight against inflation is mostly dependent on the structural programs, most of which are still to come. (…) The implementation of the structural reforms will raise credibility as the reforms are performed successfully [emphasis added]". In our study, we aim to show empirically how the ongoing institutional changes in Turkey have affected price stability in the country, and to what degree the economic reforms are responsible for the country's success story after 2002 in terms of price stability. Of course, measuring institutional change objectively and properly can be tricky, as it requires tested and standardized instruments. Fortunately, the recently growing literature on the institutional quality of central banks offers a solution to this problem. In Section 2, we present the two indexes (namely the transparency and independence indexes for central banks) that can be utilized as proxies for structural changes in economic institutions, while summarizing the panel data results regarding these variables in the literature. Section 3 explains the empirical model we test and the data we use to assess the link between central bank transparency and the success of anti-inflation policies in a time-series setup. Section 5 concludes our analysis and presents suggestions for further research.
Measuring institutional change: Central Bank independence and transparency
The concept of a central bank in control of the money supply by itself indicates the delegation of a certain part of economic policy responsibilities to a body outside the branches of government, and therefore a certain degree of independence from other policy authorities. An independent central bank is important for the health of the economy, given the long time horizon of monetary policy. The political authority is likely to be tempted by short-term gains from day-to-day monetary adjustments and will ignore the lagging nature and the long-run costs of an expansionary monetary policy (Blinder, 1999).
Theoretical studies on how an independent agent with its own set of goals and a longer term than the elected politicians can lessen the inflationary bias of monetary policy have increased following Rogoff's lead in 1985 (see Svensson, 1995; Bernanke et al., 1999; and Walsh, 1995 for example). Empirical analysis of the policy performance effects of central bank independence, on the other hand, naturally necessitates a standard and universal method in order to classify various central banks with different legal structures according to their independence. One of the most utilized of such measures was offered by Cukierman et al. (1992), as a part of the effort by Alex Cukierman and his various co-researchers to examine how political forces affect economic policy. Cukierman et al. (1992) measure the legal independence of a central bank using four main criteria: the independence of the CEO of the central bank from the elected political authority, the degree of independence of the central bank in determining policy formulation, the announced objective of the central bank, and finally how rigid the limits on lending to the public sector are. Each of the main criteria has various sub-criteria with sometimes varying weights. For example, the limits-on-lending criterion depends on the legal limits on advances to the public sector, limits on securitization of government debt, limits on the terms of loans, and the number of potential public institutions that are allowed to borrow from the central bank.
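The sketch below only illustrates the idea of aggregating weighted sub-criteria into a single index score; the criteria names and weights are made up for illustration and do not reproduce the actual coding scheme of Cukierman et al. (1992).

```python
# Illustrative aggregation of an independence index from weighted sub-criteria.
def weighted_index(scores: dict, weights: dict) -> float:
    """Each sub-criterion score is in [0, 1]; the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * scores[name] for name in weights)

example_scores = {            # hypothetical codings for one central bank
    "ceo_independence": 0.75,
    "policy_formulation": 0.50,
    "announced_objective": 1.00,
    "lending_limits": 0.33,
}
example_weights = {           # placeholder weights, not the published scheme
    "ceo_independence": 0.20,
    "policy_formulation": 0.15,
    "announced_objective": 0.15,
    "lending_limits": 0.50,
}
print(round(weighted_index(example_scores, example_weights), 3))
```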
Independence of the central bank from the political authority is clearly important in developing countries, given the pressure from permanent budget deficits as a result of public investments in infrastructure, populist fiscal expansions, and the loss of tax revenue due to massive informal sectors. In 2001, following a major change in legislation, the Turkish central bank experienced a significant increase in its independence index according to the updated calculations by Dinçer and Eichengreen (2014). This almost 50% increase in the central bank independence score (from 0.42 to 0.60) was mainly due to new limitations on public sector lending by the bank. Although this was certainly good news for Turkish price stability, the change in the index lies outside the period we are analyzing, and time series regression results fail to show any significant effect of central bank independence on the rate of inflation. Luckily, transparency index values for Turkey show some variation during the same period, which makes transparency a better proxy candidate for institutional and structural change.

The transparency level of a central bank simply refers to the level of disclosure about policy goals, tools and predictions to the public. The idea that transparency is a desirable quality of a central bank is quite new and can be thought of as a part of the overall trend towards freedom of information and full disclosure about government policies. Banks and other financial institutions are expected to be secretive about their operations, and since the central bank is a bank for national private banks as well as a lender of last resort to the government, increased transparency might at first seem like a bad idea: sharing information about loans and advances to the treasury can reveal a certain degree of incompetence of government policies and therefore may hurt the image of economic authorities in the eyes of the public. There are also bureaucratic and political reasons behind obfuscation according to Havrilesky (1987); clandestineness may serve to hide the redistributive effects of monetary policy over income. Last but not least, it is broadly accepted that in order for monetary policies to have real effects they should be unanticipated. Despite all these arguments, heightened transparency has been an ongoing trend since 1989; it started when the central bank of New Zealand began revealing interest rate forecasts and portfolio outlooks to the public. Central banks in Norway and Sweden followed suit. As of January 2012, the Federal Reserve began publishing its predictions on short-term interest rates and views on the evolution of the bank's portfolio. Furthermore, panel data results reveal that increased transparency, alongside increased independence, serves to decrease the variability of inflation rates for both developed and developing countries (Crowe and Meade, 2008; Piklos, 2011; Dinçer and Eichengreen, 2014).
For our time series analysis on Turkey, we use the updated version of the transparency index provided by Dinçer and Eichengreen (2014). Their calculations depend on five main criteria of transparency: political transparency, which refers to openness about policy goals; economic transparency, in relation to the economic data used to design monetary policy; procedural transparency, which concerns the decision-making method of the bank; policy transparency, which relates to the time frame in which policy decisions are shared with the public together with the explanations for the decisions; and finally operational transparency, which regards the execution of policy actions as well as the evaluation of outcomes.
The Turkish transparency index has been on the rise beginning with the year 2000: an increase is observed in 2001, following the implementation of the "pegged within the bands" exchange rate targeting with disastrous results; another one in 2002, right after the financial crisis and the new structural adjustment policies that followed; another increase in 2004, subsequent to the announcement regarding plans for explicit inflation targeting "when conditions are deemed appropriate"; and finally in 2006, with the full adoption of an explicit inflation target as the primary policy tool. The fact that all the major shifts in the transparency level of the Turkish central bank coincide with major structural changes in the overall economic policy can be interpreted as a sign that the transparency index may be a better proxy for the institutional transformation of the economic policy establishment in Turkey (Erçel, 1999; Özatay, 2009).
Models and results
Our main concern is how the institutional structure of the Central Bank of the Republic of Turkey (CBRT) has impacted the price stability performance of monetary policy. Of course, in order to measure policy performance one needs clearly defined policy goals. Even though the CBRT began using inflation targeting as the primary policy goal after January 2002, for the following four years the targeted level of inflation was not explicitly announced to the public. January 2006 marks the initial announcement of the target level of inflation, and the bank continued to update the target at the beginning of each year that followed. However, ex-post data for the (unannounced) inflation targets are available beginning with 2004. Therefore, the data set for the performance analysis is limited to observations between 2004 and 2016 only. As mentioned above, the independence index does not display any change in the period in question, leaving only the transparency index as a viable qualitative measure for the institutional change in the central bank during the inflation targeting era. However, international data reflect that independence and transparency generally move together in the same direction, since they usually change as a result of the same structural reforms (see Bernanke, 2010 for an overview), so they may be interpreted as closely correlated variables. We also use a dummy variable for the observations between the third quarter of 2008 and the third quarter of 2009, in order to control for the impact of the financial crisis on real GDP as well as other factors. The significant drop in the real GDP growth rate in this period is shown in Figure 2 above.
Our observation range is between the first quarter of 2004 and the second quarter of 2016 (naturally we lose one observation at the beginning due to the lagged inflation variable). We extend the transparency index reported by Dinçer and Eichengreen (2014) to 2016 as unchanged, since there has been no legislative change in the bylaws of the CBRT after 2006. Our results are presented in Table 1 below. The coefficient for the transparency index is significant at the 5% level and negative, as anticipated: heightened transparency leads to more accurate inflation targeting. This is in accordance with the multi-country panel data results. The coefficients for the real GDP growth rate, the nominal money supply growth rate, and the lagged inflation rate of change are all significant at least at the 5% level, and the signs of these coefficients are also in agreement with theory. Even though the negative coefficient for the rate of change in the real effective exchange rate also fits the general literature, it is not significant.
Conclusion
A monetary policy aiming at long-term price stability needs consistency of decisions, given the long horizon and lagged nature of the impact of money supply changes. Heightened independence enables a central bank to choose policy goals and tools freely, away from the influence of elected political officials' short-term desires and populist temptations. It also limits the government's use of the bank as a lender of last resort for funding. Increased transparency, or openness about the inner workings of the central bank, on the other hand, assures economic decision makers that these policies will carry a certain degree of consistency in the long run. Therefore, steps towards heightened transparency and independence of a central bank in the form of legislative changes are likely to increase the effectiveness of policies towards price stability.
The Turkish economy had been infamously struggling with high inflation rates, and all the ambiguity and reliability issues they cause, during the 1980s and 1990s. Structural reforms, which were in part shaped by stand-by agreements with the International Monetary Fund, gained pace especially in the wake of the 2001 financial/political crisis. Finally, after the adoption of inflation targeting as the primary monetary policy tool, Turkey seems to be able to attain some degree of price stability, which is vital for a developing country that faces ever increasing current account deficits and the financial sector risk they bring.
Our empirical model indicates that in Turkey between 2004 and 2016, increases in the transparency index have influenced the inflation targeting policy positively, when the impact of real income, money supply, past inflation, and the real effective exchange rate is controlled for in a time series regression. This result is consistent with the multi-country cross-section data studies in the literature.
Figure 1. Inflation in Turkey (%), 1990Q1-2016Q2.

Therefore, our inflation targeting policy performance model includes solely the transparency index as the control for institutional change, along with the usual independent variables used in inflation rate models. Our dependent variable is the rate of change in the deviation of the realized inflation rate from the targeted inflation rate. We control for the effects of change in the preceding inflation rate with a lagged variable to correct for autocorrelation. The remaining independent variables are the growth rate of real GDP, the rate of change in the M2 money supply, the rate of change in the real effective exchange rate, and finally the transparency index.
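To make the specification concrete, the sketch below estimates a single-equation version of such a model with statsmodels on synthetic quarterly data. All column names and values are placeholders; the paper's actual data set, the multi-equation setup mentioned in the abstract, and the estimation details are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic quarterly data standing in for 2004Q1-2016Q2; columns are illustrative only.
rng = np.random.default_rng(4)
n = 50
df = pd.DataFrame({
    "dev_change_lag": rng.standard_normal(n),        # lagged change in deviation from target
    "gdp_growth": rng.standard_normal(n),             # real GDP growth rate
    "m2_growth": rng.standard_normal(n),               # M2 money supply growth rate
    "reer_change": rng.standard_normal(n),              # real effective exchange rate change
    "transparency": np.cumsum(rng.random(n) * 0.2),       # trending transparency index
    "crisis": ((np.arange(n) >= 18) & (np.arange(n) <= 22)).astype(float),  # 2008Q3-2009Q3 dummy
})
df["dev_change"] = rng.standard_normal(n)               # change in deviation from the inflation target

X = sm.add_constant(df[["dev_change_lag", "gdp_growth", "m2_growth",
                        "reer_change", "transparency", "crisis"]])
ols = sm.OLS(df["dev_change"], X).fit()
print(ols.summary())
```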
Table 1. Dependent variable: change in deviation from the inflation target.
"Economics"
] |
The Protective Effect of Aged Garlic Extract on Nonsteroidal Anti-Inflammatory Drug-Induced Gastric Inflammations in Male Albino Rats
Natural products have long gained wide acceptance among the public and the scientific community in the field of gastrointestinal ulceration. The present study explores the potential effects of aged garlic extract (AGE) on indomethacin-(IN-) induced gastric inflammation in male rats. Animals were divided into six groups (n = 8): a control group; an IN-induced gastric inflammation group (single oral dose of 30 mg/kg to fasted rats); two groups orally administered AGE (100 and 200 mg/kg for 30 consecutive days); and two groups orally administered AGE at the same doses to rats pretreated with IN. The results demonstrated the more potent effect of the higher AGE dose (200 mg/kg), compared to the 100 mg/kg dose, in the gastroprotective effects, reflected by significant healing of gastric mucosal damage and a reduction in the total microbial count induced by indomethacin administration. AGE also significantly normalized the increases in malondialdehyde (MDA), myeloperoxidase (MPO) and tumor necrosis factor-α (TNF-α) values and the decreases in total glutathione (tGSH), superoxide dismutase (SOD) and catalase (CAT) values induced by indomethacin. The results support the antioxidant, anti-inflammatory, and antimicrobial potency of AGE, reflected by the healing of the gastric tissue damage induced by indomethacin.
Introduction
The side effects of anti-inflammatory drugs are one of the major problems in modern medicine today. Therefore, the concept of nutraceuticals evolved. Nutraceuticals are medicinal foods that have a role in maintaining wellbeing, enhancing health, modulating immunity, and thereby preventing as well as treating specific diseases [1]. Plants of the genus Allium are known for their production of steroids, saponins, and organosulfur compounds, of which alliin and ajoene are representative chemicals. Recently, fresh garlic has been found to have some interesting biological and pharmacological activities, including antifungal and antibacterial effects [2,3]. Aqueous garlic extract exerts antioxidant action by scavenging reactive oxygen species and enhancing the cellular antioxidant enzymes superoxide dismutase, catalase, and glutathione peroxidase [4]. Garlic represents an important source of antioxidant phytochemicals such as diallyl sulfide, S-allylmercaptocysteine, and ajoene, which provides a good basis for neutralizing free radical-mediated inflammation. It possesses hepatoprotective, neuroprotective, genoprotective, immunoprotective, and antioxidative activities [5][6][7][8][9].
Nonsteroidal anti-inflammatory drugs (NSAIDs) are widely used in the treatment of fever, pain, and inflammation. However, these drugs have some side effects, especially on the gastrointestinal tract, such as gastric mucosal erosions, ulcerations, bleeding, and perforations. Many studies suggested that the mechanisms for the gastric damage caused by NSAIDs are inhibition of prostaglandin synthesis and inhibition of epithelial cell proliferation in the ulcer margin [10][11][12]. They stimulate HCl secretion and weaken the mucous gel layer, which acts as a barrier, by decreasing mucin production and the secretion of bicarbonate from the gastric and duodenal mucosa [13]. Indomethacin (IN) is a nonsteroidal anti-inflammatory drug commonly used to reduce fever, pain, stiffness, and swelling. It works by inhibiting the production of prostaglandins, which normally protect the gastrointestinal mucosa from damage by maintaining blood flow and increasing mucosal secretion of mucus and bicarbonate [14]. Indomethacin has been shown to produce more severe gastric damage in rats than other NSAIDs and has become the preferred drug for inducing ulcer models [11,12,15]. Increased levels of reactive oxygen species (ROS) have been reported in the mechanism of both stress- and IN-induced gastric damage [16]. ROS, which cause lipid peroxidation (LPO), are known to play a critical role in the pathogenesis of acute gastric damage induced by stress, ethanol, and NSAIDs [17,18]. Therefore, the present study was designed to explore the potential anti-inflammatory effects of AGE on IN-induced gastric inflammation in male albino rats and to evaluate its effects on antioxidant parameters in rat stomach tissue and on the level of TNF-α, a cytokine that plays an important role in inflammation.
Material and Methods
Experimental Animals. The present study was carried out on 48 adult male albino rats (Rattus norvegicus), weighing 200 ± 20 g. Animals were housed under environmentally controlled conditions (temperature of 22 ± 2 °C) with a 12 h light/dark cycle and had free access to commercial rodent pellets and water ad libitum, in accordance with the National Institutes of Health Guide for Care and Use of Laboratory Animals [19].
Chemicals. Aged garlic extract (AGE) was purchased as capsules from Wakunaga of America CO., LTD (Mission Viejo, CA, USA). Indomethacin (IND) was purchased from Sigma Chemical Co., St. Louis, MO, USA.
The Experimental Design. The animals were randomly divided into six groups (n = 8) and treated orally via stomach tube daily for 30 consecutive days: (1) control, given water;
(2) indomethacin-induced gastric inflammation (IN): the animals were deprived of food but had free access to tap water for 24 h before ulcer induction with a single oral dose of indomethacin (30 mg/kg).

Tissue Sampling. At the end of the experimental duration, the animals were sacrificed (4 h after indomethacin administration in the IN group). Blood was immediately collected for serum preparation and the stomachs were removed from the body.
Macroscopic and Histopathological Studies. The isolated stomachs from the control and treated groups were cut along the greater curvature, washed in ice-cold saline, spread out with pins on a cork board and then photographed using a digital camera to assess the inflamed areas of the mucosa in all tested groups. The total gastric mucosal erosive lesions were measured (mm²) with a dissecting microscope under ×20 magnification [20]. The stomachs were then divided into three parts; the first part was fixed in 10% formalin for histopathological examination. Paraffin sections 5 μm in thickness were prepared and stained with haematoxylin and eosin (H&E) to verify histological details [21].
Total Gastric Microflora. The stomach content of the second part was rinsed with 10 mL of 0.9% NaCl, collected in a sterile tube and diluted with buffered sodium chloride peptone solution, pH 7. Samples were filtered using the membrane filtration method; one of the membranes was transferred with sterile forceps onto a casein soybean digest agar plate and incubated for 5 days at 30-35 °C for bacteria, while the other membrane was transferred to a Sabouraud dextrose agar plate and incubated for 5 days at 20-25 °C for fungi. The test strains were Candida albicans (ATCC 2091) and Escherichia coli (ATCC 8739) (ATCC: American Type Culture Collection, Rockville, MD, USA). The total viable aerobic count is the sum of the average number of bacterial colony-forming units (CFU) found on casein soybean digest agar [22] and of fungal colonies on Sabouraud dextrose agar [23]. Serum TNF-α was determined using an ELISA (enzyme-linked immunosorbent assay) kit [24], and the third part of the stomach tissue was homogenized in 50 mmol/L phosphate buffered saline (PBS), pH 7.2, under cold conditions using a glass-Teflon homogenizing tube. The homogenate was centrifuged at 2500 r/min for 10 min and the supernatant was used for the determination of the malondialdehyde (MDA) level by the thiobarbituric acid test [25], total glutathione (tGSH) [26], and the enzyme activities of superoxide dismutase (SOD) [27], catalase (CAT) [28], and myeloperoxidase (MPO) [29].
Statistical Analysis. Data were expressed as means ± SE. Statistical analysis was performed by one-way ANOVA; once a significant test was obtained, LSD comparisons were performed to assess the significance of differences among the various treatment groups. SPSS for Windows, Release 20.0 (SPSS, Chicago, IL), was used.
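For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal, illustrative sketch (not the authors' workflow) of a one-way ANOVA followed by LSD-style pairwise comparisons; the group names and values are placeholders, and the pairwise step is simplified to unadjusted t-tests run only after a significant omnibus test.

```python
# Illustrative sketch only: one-way ANOVA followed by LSD-style pairwise comparisons.
# Placeholder data; a strict Fisher's LSD would use the pooled ANOVA error term rather
# than per-pair variances, so this is a simplification.
import numpy as np
from scipy import stats
from itertools import combinations

groups = {
    "control":   np.array([1.1, 1.3, 1.2, 1.0, 1.2, 1.1, 1.3, 1.2]),
    "IN":        np.array([3.5, 3.8, 3.6, 3.9, 3.7, 3.4, 3.8, 3.6]),
    "IN+AGE200": np.array([1.9, 2.1, 2.0, 1.8, 2.2, 2.0, 1.9, 2.1]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:  # LSD comparisons are only protected by a significant omnibus test
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")
```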
Results
Effect of AGE on Gastric Inflammation.
Indomethacin administration to fasted rats induced gross linear hemorrhagic mucosal lesions. Treatment with AGE showed a significant healing effect on the gastric lesions induced by indomethacin in a dose-dependent manner (Figures 1 and 2).
Effect of AGE on Total Gastric Microflora.
Indomethacin induced a significant increase in the total microbial count (9.01 ± 0.11 log CFU/g) compared with the control (4.78 ± 0.09 log CFU/g). The total microbial count was also significantly increased compared with the control in the AGE 100 and AGE 200 groups, recording 6.04 ± 0.27 log CFU/g and 5.30 ± 0.03 log CFU/g, respectively, while remaining significantly lower than that of the IN group. Treatment of the indomethacin-ulcerated group with AGE produced a significant, dose-related reduction in the total microbial count compared with the IN group value (9.01 ± 0.11 log CFU/g), recording 8.61 ± 0.14 log CFU/g and 7.61 ± 0.14 log CFU/g in the IN + AGE 100 and IN + AGE 200 groups, respectively (Figure 3).
Histopathological Effect of AGE on Gastric Inflammation.
Indomethacin intake induced severe and extensive macroscopic gastric mucosal damage in the starved rats, characterized by injury to the epithelial layer of the mucosa and sloughing of the gastric mucus (Figure 4).
Effect of AGE on MPO Enzyme Activity.
The index of neutrophil infiltration in gastric damage, myeloperoxidase enzyme activity, was measured in the stomach tissue and is represented in Figure 10. In the indomethacin-treated group, MPO activity significantly increased to 5.80 ± 0.12 μmol/mg tissue, compared with 0.65 ± 0.02 μmol/mg tissue in the control group.
Discussion
There are many factors implicated in the pathophysiology of the indomethacin ulcer model, and its higher ulcerogenic potential compared with other nonsteroidal anti-inflammatory drugs (NSAIDs) has established it as the first-choice drug for producing an experimental ulcer model [30]. The ulcerogenic mechanism of indomethacin has been suggested to involve severe oxidative stress in gastric tissue, causing damage to key biomolecules such as lipids, proteins, and DNA, leading to accumulation of MDA and other reactive products, increased MPO activity, and alteration of enzymatic and nonenzymatic antioxidant parameters, thereby enhancing oxidative damage during stomach ulceration [15,[31][32][33][34]. In addition, tumor necrosis factor-α (TNF-α) might be the key signal for NSAID-induced gastric inflammation: neutrophil accumulation within the gastric microcirculation and the plasma levels of TNF-α in rats increased significantly following the administration of indomethacin, accompanied by gastric injury [12,35,36]. In the present study, there was marked damage to the gastric mucosa, as evident from the macroscopic and histopathological examinations, associated with reduced activities of SOD and CAT and a reduced tGSH level, in addition to an elevated TNF-α level and MPO activity, following indomethacin administration.
The significant elevation of microflora predicted in our ulcer model can be explained by previous studies showing that, in experimental models, bacterial colonization at the stomach ulcer site appears to play an important role in exacerbating mucosal injury and has a clear detrimental effect on its healing [37]. Gram-negative bacteria are likely to be responsible for the observed delay in ulcer healing, whereas Gram-positive bacteria may actually promote ulcer healing. A previous study suggested that bacteria other than Helicobacter pylori have the capacity to significantly influence the ulcer and that ulcers represent an environment conducive to bacterial growth [38]. Other studies showed that persistent colonization of the stomach with C. albicans could be achieved in rats by NSAID treatment and that, despite a marked reduction in gastric acid secretion, this infection delays ulcer healing and is accompanied by a fall in mucosal microcirculation in the ulcer area. The delay in ulcer healing induced by Candida may be associated with gastric mucosal inflammation involving overexpression and subsequent release of TNF-α [23,39]. The current study investigated the anti-inflammatory activity of AGE at two doses (100 and 200 mg/kg) in indomethacin-treated rats. The experimental results showed the superior healing potential of the high AGE dose (200 mg/kg) compared with the 100 mg/kg dose against the gastric mucosal injury induced by indomethacin. The anti-inflammatory activity of AGE was exhibited through its antioxidant activity, resolving the oxidative stress in gastric tissue. AGE is prepared by soaking garlic in an ethanol-water mixture for 20 months, which removes irritant compounds from garlic and solubilizes some of the insoluble compounds. The process converts unstable compounds, such as allicin, to stable substances and produces high levels of water-soluble organosulfur compounds that are powerful antioxidants. These include S-allylcysteine (SAC), AGE's major component, and S-allylmercaptocysteine, which is unique to AGE. Among the other compounds present are low amounts of oil-soluble organosulfur compounds, flavonoids, a phenol (allixin), selenium, and saponins [6,[40][41][42]. The phenolic constituents of AGE may exert scavenging activities by donating a hydrogen atom from their phenolic hydroxyl groups [41,42]. In addition, Nα-(1-deoxy-D-fructos-1-yl)-L-arginine (Fru-Arg) was identified as a major antioxidant compound in AGE; its hydrogen peroxide scavenging activity was comparable to that of ascorbic acid, suggesting that it could contribute to the pharmacologic effects of AGE through its antioxidant properties [43]. AGE has previously been proposed to offer greater safety and efficacy than raw garlic as a therapeutic agent [42]. Regarding the antimicrobial activity of AGE, our results are in line with previous studies, which found that the garlic-derived compound S-allylcysteine (SAC) inhibited the growth of Escherichia coli and enhanced the antibiotic effect of gentamycin [44]. A previous study revealed the dose-dependent antimicrobial efficacy of aqueous garlic extract against 133 multidrug-resistant Gram-positive and Gram-negative bacterial isolates and against 10 Candida spp. The antimicrobial potency of garlic has been attributed to its ability to inhibit toxin production and the expression of enzymes involved in pathogenesis [45]. The antibacterial and antifungal activities of the AGE constituents allicin and SAC have been demonstrated previously [46][47][48][49].
The resolution of oxidative stress in gastric tissue was also reflected in the relationship between gastric non-enzymatic tGSH levels and ulcer severity. Tissue tGSH and GSH-related enzymes are accepted as important protective agents whose antioxidant properties prevent tissue damage by keeping ROS at low levels [50,51]. In previous in vivo and cell culture-based studies, aged garlic extract and SAC were reported to preserve the levels of glutathione peroxidase and glutathione reductase, the latter being involved in the conversion of oxidized glutathione to glutathione [52,53]. Aged garlic extract treatment significantly prevented stress-induced morphological degeneration and reversed the increased MDA level and the decreased GSH content of the gastrointestinal mucosa to control values, owing to its potent free radical scavenging and antioxidant properties [54]. The antioxidant properties of AGE also ameliorated oxidative organ injury due to naphthalene toxicity by significantly reversing the elevated MDA levels and MPO activity [55]. The present study showed that the activities of the antioxidant enzymes SOD and CAT were restored to normal with 200 mg/kg AGE treatment in the IN-administered group; these results agree with previous studies reporting that garlic extract induces antioxidant effects in rats [56,57] and suggesting that AGE is able to directly scavenge superoxide radicals [58]. Garlic allicin has been shown to inhibit TNF-α secretion, demonstrating an anti-inflammatory effect on intestinal epithelial cells [59], and SAC exhibited a dose-dependent inhibition of NF-κB activation induced by both TNF-α and H2O2 in human T lymphocytes (Jurkat cells) [60].
In conclusion, the healing activity of AGE at the high dose (200 mg/kg) may result from its ability to scavenge the ROS produced by indomethacin administration that initiate lipid peroxidation. The gastroprotective effect of AGE against gastric damage induced by indomethacin may be related to its anti-inflammatory actions and its antioxidant properties, which reduce MDA levels and MPO activity and increase tGSH levels and SOD and CAT activities. Therefore, our study suggests that AGE is safe and could be a promising new drug for the prevention of NSAID-induced gastric damage. | 3,476.2 | 2014-04-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
Mammography using low-frequency electromagnetic fields with deep learning
In this paper, a novel technique for detecting anomalous tissues in the female breast is presented and validated through numerical simulations. The technique, to a high degree, resembles X-ray mammography; however, instead of using X-rays for obtaining images of the breast, low-frequency electromagnetic fields are leveraged. To capture breast impressions, a metasurface, which can be thought of as analogous to X-ray film, has been employed. To achieve deep and sufficient penetration within the breast tissues, the source of excitation is a simple narrow-band dipole antenna operating at 200 MHz. The metasurface is designed to operate at the same frequency. The detection mechanism is based on comparing the impressions obtained from the breast under examination to the reference case (healthy breasts) using machine learning techniques. Using this system, not only would it be possible to detect tumors (benign or malignant), but one can also determine the location and size of the tumors. Remarkably, deep learning models were found to achieve very high classification accuracy.
Hamid Akbari-Chelaresi 1, Dawood Alsaedi 2, Seyed Hossein Mirjahanmardi 3, Mohamed El Badawe 4, Ali M. Albishi 5, Vahid Nayyeri 6* & Omar M. Ramahi 1*
The absence of reliable, non-invasive, and high-resolution technologies to detect breast malignancies at early stages has resulted in increased morbidity and poor quality of life for many women. Current technologies face critical challenges associated with health-related issues, scanning time, and affordability. Considering the fact that breast cancer is the most common cancer amongst women, proposing new techniques for breast cancer detection, which can eliminate the aforementioned challenges and improve the diagnostic resolution, is of significant importance 1. Such techniques can lead to early-stage detection, thereby increasing the chance of full recovery 2.
One of the traditional methods for cancer detection is X-ray mammography. Although this technology is relatively inexpensive and suitable for detecting malignant tissues in low-density breasts, it has a potentially harmful effect due to ionizing radiation. For high-density breasts, diagnosis of cancerous tumors is challenging owing to a high overlap between fat and malignant tissues 3,4. Magnetic resonance imaging (MRI) is used as a complementary diagnostic tool, providing higher accuracy than X-ray mammography, especially for high-density breasts. However, being costly, MRI cannot be used for regular screening, particularly in low-income communities 5. Ultrasound is also used for breast-cancer diagnosis; however, its accuracy in detecting tumors depends on the radiologist's expertise 6.
In recent years, breast cancer detection techniques based on microwave imaging (MWI) have been introduced as an alternative to the aforementioned traditional methods [7][8][9][10][11][12]. MWI uses non-ionizing electromagnetic (EM) waves instead of potentially hazardous ionizing waves. In addition, low-frequency electromagnetic excitation allows for deeper penetration, thereby increasing the ability to diagnose anomalies buried deep inside denser breasts 13,14. Furthermore, MWI takes advantage of low-cost system integration. The main components of any MWI-based system are transmitters (antennas) to emit EM waves and receivers (detectors) to capture the data in the form of the EM field distribution, which is correlated to the permittivity of tissues 15,16. The collected data is then sent to a computer for image construction.
Generally, technologies based on MWI can be categorized as radar-based imaging technology and microwave tomography 17. The radar-based imaging technique constructs images using reflected waves from the objects under examination 18, whereas microwave tomography techniques are mainly founded on inverse scattering algorithms to construct an image of the object 19. Nevertheless, all current technologies based on MWI have been facing critical challenges such as the complexity of the system (due to using multiple antennas), the need for impedance-matching liquid between the breast and antennas in a wide variety of cases, the complication of image construction using experimental data, and low accuracy and resolution owing to the coupling between the antennas (detectors) 10,13. Microwave-based systems have also been introduced for breast cancer detection without the need for full breast imaging 20,21.
In a recent work, we introduced a new microwave-based modality for breast cancer detection, which is identical in concept to X-ray mammography but uses microwaves 22. The key concept behind this new and simple modality is achieving an impression of the female breast that correlates with the constituents of the breast. This impression is captured using a metasurface consisting of an ensemble of electrically-small resonators stitched together to achieve good electromagnetic energy absorbance. Then, artificial intelligence is used to train the system to provide a conclusion as to the probability of the existence of a tumor within the breast. The system developed in 22 has two fundamental limitations. The first is due to the metasurface used to capture the impressions. This metasurface is constrained by its unitcell size, which directly relates to the frequency at which the system can operate. To achieve higher impression resolution, the metasurface needs to have a higher number of smaller cells (which resemble pixels). However, smaller cells typically resonate at higher frequencies, limiting the penetration of the electromagnetic field into the breast. The second challenge relates to the limited accuracy of the achieved impressions. Since a relatively high frequency was used in 22 (around 2.0 GHz), the achieved impressions could not give a robust correlation to all the constituents of the breast, again, for the simple reason that at 2.0 GHz the penetration into the breast is shallow and does not cover the entire volume of the breast 23.
Deep neural networks have shown remarkable performance across various medical imaging domains, performing tasks such as classification, detection, segmentation, and reconstruction. Artificial neural networks (ANNs) require a large amount of high-quality data for efficient training and to achieve reliable outputs. Hence, part of their success is owed to the abundant data collected from widely used imaging modalities, including MRI, computerized tomography (CT), ultrasound, and pathology. While the implementation of ANNs in the microwave domain dates back more than two decades, to the training of shallow networks (containing one fully connected hidden layer) 24, only recently have attempts been reported to use deep networks 25,26. One of the underlying reasons is the data scarcity of microwave images. The recent works proposed a conventional convolutional neural network (CNN) to enhance the microwave imaging resolution, while the earlier works used an autoencoder followed by a secondary CNN for image reconstruction purposes.
Deep learning (DL) algorithms are conventionally categorized into supervised learning, weakly supervised learning, and unsupervised learning. In supervised learning, a set of M training samples {(x_i, y_i)}_{i=1}^{M} exists, where for each input x_i an annotated label y_i is available. The objective is to train a network g_θ: x → y by minimizing a loss function L so that it predicts the most accurate label for an unknown test image. Weakly supervised methods leverage only coarse-grained annotations to deduce the fine-grained labels, while unsupervised learning requires no annotations and aims at recognizing patterns in the data.
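Written out, the supervised objective just described corresponds to the standard empirical-risk formulation (using this section's notation; the specific loss used by the authors is given later as the categorical cross-entropy):

```latex
\theta^{*} \;=\; \arg\min_{\theta}\; \frac{1}{M}\sum_{i=1}^{M} \mathcal{L}\bigl(g_{\theta}(x_i),\, y_i\bigr)
```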
Here, our main objective is to provide the majority of women who are at risk of breast cancer worldwide with a modality that can be used frequently. As discussed earlier, the MRI modality is neither affordable nor convenient for frequent screening. Also, X-ray mammography can cause health issues for patients due to its ionizing radiation. Our proposed modality is most suitable for screening purposes, owing to its non-hazardous, low-cost, and easy-to-use features.
In this work, we present the design of a mammography system for detecting tumors using low-frequency electromagnetic fields (LFEMF) that overcomes the challenges in our earlier work 22. First, instead of using a plane wave excitation that requires a horn antenna placed in the far field of the breast under examination, a simple electrically-small dipole antenna is used in the near field of the breast to provide a rich source of excitation that includes EM energy containing all field polarizations. This type of excitation is intended to excite all constituents of the breast, thereby creating an impression that is inclusive of most breast tissues. Second, to enable a much lower frequency excitation and, as a result, effectively increase the penetration and impression resolution, we have designed a metasurface made up of an ensemble of highly miniaturized unitcells that operate at 200 MHz (which is one order of magnitude lower than the earlier design 22). Third, we employ a new technique in the image processing stage, which is based on the comparison between healthy and unhealthy breast models. In fact, after capturing the images, unlike in the previous work 22, the dataset images are first pre-processed and then fed into the neural network algorithm to train the introduced architecture in a supervised manner.
Unitcell design
The metasurface is analogous to the X-ray film used in X-ray mammography. It is effectively an ensemble of electrically-small resonators, which are typically referred to as unitcells. When the unitcells are placed in close proximity to each other, they create a tightly-spaced array, whereby the input impedance of each unitcell is strongly altered by the surrounding ones. The unitcell can be designed using different topologies available in the literature, one of which is the electric-inductive-capacitive (ELC) resonator, which provides high sensitivity to incident electric fields 22,27. Such a unitcell is made of capacitors and inductors, realized by gaps and conductive ring patterns. An important design feature of the metasurface for our application is being low loss to maximize energy absorption at the terminals. Earlier works have shown that a metasurface composed of ELC resonators can achieve near-unity absorption, albeit for energy harvesting applications.
Since our desired frequency is around 200 MHz (λ = 1.5 m), the typical size of an ELC resonator would be large compared to the human female breast. This would then preclude the possibility of using the unitcell as a pixel for constructing an impression of the breast. Essentially, the resolution of the impression is directly dependent on the size of the unitcell. To enable sufficient miniaturization of the unitcell such that the metasurface would have a reasonable number of them for high resolution, we can increase the equivalent capacitance and/or inductance, thereby decreasing the frequency according to Eq. (1): f_r = 1/(2π√(LC)), where f_r is the resonance frequency, and C and L are the equivalent capacitance and inductance of the ELC resonator, respectively. To increase the capacitance, the separation between the metallic parts of the resonator would have to decrease significantly. While this is possible from a fabrication standpoint, it would, however, increase the fabrication tolerance requirements and cost. Therefore, our approach is to increase the inductance using lumped inductors placed in the inner and outer rings of the proposed ELC resonator. The concept of manipulating resonance frequencies with lumped elements, such as inductors, capacitors, and resistors, for the miniaturization of electrically-small resonators has been used in previous works such as 28 and references therein. Later, the resonance frequency can be tuned to optimize the S-parameters for maximum absorption.
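As a quick numerical illustration of Eq. (1), the sketch below evaluates the resonance frequency for increasing lumped inductance; the equivalent capacitance value is purely hypothetical (the paper does not state it), so the numbers only illustrate the scaling, not the actual design.

```python
# Illustrative only: evaluates Eq. (1), f_r = 1 / (2*pi*sqrt(L*C)).
# C_eq below is a hypothetical equivalent capacitance chosen for illustration;
# the actual equivalent C of the ELC resonator is not given in the text.
import math

def resonance_frequency(L_henry: float, C_farad: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

C_eq = 6.3e-12                         # hypothetical equivalent capacitance (F)
for L_eq in (1e-9, 10e-9, 100e-9):     # increasing lumped inductance (H)
    f_r = resonance_frequency(L_eq, C_eq)
    print(f"L = {L_eq * 1e9:5.0f} nH -> f_r = {f_r / 1e6:7.1f} MHz")
```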
The proposed unitcell, which is a miniaturized ELC resonator, is shown in Fig. 1. To preserve the unitcell symmetry in the x and y directions, not only have the splits been placed at the middle of the rings, but the values of the inductors and resistors were also selected to be identical. In addition, two vias, instead of a single via, were implemented to retain a degree of symmetry with respect to the impinging electromagnetic field 29. To minimize the loss, in our design we considered a low-loss Rogers substrate (TMM10i) with a relative permittivity of εr = 9.8 and a loss tangent of tan(δ) = 0.002.
The unitcell was modeled using CST Microwave Studio 30. Considering that the unitcell is intended to be placed in an infinitely self-repeating structure in both the x and y directions, in order to model the structure's periodicity, a perfect magnetic conductor (PMC) boundary condition was placed in the x-direction, and a perfect electric conductor (PEC) boundary condition was placed in the y-direction. The performance of the unitcell was then gauged by illuminating the unitcell and recording |S11|.
The unitcell was optimized based on two key targets: minimum |S11| and maximum Q-factor. The optimized parameters of the unitcell are given in Table 1; this unitcell has a footprint of 10.5 mm × 10.5 mm and a thickness of 4.0 cm, which allows for increased inductance due to the long vias. Figure 2 shows the response of the unitcell, where three controlling parameters were investigated: the substrate thickness t, the inductance values L1 = L2 = L3 = L4 = L, and the termination resistance values R1 = R2 (see Fig. 1). In Fig. 2a, because the vias add inductance to the structure, the longer the vias are (i.e., higher t), the lower the resonance frequency. Figure 2b shows that the resonance frequency can be adjusted by varying the inductance of the lumped inductors. Figure 2c illustrates that the minimum of |S11| occurs for R1 = R2 = 470 Ω at the resonance frequency of the unitcell, which indicates perfect impedance matching between free space and the cell.
Figure 2a-c shows that the impedance matching is affected by the vias' length (t), the inductors' inductance L, and the termination resistors R1 = R2, respectively. We further observe that for t = 40 mm, L = 100 nH, and R1 = R2 = 470 Ω, |S11| has a minimum value of lower than -30 dB, corresponding to a very strong impedance match at 200 MHz, and a Q-factor of 16. This strong absorbance (matching) does not occur for the other values of the parameters studied in Fig. 2.
Figure 2d shows the response of the optimized unitcell with the parameters given in Table 1. It can be seen that 0.46 W (92%) of the 0.5 W incident power is delivered to the termination resistors (0.23 W in each resistor), and 0.04 W (8%) is lost due to dielectric and ohmic losses.
LFEM imaging system design
A metasurface was formed as a 10 × 10 array of the unitcell described above. Figure 3a,b shows the top and bottom views of the metasurface. Each unitcell represents a pixel, the value of which is assigned by measuring the power dissipation at the corresponding termination resistors. Owing to the symmetry of the unitcell, the received power is equally distributed between the resistors. Thus, by measuring the dissipated power in one of the two termination resistors of a unitcell, and repeating the same measurement for all of them, we end up with a two-dimensional power map, which can be thought of as an impression-image. In other words, the position (x_n, y_n) of each unitcell and its termination power dissipation is recorded for the impression-image construction, which reflects the power absorbed by the unitcells (pixels).
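As a conceptual sketch of this step (variable names, array orientation, and the normalization are assumptions, not the authors' code), the per-unitcell power readings can be assembled into an impression-image as follows:

```python
# Conceptual sketch: assemble a 10 x 10 impression-image from the power dissipated in one
# termination resistor per unitcell. Indexing and normalization are assumptions.
import numpy as np

def build_impression(dissipated_power, n_rows=10, n_cols=10):
    """dissipated_power: dict mapping (ix, iy) unitcell indices to measured power (W)."""
    impression = np.zeros((n_rows, n_cols))
    for (ix, iy), p in dissipated_power.items():
        impression[iy, ix] = p
    return impression / impression.max()   # normalize so different scans are comparable
```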
We have leveraged the technique proposed in 22 to record a 30 × 30 impression instead of a 10 × 10 impression. Considering that the length of the unitcell is Δx, in addition to the reference position, we recorded the impressions after shifting the whole metasurface by Δx/3 and 2Δx/3 in both the x and y directions. By doing so, nine different combinations of metasurface positions provide us with nine pixels for each unitcell. Thus, using the whole array, with the same metasurface, an impression with 900 pixels, rather than 100 pixels, is created (see Fig. 3c).
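A possible way to interleave the nine shifted scans into the 900-pixel impression is sketched below; the indexing convention is an assumption, and the snippet only illustrates the idea of treating each shift as a sub-pixel offset.

```python
# Conceptual sketch: interleave nine 10 x 10 scans, recorded with the metasurface shifted by
# 0, dx/3 and 2*dx/3 along x and y, into a single 30 x 30 impression.
import numpy as np

def interleave_scans(scans):
    """scans[sy][sx]: 10 x 10 impression for a shift of sx*dx/3 in x and sy*dx/3 in y."""
    fine = np.zeros((30, 30))
    for sy in range(3):
        for sx in range(3):
            fine[sy::3, sx::3] = scans[sy][sx]
    return fine
```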
In our previous works [31][32][33], it was shown that, in the near field, the resolution can dramatically exceed the Abbe diffraction limit. If the source of radiation is placed very close to the breast, the electromagnetic field that impinges upon the breast contains all polarizations. Therefore, under such excitation, and as opposed to a plane wave excitation where the impinging field is polarized in one direction, the interaction between the impinging fields and the breast constituents provides a scattered or secondary field that has higher information content.
Recalling the spring-and-mass model of a molecule interacting with EM waves, molecular polarization takes place due to the formation of dipole moments inside the sample. Depending on the polarization of the incident field, molecules inside the breast are affected differently. Thus, with the different polarizations generated by the source, the molecules of healthy and cancerous tissues are excited differently, generating independent information and leading to unique impressions on the metasurface. An electrically small dipole with a length of λ/10 and a diameter of λ/1000, placed very close to the breast, was used as the radiation source.
In the numerical simulations, various types of breast models were considered to cover human female breast diversity. The breast mainly consists of fibro-glandular and fat tissues 34. The density of the breast model can be categorized based on the density of the fibro-glandular tissues. Figure 3d shows the breast model used in the CST simulation, consisting of skin, fat tissue, fibro-glandular tissue, and a tumor. Four different categories of breast model were considered: (1) extremely dense (more than 75% fibro-glandular tissue), (2) heterogeneously dense (50-75% fibro-glandular tissue), (3) scattered fibro-glandular areas (25-50% fibro-glandular tissue), and (4) entirely fatty (less than 25%) 34,35. The percentage of fat and fibro-glandular tissues can vary widely amongst women. However, the mean composition of the fibro-glandular tissues, including skin, varies from 13.7 to 25.6%, with an overall mean of 19.3% 36. This means that most of the women in the statistical population are classified under the entirely fatty category, which makes the MWD process less challenging. In fact, the contrast between fibro-glandular and tumorous tissues is not as significant as the contrast between fat and tumorous tissues. Thus, the examination of extremely and heterogeneously dense breasts is more difficult than that of fibro-glandular-scattered and entirely fatty breasts 37.
For the purpose of validating our concept, the breast is considered to be a hemisphere with a radius of 50 mm, covered with a layer of skin with a thickness of 2 mm. The internal composition of the breast consists of fibro-glandular and fat tissues. At the operating frequency of 200 MHz, the relative permittivity and electrical conductivity of the fibro-glandular tissue are close to 64 and 0.8 S/m, whereas for fat tissue they are close to 5.6 and 0.03 S/m 38. The center C_t of the spherical tumor, modeled as a perfect electric conductor (PEC), has been placed at (x_t, y_t, z_t) with reference to the origin of the Cartesian coordinate system shown in Fig. 3d. This is a good preliminary electromagnetic model to guarantee sufficient contrast between the tumor and healthy tissues 39. In Table 2, the locations of the tumor and their corresponding coordinates are listed. These locations were used to produce the dataset for the classification exercise.
Numerical simulation, results, and discussion
Figure 4 illustrates the impression of a healthy breast model and that of a breast with a 10 mm tumor placed at the upper-left corner (C_t1, see Table 2) of the breast model. Considering Fig. 4, we can see that the differences between the two impressions appear negligible to the naked eye. To enhance the resolution, and consequently the potential to increase the differentiation between the impressions of the breast with and without the tumor, we subtract each impression of a tumorous breast from the impression of the same breast without the tumor.
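The subtraction step itself is straightforward; a minimal sketch is given below, with the maximum absolute difference used as a simple contrast measure (the exact contrast metric used by the authors is not spelled out, so this is an assumption).

```python
# Minimal sketch of the differential-impression step; the contrast metric is an assumption.
import numpy as np

def differential_impression(tumorous, healthy):
    diff = tumorous - healthy
    contrast = np.abs(diff).max()   # compared, e.g., against the 3.28% DV-asymmetry baseline
    return diff, contrast
```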
In the next step, we add a tumor sample with different dimensions in various locations. At first, a spherical tumor sample made of PEC with a radius of 10 mm is placed at four different locations: the upper-left corner (C_t1), the upper-right corner (C_t2), the lower-left corner (C_t3), and the lower-right corner (C_t4) of the breast (see Table 2 and Fig. 5a-d). We selected these locations to show the quality of the impressions and their ability to reveal the locations of the tumors, and to better observe subtle dissimilarities between different impressions.
The effectiveness of the subtraction technique presented in this work is based on the assumption that the left and right breasts are identical for the vast majority of women. Statistically, the magnitude of relative BV (breast volume) asymmetry between the two breasts has a median of 2.71%, and the magnitude of relative DV (dense volume) asymmetry has a median of 3.28% 40. By applying the subtraction technique mentioned above, we can observe in Fig. 5e-h that the most pronounced differences were around the tumor's location. In Fig. 5e,f,h, we observe that a strong correlation exists between the impressions and the locations of the tumors. However, in Fig. 5g, the lightest pixels lie somewhere other than the tumor's location, although we can see dimmer differences around the location of the tumor. Next, we analyze the results of the simulations for four different locations of a tumor with a radius of 7.5 mm. The tumor was placed in one of the four positions given in Table 2 (C_t1, C_t2, C_t3, and C_t4), as depicted in Fig. 6a-d. Figure 6e-h shows the impressions of the above-mentioned tumorous breast models after subtraction from the impression of the healthy breast. We observe that the most pronounced difference is close to the locations of the tumors. We also observe that the impressions cannot show the location of the tumors as accurately as those shown in Fig. 5e-h. The reason is the smaller size of the tumors, which makes them more challenging to recognize from the corresponding impressions. Another noteworthy point here is that the difference in magnitude has been reduced, in the sense that the maximum contrast of the impressions has decreased from 0.18 to 0.08 for the 10 mm and 7.5 mm tumor sizes, respectively (see the color bars of Figs. 5e-h and 6e-h). Finally, we investigate the results of the simulations for four different locations of a tumor with a radius of 5 mm. The tumor was placed in the same positions (see Fig. 7a-d). Figure 7e-h shows the impressions of the corresponding tumorous breast models after applying the subtraction method.
If the contrast in the achieved impressions is more than the DV asymmetry of 3.28%, some anomaly should exist in the female breast. From the simulations, a contrast of more than 3.28% was obtained for the 7.5 mm and 10 mm tumor sizes, which confirms the effectiveness of the proposed modality in identifying an anomaly of such dimensions. However, the impressions for a tumor with a 5 mm radius do not show the same level of dissimilarity, thus making it more difficult to identify tumors having a radius of less than 5 mm (see the color bars of Fig. 7e-h).
Mammography system setup and phantom studies
Figure 8 shows the full mammography system setup for clinical investigations. The electrically small antenna (ESA) is fed by a CW signal at the desired frequency (200 MHz). Because ESAs are very inefficient radiators, only a small portion of the power from the signal generator will be radiated. The radiated power can be increased by employing a power amplifier. The radiated low-frequency electromagnetic waves penetrate into the female breast and, after interaction with the breast constituents, the power incident on the metasurface is recorded using a spectrum analyzer or a power meter. By recording the power from each unitcell, an impression (or power map) is obtained for the breast.
A critical component of our mammography system is the metasurface. To validate the concept, the metasurface was built using a 10 × 10 array of unitcells, where each unitcell is 1 cm × 1 cm. The metasurface should incorporate a sufficient number of unitcells (pixels) to provide enough resolution to capture small anomalies inside the female breast. (For a higher resolution, smaller unitcells would be needed, which would require further miniaturization of the unitcell.) It is also possible to use a smaller array size (i.e., 5 × 5) to reduce the complexity of the circuit, but then the lower side of the breast would need to be fully scanned by moving the metasurface, which adds to the mechanical complexity of the system.
Regarding the positioning of the female breast in a real-world clinical setup, the upper and lower sides of the breast will be slightly pressed (see Fig. 3d). Thus, in the z coordinate, we will not encounter any challenge in adjusting the position. In the x and y directions, it is important that the left and right breasts are positioned identically with respect to their corresponding coordinate system reference. This can be achieved by having the woman under test lie in the prone position (see Fig. 8).
We emphasize that the proposed method is valid for any configuration of anomalies with arbitrary shapes, because it is intended for the detection of anomalies within the breast rather than imaging of the entire breast. Furthermore, it is valid for any breast shape with various fibro-glandular and fat tissue compositions, as long as the minimum contrast (DV asymmetry) between the anomalies and the healthy constituents exists. Here, we present a case study for a realistic numerical phantom model. This model is the ACR Class 2-Scattered Fibroglandular breast phantom, the constituents of which have been translated from an MRI sagittal slice into CST Microwave Studio. It consists of 100 different fibroglandular voxel configurations, each of which represents specific electromagnetic characteristics. The valid frequency range of the numerical phantom is from 0 to 20 GHz 41,42. The embedded tumor has a relative permittivity of 130 and an electrical conductivity of 2 S/m, which guarantees a low contrast of approximately 2:1 between the fibroglandular and cancerous tissues 18. Figure 9 shows that by using this method, regardless of the breast model under investigation, we achieve a sufficient contrast of 14% in the obtained impression, which is more than the DV asymmetry of 3.28%.
Considering the last section, where we investigated different scenarios of tumor sizes and locations, we conclude that by simply using the subtraction method, we cannot achieve highly reliable results in terms of a strong correlation between the tumor presence and the corresponding impression. Increasing the number of classes for different tumor sizes and locations makes distinguishing the impressions from each other more complex. Therefore, to provide better tumor detectability, in the next section we apply a deep learning (DL) method to classify the different impressions and separate them from each other.
Since our objective is to detect the location and size of the tumors (not their shapes or numbers), for simplicity we have chosen to run the simulations with spherical tumors of different radii. The 12 breast models that we used in our dataset differ in their fat constituents. We started from an extremely dense breast model and gradually added fat tissue to the breast in order to make it fattier (see Fig. 10).
Deep learning classifications
We have developed a deep learning architecture to classify the impressions recorded from the simulations. Figure 11a illustrates the complete architecture of the DL classifier. The input impression is a grayscale image of dimensions 30 × 30. Seven 3 × 3 convolutional blocks, each followed by a rectified linear unit (ReLU) and a 2 × 2 max pooling with stride 1 for downsampling, compose the feature-extraction part of the classifier. The last convolutional layer applies 64 filters, while the other layers use 32 filters. No padding was used in this architecture. The features are then flattened and passed to a fully connected layer with 64 nodes. A dropout with a rate of 0.5 is applied for regularization. Finally, a last fully connected layer with 12 nodes, representing the number of classes, is followed by a softmax activation function to predict image labels.
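A minimal Keras sketch consistent with this description is shown below; the exact layer ordering, filter assignment, and hyperparameters in the published architecture may differ, so this should be read as an assumption-laden reconstruction rather than the authors' implementation.

```python
# Sketch of the described classifier (assumed ordering: the last conv block uses 64 filters,
# the others 32; valid padding; 2 x 2 max pooling with stride 1 after each convolution).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(30, 30, 1), n_classes=12):
    model = models.Sequential()
    for i in range(7):
        filters = 64 if i == 6 else 32
        kwargs = {"input_shape": input_shape} if i == 0 else {}
        model.add(layers.Conv2D(filters, (3, 3), padding="valid", activation="relu", **kwargs))
        model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=1))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model
```

With a 30 × 30 input, seven valid 3 × 3 convolutions each followed by a stride-1 pooling reduce the feature map to 9 × 9 before flattening, which is consistent with the layer counts stated above.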
We ran 144 simulation scenarios and obtained impressions from different models having different tumor sizes and locations. By changing the percentage of fat tissue inside the breast model, we simulated different breast models ranging from extremely dense to fatty. In this work, we used 12 different models, wherein tumor samples of three different dimensions were placed in four distinct locations inside the diverse breast models. The impressions obtained from each simulation scenario then generated a dataset containing twelve classes, as shown in Table 3. The data were divided into training (66%), validation (17%), and test (17%) sets. A total of 100 epochs with a batch size of four was run to train the architecture. The model's loss function on the training and validation cohorts is illustrated in Fig. 11b, where the best model was saved based on the validation loss and used for testing purposes. We used an Adam optimizer with a learning rate of 10^-3 and categorical cross-entropy as the cost function. Shear, zoom, and rotational augmentations were used to enhance the model's generalizability. We used t-SNE dimensionality reduction to visualize the different classes in two dimensions. Figure 12a shows the results for the training data. It is seen that all twelve classes are distinctly separated from each other, indicating that the model successfully captured the critical features in the images based on their classes. The obtained F1-score and confusion matrix additionally resulted in 100% accuracy for all twelve classes, thereby confirming that our introduced model successfully distinguishes between classes. We further employed a decision tree model to compare its accuracy with that of our deep learning model. For this purpose, a histogram of oriented gradients (HOG) approach with 12 orientation bins was employed to extract the images' features. The accuracy achieved amounts to 75%, as opposed to the perfect accuracy acquired by the deep learning algorithm. The t-SNE visualization of the features extracted using HOG is illustrated in Fig. 12b. It is evident that the t-SNE visualization of the features generated by the deep learning implementation demonstrates a remarkable level of proximity among features belonging to the same class. This proximity suggests that the deep CNN effectively encodes essential patterns and distinctive characteristics, enabling it to return similar features for samples of the same class across nearly all classes. On the contrary, the t-SNE representation of the features obtained from the HOG algorithm exhibits a noticeable difference: the points corresponding to a specific class are notably more dispersed. This dispersion indicates that the HOG-based features may not be as effective in capturing and preserving the inherent relationships between samples belonging to the same class.
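For completeness, the training configuration described above could look roughly like the following; the augmentation magnitudes, callback, file names, and variable names are assumptions standing in for details the text does not give.

```python
# Rough training-configuration sketch matching the description: Adam (lr = 1e-3), categorical
# cross-entropy, 100 epochs, batch size 4, shear/zoom/rotation augmentation, and the best model
# kept by validation loss. Augmentation magnitudes and file names are assumptions.
import tensorflow as tf

model = build_classifier()  # from the sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

augment = tf.keras.preprocessing.image.ImageDataGenerator(
    shear_range=0.1, zoom_range=0.1, rotation_range=10)

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_loss", save_best_only=True)

# x_train / y_train and x_val / y_val would hold the 30 x 30 impressions and one-hot labels:
# model.fit(augment.flow(x_train, y_train, batch_size=4),
#           validation_data=(x_val, y_val), epochs=100, callbacks=[checkpoint])
```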
Conclusion
This work demonstrated a new technology for detecting anomalies inside the human female breast. The proposed detection system is mainly based on a metasurface, consisting of miniaturized unitcells, operating at 200 MHz. The combination of an electrically small radiating source and the metasurface was able to provide distinguishable impressions for various scenarios (different tumor sizes and locations). The captured impressions, provided by the hardware (source and detector antennas), were then fed into a deep learning algorithm to classify them automatically instead of manually. The output of the software achieved 100% accuracy.
While this work demonstrated the validity of the concept proposed here, in a future work, we will develop a prototype to demonstrate the feasibility of our detection modality experimentally.
Figure 2. |S11| for different values of (a) t, (b) L, and (c) R1 = R2, where the non-varied parameters are set to L = 100 nH, R1 = R2 = 470 Ω, and t = 40 mm. (d) |S11| and power delivered to the terminal resistors of the optimized unitcell, where the incident power is 0.5 W.
Figure 3. Metasurface array: (a) top view and (b) bottom view, showing the two termination resistors; (c) scanning technique for increasing the number of pixels of the impression, where Δx is the length of the square unitcell; the arrows show the directions of the displacements; (d) breast model, including skin, fat tissues, fibro-glandular tissues, and a tumor positioned at C_t.
Figure 4. (a) Simulation model of the healthy breast; (b) simulation model of the tumorous breast; (c) recorded impression of the healthy breast; (d) recorded impression of the tumorous breast, where the red circles indicate the border of the tumor with a radius of 10 mm and location C_t1 (see Table 2).
Figure 5. Model of the tumorous breast with a 10 mm tumor placed in the: (a) upper-left corner, (b) upper-right corner, (c) lower-left corner, and (d) lower-right corner, with the corresponding impressions shown in (e-h), respectively.
Figure 6. Model of the tumorous breast with a 7.5 mm tumor placed in the: (a) upper-left corner, (b) upper-right corner, (c) lower-left corner, and (d) lower-right corner, with the corresponding impressions shown in (e-h), respectively.
Figure 7. Model of the tumorous breast with a 5 mm tumor placed in the: (a) upper-left corner, (b) upper-right corner, (c) lower-left corner, and (d) lower-right corner, with the corresponding impressions shown in (e-h), respectively.
Figure 9. (a) The simulation setup of the ACR Class 2-Scattered Fibroglandular breast phantom; (b) the obtained impression, where the red circle indicates the border of the tumor with a radius of 1.5 cm.
Figure 10. Increment of the fat tissues in the dataset breast models.
Figure 11. (a) Deep learning architecture composed of seven convolutional blocks and two fully connected layers. (b) Loss function values for the training and validation sets over 100 epochs; the model with the lowest validation loss is marked with an x and used for testing purposes.
Figure 12. t-SNE visualization of the training data for the twelve classes shown in Table 3: (a) based on the features extracted from the deep learning architecture; (b) based on the HOG features.
Table 1. Design parameters of the unitcell.
Table 2. Various locations of the tumor.
Table 3. Size and location of the tumors for different classes. | 7,985.6 | 2023-08-15T00:00:00.000 | [
"Computer Science"
] |
Microtubule-Based Mitochondrial Dynamics as a Valuable Therapeutic Target in Cancer
Simple Summary Mitochondria are well known for being the powerhouses of the cell—whether the cell is normal or cancerous. Moreover, they can move, split, fuse themselves, or be eliminated via mitophagy with the help of the interplay between motor proteins and the cell scaffold—especially microtubules. The relationship between mitochondria, microtubules, and motor proteins is altered in cancer, and targeting this molecular machinery can offer a novel weapon in its treatment. In this paper, we review and summarize the state of the art of this approach. Abstract Mitochondria constitute an ever-reorganizing dynamic network that plays a key role in several fundamental cellular functions, including the regulation of metabolism, energy production, calcium homeostasis, production of reactive oxygen species, and programmed cell death. Each of these activities can be found to be impaired in cancer cells. It has been reported that mitochondrial dynamics are actively involved in both tumorigenesis and metabolic plasticity, allowing cancer cells to adapt to unfavorable environmental conditions and, thus, contributing to tumor progression. The mitochondrial dynamics include fusion, fragmentation, intracellular trafficking responsible for redistributing the organelle within the cell, biogenesis, and mitophagy. Although the mitochondrial dynamics are driven by the cytoskeleton—particularly by the microtubules and the microtubule-associated motor proteins dynein and kinesin—the molecular mechanisms regulating these complex processes are not yet fully understood. More recently, an exchange of mitochondria between stromal and cancer cells has also been described. The advantage of mitochondrial transfer in tumor cells results in benefits to cell survival, proliferation, and spreading. Therefore, understanding the molecular mechanisms that regulate mitochondrial trafficking can potentially be important for identifying new molecular targets in cancer therapy to interfere specifically with tumor dissemination processes.
Introduction
The cytoskeleton is a dynamic and interconnected network of filaments composed of structural and regulatory proteins that play a key role in all fundamental cellular processes, such as shape retention, motility, division, and intracellular transport of proteins and organelles [1,2]. Therefore, it is not surprising that alterations in cytoskeletal function can contribute to the onset and progression of cancer [3]. The three main types of filament that characterize the cytoskeleton are microfilaments, microtubules, and intermediate filaments [4]. Several ultrastructural analyses have shown that the cytoskeletal filaments interact directly or indirectly with the plasma membrane and various intracellular organelles [5].
Microtubule-dependent mitochondrial dynamics: Through the balance between fusion/fission and biogenesis/mitophagy, mitochondrial dynamics represent a central process in the bioenergetic adaptation and metabolic plasticity of cancer cells. The balance between biogenesis and mitophagy regulates the number of mitochondria and their quality. The fusion process helps to increase mitochondrial metabolism and to limit mitophagy and apoptosis, while the fission process allows the spatial redistribution of mitochondria in areas of the cell with greater energy and metabolic needs, favoring cell spreading and metastases.
Although the mechanisms regulating this interplay and its impact on mitochondrial architecture and cellular bioenergetics are still not well understood, growing evidence underlines how mitochondrial dynamics are fundamental to tumorigenesis, tumor progression, and the metabolic flexibility of cancer cells [15]. It has been hypothesized that the mitochondria-MT associations are necessary to regulate the distribution, positioning, and trafficking of mitochondria to cellular areas with high energy requirements, as suggested in several neuronal studies, where kinesins and dynein were shown to transport mitochondria through axons and dendrites to energy-intensive areas in order to produce adenosine triphosphate (ATP) and guanosine triphosphate (GTP) [12,16]. Furthermore, the interaction of microtubules with the outer membrane protein voltage-dependent anion-selective channel (VDAC) is directly involved in the coordination of mitochondrial function [17]. The intracellular distribution of mitochondria occurs through the action of the motor proteins associated with microtubules, including the plus-end-directed kinesins and minus-end-directed dyneins [18,19].
In addition to regulating cell metabolism and energy production, mitochondria play a crucial role in several fundamental cellular activities, including calcium homeostasis, reactive oxygen species (ROS) production, and programmed cell death [20]. Each of these processes can be impaired in cancer cells. The acquisition of migratory and invasive abilities and the adaptive changes in the metabolism of cancer cells have often been associated with alterations in the mitochondrial network [21]. Indeed, mitochondrial dynamic processes are key to the maintenance of mitochondrial homeostasis [22]; they include the displacement of mitochondria along the cytoskeleton and the regulation of mitochondrial architecture mediated by fusion/fission events [22]. Interestingly, in addition to intracellular mitochondrial movement, a horizontal mitochondrial transfer between neighboring or even non-immediately contacting cells has also been observed [23]. These exchanges, especially in the cancer microenvironment, can satisfy the energy needs of the acceptor cell, thus favoring its proliferation and survival.
In this review, we analyze the involvement of the mitochondria-microtubules interplay in tumor progression based on the current knowledge in this field.
Mitochondria
Mitochondria probably evolved from engulfed prokaryotes that developed an endosymbiotic relationship with the host eukaryote, gradually developing into a mitochondrion [24]. As double-membrane-bound organelles, mitochondria have five distinct compartments: the outer mitochondrial membrane (OMM), the inner membrane space (IMS), the inner mitochondrial membrane (IMM), the cristae (originated from the folds of the inner membrane), and the matrix that contains the mitochondrial DNA [25]. They are considered to be the energy producers of cells, as the cristae host the electron transport chain (ETC) and oxidative phosphorylation (OXPHOS) proteins. Mitochondria are especially located along cell extensions at the anterior edges of cells, where highly energetic mechanisms such as extensive cytoskeletal remodeling and cell adhesion processes occur [26]. Mitochondria play a pleiotropic role in tumorigenesis by allowing cancer cells to adapt to supervening metabolic needs and environmental changes [27]. Recent studies have demonstrated the potential roles of mitochondrial trafficking in cancer cell motility and invasion [28].
Mitochondria constitute a dynamic network in continuous reorganization, thanks to the balance between different mechanisms such as fission and fusion, biogenesis, and mitophagy, which control the number, morphology, quality, and cellular distribution of the mitochondria [29]. The mitochondrial dynamics are essential in regulating several cellular functions, playing a crucial role in bioenergetics activities, inflammation, cell differentiation, movement, and cell fate [29].
Mitochondrial Fission and Fusion
The mitochondrial network morphology continuously changes as a result of fusion/fission processes and the movement of mitochondria along microtubular structures [30]. In particular, the balance between fission and fusion determines the shape, size, and number of mitochondria, strongly impacting on energy metabolism. Emerging evidence indicates that alteration of this balance contributes to various aspects of tumorigenesis, cancer progression, and metastasis.
Fusion and fission are highly energetic cellular processes closely related to the functioning of mitochondrial activity [31]. For instance, the fusion of damaged mitochondria with healthy ones can restore-at least partially-the function of the impaired mitochondria. On the other hand, the fission process allows the segregation of functioning mitochondria from damaged ones, thus enabling the mitophagic removal of the latter [32]. Mitochondrial fusion is a sequential and complex process involving the outer and inner mitochondrial membranes and the matrix. The primary regulators of this process are the GTPase dynamin-related proteins (outer mitochondrial membrane proteins) mitofusin 1 (MFN1) and mitofusin 2 (MFN2), and optic atrophy 1 (OPA1)-a transmembrane protein tightly associated with the mitochondrial inner membrane and located in the intermembrane space [31].
The opposite process-mitochondrial fission-is mainly regulated by the large GTPase dynamin-related protein DRP1, mitochondrial fission protein 1 (Fis1), and mitochondrial fission factor (MFF) [33], and is responsible for mitochondrial fragmentation. DRP1 is a cytosolic protein, which requires the localization of Fis1 in the mitochondrial outer membrane in order to form the fission complex. DRP1 physically constricts the mitochondrion to form a ring structure located on the future mitochondrial fission area; its phosphorylation regulates the mitochondrial translocation and activation of DRP1 by multiple kinases as a function of the different phases of the cell cycle, or in response to stress conditions [34]. MFF, along with Fis1, appears to be one of the mitochondrial receptors of DRP1 [35]. Accordingly, a reduction in MFF levels induces elongation of the mitochondrial network and a decrease in the translocation of DRP1 to the mitochondria [36]. Recently, the mitochondrial dynamic proteins MID49 and MID51 have been observed to participate in the recruitment of DRP1 to the mitochondria [37].
Multiple studies have demonstrated an imbalance of fission and fusion processes in cancer, with elevated fission activity and/or decreased fusion resulting in a fragmented mitochondrial network [33]. Such fragmentation of mitochondria allows their spatial redistribution in cell areas with greater energy needs [38]. It has been proposed that mitochondrial fusion promotes tumor cell resistance to apoptosis, whereas mitochondrial fission has been associated with increased invasiveness. Indeed, several studies have demonstrated that mitochondrial fission is required in order to maintain the migratory and invasion potential of breast, thyroid, and glioblastoma cancer cells [38][39][40][41], while DRP1-induced mitochondrial fission was found to be associated with a migratory phenotype in several types of cancer. In human breast cancer cells, treatment with mitochondrial division inhibitor 1 (MDIVI-1)-a DRP1-specific inhibitor that suppresses mitochondrial fission [42]-induced the re-localization of mitochondria near the nucleus, suggesting inhibition of subcellular mitochondrial trafficking [28]. Notably, recent research has also demonstrated that restoration of the fused mitochondrial network-through either DRP1 knockdown/inhibition or MFN2 overexpression-impairs cancer cell growth, suggesting that mitochondrial network remodeling is essential in cancer progression [38,39,43]. In accordance with the above, a dysregulation of OPA1, MFN1, and MFN2 was observed in different types of human tumors-such as lung and bladder cancers [44,45]-while, in hepatocellular carcinoma, a high expression of DRP1 was associated with a significant increase in distant metastases [46]. All of these facts highlight the important role of mitochondrial dynamics in metastatic processes [33].
In any case, the mechanisms that regulate fission and fusion have not yet been fully identified, but they also seem to depend on the specific cell type (e.g., yeast, neurons, cardiomyocytes, epithelial cells, etc.). However, in general, it has been observed that mitochondrial motility facilitates fission and fusion, since a mitochondrion moves towards another to merge and, once divided, the mitochondria have to move apart in order to remain separate [47]. In fact, experimental evidence has suggested that impairment of mitochondrial motility-induced by nocodazole or vasopressin-causes selective inhibition of the fusion process [48].
Previously published data clearly indicate that microtubules play an important role in fusion and fission processes. For example, Mahecic et al. reported that microtubule-based motor proteins were responsible for generating sufficient tension forces to induce the fission process [49]. The actomyosin cytoskeleton participated in the formation of the constriction point, and in the recruitment of DRP1 in the division zone [50,51]. In accordance with this scenario, it was observed that the destruction of microtubules with nocodazole, or of actin filaments with latrunculin-β, inhibited the mitochondrial fission process [51]. On the other hand, it has also been found that the interaction between microtubules and mitochondria via the microtubule-mitochondria binding protein (Mmb1p) could inhibit the localization of DRP1 to the mitochondrion, thus counteracting the fission process [52]. Accordingly, the deletion of Mmb1p induced mitochondrial fission [53]. Mmb1p appears to play a role in the stability of the microtubule network. It has been suggested that more stable microtubules would favor longer contact times between mitochondria and microtubules, thus promoting mitochondrial elongation. Conversely, shorter mitochondria-microtubule interaction times would seem to favor the activation of fission mechanisms, leading to mitochondrial fragmentation [54].
In Figure 2, a schematic drawing summarizing the processes of fission and fusion, and the main actors involved, is shown. The fusion process contributes to sustaining respiration and mitochondrial metabolism, while limiting mitophagy and apoptosis. Mitochondrial fission is regulated by the GTPase activity of DRP1, which is recruited to the mitochondria in response to stresses and there interacts with its mitochondrial receptors (MFF, Fis1, and MID49/51). DRP1 is responsible for mitochondrial fragmentation, as it physically constricts the mitochondrion by forming a ring structure located at the future mitochondrial fission site.
Mitophagy
Given the crucial role of mitochondria in vital processes, there are several multistep mechanisms involved in the control of their functionality, including mitophagy [30,55,56].
Mitophagy, a specific type of autophagy, is a helpful self-degradative process for mitochondrial quality control [57]; it is critical to clearing damaged or dysfunctional mitochondria and maintaining cellular homeostasis, since dysfunctional mitochondria can promote oxidative stress [58].
The serine/threonine kinase PTEN-induced putative kinase 1 (PINK1), and the E3 ubiquitin ligase Parkin, play pivotal roles in the regulation of mitophagy. In healthy mitochondria, PINK1 is normally imported into the mitochondria, where it is cleaved by the protease PARL and remains in small amounts on the inner membrane. Under mitochondrial depolarization, with a decrease in the transmembrane potential (∆ψm), PINK1 accumulates on the outer membrane. Parkin is then selectively recruited from the cytosol to the damaged mitochondria via this PINK1-mediated process, triggering the ubiquitination of different proteins on the outer membrane, such as MFN1, MFN2, and VDAC [59]. Following ubiquitination, p62/SQSTM1 mediates the interaction between proteins marked by ubiquitin and LC3, allowing the formation of a phagophore able to engulf and degrade the damaged mitochondrion. Parkin-induced mitophagy is dependent on PINK1, but it also requires DRP1-mediated mitochondrial fission [24]. Indeed, fission is critical for mitophagy. In this process, one depolarized and one hyperpolarized mitochondrion are formed, and only the depolarized mitochondrion is removed, whereas the hyperpolarized mitochondrion can be re-introduced into the mitochondrial network [60]. The close association between mitochondrial movement and mitophagy was first indicated by the observation of a biochemical association between PINK1 and the Miro complex [61], and subsequently between Parkin and this complex-especially after mitochondrial depolarization with carbonyl cyanide m-chlorophenylhydrazone (CCCP) [62].
As a consequence of activating the PINK1/Parkin pathway, there is proteasome-dependent degradation of Miro and the subsequent release of kinesin from the mitochondrial surface [62,63]. All of this leads to the arrest of mitochondrial transport and the recruitment of cytosolic Parkin to the mitochondrion [62]. It is therefore likely that halting mitochondria in some manner facilitates their clearance by mitophagy.
An alternative pathway for the induction of mitophagy-particularly important in cancer cells-is activated by hypoxia. Damaged mitochondria increase the expression of BNIP3, BNIP3-like (BNIP3L/NIX), and FUNDC1-a family of mitophagy receptors localized in the OMM of the mitochondria [64], which directly recruit LC3 through their LC3-interacting region (LIR) to initiate mitophagy [65,66]. BNIP3 and NIX interact with LC3 at the microtubule level, promoting the sequestration of mitochondria in forming autophagosomes [67]. Figure 3 shows both alternative pathways.
A close link has been observed between mitophagy and microtubules in aggressive tumors, such as glioblastomas and metastatic melanomas. In particular, in a model of glioblastoma, a reduction in α-tubulin has been observed to induce a downregulation of BNIP3 and NIX, with consequent inhibition of mitophagy. This leads to a reduction in the numbers of lamellipodia and filopodia, with a significant reduction in the migratory capacity of tumor cells [68].
Mitophagy is also a crucial complex process in the progression of hematological malignancies and the acquisition of drug resistance, especially in advanced myeloma and lymphomas [69]. In high-grade lymphomas and in the cells derived from particularly aggressive tumors, the fusion between mitophagosomes and lysosomes frequently occurs in the perinuclear zone, at the minus end of the microtubule network [70]. In these cells, the mitochondrial localization around the nucleus is strongly fission-dependent [71]. DRP1 and Fis1 are master regulators of the fission machinery, and act in the asymmetric cell division of stem cells, facilitating the preservation of stem properties only in the daughter cells that inherit the younger mitochondria [72]. It is interesting to note that mitophagy can play opposite roles in tumorigenesis, based on the tumor type and stage and the microenvironmental context. Indeed, this process can promote the survival of cancer cells by eliminating damaged mitochondria that, through excessive ROS production, could induce apoptosis. At the same time, mitophagy can act as a tumor suppressor by eliminating impaired mitochondria that, by inducing a chronic mild oxidative stress, could promote carcinogenesis. In general, in the first steps of carcinogenesis, Parkin mutations inhibit mitophagy, while during cancer progression, abnormal regulation of BNIP3 enhances mitophagy. This adaptation process may represent a cellular strategy for increasing cancer survival [73]. For instance, it has been demonstrated that in the onset of hepatocellular carcinoma, the loss of mitophagy induces the accumulation of damaged mitochondria, promoting carcinogenesis [74].
It should be noted that alterations in mitochondrial dynamics and mitophagy are considered to be among the most important causes of mitochondrial DNA (mtDNA) release [75]. Cytosolic mtDNA fragments can translocate into the nucleus and be incorporated within nuclear DNA, contributing to genomic instability and potentially causing cancer and other diseases [76]. Interestingly, cytosolic mtDNA is a potent agonist of the cell's innate immune surveillance machinery; it can trigger an innate inflammatory response [77], enabling the recruitment of adaptor molecules/receptors-such as cyclic GMP-AMP (cGAMP) synthetase (cGAS), toll-like receptor 9 (TLR9), and the nucleotide-binding oligomerization domain-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome-which induce a type I interferon (IFN-I)-or NF-κB-mediated inflammatory response [77][78][79].
Interestingly, in a recent paper, Ziegler et al. highlighted a link between mitophagy, lysosomal integrity, and MHC class I presentation in intestinal epithelial cells (IECs). In particular, the authors demonstrated the active role of the immune system in the antitumor response in colon cancer, supporting the possibility of successfully modulating the immune response in at least some types of cancer. The hypothesis arising from these results is that the therapeutic trigger of mitophagy could stimulate antigen presentation in the tumor cells themselves, contributing to the development of an immune response against colorectal cancer [80].
Intracellular Mitochondrial Trafficking
A fundamental step in tumor progression, which enhances invasiveness and metastatic propensity, is the increase in cancer cell motility. Growth factors and cytokines regulate the cell migration process through different signaling pathways-such as MAPK and PI3K-AKT-which alter the expression of genes involved in cell polarity, morphology, cytoskeletal dynamics, and cell adhesion, increasing migratory ability [81]. The importance of the spatial distribution of mitochondria in cancer cells, and the mechanisms by which mitochondrial dynamics regulate cell migration, have only recently been brought to light. Mitochondrial trafficking has emerged as a fundamental regulator of the metastatic capacity of various tumors [26]. Indeed, our current knowledge shows that the localization of mitochondria to the leading edge favors tumor invasion by providing the ATP and metabolic intermediates necessary for the bioenergetic and biosynthetic demands of the cells. A high amount of energy is needed to power the cytoskeletal dynamics and the different molecular processes, such as the development of focal adhesions and cell protrusions essential for cell migration [26]. A recent study showed that cortical mitochondria supported membrane lamellipodia dynamics and actin cytoskeleton remodeling, resulting in increased cancer cell motility and invasion [82]. The importance of local energy production was demonstrated in both ovarian cancer cells [83] and living mouse embryonic fibroblasts (MEFs), where mitochondrial accumulation at the leading edge of the lamellipodia led to increased ATP concentration [84].
Mitochondrial localization in cancer cells can be reprogrammed depending on intracellular and extracellular signals, leading to cells changing from a highly proliferative phenotype to a highly invasive phenotype. In particular, the presence of abundant perinuclear mitochondria characterizes a highly proliferative phenotype, while mitochondrial localization to the leading edge determines a highly invasive phenotype (Figure 4A). Thus, mitochondrial re-localization at the cortical level involves a "regional" increase in oxidative metabolism to support the energy-intensive movements [84] and, in general, contributes significantly to the metabolic plasticity of cancer cells. (Fragment of the Figure 4 caption, panels B and C: microtubule-based mechanisms drive long-distance transport, performed by two MT-based molecular motors with opposite functions-kinesin (KIF)-driven anterograde transport and dynein-driven retrograde transport-while short-distance transport is driven by actin-based movement. (C) Mitochondrial movement blocking: mitochondria possess several anchoring mechanisms capable of blocking their movement; SNPH associates with the mitochondrial outer membrane and anchors mitochondria to microtubules; high calcium concentrations inhibit MT-based mitochondrial trafficking by binding to the MIRO 1/2 proteins, which prevents MIRO and KIF from interacting; in addition, mitochondrial trafficking can also be controlled by the ubiquitination of SNPH or MIRO 1/2, or by high levels of ROS production able to activate MAPK p38, whose phosphorylation of serine 176 promotes disengagement of the KIF from the microtubule tracks.)
The intracellular localization of mitochondria is the result of movements along the microtubules and anchoring to the actin filaments [9]. Protein adapters and mitochondrial receptors make the binding between mitochondria and motor proteins possible. The interaction between motor proteins, adapters, and receptors ensures targeted movements of the mitochondria and the fine-tuning of their motility [85,86]. The molecular mechanisms underlying this movement were initially described in neurons, where microtubule polarity and structural organization influence both soma-to-axon and soma-to-dendrite mitochondrial transport. Microtubule-based motor proteins such as the kinesin superfamily proteins and cytoplasmic dyneins sustain long-range mitochondrial transport in the anterograde (microtubule plus end) and retrograde (microtubule minus end) directions, respectively. In the axonal portion, the microtubules are uniformly oriented, with their minus ends facing the cell body and their plus ends pointing distally [87,88]. Although initially considered "neuronal-specific," the anterograde (from the nucleus to the periphery) and retrograde (from the periphery to the nucleus) mitochondrial movements have also been shown in other cell types, such as migrating lymphocytes [89] and tumor cells [90]. Previously published data demonstrate that the intracellular transport of mitochondria occurs mainly via the microtubule cytoskeleton, using a mechanism consisting of mitochondrial Rho GTPases (MIRO 1/2), trafficking adapter proteins that bind to kinesin (TRAK1 and TRAK2) and the motor proteins kinesin-1/3 and dynein [91,92].
Interestingly, the experimental findings of Henrichs et al. demonstrated that TRAK1 strongly increases KIF5B's processivity when the microtubule surface is crowded with a large variety of proteins; moreover, the authors suggest that the anchoring of KIF5B by TRAK1 increases the time for which KIF5B can remain paused in front of an obstacle without detaching from the microtubule [93].
In contrast, short-range mitochondrial movements depend on actin filaments and myosin motors (e.g., MYO19, MYO6, MYO5). Myosins move along actin filaments in both directions [91]. How myosins regulate movement, and how they bind to mitochondria, is poorly understood. Recently, the MIRO-dependent localization of MYO19 to the mitochondria has suggested that MIRO proteins might be active in regulating mitochondrial motility via either actin or microtubules ( Figure 4B, right panel) [94].
Mitochondrial trafficking was first described in neurons as a process supplying energy to sites of high consumption [9]. However, it can also locally fuel membrane dynamics and migration of cancer cells [82]. By exploiting the same neuronal regulators of mitochondrial motility, cancer cells can reposition the mitochondria in cortical areas, favoring invasive processes [95].
The activity of DRP1 appears to be mandatory in the mitochondrial trafficking associated with tumor chemotaxis [86], as mitochondrial fission allows for a more rapid transfer of mitochondria along the microtubules within tumor cells. Consequently, the occurrence of a link between microtubule-based mitochondrial trafficking and mitochondrial fission was suggested [96].
It is interesting to note that several mechanisms can determine the blocking of mitochondrial movement and, more generally, the movement of all intracellular organelles inside the cell. Mitochondria can be immobilized (1) by the binding of myosin to actin [96]; (2) by their anchor to microtubules via syntaphilin (SNPH) [97]; (3) by the action of calcium on microtubules [98]; and (4) by proteasomal degradation of the kinesin-1/TRAK complex ( Figure 4C) [98].
As the intracellular distribution of mitochondria can regulate tumor cell growth, motility, and metastatic capacity, the alteration of mitochondrial movement could modify cancer therapy responses. Blocking mitochondrial movement would result in a lower energy supply for cancer cells, thus preventing tumor progression and invasion. It has been shown that SNPH can block invasion in glioblastoma, as well as breast, lung, and prostate cancers [95]. Furthermore, lower levels of SNPH are correlated with tumor progression and metastatic dissemination in lung, colon, prostate, and breast cancers [95].
Changes in the intracellular levels of ROS are also able to regulate mitochondrial dynamics. Indeed, several in vitro and in vivo studies on cancer cells have reported that increased ROS production was correlated with mitochondrial membrane potential loss, mitochondrial fission, mitophagy, and apoptosis [99,100]. On the other hand, excessive fission activity can enhance ROS production [101], due to mitochondrial membrane depolarization [102]. In turn, ROS induce post-translational modifications of DRP1, MFNs, and OPA-1, with consequent damage to mitochondrial morphology and function [103]. On the other hand, lowering ROS levels leads to mitochondrial fusion [102].
Thus, the activity of ROS might be capable of increasing tumorigenesis and/or promoting cancer progression by activating signaling pathways that regulate cellular proliferation, metabolic adaptation, apoptosis resistance, chemoresistance, and cellular migration/invasion [101].
Role of Microtubules in Mitochondrial Dynamics
The movement of mitochondria along MT tracks is regulated by second messengers generated ad hoc. Within the past decade, experimental evidence has shown the key role of calcium in regulating mitochondrial movement. High calcium concentrations have been observed in many cell types to inhibit MT-based mitochondrial trafficking by binding to the MIRO1 and 2 proteins [13,16,104]. This link between calcium and MIRO prevents the latter from interacting with the motor protein KIF5 [105] ( Figure 4C). With calcium being the second messenger of a plethora of signaling pathways, mitochondrial trafficking can therefore be regulated by many factors [105].
Mitochondria, along with other organelles, constitute intracellular storage sites for calcium. Therefore, it was hypothesized that mitochondrial trafficking could be inhibited or stimulated by calcium fluctuations rather than by the absolute level of calcium [106]. From this perspective, the MIRO/KIF5 binding could represent an indicator of high local calcium levels, allowing the mitochondria to buffer it. The calcium fluxes occur in areas of high metabolic demand, such as nerve endings, or the protrusion zones and leading edge in the case of cancer cells. These areas where the mitochondria are clustered represent the cell migration fronts, and play a pro-metastatic role [107].
In addition, mitochondrial trafficking can also be controlled by the ubiquitination of SNPH or MIRO1. For instance, it has been shown that the ubiquitination of some residues of SNPH-a protein located in the OMM [108]-is necessary to allow binding with tubulin and the consequent relocation of mitochondria to specific cellular areas [105]. By contrast, phosphorylation of MIRO1 at S156 by PINK1 induces its degradation and, as a consequence, the arrest of mitochondrial movement. In tumor cells, SNPH is downregulated by oxidative stress. During oxidative stress or hypoxia, the downregulation of SNPH, acting on the mitochondrial metabolism and trafficking, could inhibit cell proliferation and stimulate the motility and invasion of tumor cells. For instance, the degradation of SNPH in hypoxic conditions induced a greater presence of cortical mitochondria in glioblastoma cells, with a consequent increase in their invasiveness [109]. Therefore, SNPH could function as a metastatic propensity regulator, thus proving to be potentially useful as a biomarker. This hypothesis would also agree with the lower levels of SNPH found in cells isolated from metastatic sites compared to those isolated from their respective primary sites.
Other important modulators of mitochondrial dynamics are the ROS, which suppress mitochondrial motility in both Ca2+-dependent and -independent manners [110,111]. A recent work has shown how ROS could also regulate mitochondrial dynamics via the MAPK p38 pathway (Figure 4C). In particular, in human fibroblasts, a high level of ROS production was able to activate p38, which promoted disengagement of the motor from the microtubule tracks via phosphorylation of the serine residue at position 176 of KIF5 [111]. This inhibited mitochondrial motility independently of any changes in calcium flux.
Moreover, it was shown that in neuronal cells under hypoxic conditions, the MIRO/TRAK complex regulated mitochondrial trafficking via its association with hypoxia-upregulated mitochondrial movement receptor (HUMMR) [112].
Metabolic and Phenotypic Consequences of Mitochondrial Transfer
Multiple studies have shown that whole functional mitochondria can be naturally transferred from a healthy cell to a recipient cell via nanotubular structures known as "tunneling nanotubes" (TNTs) (Figure 5) [113]. TNTs are short-lived cytoplasmic bridges between cells that transport various cargos in a uni- or bidirectional fashion-including cytosolic molecules, organelles such as mitochondria [114,115], or pathogens [116]. TNTs are ultrafine and very heterogeneous in length and width; they lack any attachment to the substrate, but their structure-depending on the context and the delivered cargo-is supported by cytoskeletal F-actin fibers [113] in conjunction with microtubules [117,118]. The main molecular mechanisms driving TNT formation start from the formation of membrane protrusions (filopodia-like) or the dislodgement of two previously attached cells, in both physiological and pathological environments. Each of these processes of cell-to-cell communication can lead to closed-ended or open-ended TNTs, the latter allowing cytoplasmic continuity between connected cells. The TNT-mediated intercellular transfer can occur between neighboring cells or cells not immediately in contact; it may affect the bioenergetic state of acceptor cells, depending on their metabolic requirements to favor cell proliferation and survival [119], resulting in metabolic reprogramming of connected cells [114,120]. In particular, the experimental findings of Tan et al. showed that the transfer of mtDNA from host cells to tumor cells with compromised respiratory function restores the mitochondrial respiration required for tumorigenesis in murine lung and breast tumor models [121]. These results are also supported by the recent data obtained by Bajzikova et al., which confirm the importance of mtDNA transfer from host cells to tumor cells in the reconstitution of OXPHOS, showing that pyrimidine biosynthesis dependent on respiration-linked dihydroorotate dehydrogenase (DHODH) is necessary for tumor growth, and that mitochondrial ATP generation is actually not essential for tumorigenesis [122]. For efficient mitochondrial shuttling, TNTs are formed de novo; they are transiently expressed in response to a broad range of cellular stressors [123][124][125][126][127], suggesting that TNT formation may represent a type of stress response [128]. TNT structures involved in mitochondrial transfer were observed as a heterotypic connection between non-malignant and cancer cells in many different cancer types [129,130], as well as from mesenchymal stem cells (MSCs) to differentiated cells, in damaged tissues and tumors [131]. The ability of TNTs to form between tumor cells and, at the same time, to connect these cells to the tumor microenvironment (TME), indicates a crucial role of mitochondrial trafficking in cancer progression. It has been demonstrated that tumor cells can employ mitochondrial transfer to modify their microenvironment, thus favoring tumor progression [132]. TNT-mediated acquisition of healthy mitochondria confers more aggressive phenotypic characteristics to tumor cells, such as enhanced proliferative and invasive properties and radio/chemotherapy resistance [133,134].
In tumor cells, the advantage of mitochondrial transfer benefits cell proliferation and survival, increases OXPHOS and, consequently, supports cancer metabolic plasticity [130,135]. On the other hand, restoration of basic mitochondrial activities in cancer cells via uptake of healthy mitochondria led to a significant decrease in intracellular ROS levels, suggesting a crucial role for these reactive molecules in the acquisition of chemoresistance after mitochondrial transfer [119].
Mitochondrial Dynamics and Cancer Therapy
The fundamental role of mitochondria in the different stages of carcinogenesis and in tumor maintenance has led many researchers to hypothesize that mitochondrial dynamics may represent a possible innovative therapeutic target [42,136,137].
However, before this can be realized, in-depth studies are necessary in order to shed light on some contradictions emerging from the studies carried out to date.
For instance, several experimental data have highlighted the dual role of mitophagy in the onset of cancer, based on the type and stage of the tumor and the microenvironmental context. In fact, mitophagy can promote cancer cell survival by removing damaged mitochondria, thus counteracting ROS-mediated apoptosis. On the other hand, mitophagy can act as a tumor suppressor by eliminating dysfunctional mitochondria able to promote carcinogenesis by inducing a mild chronic oxidative stress [73,138].
In aggressive tumors, such as glioblastomas and metastatic melanomas, a close link between mitophagy and tubulin alterations has been observed. In particular, in a model of glioblastoma, the α-tubulin decrease-due to genetic alteration or pharmacological treatment-induced a downregulation of BNIP3 and NIX, and inhibited the selective mitophagic removal of mitochondria. This inhibition of mitophagy resulted in decreased formation of lamellipodia and filopodia, which negatively affected tumor cell migratory ability [64,65].
As mentioned above, the highly dynamic network of mitochondria is preserved by the continuous balance between fission and fusion, which are regulated-among other factors-by DRP1, and by the MFNs and OPA1, respectively.
Although mitochondrial fusion has been correlated with chemoresistance in some cancers, most of the literature agrees that the DRP1-induced fission is necessary for the processes of invasion and metastasis in tumors such as those of the breast and thyroid, as well as in glioblastoma [33,[38][39][40][41][42]. In accordance with this, in cancer cells a surplus of fission is generally caused by upregulation of DRP1 expression, leading to the formation of fragmented mitochondria necessary for their spatial redistribution to those regions of the cell with high metabolic demands [37]. Given that DRP1 upregulation is a common event in many oncogenic transformations, it can be assumed that cancer cells may be preferentially sensitive to DRP1 inhibition. This hypothesis was confirmed via the pharmacological and genetic inhibition of DRP1, which led to decreases in the growth of glioblastomas, melanomas, hepatocellular carcinomas, and mesotheliomas, either in vitro or in vivo [137,[139][140][141]. In the MDA-MB-231 and MDA-MB-436 breast cancer cell lines, the downregulation of DRP1 or overexpression of MFNs had a similar impact in reducing cell migration and invasion. This could suggest that the inhibition of fission may have the same effect as the induction of fusion, at least in some cancers, pointing to the role of mitochondrial dynamics, rather than fission, in the metastatic process [38]. In the same vein, the observed imbalance of the fusion/fission process (i.e., with a predominance of fission) in human lung cancer cell lines could be reversed by DRP1 inhibition (or MFN2 overexpression), promoting cell cycle arrest and increasing spontaneous apoptosis [50]. Furthermore, in brain tumor cells, the inhibition of DRP1 has been reported to decrease migration and proliferation [137].
Zhao et al. also showed that mitochondrial fission was necessary for the redistribution of mitochondria to the leading edge, and that this presence enhanced the formation of lamellipodia. The mitochondrial clustering in the migration front of the cell could represent a prerequisite, or be the first step, in the migration and invasion of breast cancer cells [38]. In addition, some studies also support the idea that the inhibition of mitochondrial fragmentation might represent a useful therapeutic strategy to reduce metastatic dissemination in colon cancer cells, in which DRP1 downregulation decreased proliferation and increased apoptosis [136].
Although no specific inhibitors targeting MFNs and OPA1 have been devised to date, the hydrazone M1, which acts as a mitochondrial fusion process promoter independently of these two proteins, might be considered a promising drug for targeted cancer therapy [142].
Conversely, two drugs inhibiting DRP1 have been developed, i.e., the mitochondrial division inhibitor MDIVI-1, and the peptide P110. The former inhibits DRP1 activity [39], while the latter alters the DRP1-Fis1 interplay, decreasing DRP1's functionality in neurons [143]. Of the two agents, MDIVI-1 has been extensively studied in a cancer setting and, although it has shown cytoprotective effects in non-transformed cells-such as neurons and cardiomyocytes-it has shown some cytotoxic properties across a wide range of cancer cell lines [144], thus suggesting a certain selectivity. Moreover, a recent study indicated that MDIVI-1, in addition to inhibiting DRP1, was also able to target mitochondrial complex I in the absence of DRP1, thus directly impacting mitochondrial metabolism [145]. These data further support the role of DRP1 as a putative target of pharmacological approaches aimed at inhibiting oncogenic transformations in a wide range of cancers [137,[139][140][141].
Inhibition of DRP1 by MDIVI-1 has also been observed to promote apoptosis induced by the cytokine tumor-necrosis-factor-related apoptosis-inducing ligand (TRAIL) in human ovarian cancer cells [146]. TRAIL is a receptor-mediated inducer of apoptosis proposed for the clinical therapy of some cancers, such as pancreatic cancer, non-squamous non-small-cell lung cancer, and lymphoma [147,148]; however, as with most drugs, the resistance acquired by tumor cells limits their therapeutic effectiveness over time. Similarly, MDIVI-1 was found to be active in overcoming cisplatin resistance in primary ovarian cancer cells isolated from patients [149]. The inhibition of mitochondrial fission would therefore seem to sensitize tumor cells to antineoplastic drugs, suggesting a possible use of MDIVI-1 in combined therapy. Interestingly, in cardiovascular diseases, the inhibition of the mitochondrial fusion process has been suggested to represent a promising therapeutic strategy. In fact, MFN1- and -2-deficient cells were characterized by elevated mitochondrial fragmentation with a loss of mitochondrial membrane potential and defects in mitochondrial respiration [141,150]. Ferreira et al. demonstrated that in rat heart failure, β-II protein kinase C (βIIPKC) accumulates on the mitochondrial outer membrane and phosphorylates MFN1, resulting in the buildup of fragmented and dysfunctional mitochondria. The authors showed that the use of βIIPKC siRNA or a synthetic βIIPKC inhibitor mitigated mitochondrial fragmentation and cell death in cultured neonatal and adult cardiac myocytes [150].
As emerged from the above, the localization of mitochondria in the different areas of the cell strongly impacts the cell's proliferative and migratory capacities and, therefore, plays a fundamental role in the spreading of tumor cells.
In this regard, although initially described as neuronal-specific, SNPH is expressed in multiple non-neuronal tissues, including cancers [26,109]. A decrease in SNPH causes a considerable mitochondrial repositioning to the cortical cytoskeleton, enhancing cancer cell motility and invasion. It was demonstrated that SNPH downregulation or loss during tumor progression was correlated with poor outcomes in patients [109]. Conversely, the reintroduction of SNPH into invasive tumor cells was able to decrease metastatic dissemination in a murine model [151].
Given the role played by the binding between the mitochondria and the cytoskeleton in the regulation of mitochondrial dynamics, microtubule-targeted agents constitute a class of anticancer drugs used in the clinic [38,152]. Among the most widely used agents in the treatment of several malignancies, there are taxanes and vinca alkaloids [84,153]; their use is mainly justified by the fact that, by interfering with the formation of the mitotic spindle, they have an antiproliferative effect. However, we cannot exclude the possibility that their anticancer efficacy is also partly linked to the effect exerted on the mitochondrial dynamics.
Targeting DRP1, SNPH, or other proteins involved in mitochondrial dynamics could therefore be of great interest in the context of anti-metastatic therapy. In fact, although metastases are the leading cause of death in cancer patients, there is a scarcity of therapeutic targets to interfere specifically with tumor dissemination processes [154].
In accordance with the growing evidence of the contribution offered by mitochondrial dynamics in metastatic processes-promoting both metabolic adaptation and the migration propensity of cancer cells [155,156]-the biochemical machinery involved in these dynamics may represent an innovative therapeutic target.
Conclusions
In recent decades, it has emerged that dynamic interactions between mitochondria and the cytoskeleton are critically important for maintaining the structure and function of the mitochondrial network. The movement of mitochondria through the cytoskeleton is fundamental for the supply of energy and metabolites to areas of the cell with high energy demands, and for buffering calcium where necessary. Furthermore, the cytoskeletal network-particularly microtubules and motor proteins-plays a fundamental role in the regulation of the mitochondrial fission/fusion balance, as well as in quality control, mitochondrial turnover, and in the distribution of mitochondria during cell division.
Since cancer is a disease associated with mitochondrial dysfunction, which has a key role in carcinogenesis, as well as in tumor maintenance and progression, considering mitochondrial dynamics as an innovative therapeutic target and/or as a useful prognostic biomarker in cancer might be appropriate. In this scenario, further studies are needed in order to better understand the effects of different oncogenic signaling pathways on mitochondrial dynamics, and/or to identify additional signaling modalities that regulate mitochondrial network homeostasis in cancer cells-also as a function of the tumor microenvironmental features. Acknowledgments: English language editing and publication support services were provided by Fabio Perversi and Aashni Shah. This was supported with internal funds.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,205 | 2021-11-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
On the Dihadron Angular Correlations in Forward $pA$ collisions
Dihadron angular correlations in forward $pA$ collisions have been considered as one of the most sensitive observables to the gluon saturation effects. In general, both parton shower effects and saturation effects are responsible for the back-to-back dihadron angular de-correlations. With the recent progress in the saturation formalism, we can incorporate the parton shower effect by adding the corresponding Sudakov factor in the saturation framework. In this paper, we carry out the first detailed numerical study in this regard, and find a very good agreement with previous RHIC $pp$ and $dAu$ data. This study can help us to establish a baseline in $pp$ collisions, which contains little saturation effect, and further make predictions for dihadron angular correlations in $pAu$ collisions, which will allow us to search for the signal of parton saturation.
I. INTRODUCTION
The small-x physics framework provides a description of dense parton densities in the high-energy limit, when the longitudinal momentum fraction x of partons with respect to the parent hadron is small. It predicts the onset of the gluon saturation phenomenon [1][2][3] as a result of nonlinear QCD evolution [4,5] when the gluon density becomes very high.
Dihadron angular decorrelation in forward rapidity pA collisions, which was first proposed in Ref. [6], is reckoned as one of the most interesting observables sensitive to gluon saturation effects. There have been great theoretical [6][7][8][9][10][11][12] and experimental [13][14][15] efforts devoted to this topic over the last few years. In addition, by applying the small-x improved TMD factorization framework [16], the suppression of the forward dijet angular correlations in proton-lead versus proton-proton collisions at the LHC due to saturation effects has been predicted in Ref. [17,18]. Besides the calculations based on the saturation formalism, there are also other explanations based on the cold nuclear matter energy loss effects and coherent power corrections, as shown in Refs. [19,20].
More precise data on the dihadron angular correlations in the forward rapidity region in pAu collisions from the STAR collaboration at RHIC are expected to be released soon. The prediction due to the saturation effect shows a clear enhancement of decorrelations in pAu collisions as compared to that in pp collisions. The new data will also allow us to examine the strength of saturation effects in different $p_T$ bins and to conduct a detailed comparison between the experimental data and theoretical predictions. In addition, the pedestal due to double parton distributions observed in dAu collisions, which is considered to be a background, is expected to be much smaller in forward pAu collisions.
On the theory side, recent developments have made it possible to incorporate the so-called parton shower effect, namely the Sudakov effect, into the small-x formalism [21][22][23]. This, in particular, will enable us to go beyond the saturation-dominated region and conduct calculations of dihadron correlations in a much wider regime where both saturation effects and Sudakov effects are important. Thus, we can perform a much more comprehensive and quantitative comparison between the small-x calculation and experimental data. In general, both saturation and Sudakov effects should play important roles in dihadron (dijet) angular correlation (decorrelation) in pAu collisions. Furthermore, a similar technique has been applied to dijet and dihadron production in the central rapidity region in both pp and heavy ion collisions [24,25]. It has been demonstrated to be useful in the study of the transport coefficient of the quark-gluon plasma by comparing the angular correlations in pp and AA collisions. The Sudakov effects have also been incorporated in the recent calculation of forward dijet production in ultraperipheral heavy ion collisions at the LHC [26]. That calculation was based on the framework that interpolates between the Color Glass Condensate formalism and high energy factorization, and the Sudakov effects were included by a suitable re-weighting procedure of the events using the Sudakov form factor in a Monte Carlo simulation.
In Ref. [21,23], it has been demonstrated that the small-x effects and Sudakov effects can be simultaneously taken into account in the auxiliary $b_\perp$ space as a result of convolutions in momentum space. The saturation effect in forward pAu collisions can be factorized into various small-x unintegrated gluon distributions (UGDs), as derived in Refs. [8,9]. These UGDs include two important ingredients of saturation physics, namely small-x (non-linear) evolution and multiple interaction, which can be characterized by the saturation momentum $Q_s^2(x_g)$ and products of several scattering amplitudes (including both quadrupole and dipole type). Generally speaking, one expects that the saturation effect is stronger in the region where the gluon momentum fraction $x_g$ becomes smaller. This implies that the saturation effect is maximized in the lowest $p_T$ bin of dihadrons at a given rapidity. On the other hand, the strength of the Sudakov effect depends on the hardness of the scattering, namely the magnitude of $p_T$ of each jet prior to the fragmentation process. Therefore, one expects that the parton shower effect is relatively weaker in the low $p_T$ bins while it grows stronger for large $p_T$ bins. In dijet production, we have learnt that the angular correlation of dijets in pp collisions always becomes steeper for dijets with larger jet transverse momenta. Therefore, we expect that dihadrons in high $p_T$ bins are more sharply correlated (steeper) than those in low $p_T$ bins, since the saturation effects become weaker in high $p_T$ bins while the Sudakov effects grow only slowly with increasing $p_T$. As a result, we can expect that the curves of back-to-back dihadron angular correlation become more and more flat when one moves from large $p_T$ bins to small $p_T$ bins. The purpose of this paper is to conduct a comprehensive phenomenological study of the dihadron angular correlations by comparing with all the available data and making predictions for upcoming data.
To take into account the small-x effect, we use the simple Golec-Biernat-Wusthoff (GBW) model [27] as a first step, since it is easy to implement and at the same time contains the relevant saturation physics. In principle, one should use a more sophisticated approach which employs the solution [28] to the non-linear small-x evolution equations [4,5,29,30] for the various types of gluon distributions to compute the correlation, as in Ref. [31]. This is, however, much more numerically demanding when combined with the Sudakov resummation. Therefore, we leave this for future work.
This paper is organized as follows. In Sec. II, we summarize the theoretical formulas for dihadron production in the forward rapidity region and discuss details of the numerical implementation of the Sudakov factor in the small-x formalism. In Sec. III, we show the comparison between our numerical results and the experimental data measured at RHIC, and provide our prediction for the upcoming data in pAu collisions. We summarize our findings in Sec. IV.
II. FORWARD RAPIDITY DIHADRON PRODUCTION IN PA COLLISIONS
Following Refs. [8][9][10], we study forward dihadron production in the so-called hybrid dilute-dense factorization, which is motivated by the fact that the projectile proton is dilute while the target nucleus (or proton) is rather dense in such a kinematical region. For the quark-initiated channel, the back-to-back dihadron production formula can be written as the convolution of the large-x collinear quark distribution from the projectile proton, the small-x UGDs from the target nucleus, the hard factor, and the final-state fragmentation functions, where $x = \left(|k_{1\perp}|e^{y_1}+|k_{2\perp}|e^{y_2}\right)/\sqrt{s}$, $k_{1\perp} = p_{1\perp}/z_1$ and $k_{2\perp} = p_{2\perp}/z_2$. We use $y_1, p_{1\perp}$ and $y_2, p_{2\perp}$ to represent the rapidities and transverse momenta of the trigger hadron and associate hadron, respectively. Here $q(x)$ is the collinear quark distribution function; we use CT14 [32] from the CTEQ group in the numerical calculation. $D_{h/q}(z)$ and $D_{h/g}(z)$ are the collinear parton fragmentation functions; in the numerical evaluation, the AKK08 [33] fragmentation functions are used. The factorization scale $\mu$ is set to $\mu_b$ (defined below) in the Sudakov resummation framework, in order to reach a convenient and compact resummation formula. As is common practice, the $b_\perp$ dependence of the factorization scale $\mu$ is also taken into account when the numerical integration over $b_\perp$ is carried out. The hard factor $H_{qg}$ and the small-x gluon distributions are defined as in Refs. [8,9]. Here we denote $S_{x_g}(b_\perp)$ and $\widetilde S_{x_g}(b_\perp)$ as the small-x expectation values of fundamental and adjoint Wilson loops with transverse separation $b_\perp$, respectively, and $S_\perp$ as the averaged transverse area of the target hadron. In principle, besides the dipole amplitude, quadrupole scattering amplitudes also appear in the production of dihadrons, as demonstrated in Refs. [8,9]. We have used the so-called dipole approximation to write the quadrupole amplitude in terms of dipole amplitudes in the adjoint representation. For the gluon-initiated channel, the corresponding cross section takes an analogous form, with the hard factor $H_{gg}$ and the corresponding small-x gluon distributions. We have also computed the $gg \to q\bar q$ channel, which is found to be numerically negligible. If the corresponding Sudakov factors $S_{\rm Sud}(b_\perp)$ are set to zero, the above expressions reduce to the results originally derived in Refs. [8,9] and numerically evaluated in Ref. [10]. The Sudakov factors come from the resummation of soft-collinear gluon radiation and can be written as the sum of a perturbative and a non-perturbative piece, $S^i_{\rm Sud}(b_\perp) = S^i_{\rm p}(b_\perp) + S^i_{\rm np}(b_\perp)$, for parton $i$. Since we use small-x unintegrated gluon distributions for parton b, which may already contain some non-perturbative information about the target nuclei (protons) at low x, we do not include a non-perturbative Sudakov factor associated with the incoming small-x gluon (active parton b) in $S^i_{\rm np}$. In addition, according to the derivation in Ref. [21], the single-logarithmic term, known as the B-term, in the perturbative part of the Sudakov factor for this incoming small-x gluon is absent. The perturbative Sudakov factors for the $q+g \to q+g$ and $g+g \to g+g$ channels take the standard form derived in Refs. [21][22][23].
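Since the display equations of this section did not survive extraction, the schematic expression below is included only to illustrate the structure spelled out in the prose above (collinear quark distribution, hard factor, small-x gluon distribution in $b_\perp$ space, Sudakov factor, and fragmentation functions); the symbol $\mathcal{F}^{qg}_{x_g}$ is generic notation for the relevant small-x gluon distribution, and the expression is an illustrative sketch, not the exact formula of Refs. [8,9]:
$$
\frac{d\sigma^{pA\to h_1 h_2 X}}{dy_1\,dy_2\,d^2p_{1\perp}\,d^2p_{2\perp}}
\;\sim\; \sum_{q}\int \frac{dz_1}{z_1^{2}}\,\frac{dz_2}{z_2^{2}}\,
D_{h_1/q}(z_1)\,D_{h_2/g}(z_2)\; x\,q(x,\mu_b)
\int \frac{d^2 b_\perp}{(2\pi)^{2}}\, e^{i q_\perp\cdot b_\perp}\,
H_{qg}\,\mathcal{F}^{qg}_{x_g}(b_\perp)\, e^{-S^{qg}_{\rm Sud}(b_\perp)},
\qquad q_\perp \equiv k_{1\perp}+k_{2\perp}.
$$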
For the non-perturbative Sudakov factor, we employ the parameterization of Refs. [34,35], with $g_1 = 0.212$, $g_2 = 0.84$, and $Q_0^2 = 2.4\ \mathrm{GeV}^2$. As found in Refs. [21][22][23], in order to get rid of terms associated with collinear gluon splittings, it is most convenient to set the factorization scale $\mu = \mu_b$ for both the collinear parton distributions and the fragmentation functions in the resummed formula. Since an arbitrary number of soft gluons is resummed into the Sudakov factor, it becomes difficult to recover the exact kinematics. In practice [24,25], we can approximately write $x = \frac{k_\perp}{\sqrt{s}}\,(e^{y_1}+e^{y_2})$ and $x_g = \frac{k_\perp}{\sqrt{s}}\,(e^{-y_1}+e^{-y_2})$ with $k_\perp \equiv \max[k_{1\perp},k_{2\perp}]$. In addition, the hard scale is then determined as $Q^2 = x\,x_g\,s = k_\perp^2\,(2+e^{y_1-y_2}+e^{y_2-y_1})$. In principle, $Q$ should be much larger than the transverse momentum imbalance of the dijet pair, $q_\perp \sim 1/b_\perp > 1/b_{\max}$. In the current RHIC kinematics this is not exactly the case ($Q \sim 4$ to $10$ GeV), which means that the effect of the non-perturbative Sudakov factor is not completely negligible, in contrast to high-energy dijet production at the LHC [24]. We are also aware of the issue of non-universality in dijet production [36,37], which implies that the non-perturbative Sudakov factors in forward dihadron production may differ from those used in DIS or Drell-Yan processes. We rely on a numerical fit in pp collisions to determine the size of the non-perturbative Sudakov factors in forward dihadron production.
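To make the kinematic approximations above concrete, the following minimal Python sketch evaluates $x$, $x_g$ and $Q^2$ from the measured hadron momenta. The function name, the RHIC energy $\sqrt{s}=200$ GeV, and the example numbers are illustrative choices, not values taken from the paper.

```python
import numpy as np

def dihadron_kinematics(p1T, y1, z1, p2T, y2, z2, sqrt_s=200.0):
    """Approximate parton-level kinematics (hypothetical helper following the text)."""
    k1T, k2T = p1T / z1, p2T / z2            # parton transverse momenta before fragmentation
    kT = max(k1T, k2T)                        # k_perp = max[k1_perp, k2_perp]
    x = kT / sqrt_s * (np.exp(y1) + np.exp(y2))      # projectile parton momentum fraction
    x_g = kT / sqrt_s * (np.exp(-y1) + np.exp(-y2))  # target small-x gluon momentum fraction
    Q2 = x * x_g * sqrt_s**2                  # Q^2 = x x_g s = kT^2 (2 + e^{y1-y2} + e^{y2-y1})
    return x, x_g, Q2

# Example with forward-rapidity values (assumed, for illustration only)
print(dihadron_kinematics(p1T=2.5, y1=3.0, z1=0.7, p2T=1.5, y2=3.2, z2=0.7))
```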
As mentioned above, for the sake of simplicity we employ the GBW model [27] with a Gaussian form for the scattering amplitudes in this paper. The dipole amplitudes $S_{x_g}(b_\perp)$ and $\widetilde S_{x_g}(b_\perp)$ are then given by Gaussians in $b_\perp$, and the various relevant gluon distributions can then be cast into compact analytic forms. As shown above, the dihadron production process in the dilute-dense factorization involves several different types of gluon distributions. These distributions are related to the gluon distribution defined in inclusive DIS; however, they are in fact different types of distributions with various forms of gauge links.

Figure 1. Normalized forward dihadron angular correlation compared with the experimental data measured by the STAR collaboration [13]. Both the leading and associate hadrons are in the forward rapidity region ($2.5 < y < 4$). The pedestal has not been taken into account in the theoretical curves for the dAu collisions.
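Before turning to the numerical results, the sketch below illustrates, under stated assumptions, the $b_\perp$-space construction described above: a Fourier-Bessel transform of the dipole amplitude multiplied by a Sudakov suppression gives the distribution in the dihadron transverse-momentum imbalance $q_\perp$. The Gaussian GBW forms, the $C_A/C_F = 9/4$ rescaling of the adjoint saturation scale, and the purely quadratic "Sudakov" stand-in are assumptions made for illustration; they are not the exact expressions used in the paper.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

def S_fund(b, Qs2):
    """Fundamental-representation GBW dipole S_{x_g}(b): assumed Gaussian form."""
    return np.exp(-Qs2 * b**2 / 4.0)

def S_adj(b, Qs2):
    """Adjoint-representation dipole: saturation scale rescaled by CA/CF = 9/4 (assumption)."""
    return np.exp(-(9.0 / 4.0) * Qs2 * b**2 / 4.0)

def sudakov_stand_in(b, g1=0.212):
    """Crude stand-in for the full perturbative + non-perturbative Sudakov factor:
    only a quadratic term with the g1 value quoted in the text is kept."""
    return g1 * b**2

def dN_dqT(qT, Qs2=1.0, adjoint=False):
    """q_perp (imbalance) distribution as a Fourier-Bessel transform in b_perp space."""
    dipole = S_adj if adjoint else S_fund
    integrand = lambda b: b * j0(qT * b) * dipole(b, Qs2) * np.exp(-sudakov_stand_in(b))
    val, _ = quad(integrand, 0.0, 30.0, limit=200)
    return val

# Larger Qs2 (denser target) -> broader q_perp distribution -> flatter away-side peak
for Qs2 in (0.5, 1.0, 2.0):
    print(Qs2, [round(dN_dqT(q, Qs2), 4) for q in (0.5, 1.0, 2.0)])
```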
III. NUMERICAL RESULTS
Previous experimental measurements [13][14][15] and theoretical calculations [7,10,11] studied the coincidence probability C(∆φ), which is defined as the ratio of the dihadron yield to the single-trigger hadron yield; the trigger hadron yield (cross section) is used as the normalization. In this paper, we propose studying the self-normalized angular correlation in the back-to-back region. The advantage of the self-normalized correlation is that one can avoid the uncertainties and subtleties introduced by the single-trigger hadron yield in the small-x formalism (see, for example, the discussion in Refs. [38][39][40]). As a matter of fact, this has become the common practice at the LHC for back-to-back dijet and photon-jet angular correlation measurements. Therefore, in the following, we adopt this idea and normalize the angular correlation in the back-to-back region for both the theoretical curves and the experimental data.
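As a concrete illustration of what this self-normalization means in practice, the short sketch below rescales a $\Delta\phi$ histogram so that it integrates to unity over a back-to-back window; the window boundaries and the toy histogram are arbitrary choices for illustration, not the ones used in the analysis.

```python
import numpy as np

def self_normalize(dphi_centers, counts, window=(2.0, np.pi)):
    """Divide a Delta-phi histogram by its integral over the back-to-back window."""
    bin_width = dphi_centers[1] - dphi_centers[0]          # assumes uniform binning
    in_window = (dphi_centers >= window[0]) & (dphi_centers <= window[1])
    return counts / (counts[in_window].sum() * bin_width)

dphi = np.linspace(0.0, np.pi, 25)
toy_counts = 1.0 + 3.0 * np.exp(-(dphi - np.pi) ** 2 / 0.3)   # toy away-side peak
print(self_normalize(dphi, toy_counts)[-3:])
```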
With the Sudakov factor, we can now not only describe the dAu data, in which the saturation effects are dominant, but also naturally explain the width of the back-to-back correlation data measured in pp collisions with $Q^2_{s,p} = c(b)\,Q^2_{s,\rm GBW}(x)$ and $c(b) = 0.25$. Here we use the profile parameter $c(b)$ to take into account the fact that collisions are mostly peripheral in pp collisions. A similar parametrization has also been used in single forward hadron production in pp collisions [41]. The GBW saturation momentum is defined as $Q^2_{s,\rm GBW}(x) \equiv (x/x_0)^{-\lambda}\ \mathrm{GeV}^2$ with $x_0 = 3.04 \times 10^{-4}$ and $\lambda = 0.288$. In addition, as explained earlier, due to the non-universality of dijet production, we expect that the strength of the non-perturbative Sudakov factor could be different for this process. As shown in Fig. 1, we find that we can explain the forward dihadron back-to-back angular correlations in pp collisions with 3.3 times the non-perturbative Sudakov factor fitted from deep inelastic scattering (DIS) and the Drell-Yan process.
Using the same parametrizations, we further perform the numerical calculation for the dihadron angular correlation in the forward rapidity region in peripheral and central dAu collisions, and compare with the experimental data measured by the STAR collaboration [13] in Fig. 1. The saturation scale in pA or dA collisions is obtained by scaling up the GBW proton saturation scale [3,27], with $c(b) = 0.85$ and $0.45$ for central and peripheral collisions, respectively [10]. For minimum-bias events, we use $c(b) = 0.56$, which is roughly in between the peripheral and central values.
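A minimal sketch of the saturation scales implied by these numbers is given below; the $A^{1/3}$ enhancement for a gold target is an assumption standing in for the exact pA/dA expression, which is not reproduced in the text, and the value of $x$ in the example is arbitrary.

```python
X0, LAM = 3.04e-4, 0.288                      # GBW parameters quoted in the text

def Qs2_GBW(x):
    """GBW proton saturation momentum squared, in GeV^2."""
    return (x / X0) ** (-LAM)

def Qs2_target(x, system="pp"):
    """Target saturation scale with the centrality profile factors c(b) from the text.
    The A^(1/3) factor for a gold nucleus (A = 197) is an illustrative assumption."""
    c_of_b = {"pp": 0.25, "dAu_peripheral": 0.45, "dAu_minbias": 0.56, "dAu_central": 0.85}
    a_third = 1.0 if system == "pp" else 197.0 ** (1.0 / 3.0)
    return c_of_b[system] * a_third * Qs2_GBW(x)

for system in ("pp", "dAu_peripheral", "dAu_minbias", "dAu_central"):
    print(system, round(Qs2_target(x=1e-3, system=system), 3))
```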
In pp collisions, we find that the Sudakov and saturation effects are equally important. Therefore, the addition of the Sudakov factor is essential to describe the back-to-back angular correlation in forward dihadron production in pp collisions in the dilute-dense factorization. In dAu (pA) collisions (especially the central collisions), the saturation effects become the dominant mechanism for the broadening of the away-side peak, since the saturation scale is enlarged by a factor of $A^{1/3}$ for large nuclei. Nevertheless, in order to make more reliable predictions for various transverse momentum ranges of dihadron production, it is necessary to take into account the Sudakov effect.

Figure 2. Normalized forward and near-forward dihadron angular correlation compared with the experimental data measured by the STAR collaboration [14]. The trigger $\pi^0$ is in the forward rapidity region ($2.5 < y < 4$) and the associate $\pi^0$ is in the near-forward rapidity region ($1.1 < y < 1.9$).
We also perform the numerical calculation for the dihadron angular correlation in the forward and near-forward rapidity region and compare with the experimental data [14] in Fig. 2. As expected, the Sudakov effect is the dominant effect, while the small-x effect is negligible since x g is not sufficiently small in this kinematical region. We have also checked the dihadron correlation between forward trigger hadron and middle rapidity associate hadron [14], and find the same conclusion.
Finally, we make predictions for several transverse momentum bins, both for trigger and associate particles, as shown in Fig. 3 for both pp and pAu collisions at RHIC. As we can see in the plots, by comparing solid (or dashed) curves with different colors, which correspond to different $p_T$ of the trigger particle, we find that the correlation curves become flatter when we decrease the transverse momentum. Despite the fact that the strength of the perturbative Sudakov factor increases with $p_T$, partons with larger transverse momenta are less likely to be deflected; therefore, the resulting imbalance distribution is also less likely to be broadened. This is the reason why the curve for the bin with large transverse momentum is steeper than that for the small $p_T$ bin. Furthermore, by comparing the solid and dashed curves with the same color, we see that the back-to-back dihadrons are always more decorrelated in pAu collisions than in pp collisions. This is understood as originating from the larger saturation effects in the nucleus target.
IV. CONCLUSIONS
In this paper, we have carried out a comprehensive study of forward rapidity dihadron angular correlations in both pp and dAu (pA) collisions at RHIC, by using the small-x formalism with parton shower effects. This new framework allows us to describe the forward dihadron angular correlation in pp collisions, where both the small-x effect and the Sudakov effect are important. By incorporating the parton shower effect, a very good agreement with all the available data is obtained, and a prediction for the upcoming data collected in pAu collisions at RHIC is also provided. Using the results in pp collisions as the baseline, we can reliably study the saturation effect, which accounts for the difference between the angular correlations in pp and pAu collisions, and therefore provide robust predictions. This will allow us to systematically study the signature of gluon saturation at RHIC.
"Physics"
] |
Exploring the SARS-CoV-2 Proteome in the Search of Potential Inhibitors via Structure-Based Pharmacophore Modeling / Docking Approach
To date, the SARS-CoV-2 infectious disease, named COVID-19 by the World Health Organization (WHO) in February 2020, has caused millions of infections and hundreds of thousands of deaths. Despite the scientific community's efforts, there are currently no approved therapies for treating this coronavirus infection. The process of new drug development is expensive and time-consuming, so drug repurposing may be the ideal solution to fight the pandemic. In this paper, we selected the proteins encoded by SARS-CoV-2 and, using homology modeling, we identified high-quality models of the proteins. A structure-based pharmacophore modeling study was performed to identify the pharmacophore features for each target. The pharmacophore models were then used to perform a virtual screening against the DrugBank library (investigational, approved and experimental drugs). Potential inhibitors were identified for each target using XP docking and induced fit docking. MM-GBSA was also performed to better prioritize potential inhibitors. This study will provide new, important comprehension of the crucial binding hot spots usable for further studies on COVID-19. Our results can be used to guide supervised virtual screening of large commercially available libraries.
Introduction
Coronaviruses (CoVs) are among the major pathogens that primarily target the human respiratory system and have caused previous outbreaks, such as the severe acute respiratory syndrome (SARS)-CoV and the Middle East respiratory syndrome (MERS)-CoV. The novel coronavirus SARS-CoV-2 has become a pandemic threat to public health. It causes a respiratory disease with fever, fatigue, dry cough, muscle aches and shortness of breath, and in some instances leads to pneumonia [1]. The SARS-CoV-2 genome comprises 29,903 nucleotides, with 10 Open Reading Frames (ORFs). The 3′-terminal regions encode the structural viral proteins, whereas the 5′-terminal ORF1ab encodes two viral replicase polyproteins, pp1a and pp1ab. The proteolytic cleavage of pp1a and pp1ab produces 16 nonstructural proteins (nsp1 to nsp16). Among these, there are nsp3, the papain-like protease (PLpro), and nsp5, the 3-chymotrypsin-like protease (3CLpro, also known as the main protease, Mpro). The viral polyprotein processing is essential for the maturation and infectivity of the virus (Figure 1) [2]. Because of these crucial roles, the two proteases are important targets for antiviral drug design. Moreover, the virus encodes other proteins that could be potential targets of antiviral drugs. The mature proteins of SARS-CoV-2 are: host translation inhibitor nsp1 (nsp1), nonstructural protein 2 (nsp2), papain-like proteinase (PLpro), nonstructural protein 4 (nsp4), 3C-like proteinase (3CLpro), nonstructural protein 6 (nsp6), nonstructural protein 7 (nsp7), nonstructural protein 8 (nsp8), nonstructural protein 9 (nsp9), nonstructural protein 10 (nsp10), RNA-directed RNA polymerase (Pol/RdRp), helicase (Hel), guanine-N7 methyltransferase (ExoN/nsp14), uridylate-specific endoribonuclease (NendoU/nsp15), 2'-O-ribose methyltransferase (nsp16), Spike glycoprotein (S glycoprotein), protein 3a, Envelope small membrane protein (E protein), Membrane protein (M protein), protein 7a, protein 7b, nucleoprotein (NC), and the ORF10 protein. These proteins can form hetero-oligomeric complexes such as: the nsp7/nsp8 hetero-oligomeric complex; the nsp7/nsp8/Pol hetero-oligomeric complex; the nsp10/nsp14 hetero-oligomeric complex; the nsp10/nsp16 hetero-oligomeric complex; and the Spike glycoprotein/hACE2 hetero-oligomeric complex. Anti-coronavirus therapies can be split into two main approaches: the first is to act at the level of the human immune system or human cells, and the other is to focus on the coronavirus itself [3]. In exploring novel therapies for COVID-19, researchers are using computational approaches to aid in the discovery of potential candidates [4]. In particular, in silico drug repurposing, also named drug repositioning, is a strategy used to identify novel uses for existing approved and investigational drugs. This strategy offers numerous advantages over traditional drug development pipelines, which risk failure in preclinical or early-stage clinical trials due to safety and/or toxicological issues. On the contrary, the drug repurposing strategy reduces this risk by using drugs that have demonstrated safety records from previous trials. The real advantage of drug repurposing is that preclinical and early-stage clinical trials do not need to be repeated. This results in cost reductions compared to traditional drug development [5][6][7][8][9][10][11][12][13][14][15][16][17][18]. The number of in silico studies on drug repositioning against SARS-CoV-2 has grown rapidly in recent months.
A major part of these studies is focused on the repurposing of approved and investigational drugs against the 3CLpro or Mpro by using both ligand-based and structure-based approaches. Structure-based approaches rely on different docking analyses [19][20][21][22][23][24][25][26][27][28][29]. In another work, Battisti and coworkers used two different approaches, docking and pharmacophore modeling combined with molecular dynamics, to perform virtual screening of a large database of compounds on 10 different SARS-CoV-2 proteins [30]. To our knowledge, Touret and coworkers performed, to date, the only in vitro screening of an FDA-approved chemical library, which revealed potential inhibitors of SARS-CoV-2 replication [21]. Nevertheless, the identification of potential inhibitors is still challenging for all the researchers involved in the field. In this study, a computational analysis of the proteins encoded by the SARS-CoV-2 genes was performed. Such an analysis was used as a starting point for a druggability assessment and a computational drug repurposing workflow. First, high-quality protein structures were built employing homology modeling or exploiting existing experimental structures. Starting from the models, a computational assessment was done to find a druggable binding pocket for those proteins whose catalytic site is not known in the literature. The best druggable sites found in this analysis, together with the catalytic sites reported in the literature, were then used to build structure-based pharmacophore models. In the end, these models were used to screen the DrugBank library (approved and investigational drugs) [31] as a first screening approach.
Library Preparation
A total of 8752 experimental, investigational and approved molecules were downloaded from the DrugBank database (www.drugbank.ca). First, the database molecules were prepared using Schrödinger LigPrep v. 2018-4. The force field adopted was OPLS3e, and Epik [32] was selected as the ionization tool at pH 7.0 ± 2.0. Tautomer generation was enabled and the maximum number of conformers generated was set at 32. The resulting database was prepared as a pharmacophore screening database, in *.ldb format, through idbgen (a tool included in the LigandScout 4.3 [17] package), which selects the best (lowest-energy) conformation of each ligand among the up to 200 that the application can calculate. Tautomers were treated as separate molecules, and molecules that were duplicated or whose conformer calculation failed were eliminated.
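The preparation above relies on proprietary Schrödinger and LigandScout tools. As a rough illustration of the same steps (tautomer enumeration, conformer generation, discarding molecules whose conformer calculation fails), the following is a minimal sketch using the open-source RDKit toolkit; the file names, random seed and use of the UFF force field are assumptions and do not reproduce the LigPrep/Epik/idbgen settings.

```python
# Minimal sketch (RDKit) of analogous library-preparation steps; the study itself
# used Schrodinger LigPrep/Epik and LigandScout idbgen. File names are placeholders.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.MolStandardize import rdMolStandardize

enumerator = rdMolStandardize.TautomerEnumerator()
writer = Chem.SDWriter("drugbank_prepared.sdf")            # hypothetical output file

for mol in Chem.SDMolSupplier("drugbank_raw.sdf"):          # hypothetical input file
    if mol is None:                                         # skip unreadable records
        continue
    for taut in enumerator.Enumerate(mol):                  # tautomers kept as separate molecules
        taut = Chem.AddHs(taut)
        # generate up to 32 conformers and keep only the lowest-energy one
        conf_ids = AllChem.EmbedMultipleConfs(taut, numConfs=32, randomSeed=42)
        if not conf_ids:
            continue                                        # conformer generation failed -> drop
        energies = AllChem.UFFOptimizeMoleculeConfs(taut)   # list of (not_converged, energy)
        best = min(range(len(energies)), key=lambda i: energies[i][1])
        writer.write(taut, confId=conf_ids[best])
writer.close()
```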
For the 3C-like protease (3CLpro), the crystal structure of the COVID-19 main protease (PDB ID: 6LU7) was available. The structure of the papain-like protease (PLpro) of the SARS virus (PDB ID: 3E9S) was used as a template for the human coronavirus papain-like protease model (82.86% sequence identity). This was the best available experimental structure at the time of the study (25 March 2020). On 27 May 2020, the crystal structure of PLpro of SARS-CoV-2 was released (PDB ID: 6WZU). We superimposed our model and the experimental structure. The RMSD value of 3.99 Å shows that the two structures are nearly identical except for a few residues in the C-terminal region (see Supplementary Information). For guanine-N7 methyltransferase (nsp14) we used as template the SARS-related coronavirus structure (PDB ID: 5C8S), which shows 95.07% sequence identity. For uridylate-specific endoribonuclease (NendoU/nsp15), we used the experimental structure reported in the Protein Data Bank [34] (PDB ID: 6W01). The crystal structure of nsp4 from mouse hepatitis virus A59 (PDB ID: 3VCB) was used as a template for SARS-CoV-2 nsp4 (61.36% sequence identity). The crystal structure of the SARS-CoV supercomplex of nonstructural proteins (PDB ID: 2AHM) was chosen as a template for the nsp7/nsp8 supercomplex (97.86% sequence identity). For nsp9, the structure of nsp9 from SARS coronavirus (PDB ID: 1UW7) was used as a template; it shares a sequence identity of 97.35%. The X-ray structure of SARS coronavirus nsp7/8/12 (PDB ID: 6NUR) was selected as a template for the nsp7/nsp8/nsp12 hetero-oligomeric complex (96.70% sequence identity). The crystal structure of the SARS coronavirus helicase (PDB ID: 6JYT) was used as a template for the SARS-CoV-2 helicase (Hel); it shows a high sequence identity (99.83%). On 29 July 2020, the experimental structure of the SARS-CoV-2 helicase (PDB ID: 6ZSL) was released. The superposition of our model and the experimental structure shows an RMSD value of 4.17 Å, indicating nearly identical structures except for some loops (see Supplementary Information). The crystal structure of the nsp16/nsp10 SARS coronavirus complex (PDB ID: 2XYQ) was chosen as a template for the model of 2'-O-ribose methyltransferase (nsp16), with 93.45% sequence identity. The models obtained and the PDB structures were refined using the protein preparation wizard tool of the Maestro Suite software [35]. This tool allowed optimization of the protein structures, including missing loops, side chains and hydrogens, optimization of the protonation state in a pH range of 7.0 ± 2.0, and analysis of atomic clashes. For PDB structures containing co-crystallized ligands, Epik [32] was used to predict the ionization and tautomeric states of the ligand, while PROPKA was used to check the protonation state of ionizable protein groups. The proteins were refined using restrained minimization with OPLS3e as the force field.
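For the two cases above where a later experimental structure became available (PLpro/6WZU and helicase/6ZSL), the reported check is a superposition and RMSD comparison. The following is a minimal sketch, assuming Biopython and matching residue numbering between model and experimental structure; the file names are placeholders, and the paper does not state which tool was used for the original overlays.

```python
# Minimal sketch (Biopython) of superimposing a homology model on a later experimental
# structure and reporting the C-alpha RMSD. Matching residues purely by (chain, number)
# is a simplification that assumes identical numbering in both files.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
model = parser.get_structure("model", "plpro_model.pdb")[0]   # hypothetical model file
exp = parser.get_structure("exp", "6wzu.pdb")[0]              # experimental structure

model_ca = {(res.get_parent().id, res.id[1]): res["CA"]
            for res in model.get_residues() if "CA" in res}
exp_ca = {(res.get_parent().id, res.id[1]): res["CA"]
          for res in exp.get_residues() if "CA" in res}

common = sorted(set(model_ca) & set(exp_ca))                  # residues present in both
sup = Superimposer()
sup.set_atoms([exp_ca[k] for k in common], [model_ca[k] for k in common])
print(f"C-alpha RMSD over {len(common)} residues: {sup.rms:.2f} A")
```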
Pharmacophore Modeling
Pharmacophore model generation was performed using LigandScout 4.3. The structures were imported into LigandScout. The 3C-like proteinase, PLpro, nsp14, nsp15 and nsp16-nsp10 structures are protein-ligand complexes, while nsp4, nsp9, nsp10-nsp14, helicase, the nsp7-nsp8 supercomplex and nsp12 are targets without a bound ligand. For protein-ligand complexes, a structure-based pharmacophore model was generated [31]. When a model contained many features, only those relevant for binding were retained to improve the performance of the virtual screening; in other cases, features were omitted until hits were found. The calculate-pockets tool was used to find binding pockets for the structures without a bound ligand. A grid was calculated over the entire protein structure and grid points were evaluated according to their buriedness and their number of neighboring grid points. Isocontour surfaces were generated. Then, a model was created by selecting the nature and number of six features according to the features shown in the protein-ligand complexes, using "Create Apo Site Grids". Next, a pharmacophore model was generated for each target. The obtained pharmacophore models were used as queries to screen the DrugBank library. For apo proteins, such an approach allows evaluating whether a putative binding site is suitable for ligand binding.
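LigandScout's structure-based models encode 3D feature geometry, which cannot be reproduced in a few lines. As a loose, ligand-side analogue of using feature counts (HBA, HBD, hydrophobic) as a screening query, the following is a minimal RDKit sketch; the required counts and the file name are illustrative assumptions, not the published models.

```python
# Minimal sketch (RDKit) of a crude ligand-side feature-count pre-filter; the actual
# screening used full 3D structure-based pharmacophore models in LigandScout 4.3.
import os
from rdkit import Chem, RDConfig
from rdkit.Chem import ChemicalFeatures

factory = ChemicalFeatures.BuildFeatureFactory(
    os.path.join(RDConfig.RDDataDir, "BaseFeatures.fdef"))

REQUIRED = {"Donor": 2, "Acceptor": 2, "Hydrophobe": 1}        # illustrative 2 HBD / 2 HBA / 1 H query

def matches_feature_counts(mol):
    counts = {}
    for feat in factory.GetFeaturesForMol(mol):
        counts[feat.GetFamily()] = counts.get(feat.GetFamily(), 0) + 1
    return all(counts.get(fam, 0) >= n for fam, n in REQUIRED.items())

hits = [m for m in Chem.SDMolSupplier("drugbank_prepared.sdf")  # hypothetical file
        if m is not None and matches_feature_counts(m)]
print(f"{len(hits)} molecules pass the feature-count pre-filter")
```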
Pharmacophore screening was used prior to docking for two reasons. First, it is a rapid screening technique, which is crucial in the first stage of a virtual screening cascade; indeed, it is very commonly used as a first step in virtual screening campaigns on large databases [36]. Second, the structure-based pharmacophore uses a static conformation of the protein side chains, while the docking funnel used here was set up with gradually increasing precision and a final IFD step that allows the user to simulate side-chain induced fit based on the ligand.
Docking
The hits identified by the virtual screening were submitted to a docking study using Glide [37] in standard precision (SP) with the OPLS3e [38] force field. The crystal structures were optimized using the protein preparation wizard in Maestro [35], adding bond orders and hydrogen atoms to the crystal structure using the OPLS3e force field. Next, Prime [39] was used to fix missing residues or atoms in the protein and to remove co-crystallized water molecules. The protonation states of the protein and the ligand were evaluated at pH 7.2 ± 0.2 using Epik 3.1 [32]. The hydrogen bonds were optimized by reorientation of hydroxyl groups, thiol groups and amide groups. In the end, the systems were minimized with an RMSD convergence value of 0.3 Å [40,41]. For protein-ligand complexes, the grid boxes were built considering the ligand as the centroid. In contrast, for apoproteins, the amino acid residues previously identified by LigandScout as crucial were used for centering the docking grid. The docking study was performed using the Glide docking tool in extra precision (XP) with no constraints. Van der Waals radii were scaled by 0.8 with a partial charge cutoff of 0.15, and flexible ligand sampling was used. Bias sampling of torsions, penalization of amides with nonplanar conformations and Epik state penalties were added to the docking score.
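The grid boxes above are centered on the co-crystallized ligand (or on crucial residues for apoproteins). A minimal sketch of computing such a box center from the ligand coordinates is shown below, written out as a generic Vina-style box file since the actual grids were generated inside Glide; the PDB file name, the HETATM-based ligand selection and the 20 Å box size are assumptions.

```python
# Minimal sketch of centering a docking box on the co-crystallized ligand's centroid.
# A real run would select the specific ligand residue name instead of all hetero groups.
from Bio.PDB import PDBParser

structure = PDBParser(QUIET=True).get_structure("cplx", "6lu7.pdb")
lig_atoms = [atom for res in structure.get_residues()
             if res.id[0].startswith("H_")            # hetero residues (co-crystallized ligand)
             for atom in res]

xs, ys, zs = zip(*(a.coord for a in lig_atoms))
center = (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))

with open("dock_box.conf", "w") as fh:                # generic Vina-style box definition
    fh.write(f"center_x = {center[0]:.3f}\n"
             f"center_y = {center[1]:.3f}\n"
             f"center_z = {center[2]:.3f}\n")
    fh.write("size_x = 20\nsize_y = 20\nsize_z = 20\n")
```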
Induced-Fit Docking and MM-GBSA
The induced-fit docking protocol (IFD), developed by Schrödinger [24], is a method for modeling the conformational changes induced by ligand binding. This protocol models the induced-fit docking of one or more ligands using the following steps, as also reported in [42]. The protocol starts with an initial docking of each ligand using a softened potential (van der Waals radii scaling). Then, a side-chain prediction within a given distance of any ligand pose (5 Å) is performed. Subsequently, the same set of residues and the ligand are minimized for each protein/ligand complex pose. After this stage, the receptor structure in each pose reflects an induced fit to the ligand structure and conformation. Finally, the ligand is rigorously docked, using XP Glide, into the induced-fit receptor structure.
IFD was performed using the standard protocol, and the OPLS3e force field was chosen [38]. The receptor box was centered on the co-crystallized ligands or on the crucial residues identified within the binding site. During the initial docking procedure, the van der Waals scaling factor was set at 0.5 for both receptor and ligand. The Prime refinement step was applied to the side chains of residues within 5 Å of the ligand. For each ligand docked, a maximum of 20 poses was retained to be redocked in XP mode. The IFD calculation was followed by Prime/MM-GBSA for the estimation of ∆G binding. The MM-GBSA approach employs molecular mechanics, the generalized Born model and the solvent accessibility method to derive free energies from structural information, circumventing the computational complexity of free-energy simulations; the net free energy is treated as a sum of a comprehensive set of individual energy components, each with a physical basis [41,[43][44][45]. The conformational entropy change −T∆S can be computed by normal-mode analysis on docking poses, but many authors have reported that omitting the entropy evaluation is not critical when calculating MM-GBSA (or MM-PBSA) free energies for similar systems [46][47][48][49]. For these reasons, the entropy term −T∆S was not calculated, to reduce computational time. In our study, the VSGB solvation model was chosen, using the OPLS3e force field with a minimized sampling method.
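For orientation, the single-trajectory MM-GBSA estimate described above reduces to subtracting the component energies of the receptor and ligand from those of the complex, with the −T∆S term omitted. The sketch below illustrates this bookkeeping only; the component names and numbers are invented for illustration and are not Prime/MM-GBSA output.

```python
# Minimal sketch of the MM-GBSA binding free energy bookkeeping:
# dG_bind ~ G(complex) - G(receptor) - G(ligand), entropy term neglected.
COMPONENTS = ("coulomb", "vdw", "covalent", "gb_polar", "sa_nonpolar", "lipo")

def total_energy(terms):
    """Sum molecular-mechanics and solvation terms (kcal/mol) for one species."""
    return sum(terms.get(c, 0.0) for c in COMPONENTS)

def mmgbsa_dg_bind(complex_terms, receptor_terms, ligand_terms):
    return (total_energy(complex_terms)
            - total_energy(receptor_terms)
            - total_energy(ligand_terms))

# illustrative call with made-up component values (kcal/mol)
dg = mmgbsa_dg_bind({"coulomb": -310.0, "vdw": -95.0, "gb_polar": 250.0, "sa_nonpolar": -8.0},
                    {"coulomb": -260.0, "vdw": -50.0, "gb_polar": 215.0, "sa_nonpolar": -5.0},
                    {"coulomb": -20.0, "vdw": -5.0, "gb_polar": 12.0, "sa_nonpolar": -1.0})
print(f"estimated dG_bind = {dg:.2f} kcal/mol")
```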
Results and Discussion
Recently, SARS-CoV-2 caused the outbreak of coronavirus disease 2019 (COVID-19), threatening global health security. To date, no approved antiviral drugs or vaccines are available against COVID-19, although several clinical trials are underway. In this framework, computational methods offer an immediate and scientifically sound basis to potentially design highly specific inhibitors against important viral proteins and to guide the antiviral drug discovery process [50]. In this work, SARS-CoV-2-encoded proteins were analyzed from PDB structures, and homology models were generated by using the most similar PDB crystal structures as templates. For the homology models, given the high similarity between SARS-CoV-2 proteins and some available crystal structures from SARS-CoV, the ligand coordinates of the most similar available crystals were exploited for the structure-based pharmacophore creation. Below we report the analyzed proteins and the composition of the related pharmacophore maps.
Nonstructural protein 16 (nsp16), also termed 2'-O-methyltransferase, is activated only by the binding of nsp10. We considered the structure of the nsp16-nsp10 complex from SARS-CoV-2 at 1.80 Å resolution (PDB ID: 6W4H). This complex shows S-adenosylmethionine (SAM) in the binding site, where it forms hydrogen bonds with Asp6928, Tyr6930, Asp6897 and Cys6913 (Figure 2P). The pharmacophore model derived on the co-crystallized ligand showed 9 features: 4 HBAs with Gly248 and Thr341, 1 HBD with His250 and 2 negative ionizable areas with Gly248 and Lys290 (Figure 2Q,R).
The other pharmacophore models were developed by exploring the apoprotein surfaces, as follows. Uridylate-specific endoribonuclease (NendoU/nsp15) forms a hexameric endoribonuclease that preferentially cleaves 3' of uridines; it is one of the RNA-processing enzymes encoded by the coronavirus [52]. Exploring the apoprotein surface, a potential active site was found and a pharmacophore model was generated (Figure 3A). It contained the following residues: Thr166, Arg198, Asp267 and Ser273. The pharmacophore model showed 3 features: 2 HBDs and one hydrophobic feature.
Nonstructural protein 4 (nsp4) is localized at the endoplasmic reticulum membrane when expressed alone, but this protein can be recruited into the replication complex in infected cells [52]. After scanning the protein surface, a potential binding pocket was identified containing residues Leu417, Thr460 and Arg464. The derived pharmacophore model showed 6 features: 2 HBAs, 2 HBDs and a hydrophobic feature ( Figure 3B).
Nonstructural protein 9 (nsp9), encoded by ORF1a, has no designated function but is most likely involved in viral RNA synthesis. The crystal structure suggests that the protein is dimeric; nsp9 binds RNA and interacts with nsp8 [53]. The potential binding site identified contains the following residues: Gly38, Arg39, Ser59 and Thr64. The derived pharmacophore model showed 6 features: 2 HBAs, 2 HBDs and one hydrophobic feature (Figure 3C).
The nonstructural protein 7 and 8 (nsp7-nsp8) supercomplex is an essential cofactor for the nsp12 polymerase [33]. Two putative active sites were found: pocket A, between chains C, G and H, and pocket B, between chains G and H of nsp8. Pocket A comprises the residues Glu50 of chain C; Thr124 and Arg190 of chain G; and Glu5 and Arg57 of chain H. Pocket B, between chains G and H of nsp8, comprises the residues Arg57 and Asp64 of chain G and Leu122 and Thr123 of chain H. The pharmacophore models showed 6 features each: 2 HBAs, 2 HBDs and two hydrophobic features (Figure 3F,G).
Nonstructural protein 12 bound to its nsp7-nsp8 co-factors (nsp7-nsp8-nsp12) is a hetero-oligomeric complex acting as an RNA-dependent RNA polymerase. Binding to its essential co-factors nsp7 and nsp8 greatly stimulates the replication and transcription activities of the polymerase. Nsp12 contains a polymerase domain (a.a. 398-919) that assumes a structure resembling a cupped "right hand". The polymerase domain consists of a finger domain (a.a. 398-581, 628-687), a palm domain (a.a. 582-627, 688-815) and a thumb domain (a.a. 816-919). CoV nsp12 also contains a nidovirus-unique N-terminal extension (a.a. 1-397) [27]. The putative active sites, pocket A and pocket B, were found within the conserved motif regions (A-G) shared by all polymerases [33]. Pocket A contained residues of the N-terminal extension, Thr246 and Arg249; pocket B contained residues of the N-terminal extension, Tyr129, His133 and Asn138, and of motif D (Ala706-Asp711). The pharmacophore models showed 6 features: 2 HBAs, 2 HBDs and two hydrophobic features (Figure 3H,I). The identified pharmacophore models were used to perform a virtual screening against the DrugBank database of experimental, investigational and approved drugs as a first filter.
The hits found were submitted to docking studies to evaluate the poses and interactions at the putative active sites. First, XP docking was performed; subsequently, the highest-ranked hits were submitted to induced-fit docking analysis and MM-GBSA calculation as a further filter. For just one protein (nsp16), no hits were identified in the DrugBank database. At the end of the computational exploration, we identified a total of 34 hits over all the explored targets. Among these compounds, 26 are experimental drugs, 5 investigational drugs and 3 approved drugs. The summary results are reported in the Supporting Information. In the main text, we discuss the molecular recognition analysis for the best binder for each target. The rest of the identified hits, with docking scores, ∆G binding values and protein-ligand interactions, are reported in a table in the Supplementary Information, together with 2D ligand interaction diagrams of the best binders.
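The funnel described above (XP docking, then IFD and MM-GBSA as further filters) amounts to a ranking and filtering exercise over a results table. A minimal sketch of that post-processing is given below, assuming a hypothetical CSV of per-compound scores; the column names and the per-target cut-off of 10 hits are illustrative only.

```python
# Minimal sketch (pandas) of keeping the best XP-docked hits per target and then
# ranking the survivors by their MM-GBSA dG_bind. Input table is hypothetical.
import pandas as pd

results = pd.read_csv("screening_results.csv")          # columns: target, drugbank_id,
                                                         #          xp_score, dg_bind (kcal/mol)
top_docked = (results.sort_values("xp_score")            # more negative XP score = better
                      .groupby("target")
                      .head(10))                         # illustrative per-target cut-off

best_binders = (top_docked.sort_values("dg_bind")        # more negative dG_bind = better
                          .groupby("target")
                          .first()
                          .reset_index())
print(best_binders[["target", "drugbank_id", "xp_score", "dg_bind"]])
```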
The best docked hit for the 3CL protease is the experimental drug DB082309, a phenyl pyrroline derivative (∆G = −72.56 kcal/mol). This compound is characterized by an H-bond between its carbonyl oxygen and Asn142, but the principal contribution to the binding is given by ∆GvdW = −52.56 kcal/mol and ∆Glipo = −23.65 kcal/mol, due to the two aromatic rings (phenyl and O-difluorophenyl) of the molecule, which are located in two hydrophobic pockets (Leu140, Phe141, Leu167, Pro168), and to the piperazine moiety interacting with His41 and Met49 (Figure 4A).
The most promising drug candidate for the papain-like protease is the experimental drug DB07358 (∆G = −50.662 kcal/mol), a benzamide derivative. In our study, DB07358 forms three H-bonds with Tyr269, Gln270 and Tyr274. Moreover, the binding is characterized by strong pi-stacking of the thiazole moiety with the phenyl ring of Tyr269 and of the phenylamino moiety with the phenyl ring of Tyr274 (∆Glipo = −19.47 kcal/mol, ∆GvdW = −38.50 kcal/mol) (Figure 4B).
The top-ranked compound for nsp4 is the experimental drug sinapoyl-CoA (∆G = −80.73 kcal/mol). The binding of sinapoyl-CoA in the nsp4 pocket is driven by a high number of H-bonds with several different residues (Leu417, Thr419, Arg464, Thr460) (Figure 4F).
The experimental drug DB02794 resulted as the best binding hit for nsp9. Due to the presence of many oxygen atoms in its scaffold, DB02794 establishes many H-bond interactions involving Lys36, Gly38, Arg39, Ser59, Asp60 and Glu68. Other H-bond interactions involve some nitrogen atoms of the drug and the residues Gly38, Ser59 and Lys92. The strong network of H-bond interactions is reflected by a ∆Gcoul = −84.35 kcal/mol, partially compensated by a loss of binding energy due to the solvation contribution (∆G = +68.45 kcal/mol). It is worth noting that the next top-ranked hits for nsp9 are 3 approved drugs (ioxilan, pemetrexed and isoprenaline), which could be of particular interest because their "approved" status would allow their use in clinical trials (Figure 4G). For the helicase, the apo binding pocket analysis identified 2 different putative binding sites. The most promising candidate for pocket A is the experimental drug 4-hydroxybenzoyl-CoA (∆G = −91.90 kcal/mol). The interactions that this compound establishes with pocket A are characterized by several H-bonds, most of which are formed by the three phosphate moieties with Lys139, Arg339, Asn361 and Arg390. Other H-bond interactions occur between the hydroxyl and carbonyl oxygens and Lys139, Glu142, Lys146, Asp179, His230, Cys309, Arg339 and Arg390. Moreover, the purine moiety establishes pi-stacking interactions with the imidazole moiety of His230. The top-ranked compound in pocket B is the experimental drug DB02136, a cephalosporin analog (∆G = −75.81 kcal/mol). This compound interacts with the residues Ile20, Arg21, Arg22, Arg129 and Glu136, forming H-bonds with its carbonyl and hydroxyl oxygen atoms, but the binding mode is strengthened by an important contribution of ∆GvdW = −71.94 kcal/mol (Figure 4H,I).
Furthermore, for the nsp7-nsp8 supercomplex, two different pockets were found. The most promising candidate for pocket A is the experimental drug flavin-N7 protonated-adenine dinucleotide (∆G = −78.86 kcal/mol). The flavin moiety interacts with the residue Arg57, forming 2 H-bonds. H-bonds are also formed between the phosphate and Thr190, the ribose moiety and Arg190, and the purine moiety and Ile2, Ile3 and Ile4. Moreover, the binding is strengthened by ionic interactions between the NH3+ group and the glutamic acid residues 5 and 50. The residue Arg190 interacts with the purine moiety through pi-stacking interactions. The top-ranked compound for pocket B is the experimental drug DB06955 (∆G = −58.14 kcal/mol), a pyrrole-indole derivative, interacting with Arg57, Asp64, Leu122 and Thr123 through H-bond interactions (Figure 4J,K).
Last but not least, for the hetero-oligomeric complex nsp7-nsp8-nsp12, two different pockets were identified. In pocket A, the most promising compound is the experimental peptide analog DB04579 (∆G = −57.10 kcal/mol), interacting with the residues Thr246, Arg249, Leu251 and Ser255 through H-bond interactions. The most promising compound for pocket B is the investigational drug PCI-27483, a phenyl benzimidazole derivative studied to date for the treatment of pancreatic adenocarcinoma. Its binding mode is characterized by several H-bond interactions involving His133, Phe134, Asp135, Asn138, Ala708, Ser709, Thr710, Lys780 and Asn781. The indole moiety is further involved in pi-stacking interactions with Tyr129 (Figure 4L,M).
Conclusions
The recently emerged SARS-CoV-2 caused a major outbreak of COVID-19, instigated widespread fear and has threatened global health security, because there are no approved therapies for treating it. In an attempt to speed up the search for new inhibitors of virus replication, in this study we performed a computational drug repositioning campaign on the DrugBank database of experimental, investigational and approved drugs. The rationale for using such a restricted database was to identify potential lead compounds that can quickly be tested in vitro and in vivo, as they have already passed toxicity tests. We analyzed the proteome of SARS-CoV-2 and, using homology modeling, we obtained high-quality models of its proteins. A structure-based pharmacophore modeling study was performed to identify pharmacophore features for each target. Subsequently, the pharmacophore models were used to perform a virtual screening against the DrugBank library. After a docking study, we identified a total of 34 hits over all the explored targets (3CL protease, papain-like protease, guanine-N7-methyltransferase nsp14, nsp16, NendoU/nsp15, nsp4, nsp9, helicase, the nsp7-nsp8 supercomplex and the nsp7-nsp8-nsp12 hetero-oligomeric complex). Among these compounds, 26 are experimental drugs, five investigational drugs and three approved drugs. The final selection of the potential inhibitors was made considering the best binding energy of each compound obtained by MM-GBSA calculation. Molecular recognition analysis showed that these compounds interact with the residues found to be crucial for each target. These drugs can be further explored for the inhibition of SARS-CoV-2. Moreover, a set of hot-spot residues and pharmacophore features that make substantial contributions to protein-ligand binding was identified for each target. This can facilitate the rational design of novel selective inhibitors targeting SARS-CoV-2 that are not included in DrugBank. The results of this study offer a twofold contribution to anti-COVID-19 drug discovery campaigns. On one side, they indicate putative repurposing drugs to be adopted as a single therapy or in combination with other therapies. On the other side, our study attempted to map out the main binding hot spots of the most important SARS-CoV-2 proteins, opening an important route to the design of new molecules to test.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,936 | 2020-09-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Increase of solubility of foreign proteins in Escherichia coli by coproduction of the bacterial thioredoxin.
Eukaryotic proteins are frequently produced in Escherichia coli as insoluble aggregates. This is one of the barriers to studies of macromolecular structure. We have examined the effect of coproduction of the E. coli thioredoxin (Trx) or E. coli chaperones GroESL on the solubility of various foreign proteins. The solubilities of all eight vertebrate proteins examined including transcription factors and kinases were increased dramatically by coproduction of Trx. Overproduction of E. coli chaperones GroESL increased the solubilities of four out of eight proteins examined. Although the tyrosine kinase Lck that was produced as an insoluble form and solubilized by urea treatment had a very low autophosphorylating activity, Lck produced in soluble form by coproduction of Trx had an efficient activity. These results suggest that the proteins produced in soluble form by coproduction of Trx have the native protein conformation. The mechanism by which coproduction of Trx increases the solubility of the foreign proteins is discussed.
Production of large amounts of proteins in Escherichia coli is a first step for structural studies of macromolecules. However, many eukaryotic proteins, especially full-length proteins, are produced in E. coli as insoluble aggregates (inclusion bodies). Our laboratory is interested in the three-dimensional structure of transcription factors, especially nuclear oncogene products such as the myb proto-oncogene product (c-Myb). We identified the solution structure of the Myb-DNA complex by using the bacterially produced Myb protein containing the DNA-binding domain alone (1). However, the structural study of the full-length Myb has been unsuccessful, because it was produced in E. coli as insoluble aggregates. Although production of a protein as insoluble aggregates can offer the advantage of easy purification, a protein solubilized by some appropriate process such as urea treatment carries no guarantee that it has the native protein conformation. Thus, large-scale production of eukaryotic proteins in E. coli in soluble form is the first step for structural studies.
The mechanism by which proteins become soluble is not understood. The formation of inclusion bodies might be thought of as "inappropriate" protein-protein interactions due to the lack of proper polypeptide folding (2,3). Why are many eukaryotic proteins produced as insoluble aggregates in E. coli? Two factors appear to affect the solubility of eukaryotic proteins in E. coli. The first parameter is the E. coli heat shock chaperone GroESL (encoded by the groE operon, Fig. 1). The role of the GroESL complex in catalyzing the correct folding of a newly synthesized polypeptide has become firmly established recently (4-7). To produce eukaryotic proteins in E. coli, a strong promoter like the T7 promoter is often used. In this case, a high level of production of the E. coli chaperones GroESL may be needed. For example, when a phage infects E. coli, the production of GroESL is induced. If the level of functional GroESL does not increase, the phage cannot form phage particles, because the folding of the coat proteins does not occur correctly (8). Thus, coordinate induction and high-level production of E. coli chaperones may be required for proper folding of the foreign proteins. The second parameter that affects the solubility of eukaryotic proteins in E. coli could be the difference in redox state between E. coli and eukaryotic cells. Most of the fusion proteins with GST 1 (glutathione S-transferase) containing various mammalian proteins produced in E. coli bind to glutathione-Sepharose beads very efficiently. In contrast, the GST-fusion proteins produced in mammalian cells bind to glutathione beads only with low efficiency. 2 This observation suggests that mammalian cells have a different redox environment from E. coli. Consistent with this observation, it was reported that quite high concentrations of glutathione are maintained in mammalian cells (9).
We report here that the solubility of various eukaryotic proteins in E. coli is increased by coproducing E. coli thioredoxin (Trx). By coproducing Trx, eight foreign proteins including transcription factors and oncogene products were successfully produced in soluble form. The effect of Trx coproduction on the solubility of foreign proteins is compared with that of GroE coproduction.
MATERIALS AND METHODS
Construction of Plasmids-To make the pACYC plasmid containing the T7 promoter (pACYC-T7), the HindIII-SphI 0.5-kb DNA fragment of pACYC184 (10) was replaced by the 0.6-kb HindIII-SphI fragment containing the T7 promoter from the pAR2156 vector (11,12). The NdeI-BglII 2.1-kb DNA fragment containing the GroESL-coding region was made by the polymerase chain reaction using the groE plasmid (pKV1561) (13) and inserted into the NdeI-BamHI site of pACYC-T7 to generate the plasmid producing E. coli GroESL under the control of the T7 promoter (pT-GroE). To make the plasmid in which the T7 promoter was linked to the E. coli Trx-coding region (pT-Trx) (14), the NdeI-HindIII fragment of pT-GroE was replaced by the NdeI-HindIII fragment containing the Trx-coding region and the aspA transcription terminator.
Bacterial Strains and Production of Proteins-The E. coli strain BL21(DE3) (11,12) harboring pT-GroE, pT-Trx, or a pET vector to produce various vertebrate proteins was made by transformation with each plasmid. To generate bacteria producing both a foreign protein and GroE or Trx, the E. coli strain BL21(DE3) harboring pT-GroE or pT-Trx was transformed with the pET vector encoding the vertebrate protein. The bacteria were cultivated in 2.5 ml of superbroth to an A550 of 0.7, and then production of the proteins was induced with 1 mM isopropyl-1-thio-β-D-galactopyranoside (IPTG) for 4 h. Bacteria were pelleted, washed with phosphate-buffered saline (130 mM NaCl, 2.7 mM KCl, 10 mM potassium phosphate buffer, pH 7.2), suspended in 150 μl of buffer A (50 mM Tris-HCl, pH 7.5, 5 mM MgCl2, 0.5 mM EDTA, 0.1 M NaCl), and disrupted by sonication. After centrifugation, the supernatant was recovered as the soluble fraction. The pellet was suspended in 200 μl of SDS sample buffer, boiled for 3 min, and centrifuged. The supernatant was recovered as the insoluble fraction.
Autophosphorylation Assay of the Human lck Gene Product (Lck)-E. coli BL21(DE3) harboring a pET vector to produce Lck, with or without the Trx expression vector, was cultivated, and production of Lck was induced with IPTG for 3 h. The bacterial pellet was suspended in 1/25 volume of buffer L (50 mM Tris-HCl, pH 8.0, 0.5 mM EDTA, 50 mM NaCl, 1 mM dithiothreitol, 0.125 mM phenylmethylsulfonyl fluoride) containing protease inhibitors (soybean trypsin inhibitor, antipain, pepstatin A, chymostatin, and leupeptin; each 10 μg/ml), and disrupted by sonication. After centrifugation, the supernatant containing the soluble form of Lck was recovered and stored (soluble Lck). The insoluble pellet containing the insoluble form of Lck was suspended in 1/50 volume of buffer L containing 8 M urea, left on ice for 1 h, and centrifuged. The supernatant was recovered, dialyzed against buffer L, and stored (urea-treated insoluble sample). The autophosphorylating activity of Lck was examined by using the Lck-specific antibody as described (15).
RESULTS
We tried to examine the effect of coproduction of E. coli Trx or the E. coli chaperones GroESL on the solubility of vertebrate proteins in E. coli. The pET vector, which contains the T7 promoter and a pBR322-based replicon (11,12), was used to produce foreign proteins. To coproduce GroESL or Trx at a level similar to that of the foreign proteins of interest, the GroESL- or Trx-coding region was also linked to the T7 promoter and inserted into the pACYC vector, which contains a p15A replicon and a chloramphenicol resistance marker (10) (Fig. 1A). The generated plasmids, pT-GroE or pT-Trx, allowed cotransformation with the pET plasmids producing various vertebrate proteins, based upon plasmid compatibility. To confirm the production of GroESL or Trx from these plasmids, the BL21(DE3) bacteria transformed with the pT-GroE or pT-Trx plasmids were cultivated in the presence or absence of IPTG. Fig. 1B shows the Coomassie staining of the total proteins of each culture following SDS-PAGE. The overproduction of GroEL and GroES from pT-GroE, or of Trx from pT-Trx, is striking, with over 30% of the total cellular protein being GroESL or Trx.
We first examined the effects of coproduction of GroESL on the solubility of various vertebrate proteins (Fig. 2, see lanes marked +GroE). To produce the various foreign proteins, BL21(DE3) transformants harboring the pET plasmid alone or both the pET plasmid and pT-GroE were made. To assess the production and solubility of each of the proteins, cultures were harvested 4 h postinduction, the cell pellets were lysed by sonication, and the resulting lysates were separated into soluble and insoluble fractions by centrifugation. The proteins in both fractions were separated by SDS-PAGE followed by Coomassie staining. Without coproduction of GroESL, mouse c-Myb (16) was produced completely as insoluble aggregates. However, coproduction of GroESL significantly increased the solubility of Myb, and approximately 10% of c-Myb was produced in soluble form, resulting in a production level of about 20 mg of the soluble form per liter of culture. Similar increases in solubility were observed with two other human transcription factors, cAMP response element-binding protein 1 (CRE-BP1) (also called ATF-2) (17) and the p53 tumor suppressor gene product (18). However, coproduction of GroESL did not increase the solubility of three other nuclear factors, the ski-related gene product SnoN (19), the myc proto-oncogene product (Myc) (20), and the adenovirus oncogene product E1A (21). We have also examined the effects of coproduction of GroESL on the solubility of vertebrate kinases. The solubility of one of the Ser/Thr kinases, the Xenopus mos proto-oncogene product (Mos) (22), increased significantly, but the solubility of one of the tyrosine kinases, Lck (23,24), which is a member of the src gene family, did not. Thus, coproduction of GroESL improved the solubility of four foreign proteins out of the eight examined.
FIG. 1. Expression vector for E. coli chaperones GroESL or E. coli Trx. A, structure of the GroESL or Trx expression vector. The GroESL- or Trx-coding region was linked to the T7 promoter and inserted into the pACYC vector containing the chloramphenicol resistance marker. B, induction of GroESL or Trx production. E. coli BL21(DE3) harboring the GroESL or Trx expression vector was cultivated to an A550 of 0.7 (IPTG(−)), or was then cultivated for 4 h more in the presence of 1 mM IPTG (IPTG(+)). Total lysates were prepared and centrifuged to separate the soluble and insoluble fractions. Samples prepared from 0.4 and 0.7 ml of culture were analyzed by 10% (for GroEL) (left) and 15% (for GroES and Trx) (right) SDS-PAGE followed by Coomassie staining.
We then examined the effects of coproduction of Trx on the solubility of the eight proteins described above (Fig. 2, see lanes marked by +Trx). The solubility of c-Myb was dramatically increased by coproduction of Trx, and about 30 mg of the soluble form of c-Myb were produced per liter of culture. Similarly, coproduction of Trx improved the solubility of CRE-BP1, p53, Mos, and Lck, and approximately 30-100 mg of soluble forms were produced per liter of culture. Although all of Myc and SnoN were produced as insoluble aggregates even in the presence of GroESL, significant amounts of Myc and SnoN were produced in soluble form by coproduction of Trx. About half of E1A was produced in soluble form in the absence of Trx, while most of E1A was soluble in the presence of Trx, resulting in production of about 70 mg of the soluble form of E1A per liter of culture. Thus, Trx increased the solubility of all eight foreign proteins examined, indicating that the Trx coproduction system is more useful than the GroE system.
Does the foreign protein produced in soluble form by coproduction of Trx have the native conformation? To examine this, we analyzed the autophosphorylating activity of Lck (Fig. 3). Lck was produced in soluble form by using the Trx coproduction system (Fig. 3, lanes 1-4). On the other hand, Lck produced in insoluble form without coproduction of Trx was solubilized by urea treatment (Fig. 3, lanes 5-8). The two forms of Lck were immunoprecipitated with the Lck-specific antibody and incubated with [γ-32P]ATP to measure the autophosphorylating activity. The specific activity of soluble Lck was 10 times higher than that of the urea-treated insoluble material. These results indicate that only a small portion of the molecules in the protein sample solubilized by urea have the native conformation. In contrast, the proteins produced in soluble form by coproduction of Trx appear to have the native protein conformation.
DISCUSSION
We have demonstrated that coproduction of Trx dramatically increases the solubility of all eight foreign proteins examined in E. coli. However, an increase of solubility by coproduction of GroESL was observed with only four of the eight proteins examined. These results indicate that the Trx coproduction system is more useful than the GroE system to produce foreign proteins in soluble form. By making a plasmid to coproduce both Trx and GroE, we also examined whether overproduction of both Trx and GroE can increase the solubility of vertebrate proteins more effectively than that of either protein. However, we observed no additional effects compared with overproduction of either Trx or GroE alone. 2 Improvement of the solubility of foreign proteins by overproduction of GroESL was reported recently by other groups with human procollagenase (25) or human Csk (26). In spite of the obvious effect on the solubility of procollagenase, Lee and Olins (25) reported that overproduction of GroESL had no effect on the solubility of three other proteins. Our better results with GroE (four successful cases out of eight) compared with that report could be due to the high level of production of GroESL under the control of the T7 promoter.
It was reported that the use of Trx as a gene fusion partner increases the solubility of foreign proteins such as cytokines (27). However, our finding that the overproduction of free Trx dramatically increases the solubility of foreign proteins is clearly different from this. This finding gives us a big advantage in preparing large amounts of proteins for structural study. In the case of fusion proteins, Trx has to be cleaved off by a specific peptidase, but the efficiency of cleavage is often very low. The increase of the solubility of foreign proteins by overproduction of Trx strongly suggests that the redox state affects the solubility of foreign proteins. Our observation that many GST-fusion proteins produced in mammalian cells bound to glutathione-Sepharose beads much less efficiently than those produced in E. coli cells suggests that E. coli cells have a relatively oxidative environment compared to mammalian cells. This could induce the formation of abnormal intramolecular disulfide bonds that aggregate the proteins. Another observation, that addition of a reducing reagent in the process of solubilization of insoluble aggregates of c-Myb by urea or guanidine hydrochloride increases the efficiency of obtaining functional proteins, 2 may support this speculation.
Comparison of the activity of Lck between the soluble form and the form solubilized by urea indicates that the soluble form has higher activity than the urea-treated form, indicating that only a portion of the molecules in the urea-treated sample has a native protein conformation. These results show that an artificially solubilized protein sample is not suitable for structural study. Recently, we have succeeded in purifying a large amount of full-length c-Myb protein produced in soluble form using the Trx system. This c-Myb protein sample will be useful not only for crystal study but also for various experiments to examine the interaction between c-Myb and other proteins. The Trx coproduction system allows us to prepare many mammalian proteins suitable for structural study and molecular biological study.
FIG. 2. Increase of solubility of mammalian proteins by coproduction of GroESL or Trx. E. coli BL21(DE3) harboring the pET expression vector to produce the various proteins indicated on the right, with (+) or without (−) the GroESL expression vector (GroE) or Trx expression vector (Trx), was cultivated to an A550 of 0.7 (IPTG(−)), and was then cultivated for 4 h more in the presence (IPTG(+)) or absence (IPTG(−)) of 1 mM IPTG. The soluble (marked by S) and insoluble (marked by I) fractions were prepared and analyzed on 10% or 8% SDS-PAGE followed by Coomassie staining.
FIG. 3. Autophosphorylation of Lck produced in E. coli. Various amounts of Lck produced in soluble form and the urea-treated Lck produced in insoluble form were immunoprecipitated with the Lck-specific antibody. To confirm that similar amounts of Lck were precipitated between the two preparations, samples of the precipitates were analyzed by SDS-PAGE followed by Western blotting with the Lck-specific antibody (upper). The remaining precipitates were incubated with [γ-32P]ATP to measure the autophosphorylating activity and then analyzed by SDS-PAGE followed by autoradiography (lower). | 4,093.6 | 1995-10-27T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Monitoring System Prototype Design at The Project Management Units
The ERP (Enterprise Resource Planning) Project Management Unit serves to scope, plan, forecast, schedule, organize, direct, control, and close projects. Monitoring system technology, which keeps developing and growing, can be used to assist management in monitoring to ensure that all activities are conducted according to plan. Technologies that can be used as monitoring media include web-based technology. The method used is direct observation in the ERP Project Management Unit, analyzing the procedures and processes of the running system, and describing the procedures using UML. To support the accuracy of the data, the research was also conducted through a literature study, by searching for relevant information from various books and by browsing the Internet. The author also conducted experiments, in the form of the design and construction of a prototype Project Management Monitoring System application. The benefit of this research is to produce a new system in the form of a web-based Project Management Monitoring System application that is capable of supporting the high mobility of employees in reporting any activity that has been, is being, or will be done in real time. The reports are subsequently used as a basis for analyzing the shortcomings of the activities that have been done, so that management can forecast the activities that will be implemented in the future. From this research, the performance of the ERP Project Management Unit in monitoring projects is expected to be more effective, efficient, and optimal.
INTRODUCTION
Today the development of monitoring system technology is increasingly advanced and growing (Al-Khafajiy et al., 2019; Kumar, S., 2022), so that it can be used to assist management in monitoring to ensure that every activity carried out goes according to plan (Polyakova, A. et al., 2019; Haasnoot, M. et al., 2018; Marbouh, D. et al., 2020; Cheng et al., 2020). Technologies that can be used as monitoring media include web-based technology and mobile device technology or Android-based smartphones (Paputungan, I. et al., 2020; Susilawati, H. et al., 2019; Işık, M. F. et al., 2018; Christanto, F. W., & Suprayogi, M. S., 2018). Web-based technology has been known to have many advantages, one of which is that it can be accessed through various types of media and also different operating systems (Stawarz, K. et al., 2019; Xu et al., 2020).
The tight schedule of activities is one of the obstacles for management in monitoring every activity carried out (Yiu, N., 2019). Not infrequently, employees have to take official trips out of town or even abroad to take part in various work-related activities. This causes the reporting process to be delayed and not up to date, and the performance of the ERP (Enterprise Resource Planning) Project Management Unit in carrying out the monitoring process to be less than optimal (Faghihi V., 2022; Chethana S., 2022). In general, the functions of the ERP Project Management Unit are scoping, planning, estimating, scheduling, organizing, directing, controlling, and closing projects (Battistello et al., 2021; Mohapatra, H., & Rath, A. K., 2020).
The report is then used as the basis for analyzing the shortcomings of the activities that have been carried out so that management can forecast activities that will be implemented in the future. Thus, it is expected that the performance of the ERP Project Management Unit can be more effective, efficient, and optimal (AboAbdo, S et al., 2019;Supriyono, S., & Sutiah, S., 2020).
Based on the description above, the author tries to research and design a project management application prototype that can later be used in building an Enterprise Resource Planning (ERP) project management unit application.
RESEARCH METHOD
At this stage, the steps of the research activities are described; this framework comprises the steps that will be taken in solving the problems discussed.
1. Planning. Doing a job requires good planning; activity planning will determine the success of the job. In developing the system, detailed planning is needed so that the objectives are achieved.
2. Data collection
This is done by studying, collecting, and summarizing reference books related to the preparation of research reports to obtain the necessary data and information. The references are taken from various sources.
3. Analysis and design
At this stage, an analysis of the existing problems is carried out, and all needs are analyzed so that solutions to these problems are obtained. The analysis stage begins with field observations. After the problems are identified, the next steps are the analysis of the system requirements and the system design. The analysis stage uses the OOAD (Object-Oriented Analysis and Design) method. For the design of the system's business processes, UML (Unified Modeling Language) is used.
4. Conclusion
After all the stages are done, the next step is to draw conclusions covering the process from the beginning up to the creation of the system prototype design. At this stage, the entire series of activities is also documented by compiling it into a report.
RESULTS AND DISCUSSION
A. Results of Problem Analysis
Based on the analysis that has been done on the project management monitoring system currently running in the ERP Project Management Unit, the authors found the following problems:
1. The project management monitoring system that is currently running is still semi-computerized, meaning that there are still manual activities, such as recording the activities carried out and reporting processes that are still carried out by correspondence using official paper media.
2. The project management monitoring system that is currently being implemented is still not running effectively and efficiently, because the activity reports sent by each sub-unit use different formats. In other words, no standard reporting format has been set. This adds to the work of the Project Management Administration Staff in compiling reports on sub-unit activities.
3. The project management monitoring system that is currently being implemented is still not running quickly and accurately, because each reporting process requires checking and approval in the form of signatures or initials from each Sub Unit Manager, the Project Management Administration Head, or the ERP Project Management Head. This takes a lot of time, so the activity reporting process becomes hampered and not up to date, and the performance of the ERP Project Management Unit in carrying out the activity monitoring process is less than optimal.
4. The current project management monitoring system does not work live, so it does not support the high mobility of employees in reporting activities that have been, are being, or will be carried out in real time.
5. There is a risk of inaccurate data, because the recording process is still manual and the staff on duty often forget to record the activity data.
B. Problem Solving
To overcome the problems encountered in the process of monitoring activities and making reports, the authors propose the following alternative solutions:
1. Designing a web-based Project Management Monitoring System so that the process of monitoring and reporting projects can be done in a computerized way and the use of paper can be reduced.
2. Designing a web-based Project Management Monitoring System that is able to work live, so that each staff member can report every activity of the sub-unit anytime and anywhere. Managerial parties are also able to monitor projects in real time without having to be in the office. Thus, it is expected that the project management monitoring process can run quickly, accurately, and optimally.
3. Determining a standard reporting format in the system. Reports sent by each unit then use the same format, so the project management monitoring process becomes more effective and efficient.
As presented in Figure 1, the Administrator performs the login process, and the system then performs login verification. After a successful login, the Administrator creates master data of Users, Units, Document types, and Statuses. The Employee logs in using the username and password that have been created by the Administrator, and the system verifies the login. After the login process is successful, the Employee inputs monitoring data in the form of Instructions, Projects, Activities, and Activity Monitoring. The data that has been input is automatically saved into the database. Similar to Employees, the BOD (Board of Directors) also performs the login process using the username and password that have been created by the Administrator. After the verification process is successful, the BOD can view an overview in a tree view or a table that displays project management monitoring system data. Based on the activity diagram, there are 1 (one) initial node, which is the start of the activity; 22 (twenty-two) activities carried out by actors; 1 (one) decision node; 5 (five) fork nodes; 1 (one) join node; and 1 (one) activity final node, which is the end of the activity. […] Messages containing information about the activities that occur and the activities carried out by the actor; 5 (five) Self messages representing recursive operation calls or method calls belonging to the object itself.
Fig 4. Sequence Diagram-BOD
Based on the sequence diagram, there is 1 (one) actor, namely the BOD; 4 (four) Lifelines, namely: Login, Overview, Tree View, and Logout; 6 (six) Messages containing information about the activities that occur and the activities carried out by the actor; and 1 (one) Self message representing a recursive operation call or a method call belonging to the object itself.
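As a concrete illustration of the login verification and role-based access described above for the Administrator, Employee, and BOD actors, the following is a minimal sketch of how such behaviour might be implemented in a web framework; the paper specifies only the behaviour, so the framework (Flask), route names, and in-memory user store are assumptions.

```python
# Minimal sketch (Flask) of login verification and role-based access; a real system
# would store users in the database created by the Administrator, not in memory.
from flask import Flask, request, session, jsonify
from werkzeug.security import generate_password_hash, check_password_hash

app = Flask(__name__)
app.secret_key = "change-me"                                   # placeholder secret

USERS = {  # hypothetical records created by the Administrator via the "add user" page
    "admin": {"password": generate_password_hash("admin123"), "role": "Administrator"},
    "staff": {"password": generate_password_hash("staff123"), "role": "Employee"},
    "bod":   {"password": generate_password_hash("bod123"),   "role": "BOD"},
}

@app.post("/login")
def login():
    data = request.get_json(silent=True) or {}
    user = USERS.get(data.get("username"))
    if user and check_password_hash(user["password"], data.get("password", "")):
        session["role"] = user["role"]                         # successful verification
        return jsonify(message="login ok", role=user["role"])
    return jsonify(message="invalid credentials"), 401

@app.get("/overview")
def overview():
    if session.get("role") != "BOD":                           # only the BOD may view the overview
        return jsonify(message="forbidden"), 403
    return jsonify(projects=[])                                # tree view / table data would go here
```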
D. Prototype Design
This stage gives the users a clear picture of the complete design of the website under study, and it must meet the needs of the system users. The following is the prototype, or display, of the Project Management Monitoring System design that was made. Figure 5 is the main page display design. When administrators, employees, or the BOD finish logging in, they enter the main form. This main form contains the menu of the project management monitoring system. Figure 6 is the design of the project page, where the menu shows a list of existing project work. The project list table contains information about the project name, project start date, project completion date, document attachments, instruction name, project status, unit name, and action (view, edit, delete). Figure 7 is the design of the page for adding users, where users can be added from the menu. The user data contains the user's name, email address, access rights (according to position and task), the user's unit code, whether the user is active or not, and the action (view, edit, delete).
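The fields listed for the project list and user pages imply an underlying data model. A minimal sketch of such records is given below; the field names follow the text, while the types and the example values are assumptions, since the paper does not give a database schema.

```python
# Minimal sketch of the records behind the prototype pages; field types are assumed.
from dataclasses import dataclass
from datetime import date

@dataclass
class Project:
    name: str
    start_date: date
    completion_date: date
    attachment: str          # file name of the attached document
    instruction: str         # name of the instruction the project belongs to
    status: str              # e.g. "ongoing", "completed"
    unit_name: str

@dataclass
class User:
    name: str
    email: str
    access_rights: str       # according to position and task (Administrator, Employee, BOD)
    unit_code: str
    is_active: bool

# illustrative example record
example = Project("ERP rollout", date(2022, 1, 10), date(2022, 9, 5),
                  "kickoff.pdf", "Instruction 01", "ongoing", "Finance")
print(example)
```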
E. Discussion
The design of the Project Management Monitoring System application in the ERP Project Management Unit must pay attention to factors related to the system running within the company. In this case, the author adjusted to the wishes of the stakeholders, starting from the analysis of the current system, the problems, and the alternative solutions to the problems encountered, as well as drawing up the user requirements as outlined in the elicitation. The intention is that the performance of the ERP Project Management Unit in monitoring projects can be more effective, efficient, and optimal. In designing this system, the researcher designed a number of inputs to the program, such as test examples for each menu and sub-menu. If the data input is incomplete, the system displays a message that provides information about deficiencies or errors in the data input. This is very helpful for application users if this design is later implemented as a system, so that users can provide input according to the data required by the system for further processing to produce useful information for project management monitoring. | 2,588.6 | 2022-09-05T00:00:00.000 | [
"Computer Science"
] |
Study of the material engineering properties of high-density poly(ethylene)/perlite nanocomposite materials
This paper was focused on application of the perlite mineral as a filler for polymer nanocomposites in technical applications. A strong effect of the perlite nano-filler on the mechanical and thermal properties of high-density poly(ethylene) (HDPE) composites was found. An increase of the Young's modulus of elasticity with increasing filler concentration was also found. The increased stiffness from the mechanical tensile testing was confirmed by nondestructive vibration testing as well, based on displacement transmissibility measurements by means of the forced-oscillation single-degree-of-freedom method. Fracture toughness showed a decreasing trend with increasing perlite concentration, suggesting the occurrence of brittle fracture. Furthermore, ductile fracture processes were also observed at higher filler concentrations by means of SEM analysis. Relatively strong bonding between the polymer chains and the filler particles was also found by SEM imaging.
Introduction
Modification and recycling of polymers is an important part of polymer research and applications [1][2][3]. Thermoplastics, such as polyethylene, can offer useful mechanical, chemical, electrical [4], and optical [5] properties, e.g., as structural supporting components [6] and packaging materials [7]. Due to its low price per unit volume and unique physicochemical properties, polyethylene is globally the most used thermoplastic [8]. Poly(ethylene) is a semicrystalline polymer. It is classified based on its density into four different groups: high-density polyethylene (HDPE), low-density polyethylene (LDPE), linear low-density polyethylene (LLDPE), and very low-density polyethylene (VLDPE) [9]. The covalent bond between carbon atoms of the poly(ethylene) molecule is extremely stiff and strong, similarly to that in diamond. Furthermore, the poly(ethylene) chain has the smallest transversal cross-section of all polymers [10]. This is due to the absence of pendant groups in its macromolecular structure. Therefore, a system of unidirectionally oriented polyethylene chains has a relatively high number of strong elements per unit area capable of transmitting high mechanical stresses. For this reason, the macroscopic mechanical strength of such a structure is very high. Semiempirical estimations of the maximum strength of polyethylene along its macromolecular chain vary in the range of GPa. Theoretical calculations of the modulus of elasticity propose magnitudes from 180 to 340 GPa [10]. However, an increased risk of cavity formation at the nano-filler/polymer matrix interface was found in HDPE nanocomposites, due to the difference in the Young's modulus of elasticity between the polymer matrix and the filler nanoparticles [11].
Moreover, the tensile properties of polymer fibers may be significantly affected by the fiber structure, as found for polyacrylonitrile (PAN) membranes [12]: the tensile properties decrease significantly with increasing fiber orientation angle. The results also showed that the nanofiber membranes exhibited a ductile fracture pattern.
The fundamental mechanisms governing the size-dependent mechanical behavior of different crystal structures were described in detail in [13], where the effects of the fabrication process and current experimental techniques for micro- and nano-compression were investigated as well. The influence of surface effects on the properties of nano-scale samples is directly associated with the surface-to-volume ratio.
Mineral fillers have been used over the last decade as additives in polymer nanocomposites [14], in silicone rubber [15], and in combination with polyvinyl alcohol fibers/nano-SiO2 fillers in reinforced cementitious composites [16]. More recently, mineral nano-fillers have been used as friction-reducing additives to improve the tribological performance and wear resistance of HDPE [17]. HDPE-nanoclay composites have also been used in additive manufacturing by 3D printing, where an increase in Young's modulus of elasticity with increasing nanoclay concentration was found [18]. The net effect of the nano-fillers on wear resistance was attributed to their differing abilities to modify the tensile and compressive mechanical properties of polymers; these were manifested in the observed surface hardness and ductility of the nanocomposite, in the contribution of the nano-fillers to the friction coefficient, and in the creation of the transfer film [17]. Incorporating nano-fillers into a thermoplastic polymer network improves physicochemical and mechanical characteristics such as air permeability (lowered) [19], mechanical strength, modulus of elasticity, and stiffness [20][21][22]. This is why research focused on composites containing inorganic fillers is currently important. Perlite powder is an essential material for thermal and acoustic insulation applications [23]. Perlite is a mineral formed by the cooling of volcanic eruptions. It is composed of SiO2, Al2O3, K2O, Na2O, and water; depending on its origin, perlite may also contain TiO2, CaO, and MgO. When subjected to thermal treatment, natural perlite particles expand up to 20 times in volume due to water vaporization [24]. Expanded perlite has also been found to be an effective sound-absorbing material due to its open pore structure [25]: when sound waves of a certain wavelength enter such pores, the acoustic energy is effectively absorbed. Applications of polyethylene prepared with the addition of perlite were reported elsewhere [25]; perlite was found to enhance the thermal stability and sound absorption coefficients of polyesters.
The polymer matrix modification, its degree of crystallinity, the type of reinforcement, the filler/matrix adhesion quality [26], the filler particle size, etc., influence the physicochemical, thermal, and mechanical properties of the final composites [27,28]. A detailed understanding of the effect of the polymer matrix and the filler particles on the overall composite performance in consumer technical products, e.g., under static and dynamic mechanical loads, is therefore of scientific and practical importance. For that reason, this paper combines destructive and nondestructive mechanical testing [29] and thermal analysis with SEM imaging for the analysis of perlite/HDPE composites.
Materials
High-density poly(ethylene) (HDPE) type 25055E (The Dow Chemical Company, USA) was used in the form of white pellets (lot No. 1I19091333). As the filler, the inorganic volcanic glass mineral perlite (Supreme Perlite Company, USA) was applied (d50 = 447 nm diameter, density 1.10 g/cm3) [30]. The perlite filler moisture content was 0.1 wt%. In total, 150 composite samples (dog-bone shape for tensile testing, Charpy pendulum, and vibration testing) were prepared from virgin HDPE and from perlite/HDPE composites with 5, 10, and 15 wt% of inorganic filler. Composite samples were prepared by injection molding on an Arburg Allrounder type 420 C machine (Germany). The applied processing temperature ranged from 190 to 220°C, with a mold temperature of 30°C, an injection pressure of 60 MPa, and an injection rate of 20 mm/s [31]. A Scientific extrusion machine was used for extrusion of the virgin and composite samples at processing temperatures ranging from 136 to 174°C, L/D = 40.
Scanning electron microscopy
Scanning electron microscopy (SEM) images were captured using a Hitachi SU 6600 scanning electron microscope (Japan). The electron source is a Schottky cathode. The microscope resolution is 1.3 nm in secondary electron (SE) mode and 3 nm in back-scattered electron (BSE) mode. For these images, the SE mode and an accelerating voltage of 5 kV were used. The distance between sample and detector was 6 mm. The studied materials were placed on double-sided carbon tape on an aluminium holder.
Thermal analysis
For the perlite/HDPE nanocomposites and virgin HDPE, thermogravimetry (TG) and differential thermal analysis (DTA) experiments were performed on a simultaneous DTA-TG apparatus (Shimadzu DTG 60, Japan). Measurements were performed at a heating rate of 10°C/min in a dynamic nitrogen atmosphere (50 ml/min) over the temperature range from 30 to 550°C. The crystallinity (w_C) of the nanocomposites was calculated according to formula (1) [32,33]:

$$w_C = \frac{\Delta H_m}{\Delta H_m^0} \times 100\% \qquad (1)$$

where ΔH_m^0 = 293 J g−1 is the heat of fusion of 100% crystalline HDPE heated at a rate of 10°C/min [32,34], and ΔH_m (J g−1) is the measured heat of fusion.
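A minimal sketch of equation (1) is given below; the only constant is the literature value quoted above, and the example ΔH_m is an assumed figure chosen merely to illustrate the order of magnitude of the crystallinity reported later for virgin HDPE.

```python
# Illustrative sketch of equation (1): w_C = dH_m / dH_m0 * 100 %.
# DELTA_H_100 is the literature value quoted in the text; the example dH_m
# below is an assumed, illustrative input, not a value taken from the paper.

DELTA_H_100 = 293.0  # J/g, 100 % crystalline HDPE heated at 10 degC/min

def crystallinity(delta_h_m: float) -> float:
    """Crystallinity w_C in % from the measured heat of fusion (J/g)."""
    return delta_h_m / DELTA_H_100 * 100.0

print(round(crystallinity(177.7), 2))  # ~60.65 %, close to the virgin-HDPE value reported below
```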
Uniaxial tensile testing
Tensile testing of the injection-molded specimens was performed on a Zwick 1456 multipurpose tester (Germany). The measurements were carried out according to the CSN EN ISO 527-1 and CSN EN ISO 527-2 standards [35]. Samples were strained at room temperature up to break at test speeds of 50, 100, and 200 mm/min. From the stress-strain dependencies, the Young's modulus of elasticity and the elongation at break were calculated. Each experiment was repeated 10 times at an ambient temperature of 22°C, and average values and standard errors were calculated.
Charpy impact testing
Impact tests were performed on Zwick 513 Pendulum Impact Tester (Germany) according to the CSN EN ISO 179-2 standard with the drop energy of 25 J.
Displacement transmissibility measurement
The ability of a material to damp harmonically excited mechanical vibration of a single-degree-of-freedom (SDOF) system is characterized by the displacement transmissibility T_d, which is expressed by equation (2) [36,37]:

$$T_d = \frac{y_2}{y_1} = \frac{a_2}{a_1} \qquad (2)$$

where y_1 and a_1 are the displacement and acceleration amplitudes on the input side of the tested sample, and y_2 and a_2 are the displacement and acceleration amplitudes on the output side of the tested sample. Generally, three types of mechanical vibration are distinguished depending on the value of the displacement transmissibility, namely resonance (T_d > 1), undamped (T_d = 1), and damped (T_d < 1) vibration [37].
The displacement transmissibility of a spring-mass-damper system, described by a spring (stiffness k), a damper (damping coefficient c), and a mass m, is given by

$$T_d = \sqrt{\frac{1 + (2\zeta r)^2}{(1 - r^2)^2 + (2\zeta r)^2}} \qquad (3)$$

where ζ is the damping ratio and r is the frequency ratio, which are expressed as [39,40]

$$\zeta = \frac{c}{2\sqrt{km}} \qquad (4)$$

$$r = \frac{\omega}{\omega_n}, \qquad \omega_n = \sqrt{\frac{k}{m}} \qquad (5)$$

where ω is the frequency of oscillation and ω_n is the natural frequency [41,42]. Under the condition dT_d/dr = 0 applied to equation (3), it is possible to find the frequency ratio r_0 at which the displacement transmissibility has its maximum value [36]:

$$r_0 = \frac{1}{2\zeta}\sqrt{\sqrt{1 + 8\zeta^2} - 1} \qquad (6)$$

It is evident from equation (6) that the local extremum of the displacement transmissibility is shifted to lower values of the frequency ratio r with increasing damping ratio ζ (or with decreasing material stiffness k). The maximum value of the displacement transmissibility, T_d,max, is obtained at the frequency ratio r_0 from equation (6).
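A small numerical sketch of equations (3)-(6) is given below; the damping ratio used in the example is an assumed, illustrative value and not one taken from the study.

```python
# Minimal numerical sketch of equations (3)-(6) for an SDOF system (numpy only).
import numpy as np

def transmissibility(r, zeta):
    """Displacement transmissibility T_d(r, zeta), equation (3)."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

def resonant_frequency_ratio(zeta):
    """Frequency ratio r_0 at which T_d is maximal, equation (6)."""
    return np.sqrt(np.sqrt(1.0 + 8.0 * zeta ** 2) - 1.0) / (2.0 * zeta)

zeta = 0.1                            # assumed damping ratio, for illustration only
r = np.linspace(0.01, 3.0, 3000)      # frequency ratio sweep
curve = transmissibility(r, zeta)
r0 = resonant_frequency_ratio(zeta)
print(r0, r[np.argmax(curve)])        # both ~0.99: the peak sits just below r = 1
print(curve.max())                    # T_d,max for this damping ratio
```

As stated above, increasing ζ in this sketch moves r_0 (and hence the resonance peak) to lower frequency ratios.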
The mechanical vibration damping of the investigated materials was tested by the forced oscillation method. The displacement transmissibility T_d was experimentally measured using a BK 4810 vibrator in combination with a BK 3560-B-030 signal pulse multi-analyzer and a BK 2706 power amplifier over the frequency range from 2 to 3,200 Hz. The acceleration amplitudes a_1 and a_2 on the input and output sides of the investigated specimens were measured by BK 4393 accelerometers (Brüel & Kjaer, Naerum, Denmark). Measurements of the displacement transmissibility were performed for three different inertial masses m (0, 90, and 500 g) positioned on the upper side of the periodically excited samples. The tested block articles had dimensions of (60 × 60 × 3) mm (length × width × thickness). Each measurement was repeated 5 times at an ambient temperature of 23°C.
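For completeness, the sketch below shows how the first resonance frequency f_R1 could be read off such measurements via equation (2). The frequency sweep and the synthetic output spectrum are placeholders only; in practice a1 and a2 would be the measured accelerometer amplitude spectra.

```python
# Sketch: extracting f_R1 from input/output acceleration spectra via eq. (2).
# The synthetic response below (fn, zeta) is an assumed placeholder for real data.
import numpy as np

freq = np.linspace(2.0, 3200.0, 6400)          # excitation frequency, Hz
a1 = np.ones_like(freq)                        # input-side acceleration amplitude
fn, zeta = 400.0, 0.15                         # assumed natural frequency and damping
rr = freq / fn
a2 = a1 * np.sqrt((1 + (2 * zeta * rr) ** 2) / ((1 - rr ** 2) ** 2 + (2 * zeta * rr) ** 2))

T_d = a2 / a1                                  # displacement transmissibility, eq. (2)
f_R1 = freq[np.argmax(T_d)]                    # first resonance frequency peak
print(f"f_R1 = {f_R1:.0f} Hz, T_d,max = {T_d.max():.2f}")
```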
Results and discussion
As known from earlier studies [35], applied inorganic nano/micro particles are used as functional fillers modulating elastoplastic behavior of polymer composites. It was found that dominating factors responsible for controlled mechanical response patterns of the composites are mainly the physicochemical properties of the applied polymer base matrix (e.g., HDPE, low-density poly(ethylene) (LDPE), linear low-density poly(ethylene) (LLDPE), etc.) and the properties of the filler particles (e.g., their uniformity, shape, diameter, and surface chemistry). Other factors, which should be taken into account, are the ratio of the amorphous/crystalline regions of the polymer matrix and the quality of the interface adhesion between filler particles and the polymer matrix [9].
SEM images of the perlite micro/nano-filler particles are shown in Figure 1a and b.
There is evidence confirming their porous internal structure, as shown in Figure 1a. As is well known from the literature [30,43,44], the porous micro/nano structures of the fillers or of whole composite articles directly affect their sound absorption and vibration damping properties, as well as their dynamic mechanical properties [31,45,46].
Typical filler concentration dependencies of the Young's modulus of elasticity (E) of the studied perlite/HDPE composites are shown in Figure 2.
They are characterized by a gradual increase of E with increasing perlite concentration for all deformation rates under study (50, 100, and 200 mm/min). An approximately 37% increase in E was observed for the 15 wt% perlite concentration compared to the virgin HDPE, corresponding to a substantial increase in material stiffness. However, this was accompanied by a corresponding exponential decrease of the observed elongation at break with increasing filler concentration, as shown in Figure 3.
The stiffening of the composite response to the applied uniaxial deformation with increasing deformation rate was confirmed: the elongation at break of the virgin polymer decreased from 220% (50 mm/min deformation rate) to 70% (100 mm/min) and 45% (200 mm/min). Interestingly, these differences were much less pronounced for filler concentrations exceeding 10 wt% in the perlite/HDPE composite matrices, suggesting lowered mobility of the poly(ethylene) macromolecular chains. Results of the fracture mechanics measurements of the studied composites are shown in Figure 4.
An exponential decrease of the fracture toughness was found, from 3.8 kJ/m2 (virgin HDPE) to 2.4 kJ/m2 for perlite/HDPE composites in the perlite concentration range from 5 to 15 wt%. As observed earlier during uniaxial tensile testing, the mineral filler increased the stiffness of the composite matrix, as reflected by the increasing modulus E with increasing filler concentration. A similar effect was confirmed during impact fracture testing on the Charpy pendulum, as reflected in Figure 4 by the increasing maximum force with increasing filler concentration.
With respect to the proposed mechanical energy transfer mechanism, the SEM images shown in Figure 5 clearly reveal plastically deformed polymer regions with well-developed spurs and deformation bands typical of a ductile fracture interface (Figure 5b-d), as well as brittle fracture regions, as shown in Figure 5a and b.
Results of the dynamic mechanical testing of the studied composites by the forced oscillation method on vibrator device are shown in Figures 6 and 7.
Here (Figure 6), the typical frequency dependencies of the displacement transmissibility demonstrated increased material stiffness with increasing filler concentration, as reflected by the position of the first resonance frequency (f_R1) peak (Table 1).
The validity of formula (6) was confirmed: with increasing stiffness (or decreasing damping ratio), the f_R1 peak position shifted to higher excitation frequencies. The obtained dynamic mechanical behavior was in excellent agreement with the tensile testing measurements, where E increased with increasing perlite concentration in the polymer composite matrix. It was also found that f_R1 shifted to lower excitation frequencies with increasing inertial mass applied during the vibrational measurements. This is in agreement with the natural frequency relation ω_n = √(k/m), where an increasing inertial mass m leads to a lower natural frequency ω_n and thus to a lower f_R1. From the practical point of view, this vibration damping method allows nondestructive evaluation of the stiffness of polymer nanocomposites, in contrast to destructive tensile or fracture tests.
Results of the thermal analysis are shown in Figure 8a and b and Table 2.
A minor increase of the melting temperature was found, from 137.4°C (virgin HDPE) to 141.6°C (15 wt% perlite/HDPE composite), indicating stronger bonding between the polymer chains and the filler particles. However, the crystallinity decreased with increasing perlite concentration from the original 60.65% to 28.16%, suggesting that perlite has no positive effect on HDPE crystallization, as was reported, e.g., for halloysite nanotube fillers [47]. It was found (Figure 8a) that the highest degradation rate was observed for the 5 wt% composite, whereas the lowest degradation rate was found for the 15 wt% composite. All curves exhibited a single degradation step attributed to the radical random scission mechanism of polyolefin thermal decomposition [33]. Compared to the virgin HDPE, the degradation onset of the 15 wt% perlite/HDPE composite was shifted to a lower value; according to Cuadri et al. [33], the predominant chain scission provokes the formation of compounds of low thermal stability, which are consecutively eliminated at lower temperatures.
Conclusions
It was found in this study that the perlite mineral filler strongly influences the mechanical and thermal properties of HDPE polymer nanocomposites. A gradual increase of the Young's modulus of elasticity, accompanied by a corresponding decrease of the elongation at break, was confirmed with increasing filler concentration; a 37% increase in the Young's modulus of elasticity was observed for the 15 wt% perlite concentration in comparison with the virgin HDPE. The increased stiffness observed in tensile testing was confirmed by nondestructive vibration testing based on measurement of the displacement transmissibility during forced oscillation, reflected by the shift of the first resonance frequency peak position to higher excitation frequencies. Fracture toughness showed a decreasing trend with increasing perlite concentration, from 4 to 2.3 kJ/m2, suggesting the occurrence of brittle fracture. However, regions of ductile fracture were also observed at higher filler concentrations in the SEM images, characterized by polymer deformation bands and spurs. It can be concluded that the perlite particles act as stress concentrators in the complex composite matrix. A minor increase of the melting temperature with increasing filler concentration was found, indicating stronger bonding between the polymer chains and the filler particles. (Table 2 abbreviations: c - filler concentration; T_m - melting peak temperature; ΔH_m - heat of fusion; w_C - crystallinity; T_D - DTA peak of decomposition; T_A - starting point, i.e., the intersection of the extrapolated starting mass with the tangent applied to the maximum slope of the TG curve (decomposition behavior); TWL - total weight loss; the ΔH_m endothermic process was detected in the temperature range from 95 to 175°C for all samples.) | 4,179 | 2020-01-01T00:00:00.000 | [
"Materials Science"
] |
Aerodynamic Design and Development of Infiniti 4.0 Solar Car using CFD
The aerodynamic characteristics of a solar vehicle are of significant interest and importance for reducing the air drag experienced and thus the power lost in overcoming the various resistances. This paper deals with virtual wind tunnel testing using CFD and finding the best possible design with due respect to the ESVC'2020 rulebook. A three-dimensional CAD model of the vehicle was created using SolidWorks 2018. The model was meshed and CFD was subsequently performed in ANSYS FLUENT. The drag coefficient was found to be about 0.697 at a speed of 72 km/h (20 m/s) due to the large wake region formed behind the main roll hoop. Further reduction of the drag coefficient is limited, as the main roll hoop is mandatory according to the rulebook. Keywords: ANSYS, drag coefficient, CFD, main roll hoop, SolidWorks, wake region.
In recent years, the world has recognized the problem of an energy crisis due to depleting fossil fuels, an increasing population, etc., leading to the serious problems of ever-increasing pollution and global warming. Automobiles are the most used means of transportation despite being primary contributors to global air pollution. Hence it was logical for scientists and researchers to find alternative fuels to power modern vehicles [2]. It is well understood that the future of mobility lies in solar-electric hybrid vehicles; that being said, the cost of energy produced by solar arrays remains on the higher side compared with energy produced by nuclear fuels, coal, petroleum, etc. [3]. This high cost, coupled with the comparatively low efficiency of solar arrays, makes it necessary to utilize the energy judiciously.
Given that almost 70% of all the power from the solar array is used to overcome air resistance, the aerodynamics of the solar-powered vehicle has to be given prime importance while designing the vehicle, in order to minimize power losses. Computational Fluid Dynamics (CFD) is a tool to study fluid flow at minimum cost and provides sufficient accuracy, with an average error of less than 10%. Hence CFD was used to optimize the design [4].

II. CFD THEORY AND IMPORTANCE

Computational Fluid Dynamics (CFD) is the implementation of numerical methods using computational techniques to study fluid flow by solving the mathematical equations which describe the behavior of fluids [5]. CFD is becoming increasingly popular in the fields of heat transfer and applied mechanics due to its fair capacity for predicting the motion of fluids, as well as chemical reactions. The Navier-Stokes equations are the governing equations that model the behavior of viscous fluids; as mentioned below, they are generally represented in partial differential equation form. Solving these equations analytically, using the standard methods available for partial differential equations, is possible for simple flows over simple geometries, such as laminar flow over a thin plate. However, if the flow becomes turbulent over much of a complex body or geometry, it is almost impossible to get close to the actual solution. Hence, the time-averaged Navier-Stokes equations are used in conjunction with various turbulence models to address the issues resulting from the time-averaging process. The Navier-Stokes system is the combination of two sets of equations [6], i.e.:
A) Continuity equation in 3D
The unsteady three-dimensional mass conservation (continuity) equation at a point in a compressible fluid can be represented as

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$$

The first term on the left side is the rate of change of density with respect to time; the second term describes the net flow of mass across the boundaries.
B) Momentum equations in 3D
The unsteady momentum equations in 3D at a point in a compressible fluid flow can be represented as

$$\frac{\partial(\rho u)}{\partial t} + \nabla\cdot(\rho u\,\mathbf{u}) = -\frac{\partial p}{\partial x} + \nabla\cdot(\mu\,\nabla u) + S_{Mx}$$

$$\frac{\partial(\rho v)}{\partial t} + \nabla\cdot(\rho v\,\mathbf{u}) = -\frac{\partial p}{\partial y} + \nabla\cdot(\mu\,\nabla v) + S_{My}$$

$$\frac{\partial(\rho w)}{\partial t} + \nabla\cdot(\rho w\,\mathbf{u}) = -\frac{\partial p}{\partial z} + \nabla\cdot(\mu\,\nabla w) + S_{Mz}$$

In each of these equations, the first term on the left side is the rate of change of momentum per unit volume with respect to time, and the second term represents the convective momentum transfer. The first term on the right side represents the pressure gradient in that particular direction, the second term represents the diffusive momentum transfer, and the third term represents the source.
Various models can be used to close the Navier-Stokes equations. The turbulence model used in the CFD simulation should be able to capture the main flow effects around the vehicle. Different turbulence models were studied, ranging from steady-state approaches, the k-ε and k-ω SST models, to transient models such as DDES, i.e., Delayed Detached Eddy Simulation. For the RANS portion, the turbulence models considered are the k-ε and k-ω models [6]. The k-ε turbulence model is composed of two equations that come directly from the differential transport equations, in which k represents the turbulence kinetic energy and ε represents the turbulence dissipation rate. The two equations are:

$$\frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho k\,\mathbf{u}) = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right] + P_k - \rho\varepsilon$$

$$\frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\cdot(\rho\varepsilon\,\mathbf{u}) = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right] + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\,\rho\frac{\varepsilon^2}{k}$$

where μ_t is the turbulent viscosity, P_k is the production of turbulence kinetic energy, and C_1ε, C_2ε, σ_k, and σ_ε are the turbulence model constants.
The k-ω turbulence model is also composed of two equations, in which k represents the turbulence kinetic energy and ω represents the specific dissipation rate. The k-ω SST model combines the advantages of the k-ε and k-ω models. The two transport equations are:

$$\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\!\left(\Gamma_k \frac{\partial k}{\partial x_j}\right) + G_k - Y_k + S_k$$

$$\frac{\partial(\rho\omega)}{\partial t} + \frac{\partial(\rho\omega u_i)}{\partial x_i} = \frac{\partial}{\partial x_j}\!\left(\Gamma_\omega \frac{\partial \omega}{\partial x_j}\right) + G_\omega - Y_\omega + D_\omega + S_\omega$$

where Γ_k and Γ_ω are the effective diffusivities of k and ω (containing the turbulent viscosity μ_t), G_k represents the generation of turbulence kinetic energy, G_ω represents the generation of ω, Y_k and Y_ω represent the dissipation of k and ω due to turbulence, D_ω represents the cross-diffusion term, and S_k and S_ω are user-defined source terms.
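The sketch below shows the textbook eddy-viscosity closures behind these two model families; the constant C_mu = 0.09 is the common default and is an assumption, since the paper does not list the constants it used.

```python
# Textbook eddy-viscosity closures for the two RANS models discussed above.
# C_mu = 0.09 is the standard default value, assumed here, not taken from the study.

def mu_t_k_epsilon(rho: float, k: float, eps: float, c_mu: float = 0.09) -> float:
    """Turbulent viscosity for the k-epsilon model: mu_t = rho * C_mu * k^2 / eps."""
    return rho * c_mu * k ** 2 / eps

def mu_t_k_omega(rho: float, k: float, omega: float) -> float:
    """Turbulent viscosity for the k-omega family (away from SST limiters): mu_t = rho * k / omega."""
    return rho * k / omega

# Example with arbitrary illustrative values of k and eps/omega:
print(mu_t_k_epsilon(1.225, 0.5, 10.0), mu_t_k_omega(1.225, 0.5, 200.0))
```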
C) Non Equilibrium wall function (NWF):
For high-Reynolds-number flows, such as the external flow around a vehicle, resolving the near-wall region down to the wall is not practical. The NWF takes into account the effect of local variation of the viscous sublayer when computing the turbulent kinetic energy budget in wall-adjacent cells. The NWF also addresses adverse-pressure-gradient regions, which are common in the flow around vehicles, and provides a more realistic prediction of the turbulent boundary layer, including flow separation, without increasing the computational time [9]. So, in the present solution, the k-ω turbulence model with the non-equilibrium wall function is implemented to study the fluid flow around the vehicle. Parameters such as aesthetics and driver ergonomics were also taken into consideration while designing. The limits for the maximum length, width, and height are 100 inches, 60 inches, and 60 inches, respectively, as per the rulebook.
B. Geometry Preparation
Once the CAD model is ready, it is very important to create a numerically acceptable geometry for the solver to perform the CFD analysis. It was observed that the computational time required for simulating the original geometry was unreasonable. Hence, performing CFD simulation of the original model for a sufficient number of iterations was not feasible, and the model for CFD simulation was scaled down in a 1:5 ratio.
Once the model was scaled down, an enclosure serving as the computational domain, shown in Fig. 2, was prepared for the simulation. The width and height of the domain extend 1 meter from the car, as can be observed in Fig. 3. The length of the domain is 4 meters behind the car and 2 meters in front of the car, as shown in Fig. 4. The purpose of providing the greater length behind the car is to ensure that the wake behind the car is properly resolved before the exit plane.
C. Meshing
After the geometry was reconstructed and made numerically acceptable, the file was imported into ANSYS to construct the three-dimensional grid system. Tetrahedral unstructured mesh cells were used in the computational domain.
For this grid system, the cell size gradually grows from around 3.3406e-003 m near the car surface up to about 0.2 m in the far field, where the incoming flow is assumed to be undisturbed by the presence of the car. The grid density is high around the car, where the flow gradients can be large (Fig. 5). For the same reason, the mesh is also densely packed near the car surface, as can be observed in Fig. 6. There are 234182 nodes and 814627 elements in the grid.
For the geometry, the inflation feature was set up with a growth of 5 layers from the car boundary surface, with a total thickness of 12.5 mm and a growth rate of 1.2 from the first layer. In addition to capturing the boundary layer effect accurately, inflation also contributes to a smaller element count and lower computational time. The inflation layers are clearly visible in Fig. 7 around the surface of the solar vehicle.
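A quick check of these inflation settings follows: with 5 layers, a 1.2 growth rate, and a 12.5 mm total thickness, the first-layer height follows from a geometric series; the helper below only rearranges those quoted numbers.

```python
# First-layer thickness implied by the quoted inflation settings:
# t_total = t1 * (g**n - 1) / (g - 1)  =>  t1 = t_total * (g - 1) / (g**n - 1)

n_layers, growth, total = 5, 1.2, 12.5          # settings quoted in the text (mm)
t1 = total * (growth - 1.0) / (growth ** n_layers - 1.0)
layers = [t1 * growth ** i for i in range(n_layers)]
print(round(t1, 2), [round(t, 2) for t in layers], round(sum(layers), 2))
# ~1.68 mm first layer; the five layers sum back to 12.5 mm
```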
D. Boundary Conditions
The front plane was given the "velocity-inlet" boundary condition. The bottom of the domain and the solar vehicle were set as "no-slip wall", and the top and sides of the domain were set as "symmetry" so that these surfaces would not affect the simulation. The outlet boundary condition was set to "pressure-outlet" with a gauge pressure of 0 Pascal. All the boundaries can be observed in Fig. 8.
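As an order-of-magnitude check on what the drag coefficient reported in the abstract (and in the results below) implies at the simulated speed, the sketch below converts C_d into a drag force and the power consumed against air resistance. The air density and, in particular, the frontal area are assumptions, since neither is stated here.

```python
# Back-of-the-envelope drag estimate from the reported C_d.
# rho and A are assumed values (not stated in the paper at this point).

rho = 1.225        # kg/m^3, standard sea-level air (assumed)
v = 20.0           # m/s (72 km/h, as in the simulation)
c_d = 0.697        # drag coefficient from the CFD result
A = 1.5            # m^2, hypothetical frontal area of the full-scale vehicle

F_drag = 0.5 * rho * v ** 2 * A * c_d    # N
P_drag = F_drag * v                      # W, power needed to overcome air drag
print(round(F_drag, 1), "N,", round(P_drag), "W")   # ~256 N and ~5.1 kW for these assumptions
```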
E. Simulation Results
The numerical simulation was run for 300 iterations. The changes in the residual graphs for each iteration were observed, and the solution was considered converged as the same fluctuations recurred from 100 iterations up to 300 iterations. The residuals graph can be observed in Fig. 9. The change in the lift coefficient during the iterations can be observed in Fig. 11. The positive value of the lift coefficient means an upward force is exerted on the vehicle; the lift value was assessed considering the available traction and the rolling resistance arising from the combined effect of weight and lift force. The lift coefficient after the simulation is 0.08273. The pressure distribution on the surfaces of the car was examined and is presented in Fig. 12. Regions of high pressure appear at the frontal area and the roll hoop of the vehicle; the airflow experiences resistance at these locations, so the pressure increases as the velocity drops. The flow field around the car surfaces otherwise develops in a relatively smooth manner, except for the frontal area and the roll hoop. A wake region can be observed behind the roll hoop, indicating separation of the boundary layer due to the adverse pressure gradient. Because of the increase in pressure, the velocity behind the roll hoop is at its minimum, which can be observed clearly in the velocity contours at the mid-plane of the solar vehicle in Fig. 15. The fluid flow around the car body was thus successfully studied using CFD. Designing the vehicle aerodynamically is difficult, as the guidelines of the competition rulebook have to be strictly followed. As demonstrated in the graphics above, the wake region formed is very large compared to other commercial vehicles, the main reason being the roll hoop, which presents a large surface area and generates increased pressure drag. The drag coefficient was found to be 0.697, which is higher than that of many supercars; however, it was considered the best design with due respect to the rulebook's guidelines. The actual working prototype was also manufactured, as shown in Fig. 16. | 2,555 | 2020-04-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Mite Fauna of the Family Syringophilidae (Acariformes: Prostigmata) Parasitizing Darwin’s Finches in the Galápagos Archipelago
Due to the biological uniqueness of the Galápagos Islands, ectoparasites of their avian fauna are relatively well studied compared with other oceanic islands. However, in this study, quill mites (Acariformes: Prostigmata: Syringophilidae) were investigated for the first time in this archipelago. We investigated 7 species (out of 9) and 133 specimens of Darwin’s Finches of the genus Geospiza. Quill mite parasites were confirmed in two host species, the Vampire Ground-Finch G. septentrionalis (Prevalence Index = 5%) and the Small Ground-Finch G. fuliginosa (PI = 4%). Both hosts were infested by a new mite species, Aulonastus darwini sp. n., inhabiting the quills of their contour feathers. The host–parasite relationship is discussed.
Introduction
The geographical patterns of biodiversity on isolated islands are an intensively studied subject of evolutionary ecology [1,2]. However, despite this extensive research, island biogeography of parasites has received relatively little attention [3,4].
The oceanic Galápagos Islands, due to their isolation from the mainland, the low probability of multiple colonization events, and their restricted surface area, represent a unique natural laboratory and provide a suitable model to study co-phylogenetic patterns in hosts and parasites [5]. The archipelago is separated by ∼1000 km of ocean from the nearest mainland. It has low species diversity compared with other islands or nearby continental landmasses, representing a simpler community with a smaller potential number of species and fewer interactions among them [6]. The Galápagos harbor 31 species of native resident land birds [7]. The archipelago's isolation and harsh environment delayed its colonization by humans until the 1800s, and its biodiversity remains mostly intact; only ∼5% of animal species have been lost. In addition, 26 breeding land bird species are endemic there, and no bird species extinctions have occurred [8].
The bird symbionts (parasites and others) recorded on the Galápagos Islands can be divided into three groups according to their origin: (i) those that came to the islands with the ancestors of their host species; (ii) those acquired following colonization from other host species in the native bird community; and (iii) those introduced to the islands by humans [6].
Among the ectoparasites found on Galápagos land birds, the most studied group is probably Phthiraptera [6,9]. Relatively well studied are also Diptera, particularly Hippoboscidae [6,10], and the accidentally introduced parasitic vampire nest fly, Philornis downsi (Muscidae), which causes significant mortality and threatens the survival of some endangered species [11].
Although there are specimens labeled "Geospiza difficilis" in the SNSB-ZSM, specifically G. difficilis septentrionalis and G. difficilis nigrescens, these subspecies from the Darwin/Wolf Islands are today considered to form a full species, i.e., G. septentrionalis [18]. Similarly, the subspecies G. difficilis acutirostris from Genovesa Island was elevated to full species rank. Therefore, we considered the monotypic G. difficilis sensu stricto, occurring on Pinta, Fernandina, and Santiago Islands (in the past also Santa Cruz, Floreana, and San Cristóbal Islands) [24], as not investigated in our study.
From each specimen, we examined about ten contour feathers (near the cloaca region), about five under tail coverts, and one small wing covert. Before mounting, mites were softened and cleared in Nesbitt's solution at room temperature for three days, according to the protocol introduced by Walter and Krantz [25] and Skoracki [22]. Then, mites were mounted on slides in Hoyer's medium and investigated using a light microscope (ZEISS Axioscope, Oberkochen, Germany) with differential interference contrast (DIC) illumination. Drawings were made using a camera lucida drawing attachment.
In the description, all measurements are in micrometers, and the dimension ranges of the paratypes are given in parentheses following the data from the holotype. The idiosomal setation follows Grandjean [26] as adapted for Prostigmata by Kethley [27]. The leg chaetotaxy follows that proposed by Grandjean [28]. Finally, the morphological terminology follows Skoracki [22].
Descriptive statistics were computed using Quantitative Parasitology on the Web [29], with 95% confidence intervals (the Sterne method).
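For readers reproducing the prevalence figures, a minimal sketch follows. Quantitative Parasitology uses the Sterne method, which is not available in SciPy, so the Clopper-Pearson exact interval is used here as a close stand-in; the counts in the examples are those reported for the two infested hosts later in the text.

```python
# Prevalence (index of prevalence, IP) with an exact Clopper-Pearson interval,
# used here as an approximation to the Sterne intervals computed in the study.
from scipy.stats import beta

def prevalence_ci(infested: int, examined: int, conf: float = 0.95):
    """Prevalence and an exact (Clopper-Pearson) confidence interval."""
    alpha = 1.0 - conf
    lo = 0.0 if infested == 0 else beta.ppf(alpha / 2, infested, examined - infested + 1)
    hi = 1.0 if infested == examined else beta.ppf(1 - alpha / 2, infested + 1, examined - infested)
    return infested / examined, (lo, hi)

print(prevalence_ci(1, 19))   # ~5 % prevalence, Vampire Ground-Finch sample
print(prevalence_ci(2, 48))   # ~4 % prevalence, Small Ground-Finch sample
```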
Male. Not found.
Differential Diagnosis
Aulonastus darwini sp. n. is morphologically the most similar to A. fringillus Skoracki, 2011 described from Fringilla coelebs (Fringillidae) from Poland [22]. In females of both species, each medial branch of the peritremes has two chambers, and each lateral branch has four or five chambers; the propodonotal shield is rectangular and bears bases of setae ve, si, se, and c1; setae c1 are longer than se; setae e2 and d1 are subequal in length; setae h2 are longer than f2, and agenital setae ag2 are longer than 60 µm.
This new species differs from A. fringillus by the following features: in females of A. darwini, the infracapitulum is apunctate; the propodonotal shield is rounded anteriorly and apunctate; setae c1 are 1.3-1.5 times longer than d2, and lengths of setae d2 are 125-160 µm. In females of A. fringillus, the infracapitulum is sparsely punctate; the propodonotal shield is with a concave anterior margin and punctate on the whole surface; setae c1 are 1.9-2 times longer than d2, and lengths of setae d2 are 80-90 µm.
Etymology
The species is named after Charles Darwin and the common English name of the whole host lineage (Darwin Finches, Geospizini), which played an important role when Darwin formulated his theory of evolution by natural selection [30,31].
Quill Mites of the Higher-Level Host Group-Birds of the Family Thraupidae
The Neotropical tanagers and allies (Thraupidae) are poorly researched for the presence of mite fauna of the family Syringophilidae, despite being the second-largest family of birds (representing ∼4% of all avian species and 12% of Neotropical birds) [32]. Considering that the family Thraupidae comprises more than 380 species [18], our knowledge of the syringophilid fauna associated with this host group covers less than 2% of it. The species overview above shows that this taxon is infested both by large-sized mites such as Syringophiloidus (subfamily Syringophilinae), which occupy the quills of large flight feathers with a large calamus cavity and thick quill walls (secondaries), and by small-sized syringophilids inhabiting small contour feathers, such as representatives of the genera Aulonastus (Syringophilinae) and Neopicobia (Picobiinae). This set of genera (with many still undiscovered species) may be characteristic of birds belonging to the family Thraupidae.
Quill Mites of Darwin's Finches
Darwin's Finches form a group of approximately 18 species in 5 genera belonging to the subfamily Coerebinae [18], which exhibit the highest propensity for dispersal of all lineages in the tanager radiation; most tanagers native to isolated islands are members of this subfamily [36]. Until now, we had no information on the presence of quill mites on other members of this subfamily, and currently, we present data about quill mite fauna associated with one genus of Darwin's Finches (Geospiza), while the other genera of this group, i.e., Camarhynchus (five species), Certhidea (two species), Pinaroloxias (one species), Platyspiza (one species) still remain unexamined.
The overall diversity of Geospiza quill mites found in the Galápagos Archipelago seems to be low, in accordance with the parasite island syndrome theory [3,37]. Out of the 7 species and 133 specimens of Darwin's Finches of the genus Geospiza investigated, quill mite parasites were confirmed in only 2 species, G. septentrionalis and G. fuliginosa, whereas the 5 other Geospiza species examined in our study were not confirmed as hosts of quill mites, i.e., G. fortis, G. scandens, G. magnirostris, G. conirostris, and G. propinqua. Two species, namely G. acutirostris and G. difficilis, were not investigated in our study.
At the moment, it is hard to say whether the other species were not confirmed as quill mite hosts due to low sample size or due to a sorting event: parasites could either have "missed the boat" during the colonization of individual Galápagos Islands (the founding Geospiza populations did not harbor the parasite on their arrival), or mites could have been "lost overboard" (the colonizing finches did harbor parasites but lost them on the islands due to substantial host population fluctuations in harsh insular conditions) [17,38].
Hosts of Aulonastus darwini sp. n.
The Vampire Ground-Finch Geospiza septentrionalis, which is one of the hosts of Aulonastus darwini, is restricted to Wolf and Darwin Islands, in the far northwestern corner of the archipelago, remote even by Galápagos standards. Due to the barrenness of both islands, Vampire Ground-Finches undergo extreme dietary limitations during the dry season; they shift from seeds toward an insectivorous diet but also feed significantly on the blood and eggs of breeding seabirds and even on partially digested fish regurgitate and guano. Blood-feeding likely evolved from feeding on avian ectoparasites (probably hippoboscid louse-flies), after which the finches shifted from this mutualism to piercing the birds' skin, allowing them to consume blood [20,[39][40][41].
In our study, we investigated 19 bird individuals collected from both the Wolf and Darwin Islands, and we found only one infested host (originating from Wolf I.).
On the other hand, the second host of A. darwini, the Small Ground-Finch, is one of the most common, abundant, adaptable, and widespread of Darwin's Finches; it occurs on almost all the islands of the archipelago. In towns and villages, it behaves very much like the House Sparrow (Passer domesticus) in other parts of the world [42]. We examined birds (N = 48) collected from 11 islands (Isabela, Santa Fe, Floreana, San Cristobal, Pinzon, Gardner, Espanola, Santiago, Rabida, Pinta, and Marchena), and 2 of them were infested (originating from Isabela and San Cristobal Islands).
Prevalence of Infestation and Habitat Preference
Information about the proportion of infected hosts in the host population (the index of prevalence, IP) has recently appeared more often in the literature on syringophilid mites and their hosts. Thus far, IP has been calculated for birds collected in the wild [43][44][45][46][47][48][49][50][51][52][53][54][55], kept in farm households [56,57] or zoological gardens [58], as well as for birds housed in museum collections [59][60][61][62][63][64]. Historic bird skins in museums are as good a material for studying quill mite ecology as living birds, because these ectoparasites are not able to leave the calamus after the host's death. All these studies show that the highest prevalence is noted among domestic birds kept in crowded conditions or birds closely associated with humans (such as House Sparrows), where IP can reach more than 80% (see [47,65]). A lower but still high IP was noted for wild social birds, where IP reaches 38% (see [62]). The lowest prevalence was recorded in wild, non-social birds, in which IP rarely exceeds 10% (see, e.g., [54]).
Examination of both Darwin's Finch hosts in our study shows a relatively low index of prevalence for the mite A. darwini (4-5%). However, the population parameters of the two hosts differ substantially. The Vampire Ground-Finch has a global population estimated at fewer than 1000 mature individuals [38], while the Small Ground-Finch lives on most Galápagos Islands and its population is considered abundant. Due to the highly insular conditions and permanently limited host population size (the island syndrome [3]), one would expect a higher prevalence at least in the Vampire Ground-Finch. It is also noteworthy that both the prevalence and the parasite presence/absence data in our study are historic, more than 120 years old. In contrast to our findings, OConnor et al. [14] and Villa et al. [13] noted that Geospiza species are commonly infested by feather mites (e.g., of the genera Mesalgoides, Proctophyllodes, Trouessartia, and Xolalgoides), for which the IP is higher than 25%.
Representatives of the genus Aulonastus are small-sized mites found mainly inside the feather quills of wing coverts and contour feathers, and occasionally inside secondaries, of small-sized passerines [21,22]. In our study, all mite specimens were collected from quills of contour feathers. Interestingly, among the 133 examined specimens of 7 host species, we did not find any representative of the subfamily Picobiinae, which exclusively inhabits this type of habitat. One possible explanation for the absence of picobiine mites in the Galápagos Archipelago could again be some kind of sorting event for this subfamily: the founding flock of Darwin's Finches that colonized the archipelago could have been parasitized only by representatives of the syringophiline genus Aulonastus. However, more detailed research is needed.

Institutional Review Board Statement: Ethical review and approval were waived for this study due to the use of only dead animals (specimens deposited in the ornithological collection).
Data Availability Statement:
All necessary data (such as localities) are available in the text of this article. | 2,832.2 | 2022-07-22T00:00:00.000 | [
"Biology"
] |
Synthesis and Tribological Properties of Bio-Inspired Nacre-Like Composites
Ceramic materials possessing high strength and rigidity are widely used in industry. Shell nacre has a layered structure spanning both macroscopic and microscopic levels and exhibits superior hardness and strength. Therefore, ceramic composites with a nacre-like layered structure have the potential to be utilized as sliding bearings employed in the harsh conditions of wells. For the purpose of this paper, a porous Al2O3 ceramic skeleton is prepared from nanometer powder using the freeze-casting method. The porous ceramic skeleton is then filled with the polymer polymethyl methacrylate (PMMA) through bulk polymerization to produce a bionic Al2O3/PMMA composite with a lamellar structure. The properties of the prepared composite are determined by analysis of micro-hardness, fracture toughness, friction coefficient, wear scar diameter, and the morphology of the worn surface. The results indicate that raising the Al2O3 powder content, which acts as the initial solid phase content, increases the viscosity of the ceramic slurry and gradually decreases the pore size of the ceramic skeleton. The prepared layered Al2O3/PMMA composite possesses a high fracture toughness, close to that of aluminum, approximately four times that of the Al2O3 ceramic matrix and 16 times that of PMMA. Three kinds of composites with different solid phase contents were tested under lubrication by water-based drilling fluid to determine the friction coefficient of each. The results indicate that an increased load leads to a decreased friction coefficient, while the effect of speed is not evident. Under dry conditions, the friction coefficient of the three composites declines with increasing load and speed. With water-based drilling fluid as the lubricant, the wear scar diameter increases at higher speed, while under dry conditions it increases with increased load. Abrasive wear is determined to be the principal form of wear of the layered Al2O3/PMMA composites.
Introduction
Bearings are essential components of downhole tools that are constantly exposed to extreme working conditions [1]. Wear and tear of these bearings is becoming increasingly problematic, severely threatening the operation stability of the bit and other vital equipment [2,3]. As a crucial mechanical component, the ceramic bearing can be widely used in aviation, aerospace, petroleum, automobile, and other fields. Ceramic bearings are unmatched by metal bearings due to their excellent performance at high temperatures, tremendous strength, and outstanding resistance to abrasion and corrosion [4][5][6][7]. However, the notable drawback is that the fracture resilience is exceedingly low. Subsequently, these bearings are prone to microcracks and micro defects, severely restricting its application in engineering [8][9][10]. The hard tissue of natural organisms, especially the nacre of shells,
Materials
The nano-scale alumina powder (Sigma Aldrich (Shanghai) Trading Co., Ltd., Shanghai, China) used for the purpose of this paper is α-Al2O3, and its particle size was less than 12 nm. Deionized water was used as the solvent for the freeze-casting experiment. Sodium citrate (Sigma Aldrich (Shanghai) Trading Co., Ltd.) was used as the dispersant. The water-based drilling fluids were prepared for harsh working conditions such as the high water and sand content found in oil/gas wells. The mass ratio of the raw materials was low-viscosity carboxymethyl cellulose (CMC) (1 wt. %), bentonite (3 wt. %), Na2CO3 (0.15 wt. %), and H2O (95.85 wt. %). The sodium carbonate powder was dissolved in the deionized water. The bentonite was added while mixing with a glass rod, followed by the low-viscosity CMC. The mixture was stirred with a magnetic stirrer for 30 min and then left to stand for 24 h.
Experimental Equipment
During the research for this study, a freeze-casting device was designed for directional solidification at the bottom and top of the slurry, as shown in Figure 1. The cooling rate at the top and bottom of the slurry could be manipulated with PID (proportional-integral-derivative) control. The slurry could also be pressurized during the freezing process to achieve a slimmer profile of the lamellar structure and further mimic the nature of shell nacre. The contact-type fast cooling system was designed to apply pressure to the sample at low temperature and to control the cooling temperature accurately. The freeze-casting device was used to directionally solidify the alumina ceramic slurry. It is mainly composed of a refrigeration module, a temperature control module, a support module, and a compression module, as illustrated in Figure 1. The experimental platform could simultaneously accomplish and control both the cooling and the pressurization of the sample in the refrigeration chamber with remarkable accuracy. A cooling rate of 6 °C/min and a freezing temperature of −30 °C were selected for the experiment. The temperature of the ceramic slurry was reduced from room temperature to −30 °C at a cooling rate of 6 °C/min and then maintained at −30 °C for 5 min. After the cooling process, a warm insulation procedure was followed to make the cooling more consistent. The detailed experimental equipment used in this study is listed in Appendix A.
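A minimal sketch of the set-point schedule described above is given below; the 22 °C room temperature is an assumed starting value, since only "room temperature" is stated here.

```python
# Temperature set-point schedule for the freeze-casting run described above:
# ramp from an assumed 22 degC room temperature to -30 degC at 6 degC/min, then hold 5 min.
import numpy as np

t_room, t_freeze, rate, hold = 22.0, -30.0, 6.0, 5.0    # degC, degC, degC/min, min
ramp_time = (t_room - t_freeze) / rate                  # ~8.7 min
t = np.linspace(0.0, ramp_time + hold, 200)             # time axis in minutes
setpoint = np.where(t < ramp_time, t_room - rate * t, t_freeze)
print(round(ramp_time, 2), "min ramp,", round(t[-1], 1), "min total")
```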
Preparation of Porous Al2O3 Ceramics
The Al2O3 porous ceramic body was prepared by using freeze-casting technology on a ceramic slurry composed of Al2O3 ceramic powder and deionized water. Ceramic slurries with five different solid phase contents (the ratio of Al2O3 powder mass to the entire ceramic slurry) of 15 wt. %, 17.5 wt. %, 20 wt. %, 22.5 wt. %, and 25 wt. %, respectively, were prepared. The total mass of the Al2O3 ceramic powder was taken as 100 wt. %; it included 2 wt. % sodium citrate, 1 wt. % polyvinyl alcohol, and the balance Al2O3 nano powder. The prepared slurry was placed into the ball milling tank after initial mixing by hand and was blended for an additional 24 h in the ball milling machine to improve its uniformity and to prevent particle agglomeration and flocculation. After milling of the slurry was complete, the sample was placed in a vacuum degasser for 30 min to remove air and avoid defects such as blowholes. The slurry was then deposited into the refrigeration chamber, ensuring that the sample was in a vertical temperature gradient environment. The chamber is made of polytetrafluoroethylene, which provides an excellent thermal insulation effect. The slurry was subjected to double-headed directional solidification using the self-made freeze-casting device and cooled from room temperature to −30 °C at a rate of 6 °C/min; the ceramic slurry was directionally solidified into a cylindrical porous ceramic green body of ϕ25 mm × 5 mm. Following unmolding, the directionally solidified ceramic body was placed in a freeze dryer for 24 h, where low-temperature sublimation removed the lamellar ice from the solidified material to obtain a porous ceramic body. The freeze-dried porous ceramic body was placed in a high-temperature furnace, sintered in nitrogen at 1650 °C for 4 h, and then allowed to cool to room temperature. As shown in Figure 2a, since both the sample size and the organic matter content are small, no additional organic matter removal was required during the sintering process.
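An illustrative batch calculation for this slurry recipe is sketched below. The batch size is arbitrary, and the calculation assumes that the stated solid phase content refers to the whole powder blend (Al2O3 plus additives), with the 2 wt.% sodium citrate / 1 wt.% PVA / balance Al2O3 split applied within that blend.

```python
# Illustrative slurry batching for the recipe described above (assumptions noted
# in the lead-in; the batch mass of 500 g is arbitrary).

def slurry_batch(total_mass_g: float, solid_content: float) -> dict:
    """Component masses (g) for a slurry with the given solid phase content."""
    powder = total_mass_g * solid_content
    return {
        "sodium_citrate_g": powder * 0.02,
        "pva_g": powder * 0.01,
        "al2o3_nanopowder_g": powder * 0.97,
        "deionized_water_g": total_mass_g - powder,
    }

print(slurry_batch(500.0, 0.20))   # e.g. a 500 g batch at 20 wt.% solid content
```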
Preparation of Layered Al2O3/PMMA Composites
The bulk polymerization method was used to infiltrate PMMA into the porous ceramic bodies with solid phase contents of 15 wt. %, 17.5 wt. %, 20 wt. %, 22.5 wt. %, and 25 wt. % produced by the freeze-casting technology. The first step was to sand the ceramic body for 20 min using sandpaper of varying grit sizes, followed by a 30 min polishing with diamond suspension. The sample was then ultrasonically cleaned with petroleum ether and absolute alcohol for 10 min and finally dried. This was followed by introducing a mixture of 15 mL of the monomer methyl methacrylate (MMA) (Sigma Aldrich (Shanghai) Trading Co., Ltd., Shanghai, China) and 0.134 g of the initiator azobisisobutyronitrile (AIBN) (Accelerating Scientific and Industrial Development thereby Serving Humanity) into the dense, layered ceramic body. A prepolymer was prepared by heating the mixture for 30-40 min at 70 °C and then leaving it to cool to 40 °C. Finally, the prepolymer was placed in an oven at 40 °C and allowed a reaction time of 24 h in order to obtain the laminated Al2O3/PMMA composite material, as shown in Figure 2b.
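A rough check of the initiator loading used here is sketched below. The MMA density (0.94 g/mL) and the molar masses are literature values, assumed rather than taken from the paper.

```python
# Initiator loading implied by 15 mL MMA + 0.134 g AIBN (literature constants assumed).

v_mma, rho_mma = 15.0, 0.94           # mL, g/mL (density of MMA: literature value)
m_aibn = 0.134                        # g
M_mma, M_aibn = 100.12, 164.21        # g/mol (literature molar masses)

m_mma = v_mma * rho_mma
wt_pct = m_aibn / m_mma * 100.0                      # ~0.95 wt.% AIBN relative to monomer
mol_pct = (m_aibn / M_aibn) / (m_mma / M_mma) * 100  # ~0.58 mol% initiator
print(round(wt_pct, 2), round(mol_pct, 2))
```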
Micromorphology of Layered Al2O3/PMMA Composites
Both the prepared porous disc specimens and the layered Al2O3/PMMA composite disc specimens were sanded using diamond sandpaper of different grit sizes, further refined with a diamond suspension polishing solution, and then ultrasonically cleaned with petroleum ether and anhydrous ethanol. Finally, the specimens were dried and sputter-coated with gold. The interface microstructures of the porous ceramics and the layered Al2O3/PMMA composites were observed and analyzed with an SU8010 cold field emission scanning electron microscope (Hitachi (China) Co., Ltd., Beijing, China). Figure 3 shows the EDS (energy-dispersive spectrometer) analysis of the laminated Al2O3/PMMA composites. The results show that after the porous ceramic body was filled with the polymer PMMA, the ceramic and polymer interfaces displayed proper integration. The intended layered structure was established, indicating that the bulk polymerization process was satisfactory and the prepared Al2O3/PMMA composite possessed an adequate stratified form. In Figure 3a, the O content in the red box area is 42.02 wt. % and the Al content is 57.98 wt. %, which confirms that the area consists of the matrix material Al2O3. Figure 3b shows that the C content in the red box area is 71.19 wt. % and the O content is 28.82 wt. %, which indicates that the area is comprised of the filled phase PMMA.
Figure 4 shows the microstructure of layered Al2O3/PMMA composites with an initial solid content of 20 wt. % prepared by freeze-casting and bulk polymerization. The ceramic layers (bright regions) and the polymer layers (dark regions) are distributed alternately, and the porous structure of the alumina is inherited. The interface between the matrix alumina and the reinforcing-phase polymer PMMA is satisfactory, indicating that the bulk polymerization process was adequate. Figure 5a illustrates the random orientation of the pores in the cross-section of the sample. This is because the ice crystals form and grow along the temperature gradient, which lies perpendicular to the cross-section of the sample. A number of small holes are evident in the PMMA layer of the sample; these result from the relatively high temperature of the bulk polymerization process, which leads to the formation of bubbles in the molded PMMA. Therefore, the temperature of the prepolymerization and polymerization stages of the bulk polymerization process should be strictly controlled to prevent bubbles that could impair the performance of the sample. After bulk polymerization, the laminar structure of the composite is more pronounced, the matrix ceramic and the reinforcing-phase PMMA are well combined, and the microstructure is more compact.
Figure 5 shows the microscopic morphology of the layered Al2O3/PMMA composites with varying initial solid content frozen at −30 °C. The layered Al2O3/PMMA composites prepared by freeze-casting and bulk polymerization have a micromorphology in which ceramic lamellae and polymer PMMA lamellae are arranged alternately. As the initial solid phase content increases, so does the thickness of the sample's ceramic layer, until both the ceramic and polymer layers reach approximately 20-30 μm. The lamellar structure becomes thinner and closer in appearance to the nanoscale layered structure of shell nacre, indicating that the double-headed freezing method is able to refine the composite layer structure. The bulk polymerization process does not destroy the layered pore structure of the porous ceramic body; rather, it allows the PMMA to be adequately inserted into the layered pores of the billet, replacing the regions occupied by the ice crystals before their sublimation during freeze-casting. This encourages the formation of the laminated structure of alternating ceramic and PMMA layers. Therefore, upon completion of the bulk polymerization process, the microscopic morphology of the composite material inherits the layered structure of the porous ceramic billet, as well as the lamellar pore morphology of the ceramic body.
Fracture Toughness
The fracture toughness of laminated Al2O3/PMMA composites was measured by the Chevron Notch (CN) method. Figure 6 depicts the shape, size, and stress diagram of the CN sample. The size of the laminated Al2O3/PMMA composite specimen is (L × W × B), 20 mm × 4 mm × 7 mm, the incision depth is 1 mm, and the distance between the two supporting points below is 16 mm. A universal electronic material testing machine was used to measure the maximum load, as represented by P, of the layered Al2O3/PMMA composites containing diverse solid phase content, as shown in Table 1.
The fracture toughness K_IC (MPa·m^1/2) was calculated by the CN formula from the maximum load P (N), the sample height W (m), the sample width b (mm), and the incision depth a (m). The fracture toughness values of the layered Al2O3/PMMA composites with different solid phase contents are shown in Table 2. According to the existing literature, the fracture toughness of the matrix Al2O3 ceramic is 3-5 MPa·m^1/2, that of the reinforcing-phase polymer PMMA is 0.7-1.6 MPa·m^1/2, and that of metallic Al is 14-28 MPa·m^1/2. The prepared layered Al2O3/PMMA composite therefore has excellent fracture toughness, which significantly reduces the brittleness of the ceramic matrix and can effectively improve the performance of ceramic sliding bearings. The crack deflection model offers an explanation: when a crack extends through the ductile (polymer PMMA) and brittle (Al2O3 ceramic) layers it is repeatedly deflected, so that it propagates through the Al2O3 in a staircase pattern. The ductile layer can absorb energy through plastic deformation and, when compressed, can form macroscopic bridging at the tail of the crack that prevents it from spreading, thus effectively improving the fracture toughness of the material.
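The exact chevron-notch expression used by the authors is not reproduced in this text. As a placeholder only, the sketch below assumes a generic three-point-bend chevron-notch form K_IC = Y* · P_max / (B·√W), where Y* is a dimensionless geometry factor that depends on the actual notch geometry and span and must be taken from the relevant standard or the original paper; the numerical Y* value and load used here are purely illustrative.

```python
import math


def k_ic_chevron_notch(p_max_n: float, width_b_m: float, height_w_m: float,
                       y_star: float) -> float:
    """Generic chevron-notch fracture toughness estimate (assumed form).

    Assumes K_IC = Y* * P_max / (B * sqrt(W)):
      p_max_n    maximum load at fracture (N)
      width_b_m  specimen width B (m)
      height_w_m specimen height W (m)
      y_star     dimensionless geometry factor from a standard/calibration
    Returns K_IC in Pa*sqrt(m); divide by 1e6 for MPa*sqrt(m).
    """
    return y_star * p_max_n / (width_b_m * math.sqrt(height_w_m))


if __name__ == "__main__":
    # Illustrative numbers only: 80 N load, B = 4 mm, W = 7 mm, Y* = 5 (assumed)
    k_ic = k_ic_chevron_notch(80.0, 4e-3, 7e-3, 5.0)
    print(f"K_IC ~ {k_ic / 1e6:.2f} MPa*m^0.5")
```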
Microhardness
The prepared layered Al2O3/PMMA composite disc specimen was polished using diamond sandpaper of varying particle sizes and a diamond suspension polishing solution, then ultrasonically washed with petroleum ether and absolute ethanol, and finally dried. The HVT-1000Z Microhardness Analyzer (Shanghai Zhongyan Instrument Factory, Shanghai, China) was used to measure the surface microhardness of the layered Al2O3/PMMA composite. The change in microhardness with solid phase content was studied at a load of 2.942 N and a loading time of 10 s. Ten measurements were taken, and the error margin was determined to arrive at the final result. As shown in Figure 7a, the surface indentation of the laminated Al2O3/PMMA composites was observed with the microhardness analyzer. The indentation is diamond shaped and shows cracking, indicating that the material is fairly brittle.
Sliding Friction Characteristics of Layered Al2O3/PMMA Composites
The friction coefficients of the layered Al2O3/PMMA composites with different solid contents were studied under varying loads and rotational speeds. The friction and wear tests were carried out on a CFT-1 material surface performance tester using a ball-on-disk configuration, in which the initial contact is a point contact. The disk was the cylindrical composite sample with a solid phase content of 15 wt. %, 17.5 wt. %, 20 wt. %, 22.5 wt. %, or 25 wt. %, and the counterbody was a Q235 standard steel ball of φ4 mm. The main alloying elements of the steel ball are C (0.14%-0.22%), Mn (0.30%-0.65%), Si (≤0.30%), S (≤0.04%), and P (≤0.04%). The rotational speeds set in the experiment were 100 r/min, 150 r/min, 200 r/min, 250 r/min, and 300 r/min, with loads of 1 N, 2 N, 3 N, 4 N, and 5 N, respectively. The test time was 10 min, and the experiments were carried out at room temperature. The samples were either immersed in water-based drilling fluid or tested under dry friction conditions. A VKX-100 shape measurement laser microscope (Keyence (China) Co., Ltd., Shanghai, China) was employed to observe and measure the wear scar diameter of the laminated Al2O3/PMMA composites.
From Figure 8b it is clear that the friction coefficients of composites with different initial solid phase contents decreased with higher load when tested under dry conditions. The larger the load, the more ceramic particles are produced by the grinding of the ceramic against the steel ball; consequently, the rolling effect becomes more apparent and the friction coefficient decreases. The friction coefficient ranges measured under dry conditions are as follows: 0.66-0.73 for the 15 wt. % composite, 0.55-1.03 for the 17.5 wt. % composite, 0.73-1.21 for the 20 wt. % composite, 0.68-1.15 for the 22.5 wt. % composite, and 0.72-1.12 for the 25 wt. % composite. The friction coefficient of the 15 wt. % composite displayed little variation, while those of the 17.5 wt. %, 20 wt. %, 22.5 wt. %, and 25 wt. % composites varied considerably. Compared with the water-based drilling fluid environment, under dry conditions no lubricant separates the friction pair, so the two friction surfaces are in direct contact, leading to a larger and less stable friction coefficient. The friction coefficient also increases with increasing solid phase content: when the steel ball contacts the alumina, elastic or plastic deformation is likely to occur, and the higher the initial solid phase content, the higher the ceramic content, the greater the chance of the steel ball grinding against the ceramic, the larger the contact stress, and the larger the frictional resistance.
Figure 9 shows the variation of the friction coefficient of composites with different initial solid phase contents tested in both lubricated and dry environments at a load of 3 N. Figure 9a indicates that, in the water-based drilling fluid, the friction coefficients of the five composites change little with rotational speed. The friction coefficient ranges are as follows: 0.46-0.50 for the 15 wt. % composite, 0.34-0.49 for the 17.5 wt. % composite, 0.37-0.44 for the 20 wt. % composite, 0.60-0.66 for the 22.5 wt. % composite, and 0.43-0.57 for the 25 wt. % composite. From Figure 9a it is clear that the friction coefficient shows no apparent upward or downward trend with increasing velocity but remains stable. This is due to the low viscosity of the water-based drilling fluid, which makes it difficult to form hydrodynamic lubrication; mixed lubrication still dominates, and the friction coefficients are stable. Figure 9b illustrates that the friction coefficients of the five composites decrease at higher rotational speed under dry conditions. Because of the low microhardness of these composites, the ceramic particles that detach under load produce a rolling effect and the friction coefficient of the composite decreases. Compared with the results obtained in the lubricated environment, the friction coefficient is clearly higher under dry conditions owing to the lack of lubrication by the water-based drilling fluid. The friction coefficient also gradually increases with increasing solid phase content.
This is because a higher solid phase content increases the ceramic content of the composite, raising the probability of abrasive contact between the ceramic and the steel ball; elastic or plastic deformation is then more likely to occur and hinders sliding of the bearing, so the friction coefficient is higher. Figure 10 shows the surface morphology of the composite with 25 wt. % initial solid phase content tested in a dry environment and in water-based drilling fluid at a fixed load of 3 N and speeds of 100 r/min, 150 r/min, 200 r/min, 250 r/min, and 300 r/min, respectively. The wear scar of the composite material tested in the water-based drilling fluid is more pronounced. Figure 11 shows the variation of the wear scar width of the Al2O3/PMMA composites with different solid phase contents; when water-based drilling fluid is used, the wear scar width generally increases at higher rotational speed.
Figure 13 shows the variation of the wear scar diameter of the layered Al2O3/PMMA composites with different solid phase contents. The wear scars of the five composites with different initial solid phase contents widened with increasing load in both lubricated and dry conditions. The elevated load gradually increases the number of corrosion pits and furrows on the surface of the steel ball, leading to abrasive and corrosion wear and, therefore, a higher friction coefficient. Figure 14 shows the wear morphology of the composite surface under dry friction conditions. Figure 14a-c indicates that at a load of 1 N and a speed of 200 r/min, the worn surface of the composite with an initial solid phase content of 20 wt. % is furrowed, and the wear is mainly abrasive. As shown in Figure 14d-f, at a load of 3 N and a speed of 300 r/min, the wear of the same composite becomes more apparent, while the wear form remains mainly abrasive.
Conclusions
Freeze-casting technology was used in conjunction with nanoscale Al2O3 powder to successfully prepare a porous ceramic body with an unmistakable lamellar structure after sintering at 1650 °C for four hours. With increasing initial solid phase content, the viscosity of the ceramic slurry increased while the pore size decreased. The composites with 15 wt. %, 17.5 wt. %, 20 wt. %, 22.5 wt. %, and 25 wt. % solid phase content had pore sizes of 24, 18, 16, 15, and 10 µm, respectively, and ceramic sheet thicknesses of 22, 30, 32, 15, and 20 µm, with a slimmer lamellar structure. Bulk polymerization was used to insert the polymer PMMA into the porous ceramic body to produce a layered Al2O3/PMMA composite with a good combination of matrix ceramic and reinforcing-phase polymer. The thickness of both the ceramic layer and the polymer layer was approximately 20-30 µm, and the lamellar structure was thinner. The prepared composites had high fracture toughness closely resembling that of Al, about four times that of the matrix Al2O3 ceramic and 16 times that of the reinforcing-phase PMMA. When water-based drilling fluid was used with the five composites of different solid phase content, the friction coefficient decreased under heavier load and showed no apparent change with rotational speed. The friction coefficient of three composites with different solid phase contents tested in a dry friction environment followed a downward trend with increasing load and rotational speed. Under lubricated conditions, elevated rotational speed led to an expansion of the wear scar diameter; testing under dry conditions showed the same result when the load was increased. The synthetic nacre-like composites prepared in this study can contribute to a wear-resistant, high-strength sliding bearing material for petroleum drilling equipment.
"Materials Science",
"Engineering"
] |
Drug-Drug Interaction Potential, Cytotoxicity, and Reactive Oxygen Species Production of Salix Cortex Extracts Using Human Hepatocyte-Like HepaRG Cells
Herbal preparations of willow bark (Salix cortex) are available in many countries as non-prescription medicines for pain and inflammation, and also as dietary supplements. Currently, only little information is available on the toxicity and drug interaction potential of the extracts. This study evaluated the effects of two Salix cortex extracts on human hepatocyte-like HepaRG cells, in view of clinically relevant CYP450 enzyme activity modulation, cytotoxicity and production of reactive oxygen species (ROS). Drug metabolism via the CYP450 enzyme system is considered an important parameter for the occurrence of drug-drug interactions, which can lead to toxicity, decreased pharmacological activity, and adverse drug reactions. We evaluated two different bark extracts standardized to 10 mg/ml phenolic content. Herein, extract S6 (S. pentandra, containing 8.15 mg/ml total salicylates and 0.08 mg/ml salicin) and extract B (industrial reference, containing 5.35 mg/ml total salicylates and 2.26 mg/ml salicin) were tested. Both Salix cortex extracts showed no relevant reduction in cell viability or increase in ROS production in hepatocyte-like HepaRG cells. However, they reduced CYP1A2 and CYP3A4 enzyme activity after 48 h at ≥25 μg/ml, although this was statistically significant only for S6. CYP2C19 activity inhibition (0.5 h) was also observed at ≥25 μg/ml, and CYP2C19 mRNA expression was inhibited by 48 h treatment with S6 at 25 μg/ml. In conclusion, at higher concentrations, the tested Salix cortex extracts showed a drug interaction potential, but with different potency. Given the high prevalence of polypharmacy, particularly in the elderly with chronic pain, further systematic studies of Salix species of medical interest should be conducted in the future to more accurately determine the risk of potential drug interactions.
INTRODUCTION
The bark of various willow (Salix cortex) varieties has long been used in medicine against inflammation, as a painkiller and against fever (Wood, 2015). Recently, our group demonstrated the anti-inflammatory potential of Salix cortex extracts in SARS-CoV-2 peptide and bacterial lipopolysaccharide (LPS)-activated human in vitro systems, providing new evidence of the potential usefulness of Salix products as therapeutics (Le et al., 2021). Salix cortex is also used in dietary supplements, e.g., for weight reduction and to enhance performance in sports (Matyjaszczyk and Schumann, 2018). Despite its long history of use, so far only little data is available on the toxicity or potential adverse effects of Salix cortex preparations (Shara and Stohs, 2015). Based on a limited number of studies, one safety report by Shara and Stohs (2015) recommended that people who 1) are allergic to aspirin, 2) suffer from pathological conditions such as gastritis, stomach ulcers, diabetes, asthma, or hemophilia; or 3) are under therapy with anticoagulant drugs, beta-blockers, diuretics, or non-steroidal anti-inflammatory drugs (NSAIDs), should avoid Salix cortex.
Herbal preparations including Salix cortex extracts contain hundreds of phytochemicals that can act in different ways, encompassing the risk of drug interactions, that is, the ability to modify the action or effect of another drug administered successively or simultaneously. Considering that the usage of pain medication plays a major role in polypharmacy (Marengoni et al., 2014;Schneider et al., 2021;Young et al., 2021), knowledge of the interaction potential of Salix species is of great importance, as this could help avoid drug-related problems that could affect patients' safety. An important determinant in the occurrence of drug interactions is drug metabolism via the cytochrome P450 (CYP450) system (Guengerich, 2008). This class, which is predominantly expressed in the liver, has more than 50 enzymes, but only six of these (CYP1A2, CYP2C9, CYP2C19, CYP2D6, CYP3A4, and CYP3A5) metabolize 90 percent of the medications (Wilkinson, 2005). Considering the historical relevance of Salix cortex extracts in traditional medicine, their wide commercial availability, as well as their potential new pharmacological application, in the present study we investigated the potential of Salix cortex extracts for drug-drug interactions with respect to CYP450 enzymes relevant to drug metabolism. For this purpose, we used the human hepatocyte-like cell line HepaRG. As a validated in vitro model to investigate drug effects on metabolism enzymes, the HepaRG cell line is considered an alternative to primary ex vivo cultured human hepatocytes, especially in studies related to detoxification metabolism, such as CYP450 enzyme activities for predicting drug-drug interaction (Aninat et al., 2006;Anthérieu et al., 2010). Potential cytotoxicity was then assessed by measuring adenosine triphosphate (ATP) and lactate dehydrogenase (LDH), and oxygen radical formation in the cells was measured by electron paramagnetic resonance (EPR) spectroscopy.
Cell Culture
The human hepatic cell line HepaRG was obtained from Biopredic International® (Rennes, France). The cell line was cultured in William's Medium E + GlutaMAX™, supplemented with 10% FCS, 100 U/ml penicillin, 100 μg/ml streptomycin, 50 µM hydrocortisone 21-hemisuccinate sodium salt, and 5 μg/ml human insulin. The maintenance and differentiation of the cell line were performed according to Biopredic International® instructions, as previously described (Le et al., 2021). Cells were maintained at 37°C in a humidified incubator with a 5% CO2 and 95% air atmosphere.
Salix Cortex Extract Preparations
Salix cortex extracts were prepared and standardized as previously described (Le et al., 2021). S. pentandra clone PE1 (extract S6), originally collected in 2006 in Eggersdorf (Brandenburg, Germany), was cultivated in Wriezen (northeast of Berlin, Brandenburg, Germany). One-year-old branches of the clone were cut off in August 2016 and bark was peeled at a height of 10-100 cm. Afterwards, the bark material was frozen (−80°C) and immediately lyophilized. Hardwood cuttings of PE1 were also planted in a clone collection at Humboldt-Universität zu Berlin (Germany) to guarantee the availability and conservation of the Salix clone. The bark was extracted using a solution of 70% methanol and 0.1% formic acid. Extract B refers to a willow bark reference used for phytopharmaceutical production, which was provided by Bionorica SE (Neumarkt, Germany). Both extracts (B and S6) were standardized to 10 mg/ml phenolic content using high performance liquid chromatography (HPLC). Based on the reported pharmacological potential and knowledge of characteristic compounds in different Salix species, the following phytochemicals were used to standardize the extracts: salicylates (salicin, salicortin, 2′-O-acetylsalicin, 2′-O-acetylsalicortin, and tremulacin), flavan-3-ols (catechin and epicatechin), flavonoids (two isomers of naringenin-5-O-glucoside, naringenin-7-O-glucoside, luteolin-7-O-glucoside, quercetin-hexoside, and isosalipurposide), and other phenolic compounds (triandrin, two caffeic acid derivatives, and syrengin). The S6 extract contained 8.15 mg/ml total salicylates and 0.08 mg/ml salicin, and extract B contained 5.35 mg/ml total salicylates and 2.26 mg/ml salicin (Le et al., 2021).
Assessment of Cell Viability and Cytotoxicity
The enzyme lactate dehydrogenase (LDH) is involved in energy production and found in almost all cells of the human body. Upon damage, it is released from the cell into the medium and can thus be used as a marker for cytotoxicity. After 24 h extract exposure, LDH was quantified using an LDH-Glo™ Cytotoxicity Assay kit (Promega GmbH, Mannheim, Germany) according to the manufacturer's instructions. As a positive control, cell exposure to 0.2% Triton-X for 15 min was used. Adenosine triphosphate (ATP) is a key indicator of cellular activity and was used as another marker of cytotoxicity upon 24 and 48 h treatment, using the CellTiter-Glo® 2.0 Cell Viability Assay (Promega GmbH, Mannheim, Germany) according to the manufacturer's instructions. As a positive control, cell exposure to 0.2% Triton-X for 24 or 48 h was used. In both assays, 0.5% distilled water was used as solvent control (SC).
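The kits report luminescence readings, and the manuscript does not spell out how these were normalized. The sketch below therefore shows one conventional way to express the two readouts, assuming LDH release is scaled between the solvent control (0 %) and the Triton-X full-lysis control (100 %) and ATP viability is expressed as percent of the solvent control; all function names and numbers are illustrative.

```python
def ldh_cytotoxicity_percent(sample_lum: float, solvent_ctrl_lum: float,
                             triton_max_lum: float) -> float:
    """Percent cytotoxicity from LDH luminescence, scaled between the solvent
    control (0 %) and the Triton-X full-lysis positive control (100 %).
    This is a common normalization, not necessarily the exact one used here."""
    return 100.0 * (sample_lum - solvent_ctrl_lum) / (triton_max_lum - solvent_ctrl_lum)


def atp_viability_percent(sample_lum: float, solvent_ctrl_lum: float) -> float:
    """Cell viability from ATP luminescence, as percent of the solvent control."""
    return 100.0 * sample_lum / solvent_ctrl_lum


if __name__ == "__main__":
    # Invented luminescence values for demonstration
    print(ldh_cytotoxicity_percent(1.2e4, 1.0e4, 9.0e4))  # ~2.5 % cytotoxicity
    print(atp_viability_percent(9.5e5, 1.0e6))            # 95 % viability
```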
Assessment of Reactive Oxygen Species (ROS) Production Using EPR
The production of reactive oxygen species (ROS) induced by Salix cortex extracts in hepatocyte-like HepaRG cells was detected using electron paramagnetic resonance (EPR) spectroscopy as described by Lamy et al. (2013) and adapted by Odongo et al. (2017). Differentiated HepaRG cells were treated with different concentrations of Salix cortex extracts or 0.5% distilled water (solvent control) for 1 or 24 h. Cell exposure to 200 µM menadione for 30 min was used as a positive control. Afterwards, for ROS detection, 200 µM 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine hydrochloride (CMH, Noxygen Science Transfer & Diagnostics GmbH, Elzach, Germany), 25 µM deferoxamine (DFO), and 5 µM DETC were used in Krebs-HEPES buffer for 30 min (Odongo et al., 2017). Supernatants were then measured by EPR spectroscopy to evaluate ROS production. The instrument settings and the number of scans were as previously described (Lamy et al., 2013).
Cytochrome P450 Enzyme Activity Quantification
The effects of Salix cortex extracts on CYP1A2 and CYP3A4 enzyme activity were evaluated after 1 and 48 h treatment (Bernasconi et al., 2019;FDA, 2020) using the cell-based P450-Glo™ Induction/Inhibition Assay Systems (Promega, Walldorf, Germany) according to the manufacturer's protocol. In brief, HepaRG cells were differentiated as described above in white 96-well plates. After differentiation, cells were incubated for 1 or 48 h with the Salix cortex extracts or 0.5% distilled water (solvent control). For the 48 h treatment, the medium was exchanged after 24 h with the addition of fresh extract. To analyze the enzyme activity of CYP2C19, the biochemical P450-Glo™ CYP2C19 Assay and Screening System (Promega, Walldorf, Germany) was used according to the manufacturer's protocol. 50 µM omeprazole (induction) and 320 µM naringenin (inhibition) were used as positive controls (PC) in the CYP1A2 assay. 10 µM rifampicin (induction) and 10 µM ketoconazole (inhibition) were used as positive controls in the CYP3A4 assay. 10 μg/ml (22.6 µM) troglitazone was used as the positive control for CYP2C19 inhibition (Wishart et al., 2018).
Quantitative PCR for CYP450 mRNA Expression
CYP2C19 mRNA expression was quantified using qRT-PCR. In brief, differentiated HepaRG cells were treated with different concentrations of Salix cortex extracts or 0.5% distilled water (solvent control) for 6 or 48 h. Total RNA from HepaRG cells was isolated using the RNeasy Mini kit (Qiagen, Hilden, Germany), followed by a DNA removal step using the RNase-free DNase kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. RNA quality and quantity were measured using a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Freiburg, Germany). Isolated RNA was resuspended in 10 µL of RNase-free water. Each sample was treated twice with 2 µL RNase-free DNase (1 unit/μL, Qiagen, Hilden, Germany) for 10 min at 37°C to eliminate remaining DNA. The prepared RNA was reverse-transcribed as previously described (Helmig et al., 2009). For quantitative comparison of CYP2B6, CYP2C19 and CYP2D6 mRNA levels, real-time PCR was performed using SYBR Green fluorescence in a LightCycler® system (Roche Diagnostics GmbH). After optimization of the PCR conditions, amplification efficiency was tested in standard curves using serial cDNA dilutions; the correlation coefficient had to be above 0.9 and the slope around −3.5. Amplification specificity was checked using melting curves. Gene expression was related to the mean expression of the three housekeeping genes (HSK) beta-2-microglobulin (B2M), hypoxanthine-guanine phosphoribosyltransferase (HPRT) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Vandesompele et al., 2002). Expression was calculated with the 2^(−ΔΔCT) method (Pfaffl, 2001). The sequences of the used specific primers are listed in
FIGURE 1 | CYP450 enzyme activity quantification after treatment with Salix cortex extracts using a luminescent method. (A-D) Differentiated HepaRG cells were exposed to extracts for 1 or 48 h before analysis. (E) CYP2C19 enzyme activity was analysed in a cell-free assay after 0.5 h incubation of extracts with a human recombinant CYP2C19 enzyme, followed by analysis. Positive controls (PC): 320 μM naringenin (CYP1A2 inhibition, PC1), 50 μM omeprazole (CYP1A2 induction, PC2), 10 μM ketoconazole (CYP3A4 inhibition, PC3), 10 μM rifampicin (CYP3A4 induction, PC4), and 22.6 μM troglitazone (CYP2C19 inhibition, PC5). ASA, acetylsalicylic acid. The values are presented as means + SD (CYP1A2 1 and 48 h, n = 3; CYP3A4 1 h, n = 3, 48 h, n = 4; CYP2C19, n = 3). Ordinary one-way ANOVA was used for statistical analysis, followed by a Dunnett test. Significance was evaluated between extracts and solvent control (a. d.) as well as between extract S6 and B. *p < 0.05, **p < 0.01.
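To make the expression calculation in the qPCR section above concrete, here is a minimal sketch of the 2^(−ΔΔCt) approach, normalizing the target gene (e.g., CYP2C19) to the mean Ct of the three housekeeping genes (B2M, HPRT, GAPDH) and then to the solvent control; the Ct values in the example are invented for illustration only.

```python
from statistics import mean


def delta_delta_ct(target_ct: float, hsk_cts: list,
                   target_ct_ctrl: float, hsk_cts_ctrl: list) -> float:
    """Relative expression by the 2^(-ddCt) method, normalizing to the mean Ct
    of the housekeeping genes and to the solvent-control sample."""
    d_ct_sample = target_ct - mean(hsk_cts)              # dCt, treated sample
    d_ct_control = target_ct_ctrl - mean(hsk_cts_ctrl)   # dCt, solvent control
    return 2.0 ** (-(d_ct_sample - d_ct_control))         # fold change vs control


if __name__ == "__main__":
    # Invented Ct values: CYP2C19 after 48 h extract treatment vs solvent control
    fold = delta_delta_ct(
        target_ct=26.8, hsk_cts=[18.1, 22.4, 16.9],
        target_ct_ctrl=25.6, hsk_cts_ctrl=[18.0, 22.3, 17.0],
    )
    print(f"CYP2C19 relative expression: {fold:.2f}-fold of control")
```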
Statistical Analysis
Data were analyzed using GraphPad Prism 6.0 software (La Jolla, CA, United States) and presented as means + SD of at least three independent experiments. When comparing multiple means, the results were analyzed either by one-way ANOVA followed by Dunnett's multiple comparison tests or two-way ANOVA followed by Tukey's multiple comparison test.
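As a reproducibility aid, the sketch below shows how the described analysis (one-way ANOVA followed by Dunnett's comparison of each treatment against the solvent control) can be run in Python with SciPy instead of GraphPad Prism; scipy.stats.dunnett requires SciPy ≥ 1.11, and the data arrays are placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder data: CYP activity (% of solvent control), n = 3 per group
solvent_control = np.array([100.0, 98.5, 101.2])
extract_s6_25 = np.array([62.1, 70.4, 66.3])
extract_b_25 = np.array([88.0, 91.5, 95.2])

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(solvent_control, extract_s6_25, extract_b_25)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment vs. the solvent control (SciPy >= 1.11)
dunnett_res = stats.dunnett(extract_s6_25, extract_b_25, control=solvent_control)
for name, p in zip(["S6 25 ug/ml", "B 25 ug/ml"], dunnett_res.pvalue):
    print(f"{name} vs. control: p = {p:.4f}")
```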
CYP450 Enzyme Activity Quantification
The experiments tested a concentration range of the extracts that had previously shown pharmacological activity in terms of blocking LPS-induced inflammation in primary human immune cells (Le et al., 2021). To investigate the potential for drug interaction, the activity of three CYP450 enzymes (CYP1A2, CYP3A4 and CYP2C19) was quantified upon Salix cortex extract exposure. As shown in Figure 1A, naringenin (PC1) reduced CYP1A2 enzyme activity in hepatocyte-like HepaRG cells after 1 h by 70%. Omeprazole (PC2) increased CYP1A2 enzyme activity after 1 h by more than 2-fold (1.6-fold at 48 h, Figures 1A,B). The Salix cortex extracts did not affect cellular enzyme activity at this time point. After 48 h treatment, both extracts reduced the enzyme activity at high concentrations (25 or 50 μg/ml), with the effect being more pronounced for S6. Ketoconazole (PC3) completely abolished CYP3A4 activity after 1 h exposure of HepaRG cells, while rifampicin (PC4) induced enzyme activity by about 16-fold compared to the control after 48 h (Figures 1C,D). Acetylsalicylic acid (ASA) did not affect CYP1A2 and CYP3A4 enzyme activity after 1 or 48 h at the tested concentrations (Figures 1A-D). For assessment of CYP2C19 enzyme activity, a cell-free assay was used. After 0.5 h, troglitazone (PC5) reduced CYP2C19 activity by 68%; at ≥25 μg/ml both Salix cortex extracts also reduced enzyme activity, by 81% (extract S6) and 31% (extract B) compared to the solvent control. Again, the inhibitory effect of S6 on CYP450 enzyme activity was stronger than that of extract B.
CYP2C19 mRNA Expression
Differentiated HepaRG cells were exposed for 6 and 48 h to Salix cortex extracts, and the mRNA expression of CYP2D6, CYP2B6 and CYP2C19 was quantified using qRT-PCR (Figure 2). The baseline mRNA levels of CYP2D6 and CYP2B6 were very low in HepaRG cells and no mRNA expression upon treatment could be seen (data not shown). Baseline CYP2C19 mRNA levels were not reduced after 6 h treatment with Salix cortex extracts. After 48 h of treatment, 25 μg/ml of extract S6, but not extract B, significantly reduced CYP2C19 mRNA expression by 55%.
Cytotoxicity and ROS Production
As given in Figures 3A-C, neither of the two extracts affected intracellular ATP levels or triggered LDH release in hepatocyte-like HepaRG cells at the tested concentrations (0.25-50 μg/ml). We also tested whether the extracts could elevate the level of intracellular ROS in the cells, which in turn could cause damage to lipids, proteins and DNA. From Figures 3D,E it can be seen that after treatment with Salix cortex extracts for 1 or 48 h, no increase in ROS production could be detected.
DISCUSSION
The biggest consumers of prescription and over-the-counter medicines are older adults (Qato et al., 2008;Qato et al., 2016), and self-medication (Vacas Rodilla et al., 2009;Jerez-Roig et al., 2014) as well as consumption of non-prescription medicines, herbal and other dietary supplements in the first place, is widespread among them (Izzo and Ernst, 2009;de Souza Silva et al., 2014;Raji et al., 2017). People with chronic pain or other chronic conditions, such as diabetes, heart disease, stroke, or cancer, often use multiple medications concurrently (Lunsky and Modi, 2018;Schneider et al., 2021). The risk of drug interactions increases exponentially with the number of drugs, and many drug interactions can be explained by changes in metabolic enzymes in the liver and extrahepatic tissues. Induction or inhibition of hepatic CYP450 enzymes is one of the most important causes of such interactions after co-administration of medications (Cadieux, 1989;Johnell and Klarin, 2007). CYP450 enzyme induction usually leads to accelerated biotransformation of a drug. For most drugs, accelerated metabolism results in decreased efficacy, but if a pro-drug is activated by CYP450 enzymes, its efficacy and/or toxicity may increase. When two drugs compete for the same enzyme binding site, enzyme inhibition occurs. The stronger inhibitor predominates, resulting in decreased metabolism of the competing drug. This can result in increased serum levels of the unmetabolized drug and thus greater potential for toxicity. For drugs whose pharmacological activity requires biotransformation from a pro-drug form, inhibition may result in decreased efficacy. Besides substrate competition, a drug can also reduce enzyme activity through direct interaction or mRNA inhibition (Lee et al., 2009). The drug interaction experiments reported in the present study were carried out using hepatocyte-like HepaRG cells. Excluding CYP2A6 and CYP2E1, the HepaRG cell line has been reported to express high functional levels of most of the major xenobiotic-metabolizing CYP450 enzymes. These activities were found to be inhibited and induced by prototypical compounds at levels comparable to primary hepatocytes (Turpeinen et al., 2009). With these characteristics, HepaRG cells have been proposed as a more physiologically relevant pre-clinical platform for drug-drug interaction studies and safety pharmacology compared to, e.g., the pre-clinically widely used cell line HepG2. Even though HepG2 cells are inexpensive and convenient, they lack a substantial set of liver-specific functions, particularly CYP450 activity (Gómez-Lechón et al., 2010). So far, there are only few reports investigating a CYP450 interaction potential of Salix cortex extracts. Using a cell-free fluorimetric in vitro assay, an ethanolic extract of Salix planifolia was found to inhibit CYP2C8 (60.9%), CYP2C19 (48.5%), CYP3A4 (92.3%), CYP3A5 (73.9%), and CYP3A7 (71.4%) at a 10 μg/ml concentration. All other investigated enzymes, including CYP1A2, were inhibited by less than 30.0% (Tam et al., 2009). In HepaRG cells, we observed a low CYP1A2 and CYP3A4 interference potential of the tested Salix cortex extracts at a concentration about 5-fold higher than the effective anti-inflammatory concentration reported earlier by us (Le et al., 2021). This effect was evident only after 48 h, which argues against a direct interaction with CYP enzyme activity.
CYP2C19 metabolizes important drugs in clinical practice, such as proton pump inhibitors (esomeprazole, lansoprazole, omeprazole, pantoprazole, rabeprazole), clopidogrel, tamoxifen, diazepam, citalopram, or escitalopram (Sienkiewicz-Oleszkiewicz and Wiela-Hojeńska, 2018). For this enzyme, our data suggest that both Salix cortex extracts have the potential to interfere with drug metabolism, as they both reduced CYP2C19 enzyme activity in a concentration-dependent manner after 30 min incubation. As with the other enzymes investigated, extract S6 was more potent in enzyme inhibition than extract B. On the mRNA level, only S6 significantly reduced CYP2C19 expression in HepaRG cells. It is certain that none of the observed effects on CYP450 enzymes can be attributed to cytotoxic effects, since there was no reduction in ATP levels, increase in LDH or ROS production upon Salix cortex extract treatment in HepaRG cells. The two extracts differed in their salicylate content, which might account for the observed differences, but information on CYP450 regulation by, e.g., acetylsalicortin or acetylsalicin, which were both present solely in extract S6, does not exist so far. In contrast to extract B, extract S6 also contained the flavonoids catechin (0.78 mg/ml) and epicatechin (0.03 mg/ml) (Le et al., 2021). For both compounds no relevant inhibition of CYP1A2, CYP2C9, CYP2D6, and CYP3A4 could be detected in a study on human liver microsomes (Satoh et al., 2016), which confirmed previous data (Muto et al., 2001). Thus, it is unlikely that the presence of these flavonoids adds to the observed effects. For the aglycone of quercetin, some weak CYP450 activity inhibition has been described (Mohos et al., 2020). Quercetin-hexoside (but not the aglycone) is present in S6 at a 3-fold higher concentration compared to extract B. In contrast, extract B contains some O-glucosides of naringenin. For the aglycone, CYP1A2 inhibition has been reported by Fuhr et al. (1993) and this was confirmed in the present study (20% at 80 µM). However, extract B contained naringenin glucosides only at about 8 µM in total. Even if the glucosides were as potent as the aglycone of naringenin, this concentration would have been too low to inhibit CYP1A2. Taken together, at present, too little information is available to explain the observed differences between the extracts or to attribute the effects to individual extract constituents. Both salicylates and flavonoids as well as other phenolic compounds, such as syrengin, or yet unidentified compounds in the extracts, and possible additivity between the compounds, need to be investigated with respect to CYP450 inhibition, and their role further elucidated in the future. However, Salix species show huge differences in their phytochemical content, depending on the genotype (Förster et al., 2008;Förster et al., 2010;Gawlik-Dziki et al., 2014;Gligorić et al., 2019), and also on other factors such as the plant part used as a source material for medical products (Gawlik-Dziki et al., 2014;Sugier et al., 2013). This currently complicates ensuring a reliable therapeutic efficacy of the products. Based on the present data, it also calls for further systematic and more detailed studies on possible drug interactions. For the moment, a potential interaction of Salix cortex-containing formulations with drugs that are metabolized by CYP2C19, CYP1A2, or CYP3A4 cannot be excluded at common dosages.
Although the amount of most phytochemicals in Salix species that becomes accessible for absorption through the epithelial layer of the gastrointestinal tract after oral intake is currently not known (Schmid et al., 2001), it must be considered that, e.g., CYP3A4 is not only the most abundant CYP in the liver but is also present in the wall of the small intestine. There, before absorption into the blood stream occurs, it plays a major role in the metabolism of many different drugs, such as calcium channel blockers, lovastatin or diazepam (Vuppalanchi and Saxena, 2011), which either limits or increases the amount of bioavailable active drug. Especially people who have an inherent risk of polypharmacy and consider long-term use of Salix products (ESCOP, 2017) should be aware of this.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
EL conceived and designed the study and experiments. JG, CH, and SH designed and carried out the experiments. CH and JG prepared the graphs and analyzed the data. JG, EL, CH, and SH wrote the paper. NF and IM prepared the Salix extracts and performed chemical analysis of the extracts. All authors have given approval to the final version of the manuscript.
"Medicine",
"Environmental Science",
"Chemistry"
] |
NONLINEAR OPTICAL PROPERTIES AND OPTICAL POWER LIMITING OF POLY (3HT- Co-Th)-PMMA POLYMER BLEND FILMS
Poly(3-hexylthiophene-co-thiophene) copolymer was prepared by using the addition polymerization method. The nonlinear optical properties and the optical power limiting behavior of the prepared polymer blend poly(3HT-Co-Th)-PMMA films were studied using the z-scan technique for different weight ratios of the copolymer poly(3HT-Co-Th). In the present work, a continuous wave (CW) diode-pumped solid-state laser (DPSSL) of wavelength 532 nm was used for the irradiation of the prepared film samples. The nonlinear optical parameters, such as the nonlinear refractive index (n2), the nonlinear absorption coefficient (β), and the third-order nonlinear optical susceptibility (χ(3)) of the polymer blend poly(3HT-Co-Th)-PMMA films, were determined for different weight ratios of the copolymer poly(3HT-Co-Th). It is observed that the polymer blend poly(3HT-Co-Th)-PMMA films exhibit saturable absorption (SA) and self-defocusing effects, indicating that both the nonlinear refractive index (n2) and the nonlinear absorption coefficient (β) have negative values. The obtained results indicate that the prepared polymer blend poly(3HT-Co-Th)-PMMA films are promising materials and can be considered suitable for different optical and electronic applications.
INTRODUCTION
Nonlinear optical (NLO) properties of different optical materials have received much attention because of their numerous applications, such as optical power limiting, optical switching, light-emitting diodes, solar cells, optical sensors, and photonic and optoelectronic devices [1-17]. Thiophene-based polymers are among these important materials. They are conjugated polymers with attractive properties for many optical and electronic applications, such as solubility in organic solvents, flexibility in preparation, absorbance in the ultraviolet and visible spectral regions, high electrical conductivity, good environmental stability, and high optical damage thresholds [8,18]. Thiophene and 3-hexylthiophene were used to prepare the copolymer poly(3HT-Co-Th). In order to obtain a new polymer suitable for optical and optoelectronic applications, the prepared copolymer was added to poly(methyl methacrylate) (PMMA). The resulting polymer blend shows high absorption in the visible and ultraviolet regions of the electromagnetic spectrum. It is observed that the addition of the prepared copolymer to PMMA causes significant modification of the optical properties of the PMMA polymer. In the present study, the optical properties of the prepared poly(3HT-Co-Th)-PMMA films are found to depend on the weight ratio of the added copolymer poly(3HT-Co-Th) and to change as this weight ratio is varied.
Optical power limiting (OPL) is an important nonlinear effect that can be exploited in optical limiter devices to protect human eyes and other sensitive optical devices (such as optical switches and optical sensors) from damage caused by intense laser beams. An optical limiter acts as an attenuator for the optical radiation (such as a laser beam) incident on these devices. In nonlinear optical materials, the output power (or intensity) of the laser beam increases with increasing input power (or intensity) until it reaches a value called the threshold optical power (Pth); beyond this point the output power stabilizes at a nearly constant value as the input power continues to increase [19-24].
There are several techniques for measuring the nonlinear optical properties of optical materials. The z-scan technique is one of the most commonly used, owing to its advantages such as the simplicity of the experimental setup and the optical measurements, as well as the ease of interpreting the results obtained. Moreover, this technique is sensitive to most nonlinear optical effects, such as self-focusing, self-defocusing, nonlinear refraction, nonlinear saturable absorption (SA), and optical limiting [25,29]. It can be used to measure the nonlinear refractive index (n2), the nonlinear absorption coefficient (β), and the third-order nonlinear optical susceptibility (χ(3)) of many optical materials. There are two variants of the z-scan technique. In the first, called the closed-aperture z-scan, the sample is scanned along the z-axis while the transmitted beam passes through a closed (partially open) aperture, and the nonlinear refractive index (n2) is measured. In the second, called the open-aperture z-scan (the aperture is completely open or removed from the system), the nonlinear absorption coefficient (β) is measured. In addition to providing the real and imaginary parts of the third-order nonlinear optical susceptibility (χ(3)), the technique can also be used to study optical power limiting properties.
EXPERIMENTAL METHODS AND MEASUREMENTS
The copolymer poly(3HT-Co-Th) was prepared using the addition polymerization method. The monomers 3HT and Th were purchased from the Aman International Industrial Company, India. Two different weight ratios of the 3HT and Th monomers were mixed together. The poly(3HT-Co-Th)-PMMA polymer blend film samples were prepared at different weight ratios using the casting method. 4 g of poly(methyl methacrylate) (PMMA) was dissolved in 10 ml of chloroform and the mixture was stirred for 3 hours until the polymer completely dissolved and a homogeneous solution was produced. Then, different weight percentages of the copolymer poly(3HT-Co-Th), namely 0.033%, 0.040%, 0.046%, 0.053%, and 0.060%, were added to the PMMA solution. The resulting solution was stirred until the two polymers were mixed and homogeneous solutions were formed. The solutions of the poly(3HT-Co-Th)-PMMA polymer blend at the different copolymer weight ratios were cast on glass slides of 1 mm thickness and left to dry progressively, yielding hard poly(3HT-Co-Th)-PMMA polymer blend films. The average thickness of these films was around 1 mm.
The absorbance (A) and transmittance (T) spectra of the prepared polymer film samples were measured using a Cecil UV-Visible double-beam spectrophotometer (Model CE-7500) covering the wavelength range 190-1100 nm.
The z-scan technique was used to measure the nonlinear optical properties and determine the associated nonlinear optical parameters of the poly(3HT-Co-Th)-PMMA polymer blend films. Fig. 1 shows the schematic diagram of the z-scan experimental setup used in the present work.
The laser used in the z-scan experiments was a continuous-wave (CW) diode-pumped solid-state laser (DPSSL) with a Gaussian beam at λ = 532 nm and an output power adjustable over the range 0-100 mW. A converging lens (L) of focal length 5 cm was used to focus the laser beam on the film sample. The radius of the laser beam (w0) at the beam waist is approximately 18 μm, and the calculated peak intensity of the laser beam is I0 = 1.94 kW/cm2. The corresponding Rayleigh range (ZR) is 1.91 mm, which is consistent with the z-scan condition that the sample length (L) must be less than the Rayleigh range (ZR), namely L << ZR [22]. The output laser beam was split into two parts by the beam splitter (BS). The first part was directed toward the photodetector D1, which was used to measure the power of the laser beam incident on the sample. The second part was focused on the film sample by the converging lens (L) and passed through the sample; it then passed through the narrow aperture and fell on the photodetector D2, placed behind the aperture to measure the power of the transmitted beam. The radius of the aperture used in the closed (partially open) aperture configuration is 1 mm.
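As a quick sanity check on these beam parameters, the Rayleigh range and on-axis peak intensity of a Gaussian beam follow directly from the quoted waist radius. The short Python sketch below (not part of the original work) reproduces the quoted ZR; the 10 mW input power is an assumed value that approximately reproduces the quoted peak intensity, since the power actually used for that number is not stated.

```python
import numpy as np

# Gaussian-beam consistency check (illustrative only, not the authors' analysis code).
lam = 532e-9    # laser wavelength (m)
w0 = 18e-6      # beam-waist radius (m), as quoted in the text
P0 = 10e-3      # assumed input power (W); the value actually used is not stated

zR = np.pi * w0**2 / lam          # Rayleigh range of a Gaussian beam
I0 = 2 * P0 / (np.pi * w0**2)     # on-axis peak intensity at the waist

print(f"z_R = {zR * 1e3:.2f} mm")                  # ~1.91 mm, matching the text
print(f"I_0 = {I0 * 1e-4 / 1e3:.2f} kW/cm^2")      # ~1.96 kW/cm^2 for 10 mW input
```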
RESULTS AND DISCUSSION
UV-Visible absorbance spectra of the poly(3HT-Co-Th)-PMMA polymer blend films for different weight ratios of the copolymer poly(3HT-Co-Th) were recorded over the wavelength range 300-900 nm using the double-beam spectrophotometer, as shown in Fig. 2. The spectra show that the main absorbance peaks are located around 466 nm. The absorbance values are in the range 0.05-0.34 (arb. units) for copolymer weight ratios of 0.033%-0.060%. It can be clearly seen that the absorbance of the poly(3HT-Co-Th)-PMMA film increases with increasing weight ratio of the copolymer. The normalized transmittance curves of the closed-aperture z-scan, the open-aperture z-scan, and the pure nonlinear refraction of the poly(3HT-Co-Th)-PMMA polymer blend film at different copolymer weight ratios were measured and are shown in Figs. 3(a)-(c), respectively. The contribution of the nonlinear refraction alone (the pure normalized transmittance curves in Fig. 3(c)) was obtained by dividing the normalized transmittance data of the closed-aperture z-scan by those of the open-aperture z-scan. Fig. 3(a) shows that the normalized transmittance curve exhibits a peak in the negative part of the z-axis followed by a valley in the positive part, i.e., a peak-to-valley feature. Such behavior indicates that the laser beam experiences self-defocusing when passing through the polymer film samples, and thus that these samples have negative values of the nonlinear refractive index (n2 < 0). It is clear from Fig. 3(b) that the normalized transmittance increases as the sample approaches the focal point, indicating that the prepared film samples exhibit saturable absorption (SA) as the laser beam intensity increases, and hence that they have negative values of the nonlinear absorption coefficient (β < 0). The nonlinear optical parameters n2, β, and the real (Re(χ(3))) and imaginary (Im(χ(3))) parts of the third-order nonlinear optical susceptibility (χ(3)) can be determined from the following relations. Using the difference between the peak and valley transmittances of the normalized transmittance curve, ∆Tp-v = Tp - Tv, the on-axis phase shift ∆ϕ0 can be determined from relation (1) [25,26], where S is the linear transmittance of the aperture, ra is the radius of the aperture, wa is the radius of the laser beam at the entrance of the aperture, and λ is the wavelength of the laser beam. The nonlinear refractive index (n2) is then obtained from relation (4) [26], where I0 is the intensity of the laser beam at the focus (z = 0), given by relation (5), and P0 is the laser input power.
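The relations referred to above as (1), (4), and (5) did not survive text extraction. For reference, the standard closed-aperture z-scan expressions matching the variable definitions given in the text (the usual Sheik-Bahae treatment) are reproduced below; they are a reconstruction rather than a verbatim copy of the original equations, and d (the focus-to-aperture distance) is a symbol introduced here.

```latex
\Delta T_{p\text{-}v} = 0.406\,(1 - S)^{0.25}\,\lvert\Delta\phi_0\rvert
\qquad\text{(relation (1))}

S = 1 - \exp\!\left(-\frac{2 r_a^{2}}{w_a^{2}}\right), \qquad
w_a \simeq \frac{\lambda\, d}{\pi w_0}\quad\text{(far field)}

n_2 = \frac{\Delta\phi_0}{k\, I_0\, L_{\mathrm{eff}}}, \qquad k = \frac{2\pi}{\lambda}
\qquad\text{(relation (4))}

I_0 = \frac{2 P_0}{\pi w_0^{2}}
\qquad\text{(relation (5))}
```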
The induced refractive index change (Δn) of the material is given by relation (6), where I is the intensity of the incident laser beam.
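The missing relation (6) is, from the definitions given, presumably the usual intensity-dependent index change:

```latex
\Delta n = n_2\, I \qquad\text{(relation (6))}
```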
The nonlinear absorption coefficient (β) is given by relation (7), where ∆T is the difference between the normalized transmittance at the focal point (z = 0) in the open-aperture z-scan curve and the baseline, and Leff is the effective length of the sample, given by relation (8), where α0 is the linear absorption coefficient of the medium.
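The standard open-aperture forms consistent with these definitions (again a reconstruction rather than the original typeset equations) are:

```latex
\beta = \frac{2\sqrt{2}\,\Delta T}{I_0\, L_{\mathrm{eff}}}
\qquad\text{(relation (7))}

L_{\mathrm{eff}} = \frac{1 - e^{-\alpha_0 L}}{\alpha_0}
\qquad\text{(relation (8))}
```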
The nonlinear optical parameters n2 and β are associated with the real (Re(χ(3))) and imaginary (Im(χ(3))) parts of the third-order nonlinear optical susceptibility (χ(3)) and provide important information about the properties of the material. The real and imaginary parts can be determined from relations (9) and (10) [30], and the magnitude of the complex third-order nonlinear optical susceptibility is given by relation (11). The nonlinear refractive index (n2), the nonlinear absorption coefficient (β), and the third-order nonlinear optical susceptibility (|χ(3)|) were plotted as functions of the weight ratio of the copolymer poly(3HT-Co-Th), as shown in Figs. 5, 6, and 7, respectively. The calculated values of the optical parameters of the prepared poly(3HT-Co-Th)-PMMA polymer blend film for the different copolymer weight ratios are summarized in Table 1. It is clearly seen from this table that the values of the nonlinear optical parameters increase with increasing weight ratio of the copolymer. The optical power limiting properties of the poly(3HT-Co-Th)-PMMA polymer blend film for the different copolymer weight ratios were also studied using the z-scan setup. The film sample was fixed after the focal point of the lens, the laser input power was varied progressively, and the corresponding laser output power was recorded by the photodetector D2. Fig. 8 shows the optical power limiting curves (laser output power versus laser input power) for the poly(3HT-Co-Th)-PMMA polymer blend film at different copolymer weight ratios. The figure shows that the shape of the input-output power characteristic depends on the weight ratio of the copolymer: the film exhibits more pronounced power limiting behavior as the copolymer weight ratio increases. This is because a higher copolymer weight ratio increases the number of absorbing molecules in the polymer film, which increases the absorption of the incoming photon energy; as a result, the output power of the laser beam decreases.
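Relations (9)-(11) are also missing from the extracted text; the widely used conversion formulas consistent with the quantities discussed here are given below, where n0 is the linear refractive index, ε0 the vacuum permittivity, and c the speed of light (symbols assumed, since they are not defined in the surviving text), with n2 in cm2/W and β in cm/W.

```latex
\operatorname{Re}\chi^{(3)}\,(\mathrm{esu}) = 10^{-4}\,\frac{\varepsilon_0\, c^{2} n_0^{2}}{\pi}\, n_2
\qquad\text{(relation (9))}

\operatorname{Im}\chi^{(3)}\,(\mathrm{esu}) = 10^{-2}\,\frac{\varepsilon_0\, c^{2} n_0^{2}\,\lambda}{4\pi^{2}}\, \beta
\qquad\text{(relation (10))}

\lvert\chi^{(3)}\rvert = \sqrt{\big(\operatorname{Re}\chi^{(3)}\big)^{2} + \big(\operatorname{Im}\chi^{(3)}\big)^{2}}
\qquad\text{(relation (11))}
```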
The values of the optical power limiting threshold (Pth) of the prepared poly(3HT-Co-Th)-PMMA polymer blend film were determined for the different weight ratios of the copolymer poly(3HT-Co-Th).
The estimated values of Pth for the different copolymer weight ratios are shown in Table 2. The optical power threshold depends on the weight ratio of the copolymer poly(3HT-Co-Th) and decreases with increasing weight ratio.
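Definitions of the limiting threshold vary between reports; one common convention takes Pth as the input power at which the output falls to a stated fraction of the low-power linear response. The sketch below (with synthetic data, not the measured values of this work) illustrates that convention; the 50% criterion and the number of points used for the linear fit are assumptions.

```python
import numpy as np

def limiting_threshold(p_in, p_out, n_linear=5, drop=0.5):
    """Estimate an optical limiting threshold from output vs. input power.

    Fits the first n_linear points (assumed linear regime) and returns the first
    input power at which the measured output falls below `drop` times the linear
    extrapolation; returns None if limiting is never reached.
    """
    slope = np.polyfit(p_in[:n_linear], p_out[:n_linear], 1)[0]
    limited = p_out < drop * slope * p_in
    return p_in[np.argmax(limited)] if limited.any() else None

# Synthetic, illustrative curve only (a saturating output response).
p_in = np.linspace(1, 100, 50)               # mW
p_out = 40 * (1 - np.exp(-p_in / 40))        # mW
print(f"Pth ~ {limiting_threshold(p_in, p_out):.0f} mW")
```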
CONCLUSIONS
The copolymer poly(3HT-Co-Th) was prepared using the addition polymerization method, and film samples of the poly(3HT-Co-Th)-PMMA polymer blend were prepared for different weight ratios of the copolymer using the casting method. The nonlinear optical properties and the optical power limiting behavior of the prepared films were studied for the different copolymer weight ratios. The nonlinear optical parameters of the films, namely the nonlinear refractive index (n2), the nonlinear absorption coefficient (β), and the third-order optical susceptibility (|χ(3)|), were measured using the z-scan technique with a continuous-wave (CW) diode-pumped solid-state laser (DPSSL) operating at a wavelength of 532 nm. The prepared poly(3HT-Co-Th)-PMMA polymer blend films exhibit self-defocusing and nonlinear saturable absorption (SA), indicating negative values of both the nonlinear refractive index and the nonlinear absorption coefficient (n2 < 0 and β < 0). The obtained results show that increasing the weight ratio of the copolymer poly(3HT-Co-Th) in the blend enhances the UV-visible absorption of the film and increases the values of the nonlinear optical parameters n2, β, and |χ(3)|.
The optical power limiting behavior of the poly(3HT-Co-Th)-PMMA polymer blend film was studied for different weight ratios of the copolymer poly(3HT-Co-Th). The results show that the prepared blend films exhibit clear optical power limiting with a reasonably low limiting threshold (Pth). These results indicate that the prepared poly(3HT-Co-Th)-PMMA polymer blend films are promising candidates for different optical and electronic applications, such as optical power limiters, optical sensors, solar cells, and other optical and optoelectronic devices.
SOURCES OF FUNDING
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. | 4,152.4 | 2021-01-08T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Zodiacal Exoplanets in Time (ZEIT) VIII: A Two Planet System in Praesepe from K2 Campaign 16
Young planets offer a direct view of the formation and evolution processes that produced the diverse population of mature exoplanet systems known today. The repurposed Kepler mission K2 is providing the first sample of young transiting planets by observing populations of stars in nearby, young clusters or stellar associations. We report the detection and confirmation of two planets transiting K2-264, an M2.5 dwarf in the 650 Myr old Praesepe open cluster. Using our notch-filter search method on the K2 lightcurve, we identify planets with periods of 5.84 d and 19.66 d. This is currently the second known multi-transit system in open clusters younger than 1 Gyr. The inner planet has a radius of 2.27$_{-0.16}^{+0.20}$ R$_\oplus$ and the outer planet has a radius of 2.77$_{-0.18}^{+0.20}$ R$_\oplus$. Both planets are likely mini-Neptunes. These planets are expected to produce radial velocity signals of 3.4 and 2.7 m/s respectively, which is smaller than the expected stellar variability in the optical ($\simeq$30 m/s), making mass measurements unlikely in the optical, but possible with future near-infrared spectrographs. We use an injection-recovery test to place robust limits on additional planets in the system, and find that planets larger than 2 R$_\oplus$ with periods of 1-20 d are unlikely.
INTRODUCTION
Planets and their host stars can change dramatically over their lifetimes. Their structural, orbital, and atmospheric properties are all expected to evolve through interactions with their host star (e.g., Ehrenreich et al. 2015), the protoplanetary disk from which they formed (e.g., Cloutier & Lin 2013), other planets in the system (e.g., Chatterjee et al. 2008), and the greater stellar environment (e.g., Cai et al. 2018). Understanding the underlying drivers and relative importance of these evolutionary mechanisms is critical for revealing the early sculpting of planetary systems and the conditions that give rise to the diversity of mature planetary systems revealed by Kepler and earlier exoplanet surveys (e.g., Lissauer et al. 2014;Inamdar & Schlichting 2016).
The Zodiacal Exoplanets in Time (ZEIT) survey (Mann et al. 2016a) was designed to identify and characterize transiting planets in these young clusters and star forming regions using the precise photometry from K2 (Van Cleve et al. 2016). The greater goal is to better understand how planets form and evolve by comparing the statistical properties of exoplanets of different ages to those of older systems found during the original Kepler mission (Borucki et al. 2010; Thompson et al. 2018). Thus far we have identified planets in the Hyades, Praesepe (Mann et al. 2017b), and Upper Scorpius (Mann et al. 2016b), many of which were found near-simultaneously by similar surveys focusing on exoplanets in young stellar associations (e.g., Obermeier et al. 2016; David et al. 2016; Pepper et al. 2017; Ciardi et al. 2018; Livingston et al. 2018).
Multi-transiting planetary systems are uniquely useful for studying stellar and planetary properties. In cases where the planets' eccentricities can be independently constrained (e.g., through dynamics; Deck & Agol 2016;Gillon et al. 2017), multi-transiting systems can be used to constrain stellar densities with a precision that rivals eclipsing binaries (e.g., Mann et al. 2017a). Even with no information on the host star properties, differences between the measured transit duration of planets in the same system can be used to measure the relative eccentricities (Kipping et al. 2012). Multi-transit systems where planets undergo transit timing variations offer the best opportunity to measure the masses of small planets (e.g., Deck & Agol 2015;Hadden & Lithwick 2017a). Lastly, these systems provide a measurement of the mutual inclination of planets, a probe of the entropy of a system and hence the role of dynamical disruptions from their (expected) initially flat configuration (e.g., Figueira et al. 2012;Ballard & Johnson 2016).
Multi-transiting systems in clusters offer a unique route to study the dynamical properties of planets with known (young) ages. So far, there is only one known multi-transiting system in an open cluster: K2-136, a three-planet system in the 650 Myr old Hyades cluster (Livingston et al. 2018; Ciardi et al. 2018). Here, we present the discovery of two planets transiting the 650 Myr old Praesepe cluster star K2-264 (JS 597; Jones & Stauffer 1991) from its K2 light curve. K2-264 hosts two super-Earth/mini-Neptune-sized planets in short (≈6 and ≈20 day) period orbits. We describe our discovery and follow-up observations in Section 2, and we describe our analysis to determine stellar properties in Section 3. In Section 4 we place limits on additional planets in the system, and in Sections 5 and 6 we describe our transit fitting to determine stellar parameters and our false positive probability analysis. Finally, in Section 7, we discuss the implications of discovering a second multi-transiting cluster system.

The K2 Campaign 16 photometry of K2-264 was processed (Stumpe et al. 2012) prior to public release of the data on 30 May 2018. K2-264 was selected as part of four K2 guest observer programs in Campaign 16, three of which selected K2-264 on the basis of membership in the Praesepe cluster.
We applied our detrending and transit search pipeline, which is described in detail in Rizzuto et al. (2017), to the data for K2-264. To summarize the process, we first removed the K2 roll or flat-field systematic, caused by the instability of the K2 pointing and pixel-response variations using the method of Vanderburg & Johnson (2014). This produced a cleaned lightcurve that was mostly free of instrumental systematics but still contains signals from stellar variability and transiting planets. We removed the astrophysical variability with a "notchfilter," which fits a 1-day window of the light curve as a combination of an outlier-resistant second-order polynomial and a trapezoidal notch. The inclusion of the notch allows aggressive detrending outside the notch without over-fitting that may remove or weaken transit-like signals. This window is then moved along each point in the light-curve, detrending the variability signal from the entire dataset. The periodic transits were then identified using the Box Least Squares algorithm (Kovács et al. 2002) on the detrended lightcurve. Figure 1 shows the rotational variability, detrended lightcurve, and detected transit signals.
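For readers who want to reproduce the search step conceptually, the sketch below runs a box least squares search on a detrended light curve using astropy's BoxLeastSquares. It is not the authors' pipeline (which uses the Kovács et al. 2002 BLS inside the notch-filter detrending), and the injected toy transit, cadence, and noise level are placeholder values.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

# Placeholder detrended light curve: ~30 min cadence over an 80 d campaign.
rng = np.random.default_rng(0)
time = np.arange(0, 80, 0.0204)
flux = 1 + 1e-4 * rng.standard_normal(time.size)
flux[(time % 5.84) < 0.08] -= 1.5e-3          # toy 5.84 d box-shaped transit

bls = BoxLeastSquares(time, flux)
result = bls.power(np.linspace(1.0, 30.0, 20000), 0.08)   # trial periods, 0.08 d duration
best_period = result.period[np.argmax(result.power)]
print(f"strongest BLS period: {best_period:.3f} d")
```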
Once the two transiting planets were detected, we reextracted the data using a simultaneous fit to the K2 roll systematic, low-frequency stellar variability, and transits, including outlier rejection as described in Vanderburg et al. (2016). The final lightcurve, following flattening by removal of the best-fit low-frequency variability and significant outliers, was then used for our MCMC transit fitting described below.
NIR Spectra from SPEX
On 2 June 2018 we observed K2-264 with the InfraRed Telescope Facility (IRTF) SpeX medium-resolution spectrograph. We used the 0.3″ slit in SXD mode, which yielded a spectral resolution of R ≈ 2000 from 0.7 to 2.55 µm. Extraction and calibration of the spectrum, including flat, bias, and wavelength correction, was carried out using the Spextool package, which incorporates the xtellcor package (Vacca et al. 2003) to correct for telluric contamination. The observation was taken in poor conditions at high airmass and had a median signal-to-noise ratio (SNR) per pixel of 15 in the first two orders covering 1.4-2.5 µm, and SNR of 10 in the two orders covering 0.9-1.3 µm. Given the low SNR of the spectrum, we did not attempt to extract stellar properties such as Teff, log g, or metallicity. Figure 2 displays the resulting spectrum of K2-264. We measured a radial velocity from the spectrum by cross-correlating with a similar spectral-type standard. This was done over each order using the tellrv package (Newton et al. 2014). After correcting for barycentric motion, the cross-correlation yielded a radial velocity for K2-264 of 26±6 km/s, which is within ∼1-σ of the expected radial velocity for a Praesepe member.
Literature Photometry and Astrometry
Photometry from multiple all-sky surveys was compiled to build a full SED for K2-264. Optical g, r, i, z magnitudes were taken from the Pan-STARRS point source catalog (Flewelling et al. 2016). Near-IR J, H, and Ks photometry was taken from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006), and the r' magnitude was taken from the Carlsberg Meridian Catalog (CMC15; Muiños & Evans 2014). Mid-IR magnitudes in the W1-4 bands were taken from the Wide-Field Infrared Survey Explorer (WISE; Wright et al. 2010). The parallax and the G, BP, and RP magnitudes were taken from the Gaia mission second data release (Gaia Collaboration et al. 2018). These data for K2-264 are shown in Table 1.
Archival Imaging
We examined archival imaging observations of K2-264 from several different surveys to search for nearby stars that might contribute the transit signals we see.
In particular, we examined observations from the Palomar Observatory Sky Survey (POSS-I) to identify background stars at the present-day position of K2-264 , and we used observations from the Pan-STARRS survey to search for nearby faint companions.
We first used the POSS images of K2-264 to rule out the presence of bright background stars at the present-day position of the target star. K2-264 was observed by POSS in 1950, when its position was ∼2.7″ away from its present-day position due to the star's proper motion (µ = 40.1 ± 0.1 mas yr−1). K2-264's PSF partially overlaps its present-day position, but it is still possible to rule out some nearby companions. Based on nearby stars observed at the same time, we estimate that we would be able to detect a background star at the present-day position of K2-264 brighter than R ∼ 19 mag. Since we see no evidence for such a star, we can rule out background companions down to about three magnitudes fainter than K2-264. We show the POSS image in Figure 3.
We also used observations from the Pan-STARRS survey to search for and rule out faint stars near the position of K2-264. Neither a query of the Pan-STARRS archive point source catalog nor visual inspection of the images revealed any stars closer to K2-264 than 30". With Pan-STARRS, we can rule out nearby stars to fairly faint limits (r ≈ 20). The Pan-STARRS image is shown in Figure 3.
Companion Constraints from Gaia Data Release 2
While detection limits for additional sources surrounding stars in the Gaia second data release (Gaia Collaboration et al. 2018) are not characterized by the Gaia team, limits can be estimated using populations of known binaries detected in ground-based imaging surveys. Ziegler et al. (2018a) used a sample of 620 binary companions to Kepler Objects of Interest (KOIs) detected with Robo-AO imaging to characterize the detectability of companions 1-4" from a primary in the Gaia second data release. Ziegler et al. (2018a) find that companions with separations <1 arcsecond are not listed as separate sources in the Gaia catalog, and provide contrast limits out to separations of 4". This method can be extended to smaller separations by examining the quality of the Gaia astrometric fit, in particular the significance of the "extra-error" term. In order to do this we supplemented 363 high-confidence binary companions to KOIs identified by Robo-AO (Law et al. 2014; Baranec et al. 2016; Ziegler et al. 2017, 2018b) with 93 companions detected at ρ < 1″ using imaging or aperture mask interferometry with the Near Infrared Camera (NIRC2) on the Keck 2 telescope by Kraus et al. (2016). The higher spatial resolution of Keck, particularly when combined with aperture masking, provided companions down to ρ ≈ 10-20 mas. The Robo-AO LP600 filter is very similar to the Gaia G bandpass (Ziegler et al. 2018a); however, the companions from Kraus et al. (2016) were detected in K-band. Under the assumption that these companions were very likely to be bound due to their small separations, we interpolated Gaia G band primary-secondary contrasts using the K-band contrast, the primary estimated effective temperature from Kraus et al. (2016), and a 2 Gyr PARSEC 1.2s isochrone (Chen et al. 2014).
We then queried the Gaia second data release (DR2) catalog in a 10 arcsecond cone around each KOI with a detected companion. We assessed detection by Gaia on the basis of three separate conditions: (1) the companion was identified as a unique source in the catalog at the expected position angle and separation and with the expected contrast; (2) the companion was not resolved as a distinct source in the Gaia catalog, but the astrometric extra-error significance (D) was >10-σ (this was only used for companions with separations <1 arcsecond); (3) the primary star was missing from the Gaia catalog; again, this condition was only applied to companions with separations of ρ < 1″. Our interpretation assumes that clear binaries for which the astrometric solution was extremely poor were removed from the second data release, which is listed in Gaia Collaboration et al. (2018) as the intended operating procedure employed by the Gaia data reduction team. Finally, if the contrast of the companion and the magnitude of the primary would imply a Gaia G magnitude of the secondary of >21 mag, we removed that companion from the test sample, as it falls below the faint limit of the Gaia survey and may not be robustly detected. Figure 4 displays the separation and contrast of the recovered and non-recovered companions in the Gaia second data release. We find similar magnitude limits in the 1-3 arcsecond range as Ziegler et al. (2018a), with 50% recovery for ∆G = 3 mag at 1 arcsecond and for ∆G = 6 mag at 3". Inside 1 arcsecond, companions with ∆G < 2 mag are reliably detected on the basis of the astrometric fit down to separations of 80 mas.
There were no sources within 35 arcseconds of K2-264 in Gaia DR2, and the astrometric extra-error significance for K2-264 is D = 4.98-σ. The Gaia DR2 release notes (Gaia Collaboration et al. 2018) state that for stars with well-behaved astrometry, D should be considered as a half-normal with mean zero and spread of unity. Furthermore, Gaia DR2 astrometry may contain instrument and attitude modelling errors that may inflate D for some sources. We therefore find no evidence for a companion to K2-264 with ∆G < 2 mag at separations of 80-1000 mas.
STELLAR PARAMETERS
Effective Temperature and Bolometric Flux: We simultaneously solved for the spectral type and bolometric flux (Fbol) by fitting the literature photometry (Section 2.3) using a grid of M-dwarf templates, following the technique outlined in the previous papers in this series (e.g., Mann et al. 2017b, 2018). For the templates, we used a set of flux-calibrated spectra of members of the Hyades open cluster, which were observed as part of programs to characterize nearby M dwarfs (Gaidos et al. 2014). We first filled missing regions of the template spectra for which data were not available with BT-SETTL atmosphere models (Allard et al. 2011) of the corresponding temperature, and then reddened each template according to the E(B-V) value for Praesepe from Taylor (2006). For each template, we computed synthetic magnitudes using the filter profiles and zero-points from Evans et al. (2018) for the Gaia bands, Mann et al. (2015) for other optical bands, and Cohen et al. (2003) for 2MASS. We compared these synthetic magnitudes to the archival values, letting the template choice and overall flux level shift as free parameters. For each template, we computed Fbol by integrating over the full spectrum. Our final adopted spectral type and Fbol were those corresponding to the best-fit template, weighted by the χ2 values from the comparison between observed and synthetic (template) photometry. This method yielded a spectral type of M2.5(±0.5) and an Fbol of 3.068±0.068×10−11 erg cm−2 s−1. The errors in Fbol account for variations due to many possible template fits, uncertainties in the cluster reddening, and uncertainties arising from interpolating over gaps in the spectrum. We show the best-fit template and a comparison to the photometry in Figure 5.
To determine R*, M*, and ρ*, we used the empirical MKs-R* relation from Mann et al. (2015); as a check, we also derived the radius from the Stefan-Boltzmann equation, our bolometric flux, the temperature of the best-fit template star, and the Gaia parallax. We find that the radius corresponding to the best-fit temperature is 0.475±0.018 R☉, which agrees very closely with the radius from the MKs-R* relation.
To calculate the total stellar luminosity, we combined our Fbol value with the Gaia parallax, which yielded 0.0330 ± 0.0012 L☉. Joining this with our radius determination and the Stefan-Boltzmann equation produced a Teff of 3580±70 K. This Teff value is consistent with the value assigned to our best-fit template (3560±60 K), derived by comparison to BT-SETTL models (Allard et al. 2011), as described in Mann et al. (2013).
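The quoted luminosity and effective temperature follow directly from the bolometric flux, the Gaia distance, and the radius via the Stefan-Boltzmann law; the short check below (not from the paper) reproduces them to within rounding.

```python
import numpy as np

F_bol = 3.068e-11            # erg cm^-2 s^-1 (Section 3)
d_cm  = 186.6 * 3.0857e18    # Gaia DR2 distance in cm (1 pc = 3.0857e18 cm)
R_cm  = 0.475 * 6.957e10     # stellar radius in cm (R_sun = 6.957e10 cm)
sigma = 5.6704e-5            # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)
L_sun = 3.828e33             # erg s^-1

L = 4 * np.pi * d_cm**2 * F_bol
T_eff = (L / (4 * np.pi * R_cm**2 * sigma)) ** 0.25

print(f"L = {L / L_sun:.4f} L_sun")   # ~0.033 L_sun, as quoted
print(f"T_eff = {T_eff:.0f} K")       # ~3580 K, as quoted
```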
Rotation Period: To determine the rotation period, we took the K2 roll-corrected light curve prior to removing the stellar variability, masked out the transits from the data, and computed a Lomb-Scargle periodogram spanning periods of 1-40 days. We fit a Gaussian to the largest peak in the periodogram to find the period at the peak power, and conservatively estimate the uncertainty as the standard deviation of the Gaussian divided by the periodogram power. We find the rotation period to be 22.8±0.6 days. The rotation period of K2-264 lies directly on the Praesepe rotation-mass sequence. Figure 6 shows the rotation period of K2-264 in relation to the host stars of the seven other known Praesepe members with transiting planets (Mann et al. 2017b) from K2 Campaign 5, and the full Praesepe population (Douglas et al. 2017).
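A minimal illustration of the periodogram step (not the authors' code) is given below, using astropy's LombScargle on a toy light curve with the quoted rotation signal; the amplitude, noise level, and cadence are placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
time = np.arange(0, 80, 0.0204)                      # ~30 min cadence (days)
flux = 1 + 0.015 * np.sin(2 * np.pi * time / 22.8) + 1e-3 * rng.standard_normal(time.size)

frequency = 1 / np.linspace(1.0, 40.0, 5000)         # periods of 1-40 d
power = LombScargle(time, flux).power(frequency)
P_rot = 1 / frequency[np.argmax(power)]
print(f"rotation period ~ {P_rot:.1f} d")            # ~22.8 d for this toy signal
```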
Membership in the Praesepe cluster: The kinematics, position, and photometry of K2-264 all place it as a high-confidence member of the Praesepe cluster. Combining our RV measurement for K2-264 with the Gaia data release 2 position, proper motions, and parallax measurements allows calculation of the three-dimensional space velocity, (U, V, W) = (37.3 ± 4.6, −18.0 ± 2.3, −14.7 ± 3.4) km/s. This agrees with the 3D space velocity of Praesepe derived from the locus of the known members updated with Gaia DR2 astrometry, (42, -20, -10) km/s with an intra-cluster dispersion of 1-2 km/s (Kraus & Hillenbrand 2007). Figure 7 shows the proper motion offset from the Praesepe velocity projected onto the plane of the sky for K2-264 and the Praesepe members of Kraus & Hillenbrand (2007) with membership probability greater than 95%. Here we take the members from Kraus & Hillenbrand (2007), but plot the de-projected proper motions from Gaia DR2. The intra-cluster dispersion appears to be 1-2 mas/yr, or equivalently 1-2 km/s, and K2-264 falls within the range of the velocity dispersion of the members. The Gaia DR2 positions and parallax (π = 5.36±0.06 mas; D = 186.6 +2.1 −4.1 pc) place K2-264 on the periphery of the central core of the Praesepe cluster. Figure 8 shows spatial position slices of known Praesepe members and the position of K2-264 in relation to the cluster. We calculate a kinematic and spatial membership probability of ∼97% for K2-264 using the Bayesian membership selection method of Rizzuto et al. (2011, 2015). In Figure 6 we also show the Gaia (BP-RP, G) color-magnitude diagram of Praesepe members from Kraus & Hillenbrand (2007). The single and binary star sequences are clearly visible, and K2-264 falls directly on the single-star sequence of Praesepe members. In combination with the kinematic and rotational match to the cluster population, this makes membership in Praesepe highly likely. In addition, the narrow single-star sequence rules out an unresolved companion to K2-264 contributing more than 10-20% of the total observed flux. This is consistent with the lack of companions with ∆G ≲ 2 mag determined from the Gaia astrometry in Section 2.5.
Metallicity: Given the strong membership of K2-264 in the Praesepe cluster, we can assign the bulk cluster metallicity of the Praesepe population to it. A value of [Fe/H] = 0.12 (Boesgaard et al. 2013) is used when required for other calculations and model fitting.
LIMITS ON ADDITIONAL PLANETS
We tested the sensitivity of the combination of our transit search and detrending pipeline and the K2 data for K2-264 using the method described in Rizzuto et al. (2017). We injected a series of synthetic planet signals with random parameters into the raw K2 photometry using the BATMAN model of Kreidberg (2015). We used orbital periods of 1-30 days and planet radii of 0.5-10 R ⊕ , and allow orbital phase and impact parameter to have values within the interval (0,1). We fixed the eccentricity to zero in these simulations, as it does not significantly alter detectability of a transit, but would significantly increase the required number of trials to obtain a dense enough mapping of parameter space. We randomly injected 5000 trial planet signals for this test. More information regarding this process can be found in Rizzuto et al. (2017).
For each injected planet, we applied the corrections for the K2 pointing and stellar variability, and searched for planets using the BLS algorithm, retaining signals with power spectrum peaks of >7-σ. If a planet was detected within 1% of both the injected period and injected orbital phase, we flagged it as recovered. Figure 9 displays the results of the injection-recovery testing. We found that the combination of the K2 data and our search methodology is sensitive to 1.7 R ⊕ planets at orbital periods of 1-10 days, 2.0 R ⊕ planets at orbital periods of 10-20 days, and 3.4 R ⊕ planets out to periods of 25 days at the 90% recovery level.
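A schematic version of one injection-recovery trial is sketched below, using batman to inject a transit and BoxLeastSquares to search for it. The real test injected signals into the raw photometry and re-ran the full detrending pipeline, and also required the recovered orbital phase to match; here the recovery criterion is simplified to the period alone, and the orbital and limb-darkening parameters are illustrative.

```python
import numpy as np
import batman
from astropy.timeseries import BoxLeastSquares

def recovered(time, flux, period, t0, rp_rs, tol=0.01):
    """Inject one batman transit, search with BLS, and test period recovery."""
    params = batman.TransitParams()
    params.t0, params.per, params.rp = t0, period, rp_rs
    params.a, params.inc, params.ecc, params.w = 30.0, 90.0, 0.0, 90.0
    params.u, params.limb_dark = [0.4, 0.3], "quadratic"
    model = batman.TransitModel(params, time, supersample_factor=7, exp_time=0.0204)
    injected = flux * model.light_curve(params)

    bls = BoxLeastSquares(time, injected)
    res = bls.power(np.linspace(1.0, 30.0, 20000), 0.08)
    best = res.period[np.argmax(res.power)]
    return abs(best - period) / period < tol

time = np.arange(0, 80, 0.0204)
flux = 1 + 1e-4 * np.random.default_rng(2).standard_normal(time.size)
print(recovered(time, flux, period=5.84, t0=1.0, rp_rs=0.04))
```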
TRANSIT FITTING
To determine transit parameters for K2-264, we fit the cleaned and detrended K2 lightcurve with a Markov Chain Monte Carlo (MCMC) as described in Mann et al. (2016a, 2018) and Johnson et al. (2017). In summary, our MCMC fitting is based on the combination of the BATMAN transit model code (Kreidberg 2015) with the affine-invariant MCMC code emcee (Foreman-Mackey et al. 2013). The BATMAN transit models were computed including oversampling and binning to the ∼30 minute K2 long-cadence exposures. We implemented a quadratic limb-darkening law, and used the triangular sampling method of Kipping (2013). The free parameters in our model that differ for each planet are the planet-star radius ratio (RP/R*), orbital period (P), epoch of first transit mid-point (T0), impact parameter (b), and two parameters used in place of eccentricity and argument of periastron (√e sin ω, √e cos ω). These parameters were all fit simultaneously with a common stellar density (ρ*) and the limb darkening parameters (q1 and q2).
We applied a Gaussian prior on the stellar density ρ*, determined from our SED fitting described in Section 3. We also applied a Gaussian prior on the limb darkening parameters determined from the Limb Darkening Tool Kit (LDTK; Parviainen & Aigrain 2015) using the Husser et al. (2013) models, the Kepler filter response function, and the stellar parameters from Section 3. The priors computed were 0.42±0.10 and 0.38±0.05 for u1 and u2, respectively. The Gaussian prior was applied after conversion to the triangular sampling parameterization for quadratic limb darkening of Kipping (2013). All other parameters were explored with uniform priors with physical boundaries (e.g., 0 < b < 1). We ran the MCMC chain for 200,000 steps, with 50,000 steps of burn-in. The transit fit parameters and other derived quantities are reported in Table 2. For each value, we report the median, with errors derived from the 16th and 84th percentile values of our fit posteriors. The best-fit transit models are shown in Figure 10. We also show posterior distributions for a subset of parameters (Rp/R*, e, b, ρ*) in Figure 11. Both planets have most likely eccentricities close to zero, which is expected for multiple systems of short-period planets, though both the eccentricity and impact parameters are not well constrained by the K2 data. Both planets are also similar in size, with radii of 2.27 +0.20 −0.16 R⊕ and 2.77 +0.20 −0.18 R⊕.
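A heavily compressed sketch of the fitting machinery described above is shown below: a batman transit model whose radius ratio and epoch are sampled with emcee. The full analysis fits all per-planet parameters plus a shared stellar density and limb darkening, applies the Gaussian priors described above, and supersamples to the 30 min cadence; the parameter values and noise level here are illustrative only.

```python
import numpy as np
import batman
import emcee

# Synthetic single-transit light curve (illustrative parameters).
time = np.linspace(-0.1, 0.1, 200)                       # days from mid-transit
params = batman.TransitParams()
params.t0, params.per, params.rp = 0.0, 5.84, 0.042
params.a, params.inc, params.ecc, params.w = 25.0, 89.5, 0.0, 90.0
params.u, params.limb_dark = [0.42, 0.38], "quadratic"
model = batman.TransitModel(params, time)
flux = model.light_curve(params) + 3e-4 * np.random.default_rng(3).standard_normal(time.size)

def log_prob(theta):
    rp, t0 = theta
    if not (0.0 < rp < 0.2 and -0.05 < t0 < 0.05):       # uniform priors with hard bounds
        return -np.inf
    params.rp, params.t0 = rp, t0
    resid = flux - model.light_curve(params)
    return -0.5 * np.sum((resid / 3e-4) ** 2)

ndim, nwalkers = 2, 32
p0 = [0.042, 0.0] + 1e-4 * np.random.default_rng(4).standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
rp_samples = sampler.get_chain(discard=500, flat=True)[:, 0]
print(f"Rp/R* = {np.median(rp_samples):.4f}")
```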
FALSE POSITIVE PROBABILITY
While most planet candidates detected by Kepler and K2 are likely to be bona fide exoplanets, some transit-like signals may be caused by other astrophysical scenarios. We quantified the likelihood of one of these scenarios causing either of the two transit signals we see towards K2-264 using the open-source vespa software package (Morton 2015). Vespa calculates the false positive probability (FPP) of transiting signals using the procedure described by Morton (2012) and Morton et al. (2016). In particular, vespa performs a Bayesian model comparison between several different scenarios which might cause transit-like signals (transiting planets, an eclipsing binary on the target star, an eclipsing binary on a physically bound companion star, or an eclipsing binary on an unassociated background star), and using the transit light curve, stellar parameters, photometric measurements, and observational constraints, determines the likelihood of each model.
In the case of K2-264, we ran vespa using the transit light curve, broadband photometric measurements from the 2MASS survey, and constraints on the presence of nearby stars from the 2MASS J-band image of K2-264. Based on these inputs, vespa calculated an FPP of 4×10−3 for K2-264 b and 9×10−4 for K2-264 c. These FPPs do not take into account the fact that candidates in multi-candidate systems are considerably less likely to be false positives than candidates in single-candidate systems (Latham et al. 2011; Lissauer et al. 2012). We take this into account by applying a "multiplicity boost" to the calculated FPPs for K2-264 b and c. Following Lissauer et al. (2012), we divide the calculated FPPs for K2-264 b and c by a factor of 25, as K2-264 is a two-candidate system. This agrees with the value derived by Sinukoff et al. (2016) for K2 data. After applying the multiplicity boost, we find FPPs for K2-264 b and c of about 10−4 and 4 × 10−5, respectively. Based on these very low FPPs, we consider both candidates in the K2-264 system to be validated planets.
DISCUSSION
We have reported the discovery and characterization of a two-planet transiting system in the Praesepe open cluster. There are now several detected transiting planets in young open clusters and associations observed by K2, though K2-264 is one of only two multiple-planet systems, the other being K2-136, a three-transiting-planet system in the Hyades open cluster.
K2-264 b and c are both likely mini-Neptunes, and both sit near the upper envelope of the field mass-planet radius distribution, as is seen for other planets in intermediate-age clusters. These two planets continue the trend of young open cluster M dwarfs hosting planets of larger radii than have been observed for planets transiting older field population dwarfs from the original Kepler sample (Mann et al. 2017b; Dressing & Charbonneau 2015). Figure 12 shows the planet radii and host star masses of the M-dwarf hosted young planets identified in the ZEIT survey (Mann et al. 2016a,b, 2017b), including K2-264 b/c, compared to older transiting systems. The possible inflation in radii at ∼650 Myr may be a sign of ongoing atmosphere loss (e.g., Lopez et al. 2012). With further completeness testing on the entire sample of Hyades and Praesepe stars observed by K2, the rate and significance of the potential radius difference could be measured.
Systems with multiple transiting planets offer the potential for many science cases not possible with single-transit systems. In particular, eccentricity and stellar density can strongly constrain each other (Van Eylen & Albrecht 2015; Mann et al. 2017a). Planet masses for multiple systems can also be measured from transit timing variations (TTVs) (Hadden & Lithwick 2017b). Though we did not explicitly test for TTVs, the detection of TTVs in the K2 dataset is unlikely; similar size planets show variations of <15 min, which is smaller than the long-cadence timing of 30 min. In particular, TTVs on planet b due to planet c are expected to be very small (<1 min) given that the orbital periods are very far from a resonance (Agol et al. 2005). One scenario where TTV detection could be possible involves the presence of a third planet in or near e.g., a 2:1 resonance with the inner planet b. Such a planet would have to be approximately Earth-mass to have avoided detection in the K2 lightcurve. The TTV amplitude from such a planet on the ephemeris of K2-264 b, assuming zero eccentricities, is 5-15 min depending on the proximity to resonance (Agol et al. 2005).
The currently available long-cadence data from K2 are particularly unsuited to the science cases described above. However, K2-264 is highly amenable to follow-up photometry. Both planets are large enough that ground-based facilities could resolve their transits (∼3 mmag), though the faintness of the host star (r ≈ 16 mag) may be prohibitive for small apertures at high cadence. Shorter-cadence data resolving ingress and egress shapes can place stronger constraints on eccentricity, and offer suggestions as to the types of formation mechanisms responsible for forming these two short-period planets. Space-based follow-up with the Hubble Space Telescope or Spitzer is possible for both planets. In Spitzer channel 1 (∼3.5 µm; Hora et al. 2008), K2-264 is ∼12 mag (Wright et al. 2010), and in a 2 min exposure a photometric precision of 500 ppm is possible. This is sufficient to resolve the transit shape from even a single transit.
Follow-up spectroscopy to measure the masses of K2-264 b and c may not be possible. K2-264 shows stellar variability with a period of 22.8 days and a photometric amplitude of 3%. If the star is seen equator-on, this amplitude of variability is expected to produce RV variability of ∼30 m/s in a similar band as K2. Using the mass-radius relation for planets from Weiss & Marcy (2014) and the radii inferred from our transit fitting, we find that K2-264 b and c have likely masses of 5.8 M⊕ and 7 M⊕, respectively. Assuming circular orbits and the stellar properties derived above, these masses correspond to radial velocity semi-amplitudes of 3.4 m/s and 2.7 m/s, respectively. The amplitude of these signals is significantly smaller than the expected stellar rotation signal. Moving to the near-infrared, where the stellar variability is expected to have a significantly smaller amplitude, could alleviate this problem in combination with our prior knowledge of the rotation period of the star.
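These semi-amplitudes follow from the standard Keplerian relation; the short check below (not from the paper) reproduces the quoted values. The stellar mass of 0.47 M_sun is an assumed value appropriate for an M2.5 dwarf, since the adopted mass is not quoted in the text above.

```python
import numpy as np

G = 6.674e-11                           # m^3 kg^-1 s^-2
M_sun, M_earth = 1.989e30, 5.972e24     # kg

def rv_semi_amplitude(P_days, mp_earth, ms_sun, e=0.0, inc_deg=90.0):
    """Keplerian radial-velocity semi-amplitude in m/s."""
    P = P_days * 86400.0
    mp, ms = mp_earth * M_earth, ms_sun * M_sun
    return ((2 * np.pi * G / P) ** (1 / 3) * mp * np.sin(np.radians(inc_deg))
            / ((ms + mp) ** (2 / 3) * np.sqrt(1 - e ** 2)))

print(f"K2-264 b: {rv_semi_amplitude(5.84, 5.8, 0.47):.1f} m/s")    # ~3.4 m/s
print(f"K2-264 c: {rv_semi_amplitude(19.66, 7.0, 0.47):.1f} m/s")   # ~2.7 m/s
```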
ACKNOWLEDGMENTS
ACR was supported as a 51 Pegasi b Fellow through the Heising-Simons Foundation. AWM was supported through NASA Hubble Fellowship grant 51364 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. AV's work was performed under contract with the California Institute of Technology (Caltech)/Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. S.T.D. acknowledges support provided by the NSF through grant AST-1701468. This paper includes data collected by the K2 mission. Funding for the K2 mission is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. This work has made use of data from the European Space Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A&AS 143, 23. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
Facilities: Kepler, IRTF, Texas Advanced Computing Center
Table 2 notes: (a) Inclination, ω, a/R*, and transit duration were not fit as part of our MCMC, but were derived from other fit parameters (see Section 5).
(b) The most likely eccentricities for both planets are ∼0, and so we report only the 1-σ upper limits.
(c) RP and Teq were calculated using Teff from Section 3. The equilibrium temperature Teq was calculated assuming an albedo of 0.3. | 7,789.6 | 2018-08-21T00:00:00.000 | [
"Physics",
"Geology"
] |
van der Waals solid solution crystals for highly efficient in-air photon upconversion under subsolar irradiance †
Triplet-sensitized photon upconversion (UC) has been proposed for broad applications. However, the quest for superior solid materials has been challenged by the poor exciton transport often caused by low crystallinity, a small crystal domain, and aggregation of triplet sensitizers. Here, we demonstrate substantial advantages of the van der Waals solid solution concept to yield molecular crystals with extraordinary performance. A 0.001%-order porphyrin sensitizer is dissolved during recrystallization into the molecular crystals of a blue-fluorescent hydrocarbon annihilator, 9-(2-naphthyl)-10-[4-(1-naphthyl)phenyl]anthracene (ANNP), which contains bulky side groups. This attempt yields millimeter-sized, uniformly colored, transparent solid solution crystals, which resolves the long-standing problem of sensitizer aggregation. After annealing, the crystals exhibit unprecedented UC performance (UC quantum yield reaching 16% out of a maximum of 50% by definition; excitation intensity threshold of 0.175 sun; and high photostability of over 150000 s) in air, which proves that this concept is highly effective in the quest for superior UC solid materials.
New concepts
Herein, the concept of van der Waals solid solutions has been demonstrated to yield triplet-sensitized photon upconversion (UC) organic crystals with extraordinary performance. The use of a hydrocarbon annihilator with bulky side groups (ANNP), which we discovered to embody this concept, has been shown to dissolve triplet-sensitizing porphyrins into the crystals of ANNP during recrystallization and to generate uniformly colored and transparent crystals, which are the solid solutions we aimed for. The large conformational freedom of the side group in the crystal is shown to be the key to generating solid solutions. Near-equilibrium formation of only the α-phase (ANNP crystal doped with sensitizer molecules) without generating the β-phase (sensitizer aggregates) differentiates this concept from the existing kinetically controlled concept that causes a small crystal domain size and a short exciton diffusion length. The present concept effectively enhances the lifetime and diffusion length of triplet excitons because of the high crystallinity. After annealing, the crystals exhibit outstanding UC performance in air, which demonstrates the effectiveness of this concept. Suppression of the detrimental back energy transfer from the annihilator to the sensitizer has been shown to be an additional advantage. Therefore, the proof-of-concept here opens a large domain of versatile dispersion-force-based organic systems in the quest for superior UC solids.

Reported UC quantum yields of solid UC materials have, however, often remained low (out of a maximum of 50% in this report), as typically reported for the combination of platinum octaethylporphyrin (PtOEP, Fig. 1b) and 9,10-diphenylanthracene (DPA, Fig. 1b). 29-33 To avoid this segregation issue, previous studies used a kinetically controlled approach 30-38 where organic solids were quickly formed. For example, Simon and coworkers 30 reported the fabrication of molecular glasses by rapidly cooling a hot melt of a sensitizer-annihilator mixture, but suppression of the aggregate was incomplete. Other studies 31-38 formed organic thin films by casting a solvent solution on a flat substrate, where researchers, to an effective extent, suppressed the segregation of the sensitizer from the annihilator chromophores.
Some mechanistic studies showed, however, that the grain boundary and thus the grain size were the limiting factors for triplet exciton diffusion. 39,40 Mikhnenko et al. 41 pointed out a significant degree of disorder in amorphous and polycrystalline organic thin films, in particular when the films were cast from solution.
Although the number of reports is limited, some studies used nonkinetic approaches. 13,42-45 For example, Oldenburg et al. 42 fabricated sensitizer-annihilator heterojunctions of thin metal-organic framework (MOF) layers and reported ΦUC < 0.1% and an excitation threshold intensity ‡ (Ith) of ca. 1 mW cm−2. Ogawa et al. 43 reported an aggregation-free dispersion of an anionic sensitizer in the crystal of an ionic annihilator utilizing ionic interactions between the chromophores (ΦUC = 3% and Ith = 49 mW cm−2 in Ar), in which the photographs of the crystals, photostability data, and information on whether or not the crystals included the solvent methanol were not presented. Recently, Roy et al. 45 reported MOF crystals with ΦUC = 1.95% and Ith = 5.1 mW cm−2. However, these values were, as in other MOF-based reports, 13,44 for a liquid suspension and no photostability data were shown, whereas ref. 13 (ΦUC = 0.64% and Ith = 2.5 mW cm−2) showed photodegradation data.
Surprisingly, van der Waals crystals formed by dispersion forces, representing the simplest class of organic crystals, have been nearly unexplored in the quest for high-performance UC solids. This lack of study may be because of the impact of the initial work 29 that showed considerable segregation of PtOEP from the molecular crystal of DPA, which was also found in subsequent reports; 30-33 the authors of ref. 43 described that dispersion-force based strategies sacrifice the advantages of crystalline systems.
The dispersion force approach has, however, many inherent advantages. First, by use of a weak dispersion force, the cost of the chromophores can be minimized because there is no need for elaborate moieties or ligands that cause specific interactions. Second, by use of a defined phase, samples can gain thermodynamic stability. This is in contrast with the previous strategy of using kinetically controlled methods because solids formed by such rapid methods rely on a nonequilibrium state.
Herein, we show, to our best knowledge, the first explicit exploitation of van der Waals forces to unequivocally resolve the long-standing sensitizer segregation problem by means of the classical but resurging concept of solid solutions. 46 In solid solutions, represented by the α and β phases in Fig. 1b, mixing entropy is the driving force that molecularly disperses one component into a solid of the other component. 46,47 Thus, the strategy envisaged here is to selectively generate dispersion-force-based α crystals and avoid emergence of a β phase, which is the sensitizer aggregate (Fig. 1b). We generated such solid solution crystals with an extremely low sensitizer : annihilator mole ratio (ca. 1 : 50 000). Note that the similar term mixed crystal can include heterogeneous systems, such as a mixture of α and β phases, 47 which was not targeted here. One of the key factors of the success here is attributable to our discovery of an excellent hydrocarbon annihilator, 9-(2-naphthyl)-10-[4-(1-naphthyl)phenyl]anthracene (ANNP, Fig. 1c), originally developed as a blue organic light-emitting diode (OLED) chromophore. Researchers often choose an asymmetric structure in OLED molecules. 48,49 The key mechanism responsible for successful formation of α-phase crystals is attributable to the 4-(1-naphthyl)phenyl side group, which features two distinct conformations and provides an interstitial site in the crystal. The sample was generated by a recrystallization method (Fig. 1d and Experimental section, ESI †) over 2-3 days. We thereby generated crystals that display extraordinary performance in air, as shown below.
Results and discussion
The crystals were pinkish (the color of PtOEP) and transparent (Fig. 2a), with a flat-plate shape and thickness between ca. 50 and 250 μm (Fig. 2b). Polarization microscopy indicated twin-like multiple single-crystalline domains (Fig. 2b). The results were highly reproducible and we found no polymorph.
Optical absorption measurements of a crystal were straightforward because of the flat shape (Fig. S1, ESI†). The absorption spectrum of the single crystal (Fig. 2c) was essentially identical to that of a dilute toluene solution of PtOEP, apart from a 3.5 nm redshift of the peak. Fig. 2c also shows a spectrum of an over-saturated suspension of PtOEP in toluene, exhibiting an aggregation feature at 550 nm.50,51 The absence of the aggregation feature in the UC crystal indicates molecular dissolution of PtOEP in the crystal of ANNP and thus formation of a solid solution.
Optical microscopic observations at high magnification did not indicate aggregates (Fig. S2, ESI†). The spectrum did not depend on the light polarization (Fig. S3, ESI†), and thus PtOEP had no preferential orientation, at least for the direction normal to the largest crystal plane. From the absorption spectra, the concentration of PtOEP in the crystals was ca. 5 × 10⁻⁵ M (Table S1, ESI†). This corresponds to a remarkably low sensitizer : annihilator mole ratio of ca. 1 : 50 000, which further supports that the crystals in Fig. 2 are a solid solution.
All of the experiments below were conducted in air. We excited the samples with a laser at 542 nm, which is 3.5 nm away from the absorption peak at 538.5 nm (Fig. 2c), unless otherwise stated. In this report, an excitation intensity of 1 mW cm⁻² at 542 nm corresponds to a sensitizer excitation density52,53 of ca. 4.8 × 10⁻⁵ M s⁻¹. We used a microscope-based setup (Fig. S4, ESI†) to investigate the photoemission from a single crystal.
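For orientation, the intensity-to-excitation-density conversion can be sketched numerically as follows. This is a minimal estimate that assumes a round-number molar absorption coefficient for the PtOEP Q-band; the value quoted above was derived from the measured crystal spectrum, and the assumed numbers only reproduce the order of magnitude.

```python
# Sketch: convert an excitation intensity (mW cm^-2) into a sensitizer
# excitation density (M s^-1). The molar absorption coefficient below is an
# assumed, order-of-magnitude value for the PtOEP Q-band.
H = 6.626e-34          # Planck constant, J s
C_LIGHT = 2.998e8      # speed of light, m s^-1
N_A = 6.022e23         # Avogadro constant, mol^-1

def excitation_density(intensity_mw_cm2, wavelength_nm, eps_M_cm, conc_M):
    """Absorbed photons per volume per time, expressed in M s^-1."""
    photon_energy = H * C_LIGHT / (wavelength_nm * 1e-9)     # J
    flux = intensity_mw_cm2 * 1e-3 / photon_energy           # photons cm^-2 s^-1
    alpha = 2.303 * eps_M_cm * conc_M                        # cm^-1 (Beer-Lambert)
    absorbed_per_cm3 = flux * alpha                          # photons cm^-3 s^-1
    return absorbed_per_cm3 * 1e3 / N_A                      # mol L^-1 s^-1

# 1 mW cm^-2 at 542 nm, assumed eps ~ 5e4 M^-1 cm^-1, PtOEP concentration 5e-5 M
print(f"{excitation_density(1.0, 542, 5e4, 5e-5):.1e} M/s")  # ~3e-5 M/s, same order as ca. 4.8e-5
```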
During excitation at 542 nm, we observed UC emission peaking at 434 nm (Fig. 3a). In contrast, we observed no UC emission from a reference crystal prepared with DPA by the same method (Fig. 3a and S5, ESI†). This demonstrates that the side groups of ANNP (Fig. 1c) played a key role in accommodating PtOEP in the crystal, as discussed below. The quantum yield of the phosphorescence of PtOEP (inset of Fig. 3a) was ca. 1 × 10⁻⁵, indicating that the TET from PtOEP to ANNP was quantitative.
Notably, annealing the as-generated crystals at 90 °C for 4 days greatly enhanced the emission intensity (Fig. 3a), Φ_UC (Fig. 3b), and I_th (Fig. 3c), reaching Φ_UC = 16.4% and I_th = 0.77 mW cm⁻² as the maximum and lowest values, respectively, among measurements of 10 samples. The average values were 13.4% and 2.1 mW cm⁻², respectively; Table S1 and Fig. S6 and S7 (ESI†) present the entire data set. This dramatic enhancement is attributable to the improved crystallinity, as supported by the selective increase of the intensities of the higher-angle peaks in the powder X-ray diffraction patterns upon annealing (Fig. S8, ESI†). This hypothesis is further supported by the drastic increase of the triplet lifetime τ_T from 470 μs to 5.1 ms (Fig. 3d). We also found a slight increase in the fluorescence quantum yield Φ_FL (from 38.7% to 41.7%, Table S2, ESI†). Note that these values were lower than the Φ_FL of ANNP in a toluene solution (3 × 10⁻⁶ M, deaerated by three freeze-pump-thaw cycles), which was measured to be 84.3%; refer to Fig. S9 in the ESI† for the fluorescence spectra of ANNP in the crystal and in toluene solution. Therefore, Φ_UC in the present materials system is largely limited by Φ_FL in the crystalline state.
In Fig. 3c, we regarded 2.31 mW cm⁻², obtained by integrating an AM1.5 solar spectrum54 over the range of 538.5 ± 7 nm, as an equivalent solar irradiance for monochromatic excitation at 542 nm (☉_542nm), based on eqn (1) and the spectrum shown in Fig. 2c, where ε_λ is the molar absorption coefficient of PtOEP in the crystal at wavelength λ. However, such an equivalent irradiance has an unclear meaning, which is also reflected in the lack of a clear definition for it.52 One clear problem with ☉_542nm is that it cannot include ε_λ outside the integration range of eqn (1). We therefore resorted to direct evaluation using a solar simulator. We passed the simulated sunlight through a 510 nm long-pass filter and irradiated an ensemble (3.2 mg) of the crystals (batch #1) (Experimental section and Fig. S10, ESI†). We obtained UC emission from the sample under one-sun (1 ☉) irradiance, whereas we observed no emission when we replaced the crystals by the same quantity of ANNP (inset of Fig. 3e). This verified that the blue fluorescence exclusively originated from UC. From the dependence of the UC emission intensity on the irradiance (Fig. 3e), the slope on the double-logarithmic scale was 1.21 at 1 ☉. Using this slope, we calculated the dimensionless excitation intensity53 (L) to be 11.1 at 1 ☉. Because this is well above the value L = 2 that corresponds to I_th,53 we have confirmed the subsolar nature of the present UC. The theoretical curve fit53 to the data points yielded I_th = 0.175 ☉, which matched L = 2 as shown by the top axis of Fig. 3e. This I_th value is unprecedentedly low.
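The band integration behind the equivalent irradiance can be sketched as follows; the file name and column layout of the AM1.5 data are assumptions, and the plain band integral shown here omits the ε_λ weighting of eqn (1).

```python
# Sketch: integrate an AM1.5 spectrum over 538.5 +/- 7 nm to obtain the
# band-equivalent irradiance for 542 nm excitation. "am15.csv" is a
# hypothetical two-column file (wavelength in nm, spectral irradiance in
# W m^-2 nm^-1).
import numpy as np

wl, e_lam = np.loadtxt("am15.csv", delimiter=",", unpack=True)
band = (wl >= 538.5 - 7.0) & (wl <= 538.5 + 7.0)
irradiance_w_m2 = np.trapz(e_lam[band], wl[band])     # W m^-2 in the band
print(f"{irradiance_w_m2 * 0.1:.2f} mW cm^-2")         # 1 W m^-2 = 0.1 mW cm^-2
```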
The reproducibility check carried out using another sample batch (batch #2, 3.2 mg) reproduced the results with a slightly different I_th = 0.216 ☉.
There are two additional features of the crystals. First, the crystals exhibit high photostability under continuous photoirradiation in air (Fig. 3f, at 542 nm and 20 mW cm⁻²). Such photostability may come from the close-packed molecular arrangement that is attributable to the high crystallinity. Second, thermogravimetric analysis (TGA) showed that the crystals contained 2.1 wt% solvent, and this quantity did not change upon annealing (Fig. S11, ESI†). This indicates stable accommodation of the solvent, which can escape at temperatures greater than 150 °C (Fig. S11, ESI†).
We analyzed single-crystal X-ray diffraction (sc-XRD) data for a single crystal of the as-received ANNP (Fig. 4a) and a single domain cut out of the UC crystal (Fig. 4b). For the latter, only the case after annealing is shown here because the change in the crystallographic parameters caused by annealing is too small to be clearly evident pictorially (see Table S3 for these crystallographic data, ESI †). In Fig. 4b, PtOEP is not seen because of its significantly low concentration.
These data indicate that the 2-naphthyl group had two slightly different conformations, displayed in green and light pink, whose ratio was 72 : 28 for both Fig. 4a and b. In contrast, the 4-(1-naphthyl)phenyl group had two substantially different conformations, displayed in light blue and orange, whose ratio was 51 : 49 for both Fig. 4a and b. As evident in the center of the graphics, two adjacent ANNP molecules cannot simultaneously adopt the light-blue conformation because of their spatial overlap. Only "light blue + orange" or "orange + orange" pairs are allowed. This large conformational freedom suggests the presence of an interstitial space near this side group.
We discuss two notable points. First, as shown in Fig. 4b, there are ethanol molecules, shown in magenta, near this conformationally flexible site, with a mole ratio of ANNP : ethanol = 5 : 1. This ratio did not change upon annealing, which agrees with the constant quantity of the included solvent (2.1 wt%) found by TGA. Here, 2.1 wt% ethanol corresponds to a mole ratio of ANNP : ethanol ≈ 4 : 1, which roughly agrees with the ratio determined by the sc-XRD analysis. Second, the crystal structure of the UC crystal did not change from that of the as-supplied ANNP, except for slight changes in the crystallographic parameters (Table S3, ESI†). This implies the rigidity of the ANNP arrangement in the crystal. From the sc-XRD data, the density of ANNP was 2.44 × 10³ mol m⁻³, which is much lower than those of anthracene (6.99 × 10³ mol m⁻³)55 and DPA (3.72 × 10³ mol m⁻³).56 We surmise that PtOEP molecules were also accommodated near that side group, although this is not directly evident from the present analysis. Thus, the rigid and low-density network of ANNP and the movable 4-(1-naphthyl)phenyl side group are the key factors for realizing the concept we targeted. Notably, analysis of the sc-XRD data assuming toluene instead of ethanol was not possible. However, a trace quantity of toluene could have also been included. The inclusion of a much smaller quantity of toluene can be ascribed to the larger molecular volume of toluene in organic crystals (Voronoi-Dirichlet polyhedron volume, 157.5 Å³) than that of ethanol (86.4 Å³).57

Triplet exciton diffusion characterizes the properties of a solid TTA-UC system.41 In the corresponding relation for I_th, α is the sensitizer absorption coefficient at the excitation wavelength, Φ_TET is the quantum efficiency of the TET (assumed to be unity), a_0 is the minimum distance required for the annihilation of two annihilator triplets, and D_T is the diffusion coefficient of the triplet exciton. For a_0, we used the average value of the nearest-neighbor and second-nearest-neighbor distances (7.44 and 8.84 Å, respectively, Fig. S12, ESI†). Using the average value of I_th for monochromatic excitation at 542 nm (2.1 mW cm⁻²), α (3.5 × 10⁻³ cm⁻¹), and τ_T (5.1 ms), D_T was 9.22 × 10⁻⁷ cm² s⁻¹. The triplet exciton diffusion length L_T is given in terms of D_T and τ_T,41,58 where Z is the dimensionality (from 1 to 3). We assume Z = 3, although this must be elucidated by future work. Depending on the custom, the other form L'_T is also used.41,58 From these relations, the present UC crystals have an L_T (L'_T) of 1.68 (1.19) μm. These values are approximately two orders of magnitude larger than the previously reported values for solid TTA-UCs,13,40,60,61 except L_T ≈ 1.6 μm for water-suspended MOF nanoparticles of ca. 55 nm size.13 We compare these reported L_T and D_T values in Table S4 (ESI†). Thus, within a sphere with radius L_T (L'_T), there are approximately 1.25 × 10⁶ (4.43 × 10⁵) PtOEP molecules. Therefore, the significantly low concentration of PtOEP caused by our use of the α-phase (cf. Fig. 1b) was not problematic. These results reconfirm previous suggestions39,40,43 that high crystallinity and a large crystalline domain are important for efficient UC in solids.
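The diffusion-length arithmetic can be sketched as follows, assuming Z = 3 as in the text and the two conventions L_T = sqrt(2 Z D_T τ_T) and L'_T = sqrt(Z D_T τ_T); the molecule count in the last line uses the nominal 5 × 10⁻⁵ M concentration, whereas the values quoted above follow from the exact concentration in Table S1.

```python
# Sketch: triplet diffusion length from the reported D_T and tau_T, and the
# number of PtOEP molecules within a sphere of radius L_T at the nominal
# concentration. Z = 3 is the assumed dimensionality.
import math

D_T   = 9.22e-7      # cm^2 s^-1
tau_T = 5.1e-3       # s
Z     = 3            # assumed dimensionality
conc  = 5e-5         # mol L^-1 (PtOEP, nominal)
N_A   = 6.022e23

L_T  = math.sqrt(2 * Z * D_T * tau_T)   # cm
L_Tp = math.sqrt(Z * D_T * tau_T)       # cm
print(f"L_T = {L_T*1e4:.2f} um, L'_T = {L_Tp*1e4:.2f} um")   # ~1.68 and ~1.19 um

volume_L = (4 / 3) * math.pi * L_T**3 * 1e-3                  # sphere volume in litres
print(f"PtOEP molecules in the L_T sphere: {volume_L * conc * N_A:.1e}")
```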
Finally, we discuss back-energy transfer (BET, Fig. 5a). BET was previously regarded as inevitable in a binary sensitizer-annihilator solid, based on an estimated BET efficiency (Φ_BET) as high as 40%.35 When Förster resonance energy transfer (FRET)62 is the dominant mechanism, Φ_BET can be estimated by comparing the fluorescence decay time constant (τ_1A) of the UC crystals with τ_1A of crystals prepared without PtOEP (Fig. 5b). Note that this method cannot be used to evaluate the BET caused by simple reabsorption of UC photons by the sensitizer, because such reabsorption does not change τ_1A. Thus, here we assess the BET by FRET that actively quenches the ¹A* state. Our double-exponential fits in Fig. 5b yielded fast and slow components, with time constants of 2.50 and 7.04 ns for the UC sample and 2.45 and 7.18 ns for the reference, respectively. These small differences are considered to arise mostly from the uncertainty in the curve fitting. Thus, Φ_BET in the present system is negligible because of the low concentration of PtOEP. We mention that the τ_1A of ANNP in a toluene solution (3 × 10⁻⁶ M, deaerated by three freeze-pump-thaw cycles) was 3.78 ns (Fig. S13, ESI†), which was similar to τ_1A for the crystals.
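One simple estimator consistent with this comparison is Φ_BET ≈ 1 − τ_UC/τ_ref applied to each fitted component; the sketch below uses the quoted fit results.

```python
# Sketch: FRET-mediated BET efficiency estimated from the fitted singlet decay
# constants of the UC crystal and the PtOEP-free reference, component by
# component, via Phi_BET ~ 1 - tau_UC / tau_ref.
tau_uc  = {"fast": 2.50, "slow": 7.04}   # ns
tau_ref = {"fast": 2.45, "slow": 7.18}   # ns

for name in tau_uc:
    phi_bet = 1.0 - tau_uc[name] / tau_ref[name]
    print(f"{name}: Phi_BET ~ {phi_bet:+.1%}")
# fast: ~ -2%, slow: ~ +2%, i.e. zero within the fitting uncertainty,
# consistent with negligible BET.
```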
Conclusions
We demonstrated the concept of explicitly exploiting van der Waals solid-solution crystals consisting of only an α-phase to resolve the long-standing problem of sensitizer segregation and to realize materials with outstanding UC performance. Compared with the existing concept of using kinetically controlled fast-solidification conditions, the present approach has the advantages of (i) higher thermodynamic stability because of the reliance on (near-)equilibrium states and (ii) higher UC performance because of the simultaneous achievement of a long triplet exciton diffusion length and suppression of detrimental BET from the ¹A* states. The factors in (ii) were caused, respectively, by the large single-crystal domain with high crystallinity and the significantly low concentration of the sensitizer. To form crystalline solid solutions, the interstitial site created by the bulky and movable side group of ANNP has been found to be the key factor, as supported by the comparison with a reference crystal prepared with DPA. The elucidated high Φ_UC, low I_th, and high photostability in air are promising for applications. In particular, the extraordinarily low I_th demonstrated using simulated sunlight indicates that solar-concentration optics are no longer needed for efficient upconversion of terrestrial sunlight. Probably the most important advantage of this concept lies in its reliance on the versatile van der Waals force and hydrocarbon annihilators. Overall, the proof-of-concept presented here is a major technical leap forward in the quest for high-performance UC solids, which will open up diverse photonics technologies in the future.
Notes and references
‡ When calculating I_th, the definition of the laser spot area is important. Some researchers used the 1/e² diameter to calculate the spot area for Gaussian laser beams. However, as described in ref. 63, the spot area calculated from the 1/e² diameter yields an excitation intensity 50% lower than the actual peak intensity in the laser spot. In all previous TTA-UC papers authored by Y. M., the FWHM diameter of the Gaussian profile was used to calculate the spot area, yielding an intensity 1.44 times higher than the actual peak intensity, i.e., a conservative calculation of I_th. In the present article, however, all excitation beams had a top-hat intensity profile (cf. Fig. S4, ESI†), and therefore the quoted intensity values correspond directly to the actual intensity in the spot.
Fig. 5. (a) k_1A(rad) and k_1A(nonrad) refer to the radiative and non-radiative decay rates of ¹A*, respectively, and k_1A(BET) refers to the rate of BET by the Förster mechanism. (b) Time-resolved fluorescence intensity decay curves (excitation: 405 nm, monitor: 455 nm) for the UC crystals (blue) and reference crystals prepared without PtOEP (pink). These curves were obtained by averaging the curves acquired from 10 crystals for each case; the data for the UC crystal were multiplied by 1.21 to match the heights of the two curves at time = 0. | 5,080.6 | 2021-11-09T00:00:00.000 | [
"Materials Science",
"Chemistry",
"Physics"
] |
Evaluation of the quality of Permalloy gratings by diffracted magneto-optical spectroscopy
Magneto-optical Kerr effect (MOKE) spectroscopy in the −1st diffraction order with p-polarized incidence is applied to study arrays of submicron Permalloy wires at polar magnetization. A theoretical approach combining two methods, the local modes method neglecting the edge effects of wires and the rigorous coupled wave analysis, is derived to evaluate the diffraction losses due to irregularities of the wire edges. A new parameter describing the quality of the edges is defined according to their contribution in the diffracted MOKE. The quality factor, evaluated for two different samples, is successfully compared with irregularities visible on atomic force microscopy pictures. © 2005 Optical Society of America OCIS codes: (050.1950) Diffraction gratings; (160.3820) Magneto-optical materials; (240.5770) Roughness. References and links 1. H.-T. Huang and F. L. Terry, Jr., "Spectroscopic ellipsometry and reflectometry from gratings (Scatterometry) for critical dimension measurement and in situ, real-time process monitoring," Thin Solid Films 455-456, 828-836 (2004). 2. R. Antos, I. Ohlidal, J. Mistrik, K. Murakami, T. Yamaguchi, J. Pistora, M. Horie, and S. Visnovsky, "Spectroscopic ellipsometry on lamellar gratings," Appl. Surf. Sci. 244, 225-229 (2005). 3. R. Antos, J. Pistora, I. Ohlidal, K. Postava, J. Mistrik, T. Yamaguchi, S. Visnovsky, and M. Horie, "Specular spectroscopic ellipsometry for the critical dimension monitoring of gratings fabricated on a thick transparent plate," J. Appl. Phys. 97, 053107 (2005). 4. P. Klapetek, I. Ohlidal, D. Franta, and P. Pokorny, "Analysis of the boundaries of ZrO2 and HfO2 thin films by atomic force microscopy and the combined optical method," Surf. Interface Anal. 34, 559-564 (2002). 5. P. Klapetek, I. Ohlidal, D. Franta, A. Montaigne-Ramil, A. Bonanni, D. Stifter, and H. Sitter, "Atomic force microscopy characterization of ZnTe epitaxial films," Acta Phys. Slov. 53, 223-230 (2003). 6. D. Franta and I. Ohlidal, "Ellipsometric parameters and reflectances of thin films with slightly rough boundaries," J. Mod. Opt. 45, 903-934 (1998). 7. I. Ohlidal and D. Franta, "Ellipsometry of Thin Film Systems," Progress in Optics 41, 181-282 (2000). 8. J. I. Martin, J. Nogues, K. Liu, J. L. Vicent, and I. K. Schuller, "Ordered magnetic nanostructures: fabrication and properties," J. Magn. Magn. Mater. 256, 449-501 (2003). 9. M. Grimsditch and P. Vavassori, "The diffracted magneto-optic Kerr effect: what does it tell you?" J. Phys.: Condens. Matter 16, R275-R294 (2004). 10. Y. Suzuki, C. Chappert, P. Bruno, and P. Veillet, "Simple model for the magneto-optical Kerr diffraction of a regular array of magnetic dots," J. Magn. Magn. Mater. 165, 516-519 (1997). 11. J. L. Costa-Kramer, C. Guerrero, S. Melle, P. Garcia-Mochales, and F. Briones, "Pure magneto-optic diffraction by a periodic domain structure," Nanotechnology 14, 239-244 (2003). 12. D. van Labeke, A. Vial, V. A. Novosad, Y. Souche, M. Schlenker, and A. D. Dos Santos, "Diffraction of light by a corrugated magnetic grating: experimental results and calculation using a perturbation approximation to the Rayleigh method," Opt. Commun. 124, 519-528 (1996). 13. A. Vial and D. Van Labeke, "Diffraction hysteresis loop modelisation in transverse magneto-optical Kerr effect," Opt. Commun. 153, 125-133 (1998). 14. Y. Pagani, D. Van Labeke, B. Guizal, A. Vial, and F.
Baida, "Diffraction hysteresis loop modeling in magneto-optical gratings," Opt. Commun. 209, 237-244 (2002). 15. Y. Souche, V. Novosad, B. Pannetier, and O. Geoffroy, "Magneto-optical diffraction and transverse Kerr effect," J. Magn. Magn. Mater. 177-181, 1277-1278 (1998). 16. P. Garcia-Mochales, J. L. Costa-Kramer, G. Armelles, F. Briones, D. Jaque, J. I. Martin, and J. L. Vicent, "Simulations and experiments on magneto-optical diffraction by an array of epitaxial Fe(001) microsquares," Appl. Phys. Lett. 81, 3206-3208 (2002). 17. R. Antos, J. Mistrik, M. Aoyama, T. Yamaguchi, S. Visnovsky, and B. Hillebrands, "Magneto-optical spectroscopy on permalloy wires in 0th and 1st diffraction orders," J. Magn. Magn. Mater. 272-276, 1670-1671 (2004). 18. R. Antos, J. Mistrik, S. Visnovsky, M. Aoyama, T. Yamaguchi, and B. Hillebrands, "Characterization of Permalloy wires by optical and magneto-optical spectroscopy," Trans. Magn. Soc. Japan 4, 282-285 (2004). 19. R. Antos, J. Mistrik, T. Yamaguchi, S. Visnovsky, S. O. Demokritov, and B. Hillebrands, "Evidence of native oxides on the capping and substrate of Permalloy gratings by magneto-optical spectroscopy in the zeroth- and first-diffraction orders," Appl. Phys. Lett. 86, 231101 (2005). 20. S. Ingvarsson, "Magnetization dynamics in transition metal ferromagnets studied by magneto-tunneling and ferromagnetic resonance," (Ph.D. thesis, Brown University, 2001). 21. K. Rokushima and J. Yamakita, "Analysis of anisotropic dielectric gratings," J. Opt. Soc. Am. 73, 901-908 (1983). 22. G. Neuber, R. Rauer, J. Kunze, T. Korn, C. Pels, G. Meier, U. Merkt, J. Backstrom, and M. Rubhausen, "Temperature-dependent spectral generalized magneto-optical ellipsometry," Appl. Phys. Lett. 83, 4509-4511 (2003). 23. P. Hones, M. Diserens, and F. Levy, "Characterization of sputter-deposited chromium oxide thin films," Surf. Coat. Tech. 120-121, 277-283 (1999). 24. D. F. Edwards, "Silicon (Si)," in Handbook of Optical Constants of Solids, E. D. Palik, ed. (Academic, Tokyo, 1998); H. R. Philipp, "Silicon Dioxide (SiO2) (Glass)," ibid.
Introduction
Optical-spectroscopic techniques play an important role in characterizing the quality of grating lithography [1-3]. These techniques are becoming of comparable importance to classical ones such as scanning electron microscopy (SEM) or atomic force microscopy (AFM) because of the destructive character of cross-sectional SEM and the limitations of AFM in studying features beneath a surface. The principles of monitoring the quality of surfaces and thin films are well established using statistical quantities determined by AFM [4,5] and by optical techniques [6,7]. The height deviation and autocorrelation length of surface irregularities measured by AFM and measured optically differ foremost because of the convolution of the AFM tip with the surface. Evaluating imperfections of the wire-edge parts of surfaces in laterally textured films using AFM is obviously very difficult, even with the highest-quality tips.
In the case of gratings made of magnetic materials, the optical techniques can be complemented with magneto-optical (MO) analyses, particularly by measuring the magneto-optical Kerr effect (MOKE). Recently, micromagnetic properties of periodically arranged magnetic wires, dots, and holes in magnetic films have received remarkable attention [8,9]. It has been reported that the diffracted MOKE (D-MOKE) hysteresis loops, i.e., the loops recorded on beams diffracted by gratings, can help to investigate the magnetization distribution in magnetic nanostructures. In particular, the D-MOKE contains more detailed information on the magnetic domains in the vicinity of wire or dot edges or in films with holes. To achieve a faithful analysis of magnetic-domain behavior including the switching process, authors applied theoretical models with different levels of accuracy. In the case of shallow gratings, with a small depth-to-period ratio, approximate analytical models neglecting the internal diffraction edge effects were demonstrated to be adequate to describe the D-MOKE response [9-11]. Those models are based on the far-field Fourier analysis of the lateral amplitude-reflectance distribution and on the assumption of optical and MO uniformity within the depth. However, if the lateral magnetic-domain features to be described are of small sizes, i.e., comparable to the depth, then the D-MOKE response might be affected by the edge effects, and such a model becomes incorrect. Moreover, monitoring more than the lateral distribution of magnetization would obviously require theoretical approaches of the corresponding capability. Since rigorous theoretical models require long computation times and large memory, several authors developed approximation approaches more precise than those for shallow gratings, e.g., the perturbation approximation to the Rayleigh method [12,13]. Nevertheless, those models cannot generally be used for deep gratings [14].
So far, D-MOKE analyses have been limited to a single wavelength with coherent wave sources. A few attempts were made to measure the incidence-angle dependence [12,15] and the dependence on the orientation of the magnetization vector [16]. Experimental arrangements employing a broad wavelength range, i.e., "D-MOKE spectroscopy," promise an improvement of the sensitivity and accuracy, as well as an increase of the number of features detectable by the MO measurement, which is the aim of the present work.
In this paper we demonstrate that the D-MOKE response can be strongly affected by the reduced quality of a laterally patterned structure, namely by random irregularities of the wire-edge parts of the structure's surface, which may render both the approximate and rigorous models inaccurate. The quality of such magnetic gratings is here evaluated by using one parameter identifying the amount of the edges' internal-diffraction contribution to the D-MOKE response in a broad spectral range.
Samples and measurements
A set of samples with similar fabrication parameters was investigated. The gratings were prepared from a nominally 10-nm-thick Permalloy (Ni81Fe19) film deposited on an Si substrate and protected by a 2-nm-thick Cr capping. The patterning was made by means of e-beam lithography with subsequent ion milling.
The AFM measurements on two chosen samples are displayed in Fig. 1. For the purposes of this paper, the D-MOKE results obtained on the sample with the most evident irregularities in the set are reported. An AFM picture of this sample is shown in Fig. 1(a), compared with that of a better one (Fig. 1(b)). The AFM measurement, performed at several positions of the grating, yielded a period of about 900 nm, a wire top linewidth of 536 nm, and a grating depth of 13 nm. According to former analyses of the samples, the structure was determined to consist of 11 nm of Permalloy on the substrate covered by 3 nm of a native SiO2 overlayer, and with the 2 nm of capping completely oxidized into Cr2O3 [17-19]. The D-MOKE experiments were performed on an MO spectrometer employing the azimuth modulation and compensation technique. The measurement reported here was performed in the −1st diffraction order for p-polarized incidence in the spectral range of 1.8-4.1 eV at polar magnetization, with an applied out-of-plane magnetic field of 1.4 T, sufficient for saturated magnetization [20]. The propagation angles of higher diffraction orders are wavelength dependent; a special measurement configuration was therefore chosen. The angle between the incident and the diffracted beams was fixed at 20°, and the sample was rotated while the wavelength was swept.
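The wavelength-dependent sample rotation implied by this configuration can be sketched as follows; a standard reflection-grating sign convention is assumed, so the exact angles are only illustrative of the trend.

```python
# Sketch: sample-rotation geometry for D-MOKE spectroscopy with the angle
# between the incident and the -1st-order diffracted beams fixed at 20 deg.
# Assumed convention: sin(theta_d) = sin(theta_i) - lambda/period for m = -1,
# with the beam-to-beam angle equal to theta_i + theta_d.
import numpy as np
from scipy.optimize import brentq

PERIOD_NM = 900.0
DEVIATION = np.radians(20.0)

def incidence_angle_deg(wavelength_nm):
    def residual(theta_i):
        theta_d = np.arcsin(np.sin(theta_i) - wavelength_nm / PERIOD_NM)
        return theta_i + theta_d - DEVIATION
    return np.degrees(brentq(residual, 0.0, np.radians(89.0)))

for energy_ev in (1.8, 3.0, 4.1):
    lam_nm = 1239.84 / energy_ev
    print(f"{energy_ev:.1f} eV ({lam_nm:.0f} nm): theta_i = {incidence_angle_deg(lam_nm):.1f} deg")
```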
Theoretical approach
Two different theoretical approaches were chosen for modeling the D-MOKE response of the gratings: the rigorous coupled wave analysis (RCWA), implemented as the transfer-matrix approach for anisotropic media [21], and the local modes method (LMM), which is an approximate analytical method based on the far-field Fourier analysis of the lateral amplitude-reflectance distribution assuming saturated magnetization [17]. In accordance with the applied magnetic field, we assume homogeneous saturated magnetization of the Permalloy wires without any domain structure in the modeling throughout this paper. Since neither approach correctly describes the measured spectrum presented here, a third model is derived involving both the RCWA and LMM calculations and assuming a reduced quality of the wire edges.
Here we provide a short description of the LMM including an evaluation of its error. Let x be the coordinate along the wires of the grating and y the coordinate along its periodicity. Any shallow optical element, with all its lateral-texture sizes considerably larger than its depth, can be described by a complex amplitude reflectance r_αβ(y), which is a function of only the lateral coordinate y because we are working with one-dimensional patterning. The indices α and β denote the polarization-basis indices of the reflected and incident waves, respectively.
According to the LMM, in which the edge effects are considered negligible, the function r_αβ(y) uses only two parameters, the amplitude reflectance of the wires, r_w,αβ, and that of the air/substrate system between the wires, r_b,αβ. In accordance with the principles of far-field Fraunhofer diffraction, the amplitude reflectances in the separate diffraction orders are determined by the corresponding Fourier-series formulae, where n corresponds to the nth diffraction order, while f denotes the grating's filling factor given by the ratio of the wire linewidth to the period. The term Δr_αβ^(n) represents the influence of the internal diffraction at the wire edges in the rigorously calculated optical response or, in other words, the error of the LMM with respect to the RCWA. We have formerly reported that only r_pp^(n) at oblique angles of incidence contains a significant amount of Δr_pp^(n), especially in higher diffraction orders [18].
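A hedged sketch of this two-parameter picture is given below: the diffraction-order amplitudes are the plain Fourier-series coefficients of a binary lateral reflectance profile, which may differ from the paper's eq. (1) in normalization and phase conventions; the wire and inter-wire reflectances in the example are illustrative numbers.

```python
# Sketch: far-field (Fraunhofer) amplitude reflectances of the diffraction
# orders for a binary lateral reflectance profile, as assumed in the LMM.
import numpy as np

def lmm_order(n, r_wire, r_between, fill_factor):
    """Fourier coefficient of a square-wave reflectance profile for order n."""
    if n == 0:
        return fill_factor * r_wire + (1 - fill_factor) * r_between
    return (r_wire - r_between) * np.sin(np.pi * n * fill_factor) / (np.pi * n)

f = 536.0 / 900.0                       # filling factor from the AFM dimensions
r_w, r_b = 0.30 + 0.02j, 0.45 + 0.00j   # illustrative wire / inter-wire reflectances
print(lmm_order(0, r_w, r_b, f), lmm_order(-1, r_w, r_b, f))
```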
Results and discussion
The material constants used in all simulations were taken from the literature [22-24]. Examples of absolute values of two simulated amplitudes, r_sp^(−1) and r_pp^(−1), in the configuration presented here are shown in Fig. 2. The D-MOKE ellipsometric parameters in the −1st diffraction order for p-polarized incidence, the Kerr rotation θ_p^(−1) and ellipticity ε_p^(−1), are determined for small D-MOKEs as the real and imaginary parts of the ratio of the corresponding amplitude reflectances, i.e., θ_p^(−1) + iε_p^(−1) = r_sp^(−1)/r_pp^(−1). Considering the small values of r_pp^(−1) in Fig. 2, the D-MOKE parameters are obviously highly sensitive to Δr_pp^(−1); in other words, the LMM and the RCWA produce remarkably different values, as shown in Fig. 3 together with the experimental data. Since even the rigorous method does not match the measurement, the most acceptable explanation follows from Fig. 1. The AFM picture in Fig. 1(a) indicates considerable irregularities in the wire edges, which naturally reduce the value of Δr_pp^(−1) in Eq. (2) due to diffraction losses. It is straightforward to replace this value by a reduced value η(λ)Δr_pp^(−1) and to determine the wavelength-dependent function η(λ), with real values expected between 0 and 1. In the case of our samples, however, no wavelength dependence was detected. Each sample in the set was successfully characterized with a constant value of η by applying the least-squares method in a one-parameter fit of the D-MOKE rotation and ellipticity. We refer to this parameter as the "quality factor of the grating with respect to the wire edges," and to the new theoretical approach as the "combined RCWA-LMM model." Analogous to the height deviation and autocorrelation length of random irregularities of nonpatterned surfaces, which are connected with the reduction of the reflectivity due to diffraction losses [6,7], the η factor describes a reduction of the diffracted light according to similar principles. The value of η = 1 corresponds to ideal periodic smooth edges, whereas η = 0 implies that no edge effects are observed. The realistic value according to the measured D-MOKE parameters was found to be η = 0.53, with the fitted spectrum displayed in Fig. 4. The same procedure applied to the sample of obviously higher quality, with the AFM picture displayed in Fig. 1(b), yielded a value of η = 0.70.
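The one-parameter fit can be sketched as follows, assuming the combined model r_pp^(−1) = r_pp,LMM^(−1) + η Δr_pp^(−1) with θ + iε = r_sp^(−1)/r_pp^(−1); the array arguments stand for the RCWA, LMM and measured spectra.

```python
# Sketch: least-squares fit of the single quality factor eta over a spectrum,
# given simulated amplitudes (RCWA and LMM) and the measured Kerr rotation and
# ellipticity. All inputs are 1-D numpy arrays over photon energy.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_eta(r_sp, r_pp_lmm, r_pp_rcwa, theta_meas, eps_meas):
    d_r_pp = r_pp_rcwa - r_pp_lmm            # edge-diffraction term to be scaled
    def cost(eta):
        ratio = r_sp / (r_pp_lmm + eta * d_r_pp)
        return np.sum((ratio.real - theta_meas) ** 2 + (ratio.imag - eps_meas) ** 2)
    return minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded").x
```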
Conclusions
Diffracted MO spectroscopy was successfully applied to evaluate the quality of periodically patterned magnetic structures. The method appears to be profitable since the wire edges measured by AFM can hardly be analyzed statistically, as is conventional in the case of flat surfaces. The principle can be applied to any optical or MO configuration where the edge effects are not negligible compared to the surface/thin-film response. The results also suggest advantageous possibilities in other applications, e.g., analyzing the nonlinear MO effect or photo-elastic and magneto-elastic effects, since any effect that is usually negligible may be enhanced in a particular experimental arrangement.
Fig. 1. Atomic force microscopy pictures of the analyzed sample (a) and a sample of higher quality (b). The top view of each sample is accompanied by its cross section. | 3,544.8 | 2005-06-13T00:00:00.000 | [
"Physics"
] |
Curvature-Matter Coupling Effects on Axial Gravitational Waves
In this paper, we investigate the propagation of axial gravitational waves in the background of the flat FRW universe in $f(R,T)$ theory. The field equations are obtained for the unperturbed as well as the axially perturbed FRW metric. These field equations are solved simultaneously to obtain the unknown perturbation parameters. We find that the assumed perturbations can affect the matter as well as the four-velocity. Moreover, ignoring the material perturbations, we explicitly obtain an expression for the four-velocity. It is concluded that axial gravitational waves in the curvature-matter coupling background can produce cosmological rotation or exhibit a memory effect if the wave profile has a discontinuity at the wavefront.
Introduction
The discovery of cosmic expansion is a major achievement as well as one of the most fascinating areas of research. Researchers have introduced different approaches to investigate the reason behind this phenomenon by modifying the matter or geometric part of the Einstein-Hilbert action, leading to modified matter models or modified theories of gravity, respectively. Examples of modifications of the geometric part are the f (R) [1], f (G) [2] and f (R, T ) [3] theories of gravity, where R, G and T denote the Ricci scalar, the Gauss-Bonnet invariant and the trace of the energy-momentum tensor, respectively. Examples of modified matter models are quintessence [4], phantom [5], K-essence [6], holographic dark energy [7] and Chaplygin gas models [8].
The simplest generalization of general relativity (GR) is obtained by replacing R with a generic function f (R) in the Einstein-Hilbert action, leading to f (R) theory. Many astrophysical as well as cosmological aspects have been investigated within the framework of this theory [9]. Harko et al. [3] proposed f (R, T ) gravity, which is a curvature-matter coupling theory. This coupling can produce a matter-dependent deviation from geodesic motion and also helps to study dark energy, dark matter interactions as well as late-time acceleration [10].
Different aspects of cosmic and stellar evolution have been studied in f (R, T ) gravity. Sharif and Zubair [11] investigated the validity of second law of thermodynamics for phantom as well as non-phantom phases. Shabani and Farhoudi [12] explored viability of some f (R, T ) gravity models by solar system constraints. Yousaf et al. [13] investigated the stability of cylindrical symmetric stellar configurations by inducing perturbations in this theory. We have studied physical characteristics of charged [14] as well as uncharged stellar structure [15] in this gravity.
The fluctuations in the fabric of spacetime produced by massive celestial objects are known as gravitational waves (GWs). The significance of GWs comes from the fact that they provide new techniques to explore cosmic issues. Observations of GWs can help us to study individual sources of GWs, which give information about the structure as well as the kinematics of the cosmos. The observation of a stochastic background of GWs of cosmological origin can provide information about initial structure formation. These detections have inaugurated a new era of astronomy as well as the possibility to investigate gravity in extreme regimes.
After a long history of struggles (from Weber bars to advanced laser interferometers), these scientific efforts came to fruition and GWs were finally detected by earth-based detectors. Some of the GW signals observed by the LIGO-VIRGO collaboration are GW150914 [16], GW170104 [17] and GW170817 [18]. The origin of these signals is the merging of binaries of black holes and neutron stars, which release energy in the form of GWs. The most recent signal (GW170817) [18] is consistent with a binary neutron star inspiral.
It is associated with the gamma-ray burst signal GRB170817A detected by Fermi-GBM and provides the first direct evidence of gamma-ray bursts accompanying the merger of two neutron stars.
The phenomenon of GWs has become a topic of central importance in cosmology. The polarization of a GW provides information about its geometrical orientation. Kausar et al. [19] explored polarization modes of GWs in f (R) theory and found two modes in addition to those of GR. Alves et al. [20] evaluated these modes for f (R, T ) and f (R, T φ) theories (here φ represents a scalar field). They concluded that in vacuum the former produces the same results as f (R), while the polarization modes in f (R, T φ) gravity depend upon the expression of T φ. We have shown that an axially symmetric dust fluid with dissipation behaves as a source of gravitational radiation in f (R) theory [21]. We have also studied polarization modes of GWs for some viable f (R) models [22].
Regge and Wheeler [23] studied the stability of the Schwarzschild singularity by introducing small perturbations in the form of spherical harmonics, producing odd and even waves. They found that these disturbances oscillate around the equilibrium state and do not grow with time, showing the stability of the Schwarzschild singularity. Zerilli [24] analyzed the emission of gravitational radiation when a black hole swallows a star. He did this by considering the problem of a particle falling into a Schwarzschild black hole using the perturbations introduced by Regge and Wheeler, and he also corrected the even-wave propagation equation derived in [23]. The energy carried by GWs constitutes gravitational radiation. Hawking [25] investigated gravitational radiation produced by colliding black holes, and Wagoner [26] discussed this radiation for accreting neutron stars.
Malec and Wylȩżek [27] used the wavelike perturbations proposed by Regge and Wheeler in the Schwarzschild spacetime to study GW propagation in the cosmological context. They investigated the Huygens principle for cosmological GWs in the Regge-Wheeler gauge and found that this principle is satisfied in the radiation dominated era while it does not hold in the matter dominated universe. Otakar [28] explored GW propagation in higher dimensions using the axial perturbations proposed by Regge and Wheeler. They showed that in the braneworld scenario the Huygens principle seems to be satisfied for high multipoles, in contrast with four dimensions. Viaggiu [29] studied propagation of the axial and polar GWs proposed in [23] in the de Sitter universe using the Laplace transformation. Kulczycki and Malec [30] also studied the perturbations induced by axial and polar GWs in the FRW universe. They concluded that the Huygens principle has the same status for both types of waves: it is valid in the radiation era while it is broken elsewhere. The same authors [31] discussed the cosmological rotation of radiation matter induced by axial GWs. However, axial and polar perturbations have also been studied using gauge-invariant quantities [32]-[34]. In [34], the authors investigated the cosmological perturbations in the context of the Lemaitre-Tolman spacetime. In the case of axial modes, their equations (restricted to the FRW metric) coincide with those of [30].
The issues of cosmological rotation induced by GWs and the validity of the Huygens principle in the Regge-Wheeler gauge have not yet been studied in the framework of modified theories. In the present work, we induce the axial perturbations (which change the geometry from spherical to axial) introduced by Regge and Wheeler [23] in the flat cosmological as well as curvature-matter coupling backgrounds. Since the FRW universes are conformally flat, these distortions are linked with axial GWs. These disturbances may be the consequence of non-gravitational forces (electromagnetic forces, nuclear forces) associated with violent astrophysical events. The non-symmetric explosion of a supernova could be an example of the production of such waves. We focus on the axial wave perturbations induced in a flat cosmos consisting of a perfect fluid. The paper is arranged as follows. In the coming section, we discuss the background FRW cosmology in f (R, T ) theory. In section 3, we define the perturbations in the FRW metric as well as in the matter variables and formulate the corresponding field equations. The unknown perturbation parameters are found in section 4. Finally, we summarize and conclude the results in the last section.
FRW Cosmology and f (R, T ) Gravity
In order to discuss the wave propagation in the FRW universe, we consider the FRW metric in conformal coordinates (η, r, θ, φ),
ds² = a²(η)[−dη² + dr² + r²(dθ² + sin²θ dφ²)],
where η is the conformal time coordinate related to the ordinary time t by dt = a dη, such that the conformal Hubble parameter ℋ = ȧ/a is related to the ordinary Hubble parameter H by ℋ = aH. We consider matter as a perfect fluid defined by the energy-momentum tensor
T_µν = (ρ + p)V_µ V_ν + p g_µν,
where V_µ, ρ and p stand for the four-velocity, density and pressure, respectively.
The action integral for f (R, T ) theory is
S = (1/16π) ∫ √−g f (R, T ) d⁴x + ∫ √−g L_m d⁴x,
where g is the determinant of the metric tensor and L_m is the matter Lagrangian density. The field equations for this action are
f_R R_µν − (1/2) f g_µν + (g_µν □ − ∇_µ∇_ν) f_R = 8π T_µν − f_T T_µν − f_T Θ_µν,
where f_R = ∂f/∂R, f_T = ∂f/∂T and Θ_µν = −2T_µν + L_m g_µν. In this paper, we consider f (R, T ) = R + 2λT [3] to investigate the role of curvature-matter coupling in the propagation of GWs. This model can describe the accelerated expansion by producing a power-law-like scale factor. It also has a correspondence with the ΛCDM model when the cosmological constant is considered a function of the trace T, i.e., the Λ(T ) gravity of Poplawski [35]. The choices for the matter Lagrangian density L_m are p or −ρ. However, it has been shown that these two densities yield the same results for minimal curvature-matter coupling if the matter under discussion is a perfect fluid [36]. The assumption L_m = p and the model f (R, T ) = R + 2λT thus simplify the field equations, yielding the following independent field equations for the metric (1), where a dot denotes the derivative with respect to the conformal time η.
In the further discussion, we consider GWs in the radiation dominated era, so using the equation of state (EoS) p₀ = ρ₀/3, the field equations (8) and (9) give a differential equation for ℋ, which yields the scale factor a(η), where c₁ is a constant of integration. Taking the covariant divergence of the field equations gives Eq. (11). Using Eqs. (1) and (4), the model f (R, T ) = R + 2λT and p₀ = ρ₀/3, Eq. (11) produces the following differential equation for the density:
ρ̇₀ + 3[(8π + λ)/(6π + λ)] ℋ ρ₀ = 0,
whose solution is ρ₀ = c₂ a^(−3(8π+λ)/(6π+λ)), where c₂ is again an integration constant. These values of the scale factor and density are used in the subsequent calculations.
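A symbolic check of this continuity equation, as reconstructed above and with the conformal Hubble parameter written as ȧ/a so that the equation can be integrated in terms of the scale factor, can be sketched as follows.

```python
# Sketch: integrate d(rho)/d(eta) + 3*(8*pi + lam)/(6*pi + lam)*(a'/a)*rho = 0
# by changing variables to the scale factor a, i.e. d(rho)/da = -k*rho/a.
# lam is a free real parameter; lam = 0 should recover rho ~ a**-4 (GR,
# radiation era).
import sympy as sp

a = sp.symbols("a", positive=True)
lam = sp.symbols("lam", real=True)
rho = sp.Function("rho")

k = 3 * (8 * sp.pi + lam) / (6 * sp.pi + lam)
sol = sp.dsolve(sp.Eq(rho(a).diff(a) + k * rho(a) / a, 0), rho(a))
print(sp.simplify(sol.rhs))                # C1*a**(-3*(8*pi+lam)/(6*pi+lam)), possibly in exp/log form
print(sp.simplify(sol.rhs.subs(lam, 0)))   # C1*a**(-4), the GR limit
```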
Axial Perturbations in FRW Spacetime
In this section, we first briefly discuss the perturbations used to study the effects of GWs. Here, the background metric g_µν is the FRW spacetime and h_µν are the corresponding perturbations in the metric tensor due to GWs, such that g^(perturb)_µν = g_µν + e h_µν, where e is a small parameter (it measures the strength of the perturbations, and terms involving O(e²) are neglected).
We follow the Regge-Wheeler [23] perturbation scheme to investigate the wavelike fluctuations. To obtain explicit expressions for the components of h_µν in terms of the four coordinates (x⁰ = η, x¹ = r, x² = θ, x³ = φ), they expressed them in the form of spherical harmonics. The symmetry of the metric tensor allows the angular momentum to be defined. The angular momentum is discussed by considering rotations on a 2D manifold with η = constant and r = constant. The components of h_µν transform differently under a rotation of the frame. Among the ten independent components of the tensor h_µν, the components h₀₀, h₀₁, h₁₁ transform like scalars (as x⁰ = η and x¹ = r are constant and do not change during the rotation), h₀₂, h₀₃, h₁₂, h₁₃ transform like vectors (as x² and x³ change during the rotation), while h₂₂, h₂₃, h₃₃ transform like tensors. Further, these scalars, vectors and tensors are expressed in terms of the spherical harmonics Y_L^M, where L is the angular momentum with projection M on the z-axis. After this, Regge and Wheeler expressed the perturbation matrix h_µν in terms of odd- and even-parity waves. In this paper, we only consider the odd or axial wave perturbations defined by the matrix of [23], with k₀ = k₀(η, r) and k₁ = k₁(η, r). Here we consider the odd waves corresponding to m = 0, which are those discussed by Regge and Wheeler [23], so that φ disappears from the calculations. Also, for a wavelike solution the index l exceeds one, i.e., Y = Y_l0; l = 2, 3, .... The resulting axially perturbed FRW spacetime in the Regge-Wheeler gauge is defined by the metric (15).
Solving (29) and (30) simultaneously for Π, we obtain which implies either The first factor in the above equation yields λ = −4π and −2π. However, keeping in mind the viability conditions for the assumed model, we exclude λ = −4π. Hence if λ = −2π, then there is a possibility that Π = 0 and similarly ∆ = 0, i.e., the axial GWs can affect the background matter in curvature-matter coupling scenario. Assuming the EoS for radiation dominated era p 0 = 1 3 ρ 0 , we obtain the following relationship between Π and ∆ Substituting the above relation in Eqs. (31) and (32), we are left with four unknowns k 0 , k 1 , ∆, u with three equations (26), (31), (32). Thus in order to have the system closed, we assume that GWs do not perturb the matter field, i.e., ∆ = 0 = Π. Now introducing a new quantity Q(η, r) such that Using this equation with Eq. (26) in (32), we obtain Inserting p 0 = ρ 0 3 , the values of a(η) as well as ρ 0 from Eqs. (10) and (11) into (37), it follows thaẗ and take l = 2 such that the above equation becomes This is a wave equation and can be solved through separation of variables by assuming Q(η, r) = T (η)R(r) and the initial conditions. Q(0, r) = Ψ 1 (r), ∂ η Q(0, r) = Ψ 2 (r), Introducing the separation constant −m 2 , we obtain the following two differential equations These are second order homogeneous linear differential equations with variable coefficients. Equation (40) can yield some solution if the power of η is fixed. So, we consider 6π+2λ 6π+λ (−3)(8π+λ) 6π+λ = n and check that for what values of n, the values of λ are consistent with viability criteria. We find that the values of λ for n > 1 are not consistent with λ > −4π (the viability criteria) and n < −2 yields imaginary values of λ. Hence, n can have the values within the limit −2 ≤ n < 1. For n = −2, we have λ = 0 which is the case of GR. For convenience, we consider the integer values in this interval, i.e., n = 0, −1, to find the solution of Eq. (40). For n = 0, the solution is where c 3 and c 4 are constants of integration and for n = −1, we have where c 5 , c 6 are constants and Hypergeometric1F1, HypergeometricU are the confluent hypergeometric functions of the first and second kind, respectively. These functions are defined by where c 7 , c 8 are integration constants. Inserting the values of R(r) and T (η) in Q(η, r) = T (η)R(r), we obtain Q(η, r) for both values of n. Furthermore, using initial conditions one can find the expressions for Ψ 1 (r) and Ψ 2 (r) for n = 0 as well as n = −1.
Substituting the values of Q(η, r) and a(η) into Eq. (36), we obtain k₁, while the expression for k₀ is obtained from Eq. (26); here η₀ is the conformal time at the hypersurface originating the GWs. Assuming k₀(η₀, r) = 0, we have B(r) = 0, which fixes k₀ for n = 0; for n = −1 we obtain the analogous expression. From these, the final expression for the four-velocity in the radiation dominated phase follows, and with it the azimuthal velocity of any point P with coordinates (η, r, θ, φ).
Final Remarks
According to rough estimates, a pair of massive black holes merges every 223 (+352/−115) s and a binary neutron star merges every 13 (+49/−9) s [37]. Among these mergers, a small fraction is detected by the advanced interferometers of the LIGO-Virgo collaboration and can be associated with individual GW events. The rest of the events contribute to a stochastic background, which is a random GW signal originating from various independent, weak and unresolved sources. These sources include, for instance, supernova explosions at the end of a massive star's life (including non-symmetric explosions), rapidly rotating neutron stars, cosmic strings, etc. Mathematical and statistical approaches have been developed to observe this stochastic background of GWs and extract information from it [38]. These GW signals have a great influence on cosmic evolution, and hence the study of different aspects of the GW phenomenon is very significant.
The main goal of this manuscript is to explore the changes produced by axial GWs in the geometry as well as the matter of a flat universe during its evolution in the context of curvature-matter coupling theory. For this purpose, we assume the presence of these waves and find the corresponding geometrical and material changes produced by them in f (R, T ) gravity. We have introduced axial perturbations in the flat FRW spacetime, the background matter is also perturbed, and the four-velocity is allowed to be non-comoving. We then proceed to find all unknown parameters of the perturbations with the help of the perturbed and unperturbed field equations. It is mentioned here that all field equations reduce to the GR equations [31] for λ = 0.
The factors w and v appearing in V₁ and V₂ are zero, showing that axial waves do not change these components of the velocity, similar to GR. We have found that axial GWs in f (R, T ) theory can perturb the background matter, in contrast to GR. However, here we set ∆ and Π equal to zero in order to find the remaining functions k₀, k₁ and u. The resulting k₀ and k₁ are different from those of GR and depend upon the coupling constant λ. The function u appearing in the azimuthal velocity component has a non-zero expression, showing that the fluid exhibits a rotation due to axial GWs, similar to GR. But the expression for u here depends upon λ and differs from that of GR.
Currently, our universe is in an expansion phase and it is crucial to investigate the propagation of GWs in this expanding universe. In this regard, we extend our analysis using the EoS p₀ = −ρ₀ for expanding matter and observe how such GWs can perturb the flat cosmos in the recent era. For p₀ = −ρ₀, the scale factor and density have expressions in which c̃₁ and c̃₂ are integration constants. It is found that this EoS can yield non-vanishing w and v (from (23) and (24)). When the expression of u(η, r) is continuous at the wave front, the smooth wave profile does not induce any cosmological rotation [31]. Hence we conclude that the axial GW can induce a cosmological rotation if u(η, r) is discontinuous at the wave front. If freely falling particles are displaced by a GW, this is called the memory effect of the GW. Hence the axial GW in f (R, T ) gravity induces a memory effect when the wave profile has a discontinuity at the wave front. Also, the model considered here describes the simplest curvature-matter coupling, and we assume this model to reduce the calculational work. However, this work can be extended to other minimally coupled models containing nonlinear powers of R or T, or to non-minimally coupled models, leading to interesting results. Such models may yield non-vanishing values of the perturbation parameters that are zero in the present scenario. When a GW without memory passes through a detector, it produces an oscillatory deformation and returns the detector to its equilibrium state. On the other hand, a GW with memory can induce a permanent deformation in an idealized detector, i.e., a truly freely falling detector [39]. Detectors like Weber bars and LIGO are not sensitive to the memory effect. However, detectors of the LISA (Laser Interferometer Space Antenna) type or advanced LIGO can detect the memory owing to their sensitivity, given strong memory sources [40]. Also, the ground-based detectors are not truly freely falling and cannot store a memory signal, while LISA-like detectors are able to maintain the permanent displacement because they are freely floating. | 4,949 | 2018-09-01T00:00:00.000 | [
"Physics"
] |
Emission Channels from Perturbed Quantum Black Holes
We calculate the emission of gravitational waves, gravitons, photons and neutrinos from a perturbed Schwarzschild black hole (BH). The perturbation can be due to either classical or quantum sources and therefore the injected energy can be either positive or negative. The emission can be classical in nature, as in the case of gravitational waves, or of a quantum nature, as for gravitons and the additional fields. We first set up the theoretical framework for calculating the emission by treating the case of a minimally coupled scalar field and then present the results for the other fields. We perform the calculations in the horizon-locking gauge, in which the BH horizon is deformed, following similar calculations of tidal deformations of BH horizons. The classical emission can be interpreted as being due to a partial exposure of a nonempty BH interior, while the quantum emission can be interpreted as an increased Hawking radiation flux due to the partial exposure of the BH interior. The degree of exposure of the BH interior is proportional to the magnitude of the injected null energy.
Introduction
Black holes (BHs) are well understood in the framework of general relativity (GR), where their semiclassical properties in the exterior region have been successfully described using the formalism of quantum fields in curved spacetime. In contrast, their quantum nature, in particular quantum effects in the vicinity of the horizon, has not yet been fully understood and is inconsistent with the classical GR description. For a review, see, for example, [1,2].
Hawking [3] considered BHs in equilibrium and demonstrated that BHs emit particles as if they were a thermal body with a temperature T H that is inversely proportional to their mass M . The established method of studying quantum BHs is the semiclassical approach, in which the background gravitational field is fixed and matter fields are quantized about this fixed background. The quantization of matter fields in Schwarzschild spacetime and particularly the vacuum expectation value (VEV) of the stress energy momentum (SEM) tensor was calculated in [4,5,6,7,8,9,10]. The inherent divergence of the VEV in curved space was resolved by applying an appropriate regularization technique. These results emphasize that creation of particles in curved space is a vacuum phenomenon: virtual particles gain sufficient energy from the background so as to become real.
Here, our main objective is to examine the properties of BHs by examining their emission when they are away from equilibrium. This is achieved by considering external perturbations to the BH. Specifically, we calculate modifications to the Hawking radiation and to gravitational wave (GW) emissions. Of particular interest are perturbations that deform the BH horizon, thus allowing it to be deformed inwards. In general, these require absorption of some negative null energy by the BH and are therefore likely to result from quantum processes. However, some regions of the BH horizon can be deformed inwards if the total injected null energy is small. We show that, unlike the gravitational perturbations in classical GR, where radiation can only be emitted from a Schwarzschild BH in the form of gravitational waves, in the quantum case, due to the coupling of the background metric to the various matter fields and their nonzero VEV of the SEM tensor, gravitational perturbations produce additional particle species. However, the rate of such particle production is small.
The paper is organized as follows. In the first part, we establish the theoretical framework for evaluating the quantum emission from perturbed BHs.
We derive an explicit expression for the radiated power by using the path integral approach and show that it coincides with the Euclidean partition function approach. In the second part we outline the classical setup that describes the deformed horizon geometry of a BH in the presence of an external perturbation and determine the corresponding metric perturbation. In the third part we apply the theoretical framework and calculate the quantum emission for various particle species. The classical emission is calculated by following the relevant discussions of tidal deformations in the literature.
Next, we compare the results and discuss their relative magnitude in the context of horizon deformations. We find that the relative magnitude is controlled by the BH entropy and by the time scale of the emission. The factor involving the time scale can, in some cases, provide a significant enhancement of the emitted flux. Then, we interpret the increased flux in terms of an external observer and a nonempty BH. In the final part, we summarize our results and discuss their significance. In an appendix, for completeness, we explain in detail the relationship between our discussion of tidal deformations and the existing discussions in the literature.
Emission from perturbed black holes
In this section we present the formalism for describing how external perturbations modify the vacuum state of quantum fields outside the BH. The modified state is time dependent and so, according to standard arguments, particle production occurs.
First, we consider the simple case of a minimally coupled, massless scalar field, whose equation of motion is $g^{ab}\nabla_a\nabla_b\phi = 0$, where $g_{ab}$ is the unperturbed background Schwarzschild metric. In the absence of perturbations, when the field is in its vacuum state, the vacuum persistence amplitude $\langle 0,\mathrm{out}|0,\mathrm{in}\rangle = Z[0]$ is given in terms of an effective action built from the scalar field action. In general, in curved spacetime the in and out vacuum states are different; this is also the case for time-independent backgrounds such as Schwarzschild spacetime. In a curved background the annihilation operators of the past ("in") vacuum, $\hat{a}_j$, are defined by $\hat{a}_j|0,\mathrm{in}\rangle = 0$, while the corresponding operators of the future ("out") region, $\hat{b}_j$, are defined by $\hat{b}_j|0,\mathrm{out}\rangle = 0$. Generically, the operators $\hat{a}_j$ differ from the $\hat{b}_j$, and the two sets are related by a Bogolubov transformation (for more details see, for example, [11,12,13] and, in the context of this manuscript, explicitly [14]). In particular, in Schwarzschild spacetime $\hat{b}_j|0,\mathrm{in}\rangle \neq 0$ and $\hat{a}_j|0,\mathrm{out}\rangle \neq 0$, which implies that $\langle 0,\mathrm{out}|0,\mathrm{in}\rangle$ differs from unity and therefore that particles are produced.
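As a reminder of the standard relations behind this statement (textbook formulas, not the paper's own equations), the two sets of operators are related by a Bogolubov transformation of the form
\[
\hat{b}_j \;=\; \sum_k \left( \alpha^{*}_{jk}\,\hat{a}_k \;-\; \beta^{*}_{jk}\,\hat{a}^{\dagger}_k \right),
\qquad
\langle 0,\mathrm{in}|\,\hat{b}^{\dagger}_j \hat{b}_j\,|0,\mathrm{in}\rangle \;=\; \sum_k |\beta_{jk}|^{2},
\]
so whenever some Bogolubov coefficient $\beta_{jk}$ is nonzero, the in vacuum contains out particles.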
To investigate the influence of perturbations on the vacuum state, we consider a small perturbation of finite duration to the background Schwarzschild metric. The perturbation is switched on at some early time $t_i$ and switched off at some later time $t_f$, so the metric changes from $g_{ab}$ to $g_{ab}+h_{ab}$. The resulting variation of the scalar field action is related to the stress-energy-momentum tensor $T^{ab}$ of the minimally coupled field. Following the semiclassical approach, we define the effective action through $\langle 0,\mathrm{out}|0,\mathrm{in}\rangle = e^{iW}$, apply the corresponding relation, and then approximate it to leading order in the perturbation. A naive calculation of $\langle T^{ab}\rangle$ is divergent. Therefore, we have to consider the VEV of the renormalized stress-energy-momentum (RSEM) tensor, denoted $\langle T^{ab}\rangle_{\mathrm{ren}}$. There are many renormalization techniques that handle the infinities of $\langle T^{ab}\rangle$ in curved space. While in flat backgrounds the divergences of the SEM tensor VEV are easily identified and eliminated by standard normal-ordering techniques, in curved backgrounds this method is inapplicable and rather intricate regularization techniques are introduced instead. The most frequently used among them are zeta-function regularization [4], dimensional regularization [5,6] and covariant geodesic point separation [7,8,9,10], which we elaborate on below.
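A minimal sketch of the approximation referred to above, written in the standard semiclassical form (the precise equations and equation numbers of the source are not reproduced here): to first order in the metric perturbation,
\[
\delta S_m \;=\; \tfrac{1}{2}\int d^{4}x\,\sqrt{-g}\; h_{ab}\, T^{ab},
\qquad
W \;\simeq\; W_0 \;+\; \tfrac{1}{2}\int d^{4}x\,\sqrt{-g}\; h_{ab}\, \langle T^{ab}\rangle_{\mathrm{ren}}\,,
\]
so the external perturbation couples linearly to the renormalized expectation value of the SEM tensor, which is the gravity-matter coupling referred to below.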
In general, in a curved background the full action is given by $S = S_g + S_m$, where $S_g$ is the gravitational Einstein-Hilbert action. Then, as shown above, the external source induces a gravity-matter coupling of the form $h_{ab} T^{ab}$. Following the point-separation method [7,8,9,10], the divergences in the SEM tensor are isolated such that $\langle T^{ab}\rangle = \langle T^{ab}\rangle_{\mathrm{ren}} + \langle T^{ab}\rangle_{\mathrm{div}}$, where the divergent terms in $\langle T^{ab}\rangle_{\mathrm{div}}$ turn out to be purely geometrical and can be absorbed into the gravitational action. The finite $\langle T^{ab}\rangle_{\mathrm{ren}}$ determines the emission of radiation from the BH.
where now it is clear that the first term is the Hawking term that gives rise to the thermal flux, while the second term is a modification of the Hawking radiation induced by the external perturbation.
It is convenient to work in the interaction picture, assigning the explicit time dependence of the "in" state to the corresponding operator. In this form, we identify the time-evolution operator through $U(t,0)|0,\mathrm{in}\rangle = |0,\mathrm{in}\rangle_t$, and the energy difference $\Delta E$ between the states $|0,\mathrm{in}\rangle$ and $|0,\mathrm{in}\rangle_t$. The excess energy $\Delta E$ of the time-dependent vacuum state with respect to the stationary initial vacuum state, which is supplied by the external gravitational perturbation, is the total energy gained by the vacuum state of the matter fields. Part of this energy, denoted $\Delta E_R$, is emitted to infinity in different forms of radiation, and part remains trapped and contributes to the eventual increase in the mass of the BH. The distribution of the radiated energy depends on the detailed nature of the perturbation and the type of the excited fields. We are particularly interested in the total power radiated by the $l = 2$ deformation of the horizon surface (labeled by $R_2$), or equivalently the BH surface luminosity. This power is obtained by projecting the flux components of the SEM tensor onto the BH surface, which is defined by embedding the BH's outer surface in a constant-time surface $\Sigma_t$. The flux of the SEM tensor is given by the projection $T^{t\rho} n_\rho = T^{tr}$, where $n_\rho = (0, 1, 0, 0)$ is the normal to $\Sigma_t$. Then, from Eq. (2.9), the radiated energy to infinity resulting from deformations of the BH outer surface involves $\sqrt{-g}\, h_{tr} \langle T^{tr}\rangle_{\mathrm{ren}}$, and the additional power emanating from the deformed horizon (the additional BH luminosity) takes the form of Eq. (2.12), where $\Delta L$ is the luminosity difference between the stationary and the time-dependent state. Moreover, by identifying the Hawking luminosity as $L_H = \int_{R=2M} d\Omega\, r^2 \langle T^{tr}\rangle_{\mathrm{ren}}$, Eq. (2.12) can be written as $\Delta L \sim h_{tr}(R_{lm}) L_H$, where $h_{tr}(R_{lm}) = h_{tr}(r)|_{r\to R_{2m}}$ is the value that the metric perturbation takes at the deformed horizon.
In the following, we outline the perturbative treatment and specify the metric perturbation that describes geometric deviations of the BH horizon away from, but near, equilibrium.
Classical setup
As previously mentioned, our interest is in the emitted radiation from BHs that are out of equilibrium. BHs out of equilibrium were discussed in the context of studying tidally deformed BHs. There, the perturbations are described in the horizon-locking gauge [15,16,17,18,19]. We have adopted this framework for our purposes. We first outline some of the main ideas and then discuss the detailed calculations and results.
Background
Originally, the dynamics of BHs undergoing tidal deformations was described by Thorne and Hartle [20] and later elaborated by Alvi [21], Hughes [19] and Poisson [15,16,18,22], who describe in great detail the geometry of a BH horizon deformed by a tidal gravitational perturbation. The idea is that the spacetime of a nonrotating BH is deformed by a weak tidal interaction produced by an external moving object. This environment is characterized by the radius of curvature $R$, which can be considered as the size of the region in space where the BH gravitational field interacts with the object's tidal field. To guarantee a weak gravitational interaction, it is assumed that the Schwarzschild BH with mass $M$ and radius $R_S = 2GM$ satisfies $R_S \ll R$.
For example, consider a binary system that consists of a BH and an external object, with relative distance $b$ between them. The radius of curvature is then given by the corresponding expression, which is of the order of the inverse of the BH's angular velocity and also determines the typical interaction scale.
The important aspect is that the tidal field induces a modification of the BH gravitational field in a region $r \ll R$. Perturbations about the background geometry are therefore expanded in powers of the dimensionless parameter $r/R \ll 1$.
For a more detailed description we refer the readers to Appendix A, and to Ref. [17]. In the following section, in analogy with Poisson's results, we present a general framework for constructing the geometry of deformed horizons.
Deformed horizon geometry
As previously mentioned, we are interested in perturbations that describe a horizon deformation of BHs near their equilibrium state. Astrophysical BHs relaxing to equilibrium can originate from various astronomical events, such as a BH immersed in an external gravitational field induced by an external source, inspiralling compact binaries [16]-[23], or a binary post-merger event in its ringdown stage [24,25]. Here the specific details of these events are not important; instead, we are interested in the general description of the horizon deformation.
The idea is that an external GR observer can describe the near-horizon geometry of a deformed BH relaxing to equilibrium in the horizon-locking coordinate system. In this gauge, the horizon position is "locked" at $r = R_S$, such that $h(R_S) = 0$ up to some higher-order correction in the perturbation strength. Then the perturbed geometry is interpreted in terms of a perturbation in the associated Ricci curvature. Alternatively, the deviation in the scalar curvature can be converted into a deviation of the BH outer surface. An external observer can interpret the perturbation as the geometrical deviation of the BH outer surface from its unperturbed location at $r = R_S$ [17,19]. This is depicted in Fig. 1 for some specific perturbations.
In general, outward and inward deviations of the BH surface with respect to its unperturbed horizon R S can be identified as the injection of some average positive and negative null energy, respectively.
To determine the deformed horizon geometry, the BH outer surface position is parametrized in a three-dimensional Euclidean space, as measured by a remote flat-space observer. The dimensionless horizon displacement of the deformed surface with respect to its unperturbed position is denoted $D(v)$, where $v$ is the ingoing Eddington-Finkelstein (EF) coordinate and $l, m$ are the deformation angular modes. Then $D(v)_{lm} \equiv D(v)\, Y_{lm}$, and the corresponding quantity is defined specifically for the $l = 2$ mode. The BH outer surface position for arbitrary $l, m$ modes is given accordingly and, in accordance with Eq. (4.1), so is the surface displacement for the $l = 2$ mode. This is illustrated in Fig. 1 for some $l, m$ modes. As stated, this parametrization defines the BH surface in a three-dimensional Euclidean space. The surface curvature associated with the above parametric equation follows, where the $Y_{lm}$ are defined as real functions (see Appendix A, Eq. (A.10)).
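The explicit parametrization is not legible in the extracted text; a plausible form consistent with the description above (unperturbed radius $R_S$, dimensionless displacement $D(v)$ and real spherical harmonics $Y_{lm}$) would be
\[
r_{\mathrm{surf}}(v,\theta,\phi) \;=\; R_S\Big[\,1 \;+\; \textstyle\sum_{l,m} D(v)_{lm}(\theta,\phi)\Big],
\qquad
D(v)_{lm}(\theta,\phi) \;\equiv\; D(v)\,Y_{lm}(\theta,\phi),
\]
so that the radial displacement of the $l = 2$ surface is of order $D\,R_S$, in line with the relation $\Delta R_S = D R_S$ used later in the text.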
To proceed, we identify the surface curvature with the Ricci curvature of the deformed horizon geometry. Then, since the only information that we have is about the surface curvature, we have some gauge freedom in uniquely determining the near-horizon geometry. In [15,16] it is shown that there exists a unique choice of coordinates that satisfies the geometrical properties mentioned above: the horizon-locking coordinate system. The horizon-locking metric perturbation is defined about the Schwarzschild background in the outgoing EF coordinates, whose line element is of the standard form. (An exact definition of the "vicinity region" appears in Appendix A.)
Results
Here, we begin with the results that describe the classical gravitational wave emission from a perturbed Schwarzschild BH. We then proceed to calculate the quantum emission of fields with different spin: minimally coupled conformal scalar fields, electromagnetic fields, neutrino fields and graviton fields.
This emphasizes that the framework for the classical and the quantum emission is similar, while demonstrating that, unlike in the classical case, in the quantum case all matter fields are produced.
Quadrupole emission
The deformation of the Schwarzschild BH horizon is given in Eq. (4.2) and its associated background perturbation in Eq. (4.5). This deformation leads to a nonvanishing time-dependent quadrupole moment as we now explain.
The time-dependent geometric deviation results in a time-dependent excess energy density $\rho(t, r)$. This leads to a time-varying quadrupole moment $Q_{ab} \sim \int d^3x\, \rho(t,\mathbf{x})\, x_a x_b$, which in turn sources gravitational-wave emission.
It is possible to express the stress tensor of the gravitational energy and its radiated power in the standard form $\Delta L_{GW} \sim \dddot{Q}_{ab}\,\dddot{Q}^{ab}$. This can be estimated directly by noting that $Q \sim M^3$. As explained in Appendix A, we assume that the induced gravitational field is slowly varying, so its rate of change can be approximated by $d/dt \sim \Delta t^{-1} \sim D^{1/2}/M$, and therefore $\Delta L_{GW} \sim D^3$. Detailed calculations of the classical emission from a BH undergoing tidal deformations are found in [15,20]. The final result for the emitted power of gravitational energy, or alternatively the rate at which the BH loses its mass, is expressed in terms of the dimensionless displacement quantity $D$ and takes the form
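Spelling out the order-of-magnitude estimate quoted above (a sketch in geometric units, using only the scalings stated in the text):
\[
\dddot{Q} \;\sim\; Q\left(\frac{d}{dt}\right)^{3} \;\sim\; M^{3}\left(\frac{D^{1/2}}{M}\right)^{3} \;=\; D^{3/2},
\qquad
\Delta L_{GW} \;\sim\; \dddot{Q}_{ab}\,\dddot{Q}^{ab} \;\sim\; D^{3},
\]
which reproduces the $D^{3}$ scaling of the classical quadrupole luminosity.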
The total energy loss in a characteristic time interval $\Delta t$ is $\Delta E$
In the models that were considered in [26]
Increased quantum flux
To begin, we specify the quantities in the expression for the radiated energy flux, Eq. (2.12), recalling that we are interested in the emitted power from the deformed horizon.
The flux components of the RSEM tensor are evaluated in EF coordinates about the Unruh vacuum state, which describes an outward thermal flux at infinity. The flux term is then given by $\langle T_{uu}\rangle - \langle T_{vv}\rangle = 2\langle T_{tr}\rangle$, so the Hawking luminosity for a field of a given spin $s$ is defined as $L_H^s = 4\pi r^2 \langle T_{tr}\rangle^s_{\mathrm{ren}}$. The explicit expression for the RSEM tensor of a stationary Schwarzschild BH is given in [27], with $T_H = 1/(8\pi M)$ being the Hawking temperature (to restore units, the luminosity needs to be multiplied by the appropriate dimensional factor). The numerical factor $\alpha_s$ depends on the spin of the excited fields and is determined by the transmission coefficient through the potential barrier [28]. By substituting $M = 1/(8\pi T_H)$, $R_S = 2M$ and $r \to R_2$ into Eq. (4.10), we obtain the corresponding expression. Turning to the classical emission, we derive the SEM tensor of the emitted radiation in the far-field region and expand the gravitational action, in which $R_{ab}$ denotes the Ricci tensor, in the GW perturbation $h_{ab}$. The zeroth- and first-order expansions of the Ricci tensor vanish in the Schwarzschild vacuum, whereas the second-order expansion is nonvanishing and yields (after a lengthy derivation) the null component of the SEM tensor of the emitted GWs, Eq. (4.15), from which the GW power follows. This demonstrates that the classical GW emission originates in the nonzero quadrupole moment generated by deformations of the BH.
To proceed, it is possible to estimate the modified Hawking temperature by using Eq. (4.8) and assuming that the perturbed BH emits an approximately thermal spectrum; the emission is then approximately that of a black body. The modified luminosity $L = L_H + \Delta L$ can be evaluated using Eq. (4.8), $L = \frac{\pi}{12} T^2$, where $T$ is the modified Hawking temperature of the BH surface. Combining this with Eq. (4.11), $T$ follows, where $\Delta T = T - T_H$. The modified BH temperature is seen not to depend on the sign of $D$, which implies that both inward and outward surface deviations, Eq. (4.2), result in an increase of the BH temperature.
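A sketch of the black-body estimate just described, assuming only that the same quadratic relation between luminosity and temperature, $L = \frac{\pi}{12}T^{2}$, holds for both the unperturbed and the perturbed BH:
\[
L_H + \Delta L \;=\; \frac{\pi}{12}\,T^{2}
\;\;\Longrightarrow\;\;
T \;=\; T_H\sqrt{1 + \frac{\Delta L}{L_H}}
\;\simeq\; T_H\left(1 + \frac{\Delta L}{2 L_H}\right),
\qquad
\frac{\Delta T}{T_H} \;\simeq\; \frac{\Delta L}{2 L_H} \;\ge\; 0,
\]
so any positive additional flux raises the effective surface temperature, regardless of the sign of the deformation.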
To gain more insight into the results, it is instructive to compare the classical and the quantum emissions, Eq. (4.6) and Eq. (4.13), respectively, which yields Eq. (4.20). The appearance of a quantum suppression factor $1/S_{BH}$ can be explained as follows. Classically, an unperturbed BH has a mass energy $E = M$, while the energy scale associated with its quantum properties is of the order of the Hawking temperature $T_H \sim 1/M$; the ratio of the two scales is of order $1/S_{BH}$. Here, $\Delta t$ is left unspecified and depends on the details of the underlying mechanism that drives the BH horizon deformations. For example, we can consider the setup of horizon deformations induced by an external, remote, moving object, as given in Section 3.1 and in Appendix A. The characteristic time scale is $\Delta t \sim T_S/D^{1/2}$, and the ratio then becomes $\Delta L/\Delta L_{GW} \sim 1/(D S_{BH}) \sim l_P^2/(D R_S^2)$. To interpret this, we recall that the radial horizon displacement is given by $\Delta R_S = D R_S$; then, for small deviations with $D \ll 1$, we argue that for a BH relaxing to equilibrium the ratio in Eq. (4.20) is considerably larger than the corresponding ratio for an unperturbed BH, Eq. (4.21). Furthermore, in the sub-Planckian regime where $D R_S^2 \sim l_P^2$, which means that $\Delta R_S \sim l_P^2/R_S \sim R_S/S_{BH}$, we conclude that for deviations of this order the quantum emission is non-negligible in comparison with the classical one.
Summary and Discussion
In this paper we calculated the emission from a perturbed Schwarzschild BH.
The perturbations could be of classical or quantum origin. We established a theoretical framework for calculating the emission of the different kinds of quantum fields by using the semiclassical approach. We showed that in the quantum case there exists a nontrivial gravity-matter coupling term of the form $h_{ab} T^{ab}$. Since all matter fields are coupled to the metric perturbation through their SEM tensor, all particle species are produced. This was demonstrated explicitly in the emission formula Eq. (2.12). In contrast, in the classical GR treatment, the radiation from BHs undergoing horizon deformations is emitted only in the form of gravitational waves. We also stress that such a coupling term is absent in the absence of a perturbation, because for Schwarzschild BHs $T^{ab} = 0$ for all matter fields. This discussion highlights the difference between the underlying mechanism that provides the source of energy in the classical case and that in the quantum case. In the classical case, the emission originates from the geometrical properties of the scalar curvature and its associated perturbed metric, which eventually constitute the gravitational SEM tensor of the emitted radiation, $T_{ab} \sim \dot{h}_{ab}^2$. In the quantum case, the source term is the vacuum fluctuation of the matter fields' RSEM tensor coupled to the perturbed background $h_{ab}$. We also emphasize the importance of the gravity-matter coupling term that appears in Eq. (2.11) for the resolution of the BH information paradox: in [29,30] it is shown that operators of this form can transfer information to the outgoing radiation and may eventually lead to unitarization.
Subsequently, in Section 4.2, we calculated explicitly the emission of the minimally coupled scalar fields, neutrinos, photons and graviton fields. We find that the flux is always positive, independent of the sign of the deformation parameter $D$ and of whether the BH outer surface protrudes outward or is pushed inward. Then, by assuming that the emission is approximately that of a black body, we interpreted the additional flux in terms of the BH surface temperature and found the modification to the Hawking temperature. Similarly, the modification was found to be independent of the sign of the horizon displacement parameter $D$. Finally, considering the results of the classical emission, we compared the classical and quantum results for the BH luminosity and concluded that their ratio is controlled by the inverse BH entropy $1/S_{BH}$ and by the ratio $\Delta t/T_S$, which in some setups could provide a significant enhancement factor in Eq. (4.20).
As explained in Section 3.2, an external observer can describe the near-horizon geometry of a deformed BH in the horizon-locking coordinate system. This means that the horizon position is locked at $r = R_S$, such that $h(R_S) = 0$. In this gauge, the perturbed geometry is interpreted in terms of a perturbation in the associated scalar curvature. Alternatively, the change of the Ricci scalar can be described as the deformation of the BH outer horizon with respect to its unperturbed horizon at $r = R_S$, as shown in Fig. 1.
A Gravitational setup
In this appendix we explain in detail the connection of tidal deformations to the discussion that is given in Section (3.1). The discussion follows [15,16,18]. Similar discussions are also presented in [19,20,21].
To begin, we consider the perturbations in the horizon-locking gauge about the Schwarzschild background in the outgoing EF coordinates. The line element is the standard one, where $f(r) = 1 - 2M/r$. The metric perturbation $h_{\mu\nu}$ is given as an expansion in $r/R$, where $R$ is the radius of curvature. The scale $R$ defines the region in space where the BH gravitational field interacts with the tidal field of an external object. As explained in Section 3.1, to guarantee a weak gravitational interaction, the Schwarzschild radius $R_S$ of the BH must satisfy $R_S \ll R$; for an external object of mass $M$ at separation distance $b$, the radius of curvature is given by the corresponding expression. This is illustrated in Fig. 2. Here we list the leading term, which is second order in $r/R$. Then, by imposing the proper gauge conditions, in the vicinity of the BH the metric perturbation takes a form whose coefficients are the dimensionful tidal quadrupole moments of the scalar, vector and tensor spherical harmonics, respectively. We later define the appropriate scalar harmonics. The definition of the vector and tensor harmonics is given in [15] and is not relevant to our purpose, since, as previously mentioned, they do not contribute to the emitted flux.
The perturbation strength scales as $C \sim B \sim R^{-2}$. The contributions of the higher-order terms, the octupole moments that scale as $r^3/R^3$ and the hexadecapole moments that scale as $r^4/R^4$, are smaller than that of the quadrupole moments, which scale as $r^2/R^2$. We therefore consider only the contribution of the quadrupole term $l = 2$. In the limit $r/R \to 0$, while keeping $M/r$ fixed, the metric perturbation vanishes.
One could also define the dimensionless parameters accordingly, where $C$ and $B$ denote the expansion parameters of the metric perturbation.
The parameter $D$ can be viewed as the expansion parameter in the vicinity of the horizon, $r \sim M$. It is related to the tidal fields by $D = C M^2 \sim M^2/R^2$ and, as we show below, it is useful for the description of the horizon deformation.
The scalar harmonics of the tidal fields are defined accordingly, where $C \sim R^{-2}$ and its exact value is determined by the components of the Weyl tensor [15]. For the purpose of this work it is unnecessary to specify them further, since our interest is in the region $2M < r < r_{\max}$, where $r_{\max} \ll R$ (Fig. 2). The construction of the Weyl tensor and its associated tidal fields would then only indicate how small the ratio $\sim r^2/R^2 \ll 1$ is. The important point is that this region is finite and the perturbative description in Eqs. (A.2)-(A.5) is valid only for $r < r_{\max}$. In addition, it is shown in [21,22] that in the region $r_{\max} < r < R$ the metric takes the form of a post-Newtonian expansion, such that the induced corrections to the BH spacetime are significantly smaller than those given in Eqs. (A.2)-(A.5). Therefore, one is mainly interested in the region $R_S < r < r_{\max}$.
The spherical harmonics listed above are defined as the real parts of the $Y_{2m}$, and the rate of change of the quadrupole moments is expressed in terms of them. Another important scale is the inhomogeneity scale $L$, which measures the degree of spatial variation of the induced tidal fields. In the context of a binary system, the analogous scale is the relative distance between the constituents, $L \sim b$. It is the smallest scale among $T$, $R$, so $L < R$, and it is given by $L \sim M D^{-1/3}$. The definition of $r_{\max}$ is given in terms of the length scale $L$ and the cutoff $R$. First, it is clear that $r_{\max}$ has to be smaller than both $L$ and $R$; this indicates that it must be determined by the smallest scale in the problem, so $r_{\max} \sim \alpha L$ with $\alpha$ a dimensionless parameter, $\alpha < 1$. Then, the only way to construct $\alpha$ from $L$ and $R$ is the relation $\alpha \sim L/R$. Thus $r_{\max} \sim L^2/R \sim \sqrt{ML} \sim M D^{-1/6}$, which agrees with [22]. For completeness, we stress that the metric perturbation in Eqs. (A.2)-(A.5) is well defined in the region $R_S < r \ll M D^{-1/6}$. Otherwise, at larger distances, the expansion parameter is too large and the perturbative treatment breaks down.
In accordance with the above relations, we now wish to express the horizon shift $\Delta R$ as the result of the tidal deformation described by the metric perturbation. As explained in Section 3.2, the explicit expression of the Ricci scalar [17] involves the combination $l(l-1)(l+1)(l+2)\, D_{lm}$ (A.12). The BH horizon radial deviation, following [17,19], is given by the corresponding expression, and the radial shift $\Delta R_2 = R_2 - R_S$ then follows. So the BH horizon extends from its original unperturbed location up to a distance scale that is determined by the factor $\Delta R_S/R_S = -D$.
We can now identify the tidal expansion parameter of this appendix, up to a sign for the $l = 2$ mode, with the horizon displacement parameter $D$ in Eq. (4.2), which establishes the connection between the description in Section 3.2 and the original setup of the tidal deformation.
| 6,990.2 | 2019-02-22T00:00:00.000 | [ "Physics" ] |
EURASIP Journal on Applied Signal Processing 2005:18, 3060–3068 © 2005 Hindawi Publishing Corporation
SPAIDE: A Real-time Research Platform for the Clarion CII/90K Cochlear Implant
SPAIDE (sound-processing algorithm integrated development environment) is a real-time platform of Advanced Bionics Corporation (Sylmar, Calif, USA) to facilitate advanced research on sound-processing and electrical-stimulation strategies with the Clarion CII and 90K implants. The platform is meant for testing in the laboratory. SPAIDE is conceptually based on a clear separation of the sound-processing and stimulation strategies and, specifically, on the distinction between sound-processing channels, stimulation channels, and electrode contacts. The development environment has a user-friendly interface to specify sound-processing and stimulation strategies, and includes the possibility to simulate the electrical stimulation. SPAIDE allows for real-time sound capturing from file or audio input on PC, sound processing and application of the stimulation strategy, and streaming the results to the implant. The platform is able to cover a broad range of research applications: from noise reduction and mimicking of normal hearing, through complex (simultaneous) stimulation strategies, to psychophysics. The hardware setup consists of a personal computer, an interface board, and a speech processor. The software is both expandable and to a great extent reusable in other applications.
INTRODUCTION
The technical evolution in cochlear implant processing shows an ever-increasing complexity of both the hardware and software [1,2]. This technological advance increases performance scores significantly but makes it difficult to implement and experiment with new sound-processing and stimulation strategies. Therefore, research tools that hide most of the complexity of implant hardware and communication protocols have been developed recently [3,4,5]. They allow streaming off-line processed data from PC to implant and support all stimulation features of the implant. However, off-line processing cannot support live input from a microphone. Furthermore, it is cumbersome when an experiment consists of comparing different processing strategies, each with different parameter settings, and more so when large word and sentence databases are used for evaluating sound-processing or stimulation strategies. (Figure 1: Relation between (typical) strategies, channels, and electrode contacts. PRE = pre-emphasis, BPF = bandpass filter, ENV = envelope extraction, MAP = mapping to current values.) SPAIDE (sound-processing algorithm integrated development environment) is a platform that makes all features of the Clarion CII and 90K cochlear implants [8] available for advanced research. It supports streaming off-line processed data as well as real-time processing on PC combined with streaming of the results to the implant. The platform supports live input, and off-line processing of the test material for all possible test conditions is no longer necessary.
The following section describes the basic concepts of SPAIDE and the terminology used throughout the paper. Then follow the key elements of the hardware and software. Next the specifications and benchmark results of the realtime processing are presented, and typical research applications with SPAIDE and the steps to set up an experiment are described. Finally the pros and cons of the platform and future developments are discussed.
BASIC CONCEPTS
The architecture and implementation of SPAIDE rely on the concept of channels. The platform makes a clear distinction between audio and stimulation channels, stimulation groups, and electrode contacts ( Figure 1). The soundprocessing strategy defines the number of audio channels and the different processing steps in each of these channels. A typical processing strategy in cochlear implants consists of pre-emphasis filtering, bandpass filtering, and envelope extraction [2]. The stimulation strategy specifies the number of stimulation channels, their stimulation sequence, and their temporal and spatial definitions. A stimulation channel is defined as a set of electrode contacts that simultaneously carry the same electrical stimulus waveform, though not necessarily with the same amplitude and sign ( Figure 2). This general definition is possible because the CII/90K implant has 16 identical and independent current sources, one per electrode contact. The temporal definition of a channel describes the electrical stimulus waveform. The spatial definition of a channel specifies the weights with which the waveform is multiplied for the different electrode contacts in the channel ( Figure 2). Some or all of the channels can be used more than once within a strategy, and an electrode contact can be part of different stimulation channels. All channels stimulated simultaneously constitute a stimulation group [6]. The waveforms of the different stimulation channels within a group may be different. Furthermore, one or more channels can be part of different stimulation groups. For instance, if channels C1 and C2 form group G1 and channels C2 and C3 form group G2, then channel C2 is stimulated whenever group G1 or G2 is activated while channels C1 and C3 are stimulated only when group G1 or G2 is activated, respectively.
The sound-processing and stimulation strategies are specified independently of each other. Audio channels are connected to stimulation channels during patient fitting. Each stimulation-channel input is connected to one of the audio-channel outputs, mapped to current values by the stimulation-channel and patient-specific compression, and then multiplied with the stimulus waveform and the spatial weights to determine the current at the electrode contacts.
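To make this mapping step concrete, the following C++ sketch illustrates (with hypothetical type and member names, not SPAIDE's actual API) how a stimulation channel could turn one processed audio-channel amplitude into per-contact currents: a patient-specific compression to a current value, followed by multiplication with the stimulus waveform sample and the spatial weights.

#include <array>
#include <cstddef>
#include <functional>

constexpr std::size_t kNumContacts = 16;  // CII/90K: one current source per electrode contact

// Hypothetical illustration of a stimulation channel as described in the text:
// a temporal waveform shared by all driven contacts plus per-contact spatial weights.
struct StimulationChannel {
    std::function<double(double)> compress;          // patient-specific mapping to microamperes
    std::array<double, kNumContacts> spatialWeight;  // -1 ... +1, 0 = contact not driven

    // Currents at all contacts for one processed-audio amplitude and one waveform sample.
    std::array<double, kNumContacts> contactCurrents(double audioAmplitude,
                                                     double waveformSample) const {
        const double currentMicroAmps = compress(audioAmplitude);  // amplitude -> current value
        std::array<double, kNumContacts> out{};
        for (std::size_t c = 0; c < kNumContacts; ++c)
            out[c] = currentMicroAmps * waveformSample * spatialWeight[c];
        return out;
    }
};

The same channel object can be reused for every sample of its electrical waveform, which mirrors the separation between the temporal and the spatial definition described above.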
The stimulation strategy is independent of the input signal and sound processing. This is not a limitation imposed by the platform but due to the fact that the stimulation strategy is programmed in the CII/90K implant; it is programmed with a table that defines the shape and timing of the electrical waveforms generated by the current sources. As a consequence stimulation rates in the different channels are fixed and cannot be changed based on signal properties, for example, set to 1/F0 with F0 the fundamental frequency.
HARDWARE
The hardware of SPAIDE consists of a personal computer (PC), a programming interface (PI), a speech processor (SP), and a Clarion CII or 90K implant (CII/90K) (Figure 3). Because the clinical programming interface (CPI), which is used during implant fitting in the clinical centre, does not provide a USB connection, an SBC67 DSP board from Innovative Integration [7] is used instead. The only SP that currently supports the SPAIDE application is the portable speech processor (PSP) [8].
At boot time the PC downloads the application software to the PI through the RS232 link, the PI sends application software to the SP, and the SP configures the implant's registers. Once all hardware components are booted, the PC captures sound from file or audio input on PC, processes the signal in a custom way, and sends commands and data in packages through USB to the PI where data is buffered and commands are handled immediately. During stimulation the SP masters the timing by sending hardware interrupts to the PI whenever it needs to forward data to the implant. The PI thus sends the buffered data to the SP at the rhythm imposed by these hardware interrupts, and the SP transmits the data to the CII/90K. Finally, the CII/90K generates the electrical stimulation patterns. The SP continuously reads the implant status information and the PI monitors both the SP and implant status. Stimulation is stopped immediately on any error condition.
Overview
The software architecture of SPAIDE is shown in Figure 4. On the PC side the application consists of three major components. The user interface (UI) of SPAIDE allows for user interaction with the two other components, which are the configuration and the real-time (RT) processing. (Figure 5: Typical processing chain of SPAIDE. The processing functions are implemented in feature blocks (FB).) PC software runs on Windows XP platforms. Specific application software also runs in the programming interface and in the speech processor.
In order to run an experiment the different processing steps and their parameters must be configured in a so-called topology file. This description specifies the sequence and the parameters of all processing steps that are needed to implement the sound-processing and stimulation strategies. Different (graphical) user interfaces help to specify the experiment and fit the patient.
The processing chain consists of up to four larger components ( Figure 5). The first component is the sound input, which reads data from file (e.g., WAVE audio) or captures sound on PC from its microphone or line input. The second component is the sound processing that can be completely user-defined within the processing capabilities of the PC. The third component is the application of stimulation strategy and patient fitting, and is custom within the limitations of the CII/90K. The fourth component is the output, which can be the USB driver for streaming data to the implant, sound output via loudspeaker or line output on PC, or output to file or MATLAB. Processing functions are implemented in feature blocks each of which implements one processing step, for example, a filter bank. Together, these feature blocks constitute an extendable collection of processing functions available to build a topology with.
RT processing is implemented in an RT framework. This framework requires that feature blocks implement a set of functions for initialization and processing, which is fulfilled automatically when the feature block is derived from the feature block class. Most software modules, from feature blocks to the stimulation-strategy builder used during configuration, are available as a Win32 dynamic link library (DLL) or as a static library. Therefore they can be reused in any application that can deal with these DLLs and libraries.
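A minimal sketch of what such a feature-block base class might look like (hypothetical names and signatures, not the actual SPAIDE headers), together with a trivial gain block as an example of a user-defined processing function:

#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Untyped byte queue: feature blocks are assumed to know the data type they exchange.
using ByteQueue = std::vector<std::uint8_t>;

// Hypothetical base class sketch of a SPAIDE-style feature block.
class FeatureBlock {
public:
    virtual ~FeatureBlock() = default;
    // Called once by the RT engine with the parameters taken from the topology file.
    virtual bool init(const std::map<std::string, std::string>& params) = 0;
    // Called once per processing frame; reads the input queue, fills the output queue.
    virtual bool process(ByteQueue& input, ByteQueue& output) = 0;
};

// Example feature block: a gain stage applied to float samples.
class GainBlock : public FeatureBlock {
public:
    bool init(const std::map<std::string, std::string>& params) override {
        auto it = params.find("gain");
        if (it != params.end()) gain_ = std::stod(it->second);
        return true;
    }
    bool process(ByteQueue& input, ByteQueue& output) override {
        output.resize(input.size());
        const float* in = reinterpret_cast<const float*>(input.data());
        float* out = reinterpret_cast<float*>(output.data());
        for (std::size_t i = 0; i < input.size() / sizeof(float); ++i)
            out[i] = static_cast<float>(gain_) * in[i];
        return true;
    }
private:
    double gain_ = 1.0;
};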
Configuration
The first step in the configuration (Figure 4) is the specification of the topology. The main task of the topology builder is to define the different processing steps (feature blocks) and to specify the names of the queues that interconnect them. The framework will automatically connect two feature blocks with corresponding input and output queue names. Once the topology is specified, the parameters of the sound-processing strategy, for example filter parameters, must be designed. These parameters can be in the topology file, or the topology can contain links to the files with the parameters. Currently no UI is integrated in SPAIDE to specify the topology or to design sound-processing parameters; other applications, for instance MATLAB, should be used instead. The stimulation builder window of SPAIDE specifies the temporal and spatial properties of stimulation channels and stimulation groups, and also specifies the grounding scheme. When one or both of the indifferent electrodes outside the cochlea (the implant box or the ring electrode around the electrode array) are grounded, this applies to all stimulation channels. The grounding of an electrode contact, however, is controlled dynamically; that is, the electrode contact can be grounded in one or more stimulation channels but can be an active contact in other stimulation channels. The specified stimulation strategy is converted into two tables. One table is used on the PC for timing the data/amplitude stream from PC to implant; the other is sent to the implant to control the shape and timing of the electrical waveforms generated by the current sources.
Patient fitting consists of the specification of patientdependent parameters. It groups parameters that can be adapted to the patient. The most important parameters are the connection between audio and stimulation channels and the mapping functions in each of the stimulation channels. The mapping functions implemented in SPAIDE define the relation between the processed-audio amplitude and the current value (in µA) in each of the stimulation channels individually and are implemented as static compressions.
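As an illustration of such a static compression (a hypothetical mapping; the exact clinical function is not specified in the text), a logarithmic map from a normalized processed-audio amplitude to a current in microamperes between a threshold level and a most-comfortable level could look like this:

#include <algorithm>
#include <cmath>

// Hypothetical static compression for one stimulation channel:
// maps a normalized audio amplitude in [0, 1] to a current in microamperes
// between the patient's threshold (T) level and most-comfortable (M) level.
double mapToCurrentMicroAmps(double amplitude, double thresholdUA, double comfortUA,
                             double steepness = 9.0) {
    const double a = std::clamp(amplitude, 0.0, 1.0);
    // Logarithmic loudness-growth shape, normalized to [0, 1].
    const double compressed = std::log1p(steepness * a) / std::log1p(steepness);
    return thresholdUA + compressed * (comfortUA - thresholdUA);
}

Because such a function is purely static, changing its parameters during a session only affects subsequent frames, which is what makes the real-time fitting described below possible.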
Real-time processing
The RT framework consists of several components ( Figure 6) of which the core component is the RT engine that initializes and runs the topology. First the topology description is read and the feature blocks (processing functions) are connected through data queues. These queues have no specialized data type and are implemented as a byte buffer. Functions that use a queue are assumed to know the type of data their input queue is providing. The RT engine also creates a container object that is used to store data that is accessible by both the SPAIDE application and all components in the framework. After creating all components, the RT engine initializes the processing with the parameters specified in the topology and sends the stimulation-strategy table to the implant. Once the whole processing chain and the implant are initialized, the engine starts its run-thread and sequentially executes the processing functions in the topology and realizes a continuous data flow from input to output. If needed a feature block can run in its own thread, which is useful for asynchronous processes like the USB transmission.
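A highly simplified sketch of the run-thread logic described above (hypothetical structure; the real engine additionally handles the shared container object, per-block threads and error reporting):

#include <atomic>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

using ByteQueue = std::vector<std::uint8_t>;
using ProcessFn = std::function<bool(ByteQueue& in, ByteQueue& out)>;

// Hypothetical RT engine run loop: it walks the topology in order, feeding each
// block's output queue into the next block's input queue, until stopped or until
// a block reports an error.
class RtEngine {
public:
    void addBlock(ProcessFn block) { blocks_.push_back(std::move(block)); }

    void run() {
        ByteQueue ping, pong;                 // two buffers reused as alternating in/out queues
        while (!stop_.load()) {
            ByteQueue* in = &ping;
            ByteQueue* out = &pong;
            for (auto& block : blocks_) {
                out->clear();
                if (!block(*in, *out)) { stop_ = true; break; }  // any error halts stimulation
                std::swap(in, out);           // this block's output is the next block's input
            }
        }
    }
    void stop() { stop_ = true; }

private:
    std::vector<ProcessFn> blocks_;
    std::atomic<bool> stop_{false};
};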
SPAIDE uses a frame-based paradigm to process the audio input in real time. The audio is first chopped into frames of approximately 50-100 milliseconds (see Section 5). These frames are processed through digital filters, mapping functions, and so forth, and samples are selected as specified by the stimulation strategy. These values are converted into stimulation currents according to the patient-dependent fitting parameters. To maximize the stimulation accuracy, SPAIDE automatically sets the current ranges in the implant to obtain the highest current resolution, and scales current values accordingly. Finally, the currents are organized into frame packets and transmitted over the USB link to the programming interface. During processing, messages sent by SPAIDE, the RT framework, or feature blocks are logged in a window such that the experimenter is aware of the status of the platform. Changes to the fitting parameters are immediately used during RT processing as long as these changes do not require a reset of the implant. This allows for RT fitting with SPAIDE.
SPAIDE has a simulation mode in which there is no data stream to the implant. A simulator window displays the electrical waveforms, either at the stimulation-channel level or the electrode-contact level. This allows for verifying the whole configuration without the need for hardware.
Programming interface
The role of the application software in the PI is to handle the data stream from the USB link to the SP and the implant in a timely manner. It implements a FIFO in which data is written at the rhythm of the USB transmission, and from which data is read at the rhythm imposed by the hardware interrupts that are generated by the SP.
This FIFO buffer is necessary because processing on a PC running Windows shows jitter in the processing duration, which means that temporarily no new data is sent by the PC to the PI. Without a buffer this would cause an underrun, that is, the PI would not have enough data available to sustain the continuous stream to the implant. The length of the buffer is dimensioned such that the probability of an underrun is very low (see Section 5). If it occurs anyway, a frame holding zero amplitudes is inserted. In the case of overrun, that is, when too much data is sent to the PI, the application on the PI can throw out frames. An overrun typically follows a series of underruns, when the processing on the PC is catching up on its processing delay. Neither the insertion of zero amplitudes during underrun nor the deletion of frames during overrun can result in charge-unbalanced stimulation, thus guaranteeing patient safety.
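The underrun/overrun policy described above could be sketched as follows (hypothetical implementation; the drop-oldest overrun policy and the buffer depth are illustrative only):

#include <cstddef>
#include <deque>
#include <vector>

// One stimulation frame of amplitude values destined for the implant.
using Frame = std::vector<int>;

// Hypothetical PI-side FIFO: filled at the rhythm of USB packets from the PC,
// drained at the rhythm of the hardware interrupts generated by the speech processor.
class FrameFifo {
public:
    explicit FrameFifo(std::size_t maxFrames) : maxFrames_(maxFrames) {}

    void push(Frame f) {
        if (fifo_.size() >= maxFrames_) fifo_.pop_front();  // overrun: drop a buffered frame
        fifo_.push_back(std::move(f));
    }

    // Called on every hardware interrupt from the speech processor.
    Frame pop(std::size_t frameLength) {
        if (fifo_.empty())
            return Frame(frameLength, 0);  // underrun: stimulate with zero amplitudes
        Frame f = std::move(fifo_.front());
        fifo_.pop_front();
        return f;
    }

private:
    std::deque<Frame> fifo_;
    std::size_t maxFrames_;
};

Choosing the buffer depth (about 300 milliseconds in SPAIDE, see Section 5) trades overall latency against robustness to the scheduling jitter of Windows.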
Sound processor
The SP application software is a subset of the code base of the clinical SP, and includes only the functionality needed for forward telemetry to the implant. Other capabilities like audio capture from the SP, access to SP control settings, and back telemetry are currently not used but might be in the future.
Differences in speech perception scores of the HiResolution [8] strategy with SPAIDE and with the clinical processor have not been evaluated in a formal study. However, initial testing of the platform demonstrated a tendency toward slightly lower scores with SPAIDE. This is probably a result of the accumulation of small implementation differences between SPAIDE and the clinical device. The sound processing in SPAIDE has to simulate the analogue front-end of the clinical processor and does not include the AGC. Furthermore, the mapping/compression in SPAIDE is similar but not identical to the one in the clinical device. Finally, due to some limitations in the current stimulation builder of SPAIDE (see Section 6), the stimulation strategy is not always identical to the clinically used strategy. Table 1 shows benchmark results for the standard cochlear implant strategies used with the CII/90K, CIS [2], and HiResolution, as measured on a Pentium IV-1.7 GHz PC, with 512 MB RAM, and running Windows XP. Audio input is read in frames of 100 milliseconds from a WAV file with signals sampled at 44 100 Hz, and the processing uses double-precision floating-point values. The CIS processing chain consists of a 2nd-order IIR pre-emphasis filter, a 16-channel 6th-order IIR filter bank, half-wave rectification, 2nd-order lowpass IIR filters for envelope extraction, sample selection, compression, and USB transmission. The HiResolution processing chain consists of a full simulation of both the analogue and digital preprocessing stages in the SP programmed with the HiResolution strategy (without the AGC), a 16-channel 6th-order IIR filter bank, envelope extraction by rectification and averaging, compression, and USB transmission. In both cases the stimulation strategy is standard 16-channel CIS with a rate of 2900 pulses/s/channel. The longer processing time needed by the HiResolution strategy is due to the simulation of the analogue front-end. These timing results are only indicative because they also depend on PC hardware properties like bus speed, amount of cache memory, and so forth, but they show there is enough headroom for implementing more complex processing strategies.
An important aspect of stimulation in cochlear implants is that the timing of the electrical pulses is exact and that the correct current values are delivered to the right electrode contacts. This calls for synchronization between the processing on the PC and the stimulation in the CII/90K. All stimulation timing is controlled by the hardware interrupts generated by the SP with microsecond accuracy. Therefore it is independent of the exact timing of the software running under Microsoft Windows, provided the PC is able to sustain the required data stream. As described before, the stimulation builder creates a timing table during configuration, which allows the PC to match the rate of data transmission from PC to PI to the rate at which the implant needs data. This minimizes the chance of data underrun, provided the SPAIDE application can use PC resources as much as it needs. Unfortunately this is not always the case under Windows. An analysis has shown that Windows periodically calls processes that delay the processing, and thus the data transmission over USB, by up to several milliseconds. Some rare but much larger delays of several tens of milliseconds are also found. Although it is possible to specify short processing frames and use the FIFO in the PI to buffer these processing delays, SPAIDE is typically used with processing frames of 50-100 milliseconds. That frame length is needed to minimize or prevent underruns, while using longer processing frames also has the advantage of a smaller processing overhead, that is, a more efficient use of the PC's resources. The length of the FIFO in the PI is approximately 300 milliseconds. It is a value that gives extra headroom to buffer even larger processing delays due to UI interaction, for example in the fitting window of SPAIDE, or due to other applications running at the same time as SPAIDE.
The overall latency from sound-processing input to electrical stimulation depends on the type of input. In case of file input and 100-milliseconds frames it is approximately 350 milliseconds, which is the sum of the processing duration (cf. Table 1), the USB transmission time, and the FIFO delay on the PI; in case of audio input this latency must be increased by the time to record the frame, that is, 100 milliseconds in this example.
Many safety measures are built into the platform to detect inconsistencies in the configuration and to prevent stimulation of the patient with too large or unbalanced currents. When a problem is encountered during configuration, the platform will not allow stimulation. Errors during processing will immediately result in a stimulation halt. The strategy builder always verifies current balancing for each stimulation channel: the sum of currents should always be zero if no grounded contact is associated with the channel. If this condition is not met, the builder signals an error and refuses to build the strategy and extract the stimulation table for the implant.
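The balance check described above could be sketched as follows (hypothetical code; it simply verifies that the per-contact weights of a channel sum to zero when no grounded contact is associated with it):

#include <array>
#include <cmath>
#include <cstddef>

constexpr std::size_t kContacts = 16;

// Hypothetical safety check used when building a stimulation strategy:
// without a grounded contact in the channel, the per-contact currents
// (and hence the spatial weights) must sum to zero to keep stimulation charge-balanced.
bool isChannelBalanced(const std::array<double, kContacts>& spatialWeights,
                       bool hasGroundedContact, double tolerance = 1e-9) {
    if (hasGroundedContact) return true;  // return current flows through the grounded contact
    double sum = 0.0;
    for (double w : spatialWeights) sum += w;
    return std::abs(sum) < tolerance;
}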
As discussed before, the processing chain consists of up to four components (Figures 5 and 7a). This configuration is typical for research applications in the field of sound-processing or stimulation strategies that are simple enough for real-time processing to be feasible. It is the easiest configuration to evaluate strategies with sound or speech material stored as WAV files on hard disc. However, not all components are necessarily present in the configuration (Figures 7b, 7c, and 7d). Figure 7b shows a configuration where the input consists of audio-channel data that was generated and saved to file earlier, for example, by SPAIDE (cf. Figure 7d) or by another application like MATLAB. The audio-channel data is connected to stimulation channels that are configured in SPAIDE, and the fitting parameters complete the configuration. This application is useful when the audio-processing complexity is too large for implementation in a real-time processing system on PC. Adaptive filter models of the cochlea used for a closer replication of the processing steps in the normal ear [9], for instance, are very likely to fall in this category. The setup is also applicable to experiments where audio data must meet specific long-term criteria which are difficult to guarantee in a real-time system, for example, noise addition to enhance stochastic resonance [10,11]. Another category of applications that can use this configuration is found in studies of new stimulation strategies, where the input is used to modulate the electrical stimulation patterns and to generate special pulse patterns that allow evaluating spatial selectivity, temporal interaction, and so forth. The configuration in Figure 7c only consists of an input and an output component. It is a streaming application where the preprocessed data contains the amplitude values that must be transmitted to the implant. This is the preferred configuration when the experimenter wants maximum control over the stimulated currents, which is often the case in psychophysics. Finally, the configuration in Figure 7d is a pure sound-processing application where no data is streamed to the implant. This can be used to preprocess sound, for example, to study the application of simultaneous masking [12] in a free-field experiment with CI subjects.
The key to a successful experiment with SPAIDE is the preparation of topologies and sound-processing parameters, and the definition of stimulation strategies. To simplify this process for many experiments, SPAIDE comes with a set of topologies, parameters, and stimulation strategies that cover the CIS and HiResolution strategies for both streaming (Figures 7b and 7c) and real-time applications (Figures 7a and 7d). No further technical knowledge is needed to do experiments with SPAIDE. After preparation of the hardware setup and booting, the prepared topologies and stimulation strategies can be loaded. If necessary, the stimulation strategy can be fitted using the fitting screen of SPAIDE.
The software architecture, the separation of sound-processing and stimulation strategies, the implementation of processing steps in different modules (DLL, static lib), and the implementation of the RT framework as a separate module make it possible to reuse SPAIDE, or part of it, within a new Windows application that can deal with DLLs. Documentation and sample code are provided to support the development of new applications. The modularity also favours the expandability of SPAIDE through the addition of new feature blocks. A researcher can implement a custom processing function, for example, a specific filter bank or compression, as a new feature block which must be coded in C/C++. SPAIDE is used in research on noise reduction and mimicking of normal hearing [12], and to evaluate new stimulation strategies [13]. Different research centres recently developed new C/C++, MATLAB, and Delphi applications that reuse SPAIDE functionality for psychophysical experiments and evaluation of new sound-processing and stimulation strategies.
DISCUSSION
In contrast with platforms that can only stream preprocessed data from file [3,4,5], SPAIDE is also able to simultaneously capture sound in real time, process this input immediately, and stream the results to the implant. Therefore SPAIDE is called a real-time system, although it is not a hard real-time system as one would expect from a typical DSP platform. The reason is that all processing is done on a Windows platform that is not under full control of the application. This results in jitter in the processing duration, something that is accounted for in the application software of SPAIDE at the cost of an overall latency of 300-400 milliseconds. In many research applications this latency is much less disturbing than stimulation with zero currents due to an underrun. However, for applications that need live audio input from a microphone, this latency is too large when synchronization between visual cues for lip-reading and auditory perception is needed. Synchronization is mandatory when audio captured by the speech processor is used as a way to communicate with the patient. This mode is currently not available because only the USB downlink from PC to PI is used.
The platform is designed for use in a very broad range of research applications on sound-processing, stimulation strategies, and psychophysics. The platform however does not offer a ready-made solution for all possible research demands. Extending the possibilities with new processing functions (feature blocks) is one way to adapt the platform, but necessitates C/C++ programming knowledge. Another way is to use the exported functionality in an existing or new Windows application. This, of course, also demands programming skills but the application can be written in the preferred language like C/C++, C#, Visual Basic, MATLAB, and so forth.
Before experimenting with new sound-processing or stimulation strategies in patients, it is often required to first verify the whole processing chain from input to output. SPAIDE supports writing the data from queues, which interconnect the processing blocks, to files or to matrices in MATLAB. When SPAIDE is used in simulation mode, the current values and electrical waveforms can be verified. This allows verifying the processing up to the amplitude data transmitted to the programming interface and the temporal property of the stimulation channels, but not the currents delivered to the electrode contacts. These currents can only be monitored on an oscilloscope connected to the electrode load board of a reference implant in a box. A tool to analyse the RF signal to the implant [14] is not a perfect alternative since the stimulation strategy, which delivers the amplitude data to the right electrodes in the cochlea, is programmed in the implant.
The current implementation of the stimulation builder supports up to 32 stimulation channels, each with its own temporal and spatial properties. The electrical waveform is limited to a concatenation of 4 pulses, and the duration of each of these pulses can be specified with a time resolution of 10.776 microseconds. The spatial scaling factor can only be specified in steps of 1/8, from -1 to +1. If the spatial scaling factor equals zero, the contact can be specified as active grounded or as floating (not grounded). Two simultaneously active stimulation channels can have common passive (grounded) contacts, but no common active electrode contacts, as this would require current summation for the common contacts.
Further improvements to both the hardware and software of the platform are foreseen. Currently the programming interface is a specific DSP board and the platform only supports monaural experiments. The next generation (research) SP will have USB and will replace both the PI and SP. Furthermore, it will support binaural experiments. On the software side, the new stimulation builder will not have the constraints mentioned above and the USB uplink will be implemented such that microphone input from the SP can be used. Latencies from input to output will be reduced to obtain synchronization between visual cues for lip-reading and auditory perception. Furthermore, the specification of topologies and parameters will be integrated in the platform.
CONCLUSION
The SPAIDE platform is versatile and supports a broad range of research applications on sound-processing, stimulation strategies, and psychophysics. The separate configuration of sound-processing strategy, stimulation strategy, and patientfitting parameters enhances the flexibility.
The open architecture offers two ways to further extend the platform's possibilities. Components of SPAIDE are reused in custom applications and research tools, or custom components (feature blocks) are added to SPAIDE. The platform is powerful because it supports the wide stimulation capabilities of the CII/90K implant. The real-time processing capability is limited by the available resources on PC, which not only depend on clock speed but also on available cache, Windows processes other than SPAIDE, amount of data transmitted over USB, and so forth.
Safety measures are incorporated to maximize patient safety, ranging from checking stimulation strategies to detection of configuration inconsistencies. The status of all components is continuously checked during processing and any unexpected event will immediately result in a stimulation halt. The processing runs under Windows, which results in jitter in the processing delay. However, buffering and synchronization mechanisms in both SPAIDE and the application software in the programming interface minimize the chance of losing data and of erroneous stimulation.
| 6,745.2 | 2004-01-01T00:00:00.000 | [ "Computer Science" ] |
Effects of BRCA1 and BRCA2 gene mutations on female fertility among Chinese women: A systematic review and meta-analysis
Purpose: It remains unclear whether BRCA mutations reduce female fertility by increasing the prevalence of breast and ovarian cancer. We therefore focus on the effects of BRCA mutations on female fertility among Chinese women in this meta-analysis. Material and Method: The PubMed, Medline, Scopus, Embase, Science Direct, Web of Knowledge and China National Knowledge Infrastructure (CNKI) databases were systematically searched to select relevant studies published from 2000 to 2022, using the key words "BRCA" and "mutation" and "female fertility or ovarian cancer or cervical cancer or breast cancer" and "China or Chinese or Asia or Asian". Random-effects models in RevMan 5.3 software were used to include and evaluate both longitudinal studies and randomized controlled trials. Results: This meta-analysis included 13 studies with a total of 10689 Chinese participants. Compared with the control group, positive associations between BRCA mutations and female cancers were shown among Chinese women aged 35 to 60 years (OR = 5.26, P < 0.00001). Conclusions: BRCA mutations may increase the incidence of cancer among Chinese women, especially those older than 40 years, and may reduce female fertility; more prospective studies on fertility outcomes are still needed in the future.
Introduction
China has become an aging society. By the end of 2021, there were 190 million people aged 65 years or older, making up 13.5% of the overall population (1). Many factors contribute to this aging trend, such as longer life expectancy, lower mortality, lower fertility, improved health care and economic development, of which declining fertility is the most important. As shown in references (2)(3), the fertility rate in China remained at roughly 1.5 to 1.6 over the last decade but dropped to 1.3 for the first time in 2020, below the internationally recognized alert line of 1.5. Improving fertility among Chinese women is therefore important for ameliorating the aging problem.
Many studies have confirmed that numerous factors can affect female fertility, such as genetic factors, age, gynecological history, poor sexual behavior, male diseases and environmental factors, among which genetic mutations, such as those in the BRCA genes, are also key. The BRCA genes, especially BRCA1 and BRCA2, play important biological roles in telomere length maintenance and DNA repair, thereby affecting female reproductive longevity (4)(5). In addition, mutations of the BRCA genes can cause the accumulation of meiotic errors, leading to apoptosis and early depletion of the ovarian reserve (5)(6)(7). It is known that mutations in the BRCA genes can raise the risk of ovarian, breast and cervical cancer, impair the ovarian reserve and reduce female fertility; however, this remains controversial (4). Meanwhile, in China there is a lack of corresponding evidence-based medical evidence in this regard.
Because previous systematic reviews and meta-analyses have known limitations, such as the small number of included references and inadequate consideration of possible confounding factors, we conducted a thorough meta-analysis of BRCA1/2 mutations and female fertility among Chinese women, which is important for the management and follow-up of this high-risk population.
Data identification and selection
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement served as the foundation for this systematic review and meta-analysis. A comprehensive literature search was conducted from January 2000 to December 2022, and all eligible studies were included regardless of language, authors and study type. The data were identified through computerized databases, including PubMed, Medline, Scopus, Embase, Science Direct, Web of Knowledge and the China National Knowledge Infrastructure (CNKI), using the search terms "BRCA or BRCA1 or BRCA2" and "mutation" and "ovarian reserve or fertility or ovarian cancer or cervical cancer or breast cancer" and "China or Chinese or Asia or Asian". The women considered for analysis were stratified by fertile age and assessed in subgroups. Additionally, the references of the pertinent studies mentioned above were reviewed in order to include further papers. All published original articles available in full length were finally selected.
Inclusion and exclusion criteria
To ensure the quality of this research, all included references were required to be randomized controlled trials, cohort studies or long-term studies on the impact of BRCA1 and BRCA2 gene mutations on female fertility. Literature reviews, animal studies, and cell-line studies were excluded (Figure 1).
Figure 1. Flowchart of included studies
Two reviewers independently assessed all of the references and citations during the retrieval process to find studies matched by the search index phrases. Another reviewer arbitrated any conflicts. Additionally, the same person evaluated all potential publication biases using funnel plots of the outcome comparisons.
(Fragment of Table 1: study [20], cross-sectional, 13 (366) participants, ovarian cancer; BRCA1/2 mutation group 23.08% (3/13) vs. control group 1.92% (7/366).)
Data extraction and outcomes
After reviewing the full text of each study included in this meta-analysis, the pertinent data were collected to preserve the study characteristics, and the following information was noted: year of publication, disease (breast cancer patients undergoing reproductive reserve assessment / cancer-free women undergoing surveillance programs), number/total number of BRCA carriers, study type, patient age (range), matching criteria/adjusted variables and main outcomes (Table 1).
Statistical analysis
To assess statistical heterogeneity across all studies, the χ² test for heterogeneity of the proportions was run. Fixed- or random-effects models in Review Manager software (RevMan version 5.3) were used to generate the pooled odds ratio (OR). Continuous outcomes were presented as means ± SD with 95% confidence intervals (CI). Subgroup analysis was carried out by age. Heterogeneity between the included studies was assessed using the I² statistic: when I² was greater than 50%, a random-effects model was applied; when it was lower than 50%, a fixed-effects model was applied. A funnel plot was used to look for potential publication bias. Two-sided P values below α=0.05 were considered statistically significant.
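As an illustration of the pooling procedure described above (the authors used RevMan 5.3, not the code below), the following sketch computes an inverse-variance fixed-effect estimate, Cochran's Q, I², and a DerSimonian-Laird random-effects pooled odds ratio; the 2×2 tables are invented and do not correspond to the included studies.

```python
# Hedged sketch of fixed- and random-effects OR pooling; all counts are made up.
import numpy as np

# (events, non-events) in mutation carriers vs. controls for three hypothetical studies
studies = [(12, 88, 5, 195), (30, 170, 10, 290), (8, 42, 6, 144)]  # (a, b, c, d)

log_or = np.array([np.log((a * d) / (b * c)) for a, b, c, d in studies])
var = np.array([1/a + 1/b + 1/c + 1/d for a, b, c, d in studies])

# Fixed-effect (inverse-variance) pooling
w = 1 / var
pooled_fixed = np.sum(w * log_or) / np.sum(w)

# Heterogeneity: Cochran's Q and I^2
q = np.sum(w * (log_or - pooled_fixed) ** 2)
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird between-study variance and random-effects pooling
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (var + tau2)
pooled_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"I^2 = {i2:.1f}%  pooled OR (random) = {np.exp(pooled_re):.2f} "
      f"[{np.exp(pooled_re - 1.96*se_re):.2f}, {np.exp(pooled_re + 1.96*se_re):.2f}]")
```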
Basic characteristics in this study
As shown in Figure 1, 955769 articles were initially identified through the PubMed, EMBASE, Web of Knowledge and CNKI databases. After reviewing the titles and abstracts, 63 articles remained for which the full manuscript could be retrieved for detailed assessment. Of these, 50 articles were excluded: 24 were removed as duplicates, 10 were reviews, 2 were conference abstracts, 4 were animal studies, 3 were study protocols, and 6 were case reports. After reviewing the full texts, 13 suitable manuscripts were finally included in this meta-analysis (8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20). As shown in Table 1, the publication dates of the included studies ranged from 2000 to 2022. Of these studies, eleven were cross-sectional studies, two were retrospective cohort studies, and one contained both a retrospective cohort study and a cross-sectional study. Sample sizes ranged from 4 to 1277; three studies included women with breast cancer, two included women with ovarian cancer, and two included women with various cancers.
Effects of BRCA1/2 mutations on female cancers
As shown in Figures 2 and 3, 13 studies with a total of 10,689 participants were pooled to investigate the association between BRCA mutations and female malignancies. Using a random-effects model, the forest plot (Figure 2) showed a significant positive association between BRCA mutations and female malignancies (OR=5.26, P<0.00001).
Discussion
This meta-analysis of 13 included studies determined, using a random-effects model, that BRCA1/2 mutations could increase the prevalence of cancers and thereby reduce female fertility among Chinese women. This is in agreement with the conclusions drawn by Matanes, Marchetti et al. (21)(22)(23), who reported positive correlations between female cancer and mutations of BRCA1 and BRCA2. However, another meta-analysis, from Jiang et al., showed no evident correlation between BRCA1/2 mutations and female fertility (24).
As in previous studies, BRCA1 and BRCA2 play an important role in regulating DNA damage and repair, cell proliferation and apoptosis, with a direct impact on DNA repair and the maintenance of telomere length. BRCA mutations are among the best known, well-characterized and most important genetic alterations that predispose carriers to an increased frequency of cancer in women (25)(26)(27). In addition, BRCA mutations can cause accelerated loss of primordial follicles and increased DNA damage in oocytes, leading to a reduction in the ovarian reserve (28); mutations in the BRCA genes also result in an inability to maintain telomere integrity (29), which is associated with germ cell lifespan (30). It has also been hypothesized that mutations in the BRCA genes may lead to a shortened germ cell lifespan and thus hypogonadism (31). Taken together, these data and our analysis above suggest that women with BRCA mutations have a poorer ovarian reserve and fertility potential than wild-type women, and that mutations in the BRCA genes do affect female fertility.
Our meta-analysis confirmed, using a random-effects model, that BRCA1/2 mutations could increase the prevalence of cancers and thereby reduce female fertility among Chinese women. However, Hu et al. reported no direct evidence of effects on the fetal birth rate, serum anti-Müllerian hormone levels, antral follicle count or ovarian response (32). That study did not analyze BRCA1 and BRCA2 separately, and the fertility measures it reported did not accurately reflect female fertility potential. Other results indicated that BRCA carriers and non-carriers had similar fertility experiences, with no differences in age at first birth, age at last birth, or mean gestational age between the two groups (33)(34); however, these studies were questionnaire-based, the conclusions were limited by the data collected, and no age-stratified analysis was performed. The results of Moslehi et al. (35) likewise indicated no significant differences in fertility between BRCA mutation carriers and non-carriers. The results of Gasparri et al. (36) suggested that young BRCA1 mutation carriers had lower AMH levels compared to wild-type women and therefore might have a reduced ovarian reserve, although there was no direct evidence that low AMH levels lead to reduced ovarian reserve and pregnancy rates. Because significant heterogeneity must be taken into account, the findings of this meta-analysis are not entirely conclusive, and the sample size discussed in this paper is still insufficient, so more samples are needed in the future.
Conclusions
In summary, the references published to date have not conclusively demonstrated that BRCA mutations reduce female fertility by increasing the prevalence of ovarian and cervical cancer. However, after stratifying by age, a positive association between BRCA mutations and female cancers was evident, particularly among women older than 40 years.
Contributors
Rui-Chen Ma designed and conducted the analysis, extracted figures, created graphs, wrote the document, and assessed the quality of the included studies. Yu-hua Ma read the articles and collected the data. Jing Zhao reviewed the research and evaluated the quality of the included papers. All authors commented critically on the manuscript and agreed with the final document.
| 2,749.6 | 2023-01-01T00:00:00.000 | [
"Biology"
] |
Low activation, refractory, high entropy alloys for nuclear applications
Two new, low-activation high entropy alloys (HEAs), TiVZrTa and TiVCrTa, are studied for use as in-core structural materials for nuclear applications. Low activation is a desirable property for nuclear reactors, reducing the amount of high-level radioactive waste upon decommissioning, and is also an important consideration for fusion applications. The alloy TiVNbTa is used as a starting composition to develop the two new HEAs, TiVZrTa and TiVCrTa. The new alloys exhibit indentation hardness and modulus comparable to the TiVNbTa alloy in the as-cast state. After heavy ion implantation, the new alloys show increased irradiation resistance.
The operating environments envisaged for advanced nuclear reactors create significant challenges for structural materials due to a higher neutron flux, a more corrosive environment and higher operating temperatures [1]. The initial material of choice for structural components in Gen IV reactors was 316 austenitic stainless steel; however, due to unacceptable levels of void swelling, the focus has shifted towards reduced-activation ferritic/martensitic (F/M) steels, which are suitable for both fission and fusion applications [2,3]. Although these are promising candidate materials, there remain significant concerns regarding the creep-rupture strength and irradiation embrittlement at 550 °C [4]. It is therefore of interest to explore new alloy designs outside the paradigms of conventional steels to meet these material requirements.
High entropy alloys (HEAs) represent a new class of alloys that have the potential to replace conventional alloys in structural applications. Typically, they consist of four or five alloying elements in close to equiatomic concentrations. The original concept of HEAs was based on the idea that the high configurational entropy of the system would favor the formation of a disordered single-phase solid solution over ordered intermetallic compounds, resulting in simple microstructures with enhanced material properties [5–7].
Although single-phase HEAs have been reported in the literature [8,9], an even larger number of HEAs have been documented that exhibit complex intermetallic phases [10,11]. As a result, the concept that configurational entropy would generally stabilise these highly alloyed systems to a single phase is now largely discredited, and research is directed towards forming more ductile HEAs that contain a major solid-solution phase fraction [12,13]. HEAs based on refractory elements of the 4-5-6 alloy group (named from the groups and periods of the periodic table) show considerable potential for structural applications [14]. Alloys of the NbTaV–(Ti, W, Mo) system, which have a disordered BCC structure, have been proposed as high-temperature materials that offer high strength, ductility and oxidation resistance [15,16]. The alloy TiVNbTa shows excellent compressive mechanical properties at room temperature (σy = 1273 MPa) and at elevated temperatures (σy drops to 688 MPa at 900 °C) [17]. Variations of the TiVCrZrNb system have also been proposed as high-temperature structural materials which exhibit low densities and high hardness [18]. A concise summary of the current literature on HEAs based on the 4-5-6 elemental palette is tabulated in Ref. [19].
HEAs are of interest for nuclear applications due to their claimed 'self-healing' qualities [20,21]. Experimental studies using heavy ion irradiation demonstrate that HEAs offer superior radiation resistance compared to conventional alloys, with a lower density of dislocation loops and less radiation-induced segregation, which has been attributed to the severe lattice distortion and sluggish diffusion [22–25]. More recent work has attributed the irradiation resistance of HEAs to two unique properties: firstly, a lower phonon mean free path, limiting the cascade-induced heat wave propagation and creating a more localised and longer thermal spike that would favor athermal point defect recombination [26]; and secondly, a broadening of the interstitial and vacancy migration energy distributions, with possible overlap of the two distributions, which would favor thermal point defect recombination [27]. Both hypotheses are still under investigation to tentatively explain the potentially higher radiation resistance of HEAs. Previous research has demonstrated that the radiation-induced volume swelling in the AlxCoCrFeNi alloys is lower than that of conventional nuclear materials. Among HEAs, FCC alloys show less swelling than alloys with a BCC + FCC microstructure, and single-phase BCC microstructures show the largest amount of void swelling; in contrast, conventional alloys with a BCC microstructure would generally show less void swelling than FCC materials [21]. Although the majority of irradiation damage studies have targeted FCC HEAs based on transition metals (CoCrFeMnNi), refractory HEAs that offer a low thermal neutron absorption cross-section, i.e. TiVZrNb [28] and TiVCrNbMo [29], have been proposed for nuclear applications. However, as pointed out in Ref. [19], the suitability of these alloys for use in the next generation of advanced nuclear reactors is somewhat restricted by the high-activation elements Nb and Mo. Although activation issues are significantly reduced compared to the most common cobalt-containing FCC HEAs, these high-activation elements require substantially longer time periods before the radioactivity levels reach a satisfactory limit for 'hands on' maintenance [30]. Materials based on low-activation elements, with minimal impurities, offer a profound environmental benefit by reducing the quantity of high-level radioactive waste during decommissioning and by allowing in-core components to be recycled; low activation is also an essential design criterion for fusion reactors [31,32]. As such, the development of a radiation-resistant, low-activation HEA with enhanced high-temperature properties would advance the next generation of nuclear reactors.
In this work, we start with the well-characterised BCC alloy TiVNbTa, as this material consists of a single-phase, disordered BCC microstructure with exceptionally high strength and significant plastic flow in compression at elevated temperatures [17]. The composition is modified by replacing the high-activation element Nb in TiVNbTa with Zr and Cr, to produce TiVZrTa and TiVCrTa: two low-activation alloys with low thermal neutron absorption cross-sections, making them suitable for both advanced fission and fusion applications. The nuclear properties of the pure elements are given in Table 1. We report firstly on the microstructure of these alloys and, using nanoindentation, study the effect of heavy ion irradiation on the mechanical properties.
Equiatomic TiVNbTa, TiVZrTa and TiVCrTa alloys were vacuum arc melted in a water-cooled copper crucible using an Arccast Arc200 arc melter. Raw elements (99.99% purity, purchased from Goodfellow, UK) were weighed out to their target stoichiometries to produce a 30 g billet of each composition. The final microstructure achieved was close to equiatomic (±2 at.%) as measured by EDX. Alloys were melted for approximately 5 min and subsequently flipped and remelted to ensure a homogeneous microstructure throughout. Microstructural characterisation of the as-cast material was carried out using backscatter electron imaging (BSE), energy-dispersive X-ray spectroscopy (EDX) and electron backscatter diffraction (EBSD). A Zeiss Merlin field emission gun scanning electron microscope (FEG-SEM), equipped with an Oxford Instruments X-max 150 EDX detector and a Bruker Quantax EBSD system, was used. The crystal structure was identified using X-ray diffraction (XRD) on a Panalytical Empyrean operated at 40 kV and 40 mA with a Cu Kα source.
Heavy ion implantation of a section of each alloy, and of a control sample of pure vanadium, was performed at the Surrey National Ion Beam Centre, UK, using a 2 MeV Tandem accelerator. Vanadium ion implantation was carried out at 500 °C using 2 MeV V⁺ ions to a fluence of 2.26 × 10¹⁵ ions/cm². Samples were held at 500 °C for 18 h during the implantation. SRIM (stopping range of ions in matter) software was used to convert the fluence into a displacement-per-atom (dpa) value. The 'Ion distribution and quick calculation of damage' method was used to obtain a damage profile for 2 MeV vanadium ions in TiVNbTa, yielding a peak damage of 3.6 dpa approximately 700 nm below the surface. Nanoindentation was carried out using the continuous stiffness method (CSM) [33] on an Agilent (formerly Keysight, formerly MTS) G200 nanoindenter fitted with a Berkovich diamond tip. An array of 25 indentations was made to a maximum depth of 1500 nm with a strain rate target of 0.05 s⁻¹. The CSM amplitude was 2 nm and the frequency 45 Hz. Indentation hardness and modulus were measured in both the unirradiated and irradiated portion of each sample to assess the hardening due to irradiation damage and to ensure that the effect of thermal annealing was accounted for.
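For readers unfamiliar with the fluence-to-dpa step, the sketch below shows one common way to convert a SRIM-style damage profile (vacancies per ion per Ångström) into dpa. The damage-rate values and the atomic density are invented placeholders, chosen only to give a peak of roughly the reported magnitude; they are not the actual SRIM output for this alloy.

```python
# Hedged sketch of the fluence-to-dpa conversion; the profile values are invented.
import numpy as np

fluence = 2.26e15            # ions/cm^2 (from the experiment)
atomic_density = 6.0e22      # atoms/cm^3, assumed value for the alloy

depth_A      = np.array([0, 2000, 4000, 6000, 7000, 8000, 10000])        # depth in Angstrom
vac_per_ionA = np.array([0.15, 0.35, 0.60, 0.85, 0.95, 0.50, 0.0])       # vacancies/(ion*A), invented

# dpa(x) = damage rate [vac/(ion*A)] * 1e8 [A/cm] * fluence [ions/cm^2] / N [atoms/cm^3]
dpa = vac_per_ionA * 1e8 * fluence / atomic_density

for d, v in zip(depth_A, dpa):
    print(f"{d/10:7.0f} nm : {v:5.2f} dpa")
print(f"peak damage ~ {dpa.max():.1f} dpa at {depth_A[dpa.argmax()]/10:.0f} nm")
```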
Representative backscatter electron micrographs and EDX composition maps of each alloy are given in Fig. 1. The microstructure can vary significantly within an arc-melted billet due to the lack of a controlled cooling rate; hence, in this study, care was taken to ensure that all characterisation was carried out in a consistent location, close to the centre of the 30 g billets. The TiVNbTa consists of a typical as-cast dendritic microstructure due to non-equilibrium solidification. Although dendritic structures can be observed in single-element metals, it is possible that for these multicomponent alloys the microstructure is triggered by the different solidification temperatures of each element (see Table 1). EDX maps show qualitatively that the dendrite arms ('brighter' contrast regions in Fig. 1(a)) are enriched in Ta and depleted in Ti and V; Nb is more uniformly distributed within the alloy matrix, although there is an indication from the EDX maps that the Nb favours the interdendritic, Ti-V-rich regions. The TiVZrTa matrix also has a dendritic microstructure in which the arms are enriched in Ta (depleted in Zr, Ti and V). The majority of the interdendritic region is enriched in Zr, Ti and V but depleted in Ta, with the exception of a few V-rich regions. Uniformly distributed in the matrix are small precipitates, identifiable as the 'dark' spots in the SEM micrograph due to their low average atomic number (see Fig. 1(b)); these are enriched in Zr and depleted in V and Ta. The third alloy, TiVCrTa, consists of a matrix with a dendritic microstructure. The dendrite arms, again, are enriched in Ta and depleted in Ti and Cr; V is homogeneously distributed in the matrix. A fine distribution of Ti- and Cr-rich precipitates is observed in the interdendritic region (see dark spots in Fig. 1(c)). Fig. 2 shows the indexed XRD spectra for each alloy, which indicate that all three alloys have a majority, disordered BCC phase.
Table 1. Atomic radius (r), lattice constant (a), melting temperature (Tm), thermal neutron absorption cross-section (σA), and approximate time (years) for the contact dose rate to reach the 'hands on' level (2 × 10⁻⁵ Sv/h) of the pure Ti, V, Cr, Zr, Nb and Ta elements after 5 years of exposure in a 3.6 GW fusion power reactor [30].
EBSD patterns from each region of the microstructures were collected and the band contrast maps are given in Fig. 3. The EBSPs from the as-cast TiVNbTa were indexed as a single-phase BCC microstructure with an approximate grain size of d ≈ 200 µm; a lattice parameter a = 3.239 Å was obtained from the XRD. In the TiVZrTa, the indexed XRD spectrum shows three distinct BCC phases with lattice parameters a = 3.155 Å, a = 3.274 Å and a = 3.470 Å. EBSD is unable to differentiate three individual BCC phases in the microstructure; however, it is clear from the band contrast map, Fig. 3 (middle), that three distinct regions do exist: the matrix region containing the dendritic structure, the precipitates shown as bright white in the band contrast map due to the high-intensity signal obtained from the EBSPs, and finally a region within the interdendritic areas where the pattern quality was poor. The average grain size of the matrix region is d ≈ 50 µm and the precipitates are approximately 7–10 µm in diameter. The TiVCrTa has the largest grain size, d ≈ 600 µm.
The matrix is predominantly made up of a disordered BCC microstructure (a = 3.11 Å from XRD) with a fine distribution of C15, Fd-3m Laves phase precipitates (a = 7.048 Å from XRD) forming in the interdendritic regions and along the grain boundaries.
These second-phase precipitates are approximately 2 µm in size and have been highlighted in red in Fig. 3 (right). Table 2 summarises the approximate chemistry (at.%) and the assigned crystallographic structure of the phases present in each alloy system. Nanoindentation hardness and modulus as a function of displacement are given in Fig. 4(a) and Fig. 4(b), respectively, for the unirradiated, as-cast HEAs. The TiVZrTa and TiVCrTa have hardness values of 7.41 GPa and 6.83 GPa, respectively, both of which are harder than the base alloy TiVNbTa, which has a hardness of 5.86 GPa. All three HEAs offer an increased indentation hardness relative to conventional nuclear materials, such as 316SS (2.2 GPa) and T91 (3.1 GPa) [34,35]. The larger scatter in the indentation data from the TiVZrTa can be attributed to the minor, BCC2, Zr-rich phase/precipitates observed for this alloy, which had an increased hardness compared to the matrix. Fig. 4(c) gives the irradiated hardness data for the HEAs along with the irradiation damage profile calculated from SRIM for the implantation. The shape of the hardness curve (including an indentation-size-related increase at shallow depths) is of the same form as that of the unirradiated hardness, indicating no obvious influence from the damaged layer as the plastic zone extends into the unirradiated material beyond the Bragg peak. Note that this damage profile was calculated for the TiVNbTa composition; however, there were negligible differences between the damage profiles for the other compositions. The peak damage occurs at a depth of approximately 700 nm below the surface and no damage is expected beyond 1 µm of depth. The irradiation-induced hardening, ΔH, is measured at an indentation depth of 300 nm to ensure that the plastically deformed zone is constrained within the damaged layer and thus meaningful values of hardness are extracted and compared. Additionally, at a depth of 300 nm the modulus data are independent of depth, indicating that the area function calibration was able to capture the tip geometry well at this depth, with negligible surface and substrate effects (see Fig. 4(b)). The hardness of the irradiated and unirradiated material at an indentation depth of 300 nm is plotted in Fig. 4(d) along with ΔH for each alloy.
The pure vanadium control sample has an irradiation-induced hardening of 1.19 GPa (37%), confirming that the ion implantation was carried out successfully. The TiVNbTa shows an irradiation hardening of 0.66 GPa (8%), which suggests some irradiation-induced damage accumulation in this single-phase alloy, although significantly less than observed in the single-element control sample. Irradiation-induced hardness changes in the TiVZrTa and TiVCrTa are negligible; a two-sample t-test (implemented in Matlab) indicates that in these samples the two sets of hardness values obtained before and after irradiation come from distributions that cannot be distinguished at the 5% probability level. Hence, no measurable irradiation hardening can be identified.
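As a hedged illustration of the significance test mentioned above (the authors implemented it in Matlab, not in the code below), the following sketch applies a two-sample t-test to invented hardness values at 300 nm depth; these are not the authors' data.

```python
# Two-sample t-test on hypothetical indentation hardness (GPa) before/after irradiation.
import numpy as np
from scipy import stats

h_unirr = np.array([7.3, 7.5, 7.6, 7.2, 7.4, 7.5, 7.3, 7.6])  # invented values
h_irr   = np.array([7.4, 7.6, 7.3, 7.5, 7.4, 7.7, 7.2, 7.5])  # invented values

t_stat, p_value = stats.ttest_ind(h_unirr, h_irr)  # classic two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No measurable irradiation hardening at the 5% level.")
```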
The new alloys exhibit a majority disordered BCC phase, similar to TiVNbTa, with elemental microsegregation formed upon solidification due to the different solidification temperatures of the alloying elements (dendritic microstructure). In all three alloys, Ta-rich dendrite arms are initially formed due to the high Tm. It has been shown for the TiVNbTa alloy that this chemical segregation is alleviated through homogenisation at 1200 °C for 72 h [17]. A theoretical phase prediction carried out using CALPHAD on the TiVNbTa revealed that the microstructure is only single phase down to a temperature of approximately 525 °C, below which a second BCC phase starts to precipitate. Multiple BCC phases have been identified theoretically and experimentally for refractory HEAs based on similar compositions [28]. The TiVZrTa has a more complex microstructure and XRD reveals three distinct BCC phases. From the chemical analysis, Zr-rich precipitates with a BCC crystal structure are identified that have a significantly stronger EBSD signal, indicating a second minor BCC phase. It is also possible that a V-rich third phase exists in the interdendritic region. The Zr-Ta binary phase diagram shows that these elements form two BCC phases at high temperature [36]. It appears that the relatively high cooling rates cause these elements to remain in multiple BCC phases, rather than transforming into the low-temperature BCC + HCP microstructure. The crystal structure and lattice parameters assigned to each region in the TiVZrTa microstructure are based on the atomic radius of the element with the highest concentration within that region, i.e. the BCC phase with the largest lattice parameter has been assigned to the Ta-rich phase in the microstructure. TiVCrTa forms an ordered intermetallic phase upon casting that is finely distributed in the major BCC matrix and enriched in Ti and Cr. A C15 Laves phase, Ti2Cr, is present in the Ti-Cr binary phase diagram and is favoured over the competing solid-solution phase. However, the observed C15 phase in the TiVCrTa consists of (approximately, in at.%) 41Ti, 22V, 25Cr, 12Ta; hence some of the Ti could be substituted with Ta and some of the Cr replaced with V. The atomic radius of Cr is small relative to those of the other alloying elements in TiVCrTa, which may also favour the formation of the ordered Laves phase in this alloy. More work is needed to assess whether the nucleation and growth of the C15 phase favours the formation of fine precipitates that are in thermodynamic equilibrium with the BCC matrix phase. The measured indentation hardness and modulus of the new alloys are comparable to those of the TiVNbTa, indicating that these new alloys will exhibit comparable bulk mechanical properties. The non-homogeneous microstructure of the TiVZrTa and the size of the precipitates relative to the scale of nanoindentation testing lead to larger experimental scatter in the hardness values; it was identified that the Zr-rich BCC2 precipitates have an increased hardness compared to the matrix. As a consequence, this has the potential to reduce the overall ductility of this alloy; however, macro-scale mechanical testing is required to assess this. The fine dispersion of C15 precipitates observed in the TiVCrTa provides a strengthening mechanism, increasing the hardness relative to the TiVNbTa. It is appropriate to assume that the additional interphases induced by the second phases formed in the new alloys act as defect sinks for the irradiation damage, leading to increased irradiation resistance in these alloys.
Fig. 2. XRD spectra for the three alloys; peaks labelled 'putty' were from the medium used to hold the samples securely in place.
Table 2. Phases present in each alloy system with respect to the maps shown in Fig. 3. Chemical composition of each phase in at.% with standard deviation, averaged from 20 EDX spectra in EDX Quant maps (Oxford Instruments, AZtec software). Crystal structure and lattice parameter attributed to each phase, identified using a combination of EBSD and XRD.
This work presents three refractory HEAs, two of which consist of elements with low activation and low neutron absorption cross-sections. The microstructure of these alloys consists of a majority disordered BCC phase with minor secondary phases. The two low-activation alloys offer mechanical properties (indentation hardness and modulus) comparable to those of the well-characterised TiVNbTa. Moreover, the new alloys presented in this work offer increased radiation resistance in terms of irradiation-induced hardening. Further work to assess the stability of these phases after homogenisation is needed, in addition to a full macro-scale mechanical property characterisation.
| 4,530.4 | 2019-09-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Expression of Animal Anti-Apoptotic Gene Ced-9 Enhances Tolerance during Glycine max L.–Bradyrhizobium japonicum Interaction under Saline Stress but Reduces Nodule Formation
The mechanisms by which the expression of animal cell death suppressors in economically important plants confers enhanced stress tolerance are not fully understood. In the present work, the effect of expressing the animal anti-apoptotic gene Ced-9 in soybean hairy roots was evaluated under stress conditions that induce death of root hairs and hairy roots, namely i) Bradyrhizobium japonicum inoculation in the presence of 50 mM NaCl, and ii) severe salt stress (150 mM NaCl), for 30 min and 3 h, respectively. We determined that the root hair death induced by inoculation in the presence of 50 mM NaCl showed the characteristics of an ordered process, with increased ROS generation, MDA and ATP levels, whereas the cell death induced by the 150 mM NaCl treatment showed non-ordered or necrotic-like characteristics. The expression of Ced-9 inhibited, or at least delayed, root hair death under these treatments. Hairy roots expressing Ced-9 showed better homeostasis maintenance, preventing potassium release, increasing ATP levels and controlling oxidative damage by avoiding the increase in reactive oxygen species production. Even though our results demonstrate a positive effect of animal cell death suppressors on plant cell ionic and redox homeostasis under cell death-inducing conditions, their expression, contrary to expectations, drastically inhibited nodule formation even under control conditions.
Introduction
Programmed cell death (PCD) is a genetically regulated process of cellular suicide and is well known to play a fundamental role in a wide variety of developmental and physiological functions in animals, plants, and fungi [1][2][3]. A key feature of PCD is the requirement of energy for the control and execution of death in an orderly manner [4,5]. In plants, PCD is essential for cell homeostasis and specialization, playing an important role in plant development and adaptive responses to different stress conditions such as salinity, cold stress, hypoxia and pathogen attack [6][7][8].
In metazoans, from humans to Caenorhabditis elegans, the central regulators of PCD are well characterized and conserved, involving pro- and anti-apoptotic proteins such as APAF-1/CED-4 and BCL-2/CED-9, and the executing protein family caspases/CED-3 [9,10]. Interestingly, although these regulators are absent from the genomes of plants and yeast, the effects of animal pro- and anti-apoptotic proteins have been studied in transgenic plants [11][12][13][14][15][16][17][18]. According to the localization of these heterologous proteins in plant cells, it has been proposed that cell death suppressors contribute to maintaining organelle homeostasis, preventing the generation/release of death signals, similar to what occurs in animals [11,13].
However, there are limited data regarding the mechanisms through which animal cell death suppressors modulate plant physiology.
Remarkably, the expression of PCD suppressors in plants results in agronomically beneficial features such as improved tolerance to a variety of biotic and abiotic stresses [11][12][13][14][15][16]. Increasing biological nitrogen fixation in legumes is a main objective for agriculture, and different strategies have been explored towards this objective. During natural or stress-induced senescence, which involves cell death processes, biological nitrogen fixation metabolism is impaired, affecting both the quality and quantity of legume yields [19,20]. Therefore, the development of strategies to increase tolerance to a variety of stresses is highly relevant. To the best of our knowledge, the effect of animal PCD suppressors has not been tested in legumes.
Soybean (Glycine max L.) culture is strongly affected by drought and salinity [21,22]. Likewise, the soybean-rhizobia symbiotic interaction is severely affected by stress conditions [19,20]. Our group has reported that salt stress, but not osmotic stress, negatively affects the early stages of the Glycine max L.-Bradyrhizobium japonicum interaction, such as root hair deformation and viability [23], and how these short-term treatments affect nodule formation [24]. In this context, two root hair death-inducing conditions were identified: sublethal salt stress combined with B. japonicum inoculation (inoculated 50 mM NaCl) and severe salt stress (150 mM NaCl). Interestingly, in root hairs under sublethal salt stress conditions (50 mM NaCl), the symbiotic interaction with B. japonicum induced a sustained increase in intracellular reactive oxygen species (ROS) levels, in a pattern similar to that observed in response to pathogenic elicitors [23,25]. In contrast, under 150 mM NaCl, intracellular ROS production decreased from the beginning of the treatment, independently of the presence of the symbiont [23].
The aim of the present work was to evaluate whether the expression of Ced-9 from Caenorhabditis elegans could improve the stress tolerance of the legume-rhizobia symbiotic interaction and the biological nitrogen fixation process. Transgenic soybean hairy roots expressing Ced-9, obtained with Agrobacterium rhizogenes [26,27], were subjected to the cell death-inducing conditions described above, in order to evaluate root cell viability, the associated redox and ionic parameters, and nodule development.
Root hairs death-inducing stress conditions
Two-day-old soybean seedlings were subjected for 30 min to previously reported root hair death-inducing conditions. Moderate salt stress (50 mM NaCl) combined with B. japonicum (inoculated 50 mM NaCl) and severe salt stress (150 mM NaCl) were the cell death conditions for root hairs [23]. However, under these root hair death-inducing conditions the roots remained alive, as observed by Evans Blue staining and DNA degradation analysis (Fig. 1A, 1B and 1C). Root hair DNA degradation was observed while the roots maintained chromatin integrity (Fig. 1C).
Malondialdehyde (MDA) content, an intermediary metabolite of lipid peroxidation used as an oxidative stress marker, was measured in root hairs. The MDA level increased in the inoculated and inoculated 50 mM NaCl treatments, whereas no significant differences were observed for salt stress alone (50 mM NaCl and 150 mM NaCl) with respect to the control (Fig. 2A). Likewise, to discriminate ordered from non-ordered death processes, the levels of adenosine-5′-triphosphate (ATP) were quantified [4,28] in root hairs subjected to the death-inducing conditions. No significant changes were observed at 50 mM NaCl (Fig. 2B). Under the inoculated 50 mM NaCl treatment, root hairs had increased ATP levels, as they did under the inoculated control treatment (Fig. 2B). Conversely, under the 150 mM NaCl treatment, root hairs showed a slight, but not significant, decrease in ATP levels with respect to the control (Fig. 2B).
CED-9 expression ameliorates the effects of root hair death-inducing conditions
In order to evaluate the effect of the animal cell death suppressor, wild-type and CED-9 transgenic hairy roots were obtained by infection with the A. rhizogenes K599 strain. It should be pointed out that the differentiation and development of hairy roots were conducted without antibiotic selection; thus the resulting K599-CED9 composite plants contained both transgenic and wild-type hairy roots (Fig. S1A). The expression level of the transgene in hairy roots was tested by qPCR using Ced-9-specific primers (Fig. S1B), and the identity of the qPCR product was verified by nucleotide sequencing. The K599-CED9 hairy roots were shorter than the wild-type hairy roots obtained by infection with untransformed A. rhizogenes (K599-empty) (Fig. S2A).
Root hair nuclear morphology was evaluated in K599-empty and K599-CED9 hairy roots incubated for 30 min under control or root hair death-inducing conditions (Fig. 3; Fig. S3). The nuclear morphology of root hairs was evaluated by acridine orange and ethidium bromide (AO/EB) staining and observed by confocal microscopy. Acridine orange is a dual-fluorescence dye that interacts with DNA and RNA and also serves as a pH indicator. Ethidium bromide binds to DNA by intercalating between the bases, but it is membrane impermeant and is therefore generally excluded from viable cells [28]. Hence, roots were incubated for 30 min with AO/EtBr to allow entry of the probes. The nuclei of root hairs in the control treatments exhibited an orthodox conformation, with similar size and shape between K599-empty and K599-CED9 hairy roots (Fig. S3). Under root hair death-inducing conditions, root hairs of K599-empty hairy roots showed significantly more nuclear fragmentation than root hairs of K599-CED9 hairy roots (Fig. 3). Furthermore, an increase in AO staining was observed in K599-empty hairy roots, particularly under the 150 mM NaCl condition (Fig. 3).
CED-9 effects on root ion and redox homeostasis under hairy roots death-inducing conditions
Whole hairy roots were used to perform the biochemical determinations due to the low yield of root hairs obtainable from hairy roots. However, as previously observed, the root hair death-inducing conditions did not induce root death. The treatment time was therefore extended to 3 h, at which point positive Evans Blue staining (Fig. S2A), but not DNA degradation (Fig. S2B), could be observed. This result indicates an early stage of root cell death. It was also noted that K599-CED9 hairy roots showed more membrane selectivity than K599-empty hairy roots (Fig. S2A).
K599-empty hairy roots showed a dramatic decrease in potassium levels under the 150 mM NaCl treatment, whereas in K599-CED9 hairy roots this decrease was much less pronounced. Nevertheless, no significant differences in potassium content were observed between the control and inoculated 50 mM NaCl treatments in either the transgenic or the wild-type genotype (Fig. 4A). Sodium content in hairy roots increased in a dose-dependent manner in both K599-empty and K599-CED9 hairy roots (Fig. 4B), and no significant differences in sodium concentration were observed between K599-empty and K599-CED9 in any of the treatments performed (Fig. 4B). Likewise, calcium concentration decreased markedly in K599-empty hairy roots only under the 150 mM NaCl treatment (Fig. 4C), whereas the calcium content in K599-CED9 hairy roots did not change under the stress treatments with respect to the control (Fig. 4C).
Figure 3. Nuclear morphology of root hairs under control and death-inducing conditions: inoculated 50 mM NaCl (C and D) and 150 mM NaCl (E and F). Arrows indicate nuclear fragmentation. Images were taken with a Zeiss confocal microscope; excitation was performed at 488 nm, with emission filters BP 500-530 and BP 565-615 for AO and EtBr, respectively (image overlay). doi:10.1371/journal.pone.0101747.g003
In order to characterize changes in the redox state during hairy root death-inducing conditions, the levels of MDA, hydrogen peroxide (H2O2), antioxidant capacity, the ascorbic acid ratio (reduced form/total) and ATP were quantified in K599-empty and K599-CED9 hairy roots (Fig. 5 and Fig. 6). MDA content increased in K599-empty hairy roots under stress conditions (Fig. 5A). Interestingly, K599-CED9 hairy roots did not show significant differences under any stress treatment with respect to the control (Fig. 5A). Accordingly, H2O2 levels increased in K599-empty hairy roots under the inoculated 50 mM NaCl and 150 mM NaCl treatments, whereas K599-CED9 hairy roots did not show significant differences in H2O2 content between control and stress treatments (Fig. 5B). Moreover, differences in H2O2 content between K599-empty and K599-CED9 were only observed under the 150 mM NaCl treatment (Fig. 5B). The Ferric Reducing Ability of Plasma (FRAP) assay showed an increase in antioxidant capacity in K599-empty hairy roots under 150 mM NaCl (Fig. 5C), although the ascorbic acid ratio was reduced (Fig. 5D). Under root death-inducing conditions, K599-CED9 hairy roots did not show differences in either antioxidant capacity or ascorbic acid ratio with respect to the control (Fig. 5C and 5D).
ATP levels increased in K599-empty hairy roots inoculated in the presence of 50 mM NaCl, while no significant differences were observed under 150 mM NaCl (Fig. 6), similar to the results observed in root hairs (Fig. 2). In contrast, K599-CED9 hairy roots had increased ATP content under all stress conditions, including the 150 mM NaCl treatment, showing significant differences with respect to K599-empty hairy roots (Fig. 6).
Ced-9 expression inhibits nodule formation in hairy roots
Strikingly, K599-CED9 composite plants showed a 60% reduction in the number of nodules compared to K599-empty composite plants under control conditions (Fig. 7A and 7B). As previously mentioned, each soybean K599-CED9 composite plant developed both transgenic K599-CED9 and non-transgenic hairy roots (Fig. S1). In the nodulation assay, K599-CED9 composite plants had hairy roots both with and without nodules. Hairy roots from K599-CED9 composite plants were separated into nodulated and non-nodulated groups, and the presence of the Ced-9 transgene was examined by PCR. This experiment clearly showed that nodulation was dramatically inhibited in K599-CED9 hairy roots, while non-transgenic hairy roots on the same composite plant were nodulated (Fig. 7C).
Discussion
The soybean-rhizobia symbiotic interaction is severely affected by salt stress, with a reduction in the number and weight of nodules in plants salinized with 26 mM NaCl [19,20]. Our group has studied the effects of salt stress conditions on the early events of the Glycine max L.-B. japonicum symbiotic interaction, where previously undescribed root hair death-inducing conditions were identified: sublethal salt stress combined with B. japonicum (inoculated 50 mM NaCl) and severe salt stress (150 mM NaCl) (Figure 1). During the early events of the symbiotic interaction, a fast and transient increase in intracellular ROS generation takes place in root hairs [23,25], whereas a sustained ROS production was reported when the symbiotic interaction occurred under 50 mM NaCl [23]. A similar root hair ROS kinetic was observed in response to pathogenic elicitors [25]. In contrast, under 150 mM NaCl conditions, intracellular ROS production diminished from the beginning of the treatment [23]. Hence, the initial hypothesis was that the expression of anti-apoptotic proteins from animals, which have no identified homologues in plants, modulates redox homeostasis and delays senescence and death processes of the plant-symbiont system in legumes under stress conditions.
Figure 4. Ced-9 affects ion relationships during hairy root death-inducing stress conditions. K599-empty (dark bars) and K599-CED9 (grey bars) hairy roots were subjected for 3 h to control, inoculated 50 mM NaCl, and 150 mM NaCl conditions, and then potassium, sodium and calcium ions were quantified by high-pressure liquid chromatography. Data are means ± SE of five independent hairy roots. Different Latin and Greek letters indicate significant differences between treatments in K599-empty and K599-CED9 hairy roots, respectively (p<0.05, DGC test). Asterisks indicate significant differences between hairy root genotypes (p<0.05, DGC test). doi:10.1371/journal.pone.0101747.g004
First of all, in this work we characterized these two root hair death-inducing conditions. There are two main ways to execute cell death: ordered (programmed-like) and non-ordered (necrosis). The ATP level is a useful parameter to distinguish ordered from non-ordered cell death in mammalian cells [4,5]. During apoptosis, ATP has to remain high to allow the formation of the apoptosome [29,30], while ATP depletion has been observed during necrosis [31][32][33]. However, this has not been clearly observed in plants [34][35][36][37]. Casolo et al. [38] found that in soybean cell cultures a low H2O2 concentration induces PCD, which is accompanied by a slight decrease in ATP. In addition, ATP depletion after PCD induction has also been reported in A. thaliana [39] and tobacco BY-2 cells [40]. It has been reported that environmental stimuli can produce different types of cell death depending on the stimulus intensity and the ATP availability within the cell [41]. Here, we have determined that the root hair death induced by inoculation in the presence of 50 mM NaCl showed the characteristics of an ordered process, with increased ROS generation, MDA and ATP levels, whereas the cell death induced by the 150 mM NaCl treatment showed non-ordered or necrotic-like characteristics, such as decreases in ROS production and ATP levels (Figure 2).
Furthermore, the differences observed in Evans Blue staining between these death-inducing treatments (Figure 1A) also indicate differences in stress intensity, which would lead to the execution of ordered-death or necrosis-like processes. Moreover, the increased MDA and ATP levels observed in control inoculated root hairs (Figure 2) would be due to the increased metabolic activity during the early responses of the symbiotic interaction [25,42,43].
The expression of the cell death suppressor Ced-9 from C. elegans inhibited, or at least delayed, cell death under root hair death-inducing conditions (Figure 3). Furthermore, an increase in AO staining was observed, especially under the 150 mM NaCl condition, which would indicate cellular acidification. Interestingly, the expression of Ced-9 affected both ordered and necrotic-like death events in root hairs (Figure 3), as has also been documented in animal systems [44][45][46], suggesting a similar level of functionality between the components of the cell death mechanisms in plants and animals. However, few works have evaluated homeostatic and physiological parameters in transgenic plants in order to understand the effects of Ced-9 expression. Shabala and coworkers [16] demonstrated that the expression of Ced-9 delays the onset of leaf senescence symptoms under salt and oxidative stress conditions by altering the flow patterns of K+ and H+ across the plasma membrane. Consistent with Shabala's results [16], K599-CED9 hairy roots showed altered potassium content with respect to K599-empty hairy roots only under the 150 mM NaCl condition, whereas no significant differences were observed under inoculated 50 mM NaCl between transgenic and wild-type hairy roots. This result indicates different causes of death and, therefore, different mechanisms of action of CED-9 under these stress conditions (Figure 4B). Moreover, the effects of CED-9 on K+ efflux may be due to sustained levels of Ca2+, which may subsequently affect the opening and closing balance of non-selective cation channels (NSCC) [47,48] (Figure 4C), but Ca2+ subcellular localization approaches are required to verify this hypothesis. However, Ced-9 expression had no effect on sodium influx, which increased in a dose-dependent manner, similar to that observed in K599-empty hairy roots (Figure 4B).
It has been shown that under saline stress ROS generation is induced [49][50][51], leading to oxidative damage [52,53]. In this regard, it has been suggested that anti-apoptotic genes from animals would suppress ROS generation or promote its removal in plants [7,14]. However, to the best of our knowledge, there are no redox studies to support this hypothesis, since these conclusions were based on visual observations such as a lack of discoloration in transgenic leaves under stress conditions [13,14] and the chlorophyll content of salt-stressed leaves [16]. In this work, we report redox effects of Ced-9 expression in soybean hairy roots under stress conditions (Figures 5 and 6). The increase in antioxidant capacity in K599-empty hairy roots (Figure 5C) could indicate a response to the oxidative stress induced by hairy root death-inducing conditions (Figure 5A and 5B), while no changes were observed between treatments in K599-CED9 hairy roots (Figure 5A, 5B, 5C and 5D). These results demonstrate that the expression of Ced-9 prevents ROS generation in hairy roots under stress conditions. On the other hand, the mammalian homologue of CED-9 may regulate metabolic efficiency in neurons through interaction with the mitochondrial F1F0-ATP synthase in the inner membrane [54]. Likewise, Qiao et al. [13] suggested a possible contribution of Bcl-xL and Ced-9 to improved mitochondrial membrane potential when expressed in plants. In this regard, this work demonstrated that K599-CED9 hairy roots had improved metabolism, assessed as ATP content (Figure 6), particularly under severe salt conditions. Strikingly, despite improved metabolism and tolerance to death-inducing stress conditions, K599-CED9 hairy roots showed a significant inhibition of their nodulation capacity (Figure 7). Moreover, given that the cell death process is an early control of the number of nodules [55,56], we expected that the expression of Ced-9 could impact positively on the nodulation process. Taking into account that one of the main actions of CED-9 is the control of ionic fluxes, it is possible that its expression in legumes could adversely affect the ion flux signatures that occur during rhizobium perception [57,58]. Likewise, it has been reported in animals that CED-9 interacts with proteins involved in vesicular trafficking and autophagy [59], which in turn participate in organogenesis events [60][61][62]. In this regard, we hypothesize, and have relevant unpublished data suggesting, that CED-9 expression, which has no identified homologues in plants, could affect nodule organogenesis by interacting with vesicular trafficking and autophagy proteins conserved in plants.
In summary, in this work we characterized the effects of Ced-9 expression in soybean hairy roots under different, ordered-like and necrosis-like, root hair and root death-inducing conditions. In this respect, we demonstrated that part of the improved tolerance conferred by Ced-9 expression is based on the capacity to maintain ionic and redox homeostasis. However, contrary to expectations, Ced-9 expression drastically inhibited nodule formation, and consequently the expression of animal cell death suppressors does not seem to be an adequate strategy to increase the nitrogen content derived from biological fixation.
Bacterial strain and plant material
Soybean seeds (Glycine max L. DM4800) were disinfected with 5% (v/v) sodium hypochlorite for 5 min and germinated in the dark for 48 h on filter paper moistened with distilled water. The seeds were incubated at 28 and 37 °C during the first and second 24 h periods, respectively, to promote the growth of roots and root hairs. Bradyrhizobium japonicum USDA 138 was cultured in yeast extract mannitol (YEM) medium [63] at 28 °C with constant agitation for 5 days (3×10⁹ cells mL⁻¹). The bacteria were washed and resuspended in sterile water.
Binary vector and A. rhizogenes strains
The binary vector pBI2113-Ced-9 carries an efficient promoter cassette overexpressing the Ced-9 gene (GenBank accession number L26545), a Caenorhabditis elegans homolog of Bcl-xL, which was kindly provided by Dr. Yuko Ohashi [12,64]. The cucumopine-type A. rhizogenes strain K599 was used to infect the cotyledon axis regions. A. rhizogenes K599 carrying pBI2113-Ced-9 was grown in Luria-Bertani (LB) medium containing kanamycin (Km) at 50 µg mL⁻¹. To obtain fresh cells, A. rhizogenes K599 was grown on LB plates containing Km and incubated for 48 h at 28 °C. Cells were collected from these plates and diluted into 1 mL of sterile water. For control hairy roots (K599-empty), a fresh culture of A. rhizogenes K599 lacking the binary vector was grown in LB medium without antibiotics.
A. rhizogenes-mediated root transformation
The induction of A. rhizogenes-mediated root transformation followed a protocol modified from Estrada-Navarrete [27]. Briefly, after germination, sprouts were inoculated by injection directly into the cotyledonary nodes with a syringe and transferred to a hydroponic system supplemented with 8 mM KNO3, placed within a larger tube serving as a moist chamber. Typically, soybean plants infected by A. rhizogenes started to show tumors approximately 5 days after inoculation. Twelve days after A. rhizogenes infection, plantlets exhibited numerous induced hairy roots per wound site. The primary root was removed from the plant by cutting approximately 1 cm below the cotyledon nodes, and the composite plants were placed in plastic trays with B&D solution with or without KNO3, depending on the treatment to be performed.
Root hairs death-inducing conditions
After germination, sprouts were incubated for 30 min in aerated tubes containing sterile water (control), B. japonicum (inoculated), 50 mM NaCl, B. japonicum in the presence of 50 mM NaCl, or 150 mM NaCl. Root hairs from roots subjected to the different stress treatments were extracted by peeling the root zone containing young root hairs, which was immediately frozen in liquid nitrogen. Peeling was performed under a magnifying glass by making an incision in the root with a scalpel and pulling the epidermal tissue containing the root hairs with a fine-tipped clamp. Root hairs from approximately 200 roots generated sufficient material for one sample.
Hairy roots death-inducing conditions
Once the primary root was removed from the plant by cutting below the cotyledon nodes, the composite plants were placed in aerated plastic trays with B&D solution supplemented with 8 mM KNO₃ and incubated in a growth chamber under a 16 h photoperiod (350 µmol m⁻² s⁻¹) at 26 ± 2 °C for two weeks. Hairy roots were subjected to stress treatments for 3 h and then immediately frozen in liquid nitrogen.
Cell death evaluations
Cell death was evaluated by Evans Blue staining and DNA degradation analysis. Evans Blue is a dye used to assess cell viability [13] owing to its inability to permeate intact cell membranes. When cells lose membrane integrity, the dye diffuses into the cell and can be visualized by conventional microscopy. The roots were incubated for 10 min with 0.05% (w/v) Evans Blue in water or in each of the NaCl solutions assayed.
Genomic DNA was isolated using the CTAB method [66]. In brief, the samples were homogenized to a fine powder using a mortar and pestle under liquid nitrogen and thawed in CTAB extraction buffer (2% w/v CTAB, 1.4 M NaCl, 20 mM EDTA, 100 mM Tris-HCl pH 8.0). RNase A was added and the homogenate was incubated for 30 min at 37 °C. DNA was extracted twice with an equal volume of chloroform:isoamyl alcohol (24:1 v/v) and precipitated with 0.6 volumes of isopropanol. For visualization of DNA degradation, equal amounts of DNA (2 µg) were loaded on a 2% TAE agarose gel and stained with ethidium bromide.
The nuclear morphology of hairy roots was evaluated by acridine orange (AO, 50 µg/mL) and ethidium bromide (EtBr, 50 µg/mL) staining and observed with a Zeiss confocal microscope. AO and EtBr are dyes that intercalate into DNA and fluoresce under UV light. An orange color and the presence of dispersed chromatin in the cytoplasm indicate that the cells have lost nuclear membrane integrity and are in a very late stage of death. Both dyes were excited at 488 nm, and emission was collected at 500-530 nm and 565-615 nm for AO and EtBr, respectively.
MDA and ATP quantification in root hairs and hairy roots
The samples were homogenized using a mortar and pestle under liquid nitrogen and thawed in 3% (v/v) trichloroacetic acid (TCA); centrifugation was then carried out at 13,000 g and 4 °C for 15 min.
MDA levels were quantified according to Heath and Packer [67]. Briefly, 100 µL of sample were mixed with 100 µL of 20% TCA + 0.5% thiobarbituric acid (TBA), incubated at 90 °C for 20 min, and rapidly cooled on ice. The mixture was centrifuged at 13,000 g for 10 min. The absorbance of the supernatant was read at 532 nm and 600 nm.
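As a rough illustration only (not part of the original protocol), the two absorbance readings are typically converted to an MDA concentration with the Beer-Lambert law; the extinction coefficient of 155 mM⁻¹ cm⁻¹ and the two-fold dilution correction in the sketch below are assumptions based on the standard Heath and Packer calculation, not values given in the text.

```python
def mda_nmol_per_ml(a532, a600, extinction_mM=155.0, path_cm=1.0, dilution_factor=2.0):
    """Estimate MDA concentration (nmol/mL) from TBA-assay absorbances.

    The 600 nm reading corrects for non-specific turbidity; the extinction
    coefficient (155 mM^-1 cm^-1) and the 2-fold dilution from mixing equal
    volumes of sample and TCA/TBA reagent are assumptions, not values from
    the text.
    """
    corrected = a532 - a600
    conc_mM = corrected / (extinction_mM * path_cm)  # Beer-Lambert law
    return conc_mM * 1000.0 * dilution_factor        # 1 mM = 1000 nmol/mL

# Example readings from one supernatant
print(round(mda_nmol_per_ml(0.25, 0.03), 2))
```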
The concentration of ATP in each sample was determined with a GloMax luminometer using a bioluminescent detection reagent (ENLITEN rLuciferase/Luciferin; Promega) according to the manufacturer's instructions. The amount of ATP present in each sample was calculated from the measured relative light units using a standard curve spanning the range of relative light units obtained from the samples [68].
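A hedged sketch of the standard-curve step follows; the standard concentrations, RLU values and log-log fit are illustrative assumptions, not the authors' data.

```python
import numpy as np

# Hypothetical ATP standards (pmol) and their measured relative light units (RLU)
standards_pmol = np.array([0.1, 1.0, 10.0, 100.0])
standards_rlu = np.array([2.1e3, 2.0e4, 1.9e5, 2.1e6])

# Luminescence is roughly linear over several orders of magnitude,
# so the standard curve is fitted in log-log space
slope, intercept = np.polyfit(np.log10(standards_rlu), np.log10(standards_pmol), 1)

def rlu_to_atp_pmol(sample_rlu):
    """Interpolate ATP content (pmol) from a sample RLU reading via the standard curve."""
    return 10 ** (slope * np.log10(sample_rlu) + intercept)

print(round(rlu_to_atp_pmol(5.0e4), 3))
```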
Na⁺, K⁺ and Ca²⁺ determination in hairy roots
Ion quantification was performed by high-pressure liquid chromatography (Shimadzu LC2010) with a Shim-pack IC-C3 column and a non-suppressed system. Hairy root segments (50 mg) were placed into 1 mL of 0.1 N nitric acid for 3 days. Samples were passed through 0.22 µm pore size MF Millipore cellulose membrane filters and diluted 6 times with MQ water. Then, 30 µL of each sample were analyzed. The mobile phase was 2.5 mM oxalic acid and the chromatography run time was 20 min at a flow rate of 1.2 mL min⁻¹. Quantitative analysis was done with multicationic standards using LCSolution software.
Fluorometric H 2 O 2 determination in hairy roots
The samples were homogenized in 3% (v/v) trichloroacetic acid (TCA), and centrifugation was carried out at 12,000 g and 4 °C for 15 min. Then, 100 µL of supernatant were mixed with determination buffer (100 mM potassium phosphate pH 7.4, 4.5 U/mL horseradish peroxidase (HRP) and 1 mM p-hydroxyphenylacetic acid). Duplicates were incubated with catalase in order to subtract unspecific fluorescence. Hydrogen peroxide fluorescence was measured with a Shimadzu spectrofluorometer at 371 nm excitation and 414 nm emission [69].
FRAP (Ferric Reducing Ability of Plasma) assay
The samples were homogenized using a mortar and pestle under liquid nitrogen, and 80% ethanol was added. Centrifugation was then carried out at 12,000 g and 4 °C for 10 min. Then, 100 µL of supernatant were mixed with reaction buffer (5 mL of 0.3 M acetate buffer pH 3.6, 0.5 mL of 10 mM TPTZ (2,4,6-tris(2-pyridyl)-s-triazine) diluted in 40 mM HCl, and 0.5 mL of 200 mM FeCl₃) in a microplate on ice. Samples were removed from the ice and read at 600 nm after 20 min. A standard curve with Trolox was used to calculate the FRAP capacity of the samples [70].
Ascorbic acid determination
Samples were prepared for ascorbate analysis by homogenizing the material in 1 mL of 3% trichloroacetic acid. The homogenate was centrifuged at 10,000 g and 4 °C for 15 minutes and the supernatant was collected for ascorbate analysis. Total ascorbate content was determined according to Gillespie and Ainsworth [71] with modifications [72]. The reaction mixture for total ascorbate contained a 50 µL aliquot of the supernatant, 15 µL of 150 mM phosphate buffer (pH 7.4) and 15 µL of 10 mM DTT; samples were incubated at room temperature for 10 min.
After that, the mixture was incubated for 60 min at 37 °C with 15 µL of 0.5% N-ethylmaleimide (NEM), 16.6% orthophosphoric acid (H₃PO₄), 1.33% α,α′-bipyridyl, and 30 µL of 3% FeCl₃. The samples were measured at 525 nm in an MRX II ELISA reader. Reduced ascorbic acid content was determined using the same protocol, except that NEM and DTT were omitted.
Protein content
Protein content was determined spectrophotometrically at 578 nm according to Bradford [73] with bovine serum albumin (BSA) as a standard.
RNA extraction
Samples were homogenized in a cold mortar with TRIzol Reagent (1:10 mg plant tissue:mL reagent), mixed for 1 min and incubated at room temperature for 5 min. Then, 0.2 mL of chloroform per mL of TRIzol Reagent was added and the mixture was incubated at room temperature for 3 min. After incubation, the samples were centrifuged at 12,000 g and 4 °C for 15 min and the aqueous phases were transferred to clean tubes. RNA was precipitated by adding 1 volume of isopropanol, incubated at room temperature for 10 min and centrifuged at 12,000 g and 4 °C for 15 min. The precipitate was washed with 70% ethanol and the samples were centrifuged again at 12,000 g and 4 °C for 15 min. The precipitate was dried and resuspended in DEPC-treated water, and its concentration was quantified using a NanoDrop 3300 spectrometer (Thermo Scientific). Purified RNA was treated with DNase I (Invitrogen) to remove genomic DNA, according to the manufacturer's instructions.
qPCR
DNA-free RNA (1 to 2.5 µg) was used with oligo(dT) for first-strand cDNA synthesis using the Moloney Murine Leukemia Virus Reverse Transcriptase (M-MLV RT, Promega) according to the manufacturer's instructions. For each primer pair, the presence of a unique product of the expected size was checked on ethidium bromide-stained agarose gels after PCR. The absence of contaminating genomic DNA was confirmed in reactions with DNase-treated RNA as template. The qPCR reactions were performed using iQ Universal SYBR Green Supermix (Bio-Rad).
Statistical analyses
Data were analyzed using analysis of variance (ANOVA) followed by the DGC test with InfoStat software [74].
Supporting figure legend (TIF): Root hair nuclear morphology under control and NaCl treatments (including 50 mM NaCl, panels E and F) evaluated by AO and EtBr staining. Images were taken with a Zeiss confocal microscope; excitation was performed simultaneously at 488 nm with emission filters BP 500-530 IR and BP 565-615 IR for AO and EtBr, respectively. AO: acridine orange channel, EtBr: ethidium bromide channel, AO/EtBr: image overlay. | 7,186.2 | 2014-07-22T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Estimating Gross Annual Electricity Demand of Turkey
Electricity demand has become increasingly significant for financial decision makers as economies grow rapidly. In order to achieve sustainable economic growth, a continuous and adequate power supply is crucial. Because electricity cannot be stored economically and its generation must coincide with consumption, forecasting electricity demand accurately is of great importance in order to balance supply and demand. Turkey, an emerging market with one of the most rapid economic growth rates in the world, therefore needs reliable forecasts of gross electricity demand. As is well known, there is a high correlation between the growth rate of gross domestic product (GDP) and electricity demand in developing countries. Therefore, unlike many other forecasting models for electricity demand, a single parameter (GDP in line with purchasing power parity) has been used to estimate the gross annual electricity demand of Turkey in this empirical study. Three different forecasting methods, namely time series, regression and fuzzy logic techniques, have been applied to Turkish electricity demand data and compared according to their average absolute relative errors (AREP). Based on the AREP figures, it can be concluded that the time series model shows a slightly better forecasting performance than the other two methods for estimating the gross annual electricity demand of Turkey with the available data.
Introduction
Due to urbanization and fast economic growth, demand for energy and the necessity of new investments in the energy sector have been rising. As shown in Table 1, the International Energy Agency (IEA) indicates that global energy demand in 2035 may be 51% higher than in 2009. Electricity dependence has increased worldwide over the last century. The share of electricity in global final energy consumption was 9.0% in 1971, 16.1% in 2002 and 17.7% in 2010, and this share is expected to reach 20.2% in 2030 (Yamaçlı, 2010).
In 2011, 239.5 TWh of electricity was consumed in Turkey. At first glance, Turkey still has lower per capita consumption levels compared to OECD or EU countries. Per capita electricity production in Turkey was 2,709 kWh in gross terms in 2011 (World Bank, 2013).
Although Turkey has a lower per capita consumption (one fourth of the IEA average), it is expected to increase as the economy and energy demand grow (Energy Market Regulatory Authority, 2012).
As in many other countries, the demand for electricity, whose share of total energy consumption is rising steadily, is growing in Turkey as well (Table 2). Therefore, in order to achieve a high-quality and continuous electricity supply, short- and long-term electricity forecasting is indispensable for policy makers as well as distribution and transmission companies in Turkey. The results of electricity demand forecasts are also extremely important for making adequate energy investment decisions.
Table 2. Gross electricity production and electricity demand (GWh) in Turkey
Furthermore, electricity demand in Turkey is expected to continue increasing in the coming years. However, this increase in demand is expected to be driven by the country's economic growth rather than by network expansion due to population growth and new settlement areas. In addition to the high correlation between the GDP growth rate and electricity demand, electricity demand growth exceeded GDP growth by 1.3 percentage points on average between 2000 and 2011 in Turkey (Figure 1). Empirical studies on estimating electricity demand are still in progress. Alaa El Shazly analyzed electricity demand using a panel co-integration approach and provided out-of-sample forecasts at the sectoral level. Matteo De Felice, Andrea Alessandri and Paolo M. Ruti performed daily load forecasting for Italy using numerical weather prediction models with the aim of studying the influence of temperature (Felice, Alessandri, & Ruti, 2013).
On the other hand, Roula Inglesi estimated the electricity demand of South Africa using the Engle-Granger co-integration approach and error correction models (Inglesi, 2010).
GDP is commonly defined as the value of all final goods and services produced in a country over a given period of time. In the literature, GDP is regarded as the most important determinant of electricity demand. Some empirical studies show that energy use and GDP are cointegrated in Turkey (Lise & Montfort, 2007).
Preferring GDP over GNP to estimate electricity demand is more appropriate because electricity use depends on the goods and services produced within the country. Unlike GDP, GNP allocates production based on ownership: the market value of all final goods and services produced annually by the residents of a country is called GNP. GDP can be expressed in three different ways, namely at constant prices, at current prices, or in line with purchasing power parity. The GDP values according to purchasing power parity are used in this empirical study.
Determining adequate and necessary information is a common difficulty in developing reliable forecast models. Inadequate information leads to poor forecasting, while useless or redundant information/data makes the model very hard to construct (Bianco, Manca, & Nardini, 2009).
A country's electricity demand or consumption can be related to many parameters, such as imports, exports, population, energy prices or weather conditions. However, in order to construct a simple and practical model, unlike many other models, a single-parameter model is preferred in this empirical study.
A Short Review of Turkish Electricity Market
The Turkish electricity market was opened to competition in 2001 by the electricity market law passed by the Turkish Grand National Assembly, which began a new period in the market. The legal framework and design of the new Turkish electricity market were compatible with those of the European Union. To accelerate the liberalization process, the Turkish government issued a strategy paper in 2004, aiming at speeding up the liberalization of the electricity market as per the provisions of the law. Later, in 2009, the government decided to enact a new strategy paper to accelerate the liberalization process by introducing some measures required for security of supply. A new electricity market law was approved by the Turkish parliament on March 14, 2013 and was published in the official gazette on March 30, 2013. One of the most important changes adopted by the new law is the pre-licensing system: companies wishing to generate electricity have to apply for a pre-license first in order to operate a generation facility in Turkey.
According to the Electricity Market Law and other regulations, any legal entity established under Turkish Commercial Law may take part in the Turkish electricity market by obtaining the appropriate license from the Energy Market Regulatory Authority (EMRA). Each activity requires a different license. Moreover, separate accounts must be kept by license holders for each activity. Tariffs for the distribution service of TEDAS, the transmission service of TEIAS and the electricity selling price of TEDAS are regulated by EMRA according to the law. For the income of TEIAS, a revenue cap is applied.
By law, activities in the market, except for network activities, are open to competition under the supervision of, and regulated by, EMRA. The electricity market is based on bilateral agreements complemented by the balancing and settlement market. The private sector may participate in all segments of the electricity market, except for transmission, by obtaining the relevant licenses from EMRA. Third-party access to the network without discrimination is in place under the supervision of EMRA. The law foresees an independent transmission system operator. According to the law, the ownership, operation and maintenance of investments in the national grid remain in the hands of TEIAS, and TEIAS will remain the sole transmission system operator and asset owner.
Distribution utilities are responsible for distribution network planning, construction and operation, and for acting as the "supplier of last resort". Distribution companies are entitled to engage in retail business and/or retail sale services for consumers, and in generation activities, subject to a separate license and accounting unbundling. Distribution utilities unbundled their retail activities at the end of 2012; distribution and retail sale activities have been legally unbundled as of January 1st, 2013. The new law also stipulates that distribution companies cannot engage in any activity other than distribution in the electricity market.
Distribution companies have to prepare regional demand forecasts and submit them to the transmission system operator, TEIAS. TEIAS is required by law to prepare its transmission planning and generation capacity projection based on these demand forecasts and submit them to EMRA for approval. All customers directly connected to the transmission system, as well as consumers with consumption of more than 4,500 kWh for 2014, are deemed eligible customers. The corresponding theoretical degree of market opening is around 85%.
Turkey attaches high importance to electricity generation from renewable energy sources in order to utilize domestic sources as much as possible and lower import dependency on energy sources. For this purpose, a renewable energy promotion law was enacted in 2005. RES-based power plants are supported by a feed-in tariff. In addition, renewable-based power plants of up to 1 MW are exempted from licensing (Yavuzdemir & Gozen, 2013).
Fuzzy Logic Model
Fuzzy logic attempts to reflect the human way of thinking. Traditional computational logic and set theory deal only with true or false, zero or one, black or white (no grey). In reality, two different colors may both be described as "red", but one may be considered redder than the other. Fuzzy logic can be seen as an extension of ordinary logic, where the main difference is that fuzzy sets, rather than crisp sets, are used to determine the membership of a variable. Lotfi A. Zadeh introduced fuzzy sets and fuzzy logic in 1965. A fuzzy set can be defined as a collection of objects with graded membership and can be written as A = {(x, µ_A(x)) | x ∈ X}.
where A is the fuzzy set, x denotes the members of the set and µ is the degree of membership ranging from 0 to 1 (Kucukali & Baris, 2010). As an example, let "middle age" be defined between 30 and 60. Then, a fuzzy set of middle-aged people can be written with a membership function such as µ(x) = 0 for x ≤ 30, µ(x) = (x − 30)/30 for 30 < x < 60, and µ(x) = 1 for x ≥ 60. According to this set, a man who is 30 or younger is not a middle-aged person at all, a man at the age of 45 is somewhat middle aged, and a man who is 60 is indeed middle aged.
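A minimal sketch of the "middle age" membership function described above; treating it as the piecewise-linear ramp written out in the previous paragraph is an assumption based on the verbal description.

```python
def middle_aged(age):
    """Membership in the 'middle age' fuzzy set: 0 at or below 30,
    rising linearly to 1 at 60 and above (piecewise-linear assumption)."""
    if age <= 30:
        return 0.0
    if age >= 60:
        return 1.0
    return (age - 30) / 30.0

for a in (30, 45, 60):
    print(a, middle_aged(a))  # prints 0.0, 0.5, 1.0
```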
Fuzzy logic is a method that provides a systematic approach to solving problems and controlling a system while expressing the uncertainty in that system (Kuşan, 2009).
In general, a fuzzy system has five fundamental components: inputs, fuzzification, a fuzzy rule base, defuzzification and outputs. The input parameters should be determined first. Then, they have to be divided into fuzzy sets, which have fuzzy boundaries with degrees of membership ranging from 0 to 1. The fuzzy sets are labeled as low, medium, high, very high, etc. The process of translating the measured numerical values into fuzzy linguistic values is called fuzzification.
In other words, fuzzification is where membership functions are applied and the degree of membership is determined. Then, fuzzy rules are written in IF-THEN format based on expert judgment and the data. Finally, the results are defuzzified to a crisp value that represents the combined output fuzzy set. Defuzzification can be described as the reverse process of fuzzification. Although there are many defuzzification methods in the literature, the centroid method is the most commonly used. In this method the defuzzified output Z* is defined by Z* = ∫ µ(z)·z dz / ∫ µ(z) dz (3). Defuzzification is thus used to transform the fuzzy set representing the comprehensive outcome into a single number which best represents that fuzzy set (Kucukali & Baris, 2010).
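A small sketch of the fuzzification/aggregation/defuzzification pipeline; the triangular sets, firing strengths and demand ranges below are illustrative assumptions, not the membership functions of Figure 2.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def centroid(z, mu):
    """Centroid defuzzification: Z* = sum(mu*z) / sum(mu)."""
    return np.sum(mu * z) / np.sum(mu)

# Output universe: gross demand from 0 to 350 TWh
z = np.linspace(0.0, 350.0, 3501)

# Two rules fired with illustrative strengths; consequents are clipped and aggregated
medium = np.minimum(triangular(z, 100.0, 150.0, 200.0), 0.6)
medium_high = np.minimum(triangular(z, 150.0, 200.0, 250.0), 0.3)
aggregate = np.maximum(medium, medium_high)

print(round(centroid(z, aggregate), 1))  # crisp demand estimate in TWh
```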
Time Series Model
Annual time series do not include seasonal effects. In this study all data used are annual; therefore, the multiplicative model with no seasonal effect (Y_t = T_t · C_t · R_t) is used to predict future values. In predicting the trend component, either a linear or a quadratic model can be preferred. A linear model can be defined as T_t = b_0 + b_1·t, while a quadratic model is T_t = b_0 + b_1·t + b_2·t². The model parameters b_0, b_1 and b_2 are calculated by the least squares method, i.e., by solving the normal equations obtained when the sum of squared deviations between the observed series and the fitted trend is minimized (University of Cambridge, 2014).
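A brief sketch of this least-squares trend fitting; the demand figures are made-up placeholders, not the Turkish series used in the paper.

```python
import numpy as np

# Placeholder annual demand series (TWh); t = 1, 2, ..., n
demand = np.array([98.3, 103.0, 112.0, 121.1, 126.9, 132.6, 141.2, 150.0, 161.9, 174.6])
t = np.arange(1, len(demand) + 1)

# Linear trend T_t = b0 + b1*t and quadratic trend T_t = b0 + b1*t + b2*t^2,
# both estimated by ordinary least squares
b1, b0 = np.polyfit(t, demand, 1)
c2, c1, c0 = np.polyfit(t, demand, 2)

t_next = len(demand) + 1
print("linear forecast:", round(b0 + b1 * t_next, 1))
print("quadratic forecast:", round(c0 + c1 * t_next + c2 * t_next**2, 1))
```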
Data and Methodology
Many parameters, such as imports, exports, population, energy prices or weather conditions, can be used to estimate the electricity consumption or demand of a country. However, a single-parameter model is used here in order to construct a simple and practical model.
As mentioned earlier, GDP is a reliable parameter for Turkey; therefore, GDP is chosen as the input parameter and electricity demand as the output of the fuzzy model described in this paper. GDP is a fundamental indicator of a country's economic performance and can be considered at constant prices, at current prices or in line with purchasing power parity. The GDP data used in our model are in line with purchasing power parity. Gross electricity consumption data were collected from TEIAS, while GDP data were obtained from the IMF.
Fuzzy Sets and Rules Used in the Analyses
The scatter diagram of electricity demand vs GDP was used to build the fuzzy sets and fuzzy rules in the fuzzy logic analysis, which was carried out in Matlab. The scatter diagram was also used to determine the limits of the fuzzy sets and the connection between the input and output variables. Triangular and trapezoidal membership functions were used to represent the fuzzy sets, with the maximum contributions at their peaks (Figure 2). The seven fuzzy rules used in constructing the fuzzy model to estimate the electricity demand of Turkey are given in Table 3. GDP, ranging from 100 to 1,600 billion US dollars, was divided into seven fuzzy sets (Figure 2). In addition, electricity demand, ranging from 0 to 350 TWh, was also grouped into seven subsets: "Very low", "Low", "Medium", "Medium high", "High", "Very high" and "Extremely high" (Figure 2). The fuzzy sets were tailored to Turkish electricity demand and the structure of the Turkish economy. Table 3 summarizes the seven fuzzy rules used in this empirical study. For a given input, several IF-THEN rules may fire at the same time. Each rule can fire with a different strength, because a given input can belong to more than one fuzzy set with different membership values.
To clarify how gross electricity demand is calculated with the fuzzy logic algorithm, GDP = 659 billion US dollars is taken as the input parameter. This input activates Rule-3, Rule-4 and Rule-5. The consequent parts of Rule-3, Rule-4 and Rule-5 are then combined in order to construct an aggregated output fuzzy subset. Finally, using the centroid method in the defuzzification process, the output is obtained by computing the center of mass of the output fuzzy subset. For GDP = 659 billion US dollars, the gross electricity demand was calculated as 147 TWh. The results of the forecasting studies carried out for Turkish gross annual electricity demand with the fuzzy logic, time series and regression models are presented in Table 4. Figure 3 plots the annual electricity demand values against GDP values for the period 1980-2012; the best-fitting regression equation is E = 0.2213·GDP − 0.8454. Figure 4 shows a comparison of the actual annual gross demand with the regression, fuzzy and time series models.
The forecasting performance of all techniques was analyzed by the average absolute relative error (AREP). Equation (11), AREP = (100/N)·Σ|E_m − E_p|/E_m, was used to calculate the average absolute relative error.
In Equation (11), E_p and E_m denote the predicted and measured gross annual electricity demand values, respectively, and N is the number of observations. Based on the AREP values, the time series model has better forecasting performance than the fuzzy and regression models, since the AREP values are 2.75%, 4.81% and 7.64%, respectively. All values computed by these three models are presented in Table 4.
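A minimal sketch of this error calculation under the formula assumed above; the demand values are illustrative, not the paper's data.

```python
def arep(measured, predicted):
    """Average absolute relative error in percent: mean of |E_m - E_p| / E_m."""
    errors = [abs(m - p) / m for m, p in zip(measured, predicted)]
    return 100.0 * sum(errors) / len(errors)

measured = [170.0, 186.0, 194.1, 210.4]   # illustrative only
predicted = [165.2, 190.3, 198.0, 205.1]
print(round(arep(measured, predicted), 2))
```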
Conclusion and Remarks
The electricity generation market has gained a crucial role in emerging economies over the last two decades. Since electricity cannot be stored economically and storage entails high costs, establishing a system that meets electricity demand with high quality is vital. Therefore, demand forecasting and production planning studies constitute the most important aspects of electrical system planning.
Many studies dealing with electricity demand forecasting reveal that economic growth and electricity demand are highly correlated. In fact, between 2000 and 2011 electricity demand in Turkey grew by 5.68% per year on average, while over the same period gross domestic product increased by 4.36% per year on average (Figure 1).
In this paper, different forecasting techniques such as fuzzy logic, time series and regression approaches were applied to electricity demand forecasting in Turkey. The results reveal that electricity demand is strongly related to GDP in Turkey. According to our calculations, a one-parameter model is sufficient to a certain extent and can be used for electricity demand forecasting for this country.
The advantage of such a model is its simplicity and the quick results it provides for policy makers. Based on the average absolute relative errors, the time series model has better forecasting performance than the fuzzy and regression models with the available data, since the AREP values are estimated as 2.75%, 4.81% and 7.64%, respectively. In order to obtain more accurate estimates with smaller errors, other parameters such as imports, exports, population and energy prices may be included in the models.
In this regard, it can be concluded that models including trend analysis and fuzzy logic approaches can give meaningful results in estimating the electricity demand of Turkey and can provide crucial information for decision makers in the energy market.
Figure 1. GDP growth rate vs electricity demand growth rate in Turkey.
Figure 2. Fuzzy sets for input and output variables. Source: Authors' calculations in Matlab.
Figure 3. Regression line for electricity demand vs GDP between 1980 and 2012. Source: Authors' calculations.
Figure 4. Forecasting results of the estimation models.
Table 1. Primary energy demand by fuel.
A time series is a set of data points, typically measured at successive points in time spaced at regular intervals, and time series forecasting is the use of a model to predict future values from previously observed values. Classical decomposition is a simple technique for describing such a series. A time series is conventionally decomposed into four components (University of Cambridge, 2014): the trend (T_t), long-term movements in the mean; seasonal effects (S_t), cyclical changes related to the calendar; cycles (C_t), other cyclical changes (such as business cycles); and residuals (R_t), other spontaneous or systematic changes. A time series can then be written as in Equation (4), the additive form Y_t = T_t + S_t + C_t + R_t, or Equation (5), the multiplicative form Y_t = T_t · S_t · C_t · R_t, where C_t is the cyclic component at time t and R_t the residuals at time t.
Table 3. Fuzzy rules constructed for gross annual electricity demand.
Table 4. Results of forecasting the gross annual electricity demand of Turkey by the fuzzy logic, time series and regression models. Note: N.A.: not available; * IMF projections. | 4,380.6 | 2015-03-25T00:00:00.000 | [
"Economics",
"Engineering"
] |
Morphological Divergence Driven by Predation Environment within and between Species of Brachyrhaphis Fishes
Natural selection often results in profound differences in body shape among populations from divergent selective environments. Predation is a well-studied driver of divergence, with predators having a strong effect on the evolution of prey body shape, especially for traits related to escape behavior. Comparative studies, both at the population level and between species, show that the presence or absence of predators can alter prey morphology. Although this pattern is well documented in various species or population pairs, few studies have tested for similar patterns of body shape evolution at multiple stages of divergence within a taxonomic group. Here, we examine morphological divergence associated with predation environment in the livebearing fish genus Brachyrhaphis. We compare differences in body shape between populations of B. rhabdophora from different predation environments to differences in body shape between B. roseni and B. terrabensis (sister species) from predator and predator free habitats, respectively. We found that in each lineage, shape differed between predation environments, consistent with the hypothesis that locomotor function is optimized for either steady swimming (predator free) or escape behavior (predator). Although differences in body shape were greatest between B. roseni and B. terrabensis, we found that much of the total morphological diversification between these species had already been achieved within B. rhabdophora (29% in females and 47% in males). Interestingly, at both levels of divergence we found that early in ontogenetic development, females differed in shape between predation environments; however, as females matured, their body shapes converged on a similar phenotype, likely due to the constraints of pregnancy. Finally, we found that body shape varies with body size in a similar way, regardless of predation environment, in each lineage. Our findings are important because they provide evidence that the same source of selection can drive similar phenotypic divergence independently at multiple divergence levels.
Introduction
Numerous studies have documented adaptation to divergent natural selection regimes [1][2][3][4][5][6][7][8]. However, most studies examining fine-scale evolutionary diversification are limited to either between species or within species differences, and as a result, fail to adequately address how the same source of selection drives phenotypic divergence at varying taxonomic levels (a broad but general exception being studies of convergent and parallel evolution). Indeed, few studies have looked at the evolution of adaptive strategies across a speciation continuum (i.e., both within and between species) with the intent of determining how much diversification takes place across different stages of speciation [9][10][11]. The paucity of such studies may be due to the difficulty of identifying systems where similarly divergent selection regimes have driven or are driving divergence at multiple taxonomic levels. These studies are valuable to our understanding of evolutionary diversification, and can help explain how predictable phenotypic divergence is when populations or species are subject to similar selective environments.
Livebearing fishes (Poeciliidae) have been used as model systems in a diversity of ecological and evolutionary studies [6,23,[38][39][40][41][42][43][44][45]. Many of these studies have focused on adaptation to divergent predation environments, specifically examining life-history evolution and morphological divergence driven in large part by the presence or absence of predators [6,21,[46][47][48][49][50][51][52]. The live-bearing fish genus Brachyrhaphis has become an important model for studying the evolution of predator-mediated adaptations [6,13,23,46]. Brachyrhaphis occur primarily in lower Central America (LCA), with many species endemic to Costa Rica and Panama. Several species of Brachyrhaphis exhibit adaptation to divergent predation environments, including changes to lifehistory [46] and morphology [6,13]. Brachyrhaphis rhabdophora, for example, has evolved divergent life-history strategies associated with predation environment that are similar to those observed in numerous other poeciliid species [46,53]. Studies of adaptation in Brachyrhaphis have so far focused exclusively on intra-specific variation, where populations of a given species occur in either 'predator free' or 'predator' environments. Interestingly, similar patterns of morphological divergence may be present at deeper phylogenetic levels within Brachyrhaphis (i.e., between sister species rather than populations within a species; see below). If this is the case, then Brachyrhaphis would provide an ideal model system for studying morphological variation both among populations and between species from divergent predation environments, and testing for similar patterns of divergence among different phylogenetic levels to determine how similar selective regimes drive phenotypic divergence.
Brachyrhaphis roseni and B. terrabensis are sister species [54] that have similar distributions, occurring from southeastern Costa Rica to central Panama along the Pacific versant [55]. Although these species frequently occur within the same drainages, B. terrabensis typically occupies higher elevation headwater streams, while B. roseni occupies lower elevation coastal streams [55]. Consequently, B. terrabensis occurs in streams that are primarily void of piscivorous predators, while B. roseni co-occurs with numerous and abundant predators (e.g., Hoplias microlepis). This pattern is similar to that observed among populations within other poeciliid species [13,21,23,27,47,50,51], including the well-studied sister species to this species pair, B. rhabdophora [24,25,43,46,56]. However, B. roseni and B. terrabensis are unique because they themselves do not span both predator and predator free environments, but rather are segregated into predator and predator free environments, respectively (Belk et al. in review; unpublished data). Furthermore, Brachyrhaphis roseni and B. terrabensis have evolved similarly divergent life histories (Belk et al. in review) to those observed between populations of B. rhabdophora [46], B. episcopi [23], and other poeciliids [21], namely smaller size at maturity with more and smaller offspring in predator environments than in predator free environments. The hypothesis that these species are sister taxa, and the fact that they occur in divergent predation environments and display predictable patterns of life-history divergence, suggests that the selective forces driving divergence between populations of B. rhabdophora (i.e., predator vs. predator free environments) might also have driven divergence between B. roseni and B. terrabensis. This provides an opportunity to compare morphological variation both within (recently diverged) and between species of Brachyrhaphis from opposing predation environments in two closely related evolutionary lineages. In addition to testing for gross differences in prey morphology associated with predation environment, our data set allows us to test for similar patterns of morphological divergence both between sexes and among size classes.
In this study, we use geometric morphometric analyses to test four hypotheses related to morphological divergence driven by predation environment in three species of Brachyrhaphis fishes. We focus on contrasts between B. roseni and B. terrabensis and between populations of B. rhabdophora from divergent predation environments. Our hypotheses are as follows.
First, we predict that body shape differs between B. roseni and B. terrabensis, and between populations of B. rhabdophora from different predation environments. We predict that populations from predator environments (B. roseni and predator B. rhabdophora) will be more streamlined and have a more robust caudal peduncle region than populations from predator free environments (B. terrabensis and predator free B. rhabdophora) due to morphological optimization for different swimming modes [8,49,[57][58][59][60][61][62]. Cooccurrence with predators should favor the evolution of a body form optimized for fast-start swimming (i.e., greater burst speed ability), needed to evade predator strikes [8]. In contrast, increased resource competition often associated with predator free environments should favor the evolution of a body form optimized for more efficient prolonged swimming, important for finding and consuming food, acquiring mates, and conserving energy for reproduction [8,49]. Given that these two swimming types are optimized by different propulsor arrangements (i.e., fin size and shape, muscle size and shape), optimizing body shape for one swimming mode necessarily compromises the other. Prolonged swimming performance is optimized with a relatively shallow caudal peduncle, and a deep anterior body/head region. Fast-start swimming is optimized by the opposite trait values, including deep caudal peduncle and a shallow anterior body/head [8,49,[57][58][59][60][61][62].
Second, we expect that similar, but more pronounced (i.e., greater in magnitude), morphological divergence occurs between sister taxa Brachyrhaphis roseni and B. terrabensis than between populations of B. rhabdophora from different predation environments. This hypothesis focuses on determining how much divergence occurs between populations of B. rhabdophora from different predation environments versus between sister species B. roseni and B. terrabensis from different predation environments. We predict that divergence in body shape between B. roseni and B. terrabensis will be associated with predation environments as predicted by theory, and that these differences will be similar but more exaggerated than those observed between populations of B. rhabdophora. This difference in magnitude could be attributed to several factors, including for example a greater time since divergence or differences in the balance between strength of divergent selection and homogenizing gene flow.
Third, we predict that body shape will vary between sexes, both for the among-species and among-population comparisons. Although the pattern of variation described above is predicted to occur between populations from different predation environments due to divergent natural selection, it is also likely that, within populations, these morphological traits are affected by differences in reproductive roles between sexes, mating strategies among size classes, and ontogenetic changes. Given that Brachyrhaphis are livebearing, females of all three species may be constrained morphologically by pregnancy in the same way [37]. Therefore, we test if patterns of sexual dimorphism show equal magnitude and direction of divergence between contrasting selective environments, essentially addressing the question, do differences in male and female reproductive roles constrain or magnify shape responses to variation in predation environment? We predict that female body shape will converge between predation environments relative to males due to the constraint of pregnancy.
Finally, we test the hypothesis that body shape differs among size classes across predation environments. This hypothesis tests for an interaction between size and species, and addresses potential differences in reproductive roles, alternative-mating strategies among size classes, and ontogenetic effects. We predict that shape will not vary consistently across sizes (i.e., as individuals mature and grow) because of the potential for variation in male reproductive strategy across size classes in Brachyrhaphis (i.e., coercive mating versus coaxing), and differences in female reproductive allocation at different sizes.
Molecular Laboratory Methods and Analysis of Genetic Distance
A primary purpose of this study is to determine how body shape evolves at different phylogenetic levels of divergence (i.e., within and between species) when populations are subject to similarly divergent selective regimes. Although a previous study of Brachyrhaphis rhabdophora indicated little molecular divergence among populations from different predation environments [43], the amount of molecular divergence among populations of B. rhabdophora compared to the amount of divergence between sister species B. roseni and B. terrabensis remains relatively unexplored (but see Mojica et al. 1997). Thus, we generated mitochondrial DNA sequences from the cytochrome b (cytb) gene for four representative populations of B. rhabdophora from different predation environments and for six populations of B. roseni and B. terrabensis (Table S2). We isolated DNA using the Qiagen DNeasy96 tissue protocol (QIAGEN Sciences, Maryland, USA) and amplified cytb fragments for each sample by PCR, using forward primer GLU31 [63] and reverse primer HD15680 [64]. We followed [65] for amplification and sequencing reactions, clean-up, and sequence visualization. We assembled contigs and checked amino acid coding for errors (stop codons) while viewing electropherograms in Geneious [66], and manually aligned sequences in Mesquite v. 2.75 [67]. We obtained a total of 26 B. rhabdophora, 16 B. roseni, and 18 B. terrabensis sequences of a cytb fragment 1140 bp in length (plus ~65 bp of the downstream gene) representing four, three, and three populations, respectively (Table S2). All sequences were deposited in GenBank under accession numbers KJ081551-KJ081609.
In order to test for varying levels of molecular divergence within and among species of Brachyrhaphis, we computed pairwise genetic distances using MEGA5 [68]. We first computed raw pairwise genetic distances. Next, we used a model selection framework (AIC, [69]) within jModelTest 2 [70] to determine the best-fit model of molecular evolution for our data set. We then calculated model-corrected pairwise genetic distances under the best-fit model, TrN+G [71], using the Tamura-Nei model with gamma-distributed rates among sites in MEGA5 [68]. Our results show that B. roseni and B. terrabensis exhibit a greater level of genetic divergence than populations of B. rhabdophora from different predation environments. Pairwise population comparisons of cytb among populations of B. rhabdophora from different predation environments revealed remarkably little variation (mean model-corrected pairwise genetic distance = 0.004; Table S3). In contrast, pairwise population comparisons between B. roseni and B. terrabensis showed genetic distances an order of magnitude greater (mean model-corrected pairwise genetic distance = 0.04; Table S4). Thus, with an expanded sampling both in terms of numbers of base pairs and sequences, we find strong evidence that supports the earlier findings for B. rhabdophora and refutes those of Mojica et al. (1997). Collectively, these data validate our comparison as one consisting of two levels of phylogenetic divergence.
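As an aside, a minimal sketch of a raw (uncorrected) pairwise distance between two aligned sequences is shown below; the toy fragments are hypothetical, and the model-corrected distances reported above would additionally require the TrN+G correction as implemented in MEGA5.

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions with gaps or ambiguous bases."""
    valid = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    if not valid:
        return float("nan")
    return sum(a != b for a, b in valid) / len(valid)

# Toy cytb fragments (not real data)
print(round(p_distance("ATGGCAAACCTACGAAAA", "ATGGCGAACCTACGGAAA"), 3))
```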
Study Sites and Characterizing Predation Environment
We collected Brachyrhaphis roseni and B. terrabensis with a handheld seine from eight streams in the Chiriquí province of Panama between 20 and 29 August 2011, and one population of each species from eastern Costa Rica during 2007 ( Figure 1; Table S1). We collected Brachyrhaphis rhabdophora from two predator free and three predator environments in Guanacaste region of Costa Rica between 5 and 12 May 2006 (Table S1). All animal collecting was conducted under Brigham Young University IACUC committee approval. All necessary permits were obtained for the described field studies, and no collecting took place on private or protected lands. Collecting and export permits were provided by the Autoridad Nacional del Ambiente in Panama and under the Costa Rican Ministerio del Ambiente y Energía Sistema Nacional de Areas de Conservasión in Costa Rica.
The streams are characterized by a pool-riffle-pool structure, similar to that observed in other Brachyrhaphis species [25]. A primary environmental indicator of B. roseni, B. terrabensis, and B. rhabdophora life history divergence is the presence or absence of piscivorous predators (e.g., Parachromis dovii and Hoplias microlepis [24,25,46], unpublished data). Although predation pressure may be the selective force of most importance in this system, 'predation environment' is characterized by the presence ('predator') or absence ('predator free') of predators and a suite of other confounded environmental factors. For example, resource availability, stream gradient, and stream width may play an important role in determining life-history evolution and resulting morphology and are known to co-vary with presence or absence of predators in B. rhabdophora [56]. In this study, we consider 'predation environment' to be this suite of ecological features, which included either the presence or absence of piscivorous predators. Brachyrhaphis roseni, B. terrabensis, and B. rhabdophora typically occur in low velocity stream habitats (i.e., side-channels and pools found in small tributaries), although higher elevation sites (typical of B. terrabensis populations) tend to have steeper gradients and slightly faster stream velocities. Brachyrhaphis terrabensis primarily occurs in the same river drainages as B. roseni, although at higher elevations. Brachyrhaphis roseni habitat is characterized by low-elevation streams that are predator environments, while B. terrabensis occurs in predator free environments. Brachyrhaphis rhabdophora is found in both habitat types, predator free (typically high-elevation) and predator (typically low-elevation).
Geometric Morphometric Analyses
We used a total of 802 fish in the geometric morphometric analysis (Appendix I): 211 B. terrabensis (predator free), 289 B. roseni (predator), and 302 B. rhabdophora (201 from predator, and 101 from predator free sites). For all sites, there were roughly equal numbers of males and females, and a representative sample of the range of size variation observed within each population. For each fish, we measured standard length (mm), and digitized thirteen biologically homologous landmarks (or semi-landmarks; Figure S1) on a lateral image of each fish (tpsDig; [72]). Landmarks were defined as: (1) anterior tip of the snout; (2), anterior extent of the eye; (3) semi-landmark midway between landmarks 1 and 4; (4) anterior insertion of the dorsal fin; (5) posterior insertion of the dorsal fin; (6) semi-landmark midway between landmarks 5 and 7; (7) dorsal origin of the caudal fin; (8) ventral origin of the caudal fin; (9) semi-landmark midway between landmarks 8 and 10; (10) posterior insertion of anal fin or gonopodium in males; (11) anterior insertion of the anal fin or gonopodium in males; and (12) semi-landmark midway between landmarks 11 and 13; (13) intersection of the operculum with the ventral outline of the body.
We summarized shape variation from digital landmarks into relative warps (i.e., principal components) using tpsRelw [73]. We used generalized Procrustes analysis [74] to remove all non-shape variation due to position, orientation, and scale of the specimens for each image. For sliding semi-landmarks we used the minimize d 2 option in tpsRelw. Relative warps are defined as linear combinations of affine and non-affine shape components that describe some portion of the variation observed in the specimens [73]. We used the first 10 relative warps, which combined explained more than 96% of the shape variation, in subsequent analyses. By using only the top ten relative warps we effectively reduce the number of variables and account for the reduced dimensionality from use of sliding semi-landmarks. We analyzed the data using mixed model multivariate analysis of variance (MANOVA) in ASREML-R version 3.00 [75] within R (R Core Development Team 2010). Within each model, we included sampling site as a random factor to ensure that outlier sites did not drive the patterns we observed. Given that relative warps are orthogonal and ordered according to the amount of variation they explain, they can be treated as repeated measures with the use of an 'index variable' analogous to time in traditional repeated measures models. This method has been successfully employed in similar studies of shape variation in B. rhabdophora [6] and other livebearing fishes [76]. Thus, the order number of the relative warps (i.e. 1-10; reflecting the order of the warps but not the value) was treated as an index variable and included in the repeated statement for mixed model analyses. The use of the index variable arises out of mathematical necessity, and is crucial for this method to work and to interpret the results. It is the interaction of the main effect with the index variable that allows us to test the hypothesis that shape differs between groups on any one or any linear combination of relative warps. This is the same hypothesis tested in a standard MANOVA, but the index variable allows us to test this hypothesis in a mixed model framework. We tested each of our four hypotheses (detailed above) using these data.
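For readers unfamiliar with the Procrustes/relative-warp workflow, a simplified sketch is given below; the landmarks are random placeholders, and sliding of semi-landmarks and reflection handling are omitted, so this is an illustration of the idea rather than a substitute for tpsRelw.

```python
import numpy as np

def rotate_onto(ref, shape):
    """Least-squares rotation of a centered, unit-size shape onto a reference."""
    u, _, vt = np.linalg.svd(shape.T @ ref)
    return shape @ (u @ vt)

def gpa(configs, n_iter=10):
    """Minimal generalized Procrustes analysis for 2D landmark configurations
    of shape (n_specimens, n_landmarks, 2): removes location, scale, rotation."""
    shapes = configs - configs.mean(axis=1, keepdims=True)                 # center
    shapes = shapes / np.linalg.norm(shapes, axis=(1, 2), keepdims=True)   # unit centroid size
    mean = shapes[0]
    for _ in range(n_iter):
        shapes = np.array([rotate_onto(mean, s) for s in shapes])          # align to current mean
        mean = shapes.mean(axis=0)
        mean = mean / np.linalg.norm(mean)
    return shapes

# 20 placeholder specimens with 13 landmarks each
rng = np.random.default_rng(0)
aligned = gpa(rng.normal(size=(20, 13, 2))).reshape(20, -1)
aligned = aligned - aligned.mean(axis=0)

# Relative warps are the principal components of the aligned coordinates
_, _, vt = np.linalg.svd(aligned, full_matrices=False)
scores = aligned @ vt.T
print(scores[:, :2])  # specimen scores on the first two relative warps
```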
To test for overall shape differences between predation environments (hypothesis 1), and for shape differences between predation environment and across sexes (hypothesis 3), we first tested for main effects and interactions of predation environment, sex, centroid size (a covariate; hereafter size), and index variable for the whole dataset (N = 802). Within each model, we included sampling site as a random factor to ensure that outlier sites did not drive the patterns we observed. Our initial global model estimated shape as shape ~ index variable + species + sex + size + (index variable: species) + (index variable: sex) + (index variable: size) + (index variable: species: sex) + (index variable: species: size) + (index variable: sex: size) + (index variable: species: sex: size). We used model selection techniques (i.e., AIC) to determine if a reduced model (all possible models maintaining the fixed effects) resulted in a better model fit (i.e., lowest AIC score; [69,77]). In our analysis, interactions between main effects and the index variable served as the most direct test of our hypotheses. Simple interactions of main effects are less informative because the interaction with the index variable tests for differences in shape on each of the relative warps independently, while simple interactions do not. If we do not consider the interaction with the index variable we are simply testing for differences among treatments when averaged across all relative warps. Relative warps are independent from each other (i.e., they explain different axes of variation); therefore the magnitude and direction of differences between levels of the main effects may vary differently and randomly across relative warps. Interactions with the index variable allow relative warps to vary independently (i.e., not to be considered as a whole) and thus allow the interaction to be significant even if the main effects alone, or their interactions, are not [6].
Given that in both of our taxonomic contrasts we found a significant interaction between predation environment, sex, and the index variable in the MANOVA, we applied a phenotypic change vector analysis (PCVA; [78][79][80]) to determine the specific nature of the interaction to test for differences in shape changes between sexes. This analysis has been used previously and effectively in another Brachyrhaphis species [6]. The PCVA tests whether the significant interaction between main effects and the index variable resulted from differences in magnitude (MD) or direction (H) of morphological change. The PCVA tests magnitude and direction across all relative warps. Specifically, we used the PCVA to compare the amount and direction of sexual dimorphism between B. roseni and B. terrabensis, and between populations of B. rhabdophora from different predation environments. Here, we compared both size and direction of the phenotypic trajectories to test for differences in magnitude of sexual dimorphism and for different effects of predation on males and females (i.e., to determine if predation affects sexes differently), respectively. We conducted the PCVA using ASREML-R version 3.00 [75] within R (R Core Development Team 2010). We plotted LS means on the first two relative warp axes, which accounted for 64.36% of the shape variation, to visualize differences in magnitude and direction of shape change (Fig. 2).
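A compact sketch of the two test statistics at the heart of the PCVA (the difference in magnitude and the angle between phenotypic change vectors); the vectors below are placeholders, and the permutation-based significance testing used in the actual analysis is omitted.

```python
import numpy as np

def change_vector_stats(vec_a, vec_b):
    """Difference in magnitude (MD) and angle in degrees between two
    phenotypic change vectors (e.g., LS-mean shape change between sexes
    in each lineage)."""
    mag_a, mag_b = np.linalg.norm(vec_a), np.linalg.norm(vec_b)
    cos_theta = np.dot(vec_a, vec_b) / (mag_a * mag_b)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return mag_a - mag_b, theta

# Placeholder vectors of LS-mean differences on the first few relative warps
v_roseni_terrabensis = np.array([0.030, -0.012, 0.008])
v_rhabdophora = np.array([0.011, 0.004, -0.002])
md, theta = change_vector_stats(v_roseni_terrabensis, v_rhabdophora)
print(round(md, 4), round(theta, 1))
```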
To test for a difference in magnitude of variation between predation environments (hypothesis 2), and for differences between predation environments across sizes (hypothesis 4), we tested for main effects and interactions of species group (B. roseni/B. terrabensis and B. rhabdophora from divergent predation environments), predation environment, size, and index variable for each sex (males N = 278; females N = 523) using a mixed model MANOVA. We included location as a random variable in the model. Our full model estimated shape as shape ~ index variable + group + environment + size + (index variable: group) + (index variable: environment) + (index variable: size) + (index variable: group: environment) + (index variable: group: size) + (index variable: environment: size) + (index variable: group: environment: size). We used model selection techniques to determine if a reduced model resulted in a better model fit [69,77]. Where the interaction of group, environment, and index variable was significant in the MANOVA, we applied the PCVA to determine whether the significant interaction between main effects and the index variable resulted from differences in MD or H of morphological change. Following a significant interaction between size and the index variable, we generated thin-plate splines in tpsRegr [81] using centroid size and superimposed landmark coordinates to visualize shape variation along the centroid size axis.
Effects of Predation Environment on Body Shape
Consistent with the predictions in our first hypothesis, we found that body shape differed between predation environments both within Brachyrhaphis rhabdophora and between B. roseni and B. terrabensis. The best-fit model estimated shape as shape ~ index variable + species + sex + size + (index variable: species) + (index variable: sex) + (index variable: size) + (index variable: species: sex) + (index variable: species: size) + (index variable: sex: size) + (index variable: species: sex: size). Morphology differed significantly for the interaction of species group, predation environment, and index variable for both females and males (Table 1). Thus, we conducted a PCVA analysis to determine if the significant differences were caused by the magnitude of change, the direction/angle of change, or both for each sex (hypothesis 2). For females, the PCVA revealed that the magnitude of shape variation was greater in the B. roseni/B. terrabensis species group (MD = 0.0348; P = 0.001); the trajectories also differed in orientation (angle = 80.14°; P = 0.001). Similarly, the PCVA revealed that the magnitude of shape variation in males was greater in the B. roseni/B. terrabensis species group (MD = 0.0247; P = 0.001) and that the trajectories differed in orientation (angle = 81.80°; P = 0.002). Consistent with the predictions for our second hypothesis, greater morphological differentiation occurred between B. roseni/B. terrabensis than between populations of B. rhabdophora from different predation environments. Specifically, B. rhabdophora achieved 29% (females) and 47% (males) of the divergence present between B. roseni/B. terrabensis.
Morphology differed significantly for the interaction of predation environment, sex, and index variable (Table 2). Thus, we conducted a PCVA analysis to determine if the significant difference was caused by the magnitude of change, the direction/angle of change, or both. Summary statistics revealed that there was significant variation in the magnitude of sexually dimorphic shape change among the four taxa (Var size = 0.0000977; P = 0.003) and significant variation in the direction of shape change (Var orient = 257.57; P = 0.001). Within species groups, the magnitude of shape change was not significantly different; however, the magnitude of sexually dimorphic shape change was significantly greater in the B. roseni/B. terrabensis species group (Table 3). The direction of shape change was significant in all pairwise comparisons (Table 3). For within-species comparisons, the direction of shape change represented a convergence of shape in females, which was consistent with the predictions of our third hypothesis.
To determine how shape varies across size classes (hypothesis 4) in females (due to changes associated with pregnancy) and males (due to potential differences in mating strategies and ontogenetic effects), we generated thin-plate splines in tpsRegr [81] using centroid size and superimposed landmark coordinates to visualize shape variation along the centroid size axis in females (Fig. 3) and males (Fig. 4) of both species. We found that females showed a shift in morphology from small to large that was characterized by an increase in abdomen size and a decrease in caudal peduncle area. Adult males showed a shift in morphology from small to large that was characterized by a shortening and deepening of the head region and a reduction in the caudal peduncle region.
Discussion
The principal objective of our study was to test for divergent morphologies driven by predation environment in Brachyrhaphis fishes at two taxonomic levels in two phylogenetically sister lineages, and to determine how much variation occurs within populations and species that have evolved in similarly divergent selective regimes. We predicted that the divergent morphology observed between these species and populations would reflect body shape optimized for their native predation environment, and that the magnitude of morphological divergence would be greater between B. roseni and B. terrabensis than between populations of B. rhabdophora from different predation environments. We also tested for differences in shape between sexes and across size classes, and predicted that shape optimization would differ across sex and size class according to potential differences in mating strategies or reproductive constraints.
Parallel Morphological Evolution at Two Levels of Divergence
Our results strongly support divergent morphologies between Brachyrhaphis roseni and B. terrabensis, and between populations of B. rhabdophora from different predation environments, as predicted by theory (Table 2; Fig. 2) [8,51,57-62,82]. As predicted, individuals from predator environments showed a deeper caudal peduncle and a shallower anterior body/head than individuals from predator free environments. This pattern is strikingly similar to that observed in other poeciliids [8,13], and strongly suggests that 'predation environment' is the principal driver of parallel patterns of shape variation between both sister species (B. roseni and B. terrabensis) and populations within a species (B. rhabdophora). Importantly, although our results suggest that both male and female body shape was significantly more divergent (i.e., more pronounced) between B. roseni and B. terrabensis than between B. rhabdophora populations from different predation environments (Fig. 2), 47% (males) and 29% (females) of the variation in body shape was already present between populations of B. rhabdophora. Therefore, although sister species B. roseni and B. terrabensis are clearly at a point of greater divergence (i.e., phylogenetically but also potentially ecologically), both taxon pairs are on a similar evolutionary trajectory and B. rhabdophora has already reached a substantial level of evolutionary diversification. Intraspecific evolutionary divergence of this type has been noted in a variety of poeciliid fishes for several different traits [13,39,40,46-49]. Interestingly, we found that in B. rhabdophora divergence in male morphology was greater than divergence in female morphology, at least relative to variation noted between B. roseni and B. terrabensis. This pattern of males evolving more rapidly than females has previously been noted in guppies in work that focused on life history traits [83]. Following an introduction experiment, which involved transplanting populations from high-predation to low-predation sites, evolution of male life-history traits was significantly more rapid than that of female life-history traits [83]. This finding was largely attributed to a difference in heritability, possibly associated with Y chromosome-linked traits [83]. The pattern observed in Brachyrhaphis suggests that female body shape is less variable, perhaps due to constraints associated with pregnancy (see below). The fact that male B. rhabdophora have achieved a greater amount of divergence relative to females may be due to greater existing variation in male body shape. One possible explanation is that males that employ alternative mating strategies have evolved different morphologies to accommodate these strategies (see below). If males of different sizes do in fact tend to adopt alternative mating strategies, greater genetic variance would likely occur in males relative to females, possibly contributing to the greater differentiation achieved in male B. rhabdophora relative to female B. rhabdophora. Overall, we see four possible explanations for why greater divergence occurs between B. roseni and B. terrabensis than occurs within B. rhabdophora, although we did not explicitly test any of these hypotheses, and only briefly state them here. First, the time since B. roseni and B. terrabensis diverged could be greater than the time since populations of B. rhabdophora from predator and predator free environments diverged. Second, B. roseni and B.
terrabensis could be experiencing stronger divergent selection than B. rhabdophora. Third, populations of B. rhabdophora and sister species B. roseni-B. terrabensis could be experiencing differences in the balance between selection and gene flow. And finally, greater heritable variation could be present between B. roseni and B. terrabensis relative to B. rhabdophora. These hypotheses should be tested further to determine the exact nature of this difference in relative morphological divergence. The idea that Brachyrhaphis roseni and B. terrabensis are sister taxa that occur in the same drainages but in different predation regimes suggests the possibility that divergent natural selection has driven and maintains reproductive isolation between these two species. Numerous lines of evidence suggest that the most recent common ancestor of this species pair likely occurred across a range of predation habitats within the drainages where B. roseni and B. terrabensis are currently found, a pattern strikingly similar to that found in congenerics B. rhabdophora [24,25,43,46,56] and B. episcopi [23,42,84]. For example, multiple recently diverged populations of B. rhabdophora have evolved life-history phenotypes that are adaptive for their specific predation environments [24,25,43,46,56]. Brachyrhaphis roseni and B. terrabensis have evolved nearly identical, although more pronounced, life-history phenotypes as a result of divergent selection regimes (Belk et al., in review). Likewise, our results suggest that body shape evolution is also occurring in parallel, with similar but more pronounced divergence in B. roseni and B. terrabensis than is found in B. rhabdophora. This begs the question: have similarly divergent selection regimes also driven the evolution of reproductive isolation in parallel? Previous studies suggest that body shape plays a key role in mate choice in other livebearing fish, and that individuals prefer as mates those who have a body shape optimized for selection regimes similar to their own [7]. If this holds true in Brachyrhaphis, it is likely that reproductive isolation due to assortative mating for body shape may already occur between populations of B. rhabdophora, and is even stronger between B. roseni and B. terrabensis. Studies in our lab are currently underway to test these predictions.
Reproductive Constraints on Morphological Evolution
Although shape varied between B. roseni and B. terrabensis, and between populations of B. rhabdophora from different predation environments as predicted (hypothesis 1), the degree of variation was not equal across sexes (hypothesis 3). As predicted, both male and female shape diverged as a function of predation environment; however, divergence in female shape was less than divergence in male shape (Fig. 2). One explanation for this is that Brachyrhaphis are livebearing fishes with a female body shape constrained by pregnancy [6], regardless of predation environment. Hence, immature females from different predation environments might initially differ in body shape, but these differences disappear once females become pregnant. This difference is predicted by a trade-off that occurs between reproduction and fast-start swimming performance (i.e., pregnant females have reduced fast-start speeds), as observed in another poeciliid species [6,37]. This observation of female shape convergence also illuminates previous patterns observed regarding mortality rates in the closely related B. rhabdophora [25]. Johnson and Zuniga-Vega (2009) showed that differential mortality rates drive life-history evolution in B. rhabdophora (i.e., higher survivorship in predator free environments than in predator environments), and that in predator environments mortality rates were relatively constant across size classes until individuals reached the largest size class, where mortality increased. This pattern is reversed in predator free environments (i.e., survivorship increases in the largest size class). If convergence in body shape coincides with divergent mortality rates as size increases, then our data suggest that B. roseni and B. terrabensis should also be experiencing differences in size-specific mortality rates. A possible explanation is the negative impact that pregnancy may have on fast-start swimming performance (useful in predator environments) as seen in related poeciliid fish [37].
Morphological Evolution across Size Classes: Role of Sexual Selection and Alternative Mating Strategies?
In addition to finding gross differences in morphology between predation environments, we found evidence that shape was not consistent among size classes of adult females (Fig. 3) and males (Fig. 4) in all Brachyrhaphis species studied. In other words, we found allometric differences in shape among size classes in each taxon. We predicted that shape would not be consistent across sizes (i.e., as individuals mature and grow) because of the potential variation in male reproductive strategy across size classes in Brachyrhaphis, and differences in female reproductive allocation at different sizes. As adult females increase in size, the predominant shape change that occurs is a relative increase in abdomen size and a resulting relative decrease in the caudal peduncle region. This finding complements Wesner et al. (2011), who found that late in pregnancy, female body shape converges due to constraints of pregnancy on body shape. The patterns observed between female B. roseni and B. terrabensis, and B. rhabdophora from different predation environments, are remarkably similar.
The pattern of shape change with size in mature males follows a different pattern, potentially consistent with different reproductive strategies between small and large males (i.e., sneaker males vs. displaying males) in each species. Patterns of shape variation with size observed in males of B. roseni, B. terrabensis, and B. rhabdophora are consistent with shapes that are optimized for behaviors associated with reproductive mode; within taxonomic units, small males had a body shape that facilitated burst swimming more than large males (e.g., more streamlined with a more robust caudal peduncle), who demonstrated a body shape that was more conducive to the endurance swimming necessary for displaying behaviors (i.e., deeper anterior body/head region with a relatively shallow peduncle) [12-14,51,55]. The size at which a male reaches maturity has a large effect on mode of reproduction in numerous livebearing fish [85-87] because males typically do not grow after maturing. Relatively smaller males ("sneakers") often rely on forced copulations (i.e., coercion) rather than courting females to win mates, although the degree to which this pattern holds is highly species specific; mating strategy is context dependent [82,86-90] in some species (i.e., relative size determines mating strategy), while in others mating strategy is genetically based and not plastic [86,87,91]. Preliminary observations suggest that small Brachyrhaphis males tend to sneak (especially in the presence of larger males), while larger males devote more of their reproductive efforts to displaying to win mates (personal observation). Although species-specific variation in mating strategies exists, some patterns can be generalized. Forced copulation generally relies on short swimming bursts [86,87] that allow the male to copulate with a female before she can defend herself and potentially injure the male. Alternatively, relatively large males adopt larger, showier features and often rely on a courting strategy of reproduction (i.e., coaxing) [86,87]. Displaying males are often required to swim alongside a female until she concedes copulation (personal observation). We hypothesize that this mode of reproduction is likely optimized by a more fusiform body shape that allows the male to have greater swimming endurance during courtship. Just as livebearing reproduction interacts antagonistically with predation environment in generating female morphology (i.e., pregnancy constraints and resulting swimming performance trade-offs), reproductive mode and predation environment may exert opposing selective pressures on body shape in males. We propose that the nearly identical patterns we observed at both taxonomic levels tested here suggest that selection could favor different body forms that may be associated with reproductive roles and mating strategies, and that the potential adaptive nature of different behaviors is paralleled by morphological divergence. Our findings, although they do not provide conclusive evidence in support of this hypothesis, highlight a gap in our knowledge related to the role of morphology in alternative mating strategies. Future work should focus on determining how body shape and size interplay with mating strategies, whether genetically determined or plastic.
Conclusions
In conclusion, sister taxa Brachyrhaphis roseni and B. terrabensis differed dramatically in body shape, and the differences observed correspond to divergent predation regimes that favor different body shapes. Brachyrhaphis rhabdophora from different predation environments also differ as predicted by predation environment, and these differences are parallel, although less exaggerated, to those observed between B. roseni and B. terrabensis. Our study provides evidence that evolution acts in a predictable manner when similar selection pressures are at work by showing that body shape evolution follows dramatically similar trajectories at multiple levels of divergence (i.e., both between and within species). We also conclude that shape appears to be optimized differently in males and females, and across a range of sizes, and that these differences may correspond to reproductive roles and mating strategies, respectively. The fact that closely related species in geographic proximity and similar selective environments have evolved nearly identical morphological characteristics is strong evidence that evolution acts in a predictable manner, and provides a framework for future studies on speciation in this unique system.
Figure S1 Geometric morphometric landmarks. Landmark locations used for geometric morphometric analyses on Brachyrhaphis roseni, B. terrabensis, and B. rhabdophora.
(DOCX)
Table S1 Geometric morphometric population data. Population data for samples used in the geometric morphometric portion of this study, including total N, drainage and country of origin, and coordinates. (DOCX)
Pairwise genetic distances for Brachyrhaphis rhabdophora from high- (HP) and low-predation (LP) environments. Raw pairwise differences are presented above the diagonal, and adjusted pairwise differences using the TrN+G model of evolution are presented below the diagonal. (DOCX)
Table S4 Genetic distance comparisons between Brachyrhaphis roseni and B. terrabensis. Pairwise genetic distances based on 1140 base pairs of cytochrome b (plus ~65 bp of the downstream gene) for Brachyrhaphis roseni and B. terrabensis. Raw pairwise differences are presented above the diagonal, and adjusted pairwise differences using the TrN+G model of evolution are presented below the diagonal. Population abbreviations for drainage of origin are as follows: Rio Chiriquí (Ch.); Rio Chiriquí Viejo (CV); and Rio Coto (C). Two populations of B. terrabensis were taken from the Rio Chiriquí Viejo drainage, and are designated with subscripts representing their country of origin (CV CR and CV P for Costa Rica and Panama, respectively). (DOCX)
"Biology",
"Environmental Science"
] |
Transcriptomic analysis provides new insights into the secondary follicle growth in spotted scat (Scatophagus argus)
Spotted scat (Scatophagus argus) is an important mariculture fish that is of great economic significance in East and Southeast Asia. To date, there are no studies on ovary development and regulation in S. argus. Herein, the ovary transcriptome profiles of S. argus at different stages were constructed, and the genes and pathways potentially involved in secondary follicle growth were identified. A total of 25,426 genes were detected by sequencing the mRNAs from the ovary libraries at stage III (n=3) and IV (n=3). Notably, 2950 and 716 genes were up-regulated and down-regulated in the stage IV ovary, respectively, compared to the stage III ovary. The differentially expressed genes (DEGs) were found to be mostly involved in regulating steroidogenesis, vitellogenesis, lipid metabolism, and meiosis. Up-regulation of steroid hormone synthesis pathway genes (fshr, cyp17a1, and foxl2) and insulin-like growth factor pathway genes (igf1r, igf2r, igfbp1, igfbp3, and igfbp7) in the ovary at stage IV was possibly the reason for the increased serum estrogen. Moreover, ppara, ppard, fabp3, and lpl were up-regulated in the stage IV ovary and were potentially involved in lipid droplet formation in the oocyte. Many DEGs were involved in the cell cycle, meiosis, and cAMP or cGMP synthesis and hydrolysis, indicating that meiosis restarted in the stage IV ovary. In addition, numerous TGF-beta signal pathway genes were up-regulated in the stage IV ovary. This ovary transcript dataset forms a baseline for investigating functional genes associated with oogenesis in S. argus.
Introduction
In recent years, there has been an increasing demand for newly cultured species, especially fish, because of the reduction of natural resources caused by overfishing. Producing a large number of healthy and viable eggs is pivotal to the success of aquaculture (Lubzens et al., 2010). Mature fertilizable eggs develop from oogonia through oogenesis. A normal oogenesis process is necessary for successful fertilization and embryonic survival. Although oogenesis significantly differs among teleosts, the basic stages of oogenesis are: 1) transformation of oogonia into oocytes (onset of meiosis), 2) oocyte growth (under meiotic arrest), 3) maturation (resumption of meiosis), and 4) ovulation. Several articles have detailed the descriptions of these phases (Patiño and Sullivan, 2002; Lubzens et al., 2010; Urbatzka et al., 2011; Breton and Berlinsky, 2014; Sullivan and Yilmaz, 2018; Meng et al., 2022).
Oocyte growth is a critical stage of oogenesis and has been extensively studied in different species. In teleosts, oocyte growth may encompass a significant portion of the lifespan and can be divided into the primary growth stage (PG) and secondary growth stage (SG) (Lubzens et al., 2010). SG consists of the cortical alveoli stage (also named previtellogenic stage, PV) and the vitellogenic stage, which is further divided into early vitellogenic (EV), mid-vitellogenic (MV), late vitellogenic (LV), and full-grown (FG) stages according to the follicle size and morphological characteristics (Kwok et al., 2005). Primary growth starts with the onset of meiosis when oogonia develop into PG oocytes. The PG oocytes are subsequently arrested in meiotic prophase I and become surrounded by follicular cells (theca and granulosa cells), forming the ovarian follicle structure (Lubzens et al., 2010). The beginning of SG is marked by the appearance and accumulation of cortical alveoli (CA). CA are endogenously synthesized membrane-limited glycoprotein vesicles of variable sizes, which increase in number and size during early SG (Lubzens et al., 2010). CA play an important role in the fertilization response and early embryogenesis. Lipid droplets appear in the ooplasm at about the same time as CA appearance, or late in the PV stage, and their abundance predominates over the yolk globules in the early stages of vitellogenesis. The lipids accumulating within the oocyte ooplasm originate from plasma very low-density lipoproteins (VLDL) and vitellogenins (Vtgs) (Lubzens et al., 2010; Sullivan and Yilmaz, 2018; Qu et al., 2022). True vitellogenesis is marked by the formation of yolk granules at the periphery of the cytoplasm as the oocyte grows. During the vitellogenic stage, dramatic follicle growth occurs as the oocyte sequesters Vtgs through receptor-mediated endocytosis (Lubzens et al., 2010; Sullivan and Yilmaz, 2018). The lipid droplets and yolk granules are important nutrient pools for the oocytes to meet the subsequent maturation requirements and early embryonic development (Hiramatsu et al., 2015; Sullivan and Yilmaz, 2018). The FG oocytes undergo meiotic maturation, which includes the resumption of meiosis and the completion of the first meiotic division after growth completion to become fertilizable eggs (Lubzens et al., 2010).
Oocyte growth is a complicated and dynamic process regulated by many circulating and follicle-produced local physiological factors. The follicle-stimulating hormone (Fsh), a pituitary gonadotropin, has been proven to stimulate oocyte growth (vitellogenesis) by activating cyp19a1a transcription and producing estrogen, mainly estradiol (E2) (Lubzens et al., 2010; Meng et al., 2022), which induces hepatocytes in the liver to synthesize and secrete Vtgs (Sullivan and Yilmaz, 2018). Besides the pituitary gonadotropin hormone (Gth) and gonadal sex steroid hormones, accumulating evidence suggests a cross-talk between the developing oocyte and its surrounding follicle layers in oocyte growth mediated by paracrine factors (Lubzens et al., 2010; Meng et al., 2022). For example, Inha/Inhbb of the TGF-beta signaling pathways and Bmp15 of the Bmp signaling pathway have been shown to be required for oocyte growth (Myllymaa et al., 2010; Cook-Andersen et al., 2016). Recently, many genes and pathways that potentially play a role in fish oogenesis have been identified through large-scale transcriptome analyses. These transcriptomic analyses on fish oogenesis have mainly focused on the true vitellogenesis stage (from EV to LV) (Meng et al., 2022), final maturation and ovulation (Breton and Berlinsky, 2014; Tang et al., 2019; He et al., 2020; Wang et al., 2021a; Meng et al., 2022), and the transition from PG to PV (Breton and Berlinsky, 2014; Zhu et al., 2018; Qu et al., 2022). Notably, there is a lack of transcriptome analysis between the CA stage and the true vitellogenic stage, which is critical for understanding the potential mechanisms of secondary follicular growth, especially the production of lipid droplets and yolk granules.
Spotted scat (Scatophagus argus) is an important aquaculture fish with high economic value in East and Southeast Asia. S. argus can eat algae, sick shrimp, parasites on fish bodies, and shellfish attached to the pool wall and cage, making it a good "garbage fish." It is thus suitable for mixed cultivation with other marine shrimp and fish. Although artificially-induced spawning can be achieved in some laboratories (Cai et al., 2010; Mandal et al., 2021; Washim et al., 2022), the efficiency of artificial propagation is still very low in the actual breeding process, mainly because of the lack of female fish that can produce a large number of healthy and viable eggs. Elucidating the mechanisms underlying oocyte growth may thus help increase the quality and quantity of eggs produced by female S. argus for artificial breeding, thereby promoting its sustainable development in aquaculture. Although numerous studies regarding the reproductive biology of S. argus have been done in recent years (Cui et al., 2017; Chen et al., 2020; Zhai et al., 2021; Jiang et al., 2022), the molecular mechanisms controlling oocyte growth remain poorly understood and warrant further study. In this study, transcriptomic analyses of the ovary at the PV stage (stage III) and LV stage (stage IV) were performed to identify candidate genes and potential pathways associated with secondary follicular growth in S. argus. The findings of this study provide a solid foundation for further elucidation of the mechanisms of fish oogenesis.
Experimental fish and sample collection
Twelve one-year-old female S. argus (200-310 g) were collected from Donghai Island, Zhanjiang, Guangdong Province, China. The fish were euthanized with 100 mg·L⁻¹ of tricaine methanesulfonate (MS-222, Sigma, Saint Louis, MO, USA), followed by collecting and storing part of the ovary tissues in TRIzol at -80°C for total RNA extraction. The remaining ovary tissues from the same fish were fixed in Bouin's solution for 12 hours and used for histological identification. Histological characterization of the ovary was performed following the approach of Cui et al. (2017) and Jiang et al. (2022). All experiments were performed according to the requirements of the Animal Research and Ethics Committee of the Institute of Aquatic Economic Animals, Guangdong Ocean University.
2.2 RNA extraction, library construction, and Illumina sequencing
Total RNA was extracted using TRIzol (Life Technologies, Carlsbad, CA, USA) following the manufacturer's protocol. RNA quantity and quality were determined using NanoDrop spectrophotometry (Thermo Scientific, Wilmington, DE, USA) and agarose gel electrophoresis, respectively.
Total RNA extracted from the ovarian tissues of 6 fish (3 at stage III and 3 at stage IV) from the above 12 fish was used for RNA-Seq. The NEBNext Ultra™ RNA Library Prep Kit was used for complementary DNA (cDNA) library construction following the manufacturer's recommendations. Briefly, mRNA was purified from total RNA using Oligo (dT) beads and was then randomly fragmented into short fragments using the fragmentation buffer. The fragments were then used to synthesize the first-strand cDNA by employing random primers. Second-strand cDNA was synthesized using DNA polymerase I, dNTP, RNase H, and buffer. The synthesized cDNA was then purified using AMPure XP beads and then end-repaired, poly(A) added, and ligated to Illumina sequencing adapters. The ligation fragments were subsequently size-selected using AMPure XP beads. PCR was then performed to generate cDNA libraries, which were sequenced using an Illumina HiSeq™ 2000. All clean libraries of sequencing data were submitted to the NCBI Sequence Read Archive (SRA) database (Accession Nos: SRP171076 and PRJNA906196).
Sequence assembly, annotation, and functional analysis
Clean reads were obtained by removing the reads containing adapters and low-quality reads from the original sequences. The clean reads were then mapped to the S. argus reference genome (https://ngdc.cncb.ac.cn/search/?dbId=gwh&q=GWHAOSK00000000.1) using HISAT2 v2.2.4. The Q20, Q30, GC content, and sequence duplication levels of the clean data were also calculated. Gene expression levels were calculated using the fragments per kilobase of transcript per million mapped reads (FPKM) method. Analysis of the differentially expressed genes (DEGs) between the two groups was performed using the edgeR package. Genes with a fold change ≥2 (|log2FC| > 1) and a false discovery rate (FDR) < 0.05 were highlighted as significant DEGs. The DEGs were annotated by checking their Gene Ontology (GO) functions and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment. P values for the GO and KEGG analyses were corrected using the FDR method. GO terms and KEGG pathways with a P value < 0.05 were defined as significantly enriched in the DEGs.
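To make the significance filter concrete, the short sketch below (not part of the published pipeline) selects DEGs from a hypothetical table of edgeR results using the same |log2FC| > 1 and FDR < 0.05 thresholds; the file name and column names are assumptions.

```python
# Illustrative sketch: apply the DEG filter described above to an exported edgeR table.
# Assumed columns: gene_id, log2FC, FDR (hypothetical file name).
import pandas as pd

deg_table = pd.read_csv("stageIV_vs_stageIII_edgeR.csv")

significant = deg_table[(deg_table["log2FC"].abs() > 1) & (deg_table["FDR"] < 0.05)]
up_regulated = significant[significant["log2FC"] > 0]
down_regulated = significant[significant["log2FC"] < 0]
print(len(up_regulated), "up-regulated;", len(down_regulated), "down-regulated")
```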
Validation of DEGs with quantitative real-time PCR (qRT-PCR)
Thirteen DEGs related to ovarian development were randomly selected for qRT-PCR analysis to validate the RNA-seq data. Total RNA was extracted from ovarian tissues of the above 12 fish (6 at stage III and 6 at stage IV), with DNase I used to digest the genomic DNA. cDNA was synthesized using the TransScript kit (TransGen Biotech, Beijing, China). qRT-PCR was conducted using the PerfectStart™ Green qPCR SuperMix (TransGen Biotech, Beijing, China) on a LightCycler 96 (Roche). The PCR reaction (20 μL) contained 1 μL cDNA, 10 μL 2× PCR mix, 8.2 μL ddH2O, and 0.4 μL each of the forward and reverse primers (10 μmol·L⁻¹). The qRT-PCR thermocycling conditions were initial denaturation at 95°C for 1 min and 40 cycles of denaturation, primer annealing, and extension at 95°C for 15 s, 60°C for 15 s, and 72°C for 30 s, respectively. B2m was used as the reference gene. In previous studies, B2M/B2m has been recommended as a stable reference gene for human ovaries (Asiabi et al., 2020), bovine granulosa cells, oocytes and cumulus cells (Baddela et al., 2014; Caetano et al., 2019), and rat ovarian granulosa cells (Cai et al., 2019). The 2^−ΔΔCt method was used to calculate the relative expression of the target genes. Each sample was amplified in duplicate. All primers used in the various assays are listed in Supplementary Table 1.
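The relative expression calculation can be illustrated with a minimal sketch of the 2^−ΔΔCt formula; the Ct values below are hypothetical and only show how normalization to the B2m reference and to the stage III calibrator is applied.

```python
# Minimal 2^-ΔΔCt sketch with hypothetical mean Ct values (illustration only).
target_ct_stage3, ref_ct_stage3 = 24.8, 20.1   # calibrator group (stage III): target gene, B2m
target_ct_stage4, ref_ct_stage4 = 22.3, 20.0   # test group (stage IV): target gene, B2m

delta_ct_stage3 = target_ct_stage3 - ref_ct_stage3        # normalize each group to B2m
delta_ct_stage4 = target_ct_stage4 - ref_ct_stage4
delta_delta_ct = delta_ct_stage4 - delta_ct_stage3        # compare test group to calibrator
relative_expression = 2 ** (-delta_delta_ct)
print(f"Relative expression (stage IV vs. stage III): {relative_expression:.2f}")
```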
Statistical analyses
Data were analyzed using the independent sample t-test. A probability level of < 0.05 (P < 0.05) was used to indicate statistical significance. The GraphPad Prism 6 software (La Jolla, CA) was used to conduct statistical analyses. All data are presented as means ± standard error of the mean of three replicates.
Characteristics of S. argus ovary development stages
Based on the ovarian histological characteristics (Figure 1), two representative stages of ovary development were identified: stage III (corresponding to the PV stage of zebrafish) and stage IV (corresponding to the LV stage of zebrafish). At stage III, cortical alveoli or lipid droplets appeared within the cytoplasm of some oocytes, and the ovary was mainly composed of primary growth stage oocytes and cortical vesicle stage oocytes. At stage IV, the ovary was mainly filled with preovulatory oocytes, which were significantly larger and contained abundant yolk granules and lipid droplets.
Transcriptome sequencing and assembly
Sequencing of the mRNAs from the six libraries yielded 312,304,188 raw reads, which were reduced to 310,341,234 clean reads after removing adapter sequences and low-quality reads. The percentages of bases with a Phred value of at least 20 (Q20) and 30 (Q30) were more than 96.71% and 91.71%, respectively (Table 1). The GC contents of all the libraries were around 50% (Table 1). The clean reads were then mapped to the reference genome of S. argus. Notably, an average of >93.18% of the reads mapped to the S. argus genome (Table 1). A total of 25,426 genes, including 23,690 known genes and 1736 novel genes, accounting for 98.03% of the reference genome, were detected (Supplementary Table S2 and Table S3).
Differentially expressed genes of the two libraries
Principal component analysis (PCA) performed before the identification of DEGs revealed that the biological replicates of each group clustered together (Figure 2A), indicating the consistency of the ovary stage grouping. A total of 12,829 genes were detected, with 11,497 genes expressed in both stages, and 448 and 1,332 detected only in stage III and stage IV, respectively (Figure 2B). Comparative expression profiling between stage III and stage IV revealed 3666 DEGs (Figure 2C, Supplementary Table S4). Notably, 2950 and 716 genes were up-regulated and down-regulated, respectively, in the stage IV group compared with the stage III group. A Venn diagram was generated using online software (https://www.omicshare.com/tools/Home/Soft/venn) to classify all the expressed genes and DEGs. Of note, 210 down-regulated genes and 753 up-regulated genes were detected in stage III and IV ovaries, respectively (Figure 2D).
GO and KEGG classification of DEGs
GO analysis and KEGG enrichment were performed after analysis of the gene expression profiles to better understand the biological functions of the DEGs during secondary ovarian growth in S. argus. GO analysis categorized the DEGs into three basic functional categories: biological processes, cellular components, and molecular functions (Figure 4 and Supplementary Table S5). Cellular process (2163), single-organism process (1992), and metabolic process (1729) were the most enriched GO terms in the biological process category. Binding (1966), catalytic activity (893), and structural molecule activity (187) were the most enriched GO terms in the molecular functions category. Cell part (2116), cell (2116), and organelle (1890) were the most enriched GO terms in the cellular components category. KEGG pathway enrichment analysis showed that 93 pathways were significantly enriched (P < 0.05) in the up-regulated DEGs (Figure 5A and Supplementary Table S6). The 93 pathways could be divided into six categories: cellular process (e.g., Focal adhesion, Adherens junction, Endocytosis, Tight junction, and Cell cycle), environmental information processing (e.g., AMPK signaling pathway, Notch signaling pathway, ECM-receptor interaction, MAPK signaling pathway - fly, MAPK signaling pathway, TGF-beta signaling pathway, Hippo signaling pathway, JAK-STAT signaling pathway, Wnt signaling pathway, and FoxO signaling pathway), genetic information processing (e.g., Ubiquitin mediated proteolysis, Nucleocytoplasmic transport, and Fanconi anemia pathway), metabolism (e.g., Lysine degradation, Glycosaminoglycan biosynthesis - heparan sulfate/heparin, Inositol phosphate metabolism, and Glycerophospholipid metabolism), organismal systems (e.g., Axon guidance, Thyroid hormone signaling pathway, Cholesterol metabolism, Insulin signaling pathway, and PPAR signaling pathway), and human diseases (e.g., MicroRNAs in cancer, Proteoglycans in cancer, and Pathways in cancer). Similarly, 28 pathways were found to be significantly enriched (P < 0.05) in the down-regulated DEGs and could also be divided into the same six categories (Figure 5B and Supplementary Table S7).
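For readers unfamiliar with how pathway enrichment P values of this kind are typically obtained, the sketch below applies a one-sided hypergeometric test with illustrative counts; it is a generic example, not the exact procedure used here, and the per-pathway counts are assumptions.

```python
# Hedged sketch: hypergeometric enrichment test for a single pathway (illustrative counts).
from scipy.stats import hypergeom

background_genes = 12829     # expressed genes used as background (illustrative)
pathway_genes = 150          # genes annotated to one pathway (assumed)
deg_total = 3666             # all DEGs
deg_in_pathway = 70          # DEGs annotated to that pathway (assumed)

# P(X >= deg_in_pathway) when drawing deg_total genes at random from the background.
p_value = hypergeom.sf(deg_in_pathway - 1, background_genes, pathway_genes, deg_total)
print(f"Enrichment P value: {p_value:.3e}")  # would still need FDR correction across pathways
```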
Expression of representative DEGs in ovarian growth and development
We analyzed some representative DEGs/genes associated with specific aspects based on the transcriptome data to better understand the biological function of DEGs in ovarian growth and development.
Expression of genes associated with steroid hormone biosynthesis
Ten DEGs (fshr, foxl2, cyp17a1, cyp7b1, hsd17b1, hsd3b1, hsd3b7, pgr, esr1, and ar) associated with steroid hormone biosynthesis were identified from the transcriptome data. All of these genes were significantly up-regulated in the stage IV ovary (Figure 6 and Supplementary Table S8).
Multiple cathepsins, including cathepsins B, L, and D, play roles in vitellogenin hydrolysis during oocyte growth in oviparous animals (Meng et al., 2022). Interestingly, only the expression of ctss met the condition of log2(FC) > 1 and FDR < 0.05 among the 10 cathepsin-encoding genes identified in the stage III and IV ovaries (Figure 7A). The other nine cathepsin-encoding genes included ctsd, ctsb, ctsd-like, ctsc, ctsa, ctsf, ctsz, ctsa, and ctsl. The expression of ctsd, ctsb, ctsd-like, ctsc, and ctsa was up-regulated, while that of ctsf, ctsz, ctsa, and ctsl was down-regulated in the stage IV compared to the stage III ovary (Figure 7B). Nonetheless, all these genes generally showed high expression in both stage III and stage IV, especially ctsa and ctsd-like.
Discussion
Ovary development is pivotal for the successful reproduction of fish and has thus attracted tremendous research interest in aquaculture. Numerous studies have focused on genes regulating ovary development in fish, including S. argus. In recent years, transcriptome analysis has been utilized to identify genes and signal pathways that are critical for specific biological processes of oogenesis in several fish species. Herein, we compared the expression of genes in the stage III and stage IV ovary of S. argus using RNA-seq. The genes and pathways involved in secondary ovarian growth were then identified. The findings of this study enhance the understanding of the molecular regulation of vitellogenesis, lipid droplet formation, steroidogenesis, and meiosis during ovary growth in S. argus. During the secondary growth stage, the ovary accumulates nutrition factors extrinsically from the blood and produces some structural components or materials intrinsically. The liver is the main source of the nutrition factors, including Vtg, lipids, and zona pellucida (Zp) proteins, which are transported into the ovary via the bloodstream. RNA-seq recently revealed that E2 biosynthesis pathway genes were highly activated in ovarian follicles at the late vitellogenic stage in ricefield eel (Monopterus albus) (Meng et al., 2022). Hypothalamic-pituitary-gonadal axis hormones play important roles in ovary development and maturation, including in estrogen-mediated Vtg synthesis. Fshr and Lhr are the two major receptors that mediate the pituitary Fsh and Lh signals, respectively, in the gonads in mammals. Notably, mutation of Fshr and Lhr results in infertility in female mice (Abel et al., 2000; Jonas et al., 2021). In contrast, fshr mutant female zebrafish exhibit follicle activation failure, with follicles arrested at an early stage, and become infertile, while lhr mutant female fish remain fertile (Zhang et al., 2015). Fshr also mediates the signal of Lh in zebrafish (Chu et al., 2014). Loss of fshr also results in blocked ovary development in medaka and a significant decrease in the serum E2 level (Murozumi et al., 2014). The serum E2 and Vtg levels in female individuals with a stage IV ovary are significantly higher than those of individuals with stage II and III ovaries in S. argus (Cui et al., 2017). The up-regulation of serum E2 may be attributed to higher ovarian fshr expression in female S. argus at stage IV ovarian growth. Cytochrome P450 (Cyp)17A1, encoded by cyp17a1, has both 17,20-lyase and 17-hydroxylase activities involved in producing androgens, estrogens, and progestin under Lh regulation (Magoffin, 1989; Zhai et al., 2018). We deduce that cyp17a1 is potentially up-regulated by the Fshr pathway in the stage IV ovary, considering that Fsh and Lh possibly share the same receptor, Fshr. Up-regulation of cyp17a1 is possibly the direct reason for the increase in estrogen synthesis. In addition, other up-regulated DEGs involved in steroidogenesis, including foxl2, hsd17b1, and pgr, were identified (Figure 5 and Supplementary Table S8). Of note, Foxl2 is an activator of cyp19a1a, which codes for aromatase and is critical for estrogen synthesis in fish (Wang et al., 2007; Bertho et al., 2018). Herein, cyp19a1a expression remained unchanged at the mRNA level despite foxl2 being up-regulated in the stage IV ovary (Supplementary Table S3). Alternatively, the Cyp19a1a protein level possibly increased in the stage IV ovary. Future studies should thus determine gene expression at the protein level during ovarian growth and maturation.
FIGURE 7
Histogram showing the expression of genes associated with lipid droplet and yolk production.
Besides regulating Vtg synthesis via estrogen in the liver, uptake of Vtgs by the oocyte is also an important process during the secondary growth stage (Johnson, 2009). The uptake of Vtgs is fulfilled by receptor-mediated endocytosis. Lrp8 and Lrp13 are Vtg receptors in fish (Hiramatsu et al., 2015). In orange-spotted grouper (Epinephelus coioides), the lrp13 mRNA level decreases from the stage II to the stage IV ovary, while western blot (WB) analysis shows that the stage IV ovary has the highest Lrp13 protein level (Ye et al., 2022). In this study, numerous lrp genes, including lrp8, were up-regulated in the stage IV ovary, while lrp13 (EVM0006844, lrp1b) was highly expressed at both stage III and stage IV (Figure 7 and Supplementary Table S9). Both lrp8 and lrp13 potentially act as Vtgr in S. argus, as in other fish. After Vtg is incorporated into the oocytes, it could be processed to form yolk granules. In ricefield eel, RNA-seq analysis revealed that 13 cathepsin genes were highly expressed in the ovarian follicles during some development stages (Meng et al., 2022). Herein, numerous genes involved in Vtg processing (cathepsin genes) were highly expressed in both the stage III and IV ovaries, implying that they might be critical for vitellogenesis in S. argus (Figure 7 and Supplementary Table S9).
Accumulation of lipid droplets is a major characteristic of the stage IV ovary. In Japanese flounder, RNA-seq revealed that numerous genes associated with lipid metabolism in ovaries were up-regulated from the primary growth ovary stage to the late lipid droplet stage. S. argus is a marine fish, and its oocytes store lipids forming lipid droplets to ensure egg buoyancy. The fatty acid binding protein (Fabp) super-family plays an important role in transporting lipids to the cellular metabolic target sites (Chmurzyńska, 2006; Lei et al., 2020). Fatty acids and PPAR agonists can up-regulate the expression of the fabp2, fabp3, and fabp6 genes in zebrafish tissues (Venkatachalam et al., 2013). Dietary fish oil supplementation increases fabp2 expression in the liver, indicating that fabp2 potentially participates in the metabolism of long-chain unsaturated fatty acids in S. argus (Wang et al., 2021b). Herein, ppara, ppard, and fabp3 were up-regulated, while fabp1 and fabp4 were down-regulated in the stage IV ovary (Figure 7 and Supplementary Table S9), indicating that different fabps potentially have different functions. However, the fabps critical for lipid droplet accumulation in S. argus remain to be elucidated. In European sea bass (Dicentrarchus labrax L.), lipoprotein lipase (lpl) is highly expressed in the follicle cells surrounding the oocyte in ovaries with a high gonadosomatic index, indicating that it is critical for lipid droplet formation (José Ibáñez et al., 2008). Herein, lpl was significantly up-regulated in the stage IV ovary, suggesting that it plays a conserved role in lipid absorption in oocytes of S. argus.
FIGURE 9
Histogram showing the selected differentially expressed genes (log2(FC) > 1 and FDR < 0.05) involved in paracrine signaling in stage III and IV ovary of S. argus. The data are based on the FPKM values obtained from the RNA-seq data. *, P < 0.05; **, P < 0.01; ***, P < 0.001.
Ovaries also produce some egg components, such as Zp, which is critical for fertilizing and protecting eggs (Wu et al., 2018). Twenty Zp family gene members have been identified in teleosts and are mostly expressed in fish ovaries (Wu et al., 2018). Notably, six zp gene members are the most significantly expressed genes in the ovary of Japanese flounder (Paralichthys olivaceus), and their expression gradually decreased from the primary growth and early oil droplet to the late oil droplet oocyte groups. In the reference study, the early oocytes needed more Zp protein to meet their oocyte development requirements. Similarly, both zp2 and zp3 were down-regulated at stage IV compared to the stage III ovary in this study (Supplementary Table S12). In black rockfish, immunohistochemistry analysis showed that Zpb2a was detected in the cytoplasm of oocytes at stage III and in the region close to the zona pellucida and cell membrane of oocytes at stage IV, with the strongest protein signal in the zona pellucida observed in stage V oocytes. However, Zp expression at the protein level during ovary development should be studied in the future to elucidate the assembly process at both the mRNA and protein levels. The preparation of cellular components and nutrition factors is well organized during ovary development in fish. Of note, the underlying mechanism of this organization would be an important scientific question in the future.
Besides the accumulation of nutrition and structural proteins in the oocyte, a number of meiosis-related genes may increase their expression before the end of vitellogenesis and eventually lead to ovarian maturation (resumption of meiosis) (Lubzens et al., 2010; Meng et al., 2022). In the ricefield eel ovary, the highest expression of the adenylyl cyclase (adcy) genes occurs at MV and LV. The expression of the cyclic AMP-dependent transcription factor atf-3 gene and the phosphodiesterase (pde) genes observed at the full-grown stage indicates that cAMP signaling pathways potentially play critical roles in oocyte meiotic arrest and resumption (Meng et al., 2022). Herein, many genes involved in the cell cycle, meiosis, and cAMP or cGMP synthesis and hydrolysis were up-regulated in the stage IV ovary compared to the stage III ovary, indicating that oocyte meiosis was potentially restarted, allowing the oocytes to mature.
Many endocrine factors and cytokines involved in regulating multiple cellular processes in the ovary were identified as DEGs in this study. The insulin-like growth factor (Igf) signaling pathway plays an important role in ovary development, including in the resumption of meiosis and final maturation in fish (Ndandala et al., 2022). The Igf signaling pathway consists of ligands (Igf1, Igf2, and Igf3), Igf binding proteins (Igfbp), and Igf receptors. Herein, igf1r, igf2r, igfbp1, igfbp3, and igfbp7 were up-regulated in the stage IV ovary, indicating that the Igf signaling pathway may be enhanced to promote ovary developmental processes. Igf1 expression and secretion are regulated by growth hormone (Gh) secreted from the pituitary gland (Nicholls and Holt, 2016). In S. argus, exogenous E2 up-regulates the mRNA expression of pituitary gh. Pituitary gh and liver igf1 mRNA expression gradually increase from the stage II to the stage IV ovary (Ru et al., 2020). We thus deduce that there is a positive feedback regulation of E2 on pituitary gh. Igf1 and Igf2 enhance the expression of cyp17a1 in the vitellogenic ovary via PI3 kinase in yellowtail (Seriola quinqueradiata); however, there is no such up-regulation in the pre-vitellogenic ovary (Higuchi et al., 2020). Combined with the steroidogenesis-associated genes, we summarize a possible regulatory network between Gh-Igf and the E2 synthesis pathway during ovarian growth and maturation in S. argus (Figure 10).
TGF-beta signaling pathway members are also critical during ovary development in fish. Herein, numerous TGF-beta signaling pathway members involved in ovary development were DEGs, including inhb, bmp1, bmp2k, bmp4, and TGF-beta pathway receptors and smads (Figure 7 and Supplementary Table S9). However, some TGF-beta members, such as bmp15 and gdf9, were not DEGs between the stage III and stage IV ovary in this study (Supplementary Table S12), but we still could not exclude the possibility that they play an important role in regulating ovary development because they were highly expressed. Bmp15 is essential for female sex maintenance in adult zebrafish (Dranow et al., 2016). In the same line, Gdf9 regulates tight junction (TJ)-related genes in the ovary of zebrafish, thereby influencing cellular permeability (Clelland and Kelly, 2011). Herein, several TJ-related genes were differentially expressed and could be regulated by TGF-beta signaling (Supplementary Table S10). These findings collectively suggest that many signaling pathways and genes regulate ovary growth and are highly expressed in the ovary at certain developmental stages. Future studies should focus more on the detailed functions and regulatory networks of these important genes and pathways in ovary development. While the regulatory relationships among these genes were mainly inferred from RNA-seq/RT-qPCR data and existing research in other species, more experimental evidence is required to confirm them.
FIGURE 10
Predicted schematic overview of the yolk synthesis regulatory network in S. argus.
Conclusion
The expression profiles of secondary growth-related genes in the ovaries of S. argus were identified using RNA-seq. There were 3666 DEGs between stage III and IV ovary, regulating steroidogenesis, vitellogenesis, lipid droplet formation, and meiosis. Besides these DEGs, some genes are expressed highly at both ovary stages and are critical for ovary development. The signal pathways and genes associated with ovarian growth are relatively conserved among fish. These results provide baseline data for studying the regulatory mechanisms of oogenesis in S. argus and the artificial propagation of S. argus in aquaculture.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, SRP171076, PRJNA906196.
Ethics statement
The animal study was reviewed and approved by Animal Research and Ethics Committee of Guangdong Ocean University, China.
Author contributions
M-YJ: Investigation, Data curation, Formal analysis, Visualization, Writing-original draft and Funding acquisition; Y-FZ, HL, Y-XP, and Y-QH: Investigation, Data curation and Formal analysis; S-PD: Formal analysis and Visualization; YH and GS: Resources; C-HZ and G-LL: Resources, Project administration, Supervision; D-NJ: Conceptualization, Writing-original draft, Writing-review and editing, Funding acquisition, Project administration, Supervision. All authors contributed to the article and approved the submitted version.
"Biology"
] |
Multi-function Hybrid Microgrid with Four-Leg Voltage Source Inverter
— Unintentional islanding of distributed generation systems is generally avoided as a safety measure implied by worldwide electrical standards. Considering current grid technologies with better sensing capabilities and semiconductor devices with novel capabilities that allow faster response times and higher maximum ratings, there are some applications where distributed resources could improve reliability in the case of intentional islanding. Safe microgrid equipment with better isolation and control features allows efficient use of energy surplus whenever grid fault events occur, feeding emergency energy to critical loads. In this context, a 20 kW hybrid on-grid/off-grid multi-function microgrid is presented along with a simplified approach for designing the contained power inverter used with an intentional islanding function.
I. INTRODUCTION
The market of distributed generation (DG) in Brazil is in constant growth, reaching sales of 1.07 billion dollars in 2018 and installations exceeding 978 MWp (megawatt peak), according to a study published by Greener Brasil (based on data from the Federal Revenue Service, Greener Brasil, and ANEEL) [1], and growth continues at extremely high rates despite Brazil's economic and political situation. In 2017 there were approximately 21,998 DG connections, and in December of 2018 they totaled 52,852 (a growth rate of 140%), with installed capacity varying from 184.8 MWp to 545.6 MWp between 2017 and 2018, respectively. Substantial growth is occurring even in a period of recessive economy, with the evolution of micro- and macroeconomic reforms in progress, especially following the recent approval of the Brazilian pension reform bill. A positive impact is expected for maintaining the incentives for generation systems because their market is small compared to other developed countries. Estimates point to more than 80 million consumer sites where it would be viable to reconcile energy production with consumption; therefore, there is still a gigantic potential for growth of the accumulated DG market (MWp). The developed microgrid system addresses five aspects that the market currently solves only through distinct pieces of equipment: 1) solar inverters operating with a nominal power of 20 kW, which can be cascaded in modules with total power up to 2 MW; 2) uninterruptible power supplies (UPS); 3) local and remote power flow management; 4) auxiliary power unit integration, such as battery banks and fuel cells, to extend reserve power; 5) automatic transfer switch systems for utility grid interconnection. The developed technology consists of a set of control boards: central microprocessor control module, gate drive, sensor data module, communication interface, and human-machine interface. This device is suitable for integrating distributed solar generation as a local source, providing greater energy efficiency and, consequently, accelerated investment gains; its development initially targets theoretical studies to integrate DC photovoltaic generation and battery storage systems. Fig. 1 presents the hybrid microgrid topology to feed critical loads through a solar energy source and battery. A main component of the system is the four-leg voltage source inverter. Two types of power inverters are generally available in the market when considering main grid connection or isolation: off-grid and on-grid inverters. Off-grid inverters are intended to provide energy from DC-generated sources such as photovoltaic power to isolated consumers in alternating current, referred to as island mode operation. On the other hand, on-grid inverters use energy from DC sources to feed alternating current as well as to inject, if possible, surplus energy into the main grid.
Although not yet officially available in the Brazilian market, on-grid inverters with an off-grid function allow both modes of operation, connected to or isolated from the main grid. Unintentional islanding of distributed generation systems is generally avoided as a safety measure implied by worldwide electrical standards. Considering current grid technologies with better sensing capabilities and semiconductor devices with novel capabilities that allow faster response times and higher maximum ratings, there are some applications where distributed resources could improve reliability in the case of intentional islanding. In this paper, we propose a simplified methodology to develop power inverters and a working prototype four-leg voltage source inverter used with the hybrid microgrid system that ensures safe isolation and power to critical loads during grid fault events. The remainder of this paper is organized as follows: Section II presents the power inverter market in Brazil and its functionality, Section III the considered power inverter design approach, and Section IV a case study of the microgrid system, simulations, and the obtained results for the 20 kW prototype.
II. POWER INVERTER MARKET AND FUNCTIONALITY
Representing an important share of the renewable sources applied in the distributed generation market, solar energy in Brazil has great prospects. According to the US Department of Energy, Brazil has a significant potential for renewable energy generation, with installed capacity estimated at 7 GWp by 2024. With current legislation and regulations, the investment forecast for the year 2030 is $21 billion. There may be a variation in the growth of this market with the revision of Resolution 482/2012 of ANEEL, currently underway, which is expected to introduce a form of compensation to the electricity distribution companies for the use of the grid to offset energy generation and consumption. This change will probably reduce investment returns on systems as of 2020.
Current business models must comply with standards as well as design, manufacturing, and installation restrictions of system components, where modular configurations are more capable of adapting to different applications. Among existing equipment configurations, the DC power supply type may vary considering the ability to integrate photovoltaic generator units, wind generators, hydrogen fuel cells, and biofuel motor-generator sets. Other equipment includes battery banks, DC/DC converters, Uninterruptible Power Supply (UPS) controllers, and rectifiers, depending on load type and expected power availability.
In Brazil, a UPS is commonly referred to as a no-break, and serves a significant portion of the commercial, industrial, and residential sectors, such as apartment complexes and rural businesses. Today, according to [3], this market is estimated at $400 million, with production of large equipment (from 3 to 500 kVA) around 100,000 units/year. An important feature of a UPS is the time to reestablish energy, which should be less than one period of the sine waveform; given the standard 60 Hz grid frequency, this corresponds to approximately 15 ms. There are recent products in the market promoted as short-breaks, in which, for less demanding applications, energy is reestablished over a longer period, usually two or three periods (30-45 ms), possibly providing an ideal cost-benefit for systems that can tolerate brief discontinuities.
The presented inverter topology consists of a bridge configuration, whose main advantage according to [4] is to lessen wear of the commutating components caused, for example, by current spikes, overvoltage and overtemperature. Fig. 1 shows the basic arrangement used for the hybrid inverter considered in the project, denominated the Three-Phase Four-Leg Voltage Source Inverter (FLVSI). The circuit input is a DC voltage source with a capacitor filter, followed by the four-leg switching bridge. One of the main challenges in DC/AC converter design is minimizing switching losses and parasitic circuit elements, which prevent efficient operation in high-power applications. Shoot-through events during switching cycles, caused by premature state changes, act as short-circuits across the source and must be avoided by inserting so-called "dead times". Another point to keep in mind is that inverter operation should be little affected by the applied load; the extreme conditions are no load (open circuit) and a very low-impedance load (short circuit), both of which can damage the components.
There are several techniques used to generate high-frequency switching, the simplest being Pulse Width Modulation (PWM). Among the various PWM techniques, the most widely implemented [6] is sine-wave PWM, or SPWM, due to its relative simplicity. The modulating signal used to control gate triggering is obtained from the comparison of a sine wave (Vcontrol) with a triangular carrier wave (Vtri), as illustrated in Fig. 2. The upper gate devices are turned on, releasing current, when the polarization voltage is positive and are blocked when the voltage is negative. The lower gate devices follow a similar scheme, conducting negative voltage when the upper gates are blocked.
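The comparison of the sinusoidal reference with the triangular carrier can be illustrated with a short sketch. The snippet below (Python/NumPy) is a minimal SPWM model for one inverter leg; the carrier frequency, modulation index and the omission of dead time are simplifying assumptions, not the prototype's actual settings.

```python
import numpy as np

# Minimal SPWM sketch: compare a sinusoidal reference (Vcontrol) with a
# triangular carrier (Vtri) to obtain the upper-switch gate signal of one leg.
f_ref, f_carrier = 60.0, 6000.0          # reference and carrier frequencies [Hz]
m_a = 0.9                                # amplitude modulation index (< 1.0)
t = np.linspace(0.0, 1.0 / f_ref, 10000) # one fundamental period

v_control = m_a * np.sin(2 * np.pi * f_ref * t)
# Symmetric triangular carrier with unit amplitude.
v_tri = 2.0 / np.pi * np.arcsin(np.sin(2 * np.pi * f_carrier * t))

gate_upper = (v_control > v_tri).astype(int)   # 1 -> upper switch conducting
gate_lower = 1 - gate_upper                    # complementary (dead time ignored)

print(f"average duty cycle of the upper switch over one period: {gate_upper.mean():.2f}")
```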
When the output phases are not balanced, even though the angular difference is 120°, there is an undesirable common mode voltage fed to consumers that can damage equipment. To reduce common mode voltage, [7][8][9][10][11] suggest that the neutral connection obtained from the inverter topology be controlled from a fourth leg parallel to the switching bridge, providing a means to adjust the neutral voltage point so that it is symmetrical with respect to the phases.
III. POWER INVERTER DESIGN APPROACH
Designing a new inverter primarily involves determining the switching element specifications and the thermal dissipation layout. After defining the inverter topology, the next step is to solve (1)-(12) to obtain an estimate of the IGBT characteristics and select a semiconductor for the design, and (13)-(21) to obtain a suitable heatsink. Among other variables, the DC source characteristics, the expected input voltage, and the output voltage and power requirements are needed to begin.
A. Determining Switching Element Specifications
The amplitude modulation index (m_a) can be expressed as the ratio of the theoretical peak-to-peak output voltage to the input voltage:

m_a = 2√2 · V_ORMS / V_imin    (1)

where V_ORMS is the nominal AC output voltage and V_imin is the minimum DC input voltage. The inverter must supply the specified output voltage when the input voltage is at its lowest; this constraint is not a problem when the input voltage is higher.
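As a hedged numerical illustration of Eq. (1) as described above, the sketch below computes the modulation index from assumed output and minimum DC-link voltages; the 2√2 factor simply converts the RMS output voltage into its peak-to-peak value, and both voltage values are illustrative, not the prototype's design data.

```python
import math

# Sketch of Eq. (1): modulation index as the ratio of the theoretical
# peak-to-peak output voltage to the minimum DC input voltage.
V_o_rms = 220.0   # nominal AC output voltage [V], assumed
V_i_min = 700.0   # minimum DC link voltage [V], assumed

v_out_peak_to_peak = 2.0 * math.sqrt(2.0) * V_o_rms
m_a = v_out_peak_to_peak / V_i_min
print(f"amplitude modulation index m_a = {m_a:.3f}")  # should stay below 1.0
```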
The modulation index should not exceed 1.0; otherwise there is overmodulation, which means that, with the SPWM technique, the amplitude of the first harmonic is no longer linearly proportional to the modulation index [11]. There are improvements such as third-harmonic injection (THI-SPWM), in which a third harmonic of the reference sine wave is added to flatten the region of peak voltage and increase the linear operating range. Because the third-harmonic components are applied equally to the three phases, they cancel in the line-to-line voltages and do not compromise the output generated by the inverter. Since the objective here is to present a simple inverter design methodology, the details of these techniques are outside the scope of this paper.
The output voltage waveform with SPWM modulation for a given phase is expressed by (2), where φ is the waveform phase and Vi the DC input voltage. The theoretical apparent power (PAP) is estimated from (3), where PAT is the expected active power and cos θ is the expected power factor. The nominal current is expressed by (4), and the equivalent output load and inductance by (5) and (6), respectively. The current applied to the IGBT module is equivalent to the peak output current (7). The freewheeling diodes in parallel with the IGBTs dissipate approximately the same peak output current at peak power (8). The nominal and average current integrations for the IGBT modules under SPWM switching [12] are given by (9) and (10), and those for the freewheeling diodes by (11) and (12).
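A rough sizing sketch along the lines of Eqs. (3)-(8) is given below. Because the original equations are not reproduced here, standard three-phase relations are assumed for the nominal current, and all numerical values (power factor, line voltage) are illustrative rather than the prototype's design data.

```python
import math

# Hedged sizing sketch: apparent power, nominal output current and the peak
# current seen by each IGBT / freewheeling diode, in the spirit of Eqs. (3)-(8).
P_active = 20e3          # expected active power [W] (20 kW prototype)
power_factor = 0.9       # assumed cos(theta)
V_line_rms = 380.0       # assumed line-to-line output voltage [V]

S_apparent = P_active / power_factor                  # Eq. (3) as described
I_nominal = S_apparent / (math.sqrt(3) * V_line_rms)  # assumed 3-phase relation
I_peak = math.sqrt(2) * I_nominal                     # peak of a sinusoidal output

print(f"apparent power  : {S_apparent / 1e3:.1f} kVA")
print(f"nominal current : {I_nominal:.1f} A rms")
print(f"IGBT/diode peak : {I_peak:.1f} A (Eqs. (7)-(8))")
```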
B. Thermal Dissipation Layout
The device is cooled by forced air flowing over heatsinks in contact with the active elements. The following equations refer to the schematic shown in Fig. 4, where Rth-jc is the thermal resistance from each component to its packaging; Rth-cs is the thermal resistance of the packaging (case to sink); Rth-ds is the thermal resistance of the heatsink; Tj is the junction temperature; Pigbt is the power dissipated by the IGBT; Pdiode is the power dissipated by the freewheeling diode; and Pmod is the power dissipated by the IGBT-diode pair module.
The sum of the power dissipated in conduction and switching in each IGBT is described by (13)-(14), considering the adjustment of the parameters Eon and Eoff, the energies dissipated at switch turn-on and turn-off, respectively, corrected for the chosen operating voltage and current.
The power dissipated by each freewheeling diode is obtained considering the influence of the parameter Erec, the reverse recovery energy, corrected for the operating voltage and current conditions, where Vto and Rt are the forward-polarization voltage and resistance of the diode obtained from the datasheet specifications.
The chosen module type is a pair of IGBTs and diodes, and the power dissipated by a single module and the total dissipation over a switching cycle follow from the per-device losses. The junction temperature can be estimated as 85% of the maximum junction temperature specified for the IGBT, which leaves a reasonable margin. The thermal resistance of the heatsink (Rth-diss) is then a function of the chosen ambient temperature (TAMB). With the heatsink's thermal resistance, the junction temperature of the IGBT can be calculated, and the heatsink temperature can also be estimated. With the thermal resistance and temperature determined, a sufficient heatsink model can be chosen. An additional cooling module, whether forced ventilation or liquid cooling, may be required depending on the application to protect the components from overheating.
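The thermal chain described above can be illustrated with the following hedged sketch, which works backwards from an allowed junction temperature (85% of the datasheet maximum, as suggested in the text) to the required heatsink thermal resistance. The per-device losses and thermal resistances are placeholder values, not data from the selected IGBT module.

```python
# Hedged sketch of the heatsink sizing chain in Section III-B.
T_j_max = 150.0                  # datasheet max junction temperature [C], assumed
T_j_allow = 0.85 * T_j_max       # design margin used in the text
T_amb = 40.0                     # chosen ambient temperature [C], assumed

P_igbt, P_diode = 55.0, 20.0     # dissipation per IGBT / per diode [W], assumed
P_module = 2 * (P_igbt + P_diode)    # half-bridge module: 2 IGBTs + 2 diodes
n_modules = 4                        # one module per leg of the four-leg bridge
P_total = n_modules * P_module       # total power the heatsink must remove

R_jc = 0.30   # junction-to-case resistance of one IGBT [K/W], assumed
R_cs = 0.05   # case-to-heatsink resistance of one module [K/W], assumed

# Maximum heatsink-to-ambient resistance keeping the hottest junction at T_j_allow.
T_sink_max = T_j_allow - P_igbt * R_jc - P_module * R_cs
R_diss = (T_sink_max - T_amb) / P_total

print(f"total dissipation     : {P_total:.0f} W")
print(f"max heatsink temp     : {T_sink_max:.1f} C")
print(f"required Rth heatsink : {R_diss:.3f} K/W or lower")
```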
IV. CASE STUDY OF THE HYBRID MICROGRID SYSTEM
Simulations were performed with PSIM™ software version 9.1.1.400; the detailed schematics are shown in Figs. 5 and 6. The input power source is a set of photovoltaic panels with a total nominal power above 20 kW. The parameters used for the photovoltaic (solar cell) block available in the software took into consideration solar farm plans of 3 strings with 17 panels in series each. According to [13], for equipment conformance labeling, the Conformity Assessment Requirements apply to the following equipment: photovoltaic modules; battery charge/discharge controllers; inverters for autonomous systems with nominal power between 5 and 10 kW; inverters for grid-connected systems with rated power up to 10 kW; and battery bank systems. The test procedures are based on the minimum equipment requirements of the current standards. Although the standards contain no precise specification for hybrid generation and storage systems equipped with static transfer switches, the different test procedures were carried out to ensure the safety and quality of the equipment. Test results for the three-phase four-leg voltage source inverter demonstrated stable operation of the equipment as designed.
V. CONCLUSIONS
We have presented a simple design methodology for power inverters and a working prototype, with the prospect of contributing to the development of Brazilian regulations regarding solar inverters above 10 kW and hybrid on-grid/off-grid systems.
In order to certify distributed generation equipment, it is necessary to submit it to the tests specified by [13], which were not designed specifically for hybrid inverters. The regulations also do not require certification of inverters above 10 kW. We hope that our work will contribute to the development of certification processes in Brazil for these technologies.
"Engineering"
] |
Comparison of Various PID Control Algorithms on Coupled-Tank Liquid Level Control System
Because traditional PID control performs unsatisfactorily on complicated or nonlinear control systems, this paper focuses on several intelligent control methods, including fuzzy self-adaptive PID control, BP neural network PID control and RBF neural network PID control. Taking a double-tank model as an example and using MATLAB programming, the study realizes tank level control with four PID control algorithms. By comparing the simulation curves of the four control algorithms, the superiority of the RBF neural network, which can approximate any continuous function with arbitrary accuracy, is demonstrated.
Introduction
PID control realizes the relationship between the control signal and the controlled object by passing the deviation signal through a linear combination of proportional (P), integral (I) and derivative (D) terms [1]. Thanks to characteristics such as a simple algorithm, high reliability and robustness, it has been applied in a wide variety of control fields with excellent results [2]. However, for many real production processes, traditional PID control is not suitable for closed-loop optimization control or complex nonlinear systems, owing to complicated parameter tuning and other factors. Against this background, intelligent PID parameter tuning emerged, including fuzzy control of the input variables and control based on artificial neural networks. This paper summarizes the theory of traditional PID, fuzzy PID, BP neural network PID and RBF neural network PID control. Taking a coupled tank as an example, the water tank level under the four PID controllers is simulated in MATLAB. Comparison of the simulated curves shows that the RBF neural network yields the best liquid level control for the coupled tank.
Mathematical Model for the Coupled Tank
The mathematical model of the coupled tank is shown in Fig. 1 (schematic representation of the coupled-tank system). The differential equation of the coupled tank is derived from the tank material balance equation (1), where T1 refers to the time constant of the upper tank. The liquid level of the lower tank is selected as the controlled object in order to study how the level changes under the influence of the inlet valve, the outlet valve opening and other factors.
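For readers who want to reproduce the plant, a minimal sketch of the double-capacity tank as two first-order lags in cascade is given below; the time constants, gain and Euler integration step are assumptions, since the paper's numerical parameters are not reproduced here.

```python
import numpy as np

# Coupled-tank sketch: two first-order lags in cascade, consistent with the
# time constants T1 (upper tank) and T2 (lower tank) named in the text.
T1, T2, K = 10.0, 20.0, 1.0          # time constants [s] and gain, assumed
dt, t_end = 0.01, 60.0
n = int(t_end / dt)

h1 = h2 = 0.0                         # level deviations of upper / lower tank
u = 1.0                               # unit step on the inlet valve
levels = np.empty(n)
for k in range(n):
    h1 += dt * (K * u - h1) / T1      # upper tank dynamics
    h2 += dt * (h1 - h2) / T2         # lower tank fed by the upper tank
    levels[k] = h2

print(f"lower-tank level after {t_end:.0f} s: {levels[-1]:.3f} (asymptote = {K * u})")
```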
Control Principle and Results of Traditional PID Control
The traditional PID controller applies the deviation signal to the controlled object through proportional, integral and derivative terms. The deviation e(t), namely the difference between the liquid level output and the preset value, is the input signal of the PID controller. The control law of traditional PID control can be expressed by (3), where u(t) is the output signal of the PID controller, k_p is the proportional constant, T_i is the integral time, T_d is the derivative time, and e(t) is the input signal of the PID controller. On the basis of traditional PID theory, the paper adopts the incremental PID control algorithm, i.e., the analog signal is discretized and the control acts on the data acquired through the deviation calculation, as in (4)-(5). The operational formula of the incremental PID algorithm (6) is obtained from the difference between the control quantity at time k and that at time k-1. The incremental PID level control simulation curve for the double-capacity water tank is obtained by programming these formulas.
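A compact implementation of the incremental (velocity-form) PID law described above is sketched below; the gains and the first-order demo plant are illustrative choices, not the tuning used in the paper.

```python
class IncrementalPID:
    """Velocity-form (incremental) PID:
    u(k) = u(k-1) + Kp*(e_k - e_km1) + Ki*e_k + Kd*(e_k - 2*e_km1 + e_km2).
    The gains are discrete-time gains (Ki already includes the sample time)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_km1 = 0.0   # e(k-1)
        self.e_km2 = 0.0   # e(k-2)
        self.u = 0.0       # last control output

    def step(self, error):
        du = (self.kp * (error - self.e_km1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e_km1 + self.e_km2))
        self.u += du
        self.e_km2, self.e_km1 = self.e_km1, error
        return self.u


# Brief demo on a first-order plant y' = (u - y) / T with assumed gains.
pid = IncrementalPID(kp=2.0, ki=0.1, kd=0.5)
y, T, dt = 0.0, 5.0, 0.1
for _ in range(600):                 # 60 s of simulated time
    u = pid.step(1.0 - y)            # setpoint = 1.0
    y += dt * (u - y) / T
print(f"plant output after 60 s: {y:.3f}")
```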
Figure 2. Results of incremental PID
It can be seen from Fig. 2 that the traditional incremental PID algorithm requires more oscillations. The adjusting time is 22 s and the stabilization time is 25 s, which is a long time.
Fuzzy Self-adaptation Setting PID Control and Results
During industrial production, the characteristics and parameters of the controlled objects change under the influence of the load and disturbance factors [3]. Adaptive control in modern control theory has the advantage of changing the control strategy in real time and improving control quality; however, the control effect is closely related to the precision of the identification model, so for complicated, large-scale systems with an imprecise model the control effect will be poor. The fundamental principle of the fuzzy self-adaptive PID controller is to take the error e and the error change ec as controller inputs so that the PID parameters can be self-tuned from e and ec at any time. A fuzzy control rule table is established to modify the PID parameters [4].
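The rule-table idea can be sketched compactly. The snippet below shows the mechanism for a single gain (an increment for Kp) with three fuzzy sets per input and weighted-average defuzzification; the membership ranges and rule entries are illustrative assumptions, whereas the paper uses fuller rule tables for Kp, Ki and Kd.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(x, span):
    """Degrees of membership in Negative, Zero, Positive over [-span, span]."""
    return np.array([tri(x, -2 * span, -span, 0.0),
                     tri(x, -span, 0.0, span),
                     tri(x, 0.0, span, 2 * span)])

# Rule table: rows = e in (N, Z, P), columns = ec in (N, Z, P).
# Entries are increments for Kp: a large error magnitude raises Kp.
DELTA_KP = np.array([[0.4, 0.3, 0.2],
                     [0.1, 0.0, 0.1],
                     [0.2, 0.3, 0.4]])

def fuzzy_delta_kp(e, ec, span=1.0):
    mu_e, mu_ec = memberships(e, span), memberships(ec, span)
    weights = np.outer(mu_e, mu_ec)          # rule firing strengths
    total = weights.sum()
    return float((weights * DELTA_KP).sum() / total) if total > 0 else 0.0

print(fuzzy_delta_kp(e=0.8, ec=-0.2))  # large error -> positive Kp increment
```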
An algorithmic program was implemented in MATLAB, preparing the fuzzy control rule tables for KP, KI and KD and the fuzzy self-adaptive PID control of the double-capacity water tank. The liquid level control simulation results of the fuzzy self-adaptive PID double-capacity water tank are shown in Fig. 3 below: Fig. 3(a) is the simulation output curve of the fuzzy PID double-capacity water tank level, and Fig. 3(b) is the error curve.
It can be seen from Fig. 3 that fuzzy self-adaptive PID tuning requires less oscillation than traditional PID control. However, it still requires a long stabilization time of 19 s.
BP Neural Network PID Control and Results
A BP neural network processes data automatically in a complex network environment and analyzes the normal running status of the system based on the defined learning rate, learning step size and performance index [5]. Using a BP network, a self-learning PID controller can be established for KP, KI and KD, realizing PID control optimization through self-learning of the neural network and adjustment of the weighting coefficients of the neural network outputs according to a given control rule. Figure 4 shows the BP neural network PID controller structure. The control algorithm steps are summarized as follows:
Figure 4. BP neural network PID controller structure
(1) Determine the structure of the input layer, hidden layer and output layer of the BP neural network, including the number of neurons in each layer and the activation function shown in formula (7); assign initial values to the weights of each layer and choose an appropriate learning rate and step size [6]. (2) Sample the system to obtain the current setpoint and output. (3) Process the neuron data, computing the input and output of each layer of the neural network with the defined parameters [7]. (4) Calculate the output of the PID controller from formula (6). (5) Perform adaptive learning, realizing automatic adjustment of the weights under the given rules and automatic tuning of the final output parameters KP, KI and KD [8][9]. (6) Set k = k + 1 and return to step (1). The liquid level control simulation results of the BP neural network PID double-capacity water tank are shown in Fig. 5 below: Fig. 5(a) is the simulation output curve of the BP neural network PID double-capacity water tank, and Fig. 5(b) is the error curve.
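To make the structure concrete, the sketch below implements only the forward pass of such a tuner: three inputs pass through one hidden layer, and three sigmoid-bounded outputs are scaled to Kp, Ki and Kd. The layer sizes, gain ranges and random initial weights are illustrative, and the self-learning weight update of steps (4)-(5) is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(5, 3))   # input (3) -> hidden (5)
W2 = rng.normal(scale=0.5, size=(3, 5))   # hidden (5) -> output (3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pid_gains(setpoint, output, error, gain_ranges=(5.0, 1.0, 2.0)):
    """Forward pass only: map (setpoint, output, error) to positive PID gains."""
    x = np.array([setpoint, output, error])
    hidden = np.tanh(W1 @ x)                    # hidden-layer activation
    out = sigmoid(W2 @ hidden)                  # each output in (0, 1)
    kp, ki, kd = out * np.array(gain_ranges)    # scale to usable gain ranges
    return kp, ki, kd

print(pid_gains(setpoint=1.0, output=0.2, error=0.8))
```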
RBF Neural Network Setting PID Control
The radial basis function (RBF) neural network is a three-layer feed-forward network with a single hidden layer. It is a local approximation network, and it has been proved that it can approximate any continuous function with arbitrary precision [10][11][12].
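The approximation property cited above can be illustrated with a small sketch: a single hidden layer of Gaussian units, fitted by linear least squares, reproduces a smooth test function closely. The target function, number of centres and width are arbitrary choices made for illustration.

```python
import numpy as np

x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) + 0.5 * x          # continuous function to approximate

centres = np.linspace(-3, 3, 15)
width = 0.5
# Design matrix of Gaussian basis functions phi_j(x) = exp(-(x - c_j)^2 / (2*w^2)).
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

weights, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ weights
print(f"max absolute error over the grid: {np.max(np.abs(approx - target)):.4f}")
```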
The principle of RBF neural network PID tuning is as follows. The control error e(k) is given by formula (8), and the three inputs of the PID controller by formula (9). The incremental control algorithm is given by formula (6), and the performance index function by formula (10). The dynamic adjustment of the proportional, integral and derivative coefficients KP, KI and KD is the same as for the BP neural network; using negative gradient descent, the computation follows formula (11). The paper adopts 3 neurons in the RBF network input layer, 6 neurons in the hidden layer and 1 neuron in the output layer, forming a 3-6-1 network structure. The liquid level control results are shown in Fig. 6 below: Fig. 6(a) is the output curve of the RBF neural network PID double-capacity water tank, and Fig. 6(b) is the error curve. It can be seen from Fig. 6 that the time required by the RBF control algorithm is decreased significantly. However, the result is somewhat idealistic, since the transfer function used is quite simple.
Comparison of the Results of Four PID Control Algorithms
The simulation curves of traditional PID, fuzzy PID, BP neural network PID and RBF neural network PID on the double-capacity water tank level control system were obtained as shown above. The parameters of the four control modes obtained by comparing the simulation curves are listed in Tables 1 and 2. From the results in Tables 1 and 2, combined with the curves of the four control methods, it can be seen that the traditional PID algorithm requires more oscillations, a longer adjusting time and a higher overshoot; the difference in adjusting time between the fuzzy PID algorithm and the BP neural network PID algorithm is not significant, but the overshoot of BP neural network PID is much lower than that of fuzzy PID. Compared with the former three algorithms, the RBF neural network has no overshoot and achieves stability in the shortest time. We conclude that the RBF neural network algorithm is the best method for double-capacity water tank level control.
"Computer Science"
] |
Another Protocol to Make Sulfur-Embedded Ultrathin Sections of Small Extraterrestrial Samples
Another protocol to make sulfur-embedded ultrathin sections was developed for STXM-XANES, AFM-IR and TEM analyses of organic materials in small extraterrestrial samples. Polymerized liquid sulfur, instead of low-viscosity liquid sulfur, is the embedding medium in this protocol. Because of the high viscosity of the polymerized sulfur, the embedded samples stay near the surface of the polymerized liquid sulfur, which facilitates trimming of the glassy sulfur and ultramicrotomy of tiny embedded samples. In addition, well-continued ribbons of ultramicrotomed sections can be obtained, which are suitable for the above-mentioned analyses. Because there is no remarkable difference in the carbon XANES spectra of Murchison IOM prepared by this protocol and by the conventional protocol, this protocol offers an alternative way to prepare sulfur-embedded ultramicrotomed sections.
Introduction
Sulfur-embedding ultramicrotomy was originally devised to measure the energy-loss near-edge structure (ELNES) of light elements such as carbon, oxygen and nitrogen in organic material included in interplanetary dust particles (IDPs), using an electron energy-loss spectrometer (EELS) mounted on a transmission electron microscope (TEM) [1]. More recently, sulfur-embedded ultramicrotomed sections have been used for X-ray absorption near-edge spectroscopy of light elements such as carbon, oxygen and nitrogen in the organic material of IDPs, meteorites, the returned Comet 81P/Wild 2 dust particles and ancient terrestrial samples, using scanning transmission X-ray microscopy (STXM-XANES) at synchrotron facilities, e.g., [2][3][4][5][6][7][8], which has enabled in situ analysis of submicron-sized extraterrestrial organic materials. Focused ion beam (FIB) processing has also become more common for preparing thin samples for light-element STXM-XANES analysis of organic materials in extraterrestrial samples, e.g., [9][10][11][12][13][14].
In the previous methods, small fragments of crystalline sulfur together with a sample were melted on a glass slide to make a sulfur melt droplet containing the sample. After solidification of the droplet, it was removed from the glass slide and attached to a stub with glue, e.g., [1,16]. In this protocol (Figure 1), we excluded these removal and attachment procedures. This avoids thin cracks in the sulfur droplet due to mishandling, which could introduce organic contamination through the cracks. In addition, translucent or even transparent sulfur droplets are obtained (Figure 2), which greatly improves the visibility of fine-grained samples under microscopes during trimming of the glassy sulfur and ultramicrotomy of tiny samples. Detailed protocols to obtain the sulfur droplets are described later.
Stainless Stub
To obtain clear sulfur droplets, stainless-steel stubs were specially designed for this new protocol (Figure 3). The lower part of the stub is 8 mm in diameter, which is common to typical epoxy stubs for ultramicrotomy. The upper part of the stub is 2 mm in diameter and 2 mm in height. Stubs made of other materials, such as heat-resistant glass, can certainly be used. We selected stainless steel because it can easily be machined on a lathe. After manufacturing the stubs, machine oil should be carefully removed from them: the stubs should be cleaned by ultrasonication in acetone for 5 min, the process should be repeated 3 times, and the stubs should be wiped well with cleaning tissues.
Use of Viscous Liquid Sulfur
Although some researchers recognize the usefulness of viscous polymerized liquid sulfur, low-viscosity liquid sulfur has been used in the conventional, widely used protocols, e.g., [14]. In our protocol, polymerized sulfur is used. Sulfur is a unique material that shows equilibrium polymerization in the liquid state [17][18][19][20]. The viscosity of liquid sulfur increases strikingly around 159.4 °C [19], which is known as the λ temperature (Tλ). Above Tλ, as cyclic octa-atomic sulfur (S8) units polymerize, the viscosity of liquid sulfur increases [21]. Highly viscous liquid sulfur is needed to embed a sample because it serves to prevent depolymerization of the liquid sulfur during quenching. It also prevents the sample from sinking to the bottom of the liquid sulfur droplet. The average S8 polymer chain length reaches its maximum value at ~170 °C [21], and the average polymer chain length is related to the viscosity of the liquid sulfur.
Because the color of liquid polymeric sulfur is dark yellow ([19] and references therein), we are able to recognize polymerization by watching the color change of the droplet under a binocular stereomicroscope. A compact hotplate is very useful for performing micromanipulation under a binocular stereomicroscope. In this protocol, the sulfur droplet is heated to ~170-180 °C based on measurement with a contact thermometer. Because the polymerization rate of liquid sulfur is temperature dependent [21], it takes time to make a highly viscous liquid sulfur. In the following sections, we describe the details of this protocol according to the flow chart shown in Figure 1.
Details of the Protocol 1: Melting of Sulfur on a Hot Plate
Sulfur powder (purity 99.99%) or a small fragment of sulfur crystal (purity 99.999%) is set on the top side by using a tiny medicine spoon or a pair of tweezers. Then the stub is carefully moved to a small hot plate that can be used under a stereomicroscope ( Figure 4). Figure 2 shows glassy sulfur droplets with suitable sizes for ultramicrotomy viewed from the direction normal to the top side of the stubs. When the edge of a sulfur droplet is at the top face rim of the stub as shown in Figure 2, the height of the droplet is high enough to prevent a diamond knife from hitting against the stainless stub during ultramicrotomy. In case that the amount of sulfur becomes less, the stub must be removed from the hotplate and sulfur powder or fragment must be added so that the sulfur is melted again.
Because sulfur fumes are toxic for humans, we use local ventilation equipment to decrease the aspiration of toxic sulfur fumes during melting of sulfur and the embedding processes. The local ventilation equipment may also serve to reduce corrosion of glass microscope optics by sulfur fumes.
Details of the Protocol 2: Recognition of Polymerization of Liquid Sulfur
As described in Section 3.2, we use viscous polymerized liquid sulfur for embedding a small sample. In our case, when we set ~195 °C on the display of the hot plate controller, the top face of a stainless stub reaches 170-180 °C. It was difficult to measure the exact temperature of the top face of the stubs with a contact thermometer because the instrument readings fluctuated between ~170 and ~180 °C. It takes 3-5 min to make a sulfur droplet viscous starting from sulfur powder. In contrast, it takes at least ~20 min to make a sulfur droplet viscous enough starting from a sulfur crystal. However, when we remelt glassy sulfur that was once made by melting a sulfur crystal, the sulfur droplet becomes viscous rapidly, as is the case for droplets made from sulfur powder. After a liquid sulfur droplet becomes viscous, the droplet is stirred with a thin (10 µm in diameter) tungsten probe using a micromanipulator. A ready-made thin tungsten probe can certainly also be used. Occasionally, tungsten probes may shed small opaque particles that can be confused with the embedded particle itself; the tungsten probe must be kept under observation in the liquid sulfur so that such particles are not confused with the sample. After stirring, most of the liquid sulfur attached to the tungsten wire is removed, except for a very small droplet of liquid sulfur on the tip of the wire. The removed sulfur is attached to the cylindrical face of the upper part of the stainless stub (Figure 5c). Based on our trials, ~70 to ~80% of the glassy sulfur droplets remain mostly transparent until the next day. Ideally, however, ultrathin sections should be made on the same day.
Details of the Protocols 3-5: Picking-Up and Embedding of Fine-Grained Samples and Solidification of Liquid Sulfur
After protocol 2, a small sample is picked up from a sample holder with the tungsten probe under the stereomicroscope (Figure 5e). A manual XY stage mounted on the binocular stereomicroscope is useful for switching the field of view quickly from the hot plate to the sample holder. Because the liquid sulfur is very sticky, the tungsten probe should be withdrawn from the liquid sulfur with care (Figure 5g). The sample is embedded in the sulfur droplet under the stereomicroscope; the best depth for the embedded sample is ~50-100 µm from the top of the droplet. After embedding the sample, the stub is rapidly transferred with tweezers into a small refrigerator to quench the polymerized liquid sulfur. The stub is stored in the refrigerator for 20-30 min to solidify in a glassy state. Alternatively, the stub is set on a cold aluminum slab that was cooled to ~10 °C in the refrigerator, after the stub itself was cooled in the refrigerator for 0.5-1 min. As a result, relatively clear glassy sulfur with low turbidity is obtained (Figure 2). In Figure 2, there are small cloudy spots on the surfaces of the glassy sulfur droplets; these spots are radial aggregates of acicular sulfur crystals. If the sample is not incorporated in such a spot, there is no problem in making ultrathin sections of the embedded sample. However, if the sample is unfortunately incorporated in such a spot, the stub must be heated again to melt the glassy sulfur, the sample must be picked out of the liquid sulfur with a tungsten probe, and the protocol must be repeated.
Details of the Protocol 6: Trimming of Glassy Sulfur by Using a Diamond Knife
After solidification of the sulfur, the stub is set on a chuck intended for samples embedded in cylindrical capsules. In this protocol, a diamond trimming knife with inclined edges is used for preparing well-continued ribbons of ultrathin sections; trimming by freehand scraping would take much longer to finish. After setting, the top of the glassy sulfur droplet is removed with the diamond trimming knife until the embedded sample can easily be recognized with the binocular stereomicroscope mounted on the ultramicrotome. The cutting speed is 0.7 mm/s and the section thickness during trimming is 0.5 µm.
The chuck is removed from the goniometer of the ultramicrotome to measure the depth of the embedded sample from the surface of sulfur by using a microscope with scale marks of 1-µm intervals. It is important to record the angle between a marker on the chuck and a mark on the goniometer ( Figure 6) because the chuck is rotated 90 degrees clockwise and anticlockwise in the later trimming processes. Because the refractive index of molten sulfur is 1.91-1.93 [22], the real depth of the embedded sample is almost twice the apparent (measured) depth. However, because it is difficult to see the top of the embedded sample as shown in Figure 7a,b, it is safe to remove the glassy sulfur by measuring the depth (apparent thickness) after the chuck is set on the ultramicrotome with the same configuration. After repeating this process a few times, the top of the embedded sample comes just below the surface (a few µm) of the embedding sulfur.
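The depth correction can be checked with a two-line sketch: viewed from air, a feature at real depth d inside a medium of refractive index n appears at depth d/n, so the real depth is recovered by multiplying the measured depth by n (about 1.9 for sulfur, matching the "almost twice" rule of thumb above). The apparent-depth value below is only an example.

```python
# Depth correction for a feature observed through glassy sulfur from air:
# apparent depth = real depth / n, so real depth = apparent depth * n.
n_sulfur = 1.92            # mid-range of the 1.91-1.93 values quoted in the text
apparent_depth_um = 40.0   # depth read with the calibrated microscope, assumed

real_depth_um = apparent_depth_um * n_sulfur
print(f"apparent {apparent_depth_um:.0f} um -> real ~{real_depth_um:.0f} um")
```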
Conceptual diagrams of the trimming processes are shown in Figure 8. After cutting the top face, the trimming knife is moved to cut one pyramidal side and the base face (Figure 8b). As shown in Figure 8b,c, one pyramidal side and the base face are cut at the same time. After two pyramidal sides and the base face have been cut, the chuck is rotated 90 degrees clockwise and the other two pyramidal faces and the base face are cut (Figure 8d). Then, the chuck is rotated 90 degrees anticlockwise (Figure 8e). The sides of the top face are 150-200 µm long for ~50 µm-sized samples (Figure 7c), and the truncated pyramid is typically 30 µm in height (Figure 7d). Because the volatility of glassy sulfur is quite high even at room temperature, its surface vaporizes quickly, as shown in Figure 7c. Thus, it is important to move on to the ultramicrotomy step immediately after trimming is finished.
Details of the Protocol 7: Cutting of Ultrathin Sections of Glassy Sulfur Embedding a Small Sample
After adjusting the top face of the truncated pyramid parallel to the edge of a diamond knife for ultramicrotomy, the trough of the diamond knife is filled with ultrapure water. The amount of water is slightly more than that used for ultramicrotomy of epoxy-embedded samples. Because ultrathin sections of sulfur are not drawn back during ultramicrotomy, the slightly larger amount of water serves to decrease compression of the sections. Well-continued ribbons of ultrathin sections can be formed, as shown in Figures 9a and 10b.
Details of the Protocol 8-9: Scooping-Up of Ultrathin Section Ribbons of Glassy Sulfur Embedding a Small Sample, Followed by Evaporation of Glassy Sulfur
These ribbons can easily be scooped up with a loop and set on TEM grids, as is the case for conventional epoxy-embedded ultrathin sections. This conventional scooping method is used to put ribbons onto optical-grade ZnS crystal or a diamond window for atomic force microscopy-infrared spectroscopy (AFM-IR). Because these crystals are hydrophobic, a small strip of filter paper is attached to the edge of the loop to remove the water quickly from within the loop. The resulting ribbons on a diamond window are shown in Figure 9a. After the ribbons are air-dried, the diamond window is set in an isothermal bath at 80 °C for more than 6 h to sublimate the sulfur. After sublimation of the sulfur, only the ultrathin sections of the sample are left on the window (Figure 9b).
For STXM-XANES analysis, the ribbons are set on TEM grids with SiOx supporting film or on 500-µm-wide single-window metallic Si TEM grids with silicon nitride supporting film. The opening size of the TEM grids with SiOx supporting film is 300 mesh (~63 µm). Therefore, we made a tool to control the positions of the ribbons on the TEM grids in order to place as many ultramicrotomed sections as possible over the openings of these grids (Figure 10). A cross tweezer, attached to a small manipulator, holds a TEM grid that is partially submerged in the trough water and controls its position (Figure 10a). After one or two ribbons of ultrathin sections are cut from the sulfur-embedded sample, the long side of the ribbons is carefully attached to the waterfront line of the TEM grid using a cleaned fiber brush probe (Figure 10b). Then, the TEM grid is drawn back and recovered using the manipulator. After removing water from the TEM grid with a filter paper, the grid is set on the filter paper in a petri dish. After the grid is air-dried, it is set in an isothermal bath at 80 °C for more than 6 h to sublimate the sulfur. Figure 10c shows a 500 µm-wide single-window Si grid with silicon nitride supporting film. An arrow indicates the position of an ultrathin section of an Antarctic micrometeorite (AMM) after sublimation of the sulfur. Figure 10d shows an enlarged image of the ultrathin section of the AMM.
The obtained ultrathin foil samples are not only available for STXM-XANES and AFM-IR analyses, but are also suitable for TEM observation and analysis if the ultrathin sections are on carbon supporting film. If TEM observation of ultrathin sections on SiOx supporting film is required after STXM-XANES analysis, there are two situations: when a TEM with a LaB6 filament is used, there is no problem observing them, because the supporting film will not be broken by charge-up owing to the low current density of the LaB6 filament; when a field-emission TEM is used, the supporting film will be broken by charge-up unless the TEM grid is coated with carbon.
STXM-XANES Analysis
C-XANES spectra of the Murchison meteorite IOM sections prepared by this protocol and by the conventional methods were compared (Figures 11 and 12). All the C-XANES spectra show peaks at 285.3 eV, assigned to aromatic C=C, and 288.7 eV, assigned to C(=O)O (carboxyl/ester). A small peak at 286.6 eV assigned to C=O and a shoulder at 287.8 eV assigned to aliphatic CHx were also observed. The peak at 289.6 eV in one of the spectra obtained by the conventional method could be due to C-OH (hydroxyl). These features are consistent with the C-XANES of Murchison IOM in the literature, which was also prepared by the conventional sulfur-embedding ultramicrotome method [23] (Figure 11b). Slight differences in peak intensities were observed between the samples prepared by the two protocols, e.g., the aromatic and aliphatic C peaks were smaller in this protocol than in the conventional one. However, these differences could be attributed to local heterogeneity of the IOM, considering that the C-XANES spectra from [23] also show some differences.
Figure 11. Carbon XANES spectra of Murchison insoluble organic matter (IOM) prepared by this protocol ("New protocol" in the caption) and by the conventional protocol ("Conventional" in the caption). (a) Raw data obtained for two samples; (b) normalized and smoothed data for two samples. A carbon XANES spectrum of Murchison IOM prepared by the conventional protocol by [23] is also presented in this figure for comparison.
Discussion and Conclusions
The difference between the two protocols is the temperature of the molten sulfur: 170-180 °C in the new protocol and ~150 °C or less in the conventional one. If the temperature of the new protocol affected the molecular structure of the IOM, one would expect the aromatic peak to increase and the aliphatic peak to decrease, but this is not the case. Calculations based on kinetic experiments on the decrease of aliphatic groups in Murchison IOM by FTIR indicate that a 10% decrease in aliphatic C-H functional groups would require over one day (~10^5 s) at 170 °C in an inert atmosphere [24], although the process should be accelerated in the presence of oxygen. Thus, it is unlikely that the slight differences in aliphatic features are due to the higher temperature of the molten sulfur.
In conclusion, another protocol to make sulfur-embedded ultrathin sections was successfully developed for STXM-XANES, AFM-IR and TEM analyses. We use custom-made stainless-steel stubs on which the sulfur is melted, which reduces possible contamination. Polymerized liquid sulfur, instead of low-viscosity liquid sulfur, is the embedding medium in this protocol. Because of the high viscosity of the polymerized liquid sulfur, embedded samples stay near the surface of the polymerized liquid sulfur, which facilitates trimming of the glassy sulfur and ultramicrotomy of tiny samples. In addition, well-extended ribbons of ultramicrotomed thin sections can be obtained for STXM-XANES, AFM-IR and TEM analyses. By using a special tool to control the positions of the ribbons on TEM grids, we were able to improve the yield of ultrathin foil samples on the openings of these grids. Because the carbon XANES spectra of foil samples prepared by this protocol are consistent with those of sections prepared by the previous protocol, this protocol offers an alternative way to prepare sulfur-embedded ultramicrotomed sections. It is therefore expected that ultrathin samples prepared by this new protocol will enable the in situ analysis of prebiotic organic materials from the early Solar System without modification of the returned asteroid samples. Funding: This work is supported by the Astrobiology Center Program of National Institutes of Natural Sciences (NINS) (Nos. AB281011, AB291008, AB301004 and AB021012).
"Materials Science",
"Environmental Science",
"Chemistry"
] |
Recent tectonic model for the Upper Tagus Basin (central Spain)
Active tectonics within the Upper Tagus Basin is related to the lithospheric flexure affecting the Palaeozoic basement of the basin. This flexure displays a NE-SW trend and is in agreement with the regional active stress field defined by a NW-SE-trending maximum horizontal stress. In this tectonic framework, irregular clusters of instrumental seismicity (Mw < 5.0) are concentrated in the zone bounded by the Tagus River and Jarama River valleys. These clusters are related to major NW-SE-trending faults of suspected strike-slip kinematics. Moreover, NE-SW-trending reverse faults are affected by the strike-slip system as well. Although the reverse faults are in agreement with the present SHmax orientation, they are apparently locked as seismogenic sources (scarce instrumental seismicity is recorded today). In addition, we have determined the regional and local stress/strain fields, and two different fracture patterns were observed. Hence, we have divided the area into two zones: (1) the lateral bands of the basin, defined by reverse faulting (NE-SW trending) and strike-slip faulting (NW-SE trending), and (2) the central zone of the basin, characterized by shallow normal faulting and NE-SW-trending strike-slip faults. Furthermore, surface faulting and liquefaction structures are described affecting Middle to Late Pleistocene fluvial deposits, suggesting intrabasinal palaeoseismic activity (5.5 < M < 6.5) during the Late Quaternary. The obtained structural and tectonic information has been used to classify and characterize the Upper Tagus Basin as a semi-stable intraplate seismogenic zone, featuring Pleistocene slip-rates < 0.02 mm/yr. This value is low, but it allows for the occurrence of Pleistocene palaeo-earthquakes.
Introduction
The Upper Tagus Basin (UTB) is located in the central part of the Iberian Peninsula and includes the province of Madrid, parts of the provinces of Guadalajara, Toledo, Cuenca and Segovia, and the major mountain range constituted by the Spanish Central System (SCS). The most relevant feature of the UTB for seismic risk studies is its proximity to large urban and industrial areas (i.e., Madrid, Guadalajara and Alcalá de Henares). The instrumental seismic record of the area displays small earthquakes with magnitudes M < 5.0 (www.ign.es; 1996-2011). Moreover, the instrumental seismicity in the region shows spatial and temporal clusters of small to moderate earthquakes (3.0 < Mw < 5.0), most of them located within a narrow NE-SW band defined by the watershed zone of the Tagus and Jarama river valleys, about 30-40 km SE of the city of Madrid. The last significant earthquake in the zone occurred on 7 June 2007 (Escopete, Guadalajara), with a magnitude mb 4.2 and a depth of 10 km (www.ign.es). This earthquake triggered ground motion with a maximum PGA value of 0.071 g (Carreño et al., 2008). This relatively high ground response is related to the relatively thick Cenozoic sedimentary filling of the basin (up to 3 km; i.e., Alonso-Zarza et al., 2004; Gómez-Ortiz et al., 2005), but the near-field effect also has to be considered (Carreño et al., 2008). The extensive ancient Late Neogene surface dominating the intrabasinal landscape, as well as the scarce evidence of recent earthquake-related deformation (Silva et al., 1997; De Vicente et al., 2007), makes it difficult to assign a Quaternary tectonic slip-rate to the Upper Tagus Basin (UTB). However, the occurrence of deep canyon-like valleys in basin-centre locations, bounded by kilometric linear scarps 40-60 m high and related in some cases to moderate historical and instrumental seismicity (Silva et al., 1988; Silva, 2003; De Vicente et al., 2007), supports the hypothesis of Late Quaternary tectonic activity in the region. Furthermore, Quaternary fluvial deposits of Middle to Late Pleistocene age display evidence of strong fracture density, tectonic deformation and a wide variety of liquefaction structures related to synsedimentary faulting and collapse of the underlying Neogene evaporites (Silva et al., 1988; Giner, 1996; Silva et al., 1997; Silva, 2003). Under these assumptions, the integration of both new and old data, as well as the implementation of new techniques, is required to perform an updated seismic hazard analysis for the UTB.
Hence, the main goal of this work is to propose an innovative interpretation of the seismic potential of the UTB by integrating (a) palaeoseismic evidence from Pleistocene deposits within the main river valleys; (b) seismotectonic data from focal mechanism solutions and instrumental seismicity; and (c) the existing structural and geological data on active faulting for the area, together with rheological models for the lithosphere within the basin.
Active faulting and Quaternary tectonics are ultimately responsible for the landscape and shaping of the SCS border, the uplift of the Neogene materials at the basin
centre, and the palaeoseismic evidence within the valleys, which records estimated event magnitudes of ca. 5.5 to 6.5; these observations are the way to go a step beyond in the seismic hazard assessment of this zone of scarce instrumental seismicity.
Geodynamic and geologic background
The Upper Tagus Basin (UTB) covers an area of about 12,000 km² and is located in the central part of the Iberian Peninsula, including the provinces of Madrid, Guadalajara, Toledo, Cuenca and Segovia. The basin is a complex zone of Paleogene to Neogene sedimentary infilling (Alonso Zarza et al., 2004), limited by three intracratonic mountain ranges: the Spanish Central System (SCS) to the north, the Altomira Range (AR) to the east, and the Toledo Mountains (TM) to the south (Fig. 1). The SCS is interpreted as a Cenozoic pop-up controlled by E-W and NE-SW structures linked to large-scale lithospheric flexure triggered by the SE-NW far-field stress propagation of the Africa-Eurasia collision (De Vicente et al., 2007). These authors indicated that the aforementioned fault systems reach and affect shallow crustal levels within the basin.
The topography and crustal structure of the SCS and adjacent areas can be explained by lithospheric folding, according to the analogue experimental models performed by Fernández-Lozano et al. (2011) and based on previous proposals (i.e. Giner et al., 1996; Cloetingh et al., 2002; De Vicente et al., 2007). The shallow seismicity in the intraplate western area of Iberia is explained by Fernández-Lozano et al. (2011) as a consequence of the stress transfer from the plate boundaries to the plate interior. Relationships between the topography, the Bouguer anomaly, the temperature at 100 km depth and the temperature at the Moho suggest a hot zone between the SCS and the TB, in agreement with the zone where instrumental seismicity was recorded during the 1980-2010 period (Fernández-Lozano et al., 2011).
The basement of the basin is composed of Variscan granitic and metamorphic rocks, with a Mesozoic and Paleogene cover (pre-tectonic Alpine unit), whilst the main infilling materials consist of overlying Neogene sedimentary sequences and "cut and fill" Quaternary deposits (post-tectonic Alpine unit) related to the development of the present drainage network. In this geological framework, the main tectonic structures have a conjugate orientation, with NE-SW and NW-SE trends (Giner et al., 1996). All of these structures are related to the aforementioned lithospheric flexure, defined by the basement geometry and by the differential filling and thickness of the Neogene sequences (De Vicente et al., 1996; Giner et al., 1996; Cloetingh et al., 2002; De Vicente et al., 2007, 2009; Martín-Velázquez et al., 2009).
Thermal and rheological models for the SCS and adjacent areas were developed by Suriñach and Vegas (1988) and Tejero and Ruiz (2002), with a surface heat flow ranging between 60 and 80 mW m-2 and with the Moho located at 31-34 km depth. Jiménez-Díaz et al. (2012) suggested a surface heat flow of 81-83 mW m-2 for the Tagus Basin, involving mantle processes related to the continuous uplift of the SCS. Models developed by these authors indicate a maximum thickness for the upper crust of 16 km within the Tagus Basin and of 11 km beneath the SCS. Moreover, two crustal discontinuities, at 11 and 31 km depth respectively, have been described from the spectral analysis of gravity data (Gómez-Ortiz et al., 2005). Consequently, in this work the 11-16 km depth interval is assumed to be the preferred crustal level for brittle deformation and seismogenic sources, in order to evaluate the area of fault rupture according to the main Quaternary fault traces described in the following sections.
Fig. 1.-Geological setting of the study area, showing the main geological units: the Upper Tagus Basin (UTB), the Spanish Central System (SCS), the Altomira Range (AR) and the Toledo Mountains (TM).
These mountain ranges display different tectonic frameworks and deformation ages as a consequence of the evolving stress field throughout the Cenozoic (De Vicente et al., 1996, 2007, 2009; Babín-Vich and Gómez-Ortiz, 1997; Martín-Velázquez et al., 2009; Fernández-Lozano et al., 2011). The active stress field within the SCS is defined by a NW-trending maximum horizontal stress orientation, SHMAX (De Vicente et al., 1996; Herraiz et al., 2000). This stress field has been operating since the Late Miocene (De Vicente et al., 1996). The stress field that promoted the building of the SCS corresponds to an intraplate response to far-field effects from the active plate boundaries. Commonly, lithospheric folding rather than mantle processes is invoked to explain the active tectonic deformation and vertical movements in this zone (Cloetingh et al., 2002; De Vicente et al., 2007, 2009). Polyphase deformation from the Miocene to the present may explain the stress convergence at the SCS (Cloetingh et al., 2002). This convergence results from the Pyrenean compression (N- to NE-trending), the Betic compression (SSE-trending), the Mid-Atlantic Ocean Ridge (MAOR) push (W-trending) and the Valencia Trough extension (WNW-trending). On this basis, different authors explain the vertical movements of the SCS as a result of lithospheric folding (Giner, 1996; De Vicente et al., 1996; Cloetingh et al., 2002; De Vicente et al., 2007, 2009; Martín-Velázquez et al., 2009). Since lithospheric folding implies large-scale deformation and long-wavelength vertical movements, the active tectonic slip rate operating in the SCS has to be estimated from at least the Pleistocene. Pérez-López et al. (2005) described the stress field at the southern border of the SCS as strike-slip to uniaxial extension from the Eocene to the present. Martín-Velázquez et al. (2009) modelled this stress field using finite elements and a rheological model based on the thermal model of Tejero and Ruiz (2002). The relevance of this model is the simulation of the stress state at the surface of the SCS, where the present topography acts as tectonic loading on faults.
In this work, firstly, major faults with surface traces longer than 30 km (see Wells and Coppersmith, 1994) are recognized. Secondly, the orientations of striated fault planes and their agreement with the stress field are studied, and finally, Quaternary tectonic markers and paleoseismic evidence are described. However, fault parametrization for hazard purposes is still controversial. In this sense, the geothermal features (Jiménez-Díaz et al., 2012), the analogue models of the lithosphere (Fernández-Lozano et al., 2011) and the numerical models (Martín-Velázquez et al., 2009) beneath the SCS could shed light on the fault width that could be loaded to trigger either moderate or destructive earthquakes.
Macro-scale structural analysis
Despite the asymmetric character of the spatial distribution of deformation within the basin (De Vicente et al., 2007), the geomorphological features and the lineaments interpreted from digital elevation models suggest the existence of two different fracture patterns (Fig. 2): (1) the Guadarrama Fracture Pattern (GP1), mainly oriented NW-SE, with a secondary NE-SW set. We associate the NW-SE-trending lineaments with NW-trending strike-slip faults, and the secondary set with NE-trending reverse faults (De Vicente et al., 1996; Giner, 1996). (2) The Guadalajara Fracture Pattern (GP2), defined by lineament sets oriented NE-SW. This fracture pattern could be related to the local stress tensor, defined by a NE-trending maximum horizontal stress orientation (Giner, 1996; Giner et al., 1996). GP2 reactivates normal faults and, secondarily, NE-trending strike-slip faults. This stress field is orthogonal to the Guadarrama stress field and is interpreted as a switch between the main axes (SHMAX, SHMIN) in response to the basement flexure (Giner et al., 1996). We assume that these stress fields could be coeval and that SHMAX and SHMIN could switch through time, thus generating the roughly orthogonal compression and extension directions.
Hence, we have divided the UTB into two subzones according to the two fracture patterns described above. Figure 2 shows both subzones, elongated with a NE-SW trend: (a) the central subzone, coincident with the Tajuña River and the interpreted NE-trending lineaments, which overlies the axis of the basement flexure, and (b) the lateral subzones, located at the NW and SE boundaries of the basin and associated with NW-trending lineaments.
Micro structural analysis
Figure 3 shows the location of the 43 structural field stations where fault planes affecting Neogene and Quaternary deposits were measured. At these stations we measured more than 700 (743) kinematic data on fault planes in order to obtain the active stress tensor (Fig. 4). We applied the structural analysis technique proposed by Reches (1987) to obtain the stress tensor (σ1, σ2, σ3 and SHMAX) from the slip vectors measured on the fault planes. The results suggest the coeval existence of two stress fields: a regional stress tensor (Fig. 5a), characterized by NW-trending SHMAX, which activates strike-slip faults (Fig. 6) and reverse faults (Figs. 7a and 7b), and a local stress tensor (Fig. 5b), characterized by NE-trending SHMAX.
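The dominant SHMAX trends summarized in the rose diagrams of Fig. 5 can be extracted from station results with simple axial (orientation) statistics; the short sketch below is only an illustration of that step, using hypothetical azimuths rather than the measured data set, and it doubles the angles before averaging, as is usual for orientations defined modulo 180°.

import numpy as np

def mean_axial_trend(azimuths_deg):
    # Mean trend of axial data (0-180 deg) via the angle-doubling method.
    theta = np.deg2rad(2.0 * np.asarray(azimuths_deg, dtype=float))
    mean2 = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    return (0.5 * np.rad2deg(mean2)) % 180.0

# Hypothetical station-mean SHMAX azimuths (degrees east of north), for illustration only
regional_like = [130, 142, 151, 138, 147]   # NW-SE-type trends
local_like = [38, 45, 52, 41, 57]           # NE-SW-type trends
print(round(mean_axial_trend(regional_like)), round(mean_axial_trend(local_like)))  # ~142 and ~47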
The spatial distribution of the structural stations covers all the zones defined from the macro-scale structural analysis. Accordingly, we observe that the structural stations associated with the local stress tensor are located within the central zone, the Guadalajara fracture pattern (GP2) (Figs. 2 and 4), whereas the regional stress tensor is related to stations located within the Guadarrama fracture pattern (GP1). Furthermore, the spatial relationship between both deformational patterns, observed in the field stations at the southern boundary of the basin, also suggests that the faults activated by the local stress tensor are younger than those activated by the regional stress field.
Finally, it is necessary to note that the sedimentary infilling of the basin may amplify the deformation recorded at the surface through halokinesis of the Neogene evaporites located at basin-centre locations (see Alonso Zarza et al., 2004 for their spatial location). However, the coherence and arrangement of both fault patterns, in agreement with the tectonic context of the basin, suggest a primary tectonic control beneath the karstic collapses that assist large canyon-valley development at basin-centre locations, as previously suggested by several authors (Silva et al., 1988; Giner et al., 1996; Silva, 2003; Alonso Zarza et al., 2004).
The Pleistocene paleoseismic record
Several liquefaction structures related to recent faulting and to Palaeolithic settlements (younger than 780 ka) have been described within the basin (Giner et al., 1996; Silva et al., 1997; Silva, 2003; Silva et al., 2010; Silva et al., 2011). However, more detailed studies are required to establish a complete table of paleoseismic parameters (slip vector, total offset, recurrence intervals, etc.). Furthermore, most of the structures described here correspond to liquefaction features, and consequently it is very difficult to assign seismic sources and recurrence intervals to them.
All of these paleoseismic structures are in agreement with the local stress tensor (NE-trending SHMAX) and are spatially located southward of the flexure zone.
Instrumental seismicity
Anomalous clusters of seismicity are located in a narrow band between the Tagus River and the Jarama River (Fig. 10). These clusters are classified as anomalous because no Quaternary active faults have been described in this area and no seismic sources have been recognized, except those in the works of Silva et al. (1988, 1997) and Giner (1996). This zone is elongated with a NE trend, coinciding with the axis of the basement flexure (Figs. 2 and 3). We have analyzed the focal mechanism solutions recorded within the area (Giner, 1996; De Vicente et al., 1996; Andeweg et al., 1999; Carreño et al., 2008) according to the methodology proposed by De Vicente et al. (1996). The regional stress field for the Iberian Peninsula is described in Herraiz et al. (2000) and Stich et al. (2006, 2010). In these works the UTB stress field is defined by NW-trending SHMAX, in agreement with the works focused on the UTB by Giner (1996) and De Vicente et al. (1996); therefore, we have made a kinematic interpretation of the focal mechanisms. The focal mechanisms for earthquakes with M > 3.4 correspond to (Andeweg et al., 1999): (a) reverse and normal faults with NE trends and (b) strike-slip faults with NW trends (Fig. 10). Normal faults are coherent with the local stress tensor defined here (NE-trending SHMAX) and are located in the extensional zone defined by the basement flexure. However, the focal mechanisms of reverse faulting obey the regional stress tensor (NW-trending SHMAX). Deformation affects mainly fluvial Middle to Late Pleistocene deposits (Fig. 8). Most of the liquefaction structures observed are sand dikes, mainly associated with normal and lateral faults activated by the local stress tensor (Figs. 7b and 7c). The nature and dimensions of these structures suggest the occurrence of paleoearthquakes with magnitudes greater than M5-M5.5, according to the proposals of Obermeier (1996) and Rodríguez-Pascua et al. (2000). In addition, liquefaction affecting gravel deposits has been recorded (Rodríguez-Pascua et al., 2000), as well as thick (0.8-1.2 m) individual liquefaction horizons in medium-coarse sands (Silva et al., 2010, 2011), suggesting paleoearthquake magnitudes greater than M6.
We have also recognized liquefaction related to reverse faulting (NE-trending), located south of the town of Villarrubia de Santiago (Toledo) (Fig. 9). This fault was activated by the regional stress tensor (NW-trending SHMAX). The liquefaction affects Middle-Late Pleistocene fluvial sediments of the Tagus River, and these features are also common along the main tributaries, such as the Jarama, Tajuña and Manzanares rivers (Silva, 2003; Alonso Zarza et al., 2004; De Vicente et al., 2007). Sand dikes are injected into the fault plane and the vertical throw is 0.5 m. We assume that this liquefaction corresponds to a single event and, consequently, that the observed vertical offset corresponds to a coseismic vertical throw. Accordingly, empirical relationships (Wells and Coppersmith, 1994) suggest a maximum magnitude ranging between M6.4 and M6.6.
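As a rough consistency check on this displacement-based estimate, the sketch below applies a Wells and Coppersmith (1994)-type regression of magnitude against maximum displacement, M = a + b log10(MD); the coefficients used here (a = 6.69, b = 0.74, "all slip types") are quoted as approximate values, and the calculation is only illustrative, not the authors' own.

import math

def magnitude_from_max_displacement(md_m, a=6.69, b=0.74):
    # Wells & Coppersmith (1994)-style regression, M = a + b*log10(MD), MD in metres.
    # Coefficients are approximate "all slip types" values for maximum displacement.
    return a + b * math.log10(md_m)

throw_m = 0.5  # coseismic vertical throw reported in the text
print(round(magnitude_from_max_displacement(throw_m), 2))  # ~6.5, within the quoted M6.4-6.6 range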
The orthogonal relationship between the SHMAX orientations of the regional and local stress tensors points to the same genetic relationship for both types of faulting, normal and reverse (Giner et al., 2003). We assume that this common genetic origin for both fault sets is due to the lithospheric folding and basement flexure, which provoke a switch of the stress axes from the regional tensor (NW-trending SHMAX) to the local one (NE-trending SHMAX). Hence, normal faulting in this area acts as the seismogenic source related to the flexure of the basement, restricted to the extensional zone of shallow and surface folding (Fig. 11) recognized by several authors on the basis of the deformation of Late Neogene intrabasinal surfaces (Fernández-Casals, 1979; Silva et al., 1988; Alonso Zarza et al., 2004; De Vicente et al., 2007).
NE-trending reverse faults are related to the regional stress field and to the basement flexure as well. Therefore, shallow normal faults can be considered as earthquake sources at basin-centre locations, whereas earthquakes related to reverse faulting can be produced at deeper crustal levels (Fig. 11). Additionally, the deformation is accommodated by NE-trending strike-slip faults in the NE part of the basin. These faults act as transfer faults. The Escopete earthquake (7 June 2007) (Carreño et al., 2008) was triggered by this type of faulting, with a sinistral strike-slip focal mechanism.
The second cluster of instrumental seismicity is located at the southern border of the SCS (Fig. 10), although the major reverse faults (length > 100 km) show no evidence of paleoseismicity and only a scarce record of instrumental seismicity. The earthquake cluster of the SCS is more closely related to NW-trending strike-slip faults. This fault set is similar to the strike-slip faults located northwestward of the basement flexure. Therefore, the major reverse fault of the southern boundary of the SCS would not break entirely as a single large segment (length > 100 km), since it is segmented by crustal-scale transverse NW-trending strike-slip faults (Fig. 11). The scarce record and low magnitude of instrumental seismicity in this zone seem to support this seismotectonic scenario. The hypocentre errors assumed in this work correspond to the focal mechanism solutions described in Andeweg et al. (1999). Since that work, only the Escopete earthquake (2007) has occurred within the Madrid Basin with the minimum size required to obtain a focal mechanism.
Basement faults within the basin, such as the South Border fault of the SCS, are well oriented with respect to the present-day NW-trending tectonic compression (De Vicente et al., 1996, 2007, 2009). What is the role of these basement faults as seismogenic sources? Reactivation under the present-day stress field is possible within the most widely accepted tectonic framework, although the question arises of the potential total energy accumulation on these faults, fed by the far-field stress transferred from the SE plate boundary. Taking into account that the bulk of the stress is released in the Betic Cordillera and that another relevant fraction is consumed by lithospheric folding and topographic loading, a minimum stress value can still be stored at the southern border of the SCS. Following this assumption, a fault length of 125 km (assuming a complete rupture of the South Border fault of the SCS) and a width of 16 km (upper crust) would involve a large energy accumulation (6.31 × 10^16 J) that could be released in an M8 earthquake (according to Wells and Coppersmith, 1994). No earthquakes of such size are evidenced in the paleoseismic record since the Middle Pleistocene (~780 kyr B.P.), suggesting a minimum recurrence interval in the same time range for expected M8 earthquakes, independent of the fault segmentation of the SCS.
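The energy figure quoted above can be checked against the classical Gutenberg-Richter energy-magnitude relation, log10 E(J) = 1.5 M + 4.8; the short sketch below is an illustrative check (not the authors' computation) and reproduces the ~6.3 × 10^16 J value for an M8 event.

import math

def energy_from_magnitude(m):
    # Gutenberg-Richter energy-magnitude relation; E returned in joules.
    return 10 ** (1.5 * m + 4.8)

def magnitude_from_energy(e_joules):
    return (math.log10(e_joules) - 4.8) / 1.5

print(f"{energy_from_magnitude(8.0):.2e} J")      # ~6.31e+16 J, as quoted in the text
print(round(magnitude_from_energy(6.31e16), 1))   # ~8.0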
The basement flexure determines the spatial distribution of active faults in the Tagus Basin. The NE boundary of the flexure displays NE-trending strike-slip faults about 20 km long (Fig. 11). The Escopete earthquake (7 June 2007), with magnitude mb 4.2 and 10 km depth, occurred within this area. The maximum peak ground acceleration (PGA) was 0.07 g, although the estimated maximum value for a 400-year period was 0.04 g (Carreño et al., 2008).
The analysis of the focal mechanism solutions for the scarce instrumental seismicity within the area (www.ign.es), namely in the surroundings of the flexure axis, reveals two fault geometries with NE trends: reverse and normal faults. Despite the errors in the hypocentral locations of these earthquakes, we suggest the presence of a non-finite surface between the extensional (NE-trending SHMAX) and compressional (NW-trending SHMAX) zones; this implies that reverse-faulting earthquakes are deeper than normal-faulting ones. Although the instrumental record is too scarce to perform a statistical analysis supporting the spatial distribution of earthquakes in depth, the flexure model is sufficient to explain the present instrumental distribution of earthquakes within this basin.
Discussion and conceptual model
Summarizing, three different tectonic sources determine the intraplate tectonic field within the UTB, and they may activate faults: (1) the tectonic far field from the convergence between the African, Iberian and Eurasian plates, plus the push of the MAOR; (2) the lithosphere-mantle coupling through the lithospheric flexure, evidenced by large-wavelength basement folding; and (3) the tectonic loading imposed by the topography of the SCS. Therefore, the estimated tectonic slip rate should be related to a strength balance among all of these tectonic driving forces. The strain field observed in the UTB is defined by two different patterns (Fig. 2): (1) active normal and strike-slip faults, with NE-trending SHMAX, parallel to the Tajuña River and to the NE-trending morpho-lineaments, and (2) NW-trending morpho-lineaments. The seismicity clusters are related to the second strain pattern. Both strain patterns are in agreement with the strain regime obtained from the structural analysis: (A) the regional strain tensor, with NW-trending SHMAX, activating NE-trending reverse faults and NW-trending strike-slip faults, and (B) the local strain tensor, defined by NE-trending SHMAX. We assume that this local tensor represents a switch of the regional tensor due to the basement flexure, activating surface faults about 10 km long and reaching 11 km depth: NE-trending normal faults and NW-trending lateral faults. The reorganization of the fluvial network within the basin and the generation of large linear canyon-shaped valleys at basin-centre locations date from the Middle Pleistocene (Silva et al., 1988, 1997), but paleoseismic evidence is characteristic until the last interglacial period (i.e. 125-90 kyr B.P.; Silva, 2003). Vertical throws of 0.5 m recorded for the Late Pleistocene (125 kyr), together with the empirical relationships used for paleoearthquakes and the geometrical parameters of the Quaternary faults (surface trace and width), suggest a tectonic slip rate of 0.004 mm/yr, probably related to single isolated events. This value is too small, and probably too closely related to secondary faulting, to be considered representative of the whole basin.
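The slip-rate value follows directly from the reported throw and time span; a one-line check using the 0.5 m throw and the ~125 kyr interval given in the text is shown below.

throw_mm = 0.5 * 1000        # 0.5 m coseismic vertical throw, in mm
time_span_yr = 125_000       # Late Pleistocene interval (~125 kyr)
print(f"{throw_mm / time_span_yr:.3f} mm/yr")  # 0.004 mm/yr, matching the value in the text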
Conclusions
The Upper Tagus Basin (UTB) is characterized by two major tectonic structures: (1) The reverse fault of the southern border of the Spanish Central System (SCS), NE-trending, almost 100 km long and with no evidence of Quaternary or recent tectonic activity according to the geomorphology of the area. Moreover, this fault displays scarce and low-magnitude seismicity for the historical and instrumental periods, mainly related to NW-trending strike-slip faults that divide and cross the NE-trending reverse fault. These strike-slip faults have a maximum surface length of 8 km.
(2) The basement flexure, defined by a NE-trending axis located beneath the Tagus Basin (TB). This zone is associated with instrumental seismicity with a maximum magnitude of M4.2 (Escopete earthquake, 7 June 2007) and with shallow normal-faulting earthquakes (depth < 10 km), located in the area bounded by the Tagus and Jarama rivers. This basement flexure is in agreement with the regional stress/strain tensor defined in this area by other authors (Giner et al., 1996; De Vicente et al., 1996; Herraiz et al., 2000; Tejero and Ruiz, 2002; De Vicente et al., 2007, 2009; Fernández-Lozano et al., 2011). Reverse-faulting earthquakes are deeper than normal-faulting earthquakes (16 km > depth > 10 km), in agreement with the geometry of the flexure and with the strain field defined above and below the neutral surface (the strain distribution in flexural folding, with stretching of the extrados and shortening of the intrados). This depth interval is consistent with the lower limit of the upper crust defined by Tejero and Ruiz (2002). However, new thermal models are required to describe the compressive and extensional distribution within the lithosphere of the Tagus Basin. The NW-trending lateral faults show low-magnitude instrumental seismicity (Carreño et al., 2008). Liquefaction structures related to normal faulting indicate paleoearthquake magnitudes of 5.5 < M < 6.4 for shallow normal faulting.
Analysis of the instrumental seismicity together with the major tectonic structures suggests a tectonic slip rate of 0.004 mm/yr in this area. However, taking into account the geodynamic framework of the UTB, the liquefaction structures and the empirical relationships between slip rates and recurrence intervals provided by Villamor and Berryman (1999), we define the UTB as a stable intraplate area characterized by Quaternary tectonic slip rates < 0.02 mm/yr for a time period of 125 ka.
Taking into account that the studied area (UTB) includes large cities with many inhabitants and industrial and critical facilities (Madrid, Toledo, Guadalajara, Alcalá de Henares, etc.), a full re-evaluation of the potential seismic sources in the zone is necessary in order to include them in future seismic hazard analyses. In this sense, we suggest that such seismic sources are oriented bands of tectonic deformation at depth rather than particular single large faults, as occurs at plate-margin locations. These bands of tectonic deformation account for most of the geomorphic anomalies in the studied area (i.e. Silva, 2003; De Vicente et al., 2007), for the Quaternary lineations and linear canyon-shaped valleys in the basin centre, and for the spatial clustering of instrumental seismicity. Potentially damaging earthquakes (M > 5.5) can be produced in the zone, like the events generated during the time window of c. 780 kyr to 125 kyr.
Fig. 5.-Rose diagrams for SHMAX of the stress tensors determined within the Upper Tagus Basin: (a) regional stress tensor and (b) local stress tensor.
Fig. 6.-Strike-slip faulting affecting Pleistocene fluvial terraces of the Tagus River. These faults are activated by the regional stress tensor, with SHMAX oriented NW-SE.
Fig. 7.-Photographs of faulting affecting fluvial sediments of the Jarama and Tagus rivers in the south of the Tagus Basin: (a) and (b) reverse faulting activated by the regional stress tensor (NW-trending SHMAX); (c) and (d) lateral and normal faulting activated by the local stress tensor (NE-trending SHMAX). | 6,580 | 2012-05-16T00:00:00.000 | [
"Geology"
] |
Combination of Oxyanion Gln114 Mutation and Medium Engineering to Influence the Enantioselectivity of Thermophilic Lipase from Geobacillus zalihae
The substitution of the oxyanion Q114 with Met and Leu was carried out to investigate the role of Q114 in imparting enantioselectivity on T1 lipase. The mutation improved enantioselectivity in Q114M over the wild-type, while enantioselectivity in Q114L was reduced. The enantioselectivity of the thermophilic lipases T1, Q114L and Q114M correlated better with log p than with the dielectric constant and dipole moment of the solvents. Enzyme activity was good in solvents with log p > 3.5, with the exception of hexane, which deviated substantially. Isooctane was found to be the best solvent for the esterification of (R,S)-ibuprofen with oleyl alcohol for lipases Q114M and Q114L, affording E values of 53.7 and 12.2, respectively. The selectivity of T1 was highest in tetradecane, with an E value of 49.2. Solvents with low log p reduced overall lipase activity, and dimethyl sulfoxide (DMSO) completely inhibited the lipases. Ester conversions, however, were still low. Molecular sieves employed as desiccant were found to adversely affect catalysis by the lipase variants, particularly Q114M. Higher desiccant loading also increased the viscosity of the reaction mixture and further reduced the efficiency of the lipase-catalyzed esterifications.
Introduction
Genetic engineering, through significant advances in site-directed mutagenesis and directed evolution, provides a series of strategies to modify and control almost any enzyme property [1]. Both strategies can endow enzymes with exceptional tolerance of high temperatures, organic reagents and denaturants [2], and are extensively used to enhance enzyme activity and enantioselectivity [1]. Enantioselectivity can be improved by genetically engineering existing enzymes, by isolating novel biocatalysts or by engineering the reaction medium [3]. Changing the solvents employed in the reactions is one medium-engineering method that can alter enantioselectivity [4][5][6]. However, there is still no general understanding of the relationship between solvent physicochemical properties and enzyme enantioselectivity, owing to the diversity of substrates and enzymes used [7].
Thermophilic enzymes are more thermally active and stable in organic solvents than other enzymes [8] and have significant potential applications in biotechnological processes [9]. Previously, we described a thermoalkalophilic lipase, called T1, from Geobacillus zalihae; its gene, encoding 388 amino acid residues, was cloned into the vector pGEX-4T-1 and highly expressed in recombinant E. coli BL21 (DE3) pLysS [10]. The catalytic machinery is formed by Ser113, His358 and Asp317 [11]. The oxyanion-hole residues of T1 lipase were deduced to be Q114 and F16, based on the BTL2 lipase oxyanion residues Q115 and F17; BTL2 shares 96% sequence similarity with T1 lipase. The F17 residue in BTL2 lipase was reported to be highly conserved [12], and the same was expected of F16 in T1 lipase. However, substitution at Q114 was anticipated to be less damaging and more tolerable.
It has been described that mutations near the enzyme active site can enhance enantioselectivity more strongly than distant mutations [13]. The effect of mutating the oxyanion Q114 on the enantioselectivity of T1 lipase has yet to be explored. We substituted the hydrophilic Q114 with hydrophobic Leu and Met to afford the lipase variants Q114L and Q114M. Both lipase variants and the wild-type were then compared in the enantioselective esterification of ibuprofen in mixtures consisting of different solvents and variable desiccant loadings. The intention of this article is to show that the oxyanion at site 114 plays a role in imparting and controlling enantioselectivity in T1 lipase. The influence of different solvent properties and of variable amounts of desiccant was also examined in order to obtain a better understanding of how both parameters affect enantioselectivity.
Effect of Solvents
In the resolution of racemic ibuprofen, oleyl alcohol was used as the resolving agent to separate the enantiomers of ibuprofen (Scheme 1). Recent molecular dynamics simulations revealed that the activation mechanism of T1 lipase involves the lid domain, formed by a helix-loop-helix motif proposed to be involved in the interfacial activation of T1 lipase. The large structural rearrangement of the lid that reveals the entrance to the active site only occurs as a result of interaction between the hydrophobic residues of the lid and octane [14]. Hence, oleyl alcohol was chosen, as it is a natural substrate that T1 lipase normally catalyzes, and the hydrophobicity of oleyl alcohol is pertinent for the activation of catalysis. The effects of different solvent properties on the esterification of racemic ibuprofen with oleyl alcohol catalyzed by T1, Q114L and Q114M were evaluated, and the results are illustrated in Figure 1. Lipase activity in the esterification reaction is represented by the conversion to the ibuprofen ester. The activity of T1, Q114L and Q114M was found to be good in solvents with 1.25 ≤ log p ≤ 7.6, but the enzymes were inactivated in dimethyl sulfoxide (DMSO) (log p = −1.23). Activity was observed only in solvents with low dipole moments and low dielectric constants. The highest conversions catalyzed by T1, Q114L and Q114M occurred in isooctane, corresponding to 15.7%, 18.4% and 15.0%, respectively.
The activity of the lipase variants improved with decreasing solvent polarity (higher log p), except in hexane, where activity declined significantly. For T1 lipase, the lowest conversion occurred in hexane at 3.2%, for Q114L in toluene at 1.8%, and for Q114M in dodecane at only 1.7%. DMSO completely inhibited activity in all lipase variants. The lipase variants exhibited low conversions (<5%) in dichloromethane and N-tetradecane; only T1 lipase showed good activity in dichloromethane, with an ester conversion of 10.3%. It was noted that ester conversions in the T1, Q114L and Q114M-catalyzed reactions correlated better with the log p of the solvents. The E values and e.e p % in the T1, Q114L and Q114M-catalyzed enantioselective esterification of ibuprofen with respect to solvent log p are illustrated in Figure 2. Generally, e.e p % increased with increasing log p, with a notable common deviation in hexane, where the e.e p % of all lipase-catalyzed reactions dropped significantly. T1 also showed a slightly reduced e.e p % in dodecane, and in reactions of Q114M the e.e p % was considerably reduced when log p exceeded 4.5. Meanwhile, the E values of the reactions varied between 1.4 and 53.7 and were particularly poor at low solvent log p. Conversely, E values gradually improved as the log p of the solvents increased, with the exception of hexane. Similar deviations were also observed for the reactions of T1 and Q114L in dodecane. Good selectivity was achieved when the dielectric constants and dipole moments were lowest, and the selectivity of the lipase variants almost vanished in hexane (E < 2). However, given the poor correlation of ester conversion with solvent dipole moment and dielectric constant, the same outcome was also expected of the E values in the lipase-catalyzed reactions.
The selectivity of T1 lipase improved significantly in isooctane and N-tetradecane, giving an E value of 19.1 in isooctane and its best E value of 49.2 in N-tetradecane. E values for Q114L were 12.2 and 9.7 in isooctane and N-tetradecane, respectively. The selectivity of Q114M was highest in isooctane, giving an E value of 53.7, with a moderate E value of 20.5 in dichloromethane. Unlike T1 and Q114L, Q114M performed poorly in N-tetradecane, with an E value of 5.5, as did T1 lipase in dichloromethane. The E values of T1 lipase also deviated in dodecane, with a decline to 6.4, followed by a sharp increase to 49.2 in N-tetradecane (log p 7.6).
It is well reported in the literature that enzyme activity and selectivity are strongly affected by the choice of organic solvent [15][16][17][18]. In addition, using organic solvents in reactions can also provide the advantage of shifting the thermodynamic equilibria to favor synthesis over hydrolysis [18]. It was observed that all three lipases were completely inhibited in DMSO, which can be ascribed to the high polarity of DMSO compared with the other solvents used. Our finding corroborated other reports that highly polar solvents are responsible for stripping essential water from the protein and disrupting the structure of the enzyme [19].
Hydrophilic solvents, such as DMSO, have been described as having a higher affinity towards water than towards the enzyme. As a consequence, there is a loss of conformational flexibility in the enzyme due to the lack of bound water [15,20,21], which results in the loss of activity as well as enantioselectivity. These solvents have also been known to induce changes in the resonances of amide bonds of surface amino acids in enzymes, as previously reported for Candida antarctica lipase B in the presence of acetonitrile [22]. This would have caused intermittent adoption of unreactive enzyme conformations and a temporary loss of structural rigidity. Resonance of the amide bonds means active fluctuation between single-bond and double-bond character of the carbonyl group, making the enzyme surface highly unstable, comparable to a molten globule-like intermediate structure [23]. In essence, hydrophilic solvents impede enzyme activity, and not just because they strip water from the enzyme [24]. On the other hand, the high ester conversions in isooctane suggested that it was a good solvent for the lipase-catalyzed esterification. This finding was similar to a report illustrating that esterification of ketoprofen by Candida rugosa lipase afforded good results when cyclohexane or isooctane was used as the main solvent [25]. In another study, subtilisin retained its active native environment, including its structural water, upon treatment with octane (log p 4.9); the bulk of the solvent partitioned away from the active site of the enzyme [26]. A similar phenomenon may have occurred for Q114M in isooctane (log p 4.5), which may account for its more efficient catalysis during the esterification reaction. Also, the comparatively more hydrophobic methionine residue in Q114M would not have displaced water, owing to its opposing hydrophobicity.
Meanwhile, the particularly low ester conversion in dichloromethane can be attributed to the increased difficulty of substrate access to the active site. The catalytic pocket of a lipase is largely accessible to external solvents and substrates through a rather narrow hydrophobic channel [27]. For the lipases T1, Q114L and Q114M, the hydrophobic lid is an additional obstacle before the catalytic pocket can be accessed, and the lid tends to interact more strongly with hydrophobic solvents in order to open. Hence, the use of less hydrophobic solvents could have resulted in a weaker activation of the hydrophobic lid and increased difficulty of substrate access to the active site, given that the lid domain of T1 lipase is involved in interfacial activation [14]. Similar observations were reported for an esterification reaction catalyzed by the CALB enzyme [28]. This might partially explain the observed poor conversion to ibuprofen ester in the T1, Q114L and Q114M-catalyzed reactions in dichloromethane. Furthermore, it has been described in the literature that solvents can also influence the ground state of the reactants and products: a higher intrinsic solubility of the substrates in organic solvents tends to thermodynamically stabilize the substrate ground state and decrease enzyme activity [26].
The E values of the thermophilic lipase variants T1, Q114L and Q114M correlated better with the hydrophobicity of the solvent than with its dielectric constant and dipole moment. Results based on the dielectric constant and dipole moment were less conclusive owing to the poor association between the E values and both solvent properties. The good correlation between the E values and solvent log p was in accordance with reports linking the enantioselectivity of enzymatic reactions to solvent hydrophobicity [29] and could be used as a guide to estimate solvent effects in the T1, Q114L and Q114M-catalyzed esterification of racemic ibuprofen with oleyl alcohol. Only one of the solvents investigated, namely hexane, deviated substantially in this respect. Another noteworthy point is the solubility of the substrate and product in the various solvents: the poor solubility of ibuprofen in some organic solvents is known to decrease enzyme activity and result in lower conversions and E values [30].
Effect of the Presence of Desiccant
The effect of variable desiccant loading on the activity of the lipase variants is represented as ester conversion. The results for the T1, Q114L and Q114M-catalyzed esterification of racemic ibuprofen are shown in Figure 3. The molecular sieves dispersed quickly after a few minutes of stirring and caused the medium to turn slightly viscous, particularly at the 4 mg loading.
In comparison, lipase activity was higher in reactions without added molecular sieves, as reflected in the higher ester conversions. However, in the absence of molecular sieves, ester conversion started to decline when the incubation period exceeded 12 h. The T1, Q114L and Q114M variants afforded their highest conversions at 12 h, corresponding to 15.2%, 12.0% and 18.1%, respectively. Conversely, mixtures containing desiccant reached equilibrium at a much later time. At the 2 mg loading, T1 and Q114L reached equilibrium after 18 h, but conversion was still rising in the Q114M-catalyzed reaction. At the 4 mg loading, only T1 and Q114M showed a marginal increase in conversion, whereas Q114L attained equilibrium much sooner, at approximately 18 h, with considerably lower enzyme activity compared with T1 and Q114M.
The E values and e.e p % of the T1, Q114L and Q114M-catalyzed esterification with respect to the amount of desiccant loading are illustrated in Figure 4. In all reactions catalyzed by the three lipases, e.e p % decreased as the desiccant loading was increased. Selectivity was highest for reactions without added desiccant, with Q114M exhibiting the highest E values, between 29.1 and 53.4. The selectivity profiles for T1 and Q114L under variable desiccant loading were quite similar, except that selectivity was marginally higher for the former (E values of 11.7 to 13.8), while the latter attained E values of 8.6 to 11.8.
It was clear that the enantioselectivity of the lipase variants was particularly low in reactions that contained molecular sieves as desiccant. The effect of the desiccant on the reactions, however, was more pronounced for the Q114M variant: its E values were significantly reduced, from 53.4 to 29.1, when the desiccant loading was increased from 0 mg to 4 mg. The E values of T1 and Q114L were less affected, although these enzymes were considerably less selective than Q114M regardless of the amount of additive used. Q114L was found to be the least selective, and the E values in the T1 and Q114L-catalyzed reactions declined gradually over time. It is well known that an esterification reaction releases water as a by-product, which favors hydrolysis and counteracts esterification. The long incubation times of the T1, Q114L and Q114M-catalyzed reactions at high temperatures would have increased water build-up, which could have considerably affected the reaction equilibrium and influenced the e.e% of the product or of the unreacted substrate. To overcome this, displacement of the equilibrium towards product formation using additives should improve the enantiomeric excess, particularly when the enzyme has low or moderate enantioselectivity [8]. In this study, direct addition of desiccant into the reaction medium was favored, as it is by far a much simpler method of removing the generated water. Using molecular sieves as desiccant to remove water generated in the reaction was reportedly an effective way to increase substrate conversion in esterification reactions. However, the adverse effects of the molecular sieves on the lipases, especially in the Q114M-catalyzed reactions, imply that there could be some interaction between the lipase and the molecular sieves. Stirring further intensified the lipase-molecular sieve interaction, damaging the protein structure and reducing activity. Moreover, a high loading of the additive would have added more bulk to the reaction, lowering the collision probability between enzyme and substrate and further reducing the efficiency of catalysis [20]. It can be concluded that the addition of molecular sieves to remove generated water was not suitable for these reactions, and the use of an alternative desiccant should be considered.
The Effect of Mutation on Enantioselectivity
The oxyanion residue Q114 in T1 lipase selected for site-directed mutagenesis is located next to the catalytic Ser and is important in the stabilization of the substrate intermediate during catalysis (Figure 5). Mutations near the active site have been proposed to enhance enantioselectivity more strongly than distant mutations [13]. The differences in enantioselectivity between the lipase variants could be attributed to the more hydrophobic Leu and Met residues at position 114, as compared with the hydrophilic Gln in T1 lipase. In terms of hydrophobicity, Met (Q114M) is the most hydrophobic, followed by Leu (Q114L) and finally Gln (T1 lipase). According to reports, high enantioselectivity in enzymes has been associated with conformational rigidity of the enzyme structure [31]. More rigid active sites result in more enantioselective enzymes, owing to the formation of a more rigid binding pocket that permits only one enantiomer to enter the active site properly [32]. In this regard, Met, being the most hydrophobic, was expected to reduce the flexibility of the binding pocket in Q114M and hence enhance its rigidity, which might explain the observed moderate improvement in the enantioselectivity of this variant. The factor enhancing enantioselectivity in Q114M could therefore be the increased rigidity of its active site. In the case of Q114L, substitution with the hydrophobic Leu clearly did not improve the selectivity of the enzyme. Apart from modulation of the rigidity of the binding pocket, replacement of Gln in T1 lipase could also have caused other substantial changes, such as altering the shape and/or size of the active site [33]. Substitution of the bulky Gln residue with the much smaller Leu residue would logically free up more space at the substrate binding site and alleviate steric hindrance, allowing entrance of both enantiomers of ibuprofen acid. Hence, the arrangement of residues in the vicinity of the mutation is disrupted, altering the active-site topology of Q114L and compromising enantiorecognition in the active site of the lipase. Figure 5. The location of the oxyanion Gln114 that was selected for site-directed mutagenesis in the catalytic pocket of T1 lipase, with respect to the catalytic triad Ser113, His358 and Asp317.
Materials
The components for the growth media were procured from Difco Laboratories (USA). The culture harboring the recombinant T1 lipase gene was obtained from stock cultures of the Enzyme and Microbial Technology Laboratory, UPM. The BL21 (DE3) pLysS cultures were from Invitrogen (Groningen, The Netherlands). The QuickChange™ Site-Directed Mutagenesis Kit was purchased from Stratagene (La Jolla, CA, USA). Agar plates were prepared by addition of tributyrin to Luria-Bertani (LB) agar (Oxoid, UK), and the antibiotics chloramphenicol (35 mg/mL) and ampicillin (50 mg/mL) were supplied by Amresco (Solon, OH, USA). The Bradford reagent was also from Amresco (Solon, OH, USA). Oleyl alcohol was purchased from Fluka (Buchs, Switzerland). Ibuprofen and 3 Å molecular sieves were purchased from Sigma-Aldrich (St. Louis, MO, USA). The solvents DMSO, acetonitrile, ethyl acetate, hexane, toluene, isooctane, dodecane and N-tetradecane were of HPLC grade and were obtained from Merck (Darmstadt, Germany). Deionized water was produced in our laboratory.
Substitution of Residues by Site-Directed Mutagenesis
The Q114L and Q114M lipase proteins were made using the QuickChange method. The PCR reaction mixture was set up according to manufacturer's protocol. The primers were designed with the sequence change located in the center of the primer with approximately 10-15 bases on both sides. The DNA was incubated with the restriction enzyme Dpn-1 at 37 °C for 1 h to digest the parental methylated DNA. The Dpn-1-digested DNA was introduced into competent E. coli BL21 (DE3) pLysS cells, and the cultures were grown on LB-tributyrin agar media containing ampicillin and chloramphenicol. Each mutation was confirmed by sequencing. Glycerol stocks were then prepared and kept at −80 °C.
Preparation of Stock and Working Culture
The stock culture was prepared by inoculating a colony of lipase-producing bacteria into aliquots of Luria Bertani (LB) broth and incubated for 12-16 h. Aliquots of LB broth (800 µL) were added to sterile 15% glycerol (200 µL) in 1.5 mL Eppendorf tubes, vortexed and stored at −80 °C. As for the working culture, one loop of culture from the glycerol stock was streaked onto tributyrin-nutrient agar-ampicillin plates and incubated for 12-16 h at 37 °C. A carefully selected single colony was inoculated aseptically into a 10 mL sterile LB broth and incubated for 12-16 h at 37 °C and shaken at 200 rpm. The bacterial culture was centrifuged at 10,000 rpm for 10 min and stored at −80°C.
Partial Purification of Enzymes
The recombinant culture (1000 mL) was harvested by centrifugation and resuspended in 40 mL of PBS (pH 7.4) containing 5 mM DTT prior to sonication. The cell lysate was cleared by centrifugation at 12,000× g for 30 min and filtered through a 0.45 µm membrane filter (Sartorius). Glutathione-Sepharose HP resin (10 mL) was packed into an XK 16/20 column (GE Healthcare) and equilibrated with ten column volumes (CV) of PBS (pH 7.4). The cleared cell lysate was loaded onto the Glutathione-Sepharose HP column at a flow rate of 0.25 mL/min. The column was washed with the same buffer until no protein was detected. The bound lipase was eluted with Tris-HCl buffer containing 100 mM NaCl and 0.33 mM CaCl2, pH 8.0. The active fractions were identified by SDS-PAGE gel electrophoresis and pooled, followed by concentration using an Amicon Ultra-15 centrifugal filter (Millipore, Bedford, MA, USA). The concentrated solution was subjected to gel filtration on Sephadex G-25 in an XK 16/20 column pre-equilibrated with PBS buffer, pH 7.4, and run in the same buffer at a flow rate of 1 mL/min. The active fractions were collected and concentrated with an Amicon Ultra-15 centrifugal unit. The homogeneity of the partially purified protein was verified by SDS-PAGE, and the protein was lyophilized and stored at −20 °C [11].
Standard Lipase Assay and Protein Concentration
The standard assay for the determination of lipase activity was carried out according to previous work [10]. The protein content was determined by the method of Bradford (1976), using the Amresco assay reagent with bovine serum albumin (BSA; Sigma-Aldrich, St. Louis, MO, USA) as the standard. Different concentrations of BSA were prepared from a stock solution (0.1 mg in 10 mL of dH2O), and a spectrophotometer (HITACHI U-3210) was used to monitor the protein content at 595 nm, using preparations without BSA as the blank [34].
Enantioselective Esterification of (R-S)-Ibuprofen with Oleyl Alcohol
The reaction consisted of lyophilized lipase powder (5 mg), (R,S)-ibuprofen (0.8252 g, 4 mmol), oleyl alcohol (1.872 mL, 6 mmol) and isooctane (8 mL, unless otherwise stated) in screw-capped flasks (25 mL). The mixture was stirred at 200 rpm in a thermostatted oil bath at 50 °C. Periodically, samples from each reaction mixture (100 µL) were collected and placed on ice to stop the reaction. The mixtures were diluted appropriately for chiral HPLC analysis. Reaction samples (0.5 mL) were also withdrawn and mixed with hexane (2 mL).
Effect of Reaction Conditions
The esterification reactions were carried out in screw-capped flasks (25 mL) under identical conditions: 50 °C, lyophilized lipase (5 mg), (R,S)-ibuprofen (0.8252 g, 4 mmol) and oleyl alcohol (1.872 mL, 6 mmol), stirred at 200 rpm. The effect of various solvents was investigated using DMSO, acetonitrile, ethyl acetate, hexane, isooctane, toluene, dodecane and N-tetradecane (8 mL). The effect of variable desiccant loading was assessed in mixtures containing 0, 2 and 4 mg of molecular sieves.
Analysis and Determination of Ibuprofen Esters
Analysis of the reaction mixture was performed on an Agilent 1200 HPLC equipped with a chiral-phase column, an (R,R)-Whelk-O1 chiral column (Regis Technologies, Morton Grove, IL, USA), and an ELSD detector for the detection of all products. The eluent consisted of a mixture of hexane, isopropanol and acetic acid (98:2:0.5, v/v/v). The flow rate was kept at 0.75 mL/min and the column temperature at 25 °C. Sample injections were carried out by an autosampler. Samples (10 µL) were injected, and the respective retention times of (R)-ibuprofen acid, (S)-ibuprofen acid, (R)-ibuprofen ester and (S)-ibuprofen ester were 9.2, 10.5, 5.4 and 5.7 min. The conversion, X, of ibuprofen was determined by titration with NaOH (0.03 M) and calculated using Equation (1):

X (%) = [(C o − C i) / C o] × 100    (1)

where X is the overall conversion, C o the initial amount of racemic ibuprofen (mM) and C i the amount of racemic ibuprofen at a particular reaction time (mM). The E values and e.e p % were calculated using Equations (2) and (3), as previously described in the literature [35]:

e.e p (%) = ([R]ester − [S]ester) / ([R]ester + [S]ester) × 100    (2)

E = ln[1 − c(1 + e.e p)] / ln[1 − c(1 − e.e p)]    (3)

where c is the fractional conversion, and [R]ester and [S]ester represent the concentrations of the (R)- and (S)-enantiomers of the ibuprofen ester, respectively. The reported values are the average of three measurements from two separate determinations.
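For readers who wish to reproduce the calculation, a minimal sketch is given below; the peak concentrations are hypothetical numbers (not data from this study), and the E-value expression follows the common conversion/e.e p form assumed here for Equation (3).

import math

def conversion_percent(c0_mM, ci_mM):
    # Equation (1): overall conversion of racemic ibuprofen, in %.
    return (c0_mM - ci_mM) / c0_mM * 100.0

def ee_product(r_ester, s_ester):
    # Equation (2): enantiomeric excess of the ester product (as a fraction).
    return abs(r_ester - s_ester) / (r_ester + s_ester)

def e_value(c_fraction, eep):
    # Equation (3): enantiomeric ratio from fractional conversion and product ee.
    return math.log(1 - c_fraction * (1 + eep)) / math.log(1 - c_fraction * (1 - eep))

# Hypothetical HPLC results (mM), for illustration only
c0, ci = 500.0, 440.0        # initial and remaining racemic ibuprofen
r_est, s_est = 55.0, 5.0     # (R)- and (S)-ibuprofen ester
c = conversion_percent(c0, ci) / 100.0
eep = ee_product(r_est, s_est)
print(round(conversion_percent(c0, ci), 1), round(eep * 100, 1), round(e_value(c, eep), 1))  # 12.0 83.3 12.3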
Conclusions
Mutation at the oxyanion residue Q114 had a profound effect on the enantioselectivity of the reactions: substitution with Met improved enantioselectivity, but substitution with Leu did not. From a synthetic point of view, solvent properties were found to influence the activity and selectivity of the lipase-catalyzed reactions. The E values of Q114L, Q114M and T1 lipase correlated better with the log p of the solvents, with the exception of hexane. Meanwhile, the low ester conversions were attributed to practical limitations set by the acidic pH of the reaction, which was far below the pH optimum of the lipase variants, and by the incompatible use of desiccant. Extending the pH tolerance of the lipase towards lower pH values and using other desiccants as additives could improve the lipase-catalyzed esterification reaction.
"Chemistry",
"Engineering",
"Environmental Science"
] |
Comparative Study on Ant Colony Optimization (ACO) and K-Means Clustering Approaches for Jobs Scheduling and Energy Optimization Model in Internet of Things (IoT)
The latest IoT applications depend on the promotion of wireless sensor networks (WSNs) together with engineering expertise. These IoT applications contain a large number of connected devices with different requirements and technologies. Such IoT applications sense and collect data and transmit them to administrator nodes for further operations, and even to a cloud back end for data analytics. These processes require routing protocols for their completion. Routing protocols face two major challenges: the first is to improve data transmission and scalability, whereas the second is to minimize energy consumption. In an IoT application, network nodes under different network topologies collect different kinds of data, so that an IoT application produces an enormous amount of data. The heterogeneity of network topologies prevents TCP/IP from being the best policy for proper resource allocation for computing and routing [1]-[3], [27]-[29]. Owing to the above-mentioned challenges, different people view IoT in different ways, based on their perception and requirements. A routing protocol includes multiple job-scheduling methodologies, which are reported as either heuristic- or metaheuristic-based approaches. Heuristic-based methodologies are comparatively more helpful when looking for a local optimum, whereas metaheuristic methodologies further explore the solution space to attain global optima.
Although metaheuristic methodologies look very appealing, the large number of parameters that must be tuned in the context of IoT limits their utilization [4]-[9], [27]-[29].
A number of researchers have developed and used ACO algorithms for finding the shortest path in several routing problems. An ACO algorithm includes a stochastic local search strategy to construct routing paths using a set of artificial ants. These ants work cooperatively, using indirect communication of information, to construct the optimal shortest path. As an intelligent optimization algorithm, ACO borrows its idea from the food-searching behaviour of real ant colonies and from how ants accomplish this difficult job by working together. Based on biological studies of ants, ACO can be assumed to mimic the way ants find the shortest path from the nest to the food. The pheromone-deposition mechanism by which an ant shares information with other ants through indirect coordination is called stigmergy. A number of researchers have suggested that the ACO optimization algorithm is very good for collaboration, exchange and transmission of information. The ACO algorithm is based on pheromone updates, which depend on the best solution achieved, the pheromone amount and the number of ants. Natural ants find the shortest path based on their own best-known solution, and this depends on a strong pheromone trace. In the ACO algorithm, the probability of selecting a path increases with the pheromone quantity deposited on it and decreases with the length of the path [10]-[12].
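A minimal sketch of this shortest-path mechanism is given below; the toy graph, the parameter values (alpha, beta, evaporation rate) and the update rule are illustrative assumptions in the usual ACO style, not the specific scheduling variant evaluated in this paper.

import random

def aco_shortest_path(dist, source, dest, n_ants=20, n_iter=50,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    # Toy Ant Colony Optimization for a shortest path on a weighted graph.
    # dist: dict {node: {neighbor: distance}}.
    tau = {u: {v: 1.0 for v in nbrs} for u, nbrs in dist.items()}  # pheromone levels
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            node, path, length = source, [source], 0.0
            while node != dest:
                nbrs = [v for v in dist[node] if v not in path]
                if not nbrs:
                    break
                # Transition probability ~ tau^alpha * (1/d)^beta
                weights = [tau[node][v] ** alpha * (1.0 / dist[node][v]) ** beta for v in nbrs]
                nxt = random.choices(nbrs, weights=weights)[0]
                length += dist[node][nxt]
                node = nxt
                path.append(node)
            if node == dest:
                paths.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # Pheromone evaporation followed by deposit proportional to 1/length
        for u in tau:
            for v in tau[u]:
                tau[u][v] *= (1.0 - rho)
        for path, length in paths:
            for u, v in zip(path, path[1:]):
                tau[u][v] += q / length
    return best_path, best_len

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(aco_shortest_path(graph, "A", "D"))  # typically (['A', 'B', 'C', 'D'], 4.0)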
The ACO algorithm depends on a probabilistic method for solving computational problems and finding minimal paths through graphs [13]. Clustering, in turn, is used in a wide range of research areas such as engineering, medicine, data mining, biology, artificial intelligence and even IoT. Xu and Wunsch (2005) presented an abbreviated survey of clustering algorithms. K-means clustering is the most commonly used algorithm: it divides the data/objects into a number of clusters by minimizing the sum of the squared distances between the data/objects and the centroids of the clusters. The k-means clustering algorithm is one of the simplest algorithms, but it is not suitable for large data sets due to its higher time complexity. Various methods have been proposed to accelerate k-means, such as backtracking when the computational complexity increases [4], [12].
In recent years, many new clustering algorithms have been proposed after deep study of clustering. There are clustering algorithms based on ant systems, and combining two or three different clustering algorithms is a new direction in clustering analysis. Clustering analysis plays an important role in the data-mining field: data can be grouped into different classes or clusters such that there is high similarity among the objects within the same class and low similarity among objects in different classes. In machine learning, clustering is a kind of unsupervised learning because it has no prior knowledge of classification labels. Clustering analysis is widely applied in image processing, pattern recognition, document retrieval, medical diagnosis, web analysis, etc. [4], [14].
A. The Basic Principle of the K-means Algorithm
K objects are randomly chosen from the n objects as initial clustering centers. The algorithm then calculates the distance from each object to the k clustering centers, determines which clustering center is nearest, and assigns the object to the cluster of that center. When all this computation is done, k new clusters are formed. Next, the algorithm re-computes the mean value of each new cluster as its new clustering center. Following the above procedure, the algorithm repeatedly calculates the distances and iterates until the criterion function converges. The sum of squared error is often used as the criterion function. It is defined as [4], [14]:

E = Σ_{i=1..k} Σ_{p∈C_i} |p − m_i|²    (1)

where E is the sum of squared error over all objects in the database, p is a point in the space representing a given object, and m_i is the mean of cluster C_i. According to this criterion, data belonging to the same class are as similar as possible and data from different classes are as different as possible [4], [14].
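As an illustration of the procedure just described, here is a compact sketch of k-means on 2-D points; the sample points are hypothetical, and the returned E is the sum-of-squared-error criterion of equation (1).

```python
import random

def kmeans(points, k, n_iters=100):
    """Lloyd's k-means: assign each point to the nearest of k centers,
    recompute each center as the mean of its cluster, repeat until stable.
    Returns (centers, labels, E) where E is the sum of squared errors."""
    centers = random.sample(points, k)                  # k random initial centers
    for _ in range(n_iters):
        # assignment step: index of the nearest center for every point
        labels = [min(range(k), key=lambda c: _sq(p, centers[c])) for p in points]
        # update step: new center = mean of the points assigned to it
        new_centers = []
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            new_centers.append(tuple(sum(x) / len(members) for x in zip(*members))
                               if members else centers[c])
        if new_centers == centers:                      # criterion function converged
            break
        centers = new_centers
    E = sum(_sq(p, centers[l]) for p, l in zip(points, labels))
    return centers, labels, E

def _sq(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

if __name__ == "__main__":
    pts = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.2, 7.9), (7.8, 8.1)]
    print(kmeans(pts, k=2))
```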
The article is divided into six sections. After a brief introduction to ACO and k-means clustering in Section I, Section II covers the related work, Section III explains the problem definition, Section IV presents the proposed algorithm, Section V reports the performance evaluation, and finally Section VI gives the discussion and conclusions.
II. Related Work
The IoT environment contains a large variety of different types of networks. Routing in WSNs from source to destination is one of the important issues in an IoT system. The algorithms used to select cluster heads/nodes depend on specific characteristics of the clusters and/or network environments, such as the energy level, and suffer from complexity. Hence, fixed architectures are unsuitable for IoT, and the features of IoT applications are dynamic. A large amount of research on routing in IoT from source to destination has been reported in the literature [1], [4], [26].
Omar Sajid [1] has proposed optimal routing paths using Ant Colony Optimization (ACO) algorithms inside the IoT system. Depending on the type of network, Sajid [1] suggested dividing the IoT environment into various zones according to status, requirements, etc., and then applying the ACO algorithm that fits each network; the simulation results showed that the proposed routing algorithm saves energy better. Kumar et al. [4] presented a comparison of several clustering algorithms to analyze scheduler performance and suggested that K-means based clustering is effective for IoT-based environments. Lu et al. [10] suggested that ACO can find the path for broadcasting signaling among various network nodes and flexible network structures, and simulation analysis showed that path finding by ACO in IoT efficiently decreases the transmission storm. When the number of nodes in the path-finding process increases, it becomes important to reduce the time of path construction. In order to analyze a large-scale routing strategy, Guang Ji [15] proposed an IoT ant colony searching routing approach based on a Markov decision model. The Markov decision ant colony routing selection algorithm is based on a multi-parameter equilibrium: the Markov decision model estimates the number of nodes in a node's communication range and facilitates the decision that meets the requirements. This algorithm efficiently decreases the overhead generated by control messages and multi-hop routing between clusters, and makes the evaluation function value of the path correspond to the pheromone concentration of the ant colony in the decision set, repeating the process. During the routing discovery phase, it calculates the transition probability of nodes and selects the globally optimal route. Simulation-based analysis showed that the problem of network "hot spots" is effectively solved by the Markov-A algorithm and the energy consumption of the network is balanced, so that the life cycle of the network is prolonged. Dorigo et al. [16] gave an explanation of the Ant Colony Optimization (ACO) meta-heuristic and discussed the types of problems to which it can be applied, using ACO algorithms in two typical applications, namely the traveling salesman problem and routing in packet-switched networks. Merkle et al. [17] introduced an ACO method for the resource-constrained project scheduling problem (RCPSP). It combines direct (local) and indirect (global) pheromone evaluation approaches for the ants' construction of a new solution. As newly added features, this algorithm changes the strength of the heuristic effect and the rate of pheromone evaporation over ant generations. Under some limitations, the authors' proposed algorithm found the best solutions compared to several other heuristics, and its behavior with and without limits on the number of evaluated schedules shows the flexibility of the method. Michael Frey et al. [18] proposed a framework and methodology to study ant routing algorithms for wireless networks. While running experiments in a wireless test bed is a cumbersome, expensive and error-prone task, studying ant routing algorithms in simulation allows some specific properties of these algorithms to be investigated more easily.
This includes behavioral aspects such as adaptivity and pheromone evolution, scalability with respect to the number of nodes or traffic flows, and mobile scenarios. The framework is easy to extend and customize by providing new back ends for different network simulators (or test bed frameworks), which is feasible with acceptable effort. Mariusz et al. [19] proposed an Ant Colony Optimization (ACO) based algorithm designed to find the shortest path in a graph. The algorithm consists of several sub-problems that are presented successively, and each sub-problem is discussed from many points of view to enable researchers to find the most suitable solutions. Algorithms based on the ant colony metaheuristic do not guarantee finding an optimal solution in all possible cases. Accordingly, during experimentation it is particularly important to find and select parameters dedicated to each of the problems under consideration, and the individual elements of the applied procedures should also be analyzed with regard to their usability and purposefulness. The construction of this shortest-path ACO algorithm directly reflects the various variants of executing the individual elements of the procedure. In this way, it is possible to improve the method for solving the shortest-path problem so as to approach or reach optimal solutions. An evaluation of the running time and the quality of the returned solutions provides information for deciding whether a given scheme is of optimum quality or an alternative to more time-consuming procedures or procedures with higher computational cost. Yuqin et al. [20] recommended a new K-Means algorithm that combines density-based initialization with ant searching theory, controlling the initial parameters of k-means and escaping local minima through the random behavior of the ants. The experimental analysis shows that this algorithm achieves better productivity and accuracy of clusters: it places similar objects together in one cluster and keeps dissimilar objects apart. The procedure has the random capability of ACO, which prevents the clustering from falling into local optimality, and it furthermore avoids sensitivity to the initial partition of the k-means algorithm. Gelenbe et al. [25] investigated the relationship between system load, energy consumption and QoS using a simple queuing model, analyzing the parameters that affect the response time of the system and the energy cost per job. S. Kumar et al. [30], [31] and V. García-Díaz et al. [32] proposed Supply Chain Management based models for optimizing the response time and job scheduling by applying the M/M/1 queuing model in an IoT environment.
III. The Problem Definition
This section contains the problem statement followed by a description of the ACO and K-Means clustering based IoT messaging service architecture considered in the work and the job model.
A. Problem Statement
In an IoT environment, a large number of heterogeneous wired and wireless devices/objects are interconnected and identified by IPv6 addressing, using single or multiple levels of subnets. These devices/objects generate a large amount of data (often as a continuous stream), and scheduling these data in the IoT environment from source to destination becomes a challenging issue.
IoT is a mixture of multiple wired and wireless communication technologies. Routing is the most important challenge in the IoT environment: how to find the best optimal path for data transmission from one node to another in different environments. An IoT environment includes different types of networks, which depend on the networks' status and requirements. In this article, it is proposed that each network has its own responsibility for finding an optimal path in the IoT environment. There are many inter-connections between different networks, and these inter-connections are called overlapped areas. This work therefore intends to create an algorithm that controls the use of the algorithms and determines a solution for the overlapped-areas problem. This algorithm has been tested and compared with the ACO and K-Means clustering algorithms that are closely related to the IoT routing problem.
The prime objective of this work is to calculate and compare the response time for message forwarding in the entire IoT environment, using the ACO and k-means clustering approaches to find a suitable path and reduce the energy consumption/cost. For large-scale data transport in an IoT setting, this is very beneficial for providing flexible and effective response-time services. The number of clusters is one of the most important features for evaluating the performance of the K-means clustering algorithm, and the number of paths is one of the most important features for evaluating the performance of the ACO algorithm. The nodes (for the ACO algorithm) and Cluster Heads (CHs) (for the K-means clustering algorithm) may be completely or partially connected. The processing speed of each CH/Node can be measured as a MIPS (Million Instructions Per Second) count [4], [21], [26].
B. Job Model
Each job or message is divided into sub-jobs/tasks depending on the priority of the jobs and the sequence of data messages. Data are available in the form of data packets and need to be transmitted from source to destination. The jobs are mathematically modeled by a weighted graph D = (T, E), where T is the set of t tasks and E is the set of e edges among the jobs. The edges express the priority ordering of the tasks/messages [4], [26].
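As a small illustration of this job model, the tasks, edge weights and helper below are hypothetical; they only show one way to hold D = (T, E) in code.

```python
# Job model D = (T, E): T is the set of tasks, E the weighted precedence edges among them.
tasks = {"t1", "t2", "t3", "t4"}
edges = {("t1", "t2"): 1,   # weight = priority of the task/message ordering (hypothetical values)
         ("t1", "t3"): 2,
         ("t2", "t4"): 1,
         ("t3", "t4"): 3}

def successors(task):
    """Tasks that may only be scheduled after `task` has been forwarded."""
    return [b for (a, b) in edges if a == task]

print(successors("t1"))   # -> ['t2', 't3']
```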
C. A Route Planning Model for K-Means Clustering and ACO Algorithms
The performance measurement for both K-Means clustering algorithms and ACO algorithms is done in terms of the average response time. It is assumed that each server follows the M/M/1 queuing model where λ is the arrival rate and μ is the service rate [4], [22], [26].
The sets of objects shown in different colors belong to different clusters and paths, and a sample shortest path from the source to the destination base station is shown based on a hypothetical layout.
QoS parameters such as the Average Response Time (RT), Average Waiting Time (WT) and Average Queue Length (QL) have been estimated. Here, it is assumed that each server follows the M/M/1 queuing model. The average queue length at the i-th CH/Node, counting both the jobs waiting in the queue and those in service, can be written as

E[N_i] = ρ_i / (1 − ρ_i)

where ρ_i is the i-th server utilization and E[N_i] is its average queue length [4], [23]-[24], [26].
The average response time depends on how quickly a CH/Node responds. The arrival rate of jobs at the input queue of each CH/Node is randomly generated. The average response time can be estimated as [4], [23]-[24], [26]:

E[RT_i] = 1 / (μ_i − λ_i)    (2)

The waiting time is the period during which a job does not execute because of execution precedence or because it is waiting for some event to happen. The average waiting time at each CH/Node in the path can therefore be calculated as [4], [23]-[24], [26]:

E[WT_i] = ρ_i / (μ_i − λ_i)    (3)

Let us assume that there are n nodes in a network and m nodes transmitting the search signal at the same time during network routing. τ_ij(t) is the amount of active signaling (pheromone) established on the path between nodes i and j, and d_ij (i, j = 1, 2, ..., n) stands for the distance between nodes i and j at time t.
In the initialization stage, m random nodes are selected; the amount of signaling (pheromone) between nodes i and j is τ_ij(0), tabu_k is the initial tabu list of each signaling k, and k is assigned a starting node.
Let p^k_ij(t) stand for the probability of signaling k moving from node i to node j at time t; then

p^k_ij(t) = [τ_ij(t)]^α [η_ij(t)]^β / Σ_{s ∈ allowed_k} [τ_is(t)]^α [η_is(t)]^β   if j ∈ allowed_k, and 0 otherwise,

where allowed_k ⊆ {0, 1, ..., n−1} is the set of nodes that signaling k is still permitted to visit next. The difference between an artificial ant and a real ant colony is the capability of memory: tabu_k (k = 1, 2, ..., m) records the nodes that signaling k has passed at the current time and is dynamically adjusted as the transmission of signaling k proceeds. 1 − ρ represents the pheromone evaporation factor, and α and β respectively weight the amount of collected pheromone information and the heuristic desirability in the path selection during signaling re-transmission; η_ij(t) represents the heuristic (prediction) value of the transmission from node i to j. When signaling k has covered all nodes and completed a cycle, the information of all paths is updated according to the following equation:

τ_ij(t + n) = (1 − ρ) · τ_ij(t) + Δτ_ij,   with   Δτ_ij = Σ_{k=1..m} Δτ^k_ij    (6)

where Δτ^k_ij in (6) represents the amount of pheromone information contributed by signaling k between nodes i and j in this cycle:

Δτ^k_ij = Q / L_k   if signaling k used edge (i, j) in this cycle, and 0 otherwise.    (7)

Here, Q is a constant and L_k is the length of the path that signaling k has traversed in this cycle.
In order to estimate the shortest path from a source node (S) to a destination (E) using ACO, assume that λ_1 is the arrival rate of jobs at Node_1, which acts as the source node (S) for transmission to the destination (E), Node_5. Node_1 has Node_2, Node_3 and Node_4 as immediate neighbors. Let the probabilities of arrival of jobs at the queues of these neighbors be P_2, P_3 and P_4 respectively, such that P_2 + P_3 + P_4 = 1. In this article it is assumed that all paths are selected with equal probability, i.e. P_2 = P_3 = P_4. After being serviced by server Node_1, the jobs arrive at Node_3, this being the best path among the available paths, i.e. the one offering the minimum response time, with probability P_3. Let this probability be referred to as P_1 for the remainder of the path, indicating the path chosen in the beginning. Therefore, the arrival rate for Node_3 can be written as λ_1 P_3 = λ_1 P_1. Similarly, after being serviced by server Node_3, the jobs arrive at Node_4 and finally reach Node_5 with arrival rate λ_1 P_1.
Therefore, with P_1 = 1 the utilization of Node_1 becomes

ρ_1 = λ_1 / μ_1

and the utilization of the selected nodes can be written as [4], [23]-[24], [26]

ρ_{i,Node_j} = λ_1 P_1 / μ_{i,Node_j}    (9)

where i is the stage in the network and j is the index of the selected node in each stage.
The average queue length for Node_1 becomes

E[N_1] = ρ_1 / (1 − ρ_1)    (10)

Similarly, the average queue length can be calculated for the other selected nodes in the path as [4], [23]-[24], [26]

E[N_{i,Node_j}] = ρ_{i,Node_j} / (1 − ρ_{i,Node_j})    (11)

The queue length for the entire path is the sum of the Average Queue Length (QL) values of the selected nodes in every stage:

E[QL_avg] = Σ_i E[N_{i,Node_j}]    (12)

where i is the stage and Node_j is the node selected for message forwarding in that stage.
The average response time follows from equation (2). In the current case, the path from Node_1 (source) to Node_5 (destination) comprises Node_2, Node_3 and Node_4. The average response time for Node_1 is

E[RT_1] = 1 / (μ_1 − λ_1)    (13)

For a Node_j server the response time can be written as

E[RT_{i,Node_j}] = 1 / (μ_{i,Node_j} − λ_1 P_1)

and, similarly, the waiting time for a Node_j server can be written as

E[WT_{i,Node_j}] = ρ_{i,Node_j} / (μ_{i,Node_j} − λ_1 P_1)

In order to use the K-Means clustering algorithm, a sample path among the clusters for routing the messages from Source (S) to Destination (E) has been chosen; the routing paths follow the K-Means clustering algorithmic characteristics and properties such as the Euclidean distance and the degree of the nodes to find the shortest path from Source (S) to Destination (E).
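The per-node M/M/1 expressions above, and their sums along a selected path, can be illustrated with the following sketch; it applies equally to the ACO Nodes and the K-Means CHs, and the arrival and service rates used in the example are hypothetical.

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 expressions for one CH/Node: utilization rho, mean queue
    length E[N], mean response time E[RT] and mean waiting time E[WT]
    (the queue is stable only if lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu
    return {"rho": rho,
            "E[N]": rho / (1.0 - rho),
            "E[RT]": 1.0 / (mu - lam),
            "E[WT]": rho / (mu - lam)}

def path_metrics(arrival_rate, service_rates):
    """Aggregate metrics along a source->destination path: with P1 = 1 every
    selected node sees the same arrival rate, so the path totals are simply
    the sums of the per-node values (cf. equation (12) and the RT/WT sums)."""
    per_node = [mm1_metrics(arrival_rate, mu) for mu in service_rates]
    totals = {key: sum(m[key] for m in per_node) for key in ("E[N]", "E[RT]", "E[WT]")}
    return per_node, totals

if __name__ == "__main__":
    # hypothetical path Node1 -> Node3 -> Node4 -> Node5 with per-node service rates
    per_node, totals = path_metrics(arrival_rate=40.0,
                                    service_rates=[100.0, 80.0, 120.0, 90.0])
    print(totals)
```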
Here it is assumed that P is the probability that a job arriving at the source queue is forwarded to the destination in K-Means clustering through a Cluster Head (CH). Let the arrival rate of jobs at CH_1 be λ_1, CH_1 being the start CH 'S' in the IoT network. The packet needs to be transmitted to the end cluster 'E', which is CH_5 and can be the preferred endpoint, such as cloud storage. After being serviced by CH_1, let the message be forwarded to CH_2. As discussed above, the arrival rate for CH_2 becomes λ_2 P = λ_1 P. Similarly, the message reaches the destination CH_5 with arrival rate λ_1 P, being routed through the various cluster heads CH forming the path. The stable utilization of the CH_1 server is [4], [22], [26]

ρ_1 = λ_1 / μ_1    (19)

Similarly, for a CH_j server the utilization can be written as

ρ_{i,CH_j} = λ_1 P / μ_{i,CH_j}    (20)

where i is the stage in the network and j is the index of the selected CH in each stage.
The average queue length for CH_1 is [4], [22], [26]

E[N_1] = ρ_1 / (1 − ρ_1)    (21)

Similarly, the average queue length can be calculated for the other selected CH_j in the path as [4], [23]-[24], [26]

E[N_{i,CH_j}] = ρ_{i,CH_j} / (1 − ρ_{i,CH_j})    (22)

The queue length for the entire path can be estimated as

E[QL_avg] = Σ_i E[N_{i,CH_j}]

where i is the cluster (stage) and CH_j is the node selected for message forwarding. The average response time for CH_1 is

E[RT_1] = 1 / (μ_1 − λ_1)    (24)

Similarly, the average response time can be calculated for the other selected CH_j in the path as [4], [23]-[24]

E[RT_{i,CH_j}] = 1 / (μ_{i,CH_j} − λ_1 P)    (25)

The average response time for the selected path is the sum of the Average Response Time (RT) values of the selected CH_j in every stage i.
Similarly, for a CH_j server the waiting time can be written as

E[WT_{i,CH_j}] = ρ_{i,CH_j} / (μ_{i,CH_j} − λ_1 P)

The average waiting time for the complete path is the sum of the average Waiting Time (WT) values of the selected CHs in every stage:

E[WT_avg] = Σ_i E[WT_{i,CH_j}]    (29)

IV. The Algorithm

The primary objective of this algorithm is to calculate and compare the message response time from source to destination while preserving the service quality of variable-sized messages forwarded through the CHs/Nodes in the network. The term Cluster Head (CH) is used for the K-Means clustering algorithm and the term Node for the ACO algorithm. When a sensor device sends a message from the source CH/Node to the destination CH/Node, it expends some communication cost in transferring the message and waits for the response of each CH/Node in the IoT network environment. This is a serious problem for IoT because it affects the battery life of the sensor CH/Node, which weakens with every waiting period. However, if the response time is reduced by a proper path choice, the waiting time decreases and the power of the sensor devices is eventually preserved. Thus, this method decreases the response time and also makes the scheduling algorithm energy-aware.
These messages convey data about physical parameters which change continuously to some degree. Algorithm 1 corresponds to the message-scheduling algorithm for the IoT framework.
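Since Algorithm 1 itself is not reproduced here, the following is only a hypothetical driver sketch showing how the two strategies could be compared on response time for varying job sizes; the routes, service rates and the mapping from job size to arrival rate are assumptions, not the paper's values.

```python
def avg_response_time(path, arrival, service_rates):
    """Mean end-to-end response time of a path of M/M/1 servers (sum of 1/(mu - lambda))."""
    return sum(1.0 / (service_rates[n] - arrival) for n in path)

def compare_schedulers(job_sizes, aco_path, kmeans_path, service_rates):
    """Hypothetical driver mirroring the experiment: fix the routes chosen by the
    two strategies, vary the job size (mapped here to an arrival rate), and record
    both response times so they can be tabulated or plotted."""
    rows = []
    for size in job_sizes:
        arrival = size / 100.0                      # assumed job-size -> arrival-rate mapping
        rows.append({"job_size": size,
                     "RT_ACO": avg_response_time(aco_path, arrival, service_rates),
                     "RT_KMeans": avg_response_time(kmeans_path, arrival, service_rates)})
    return rows

if __name__ == "__main__":
    rates = {"N1": 50.0, "N2": 60.0, "N3": 55.0, "CH1": 45.0, "CH2": 65.0}
    table = compare_schedulers(range(200, 1001, 200),
                               aco_path=["N1", "N2", "N3"],
                               kmeans_path=["CH1", "CH2"],
                               service_rates=rates)
    for row in table:
        print(row)
```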
V. Performance Evaluation
To evaluate the QoS parameters, Matlab 64-bit version 8.5.0.197613 (R2013a) on an Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz with a 64-bit operating system and 4 GB of RAM has been used as the simulation platform. An M/M/1 priority queuing model has been used for resource provisioning and message scheduling within the IoT environment. The applied M/M/1 queuing model could be replaced by newer models such as phase-type queuing networks or Pareto-distribution-based queuing networks; the rationale for the M/M/1 model is that it treats each node/CH as a single server and has been established to fit these types of problems well. Random data values have been generated for the experiment during execution, and the results are summarized in Table I. The ACO response time is lower, and hence better, than the response time of the K-means clustering algorithm. The effect on some other QoS parameters such as the Average Service Rate, Average Queue Length and Average Waiting Time is shown in Figs. 4-6. In this experimental setup, we fixed the job size at 1000 while varying the number of CHs (for K-means) and Nodes (for ACO) from 10 to 50; the numeric data are given in Tables II-IV. Fig. 7 shows the response time of both the ACO and K-means clustering algorithms when the job size is varied from 200 to 1000 for a fixed number of Nodes (for ACO) and CHs (for K-means), namely 10, with the numeric data given in Table V. We observed that the response time of ACO is again better than that of the K-means clustering algorithm.
VI. Conclusion
IoT is poised to change the way we live. Given the huge heterogeneity and dynamicity of IoT, the response time should be kept as low as possible for better network performance, leading to an efficient and smart IoT. The response time affects the energy cost per job of the system [25]; optimizing the response time for the transmission of data/jobs across the entire IoT environment therefore automatically optimizes the energy cost/use. This work contains a comparative analysis between the ACO and K-means algorithms based on a job/message scheduling model for IoT built on an N-layered network. Performance measures such as the average queue length, average waiting time and average response time have been derived, plotted and analyzed. It is observed that ACO offers better performance for the considered parameters. The model gains significance from the fact that efficient message forwarding in IoT will ensure the optimum use of sensor energy to realize a truly smart framework. | 6,287.4 | 2020-03-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Real-Time Reliability Verification for UAV Flight Control System Supporting Airworthiness Certification
In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we propose a model-based integration framework for modeling and verifying time properties. Building on the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. Using the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified with existing formal tools. For the real-time specifications of the software system, we also propose a generating algorithm for temporal logic formulas, which automatically extracts real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled a simplified flight control system of a UAV to check its real-time properties. The results show that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system.
Introduction
UAVs have been used in a vast range of civil and military applications and have also suffered accidents caused by airborne software failures [1][2][3]; we therefore want to develop a highly reliable UAV flight control system. The traditional approach used for manned aircraft takes substantial time and resources, which makes it impractical for analyzing and validating a UAV flight control system. To shorten the development cycle and improve the reliability and performance of the flight control system, an integrated framework for the flight control system design process is needed [4]. Such a framework could integrate the existing design methods and verification tools and use an iterative development cycle to implement and quickly validate a UAV flight control system design. Therefore, it is important to enhance the reliability and robustness of the UAV flight control system by improving the methods of modeling, testing and verification.
The Federal Aviation Administration (FAA) requires that airborne software systems undergo airworthiness certification [5]. Thus, we should model and verify the UAV flight control system according to the corresponding airworthiness certification standards.
In airworthiness certification, we considered the DO-178B standard, which is the current software certification standard released by the Radio Technical Commission for Aeronautics (RTCA). DO-178B prescribes the objectives for each important step of the airborne software development process [6]. With the development of software verification technology, RTCA and the European Organization for Civil Aviation Equipment (EUROCAE) revised DO-178B and published the DO-178C/ED-12C standard in 2011. DO-178C/ED-12C added several technical supplements on software tool qualification considerations, model-based development and verification, object-oriented technology and formal methods [7][8][9].
Development and verification based on executable models can optimize the airborne software process. During the requirements and design stages, software faults should be found as early as possible in order to eliminate design errors and enhance the robustness of the software. According to DO-178C/ED-12C, source code should be generated directly from the design model, and both should pass validation.
In the development of airborne software, object-oriented technology benefits the generation of source code for test and certification via model driven architecture (MDA) or MARTE tool. It can improve the reusability and validity of software. DO-178C/ED-12C therefore requires adopting object-oriented technology.
DO-178C officially acknowledged the effectiveness of formal methods during the airborne software development process. A formal method consists of a formal model and formal analysis. The formal model defines an unambiguous abstract model of the system based on mathematical syntax and semantics, while formal analysis proves the consistency between the system properties and the software requirements by theorem proving or model checking.
The design and verification of airborne software should comply with the guidance of DO-178C to obtain certification approval [10]. Recently, in order to improve the development methods for airborne software and meet airworthiness certification, researchers have proposed several integrated development frameworks. Most of these methods focus on a model-based development environment in which different tools and techniques can be adopted and applied. The main challenge of the model-based development approach is to generate dynamic models of the airborne software with appropriate precision at the different development stages [1]. A mathematical model can only describe a simplified real flight control system and is merely an approximation of the airborne software, because the sensors of a UAV are inaccurate and the aerodynamic data are not understood well enough. Therefore, a mathematical model cannot be used to predict the real-time reliability responses.
To study the feasibility of applying model checking to avionic embedded software, Sreemani et al. took advantage of the Software Cost Reduction method to describe the software requirements specification of the military aircraft A-7E and then translated the specification into the Symbolic Model Verifier (SMV) [11]. They pointed out that model checking helps to provide safe and reliable software, but it could not be extended to large-scale software.
To verify the requirement specification of large-scale software, Chan et al. proposed an interactive design process based on the progressive refinement of the models. This process can specify the set of properties, and adopt SMV to check the requirements specification of TCAS II. The limitation is that model checkers cannot be integrated with other CASE tools.
Pingree et al. applied model checking to the development of the flight software for NASA's Deep Space 1 mission [12]. They used Stateflow to create the software model and translated it into SPIN. The approach can automatically generate software code from the Statechart model, but it cannot ensure that all design errors are exposed in the model. To rapidly synthesize, analyze and validate a candidate UAV software design, Yew Chai Paw et al. proposed a model-based framework that integrates a set of design tools to realize software model synthesis as well as off-line and real-time simulation [1]. They pointed out that simulation-based testing is important for saving cost and time. However, Seveg et al. compared simulation with formal verification in the SoC design process [13], indicating that formal verification requires more time and memory, but it can identify failed properties and achieves the highest credibility of the verified system.
Cofer et al. inserted formal analysis tools into a model-based development process to improve the quality of avionics software [14]. They used Stateflow to generate the model-based specification, used Lustre as an intermediate representation for the models, and then translated the specification into the NuSMV model checker. They applied the method to the FCS 5000 flight control system and an adaptive flight control system for a UAV [15]. By analyzing the causes of 156 failures on 129 spacecraft, Tafazoli pointed out that 6% of these failures are due to software failure [16].
In this study, our purpose is to verify the real-time reliability of the UAV flight control system. By combining the model-based method with formal analysis tools, we can reduce the cost of building separate analysis models and keep the model consistent with the software design.
Following the requirements of DO-178C, we utilized MARTE class diagrams to set up a time-related static model of the flight control system based on MDA. We also employed time-triggered dynamic models to describe system state changes. As MARTE is a graphical model, we proposed a formal PTA-OZ model and constructed its model transformation rules. Using these rules, we can translate both the static and the dynamic models into the corresponding Object-Z and PTA models, which are suitable for verifying the real-time reliability of the system and for automatic code generation. In order to obtain the real-time specification of the design model, we present a method to extract real-time properties from TLSC. This paper proceeds as follows. Section 2 outlines the logic structure of the flight control system. Section 3 describes the PTA-OZ model based on MDA. Section 4 gives the method for extracting real-time properties from the scenario-based language. Section 5 reports a case study of real-time reliability verification. Finally, Section 6 concludes the paper.
The Logic Structure of Flight Control System
The software architecture of UAV flight control system mainly consists of communication link (CL) module, sensor signal processing module, flight guidance (FG) module, servo control (SC) module, mode supervise (MS) module and PWM steering output modules.
1. CL module is used for receiving telecommand (TC) from the ground control station (GCS), down-transporting telemetry (TM) to GCS and for data communication among the airborne modules.
The system uses the A/D of an AVR to collect signals, such as six-degree-of-freedom information and supply voltage, and uses a D/A card to output the PWM signal. Fig 1 shows part of the hardware and software architecture of the flight control system. We focused on the command and data processing structure among the GCS and the FG, SC and MS modules. We not only require the UAV flight control software to satisfy the DO-178C airworthiness certification requirements, but also fully reference the ECSS-E-70-41A standard to standardize the service offered by each module. We therefore employed the telecommand verification service and the on-board operations scheduling service, which adopt the ECSS-E-70-41A standard, in the FG module of the flight control software.
Modeling and Description of Time Property
The advantages of the MDA-based development method in the design process of embedded software are as follows. Firstly, the execution platform of embedded software is usually heterogeneous and reconfigurable. Secondly, applications often need to react to an external environment, and embedded software design focuses on handling data or control flows. Finally, software design is often subject to real-time requirements and resource availability [17]. Above all, in the early stage of embedded software design, the MDA-based development method can detect whether resources and services meet the requirements specification under the given constraints.
Time static model based on MARTE
Although model-based development tools such as the SCADE Suite have been used to formally verify critical avionics software [15], it is difficult to generate code from a time-synchronous system. SCADE employs a reference or master clock and defines all clocks as functional samples of the master clock [18]. This provides a solution for generating code for uniprocessor systems, but it is difficult to generate distributed-system code. For parallel implementations, MARTE can generate multi-threaded code from a concurrent specification [19].
MARTE, which can be used to establish formal models of real-time and embedded systems (RTES), is a UML extension profile. MARTE defines a mathematically expressive model of time to annotate UML diagrams with a formal timing interpretation. MARTE also defines the concepts necessary to build software models of HW/SW embedded systems, whose performance depends on the interaction among the different components.
Although UML2 introduces the SimpleTime package to create time models, it is too simple for RTES. The time models based on MARTE are more suitable for software design; they may be physical, logical, or user-defined. MARTE uses a collection of clocks to represent time, and each clock specifies a totally ordered set of instants.
In MARTE, the «ClockType» and «Clock» stereotypes can be used to represent the concept of a clock. «ClockType», which is the type of «Clock», specifies the common features shared by a family of clocks, while «Clock» includes more detailed information. We adopted these stereotypes to define a «ClockType» and several clock instances (Fig 2).
Firstly, we use the «ClockType» stereotype to define a new clock type and specify its features with tagged values. The new discrete «ClockType» Chronometric, whose supported unit is s, uses a readable resolution to determine the resolution of the associated clock and a currentTime operation to obtain the current time.
Secondly, we introduced the predefined clock idealClk from the MARTE library, which is an instance of IdealClock. It represents the continuous clock of physical time and uses s as its unit of time. The clocks t1, t2, t3, t4 and t5, which are instances of Chronometric, use clock constraints to specify their deviations with respect to the ideal clock. In the «ClockConstraint» stereotype we adopted the clock constraint specification language (CCSL) to express the clock constraints. The clock c is defined as an ideal discrete clock whose resolution is 0.001 s. From idealClk we declare a clock t1 with a period of 10 ms and a resolution of 0.01 s. The clocks t2, t3, t4 and t5, whose resolution is 0.001 s, are sampled every 10 periods.
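As a small numerical illustration of these clock relationships (idealClk with 0.001 s resolution, t1 with a 10 ms period, the other clocks sampled every 10 periods), the snippet below simply enumerates tick instants; it is not generated by MARTE or CCSL, only an illustrative sketch.

```python
IDEAL_RESOLUTION = 0.001          # s, resolution of the discrete ideal clock c
T1_PERIOD = 0.010                 # s, period of clock t1 (10 ms)
SAMPLE_EVERY = 10                 # t2..t5 fire once every 10 periods of t1

def t1_instants(n):
    """First n tick instants of t1, expressed on the ideal clock."""
    return [round(k * T1_PERIOD, 3) for k in range(n)]

def sampled_instants(n):
    """Instants of a clock that samples every SAMPLE_EVERY periods of t1."""
    return [round(k * SAMPLE_EVERY * T1_PERIOD, 3) for k in range(n)]

print(t1_instants(5))        # [0.0, 0.01, 0.02, 0.03, 0.04]
print(sampled_instants(3))   # [0.0, 0.1, 0.2]
```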
As MARTE supports system-level design, we adopted RtUnit and PpUnit for the active objects of UML to create the model of the flight control system. Fig 3 shows the class diagrams of the software logic structure of the flight control system. The structure consists of the GCS and the airborne system. The logic entities of the airborne system include the CL, FG, SC and MS modules. The GCS interacts with the airborne system through TC and TM signals. RtUnit is used to represent the entities; it can encapsulate object and behavior in a single entity, and any real-time unit can invoke services of other real-time units to send/receive data flows. In the class diagrams we defined the attributes, operations and associations and provided the interface definitions of each entity. The FG module can dynamically create schedulable resources to execute its services, and the MS module has a pool of 10 schedulable resources.
All real-time units share data represented by the class DataBase. As the real-time units execute concurrently, we need to protect access to the data encapsulated in the class DataBase. To do this, we tagged the class DataBase with the «PpUnit» stereotype, whose concPolicy property is set to guarded, which means that the real-time units must access the DataBase properties one after another.
Dynamic model supporting time-triggered
A state chart can describe the dynamic behavior of an object by modeling its life cycle and focuses on the behavior changes of the object caused by events. In a state chart, an event is the occurrence of a stimulus which can trigger a state transition. As a special kind of event, a time event represents a state transition triggered by time-related factors.
According to the clocks defined in Section 3.1, we described various time-triggered mechanisms to create the UML dynamic model. We therefore need to extend the UML state chart with a time model to support events and behaviors triggered by time.
To reference time-related concepts in the state chart, we use the «TimedProcessing» stereotype of MARTE to specify the duration of a behavior and improve the use of UML behaviors. By referencing the defined clocks, we specified the behaviors of the state chart with user-defined clocks to support the time-triggered dynamic mechanism and described the RTES software with support for the multi-clock mechanism of a distributed environment.
In MARTE, we use the «TimedEvent» stereotype to represent a time event, which extends the UML meta-class SimpleTime::TimeEvent. A time event is used to specify a state transition triggered by time in the state chart.
The flight control system is forced to obey the following operation rules, which are attached to the dynamic model. The «TimedProcessing» stereotype indicates that the dynamic model supports the time-triggered mechanism and uses the on attribute to specify the clock associated with the current model. The «TimedEvent» stereotype is used to define the time events requestTimeout and detectTimeout. The event requestTimeout specifies that the FG module must generate a new air route within 50 ms after receiving the user's TC command, otherwise a timeout transition is triggered. The event detectTimeout requires the MS module to detect and resolve a short-term conflict within 10 ms, otherwise the timeout transition is triggered.
A state chart uses states and state transitions to describe the dynamic behavior model of a system in response to events and is widely used for model-based programming. Compared with the state chart, an automaton is a formal device for analyzing recognizable languages. As the automaton is the foundation of the state chart, it can be used to accurately verify the behaviors of a class [20,21].
Formal description based on PTA-OZ
As a graphical modeling language, MARTE works well for establishing the software model of an RTES, but it has the following deficiencies: 1) because its parts are defined in different ways, inconsistencies exist among the abstract grammar, the static semantics and the dynamic semantics; 2) MARTE lacks a reasoning and verification mechanism. These disadvantages limit MARTE's field of application and development, so we need to define precise and formal semantics for MARTE.
We proposed a formal integrated model called PTA-OZ [22], which uses Object-Z to describe the static semantics of MARTE and probabilistic timed automata (PTA) to define its dynamic semantics. The formal integrated model can mathematically prove whether software models meet the requirements [23], and it also increases the safety and correctness of the software.
Definition 1 A PTA-OZ is a tuple PO = (A, I, L, P, OZ), where:
• A deals with inheritance and lists all parent classes using the inherit clause.
• I is the interface and consists of channel declarations, which are provided and used by classes.
• L is the set of local methods, which can only be accessed from inside the class.
• P is a PTA that maps the states, events and transitions of the state chart to the attributes and operations of Object-Z.
• OZ is the Object-Z part, which includes a state schema, an initial schema and several operations.
We designed a transformation algorithm for the implementation of the formal model [24]. By mapping the classes and state charts of MARTE to Object-Z classes and PTA expressions in Object-Z format, we created the software model of the flight control system with MARTE and then translated it into the PTA-OZ model. The processing time required by the invariant of the class FG should be no more than 50 ms. We used effect+operationname to specify the operational effect and enable+operationname to express the operational trigger. For example, in effect_scheduleOperation, the telecommand route is released if and only if the release status is enabled, the execution is unblocked, and the destination can execute the command. The guard condition of the operation enable_sendCommand is that the connection comm has been established and the servo command has been analyzed.
3.3.2 Dynamic state chart transformation. The state chart describes the behavior of a class. In Object-Z, the behavior of a class can be modeled in terms of the attributes and operations of the class: the attributes represent the different states of the object, and the operations change the values of the attributes. The state chart of MARTE consists of states, time events and transitions, so we mainly focus on these elements when defining the transformation rules.
We used behavior attributes of boolean type to model the observable states of an object. If the value is true, the object is in the corresponding behavior state, which is regarded as an active state in MARTE. Hence each state is mapped to a behavior attribute of Object-Z.
An event is the reception of a signal or the invocation request of an operation [26]. The response to a request can be modeled as a receiving operation, so each event of the state chart is mapped to an event acceptor operation of Object-Z. If an Object-Z operation corresponds to a transition, we define it as the transition operation triggered by the event receive operation.
A time event represents a transition triggered by time-related factors, i.e. a state transition in the state chart triggered by time. The timing constraints of events in the state chart are mapped to clock constraint annotations on the edges of the PTA. In Object-Z, we denote such a state transition by comparing the clock value with the state invariant.
A transition represents either a change of object state or the execution of an action. Since the source state is the condition of a transition and the target state is its result, the value of the source state acts as the precondition of the operation and the value of the target state as its postcondition. The source state of a transition is mapped to the initial value of the behavior attributes in Object-Z, and the target states of the transition are mapped to the final values of the behavior attributes.
A function mapStateMachineToOZ, which formally describes the transformation rules between the state chart and Object-Z, is defined to translate the states, time events and transitions of the state chart into the attributes and operations of Object-Z. Fig 6 shows a PTA expression in Object-Z format. Each state is translated into a behavior attribute of the Object-Z class. The initial state of the state chart is expressed by the Init state schema in the Object-Z class and is labeled Ready. An event is defined as a corresponding event receive operation; for example, the event GCSRequestSent is defined as the operation of sending the route data and causes the state to change from the Ready state to the Request state. The time event requestTimeout represents a transition triggered by a timeout, its timing is realized by the operation timeCount, and the state changes from the Request state back to the Ready state. All the transitions belong to the transition operations of the Object-Z class: the behavior attributes of the source and target states are respectively defined as the precondition and postcondition of the transition operations, and the transition operations contain all the effective activities. For example, the source state of the transition transRequesttoFGcmd is Request, the target state is FGcmd, and the effective activity is verifyTC.
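A rough sketch of what such a mapping function might look like is given below; the event names other than GCSRequestSent, requestTimeout and the activity verifyTC, as well as the dictionary representation of the resulting Object-Z class, are assumptions made for illustration only.

```python
def map_state_machine_to_oz(states, initial, transitions):
    """Hedged sketch of the transformation rules: each state becomes a boolean
    behaviour attribute, and each transition becomes an operation whose
    precondition is the source state and whose postcondition is the target state."""
    attributes = {s: (s == initial) for s in states}     # Init schema: only the initial state is active
    operations = {}
    for source, event, target, activity in transitions:
        operations[f"trans{source}to{target}"] = {
            "receives": event,             # event acceptor operation that triggers this transition
            "pre": {source: True},         # source state must be active
            "post": {source: False, target: True},
            "effect": activity,            # effective activity executed on firing
        }
    return {"attributes": attributes, "operations": operations}

oz_class = map_state_machine_to_oz(
    states=["Ready", "Request", "FGcmd"],
    initial="Ready",
    transitions=[("Ready", "GCSRequestSent", "Request", "sendRouteData"),
                 ("Request", "verify", "FGcmd", "verifyTC"),
                 ("Request", "requestTimeout", "Ready", "timeCount")],
)
print(oz_class["operations"]["transRequesttoFGcmd"]["pre"])   # {'Request': True}
```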
Extracting Real-Time Specifications
Assuring real-time requirements is the key to verifying software timeliness and is also the foundation for the design, verification and realization of real-time reliability [27]. Most software real-time problems are caused by inadequately captured real-time requirements. The flight control system is a hard real-time system with strict deadlines on its tasks; if a task cannot meet its response time, or its response is not delivered in time, disastrous consequences may follow.
For a real-world software system developed according to natural-language specifications, it is difficult to verify whether the resulting software satisfies those specifications [28]. For checking the software model, it is necessary to use a formal and precise notation to represent the design specifications. Ogawa et al. proposed a goal-oriented analysis method for the requirement specification [29], which uses natural language to state the specifications, refines them into linear temporal logic (LTL) formulas, and checks them with the SPIN model checker. Model checking therefore commonly uses LTL to specify properties and unambiguously describe the desired behavior of the system: each formula expresses a specific expectation of the software behavior, and the set of all formulas describes a consistent pattern of behavior.
Scenario-oriented requirement description
At the design and verification stages, we need to identify the end-to-end performance scenarios, which are usually extracted directly from the requirements and used to evaluate the system response times [30]. The duration of a single processing step or of an end-to-end execution usually needs to be described; we can specify time information within the model by marking time-labeled UML2 interaction diagrams. A scenario-oriented language can graphically describe the software requirements and then be used to verify the properties of the design model; its advantages are straightforwardness and visualization. The message sequence chart (MSC) is widely used in industrial software development, standardized by the International Telecommunication Union (ITU) as a description language for the communication behavior of real-time systems. However, MSC only describes the causal relationships among messages; it cannot express the timing partial-order constraints of behaviors, nor clearly distinguish specifications from executable requirements [31,32]. These drawbacks limit its expressiveness and application. As an extension of MSC, Werner Damm and David Harel proposed the live sequence chart (LSC) [31], which distinguishes existential and universal scenarios of the system. If the condition of a universal scenario is true, the system must execute the scenario described in the sequence chart, meaning that the universal scenario is suited to specifying mandatory scenario activity.
Yves Bontemps et al. applied an extension of LSC to an air traffic control system [33] and specified scenario-oriented requirements by associating an instance with a class. However, LSC still adopts the timing constructs of MSC, such as timers and delay intervals, so this application remains limited.
MARTE extends the sequence diagram and defines timing constraints on sending and receiving events; it can record the start and end times and also uses the value specification language (VSL) to specify timing constraints among events. To overcome the shortcomings of LSC with respect to time properties, we adopted MARTE to enrich the expressive power of LSC and proposed the TLSC method, which can intuitively describe time-enriched properties and the timing partial-order relations of behaviors. The extension by MARTE consists of adding timing constraints to sending and receiving events in order to record their start and end times.
We still adopt the on property of the «TimedProcessing» stereotype to associate clocks with the current TLSC. A sequence diagram specifies the interaction behavior among objects, which may be restricted by time factors in a time-triggered architecture; we therefore need to introduce time observations to describe the timing constraints. A time observation offers a way to acquire the execution time and duration of the system. The «TimedInstantObservation» stereotype acquires the start or end instant and uses the operation @t to store the acquired time in a variable t. The «TimedDurationObservation» stereotype can be used to express the duration of an event, using {t, t+d} to denote that an event starts at time t and ends at time t+d. In order to reference the ideal clock, we use the «TimedInstantObservation» stereotype of MARTE to express the time observation. Similarly, we defined a time observation of UML associated with the receive events of the control message, referring to the same ideal clock. The execution-time rules among the modules of the flight control software are as follows: the delay for receiving a telecommand is less than 5 ms, the calculation of the flight path takes less than 50 ms, the response time of conflict detection is less than 10 ms, the execution time of servo control is less than 10 ms, and the output delay of telemetry is less than 5 ms. That is, the time from sending a telecommand to receiving telemetry is no more than 80 ms. Thus, we need to define a timing constraint which states that the duration between the time observation events stop and start is less than 80 ms.
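The per-module deadlines listed above sum exactly to the 80 ms end-to-end bound, which the following one-line check confirms (budget values as listed in the text):

```python
# Per-module execution-time budget (ms), as listed above.
budget = {"TC receive delay": 5, "flight-path calculation": 50,
          "conflict detection": 10, "servo control": 10, "TM output delay": 5}
assert sum(budget.values()) <= 80   # end-to-end constraint: telecommand -> telemetry within 80 ms
print(sum(budget.values()))         # 80
```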
Monitoring real-time specification
In the process of software design, it is difficult to design a bug-free system, so we need to employ model checking to find the bugs and verify whether the software model meets the design specification. It is also difficult to write bug-free specifications, since we do not know whether the system specification fully captures the design expectations of the software.
Although a graphical TLSC can intuitively express the timing partial-order relations of behaviors and is well suited for software engineers to describe customer requirements, it cannot be used directly to verify the system specification. In scenario-oriented software engineering, temporal logic is widely applied to describe software requirements, but it is not realistic to require software engineers to specify requirements directly as temporal logic formulas. Therefore, we extract the time property formulas from the software requirements of the flight control system described by TLSC. Firstly, we give the formal definition of TLSC [34], whose components are as follows:
• Loc is a set of locations.
• E is a set of events, containing condition events Con and message events Msg, that is, E = Con ∪ Msg.
• C is a finite set of clock variables.
• δ: E → Loc is an event mapping function which maps each event to a location.
• Mode: E → {cold, hot} is a behavior mapping function which identifies each event as a provisional or mandatory behavior.
• inv: Loc → F(C) is a timing constraint mapping function, which assigns to each location ⟨i, l⟩ a timing constraint inv(l) called the location invariant.
• μ: E → F(C) is a time interval mapping function; it defines the time interval of event occurrence.
Trace-based semantics adopts finite or infinite state sequences to describe the state transition relation. Using trace-based semantics, the precise meaning of software behaviors can be captured and temporal logic formulas can be extracted from a TLSC. Here we give the trace-based semantics of the universal chart in TLSC.
Definition 3 Let a CUT sequence r = (cut_0, v_0), (cut_1, v_1), ..., (cut_k, v_k) be an execution of a TLSC, where cut_i denotes the mapping of the current locations of all the instances and v_i denotes the clock interpretation of the current state. cut_0 is the starting point with clock interpretation v_0 = 0, cut_k is the terminal point, and (cut_i, v_i) = succ((cut_{i-1}, v_{i-1}), ⟨i, l_i⟩). The set of all runs is denoted Runs. We use r_k = {r | r ∈ Runs ∧ |r| = k} to denote the sub-paths of length k, and Path(cut_i, v_i) = {r | r ∈ Runs ∧ cut_0 = cut_i} to denote the runs starting at (cut_i, v_i). An execution trace, which is the trace of a CUT sequence, is denoted π = trace(r); then π = π_0, π_1, ..., π_{k-1} lists the events that trigger the state transitions, and in terms of execution traces we define the trace-based language of a TLSC tl. In the universal chart, all runs must satisfy the given scenario: if executions r of the TLSC lie in the same sub-chart, the relation between their formulas is logical conjunction; if they lie in different sub-charts, the relation is logical implication. Algorithm 1 combines the temporal formulas corresponding to the different messages of the sub-charts into an ACTL formula.
Using Algorithm 1, we can extract the temporal logic formula of the real-time property from the TLSC of the flight control system, as shown in Fig 7. After the message sendRequest is sent from the GCS to the UAV, each module begins to work: the FG module computes the flight path, the MS module detects conflicts, and the SC module sends the servo command. From the scenario-oriented TLSC we obtain the following temporal logic formula for the real-time property:
AG((A(¬compute ∧ ¬detect ∧ ¬control) U sendRequest) → AF_{t<80} receiveFeed)
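As a trace-level illustration of this obligation, the sketch below checks the bounded response "every sendRequest is followed by receiveFeed within 80 ms" on a single time-stamped run; the run itself is hypothetical, and this is only a single-trace approximation, not ACTL model checking.

```python
def bounded_response(trace, trigger="sendRequest", response="receiveFeed", bound_ms=80):
    """Single-run check of the extracted obligation: every `trigger` must be
    followed by a `response` within `bound_ms` milliseconds (approximates the
    AF_{t<80} part of the formula on one execution trace).
    trace: list of (timestamp_ms, event_name) pairs in time order."""
    violations = []
    for i, (t0, ev) in enumerate(trace):
        if ev != trigger:
            continue
        # delay until the first later response, or None if there is no response at all
        delay = next((t1 - t0 for t1, ev1 in trace[i + 1:] if ev1 == response), None)
        if delay is None or delay >= bound_ms:
            violations.append((t0, delay))
    return violations

run = [(0, "sendRequest"), (4, "compute"), (48, "detect"), (55, "control"),
       (72, "receiveFeed"), (100, "sendRequest"), (195, "receiveFeed")]
print(bounded_response(run))   # [(100, 95)] -> the second request misses the 80 ms bound
```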
Real-Time Reliability Analysis
In the verification of the PTA-OZ model, we can check the correctness of grammar and typing in the Object-Z part and make sure that all operations strictly respect the state model. Using the existing formal tools Z/EVES and PVS, the specification can be checked and analyzed in Z form. We can automatically extract the corresponding proof obligations for the Object-Z specification according to certain rules and then test them by feeding them to Z/EVES. Through strict type checking, we can analyze the specifications in Object-Z form and locate inconsistencies between the specifications and the requirements.
However, Object-Z is only suitable for data refinement and does not support structures similar to those of programming languages. Hence, there is a semantic gap between Object-Z and programming languages; the PTA-OZ model can overcome this shortcoming and realize operation refinement by verifying the correctness of the operations.
The PTA model can be used to express the state transition diagram of the flight control system. We can create a mathematical model of real-time reliability by applying a Markov process to obtain the state transition probability matrix of the system. There are three state types for real-time reliability: up denotes that the system is working, down denotes that the system has shut down, and danger denotes that some transient failures have occurred but have not yet caused a system shutdown.
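A minimal numerical sketch of this kind of analysis is shown below: a discrete-time Markov chain over the three states whose expected state-occupation times over T steps are obtained by accumulating the state distribution, which is how curves like those in Fig 8 can be produced. The transition probabilities used here are hypothetical, not the paper's values.

```python
STATES = ["up", "danger", "down"]
# Hypothetical one-step transition probabilities (rows sum to 1): the real values
# would be derived from the failure/repair rates of the flight control system.
P = [[0.990, 0.009, 0.001],   # up     -> up / danger / down
     [0.900, 0.090, 0.010],   # danger -> transient fault handled or escalates
     [0.000, 0.000, 1.000]]   # down   -> absorbing (system shut down)

def expected_state_times(p0, steps):
    """Expected number of time units spent in each state over `steps` steps:
    propagate the distribution pi_{k+1} = pi_k * P and accumulate it."""
    pi = list(p0)
    totals = [0.0, 0.0, 0.0]
    for _ in range(steps):
        totals = [t + x for t, x in zip(totals, pi)]
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dict(zip(STATES, totals))

print(expected_state_times(p0=[1.0, 0.0, 0.0], steps=1000))
```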
There are various kinds of sensors in the UAV for receiving signals, data, and commands. The performance of sensors degrades gradually, which reduces the reliability of the hardware devices, and a sensor failure will cause a software failure of the FG module. Through the reliability assessment of the hardware devices, the reliability of the sensors decreases after 1000 h of operation and drops to 0 after 1500 h. In the flight control system, a software module reboots to rectify a transient fault. If the system is in sensor failure, the FG module cannot read data from the sensor and the system is forced to skip the current cycle; if the number of skipped cycles exceeds a threshold, the flight control system fails and emergency instructions or self-destruction start. Fig 8 shows the expected time of each status within T unit times, plotted on a logarithmic scale. Since the requirement on the total time from sending a telecommand to receiving telemetry is no more than 80 ms, Fig 8(a) shows the expected time of the different system states within 80 ms. Fig 8(b) shows the expected time of the different system states within 1 h, when the UAV carries out a short-term mission: the expected time in the failure status gradually increases but remains less than the working time. Fig 8(c) shows that the expected time of the down status increases significantly when the UAV carries out a long-term mission. As shown in Fig 8, the software failure status caused by hardware faults gradually increases as the execution time increases. We can therefore improve the software design by enhancing the reliability of the hardware.
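The expected time in each status can be read off the Markov model by accumulating the state-occupancy distribution over T unit times. The sketch below assumes an illustrative 3x3 transition matrix; the numerical probabilities are placeholders, not the values behind Fig 8.

```python
import numpy as np

# Illustrative sketch: expected time spent in each reliability status over T
# unit times, given a row-stochastic Markov transition matrix P. The matrix
# values below are hypothetical, not the paper's measured probabilities.
states = ["up", "danger", "down"]
P = np.array([[0.990, 0.009, 0.001],   # up     -> up / danger / down
              [0.700, 0.250, 0.050],   # danger -> up / danger / down (reboot may recover)
              [0.000, 0.000, 1.000]])  # down is absorbing

def expected_times(P, start, T):
    """Sum the state-occupancy distribution over T steps."""
    dist = np.zeros(len(P)); dist[start] = 1.0
    totals = np.zeros(len(P))
    for _ in range(T):
        totals += dist          # one unit of time spent according to dist
        dist = dist @ P
    return totals

for name, t in zip(states, expected_times(P, start=0, T=80)):
    print(f"expected time in {name:6s}: {t:.2f} units")
```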
The integrated framework for UAV flight control development can execute real-time simulation [1]: it adopts a real-time operating system, performs real-time monitoring through the GCS, and executes the embedded software code in real time. Compared with that similar method [1], we can accurately calculate the expected time of each status. The result shows that the UAV is highly reliable when it carries out a short-term mission but is not suitable for a long-term mission.
The system reliability depends on the threshold value K. Table 1 shows the expected time in the danger and up states. When K increases, both expected times increase, and the increase in the up state is significantly larger than in the danger state; the value of K thus also affects the system reliability. The reliability probability of the processor-in-the-loop simulation is given in Fig 9: as K increases, the reliability probability stabilizes.
Conclusions
Compared with existing integrated frameworks [1,36,37], we used the graphical modeling language MARTE to create the software model of flight control and transformed the static-structure and dynamic-behavior models into the formal integrated framework PTA-OZ through defined transformation strategies. We focused in particular on modeling and verifying time properties, which most integration frameworks do not consider. The different processes in the proposed framework, such as analysis, modeling, transformation, and verification, are tightly coupled.
For modeling and verifying the real-time reliability of the UAV flight control system, our main contributions are the following. Combining the DO-178C standard with model-based methods, object-oriented technology, and formal methods in the software development framework, we proposed the formal PTA-OZ model, which formalizes the MARTE model in an object-oriented way through transformation rules. To eliminate differences among software engineers in how real-time property formulas are described, we proposed a method for extracting temporal logic formulas from TLSC, which can automatically generate real-time specifications. By verifying the real-time reliability of the software model, we can analyze its status types over different time periods.
In order to meet airworthiness certification requirements, we proposed a design, development, and validation framework for the UAV flight control system. To begin with, we used MARTE to create the time-property model and then translated it into the PTA-OZ model. Eventually we could analyze | 8,121.4 | 2016-12-05T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Effect of Tubular Chiralities and Diameters of Single Carbon Nanotubes on Gas Sensing Behavior: A DFT Analysis
Using density functional theory, the adsorption of CO, CO2, NO and NO2 gas molecules on single carbon nanotubes of different chiralities and diameters is investigated in terms of energetics, electronic properties and surface reactivity. We found that the adsorption of CO and CO2 depends on the chirality and diameter of the CNT, whereas that of NO and NO2 does not. Also, the electronic character of the CNTs is not affected by the adsorption of CO and CO2, while it is strongly affected by NO and NO2. In addition, the dipole moments of zig-zag CNTs are always higher than those of arm-chair CNTs. We therefore conclude that zig-zag carbon nanotubes are better suited as gas sensors than arm-chair carbon nanotubes, especially for detecting NO and NO2 gas molecules.
Introduction
Monitoring of combustible gas alarms, gas leak detection, and environmental pollution is of great concern for public security. Advances in nanotechnology hold great promise for new sensing materials. Since the discovery of carbon nanotubes in 1991, single-walled carbon nanotubes (SWCNTs) have been intensively investigated as nanoscale gas sensors because of their large surface-to-bulk ratio and their ability to modulate their electrical properties upon adsorption of various kinds of gas molecules [1]-[17]. The emission of carbon and nitrogen oxides (CO, CO2, NO and NO2) results from the combustion of fossil fuels, contributing to both smog and acid precipitation and affecting both terrestrial and aquatic ecosystems [18]. Although many efforts have been made to use catalysts to reduce the amount of carbon and nitrogen oxides in the air [19]-[25], an efficient method for sensing and removing carbon and nitrogen oxides is still required.
Because carbon and nitrogen oxides are among the most dangerous air pollutants, being toxic and contributing to global warming, our work concentrates on the effect of the tubular chirality and diameter of single carbon nanotubes on the gas sensing behavior for CO, CO2, NO and NO2 molecules, using first-principles calculations.
Computational Methods
All calculations were performed with density functional theory as implemented in the G03W package [26]-[29], using the B3LYP exchange-correlation functional and the 6-31G(d,p) basis set. Pristine (n, m) carbon nanotubes are considered. The tube diameters [30] and the adsorption energies of the gas molecules on the CNTs (E_ads) [31] are calculated from the relations d = a√(n² + nm + m²)/π (a being the graphene lattice constant) and E_ads = E(CNT+gas) − E(CNT) − E(gas), where n and m are the integers of the chiral vector.
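A small Python sketch of these two relations; the graphene lattice constant value used (2.461 Å) is the commonly tabulated one and, like the placeholder energies for the adsorption-energy helper, is an assumption rather than a value taken from the paper.

```python
import math

A_LATTICE = 2.461  # graphene lattice constant in angstroms (standard value)

def cnt_diameter(n, m):
    """Diameter of an (n, m) nanotube: d = a*sqrt(n^2 + n*m + m^2)/pi."""
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def adsorption_energy(e_complex, e_tube, e_gas):
    """E_ads = E(CNT+gas) - E(CNT) - E(gas); negative means favourable binding."""
    return e_complex - e_tube - e_gas

for n, m in [(5, 0), (9, 0), (5, 5), (6, 6)]:
    print(f"({n},{m}) CNT diameter: {cnt_diameter(n, m):.2f} angstrom")
```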
Adsorption of CO, CO2, NO and NO2 Gas Molecules on CNTs
We have adsorbed CO and CO2 gas molecules vertically on three different positions of the (5,0), (9,0), (5,5) and (6,6) CNTs: above a carbon atom (carbon site), above a bond between two carbon atoms (bond site) and above the center of a hexagonal ring (vacant site). The calculated adsorption energies of CO and CO2 are listed in Table 2. Likewise, we have adsorbed NO and NO2 gas molecules vertically on the same three positions of the (5,0), (9,0), (5,5) and (6,6) CNTs: above a carbon site, above a bond site and above a vacant site. The calculated adsorption energies of NO and NO2 are listed in Table 3. The best adsorption energies of the NO molecule are found on the (9,0) CNT, first above a bond site, then above a carbon site and then above a vacant site, with adsorption energies of −1.65 eV, −1.55 eV and −1.34 eV, respectively. The best site for the NO2 molecule is likewise the bond site on the (9,0) CNT, with an adsorption energy of −1.75 eV. It is also noticed that the vacant site is always preferred for NO2 adsorption on all the studied CNTs except the (9,0) CNT. Therefore, one can conclude that all of the studied CNTs can be used as gas sensors for NO and NO2.
From Table 2 and Table 3, one can assess the effect of chirality and diameter on the gas sensing behavior of the CNTs. It is clear that the adsorption of CO and CO2 depends on the chirality and diameter of the CNTs, and it is enhanced with increasing diameter of the zig-zag CNTs. However, the adsorption of NO and NO2 is independent of the chirality and diameter of the CNTs.
Energy Gaps of Adsorbed CO, CO2, NO and NO2 Gas Molecules on CNTs
From Table 4, it is clear that the adsorption of CO and CO2 on the CNTs does not affect their electronic character; the band gaps of the pristine CNTs and of the CNTs with adsorbed CO and CO2 are very close.
From Table 5, the adsorption of NO and NO2 strongly affects the electronic character of the (9,0) and (5,0) CNTs, whereas there is no change in the electronic character of the (5,5) and (6,6) CNTs. The band gap of the pristine (5,0) CNT increases from 0.70 eV to 1.61 eV and to 1.37 eV when NO and NO2 are adsorbed on it, respectively. Likewise, the band gap of the pristine (9,0) CNT increases from 0.25 eV to 1.34 eV and to 1.25 eV when NO and NO2 are adsorbed on it, respectively. One can conclude that the electronic character of the (5,0), (9,0), (5,5) and (6,6) CNTs is not affected by the adsorption of CO and CO2, while the adsorption of NO and NO2 strongly affects only the electronic character of the (9,0) and (5,0) CNTs, the (5,5) and (6,6) CNTs remaining unaffected.
HOMO-LUMO Orbitals of Adsorbing CO, CO2, NO and NO2 Gas Molecules on CNTs
Our calculated band gaps show that the adsorption of CO and CO2 on the CNTs does not affect the band gaps of the pristine CNTs, whereas the adsorption of NO and NO2 affects them strongly. To explain this, the molecular orbitals of the (5,0), (9,0), (5,5) and (6,6) CNTs with adsorbed CO, CO2, NO and NO2 are examined. Comparing the HOMO-LUMO energies of the pristine CNTs with those after the adsorption of CO and CO2, the energy values are very close. It is also noticed that there is no contribution from the gas molecules to the frontier molecular orbitals, and the electron density of the HOMO and LUMO is distributed over all the carbon atoms of the CNTs, except for the (9,0) CNT, where it is located at the terminals of the tube, see Figure 2. Comparing the HOMO-LUMO energies of the pristine CNTs with those after the adsorption of NO and NO2, the energy values are very close in the case of the (5,5) and (6,6) CNTs and quite far apart in the case of the (5,0) and (9,0) CNTs. After adsorbing NO and NO2, the HOMO levels of the (5,0) and (9,0) CNTs move deeper (lower) in energy while the LUMO levels move higher, which increases the band gap from 0.70 eV to 1.81 eV in the case of the (5,0) CNT and from 0.25 eV to 1.34 eV in the case of the (9,0) CNT (Table 4 lists the calculated energy gaps E_g of CO and CO2 above a carbon site, a bond site and a vacant site of the pristine CNTs). It is also noticed that the NO gas molecule contributes to the LUMO of the (9,0) and (6,6) CNTs, see Figure 3.
The Reactivity of CNT Surfaces before and after Adsorbing Gas Molecules
Our calculated band gaps and molecular orbitals show that the adsorption of CO and CO2 on the CNTs affects neither the band gaps nor the molecular orbitals of the pristine CNTs, whereas the adsorption of NO and NO2 strongly affects both the band gaps and the molecular orbitals of the (5,0) and (9,0) CNTs. To clarify this, the reactivity of the CNT surfaces before and after adsorbing CO, CO2, NO and NO2 on the (5,0), (9,0), (5,5) and (6,6) CNTs is studied, see Table 6 and Table 7. The surface reactivity of the pristine CNTs is listed in Table 6. The dipole moments of the pristine (5,0), (9,0), (5,5) and (6,6) CNTs are found to be 0.54 Debye, 0.20 Debye, 0.00 Debye and 0.00 Debye, respectively. Comparing the dipole moments of the pristine CNTs with those after adsorption of CO and CO2, the dipole moments are very close for CO adsorption but higher for CO2 adsorption, see Table 6. The highest dipole moments after CO2 adsorption are 0.74 Debye (CO2 above the bond site of the (9,0) CNT) and 0.77 Debye (CO2 above the vacant site of the (6,6) CNT). The HOMO and LUMO energies are listed above their molecular orbitals and are given in eV.
Table 6. The calculated dipole moments of the pristine CNTs and after adsorbing CO and CO2 gas molecules above a carbon site, a bond site and a vacant site of the (5,0), (9,0), (5,5) and (6,6) CNTs. All dipole moments are given in Debye.
Comparing the dipole moments of the pristine CNTs with those after adsorption of NO and NO2, the dipole moments generally increase. When NO and NO2 are adsorbed on the vacant sites of the CNTs, the dipole moments are either quite close to or lower than those of the pristine CNTs, except for NO2 on the (5,0) CNT, where the dipole moment increases. All the calculated dipole moments for NO and NO2 adsorbed on the carbon sites of the CNTs increase, except for NO2 on the (5,0) CNT, where it decreases. For NO and NO2 adsorbed on the bond sites of the CNTs the dipole moments also increase, except for NO2 on the (9,0) CNT, where it decreases, see Table 7.
From Table 6 and Table 7, it is clear that the dipole moments of the zig-zag (5,0) and (9,0) CNTs are always higher than those of the arm-chair (5,5) and (6,6) CNTs.
Conclusion
The gas sensing behavior of CNTs, considering a range of nanotube diameters and chiralities as well as different adsorption sites, is reported. The adsorption of CO, CO2, NO, and NO2 gas molecules on the (5,0), (9,0), (5,5) and (6,6) CNTs is studied using B3LYP/6-31G(d,p). Three different adsorption sites (above a carbon site, a bond site and a vacant site) are considered on each CNT. It is found that the adsorption of CO and CO2 depends on the chirality and diameter of the CNTs and is enhanced with increasing diameter of the zig-zag CNTs, whereas the adsorption of NO and NO2 is independent of chirality and diameter. Also, the electronic character of the (5,0), (9,0), (5,5) and (6,6) CNTs is not affected by the adsorption of CO and CO2, while the adsorption of NO and NO2 strongly affects only the electronic character of the (9,0) and (5,0) CNTs; the (5,5) and (6,6) CNTs are not affected at all. The dipole moments of the zig-zag (5,0) and (9,0) CNTs are always higher than those of the arm-chair (5,5) and (6,6) CNTs. It is also noticed that the dipole moment after adsorbing NO on the bond site of the (5,0) CNT is about ten times that of the pristine (5,0) CNT. Therefore, these findings indicate that zig-zag carbon nanotubes are better suited than arm-chair carbon nanotubes as gas sensors, especially for NO and NO2.
Table 2. It is found that the best position and adsorption energy for the CO gas molecule is above the bond site on the ( ) CNT.
Table 1. The configuration structures and diameters of the studied CNTs.
Table 3. The calculated adsorption energies (E_ads) of NO and NO2 above a carbon site, a bond site and a vacant site of the pristine (5,0), (9,0), (5,5) and (6,6) CNTs.
The molecular orbitals of the (5,0), (9,0), (5,5) and (6,6) CNTs are investigated, see Figure 2 and Figure 3; the band gaps of the pristine CNTs are calculated and listed in Table 4.
Table 5. The calculated energy gaps (E_g) of NO and NO2 above a carbon site, a bond site and a vacant site of the pristine (5,0), (9,0), (5,5) and (6,6) CNTs.
Table 7. The calculated dipole moments of the pristine CNTs and after adsorbing NO and NO2 gas molecules above a carbon site, a bond site and a vacant site of the (5,0), (9,0), (5,5) and (6,6) CNTs.
"Chemistry"
] |
Steiner Wiener index of block graphs
Let $S$ be a set of vertices of a connected graph $G$. The Steiner distance of $S$ is the minimum size of a connected subgraph of $G$ containing all the vertices of $S$. The Steiner $k$-Wiener index is the sum of all Steiner distances on sets of $k$ vertices of $G$. Different simple methods for calculating the Steiner $k$-Wiener index of block graphs are presented.
Introduction
All graphs in this paper are simple, finite and undirected. If $G$ is a connected graph and $u, v \in V(G)$, then the (geodetic) distance $d(u, v)$ between $u$ and $v$ is the length of a shortest path connecting $u$ and $v$, see also [6]. The Wiener index $W(G)$ of a connected graph $G$ is defined by $W(G) = \sum_{\{u,v\} \subseteq V(G)} d(u, v)$. The first investigations of this distance-based graph invariant were done by Harold Wiener in 1947, who realized in [21] that there exist correlations between the boiling points of paraffins and their molecular structure, and noted that in the case of a tree it can easily be calculated from the edge contributions by the formula $W(T) = \sum_{e \in E(T)} n(T_1)\, n(T_2)$, where $n(T_1)$ and $n(T_2)$ denote the numbers of vertices in the connected components $T_1$ and $T_2$ formed by removing an edge $e$ from the tree $T$.
The Steiner distance of a graph, introduced in [3] by Chartrand et al., is a natural generalization of the concept of the geodetic graph distance. For a graph $G = (V, E)$ and a set $S \subseteq V$ of at least two vertices, an $S$-Steiner tree or a Steiner tree connecting $S$ (or simply, an $S$-tree) is a subgraph $T = (V', E')$ of $G$ that is a tree with $S \subseteq V'$. Let $G$ be a connected graph of order at least 2 and let $S$ be a nonempty set of vertices of $G$. Then the Steiner distance $d(S)$ among the vertices of $S$ (or simply the distance of $S$) is the minimum size of a connected subgraph (the number of edges) whose vertex set contains $S$. Note that if $H$ is a connected subgraph of $G$ such that $S \subseteq V(H)$ and $|E(H)| = d(S)$, then $H$ is a tree. Clearly, $d(\{u, v\}) = d(u, v)$ for any two vertices $u$ and $v$. In [4,5] Dankelmann et al. followed by studying the average $k$-Steiner distance $\mu_k(G)$, which is related to the $k$-Steiner Wiener index via the equality $\mu_k(G) = SW_k(G)/\binom{n}{k}$. In [16], Li, Mao and Gutman introduced a generalization of the Wiener index by using the Steiner distance. Thus, the $k$-th Steiner Wiener index $SW_k(G)$ of a connected graph $G$ is defined by $SW_k(G) = \sum_{S \subseteq V(G),\, |S| = k} d(S)$. For $k = 2$, the Steiner Wiener index coincides with the ordinary Wiener index. It is usual to consider $SW_k(G)$ for $2 \le k \le n - 1$, but the above definition also implies $SW_1(G) = 0$ and $SW_n(G) = n - 1$ for a connected graph $G$ of order $n$. They obtained the exact values of the Steiner Wiener $k$-index of the path, star, complete graph, and complete bipartite graph, and sharp lower and upper bounds on $SW_k(G)$ for connected graphs and for trees. In [10] the application of the $k$-Steiner Wiener index in mathematical chemistry is reported, and it is shown that the term $W(G) + \lambda SW_k(G)$ provides a better approximation for the boiling points of alkanes than $W(G)$ itself, and that the best such approximation is obtained for $k = 7$. See [15,17,18,20] for recent results related to the Steiner Wiener index and [19] for a survey on Steiner distance. In a graph $G$, a vertex $u$ is a cut-vertex if deleting $u$ and all edges incident to it increases the number of connected components. A block of a graph is a maximal connected vertex-induced subgraph that has no cut vertices. A block graph is a graph in which every maximal 2-connected subgraph or block is a clique [1,6]. Block graphs are a natural generalization of trees, and they arise in areas such as metric graph theory [1], molecular graphs [2] and phylogenetics [8]. They have been characterized in various ways, for example as certain intersection graphs [11], or in terms of distance conditions [2].
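For small graphs the definition can be checked directly by enumerating the $k$-subsets and computing each Steiner distance by exhaustive search. The sketch below is a brute-force reference implementation (exponential time, so only for tiny graphs); it assumes the networkx library is available and is not part of the paper.

```python
from itertools import combinations
import networkx as nx

def steiner_distance(G, S):
    """Exact Steiner distance of S: smallest |W| - 1 over connected G[W], W >= S."""
    S = set(S)
    extra = [v for v in G.nodes if v not in S]
    for size in range(len(S), G.number_of_nodes() + 1):
        for add in combinations(extra, size - len(S)):
            W = S.union(add)
            if nx.is_connected(G.subgraph(W)):
                return size - 1          # a spanning tree of G[W] has |W|-1 edges
    raise ValueError("graph is not connected")

def steiner_wiener(G, k):
    """SW_k(G): sum of Steiner distances over all k-subsets of V(G)."""
    return sum(steiner_distance(G, S) for S in combinations(G.nodes, k))

# Sanity check on the path P4: SW_2 is the ordinary Wiener index (= 10 for P4).
P4 = nx.path_graph(4)
print(steiner_wiener(P4, 2), steiner_wiener(P4, 3))
```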
The windmill graph W d(p, n) is a block graph constructed for p ≥ 2 and n ≥ 2 by joining n copies of the complete graph K p at a shared vertex. A claw-free graph is a graph in which no induced subgraph is a claw, i.e. a complete bipartite graph K 1,3 . Claw-free block graphs are block graphs which are claw-free.
The interval $I(u, v)$ between two vertices $u$ and $v$ consists of all vertices that are on shortest paths joining $u$ and $v$. More generally, for a subset $A \subseteq V(G)$ the $k$-interval of $A$, denoted by $I_k(A)$, consists of all vertices of $G$ that are on some $k$-Steiner tree joining the vertices of $A$. A graph $G$ is modular [12] if for every three vertices $x, y, z$ there exists a vertex $w$ that lies on a shortest path between every two vertices of $x, y, z$, i.e. $|I(x, y) \cap I(x, z) \cap I(y, z)| \ge 1$. It is easy to see that a modular graph is a bipartite graph. Examples of modular graphs are trees, hypercubes, grids, complete bipartite graphs, etc. The simplest examples of non-modular graphs are cycles on $n$ vertices, for $n \ne 4$, and complete graphs.
In this paper we obtain simple methods for calculating the $k$-Steiner Wiener index of block graphs. We obtain exact values of the $k$-Steiner Wiener index for windmill graphs and claw-free block graphs. We generalise the relation between the 3-Steiner Wiener index and the Wiener index of a tree from [16] to modular graphs and obtain the corresponding relation for block graphs. We conclude with a Steiner Wiener decomposition formula, via subtrees, for the special family of block graphs formed by trees.
Decomposition formula of k-Steiner Wiener index of block graphs
For a graph $G$, let $n(G)$ denote the number of its vertices. For a forest $F$ with $p > 1$ connected components $T_1, T_2, \ldots, T_p$, denote by $N_k(F)$ the sum over all partitions of $k$ into at least two nonzero parts of the products of combinations distributed among the $p$ components of $F$:
$N_k(F) = \sum_{\substack{l_1 + \dots + l_p = k \\ \text{at least two } l_i > 0}} \prod_{i=1}^{p} \binom{n(T_i)}{l_i}.$
For a tree $T$ and $e \in E(T)$, let $T - e$ denote the graph obtained by removing the edge $e$ from $T$. Then the following formula has been shown in [15]:
$SW_k(T) = \sum_{e \in E(T)} N_k(T - e).$
For a given partition $l_1 + l_2 + \dots + l_p = k$, let $\alpha(l_1, l_2, \ldots, l_p)$ denote the number of nonzero summands minus 1. For a graph $G$ with $p > 1$ connected components $G_1, G_2, \ldots, G_p$, we define $N_k(G)$ to be the sum over all partitions of $k$ into at least two nonzero parts of the products of combinations distributed among the $p$ components of $G$, multiplied by $\alpha(l_1, l_2, \ldots, l_p)$:
$N_k(G) = \sum_{\substack{l_1 + \dots + l_p = k \\ \text{at least two } l_i > 0}} \alpha(l_1, \ldots, l_p) \prod_{i=1}^{p} \binom{n(G_i)}{l_i}.$
For a connected graph $G$, we define $N_k(G) = 0$. Note that by definition $\binom{n}{0} = 1$ and $\binom{n}{k} = 0$ whenever $n < k$.
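Read literally, $N_k$ can be evaluated by enumerating all compositions $l_1 + \dots + l_p = k$ with at least two nonzero parts. A minimal sketch, assuming the component orders are given as a plain list:

```python
from math import comb
from itertools import product

def N_k(component_sizes, k, weighted=False):
    """Sum over l_1 + ... + l_p = k with >= 2 nonzero parts of prod_i C(n_i, l_i);
    if weighted, each term is multiplied by alpha = (#nonzero parts) - 1."""
    total = 0
    p = len(component_sizes)
    for ls in product(range(k + 1), repeat=p):
        if sum(ls) != k or sum(1 for l in ls if l > 0) < 2:
            continue
        term = 1
        for n_i, l in zip(component_sizes, ls):
            term *= comb(n_i, l)
        if weighted:
            term *= sum(1 for l in ls if l > 0) - 1
        total += term
    return total

# Forest with two components of orders 2 and 3, k = 3:
print(N_k([2, 3], 3))          # unweighted (forest) version
print(N_k([2, 3], 3, True))    # alpha-weighted (graph) version
```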
Steiner Wiener index of Windmill graphs
Theorem 3.1. Let $Wd(p, n)$ be the windmill graph. Then
$SW_k(Wd(p,n)) = (k-1)\binom{n(p-1)}{k-1} + n(k-1)\binom{p-1}{k} + k\left[\binom{n(p-1)}{k} - n\binom{p-1}{k}\right].$
Proof. We prove the theorem by considering two cases.
Case 1. Set of k terminals includes the central vertex of the windmill graph.
Since the central vertex is adjacent to every other vertex, we get a star as the Steiner tree of the $k$ vertices, which has size $k - 1$. The contribution of this case to $SW_k(G)$ is therefore $(k-1)\binom{n(p-1)}{k-1}$. Case 2. The set of $k$ terminals does not contain the central vertex of the windmill graph.
Sub-case 2.1: the set of $k$ terminals lies in the same clique. A Steiner tree of the $k$ vertices is a path with $k - 1$ edges, and there are $n\binom{p-1}{k}$ possible ways of choosing the terminals. Therefore, the contribution of this case to $SW_k(G)$ is $n(k-1)\binom{p-1}{k}$.
Sub-case 2.2: the terminals come from at least two different cliques. Let $l_1 + l_2 + \dots + l_n = k$ be a partition with at least two nonzero parts, where $l_i$ terminals lie in the $i$-th clique. Since any path from $K_p^i$ to $K_p^j$, $i \ne j$, must pass through the central vertex, and the $l_i$ terminal vertices in $K_p^i$ can be joined to the centre by $l_i$ edges, the Steiner distance is $l_1 + l_2 + \dots + l_n = k$. The contribution of this case to $SW_k(G)$ is therefore
$k \sum_{\substack{l_1 + \dots + l_n = k \\ \text{at least two } l_i > 0}} \prod_{i=1}^{n} \binom{p-1}{l_i} = k\left[\binom{n(p-1)}{k} - n\binom{p-1}{k}\right].$
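A small sketch that adds up the three case contributions derived above; the closed form it evaluates is an assembly of those contributions (using the complement count for sub-case 2.2), so it should be read as an illustration rather than as the paper's stated formula. For k = 2 it reproduces the ordinary Wiener index, which gives a quick sanity check.

```python
from math import comb

def sw_k_windmill(p, n, k):
    """Sum of the three case contributions for the windmill graph Wd(p, n)."""
    noncentre = n * (p - 1)                           # vertices other than the centre
    case1 = (k - 1) * comb(noncentre, k - 1)          # centre is one of the terminals
    case2_1 = n * (k - 1) * comb(p - 1, k)            # all terminals in one clique
    spread = comb(noncentre, k) - n * comb(p - 1, k)  # >= 2 cliques, no centre
    case2_2 = k * spread                              # Steiner distance equals k here
    return case1 + case2_1 + case2_2

# Wd(3, 3) is the friendship graph of 3 triangles sharing a vertex; its Wiener
# index is 33, which the k = 2 case should reproduce.
print(sw_k_windmill(p=3, n=3, k=2))
```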
Vertex decomposition of Steiner Wiener index of block graphs
For a tree $T$ and $v \in V(T)$, let $T \setminus v$ denote the graph obtained by removing $v$ from $T$. Note that $T \setminus v$ may consist of several components and that their number equals the degree of $v$. In [15], the vertex version of the Steiner Wiener index of a tree is given by
$SW_k(T) = (k-1)\binom{n}{k} + \sum_{v \in V(T)} N_k(T \setminus v).$
Theorem 4.1. For a block graph $G$ with set of cut vertices $V_c(G)$,
$SW_k(G) = (k-1)\binom{n}{k} + \sum_{v \in V_c(G)} N_k(G \setminus v).$
Proof. $N_k(G \setminus v)$ counts the number of times a cut vertex $v$ is a non-terminal vertex of a Steiner tree. Since each such vertex adds 1 to the Steiner distance of a $k$-vertex set, the Steiner distance between $k$ vertices is by $k - 1$ greater than the number of non-terminal vertices in the corresponding Steiner tree; adding $k - 1$ for each set of $k$ vertices, we get the sum of the Steiner distances over all $k$-sets of vertices, and the equality in the formula holds.
The line graphs of trees are exactly the block graphs in which every cut vertex is incident to at most two blocks, or equivalently these are the claw-free block graphs.
Corollary 4.2. For a claw-free block graph $G$, the Steiner Wiener index is given by
$SW_k(G) = (k-1)\binom{n}{k} + \sum_{v \in V_c(G)} \sum_{i=1}^{k-1} \binom{n(T_1)}{i}\binom{n(T_2)}{k-i},$
where $T_1$ and $T_2$ denote the two components of $G \setminus v$.
Proof. Claw-free block graphs are block graphs in which every cut vertex is incident to at most two blocks, so for a cut vertex $v$ the graph $G \setminus v$ has exactly two components; let them be $T_1$ and $T_2$. The formula then follows by applying Theorem 4.1.
Therefore $SW_3(G) = 20 + 7 = 27$. For a triple of vertices, the 2-intersection interval is $I_2(\{a,b,c\}) = I(a, b) \cap I(a, c) \cap I(b, c)$; hence modular graphs are those graphs for which the 2-intersection interval of every triple of vertices is nonempty. The following result is from [13].
3-Steiner Wiener index of modular and block graphs
Theorem 5.1. Let $S = \{u_1, u_2, \ldots, u_n\}$ be a set of $n > 2$ vertices of a graph $G$. If the 2-intersection interval of $S$ is nonempty and $x \in I_2(S)$, then $d(S) = \sum_{i=1}^{n} d(u_i, x)$.
Next we provide the connection between 3-Steiner Wiener index and Wiener index.
Theorem 5.2. Let $G$ be a graph on $n$ vertices. Then $SW_3(G) \ge \frac{n-2}{2} W(G)$, with equality if and only if $G$ is a modular graph.
Proof. Let $G$ be a connected graph. A triplet of vertices $x, y, z \in V(G)$ is called a modular triplet if $I(x, y) \cap I(x, z) \cap I(y, z) \ne \emptyset$.
Let $S = \{a, b, c\} \subseteq V(G)$, $|S| = 3$, and let $G$ be a modular graph. Then there exists $x \in I_2(S)$, and by Theorem 5.1 it follows that $d(S) = d(a, x) + d(b, x) + d(c, x)$. There are two possibilities; in either case $d(S) = \frac{1}{2}(d(a,b) + d(a,c) + d(b,c))$. Each pair of vertices in a graph on $n$ vertices belongs to $n - 2$ different triples of vertices, hence the stated equality follows.
For a non-modular triplet $x, y, z$ we always have $d(\{x, y, z\}) > \frac{1}{2}(d(x, y) + d(x, z) + d(y, z))$, hence the inequality is strict for any non-modular graph.
Since trees are modular graphs, Theorem 5.2 generalises Corollary 4.5 from [16], where the special case of trees is proved. Theorem 5.3. Let $G$ be a block graph with blocks $B_1, B_2, \ldots, B_m$. Then
$SW_3(G) = \frac{n-2}{2} W(G) + \frac{1}{2}\sum_{i=1}^{m} \binom{n(B_i)}{3}.$
Proof. In a block graph $G$, any nonmodular triplet $x, y, z$ must belong to the same clique. In this case $d(\{x, y, z\}) = 2 = \frac{1}{2}(d(x, y) + d(x, z) + d(y, z)) + \frac{1}{2}$. Let $M(G)$ denote the set of all modular triplets of $G$. Summing the two contributions gives the stated formula; here the last sum counts the number of nonmodular triplets in the block graph.
The k-intersection intervals and k-Steiner Wiener index of trees
In this section we extend a result on the Wiener index of trees due to Doyle and Graver [7], generalizing the original notions and proof technique to obtain a formula for the $k$-Steiner Wiener index of a tree; see also [9,14] for alternative proofs in the case $k = 2$. For a subset of vertices $A$, let $S(A)$ denote the Steiner tree connecting them. A subset of $k$ distinct vertices $A = \{v_1, v_2, \ldots, v_k\} \subseteq V(G)$ is said to be $k$-collinear if there exists an $i$, $1 \le i \le k$, such that $v_i \in S(A \setminus \{v_i\})$. Let $\tau_k(G)$ denote the number of non-$k$-collinear subsets of $G$.
Theorem 6.1. Let $G$ be a graph on $n$ vertices such that every subset of $k$ vertices is connected by a unique $k$-Steiner tree. Then
$SW_k(G) = (k-1)\binom{n}{k} + \binom{n}{k+1} - \tau_k(G).$
Proof. Let $C$ be the collection of $(k + 1)$-collinear subsets of $V(G)$ and let $D$ be the collection of all $k$-subsets of $V(G)$. Define $\varphi : C \to D$ by letting $\varphi(A)$, $A \in C$, be the $k$-subset of vertices whose Steiner tree includes all vertices from $A$.
Note that for $B \in D$, $\varphi^{-1}(B)$ is the collection of all $(k + 1)$-subsets of $V(G)$ which consist of the vertices of $B$ together with a vertex on the unique Steiner tree connecting them. Therefore $|\varphi^{-1}(B)| = d(B) - (k - 1)$, which is precisely the number of inner (non-terminal) vertices of the unique Steiner tree with the $k$ terminal vertices forming the set $B$. Hence $\sum_{B \in D} |\varphi^{-1}(B)| = SW_k(G) - (k-1)\binom{n}{k} = |C|$. Since $|C| + \tau_k(G) = \binom{n}{k+1}$, the theorem is proved.
We can extend the definition of the 2-intersection interval as follows. For $S \subseteq V(G)$, the $k$-intersection interval of $S$ is the intersection of all $k$-intervals between $k$-subsets of vertices from $S$: $I_k(S) = \bigcap_{A \subseteq S,\, |A| = k} I_k(A)$.
Proof. In a tree, any subset of vertices is joined by a unique Steiner tree, hence we can use Theorem 6.1.
If $v_1, v_2, \ldots, v_k$ are $k$ distinct non-$(k-1)$-collinear vertices of $T$, joined by a Steiner tree $S$, then there exists a unique minimal subtree $S'$ such that $S \setminus S'$ has exactly $k$ components; this is exactly the $k$-intersection interval of $v_1, v_2, \ldots, v_k$.
The function $M_k(T - T')$ is precisely the number of non-collinear $k$-subsets of $V(T)$ with $T'$ as their $k$-intersection interval.
"Mathematics"
] |
Semantic Information Extraction from Multi-Corpora Using Deep Learning
Information extraction plays a vital role in natural language processing, to extract named entities and events from unstructured data. Due to the exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Though existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield production, still it is difficult to find the semantics between extracted information due to unswerving effects of weather, soil, pest, and fertilizer data. This paper consists of two parts. An initial phase, which proposes a data preprocessing technique for removal of ambiguity in input corpora, and the second phase proposes a novel deep learning-based long short-term memory with rectification in Adam optimizer and multilayer perceptron to find agricultural-based named entity recognition, events, and relations between them. The proposed algorithm has been trained and tested on four input corpora i.e., agriculture, weather, soil, and pest & fertilizers. The experimental results have been compared with existing techniques and it was observed that the proposed algorithm outperforms Weighted-SOM, RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters like accuracy, sensitivity, and specificity.
Introduction
The agricultural sector contributes a major share to the Indian economy and due to climatic changes, it is highly sensitive. For instance, some important factors like small landholdings, excessive dependence on fertilizers and monsoons, add more vulnerabilities in the Indian agricultural sector [1][2][3]. A large amount of unstructured agricultural data is underutilized due to the lack of data processing schemes. In developing countries like India, still, human experts, and government policies are the primary factors for decision-making. Factual validation based on current data is still mislaid from the perspective of policymaking [4].
In the last few decades, climate variability has affected broad regions of the agricultural sector, including agricultural water resources, crop growth and development, and crop production [5][6][7][8]. In the Indian subcontinent, researchers study the climate-crop relationship based on long-term fertility data, regional statistics, and other field experiments, which inform wheat and rice crop production models based on simulation methods [9]. Most of the land in Uttarakhand state is fertile, but due to land subdivision problems the farmers consider the agricultural sector an infeasible source of food security. The major crops of Uttarakhand are maize and rice, known as Kharif/monsoon crops. Kharif crop production is very low in the Uttarakhand region compared with other regions because of environmental conditions like high rates of erosion and the constant threat of landslides during rains. Crop production is completely dependent on rain-fed agricultural land; in Uttarakhand, almost 80% of agricultural production is based on rain-fed agriculture [10]. The individual growth in diverse agroecosystems with different hydro-geological regions and the diversity in crops and cropping techniques define a highly resilient system. The traditional crop rotations and practices followed also help in maintaining this diversity, which may vary with irrigation conditions, altitude, soil type, moisture regime, local knowledge, and the direction and degree of slope [11].
For a suitable crop, the weather is not the only essential component; soil and fertilizers contribute equally. However, current machine learning methods such as Bayesian networks, Gaussian kernel-based support vector machines (SVM), and artificial neural networks (ANN) are unable to identify the suitable soil and the appropriate pest and fertilizer for the selected crop [12,13]. Soil quality depends on the electrical conductivity, pH level, macronutrients, and micronutrients relevant to the selected crop [14]. These soil quality indices help the farmers select the appropriate pest and fertilizer for a better yield of the selected crop.
As per Fig. 1, inputs can be domain-dependent or independent unstructured/semi-structured corpus (or corpora), domain-specific knowledge, and user-specified extraction patterns [15][16][17]. The information extraction (IE) engine processes the input data to extract knowledge and save it into a structured database (relational and graph databases). The researchers have proposed very limited empirical and soft computing techniques for the prediction of rainfall for crop productivity along with the appropriate land details of the Uttarakhand region. The proposed work bridges this gap by extracting the semantics between extracted named entity recognition (NER) and events from unstructured agricultural text with a focus on the Uttarakhand region [18]. The major contributions of the present research work are: (1) A novel deep learning technique for semantic information extraction using four input corpora (agriculture, weather, soil, and pest & fertilizer) was proposed. The proposed deep learning technique uses long short-term memory (LSTM) with two classifiers i.e., rectification of Adam optimizer and multilayer perceptron (MLP). (2) To remove the noise from input corpora, a new word sense disambiguation (WSD) algorithm was introduced. (3) The proposed technique is able to predict the increase in crop intensity, crop yields, and the resulting increase in the employment of the Uttarakhand region.
The remaining sections of the current research work are as follows. Section 2 surveys recent technologies applied to IE for improving crop productivity. Section 3 presents the proposed methodology as well as the WSD algorithm. The experimental results, discussion, and validation of the proposed method are reported in Section 4. Section 5 concludes the paper and discusses future work directions.
Literature Survey
For the last two decades, machine and deep learning techniques have made a large contribution in handling the information extraction problem from various application areas including medical image analysis and retrieval [19][20][21][22][23], biometrics recognition [24][25][26], disease diagnosis [27,28], agriculture, etc. The following literature study shows the related work on the agricultural sector using machine learning techniques.
Nair et al. [29] applied ANN to Global Climate Model (GCM) outputs for India. The goal of the proposed method was to anticipate Indian Summer Monsoon Rainfall values using precipitation outputs from the GCMs. The ANN procedure was applied to different ensemble members of the individual GCMs to obtain month-wise predictions for India and its sub-divisional regions. In that investigation, straightforward randomization and a two-fold validation method were used to minimize over-fitting while training the ANN. The ANN-predicted rainfall obtained from the GCM members was assessed by examining the absolute error, box plots, contrast, and the percentile linear error in probability space. Experimental results showed significant changes in forecast skill after applying the ANN scheme to these GCM members. The datasets depend on past values of the primary variable but not on explanatory factors which may influence the framework/variable. Satir et al. [30] proposed a stepwise linear regression and vegetation indices method for crop yield estimation. By applying object-based classification and a multi-temporal Landsat data set, maps of the relevant crop patterns of an area were produced. In this scenario, prediction was estimated using real-time measurement methods like the Mean Percent Error (MPE). MPE was estimated for cotton, corn and wheat and combined with soil salinity degrees. Forecasting was done based on weather data, and prediction accuracy was reduced when based on a single parameter.
Das et al. [31] investigated hybrid algorithms such as the Least Absolute Shrinkage and Selection Operator (LASSO), ANN, penalized regression models consisting of the elastic net (ENET), Principal Components Analysis (PCA), and Stepwise Multiple-Linear Regression (SMLR) for predicting the yield of rice with the help of long-term weather data. The experimental results stated that LASSO-ENET provided good performance because these methods reduce model complexity and prevent overfitting by shrinking the magnitude of the coefficients. A pairwise multiple comparison test found that the hybrid models were well suited for crop prediction on the west coast of India. However, the combination of feature selection and feature extraction with a neural network, such as PCA-SMLR, performed poorly because PCA does not take the dependent variable into account when transforming the input variables.
He et al. [32] implemented a Hybrid Wavelet-based Neural Network (HWNN) which included Particle Swarm Optimization (PSO), Mutual Information, and Multi-Resolution Analysis into ANN for predicting rainfall from antecedent climate indices and monthly rainfall. The Maximal Overlap Discrete Wavelet Transform decomposed the large-scale climate indices and standardized monthly rainfall anomaly into subseries components with various time scales. The PSO algorithm was applied to find the optimal neuron numbers in ANN's layers (hidden) and the predictor (selected) predicted anomaly sub-series for each rainfall. HWNN method was more efficient for particular season rainfall prediction but took high prediction time in different season rainfall prediction.
Mohan et al. [33] implemented parallel layer regression with Deep Belief Network (PLR-DBN) for the estimation of food crop productivity using factors such as season types, soil type, risk factor, and water availability. The proposed PLR-DBN method targeted five crops in Karnataka based on accuracy, sensitivity, and specificity. Talukder et al. [34] designed a prediction and recommendation technique that determines food crop productivity based on temperature, rainfall, and humidity parameters. K-nearest neighbor (KNN), random forest, SVM, logistic regression, Naïve Bayes classifier were used for the prediction model. Collaborative and multi-condition filtering techniques are used for the recommendation system.
To improve overall crop productivity, this paper develops a deep learning-based method using Uttarakhand agricultural data and weather data from the Indian Meteorological Department (IMD), Dehradun, whereas the soil and pest & fertilizer corpora are open-source databases.
Study Area and Dataset Description
Uttarakhand is a state in the northern part of India that spreads from 79 • 15' east longitude to 30 • 15' north latitude with 53,483 square km geographical area. This state was taken as the area of study for our research work. The Uttarakhand state i.e., the Garhwal region comprises Chamoli, Dehradun, Pauri, Uttarkashi, Rudraprayag, Tehri, Haridwar, and the Kumaon region with Almora, Bageshwar, Nainital, Pithoragarh, Champawat and Udham Singh Nagar districts. For modeling rainfall-runoff events, the entire region has been considered for the study so that almost the whole state area can be covered. Data from various data sources like IMD, soil, and pest & fertilizer corpora were gathered from various research organizations such as District Soil Testing Laboratory, Dehradun/Soil Testing Laboratories located at Nanda ki Chowki, Premises of Directorate of Agriculture, Premnagar, Dehradun, and a database has been created.
Proposed Methodology
The next five subsections include the proposed framework, min-max algorithm applied for data preprocessing, corpora concatenation techniques, proposed WSD algorithm, and deep learning-based IE algorithm.
Proposed Framework for Semantic IE
The research framework presents a theoretical and practical approach for extracting semantic information. Unlike few existing frameworks in literature, this approach attempts to give a structure that highlights the fundamental concepts and components of semantic IE. The methodology followed in this study is composed of a collection of articles in the selected areas, collection of authenticating data in those relevant fields (mostly the benchmark datasets from repositories) selection of appropriate data mining tools, data storage tools (Excel, Oracle), and editing tools.
In this section, the operational framework is elaborated for presenting the complete flow of the research components carried out for this study. This study mainly spins around information gathering, data pre-processing, semantic extraction, and data post-processing. These four core or concentrated parts are involved in the practical implementations of this framework. Fig. 2 shows the overall view of the present research, wherein the framework has been divided into four different modules including corpus concatenation, deep network-based NER and EE, and Semantic Extraction.
Min-Max Algorithm
This subsection describes the preprocessing of the input corpora, i.e., the removal of noise and the identification of missing values. The input data are taken from the database and contain different kinds of units, such as temperature in Celsius and wind speed in miles per hour. To avoid scaling effects in the deep learning architecture, variables normalized to the interval [0, 1] are used in the proposed method. The normalization applied to the dataset can be observed in Eq. (1), where a_i denotes the normalized value of the i-th variable, min a_i denotes the minimum value registered in the training dataset for this variable, and max a_i the maximum value of the same variable in the training dataset.
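A minimal sketch of the min-max scaling described by Eq. (1); the column layout and example values are placeholders, not data from the paper.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column of X into [0, 1] using the training-set min and max."""
    X = np.asarray(X, dtype=float)
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    span = np.where(col_max > col_min, col_max - col_min, 1.0)  # avoid divide-by-zero
    return (X - col_min) / span

# Hypothetical rows: [temperature (deg C), wind speed (mph), rainfall (mm)]
sample = [[25.0, 10.0, 120.0],
          [32.0,  4.0,  80.0],
          [18.0, 15.0, 200.0]]
print(min_max_normalize(sample))
```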
Corpora Concatenation
The previous subsection used the min-max algorithm to reduce the noise in the input corpora; these unstructured input corpora are then converted into a single unified corpus. As the nature of these corpora is different, merging them requires considerable human intervention. Knowledge-based WSD (KB-WSD) and corpus-based WSD (CB-WSD) are two basic methods used for combining two or more corpora into a single entity. The main purpose of integrating these two popular algorithms is to remove semantic ambiguity and merge corpora of different natures into a single entity, called Agri_Corpus. Integrating CB-WSD into KB-WSD has shown only a low improvement rate in many cases, so for the current research work the integration of KB-WSD into CB-WSD is used in the proposed model and is discussed in the next section.
The agricultural data has been collected from Krishi Vigyan Kendra, Dhakrani, District Dehradun, Uttarakhand. (http://agricoop.nic.in/sites/default/files/UKD7-Dehradun-10.07.14.pdf ). Tab. 1 shows the sample data for rainfall prediction to improve the crop productivity of the Uttarakhand region. Tab. 1 describes the sample data collected for major crop productivity of Uttarakhand like rice, barley, and potato. The rice crop gives more productivity like 19689.9 kg per hectares (ha) during rainfall season, whereas barley gives nearly 20 kg per ha for the winter season. Rice has been considered the most important crop for Uttarakhand because of its productivity. Potato can be cultivated during the summer season, which has productivity of 22140 kg per ha. In the Uttarakhand region, the crops like rice, barley, and potatoes are majorly sown at low, medium, and high rainfall respectively. Tab. 2 shows the sample data for monthly average rainfall data of one year. The data collected for nearly 20 years are taken from the region, the rainfall values can be calculated by using the predicted values from the sample table. The values 0 in the predicted column indicate low rainfall, whereas 1 indicates medium rainfall, and 2 represents high rainfall.
Proposed Disambiguation Algorithm
Before applying the natural language processing technique, the input data should be processed with a disambiguation algorithm. A few existing methods can be used to extract the sense of ambiguous words from unstructured text [35]. The proposed algorithm was used to extract the sense of the ambiguous words present in the corpus collected for the current research. Cosine similarity has been used to measure the similarity between two words and cosine distance to measure the dissimilarity between them [36].
Eqs. (2) and (3) define the cosine similarity and the cosine distance between two words W_i and S_i, denoted Sim(W_i, S_i) and D_amb(W_i, S_i), respectively. The cosine distance ranges from 0 to 1, where 1 indicates that W_i and S_i are unrelated and 0 (or close to 0) indicates that W_i is associated with S_i [37].
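Eqs. (2) and (3) correspond to the usual cosine similarity and cosine distance; a minimal sketch, assuming the two words are already represented by non-negative feature vectors (so the distance stays in [0, 1] as stated above). The vectors below are random placeholders.

```python
import numpy as np

def cosine_similarity(w, s):
    """Sim(W_i, S_i) = (w . s) / (||w|| * ||s||)."""
    return float(np.dot(w, s) / (np.linalg.norm(w) * np.linalg.norm(s)))

def cosine_distance(w, s):
    """D_amb(W_i, S_i) = 1 - Sim(W_i, S_i); 0 means closely associated."""
    return 1.0 - cosine_similarity(w, s)

rng = np.random.default_rng(0)
word, sense = rng.random(50), rng.random(50)   # placeholder non-negative vectors
print(cosine_similarity(word, sense), cosine_distance(word, sense))
```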
Proposed Algorithm for Semantic IE Using LSTM-RAO and MLP
The following algorithm uses the min-max algorithm for data normalization and builds on the LSTM technique. For each iteration of backpropagation, the RAO and MLP are applied to modify the weights in the deep network. This optimizer inherits the properties of the RMSProp and AdaGrad optimizers.
while
    // Compute the length of the approximated SMA
    f. if the variance is tractable, i.e., ρ_t > 4 then
        // Compute bias-corrected moving 2nd moment
LSTM with RAO and MLP
This section presents the algorithm of the proposed deep learning technique, which comprises the following architecture: an LSTM, used for feature selection and responsible for handling the time series, and an MLP network with the rectified Adam optimizer (RAO), used for the classification and prediction tasks. Fig. 3 shows the LSTM with RAO and MLP based deep learning network [38]. The proposed model can be divided into two parts, namely feature selection and classification. In this network, the hyperbolic tangent (tansig) activation function is used in the deep hidden layers and the sigmoid (sig) activation function is used to increase the correlation with the target data. The activation functions used in the hidden layers are stated in Eq. (4).
As per Eq. (5), the forget gate produces a value f_T in [0, 1] for each number in the cell state C_{T−1}; W_fg and b_fg represent the weight and bias of the forget gate. From the input X_T, a sigmoid layer and a tanh layer are used to decide what to store and update in the cell state. In Eq. (6), the sigmoid function (with output in (0, 1)) decides whether the incoming information is ignored or used for the update, and in Eq. (7) the tanh function (with output in (−1, 1)) produces the candidate values N_T and weights their importance. The product of N_T and i_T is used to update the cell state of the LSTM network: this new memory value is added to the previous memory value C_{T−1}, weighted by the forget gate, to obtain the updated C_T as shown in Eq. (8). In the next step, the output value h_T is derived from the cell state through the output gate O_T: in Eq. (9) a sigmoid function selects which parts of the cell state take part in the output, and in Eq. (10) the gate output O_T is multiplied by the tanh of the new cell state C_T (in the range −1 to 1) to give h_T.
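Eqs. (5)-(10) follow the standard LSTM gate equations; the NumPy sketch below implements one forward step under that reading. The weight shapes, the concatenation of the previous hidden state with the input, and the random initial values are assumptions for illustration, not the paper's trained network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Eqs. (5)-(10): forget, input, cell update, output."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + b["f"])      # Eq. (5): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])      # Eq. (6): input gate
    n_t = np.tanh(W["c"] @ z + b["c"])      # Eq. (7): candidate values
    c_t = f_t * c_prev + i_t * n_t          # Eq. (8): new cell state
    o_t = sigmoid(W["o"] @ z + b["o"])      # Eq. (9): output gate
    h_t = o_t * np.tanh(c_t)                # Eq. (10): hidden output
    return h_t, c_t

# Tiny example with random placeholder weights.
d_in, d_hid = 4, 3
rng = np.random.default_rng(1)
W = {k: rng.normal(scale=0.1, size=(d_hid, d_hid + d_in)) for k in "fico"}
b = {k: np.zeros(d_hid) for k in "fico"}
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_hid), np.zeros(d_hid), W, b)
print(h, c)
```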
Like RMSprop and Adadelta, the Adam optimizer keeps exponentially decaying averages of past gradients, M_T, and of past squared gradients, V_T, as in Eqs. (11)-(13); G_T denotes the stochastic gradient of the objective at time T. Here M_T and V_T represent the first and second gradient moments, i.e., the mean and the uncentered variance. These moments are biased towards zero, especially during the initial time steps when the decay rates ε_1 and ε_2 are close to 1. Eqs. (14) and (15) are therefore used to compute the bias-corrected first- and second-moment estimates.
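Eqs. (11)-(15), together with the ρ_t > 4 test in the algorithm fragment above, correspond to the published rectified-Adam (RAdam) update; the sketch below follows that published rule. The hyper-parameter values and the toy quadratic objective are assumptions for illustration, not values from the paper.

```python
import numpy as np

def radam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One rectified-Adam update (moments as in Eqs. 11-15 plus rho_t > 4 rectification)."""
    m = b1 * m + (1 - b1) * grad                 # 1st moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2            # 2nd moment (uncentred variance)
    m_hat = m / (1 - b1 ** t)                    # bias-corrected 1st moment
    rho_inf = 2.0 / (1 - b2) - 1.0
    rho_t = rho_inf - 2.0 * t * b2 ** t / (1 - b2 ** t)  # length of approximated SMA
    if rho_t > 4:                                # variance is tractable
        v_hat = np.sqrt(v / (1 - b2 ** t))       # bias-corrected 2nd moment
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                      ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta = theta - lr * r_t * m_hat / (v_hat + eps)
    else:                                        # fall back to un-adapted momentum step
        theta = theta - lr * m_hat
    return theta, m, v

# Toy usage on the quadratic loss ||theta||^2; values are illustrative only.
theta, m, v = np.array([2.0, -3.0]), np.zeros(2), np.zeros(2)
for t in range(1, 201):
    theta, m, v = radam_step(theta, 2 * theta, m, v, t)
print(theta)
```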
Performance Evaluation
For the scenario experimental simulation, Python Jupyter notebook was installed in the computer system with a 3.2 GHz Core i5 processor. The proposed WSD algorithm has been applied to the following small paragraph (next complete paragraph). For a single word, there are various meanings (sense). To demonstrate the proposed algorithm, the following paragraph was used as input "Ginger is a medicinal plant. There is not a particular period to sow this plant but the pre-monsoon shower session is considered a better period. It is considered a Kharif crop. One month of dry weather before harvesting ginger gives better results".
Tab. 3 presents the output in tabular form. The term "session" can refer to a period of activity, a serious meeting, or a weather session; by applying the disambiguation algorithm, the word "session" was related to a weather session only. Fig. 6 shows further output of the proposed deep learning-based agricultural event extraction. In the next phase, the proposed deep learning method is applied to the unstructured unified corpus to extract agricultural NER, events, and relationships that can be used to predict the productivity of the major crops of the Uttarakhand region. For finding better crop production, the main factors used were soil, season, water, input support facilities, and risk. Other observations include a Mean Squared Error of 0.065, a Root Mean Squared Error of 0.25, a Mean Absolute Error of 0.065, and a Nash-Sutcliffe efficiency coefficient of 0.99.
Parameter Metrics
In this study, the performance of the proposed method was assessed using standard statistical evaluation criteria, namely accuracy, sensitivity, specificity, and F-measure. Tab. 4 provides the values of accuracy, F-measure, sensitivity, and specificity for the major crops of the Uttarakhand region.
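The reported metrics follow the usual confusion-matrix definitions; a minimal sketch with placeholder labels:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity and F-measure from label lists."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_measure = (2 * precision * sensitivity / (precision + sensitivity)
                 if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, f_measure

# Placeholder labels (1 = entity/event correctly extracted, 0 = not).
print(binary_metrics([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 0, 1, 1, 0]))
```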
Comparative Analysis
This section provides a detailed description of the performance of the proposed method. The comparison of the proposed method has been presented with the cross-validation of 80% training and 20% testing data. The cross-validation of the proposed method was also analyzed for 70-30% and 60-40% training-testing data. Fig. 7 shows the accuracy of the proposed method with respect to ANN, recurrent neural network (RNN), LSTM with Adam optimizer, and LSTM with rectified Adam optimizer.
Similarly, Fig. 8 compares the proposed method with ANN, RNN, LSTM with Adam optimizer, and LSTM with rectified Adam optimizer in terms of precision, recall, and F-score. As mentioned in Tab. 5, the proposed method and the existing techniques, such as deep learning-based weighted self-organizing map (DL-SOM) [39], LSTM+RAO [40], PLR-DBN, KNN, and Naïve Bayes, were evaluated using combinations of training and testing percentages, such as an 80% training and 20% testing dataset, for rice, barley, and potato.
Conclusion and Future Directions
The proposed method presents a statistical investigation of the rainfall, soil, agriculture, and pest and fertilizer datasets for the Uttarakhand region. The scope of the proposed experiment was to extract agricultural NER, events, and the relationships between them. The stated method can be used to enhance the productivity of major crops like rice, barley, and potato in high-rainfall areas of Uttarakhand state by investigating the accurate rainfall required for a good quantity of crop prediction with better soil quality. In this context, a deep learning method was implemented to predict the suitable major crop for the season in the Uttarakhand region, India. The output generated by the introduced method shows better performance than existing methods. An accuracy of 88.10% was achieved by properly utilizing the LSTM with RAO and MLP optimizers. The experimental results were compared with DL-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes, and it was observed that the proposed algorithm outperforms the existing ones by 1.09%, 1.32%, 1.0%, 1.37%, and 1.22% in terms of accuracy, by 1.09%, 1.01%, 1.0%, 1.44%, and 1.44% in sensitivity, and by 1.11%, 1.0%, 1.07%, 1.41%, and 1.49% in specificity compared with DL-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes respectively. The value of the Nash-Sutcliffe efficiency coefficient was 0.99. The proposed scheme delivered effective performance in the form of improved sensitivity, accuracy, specificity, and F-score compared with the other approaches available for crop prediction. To improve agricultural productivity, plant disease datasets can be taken into consideration in future work. The experimental results show that there is ample scope for researchers to focus on potato crop productivity in hilly areas.
"Computer Science"
] |
Hierarchial Roots and Shoots or Opera Jehovae Magna! (PSALMS 111:2)
The philosophy of Linnaeus's classification, Systema Naturae, is briefly reviewed, as well as those of post-Linnaean systems of plant classification. Texts of current codes of nomenclature pertaining to hierarchy, including associated rank terminations, are compared.
INTRODUCTION
The Symposium title speaks of the hierarchy of Linnaeus, especially misnamed for plants since Linnaeus's (1759) artificial system of 24 sexual classes was replaced by Jussieu's (1789) natural families. The Natural System has survived, although its underlying assumptions have drifted. Jussieu's (1789) and Cuvier's (1798) assumption was that God's Creation is continuous (a solid map with artificial lines drawn on it). Later pre-Darwinian thought emphasized discontinuity: islands and archipelagos with peninsulas indicating affinities. Taxa can be located on the map by definitions functioning as coordinates. Current thought is three-dimensional, with time, i.e., evolution, as the third dimension.
For me, reality lies in the specimens. What we say about the specimens are hypotheses, i.e., the taxa that we construct, the hierarchies that we design, the systematics that we debate, and the evolutionary steps that we wring from our data. The problem is that we don't like ambiguity and have accorded some value to particular hypotheses for practical purposes, such as identification of unknowns. It is fascinating to see proposals aiming to create hierarchies with same names at different ranks.
What do our Codes, including the Draft Biological Code (Greuter et al. 1996), say about hierarchy (see Appendices)?
LINNAEAN PLANT HIERARCHY
My title expresses two things: The first part came when I agreed to say something about the past. At the time I had no idea what a difficult subject hierarchies would be. The second part is generally translated as "How great are the works of the Lord." It appears opposite the title page of at least two editions of Linnaeus' Systema Vegetabilium (10th ed. of 1759 and Murray's 14th ed. of 1784). It expresses a philosophical rooting of early workers who not only knew their Bible, but knew it in Latin. They were exposing the richness that God created, perhaps in six days before he rested on the seventh.
As outlined by Stearn (1957: 26-34), the Linnaean hierarchy involved 24 named classes based on sexual characters termed by Siegesbeck (1737). The point is that the Linnaean hierarchy was absolutely artificial, and that is why we botanists are a little surprised to have the Linnaean hierarchy taken so seriously. Linnaeus (1753) gave plants the binomial naming system, i.e., the foundations of the generic and species names that we use today in biology. But is this the part of the hierarchy that we are discussing today? I think not. On the other hand, Linnaeus and the Linnaeans were the last to comprehend the Natural World with works like Systema Naturae (A System of Nature), with its vision of Kingdoms (Regnum) of Animals (Animalium) and Vegetables (Vegetabilium). It was really the post-Linnaeans, Jussieu (1789) for plants and Cuvier (1798) for animals, who laid the cornerstones of higher-ranked taxa, especially families (then called orders). Peter Stevens's (1994) book on Antoine-Laurent de Jussieu discussed what was known as the Natural Method (as opposed to Linnaeus' Artificial Method). I have relied on his work and apologize for any misunderstandings.
The philosophical underpinnings of the hierarchies are important but were rarely commented upon by the workers themselves. In essence, these early post-Linnaeans saw nature as a map of a single land-mass, on which they were drawing lines to separate taxa. One could, by giving latitude and longitude, locate taxa on the map. Perceived gaps between taxa were thought to be an artifact of incomplete knowledge. Thus, Cuvier could be quoted as saying that "classes, orders and genera are abstractions by man and do not exist in nature." Corollaries to this perception of nature as continuous meant that criteria for drawing lines could include considerations such as (1) taxa should not be too big, i.e., genera should not have more than 100 species, or families more than 100 genera, and (2) taxa should not be too small, i.e., genera should not comprise only a single species, and there should be no unigeneric families. Indeed, Jussieu (1789) recognized only 100 families, leaving a pile of miscellaneous genera (i.e., 416-446) at the end that he would not place.
Within this philosophy of continuous variation, the centers of taxa were quite different from the centers of other taxa, but taxa adjoining the centers would grade toward other centers. This philosophy of the continuous chains of nature (scalae naturae) had a corollary that any perceived gaps between taxa represented lack of knowledge: another expedition would return with previously unknown material that would neatly fill in the gaps.
LATE POST-LINNAEAN OR PRE-DARWINIAN HIERARCHIES
The new materials from the great expeditions were being worked up and work on the great British colonial floras was initiated, a veritable taxonomic flood. It was becoming increasingly evident that nature wasn't woven of continuous chains: there really are gaps between taxa. The view was changing from a map of a continuous land-mass with arbitrary lines on it (like a map of the U.S.A. with states, counties, etc.) to a map covered by continents and islands of various sizes, sometimes with peninsulas and archipelagos suggesting closer relationships between some areas than with others.
Although the underlying philosophy was changing, it did not result in much change; the differences between the classifications of De Candolle (1813), Bentham and Hooker (1862-1883), etc., are clearly rooted in those of Jussieu. In the 20-year period from 1825 to 1845, 24 systems of plant classification were proposed, characterized by Lawrence (1951: 31) as "only minor improvements or elaborations of the system of de Jussieu and, aside from the major contributions of de Candolle and of [Robert] Brown, gave little indication of deep analysis of basic considerations." If this period is said to have ended with Darwin (1859), its last flowering was the system laid out in Bentham and Hooker's Genera Plantarum in three massive volumes. The publication of Darwin's theories of evolution and the origin of species coincided with the preparation of the first volume, and Hooker wanted to start all over, completely reorganizing. Bentham opposed this, since he did not accept the essentials, although he did a decade later. One of the great strengths of this work is that its descriptions were based on actual study of specimens in the Kew Herbarium, which was and is phenomenally rich, not on descriptions compiled from the literature.
One could go on with the various systems of Hallier (1905), Bessey (1915), Hutchinson (1926-1934), Cronquist (1968, 1988), Takhtajan (1980), Dahlgren (1989), Thorne (1976, 1983, 1992) and the latest contributors. By and large, especially when viewed from a distance, most of these do not appear radically different from the pre-Darwinian systems. There are reasons for this, perhaps more of a practical nature than theoretical. In essence, most of these workers had or have a lot of experience with the study of specimens. This results in a practical focus by workers who are thinking: how can I organize all this knowledge so that others can more quickly identify unknowns? Can I fit my thoughts within the framework of my predecessors?
These practical, as opposed to theoretical, concerns are very much in the minds of all of us, especially when we are rooted in the realization that the specimens are the facts; the things we make of the specimens, including hierarchies for organizing them, are our hypotheses.
Wallace Ernst's (1972) posthumous work on Lamourouxia showed that the genus almost certainly had evolved long tubular flowers (pollinated by hummingbirds) twice, presumably from shorter, more open (bee-pollinated) flowers. It was clear that the relationships of some bird-pollinated species were with bee-pollinated species, not with other bird-pollinated species, although they, superficially, looked rather similar.
At a dinner meeting in Washington in 1965, another colleague, Phil Humphries, said something that stuck with me. Another worker, who was into programming on the latest computer 30 years ago, begged him for data to crunch. He gave it to him and it came back in the form of a mobile with the comment that all his data could be expressed in this form. Phil hung the mobile over his desk and began contemplating the relationships as the various parts rotated. Suddenly he realized that now he was studying relationships two steps removed from reality, his data were one step removed and the mobile was a second step removed.
My final image of the evolutionary system is no longer a two-dimensional map but a transparent globe with a single point at the center from which everything evolved through the third dimension, time, to the surface, which is covered with the living species more or less arranged by their genealogies. I think Kevin de Queiroz's (de Queiroz and Gauthier 1994) idea is that we need to abandon the current taxonomic and nomenclatural system of the surface and replace it with a system based on the branches reaching from the core.
Perhaps Alphonse de Candolle (1867) expected something like this when he said in his introduction to his Lois: "There will come a time when all the plant forms will have been described; when herbaria will contain indubitable material of them; when botanists will have made, unmade, often remade, raised or lowered, and above all modified several hundred thousand taxa ranging from classes to simple varieties, and when synonyms will have become much more numerous than accepted taxa. Then science will have need of some great renovation of its formulae. This nomenclature which we now strive to improve will then appear like an old scaffolding, laboriously patched together and surrounded and encumbered by the debris of rejected parts. The edifice of science will have been built, but the rubbish incident to its construction not cleared away. Then perhaps there will arise something wholly different from Linnaean nomenclature, something so designed as to give certain and definite names to certain and definite taxa." This is my first visit (outside of an airport) to California since my two years at Stanford ended in 1957. Even then, I was aware of a problem with the redwoods: does the giant redwood, the Big Tree, belong to the same genus as the coast redwood (Sequoia)? In other words, is there one genus with two species or two genera with one species each? This problem was known to me when I first visited the Big Trees in the Sierra foothills and met the General Sherman Tree. This is one BIG tree. The first branch was 100 ft. up and was 6 ft in diameter, the size of the mighty elms arching over our streets back at home in Iowa. That's just the first branch! It is difficult to express how insignificant I felt looking at such a giant that had been standing there for about 2000 years. I felt like a flea contemplating an elephant. Then came a moment of truth: that tree really didn't care what I, or anyone else, called it.
How great are the works of the Lord! (Opera Jehovae magna!).
EPILOGUE
After listening to the other speakers and the discussions, I now believe that my image of that globe is not too bad. The surface of this globe is the currently living biological world, which is hierarchically subdivided geographically by the so-called Linnaean hierarchy: continents, such that the Animal Kingdom is here and the Plant Kingdom is over there. This image lends itself to the idea that we can more or less agree over how many geographic ranks to recognize: regions (phyla), countries (subphyla), states (classes), counties (orders), townships (families), etc. Such a system has value and I, for one, am not ready to say that it must be abandoned.
But the relationships of this biological world are not the product of what is on the surface, and what seems sufficient for organizing the taxa on this "surface" may be insufficient for organizing by the roots. The relationships are, ultimately, genealogical and are to be revealed by their roots through time. I would be dumbfounded if I were told that I must fit my wife's known genealogy into a fixed number of generations with only a certain number of relationships allowed. If you have parents, you may have other relationships, siblings. If you have grandparents you have more relationships, first cousins, nephews, nieces, maybe a first cousin, once removed. Then there are the second marriages and their products.
I don't want to go into the practice and theory of human genealogy. However, one has two choices in looking at genealogy. A descent chart rotates the data so that you see only the direct descendants of a given person; the relationships to those marrying descendants are rotated away. An ancestor chart rotates the data so that you see only the direct ancestors of a given person; all sibling relationships are rotated away. Nei-
2b. To avoid conflict with mycological usage (Fungi), no other names above family should end with -mycota, -mycotina, -mycetes or -mycetidae.
2c. To avoid conflict with phycological usage (Algae), no other names above family should end with -phyceae or -phycideae.
3. Some earlier botanical classifications treated "Phylum" as a subdivision of "Division", but the 1994 Tokyo Code made it an alternative to "Division".
APPENDIX I. Botanical Code (Greuter et al. 1994) on Hierarchy
Art. 2.1. Every individual plant is treated as belonging to an indefinite number of taxa of consecutively subordinate rank, among which the rank of species (species) is basic. Art. 3.1. The principal ranks of taxa in descending sequence are: kingdom (regnum), division or phylum (divisio, phylum), class (classis), order (ordo), family (familia), genus (genus), and species (species). Thus, except for some fossil plants (see Art. 3.3), each species is assignable to a genus, each genus to a family, etc. Art. 4.1. The secondary ranks of taxa in descending sequence are tribe (tribus) between family and genus, section (sectio) and series (series) between genus and species, and variety (varietas) and form (forma) below species. Art. 4.2. If a greater number of ranks of taxa is desired, the terms for these are made by adding the prefix sub- to the terms denoting the principal or secondary ranks. A plant may thus be assigned to taxa of the following ranks (in descending sequence): regnum, subregnum, divisio or phylum, subdivisio or subphylum, classis, subclassis, ordo, subordo, familia, subfamilia, tribus, subtribus, genus, subgenus, sectio, subsectio, series, subseries, species, subspecies, varietas, subvarietas, forma, subforma. Art. 4.3. Further ranks may also be intercalated or added, provided that confusion or error is not thereby introduced. Art. 5.1. The relative order of the ranks specified in Art. 3 and 4 must not be altered (see Art. 33.5 and 33.6). Art. 10.7. The principle of typification does not apply to names of taxa above the rank of family, except for names that are automatically typified by being based on generic names (see Art. 16).
The type of such a name is the same as that of the generic name on which it is based. Art. 11.9. Priority is not mandatory for names of taxa above the rank of family (but see Rec. 16B). Rec. 16B.1. In choosing among typified names for a taxon above the rank of family, authors should generally follow the principle of priority. Art. 17.1. The name of an order or suborder is taken either from distinctive characters of the taxon (descriptive name) or from a legitimate name of an included family based on a generic name (automatically typified name). An ordinal name of the second category is formed by replacing the [family] termination -aceae by -ales. A subordinal name of the second category is similarly formed, with the termination -ineae. Art. 18.1. The name of a family is a plural adjective used as a substantive; it is formed from the genitive singular of a legitimate name of an included genus by replacing the genitive singular inflection (Latin -ae, -i, -us, -is; transliterated Greek -ou, -os, -es, -as, or -ous, including the latter's equivalent -eos) with the termination -aceae. Art. 19.1. The name of a subfamily is a plural adjective used as a substantive; it is formed in the same manner as the name of a family (Art. 18.1) but by using the termination -oideae instead of -aceae. Art. 19.3. A tribe is designated in a similar manner, with the termination -eae, and a subtribe similarly with the termination -inae.
[For specified botanical ranks and their specified terminations, see Table I.]
APPENDIX II. Zoological Code on Hierarchy
Pre. [2nd paragraph]. The object of the Code is to promote stability and universality in the scientific names of animals and to ensure that the name of each taxon is unique and distinct. Art. 29(a). Formation of family-group names. A family or a subfamily name is formed by adding to the stem of the name of the type genus the latinized suffix -idae for a family name and -inae for a subfamily. Rec. 29A. It is recommended that the suffix -oidea be added to the stem for the name of a superfamily and -ini for the name of a tribe. Art. 35(a). Taxa. The family group includes all taxa at the ranks of superfamily, family, subfamily, tribe and any other rank below superfamily and above genus that may be desired, such as subtribe.
[For specified zoological ranks and their specified terminations, see Table I.] | 4,001.2 | 1996-01-01T00:00:00.000 | [
"Philosophy",
"Biology"
] |
Prediction of oxygen-blowing volume in BOF steelmaking process based on BP neural network and incremental learning
In view of the characteristics of the dynamic basic oxygen furnace (BOF) steelmaking process, prediction models based on backpropagation neural network and incremental learning (BPNN-IL) are proposed for total blow oxygen volume and second blow oxygen volume. The incremental learning adjusts the weights and thresholds of the BPNN according to the difference between the predicted value and actual value of each heat, to adapt to the change in furnace conditions. The combined BPNN-IL models are trained and tested with actual production data, and are further compared with multiple linear regression models and BPNN models. The results show that, whether for total blow oxygen volume or second blow oxygen volume, the BPNN-IL models could provide the most accurate prediction, and the introduction of an incremental learning method could further improve the predictive accuracy. So the BPNN-IL method is effective in predicting the oxygen-blowing volume in the BOF steelmaking process.
Introduction
Basic oxygen furnace (BOF) steelmaking is a complex process that includes physical and chemical reactions at high temperatures. By blowing oxygen into the molten pool of the BOF, the impurities in the liquid metal are oxidized and the molten pool is stirred at the same time, thus achieving the goals of decarbonizing, increasing temperature, and changing the components of the molten steel. Therefore, the control of oxygen-blowing volume in the BOF steelmaking process is very important: it directly determines the smelting effect and quality of the steel and thus affects the end-point control of the BOF steelmaking process.
At present, the oxygen-blowing volume in the BOF steelmaking process is controlled mainly through a static model and a dynamic model. The static model is based on the initial conditions of steelmaking and the target requirements of the steel grade, and uses material balance and heat balance methods to calculate the total blow oxygen volume and the additives needed to reach the required end-point conditions of the BOF steelmaking process and to provide a guide for technological operation. However, due to the complexity of the steelmaking process, various influencing factors, and unstable operations, the static model cannot be adjusted in real time, so relying completely on a static model cannot control end-point carbon content and temperature in the BOF well. To solve this issue, in the final stage of BOF steelmaking, instruments such as a sub-lance or off-gas analyzer are employed to measure the temperature and carbon content of the molten steel. According to the measured temperature and carbon content, the end-point target parameters, and with the help of a dynamic model, the amounts of oxygen blowing and coolants can be calculated and adjusted to control the end-point of the steelmaking process, which is called dynamic control.
As the basis of dynamic model control, the precision of the static model affects the effectiveness of dynamic control, and therefore the study of static models is also of great importance. At present, static models mainly include the mechanism model [1][2][3], statistics model [4,5], incremental model [6], and intelligent model [7][8][9]. The intelligent model, in comparison with other models, solves the nonlinear problem of the steelmaking process, achieves good results, and overcomes the problems that other models have, such as excessive influencing factors, difficulties in describing them with exact mathematical equations and statistical methods, and poor control precision. For example, Wang and Han [7] presented a causality-based case-based reasoning model for the static control of converter steelmaking. Zhao et al. [8] established a static model for the prediction of oxygen consumption in BOF based on a genetic algorithm and extreme learning machine. Gao et al. [9] proposed a static control model of BOF steelmaking based on wavelet transform weighted twin support vector regression. Li et al. [10] proposed an improved deep belief network model based on deep learning for the converter of a steel mill based on massive historical data.
For most BOFs, the process control relies on static and dynamic models [11], where the static model is used to guide the early and middle stages of BOF steelmaking, while the dynamic model is used to guide the final stage of BOF steelmaking, which directly affects control precision at the end-point of BOF steelmaking. Therefore, improving the accuracy of the dynamic model can help increase the hit rates at the end-point of BOF steelmaking. To date, the most commonly used dynamic models are mainly the exponential decarburization model and the intelligent model. Most of the exponential decarburization models are based on the sub-lance and off-gas analyzer system, and the amount of oxygen to be blown in the dynamic stages of BOF steelmaking can be calculated by using the exponential decarburization model, such as the dynamic model of Linz-Donawitz (LD) converters established by Carlucci et al. [12]. Some exponential decarburization models do not depend on the sub-lance and off-gas analyzer system, such as the BOF quasi-dynamic control model established by Chen et al. [1]. The exponential decarburization model can reflect reasonably well the regularities of decarburization rates in the final stage of BOF steelmaking, but the dynamic model built on the exponential decarburization model still has some problems. For example, many parameters in the model are difficult to determine, yet they play a decisive role in the precision of the model. When the conditions of the furnace and raw materials are unsteady, the precision of the model is often unsatisfactory, so self-learning and adjustment of the model parameters are critical.
To further improve the control accuracy of the dynamic model, many scholars have already built BOF dynamic models using intelligent model technology. For example, Cox et al. [13] and Fileti et al. [14] presented prediction models of end-blow oxygen and coolant based on an artificial neural network, to improve the hit rate of BOF end-point temperature and carbon content. Rajesh et al. [15] developed a multi-layered feedforward neural network model for the prediction of end-blow oxygen in the LD converter using a two-step process. Han et al. [16] established BOF dynamic control models by case-based reasoning, adaptive-network-based fuzzy inference system, and robust relevance vector machine.
The above-mentioned intelligent models have played an important role in the development of BOF static and dynamic models, but most of them are based on a batch learning mode, which requires that all training data be prepared at once before learning; once the samples are learned, the learning process terminates and no new knowledge is acquired. This does not meet the actual requirements of the BOF, because in practical application training samples cannot be obtained at once but accumulate gradually with time; meanwhile, the information reflected in the samples is unsteady and varies with time. If all data have to be relearned after new samples arrive, a great amount of time and space is wasted, and therefore batch-learning models cannot meet such requirements. Only an incremental learning algorithm can update knowledge in a progressive way, and correct and strengthen previous knowledge, making the updated knowledge adapt to the newly arrived data without the need to relearn all the data. Incremental learning reduces the demand for time and space and can better meet the actual control requirements of the unstable and time-varying conditions of the BOF steelmaking process.
Aiming at the above problems and the actual characteristics of the BOF steelmaking process, this article presents a method that combines backpropagation neural network and incremental learning (BPNN-IL) to construct prediction models of total blow oxygen volume and second blow oxygen volume in the dynamic steelmaking process of the BOF. In practical application, the incremental learning method can conduct self-learning and adjust the weights and thresholds of the current BPNN according to the difference between the predicted value and actual value of each heat, to adapt to the change in furnace conditions, to improve the accuracy of the prediction model, and thereby to realize accurate control of oxygen-blowing volume in the steelmaking process, reducing oxygen consumption, increasing the hit rate at the end-point of BOF steelmaking, reducing the number of overblows and reblows, and reducing the production cost.
The number of neurons in the output layer determines the dimension of the output vector. The hidden layer plays a decisive role in the structure of the BPNN; for the hidden layer, a single-layer or multi-layer structure can be selected. For the BP neural network model, the prediction results of the output layer are obtained through forward transfer calculation of information; then, based on the errors between predicted values and expected values, the backpropagation calculation of the errors is carried out using the gradient descent method, and the connection weights between input layer and hidden layer neurons, the connection weights between hidden layer and output layer neurons, and the threshold values of hidden layer and output layer neurons are iteratively revised. The ultimate goal is to find a good combination of weight and threshold parameters that minimizes the network error. The BPNN model has a strong nonlinear processing ability and has been applied to the prediction and control of the BOF steelmaking process by many scholars [17][18][19][20][21]. However, the BPNN models proposed in these studies are based on a batch learning mode and do not have the ability of online self-learning. Unstable and time-varying characteristics of raw material and operation conditions in the actual BOF steelmaking production process will affect the generalization ability of the models.
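To make the forward transfer and gradient-descent training described above concrete, the following minimal sketch shows a three-layer BP network with a tanh hidden layer and a linear output layer. It is an illustration under assumptions, not the authors' implementation: the layer sizes (10-9-1), learning rate, and synthetic training data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 10 input factors, 9 hidden nodes, 1 output (blow oxygen volume).
n_in, n_hid, n_out = 10, 9, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))
b1 = np.zeros((n_hid, 1))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
b2 = np.zeros((n_out, 1))

X = rng.uniform(-1, 1, (n_in, 200))    # normalized training inputs (columns = heats), placeholder data
T = rng.uniform(-1, 1, (n_out, 200))   # normalized targets, placeholder data

lr = 0.01
for epoch in range(2000):              # "max epoch is 2,000" in the text
    # Forward transfer: tanh hidden layer, linear output layer.
    H = np.tanh(W1 @ X + b1)
    Y = W2 @ H + b2
    E = Y - T                          # error between predicted and expected values
    # Backpropagation of the error (gradient descent on the mean squared error).
    dW2 = E @ H.T / X.shape[1]
    db2 = E.mean(axis=1, keepdims=True)
    dH = (W2.T @ E) * (1 - H ** 2)     # derivative of tanh
    dW1 = dH @ X.T / X.shape[1]
    db1 = dH.mean(axis=1, keepdims=True)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad             # iteratively revise weights and thresholds
```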
Construction of the BP neural network model can be summarized as the selection of model input and output variables, preparation and preprocessing of data, and training and testing of the network, and thus the determination of the optimal network structure. The selection of model input and output variables is the basis of model establishment and directly affects the final prediction effect of the model. Preparation and preprocessing of data mainly serve to obtain effective training samples and test samples. At the same time, to eliminate the impact of different dimensions on the data, before training the network all samples are normalized to the range (−1, 1) according to formula (1) in this study. Training and testing of the network serve to determine the optimal network structure, such as the number of hidden layers, the number of neurons in each hidden layer, the transfer function, and the weights and biases in the network. In this study, a BP network with three layers is used. The sigmoid tangent function is used as the transfer function of the hidden layer, as shown in formula (2). The linear transfer function is used as the transfer function of the output layer, as shown in formula (3). Here, y is the normalized value of the variable, and x_max and x_min are the maximum and minimum of each variable x.
(Figure 1, blowing process of the BOF steelmaking: main blow (first blow), first measurement at about 85% of blow oxygen, second blow, tapping period, reblow.)
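Formulas (1)-(3) are not reproduced in this extract. A plausible reconstruction, assuming the usual min-max scaling to (−1, 1), a hyperbolic-tangent (sigmoid tangent) hidden transfer function, and an identity output transfer function, is:

y = \frac{2\,(x - x_{\min})}{x_{\max} - x_{\min}} - 1 \qquad (1)

f_{\mathrm{hidden}}(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \qquad (2)

f_{\mathrm{output}}(z) = z \qquad (3)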
Incremental learning method
The incremental learning method is a kind of intelligent data mining and knowledge discovery technology that is widely used. Its basic idea is a learning system that can continuously learn new knowledge from new samples while saving most of the previously learned knowledge. With the gradual accumulation of samples, the learning accuracy also improves. For the BOF steelmaking process, actual furnace conditions and raw materials with unstable and time-varying characteristics, and many uncertainties, cannot be accounted for by the BP neural network model alone. Therefore, this article proposes to introduce incremental learning on the basis of the constructed BPNN model, as shown in Figure 2.
In this way, the established prediction model can conduct self-learning for the change of operating conditions of each heat in practical application, which can improve the prediction performance of the model. The combined BPNN-IL model is described as follows. First, the BPNN model is established according to Section 3.1 and the optimal network structure is obtained. For example, for the three-layer BPNN model in Figure 2, through training on effective historical data, the obtained optimal network structure mainly comprises the weights w_ij and w_jk, the thresholds b_j and b_k, and the number of hidden layer nodes. This step belongs to offline batch learning.
Then, on the basis of the optimal BP neural network structure, in practical application, when the input variable data for the model prediction are collected, the target output is immediately predicted by the BP neural network model according to the real-time data of the current heat. Subsequently, when the actual output of the current heat is obtained, a gradient descent algorithm with momentum term [22] is used to self-learn the weights w_ij and w_jk and the thresholds b_j and b_k of this heat according to the difference between the predicted output and the actual output, as shown in formulas (4)-(11). The next heat then uses the updated weights and thresholds to calculate the target output. This method belongs to online incremental learning. It continuously adjusts the weights and thresholds of the BP neural network according to the difference between the predicted value and the actual value of the control target of each heat, which enables the prediction model to adapt to the changing conditions of each heat. Here, the learning formulas of the weights and thresholds in Figure 2 are the results obtained after substituting formulas (2) and (3); b_j(t) and b_j(t+1) are the thresholds of hidden layer node j for the current heat (t) and the next heat (t+1), respectively; η and α are parameters between 0 and 1, which can be determined by experiments and comparisons in steps of 0.001.
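Formulas (4)-(11) are not reproduced in this extract. The sketch below illustrates the general idea of one online self-learning step per heat, using gradient descent with a momentum term; the function interface, array layout, and exact update form are assumptions, not the authors' code.

```python
import numpy as np

def incremental_update(params, velocity, x, s_actual, eta=0.015, alpha=0.03):
    """One online self-learning step after the actual output of the current heat is known.
    params:   dict with "W1", "b1", "W2", "b2" (current weights and thresholds)
    velocity: dict holding the previous update steps (momentum terms)
    x:        input variables of the current heat (normalized)
    s_actual: actual output value(s) of the current heat
    eta, alpha: learning rate and momentum factor, both between 0 and 1."""
    W1, b1, W2, b2 = params["W1"], params["b1"], params["W2"], params["b2"]
    h = np.tanh(W1 @ x + b1)          # hidden layer output (tanh transfer function)
    y = W2 @ h + b2                   # predicted output (linear transfer function)
    e = y - s_actual                  # difference between predicted and actual value
    delta_h = (W2.T @ e) * (1 - h ** 2)
    grads = {"W2": np.outer(e, h), "b2": e, "W1": np.outer(delta_h, x), "b1": delta_h}
    for name, g in grads.items():
        velocity[name] = alpha * velocity.get(name, 0.0) - eta * g   # momentum plus gradient step
        params[name] = params[name] + velocity[name]                 # parameters used for the next heat
    return params, velocity
```

The caller keeps `params` and `velocity` between heats, so each heat's correction carries over to the next, which is the adaptation to changing furnace conditions described above.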
Establishment and experiments of the prediction model for oxygen-blowing volume in the BOF steelmaking process
In this article, based on the dynamic steelmaking process of the converter with sub-lance system, prediction models of total blow oxygen volume and second blow oxygen volume have been established using the BPNN-IL method.
The control flow of oxygen-blowing volume in the whole steelmaking process is shown in Figure 3.
(Figure 2: structure of the BPNN-IL model. Real-time data of the current heat (input variables x_i, i = 1, 2, ...) are fed forward through the hidden and output layers to give the prediction value of the model; once the actual values of the output variables s_k (k = 1, 2, ...) are known, the weights w_ij, w_jk and biases b_j, b_k are self-learned and refreshed for the next heat.)
Determination of model input and output variables
The selection of input and output variables of the model is very important and directly affects the prediction accuracy of the prediction model. For prediction model 1 (namely, the prediction model of total blow oxygen volume), the output variable is total blow oxygen volume, and the input variables are determined by the influencing factors of total blow oxygen volume. For prediction model 2 (namely, the prediction model of second blow oxygen volume), the output variable is second blow oxygen volume, and the input variables are determined by the influencing factors of second blow oxygen volume.
In this article, based on the characteristics of the actual BOF steelmaking process (e.g., Figure 3) and the analysis of historical production data, the input and output variables of the prediction models of total blow oxygen volume and second blow oxygen volume have been determined, as shown in Table 1. Through historical production data, Pearson correlation analysis has been conducted for each input variable and output variable of the prediction models, and the correlation coefficients have been obtained, as shown in Tables 2 and 3. Here, the Pearson correlation coefficient (R) can be calculated by formula (12). The sample is expressed as (X_i, Y_i); X̄ and Ȳ are the means of the sample, respectively.
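Formula (12) is not reproduced in this extract; presumably it is the standard Pearson coefficient, R = Σ_i (X_i − X̄)(Y_i − Ȳ) / √(Σ_i (X_i − X̄)² · Σ_i (Y_i − Ȳ)²). As a hedged illustration of the screening step described in the next paragraph (variable names and the |R| > 0.1 threshold interface are assumptions), the selection of main influencing factors could be coded as follows.

```python
import numpy as np

def pearson_r(x, y):
    """Standard Pearson correlation coefficient between two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

def select_main_factors(candidates, target, threshold=0.1):
    """Keep influencing factors whose |R| with the output variable exceeds the threshold.
    candidates: dict mapping factor name -> array of historical values
    target:     array of historical output values (e.g. total blow oxygen volume)."""
    selected = {}
    for name, values in candidates.items():
        r = pearson_r(values, target)
        if abs(r) > threshold:
            selected[name] = r
    return selected
```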
As the correlation coefficient of two variables reflects the degree of influence between them, it can be seen from the correlation coefficients in Tables 2 and 3 that influencing factors with a correlation coefficient greater than 0.1 or less than −0.1 are selected as the main influencing factors of total blow oxygen volume. Therefore, the main influencing factors of total blow oxygen volume are scrap weight, BOF endpoint temperature, and hot metal [Si] content. According to the size of the correlation coefficient of second blow oxygen volume with each influencing factor, the influence degree of each influencing factor on second blow oxygen volume is, in descending order, TSC temperature, TSC [C] content, BOF endpoint [C] content, BOF endpoint temperature, scrap weight, coolant addition, and hot metal weight.
Data preparation and preprocessing
In this study, actual production data from the converter in the M steel plant are collected according to the model input and output variables. Then, these data are pretreated by removing incomplete and obviously wrong data and by screening the data that conform to the normal process range as effective data samples; for example, TSC
Prediction model of total blow oxygen volume
The maximum number of iterations is set to 1,000. The number of iterations for convergence is 27. The second method is BPNN. The BPNN model is also established for prediction of total blow oxygen volume. The model is designed as follows: a multi-layer BP neural network is adopted, the transfer function in the hidden layer is the sigmoid tangent function, the transfer function in the output layer is the linear transfer function, and the LM optimization algorithm is used for training the network. Through the analysis of Section 3.1, the input layer consists of 10 nodes representing the 10 factors, and the output layer is composed of just one node representing the predicted total blow oxygen volume. Four BPNN models have been developed separately for prediction by varying the number of nodes in the hidden layer, as can be seen in Table 5. The max epoch is 2,000.
The third method is BPNN-IL. The BPNN-IL model is developed based on the best of the above four BPNN models. For the incremental learning in the model, η and α are determined to be 0.015 and 0.03, respectively, by experiments.
To evaluate the prediction effect, these models are compared on the same test data from 171 heats. The results are shown in Table 5. The correlation coefficient (R) between the predicted values and the actual values reflects the potential of the models in actual application; for a perfect fit of the data, R is equal to 1. For the MLR model, the correlation coefficient is only 0.5362, which shows that the predicted values are not very consistent with the actual values. For the BPNN models, the correlation coefficient shows an obvious improvement. For the BPNN(MLP 10-9-1)-IL model, the correlation coefficient is 0.7214, which indicates that the combination of BPNN and IL can further improve the correlation coefficient.
At the same time, from Table 5, when the predictive errors of total blow oxygen volume are within ±1,000 N·m³, the hit rate of the MLR model is 84.80%, the hit rate of the best BPNN(MLP 10-9-1) model is 87.13%, and the hit rate of the BPNN(MLP 10-9-1)-IL model is 89.47%. When the predictive errors of total blow oxygen volume are within ±800 N·m³, the hit rate of the MLR model is 71.93%, the hit rate of the best BPNN(MLP 10-9-1) model is 80.70%, and the hit rate of the BPNN(MLP 10-9-1)-IL model is 84.21%. This shows that the BPNN achieves higher prediction accuracy than the MLR, and that the introduction of the incremental learning method is also helpful for improving the predictive accuracy of the BPNN model.
For prediction of second blow oxygen volume, the first method is again MLR; the number of iterations for convergence is 19. The second method is BPNN. Similar to the BPNN models for prediction of total blow oxygen volume, the BPNN models for prediction of second blow oxygen volume are also established. Through the analysis of Section 3.1, the input layer consists of seven nodes representing the seven factors, and the output layer is composed of just one node representing the predicted second blow oxygen volume. Five BPNN models have been developed separately for prediction by varying the number of nodes in the hidden layer and are shown in Table 6. The max epoch is 2,000. The third method is BPNN-IL. The BPNN-IL model for prediction of second blow oxygen volume is developed based on the best of the above five BPNN models. For the incremental learning in the model, η and α are determined to be 0.018 and 0.05, respectively, by experiments.
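The evaluation metrics reported in Tables 5 and 6 can be computed as in the minimal sketch below, which assumes the hit rate is simply the fraction of heats whose absolute prediction error falls within the stated tolerance; the function names are illustrative.

```python
import numpy as np

def hit_rate(predicted, actual, tolerance):
    """Fraction of heats whose prediction error is within ±tolerance (e.g. 1000 N·m³)."""
    err = np.abs(np.asarray(predicted, float) - np.asarray(actual, float))
    return float(np.mean(err <= tolerance))

def correlation(predicted, actual):
    """Correlation coefficient R between predicted and actual values (R = 1 is a perfect fit)."""
    return float(np.corrcoef(predicted, actual)[0, 1])
```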
Prediction model of second blow oxygen volume
To evaluate the prediction effect, these models are compared on the same test data from 280 heats. The results are shown in Table 6. For the MLR model, the correlation coefficient between the predicted values and actual values of second blow oxygen volume is 0.8423. For the BPNN models, the correlation coefficient shows an obvious improvement. For the BPNN(MLP 7-5-1)-IL model, the correlation coefficient is 0.9226, which indicates that the combination of BPNN and IL can further improve the potential of the prediction model in actual application.
Meanwhile, from Table 6, when the predictive errors of second blow oxygen volume are within ±500 N·m³, the hit rate of the MLR model is 92.50%, the hit rate of the best BPNN(MLP 7-5-1) model is 95.71%, and the hit rate of the BPNN(MLP 7-5-1)-IL model is 97.86%. When the predictive errors of second blow oxygen volume are within ±300 N·m³, the hit rate of the MLR model is 77.50%, the hit rate of the best BPNN(MLP 7-5-1) model is 83.21%, and the hit rate of the BPNN(MLP 7-5-1)-IL model is 85.71%. It can be seen that the BPNN(MLP 7-5-1)-IL model achieves the best prediction accuracy among these models. So we conclude that the incremental learning method can further improve the prediction accuracy of second blow oxygen volume. As shown in Figure 6, for the BPNN(MLP 7-5-1)-IL model, the predicted values of second blow oxygen volume agree well with the actual values. Here, the weights and biases matrices of the best BPNN(MLP 7-5-1)-IL model for prediction of second blow oxygen volume are shown in Figure 7.
Sensitivity analysis
Sensitivity analysis investigates the influence of the input parameters of a prediction model on the prediction of the output parameters, and it is very important for evaluating the model and studying the robustness of the model prediction. So, in this article, a sensitivity analysis of the above prediction models of total blow oxygen volume and second blow oxygen volume has been carried out. The tested parameters (namely, key input parameters of the prediction models) are changed from their minimum value to their maximum value, while the other input parameters are kept at the average level. The prediction models are then used to predict and observe the change of blow oxygen volume.
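A minimal sketch of this single-factor sweep, assuming a generic `predict` callable and a sample matrix `X` of historical input data (both illustrative names, not the authors' code):

```python
import numpy as np

def single_factor_sensitivity(predict, X, factor_index, n_points=50):
    """Sweep one input factor from its minimum to its maximum while holding the other
    inputs at their average level, and record the predicted blow oxygen volume.
    predict: function mapping an input vector to the model's predicted value
    X:       (n_heats, n_factors) matrix of historical input data."""
    baseline = X.mean(axis=0)                       # other parameters kept at the average level
    sweep = np.linspace(X[:, factor_index].min(), X[:, factor_index].max(), n_points)
    responses = []
    for value in sweep:
        x = baseline.copy()
        x[factor_index] = value
        responses.append(predict(x))
    return sweep, np.array(responses)
```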
Sensitivity analysis of the parameters affecting prediction of total blow oxygen volume
Through the analysis in Section 4.1, it is determined that the main influencing factors of total blow oxygen volume are scrap weight, BOF endpoint temperature, and hot metal [Si] content. Single-factor sensitivity analysis is performed on the prediction model of total blow oxygen volume for these three factors, respectively, as shown in Figure 8. It can be seen from Figure 8a and c that the predicted value of total blow oxygen volume increases with the increase in scrap weight and hot metal [Si] content, respectively. Figure 8b shows that when the BOF endpoint temperature is less than 1,640°C, the predicted value of total blow oxygen volume decreases with the increase in BOF endpoint temperature. When the BOF endpoint temperature is greater than 1,690°C, the predicted value of total blow oxygen volume increases with the increase in BOF endpoint temperature. When the BOF endpoint temperature is between 1,640 and 1,690°C, the predicted value of total blow oxygen volume changes little. Therefore, when the BOF endpoint temperature is controlled in this range (1,640-1,690°C), it is beneficial to reducing cost and energy consumption. The sensitivity laws of the main factors reflected in Figure 8 are consistent with the actual production law, so it can be seen that the prediction of the model is relatively robust. The main influencing factors of second blow oxygen volume are TSC temperature, TSC [C] content, BOF endpoint [C] content, BOF endpoint temperature, and scrap weight. Single-factor sensitivity analysis is performed on the prediction model of second blow oxygen volume for these five factors, respectively, as shown in Figure 9.
Sensitivity analysis of the parameters affecting prediction of second blow oxygen volume
Figure 9a shows that the predicted value of second blow oxygen volume decreases with the increase in TSC temperature and the variation magnitude is large. Figure 9b shows that when TSC [C] content is within 0.3-0.75%, the predicted value of second blow oxygen volume increases with the increase in TSC [C] content and the variation magnitude is large, while in other ranges the predicted value of second blow oxygen volume has little fluctuation.
Figure 9c shows that when BOF endpoint [C] content is less than about 0.05%, the predicted value of second blow oxygen volume decreases with the increase in BOF endpoint [C] content and the change is large, while when BOF endpoint [C] content is more than about 0.05%, the predicted value of second blow oxygen volume changes little. Figure 9d shows that when BOF endpoint temperature is greater than 1,640°C, the predicted value of second blow oxygen volume increases with the increase in BOF endpoint temperature and the change is large, while when BOF endpoint temperature is less than 1,640°C, the predicted value of second blow oxygen volume changes little. Figure 9e shows that the predicted value of second blow oxygen volume increases with the increase in scrap weight. Based on the above analysis and Figure 9, it can be seen that TSC temperature, TSC [C] content, and BOF endpoint temperature have a great influence on the prediction of second blow oxygen volume. At the same time, the sensitivity laws of the main factors reflected in Figure 9 are consistent with the actual production law, so the robustness of the model is good.
Conclusion
Aiming at the dynamic steelmaking process of the BOF with a sub-lance system, the MLR, BPNN and BPNN-IL methods have been proposed to establish the prediction models of total blow oxygen volume and second blow oxygen volume. To validate their prediction effect, comparative experiments based on the same data set were carried out. The results show that, whether for total blow oxygen volume or second blow oxygen volume, the BPNN-IL method could provide the most accurate prediction among these methods, and the introduction of the incremental learning method could further improve the predictive accuracy. For the BPNN-IL method, the hit rate of total blow oxygen volume is 89.47, 87.13 and 84.21%, respectively, when the prediction errors are within ±1,000, ±900 and ±800 N·m³. The hit rate of second blow oxygen volume is 97.86, 95.00 and 85.71%, respectively, when the prediction errors are within ±500, ±400 and ±300 N·m³. The correlation coefficient between the actual value and predicted value of second blow oxygen volume is as high as 0.9226. The experimental results also show that the predicted values of oxygen-blowing volume agree well with the actual values. Furthermore, the influence of the main input parameters on the prediction of the output parameter for the BPNN-IL method has been investigated by sensitivity analysis, and the robustness of the method has been evaluated. The results of the sensitivity analysis show that the BPNN-IL method has good robustness in predicting the total blow oxygen volume and second blow oxygen volume, and could be applied to the actual production process.
Figure 1 :
Figure 1: Blowing process of the BOF steelmaking.
BPNN structure: w_ij, w_jk, b_j, b_k, number of hidden layer nodes, etc.
Figure 3 :
Figure 3: Flow diagram of the control of oxygen-blowing volume in BOF steelmaking process.
Figure 4 shows a comparison of total blow oxygen volume between predicted values and actual values of the best BPNN(MLP 10-9-1)-IL model, and the results indicate that the predicted total blow oxygen volume is close to the actual total blow oxygen volume. Here, the weights and biases matrices of the best BPNN(MLP 10-9-1)-IL model for prediction of total blow oxygen volume are shown in Figure 5.
Figure 4 :
Figure 4: Comparison of total blow oxygen volume between predicted values and actual values of the BPNN(MLP 10-9-1)-IL model.
Figure 5 :
Figure 5: The weights and biases matrices of the best BPNN(MLP 10-9-1)-IL model for prediction of total blow oxygen volume.
Figure 6 :
Figure 6: Comparison of second blow oxygen volume between predicted values and actual values of the BPNN(MLP 7-5-1)-IL model.
Figure 7 :
Figure 7: The weights and biases matrices of the best BPNN(MLP 7-5-1)-IL model for prediction of second blow oxygen volume.
Figure 8 :
Figure 8: (a) Sensitivity analysis between scrap weight and total blow oxygen volume predicted; (b) sensitivity analysis between BOF endpoint temperature and total blow oxygen volume predicted; and (c) sensitivity analysis between hot metal [Si] content and total blow oxygen volume predicted.
Table 1 :
Input and output variables for prediction models
In the same way, the influencing factors with a correlation coefficient greater than 0.1 or less than −0.1 are also selected as the main influencing factors of second blow oxygen volume. So the main influencing factors of second blow oxygen volume are TSC temperature, TSC [C] content, BOF endpoint [C] content, BOF endpoint temperature, and scrap weight. For the selection of input variables of the prediction models, the main factors affecting the output variables must be reserved, while other factors can be retained selectively.
The MLR model is first established for prediction of total blow oxygen volume. It is implemented in the SPSS software. The optimization algorithm is the Levenberg-Marquardt (LM) method.
The data of the 1,471 heats mentioned before are divided into a training set and a test set. The data from 1,300 heats are used as the training set, and the data from the other 171 heats are used as the test set. Based on the same training set, three methods are used to construct the prediction model of total blow oxygen volume, respectively. The first method is multiple linear regression (MLR).
Table 2 :
Correlation analysis of the process variables for prediction model 1
Table 3 :
Correlation analysis of the process variables for prediction model 2
Table 4 :
Descriptive statistics of model variables
Table 5 :
Hit rate of predicted total blow oxygen volume with different models for 171 heats
Table 6 :
Hit rate of predicted second blow oxygen volume with different models for 280 heats | 6,955.2 | 2022-01-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Computer Science"
] |
Coil combination of multichannel MRSI data at 7 T: MUSICAL
The goal of this study was to evaluate a new method of combining multi-channel 1H MRSI data by direct use of a matching imaging scan as a reference, rather than computing sensitivity maps. Seven healthy volunteers were measured on a 7-T MR scanner using a head coil with a 32-channel array coil for receive-only and a volume coil for receive/transmit. The accuracy of prediction of the phase of the 1H MRSI data with a fast imaging pre-scan was investigated with the volume coil. The array coil 1H MRSI data were combined using matching imaging data as coil combination weights. The signal-to-noise ratio (SNR), spectral quality, metabolic map quality and Cramér–Rao lower bounds were then compared with the data obtained by two standard methods, i.e. using sensitivity maps and the first free induction decay (FID) data point. Additional noise decorrelation was performed to further optimize the SNR gain. The new combination method improved significantly the SNR (+29%), overall spectral quality and visual appearance of metabolic maps, and lowered the Cramér–Rao lower bounds (−34%), compared with the combination method based on the first FID data point. The results were similar to those obtained by the combination method using sensitivity maps, but the new method increased the SNR slightly (+1.7%), decreased the algorithm complexity, required no reference coil and pre-phased all spectra correctly prior to spectral processing. Noise decorrelation further increased the SNR by 13%. The proposed method is a fast, robust and simple way to improve the coil combination in 1H MRSI of the human brain at 7 T, and could be extended to other 1H MRSI techniques. © 2013 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.
INTRODUCTION
1 H MRSI enables the noninvasive investigation of local biochemical changes in healthy and pathologic brain tissue. However, as a result of the low metabolite concentrations, the signal-to-noise ratio (SNR) of in vivo MRS is intrinsically low. The SNR can be increased by the use of array coils (ACs), higher magnetic field strengths (e.g. 7 T) and shorter TEs.
With an efficient combination of the individual signals obtained from each channel, ACs provide a two to three times higher SNR than volume coils (VCs) (1)(2)(3), and enable accelerated data acquisition by the use of parallel imaging (4,5). However, AC data are challenging to combine whenever accurate phase information plays an important role, as in 1 H MRSI (6) or phase imaging (7). The general coil combination uses complex weights w n to phase the spectra coherently and to weight spectra with higher SNR more heavily (8). Additional SNR can be gained by correcting for the noise correlation between the channels (1). The available methods for coil combination in 1 H MRSI can be grouped into intrinsic and extrinsic reference methods.
Intrinsic reference methods estimate the complex weights w n from the acquisition under consideration itself, e.g. from the first free induction decay (FID) point (1stFIDpoint method) (9,10) by minimizing the difference between the magnitude and absorption spectrum (11), or based on the program LCModel (3).
Extrinsic reference methods, in contrast, use an additional reference scan to determine the combination weights w n . Two examples of such methods are the use of sensitivity maps (Sensmap method) (2) or an additional 1 H MRSI scan without water suppression (12).
At higher field strength (e.g. 7 T), and for a larger number of channels, coil combination becomes increasingly challenging (7), but the potential increase in SNR and spectral resolution are significant. Recently, several approaches for 1 H MRSI of the brain at 7 T have been proposed that promise to overcome the limitations caused by the high specific absorption rate, chemical shift displacement errors, shortened T 2 times and increased B 0 / B 1 inhomogeneities (13)(14)(15)(16)(17)(18)(19)(20)(21). Among these, the direct acquisition of the FID is one promising approach (13)(14)(15).
The established method for the 1 H MRSI coil combination, using the first FID point, is problematic at 7 T and can lead to an incoherent data combination as a result of higher phase variations at higher magnetic fields, resulting in degraded spectral quality. The coil combination based on sensitivity maps, however, performs very well and allows the object-intrinsic phase component (e.g. iron deposition) to be preserved, but this phase component is irrelevant in 1 H MRSI. In contrast, 1 H MRSI requires the efficient elimination of all phase components that would interfere with an accurate quantification, including B0-induced, coil-specific and, also, anatomical phase variations. Only the phase component inherent to ideal FID oscillations and k-space encoding needs to be preserved. In addition, sensitivity maps cannot be estimated easily if no reference coil (i.e. body coil or VC) is available. With the advent of multi-transmit coils, particularly at higher field strength, body coils and VCs are frequently unavailable (22). Therefore, we evaluated a new method for combining AC 1 H MRSI data, which is based on the rapid acquisition of matching imaging calibration data within a few seconds. These imaging data are used directly as weights for coil combination, enabling a pre-phasing of spectra without any reference coil.
THEORY
The general signal combination, as described by Roemer et al. (23), can be written as Equation [1], where S_Comb(r, t) is the combined signal at position r and time t, λ(r) is the scaling factor, N is the number of channels, S_n(r, t) is the signal of channel n, w_m(r) is the complex weighting of channel m, and ψ_nm is the noise correlation between channels n and m. The latter can be computed from noise samples χ of size N × k, where k is the number of sampled points. For computational simplicity and clarity, a correlation-free AC is assumed in this theory part, i.e. ψ_nm = δ_nm. The scaling factor λ(r) can be defined as in Equation [2]. The factor λ, however, does not affect the SNR of the combined signal, but only rescales the combined data. This factor is crucial in order to eliminate the intensity profile of the AC if absolute metabolic concentrations are examined using external referencing. Yet, if only metabolic ratios are considered or absolute quantification is performed based on internal referencing, the scaling factor λ is of no importance.
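Equations [1] and [2] are not reproduced in this extract. The sketch below illustrates a Roemer-style weighted combination with noise decorrelation consistent with the symbol definitions above; it is a sketch under assumptions (array shapes, and the particular λ normalization shown, which is one common choice), not the published implementation.

```python
import numpy as np

def noise_correlation(noise):
    """Noise correlation matrix from noise-only samples chi of shape (N, k)."""
    return noise @ noise.conj().T / noise.shape[1]

def combine_voxel(S, w, psi):
    """Combine one voxel's multi-channel FID.
    S:   (N, T) complex FIDs of the N channels
    w:   (N,)   complex combination weights for this voxel
    psi: (N, N) noise correlation matrix."""
    psi_inv = np.linalg.inv(psi)
    combined = w.conj() @ psi_inv @ S                            # weighted, noise-decorrelated sum over channels
    lam = 1.0 / np.sqrt(np.abs(w.conj() @ psi_inv @ w))          # scaling factor (one common choice)
    return lam * combined
```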
The following equations describe the case when using an FID-based 1 H MRSI sequence and a gradient echo (GRE) sequence with matching imaging parameters to combine the multichannel data. The proposed method is called 'Multichannel Spectroscopic Data Combined by Matching Image Calibration Data' (MUSICAL). MUSICAL can also be used with other sequences; the theoretical formulation might differ slightly in such cases. Let S_n(r, t) be the spectroscopic data and I_n(r) the imaging data of channel n, respectively, described by Equations [3] and [4].
The first factors in both equations represent the channel-dependent phase, and the second factors represent the phases caused by different kinds of B0 inhomogeneity for an acquisition delay T_AD or TE. The third factors describe the magnitude reception profiles of coil n, and the final factors summarize channel-independent influences, such as relaxation, proton density, B1+ effects and water suppression. The different coil combination methods mainly differ by the choice of the weighting factor w_n. Three methods are described here: (i) the 1stFIDpoint method; (ii) the MUSICAL method; and (iii) the Sensmap method.
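Equations [3] and [4] are not reproduced in this extract. A plausible reconstruction, following the factor-by-factor description above (the exact notation is an assumption), is:

S_n(\vec r, t) = e^{i\varphi_S^{(n)}(\vec r)} \, e^{i\varphi_{B_0}(\vec r,\, t)} \, B_n^-(\vec r) \, S_0(\vec r, t) \qquad [3]

I_n(\vec r) = e^{i\varphi_I^{(n)}(\vec r)} \, e^{i\varphi_{B_0}(\vec r,\, \mathrm{TE})} \, B_n^-(\vec r) \, I_0(\vec r) \qquad [4]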
1stFIDpoint method
For the 1stFIDpoint method, the complex weights are chosen according to Equation [5]. Combining Equations [1], [3] and [5] yields Equation [6]. If the first FID point reflects the phase of the water resonance well, all channels are perfectly phased to zero before coil combination [i.e. all zero-order phase terms are eliminated and S_0*(r, T_AD) cancels the zero-order phase of S_0(r, t)]. However, this may not always be the case. The extent to which this holds is considered in the Discussion section.
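Equation [5] is not shown in this extract; from the description, the weights are presumably the complex conjugate of the first FID point of each channel:

w_n(\vec r) = S_n^{*}(\vec r,\, T_{AD}) \qquad [5]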
MUSICAL method
For the MUSICAL method, the weights are defined by Equation [7], leading to Equation [8] when combining Equations [1], [3], [4] and [7]. The channel-dependent phase φ_I(n)(r) and magnitude B_n^-(r) of the imaging data may differ slightly from the corresponding terms of the MRSI data. This is no contradiction, when considering that φ_S(n) can be influenced by the fat signal, which usually has a different phase from the metabolite signals. If an imaging sequence similar to the MRSI sequence is chosen [I_0(r) ≈ S_0(r, T_AD), except for a lower signal of the MRSI data as a result of water suppression] and TE = T_AD, the phase of the imaging data cancels the phase of the MRSI data, resulting in a combined signal that is phased to zero. No further phasing during spectral processing is then required.
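Equation [7] is not reproduced in this extract; a plausible form, with the matching imaging data used directly as conjugate weights, is:

w_n(\vec r) = I_n^{*}(\vec r) \qquad [7]

Substituting this into Equation [1] then gives the combined, zero-phased signal of Equation [8].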
Sensmap method
Sensitivity maps can be computed by dividing the data of each channel by the data of a reference coil. The weights are defined as in Equation [9], which leads to Equation [10] when combining Equations [1], [8] and [9]. It is obvious from Equation [10] that the coil combination itself is equivalent to that of the MUSICAL method, only with a different scaling, leading to a phasing of the data based on the reference coil after coil combination. This method is ideal for combining imaging data, as any AC-specific phase information is removed, whereas the phase of the reference coil is introduced to the data and phase changes caused by anatomical B0 inhomogeneities are preserved. This is clearly not helpful in MRSI. Quite the contrary, it introduces an additional source of error, as the MRSI signal has to be phased to zero later, before spectral processing.
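Equation [9] is likewise not shown; given the description, the Sensmap weights presumably take the form of each channel image divided by the reference-coil image (conjugated for the combination):

w_n(\vec r) = \left(\frac{I_n(\vec r)}{I_{\mathrm{ref}}(\vec r)}\right)^{*} \qquad [9]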
Subjects and hardware
Seven healthy volunteers were measured on a 7-T whole-body MR scanner (Magnetom, Siemens Healthcare, Erlangen, Germany) with a 32-channel AC for signal reception and a VC for signal reception/transmission (Nova Medical, Wilmington, MA, USA). One measurement was excluded as a result of motion artifacts. The remaining six datasets were processed further. Institutional Review Board approval was obtained. Written informed consent was obtained from all volunteers.
Data acquisition
A three-dimensional T 1 -weighted magnetization-prepared rapid gradient echo (MPRAGE) sequence was acquired as an anatomical reference and for the creation of a brain mask. The sequence parameters were as follows: TE = 3.41 ms; TR = 3 s; TI = 1.7 s; GRAPPA factor of 3; matrix size, 256 × 246 × 160; nominal voxel size, 0.90 × 0.93 × 1 mm 3 . A B 1 + map was acquired with a presaturation turboFLASH-based B 1 mapping (24,25) sequence to calibrate the optimal pulse reference amplitude for the 1 H MRSI slice under investigation.
The 1 H MRSI data were acquired using a two-dimensional FID-based sequence (13) with 64 × 64 voxels (elliptically weighted k space acquired in a pseudo-spiral pattern), field of view (FoV) of 220 × 220 mm 2 , nominal voxel size of 3.4 × 3.4 × 12 mm 3 , T AD = 1.3 ms, TR = 600 ms, spectral bandwidth of 6 kHz, 2048 complex FID points and weak WET water suppression (26). The acquisition time was 30 min. One volunteer was measured with and without water suppression. The non-water-suppressed data were acquired with the same parameters, except for water suppression and TR = 370 ms, leading to an acquisition time of 18.5 min.
A pair of GRE images with imaging parameters matching those of the 1 H MRSI sequence, i.e. same matrix size, FoV, slice thickness, and pulse shape and duration, was acquired as a calibration scan for the coil combination. One of the GRE images was acquired with reversed imaging gradients to allow for the correction of minor phases introduced by the readout gradient. Other sequence parameters were TE = 1.3 ms, readout bandwidth of 1950 Hz/pixel and TR = 4 ms, resulting in a measurement time of 0.6 s. The TE of the GRE images matched the T AD of the 1 H MRSI sequence to ensure the same phase evolution [see Equation [8]].
Volunteer 1 was measured with the VC to test whether the coil combination methods based on LCModel phase estimations are reliable for low-quality spectra. Volunteers 2-4 were measured with both AC and VC in the same session, whereas volunteer 5 was measured with only AC because of the long measurement time. Volunteer 6 was measured with a non-water-suppressed MRSI sequence in addition to the water-suppressed sequence to test whether the proposed coil combination method leads to comparable results to the ideal method. The overall measurement time was 1.5 h when measuring with both coils and 1 h when measuring with one coil.
Pre-processing
Brain masks were obtained using the brain extraction tool BET2 (http://www.fmrib.ox.ac.uk/analysis/research/bet/) employing T 1 -weighted images. Only 1 H MRSI voxels within this brain mask were processed.
Gradient delays during the acquisition of the GRE images caused a very minor linear phase gradient in the frequency-encoding direction. This phase gradient was eliminated by adding the complex data of normal GRE images to such images with reversed readout gradients. A FoV-dependent phase was added and elliptical filtering was performed to match the phase and point spread function of the 1 H MRSI data, respectively.
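The gradient-delay correction described above can be illustrated as follows; summing the normal and reversed-readout images cancels the opposing linear phase ramps, while the FoV-dependent phase term is indicated only schematically. Function and variable names are assumptions.

```python
import numpy as np

def correct_gradient_delay(gre_normal, gre_reversed):
    """Cancel the minor linear phase gradient along the frequency-encoding
    direction by summing images acquired with normal and reversed readout."""
    return 0.5 * (gre_normal + gre_reversed)

def apply_fov_phase(img, phase_per_pixel, axis=-1):
    """Add a FoV-dependent linear phase (schematic; the actual value depends
    on the relative FoV/grid conventions of the GRE and MRSI acquisitions)."""
    n = img.shape[axis]
    ramp = np.exp(1j * phase_per_pixel * (np.arange(n) - n // 2))
    shape = [1] * img.ndim
    shape[axis] = n
    return img * ramp.reshape(shape)
```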
The 1 H MRSI data of all channels were combined using the GRE images [MUSICAL method, see Equation [8]], the first FID point [1stFIDpoint method, see Equation [6]], the first FID point of the non-water-suppressed MRSI data and the sensitivity maps [Sensmap method, see Equation [10]] as complex weights. The scaling factor λ was computed according to Equation [2]. The 1 H MRSI data were Hamming filtered after coil combination in all approaches.
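A rough sketch of the Hamming filtering applied after coil combination is given below; whether the window is applied to measured k-space data directly or, as here, to Fourier-transformed image-space data is an implementation detail assumed for illustration.

```python
import numpy as np

def hamming_filter_kspace(combined):
    """Apply a 2D Hamming window in k-space to combined MRSI data with
    spatial dimensions first and the FID dimension last: (nx, ny, n_fid).
    A minimal sketch; the original pipeline may filter k-space data directly."""
    nx, ny = combined.shape[:2]
    win = np.outer(np.hamming(nx), np.hamming(ny))[..., None]
    ksp = np.fft.fftshift(np.fft.fft2(combined, axes=(0, 1)), axes=(0, 1))
    ksp *= win
    return np.fft.ifft2(np.fft.ifftshift(ksp, axes=(0, 1)), axes=(0, 1))
```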
Post-processing
After coil combination, the 1 H MRSI data were fitted with LCModel (http://s-provencher.com/pages/lcmodel.shtml). The basis set was simulated using NMR scope from the jMRUI package (http://www.mrui.uab.es/mrui/mrui_Overview.shtml). The spectra have a first-order phase error caused by the acquisition delay. This error was taken into account by introducing the same error to the basis set by truncating the appropriate number of points at the beginning of the basis set FIDs (13,14). LCModel was not restricted in computing the zero-order phase when processing data with the Sensmap method or VC data, but was restricted to 0 ± 20° for the 1stFIDpoint and MUSICAL methods, leading to a faster spectral fitting. After LCModel fitting, the results were processed using MATLAB (MathWorks, Natick, MA, USA), and metabolic, phase and Cramér-Rao lower bound (CRLB) maps were created. The SNR, defined as the signal of N-acetylaspartate (NAA) divided by twice the standard deviation of the noise in the frequency domain, was also calculated in MATLAB.
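The SNR definition quoted above can be sketched as follows; the ppm windows for the NAA peak and the noise region, the 7-T proton frequency and the sign convention of the ppm axis are illustrative assumptions, not the exact MATLAB evaluation.

```python
import numpy as np

def snr_naa(fid, dwell_s, center_ppm=4.7, larmor_mhz=297.2,
            naa_window=(1.9, 2.1), noise_window=(-2.0, -1.0)):
    """SNR = NAA signal / (2 * SD of frequency-domain noise).
    Window limits and the 7-T proton frequency are illustrative assumptions."""
    n = fid.size
    spec = np.fft.fftshift(np.fft.fft(fid))
    freq_hz = np.fft.fftshift(np.fft.fftfreq(n, d=dwell_s))
    ppm = center_ppm - freq_hz / larmor_mhz          # sign convention assumed
    naa = (ppm >= naa_window[0]) & (ppm <= naa_window[1])
    noise = (ppm >= noise_window[0]) & (ppm <= noise_window[1])
    return np.abs(spec[naa]).max() / (2.0 * np.real(spec[noise]).std())
```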
Feasibility of the MUSICAL method
Before implementing the new coil combination algorithm, the consistency between the phases obtained from images and from 1 H MRSI data was evaluated from the scans of four volunteers, which were acquired with VC. This was necessary to evaluate whether the imaging data can provide an accurate prediction of the 1 H MRSI phase, and thus provide good coil combination weights. The phase of the 1 H MRSI data was estimated in two ways: using LCModel and the first complex FID point. Both 1 H MRSI phase maps were compared voxel-wise with the imaging phase maps.
The feasibility of the MUSICAL method in conjunction with the FID-based approach was further tested by computing the number of voxels with CRLBs < 20% for the low-signal metabolites glutathione, N-acetylaspartyl glutamate (NAAG), aspartate, γ-aminobutyric acid (GABA) and taurine (Tau). A sample spectrum combined with the MUSICAL method and with a corrected first-order phase is provided in comparison with the same spectrum without the correction.
Phase estimation by LCModel
Forty voxels with high spectral quality (SNR > 17 and linewidth below 12 Hz) were selected from one VC dataset. Based on these spectra, different SNRs in the range 1-20 and linewidths in the range 8-37 Hz were simulated by adding white noise or by apodizing the FID and adding white noise to compensate for the increased SNR, respectively. The resulting datasets were processed with LCModel without any restriction in its phase computation. The phase deviation from the unmodified spectra, as a function of the simulated SNR and linewidth, was evaluated to show the variability of estimating coil combination weights with LCModel, as proposed by Maril and Lenkinski (3).
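The degradation procedure can be sketched as below: complex white noise lowers the SNR, and an exponential apodization broadens the linewidth; the exact noise scaling and the mapping from apodization to linewidth used in the study are not reproduced, so these helpers are only indicative.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(fid, noise_sd):
    """Add complex white Gaussian noise with the given per-point SD."""
    noise = noise_sd * (rng.standard_normal(fid.size)
                        + 1j * rng.standard_normal(fid.size))
    return fid + noise

def broaden(fid, extra_lw_hz, dwell_s):
    """Apodize with an exponential to add (approximately) a Lorentzian
    broadening of extra_lw_hz to the linewidth."""
    t = np.arange(fid.size) * dwell_s
    return fid * np.exp(-np.pi * extra_lw_hz * t)
```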
Comparison with other methods
The performance of the MUSICAL method was compared with that of the two standard methods, i.e. the 1stFIDpoint method and the Sensmap method, for the datasets of five subjects, and with the first FID point of the non-water-suppressed MRSI data in the case of volunteer 6. The appearance of problematic spectra near the skull, metabolic ratio maps, CRLB values and SNR were quantitatively and qualitatively compared between the three methods.
Noise decorrelation
The SNR improvement when performing noise decorrelation between the AC channels was evaluated, as suggested by Wright and Wald (1). The noise correlation matrix ψ nm was computed from the last 200 FID points of voxels outside the head, and taken into account during coil combination with the MUSICAL method, according to Equation [1]. For volunteer 5, additional noise-only data were measured with an FID sequence without any localization or radiofrequency pulses to test whether voxels outside the brain are a reliable source of noise data. The SNR was compared with and without noise decorrelation for volunteers 2-6, and for volunteer 5 also between the two different sources of noise.
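A minimal sketch of the noise decorrelation step is given below, assuming the noise covariance ψ_nm is estimated from noise-only samples (e.g. the last 200 FID points of voxels outside the head) and then inserted into the combination via ψ⁻¹; the formulation and the variable names are assumptions rather than the exact implementation.

```python
import numpy as np

def noise_covariance(noise_samples):
    """Estimate psi_nm from noise-only data of shape (n_ch, n_samples),
    e.g. the last 200 FID points of voxels outside the head."""
    x = noise_samples - noise_samples.mean(axis=1, keepdims=True)
    return (x @ x.conj().T) / (x.shape[1] - 1)

def decorrelated_combine(channel_fids, weights, psi):
    """Combine one voxel's channel FIDs (n_ch, n_fid) with (already conjugated)
    complex weights (n_ch,), taking the noise covariance psi into account."""
    psi_inv = np.linalg.inv(psi)
    return weights @ (psi_inv @ channel_fids)
```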
Feasibility of the MUSICAL method
In Fig. 1, the phase maps computed using the first FID point [FID(1)] (Fig. 1a), the whole FID by LCModel [FID(all)] (Fig. 1b), the GRE data (Fig. 1c) and the subtraction phase maps, FID(all) − FID(1) (Fig. 1d) and FID(all) − GRE (Fig. 1e), are shown for one volunteer. The phase differences are listed in Table 1 for volunteers 1-4. The phase of the GRE data matched that of the 1 H MRSI data quite well after correction for gradient delays and FoV-dependent phase offsets (see Fig. 1 and Table 1). These data also indicate that the FID(all) and GRE phases agreed better than the FID(all) and FID(1) phases, as the standard deviations of the latter subtraction maps were higher (p < 0.05 with paired t-test using the standard deviations of volunteers 1-4). If the phase computed by LCModel is considered to be correct, which is an acceptable assumption for VC data for which the SNR is reasonable in the whole brain, the GRE phases will be better estimates than those computed by the first FID point.
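The voxel-wise phase comparison can be illustrated by a small helper that wraps phase differences into (−π, π] before computing their standard deviation, so that 2π ambiguities do not inflate the spread; this is a generic sketch, not the exact evaluation behind Table 1.

```python
import numpy as np

def phase_difference_sd(phase_a, phase_b, mask):
    """SD of the wrapped phase difference between two phase maps (radians),
    evaluated only inside a brain mask."""
    diff = np.angle(np.exp(1j * (phase_a - phase_b)))   # wrap to (-pi, pi]
    return np.std(diff[mask])
```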
The percentages of brain voxels with CRLBs < 20% were 93.0%, 84.3%, 32.7%, 78.6% and 79.9% for the metabolites glutathione, NAAG, aspartate, GABA and Tau, respectively, when processed with MUSICAL and pooled over all volunteers. A first- and zero-order phase-corrected high-quality spectrum processed with the MUSICAL method is shown in Fig. 2.
Phase estimation by LCModel
The dependence of the phase estimation error on the simulated SNR and linewidth is shown in Fig. 3. The phase estimation varied strongly with the SNR and linewidth, although Gaussian noise and fast signal decay cannot influence the phase. This suggests that the LCModel phase estimation is not reliable for low SNRs or broad linewidths.
Comparison of coil combination methods
In Fig. 4, three spectra at the border of the brain, where coil combination was most problematic, are shown for the three compared coil combination methods and VC as a reference. The spectra that were combined using the 1stFIDpoint method showed more artifacts, altered SNR and differing peak ratios compared with the other methods and with the VC results, whereas the spectra resulting from the MUSICAL and Sensmap methods were more similar to the VC results. As expected, the MUSICAL and Sensmap spectra looked very similar.
In Fig. 5, the metabolic ratio maps of total creatine (tCr) to total NAA (tNAA) [tCr/tNAA], and of total choline (tCho) to tNAA [tCho/tNAA], are shown for one representative volunteer and all three coil combination methods and the VC measurement. This figure shows that, when using the 1stFIDpoint method, LCModel could not fit some of the brain regions, implying a suboptimal signal combination. In contrast, CRLBs > 20% were less numerous using the MUSICAL (9.2% of all CRLB values) or Sensmap (9.0% of all CRLB values) methods, compared with the 1stFIDpoint method (17.6% of all CRLB values), for volunteers 2-6, when only the following metabolites were considered: GABA, myo-inositol (mI), Tau, tCho, tNAA, tCr, and glutamine and glutamate (Glx).
In Fig. 6, CRLB maps of the brain metabolites GABA and Tau are shown for one volunteer and for the three coil combination methods. This figure shows, again, the equivalence of the MUSICAL and Sensmap methods, but illustrates that the 1stFIDpoint method leads to higher uncertainty in the fitting process. Table 2 shows the CRLB values of the brain metabolites GABA, mI, Tau, tCho, tNAA, tCr and Glx for all three coil combination methods. The CRLB values of the MUSICAL method were lower by 33.3% (p < 0.05 with paired t-test using the average CRLB values of all voxels and all aforementioned metabolites of volunteers 2-6), on average, than those of the 1stFIDpoint method, but were similar to those of the Sensmap method (p > 0.6 with paired t-test using the averaged CRLB values of volunteers 2-6). When using the additional non-water-suppressed MRSI data, the average CRLB values decreased by 1.5% for volunteer 6 in comparison with the MUSICAL method.
In Table 3, the mean and standard deviation of the SNR ratios between the three methods, evaluated within the brain, are listed for all AC datasets. These results demonstrate that the MUSICAL method not only performed better than the 1stFIDpoint method in the visual evaluation, but also led to a higher SNR, on average by about 29.4% (p < 0.001 with paired t-test using the means of volunteers 2-6). The MUSICAL and Sensmap methods performed almost equally well, with slightly higher SNR values (1.7%) for the MUSICAL method (p < 0.01 with paired t-test using the means of volunteers 2-6). The SNR ratio between the MUSICAL method and that using the first FID point of the additional non-water-suppressed MRSI scan was SNR_MUS/SNR_NonSupp = 1.02 for volunteer 6.
Noise decorrelation
The SNR ratio with and without noise decorrelation is shown in Table 4 for volunteers 2-6. The average SNR increase for these volunteers was 13.1 ± 2.0% (p < 0.001 with paired t-test using the means of volunteers 2-6). The use of additionally measured noise data rather than the end of FIDs outside the head did not change the resulting SNR of volunteer 5. Taking the noise correlation between the AC channels into account led to a significantly increased SNR, independent of which data were used to compute the noise correlation matrix ψ_nm.
DISCUSSION
In this work, we have demonstrated a new method for combining multi-channel 1 H MRSI data at 7 T. We provide evidence that the new method leads to better results than the most commonly used method, i.e. the 1stFIDpoint method, and to results similar to those of the Sensmap method. The proposed method increases the SNR, is computationally less demanding than the latter, needs no reference coil and results in phased spectra, making phase corrections during post-processing obsolete.
Figure 4. Comparison of spectra at the border of the brain for the three coil combination methods and the VC reference. At the top of each subimage, the residuum, i.e. the difference between the measured spectrum (black line) and its fit (red line), is shown. Spectrum #1 gives an example in which the 1stFIDpoint combination led to severe artifacts, whereas spectrum #2 provides an example in which the SNR was degraded strongly, but otherwise no artifacts occurred. Spectrum #3 shows that metabolic ratios might be altered [e.g. the glutamine and glutamate (Glx) peak at 2.3 ppm in comparison with the N-acetylaspartate (NAA) peak at 2.0 ppm] when using the 1stFIDpoint method, even if the spectrum looks reasonable. The spectra are not first-order phased as a result of the free induction decay (FID)-based sequence and the fitting approach used.
Figure 5. Comparison of metabolic ratio maps of the different coil combination methods and the volume coil (VC) measurement with a T 1 -weighted image of the same slice (T1w). Color-coded metabolic ratio maps were overlaid on T 1 -weighted images. There were no gross differences between the Multichannel Spectroscopic Data Combined by Matching Image Calibration Data (MUSICAL) and the Sensmap methods. The 1stFIDpoint method was problematic, especially at the border of the brain (colorless and purple voxels). All metabolic maps were interpolated from 64 × 64 to 128 × 128. tCho, total choline; tCr, total creatine; tNAA, total N-acetylaspartate.
Figure 6. Comparison of Cramér-Rao lower bound (CRLB) maps of the metabolites γ-aminobutyric acid (GABA) and taurine (Tau) for the different coil combination methods for one volunteer. Color-coded CRLB maps were overlaid on anatomical T 1 -weighted images. Colorless voxels had CRLB values higher than 60 or LCModel could not fit these voxels. The 1stFIDpoint method showed more regions with higher CRLB values and more voxels with CRLB > 60.
Feasibility of the MUSICAL method
Imaging data with parameters matching those of the 1 H MRSI data provided good estimates of the 1 H MRSI phases, demonstrating the feasibility of the MUSICAL method. The first-order phase-corrected spectrum and the evaluation of low-signal metabolites showed that the MUSICAL method, in combination with a FID-based sequence at 7 T, can provide excellent spectral quality.
Phase estimation by LCModel
Our data suggest that phase estimation by LCModel is unreliable for low spectral quality. Therefore, coil combination using LCModel phase estimates (3) is suboptimal for large coil arrays and in regions distant from individual elements (e.g. the center of the brain). This can result in the incoherent summing of spectra, leading to degraded spectral quality and SNR.
Comparison of coil combination methods
The 1stFIDpoint method is the most commonly used intrinsic reference method for MRSI, and is implemented on most scanners. It has the advantages of being fully automatic and computationally very simple, and results in reasonably well prephased combined spectra. However, in our study, its performance was worse than that of the MUSICAL method when using a FID-based sequence at 7 T. The reasons for this are probably related to water suppression, lipid contamination and distorted water peaks. If water suppression is good, the magnitude of the first FID point is substantially reduced, thus increasing the uncertainty in estimating the coil weights w_n, as shown by Dong and Peterson (10). In addition, close to the skull, the lipid contamination for different channels is affected by the individual coil channel sensitivity, leading to a local difference in the residual water-to-fat ratio for each channel. Fat and water resonate at different frequencies; hence, different phases are detected. The first FID point reflects a mixture of these phases. Thus, the 1stFIDpoint method can fail to sum spectra from lipid-contaminated voxels coherently. Moreover, if the residual water peak is distorted, this can cause additional phase problems when using the first FID point. In contrast, the water signals in the reference data of the MUSICAL and Sensmap methods are not suppressed, which is why contamination and other artifacts have little impact on the phases obtained.
Dong and Peterson (10) extended the 1stFIDpoint method by acquiring the 1 H MRSI data without water suppression. This solves all the above-mentioned problems, but introduces problems with sideband artifacts at short TEs. Thus, Dong and Peterson (10) recommend the use of their method only at long TEs, which is unfavorable at 7 T.
Intrinsic reference methods performed in the spectral domain have been proposed by Prock et al. (11) and Maril and Lenkinski (3). Prock et al. (11) estimated the phase of the weights by minimizing the difference between the real and the magnitude spectrum within a specified spectral range near a major metabolite resonance. Maril and Lenkinski (3) estimated the phase of the weights using LCModel.
Table 2. Means and standard deviations (SD) of the Cramér-Rao lower bounds (CRLBs) of important brain metabolites for all three coil combination methods. All voxels within the brains of volunteers 2-6 were used to compute the means and SD. The means of the 1stFIDpoint method were substantially higher than those of the other two methods.
The Sensmap method is well established for combining imaging data (1,23), but is rarely used in MRSI coil combination. With this method, meaningful object-intrinsic phase information is preserved, the weights used are very insensitive to noise and the sensitivity maps can also be used for sensitivity encoding (SENSE)-based parallel imaging. However, the preservation of phase information is not necessary in MRSI. Indeed, in MRSI, any object/coil-dependent phase component that may bias accurate quantification should be removed. The Sensmap method introduces the phase of the reference coil, which is an additional error source for MRSI quantification. In addition, the necessary reference coil data cannot be acquired and can only be estimated if no VC or body coil is available (27).
A different extrinsic reference method determines the weights w n from an additional similar MRS(I) acquisition without water suppression (12,16). This method has no severe limitations other than the prolonged measurement time. The results in this study suggest no benefit of this method in comparison with the MUSICAL method.
Noise correlation
Most previous studies on coil combination have ignored the effects of noise correlation (2,8,9,23). Other studies have reported very diverse SNR gains when performing noise decorrelation, ranging from 0.5% (28) to 40% (1), and up to 70% (29). In our study, we achieved an SNR increase of about 13%, which is slightly higher than that predicted by Roemer et al. (23), but much lower than the results of Wright and Wald (1) and Qian et al. (29). The most likely explanation for this large variation in SNR gain is the substantial difference in coil design. Although the gain achieved is not huge, our results underline the importance of noise decorrelation to optimize the SNR gain, particularly as it is easy to implement and can aid in the improvement of SNR-problematic MRSI scans.
Performance at 7 T and other MRSI sequences
The MUSICAL method has been shown to perform well at 7 T and with FID-based sequences, which is most challenging as the phase is spatially more variable at higher field strengths, and the severe first-order phase poses additional difficulties. As a consequence, more care needs to be taken to avoid incoherent signal combination.
Our coil combination method could take 1 H MRSI at 7 T further towards clinical practice, as it improves the SNR per unit time. By trading off the extra SNR against the sequence duration, e.g. with parallel imaging, the measurement time can be reduced significantly (4,5). Parallel imaging has been shown to perform even better at higher field strengths (30). Moreover, as two spatial dimensions are available for acceleration, high reduction factors can be expected.
In principle, the MUSICAL method can be extended to any type of MRSI sequence. For standard phase-encoded MRSI sequences, a matching imaging sequence can be achieved by omitting spectral acquisition and replacing one phase-encoding direction with frequency encoding. In fast MRSI sequences, such as spiral sampling, a faster matching imaging sequence can be achieved by omitting spectral data acquisition. The TR can be reduced to save time. It is important that the matching imaging sequence mimics the first FID point of the MRSI data as closely as possible, except for water suppression, with the same point spread function and the same phase evolution. The pulses of the MRSI and reference sequence should be the same or at least of a similar type and duration. A spin echo MRI sequence with the same TE and the shortest possible TR can be used as a reference sequence for a conventional spin echo MRSI sequence.
Limitations
One disadvantage of the proposed coil combination method, MUSICAL, is that weighting with the imaging data introduces an additional spatial weighting to the 1 H MRSI data. This can be corrected for by defining the scaling factor λ so as to obtain the uniform noise weighting described in ref. (8). Yet, when only metabolic ratio maps are of interest, the computation of signal ratios cancels out all common factors introduced to both metabolite maps, including this weighting.
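As a sketch of this correction, a voxel-wise scaling can be chosen so that the combined noise level becomes spatially uniform, e.g. by normalizing with the root-sum-of-squares of the weights (or with w^H ψ⁻¹ w when noise decorrelation is used); this follows the general recipe of ref. (8) only in spirit and is not the authors' definition of λ in Equation [2].

```python
import numpy as np

def uniform_noise_scaling(weights, psi=None):
    """Return a per-voxel scaling that makes the combined noise uniform.
    weights: (n_ch, nx, ny) complex combination weights; psi: optional
    (n_ch, n_ch) noise covariance matrix."""
    if psi is None:
        power = np.sum(np.abs(weights) ** 2, axis=0)
    else:
        psi_inv = np.linalg.inv(psi)
        power = np.einsum('ixy,ij,jxy->xy',
                          weights.conj(), psi_inv, weights).real
    return 1.0 / np.sqrt(np.maximum(power, 1e-12))
```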
CONCLUSION
Our results show that the MUSICAL 1 H MRSI coil combination method has significant advantages at 7 T using an FID-based sequence compared with two state-of-the-art methods, i.e. the 1stFIDpoint and Sensmap methods. The benefits include an increase in SNR, a decrease in CRLB values and improved metabolic maps and spectral appearance compared with the use of the 1stFIDpoint method, and an increase in SNR and a decrease in computational and hardware demands, with a pre-phasing of the resultant spectra, compared with the use of the Sensmap method. The pre-phasing increases the fitting speed, accuracy and reproducibility. In addition to the SNR increase enabled by ACs in conjunction with the MUSICAL method, noise decorrelation further enhances the SNR by 13%. In combination, the MUSICAL method is therefore an ideal tool with which to optimize the results of multichannel MRSI data, independent of coil hardware limitations.
"Engineering",
"Medicine",
"Physics"
] |
Axial-vector and pseudoscalar tetraquarks $[ud][\overline{c}\overline{s}]$
Spectroscopic parameters and widths of the fully open-flavor axial-vector and pseudoscalar tetraquarks $X_{\textrm{AV}}$ and $X_{\textrm{PS}}$ with content $[ud][\overline{c}\overline{s}]$ are calculated by means of the QCD sum rule methods. Masses and current couplings of $X_{\textrm{AV}}$ and $X_{\textrm{PS}}$ are found using two-point sum rule computations performed by taking into account various vacuum condensates up to dimension 10. The full width of the axial-vector state $X_{\textrm{AV}}$ is evaluated by including into the analysis the S-wave decay modes $X_{\textrm{AV}}\rightarrow D^{*}(2010)^{-}K^{+}$, $\overline{D}^{*}(2007)^{0}K^{0}$, $D^{-}K^{*}(892)^{+}$, and $\overline{D}^{0}K^{*}(892)^{0}$.
In the case of $X_{\textrm{PS}}$, we consider the S-wave decay $X_{\textrm{PS}}\rightarrow \overline{D}_{0}^{*}(2300)^{0}K^{0}$, and the P-wave processes $X_{\textrm{PS}}\rightarrow D^{-}K^{*}(892)^{+}$ and $X_{\textrm{PS}}\rightarrow \overline{D}^{0}K^{*}(892)^{0}$. To determine the partial widths of these decay modes, we employ the QCD light-cone sum rule method and the soft-meson approximation, which are necessary to estimate the strong couplings at the tetraquark-meson-meson vertices $X_{\textrm{AV}}D^{*}(2010)^{-}K^{+}$, etc.
Our predictions for the mass $m_{\textrm{AV}}=(2800 \pm 75)~\text{MeV}$ and width $\Gamma_{\textrm{AV}}=(58 \pm 10)~\text{MeV}$ of the tetraquark $X_{\textrm{AV}}$, as well as the results $m_{\textrm{PS}}=(3000 \pm 60)~\text{MeV}$ and $\Gamma_{\textrm{PS}}=(65 \pm 12)~\text{MeV}$ for the same parameters of $X_{\textrm{PS}}$, may be useful in future experimental studies of multiquark hadrons.
Introduction
Recent LHCb information on the new structures X_0(2900) and X_1(2900), observed in the invariant D⁻K⁺ mass distribution of the process B⁺ → D⁺D⁻K⁺ [1,2], has enhanced the activity of research into fully open-flavor exotic mesons. In fact, by taking into account the dominant decays of the resonance-like peaks X_{0(1)}(2900) → D⁻K⁺ and assuming that they are four-quark systems, one sees that X_{0(1)}(2900) are built of the quarks c, s, u, and d. The LHCb collaboration measured the masses and widths of the structures X_{0(1)}(2900), and fixed their spin-parities. It was found that X_0(2900) and X_1(2900) bear the quantum numbers J^P = 0⁺ and J^P = 1⁻, respectively. Let us note that, alternatively, the structures X_{0(1)}(2900) may appear as triangle singularities in some rescattering diagrams: such an interpretation was not excluded by LHCb either.
In the four-quark picture, widely accepted to explain the LHCb data, X_{0(1)}(2900) may be considered in the framework of both the molecule and the diquark-antidiquark (tetraquark) models. Thus, in publications [3,4] the resonance X_0(2900) was analyzed as the scalar diquark-antidiquark state [ud][cs], whereas in Refs. [5,6] it was treated as a molecule D*⁻K*⁺ or D*⁰K*⁰. The situation is almost the same for the vector resonance X_1(2900): it was studied in the context of the tetraquark and molecule models, for instance, in Refs. [5,7,8]. There are numerous articles devoted to investigations of X_{0(1)}(2900) using different methods and schemes of high energy physics; a relatively complete list of such papers can be found in Refs. [6,7,9]. An interesting conjecture about the nature of X_0(2900) was made in Ref. [10], where it was interpreted as a radial excitation X_0′ of the scalar tetraquark X_0 = [ud][cs]. We addressed this problem in our work [9], and calculated the masses and widths of the ground-state 1S and radially excited 2S tetraquarks X_0^(′) using the QCD sum rule method. We modeled X_0^(′) as particles composed of the axial-vector diquark [ud] and axial-vector antidiquark [cs]. We also constructed the tetraquarks X_S^(′) by utilizing a scalar diquark and antidiquark, and found their parameters. It was demonstrated that the ground-state particles X_0 and X_S are lighter than the resonance X_0(2900), whereas the radially excited tetraquarks X_0′ and X_S′ with masses around ≈ 3320 MeV are heavier than it.
In other words, none of these four-quark states can be identified with the resonance X_0(2900). Therefore, it is reasonable to treat X_0 and X_S as new hypothetical exotic mesons to be searched for in experiments.
Fully open-flavor tetraquarks, to be fair, have already been objects of theoretical studies, which intensified after the report of the resonance X(5568), presumably composed of b, s, u, and d quarks [11]. Though the existence of X(5568) was not confirmed by other experimental groups, its charmed partners (b → c) are still under detailed analysis. In fact, the spectroscopic parameters and full width of the scalar tetraquark X_c = [su][cd] were calculated in Ref. [12]. The masses of exotic mesons with the same content, but quantum numbers J^P = 0⁺ and J^P = 1⁺, were estimated in Ref. [13].
In various combinations, the c, s, u, and d quarks form different classes of four-quark mesons, the features of which deserve investigation. An interesting class of fully open-flavor particles is the collection of states Z⁺⁺ = [cu][sd], which carry two units of electric charge. The scalar, pseudoscalar, axial-vector and vector members of this group were studied in our articles, Refs. [14,15], respectively. It was pointed out that the scalar and vector tetraquarks Z_S⁺⁺ and Z_V⁺⁺ may be observed in the D⁺K⁺ mass distribution of the decay B⁺ → D⁻D⁺K⁺ [15].
Tetraquarks with the content [ud][cs] establish a new class of open-flavor particles. The observation of the resonances X_{0(1)}(2900) by the LHCb collaboration, the available experimental data, and the possible interpretation of X_1(2900) as a vector state X_V = [ud][cs] make these particles objects of special interest. In the present article, we continue our studies started in Refs. [7,9] by calculating the spectroscopic parameters and full widths of the axial-vector and pseudoscalar four-quark states X_AV and X_PS with the same [ud][cs] content.
Masses and current couplings of X_AV and X_PS are evaluated in the context of the two-point sum rule method [16,17]. In the calculations, we take into account various vacuum condensates up to dimension 10. The full width of the axial-vector state X_AV is found by including into the analysis its S-wave decay modes X_AV → D*(2010)⁻K⁺, D*(2007)⁰K⁰, D⁻K*(892)⁺, and D⁰K*(892)⁰. To estimate the width of the pseudoscalar tetraquark X_PS, we consider the kinematically allowed S-wave channel X_PS → D*_0(2300)⁰K⁰, and the P-wave decay modes X_PS → D⁻K*(892)⁺ and X_PS → D⁰K*(892)⁰.
Partial widths of these processes are governed by the strong couplings at the relevant vertices, for example, at X_AV D*(2010)⁻K⁺ for the first process. To calculate the required couplings, we use the QCD light-cone sum rule (LCSR) method [18] and the soft-meson approximation [19,20]. The latter is necessary to treat tetraquark-meson-meson vertices, which, due to the unequal number of quark fields in the tetraquark and meson interpolating currents, differ from standard three-meson vertices [21]. This paper is organized in the following manner: in Sect. 2, we calculate the masses and current couplings of the tetraquarks X_AV and X_PS. In Sect. 3, we determine the strong couplings g_i, i = 1, 2, 3, 4 corresponding to the vertices X_AV D*(2010)⁻K⁺, X_AV D*(2007)⁰K⁰, X_AV D⁻K*(892)⁺ and X_AV D⁰K*(892)⁰. In this section, we also compute the partial widths of the corresponding processes, and estimate the full width of X_AV. In Sect. 4, we consider the decays X_PS → D*_0(2300)⁰K⁰, D⁻K*(892)⁺, and D⁰K*(892)⁰, and find the strong couplings G_j, j = 1, 2, 3 at the relevant vertices. Using G_j, we calculate the partial widths of these decays and evaluate the full width of X_PS by saturating it with these channels. Section 5 contains our conclusions.
Masses and current couplings of the tetraquarks X AV and X PS
In this section, we compute the spectroscopic parameters of the states X_AV and X_PS by means of the two-point sum rule method. It is an effective nonperturbative approach elaborated to evaluate parameters of ordinary mesons and baryons. The QCD sum rules express various physical quantities in terms of universal vacuum condensates which do not depend on the problem under consideration. At the same time, they contain auxiliary parameters s_0 and M² specific to each computation. The first of them is the continuum subtraction parameter s_0, which separates the contribution of a ground-state particle in the phenomenological side of a sum rule from the effects of higher resonances and continuum states. The Borel parameter M² is required to suppress these unwanted continuum effects. By introducing M² and s_0 into the analysis and employing an assumption about quark-hadron duality, one connects the phenomenological and QCD sides of the sum rules and obtains a sum rule equality. The latter can be used to express physical observables in terms of different vacuum condensates. The parameters M² and s_0 generate theoretical uncertainties in the results, which nevertheless can be estimated and kept under control.
In what follows, we calculate the mass m and current coupling f of the axial-vector meson X_AV (we also employ the notation m_AV and f_AV), and provide only the final results for X_PS. The starting point in the computation of the spectroscopic parameters of the tetraquark X_AV is the correlation function Π_μν(p) of Eq. (1), where T is the time-ordered product of the two currents and J_μ(x) is the interpolating current for the axial-vector state X_AV. We model the tetraquark X_AV as a compound formed by the scalar diquark u^T Cγ_5 d and the axial-vector antidiquark c̄γ_μ C s̄^T, which are antitriplet and triplet states of the color group SU_c(3), respectively. The corresponding interpolating current is given by Eq. (2) and belongs to the [3̄_c]_{ud} ⊗ [3_c]_{c̄s̄} representation of the color group. In the expression above, εε̃ = ε_{abc}ε_{ade}, where a, b, c, d and e are color indices. In Eq. (2), c(x), s(x), u(x) and d(x) denote quark fields, and C is the charge conjugation matrix.
The phenomenological side of the sum rule, Π^Phys_μν(p), is derived from Eq. (1) by inserting a complete set of intermediate states with the quark content and spin-parity of the tetraquark X_AV, and carrying out the integration over x. The momentum and polarization vector of X_AV are denoted by p and ε, respectively. It should be noted that in Π^Phys_μν(p) the ground-state term is written down explicitly, whereas contributions of higher resonances and continuum states are shown by ellipses.
In Eq. (3), we have assumed that the phenomenological side of the sum rule, Π^Phys_μν(p), can be approximated by a single pole term. But in the case of multiquark systems this approximation has to be used with some caution, because the physical side also receives contributions from two-hadron reducible terms. Indeed, a relevant interpolating current couples not only to a multiquark hadron, but also to a two-hadron continuum. This problem was raised in Refs. [22,23] when considering pentaquarks, and revisited recently in the case of tetraquarks [24], where it is argued that the contributions at the orders O(1) and O(α_s) in the operator product expansion (OPE) are canceled exactly by the meson-meson scattering states on the hadronic side, and that the tetraquark molecular states start to receive contributions at the order O(α_s²). Then the reducible contributions should be subtracted from the sum rule, which can be done by means of two methods. One of them is the direct subtraction of two-hadron terms from Π^Phys_μν(p) by calculating the current-two-hadron coupling constant using an independent QCD sum rule. This strategy was realized, for example, in Ref. [25] to investigate an anti-charmed pentaquark state. The existence of a two-hadron continuum below a multiquark system means that such a particle is unstable and decays to these conventional hadrons. In other words, a two-hadron continuum generates the finite width Γ(p²) of a multiquark system. The relevant effects can be taken into account by modifying the quark propagator in Eq. (3) according to Eq. (4); this second method was used to study tetraquarks in Ref. [26].
Rather detailed investigations demonstrated that the effects of the modification in Eq. (4) can be taken into account by absorbing the two-meson contributions into the current-tetraquark coupling constant while keeping the tetraquark's mass stable [27,28]. Uncertainties generated by such a change of the coupling are numerically smaller than the theoretical errors of the sum rule analysis itself. In fact, two-meson effects lead to an additional ≈ 7% uncertainty in the current coupling f_T for the doubly charmed pseudoscalar tetraquark ccs̄s̄ with the mass m_T = 4390 MeV and full width Γ_T ≈ 300 MeV [27]. In the case of the resonance Z_c⁻(4100) these uncertainties do not exceed ≈ 5% of the coupling f_{Z_c} [28]. Therefore, one can neglect two-meson reducible terms and use for Π^Phys_μν(p) the single-pole zero-width approximation, as has been done in Eq. (3).
To simplify the correlation function Π^Phys_μν(p) and express it in terms of the tetraquark's mass and current coupling, we use the matrix element ⟨0|J_μ|X_AV(p)⟩ = m f ε_μ and recast Π^Phys_μν(p) into the corresponding pole form. The function Π^Phys_μν(p) has two Lorentz structures, determined by g_μν and p_μ p_ν. One of them can be chosen to continue the sum rule analysis. We work with the structure proportional to g_μν and the corresponding invariant amplitude Π^Phys(p²). The advantage of this structure is that it is formed due to contributions of only spin-1 particles, and is free of any contaminations.
The QCD side of the sum rules, Π^OPE_μν(p), should be computed in the operator product expansion with some accuracy. To this end, we substitute into Π_μν(p) the explicit expression of the current J_μ(x), contract the relevant quark fields, and replace the contractions by appropriate propagators. These operations express Π^OPE_μν(p) in terms of S_c(x) and S_{u(s,d)}(x), i.e. the heavy c- and light u(s, d)-quark propagators, respectively. Their explicit expressions are presented in the Appendix (see also Ref. [29]).
The function Π^OPE_μν(p) is a sum of components proportional to g_μν and p_μ p_ν. We choose the invariant amplitude Π^OPE(p²) corresponding to the structure ∼ g_μν, and use it to derive the sum rules for m and f, which read m² = Π′(M², s_0)/Π(M², s_0) and f² = (e^{m²/M²}/m²) Π(M², s_0), where Π′(M², s_0) = dΠ(M², s_0)/d(−1/M²). In these expressions, Π(M², s_0) is the Borel transformed and subtracted invariant amplitude Π^OPE(p²). Computing the function Π(M², s_0) and fixing the regions for the parameters M² and s_0 are the next problems in our study of m and f. Calculations prove that Π(M², s_0) can be written as an integral of the spectral density ρ^OPE(s), weighted by the factor e^{−s/M²}, over the region (m_c + m_s)² ≤ s ≤ s_0. In the present paper, we neglect the masses of the u and d quarks, and also set m_s² = 0 while keeping terms ∼ m_s. The spectral density ρ^OPE(s) is found as the imaginary part of the function Π^OPE. The vacuum condensates that enter the sum rules Eqs. (9) and (10) are universal quantities obtained from the analysis of various hadronic processes [16,17,30,31]. Below, we list the values used in our numerical computations: ⟨q̄q⟩ = −(0.24 ± 0.01)³ GeV³ and ⟨s̄s⟩ = (0.8 ± 0.01)⟨q̄q⟩. As is seen, the vacuum condensate of strange quarks differs from ⟨0|q̄q|0⟩ [30]. The mixed condensates ⟨q̄g_sσGq⟩ and ⟨s̄g_sσGs⟩ are expressed through the corresponding quark condensates and the parameter m_0², the numerical value of which was extracted from the analysis of baryonic resonances [31]. For the gluon condensate ⟨g_s³G³⟩, we employ the estimate given in Ref. [32]. This list also contains the masses of the c and s quarks, taken from Ref. [33].
Another problem is the choice of working windows for the parameters M² and s_0. They are fixed in such a way as to meet constraints imposed on Π(M², s_0) by the pole contribution (PC) and by the convergence of the operator product expansion. These constraints can be quantified by means of the ratios PC = Π(M², s_0)/Π(M², ∞) and R(M²) = Π^DimN(M², s_0)/Π(M², s_0), where Π^DimN(M², s_0) is the sum of the DimN ≡ Dim(8 + 9 + 10) terms. In what follows, we require the dominance of the pole contribution and the smallness of R(M²) [Eq. (15)]. The PC and R(M²) are employed to fix the upper and lower limits of the Borel parameter M², respectively. These two values determine the boundaries of the region where M² can be varied. Calculations show that the interval 2.5 GeV² ≤ M² ≤ 3.2 GeV², together with a corresponding interval for s_0 [Eq. (16)], forms an appropriate region for the parameters M² and s_0, and complies with the limits on PC and the convergence of the OPE. Thus, at M²_max = 3.2 GeV², on average in s_0, the pole contribution is 0.51, whereas at M²_min = 2.5 GeV² it becomes equal to 0.73. To visualize the dynamics of the pole contribution when varying the Borel parameter, we plot PC as a function of M² at different s_0 in Fig. 1. To be convinced of the convergence of the OPE, we calculate R(M²_min) at the minimum point M²_min = 2.5 GeV², and get R(2.5 GeV²) ≈ 0.027, in accordance with the constraint from Eq. (15). Results of a more detailed analysis are depicted in Fig. 2. In this figure, we show the perturbative and nonperturbative components of the correlation function Π(M², s_0): a prevalence of the perturbative contribution to Π(M², s_0) over the nonperturbative one is evident. Apart from some higher-dimensional terms, the nonperturbative contributions decrease with increasing dimension of the corresponding operators.
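The interplay of the Borel parameter, the continuum threshold and the pole contribution can be illustrated numerically with a toy spectral density; the ρ(s) used below, the value of s_0 and the integration limits are placeholders, not the OPE result of this work, and the convergence ratio R(M²) is not modeled because it requires the explicit Dim(8+9+10) terms.

```python
import numpy as np
from scipy.integrate import quad

def rho_pert(s):
    """Toy perturbative spectral density (arbitrary normalization)."""
    return s**2 / (4 * np.pi) ** 4

def pi_borel(M2, s0, s_min=(1.27 + 0.093) ** 2):
    """Borel-transformed, continuum-subtracted correlator (toy model):
    Pi(M^2, s0) = int_{s_min}^{s0} rho(s) exp(-s/M^2) ds."""
    val, _ = quad(lambda s: rho_pert(s) * np.exp(-s / M2), s_min, s0)
    return val

def pole_contribution(M2, s0, s_max=100.0):
    """PC = Pi(M^2, s0) / Pi(M^2, infinity), with infinity approximated."""
    return pi_borel(M2, s0) / pi_borel(M2, s_max)

def mass_estimate(M2, s0, eps=1e-4):
    """m^2 = -d ln Pi / d(1/M^2), evaluated by central finite differences."""
    x = 1.0 / M2
    lnp = lambda xx: np.log(pi_borel(1.0 / xx, s0))
    return np.sqrt(-(lnp(x + eps) - lnp(x - eps)) / (2 * eps))

for M2 in (2.5, 2.8, 3.2):          # GeV^2, the window quoted in the text
    print(M2, pole_contribution(M2, s0=10.5), mass_estimate(M2, s0=10.5))
```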
The region for s_0 has to meet constraints coming from the dominance of PC and the convergence of the OPE. The self-consistency of the performed analysis can be checked by comparing the parameter √s_0 with the mass of the X_AV tetraquark extracted from the sum rule: evidently, the inequality m < √s_0 should be satisfied. Additionally, √s_0 bears information on the mass m* of the first radial excitation of the tetraquark X_AV; therefore, the restriction m* ≥ √s_0 provides a lower limit for m*. We extract the mass m and coupling f by computing them at different M² and s_0, and finding their mean values averaged over the regions of Eq. (16). Our prediction for the mass is m = (2800 ± 75) MeV, with the current coupling f extracted simultaneously. In Fig. 3, we depict m as a function of M² and s_0, where its dependence on the Borel and continuum subtraction parameters can be seen. Strictly speaking, physical quantities should not depend on M², but the computations demonstrate that such effects nevertheless exist. Therefore, in the chosen region for M² this dependence should be minimal. Due to the functional form of the sum rule for the mass, Eq. (9), given as the ratio of correlation functions, the variation of m in the region for M² is mild. There is also a dependence on the parameter s_0, which contains information about the lower limit for the mass of the excited tetraquark.
The pseudoscalar tetraquark X_PS and its parameters have been explored in the manner described just above. Here, we model X_PS as a tetraquark built of the pseudoscalar diquark u^T C d and the scalar antidiquark c̄γ_5 C s̄^T. The relevant interpolating current J_PS(x) is given by Eq. (18) and belongs to the antitriplet-triplet representation of the color group SU_c(3). The physical side of the sum rule in this case has a relatively simple form [Eq. (19)], where m_PS and f_PS are the mass and current coupling of the tetraquark X_PS, respectively. To derive Π^Phys(p), we have used the matrix element of the pseudoscalar particle X_PS [Eq. (20)]. The function Π^Phys(p) has a trivial Lorentz structure proportional to I; therefore, the invariant amplitude Π^Phys(p²) is equal to the r.h.s. of Eq. (19). The QCD side of the new sum rules is given by an analogous formula. The spectroscopic parameters of X_PS can then be obtained from Eqs. (9) and (10) with appropriate working windows for M² and s_0; in these regions the PC changes within the limits 0.74 ≥ PC ≥ 0.50. Our prediction for the mass is m_PS = (3000 ± 60) MeV, with the corresponding current coupling f_PS.
The dependence of m_PS on the parameters M² and s_0 is shown in Fig. 4. In the left panel, one can see the relatively stable behavior of m_PS under variation of M².
Decays of the axial-vector tetraquark X AV
The mass and spin-parity of the tetraquark X_AV allow us to classify its decay channels. We restrict ourselves to the analysis of the S-wave decay channels of X_AV, which are X_AV → D*(2010)⁻K⁺, D*(2007)⁰K⁰, D⁻K*(892)⁺, and D⁰K*(892)⁰. The full width of the axial-vector state X_AV is estimated by including into the analysis precisely these channels. We are going to provide rather detailed information about the computation of the partial width of the decay X_AV → D*(2010)⁻K⁺, and outline important steps in the analyses of the other processes. A quantity to be extracted from a sum rule is the strong coupling g_1 of the particles at the vertex X_AV D*(2010)⁻K⁺. This coupling is defined in terms of the on-mass-shell matrix element of Eq. (25), where the mesons K⁺ and D*(2010)⁻ are denoted as K and D*, respectively. Here, p′, p and q are the four-momenta of the tetraquark X_AV and of the mesons D* and K, and ε_ν and ε*_μ are the polarization vectors of the particles X_AV and D*.
In the framework of the LCSR method, the coupling g_1 can be obtained from the correlation function of Eq. (26), with J_ν(x) being the current for the tetraquark X_AV from Eq. (2). The interpolating current for the meson D*(2010)⁻ is abbreviated in Eq. (26) as J^{D*}_μ(x) and defined in terms of the corresponding quark bilinear, where j is the color index.
The main contribution to the correlation function Π_μν(p, q) comes from a term with poles at p² and p′² = (p + q)². This term is given by Eq. (28), where m_{D*} and f_{D*} are the mass and decay constant of the meson D*(2010)⁻. To derive Eq. (28), we have used Eq. (25) and the corresponding matrix elements of the D* and K mesons. The term written down explicitly in Eq. (28) corresponds to the contribution of the ground-state particles in the X_AV and D* channels; effects of higher resonances and continuum states in these channels are shown by dots. The function Π^Phys_μν(p, q) constitutes the phenomenological side of the sum rule for the coupling g_1. It contains two terms determined by the structures g_μν and p_μ p_ν. In our studies, we use the term ∼ g_μν and the corresponding invariant amplitude Π^Phys(p², p′²), which is a function of the two variables p² and p′².
The correlation function Π^OPE_μν(p, q), calculated in terms of quark-gluon degrees of freedom, forms the QCD side of the sum rules; in it, α and β are spinor indices.
As is seen, besides the c and d quark propagators, the function Π^OPE_μν(p, q) also contains local matrix elements of the K⁺ meson, which carry spinor and color indices. We can rewrite ⟨K|ū^b_α s^e_β|0⟩ in convenient forms by expanding ūs over the full set of Dirac matrices Γ^J and projecting them onto the colorless states. Operators ūΓ^J s sandwiched between the K meson and the vacuum generate local matrix elements of the K meson, which are known and can be implemented into Π^OPE_μν(p, q). As usual, Π^OPE_μν(p, q)-type correlators depend on nonlocal matrix elements of a final meson (for example, the K meson), which are convertible to its distribution amplitudes (DAs). This is correct as long as one treats strong vertices of three conventional mesons in the context of the LCSR method. In the case of tetraquark-meson-meson vertices, the relevant correlation functions contain local matrix elements of a final meson instead of its DAs. These matrix elements are determined at the space-time point x = 0 and are overall normalization factors. Within the LCSR method, similar behavior of correlation functions was seen in the limit q → 0 of three-meson vertices, which is known as the soft-meson approximation [19]. This approximation requires the adoption of additional technical tools to deal with new problems that appear on the phenomenological side of the corresponding sum rules [19,20]. It turns out that the soft limit and the related technical methods can be adapted to investigate tetraquark-meson-meson vertices as well [21]. It is worth emphasizing that the soft limit should be implemented in the hard part of the correlation function Π_μν(p), whereas in the matrix elements one takes into account the terms with q² = m²_K. The term proportional to g_μν in Π^Phys_μν(p, q) in the limit q → 0 can, with some accuracy, be transformed into the expression of Eq. (33), where m̃² = (m² + m²_{D*})/2. The invariant amplitude Π^Phys(p²) depends on the variable p² and has a double pole at p² = m̃². The Borel transformation of Π^Phys(p²) is given by Eq. (34). The ellipses in Eq. (34) stand not only for terms suppressed after this operation, but also for contributions which remain unsuppressed even after the Borel transformation. Therefore, before performing the usual subtraction procedure, one should remove these contributions from BΠ^Phys(p²). To this end, we have to apply the operator P(M², m̃²) introduced in Refs. [19,20] to both sides of the sum rule equality, and subtract conventional terms in the usual way. Then the sum rule for the strong coupling g_1 takes the form of Eq. (36), where Π^OPE(M², s_0) is the Borel transformed and subtracted invariant amplitude Π^OPE(p²) that corresponds to the structure g_μν in Π^OPE_μν(p, q). To finish the calculation of the strong coupling g_1, we need to specify the local matrix elements of the K meson which contribute to the function Π^OPE_μν(p, q). Details of the calculations necessary to find Π^OPE_μν(p, q) in the soft limit were presented in Ref. [21]; therefore, we skip further features of the relevant analysis and provide only the final expressions. First of all, our computations demonstrate that in the soft-meson approximation the correlator Π^OPE_μν(p, q = 0) receives a contribution from the local pseudoscalar matrix element of the K meson, with m_K and f_K being the mass and decay constant of the K⁺ meson. The Borel transformed and subtracted correlation function Π^OPE(M², s_0) is calculated by taking into account condensates up to dimension 9; in it, μ_K = f_K m²_K/m_s. In the expression above, the nonperturbative function
F^{non-pert.}(M²) is determined by the corresponding formula. The partial width of the process X_AV → D*(2010)⁻K⁺ can be obtained by employing the standard two-body expression, in which λ = λ(m, m_{D*}, m_K) is the usual kinematical function. The sum rule for the strong coupling g_1 contains different vacuum condensates, the numerical values of which have been collected in Eq. (12). Apart from that, Eq. (36) depends on the spectroscopic parameters of the particles involved in the decay process. The mass and current coupling of the tetraquark X_AV have been calculated in the current work. The masses and decay constants of the mesons D*(2010)⁻ and K⁺ are collected in Table 1. This table also contains the spectroscopic parameters of other mesons which appear in the final states of the different decay channels. For the masses of all the mesons and the decay constants of the K, K* and D mesons, we use information from Ref. [33]. The decay constant f_{D*} of the vector mesons D*(2010)⁻ and D*(2007)⁰, and the decay constant f_{D*_0} of the scalar meson D*_0(2300)⁰, are borrowed from Ref. [34]. The Borel and continuum subtraction parameters M² and s_0 required for the calculation of the coupling g_1 are chosen in accordance with Eq. (16). Numerical computations yield the coupling g_1 listed in Table 2, from which it is not difficult to find the corresponding partial width, also given there. The decay of the tetraquark X_AV to the meson pair D*(2007)⁰K⁰ is another process with a K meson in the final state. This process is a "neutral" version of the first channel, the differences being encoded in the masses m_1 and m_2 of the mesons D*(2007)⁰ and K⁰, respectively. The treatment of this decay mode does not differ from the analysis described above; therefore, we only quote the final results for the coupling g_2 and the partial width of this process in Table 2. The remaining two decay channels X_AV → D⁻K*(892)⁺ and X_AV → D⁰K*(892)⁰ have been explored in a similar manner. Let us consider, for instance, the process X_AV → D⁻K*(892)⁺. The strong coupling g_3 that corresponds to the vertex X_AV D⁻K*(892)⁺ is defined by an analogous matrix element involving the polarization vector of the meson K*(892)⁺. The correlation function that allows us to extract g_3 is built from J_D(x), the interpolating current for the pseudoscalar meson D⁻, together with the tetraquark current. The main term that contributes to this correlation function and determines the phenomenological side of the sum rule for g_3 has an analogous pole form.
Table 2. Decay channels of the tetraquarks X_AV and X_PS, strong couplings g_i, G_j and partial widths Γ_i^AV, Γ_j^PS. The star-marked coupling G_1 has dimension GeV⁻¹.
Decays of the pseudoscalar tetraquark X_PS
In this section, we investigate decays of the pseudoscalar tetraquark X_PS with the mass m_PS and current coupling f_PS, which have been extracted from the two-point sum rules in Sect. 2. We consider the S-wave process X_PS → D*_0(2300)⁰K⁰, as well as the P-wave decay modes X_PS → D⁻K*(892)⁺ and X_PS → D⁰K*(892)⁰ of this four-quark state.
We begin our analysis with the decay X_PS → D*_0(2300)⁰K⁰. The coupling G_1 required to calculate the partial width of this process can be defined using the corresponding on-mass-shell matrix element. The correlation function which should be considered to determine the strong coupling G_1 of the particles at the vertex X_PS D*_0(2300)⁰K⁰ is built from the currents J_PS(x) and J_{D*_0}(x), where D*_0 stands for the meson D*_0(2300)⁰. Here, J_PS(x) and J_{D*_0}(x) are the interpolating currents for the tetraquark X_PS [see Eq. (18)] and for the scalar meson D*_0(2300)⁰, respectively; the latter is defined in terms of the corresponding quark bilinear. A contribution to the correlation function Π^Phys(p, q) with poles at p² and p′² = (p + q)² comes from the ground-state term. Here, m_7 and f_{D*_0} are the mass and decay constant of the meson D*_0(2300)⁰, respectively. In order to find Eq. (62), we employ Eqs. (59) and (20), as well as the matrix element of the meson D*_0(2300)⁰. The QCD side of the sum rule, Π^OPE(p, q), is computed in terms of the quark propagators and the local matrix elements of the K⁰ meson. The functions Π^Phys(p, q) and Π^OPE(p, q) have trivial Lorentz structures ∼ I; therefore, both of them contain only one invariant amplitude. The amplitude Π^OPE(p²) is calculated with dimension-9 accuracy; in it, the function F^{non-pert.}(M²) is determined by the corresponding formula and μ_{K⁰} = m_2² f_K/m_s. In the soft-meson approximation, the coupling G_1 can be found by means of the sum rule with m̃² = (m²_PS + m_7²)/2. The width of the decay X_PS → D*_0(2300)⁰K⁰ is found by utilizing the two-body formula in which λ = λ(m_PS, m_7, m_2). Our computations for the coupling G_1 and the partial width of this process yield the values presented in Table 2. The coupling G_2, which describes the strong interaction of the particles at the vertex X_PS D⁻K*(892)⁺ and determines the partial width of the channel X_PS → D⁻K*(892)⁺, is defined by an analogous matrix element. The sum rule for G_2 is obtained from the corresponding correlation function; in terms of the physical parameters of the involved particles, this function has a similar pole form. To extract this expression, we use the matrix elements from Eqs. (50) and (52), as well as the one defined by Eq. (71). The correlation function Π(p, q) calculated using quark-gluon degrees of freedom fixes the QCD side of the sum rule. The sum rule for the strong coupling G_2 can then be obtained by employing standard manipulations. The width of the process X_PS → D⁻K*(892)⁺ is calculated by means of the corresponding expression, where λ = λ(m_PS, m_3, m_4).
Numerical computations give the value of the coupling G 2 and the corresponding partial width of the decay under analysis. The decay channel X PS → D0 K*(892)0 of the tetraquark X PS, which is the last process considered in the present paper, can be treated in a similar way. Therefore, we refrain from further details and provide all relevant information in Table 2.
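Throughout the width expressions referenced above, λ denotes the kinematical function of two-body decays. As a reminder (this is the standard definition and standard decay kinematics, not a result specific to this work; the normalization convention used in the original equations may differ slightly), the Källén function and the magnitude of the final-state three-momentum are

```latex
% Standard Kallen (triangle) function and the final-state momentum in the
% rest frame of a particle of mass m decaying into particles of masses m1, m2:
\lambda(x, y, z) = x^{2} + y^{2} + z^{2} - 2xy - 2yz - 2zx ,
\qquad
|\mathbf{p}| = \frac{\sqrt{\lambda\!\left(m^{2},\, m_{1}^{2},\, m_{2}^{2}\right)}}{2m} .
```

The overall normalization of each partial width additionally involves the corresponding strong coupling g i or G j and spin factors specific to the decaying state.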
The full width of X PS does not differ considerably from the width of the axial-vector tetraquark X AV.
Conclusions
In the current article, we have investigated the axial-vector and pseudoscalar tetraquarks X AV and X PS from a family of exotic mesons [ud][cs] containing four different quark flavors. We have calculated their masses, and also estimated full widths of these states using different decay channels.
Interest in fully open-flavor structures was renewed recently due to the discovery of the resonances X 0(1) (2900) made by the LHCb collaboration. One of these states, X 1 (2900), was studied in Ref. [7] as a vector tetraquark X V = [ud][cs]. The mass and width of X V are close to the physical parameters of the resonance X 1 (2900) measured by LHCb, which allowed us to interpret X V as a candidate for the vector resonance X 1 (2900).

Fig. 5 (caption): Tetraquarks [ud][cs] with different spin-parities. The lower and upper red (solid) and blue (dashed) lines are the 1S and 2S scalar states, respectively. The red lines correspond to the scalar tetraquark X 0, whereas the blue lines show the parameters of X S. The mass and width of the vector particle X V were determined in Ref. [7]. The axial-vector and pseudoscalar states have been explored in the current article. Theoretical uncertainties of the extracted observables are not shown.
The masses and widths of the ground-state scalar tetraquarks X 0 and X S , and their first radial excitations were calculated in Ref. [9]. The scalar particles X 0 and X S were modeled using axial-vector and scalar diquark-antidiquark pairs, respectively.
Information gained in our studies about the tetraquarks [ud][cs] with spin-parities J P = 0 + , 0 − , 1 + and 1 − is shown in Fig. 5. As is seen, the pseudoscalar state X PS is the heaviest particle in the family of tetraquarks [ud][cs], whereas the lightest particles are the scalar ones with different internal organizations. The vector and axial-vector states have comparable masses and widths.
It is interesting to consider hadronic processes where the exotic mesons X AV and X PS may be observed. We have noted in Sect. 1 that the resonances X 0 (2900) and X 1 (2900) were discovered in the invariant D − K + mass distribution of the process B + → D + D − K + . For the scalar tetraquark X 0 the decay X 0 → D − K + is an S-wave process, whereas X 1 → D − K + is a P-wave channel for the vector particle X 1 (2900). The invariant mass distribution of the D − K * (892) + mesons in exclusive decays of the B + meson may be explored to observe the tetraquarks X AV and X PS. In fact, decays to D − K * (892) + mesons are S-wave and P-wave channels for the tetraquarks X AV and X PS, respectively. Because the branching ratios of these channels amount to 0.30 and 0.36, respectively, they may be employed to fix the structures X AV and X PS. These and related problems require further theoretical and experimental studies.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: All the numerical and mathematical data have been included in the paper and we have no other data regarding this paper.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Funded by SCOAP 3 . | 9,256.8 | 2023-03-01T00:00:00.000 | [
"Materials Science"
] |
Probabilistic inference of lateral gene transfer events
Background Lateral gene transfer (LGT) is an evolutionary process that has an important role in biology. It challenges the traditional binary tree-like evolution of species and is attracting increasing attention from molecular biologists due to its involvement in antibiotic resistance. A number of attempts have been made to model LGT in the presence of gene duplication and loss, but reliably placing LGT events in the species tree has remained a challenge. Results In this paper, we propose probabilistic methods that sample reconciliations of the gene tree with a dated species tree and compute maximum a posteriori probabilities. The MCMC-based method uses the probabilistic model DLTRS, which integrates LGT, gene duplication, gene loss, and sequence evolution under a relaxed molecular clock for substitution rates. We can estimate posterior distributions on gene trees and, in contrast to previous work, the actual placement of potential LGT events, which can be used to, e.g., identify “highways” of LGT. Conclusions Based on a simulation study, we conclude that the method is able to infer the true LGT events on the gene tree and reconcile them to the correct edges on the species tree in most cases. Applied to two biological datasets, containing gene families from Cyanobacteria and Mollicutes, we find potential LGT highways that corroborate other studies as well as previously undetected examples. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1268-2) contains supplementary material, which is available to authorized users.
is direct uptake of foreign genetic material through the cell, conjugation is the transfer of genetic material through a bridge-like structure between two cells, while transduction is the insertion of foreign genetic material through bacteriophages. Various types of mobile elements are also important forces that drive genomic rearrangements. Lateral gene transfers challenge the classical definition of species and the assumption of tree-like evolution of species. Since LGTs are also observed between distant bacterial species, they are a confounding factor in the inference of phylogenetic trees. Inference of gene phylogenies inside a species tree in the presence of LGT is therefore not a trivial task. A number of methods have been proposed to solve the gene-species reconciliation problem in this context. Goodman et al. [7] introduced the notion of a tree reconciliation, which took duplication and loss of genes into account. They used a parsimony-based approach and proposed an algorithm that finds the most parsimonious reconciliation (MPR) in the presence of gene duplication and gene loss events. The most parsimonious reconciliation is a reconciliation that uniquely maps the vertices of a gene tree to the vertices or edges of a species tree such that the number of inferred evolutionary events is minimized. MPR works under the assumption that evolutionary events are rare and that parsimonious scenarios are therefore the most likely scenarios. MPR-based methods are fast but biologically less realistic than probabilistic methods. A number of attempts have been made to model gene-species tree reconciliation in the presence of lateral gene transfer events. Hallet et al. [8] introduced the first parsimony-based model that took lateral gene transfer events into account. Since then, many other parsimony-based methods that include lateral gene transfers have been proposed [9][10][11][12].
DLTRS (Duplication, Loss, Transfer, Rate, and Sequence evolution), introduced by Sjöstrand et al. [13,14], is perhaps the first biologically realistic probabilistic model in which LGT events are taken into account along with duplications, losses, and sequence evolution in a single comprehensive model. A modified birth-death process is used to model lateral gene transfers as well as gene duplications and gene losses. The probability of a gene tree, its edge lengths, and other parameters are computed similarly to Åkerborg et al. [15], with the modification that gene tree lineages are allowed to jump across species tree lineages. In previous work [13,16], the focus was on estimating the correct gene tree under the DLTRS model; identifying possible LGT scenarios was done in a parsimony framework. In the present work, we apply the DLTRS model also for inferring LGT and/or duplication events and their timing. Another attempt to model LGT in the context of gene-species tree reconciliation was made by Suchard [17]. A hierarchical model framework was proposed, in which the top layer involves a random walk over gene trees and a species tree, while the bottom layer consists of the reconstruction of gene trees given the multiple sequence alignments, conditional on the random walk process. The model does not incorporate branch-length information of the gene trees and therefore does not involve an explicit gene/species tree reconciliation. The lack of branch lengths on gene trees, and the use of a non-dated species tree, make the model biologically less realistic. Szöllősi et al. [18] integrated the processes of origination, duplication, loss and lateral gene transfer into a single model, ODT (Origination, Duplication, Transfer, and Loss), to reconstruct a chronologically ordered species tree by explicitly modeling the evolution of genes in their genomes. Origination occurs from species that are either extinct or not present in the study.
The model
Over any edge x, y in the species tree, each gene lineage is exposed to gene duplications (GD), gene losses (GL), and LGTs at rates δ, μ, and τ , respectively. When a gene lineage u is exposed to a GD event, it is replaced by two children, which both continue evolving over the same species tree edge as did u. When the gene lineage u is exposed to an LGT, it is replaced by two children: one continuing to evolve over the same species tree edge x, y as did u, and one evolving independently over another species tree edge, chosen uniformly from those concurrent with x, y at the time of the LGT event. A loss of the gene lineage u removes it from the process as well as from the generated tree, in which its former parent is suppressed. Each lineage reaching a speciation vertex y in S splits into two independent processes, each evolving down a distinct outgoing edge of y. The process continues recursively down to the leaves where it stops. So, a gene tree vertex represents either a speciation, a GD, or an LGT event; the divergence time for a speciation vertex is given by the corresponding species tree vertex, while the divergence time for a GD or an LGT vertex is given by the DLT process. Divergence times associated with vertices of a tree induce edge times as well as time intervals, in the natural way. The DLT-model also generates a realization explaining how the gene tree has evolved by mapping each gene tree vertex to where in the species tree it was created, i.e., a vertex of the species tree or a species tree edge combined with a time point along it. The substitution rate model obtains biological realism via a relaxed molecular clock, effectively transforming dated trees with leaves representing extant entities, such trees being necessarily ultra-metric, into trees consistent with a relaxed molecular clock. This provides a biologically realistic prior distribution for edge lengths-the convolution of edge times and substitution rates conventionally used in substitution models. In our implementation, edge substitution rates are independently and identically gamma distributed.
Finally, sequence evolution over the gene tree, with these edge lengths, can be modeled using any of the standard substitution models used in phylogenetics [19].
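To make the generative process above concrete, the following minimal Python sketch follows a single gene lineage along one species tree edge under the duplication, loss and transfer rates δ, μ and τ. It is an illustrative toy (the edge representation, the rate values and the handling of spawned lineages are simplifying assumptions), not the DLTRS implementation itself.

```python
import random

def simulate_lineage(t_start, t_end, delta, mu, tau, concurrent_edges):
    """Follow one gene lineage along a species tree edge spanning [t_start, t_end].

    delta, mu, tau are the duplication, loss and transfer rates.  Returns the list
    of (time, event) tuples generated before the lineage is lost or reaches the
    speciation vertex at the lower end of the edge.
    """
    events = []
    t = t_start
    total_rate = delta + mu + tau
    while True:
        # Waiting time to the next event is exponentially distributed.
        t += random.expovariate(total_rate)
        if t >= t_end:
            break                                       # survives to the speciation vertex
        u = random.uniform(0.0, total_rate)
        if u < delta:
            events.append((t, "duplication"))           # both children stay on this edge
        elif u < delta + mu:
            events.append((t, "loss"))                  # lineage removed from the process
            break
        else:
            # Transfer: recipient chosen uniformly among contemporaneous edges.
            recipient = random.choice(concurrent_edges)
            events.append((t, ("transfer to", recipient)))
    return events

# Example: an edge spanning 10 time units with three contemporaneous edges.
print(simulate_lineage(0.0, 10.0, delta=0.05, mu=0.05, tau=0.02,
                       concurrent_edges=["edge_A", "edge_B", "edge_C"]))
```

In the full model, each lineage spawned by a duplication or a transfer would itself be followed recursively down to the leaves, and lineages reaching a speciation vertex split into two independent copies, as described above.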
Methods
In this section we describe the core of our method, but defer many details to the Additional file 1. We also discuss some practical matters, such as how to compare LGT predictions.
Input and parameters
The input to our method, and to our experiments, is sequence data D and a dated species tree S. For computational reasons, the species tree S is discretized (see [14] for details). As a first step, an intermediate tree is obtained by introducing discretization vertices with out-degree 1 on each species tree edge, contemporaneous to the species tree vertices. The final discretized species tree S′ is then obtained by further discretizing the edges of this intermediate tree, introducing vertices with out-degree 1 at regular time points, with the same time points used across contemporaneous edges.
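The two-step discretization can be pictured with the short sketch below; the edge representation, the global grid of regular time points and the function name are illustrative assumptions rather than the actual implementation of [14].

```python
def discretize_edge(t_top, t_bottom, speciation_times, grid_times):
    """Ordered times of out-degree-1 discretization vertices on one edge.

    The edge spans [t_top, t_bottom] with t_top the older end.  Step 1 adds points
    contemporaneous with speciation vertices elsewhere in the tree; step 2 adds
    points from a regular global time grid shared by all contemporaneous edges.
    """
    points = {t for t in speciation_times if t_top < t < t_bottom}
    points.update(t for t in grid_times if t_top < t < t_bottom)
    return sorted(points)

grid = [0.5 * i for i in range(1, 40)]   # regular global grid with spacing 0.5
print(discretize_edge(0.0, 10.0, speciation_times=[2.5, 7.0, 12.0], grid_times=grid))
```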
Sequence evolution is modeled using standard substitution models. The edge rate model is a Gamma distribution with parameters m and cv for mean edge rate and its coefficient of variation. For convenience, we write θ = (δ, μ, τ , m, cv) to summarize all model parameters. All rate parameters can be specified as input, or be inferred during MCMC.
Reconciliations and realizations
We introduce three types of mappings between a gene tree G and a species tree S. In a reconciliation, gene tree vertices are mapped to a vertex or an edge in the species tree. A realization maps vertices of a gene tree to vertices of a discretized species tree S′. Reconciliations and realizations map the gene tree vertices in a manner consistent with the gene tree; a gene tree vertex is never mapped closer to the root in the species tree than its parent. In addition, a realization never maps a child vertex and its parent to the same time. We also define a continuous realization as a reconciliation where each gene tree vertex mapped to a species tree edge is associated with a time. (This terminology deviates from that of Sjöstrand et al. [16], which uses the term realization for what we call a continuous realization and the term discretized realization for what we below call a realization.)
Applying MCMC
The DLTRS model is applied in a Bayesian MCMC framework to estimate a posterior distribution over gene trees with edge lengths, and over the other parameters of the DLTRS model. This framework performs an algorithmic Rao-Blackwellisation [20,21] over the realizations, which is computationally advantageous. We now describe a sampling algorithm that can be applied when a realization is also desired. The Rao-Blackwellisation is still beneficial, since the sampling of realizations or reconciliations can be focused on a subset of the gene trees, perhaps those with high posterior probability or only the MAP gene tree. The probability density of a state in the Markov chain can be expressed as follows:

p(G, l, θ | D, S) = P(D | G, l) p(G, l | θ, S) p(θ) / P(D | S),

where G is a gene tree and l are the edge lengths of G. Probabilities and probability densities are written as P(.) and p(.), respectively. The first factor P(D | G, l) is computed by the standard so-called peeling algorithm [22]. An algorithm for computing the second factor p(G, l | θ, S) was the main algorithmic contribution in [16]; it is partly explained below and also expanded upon. The prior p(θ) is assumed to be uniform and independent. The denominator P(D | S), the normalizing constant, is not calculated when using MCMC because it cancels when computing acceptance probabilities.
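To see explicitly why P(D | S) never needs to be evaluated, consider a generic Metropolis-Hastings update on the state (G, l, θ). The sketch below is schematic only: the callables passed in stand for the peeling algorithm, the ODE/dynamic-programming computation of p(G, l | θ, S), the prior and the proposal mechanism, none of which are spelled out here.

```python
import math
import random

def mh_step(state, log_seq_likelihood, log_recon_density, log_prior, propose):
    """One Metropolis-Hastings update on the state (G, l, theta).

    The target is P(D|G,l) * p(G,l|theta,S) * p(theta) / P(D|S); the normalizing
    constant P(D|S) cancels in the acceptance ratio and is never evaluated.
    propose(state) must return (new_state, log_q_forward, log_q_backward).
    """
    def log_target(s):
        G, l, theta = s
        return (log_seq_likelihood(G, l)          # peeling algorithm
                + log_recon_density(G, l, theta)  # ODE + dynamic programming
                + log_prior(theta))               # uniform, independent prior

    new_state, log_q_fwd, log_q_bwd = propose(state)
    log_alpha = log_target(new_state) - log_target(state) + log_q_bwd - log_q_fwd
    if random.random() < math.exp(min(0.0, log_alpha)):
        return new_state, True    # accepted
    return state, False           # rejected
```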
In each iteration of the MCMC, a combination of ordinary differential equations (ODEs) and dynamic programming is used to compute the factor p(G, l | θ, S) (see [16] or Additional file 1). The term p(G, l | θ, S) is then approximated as follows:

p(G, l | θ, S) = Σ_{c ∈ C} ∫_{A(c)} p(G, l, a | θ, S) da ≈ Σ_{c ∈ C} Σ_{d ∈ D(c)} Δ(d) p(G, l, d | θ, S),

where C is the set of reconciliations, and A(c) and D(c) are the sets of continuous realizations and realizations, respectively, compatible with the reconciliation c. The factor Δ(d) is the product of the lengths of the intervals in which the discretization points used by d are found, and accounts for the fact that we are approximating integrals over these intervals.
Inferring reconciliations and realizations
The data structures used to compute p(G, l | θ, S) can be reused for inferring reconciliations and realizations, both for sampling and for maximum a posteriori (MAP) estimation. Sampling is performed by, in preorder over the vertices of the gene tree G, sampling discretization vertices of V(S′) to which the gene tree vertices are mapped. That is, for each internal vertex u of the gene tree, i.e., u ∈ V(G) \ L(G), a vertex x in S′ to which u is mapped is sampled, conditioned on where the parent of u is mapped and how the process continued from there. That u is mapped to x will be denoted 'u → x'. We will also determine the type of event that a gene tree vertex u mapped to x corresponds to, and denote this 'u → x, speciation', 'u → x, transfer' or 'u → x, duplication', with the natural interpretation. MAP estimation is performed using dynamic programming, by adapting the method for computing p(G, l | θ, S). For details, please see Additional file 1.
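Schematically, the preorder sampling of a realization can be written as below; the sketch assumes that a function cond(u, parent_assignment), returning a normalized probability table over (discretization vertex, event type) pairs, has already been prepared from the dynamic programming data structures, which is a simplification of the procedure detailed in Additional file 1.

```python
import random

def sample_realization(preorder_vertices, parent_of, cond):
    """Sample a mapping u -> (discretization vertex, event type) for each internal
    gene tree vertex, visiting the vertices in preorder (root first).

    cond(u, parent_assignment) must return a dict {(vertex, event): probability}
    summing to one, conditional on where u's parent was mapped.
    """
    assignment = {}
    for u in preorder_vertices:
        parent_assignment = assignment.get(parent_of.get(u))
        table = cond(u, parent_assignment)
        keys, weights = zip(*table.items())
        assignment[u] = random.choices(keys, weights=weights)[0]   # e.g. ('x17', 'transfer')
    return assignment
```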
Comparing realizations
We want to quantify the difference between two realizations, d and d′, of a gene tree G, in order to compare true realizations with the inferred realizations in simulations. The topological distance D_G is defined as the length of the path between the two transfer vertices in G. A gene tree might have more than one transfer event, and we therefore consider both the average and the maximum of the topological distance between the transfer events of the two corresponding realizations. Let q be a posterior distribution over realizations of MAP gene trees (obtained in the MCMC framework). For every d′ from q, we get an average topological distance D_Ga(d, d′ | G) and a maximum topological distance D_Gm(d, d′ | G). Expectations of these two distances with respect to q are then computed and are denoted E[D_Ga(d, q | G)] and E[D_Gm(d, q | G)], respectively. We are also interested in quantifying the temporal distances between the corresponding transfer events of any two given realizations. Note that a vertex on the species tree S′ is first sampled for all the transfer events in the realization using the proposed dynamic programming algorithm. Since the species tree S is anchored in time, every transfer event is also associated with a time interval. For each pair of transfer events between any two realizations, we compute the temporal distance D_T. As mentioned above, there may be more than one transfer event in a realization, so we compute the average temporal distance D_Ta(d, d′ | G) and the maximum temporal distance D_Tm(d, d′ | G). Expectations of these distances are then computed across the posterior distribution and are denoted E[D_Ta(d, q | G)] and E[D_Tm(d, q | G)], respectively.
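The topological distance is a plain path length in the gene tree and can be computed as in the following sketch, which assumes a simple child-to-parent dictionary representation of the tree; averaging (D_Ga) and taking the maximum (D_Gm) over the transfer-vertex pairs of two realizations, and then averaging over posterior samples, proceeds directly from this primitive.

```python
def path_length(u, v, parent):
    """Number of edges on the path between u and v in a rooted tree given as a
    child -> parent dictionary (the root maps to None)."""
    def depths_of_ancestors(x):
        depth, anc = 0, {}
        while x is not None:
            anc[x] = depth
            x, depth = parent.get(x), depth + 1
        return anc

    anc_u = depths_of_ancestors(u)
    x, d = v, 0
    while x not in anc_u:          # climb from v until the first common ancestor
        x, d = parent[x], d + 1
    return d + anc_u[x]

# Toy gene tree: r -> (a, b), a -> (c, d); transfer vertices c and b are 3 edges apart.
parent = {"r": None, "a": "r", "b": "r", "c": "a", "d": "a"}
print(path_length("c", "b", parent))   # 3
```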
Convergence tests
Three different convergence diagnostics were used to check for non-convergence of the MCMC chains: Geweke [23], Gelman-Rubin [24], and estimated sample size (ESS) [25], using VMCMC [26]. A burn-in was chosen for each MCMC trace using the max-ESS estimator [25]. Each MCMC chain was run for 5·10^6 iterations and a thinning factor of 500 was used.
Synthetic data generation
To evaluate our method, we performed tests on synthetic datasets. We used the species tree obtained by Abby et al. [27] and generated 500 synthetic gene trees. For biological realism, the synthetic families were generated using parameters sampled from the DLTRS posteriors of the Cyanobacteria families studied in Sjöstrand et al. [28]. Since the focus of our study is to detect LGT events, only LGT rates high enough that a transfer event was expected were used. To be able to compare LGT results, we constrained our tests to the 303 gene families for which the MAP gene tree was correctly inferred. Of those, there were 117 families with generated LGT events. GenPhyloData [28] was used for the generation of ultrametric gene trees and subsequent branch relaxation (i.e., simulating a relaxed molecular clock), and sequences were generated using SeqGen [29]. We modified GenPhyloData such that the information about the donor lineage (labeled 'From') and the recipient lineage (labeled 'To') in the realizations was noted for each transfer event.
Synthetic data results
As a first assessment, we wanted to know whether the method infers the correct number of LGT events.
In 129 out of 303 gene families, the corresponding posterior distribution has at least 80 % of the realizations with the correct number of LGT events, and 170 gene families had at least 50 % of the realizations with the correct number of LGT events. At the other end of the histogram, there are 74 gene families where less than 20 % of the realizations in the corresponding posterior distributions infer the correct number of LGT events (see Additional file 1: Figure S3).
Finding the correct number of LGT events is informative, but finding the correct vertex on the gene tree where the transfer has occurred is more valuable for biological interpretation. Additional file 1: Figure S4A shows the fraction of realizations in the posterior distribution having the same vertex as the one in the true gene tree where the LGT event occurred. There are 24 cases where at least 98 % of the realizations in the corresponding posterior distribution have the same LGT vertex as the true tree, while there are eight cases where the correct LGT vertex could not be identified.
Since our method is species tree-aware, another question is how often the From and To lineages in the species tree are correctly identified. When the correct LGT vertex was identified and a posterior probability > 0.5 was required, our method identified the correct From lineage in 82 out of 117 synthetic families (Additional file 1: Figure S4B; there are 82 families with posterior probability > 0.5). Similarly, 73 out of 117 To lineages (Additional file 1: Figure S4C) were correctly inferred. In 73 cases out of 117, both the From and To lineages were correctly inferred (Additional file 1: Figure S4D).
The placement of a transfer can be ambiguous even if the true gene tree is known. We therefore assessed predictions with the topological and temporal distance metrics (see above), measuring how far away from the true LGT event the estimated posterior is. Figure 1a and b show the performance of our method according to E[D_Ga(d, q | G)] and E[D_Gm(d, q | G)], respectively. As expected (from the correctly placed LGT events above), both distance metrics are zero in most cases. However, there are considerably fewer than 73 families with distance 0, and this is due to the conservative definition of the distance metrics: even when the MAP prediction is correct, the distances can be non-zero. Similarly, the performance for the temporal distance metrics E[D_Ta(d, q | G)] and E[D_Tm(d, q | G)] is shown in Fig. 1c and d, respectively. We note that although there are more families for which the temporal metrics are zero or relatively low, there are some families for which the distances are relatively high.
Inferred transfers in Mollicutes and Cyanobacteria
We applied our method to the two biological datasets studied by Sjöstrand et al. [28]: Mollicutes and Cyanobacteria. The Mollicute dataset comprises 726 gene families from 14 strains and the Cyanobacteria dataset consists of 2296 gene families from 13 strains.
Based on the posterior probabilities of LGT, we estimate a total of 266 expected transfers in the Mollicutes dataset, i.e., on average about one LGT in every third gene family, and we have 122 predicted LGT events with posterior probability higher than 0.5. Similarly, in Cyanobacteria, the total expected number of transfers in the MAP samples was estimated to be 575, i.e., about one LGT in every fourth gene family, and 94 LGT events are predicted with probability higher than 0.5. We found that transfer events are not distributed evenly across the different lineages of the Mollicutes and Cyanobacteria phylogenies (see Fig. 2 and Additional file 1: Figure S5). Some inferred LGT events occurred in a significant number of gene families. For instance, a transfer between Mesoplasma florum L1 and the ancestral copy of Mycoplasma capricolum subsp. capricolum ATCC 27343 and Mycoplasma mycoides subsp. mycoides SC PG1 appeared (with posterior probability higher than 0.5) in 116 gene families (Fig. 2, the transfer event over the edge 3, 6). Figure 2 and Additional file 1: Figure S5 show putative LGT highways detected by our method for the Cyanobacteria and Mollicutes datasets. In Additional file 1: Figure S5, we can see that our method finds some of the LGT highways in the earlier branches of Cyanobacteria, but there are also strong signals of LGT highways in the recent lineages. Similar trends have been observed in the case of Mollicutes (see Fig. 2). In Cyanobacteria, our results regarding LGT highways are consistent with those presented by Sjöstrand et al. [28], Zhaxybayeva et al. [30], and Dvorak et al. [31]. For instance, our method detected the two major LGT highways reported by Sjöstrand et al. [28], i.e., β ff ↔ β t and β hs ↔ β t, where β ff represents the freshwater and filamentous sub-clade of the Cyanobacteria species tree, β hs denotes hot spring colonies, and β t represents terrestrial Cyanobacteria (see Additional file 1: Figure S5). However, in contrast to the analysis by Sjöstrand et al. [28], we also find some recent LGT highways in the marine subclade of Synechococcus (see Additional file 1: Figure S5); this observation corroborates work by Dvorak et al. [31]. We have also noticed a likely LGT event from M. synoviae to M. gallisepticum, matching the results reported in Vasconcelos et al. [32] (Fig. 2, edge 11, 15).
Discussion
We present a probabilistic method that takes a gene family, represented by a multiple sequence alignment, and a dated species tree as input; as output, it provides samples of reconciliations of gene trees with the species tree from the posterior distribution. The method employs an MCMC framework and is based on the probabilistic DLTRS model [14], an integrated model of gene duplication, gene loss, lateral gene transfer, and sequence evolution in the presence of a relaxed molecular clock. This is, to the best of our knowledge, the first probabilistic method that takes gene sequence data directly into account when sampling reconciliations of gene and species trees, i.e., not merely when constructing the gene tree. It has been shown, both on simulated and on genomic data, that using species-tree-aware methods gives better gene-tree reconstruction [15,33]. Species-tree-aware methods are sensitive to errors in reconstructed species trees; however, resources such as TimeTree [34] and recent species tree reconstruction methods, such as Phyldog and MixTreEM [35,36], appear to be sufficiently reliable.
For future work, extending the model to incorporate even more biological knowledge is of interest. In particular, being able to distinguish incomplete lineage sorting (ILS) would be informative, especially since there are scenarios inferred by DLTRS that might be better to interpret as ILS.
Conclusions
Our simulation results show that the DLTRS sampler performs well in terms of identifying gene-tree edges corresponding to LGT events. In addition, it often also correctly identifies the species tree edges between which LGT events have occurred, i.e., both the species lineage that the gene is transferred from and the one it is transferred to. This behaviour suggests that it can provide an accurate method for identifying highways of LGT. In fact, we used this From and To lineage information in our analysis of the biological datasets and detected some of the interesting LGT highways that have been reported by others [28,30,31]. Finally, our method also provides good temporal estimates of LGT events over the species tree. | 5,349.4 | 2016-11-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Rigosertib ameliorates the effects of oncogenic KRAS signaling in a murine model of myeloproliferative neoplasia
Aberrant signaling triggered by oncogenic or hyperactive RAS proteins contributes to the malignant phenotypes in a significant percentage of myeloid malignancies. Of these, juvenile myelomonocytic leukemia (JMML), an aggressive childhood cancer, is largely driven by mutations in RAS genes and those that encode regulators of these proteins. The Mx1-cre kras+/G12D mouse model mirrors several key features of this disease and has been used extensively to determine the utility and mechanism of small molecule therapeutics in the context of RAS-driven myeloproliferative disorders. Treatment of disease-bearing KRASG12D mice with rigosertib (RGS), a small molecule RAS mimetic that is in phase II and III clinical trials for MDS and AML, decreased the severity of leukocytosis and splenomegaly and extended their survival. RGS also increased the frequency of HSCs and rebalanced the ratios of myeloid progenitors. Further analysis of KRASG12D HSPCs in vitro revealed that RGS suppressed hyperproliferation in response to GM-CSF and inhibited the phosphorylation of key RAS effectors. Together, these data suggest that RGS might be of clinical benefit in RAS-driven myeloid disorders.
INTRODUCTION
Somatic mutations of RAS genes are present in approximately 5-40% of hematological malignancies and often arise as secondary events that cooperate with other driver mutations [1,2]. In addition to mutation of RAS genes themselves, mutations of genes such as PTPN11 or NF1 often result in a loss of negative regulatory cues and ultimately lead to hyperactivation of RAS-driven signaling [1,2]. Of the myeloid malignancies that harbor RAS mutations or exhibit abnormally high levels of RAS activity, juvenile myelomonocytic leukemia (JMML), an aggressive and rare childhood cancer [3], almost invariably (~90%) presents with driver mutations in KRAS, NRAS or other genes encoding RAS pathway regulatory proteins [4,1,5]. The high frequency of these genomic alterations suggests that targeting RAS signaling, either by inhibiting RAS proteins themselves, their effectors, or their regulators, might be an effective strategy to combat this and other myeloid malignancies that are RAS pathway-dependent.
Rigosertib (RGS) is a small molecule RAS mimetic [6] that is currently in phase II and III clinical trials for high-risk myelodysplastic syndrome (MDS) either as a single agent or in combination with hypomethylating agents (HMAs) [7][8][9]. Previous studies by us and others have shown that treatment of MDS cell lines and primary bone marrow isolated from MDS patients with RGS resulted in the induction of apoptosis as well as inhibition of RAF1 and AKT phosphorylation at residues that are critical for RAS- and PI3K-driven signaling [10][11][12]. These pre-clinical data, combined with the agent's safety profile revealed in clinical trials [10,13], suggest that RGS might be an effective therapeutic in hematological malignancies that exhibit altered RAS-driven signaling and for those where there is not already a perceived clinical benefit [7].
To further examine the effects of RGS in RAS-dependent myeloid disorders, we utilized the Mx1-cre Kras+/LSL-G12D mouse model, which phenocopies many key aspects of JMML. These mice develop an aggressive and lethal myeloproliferative neoplasm (MPN) with rapid onset and present with severe anemia, hepatosplenomegaly and leukocytosis [14]. Here, we present data demonstrating that treatment with RGS improves the disease burden in MPN-bearing animals. Our studies show that RGS-treated mice show improvements in complete blood counts and a reduction in the degree of splenomegaly due to a decrease in erythroid cells that accumulate in the spleen. Importantly, we also show that treatment with RGS resulted in a clear survival benefit, suggesting that this compound might be useful in the treatment of myeloid disorders.
Effect of rigosertib on KRAS G12D -driven myeloproliferative neoplasia
To determine whether rigosertib (RGS) reduces the disease burden in RAS-dependent myeloproliferative neoplasias (MPNs), Mx1-cre Kras+/G12D mice [14] were treated with a single dose of polyinosinic:polycytidylic acid (pIpC) to induce KRAS G12D expression in the hematopoietic compartment, and the disease was allowed to progress over a 14-day period. Complete blood counts performed at this time showed that the MPN phenotype was readily evident, as animals presented with marked leukocytosis in the peripheral blood as well as organomegaly of both the liver and spleen (Figure 1A and 1B). Mx1-cre Kras+/G12D mice treated with RGS [6] over this 2-week period had reduced white blood cell counts (WBCs), with a reduction in neutrophil counts being largely responsible for the overall decrease in WBCs (Figure 1A and data not shown). Monocytosis, which is pronounced in this model [14] and a characteristic feature of JMML and chronic myelomonocytic leukemia (CMML) [2,3,5], persisted in RGS-treated animals.
Further examination of the livers and spleens of vehicle- and RGS-treated Mx1-cre Kras+/G12D mice revealed that while the livers of animals in both treatment groups remained enlarged at the end of the 2-week treatment period, the degree of splenomegaly in RGS-treated animals was significantly reduced in terms of both organ weight and overall cell number (Figure 1B and 1C, respectively). The cellularity of the bone marrow (BM), which is reduced as a function of KRAS G12D expression [13,14], was not improved with RGS treatment (Figure 1C).
Rigosertib suppresses extramedullary erythropoiesis in the spleens of KRAS G12D mice
Extramedullary hematopoiesis often occurs in myeloproliferative disorders and is recapitulated in the Mx1-cre Kras+/G12D model [14,15]. Flow cytometric analysis of the cell types present in the spleen showed that treatment with RGS predominantly reduced the number and frequency of TER119+ CD71hi cells (Figure 2A); the number of mature myeloid cells was also reduced in the spleens of RGS-treated animals by more than 30% (Figure 2C), although this difference fell slightly short of significance (p=0.057) and did not translate into a reduction in the frequency of these cells. Analysis of these erythroid and myeloid populations in the bone marrow also showed that they were reduced by 20-25% in RGS- versus control-treated animals (Figure 2B and 2D). Together, these results demonstrate that the improvement in splenomegaly induced by short-term RGS treatment is largely due to the selective loss of erythroid progenitors and, to a lesser extent, CD11b+ myeloid cells.
Short-term rigosertib treatment influences the nature of the stem and progenitor compartments in the bone marrow of KRAS G12D mice
Previous studies have shown that the MPN that develops in Mx1-cre Kras+/G12D mice originates in the hematopoietic stem cells (HSCs) within the bone marrow [16][17][18]. A more detailed examination of the bone marrow revealed that while KRAS G12D-induced abnormalities in cellularity were not significantly improved in RGS-treated mice (Figure 1C), the frequency of Lin-Sca1+cKit+ (LKS+) CD150+CD48- HSCs, which is reduced as a consequence of KRAS G12D expression [16,18,19], was slightly but significantly increased in RGS-treated animals (Figure 3A). The abnormal bias toward granulocyte-monocyte lineage progenitors at the expense of those of the erythroid lineage that is conferred by KRAS G12D expression [16,17] was shifted in the RGS-treated group toward the distribution observed in wild-type animals, whereby the frequency of megakaryocyte/erythrocyte precursors (MEPs) (Lin-Sca1-cKit+CD34lo CD16/32[FcγR]lo) was increased and that of granulocyte/macrophage precursors (GMPs) (Lin-Sca1-cKit+CD34+CD16/32[FcγR]hi) was reduced (Figure 3B).
We also assessed whether the hypersensitivity of myeloid progenitors to GM-CSF, which is a hallmark of JMML and CMML [2,3], was sensitive to the effects of RGS. For this study, whole bone marrow was isolated from pIpC-treated wild-type and Mx1-cre Kras+/G12D mice and plated in methylcellulose in the presence of increasing concentrations of granulocyte-macrophage colony-stimulating factor (GM-CSF) and, where indicated, RGS. Figure 3C shows that, as expected, the colony forming units (CFUs) that developed from KRAS G12D bone marrow were hypersensitive to GM-CSF and grew at low concentrations of this cytokine, whereas no CFUs were detectable in cultures derived from wild-type bone marrow. Treatment with RGS suppressed the formation of CFUs in KRAS G12D cultures as a function of GM-CSF concentration, and at higher concentrations (>1 ng/ml) the number of CFUs was equivalent to those formed by wild-type bone marrow.
Rigosertib inhibits oncogenic RAS-driven signaling in primary KRAS G12D mouse bone marrow
To confirm that RGS inhibited RAS-driven signaling in hematopoietic stem and progenitor cells (HSPCs), whole bone marrow was isolated from pIpC-treated Mx1-cre Kras+/G12D mice and grown for 16 hours in the presence or absence of RGS. The cells were then stimulated with stem cell factor (SCF) and GM-CSF and subjected to flow cytometric analysis to assess the phosphorylation status of ERK, AKT and ribosomal S6 (an AKT effector) in the HSPC-enriched lineage-negative (Lin-) compartment. Figures 3D and 3E show that treatment with RGS attenuated the phosphorylation of ERK, AKT and S6, proteins whose functions are regulated by RAS [21,22] and which are activated in Mx1-cre Kras+/G12D mice [16]. We also examined the phosphorylation status of STAT5, which has recently been shown to mediate the phenotypes observed in mutant and hyperactive K- and NRAS-driven hematopoietic disease in animal models [16,19,[23][24][25][26] and is often aberrantly activated in JMML and other myeloid malignant cells treated with low concentrations of GM-CSF [25]. Treatment with RGS also inhibited cytokine-induced STAT5 phosphorylation in Lin- cells (Figure 3D and E), providing confirmatory evidence that this compound blocks multiple facets of RAS-driven signaling.
Treatment with rigosertib improves survival in KRAS G12D mice with myeloid neoplasia
To determine whether the phenotypic improvements observed in RGS-treated mice might enhance survival, we treated cohorts of wild-type and KRAS G12D mice with vehicle or RGS and monitored these animals over a 2-month period. As seen in Figure 4, while the KRAS G12D vehicle-treated animals succumbed to the effects of MPN at a median of 26 days, RGS-treated mice survived significantly longer, with a median survival of 48 days. Of the 6 RGS-treated animals that eventually succumbed to a lethal myeloproliferative disorder, 5 simultaneously developed T-cell acute lymphoblastic leukemia (T-ALL)/thymic lymphoma with a predominance of CD4+CD8+ and CD8+ cells (data not shown) [17,18]. This observation is similar to those seen with MPN-bearing KRAS G12D mice that have been treated with MEK, PI3K or AKT small molecule inhibitors, where this non-myeloid malignancy appears to be the primary cause of death in a small percentage of KRAS G12D mice treated with those targeted therapeutics [19,20]. However, unlike these inhibitors, which normalize CBCs and prevent MPN-induced lethality, treatment with RGS appears to delay disease progression, as analysis of the hematopoietic compartment in moribund RGS-treated animals displayed the phenotypic characteristics associated with KRAS G12D-driven MPN (data not shown).
DISCUSSION
Here, we present data describing the effects of RGS, a small molecule RAS mimetic [6], as a therapeutic agent in a pre-clinical mouse model of KRAS G12D-driven MPN. Previous studies using compound genetically modified mouse models and small molecule inhibitors have highlighted the utility of inhibiting downstream effectors of RAS proteins in the treatment of RAS-driven myeloid malignancies [1,2,19,20,[27][28][29]. In these instances, overall survival, as well as both disease burden and malignant phenotypes, were dramatically improved, demonstrating that blocking signals that are transmitted downstream of RAS has the potential to be clinically beneficial, even in the absence of elimination of the malignant clone. Our studies show that short-term treatment (2 weeks) with RGS is able to reduce the degree of leukocytosis in myeloproliferative disease (MPD)-bearing KRAS G12D mice, as evidenced by reductions in the number of WBCs, particularly neutrophils, in the peripheral blood. This improvement was consistent with a reduction (~30%) in the number of CD11b+ myeloid cells in the spleen and, to a lesser extent, in the bone marrow (~20%). Although the frequency of these cells remained similar to that of vehicle-treated animals, the substantial loss of TER119+ cells in the spleen (discussed below) likely results in a commensurate increase in the frequencies of other populations.

(Figure 4 legend: mice were treated once daily with vehicle (PBS) or 100 mg/kg rigosertib as described in the methods section. Median survival was 26 days versus 48 days for the vehicle and rigosertib-treated groups, respectively, as estimated by Kaplan-Meier survival analysis. p=0.028 as calculated by the log-rank test (n=8 and 7 mice per wild-type and KRAS G12D cohorts, respectively).)
Analysis of the spleen revealed that the severity of splenomegaly was improved by RGS treatment, largely due to a reduction in the number of TER119+ CD71hi erythroid progenitors. Although the nature of this response would be considered palliative at best in the absence of improvements in erythropoiesis and anemia in the remainder of the hematopoietic compartment, patients with hematological disorders that are phenocopied here could still achieve clinical benefit from RGS. Ruxolitinib, a JAK1/JAK2 inhibitor which is used for the treatment of myelofibrosis, received approval from multiple agencies due to its ability to reduce symptoms of the disease, including splenomegaly, by 35% [30,31]. Given that RGS-treated KRAS G12D mice survived significantly longer than those treated with vehicle, it is tempting to attribute this to a reduction in spleen volume, possibly in conjunction with improvements in overall peripheral blood counts. However, as mentioned in the results section, this response is not durable, and the majority of animals treated long-term (2 months) ultimately succumbed to the effects of MPN as well as T-cell leukemia. It should be noted, however, that the dosing regimen used herein is a caveat of the long-term study. We have previously shown that the number and grade of pancreatic intraepithelial neoplastic lesions in Pdx-cre Kras+/G12D mice were significantly reduced in animals treated twice daily with 200 mg/kg RGS and that the decrease in tumor grade and burden correlated with inhibition of RAS-driven signaling and the induction of apoptosis [6]. Although we did observe phenotypic improvements in our short-term study that utilized the same dose of RGS, mice in the long-term study (Figure 4) were treated once daily with 100 mg/kg RGS to minimize the effects of repeated intraperitoneal injections over a 2-month period. Hence, we were unable to determine the utility of RGS in prolonging survival in this model to the fullest extent.
We also examined the effects of RGS on HSPCs, since the MPN in KRAS G12D mice originates in the HSCs and is also manifested in hematopoietic progenitors [16][17][18]. KRAS G12D expression in the bone marrow results in a loss of HSCs, with those that remain having enhanced repopulating ability due to increased cell cycle entry. Cell cycle progression in myeloid progenitors is also enhanced, although these cells are unable to initiate leukemias in competitive bone marrow transplantation assays [17,18]. The frequency of HSCs was significantly increased in RGS-treated animals compared to the vehicle-treated cohort, suggesting that RGS might have the ability to alter the behavior of the stem cell pool. It is, however, unclear whether the neoplastic behavior of these cells is altered, in the absence of data from bone marrow transplantation studies. The percentage of MEPs, which is decreased as a function of KRAS G12D expression [16,17], was also restored to nearly normal levels in response to RGS treatment and was associated with a concomitant decrease in the abnormally high frequency of GMPs in these animals. Interestingly, hyperactivation of STAT5 in Mx1-cre Nras+/G12D mice is mainly due to the expansion of granulocytic-monocytic progenitors in response to GM-CSF [26] as well as to proliferating and self-renewing HSCs [23]. The mechanism by which mutant RAS isoforms activate STAT5 is not understood, and it is unclear whether these mechanisms also apply to KRAS G12D-driven MPDs. Given that the perceived phenotypic corrections in HSPCs within the bone marrow do not always translate into improvements in all hematopoietic tissues, use of RGS in combination with other agents that might synergize with and enhance its effect in these cell types might be of clinical benefit.
Mice
Kras+/LSL-G12D (stock 008179) and Mx1-Cre (stock 003556) mice were purchased from The Jackson Laboratory. Breeding and experiments were performed under protocols approved by the Icahn School of Medicine at Mount Sinai's Institutional Animal Care and Use Committee according to federal and institutional guidelines and regulations. Polyinosinic:polycytidylic acid (pIpC) (Sigma) was resuspended at a concentration of 2.5 mg ml-1 in sterile Dulbecco's PBS (D-PBS). Mice were injected intraperitoneally (ip) with a single dose of pIpC (250 μg) at 4-6 weeks of age. Treatment with vehicle (sterile PBS) or GMP-grade rigosertib (RGS) (Onconova Therapeutics, Inc.) was initiated 10-14 days post-pIpC treatment. RGS was administered twice daily via ip injection, 5 days per week, at a dose of 200 mg/kg for short-term (2-week) studies, as previously described by us [6]. Mice treated long-term (2 months) received RGS once daily at a dose of 100 mg/kg on a 5-days-on, 2-days-off schedule. RGS was freshly dissolved in PBS at the time of each injection for all studies. Tissues were harvested at the times indicated. Complete blood counts were measured using a Hemavet 950 multi-species hematology system (Erba Diagnostics).
Colony formation assays
1x10 4 whole bone marrow cells isolated from pIpC-treated Kras +/G12D or wild-type mice were plated in 1.5ml of Methocult M3231 (Stemcell Technologies) supplemented with the indicated concentrations of GM-CSF (Peprotech) and RGS in a non-treated 35mm dish in duplicate. Cells were cultured for 7 days before colonies were scored.
Intracellular staining and subsequent flow cytometric analysis of phosphorylated proteins in RGS-treated primary bone marrow was performed as follows: single cell suspensions were prepared in Iscove's Modified Dulbecco's medium supplemented with 10% heat-inactivated FBS, seeded at a density of 1x10 7 cells/ml in the presence of vehicle (PBS) or 2μM RGS and grown for 16 hours at 37°C under humidified conditions and 5% CO 2 . The cells were then stained with Lin cocktail antibodies conjugated to PE for 15 minutes at 37°C prior to stimulation with 100ng/ml SCF and 10ng/ml GM-CSF (Peprotech) for 5 min where indicated. Fixation was performed using 1X Lyse/Fix buffer (BD Biosciences) according to the manufacturer's instructions. Cells were then permeabilized on ice for 30 min using Perm Buffer III (BD Biosciences) and washed extensively using PBS prior to staining with the indicated phospho-antibodies on ice for 1 hr. The samples were then washed with PBS and subjected to flow cytometric analysis on the day of staining. AF647-conjugated phospho-specific antibodies directed against phospho-p44/42 MAPK Thr202/Tyr204 (E10), phospho-STAT5 Tyr694 (C71E5), pAKT Ser473 (D9E) and phospho-S6 ribosomal protein (Ser235/236) (D57.2.2E) were purchased from Cell Signaling Technology.
Data were acquired using an LSRFortessa X-20 or FACS Canto (BD Biosciences) and analyzed using FlowJo v10 software (Treestar). Bone marrow from 2-3 Kras+/G12D mice was pooled prior to culturing for each independent experiment.
Statistical analysis
All data were analyzed using Prism 7 (GraphPad). Kaplan-Meier survival estimates were analyzed using the log-rank test. Statistical analysis of differences in CBC numbers as well as population subsets in the bone marrow, spleen, liver and peripheral blood were performed using a standard, unpaired, two-tailed Student's t test. Data are graphed as mean ± SD. Results are considered significant at p≤0.05.
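For readers who wish to reproduce this type of analysis with open-source tools rather than Prism, the snippet below shows the corresponding tests with scipy (unpaired, two-tailed t test) and the lifelines package (Kaplan-Meier estimate and log-rank test). The numerical values are placeholders for illustration only and are not data from this study.

```python
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Unpaired, two-tailed Student's t test on two groups (placeholder values).
vehicle_wbc = [52.1, 61.3, 48.7, 70.2]
rgs_wbc = [30.5, 41.2, 28.9, 35.6]
t_stat, p_val = stats.ttest_ind(vehicle_wbc, rgs_wbc)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Kaplan-Meier estimate and log-rank test (placeholder survival times in days;
# event = 1 means the animal died, event = 0 means censored at study end).
veh_days, veh_event = [22, 25, 26, 27, 30], [1, 1, 1, 1, 1]
rgs_days, rgs_event = [40, 45, 48, 55, 60], [1, 1, 1, 1, 0]
kmf = KaplanMeierFitter().fit(rgs_days, event_observed=rgs_event, label="rigosertib")
print("median survival:", kmf.median_survival_time_)
result = logrank_test(veh_days, rgs_days,
                      event_observed_A=veh_event, event_observed_B=rgs_event)
print(f"log-rank p = {result.p_value:.4f}")
```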
ACKNOWLEDGMENTS AND FUNDING
This work was supported by grants from Onconova Therapeutics Inc. (Newtown, PA) and NIH grant 5R21CA227963-02 to EPR. Use of the flow cytometry shared resource facility was supported by an NIH Cancer Center Support Grant (P30CA196521) to the Tisch Cancer Institute.
CONFLICTS OF INTEREST
EPR is an equity holder, board member and a paid consultant of Onconova Therapeutics, Inc. MVRR and SCC are stockholders and paid consultants of Onconova Therapeutics Inc. SJB is a paid consultant of Onconova Therapeutics Inc. All authors are named inventors on pending and/or issued patents filed by Temple University. | 4,452 | 2019-03-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Muscle Tissue as a Surrogate for In Vitro Drug Release Testing of Parenteral Depot Microspheres
Despite the importance of drug release testing of parenteral depot formulations, the current in vitro methods still require improvements in biorelevance. We have investigated here the use of muscle tissue components to better mimic intramuscular administration. For convenient handling, muscle tissue was used in the form of a freeze-dried powder, and a reproducible process for incorporating the tested microspheres into an assembly of muscle tissue of standardized dimensions was successfully developed. Microspheres were prepared from various grades of poly(lactic-co-glycolic acid) (PLGA) or ethyl cellulose, entrapping flurbiprofen, lidocaine, or risperidone. The deposition of microspheres in the muscle tissue, or the addition of only the isolated lipids into the medium, accelerated the release rate of all model drugs from microspheres prepared from ester-terminated PLGA grades and ethyl cellulose, but not from the acid-terminated PLGA grades. The addition of lipids into the release medium increased the solubility of all model drugs; nonetheless, interactions of the lipids with the polymer matrix (ad- and absorption) might also be responsible for the faster drug release. As the in vivo drug release from implants is also often faster than in simple buffers in vitro, these findings suggest that interactions with tissue lipids may play an important role in these still unexplained observations. Supplementary Information The online version contains supplementary material available at 10.1208/s12249-021-01965-4.
INTRODUCTION
The drug release rate from implantable formulations is a critical parameter of their therapeutic effectiveness. However, the release rate and duration obtained by the current in vitro release testing methods often strongly differ from the subsequent results of animal and human trials as the oversimplified in vitro conditions fail to mimic the complex environment of the tissue. The factors responsible for these differences are yet mostly unknown or only hypothetical. An ideal "biorelevant" in vitro method would specifically consider all factors affecting drug release in vivo. The urgent need for biorelevant in vitro methods has been repeatedly emphasized (1)(2)(3)(4), as achieving a more accurate simulation can reduce patients' risk of receiving an inadequate daily dose, accelerate formulation development, and decrease the number of animal experiments.
Biorelevant dissolution has long been a vividly investigated topic in the field of oral formulations. Enhancing the simple buffers used as dissolution media with additional physiological parameters (e.g., surfactants, enzymes, concomitant food intake, biphasic dissolution, etc.) has in many cases been shown to provide better in vitro-in vivo correlations (5)(6)(7). In contrast, methods for parenteral formulations are still at the stage of simple buffered media. Furthermore, due to the lack of standardization and the absence of a dedicated pharmacopeial apparatus, the setups vary largely between research groups: simple agitated/stirred vials, dialysis membranes (for physical separation of particulate systems), or flow-through cells (mostly the USP 4 apparatus, originally designed for oral formulations) are currently applied (3). Independently of the setting, they all rely on the use of simple buffered release media (mostly phosphate buffer) of physiological pH 7.4 and temperature 37°C (3,4). Only a few studies have described attempts at the application of biorelevant media, either with respect to the ionic composition (8) or with additional specific components of the extracellular matrix (ECM), such as hyaluronic acid, either in a dialysis model mimicking the subcutaneous tissue (9) or as a component of a simulated synovial fluid (10)(11)(12). An alternative approach is release testing in hydrogels, which are supposed to simulate the gel-like physical nature of the ECM (13)(14)(15)(16)(17)(18)(19); however, the hydrogel-forming polymers used, such as agar or agarose, do not resemble the specific chemical composition of the tissue. Electrostatic binding interactions between the released drug and the components of the ECM can occur, since collagen is positively charged at physiological pH and hyaluronic acid and chondroitin sulphate are negatively charged (9,20).
The drug release from implants is often faster in vivo than during in vitro testing in simple buffers (21)(22)(23)(24)(25)(26). In the case of biodegradable polyesters such as poly(lactic-co-glycolic acid) (PLGA), this is commonly explained by faster hydrolytic degradation of the polymer and erosion due to the enzymes present in the living tissue, despite the fact that the effect of the enzymes is inconclusive and many studies support non-enzymatic hydrolysis (22,23,27). Therefore, attempts should be made to investigate the influence of further components present in the tissue environment to better understand the complex in vivo situation. Amongst the diverse components in the tissue which can affect the structural integrity of implanted materials are the lipids (28,29). The locations of intramuscular (i.m.) or subcutaneous (s.c.) administration, i.e., the muscle and the adipose tissue, are heavily involved in the turnover of lipids and fatty acids: there is a constant "flux" of lipids and fatty acids through the blood capillaries and the interstitium, either as a source of energy (muscle) or for storage and release for other cells (adipose tissue). The membranes of cells and extracellular vesicles are also formed by a lipid bilayer. Therefore, the lipids and the fatty acids may either directly interact (e.g., ad-/absorption) with the i.m./s.c. implanted microspheres or contribute to drug partitioning, and so influence the drug release rate.
Excised animal tissues are commonly used for the in vitro testing of pharmaceuticals, as they provide a morphology and chemical composition equivalent to the intended site of administration: e.g., human/porcine skin for topical formulations and transdermal patches (30,31), or porcine eyes for ophthalmic formulations (32)(33)(34). Analogously, we have investigated here for the first time the possibility of applying excised porcine muscle tissue as an in vitro model simulating the intramuscular environment for the testing of depot microspheres.
The muscle tissue was used in the form of a freeze-dried powder for more convenient handling and better reproducibility compared with bulk muscle tissue. The tested microspheres were prepared from ethyl cellulose or various grades of PLGA. Several studies have reported faster risperidone release from depot PLGA microspheres in vivo than in vitro (26,35); therefore, we chose risperidone as a good candidate for encapsulation to study the influence of the investigated release testing conditions and their biorelevance. Flurbiprofen and lidocaine were also included as additional drugs for encapsulation. We further investigated the effect of lipids isolated from the muscle tissue on drug solubility and drug release from the microspheres, and, in addition, the binding interaction of the model drugs with the freeze-dried muscle tissue.
Microsphere preparation
The microspheres were prepared using a simple oil-in-water (o/w) emulsion solvent extraction/evaporation method as described earlier (19). The drug (flurbiprofen, lidocaine, or risperidone; 120-250 mg, depending on the drug loading) and the polymer (PLGA or ethyl cellulose), in a total amount of 1 g, were dissolved in 7 ml of dichloromethane and emulsified in 20 ml of 1% polyvinyl alcohol (PVA) at 750 rpm for 1 minute. The emulsion was transferred into 1 L of 0.1% PVA to allow evaporation of the dichloromethane under constant stirring (300 rpm). The microspheres were washed, collected on a filter, and vacuum-dried for 24 hours. After drying, the particles were sieved through a 150-μm mesh to remove agglomerates and were used either directly after preparation or stored at 4°C until used (for not longer than 1 week). The size of the microspheres was measured using a laser diffraction particle size analyzer LA-960 (HORIBA, Kyoto, Japan). The median particle size is given in the following sections as the parameter D50.
To determine the drug loading, 20.0 mg of the microspheres was dissolved in 2 ml of acetonitrile and then made up to 25.0 ml with either 0.1-M HCl (for risperidone and lidocaine) or 0.1-M NaOH (for flurbiprofen). The drugs were quantified using the HPLC methods described below.
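As a brief illustration of this calculation, the sketch below back-calculates the drug loading from the measured HPLC concentration and the dissolution/dilution volumes described above; all numeric inputs are hypothetical placeholders, not values from this study.

```python
# Hypothetical example of the drug-loading calculation described above.
# All numeric inputs are placeholders, not values reported in the study.

def drug_loading_percent(hplc_conc_ug_per_ml: float,
                         final_volume_ml: float = 25.0,
                         microsphere_mass_mg: float = 20.0) -> float:
    """Drug loading (% w/w) from the HPLC concentration of the diluted sample."""
    drug_mass_mg = hplc_conc_ug_per_ml * final_volume_ml / 1000.0  # ug -> mg
    return 100.0 * drug_mass_mg / microsphere_mass_mg

# e.g. a measured concentration of 64 ug/ml in 25 ml from 20 mg of microspheres
print(f"loading = {drug_loading_percent(64.0):.1f} % w/w")  # -> 8.0 % w/w
```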
The simulated muscle setup
As recognized during initial trials, direct injection of the tested microspheres into bulk excised muscle tissue would cause several problems: (1) unknown localization of the "administered" microspheres and, consequently, problematic localization of the released drug; (2) destructive sampling (pieces of muscle tissue would have to be taken as samples) and formation of a gradient of released drug, with the concentration decreasing with increasing distance from the dosage form; in addition, the usual replacement of medium with an equivalent volume of fresh medium, easily applicable with liquid release media, would be very complicated with a muscle tissue; (3) risk of incomplete drug recovery from the muscle tissue; and (4) microbial instability of the tissue over the long testing period at 37°C (several weeks can be expected for implantable microspheres).
To overcome these problems, a novel approach was developed here. The muscle tissue was freeze-dried and subsequently pulverized for better handling and reproducibility. The tested microspheres were then incorporated, together with the freeze-dried muscle tissue powder, into a small assembly held together by an agarose hydrogel (the procedure is described in detail below). This keeps the microspheres in direct contact with the muscle tissue, while the small size of the assembly allows it to be placed into an aqueous buffered release medium (simulating the blood compartment), from which samples can be conveniently taken and analyzed using conventional techniques such as HPLC. In contrast to bulk muscle tissue, the aqueous release medium can be easily replaced, providing the advantage of nondestructive sampling. Agarose was selected for its excellent long-term stability (36) and to mimic the gel-like nature of the extracellular matrix (ECM) (37)(38)(39). Although the microspheres are intended to be suspended in a vehicle before administration to provide injectability, the release rate is determined solely by the characteristics of the microspheres and the vehicle is not intended to contribute to the sustained release; therefore, incorporating the microspheres directly into the muscle tissue without a vehicle should not significantly alter the results.
Preparation of muscle tissue powder
Porcine muscle tissue (from the hind limb) was bought at a local butcher, cut into smaller pieces, frozen at −30°C overnight, and freeze-dried in a Steris Lyovac GT2 freeze dryer (Mentor, OH, USA). The tissue was weighed before and after freeze-drying to determine the water content. The freeze-dried tissue was then pulverized in a Retsch ZM1 centrifugal mill (Haan, Germany) in two cycles of decreasing mesh size (2 mm followed by 0.5 mm).
Incorporation of microspheres into the simulated muscle setup
Microspheres (25.0 mg) and muscle tissue powder (50 mg) were weighed on an analytical balance and mixed using a spatula. Agarose hydrogel (2%) was used as a binding matrix for the mixture of microspheres and freeze-dried muscle tissue powder so that the resulting assembly would not disintegrate upon placement into the release medium. Importantly, the release medium, not pure water, was used for the preparation of the agarose hydrogel, to provide buffered conditions at pH 7.4. To solubilize the agarose, the suspension must be heated to 90-95°C; the sol-gel transition then occurs spontaneously upon cooling below 35-37°C. Because temperatures above 90°C, even for a short contact time, could affect the tested microspheres (burst release or softening of the PLGA), the solubilized agarose was cooled to around 40°C under vigorous stirring (necessary to prevent the sol-gel transition from occurring before application). To provide a reproducible shape and dimensions of the resulting agarose gel/muscle tissue/microsphere assembly, a set of flat punches and a die (12 mm diameter) made of stainless steel was used. First, one punch was partially inserted into the die from below, forming a cavity into which the dry blend of muscle tissue powder and microspheres was filled. The cooled 2% agarose solution (170 μl) was added using a micropipette and the second punch was quickly introduced from the top. The weight of the upper punch applied sufficient pressure for the still-liquid cooled agarose solution to penetrate homogeneously into the blend of muscle powder and microspheres before the sol-gel transition occurred. The added volume of agarose solution was selected to correspond to the amount of water lost during the freeze-drying step, in order to keep the ratio of water to solids the same as in the original muscle tissue. The procedure yields a normalized, reproducible shape and dimensions: the diameter is determined by the inner diameter of the die and the height by the constant mass (weighing on an analytical balance and dosing with a micropipette provide sufficient precision and reproducibility). The thickness of the final disc-shaped assembly was approx. 2 mm (Fig. 1). The final assembly was left standing for 5 minutes to allow complete gelation of the agarose hydrogel and then transferred into 20.0 ml of release medium.
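The choice of the agarose solution volume can be illustrated with a minimal sketch that restores the original water-to-solids ratio from the mass lost on freeze-drying; the tissue masses used here are placeholders, and the assumption of roughly 1 g/ml for the aqueous agarose solution is ours, not the authors'.

```python
# Sketch of the water-restoration logic described above (placeholder numbers).
# The agarose solution volume is chosen to replace the water removed by freeze-drying.

def water_to_restore_ul(fresh_tissue_mass_mg: float, freeze_dried_mass_mg: float,
                        powder_used_mg: float) -> float:
    """Volume of aqueous agarose solution (ul, assuming ~1 g/ml) that restores
    the original water/solids ratio for the amount of powder actually used."""
    water_fraction = (fresh_tissue_mass_mg - freeze_dried_mass_mg) / fresh_tissue_mass_mg
    solids_fraction = 1.0 - water_fraction
    # water originally associated with `powder_used_mg` of dry solids
    water_mg = powder_used_mg * water_fraction / solids_fraction
    return water_mg  # 1 mg of water ~ 1 ul

# e.g. tissue that lost ~75 % of its mass on freeze-drying, 50 mg of powder per assembly
print(f"{water_to_restore_ul(1000.0, 250.0, 50.0):.0f} ul")  # -> 150 ul
```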
Extraction of muscle lipids
The freeze-dried muscle powder was suspended in a 2:1 (v/v) mixture of chloroform/methanol and shaken horizontally for 12 hours. Afterwards, the insoluble components (predominantly the muscle proteins) were separated by filtration and the volatile solvent mixture was left to evaporate at room temperature, followed by vacuum drying for 24 hours to remove solvent residues. The extracted lipids were subsequently emulsified in the release medium using an IKA Ultra Turrax® T 18 (Staufen, Germany). Some components of the extracted lipids (most likely lecithin and free fatty acids), as well as the 0.02% polysorbate 20 in the release medium, acted as surfactants, facilitating the emulsification and stabilizing the resulting emulsion. The concentration used for drug release testing was 30 mg of lipids per ml of release medium unless stated otherwise.
The lipids emulsified in the release medium create a biphasic system of lipid droplets and micelles. The focus here was confined to the effect on drug release from the microspheres; therefore, the released drug was determined in both phases together. The fractions of drug in the lipid phase and in the aqueous phase were not investigated separately, since partitioning is a step that follows release from the microspheres and is not determined by the formulation of the dosage form but only by the physicochemical properties of the particular drug (such as log D).
The muscle tissue powder remaining after extraction of the lipids was also kept and used to prepare the same assembly with microspheres as described in the previous section, in order to test the release behavior in the absence of the lipids.
Drug release conditions
The release medium consisted of 0.1-M sodium phosphate buffer (pH = 7.4) with the osmolarity adjusted to 285 mosmol/l with sodium chloride; 0.05% sodium azide was added to prevent microbial growth and 0.02% polysorbate 20 to provide sufficient wettability of the microspheres and to prevent their aggregation. Sink conditions were maintained for all drugs. A temperature of 37°C was chosen for better consistency with other studies, although the average temperature of resting muscle is between 34 and 35°C (40). The investigated release conditions were compared with simultaneous testing of microspheres of the same batch under the current "standard conditions", i.e., freely suspended in the release medium alone. The pH of the release medium was checked regularly during the release testing period and adjusted in case of a pH drop caused by hydrolysis of either the PLGA or the lipids. Owing to the inevitable hydrolysis of the lipids to free fatty acids, the pH of the medium with 30 mg/ml of emulsified lipids slowly decreased with time (despite the buffering capacity of the 0.1-M phosphate buffer) by approx. 0.1 pH units over 7 days if no adjustment was performed. One quarter of the volume of the release medium was replaced at each sampling point to introduce medium with "fresh" lipids. The vials containing release medium with emulsified lipids were mildly agitated by hand at each sampling point to prevent phase separation of the emulsion (the control vials with lipid-free medium were agitated as well, to rule out any influence of agitation on drug release).
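Because one quarter of the release medium is replaced at each sampling point, the cumulative release has to account for the drug withdrawn with the replaced medium. The sketch below shows one common way to make this correction; it is our own illustration with hypothetical concentrations, not the authors' calculation.

```python
# Minimal sketch of a cumulative-release calculation that corrects for the
# 25 % medium replacement at each sampling point (hypothetical data).

def cumulative_release(concs_ug_per_ml, dose_ug, volume_ml=20.0, replaced_fraction=0.25):
    """Percent released at each sampling point, correcting for drug removed
    with the replaced medium at all earlier sampling points."""
    removed_ug = 0.0
    released_percent = []
    for c in concs_ug_per_ml:
        in_vessel_ug = c * volume_ml
        released_percent.append(100.0 * (in_vessel_ug + removed_ug) / dose_ug)
        removed_ug += in_vessel_ug * replaced_fraction  # drug leaving with the replaced medium
    return released_percent

# e.g. concentrations measured at successive sampling points for a 2000 ug dose
print(cumulative_release([10, 25, 40, 55], dose_ug=2000.0))
```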
Determination of drug binding on muscle tissue
Binding was determined in a suspension of the freeze-dried muscle tissue powder in the release medium (20 mg/ml). To 9.0 ml of the suspension, 1000 μl of drug stock solution (100 μg/ml) was added and the suspension was incubated at 37°C for 24 hours. The drug concentration was chosen to approximately correspond to the concentrations present during drug release. To determine binding to the soluble proteins, a blank suspension was incubated for 24 h at 37°C and centrifuged; the drug stock solution was then added to the supernatant and incubated for a further 24 hours. As a control, the drug stock solution was added to pure release medium. All experiments were performed in triplicate. The drug concentration after incubation was determined by the HPLC methods described below, and the drug recovery was calculated as the percentage of the peak area relative to the control in pure medium.
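A minimal sketch of this recovery calculation (peak area relative to the control in pure medium), with placeholder peak areas:

```python
# Recovery relative to the control in pure medium (placeholder peak areas).
def recovery_percent(peak_area_sample: float, peak_area_control: float) -> float:
    return 100.0 * peak_area_sample / peak_area_control

bound_percent = 100.0 - recovery_percent(792_000, 1_000_000)  # ~20.8 % bound
print(f"bound fraction: {bound_percent:.1f} %")
```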
Drug solubility
The saturation solubility (Cs) was determined in 3 ml of either pure release medium or medium with different concentrations of emulsified muscle lipids (3, 15, 30, and 60 mg/ml). The drug suspension was incubated at 37°C for 2 days to obtain a saturated solution at equilibrium. Undissolved particles of excess drug were removed with a 0.2-μm pore size syringe filter, and the filtrate was diluted and analyzed using the respective HPLC methods described below.
Liquid chromatography analysis (HPLC)
Analyses were performed with a Waters Alliance HPLC 2695 (Waters, Milford, MA, USA) equipped with a Waters 996 photodiode array detector. All three drugs were analyzed on a C-18 column, LiChrospher 100 RP 18-5μm EC (CS-Chromatographie, Merck), based on previously described methods (19). The mobile phase for flurbiprofen consisted of acetonitrile and aqueous buffer (1% chloroacetic acid adjusted with ammonium hydroxide to pH 3.0) in a ratio of 60/40; flurbiprofen was detected at 244 nm. The mobile phase for risperidone was a mixture of acetonitrile and aqueous buffer (0.1% acid adjusted with ammonium hydroxide to pH = 3.0) in a ratio of 65/35; risperidone was detected at 273 nm. The mobile phase for lidocaine consisted of acetonitrile and 0.01-M phosphate buffer of pH = 6.5 in a ratio of 65/35, with UV detection at 215 nm. Linearity was established in the range of 0.5 to 50 μg/ml (for further details, see the Supplementary Material).
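For illustration, a linear calibration over the stated 0.5-50 μg/ml range and the back-calculation of an unknown sample could look like the sketch below; the peak areas are invented placeholders.

```python
# Sketch of a linear calibration over the stated 0.5-50 ug/ml range and
# back-calculation of an unknown concentration (hypothetical peak areas).
import numpy as np

standards_ug_ml = np.array([0.5, 1, 5, 10, 25, 50])
peak_areas      = np.array([1.1e4, 2.2e4, 1.1e5, 2.2e5, 5.5e5, 1.1e6])  # placeholders

slope, intercept = np.polyfit(standards_ug_ml, peak_areas, 1)
r = np.corrcoef(standards_ug_ml, peak_areas)[0, 1]

unknown_area = 3.3e5
conc = (unknown_area - intercept) / slope
print(f"r = {r:.4f}, unknown = {conc:.1f} ug/ml")
```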
Contact angle (wettability)
Films with 8% drug loading for the contact angle measurements were prepared by a solvent casting method: the drug and the respective polymer were dissolved in dichloromethane, cast onto a nonadhesive Petri dish, and the solvent was left to evaporate. The contact angle was measured as a sessile drop on the film surface using a Drop Shape Analyzer EasyDrop (Krüss, Hamburg, Germany) (n = 6).
Differential scanning calorimetry (DSC)
A DSC 2 instrument (Mettler Toledo, Giessen, Germany) with nitrogen as the cooling gas was used. The heating-cooling-heating cycle was +25°C/−20°C/+60°C/−20°C/+60°C at a rate of 10 K/min. The samples were analyzed in the hydrated state, as taken from the release medium, in aluminum pans with nonpierced lids and were weighed after each run to ensure that no water was lost during heating.
Binding interactions with the muscle tissue
We found that flurbiprofen and lidocaine partially bind to the structural components of the muscle tissue. When the stock solution of flurbiprofen was added to the release medium containing 20 mg/ml of suspended muscle tissue powder, only 79.2 ± 0.9% recovery was determined relative to the simultaneously run blank (without muscle tissue). The same procedure with lidocaine gave only 91.3 ± 1.1% recovery; for risperidone, complete recovery (99.8 ± 0.5%) was determined. However, complete recovery of all three model drugs was obtained when the drug solution was added to the supernatant (the soluble components of the muscle tissue). This suggests that flurbiprofen and lidocaine bound to the insoluble structural components of the tissue.
Effect of muscle lipids on drug solubility
With increasing concentration of emulsified muscle lipids in the release medium, the saturation solubility of all three model drugs gradually increased (Fig. 2). For instance, with 30 mg/ml of lipids, the saturation solubility (Cs) of flurbiprofen increased from 7.92 ± 0.05 mg/ml in the lipid-free release medium to 11.55 ± 0.09 mg/ml, the Cs of lidocaine from 9.94 ± 0.06 to 20.87 ± 0.18 mg/ml, and the Cs of risperidone from 0.32 ± 0.01 to 0.61 ± 0.01 mg/ml.
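From the mean values quoted above, the solubility enhancement factors at 30 mg/ml of lipids can be computed directly (a small sketch; measurement uncertainties are ignored):

```python
# Solubility enhancement factors at 30 mg/ml lipids, computed from the
# mean values reported above (uncertainties ignored in this sketch).
cs = {  # (lipid-free, with 30 mg/ml lipids), in the units given in the text
    "flurbiprofen": (7.92, 11.55),
    "lidocaine":    (9.94, 20.87),
    "risperidone":  (0.32, 0.61),
}
for drug, (free, lipid) in cs.items():
    print(f"{drug}: x{lipid / free:.2f} increase in Cs")
```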
Drug release under investigated conditions
The muscle setup (Fig. 1) remained mechanically stable over the whole release testing period without disintegrating in the release medium. The sodium azide efficiently prevented microbial deterioration over the release period, as no organoleptic signs of spoilage were apparent. Only the color of the muscle tissue changed after the first day in the medium, from the original light pink to whitish.
The release of the three drugs from the microspheres prepared from the acid-terminated Resomer® grades (502H, 503H, and 504H) was in all cases slower in the muscle setup than when the microspheres were freely suspended in the release medium. The release profiles from the 503H grade microspheres are given as an example in Fig. 3. The differences were generally more pronounced for risperidone than for flurbiprofen and lidocaine. The addition of lipids to the medium (30 mg/ml) had no impact on the lidocaine release from the acid-terminated grades, and the release of flurbiprofen and risperidone was only marginally faster (Fig. 3).
However, a different release behavior was observed in the case of the ester-terminated 505 grade. When the flurbiprofen-loaded Resomer® 505 particles were tested, the release was faster in the muscle setup (Fig. 4a): the difference was most prominent at the 5.8-day time point, with 74.1 vs. 51.5% released in the muscle setup vs. in pure medium, respectively, and at 6.3 days, with 88.2 vs. 70.0%. The further results in Fig. 4a show that emulsification of only the isolated muscle lipids accelerated the flurbiprofen release even more strongly. On the contrary, when the particles were incorporated in an assembly made of muscle tissue powder after removal of the lipids, the release was even slower than in the release medium alone. This clearly shows the determining effect of the lipids on the acceleration of flurbiprofen release. The Tg of the particles after 5 days (approximately the time point when the release profiles started to diverge) in the release medium with or without lipids was determined in the hydrated state; in both cases, the Tg was around 21°C, indicating no significant plasticizing effect of the lipids.
This effect on drug release was similar for different average particle sizes with similar flurbiprofen loading (compare Fig. 4a and b). In the initial phase, the release curves overlapped, indicating no retarding impact of the additional drug diffusion step through the muscle tissue. The same accelerating trend was observed for the Resomer® 504 formulation (lower Mw than the 505) (Fig. 4d): on day 5, 85.3% of the flurbiprofen was already released in the medium with lipids and 76.2% in the muscle setup, compared with only 66.7% in the release medium alone; on day 6, the values were 98.1, 94.4, and 83.9%, respectively.
With a higher flurbiprofen loading in the 505 microspheres (Fig. 4c), the drug release was very rapid and complete within 4 days; the effect of lipids was less pronounced, but the release in the muscle setup was retarded even though the lipids had not been removed.
The release of lidocaine from the 505 grade particles was also accelerated. Interestingly, the release in the muscle setup was accelerated between days 4 and 7 (Fig. 5a), while the emulsified lipids accelerated the release only in the final days and the release curves in the preceding days were overlapping (Fig. 5b and c).
When the risperidone-loaded 505 microspheres were tested in the muscle setup, a marginally faster risperidone release was also observed between days 7 and 12 (Fig. 6a), although not as prominent as in the case of flurbiprofen or lidocaine. From day 13 onwards, the released amount was, on the contrary, lower in the muscle setup.
The effect of different lipid concentrations in the medium (3, 15, and 30 mg/ml) was tested on two 505 formulations with different risperidone loadings. The release rate increased with increasing concentration of emulsified lipids in the release medium (Fig. 6b and c). The trend was similar for both drug loadings; however, the difference from the lipid-free medium was more pronounced for the formulation with the higher drug loading. A lipid concentration of 3 mg/ml was the lowest at which any significant effect on release could be observed.
The contact angle on the surface of Resomer® 505 films loaded with 8% of each of the model drugs was measured to determine a possible effect of the lipids on the wetting of the hydrophobic surface by the medium. There was no difference between the lipid-free medium (37.3 ± 3.8°) and the medium with 30 mg/ml of lipids (36.8 ± 3.5°). The particles in the lipid-free medium were already well wetted owing to the presence of 0.02% polysorbate 20: they sank rather than floating on the surface of the medium, did not agglomerate, and remained individually suspended. It is therefore unlikely that a change in wettability contributed to the faster release.
The lipids in the release medium, as well as incorporation into the simulated muscle setup, also significantly accelerated the drug release from the ethyl cellulose formulations. The release of all three model drugs from the ethyl cellulose microspheres followed the same trend: slowest in the pure medium, fastest in the medium with emulsified lipids, and in between for the muscle setup (Fig. 7), with the exception of the first-day time point on the lidocaine and risperidone release curves, where the released amount was lowest in the muscle setup. The presence of lipids accelerated the release from the first day onwards, whereas for the 505 PLGA grade the difference became prominent only in the final stages. Owing to the diffusion-controlled release from the ethyl cellulose matrix, the release rate gradually decreased; however, as apparent from the slope of the release curves, this decrease was less prominent in the muscle setup and in the presence of lipids than in the pure medium, resulting in an overall faster release. This effect was most pronounced for the least soluble drug, risperidone, after the first week of release.
DISCUSSION
Muscle tissue in its freeze-dried, pulverized form offered a worthwhile approach for developing a first model of the intramuscular environment for drug release testing. The investigated factors affected the drug release in most cases and can therefore provide additional information about the factors influencing release in vivo beyond what testing in simple buffers can offer. Flurbiprofen and lidocaine are known to bind strongly to plasma proteins (41-43), and a binding tendency towards the muscle tissue was also observed in our study. Such binding of the released drug to structural proteins of the interstitium (e.g., collagen or elastin) might occur in vivo and should also be considered in the development of biorelevant methods. However, the binding interaction was not responsible for the slower drug release observed from the acid-terminated PLGA grades in the muscle setup compared with pure medium, because for the ester-terminated grade and the ethyl cellulose the release in the muscle setup was, on the contrary, faster than in pure medium. Furthermore, the release profiles in the muscle setup and in pure medium differed most for risperidone, even though no binding of risperidone was determined. The slower release in the muscle setup is more likely explained by restricted swelling of the particles tightly incorporated in the muscle setup and held together by the rigid agarose gel, whereas particles freely suspended in the medium were able to swell freely. Swelling of PLGA has been reported in many studies to control the drug release from PLGA dosage forms (19,24,(44)(45)(46), and the more hydrophilic acid-terminated grades swell faster than the ester-terminated ones (47,48). This effect was also observed for the risperidone-loaded Resomer® 505 formulation in the later stages of advanced polymer degradation and hydration. The swelling might be similarly limited in the tight interstitial space upon in vivo administration.
A lower amount of lidocaine and risperidone was released from the ethyl cellulose particles on the first day in the muscle setup than in the pure medium, which can be attributed to the additional diffusion step of the released drug through the muscle tissue. This retarding effect was also observed for the 505 PLGA grade with the higher flurbiprofen loading (≈15%), but not for the low drug loadings. This additional retarding step is therefore apparently more prominent when a high initial amount of drug is released (burst release). A similar effect was observed in vivo in a study by Andhariya et al.: upon i.m. administration of leuprolide acetate-loaded PLGA microspheres, considerably lower drug plasma concentrations were detected in the initial stage compared with the high burst release in vitro (49). The authors attributed this effect to the additional step of diffusion and absorption of the peptide from the muscle tissue; the overall release was nonetheless faster in vivo. However, the diffusion of macromolecules such as leuprolide acetate might be much more hindered than that of low-Mw drugs.
Lipids isolated from the muscle tissue were identified as accelerating the release of all three investigated drugs. Drug solubility is one of the crucial factors determining the release rate from polymeric matrices (22). The tissue lipids increased the solubility of our model drugs, and this effect could explain not only the accelerated release in our experiments but also the generally reported faster release rates in vivo. The presence of fatty acids, lipid membranes, and other components of the ECM might make the surroundings of an implanted formulation more lipophilic than the standard phosphate buffer, and hence favor release through increased drug solubility, as observed in vitro in this study. Higher lidocaine solubility and faster release from a gel formulation in human peritoneal fluid than in phosphate buffer have been reported and were attributed to the presence of physiological surfactants (50). However, in our experiments the presence of lipids in the release medium did not affect the release from the acid-terminated PLGA grades, whereas a higher solubility should nonspecifically increase the release rate from all grades. Therefore, the accelerated release does not appear to be explained solely by a general solubility increase in the surrounding medium.
The lipids and fatty acids will tend to adsorb and accumulate on the hydrophobic surface of the microspheres. Adsorption of lipids onto the hydrophobic surfaces of implanted materials has already been shown to occur in vivo (51)(52)(53). The partitioning of a hydrophobic drug from the very hydrophobic PLGA/ethyl cellulose matrix into the very hydrophilic release medium (in other words, the release) might be made more favorable by this hydrophobic "intermediate compartment" formed by the lipid layer adsorbed on the surface.
Alternatively, the lipids and fatty acids can penetrate into the polymer matrix (54) and enhance absorption of the release medium into the core of the matrix by an osmotic effect and/or by counteracting the repulsive forces between the hydrophobic polymer chains and water molecules. Absorption of lipids into implanted silicone prosthetics (29,(55)(56)(57) and poly(glycolic acid) sutures (28), with an impact on their structural and mechanical properties, has already been documented; drug-loaded implants can therefore be similarly affected, with a direct consequence for the drug release rate. Lipids co-encapsulated with risperidone in PLGA microspheres have been used intentionally to modify (accelerate) risperidone release (58).
The lipids in the medium had an accelerating effect on drug release even though sink conditions were already provided in the lipid-free medium. Although the lipids emulsified in the release medium create a biphasic system, this is an important difference from the biphasic dissolution testing of oral dosage forms, where non-sink conditions exist in the aqueous phase and drug partitioning into the organic phase prevents saturation of the aqueous phase (5). However, in the microenvironment inside the polymer matrix, sink conditions may not be maintained, as demonstrated in a study by Siepmann et al. (59). The authors found that, despite sink conditions in the surrounding release medium, non-sink conditions inside a hypromellose matrix might exist even for freely soluble drugs. They further concluded that if such an effect was observed even for relatively hydrophilic hypromellose matrices in a highly hydrated and swollen state, it might be even stronger for more hydrophobic matrices containing much less water (60). In a further study by the same research group, an equivalent situation was observed for PLGA extrudates (45). Correspondingly, the lipids and fatty acids in our experiments might have increased the drug solubility in the microenvironment inside the microspheres upon their absorption into the matrix, even though sufficient sink conditions in the surrounding medium were already provided in the lipid-free medium. The acid-terminated PLGA grades, being more hydrophilic, allow faster water penetration into the matrix and faster swelling; hence, local drug saturation inside their matrix might not have been the rate-determining factor, possibly explaining why the lipids accelerated the release only from the ester-terminated PLGA and the ethyl cellulose microspheres.
An acceleration of the hydrolytic degradation of the PLGA is unlikely to be responsible, since faster release in the lipid-containing medium and in the muscle setup was also observed in the experiments with the (nondegradable) ethyl cellulose. The diffusion-controlled release from the ethyl cellulose matrices is characterized by a rate that decreases with the decreasing concentration gradient (first-order kinetics), and the lipids seemed to partially prevent this deceleration, leading to an overall faster release. In the case of Resomer® 505, on the other hand, the effect of the lipids appeared to shift the onset of the rapid-release phase to earlier time points and to shorten the lag phase.
Although the exact mechanisms remain to be elucidated, the interactions with lipids could nevertheless, in some cases, provide an explanation for the frequently reported faster drug release in vivo than in simple buffers in vitro.
The muscle tissue-based release methods shown here represent a first attempt towards mimicking intramuscular conditions and are certainly not without limitations. Clearly, the characteristic morphological structure of the native muscle fibers is not retained in the pulverized freeze-dried tissue. The restriction of swelling had an impact on the drug release, but this was determined by the strength of the agarose gel and may not resemble the actual physical properties of muscle tissue. Also, the lipids in tissue are not freely available as when emulsified in the release medium; they are mainly organized as lipid droplets inside adipocytes, as bilayers in cell membranes, or as fatty acids bound to albumin (61,62). However, displacement from this physiological distribution could still occur when partitioning towards the implant surface is thermodynamically favorable, or due to the mechanical impact on the tissue caused by implant administration. The natural lipids also carry a risk of chemical instability during the long testing period, which may change the properties of the release medium over time; however, frequent replacement with medium containing fresh lipids can overcome this issue.
CONCLUSION
A muscle tissue homogenate-based drug release testing method was proposed here to more closely mimic the intramuscular administration of depot microspheres. In addition to the novel biorelevant design, essential requirements of pharmaceutical quality control were considered in the development, such as long-term stability, reproducibility, low cost, convenience of sampling, and ease of preparation. Of the diverse biological components of muscle tissue, it was particularly the lipid components that affected the drug release from the microspheres, and these findings represent a previously unrecognized factor influencing drug release from parenteral depot microspheres. The accelerating effect of the lipids was observed for all drugs tested and across chemically different polymers (PLGA and ethyl cellulose), but differed between grades of the same polymer (acid- vs. ester-terminated PLGA). Although these experiments cannot conclusively tell whether the same interactions occur in vivo, the results strongly suggest that biological lipids are one of the important factors responsible for the differences between in vitro and in vivo release. Further studies will be necessary to investigate whether the whole lipid extract or a single component is responsible for the effect, and whether the same effect can be achieved with a single triglyceride/fatty acid as a pure standardized substance. Our findings can provide a basis for further, more complex biorelevant models.
SUPPLEMENTARY INFORMATION
The online version contains supplementary material available at https://doi.org/10.1208/s12249-021-01965-4. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Medicine",
"Materials Science"
] |
IDENTIFICATION OF ENERGY EFFICIENCY OF ORE GRINDING AND THE LINER WEAR BY A THREE-PHASE MOTION OF BALLS IN A MILL
Introduction
A breakthrough in ore preparation has been the design of a technological unit with a low discharge level [1]. In these mills, balls move in a three-phase cycle, in contrast to the two phases in units of earlier designs [2]. This research field has been advanced in papers [3][4][5][6]. The velocity fields of a shear layer at low and medium rotation speeds of the drum were reported in [3]. The three-phase ball motion in the mill's drum was simulated in studies [4,5] and other publications. A great improvement in the characteristics of this ball mill is achieved via adaptive control over pulp rarefaction [6]. An improved mill with a low discharge of pulp and a three-phase movement of balls can increase the specific performance at the first stage of grinding by 28-30 %.
Despite the significant advantages of this new type of ball mill, such units are being introduced into production only slowly. One of the main obstacles is the lack of means of automated control over the energy efficiency of ore grinding and the wear of the liner, which leads to an increase in the cost of the resulting product, iron ore concentrate.
Literature review and problem statement
Initially, automated control over the energy efficiency of ore grinding in ball mills was based on a generalized indicator. This indicator took into consideration the ball and ore loading, as well as the water, by measuring the oil pressure in the supporting bearings, the mean power of the drive electric motor, and the acoustic signal produced by the mill. Later, vibroacoustic means of monitoring the operation of ball mills were developed [7]. That approach was subsequently improved through signal processing [8,9].
A new direction of research was initiated in paper [10]. The authors devised a nonlinear observer model that includes a series of technological parameters and derivatives of some of them. However, the article concluded that, with the available measurements of technological parameters, it is not possible to distinguish between ore and balls, let alone determine the characteristic size of the ore and its degree of shredding.
In addition, especially following the advent of the new type of ball mill, there is increased interest in determining the wear of the liner without stopping the unit, because the ability to predict its service life remains uncertain in industry. A new model was therefore proposed to predict the wear rate of a liner. The model includes the following basic parameters: the type of ore, the relative speed between ore and liner, the rigidity of the liner, friction conditions, etc. [11]. The proposed model could determine the liner wear depending on operating conditions. However, this is an indirect determination of the liner wear and does not provide adequate accuracy.
In this situation, it appears promising, as shown in [12], to select and study parameters that can be monitored automatically and that directly characterize the energy efficiency of material destruction in the mill's drum. Such a parameter could be the deformation of a homogeneous rod that interacts with the balls and ore during the three-phase motion of the grinding bodies. Such a converter also makes it possible to measure the liner wear.
These considerations show that a generalized problem must be solved, which comes down to identifying the energy efficiency of ore grinding, as well as the liner wear, under the three-phase movement of balls in the mill.
The aim and objectives of the study
The aim of this study is to identify the energy efficiency of ore grinding, as well as the liner wear, under the three-phase movement of balls in the mill, based on homogeneous-rod primary converters intended to measure technological parameters.
To achieve the set aim, the following tasks have been solved:
- to analytically derive the equation that relates the technological parameters of a ball mill and the parameters of a primary rod converter when shredding large pieces of ore;
- to achieve invariance of the measured volume of shredded ore to a change in the motion speed of balls in the technological unit;
- to derive a mathematical model for determining the volume of shredded ore at the end side of a rod primary converter that is invariant to changes in the motion speed of balls and in the rod length during operation;
- to derive the equation that relates the length of a rod primary converter, excited in the upright position by the impact of a reference ball against its idle end, to the measured parameters;
- to design a circuit for implementing an automated control system for the energy efficiency of ore grinding by mills with a three-phase movement of balls, as well as the liner wear.
1. Choosing a primary converter of energy efficiency of ore grinding and the liner wear in a ball mill
In the current work, the principal structural measuring element is a homogeneous rod converter of constant cross-section with one fixed end. At the other end of the converter, ore pieces are shredded by normal impacts of balls, which in this mill move in a three-phase mode. The length of the rod under these conditions is 250 mm. The impact of a ball produces an elastic deformation along the rod primary converter: first the rod is compressed, followed by the reverse process, its unloading. If the force F is a variable function of time, the propagation speed c of the compression deformation wave remains unchanged, while the velocities υ of the cross-sections' motion differ along the rod at any given time. The same features are characteristic of the reverse process. If F = const, the velocity of the cross-sections' motion remains unchanged as well. The devised model is capable of simulating the shredding of ore and the wear of the mill's liner.
2. Development of a generalized model for a rod primary converter
Based on [13], the expression for the absolute contraction of a rod primary converter under the action of force F takes the form

Δl = F·l0/(E·S), (1)

where Δl is the absolute contraction; l0 is the initial length of the rod; E is the Young modulus (modulus of longitudinal elasticity); S is the cross-sectional area of the primary rod converter. The propagation speed of the compression deformation wave along the axis of the rod is

c = sqrt(E/ρ), (2)

where ρ is the density of the material of the primary rod converter. The deformation moves all the particles of material in the compressed part of the primary rod converter at velocity υ, which is many times smaller than the wave propagation speed c:

υ = F·c/(E·S) = F/(S·sqrt(E·ρ)). (3)

The force applied along the rod primary converter that caused the contraction f equals

F = E·S·f/l0, (4)

where f is the complete contraction of the rod primary converter under the action of force F. The time required for the deformation to pass through all the particles of material of the rod primary converter in the direct and reverse directions is

t = 2·lc/c, (5)

where lc is the current length of the rod primary converter, which wears together with the liner. The work of deformation in the deformed volume of a shredded piece of ore, which dissipates in the form of heat, is determined from dependence [14]

Ad = k·k1·Vp, (6)

where k is a coefficient of proportionality that depends on the strength of the ore; k1 is a dimensionless coefficient of proportionality; Vp is the volume of the shredded piece of ore.
Dependence (6) makes it possible to determine the work spent on shredding a piece of a known technological variety of ore based on its volume.
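For orientation, the sketch below evaluates the reconstructed dependences (1), (2), and (5) for a steel rod of the stated 250 mm length; the impact force, cross-section, and material constants are assumed values for illustration only.

```python
# Sketch evaluating dependences (1), (2) and (5) for a steel rod converter
# (placeholder force and cross-section; material constants are typical for steel).
import math

E   = 2.0e11      # Young's modulus, Pa
rho = 7800.0      # density, kg/m^3
l0  = 0.250       # initial rod length, m
S   = 4.0e-4      # cross-sectional area, m^2 (placeholder)
F   = 5.0e4       # compressive force from a ball impact, N (placeholder)

delta_l = F * l0 / (E * S)          # (1) absolute contraction
c       = math.sqrt(E / rho)        # (2) propagation speed of the deformation wave
t       = 2 * l0 / c                # (5) direct + reverse wave travel time (unworn rod)

print(f"contraction {delta_l*1e6:.1f} um, wave speed {c:.0f} m/s, travel time {t*1e6:.1f} us")
```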
In long ball mills, which include mills for ore grinding, zones of the grinding environment form along the drum in which the balls in the outer layer of a particular cross-section have approximately the same size [15]. This distribution of balls has been confirmed experimentally at existing ball mills by a sound-metric method. In addition, such an arrangement of balls can be enhanced by using specialized liner profiles [16].
The features of the three-phase movement of balls in mills with a low discharge of pulp, the dependences between the parameters of a rod primary converter, and the arrangement of balls in the drum together make it possible to implement automated control over the energy efficiency of ore grinding and the liner wear.
1. Analytical derivation of equation that relates the technological parameters of the process and converter
Grinding of large pieces of ore occurs at the upper end side of the rod through the kinetic energy of the moving balls. Kinetic energy, in contrast to potential energy, depends not on the coordinates of a body but on its speed, and remains constant until the numerical value of the speed changes. If a body moves at constant speed, the sum of all forces acting on it equals zero and no work is performed. If a body acts with some force along the direction of motion on another body, it can perform work; in this case, the body's kinetic energy changes, and the change in kinetic energy equals the work performed. Since the kinetic energy used to shred a piece of ore equals the work performed when it is shredded, one can write Ekd = Ad, where Ekd is the kinetic energy spent on shredding and Ad is the work performed when shredding a piece of ore. The work performed when shredding a piece of ore is expressed by dependence (6).
The kinetic energy mk·υk²/2 of a ball of mass mk moving at velocity υk is spent on performing the work of shredding a piece of ore at the upper end side of the rod and on the compression deformation of the rod primary converter. The compression deformation of the rod primary converter uses the kinetic energy of the moving ball, reduced by the magnitude Ad. The velocity of the ball after the compression deformation of the rod primary converter is zero. A body that has completely stopped is unable to carry out work, so all the energy due to its movement equals the work performed in bringing the body to a full stop: A = F·s, where F is the force and s is the path traveled. In our case, the work of compression deformation is A1 = F1·f1, where F1 is the force that develops when the rod converter is compressed and f1 is the full contraction of the rod under the action of force F1.
Given that the change in the kinetic energy of a moving ball equals the work performed, one can write the equation

mk·υk²/2 = Ad + F1·f1. (8)

In equation (8), the force F1 is uniquely associated with the full contraction f1 of the rod via dependence (4). Substituting F1 from equation (4) into (8) gives

mk·υk²/2 = k·k1·Vp + E·S·f1²/l0. (9)

Equation (9) relates the technological parameters of the ball mill, the shredded material, and the parameters of the rod primary converter of the energy efficiency of ore grinding.
2. Achieving the invariance of control over energy efficiency of ore grinding by a ball mill to a change in the motion speed of balls
The velocity υk of the balls under the three-phase movement mode varies within a significant range as the operating conditions of the ball mill change. Given that the velocity of the balls also enters the equation squared, significant errors in determining the ore volume Vp should be expected.
We introduce the concept of basic and auxiliary rod primary converters of the energy efficiency of ore grinding by a ball mill (Fig. 1). The basic rod primary converter (Fig. 1, a) has a greater cross-sectional area than the auxiliary rod converter (Fig. 1, b). It intercepts and destroys large pieces of ore under the impacts of balls. For the basic rod primary converter, equation (9) takes the form

mk·υk²/2 = k·k1·Vp + E1·S1·f1²/l01, (10)

where S1 and l01 are, respectively, the cross-sectional area and the initial length of the basic rod primary converter. The accuracy of determining the ore volume Vp at the upper end side of the basic rod primary converter can be improved by eliminating the variable parameter υk from the dependence. For this purpose, an auxiliary rod primary converter, whose upper end side, owing to its small cross-section, does not intercept large pieces of ore to be shredded (Fig. 1, b), is installed next to the basic one. The auxiliary converter must be mounted independently of the basic one. Such a primary converter is characterized by the fact that the entire kinetic energy of a moving ball is converted into the work of compression deformation. In this case, the following equality holds:

mk·υk²/2 = F2·f2, (11)

where F2 is the force that develops when the auxiliary rod converter is compressed; f2 is the full contraction of the auxiliary primary converter under the action of force F2.
In the general case, the auxiliary converter has different structural parameters. Therefore, using dependence (4), we can write

mk·υk²/2 = E2·S2·f2²/l02. (12)

Considering that under the particular operating conditions of a ball mill the velocities of ball motion are the same, we equate the right-hand sides of equations (10) and (12):

k·k1·Vp + E1·S1·f1²/l01 = E2·S2·f2²/l02. (14)

We fabricate the basic and auxiliary converters from the same material and with the same length, but with different cross-sectional areas. Then E1 = E2 = E and l01 = l02 = l0. Taking this into account, equation (14) takes the form

k·k1·Vp = E·(S2·f2² − S1·f1²)/l0. (15)

Equation (15) links the technological parameters of the shredded material, the structural parameters of the rod primary converters, and their deformations, and does not depend on the motion velocity of the balls.
The contractions of the basic and auxiliary rod primary converters can be determined by secondary converters. There are many methods for measuring deformations. A study of the mechanical properties of structural steel has established that reliable results can be obtained with strain gauges [17]. Measurements of deformations under real operating conditions on steel structures have shown close agreement between the measured and simulated deformations [18]. Research into the optimization of strain gauge arrangements for estimating structural characteristics has shown that a unified model for optimizing strain gauge placement is formed by maximizing coverage [19]. That is, the strain gauges should cover the entire length of the rod primary converters, thus registering the complete contractions.
3. Construction of a mathematical model invariant to a change in the length of rods
It is impossible to determine the volume of large pieces of ore from the complete contractions f1 and f2 via dependence (15) directly, since these parameters are not accessible while the rods wear out. Therefore, the compression deformation parameters of the rods are most appropriately measured over their final section, at a certain distance x from the fastening point. The measuring base B can be represented as a fraction of the entire length of the converter, that is, B = k2·l0, where k2 can theoretically take a value from 0 to 1. Assume that the positions of the measuring planes and the base B are the same for both converters. Given the proportionality of the deformations along the converters under given forces, the contractions of the parts of the rods over the measuring bases B are

f11 = k2·f1, f21 = k2·f2. (16), (17)

From dependences (16) and (17), the complete contractions of the rods determined in the process of measurement are

f1 = f11·l0/B, f2 = f21·l0/B. (18), (19)

Substituting the found values of f1 and f2 into dependence (15), we obtain

k·k1·Vp = E·(S2·f21² − S1·f11²)/(k2²·l0). (20)

The derived dependence (20) is a mathematical model of the energy efficiency of ore grinding in a ball mill, by which this technological parameter can be monitored during operation.
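Assuming the reconstructed form of dependence (20) given above, a minimal sketch of the ore-volume estimate from the base-length contractions f11 and f21 could look as follows; the lumped strength coefficient k·k1 and all other numeric inputs are placeholders, not values from the paper.

```python
# Sketch of dependence (20): estimating the shredded ore volume Vp from the
# contractions f11, f21 measured over the strain-gauge base B = k2*l0 on the
# basic and auxiliary rods. The reconstructed equation and all numeric inputs
# are assumptions, not values from the paper.

def ore_volume_from_base(f11, f21, S1, S2, k2, E=2.0e11, l0=0.25, k_k1=5.0e8):
    """Vp = E*(S2*f21**2 - S1*f11**2) / (k*k1*k2**2*l0), with k*k1 lumped into k_k1."""
    return E * (S2 * f21**2 - S1 * f11**2) / (k_k1 * k2**2 * l0)

# contractions over a base covering 40 % of the rod length (k2 = 0.4); the thin
# auxiliary rod contracts more because no energy is spent on shredding ore
vp = ore_volume_from_base(f11=0.4 * 1.2e-4, f21=0.4 * 3.0e-4,
                          S1=4.0e-4, S2=1.0e-4, k2=0.4)
print(f"Vp = {vp:.2e} m^3")
```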
4. Determining the wear of a mill's liner
During the operation of a ball mill, the state of wear of the liner needs to be known. It is advisable to determine this via the length of the rods, because they wear out together with the liner. The current length lz of rods that have partially worn during operation of the ball mill, starting from the initial length l0, can be determined from one of them, for instance the basic rod, from the dependence

lz = lP + υ·nc·Ti/2, (21)

where υ is the propagation velocity of the compression and unloading deformation along the rod primary converter; Ti is the period of oscillations of the measuring generator; nc is the number of pulses counted over a separate measurement cycle; lP is the distance from the idle end of the basic rod to the mounting point of the strain gauge with its measuring base B. The liner wear is lc = l0 − lz. The measurements according to formula (21) are performed using the strain gauge signal. The strain gauge registers the oscillations of the rod during the idle cycle, when a wave propagates in the direct and reverse directions. The wave is excited by the impact of a reference ball against the idle end of the basic converter at each rotation of the mill's drum.
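Assuming the reconstructed form of dependence (21) above, the liner-wear estimate reduces to a two-way travel-time measurement; the sketch below uses placeholder values for the gauge position, generator period, and propagation speed.

```python
# Sketch of the liner-wear estimate from dependence (21) as reconstructed above:
# the counted pulses n_c * T_i measure the two-way travel time of the deformation
# wave between the strain gauge and the worn working end. All values are placeholders.

def current_rod_length(n_c: int, T_i: float, wave_speed: float, l_P: float) -> float:
    """l_z = l_P + wave_speed * n_c * T_i / 2 (two-way travel over l_z - l_P)."""
    return l_P + wave_speed * n_c * T_i / 2.0

l0  = 0.250        # initial rod length, m
l_P = 0.050        # strain gauge position from the idle end, m (placeholder)
c   = 5064.0       # deformation wave speed in steel, m/s (from sqrt(E/rho))
T_i = 1.0e-7       # reference generator period, s (10 MHz, placeholder)

l_z = current_rod_length(n_c=750, T_i=T_i, wave_speed=c, l_P=l_P)
print(f"rod length {l_z*1000:.1f} mm, liner wear {(l0 - l_z)*1000:.1f} mm")
```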
The circuitry for implementing the identification of energy efficiency of ore grinding and the liner wear
A functional scheme for implementing automated control over the energy efficiency of ore grinding in a ball mill is shown in Fig. 2. When the mill's drum rotates around axis 12, the balls in the outer layer first move along a circular trajectory, since they are pressed against the liner. In the upper part they detach from the liner and move along a parabolic trajectory, followed by movement along a straight line over the surface of the supporting layer of grinding bodies, coated with pulp, until they hit the liner in the bottom part of the drum. Along the parabolic trajectory, the motion velocity of the balls is constant, so their kinetic energy is constant. Part of this energy is lost during the movement along the straight line: there the velocity changes and can take different values at the moment a ball hits the liner, because the kinetic energy lost depends on the position of the supporting layer of grinding bodies, the level of pulp, the pulp density and viscosity, and other factors. In a particular technological situation, the motion velocity of the balls, and hence their kinetic energy, remains constant prior to the impact against the liner.
A ball mill is loaded with large fractions of ore residing in the pulp, which is formed by finely ground solids. The volume of large solid fractions per unit volume of pulp is a measure of the energy efficiency of ore grinding in the ball mill. This volume of large solids 2 rests on the upper end of basic rod 11 at the moment of impact of ball 1. The same processes occur at liner 3, located on the inner surface of drum 4; there, however, they cannot be monitored.
The energy required to shred the large fractions at the upper end of rod 11 depends, for a given strength of ore 2, on its volume in the controlled shredding zone. This energy is thus a measure of the energy efficiency of ore grinding in the ball mill. The energy applied to the rod upon impact is equal to the kinetic energy of the moving ball, reduced by the energy required to shred the large solid fractions at the upper end of the basic rod. The more the mill is loaded with ore (the higher the concentration of large solids in the pulp and the volume of ore at the upper end of the basic rod), the greater the energy used by a ball to destroy large solids. In this case, the residual energy of the ball that compresses the basic rod at the moment of impact will be lower, and basic rod 11, rigidly fixed by support 9, will contract less. Thus, the contraction of basic rod 11 can be used to estimate the energy efficiency of ore grinding in the ball mill. The same information is obtained from the contraction of the monitored section of rod 11. However, when part of the balls' kinetic energy is lost along the section of rectilinear motion in the pulp, the results are distorted, which would introduce a considerable error into the identification of the mill's loading. This is because the pulp level, pulp density, and other factors may take different values during operation, characteristic of the given technological regime in a particular situation.

Fig. 2 legend: 5 - auxiliary rod; 6 - lock; 7 - reference ball; 8 - oscillation exciter; 9 - support; 10 - waterproofing and shock-absorbing sleeve; 11 - basic rod; 12 - mill's rotation axis; SG1, SG2 - strain gauges; T1, T2 - timers; KE1, KE2, KE3, KE4 - key elements; TE1 - threshold element; SD1, SD2, SD3, SD4, SD5 - storage devices; A1 - amplitude selector; EM1, EM2 - memory elements; LE1, LE2 - logical elements «AND»; NCEK - normally closed key element; TELL - threshold element of low level; DA1, DA2 - signal averaging devices; BSD - basic computing device; OSS - ore strength setter; EK - key element; DM1 - decelerated multivibrator; NDEK - normally disabled key element; TEHL - threshold element of high level; PS - pulse shaper; TR - trigger with two stable states; ACD - auxiliary computing device; RG - reference generator.

Auxiliary rod 5 is fabricated with a small cross-section, so being hit on its end by ball 1 is a less likely event than for basic rod 11; even less likely is a particle of large solid being present at the moment ball 1 hits the end of auxiliary rod 5. Therefore, the registered impacts against the end of auxiliary rod 5 will mostly be direct hits of ball 1. In the time interval preset by timer T1, strain gauge SG1 generates signals that, through key element KE1 and threshold element TE1, arrive at storage device SD1. Threshold element TE1 prevents random pulses of small amplitude from entering storage device SD1. Storage device SD1 accumulates, within the time interval preset by timer T1, the signals that correspond to direct hits of the ball against rod 5 and, possibly, hits against rod 5 through a large chunk of ore. The pulse of highest amplitude corresponds to a direct collision between auxiliary rod 5 and ball 1. This pulse is selected by device A1 and sent to the input of key element KE2, which records in turn the highest value from rod 5 in the given time interval preset by timer T1 and erases the preceding value.
The command to execute this operation is issued by logical element «AND» LE1, whose inputs must receive a signal from the timer about the end of the cycle and a signal about the largest value of the measured parameter. If device A1 fails to produce such a signal, the control system continues to operate with the previous largest value of the signal from auxiliary rod 5. This corresponds to the highly unlikely event that, within the time interval preset by timer T1, there was no collision between ball 1 and the end of auxiliary rod 5. Thus, the outputs of memory elements EM1 and EM2 always carry the signal from auxiliary rod 5. The contraction of the monitored section of auxiliary rod 5 characterizes the kinetic energy of a ball under the given technological conditions. Because the mass of a ball and its speed at the end of the parabolic section remain unchanged, the contraction f21 of the monitored section of auxiliary rod 5 carries information only about the change in kinetic energy during the ball's motion along the straight section before it hits the liner. The same changes in the kinetic energy of a ball in this technological situation are also characteristic of basic rod 11.
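The selection logic of the auxiliary-rod channel (threshold TE1, storage SD1, amplitude selector A1, memory EM1/EM2) can be summarized in a small software analogue; this is only an illustration of the described behavior, not the actual analog circuitry, and the pulse amplitudes are hypothetical.

```python
# Software analogue of the auxiliary-rod channel logic described above:
# keep the largest above-threshold pulse per timer-T1 window, otherwise
# fall back to the previous window's value. Data are hypothetical.

def select_f21_per_window(windows, threshold):
    selected, last_valid = [], None
    for pulses in windows:                       # one list of pulse amplitudes per T1 window
        valid = [p for p in pulses if p > threshold]
        if valid:
            last_valid = max(valid)              # direct ball impact -> largest amplitude
        selected.append(last_valid)              # reuse the previous value if no valid pulse
    return selected

windows = [[0.02, 0.31, 0.18], [0.01], [0.05, 0.27]]
print(select_f21_per_window(windows, threshold=0.1))   # -> [0.31, 0.31, 0.27]
```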
The contraction of a section of basic rod 11, measured by strain gauge SG2, is converted into an electrical signal which, via normally closed key element NCEK and key element KE3 (when timer T2 is enabled), enters the threshold element of low level TELL, set to a certain signal level. Under the largest loading of the mill with ore, the signal from strain gauge SG2 is minimal, especially for the strongest ore. However, the signal is much smaller still when a ball hits a junction, covering part of the working end of rod 11 and waterproofing and shock-absorbing sleeve 10, or hits liner 3. The setting level of threshold element TELL should be slightly higher than this signal. Storage device SD2 then receives only signals whose amplitude exceeds the threshold level of element TELL. These signals correspond to objective information about the volume of large fractions of ore 2 at the working end of basic rod 11. The signals that arrive at storage device SD2 are averaged by device DA1.
The averaged signal characterizing, through its contraction, the volume of shredded ore at the working end of basic rod 11 arrives at one of the inputs of basic computing device BSD, whose other inputs receive a signal from ore strength setter OSS and a signal corresponding to the contraction of auxiliary rod 5. The remaining quantities are physical constants or are accepted as stable, defined by the design of the control system over the energy efficiency of ore grinding in a ball mill, and are held in permanent storage devices. The basic computing device, in accordance with dependence (20), determines the volume of shredded ore. The result of the computation is sent via key element EK to storage device SD4 or SD5.
Whether ball 1 hits or misses the working end of basic rod 11 during rotation of the ball mill's drum around axis 12 is a random event. Thus, a hit can only be guaranteed to occur over a specific period of time, or after a certain number of drum rotations. This time interval is set by timer T2. Since the probability of ball 1 hitting the end of auxiliary converter 5 is significantly smaller than for the basic converter, the interval over which the contraction of a section of auxiliary converter 5 is registered should be several times longer; it is set by timer T1. At the same time, this interval should not be made too long, because the technological situation may change, so it is advisable to operate at shorter intervals and prolong them only when no collision between a ball and auxiliary rod 5 occurred during the preset cycle. This is provided by logical element LE1.
A channel for determining the thickness of a liner in a ball mill functions in the following way. The passage of a signal through the threshold element of low level TELL triggers decelerated multivibrator DM1, which, over its exposure period, disables the connection between strain gauge SG2 and key element KE3 and enables the chain of the normally disabled key element NDEK, connecting strain gauge SG2 to the threshold element of high level TEHL, which is also set to a certain signal amplitude. At the same time, when the mill's drum rotates around axis 12, oscillation exciter 8 enters the zone close to the vertical position. Reference ball 7, overcoming the resistance of lock 6, deals a severe blow to the idle end of basic rod 11, which excites a wave process in the rod: a compression wave propagates to the working end and, bouncing off it, an unloading wave travels back to the idle end. Strain gauge SG2 then generates a signal of significant amplitude; because key element NCEK is disabled, this signal does not enter the channel in which the signal about the amount of shredded ore is formed, but instead arrives, through the temporarily enabled key element NDEK, at the input of the threshold element of high level TEHL. Having passed it, the signal forms a pulse in pulse shaper PS, which changes the state of trigger TR with two stable states; this, together with a signal from timer T2, generates at the output of logical element «AND» LE2 a permissive signal at the input of key element KE4. Oscillations from reference generator RG then arrive at storage device SD3. When strain gauge SG2 generates a signal of significant amplitude as the unloading wave passes, trigger TR enters its second stable state. Given a signal from timer T2 and the change in the signal from trigger TR, the permissive signal at the output of logical element «AND» LE2 disappears and storage device SD3 stops counting pulses. The number of high-frequency pulses accumulated in storage device SD3 therefore corresponds to double the distance from strain gauge SG2 to the working end of basic rod 11. When the exposure of decelerated multivibrator DM1 ends, elements NCEK and NDEK switch back, and the circuit is again ready to receive the main signal about crushed ore from strain gauge SG2. The signal, averaged by device DA2, is sent to the input of auxiliary computing device ACD, which, according to dependence (21), determines the current length of the rods, which matches the thickness of the liner in the ball mill.
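Dependence (21) itself is not reproduced in this section, so the following sketch only encodes what the description above and conclusion 4 state: the counted pulses correspond to the round trip of the wave between strain gauge SG2 and the working end, and liner wear is the difference between the starting and current rod lengths. The fixed distance from the gauge to the idle end is an assumption introduced for illustration.

```python
def rod_length_and_wear(n_pulses, t_ref, wave_velocity, gauge_to_idle_end, initial_length):
    """Hypothetical sketch in the spirit of dependence (21); all inputs are illustrative.

    n_pulses          -- pulses of reference generator RG counted in SD3
    t_ref             -- period of one RG oscillation, s
    wave_velocity     -- propagation velocity of the compression wave in the rod, m/s
    gauge_to_idle_end -- assumed fixed distance from strain gauge SG2 to the idle end, m
    initial_length    -- length of an unworn basic rod, m
    """
    round_trip_time = n_pulses * t_ref                          # gauge -> working end -> gauge
    gauge_to_working_end = wave_velocity * round_trip_time / 2.0
    current_length = gauge_to_idle_end + gauge_to_working_end
    liner_wear = initial_length - current_length                # starting minus current length
    return current_length, liner_wear
```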
Discussion of results of the identification of energy efficiency of ore grinding and the liner wear
The energy efficiency of ore grinding in ball mills is defined by the concentration (volume) of large particles of ore over a section limited by its area. The volume of the ore can be determined by shredding it, as established in the derived equation relating the technological parameters of a ball mill to the parameters of the primary rod converter. This approach is characterized by direct measurement of the parameter, which guarantees high accuracy; previously, only indirect measurements of rather low precision were available.
In mills with a three-phase movement of balls, the balls typically change their speed during the final phase of motion; this speed depends on the state of the ball environment and on the density and level of pulp above the supporting layer, which would severely affect the accuracy of control. Achieving invariance to a change in this speed almost eliminates this measurement-related disadvantage of such ball mills.
A mathematical model of two-rod determination of the volume of shredded ore at the end of a primary rod converter is invariant both to a change in the velocity of balls and to a change in the length of converters in the process of wear. The model makes it possible to measure the volume of shredded ore with high accuracy, since the major disturbing factors do not affect it.
Determining the thickness of a liner is also performed by direct measurement of the parameter, which ensures high accuracy, in contrast to the indirect estimation [11], which does not warrant it.
The circuitry used to implement the proposed system of automated control over the operating parameters of ore grinding in mills with a three-phase movement of balls has shown the feasibility of this approach. The operational parameters of grinding can be determined accurately using modern microprocessor tools. Mathematical modelling has established that the relative error in determining the parameters does not exceed ±2.5 %. This will significantly improve the effectiveness of ore preparation at modern enrichment plants.
The results obtained (the equations for rod primary converters, a mathematical model of control over energy efficiency of ore grinding, the equations for determining the thickness of a liner in a ball mill) could be used for both analytical and computational experiments. They could be applied when implementing control operations in ball mills with a three-phase motion of balls of different sizes. The equations for determining the thickness of a liner in a ball mill might be used when implementing control operations in mills with a two-phase mode of ball movement. The technique for achieving measurement invariance with the application of an additional rod primary converter could be implemented for converters of other types.
Note that there are still unresolved issues related to the development of both means of control and tools for signal processing. The disadvantages of this approach also include the need to replace rod primary converters when the liner of a ball mill wears out, as well as certain difficulties in transferring signals from a rotating object to a stationary one.
In the future, it is planned to design and investigate systems of automated control over the energy efficiency of ore grinding in mills with a three-phase motion of balls of specific sizes, and to explore the impact of the propagation speed of a compaction deformation wave along the axis of a rod on the strain gauge readings. The first shortcoming specified above is easily eliminated by fabricating rod primary converters as units that are mounted in the hatch of a mill's drum. The second drawback, characteristic of rotating objects, is typically removed by applying radio engineering tools. To eliminate it almost completely, noise-resistant radio equipment is designed that can automatically set its operation modes.
The results reported in the current work need no experimental verification, since all the provisions have been proven analytically. No assumptions were made that could cause the results to deviate from those obtained. The only parameter that might affect the accuracy of identification is the coefficient k1 in dependence (6), which characterizes the shredded volume of ore and the volume exposed to the pressure of a falling ball. If these volumes are the same, then there is no error in the identification of the technological parameter. Experimental verification would be appropriate when designing means of identification for specific operational conditions.
Conclusions
1. We have analytically derived an equation that relates the technological parameters of a ball mill to the parameters of a primary rod converter when the balls destroy large pieces of a material at its end. This proves the possibility of using the converter as a means of control over the energy efficiency of ore grinding at the early stages of ore preparation at concentrating mills. The energy efficiency of ore grinding is estimated based on the volume of shredded large chunks of material in the pulp that are at the end of the rod primary converter at the moment of contact with the ball. This volume of shredded ore must be determined with precision.
2. Using two rod primary converters – the basic one with a greater cross-sectional area, where ore is shredded, and the auxiliary one with a smaller cross-sectional area, which interacts with the ball without ore – we have achieved the invariance of the volume of destroyed material to a change in the motion velocity of balls within a technological unit with a low level of pulp discharge. This warrants high precision in estimating the technological parameter under conditions of change in the state of ball loading, as well as in the parameters of the pulp.
3. We have analytically derived a mathematical model of the two-rod determination of the volume of shredded ore based on the contractions of the basic and auxiliary rod primary converters, using the accepted deformation measurement bases of the strain gauges. The model is invariant both to a change in the motion velocity of balls and to a change in the current length of the converters in the process of wear. This ensures the independence of the measured technological parameter from the magnitude of forces applied by the balls and from the length of the rod primary converters, which wear out during operation. Using the proposed mathematical model, it is possible not only to estimate the energy efficiency of ore grinding but also to track when a ball mill approaches an emergency as a result of overload, because the sensitivity to deviations of the volume of material from recommended levels is sufficiently high. Testing the method by computer simulation has shown that it is possible to estimate the energy efficiency of ore shredding with a relative error not exceeding ±2.5 %.
4. We have derived an equation relating the parameters of length of a basic rod primary converter, excited in the upright position by the impact of a reference ball against the idle end, to the measured parameters. The measured parameters include: the motion velocity of particles of the material inside the converter under the deformations of compression and unloading, the period of oscillations of the reference generator, and the number of pulses over a separate measurement cycle. This equation makes it possible to determine the current length of partially worn-out rods and, consequently, the thickness of the liner, and to determine the magnitude of its wear as the difference between the starting and current lengths. Information about the liner thickness in a ball mill is rather valuable because it makes it possible to dynamically estimate the performance of a technological unit and to prolong the inter-service periods of technological units that work under loaded modes.
5. The designed circuitry for the implementation of a system of automated control over energy efficiency of ore grinding in mills with a three-phase movement of balls and the liner wear has demonstrated the feasibility of a given approach in automated mode. By using modern microprocessor means, based on the proposed mathematical model, it is possible to determine with high accuracy the volume of destroyed large pieces of ore that corresponds to the energy efficiency of material grinding. Similarly assessed is the thickness of a liner in a ball mill. That makes it possible to significantly improve the performance of ball mills with a three-phase movement of balls, to reduce the consumption of power, balls, and liner. | 9,174.2 | 2019-06-28T00:00:00.000 | [
"Engineering"
] |
Inside the Group: Investigating Social Structures in Player Groups and Their Influence on Activity
Social features, matchmaking, and grouping functions are key elements of online multiplayer experiences. Understanding how social connections form in and around games and their relationship to in-game activity offers insights for building and maintaining player bases and for improving engagement and retention. This paper presents an analysis of the groups formed by users of the100.io, a social matchmaking website for different commercial titles, including Destiny, on which we focus in this paper. Groups formed on the100.io can be described across a range of social network related metrics. Also, the social network formed within a group is evaluated in combination with user-provided demographic and preference data. Archetypal analysis is used to classify groups into archetypes and a correlation analysis is presented covering the effect of group characteristics on in-game activity. Finally, weekly activity profiles are described. Our results indicate that group size as well as the number of moderators within a group and their connectedness to other team members influences a group's activity. We also identified four prototypical types of groups with different characteristics concerning composition, social cohesion, and activity.
I. INTRODUCTION
Social relationships formed within and through online multiplayer games influence the engagement and user experience of players [1], [2]. Moreover, social relationships in games are essential drivers of retention and monetization in games [3], [4]. The facilitation and management of player communities and the connections between players is an important part of maintaining a healthy player base for a game and is vital for the survival of online multi-player games, which rely on a persistent presence [3], [4], [5], [6]. Building an understanding of how social connections are formed across such platforms - whether they are provided as part of a game or game distribution network or have grown around a game - and how connections can foster engagement, retention, or promote particular behaviors (e.g., to reduce toxicity among community members [7]), can thus offer actionable insights for companies to achieve this goal.
The importance of social connections in games means that massively multiplayer online (MMO) games - irrespective of hardware platform - routinely provide dedicated matchmaking or group-generation features in order to make it as easy as possible for players to find similarly skilled teammates, solve group quests, participate in raids, find opponents, a clan, or guild to join, etc. However, while many games such as World of Warcraft or Starcraft support the in-game formation of friendships, guilds, or groups [8], not all games (including Destiny) include features to build in-game communities. This has created an opportunity for external solutions such as online player-grouping websites and matchmaking services of various kinds. They actively seek to assist players in finding like-minded people to play with and thus in building and maintaining long-term social relationships in and around games. Social networks of grouping or matchmaking features, third-party services, or similar can be analyzed through the adoption of social network analysis (SNA) techniques combined with machine learning of contextual data such as demographic, self-report, and behavioral telemetry (e.g., [4], [6], [9], [10], [11], [12], [13]). SNA can hence be used as a foundation for investigating player interactions and relationships. In practice, however, SNA in games is an underexplored topic across network analysis and games user research [11]. Furthermore, the combination of social network data and contextual data is even rarer, with Rattinger et al. [4] being a notable exception. There is, thus, a general gap in existing work regarding the knowledge about how network behavior in games relates to the behaviors of a player or the group the player is part of, the psychological aspects of the player (e.g., motivation, preference, personality), or the in-game behavior of the player [6]. In addition, work so far has focused on groups formed within a game itself (e.g., [14], [15], [16], [17]) and not on groups formed on external looking-for-group facilities.
In this paper, the focus is on taking a step toward addressing the current situation by combining the social network with self-report information from the social matchmaking service the100.io across tens of thousands of players of the game Destiny [18] - a hybrid online first-person shooter and multiplayer/massively multi-player game. The work presented here extends previous efforts by not only considering a player-established community but also by integrating demographic and preference data.
We present a series of analyses targeting the problem of characterizing player groups and developing metrics to describe them, and investigate correlations between group characteristics and their activity level in Destiny. Specifically, we present a correlation analysis aiming at identifying the effect of group characteristics on group activity. Results show that the number of moderators, their connectedness, and the group size correlate with group activity. Categories of player groups developed via archetypal analysis [19], [20], [21] across a series of group features show the presence of four types of player groups with varying degrees of social cohesion, moderator activity, activity levels, character level, etc. Group activity is also presented as a function of weekdays to investigate when the100.io players schedule activities, across groups comprised of casual or serious players.
The metrics used to generate these results are based on factors and behaviors that are common across a wide range of online games and can thus likely be transferred to social behavior analysis in other titles as well. The archetypes presented provide a means for distinguishing different types of groups in games communities and thus give community managers and game designers concise information to act on to facilitate their needs.
II. RELATED WORK
Analyzing and understanding social interactions and connections between players in online multiplayer games is crucial for obtaining a deeper understanding of in-game behavior, player experience, and player retention [4]. Thus, it is important to understand how these games function as entertainment communities and social platforms and how groups within and outside games are formed and structured.
In the following, we discuss related work in the fields of: (1) groups and communities in games, (2) social network analysis in games, and (3) behavioral profiles and archetypes in games.
A. Groups and Communities in Games
Collaboration and competition have always been crucial elements of gaming and playing. Players always tended to form interest groups, with play communities existing long before modern multi-player online games [22]. Identification and analysis of such groups, social aspects, communicative strategies, and different interaction forms are relevant strategies to improve game design and to gain insights into social and communicative behaviors. Thus, understanding social behavior, groups, and communities in large-scale and popular multiplayer online titles is an essential step toward an improved understanding of player behavior. For example, Manninen [23] investigates interaction forms and communicative actions in multiplayer games and illustrates a social theory framework of interaction forms as a tool for designing and analyzing games.
Ducheneaut et al. [24] investigate and discuss social dynamics and social experiences in the large-scale gaming community of World of Warcraft and show that in-world grouping (e.g., through joint quests) is less important socially compared to player associations such as guilds. Guilds, player groups, and player communities, however, have a significant impact on player patterns. Ducheneaut et al. [15] explored structural properties of guilds which may contribute to the success or failure of the guild. The social network is approximated by relying on the locations of characters in the game world. Thurau and Bauckhage [25] performed a categorization of different guilds of players in World of Warcraft using matrix factorization in order to analyze the development of guilds over time. Poor [17], also focusing on World of Warcraft, studied the relationship between guild membership and character leveling, finding that guild membership does not significantly support leveling. Mason and Clauset [14] combined data on ad-hoc teams formed in Halo: Reach with survey data to investigate the influence of friendships on collaborative and competitive performance. In comparison to our work, players had to select their friends from a list compiled based on their game history, while in our case this information was directly accessible. Goh and Wasko [26] used a mixed-methods approach, including affiliation networks, to identify characteristics of potential guild leaders. Chen et al. [16] looked into guild dynamics, focusing on guild-joining behavior, guild participation, and movement between guilds. Contrary to all these works, which concentrate on in-game groups, we are focusing on an external service aimed at facilitating play in the first place.
Unfortunately, identifying and analyzing meaningful in-game groups and communities often poses a challenge, as the social network cannot be readily deduced when explicit information about connections is not available or accessible. However, implicit social connections formed, for instance, through player matches have been shown to be an important aspect of player engagement and player performance [4] and can be used to recommend teams and match-partners [27].
B. Social Network Analysis
Social Network Analysis (SNA) has been shown to be a valuable method to analyze social communities formed within traditional organizations [28] or in modern online platforms such as Facebook or Twitter [29]. It has become a significant tool in fields such as sociology, information science, political science, economics, or organizational studies (e.g., [30], [31]). However, its application for investigating gaming communities is comparatively new. As a consequence there are still relatively few studies that use SNA to analyze player behavior and structures. However, existing work so far has shown the potential of this graph-based approach for investigating social structures, match-partner recommender systems [27], and for identifying potential cheaters [32]. While most authors explored networks formed through friendships or groups, only a few looked at indirect connections, for example, formed through in-game behavior (e.g., [33]). Moreover, the state-of-the-art focuses on typical social network metrics to investigate social gameplay and does not include behavioral features or preference data. Recently, however, Rattinger et al. [4] explored social networks formed through matches in the hybrid shooter Destiny and combined it with behavioral profiles. The authors show correlations between such implicit social structures and in-game behavior, engagement, and performance.
C. Behavioral Profiling in Games
The availability of large-scale game behavioral data has led to a tremendous amount of attention to behavioral analytics in game development and research. The analysis of player behavior has rapidly emerged to become an integrated component of game development [6], [34], [35]. One critical challenge in game analytics is pattern finding and the development of actionable models of behavior based on such patterns and any contextual data. Behavioral profiling provides an opportunity for condensing highly varied and high-frequency user telemetry into condensed, actionable profiles. These can be used to inform design, assist matchmaking, build user prediction models, track problems, etc., similar to the application of profiling in areas such as web analytics [21], [36].
While a complete review of the previous work in behavioral profiling in games is out of scope of the current paper, it is important to note that the application of behavioral profiling to digital games is relatively new, arising with the introduction of large-scale user behavior data through hosting of games on social media platforms and with the introduction of mobile platforms [3]. One of the first publications addressing the problem of developing actionable behavioral profiles from behavioral telemetry in games was Drachen et al. [37], who worked with self-organizing networks to develop profiles characterizing player behavior in the major commercial title Tomb Raider: Underworld. Since then a substantial amount of research on the topic has been released, including Thawonmas and Iizuka [38], who used multi-dimensional scaling to characterize behavior in the game Shen Zhou Online. Evaluating the fitness of simplex volume maximization and k-means on behavioral data from Tera: Online and Battlefield 2, Drachen et al. [20] noted the different strengths and weaknesses of centroid-seeking vs. convex-hull-seeking clustering models. Normoyle and Jensen [39] introduced Bayesian Clustering to behavioral profiling in games, drawing on data from Battlefield 3. Bauckhage et al. [40] introduced spatiotemporal clustering and developed waypoint graphs that permitted behavioral-based partitioning of game maps. Drachen et al. [41] developed behavioral profiles for Destiny, comparing four different cluster models. In general, cluster analysis has become the primary machine learning tool used for profiling purposes. As a flexible unsupervised learning method, clustering is useful for pattern exploration and permits condensation of multivariate space [21]. Reviews of clustering models and their application in digital games are provided by Bauckhage et al. [21] and Drachen et al. [41]. Archetypal analysis (AA) [19], [42] is repeatedly mentioned in this literature as a scalable model for developing plainly isolated and logical profiles in games and is therefore adopted here. An introduction to AA is provided in Section V.
III. DESTINY AND THE100.IO
Destiny [18] is an online multiplayer shooter set in a science fiction-themed world where players take on the role of Guardians to defend the Earth against alien aggressors to save mankind from extinction. Players can play as one of three character classes, which can be leveled up to unlock new abilities and become more powerful. The game offers a wealth of weapons, armor, and other equipment, with most of these being modifiable as well. The game blends shooter mechanics with elements of role-playing games. The gameplay mainly revolves around individual and small team combat. Toward this end, Destiny offers various player vs. player and player vs. environment game modes. Multi-player is often performed by assembling players into fireteams, which work together to achieve a common objective or compete against each other.
However, Destiny itself does not provide any in-game matchmaking facilities for most activities, such as raids, to help players connect with each other. In the absence of such features, so-called Looking for Group (LFG) websites emerged which assist players in finding team mates. the100.io is a group matchmaking service that helps players to find a permanent group of like-minded people, while other LFG websites focus on temporary groups for instant matches. Users of the100.io need to create a profile providing different information such as preferred platform and preferred time of the day for playing, time zone, character level, and light level (see Section IV). Based on the entered preferences, the100.io automatically assigns the player to a group of similar players. However, players can also join other groups apart from the one they get assigned to. Groups also have different properties such as play style, platform, typical time of day for playing, and the number of members. Furthermore, as Destiny does not support cross-platform play, groups are specific to a certain platform. Also, each group can have moderators and sherpas. The latter are players who act as guides for inexperienced players. Besides that, the website allows players to add friends and to schedule and sign up for Destiny-related activities. For instance, a user can schedule a game for 9 PM CET and allow other members to sign up for it.
IV. DATA COLLECTION AND PREPROCESSING
Information about users, groups, and games is listed on pages of the100.io and was collected through a Python script as of December 16, 2016. The collected data set contains information about 218,214 players registered on the100.io that scheduled a total of 637,823 unique games and form 2,468 groups. Since the100.io allows for scheduling games for different video games, groups that did not report playing Destiny, games scheduled for games other than Destiny, and games that had no group information attached were removed. Groups composed of fewer than three players and groups with missing activity score information were excluded as well. Furthermore, user data was checked for invalid and missing values in the self-reported variables such as character level and light level, and these users were not taken into consideration for further analyses.
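A hedged sketch of the cleaning steps described above, written in Python (the paper states the data were collected with a Python script); the file and column names are hypothetical, and the level bounds follow the maxima mentioned in Section IV-A.

```python
import pandas as pd

# Hypothetical files and column names; the actual scraped schema is not given in the paper.
groups = pd.read_csv("groups.csv")   # one row per the100.io group
users = pd.read_csv("users.csv")     # one row per registered user

# Keep only Destiny groups with activity information and at least three members.
groups = groups[(groups["game"] == "Destiny")
                & groups["activity_score"].notna()
                & (groups["n_members"] >= 3)]

# Drop users with invalid or missing self-reported levels (maxima per Section IV-A: 40 and 400).
users = users[users["character_level"].between(1, 40)
              & users["light_level"].between(1, 400)]
```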
After cleaning the data, 586 groups remained, of which 196 groups were designated as serious and the remaining 390 as casual groups (see below). In terms of platform, 252 groups are dedicated to PS4, 42 to PS3, 216 groups are playing on Xbox One, 42 on Xbox 360, and the remaining 34 are PC groups. Visual inspection of the variables of interest did not indicate any remarkable differences among the different platforms, for which reason we did not distinguish among platforms for this first investigation. In total, 26,317 players are distributed across these groups, having played a total of 1,493,599 games at the time of data collection. While the100.io requires that friendships be confirmed by both parties before being considered friends, we also considered friendships if only confirmed by one party, as at least one user expressed interest in the connection. For this paper the variables of interest for groups are:
• play style, either casual or serious. Serious groups are groups which are intended for players with a serious and competitive play style. Casual groups are for people who play on a more leisurely basis. Here, it is important to note that the coding was not performed by ourselves. Rather, the distinction is made by the100.io itself and users get initially assigned to either a serious or casual group based on their self-reported play style.
• group size (Ng), i.e., the number of members of a group
• number of moderators
• number of sherpas
• density, as a measure of interconnectedness of the group members, calculated as the number of actual friendships divided by the number of potential connections
• global clustering coefficient, as defined by Newman [43], as a measure of the overall clustering of a group (given by 3 × the number of triangles in a network divided by the number of connected triplets of nodes)
• average degree centrality (dc) of sherpas: besides the number of sherpas, the connectivity of sherpas in the group might play a role for activity as well. As such we calculated the degree centrality (i.e., number of friendships / (Ng − 1)) for each sherpa and averaged it over all sherpas of the group.
• average degree centrality (dc) of moderators: as above, but for moderators.
• activity score: the100.io assigns a score to each group as an indicator of how active the group is. It equates to the number of confirmed sessions of a group (each time a member of the group joins a gaming session) over the past week.
• active games, that is, the number of recent and upcoming games as listed on a group's profile page as a snapshot of activity. Hence, it reflects the current activity level at the time of sampling, while the activity score indicates confirmed activity over a week.
In addition to these group-level variables, we derived information about the groups based on the individual members to obtain measures of the group members' experience, in particular:
• average level (level): the maximum level of a character of a player, averaged over all group members
• average light level (light level): the light level is a rating of a character's equipped gear, for example, weapons. A higher light level corresponds to better equipment and results in better offensive and defensive abilities.
Please note that both character level and light level are user-reported variables. However, as the100.io assigns players based on their provided data, we assume that players mostly report their actual data, as otherwise they may end up in groups not fitting their play style or experience.
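The group-level network metrics defined above (density, the Newman global clustering coefficient, and the average degree centrality of moderators) can be computed as in the following sketch; the friendship-graph input is hypothetical and networkx is used as one possible implementation.

```python
import networkx as nx

def group_network_metrics(friendships, members, moderators):
    """Compute the group-level metrics defined above for one group (illustrative sketch).

    friendships -- iterable of (user_a, user_b) pairs
    members     -- list of all group members
    moderators  -- subset of members acting as moderators
    """
    g = nx.Graph()
    g.add_nodes_from(members)
    g.add_edges_from(friendships)

    density = nx.density(g)          # actual friendships / potential connections
    clustering = nx.transitivity(g)  # 3 * triangles / connected triplets (Newman)
    dc = nx.degree_centrality(g)     # per member: friendships / (N_g - 1)
    moderator_dc = (sum(dc[m] for m in moderators) / len(moderators)) if moderators else 0.0
    return {"size": g.number_of_nodes(), "density": density,
            "clustering": clustering, "moderator_dc": moderator_dc}


# usage on a toy group of five members with one moderator
metrics = group_network_metrics([("a", "b"), ("b", "c"), ("a", "c")],
                                ["a", "b", "c", "d", "e"], moderators=["a"])
```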
A. Basic Data Description
Figure 1 shows histograms of the distribution of the basic group-related properties. First of all, group size (Figure 1a) varies mostly between 0 and 100 members with peaks both near 0 and near 100. The sudden drop after that can be explained by the fact that the100.io forms groups of 100 players. However, due to people getting invited to groups, groups may also get larger. Players in our dataset are all in all very advanced concerning their character level (Figure 1b), with most players having a level of or near 40 (the maximum possible character level). Light level (Figure 1c), although a little bit more dispersed, is also quite high, with most players having a light level between 300 and the maximum of 400. In terms of the number of moderators (Figure 1d), the majority of groups have none; similarly, most groups also do not have any sherpas (Figure 1e). Overall, however, groups have more sherpas than moderators. Most groups are also not at all or only loosely connected, as reflected by the very low density values for most groups (Figure 1f). Lastly, it is also noticeable that a large portion of the groups only has very low activity scores. While groups with activity scores of up to 200 are still quite common, groups with activity scores larger than that are rare.
B. Correlations
Table I shows the results of a Spearman rank correlation (chosen because of non-normally distributed variables, see, e.g., [44]), relating the variables outlined in Section IV except play style (due to being a dichotomous variable). Please note that some individual correlations are based on a slightly smaller number of groups (573), as some groups were excluded because of missing data for the respective correlations, and that the global clustering coefficient has only been calculated for groups with at least one triad (a group of three connected users), that is, 270 groups, as otherwise the coefficient would be undefined. To account for multiple comparisons, a Bonferroni-corrected (cf. [44]) α-level of .00091 was used to determine statistical significance. In the following discussion, we will restrict ourselves mainly to correlations with |ρ| > .5 in relation to the activity related measures.
First, we should note that the number of moderators and sherpas is highly correlated with group size. Since density is measured relative to the network size, it is also worth noting that density also increases with group size, i.e., players in larger groups establish relatively more friendships. The average level of the players in a group, however, did not result in any noteworthy correlations, which, very likely, is a direct consequence of the level cumulating at the maximum level of 40. However, the light level did show an influence but also has been more widely distributed. Both activity score and the number of active games show similar correlations with the other metrics, and as such we will not distinguish between them in the remainder of this section. Concerning group composition and connectedness, group size, number of moderators and sherpas, average connectedness of moderators and sherpas, density, and to a smaller extent the clustering coefficient are all positively correlated with activity. As such we also conducted a multiple linear regression to better understand the influence of the individual factors and to develop a model for predicting group activity from the number of moderators, the number of group members, the number of sherpas, the connectivity of sherpas and moderators, as well as density. Basic regression coefficients are shown in Table II. Three of the six predictor variables have a significant (p < .001) zero-order correlation with group activity, namely group size, number of moderators, and the average degree centrality of moderators. The three-predictor model was able to account for 72.67% of the variance in group activity, F(4, 574) = 254.5, p < .001, with an adjusted R-squared of 0.7267.
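A sketch of the statistical procedure described above: pairwise Spearman rank correlations with a Bonferroni-corrected α-level, followed by an ordinary least-squares regression of group activity on the group characteristics. The DataFrame and its column names are assumptions for illustration, not the paper's actual variable names.

```python
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

df = pd.read_csv("group_metrics.csv")          # hypothetical file: one row per group
cols = ["size", "n_moderators", "n_sherpas", "moderator_dc", "sherpa_dc",
        "density", "clustering", "avg_level", "avg_light_level",
        "activity_score", "active_games"]

rho, pval = spearmanr(df[cols], nan_policy="omit")   # pairwise rank correlations (cf. Table I)
n_tests = len(cols) * (len(cols) - 1) // 2
alpha_corrected = 0.05 / n_tests                     # Bonferroni-corrected level, here about .00091

predictors = ["size", "n_moderators", "n_sherpas", "moderator_dc", "sherpa_dc", "density"]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["activity_score"], X, missing="drop").fit()
print(model.rsquared_adj)                            # cf. the adjusted R-squared reported above
```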
C. Weekly Activity
As pointed out above, we collected data on 1,493,599 games. These games were scheduled from January 1, 2011 to December 31, 2017 (games can be scheduled in advance), with the large majority of them taking place during 2015 (883,695) and 2016 (587,247). Figure 2 (left) shows the number of games scheduled and the number of players signing up for these games on weekdays and time of day. Surprisingly, and contrary to what we would have expected, Saturday and Sunday have the lowest number of games, while activity peaks on Tuesdays. Activity across the other four days is roughly constant, with activity considerably increasing toward the evening of each day. These patterns are also evident if we split the scheduled games by serious and casual groups (see Figure 2, right), i.e., the weekly behavior is consistent for casual and serious groups. The low activity on weekends may indicate that players have less use for the100.io during weekends, possibly because their regular playing groups have no trouble coordinating on weekends. However, this is currently only speculative and further research would be necessary to verify this assumption.
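The weekday/hour aggregation behind Figure 2 could be reproduced along the following lines; the schedule file and its columns are hypothetical.

```python
import pandas as pd

games = pd.read_csv("scheduled_games.csv", parse_dates=["start_time"])  # hypothetical schema
games["weekday"] = games["start_time"].dt.day_name()
games["hour"] = games["start_time"].dt.hour

# number of scheduled games per weekday and hour (cf. Figure 2, left)
games_by_slot = games.groupby(["weekday", "hour"]).size().unstack(fill_value=0)

# number of scheduled games per weekday, split by play style (cf. Figure 2, right)
games_by_style = games.groupby(["play_style", "weekday"]).size().unstack(fill_value=0)
```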
D. Archetypal Analysis
While commonly employed to detect patterns in behavioral analyses in games, interpretations of clusters from the perspective of applicability of the developed profiles can be difficult [20], [37], [45]. This is notably the case for the perhaps most widely adopted unsupervised clustering algorithm, k-means, which is theoretically suited for behavioral analytics. However, as it is focused on retrieving compact cluster regions, results can be hard to interpret in practice, as discussed by Bauckhage et al. [21], [46].
The soft clustering based analysis in this work is performed by utilizing archetypal analysis (AA). AA was introduced by Cutler and Breiman [19], and was more recently extended to be applicable to large-scale datasets [21], [46], [47]. Formally, as a constrained two-matrix factorization technique, AA allows us to arrive at compact and interpretable data representations via extreme representative points, called archetypes, and stochastic coefficients that indicate belongingness ratios to the corresponding archetypes. Considering a column data matrix X ∈ ℝ^(m×n) defined as X = [x_1, x_2, ..., x_n], archetypal analysis deals with finding X ≈ ZA, where the two matrices Z ∈ ℝ^(m×k) and A ∈ ℝ^(k×n) represent the archetype matrix and the column-stochastic coefficient matrix. Each column of Z is an archetype living in the convex hull of the data, whereas each column of A lives in a (k − 1)-simplex and is used to represent each data point as a convex mixture of the columns of Z. It is important to note that since every data point x_i in X has a corresponding vector a_i of lower dimensionality (i.e., k ≪ m), AA also allows for dimensionality reduction.
Parameter selection is done by applying an alternating least squares procedure where each iteration requires the solution of several constrained quadratic optimization problems. AA has become attractive for behavioral analytics in games because it permits the detection of special player behaviors, such as elite players, people adopting cheats, or players who struggle to progress in the game, as it is focused on finding extremes in the dataset [20], [46], [48]. Specifically, AA automatically detects a combination of features that leads, when being locked in pairs, to a similar but more complex segmentation than k-means, without requiring any user intervention (e.g., in determining the value of k). Where k-means produces cluster centroids, AA is different in that it is not looking for commonalities between players, but rather for archetypal (extreme) profiles that do not reside in dense cluster regions but at the edges of the multidimensional space. In game analytics, archetypal data representations were previously used for profiling player behavior [20], [45], [49], analyzing population interest [50], building game recommender systems [48], and analyzing behavioral structures in social multiplayer online games [4]. For more detailed applications of AA for behavioral profiling we refer to [20], [21], [45].
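A didactic sketch of the X ≈ ZA factorization with simplex-constrained coefficients. This is not the exact Cutler–Breiman algorithm nor the large-scale variants cited above: the sum-to-one constraints are enforced approximately through a penalty row in a non-negative least-squares solve, and the archetype update is a projected least-squares step.

```python
import numpy as np
from scipy.optimize import nnls

def simplex_ls(B, y, penalty=1e3):
    """min ||B a - y|| with a >= 0 and sum(a) = 1; the sum-to-one constraint is
    enforced softly by appending a heavily weighted row of ones before NNLS."""
    B_aug = np.vstack([B, penalty * np.ones((1, B.shape[1]))])
    y_aug = np.append(y, penalty)
    a, _ = nnls(B_aug, y_aug)
    return a

def archetypal_analysis(X, k, n_iter=50, seed=0):
    """Simplified alternating scheme for X ~ Z A (columns of X are data points).

    Z (m x k) holds the archetypes, A (k x n) the stochastic coefficients.
    Didactic sketch only, not the exact algorithm used in the cited references.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    Z = X[:, rng.choice(n, size=k, replace=False)].copy()   # initialize archetypes from data points
    for _ in range(n_iter):
        # A-step: code every data point as a convex mixture of the current archetypes
        A = np.column_stack([simplex_ls(Z, X[:, i]) for i in range(n)])
        # Z-step: unconstrained least-squares update, then project each archetype back
        # onto the convex hull of the data (Z = X B with simplex-constrained columns of B)
        Z_ls = X @ np.linalg.pinv(A)
        B = np.column_stack([simplex_ls(X, Z_ls[:, j]) for j in range(k)])
        Z = X @ B
    return Z, A

# toy usage on a placeholder feature matrix (7 features x 573 groups)
X = np.random.default_rng(1).random((7, 573))
Z, A = archetypal_analysis(X, k=4, n_iter=20)
```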
In our case, we used AA to find prototypical groups of the100.io. To facilitate interpretability of the resulting clusters we kept the number of included features low while ensuring to include group characteristics (group size, density, number of moderators and sherpas), activity-related measures (activity score), as well as factors reflecting the experience of groups (average level and average light level). Before running the AA, we excluded groups that contained invalid group features, such as invalid average level and average light level. This yielded a total number of 573 groups remaining for the archetypal analysis. We then ran the AA for two to ten clusters (k = 2–10) and, after inspection of the percentage of explained variance using the elbow method (following [20], [21], [51]) and of the clusters for interpretability (following [20], [37]), we ended up with a four-cluster solution whose profiles are shown in Figure 3. While the scree plot (Figure 4) showed no major elbow, higher numbers of clusters such as five or six mainly resulted in A2 being split into smaller fragments. Please note that, due to AA being a soft clustering approach, a group belongs to the different archetypes (A1–A4) with varying degrees [21]. A4 are highly active, large, and densely connected groups with a fairly large number of moderators and sherpas, while at the other end of the spectrum groups in A1 can be characterized by being small and inactive (and members may thus be in danger of leaving again). Between these two extremes, A2 covers groups which are already larger than the ones in A1 but have lower experience scores (level and light level), while groups associated with A3 are fairly large, have an increasing number of sherpas, and also show an increase in the number of friendships.
To illustrate the structural characteristics of these groups, Figure 5 shows prototypical groups for each of the four archetypes (belongingness coefficient of each group > .98). Each sector of the chord diagram represents one group member, with the sector being colored according to the member's role in the group. Edges connecting members indicate friendships. The inner circle is color-coded to reflect the activity score of the group. As can be seen from Figure 5a, the prototypical group belonging to A1 has just four members, none of which are friends with each other. Probably as a result of the very small group size, the group is also not very active. These are probably groups which have been newly formed on the100.io and thus are still waiting for more members. The group serving as an example for A2 (cf. Figure 5b) already has more members, including one serving as sherpa, but members have not established connections so far, with one exception. This starts to change with A3 (Figure 5c). In the chosen representative group, members already have more connections, the group size itself has approximately doubled compared to the prototypical group of A2, and some members have already taken the role of moderators and sherpas. The last group (Figure 5d), belonging to A4, again roughly doubled in group size, with members being much more connected. The group in question also has a considerable number of moderators and sherpas, and some members even take the role of both.
As noted above, AA provides the option for both soft and hard clustering. Each has distinct advantages and disadvantages. In the current analysis, soft clustering was used, i.e., a group does not belong exclusively to one of these four archetypes but can be expressed as a combination of them, which provides the ability to evaluate cluster affiliations in a more nuanced fashion than hard clustering [50]. Hard clustering does not provide affiliation information across clusters, but has the advantage of providing clearer output. Table III shows the result of a hard clustering of the groups based on the highest membership value, together with descriptive statistics for each cluster. In order to accentuate the AA-developed profiles, k-means clustering was also applied to the group dataset. K-means is a centroid-seeking cluster model - covered in detail in [51] - and thus works differently than the convex-hull-seeking AA [20], [19], [36]. As for AA, k-means was run for k = 2 to k = 10 clusters. Similarly to AA, the elbow plot indicates a k = 4 solution (see Figure 4), with, however, a much more distinct elbow. Despite the two models having different search parameters, the resulting profiles are quite similar to the AA profiles and of similar size (n = 240, 31, 217, 85; same ordering as in Table III, see also Figure 3), adding support to these. As the k-means results support the AA results, we are not covering them in greater detail here.
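Hard assignment from the soft AA coefficients and the k-means comparison can be sketched as follows; the coefficient matrix A is taken from the AA sketch above and the group feature matrix is a placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((573, 7))     # placeholder for the real group feature matrix (rows = groups)
A = rng.random((4, 573))            # placeholder for the AA coefficient matrix from the sketch above
A /= A.sum(axis=0)                  # columns sum to one (stochastic coefficients)

hard_labels = A.argmax(axis=0)      # archetype with the highest membership value per group (cf. Table III)

inertias = []                       # elbow inspection over k = 2..10 for the k-means comparison
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    inertias.append(km.inertia_)
```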
VI. DISCUSSION
First, it is noticeable that the100.io attracts high-level players with high light levels, across casual and serious groups. Even players considering themselves as casual can thus be viewed as engaged and dedicated players. Most of the groups, however, are not very active and are not very connected, as reflected by the overall low density values. However, the results of this study indicate that activity increases with group size. While this may not seem very surprising, it may warrant further discussion in light of the work of anthropologists such as Dunbar [52], who stipulated that there is an upper limit of stable relationships a human can maintain. For example, work on Twitter [53] found evidence of such an upper limit, finding that users can entertain up to 100-200 stable relationships.
In the context of games this number has been found to be much smaller. Ducheneaut et al. [15] found that most guilds in World of Warcraft have 35 or fewer members. Chen et al. [16] found the average guild size to be between 50 and 60 members and noted that the greatest instability seems to occur at a group size of around 60 with an average level of 60. Our results do not indicate such a size limit. This is probably due to guilds requiring more communication and organization than groups on player-matching services, which are intended to easily allow players to find playmates. Building larger groups thus seems to be more desirable in this context, as it offers players more potential playmates. However, we should note that the maximum group size was 203 in our dataset. Williams et al. [54], also looking into World of Warcraft guilds, noted that smaller groups tend to be more focused on social bonds, whereas in our case density increases with group size as well.
Again, this might be a consequence of the different purposes of guilds and the100.io. With increasing group size, usually more effort has to be spent to maintain cohesion among the group members. Our results show that as groups get larger, they are getting more organized, with larger groups having a larger number of sherpas and moderators or even people who are taking on both roles. Indeed, the number of moderators together with the connectedness of the moderators seems to be the largest predictor of group activity. Moderators seem to act as a sort of facilitator of play within the groups. In this sense, it might thus be worthwhile to ensure that moderators are also present in smaller groups. At that point it might also be worthwhile to reemphasize that while the connectedness of moderators predicts activity, we did not find evidence that the same also holds true for the overall interconnectedness of the group (i.e., density).
The results of the archetypal analysis give us an impression of the overall distribution of the groups. In some way, the identified archetypes can be viewed as reflecting the evolution of groups on the100.io, with groups starting small and then developing into larger, better connected, and organized groups. Viewed from an activity perspective, at the top end of the activity spectrum we have groups characterized by having many quite well connected members and many moderators and sherpas (A4). This reflects the results from the correlation and regression analysis that moderators seem to serve as a facilitator of activity. However, assigning groups to the archetype based on their highest membership values shows us that only about 10.5% of the groups in our dataset fall within this high-activity cluster. At the lower end we have groups with a small number of members and no or only a small number of moderators and sherpas (e.g., groups mainly belonging to A1 or A2). These are most likely groups recently created on the100.io. In terms of activity, a large number of groups falls between these two extrema but still has rather low activity, with an average activity score of 11.4 if assigned to their dominant archetype (cf. Table III, A3). Providing means for such groups to reach the characteristics of A4 may help foster activity. For example, a LFG platform could provide recommendations to a group's founder for promoting group members (preferably highly active and well connected ones) to moderators. A k-means cluster analysis led to four clusters with similar characteristics, adding support for the AA solution.
In terms of activity over time, we witnessed lower activity on weekends than on weekdays, with activity peaking on Tuesday evenings, irrespective of casual or serious groups. This peak seems to coincide with the weekly reset time, when many activities and rewards are reset by Bungie, which as of the time of data collection took place at 2:00 AM Pacific time (see [55]). In general, afternoons and evenings are the preferred gaming times for all days of the week. As such it seems advisable to encourage events on weekday evenings, where they are likely to get more attention.
In terms of the limitations of the current study, we should note that we focused on one specific game - Destiny. However, metrics that are specific to Destiny - level and light level - did not lead to any relevant conclusions. Since the other metrics used are mainly independent of the actual game, we believe that our results may also appear when looking at groups playing other, similar games. However, we need further investigation to confirm this assumption. Furthermore, while we were able to obtain data on scheduled games, we could not verify if these games really took place or how many players participated in these games in the end. As we did not have access to these data, we could also not assess how long the games lasted or if they fell apart immediately after starting. To take these factors into account, one would need to be able to relate the games scheduled on the100.io with the actual instances in Destiny. Despite this, we believe our paper contributes to the study of general MMO matchmaking and player behavior through the lens of Destiny and the100.io.
Having said that, there are also several interesting avenues for further research. Among others, as we only looked at a snapshot in time, it might be worthwhile to investigate how groups develop over time. Moreover, given that Destiny exposes in-game data through a publicly available API, it might be interesting to observe how group structure correlates with the in-game behavior of the groups, or vice versa. Both directions could lead to further interesting insights on how groups need to be organized to stay healthy and active.
Lastly, while the work presented here is focused on the matchmaking service the100.io, it constitutes part of more considerable interdisciplinary challenges around how to handle group formation, group maintenance, and service, as well as overall community management in online environments [22], [29], [30], [31]. These are challenges that cut across domains such as information systems, human-computer interaction, social science, media, psychology, and application design. This provides a strong motivation to investigate social connections in and around games further.
VII. CONCLUSIONS
Online multiplayer and massively multiplayer games such as Destiny depend on players being able to find other people to play with [1], [4], [8], [12], [22], [27], [56]. Being able to analyze, categorize, and understand social structures in player communities therefore not only provides insights into online behavior but can also be leveraged by game companies to enhance matchmaking and player grouping, tune in-game activities and events to the behavioral patterns of groups, as well as improve engagement via promoting group types that facilitate the requirements of the players. Being able to analyze online player communities at both the group level and the individual level can thus directly contribute to a more sophisticated user experience in multi-player games. As an example, identifying players that are not socially active and thus in danger of leaving, or groups that are not active or of sufficient size to foster activity, can be of great value in order to counteract negative development, for example, by providing the kind of help needed, incentives, or even adapting game content. In essence, understanding how to establish a thriving community which is well-aligned with the particular needs of a particular game can be a valuable asset for ensuring long-term engagement and retention.
The results presented in this paper contribute to the understanding of online player communities [4], [11], [22], [27], [32], [56]. A large-scale analysis of an online social player community has been presented, covering tens of thousands of players and integrating data about their social connections as well as self-report data about playing preferences. While social networks in games have seen some research in the past, such work has almost exclusively relied on implicit in-game "friends" connections (Jia et al. [11] being a notable exception), rather than communities established explicitly by players and formed around one or more games, where repeated shared activities permit evaluation of the strength of connections between players and groups.
Analyses have been presented that investigate correlations between group characteristics and activity level, showing that the size of the group, as well as the number of moderators and how well connected they are with the other group members, correlates with activity. The influence of sherpas on activity has not been as high as we would have expected. Categories of groups have been generated using archetypal analysis, indicating four distinct types of player groups, each with their own characteristics. Group activity was also presented as a function of weekdays to investigate when the100.io players schedule games, across groups comprised of self-reported casual or serious players. Finally, we here take an applied angle, describing and defining a series of metrics, for example, group characteristics, and models, such as archetypal analysis, which can be employed by game developers and community managers to gain insights into their communities, whether formed through or around a game.
Fig. 1: Histograms of the distributions of group-related characteristics for serious ( ) and casual ( ) groups.
Fig. 2: Activity on weekdays and time of day. Left: Number of scheduled games ( ) and number of players ( ) signing up for these games. Right: Average number of scheduled games across casual ( ) and serious ( ) groups.
Fig. 5: Prototypical groups for each of the four archetypes. All groups have a belongingness coefficient greater than 0.98 with respect to the archetype in question. Each sector represents one group member colored according to the role in the group ( = moderator, = sherpa, = sherpa & moderator). Friends are connected by lines. The background color of the inner circle reflects the group's activity score (0–352).
TABLE I: Spearman rank correlations between different group-related characteristics. Correlations with |ρ| > .5 are written in bold face.
TABLE II: Multiple linear regression of group characteristics on group activity.
TABLE III: Groups belonging to the different archetypes based on the highest membership value, together with descriptive statistics of these groups. | 9,683.8 | 2019-12-01T00:00:00.000 | [
"Computer Science",
"Sociology"
] |
Influence of Glass and Sisal Fibers on the Cure Kinetics of Unsaturated Polyester Resin
Laboratory of Polymers, Universidade de Caxias do Sul – UCS, Rua Francisco Getúlio Vargas, 1130, CEP 95070-560, Caxias do Sul, RS, Brazil Laboratory of Composite Materials, Program of Postgraduate Studies in Mining, Metals and Materials Engineering, Department of Materials Engineering, Federal University of Rio Grande do Sul – UFRGS, Av. Bento Gonçalves, 9500, CEP 91501-970, Porto Alegre, RS, Brazil Federal Institute of Education, Science and Technology of Rio Grande do Sul – IFRS, Rua Mário de Boni, 2250, CEP 95012-580, Caxias do Sul, RS, Brazil
Introduction
Over the past few decades, polymers have replaced many conventional materials due to benefits such as low density and easy processability. In particular, reinforced polymers have attracted researchers' attention due to their advantages over established materials. However, to obtain specific properties, polymer composites need to be modified with appropriate constituent materials, for example fibers, whiskers and fillers [1][2][3] .
There are basically two types of fibers: natural and synthetic. Vegetal fibers are biodegradable, readily available from natural resources, cheap, and of lower density and abrasiveness, while offering higher specific strength, among other features. Other desirable properties include high impact strength, high flexibility, less equipment abrasion, less skin and respiratory irritation, vibration damping and enhanced energy recovery [1][2][3][4] .
Furthermore, natural fibers contain a significant amount of hydroxyl groups and, due to their high cellulose content, their structure is around 70% crystalline, which marks relevant structural differences in comparison with, for example, glass fiber 5 . The differences cited in the literature must be considered for possible applications of natural fiber-based composites, as the contact surface between the matrix and the discontinuous phase can directly influence the cure kinetics of the polymer matrix 6 .
Coir, sisal, jute, waste silk, cotton and bamboo are some natural fibers described in the literature [4][5][6] . Their composition consists mainly of cellulose fibrils embedded in a lignin matrix, the fibrils being aligned along the length of the fiber; their components include cellulose, hemicellulose, lignin, pectin, waxes and water [7][8][9][10] . The reinforcing efficiency of natural fibers is related to the nature and crystallinity of the cellulose 7 .
The literature reports that the sisal fiber density (1.33-1.45 g.cm⁻³) is lower than that of E-glass (2.5-2.55 g.cm⁻³) [11][12][13] . The lower sisal fiber density offers the potential to provide higher added value, especially in the automotive industry, through the manufacturing of non-structural lightweight parts 14 .
In contrast, owing to poor moisture resistance, degradation of some properties of natural fiber composites must be considered, which limits their use in some applications. Hybridization of natural fibers with synthetic fibers is one of the techniques adopted to overcome some of the identified drawbacks, combining two fibers in a single matrix so that the disadvantages of one fiber are compensated by the presence of the other. The application of natural or synthetic fibers as reinforcements in composite materials requires strong adhesion between the fibers and the matrix, and physical and chemical treatments can be used to optimize this interface. To improve the mechanical properties of the final composite, a thin reactive coating (sizing), generally consisting of coupling agents, can be used to treat the fibers; this induces physical or chemical bonds between the matrix and the fibers 15,16 . Kinetic studies are required in order to understand and optimize the manufacture of composites. Many isothermal [17][18][19][20] and non-isothermal 21 models for thermal analysis have been used to determine the cure kinetics. Some non-isothermal models are based on analyses carried out at different heating rates (β) applied to the system studied [22][23][24] .
In an earlier study reported by Fei Yao et al. 25 , several models were employed to evaluate the decomposition kinetics of different fibers. The Kissinger, Friedman, Flynn-Wall-Ozawa and modified Coats-Redfern methods were used. For all fibers, approximately 60% of the thermal decomposition occurred in the temperature range of 215 to 310 °C. For all methods, the activation energy (Ea) showed a similar trend. The Ea value obtained is the sum of the activation energies of all chemical reactions and physical processes that occur during thermal degradation.
The aim of this study is to evaluate the cure kinetics of an unsaturated polyester resin (UPR), containing glass fiber (UPR/GF) and sisal fiber (UPR/SF) reinforcements.
Fiber milling
Fibers were milled in a Medizintechnik ball mill (model TMA-69022; Leipzig, Germany) in order to increase the surface area by reducing the fiber size and to improve the homogeneity and dispersion of the reinforcements in the polyester matrix. Ceramic spheres of 30 and 25 mm diameter, in quantities of 35 and 81 units, respectively, were used during the milling process. The sisal fiber was washed twice in distilled water and dried for 90 minutes at 105 °C.
Granulometry analysis
For the granulometry analysis of the ground fibers, a Produtest shaker was used. The sieve openings were 0.149 mm and 0.074 mm, plus a bottom pan. The granulometry analysis was carried out for 20 minutes.
Sample preparation
Sample preparation was carried out with 25 vol% of glass or sisal fibers (16.7 mL of fibers for 50 mL of polyester resin) and the incorporation of 2 vol% of P-MEK. The densities were determined according to ASTM D792 in n-butyl acetate.
Scanning electron microscopy (SEM)
Scanning electron microscopy (SEM) was carried out using a Superscan S-550 apparatus, with a secondary electron detector and an acceleration voltage of 15.0 kV. The scanning was carried out at magnifications of 2000× and 500×. The samples were previously metalized with gold.
Differential scanning calorimetry (DSC)
The DSC measurements were recorded by a Shimadzu DSC50 apparatus under nitrogen atmosphere (40 mL/min).Samples of ≈10 mg for the polyester and ≈30 mg for the composites were analyzed.Samples were heated from 25 °C up to 250 °C at four different heating rates (5, 10, 20 and 40 °C/min) 26-30 .
The Flynn-Wall-Ozawa (FWO) method
The kinetic method proposed by Flynn, Wall 27 and Ozawa 28 can estimate the activation energy of chemical reactions. The FWO method is an isoconversional method whose resolution is attained by integral approximation. It relies on Doyle's approximation 21,30 , through which Equation 1 can be obtained, where g(α(T)) is related to the conversion α.
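For reference, the commonly used form of the FWO expression obtained with Doyle's approximation (the equation body itself did not survive in the extracted text above) can be written as follows; the exact numbering and notation of the original article may differ, so this should be read as the standard textbook form rather than a verbatim reproduction of Equation 1.

```latex
\log \beta = \log\!\left(\frac{A\,E_a}{R\,g(\alpha)}\right) - 2.315 - 0.4567\,\frac{E_a}{R\,T}
```

Here β is the heating rate, A the pre-exponential factor, Ea the activation energy, R the gas constant, T the absolute temperature and g(α) the integral reaction model; at fixed conversion, the slope of log β versus 1/T is proportional to Ea.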
Results and Discussion
The densities estimated for the neat resin, glass fiber and sisal fiber were 1.17, 2.50 and 1.10 g.cm⁻³, respectively.
The granulometry test results indicated particle sizes of 30 ± 11 µm for the ground sisal fiber and 26 ± 8 µm for the ground glass fiber. In addition, in the SEM analysis of the ground glass fiber, fiber-shaped particles (Figure 1a, b) can be observed; however, in general, the geometry evidenced was that of a refined powder. In contrast, for the ground sisal fiber (Figure 1c, d), the fibrous aspect was not clearly identified.
Differential Scanning Calorimetry (DSC)
Figure 2 shows the DSC thermograms for the UPR obtained at different heating rates. Only the mass of resin was correlated to the heat released in the polymerization processes. As expected, an increasing heating rate promoted a faster reaction, leading to a shorter time required to reach total conversion. Similar behavior was observed for the composites (Figure 3).
Conversions (α) could be obtained by integrating the exothermic peak related to the cure reaction of the systems, using Equation 1. The total heat of reaction values (ΔHtot) used to determine the conversions are presented in Table 1.
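As an illustration of this step, the sketch below computes a conversion curve by partial integration of a DSC exotherm; the array names, the synthetic data and the assumption of an already-subtracted baseline are illustrative choices, not details taken from the original article.

```python
import numpy as np

def conversion_from_dsc(time_s, heat_flow_w_per_g):
    """Estimate cure conversion alpha(t) by partial integration of the
    exothermic DSC peak (baseline assumed already subtracted)."""
    # incremental areas of the exotherm between consecutive samples (trapezoid rule)
    dt = np.diff(time_s)
    segment_heat = 0.5 * (heat_flow_w_per_g[1:] + heat_flow_w_per_g[:-1]) * dt
    cumulative_heat = np.concatenate(([0.0], np.cumsum(segment_heat)))
    dh_tot = cumulative_heat[-1]          # total heat of reaction per gram
    alpha = cumulative_heat / dh_tot      # conversion between 0 and 1
    return alpha, dh_tot

# usage with synthetic data: a Gaussian-like exotherm
t = np.linspace(0, 600, 601)                 # seconds
q = np.exp(-((t - 300) / 60.0) ** 2)         # W/g, illustrative only
alpha, dh_tot = conversion_from_dsc(t, q)
print(f"total heat {dh_tot:.1f} J/g, alpha at t=300 s: {alpha[300]:.2f}")
```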
According to Martín 31 , the heat generated in a cure reaction is independent of the heating rate. The author states that when a low heating rate is used the calorimetric signal is small but the cure time is longer, and vice versa; therefore ΔH should remain constant. However, the reduction in enthalpy observed here with increasing heating rate may be associated with the shorter times available for the cure reaction to occur.
For isoconversional methods, the reaction rate at a fixed conversion is only temperature-dependent. Thus, using Equation 1, proposed by FWO, it is possible to obtain a plot of log β vs. 1/T for each conversion (α). Figure 4 shows the log β vs. 1/T plot for the unsaturated polyester resin; the UPR/GF and UPR/SF systems showed similar behavior. The conversion values studied for all composites were set between 0.02 and 0.8. The activation energies of the cure process for the systems were calculated from the slopes of the straight lines obtained [26][27][28] .
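A minimal numerical sketch of this isoconversional fit is given below, assuming the conversion curves α(T) have already been computed for each heating rate; the factor 0.4567 comes from Doyle's approximation used by the FWO method, and all variable names are illustrative.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def fwo_activation_energy(curves, alpha_levels):
    """curves: list of (beta in K/min, T in K, alpha) tuples, one per heating rate,
    with alpha monotonically increasing. Returns Ea (kJ/mol) per conversion level."""
    ea_values = []
    for a in alpha_levels:
        inv_T, log_beta = [], []
        for beta, T, alpha in curves:
            T_a = np.interp(a, alpha, T)      # temperature at which conversion a is reached
            inv_T.append(1.0 / T_a)
            log_beta.append(np.log10(beta))
        slope, _ = np.polyfit(inv_T, log_beta, 1)
        # slope = -0.4567 * Ea / R  =>  Ea = -slope * R / 0.4567
        ea_values.append(-slope * R / 0.4567 / 1000.0)
    return ea_values
```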
The activation energy values are shown in Table 2. The results obtained for the Ea of UPR showed two kinetic stages. Firstly, for α = 0.02-0.2, a decrease in Ea was observed from 46.7 to 43.7 kJ.mol⁻¹. With the advancement of the reaction, Ea increased from 43.9 to 46.6 kJ.mol⁻¹ for α = 0.3-0.6. A stabilization of the activation energy in the range of 47.3-47.2 kJ.mol⁻¹ was observed for α = 0.7-0.8. Similar Ea values for UPR were found in the literature 32 . The characteristic chemical reactions of each stage have been widely discussed in the literature 33 .
In a study of the UPR cure behavior by Lu and co-authors 34 using the Avrami equation, it was suggested that the beginning of cure is characterized by styrene polymerization through the formation of microgel structures, which remain dispersed in the monomers and oligomers. Also, as the polymerization occurs rapidly, the nucleation of the crosslinking network can be considered practically instantaneous.
Yang and Lee 35 also reported that the cure of UPR is followed by the formation of structural heterogeneities (microgel) resulting from intermolecular networks and, as a consequence, spherical structures are formed.
Table 2 and Figure 5 show the Ea values for the neat resin and the UPR/GF and UPR/SF composites. A decrease in the activation energy values is observed in the conversion range of 0.02 to 0.3 both for the neat polyester resin and for the UPR/GF composite. For the UPR/SF composite, the decreasing range is α = 0.02-0.4. After this, the Ea values increase for all samples studied. The initial reduction in Ea is related both to an autocatalytic effect and to the exothermic heat generated during the polymerization process. For the neat polyester resin the subsequent increase in Ea at higher conversion values can be related to the formation of a microgel structure 34,35 . For the UPR/GF and UPR/SF composites, this behavior can be attributed to the formation of a microgel structure, as well as to a restriction of the reaction due to the presence of a non-reactive phase.
Some authors state that the interface between the polymer matrix and the glass fiber, as well as the chemical composition and the functionalized surface, affects the cure reaction [36][37][38][39] . Kalaprasad et al. 40 describe the nature of glass fiber as isotropic and amorphous and, although the glass fiber has a thin reactive coating (sizing), this may explain the difference between the Ea values of UPR/GF and UPR 41 .
The Ea values for UPR/SF were found to be higher than those for neat UPR and for the UPR/GF system. The Ea values for α = 0.02 and 0.8 were 86.9 and 73.7 kJ.mol⁻¹, respectively. Sisal is composed of organic structures (cellulose, hemicellulose, lignin, etc.) and has a significant amount of hydroxyl groups 5,42 ; these structures can interact with the resin, affecting the cure process.
Mwaikambo and Ansell 5 studied the chemical modification of sisal and other fibers by alkalization and reported that all plant fibers are composed basically of cellulose, hemicellulose, lignin, wax and pectin, among others. Therefore, the chemical composition of the fibers affects their properties. Plant fibers with higher lignin content have better reactivity and, as a result, are appropriate for use as chemical modifiers. On the other hand, fibers with higher cellulose content have superior stiffness and are better employed in resin reinforcement. The structure of cellulose is semicrystalline and has hydroxyl groups; however, a high amount of the cellulose is surrounded by substances like lignin 5 . Polyester resins are of polar nature, so the removal of surface impurities from the plant fibers is advantageous in terms of fiber-matrix adhesion, assisting both mechanical interlocking and bonding reactions through exposure of the hydroxyl groups to chemicals. The removal of impurities provides not only more polar and reactive hydroxyl groups but also a rough surface, these features being obtained by physical and/or chemical modification 5 .
Conclusions
In this study, the influence of glass and sisal fibers on the thermal and cure kinetics of unsaturated polyester resin was investigated.
The DSC results showed that increasing the heating rate promoted a decrease in the reaction time. The activation energy values of the cure process, obtained through the FWO method, showed that the glass fiber-containing composites had higher activation energy values than the neat polyester resin, indicating that the size and surface area of the particles affect the cure kinetics of the composites. The sisal fiber-containing composites showed the highest activation energy values for the cure process among the systems studied herein.
Thus, the use of natural fibers in polymer matrix composites can affect the cure kinetics of thermoset resins, indicating that modifications of the parameters and processes may be necessary if these fibers are used as substitutes for synthetic fibers.
As a result of the polar and crystalline character of SF, the cure reaction of UPR may be influenced by the interface between the phases of the system. Physical interactions, such as hydrogen bonding, can occur, affecting the kinetic parameters. The literature reports that the interface of polymer matrix composites with untreated sisal fibers is poorer than with treated ones 43 .
Figure 3. Conversion given by Equation 1 for the UPR resin.
Figure 4. Log β vs. 1/T for the determination of the activation energy of the UPR resin.
Figure 5. Ea for UPR, UPR/GF and UPR/SF as a function of conversion.
Table 1. Results obtained from the DSC thermograms.
Table 2. Activation energy (Ea) and correlation coefficient (r) values for the samples studied. | 3,206 | 2012-06-26T00:00:00.000 | [
"Materials Science"
] |
Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers
Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and are related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show a fluctuating lasing behaviour at the lasing threshold both in solid and in liquid dye lasers. Above and below a narrow range around the threshold, the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder.
Lasers made with organic dyes in liquid solutions or embedded in solid matrices are appreciated for their high efficiency 1 . Random lasers (RL) 2 , a notable example combining disorder and gain media but lacking a cavity (which hinders feedback), were demonstrated 3 and are most often made with organic dyes. In this case the feedback is obtained from scattering off the disordered medium, so no external cavity is needed. Owing to their nature, it is reasonable to expect some fluctuating behaviour in their emission 4 . Apart from the case where chaotic behaviour is purposefully provoked for technological applications 5 , laser fluctuations were mostly observed to occur (and fought against) above threshold, i.e. during laser action. For instance, fluctuations in the emitted spectra of ZnO in an organic solid matrix, observed when excited with pulses longer than the chromophore lifetime, were described as a lasing instability due to the interplay between pulse length and excited state lifetime 6 . Unlike these intrinsic fluctuations, a similar, albeit liquid, system showed fluctuations attributed to the dynamic disorder within the colloid realized for each pump pulse 7 . Mujumdar et al. introduced the idea of mode coupling between long-lived extended modes to explain the chaotic behaviour of emission spectra 8 .
In any case, the lasing threshold of lasers, random or conventional, is perhaps the regime that has garnered the least attention 9,10 . Intensity fluctuations between the emission from lasing and non-lasing modes at the threshold of a cw GaAs laser were modelled using coupled van der Pol oscillators 11 . A temperature-dependent study of the correlation between the fluctuations of different modes of a semiconductor laser has also been carried out 12 . The latter examples deal with very few modes and depend on direct energy exchange between one lasing mode and a few neighbouring non-lasing ones. These modes are relatively far apart and respond differently to changes in gain, so their noise effect is of an individual rather than collective character. However, lasers involving many modes require statistical approaches that treat them as off-equilibrium systems whose stationary regime is brought about by a constant pumping that causes the system to behave as if in equilibrium with a thermal bath: the pumping rate. This allows modes to be likened to a liquid and permits phase diagrams of the lasing function to be drawn 13 . Typically these systems require a mechanism, like disorder in RLs, that establishes the loss channel.
When a large number of modes are involved in the system emission and an interaction between them is considered, it is advantageous to treat the system as a spin glass 14 , in the sense that any actual state is composed of a large number of interacting spins that can fluctuate, adopting random values to find the equilibrium state. This problem was solved through the use of the replica trick for the calculation of the free energy 15 , providing an order parameter that was later proven to bear a physical meaning 16 ; the phenomenon has ever since been referred to as replica symmetry breaking. In the search to minimize energy, some of the possible configurations are blocked because the spins involved cannot comply with the random distribution of couplings in the system (frustration). This theory has been used to model the functioning of ordered and disordered lasers, permitting a phase diagram to be drawn 13 , and was found to account for the modal behaviour of random lasers 17 . Random lasers base the mode interaction on the fact that a proper cavity is lacking, and the spatial overlap arising from unfulfilled orthogonality allows an efficient energy exchange. Many modes can be excited by the pumping pulse, some of whose interactions are frustrated, so that the system is led to choose between different but equivalent configurations. The set of activated mode configurations changes from pulse to pulse. Each of these configurations is a thermodynamic state characterized by the set of modes activated and their interactions. In our case, as in the case of RLs, under exactly the same conditions the system ends up in different states but, because the distribution of Parisi overlaps between states (of mode configurations) is the same as between replicas 18 , it is possible to identify RSB from the statistical analysis of the overlaps among states. In this framework the result of successive instances of pumping a lasing system can be viewed as equivalent states (conceived as mode configurations) that may present correlations that depend on the states and have non-trivial statistical distributions. While the states are equivalent, their correlations may not be. In fact, replica symmetry breaking was so far observed only in RLs because they provide a collection of light modes whose emissions are equivalent (degenerate) and susceptible to frustration. On the contrary, ordinary lasers usually accumulate too few modes under the gain curve to lead to liquid-like behaviour, and their well-defined orthogonality pre-empts frustration. Although quenched disorder is often the fundamental reason for frustration and RSB, some systems with complicated, though deterministic 19 , interactions that can establish self-induced frustration 20 were shown within the replica theory to display RSB.
In this work we demonstrate a fluctuating behaviour in the threshold region of lasers made from a pure liquid dye solution in a cuvette and from dye-doped DNA films, without adding any scatterers. The fluctuating behaviour is evident from direct observations of intensity and full width at half maximum (FWHM). We have performed measurements to synchronously collect the energy of the pump pulses and the corresponding emission intensity; these measurements demonstrate that the fluctuations are not due to shot-to-shot changes in the pulse energy or to thermal effects in the sample. Further, the fluctuations in the threshold region are not only present in liquid-state lasers but also in solid-state ones, although the latter show a comparatively less marked fluctuating behaviour. We have also tested the laser fluctuations for varying pulse duration (τp) and assessed the impact on the fluctuations of spatial (through the cuvette thickness) and temporal (through the pumping pulse duration) control. Finally, we carried out statistical analyses to prove that mode coupling/frustration is responsible for the fluctuations in the threshold region. The system is the first to show replica symmetry breaking with no intentional disorder, because a low-quality cavity ultimately induces frustration. This comes about when the immense number of leaky modes involved experience couplings that are frustrated: coherent oscillation of one mode simultaneously with two other coherently oscillating modes can be impossible. This opens the way to equivalent states with a different set of activated modes in each shot.
Two types of systems were tested: a liquid-state dye solution laser and solid-state dye film lasers. For the former, a dye solution was placed in a cuvette and pumped with ns or ps pulses from Q-switched lasers. The latter consisted of dye in DNA films subjected to the same pumping. Figure 1a shows the main features of lasing as a function of pump energy for the typical liquid-state configuration. On pumping the cuvette filled with 1 mg DCM in 1 mL THF at low energy density, broad photoluminescence from the sample was observed. The FWHM of the photoluminescence spectra is ∼ 60 nm. However, above a certain energy density, a spectral narrowing was observed, accompanied by a significant increase in intensity. For this sample a clear transition from regular broadband (∼ 60 nm) to narrow-band (∼ 20 nm) laser-like emission was observed on increasing the pump energy, which is accepted as a signature of the lasing regime 21 . At the threshold, however, series of spectra at constant pump energy showed that some of the emission peaks have very high intensity with a FWHM of ∼ 20 nm (lasing) and some have low intensity and a FWHM of ∼ 60 nm (Fig. 1b). The two events of lasing and non-lasing can be captured on a screen and are shown in Supplementary Fig. S1(a,b), respectively. Only rarely does the emission have widths and intensities in between (see Fig. 1c), showing a departure from a monotonic behaviour. Figure 1c shows the intensity maxima for spectra collected over 6000 successive laser shots. The lasing or fluorescence behaviour of the successive spectra was clearly observed from the FWHM versus shot number plot (Fig. 1d). Notice that, unlike the FWHM, the recorded intensity is limited by the detector dynamic range and all shots of high intensity simply register as the detector maximum, making it look like there are many fewer.
To rule out the possibility that the blinking is caused by fluctuations of the pump energy which, owing to the rapid change in slope of the emission intensity near threshold, might cause a similar fluctuation in emission, we measured the shot energy synchronously for each spectrum. The normalized correlation coefficient between the intensity and the shot energy was calculated according to the expression r_IP = Σ_i (I_i − ⟨I⟩)(P_i − ⟨P⟩) / [Σ_i (I_i − ⟨I⟩)² Σ_i (P_i − ⟨P⟩)²]^(1/2), where I_i are the peak maxima and P_i the laser shot energies for the corresponding series of spectra, and ⟨I⟩ and ⟨P⟩ their averages. From Fig. 2a it is evident that the intensity from the sample is well correlated with the shot energy below and above the threshold. But at or slightly above the threshold the correlation factor drops to a value of 0.5, indicating that the spectral intensity is decoupled from the shot energy and presents strong fluctuations. Figure 2b shows FWHM vs. shot energy for 3000 consecutive shots, where we can see that at the threshold (∼ 0.32 mJ/pulse) the peak widths have two most frequent values (∼ 60 nm and ∼ 20 nm), while these values collapse onto one or the other away from the threshold region (60 nm below and 20 nm above threshold). This is further detailed in Fig. 2c, where a contour plot of the statistical distribution of peak width P(FWHM) is represented against the pumping energy near the threshold. Shots were grouped into 30 energy segments and the distributions normalized for each segment in the following way. A one-hundred-bin histogram of each segment gives P_i(FWHM) (i = 1… 30) for thirty energies in the range highlighted in Fig. 2a. Each P_i is normalised so that the total probability for each energy adds up to one. These thirty vectors are the columns in the contour plot shown. One can see clearly that at lower energies most of the peaks have widths around 60 nm, while at higher pulse energies this population has diminished and most peaks are narrow, with widths of around 20 nm. In the region in between there is a bimodal distribution. The low intensity-power correlation obtained near threshold can be compared with the degree of time correlation 2 between successive shots, viz. the lasing intensity, I_i, and the shot energy, P_{i−l}, as a function of lag (l). Notice that shots are separated by one fifth of a second while the emitted pulses are in the nanosecond range. Such measurements, as expected, show no correlation between the two except when the lag imposes only two data points in the calculation (l = ± 3000), making the cross-correlation identically equal to 1 merely for mathematical reasons. Away from the threshold region, the correlation simply attests to the pump laser's stability (see Supplementary Fig. S2). This confirms that the r_IP values at the threshold represent fully uncorrelated behaviour. Time-delayed intensity-power correlation away from threshold (above or below) shows zero-delay correlation values, signalling that in this regime the intensity of each shot is linked to the power causing it.
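As a small illustration, the shot-resolved correlation described above can be computed as follows; the array names are placeholders, and the numbers in the usage example (including the 0.32 mJ threshold value quoted in the text) are synthetic, for illustration only.

```python
import numpy as np

def intensity_energy_correlation(peak_intensity, shot_energy):
    """Normalized (Pearson) correlation r_IP between per-shot peak
    intensities I_i and pump shot energies P_i."""
    I = np.asarray(peak_intensity, dtype=float)
    P = np.asarray(shot_energy, dtype=float)
    dI, dP = I - I.mean(), P - P.mean()
    return np.sum(dI * dP) / np.sqrt(np.sum(dI**2) * np.sum(dP**2))

# illustrative usage: strong correlation expected away from threshold
rng = np.random.default_rng(0)
P = rng.normal(0.45, 0.01, 3000)                     # mJ, well above threshold
I = 50.0 * (P - 0.32) + rng.normal(0, 0.05, 3000)    # intensity tracks pump energy
print(f"r_IP = {intensity_energy_correlation(I, P):.2f}")
```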
In order to ensure that the findings are not dye-dependent, other laser dyes, such as Rhodamine B, and different solvent systems (e.g. EtOH, H2O, DMSO, acetonitrile, ethylene glycol, etc.) were also tried. In every case we find the same behaviour in the threshold region. The fluctuating behaviour is independent of the solvent-solute interaction, the viscosity and the boiling point of the solvents. To further ascertain that the fluctuations of the intensity at the threshold are not due to thermal or other effects in the solution, we made several samples with different concentrations of DCM dye in THF so as to place the threshold at different absolute pulse energies. As we increase the concentration, the lasing threshold increases drastically, as can be seen from Supplementary Fig. S3a. The required threshold energy increases by one order of magnitude, from 261 μJ to 2.4 mJ, as we increase the concentration from 0.05 mg/mL to 3.2 mg/mL. This is believed to be due to the inner filter effect 22 as well as to internal quenching. It is also interesting to note that, at the highest concentration, the samples show fluctuations not only at the threshold but also above threshold due to pump power limitations. Gain is low and the region of ordinary lasing above threshold is pushed too far towards high energy, beyond the limits of the available laser and probably the dye tolerance. However, the intensity fluctuation at the threshold was observed for all the samples. When the threshold is low there is ample margin to pump much above threshold before any sign of saturation shows (see Fig. 1a). If the threshold is pushed higher, the system is already near saturation when stimulated emission sets in and the fluctuations cannot be damped.
In order to establish this further, beyond the photophysical properties, we prepared samples in various solvents spanning a range of boiling points, viz. THF → EtOH → DMSO → diethylene glycol. The solvents have increasing boiling points, from 66 °C up to 244 °C. The concentration of DCM was kept constant in all cases. The intensity fluctuations were present for all the samples. These observations prove that the fluctuations in the intensity of the samples are not due to thermal effects and clearly support the hypothesis that the fluctuations are not due to environmental factors but are intrinsic to the system. Next we measured solid-state samples to check whether the fluctuating behaviour is also present in that case. We prepared a DNA-CTMA complex doped with DCM dye, as previously developed in our laboratory 23 . The sample, in the form of a thin film several hundred micrometres thick, was pumped by a stripe formed by a cylindrical lens and the emitted light was collected from the edge. For this particular sample the intensity fluctuations were not as evident from the successive spectra collected at different pump energies. The plot in Supplementary Fig. S4(b) shows thirty consecutive spectra at the point of highest fluctuation.
To obtain more insight into the fluctuations of the different samples (liquid and solid), it is instructive to analyse the fluctuation coefficient f, defined as the standard deviation (σ_I) of the intensity of the light emitted by the sample divided by the mean of the intensities, f = σ_I/⟨I⟩, so that the strength of the fluctuations can be described by examining their statistical distribution. At threshold, when the fluctuation is strongest, the probability distribution is not Gaussian; instead it follows a U-quadratic distribution (see Supplementary Fig. S5). Figure 3a shows the statistical analysis of f as a function of pump energy for the liquid sample and for the solid film. It is evident from the plot that f has a maximum at the threshold and minima in the regions below and above threshold. This is clearly at odds with related results in RLs, where the variance of the emitted intensity scales as the intensity itself 17 . Figure 3b shows the statistical distribution of the FWHM below (green), at (orange) and above (red) the threshold. In all three cases, and despite the difference in ranges, the data were processed into one hundred bins, which makes the histogram bars thinner away from the threshold. Both below and above threshold the distributions clearly resemble a normal distribution. At threshold, however, the distribution (orange histogram) departs greatly from normal, as can be seen in Fig. 3b, and there f takes its maximum value. It is interesting to note that f reaches values as large as 152%, to be compared with values for the pump laser that never exceed a few percent. In fact, the fluctuations at the threshold are so large that, owing to its limited dynamic range, the detector saturates, which impedes recording a good distribution, something that the FWHM permits very clearly. The fact that some of these pulses are so much more intense than the average makes the sporadic lasing events directly observable by the naked eye as a red blinking point on a constant green background (corresponding to the pump laser). Figure 3c shows a similar analysis for the intensity: here the range of the random variable is so wide that separate logarithmic-scale plots are needed. Again, while below and above threshold the distributions are ostensibly Gaussian, at the threshold a clear bimodal distribution appears. The threshold region of the solid-state laser also shows fluctuations, which reach a maximum value of ∼ 27%, well above those of the pump laser pulses.
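A compact sketch of this per-energy analysis is given below; the binning choice and variable names are assumptions made for illustration and are not taken from the article.

```python
import numpy as np

def fluctuation_coefficient(shot_energy, peak_intensity, n_bins=30):
    """f = sigma_I / <I>, computed within pump-energy bins."""
    E = np.asarray(shot_energy, dtype=float)
    I = np.asarray(peak_intensity, dtype=float)
    edges = np.linspace(E.min(), E.max(), n_bins + 1)
    centers, f = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (E >= lo) & (E < hi)
        if sel.sum() > 1:                      # need at least two shots per bin
            centers.append(0.5 * (lo + hi))
            f.append(I[sel].std() / I[sel].mean())
    return np.array(centers), np.array(f)
```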
Mode coupling
The dynamics of the single-mode laser field as it interacts with the lasing medium has been examined in the context of non-equilibrium statistical mechanics, so that a laser near threshold finds a close parallel in second-order phase transitions such as the vapour-liquid transition of a pure fluid. The interaction of a molecule's emission with that of all other molecules is entirely similar to that of a magnetic dipole with its environment in a ferromagnet. This invites us to identify the laser electric field as the variable corresponding to the ferromagnetic order parameter and the population inversion as the temperature 24 . Our systems, despite their apparent similarity to ordinary rather than random lasers, contain many modes and, owing to the low quality of the cavity, we believe that mode coupling is responsible for the intensity fluctuations. Other possible causes, such as random losses, a local change in molecular concentration that might rapidly shift the threshold, or optical feedback such as chaotic seeding, can probably be ruled out by the independence of the behaviour from the physical and chemical environments tested here.
In the case of the liquid sample the cuvette acts as a resonator. The longitudinal mode spacing of a regular Fabry-Perot cavity is given by Δλ = λ²/(2nL), where L is the cavity length and n is the refractive index of the medium. The calculated mode spacing is Δλ = 0.068 nm and 0.0135 nm for the 2 mm and 10 mm path length cuvettes, respectively. These values are well below the resolution limit of the spectrometer (0.4 nm). The number of modes per unit volume supported by a cavity can be expressed as N = 8π/(3λ³), which gives 3.8 × 10¹⁰ modes per cubic millimetre for light of 604 nm wavelength. For the excitation of an area of 25 μm², and considering the 10 mm path length of the cuvette, ∼9.53 × 10⁷ modes are activated along the cylindrical excitation volume (πR²L). This immense number of modes sets this system apart from the early studies in few-mode semiconductor lasers, just as the fact that the cavity is regular rather than disordered gives it a novel character at variance with RLs. It is therefore an unlikely environment for ordinary laser threshold instability and for replica symmetry breaking, but the latter is proved by the analysis of shot-to-shot correlations. To assess the establishment of a regime of replica symmetry breaking induced by coupling between the modes as a function of pump energy, we followed a statistical mechanics approach. Because the distribution of overlaps between mode states is the same as between mode replicas, it is possible to detect replica symmetry breaking from the statistical analysis of the former 18 . For each shot α we evaluate the intensity fluctuation of the modes (labelled by wavelength k), Δ_k^(α) = I_k^(α) − ⟨I_k⟩, where I_k^(α) is the intensity of mode k for shot α and ⟨I_k⟩ is the average over all shots at mode k. The overlap of the spectral fluctuation from shot to shot can be calculated from the correlation between the intensity fluctuations of any two shots α and β, q_αβ = Σ_k Δ_k^(α) Δ_k^(β) / [Σ_k (Δ_k^(α))² Σ_k (Δ_k^(β))²]^(1/2), where the sums run over the number of modes (N). This analysis is based on the intensity alone, as the phase is hard to obtain, but it has shown its power in revealing the different stages an RL presents in terms of replica symmetry breaking 25 . Figure 4 shows the statistical distribution P(q) versus q calculated for α, β = 1, … 500 shots, which provide a total of 500 × (500 − 1)/2 values of q. The plots are in log scale to show as much detail as possible in the cases where the distribution spans a large range of probability densities. At very low pulse energies (0.25 μJ in Fig. 4, upper row) the distribution presents a peak centred at q = 0 on top of a much weaker background. This same behaviour is found at very high pulse energy (326 μJ in Fig. 4, upper row) and can most likely be taken as a sign that a regime of full RSB is attained. At pulse energies very near above and below the threshold, the distribution presents the signature of one-step plus full RSB, with a peak around q = 0 and two wings reaching q = ± 1. In the threshold energy range (101 μJ in Fig. 4, upper row) the distribution is totally different, with two strong maxima at q = ± 1 and a largely depleted region around q = 0, pointing to a one-step RSB.
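The sketch below illustrates how such an overlap distribution can be computed from a matrix of recorded spectra (one row per shot, one column per wavelength). It follows the standard intensity-fluctuation-overlap analysis referenced above; the variable names and bin count are illustrative, and every shot is assumed to have a non-zero fluctuation norm.

```python
import numpy as np

def overlap_distribution(spectra, n_bins=100):
    """spectra: array of shape (n_shots, n_modes) with emission intensity.
    Returns bin centres and the normalized histogram P(q) of the pairwise
    overlaps q_ab between shot-to-shot intensity fluctuations."""
    I = np.asarray(spectra, dtype=float)
    delta = I - I.mean(axis=0)                 # fluctuation of each mode, per shot
    norms = np.sqrt(np.sum(delta**2, axis=1))  # one normalisation factor per shot
    norm_delta = delta / norms[:, None]
    q_matrix = norm_delta @ norm_delta.T       # q_ab for all shot pairs
    iu = np.triu_indices(len(I), k=1)          # keep each pair a < b once
    q_values = q_matrix[iu]
    hist, edges = np.histogram(q_values, bins=n_bins, range=(-1, 1), density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist
```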
In these circumstances, where the emission consists of broad, weak peaks and narrow, intense lasing pulses, if the statistical analysis is performed after separating both kinds of spectra, two differing statistics are found: weak peaks obey a replica-symmetric, zero-centred distribution, while intense laser bursts follow a one-step RSB distribution with high density at q = 0 (refer to SI Fig. S6). It is very clear from Fig. 4 that in the threshold regime (P = 101 μJ) the range of small |q| is depleted, indicating that any two shots have very strongly correlated intensity fluctuations over wavelength. Either upswings in some modes coincide with upswings in others (q ∼ 1) or upswings coincide with downswings (q ∼ −1). In other words, in this regime the electromagnetic modes strongly interact and become coupled. On the other hand, if we prepare the system in the region below or above threshold, the modes are mostly uncorrelated, as revealed by the zero-peaked distribution.
If the fluctuations in the intensity at threshold are mediated by coupling of the modes and by frustration, which are activated during the pump pulses, they should respond to the time dependence of the excitation. The coupling is more effective if the pulse duration (τp) is comparable to or much longer than the mode lifetime, which is of the order of hundreds of picoseconds 26,27 . The experimental set-up (Supplementary Fig. S1) allows nanosecond laser pulses (τp = 9 ns) and picosecond laser pulses (τp = 30 ps) to be sent to the sample without disturbing any other part of the set-up. Unlike in the case of nanosecond pumping, with picosecond pumping no fluctuations are observed in the emitted intensity as we sweep the pulse energy through the threshold region. Moreover, the effective spectral window over which the lasing peaks are found for ns pulses (14 nm) is narrower than the range generated by picosecond pulses (32 nm), as can be seen in Fig. 5. The time required to establish the coupling between modes means that in the case of nanosecond pumping a larger number of modes is activated, whereas in the case of picosecond pumping only strongly interacting modes can establish an effective coupling within the duration of the exciting pulse 6,28 .
It is also possible to act on the intensity fluctuation through spatial control. By making the sample thinner and thinner, we eventually decrease the number of modes to a point where there is not sufficient direct (mode to mode) and mediated (through a third mode) coupling between modes, so that they cannot couple effectively and subsequently be liable to frustration, and thus cannot produce a large intensity fluctuation. They can only show a relatively small fluctuating behaviour, as was seen for our solid-state sample.
In conclusion, we have studied the fluctuating behaviour of the emission spectra of fabricated solid-state and solution-state lasers. Because of the high gain of the liquid dye solution, the coupling between the modes is so effective that replica symmetry breaking becomes a readily observable phenomenon. Although our system has no deliberate disorder, its modes are allowed to exchange energy and interact, but some of these interactions are frustrated, obliging the system to choose and leading to equivalent but distinct states. The fluctuations can be explained in terms of mode coupling and frustration. This behaviour is completely intrinsic and can be observed for many dyes. We also show how to manage the fluctuating behaviour by temporal and spatial control. It would also be interesting to study this behaviour for samples with long fluorescence lifetimes (e.g. lanthanides) and in the presence of scattering media, as well as the role of the diffusion time. The ease of fabrication and the simplicity of sample preparation may open up a large variety of applications, such as use in security markers and photonic displays, or even in random number generation.
Figure 4. Overlap distribution as a function of pumping. Plot of P(q) versus q for different pumping rates, from below threshold to above threshold and at the threshold region, for the liquid-state sample (upper row), and below and at threshold for the solid-state sample (lower row). Each panel carries the pump energy as a label and can be identified in Fig. 1a and Fig. 3a by the larger symbols. All distributions are given in relative frequency to ensure that the integral equals unity.
Methods
Preparation of the liquid dye solution. 1 mg of DCM dye was dissolved in 1 mL of THF (tetrahydrofuran). A glass cuvette (internal path length 10 mm) was filled with the THF solution of DCM and mounted on the sample stage for the measurements.
Preparation of the solid sample. In a typical procedure, 10 mg of DCM dye was first dissolved in 20 mL of EtOH by sonication. A previously prepared DNA-CTMA complex (100 mg) was dissolved in 5 mL of EtOH. The two solutions were mixed and stirred for thirty minutes. Films of DCM-doped DNA-CTMA on a quartz substrate were prepared by dip coating and kept overnight in a calibrated oven at 40 °C.
Optical set-up. The 532 nm laser line was cleaned using appropriate filters. The beam was split into two parts by a pellicle beam splitter; one part was sent to the energy meter and the other was focused, using a plano-convex lens (f = 10 cm), on the cuvette containing the DCM dye solution. The ASE/fluorescence was collected using another biconvex lens (f = 10 mm); after removing the pump (532 nm) with a 532/1064 nm notch filter and a 532 nm long-pass filter, the collimated ASE/fluorescence light was focused onto an optical fiber (600 μm core) coupled to an Ocean Optics USB2000+/USB4000 spectrometer (Supplementary Fig. S1). The sample was pumped either by an Nd:YAG laser (Litron model Nano-T 250-10) providing 20 mJ, 9 ns, 532 nm pulses at a 10 Hz or 5 Hz repetition rate, or by an Nd:YAG laser (EKSPLA model PL2250) providing 30 ps, 20 mJ, 532 nm pulses at a 10 Hz repetition rate. To measure the correlation between pump pulse and emission, the 532 nm laser beam was split using a 45:55 (R:T) pellicle beam splitter. The energy of each shot was measured using an energy meter (Gentec Solo-2) coupled with a detector. To ensure good synchronization, the laser frequency was kept at 5 Hz. 3000 spectra were collected over 10 minutes at several shot energies ranging from 0.0 to 0.9 mJ. | 6,843.2 | 2016-08-25T00:00:00.000 | [
"Physics"
] |
Success Rate Queue-Based Relocation Algorithm of Sensory Network to Overcome Non-Uniformly Distributed Obstacles
With the recent development of big data technology that collects and analyzes various kinds of data, technology that continuously collects and analyzes observed data is also drawing attention, and its importance is growing for data collection in areas that people cannot access. In general, it is not easy to properly deploy IoT wireless devices for data collection in such areas, and it is also inappropriate to use general wheel-based mobile devices for relocation. Recently, research has been actively carried out on hopping movement models, which replace wheel-based movement in these inaccessible regions. The majority of studies so far, however, make the unrealistic assumption that all IoT devices know the overall state of the network and the current state of each device. Moreover, various physical terrain conditions, such as coarse gravel and sand, can change from time to time, and it is impossible for all devices to recognize these changes in real time. In this paper, a method is proposed that estimates the varying environment from the migration success rate of the IoT hopping devices being relocated. This method can actively reflect the changing environment in real time and constitutes a realistic, distributed-environment relocation protocol, replacing unrealistic, purely theory-based relocation protocols. A further significant contribution of this paper is that its performance is evaluated using the OMNeT++ simulation tool while, for the first time, reflecting actual physical environmental conditions. Compared to previous studies, the proposed protocol actively reflects the state of the surrounding environment, which results in improved migration success rates and higher energy efficiency.
Introduction
Big data and AI technologies are usually focused on data analysis. If there is a problem with data collection, it is very hard to find an abnormal point in an extremely large volume of data. Therefore, technology to continuously collect data from observation areas has recently become a crucial issue. The rapid development of IoT devices has enabled data collection in various fields [Alioto and Shahghasemi (2018)]. In particular, continuous data collection by small IoT devices has also enabled big data analysis in areas that are inaccessible to humans [Chen, Mao and Liu (2014); Yin and Wei (2018)]. Deployment of small IoT devices via unmanned aerial vehicles such as drones allows data to be collected from inaccessible areas. However, it is challenging to deploy IoT sensor nodes appropriately in the desired areas, which can lead to distortions in the analysis of the collected data. Moreover, small IoT devices have inherent energy depletion problems [Zhang, Fang, Zhao et al. (2020); Yaqoob, Ahmed, Hashem et al. (2017)]. There are many applications where wireless data network technology using small IoT devices (called sensor nodes) is utilized. In general, dozens or hundreds of sensor nodes can be deployed in earthquake zones, radiation spill sites, or enemy territory where military activity is underway in order to collect various kinds of information. These areas are characterized by being inaccessible to civilians due to harmful substances, rough terrain with many obstacles, or the risk of unintended exposure to the enemy. Therefore, it is essential to scatter small devices over a vast, rough area to obtain valuable information. However, there remains the limitation that they can be deployed improperly. Furthermore, after deploying small sensor nodes, assume that data collection is excessively frequent in specific areas. Some sensors may run out of energy earlier than others due to the ongoing collection and transmission of data in such areas. This situation is called a sensor node failure and, in the worst case, communication across the entire network is lost and the desired data cannot be collected. An area where a certain number of devices fail, or fail to deploy correctly, and thus the desired data cannot be obtained, is called a sensing hole [Kim, Park and Lee (2019)]. In the past, various studies have been carried out to prevent the occurrence of sensing holes, including how to minimize energy consumption by adjusting the sensors' active and idle states, or how to set up energy-efficient paths. Still, they have not thoroughly addressed the faults of sensor nodes caused by energy problems [Kim, Park and Lee (2018); Wang, Gao, Yin et al. (2018); Wang, Gao, Liu et al. (2019)]. Therefore, the most realistic solution is a relocation protocol that restores a sensing hole by moving other devices to the area where it occurred. For this reason, research on mobile sensors has begun to draw attention. Mobile sensors can move to dark areas, where there is no information, to collect data, and can also continue tasks by replacing energy-starved sensor nodes. Initial studies of mobile sensors used wheel-based movement models, but such movement has many physical constraints. In other words, additional energy consumption must be considered for wheel drives and, in reality, wheel-based movement is not appropriate in very rough or obstructed areas. In order to overcome the limitations of these movements, hopping-based movement models were introduced.
Mobile nodes are biomimetically designed so that they can jump on their own, like locusts, and move in the desired direction. The research article by Zhang et al. [Zhang, Chen and Yang (2016)] provides detailed performance comparisons of the maximum jump height, travel distance, etc., for various jump-capable robots. Recent studies have also actively addressed the relocation of hopping sensors to restore sensing holes [Cintron (2013); Snyder (2014); Senouci and Mellouk (2016)]. However, most of the studies so far have made the unrealistic assumption that the cluster header sensor nodes know all the information about the entire network area (a cluster being a properly divided portion of the area of interest, with the set of clusters covering the entire area). No matter how small the entire network area is, setting up information exchanges and paths among all cluster headers is not only very difficult in practice, but also involves the exchange of numerous control messages. Recently, our research team addressed these problems [Kim, Park and Lee (2019); Park, Kim and Lee (2020)]. First of all, a cluster header does not need to know information from other cluster headers or from the whole network: the result is a distributed-networking-based relocation protocol that restores sensing holes by requesting sensor nodes from nearby cluster headers. Among all the sensor nodes in a nearby cluster, appropriate nodes can be selected based on the exchanged information and moved to nearby sensing holes. However, the various relocation algorithms so far have assumed an ideal environment in all areas where sensors are placed. An earlier article by Kim et al. [Kim, Mutka and Choo (2010)] was the first to study how to relocate hopping sensors depending on the degree of obstacles, but it makes the unrealistic assumption that the deployed sensors know the degree of obstacles in all regions. Another study by Park et al. tried to determine the degree of obstacles through moving sensor nodes, but it is still difficult to predict a realistic environment. In this paper, we propose a method by which the cluster header actively estimates the obstacle level of the surrounding environment from the outcomes of the sensor movements it orders, and we also propose a relocation scheme that takes this active obstacle estimation into account. In addition, it is significant that the first realistic experiments were conducted using OMNeT++ [OMNeT (2020)]. The rest of this paper is organized as follows. Chapter 2 describes our prior studies of relocation protocols, and Chapter 3 provides detailed descriptions of the proposed protocols, along with scenarios. In Chapter 4, the performance is analyzed through simulation, and Chapter 5 concludes the paper.
Previous work
This chapter describes the preliminary studies underlying the protocols proposed in Chapter 3. Our research group has conducted many preliminary studies over a long period in order to propose a realistic relocation protocol. Among them, we briefly summarize the underlying assumptions, the protocol for relocation in a distributed environment, and the method for taking the surrounding environment into account.
Basic assumptions and network components
In order to collect and analyze data from a region of interest, small mobile IoT devices are first randomly distributed. They can be scattered using drones, etc., as shown in Fig. 1, in areas that are difficult to access. Thereafter, the entire area is divided into cluster zones of appropriate size with a suitable clustering algorithm, and a cluster header is selected for each cluster zone. The cluster header periodically communicates with the mobile IoT devices (hereinafter referred to as hopping sensor node members) in its cluster zone and can manage representative information about each node. The primary purpose of this study is the technique for relocating the hopping sensor nodes, so it is assumed that clustering and header selection can be performed in various existing ways [Rostami, Badkoobe, Mohanna et al. (2018)]; they are not our concern in this paper. Considering the connectivity of the wireless data network, all hopping sensor node members and the cluster header can exchange messages by jumping to an appropriate height [Cintron, Pongaliur, Mutka et al. (2012)]. Moreover, the sensor nodes include a GPS unit capable of determining their current location [Sabor, Sasaki, Abo-Zahhad et al. (2017)]. Fig. 1 briefly illustrates the terminology used in this paper. All hopping sensor nodes are either 'cluster header nodes' or 'sensor node members.' Since a 'cluster zone' is defined as the maximum transmission area when the cluster header performs its maximum jump, direct communication between cluster headers is in general not possible. Some sensor node members near an area where cluster zones intersect, i.e., near the maximum transmission radius of the cluster header, are likely to be able to communicate with more than one cluster header. Such a sensor node is called a 'relay node,' and its role is to relay communication between cluster headers [Kim, Park and Lee (2019)].
Relocation protocol in a distributed environment
Unlike various past hopping sensor relocation protocols, which are difficult to apply in reality, the paper by Kim et al. [Kim, Park and Lee (2019)] proposed a relocation protocol suitable for a distributed environment, so that it can be applied to a real environment. In other words, the protocol includes practical mechanisms such as the cluster header recognizing a sensing hole on its own and requesting sensors from surrounding clusters for immediate sensing hole recovery. Fig. 2 summarizes this underlying distributed-environment relocation protocol. In each cluster zone, the cluster headers (HA, HB, HC, HD) periodically check the status of the sensor node members in their zone by broadcasting a HELLO message. If fewer than a certain number of sensors are detected (as determined by the initial network policy), the cluster header can determine by itself that its zone has become a sensing hole. Here, the cluster header HC detects fewer than three node members and recognizes that its zone has become a sensing hole. The relocation strategy to recover from this is as follows. Step 1. The cluster header HC broadcasts a RELAY message to all its relay nodes (R2, R3) to request one sensor node member from the neighboring cluster zones (Cluster Zones B, D).
Step 2. Each relay node immediately sends a RELAY-ACK message in response to the RELAY message; the response from R2 arrives at HC the fastest, and the response from R3, which arrives later, is ignored.
Step 3. The cluster header HC sends a REQ message to the selected relay node R2 to request one sensor.
Step 4. The relay node R2 delivers the REQ message received from the cluster header HC to another cluster header HB.
Step 5. The cluster header HB selects M3 as a sensor node member that can move from its zone to the neighboring zone and sends a MOVE message to move M3 to Cluster Zone C. The member M3 receiving the message moves to the neighboring cluster zone.
Step 6. At the same time, the cluster header HB predicts that its zone will also be a sensing hole and sends a REQ message requesting one sensor to the relay node R1.
Step 7. Relay node R1 delivers the REQ message to the cluster header HA.
Step 8. The cluster header HA selects M2 from among sensor node members in its zone and commands M2 to move to cluster zone B. As a result, one sensor is properly added within each cluster zone, and the sensors are relocated so that all sensing holes of the entire network can be recovered.
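The request chain in Steps 1-8 can be sketched as a simple message sequence. The following Python fragment is a schematic illustration only (the message enum, classes, and helper methods are assumptions, not the paper's implementation); it shows how a header that detects a sensing hole selects the fastest-responding relay, forwards a request to the neighboring header, which dispatches a member and may itself repeat the chain toward its own neighbor.

```python
from enum import Enum, auto

class Msg(Enum):
    # Message types used in the distributed relocation protocol.
    HELLO = auto()
    HELLO_ACK = auto()
    RELAY = auto()
    RELAY_ACK = auto()
    REQ = auto()
    MOVE = auto()

def recover_sensing_hole(hole_header, relay_nodes, network):
    """Schematic hole-recovery chain (Steps 1-8); 'network' resolves neighbor headers."""
    # Steps 1-2: broadcast RELAY, keep the relay whose RELAY_ACK arrives first.
    relay = min(relay_nodes, key=lambda r: r.ack_latency(hole_header))
    # Steps 3-4: send REQ via the chosen relay to the neighboring cluster header.
    neighbor = network.neighbor_header(relay, exclude=hole_header)
    # Step 5: the neighbor selects a movable member and commands it to MOVE.
    member = neighbor.select_movable_member()
    member.hop_to(hole_header.position)
    # Steps 6-8: the donating zone may anticipate its own hole and repeat the chain.
    if neighbor.will_become_hole():
        recover_sensing_hole(neighbor, neighbor.relay_nodes, network)
```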
Relocation protocol considering terrain conditions
The environment into which hopping sensors must be distributed by an unmanned aerial vehicle is very different from typical terrain. Therefore, by combining terrain information about obstacles around the cluster zone with the protocol of Kim et al. [Kim, Park and Lee (2019)] described in Section 2.2, the relocation success rate of the hopping sensors can be increased. If the cluster header of a sensing hole does not take the surrounding environment into account, the requests it sends to neighboring zones for sensor node movements are inevitably satisfied only in part. Because the sensing hole then persists, the number of request messages to neighboring zones is bound to grow, which places a heavy load on the entire network and is very negative in terms of energy efficiency. Indirectly predicting the distribution of obstacles (stones, puddles, mud, etc.) around the sensing hole is therefore very important in practice, and the basic mechanism is illustrated by the example below.

First, consider the relocation of node members without taking obstacles into account. In cluster zone B of Fig. 3(a), two members have exhausted their energy, resulting in node failures. After a while, header HB determines that its zone has become a sensing hole and requests two node members from cluster zone A (first REQ). The two members (M1, M2) ordered to move from Zone A set out toward Zone B. One member (M1) moves successfully, but the other (M2) is caught by an obstacle and fails to reach Zone B. The header of Zone B determines that it is still a sensing hole and requests another member movement from Zone A (second REQ). In Fig. 3(b), member M3 of Zone A again fails to reach Zone B because of an obstacle. To overcome the sensing hole, the header of Zone B requests yet another member movement from Zone A (third REQ), and member M4 succeeds, recovering the sensing hole. In total, three rounds of relocation were needed to restore the first sensing hole.

Second, consider the relocation of node members when obstacles are taken into account. In cluster zone B of Fig. 3(a), two members again exhaust their energy, resulting in node failures. After a while, header HB decides that its zone is a sensing hole and prepares to request two node members from Zone A. In the relocation protocol of Park et al. [Park, Kim and Lee (2020)], the previous work of our research group, the number of members to request is calculated using the 'migration success rate.' As relocations proceed, the success rate p is continuously updated, and the number of requested members (cnt) is calculated from the current p as follows:

p = (number of members successfully migrated) / (number of members requested)   (1)

cnt = Ceiling[(number of members needed) × (1 + (1 − p))]   (2)

With the initial value of p set to 1, the number of members to request is Ceiling[2 × (1 + (1 − 1))] = 2, and two member movements are requested from Zone A (first REQ). In Fig. 3(b), the two members ordered to move from Zone A set out; as in the first scenario, one member (M1) migrates successfully, but the other (M2) is caught by an obstacle and fails to reach Zone B. The header of Zone B updates p to 1/2 and determines that its zone is still a sensing hole.
The number of members to request from Zone A is then Ceiling[1 × (1 + (1 − 1/2))] = 2, and the header of Zone B requests the movement of two members (second REQ). In Fig. 3(b), one of the two members from Zone A (M3) fails to reach Zone B, as in the previous scenario, but the other member (M4) successfully moves to Zone B and resolves the sensing hole. Comparing the two scenarios, the distribution of obstacles between clusters can evidently be estimated by tracking the migration success rate. Taking the migration success rate into account when relocating sensor node members therefore not only enables faster sensing hole recovery but also reduces control traffic, such as the number of request messages, across the network.
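As a minimal numerical sketch of Eqs. (1) and (2) (illustrative only, not the paper's implementation; function names are ours), the request count follows directly from the most recent migration success rate. With p = 1/2 and one member still needed, the header requests Ceiling[1 × (1 + 0.5)] = 2 members, matching the second scenario above.

```python
import math

def migration_success_rate(successes: int, requested: int) -> float:
    """Eq. (1): p = successful migrations / requested migrations."""
    return successes / requested

def request_count(needed: int, p: float) -> int:
    """Eq. (2): cnt = ceil(needed * (1 + (1 - p)))."""
    return math.ceil(needed * (1 + (1 - p)))

# Worked example from the obstacle-aware scenario: p starts at 1, then drops to 1/2.
print(request_count(2, 1.0))           # first REQ: 2 members
p = migration_success_rate(1, 2)       # one of the two arrived -> p = 0.5
print(request_count(1, p))             # second REQ: ceil(1.5) = 2 members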
Proposal of relocation protocol based on the prediction of the surrounding environment
In this chapter, we examine the problems with the way the previous study accounts for obstacles and propose a more appropriate method for predicting the surrounding environment.
A study on the prediction of the surrounding environment
In the paper by Park et al. [Park, Kim and Lee (2020)], Eq. (1), the migration success rate of the hopping sensor node members, is used to account for surrounding obstacles. However, it is difficult to predict the obstacle conditions of the large area between two cluster zones from the outcome of a single movement. Fig. 4 illustrates this with the following example.
In the example of Fig. 4, suppose that at least 5 sensor node members must be retained for sensing hole recovery. As shown in Fig. 4(a), the header HB of cluster zone B detects the sensing hole and requests 5 members from the neighboring cluster zone A (first REQ message). In Fig. 4(b), five members selected by the header HA move toward the sensing hole, and one of them fails because of an obstacle. As shown in the first row of the table in Fig. 5, the header HB records the result of the first REQ message: the migration success rate is updated to 4 successful moves out of 5 requests, that is, 4/5 = 0.8. In Fig. 4(c), the header HB, which has not yet recovered the sensing hole, uses Eq. (2) (Ceiling[1 × (1 + (1 − 0.8))] = 2) and requests two members from the neighboring cluster zone A through the second REQ. The two members selected in Fig. 4(d) both fail to move because of obstacles. As shown in the second row of the table in Fig. 5, the header HB updates the migration success rate to 0/2 = 0.0 as a result of the second REQ message.
Figure 4: Obstacle prediction mechanism [Park, Kim and Lee (2020)]

Again, as shown in Fig. 4(e), the header HB requests two members from cluster zone A through the third REQ. Finally, as shown in Fig. 4(f), one of the moving members reaches cluster zone B and recovers the sensing hole. As shown in the third row of the table in Fig. 5, the migration success rate is updated to 1/2 = 0.5 as a result of the third REQ message. Three REQ messages were thus enough to overcome the sensing hole in cluster zone B, but consider how the migration success rate changed along the way. In this method, it is assumed that the obstacle conditions between two clusters can be predicted from the migration success rate.
However, while the actual obstacle distribution is fixed, the estimated obstacle level (1 − p), derived from the single most recent migration success rate, swings dramatically between 20%, 100%, and 50%. The assumption that the obstacle distribution of the large area between clusters can be estimated from the outcome of a single member request is therefore judged to be unreasonable. It would only be appropriate if the obstacles were evenly distributed, unlike the example in Fig. 4, where the obstacle terrain is biased to one side.
Figure 5: The table in which HB updates the migration success rate in Fig. 4
Proposal for accurate prediction of the surrounding environment
In order to predict the environment more accurately, it is most appropriate to consider the distribution of success rates over multiple request events, because this distribution converges toward a representative value as the sample of request events grows. Fig. 6 demonstrates a simple example using a sample set of size two, in the same setting as Fig. 4. In the example of Fig. 6, suppose that at least 5 sensor node members are required for sensing hole recovery. As shown in Fig. 6(a), the header HB of cluster zone B detects the sensing hole and requests 5 members from the neighboring cluster zone A (first REQ message). The requested number '5' is calculated using the average of the migration success rates stored in a queue that holds recent success rates. The queue is initialized with all values set to 1, and its length is assumed to be 2 (written P2 = [1, 1]). The average P* of the values stored in P2 is then the average success rate, which is initially 1. If Eq. (2) is modified accordingly, the number of requests (cnt) to be inserted into the REQ message is calculated by Eq. (3):

cnt = Ceiling[(number of members needed) × (1 + (1 − P*))]   (3)

In Fig. 6(b), five members selected by the header HA move toward the sensing hole. Suppose that only one member of the group reaches it; in other words, four members become 'faults' because of obstacles. As shown in the first row of the table in Fig. 7, the header HB updates the current migration success rate to 1/5 = 0.20, i.e., one successful move out of five requests, as a result of the first REQ message. The queue is also updated to P2 = [1.00, 0.20], so that P* is 0.60. In Fig. 6(c), the header HB, which has not yet removed the sensing hole, requests the 6 members calculated by Eq. (3) (Ceiling[4 × (1 + (1 − 0.60))] = 6) from the neighboring cluster zone A (second REQ). Of the six-member group selected in Fig. 6(d), four members arrive at cluster zone B, which recovers the sensing hole. Then, as shown in the second row of the table in Fig. 7, the migration success rate is updated to 4/6 = 0.67 as a result of the second REQ message, and the queue becomes P2 = [0.20, 0.67], so P* is 0.44. Unlike Fig. 4, two REQ messages were sufficient to overcome the sensing hole in cluster zone B. Here, the estimated obstacle level (1 − P*), as seen through the change in the queue-averaged migration success rate, moves from 33% to 56%. The examples in Figs. 4 and 6 assume that obstacles cover 50% of the area between the cluster zones, with 40% of the area obstructed in the upper half and 10% in the lower half. In this environment, where obstacles are not uniformly distributed, the number of REQ messages is reduced by one even with a success-rate queue of length only two, and one can observe how the estimated obstacle level approaches the true 50%. In this paper, we propose a method that stably calculates the number of movable hopping sensor node members to request by properly estimating the obstacle distribution, even when obstacles are not uniformly distributed.
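A minimal sketch of the proposed queue-based estimate (assuming a fixed-length FIFO of recent success rates; class and variable names are ours, not the paper's): the request count of Eq. (3) uses the average P* of the stored rates rather than the single most recent rate.

```python
import math
from collections import deque

class MigrationEstimator:
    """Keeps the last L migration success rates and exposes their average P*."""
    def __init__(self, queue_length: int = 2):
        # Initialized with all-1 values, as in the example P2 = [1, 1].
        self.rates = deque([1.0] * queue_length, maxlen=queue_length)

    @property
    def p_star(self) -> float:
        return sum(self.rates) / len(self.rates)

    def record(self, successes: int, requested: int) -> None:
        self.rates.append(successes / requested)

    def request_count(self, needed: int) -> int:
        """Eq. (3): cnt = ceil(needed * (1 + (1 - P*)))."""
        return math.ceil(needed * (1 + (1 - self.p_star)))

# Example of Fig. 6: the first REQ asks for 5 members; only 1 arrives, so the second REQ asks for 6.
est = MigrationEstimator(queue_length=2)
print(est.request_count(5))   # first REQ: 5
est.record(1, 5)              # P2 = [1.00, 0.20], P* = 0.60
print(est.request_count(4))   # second REQ: ceil(4 * 1.4) = 6
```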
Descriptions of the proposed protocol
In this section, we define the message types and the procedures implemented in each cluster header and hopping sensor node member for the operation of the newly proposed method. All message types and their definitions are listed in Tab. 1. As shown in Fig. 8, the cluster header periodically broadcasts a HELLO message to identify its hopping sensor node members; the broadcast period is the interval at which a timer (helloMsgTimer) runs and expires. After receiving the HELLO message, a hopping sensor node determines the level of its own jump from the received signal strength and then transmits a HELLO-ACK message to the cluster header. The timer can be set to a duration within which all initially deployed sensor node members can return HELLO-ACK messages to the cluster header (e.g., 30 minutes in our simulation). However, as the number of active sensors decreases due to energy depletion, a fixed timer that was initially set long enough can leave a sensing hole undetected for longer: even though a sensing hole has occurred, checking only at a fixed interval prevents the cluster header from recognizing it immediately. Adjusting the timer after the cluster header has determined the number of active sensors could therefore efficiently reduce the time needed to recognize a sensing hole; this is a valuable topic for future research. Fig. 9 shows the procedure by which a hopping sensor node member determines whether it is a relay node after receiving a HELLO message. If a sensor node member receives HELLO messages from multiple cluster headers, it registers each of them as its cluster header and recognizes that it has become a relay node. In Fig. 10, when the cluster header's timer expires, the header checks the number of registered members to determine whether its cluster zone is a sensing hole. If it is not, the cluster header simply broadcasts the HELLO message again and restarts the timer. If it is judged to be a sensing hole, the header sends a RELAY message to its relay nodes. As soon as the relay nodes receive the RELAY message, they immediately send a RELAY-ACK message back to the cluster header. The cluster header may receive multiple RELAY-ACK messages in sequence; the relay node whose RELAY-ACK arrives first is selected as the relay capable of delivering a message to the header of the neighboring cluster, and the remaining RELAY-ACK messages are ignored. The cluster header of the sensing hole then delivers its node member request for sensing hole recovery to the neighboring cluster header through a REQ message. The number of node members required for recovery could be calculated as the difference (T − C) between the threshold for declaring a sensing hole (T) and the current number of sensors (C). In reality, however, the terrain around the cluster header is rough, so the sensing hole is likely to persist if fewer than the requested (T − C) sensors arrive. Therefore, the cluster header calculates the number of member sensors to request (cnt) using Eq. (4), which follows from Eq. (3):

cnt = Ceiling[(T − C) × (1 + (1 − P*))]   (4)

where P* is the average of the migration success rates stored in the queue PL (of length L).
In Fig. 11, if the relay node receives a REQ message from the cluster header of the sensing hole, it forwards the REQ message to its other cluster header. Here, the source address of the REQ message is set to the relay node's own address, and the destination address is changed to the address of the cluster header to which the message is to be delivered. In Fig. 12, when the cluster header of a zone neighboring the sensing hole receives the REQ message, it broadcasts an ADV message to determine which of the member sensor nodes in its zone can move. Upon receiving this message, the member nodes transmit an ADV-ACK message containing their current physical information to the cluster header. Here, 'physical information' comprises the various data needed for a relocation strategy [Virdis and Kirsche (2019)], such as current energy status, hopping mobility level, and GPS coordinates. The cluster header collects the ACK messages and selects movable member sensor nodes. It then transmits a MOVE command message, including the location of the cluster header of the sensing hole, to the selected members. The member nodes receiving the MOVE message hop to the neighboring zone using the GPS position of the cluster header of the destination zone.
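In the same spirit, the request count that the sensing-hole header places in the REQ message (Eq. (4)) can be sketched as follows, with T the sensing-hole threshold and C the current number of registered sensors (a hypothetical helper of ours, reusing the P* estimate introduced earlier; the example numbers are illustrative).

```python
import math

def req_message_count(T: int, C: int, p_star: float) -> int:
    """Eq. (4): cnt = ceil((T - C) * (1 + (1 - p_star)))."""
    return math.ceil((T - C) * (1 + (1 - p_star)))

# Example: threshold T = 15, only C = 11 sensors remain, average success rate P* = 0.6
# -> the header requests ceil(4 * 1.4) = 6 members from the neighboring zone.
print(req_message_count(15, 11, 0.6))  # 6
```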
Simulation result and analysis
The performance evaluations in most previous hopping sensor relocation studies have been of limited reliability because the protocols were theoretical and unrealistic. One of the significant contributions of this paper is therefore that an OMNeT++ simulation [Zarrad and Alsmadi (2017); Virdis and Kirsche (2019)] is used to reflect the actual physical environment. Tab. 2 describes the environment settings used in OMNeT++ for the performance evaluation. As shown in Fig. 13(a), 200 hopping sensors were randomly scattered over an area of 250 m × 60 m to collect data. If there are fewer than 15 sensor nodes in a cluster zone, it is regarded as a sensing hole. Each sensor communicates using the IEEE 802.11 model; the standard transmission radius is assumed to be 20 m, and the maximum transmission radius when the hopping sensor jumps to its maximum height is 29 m. A sensor is assumed to move about 2 m forward in a single jump and to be able to hop up to 130 times; a budget of 130 hops corresponds to the diagonal of the network area, the longest travel distance. As shown in Fig. 13, obstacles are randomly generated over 2% of the total area. Although the obstacles appear very large in the figure, they are assumed to be 1 m × 1 m squares; they were drawn larger only for visibility. The obstacles are not evenly distributed: they are weighted 80%:20% between the upper and lower halves of the area, reproducing a situation in which the obstacle distribution between clusters is non-uniform and biased toward one side. For the performance analysis of the relocation protocol, a sensing hole was generated in the middle of the area, as shown in Fig. 13. Assuming that continuous data collection occurs in the middle cluster zone, the scenario is set up so that the energy of the sensor nodes in the middle zone is consumed rapidly. Each sensor in the middle cluster generates data collection events with an exponential distribution (5 minutes on average). For convenience, the initial energy for sensing is set to 100, and 1 unit of energy is consumed per event. The cluster header then checks for the occurrence of a sensing hole (fewer than 15 sensors) via HELLO messages at 30-minute intervals. To focus the simulation on the relocation of sensor node members, it is assumed that sensors in the other zones do not collect data. In Fig. 13, sensors whose energy has been exhausted by continuous data collection, and which have therefore become faults, are shown in yellow. The satisfaction rate of keeping the threshold number of sensors (15) in the central cluster zone, so that it does not become a sensing hole, was tracked over a 36-hour time series; averaged over the entire simulation time, the proposed method improves the satisfaction rate by 7.9%. These results show that the proposed method is more effective at overcoming sensing holes. Moreover, even when a sensing hole temporarily persists under either method, the sensors remaining in the hole continue to collect and deliver data, which reduces the frequency of outliers at the observation point.
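For reference, the simulation settings of Tab. 2 and Fig. 13 can be summarized as a configuration sketch (values transcribed from the text above; the dictionary form and key names are ours, not an OMNeT++ configuration file).

```python
# Illustrative summary of the OMNeT++ scenario parameters described above.
SIMULATION_CONFIG = {
    "area_m": (250, 60),              # deployment area
    "num_sensors": 200,               # randomly scattered hopping sensors
    "sensing_hole_threshold": 15,     # fewer sensors in a zone => sensing hole
    "radio_range_m": 20,              # standard transmission radius (IEEE 802.11 model)
    "max_jump_range_m": 29,           # transmission radius at maximum jump height
    "hop_distance_m": 2,              # forward movement per jump
    "max_hops": 130,                  # roughly one network-diagonal of travel
    "obstacle_area_fraction": 0.02,   # 1 m x 1 m square obstacles, 2% of the area
    "obstacle_split_upper_lower": (0.8, 0.2),  # biased obstacle distribution
    "event_interarrival_min": 5,      # mean of exponential data-collection events
    "initial_energy": 100,            # energy units; 1 unit consumed per event
    "hello_interval_min": 30,         # HELLO broadcast / sensing-hole check period
    "sim_duration_h": 36,
}
```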
Figure 15: Average migration success rate at the middle cluster

Fig. 15 shows the change in the average migration success rate. In the given environment, the obstacles are unevenly distributed, with most of them in the upper part. In such an environment, if only the single previous migration success rate is used each time, as in the case of a queue length of 1, the next migration success rate can change dramatically. By contrast, the change in the average migration success rate with a queue length of 20 is more stable than when only the previous success rate is considered (queue length 1). Therefore, when requesting members from clusters in the neighboring area, the more stable average migration success rate (queue length 20) allows the fixed degree of obstruction around the sensing hole to be predicted better than with the previous method (queue length 1).
Figure 16: The moments at which sensing holes appear at the middle cluster

Fig. 16 indicates how often, and when, the central cluster becomes a sensing hole. Over 36 hours, 32 sensing holes occurred with the proposed method, compared with 44 with the previous method. Whenever a sensing hole occurs, a cluster header sends a burst of messages to the members of its own zone and a burst of request messages to a neighboring zone. The proposed method can therefore reduce the energy consumed by message transmission compared with the previous method. Figs. 17 and 18 provide further details on message transmission and energy consumption for the entire network area. Fig. 17 shows the number of members requested in each REQ message generated by the cluster header of the sensing hole. Fig. 18 accumulates the graph of Fig. 17 in chronological order, expressing the minimum number of members that have moved across the network since the start of observation. The difference between the two curves over time can be regarded as an indicator of the improvement in energy consumption across the network; as a result, the proposed algorithm can save up to 36.9% of the energy after 36 hours compared with the previous algorithm.
Conclusion
Physical obstacles in the sensor network and the inherent energy limitations of micro IoT devices not only hinder adequate data collection but can also lead to unintended consequences such as a loss of reliability in big data analysis. Typically, the only way to overcome a sensing hole quickly is to relocate mobile sensors, but so far most studies have been purely theoretical and cannot be used in practice.
Recently, our research team designed a relocation protocol based on a distributed environment that can substantially alleviate these problems. In this paper, a method for predicting the surrounding environment is proposed to make the relocation of hopping mobile IoT devices more successful. In the existing method, the obstacle level between cluster zones was estimated from the success rate of a single relocation request; in this paper, a representative value of the distribution of several relocation request success rates is used instead to predict the obstacle environment. As a result, excellent sensor relocation results were obtained when the obstacles were distributed non-uniformly, with better sensing hole recovery and energy efficiency. Moreover, a significant contribution of this work is the design and performance analysis, through OMNeT++ simulation, of a protocol that can be used immediately in a real environment.
Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.
Observation of Magnetic Domains in Amorphous Magnetic Wires with a Diameter of 10 μm Used in GSR Sensors
The core of a Gigahertz Spin Rotation (GSR) sensor, a compact and highly sensitive magnetic sensor, is composed of Co–Fe-based amorphous magnetic wire with a diameter of 10 μm. Observations of the magnetic domain structure showed that this magnetic wire has unusual magnetic noise characteristics. Bamboo-shaped magnetic domains a few hundred micrometers in width were observed to form inside the wire, and smaller domains a few micrometers across were observed to form inside these larger domains. The magnetic domain pattern changed abruptly when an external magnetic field was applied to the wire. Herein is shown how these changes may be a source of magnetic noise in the wire.
Introduction
Magnetic sensors are used in various automotive, medical, and data-storage applications, and their performance is expected to be improved. The magnetic core of a Gigahertz Spin Rotation (GSR) sensor is composed of amorphous magnetic wire, on which a detection coil is wound [1][2][3]. Amorphous magnetic wires are known to have a characteristic magnetic domain structure and have been extensively studied [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Takajo and Yamasaki et al. reported that Fe-based wires have maze-like magnetic domains near the surface and magnetic domains parallel to the axis in the core [11,15,17,18]. It is also known that the magnetic domain structure of Co-Fe-based wires changes with the Co content, with circumferential bamboo-shaped magnetic domain structures occurring near the surface and triangular closed domains being observed on the polished surface [17]. Such disturbances in the magnetic domain structure, as well as local vortex-like structures, can be a source of noise in GSR sensors that use this type of wire. Thermal noise and the magnetostriction of magnetic materials have been reported to be one of the causes of noise in GMI sensors and fluxgate magnetic sensors using amorphous magnetic materials [20][21][22][23][24][25][26][27]. However, although observations of the magnetic domains of wires with diameters of 50 to 100 µm have been made, there have been few studies of the domains of amorphous magnetic wires with diameters of less than 50 µm, such as those used in GSR sensors [4][5][6]. High-resolution magnetic domain observation techniques using Magnetic Force Microscope (MFM) and Transmission Electron Microscope (TEM) [6,28] are necessary to observe magnetic domains in thin wires. On the other hand, the magnetic domain observation technique using the Kerr effect has the advantage of the in situ observation of magnetic domain structure changes due to applying a magnetic field. However, it is inferior to MFM and TEM regarding spatial resolution. The authors improved the high-spatial-resolution magnetic domain observation technique using the Kerr effect by using ultraviolet light and image processing [29,30]. In this study, we aimed to clarify the source of noise in GSR sensors by making observations of the magnetic domain structure of a 10 µm diameter Co-Fe-based magnetic amorphous wire.
Experimental Procedure
The wire used for our observations was an as-cast Co-Fe-Si-B amorphous magnetic wire that had a diameter of 10 µm. A vibrating sample magnetometer was used to measure hysteresis loops, and a Kerr effect microscope was used to observe the magnetic domains. To polish the sample to have a mirror-like surface, sections of wire approximately 5 mm long were embedded in epoxy resin, mechanically polished using a diamond paste, then chemically polished using alumina and colloidal silica. An external magnetic field was applied using a Helmholtz coil; the changes that then occurred in the magnetic domain structure were observed. The magnetic domain contrast was enhanced by differential processing of magnetic domain images [31].
Hysteresis Loop
The measured hysteresis loop of the Co-Fe-Si-B amorphous magnetic wire is shown in Figure 1. The measurement direction was along the wire's axial direction, and the coercive force was about 0.86 Oe. It can be seen that the saturation magnetization was about 1.3 T, and that this could be reached by applying an external magnetic field of about 10 Oe. These results indicate that the wire had excellent soft magnetic properties. Figure 2 shows the magnetic domains that were observed on the polished surface of the magnetic wire. The center of the 5 mm long wire was polished over a length of about 1 mm. The width of the polished surface is narrower at the left-and right-hand ends of the polished area, indicating that the polished depth is smaller at the ends and greater in the center. The bright and dark magnetic domains correspond to upward and downward magnetization, respectively, indicating that the orientation of the magnetization is transverse to the axis of the wire. The domain configuration also indicates that the magnetization is oriented in this direction at the polished surface, i.e., the magnetic domains can be considered to form a bamboo-like structure [9,11,12,15,17] with a non-uniform width. Figure 3 shows the change in the magnetic domain that occurred around 700 to 800 µm from the left-hand end of the wire when the magnetic field, H, that was applied in the axial direction was varied from +15 to −15 Oe. The bright and dark areas in Figure 3a correspond to the leftward and rightward magnetization components, respectively. In the remanent magnetization state (H = 0), the magnetization direction alternates between left and right in the radial direction of the wire. When H reaches either −15 or +15 Oe, the original bamboo-like domain structure, which is characteristic of wires containing Co [9,17], disappears, and the direction of the domain becomes uniform throughout the wire. This indicates that the application of a magnetic field causes rotation of the magnetization in the wire's axial direction, resulting in magnetic saturation. Figure 3b shows the same field of view as Figure 3a; in this case, the bright and dark areas correspond to the detected magnetization component in the vertical direction in the figure. Since a line-shaped contrast can be observed at the boundary of the magnetic domain in Figure 3a, it is considered that the magnetization forms a Néel magnetic wall that rotates in the plane of the polished surface in the vicinity of the magnetic wall.
Macroscopic Observations of Magnetic Domains
Based on these results, a model showing the structure of the magnetic domains in the wire was constructed and is shown in Figure 3c. The direction of the magnetization is along the wire axis, and magnetic saturation occurs at H = +15 Oe. When the magnetic field is reduced to +6 Oe, the magnetization rotates so that it is directed to the left and right of the wire radius and magnetic domain walls appear, thus forming a striped magnetic domain structure with the magnetization facing the wire radius in the remanent magnetization state. When the external magnetic field is subsequently increased in the negative direction, new magnetic domains are nucleated; in addition, continuous rotation of the magnetization and movement of the magnetic domain walls occurs. Thus, a discontinuous magnetic domain change occurs inside the wire during the magnetization process in addition to the rotation of the magnetization and magnetic wall movement. These sudden changes in the magnetic domain structure can produce noise in magnetic sensors. We consider that the nucleation of magnetic domains occurs when a magnetic field is applied in the axial direction of the wire because the easy axis of magnetization is inclined from the radial direction of the wire. Figure 4 is an image of the magnetic domains around 400 to 500 µm from the lefthand end of the wire shown in Figure 2. The surface was polished to a greater depth than in the case of the wire shown in Figure 3. Figure 4a shows the magnetic domain contrast in the radial direction; it can be seen that there is uniform leftward magnetization. Figure 4b shows the magnetic domain contrast in the axial direction of the wire; a complex striped pattern of several µm in size can be observed. It can be seen that an even finer magnetic domain structure with domains a few micrometers across formed inside the bamboo-like domain structures described in Section 3.2.1 (which had widths of several hundred micrometers). The magnetization in these smaller domains is directed along the radius of the wire; however, Figure 4b shows that it has a small axial component. To investigate the magnetization process for this fine magnetic domain structure, we observed the change in the magnetic domain structure when a magnetic field directed along the wire axis was applied. Figure 5 shows the change in the magnetic domain pattern within the area enclosed by the red box in Figure 4b when the magnetic field in the direction of the wire axis was varied from +25 to −25 Oe. Figure 5a,b show the magnetic domain pattern in the radial and axial directions of the wire, respectively. In contrast, Figure 5b shows that fine magnetic domains a few micrometers across with magnetization components in the axial direction of the wire were observed near H = 0 Oe. The magnetic domain contrast becomes weaker as the magnetic field increases and disappears when magnetic saturation is reached. In addition, when the magnetic field direction changes from positive to negative, the magnetic domain pattern reverses; in other words, when the magnetic field changes from 0 to −1 Oe, a sharp reversal of the magnetization component in the direction of the wire axis occurs. This abrupt change in the magnetic domain pattern is a possible source of noise in the GSR sensor element. Figure 6 shows the remanent magnetization state for the wire shown in Figure 5a Figure 5a, the overall brightness of the wire when a magnetic field of +25 Oe was applied was 124.6; for a field of −25 Oe, it was 44.5. 
These states were defined as corresponding to rotation angles of +90° and −90° relative to the radial direction of the wire (the left direction), respectively. Using these definitions, an angle of 0° corresponds to a brightness of 84.5. The rotation angle θ was derived from the brightness using

θ = sin⁻¹[(brightness value − 84.5) / (124.6 − 84.5)]   (1)

The region where this rapid change in the magnetization direction occurs is shown in blue in Figure 8. It can be seen that a sharp change in the direction of the magnetization along the wire axis occurred in the zigzag-shaped region in the center. This region occupies about half the area of the entire wire, and this abrupt change in the magnetic domain structure will generate a large output voltage in the detection coil wound around the wire in the sensor element, thus increasing the noise.
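As a small illustration of Eq. (1) (our own sketch; the saturation brightness values 124.6 and 44.5 and the zero-rotation brightness 84.5 are taken from the text above), the magnetization rotation angle can be recovered from the Kerr image brightness as follows.

```python
import math

def rotation_angle_deg(brightness: float,
                       b_zero: float = 84.5,
                       b_plus90: float = 124.6) -> float:
    """Eq. (1): theta = asin((brightness - b_zero) / (b_plus90 - b_zero)), in degrees."""
    return math.degrees(math.asin((brightness - b_zero) / (b_plus90 - b_zero)))

print(rotation_angle_deg(124.6))  # ~ +90 degrees (saturated at +25 Oe)
print(rotation_angle_deg(44.5))   # ~ -86 degrees (close to -90 at -25 Oe)
print(rotation_angle_deg(84.5))   #    0 degrees
```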
Conclusions
Observations were made of the magnetic domain structure on the polished surface of a Co-Fe-based amorphous magnetic wire that had a diameter of 10 µm. Bamboo-shaped magnetic domains a few hundred micrometers in width formed inside the wire. This macroscopic domain structure causes rotation of the magnetization and a magnetic wall shift during reversal of the magnetization; in addition, the nucleation of multiple magnetic domains causes discontinuous and abrupt magnetization changes. These changes in the domain structure can produce noise in magnetic sensors. Inside the bamboo-shaped magnetic domains, there is an additional, finer domain structure consisting of domains a few micrometers across. Within the micrometer-scale zigzag-shaped region observed in the center of the wire, large changes in the direction of the magnetization occur. Such a fine magnetic domain structure inside the bamboo-shaped magnetic domain was not observed in the 100 µm diameter magnetic wires observed in the past. These changes in direction and the magnetic domain structure are another source of noise in magnetic sensors.
In this study, observations of magnetic domains were performed at the polished surface of an amorphous magnetic wire. Further investigations, such as three-dimensional magnetic domain observations [32][33][34][35][36], are needed to clarify the effect of polishing on the magnetic domain structure. In the future, it would be desirable to experimentally measure the noise of the GSR sensor and compare it directly with the changes of the magnetic domain structure. Nevertheless, we clarified one of the causes of noise in GSR sensors by observing the magnetic domains in microwires. In subsequent studies, we plan to study the effect of heat treatment on the magnetic domain structure of amorphous magnetic wires and GSR sensor noise.
Providence and God's emergent will through prayer as it relates to determinism and healing
The paper has a twofold purpose. The first is to explore: if God has settled His plans and He will do what He is going to do, then does it matter whether one prays or not? This section will also deal with the aspect of healing and prayer, specifically from a scientific perspective. The important question is: How should one treat reports of miraculous healings, and the belief that prayer can affect healing? Secondly, if prayer has any effect on what happens, then it would seem that God's plans are not fixed in the first place, and the idea of an open future would seem to be valid. As a result, one could no longer see the world in terms of a mechanistic Newtonian picture; rather, the picture portrayed would be of a world of flexibility and openness to change. The question would then be: What is the manner and scope of divine action, and wherein lies the causal joint? Regarding this, areas related to determinism will be explored, as determinism states that all events in the world are the result of some previous event or events. Bringing clarity to these questions is important, as it has a direct bearing on how one will view the miracles recorded in the Scriptures, and how far one will go in trusting God to meet one's needs through prayer.
INTRODUCTION
The first question one might ask in a debate about the providential hand of God in creation is: How should one define providence? Polkinghorne (1998:84-85) refers to providence as Divine action in the world. From a theological and scientific view, he sees providence divided into three levels.
General providence. This is the divine sustaining of the order of the world, in which one understands the laws of nature as expressions of God's faithfulness. The deist, as much as the theist, will accept this idea.
Special providence. This view concerns itself with particular Divine actions within cosmic history. One understands it as taking place within the grain of physical process, thus not immediately distinguishable from other happenings. God may act through famine or through times of plenty, and this may be recognisable by faith, but it will not be demonstrable to the sceptic.
Miracle. The concern here is with radically unnatural events, such as turning water into wine or restoring the dead to life. If such things happen, their very nature suggests that they are the effects of Divine action of an unusual kind.
For Polkinghorne, these categories are not entirely sharply defined. There are some events, such as those that might be interpretable as highly significant coincidences, which might seem to fall into a grey borderline area. Nevertheless, the classification provides a useful taxonomy for thinking about possible Divine acts.
As a result, in recent writings about science and theology there has been much discussion of God's action in the world. The following paper is presented to survey some of the suggestions put forward. But before one ventures any further, a problem that has concerned thoughtful Christians when considering the nature of providence is the role of prayer, and how it links to miraculous events, specifically the healing of one's body. Every committed Christian wants to believe that prayer makes a difference. What is the point in praying, according to Ware (2000:164), if prayer itself turns out to be superfluous and ineffectual? One should note from the start of this discussion that these questions are simply one particular form of the larger issue of the relationship between human effort and Divine providence. Barth (1958:148) defines Divine providence in terms of the sovereignty of God when he states that God "rules unconditionally and irresistibly in all affairs… Nature is God's 'servant', the instrument of His purposes… God controls, orders, and decides, for nothing can be done except the will of God… God foreknows and predetermines and foreordains".
Although this statement might be biblically true, it does appear from Scripture that God often works in a sort of partnership with humans. As a result, it seems as if God does not act if humans do not play their part. Therefore, when Jesus ministered in His hometown of Nazareth, He did not perform any major miracles; all He did was heal a few sick people. Scripture states that Jesus "was amazed at their lack of faith" (Mk 6:6), suggesting that the people of Nazareth simply did not bring their needy ones to Him for healing. Often the act of faith was necessary for God to act, but it seems that this was lacking in Nazareth.
When it comes to prayer and Divine providence, Thiessen (1979:129) states that some hold that prayer can have no real effect on God, since He has already decreed just what He will do in every instance; he does argue that this is an extreme position. One must not ignore James 4:2, "You do not have because you do not ask". God does some things only in answer to prayer; He does other things without anyone's praying; and He does some things contrary to the prayers made. In His omniscience He has taken all these things into account, and in His providence He sovereignly works them out in accordance with His own purpose and plan. Thiessen (1979:129) further argues: If we do not pray for the things that we might get by prayer, we do not get them. If He wants some things done for which no one prays, He will do them without anyone's praying. If we pray for things contrary to His will, He refuses to grant them. Thus, there is a perfect harmony between His purpose and providence, and man's freedom.
In this regard, an area one would need to consider is the contentious issue about the belief that God heals when one prays.
PROVIDENCE IN PRAYER AND HEALING
The twentieth and early twenty-first centuries have seen a remarkable growth of interest in the subject of spiritual healing of the body. This has come in three related but distinct stages of movements, according to Erickson (2001:852-853). First is the Pentecostal movement, which arose and grew in the United States in the early part of the twentieth century. This stage stressed the return of certain of the more spectacular gifts of the Holy Spirit. Then, about the middle of the century, the Neo-Pentecostal or Charismatic movement began; it had many of the same emphases. In the 1980s and onwards the "Third Wave" arose. These movements put greater stress on miracles of spiritual healing than does Christianity in general. Often they make no real attempt to give a theological explanation or basis for these healings. But when one raises the question, the answer often given is that healing, no less than forgiveness of sins and salvation, is found within the atonement. Christ died to carry away not only sin, but sickness as well. Among the major supporters of this view was A B Simpson, founder of what today is known as the Christian and Missionary Alliance.
One of the striking features of the view that Christ's death brings healing for the body, according to Simpson (1880:30-31), is the idea that the presence of illness in the world is a result of the fall. When sin entered the human race, a curse (actually a series of curses) was pronounced on humanity; diseases were part of that curse. According to Simpson and others, since illness is a result of the fall, not simply of the natural constitution of things, one cannot combat it solely by natural means. Being of spiritual origin, one must combat it in the same way one combats the rest of the effects of the fall: by spiritual means, and specifically by Christ's work of atonement. Intended to counter the effects of the fall, His death covers not only guilt for sin but sickness as well. Healing of the body is therefore part of a Christian's great redemption right.
Unfortunately, this is in stark contrast to the various research studies undertaken over many decades in the area of healing, specifically prayer for healing. The following is a breakdown of these findings.
HISTORICAL ASPECTS OF PRAYER AND MEDICINE
In various interviews and surveys undertaken over several decades by prominent scientists and medical doctors (Meyers and Benson 1992; Angel 1985; Kleinman et al 1978; and Engel 1977), it was found that most people believe not only that the mind affects the body (a view with which most scientists would agree), but also that there are supernatural forces that have a profound effect on one's physical and emotional well-being (a view with which most scientists would disagree). From a scientific perspective, the important question is: How should one treat reports of miraculous healings, and the belief that prayer can effect healing? Is there a special connection between belief in the supernatural and physical well-being? With the accelerating technical advances of Western medicine, there are increasing patient complaints against the medical community for its exclusionary focus on the biomedical model of disease.
According to these surveys, it would seem that many patients, particularly if their disease is severe, want metaphysical as well as medical interventions, that is, they want a direct link from their medical care to God.
In later studies, and in response to these findings, McCullough (1995:15-29), in a review of the prayer literature, considered the following four areas of prayer research:
• Prayer and subjective well-being;
• Prayer as a form of coping;
• Prayer and psychiatric symptoms;
• Intercessory prayer.
He reported that both the frequency of prayer and the presence of mystical and religious experience during prayer were predictive of subjective well-being on many indexes. It was, however, stated that several confounds in the studies reviewed rendered the data interpretation problematic. Variables such as religious commitment and socio-demographics were not controlled. As a result, if one prays often but has little commitment to religious belief, one may predict that the positive effects on subjective well-being might diminish.
McCullough further noted that prayer is used more often for symptoms that are treated with medication and discussed with a doctor than for those that are not. One obvious problem is that prayer as an effective coping response is confounded with medical treatment. Thus, as one experiences the effect of the medical treatment, there might be a tendency to credit the change to prayer.
One might ask: What about intercessory prayer (IP), or the act of praying for another? Sir Francis Galton (1872:125-135) was the first to apply statistical analysis in trying to determine the effects of IP. While his data collection method was flawed, he inferred that IP was not a significant predictor of life span or social class. Since Galton's study in 1872, there have been six empirically based studies looking into the effect of intercessory prayer. These studies, undertaken by Collipp (1969:201-204), Elkins et al (1979:81-87), Joyce and Weldon (1965:367-377), O'Laoire (1997:38-53), and Wirth and Barret (1994:61-67), centred mainly on the effect of prayer on various medical conditions of adults and children. The results recorded found no statistically significant effect of intercessory prayer for these patients. Green (1993:2752), however, did find positive expectancy (the belief in the effectiveness of prayer) in relation to IP to have a significant effect on patient anxiety levels. Thus, for those patients who had a high expectancy of the effectiveness of prayer to reduce anxiety, anxiety was reduced. But these studies do not validate or deny the effect of prayer. The question therefore remains unanswered: Does prayer work?
PSYCHOLOGY AND PRAYER
The question one might now ask is: Should medical doctors or psychologists advise their patients to pray? According to Sloan et al (1999:664-667), "it is premature to promote faith and religion as adjunctive medical treatments". According to them, the existing research on the effect of prayer is so flawed, in terms of controlling for viable alternative theories and the likelihood of errors, that belief in prayer for physical and emotional well-being is simply unwarranted. However, the empirical evidence strongly suggests that expectancies of desired outcomes, social connectedness, and deep religious positive expectancies may be effective buffers against the stressors associated with various medical conditions. As such, any intervention that improves patient well-being is valuable. One could also ask: What is the role of psychology in understanding the effectiveness of prayer in one's life?
The study of prayer in the early history of modern psychology was, without doubt, a thriving concern (see Pratt 1908; Strong 1909). In the years that followed, however, the study of prayer dropped dramatically, following the general trend of declining interest in the relation between psychology and religious beliefs (see Spilka and McIntosh 1999). However, during the last several years, researchers have revisited the topic of prayer (see Hood et al 1993; Ladd and Spilka 2002; Laird et al 2001; Poloma and Gallup 1991). Consequently, Ladd and Spilka (2002) proposed an explicit theoretical basis for understanding prayer as a means of forming cognitive connections. One should note that none of these proposals was based on the premise that one was dealing with a personal God when praying. As a result, one might then ask: What has this to do with providence?
The reason why the author has brought this into the discussion is to show that many pray without really believing that anything will happen, except within themselves. And, of course, the person praying has the comfort of knowing that they have someone they can talk to, whether the desired outcome of the prayer manifests itself or not (this is explored further on). Thus, according to the research conducted by Ladd and Spilka (2002:234), prayer contains inward, outward, and upward dimensions, as postulated by the research conducted by Foster (1992). The theory behind this is that inward prayers emphasize self-examination, outward prayers focus on strengthening human-to-human connections, and upward prayers centre on the human-Divine relationship.
Besides the directionality of prayer put forth, Ladd and Spilka (2002) also reported three second-order factors, referred to as higher orders that appear to represent the intentionality of prayer.
• Higher order factor one consists of content stressing intercession.
Outward: Prayer on behalf of someone's difficulties.
Outward: Prayer to share another's pain.
Inward: Prayer to evaluate one's spiritual status.
In broad terms, it seemingly represents a way of connecting which highlights the internal conditions of others as well as oneself.
Engaging in intercessory prayer compels recognition of another's inner struggle, even as examination prayer evaluates one's own private situation. Perhaps even more intense is the prayer of suffering, or the willingness to enter someone else's pain to provide comfort.
• The second higher order factor encompasses prayers of rest.
Here, connections with the Divine appear to provide both peace and pain. These mixed experiences of spiritual pleasure and pain are not uncommon (cf. Weil 1951).
• The third higher order factor is marked by: Outward: Assertiveness and petitionary prayer.
Outward: Material request approaches to praying.
No inward experience is recorded here.
This factor shows connections based on a bold use of prayer. Instead of abandoning one's needs, this type of prayer puts those needs at its centre. The research conducted did not refer to any empirical data stating whether any of the needs prayed for were received.
Doubtless, what these researchers have uncovered and systemised is correct, and it does throw more light on the subject of prayer. The problem is that it fails to answer the question of God's involvement in one's prayers, other than at a superficial level. The comfort of knowing that, from an inward, outward and upward perspective, prayer does to a degree accomplish something is not enough in the author's view.
It is unfortunate that many of the studies undertaken on prayer and healing were based on empirical data alone, tending to ignore the omnipotence and omni-benevolence of God. It was also not indicated whether any of the subjects interviewed, or the scientists conducting the experiments, had a believing, trusting faith in God, even though they did pray. So far, the author of this paper has not found any major research undertaken by evangelicals to counter these scientific findings. It is also unfortunate that many scholars, even those in the theological disciplines, are sceptical when it comes to anything related to healing or miraculous events. Bultmann (1958:16), for example, asserted that miracles were "myths". He wrote, "Modern men take it for granted that the course of nature and history…is nowhere interrupted by the intervention of supernatural powers".
The question remains as to why the Bible would instruct Christians to pray in all circumstances if God were not going to answer any of their prayers, specifically prayers for healing. Although it was suggested that the research data presented was flawed, and that much research is still needed, one might well ask the question: Is that a good enough answer when reading the negative statements made within these studies about the relationship between God, prayer and healing?
In all fairness, one must say that science deals with facts. Facts, according to Barton (1999:17), are the instruments that the natural scientist uses to build a coherent framework for understanding the world. The problem is that as this framework has developed, it has conflicted with religion, and it will continue to conflict with religion in future studies until common ground is reached between the two disciplines. The reason is that as science is exposed to new data it is subject to change, and it is therefore continuously evolving. One could say that there are no absolutes at this point in the scientific world, especially in its understanding of prayer. None of the scientists quoted can claim that his or her observations have earned the status of ultimate truth. In this vein, the following letter sums up the general consensus on the limits of science.
In a letter written to the scientific magazine Nature, Donald MacKay (1997:502), from the Department of Communications and Neuroscience at the University of Keele in the United Kingdom, wrote: In scientific laws we describe, as best we can, the pattern of precedent we observe in the sequence of natural events. While our laws do not prescribe what must happen, they do prescribe what we ought to expect on the basis of precedent. If by a "miracle" we mean an unprecedented event…then science says that miracles ought not to be expected on the basis of precedent. What science does not (and cannot) say…is that the unprecedented does not (or cannot) occur…We cannot dogmatically exclude the ever-present possibility that the truth about our world is stranger than we have imagined.
Although, as previously suggested, science has doubtless achieved enormous success as a way of knowing the structures and processes of the material world, physical science appears to leave no place for Divine action. One should also acknowledge that it is a human trait to seek explanations. Regardless of whether this is in science or any other discipline, each researcher could claim that he or she is doing research simply for the sake of understanding how nature works. This is irrespective of whether it is in religion or any other field that deals with unexplainable events, for example the discipline of quantum physics.
Natural science needs to understand that, if breakthroughs are to be achieved in the dialogue between science and religion, scientific methods, as advanced as they are, hold no intrinsic guarantee that they will lead to ultimate truth. This is specifically so when it comes to unexpected happenings, that is, when one prays and things happen.
Regarding this, Bloesch (1978:58) writes: Evangelical prayer is based on the view that a sovereign God can and does make Himself dependent on the requests of His children. He chooses to realise His purposes in the world in collaboration with His people. To be sure, God knows our needs before we ask, but He desires that we discuss them with Him so that He might work with us as His covenant partners toward their solution. There is, of course, a time to submit as well as a time to strive and wrestle with God in prayer, but this should come always at the end of prayer and never at the beginning. Moreover, our submission is not a passive resignation to fate but a relinquishing of our desires and requests into the hands of a living God to answer as He wills.
A question that now seems to surface is: How does God influence humanity regarding prayer and His answering of it, and how does this in turn affect surrounding reality to bring about God's Divine will?
DETERMINISM
According to Barrett (2004:142), Divine action is a long-standing topic of debate. If the world is no longer construed in terms of the mechanistic Newtonian picture but rather as a world of flexibility and openness to change, what is the manner and scope of Divine action, and wherein lies the causal joint? It is fairly obvious from the empirical data thus far presented that the causal joint to bring about change is not found in prayer. Thus, how or where does God actually act? Furthermore, has God in eternity past determined the course of all future events? This would make prayer even more irrelevant, unless another reason can be found for prayer. Doubtless, Determinism and Divine Causality have far-reaching implications concerning prayer and hence require a more than superficial investigation. Therefore, the first area one would need to consider is determinism itself.
One area of contemporary discourse in science that relates to the issue of human freedom is the notion of genetic determinism. Here, the concept of determinism is linked directly to the genes in the DNA of a person. Because one already knows that aberrations in certain genes can lead to various forms of physical and mental disease in humans, one can say with some certainty that people are physically determined by their genes. But genetic determinists want to extend this further, by claiming that even one's behaviour is determined by one's genes. In this line of thinking, humans are but victims of their genetic makeup, and any effort to change their moral nature or behavioural patterns would be futile.
Thus determinism states that all events in the world are the result of some previous event, or events. Accordingly, all of reality is already in a sense predetermined or pre-existent, and therefore nothing new can materialise. Thus the obvious question: Why pray?
To begin, this closed deterministic view sees all events in the world simply as effects of other prior effects -a sort of Supervenience or Emergence taking place -and has particular implications for morality, science, and religion. Ultimately, if determinism is correct, then all events in the future are as unalterable as are all events in the past. Consequently, human freedom is simply an illusion and the need for prayer irrelevant in changing surrounding reality, as its course of action -in a sense -has already been determined. The question then is: How does this affect humanity's ability to make free choices and plot their own future, specifically when praying for change, either inwardly or outwardly?
Regarding determinism, Murphy (1995) has proposed that God determines all quantum indeterminacies. However, God does arrange that law-like regularities usually result, in order to make stable structures and scientific investigation possible. Thus, God ensures that human actions have dependable outcomes, so that moral choices are possible. As such, orderly relationships do not constrain God, since He includes them in His purposes. Murphy holds that in human life God acts both at the quantum level and at higher levels of mental activity, but does so in such a way that it does not violate human freedom.
Mindful of this, an alternative would be to say that while most quantum events occur by chance, God "influences" certain quantum events without violating the statistical laws of quantum physics (see Russell 1998). However, a possible objection to this model is that it assumes bottom-up causality within nature once God's action has occurred. This, in turn, seemingly concedes to reductionism's claim that the behaviour of all entities is determined by their smallest parts -cells (or lowest levels). The action would be bottom-up even if one assumed that God directed His intents to the larger wholes (or higher levels) affected by these quantum events. However, most scholars in this field also allow for God's action at higher levels, which then results in a top-down influence on lower levels, as well as quantum effects from the bottom up (see Gregersen 2000:155-157; Clayton 1997:252-257).
In line with this, Peacocke (1993:215) says that, without argument, God exerts a top-down causality on the world. It must be stressed that Peacocke is influenced by the panentheistic view, and is very much in favour of an open-theistic view of the future. In his view, God's action is a boundary condition or constraint on relationships at lower levels that does not violate lower-level laws. Generally, boundary conditions may be introduced not just at the spatial or temporal boundaries of a system, but also internally through any additional specification allowed by lower-level laws. In human beings, God could influence the highest evolutionary level -that of mental activity -thereby changing the neural networks and neurons in the brain.
Peacocke maintains that Divine action is effected in humans down the hierarchy of natural levels; hence one has at least some understanding of the relationships between adjacent levels. Thus Peacocke suggests that God communicates His purposes through the pattern of events in the world. Consequently, one can then look on evolutionary history as the acts of an agent who expresses purposes but does not follow an exact predetermined plan -open theism. Moreover, he says, God influences one's memories, images, and ideas, just as one's thoughts influence the activity of neurons. Furthermore, Peacocke states that Christ was a powerfully God-informed person who was a uniquely effective vehicle for God's self-expression, so that in Christ, God's purposes are more clearly revealed than in nature or elsewhere in history. In the author's view, Peacocke's ideas seem to lean towards process theology or, as stated, the openness view of God, which, to a certain extent, relies on chance as the determiner of all future events. The reason is that God, in the openness view, relies on humanity making decisions, through free choices, that will hopefully line up with His determined plan. Regarding this type of freedom, Barbour (2000:127) states, "We cannot choose the cards we have been dealt, but we can to some extent choose what we do with them".
As such, ideas of top-down causation are put forward by both Peacocke (1993:157-16) and Polkinghorne (1998:60; 1996:31-32), but in different ways. As mentioned, Peacocke speaks of the relationship between Creator and creation in panentheistic terms, placing great emphasis on the immanence of God, who is all the time creating in and through the processes of the world. According to him, these processes are, in themselves, God's action and thus constrained to be what they are in all their subtlety and fecundity by virtue of the way God interacts with the world-as-a-whole. Knowing the interconnectedness of the world to the finest detail, one thus envisages God as being able to interact with the world "at a Supervenient level of totality" -Holistically -thereby bringing about particular events and patterns of events, that is, His predetermined plan. Such interaction amounts to the input of information, which by nature forms patterns, the energy content of which can be vanishingly small, so that there is no breach in the causal network of natural law. Indeed, it is a form of top-down causation that Peacocke prefers to call whole-part influence. As such, it meets his concern always to interpret the world's happenings as naturalistically as possible, seeing this as a crucial task of theology in the scientific age.
In the view of Barbour (2000:111), the idea of top-down causality has also been extended by theologians who suggest that God acts as a top-down cause from a higher level without violating the laws describing events at lower levels. God would be the ultimate boundary condition, setting the constraints within which events in the world occur.
Consequently, Polkinghorne also speaks of top-down causality through providing similarly energy-less active information, although he suggests a more direct input into the world's processes -the chaos concept. With the chaos concepts of the butterfly effect and strange attractor in mind, it is conceivable that pattern-forming information can lead a system from one arrangement to another. Since any trajectory from one point within its strange attractor to another does not involve any change of total energy, Polkinghorne suggests, the Divine will could be exerted within any macroscopic part of the world's structure. Besides, he also believes that there is a greater dynamical openness for Divine agency via chaotic systems than simply through holistic operation on the world-as-a-whole. It may be objected that macroscopic physical systems -even in their chaotic mode -follow deterministic equations and therefore cannot be expected to offer any room for manoeuvre. Accordingly, Polkinghorne (1998:36) states that the equations can be understood as approximations to true physical reality, applicable only in those rare and specific situations in which a system can be treated as totally isolated from its environment.
According to Barrett (2004:146), the idea of Divine providential action through hidden, introduced active information is consonant with that of a gracious Creator, a Creator who allows the creation to be itself and to have room to develop. This development takes place through the exercise of human free will and the pathways of free process, via divinely installed guiding principles of chance and necessity -is this perhaps again a subtle form of open theism? In Christian theology it is the Creator-Spirit who is thus creatively at work throughout space-time. This Spirit of Life, referred to by Taylor (1972:27-28) as the Go-Between God, is ever at work in nature, in history and in human living, and wherever there is a flagging or corruption or self-destruction in God's handiwork, he is present to renew and energize and create again....If we think of a Creator at all, we are to find Him always on the inside of creation. And if God is really on the inside, we must find Him in the process, not in the gaps. We know now that there are no gaps...If the hand of God is to be recognised in His continuous creation, it must be found not in isolated intrusions, not in any gaps, but in the very process itself.
METAPHYSICAL DETERMINISM
In this view, all events in the universe, including organismic behaviour, are the necessary outcome of antecedent conditions. Nothing but the behaviour that did occur could have occurred, given the antecedent causal circumstances.
As such, Nevin (1991) and Baum (1994) have made statements that appear to be compatible with metaphysical determinism. For example, Nevin (1991:36) makes clear: "According to the most central tenets of our creed, all behaviour is determined by genetics and environmental processes". Similarly, Baum's (1994:11-14) remarks on determinism are easily interpreted as metaphysical in nature. He considers determinism to be "the notion that behaviour is determined solely by heredity and one's environment".
One could look at this from another perspective. In the view of Byl (2003:106), although God is the primary cause of everything according to Col 1:16-17 and Heb 1:3, He usually works through secondary causes. In sustaining the universe from one moment to the next, God generally does so through the properties He has assigned to humans. Thus God usually permits humans to act according to their natures. In particular, He normally allows humans to do what they want, making their own decisions. Yet these human choices cannot be put into action without God's concurrence or cooperation. God could, in a sense, place laws of determinacy into cells at the quantum level. From this a determined emergence could occur throughout the different levels until it reaches the mental states (see Murphy 1996:23). From this mental state, ideas could emerge -one could call them God ideas (see Barbour 2000:170). It is at this level that one could either decide or refuse, by an act of free will, to go forward with the emerging ideas to bring about changes in the natural realm of reality. For Murphy (1996:25), this is where top-down action occurs: when human volition is involved. Consequently, this brings about the necessary causal changes with the capacity to influence that which sustains its very existence -the natural realm. One then has the combination of upward determinism and downward causation. This then brings about human experience, which then changes and adjusts human nature as God would have it. One could in a sense say that prayer is the causal joint that starts the process of bringing about His will on this earth, as the person praying, to a large degree, is rendering their will to a higher power. Thus every normal natural event has two causes: a primary Divine cause and a secondary, natural cause. At this point, one could actually say that miracles occur in those extraordinary cases (Divine healing, for example) when God withholds His concurrence and substitutes some other effect.
But despite all that has been said, the question of whether God answers specific prayer remains unanswered, and one can then only hypothesise as to how God works at the quantum level, or for that matter at any level He chooses. But, in saying this, could it perhaps be that prayer does not concern God as a means of fulfilling His will on this earth, as He has already predetermined His will through bottom-up and top-down emergent properties? Consequently, one could consider an alternative view expressed by some scholars when dealing with the issue of prayer. Some maintain that the idea of prayer has more to do with soliloquy, reflection on life and inner change than with changing the mind of God.
IS PRAYER ONLY A MEANS OF INWARD CHANGE?
Moltmann (1996:247-249), who breaks with monotheism and embraces a Hegelian form of panentheism (see Heiler 1958), contends that one can no longer pray to God but only in God, that is, in the spirit of God. Accordingly, one then reinterprets prayer rather as soliloquy, reflection on life or meditation on the ground of being. Some theologians (see Tillich 1957 and Schleiermacher 1963) believed that prayer should only take the form of gratitude, resignation, or meditation, rather than a petition to alter the ways of God. In other circles, prayer is interpreted and understood as a consciousness-raising experience which brings one into tune with the infinite. This is very much in line with the findings of Green (1993:2752) and Sloan et al (1999:664-667), who stated that for those patients who had a high expectancy of the effectiveness of prayer to reduce anxiety, anxiety levels were indeed reduced, and this might also have been an effective buffer for the stressors associated with various medical conditions.
What these researchers, in the author's view, fail to recognise is that prayer is an essential element in the totality of Christian living, especially regarding intercessory prayer. Paul, writing to Timothy, states the following in 1 Timothy 2:1-2: "I urge that supplication, prayers, intercession, and thanksgiving be made for all men, for kings and all who are in high positions". While no sharp distinction can be drawn between "supplications" and "intercessions", petitionary prayers are offered on behalf of others. But this does not, unfortunately, answer the question: Does God heal at one's request, or at the request of others, as in intercessory prayers offered on behalf of others? Packer (1997:29) clearly and rightly addresses this contentious area of God's providence and healing in the following way: Petitions for healing, or anything else, are not magic spells, nor do they have their effect by putting God under pressure and twisting His arm…Non-Christian prayers for healing may surprise us by leading to healing; Christian prayers for healing may surprise us by not being answered that way. There are always surprises with God. But with God's children 'Ask and you will receive' is always true, and what they receive when they ask is always God's best for them long-term, even when it is a short-term disappointment. Some things are certain, and that is one of them.
Furthermore, one could also say that as one submits to God, the ideas and desires about what to pray for subtly come into a person's thoughts through emergent properties determined by God at the quantum cell level, or gene level. Thus, when one prays the ideas and thoughts that emerge, one is, in a sense, praying God's determined will on the earth, and as a result, things begin to change in the physical realm, which then, as discussed, impacts on human experience and then changes and adjusts human nature as God would have it.
In this way, both the determinist's and the libertarian's concerns regarding God's Divine acts and humanity's free will are addressed. Although much research is still required in this most intriguing area, one thing is clear: God will bring about His will on the earth, regardless of whether humanity works with Him or not.
SUMMARY AND CONCLUSIONS
In conclusion, one might again ask the question: Does God answer prayer, and how is it accomplished? Furthermore, what is a miracle -whether that concerns healing or any other suspension or alteration of natural laws -to a scientist and to a theologian?
Firstly, the author began by looking at what providence consists of and how scholars view providence. The conclusion was that one could interpret providence in several ways, depending on what conclusion one hopes to reach.
The question of prayer and its relationship to providence was explored, to see how they link up, and whether God's plans are fixed and unchangeable. The determination is that God is able to work His plans within nature, without violating human freedom, through bottom-up and top-down causality. Although areas of emergence and supervenience were used to make a case for Divine causality, it was nevertheless stated that this is only a hypothesis and further research is no doubt still needed in this intriguing area.
It was also presented that one could envisage God who, knowing the interconnectedness of the world to the finest detail, is able to interact with the world "at a Supervenient level of totality" -Holistically -thereby bringing about particular events and patterns of events, that is, His predetermined plan. It was also presented that, for some, prayer is not a means that God uses to accomplish His plans; rather, prayer is more to change the person praying than to change God's mind. Regarding this, it was put forth that what these researchers, in the author's view, fail to recognise is that prayer is an essential element in the totality of Christian living, especially regarding intercessory prayer. Paul, writing to Timothy, states the following in 1 Timothy 2:1-2: "I urge that supplication, prayers, intercession, and thanksgiving be made for all men, for kings and all who are in high positions". Thus prayer is not simply a method God uses to change people, but also a method God uses to change circumstances to bring about His will on the earth. One could in a sense say that prayer is the causal joint that starts the process of bringing about His will on this earth, as the person praying, to a large degree, renders their will to a higher power which brings about the changes asked for. This in turn brings about the necessary changes both inwardly and outwardly.
Regarding miraculous events, some might simply see them as illusions -events that are really fabrications, coincidences, or the results of some mysterious power of the mind or an unknown law of nature, and not of any Divine activity. In other words, there are no miracles; theologically speaking, there are only unusual events. This, of course, is a hypothesis that remains to be proven. But if part of the cause of a miraculous event is Divine providence, then, to a scientist, a miracle -whether that be a supernatural causal event or a healing taking place within a person -will appear simply as an inexplicable event, a mystery that seemingly goes beyond what one can explain by natural causes.
If, on the other hand, one suggests Divine providence, miracles should then be of interest to all those who are trying to understand how God acts in the world. To the believer, then, the providence of God is not an abstract conception. It is the believer's conviction that he or she is in the hands of a wise and powerful God, who will accomplish His purposes in the world, whether or not the prayer for healing or any other need is answered.
| 9,247 | 2007-09-21T00:00:00.000 | ["Philosophy"] |
Controlling chimeras
Coupled phase oscillators model a variety of dynamical phenomena in nature and technological applications. Non-local coupling gives rise to chimera states, which are characterized by a distinct part of phase-synchronized oscillators while the remaining ones move incoherently. Here, we apply the idea of control to chimera states: using gradient dynamics to exploit the drift of a chimera, we steer it to any desired target position. Through control, chimera states become functionally relevant; for example, the controlled position of localized synchrony may encode information and perform computations. Since functional aspects are crucial in (neuro-)biology and technology, the localized synchronization of a chimera state becomes accessible for developing novel applications. Based on gradient dynamics, our control strategy applies to any suitable observable and can be generalized to arbitrary dimensions. Thus, the applicability of chimera control goes beyond chimera states in non-locally coupled systems.
Collective behavior emerges in a broad range of oscillatory systems in nature and technological applications. Examples include flashing fireflies, superconducting Josephson junctions, oscillations in neural circuits and chemical reactions, and many others [1]. Phase-coupled oscillators serve as paradigmatic models to study the dynamics of such systems [2]. Remarkably, localized synchronization -in contrast to global synchrony -may arise in non-locally coupled systems where the coupling depends on the spatial distance between two oscillators. Dynamical states consisting of locally phase-coherent and incoherent parts are called chimera states [3,4], alluding to the fire-breathing Greek mythological creature composed of incongruous parts from different animals. Chimera states are relevant in a range of systems; chimeras have been observed experimentally in mechanical, (electro-)chemical, and laser systems [5,6], and related localized activity may play a role in neural dynamics [7][8][9][10]. By definition, local synchrony is tied to a spatial position that may directly relate to function: in a neural network, for example, different neurons encode different information [11]. In non-locally coupled phase oscillator rings, the spatial position of partial synchrony not only depends strongly on the initial conditions [3], but it is also subject to pseudo-random fluctuations [12]. These fluctuations are particularly strong for persistent chimeras in networks of just a few oscillators [13], as commonly found in experimental setups. This naturally leads to the question whether it is possible to pin the coherent part of a chimera state to a (desired) spatial position.
In this article, we derive a control scheme to modulate the spatial position of a chimera state dynamically. In contrast to other recent applications of control to chimera states [13], the goal here is to control the chimera state itself by imposing a target spatial location. Our control mechanism for a ring of non-locally coupled phase oscillators exploits drift induced by breaking the symmetry of the coupling [14] to move the chimera along the ring. This control approach may not only serve to control a chimera's spatial position in an application, but it may also elucidate how position is maintained in systems that are subject to noise and in which spatial localization of synchrony plays a crucial functional role. Moreover, applications of control to dynamical states have led to intriguing applications in their own right [15]. In terms of the analogy to the Greek mythological creature: what would you be able to do if you could control a fire-breathing chimera?
Chimeras in Non-Locally Coupled Rings-Rings of non-locally coupled phase oscillators provide a well studied model in which chimera states may occur [4]. Let S := R/Z be the unit interval with endpoints identified and let T := R/2πZ denote the unit circle. Let d be a distance function on S, h : R → R be a positive function, and α ∈ T, ω ∈ R be parameters. The dynamics of the oscillator at position x ∈ S on the ring are given by

∂ϕ(x, t)/∂t = ω − ∫_S h(d(x, y)) sin(ϕ(x, t) − ϕ(y, t) + α) dy.    (1)

The coupling kernel h determines the interaction strength between two oscillators depending on their mutual distance. The system evolves on the torus S × T, where x ∈ S is the position of an oscillator on the ring and ϕ(x, t) ∈ T its phase at time t.
Chimera states are characterized by a region of local phase coherence while the rest of the oscillators rotate incoherently. Let φ ∈ Φ := {φ : S → T} denote a configuration of phases on the ring. The local order parameter Z(x, φ) is an observable which encodes the local level of synchrony of φ at x ∈ S. That is, its absolute value R(x, φ) = |Z(x, φ)| is close to zero if the oscillators are locally spread out and attains its maximum if the phases are synchronized close to x. A chimera state is a solution ϕ(x, t) of (1) which consists of locally synchronized and locally incoherent parts. The value of the local order parameter yields local properties of a chimera: it attains its maximum at the center of the phase-synchronized region and its minimum at the center of the incoherent region; cf. Figure 1 for a finite-dimensional approximation.
Chimera Control-Is it possible to dynamically move a state to a desired position by exploiting drift properties? Before considering chimera states, we concentrate on general solutions moving along the spatial direction. A solution of (1) may be seen as a one-parameter family of functions ϕ_t ∈ Φ which assign a phase to each spatial position. Let Q : S × Φ → R^n be differentiable.
Think of Q as an observable of the system which depends on the spatial position. A solution ϕ_t of (1) with initial condition ϕ_0 ∈ Φ is called Q-traveling along S if there are suitably smooth functions y(t) and q : S → R^n such that Q(x, ϕ_t) = q(x − y(t)) for all t; in particular, a solution is Q-traveling at constant speed v if y(t) = vt. Hence, the temporal evolution of a Q-traveling solution in terms of the observable Q is a shift along S.
Our control scheme is based on gradient dynamics and applies to general observables Q. Let ∂_z f(z)|_{z_0} denote the partial derivative of a function f with respect to z at z_0, let f′ denote its total derivative, and ż the temporal derivative of z(t). Let ϕ_t be a Q-traveling solution with q(x) and y(t) such that Q(x, ϕ_t) = q(x − y(t)). Fix a target x_0 ∈ S and assume that q is differentiable with all critical points being extrema. The function q(x_0 − y) is maximized in y if y is subject to the gradient dynamics ẏ = γ ∂_y q(x_0 − y) for γ > 0 (assuming that the initial condition is not a local minimum). Note that then the function Q(x, ϕ_t) will attain a maximum at x = x_0 in the limit t → ∞.
Given a suitable family of coupling kernels, there exist solutions which maximize an observable Q at a given point. Assume that for a given observable Q there is a family h_a of coupling kernels, indexed by a ∈ A, and an invertible map ν : A → R such that ϕ_t is a Q-traveling solution at constant speed v = ν(a) of (1) with coupling kernel h_a. In other words, we assume that a coupling kernel h_a yields a Q-traveling solution moving at constant speed along S, i.e., Q(x, ϕ_t) = q(x − ν(a)t). The map ν now allows us to use a as a control parameter. Equation (3) is a dynamical equation for the speed such that Q attains a maximum at a given point x_0 ∈ S for large times. Note that ẏ = v(t), and thus (3) yields a direct relationship between the traveling solution and the parameter a. More precisely, choosing a time-dependent control parameter a according to (4) yields a traveling solution whose dynamics maximize the observable Q at x_0.
To control chimeras we apply this general control scheme to the absolute value R of the local order parameter. Since it encodes the local level of synchrony, dynamics that maximize the local order parameter through R-traveling chimera solutions yield a chimera moving to a specified target position. Note that R(x, ϕ_t) = r(x) of a chimera state ϕ_t is stationary [4], so it is R-traveling at constant speed zero. Here we further assume that there is a family of coupling kernels h_a that lead to R-traveling solutions at nonzero speed ν(a). The control parameter dynamics (4) for the observable R then take the form of equation (5). Hence, choosing a time-dependent control parameter a according to (5) is equivalent to gradient dynamics that maximize the local order parameter at x_0.
Implementation in Finite Dimensional Rings-We implement chimera control in an approximation of the continuous equations (1) by a system of N phase oscillators. Suppose that ι : {1, . . ., N} → S, ι(k) = k/N, assigns a position on the ring S to the kth oscillator. The evolution of each oscillator is

ϕ̇_k = ω − (1/N) Σ_{j=1}^{N} h(d(ι(k), ι(j))) sin(ϕ_k − ϕ_j + α)    (6)

for k = 1, . . ., N. Here, d(x, y) = ((x − y + 1/2) mod 1) − 1/2 is a signed distance function on S. The local order parameter of the discretized system is defined for ϕ = (ϕ_1, . . ., ϕ_N) as

Z_d(x, ϕ) = (1/N) Σ_{j=1}^{N} h(d(x, ι(j))) exp(iϕ_j)    (7)

and its absolute value R_d(x, ϕ) encodes the local level of synchrony; cf. Figure 1.
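As a concrete illustration, the following Python sketch simulates a finite ring of the form above and evaluates the discrete local order parameter. It is not the authors' code: the exponential kernel, the Euler integrator, and all parameter values (N, ω, α, κ) are assumptions chosen for illustration only, and whether a chimera actually forms depends on parameters and initial conditions.

import numpy as np

N = 256                      # number of oscillators
omega, alpha = 0.0, 1.45     # natural frequency and phase lag (assumed values)
kappa = 4.0                  # decay rate of the assumed exponential kernel
pos = np.arange(N) / N       # iota(k) = k/N, positions on the ring S

def dist(x, y):
    """Signed distance on S: ((x - y + 1/2) mod 1) - 1/2."""
    return (x - y + 0.5) % 1.0 - 0.5

def kernel(x, a=0.0):
    """Assumed exponential coupling kernel; a breaks its symmetry."""
    return np.exp(-kappa * np.abs(x)) * (1.0 + a * np.sign(x))

def rhs(phi, a=0.0):
    """dphi_k/dt = omega - (1/N) sum_j h(d(x_k, x_j)) sin(phi_k - phi_j + alpha)."""
    D = dist(pos[:, None], pos[None, :])          # pairwise signed distances
    H = kernel(D, a)                              # coupling weights
    return omega - (H * np.sin(phi[:, None] - phi[None, :] + alpha)).mean(axis=1)

def local_order(phi, x, a=0.0):
    """Discrete local order parameter Z_d(x, phi); its modulus is R_d."""
    return np.mean(kernel(dist(x, pos), a) * np.exp(1j * phi))

# simple Euler integration from a random initial condition (illustrative only)
rng = np.random.default_rng(0)
phi = 2 * np.pi * rng.random(N)
dt = 0.05
for _ in range(2000):
    phi = (phi + dt * rhs(phi)) % (2 * np.pi)

R = np.abs([local_order(phi, x) for x in pos])    # profile of local synchrony
print("max/min local order parameter:", R.max(), R.min())

A pronounced maximum and minimum in the profile R would indicate the coherent and incoherent regions of a chimera, as in Figure 1.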
To implement the chimera control scheme (4), the assumption of a monotonic relationship ν between a system parameter and the chimera's drift speed has to be satisfied. To this end, we employ the recent observation that breaking the symmetry of the coupling kernel results in a drift of the chimera [14]. Here we consider a family of exponential coupling kernels h_a for a ∈ (−1, 1), where a determines the symmetry of the coupling kernel. There is indeed a monotonic relationship ν(a), independent of the system's dimension (not shown). For non-zero values of a the resulting drifting chimeras are, to good approximation, R-traveling with constant speed. However, these states seem to be transient and break down quickly, in particular for larger asymmetry (|a| ≳ 0.015). We discuss the implications of this breakdown on the control below but refer to a forthcoming article [14] for more details on drifting chimera states in systems with asymmetric coupling kernels.
The relationship between the asymmetry parameter a and the drift speed now allows for a straightforward implementation of the control scheme. The control rule (5) acts as feedback control through the asymmetry parameter. If the chimera is off target, the nonzero asymmetry yields a drift of the chimera towards the target according to the derivative of the local order parameter at the target position. Once the target is approached, the control subsequently reduces the asymmetry and acts as a corrective term keeping the randomly drifting chimera [12] on target. In the finite ring, define a discrete derivative at x_0 ∈ S for a given δ ∈ (0, 0.5); for small δ this approximates the spatial derivative of R_d at x_0. Since traveling chimeras are robust only for sufficiently small asymmetry of the coupling kernel, we employ a sigmoidal function to ensure an upper bound for the asymmetry. Let a_max > 0 be a suitable bound for the asymmetry parameter, λ(x) = 2(1 + exp(−x))^(−1) − 1, and K > 0 be a constant. Given a target position x_0 ∈ S, an approximation of (5) for control is given by equation (10). These dynamics will maximize the local order parameter at x_0. In other words, a chimera ϕ(t) will move along the ring until its synchronized part is centered at x_0.
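The control step itself can be sketched as follows. Since equations (9) and (10) are not reproduced explicitly in the text, the discrete derivative and the sigmoid-bounded update below are assumptions consistent with the verbal description (a symmetric difference of R_d around the target, bounded by a_max through λ); the sign convention relating this derivative to the drift direction would have to be matched to the measured ν(a).

import numpy as np

def discrete_derivative(R_of_x, x0, delta=0.05):
    """Symmetric-difference approximation of the spatial derivative of the
    local order parameter modulus at the target x0 (assumed form of eq. (9))."""
    return (R_of_x((x0 + delta) % 1.0) - R_of_x((x0 - delta) % 1.0)) / (2.0 * delta)

def control_asymmetry(R_of_x, x0, delta=0.05, K=100.0, a_max=0.015):
    """Sigmoid-bounded feedback a = a_max * lambda(K * D_delta R_d(x0, phi)),
    with lambda(x) = 2/(1 + exp(-x)) - 1 (assumed form of eq. (10))."""
    lam = 2.0 / (1.0 + np.exp(-K * discrete_derivative(R_of_x, x0, delta))) - 1.0
    return a_max * lam

# usage with the simulation sketch above: update a every Delta t = 1 time unit, e.g.
# R_of_x = lambda x: abs(local_order(phi, x))
# a = control_asymmetry(R_of_x, x0=0.25)

Bounding the update through λ keeps |a| below a_max, which reflects the observation that drifting chimeras only persist for sufficiently small asymmetry.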
Solving the dynamical equations subject to control numerically shows that the chimera adjusts to the imposed target position. Figure 2 shows a simulation for N = 256 phase oscillators with K = 100 and a time-dependent target position x_0(t). The asymmetry parameter is updated every ∆t = 1 time units according to (10). The chimera tracks the changes of the target position and adjusts to match a new control target.
Successful Control Despite Breakdown-A chimera moving along S will eventually break down if the asymmetry of the coupling kernel is too large. The system then converges to either the fully phase-synchronized state ϕ_1 = ⋯ = ϕ_N or the "splay state" with uniformly distributed phases. At the same time, a monotonic relationship ν(a) between the asymmetry parameter and the chimera's drift speed implies that the larger the asymmetry, the faster the chimera will move along the ring. Note that the absolute value of the asymmetry parameter in a system subject to chimera control is only large until the target position is reached. Thus, as long as a chimera attains its target faster than it breaks down, control is successful. Specifically, let d_max < ∞ be the distance between antipodal points on S. If chimeras drift reliably for a distance d_max, then control is always successful since any target position can be reached without a breakdown along the way.
To determine whether control is applicable despite the transient nature of traveling chimeras, we calculated the average distance a chimera travels before breaking down. Note that the quantity I(t) = ∫_0^1 R(y, ϕ_t) dy is conserved for an R-traveling solution ϕ_t. This means that the corresponding quantity I_d(t) = (1/N) Σ_{k=1}^{N} R_d(ι(k), ϕ(t)) for the discretized dynamics fluctuates around a constant value. To calculate the revolutions along the spatial dimension S before breakdown, we first sampled I_d(t) during a chimera transient without asymmetry to determine its mean Ī_d and standard deviation σ. After breaking the symmetry of the coupling, the chimera is said to lose shape at the smallest t > 0 with |I_d(t) − Ī_d| > 2.5σ. This yields a rather conservative criterion in good agreement with individual trajectories; it is typically triggered before the transient drifting passes and the trajectory enters the immediate vicinity of the asymptotic state (not shown). For a traveling chimera at asymmetry a losing shape at t_0, the revolutions until breakdown (RUB) are given by RUB(a) = t_0 ν(a).
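The breakdown criterion can be sketched in the same spirit, reusing the helpers (pos, rhs, local_order) from the simulation sketch above. Only the 2.5σ threshold is taken from the text; the sampling procedure and the placeholder for ν(a) are assumptions.

import numpy as np

def I_d(phi):
    """Spatially averaged local order parameter modulus, (1/N) sum_k R_d(iota(k), phi)."""
    return np.mean([abs(local_order(phi, x)) for x in pos])

def revolutions_until_breakdown(phi0, a, I_mean, sigma, nu_of_a, dt=0.05, t_max=2000.0):
    """Integrate the asymmetric dynamics until |I_d(t) - I_mean| > 2.5*sigma,
    then return RUB(a) = t0 * nu(a). I_mean and sigma are assumed to have been
    estimated beforehand from a symmetric-coupling (a = 0) transient; nu_of_a is
    a user-supplied map from asymmetry to the measured drift speed."""
    phi, t = np.array(phi0, dtype=float), 0.0
    while t < t_max:
        phi = (phi + dt * rhs(phi, a)) % (2 * np.pi)
        t += dt
        if abs(I_d(phi) - I_mean) > 2.5 * sigma:
            break
    return t * nu_of_a(a)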
Remarkably, even for large values of the asymmetry parameter, our chimera control scheme remains viable. With the unit distance defined above, control is successful if a chimera reliably travels a distance d_max = 1/2. Figure 3 shows that the RUBs typically exceed d_max even for large values of the control parameters. When large asymmetry is allowed, a chimera quickly reaches the target position and symmetric coupling is restored. Thus, the control scheme is robust even for large asymmetry, despite a potential chimera breakdown.
Discussion-The chimera control presented here allows the dynamical modulation of the spatial position of a chimera state in real time. Such control is relevant for implementation in experimental setups. If the coupling is computer-mediated [6], then an implementation is straightforward.
However, the applicability of our control goes beyond such computer-dependent experiments: the coupling in (8) is motivated by coupling through a common external medium [16] subjected to drift, as we detail elsewhere [14]. In such a setup, control can be realized by modulating the drift speed in the coupling medium. Hence, we anticipate our control strategy to find direct application in experimental setups.
Effectively, the control can be seen as a coupling of the dynamical equations to a function of the local order parameter. In contrast to systems with symmetric order parameter-dependent interaction [17], in chimera control the order parameter induces a time-dependent asymmetry (5) in the non-local coupling to realize directed motion [14]. As a result, the chimera drifts along a subspace defined by the symmetry of the uncontrolled system to achieve the target position. Our control is noninvasive in the sense that the control signal vanishes on average upon attaining the target position; cf. Equation (2). Note that our chimera control is not limited to the control of the spatial position of a chimera. Control may be applied if, for a suitable observable, there is a relationship between a control parameter and directed motion of a solution. Thus, we anticipate that a similar approach extends, for example, to the control of chimeras in higher dimensions.
In summary, chimera control is a robust scheme to control the spatial position of a chimera state. Remarkably, the control remains effective even if chimeras moving along the ring do not persist. It is worth noting that this breakdown differs from the spontaneous breakdown observed for chimeras in finite oscillator systems with symmetric coupling [18] because of its transient behavior. At the same time, gradient dynamics is just one approach to maximize an objective function. Here it serves as a proof of principle to show that chimera states themselves can be controlled. Applying the presented control scheme to experimental setups and studying its relevance in biological settings provide exciting directions for future research.
Figure 1. Chimera state for a ring of N = 256 oscillators with exponential coupling kernel h_0 as defined in (8). The upper panel displays the oscillator phase φ(x) on the circle S, and the lower panel the magnitude of the local order parameter, |Z_d(x, φ)|, defined in (7). The maximum indicates the center of the synchronized region, the minimum the position of the incoherent part.
Figure 2. The position of the chimera adjusts to the imposed target for the control scheme applied to N = 256 oscillators. The top panel displays the phase evolution of the chimera, with the gray shading indicating individual oscillator phases in the co-moving frame defined by the synchronized region with maximal order parameter. The black line shows the desired target position. The bottom panel displays the time evolution of the asymmetry parameter according to (10). The asymmetry parameter |a| is bounded by a_max = 0.015 and a tends to stay near zero once the target position is reached.
Figure 3. Chimera control is successful even for large asymmetry values a. Revolutions until breakdown (RUB) quantify how many times the chimera travels around the circle S before it breaks down; error bars depict mean and standard deviation across 10 runs. For every value of a, the chimera typically travels further than the minimal distance of 1/2 (dashed line) required for successful control.
| 4,146.8 | 2014-02-25T00:00:00.000 | ["Computer Science", "Engineering", "Physics"] |
Mid-Cretaceous marine Os isotope evidence for heterogeneous cause of oceanic anoxic events
During the mid-Cretaceous, the Earth experienced several environmental perturbations, including an extremely warm climate and Oceanic Anoxic Events (OAEs). Submarine volcanic episodes associated with formation of large igneous provinces (LIPs) may have triggered these perturbations. The osmium isotopic ratio (187Os/188Os) is a suitable proxy for tracing hydrothermal activity associated with the LIPs formation, but 187Os/188Os data from the mid-Cretaceous are limited to short time intervals. Here we provide a continuous high-resolution marine 187Os/188Os record covering all mid-Cretaceous OAEs. Several OAEs (OAE1a, Wezel and Fallot events, and OAE2) correspond to unradiogenic 187Os/188Os shifts, suggesting that they were triggered by massive submarine volcanic episodes. However, minor OAEs (OAE1c and OAE1d), which do not show pronounced unradiogenic 187Os/188Os shifts, were likely caused by enhanced monsoonal activity. Because the subaerial LIPs volcanic episodes and Circum-Pacific volcanism correspond to the highest temperature and pCO2 during the mid-Cretaceous, they may have caused the hot mid-Cretaceous climate.
The authors have provided an improved version of their work which takes into account the suggestions of the reviewers. I have especially found improvements in the discussion which are now more complete. I do recommend the publication.
In the current version I spotted a few minor things reported as follows: lines 165 and 167 "represent" is repeated twice.
line 240 "is the more" I do not think this is correct in english, please check line 270 "Previous studies have revealed THAT the onsets of the major Cretaceous OAEs (OAE1a, Wezel, Fallot, and OAE2) in the Tethyan region correspond to…." The authors have addressed the most significant points raised in the reviews. I note some minor points in the text that need attention and some more substantive issues. Page 21: in their discussion of Cenomanian-Turonian temperatures the authors state that no longterm evidence for hydrothermal activity is recorded and they reference The manuscript is well written. The results are well summarized. The paper is well structured. Nice story! There are only a few points that I do not find so successful. The interpretations for the different intervals are not always consistent. For example, I would like you to look at the OAE 1b in a more differentiated way. I think there is more music in your data than you express. The statement that OAE 1b is to be regarded like OAE 2 or the Aptian OAEs is too simple. Jacob seems to be below the negative excursion. Kilian is characterized by a clear volcanogenic signal. For Leenhardt and Co the data is weak. I propose to discuss this time interval in more detail. Reply Thank you for many valuable comments. As you pointed out, OAE1b is composed of many organic-rich layers having different geochemical features. Thus, we have dealt with OAE1b separately in the different paragraphs and added a more detailed explanation as following "Among the mid-Cretaceous OAEs, OAE1b is a problematic example. In the Umbria-Marche Basin OAE1b is composed of several major organic-rich horizons (Jacob, Kilian, Urbino, and Leenhardt Levels)
1-2. Comment
Your statement that submarine volcanism does not contribute to the CO2 rise in the atmosphere is brave! Is there any evidence for this -perhaps references from recent submarine volcanism or modeling data? One should not let this statement stand alone and
undiscussed. Reply
Thank you for the comments. Volcanic events under deep-sea conditions are totally different from subaerial eruptions, and the outgassing of volatiles is suppressed by the hydrostatic pressure. We added recent articles explaining the difference between subaerial and submarine volcanism and added some explanation on this point as "When a basaltic plateau was emplaced under submarine conditions, outgassing from submarine volcanism and the expansion of the volatile to shallower waters could have been suppressed by high hydrostatic pressure59, and, thus, they may not have contributed to the long-term increase in the pCO2." (L. 363-366).
1-3. Comment
L.1: "This title is misleading a bit. Why not mentioning the first long term Os-Isotope record"
Reply
Thank you for the suggestion. We modified the title of this manuscript as "Mid-Cretaceous marine Os isotope stratigraphy: evolutional history of hydrothermal activity" (L. 1)
1-4. Comment
L. 140-142: "I am not sure this makes a lot of sense. Of course you may have a high percentage of CaCO3 based soleley on the presence of authigenic carbonates. Better delete as the sentence doesnt contribute to your story anyway" Reply L.146. Thank you for the suggestion. We removed this part.
1-5. Comment
L. 248: "Looks as if the Os-isotopes decrease happens after Niveau Jacob, right? I've never been a big fan of grouping the different horizons together one OAE 1b (and than differentiating between OAE 1a and Livello Wezel). Mayb you should try to tell a more differentiated story fpr the diffenet black shale levels based on your data -now.
Reply
As we explained above, we have discussed OAE1b in the different sections (L. 342-350).
1-6 Comment
L. 307-311: "I am not an expert of submarine volcanism and its influence on pCO2, but here I would like to see some quotes that confirm this theory!" Reply L. 317. Thank you for the comment. We added a detailed explanation and a reference of a recent article explaining the difference between the submarine and subaerial eruptions supporting our hypothesis as "When a basaltic plateau was emplaced under submarine conditions, outgassing from submarine volcanism and the expansion of the volatile to shallower waters could have been suppressed by high hydrostatic pressure 59 , and, thus, they may not have contributed to the long-term increase in the pCO2." (L. 363-366).
Reviewer #2
The manuscript by Matsumoto and co-authors provides the first compilation of the 187Os/188Os(i) record across the Late Barremian-Late Cenomanian time interval by combining many published data and new data collected for the late Albian-early Cenomanian.
The dataset is very good and represents an important reference for this time interval. The work has potential for publication. Please find below my general comments and some minor comments/suggestions in the pdf attached.
2-1. Comment
The main issue I see is that the authors can improve the discussion with respect to what we already know from other works about the paleoceanographic conditions that occurred during this time interval. The discussion chapter lacks a deeper discussion with respect to the literature, especially regarding the ocean-atmosphere system evolution across the mid-Cretaceous and the studied OAEs.
Reply
Thank you for the comment. We added discussions on the detailed atmospheric and oceanographic conditions during the OAEs and the mid-Cretaceous and added some references as: (L. 270-292) "Previous studies have revealed that the onsets of the major Cretaceous OAEs (OAE1a, Wezel, Fallot, and OAE2) in the Tethyan region correspond to unradiogenic Os isotopic shifts 13,16,19-21 , which … Therefore, the eruption style (e.g., submarine or subaerial) and its duration could have potentially influenced not only the temperature variations but also the diversity of calcareous plankton during the mid-Cretaceous.".
2-2. Comment
lines 248-257: this is not something new. Moreover, literature data do not support a "productivity" model to explain all the OAEs mentioned by the authors. I suggest revising this part taking into account what is known from the literature and, if possible, also trying to go a bit further with the interpretation since now the authors have a nice long-term curve of the Os ratio variations.
Reply
As you have indicated, the references do not fully support the claim that these OAEs are productivity OAEs. Thus, we modified this sentence to avoid a misleading interpretation, as follows (L. 270-273): "Previous studies have revealed that the onsets of the major Cretaceous OAEs (OAE1a, Wezel, Fallot, and OAE2) in the Tethyan region correspond to unradiogenic Os isotopic shifts 13,16,[19][20][21] , which is consistent with synchronicity between massive submarine volcanism and OAE.".
In addition, we added a more precise discussion of the Os behavior, its implications for the emission of volatile elements, and its relation to the onset of OAEs as: "During these OAEs, unradiogenic Os shifts are often accompanied by negative carbon isotopic excursions [19][20][21]29 , implying that the volcanic events supply mantle-derived CO2 with negative carbon isotopic values. Besides, a 2 to 16 times increase in the input of mantle-derived Os is required to explain these unradiogenic Os isotopic shifts. Considering that Os could have been supplied in a highly volatile oxidized form (OsO4), enormous amounts of other volatile trace metal elements could have also been injected into the ocean-atmosphere system during the most prominent unradiogenic Os isotopic shifts in these OAEs (OAE1a, Wezel and Fallot events, and OAE2). This possibility supports the linkage between biolimiting trace metal input and the high productivity 52 ." (L. 273-282).
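To make the order of magnitude of such a statement concrete, a simple two-endmember flux-weighted mixing sketch is given below in Python. This is not the mass-balance model used in the manuscript: the endmember 187Os/188Os values, the steady-state assumption, and the neglect of differences in Os concentration between the endmembers are illustrative assumptions only.

R_MANTLE = 0.13      # unradiogenic (mantle/hydrothermal) endmember, assumed
R_CRUST = 1.4        # radiogenic (riverine/continental) endmember, assumed

def seawater_ratio(f_mantle):
    """187Os/188Os of seawater for a given fraction of mantle-derived Os input
    (simple flux-weighted mixing, ignoring concentration differences)."""
    return f_mantle * R_MANTLE + (1.0 - f_mantle) * R_CRUST

def mantle_flux_increase(r_before, r_after):
    """Factor by which the mantle Os flux must rise (crustal flux held fixed)
    to shift the steady-state seawater ratio from r_before to r_after."""
    f_before = (R_CRUST - r_before) / (R_CRUST - R_MANTLE)
    f_after = (R_CRUST - r_after) / (R_CRUST - R_MANTLE)
    # convert mantle fractions to mantle/crust flux ratios before comparing
    return (f_after / (1.0 - f_after)) / (f_before / (1.0 - f_before))

# e.g. an illustrative shift from ~0.7 to ~0.2 in 187Os/188Os(i)
print(mantle_flux_increase(0.7, 0.2))   # roughly an order-of-magnitude increase

With these assumed endmembers, an unradiogenic shift of that size implies roughly a tenfold increase in mantle-derived Os flux, broadly consistent with the 2 to 16 times range quoted above.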
2-3. Comment
lines 264-274: The new dataset confirms what has already been said in some published works about "OAE 1c" and OAE 1d, which were not induced by volcanic activity but rather by other imposed paleoclimatic and paleoecological conditions. I suggest revising this part underlining this aspect and citing the papers that already said that on the basis of other proxies. Please be also careful that, following some works, OAE 1d black shales were not marked by high productivity (see my comments in the text).
Reply
Thank you for the comment. We added the explanation of the previous studies and mentioned the productivity during OAE1d as follows (L. 295-321): "Therefore, we consider that the onsets of OAE1c and OAE1d were unrelated to massive submarine volcanism, unlike other major mid-Cretaceous OAEs. Mercury anomaly has been reported just below the OAE1d horizon at the Youxia section, the eastern Tethys, which has been interpreted as the submarine volcanic eruption at the Kerguelen Plateau 53 . … (Fig. 3) 8,22 . During Quaternary, astronomically modulated monsoonal activity cyclically enhanced the hydrology of the Mediterranean Sea at low latitude, which supplied freshwater and nutrients to the peri-continental ocean. The resulting input of terrigenous organic matter, stratification, and slightly enhanced productivity led to oxygen-depleted bottom-water conditions and the deposition of organic-rich sediments dominated by the terrigenous origin [54][55] . Thus, the lack of the unradiogenic Os isotopic shift and the cyclic deposition of thin black shale layers during OAE1c and OAE1d may suggest a regional-scale weak marine anoxia caused by monsoonal activity modulated by astronomical cycles as proposed by previous studies 55 rather than an episodic large volcanic event 53 . The increase in the primary productivity was not significant at the Tethyan region during OAE1d 55 . However, a small positive carbon isotopic excursion during OAE1d suggests a slight increase in the primary production (Fig. 2). In addition, organic-rich sediments are reported from the Calera Limestone in California, which was deposited in the Pacific Ocean, and thus the oxygen-depleted condition could have prevailed in the East Pacific as well 14 . Thus, the latter process can also cause a supraregional increase in productivity to some extent."
2-4. Comment
Which time scale did you use? This is critical with respect to the position of the LIP ages. Please specify and eventually discuss that in the text.
Reply
2-13. Comment
L. 272-273: "be careful as Bornemann et al 2005 and Gambacorta et al 2020, for example, do not find high productivity during OAE 1d." Reply Thank you for the comments. As you have pointed out, there are few data suggesting the high productivity during OAE1d and1c. However, in case of OAE1d, the possibility of slightly enhanced productivity cannot be ruled out because it is accompanied by the slight positive carbon isotopic excursion and organic-rich sediments have been discovered in the Pacific region. Therefore, we modified "high productivity" to "slightly enhanced productivity" (L. 313) and added more detailed explanation on this point as (L. 298-324) "Therefore, we consider that the onsets of OAE1c and OAE1d were unrelated to massive submarine volcanism, unlike other major mid-Cretaceous OAEs. Mercury anomaly has been reported just below the OAE1d horizon at the Youxia section, the eastern Tethys, which has been interpreted as the submarine volcanic eruption at the Kerguelen Plateau 53 .
However, considering the lack of Os isotopic perturbations around OAE1d, this mercury enrichment is probably more related to local perturbation with limited influence on global climate. Major Cretaceous OAEs (OAE1a, Wezel, Fallot, and OAE2) are represented by thick organic-rich intervals, whereas the sedimentary expression of OAE1c and OAE1d in the Umbria-Marche Basin consist of cyclic alternations of thin black shales 8 . Similar cyclic intercalations of thin black shale layers in a carbonate sequence have been observed in the Valanginian-Barremian, Albian, and upper Cenomanian in the Umbria-
Marche Basin (Fig. 3) 8,22 . During Quaternary, astronomically modulated monsoonal activity cyclically enhanced the hydrology of the Mediterranean Sea at low latitude, which supplied freshwater and nutrients to the peri-continental ocean. The resulting input of terrigenous organic matter, stratification, and slightly enhanced productivity led to oxygen-depleted bottom-water conditions and the deposition of organic-rich sediments dominated by the terrigenous origin [54][55] . Thus, the lack of the unradiogenic Os isotopic shift and the cyclic deposition of thin black shale layers during OAE1c and OAE1d may suggest a regional-scale weak marine anoxia caused by monsoonal activity modulated by astronomical cycles as proposed by previous studies 55 rather than an episodic large volcanic event 53 . The increase in the primary productivity was not significant at the Tethyan region during OAE1d 55 . However, a small positive carbon isotopic excursion during OAE1d suggests a slight increase in the primary production (Fig. 2). In addition, organic-rich sediments are reported from the Calera Limestone in California, which was deposited in the Pacific Ocean, and thus the oxygen-depleted condition could have prevailed in the East Pacific as well 14 . Thus, the latter process can also cause a supraregional increase in productivity to some extent.". Besides, we added important references which seem to contradict our arguments and explanations on the points. For example, OAE1d was widespread and organic-rich sediments have been reported in the Pacific region. Thus, we added explanations on this point as (L. 318-324) "The increase in the primary productivity was not significant at the Tethyan region during OAE1d 55 . However, a small positive carbon isotopic excursion during OAE1d suggests a slight increase in the primary production (Fig. 2). In addition, organic-rich sediments are reported from the Calera Limestone in California, which was deposited in the Pacific Ocean, and thus the oxygen-depleted condition could have prevailed in the East Pacific as well 14 . Thus, the latter process can also cause a supraregional increase in productivity to some extent.".
2-14. Comment
We have also added some explanations on the sulfur isotopic fluctuations during OAE2 because they seem concordant with our hypothesis (L. 262-268): "δ34S_barite data around OAE2 are scarce, but δ34S of pyrite and carbonate-associated sulfates (CAS) around OAE2 51 has been intensively investigated instead. δ34S_CAS and δ34S_pyrite showed a positive excursion (2-4‰) across OAE2, suggesting an enhanced sulfate reduction 51 . Considering the global oceanic anoxia and the short duration of the unradiogenic Os isotopic shift during OAE2 (~600 kyr), the effect of the sulfate reduction could have overwhelmed the effect of the volcanic sulfur input.".
Finally, we mentioned the paper discussing the volcanic events during OAE1d based on mercury as following (L. 298-304) "Therefore, we consider that the onsets of OAE1c and OAE1d were unrelated to massive submarine volcanism, unlike other major mid-Cretaceous OAEs. Mercury anomaly has been reported just below the OAE1d horizon at the Youxia section, the eastern Tethys, which has been interpreted as the submarine volcanic eruption at the Kerguelen Plateau 53 . However, considering the lack of Os isotopic perturbations around OAE1d, this mercury enrichment is probably more related to local perturbation with limited influence on global climate.".
3-4. Comment
Line 57: Robinson et al. recognize an OAE1d black shale in the Calera Limestone of California, which derives from the paleo-Pacific (Geological Society of America Bulletin, 120, pp.1416-1426). Hence, the reach of this event may be greater than just Tethys/Atlantic. The Pacific also has a record of OAE 1a and OAE 2 black shales that reflect major events. How does this Calera Limestone occurrence affect the interpretation of OAE 1d as monsoon-related? Note also the new mercury evidence bearing on OAE 1d: Yao, H., Chen, X., Yin, R., Grasby, S.E., Weissert, H., Gu, X. and Wang, C., 2021.
Mercury evidence of intense volcanism preceded oceanic anoxic event 1d. Geophysical Research Letters, 48, p.e2020GL091508. Some reconsideration is warranted here, even if the absence of an Os-isotope signal may be telling us something.
Reply
Thank you for the constructive comments. We have added an explanation of the Pacific record of OAE1d in the Introduction as (L. 57 to 59) "Additionally, other minor OAEs (e.g., OAE1b, OAE1c, and OAE1d), which have been reported mainly from the Tethys and Atlantic Oceans 9,11-13 and a part of the Pacific region 14 , are regarded as regional to supraregional marine anoxic events." and in the Discussion section as (L. 318-324) "The increase in the primary productivity was not significant in the Tethyan region during OAE1d 55 .
However, a small positive carbon isotopic excursion during OAE1d suggests a slight increase in the primary production (Fig. 2). In addition, organic-rich sediments are reported from the Calera Limestone in California, which was deposited in the Pacific Ocean, and thus the oxygen-depleted condition could have prevailed in the East Pacific as well 14 . Thus, the latter process can also cause a supra-regional increase in productivity to some extent.".
Besides, we added a discussion of the mercury anomaly in the main text. Considering that the studied site was located near the Kerguelen Plateau, this mercury anomaly could represent small local volcanic events at the Kerguelen Plateau. We added the explanation as follows (L. 299-304): "A mercury anomaly has been reported just below the OAE1d horizon at the Youxia section, the eastern Tethys, which has been interpreted as reflecting submarine volcanic eruption at the Kerguelen Plateau 53 . However, considering the lack of Os isotopic perturbations around OAE1d, this mercury enrichment is probably more related to a local perturbation with limited influence on global climate.".
3-5. Comment
Line 104: I agree that considering OAE 1b as a cluster of events is the most useful approach. The Kilian is now taken to mark the Aptian-Albian boundary (GSSP: see Kennedy et al., Episodes, vol. 40).
Reply
Thank you for the comments. We considered adding an explanation and a citation for the Aptian-Albian boundary at this point. However, a description of the AAB itself does not seem essential to our paper, and because the number of references is limited, we decided not to cite the reference here.
3-6. Comment
Line 107: it is not clear to me what 'partially records OAE 1c' means. How is (or was) OAE 1c defined? Maybe go back to Arthur, M.A., et al., 1990. Stratigraphy, geochemistry, and paleoceanography of organic carbon-rich Cretaceous sequences. In Cretaceous resources, events and rhythms (pp. 75-119), who suggested the OAE nomenclature.
Reply
We modified the explanation and reference of Amadeus segment and OAE1c in the main text as (L.111-114) "A peculiar ~2-m-thick interval in the upper Albian, called the Amadeus segment 24 , is located in the middle part of OAE1c that spans almost the entire Biticinella breggiensis planktonic foraminiferal Zone 8,25 .".
3-7. Comment
Line 132: Reference 9 does not discuss carbon-isotope data. Refer to the paper of Gambacorta et al. (Newsletters on Stratigraphy, 48) for carbon-isotope patterns in the Upper Cretaceous of Marche-Umbria.
Reply
L. 138: We added the reference here.
3-8. Comment
Line 184: I could not find a reference to modelling of weathering of Ontong Java in Tejada et al., although they suggest various types of basalt-seawater interaction to produce the Os-isotope excursion. Presumably warm- to low-temperature submarine weathering of basalt could be part of this.
Reply
Thank you for the comment. As you have indicated, hydrothermal activity includes weathering under submarine conditions, though the magnitude of its effect is unclear at present. We modified the discussion as (L. 174-179) "Since these unradiogenic Os isotopic shifts correspond to the radiometric ages of the Ontong Java, Manihiki, and Hikurangi Plateaus, which once formed a single large oceanic plateau called "Ontong Java Nui" (OJN) (Fig. 3a, e), these shifts were likely triggered by a massive input of mantle-derived unradiogenic Os through hydrothermal activity and warm- and low-temperature submarine weathering at OJN 13,20,21,34 .".
Reply
Line 194: We modified the text here to "during the early Albian".
3-12. Comment
Line 217: in what way is the geochemical behaviour of Os different from that of Sr in this context?
Reply
When Os is oxidized, it becomes volatile and mobile. This effect is a critical factor in determining the behavior of Os during LIP eruptions. We added the explanation in the main text as (L. 222) "and the volatile feature of the highly oxidized form of OsO4.".
3-13. Comment
Line 228: presumably you exclude the possibility that the positive sulphur-isotope excursion was due to increased pyrite burial in black shales? As illustrated in Fig. 3c, most of the increase in S-isotope values took place in the Cenomanian, post-dating OAE 1d; in so doing, S isotopes rise in parallel with established carbon-isotope curves (e.g. Jarvis et al. for the English Chalk), likely signifying gradually increasing global marine carbon burial in the run-up to OAE 2. The alternative to the idea of a decrease in hydrothermal input of sulfur, namely that the data reflect synchronous increased burial of isotopically light carbon and sulphur, needs to be discussed.
Reply
Thank you for the comment. We checked the carbon isotopic excursions during the Cenomanian-Turonian in the English Chalk (Jarvis et al., 2006, Geological Magazine).
The onset of the positive carbon isotopic excursion occurs after the positive sulfur isotopic excursion in the English Chalk. Since the residence time of sulfur is much longer than that of carbon, the observation that the carbon isotopic excursion postdates the sulfur isotopic excursion is hard to explain. Besides, in the Umbria-Marche Basin, abundant organic-rich sediments are observed during the Albian and the latest Cenomanian after the mid-Cenomanian event, which does not match the timing of the sulfur isotopic excursion. Therefore, we consider that the cessation of volcanic sulfur input is the most likely trigger of the positive sulfur isotopic excursion, as suggested in Laakso et al. (2020). The time lag between the Os and sulfur isotopic records derives from the large difference in their residence times.
Because of the extremely long residence time of sulfur compared to Os, the sulfur isotopic ratio may have postdated the Os isotopic curve. Also, sulfur isotopic data for the Albian are scarce, and more detailed data will be required for further discussion of this point. We have explained and discussed this in the main text as follows (L. 235-244): "The positive excursion of δ34Sbarite can also be explained by an increase in sulfate reduction during the early Cenomanian. However, considering that organic-rich sediments are more pronounced during the Albian than the Cenomanian in the Umbria-Marche Basin, sulfate reduction should also have been more significant during the Albian than the Cenomanian.
Therefore, we consider that the decrease in volcanic sulfur input is a more important factor for the positive δ34S excursion than sulfate reduction. The positive excursion of δ34Sbarite during the Cenomanian postdates the cessation of the Os isotopic fluctuations (Fig. 3a, c). Since the residence time of sulfur in the ocean is longer than that of Os, the onset of the changes in δ34Sbarite could have been more gradual and possibly postdated the radiogenic Os isotopic shift.".
3-14. Comment
Line 244: Sulfur-isotope evolution around OAE 2 (data from the English Chalk and the | 5,453 | 2022-01-11T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Long-lived coherence in driven spin systems: from two to infinite spatial dimensions
Long-lived coherences, emerging under periodic pulse driving in disordered ensembles of strongly interacting spins, offer immense advantages for future quantum technologies, but the physical origin and the key properties of this phenomenon remain poorly understood. We theoretically investigate this effect in ensembles of different dimensionality, and predict the existence of the long-lived coherences in two-dimensional and infinite-dimensional (where every spin is coupled to all others) systems, which are of particular importance for quantum sensing and quantum information processing. We explore the transition from two to infinite dimensions, and show that the long-time coherence dynamics in all dimensionalities is qualitatively similar, although the short-time behavior is drastically different, exhibiting a dimensionality-dependent singularity. Our study establishes the common physical origin of the long-lived coherences in different dimensionalities, and suggests that this effect is a generic feature of strongly coupled spin systems with positional disorder. Our results lay the foundation for utilizing the long-lived coherences in a range of applications, from quantum sensing with two-dimensional spin ensembles to quantum information processing with infinite-dimensional spin systems in cavity-QED settings.
I. INTRODUCTION
Collective quantum coherences of many-spin systems play a central role in quantum science and technology. But quantum coherence is fragile, and extending its lifetime is a critical problem. For instance, in spin ensembles the collective coherence (collective transverse polarization) is destroyed by dipolar interactions [1][2][3][4], and the spin echo signal, which quantifies coherence, quickly decays on the time scale T_2 [5]. Coherence can be preserved e.g. via pulse and/or continuous-wave decoupling that suppresses dipolar interactions [1][2][3]. Recently, an intriguing alternative has attracted much attention: it exploits, rather than fights, the spin-spin interactions. Namely, the unusual many-spin states, which are formed in ensembles of dipolar-coupled spins under periodic driving by π-pulses, exhibit collective spin coherences living up to 10^5 times longer than the T_2^* and about 10^4 times longer than the T_2 time [6][7][8][9]. This phenomenon, along with other similar effects, has been observed in various solid-state nuclear magnetic resonance (NMR) experiments [6,7,[9][10][11][12][13], but still remains poorly understood. The long-lived coherences emerge from the combination of strong dipolar coupling, disorder, and pulse imperfections (understood broadly as deviations of the real π-pulses from perfect instantaneous 180° rotations) [6,12]. Besides, the long-lived coherences demonstrate subharmonic response, i.e. asymmetry in the magnitudes of even and odd echoes [6,9,12]. Both effects, the long coherence lifetime and its subharmonic response, are remarkably stable against perturbations and decoherence. The long-lived coherences could be of great benefit for new quantum technology platforms. For instance, promising platforms for quantum sensing utilize two-dimensional systems (d = 2, see Fig. 1), such as surface spins or 2D layers of NV spins [14][15][16][17], which can be brought close to the system being sensed, thus improving resolution and sensitivity. Employing ensembles of spins boosts the total signal, and thus greatly improves the signal-to-noise ratio, but the collective coherence decays quickly due to dipolar coupling between the spins. Increasing the lifetime of collective coherences would be of enormous benefit for quantum sensing. On the opposite end (d → ∞) are the spin ensembles in cavity QED-type settings, actively explored for quantum information applications [18,19], where each spin is coupled to all others with a similar strength via a collective photonic, phononic, or magnonic mode [20][21][22][23][24][25][26]. Taking full advantage of the long-lived coherences could increase the signal-to-noise ratio in these systems by orders of magnitude. In order to achieve that, a detailed understanding of the long-lived coherences in systems of different spatial dimensionality d is required. So far, even the existence of the long-lived coherences at d = 2 or d → ∞ has remained elusive, and their properties have been unknown.
In this article we predict that the long-lived coherences do exist in these important systems, thus opening the way to employing them in novel quantum information platforms. In order to analyze in detail the transition from d = 2 to d → ∞, and to clarify generic features of the long-lived coherence dynamics, we numerically simulate the dynamics of disordered dipolar-coupled quantum spin ensembles of different dimensionalities, subject to periodic driving by imperfect π-pulses, with the rotation angle slightly deviating from 180°. For all dimensionalities d studied, we observe the long-lived coherences and see emerging subharmonic response when the time interval τ between the pulses increases to become comparable to T_2, see Fig. 1b. Our simulations show that the magnitude of the long-lived coherence decreases at larger d, but still remains quite large even at d → ∞. By analyzing the Floquet operator, we establish the kinematic origin of the long-lived coherences, determine that it is similar for all dimensionalities, and identify the states that are involved in their formation.

FIG. 1 (caption fragment): ... two-dimensional, three-dimensional, and (effectively) infinite-dimensional systems. The latter system, with all-to-all interactions of similar strength, is realized e.g. by coupling the spins to a detuned collective photon/phonon/magnon mode in a cQED-like setting. (b) Schematic representation of the periodic pulse sequence. Imperfect π-pulses (P) are applied at times (2n + 1)τ (n ≥ 0), so that a series of echoes (E) is formed at times 2nτ. Each pulse rotates the spins about the x-axis by an angle π(1 + ε), where ε is the pulse imperfection parameter.
Some aspects of the long-lived coherence resemble the time crystal dynamics in periodically driven spin systems [27][28][29][30][31][32][33][34][35], with their characteristic robustness [36][37][38]. Time crystals have been observed in many spin systems, from trapped ions [39] to spin ensembles in diamond [40], including the kind of NMR systems that exhibits the long-lived coherences [35,41,42]. However, the physical origins of the two phenomena are different: the coherences are determined by the transverse magnetization, while the time crystal dynamics refers to the behavior of the longitudinal polarization. They are governed by different processes, and their lifetimes can differ by many orders of magnitude [1,2]; e.g., in the case of perfect pulses (instantaneous 180° rotations) the time crystal oscillations have the longest lifetime and largest amplitude, while the long-lived coherences completely vanish [6,9,12]. The relation between the long-lived coherence and the time crystal dynamics is poorly explored, in spite of its fundamental interest. Besides, the studies of time crystals so far have mostly focused on d = 1 [35,37,38,[43][44][45][46][47][48][49][50] and d = 3 systems [39][40][41][42]51], as well as on systems with d → ∞ [21,36,40,[50][51][52]. For these reasons, we also explore here the time crystal dynamics of the longitudinal spin polarization for different dimensionalities. We find that it also demonstrates time crystal-like oscillations with a very long lifetime for all d, but the physical features are markedly different from those of the long-lived coherences. A more detailed exploration of this issue in the future would be of utmost interest, but is beyond the scope of the present paper.
Dimensionality plays a key role in the dynamics of dipolar-coupled, positionally disordered spin systems. Such systems exhibit characteristic d-dependent dynamical singularities: the spin dynamics at short times is strongly singular for d = 2, while for d → ∞ the singularity disappears [2,[53][54][55][56]. Besides, dimensionality is well known to be decisive in the context of localization and thermalization dynamics in spin ensembles [28,38,[57][58][59]. Since so many key features of the spin dynamics depend on d, one would expect that the properties of long-lived coherences would also strongly depend on d. Surprisingly, our results demonstrate that this expectation is incorrect.
II. QUALITATIVE DISCUSSION OF THE EFFECT
We study an ensemble of N_s spins S_i = 1/2 (i = 1 . . . N_s) in a standard setting of magnetic-resonance-type experiments. Namely, the spin system is placed in a strong quantizing magnetic field H_Q directed along the z-axis [1]; this field induces fast spin precession with Larmor frequency ω_Q, which is much larger than all other frequency scales of the problem [60]. Following the standard theory of magnetic resonance, we describe the spin dynamics in the coordinate frame that rotates around the z-axis with the circular frequency ω_Q, and retain only the secular terms in the system's Hamiltonian, which remain static in the rotating frame [61], or vary slowly in comparison with ω_Q [1,2].
Initially, by applying a preparatory π/2 pulse, the spins are prepared in a state weakly polarized along the x-axis of the rotating frame, such that the initial ensemble density matrix is ρ(t = 0) ∝ I − µM_x, where I is the identity matrix, µ ≪ 1 is a parameter determining the absolute polarization of the ensemble, and M_x = Σ_j S_jx is the collective coherence operator. Here and below, S_jα with α = {x, y, z} denotes the component of the j-th spin along the rotating-frame axis α. In experiments, the ensemble coherences along the x- and y-directions are quantified by the total transverse magnetizations M_x and M_y along the corresponding axes, so we use the terms "coherence" and "transverse magnetization" interchangeably. The longitudinal polarization, exhibiting time crystal-like behavior (see Sec. V), is quantified by the magnetization M_z = Σ_j S_jz along the z-axis. Note that in this work we vary only the spatial dimensionality d of the ensemble, while the spins themselves remain embedded in three dimensions, i.e. have three orthogonal components.
In typical experiments, the spins experience random quasi-static local magnetic fields, described by the Hamiltonian H_L = Σ_{j=1}^{N_s} h_j S_jz; everywhere in this article we set ℏ = 1 and normalize the spins' gyromagnetic ratio to γ = 1.
[62] These fields cause fast dephasing: the x-component of each spin S_jx oscillates at its own rate proportional to h_j, and the collective coherence M_x(t) = Tr[ρ(t)M_x] vanishes at the timescale T_2^*. This is usually too short for practical needs, and dephasing is suppressed by applying a number of hard π pulses, which reverse the sign of the Hamiltonian H_L (an ideal, i.e. instantaneous, 180° hard pulse along the x-axis performs the rotation S_iz → −S_iz, S_iy → −S_iy for all spins at once). A pulse applied at t = τ restores the collective coherence, producing a Hahn echo signal at t = 2τ [1,2] (see Fig. 1b).
However, the hard π pulses do not affect the spin-spin interaction that destroys the ensemble coherence by entangling different spins [1][2][3][4]. In relevant experiments, the dominant interaction is the dipolar coupling, described by the secular Hamiltonian [1,2]

H_I = Σ_{i<j} J_ij [ S_iz S_jz − (1/4)(S_i^+ S_j^− + S_i^− S_j^+) ],   (1)

where J_ij ∝ (1 − 3cos²θ_ij)/r_ij³ is the coupling constant between the spins i and j, r_ij = |r_ij| is the distance between them, and θ_ij is the polar angle of r_ij. Under the influence of H_I, the Hahn echo signal gradually decays as a function of time t = 2τ at the timescale t ∼ T_2 ≫ T_2^*. Note that the dipolar interaction is long-ranged, so that each spin is coupled to all other spins for all systems considered here, for all values of d from 2 to ∞, and the coupling strength J_ij decays with the distance r_ij between the spins i and j in the same way for all d. However, the statistical properties of the values J_ij greatly vary with the system's dimensionality, thus leading to dramatically different decay of the Hahn echo signal.
The rate of the Hahn echo decay is governed by the positional disorder of the spins. In relevant experiments [9-11, 14-19, 40], the spins are very dilute: they occupy a small fraction of the lattice sites, and are randomly distributed in the sample. As a result, each spin S_i has its own set of dipolar couplings J_ij to the other spins, such that different spins "feel" different environments made of other spins. In low-dimensional ensembles these spin-to-spin variations are very strong [53][54][55], leading to a very fast decay of the collective coherence. In Appendix A we show that, in the limit of T_2 ≫ T_2^*, when the flip-flop terms can be omitted and the Hamiltonian (1) acquires the form Σ J_ij S_iz S_jz, the Hahn echo signal M_x(t) decays with time as

M_x(t) ∝ exp(−|t/T_2|^{d/3})   (2)

for d ≤ 5, such that for d = 2 the initial decay is infinitely fast. For larger d the fluctuations are not as strong, and the singularity of M_x(t) at t = 0 is weak for d = 4 and 5. In the limit d → ∞ each spin is coupled to all others with almost uniform coupling, so the echo decay acquires the Gaussian form M_x(t) ∝ exp[−(t/T_2)²], without any singularity at all. The total Hamiltonian of the system, taking into account the pulse driving, is H(t) = H_I + H_L + H_P(t), where H_P(t) describes periodic driving by a train of (generally imperfect) π-pulses, as shown in Fig. 1b. If the pulses were ideal, they would suppress the dephasing term H_L (see Appendix A for details), and leave the dipolar interaction H_I intact. Thus, the echo signal M_x(t) would not depend on the number of pulses applied during the time t, and the echo decay would follow Eq. (2).
Our results show that for non-ideal pulses this is true at short times: the dynamics drastically depends on d, with the initial decay rate varying from infinity at d = 2 to zero at d → ∞. At long times, for spin ensembles of all dimensionalities d, the spin dynamics is controlled by accumulation of the pulse imperfections. This process depends on the specific pulse sequence [6,7,9]; here we focus on the simple and efficient Carr-Purcell-Meiboom-Gill (CPMG) protocol shown in Fig. 1b. Accumulation of the pulse errors, combined with the dipolar spin-spin coupling, leads to long-lived tails in the echo signal, extending far past the T_2 time, in the region where the Hahn echo has already vanished. The magnitude of the long-lived coherence tails is particularly large when the inter-pulse time delay τ is short compared to T_2, as seen in Fig. 2a for d = 2. The effect is remarkably robust, and does not disappear even when τ becomes comparable to T_2. In that regime, another feature emerges for all values of d: the long-lived coherences exhibit pronounced subharmonic response (Fig. 2b), with even echoes being larger than odd ones. This behavior reflects the fact that the period of the evolution operator for the CPMG protocol is twice the period of the Hamiltonian H_P(t) of the CPMG pulse train [3] (see Sec. III).
We use direct numerical solution of the many-spin time-dependent Schrödinger equation with the second-order Suzuki-Trotter [63] or Chebyshev [64] expansion of the evolution operator U(t) for up to N_s = 24 spins.
In all situations that we tested, both methods give consistent results. The initial mixed-state density matrix is represented by a random pure state, i.e. as a random unit-norm complex-valued vector of length 2^{N_s}, uniformly sampled from the unit sphere in the corresponding Hilbert space. We calculate the experimentally measured normalized magnetization response for initial polarization along the axis α. The spins are randomly placed in a d-dimensional cube, and averaging is performed over 100-360 realizations of the spatial arrangements and values of local fields. The cube's edge length (i.e. the spin density f_s) is adjusted to have T_2 = 1 for each system after averaging (see Appendix A for the relation between T_2 and f_s). T_2 is defined as the time when the echo magnitude decreases by the factor e. In experiments the pulse imperfections may arise from deliberate or accidental miscalibration, or also due to local fields and dipolar interactions, which affect the spin rotation during the finite-duration pulses [6,9,12]. For hard short pulses, imperfections are small, and in this article we model nonideal pulses as instantaneous rotations around the x-axis by an angle π(1 + ε) with ε ≪ 1.
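To make the procedure concrete, the following Python sketch simulates a small CPMG sequence with imperfect π-pulses for a handful of dipolar-coupled spins-1/2 in d = 2. It is not the authors' code: it uses exact matrix exponentials instead of the Suzuki-Trotter or Chebyshev expansions, and the system size, field strength, spin density, and pulse spacing are illustrative placeholders.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
Ns = 6                                   # number of spins (kept small for exact propagation)
dim = 2**Ns

# single-spin operators
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)

def embed(op, j):
    # place a single-spin operator on site j of the Ns-spin product space
    out = np.array([[1.0 + 0j]])
    for k in range(Ns):
        out = np.kron(out, op if k == j else np.eye(2))
    return out

Sx = [embed(sx, j) for j in range(Ns)]
Sy = [embed(sy, j) for j in range(Ns)]
Sz = [embed(sz, j) for j in range(Ns)]
Mx = sum(Sx)

# random positions in a 2D square (d = 2); quantizing field taken normal to the plane,
# so theta_ij = pi/2 and the angular factor (1 - 3 cos^2 theta) is 1 for every pair
pos = rng.uniform(0, 3.0, size=(Ns, 2))
HI = np.zeros((dim, dim), dtype=complex)
for i in range(Ns):
    for j in range(i + 1, Ns):
        Jij = 1.0 / np.linalg.norm(pos[i] - pos[j])**3
        HI += Jij * (Sz[i] @ Sz[j] - 0.5 * (Sx[i] @ Sx[j] + Sy[i] @ Sy[j]))

# random quasi-static local fields, strong enough that T2* << T2
HL = sum(h * Sz[j] for j, h in enumerate(rng.normal(0, 20.0, Ns)))
H = HI + HL

tau, eps, n_periods = 0.05, 0.07, 200
U_tau = expm(-1j * H * tau)                    # free evolution over tau
P = expm(-1j * np.pi * (1 + eps) * Mx)         # imperfect pi pulse about the x-axis

# random pure state as a typicality-style stand-in for rho(0) ~ I - mu*Mx
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
mu = 0.05
psi = (np.eye(dim) - mu * Mx) @ psi
psi /= np.linalg.norm(psi)

echoes = []
for n in range(n_periods):
    psi = U_tau @ (P @ (U_tau @ psi))          # tau - pulse - tau = one CPMG period
    echoes.append(np.real(np.vdot(psi, Mx @ psi)))   # <Mx> at the echo time 2*(n+1)*tau

print(echoes[:5], echoes[-5:])

A single disorder realization of a six-spin cluster will of course be noisy; the paper's curves correspond to averaging such traces over many realizations of positions and local fields.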
III. DYNAMICS OF THE LONG-LIVED COHERENCES
The behavior of M_x(t) in pulse-driven spin ensembles, as described above, is shown in Fig. 2 for d = 2, and in Fig. 3 for the other dimensionalities studied. We consider two different values of the inter-pulse delay τ: short τ ≈ 0.07 T_2, and long τ ≈ 0.7 T_2 (recall that T_2 = 1). At short times t ≲ T_2, the system's response M_x(t) closely follows the Hahn echo, and is in excellent agreement with our analytical predictions (see Appendix A for details), as seen in the figures (the small differences between the simulated Hahn echo decay and the analytical predictions are due to finite-size effects). At longer times t ≳ T_2, the magnetization response M_x(t) exhibits long-time tails for all dimensionalities d considered, and for both short and long τ. These long-time tails extend far beyond the T_2 time, and in experiments are likely to be limited by the spin-lattice relaxation time T_1. With increasing d, the overall amplitude of the long-time echoes becomes somewhat smaller. Still, even in the limit d → ∞, the long-time tails do not vanish but saturate at a nonzero value. Also, the amplitude of the long-time tails becomes smaller as the inter-pulse delay τ increases.
When the inter-pulse delay τ is large, comparable to T_2, the long-lived coherences demonstrate pronounced subharmonic dynamics (Fig. 2b), where the period of the magnetization response, 4τ, is twice the period of driving, 2τ. The subharmonic response was observed for all values of d we modeled. This feature can be rationalized with the notion of the cycling period of a pulse sequence [3]: for the control Hamiltonian H_P(t), the cycling period t_c is defined by the condition of periodicity of the evolution operator.
It is known that the cycling period of a pulse sequence can differ from the period of the Hamiltonian [3]; e.g. for ideal π-pulses, the cycling period of the CPMG protocol is t_c = 4τ, and includes two π-pulses [67], i.e. contains two periods of the underlying control Hamiltonian H_P(t). We also note that the even-odd asymmetry is present for short τ, but its amplitude is too small to be easily seen.
It is important to point out that the long-lived coherences and the subharmonic response crucially depend on the driving sequence. For instance, for an alternating-phase Carr-Purcell (APCP) sequence, where the direction of driving alternates between the positive and the negative x-direction, no long-lived tails of M x (t) emerge [6].
IV. FLOQUET OPERATOR ANALYSIS OF THE LONG-LIVED COHERENCES
Periodically driven systems, described by a Hamiltonian obeying H(t + 2τ) = H(t), can be analyzed using Floquet theory. The stroboscopic time evolution of the system's density matrix, considered only at times that are integer multiples m of the driving period 2τ, can formally be written as ρ(2τm) = U_F^m ρ(0) (U_F^†)^m, where the Floquet operator is U_F = U_H(τ) P_x U_H(τ); here P_x is the operator of rotation around the x-axis by an angle π(1 + ε), and U_H(τ) is the operator of evolution under the action of the system's internal Hamiltonian H_I + H_L.
As a unitary operator, U_F possesses a complete orthonormal set of eigenstates |ψ_k⟩, with complex eigenvalues e^{iφ_k} of unit modulus: U_F |ψ_k⟩ = e^{iφ_k} |ψ_k⟩.

FIG. 4 (caption): For short τ (panel a), a large number of large entries concentrate on the diagonal φ_j ≈ φ_k, producing long-lived coherence with noticeable amplitude. For long τ (panel b), the values at the semi-diagonals |φ_j − φ_k| ≈ π become comparable to the values on the diagonal, which leads to noticeable subharmonic response. Overall, the values for long τ are smaller than those for short τ. The system size is N_s = 14. Only values larger than 0.25 are included in the figures.
so the magnetization response after m driving periods is

M_x(2τm) ∝ Σ_{j,k} |⟨ψ_j| M_x |ψ_k⟩|² e^{i(φ_j − φ_k)m}.   (6)

The signal M_x(2τm) is therefore mainly determined by two quantities: firstly, by the magnitude of the matrix elements M_x^{jk} = |⟨ψ_j|M_x|ψ_k⟩|², and, secondly, by the distribution P(φ_j − φ_k) of the quasienergy differences φ_j − φ_k, i.e. by the number of Floquet eigenstates |ψ_j⟩ and |ψ_k⟩ with a given difference in the quasienergies φ_j and φ_k. The terms with j = k in Eq. (6) do not depend on m, so the long-time response M_x(2τm) is governed by the terms with φ_j ≈ φ_k. The matrix M_x^{jk} is shown in Fig. 4 for short τ (a) and long τ (b) for a two-dimensional system, for one typical realization of the positional disorder. The matrix M_x^{jk} for short τ is dominated by large diagonal elements, whereas the off-diagonal elements are almost negligible. This corresponds to the pronounced long-lived coherences in Fig. 2a and Fig. 3, with an almost time-independent amplitude. It is clearly seen that the long-lived coherence contains comparable contributions from a large number of Floquet eigenstates, rather than being confined to a small subset of some special states.

FIG. 6 (caption fragment): ... long τ (b). The sharp peaks at ∆φ = 0 and at ∆φ = π correspond to the long-lived coherences and to the subharmonic response, respectively. Specifically, P(∆φ) is the total number of the Floquet eigenstates |ψ_j⟩ and |ψ_k⟩ having a given difference ∆φ in their quasienergies φ_j and φ_k. The quantity ∆φ is downfolded to the interval ∆φ ∈ [0, π] and binned, so that P(∆φ) includes all states whose quasienergies satisfy the condition |φ_j − φ_k| ∈ [∆φ − β, ∆φ + β] or 2π − |φ_j − φ_k| ∈ [∆φ − β, ∆φ + β], where 2β = π·10⁻⁵ is the width of a bin. To avoid double counting, only the states with φ_j ≥ φ_k are included in P(∆φ). The results shown are averaged over many realizations of the disorder. The inset in panel (b) shows a magnified view of the peak at ∆φ ≈ π. The simulation parameters are the same as in Fig. 2.
For long τ, the matrix M_x^{jk} exhibits large entries both on the diagonal and on the two semi-diagonal lines corresponding to φ_j − φ_k ≈ ±π; the latter correspond to the emerging even-odd echo asymmetry seen in Fig. 2b. The semi-diagonals also show comparable contributions from a large number of pairs of Floquet eigenstates. In comparison with the case of short τ, the diagonal values M_x^{jj} are on average smaller for long τ, corresponding to the smaller amplitude of the long-time tails in Fig. 2b.
The matrices M_x^{jk} for other d exhibit a similar structure. As an example, Fig. 5 shows the results for M_x^{jk} in the case of d = 5. The results of the diagonalization of the Floquet operator for different d indicate that the physical origin of the effect of the long-lived coherences is similar for all dimensionalities.
The other quantity determining the signal M_x(2τm) in Eq. (6) is the distribution P(∆φ) of the quasienergy differences |φ_j − φ_k|. In order to take into account the symmetries of the summands in Eq. (6), we downfold the quantity ∆φ to the interval [0, π], i.e. we take ∆φ = |φ_j − φ_k| when |φ_j − φ_k| ≤ π, and ∆φ = 2π − |φ_j − φ_k| when |φ_j − φ_k| > π (so that ∆φ is the smallest angular distance between φ_j and φ_k on the S¹ circle). The binned distribution P(∆φ) for d = 2 is shown in Fig. 6; the bin width is 2β = π·10⁻⁵. A peak at ∆φ ≈ π clearly emerges for long τ. This means that the number of quasienergy pairs with |φ_j − φ_k| ≈ π becomes larger for long τ. Hence, the subharmonic response emerges not only because the values of M_x^{jk} on the semi-diagonals become larger, but also because the total number of non-zero entries on the semi-diagonals increases.
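The Floquet analysis of this section can be sketched in a few lines of Python. This is a schematic reconstruction rather than the authors' code: the system size, coupling, local-field strength, and binning threshold are illustrative, and the operator construction mirrors the earlier CPMG sketch.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
Ns = 5
sx = np.array([[0, .5], [.5, 0]], dtype=complex)
sy = np.array([[0, -.5j], [.5j, 0]], dtype=complex)
sz = np.array([[.5, 0], [0, -.5]], dtype=complex)

def embed(op, j):
    out = np.array([[1.0 + 0j]])
    for k in range(Ns):
        out = np.kron(out, op if k == j else np.eye(2))
    return out

Sx = [embed(sx, j) for j in range(Ns)]
Sy = [embed(sy, j) for j in range(Ns)]
Sz = [embed(sz, j) for j in range(Ns)]
Mx = sum(Sx)

# internal Hamiltonian: random local fields plus 2D dipolar couplings
pos = rng.uniform(0, 3.0, size=(Ns, 2))
H = sum(h * Sz[j] for j, h in enumerate(rng.normal(0, 20.0, Ns)))
for i in range(Ns):
    for j in range(i + 1, Ns):
        Jij = 1.0 / np.linalg.norm(pos[i] - pos[j])**3
        H = H + Jij * (Sz[i] @ Sz[j] - 0.5 * (Sx[i] @ Sx[j] + Sy[i] @ Sy[j]))

tau, eps = 0.5, 0.07
U_tau = expm(-1j * H * tau)
P = expm(-1j * np.pi * (1 + eps) * Mx)
U_F = U_tau @ P @ U_tau                      # Floquet operator for one CPMG period 2*tau

# quasienergies and eigenstates (U_F is unitary, so eigenvalues lie on the unit circle;
# the eigenvectors returned by eig are numerically orthonormal up to near-degeneracies)
evals, evecs = np.linalg.eig(U_F)
phi = np.angle(evals)

# matrix elements M_x^{jk} = |<psi_j| Mx |psi_k>|^2
Mjk = np.abs(evecs.conj().T @ Mx @ evecs) ** 2

# quasienergy differences, downfolded to [0, pi]
dphi = np.abs(phi[:, None] - phi[None, :])
dphi = np.minimum(dphi, 2 * np.pi - dphi)

# weight near dphi ~ 0 feeds the constant long-lived tail; weight near dphi ~ pi
# feeds the subharmonic (period-4*tau) component of M_x(2*tau*m)
print(Mjk[dphi < 0.05].sum(), Mjk[np.abs(dphi - np.pi) < 0.05].sum())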
V. LONGITUDINAL MAGNETIZATION AND THE INFINITE-TEMPERATURE TIME CRYSTAL DYNAMICS
Let us now focus on the dynamics of the longitudinal polarization M z (t), which under some circumstances can exhibit robust long-lived infinite-temperature time crystal-like dynamics [27].
Long-lived coherences and time crystal dynamics share a number of similarities: both are induced by strong spin-spin interactions, robust to experimental imperfections, and demonstrate subharmonic dynamics under appropriate conditions. However, the underlying physics is totally different. Since the system's internal Hamiltonian H_I + H_L conserves the total z-magnetization, the signal M_z(t) remains constant between the pulses. If the π-pulses were ideal, then M_z(t) would just switch between +1 and −1. In contrast, the coherence M_x(t) would quickly vanish under the action of the dipolar spin-spin coupling, along with the Hahn echo, at the timescale T_2, without any long-lived tail.
For non-ideal pulses, one would expect the longitudinal polarization to decay with increasing number of applied pulses. Indeed, for ε > 0, after an imperfect pulse the absolute value of M_z(t) would decrease by a factor cos(πε), while the y-component increases by sin(πε). Accordingly, if the y-component has irreversibly dephased to zero during the time 2τ between pulses, then the absolute value of M_z(t) would be expected to decay monotonically as cos^m(πε) with increasing number m of applied pulses [35,41].
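For orientation, a minimal back-of-the-envelope version of this expectation (assuming complete transverse dephasing between pulses and a small imperfection ε) reads

|M_z(2τm)| = |M_z(0)| cos^m(πε) ≈ |M_z(0)| exp(−π²ε² m / 2) for πε ≪ 1,

so each imperfect pulse removes roughly 1 − cos(πε) ≈ π²ε²/2 of the remaining longitudinal polarization, about 2.4% per pulse for ε = 0.07.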
The actual behavior of the longitudinal magnetization response M_z(t) in a d = 2 disordered spin system with N_s = 20 is shown in Fig. 7. The initial decay roughly follows the expected cos^m(πε) pattern for both short and long τ (panels (a) and (b), respectively); the additional modulation seen in panel (a) is likely due to incomplete dephasing of the y-component between the pulses at short τ.
FIG. 7 (caption): Each π-pulse flips the z-magnetization, so that the subharmonic response with the period 4τ is the dominating component of the system's long-time response. Without dipolar coupling, the accumulated pulse error would modulate the magnetization along the z-axis, and M_z(t) would decay as cos^m(πε) after m pulses (dashed black line). However, in the presence of the dipolar coupling, M_z(t) exhibits long-time tails, alternating between the directions "up" and "down" after each π-pulse. The system size is N_s = 20, ε = 0.07, and T_2^* ≈ 0.02.

At later times, however, the absolute magnitude of M_z(t) stops decaying and develops long-time tails that remain locked to the
driving sequence. This long-lived subharmonic response is seen for all dimensionalities we studied: the results for another example, a d = 8 spin ensemble, are presented in Fig. 8, and are very similar, except that the amplitude of the long-time tail is somewhat smaller than in the d = 2 case. This conclusion is also consistent with the experimental evidence reported for d = 3 spin systems [40][41][42]. Note that the initial decay of M_z(t) does not exhibit the singularities present in the short-time dynamics of M_x(t), because it is governed by a different physical process, namely by accumulation of the rotation errors. Likewise, the long-time behavior of the longitudinal M_z(t) and the transverse M_x(t) polarization is also qualitatively different; for instance, the two responses depend differently on the inter-pulse delay. We have also analyzed the stroboscopic evolution of M_z(t) by diagonalizing the Floquet operator; an example of the corresponding matrix M_z^{jk} = |⟨ψ_j|M_z|ψ_k⟩|² is shown in Fig. 9 for one typical realization of the positional disorder. As anticipated, it exhibits nonzero values only at the semi-diagonals φ_j − φ_k ≈ ±π, in accordance with the subharmonic response described above. The diagonal elements are negligible. Similar to the case of long-lived coherences, we see again that many states, distributed all over the Hilbert space, contribute to the effect.
For the initial polarization along the y-axis, no long-lived magnetization response M_y(t) is seen in any dimensionality d, and the elements of the matrix M_y^{jk} = |⟨ψ_j|M_y|ψ_k⟩|² are also generally small, without any clear structure.
VI. FINAL REMARKS AND CONCLUSIONS
Before concluding, let us make some final remarks. a) Time crystal dynamics and the long-lived coherences exhibit striking similarity: both are induced by spin-spin interactions and demonstrate subharmonic dynamics under appropriate conditions. At the same time, these effects arise in very different regimes, demonstrate very different dynamics, and are differently affected by the pulse imperfections. Understanding the connection between time crystals (in particular, infinite-temperature time crystals [35]) and long-lived coherences is an interesting and important, yet rather unexplored problem.
b) The presence of long-time tails along the z-axis, which is perpendicular to the pulse driving field, implies that the fundamental process for establishing long-time tails of the magnetization response may not be spin locking [1,2], as might appear by analogy with other similar effects [10,11,68]. c) Our simulations show that even if the direction of the driving during the π-pulses is chosen randomly for each spin, the long-lived coherences emerge and persist for long times, as long as the direction of the driving remains constant in time. This observation suggests that the origin of the long-lived coherences is primarily kinematic; it may arguably also involve many-body localization [28,31], a transient prethermal regime [32,69,70], or spin-glass-like behaviour.
d) The results shown in the present article were obtained for a fixed pulse imperfection ε = 0.07. Our numerical simulations with other values of ε demonstrate that qualitatively the same behavior occurs for other choices of ε ≪ 1. The long-time spin coherences persist even when ε is chosen randomly for different spins, as long as it remains constant in time. e) We considered in this article spin systems of finite size, with the total number of spins N_s ≈ 20. Our simulations demonstrate very modest quantitative changes as the system size is varied from N_s = 10 to N_s = 24. Besides, our numerical results for d = 3 agree with the reported experimental results for three-dimensional systems [6,9,40,41]. Summarizing, we have investigated disordered dipolar-coupled quantum many-spin systems of different spatial dimensionality subjected to periodic driving (CPMG protocol). Depending on the dimensionality, such systems exhibit singularities in the short-time spin dynamics, which are caused by statistical fluctuations in the dipolar spin-spin interaction strength. For all dimensionalities, we observed long-lived spin polarization along the direction of the driving pulses, and along the axis conserved by the internal Hamiltonian (z-axis). The amplitude of the long-lived magnetization depends on the inter-pulse time delay and on the dimensionality. The Floquet operator analysis shows that the long-lived coherences M_x(t) contain comparable contributions from a large number of Floquet eigenstates. Our results imply that the long-lived coherences and subharmonic response are generic features of dipolar-coupled disordered spin systems, including two-, three-, and infinite-dimensional systems that are particularly relevant for practical applications. Developing specific protocols for such applications is an exciting avenue for further research.
In some experiments [40,41,51] the role of the strong quantizing field is played by strong continuous-wave Rabi driving applied at the frequency ω_Q. In this case, the dynamics is to be considered in a doubly rotating frame, where the effective quantization axis is directed along the Rabi driving in the primary rotating frame, and the role of the principal Larmor frequency ω_Q is played by the Rabi frequency ω_R, see e.g. Refs. 1 and 2 for details. The doubly rotating frame in the theory of magnetic resonance is analogous to the dressed-state basis in the quantum optics context. [65] For d = 2 the spins are located on a plane, and the z-axis (direction of the quantizing field) can be aligned at different angles with respect to this plane. If the z-axis is normal to the plane, all spin pairs have θ_ij = π/2, while for the in-plane alignment this angle varies between 0 and π. However, the resulting changes in the Hahn echo are very small: for a fixed areal spin density f_s, the values of T_2 differ by only 4%. The long-lived coherences also do not exhibit any qualitative differences between these two cases.
... so that the integral in Eq. (A2) is well defined at large r, where cos[b(θ)t/r³] → 1.
The singularity of the spin dynamics is determined by the competition of two effects: the dipolar coupling constants J_ij decrease with increasing r, but the number of spins S_j which interact with the given spin S_i grows with r as the volume element dv = r^{d−1} dr dA (where dA is the surface element of the (d − 1)-dimensional hypersphere of unit radius). For d < 6, this growth is sufficiently slow, so the integration over r can be extended to infinity, yielding the above-mentioned result M_x^H(t) = exp(−|t/T_2|^{d/3}) (see Eq. (2) of the main text). The integral over the hypersphere is a numerical factor of order one, and the quantity (A6), which comes from the integration over r, is also of order one, such that T_2 is determined just by the spin density f_s. By renormalizing the spin density, we can set T_2 = 1 as explained above.
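A schematic version of the disorder average behind these statements (a reconstruction of the argument under the assumptions stated in the appendix: dilute spins of density f_s, flip-flop terms omitted, couplings of the form b(θ)/r³, with numerical prefactors left loose) is

⟨M_x^H(t)⟩ = exp{ −f_s ∫ [1 − cos(b(θ) t / r³)] r^{d−1} dr dA }.

The substitution z = b(θ)|t|/r³ turns the radial integral into a t-independent factor times |t|^{d/3}:

⟨M_x^H(t)⟩ = exp(−c_d f_s |t|^{d/3}), c_d = (1/3) ∫ dA |b(θ)|^{d/3} ∫_0^∞ (1 − cos z) z^{−d/3−1} dz,

where the z-integral converges only for 0 < d < 6, i.e. for d ≤ 5. Identifying c_d f_s = T_2^{−d/3} (so T_2 ∝ f_s^{−3/d}) recovers Eq. (2) of the main text and shows why T_2 is set by the spin density alone.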
For d = 2, the singularity of the Hahn echo M_x^H(t) = exp(−|t/T_2|^{2/3}) is strong: the initial echo decay is infinitely fast due to strong fluctuations in the positions of the spins at small distances r_ij. Although the typical distance between spins is of the order of f_s^{−1/2}, a large fraction of spins has many neighbors at much smaller distances. Correspondingly, while the typical dipolar coupling is of the order of one (recall that we normalize f_s to yield T_2 = 1), many realizations of the disorder produce very large dipolar couplings J_ij.
Of course, in real crystals, at extremely short times the initial decay rate is finite, because the distance between spins is limited by the crystal lattice constant, which in turn limits the maximal dipolar interaction strength. However, the corresponding times are extremely small, orders of magnitude smaller than T * 2 , and are irrelevant for the phenomena considered here.
For d = 3, such fluctuations in J_ij are less strong, and the singularity is weaker: M_x^H(t) = exp(−|t/T_2|) has a cusp at t = 0, but the initial rate of decay is finite. Still, for both d = 2 and 3, the total spectral power of the resonance line is (formally) infinite. For d = 4 and 5, the fluctuations are less pronounced, the total spectral power of the resonance is finite, and the singularity in M_x^H(t) is weak.
For d ≥ 6, the integration over r cannot be extended to infinity: the integral Eq. (A6) diverges at small z (which corresponds to r → ∞). This divergence means that the number of the spins S_j coupled to the given spin S_i grows too fast with increasing r. The contribution from the surface of the sample becomes important, dominating at larger d. The form of M_x^H(t) then depends on the sample shape and size, but in the limit d → ∞ it again acquires a universal, shape-independent Gaussian form. For any regular-shaped sample in the limit d → ∞ all spins are located near the surface, and each spin pair is separated by almost the same distance, producing all-to-all interactions with a uniform coupling constant J_ij → J. The comparison of the full model with the reduced model, in which the flip-flop terms are omitted (cf. Eq. (A2)), demonstrates almost identical short-time behavior and long-lived coherences, as shown in Fig. 11a. However, in the case d → ∞ shown in Fig. 11b, the calculated signals for both full and reduced models coincide only at short times, and exhibit a quantitative difference later. Although both models clearly demonstrate long-lived coherences, the echo amplitude is approximately twice higher for the reduced model as compared with the full model. A possible reason for this discrepancy is in the geometry of the dipolar couplings. The flip-flop process between two spins is important if the dipolar coupling between the two spins is comparable to, or larger than, the difference in the respective local fields. In the situation considered here, with T_2 ≫ T_2^*, this case can only occur if the local fields of two spins are accidentally similar. In d = 2 systems, these two spins must be close to each other to ensure non-negligible dipolar coupling. In contrast, in d → ∞ systems, these two spins can be at any distance from each other because the dipolar interaction is homogeneous, coupling each spin to all other spins. Thus, the probability for two spins to have accidentally similar local fields, and to undergo a flip-flop process, is much larger in d → ∞ systems than in d = 2 systems.
In this article we use the full model in our simulations, with the exception of the results for the single-pulse Hahn echo shown in Figs. 2 and 3 of the main text.
The weighted distribution Σ(∆φ) is defined analogously to P(∆φ), but with each pair of Floquet eigenstates entering with a weighting coefficient equal to |⟨ψ_j|M_x|ψ_k⟩|²; this corresponds to summation along the diagonals of Figs. 4 and 5. The two δ-functions in the formula reflect the fact that the quantity ∆φ is downfolded to the interval [0, π], i.e. we take ∆φ = |φ_j − φ_k| when |φ_j − φ_k| ≤ π and ∆φ = 2π − |φ_j − φ_k| when |φ_j − φ_k| > π; this downfolding reflects the symmetries of the summands in Eq. (6). Note that the quantity Σ(∆φ) is proportional to the cosine Fourier transform of the CPMG signal M_x(2τm).
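Based on this description, the weighted distribution presumably takes the form (a reconstruction; the overall constant is fixed only by the normalization stated below)

Σ(∆φ) ∝ Σ_{j ≥ k} |⟨ψ_j| M_x |ψ_k⟩|² [ δ(∆φ − |φ_j − φ_k|) + δ(∆φ − 2π + |φ_j − φ_k|) ], with ∫_0^π Σ(∆φ) d(∆φ) = 1,

so that a peak of Σ(∆φ) at ∆φ = 0 corresponds to the constant long-lived tail of M_x(2τm), and a peak at ∆φ = π to its subharmonic (period-4τ) component.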
To avoid double counting, only the states with φ_j ≥ φ_k are included in Σ(∆φ). The normalization is chosen such that ∫ Σ(∆φ) d(∆φ) = 1. All other simulation parameters are the same as in Fig. 2. | 8,943.2 | 2019-11-14T00:00:00.000 | [
"Physics"
] |
Research on Sentiment Analysis Model of Short Text Based on Deep Learning
With the wide application of the Internet and the rapid development of network technology, microblogs and online shopping platforms are playing an increasingly important role in people's daily life, learning, and communication. The length of these information texts is usually relatively short, and the grammatical structure is not standardized, but they contain rich emotional tendencies of users. The features used by conventional machine learning methods are too sparse in the vector space model and lack the semantic information of short texts, so they cannot identify the semantic features and potential emotional features of short texts well. In response to the above problems, this paper proposes an emotion-aware multichannel bidirectional long short-term memory network model, combining the attention mechanism and convolutional neural network features in deep learning, and learns the short text by combining shallow learning and deep learning. The semantic information and potential emotional information of the short text can thus be better captured, promoting the effective expression of short-text emotional features and improving the short-text emotional classification effect. Finally, this paper compares the above models on multidomain classification data sets such as NLPIR and NLPCC2014. The accuracy and F1 value of the model proposed in this paper achieve good improvement in the field of short-text sentiment analysis.
Introduction
These days, when people want to check the latest current affairs, online shopping, news gossip, and financial stocks, they are no longer limited to reading newspapers or sitting in front of the TV to watch hot topics, but have more ways to participate in the discussion of hot topics, such as Weibo, Taobao, Douyin, Zhihu, WeChat public platforms, and other media. As Internet platforms, Weibo, Taobao, and Douyin, on the one hand, realize information sharing and dissemination by virtue of user relationships, attracting a large number of individuals to participate, and are favored by people; on the other hand, the large number of posts published by users provides a huge amount of data for text mining. Sun et al. found that Weibo information can reflect changes in people's attention to hot spots [1], and the current emotional condition of users can even be inferred from Weibo user information. In addition, Weibo sentiment analysis also provides reference opinions for some industries, such as stock trading decisions [2], movie box office predictions, and election predictions [3,4]. The literature provides reference opinions for consumers purchasing products by mining online shopping platform product review information and establishing review sentiment analysis models [5][6][7], offers guidance for merchants to adjust production plans and improve products, and also helps online shopping platforms provide users with a more efficient quality of service. With the vigorous development of online social media and the rise of artificial intelligence, more and more experts, scholars, and scientific research institutions are now turning their attention to this kind of analysis.
Research Status of Text Sentiment Analysis.
Text sentiment analysis is an indispensable link in natural language processing. In the past, a large proportion of experts and scholars carried out research based on sentiment dictionaries. Li and Hong reviewed sentiment analysis methods, respectively [8,9].
(1) The method based on sentiment lexicons (vocabularies of words expressing emotions and emotional tendencies) mainly judges the sentiment orientation of a text. It requires manually constructing such a vocabulary, or expanding a sentiment dictionary using internal statistics such as Mutual Information (MI) and Symmetric Conditional Probability (SCP), external statistics (Branch Entropy and Accessor Variety), and other methods. In text orientation analysis, well-known sentiment dictionaries are HowNet, WordNet [10], and ConceptNet [11]. (2) The feature-based method uses statistical knowledge to screen features from a large quantity of corpora, uses the features to represent the entire text, and then applies relatively traditional machine learning algorithms to classify the text. This method depends heavily on feature engineering and feature selection, whose results directly affect the classification effect. For a long time, features have occupied an important position in text classification, yet the classification accuracy has not been greatly improved, and the high feature dimensionality has indirectly led to the problem of overfitting, which is called the Hughes effect [12][13][14]. In reality, training on a large number of features requires enough samples, and obtaining enough samples requires time and labor. (3) In methods based on deep learning, features such as words, sentences, and chapters can be mapped to high-dimensional spaces to learn deeper feature representations in text data. Wang added a self-attention mechanism after the output layer of the LSTM network [15], so that attention weights over the context information of the LSTM output units are acquired automatically. The experimental results show that the attention mechanism can recognize the emotional information in the text [16,17]. The model first inputs the word vectors into a bidirectional LSTM network to learn the textual content and emotional information, and then uses the self-attention mechanism to extract emotional representations of monolingual and bilingual texts, respectively. Based on the above work, in sentiment analysis each word in the text has a different impact on the overall emotional orientation of the text, especially emotional words, which can often directly reflect the emotional orientation of the text; through the attention mechanism, the importance of words can be obtained, and the potential representation information in the text can be learned.
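As an illustration of this idea, the following minimal NumPy sketch (not the cited authors' implementation; the sequence length, hidden size, and weights are placeholders that would normally be learned) computes soft attention weights over a sequence of BiLSTM hidden states and pools them into a single sentence representation:

import numpy as np

rng = np.random.default_rng(0)
T, d_hidden = 12, 8                        # sequence length, per-direction hidden size
H = rng.normal(size=(T, 2 * d_hidden))     # BiLSTM outputs: forward and backward states concatenated

# additive (Bahdanau-style) self-attention parameters -- placeholders, normally learned
W = rng.normal(scale=0.1, size=(2 * d_hidden, 2 * d_hidden))
b = np.zeros(2 * d_hidden)
u = rng.normal(scale=0.1, size=(2 * d_hidden,))   # context vector

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.tanh(H @ W + b) @ u            # one relevance score per time step (word)
alpha = softmax(scores)                    # attention weights, summing to 1
sentence_vec = alpha @ H                   # weighted sum of hidden states -> sentence representation

print(alpha.round(3))                      # words with larger weights contribute more to the emotion signal
print(sentence_vec.shape)                  # (16,) -- fed to the final classification layer

In a trained model, emotional words typically receive the largest weights alpha, which is exactly the behavior the attention-based studies cited above rely on.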
Research Status of Short-Text Sentiment Analysis. (1)
In the method based on the sentiment dictionary, Xiao constructed a sentiment dictionary by analyzing the emotional parts of speech and the domain words in particular topic domains (the World Cup, the iPhone, and NBA games) in the context of microblogs [18] and proposed a sentiment-lexicon-based sentiment analysis strategy. Chen improved the mutual information algorithm [19] and extracted emotional words in microblogs based on a Chinese microblog sentiment dictionary constructed in the light of mutual information.
(2) In methods based on machine learning, Xie et al. combined a sentiment dictionary, context features, and topic features [20] and proposed an SVM-based sentiment classification method. Li and Ji extracted features such as words, negative words, and special symbols to construct an SVM model and a CRF model to perform sentiment analysis on microblog data [21], and concluded that the appropriate model should be chosen under different circumstances. (3) In methods based on deep learning, Zhou integrated part-of-speech features and word embedding features in research on the sentiment classification of product reviews [7]. The experimental results are better than those of the traditional text convolutional neural network. Chen proposed a microblog sentiment analysis model using the part-of-speech features of emotional words to learn more hidden information [22]. The experiments verified that the proposed model is robust to different data.
Because short texts are relatively short, they suffer from a lack of textual semantics, which brings challenges to short-text sentiment analysis. Although existing short-text sentiment analysis methods have done some feature extraction, feature selection, and model selection, many works still do not fully consider the context of short texts or deeply dissect semantic features. Some new words may not be recognized in the word segmentation stage. In order to address the shortcomings of existing methods in feature selection, this paper extracts shallow learning features from short texts, such as emotional part of speech, location information, and dependencies of words, as well as deep learning features such as word vector features, convolutional neural network features, and emotional attention features. These learned features enrich the textual feature representation of short texts.
In recent years, deep learning has also been widely used in sentiment analysis of short texts. Its concept comes from research on artificial neural networks. Its purpose is to explain the feature information existing in the data by imitating the thinking structure and learning mechanism of the human brain and to build a neural network for machine analysis and learning. Although the nonlinear network structure makes up for the shortcomings of traditional algorithms, it also has the following two shortcomings: (1) In deep learning, the model requires a large amount of training data. When the scale of the model is large enough, the connections between short-text data can be fully captured. However, in most cases, when facing the problem of short-text sentiment orientation classification and other classification problems, sufficient training data cannot be found, a lot of data must be manually annotated, and the model is difficult to optimize, which is common in the industry. (2) Due to the "black box" nature of the features learned by the deep learning network model, it is difficult to fundamentally find out where the learned features come from, and it is also difficult to explain the specific meaning of the learned features.
Relevant Theoretical Basis and Technical Introduction
In recent years, the analysis and mining of time series data have been gradually applied to many fields. Natural language processing is a typical application [23][24][25][26]. In the research of many scholars in the scope of natural language processing, the text sentiment analysis task has two important steps. The first is to convert the text information into coded information that can be recognized by the computer, and the second is to analyze the sentiment tendency of the text. Since the words in the text cannot be directly fed into the network model, the first task is to convert the text into a digital representation. A word vector maps the words in the text into a digital vector representation. According to the different encoding methods, word vectors are mainly divided into discrete word vectors and distributed word vectors.
Text Representation.
Natural language is a complex system that expresses a given intention and thought. It is generally composed of words and punctuation marks. One, two, or more characters are combined into a word, and several words are connected to form a sentence; after continuous combination, sentences form paragraphs and chapters. Unlike humans, machines cannot directly understand the emotional information in language but need to obtain the corresponding information by establishing certain rules or models [27]. The earliest representation is one-hot encoding, in which only one bit is 1 and the rest are 0. Under the one-hot rule, the words after word segmentation are discretized and mapped to the Euclidean space as row vectors, but in large-scale data sets, the vocabulary of a data set may reach tens of thousands or even hundreds of thousands of dimensions, and such vectors will undoubtedly bring huge memory consumption. At the same time, the large-scale vocabulary makes the constructed word vector matrix too sparse, which brings great inconvenience to feature learning. In view of the fact that one-hot encoding suffers from problems such as the dimensionality disaster, poor word similarity, and poor model generalization ability in natural language modeling, Google proposed the word2vec model in 2013, also known as the word embedding model. Each position in the model-trained word vector is a real number, and the general value range is between −10 and 10. Taking "I appreciate one country, two systems" as an example, the 200-dimensional word vector representation is set as shown in Table 1.
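The contrast between the two representations can be sketched as follows. This is a toy example rather than the paper's pipeline: the tiny segmented corpus and the hyperparameters are illustrative, and gensim ≥ 4.0 is assumed for the vector_size argument.

import numpy as np
from gensim.models import Word2Vec

# toy segmented corpus (each sentence is a list of words after word segmentation)
corpus = [["I", "appreciate", "one", "country", "two", "systems"],
          ["short", "text", "sentiment", "analysis"],
          ["deep", "learning", "for", "short", "text"]]

# --- one-hot: dimension equals the vocabulary size, exactly one position is 1 ---
vocab = sorted({w for sent in corpus for w in sent})
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}
print(len(vocab), one_hot["short"])        # sparse, grows with the vocabulary, no notion of similarity

# --- word2vec: dense low-dimensional embeddings trained by skip-gram ---
model = Word2Vec(corpus, vector_size=200, window=5, min_count=1, sg=1, epochs=50)
vec = model.wv["short"]                    # 200-dimensional real-valued vector
print(vec.shape, float(vec.min()), float(vec.max()))
print(model.wv.similarity("short", "text"))  # cosine similarity is meaningful for embeddings

With such a tiny corpus the learned vectors are essentially noise; the point of the sketch is only the difference in dimensionality, sparsity, and the availability of a similarity measure.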
Related Methods of Sentiment Analysis.
Sentiment analysis is essentially a text classification problem. Deep learning can learn deep features in data and has achieved excellent results in many fields. Therefore, deep learning has also been widely used in sentiment analysis tasks in recent years. Neural networks commonly used in text sentiment analysis include CNNs, recurrent neural networks, and LSTM networks. These networks can better mine the latent information hidden in the text and outperform most traditional machine learning methods.
RNN adds a loop structure to the traditional neural network, and the content of the loop body is executed at each step; however, as the number of historical nodes input to the RNN grows, the RNN cannot memorize information far from the current node. LSTM is an improved model of the RNN. The main improvement is the introduction of three gates for memorizing information: an input gate, a forget gate, and an output gate. Through these three gates, LSTM can control the information passing through the cell and can selectively add information or delete existing information according to the needs of the result. The specific LSTM network structure and internal structure are shown in Figure 1.
At the outset, LSTM uses the forget gate to determine the information that the cell needs to discard, according to the forget gate equation. LSTM then determines the information stored in the cell through the input gate. The input gate calculates a value from 0 to 1 by a sigmoid function, which is the key to updating the state of the current node, that is, to deciding what information needs to be updated or stored. Here, 0 means not accepting new information and 1 means accepting all of the input information. LSTM also generates a new memory C(t), which is the candidate memory, not the final memory. This candidate is determined by the previous output (the values of the hidden layer of LSTM) and the current input, reflecting the importance of the current input, and is computed with its own bias term. The final desired memory is then generated from the part to be remembered and the part to be forgotten.
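For reference, a standard formulation of the forget gate, input gate, candidate memory and final memory that matches this description is given below; the notation (weights W, biases b, sigmoid σ, element-wise product ⊙) is assumed here and is not necessarily the paper's own.

```latex
\begin{aligned}
f_t &= \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) && \text{(forget gate)}\\
i_t &= \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) && \text{(input gate)}\\
\tilde{C}_t &= \tanh\left(W_C\,[h_{t-1}, x_t] + b_C\right) && \text{(candidate memory)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t && \text{(final memory)}
\end{aligned}
```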
Through the calculation of the previous two gates, we already know the proportion i(t) of new information retained and the proportion f(t) of old information that needs to be forgotten, together with the new memory and the old memory. In the final step, the output gate is used to determine the result of the final cell output. The output gate calculates a value between 0 and 1 by applying a sigmoid function to the input information, giving the proportion of information that the cell finally outputs. Here, 0 indicates that no information is output and 1 indicates that the entire final memory is output. In a classical recurrent network, the state and output are always transmitted from front to back. However, in some problems the state transmission and output are related not only to the previous states but also to the subsequent ones; for example, in prediction, a missing word is related not only to the preceding text but also to the following text. BiLSTM is a common neural network model that considers this contextual relationship. It is based on the Bi-RNN: one LSTM module propagates from front to back, and another LSTM module propagates from back to front. BiLSTM thus builds a two-layer network model. The input is fed to the forward LSTM and the reverse LSTM, respectively, and the final output is the vector superposition of the LSTM outputs in the two directions. Finally, the entire output is passed through a fully connected layer, and the sigmoid output is the final result; BiLSTM is trained in the same way as LSTM [28][29][30].
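As a concrete illustration of the bidirectional structure described above, the following PyTorch sketch wires an embedding layer, a bidirectional LSTM and a sigmoid output head. It is a minimal assumed example, not the authors' implementation; the layer sizes and the mean-pooling step are arbitrary choices.

```python
# Minimal sketch (assumed architecture, not the authors' released code):
# a bidirectional LSTM text classifier with a sigmoid output, as described above.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=200, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)   # forward + backward LSTM
        self.fc = nn.Linear(2 * hidden_dim, 1)      # concatenated directions

    def forward(self, token_ids):
        x = self.embed(token_ids)                   # (batch, seq, embed)
        out, _ = self.bilstm(x)                     # (batch, seq, 2*hidden)
        pooled = out.mean(dim=1)                    # simple pooling over time
        return torch.sigmoid(self.fc(pooled))       # sentiment probability

probs = BiLSTMClassifier()(torch.randint(0, 10000, (4, 20)))
print(probs.shape)  # torch.Size([4, 1])
```

In practice, the built-in bidirectional LSTM already concatenates the forward and backward states, which plays the role of the "vector superposition" of the two directions described above.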
Sentiment Classification Model Based on Deep Learning
Considering that traditional machine learning methods and deep learning methods are insufficient in feature representation, this paper makes two improvements: one is to use shallow learning features as one of the input features of the deep learning model; the other is to add an attention mechanism to the network layer to allow the model to better learn the underlying semantic features in the text.
A Bidirectional Long- and Short-Term Memory Network Model Based on Emotional Multichannel
In order to make full use of the unique emotional resource information in the text sentiment analysis task, this paper proposes a BiLSTM based on sentimental multichannel features, referred to as BM-ATT-BiLSTM. The learning layer builds multiple channels to improve sentiment classification performance. BM-ATT-BiLSTM is a left-to-right multilayer neural network structure mainly composed of 5 parts: an input layer, a semantic learning layer, an emotional attention layer, a merging layer, and a sentiment classification output layer. The input layer is composed of word embedding features and shallow features (emotional part-of-speech features of words, location information features, and dependency features). The traditional LSTM model can obtain the forward semantic information in the text but ignores the reverse semantic information. In response, this paper lets LSTM also learn the backward direction of the sequence: the data are fed in both directions into the BiLSTM model. The calculation method is relatively simple: the emotional weight of each word is calculated, multiplied with the output of the previous layer of the BiLSTM network, and then normalized and fused batch-wise. The fully connected layer of the model outputs the feature matrix, softmax normalizes the feature matrix, and the classification result is finally obtained.
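The following PyTorch sketch illustrates the general pattern of an attention layer over BiLSTM states with shallow feature channels concatenated to the word embeddings. It is only an assumed approximation of the kind of model described; the dimensions, the per-word shallow features and the fusion and output details are not the paper's exact BM-ATT-BiLSTM specification.

```python
# Illustrative sketch only: attention over BiLSTM states with extra "shallow"
# feature channels concatenated before classification. Dimensions, feature
# names, and the fusion scheme are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, embed_dim=200, shallow_dim=16, hidden_dim=128, n_classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim + shallow_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden_dim, 1)          # per-word attention scores
        self.out = nn.Linear(2 * hidden_dim, n_classes)  # classifier head

    def forward(self, word_vecs, shallow_feats):
        # channel fusion: word embeddings + per-word shallow features
        x = torch.cat([word_vecs, shallow_feats], dim=-1)
        h, _ = self.bilstm(x)                            # (batch, seq, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)            # attention weights over words
        context = (w * h).sum(dim=1)                     # weighted sum over the sequence
        return torch.log_softmax(self.out(context), dim=-1)

model = AttentionBiLSTM()
scores = model(torch.randn(4, 20, 200), torch.randn(4, 20, 16))
print(scores.shape)  # torch.Size([4, 2])
```

Here the softmax over per-word scores plays the role of attention weights; in the described model the attention is emotion-aware, which this sketch does not attempt to reproduce.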
Experimental Parameter Setting
Different hyperparameters may have different effects on the experimental results. Although parameter tuning itself is not the main research content of this paper, for the sake of fairness, this paper considers the overall effect of the experiment, and the hyperparameter settings used are given in Table 2.
Experimental Environment and Comparative Experiments
The computer hardware configuration used in the experiments is as follows: the CPU is an Intel Core i5-9400F, and the GPU is from the NVIDIA GTX series; the operating system is Windows 10; the programming software used is PyCharm, the programming language is Python, and the deep learning libraries used are PyTorch, Keras, and gensim.
Experimental Results and Analysis.
Since the NLPIR and NLPCC2014 data sets do not contain many training samples, too many iterations may lead to overfitting, while too few iterations make it difficult for the model to learn effective features. Therefore, 20% of the data set is used as the validation set, 16% as the test set, and 64% as the training set. Precision, Recall, and F1-measure are selected as evaluation indicators. To make the experiments fairer, the evaluation results in this paper are averaged over 50 experimental runs. The detailed experimental results are shown in Table 3.
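A minimal sketch of the 64% / 16% / 20% split described above, using scikit-learn on placeholder data (the seed, sizes and variable names are assumptions):

```python
# Sketch of the 64% train / 16% test / 20% validation split on placeholder data.
from sklearn.model_selection import train_test_split

texts = [f"sample short text {i}" for i in range(1000)]   # placeholder corpus
labels = [i % 2 for i in range(1000)]                      # placeholder polarity labels

# First hold out 20% as the validation set.
x_rest, x_val, y_rest, y_val = train_test_split(texts, labels, test_size=0.20, random_state=42)
# 20% of the remaining 80% (= 16% of the whole set) becomes the test set.
x_train, x_test, y_train, y_test = train_test_split(x_rest, y_rest, test_size=0.20, random_state=42)

print(len(x_train), len(x_val), len(x_test))               # 640, 200, 160
```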
(1) Compared with LSTM, the overall effect of CNN is weaker. In the process of extracting features, CNN mainly captures multiple different N-grams of the text, with many different convolution kernels for each N-gram, so useful information is extracted from different angles; however, the experimental data sets are all short texts and the amount of data is not large, so using CNN to extract features may lead to insufficient features. Adding emotional attention to CNN clearly improves the effect. (2) The results of CNN and CNN + SVM show that using SVM instead of softmax can improve the classification effect. The reason is that the loss function of SVM converges faster on the three text data sets, whereas the softmax output is only a probability over compressed data, and the softmax probability distribution deviates from the actual result. (3) Among all the LSTM memory networks involved in the comparative experiments, BM-ATT-BiLSTM has the best effect. An important reason is that the emotional attention mechanism is added, and more latent emotion can indeed be learned from the text through attention. According to Table 3, the BM-ATT-BiLSTM method outperforms the other models in terms of precision and recall. This effect is essentially due to the time series characteristics of natural language: cells in the LSTM model can effectively record the time series information in the text. The BiLSTM structure is used to learn the semantic information in the text in order to enhance the ability of the model to learn the reversed text and to strengthen its ability to capture contextual information. Combining the emotional part-of-speech features and location information of words with word embedding features, emotional attention features, and features extracted by convolutional neural networks, this paper proposes three neural network models: the emotional multichannel feature BiLSTM classification model, the convolutional neural network with an added attention mechanism, and the multikernel convolutional neural network with an added attention mechanism. Experiments show that the BM-ATT-BiLSTM proposed in this paper performs best; therefore, it can be concluded that adding the above shallow learning features and deep learning features to short-text sentiment analysis can improve the classification performance of the model.
Conclusion and Outlook
Combining the emotional part-of-speech features, location information features, and dependency features of words, as well as word embedding features, emotional attention features, and features extracted by convolutional neural networks, this paper proposes three neural network models: the emotional multichannel feature BiLSTM classification model, the convolutional neural network with an added attention mechanism, and the multikernel convolutional neural network with an added attention mechanism. Experiments show that the BM-ATT-BiLSTM proposed in this paper has the best performance, so it can be concluded that adding the above shallow learning features and deep learning features improves short-text sentiment classification. (1) This paper puts forward a bidirectional LSTM memory network model based on emotional multichannel features, which integrates shallow learning features and deep learning features at the feature level, so the model can enhance the expression of text semantic information and learn the latent emotion of the text. The test results also benefit from multifeature pre-fusion, which is very helpful for sentiment orientation analysis. (2) This paper adopts CS-ATT-CNN and CS-ATT-TCNN to solve the problem of sentiment orientation analysis of short texts. The model effectively combines machine learning and deep learning, the training time is short, and the model outperforms the traditional convolutional neural network classification model and the TCNN model. Theoretically, the LSTM model captures text semantic information well, but in practice it still has deficiencies; the BM-ATT-BiLSTM method in this paper combines the emotional part-of-speech features, location information features, and dependency features of words and, through the emotional attention feature, learns the underlying emotional regularity in the text so as to capture the important information that affects the emotional orientation of the text. The BM-ATT-BiLSTM proposed in this paper performs well, but an additional factor should be considered in the forget gate, and detecting such meaningless information takes considerable time before better experimental results can be achieved in a short time.
Prospect.
Judging from the current situation and the experiments in this paper, there is still a long way to go in the sentiment orientation analysis of short texts. There are many challenges and opportunities, and many problems deserve further study and improvement. The following two points can be taken as the next steps for research.
(1) This paper only mines text data and does not consider the author's user attributes. If user attributes, the user's Weibo or online shopping comments, posting time and other information are fully considered, the accuracy of text orientation analysis may be greatly improved. In the experiments, this paper only uses Wikipedia data as the corpus for training word vectors; data from Weibo and product reviews can be added as an expanded corpus for training word vectors in the future. Due to the complexity and diversity of the network environment and of Chinese, some new words may not be recognized in the word segmentation stage, and there is a polysemy problem in Chinese. In the future, a new-word dictionary can be constructed in combination with the context. (2) There are some sarcastic sentences in Weibo and product reviews, and this paper cannot identify the emotional polarity of such sentences well. Therefore, sarcastic sentences could be added to adapt the model of this paper. If a knowledge graph is combined with sentiment analysis, considering each word as an entity in the knowledge graph and forming many-to-many relationships between entities, it becomes possible to dig out the emotional connections between words. In the next step, we will combine some actual Internet projects to verify more combined models and reflect their socio-economic value.
Data Availability
The data set can be accessed upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest. | 5,646.2 | 2022-05-29T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Mechanical Performance of Biodegradable Thermoplastic Polymer-Based Biocomposite Boards from Hemp Shivs and Corn Starch for the Building Industry
Bio-sourced materials combined with a polymer matrix offer an interesting alternative to traditional building materials. To contribute to their wider acceptance and application, an investigation into the use of wood-polymer composite boards is presented. In this study, biocomposite boards (BcB) for the building industry are reported. BcB are fabricated using a dry incorporation method of corn starch (CS) into hemp shivs (HS) treated with water at 100 °C. The amount of CS and the size of the HS fraction are evaluated by means of compressive, bending and tensile strength, as well as microstructure. The results show that the rational amount of CS, independently of the HS fraction, is 10 wt.%. The obtained BcB have a compressive stress at 10% of deformation in the range of 2.4-3.0 MPa, a bending strength of 4.4-6.3 MPa, and a tensile strength of 0.23-0.45 MPa. Additionally, the microstructural analysis shows that 10 wt.% of CS forms a sufficient amount of contact zones that strengthen the final product.
Introduction
Wood and polymer-based composites are low-carbon and environmentally friendly materials. They have many advantages, such as light weight, corrosion resistance, dimensional stability and recyclability, and are widely used in outdoor construction, logistics, decoration, and so on [1].
Even though hemp concrete is characterised by sufficient thermal insulating properties (from 0.05 to 0.13 W/(m·K) [2]), it is not resistant to mechanical impact. After conducting compressive strength tests on hemp shiv (HS) and lime-based composites, [3,4] obtained values of 0.20-0.50 MPa and from 0.40 to 1.2 MPa, respectively. Other authors [5] tested composites from HS and different classes of hydraulic lime. The experimental results for the composites, which were hardened for 28 days at 20 °C and 50% relative air humidity, showed that the compressive strength of such composites ranges from 0.10 to 0.31 MPa when the density is 460-480 kg/m³. The authors of [6] tested composites from HS and slaked as well as hydraulic lime; the results showed that the compressive strength varied from 0.15 to 0.20 MPa. Taking bending strength into consideration, according to the literature [7,8], it varied from 0.20 to 0.32 MPa. However, no research on the tensile strength of hemp and lime-based composites has been conducted. Due to their low mechanical durability, materials of this type cannot be used for bearing structures.
Raw Materials
For the preparation of biocomposite boards (BcB), fibre HS aggregate (obtained from local farmers (USO 31 species), Rokiskis region, Lithuania) arising from a hemp fibre separation process was used. In order to conduct the tests, the following HS fractions (particle size ranges) were chosen: 2.5/5 (particle size from 2.5 to 5 mm), 5/10 (from 5 to 10 mm), 10/20 (from 10 to 20 mm) and 2.5/20 mm (from 2.5 to 20 mm). Additionally, the shredded fraction obtained by milling a 2.5/20 mm fraction (up to 5.6 mm) was used. As a binder, corn starch (CS) ("Roquette", Lestrem, France) with a bulk density of 550 kg/m³, compressibility of 40%, amylose content of 26%, moisture content of 11.4% and gelatinization temperature of 62 °C was chosen and used at 10, 20, 30, 40 and 50 wt.%. The viscosity is not relevant due to the dry incorporation of CS into wetted HS. The tensile strength of hardened pure starch/water paste is 1.40 ± 0.21 MPa [24].
Forming Process
BcB were formed from different HS fractions, CS and water. Shredding of the 2.5/5 mm fraction was conducted with a laboratory shredder (self-made, Vilnius, Lithuania) with a power of 1.1 kW and a blade rotational speed of 2800 rpm. The aim of shredding was to break the HS particles down into fine fibres. During shredding, it was noticed that the process was not successful due to the high amount of dust, and the shape of the particles changed only slightly. Therefore, the HS were poured over with 100 °C water and left for 2 h. The wetted HS were then shredded for 60 s.
All fractions were treated with 100 °C water and left for 2 h, after which they were drained for 10 min in order to eliminate the excess water. Into the water-treated HS, powder-type CS (without any pre-treatment) was dosed through a 0.63 mm sieve. All HS fractions were mixed with different amounts of CS. In total, 30 compositions were formed (Table 1). The obtained mixtures were then thoroughly mixed for no less than 3 min until a homogeneous mass was obtained. BcB were formed using a metal mould, as presented by the graphical sketch in Figure 1. On the bottom part of the metal mould, which is lubricated with oil, a wooden frame is added. The forming mixture is then distributed throughout the whole frame and tamped down with a wooden scantling in order to obtain the initial form. After that, the upper part of the mould is added onto the bottom part and both of them are screwed together. The whole setup is then put on a stand, and a hydraulic jack is set on the upper part of the mould. The mixture is compressed by up to 40 vol.%. After the loading, BcB are further thermally hardened in a ventilated oven. Thermal hardening consists of three stages: temperature increase (to 160 °C in 1 h), temperature maintenance (160 °C for 6 h) and temperature reduction (heating is turned off). After the thermal hardening, the mould with the hardened product is left in the thermal processing oven until it cools down to environmental temperature. Then, the hardened BcB are demoulded and cut into specimens.
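For orientation, the 30 compositions follow from combining the five HS fractions with the control mixture and the five CS contents; the short Python sketch below only reproduces this combinatorial count and does not reflect the actual proportions listed in Table 1.

```python
# Sketch of how the 30 mixture compositions arise: five HS fractions combined
# with the control (0 wt.%) and five CS binder contents. Labels are assumptions.
from itertools import product

hs_fractions = ["2.5/5", "5/10", "10/20", "2.5/20", "shredded"]   # mm
cs_contents = [0, 10, 20, 30, 40, 50]                             # wt.%, 0 = control

compositions = list(product(hs_fractions, cs_contents))
print(len(compositions))    # 30
print(compositions[:3])     # [('2.5/5', 0), ('2.5/5', 10), ('2.5/5', 20)]
```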
Test Methods
Compressive stress at 10% of deformation was tested according to the EN 826 method [25] using a computerised machine H10KS (Hounsfield, Surrey, UK) with a maximum loading force of 10 kN, a loading accuracy of ±0.5% and a loading speed accuracy of ±0.05%. Three specimens for each composition with a size of 50 × 50 × d mm³ (d is the specimen thickness) were prepared. Before the test, specimens were conditioned for not less than 6 h at 23 ± 5 °C. Then, the specimen is aligned on the bottom support and loaded with an initial loading of 250 ± 10 Pa. The loading speed during the tests is 0.1d min⁻¹ and the specimen is compressed until 10% of deformation is reached.
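As an illustration of how such a result can be extracted from a recorded load-displacement curve, the following Python sketch interpolates the stress at exactly 10% deformation; the synthetic curve, specimen thickness and scaling are assumptions, and the snippet is not a substitute for the EN 826 procedure.

```python
# Illustrative calculation (not the EN 826 procedure itself): compressive stress
# at 10% deformation from a recorded force-displacement curve, synthetic data.
import numpy as np

side = 50.0e-3                      # specimen cross-section 50 x 50 mm, in m
thickness = 20.0e-3                 # specimen thickness d, in m
area = side * side                  # loaded area, m^2

displacement = np.linspace(0.0, 0.0024, 200)     # m, synthetic test record
force = 1.2e7 * displacement ** 1.2              # N, synthetic loading curve

strain = displacement / thickness
stress = force / area / 1.0e6                    # MPa

# Interpolate the stress at exactly 10% deformation (strain = 0.10).
sigma_10 = np.interp(0.10, strain, stress)
print(f"compressive stress at 10% deformation: {sigma_10:.2f} MPa")
```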
The bending strength of the BcB was determined in accordance with the EN 310 method [26]. For the test, equipment consisting of two parallel cylindrical supports, which have a length greater than the side of a specimen and diameters of 15 ± 0.5 and 30 ± 0.5 mm, was used. The test was conducted using the same computerised machine as for the compressive stress determination, for three specimens with a size of (20d + 50) × 50 × d mm³. Before the test, specimens were conditioned at 20 ± 2 °C and 65% ± 5% relative air humidity until constant mass was achieved.
The tensile strength perpendicular to the specimen surface was tested based on the EN 319 method [27]. The test was conducted using the computerised machine used for the compressive stress and bending strength determination. Three specimens for each composition with a size of 50 × 50 × d mm were prepared. Before the test, specimens were conditioned at 20 ± 2 °C and 65% ± 5% relative air humidity until constant mass was achieved.
The density was determined according to EN 1602 [28] for specimens whose size was the same as for the mechanical properties testing. Three density ranges were obtained, i.e., separately for compressive stress, bending and tensile strength.
The structure of the BcB was studied using scanning electron microscopy (SEM) with a JEOL SM-7600F (JEOL Ltd., Tokyo, Japan). Before the SEM analysis, the BcB were sputter coated with a thin gold layer under vacuum using a QUORUM Q150R ES (Quorum Technologies Ltd., Lewes, UK).
Experimental analysis of the obtained test data was conducted using mathematical and statistical methods, during which standard deviations were evaluated, and distribution functions and parameters were determined using the software STATISTICA (8.0). For the determination of the optimal relationship between X and Y, linear and non-linear correlation methods were used. According to the obtained determination coefficient, it is possible to conclude a relationship between the two parameters. When the value of the correlation square ratio is >0.9, the relationship is very strong; when it is in the range of >0.7-0.9, it is strong; when it varies from 0.5 to 0.7, it is averagely strong; and when it is <0.5, it is weak [29]. In order to evaluate the scattering of experimental data on both sides of the regression line, the average square deviation S_r was determined.
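The two statistics used throughout the results (the correlation square ratio η²_yx and the average square deviation S_r) can be computed as sketched below; the synthetic data and the quadratic fit are assumptions, since the exact form of the regression equations is not reproduced here.

```python
# Sketch: fit a regression of compressive stress on density, then report the
# correlation square ratio (share of explained variance) and the average square
# deviation S_r about the regression line. Synthetic data; the quadratic form
# is an assumption, not the paper's Equation (1).
import numpy as np

rng = np.random.default_rng(0)
density = rng.uniform(319, 408, 90)                        # kg/m^3, n = 90
stress = 0.00002 * density ** 2 + rng.normal(0, 0.2, 90)   # MPa, synthetic

coef = np.polyfit(density, stress, deg=2)                  # fitted regression
fitted = np.polyval(coef, density)
residuals = stress - fitted

eta2_yx = 1.0 - residuals.var() / stress.var()             # explained-variance ratio
s_r = np.sqrt(np.sum(residuals ** 2) / (len(stress) - 3))  # residual scatter, 3 parameters
print(f"eta^2_yx = {eta2_yx:.3f}, S_r = {s_r:.3f} MPa")
```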
Compressive Strength of BcB
Most thermal insulating materials under compression do not show an evident fracture limit: specimens do not fracture but densify. The same behaviour has been observed for some porous thermal insulating materials and for lime and HS-based composites [30]. When such materials are compressed, the conditional strength limit is determined. Dots O, A, B and C in the compression graph of BcB from the HS aggregate and CS binder (Figure 2a) designate a smooth transition into other mechanical states. The OA section shows a linear or close to linear relation between deformation and loading. Furthermore, compared to the OA section, the AB section shows an increase in deformation when the loading is marginally increased. The third zone (BC section) presents a reduction in deformation while the stress increases: densification of the material is observed. Figure 2b shows the 50 × 50 × 10 mm³ non-compressed specimen of BcB, while Figure 2c depicts the specimen compressed by up to 70%, which has a final size of 50 × 50 × 3 mm³. It can be seen that the specimen densifies but does not fracture.
This way, when the compressive stress is determined for BcB, the limiting compressive strength is not obtained; therefore, it has to be taken into account that permissible compressive loadings are specified. Hereby, normative references for thermal insulating materials designate compressive strength as compressive stress at 10% of deformation (based on thickness).
The apparent density is an important parameter for describing the mechanical properties of building materials. Therefore, it is important to evaluate its impact on BcB from HS and CS.
The dependence of BcB compressive stress at 10% deformation on density is presented in Figure 3. Based on the experimental data, the relation between compressive stress and density is determined and can be approximated by the regression equation (Equation (1)) with standard deviation S_r = 0.196 MPa (n = 90) and correlation square ratio η²_yx = 0.756, where σ_10% is the compressive stress at 10% deformation, MPa, and ρ is the density of BcB, kg/m³. According to the chosen mathematical model, the obtained correlation square ratio η²_yx = 0.756 shows that 75.6% of the changes in compressive stress are determined by the change in BcB density.
It was calculated that the average density obtained for BcB with HS aggregate and different amounts of CS binder varies from 319 to 408 kg/m³. However, the authors of [21] obtained HS and starch-based biocomposites within a similar binder amount interval with less than half the density. Additionally, [31] presents literature values for hemp-based composites with densities varying from 351 to 627 kg/m³, which proves that the parameter depends on the formation technology, i.e., loading conditions and HS treatment.
After conducting the experimental data analysis (Figure 4), it is determined that the differences occurring due to the interaction between the varying amount of CS and the different HS fractions are relatively small. The highest compressive stress values are obtained for BcB from shredded HS and CS binder. When the amount of CS varies from 10 to 50 wt.%, the obtained compressive stress changes insignificantly, within the error range (from 3.0 to 3.3 MPa). Moreover, compared to the control BcB without a binder, the greatest increment, i.e., ~15.6%, is observed for BcB from shredded HS and 10 wt.% of CS.
Contrary to the results obtained for BcB with shredded HS, BcB from the 5/10, 10/20, 2.5/20 and 2.5/5 mm HS fractions are characterised by the lowest compressive stress values. When from 10 to 50 wt.% of CS is used, the parameter varies on average from ~2.5 to ~2.6 MPa. The obtained values are similar to the ones presented by [32], who additionally treated HS with NaOH, Ca(OH)2 and ethylenediaminetetraacetic acid. It means that the same or even better values may be obtained without further treatment of the aggregate. Results for the control BcB from all fractions, except the shredded one, and without a CS binder are lower; the average value of compressive stress is ~1.9 MPa. This can be attributed to the assumption made by [5] that when the proportion of smaller HS particles (in this case shredded ones) is larger, they are better coated by the binder during the production process.
Figure 5a presents microstructure images of BcB without a CS binder and Figure 5b of BcB with a CS binder. Figure 5a shows that the HS particles treated at 100 °C have a rough and dishevelled surface. Hot water treatment leads to breaking of microfibril bundles and defibrillation; a recent study [33] proved this defibrillation effect on piassava fibres. Therefore, better interfacial adhesion is ensured between the aggregate and the thermoplastic binder. The addition of CS binder to the forming mixture allows it to overlay the tracheids on the surface of HS; then, during the thermal treatment (the BcB hardening process), it forms links that strengthen the contact zones between HS.
In the SEM images (Figure 5b) of the BcB with a binder, it is noted that the HS are fully enclosed by the CS matrix and considerable adhesion occurs in the interface region between the two components. This result is in accordance with the previously discussed compressive stress test results and the data obtained by [34], which show a good interaction between aggregate and binder.
Bending Strength of BcB
In order to use BcB from fibre HS aggregate as thermal insulating-structural materials, bending strength, which is an extremely important parameter, should be determined. Based on the results obtained, it may be possible to assess the product's durability during transportation, installation and exploitation under specific conditions, as well as its applicability in ceilings, external layers of three-layered boards, envelopes and so on. Therefore, the results of bending strength for BcB from different fractions of HS and varying amounts of CS are presented in Figures 6 and 7. Basically, Figure 6 presents the dependence of bending strength on BcB density. After conducting an analysis of the experimental data (Figure 7), it can be seen that the differences occurring due to the interaction between different amounts of CS and various HS fractions are relatively small.
The average value of bending strength for BcB from HS aggregate and CS binder is ~1.5 MPa. It is noticed that different amounts of CS and various HS fractions do not impact the final value of bending strength, i.e., for all BcB it changes insignificantly. When CS binder is added from 10 to 50 wt.%, the value of bending strength increases from ~5.2 to ~6.0 MPa. According to the experimental data, it can be stated that the highest increment in the parameter, i.e., 3.5 times, is observed for BcB from various HS fractions and 10 wt.% of CS binder. A more obvious increase in the bending strength of BcB with an increasing amount of CS is observed in [35]. However, those authors obtained values varying from 0.03 to 0.13 MPa, while the current study investigates much stronger BcB. Such a significant difference may be due to the different incorporation of starch and loading conditions, i.e., the authors used a starch solution, while the current study investigated a dry method. In contrast, the information regarding the loading conditions during production was not presented.
It can be stated that the average bending strength value falls into the interval 4.6 ≤ σ_b ≤ 5.7 MPa. Even though all values fall into this interval, BcB from shredded HS can be distinguished, as their average bending strength is higher than the upper confidence limit. Therefore, the assumption can be made that the bending strength of shredded HS-based BcB is relatively higher than that of BcB from non-shredded HS. According to their density, BcB from the shredded HS fraction can be referred to as low density boards. Although the density is low, the bending strength, as given in the study [36], is almost the same as for medium density boards.
The conducted structural analysis shows that the fracture surface of BcB from non-shredded HS is uneven (Figure 9a). During the bending test, part of the shivs in the fracture zone partially or totally break. It can also be seen that part of the shivs are pulled out. Therefore, it can be stated that the connections in these places are not strong enough. Moreover, Figure 9b shows the fracture surface of BcB from shredded HS. The fracture mostly occurs through the contact zones of HS, forming a fairly even fracture line; however, in some places, single larger particles are pulled out. It can be noticed that the strength of the contact zones between HS aggregate and CS binder is quite uniform throughout the whole specimen volume. Due to the higher amount of contact zones formed between shredded HS particles, these BcB may withstand higher stresses during the test.
Tensile Strength of BcB
The tensile strength of a material is one of the most important mechanical properties. This parameter is relevant if any other material or product is to be connected or glued onto the BcB. It is also important during installation or transportation. Based on the value obtained, the application area of the material may be anticipated.
The conducted experimental data analysis (Figure 10) showed that, according to the tensile strength results, the control specimens without a binder may be distinguished into two groups. Control BcB from the 5/10, 10/20, 2.5/20 and 2.5/5 mm HS fractions are characterised by the lowest tensile strength, with an average value of ~0.057 MPa. Meanwhile, control BcB from shredded HS have the highest average value, equal to ~0.25 MPa. With the addition of 10 wt.% of CS binder into the forming mixture, the tensile strength of all HS fraction-based BcB reaches 0.20-0.39 MPa. Due to the porous microstructure of HS, part of the CS particles (<25 µm [37]) can penetrate and fill the vessels and form a good mechanical connection with the rest of the CS, which then enhances the interface.
Comparing the results of BcB from all HS fractions and 10 wt.% of CS, it is determined that the tensile strength of shredded HS-based BcB is ~2.9 times higher than that of 2.5/5 mm-based BcB (Figure 11). A similar observation is made by [36] for composites with a smaller fraction aggregate. However, the density range of such composites was from 1100 to 1300 kg/m³. Further addition of CS does not significantly change the parameter for BcB from all HS fractions. However, it may be stated that BcB from shredded HS and different CS amounts are characterised by relatively higher tensile strength.
As can be seen, the lowest tensile strength is obtained for the control BcB without CS binder, and the obtained results are in good agreement with the ones obtained by [38].
When the specimen is under tensile force, stresses are formed and they must be compensated by internal cohesive forces between particles. When such forces are exceeded, the specimen fractures. Therefore, Figure 12 presents fracture zones of BcB after the tensile test. Since the CS structure consists of strands containing glucose residues, iodine molecules may penetrate into them [39]. Consequently, CS, when reacted with iodine, may turn dark blue. In order to evaluate the distribution of CS, a 5% concentration iodine solution was used. Accordingly, Figure 12a,d present the fracture zones of iodine solution coated BcB from the 2.5/5 mm and shredded HS fractions without CS. It is seen that the colour of the fracture surface does not change. Contrary observations are made for iodine solution coated BcB from the 2.5/5 mm and shredded HS fractions with 10 wt.% of CS: a noticeable bright change in colour can be seen (Figure 12b,c,e,f). Comparing Figure 12b,c to Figure 12e,f, it can be noticed that the CS binder covers a larger area of the BcB specimen from shredded HS aggregate. Meanwhile, Figure 12b,c depict light zones, which show that the CS binder in BcB from non-shredded HS covers a smaller area of the specimen compared to the one for BcB from shredded HS.
Contact zones that are not covered with CS may withstand lower stresses, while a larger specimen surface area covered with CS strengthens the contact zones between HS particles. Due to the larger cohesive force between particles, the specimen is able to sustain higher tensile stresses. This assumption is confirmed by the tensile test results. Based on the results obtained, the lowest tensile strength is reached by BcB from the 2.5/5 mm HS fraction and it is 0.235 MPa at a density of 368 kg/m³, while the highest is reached by BcB from shredded HS and it is 0.451 MPa at a density of 375 kg/m³. Additionally, in order to better understand the extent of the effect, Table 2 presents mechanical test results for BcB from HS aggregate and CS binder. The differences emerge due to different production methods, raw material preparation, thermal treatment and different densities of the materials.
All things considered, the boards with shredded HS fraction and 10 wt.% of CS exhibit good mechanical properties, which make them promising potential substitutes for commercially available products with similar density. The good mechanical performance of BcB can be attributed to the good adhesion between CS and HS. Indeed, as can be observed from the optical images (Figure 12f), the surface of the HS appears to be covered with the binder without significant detachments.
Conclusions
Based on the results obtained, relationships between the amount of CS binder and mechanical performance were observed. Increasing the CS binder amount up to 50 wt.% increases the compressive stress at 10% deformation irrespective of the HS fraction used, while an improvement in bending and tensile strength is observed when up to 10 wt.% of CS binder is used. Application of 20-50 wt.% CS does not further affect the latter properties of BcB.
Overall, regarding the mechanical properties, it is expedient to produce BcB with a shredded HS fraction and 10 wt.% of CS binder under thermal treatment at 160 °C. The thermal process ensures the release of lignin from the HS. Consequently, the lignin-CS system strengthens the zones between aggregate particles and forms a structure that determines the better mechanical performance compared with BcB from non-shredded HS.
BcB with shredded HS aggregate and 10 wt.% of CS are characterized by the greatest mechanical performance; independently of the HS fraction, such BcB have a compressive stress at 10% deformation of up to 3.0 MPa, a bending strength of 6.3 MPa and a tensile strength of 0.45 MPa. The obtained average density (~319-408 kg/m3) indicates that, according to the European normative document EN 316 [40], BcB can be classified as softboards and used as a self-bearing structural material for the building industry. Based on the requirements, BcB can be applied in dry and humid conditions for internal and external use without loading (EN 622-4, Section 4.2) or as load-bearing boards in dry and humid conditions for instantaneous or short-term load duration (EN 622-4, Section 4.3). | 10,865.4 | 2019-03-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Environmental Science"
] |
Correcting delayed reporting of COVID‐19 using the generalized‐Dirichlet‐multinomial method
Abstract The COVID‐19 pandemic has highlighted delayed reporting as a significant impediment to effective disease surveillance and decision‐making. In the absence of timely data, statistical models which account for delays can be adopted to nowcast and forecast cases or deaths. We discuss the four key sources of systematic and random variability in available data for COVID‐19 and other diseases, and critically evaluate current state‐of‐the‐art methods with respect to appropriately separating and capturing this variability. We propose a general hierarchical approach to correcting delayed reporting of COVID‐19 and apply this to daily English hospital deaths, resulting in a flexible prediction tool which could be used to better inform pandemic decision‐making. We compare this approach to competing models with respect to theoretical flexibility and quantitative metrics from a 15‐month rolling prediction experiment imitating a realistic operational scenario. Based on consistent leads in predictive accuracy, bias, and precision, we argue that this approach is an attractive option for correcting delayed reporting of COVID‐19 and future epidemics.
to compare a joint (spatio-temporal) model to independent time series models.
A.1 Data
We use data from the Brazilian state of Paraná, which was severely affected by the 2009 H1N1 epidemic compared to other states (Codeço et al., 2012) and continues to have one of the highest rates of SARI incidence (Bastos et al., 2019). Here we consider a much longer time period of 230 weeks (from the start of January 2013 to the end of May 2017), compared to 66 weeks in Bastos et al. (2019), to enable us to draw meaningful conclusions about seasonal variation. The state is divided into s = {1, . . . , 22} health regions and we consider the total count to be fully observed 6 months after occurrence (D max = 27). The dimension of the total counts y t,s is therefore 230x22 and the dimension of the partial counts z t,s,d is 230x22x27 (corresponding to over 100k observations). For this application, we imagine that the present-day week, t 0 , is week 224 (mid April 2017). We then seek to make predictions for t 0 = 224, for previous weeks where the total count is still partially unobserved (t = {t 0 − D max + 2, . . . , t 0 − 1} = {199, . . . , 223}) and for the next 6 weeks (t = {t 0 + 1, . . . , t 0 + 6} = {225, . . . , 230}). The plot shows a clear seasonal cycle across all regions, with outbreaks reaching their peak in April-July, as well as considerable year-to-year variability. There is also some evidence of regional variation in the overall rate -for example, the brightest green region tends to have quite a low rate of cases per 100,000 people, compared to some other regions -as well as regional variation in the seasonal timing of outbreaks. At "present day" t 0 = 224, shown by the vertical line, we are in the early stages of the annual influenza outbreak, so forecasting predictions should ideally show an increasing trend in the number of SARI cases.
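As a concrete illustration of these dimensions and prediction targets, the short Python sketch below sets up placeholder arrays and index sets; the variable names are ours and the arrays are empty stand-ins, not the SARI data.

```python
import numpy as np

T, S, D_max = 230, 22, 27        # weeks, health regions, maximum delay
t0 = 224                         # "present-day" week (mid April 2017)

# Placeholder arrays with the stated dimensions (1-based weeks map to 0-based rows).
y = np.zeros((T, S))             # total counts y[t, s]: 230 x 22
z = np.zeros((T, S, D_max))      # partial counts z[t, s, d]: 230 x 22 x 27 (> 100k entries)

# Weeks whose totals are still partially unobserved at t0, the nowcast week, and the forecast horizon.
partially_observed = list(range(t0 - D_max + 2, t0))   # weeks 199, ..., 223
nowcast_week = t0                                      # week 224
forecast_weeks = list(range(t0 + 1, t0 + 7))           # weeks 225, ..., 230
```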
A.2 Model for SARI cases
In Section 4.2, we introduced a nested spline structure to add a regional dimension to the time series model for dengue fever cases presented in Stoner and Economou (2020). To model the incidence of SARI cases we adopt a near-identical approach, making use of spatially varying intercept, temporal and seasonal effects: f(t, s) = ι s + δ t,s + ξ t,s . Effects δ t,s and ξ t,s are temporal and seasonal (cyclic) splines, respectively, which vary by region. Figure 1, however, suggests that a large portion of temporal and seasonal variation may be common across all regions. Once again, we can introduce overall temporal and seasonal effects α t and η t , and make their (basis function) coefficients the mean of the coefficients for the regional effects δ t,s and ξ t,s . Similarly, the model for the expected cumulative proportion reported after each delay is characterised by the addition of fixed delay effects ψ s,d , which are independent across regions, and temporal spline effects γ t,s . Prior distributions and implementation are equivalent to those of the COVID-19 model, detailed in Section 4.3.
A.3 Results
Figure 2 shows median predicted temporal (δ t,s , left) and seasonal (ξ t,s , centre) effects on SARI incidence, as well as the temporal effect on the cumulative proportion reported (γ t,s , right). A different colour is used for each region and the dashed black lines show the median predicted overall effects, α t , η t and β t , respectively.
The estimated effects on SARI incidence follow the overall trends quite closely, with only a few deviating substantially. For example, there are noticeable increases in the temporal effect on SARI incidence for almost all regions around mid 2013 and around mid 2016, corresponding to the two largest outbreaks seen in Figure 1.
Similarly, all the seasonal effects reflect the increase in SARI incidence leading up to Brazil's winter seen in Figure 1. The effects on the cumulative proportion reported are substantially more variable, suggesting that the delay mechanism may be driven more by local factors than SARI incidence is. Summarising, the 22 health regions of Paraná have a lot in common in terms of temporal and seasonal variation in SARI incidence. It is worth examining, however, whether anything tangible was gained from modelling the regions simultaneously as opposed to using 22 independent time series models. For instance, modelling the regions independently could impede the model's ability to capture the variance of the total number of reported cases across all regions. To assess this, we applied 22 independent models where f(t) = ι + δ t + ξ t and g(t, d) = ψ d + γ t for each region. We then used posterior predictive checking (Gelman et al., 2014) to see which approach captures the variance of the total better. For this experiment, the overall 95% prediction interval coverage was 0.99 for predictions of the total reported count corresponding to previous weeks (t < t 0 ), 1 when forecasting (t > t 0 ) and 1 when nowcasting (t = t 0 ). All code and data required to reproduce these results are provided as supplementary material.
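For readers wanting to reproduce this kind of check, a minimal sketch of how 95% prediction interval coverage could be computed from posterior predictive draws is given below (Python; the function and argument names are illustrative, not taken from the authors' code).

```python
import numpy as np

def interval_coverage(pred_samples, observed, level=0.95):
    """Fraction of observations falling inside central posterior predictive intervals.

    pred_samples: array (n_mcmc, n_obs) of predictive draws of the total reported count.
    observed:     array (n_obs,) of the eventually observed totals.
    """
    alpha = 1.0 - level
    lower = np.quantile(pred_samples, alpha / 2, axis=0)
    upper = np.quantile(pred_samples, 1 - alpha / 2, axis=0)
    return np.mean((observed >= lower) & (observed <= upper))
```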
Appendix B Illustration of Data Structure
To support understanding of the reporting delay in the COVID-19 hospital deaths application, this section illustrates the data structure. First, recall from Section 1 that a portion of deaths occurring on a given day (e.g. 7th February) will be reported during the same 24-hour reporting period they occurred in and published the following day (8th February) at 5pm. Some more deaths may be reported in the next 24-hour period, and published the day after that (9th February), and so on until no more deaths are reported. If we compile data files published each day, we can arrange the number of deaths (for each region) by date of death (rows) and reporting date (columns). Then, assuming a maximum delay of D max = 4 days for this example, we can organise the data in terms of the delay between date of occurrence and date of reporting (publication). Note that there is now a triangular shape of unknown values, shown as "?", which represents the numbers of deaths which are yet to be reported. This is often called the "delay triangle" in the literature. For any date of death where not all delay counts have been observed, the total death count is also not yet completely observed. The statistical problem is predicting the unknown total deaths, conditional on any partial information for those dates of death, and on any completely observed past total death counts. The final table shows the same data with modelling notation (where t is the 10th of February).
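A small sketch may help visualise the delay triangle described above; the counts below are simulated placeholders (Python), not the English hospital deaths data.

```python
import numpy as np

D_max = 4                      # maximum delay assumed in the example above
dates = 7                      # consecutive dates of death being tracked
today = 6                      # index of the most recent date of death

# z[t, d] = deaths occurring on date t that were reported with delay d (illustrative counts).
rng = np.random.default_rng(0)
z = rng.poisson(lam=[8, 4, 2, 1], size=(dates, D_max)).astype(float)

# Delays not yet observed form the "delay triangle" of NaNs: for the most recent date only
# delay 0 is known, for the previous date delays 0-1 are known, and so on.
for t in range(dates):
    delays_observed = min(D_max, today - t + 1)
    z[t, delays_observed:] = np.nan

totals_known = ~np.isnan(z).any(axis=1)   # dates of death whose totals are fully observed
print(z)                                   # NaNs trace out the triangular block of unreported counts
print(totals_known)
```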
Appendix C Simulation Experiment
In the main article, our assessment of the GDM predominantly focuses on out-ofsample prediction performance, reflecting its intended use as a method for operational nowcasting and forecasting. However, we also claimed that the GDM can provide insights into factors determining the structure of the reporting delay and variability in the total count. We test that here by fitting the GDM to simulated data, such that inference can be compared against the known effect of covariates.
C.1 Data
The data we simulate are completely synthetic, but are intended to represent a hypothetical disease epidemic, with covariates imitating real-world drivers of change in the disease progression and reporting performance. Note that repeated simulation would be necessary to assess bias, variance, prediction interval coverage etc. Here we simulate one set of data as an example and we use the phrase "true value(s)" to mean the coefficient values chosen in this one example.
Let y t,s be the number of disease cases occurring on day t ∈ 1, . . . , 100 in region s ∈ 1, 2, 3, and let z t,s,d be the parts of y t,s observed at each delay d ∈ 1, . . . , D max .
The experiment focuses on the quality of inference rather than prediction performance, so we assume all y t,s and z t,s,d are known. First, we simulate y t,s from a Negative-Binomial model with mean λ t,s and scale parameter θ s . The mean number of cases is comprised first of fourth-order orthogonal polynomials in time, δ t,s . The polynomials, shown in the left panel of Figure 5, are distinct for each region to reflect differences in disease progression and non-pharmaceutical interventions. In all 3 regions, the simulated polynomials have an "M" shape, representing two waves of the disease. Then, the mean is also affected by the proportion of the population successfully administered a vaccine in each region, V t,s , shown in the right panel of Figure 5. The vaccine becomes available after 40 days, and is then administered at a different rate in each region. After scaling to have 0 mean and standard deviation 1 (let V ( * ) t,s be the scaled version), our imaginary vaccination covariate has coefficient α 1 = −1.75, meaning the case rate reduces substantially as more people are vaccinated. The coefficients of the scaled vaccination covariate and of the scaled variant-2 prevalence covariate are both assumed constant across regions, meaning we assume the protection of the vaccine and the properties of the new variant to be the same across regions.
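The sketch below illustrates, under our own placeholder coefficients, how daily cases could be simulated from a Negative-Binomial model with regional polynomial trends and a scaled vaccination covariate with coefficient −1.75. The variant-prevalence covariate is omitted for brevity, and none of the numerical values below are the ones used in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, S = 100, 3
t = np.arange(1, T + 1)

# Region-specific fourth-order polynomial trends (illustrative coefficients, not the authors' values).
poly = np.vander((t - t.mean()) / t.std(), N=5, increasing=True)[:, 1:]   # t, t^2, t^3, t^4
delta = np.stack([poly @ rng.normal(0, [1.0, 0.8, 0.5, 0.3]) for _ in range(S)], axis=1)

# Vaccination proportion per region: available after day 40, rolled out at different rates.
V = np.clip((t[:, None] - 40) * rng.uniform(0.01, 0.02, size=S), 0, 1)
V_scaled = (V - V.mean()) / V.std()
alpha1 = -1.75                                # vaccination coefficient quoted in the text

iota = np.log([200.0, 150.0, 100.0])          # regional intercepts (illustrative)
lam = np.exp(iota + delta + alpha1 * V_scaled)

# Negative-Binomial with mean lam and scale theta_s (variance lam + lam**2 / theta).
theta = np.array([10.0, 15.0, 20.0])
p = theta / (theta + lam)
y = stats.nbinom.rvs(n=theta, p=p, random_state=rng)   # simulated daily cases, shape (100, 3)
```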
Combining regional intercept terms ι s , polynomials δ t,s , the effect of vaccination, and the effect of the prevalence of variant 2, Figure 7 shows the overall mean daily cases for each region (lines) and then the simulated daily cases y t,s (shapes). The M shape from the polynomials is still visible, but dampened by the effect of the vaccination covariate. Now, it remains to simulate z t,s,d : the parts of y t,s reported at each delay. Here, we simulate the first D = 7 delays for modelling. We can ignore the delay structure of the remainder term because, as mentioned previously, we assume all z t,s,d are observed. To define the mean reporting distribution, we start with a probit model for the expected cumulative proportion reported, S t,s,d , as in the Survivor variant of the GDM (described in Section 3 of the main article). Temporal variation in S t,s,d is characterised first by monotonically increasing "delay curve" effects ψ s,d and upward linear trends in (scaled) time X ( * ) t . We also included staff absence percentage covariates A t,s (Figure 8), simulated as first-order autoregressive variables. The coefficients of the scaled absence percentage A ( * ) t,s , β 2,s , are intuitively negative so that a higher percentage of absences slows down reporting. Finally, we have also included the scaled mean daily number of cases λ ( * ) t,s as a covariate in the cumulative proportion reported. The coefficients of this effect, β 3,s , are also negative such that, when cases are high, reporting slows down. Inclusion of this covariate is notable because: a) to the best of our knowledge, this is the first attempt (albeit for simulated data) to fit a model for delayed reporting which considers the effect of the same covariates on both the mean total cases/deaths and the reporting delay; and b) effectively including the same covariates in both parts of the GDM hierarchy is more likely to cause inferential problems such as non-identifiability of the covariate coefficients. Combining all of the effects in (5) yields the mean cumulative proportion reported at each delay, S t,s,d , as shown in Figure 9. Notably, reporting performance visibly slows down in the first 30 days or so as cases surge (as seen in Figure 7). One can imagine this reflecting health systems prioritising treatment over administrative tasks when cases are high.
Using these simulated S t,s,d , we then simulated two different sets of delayed counts z t,s,d , to be fitted separately. We simulated the first set, z (1) t,s,d , from the Generalised-Dirichlet-Multinomial (GDM) itself. This allows us to assess inference when fitting the model to data generated from it. Then, we also simulated a second set, z (2) t,s,d , which is not generated from the GDM: taking the mean proportions reported at each delay implied by S t,s,d and applying a centred log-ratio (CLR) transformation gives values u t,s,d . We then add an i.i.d. Gaussian noise term ϵ t,s,d ∼ Normal(0, 0.25²) to each u t,s,d and input this into an inverse CLR transformation to produce a new, noisier set of proportions μ (2) t,s,d . A standard deviation of 0.25 was chosen for ϵ t,s,d so that the variance of z (2) is fairly close to the variance of z (1) . From these new (noisier) proportions reported at each delay, we can compute relative proportions ν (2) t,s,d (the expected proportion reported at delay d among cases not yet reported before delay d), which can then be used as the means of conditional Binomial distributions to simulate z (2) t,s,d given y t,s and the counts z (2) t,s,j already simulated for earlier delays j < d.
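As an illustration of this second, non-GDM generating process, the following sketch perturbs a set of mean reporting proportions on the CLR scale and then thins a total count through sequential Binomial draws. It assumes the usual definition of relative proportions just described, and all numbers are placeholders rather than values from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

def clr(p):
    """Centred log-ratio transform of a composition (vector of positive proportions)."""
    logp = np.log(p)
    return logp - logp.mean()

def inv_clr(u):
    e = np.exp(u)
    return e / e.sum()

# Illustrative mean proportions reported at delays 1..7, plus a final remainder category.
mu = np.array([0.40, 0.25, 0.15, 0.08, 0.04, 0.02, 0.01, 0.05])
y_total = 500
D = 7

# Perturb the proportions with i.i.d. Gaussian noise on the CLR scale (sd = 0.25), as in the text.
mu_noisy = inv_clr(clr(mu) + rng.normal(0.0, 0.25, size=mu.size))

# Relative proportions nu_d, then sequential Binomial thinning over the modelled delays.
z = np.zeros(D, dtype=int)
remaining, cum = y_total, 0.0
for d in range(D):
    nu_d = mu_noisy[d] / (1.0 - cum)
    z[d] = rng.binomial(remaining, min(nu_d, 1.0))
    remaining -= z[d]
    cum += mu_noisy[d]

print(z, remaining)   # 'remaining' plays the role of the unmodelled remainder count
```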
Fitting a GDM model to z (2) t,s,d then allows us to test the flexibility of the GDM to capture non-GDM variance structures, through posterior predictive checking (Gelman et al., 2014).
C.2 Results
We fit the GDM model outlined in equations (3)-(7) to the two simulated data sets.
We use the NIMBLE package (de Valpine et al., 2017) for MCMC and weakly informative prior distributions (e.g. Normal(0, 10²) for coefficients), as in the main article.
A key question of interest is whether the known trends and covariate effects in the mean daily cases and in the reporting delay are appropriately captured by the model. To assess this, we should examine outputs from the model fit to z (1) t,s,d . We can first look at the polynomial trends in the mean daily cases. Figure 10 shows the posterior median estimates of δ t,s with 95% credible intervals (CIs).
The polynomials are reproduced very closely, with the true values (dashed lines) captured completely by the 95% CIs.
Next, we can look at the coefficients in the mean daily cases for the vaccine (α 1 ) and for the prevalence of Variant 2 (α 2 ). Figure 11 shows density estimates of the posterior samples of these two coefficients, with vertical lines representing the true values. In these plots, we are looking to see whether the true values are extreme with respect to their corresponding posterior distributions. Here, the true values for both coefficients are towards the centre of the distributions, indicating the model captures the effects of these covariates well. Then, Figure 12 shows the posterior densities for the cumulative proportion reported coefficients of time (β 1,s ), staff absence (β 2,s ), and the daily case rate (β 3,s ), respectively. In most cases, the true values are well within the bulk of the distributions, and after computing 95% CIs we determined that they all contain their corresponding true values.
Finally, we can assess whether the GDM can appropriately capture the variance of the second set of delay counts z (2) t,s,d . Recall that these were simulated from Binomial-Gaussian mixture distributions, rather than a GDM. To achieve this, we use the MCMC samples to simulate posterior predictive replicates of the original z (2) t,s,d . Figure 13 shows the posterior predictive sample standard deviations for the first 6 delays. The true values are all within the bulk of the replicate distributions, indicating the model has captured the variances of the delayed counts well despite these counts being generated from a different model.
C.3 Conclusions
Here we simulated a data set of daily disease case counts in 3 regions, with covariates imitating real-world drivers of disease (vaccination, proliferation of different variants) and reporting delays (staff absences, pressure from high case rates). Investigating the latter effect is particularly unusual, because it means we tested a model design that effectively included the same covariates in both the model for the total counts and the model for the reporting delay. Despite this, the GDM was able to reproduce all of the known covariate effects well. We also tested the fit of the GDM to delay counts z (2) t,s,d simulated from a different model, in this case a Binomial-Gaussian mixture. Compellingly, the flexibility of the GDM meant that it was able to capture the variance of these alternative counts very closely.
Appendix D COVID-19 Deaths
In Section 4.5 of the main article, we compare two versions of the GDM against four competing models for COVID-19 deaths in a rolling nowcasting experiment.
Here we provide details on the specification and implementation of each of these models.
D.1 GDM Hazard model
With the first version of the GDM being the GDM Survivor model described in Section 4.2, we detail the second version, based on the Hazard variant of the GDM, here. The hierarchical structure is the same as before: y t,s | λ t,s , θ s ∼ Negative-Binomial(λ t,s , θ s ) and z t,s | y t,s , ν t,s , ϕ t,s ∼ GDM(ν t,s , ϕ t,s , y t,s ).
The model for the mean daily fatality rate λ t,s is also the same as before, with δ t,s being independent penalised splines of time for each region s, for the purpose of the prediction experiment (as explained in Section 4.5). The difference in the Hazard variant is that the Beta-Binomial means ν t,s,d are modelled directly.
D.2 Negative-Binomial Survivor model (NB Survivor)
The first model competing against the GDM is a Negative-Binomial model intended to approximate the (as-yet) unknown marginal model for the delayed counts z t,s,d obtained from the GDM when integrating out the total counts y t,s . The key feature of this model is that the systematic models for the total count and the reporting delay are otherwise identical to those presented in Section 4.2, with log(λ t,s ) = ι s + δ t,s , where μ t,s,d is the mean proportion at each delay, again defined by a probit model for the cumulative proportion reported, and λ t,s is the mean of y t,s . As in the GDM model for COVID-19 deaths, we include models for the first D′ = 6 delayed counts z t,s,d , but also an extra (identical) model for the remainder r t,s = y t,s − Σ_{d=1}^{D′} z t,s,d , which is unobserved when y t,s is unobserved (see Stoner and Economou (2020) for the rationale behind modelling the remainder). The model is implemented using MCMC, following the specification in Section 4.3 for prior distributions and implementation.
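To make the link between a probit-scale cumulative reporting curve and the per-delay mean proportions concrete, a minimal sketch is given below; the probit-scale effects are invented for illustration and do not correspond to fitted values.

```python
import numpy as np
from scipy.stats import norm

# Illustrative probit-scale "delay curve": cumulative proportion reported after delays 1..6.
psi = np.array([-0.8, -0.3, 0.1, 0.5, 0.9, 1.3])     # monotonically increasing effects
S_cum = norm.cdf(psi)                                  # expected cumulative proportion reported

# Mean proportion reported at each individual delay, plus the remainder term r.
mu = np.diff(np.concatenate(([0.0], S_cum)))
remainder = 1.0 - S_cum[-1]
print(np.round(mu, 3), round(remainder, 3))
```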
D.3 Negative-Binomial model based on Bastos et al. (INLA)
The second competitor is a Negative-Binomial model for the delay counts z t,s,d based on the framework and implementation detailed in Bastos et al. (2019): Independently for each region, ι s are the intercepts, δ t,s are first order random walks (RW1) (to capture the epidemic curve), β d,s are RW1 effects to capture the mean delay distribution, γ t,s,d are RW1 temporal effects for each delay to capture temporal variation in the delay distribution, and ξ t,s,d are cyclic RW2 effects of day of the week for each delay, to capture weekly cycles in the reporting delay.
The main difference between this model and the above marginal approximation to the GDM is the lack of separation of systematic variability in the total count and the reporting delay. The model is implemented using the Integrated Nested Laplace Approximation method and the r-inla package (Lindgren and Rue, 2015), using code adapted from Bastos et al. (2019).We found that modelling each of the d = 1, . . . , 14 delays separately resulted in extreme 95% prediction interval upper bounds for some data cutoffs (C). We therefore opted to model only the first D = 6 delays, together with a remainder term r t,s (as in the NB Survivor model), which solved the problem.
D.4 Negative-Binomial models based on McGough et al. (NobBS)
The third and fourth competitor models are based on the framework for the delayed counts z t,s,d proposed by McGough et al. (2020) and implemented using the NobBS package for R. The package allows for both Poisson and Negative-Binomial models, and we opted for the latter to account for over-dispersion: log(λ t,s,d ) = α t,s + log(β d,s ), where α t,s is an RW1 effect to capture the epidemic curve and β d,s are vectors of proportions such that Σ_{d=1}^{D max} β d,s = 1 (recall the maximum delay D max = 14). Effects β d,s therefore capture the expected proportion reported at each delay and are modelled as fixed in time. To account for temporal heterogeneity in the delay distribution, McGough et al. (2020) propose fitting the model to data in a moving window of fixed length, so that the estimated β d,s are more appropriate for recent data. We fit two models using this approach, one with the same 70-day window length as we used for the other approaches (NobBS) and one with a shorter window of 14 days, so that the estimated mean delay distribution is representative of the last two weeks. In both cases, we used the default weakly-informative prior distributions from the NobBS package, which implements the model using the JAGS software facility for MCMC (Plummer, 2003).
D.5 Measurement of computation times
To compare the practicality of each approach, we measured computation times needed to run the whole experiment (all 153 cutoff dates), but some care is needed in how these are interpreted because we did not employ the same parallel computing strategies for each method. All models were run on the same Ubuntu Linux Desktop with an Intel Core i9-7900X CPU and 128Gb of system memory (plus a 256Gb swap drive). For the GDM Survivor, GDM Hazard, and NB Survivor approaches, we ran models (with data for all regions) for different cutoff dates simultaneously across 16 CPU threads. We therefore divide the total run time for these by the ceiling function of (153/16). For the INLA models, we ran models sequentially for each region and for each cutoff date, with parallelisation handled automatically by INLA. We therefore divided the total run time for the INLA approach by 153. Finally, for the NobBS models we ran models for each region in parallel across 7 threads (one per region) and for each cutoff date sequentially. As all the models we tested were independent across regions, they all could feasibly have been run in parallel across regions in the same manner as we did for NobBS. Therefore, for a fair comparison of the relative practicality of each model for daily operation use, we divided the total run time for the NobBS models by (153/7).
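The normalisation of total run times into indicative per-cutoff times can be summarised in a few lines; the totals below are placeholders, not the measured run times.

```python
import math

cutoffs = 153
total_hours = {"GDM Survivor": 40.0, "INLA": 20.0, "NobBS": 15.0}   # placeholder totals

indicative = {
    "GDM Survivor": total_hours["GDM Survivor"] / math.ceil(cutoffs / 16),  # 16 cutoff dates run in parallel
    "INLA":         total_hours["INLA"] / cutoffs,                          # cutoff dates run sequentially
    "NobBS":        total_hours["NobBS"] / (cutoffs / 7),                   # 7 regions run in parallel
}
print(indicative)
```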
The resulting indicative run times are presented in the rightmost column of Table 1 in the main article.
weeks (W = 105). To account for the change in the length of the time series, we decreased the number of knots in the splines of time proportionally from 10 to 5 in the models with W = 35 (days), and increased the number of knots proportionally to 15 for W = 105.
To investigate differences in accuracy, precision, and reliability, we computed mean average errors of predictions, mean 95% prediction interval widths, and 95% prediction interval coverage values.
Appendix F Under-reporting
Where count data are affected by delayed reporting, the total reported count, y t,s , is often still a substantial under-representation of the true count, termed here x t,s .
Reports of COVID-19 cases may, for instance, be affected by under-reporting due to a lack of testing availability, false negative test results, or a lack of symptoms.
Similarly, some deaths due to COVID-19 may be missed if the patient was not tested and COVID-19 was not specified on the death certificate. To take this into account, Stoner and Economou (2020) presents a comprehensive framework for simultaneously modelling under-reporting and delayed reporting. Extended here to include a spatial dimension, this is achieved by replacing Equation (1) in Section 3 of the main text with: x t,s | λ t,s , θ s ∼ Negative-Binomial(λ t,s , θ s ); y t,s | x t,s , π t,s ∼ Binomial(π t,s , x t,s ); log(π t,s / (1 − π t,s )) = i(t, s), for y t,s ≤ x t,s and where i(t, s) is a general function which may include covariates or random effects, e.g. covariates representing access to COVID-19 tests. The likelihood for y t,s is non-identifiable between a high λ t,s and a low π t,s , or vice versa, so in the case where all available counts are assumed potentially under-reported (i.e. x t,s is always unobserved), identifiability can be achieved using prior information. | 6,249.6 | 2022-12-09T00:00:00.000 | [
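A short simulation sketch of this under-reporting layer is shown below (Python); the covariate, coefficients and parameter values are invented for illustration and are not part of the framework's specification.

```python
import numpy as np
from scipy import stats
from scipy.special import expit

rng = np.random.default_rng(3)
T = 50
lam, theta = 120.0, 8.0                        # illustrative mean and scale of the true counts

# True counts x_t ~ Negative-Binomial with mean lam and scale theta.
p_nb = theta / (theta + lam)
x = stats.nbinom.rvs(n=theta, p=p_nb, size=T, random_state=rng)

# Reporting probability pi_t on the logit scale, here driven by a made-up testing-access covariate.
testing = rng.normal(0.0, 1.0, size=T)
pi = expit(0.5 + 0.8 * testing)                # i(t) = 0.5 + 0.8 * covariate (illustrative)

# Reported totals y_t | x_t ~ Binomial(x_t, pi_t), so y_t <= x_t by construction.
y = rng.binomial(x, pi)
```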
"Mathematics"
] |
Extraordinarily Large Contribution Ratio of Ferroelastic Domain Switching to Piezoresponse in Monoclinic (K, Na)NbO3 Films
Ferroelectric monoclinic phases have attracted exceptional attention as the origin of giant piezoelectricity, whilst the detailed contributions of ferroelastic domain switching and electric-field-induced lattice strain to the piezoelectric response remain challenging to clarify. In this work, these contributions to the piezoelectric response are deconvoluted in a (K0.4Na0.6)NbO3 (KNN) film epitaxially grown in the monoclinic phase on Nb-doped SrTiO3, where the as-deposited film features (111) and ($\bar{1}11$) non-180° domains. By time-resolved synchrotron X-ray diffraction, the ferroelastic domain switching and the electric-field-induced lattice strain under an ultrafast electric-field pulse are quantitatively probed. The switching of ≈4% of the volume fraction of (111) domains into ($\bar{1}11$) ones by a moderate electric field, and its response within 30 µs, are unambiguously unveiled. Interestingly, the contribution of domain switching to the strain is larger than the total strain of the film, which is enabled by the negative electric-field-induced lattice strain. The present study connects the macroscopic piezoelectric response in KNN films with its underlying microscopic origins by separating the two contributions, which may provide a knowledge platform for achieving practical lead-free piezoelectric microelectromechanical and nanoelectromechanical systems in the future.
Introduction
[3][4][5][6][7][8][9] The piezoelectric response of ferroelectrics stems from both intrinsic and extrinsic mechanisms. [17] It is widely known for PZT thin films that the latter contributes far more to the piezoelectric response than the former. [18,19] Therefore, non-180° ferroelastic domain switching is critical for boosting the piezoelectric response and developing environment-friendly alternatives to the prototype PZT.
Since the breakthrough by Saito et al., (K, Na)NbO 3 (KNN)-based solid solutions have been one of the most intriguing lead-free piezoelectric candidates, in view of their piezoelectric characteristics being the closest to those of PZT. [20] Extensive efforts have been devoted to elucidating the intricate domain patterns, such as checkerboard, stripe and herringbone-like patterns, [21][22][23] that arise from the competition between electrostatic and elastic energy in pure KNN and KNN-based films. [26] In addition, various advanced techniques such as piezoresponse force microscopy (PFM), transmission electron microscopy and optical means were employed to reveal the domain switching, yielding intriguing and novel results. However, the majority of them are ex situ or semi-in situ methods with limited time resolution, [27][28][29] which impedes insight into the domain switching dynamics. Here, it is worthwhile to mention that the low-symmetry monoclinic phase in several lead-based systems manifesting a large piezoelectric response was verified by first-principles calculations and X-ray diffraction (XRD). [30] On the other hand, to date, experimental studies on the domain switching in monoclinic KNN thin films have not been fully explored and remain to be clarified to achieve a high-performance piezoelectric response. Thus, epitaxial monoclinic KNN films could provide an unambiguous insight into the domain switching dynamics underlying a high-performance piezoelectric response. [33][34] In this work, the detailed contributions of ferroelastic domain switching and electric-field-induced lattice strain to the piezoelectric response were simultaneously revealed in a pure (K 0.4 Na 0.6 )NbO 3 film epitaxially grown on Nb-doped SrTiO 3 (Nb:STO). The (K 0.4 Na 0.6 )NbO 3 film was shown to be in the monoclinic phase, featuring (111) and ( 111) non-180° domains, by XRD reciprocal space mapping (RSM) in conjunction with PFM. By time-resolved synchrotron XRD, the ferroelastic domain switching and electric-field-induced lattice strain under fast electric-field pulses were quantitatively probed. It was found that a moderate bias enhances the extrinsic contribution beyond the total strain of the film due to the appearance of a negative intrinsic contribution, whereas the intrinsic response becomes positive at a larger bias. Both the time for inducing the domain switching by bias application and that for relaxing back to the original state after removal of the bias were only ≈30 μs.
Structural Property and Phase Transition
Crystallinity and epitaxial growth of the KNN film were determined by XRD. Figure 1a displays the XRD 2θ/ω scan of the 850 nm-thick KNN film on Nb:STO (111), where only (hhh) pc (h = 1, 2) diffraction peaks indexed to the KNN film were observed, indicating the perfect (111) pc orientation of the KNN film and the absence of secondary phases. The XRD ϕ scans for KNN (110) pc and Nb:STO (110) were performed, as shown in Figure 1b. The same in-plane orientation of the KNN film as that of the Nb:STO substrate was confirmed by the 3-fold symmetry with an equal 120° interval, demonstrating that the KNN film was epitaxially grown and has a cube-on-cube relationship with the substrate; namely, [ 110] pc KNN (111) pc || [ 110] Nb:STO (111).
To inspect the detailed crystal and domain structures of the as-deposited KNN films, temperature-dependent XRD RSMs for both the symmetric and asymmetric planes were carried out.Figure 1c-e,f-h show the evolution of RSMs around Nb:STO 111 and 112 with incremental temperature, respectively.At room temperature, the film has two diffraction peaks around Nb:STO 111 that correspond to the ( 111) and (111) domains as shown in Figure 1c, and similar but much broader two peaks are observed around Nb:STO 112 (Figure 1f).The latter broader two peaks can be due to the two sets of similar lattice spacings for 121 and 121, and for 211, 112, 211, and 112 in the monoclinic phase.Here note that the monoclinic phase has comparable energy with the orthorhombic phase, albeit the more energy-stable monoclinic structure (M c ) at room temperature was confirmed. [35]At 300 °C, the 111 and 111 peaks merged together (Figure 1d), and the two peaks around Nb:STO 112 become clearer and can be assigned to 211 and 112 (Figure 1g), indicating the tetragonal phase.At 500 °C, the 211 and 112 peaks merged almost together (Figure 1h), corresponding to the cubic phase.The tetragonal and cubic phases in the KNN film at high temperatures agree with those in KNN bulks of the same chemical composition.
To accurately clarify the phase at room temperature, the structural characterization of a 2 μm-thick KNN film deposited on a Nb:STO (001) substrate under the same deposition conditions as that on the Nb:STO (111) substrate was also performed. The typical XRD 2θ/ω scans and ϕ scans for KNN (110) pc and Nb:STO (110), shown in Figure S1a,b, correspond to monoclinic M c symmetry (Figure S2, Supporting Information). It is worth mentioning that similar monoclinic (001) pc KNN films were also reported. [24,26] The stabilization of the monoclinic phase is also supported by the fact that bulk KNN of the same chemical composition, (K 0.4 Na 0.6 )NbO 3 , exhibits the monoclinic phase at room temperature. [36] Furthermore, as the temperature increased, the crystal symmetry transformed from monoclinic to tetragonal and cubic, in accordance with that of bulk KNN (Figure S1c-e,f-h, Supporting Information).
The set of structural analyses provided above lets us know the polarization directions of the (111) pc and (001) pc -oriented KNN films as depicted in Figure 1i and Figure S1i (Supporting Information), respectively.In the case of (001) pc -oriented KNN film, there are four variants with different in-plane polarization directions (Figure S1i, (Supporting Information).However, all the variants have the same out-of-plane polarization component; they are equivalent in terms of piezoelectric response d 33 when the electric field is applied along with the film thickness, and there will be no domain switching induced by the electric field.On the other hand, the (111) pc -oriented KNN film has nonequivalent domains; the ( 111) and (111) domains have different polarization components along not only the in-plane direction but also the out-of-plane directions (Figure 1i).Therefore, these are nonequivalent in terms of the piezoelectric response d 33 , and domain switching by the electric field is possible.
Ferroelectric Property and Domain Structure
The polarization-electric (P-E) hysteresis loops for the KNN films were recorded at a maximum electric field of 300 kV cm −1 and a frequency of 10 kHz, as shown in Figure 2a and Figure S3 (Supporting Information).These films, as can be seen, possess well-developed P-E hysteresis loops with negligible leakage current.For the (001) pc KNN film (Figure S3, Supporting Information), the remnant polarization, P r , and saturation polarization, P sat , were 13.5 and 15.7 μC cm −2 , respectively.P r and P sat are comparable to those reported for single crystals with analogous compositions listed in Table S1 (Supporting Information), proving that the XRD observed complete c domain of M c phase.The (111) pc KNN film demonstrated less square hysteresis (Figure 2a), where P r and P sat were 2.6 and 6.3 μC cm −2 , respectively.Based on the polarization axis in the M c phase, P r and P sat for the (001) pc KNN film, and the volume fraction of the ( 111) domain in the (111) pc KNN film, the P r and P sat for the present (111) pc KNN film can be roughly predicted to be 8 and 9 μC cm −2 .The observed P r was somewhat smaller than the expectation, but P sat was not far from it.
Before turning to the piezoelectric properties, it is important to investigate the domain structure of the (111) pc KNN film, which has the possibility of domain switching. As described above, there are (111) and ( 111) domains, and the polarization components of the nonequivalent ( 111) and (111) domains are distinct along not only the out-of-plane but also the in-plane directions, as schematically shown in Figure 1i. Namely, there are 6 in-plane variants for (111) domains and 3 in-plane variants for ( 111) domains. Figure 2b,c shows the vertical and lateral PFM images of the (111) pc KNN film after a poling treatment of 25 V and the corresponding component histograms of these images. Here, Acos(′) is the so-called mixed response combining the PFM amplitude A and phase ′; in the present study, the latter was adjusted so that the phase at the resonance frequency is 90°, to correct the slight offset of the measured frequency dependence of the phase. As can be found in Figure 2b, Acos(′) values in the vertical PFM image are all positive, indicating the poled state of the film. On the other hand, there are two different regions, labeled V+1 and V+2, indicating the coexistence of ( 111) and (111) domains with different out-of-plane polarization components. The fraction estimated from the histogram was 44%:56%, which is approximately consistent with the fractions of the ( 111) and (111) domains, though the difference in the in-plane polarization components was not clearly visualized within the accuracy of our PFM measurements. Although further investigation is needed to clarify the detailed domain patterns, the present PFM study showed a domain structure having two different out-of-plane polarization components, which agrees with our XRD study.
Domain Switching Dynamics
Figure 3a shows the schematic configuration of the in situ time-resolved synchrotron XRD measurement. The pulsed voltages were applied to clarify the effect of the electric field on the domain structure. The films were poled before the measurements, and a unipolar pulse voltage with a time width of 50 μs and the same direction as the poling bias was repeatedly applied with a time interval of 1000 μs at each diffraction angle. The time width of 50 μs for the pulse voltage was selected from the viewpoint of the saturation time of the domain switching, as discussed below. By integrating the photon intensity during the measurement, the synchrotron XRD spectra with and without bias were constructed. [37,38] The synchrotron XRD spectra of the (111) pc KNN film are shown in Figure 3b. As aforementioned, the ( 222) and (222) peaks correspond to the ( 111) and (111) domains, respectively. The positions of these two peaks barely alter with increasing bias when the bias is ≤ 36 kV cm −1 , but their intensity ratio varies prominently: ( 222) increases while (222) decreases. Subsequently, the peak positions slightly shift toward lower angles when the bias is ≥ 48 kV cm −1 . The variation in the volume fractions of the domains is determined by the alteration in the peak intensity ratio. Figure 3c shows the volume fraction of the ( 111) domains, V ( 111) , which can be expressed as V ( 111) = I ( 222) / (I ( 222) + I (222)), where I ( 222) and I (222) refer to the intensities of the ( 222) and (222) peaks. As displayed in Figure 3c, V ( 111) increased nonlinearly and saturated at a ~4% larger value as the electric field reached ≥ 48 kV cm −1 , where the shift of the peak position evolved. The increment of V ( 111) indicates that part of the (111) domains was reconfigured into ( 111) domains possessing the larger out-of-plane polarization component; namely, electric-field-driven domain switching. The small volume fraction change of 4% in the present epitaxial (111) pc KNN film would be due to the substrate clamping, by which a large tensile stress is induced in the film when the domain switching takes place. At the same time, the substrate clamping may provide the driving force to return to the original domain state when the electric field is removed.
(Caption of Figure 4: comparison of the extrinsic contribution of the (111) pc KNN film with that of state-of-the-art lead-based and lead-free materials; PPT and MPB represent the polymorphic phase transition and morphotropic phase boundary, respectively. [45][46][47][48][49][50][51][52][53])
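Taking the expression above for V ( 111) in terms of the two peak intensities, the calculation is a one-liner; the intensities below are illustrative placeholders chosen only to reproduce a change of about 4%, not the measured data.

```python
def volume_fraction(I_m222, I_222):
    """Volume fraction of the (-111) domains from the (-222) and (222) peak intensities."""
    return I_m222 / (I_m222 + I_222)

# Illustrative intensities (arbitrary units).
v0 = volume_fraction(I_m222=44.0, I_222=56.0)      # zero field
v1 = volume_fraction(I_m222=48.0, I_222=52.0)      # under bias
print(f"Delta V = {100 * (v1 - v0):.1f} %")         # ~4% switching, as reported in the text
```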
Taking the volume fractions and field-induced strain of the ( 111) and (111) domains into account, the total strain, S, can be quantitatively determined by Equation (2), which can be simplified as Equation (3). Figure 3d plots the field-induced strain in terms of the intrinsic and extrinsic contributions, together with the total strain. As observed, when the bias is ≤ 36 kV cm −1 , the strain of both the ( 111) and (111) domains from the intrinsic contribution was nearly zero, or even slightly negative. On the contrary, the strain arising from the extrinsic contribution, i.e., domain switching, was large and increased with increasing bias. The extrinsic-contribution-driven strain saturated above 36 kV cm −1 because of the substrate clamping effect, but the intrinsic contribution increased afterward. Consequently, the intrinsic and extrinsic piezoelectric responses are highly dependent on the applied electric field; a moderate bias enhances the extrinsic contribution via domain switching, whereas the intrinsic contribution emerges only under large bias, after the saturation of domain switching. When the extrinsic contribution is dominant, the domain switching from (111) to ( 111), which involves a large polarization rotation, induces tensile stress in the entire film because the in-plane lattice constant of the ( 111) domain is smaller than that of the (111) domain. As a consequence, the strain from the intrinsic contribution is suppressed to nearly zero or below. However, the domain switching reaches its limit at a certain bias due to the large elastic energy accumulated by the switching; thus, a positive intrinsic contribution becomes noticeable under large bias. Although the strains induced by the intrinsic and extrinsic piezoelectric responses are nonlinear, the overall field-induced strain increases monotonically with the electric field. On average, the effective piezoelectric parameter d 33,f of the (111) pc KNN film was 50.2 pm V −1 , which is comparable to the macroscopic piezoelectric constant (51.8 pm V −1 ) measured by DBLI (Figure S4, Supporting Information). It should be emphasized here that just a small fraction of domain switching significantly enhances d 33,f in the current film. Because of the slightly negative intrinsic response under moderate bias, the extrinsic contribution ratio to the overall d 33,f surpasses 100%, as shown in Figure 3e.
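If d 33,f is taken as the slope of the field-induced strain versus the applied field, the order of magnitude quoted above can be reproduced from illustrative strain-field pairs; the numbers below are placeholders chosen to give a slope near 50 pm V −1 , not the measured data.

```python
import numpy as np

# Illustrative field-strain pairs: strain in %, field in kV/cm.
E_kV_per_cm = np.array([12.0, 24.0, 36.0, 48.0, 60.0])
strain_percent = np.array([0.006, 0.012, 0.018, 0.024, 0.030])

# d33,f as the slope of strain vs. field: convert % -> absolute strain and kV/cm -> V/m.
E_V_per_m = E_kV_per_cm * 1e5
d33_f = np.polyfit(E_V_per_m, strain_percent / 100.0, 1)[0]   # m/V
print(f"d33,f ~ {d33_f * 1e12:.1f} pm/V")
```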
[41][42][43] Figure 4 shows the comparison of the extrinsic contribution in the present (111) pc KNN film with that of state-of-the-art lead-based and lead-free ferroelectric materials.It was found that the extrinsic contribution in the (111) pc KNN film is far superior to other lead-based and lead-free single crystals, ceramics as well as thin films, except for the most recent lead-based PZT thin films.This is an interesting finding since such a large extrinsic contribution induced by a small domain switching has never been reported for lead-free materials before.Especially, the large extrinsic contribution is usually found in KNN-based systems with the composition in the vicinity of polymorphic phase boundary providing an easy polarization rotation path. [46,54]The results of the present work may give a different strategy apart from the phase boundary engineering to further enhance the piezoelectric performance.
Domain Switching Speed
Figure 5a shows the time evolutions of the ( 222) and (222) diffraction peak profiles, and the corresponding time dependence of the volume fraction of the ( 111) domain under an electric field of 36 kV cm −1 . In accordance with Figure 3b, the intensities of the ( 222) and (222) peaks respectively increase and decrease under the field. Moreover, the peak intensities recover to their initial values after the removal of the field, demonstrating that the domain switching is fully reversible. As confirmed from the time-dependent ΔV ( 111) in Figure 5b, this domain switching saturated within the 50 μs voltage pulse. Both the time required to induce the domain switching by the field and the time to relax back to the original state after withdrawing the field were ≈30 μs, indicating that the extrinsic piezoelectric response can follow a frequency of 30 kHz (Figure 5b). These times are substantially longer than those (25 and 95 ns) of the (111) pc -epitaxial Pb(Zr 0.65 Ti 0.35 )O 3 film with rhombohedral phase from our previous report. [15] The physical reasons why the majority of the domains do not switch and why the domain switching is slow are not yet clear. Nonetheless, two probable explanations are proposed here. One is the difference in polarization axis. Unlike the rhombohedral (111) pc PZT film, [15] the domain switching from (111) to ( 111) in the (111) pc KNN film involves rotation of the polarization from axes lying nearly in the film plane, as shown in Figure 5c, which would require a much larger bias or a longer time when the bias is small. [55] Indeed, the applied electric field for the rhombohedral (111) pc PZT films reported by Ehara et al. was 1000 kV cm −1 , [15] which is almost 30 times that used for the present KNN films. The other is that the reported (111) pc PZT film is 200 nm thick, whereas the present (111) pc KNN film is 850 nm thick. Thinner films would respond faster than thicker films because domain switching takes place through domain wall motion. [8,56,57] As a consequence, a comprehensive investigation is needed to understand the detailed domain switching dynamics of KNN films in the future.
Conclusion
In summary, this work explicitly deconvolutes the contributions of ferroelastic domain switching and electric-field induced lattice strain to the piezo-response by quantitatively probing the piezoelectric dynamics in (111)-epitaxial (K 0.4 Na 0.6 )NbO 3 film under electric-field pulses.The (K 0.4 Na 0.6 )NbO 3 film featuring (111) and ( 111) non-180°domains in the monoclinic phase, were proven by XRD RSMs in conjunction with PFM.Time-resolved synchrotron XRD reveals that the extrinsic contribution to strain by 4 vol% domain switching under the moderate bias exceeded the total strain due to the negative intrinsic contribution, whereas the intrinsic response became positive under large bias.Meanwhile, both times for inducing the domain switching by applied bias and relaxing back to the original state after withdrawing the bias were only ≈30 μs.Intriguingly, the extrinsic contribution ratio in the (111) pc KNN film is far superior to the state-of-the-art lead-free ferroelectric materials.Our findings provide a deep understanding of the underlying mechanism and would allow for fine-tuning and significant improvement of the piezoelectric performance, eventually hastening to replacement of the lead-based thin films in piezoelectric MEMS and NEMS devices.
Experimental Section
KNN Films Pulsed laser deposition with a KrF excimer laser ( = 248 nm) was used to grow KNN films on 0.5 wt.% Nb-doped SrTiO 3 (001) and (111) substrates.The films were deposited at 673 °C under 1 Torr oxygen pressure.The laser energy and repetition rate were 95 mJ and 7 Hz, respectively.The KNN targets with K:Na = 1:1 fabricated by a solid-state reaction from K 2 CO 3, Na 2 CO 3, and Nb 2 O 5 were used.The thickness of the deposited films on the (001) and (111) substrates was 2 μm and 850 nm, and their chemical composition was determined to be around (K 0.4 Na 0.6 )NbO 3 .Pt top electrodes with diameters of 100 μm and 200 μm were deposited using electron beam evaporation on the surface of KNN films via a shadow mask.
Structural Characterizations: XRD using Cu K 1 X-rays (Bruker, D8 DIS-COVER) was used to determine the crystallographic structure and orientation of the films.A field emission scanning electron microscope (FESEM) equipped with energy-dispersive X-ray spectroscopy (EDX) was used to examine the cross-section and chemical composition of the films (Hitachi, S-4800).
Electrical and Electromechanical Characterizations: A ferroelectric tester (Toyo, FCE-1) and a double-beam laser interferometer (aixACCT Systems, aixDBLI) were used to record room-temperature ferroelectric loops and macroscopic piezoelectric response, respectively.Local piezoelectric response was acquired by PFM using Asylum Research MFP-3D equipped with a conductive Ti/Ir-coated tip (spring constant k of 1.4-5.8Nm −1 , resonant frequency f of 58-97 kHz), for which the dual AC resonance-tracking (DART) mode with AC voltage of 0.5 V was employed to avoid crosstalk with topographic information.
In Situ Time-Resolved Synchrotron XRD: The electric-field-driven lattice strain and domain switching were evaluated using in situ time-resolved synchrotron XRD at SPring-8 in Japan, using the BL13XU and BL15XU beamlines. The diffraction patterns were acquired using an a-Si avalanche photodiode detector, and the collimated X-ray beam was focused on the electrode that was used for applying pulsed voltages. The setup for the in situ time-resolved synchrotron XRD measurements has been detailed elsewhere. [13,15,58,59] Figure S1a,b (Supporting Information) clearly indicate the (001) pc out-of-plane orientation with 4-fold symmetry with an equal 90° interval, confirming the cube-on-cube relationship with the substrate, i.e., [100] pc KNN (001) pc || [100] Nb:STO (001). In Figure S1c (Supporting Information), a single diffraction peak from the film was observed in the symmetric RSM around Nb:STO 002. The asymmetric RSM around Nb:STO 203, however, consists of three split peaks (Figure S1, Supporting Information). Two of them are shifted upward and downward along Q z with the same Q x , and the third one has a larger Q x and is located at the midpoint of the other two along Q z . These results suggest that the film is in either the monoclinic or the orthorhombic phase. However, based on the lattice parameters estimated from the RSMs (a = 3.990 Å, b = 3.940 Å, c = 4.001 Å, and a monoclinic angle of 90.28°), it can be clearly identified that the fabricated (001) pc KNN film is in the monoclinic phase, with the out-of-plane direction along the c-axis and the in-plane directions composed of the a and b axes.
Figure 1 .
Figure 1. a) XRD 2θ/ω scan of the KNN film grown on the Nb:STO (111) substrate. b) ϕ scan profiles for KNN (110) pc and Nb:STO (110). c-e) XRD RSMs around Nb:STO 111 with increasing temperature. f-h) XRD RSMs around Nb:STO 112 with increasing temperature. i) Schematic illustration of possible polarization directions in the KNN film, for which the downward directions are excluded. Yellow and orange arrows are the polarization directions for the ( 111) and (111) domains, respectively.
Figure 2 .
Figure 2. a) Room temperature P-E hysteresis loop measured at 10 kHz.b) Vertical PFM Acos(′) image and the corresponding component histogram.c) Lateral PFM Acos(′) image and the corresponding component histogram.
Figure 3 .
Figure 3. a) Schematic of in situ time-resolved synchrotron XRD measurement setup.In situ time-resolved synchrotron XRD with incremental electric fields for the (111) pc KNN film: b) 2/ scans, c) field dependence of volume fraction of ( 111) domain, d) field-induced strain, and e) extrinsic piezoelectric-response contribution.
Figure 5 .
Figure 5. Time evolution for the (111) pc KNN film with an applied electric field of 36 kV cm −1 . a) Color map of the ( 222) and (222) XRD peak profiles as a function of time (horizontal axis) and diffraction angle (vertical axis). b) Time-dependent volume fraction of the ( 111) domain estimated from (a). The dashed lines mark the start and finish times of the application of the electric field. c) Schematic illustration of domain switching from (111) to ( 111). The orange and yellow arrows denote the respective polarization vectors.
"Physics"
] |
Raman-guided subcellular pharmaco-metabolomics for metastatic melanoma cells
Non-invasively probing metabolites within single live cells is highly desired but challenging. Here we utilize Raman spectro-microscopy for spatial mapping of metabolites within single cells, with the specific goal of identifying druggable metabolic susceptibilities from a series of patient-derived melanoma cell lines. Each cell line represents a different characteristic level of cancer cell de-differentiation. First, with Raman spectroscopy, followed by stimulated Raman scattering (SRS) microscopy and transcriptomics analysis, we identify the fatty acid synthesis pathway as a druggable susceptibility for differentiated melanocytic cells. We then utilize hyperspectral-SRS imaging of intracellular lipid droplets to identify a previously unknown susceptibility of lipid mono-unsaturation within de-differentiated mesenchymal cells with innate resistance to BRAF inhibition. Drugging this target leads to cellular apoptosis accompanied by the formation of phase-separated intracellular membrane domains. The integration of subcellular Raman spectro-microscopy with lipidomics and transcriptomics suggests possible lipid regulatory mechanisms underlying this pharmacological treatment. Our method should provide a general approach in spatially-resolved single cell metabolomics studies.
Single-cell omics methods have revolutionized biology by resolving the heterogeneity that underlies population averages [1][2][3][4][5] . One envisioned application is that of pharmaco-omics (e.g., pharmacogenomics), in which the genetic or functional composition of diseased tissues is harnessed to guide the deployment of custom therapeutic strategies for individual patients 6,7 . Single-cell metabolomics has lagged behind other omics methods owing to the lack of proper toolsets for non-perturbative and targeted (analyte-specific) detection, but it has the potential to offer deep insights by shedding light on the metabolic reprogramming that accompanies many disease states 8,9 . Mass spectrometry metabolomics has recently advanced to the level where analyte labeling techniques can permit multiplex analysis from single cells 10,11 , but it is intrinsically sample-destructive, which prohibits live-cell analysis. Fluorescence-based methods offer high sensitivity 12 , but with poor multiplexing, and fluorophore labels can hinder metabolite processing 8 .
As a non-invasive optical tool, Raman spectroscopy probes the vibrational motions of chemical bonds, which allows detection of endogenous metabolites in a label-free manner. Multiple types of cellular metabolites have been identified by Raman fingerprinting, including nucleic acids, amino acids, lipids, glucose, neurotransmitters and etc [13][14][15] . In addition to spectroscopy, Raman microscopy further generates subcellular chemical maps by targeting predetermined vibrational peaks. In particular, the recent emergence of stimulated Raman scattering (SRS) microscopy, utilizing stimulated emission quantum amplification, provides imaging quality comparable to fluorescence microscopy with resolution of~450 nm and speed up to video-rate in live cells and tissues 16,17 . By sweeping the laser across a designated wavelength range, hyperspectral-SRS (hSRS) rapidly produces Raman spectra of up to 600 cm −1 at subcellular locations [18][19][20][21] . Going beyond label-free analysis, Raman spectro-microscopy provides targeted detection and imaging of specific metabolites by recent strategies of stable-isotope labeling 22,23 .
In this work, we explore Raman spectro-microscopy for subcellular pharmaco-metabolomics. We adopt a series of BRAFmutant patient-derived melanoma cell lines as a model system. Metastatic melanoma is the most-deadly form of skin cancers, for which 66% of them harbor mutations in the BRAF kinase 24 . We utilize Raman spectro-microscopy to characterize this series of related but distinct BRAF-mutant melanoma cancer cell phenotypes, each corresponding to a different level of cancer cell differentiation, from melanocytic (differentiated) to mesenchymal (de-differentiated) [25][26][27] . The associated biology of these and similar melanoma models has been deeply investigated, which informs our study here [27][28][29][30][31] . The sensitivity of these cell lines to various targeted inhibitors and immunotherapies associates with de-differentiation status 27,28,32 . Differentiated phenotypes exhibit higher sensitivity toward BRAF inhibitors, while the dedifferentiated phenotypes exhibit an innate resistance 27,33,34 . We hence mine the resulting spectroscopic information to identify phenotype specific, druggable metabolic susceptibilities.
We first establish a transcriptional relationship between cellular de-differentiation and metabolic reprogramming. We then integrate single-cell Raman data with transcriptomics analysis to establish that Raman-extracted trends in cellular chemical composition correlate with corresponding trends in gene expression. We identify and validate two druggable metabolic susceptibilities. One is specific to the differentiated melanoma cell lines studied, and is consistent with trends in gene expression. The second susceptibility is specific to the de-differentiated cell lines, and is uniquely extracted from the Raman analysis of subcellular lipid droplets (LDs). It is not detected through either bulk transcriptional analysis or bulk metabolomics, but can be validated by lipidomics. Raman analysis of single cells is thus shown to be a potent pharmaco-metabolomics tool.
Results
Metabolic features are shown in transcriptome and Raman analysis. Tsoi et al. recently published a pharmaco-genomic analysis of 53 patient-derived BRAF-mutant melanoma cell lines 27 . Notably, they demonstrated that the expression profiles of these cell lines faithfully reflected what was seen in the corresponding patient tumors. Further, they adopted unsupervised clustering of those profiles and classified the cell lines into four groups based upon de-differentiation status: melanocytic (differentiated), transitory, neural-crest-like, and mesenchymal (de-differentiated). We first selected a subset of 30 of these cell lines for analysis, on the basis that they did not also contain RAS mutations. As previously reported 27 , the whole transcriptomic data of these 30 cell lines, when visualized within a two-dimensional space (see "Methods"), yielded a clear separation into four distinct phenotypes, separated by level of de-differentiation (Fig. 1a, top panel). The nature of cancer cell de-differentiation means that energetic requirements, cellular morphology, etc., are all altered, suggesting that cellular differentiation is also accompanied by metabolic reprogramming 35 . We tested this hypothesis by similarly analyzing the same 30 melanoma cell lines, but including in that analysis only ~1600 genes associated with metabolic processes. In fact, this calculation yielded an almost identical clustering (Fig. 1a, bottom panel). Just like the well-reported phenotypic markers 28,36 , metabolic genes also showed a clear phenotype-dependent expression trend, with associated functions that span different metabolic processes (the representative, top-4-ranked metabolic genes are shown at the bottom of Fig. 1b; the complete heatmap and list of top-ranked metabolic genes are shown in Supplementary Fig. 1 and Supplementary Table 1). This implies that metabolic susceptibilities that exist within these cell lines may well vary with cellular de-differentiation, similar to what is known for inhibitors that target oncogenic signaling 37 .
We selected five representative patient-derived cell lines based upon the single criterion that they collectively spanned the range of de-differentiation status (indicated at the top of Fig. 1b, with information listed in Supplementary Table 2), from M381 (undifferentiated) to M262 (differentiated). We acquired spontaneous Raman spectra at the single-cell level from all five cell lines (Supplementary Fig. 2a) over the molecular fingerprinting spectral range of 700-3100 cm −1 (Fig. 1c). These spectral shapes are largely similar across the four phenotypes. To extract differences between these spectra, we first utilized unsupervised surprisal analysis (SA) for dimension reduction 38 . SA is similar to principal component analysis (PCA) in that it is an orthogonal transformation of the data, with the dominant eigenvectors (also called constraints) capturing most of the variance observed in the Raman spectra of different cell lines. While SA has been successfully applied to analyzing gene expression datasets 26,39 , an early application was for the analysis of molecular spectra 40 . We first confirmed that the constraints and their respective weights obtained from SA could recapitulate the fine Raman spectral features (Fig. 1d and Supplementary Fig. 2b). We then generated a heatmap of the top 5 constraints, labeled in ascending order as λ 0 -λ 4 , with each cell line represented by ten individual cells (Fig. 1e). The largest constraint, λ 0 , captures universally shared spectral features and is expected to be invariant across cell lines. This shared spectrum, with peak assignments, is provided in Supplementary Fig. 3. The second largest constraint, λ 1 (Fig. 1e), captures the greatest variance from spectrum to spectrum and exhibits an average score that clearly changes with cellular de-differentiation (Fig. 1f).
Fig. 1 Transcriptomics and spontaneous Raman spectra analysis of metastatic melanoma cell lines. a Dimensional reduction of bulk transcriptomics data of 30 melanoma cell lines yields a clear separation of four different melanoma phenotypes, based on either the expression of all genes (top panel) or ~1600 metabolic genes (bottom panel). b A heatmap of gene expression levels for representative genes involved in defining the cellular and metabolic phenotypes shown in a. The black-font row labels are well-reported phenotypic marker genes for defining different subtypes. The gray-font row labels are the top 4 ranked metabolic markers within each phenotype, representing different processes, as identified by matching the symbol with the key at the bottom of the heatmap. The color-coded bars at the top of the heatmap indicate the different cellular phenotypes for each cell line, while the arrows point to the five representative cell lines selected for Raman analysis. c Spontaneous Raman spectra of the five selected cell lines (averaged over 50 spectra from 10 cells per cell line, examined over three independent experiments). Each spectrum is offset along the y-axis with no changes to absolute intensities. d A representative Raman spectrum of M262 cells reconstructed by summing the constraints λ 0 -λ 4 identified using surprisal analysis (SA). The inset plot shows the high correlation between the reconstructed and the measured spectrum.
e Heatmap of the scores of the top five constraints (λ 0 -λ 4 ) calculated by SA of the Raman spectra across the five cell lines (10 cells from each cell line). Each column represents the SA scores across λ 0 -λ 4 for an individual cell. Each row represents the score of a given constraint across multiple single cells. f The average score of constraint 1 (λ 1 ) of 10 cells across all 5 cell lines. Data shown as mean ± SEM. g The spectrum of λ 1 , with Raman peak assignments. The most negative feature is from the CH 3 vibration at 2940 cm −1 , arising mainly from proteins (blue, boxed). The most positive feature is a CH 2 vibration at 2845 cm −1 , mainly from lipids (red, boxed). Source data are provided as a Source data file.
The remaining constraints (λ 2 -λ 4 ) are much smaller in amplitude and less revealing in spectral features (Fig. 1e and Supplementary Fig. 3). The spectral distribution of λ 1 exhibits positive contributions from CH 2 vibrational stretches (2845 cm −1 , largely arising from lipids), and negative contributions from CH 3 stretches (2940 cm −1 , mostly from proteins) (Fig. 1g and Supplementary Fig. 4). The λ 1 score declines from M262 to M381. This indicates that the lipid/protein (CH 2 /CH 3 ) ratio decreases with the progression of de-differentiation in these melanoma cell lines (Fig. 1g). We note here that the relatively high variance in λ 1 originates from intracellular heterogeneity due to the relatively low sampling in spontaneous Raman acquisition (Supplementary Fig. 2a). This issue is largely bypassed in SRS imaging, as shown below, with much higher resolution and sampling.
Differentiated cells are susceptible to fatty acid synthesis. After mining the metabolic-associated spectral features from the wide fingerprint region, we next turned to live, single-cell imaging investigations to capture intracellular heterogeneity. We utilized SRS imaging (Supplementary Fig. 5a), with microsecond-level pixel dwell time, subcellular resolution and linear concentration dependence for straightforward metabolic quantification 16,17 , to interrogate how the overall trend shown in Fig. 1f is reflected at the whole-cell level. We targeted the lipid peak at 2845 cm −1 (attributed to CH 2 vibrations 16,17 , Fig. 2a, top) and the protein peak at 2940 cm −1 (from CH 3 vibrations 41 , Fig. 2a, middle). The generated CH 2 /CH 3 ratiometric images (Fig. 2a, bottom) indeed nicely resolved a decreasing trend from melanocytic M262 cells toward mesenchymal M381 cells, implying that the more differentiated cells are relatively richer in lipids. SRS images of fixed cells yielded similar conclusions (Supplementary Fig. 5b). After quantifying the averaged CH 2 /CH 3 intensity ratios (Fig. 2b and Supplementary Fig. 5c), we then asked whether this trend extracted from Raman imaging could be correlated to transcriptomics data. Strongly correlating or anti-correlating gene expression patterns are shown in the heatmap of Fig. 2c. In particular, several genes associated with lipid processing are identified with strong positive correlations, including fatty acid synthase (FASN), 3-hydroxyacyl-CoA dehydrogenase, and Malonyl CoA-acyl carrier protein transacylase, mitochondrial. In fact, the gene ontology (GO) term for fatty acid synthetic processes exhibits a strong linear correlation with the CH 2 /CH 3 Raman ratios (Fig. 2d, top, r = 0.93, p = 0.02). Also notable are genes (Fig. 2c) and biological processes that exhibit a negative correlation with CH 2 /CH 3 , such as those associated with the cell migration pathway (Fig. 2d, bottom, r = −0.91, p = 0.03). A highly migratory nature is a known feature of mesenchymal phenotypes 42 . Similar relationships for features strongly related to melanocytic (Supplementary Fig. 6a, top) or mesenchymal cell types (Supplementary Fig. 6a, bottom) were also resolved. These data demonstrate that single-cell Raman imaging yields information consistent with transcriptional profiling.
Elevated FASN expression (Supplementary Fig. 6b) in the differentiated cell lines implies increased de novo fatty-acid synthesis. We first sought to further explore this biology through targeted SRS imaging. Elevated glucose catabolism is a characteristic of many cancers, and produces an excess of the glycolytic end-product, pyruvate, some of which can be converted to acetyl-CoA and then further converted, through an FASN-mediated pathway, to fatty acids 43,44 (Fig. 2e). The relative importance of de novo fatty-acid synthesis in the various cell lines can be inferred by tracking the conversion of glucose into fatty acids (Fig. 2e). Thus, we incubated the cells for 3 days in media in which regular glucose was replaced with deuterated glucose (d 7 -glucose) before SRS imaging (Fig. 2f). The rationale is that an active de novo fatty-acid synthetic pathway will convert some of this d 7 -glucose into deuterated lipids, which exhibit a unique lipid-associated C-D spectral signature around 2150 cm −1 , effectively yielding a live-cell assay of FASN activity 45 . SRS images of the five cell lines, collected at 2150 cm −1 , are provided in Fig. 2f. The measured cytoplasmic Raman spectrum (Supplementary Fig. 6c) matches what is expected from deuterated lipids 45 . The subsequent quantification of average C-D signals across multiple image sets (Fig. 2g) implies that de novo fatty acid synthesis is most activated in the differentiated cell lines M262, M229, and M397 and remains relatively low in de-differentiated M409 and M381.
Elevated FASN activities in the more differentiated melanoma cell lines suggest that the FASN pathway may constitute a metabolic susceptibility in just those phenotypes. In fact, interruption of this pathway has been previously studied for cancer drug development 46 . We tested this hypothesis by treating the cells with the FASN inhibitors 10 μM cerulenin 46 or 0.2 μM TVB-3166 47 for 3 days. As hypothesized, the three most differentiated phenotypes exhibited the highest sensitivity to cerulenin and TVB-3166, while the two most undifferentiated cell lines were barely affected by such drug treatments (Fig. 2h and Supplementary Fig. 6d). These data demonstrate that single-cell Raman spectro-microscopy, integrated with transcriptional profiling, can uncover phenotype-specific druggable susceptibilities in cancer cells.
Mesenchymal M381 accumulates selected lipids in lipid droplets. The above results indicate that metabolic susceptibilities within BRAF mutant melanoma cell lines can be strongly dependent upon de-differentiation phenotype. A second relevant example is that of mesenchymal-specific GPX4-inhibitor-induced ferroptosis identified using pharmacogenomics by Tsoi et al. 27 . That susceptibility is related to lipid peroxidation. Finding new druggable targets for the highly-invasive ( Supplementary Fig. 7a) and BRAFi innate-resistant phenotype (Supplementary Table 2) might facilitate the development of clinically relevant inhibitors. We thus hypothesized that a deep interrogation of the lipid biochemistries in these cell lines might reveal additional druggable susceptibilities that distinguish the mesenchymal phenotypes. To this end, we studied the role of lipid storage in LDs. LDs are sub-micrometer-size lipid reservoir organelles 48,49 that are comprised of a highly dynamic mixture of neutral lipids (i.e., triacylglycerides (TAG) and cholesteryl esters (CE)). They are increasingly recognized for their central roles in modulating the transport and oxidation of lipids through interaction with other organelles 49,50 .
We used hSRS microscopy to analyze the composition of these sub-cellular LDs at a spatial resolution of ~450 nm. Such live-cell compatible and non-perturbative subcellular quantification by hSRS is beyond what mass spectrometry and fluorescence analysis could offer. The unique spherical morphologies of LDs are readily imaged by SRS. Since they are lipid-rich, they exhibit large CH 2 Raman scattering signals near 2845 cm −1 (Fig. 3a). We generated Raman spectra of LDs from each of the 5 cell lines by acquiring SRS images across the C-H vibrational region from 2800 to 3050 cm −1 with a high spectral resolution of 8 cm −1 (Supplementary Movie 1 and Fig. 3b). To extract the phenotype-dependent variations from these spectra, we again employed surprisal analysis (SA), which resolved a universal constraint λ 0 and just a single additional constraint λ 1 . As before, we confirmed that summing these two dominant constraints could recapitulate the measured hSRS spectra of LDs (Supplementary Fig. 7b). We then generated a heatmap of the weights of λ 0 and λ 1 for individual LDs, grouped by their associated cell lines (Fig. 3c). Again, λ 0 is constant across cell lines (Fig. 3c and Supplementary Fig. 8a), while λ 1 exhibits a uniquely high positive amplitude for the mesenchymal M381 cell line (Fig. 3c, d). Based on Raman spectra from reference pure lipid species (Supplementary Fig. 8b), we annotated the spectral distribution of λ 1 . The 3022 cm −1 peak is assigned to the C-H stretch where the carbon is associated with a C = C double bond (i.e., =C-H). This spectral feature arises mostly from unsaturated lipids (UL) 51 . The broad band from 2957 to 2997 cm −1 largely originates from the C-H vibrations on the sterol rings of cholesteryl ester (CE) (Fig. 3e) 52,53 . This spectral composition of λ 1 suggests that LDs within M381 cells bear the highest level of UL and CE among the five cell lines. This is further verified by direct normalization of all hSRS spectra to 2908 cm −1 (Fig. 3f, g), which is a zero point in λ 1 (Fig. 3e). The observation that the intracellular LDs within the mesenchymal M381 cell line exhibit a relatively increased level of unsaturated lipids, relative to the other cell lines (Fig. 3g, bottom), suggested a novel lipid regulation process within that cell line. We first examined whether this trend of lipid unsaturation was reflected in bulk analysis. We performed gas chromatography-mass spectrometry (GC-MS) based analysis of fatty acids from cell pellets (Fig. 3h). For presentation, we follow the common lipid notation of xx:yy, where xx represents the number of carbon atoms in the lipid chain and yy refers to the number of double bonds (Fig. 3h). Although M381 cells show a slightly enhanced level of 18:1 fatty acid (i.e., oleic acid) relative to the other cell lines, the heterogeneity of overall unsaturation across the cell lines is minor. Similarly, there is no clear trend in bulk SCD1 gene expression (Supplementary Fig. 8c). It is likely that the compositional variability of neutral lipids (i.e., TAG and CE) in LDs is averaged out by other, more abundant lipid species in the bulk GC-MS analyses. Therefore, we performed liquid chromatography-mass spectrometry (LC-MS) based lipidomics profiling with preserved lipid structures. Indeed, bulk lipidomics data for M381 cells clearly show that while the droplet-enriched species TAG and CE have the highest unsaturated fatty acid (UFA) composition among major lipid species (Fig. 3i), they only account for a small portion (in total <6%) of all major lipid species (Fig. 3j).
Thus, M381 has elevated lipid unsaturation levels specifically within intracellular LDs.
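The normalization to 2908 cm −1 and the 3022 cm −1 /2908 cm −1 intensity ratio used later (Fig. 4) as an unsaturation readout reduce to a few array operations. The sketch below is a minimal illustration assuming each lipid-droplet hSRS spectrum is available as an intensity array over a shared wavenumber axis; the synthetic data and variable names are illustrative, not the authors' processing code.

```python
import numpy as np

def normalize_and_score(wavenumbers, spectrum, ref_wn=2908.0, ul_wn=3022.0):
    """Normalize one hSRS spectrum to its intensity at ref_wn (2908 cm^-1) and
    return the normalized spectrum together with the 3022/2908 intensity ratio
    used as an unsaturated-lipid readout. Peak intensities are taken at the
    sampled point closest to each target wavenumber; a real pipeline might
    average a small window or fit the peak instead."""
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    i_ref = int(np.argmin(np.abs(wavenumbers - ref_wn)))
    i_ul = int(np.argmin(np.abs(wavenumbers - ul_wn)))
    normalized = spectrum / spectrum[i_ref]
    return normalized, normalized[i_ul]

# Illustrative usage on synthetic stand-in data (10 lipid-droplet spectra).
wn = np.arange(2800.0, 3052.0, 2.0)          # 2800-3050 cm^-1 in 2 cm^-1 steps
rng = np.random.default_rng(0)
spectra = rng.random((10, wn.size)) + 1.0    # placeholder for measured spectra
ratios = [normalize_and_score(wn, s)[1] for s in spectra]
print("mean 3022/2908 unsaturation ratio:", float(np.mean(ratios)))
```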
Desaturases are involved in the lipid-droplet unsaturation of M381. We next sought to trace the source of the enhanced lipid-droplet unsaturation in M381 cells. Such an increase may arise from either cellular uptake or de novo synthesis. Further, the unsaturated lipid signal could originate from either mono-unsaturated fatty acids (MUFA) or poly-unsaturated (multiple double bonds) fatty acids (PUFA). First, to assess lipid uptake, we adopted a labeled SRS imaging approach by incubating M381 cells in medium containing deuterated MUFA (d 33 -oleic acid) or saturated fatty acids (SFA) (d 31 -palmitic acid), the two most widely used fatty acids for assaying uptake. We found that M381 cells have the lowest uptake of extracellular fatty acids across all cell lines (Supplementary Fig. 9), suggesting that de novo synthesized fatty acids may serve as the major source for M381. We next tested whether the MUFA or PUFA de novo synthesis pathway (Fig. 4a) contributes to the elevated lipid-droplet unsaturation. In mammalian cells, the Δ9 desaturase (stearoyl-CoA desaturase-1, SCD1) is the rate-limiting enzyme for MUFA generation, specifically for producing oleic acid (OA, 18:1) and palmitoleic acid (PO, 16:1) from stearic (ST, 18:0) and palmitic (PA, 16:0) acids (Fig. 4a). In addition, Δ6 and Δ5 desaturases contribute to generating functionally important PUFA, such as docosahexaenoic acid (DHA, 22:6) and arachidonic acid (AA, 20:4), by catalyzing the formation of additional double bonds from the essential fatty acids linoleic acid (LA, 18:2) and alpha-linolenic acid (ALA, 18:3) (Fig. 4a). We adopted pharmacological approaches to probe these pathways. CAY10566 (CAY) and SC 26196 (SC) are Δ9 (SCD1) and Δ6 desaturase inhibitors, respectively (Fig. 4a) 51 . Upon treatment of M381 with varying doses of CAY or SC for 3 days, our hSRS spectra revealed decreasing levels of unsaturation within LDs (Fig. 4b, c, 3022 cm −1 ), demonstrating the involvement of both MUFA and PUFA in LDs. This spectral response of decreased unsaturation upon drug treatment was also well reflected in the heatmap of constraint scores by SA (Supplementary Fig. 10). In addition, the involvement of MUFA and PUFA in the LDs of M381 was supported by lipidomics of TAG and CE (Supplementary Fig. 11), the main LD species.
Inhibiting SCD1 but not Δ6 desaturase induces apoptosis in M381. Although both CAY and SC reduce the lipid-droplet unsaturation levels, CAY inhibition of SCD1, blocking MUFA synthesis, leads to a more significant loss of viability for M381 cells relative to the other four cell lines (Fig. 4d and Supplementary Fig. 12), while SC treatment to block the Δ6 desaturase of the PUFA synthesis pathway barely affects the viability of M381 (Fig. 4e). It is worth noting that this specific susceptibility of M381 to SCD1 inhibition is not indicated by bulk gene expression of SCD1 (Supplementary Fig. 8c) or bulk fatty acid analysis (Fig. 3h). The effect of SCD1 inhibition is further confirmed by small hairpin RNA (shRNA) based gene silencing of SCD1 (Fig. 4f). This result illustrates that SCD1 inhibition could be a susceptibility of mesenchymal M381 cells and inspired us to develop a deeper understanding of SCD1 regulation in M381 cells. First, our bulk GC-MS analysis of major fatty acid species from cell pellets showed that the SCD1 inhibitor mostly blocks the generation of the monounsaturated OA (18:1) from saturated ST (18:0) (Fig. 4g). This is consistent with the knowledge that OA (18:1) is the principal product of SCD1 54 . Second, a time-lapse apoptosis video assay demonstrated that CAY reduces the viability of M381 by inducing apoptosis (Fig. 4h). Surprisingly, both the time-lapse apoptosis (Fig. 4h) and the time-dependent viability assays (Fig. 4i) revealed that the M381 cells do not initiate the apoptosis program until 1-2 days of treatment with CAY. A similar lagging effect is also observed for the decrease in the hSRS spectral signature of unsaturation within LDs (Fig. 4j, k). Taken together, the GC-MS and the kinetics data imply that the susceptibility to CAY may originate from the gradual depletion of OA (18:1) and/or the corresponding accumulation of ST (18:0).
SCD1 inhibition induces phase-separated membrane structures.
Lipotoxicity from excessive SFA (e.g., PA, 16:0 and ST, 18:0) is a well-documented effect that impairs cellular functions by inducing endoplasmic reticulum (ER) stress [55][56][57] , the unfolded protein response (UPR) [55][56][57] and the formation of ceramides 55 and reactive oxygen species 57 . Recently, it was found in live HeLa cells that supplying extra SFA into the culture medium could convert the intracellular membranes from the regular liquid-disordered phase into an ordered solid phase 58 . This resulted in perturbed membrane functions and induced cell death. The conversion of a fluidic normal membrane (NM) into a rigid solid membrane (SM) can be characterized by detergent wash, in which the NM will be removed while the SM is not 58 . Since CAY treatment of M381 mostly reduces intracellular OA (18:1) while increasing ST (18:0) levels by blocking the ST-to-OA conversion (Fig. 4a, g), we asked whether SCD1 inhibition could similarly induce solid-membrane structures. Indeed, CAY-treated M381 cells developed detergent-resistant, lipid-rich membrane structures (Fig. 5a). We then compared hSRS spectra of NM, SM domains, and LDs (Fig. 5b, normalized to 2908 cm −1 as previously indicated). First, comparing the spectra of NM before and after wash, the greatly reduced peak at 2845 cm −1 confirmed the effective extraction of most lipid contents by detergent wash (Fig. 5b, blue solid line vs blue dashed line). The maintained intensity at 2940 cm −1 after wash suggests that the NM is also enriched with proteins. In contrast, both SM and LD exhibit a largely maintained SRS spectral shape following detergent treatment (Fig. 5b, red, SM before vs after and green, LD before vs after). This indicates that both structures are resistant to detergent wash. Interestingly, the overall Raman spectral shape of SM is very distinct from that of NM (Fig. 5b, NM before vs SM before), but is similar to that of the LDs (Fig. 5b, SM before vs LD before). The similarity indicates that the SM is highly lipid-rich. The difference between SM and NM suggests that the formation of phase-separated SM domains causes an exclusion of membrane-residing proteins, consistent with previous models that proteins or peptides anchored in intracellular membranes by an α-helix clearly prefer the liquid phase and would be excluded by the solid phase for dimerization 56,59 . Thus, CAY inhibition of M381 cells indeed causes the formation of phase-separated intracellular solid-membrane structures that enrich lipids but exclude proteins.
Since CAY inhibition of SCD1 affects the de novo fatty-acid synthesis pathway (Fig. 4a), the SMs should have a high accumulation of newly synthesized lipids. Having identified that de novo lipid synthesis in our melanoma cells traces back to glucose (Fig. 2e, f), we again supplied M381 cells with d 7 -glucose, but this time together with CAY treatment for 3 days. We then used SRS imaging of C-D vibrations at 2150 cm −1 to visualize lipids that were synthesized specifically during the treatment period. As expected, we detected the formation of SM structures that were retained after detergent wash in the C-D SRS images, which show patterns similar to those in the C-H channel (Fig. 5c, before vs after wash). This observation confirms that the newly synthesized saturated lipids contribute to the formation of SM upon SCD1 inhibition by CAY.
CAY treatment induces the formation of SM structures by blocking the cellular conversion of newly synthesized SFA to UFA. This imbalance of homeostasis between SFA and UFA may also be caused by supplying cells with an extra amount of SFA in the medium, which could promote the formation of SM structures 58 . Indeed, we observed the appearance of solid-membrane patterns by treating M381 cells with 100 μM deuterated palmitic acid (PA, 16:0) or 50 μM deuterated stearic acid (ST, 18:0) (Fig. 5d, before vs after). Interestingly, the viability assays with PA and ST treatment (Fig. 5e) exhibited an M-shaped trend across all 5 cell lines. This trend is similar to that with CAY treatment (Fig. 4d), suggesting a similar toxicity effect between CAY and SFA. As a control, incubating cells with extra UFA shows negligible toxicity for all cell lines (Supplementary Fig. 13a, PO, 16:1 and OA, 18:1). In addition, the invasiveness of M381 cells is impaired in a similar way by either CAY, PA or ST treatment (Supplementary Fig. 13b). The loss of invasiveness is likely because the formation of SM structures leads to a loss of membrane fluidity, which is required for metastatic cancer cells to invade through the dense basement membrane 60 . We thus again validated the formation of intracellular solid-membrane structures, and their associated cytotoxicity, when the cellular pool of SFA exceeds the homeostatic level in M381 cells.
Lipidomics suggest a reservoir role of LDs. To obtain a comprehensive picture of how SCD1 inhibition perturbs lipid homeostasis, we carried out bulk lipidomics analysis of M381 cells with and without CAY treatment (Fig. 6a). We present heatmaps of six major intracellular lipid species, TAG, diacylglycerol (DAG), CE, free fatty acid (FFA), phosphatidylcholine (PC) and phosphatidylethanolamine (PE), in control and CAY-treated cells (Fig. 6a). Heterogeneous remodeling of SFA and UFA is revealed across different lipid species (Fig. 6a, pink, SFA, saturated fatty acids; green, UFA, unsaturated fatty acids of different acyl chain length and double-bond number). For quantification, we first plot the ratios of total SFA to total UFA in each of the six lipid species for control cells (CT, pink) and CAY-treated (1 μM, 3 days, purple) cells (Fig. 6b, SFA/UFA). Indeed, SFA/UFA ratios increase, although to different extents, across all six lipid species in CAY-treated M381 samples. The increase is particularly obvious for TAG and CE (Fig. 6b), the main residents in LDs. This again explains why hSRS spectroscopy and imaging of LDs is so revealing. The difference between CT and CAY-treated cells becomes particularly obvious for the ST/OA (i.e., 18:0/18:1) ratios in each species (Fig. 6b). This is consistent with our previous GC-MS results (Fig. 4g) that the function of SCD1 is more strongly directed toward the generation of OA (18:1) from ST (18:0) relative to the generation of PO (16:1) from PA (16:0) (Supplementary Fig. 13c). Further, the total concentrations of the membrane lipids PC and PE increase by 40.2% and 38.6% after CAY treatment (Supplementary Fig. 13d), which may explain the abnormally high lipid signals observed in SMs (Fig. 5a). These species-dependent lipidomics heatmaps and ratio analyses confirm the relative increase of saturation level across all different lipid species and identify the more dominant changes in both TAG and CE under CAY treatment, consistent with our SRS data.
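The per-class SFA/UFA and 18:0/18:1 ratios of the kind plotted in Fig. 6b are straightforward to compute once the lipidomics measurements are tabulated. The sketch below assumes a long-format table with illustrative column names (condition, lipid_class, fatty_acid in the xx:yy notation, concentration); it is a minimal illustration, not the pipeline actually used for Fig. 6.

```python
import pandas as pd

def saturation_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """For every (condition, lipid class) pair, compute the total-SFA/total-UFA
    ratio and the 18:0/18:1 ratio from a long-format lipidomics table with
    columns ['condition', 'lipid_class', 'fatty_acid', 'concentration'].
    Fatty acids follow the xx:yy notation, so yy == 0 marks a saturated chain."""
    df = df.copy()
    df["double_bonds"] = df["fatty_acid"].str.split(":").str[1].astype(int)
    rows = []
    for (cond, cls), g in df.groupby(["condition", "lipid_class"]):
        sat = g["double_bonds"] == 0
        sfa = g.loc[sat, "concentration"].sum()
        ufa = g.loc[~sat, "concentration"].sum()
        st = g.loc[g["fatty_acid"] == "18:0", "concentration"].sum()
        oa = g.loc[g["fatty_acid"] == "18:1", "concentration"].sum()
        rows.append({"condition": cond, "lipid_class": cls,
                     "SFA/UFA": sfa / ufa if ufa else float("nan"),
                     "18:0/18:1": st / oa if oa else float("nan")})
    return pd.DataFrame(rows)
```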
Further quantification of the absolute concentrations of ST (18:0) and OA (18:1) from the lipidomics data (Fig. 6c) suggests that OA (18:1) in TAG may be hydrolyzed and released under CAY treatment. Taken with our previous kinetic SRS data showing that the unsaturation levels of LDs, which are mainly comprised of TAG and CE, only decrease after 1 day of CAY treatment (Fig. 4j, k), we suggest a possible reservoir role of TAG for UFA in the LDs of M381 cells. After SCD1 inhibition blocks the conversion of newly synthesized ST (18:0) to OA (18:1), the cytosolic saturation level increases. When the level of newly synthesized SFA in the cytosol reaches a threshold (in our case, after 1 day of CAY treatment), the TAG in the LDs starts to release UFA (e.g., OA) to restore the balance of cellular lipid unsaturation. With the continuous depletion of UFA from TAG in LDs under prolonged CAY treatment, this storage is eventually depleted. The imbalance of intracellular SFA/UFA ratios then leads to the formation of toxic SM structures, as observed in Fig. 5a.
Fig. 4 a De novo synthesis pathways of (left) mono-unsaturated fatty acids (MUFA) and (right) poly-unsaturated fatty acids (PUFA) in mammalian cells. CAY10566 and SC 26196 are SCD1 (Δ9-desaturase) and Δ6-desaturase inhibitors, respectively. b Normalized (to 2908 cm −1 ) hSRS spectra of LDs in M381 cells without (CT) and with treatment of (top) 1, 5 and 10 μM CAY (n = 16, 18, 19, 19 for CT, 1, 5 and 10 μM CAY, respectively), and (bottom) 1 μM and 5 μM SC (n = 16, 19, 19 for CT, 1 μM and 5 μM SC, respectively) for 3 days. c Quantification of unsaturated lipid (UL) by intensity ratios of 3022 cm −1 /2908 cm −1 from b. Relative viability of all five cell lines after treatment with 1 μM and 10 μM CAY for 3 days (n = 4 independent experiments) (d) or 1 μM and 10 μM SC for 3 days (n = 4 independent experiments) (e). f Relative viability of M381 cells after shRNA knockdown of the SCD1 gene compared to scrambled control (CT) (n = 2 independent experiments). g GC-MS measurements of fatty acids extracted from bulk M381 cells with (CAY, purple) and without (CT, pink) treatment of 1 μM CAY for 3 days. The percentages of 16:0, 16:1, 18:0, 18:1, 18:2, and 20:4 fatty acids are normalized to total extracted fatty acids (n = 5 independent experiments). h Time-lapse apoptotic cell counts of M381 cells with (purple, CAY) and without (pink, CT) treatment of 1 μM CAY (n = 3 independent experiments, data shown as mean ± error with 95% CI). i Time-dependent relative viability of M381 cells after treatment with 1 μM CAY for 0, 1, 2, and 3 days (n = 4 independent experiments). j Normalized (to 2908 cm −1 ) hSRS spectra of LDs in M381 cells without (CT) and with 1 μM CAY treatment for 12 h, 1 day and 3 days (n = 20, 19, 17, 14 for CT, 12 h, 1 day and 3 days, respectively). k Quantification of UL from intensity ratios of 3022 cm −1 /2908 cm −1 in j. **p < 0.01, ***p < 0.001, ns: not significant (p > 0.05) from two-tailed unpaired t-tests. Data shown as mean ± SEM. Source data are provided as a Source data file.
Supplying UFA rescues SCD1-inhibition-induced apoptosis in M381. We reasoned that supplying CAY-treated cells with extra UFA, such as OA, may rescue the toxic effect of the drug by restoring the balance between SFA and UFA. Indeed, both the cell viability (Fig. 6d) and the cell invasiveness (Supplementary Fig. 13e) of CAY-treated M381 cells were restored by adding OA to the medium together with CAY, in a dose-dependent manner. Our time-lapse apoptotic assay (Fig. 6e) confirmed that a high dose (10 μM) of OA fully rescues M381 cells from apoptosis under CAY (1 μM) treatment. Further, with co-treatment of OA and CAY, the phase-separated solid-membrane structures are absent, even at a higher concentration (5 μM) of CAY (Fig. 6f, before vs after). It is known that OA supplementation can reduce lipotoxicity by channeling extra cytosolic SFA into LDs 61 . We hence performed a pulse-chase experiment to explore the possible rescue effect (Supplementary Fig. 13f). We first pulse-treated M381 cells with 5 μM CAY for 60 h. Having verified the formation of solid-membrane structures in this condition (Supplementary Fig. 13f, lipid, C-H), we then chased (i.e., rescued) the cells with 20 μM of deuterated OA (d 33 -OA) for another 10 h. We observed much less solid membrane (Supplementary Fig. 13f, set 1 and 2, C-H) and a significantly increased number of LDs derived from deuterated OA (Supplementary Fig. 13f, set 1 and 2, boxed, C-D). We also queried whether other UFA could have a similar rescue effect. At a low dose (1 μM), OA is the most effective tested rescue agent (Supplementary Fig. 13g, h), showing that the key is to restore the cellular balance between SFA and UFA.
To understand the specific gene regulatory pathways involved in the saturated-lipid-associated M381 susceptibility, we carried out RNA-seq transcriptomics analysis on CT cells, cells treated with 1 μM CAY (CAY), and cells co-treated with 1 μM CAY and 1 μM OA (CAY + OA). We ranked the gene sets that either exhibit increased or decreased expression levels under CAY treatment relative to CT, and then exhibit restoration under CAY + OA treatment. Two pathways stand out (Fig. 6g). The first (Fig. 6g, left and middle columns) is upregulated with CAY treatment and recovers with CAY + OA; this observation is consistent with our functional assays (Figs. 4h, 6e). Second, the NFκB1-targets pathway exhibits decreased expression with CAY and recovers with CAY + OA. The high NFκB transcriptional state in melanoma has been suggested to be BRAFi resistant, consistent with what is known for M381 25,27,64 . In addition, the NFκB pathway has been implicated in maintaining the stemness feature in ovarian cancer stem cells 51 , so it might play a similar role here in maintaining the mesenchymal nature of M381. In previously reported lipotoxicity studies, perturbation of cellular lipid composition, through either relatively high concentrations (~0.5 mM) of saturated lipid in the culture medium or SCD1 inhibitors, was shown to lead to the activation of ER stress sensors and the UPR [55][56][57] . In this study, transcriptional signatures associated with ER or UPR stress were not significantly elevated following 1 μM CAY treatment (Supplementary Fig. 13i). One possibility is that suppression of protein translocation into the SM structures may trigger pro-apoptotic signaling.
Discussion
Single-cell metabolomics is challenging because there is neither a general amplification strategy, such as PCR, nor a general capture agent approach, such as antibodies, to facilitate the detection of specific metabolites with the required sensitivity. Here, we demonstrated that Raman spectro-microscopy opens up the ability to spatially resolve and quantitatively analyze particular classes of metabolites, as well as specific targeted metabolites, in live and fixed cells. Raman imaging and spectral analysis essentially serve as a multiplex functional assay for metabolites that rapidly respond to environmental stimuli, and so provide a powerful complement to mass spectrometry and fluorescence detection methods. We showed the value of metabolic analysis by imaging a series of patient-derived, BRAF mutant melanoma cell lines representing different de-differentiation phenotypes. The subcellular metabolic heterogeneity across these cell lines is effectively captured by Raman and used to mine for phenotype-dependent, druggable metabolic susceptibilities. We term this approach subcellular pharmaco-metabolomics. In many cancers, mesenchymal-like cells exhibit invasive characteristics, as well as innate drug resistance to targeted and even immune therapies 32,65-67 . We hypothesized that the maintenance of such characteristics required lipid biochemical processes that could be mined for druggable susceptibilities. To this end, we utilized a comparative analysis of Raman spectro-imaging of intracellular LDs to identify that lipid-unsaturation-associated metabolic activities were uniquely upregulated in the mesenchymal M381 phenotype, as depicted in Fig. 7. This picture is supported by several findings. First, from SRS imaging of deuterated fatty acids and glucose, M381 cells exhibited the lowest relative activities for both lipid uptake (e.g., OA, 18:1) and de novo fatty acid synthesis. Such low metabolic activity might contribute toward making M381 cells insensitive to BRAFi 68 . Second, incubation with the SCD1 inhibitor CAY, which blocks the conversion of SFA to MUFA, led to an imbalance of intracellular SFA and UFA. This imbalance drives the release of UFA stored in M381 LDs to restore the balance. This suggests an intracellular UFA reservoir function for these droplets. Prolonged SCD1 inhibition eventually depletes these LDs of UFA, leading to an excess of SFA in M381. This excess, in turn, contributes toward a type of lipotoxicity through the formation of phase-separated SM domains. The accompanying loss of membrane fluidity and exclusion of membrane-residing proteins are then associated with an induced apoptosis, a cell fate that can be avoided by supplying extra MUFA in the culture medium. The susceptibility of SCD1 is uniquely revealed by subcellular Raman analysis, but is not reflected in the bulk transcriptomics (Supplementary Fig. 8c) or bulk metabolomics (Fig. 3h). Both the mechanism and the applicability of the susceptibilities reported in our work are distinctly different from previous reports that mainly relied on bulk analysis [69][70][71] . This demonstration thus emphasizes the unique value of subcellular pharmaco-metabolomics as a revelatory tool for uncovering new cell biology.
The work here provides an important proof of concept for the use of Raman spectro-microscopy in identifying phenotype-dependent metabolic susceptibilities in cancer cells. It is likely that we are just beginning to mine how different metabolites are processed and utilized within different cellular subcompartments. Our current subcellular investigations focus on the spectral region of 2800-3100 cm −1 , but can be readily extended to additional windows within the fingerprint spectral region to permit the identification of additional metabolite classes [19][20][21] . Other subcellular structures could be probed similarly to how the LDs were analyzed here, to resolve a more comprehensive intracellular picture of the organelle network, such as the membrane-bound organelles of the ER and the Golgi apparatus 46 . Another aspect worth exploring is the generality of the cell-line-specific results reported here. For example, whether the susceptibility of SCD1, as revealed in the mesenchymal M381 cells, applies more generally across mesenchymal BRAF mutant melanoma tumors is both intriguing and important, given the challenges in drugging such tumors. A second challenge will be to extend these Raman tools, in conjunction with surprisal analysis, to characterize the metabolic heterogeneity within intact tissues and more physiologically relevant environments 17 . Such studies will further validate the general applicability of the specific targets identified here and perhaps open up avenues for clinical translation.
Methods
Drug treatment and cell viability assay. Cerulenin (Sigma, C2389-5MG), TVB 3166 (Sigma, SML1694-5MG), CAY10566 (Cayman, 10012562) and SC 26196 (Cayman, 10792) were dissolved in DMSO (ATCC, 4-X) at the designated concentrations before adding to cell culture media. To conduct the cell viability assay, 30k to 50k cells were seeded into six-well dishes (Corning, 3516). After culturing for 2 days, the growth medium was replaced with fresh medium containing drugs at the indicated concentrations, and the incubation continued for another 3 days. Cell viability was measured by counting the cell number in each well with trypan blue. The cell number in the vehicle well (with DMSO as vehicle) was used for normalization.
We recommend that at least three biological replicates, with at least five cells in each replicate, are acquired for analysis.
Spontaneous Raman spectroscopy. Fixed cell pellets were washed twice with pure water and then resuspended in water to form a cell suspension, to avoid interference from salt crystals after drying. The cell suspension containing 5k cells was added dropwise onto a glass slide. After air drying, the glass slides with cells were used to acquire Raman spectra. Spontaneous Raman spectra were acquired using an upright confocal Raman spectrometer (Horiba Raman microscope; Xplora plus). A 532 nm YAG laser was used to illuminate the sample with a power of 12 mW on the sample, through a 100×, N.A. 0.9 objective (MPLAN N; Olympus) with a 100 µm slit and a 500 µm hole. The spectrograph/Raman shift center was set to 2000.04 cm −1 . With a 1200 gr/mm grating (750 nm), a Raman shift range from 690.81 to 3141.49 cm −1 was acquired to cover the whole cellular Raman spectrum. The acquisition time for one spectrum was set to 25 s (5 s × 5 averages). Target cells were chosen randomly and spectra of five points (center, top, bottom, left, right) on each individual cell were acquired. The acquired spectra were baseline-corrected in the LabSpec 6 software. Spontaneous Raman spectra were organized and presented with Excel and GraphPad, respectively. To reduce spectral variance in spontaneous Raman spectra caused by intracellular heterogeneity, we recommend that at least three biological replicates, with at least ten cells in each replicate, are acquired for analysis.
Coating of imaging dish. The imaging dish (MatTek, P35G-1.5-14-C) was coated with 2% sterile gelatin solution (Sigma, G1393) for 30 min; the coating solution was then removed and the dish was left to air dry for another 30 min before use.
Ratio image processing and data analysis. Images were analyzed and color-coded in ImageJ. For CH 2 /CH 3 ratio imaging, a threshold (mask) image was first generated by thresholding with the Huang method, and nonzero values were then normalized to one. The CH 2 images were then divided by the corresponding CH 3 images, and the resulting ratio image was multiplied by the mask image to create the final CH 2 /CH 3 ratio image. The display range of the CH 2 /CH 3 ratio images was set to 0-0.5.
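A minimal Python sketch of this ImageJ procedure is given below. Otsu thresholding from scikit-image stands in for ImageJ's Huang method, and the cell mask is assumed to be derived from the CH3 (protein) channel; both choices, and the function name, are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from skimage.filters import threshold_otsu  # stand-in for ImageJ's Huang threshold

def ch2_ch3_ratio_image(ch2: np.ndarray, ch3: np.ndarray) -> np.ndarray:
    """Build a masked CH2/CH3 ratio image from two co-registered SRS frames.
    Pixels outside the thresholded cell mask are set to zero; the paper then
    renders the result with a display range of 0-0.5."""
    ch2 = ch2.astype(float)
    ch3 = ch3.astype(float)
    mask = (ch3 > threshold_otsu(ch3)).astype(float)  # binary mask, nonzero -> 1
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(ch3 > 0, ch2 / ch3, 0.0)
    return ratio * mask
```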
Fatty acid analysis. Five million cells were harvested, frozen, and lyophilized overnight. Fatty acid methyl esters (FAMEs) were produced from the biomass in a combined extraction, hydrolysis, and derivatization procedure based on previous methods 72 . For each sample, dried biomass was mixed with 2 ml of methylation mixture (20:1 v/v anhydrous methanol/acetyl chloride) and 1 ml hexane and reacted in sealed VOA vials at 100 °C for 10 min. After cooling, 2 ml deionized water was added to the mixture, followed by extraction three times with 2 ml hexane. The hexane solution was then treated with anhydrous Na 2 SO 4 to remove residual water and concentrated under a stream of N 2 to a final volume of 0.5 ml. FAMEs were identified via gas chromatography/mass spectrometry (GC/MS) on a Thermo Fisher Scientific ISQ by injecting 1 μl of sample in splitless mode. Chromatographic separation was achieved on a ZB-5ms capillary column (30 m by 0.25 mm; film thickness, 0.25 µm). Peaks were identified by comparing the mass spectra and retention times to authentic standards and library data. Quantification was achieved by a flame ionization detector. To avoid complications from sample loss at the sample preparation stage, we used the relative abundance of each fatty acid species for data interpretation. Relative abundances were calculated by dividing the peak area for each of the six most abundant fatty acids (16:0, 16:1, 18:0, 18:1, 18:2, and 20:4) by the sum of the peak areas of all six species. Data were processed in Excel. The signals of other species are too low and mostly buried in noise.
Fig. 7 Schematic of the proposed cellular metabolic processes for M381 cells under SCD1 inhibition. SCD1 inhibition blocks de novo MUFA synthesis from SFA, which leads to an imbalance of intracellular SFA and UFA. This imbalance drives the release of UFAs stored in M381 lipid droplets, which act as reservoirs of unsaturated lipids, to restore the balance between SFA and UFA. Prolonged SCD1 inhibition eventually depletes the stored UFA. The resulting imbalance between SFA and UFA transforms fluid normal membrane domains into phase-separated solid membranes. The accompanying loss of membrane fluidity and exclusion of membrane-residing proteins are associated with an induced apoptosis, a cell fate that can be rescued by supplying excess UFA in the culture medium.
Detergent wash. A PBS solution containing 0.5% Triton X-100 (Sigma, T8787), referred to as PBS-T, was used to wash cells in the imaging dish 58 . One ml of the PBS-T detergent solution was gently added to the imaging dish and the dish was placed at 4 °C for 10 min. The PBS-T washing solution was then gently removed and the samples were washed twice with PBS before imaging.
RNA extraction, library construction, and sequencing. Total RNA was extracted from frozen cell pellets (~1 million cells) using the RNeasy Micro Kit (Qiagen, 74004) according to the manufacturer's protocol. RNA sequencing (RNA-seq) was then performed on the BGISEQ-500 platform at BGI Genomics (Wuhan, China). Library preparation followed BGI's standard procedure.
RNA-seq data dimension reduction and clustering analysis. Sequencing reads were mapped and aligned to the human reference genome (UCSC hg19) with TopHat. Assembled transcripts for each sample were generated from mapped reads using Cufflinks. All assemblies were combined into a single assembly by Cuffcompare for differential expression analysis. Expression levels, in fragments per kilobase of exon per million fragments mapped (FPKM), were generated using Cuffdiff as normalized read counts.
Heatmap and clustering analysis of the transcriptomic dataset was performed in MATLAB. Hierarchical clustering was performed with average linkage and a Euclidean distance metric. Transcriptomic data of 30 BRAF- but not NRAS-mutated melanoma patient-derived cell lines from the Gene Expression Omnibus (GEO) database 27 were chosen for dimension reduction and clustering analysis. Gene expression of the whole transcriptome, or of the metabolic subset (with all metabolic genes defined from reference 73 ), was projected onto the top two most dominant constraints defined from surprisal analysis 38 . This way, cell lines with similar whole-transcriptomic profiles or metabolic gene expression profiles were projected near each other. Cell lines were color-coded based on their respective phenotypes. The top 100 phenotype-specific metabolic genes for each phenotype were selected based on each gene's contribution score toward that phenotype, as listed in Supplementary Table 1. The contribution score of each gene to each phenotype was calculated from the gene's contribution scores toward the X-axis (G1) and Y-axis (G2) of the two-dimensional map (G1 and G2 values from surprisal analysis). The detailed equations are as follows: contribution score of the melanocytic phenotype, Smelanocytic (S1) = −G1 − G2; contribution score of the transitory phenotype, Stransitory (S2) = −G1 + G2; contribution score of the neural-crest phenotype, Sneural-crest (S3) = G1 + G2; contribution score of the undifferentiated phenotype, Sundifferentiated (S4) = G1 − G2. Heatmaps of all 400 phenotype-specific metabolic genes are plotted in Supplementary Fig. 1, and a heatmap for a few representative phenotype markers and phenotype-specific metabolic genes is shown in Fig. 1b.
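The four phenotype contribution scores defined above are simple signed combinations of each gene's G1 and G2 values, as the following minimal sketch illustrates (array names and dictionary keys are illustrative).

```python
import numpy as np

def phenotype_contribution_scores(G1, G2):
    """Combine each gene's contributions to the two dominant constraints
    (G1 = X-axis, G2 = Y-axis of the 2D projection) into the four phenotype
    contribution scores S1-S4 defined in the text."""
    G1 = np.asarray(G1, dtype=float)
    G2 = np.asarray(G2, dtype=float)
    return {
        "melanocytic": -G1 - G2,       # S1
        "transitory": -G1 + G2,        # S2
        "neural_crest": G1 + G2,       # S3
        "undifferentiated": G1 - G2,   # S4
    }

# The top 100 phenotype-specific genes are then the genes with the largest
# score for that phenotype, e.g. np.argsort(scores["melanocytic"])[::-1][:100].
```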
For the CH 2 /CH 3 correlation analysis across the five cell lines, the Spearman correlation was calculated between each gene and the measured CH 2 /CH 3 ratio across all five cell lines; genes that displayed the highest positive or negative correlation with the CH 2 /CH 3 ratio (Spearman > 0.95 or < −0.95) were further mined for their function through enrichment analysis.
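A minimal sketch of this correlation screen, assuming a genes-by-cell-lines expression DataFrame and a CH2/CH3 ratio Series indexed by the same five cell lines (the names and the cutoff handling are illustrative, not the authors' code):

```python
import pandas as pd
from scipy.stats import spearmanr

def correlate_genes_with_ratio(expression: pd.DataFrame,
                               ch2_ch3: pd.Series,
                               cutoff: float = 0.95):
    """expression: rows = genes, columns = cell lines; ch2_ch3: measured
    CH2/CH3 ratios indexed by the same cell lines. Returns per-gene Spearman
    correlations and the genes passing the |rho| > cutoff screen."""
    ratios = ch2_ch3.loc[expression.columns].values
    rho = expression.apply(
        lambda row: spearmanr(row.values, ratios).correlation, axis=1)
    hits = rho[(rho > cutoff) | (rho < -cutoff)].sort_values(ascending=False)
    return rho, hits
```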
Gene set enrichment analysis (GSEA) 74 was performed using GSEA v4.0.1 with 1000 gene-set permutations. Normalized enrichment scores were assessed across the Molecular Signatures Database (MSigDB) Hallmark, C2 curated, C4 computational and C5 gene ontology gene sets. To identify the biological processes and pathways most correlated with the CH 2 /CH 3 ratio, we first ranked the genes based on the Spearman correlation between their expression and the CH 2 /CH 3 ratio across all five melanoma cell lines and then ran the pre-ranked option of GSEA with 1000 permutations.
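The pre-ranked input for GSEA can be produced directly from the per-gene correlations computed above; the sketch below writes the standard two-column, tab-separated .rnk file (gene, ranking metric) that GSEA Preranked accepts. The filename is illustrative.

```python
import pandas as pd

def write_prerank_file(rho: pd.Series, path: str = "ch2_ch3_spearman.rnk") -> str:
    """Sort genes by their Spearman correlation with the CH2/CH3 ratio and
    write a GSEA Preranked .rnk file (gene <tab> score, no header)."""
    rho.sort_values(ascending=False).to_csv(path, sep="\t", header=False)
    return path
```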
Surprisal analysis of Raman spectra. Surprisal analysis was applied as previously described 38 . Briefly, the measured Raman signal at wavenumber i in cell c, ln Xi(c), is expressed as a sum of a steady-state term ln X0i(c) and several constraints (modules) λj(c) × Gij representing deviations from the steady state. Each deviation term is the product of a cell-dependent weight (influence score) of the constraint, λj(c), and the cell-independent contribution of the wavenumber peak to that constraint, Gij. Peaks i with high positive or negative Gij values are the ones that are positively or negatively correlated with constraint (module) j, which can be used to infer the meaning of each module. To implement surprisal analysis, we first utilized singular value decomposition, which factors the matrix ln Xi(c) in a way that determines the initial estimates of the two sets of parameters needed in surprisal analysis: the Lagrange multipliers λj for all constraints in a given cell, and the cell-independent patterns Gij for every wavenumber i at each constraint j. Further iteration is implemented when necessary to stabilize the steady state and refine the constraints.
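A minimal numerical sketch of the SVD initialization described here, assuming a strictly positive intensity matrix with wavenumbers as rows and cells as columns; the variable names are illustrative and the subsequent iterative refinement of the constraints is omitted.

```python
import numpy as np

def surprisal_decompose(X: np.ndarray, n_constraints: int = 5):
    """SVD-based initial estimate of the surprisal-analysis parameters.
    X: (n_wavenumbers x n_cells) matrix of strictly positive Raman intensities.
    Returns G (one column of wavenumber patterns G_ij per constraint) and
    lam (one row of cell-dependent weights lambda_j(c) per constraint) such
    that ln(X) ~= G @ lam; the first constraint plays the role of the
    steady-state term."""
    lnX = np.log(X)
    U, s, Vt = np.linalg.svd(lnX, full_matrices=False)
    G = U[:, :n_constraints]
    lam = s[:n_constraints, None] * Vt[:n_constraints, :]
    return G, lam

# Reconstruction check (cf. Fig. 1d): the retained constraints should
# recapitulate each measured spectrum, i.e. np.exp(G @ lam) ~ X.
```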
Incucyte cell apoptosis assay. Cells were seeded and monitored using an IncuCyte® S3 live-cell imaging system (Essen BioScience). Cells were exposed to drug treatments for up to 72 h in the presence of the IncuCyte® Caspase-3/7 Green apoptosis dye (Essen BioScience, Cat. No. 4440). Images were taken at 20-min intervals from nine separate regions per well using a 20× objective. Apoptotic cell counts per well at each time point were quantified using the IncuCyte Basic Analyzer.
Migration and invasion assays. Transwell chambers coated with Matrigel (Corning, 354480) and uncoated chambers (Corning, 354578) were used to conduct the invasion and migration assays, respectively, according to the manufacturer's protocol. Briefly, cells received the indicated treatments three days before the assays. At the start of the assays, cells were harvested and counted, and a 50k cells ml −1 suspension was prepared. 0.5 ml of cell suspension was added to the upper chamber of the 24-well chambers. The medium in the lower chamber contained 10% FBS. Cells were allowed to migrate for 22 h at 37 °C. The transwell membranes were then fixed and stained with 0.05% crystal violet solution. A cotton swab was used to remove cells that had not migrated or invaded through the chamber. A fluorescence microscope was then used to image the migrated or invaded cells, and four fields were independently counted from each migration or invasion chamber. Two or four biological replicates of the experiments were conducted.
Lipidomics profiling. Cells subjected to the indicated treatments were harvested as frozen pellets. Lipids were extracted using methyl tert-butyl ether (MTBE)/methanol after the addition of 54 isotope-labeled internal standards across 13 lipid classes. The extracts were concentrated under nitrogen and reconstituted in 10 mM ammonium acetate in dichloromethane:methanol (50:50). Lipids were analyzed using the Sciex Lipidyzer platform, consisting of a Shimadzu LC and an AB Sciex QTRAP 5500 LC-MS/MS system equipped with SelexION for differential mobility spectrometry (DMS). Multiple reaction monitoring (MRM) was used to target and quantify over 1000 lipids in positive and negative ionization modes with and without DMS. The resulting lipidomics data are provided as Supplementary Data 1.
"Medicine",
"Chemistry",
"Biology"
] |
Miskâsowin—Returning to the Body, Remembering What Keeps Us Alive
The nêhiyawêwin (Plains Cree language) word miskâsowin relates to the sacred teachings of Treaty Elders of Saskatchewan as a concept pertaining to wellness of "finding one's sense of belonging"—a process integral in the aftermath of colonial disruption. Métis educator and performance artist Moe Clark offers an approach to healing and well-being, which is imparted through movement, flux and through musical and performance-based engagement. Moe works with tools of embodiment in performance and circle work contexts, including song creation, collaborative performance, participatory youth expression and land-based projects as healing art. She shares her process for re-animating these relationships to land, human kin, and other-than-human kin through breath-work, creative practice and relationality as part of a path to wholeness. The authors document Moe's approach to supporting the identity, growth, healing and transformation of others.
Introduction
This article explores the nature of healing and belonging for Two-Spirit Métis multidisciplinary artist, Moe Clark. Honouring an Indigenous paradigm, the authors follow a circular methodology as demonstrated in Moe's spoken word poem Coyote. In doing so, we are witness to a story of healing and transformation. Each author has taken a particular role in the articulation of this work through the shared tasks of interviewing, transcribing, writing, contextualizing and word-smithing. This embodied, transformative and multi-faceted approach to healing, for the Métis and for wider Indigenous and/or cultural communities, can be inspirational and holds a place in the disciplines of cultural psychology, healing and recovery, and the human potential movement.
Without taking up too much space for an already well-documented history, it is important to note that the Métis are one of three Indigenous peoples in Canada (in northern Turtle Island) who were colonized by England, France and then the settler state post-confederation (1867). Perhaps the best narrative account of the historical experience of the Métis can be found in Maria Campbell's (latest) publication of "Halfbreed" (1973). Here, Campbell documents the major and ongoing sources of strife for the Métis, including land theft, poverty imposed through state violence, denied access to education, abuse and sexualized assault by police, and barriers to adequate housing, food and clean water. Today, while many Métis are employed in middle-class or industrial jobs, the Métis continue to be subjected to racism and systemic barriers; many are forced to make untenable choices, such as working in jobs which perpetuate environmental destruction or remaining unable to support one's family (Richardson 2017). Particular ongoing issues of identity for the Métis are fostered by the stereotyping found in terms such as "Halfbreed", which, by the terms of eugenics and scientific racism, imply that the Métis are less-than-whole. One of the most perturbing conditions for the Métis is found in child welfare practices which continue to target Indigenous or "darker skinned" families, implying that their parenting practices are sub-par and blaming individuals for systemic and structural barriers. There are more Métis children in government care today than during the "Sixties Scoop", an epoch highlighted for disastrous and genocidal racial profiling of Indigenous people. These are some of the conditions that Métis people today are trying to address. An ongoing, life-affirming energy is required to meet these challenges head on while building community and engaging in personal and collective transformation work. Relating to land, place, identity and our various kin (including non-human relatives) is central to the process.
Indigenous story work and circular methodology provoke experiential understanding of place and identity. Many Métis scholars have shared the importance of story and "First Voice" in both methodology and healing. Graveline (2000) approaches "First Voice" as methodology through "[a] fluid pattern [of] Medicine Wheel as 'paradigm'" (364). The medicine wheel is a representation of wholeness, characterizing balance in an integrated way. While the model is used to conceptualize holistic well-being, the medicine wheel is also a structure that was built into the earth. One may walk through the medicine wheel in places such as Wanuskewin, Saskatchewan or in the Bighorn mountains of Wyoming, USA. Circularity is an organizing principle in this article and can be detected through the storied sequence, moving from personal account, to dialogue, to reflection and finally to connection with larger socio-political or spiritual themes. Richardson (2004) contends that: "Métis themes tend to come in strands that are closely woven together. [...] Themes of healing, learning through stories, and finding belonging are [closely intertwined]" (p. 24). Here, Moe Clark guides us through her creative practice, which involves connecting to community and landscape, and uncovering miskâsowin and wâhkotowin: belonging and kinship.
Emergence
Is it possible to hear the whispers of the northern lights arriving, particularly if one is just waking from a dream? And from whence do they appear, from the spirit world, from the Cypress Hills/kâtepwa, or from the Red River? These are ancestral lands for the Métis, as for Moe's family. She acknowledges the Cypress Hills, katêpwa and other places that had been essential stopping points for her Métis family when they were pushed west from the Red River so many years back. The Métis people are taught about Louis Riel and the role of the artists, called upon to wake up the people: My people will sleep for one hundred years, but when they awake, it will be the artists who give them their spirit back (Riel 1885) Today, many of the younger generation on Turtle Island speak of being "woke", of having come alive, of becoming aware of who one is in the context of power, discrimination and (the absence of) social justice. This experience of "coming into greater consciousness" resonates with an African American usage (woke-ness) that was reintroduced into "mainstream public consciousness". In 2008, rapper Erykah Badu sang "Master Teacher" with her lyrics "I stay woke," designating the #staywoke hashtag (Minamore 2020). This chapter explores the process of "awakening to culture", of finding belonging as integral aspects of healing and becoming whole. The pathway to this knowledge was eked out by Métis singer, spoken word poet and facilitator of healing processes, Moe Clark. She notes: They speak about miskâsowin as a process of finding one sense of origin, finding ones belonging (Cardinal and Hildebrand 2000). You know they (the Elders) go further to say, miskâsowin means locating oneself within the circle, and to me I feel like this is an ongoing practice and process. I might belong in one way to a particular community and I might belong differently to another community; my roles are perhaps changing. Being able to reflect and to perceive that notion of belonging within different circles, which in turn makes you part of a whole, like a part of a whole system (Clark 2020a, p.1) One's relationship to land and ecological and human systems is acknowledged through a process of self-location. Moe begins her work by situating herself as such: I was born and raised in Treaty 7, otôskwanihk (Calgary), the meeting place of the Elbow and Bow rivers. My Métis ancestral roots trace back from Saint François-Xavier, Manitoba in the Red River and into the southern plains and of what is now central Alberta. I moved to Tio'tiá:ke (Montreal) over ten years ago, following an inner prompting, to pursue an artistic career. Here, I have been building a life and an artistic practice, nestled on this island of salt-crusted winter streets, at the foot of Mount Royal surrounded by the almighty Saint Lawrence river. These landmarks have become emblematic in my journey as a musician and poet, educator and artistic producer, shape-shifting between roles and mediums (Clark 2020b, p. 26) Like many poetically inspired beings, Moe begins her articulation of waking up to her culture through metaphoric story. Imagine awakening from a deep sleep, from a silenced place of not-knowing, to greet what is coming down the path. How quietly must one listen to hear the arrival of a muzzled coyote?
Coyote came upon the turtle's back in a dream I was sleeping so deeply like my people for at least one hundred years Métis theorists have long written about this returning of culture (Campbell 1973;Scofield 2016;Dimaline 2017). The 1990s appeared to mark a wave of increased Métis identification, increased pride and reclamation of Métis culture. This movement was marked by the victory of a number of important court decisions in Canada, such as the Daniel's Decision and the Powley Decision, reaffirming Métis rights in the Canadian constitution. In the same time period, Richardson articulated a personal and academic narrative in Belonging Métis (Richardson 2016), based on her doctoral research. Herein, Richardson speculated on a slight increase in cultural safety in Canada (Blanchet-Cohen and Richardson/Kinewesquao 2017). Richardson documents a final "grandmother/kokum speak out" that often precipitated a new awareness in Métis families. After years of hints and suggestions that the family might have "Indian blood", a grandmother reveals on her deathbed that she is "Métis". Many family Elders had tried to protect their young ones from harm by finding ways to keep their Métis ancestry underground, despite the knowledge being present.
The disclosure "We are Métis" has become a battle call for Métis organizations. Upon hearing these words, a reaction was prompted through the Métis family as each person opens up to this reality in a different way-some with relief and joy, others with confusion and denial. Being Indigenous in a virulent racist society 1996 1 requires a certain amount of courage. Moe's courage comes through in the story, as coyote prompts her to question her situation: Wrapped tight around his nose a coarse iron rope muzzled his throat and in his eyes a stone cold silence why the violence? he glared In coming to terms with her Métis identity, after years of denial in her family, Moe recounts the process of moving towards identity and wholeness, through a "piecing together" of the different parts of herself and her ancestry. Through his glare and muzzled lips, what is it that coyote wants back? Could it be his story or his people? The land where he/they have lived since time immemorial? Is he reminding us that we can also look into the things that were taken, that we long to re-embrace? Moe's story of coyote denotes the process of breaking the silence in order to more fully occupy her place as a Métis person: I've always identified as an artist, but I haven't always identified as Métis. The story behind this identity reconfiguring continues to reveal itself. Grandpa Ron, my Métis grandfather, died before I was old enough to know or ask him to know more about this hidden chapter. But after his passing, stories and questions surfaced in our family and we began piecing together the family genealogy and lineage. But like many, these fragments only took me so far. I have also relied on my own memories of being held and welcomed by the land, of the songs and campfire teachings told to me as a small girl and all the pieces I've gathered along the way from community, archives and whatever family stories have surfaced. From these fragments I have built new kinship relationships and adopted elders, community and artists as family. These relationships continue to accompany, inspire and support artistic and personal growth (Clark 2020b, p. 26) Moe's words point to one's belonging with and on the land, where much of the DNA of her ancestors is embedded. In addition, the living memories activated through relationships she has had with her family on the land, provide substance for this reconnection and healing. To reciprocate and give back to these life-giving relationships of land and sky, and the healing they provide, Moe speaks of making offerings. More particularly, how she is drawn outside to make offerings to the northern lights (the Ancestors) and it is there, she is met by the howling call of coyote. This encounter marks a pivotal moment in her healing journey. Moe writes: That night on the prairies, I decide to step outside and make an offering of cistêmaw, tobacco, both as a way to pay my respects for their visit and to indicate I'm happy right where I am, on this land, and will not be going with them. As I do this, I am met by the haunting cry of mêscacâkanis, coyote (Clark 2020b, p. 28) One of the Cree teachings of Turtle Island is that we do not look directly at the Ancestors in the sky, out of respect, lest they interpret our gaze as calling them to take us away with them (Buck 2009). This offering of tobacco thus becomes a way of looking without gazing directly. Tobacco becomes the witness who acts as the bridge between Moe and her ancestors. 
From a Cree perspective, tobacco moves first, and carries our prayers to spirit world so the ancestors know we are grateful (Clark 2020a). Maintaining embodiment practices such as these, Moe remembers, reaffirms, and honours the reciprocity of these sacred relationships. In the ongoing healing process, relationality and meaning-making are essential. Moe upholds these practices by communicating with her Ancestors on the land. By doing so, she is " . . . [activating] ancestral memory [and] engaging with lineage [ . . . ] with all that has been and all that will come" (Clark 2020a, p. 3). In recognizing their presence and tending to coyote's call, Moe is intentional in how she finds and affirms her belonging/being.
Healing through Intentionality
The process of meaning-making differs for every individual. Leanne Simpson (2017) speaks to this undertaking as she writes: " . . . Individuals carry the responsibility for generating meaning within their own lives; they carry the responsibility for engaging their minds, bodies, and spirits in a practice of generating meaning" (p. 52). As an artist, performer and educator, Moe carries the responsibility for meaning-making through ongoing relationships with community. Through the creation of song, story, engagement with creative kin and community through performance and arts facilitation, a reciprocal healing process has arisen-one that does not exist solely in Moe's body, but is shared between many bodies/minds/spirits of those who come into creative practice with her. She says: Therefore, much of my healing has come through being in relationship with others and others' healing. We grow together and that circle expands collectively and in community. Therefore, I definitely feel like the teacher is the student, and the student is the teacher, and this process experienced through intergenerational relationships and transmission of knowledge, is so important, one that has always been part of my life (Clark 2020a, p. 2) Through this transmission of knowledge, prayer, and ceremony, creation is born. Verral (1988) cited in Iseke-Barnes (2003) shares the Cree word mom-tune-ay-chi-kun which refers to "the sacred place inside, where we can dream, imagine, create and talk to the grandmothers and grandfathers" (pp. 218-19). Iseke-Barnes then recites Métis grandmother Maria Campbell as she states: "Mom-tune-ay-chi-kuna in English is translated as mind or wisdoms or 'the thoughts and images that come from this place . . . [which] can be given to others in stories, songs, dances, and art . . . All these are gifts that come from that sacred place inside'" (Maria Campbell quoted in (Verral 1988, p. 3) cited in Iseke-Barnes 2003). Moe describes finding this inner place where she can create these gifts and build connection, she writes: Each time I prepare to write, to sing, to perform, I bring in plant medicines, sage or sweetgrass, and burn them to smudge. I engage in a process of making sacred, each act of creation (Clark 2020b, p. 31). I see the creative process as relational, ceremonial, and in constant renewal; helping to reaffirm and grow relationships with community, with ourselves and with the land. In this way, the creative process invites miskâsowin and wâhkotowin: belonging and kinship (Clark 2020b, p. 29) Through voice and music, Moe has acquired a deeper understanding of miskâsowin, how she belongs with " . . . the inner circle, [the] circle of close family and friends; then community; and then belonging in relationship to the larger circles of society, land, and the invisible world" (Clark 2020b, p. 31). The process of coming into being and healing through memory and visions requires inviting connection to these kinships. On relationality in ceremony and prayer, Cree scholar Shawn Wilson (2008) writes that: "[t]he purpose of any ceremony is to build stronger relationships or bridge the distance between aspects of our cosmos and ourselves" (p. 11), therefore ceremony allows for: " . . . [a] raised level of consciousness and insight into our world" (p. 11). Moe speaks to the use of plant medicines in her creative process as helping to bridge this distance by opening her spirit (Clark 2020b).
For Moe, coyote's arrival serves as a symbol. It is a prompting, a place of beginning, establishing the creation of this spoken word poem. This piece confronts colonial violence, forced displacement, and imperialism, which Moe says is "calling out" the "Iron Wire" (Clark 2020b). Listening intently to coyote's call, Moe generates a story, the meaning is the onset of an unwinding. By tending to this story, Moe names the violence instead of surrendering to it. She notes that: "in these places of [naming the violence], loosening the wire, new openings were formed. Through these holes, new visions could be seen for stories to be told" (Clark 2020b, p. 31): I loosened the rope from around Coyote's throat he opened his mouth wide no more did he hide the beads burning holes in his tongue through the holes I gazed awake in dreams a stream of neon green spilled out in solar flames Here, an integral aspect to Moe's healing process is revealed: the importance of stories in generating meaning, expanding her understanding and helping her grow. Moe has found wisdom through kin and kinship relations, the movement from "I" to "we" and the expansion of collective (Métis) awareness. The transmission of knowledge is facilitated through relationships and story-sharing. Kinships are inherent along Moe's creative path, and ongoing exchanges with Elders and Knowledge Keepers help Moe integrate and uplift the meaning of her experiences. One cannot do this alone.
Embodying This Story: Connecting with Elders and Knowledge Keepers
Elders' and Knowledge Keeper wisdom plays an immense role in community and individual healing. Iseke-Barnes (2003) observes that "Indigenous Elders encourage us to walk with our traditions, finding support for our lives and our work in these ways" (p. 211). Elders are the leaders of their communities, supporting, strengthening and affirming Indigenous cultures and pedagogies (Iseke 2013). Through storytelling, a practice within Indigenous cultures, places, epistemologies, individuals and their experiences are protected and validated (Iseke 2013). Smith (1999) cited in Iseke (2013) affirms that: "Elders are important in the process of recovery and resistance to colonial realities and in reinsertion of the importance of remembering our past and remaking our future(s). Elders mentor and provide support and have systematically gathered wisdom, histories, skills, and expertise in cultural knowledge" (p. 561). Moe speaks of an Elder who provided wisdom that furthered her healing process: I want to speak about a very important elder, the late Bob Smoker, who helped me to reclaim my voice in this creative process. I first met Bob outside the Regina airport at the beginning of a nêhiyawêwin (Plains Cree language) songwriting process with my adopted auntie and uncle, Cheryl L'Hirondelle and Joseph Naytowhow. mosom, grandfather Bob was sent to pick me up when Cheryl and Joseph could not. There was an instant connection and only a few days later Bob adopted me into his family, gifting me a drum and old pow wow drum mallet he had used in his early years. From that moment, up until he passed, we kept in touch with weekly phone calls and in-person visits whenever possible. He became not only part of my kinship circle, but also an integral player in my creative and personal process (Clark 2020b, p. 32) Spiritual experiences, like the visit from coyote and the Northern relatives, help guide profound understandings that lead to finding a sense of purpose and value. "Sacred traditions and the elders who possess special teachings act as bridges to spiritual experiences and as facilitators for learning about spiritual matters" (Cajete 1994, p. 44). Moe affirms this as she writes: I called up mosom Bob the morning after I met coyote and the northern lights. The imprint from this meeting stayed with me as we spoke on the phone. While he listened to me tell my story, Bob shared his vision of light green and soft pink enveloping my body as I spoke. The green brought up the colour of wâwâhtêwa, northern lights, and the smoky light pink he described was the colour of prayer: the colour when earth and sun cross over in the sky at dusk and dawn. I also knew this pink as the colour of wild prairie roses and for me, they'd become a symbol of resilience, reminding me how to remain open and vulnerable even while growing amidst harsh alpine climates and dry prairie dirt. If these vulnerable pink prairie roses could survive such harsh conditions, so could I. He told me to pray hard, to trust in my voice and to keep going. With his guidance, and wahkohtowin, kinship, I renewed my strength in the process (Clark 2020b, p. 32) These relationships, which were formed farther along on Moe's path, hold a strong place for her, in the context of healing, creation and in her everyday life. In the same way, the relationship Moe had with her Métis grandfather as a child helped guide her appreciation of story, song, and land, which are foundational to her artistry today. 
Acknowledging these kinships, both past and present, helps to construct Moe's understanding of the science or alchemy of belonging. Cajete (1994) writes about the notion that one's knowing is not merely subjective, the process of learning does not transpire in isolation of others and their knowing. Simon et al. (2000) argue that "[ . . . knowing occurs . . . ] in relations within the process of 'a communicative act' they call 'pedagogical witnessing'" (Simon et al. 2000, p. 294). Accordingly, these interactions with Knowledge Keepers and Elders are essential for Moe's interpretation of healing and place. Further, Cajete holds that these relationships teach of respect and reciprocity through the passing of knowledge: " . . . So it goes, giving and receiving, giving and receiving stories-helping children remember to remember that the story of their community is really the story of themselves!" (p. 168). Moe states: The teachings I've received from both Bob Smoker and Grandpa Ron, among others, make up my medicine bundle, my creative toolkit, and continually inform my approach as an artist. I know through experience that I can seek out guidance when necessary, and I also know when to look inside for further support when challenges present themselves. Bob's unconditional love, his gentle words and continual kindness, were all gifts he proudly reiterated anytime we spoke, and they came through strong in these difficult moments. His words: "I'm gonna need you as much as you're gonna need me," will always resonate for me. This reciprocity is a return to helping oneself by helping others, and feeding the circle. In this way, our fires can stay lit and so can our connection to one another (Clark 2020b, p. 33) The teachings from Elders and Knowledge Keepers have helped Moe reinforce the importance of reciprocity in her relationships to land, human-kin, and other-than-human kin. Moe acknowledges her relationality and connection to these relationships in her creative practice and in her daily interaction with the animacies around. She explains that her body is not simply on the land, it is nested within the land: " . . . even when we are on the land there is a layer of witnessing and being witnessed" (Clark 2020a, p. 3) Healing that comes through relationality can be understood through the medicine wheel framework. From a nêhiyawak (Plains Cree) perspective, the medicine wheel centres the understanding that we are four-bodied people (mental, emotional, spiritual and physical bodied), and all our relationships are imbedded in this cycle of four: the four cardinal directions, four elements, four seasons, four stages of life, etc. (Naytowhow 2013). Moe relates to this framework as a guide for how to find wholeness and maintain balance within herself through her four bodies: physical, mental, emotional and spiritual. These reiterated number four is interwoven with all existence and cannot be separated. In the same way, all that is alive exists in tandem with one another. Articulating this belief, Regnier (1994) notes: "Nothing exists in isolation of the whole. Although parts are differentiated from one another, they are also interconnected with one another the way seasons are joined through the natural passage of time. Transition through the phases of life interconnects birth with death and infancy with adolescence. All creatures-the winged, the two-legged, the four-legged, and the swimmers-have their place and belong in the scheme of the whole. 
Through their interconnection, they establish balance in the universe" (p. 133). From this place of balance, Moe tends to her place in the circle and heeds the call of the Grandmothers. It is here that she stays connected to kinships, past, present and future: Grandmothers, ancient and alive arrived to light their messenger fires carriers of creation seeping through the cosmic cracks their circle feast called me back
Conceptualizing 'Feast'
The meaning of nourishment is personal, it can refer simply to food as nutrition, or, depending how it is conceptualized it can mean more; leading one to question: What nurtures and sustains our being? What fills our bodies, minds, and hearts with the required fuel to keep going: to discover more, to share, move, grow and dance? Do we become stronger with every feast? The answers can be reinvented countless times depending on one's interpretation. For many Indigenous Nations, feast transcends food. One legacy of colonial violence is the continued attempt to disconnect Indigenous peoples from their cultural practices. Moe speaks of feasting as an act of celebration and resistance. Not only does it represent strength through a " . . . cultural practice of honouring, remembering and reaffirming relationships with our ancestors, with one another and with the land" (Clark 2020b, p. 35), it extends beyond this to the feasting and nurturing of spirit: During the Idle No More movement, round dances swept across Turtle Island (Canada) as a tool for bringing people together to celebrate, connect and reinforce our communities through the act of song and circle dancing. These feasts were not necessarily tied to food but to the feasting of spirit and body. We assembled in mass groups to the beat of the drum and to our collective heart beats, dancing and calling out our place in communion with the land and with one another, as we held our hands together in a feast of spirit and body (Clark 2020b, p. 35) Feasting, in this way, is an embodied practice Moe creatively employs to resist the perpetual colonial violence that she and her ancestors, community and kin have experienced. Through her deliberately crafted expressions of song, story and music that embody a landbased and animate perspective, Moe creates provocative pieces of resistance (Clark 2020b). This practice, she explains, generates a sense of "affirmative action and participatory embodiment" (Clark 2020b, p. 35). The manifestation of this feasting, as embodied resistance, connects to Moe's experience with the northern lights (the Ancestors)-she adds: By acknowledging these animate relationships with sky world and our dancing Northern relatives, I was reaffirming connections to the relationships I cannot see, to my ancestors and to the teachings of the elders. My body merged with the darkness of night, I emptied my physical self out into the natural landscape, made an offering, and from this place I opened to receiving a vision. In this case, the vision came in the form of dancing grandmothers (Clark 2020b, p. 35) Their message was: Break your stillness, my child Be true to your spirit, be wild and take the beads Beads so red, red like blood pulsing through me beads so blue, blue like rivers renewing memory so with courage as my guide I took the beads from coyote's tongue stories left unspoken sear the root until they're sung These blue and red beads are illustrative of the Métis people's infinity flag, a resistance flag which features a white infinity symbol amidst a blue or red background: " . . . representative of our continued relationships to both Indigenous and non-Indigenous lineage" (Clark 2020b, p. 36). Now, holding the beads of her Métis identity, Moe is reminded of her belonging. To continue feeding the circle, she must nurture these beads through her connections. By taking the beads from coyote's tongue, Moe breaks her stillness and silence. 
The poem continues: Coyote circled round me his howling sounds resounding like all that came before my body opened like a doorway to the night I began to rise up in spirit and in choice suspended anti-gravity with roots reaching from my voice to where the silence dwells deep beneath me seeds quivered as they listened and rising, a wild rose glistened pale pink in prairie grasses medicines mends the inside so the violence can pass us by may we be bigger than the shadow for we are all constellations not just some consolation prize and in the gifts of kinship we thrive, we survive we're alive Rooted firmly in the presence of her kinship guides; coyote, the Grandmothers, her ancestors, Moe describes breaking through the silence to a place where her spirit is strengthened through voice.
With the rope loosened from coyote's throat, he howls, and as his voice breaks the silence he becomes the bridge: to the voices of the past, to the silence in between and to a future that is open and alive. Through these openings, Moe imagines seeds (Clark 2020b); when embodied and voiced, these seeds become planted, rooted in the ground. " . . . [They become] embodied memory, maps to our creation stories that intrinsically connect us to all of creation. These seeded stories return to the land and people as offerings, in a feast of song and poetry, where they continue to grow and heal" (Clark 2020b, p. 37). The arrival of coyote conveys a reminder of resistance and growth. Through this metaphoric story, we learn about Moe reaffirming and healing her Métis identity.
May the fire from my lips burn away the iron wire may the water from my eyes wash away the scars may the wind in my belly clear away the ash and may the earth in my heart wake up the spirit path Coyote came upon the turtles back in a dream I was sleeping so deeply like my people and now we're waking up
Conclusions
Moe's healing process is the sum of many parts, a circular collective envisioning, at the heart of which lives the sacred teaching of miskâsowin: finding one's sense of belonging; finding one's place within the circle. Through embodiment practices that weave a tapestry of relationality, creations unfold, in this case, the poetic story of coyote and the northern lights. Moe's healing is circular and in flux, made possible through "returning to breath and body" (Clark 2020b), and involves a process of remembering, embracing, and re-embracing relationships to kin, past, present and future. In this honouring of connection to the dynamic animacies that exist all around, Moe feasts relationships that help her grow. | 7,357 | 2021-04-01T00:00:00.000 | ["Education", "Art", "Philosophy"] |
Boosting Oxygen Reduction through Microenvironment Modulation to Enhance Mass Transportation
Electroreduction of oxygen driven by renewable electricity holds significant promise for the sustainable production of value-added hydrogen peroxide (H2O2). While water is a desirable source of protons and electrons for this reaction, its low gas solubility often limits the transportation of gas molecules and consequently leads to a large concentration overpotential, resulting in unsatisfactory energy efficiencies. Herein, a facile and effective strategy to promote the 2e− oxygen reduction reaction (ORR) through microenvironment modulation is presented. In this work, the specific aim is to facilitate oxygen transportation at the reaction interface, particularly at high rates. To achieve this, hydrophobic polytetrafluoroethylene (PTFE) particles are introduced into a catalyst layer containing amino-group-functionalized carbon nanotubes (CNT-NH2) as the ORR catalyst for H2O2 production. As a result, the PTFE-modified CNT-NH2-based gas-diffusion electrode (GDE) substantially improves the ORR activity. At 100 mA cm−2, the PTFE-modified CNT-NH2 achieves a high cathodic energy efficiency of 92%, nearly 1.5 times that of the pristine CNT-NH2-based GDE (63%). Detailed kinetic analysis reveals that this enhanced ORR performance is due to the enhanced oxygen transportation induced by the persistent hydrophobic microenvironment created by the PTFE-modified catalyst layer, which reduces the concentration overpotential during ORR.
Introduction
Hydrogen peroxide (H2O2), recognized as an environmentally benign chemical oxidant, holds considerable significance as a value-added chemical within the contemporary chemical industry.
Its versatile applications encompass sewage treatment, [1,2] propylene epoxidation, [3] paper disinfection, and bleaching. [4] Additionally, H2O2 shows great potential as a green energy carrier owing to its high energy density. [5] However, the existing large-scale production of H2O2, accounting for 95% of the total output, relies on the energy-intensive anthraquinone oxidation process. [8] In light of these limitations, the 2e− electrochemical oxygen reduction reaction (ORR), involving the direct reduction of O2 to H2O2 using renewable electrical energy, emerges as a promising and sustainable alternative for H2O2 production under mild operating conditions.
In electrochemical reactions, the measured overpotential often consists of three primary components: the activation overpotential, the concentration overpotential, and the Ohmic drop. Previous studies have primarily focused on mitigating the activation overpotential of ORR by employing strategies such as heteroatom doping [30,31] and defect engineering [18] to enhance the intrinsic activity of catalysts. In contrast, the limited solubility of O2 in H2O gives rise to a concentration overpotential caused by insufficient O2 mass transfer, which is the primary contributor restricting the ORR performance under practically relevant operating conditions. [32,33] To address this challenge, several approaches have been developed to enhance the O2 mass transfer at the electrode/electrolyte interface, such as the construction of porous liquids [34] and ionic liquid layers on the catalyst surface. [35]
At the reactor level, the gas-diffusion electrode (GDE) has emerged as a viable solution to circumvent the mass-transfer limitations arising from the inadequate solubility of O2 in the electrolyte. The GDE comprises a porous gas-diffusion layer (GDL) and a catalyst layer. The catalyst layer contacts the liquid, and the gas diffuses directly from the GDL to the catalyst layer and participates in the reaction without requiring dissolution into the electrolyte. [36] Therefore, maintaining the hydrophobicity of the catalyst layer and preventing it from being flooded by the electrolyte have become key. [39-41] For example, hydrophobic polytetrafluoroethylene (PTFE) was employed to modify the catalyst layer directly, wherein the carbon black catalyst was wrapped by PTFE through calcination. [38] In another work, PTFE was introduced onto the carbon fibers of the GDL by calcination, [39] followed by the loading of a carbon black catalyst. This modification substantially enhanced the utilization rate of O2 and the yield of H2O2. Although these studies appear to improve the performance of O2 reduction to H2O2, it remains to be understood how PTFE influences O2 mass transfer during the ORR. Whether electrochemical reactions in GDEs proceed through improved mass transfer of gaseous reactants at the solid-liquid-gas interface or through enhanced mass transfer of dissolved gas reactants at the conventional electrode-electrolyte interface remains controversial. [38-43] Furthermore, the pore structure of the GDE is altered by PTFE shrinkage and agglomeration during calcination; this blocks catalytically active sites, increases the activation overpotential, and further degrades the ORR performance. [39,44] In addition, the melting of the nonconductive PTFE onto the catalyst and the carbon fibers of the GDL during calcination reduces the electronic conductivity of the GDE, [39,45,46] thereby increasing the Ohmic overpotential during ORR. Moreover, the final state of the PTFE is affected by many factors during calcination, such as operation time and temperature, making accurate control of these modifications challenging. This lack of control may explain why the mass ratio of PTFE to catalyst exhibits no discernible correlation with the ORR performance. [46] Overall, owing to these inherent limitations, the current understanding of how PTFE modification enhances O2 mass transport, and of the subsequent impact of this enhancement on ORR performance, remains somewhat ambiguous and inconclusive. Hence, detailed mechanistic studies of the aforementioned interfacial modifications are needed to further optimize O2 mass transfer without sacrificing desirable properties such as high active-site density and electronic conductivity.
In our previous work, we found that amino-group-functionalized carbon nanotubes (CNT-NH2) exhibited improved performance for the 2e− ORR compared to oxidized CNTs, especially at high current densities. This improvement was attributed to their capacity to preserve the hydrophobicity of the catalyst layer. [47] Hence, we were intrigued by the potential influence of microenvironment engineering on ORR performance and by whether the performance of CNT-NH2 could be further improved.
In this study, taking CNT-NH2 as a model catalyst, we developed a straightforward microenvironment modulation strategy to enhance the 2e− ORR performance by directly mixing PTFE particles into the catalyst layer. Specifically, PTFE was dispersed into the catalyst ink used to prepare the catalyst layer of the GDE. With an optimized PTFE loading, we observed a significant increase in the cathodic energy efficiency for H2O2 formation at practically relevant rates, from 62% to 92% in a typical flow cell configuration, while maintaining a high H2O2 selectivity (≈80%). To elucidate the impact of PTFE on ORR, we carefully investigated the ORR kinetics in the presence of PTFE using a rotating ring-disk electrode (RRDE) system and performed a COMSOL simulation to understand the solid-liquid-gas contact state of CNT-NH2 with different hydrophobicities. We conclude that the promotion of H2O2 production by PTFE can be attributed to the fine liquid/gas balance achieved at the triple-phase interface, which enhances the mass transfer of gas-phase O2 and thereby significantly reduces the concentration overpotential, while also maintaining the hydrophobicity of the catalyst layer during the ORR process.
Characterization of the GDEs
Two GDEs were prepared for comparison. One used the pristine catalyst ink consisting only of CNT-NH2 as the catalyst, while the other used the same CNT-NH2 ink with the addition of PTFE particles (CNT-NH2/PTFE). First, the morphology of the CNT-NH2 was characterized using scanning electron microscopy (SEM). As shown in Figure 1a and S1, Supporting Information, the typical morphology of CNT-NH2 can be clearly observed on the GDE surface. Fourier-transform infrared (FTIR) spectroscopy was employed to identify the characteristic chemical groups within the CNT-NH2. As shown in the FTIR spectra in Figure 1b, the characteristic absorption bands at 3450 and 1633 cm−1 can be attributed to the amine functional groups on the CNTs. For CNT-NH2/PTFE, shown in Figure 1c and S1, Supporting Information, a similar CNT morphology was observed, with PTFE particles dispersed among the CNTs. The powder X-ray diffraction (PXRD) patterns of these two samples are presented in Figure 1d. CNT-NH2/PTFE showed a PXRD pattern similar to that of pure CNT-NH2, with characteristic CNT peaks at 2θ = 26°, 41°, and 54° corresponding to the (002), (100), and (004) reflections, respectively, indicating that the crystal structure of the CNTs is unaltered upon the addition of PTFE. Energy-dispersive X-ray spectroscopy (EDS) element mapping images of CNT-NH2/PTFE in Figure 1e indicate the presence of carbon, nitrogen, and fluorine, with fluorine distributed evenly throughout the sample. Taken together, these physical characterization results confirm the successful dispersion of PTFE particles into the catalyst layer of CNT-NH2/PTFE without introducing any morphological or structural changes to the CNT-NH2 catalyst.
ORR in GDE Cell
Subsequently, we employed a typical flow cell to evaluate the ORR performance after adding PTFE into the catalyst layer. As shown in Figure 2a (Figure S2 and S3, Supporting Information), at various current densities ranging from 20 to 200 mA cm−2, the ORR overpotential was reduced significantly (by over 100 mV at each current density) on CNT-NH2/PTFE compared to CNT-NH2. For instance, a potential of 0.80 V versus the reversible hydrogen electrode (RHE) was obtained on the CNT-NH2/PTFE electrode at a current density of 200 mA cm−2, which is comparable to that of CNT-NH2 at only 20 mA cm−2. This indicates the superior ORR activity of the CNT-NH2/PTFE electrode compared to the CNT-NH2 electrode. In addition, the selectivity of O2 reduction to H2O2 on these two electrodes was analyzed. Figure 2b presents the Faradaic efficiency (FE) of H2O2 at 100 mA cm−2, revealing a higher FE for H2O2 on the CNT-NH2/PTFE electrode (77%) compared to the CNT-NH2 electrode (73%). This represents another improvement for ORR upon the introduction of PTFE particles. With these data, we estimate the cathodic energy efficiency of the CNT-NH2/PTFE electrode to be 92%, substantially higher than that obtained on the CNT-NH2 electrode (63%) at 100 mA cm−2. Overall, these results demonstrate that the 2e− ORR of O2 to H2O2 on the CNT-NH2 electrode can be promoted significantly by simply introducing hydrophobic PTFE particles into the catalyst layer.
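As a rough illustration of the bookkeeping behind these numbers, the sketch below (not taken from the paper; all values are placeholders) shows how a Faradaic efficiency for H2O2 can be computed from the amount of H2O2 collected and the total charge passed, using the fact that each H2O2 molecule requires two electrons.

```python
F = 96485.0  # Faraday constant, C mol^-1

def faradaic_efficiency_h2o2(n_h2o2_mol, charge_C):
    """FE = 2 * F * n(H2O2) / Q, since each H2O2 molecule requires two electrons."""
    return 2.0 * F * n_h2o2_mol / charge_C

charge = 0.100 * 1800.0                           # 100 mA on a 1 cm^2 GDE for 30 min -> 180 C
fe = faradaic_efficiency_h2o2(7.2e-4, charge)     # 0.72 mmol H2O2 collected (placeholder value)
print(f"FE(H2O2) = {fe:.0%}")                     # about 77% with these illustrative numbers
```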
To gain insight into the potential enhancement of O2 transport within the catalyst layer, the ORR activities of the two electrodes were compared under various O2 gas flow rates, as shown in Figure 2c (Figure S4 and S5, Supporting Information). The pure CNT-NH2 electrode shows a weak dependence of the applied potential on the O2 flow rate, with a slight increase from 0.69 to 0.72 V versus RHE as the flow rate increased from 1 to 5 sccm at a fixed current density of 100 mA cm−2. In contrast, the potential on the CNT-NH2/PTFE electrode exhibits a distinct trend, rapidly increasing from 0.71 to 0.84 V versus RHE as the O2 flow rate reached 5 sccm, after which it plateaus. As suggested previously, a higher O2 flow rate results in more dissolved oxygen at the reaction interface. [36] Hence, if the ORR in the GDE primarily occurred through reactions with dissolved O2 at the traditional electrolyte-electrode interface, a notable dependence of electrode potential on O2 flow rate would be observed for both the CNT-NH2/PTFE and CNT-NH2 electrodes. The ORR kinetics of the two electrodes were further compared using polarization curves and electrochemical impedance spectroscopy (EIS). The Tafel slopes are similar on the CNT-NH2/PTFE and CNT-NH2 catalysts, suggesting that the ORR mechanism was not changed by adding PTFE into the catalyst layer (Figure S6b, Supporting Information). EIS studies show a lower O2 mass-transport resistance (Rmt) on the CNT-NH2/PTFE catalyst (Figure S7, Supporting Information). The correlation between frequency and the imaginary part of the complex capacitance (C″(ω)) was calculated to compare the ion-transport resistance. [48] In Figure S8, Supporting Information, the mild difference between CNT-NH2/PTFE and CNT-NH2 at various potentials is indicative of similar ion-transport resistances. Taken together, we show that the introduction of hydrophobic PTFE particles into the catalyst layer enhances the ORR performance by facilitating accelerated gaseous O2 mass transport at the triple-phase interface.
The electrochemically active surface areas of the CNT-NH2/PTFE and CNT-NH2 electrodes were estimated by measuring their corresponding electrochemical double-layer capacitance (Cdl). As shown in Figure 2d, CNT-NH2/PTFE exhibits an even smaller Cdl (3.88 mF cm−2) than CNT-NH2 (4.71 mF cm−2). The electric Cdl arises from the interactions between the GDE surface charges and the counter ions within the electrolyte. Therefore, the smaller Cdl observed for the CNT-NH2/PTFE electrode indicates fewer solid-liquid interfaces and, in turn, more triple-phase interfaces. These results demonstrate that the PTFE dispersed in the catalyst layer promotes the O2 mass-transport process by creating a hydrophobic microenvironment, resulting in dominant solid-liquid-gas interfaces, which is desirable for efficient ORR.
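For readers unfamiliar with the Cdl procedure, the following minimal sketch (illustrative data only, not the authors' script) shows the usual workflow: record cyclic voltammograms at several scan rates in a non-Faradaic potential window and take Cdl as the slope of the capacitive current versus the scan rate.

```python
import numpy as np

scan_rates = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])             # V s^-1
currents = np.array([0.039, 0.078, 0.155, 0.233, 0.310, 0.388]) * 1e-3  # A cm^-2, synthetic

# The capacitive current obeys i = Cdl * v, so Cdl is the slope of a linear fit of i vs. v.
slope, intercept = np.polyfit(scan_rates, currents, 1)
print(f"Cdl ≈ {slope * 1e3:.2f} mF cm^-2")   # ~3.88 mF cm^-2 for this toy data set
```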
The hydrophobicity of the GDE surface induced by the PTFE particles was further evaluated by contact-angle measurements. As shown in Figure 2e, the CNT-NH2/PTFE electrode exhibits a contact angle of 147.6°, slightly larger than that of the CNT-NH2 electrode (145.6°). At first glance, this indicates that the apparent hydrophobicity of the GDE is not increased by the introduction of PTFE into the catalyst layer. However, it should be noted that the hydrophobicity of the catalyst layer gradually changes during the electrochemical reaction. In fact, GDE surfaces tend to become more hydrophilic as the electrochemical reaction proceeds, which causes the electrolyte to flood the catalyst layer, thereby inhibiting the mass transfer of gas-phase reactants and reducing the reaction rate. To assess whether the high hydrophobicity of these GDEs is retained after catalysis, we remeasured the contact angles of the CNT-NH2/PTFE and CNT-NH2 electrodes after ORR at 50 mA cm−2 for 1 h. As shown in Figure 2e, the contact angle of the CNT-NH2/PTFE electrode decreased only slightly, from 147.6° to 140.0°, after the reaction. In contrast, the CNT-NH2 electrode exhibited a significant decrease in contact angle, from 145.6° to 85.0°, under the same conditions. The stability of the catalyst was further evaluated by chronopotentiometry at a fixed current density of 100 mA cm−2. As shown in Figure S9, Supporting Information, the cathode potential of CNT-NH2/PTFE was consistently maintained around 0.75 V throughout more than 8 h of operation, with an approximately 75% FE observed for the entire duration of the test. In contrast, a decrease in H2O2 FE was observed for bare CNT-NH2 after 6 h. Meanwhile, as shown in Figure S10-S12, Supporting Information, negligible changes in morphology are observed for both CNT-NH2/PTFE and CNT-NH2 after the reactions, indicating that no obvious changes occurred to the catalyst itself. Therefore, we believe that the hydrophobic PTFE acts as a protective element, preserving the hydrophobic nature of the catalyst layer during the reaction. This preservation helps maintain the solid-liquid-gas interface, facilitating the mass transfer of O2 during ORR and reducing the concentration overpotential.
Furthermore, to evaluate the general applicability of the PTFE effect, we measured the ORR performance of a PTFE-modified GDE based on a commercial Pt/C catalyst (the state-of-the-art 4e− ORR electrocatalyst). Similarly, two types of Pt/C electrodes were prepared: a Pt/C/PTFE electrode with PTFE particles introduced into the catalyst ink, and a pristine Pt/C electrode. These two GDEs also showed nearly identical morphologies (Figure S13, Supporting Information). As shown in Figure 2f (Figure S14 and S15, Supporting Information), once again, Pt/C/PTFE exhibits substantially lower overpotentials for driving the ORR across a broad range of current densities, from 20 to 300 mA cm−2. The difference is even more pronounced at large current densities, suggesting the dominant effect of O2 mass transport within the conventional design. This result confirms that our strategy can be extended to systems with different catalysts, and potentially to different gas-involving electrochemical processes.
Investigating the PTFE Effect using RRDE
Although the catalytic interface in the RRDE configuration is not exactly identical to that in a flow cell, the RRDE enables robust kinetic studies by precisely controlling the convective flow for efficient mass transport. This is beneficial for studying the influence of the hydrophobic microenvironment created by PTFE on the kinetics of ORR. We first performed RRDE voltammetry measurements on catalysts with different loadings of PTFE. Figure 3a presents the recorded ORR polarization curves on CNT-NH2 with an increasing PTFE loading from 10% to 60%, using 0.1 M KOH as the electrolyte and a rotation rate of 1600 rpm. In the absence of PTFE, a normal S-shaped profile is obtained with a plateau current density of ≈3.3 mA cm−2 due to O2 mass-transport limitations. However, the plateau current density for ORR increases when PTFE is added to the catalyst layer. Specifically, the plateau current density reaches 5.4 mA cm−2 for a PTFE loading of 10% and 4.6 mA cm−2 for a higher PTFE loading. As shown in Figure S16a, Supporting Information, the selectivity of the 2e− ORR on CNT-NH2 with PTFE is lower than on pure CNT-NH2. This result seems inconsistent with the results measured in the flow cell. However, it should be noted that there is little difference in the ring current (Figure S16b, Supporting Information), i.e., the amount of H2O2 detected by the RRDE does not differ significantly, whereas the disk currents on PTFE-modified CNT-NH2 are even higher. This implies that on PTFE-modified CNT-NH2, the H2O2 generated on the disk of the RRDE might be further reduced immediately, before it can be transferred to the ring, resulting in a higher disk current. This indirectly proves that the CNT-NH2 catalyst modified with PTFE has higher activity for the 2e− ORR. As shown in Figure 3b, CNT-NH2 catalysts with PTFE exhibit a more positive half-wave potential (E1/2) compared to that without PTFE. Furthermore, the E1/2 increases from 0.63 to 0.65 V versus RHE with increasing PTFE mass ratio. This observation is further supported by the comparison of kinetic current densities for ORR on CNT-NH2 with different PTFE loadings (Figure S17, Supporting Information). These results align with the improved performance observed in the flow cell experiments.
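One common way to obtain the kinetic current density mentioned above is a mass-transport correction of the measured polarization data; whether the authors used exactly this expression is an assumption on our part, and the numbers below are illustrative only.

```python
def kinetic_current_density(j_measured, j_limiting):
    """Mass-transport-corrected kinetic current density, j_k = j * j_lim / (j_lim - j)."""
    if abs(j_measured) >= abs(j_limiting):
        raise ValueError("measured current must lie below the diffusion-limited plateau")
    return j_measured * j_limiting / (j_limiting - j_measured)

# Example: 2.5 mA cm^-2 measured at a given potential with a 5.4 mA cm^-2 plateau (placeholders)
print(f"j_k ≈ {kinetic_current_density(2.5, 5.4):.2f} mA cm^-2")
```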
To verify that the aforementioned improvement in ORR performance is primarily attributed to the optimized microenvironment rather than other factors, such as the reaction pathway, morphology, or conductivity, we first conducted Tafel analysis of ORR on the CNT-NH2 catalysts with different PTFE loadings (Figure S18, Supporting Information). Similar Tafel slopes were obtained for ORR on the catalyst with and without PTFE, about 60 mV dec−1 in each case. This Tafel slope indicates that the rate-determining step of ORR is likely the initial protonation of the surface-adsorbed O2. Furthermore, we compared the electrochemical Cdl and the morphology of CNT-NH2 with and without PTFE. As expected, similar Cdl values were observed on all CNT-NH2 catalysts (Figure S19, Supporting Information). Taken together, we conclude that negligible changes occurred upon the addition of PTFE into the catalyst layer with regard to the catalyst morphology/structure and the ORR reaction pathway. In addition, EIS was employed to analyze the conductivity, charge transfer, and O2 mass-transport kinetics of the aforementioned catalysts. As shown in Figure 3c, the EIS spectra of CNT-NH2 under ORR conditions consist of two semicircles, in the high- and low-frequency regions, respectively. The semicircle located in the high-frequency region is likely associated with the ORR charge-transfer resistance (Rct), while the one in the low-frequency region can be attributed to O2 mass transport. [49] Consequently, the fitted solution resistance (Rs), Rct, and O2 Rmt of the CNT-NH2 samples are compared in Figure 3d. At a given potential, nearly identical solution resistances (around 43 Ω) were obtained for the different samples, indicating a negligible difference in electrode conductivity/solution resistance. Similarly, the estimated Rct of ORR on pure CNT-NH2 is similar to those with PTFE, suggesting that the introduction of PTFE into the catalyst layer does not affect the electron-transfer rate during ORR on CNT-NH2. In contrast, the Rmt decreases dramatically upon the addition of PTFE, i.e., from 111.5 to 19.3 Ω with only 10% PTFE added into the CNT-NH2 catalyst layer. At larger overpotentials (in the O2-diffusion-limited region), a notably smaller Rmt was observed on the PTFE-modified CNT-NH2 compared to the pristine CNT-NH2 electrode (Figure S20, Supporting Information). Additionally, we analyzed the ion-transport resistance by plotting frequency against C″(ω), revealing only minor differences between CNT-NH2/PTFE with various PTFE mass ratios and pristine CNT-NH2 at different potentials, as depicted in Figure S21, Supporting Information. These EIS results further confirm that the enhanced ORR performance of CNT-NH2/PTFE can be attributed to the improved O2 mass transport facilitated by the hydrophobic microenvironment induced by PTFE, with no significant impact on resistances related to ion transport, charge transport, or conductivity, consistent with the EIS results in the flow cell at relatively high current densities. Moreover, this is supported by a comparison of the O2 uptake rates (υO2) on CNT-NH2 with varying PTFE loadings. As shown in Figure 3e, the CNT-NH2 catalyst exhibits a moderate υO2 of 1.79 × 10^−4 mmol cm−2 s−1 in the absence of PTFE. Upon the addition of PTFE, υO2 increases with the PTFE loading, reaching a maximum of 2.46 × 10^−4 mmol cm−2 s−1 at 60% PTFE.
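A Tafel slope of this kind is typically extracted by a linear fit of potential against the logarithm of the kinetic current density; the short sketch below uses synthetic data merely to show what a slope near 60 mV dec−1 looks like numerically.

```python
import numpy as np

# Synthetic kinetically controlled data: roughly a tenfold increase in j_k per 60 mV
E = np.array([0.80, 0.77, 0.74, 0.71, 0.68])    # V vs. RHE
j_k = np.array([0.10, 0.32, 1.0, 3.2, 10.0])    # mA cm^-2

slope_V_per_decade, _ = np.polyfit(np.log10(j_k), E, 1)
print(f"Tafel slope ≈ {abs(slope_V_per_decade) * 1e3:.0f} mV dec^-1")   # ~60 for this toy data
```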
Overall, based on the aforementioned RRDE data, we believe the introduction of PTFE particles into the catalyst layer does not alter the catalytic mechanism, the intrinsic activity of the catalyst, and the conductivity of the electrode.Hence, the enhanced ORR performance can be attributed to the facilitated O 2 mass transfer at the reaction interface.As shown in Figure 3f, the presence of PTFE could likely mitigate the concentration gradient of O 2 between electrode surface (x = 0) and bulk electrolyte (x = δ), and further reduce the concentration overpotential of ORR.However, the mass transport of O 2 has been optimized in the RRDE configuration.In addition, the ORR at the electrode/electrolyte interface in RRDE relies on dissolved O 2 .Hence, the PTFE effect on ORR performance in RRDE system is not as significant as that in the flow cell.In particular, the reduction in the apparent E 1/2 is only 22 mV on CNT-NH 2 /PTFE compared with CNT-NH 2 , while the reduction in ORR overpotential is over 130 mV at 100 mA cm À2 in the flow cell after the introduction of PTFE to the same catalyst.Once again, this implies that the improved ORR performance on CNT-NH 2 /PTFE in the flow cell configuration is primarily attributed to the enhanced gas-phase O 2 mass transport rather than dissolved O 2 .
Effect of PTFE Loading on the Microenvironment
To achieve an optimal gas/solid/liquid triple-phase microenvironment inside the catalyst layer, the loading of the PTFE particles was optimized. Figure 4a and S22, Supporting Information, show the electrode potential on CNT-NH2/PTFE with various PTFE mass ratios at 100 mA cm−2. The electrode potential exhibits a volcano-type relationship with the PTFE mass ratio: the potential first increased from 0.69 to 0.83 V versus RHE as the PTFE mass ratio increased from 0% to 40%, and then dropped to 0.66 V versus RHE when the PTFE mass ratio was further increased to 60%. A similar trend was observed for the ORR selectivity (Figure 4b): the FE of H2O2 also first increased with the PTFE loading but declined at the high PTFE mass ratio of 60%. Consequently, as shown in Figure 4a, the cathodic energy efficiency also shows a strong dependence on the PTFE mass ratio. As expected, a maximum cathodic energy efficiency of 92% is obtained with 40% PTFE loaded into the catalyst layer. These results suggest that the ORR energy efficiency can be improved substantially by reducing the concentration overpotential, and one effective strategy is to introduce hydrophobic (i.e., PTFE) particles directly into the catalyst layer.
We also measured the contact angles of the CNT-NH2 electrodes with different PTFE mass ratios (Figure S23, Supporting Information). Consistent with the previous results, the contact angles measured on the GDEs before the ORR test are not significantly altered by the PTFE loading. However, different contact angles were observed for the CNT-NH2 electrodes with different PTFE loadings after the ORR test. As shown in Figure 4c, at all the PTFE loadings studied, the hydrophobicity of the electrode layer was largely retained after ORR.
To understand the interplay among electrode hydrophobicity, gas/liquid-phase balance, and ORR performance, we constructed an illustration based on our system and results, as shown in Figure 4d. First, the hydrophobicity of the electrode surface during the reaction can be retained by adding PTFE particles to the catalyst layer, as verified by the aforementioned contact-angle measurements. In addition, the gas/liquid-phase balance is affected by the hydrophobicity of the electrode. In the absence of PTFE, the electrode tends to become more hydrophilic and the ORR occurs primarily at the electrolyte-electrode interface, predominantly involving dissolved O2 in the electrolyte. In this scenario, the ORR performance is largely constrained by the low solubility of O2, leading to significant concentration overpotentials, particularly at high reaction rates. Conversely, the hydrophobicity of the electrode surface is well maintained in the presence of PTFE, ensuring that the ORR occurs predominantly at the solid-liquid-gas interface. In this scenario, gaseous O2 can easily participate in the ORR at the triple-phase interface, avoiding the limitation imposed by the low solubility of O2 and reducing the concentration overpotential. However, when an excess amount of PTFE is loaded onto the catalyst surface, the hydrophobic nature of the PTFE repels the liquid electrolyte, sacrificing the optimized triple-phase boundaries for ORR and impeding the ORR performance. Consequently, the cathodic energy efficiency exhibits a volcano-type dependence on the PTFE loading (Figure 4e). To probe this hypothesis, computational fluid dynamics (CFD) simulations were conducted to analyze the solid-liquid-gas contact state for CNT-NH2 with different hydrophobicities. Figure 4f,g shows the formation of a continuous gas layer on the surface of CNT-NH2/PTFE with high hydrophobicity, while a discontinuous gas layer forms above the surface of CNT-NH2 with low hydrophobicity. On the one hand, although the O2/H2O volume fraction at the oxygen inlet is close to 1 in both scenarios, the O2/H2O volume fraction farther from the O2 inlet on the catalyst surface is higher when the surface possesses a higher hydrophobicity. O2 transported to the more hydrophobic catalyst surface tends to form a dense gas layer instead of diffusing into the electrolyte, resulting in a higher O2 concentration on the catalyst surface and a reduction in the concentration overpotential. On the other hand, when the catalyst surface is less hydrophobic, the catalyst contacts the electrolyte directly. Thus, the CNT-NH2 without PTFE modification is flooded more easily than that modified by PTFE, which rationalizes its smaller contact angle after the reaction. In addition, the potential influence of surface pressure stemming from hydrophobicity was investigated by CFD. As shown in Figure S24, Supporting Information, the surface pressure on the CNT-NH2/PTFE electrode is approximately 3 × 10^6 Pa, slightly smaller than that observed on the pristine CNT-NH2 electrode (≈4 × 10^6 Pa). This result indicates that the impact of surface pressure stemming from hydrophobicity is likely minimal. Taking these results together, we have achieved improved 2e− ORR performance upon the introduction of PTFE using our approach compared to previous works (Figure S25, Supporting Information). Overall, an appropriate amount of PTFE particles is critical for maintaining the hydrophobicity of the catalyst layer and ensuring an optimal balance of the triple-phase interfaces for efficient ORR.
Conclusion
In summary, we demonstrate a facile and effective strategy for enhancing the performance of the 2e⁻ ORR for the production of H2O2 via microenvironment modulation. Encouragingly, by simply adding hydrophobic PTFE particles into the catalyst layer, the energy efficiency of O2 reduction to H2O2 can be improved substantially, together with a modest increase in product selectivity. Detailed kinetic analysis reveals that the improved ORR performance is largely attributable to the enhanced gas-phase O2 transport, which significantly reduces the concentration overpotential during ORR. We believe that the hydrophobic microenvironment modulation induced by the PTFE particles in the catalyst layer is the origin of this promotion effect. Furthermore, the PTFE loading in the catalyst layer is critical for optimizing the balance of the triple-phase interface to accelerate the ORR. Overall, we believe our strategy of modulating the hydrophobic microenvironment of the catalyst layer can be applied to broader research directions that rely on gaseous reactants.
Preparation of GDE for ORR in the Flow Cell: The incorporation of PTFE into the catalyst layer is a physical mixing process. The catalyst suspension ink was prepared by dispersing 10 mg of catalyst (CNT-NH2 or Pt/C), different mass ratios of PTFE powder, and 50 μL of 5% Nafion in 950 μL of an isopropanol (610 μL) and water (340 μL) mixture. The ink was further sonicated for 15 min. The catalyst suspension ink was then sprayed onto carbon paper with a spray gun to serve as the GDE, with a loading amount of around 0.7 mg cm⁻². The prepared GDE was then dried at room temperature before use.
Preparation of Electrode for ORR in the RRDE: An RRDE setup (RRDE-3A, ALS Co. Ltd) containing a glassy carbon disk electrode and a platinum ring with a collection efficiency of 0.38 was used. A 7.5 μL aliquot of the same catalyst (CNT-NH2 or Pt/C) ink as that used for GDE preparation was pipetted onto a clean glassy carbon disk to reach a loading amount of around 0.3 mg cm⁻², and then dried in the fume hood.
Materials Characterization: SEM was carried out on a JEOL JEM-7610M at 5 kV equipped with an EDS detector operated at 15 kV. Powder X-ray diffraction (PXRD) patterns were recorded on a Bruker D8-Advance X-ray powder diffractometer operated at 40 kV and 30 mA with Cu Kα radiation (λ = 1.5406 Å). FTIR spectra were recorded using a VERTEX 70 FTIR spectrometer.
Electrochemical Measurements: In the flow cell, the electrochemical performance was evaluated using a three-electrode system on a Biologic VMP3e electrochemical workstation. The prepared GDE, a Pt sheet, and a Ag/AgCl electrode served as the working, counter, and reference electrodes, respectively. The geometric surface area of the working electrode was fixed to 1 cm². The catholyte and anolyte were both 1.0 M KOH, and the flow rates were both 1.5 mL min⁻¹. The cathode side was supplied with O2 gas at various flow rates. For the RRDE test, the prepared RRDE with catalysts, a carbon rod, and a Hg/HgO electrode were used as the working, counter, and reference electrodes, respectively. The ORR performance measurement was performed in O2-saturated 0.1 M KOH aqueous solution. LSV curves were recorded at a scan rate of 10 mV s⁻¹ and an electrode rotation speed of 1600 rpm with a ring potential of 1.48 V versus RHE. Tafel plots were recorded using the LSV method from 0.95 to 0.8 V versus RHE at a scan rate of 2 mV s⁻¹.
The electron transfer number (n) in the RRDE was calculated as n = 4 I_d/(I_d + I_r/N), where I_d and I_r are the currents recorded on the glassy carbon disk and Pt ring of the RRDE, respectively, and N is the collection efficiency (0.38).
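The relations referenced here are the standard RRDE expressions; purely as an illustration (not code from this work's supporting information, and with variable names of our choosing), the electron transfer number and the corresponding H2O2 selectivity can be computed from the disk and ring currents as follows.

```python
# Illustrative calculation of standard RRDE quantities (not code from the paper).
# I_d: disk current (A), I_r: ring current (A), N: collection efficiency (0.38 here).

def electron_transfer_number(i_disk, i_ring, n_collection=0.38):
    """Apparent electron transfer number: n = 4 * I_d / (I_d + I_r / N)."""
    return 4.0 * i_disk / (i_disk + i_ring / n_collection)

def h2o2_selectivity(i_disk, i_ring, n_collection=0.38):
    """H2O2 selectivity in percent: 200 * (I_r / N) / (I_d + I_r / N)."""
    return 200.0 * (i_ring / n_collection) / (i_disk + i_ring / n_collection)

if __name__ == "__main__":
    # Hypothetical example currents (A), chosen only so the ratios are well defined.
    i_d, i_r = 1.0e-3, 1.7e-4
    print(f"n = {electron_transfer_number(i_d, i_r):.2f}")
    print(f"H2O2 selectivity = {h2o2_selectivity(i_d, i_r):.1f} %")
```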
Kinetic current density (j_k) was calculated according to the corresponding equation. C_dl is the electrochemical double-layer capacitance, derived from the slope of a linear fit of the difference between the cathodic and anodic current densities against the scan rate, using cyclic voltammograms recorded in a non-Faradaic potential window.
The oxygen uptake rate (υ_O2) was calculated from the corresponding relation, where FE(H2O2) is the Faradaic efficiency of H2O2 and j_d is the current density on the glassy carbon disk of the RRDE. The solution resistance (R_s), R_ct, and the O2 mass-transport resistance (R_mt) were obtained by fitting the Nyquist plots to the equivalent circuit using ZView software.
H2O2 Quantification and Calculation of Cathodic Energy Efficiency: After ORR electrocatalysis, the amount of generated H2O2 was determined by the standard potassium permanganate titration process according to the corresponding reaction equation. The FE of H2O2 was calculated from the titrated amount of H2O2 and the charge passed. The cathodic energy efficiency was calculated using the corresponding equation, where E^Θ is the standard electrode potential for O2 reduction to H2O2, E is the electrode potential measured experimentally, and FE is the Faradaic efficiency of H2O2.
COMSOL Multiphysics Simulation: COMSOL Multiphysics 5.6 was employed to perform computational analyses of surface-tension-dominated two-phase flows. [50] We systematically adjusted the hydrophobicity of the electrode by manipulating the contact angle of the material within an oxygen-water system. The contact-angle values were determined from the contact-angle measurements: specifically, a contact angle of 140° was employed for CNT-NH2/PTFE, while a contact angle of 85° was used for pure CNT-NH2. The continuity and momentum equations governing the flow field are expressed as follows [51]: ∂ρ/∂t + ∇·(ρu) = 0 and ∂(ρu)/∂t + ∇·(ρuu) = −∇P + ∇·τ + ρg + F (9), where ρ (kg m⁻³) is the density, u (m s⁻¹) is the velocity (in the y direction), t (s) is the time, P (Pa) is the pressure, τ is the deviatoric stress tensor for a Newtonian fluid, F (N m⁻³) is the surface tension force per unit volume, g (m s⁻²) is the acceleration due to gravity, μ (N s m⁻²) is the dynamic viscosity, T (K) is the temperature (293.1 K in this work), σ (N m⁻¹) is the surface tension coefficient (0.072 N m⁻¹ in this work), k (m⁻¹) is the interfacial curvature, n is the interfacial unit normal vector, and δ is the delta function centered at the interface.
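Returning to the H2O2 quantification step above, a worked illustration is given below. It assumes the usual permanganate-peroxide stoichiometry (2 MnO4⁻ consumes 5 H2O2) and Faraday's law with two electrons per H2O2; the titration volume, concentration and electrolysis conditions in the example are hypothetical, not values reported in this work.

```python
# Hypothetical worked example: Faradaic efficiency of H2O2 from a KMnO4 titration.
# Assumes 2 MnO4- reacts with 5 H2O2 (mol H2O2 = 2.5 * mol KMnO4) and 2 e- per H2O2.
F = 96485.0  # Faraday constant, C mol^-1

def h2o2_moles_from_titration(c_kmno4, v_kmno4_L):
    """Moles of H2O2 titrated by KMnO4 of concentration c (mol L^-1) and volume v (L)."""
    return 2.5 * c_kmno4 * v_kmno4_L

def faradaic_efficiency(n_h2o2, current_A, time_s):
    """FE = 2 * F * n(H2O2) / Q, with Q the total charge passed."""
    charge = current_A * time_s
    return 2.0 * F * n_h2o2 / charge

if __name__ == "__main__":
    n = h2o2_moles_from_titration(c_kmno4=0.02, v_kmno4_L=0.0185)  # hypothetical titration
    fe = faradaic_efficiency(n, current_A=0.1, time_s=3600.0)       # e.g. 100 mA for 1 h
    print(f"n(H2O2) = {n * 1e3:.3f} mmol, FE = {fe * 100:.1f} %")
```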
Figure 1 .
Figure 1. a) Scanning electron microscope (SEM) images and b) Fourier-transform infrared (FTIR) spectra of the pristine amino-group-functionalized carbon nanotube (CNT-NH2) sample. c) SEM images of the CNT-NH2/polytetrafluoroethylene (PTFE) sample. PTFE particles are circled in yellow. d) Powder X-ray diffraction (PXRD) pattern comparison of the CNT-NH2/PTFE and pure CNT-NH2 samples. e) Corresponding energy-dispersive X-ray spectrometer (EDS) element mapping images of the CNT-NH2/PTFE sample taken from the SEM image in Figure 1c.
Figure 2 .
Figure 2. a) Potentials for the oxygen reduction reaction (ORR) on CNT-NH2/PTFE and CNT-NH2 electrodes at various current densities with an O2 flow rate of 5 sccm. b) Faradaic efficiency (FE) of H2O2 and cathodic energy efficiency on CNT-NH2/PTFE and CNT-NH2 electrodes at 100 mA cm⁻². c) Potentials for ORR at various O2 flow rates on CNT-NH2/PTFE and CNT-NH2 electrodes at 100 mA cm⁻². d) C_dl fitting plots of CNT-NH2/PTFE and CNT-NH2 electrodes. e) Contact-angle measurement images of CNT-NH2/PTFE and CNT-NH2 electrodes before and after reaction. f) Potentials for ORR on Pt/C/PTFE and Pt/C electrodes at various current densities with an O2 flow rate of 5 sccm.
Figure 3 .
Figure 3. a) Rotating ring-disk electrode (RRDE) voltammograms collected on CNT-NH2/PTFE with different PTFE mass ratios at a scan rate of 10 mV s⁻¹ and an electrode rotation rate of 1600 rpm in O2-saturated 0.1 M KOH solutions. b) E_1/2 comparison. c) Nyquist plots of the CNT-NH2/PTFE samples (inset is the equivalent circuit). d) Relationship of R_s, R_ct, and R_mt with PTFE mass ratio at 0.7 V versus RHE. e) Relationship between O2 uptake rate and PTFE mass ratio at 0.65 V versus RHE. f) Schematic illustration of the reduced concentration overpotential caused by the enhanced O2 mass transport.
Figure 4 .
Figure 4. a) Potentials and FEs of H2O2, and b) cathodic energy efficiencies for ORR at 100 mA cm⁻² on CNT-NH2/PTFE electrodes with various PTFE mass ratios added into the catalyst layer. c) Difference in contact angle (Δθ) of CNT-NH2/PTFE electrodes with various PTFE mass ratios before and after 1 h of electrolysis at 100 mA cm⁻². d) Relationships among Δθ, R_mt, and cathodic energy efficiency on CNT-NH2/PTFE electrodes with various PTFE mass ratios. Error bars represent the standard deviation of three independent replicate experiments. Simulation results of the solid-liquid-gas triple interface of f) CNT-NH2/PTFE and g) CNT-NH2 electrodes. | 8,726.2 | 2023-10-17T00:00:00.000 | [
"Engineering"
] |
Chained Data Acquisition and Transmission System Prototype for Cabled Seafloor Earthquake Observatory
Seafloor observatories can provide long-term, real-time submarine monitoring data, which is of great significance for major scientific questions in marine science, especially seafloor earthquake observation. A chained submarine data sampling and transmission system is the prototype and foundation of cabled seafloor earthquake observatories. This paper designs and builds a chained data sampling and transmission system (SQSTS) based on the Zynq-7000 SoC (system on chip) and clock synchronization. First, we realized high-precision (24-bit) submarine data sampling based on the Zynq-7000 SoC and the ADS1256. Using the PPS (pulse-per-second) signal provided by the P88 1588 PTP (Precision Time Protocol) clock synchronization board and the internal crystal oscillator of the Zynq-7000 SoC, a time stamp with microsecond resolution is attached to the seismic data sampled at each seismometer node, which supports the subsequent inversion of the seismic data. In addition, a high-speed data transmission link connecting the nodes of SQSTS, based on gigabit transceivers and optical cable, has been investigated. The transmission link has been realized using the Aurora IP core. Theoretical calculations indicate that the data transmission bus bandwidth can reach 4 Gbps, and its reliability has been proved by experiments. The experimental results show that the system offers high data sampling accuracy and stable, reliable high-speed transmission, and has promising application prospects.
Introduction
As the largest area on the Earth, the ocean is of great significance for regulating the global environment and climate and for maintaining the global ecosystem. Abundant natural resources and economic benefits are obtained by the reasonable exploration and utilization of the ocean [1,2]. However, frequent marine disasters pose a considerable threat to human life and property in marine activities. Therefore, the forecasting of marine disasters has become a key issue in marine research. On this basis, researchers have pointed out that the establishment of seafloor observatories could solve this problem effectively [3]. Seafloor observatories can monitor the ocean's internal environment stably over long periods and provide long-term, real-time submarine data. With the aid of the inversion of the sampled data, early-warning capabilities for marine disasters can be greatly improved.
More than 85 percent of natural earthquakes take place in the ocean; however, their monitoring is still relatively weak. Nowadays, observation of natural submarine earthquakes based on seafloor observatories has become a novel observation method. Internationally, it is represented by the American MARS [4], the Canadian NEPTUNE [5], the Japanese DONET [6] and the European EMSO [7]. In China, the research team of Tongji University has done a lot of work in this field and put forward many innovative design schemes in the East China Sea [8][9][10]. From the operation of these seafloor observatories, we can recognize that seabed seismic observatories can be more effective for monitoring small earthquakes that occur on the seabed and can provide early warning of marine disasters [11]. To summarize, apart from Japan's seafloor observatories, the others still perform single-point, seafloor-surface monitoring, and the seismograph and the junction station can only be within a short distance (50 m) of each other, which cannot meet the requirements for networked monitoring of submarine earthquakes [12][13][14].
With continuous investigation of seafloor observatories, some novel or improved seafloor earthquake observation schemes have been proposed. Yu et al. first proposed an object model for regulating the sensor control and data sampling of seafloor observatories, which is conceptually designed as a group of sensor resource objects; based on the properties and operations of these objects, a client-server sensor control architecture can realize bidirectional information flow of control commands and sampled data [8]. Zhan et al. successfully sensed earthquake waves and water waves over a 10,000-km submarine cable connecting Los Angeles, California and Valparaiso, Chile by monitoring the polarization of conventional optical communication channels. Without requiring special equipment, laser sources or dedicated fibers, their method is conveniently scalable for converting global submarine cables into continuous real-time seismic and tsunami observatories [15]. Diouf et al. created a PoF and communication system for extending seafloor observatories, which is based on an optical structure that simultaneously transmits power and data through the same single-mode fiber; the Raman scattering effect is used to amplify the optical data signal, reducing the power consumption of the sensor nodes [16]. Bruce et al. discussed the blueprint of an intelligent submarine cable system that integrates sensors into future submarine telecommunication cables; their simulation results showed that cable-based pressure and seismic acceleration sensors could effectively improve the tsunami warning time and seismic parameter estimates [17].
In this paper, we investigate a chained data sampling and transmission system (SQSTS) based on the Zynq-7000 SoC and clock synchronization, which can be used as a prototype and foundation for a new seafloor observatory. Compared with the conventional sensor web prototype [9], SQSTS replaces the traditional undersea junction box with a control module based on the Zynq-7000 SoC. In this way, we can keep the overall architecture of the usual seafloor observatory system and expand its functions. Besides that, we also introduce a clock synchronization module to ensure that the time stamp in each seismometer node can reach the µs level, laying a solid foundation for subsequent seismic data inversion operations. Compared with the ocean bottom cabled seismometer system, which is similar to our system in structure, the main control module of SQSTS uses the Zynq-7000 SoC with both an FPGA (PL side) and an ARM processor (PS side) [11]; consequently, an additional CPU is not required to control the data acquisition and data transmission functions. The transmission between nodes of SQSTS uses the gigabit transceivers in the physical layer of the Zynq-7000 SoC; compared with Ethernet transmission (100 Mbps), it can achieve 4 Gbps high-speed data transmission. Furthermore, in contrast to the previous clock synchronization system with a time stamp accuracy of 0.1 ms, SQSTS adopts a new method based on the PPS signal and the internal clock cycle of the FPGA to extend the minimum time unit of the time stamp to the µs level, with a theoretical accuracy of 1 µs.
The content of our article is organized as follows. In Section 1, the expected goals that SQSTS needs to achieve and the overall system design scheme are introduced. In Section 2, the classification of the nodes in SQSTS and the selection of the hardware modules in the nodes are presented. In Section 3, the software functions to be realized by each node in SQSTS and their designs are described. In Section 4, the SQSTS model and the experimental results, as well as a discussion of the results, are presented.
System Overall Scheme Design
As the prototype of a seafloor observatory, the SQSTS designed in this paper has the basic function of a seafloor observatory: data acquisition. Thus, a data acquisition module is indispensable in our system. Considering that SQSTS is mainly intended for the observation and early warning of submarine earthquakes, the submarine data in this article specifically refer to submarine seismic signals. In addition, in order to support the subsequent processing and analysis of the sampled data, it is also necessary to add a clock synchronization module to the SQSTS to provide accurate time stamps for the data sampled at each moment. Finally, a specific control module is also required, which is similar to a traditional junction box, to realize functions such as controlling data collection and transmission. The specific design of each module is as follows.
With the increase in the number of submarine sensors and sensor signal channels, high-speed data transmission over the link becomes much more desirable than before. In order to realize high-speed data transmission between the nodes of the submarine optical cable link, SerDes (SERializer/DESerializer) communication technology is adopted in this paper. SerDes is a point-to-point, time-division-multiplexed transmission technology, which can greatly decrease the number of channels and chip pins required for data transmission and thus reduce the cost of data transmission. As an important serial communication bus, SerDes offers the advantages of low power consumption, strong anti-interference ability, high speed and long transmission distance. Hence, it is very suitable for application in SQSTS.
The control module of each node in the SQSTS must be able to control multiple secondary devices at the same time, facilitate data reading and fusion processing, and interact with the main control chips of the previous- and next-level nodes using SerDes communication technology. Considering large-scale application in future networking, the cost should be minimized as much as possible on the premise of satisfying the chip function and reliability requirements.
In order to meet the requirements of high-precision seismic signal sampling and to communicate with the main control chip in full-duplex mode, we adopt an AD sampling module with an accuracy of up to 24 bits, a high sampling rate, and support for the SPI protocol. Moreover, in the clock synchronization module, the clock synchronization accuracy is required to reach the ns level, and the time stamp is required to reach the µs level.
Overall Design
SQSTS is composed of three types of seismometer nodes, classified as the front node, the intermediate node and the end node according to their distribution locations. The hardware circuit of each node is divided into three parts: the control part (primary device), the data sampling part (secondary device), and the clock synchronization part (secondary device). The components of each part are shown in Table 1. In each node, the control part is responsible for receiving and fusing the data uploaded by each secondary device; the data sampling part is responsible for sampling submarine data; and the clock synchronization part is responsible for synchronizing with the master clock and providing the time stamp. The nodes are connected by optical cables to form a transmission link. Each node packs its own data together with the data from the previous-level node and then sends them to the next-level node. Finally, the end node sends all nodes' data (the link data) to the data center through Gigabit Ethernet. The whole system block diagram is shown in Figure 1.
Component Design
Considering the design requirements of the main control module of each node, the Zynq-7000 SoC developed by Xilinx, which adopts the ARM Cortex-A9 architecture, is chosen as the main control chip. The chip has a PL side (FPGA) and a PS side (ARM) and supports multiple peripheral interfaces to meet the interface requirements of the peripherals used in each node. The PL side is the programmable logic side, which contains a field-programmable gate array (FPGA) and provides parallel processing capability. The PS side is the processing system side, which contains an ARM Cortex-A9 processor and provides serial processing capability. Apart from that, the Gigabit Transceivers (GTX) and Ethernet PHY chips are used in the system. The ADS1256 data sampling chip produced by TI is selected for data sampling. The accuracy of the chip can reach 24 bits, and the chip has 8 channels with a sampling rate of up to 30 kSPS, which satisfies the performance requirements of the data sampling part of our system. The external 5 V input required by the chip can be directly provided by the control part, and the SPI bus is connected to the MIO ports on the PL side of the Zynq-7000 SoC.
In order to ensure that the time information of each node is synchronized so as to obtain accurate time stamps, a clock synchronization system needs to be added to each node; thus, the P88 1588 PTP clock synchronization board (produced by Coolshark Company) is selected as the slave clock in each node. The board can be connected to the master clock through an optical cable, keeping its time consistent with the master clock. Meanwhile, the board can be connected to the Zynq-7000 SoC through GPIO ports to provide synchronized time information (in PPS and ToD formats). The EGM master clock (produced by Coolshark Company, Beijing, China) is selected as the master clock, whose clock source is satellite-based (GPS, BeiDou, GLONASS). Its output can be set up as IEEE 1588 PTP, SyncE and time signals (frequency, PPS, ToD) through optical ports. It has a distributed synchronous design architecture, which can expand the number of unicast slaves up to 450. It has all the characteristics of an IEEE 1588 master clock and boundary clock, supports effective management and maintenance, and provides continuous and accurate time information after the loss of the relevant reference source signal.
System Software Design
Considering the functional distinctions caused by the different distribution positions of the nodes, it is necessary to divide the nodes into three categories: the front node, the intermediate node and the end node, and to write corresponding Verilog HDL programs according to the functions of each class. For the front node, the functions that need to be achieved include controlling and reading the data sampling part and the clock synchronization part, fusing and packaging the time, MAC address, sampled data and other data, and finally sending them to the next-level node through the optical cable. For the intermediate node, besides all the functions of the front node, it also needs to receive data from the previous-level node and to merge and repackage them with its own data. Finally, the end node requires all the functions of the intermediate node except sending data to the next-level node; in addition, the PS side of the end node is used to send the data of the entire link to the data center through Gigabit Ethernet. The functions of the different types of nodes are shown in Figure 2.
Data Sampling Module Design
The Zynq-7000 SoC reads the sampled data from the ADS1256 through the SPI protocol. The specific process is as follows: when the Zynq-7000 SoC needs to communicate with the ADS1256, it pulls down the chip-select signal (CS) and uses the clock signal SCLK to control the reading and writing of the ADS1256. The MOSI signal is responsible for configuring the state of the registers in the ADS1256. When it detects that the DRDY signal is set to the high voltage level, the Zynq-7000 SoC can obtain the sampled data of the ADS1256 from the MISO signal.
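For readers who want to prototype this read sequence on a host computer before porting it to PL-side logic, a rough Python sketch using the common spidev binding is shown below. The RDATA opcode and SPI mode follow our reading of the ADS1256 datasheet, the drdy_ready callback stands in for whatever data-ready detection the design uses, and none of this is code from the paper.

```python
# Host-side sketch of a single ADS1256 read transaction (not the paper's FPGA code).
# Assumptions: Linux spidev interface, RDATA opcode 0x01, user-supplied drdy_ready()
# callback that returns True when the DRDY line signals a new conversion result.
import spidev

RDATA = 0x01  # "read data once" command

def read_sample(spi, drdy_ready):
    """Return one signed 24-bit conversion result from the ADS1256."""
    while not drdy_ready():             # wait for the data-ready indication on DRDY
        pass
    spi.xfer2([RDATA])                  # request the latest conversion
    b = spi.xfer2([0x00, 0x00, 0x00])   # clock out three data bytes (MSB first)
    value = (b[0] << 16) | (b[1] << 8) | b[2]
    if value & 0x800000:                # sign-extend the 24-bit two's-complement value
        value -= 1 << 24
    return value

if __name__ == "__main__":
    spi = spidev.SpiDev()
    spi.open(0, 0)                      # bus 0, chip-select 0 (CS handled by the driver)
    spi.max_speed_hz = 1_000_000
    spi.mode = 0b01                     # ADS1256 uses SPI mode 1 (CPOL=0, CPHA=1)
    print(read_sample(spi, drdy_ready=lambda: True))  # placeholder DRDY check
```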
Clock Synchronization Module Design
The P88 1588 PTP clock synchronization board transmits ToD information in the NMEA ZDA format and the PPS signal to the Zynq-7000 SoC. The Zynq-7000 SoC can use the ToD information to obtain a time stamp at the second level. However, our system requires time stamps at the microsecond level. Thus, we designed a program to obtain microsecond-level time information by using the PPS signal provided by the P88 1588 PTP clock synchronization board and the internal crystal oscillator of the Zynq-7000 SoC. The specific process is as follows: the period of the PPS signal is set to 1 s with a pulse width of 300 µs. The crystal oscillator used on the PL side of the Zynq-7000 SoC is 100 MHz, which means that its clock cycle is 10 ns. At the rising edge of each clock cycle, the Zynq-7000 SoC reads the PPS signal. If the PPS signal is at the low voltage level at this time, the clock-cycle counter is incremented. When the Zynq-7000 SoC detects that the PPS signal is at the high voltage level, the counter returns to zero and stops counting. This method does not need any additional peripherals and makes full use of the resources inside the node. It is simple and reliable, and its theoretical accuracy is ±1 µs.
The following equation is used to obtain the microsecond-level time information: Time_µs = (Count of Clock Cycles + 10)/100 + 300 (1). The flow chart of the clock synchronization module control is shown in Figure 3.
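As a minimal numerical illustration of the time-stamp arithmetic as we read it (a 10 ns counter clock and the fixed 300 µs pulse-width offset; variable names are ours, not the paper's):

```python
# Minimal sketch of the microsecond time-stamp computation (our reading of equation (1)):
# the counter runs at 100 MHz (10 ns per cycle) while PPS is low, and the 300 us pulse
# width is added as a fixed offset.
CLOCK_PERIOD_NS = 10       # 100 MHz PL-side clock
PPS_PULSE_WIDTH_US = 300   # high-level width of the PPS signal

def microseconds_within_second(cycle_count):
    """Elapsed microseconds since the start of the current PPS second."""
    return cycle_count * CLOCK_PERIOD_NS / 1000 + PPS_PULSE_WIDTH_US

if __name__ == "__main__":
    print(microseconds_within_second(12_345_678))  # ~123757 us into the second
```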
Design of Transmission Link Module
For the optical cable transmission design between nodes, we used the Aurora IP core, which is provided by Xilinx in the Vivado development environment. The IP core conveniently drives the physical-layer gigabit transceivers for optical cable data transmission. The schematic diagram and design configuration parameters of the IP core are shown in Figure 4.
The composition sequence of the data packet between nodes is Header, Seismometer number, Time stamp, Sampling data, MAC address, Separator, and its format is defined in Table 2. From Figure 4, it can be seen that the maximum bandwidth is set to 6.5 Gbps, the data length of each transmission is 4 bytes, and the reference clock is 125 MHz. The link bus bandwidth is therefore 4 Gbps according to the following equation: Bandwidth = Clock Frequency × Bits (2). For the data transmission between nodes, the same Aurora IP core, design parameters, working mode and other configurations are used, so a unified IP-core design can be customized once and reused. However, we must consider the functional distinctions caused by the different locations of the nodes, which lead to differences in the timing design of the various node types. The specific timing designs in this paper are as follows. In the timing design of the transmission program of the front node, only the function of sending data packets to the next-level node needs to be handled; therefore, a data packet can be sent as soon as it is packaged. In the timing design of the transmission program of the intermediate node, not only the function of sending data packets to the next-level node but also the function of receiving data packets from the previous-level node must be considered; hence, the data packet sent by the previous-level node must be received first, and then the received data and the data of this node are packaged and sent. In the timing design of the transmission program of the end node, it is necessary to consider not only its function of receiving data packets from the previous-level node but also its function of sending data packets to the data center; consequently, the data packet sent by the previous-level node must be received first, and then the received data and the data of this node can be packaged and sent to the data center.
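The bus-bandwidth figure in equation (2) follows directly from the lane configuration quoted above (125 MHz reference clock, 4-byte user data width); a one-line check:

```python
# Quick check of equation (2): user-data bandwidth of the Aurora link.
clock_frequency_hz = 125_000_000  # reference clock from the IP-core configuration
bits_per_transfer = 4 * 8         # 4-byte user data width
bandwidth_bps = clock_frequency_hz * bits_per_transfer
print(f"{bandwidth_bps / 1e9:.0f} Gbps")  # -> 4 Gbps
```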
In our design, the end node of the transmission link needs to merge and package the data of all nodes and send them to the data center. Taking into account the practicality and security of the transmission, as well as the scalability of the transmission data width, Gigabit Ethernet is adopted as the information transmission method between the end node and the data center. This method takes advantage of both the PL side and the PS side of the Zynq-7000 SoC: data fusion and packaging on the PL side fully utilize its efficient parallel processing capabilities, while data transmission is carried out on the PS side, which takes full advantage of the processor architecture on the PS side and can be programmed to realize Gigabit Ethernet transmission easily.
Xilinx provides the lwIP support package in the PS-side development environment; using it, Gigabit Ethernet transmission based on a TCP server can be realized quickly. The PL side and the PS side interact with each other through the AXI bus: the PL side transmits the link data to the PS side through the AXI bus, and the PS side passes the user's control commands for link data transmission (start, stop and reset) to the PL side through the AXI bus. We set the PS side to send a data packet to the data center every 10 ms.
The data packet transmitted between the PS side and the data center is composed of Header, Packet index, Length, and Data (the combined end-node, intermediate-node and front-node data), and its format is defined in Table 3.
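Tables 2 and 3 define the exact field widths, which are not reproduced in this excerpt; purely as an illustration of the framing described above, a parser with hypothetical field widths might look like the sketch below.

```python
# Illustrative parser for the per-node packet structure described in the text
# (Header, Seismometer number, Time stamp, Sampling data, MAC address, Separator).
# The field widths below are hypothetical; the actual widths are defined in Tables 2 and 3.
import struct

NODE_PACKET = struct.Struct(">2s B Q 3s 6s 2s")  # header, node id, time stamp, 24-bit sample, MAC, separator

def parse_node_packet(raw: bytes) -> dict:
    header, node_id, timestamp, sample, mac, sep = NODE_PACKET.unpack(raw)
    return {
        "header": header,
        "seismometer": node_id,
        "timestamp_us": timestamp,
        "sample": int.from_bytes(sample, "big", signed=True),  # 24-bit ADS1256 value
        "mac": mac.hex(":"),
        "separator": sep,
    }

if __name__ == "__main__":
    demo = (b"\xAA\x55" + bytes([3]) + (123_756).to_bytes(8, "big")
            + (0x012345).to_bytes(3, "big") + bytes(6) + b"\x0D\x0A")
    print(parse_node_packet(demo))
```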
Experiment
The completed SQSTS model is shown in Figure 5. Node-1 represents the end node, Node-2 represents the intermediate node, and Node-3 represents the front node. The PC simulates a data center, and a sliding rheostat simulates the sampled signal source. Each node includes a Zynq-7000 SoC main control board, an ADS1256 sampling card and a P88 1588 PTP clock synchronization board. The nodes are connected by optical cables to form a transmission link. The P88 1588 PTP clock synchronization board of each node and the EGM master clock are also connected by optical cables to form a master-slave clock synchronization network that provides synchronized time stamps. Node-1 finally sends the link data to the PC via Gigabit Ethernet. Through the host computer program on the PC, the received link data can be read out.
The Verification of Link Transmission Function
The link data read by the PC are shown in Figure 6, and the results are analyzed below. The content of one link data packet is displayed in the red box. According to the design, the first field is the frame header of the data packet, which corresponds to the yellow box. The following field is the index of the data packet, which corresponds to the orange box, and next is the length of the data packet, which corresponds to the dark blue box. Then, the data of the front node correspond to the blue box, the data of the intermediate node to the green box, and the data of the end node to the purple box. It can be seen that the data format of each node is Header, Seismometer number, Time stamp, Sampling data, MAC address and Separator, which is consistent with the design, indicating the correctness of the link data transmission. By calculating the time difference between every two packets of the same node, it can be found that the value is 10 ms, which is consistent with the expected design: the PS side of the end node sends a data packet to the data center every 10 ms.
The Verification of Clock Synchronization
From the data packets sent from the end node to the PC, we can obtain the ToD information of the three nodes. After checking a number of data packets, we found that the time information of the nodes (year, month, day, hour, minute, and second) is the same, which demonstrates the synchronization of the ToD information. Besides that, we also read and displayed the PPS signals provided by the P88 1588 PTP clock synchronization boards of the three nodes and measured the high-level width of the PPS signals using a Tektronix oscilloscope to verify the synchronization accuracy of these signals. The digital waveforms displayed by the oscilloscope are shown in Figure 7. Considering that the level standard of the PPS signal is the TTL standard, the high level of each channel is set to 2.40 V. On a large scale (an oscilloscope scale of 200 µs), it can be observed that the high-level widths of the PPS signals of the three nodes are 300 µs, which is consistent with the 300 µs high-level pulse duration in our design, and the rising and falling edges of the three PPS signals also change synchronously. When observing on a small scale (an oscilloscope scale of 1 µs), we can examine the rising edges of the PPS signals of the three nodes more closely, and the signals remain precisely synchronized. Thus, we can draw the conclusion that the PPS signals of the three nodes are synchronized. As for the initial clock signals of the three nodes, we do not use a specific method to keep them synchronized as we do for the PPS signals.
However, because the maximum difference between any two of the three clock signals is half of their period (5 ns), it does not have a significant effect on our time synchronization precision. In the Zynq-7000 SoC programming language, when a division yields a remainder, the result is rounded automatically; thus, the division used in our program introduces an error of up to ±1 µs. In conclusion, we have verified that microsecond-level time can be calculated using the PPS and clock signals, and the precision of our method is ±1 µs. For future practical seafloor observatory applications, the distance between nodes is usually more than ten kilometers, so the time delay caused by PPS signal transmission in the optical cable must be taken into account. This delay can be estimated from the propagation speed of the electromagnetic wave in the cable and the distance between nodes, and the P88 1588 PTP clock synchronization board can be configured in advance to subtract the transmission time as compensation.
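The compensation described in the last sentence is a simple distance-over-speed estimate; as a rough illustration (the effective propagation speed of about two thirds of the vacuum speed of light is our assumption, not a value from the paper):

```python
# Rough illustration of the PPS propagation-delay compensation discussed above.
# The effective propagation speed (about 2/3 of the vacuum speed of light) is an assumption.
C_VACUUM = 3.0e8           # m/s
PROPAGATION_FACTOR = 2 / 3 # typical for cables

def pps_delay_us(distance_m):
    """One-way propagation delay of the PPS signal in microseconds."""
    return distance_m / (C_VACUUM * PROPAGATION_FACTOR) * 1e6

if __name__ == "__main__":
    print(f"{pps_delay_us(10_000):.1f} us")  # ~50 us for a 10 km span
```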
Summary
The above experimental results show that the data transmission link realizes the designed functions and also verify the correctness and stability of the clock synchronization system. After the system had been running for a long time, no packet loss was observed, proving the stability and reliability of the data transmission system.
It is worth noting that byte-order inversion appears in the data because the high-order byte is sent before the low-order byte during Ethernet transmission. According to the network device manager on the PC, the Ethernet transmission speed reaches the Gigabit Ethernet standard.
Discussion
This article describes a chained data acquisition and transmission system (SQSTS), which can be used as the basic component of cabled seafloor earthquake observatories. Based on the Zynq-7000 SoC, the control module can replace the traditional junction box and realize the functions of controlling high-precision data acquisition and high-speed transmission (4 Gbps). On the basis of the clock synchronization technology, it is possible to provide a time stamp at the microsecond level for the sampled data, supporting subsequent seismic data inversion operations and earthquake early warning. The system has shown satisfactory performance in a series of tests and applications. We have shown through this research that the design and implementation of SQSTS is beneficial to the research of seafloor observatories. In the next step, we will develop a truly practical, cabled seafloor earthquake observatory based on SQSTS. | 8,661.8 | 2021-08-15T00:00:00.000 | [
"Geology"
] |
Moving harmonic source depth estimation by MDFMD-v in a weakly range-dependent waveguide
To estimate the depth of a non-cooperative moving sound source in a range-dependent waveguide, a passive depth estimation method combining a depth-polarity search for a harmonic moving source is proposed in this paper. Firstly, the method obtains the parameters of each mode at the source and horizontal linear array (HLA) locations by utilizing the modal-depth-function-constrained modal Doppler velocity estimation (MDFMD-v) and matrix pencil (MP) methods, respectively. Subsequently, the mode amplitudes are calculated by compressive sensing. To determine the polarity of each mode amplitude, a depth-polarity search method is given in this paper. Finally, a depth ambiguity function with mode amplitude polarity is used to obtain the depth estimation result. Simulation results verify the effectiveness of the method.
Introduction
Depth estimation is important in many underwater applications. One classical method for sound source depth estimation is matched field processing (MFP) [1]-[3], which requires marine environmental information to compute the replica fields and can utilize a joint search of the ambiguity function to estimate source depth and range. However, the traditional MFP method is restricted in practical applications due to the environmental mismatch problem. To alleviate the impact of environmental mismatch, one idea is to simplify the source depth estimation problem. Premus and Conan et al. simplify source depth estimation to a binary classification problem [4,5], i.e., to discriminate whether the source is near the surface or submerged. These approaches address the depth classification problem by extracting depth-dependent acoustic features and combining them with algorithms such as the matched-subspace method [4] or the trapped energy ratio [5]. However, these methods are usually applicable only to low frequencies and cannot estimate the source depth accurately. Another idea is to extract the normal-mode parameters from the sound field data received by an array. A representative example is the data-based MMP method proposed by Yang [6,7], which estimates the mode depth functions through a vertical line array (VLA), then estimates the mode amplitudes using a virtual horizontal array synthesized by a moving source, and finally constructs a depth ambiguity function to achieve source depth estimation. The limitation of the method proposed by Yang is that the velocity and the frequency of the moving source are required, which is ineffective for non-cooperative targets. Besides, most of the above methods are studied in range-independent waveguides. There are few studies on source depth estimation in range-dependent waveguides with uncertain environmental information.
This paper proposes a method for estimating the depth of a non-cooperative target in a range-dependent waveguide with unknown seabed parameters based on a horizontal linear array (HLA). Only waveguides satisfying the adiabatic approximation are considered, for simplicity. In this case, to estimate the source depth, it is necessary to know the mode depth functions at the source and HLA locations in order to decouple the amplitude of each mode excited by the source. Under the assumption that the sound speed profile (SSP) is known, the mode depth functions at the HLA and the source locations can be extracted based on the matrix pencil (MP) method [8] and the modal-depth-function-constrained modal Doppler velocity estimation (MDFMD-v) method [9], respectively, by combining them with the difference equation introduced in Ref. [10]. The amplitude of each mode excited by the source can then be extracted by using compressive sensing (CS) [11]. Considering that the polarity of each mode amplitude has an impact on the depth estimation, a depth-polarity search method is designed to estimate the polarity of each mode amplitude. Finally, the depth of the sound source can be obtained by combining the depth ambiguity function with the optimal amplitude polarity.
Mode depth function constrained modal Doppler velocity estimation
A range-dependent waveguide is shown in Figure 1, with a maximum water column depth h_1 and a minimum depth h_2. A Cartesian coordinate system is established as shown in Figure 1; the sound speed at coordinates (r, z) is c(r, z), and the parameters of the seabed are unknown. In a shallow-water waveguide, the sound field can be calculated by adiabatic approximation theory when the bottom changes slowly [12]. Consider a moving harmonic source at the bottom of the slope moving away from the HLA at a speed v_0 with v_0 << c; the sound field received by a sensor located at coordinates (r_r, z_r) in the shallow-water waveguide can be represented as in equation (1).
Equation (1) has the same form as equation (4) in Ref. [10]. If the SSP at the source location is known and the source contains two adjacent frequencies f_1 and f_2, the MDFMD-v method in Ref. [10] can be used to jointly estimate the velocity v_0, the horizontal wavenumbers k_m(r_s) and the source frequencies f_1, f_2. Then the difference equation proposed in Ref. [8] (see equation (2)) can be utilized to estimate the mode depth functions.
where Δz is the difference step.
Depth estimation
As shown in Figure 1, the HLA is arranged at the top of the slope with L elements; the distance between two adjacent elements is d, and the first element is located at (0, z_r). Therefore, the sound field emitted by the source located at (r_s, z_s) and received by the n-th element of the HLA can be expressed in a form analogous to equation (1), and the horizontal wavenumbers k_m(r_r) at the HLA location can be estimated by the MP method [9]. Once the k_m(r_r) are estimated, the corresponding mode depth functions at the HLA location can be obtained as well. After estimating the horizontal wavenumbers and mode depth functions at the source and HLA locations, one still needs to estimate the amplitude of each normal mode. Compressive sensing is used to estimate the complex mode amplitudes in this paper; the resulting optimization problem, equation (4), can be solved efficiently by convex optimization, for example with the CVX toolkit [13]. However, due to the lack of information such as the initial range of the source, it is difficult to obtain an accurate polarity for each mode amplitude, which has a significant effect on the estimation of the source depth. A depth-polarity search method is therefore given in this paper to estimate the polarity of each mode amplitude. Before introducing the depth-polarity search method, the search range of the polarity combinations can be restricted: a pre-estimate of the source depth is obtained first, the polarity combinations of the mode amplitudes at the depths where the peaks are located are then calculated, and the polarity matrix S is modeled accordingly. The response of the depth ambiguity function D is concentrated at the depth of the source when the searched polarity combination matches the actual polarity combination of the mode amplitudes. To confirm the actual polarity combination, a cost function is defined, and the polarity combination at which the cost function reaches its minimum is taken as the actual polarity of the mode amplitudes. Finally, the source depth is obtained from the maximum of the resulting depth estimation function, and a joint estimation function that uses the estimation results at both frequencies can also be defined. The effectiveness of the proposed method is verified by simulation in Section 3.
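Because the defining equations are only referenced above, the following sketch is a schematic reading of the depth-polarity search rather than the authors' implementation: given magnitude estimates of the mode amplitudes and the mode depth functions sampled on a depth grid at the source range, every sign combination is tried, the corresponding depth ambiguity surface is formed, and the combination with the most concentrated peak is retained. All variable names and the toy mode shapes are ours.

```python
# Schematic sketch of a depth-polarity search (our reading of the method, not the authors' code).
# amp_mag[m] : magnitude of the m-th estimated mode amplitude
# psi[m, iz] : m-th mode depth function sampled on the depth grid z_grid at the source range
import itertools
import numpy as np

def depth_polarity_search(amp_mag, psi, z_grid):
    best = None
    for signs in itertools.product((1.0, -1.0), repeat=len(amp_mag)):
        a = np.asarray(signs) * amp_mag
        ambiguity = np.abs(a @ psi)                 # depth ambiguity function for this polarity
        cost = -ambiguity.max() / ambiguity.sum()   # prefer the most concentrated response
        if best is None or cost < best[0]:
            best = (cost, signs, z_grid[np.argmax(ambiguity)])
    _, signs, z_hat = best
    return signs, z_hat

if __name__ == "__main__":
    z = np.linspace(0, 200, 401)
    modes = np.sin(np.outer(np.arange(1, 6), np.pi * z / 200))   # toy mode shapes
    true_amps = modes[:, 100] * np.array([1, -1, 1, 1, -1])      # source at 50 m with mixed polarity
    signs, z_hat = depth_polarity_search(np.abs(true_amps), modes, z)
    print(signs, z_hat)
```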
Simulation
The range-dependent waveguide is shown in Figure 1; the aperture of the HLA is 1200 m with an element spacing of 5 m. The sound source moves away from the HLA at a velocity v_0 and emits a combination of harmonic signals at frequencies f_1 = 150 Hz and f_2 = 160 Hz. A signal with a duration of 1000 s is selected for depth estimation. Note that the source velocity and source frequencies are treated as unknown parameters in the simulation. The sound field is calculated by equation (1), in which the mode depth functions and horizontal wavenumbers are computed by Kraken; to be more realistic, Gaussian white noise with a signal-to-noise ratio (SNR) of 0 dB is added to the sound field.
With the source parameter estimation method introduced in Section 2, one can obtain the estimates of the velocity and the two frequencies of the source, as shown in Table 1. Once the source velocity and frequencies have been estimated, it is possible to compute the horizontal wavenumbers k_m(r_s, f) corresponding to the two frequencies and then determine the mode depth functions ψ_m(r_s, z) at the source location. Figure 2(a) shows the comparison of the estimated mode depth functions with the Kraken-calculated mode depth functions at a frequency of 160 Hz when the source depth is 50 m (Figure 2 illustrates the estimation results for modes 8, 10, 11 and 12; the other estimated modes perform similarly, and the results are similar for 150 Hz). From Figure 2, it can be seen that the estimated mode depth functions agree well with the Kraken calculation. The corresponding estimation and comparison results at the HLA location for 160 Hz are shown in Figure 2(b). As mentioned in Section 2, the complex mode amplitudes are computed using the CS method, and the polarities are estimated using the depth-polarity search method. Figure 3 shows the comparison of depth estimation results between the real mode amplitudes (with and without attenuation) and the mode amplitudes solved by the CS method. In the CS method, the regularization parameter is set to 0.01 to ensure a better fit to the data once the horizontal wavenumbers are determined. As depicted in Figure 3, the influence of attenuation of the low-order mode amplitudes on depth estimation is negligible within a certain range. Moreover, if the polarity combination of the mode amplitudes is known, the depth estimated using the mode amplitudes calculated by the CVX program is reliable. To further validate the performance of the method, several simulations with different source depths were performed, and Figure 5 shows the results obtained by using the two frequencies separately and by joint estimation. The average relative errors of the source depths estimated at 150 Hz, at 160 Hz, and by the joint estimate are 26.20%, 21.77% and 19.27%, respectively. The depth estimates in the intervals 0-10 m and 150-200 m account for the main estimation error (in the interval 10-150 m, the average relative errors of the estimated source depths are 8.07% and 3.48% for the 150 Hz, 160 Hz and joint estimation, respectively). Therefore, compared with the estimation results using the individual frequencies, the errors of the joint estimation are smaller. The depth estimation errors for shallower sources arise mainly because fewer high-order modes are used (the maximum mode number used during the simulation is 15, while the total number of modes is about 30). The depth estimation errors for deeper sources arise mainly because the estimation of these depths depends on the low-order modes, and the ability of the MDFMD-v method to estimate the wavenumbers of the low-order modes at the source location is affected by the size of the synthetic aperture, which is 2500 m in this paper and is not able to separate the low-order modes at the source location (the interference span of the first two modes is about 7000 m).
Conclusion
In this paper, the combination of the MDFMD-v and depth-polarity search methods achieves depth estimation of a non-cooperative target sound source in a weakly range-dependent waveguide using a HLA. Only two adjacent frequencies emitted by the source and a priori knowledge of the sound speed profile in the water column are required. The simulation results show that the method performs well in source depth estimation. However, only a moving source at the bottom of the slope in a weakly range-dependent waveguide has been simulated. In future work, we will extend this method to more complex waveguides and carry out experimental data processing and validation.
r_s is the horizontal coordinate of the sound source at t = 0, z_s is the depth of the source, ψ_m(r, z) and k_m(r) are the mode depth function and horizontal wavenumber of mode m at the location (r, z), and M is the total number of propagation modes. The dependence of the mode depth function on frequency is neglected in (1).
Figure 1. In a weakly range-dependent waveguide, the source moves away from the horizontal linear array (HLA) at the speed v_0.
a fixed and large source range r_s for different HLA elements. The frequency-domain field received by the HLA is approximated as a linear superposition of exponential functions, and the estimated horizontal wavenumber and mode depth function of the m-th normal mode are denoted k_m(r) and ψ_m(r, z). The regularization parameter controls the relative importance of the data fit and the l1 norm of the solution. The depth-polarity search assumes that the polarity combination at one of the peaks of the preliminary depth function D_pre(z) matches that at the actual source depth; from these peaks and the sign-taking operation, a depth estimation function can be defined.
Figure 2. Comparison between the estimated mode depth functions and those calculated by Kraken at a frequency of 160 Hz. (a) Mode depth functions at the source location. (b) Mode depth functions at the HLA location. The blue solid line is the estimated mode depth function and the red dashed line is the mode depth function calculated by Kraken.
Figure 3. Comparison of estimation results in different cases at a source depth z_s = 50 m and a frequency of 160 Hz. The red solid line shows the depth estimation results without considering attenuation, the blue dotted line shows the depth estimation results considering attenuation, and the black dashed line shows the depth estimation results using the mode amplitudes estimated by CVX.
Figure 4. Figure 4 illustrates that a peak occurs in the spectrogram of
Figure 5. Estimation results at different depths. (a) Depth estimation results obtained with the frequencies 150 Hz (black scatter) and 160 Hz (red scatter) separately when the source is located at different depths. (b) Depth estimation results obtained by joint estimation when the source is located at different depths. The solid black line is the reference line.
Table 1. Results of velocity and frequencies estimated by the MDFMD-v method.
| 3,308.2 | 2024-03-01T00:00:00.000 | [ "Engineering", "Physics" ] |
Rapid statistical discrimination of fluorescence images of T cell receptors on immobilizing surfaces with different coating conditions
The spatial organization of T cell receptors (TCRs) correlates with membrane-associated signal amplification, dispersion, and regulation during T cell activation. Despite its potential clinical importance, quantitative analysis of the spatial arrangement of TCRs from standard fluorescence images remains difficult. Here, we report Statistical Classification Analyses of Membrane Protein Images or SCAMPI as a technique capable of analyzing the spatial arrangement of TCRs on the plasma membrane of T cells. We leveraged medical image analysis techniques that utilize pixel-based values. We transformed grayscale pixel values from fluorescence images of TCRs into estimated model parameters of partial differential equations. The estimated model parameters enabled an accurate classification using linear discrimination techniques, including Fisher Linear Discriminant (FLD) and Logistic Regression (LR). In a proof-of-principle study, we modeled and discriminated images of fluorescently tagged TCRs from Jurkat T cells on uncoated cover glass surfaces (Null) or coated cover glass surfaces with either positively charged poly-L-lysine (PLL) or TCR cross-linking anti-CD3 antibodies (OKT3). Using 80 training images and 20 test images per class, our statistical technique achieved 85% discrimination accuracy for both OKT3 versus PLL and OKT3 versus Null conditions. The run time of image data download, model construction, and image discrimination was 21.89 s on a laptop computer, comprised of 20.43 s for image data download, 1.30 s on the FLD-SCAMPI analysis, and 0.16 s on the LR-SCAMPI analysis. SCAMPI represents an alternative approach to morphology-based qualifications for discriminating complex patterns of membrane proteins conditioned on a small sample size and fast runtime. The technique paves pathways to characterize various physiological and pathological conditions using the spatial organization of TCRs from patient T cells.
The advent of single-molecule and superresolution microscopy has enabled the investigation of nanoscale and microscale spatial organization and rearrangement of T cell receptors (TCRs) on the plasma membrane of T cells during T cell activation 1-6. New mechanistic insights have revealed the distinct roles of TCRs in signal amplification and dispersion 7,8, distinction between foreign- and self-peptides 2,9,10, and sensing of mechanical forces 11,12, among others. It is now known that the clustering of TCRs correlates with some of the functions mentioned above. The spatial redistribution of TCRs and the formation of TCR clusters on the plasma membrane raise the question of whether the organization of TCRs, e.g., obtained through standard fluorescence imaging, contains diagnostic or prognostic value. Such information could potentially augment the overall expression level registered by flow cytometry to improve the segmentation and diagnostic accuracy. However, state-of-the-art single-molecule and superresolution microscopy techniques are not yet clinically ready. To fully harness the spatial information of TCR in a statistically significant manner, the imaging modality needs to be relatively high-throughput with reproducible results, both of which currently require substantial development in the single-molecule and superresolution community. Various attempts have been made to analyze TCR distribution 13 and dynamics 14 using statistical techniques, yet the lack of rapid TCR cluster analysis techniques using conventional fluorescence images poses a second hurdle. Specialized cluster analysis techniques, such as those developed for single-molecule localization microscopy (SMLM) 15-17, cannot be readily applied to standard fluorescence images.
TIRF imaging of T cells on surfaces coated with different ligands.
We obtained high-contrast TIRF images of TCRs from live Jurkat T cells on three types of glass surfaces, which we defined as Null class, PLL class, and OKT3 class. Null class depicted TCR images that were acquired from a cover glass surface without a ligand coating (Fig. 1a, Supplemental Fig. S1). The uncoated cover glass served as an inert surface that would minimally, if at all, induce a response in T cells. PLL class represented images acquired from an immobilizing surface coated with PLL (Fig. 1b, Supplemental Fig. S2). The electrostatic interactions between positively charged PLL and negatively charged cell membranes facilitate cell attachment to the glass surface. OKT3 class represented images acquired from the other immobilizing surface, coated with the OKT3 antibody (Fig. 1c, Supplemental Fig. S3). Images were collected by a 100×/1.49 TIRF objective with a 1.5× external magnification and a Photometric 95B sCMOS camera, resulting in an image pixel size of 73 nm. The 100 images were collected by scanning 2 wells on 4 different days for the PLL surface condition and 5 different days for both OKT3 and Null surface conditions. Calcium imaging data were collected to determine the overall stimulatory effect of the surface condition on the T cells (Supplemental Fig. S4).
Development of the pixel-based image model for discrimination analysis. Our image model stems from the observation that the 2-dimensional autocorrelation function of an image is similar to that of a PDE in 2 independent space variables. The original 2D image can be modeled by a sequence of 2D images resulting from pixel shifts, termed spatial lags, in x (horizontal shift), y (vertical shift), and both x & y (horizontal and vertical shifts) simultaneously. Similarly, linear and stationary PDEs contain a linear combination of independent terms described by their defining parameters (Fig. 2a). As such, a linear sequence of images could be modeled as an ordinary least squares regression (OLS) approximation of the PDEs, given that individual images, concatenated into a single matrix, can be formatted into a linear combination of independent vectors. To realize this modeling strategy, we first transformed 2D pixel-based fluorescence images into 1D vectors in column-major order: an m by n pixel image matrix was converted to an (m × n)-by-1 pixel vector. A single image regression design matrix is made up of columns, which are spatial lags of the original image. This strategy is a 2D adaptation from time series methods, in which explanatory regression variables represent time lags of the series being estimated. If each image design matrix has k columns, then each class has a class parameter matrix consisting of k columns and the number of rows equal to the number of images being modeled. We have previously shown this vector transformation preserves the parametric relationship between the PDE model and the image 23 (Supplemental Note 1).
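To make the vectorization and spatial-lag construction concrete, the sketch below builds a one-spatial-lag design matrix and fits it by OLS on a synthetic, spatially correlated stand-in image. It is an illustrative reading of the description above, not the authors' MATLAB code; the smoothing used to create the stand-in image and the 280 × 280 size are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the published MATLAB code): one-spatial-lag image
# model. The target is the image cropped by one row and one column; the three
# regressors are the image shifted by one pixel in x, in y, and in both x and
# y, each flattened in column-major order as described in the text.

def spatial_lag_design(img):
    y = img[1:, 1:].flatten(order="F")                        # target pixels
    X = np.column_stack([
        img[1:, :-1].flatten(order="F"),                      # lag in x
        img[:-1, 1:].flatten(order="F"),                      # lag in y
        img[:-1, :-1].flatten(order="F"),                     # lag in x and y
    ])
    return X, y

# Synthetic, spatially correlated stand-in for a 280 x 280 TIRF image.
rng = np.random.default_rng(1)
img = rng.standard_normal((300, 300))
for _ in range(5):                                            # crude smoothing
    img = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3
img = img[10:290, 10:290]

X, y = spatial_lag_design(img)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                  # 3 OLS parameters
r2 = 1.0 - np.var(y - X @ beta) / np.var(y)
print("model parameters:", beta, " R^2:", round(r2, 4))
```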
To identify model parameters in the PDEs, we implemented a general OLS modeling strategy, Image = Image Model + Residuals, where the best Image Model was obtained by minimizing the variance of the Residuals. The design matrix plays a role in the construction of the Image Model, and the Residuals represent the error in the estimation. We employed Student-t tests to evaluate the statistical significance of the model parameters by OLS estimation for each individual image collected. The t test used the White asymptotic parameter covariance matrix, as shown in Table S1. In our model construction, "significant" designates a p < 0.05 for Student's t tests and p < 0.01 for chi-square tests, unless otherwise noted. If model parameters were found insignificant, spatial lags were increased to reconstruct an alternative image model, until significant model parameters were obtained (Fig. 2b). Model parameters and their Student-t statistics were estimated using the detailed method outlined in Supplemental Note 1. For discrimination analyses, training and testing regression models for all images had one spatial lag with 3 significant OLS model parameters (Supplemental Note 1). Figure 2c shows the intensity profile of a typical fluorescence image of the TCRs in a T cell (280-by-280 pixels) and the corresponding image model consisting of a linear combination of three spatial lags (279-by-279 pixels). Figure 2d shows the fidelity of the image model with an R² = 0.9966. Table 1 shows averaged estimated model parameters of 20 test images from each class. The significant model parameters for each class were found to differ markedly.
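A corresponding significance check can be sketched with heteroskedasticity-robust (White) standard errors, as described above. This is again our own illustrative stand-in rather than the published code; in the actual procedure an insignificant fit would trigger a re-fit with additional spatial lags, which is not shown here.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch: OLS fit of the one-spatial-lag model with White (HC0)
# asymptotic covariance for the Student-t statistics, and a significance check
# at p < 0.05 as used in the text. The stand-in image below is synthetic.

def one_lag_design(img):
    y = img[1:, 1:].flatten(order="F")
    X = np.column_stack([img[1:, :-1].flatten(order="F"),     # x lag
                         img[:-1, 1:].flatten(order="F"),     # y lag
                         img[:-1, :-1].flatten(order="F")])   # x & y lag
    return X, y

rng = np.random.default_rng(2)
img = rng.standard_normal((300, 300))
for _ in range(5):                                            # spatially correlate
    img = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3
img = img[10:290, 10:290]

X, y = one_lag_design(img)
res = sm.OLS(y, X).fit(cov_type="HC0")                        # White covariance
print("parameters:", res.params)
print("t-statistics:", res.tvalues)
print("all significant at p < 0.05:", bool(np.all(res.pvalues < 0.05)))
```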
Fisher linear discriminant of model parameters enables class discrimination. We applied the Fisher Linear Discriminant (FLD) method to achieve class discrimination using the model parameters described above. We termed the technique FLD-SCAMPI. Briefly, FLD projects individual parameter vectors as a scalar product. The FLD eigenvalue projection vector allows for maximum separation of the training images from the different surface conditions, for example PLL class and OKT3 class, while minimizing the separation within each class (Fig. 3a, Supplemental Note 2). Figure 3b shows the flowchart for FLD-SCAMPI for PLL class and OKT3 class, which is similar for discrimination between Null class and PLL class or OKT3 class. We applied FLD to the OLS estimated model parameters from the training dataset containing 80 randomly selected images from these surface conditions. The resultant eigenvector for PLL class and OKT3 class was found to be v_c = [1.344 0.797 0.659]^T; the discrimination of training images for this FLD was 76.9%. The cell-to-cell variations and heterogeneities of TCR-specific characteristics among the two classes are accounted for in the discrimination accuracy of these two classes (Supplemental Figs. S2, S3). One hundred OLS image models were estimated for each class. Eighty parameter sets were randomly selected from each image model class as training set data. FLD training set analysis produced the optimal class-separating projection vector v_c. The dot product of v_c and the remaining 20 test image parameter sets gave the points on the horizontal axis of Fig. 3c. It can be shown that these points are normally distributed, as shown on the vertical probability density axis of Fig. 3c. Figure 3c demonstrates the discrimination of the 20 test images not used to construct the FLD eigenvectors. The 3-element projection vector v_c successfully identified 18 out of 20 test images from PLL class and 16 out of 20 test images from OKT3 class. The overall class discrimination of the test images was 85%. A discrimination accuracy of 85% can also be seen between OKT3 class and Null class. Between Null class and PLL class, the discrimination was 83%, where 14 out of 20 test images from PLL class were correctly identified, and 19 out of 20 Null test images were correctly identified by SCAMPI. FLD-SCAMPI is subject to the small sample test of a linear discriminator. We found that smaller sample sizes and overfitting of the training set can lead to decreasing discrimination efficiencies. Additionally, we randomly selected 20 OKT3 images from the first 50 images and 20 from the second 50 images for the FLD-SCAMPI, as a test to determine whether SCAMPI can discriminate within the same class. The resulting FLD showed a classification error of 37.5% (Supplemental Fig. S5). The Kullback minimum discrimination information statistic failed to support at the 5% level the hypothesis that the distributions of these two sets of images are different. The result suggests the robustness of our technique, and that FLD-SCAMPI does not discriminate between two non-overlapping subsets randomly formed within the same class. Table 1 lists the mean (n = 20) model parameters and mean Student-t statistics of the parameters for image models constructed with three parameters (one spatial lag). SCAMPI is sensitive to other changes as well, such as cell spreading.
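The FLD step can be summarized with a small two-class sketch. The parameter values below are random stand-ins, not the published OLS estimates, and the closed-form pooled-scatter solution is one common way of computing the Fisher direction; the sketch is meant only to illustrate the projection-and-threshold logic described above.

```python
import numpy as np

# Illustrative sketch (not the published code): two-class Fisher Linear
# Discriminant on OLS model parameters. Each row of B_pll / B_okt3 is the
# 3-element parameter vector of one training image; the projection vector v_c
# maximizes between-class separation relative to within-class scatter, and
# test parameter vectors are classified by their scalar projection onto v_c.

def fisher_direction(B0, B1):
    m0, m1 = B0.mean(axis=0), B1.mean(axis=0)
    S_w = np.cov(B0, rowvar=False) * (len(B0) - 1) \
        + np.cov(B1, rowvar=False) * (len(B1) - 1)    # pooled within-class scatter
    v = np.linalg.solve(S_w, m1 - m0)                 # FLD direction
    return v, 0.5 * (v @ m0 + v @ m1)                 # direction and midpoint threshold

rng = np.random.default_rng(3)
B_pll  = rng.normal([0.9, 0.5, 0.4], 0.1, size=(80, 3))   # stand-in training parameters
B_okt3 = rng.normal([1.1, 0.7, 0.5], 0.1, size=(80, 3))

v_c, thr = fisher_direction(B_pll, B_okt3)
test = rng.normal([1.1, 0.7, 0.5], 0.1, size=(20, 3))      # stand-in OKT3 test images
pred_okt3 = (test @ v_c) > thr
print("classified as OKT3:", int(pred_okt3.sum()), "of", len(test))
```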
In order to investigate cell spreading on the different surface conditions, we determined the distribution of the surface area of the cells using a MATLAB code that labelled individual pixels in each image of the specific class using a binary system: 0 (below specified threshold) and 1 (above specified threshold). The pixels with intensities greater than the specified threshold were used to determine the surface area of cell spreading (Supplemental Fig. S6). While an up-shift was observed from images obtained on the OKT3 surface (Supplemental Fig. S6), differentiating individual images whose intensities lie in the overlapping region (< 2 × 10^4, a.u.) remains challenging using traditional approaches. Additionally, we examined the averaged fluorescence intensity of TCR images. An up-shift of intensity can be observed from the OKT3 distribution, which is in alignment with the fact that OKT3 induced more TCR clustering, and thus higher fluorescent intensity values. Despite this trend in the intensity distribution, only 19% of the OKT3 images displayed intensities brighter than 2.5 × 10^7 (a.u.), the maximal intensity registered by the PLL images. As such, all 100 PLL images and 81 OKT3 images out of 200 total images cannot be differentiated from each other based on the intensity threshold. In comparison, our SCAMPI achieved substantially better results with 85% classification accuracy (Supplemental Fig. S7). In addition, the more pronounced clustering of TCRs may shift the distribution of intensities registered as pixel values upwards from individual images. Such change may be sufficient to discriminate images from PLL versus OKT3 and NULL versus OKT3. We extracted signals as pixel values greater than 1.1 times the median intensity of the image, treating the remainder as background. We employed lognormal parameter estimates to extract the mean (µ) and standard deviation (σ) from the distribution parameters of signals from each image. Supplemental Fig. S8 revealed a significant overlap between PLL versus OKT3 and NULL versus OKT3, demonstrating the better separation achieved by our SCAMPI in comparison.
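A minimal sketch of the two pixel-level baselines just described (threshold-based spreading area and lognormal signal statistics) is given below. The threshold value and the synthetic image are placeholders chosen for illustration; only the 1.1 × median rule is taken from the text.

```python
import numpy as np

# Illustrative sketch (threshold and image are placeholders, not from the paper):
# binarize a fluorescence image to estimate the cell-spreading area in pixels,
# and fit lognormal parameters (mu, sigma) to the above-background signal,
# where signal = pixels brighter than 1.1x the image median as in the text.

def spreading_area(img, threshold):
    mask = img > threshold                  # 1 above threshold, 0 below
    return int(mask.sum())                  # area in pixels

def lognormal_params(img):
    signal = img[img > 1.1 * np.median(img)]
    logs = np.log(signal)
    return logs.mean(), logs.std()          # lognormal mu and sigma estimates

rng = np.random.default_rng(4)
img = rng.lognormal(mean=5.0, sigma=0.4, size=(280, 280))   # synthetic stand-in image

print("area (px):", spreading_area(img, threshold=200.0))
print("mu, sigma:", lognormal_params(img))
```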
Logistic regression estimated class probabilities. The FLD projections from the previous section represent a weighted sum, or scalar product, of the image's model parameters. It is important to note that the FLD-based SCAMPI (Fig. 3c) does not give the probability that a given image belongs to a particular class. To overcome this limitation, we developed SCAMPI using Logistic Regression (LR), or LR-SCAMPI. LR-SCAMPI estimates the probability of each test image belonging to the OKT3 class. To this end, we employed a logistic function to model the binary class label (PLL class versus OKT3 class) as a function of the projected model parameters. Test images from PLL class are expected to have low probabilities of being classified into OKT3 class (close to 0), whereas test images from OKT3 class are expected to have high probabilities (close to 1). To estimate the cell probabilities, the classified cell projections (the independent axis entries in Fig. 3d) become the independent variable in a logistic regression. The dependent variable estimated is the LR estimation of each cell's probability of belonging to OKT3 class. LR-SCAMPI outputs a value (y axis in Fig. 3d) as the probability that an image was obtained from the OKT3 surface. A value closer to 1 indicates that the image is most likely obtained from the OKT3 surface and a value closer to 0 indicates the image is most likely obtained from the PLL surface. This double optimization is possible because SCAMPI is less susceptible to the statistical limitations that affect popular AI techniques. We provide the technical details of the logistic regression in Supplemental Note 3 and associated Supplemental Fig. S9. We show that the SCAMPI optimization is essential for probability estimation, as the independent variables of the LR parameters are not sufficient alone for probability estimation.
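The probability-estimation step can be sketched as a one-variable logistic regression on the FLD projections. The projection values below are random stand-ins, and the simple gradient-ascent fit is our own choice for a self-contained example rather than the paper's estimation procedure.

```python
import numpy as np

# Illustrative sketch (not the published code): logistic regression mapping a
# scalar FLD projection to an estimated probability of belonging to the OKT3
# class. Training labels are 0 for PLL and 1 for OKT3; the fit maximizes the
# log-likelihood by gradient ascent.

def fit_logistic(x, y, lr=0.05, n_iter=5000):
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w += lr * np.mean((y - p) * x)        # gradient of the log-likelihood
        b += lr * np.mean(y - p)
    return w, b

rng = np.random.default_rng(5)
proj_pll  = rng.normal(-1.0, 0.6, 80)          # stand-in FLD projections (class 0)
proj_okt3 = rng.normal(+1.0, 0.6, 80)          # stand-in FLD projections (class 1)
x = np.concatenate([proj_pll, proj_okt3])
y = np.concatenate([np.zeros(80), np.ones(80)])

w, b = fit_logistic(x, y)
p_test = 1.0 / (1.0 + np.exp(-(w * 1.2 + b)))  # probability for one test projection
print("P(OKT3 | projection = 1.2) =", round(float(p_test), 3))
```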
In addition to producing classification results similar to those of FLD-SCAMPI, LR-SCAMPI corroborates the FLD (Fig. 3c). An examination of Fig. 3d,e indicates that 25 out of 40, or 62.5%, of the test images are within 10% of probability 0 or 1.0. The results also confirmed the consistency between the FLD and LR discrimination techniques. The ability to classify fluorescent TCR images from PLL and OKT3 surfaces suggests the feasibility of statistical quantification of TCR clusters using pixel-based values from fluorescence images.
Discussion
Our results indicate that pixel-based image information contained clustered features that can be classified into similar groups. These features could be contributed by the clustering state and global distribution of TCRs upon contact with different ligands. In addition, we observed that T cells flattened more on the OKT3 coated surface upon interaction. This larger degree of cell spreading could have also played a role in the differentiation between Null or PLL class and OKT3 class.
The effectiveness of FLD-based SCAMPI may depend on both the spatial distribution and fluorescent intensity of individual TCR clusters. The FLD projection of an individual cell represents an optimal weighted average of that cell's OLS parameters, also known as FLD eigenvector weights. To this end, our model parameters are sensitive to the image format and quality. Such dependence can be minimized by collecting images under identical experimental conditions. Unique characteristics related to the optical system, sample preparation, and data acquisition have been normalized within these imaging data. Image characteristics, including the point spread function of the optical system, higher-order optical aberrations, sample labeling densities, photophysical properties of different fluorescence labels, pixel size, and quantum efficiency of the detector camera, can play critical roles in the classification accuracy. In addition, the expression level of EGFP-tagged TCRs may also affect the results. In this case, immunofluorescence staining may be used.
We expect that pre-screening of images for quality control, such as for those obtained in clinical settings, could further improve the discrimination accuracy; however, small fluctuations in the data set may be tolerated by SCAMPI. For instance, out of the 100 PLL class images (Supplemental Fig. S2) whose average background intensity ranged from 127 to 180 (a.u.), four images had a high background intensity in the range of 193-294 (a.u.). The fluctuations may be due to user error associated with the TIRF angle, the set exposure, or laser intensity. These four images were included as part of the training data set for constructing the FLD projection. Despite the intensity fluctuations, these four images accounted for only 5% of the training data set. Therefore, their impact on the FLD projection and subsequent discrimination is attenuated. Such characteristic renders SCAMPI immune to small fluctuations.
Using FLD and LR, we demonstrated rapid classification using only 100 images from each class. We attribute the unique capability of SCAMPI to two salient factors. First, the vector transformation of fluorescence images provides the OLS regression analyses with a large degree of freedom (for example, the three-parameter model of Fig. 2c has 25,761 samples per estimated parameter). As such, the subsequent discrimination analyses leverage robust image-derived statistics. Second, the OLS estimation minimized inter-system noise. The PDE image model not only carries information about the number of TCR clusters and characteristics, such as their size and shape, but also the detailed spatial structure of the image. Each training image positively enhances the class discrimination model. In our demonstration, as few as 20 test images per class were found to be sufficient in achieving significant class separation and probabilistic corroboration. However, over-fitting may occur and degrade class separation by SCAMPI, resulting in less significant classification of images due to the random characterizations within the data set itself. FLD-SCAMPI is subject to the small sample test of a linear discriminator. We found that smaller sample sizes and overfitting of the training set can lead to decreasing discrimination efficiencies. For example, using FLD projections, the discrimination of a 40-40 test image set of PLL class and OKT3 class, comprised of 20 random training images and the original 20 test images, yielded a moderate discrimination of 72.5% compared to the 85% of the 20-20 test image set of those two classes. This lower discrimination accuracy may be a result of data overfitting, representing a limitation of SCAMPI. Nevertheless, a unique advantage of our technique lies in the rapid discrimination using small datasets. In our study, an OLS model estimate was achieved in 4.4 ms and the entire model construction and image classification took less than 22 s. One potential application of SCAMPI is the rapid screening of a large combination of experimental conditions and data sets. The pilot results can then guide and optimize the experimental design of more data-demanding investigations.
In this study, we employed controlled surface conditions to develop and validate the SCAMPI technique. In addition to the surface ligands, SCAMPI can be applied to time point studies, where TCR organizations may display unique temporal dynamics. Applications of SCAMPI to physiological samples require further considerations owing to the complexity and heterogeneity of single cells. Our 85% discrimination accuracy translates to the correct identification of 85% of all images obtained from PLL vs OKT3 or OKT3 vs Null surfaces. In addition to the discrimination accuracy, the "in-between" data points from LR-SCAMPI (Fig. 3d) may provide additional information. In the clinical context, these data points may indicate the requirement of further evaluation or testing. Such information can potentially mitigate the risk of misdiagnosis, but it is not captured in binary classification or reflected by the classification accuracy. In the future, such capability can enable the evaluation of the health status of patient T cells before initiating T-cell-based therapies. For drug development, SCAMPI may be effective in observing the efficacy of T cell inhibitor drugs that prevent the formation of TCR clusters 53.
SCAMPI may accelerate the translation of mechanistic understanding obtained by cutting-edge fluorescence microscopy 4,54-56 into clinical applications. As we demonstrate in this proof-of-principle study, determining whether a given fluorescence image of TCRs corresponds to OKT3, PLL, or inert surfaces can be difficult due to nuanced differences and heterogeneity displayed by the images (Supplemental Figs. S1, S2, S3). Image analysis techniques used to differentiate clustering states of TCRs have largely remained in the domain of low-throughput single-molecule and superresolution microscopy studies. In parallel, lack of interpretability and prediction-oriented algorithm design presents a fundamental problem for machine learning and deep learning-based discrimination strategies on standard fluorescence images. In contrast, the SCAMPI model development entirely depends on the relationship between the variables and the significance of the relationship. With its fast run time, SCAMPI can drive the development of high-resolution and high-throughput imaging flow cytometry to improve diagnostic accuracy by incorporating the spatial distribution of membrane markers.
In this report, we demonstrate a computationally efficient statistical technique to discriminate fluorescence images of TCRs from Jurkat T cells on uncoated glass surfaces and coated surfaces with different ligands. SCAMPI is computationally effective using a small sample size. Our proof-of-principle study using TCRs indicates their global distributions, in addition to the clustering state, may contain physiologically relevant information. Such information will complement single-molecule and superresolution studies to reveal the impact from the heterogeneous distribution of TCRs and associated membrane proteins on the regulation of TCR signaling and downstream T cell function. The successful demonstration of SCAMPI suggests that pixel-based image information can be utilized to classify complex organizations of membrane proteins beyond standard quantification techniques for fluorescence images. In the future, SCAMPI can be extended to study cells interacting with agonist versus non-agonist peptide-major-histocompatibility-complexes and other membrane markers 51 , as well as with different imaging modalities 57 , creating inroads for transforming fluorescence image-based discovery into clinical applications.
Surface preparation. Eight-well chamber cover glasses (Borosilicate sterile No 1.5, CAT# 155409, Lab-Tek) were cleaned with absolute ethanol and dH2O, then incubated overnight at room temperature. Stimulating surfaces were produced by adding OKT3 antibody (200 μL) at a concentration of 1 μg/ml in PBS (from Gibco, USA) into a well. Poly-L-lysine (PLL) surfaces were produced by adding PLL (200 μL) at a concentration of 0.01% in H2O (P8920 from Sigma-Aldrich, CAS#: 25988-63-0) into another well. Eight-well chamber slides containing OKT3 and PLL were incubated overnight at 4 °C. For the inert surface, a room temperature, sterile chamber slide was used.
Live imaging of TCR clusters. Supernatants of the wells containing OKT3 and PLL were decanted, and cells (50-100 k) in culture media were added to each well. They were incubated for 30 min at 37 °C and 5% CO2. After the incubation, cells were observed under a conventional microscope to confirm whether they were attached to the surface. Supernatants of the wells were decanted, and pre-warmed imaging buffer was added to the wells; cells were imaged live. The 100 images were collected by scanning 2 wells on 4 different days for the PLL surface condition and 5 different days for both OKT3 and Null surface conditions.
SCAMPI standard model statistics. The standardization proposed in the following model is based on the diffusive and advective structures currently reported and studied in the cell model literature 58-60. For this purpose, we propose the PDE model in (1), which is a temporal equilibrium form of a nonhomogeneous, hyperbolic PDE (Supplementary Note 1). Its digital, estimable form in (2) illustrates the model dependence on protein advection parameters and diffusion parameters.
The regression models for both training and testing utilized spatial lags, providing 3 OLS parameters. The spatial lags were in the x, y, and both x and y directions. After 3 parameters (one spatial lag), no training images had additional significant parameter estimates; therefore the test images were projected by a 3-by-1 eigenvector. This projection vector was applied to the 40 test images (20 from each class).
To meet the demands placed on (2) as a T cell protein membrane model, the number of images for estimating the parameters is restricted so that parameter significance is maintained to support an accurate discriminator as well as an accurate predictor of individual class member probabilities. Overfitting degrades parameter significance and compromises discrimination, which in turn compromises estimation of individual class member probabilities.
To determine the overall effect, in training and testing classes, that overfitting has on image discrimination, 20 images from PLL class and OKT3 class were selected randomly as a FLD training set. The separation of the 20-20 images resulted in a mean discrimination accuracy greater than 85%, as noted in Table 1; however, when this same procedure was followed with 80-80 images, the mean discrimination accuracy was 76.9%. This decrease in discrimination accuracy as a direct outcome of an increase in training images coincides with the hypothesis that overfitting will degrade parameter significance and compromise image discrimination. Because the OLS regressions each have a very large degree of freedom, parameter significance is degraded by increased parameter numbers and image sample numbers from overfitting.
An important assumption of our statistical model is that the image parameters are normally distributed. All 20-element b parameter vectors for both classes passed a Kolmogorov-Smirnov test for normality at 0.05 or better (Table 1). Fig. S3 confirms, in the sense of McShane and Gal 61, the normal distribution of the estimated b_k,l values.
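For completeness, the normality check can be sketched as follows. The parameter vector is a random stand-in, and standardizing with the sample mean and standard deviation makes the test approximate; that simplification is an assumption of this sketch, not a statement about the published analysis.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (not the published code): Kolmogorov-Smirnov test of the
# normality assumption for a 20-element vector of estimated model parameters,
# standardized before comparison with the standard normal distribution.

rng = np.random.default_rng(6)
b = rng.normal(1.0, 0.1, 20)                   # stand-in parameter vector

z = (b - b.mean()) / b.std(ddof=1)
stat, p = stats.kstest(z, "norm")
print(f"KS statistic = {stat:.3f}, p = {p:.3f}, normal at 0.05: {p > 0.05}")
```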
The Student-t statistics in Table 1, as noted, were computed using the White asymptotic parameter covariance matrix.
Calcium imaging. Fura-2 AM was freshly thawed in cell culture media to a final concentration of 5 µM. T cells were incubated in the Fura-2 AM solution for 50 min in the dark at room temperature in a 1.5 mL microcentrifuge tube. Cells were washed once with HBSS, then incubated for 10 min in HBSS at room temperature in the dark. T cells were then resuspended in their corresponding imaging buffer.
Cells were diluted using respective imaging buffers and added to wells containing their specific surface conditions (PLL, OKT3, Cover glass).
Cells were imaged with a Nikon Eclipse Ti2 inverted microscope with a 40×/1.30 oil-immersion objective and a 1.5 × external magnification. A Retrapad that can trigger between 340 and 380 nm excitation wavelengths was used. 10-min videos were taken using 50 ms exposure time, 1 s time delay, and 2 × 2 binning.
Determining effect of cell spreading. The distribution of the surface area of the cells was completed using a MATLAB code that labelled individual pixels in each image of the specific class using a binary system: 0 (below specified threshold) and 1 (above specified threshold). The pixels with intensities greater than the specified threshold were used to determine the surface area of cell spreading.
SCAMPI GitHub repository. Our 100 fluorescent images from the three different surface conditions and the MATLAB codes we used have been deposited on GitHub at the following address: https://github.com/jesseanderson/The-Ying-Hu-Group.
Data availability
Image data and MATLAB codes used in this work have been deposited to the following GitHub address: https://github.com/jesse-anderson/The-Ying-Hu-Group.
| 6,419 | 2021-07-29T00:00:00.000 | [ "Biology" ] |
Performance evaluation and analysis of four waves mixing in DWDM optical communications
Optical nonlinearities give rise to many ubiquitous effects in optical fibres. These effects are interesting in themselves and can be detrimental in optical communication. In dense wavelength division multiplexing (DWDM) systems the nonlinear effects play an important role. DWDM systems offer component reliability, system availability and system margin. A DWDM system carries different channels; hence the power level carried by the fiber increases, which generates nonlinear effects such as SPM, XPM, SRS, SBS and FWM. Four wave mixing (FWM) is one of the most troubling issues. FWM causes crosstalk in DWDM systems whose channel spacing is narrow. Wavelength exchanging enables data swapping between two different wavelengths simultaneously. These phenomena have been used in many applications in wavelength division multiplexing (WDM) optical networks such as wavelength conversion, wavelength sampling, optical 3R, optical interconnects and optical add-drop multiplexing.
ii. INTRODUCTION
FWM (Four Wave Mixing) or Four Photon Mixing (FPM) is the process whereby optical power from one channel in a multi-channel system is spilled over into adjacent channels. Three waves mix together to produce a fourth wave, which may or may not coincide with an original channel [1-8].
Equation (1) shows the formula for the generation of the FWM interfering terms, i.e. the mixing products at f_ijk = f_i + f_j − f_k (with k ≠ i, j).
Equation (2) shows the total number of interfering terms, M²(M − 1)/2 for M channels.
iii. Effects of FWM
Four wave mixing (FWM) is one of the most troubling issues. Three signals combine to form a fourth spurious or mixing component, hence the name four wave mixing. Spurious components cause the following problems [16-20]: interference with the wanted signals (crosstalk); additional noise that degrades system performance; and power lost from the wanted signals into unwanted spurious signals. FWM can be substantially reduced or perhaps completely eliminated through the following steps [18-20]. In a multi-wavelength system like DWDM, three waves mix together and produce a fourth wave whose power is given by Equation (3).
P_FWM = (η / 9) D² γ² P_i P_j P_k e^(−αL) L_eff², with γ = 32π³X / (n²λc A_eff)    Equ. 3 [16]
where n is the refractive index, λ and c are the wavelength and the speed of the light respectively.
X is the electric susceptibility.
D is the degeneracy factor which is 3 for Two tone and 6 for Three tone mixing.
η is the four wave mixing efficiency (inversely proportional to dispersion).
iv. FWM FOR EQUAL AND UNEQUAL CHANNEL SPACING
From Figure 2, it is inferred that when the inter-channel spacing is equal, the FWM power falls on the original signals and induces crosstalk. The FWM effect is described in [12]: in order to fulfil the coupling conditions, the four frequencies must be commensurate, i.e. the mixing product appears at f_ijk = f_i + f_j − f_k. Combining three waves, a fourth wave can be generated, and the coupled equations for the four wave signals are written as in [12], where A_i is the slowly varying envelope amplitude of the optical field at frequency f_i, α is the fiber loss coefficient and, depending on the state of polarization of the waves, the parameter C = 2 for parallel polarization and C = 2/3 for orthogonal polarization. The propagation (phase) mismatch constant is Δβ = β(f_i) + β(f_j) − β(f_k) − β(f_ijk).
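To illustrate why equal spacing is harmful, the short sketch below enumerates the mixing-product frequencies f_i + f_j − f_k for a four-channel plan and counts how many coincide with existing channels. It is our own illustration, not the paper's Matlab program, and the specific channel frequencies are arbitrary choices.

```python
from itertools import product

# Illustrative sketch: enumerate the FWM product frequencies f_ijk = f_i + f_j - f_k
# (with k different from i and j) for a channel plan and count how many fall
# exactly on existing channels. Equal spacing makes many products coincide with
# the original channels (in-band crosstalk); unequal spacing pushes them
# off-channel, where they can be filtered out.

def fwm_hits(channels_ghz):
    chans = set(channels_ghz)
    n_products, hits = 0, 0
    for i, j, k in product(channels_ghz, repeat=3):
        if k == i or k == j:
            continue
        n_products += 1
        if (i + j - k) in chans:
            hits += 1
    return n_products, hits

equal   = [193000, 193100, 193200, 193300]        # 100 GHz grid (equal spacing)
unequal = [193000, 193110, 193290, 193530]        # unequal spacing (illustrative)

for name, plan in [("equal", equal), ("unequal", unequal)]:
    n, on_channel = fwm_hits(plan)
    print(f"{name:8s}: {n} FWM products, {on_channel} fall on existing channels")
```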
v. EFFECT OF DISPERSION ON FWM
The dispersion in the fiber produces a phase mismatch and hence the interfering terms may be reduced. The simulation of the layout in the OptSim software gives the power of the FWM terms for various numbers of channels [20-22]. The dispersion in the fiber can be set from 0 ps/nm/km to any value [15, 16, 20].
vi. Minimization of FWM Effects
Traditional non-multiplexed systems have used dispersion-shifted fiber at 1550 nm to reduce chromatic dispersion. Unfortunately, operating at the dispersion minimum increases the level of FWM. Conventional fiber (dispersion minimum at 1330 nm) suffers less from FWM, but chromatic dispersion rises. The solution is to use non-zero dispersion-shifted fiber (NZ-DSF), a compromise between DSF and conventional fiber (NDSF, non-DSF). The ITU-T standard for non-zero dispersion-shifted single-mode fibers is G.655. By using unequal spacing between DWDM channels, the effect of FWM decreases [14-16, 22-25].
viii. RESULTS AND DISCUSSION
By varying the dispersion from 0 to 4 ps/nm/km, the effect of dispersion on FWM was observed in OptiSystem 7. The effect of equal and unequal channel spacing on FWM was also observed. These effects are shown in the following figures [15].
ix. CONCLUSION
FWM leads to inter-channel crosstalk in DWDM systems. It generates additional noise and degrades system performance. By using non-zero dispersion-shifted fiber, i.e. fiber having a dispersion of 4 ps/nm/km, and by using unequal spacing among channels, the FWM effect can be reduced. As unequal channel spacing can simultaneously reduce nonlinearities (FWM and SRS), it may result in better performance and reduced BER arising from higher OSNR.
Newly formed FWM products may fall on the original signal (and cannot be filtered out) or may not fall on the original signal (and can be filtered out). Spurious components are created, causing interference, degradation of signals and crosstalk. Four-wave mixing transfers energy from a strong pump wave to two waves up-shifted and down-shifted in frequency from the pump frequency ω1 [8-14]. If only the pump wave is incident on the fibre, and the phase-matching condition is satisfied, the Stokes and anti-Stokes waves at the frequencies ω3 and ω4 can be generated from noise. On the other hand, if a weak signal at ω3 is also launched into the fibre together with the pump, the signal is amplified while a new wave at ω4 is generated simultaneously. The gain responsible for such amplification is called the parametric gain [4, 12-16].
(a) Individual channel power reduction (b) Increased dispersion (Phase Mismatch) (c) Increased channel spacing.
To avoid this, unequally spaced channels are used. Fabrizio (1995) analysed the FWM for different bandwidth expansion factors and demonstrated the reduction in overlap of the interfering FWM terms with the original channels for unequal channel spacing [16-20].
Fig. 4: The spectrum at the fiber end.
Table 4.1 FWM vs No. of Channels for the Dispersion of 0 ps/nm/km
Table 2. Comparison of Interfering Terms. The Matlab programs are executed and the interfering terms are shown in Table 2; the numbers inside the cells of the table are obtained by counting the coinciding terms for each channel from the results of executing the Matlab program.
| 1,388.8 | 2021-04-26T00:00:00.000 | [ "Physics" ] |
Placing the ‘post-social’ market: Identity and spatiality in the xeno-economy
Following Knorr Cetina and Bruegger (2002), an understanding of financial markets as ‘post-social’ environments has gained sway. This claim is premised on the idea that new technologies, in particular screen displays of complex real-time financial information, have displaced ‘the market’ from the social and economic relations in which it might otherwise be assumed to be embedded. We argue that recent historical transformations in trading and markets are better characterized as ‘re-spatializations’ involving shifts in the placing and mediation of market spatiality. Material from the City Lives project and other sources is analysed to explore the transformation of the London International Financial Futures Exchange (LIFFE) from the early 1980s onwards. Using the notion of ‘xeno-economy’ (cf. Rotman’s (1987) ‘xeno-money’) it is argued that the spatial redistributions of the market did not so much efface sociality as set up new kinds of relations between local traders and institutions, notably mediated through geographically displaced ‘trading arcades’. The immanence of modes of sociality to markets as intrinsically and necessarily social objects is thereby emphasized.
Introduction
It is commonplace to acknowledge a 'turn to practice' across a broad range of social science areas, including marketing (see Araujo et al., 2008). While the precise terms of this turn differ markedly across each instantiation, it is possible to discern a number of key general characteristics. These include a rejection of simplistic cause-effect relationships in favour of a focus on the contingent emergence of phenomena and the 'work' which goes into their 'making'; a general tendency towards a form of constructionist epistemology often coupled with process-based ontology; and a refusal of dualistic thinking -e.g. subject/object; mind/body; theory/practice -with a particular concern to suspend the analytic category of 'the social' as a catch-all explanatory device. We might then describe the turn to practice as the empirical programme of post-structuralism (in the same way that conversation analysis is viewed as the empirical programme of ethnomethodology).
It is the last of these characteristics -the move towards 'post-social' or 'object-centred' analysis -which concerns this paper. To what extent is it possible to think about markets and economics without some version of the social acting as the basis for explanation? Can one really think consumption and exchange without associations between persons forming the core object of analysis? That is to say, without considering the determination of events through differentiated non-human actors?
Practice-based approaches do, of course, concern themselves with human action. In fact it might be said they consider little else. The 'performative' approach to economics developed by Donald MacKenzie, Michel Callon, Fabian Muniesa and Yuval Millo, for example, aims to dispense with the model of the market as a set of autonomous inhuman mechanisms and in its place offers a sophisticated image of calculative practices, techniques and associations between people and things that form assemblages which perform themselves as 'markets' (see Callon, 1998; Callon et al., 2007; MacKenzie, 2008; MacKenzie et al., 2008). This performance is accomplished through the constitution of new economic objects (or 'market devices') such as financial derivatives and the Black-Scholes model which, although appearing to have an independent objective existence, are in fact entirely contingent upon the network of social and technical relations through which they have emerged (see MacKenzie, 2008). Thus practice-based approaches of this kind might at first glance be seen as offering a complete socialization of economics and markets.
This proves ultimately not to be the case. MacKenzie and colleagues (2008) argue that once they have been constituted, market devices rapidly acquire their own capacity to act within a network of relations. Moreover, market devices allow for connections and associations with other networks, thereby amplifying and ramifying the potential to act for the network as a whole in a way which far exceeds the abilities or desires of its human participants, as recent financial crises have powerfully demonstrated. The performative approach to economics then takes the capacity to act of networked relations between people and things as its object of analysis without apportioning greater relevance a priori to one or the other (an analytic stance once termed the 'second principle of symmetry' by Callon, 1986).
Our argument in this paper is that while such an analysis quite properly draws attention to the mediation of human action by technical procedures and devices, the social nevertheless stubbornly persists as a matter of concern. More specifically, there are distributions of identity between human actors within markets which remain relatively intransigent in their form, despite massive changes in the networked relations and practices which make up these markets. The example we will use to advance this argument is the way that the subject position of the 'local trader' has remained a relevant live concern for LIFFE and Intercontinental Exchange Futures (ICE Futures), despite the globalization and technical transformations of international financial futures markets. We argue that LIFFE and ICE Futures are not 'post-social' markets but rather ones which have been subject to successive re-spatializations which have dis-placed and re-placed the subject position of 'local trader' over time.
In the following section we outline our objections to the concept of a 'post-social' market and the necessity of considering market spatiality. We then provide a series of examples to support our claims for the relevance of the figure of the 'local trader' drawn from first-hand accounts of market change. These are taken both from previously unanalysed interview-elicited accounts provided by participants about the history of LIFFE to the British Library's 'City Lives' project researchers, and from our own ongoing research into the transformation of the ICE Futures, primarily conducted with that market's 'local' traders. Finally, we offer some reflections on the relative persistence of the social within complex, self-referentially defined market spaces that we refer to as the 'xeno-economy'.
Market space
In their much cited article on the rise of the 'post-social' market, Knorr Cetina and Bruegger argue that technical shifts in the nature of trading in financial markets have depersonalized, dehumanized and, in so doing, fundamentally transformed the nature of market interaction. The contrast they make is between the sort of interpersonal trading found in telephone-based and open cry trading with screen-based trading. As they put it (2002: 163): After the introduction of screens, the market became fully available and identified as a separate entity in its own right for the first time -with prices, interests and the relevant information all visually indicated on screen. The market on screen is a 'whole' market and a global presence; it subdivides into different information feeds and dealing systems, but these are configured to form a global picture framed by the boundaries of the screen, which also serves as a medium for transactions.
Knorr Cetina and Bruegger argue, in effect, that through the virtualization and reframing of market information the social element of interacting with the market is somehow 'lost'. As a consequence it is irrelevant to ask what sorts of subject positions 1 are involved in managing these interactions, since there are no substantive social relations in the usual sense of that term. But we would point out that the most striking aspect of the changes they describe is not so much a de- or re-socialization as a re-spatialization. The sociality of a market is undoubtedly altered by the technology through which it is delivered, but this was as true of the introduction of ticker-tape, the telegraph, the telephone and earlier generations of computers as it is of advanced screen-based trading. As markets are fundamentally interactive environments, any new form of communications technology or mode of communication introduced to them will necessarily alter their social form and the sorts of subject positions they afford. The assertion that a screen-based market is therefore post-social implies that previous markets were in some sense 'more social' -in other words, they were less prone to determination by non-human agents.
For Knorr Cetina and Bruegger, the screen that displays complex financial information is the critical mediator in the market. They claim that it enables traders to develop the belief that the information brought together on the screen has a relative autonomy in relation to the human actors and actions in which it is entangled: The argument we make is that the exteriorization, assemblage and contextualization of 'the market' on screen construe the market, which at one time was dispersed among isolated and specific human connections, as an external 'life form' to which traders relate in sometimes adversarial forms of bonding while at the same time remaining able to 'enter' the life form and to become part of it. (Knorr Cetina and Bruegger, 2002: 164) The danger inherent in this argument, of course, is that in rendering screen based markets as in some sense exceptional, it misses and/or sidelines the non-human content of all markets. That is to say, all markets are mediated to some extent by artefacts that expand and complexify human action -indeed, markets themselves and the products that are traded there are both such artefacts. Or put slightly differently, interpersonal subjective relations have always been mediated by objects or 'interobjective' relations during the course of interactions with the market. Notwithstanding this, Knorr Cetina and Bruegger (2002) have clearly identified something important that takes place with the introduction of the screen, but we would nevertheless argue that the fundamental shift here is a spatial rather than a social one tout court.
Geographers (Harvey, 1990; Crang and Thrift, 2000; Pickles, 2004); literary theorists (Ross, 1989; Moretti, 1999, 2007); social, historical and scientific theorists (de Certeau, 1984; Latour, 1993); political economists (Cohen, 2000; Cameron and Palan, 2004); and many others have all analysed spatiality not just as the setting for social and economic activity, but moreover, as constitutive of it. In many social scientific contexts space is treated as an essentially passive 'container' for the complex drama of human interaction and development taking place within it. History is regarded as the essential locus of social dynamism while space provides little more than a backdrop. As the writers cited above and many others have argued in recent years, however, spatiality is never simply passive. Subject positions are worked out and given a historicity through spatial practices which combine formal institutional systems (e.g. local, national and international legal codes) and informal, interpersonal practices (e.g. divisions between private and public, hierarchical and permeable social relations) which are articulated through language and individual practice (e.g. the performance of gender, identity, status, being-in-the-know, etc.) (see de Certeau, 1984; Lefebvre, 1991 [1974] for classic accounts).
To be 'post-social', a market would have to be 'post-spatial' and there is simply no such thing yet in our experience as a market that so exceeds the spatial as to be sensibly rendered in such terms. Rather, and rather more modestly, there are complex, contingent and highly mediated transformations in the socio-spatial formations that compose markets. This results in sometimes dramatic and sometimes subtle redistributions of the spatially grounded subject positions that are threaded across a given market. The socio-spatial content of the market may change with the introduction of the screen -it cannot do otherwise -but it does not disappear.
While we clearly take issue, therefore, with some aspects of Knorr Cetina and Bruegger's (2002) characterization of contemporary markets as post-social, we do agree with them that markets are being (re)constituted in the context of significant changes in the nature of socioeconomic life. Specifically, the spatiality of the market is being reconstituted not simply with respect to existing spatial forms -states, national economies, international systems, etc. -but with respect to an increasingly pervasive and, at first sight, anomalous space that we describe here as the 'xeno-economy'.
The idea of a xeno-economy is developed out of Brian Rotman's relabelling of Eurodollars as 'xeno-money'. As he put it: For 'Euro' and 'dollars' one should write 'xeno' and 'money' respectively. The Eurodollar has long since shed its attachment to Europe. It is in fact, no longer geographically located, but circulates within an electronic global market which, though still called the Eurodollar market, is now the international capital market. (Rotman, 1987: 90) This superficially seemingly unlocatable market allows its product -xeno-money -to take on the mode of being of the signifier of value that is related only and inevitably to its own future states of value, as exemplified by the relation between the futures and spot markets for the imaginary currency of the Eurodollar. The value of xeno-money is then entirely self-referential. Of course, things have not stood still in the 20 years since Rotman wrote his account. In the intervening period one could perhaps say that currency has somewhat lost its currency or at least its absolute primacy as a medium of exchange. The Eurodollar market is not now the international capital market. Rather, what we inhabit is a world in which all manner of financial instruments of ever more abstruse nature and derivation can and are being traded against each other in a dizzying whirl.
As Rotman (1987) also implies, the geographical dislocation of capital markets from states has not taken the form of a relocation somewhere else. Rather, through various legal means, capital markets have been constituted outside of any recognizable or (for the moment at least) regulated socioeconomic space (Palan, 2003). They are everywhere and nowhere -they are placeless and increasingly so. Where there was once a relatively legible market space, trading now takes place in everything, everywhere, in a system of exchange that, despite its reliance on various forms of general equivalence, resembles more and more a barter based bazaar.
As the various processes unleashed in the 1970s have accelerated and proliferated, they have come to encompass more and more aspects of economic activity and, as a result, increasing aspects of everyday life. We use the term xeno-economy in this context to refer to the overall delinking of markets from conventional spatial locations (often captured by concepts such as 'globalization' or 'offshore', cf. Cameron and Palan, 2004) in contrast to that partial aspect of this process that Knorr Cetina and Bruegger describe as the 'post-social'.
From this it ought to follow that if the markets which make up the xeno-economy become decoupled from clear spatial location then this ought also to result in the sort of de-socialization that Knorr Cetina and Bruegger assert. But the history of LIFFE and ICE Futures tells us otherwise. In the move from 'open-pit' trading to a largely electronic system, both individual and institutional responses sought to replace the lost spatial relations (social as well as business) in order to restore a legible, humanized spatial form to the market. These responses took the form of resistances (traders staging walkouts), exclusions (new boundaries being set and policed) and inclusions (new participants entering and reshaping the market). Some were the result of planned transitions (though not always with planned outcomes). Others were spontaneous reactions -positive and negative -to the new market structure. In other words there was a re-spatialization of the market in response to changes which re-inscribed social relations and subject positions -notably that of the 'local trader'.
In the following sections we will now turn to consider LIFFE and ICE Futures. We will offer an account of the ways in which these particular markets and their various participants have responded to changes in the socio-spatial context -some planned, some not -in which they function. We will argue that having been confronted with the unsettling openness of virtualized markets, the response came in the emergence of processes of 're-placement' which reinvested economic activities with a legible and usable spatial form and meaning. The 'local trader', which constituted a problematic subject position often seen as an obstacle to the globalization and virtualization of markets, returned through processes of re-inscription and re-placement. This does not necessarily imply the simple rediscovery of lost spatial forms and subjectivities but rather a complex set of processes whereby old and new spatial forms are being created and/or reasserted by individuals and groups within and across a variety of institutional forms (public and private, corporate and state, personal and collective).
The Glocalization of LIFFE
The London International Financial Futures Exchange, LIFFE, which began trading in 1982, in many ways typifies the transformation of financial and securities markets during the 1980s and 1990s. Starting out as a self-consciously innovative futures market based on experiences of the Chicago futures exchange, LIFFE was from the outset regarded as a pioneering city institution (Kynaston, 1997). Not only were the traded products new in their increasing abstraction, but over time LIFFE was to innovate trading practices that would later become the norm for all city institutions. Read in one way, LIFFE's history is the archetype of the developmental trajectory of the post-social market. Revelling in the language of deregulation and globalization, and deliberately adopting the brash and often aggressive individualism of the Chicago exchanges, LIFFE began trading as a consequence of the Thatcher government's removal of exchange controls from British markets in 1979. LIFFE was a self-consciously 'global' player even before the idea of globalization had fully entered the public or media lexicon.
The account here of LIFFE's transformation into a fully screen-based market is derived from materials gathered under the auspices of the 'City Lives' project, whose results are held by the British Library's Sound Archive (see Courtney and Thompson, 1996, for a brief account of this resource as well as some of its highlights). The project encompasses extensive interviews with some 200 participants in 'the City' during a period of rapid transformation and thus constitutes an invaluable resource against which more recent changes can be assessed. The interviews with participants are exemplary in the ways in which they have encouraged and elicited deep reflection and huge self exposure across whole swathes of life, work, family background and dynamics and, indeed, the broad historical and institutional context in which they worked. The material we discuss here is offered as supporting material for the argument we are making -we are not presenting a thorough analysis of either the whole corpus or the individual interview in question. Our interest here in the material is then theoretically driven and the use we make of it is illustrative and for the purposes of exemplification.
The extracts we will discuss come from an interview with John Barkshire, the central figure in the founding of the LIFFE (Kynaston, 1997: 20). In his interviews, Barkshire outlines the background to the creation of LIFFE in complex spatial terms: It really started with the five months that I spent in New York and Chicago on Mercantile House's behalf, looking at the futures markets to see what was in it for us. And that led me to come back in September of 1979 with two recommendations, one was that Mercantile House should be involved in the futures market . . . and secondly a view that the futures markets were going to spread outside Chicago, and they were already beginning to spread into New York, and that if they did spread outside Chicago they were likely to become international, as international players were starting to become members of the Chicago markets, and if they became international there ought to be a market in Europe, and if there was going to be a market in Europe it jolly well ought to be in London. And so ... I mean they weren't very clever thoughts, they were just logically looking at the way futures had developed since they'd been invented in 1975, till that period in 1979. In 1975 they were, like all of the futures markets in Chicago, dominated by the locals, individual traders, who made up all the trading. By 1979 the major Wall Street houses were becoming involved in the markets; one or two were beginning to become members, or just starting to buy seats on the markets, and it was quite plain that the markets were going to become institutionalized rather than dominated by locals. And it was not as plain but fairly plain that they were going to become international. (BLSA interview) The spatial redistribution that Barkshire claims as his vision of the future of the market requires a little unpacking. On the one hand it clearly embraces the possibilities of internationalization (and, later, globalization) opened up by deregulation, but does not present a simple spatial displacement of existing market forms. Barkshire's internationalism is one in which the existing spatiality of the markets spreading from Chicago to New York are further internationalized within and through specific spatial networks and centres -'it jolly well ought to be in London' -and, of course, within the bounded market of futures transaction. As such, Barkshire's vision was, from the outset, 'glocalized' (Swyngedouw, 1997) or, more specifically, 'glurbanized' (Jessop, 2000). These twin portmanteau terms direct attention to the negotiation of the global in the local (and vice versa). In other words, despite what would later become a rather simplistic narrative of 'globalization', LIFFE was always couched within a complex and emergent urban spatial matrix. Barkshire's vision is of a market with clear boundaries to entry and activity, but one that is already international, and distributed across several sites and their interrelations.
At the centre of this spatial mix is Barkshire's distinction between a market dominated by 'locals' and one that is 'institutionalized'. 'Locals' are professional traders that trade on their own account, as opposed to trading funds for clients. Their main activity is speculation on the movement of prices within markets, typically through short-term positions. The practice of local trading is perhaps to be witnessed in its purest form in the act of 'scalping' -the trading off of the difference between the bid price of an instrument (the price at which participants in the market are willing to purchase) and its ask price (the price at which participants in the market are willing to sell). In Chicago in the 1970s locals were central to the operation of derivatives markets: 'the army of professional speculators . . . who traded on their own account in the futures market there . . . provided up to 60% of total turnover' (Kynaston, 1997: 10).
The subject position 'local trader' is seen as a key ingredient to successful futures trading because of its association with risk. In trading discourse, locals provide the ongoing bedrock of trading activity that delivers the liquidity deemed to be essential to any functional market. Locals boost the number of participants and transactions in a particular market and thus are key to maintaining market dynamism that might otherwise gradually be dominated and trammelled by big investors. In addition to their sheer number, the 'local trader' is taken as the source of fine-grained market knowledge that adds a depth of analysis of particular trading opportunities unavailable to the institutional investor. As one of our informants from the IPE comments: When trading in the pit, most of us would trade over the very short term, trying to capture small profits quickly with little risk, taking advantage of brief inefficiencies that made a certain price either cheap or expensive. This could be done by watching the flow of orders into the market and knowing exactly what every traded month was worth against another. We could also buy the market if we saw Goldman Sachs for instance, buying the market. And taking a loss was far easier because we could see where the orders were to get out. (Trader T: personal email) Barkshire's most significant claim, therefore, is less the macro-spatial distribution of the futures markets across national borders than his belief that the market could and would function more effectively once the institutions took over from locals such as Trader T, whose local knowledge is expected to be effaced by the demise of pit trading. The displacement of the market is, thus, twofold for Barkshire -internationalism and institutionalism replace localism both in the sense of the 'locally' concentrated market and the 'local' knowledge embedded in the social and personal networks of the traders themselves.
This hostility to the 'local' was not just Barkshire's personal preference but also a strategic choice based on assumptions about the risk aversion of the investors he needed to create LIFFE in the first place. Although a key feature of futures markets in their initial form, locals and market localness (particularly in Chicago) were regarded as sources of potential instability. Barkshire believed that big institutional players would need to be reassured that entering a derivatives market still seen by many as inherently risky would not entail an organization swimming in the seemingly tawdry and dangerous waters of a market of speculation: Why should these commercial people, who were good at making soap powders or ball-bearings, or whatever it may be, why should they take a risk of moving into what they perceived as a risky market? And their treasurer might well stand there and say, 'it's the avoidance of risk'. And they'd say, 'well we did read something about it, and it's locals taking risks, whatever locals might be. We've read words like ''speculators''; we've read things like ''gambling'' and all those sorts of things. Doesn't this happen in the market?'. (BLSA interview) As this extract implies, assessments of market risk were to some extent linked to the spatiality of the participants. It was perhaps because of this that at the outset risky, speculative locals were specifically excluded from LIFFE -all the trading 'seats' being taken by institutions (Kynaston, 1997). Although LIFFE wanted to reassure its clients by restricting access in this way, it also recognized the significance of the subject position of the 'local trader' to market function. The physical architecture and appearance of the trading floor, for example, specifically emulated the set up of the locally dominated Chicago exchanges: There was never any doubt that it would be an open outcry market, but it was a conscious decision to base it physically on the Chicago model of futures markets rather than on, say, the cocoa or coffee markets in London. This model had three prime characteristics: pits (not rings), with steep steps; open, low booths (not boxes that allowed private conversation); and a big display board. The Chicago model also dictated that the traders would wear coloured jackets, never before seen in the city. Each member chose something different, and almost the only one rejected by LIFFE was a Union Jack design, the worry being that it would be seen on television as selling the pound down the river. (Kynaston, 2001: 610) All aspects of the new market, both inside and out, were designed, consciously or not, with a view to 'public' consumption. The placing of a Chicago-style market in the Royal Exchange Building in the City of London, and the initial relative marginalization of the locals were used to send a variety of messages about the nature and function of the market to a range of viewers and actors. LIFFE used the physical space of the trading floor both to narrate its spatial and institutional relationships to others markets and to discipline the behaviours and identities of its (selected and exclusive) market participants. This was done in response to perceptions of the changing nature and spatiality of the wider markets in which LIFFE was situated, including, perhaps rather anachronistically, the national sensitivities of British investors and commentators over the fate of Sterling. As such, LIFFE constituted itself as a 'special public' (cf. 
Merrill and Clark, 1934), wrought within a complex matrix of boundaries and identities. It was not long, however, before one aspect of the initial set up had to be radically transformed. Soon after opening to an exclusively institutional clientele, LIFFE felt obliged to admit locals in order to deliver not only the enhanced liquidity with which their presence is often associated but also the willingness to engage speculatively on their own part in order to enable an opposite party to hedge (Kynaston, 2001: 610).
Screens and arcades
Much as Barkshire and other powerful market makers might have been able to plan and construct their trading environments to a degree, they could only do so in the wider context of a rapidly changing economy. While Barkshire's designs for LIFFE were a response to the increasing international reach of emerging futures markets initially based elsewhere in the early 1980s, the rapid globalization of securities and derivatives markets soon brought about further significant changes in trade spaces -specifically the introduction of screen-based trading. LIFFE began to move to electronic trading in June 1998 (Kynaston, 2001: 780), emulating many other markets' initial partial moves away from complete mediation of trade by open outcry pits, by offering an electronic trading platform to run alongside activity taking place in its pits.
Given the obvious significance placed on the physical design of the market space, even this partial shift to screen-based trading constituted a very fundamental change and one that triggered complex responses. Ostensibly, moving away from the open pit promised to further remove the risky idiosyncrasies of 'localness' from the trading process. While the screen did not render the individual trader wholly obsolete, particularly while the possibility of open outcry trading persisted, it began to modify the subject position of the 'local trader' in part by altering what traders actually did. The most immediate change brought about by the screens -not fully realized until the pit was suspended in favour of total electronic trading -was that part of a trader's business melted away. As one of our informants who experienced a similar process of partial to complete electronification of trading at the International Petroleum Exchange (IPE)/ICE put it: In the pit, traders either made money by doing business for trading firms or they made money by buying low and selling high. On the screens, there is no need for trading firms to use traders as they have their own computer terminals and merely have to type the orders in themselves. (Trader T: personal email) This is not the only change wrought here. For a second means of making money for the local also seems to ebb away in the face of the screen: But the biggest difference for myself and other similar traders trying to buy low and sell high is the lack of transparency on the screens. [ . . . ] On the screen, however, we have no idea who is buying or how many they want to buy due to various computer programs available that disguise amounts or even pretend to look like they wish to buy when they actually don't. And when taking a loss, the price you see on the screen is rarely the price you will receive because the speed and hidden/non-existent orders often mean that you just press the button and hope. I for one am trading much less volume than I did in the pit because it is much harder to quantify my risk. (Trader T: personal email) Just as the physical design of the original LIFFE exchange constrained the relative visibility and audibility of trades, so the design of screens at the ICE Futures disciplined trader behaviour by only showing them certain aspects of the market. It did this not by erasing sociality or by according the market an autonomous 'form of life' as Knorr Cetina and Bruegger (2002) suggest, but rather through a process of social complexification. For example, hidden and nonexistent orders are understood by traders as the manifestation of social strategies played with other traders, although they can appear to take on a kind of autonomy when they are 'read' through the constantly updated data on the screens. The local interpersonal practices are still there -they remain intransigent and immanent to the technological shift to screen-based trading. Rather than reduce the figure of the 'local trader' the screen now makes the potential presence of local traders a continuous and ongoing concern. Every fluctuation in the quantitative movements continuously updated on the screen may be read as the trace of local traders' strategic activities. The 'local trader' now became everywhere, legible in every flicker and fluctuation of market data.
The introduction of the screens certainly altered the social and spatial dynamics of trading behaviour, yet it did not eliminate them. Individual traders instead adapted their 'style' to accommodate the constraints and possibilities of the electronic market space: [M]ost ex-pit traders have had to change their trading style due to the reasons I mentioned earlier.
These changes mean quite a few fail, and many I know have left the business completely, becoming plumbers, taxi-drivers, etc. But most of us have relearned our trade and continue trading daily, with some going on to even greater heights making even more money than they did in the pit. (Trader T: personal email) Perhaps even more significantly for our argument, with the final closure of the pits and obligatory electronic trading, new forms of engagement with the market have developed which either recreate aspects of the pit, or which create new arenas of interaction altogether. Specifically, the responses of the dealers and wider stakeholders have been to re-place it in a variety of ways. It is not simply that the subject position of 'local trader' has persisted through the changes in trading, it has become actively re-elaborated and re-placed in new forms of spatial relations.
Of these, perhaps the most important have been their relocation into 'trading arcades'. Arcades developed in parallel with the rise of screen-based trading in markets such as LIFFE and partly as a reaction against them. The arcade offers individual traders a space and the necessary facilities to engage in precisely the sort of idiosyncratic, 'risky', socialized trading characteristic of the open pits. Silverman (2001: 16) described the original American arcades in the following terms: An arcade may have a hundred or more traders in it, or as few as two. The traders may trade for a proprietary 'house' account in which all in the house share in the trading results, or they may be individual customers, each trading his or her personal capital and responsible for his or her own profits, losses, and business expenses. Demographically, arcades are melting pots, mixing professional traders with novices, mid-career changers with those fresh from a university, and both men and women (at this time there are still far more men, but the disparity in numbers is diminishing). Many different products may be traded in an arcade. A single screen may accommodate electronic markets in NASDAQ securities, stock index futures, cash bonds, and scores of other products from exchanges all over the world. The traders may have access to analytical information such as charts and news services, software that allows them to test trading ideas, and Internet connectivity. There is also likely to be a squawk box service that broadcasts live prices from various open outcry markets. Finally, no trading arcade is complete without a television monitor hanging from the rafters tuned to CNBC.
The arcade embraces the distanciated and distanciating technologies of the xeno-economy -particularly the multiplicity of screens -but draws it into a physical trading environment which deliberately reproduces the possibilities of interpersonal, social, interaction that characterized the open pits. That the pits are not simply reproduced in this process is seen in the radically different demographics of the arcade participants. And intriguingly, the arcade also reshapes possible modes and media of engagement with the market, outsourcing ownership of the trading seat, its mode of connection to the servers hosting the electronically enabled market clearing mechanism, and often the analytical and visual software through which the market is realized as entity for the trader, reengineering the differently integrative institutionalizing trend initiated by Barkshire and colleagues.
This differential and even oppositional aspect of the arcades could, in some cases, be expressed by their physical location with respect to the exchanges. In 1997, for example, the Kyte Group opened an arcade -one of the first in the UK -directly opposite LIFFE and populated by LIFFE traders. As Peter Green, currently Director and Chief Executive of the Kyte Group Ltd, put it: We opened it across the street from LIFFE so that traders could move between the floor and our dealing room very quickly . . . At the time, most traders worked on the screens during quiet periods in the market, but almost invariably they would run -literally -back to the pits if there was any real excitement in the markets. (Cited in Zwick, 2007: 60) Thus although the arcades offer the potential to redress the spatial imbalances of the institutional-scale markets and to 'resocialise' trading (though this is not perhaps what their designers thought they were doing), they have not simply replaced the locales in which markets such as LIFFE take place. Rather, they have reintegrated themselves into the institutional structures of the markets in ways that both reintroduce and reassert the importance of locals, but which have subsequently also opened up new spatial and temporal trading forms.
The intransigence/immanence of the social to 'post-social' markets
Our argument in this paper has been that a practice-based approach which rejects 'the social' as part of its analytic apparatus does a disservice to the historical changes which have occurred in financial markets since the 1970s. While we welcome the focus on technical mediation and the relative displacement of markets through the shift from supposedly 'interpersonal' trading in favour of screen-based trading, we think that the approach of MacKenzie and colleagues, as recommended by Araujo et al. (2008), is problematic because it fails to foreground the link between spatiality and identity. Subject positions are grounded in spatial arrangements, and changes in the one always result in transformations of the other.
We have offered some brief extracts which convey some sense of the changes that have occurred in LIFFE and ICE Futures. Our aim has not been to offer a thoroughgoing analysis of these changes but rather to show that even minimal engagement with the accounts provided by participants demonstrates that a 'post-social' reading is lacking. The re-spatializations taking place throughout these accounts of market change are complex. They are both integrative -drawing together elements of the market that had previously been spatially separate, often invisible and 'territorially nested' -and disintegrative -they fragment established markets' relationships and redistribute functional, interpersonal networks and groups. While some of these become -in Knorr Cetina and Bruegger's terms -(partially) 'post-social', what emerges in the trading arcades is a re-socialized and, therefore, re-spatialized form of market interaction. And with the new spatial, sectoral and ethical boundaries come new modes of market identity. As this implies, wholly post-social markets are either unworkable -they need some form of socio-spatial 'proximity' (real and/or virtual) to function -and/or undesirable -for all the apparent predominance of technology, markets remain intrinsically human institutions.
We conclude then with a warning. There can be little doubt that one of the most pressing analytic tasks facing marketing theory is understanding how 'the market' comes to appear as an autonomous actor for both consumers and marketing researchers. Practice-based approaches quite rightly draw our attention to the labour which underpins the constitution of markets as such, including financial markets. But in doing so there is a danger of seriously downplaying the way in which certain kinds of subject positions -such as the 'local trader' within financial markets -are critical to the functioning of markets. It is not the case that humans make markets, but neither is it the case that markets make persons. Rather, we need to understand the complex historical dynamic of shifting placings and re-placings of identities, mediated by technology, which have delivered us the markets we are now obliged to inhabit.
Note 1. We use the term subject position to refer to the constitution of a recognizable social subject within a given practice (see Hollway, 1989; Davies & Harré, 1990). This happens through a combination of techniques, discourses and material arrangements. Foucault's (1972) notion of an 'enunciative position' describes this process well, although it is important to grasp that Foucault is concerned with the simultaneous discursive and non-discursive production of subjectivity (see Brown & Stenner, 2009). | 9,289 | 2010-09-01T00:00:00.000 | [
"Economics"
] |
Matching 3d N=2 Vortices and Monopole Operators
In earlier work with N. Seiberg, we explored connections between monopole operators, the Coulomb branch modulus, and vortices for 3d, N=2 supersymmetric, $U(1)_k$ Chern-Simons matter theories. We here extend the monopole / vortex matching analysis to theories with general matter electric charges. We verify, for general matter content, that the spin and other quantum numbers of the chiral monopole operators match those of corresponding BPS vortex states, at the top and bottom of the tower associated with quantizing the vortices' Fermion zero modes. There are associated subtleties from non-normalizable Fermi zero modes, which contribute non-trivially to the BPS vortex spectrum and monopole operator matching; a proposed interpretation is further discussed here.
Introduction
Three-dimensional U(1) gauge theories exhibit IR-interesting phenomena and phases, with qualitative similarities to 4d non-Abelian gauge theories. For example, electric-magnetic dualities can be explored in this context, and the U(1) gauge group makes it easier to make the duality more precise, and potentially construct the duality map between fields. This is particularly true for 3d theories with N ≥ 2 supersymmetry, where magnetically charged, BPS vortex solitons can be regarded as giving the dual quanta in terms of the electric variables, with corresponding chiral superfield monopole operators.
Building on [1], we here consider 3d, N = 2 supersymmetric, compact U(1)_k gauge theory (k is the Chern-Simons coefficient), with matter chiral superfields Q_i, with general electric charges n_i ∈ Z. A key aspect is that the theory has an exact, conserved global U(1)_J topological symmetry, with current j^µ_J = ε^{µρσ}F_{ρσ}/4π, and associated charge q_J = (1/2π)∫d²x F_12 (1.1). The theory contains local operators, and particle states, with q_J ≠ 0, despite the fact that the photon and Q_i have q_J = 0. There are three distinct, related ways to get q_J ≠ 0: 1. Monopole operators: disorder the gauge field, with q_J units of magnetic flux, around a point x^µ_0 in spacetime [4,5,6]. It is a local, chiral N = 2 operator (the 3d reduction of 4d 't Hooft line operators). This short-distance definition of the operator is independent of IR data, e.g. the particular vacua, or the spacetime geometry. The chiral condition implies that the real scalar σ = Σ| of the N = 2 photon linear multiplet has the behavior given in [6,1]. Upon taking ζ → 0, all Q^vac_i → 0, the BPS magnetic vortices become massless, and can potentially condense and give a dual Higgs description of the Coulomb branch [9], in the sense of 3d mirror symmetry's exchange of the electric and magnetic Higgs and Coulomb branches [10]. See also [11,12] for vortices and partition functions.
Connections and distinctions between monopole operators, vortices, and the Coulomb branch, for the theories in flat space, were studied in [1,13], and will be further explored here. We determine, and match, the gauge and global charges of monopole operators and the vortices. For the monopole operators X_±, the charges are simply, and exactly, obtained by a one-loop calculation of induced Chern-Simons terms [9,14,15,1], given in (1.6), with k_c ≡ (1/2)Σ_i n_i|n_i| (see sect. 2). The operators X_± in (1.6) exist as gauge invariant operators only if k = ∓k_c; this is the condition for the X_± Coulomb branch to exist. (Footnote 3: The superconformal U(1)_{R*} of the N = 2 SCFT at Q_i = X_± = 0 is a linear combination of those in (1.6), U(1)_{R*} = U(1)_R + Σ_j R_j U(1)_j, so ∆(Q_i) = R_i, and ∆(X_±) = R(X_±) = (1/2)Σ_i |n_i|(1 − R_i), with R_i determined by F-extremization [16] (or τ_RR minimization [17,14]).)
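As a quick numerical illustration of the bookkeeping just described, the following sketch computes k_c = (1/2)Σ_i n_i|n_i| for a given matter content, checks the quoted gauge-invariance condition k = ∓k_c for X_±, and evaluates the dimension formula ∆(X_±) = (1/2)Σ_i|n_i|(1 − R_i) quoted above. The function names and the example charge assignments are ours, not from the paper.

```python
from fractions import Fraction

def k_c(charges):
    """k_c = (1/2) * sum_i n_i * |n_i| for matter charges n_i."""
    return Fraction(sum(n * abs(n) for n in charges), 2)

def coulomb_branches(charges, k):
    """X+ is gauge invariant only if k = -k_c; X- only if k = +k_c."""
    kc = k_c(charges)
    return {"X+": k == -kc, "X-": k == kc}

def dim_X(charges, R):
    """Delta(X_pm) = (1/2) * sum_i |n_i| * (1 - R_i), with R_i the matter R-charges."""
    return sum(abs(n) * (1 - Fraction(r)) for n, r in zip(charges, R)) / 2

print(k_c([1, 1, -1]))                                   # 1/2
print(coulomb_branches([1, 1, -1], Fraction(-1, 2)))     # X+ allowed, X- not
print(dim_X([1, -1], [Fraction(1, 2), Fraction(1, 2)]))  # 1/2
```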
The corresponding charges of BPS vortices arise in a seemingly different way, from quantizing the vortex Fermion zero modes Ψ_A, with A = 1 . . . N_z, as in (1.7). This formally gives a tower of 2^{N_z} degenerate states: treating the Ψ_A (Ψ†_A) as raising (lowering) operators, the top and bottom vortex states in this tower are given in (1.8). Writing |0⟩_{q_J} as the naive (ignoring zero modes) groundstate for q_J ≠ 0, we identify the X_± quanta with the top and bottom vortex states, as in (1.11), with |0⟩ the q_J = 0 vacuum. We verify that the vortex charges, computed from (1.10), are indeed compatible with (1.11) and the X_± charges in (1.6).
This matching was verified in [1] for theories with N matter fields Q_i, with all n_i = 1.
We here extend the analysis to theories with general matter charges n_i. We find that, in the q_J = ±1 vortex background (for ζ > 0), the Fermion component of Q_i leads to |n_i| zero modes, Ψ_{i,p=1...|n_i|}, with charges and spin given by (1.12) (again, k_c ≡ (1/2)Σ_i n_i|n_i|). Quantizing the Ψ_{i,p} gives a tower of 2^{Σ_i|n_i|} degenerate vortex states. The top and bottom states |Ω^±_{q_J}⟩, as in (1.8), have quantum numbers that follow from (1.12) and (1.10); this gives the charges of |Ω^±_{q_J=1}⟩ in (1.12). These |Ω^±_{q_J=1}⟩ charges indeed agree with those of X_+ and X†_− in (1.6), fitting with the proposed operator / state map in (1.11). As we will see, the |q_J| = 1 Fermi zero modes in (1.12) have large-z behavior (from (1.4)) |Ψ_{i,p}| ∼ |z|^{p−1−|n_i|}, and the p = |n_i| case is non-normalizable, for every matter field.
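The zero-mode counting just quoted can be mimicked in a few lines: each field of charge n_i contributes |n_i| modes of spin sign(n_i)(p − 1/2), so the tower has 2^{Σ_i|n_i|} states, and the spins of all the modes add up to k_c = (1/2)Σ_i n_i|n_i| (a simple consequence of (1.12)). This is only a sketch of that arithmetic; the helper name and the sample charges are our own.

```python
def zero_mode_spins(charges):
    """Spins of the q_J = +1 vortex Fermi zero modes: a field of charge n
    contributes |n| modes of spin sign(n) * (p - 1/2), for p = 1..|n|."""
    spins = []
    for n in charges:
        sgn = 1 if n > 0 else -1
        spins += [sgn * (p - 0.5) for p in range(1, abs(n) + 1)]
    return spins

charges = [2, 1, -1]                 # hypothetical matter content
spins = zero_mode_spins(charges)
print(len(spins), 2 ** len(spins))   # 4 zero modes, a tower of 16 states
print(sum(spins))                    # 2.0 = k_c = (1/2) * sum_i n_i * |n_i|
```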
As in [1], we quantize all Fermi zero modes as in (1.7), including the non-normalizable ones, and interpret the non-normalizable Fermi zero modes as mapping between different Hilbert spaces. But some additional discussion is required here, particularly for theories with k = k_c = 0. Then both X_+ and X_− exist in the same theory, corresponding to the two Coulomb branches. Fitting with (1.11), both |Ω^+_{q_J=1}⟩ and |Ω^−_{q_J=1}⟩ in (1.12) have U(1)_spin charge zero, and can condense to give the X_+ or X_− branches. But |Ω^+_{q_J=1}⟩ and |Ω^−_{q_J=1}⟩ are related via non-normalizable Fermi zero modes. The BPS quanta created by X_+ and X†_− evidently must reside in different Hilbert spaces, which seems puzzling. Our (tentative) interpretation is that this reflects the fact that X_+ and X_− label two disconnected branches of the moduli space of vacua, i.e. that X_+X_− ∼ 0 in the chiral ring. Quantum field theories typically do not have a Hilbert space of single-particle states, with a mapping between them via normalizable zero modes. To the extent that it can happen for BPS states, it relies on the x-independence of the chiral ring OPE. If a product of chiral operators is zero in the chiral ring, the associated BPS states can appear to reside in different Hilbert spaces. We discuss this further in sect. 5, e.g. for N_f = 1 SQED, and its W = MX_+X_− dual. It would be good to have a more complete understanding.
The outline of the remaining sections is as follows. Section 2 briefly reviews some of the basic points, and sets up our notation and conventions; a few more details are in an appendix. Section 3 broadly discusses the BPS vortices, and their zero modes, for the general N = 2 susy, U(1)_k, charge-n_i matter theories. Section 4 discusses vortices and zero modes in general cases with a vev Q_i ∝ δ_{i,1}, with Q_1 of charge n_1 = 1. Section 5 considers theories with N_± matter fields of charge n_i = ±1, e.g. N = 2 SQED with N_+ = N_− = N_f flavors. Section 6 discusses cases where Q_i ≠ 0 for matter with charge n_i ≠ 1, where there can be an unbroken Z_{|n_i|} discrete gauge symmetry, i.e. an orbifold.
One could generalize to non-Abelian gauge theories; it will not be considered here.
2. A few preliminaries (see also the appendix)
Lagrangian and effective Chern-Simons terms
The U(1)_k gauge theory, with matter fields Q_i of charges n_i, has classical Lagrangian (2.1). We will set the real masses m_i = 0, and take W_tree = 0. Dirac quantization for monopole operators implies that the Chern-Simons coefficient k is appropriately quantized. The supersymmetric vacua have expectation values of the Coulomb modulus σ = Σ|, or the matter fields Q_i = Q_i|, subject to the conditions D = 0 and m_j(σ)Q_j = 0, where D is given in (2.3) and m_i(σ) ≡ m_i + n_iσ. The effective FI parameter ζ_eff and Chern-Simons coefficient k_eff are shifted by integrating out massive matter, with ζ_eff = ζ for m_i = 0; a Coulomb branch of supersymmetric vacua requires k_eff = ζ_eff = 0. The asymptotic values of k_eff for σ → ±∞ are k ± k_c, which vanish if k = ∓k_c, respectively. For non-zero k_eff and ζ_eff, there are also isolated "topological vacua," with Q_i = 0 and σ = −ζ_eff/k_eff; those vacua will not enter in our discussion.
Chern-Simons contribution to Gauss' law, and charges and spin from q_J
The Chern-Simons term affects Gauss' law (the A_0 equation of motion), as in (2.6). The Chern-Simons contribution in (2.6) implies that operators or states with q_J ≠ 0 acquire an associated electric charge, and a related contribution to their spin [31][32][33], if the Fermions are massive and integrated out. For vortices, if k ≠ 0, the last term in (2.6) leads to A_0 ≠ 0, which complicates the equations of motion.
The gauge and global charges of the X_± operators in (1.6) follow from (2.7), and its analogs for mixed gauge-flavor Chern-Simons coefficients. Since X_± extend to σ = ±∞, the asymptotic k_eff = k ± k_c must vanish, which is the condition for X_± to be a gauge invariant, scalar operator: if k = ∓k_c, then the X_± Coulomb branch exists. The analogous mixed gauge-flavor coefficients involve Σ_i n_i sign(n_iσ). Taking σ → ±∞ for q_J = ±1, the analog of (2.7) for the global charges then gives the corresponding charges in (1.6).
BPS and anti-BPS particles
Particle states can be labelled by their U(1)_spin charge, s, and it is convenient to convert the spinors to a rotational spin-diagonal basis (s = 1 for z = x^1 + ix^2 and ∂_z = (1/2)(∂_{x^1} + i∂_{x^2})). For the supercharges, we define (fixing a minor notational issue vs [1]) combinations such that Q_± and Q̄_± have spin s = ±1/2. In terms of these, the N = 2 algebra follows, and for a BPS particle, with m = Z > 0, the remaining two supercharges make a two-dimensional representation. Likewise, an anti-BPS particle has m = −Z > 0, and is annihilated by Q_+ and Q_−. Every BPS state has a CPT conjugate anti-BPS state, with opposite global charges and Z, but with the same U(1)_spin spin s. The R-charges and spins of these states are as given in [1].
BPS and anti-BPS vortices
The central term Z of the supersymmetry algebra (setting real masses m_i = 0) is proportional to ζq_J. For Z > 0, the vortex can be BPS, annihilated by Q_− and Q̄_+ (2.11). For Z < 0, the vortex is anti-BPS, annihilated by the conjugate pair Q̄_− and Q_+. The condition that these supercharges annihilate the background implies the BPS equations (3.3) and (3.4) for a static (all ∂_t → 0) vortex, with D given by (2.3). One must also impose Gauss' law (2.6). In our conventions, the chiral superfields, Q_i, of a Z > 0 BPS vortex are anti-holomorphic⁷ (resp. holomorphic for a Z < 0 anti-BPS vortex). We will here be particularly interested in the zero modes.
The vortex's Fermi zero modes are the static (∂_t → 0) solutions of the Fermion equations of motion (3.5), (3.6), which follow from (2.1) with m_i = 0, in the background of the static vortex's Bosonic fields; here ψ_{i↑,↓} and λ_{↑,↓} have spin ±1/2, and U(1)_R charge −1. As we discuss in section 4, the number of solutions of (3.5) and (3.6), and their quantum numbers, are as in (1.12): each matter field contributes |n_i| Fermi zero modes, with spin correlated to the sign of n_i.
The explicit vortex solution is not analytically known, nor is it needed: knowing its existence and number of zero modes suffices. (Footnote 7: This (unfortunately) is due to following [34]'s sign convention for A_µ; see the appendix.) The vortex with U(1)_J charge q_J has |q_J| complex Bosonic zero modes, and |q_J| spin +1/2 Fermionic zero modes. The q_J = 1 vortex has one complex zero mode z_1, the translational-invariance zero mode of the BPS vortex core location, and one complex spin-1/2 Fermionic zero mode [20,21], Ψ_1, a combination of the photino and the matter fermion that solves (3.5) and (3.6). The Bosonic field configuration is annihilated by Q_− and Q̄_+ (2.11), while the other two supercharges give the Fermi zero mode, Ψ_1 ∼ Q_+, and complex conjugate Ψ†_1 ∼ Q̄_−, i.e. the photino and matter Fermi field configuration of Ψ_1 follows from acting with Q_+ on F^vortex_{µν}(z, z̄) and Q^vortex_1(z, z̄).
Quantizing Ψ_1 yields a BPS doublet (2.12); adding the q_J = −1, anti-BPS, CPT conjugate states gives one copy of the spectrum (2.13). The U(1)_R and U(1)_spin quantum numbers there are found as in [1]. This minimal, single-flavor theory is dual to a theory of a free chiral superfield, X_± [35]. The FI parameter ζ maps to a real mass m_X in the dual. BPS vortices map to X-particle states.
Cases with multiple matter fields Q_i: the (anti-)BPS equations for the Bosonic fields
By (3.4), the vortex gauge field configuration is completely determined by that of any non-zero matter field Q_i, as in (3.7). The condition that the gauge field (3.7) be smooth, with winding number q_J (1.4), implies [36] that a charge n_i = 1 matter field has Q_i(z) with |q_J| zeros, at the vortex core locations, as in (3.8), with f_i ≡ f_i(z, z̄) non-vanishing. Turning on Bosonic zero modes can resolve the zeros in (3.8). Since the LHS of (3.11) is non-negative, equations (3.4) have a Q_j ≠ 0 solution only if the second term on the RHS of (3.11) has the correct sign. In particular, BPS vortices require the meson M^i_j = Q_iQ̄_j to vanish in a theory with vector-like matter. As discussed in [9], the fact that BPS vortices require M^i_j = 0 can have a simple dual perspective, e.g. for N_f = 1 SQED it is clear from the W = MX_+X_− dual that the X_± quanta are only BPS for M = 0. See [38,39] for other, dynamical arguments leading to the same conclusion. The general solution of (3.9) for a q_J = 1 BPS (or q_J = −1 anti-BPS) vortex is then (3.13), where the denominators are determined by the z → z_1 vanishing degree of Q_1 in (3.8) (which is the only singularity of the ratio), and the numerators by (anti-)holomorphy and the condition that the ratio approaches the vacuum value, i.e. zero, for |z| → ∞, as in (3.14). The |n_j| coefficients c_{j,p} (or c̄_{j,p}) in (3.14) are the Bosonic zero modes for a matter field Q_j with Q^vac_j = 0 in a BPS (or anti-BPS) q_J = 1 vortex. Matter field(s) Q_i with Q^vac_i ≠ 0 also yield |n_i| Bosonic zero modes, one of which is the translational zero mode z_1.
Normalizable vs non-normalizable zero modes
The Bosonic or Fermionic zero modes of the static vortex are replaced with dynamical variables on the vortex worldline theory, if the associated induced kinetic term is normalizable. Non-normalizable zero modes, on the other hand, are frozen parameters. For example, the translational zero mode of a |q_J| = 1 vortex is quantized as z_1 → z_1(t), which is normalizable, with finite induced kinetic term ∫d²z L → (1/2)m_BPS|ż_1|². Considering the c_{j,p} or c̄_{j,p} term in (3.13) for large |z| gives |Q_j| ∼ |c_{j,p}||z|^{p−1−|n_j|}, so the induced coefficient of a |ċ_{j,p}|² term involves ∼ ∫d²z |z|^{2(p−1−|n_j|)}, i.e. c_{j,p} and c̄_{j,p} are normalizable for 1 ≤ p < |n_j| (requiring |n_j| > 1) and log-IR-divergent, non-normalizable, for p = |n_j|.
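The normalizability criterion above can be checked symbolically: the large-|z| part of the induced kinetic term behaves like ∫d²z |z|^{2(p−1−|n_j|)}, which converges for p < |n_j| and diverges logarithmically at p = |n_j|. The snippet below is a sketch using sympy; the infrared cutoff at |z| = 1 and the symbol names are our own choices.

```python
import sympy as sp

r, R = sp.symbols("r R", positive=True)

def large_z_kinetic(p, n_abs):
    """Large-|z| part of int d^2z |z|^(2(p-1-n_abs)) = 2*pi * int_1^R r^(2(p-1-n_abs)+1) dr."""
    return sp.integrate(2 * sp.pi * r ** (2 * (p - 1 - n_abs) + 1), (r, 1, R))

n_abs = 3
for p in range(1, n_abs + 1):
    print(p, sp.limit(large_z_kinetic(p, n_abs), R, sp.oo))
# p = 1, 2: finite (normalizable); p = 3: infinite, from the log(R) divergence
```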
The non-normalizable ρ_j ≡ c_{j,p=|n_j|} or ρ̄_j ≡ c̄_{j,p=|n_j|} zero modes in (3.13) generalize the non-normalizable zero modes of "semi-local vortices" [27][28][29][30]. As found there, turning on ρ_i ≠ 0 dramatically changes the character of the vortex solution, removing the zero in (3.8) at the vortex core, and changing the flux F_12 in (3.3) from having the usual ∼ e^{−cm_γ|z|} exponential falloff for large |z| (with m_γ the Higgsed photon mass) into a diffuse, power-law falloff. In our general n_i case, each matter field with sign(n_i) = sign(ζ) and Q^vac_i = 0 yields one such non-normalizable ρ_i bosonic zero mode. If |n_j| > 1, there are also |n_j| − 1 additional normalizable, and hence dynamical, zero modes c_{j,p<|n_j|} or c̄_{j,p<|n_j|}.
The bosonic non-normalizable zero modes, ρ_i, are interpreted, as in [1], as superselection parameters already of the q_J = 0 vacuum, even before adding the vortex. The Fermi zero modes are all quantized as in (1.7), including all normalizable and also non-normalizable Fermi zero modes.
Fermi zero modes of BPS vortices for somewhat general cases.
We will consider |q_J| = 1 BPS and anti-BPS vortices, taking ζ/n_1 > 0, in the vacuum (4.1), with σ = 0 and a non-zero expectation value for only Q_1. For the rest of this section, we assume that n_1 = 1, though we allow for general charges n_j for the other Q_{j>1} matter fields in (4.1). We will discuss the n_1 ≠ 1 case in sect. 6.
Each Q_i matter field with n_i > 0 has n_i Bosonic zero modes, while Q_i with n_i < 0 have none. The Q_1 Bosonic zero mode is the normalizable, translational zero mode, z_1.
For the matter fields Q_{j≠1}, with n_j > 0, the Bosonic zero modes are the c_{j,p} or c̄_{j,p} in (3.14). Since, for q_J = 1, Q_1 has a degree-one zero at z_1, this gives (similar to (3.13)), for n_j > 0 (q_J = 1), the expansion (4.7), with the n_j coefficients, u_{j,p=1,...,n_j}, Fermionic zero modes of spin p − 1/2. Likewise, (4.8) has the |n_j| coefficients, d_{j,p}, Fermionic zero modes of spin −(p − 1/2). As in the bosonic case, for either (4.7) or (4.8), the p = |n_j| Fermi zero mode is non-normalizable. The spins of u_{j,p} and d_{j,p} follow from constructing the angular momentum generator, much as in [40], assigning spin +1 to z, and spin +1/2 to ψ_{j,↑} in (4.7). By (1.5), Q_1^{n_j}/(z − z_0)^{n_j} is θ-independent for large |z|, so we assign spin +1/2 to each term u_{j,p}z^{p−1} in (4.7), and, likewise, spin −1/2 to all d_{j,p}z^{p−1} in (4.8). So u_{j,p} has spin p − 1/2 and d_{j,p} has spin −(p − 1/2). In sum, the q_J = 1 vortex has the Ψ^{(q_J=1)}_{n_j,p} in (1.12): |n_j| Fermion zero modes, of spins sign(n_j)(p − 1/2), for p = 1 . . . |n_j|. The q_J = −1 vortex is similar. The other quantum numbers likewise follow from those of ψ_{j,↑,↓}, and are as given in (1.12). We assign U(1)_gauge charges in (1.12), even though U(1)_gauge is spontaneously broken (screened) by (4.1).
The zero modes of a matter field Q i are in |n i | different N = (2, 0) chiral multiplets (i.e. a complex Boson and a complex Fermion) if sign(n i ) = sign(ζ), or |n i | N = (2, 0) chiral Fermi multiplets (i.e. a complex Fermion and an auxiliary field) if sign(n i ) = − sign(ζ).
All the Fermi zero modes are quantized, as in (1.7) and (1.8), giving 2^{Σ_i|n_i|} states. The zero mode should be regarded as Q_+, i.e. neutral under U(1)_gauge and the non-R-symmetry global symmetries; quantizing this zero mode yields BPS doublets (2.13). Including all zero modes yields 2^{Σ_i|n_i|−1} BPS doublets.
Consider a theory with vector-like, charge-conjugation symmetric matter content, with pairs Q_i and Q̄_i, of charges ±n_i. Then k_c = (1/2)Σ_i n_i|n_i| = 0 in (2.5), and the k = 0 theory with ζ = 0 has asymptotic Coulomb branches X_±. The theory respects P and T if k = 0, and it respects C if ζ = 0. For every Fermi zero mode Ψ_{n_j,p}, there is a Fermi zero mode Ψ_{−n_j,p} of opposite spin, so the ∏_A Ψ_A appearing in (1.10) has spin s = 0, and the top and bottom states |Ω^±_{q_J=1}⟩ have s = −k/2, so spin 0 for k = 0. This fits with (1.11): these states map to the quanta of X_±, |Ω^+_{q_J=±1}⟩ ∼ X_±|0⟩ and |Ω^−_{q_J=±1}⟩ ∼ X†_∓|0⟩, with X_± a gauge invariant operator for k = 0.
Examples: theories with N_± matter fields of charge n_i = ±1
We denote the matter as Q_{i=1...N_+}, with n_i = +1, and Q̄_{i=1...N_−}, with n_i = −1. The charge is +1 for all Q_i and Q̄_i. We take N_+ > 0, and ζ > 0. The cases (N_+, N_−) = (N, 0) were discussed in [1]. The minimal matter case, N = 1, was reviewed in sect. 3.1. The vortices of the N > 1 case are the N = 2 version of the "semi-local" vortices of [27][28][29], allowing also for Chern-Simons terms. Our present discussion in this section also includes cases with both N_+, N_− ≠ 0; we did not find much discussion of vortices in such theories in the literature, aside from some brief comments in [23,24].
For general (N_+, N_−), a q_J = 1 BPS vortex has N_+ complex bosonic zero modes. One is the normalizable, translational zero mode, z_1, corresponding to the vortex core location.
The remaining N_+ − 1 bosonic zero modes are the non-normalizable ρ_i parameters in (3.13). The N_− negatively charged matter fields Q̄_i must identically vanish (3.12) in a BPS configuration, so they do not yield bosonic zero modes.
As discussed in [1] and section 3.4, we quantize all N_+ + N_− Fermi zero modes, including the non-normalizable ones. This leads to a tower of 2^{N_+ + N_−} vortex states, with the top and bottom states |Ω^±_{q_J=1}⟩, with quantum numbers as in (5.4). The normalizable zero mode, Ψ_1, is identified with Q_−, so the states form 2^{N_+ + N_− − 1} BPS doublets (2.12). These come from quantizing the non-normalizable Ψ_{j>1} and Ψ̄_i Fermi zero modes. The omitted U(1)_gauge charge is screened by Q^vac_1. If k = ∓k_c ≡ ∓(1/2)∆N, the X_± Coulomb branch exists, and |Ω^±⟩ has spin 0, and is an SU(N_+ − 1) × SU(N_−) singlet, consistent with (1.11) and interpreting X_± as a condensate. For N_f = 1 SQED (N_+ = N_− = 1), the Ψ_1 zero mode has spin +1/2 and is normalizable, and the Ψ̄_1 ≡ Ψ_2 zero mode has spin −1/2 and is not normalizable. Quantizing Ψ_1 and Ψ_2 as in (1.7) gives two BPS doublets, (5.8) and (5.9). The two BPS doublets in (5.8) and (5.9) reside in different Hilbert spaces, since they are connected via the non-normalizable Ψ_2 Fermi zero mode from ψ_Q̄. For k = 0, both |Ω^±_{q_J=1}⟩ have spin 0, and quantum numbers consistent with (1.11): |Ω^+_{q_J=1}⟩ ∼ X_+|0⟩ and |Ω^−_{q_J=1}⟩ ∼ X†_−|0⟩. We interpret |Ω^±⟩ in different Hilbert spaces as corresponding to X_+X_− ∼ 0 in the chiral ring, and the disconnected X_± branches of the ζ = 0 theory.⁹
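For the (N_+, N_−) counting in this section, the zero_mode_spins sketch defined earlier (our own helper, not from the paper) reproduces the numbers quoted here: N_+ + N_− zero modes, 2^{N_+ + N_− − 1} BPS doublets, and total zero-mode spin k_c = ∆N/2. The particular N_± values are illustrative.

```python
# Reusing zero_mode_spins() from the sketch above; N_plus and N_minus are illustrative.
N_plus, N_minus = 3, 2
spins = zero_mode_spins([+1] * N_plus + [-1] * N_minus)
print(len(spins))             # 5  = N_+ + N_- Fermi zero modes
print(2 ** (len(spins) - 1))  # 16 = 2^(N_+ + N_- - 1) BPS doublets
print(sum(spins))             # 0.5 = Delta_N / 2 = k_c
```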
The W = MX_+X_− dual [9] must have the same structure: the map from the X_+|0⟩ to the X†_−|0⟩ BPS state must involve (in addition to the normalizable Q_+ zero mode) a ∼ 1/|z|, non-normalizable ψ_M = Qψ_Q̄ zero mode. Again, we propose that this reflects X_+X_− ∼ 0 in the chiral ring. This tentative interpretation should be further clarified, perhaps in future work.
6. Cases with Q^vac_i ≠ 0 for matter with n_i ≠ 1.
If a matter field Q_1, with n_1 > 1, has an expectation value (4.1) (negative n_1 can be obtained via charge conjugation of the present discussion), Q^vac_1 ≠ 0 breaks U(1)_gauge → Z_{n_1}, a discrete gauge symmetry, a.k.a. a Z_{n_1} orbifold. See [46], and references cited therein, for more about Z_{n_1} gauge theory. Before the Z_{n_1} orbifold projection, the Fermion zero modes are essentially the same as in section 4, with |n_i| Fermion zero modes Ψ_{i,p=1...|n_i|} for each matter field Q_i, and charges as in (1.12). This includes n_1 Fermi zero modes (one is the supercharge) coming from matter field Q_1 and the photino, from eqns. (3.5), (3.6). (Footnote 9: Parity is a symmetry for k = 0 and maps X_+ ↔ X_−. We can turn on a (P-odd) real mass m_Q for Q and Q̄, and then there is only one Coulomb branch, X_±, if m(X_±) = −m_Q ± ζ = 0; m_Q ≠ 0 also eliminates the non-normalizable ψ_Q̄ zero mode. There is then a BPS state matching either X_+|0⟩, or X†_−|0⟩, depending on sign(m_Qζ). Taking m_Q → 0 requires both doublets in (5.7).)
The Fermi zero modes are quantized as in (1.7), giving a tower of 2^{Σ_i|n_i|} states, and one then projects to Z_{n_1} gauge invariant states. The top and bottom states |Ω^±_{q_J=1}⟩ (1.8) survive the Z_{n_1} projection, with quantum numbers again matching with X_+ and X†_−. As a special case, recall from [1] that if the charges all have a common integer factor, n_i = nñ_i, with n and ñ_i integer, the theory is simply a Z_n orbifold of a rescaled theory with charges ñ_i. Note that q_J ∈ Z, while q̃_J ∈ nZ, and a has periodicity a ∼ a + 2π, while ã ∼ ã + 2π/n.
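The common-factor rescaling can be made concrete in a couple of lines: strip the greatest common divisor n from the charges to obtain the rescaled charges ñ_i = n_i/n of the covering theory, which is then orbifolded by Z_n. The helper below is our own illustration of that statement, not code from the paper.

```python
from math import gcd
from functools import reduce

def rescale(charges):
    """Return (n, [n_i / n]) where n is the common integer factor of the charges."""
    n = reduce(gcd, (abs(c) for c in charges))
    return n, [c // n for c in charges]

print(rescale([3]))       # (3, [1]): a charge-3 field is a Z_3 orbifold of a charge-1 field
print(rescale([2, -4]))   # (2, [1, -2])
```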
Consider e.g. the theory of a single matter field, Q_1, with charge n_1 > 1, which is equivalent to a Z_{n_1} orbifold of the rescaled theory with matter of charge ñ_1 = 1. Since the q_J = 1 vortex of the original theory maps (6.1) to a q̃_J = n_1 vortex of the rescaled theory, it has n_1 complex Bosonic zero modes (the locations z_1, . . . , z_{n_1} of the individual vortex cores in the rescaled theory), and n_1 Fermionic zero modes, Ψ_1, . . . , Ψ_{n_1}, prior to the Z_{n_1} orbifolding.
Quantizing the Ψ_{A=1...n_1} as in (1.7) gives a tower of 2^{n_1} states. The top and bottom states, |Ω^±_{q_J=1}⟩, have charges as given by (1.10) and (1.12), here with k_c = (1/2)n_1². These states are Z_{n_1} invariant, and their charges match those of X_+ and X†_− in (1.6). For k = ∓k_c, the operator X = X_± is U(1)_gauge neutral, with spin 0, and labels a half-Coulomb branch.
This theory is a Z_{n_1} orbifold of a free field theory [1], with X^{1/n_1} the free field.
We can also consider BPS vortices in vacua with Q^vac_i ≠ 0 for multiple fields, of different charges n_i, with all sign(n_i) = sign(ζ) (3.10), i.e. a weighted projective space, with weights n_i. The Fermi zero mode analysis for the general case is then complicated by the couplings among flavors in (3.6). In any case, the counting and charges of the Fermi zero modes cannot be affected by continuous moduli, so they must again be as in (1.12).
In conclusion, in all cases the BPS vortex states |Ω^±_{q_J=1}⟩ have quantum numbers compatible with (1.11). For k = ∓k_c, it is a spin-0 BPS state, which becomes massless for ζ → 0 and can condense to give a dual Higgs description of the X_± Coulomb branch.
Acknowledgments:
I would especially like to thank Nathan Seiberg for many illuminating discussions, key observations, and helpful suggestions. I would also like to thank Juan Maldacena, Ilarion Melnakov, Silviu Pufu, Sav Sethi, and David Tong, for useful discussions or correspondence.
I would like to thank the organizers and participants of the workshops String Geometry and Beyond at the Soltis Center, Costa Rica, and the KITP program New Methods in Nonperturbative Quantum Field Theory for the opportunities to discuss this work, and for many stimulating discussions. I would especially like to thank the KITP, Santa Barbara, for hospitality and support in the final stage of this work, in part funded by the National Science Foundation under Grant No. NSF PHY11-25915. This work was also supported by the US Department of Energy under UCSD's contract de-sc0009919, and the Dan Broida Chair.
Appendix A. Additional details, conventions, and notation
In components, the Lagrangian (2.1) is given in (A.1). We use the conventions of [34] (reduced from 4d to 3d along the x^{µ=2} direction, see [47]), though this introduces an unfortunate, non-standard sign convention for the gauge field. In a configuration where the fields asymptote to a zero of (A.2), the total energy of (A.1) (with m_i = 0) can be written (using (3.11) and (2.6)) in a form that bounds it by the central charge, with F_12 ≡ F^{W&B}_12 and D as in (2.3). The BPS (resp. anti-BPS) configurations saturate the inequality for the upper (resp. lower) sign choice and ζq_J > 0 (resp. ζq_J < 0). This is due to the non-standard sign convention, which changes the names of BPS vs anti-BPS with respect to much of the vortex literature. This could be fixed by introducing a minus sign in the definition (1.1) of q_J, but that introduces sign differences with other literature, e.g. the definitions of X_± in [9,1], so we will not do that here. | 7,697.2 | 2014-06-10T00:00:00.000 | [
"Physics"
] |
Development of a New Solver to Model the Fish-Hook Effect in a Centrifugal Classifier
Centrifugal air classifiers are often used for classification of particle gas flows in the mineral industry and various other sectors. In this paper, a new solver based on the multiphase particle-in-cell (MP-PIC) method, which takes into account the interaction between particles, is presented. This makes it possible to investigate the flow process in the classifier in more detail, especially the influence of the solid load on the flow profile and the fish-hook effect that sometimes occurs. Depending on the operating conditions, the fish-hook occurs in such apparatus and leads to a reduction in classification efficiency. Therefore, a better understanding and a representation of the fish-hook in numerical simulations is of great interest. The results of the new simulation method are compared with results of a previous simulation method, in which particle–particle interactions are neglected. Moreover, a validation of the numerical simulations is carried out by comparison with experimental data from a laboratory plant, based on characteristic values such as pressure loss and classification efficiency. The comparison with experimental data shows that both methods provide similarly good values for the classification efficiency d50; however, the fish-hook effect is only reproduced when particle-particle interaction is taken into account. The particle movement proves that the fish-hook effect is due to a strong concentration accumulation in the outer area of the classifier. These particle accumulations block the radial transport of fine particles into the classifier, which are then entrained by coarser particles into the coarse material.
Introduction
Centrifugal classifiers are used for classification of particle gas flows due to their good classification efficiency and wide range of applications, especially in the pharmaceutical, food, coal, and cement industries [1][2][3][4][5]. Evaluation parameters for the classification properties of a classifier are the classification efficiency d50, which indicates the particle size at which 50% each ends up in the fine and coarse product, and the classification selectivity κ, which results in d25/d75. The particles are classified by the rotating blades in the classifier, which generate a forced vortex, causing the particles to experience a centrifugal force acting against the direction of flow. Coarse particles are thus rejected at the outer edge of the classifier, while fine particles follow the air flow inwards and enter the fines [6]. In order to better understand the classification mechanism and to optimize the geometry with respect to energy efficiency, a number of experimental and numerical studies have been carried out in the past. Many numerical studies so far have had the goal of investigating and optimizing geometric influences such as the horizontal and vertical classifiers or the structure of the classifying wheel blades in more detail [7][8][9][10][11][12][13]. As a general practice, the resulting velocity and pressure profiles were determined without taking particle-particle interactions into account, and particle trajectories in the classifier were derived. In some cases, even the influence of the solid load on the flow was neglected. These simplifications were chosen due to the complexity of the classifier resulting in a high computational effort. In some cases, these simplifications are justified by the fact that only low solid loads are present and the influence of the particles is negligible [10,14]. The comparison of experimental and numerical separation efficiency confirms these assumptions. However, some studies declare that the solid load has an influence on the separation efficiency in a classifier [8]. Probably the influence of the solid load depends on the design of the classifier and the process conditions. Moreover, the fish-hook effect, which often occurs in the classifier, could not be reproduced in simulations. The fish-hook effect, which owes its name to the characteristic curve of the separation efficiency, is shown in Figure 1. Since more particles enter the coarse material as the particle size decreases, the curve rises sharply in this area, causing large portions of the fine material to enter the coarse material and significantly reducing the yield of a classifier. For this reason, it is of interest to better understand the processes that lead to the fish-hook effect and to represent them numerically. Various researchers have attempted this so far, not only for centrifugal classifiers but also for similar apparatus like cyclones. Nagaswararao et al. [15] summarize previous studies and draw the following conclusion: there is no uniform consensus in the literature on the occurrence of the fish-hook effect. Generally, two effects are held responsible for it. In the first theory, the fish-hook effect is based on the entrainment of fine particles in the boundary layer of coarser particles.
In the second theory, it is assumed that fine particles acquire velocities larger than the Stokes velocity when entrained by coarse particles [16]. In addition, the fish-hook effect occurs more frequently in measurements when the sizing analyses are carried out by laser diffractometry using its optical mode [15].
In centrifugal classifiers, however, only a few studies on the fish-hook effect are available; the flow profile in a centrifugal classifier is similar to, but not the same as, that in a cyclone. Eswairah et al. [17] attribute the effect to the rebound of fine particles at the classifier blades, whereas Guizani et al. [18] consider secondary recirculation flows and bubble-like vortex decay inside the classifier. Eswairah et al. [17] support their results with sieve curves in which the fish-hook effect is measured in a classifier. Furthermore, Barimani et al. [19] adopted a new approach to study the fish-hook effect. By focusing the investigations on a periodic section of the classifier, the relevant regions in front of and between two classifying wheel blades were resolved in more detail. Using the Discrete Phase Model (DPM), particle trajectories for particles of different sizes were then determined, and conclusions were drawn about the concentration and residence time of different particle sizes in the classifier. This showed that there is a strong accumulation of particles close to the cut size directly in front of the classifier wheel blades. According to Barimani et al., these accumulations mean that the solids concentration upstream of the classifier is many times higher than previously assumed and exceeds the feed concentration many times over. Furthermore, they derive that these increased solids concentrations intensify the interaction of especially very small particles with larger particles and thus inhibit the radial movement of the very fine particles into the interior of the classifier. However, a proof of this assumption has not yet been achieved, since particle-particle interactions have always been neglected in the previous simulations.
When simulating a multiphase solid-fluid flow in a classifier, the Euler-Lagrange approach is suitable. In this article, the Euler phase is modelled with the continuum Navier-Stokes equations, while the particles are modelled as Lagrangian elements with fixed properties such as diameter and density. A fully coupled (4-way) Euler-Lagrange approach includes the momentum transfer between the two phases as well as a consideration of particle-particle interactions. Since a detailed resolution of each individual collision in densely loaded air flows is very computationally intensive, the multiphase particle-in-cell (MP-PIC) method is used for the first time in this work. In the MP-PIC method, particle-particle interactions are modelled through averaged particle stresses that are derived from the Lagrangian phase and mapped onto the Eulerian grid, which means that particle collisions do not have to be resolved directly. The modelling of particle collisions using Eulerian mean values and the parcel concept, in which several particles with the same properties are treated as one parcel, make the MP-PIC method suitable for dense particulate flows without a significant loss of accuracy [20].
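As an aside for readers unfamiliar with the parcel concept, the following short sketch illustrates how the solids volume fraction is obtained by depositing parcel volumes onto Eulerian cells. It is a generic illustration, not the authors' OpenFOAM solver; the grid size, parcel count, and particles-per-parcel value are made-up numbers.

```python
import numpy as np

# Minimal sketch of the MP-PIC parcel concept: each parcel represents n_per_parcel
# identical particles, and the solids fraction alpha_P of a cell is the sum of the
# parcel volumes located in that cell divided by the cell volume.

n_cells = 50                       # 1D grid for illustration
cell_volume = 1.0e-6               # m^3, assumed uniform cells
x_edges = np.linspace(0.0, 0.05, n_cells + 1)

rng = np.random.default_rng(0)
n_parcels = 2000
x_parcel = rng.uniform(0.0, 0.05, n_parcels)        # parcel positions [m]
d_parcel = rng.uniform(20e-6, 120e-6, n_parcels)    # particle diameters [m]
n_per_parcel = 50                                   # particles per parcel (assumption)

particle_volume = np.pi / 6.0 * d_parcel**3
parcel_volume = n_per_parcel * particle_volume

# Deposit parcel volumes into cells (nearest-cell assignment for simplicity;
# real MP-PIC codes use higher-order particle-grid interpolation).
cell_index = np.clip(np.searchsorted(x_edges, x_parcel) - 1, 0, n_cells - 1)
alpha_P = np.bincount(cell_index, weights=parcel_volume, minlength=n_cells) / cell_volume
alpha_F = 1.0 - alpha_P            # continuous-phase volume fraction

print("maximum solids fraction:", alpha_P.max())
```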
Therefore, this new solver allows for the first time the consideration of particle-particle interactions in a 3D simulation of a classifier. The results are compared with results obtained without considering particle-particle interactions and validated with experimental data. Furthermore, the effects of the solid load on the flow profile are examined in more detail.
Apparatus Description
Figure 2 shows the laboratory plant provided for the validation of the numerical model. The particles are fed into the apparatus from above onto a deflector plate, which introduces them into the system between the static guide blades and the classifier. The air flow introduced via a tangential inlet conveys the particles through the classifier into the fine product, where they are then separated from the air with the help of a cyclone. The particles rejected at the classifier due to centrifugal forces are held in the periphery of the classifier between the static guide vanes and the classifier until they sediment downwards due to gravity and enter the coarse product. The material that reaches the coarse or fine product is weighed and sampled. A Mastersizer 2000 from Malvern Panalytical then measures the particle size distributions using laser diffraction. Material properties of the solid and the air as well as characteristic sizes of the classifying wheel are presented in Table 1. The pressure drop is determined between a point in front of the static blades and the outlet of the classifier.
Governing Equations
In the following, the general equations that serve as the basis for the numerical solver are described. The equations are based on the assumption that the flow is incompressible and isothermal. Since the solver is based on a Euler-Lagrange approach, the Navier-Stokes equations are solved for the fluid phase, in which the influence of the solid phase is taken into account. The volume fraction of the continuous phase α F results in

α F = 1 − α P ,

where α P is the volume fraction of the solid phase and is obtained from the individual particle volumes in a grid cell,

α P = (1/V cell ) Σ i v P,i .

The continuity conservation equation then becomes

∂(α F ρ F )/∂t + ∇·(α F ρ F u F ) = 0,

and the momentum equation is given by

∂(α F ρ F u F )/∂t + ∇·(α F ρ F u F u F ) = −∇p F + ∇·(α F τ F ) + α F ρ F g − F F ,

with the interphase momentum transfer F F given by

F F = ∫∫∫ f m P [ D P (u F − u P ) − ∇p F /ρ P ] dm P dρ P du P ,

where D P is the drag function, and with the mass of the solid particle m P , the density of the fluid ρ F , the density of the solid particle ρ P , the velocity of the fluid u F , the pressure of the fluid p F , the velocity of the solid particle u P , the fluid stress tensor τ F , the gravity vector g, and the particle volume v P . The particle distribution function f, which depends on the particle position x P , the particle velocity u P , the particle mass m P , and the time t, defines the evolution of the particle phase and is governed by a Liouville equation:

∂f/∂t + ∇ x ·(f u P ) + ∇ uP ·(f A P ) = 0, (5)

where A P is the particle acceleration, given by

A P = D P (u F − u P ) − ∇p F /ρ P + g − ∇τ P /(α P ρ P ).

The interparticle stress τ P includes the particle-particle interactions and must be taken into account in the MP-PIC method for the particle acceleration. The first three terms are the drag force, the force due to the pressure gradient within the fluid, and the gravitational force. Other smaller forces such as virtual mass, Basset, or lift forces are neglected. The continuity and momentum equations for the particulate phase result from the multiplication of Equation (5) by α P v P and α P v P u P and an integration over particle volume, density, and velocity. These terms are not presented here, as they are already described in detail in the literature [19,20].
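To make the acceleration expression concrete, the following sketch evaluates A P for a single particle. All numerical values, including the constant drag coefficient D P, are placeholders rather than quantities from the paper.

```python
import numpy as np

# Illustrative evaluation of the MP-PIC particle acceleration: drag, fluid
# pressure gradient, gravity, and the interparticle-stress gradient.

rho_P = 2650.0                            # particle density [kg/m^3] (assumed)
alpha_P = 0.05                            # local solids volume fraction (assumed)
D_P = 150.0                               # drag coefficient [1/s] (assumed constant here)
g = np.array([0.0, 0.0, -9.81])           # gravity [m/s^2]

u_F = np.array([2.0, 0.5, 0.0])           # fluid velocity at the particle [m/s]
u_P = np.array([1.2, 0.3, 0.0])           # particle velocity [m/s]
grad_p_F = np.array([-500.0, 0.0, 0.0])   # fluid pressure gradient [Pa/m]
grad_tau_P = np.array([30.0, 0.0, 0.0])   # interparticle stress gradient [Pa/m]

# A_P = D_P (u_F - u_P) - grad(p_F)/rho_P + g - grad(tau_P)/(alpha_P * rho_P)
A_P = (D_P * (u_F - u_P)
       - grad_p_F / rho_P
       + g
       - grad_tau_P / (alpha_P * rho_P))
print("particle acceleration [m/s^2]:", A_P)
```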
For the interparticle stress, the model by Harris and Crighton [21] is applied, which is described in Equation (8):

τ P = P S α P ^β / max[ (α CP − α P ), ε (1 − α P ) ], (8)

where P S is the solid pressure constant, β is an empirical constant, α CP is the volume fraction of the dispersed phase at close packing, and ε is a small number introduced to ensure numerical stability. The model by Harris and Crighton does not directly consider velocity differences between particles. At first, this seems to be a major disadvantage, as particles that are rejected at the classifier have significantly higher velocities after particle-wall collisions with the rotating components than entering particles. However, it must be emphasized that, firstly, if a particle cloud forms in front of the classifier wheel this error is mitigated, since the majority of particles move around the classifier wheel at similar velocities, and, secondly, the fish-hook effect is presumably due to the fact that small particles never get between two classifier wheel blades, since otherwise they would almost certainly enter the fines. This steric hindrance should be well reproduced by the Harris-Crighton model. Furthermore, the model is numerically very stable.
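A minimal sketch of Equation (8) is given below; the constants are typical literature values, not necessarily the settings listed in Table 2 of the paper.

```python
import numpy as np

# Harris-Crighton interparticle stress: the stress rises sharply as the solids
# fraction approaches close packing, which resists further compaction of parcels.

def harris_crighton_stress(alpha_P, P_S=10.0, beta=3.0, alpha_CP=0.6, eps=1.0e-7):
    """tau_P = P_S * alpha_P**beta / max(alpha_CP - alpha_P, eps*(1 - alpha_P))."""
    alpha_P = np.asarray(alpha_P, dtype=float)
    denom = np.maximum(alpha_CP - alpha_P, eps * (1.0 - alpha_P))
    return P_S * alpha_P**beta / denom

print(harris_crighton_stress([0.05, 0.3, 0.59]))  # stress grows steeply near close packing
```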
In the MP-PIC method, the influence of particle-particle interaction can be subdivided into sub-models. The most important ones are packing models [22], collision damping models [23], and collision isotropy models [24]. In this study, the explicit packing model and the stochastic collision isotropy model, which are already implemented in OpenFOAM, are applied. No collision damping model was considered because it leads to unrealistic particle movements. The drag force contained in the interphase momentum transfer term F F is taken into account with a combination of the models by Ergun [25] and Wen-Yu [26], both of which are known to be well suited for densely loaded particle flows. If the continuous phase fraction is less than 0.8, the Ergun model is applied. Particle-wall interactions are described using a simple impact model with restitution coefficients. For the fluid simulation, the Reynolds-averaged Navier-Stokes (RANS) equations are solved together with Menter's shear stress transport (SST) turbulence model [27]. The individual parameters of the models used are shown in Table 2; the values were adjusted in the simulation to fit the experimental data as well as possible. The effect of turbulence on the particles is taken into account by applying the stochastic dispersion model from OpenFOAM-6, in which the particle velocity is perturbed in a random direction with a Gaussian random number distribution. Since the classifier is a rotating part, the MP-PIC solver based on the software environment OpenFOAM-6 is extended with the multiple reference frame (MRF) model. In the MRF model, the numerical cells of the rotating part are supplemented with additional centrifugal and Coriolis forces. The rotating part is frozen in a fixed position, and an exchange surface between the different frames of reference is applied. This approach requires that the particle forces be calculated according to the zone: if a particle is inside the classifier in the rotating section, the relative velocity of the fluid is used to calculate the particle forces; if a particle is in the stationary section outside the classifier, absolute velocities are used. This is necessary because the rotating wall does not actually rotate in the simulation, so a particle only moves at the relative velocity to the rotating wall. This model works well and has already proven itself in other studies due to its short computing time and robustness [28]. However, it has never been combined with the MP-PIC method; for this purpose, the calculation of the interparticle stress term also had to be adapted. It is calculated with the absolute velocities of fluid and particles, but its effect is adjusted for the rotating zone. The solver is adapted in this respect.
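The drag blending can be sketched as follows, using the standard Gidaspow-type switch between the Ergun and Wen-Yu correlations at a continuous-phase fraction of 0.8. The coefficient forms shown are the common textbook ones and may differ in detail from the OpenFOAM implementation used in the paper; the material values are placeholders, not the data of Table 1.

```python
import numpy as np

def drag_coefficient(alpha_F, rho_F, mu_F, d_p, u_rel):
    """Interphase momentum exchange coefficient K [kg/(m^3 s)]:
    Ergun for alpha_F < 0.8, Wen-Yu otherwise."""
    alpha_P = 1.0 - alpha_F
    if alpha_F < 0.8:  # dense regime: Ergun
        return (150.0 * alpha_P**2 * mu_F / (alpha_F * d_p**2)
                + 1.75 * alpha_P * rho_F * u_rel / d_p)
    # dilute regime: Wen-Yu
    Re = alpha_F * rho_F * d_p * u_rel / mu_F
    Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44
    return 0.75 * Cd * alpha_P * alpha_F * rho_F * u_rel / d_p * alpha_F**(-2.65)

# Air-like fluid, 50 um particles, 2 m/s slip velocity (illustrative values)
print(drag_coefficient(alpha_F=0.75, rho_F=1.2, mu_F=1.8e-5, d_p=50e-6, u_rel=2.0))
print(drag_coefficient(alpha_F=0.95, rho_F=1.2, mu_F=1.8e-5, d_p=50e-6, u_rel=2.0))
```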
Simulation Conditions
Two properties influence the creation of the grid when applying the MP-PIC method: on the one hand, the flow requires a good resolution of the flow area; on the other hand, the grid cells must be larger than the particles. Furthermore, the accuracy increases if a sufficiently large number of particles is present in a grid cell, since numerical instability grows with a strongly fluctuating volume fraction. For this reason, only a periodic section of the classifier is examined in the geometry under investigation. This significantly reduces the number of grid cells, and the number of particles per volume can be significantly increased. In reality, there is no complete rotational symmetry due to the tangential flow inlet of the air, but investigations of the 360° geometry have shown that the high solid loads between the static guide vanes and the classifier cause a uniform distribution of the airflow over the radius. Figure 3 shows the geometry with the boundary conditions. At the air inlet, the volume flow corresponds to the volume flow in the experiments. At the outlet, a constant absolute pressure of 0 Pa is set. The walls have a standard no-slip boundary. Since both the solids feed and the coarse material discharge are airtight, these are also treated as walls. In addition, the coarse material discharge is significantly shortened, as it can be assumed that particles that have passed the lower edge of the classifier will enter the coarse material. An uneven distribution of the solids flow as well as particle velocities due to the baffle plate are neglected, so that the particle feed takes place uniformly and without initial velocity. The averaged value of y+ is 10 for the blades of the classifier and 5-7 for all other walls. A full resolution of the boundary layer would require a y+ value of 1 and therefore a significant number of additional cells. That is why a y+ wall function is used to model the near-wall turbulence; this is a good compromise between accuracy and computational cost.
To investigate the grid sensitivity, three different grids are examined. Table 3 compares the pressure drop from the simulation with the experimental data for the three grids; the comparison is made at a classifier speed of 900 rpm, and the deviation from the experimentally measured pressure loss is calculated for all grids. The grid used is the medium grid, which consists of a hexahedral mesh with 304,222 cells and is shown on the right-hand side of Figure 3. This grid is a good compromise between accuracy and calculation time. The left-hand side of Figure 4 plots the contours of the static pressure on the periodic surface and an axial section through the apparatus; the axial section is made through the red line in Figure 3. The figure shows that the pressure inside the classifier drops dramatically. This is due to the fact that the tangential velocity inside the classifier first increases considerably with decreasing radius and then drops drastically. The flow profile is comparable to a cyclone, is described in detail by Toneva et al. [29], and is also a confirmation of the correctness of the simulations. In addition, the pressure loss between experiment and simulation is compared on the right-hand side of Figure 4. In the simulations, the pressure loss is underestimated by about 10% compared to the experimental data, which is a satisfactory result. The deviations are probably due to simplifications in the geometry.
General Flow Profile in Classifier
At the beginning, the general flow profile in the classifier is discussed; from this, the movement of the particles can be better understood. The particle separation takes place between the classifier blades. For this purpose, Figure 5 shows an axial section (see the red line in Figure 3) through the classifier and the radial and tangential velocity profiles in front of and between two classifier blades. The velocities shown are at a speed of 900 rpm and clockwise rotation. The outer edge of the classifier rotates at a tangential velocity of 15 m/s. The tangential velocities in front of the classifier are significantly lower than between the classifier blades, which means that the leading blade acts as a tear-off edge and a dead zone forms between the classifier blades. This dead zone constricts the radial air transport into the interior, so that there is no uniform radial velocity profile between the classifier blades. A negative radial velocity means that the air flows inwards towards the center, while a positive velocity transports air outwards. The formation of the dead zone depends on the rotational speed of the classifier and becomes larger as the rotational speed increases, because the velocity difference between the inside of the classifier blades and the outside grows with increasing classifier speed. This is shown in more detail on the left-hand side of Figure 6, where the radial velocity for three different classifier speeds is plotted. The right-hand side of Figure 6 illustrates the influence of the solid load on the radial velocity between the classifier blades: the solid load equalizes the radial velocities and reduces the formation of the dead zone with positive radial velocities.
Particle Movement in Classifier
The particles entering the classifier have significantly lower tangential velocities than the rotating classifier wheel due to the low tangential air velocities in front of the classifier. Therefore, particles entering between the classifier blades collide with the trailing blade. The left-hand side of Figure 7 sketches the distribution of the particles and their size in front of and between the classifier blades in 2D; smaller particles are marked blue, larger particles are shown in red. The figure illustrates that, above all, small particles enter between the classifier blades and then collide with the trailing blade.
The right-hand side of Figure 7 shows the tangential velocity of the particles. It demonstrates that the particles colliding with the trailing blade are accelerated by the classifier and have significantly higher velocities after impact. The particles accumulate primarily on the trailing blade. Fine particles are then transported further into the fine material, while coarse particles are pushed outwards by centrifugal force. The particles rejected at the classifier accumulate directly in front of the classifier wheel and move on a circular path around the classifier. They have significantly higher tangential velocities than particles on the circular path further outside, so they collide with the particle cloud as they exit, which slows them down again considerably. The particle cloud moving on a circular path in front of the classifier also prevents the transport of "new" particles into the classifier. In addition, the angle of entry depends on the speed of the classifier.
The entry angle of the particles between the classifier blades depends on several factors. Firstly, it depends on the particle size. Small particles are accelerated faster by the high tangential air velocity between the classifier blades than coarse particles and therefore reach further inwards between two classifier blades before colliding with the trailing blade. Figure 8 schematically illustrates the different particle paths of a small and a coarse particle. In reality, the greatest wear is detected at these points in the apparatus, which supports the plausibility of the calculated trajectories. Figure 8. Schematic of particle path between classifier blades for a small particle in blue and a coarser particle in red. Figure 7 shows that small particles that would actually enter the fines due to their size do not pass through the cloud between the classifier wheel blades.
At this point, it must be mentioned that the particle impact model used is subject to a fundamental assumption: the influence of different particle velocities is only taken into account to a limited extent. It can be assumed that particle-particle collisions, which would accelerate particles to very high velocities, are thereby weakened. In reality, it is quite possible for large particles to receive a high velocity component inside the classifier and enter the fine material. The impact model used here therefore tends to support ideal separation.
In the following, more attention is focused on the axial particle transport. Figure 9 shows the particle distribution in the apparatus at two different time steps in simulation.
The particles are all shown in the same size, small particles are colored blue, larger particles are colored red. If one compares the two figures, it is noticeable that the particles are primarily located between the static guide vanes and the classifier. In this area, denser particle clouds repeatedly form, which then sediment downwards into the coarse material as a particle swarm. The swarm sedimentation is a non-stationary process, which is illustrated by the two time steps. Very small particles, in the order of <60 µm, reach the fine material over the entire classifier height. At the same time, however, fine particles accumulate in denser particle clouds and are carried down with them and can also enter the coarse material. This can also be seen in the separation efficiency curves shown in the left-hand side of Figure 10.
The separation efficiency curves from the experiments and the simulations for three speeds are compared. As expected, the separation efficiency curve is shifted to the left as the classifier speed increases, since fewer and fewer particles enter the fines as the centrifugal force increases. The simulated curves reflect this effect well, and the calculated d 50 values also deviate only very slightly from the experimental data. In the experimental tests, the fish-hook effect only occurs above a speed of 600 rpm; it is not observed at lower speeds. This can be attributed to the fact that at high classifier speeds more particles are rejected at the classifier, and the particle concentration in front of the classifier increases as a result. In the simulations, the fish-hook effect only occurs at 900 rpm. Nevertheless, the simulations allow the fish-hook effect to be proven and can depict it in a weakened form. Furthermore, the simulated separation efficiency curves are sharper, which is probably due to the particle-particle interaction model used, as mentioned above. In addition, periodicity is assumed in the simulation; due to only one air inlet and a possibly inhomogeneous particle feed, it is quite realistic that poorer classification selectivity occurs in the experiments. The right-hand side of Figure 10 compares the new method presented here, which takes particle interactions into account, with a solver that does not. Both solvers provide similar separation degree curves, but the fish-hook effect is only reproduced by the new solver. This is also confirmed by Table 4, in which the classification efficiency d 50 and the classification selectivity κ are qualitatively compared for both numerical methods with the experimental results. Table 4. Comparison of classification efficiency d 50 and classification selectivity κ in experiment, new method with MP-PIC, and solver without particle-particle interaction.
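To illustrate how the characteristic values compared in Table 4 are read off a separation efficiency (Tromp) curve, the following sketch computes d 50 and κ = d 25 /d 75 by interpolation. The curve data are synthetic illustration values, not the measured curves of the paper.

```python
import numpy as np

d = np.array([10, 20, 40, 60, 80, 100, 140, 200], dtype=float)  # particle size [um]
T = np.array([0.12, 0.08, 0.15, 0.35, 0.60, 0.80, 0.95, 1.0])   # fraction reporting to coarse
# (the rise at the fine end mimics a fish-hook)

def size_at(target, d, T):
    """Interpolate the particle size at which the Tromp value reaches `target`,
    using only the monotonically rising part of the curve after the fish-hook minimum."""
    i0 = int(np.argmin(T))
    return float(np.interp(target, T[i0:], d[i0:]))

d50 = size_at(0.50, d, T)
kappa = size_at(0.25, d, T) / size_at(0.75, d, T)   # classification selectivity d25/d75
print(f"d50 = {d50:.1f} um, kappa = {kappa:.2f}")
```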
Discussion
In this paper, a new solver for simulating the particle-gas flow in a centrifugal classifier is presented and validated against experimental data from a laboratory plant. Based on the MP-PIC method, the solver allows for the first time the estimation of the influence of particle-particle interactions on the classification process in a 3D case. Therefore, the flow profile, particle movement, and separation process in the classifier can be described in more detail.

It is shown that particles rejected at the classifier accumulate in front of the classifier and sterically block the radial transport of other particles. As a result, fine particles do not reach the inside of the classifier and are dragged into the coarse material by coarse particles. To reduce the fish-hook effect, the formation of the particle cloud in front of the classifier would therefore have to be prevented, for example by installing flow baffles in front of the classifier. Furthermore, it is shown that particle cloud formation in front of the classifier is discontinuous and that high load fluctuations occur in front of the classifier. In addition, the fish-hook effect is reproduced in simulations for the first time, and its development process is thus resolved. This is demonstrated by a comparison with simulations without particle-particle interaction, in which the fish-hook effect is not reproduced.

However, the fish-hook effect only appears in the new simulation method in a weakened form and at higher classifier speeds. This is possibly due to limitations of the solver, in particular the approach of simulating parcels instead of individual particles, made necessary by the immense computing time, and the fact that the impacts are not fully resolved. Nevertheless, the calculated results represent characteristic parameters of the classifier, such as the pressure loss and the classification efficiency d 50 , well.
In further steps, the validation should be continued, and the solver should also be compared with experimental results for other classifier types and process conditions. In addition, other models should be tested instead of the Harris and Crighton model. | 9,200.8 | 2021-06-22T00:00:00.000 | [
"Engineering"
] |
Transmission Performance Improvement by Non-Linear Distortion Noise Power Control in Multi-Band Systems
This paper proposes a non-linear distortion noise power control method with bandwidth control for multiple frequency band transmission, which simultaneously uses plural frequency bands in wireless communication systems. The control method employs clipping and filtering, and generates out-of-band noise reduction signals using a part of the used signal bands to reduce harmful interference to a primary existing system which shares frequency bands with the multi-band system. The improvement of Signal-to-Noise Ratios (SNRs) in the bands used by the primary system is evaluated by computer simulations. The simulation results show that the proposed method, at a band use rate of 50 %, can improve them by 15 dB at a receiving power ratio of -30 dB relative to the spectrum-sharing multi-band system.
Introduction
Future wireless communication systems require broader frequency bands to realize larger capacity because of the rapid spread of smartphones and the development of the IoT (Internet of Things). Spectrum sharing is one of the promising technologies to yield broader bands with limited frequency resources, in which plural systems share the same frequency bands and the secondary system uses unused portions of the bands allocated to the primary existing system [1][2][3][4]. Because the unused bands are narrow and separated, the secondary system needs to simultaneously use multiple bands to transmit broadband information signals.

On the other hand, OFDM transmission, which is very effective for broadband transmission in multi-path fading channels, is widely used in wireless communication systems, while it causes excessive peak power which generates in-band and out-of-band noise because of the non-linear distortion of transmission power amplifiers. In addition, multi-band transmission with OFDM causes serious distortion noise by inter-modulation distortion. The distortion noise of a multi-band system becomes harmful interference to the existing system in spectrum sharing. Therefore, effective power amplification for multi-band transmission has been studied [5][6][7].

This paper proposes a non-linear distortion noise power control method with bandwidth control for multi-band OFDM transmission systems to reduce interference to the spectrum-shared existing system in wireless communications. The control method generates out-of-band noise reduction signals using a part of the used signal bands to reduce interference. The method uses clipping and filtering (CAF) for this noise control, which is a very effective peak power reduction method for OFDM signals [8][9][10]. The conventional CAF usually uses out-of-band filtering, which reduces the out-of-band noise power caused by clipping. In addition to this filtering, the proposed method uses in-band filtering which removes in-band distortion noise components from the clipped OFDM signals. This paper clarifies the effect of distortion noise power control with the proposed method by computer simulations.
Multi-band Transmission Systems
System Model
Figure 1 shows the system model in this paper. A secondary multi-band system shares the same frequency bands with a primary existing one. The service area of the secondary system overlaps with that of the primary one. Therefore, the transmission signals of the secondary multi-band system become interference to the base and mobile stations of the primary one.

Figure 2 shows the frequency usage of the spectrum sharing systems in this paper. The frequency bands are allocated to the primary system, and the unused bands among them are used by the secondary one. The secondary system simultaneously uses multiple frequency bands as shown in Figure 2. This use can realize highly efficient frequency utilization and broadband transmission with limited frequency resources. However, when multiple bands are used simultaneously, serious in-band and out-of-band distortion noise occurs due to the non-linearity of the transmitter. The distortion noise increases when OFDM transmission, which is employed in current wireless communication systems, is used, because the peak power of OFDM signals is larger than that of single-carrier transmission. This distortion noise interferes with
the transmission signals on the bands used by the primary system when the service areas of the two systems overlap. To realize spectrum sharing, the secondary multi-band system needs to reduce the interference so as to satisfy the transmission quality requirements of the primary one.
Transmitter with bandwidth control
Figure 3 shows the transmitter for multi-band OFDM transmission with bandwidth control. Transmission signals are mapped to modulation symbols and allocated to multiple frequency bands. The bandwidth of each band is controlled to reduce in-band and out-of-band distortion noise, which is harmful interference to the primary system. Figure 4 shows the method of bandwidth control for the secondary system. The method controls the bandwidth used by the secondary system so that each used band becomes narrower than the usable one. This control generates unused bands within the usable bands, as shown in Figure 4.
Clipping and filtering
The CAF part is shown in Figure 3(b). To reduce the peak power to the set clipping level, the CAF is repeated. The peak-power-reduced OFDM signals obtained by iterative CAF are then modulated by a quadrature modulator and amplified by a non-linear power amplifier.
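The following sketch illustrates iterative CAF with in-band and out-of-band filtering as described in this paper. The FFT size, band layout, clipping level, and iteration count are illustrative assumptions rather than the simulation parameters of Table 1.

```python
import numpy as np

n_fft = 1024
used = np.zeros(n_fft, dtype=bool)
used[100:250] = True          # used signal band (kept free of clipping noise)
unused = np.zeros(n_fft, dtype=bool)
unused[250:300] = True        # unused band: clipping noise left here as the
                              # out-of-band noise reduction signal

rng = np.random.default_rng(1)
X = np.zeros(n_fft, dtype=complex)
X[used] = rng.choice([-1.0, 1.0], used.sum()) + 1j * rng.choice([-1.0, 1.0], used.sum())
x = np.fft.ifft(X)            # OFDM time-domain signal

clip_level = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))   # assumed clipping threshold

for _ in range(5):                                    # iterative clipping and filtering
    mag = np.maximum(np.abs(x), 1e-12)
    peaks = np.where(mag > clip_level, x * (1.0 - clip_level / mag), 0.0)  # clipped-off part
    P = np.fft.fft(peaks)
    P[used] = 0.0              # in-band filtering: keep the used band free of clipping noise
    P[~used & ~unused] = 0.0   # out-of-band filtering: remove noise outside the usable band
    x = x - np.fft.ifft(P)     # subtract the noise-reduction signal from the OFDM signal

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR after CAF: {papr_db:.2f} dB")
```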
Simulation condition
Computer simulations were conducted to clarify the effect of the proposed method in multi-band transmission with OFDM. Spectrum properties and SNR improvement with the proposed method were evaluated at a receiver of the primary existing system.

Table 1 shows the simulation conditions in this paper. The modulation scheme was 64QAM, and the total FFT point number was 16384. The number of transmission frequency bands, N b , was set to 1 and 2 for the primary existing and secondary multi-band systems, respectively. The sub-carrier number for a single band was 600, and it was 1200 for the multi-band system.

The input back-off values of the non-linear amplifiers (NLAs) were set to 8 dB for both systems. In this paper, a typical model of an NLA was used [12], and the non-linear factor of the model was set to 3. The clipping level was 3 dB, and the iteration number was 5 in the iterative CAF for bandwidth control in the multi-band system.
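The specific NLA model of [12] is not reproduced here. As a stand-in assumption, the following sketch uses the widely used Rapp model for solid-state amplifiers, taking the non-linear (smoothness) factor as p = 3 and the 8 dB input back-off from the conditions above; the saturation level and the test signal are placeholders.

```python
import numpy as np

def rapp_amplifier(v_in, v_sat=1.0, p=3.0):
    """AM/AM characteristic of the Rapp model (no AM/PM conversion)."""
    return v_in / (1.0 + (np.abs(v_in) / v_sat) ** (2 * p)) ** (1.0 / (2 * p))

ibo_db = 8.0
rng = np.random.default_rng(2)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)  # OFDM-like signal
x *= 10 ** (-ibo_db / 20)          # back the mean input power off from saturation
y = rapp_amplifier(x)

# Rough measure of the compression distortion introduced by the amplifier
evm = np.sqrt(np.mean(np.abs(y - x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"distortion (EVM-like) at {ibo_db} dB IBO: {evm:.3%}")
```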
The ratio of used bandwidth to usable one was set to be 0 to 100 % for the multi-band system by bandwidth control, and it was 100 % for the primary existing system.
The received power ratios of the existing system were set to 0 dB, -10 dB, -20 dB, and -30 dB relative to the spectrum-sharing multi-band system.
Spectrum properties by spectrum sharing
Figure 8(a) shows that bandwidth control using CAF is effective in reducing the out-of-band noise caused by third inter-modulation distortion. The figure also shows that the occupied bandwidth of the out-of-band noise on the third inter-modulation band becomes narrower, and the bandwidth becomes 60 %, which is equal to the set band use rate.

In addition, Figure 8(b) shows that the bandwidth control is more effective for Signal-to-Noise Ratio (SNR) improvement at lower power ratios of the existing system, because the interference power from the multi-band system is decreased by the out-of-band noise power reduction.

Figure 9(a) shows that bandwidth control by CAF can significantly reduce the out-of-band noise on the adjacent band of the multi-band system, and a noise power reduction of 12 dB is obtained. Furthermore, Figure 9(b) confirms that the bandwidth control can improve the SNR performance of the existing system at lower power ratios.
SNR improvement by bandwidth control
Figure 10 shows the SNR improvement of the primary existing system when the secondary multi-band system employs bandwidth control by iterative CAF. In this SNR improvement evaluation, the total noise power N of the SNRs is calculated as N = D p + D s , where D p is the distortion noise power of the existing system caused by its own transmitter non-linearity, and D s is the distortion noise power received from the multi-band system. The existing system uses the band of third inter-modulation distortion caused by the multi-band system. The band utilization methods of Types 1 to 4 were used in this figure. The band use rate of the multi-band system is set to 0 to 100 %; the value of 100 % is without bandwidth control, and 0 % means that the multi-band system uses no frequency band. SNR improvement is represented by the difference from the SNR at 100 %. Figure 10(a) shows the SNR improvement evaluation results at a power ratio P r of 0 dB. This figure shows that the effect of bandwidth control is very small at all band use rates. This is because the received signal power is sufficiently large and the own distortion noise power D p is larger than D s from the multi-band system, which results in a small influence of the distortion noise power reduction by bandwidth control. There is no difference among allocation types in the usable bands of the multi-band system. Figure 10(d) shows the evaluation results at a P r of -30 dB. This figure shows that the improvement by bandwidth control is the same as that at -20 dB. The maximum improvement value is 5 dB at band use rates of 10 to 60 %. At less than -20 dB, the distortion noise from the multi-band system, D s , is dominant in the used signal band of the existing system. Therefore, the improvement values of the SNRs remain equal in spite of the lowering power ratio.
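A small sketch of this SNR-improvement metric, under the reading above that the total noise power is N = D p + D s, is given below; the numerical values are placeholders, not simulation results from the paper.

```python
import numpy as np

def snr_db(signal_power, d_p, d_s):
    """SNR with total noise power N = D_p + D_s."""
    return 10 * np.log10(signal_power / (d_p + d_s))

signal_power = 1.0
d_p = 1e-3          # own distortion noise of the existing system (placeholder)
d_s_full = 3e-2     # D_s without bandwidth control, band use rate 100 % (placeholder)
d_s_ctrl = 1e-3     # D_s with bandwidth control, e.g. 50 % band use, Type 1 (placeholder)

# Improvement is the SNR gain relative to the 100 % band-use case
improvement = snr_db(signal_power, d_p, d_s_ctrl) - snr_db(signal_power, d_p, d_s_full)
print(f"SNR improvement: {improvement:.1f} dB")
```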
Figure 11 shows the SNR improvement evaluation results with bandwidth control when the existing system uses the adjacent band of the multi-band system, as shown in Figure 9. Allocation types 1 to 4 were used in the evaluation, and the band use rate of the multi-band system is set to 0 to 100 %.

Figure 11(a) shows the evaluation results at a power ratio P r of 0 dB. This figure shows that bandwidth control has almost no effect on the SNR improvement of the existing system. The reason is the same as in the case of the third inter-modulation distortion band: the received signal power is sufficiently large, and D p is larger than D s . The results also show that there is no difference in SNR improvement among the allocation types. Figure 11(b) shows the evaluation results at a P r of -10 dB. This figure shows that bandwidth control is effective for SNR improvement of the existing system at -10 dB: it can improve the SNR by 4 dB at band use rates of less than 50 % when using Type 1. These improvement values are almost the same as that at 0 %, at which the multi-band system uses no frequency band. Because the unused band for the out-of-band noise reduction signals is close to the adjacent band used by the existing system in Type 1, the leakage distortion power from the multi-band system can be reduced effectively. On the other hand, the used signal band of Type 2 is close to the adjacent band; therefore, the distortion noise power reduction effect of Type 2 is smaller than that of Type 1.

Figure 11(c) shows the results at a P r of -20 dB. This figure shows that the improvement by bandwidth control is larger than at high power ratios; the improvement value reaches 10 dB at 50 % at a maximum. The figure also shows that the difference among allocation types is very significant in the case of the adjacent band. Type 1 in particular obtains a larger SNR improvement compared with the other allocation types. In this power ratio region, D s is dominant in the SNRs, and the larger D s reduction effect of the Type 1 bandwidth control appears markedly in the SNR improvement. Figure 11(d) shows the results at a P r of -30 dB. This figure shows that the improvement by bandwidth control is larger than that at -20 dB. The improvement value with Type 1 becomes 15 dB at 50 %, and it is still 6.5 dB even at 80 %. In addition, Type 3 can improve the SNR of the existing system by 6 dB at less than 60 %.

The above results confirm that the SNR improvement in the adjacent band is larger than that in the third inter-modulation distortion band. This is because the band allocated to the out-of-band noise reduction signals is close to the distortion noise to be reduced.

Although Types 3 and 4 are more effective for distortion noise reduction in the third inter-modulation distortion band, as shown by the results of Figure 10, the difference in the reduction effect is not large. On the other hand, the advantage of Type 1 is obvious in the adjacent band from the results of Figure 11. These results show that Type 1, which allocates unused bands outside the usable bands, is superior to the other allocation types for distortion noise reduction.
Conclusion
This paper has proposed a non-linear distortion noise power control method with bandwidth control for multi-band OFDM transmission with spectrum sharing, which uses iterative clipping and filtering. The method employs in-band and out-of-band filtering to effectively control the non-linear distortion noise power and improve SNR performance. The evaluation results show that the proposed method can reduce the out-of-band distortion noise power of a secondary multi-band system and improve the SNR performance of a primary existing system. The results confirm that spectrum sharing is feasible even for multi-band systems.

The signals with bandwidth control are transformed into OFDM signals by IFFT in Figure 3(a). The peak power of the OFDM signals is larger than the saturation power of the used power amplifier, which results in much distortion noise outside the transmission signal bands. Then, before power amplification, clipping and filtering (CAF) are performed to reduce the peak power of the OFDM signals. At the clipping and peak power detector, peak signal components exceeding the set clipping level in the OFDM signals are detected in the time domain as shown in Figure 5. These detected peak signals are equivalent to the components removed by usual clipping. The detected components are transformed into in-band and out-of-band clipping noise in the frequency domain by FFT.
Figure 6
Figure 6 shows the filtering for clipped OFDM signals in this paper. The filtering part in Figure 3(b) removes the clipping noise, and two methods can be used for this filtering. One is out-of-band filtering, which removes clipping noise generated outside the usable bands by inter-modulation distortion, as shown in Figure 6(a). The other is in-band filtering, which removes clipping noise added on the used signal bands [11]. The combination of in-band and out-of-band filtering leaves clipping noise only in the unused bands, as shown in Figure 6(b); the remaining clipping noise becomes the out-of-band noise reduction signal. The filtered signals are transformed back into peak power components of the OFDM signals in the time domain by IFFT in Figure 3(b), and these transformed signals are subtracted from the original OFDM signals. Although the peak power of the OFDM signals is reduced by this CAF, the peak power reduction is imperfect because the in-band and out-of-band filtering removes a part of the clipping noise.

Figure 3:
Figure 3: Multi-band OFDM transmitter by clipping and filtering.
Figure 7
Figure 7 illustrates the used band allocation methods within one usable band for the multi-band system. There are 4 types, which differ in the allocation of the unused bands in which the out-of-band noise reduction signals are generated by iterative CAF. Type 1 sets the unused bands outside the usable band, and the number of its used bands is 1. Type 2 sets one unused band inside the usable band, and its used bands are divided into 2. In Type 3, the usable band is first divided into 2, and the same allocation as Type 1 is performed; this results in 2 used bands and 3 unused bands. Type 4 also divides the usable band into 2 bands, and the same allocation as Type 2 is employed; Type 4 has 3 used bands and 2 unused bands. In the following, the distortion noise reduction effects with these allocation types are evaluated.
Figure 8
Figure 8 shows the received spectrum properties of the secondary multi-band system with and without CAF at an existing-system receiver. The band utilization method of Type 1 was used in this figure. The frequency on the horizontal axis is normalized by the sampling rate 1/T of the OFDM symbols. The band use rate by bandwidth control with CAF was set to 60 %. The received spectrum of the existing system is also shown in Figure 8, and the power ratios P r are set to 0 dB and -30 dB in Figure 8(a) and (b), respectively. The existing system uses the band of third inter-modulation distortion caused by the multi-band system.
Figure 9
Figure 9 also shows received spectrum properties with and without CAF. The band utilization method of Type 1 was used, and the band use rate by bandwidth control was set to 60 %. The power ratios P r are set to 0 dB and -30 dB in Figure 9(a) and (b), respectively. In these figures, the existing system uses an adjacent band of the multi-band system.

Figure 7:
Figure 7: Band utilization methods in a multi-band system.
Figure 10
Figure 10(b) shows the SNR improvement at a P r of -10 dB. This figure shows that bandwidth control is effective for SNR improvement: it can improve the SNR by 2 to 3 dB at band use rates of less than 60 %. The difference among allocation types is not large. Because the received signal power and the own distortion power D p are still large, the difference in the reduction of the distortion power D s among allocation types does not appear in the evaluation results.
Figure 10
Figure 10(c) shows the evaluation results at a P r of -20 dB. This figure shows that the improvement by bandwidth control is larger than at high power ratios; the improvement values reach 5 dB at a maximum. The figure also shows that the selection of allocation types is effective: Types 3 and 4 can reduce the noise by about 2 dB more than Types 1 and 2. Because Types 3 and 4 divide the bands of the out-of-band noise reduction signals, they can reduce distortion noise over more frequencies. In addition, because the used signal bands are divided, the noise caused by third inter-modulation distortion spreads and its power per sub-carrier becomes lower. This results in distortion noise reduction within the band used by the existing system, and the SNR is improved. | 4,253.2 | 2019-03-06T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Comparative Study on Mechanical Properties of SiC/Gr & Al2O3/Gr Reinforced AL6061 Hybrid Metal Matrix Composites

Silicon carbide/graphite and alumina/graphite reinforced AL6061 hybrid metal matrix composites were fabricated by the stir casting (liquid metallurgy) route. Four samples A, B, C, and D with varying proportions of matrix and reinforcements, keeping the graphite proportion constant at 5% for all samples, were prepared. The mechanical properties of all samples are compared with those of the matrix material (AL6061). A scanning electron microscope is used to examine the microstructural characteristics of the composite samples. The mechanical test results exhibit a 3.5% increase in hardness number for sample C compared with the base matrix. However, the yield and ultimate tensile strength are reduced for all reinforcement combinations. Microstructural characterisation clearly depicts the presence of cracks, agglomeration of reinforcements, and casting defects on the surface of the prepared composites, which leads to poor yield and ultimate tensile strength.
Introduction
In recent times, the engineering world is in requisites of newer materials to meet their demands. Such newer materials are also called as composite materials, their usage in military, automobile, aviation, defence sectors, keeps on increasing owing to their less weight, high thermal resistance, high strength to weight ratio, wear resistance, high hardness, and stiffness when compared to conventional engineering materials. In this way it is witnessed that hybrid metal matrix composites have overcome the limitations of other composite materials from recent researches. A hybrid metal matrix composites consist of a minimum of two distinct phases for reinforcement with the matrix phase. Aluminium based hybrid metal matrix composites find way in recent times to replace older materials. Researchers fabricated different grades of aluminium (AL6061, AL7075, AA6351, AL6082, AL2024) based hybrid MMCs with ceramic reinforcements like Alumina (Al 2 O 3 ), Silicon Carbide (SiC), Titanium Boride (TiB 2 ), Titanium Oxide (TiO 2 ), Boron Carbide (B4C), etc. These reinforcements enhance the mechanical properties of composites to meet industrial demands. B.Jayendra,D.Sumanth,G.Dinesh, Dr. M.Venkateswara Rao had observed that reinforcement of B4C and Graphite in AL7075 enhanced the hardness, impact, and tensile strength of the composites considerably [1]. M.Satheesh, M.Pugazhvadivu found 8% reinforcement of [2]. V.Jaya Prasad, K.Narasimha Rao, N.Kishore Babu R reinforced ceramics (TiB 2 /SiC) and observed the addition of ceramics in aluminium increases the mechanical properties of the composites [3]. Abhishek Sharma,Vyas mani sharma,Jinu paul has found reinforcement of graphene and carbon nanotubes (CNT) in AL6061-SiC matrix increases the nano hardness of composites by 27% and microhardness values by 36% than alone AL6061 [4]. V. Anirudh, M.Vigneshwaran,E.Vijay,R.Pramod, GB Veeresh Kumar studied the TiB2 and graphitereinforced AL6061 alloy and concluded increasing the proportion of reinforcements increases the hardness and UTS of composites than alone AL6061 [5]. V.Mohanavel, K.Rajan, P.V.Senthil, S.Arul in another study informed after the dispersion of Al 2 O 3 and Graphite in AA6351 alloy increased the mechanical properties of composites than pure AA6351 [6]. B.Ramgopal Reddy, C.Srinivas has studied the reinforcement of SiC and fly ash in AL6082 matrix as a base and found a considerable enhancement in UTS, Hardness, and wear resistance in the composites [7]. Cheng-jin Hu, Hong-ge YAN, Ji-hva CHEN.Bin SU reinforced Graphite and SiC in AL2024 matrix and concluded tensile strength and elongation of composites were reduced with reinforcements [8,9]. From this literature survey it was observed that aluminium based hybrid metal matrix composites are prepared by various processes such as Squeeze casting, Vacuum hot pressing, Friction stir processing, Stir casting, and Powder metallurgy. Among the above methods stir casting is mostly preferred owing to some advantages over other methods, one of the important phenomena where stir casting stands prior to other methods is the preparation of a wide range of shapes with larger sizes is possible and this process is economically suited. There is a lack of research concentration in AL6061 based hybrid metal matrix composites. 
This work therefore aims to fabricate hybrid metal matrix composites of AL6061 reinforced with SiC/Al2O3 in varying proportions and a constant proportion (5%) of graphite using the stir casting process, and to compare their mechanical properties with those of the base matrix (AL6061).
Material Selection
AL6061 was chosen as the matrix material because of its castability and its wide range of applications, such as aircraft wings and fuselages. Tables 1 and 2 show the chemical composition and mechanical properties of AL6061. Stir casting was selected for fabricating the composites because it gives a uniform distribution of the reinforcements in the matrix phase. AL6061 (CP) was taken in the form of ingots and melted in a graphite crucible at 750°C; SiC was then mixed in at a stirring speed of 350 rpm for 15 minutes, and graphite was added to the molten mixture to prepare samples A and B. The same procedure, with SiC replaced by Al2O3, was used to prepare samples C and D. SiC and Al2O3 were used as powders with 20 µm particle size. A mechanical stirrer was used in the stir casting setup. The stir casting setup and the work flow chart are shown in figures 1 and 2. The molten mixture was poured into a die to obtain circular specimens of 225 mm length and 20 mm diameter.
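As a side note for reproducing such batches, the short sketch below computes matrix and reinforcement charge masses for one melt from a target total mass and weight fractions. The 1.5 kg melt mass and the ceramic percentages used here are illustrative placeholders, not the proportions reported for samples A-D.

```python
# Sketch: charge calculation for one stir-cast hybrid composite melt.
# Total melt mass and ceramic weight fractions below are hypothetical
# placeholders, not the actual proportions used for samples A-D.

def charge_masses(total_mass_g, ceramic_wt_pct, graphite_wt_pct=5.0):
    """Return (matrix, ceramic, graphite) masses in grams for one melt."""
    ceramic = total_mass_g * ceramic_wt_pct / 100.0
    graphite = total_mass_g * graphite_wt_pct / 100.0
    matrix = total_mass_g - ceramic - graphite
    return matrix, ceramic, graphite

for label, pct in [("A", 5.0), ("B", 10.0)]:   # hypothetical SiC fractions
    al, sic, gr = charge_masses(1500.0, pct)
    print(f"Sample {label}: AL6061 {al:.0f} g, SiC {sic:.0f} g, graphite {gr:.0f} g")
```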
Hardness test
A Rockwell hardness testing machine (B scale) was used to measure the hardness of the fabricated samples. Hardness test specimens were prepared as per ASTM standards, and the test results are shown in chart 1 below. The test specimens were pre-treated: their surfaces were degreased and polished to obtain an even surface. A load of 450 kg was applied for 10 seconds. Sample C exhibits a higher Rockwell hardness number (RHN) than the base matrix.
Tensile test
Tensile test specimens were prepared as per the ASTM SA370 standard, following the design shown in figure 3. Tests were carried out on the prepared specimens in a universal testing machine, and the ultimate tensile strength (UTS) and yield strength values were obtained for all samples; the values are given in charts 2 and 3 below. The test results reveal that the yield strength and the ultimate tensile strength of all samples are lower than those of the AL6061 base matrix.
Micro structural characterization
A scanning electron microscope was used to observe microstructural changes on the stir-cast specimen surfaces at high magnification. Microstructural images of the different samples are given in figures 4, 5, 6 and 7 below. To obtain good-quality images, pre-treatments such as mirror polishing and degreasing were performed on the surface of each sample. The images show the reinforcement particles distributed in the AL6061 matrix phase, but with a high level of agglomeration of reinforcements and cracks on the composite surfaces. The cracks and casting defects on the surface of the composites explain why the yield and ultimate tensile strength do not improve compared with the as-cast AL6061 base matrix.
Conclusion
AL6061-based hybrid metal matrix composite samples reinforced with Al2O3/graphite and SiC/graphite in varying proportions, with a fixed proportion of graphite (5%), were fabricated using the stir casting process. From the experimental results the following conclusions can be drawn. The hardness value increases with increasing proportion of reinforcements; sample C exhibits a 3.5% increase in RHN over the base matrix, the highest among the samples. Microstructural images of samples A, B, C and D clearly show the distribution of reinforcement particulates in the matrix phase, with clusters of reinforcements in some places. Casting defects and surface cracks are also present in the samples, which drastically reduce the mechanical properties of the composites; casting defects are the major limitation found in this study. This work will be extended by reducing the clustering of reinforcement particles in the stir-cast composites through further heat treatment. Secondly, the process parameters could be optimized using optimization tools. | 1,800.4 | 2020-10-14T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Comparison between Two Adaptive Optics Methods for Imaging of Individual Retinal Pigmented Epithelial Cells
The Retinal Pigment Epithelium (RPE) plays a prominent role in diseases such as age-related macular degeneration, but imaging individual RPE cells is challenging due to their high absorption and low autofluorescence emission. The RPE lies beneath the highly reflective photoreceptor layer (PR) and contains absorptive pigments, preventing direct backscattered light detection when the PR layer is intact. Here, we used near-infrared autofluorescence adaptive optics scanning laser ophthalmoscopy (NIRAF AOSLO) and transscleral flood imaging (TFI) in the same healthy eyes to cross-validate these approaches. Both methods revealed a consistent RPE mosaic pattern and appeared to reflect a distribution of fluorophores consistent with findings from histological studies. Interestingly, even in apparently healthy RPE, we observed dynamic changes over months, suggesting ongoing cellular activity or alterations in fluorophore distribution. These findings emphasize the value of NIRAF AOSLO and TFI in understanding RPE morphology and dynamics.
Introduction
Retinal diseases vary widely, but most of them cause visual symptoms, often leading to blindness [1], one of the biggest fears of today's world population [2]. Some retinal disorders, such as age-related macular degeneration (AMD) or inherited retinal diseases (IRDs), including retinitis pigmentosa (RP), remain poorly understood and without efficient treatment. In the last decade, novel stem cell therapies have been developed to address these pathologies, and a few clinical trials on humans are currently underway [3]. However, one of the most significant challenges in the current development of these therapies is to reach the capacity to monitor the structural and functional changes in the retina driven by these treatments at a cellular level. In particular, it is known that RPE cells play an important role in the progression of these pathologies, though the mechanisms involved are not yet well understood. It is, therefore, crucial to study alterations in the RPE to assist the development of effective treatment methods for AMD and IRDs.
Although there are a few existing clinical modalities to detect and image the RPE, they only allow an assessment of the state of the layer at the tissue level, and subtle alterations cannot be appreciated using existing clinical tools.Optical coherence tomography (OCT) generates cross-sections of the retina and allows clinicians to observe the RPE layer and follow significant changes over time [4].The most commonly used modality to evaluate the RPE state in the clinic is short and near-infrared wavelength autofluorescence scanning laser ophthalmoscope (SLO), which, through excitation of endogenous fluorophores [5] allows the user to observe damage to the RPE [6].To observe fine cellular changes, various teams applied Adaptive Optics (AO) technology to these modalities, first leading to cellular resolution of autofluorescence images of the RPE in the SLO camera [7][8][9][10][11][12][13] and later on revealing individual cells in OCT cross-sections [14][15][16].Autofluorescence imaging with the AOSLO requires longer exposure times (from 30 s to 90 s) to provide high-resolution images of the RPE.Autofluorescence imaging in histology samples has enabled the study of the different types of fluorophores that are excited in these cells [17].This has led to the hypothesis that, on top of lipofuscin, melanin pigment was also excited using infrared light [8,11,17].In vivo autofluorescence AOSLO could similarly be used to facilitate the work for understanding the spectral fluorescence of these endogenous fluorophores in the living eye.In particular, near-infrared autofluorescence (NIRAF) AOSLO could potentially be used to help characterize melanin distribution in this layer, which is believed to play an important role in diseases such as AMD.Recently, another AO technique has been developed to image RPE cells through the development of transscleral imaging.The first system implemented by LaForest et al. 
[18][19][20] consisted of sequentially illuminating the retina through the sclera at each side of the eye pupil and then subtracting each generated image.This system, known as transscleral optical phase imaging (TOPI), revealed the RPE cell mosaic reportedly through phase contrast.In this study, we present another system using transscleral illumination, a commercial prototype named transscleral flood illumination (TFI) camera (Imagine Eyes, Orsay, France), which is currently installed at the Quinze-Vingts Hospital in Paris.Although the TFI also illuminates the retina through the sclera, it is based on a different image formation as both transscleral beams are simultaneously shone on opposite sides of the pupil to obtain an optical sum of both signals.Unlike TOPI, which uses the difference between two oblique illuminations to reduce absorption and increase phase signal, the simultaneous illumination of the TFI suggests that its image contrast is mostly derived from absorption.Both transscleral systems exploit the fact that the retina is illuminated through the sclera to avoid the strong backscattering of the photoreceptors, which masks RPE signal when detecting the light coming from the pupil in standard retinal imaging [18][19][20].In this work, we aim to compare images of the same healthy RPE cells obtained with this new transscleral modality and with near-infrared autofluorescence (NIRAF) images from AOSLO.In particular, we intend to validate the observation of RPE cells in the TFI modality to better understand the origin of contrast in both modalities and discover potential biomarkers of healthy RPE.
Adaptive Optics Ophthalmoscopes
Participants were imaged with two types of ophthalmoscopes capable of revealing the RPE cell mosaic: the Transscleral Flood Illumination camera (rtx1 TFI, Imagine Eyes, France) and two Near-Infrared Autofluorescence (NIRAF) AOSLOs. They are described below in detail.
Transscleral Flood Illumination (TFI) ophthalmoscope
The TFI is an AO retinal camera with a transscleral flood illumination system (rtx1 TFI, Imagine Eyes, France). The ophthalmoscope uses two LED arrays at an 810 nm wavelength, which shine light through the sclera from both sides of the pupil (Figure 1), generating images that show the boundaries of RPE cells as bright pixels and the centers of each cell as darker pixels. The acquisition time is 6 s per video.
NIRAF Adaptive Optics Scanning Laser Ophthalmoscopes (AOSLO)
Subjects 1 and 4 were imaged with the Paris near-infrared autofluorescence (NIRAF) AOSLO situated at the Quinze-Vingts Hospital, which has been previously described [11]. Subjects 2 and 3 were imaged with the NIRAF AOSLO situated at the Pittsburgh Vision Institute, detailed in [21]. Both systems exploited near-infrared excitation of RPE fluorophores with light at similar wavelengths of 757 nm and 720 nm, respectively, which was used to generate images of the RPE cell mosaic in all subjects through detection of the autofluorescence. Image sequences from both systems were registered and corrected for distortions using a custom algorithm described in [22] and then averaged. The characteristics of both NIRAF systems can be found in Table 1.
Cohort Description and Image Acquisition
Four healthy volunteers (1 female, 3 males) over 18 years old were recruited. Routine fundus imaging was performed on the volunteers on multiple occasions to verify that the retina was healthy and that there was no sign of retinal pathology. The study spanned 7 years; therefore, subjects have aged between the first and last image acquisitions. In Table 2, we detail the acquisitions on each system and the corresponding subjects' ages. In summary, NIRAF images on the AOSLO at the Quinze-Vingts Hospital were acquired in subjects 1 and 4 in 2017, and NIRAF images on the AOSLO at the Pittsburgh Vision Institute were taken in subjects 2 and 3 in 2022. Finally, all subjects were imaged on the TFI system in 2023, and additional images were acquired on subject 1 from 2021 to 2023. Dilating drops were administered to subjects 15 min prior to imaging to enlarge their pupils, via one drop each of phenylephrine hydrochloride (2.5%) and tropicamide (1%). We generated image montages from 10 degrees in the temporal retina (10°T) to the fovea of subjects 1-3 in both the NIRAF and TFI systems. Then, images were acquired on all subjects in a region of interest at 10°T. An image of the photoreceptor layer is simultaneously acquired in all systems, using brightfield detection in the TFI and confocal detection in the NIRAF systems.
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee in Paris (IDRCB number: 2019-A00942-55, CPP Sud Est III: 2019-021-B) and the Institutional Review Board of the University of Pittsburgh (CR20040340-010) for this study in humans. Written informed consent was obtained after the risks were explained to the participants both verbally and in writing. Before the start of imaging, the power of the laser was measured and recorded, ensuring the laser power was below the maximum permissible exposure (MPE) outlined by the ANSI standard guidelines.
Image Processing and Analysis
Background subtraction of TFI images was performed by subtracting a Gaussian-filtered version of the image. The i2k software (Imagine Eyes, Orsay, France, https://www.imagine-eyes.com/products/i2kretina, accessed on 4 October 2023) was used to assemble the montages of average images from various eccentricities taken with the TFI system, as well as to align images of the same region taken at different times. NIRAF images were montaged with a custom Matlab (The MathWorks, Natick, MA, USA) algorithm [23] inspired by [24]. TFI and NIRAF images of RPE cells display slightly different contrast, which makes their superposition difficult. We manually aligned the average images of photoreceptors from the TFI with the NIRAF confocal images and used them as a reference to manually superimpose the RPE images of both modalities. This alignment was done using Photoshop software (version 24.2.1, Adobe, San Jose, CA, USA). The power spectrum densities of the images at 10°T from both modalities were computed using custom-made Matlab software (R2022b). We then extracted cell spacing and density metrics using the estimation method described in [25] for all subjects.
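The processing described above was done with Matlab and the i2k software; purely as an illustration, the Python sketch below mimics the two generic steps — background subtraction with a Gaussian-filtered copy of the image and a radially averaged power spectral density — with an arbitrary filter width that is not taken from this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtract_background(img, sigma_px=30):
    """Remove slow background modulation by subtracting a blurred copy.
    sigma_px is an illustrative value, not the one used in this work."""
    img = img.astype(float)
    return img - gaussian_filter(img, sigma=sigma_px)

def radial_psd(img):
    """Radially averaged power spectral density of a (square) image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    psd2d = np.abs(f) ** 2
    y, x = np.indices(psd2d.shape)
    cy, cx = psd2d.shape[0] // 2, psd2d.shape[1] // 2
    r = np.hypot(y - cy, x - cx).astype(int)
    return np.bincount(r.ravel(), weights=psd2d.ravel()) / np.bincount(r.ravel())

# The modal RPE cell spacing corresponds to the non-zero-frequency peak of radial_psd(img).
```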
Images of TFI and NIRAF of Same Regions
Reconstructed montages of NIRAF and TFI images revealed the characteristic appearance of the RPE cell mosaic in both modalities, as shown for subject #1 in Figure 2. These first TFI images of the RPE layer show dark cell centers surrounded by brighter cellular borders.
In Figure 2, zooms of the same regions across 10° from the fovea to the temporal side show a closer look at the cell mosaic generated by each modality. Images from both modalities present strong similarities, suggesting we are observing the same cells. They both share a similar appearance of the RPE cells, with a hyposignal center and a hypersignal surround. Similarly, in both cases, the RPE mosaic is wider and clearer at further eccentricities, while close to the fovea the cell size seems to decrease and the cell mosaic signal is not as clearly resolved.
In NIRAF imaging, we can sometimes observe photoreceptor signals in the autofluorescence image [11].This leakage effect is most apparent in the NIRAF zoom on the right of Figure 2 corresponding to the fovea, although it can also be observed to a lesser extent on the other NIRAF zooms.Although such leakage is not as apparent in the TFI images, it remains difficult to distinguish a clear RPE mosaic, and we notice what seems to be small cells in an irregular arrangement closer to the fovea.On the other hand, the modalities display significant differences.TFI images display a higher signal contrast compared to NIRAF.Also, there are no vessel shadows in TFI images, contrary to those obtained with NIRAF.However, TFI images present a modulation of the background signal with low frequencies, which does not exist in NIRAF because the autofluorescence signal is only emitted from the RPE cells and thus provides inherent optical sectioning only to the RPE layer.Additionally, the confocal pinhole provides further filtering of any residual out-of-focus light reaching the detector.We were able to image the same retinal region at 10°from the fovea in the temporal retina (10°T) on the same subjects with both TFI and NIRAF modalities, allowing us to generate images of the same cells with these two different imaging systems.Figure 3 shows for each subject the average image in NIRAF and TFI and the superposition of the radial average of power spectrum densities (PSDs) computed on the NIRAF and TFI images.All PSDs show a peak corresponding to the modal spacing of the RPE cells.NIRAF and TFI peaks are perfectly aligned for most subjects, providing strong evidence that we are observing the same cell mosaic with both techniques.Although the modal TFI spacing of S#3 displays a small shift with respect to the NIRAF modal spacing, they appear relatively close.Cell density and spacing were estimated from the modal spacing extracted from these PSD graphs using the methods described in [25] and [26], respectively.The mean values for cell spacing and density computed from data of all subjects are very similar for both modalities and fall inside the uncertainty interval (see Table 3 and Bland Altman plots Figure 4).We were also able to identify the same cells in both TFI and NIRAF images (see yellow arrowheads in Supplementary Video S1), further strengthening the hypothesis that we are observing the same cellular structure.We noticed that the clarity of the RPE mosaic varied according to the subjects.Figure 3 shows that subjects #1 and #2 generated a higher contrast image with distinct RPE cells.Subject #2 seems to display a sharper autofluorescence signal compared to the transscleral one, even though cells can be identified in both modalities.Subject #3 RPE cells are smaller than the other subjects in the same regions, rendering the distinction of cellular structure more difficult.Similar to subject #2, the autofluorescence signal looks clearer than the TFI signal that is modulated by the background low frequencies.
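For readers who want a quick feel for how a modal spacing peak translates into a density figure, the snippet below uses the common assumption of a regular triangular (hexagonal) mosaic, density = 2/(√3·s²); this is only a geometric approximation, not necessarily the exact estimator of refs. [25,26], and the 13 µm spacing is a made-up example value.

```python
import math

def density_from_spacing(spacing_um):
    """Cells per mm^2 for a regular triangular (hexagonal) mosaic with
    center-to-center spacing s: density = 2 / (sqrt(3) * s^2).
    A geometric approximation, not the estimator of refs. [25,26]."""
    s_mm = spacing_um / 1000.0
    return 2.0 / (math.sqrt(3) * s_mm ** 2)

print(f"{density_from_spacing(13.0):.0f} cells/mm^2")  # illustrative 13 µm spacing
```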
RPE Mosaic Variations with Time in Healthy Retina
The RPE mosaic at 10°T of subject #1 was imaged over three years at intervals ranging from minutes to months. We observed that the RPE mosaic seems stable when imaged minutes apart, suggestive of the same amount of pigmentation in the same RPE cells (see Figure 5). However, the cell shape and the intensity level of the cell center varied over months. In Figure 5, we show that although some cells remain the same, others show a very dark center in the T0 acquisition and a grayer cell center in the acquisitions 6 months and a year later. In addition to these changes inside the cell, some cells appear elongated in some dimensions at different times. For instance, in Figure 5, the green arrowheads identify a cell whose dark center is larger at T0 than at T0 + 6 m.
Discussion
We imaged the RPE cell mosaic for the first time using two different imaging modalities and identified many of the same cells in each modality. Although the images are generated from two different types of signal, we were able to observe with both systems the same cell morphology, the same modal spacing peak in the power spectrum densities of the images, and similar estimated cell densities (Figures 3 and 4). In addition, cell spacing and density values were also in accordance with RPE densities at these eccentricities [27]. We were thus able to validate that the cellular structure identified as RPE in autofluorescence images is the same structure observed in TFI images. Two interesting aspects of the TFI system are that it provides a roughly 7-times larger image field area (4° × 4°) than typical AOSLO images (1.5° × 1.5°) and that its acquisition time (6 s) is 5 times shorter than the minimum acquisition used in NIRAF AOSLO (30 s). In other words, comparing the time needed to capture the same retinal area, TFI is about 35 times faster, which makes this system well adapted to a clinical environment.
Origin of RPE Cell Contrast
Imaging the RPE layer using two modalities whose contrasts result from different light interactions with the tissue provides us additional information on these retinal cells, such as the level of pigmentation or fluorophore distribution, and may help us understand and determine more about the origin of the contrast in NIRAF, but also in TFI.
In the TFI system, we illuminate the retina through the sclera in a symmetrical manner, with both light beams simultaneously shining through opposite sides of the pupil (see Figure 1). This symmetrical illumination of the same region implies an optical addition of the absorption terms of the detected signal and the cancellation of the phase terms. However, the retina has complex, highly scattering properties, and although the light beams enter at opposite sides of the pupil, we cannot positively describe how the region is being illuminated. For these reasons, although we hypothesize that absorption by the cell is responsible for most of the contrast, we cannot completely rule out a phase contribution, even if it is probably small.
We believe that, through absorption, the TFI modality removes the highly backscattering photoreceptor signal, allowing us to produce a contrasted RPE mosaic. Darker spots in TFI images correspond to regions with higher absorption, which are most likely due to higher densities of pigment granules such as melanolipofuscin [28,29]. In particular, it has been shown through histological images of the RPE using laser scanning microscopy (LSM) and structured light microscopy (SIM) that lipofuscin granules are pushed towards the basolateral cell borders by the nucleus, while there is a high content of melanolipofuscin granules at the center, which leads to a larger region of hypoautofluorescence signal that does not exclusively represent the cell nucleus [28,30]. This is also observed in NIRAF imaging in vivo, where there is a large hypoautofluorescent cell center (Figure 6). Similarly, in TFI images, some dark centers appear quite large (Figure 2), more than the typical size of a nucleus, which could also be due to the accumulation of these pigment granules around the nucleus leading to higher absorption and generating the hyposignal at the center of the RPE cells. This granule distribution is also supported by our observation in TFI images of the variation in intensity levels of the cell center, which could be attributed to variation in the number of granules inside the RPE cell leading to changes in absorption (see enlarged regions in both Figures 2 and 6).
Finally, an interesting difference between NIRAF and TFI RPE images is the absence of vessel shadow on the TFI RPE mosaic.This is most likely because red blood cells in the vessels have absorbed the NIRAF light, the shorter wavelength of which is closer to their absorption spectrum.This hypothesis is further demonstrated in previous studies [10,21] where a shorter wavelength led to darker shadows of vessels on the RPE mosaic generated by AO autofluorescence imaging.Unlike autofluorescence imaging in histological RPE tissue [32], in NIRAF imaging in vivo, we do not seem to distinguish the hypofluorescent/nonfluorescent gap between individual RPE cells, nor does it appear visible in TFI images either.This is most likely due to a lack of resolution as we are limited to a lateral resolution of approximately 2 µm.A potential limitation arising from this is the inability to distinguish between a multi-nucleated RPE cell, which has been highly documented in the literature [30,32-34], and two cells.Even though RPE multinucleation has not been associated with any failure of the RPE function yet, it has been noticed to increase with age and would thus be an interesting biomarker to obtain in vivo too, requiring further improvement of our imaging modalities [24].
RPE In Vivo Signal Variation with Eccentricity
Neither imaging modality shows a homogeneous RPE mosaic across eccentricities, with cells displaying a distinctive hyposignal center surrounded by a bright ring at around 10°T, which seems to become further mixed with other signals closer to the fovea.This observation in NIRAF is in accordance with autofluorescence imaging in histological RPE samples, where some authors noticed that perifoveal RPE cells (corresponding to eccentricities between 10°and 18°around the fovea) appear to have the highest autofluorescence signal compared to foveal and near-peripheral RPE cells [32].This could be due to a higher lipofuscin granule load, which, as previously shown [28], is pushed towards the borders of the cell cytoplasm, generating these distinctive bright borders similarly to what is observed in Figure 2 NIRAF images.Other autofluorescence studies [10] have shown more contrasted RPE mosaics at larger eccentricities, particularly in the temporal retina.That study suggested that cell visibility is affected more by regional retinal characteristics than the age or ocular quality of the participants [10].Interestingly, perifoveal locations in TFI images also display clearer RPE mosaics, suggesting a granule distribution that leads to more contrasted TFI images than in foveal regions.In addition, the larger size of the RPE cells in perifoveal locations compared to fovea [32] facilitates the resolution of individual cells at further eccentricities in both modalities.One concern raised in previous work developing RPE imaging was the possibility that there might be some signal from the rods at certain eccentricities which could be mistaken for RPE cells when they were forming a ring of single cells around the cones [15].Our region of interest of 8°T-10°T corresponding to 2.5-3.1 mm is beyond eccentricities around 1.35 mm displaying these types of rings and rods [35,36], suggesting the observed cells are indeed RPE cells.
Another aspect influencing the RPE cell contrast in both modalities is believed to be a potential modulation of the NIRAF signal by the photoreceptors [10,11].In Figure 2, bright small spots stand out from the background in NIRAF images close to the fovea, which has also been observed in previous in vivo autofluorescence studies of the RPE [10,11,13].One hypothesis is attributing the autofluorescence in the pattern of cones to their waveguiding properties as the potential mechanism for this modulation.This could be the result either from the ingoing excitation light, which would lead to focused autofluorescence light at the tip of the outer segments, or on the way out, with bidirectional waveguides guiding the autofluorescence out of the eye through the cone.Although TFI image contrast is supposed to originate in absorption, it also seems to be mixed with brighter, smaller spots of the size of photoreceptors in foveal locations.This transscleral system is a full field modality, which, unlike confocal systems, has poor optical sectioning and detects light from all retinal layers.The transmission signal could thus be affected by the photoreceptors layer, leading to some modulation of the RPE mosaic.This trend in the RPE mosaic in subject #1 (Figure 2) was also observed in subjects #2 and #3, although with less strength in the NIRAF image in subject #3 (see Figures A1 and A2 in Appendix A).
RPE Pigmentation Dynamic
We were able to detect a variation of the grayscale level inside the same RPE cells over certain periods of time in TFI images.We excluded changes in the image acquisition, such as subject head position in the system, by acquiring several image sequences at 10-min intervals and asking the subject to stand back in between.In particular, given the nonexistent optical sectioning of the TFI, we wanted to verify that the observed changes were not generated by variations from other regions of the retina, like the choriocapillary flow.Additionally, unlike the AOSLO, the TFI system has a full field acquisition, which prevents distortion artifacts in the average images, allowing us to interpret the observed changes as physiological variations.Furthermore, while minor fluctuations in AO correction may impact the quality of individual frames, they would not be expected to give rise to the observed variations, which are deemed physiological.As shown in Figure 5, RPE cells remain stable over these intervals.However, images acquired over a few months show, at several locations, changes in the apparent gray levels of the RPE cell (marked in Figure 5 by blue arrowheads), which even seems to modify the cell shape and size appearance (marked in Figure 5 by green arrowheads).The contrast variations are illustrated in Figure A3 through standard deviation images corresponding to each set of zooms depicted in Figure 5.The red spots indicate regions where there are notable differences in intensity between the time intervals.We observed that in zoom sets representing shorter time intervals, there are scattered small red spots, most likely due to noise.However, in zoom sets derived from images taken over several months, we observed larger circular spots, indicating changes in the intensity at the center of the RPE cells.The fact that these variations inside the RPE cells are not observed in images acquired between short timescale intervals (10 min) suggests that they are derived from slow physiological changes in the cell.One interpretation of these changes inside the RPE cells could be that these could be driven by pigment organelle motility.Thus, these variations in the cells hyposignal could imply a dynamic of melanincontaining complex granules inside the RPE cell, which has actually been long reported in the literature [37].Furthermore, Feeney's study [37] reveals the existence of a dynamic and complex interrelationship between the various components of the phagolysosomal system and the melanin granules in the RPE cytoplasm [37].Although phagocytosis in the RPE occurs in a diurnal fashion and the entire photoreceptor outer segment population is turned over every 2 weeks [38], it does not entail that the melanin-containing granules follow that cycle or that enough changes in the amount of granules have occurred in that time to affect the contrast of the images.Similarly, other granules and structures are produced or absorbed by the RPE cell during this cycle, which could also affect the level of light absorption and, therefore, the contrast of the RPE mosaic in TFI images.Nevertheless, we observed for the first time in healthy subjects dynamic changes in the structure of the RPE mosaic at the level of single cells in vivo, which differs from the changes described in patients with retinopathologies such as age-related macular degeneration, where large clumps of pigment migrate over distances of several microns [21].
Conclusions
We successfully imaged individual RPE cells in the same living eyes using two different cellular resolution modalities.We show, for the first time, that the same cells could be imaged and overlaid 1:1 between modalities, cross-validating each of these tools for imaging of individual RPE cells.This validates that the observed mosaic is a cellular structure whose morphology and size match the RPE.We also show for the first time in vivo variability in the contrast and structure of individual healthy RPE cells, suggestive of pigment redistribution, which could allow us to extract biomarkers of healthy functioning RPE and thus better understand the role of pigment in the RPE.However, it must be noted that in this study, we compared images from the TFI system to NIRAF images from previous work on the same subjects [11] generated several years ago, representing a considerable time gap between the two datasets.Given that we observed dynamic changes within the RPE layer using the TFI system over intervals of months, one limitation of this study was the fact that we could not compare intracellular features between the two imaging modalities despite observing the same cells.Therefore, in the future, we will expand our study cohort to include more subjects and acquire images during the same imaging session for both modalities, allowing for a more robust analysis of intracellular comparisons between NIRAF and TFI imaging modalities.Finally, the addition of TFI and NIRAF AOSLO to future multi-modal imaging studies may provide complementary information about RPE structure and may help to better understand RPE pigment redistribution in different pathologies and with age.In particular, due to its longitudinal monitoring capability, we will introduce the TFI system into clinical protocols to follow the progression of diseases affecting the RPE layer, such as AMD.The TFI system is particularly adapted to a clinical setting because of its short acquisition time and large field of view.
Figure 1 .
Figure 1. Schematic of the transscleral illumination of the TFI system.
Figure 2 .
Figure 2. Montages of RPE layer images acquired with the (top) TFI and (bottom) NIRAF modality from the fovea (right) to 10°T (left) on subject #1. Enlarged regions are compared at various eccentricities. Scale bar is 100 µm. Appendix A Figures A1 and A2 show montages for subjects #2 and #3.
Figure 3 .
Figure 3. Comparison of RPE images taken with NIRAF and TFI modalities on the same region (10°T). The power spectrum densities were computed on each image and superimposed in the last column. Dashed lines highlight the peaks corresponding to the modal frequency of the RPE cell spacing, in blue for TFI and red for NIRAF. Scale bars are 50 µm long. Supplementary Video S1 shows the superposition of TFI and NIRAF images for all subjects.
Figure 4 .
Figure 4. Bland-Altman plots comparing TFI and NIRAF cell spacing and density. The plots show the agreement between the two different systems, as the measures fall inside the horizontal lines, i.e. inside the limits of agreement.
Figure 5 .
Figure 5. Longitudinal imaging of subject #1's RPE mosaic (a-c) over minutes and (d-f) over months. Three zoomed regions for short-term intervals (zooms 1-3 for (a-c)) and for long-term intervals (zooms 4-6) show details of single RPE cells. Scale bars are 50 µm.
Figure 6 .
Figure 6. (Left) Enlarged region of subject #4 in Figure 3 showing the same cells in NIRAF and TFI contrasts, with yellow arrowheads indicating some examples. (Right) Schematic of the simplified arrangement of melanin-containing granules such as melanolipofuscin granules (dark circles) and lipofuscin granules (light circles) inside an RPE cell, as suggested by histology findings in [28-31].
Table 2 .
Subjects image acquisition details for each modality.
Table 3 .
RPE cell spacing and density extracted from the Power Spectral Density of TFI and NIRAF images for all subjects. | 6,654.2 | 2024-04-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Accuracy and minor embedding in subqubo decomposition with fully connected large problems: a case study about the number partitioning problem
In this work, we investigate the capabilities of a hybrid quantum-classical procedure to explore the solution space using the D-Wave 2000Q quantum annealer. Here, we study the ability of the quantum hardware to solve the number partitioning problem, a well-known NP-hard optimization model that poses some challenges typical of those encountered in real-world applications. This represents one of the most complex scenarios in terms of qubit connectivity and, by increasing the input problem size, we analyze the scaling properties of the quantum-classical workflow. We find remarkable results in most instances of the model; for the most complex ones, we investigate the D-Wave Hybrid suite further. Specifically, we were able to find the optimal solutions even in the worst cases by fine-tuning the parameters that schedule the annealing time and allowing a pause in the annealing cycle.
I. INTRODUCTION
Recently, the availability for the first time of quantum annealing devices from D-Wave Systems has captured the attention of both researchers and technology companies [1-4]. In addition, there is growing interest in the experimental determination of whether or not a quantum speedup can be achieved with this new class of quantum devices and in what kind of working applications can be developed on such platforms [5-8].
The participation of major technology players such as Google, Lockheed Martin, and Los Alamos Laboratories continues to rise, together with the scientific literature and application reports. Nevertheless, there is still a strong limitation in the usage of this model of computation for solving real-world problems due to the limited number of qubits and couplers inside the quantum processing unit (QPU). Indeed, it is well known that quantum annealers need a dramatically larger number of qubits and couplers in order to model the complexity of real-life problems. In particular, the limited connectivity between qubits inside the current Chimera graph architecture represents an additional obstacle in mapping large real problems onto the QPU [9-12].
Furthermore, with the release of an open-source suite spanning from the decomposing solver Qbsolv to the new Hybrid framework, D-Wave took a significant step forward towards gathering the attention from technology companies.As a matter of fact, with these technologies it is possible to close the gap between logical qubits representation encoded in the QUBO (Quadratic Unconstrained Binary Optimization) matrix and the physical embedding of the problem into the Chimera graph [13].
Besides, it is possible to decompose large problems into smaller subsets in such a way that they can be integrated immediately into the QPU, by providing both the combinatorial implementation required for the physical embedding and the decomposition procedure for the creation of the smaller instances.Also, the backend to be used during the computation can be specified in order to solve the model by means of either a classical or a quantum-based platform.
However, despite all the attention drawn to this crucial tool, a systematic investigation of the optimization and decomposition performance has not yet been conducted. Some studies have been carried out using special techniques such as the time-to-target metric [14] or methods based on matrix factorization [15], but without taking into account the capability of scaling up as the input size grows.
In this work we investigate the accuracy and the capability of the D-Wave 2000Q quantum annealer to solve problems with a significantly large input. To perform this study, we use a well-known NP-hard model: the number partitioning problem (NPP) [16]. Thanks to the simplicity of this problem, it is easy to generate artificial problems of any size for which the optimal solution is known. Consequently, measuring the quality of the solution provided by the quantum annealer, along with the classical implementation of the tabu-search algorithm for the problem decomposition, is possible even for large datasets. The number partitioning problem is defined as the task of deciding whether a given set S of positive integers can be divided (partitioned) into two subsets S_1 and S_2 such that the total sum of the elements in S_1 equals the total sum of the elements in S_2. Although the NPP is an NP-complete problem, its optimization version is considered NP-hard and can be formulated in the following way: given a list of N positive integers {a_1, a_2, ..., a_N}, the solution consists in finding a subset A ⊂ {a_1, a_2, ..., a_N} such that the difference Δ = |Σ_{a_i∈A} a_i − Σ_{a_j∈S\A} a_j| is minimized. Throughout this work, we will refer to this difference as the delta between the two subsets A and S \ A. This problem is of both practical and theoretical importance: possible real applications span from multiprocessor pipeline scheduling [17], where balancing and partitioning different resources can be crucial, to cryptography [18] and all those problems requiring load balancing of I/O capacities, e.g. during database processing [19]. The D-Wave device implements a quantum annealing heuristic to solve sampling, optimization and machine learning problems. Specifically, given a physical system composed of qubits, it is possible to define its Hamiltonian and initialize it in such a way that the lowest-energy state corresponds to all qubits being in a superposition of 0 and 1. Then, as the annealing proceeds, a new Hamiltonian deriving from the problem's specifications, called the problem Hamiltonian, is introduced and gradually takes over the initial energy landscape, up to a point where it contains all the energy contributions. The Hamiltonian of the system can thus be written as H(t) = H_I(t) + H_P(t), where H_I is the initial Hamiltonian, H_P is the problem Hamiltonian, and their temporal evolution through the annealing is such that H_I(0) ≫ H_P(0) and H_I(t_f) ≪ H_P(t_f), t_f being the final time of the annealing.
As the problem Hamiltonian is introduced, the energy levels of the excited states emerge, increasing the probability of the system jumping from the ground state to some other excited state. In particular, there exists a critical point, the point of minimum gap, where the ground-state energy level is closest to the lowest energy level of one of the excited states. At this point, the probability of escaping the ground state is highest, in which case the system is driven away from the global minimum.
In practice, in order to manipulate the Hamiltonian of the system, an external magnetic field is applied to the qubits.In this way, the probability of qubits falling into the 0 or 1 state is changed.The quantity that controls the magnetic field, called bias or weight, is directly controlled by the function of the problem at hand, that is the one from which a sample is needed or that has to be minimized.Moreover, it is possible to correlate qubits by entangling them.This is obtained by setting the value of a coupler, which represents the strength of the correlation between qubits that are linked together.
Hence, by letting the initial system undergo the quantum annealing process, it is possible to raise energy barriers in such a way that the energy of the system reflects the function to be minimized or sampled from.If the quantum annealing is slow enough, the system is able to naturally end up in the lowest-energy state, i.e. the low energy states needed in a sampling problem or the solution of a minimization problem.
In its current implementation, the D-Wave quantum annealer is able to solve problems expressed in the form of an Ising spin glass, with a Hamiltonian written as H = Σ_i h_i S_i + Σ_{i<j} c_ij S_i S_j, where H is the Hamiltonian encoding the problem, S_i ∈ {−1, 1} are the spin values, and h_i and c_ij are respectively the qubit weights and the coupler coefficients of the model.
A complete formulation of the NPP as an Ising spin glass has been provided in Ref. [20]. The Hamiltonian for this type of problem can be defined by requiring an increase in the energy whenever the total of the amplitudes associated with positive spin states differs from that of the amplitudes with negative spins. According to this formulation, one can use the relation H = (Σ_{i=1}^{N} a_i S_i)^2, with S_i = ±1 the spin values indicating the subset to which the i-th element belongs and a_i the i-th element of the set. It follows that if the ground state has H > 0 there is no exact solution of the specific problem, and the ground state is the one minimizing the mismatch between the two subsets.
In order to formulate the problem as a Quadratic Unconstrained Binary Optimization model, we first have to convert the spin variables S_i = ±1 into binary variables of the form q_i ∈ {0, 1}. This can be done by using the simple relation S_i = 2q_i − 1, where q_i is the i-th binary variable and S_i is the spin value. Now, the original Ising problem can be mapped into the QUBO form min_x x^T Q x, where x represents the vector of binary variables and Q is the so-called QUBO matrix containing the weights of the qubits (h_i in Eq. 3) on the diagonal and the coupler coefficients (c_ij in Eq. 3) in the (i, j) elements. This matrix is symmetric (c_ij = c_ji), allowing a reduction in the number of terms by selecting only i ≤ j and setting the remaining terms to zero, leading to an upper-triangular matrix.
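To make the mapping concrete, the sketch below builds the NPP QUBO that follows from substituting S_i = 2q_i − 1 into H = (Σ_i a_i S_i)²: up to an overall factor and a constant offset, Q_ii = a_i(a_i − m) and Q_ij = 2 a_i a_j for i < j, with m = Σ_i a_i. This is the standard textbook construction rather than the authors' exact code, and the brute-force check is only feasible for small N.

```python
import itertools
import numpy as np

def npp_qubo(a):
    """Upper-triangular QUBO for the number partitioning problem.
    Obtained from H = (sum_i a_i S_i)^2 with S_i = 2*q_i - 1; an overall
    factor of 4 and the constant offset m^2/4 are dropped."""
    a = np.asarray(a, dtype=float)
    m = a.sum()
    Q = np.diag(a * (a - m))
    rows, cols = np.triu_indices(len(a), k=1)
    Q[rows, cols] = 2.0 * a[rows] * a[cols]
    return Q

def delta(a, q):
    """Absolute difference between the two subset sums for assignment q."""
    a, q = np.asarray(a, dtype=float), np.asarray(q, dtype=float)
    return abs((a * q).sum() - (a * (1.0 - q)).sum())

a = [4, 7, 1, 5, 3, 2]                       # small toy instance
Q = npp_qubo(a)
best = min(itertools.product([0, 1], repeat=len(a)),
           key=lambda q: np.array(q) @ Q @ np.array(q))
print(best, delta(a, best))                  # a minimum-energy assignment gives delta = 0 here
```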
Having the QUBO matrix, it is possible to submit it to the QPU and retrieve a solution of the optimization problem. However, the connectivity between qubits required by the NPP is that of a complete graph, which is not yet supported by any modern quantum annealer providing a fairly high number of qubits. To overcome this and similar problems, the D-Wave device operates a minor-embedding of the problem onto its Chimera architecture. Specifically, one can either run the built-in tabu-search heuristics provided by the D-Wave Hybrid tool to optimally decompose the problem into subproblems or choose a custom minor-embedding strategy. The subproblems will then be mapped onto the Chimera graph, on which the QPU will run the quantum annealing.
In Fig. 1 the time required to solve the NPP on classical hardware using the D-Wave Qbsolv is reported as a function of the input set size. The elaboration time increases exponentially while a structured procedure is applied in order to find the minimum: a number of subproblems are generated, handled and finally merged into a global solution of the NPP. The exponential increase in the execution time confirms the NP-hardness of the problem when approached with classical hardware and formulations. When the problem is submitted to the QPU, the execution time changes and paves the way for a wide range of investigations of the D-Wave Hybrid tool. Moreover, this peculiar model allows us to study what happens in one of the worst-case scenarios from the perspective of qubit connectivity: a fully connected graph, where the number of couplers and the weight precision play a central role [5,21].
III. RESULTS
In order to investigate the capabilities of the D-Wave Hybrid tool, we solve multiple NPP instances of increasing size. For each fixed problem size we use 10 different datasets and collect statistics of the results. For experimental purposes, we choose the data in such a way that the ground state of the corresponding Ising model is H = 0, i.e. an exact partition of the set of numbers exists.
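One simple way to build instances with a known H = 0 ground state (not necessarily the generator used by the authors) is to draw random positive integers, split them at random, and append one extra number equal to the imbalance, so that a perfect partition exists by construction:

```python
import random

def perfect_partition_instance(n, lo=1, hi=1000, seed=None):
    """Random NPP instance that admits a perfect partition (delta = 0).
    Illustrative construction only, not the authors' data generator."""
    rng = random.Random(seed)
    while True:
        values = [rng.randint(lo, hi) for _ in range(n - 1)]
        side = [rng.random() < 0.5 for _ in values]
        gap = abs(sum(v for v, s in zip(values, side) if s)
                  - sum(v for v, s in zip(values, side) if not s))
        if gap > 0:                      # keep every entry a positive integer
            return values + [gap]

print(perfect_partition_instance(10, seed=1))
```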
For our studies, we first construct the QUBO matrix for each problem, and then we define the tabu-search heuristics as the algorithm that splits the original problem into the subproblems, preparing them to be embedded on the Chimera graph.
Fig. 2a shows the QUBO matrix defining the connectivity of qubits required by the specific NPP instance, with regular patterns related to the number amplitudes in the dataset. With the problem formulated as an Ising model, all variables are coupled in pairs, resulting in a dense (upper-triangular) QUBO matrix. Such connectivity is the most complex to handle and can thus be an issue for current quantum hardware, making it interesting to investigate the quantum annealer's performance.
The distribution of partition deltas for each different problem size is summarized in Fig. 2b.We produced 10 different datasets to be partitioned for every problem size and we computed the value of delta for all these instances.For each problem size we have built a boxplot of deltas centered on the median of the 10 delta values coming from the solution of the NPP.
The combination of quantum annealing with the classical minor-embedding heuristics is able to find the optimal solution in most cases. This is achieved especially when the problem is very small (and, as a consequence, computationally easy) or when its size is significantly larger. In fact, for our smallest problem and for those with input size greater than 450 binary variables, we are able to optimally solve the 10 different NPP instances. On the other hand, for middle-sized problems, not all distributions of data allow the qubits to reach the ground state. As a result, we obtain the optimal solutions only for a subset of the given problems. Figs. 2c-d report the density distribution of each of the 10 datasets used for two different problem sizes (200 and 500 variables). As explained above, the quality of the results on the bigger model exceeds that on intermediate sizes. Comparing both density distributions, we can conclude that this behavior is fundamentally related to the fact that a shift of the distribution curve to lower values leads to a dataset with more solution degeneracy in the lower energy states, and consequently to a problem that is simpler to solve even when it is bigger.
An effective method to enhance the exploration of the solution space is the direct manipulation of the annealing schedule [22,23]. This distinctive technique can be used to improve the quality of the solution in the cases described above in which we could not reach the ground state. Indeed, in contrast to the first approach, where the annealing was used without interfering with the spontaneous process, we now exploit the capability of the D-Wave solver API to manipulate directly the scheduling of the cycle. To accomplish this, we define the time instant at which the cycle has to be stopped and resumed, as well as the value of the persistent current powering the adiabatic relaxation. This entire procedure is referred to as an annealing pause.
The top panel of Fig. 3 is a sketch of the evolution over time of the initial and problem Hamiltonians as the time schedule moves forward, compared with their theoretical behaviour if no pause is scheduled. In both cases the problem Hamiltonian grows while the initial one decreases, but in the time-scheduled case there is a moment (determined by the user) when the annealing is paused and, as a consequence, the two terms of the system Hamiltonian remain constant. Once the pause is finished, the normal scheduling is resumed and continues its cycle. At the end of the process, the initial Hamiltonian vanishes and the energy of the system is determined by the problem alone.
The middle and bottom panels of Fig. 3 show the results of the analysis on two problems, one of size 200 and the other of size 300, for which the uncontrolled annealing performed worst. For each problem we paused the annealing after 10 µs, let the system rest for 10, 40, 60, 100 or 120 µs, respectively, halfway through the flow of current, and finally let the annealing end. This whole process was repeated five times for each problem.
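On D-Wave hardware such a pause is typically expressed as a piecewise-linear list of (time in µs, anneal fraction s) points handed to the sampler. The sketch below encodes one of the schedules described above (a 10 µs ramp, a pause at s = 0.5, then completion); the anneal_schedule parameter name and the Ocean SDK calls are given to the best of our knowledge and should be checked against the current D-Wave documentation.

```python
# Sketch of submitting a QUBO with a paused anneal schedule
# (Ocean SDK usage assumed; verify parameter names against the D-Wave docs).
from dwave.system import DWaveSampler, EmbeddingComposite

def pause_schedule(ramp_us=10.0, pause_us=40.0, s_pause=0.5, total_us=100.0):
    """Piecewise-linear (time, s) anneal schedule with a mid-anneal pause."""
    return [(0.0, 0.0),
            (ramp_us, s_pause),
            (ramp_us + pause_us, s_pause),
            (total_us, 1.0)]

# Q is a {(i, j): value} dict, e.g. built from the npp_qubo matrix above.
# sampler = EmbeddingComposite(DWaveSampler())
# result = sampler.sample_qubo(Q, num_reads=1000,
#                              anneal_schedule=pause_schedule())
# print(result.first.energy)
```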
The best energy configurations, in terms of distance from the ground state, for the two problems analysed here were not achieved with the same parameter settings. In fact, every instance requires different values of the pause starting point, duration and persistent current. Nevertheless, all of our choices greatly improved the results previously obtained with the uncontrolled annealing, even though not all of them led to the optimal solution. We were able to record considerable improvements multiple times, proving that the introduction of the pause can increase the accuracy of the annealing. This improvement in the quality of the results is due to the effect of the pause on the search region of the solution space: by pausing the flow of the persistent current, and hence the annealing, we widen the exploration of the energy landscape and, as a consequence, the probability of finding the global minimum.
The parameters must be tuned wisely: pauses that are too long could make the system escape from energy points near the ground state, while pauses that are too short may not be effective at all. At the same time, if we schedule a pause after the system has overcome the minimum gap between the energy levels, i.e. when it has already settled into some equilibrium state, we will get no benefit from the procedure; likewise, a pause scheduled too early will have no effect on the probability of obtaining the global minimum, because the chances of escaping the ground state are still high.
It has been shown empirically that finding the appropriate time to start the pause and its duration is a technique that is likely to increase the computational performance of the quantum annealing, yielding much better solutions at the cost of only a little more QPU time [22].
IV. CONCLUSIONS
In this work we studied the capabilities of the D-Wave quantum annealer and the D-Wave Hybrid framework to approach problems in complex scenarios. For our analysis we selected a fully connected model, the NPP, which poses an enormous challenge to the currently available QPUs and the architecture they are based on.
Two different analyses were carried out: the accuracy of the outcome as the input size scales up, and the impact of the annealing pause on the solution quality.
For the first part we conducted our analysis on a number of small-to-large problems to investigate the behaviour of the quantum annealer at a level of complexity which is potentially that of real-life problems. One interesting result was found: the accuracy is discontinuous with the problem size. While high-quality results were found for small problems, there is a counter-intuitive behaviour as the problem dimension increases: a dip in the accuracy for medium-sized problems and a recovery as the size continues to increase. This effect was explained by the value distribution within the dataset: lower values in the input allow higher accuracy of the result, even when the size of the problem is rising.
The medium-sized problems were studied in more detail by applying pauses during the annealing cycle, allowing the system to explore the solution space with a modified equilibrium. Our results prove that with the correct parameter tuning it is possible to dramatically improve the accuracy of the solution, obtaining optimal results in cases that had proven to be troublesome in a non-altered context.
Figure 1 :
Figure 1: Execution time of tabu-search for increasing input size. Classical partitioning of a set with the classical embedded tabu-search as backend. The red line is the exponential fit t = A e^(x/B), where B = 340 a.u. and x is the size of the input. Blue points represent measured data. The blue line in the bottom part is the deviation of the experimental points from the fitted values.
Figure 2 :
Figure 2: QUBO matrix and delta distributions over multiple datasets. a. QUBO matrix of one instance of data with problem size equal to 100. The entries are scaled and the intensity of colors is used to summarize the main characteristics of the plot: the diagonal is made up of negative entries, the lower triangular part is zero and the upper one has no null entries. b. Boxplots of deltas for different input problem sizes computed over 10 datasets for each size, with dots representing the value of the delta in each instance. c-d. Kernel density estimation of the distribution of input data for problems with, respectively, 200 and 500 variables, showing the data from all 10 instances in each plot. For b, c and d the values of deltas were saturated to a reference value of 50; such a numerical value is therefore to be interpreted as the result of a bad solution.
Figure 3 :
Figure 3: Annealing cycle and boxplots of deltas with pause. a. Sketch of the terms contributing to the system Hamiltonian as a function of time. The solid red, dashed orange, solid green and dashed blue lines represent respectively the problem Hamiltonian with and without pause and the initial Hamiltonian with and without pause. b-c. Boxplots of deltas found over multiple runs of the annealing with the same pause starting point and the same value of persistent current but different pause duration times. The dots represent the values of deltas for each run (saturated where needed as in Fig. 2). b shows the boxplots for a single instance of the problem with 300 variables associated with a degraded solution, i.e. far from the ground state, whereas c depicts the same for a single instance of the problem with 200 variables. | 4,956 | 2019-07-01T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Phenomenology of A Little Higgs Pseudo-Axion
In models where the Higgs is realized as a pseudo-Nambu-Goldstone boson (pNGB) of some global symmetry breaking, there are often remaining pNGBs of some $U(1)$ groups (called `pseudo-axions'), which could lead to smoking-gun signatures of such scenarios and provide important clues on the electroweak symmetry breaking mechanism. As a concrete example, we investigate the phenomenology of the pseudo-axion in the anomaly-free Simplest Little Higgs (SLH) model. After clarifying a subtle issue related to the effect of symmetric vector-scalar-scalar (VSS) vertices (e.g. $Z_\mu(H\partial^\mu\eta+\eta\partial^\mu H)$), we show that for the natural region of the parameter space, the SLH pseudo-axion is top-philic, decaying almost exclusively to a pair of top quarks. The direct and indirect (i.e. via heavy particle decay) production of such a pseudo-axion at the $14\,\mbox{TeV}$ (HL-)LHC turns out to suffer from either large backgrounds or small rates, making its detection quite challenging. A $pp$ collider with higher energy and luminosity, such as the $27\,\mbox{TeV}$ HE-LHC, or even the $100\,\mbox{TeV}$ FCC-hh or SppC, is therefore motivated to capture the trace of such a pNGB.
I. INTRODUCTION
Despite the great success of the Standard Model (SM), marked by the discovery of the 125 GeV Higgs-like boson [1,2] and the on-going measurements of its properties, how the SM is embedded into a larger theory still remains a mystery. Since the Higgs boson mass parameter is in general not protected against radiative corrections, a naive embedding would signal a high sensitivity of infrared (IR) parameters (the electroweak scale and the Higgs boson mass) to ultraviolet (UV) parameters (i.e. physical parameters defined at a high scale). Although this fine-tuned situation is logically possible, or might be explained to some extent by anthropic reasoning [3,4], it is nevertheless natural to conjecture the existence of some systematic mechanism which protects the Higgs boson mass parameter from severe radiative instability. A well-known example of such a systematic mechanism is supersymmetry, which has the merit of being weakly coupled and thus offers better calculability compared to scenarios based on strong dynamics. However, supersymmetry requires the introduction of a large number of new degrees of freedom, and a large number of new parameters associated with them, making the model quite cumbersome. None of the new degrees of freedom have been observed. It is therefore well motivated to consider alternative but simpler mechanisms with weakly coupled dynamics in their range of validity.
One candidate for such an alternative is the Little Higgs mechanism [5][6][7][8], in which the Higgs boson is a Goldstone boson of some spontaneous global symmetry breaking. The global symmetry is also explicitly broken in a collective manner such that the Higgs boson acquires a mass while the model remains radiatively more stable. A very simple implementation of this collective symmetry breaking (CSB) idea is the Simplest Little Higgs (SLH) model [15,16], in which the electroweak gauge group is enlarged to $SU(3)_L \times U(1)_X$ and two scalar triplets are introduced to realize the global symmetry breaking pattern of Eq. (1). The global symmetry is also explicitly broken by gauge and Yukawa interactions, but in a collective manner, to improve the radiative stability of the scalar sector. The particle content is quite economical. In particular, in the low-energy scalar sector there exist only two physical degrees of freedom, one of which (denoted $H$) can be identified with the 125 GeV Higgs-like particle, while the other is a CP-odd scalar $\eta$ which is referred to as a pseudo-axion in the literature [17,18]. In the SLH, the pseudo-axion $\eta$ is closely related to electroweak symmetry breaking (EWSB), and studying its phenomenology is therefore well motivated. According to the hidden mass relation derived in ref. [19], the $\eta$ mass $m_\eta$ is anti-correlated with the top partner mass $m_T$, which is in turn related to the degree of fine-tuning in the model. The hidden mass relation is derived within an approach consistent with the continuum effective field theory (CEFT) and does not rely on assumptions about the contribution from the physics at the cutoff. Although the phenomenology of the $\eta$ particle has been studied by quite a few papers (e.g. [14,17,18,20,21]), their treatment was not based on the hidden mass relation, and most of these papers were written before the 125 GeV boson was discovered. It is thus timely to revisit the status of $\eta$ phenomenology in light of the discovery of the 125 GeV boson, taking into account the properly derived hidden mass relation and focusing on the parameter space favored by naturalness considerations.
There is another important reason that warrants a reanalysis of the $\eta$ phenomenology. The SLH is usually written as a gauged nonlinear sigma model, in which EWSB can be parametrized through vacuum misalignment. However, the vacuum misalignment also implies that, in the usual parametrization of the two scalar triplets, there exist scalar kinetic terms that are not canonically normalized, and vector-scalar two-point transitions that are "unexpected" [22]. A further field rotation, including an appropriate gauge-fixing procedure, is thus required to properly diagonalize the vector-scalar sector of the SLH model. This subtlety had been overlooked in all related papers before ref. [22], and if one carries out a proper diagonalization of the bosonic sector of the SLH, some of the $\eta$-related couplings turn out to differ from what has been obtained in previous literature. This is the case for both the $ZH\eta$ coupling and the coupling of $\eta$ to a pair of SM fermions. The occurrence of the mass eigenstate antisymmetric $ZH\eta$ vertex (i.e. $Z_\mu(H\partial^\mu\eta - \eta\partial^\mu H)$) is postponed to $\mathcal{O}(\xi^3)$ (with $\xi \equiv v/f$, $v \approx 246$ GeV, and $f$ the global symmetry breaking scale of Eq. (1)), and the couplings of $\eta$ to a pair of SM charged leptons, and to $b\bar b$, $c\bar c$, $u\bar u$, are found to vanish to all orders in $\xi$. This leads to significant changes in the $\eta$ phenomenology, which will be studied in detail in this work.
When one tries to derive the $\eta$-related Lagrangian in the SLH, symmetric vector-scalar-scalar (VSS) vertices, e.g. $Z_\mu(H\partial^\mu\eta + \eta\partial^\mu H)$, naturally appear, a feature that is often present in models based on a nonlinearly-realized scalar sector. The effects of such symmetric VSS vertices contain some subtleties which, to our knowledge, have not been discussed before in the literature. Therefore, we devote one section to the analysis of symmetric VSS vertices, which could also help clarify similar situations in other nonlinearly-realized models.
In this work we do not aim to give a complete characterization of the $\eta$ phenomenology, which could be very complicated in certain corners of the parameter space. Instead, we focus our attention on the parameter space favored by naturalness considerations. More specifically, we will consider an $\eta$ mass in the region $2m_t \lesssim m_\eta \lesssim 1$ TeV, which is favored by naturalness. We then calculate the $\eta$ decay and production at future high-energy hadron colliders in various channels. It turns out that at the 14 TeV (HL-)LHC the detection of $\eta$ is quite challenging due to various suppression mechanisms. A $pp$ collider with higher energy and luminosity, such as the 27 TeV HE-LHC, or even the 100 TeV FCC-hh or SppC, is therefore motivated to capture the trace of such a pNGB.
The paper is organized as follows. In Section II we review the basic ingredients of the SLH, including the crucial hidden mass relation obtained from a CEFT analysis, and present the mass eigenstate Lagrangian relevant for phenomenological studies. In Section III we clarify the effect of symmetric VSS vertices. In Section IV we derive important constraints from electroweak precision observables relevant for the pseudo-axion phenomenology. Section V is dedicated to the study of $\eta$ decay and production at hadron colliders. In Section VI we present the discussion and conclusions.
A. Overview of the Simplest Little Higgs
In the SLH, the electroweak gauge group is enlarged to $SU(3)_L \times U(1)_X$. Two scalar triplets $\Phi_1$, $\Phi_2$ are introduced to realize the spontaneous global symmetry breaking pattern in Eq. (1). They are parameterized in terms of the matrix fields $\Theta$ and $\Theta'$. Here we have introduced the shorthand notation $s_\beta \equiv \sin\beta$, $c_\beta \equiv \cos\beta$, $t_\beta \equiv \tan\beta$, and $f$ is the Goldstone decay constant. $\Theta$ and $\Theta'$ are $3\times 3$ matrix fields containing the Goldstone degrees of freedom: $\eta$ is the pseudo-axion, and $h$ and $k$ are parameterized in terms of the physical fields ($v$ denotes the electroweak vacuum expectation value (vev)). We note that the spontaneous global symmetry breaking of Eq. (1) should deliver 10 Goldstone bosons, which are parameterized here in $\Theta$ and $\Theta'$. The electroweak gauge group $SU(3)_L \times U(1)_X$ will eventually break to $U(1)_{\rm EM}$, and therefore 8 Goldstone bosons will be eaten to make the associated gauge bosons massive. Only two Goldstone bosons remain physical, parameterized here as $h$ and $\eta$. The parametrization of these Goldstone fields actually has some freedom, for which we refer the reader to ref. [19]. In the SLH, under the full gauge group $SU(3)_L \times U(1)_X$, the gauge kinetic term of $\Phi_1$ and $\Phi_2$ can be written in the standard form, with the covariant derivative expressed in terms of $A^a_\mu$ and $B^x_\mu$, which denote the $SU(3)_L$ and $U(1)_X$ gauge fields, respectively; $g$ and $g_x$ denote the coupling constants of the $SU(3)_L$ and $U(1)_X$ gauge groups, respectively. It is convenient to trade $g_x$ for $t_W \equiv \tan\theta_W$. $T^a = \lambda^a/2$, where $\lambda^a$, $a = 1, \ldots, 8$, denote the Gell-Mann matrices. For $\Phi_1$, $\Phi_2$, $Q_x = -\frac{1}{3}$. Following ref. [23], we parameterize the $SU(3)_L$ gauge bosons accordingly. We note that the gauge kinetic term of Eq. (8) automatically satisfies the requirement of CSB. In this paper our convention agrees with ref. [23] but differs from ref. [24]; the conversion between the two conventions is discussed in Appendix A.
with the first-order neutral gauge boson mixing relation given in Eq. (11). Since the electroweak gauge group is enlarged to $SU(3)_L \times U(1)_X$, it is also necessary to enlarge the fermion sector so that the fermions transform properly under the enlarged group.
We adopt the elegant anomaly-free embedding proposed in refs. [16,25,26]. In the lepton Yukawa sector, the SM left-handed lepton doublets are enlarged to $SU(3)_L$ triplets $L_m = (\nu_L, \ell_L, iN_L)^T_m$ with $Q_x = -\frac{1}{3}$ ($m = 1, 2, 3$ is the family index). There are also right-handed singlet lepton fields $\ell_{Rm}$ with $Q_x = -1$ and $N_{Rm}$ with $Q_x = 0$. The lepton Yukawa Lagrangian can be written as in ref. [23]. In the quark sector the field content is likewise enlarged, and the quark Yukawa Lagrangian can also be written as in ref. [23]. In the corresponding expressions, $n = 1, 2$ is the family index for the first two generations of quark triplets, $d_{Rm}$ runs over $(d_R, s_R, b_R, D_R, S_R)$ and $u_{Rm}$ runs over $(u_R, c_R, t_R, T_R)$. We now turn to the scalar potential. We do not introduce a $(\Phi_1^\dagger\Phi_2)^2 + {\rm h.c.}$ term because it formally violates CSB. We note that introducing such a term may lead to spontaneous CP violation [27]. Furthermore, if both the $(\Phi_1^\dagger\Phi_2)^2 + {\rm h.c.}$ term and Majorana mass terms for the $N_R$'s are introduced, the SLH light neutrino masses can be radiatively generated [28]. For the scalar effective potential we use the leading order (LO) expression; with it we are able to compute the electroweak vev, the Higgs mass, the pseudo-axion mass, etc., as functions of $\mu^2$, $\lambda$ and the other Lagrangian parameters in the model. Finally, we note that there of course exist gauge-invariant kinetic Lagrangians for the $SU(3)_L \times U(1)_X$ gauge fields and the fermion fields in the model, according to their representations.
B. Hidden Mass Relation, Unitarity and Naturalness
Before starting the phenomenological analysis in the SLH, it is important to notice that there exist certain constraints that we have to take into account [19].
First, there exists a hidden mass relation which follows from an analysis of the scalar effective potential, Eq. (17). This is because, if we consider $g$, $t_W$, $\lambda_t$ as fixed, the scalar effective potential Eq. (17) is fully determined by 5 parameters, say $\mu^2$, $\lambda$, $f$, $t_\beta$, $m_T$. Requiring the electroweak vev to be 246 GeV and the CP-even Higgs mass to be 125 GeV eliminates two parameters, leaving only three as independent. For instance, we may choose $f$, $t_\beta$, $m_T$ as the three independent parameters; any other observable can then be expressed in terms of these three parameters. In particular, the pseudo-axion mass $m_\eta$ is determined by the hidden mass relation derived in ref. [19], Eq. (29), where $t_{2\theta}^{-1} \equiv 1/\tan(2\theta)$, $s_\theta^{-2} \equiv 1/\sin^2\theta$, and $\theta$, $A$, $\Delta_A$ are defined therein. The basic feature of this mass relation is that the pseudo-axion mass is anti-correlated with the top partner mass.
Second, the SLH is meant to be only an effective field theory valid up to some energy scale, which can be revealed by an analysis of partial wave unitarity. This is done in ref. [19], where the unitarity cutoff $\Lambda_U$ is determined. Apart from the lepton Yukawa part, the SLH Lagrangian is manifestly symmetric with respect to the exchange $\Phi_1 \leftrightarrow \Phi_2$ (with the corresponding exchange of all related coefficients); therefore, without loss of generality we may restrict to $t_\beta \geq 1$, and the resulting formulae have the $t_\beta \leftrightarrow 1/t_\beta$ invariance. Nevertheless, the lepton Yukawa Lagrangian Eq. (12) does not share this exchange symmetry, and the $t_\beta \leftrightarrow 1/t_\beta$ invariance could be lost. However, if we do not deal directly with lepton-related vertices, the violation of the $t_\beta \leftrightarrow 1/t_\beta$ invariance can only come from input parameter corrections, which are all suppressed by $v^2/f^2$ [19], a very small quantity given the current bound on $f$. Therefore in the following, unless otherwise specified, we will assume $t_\beta \geq 1$. (Moreover, in Section IV we will show that the $t_\beta < 1$ case is disfavored by electroweak precision measurements in the natural region of parameter space.) We can then express the unitarity cutoff in terms of the smaller triplet vev, and we require all particle masses to be less than $\Lambda_U$. We note that since $\Lambda_U$ is determined by the smaller of the triplet vevs, while $m_{Z'}$ is determined by the quadrature of the triplet vevs, requiring $m_{Z'} \leq \Lambda_U$ leads to an upper bound on $t_\beta$ (besides our assumption $t_\beta \geq 1$). Third, the parameter $M_T$ has a lower bound derived simply from the structure of the Yukawa Lagrangian [24], where $s_{2\beta} \equiv \sin(2\beta)$. $M_T$ is also bounded from above by either $\Lambda_U$ or the requirement that $m_\eta^2$ obtained from Eq. (29) be positive.
Finally, from the LHC search for the $Z'$ boson in the dilepton channel [29,30], we estimate the lower bound on $f$ as [27] $f \gtrsim 7.5$ TeV (Eq. (37)). We note that when combined with Eq. (36) and Eq. (35) this also leads to a lower bound on the top partner mass of around 1.7 TeV, which is much more stringent than the constraints from top partner searches at the LHC. It is remarkable that the naturalness issue can also be analyzed in a CEFT approach, which is done in ref. [19].
We define the total degree of fine-tuning at a given parameter point in terms of the sensitivity of the IR parameters to the UV ones. Here $\lambda_U$, $\mu^2_U$ denote the $\lambda$, $\mu^2$ parameters defined at the unitarity cutoff. These definitions reflect how the IR parameters (e.g. $m_h^2$) are sensitive to UV parameters (e.g. $\lambda_U$, $\mu^2_U$), and thus may serve as a measure of the degree of fine-tuning in the allowed parameter space. We may follow ref. [19] to compute the degree of fine-tuning, and find several general features. One feature which is easy to understand is that, generally speaking, smaller $f$ and $m_T$ lead to a smaller degree of fine-tuning. In Figure 1 we present the density plot of $\log\Delta_{\rm TOT}$ in the $m_\eta$-$m_T$ plane for $f = 8$ TeV. Only the colored region is allowed by the various constraints. From the figure it is clear that the parameter region favored by naturalness considerations is characterized by a small $m_T$, with $m_\eta$ around 500 GeV. A light $\eta$, with a mass less than $2m_t$, is unfortunately disfavored.
C. Fermion Mass Diagonalization and Flavor Assumption
Fermion mass diagonalization has been studied in refs. [23,24]. In the lepton sector, the fermion mass matrices can be diagonalized by field rotations involving $U_l$ and $W_l$, which are both $3 \times 3$ unitary matrices. In this work, for simplicity, we will assume that $U_l$ and $W_l$ are both identity matrices. This leads to a simplification of some Feynman rules associated with the heavy neutrino $N$.
In the quark sector, we first perform field rotations in the right-handed sector. For simplicity, the phenomenological studies in this work will be carried out under flavor assumptions on the quark Yukawa Lagrangian Eq. (16) that turn off all the generation-crossing quark flavor transitions and lead to a trivial CKM matrix, i.e. $V_{\rm CKM} = 1_{3\times 3}$, which is not realistic. Nevertheless, in this paper we are concerned with the direct production of new physics particles at high energy colliders rather than with quark flavor observables. Also, for the parameter region in which we are interested, the phenomenology is not sensitive to the flavor assumptions adopted here, provided the $\lambda$'s in Eq. (45) and Eq. (46), which characterize the generation-crossing quark flavor changing effects, are small. With the above flavor assumptions, it is then straightforward to show, up to $\mathcal{O}(v/f)$, that after the right-handed sector field rotations we only need to perform field rotations in the left-handed sector to diagonalize the quark mass matrices. The corresponding field rotation parameters $\delta_t$, $\delta_{Dd}$, $\delta_{Ss}$ can be expressed using $f$, $\beta$ and the corresponding heavy fermion mass. Note that, before the square root in these expressions, both the plus sign and the minus sign give possible solutions, which leads to a total of eight sign combinations. When we refer to a sign combination, we will list the signs in the order $\delta_t$, $\delta_{Dd}$, $\delta_{Ss}$, e.g. $(+,+,+)$, $(+,+,-)$, etc. $m_d$, $m_s$, $M_D$, $M_S$ correspond to the masses of $d$, $s$, $D$, $S$, respectively. In the following we will simply neglect the small $m_d$, $m_s$; the expressions for $\delta_{Dd}$ and $\delta_{Ss}$ then become identical, apart from a possible sign difference before the square root, and we obtain a simple common expression in which the superscripts indicate the sign choice for the corresponding rotation parameter. The rotation parameters $\delta_t$, $\delta_{Dd}$, $\delta_{Ss}$ are important since they appear directly in the coefficients of various interaction vertices which affect the $\eta$ phenomenology, as we will see.
D. Lagrangian in the Mass Basis
We are now prepared to present the Lagrangian in the mass basis which is relevant for the investigation of the $\eta$ phenomenology. First, however, let us note that there is a subtle issue regarding the diagonalization in the bosonic sector. After EWSB, it can be shown that the CP-odd sector scalar kinetic matrix in terms of the $\eta, \zeta, \chi, \omega$ fields is not canonically normalized. Also, there exist "unexpected" two-point vector-scalar transition terms like $Z^\mu \partial_\mu \eta$ after expanding the covariant derivative terms of the scalar fields. Therefore, a further field rotation (including a proper gauge-fixing) is needed to diagonalize the bosonic sector. This subtle issue had been overlooked for a long time in the literature, and was only remedied in a recent paper [22]. In ref. [22], an expression for the fraction of the mass eigenstate $\eta$ field contained in the $\eta, \zeta, \chi, \omega$ fields originally introduced in the parametrization of Eq. (4), Eq. (5) and Eq. (6) was obtained, valid to all orders in $\xi \equiv v/f$ (we collect the four fraction values into a four-component column vector $\Upsilon$). The $\Upsilon$ vector is involved in the derivation of all $\eta$-related mass eigenstate vertices. In particular, from the expression of $\Upsilon$ we see that there is an $\mathcal{O}(\xi)$ component of the mass eigenstate $\eta$ contained in $\chi$. This has the following consequences.
If we parameterize the mass eigenstate $ZH\eta$ vertex in terms of two coefficients, $c^{\rm as}_{ZH\eta}$ for the anti-symmetric $ZH\eta$ vertex and $c^{\rm s}_{ZH\eta}$ for the symmetric $ZH\eta$ vertex, then it is shown in ref. [22] that the anti-symmetric $ZH\eta$ vertex only shows up from $\mathcal{O}(\xi^3)$, in contrast to the results presented in refs. [17,18], which claimed the existence of an anti-symmetric $ZH\eta$ vertex at $\mathcal{O}(\xi)$ due to the lack of an appropriate diagonalization in the bosonic sector. This subtle issue of diagonalization in the bosonic sector also has an impact on the $\eta$ coupling to fermions. For instance, if we consider the expansion of $\epsilon_{ijk}\Phi_1^i\Phi_2^j$, with the help of the expression for the $\Upsilon$ vector in Eq. (54), we find that its neutral component does not contain any fraction of the mass eigenstate $\eta$ field, to all orders in $\xi$. Therefore, from Eq. (12) we immediately conclude that $\eta$ does not couple to a pair of charged leptons to all orders in $\xi$. This point has been overlooked by previous studies [21,31] which rely on $\eta \to \tau\tau$.
In the following let us collect the other mass eigenstate vertices that are relevant for the $\eta$ phenomenology, to the first nontrivial order in $\xi$. In the Yukawa sector, we have the couplings of $H$ and $\eta$ to a pair of fermions: (1) $H$ and $\eta$ couplings to the lepton sector; (2) $H$ and $\eta$ couplings to the up-type quark sector; (3) $H$ and $\eta$ couplings to the down-type quark sector. In these expressions, $m_{l_n}$, $n = 1, 2, 3$, denote the masses of the $e, \mu, \tau$ leptons, $M_{N_n}$, $n = 1, 2, 3$, denote the masses of the three heavy neutral leptons $N_n$, and $m_u$, $m_c$ denote the masses of the $u$, $c$ quarks, respectively. $\eta$ can also be a decay product of the heavy fermions $N, T, D, S$; therefore we also list the relevant Lagrangian for the heavy fermion gauge interactions which enter the heavy fermion decays. A further interesting possibility is that $\eta$ might come from the decay of a $Z'$ boson. The $Z'$-related parts of the interaction Lagrangian comprise: (1) $Z'$ couplings to leptons; (2) $Z'$ couplings to 3rd generation quarks; (3) $Z'$ couplings to 1st and 2nd generation quarks; (4) $Z'$ couplings to bosons (relevant for $Z'$ decay).
III. SYMMETRIC VSS VERTICES
In the derivation of the SLH Lagrangian in the mass basis we obtain the $ZH\eta$ vertex in the form of Eq. (56), which contains two parts: the antisymmetric part ($Z_\mu(\eta\partial^\mu H - H\partial^\mu\eta)$) and the symmetric part ($Z_\mu(\eta\partial^\mu H + H\partial^\mu\eta)$). (The Hermiticity requirement on the Lagrangian does not forbid the symmetric part: $Z_\mu$, $H$, $\eta$ are all real fields, and $\partial_\mu$ does not lead to an additional minus sign under Hermitian conjugation because in quantum field theory the $x^\mu$'s are labels, not operators; this is not to be confused with the situation in ordinary quantum mechanics.) An antisymmetric VSS vertex often appears in models based on a linearly-realized scalar sector, such as the usual two-Higgs-doublet model (2HDM). It is natural to ask whether the symmetric VSS vertices can have any physical effect. We note that in a Lorentz-invariant $ZH\eta$ vertex, the $\partial_\mu$ may act on any of the three fields ($Z_\mu$, $H$, $\eta$). However, because a total derivative term $\partial_\mu(Z^\mu H\eta)$ has no physical effects, we expect at most two independent contributions from the interaction of one vector field with two scalar fields. If symmetric VSS vertices are allowed and present in a general theory and could lead to distinct physical effects, this would mean that a vector field could interact with two scalar fields in a manner different from the usually expected antisymmetric pattern, which may further reveal interesting features of the enlarged scalar sector.
Let us first note that, via the Leibniz rule, the symmetric VSS Lagrangian $Z_\mu(\eta\partial^\mu H + H\partial^\mu\eta)$ can be written as $Z_\mu\partial^\mu(H\eta)$ (Eq. (69)) and is therefore (via integration by parts) equivalent to $-(\partial_\mu Z^\mu)H\eta$ (Eq. (70)) in the Lagrangian formulation of the theory. A reflective reader might at this point wonder whether terms like Eq. (70) indeed contribute to S-matrix elements if canonical quantization is adopted. Note that what matters in canonical quantization is the interaction Hamiltonian in the interaction picture (denoted $H^{\rm int}_I$), and if $Z_\mu$ is a massive spin-1 field, then the corresponding interaction-picture field operator $Z^\mu_I$ (the subscript "I" denotes the interaction picture) automatically satisfies $\partial_\mu Z^\mu_I = 0$ (Eq. (71)) [32]. It is tempting to conclude that terms like Eq. (70) cannot contribute to S-matrix elements due to Eq. (71). Actually this is not quite correct. The correct procedure from the classical Lagrangian to the interaction Hamiltonian in the interaction picture $H^{\rm int}_I$ is: first identify appropriate canonical coordinates and their conjugate momenta, then perform a Legendre transformation to obtain the Hamiltonian and express it in terms of canonical coordinates and their conjugate momenta, then promote the canonical variables to field operators satisfying appropriate canonical commutation relations, and finally split the Hamiltonian into a free part and an interaction part and replace the Heisenberg-picture quantities with their interaction-picture counterparts [32]. If this procedure is strictly followed, we find that only the spatial components of $Z^\mu$ can be treated as independent canonical coordinates, while $Z^0$ is dependent, because regardless of whether we start with Eq. (69) or Eq. (70) the derivative of the Lagrangian with respect to $\dot Z^0$ cannot be made to satisfy canonical commutation relations. To avoid the appearance of $\partial_0 Z^0$ in the Hamiltonian we can start with Eq. (69); the problem then turns out to be the one treated in Section 7.5 of ref. [32]. Using the results there, we see that Eq. (69) leads to a corresponding term in the interaction Hamiltonian in the interaction picture (barring a Lorentz non-covariant term which is not shown here). This will certainly lead to a vertex Feynman rule in which $k_\mu$ is the $Z$ momentum flowing into the vertex; this vertex Feynman rule can also be derived from Eq. (69) via the path-integral method. Notice that it is not legitimate to perform integration by parts in the interaction-picture Hamiltonian $H^{\rm int}_I$ to obtain the integrated-by-parts form from Eq. (72). (More specifically, the subtlety involves integration by parts for the spatial components.) The appearance of $\partial_\mu Z^\mu$ in Eq. (70) is reminiscent of covariant gauge-fixing in gauge field theories. Eq. (70) is not gauge-invariant; nevertheless, at this point let us suppose that it can be deduced from a gauge-invariant operator. Because we are dealing with quantum field theories, it is important not to confuse the situation with that of classical field theories. In a classical gauge field theory a gauge-fixing condition (such as the Landau gauge condition $\partial_\mu Z^\mu = 0$) is employed so that the solutions of the equation of motion are required to also satisfy the gauge-fixing condition. In quantum field theory all classical field configurations, regardless of whether they satisfy the classical equation of motion, are to be integrated over in the path integral. The usually adopted covariant gauge, the general $R_\xi$ gauge, actually corresponds to a Gaussian smearing of a class of covariant gauge conditions and does not strictly force the classical field to satisfy a simple gauge-fixing equation.
However, the limit $\xi \to 0$ makes the gauge-fixing functional act like a delta function imposing the Landau gauge condition $\partial_\mu Z^\mu = 0$ [32]. It is therefore heuristic to guess that in the Landau gauge, symmetric VSS vertices do not contribute to the S-matrix of the theory. However, we should not forget that in the Landau gauge it is necessary to take into account the Goldstone contribution to the S-matrix, and also the associated ghost contribution when we go beyond tree level in perturbation theory. This observation suggests that at tree level, processes involving symmetric VSS vertices can be seen as purely Goldstone-mediated. A familiar example of the effect of antisymmetric VSS vertices is the s-channel process $f\bar f \to Z^* \to hA$, where $A$ and $h$ denote a generic CP-odd and CP-even 2HDM Higgs boson, respectively. The corresponding Feynman diagram is shown in Fig. 2 in unitarity gauge. Now suppose we replace the antisymmetric VSS $ZhA$ vertex in Fig. 2 by a completely symmetric VSS $ZhA$ vertex. It is obvious that if the $Z$ boson is on-shell, then the amplitude should vanish, since for an on-shell massive vector boson we have the relation $p \cdot \epsilon = 0$ for its momentum and polarization vector. It is tempting to proceed to the case in which the $Z$ boson is off-shell.
The amplitude in this case can be examined from two perspectives. First, we can perform the calculation in unitarity gauge. In this gauge, the result of dotting the $Z$ momentum $p$ at the $ZhA$ vertex into its s-channel propagator is again proportional to the $Z$ momentum $p$ at the $Zf\bar f$ vertex. It is then obvious that only the axial-vector part of the $Zf\bar f$ vertex contributes to the amplitude, with a contribution proportional to the fermion mass $m_f$. Alternatively, we may perform the calculation in Landau gauge ($\xi = 0$), in which the diagram shown in Fig. 2 does not contribute to the amplitude; nevertheless we need to take into account the s-channel Goldstone-mediated amplitude, which again gives a contribution proportional to the fermion mass $m_f$. Although usually $f$ is a light fermion with negligible mass effects, we might be interested in the case where $f$ is heavy, with important mass effects, e.g. the top quark. If in this case the symmetric VSS vertex could lead to physical effects, we would seem to produce a paradox in the SLH. In the SLH there exists a symmetric $ZH\eta$ vertex; however, if we consider a linearly-realized SLH as a UV completion, then it cannot lead to symmetric VSS vertices and hence there will be no related physical effects. Since the usual nonlinearly-realized SLH can be related to a linearly-realized SLH via an appropriate field redefinition, the above discussion seems to cause a violation of the field redefinition invariance of the S-matrix element. (The radial mode does not help since it does not have the required CP property.) We can turn the argument around and use field redefinition invariance to infer the existence of an additional contribution in the SLH which also contributes to the $f\bar f \to H\eta$ process, such that field redefinition invariance is maintained. In fact, if we examine the Yukawa part of the SLH Lagrangian, we find a four-point contact vertex of the form $H\eta\bar f\gamma_5 f$ (Eq. (76)), with a coefficient proportional to the fermion mass $m_f$.
Here $g_A$ is the axial coupling of the fermion $f$, which also appears in its interaction with the $Z$ boson and the associated Goldstone $\chi$. Now, if we compute the amplitude for $f\bar f \to H\eta$ in $R_\xi$ gauge, we need to include three contributions: s-channel $Z$ exchange, s-channel $\chi$ exchange, and the $\bar f f H\eta$ contact interaction, as shown in Fig. 3. The amplitudes corresponding to these three diagrams are computed (from left to right), with $p_f$ and $p_{\bar f}$ the four-momenta of $f$ and $\bar f$, respectively, and $q \equiv p_f + p_{\bar f}$. When we add the three contributions, we find exactly what we would expect from field redefinition invariance. Moreover, we see that the $Z$ and $\chi$ contributions add up to a gauge-independent result, while the contact interaction contribution is itself gauge-independent.
Here we would like to mention a further subtle point related to the symmetric VSS vertex. It might still be somewhat counter-intuitive that the contribution from the symmetric $ZH\eta$ vertex is cancelled by the contribution from the $\bar f f H\eta$ contact vertex, since the former contribution should know the position of the $Z$ pole and therefore vanish for an on-shell $Z$ boson, while the latter certainly does not "feel" the $Z$ pole. To illustrate this issue, we can include the effect of the $Z$ boson width $\Gamma_Z$, so that the $Z$ boson propagator in unitarity gauge is written with the width naively inserted in the denominator (Eq. (82)). When this propagator is dotted into the $q_\nu$ coming from the symmetric VSS Feynman rule, it vanishes at $q^2 = m_Z^2$, which seems quite plausible given our previous argument that the symmetric VSS vertex does not contribute to processes in which the related vector boson is on-shell. However, this immediately leads to the paradoxical situation that near the on-shell region field redefinition invariance is again violated, since the contribution from the $\bar f f H\eta$ contact vertex certainly does not know about the $Z$ pole.
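To make this explicit, assume that Eq. (82) takes the standard unitarity-gauge form with a naive Breit-Wigner width (the exact form is not reproduced in the text above, so this is an assumption consistent with the surrounding discussion). Contracting it with the momentum $q_\nu$ supplied by the symmetric VSS vertex gives
$$q_\nu\,\frac{-i\left(g^{\mu\nu}-q^\mu q^\nu/m_Z^2\right)}{q^2-m_Z^2+i\,m_Z\Gamma_Z}
 = \frac{-i\,q^\mu\left(1-q^2/m_Z^2\right)}{q^2-m_Z^2+i\,m_Z\Gamma_Z},$$
so the numerator has an exact zero at $q^2 = m_Z^2$ while the width keeps the denominator finite, and the contraction vanishes on the $Z$ pole. This is precisely the apparent paradox described above.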
The resolution of this paradox lies in the treatment of the particle width in its propagator. The naive treatment in Eq. (82) is actually not quite correct and will in general lead to results that violate the Ward-Takahashi identities. A proper treatment can be made by, e.g., employing the complex mass scheme, which properly retains gauge invariance. The final result is, of course, that no exotic structure appears near the $Z$ pole and field redefinition invariance is maintained.
IV. CONSTRAINTS FROM ELECTROWEAK PRECISION OBSERVABLES
As discussed in Section II, in the study of the pseudo-axion phenomenology there are eight sign combinations for the rotation parameters $\delta_t$, $\delta_{Dd}$, $\delta_{Ss}$. Moreover, when the lepton sector is relevant, either $t_\beta \geq 1$ or $t_\beta < 1$ is possible, leading to further complication. Nevertheless, as will be shown in this section, the number of possibilities is greatly reduced if we require that:
1. The parameter space under consideration is favored by naturalness considerations and thus embodies (to some extent) the original motivation of the SLH model.
2. The parameter space under consideration is allowed by electroweak precision measurements.
As discussed in Section II, the first requirement points to the region characterized by a small top partner mass. In the SLH, currently the lower bound on the top partner mass is derived from Eq. (36), where $f$ is stringently constrained by dilepton resonance searches; constraints from direct searches for top partner production are not as competitive at the moment. For given $f$, a small top partner mass can be obtained by requiring a large $t_\beta$ (or $t_\beta^{-1}$ for $t_\beta < 1$), which is in turn bounded by unitarity considerations. To summarize, the first requirement points to the region characterized by a small $f$ and large $t_\beta$ (or $t_\beta^{-1}$ for $t_\beta < 1$). As to the second requirement, in the present work we consider the following electroweak observables: 1. The W boson mass $m_W$.
2. The R observables measured at the Z pole: $R_b$, $R_c$, $R_e$, $R_\mu$, $R_\tau$, defined as $R_b \equiv \Gamma(b\bar b)/\Gamma({\rm had})$, $R_c \equiv \Gamma(c\bar c)/\Gamma({\rm had})$ and $R_\ell \equiv \Gamma({\rm had})/\Gamma(\ell^+\ell^-)$ for $\ell = e, \mu, \tau$, in which $\Gamma({\rm had})$ denotes the total hadronic width of the Z boson, and $\Gamma(b\bar b)$, $\Gamma(c\bar c)$, $\Gamma(\ell^+\ell^-)$ denote the Z boson partial widths into the $b\bar b$, $c\bar c$, $\ell^+\ell^-$ channels.
To set up the calculation we choose the fine structure constant $\alpha_{\rm em} \equiv e^2/(4\pi)$ (defined at the Z pole), the Fermi constant $G_F$ and the Z boson mass $m_Z$ as the input parameters. Expressed with the SM quantities we have the usual tree-level relations, and these relations get modified in the SLH. We note that in the corresponding equations, as in Section II, $g$, $v$, $s_W$ represent quantities in the SLH and are thus different from the SM quantities $g_{\rm SM}$, $v_{\rm SM}$, $s_{W,{\rm SM}}$; from the two sets of relations we may derive the shift of the weak mixing angle. To calculate the R observables in the SLH we also need the modified Z couplings to light fermions. Although the corrections relative to the SM come in at order $v^2/f^2$, they are still relevant since the R observables have been measured to a few per mille precision. In such a case the diagonal entries in the rotation matrices in Eq. (49) should be understood as $1 - \frac{1}{2}\delta_{Dd}^2$ and $1 - \frac{1}{2}\delta_{Ss}^2$, respectively. The modified Z couplings to light fermions in the SLH can then be written as the SM-like coefficient plus $\delta_Z$ times the corresponding $Z'$ coefficient, $g_{L,Z,f} \to g_{L,Z,f} + \delta_Z\, g_{L,Z',f}$ and $g_{R,Z,f} \to g_{R,Z,f} + \delta_Z\, g_{R,Z',f}$, for $f = u, c, b, e, \mu, \tau$ (Eq. (91)). Here $\delta_Z$ is the $\mathcal{O}(v^2/f^2)$ $Z$-$Z'$ mixing angle appearing in the mixing relation between the mass eigenstates: $Z_m$, $Z'_m$ denote the final mass eigenstates after the $\mathcal{O}(v^2/f^2)$ rotation, while $Z$, $Z'$ denote the states before the rotation, as defined via Eq. (11). In the process of gauge boson mass diagonalization, $\delta_Z$ is computed explicitly; $T^3_f$ and $Q_f$ denote the third component of the isospin and the electric charge of $f$, respectively. $g_{L,Z',f}$, $g_{R,Z',f}$ are leading-order coefficients of the Lagrangian terms $\bar f_L\gamma^\mu f_L Z'_\mu$, $\bar f_R\gamma^\mu f_R Z'_\mu$, which are given in Eq. (64), Eq. (65) and Eq. (66), while $g_{L,Z,f}$, $g_{R,Z,f}$ in Eq. (91) denote the corresponding coefficients for the Z boson. For $f = d$ the modified Z couplings in the SLH receive an additional correction due to the left-handed $D$-$d$ mixing; the corresponding formulae for $f = s$ can be obtained by the replacement $d \to s$, $D \to S$. $g_{L,Z,D}$, $g_{L,Z,S}$ are leading-order coefficients of the Lagrangian terms $\bar D_L\gamma^\mu D_L Z_\mu$, $\bar S_L\gamma^\mu S_L Z_\mu$, with $g_{L,Z,D} = g_{L,Z,S}$ proportional to the charge factor $\frac{1}{3}$. Now we have all the SLH couplings necessary to calculate the R observables. It should be noted that in the above coupling formulae $s_W$, $c_W$, $t_W$ are quantities in the SLH and are therefore different from their SM counterparts $s_{W,{\rm SM}}$, $c_{W,{\rm SM}}$, $t_{W,{\rm SM}}$, see Eq. (90). Therefore, the modification of Z couplings to light fermions relative to the SM is caused by three factors: $Z$-$Z'$ mixing, left-handed $D$-$d$, $S$-$s$ mixing, and the correction of the weak mixing angle. A 95% CL constraint can be obtained in the $f$-$t_\beta$ plane by performing a $\chi^2$-fit of the five R observables, where $R_f$ denote the experimental values, $\delta_{R_f}$ the associated experimental uncertainties, $R_{f,{\rm SM}}$ the SM theory predictions and $\delta_{R_f,{\rm SM}}$ the associated theory uncertainties; their values are listed in Table I. In Figure 4 the results of the electroweak precision analysis of $m_W$ and the R observables are shown. To clarify the situation we present the results according to whether $t_\beta \geq 1$ and to the sign combination of the rotation parameters $\delta_{Dd}$, $\delta_{Ss}$ (see Eq. (53)). At first sight there are eight possibilities in total; however, it is immediately recognized that $(\delta^+_{Dd}, \delta^-_{Ss})$ and $(\delta^-_{Dd}, \delta^+_{Ss})$ make no difference in terms of constraints in the $f$-$t_\beta$ plane, reducing the number of possibilities to six.
Therefore we obtain the six panels in Figure 4, each panel showing one possibility as described in the caption.
For all the panels, the green and yellow regions correspond to parameter points that are allowed by the $\chi^2$-fit of the R observables at 68% and 95% CL, respectively. These allowed regions do not exhibit a $t_\beta \to t_\beta^{-1}$ symmetry (for example, the allowed regions in the upper right panel and the lower left panel still differ under the transformation $t_\beta \to t_\beta^{-1}$), since in the computation of the R observables the correction of $s_W^2$ relative to its SM value has to be taken into account, as was pointed out previously. When $f$ is larger than about 17 TeV there is a lower theoretical bound (from the mass relation) on $t_\beta$ or $t_\beta^{-1}$ which is larger than 1, corresponding to the white region at large $f$ and small $t_\beta$ or $t_\beta^{-1}$ in each panel. The $2\sigma$ constraints from $m_W$ measurements are implemented by requiring the SLH prediction to lie within two combined (experimental plus SM-theory) standard deviations of the experimentally measured W boson mass $m_W$, with $\delta_{m_W}$ and $\delta_{m_W,{\rm SM}}$ denoting the associated experimental and theoretical uncertainties, respectively. We superimpose the constraint boundaries on the six plots as blue or red lines, representing constraints from Tevatron or ATLAS measurements, respectively. For all these $m_W$ constraint boundary lines, the regions on the right side of the lines are allowed at the $2\sigma$ level.
As can be seen from Figure 4, if $t_\beta < 1$, the region favored by naturalness considerations is disfavored by constraints from both the R observables and the W boson mass measurements, regardless of the sign combination of the rotation parameters $\delta_{Dd}$, $\delta_{Ss}$. If $t_\beta \geq 1$, the W boson mass measurement does not constrain the parameter region favored by naturalness considerations. However, in this case the constraints from the R observables are significant when either of the rotation parameters $\delta_{Dd}$, $\delta_{Ss}$ adopts the plus sign in Eq. (53). This is because the plus-sign choice leads to a large $t_\beta$ enhancement of the rotation parameter and therefore a larger deviation of the Z couplings to the corresponding fermion. Although the lower bound on $f$ has been pushed to around 7.5 TeV by LHC dilepton resonance searches, the R observable constraints still force us to avoid this $t_\beta$ enhancement, and consequently the only possibility left is $\delta^-_{Dd}$, $\delta^-_{Ss}$ with $t_\beta \geq 1$. This result has important consequences for the pseudo-axion phenomenology, since the sign combination of $\delta_{Dd}$, $\delta_{Ss}$ determines how $\eta$ interacts with the $D$, $S$ quarks, which in turn influences the decay and production of the $\eta$ particle, as will be discussed in more detail in the next section.
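As an illustration of the fit machinery behind Figure 4, the sketch below evaluates a conventional $\chi^2$ over the five R observables, combining experimental and SM-theory uncertainties in quadrature. The exact definition used in the paper and the numerical inputs of Table I are not reproduced here, so the arrays are placeholders.

```python
import numpy as np

def chi2_R(r_slh, r_exp, err_exp, err_sm):
    """Chi^2 over the R observables, assuming the conventional form
    sum_f (R_f^SLH - R_f^exp)^2 / (dR_f^exp^2 + dR_f^SM^2)."""
    r_slh, r_exp = np.asarray(r_slh, float), np.asarray(r_exp, float)
    var = np.asarray(err_exp, float) ** 2 + np.asarray(err_sm, float) ** 2
    return float(np.sum((r_slh - r_exp) ** 2 / var))

# Placeholder vectors ordered as (R_b, R_c, R_e, R_mu, R_tau).  In practice
# r_slh is computed from the modified Z couplings at each point of the
# (f, t_beta) grid, and a point is kept at 95% CL if its chi^2 lies below the
# chosen threshold (e.g. chi2_min + 5.99 for two fitted parameters).
```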
In previous literature on the SLH model the $t_\beta \geq 1$ and $t_\beta < 1$ cases are usually not distinguished, since a $t_\beta \to t_\beta^{-1}$ symmetry is tacitly assumed, and only the $t_\beta \geq 1$ case is then considered. Strictly speaking, however, this symmetry is only valid when the leptonic sector is not considered. Here we have established clearly that, in the region favored by naturalness considerations, the $t_\beta < 1$ case is disfavored by the measurements of $m_W$ and the R observables. This is closely related to the breakdown of the $t_\beta \to t_\beta^{-1}$ symmetry in the lepton sector. Moreover, in previous literature [12,24], the sign combination of the rotation parameters $\delta_{Dd}$, $\delta_{Ss}$ was simply assumed to be (effectively) $\delta^-_{Dd}$, $\delta^-_{Ss}$, in order to suppress contributions to the electroweak precision observables. Here we also establish this choice firmly based on constraints from the R observables, combined with $m_W$ and naturalness considerations, keeping in mind that the constraint on $f$ has been pushed to around 7.5 TeV by updated LHC constraints.
V. PRODUCTION AND DECAY OF THE PSEUDO-AXION
With the preparation made in the previous three sections we are now ready to calculate the production and decay of the pseudo-axion. We will restrict ourselves to the region $2m_t \lesssim m_\eta \lesssim 1$ TeV, which is favored by naturalness considerations. All the related partial width formulae are given in Appendix B.
A. Decay of the Pseudo-Axion
For $\eta$ in the mass range $2m_t \lesssim m_\eta \lesssim 1$ TeV, it can always decay into the $t\bar t$, $gg$, $\gamma\gamma$ channels. (The $WW$, $ZZ$, $Z\gamma$ channels are also possible and may have branching ratios comparable to $\gamma\gamma$; however, from a detection viewpoint it is preferable to consider further decays into leptons in these channels, leading to an additional suppression by the leptonic branching fractions. For simplicity we will not consider these channels further in this work.) $\eta \to ZH$ is highly suppressed, since the antisymmetric $ZH\eta$ vertex is suppressed to $\mathcal{O}(v^3/f^3)$ while the symmetric $ZH\eta$ vertex does not contribute, as pointed out in Section III. If the new fermions $D$, $S$, $N$ are heavy enough that they cannot appear as decay products of $\eta$, then we are left with only the $t\bar t$, $gg$, $\gamma\gamma$ channels. Nevertheless, we should keep in mind that when $f$ and $m_\eta$ are given, the partial widths of these channels still depend on the masses of the additional heavy quarks $T$, $D$, $S$ even though they do not appear as decay products of $\eta$. First, the $\eta \to t\bar t$ decay is controlled by the rotation parameter $\delta_t$, which in turn depends on the top partner mass. The loop-induced decays $\eta \to gg, \gamma\gamma$ have contributions from both the top quark and the heavy quark partners $T$, $D$, $S$. The top quark contribution again depends on $\delta_t$, while the $T$, $D$, $S$ contributions depend on the $\eta T\bar T$, $\eta D\bar D$, $\eta S\bar S$ couplings, which are proportional to the corresponding rotation parameters times the quark partner mass. Experimentally, the current lower bound for the light-flavor quark partners $D$ and $S$ is around 700 GeV [34]. Thus, for a heavy enough $\eta$, the $\eta \to Dd$, $Ss$ channels are still possible if the mass of $D$ or $S$ is close to the lower bound. To be definite, we will consider four benchmark scenarios (Cases A-D). The total width and branching ratios of $\eta$ are shown in Figure 5 and Figure 6 for Case A and Case B, respectively. In these two cases, the additional fermion partners $D$, $S$, $N$ are not light enough to appear as decay products of $\eta$, and therefore we are left with the standard $\eta \to t\bar t, gg, \gamma\gamma$ channels. From the figures it is clear that $\eta$ can be viewed as a narrow-width particle, although the width is not small enough to give rise to displaced vertices. In both Case A and Case B and for both sign choices, $\eta$ decays almost 100% to $t\bar t$, with only very small branching ratios to $gg$ ($\mathcal{O}(0.1\%)$) and $\gamma\gamma$ ($\mathcal{O}(0.001\%)$). Here (and in the following) all the partial widths are calculated at LO, but it is obvious that the inclusion of higher-order radiative corrections has little effect on the whole picture. From a detection point of view this situation is somewhat unfortunate, since the dominant channel $t\bar t$ suffers from a huge background at hadron colliders, while the clean channel $\gamma\gamma$ has an extremely small branching ratio. It is natural to ask how the situation changes if any of $D$, $S$, $N$ is light enough that exotic channels like $\eta \to NN, N\nu, Dd, Ss$ can open. This is embodied in Cases C and D, and we show the corresponding branching ratio plots in Figure 7. Nevertheless, the exotic channels contribute at most a few percent in terms of branching ratio, and are therefore of little use for $\eta$ detection even if any of $D$, $S$, $N$ is light enough. This can be understood from the interaction Lagrangian containing the $\eta Dd$, $\eta Ss$ and $\eta N\nu$, $\eta NN$ vertices, Eq. (60) and Eq. (62). When $\eta \to NN$ is open, $M_{N_n}$ can be at most $\mathcal{O}(v)$. Moreover, the $\eta NN$ coupling suffers from a $t_\beta$ suppression. Therefore, the $\eta \to NN$ channel is numerically much suppressed compared to the $\eta \to t\bar t$ channel.
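Since the partial width formulae themselves are deferred to Appendix B (not shown here), the following generic estimate only illustrates why a heavy $\eta$ with an $\mathcal{O}(v/f)$-suppressed top coupling ends up narrow but still prompt; the coupling normalisation is an assumption, not the SLH expression.

```python
import numpy as np

def gamma_pseudoscalar_to_tt(m_eta, y_t, m_t=0.173, n_c=3):
    """Width of a generic pseudoscalar P -> t tbar for an interaction
    i * y_t * P * (tbar gamma_5 t):  Gamma = N_c * y_t^2 * m_P * beta / (8 pi),
    beta = sqrt(1 - 4 m_t^2 / m_P^2).  Masses in TeV, width in TeV."""
    beta = np.sqrt(1.0 - 4.0 * m_t ** 2 / m_eta ** 2)
    return n_c * y_t ** 2 * m_eta * beta / (8.0 * np.pi)

# Assumed coupling scaling ~ (m_t / v) * (v / f) = m_t / f, for f = 8 TeV:
width = gamma_pseudoscalar_to_tt(m_eta=0.5, y_t=0.173 / 8.0)
print(width)  # O(10 MeV): narrow, but far too large for displaced vertices
```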
B. Decay of the Top Partner
The pseudo-axion may appear as a decay product of some of the additional heavy particles in the model. Among the additional particles in the SLH, only $Z'$ and $T$ are closely related to EWSB, and naturalness favors small $Z'$ and $T$ masses within the theoretical constraints. In this subsection we consider the decay of the top partner. To be specific, we fix $f = 8$ TeV and $m_\eta = 500$ GeV and plot the total width and branching ratios of $T$ as a function of the top partner mass $m_T$ in Figure 8. Both the $\delta^+_t$ and $\delta^-_t$ possibilities are considered. Note that when $m_T$ is also given, then according to the mass relation $t_\beta$ can be calculated, which in turn determines the total width and branching ratios. The relation ${\rm Br}(T\to bW) = 2{\rm Br}(T\to tH) = 2{\rm Br}(T\to tZ)$ holds to a good approximation. In the $\delta^+_t$ case, ${\rm Br}(T\to t\eta)$ is small (not larger than 10% for $m_T > 2$ TeV) and decreases with increasing $m_T$. In the $\delta^-_t$ case, ${\rm Br}(T\to t\eta)$ is sizable and becomes dominant (larger than 50%) for $m_T \gtrsim 2.2$ TeV. Another interesting and important feature concerns the total width of $T$. In the $\delta^-_t$ case, the total width is around 20 GeV, which makes the narrow width approximation valid to high precision. In the $\delta^+_t$ case, the total width increases with $m_T$; for $m_T \approx 3.5$ TeV it grows to around 500 GeV. In this case $\Gamma/M \approx 20\%$ and the narrow width approximation still roughly holds, if the phase space is large enough. The width will, however, leave an appreciable impact on the invariant mass distribution of the $T$ decay products.
C. Direct Production of the Pseudo-Axion
The pseudo-axion can be directly produced via the gluon fusion mechanism at hadron colliders.
The particles running in the loop now contain $t$, $T$, $D$, $S$, all of which enter the calculation of the production cross section.
D. Pseudo-Axion Production from Top Partner Decay
The above discussion shows that it is very difficult to detect $\eta$ via the gluon fusion and $t\bar t\eta$ associated production channels. It is therefore natural to consider alternative $\eta$ production mechanisms, such as decays of heavier particles. In the SLH, the particles that can be heavier than $\eta$ are $T$, $D$, $S$, $N$, $Z'$, $X$ and $Y$. Here we will concentrate on $T$, which is most tightly connected to EWSB. We will briefly comment on the possibility of detecting $\eta$ from other heavy particle decays in the next subsection.
Under current constraints, the lower bound on $m_T$ is already larger than the largest possible value of $m_\eta$ plus $m_t$, therefore the exotic decay channel $T \to t\eta$ is always open. The branching fraction of $T \to t\eta$ has been discussed above (see Figure 8). Here we focus on top partner production. The two major production mechanisms are pair production through the QCD interaction, and single production through the $TbW$ vertex. Pair production has the virtue of being model-independent, while single production depends on the value of $\delta_t$. In Figure 11 we present the cross sections of $pp \to T\bar T$ and $pp \to Tj + \bar Tj$ for both $\delta^+_t$ and $\delta^-_t$, as a function of $m_T$, while we fix $f = 8$ TeV, $m_\eta = 500$ GeV. Three center-of-mass energies (14, 27, 100 TeV) are considered. Whether pair or single production delivers the larger cross section depends on the sign choice for $\delta_t$ and the center-of-mass energy. In the $\delta^+_t$ case, the single production cross section is larger for all three center-of-mass energies. In the $\delta^-_t$ case, at 14 TeV single production is larger since pair production is highly suppressed by phase space; at 27 TeV pair and single production become comparable, while at 100 TeV collider energy pair production dominates.
To detect $\eta$ we would also like to consider the top partner decay $T \to t\eta$ that follows the pair or single production of $T$. The associated cross sections are plotted as a function of $m_T$ in Figure 12, using the narrow width approximation, for both $\delta^+_t$ and $\delta^-_t$. For definiteness we take $f = 8$ TeV, $m_\eta = 500$ GeV. To be precise, the plotted cross sections are the production cross sections multiplied by the appropriate $T \to t\eta$ branching factors (for $pp \to Tj$, the contribution from $pp \to \bar Tj$ is also included). For the purpose of $\eta$ detection, let us consider using the $\eta \to t\bar t$ channel, which has an almost 100% branching fraction. The $\eta$ production from top partner decays then generically leads to a multi-top ($\geq 3$) signature. Moreover, the top quarks will be boosted, since $m_T \gg m_t + m_\eta$. For example, suppose a 2 TeV top partner is produced with little boost in the lab frame and then decays into $t + \eta$. At this step $t$ and $\eta$ roughly share the rest energy of the top partner and therefore each has about 1 TeV of energy. The $\eta$ boson then further decays into $t$ and $\bar t$, each of which has an energy of roughly 0.5 TeV. All three top quarks are boosted: the first one will have a decay ($t \to bW$) cone size approximated by $\sim 2m_t/E_t \approx 0.4$, while the second and third top have $\sim 2m_t/E_t \approx 0.8$. Furthermore, the second and third top quarks coming from the $\eta$ decay are close to each other, with a separation approximated by $\sim 2m_\eta/E_\eta \approx 0.8$.
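The $\Delta R \sim 2m/E$ rule of thumb used above is easy to tabulate; the snippet below simply evaluates it for the 2 TeV top-partner benchmark (order-of-magnitude estimates only).

```python
m_t, m_eta, m_T = 0.173, 0.5, 2.0   # masses in TeV

E_t1 = E_eta = m_T / 2.0            # T produced nearly at rest: t and eta split m_T
E_t23 = E_eta / 2.0                 # each top from the subsequent eta -> t tbar decay

print(f"leading top decay cone    ~ {2 * m_t / E_t1:.2f}")
print(f"subleading top decay cone ~ {2 * m_t / E_t23:.2f}")
print(f"t-tbar pair separation    ~ {2 * m_eta / E_eta:.2f}")
```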
In the single production case, the signature will be 3t + j, in which the first top is highly boosted while the second and third are still somewhat boosted and close to each other. One can make use of such kinematics to discriminate from QCD backgrounds. The most serious background is perhaps multi-top production. One may be able to reduce the background using the boosted techniques [38]. In the pair production case, if we consider one top partner decaying into tη with the other decaying into bW , then we obtain a signature of 3t+b+W in which the top quarks and also the W boson will be boosted. In both single and pair production channels, the invariant mass peaks at m T and m η will also be helpful in discriminating between the signal and background. Nevertheless, a full signal-background analysis using boosted-top techniques is beyond the scope of the present work.
From Figure 12, we see that the cross sections at the 14 TeV (HL-)LHC for all these channels are very small ($< 1$ fb), making detection very difficult. Nevertheless, the signal cross sections increase significantly with the collider energy. For example, at the 100 TeV FCC-hh, for both $\delta^+_t$ and $\delta^-_t$ and for both the pair and single production channels, the cross sections could reach $\mathcal{O}(100\ {\rm fb})$ at relatively small $m_T$. In the $\delta^+_t$ case, single production (with the top partner decaying to $t\eta$) delivers a cross section of about 200 fb, which is larger than that of the pair production channel. In the $\delta^-_t$ case, pair production (with one top partner decaying to $t\eta$) delivers a cross section of about 400 fb, which is larger than that of the single production channel.
In principle, top partner production and decay provide a way to measure $t_\beta$ (which is important for testing the SLH mass relation) and also to discriminate between the $\delta^+_t$ and $\delta^-_t$ cases. In practice, we may consider the partial width ratio $R_\eta \equiv \Gamma(T\to t\eta)/\Gamma(T\to bW)$ as both an indicator of the sign choice for $\delta_t$ and a way to measure $\delta_t$, which in turn determines $t_\beta$. $\delta_t$ can also be determined from $pp \to Tj$ production, since the cross section is proportional to $\delta_t^2$. Furthermore, in the $\delta^+_t$ case the total width of $T$ could reach $\mathcal{O}(100\ {\rm GeV})$, which may have an impact on the invariant mass distribution of the $T$ decay products (e.g. $bW$); a measurement of the $T$ total width could in principle also help determine the value of $\delta_t$. If $\delta_t$ is determined (including the sign choice), we note however that the determination of $t_\beta$ and the test of the mass relation still require the measurement of $f$ and $m_\eta$, which can be obtained if we are able to measure the masses of the $Z'$ and $\eta$ particles.
E. Comments on Other Channels
Currently the SLH is stringently constrained by the LHC $Z' \to \ell\ell$ search; this also means, however, that if the SLH were realized in nature, the $Z' \to \ell\ell$ signature would be the first place where we might expect the appearance of new physics. It is then also important to consider whether we may detect $\eta$ as a decay product of $Z'$. Two channels might be conceived: $Z' \to \eta H$ and $Z' \to \eta Y$. However, it turns out they have branching fractions that are too small: ${\rm Br}(Z' \to \eta H) < 0.01$ and ${\rm Br}(Z' \to \eta Y) < 10^{-4}$. This holds regardless of whether the $Z' \to D\bar D, S\bar S, NN$ channels are kinematically allowed. Therefore it is not preferable to consider detecting $\eta$ from $Z'$ decay.
If kinematically allowed, we might also consider the $D \to d\eta$, $S \to s\eta$, $N \to \nu\eta$ decays. However, these decay channels also suffer from small branching fractions, since the $\eta Dd$, $\eta Ss$, $\eta N\nu$ couplings are $\mathcal{O}(v/f)$ suppressed compared to the $HDd$, $HSs$, $HN\nu$ couplings (see Eq. (60) and Eq. (62)). For example, $D$ will dominantly decay to $uW$, $dZ$, $dH$, with only ${\rm Br}(D\to d\eta) < 1\%$, for the benchmark point $f = 8$ TeV, $m_T = 2$ TeV, $m_\eta = 0.5$ TeV and any value of $m_D$. Here $\delta^-_{Dd}$ is assumed, to be consistent with the electroweak precision constraints. As to $D$ production, in the $\delta^-_{Dd}$ case there is a $t_\beta^{-1}$ suppression for single $D$ production, therefore $D$ pair production is more promising. Moreover, the current collider constraint on the $D$ mass is not stringent, such that $m_D = 700$ GeV is still allowed [34]. Therefore, if $m_D$ is as light as 700 GeV, the large $pp \to D\bar D$ production cross section could compensate for the small $D \to d\eta$ branching fraction, leading to a sizable $\eta$ production rate. At the 100 TeV FCC-hh, the $\eta$ production cross section from $D$ decay, $\sigma(pp \to D\bar D \to \eta + {\rm anything})$, could also reach more than 100 fb for $m_D$ not much larger than 700 GeV (see Figure 13). This is comparable to the $\eta$ cross section from top partner production, and in principle could also be used to measure $t_\beta$. The expected signature would be $t\bar t + 2j + W/Z/H$, in which the $W/Z/H$ should be boosted. The existence of various intermediate resonances would be helpful in discriminating signal from background. Nevertheless, we should be aware that naturalness does not offer any guidance on the preferred value of $m_D$; this is different from the case of $m_T$, for which naturalness clearly favors a lighter top partner. The case of $pp \to S\bar S$ production with $S \to s\eta$ decay is completely analogous to the above discussion of $D$ production and decay. For $N$, ${\rm Br}(N \to \nu\eta)$ is also very small (less than 1% for the benchmark point $f = 8$ TeV, $m_T = 2$ TeV, $m_\eta = 0.5$ TeV and any value of $m_N$). Moreover, $N$ does not have QCD pair production channels like $D$, $S$, therefore it is difficult to detect $\eta$ from $N$ decay at hadron colliders. The $X$, $Y$ gauge bosons in the SLH may have decays like $X \to \eta W$ and $Y \to H\eta$. However, the single production cross sections of $X$, $Y$ at hadron colliders are highly suppressed, and we need to rely on production in association with other heavy particles (heavy gauge bosons or quark partners) [24]. Since the $X$, $Y$ bosons are quite heavy (with masses of about $0.8\,m_{Z'}$), their production with other heavy particles would be limited by phase space, while their decays are expected to be dominated by fermionic final states. Therefore we do not consider $\eta$ production from $X$, $Y$ decays as promising channels for $\eta$ detection.
Table II: Summary of $\eta$ production from $T$, $D(S)$ decays at the 100 TeV FCC-hh. For $pp \to Tj$, the contribution from $pp \to \bar Tj$ is also taken into account. For the $T\bar T$, $Tj$ channels, the benchmark point is $f = 8$ TeV, $m_T = 2$ TeV, $m_\eta = 500$ GeV, while for the $D\bar D$ channels the benchmark point is $f = 8$ TeV, $m_T = 3$ TeV, $m_\eta = 500$ GeV, $m_D = 700$ GeV. When listing the signatures for the $T\bar T$, $D\bar D$ channels we do not consider the situation in which both quark partners decay into $\eta + t$ or $\eta + j$, but this possibility is taken into account in the cross section values and plots.
VI. DISCUSSION AND CONCLUSIONS
The Simplest Little Higgs model provides a very simple way to concretely realize the collective symmetry breaking mechanism, in order to alleviate the Higgs mass naturalness problem. In the scalar sector, its particle content is very economical: besides the CP-even Higgs, which should serve as the 125 GeV Higgs-like particle, the only additional scalar particle is the pseudo-Nambu-Goldstone particle $\eta$ associated with a remnant global $U(1)$ symmetry. The detection of $\eta$ is important since its mass enters the crucial SLH mass relation, and it will also play an important role in discriminating the SLH from other new physics scenarios. In this work we are concerned with the production and decay of the $\eta$ particle at future hadron colliders. We found that in the natural region of parameter space, $m_\eta$ is larger than $2m_t$, $\eta$ decays almost exclusively to $t\bar t$, and ${\rm Br}(\eta\to\gamma\gamma)$ is too small to be considered promising for detection. It is also very difficult to detect $\eta$ in the direct production channels $pp \to \eta$ (gluon fusion) and $pp \to t\bar t\eta$. Channels that are worth further consideration include $\eta$ production from heavy quark partner ($T$, $D$, $S$) decays, in which the heavy quark partner may be singly (for $T$) or pair produced. The corresponding $\eta$ production cross section at the 100 TeV FCC-hh could reach $\mathcal{O}(100\ {\rm fb})$ in a certain range of parameter space allowed by current constraints, while at the 14 TeV (HL-)LHC the rate may be too small for detection. However, the detection prospects in these channels (at 100 TeV) might still be challenging, since the final states are quite complicated, including multi-top production in association with other objects, one or more of which could be boosted, requiring sophisticated tagging techniques. At the same time the SM background also grows considerably with the collider energy, with a more complicated hadronic environment. The aim of this paper is to examine the $\eta$ production channels, with a LO estimate of the $\eta$ cross sections in the relatively promising ones as a function of model parameters, keeping in mind the most up-to-date theoretical and experimental constraints (see Table II for a summary). We do not attempt here to give a quantitative assessment of the collider sensitivities in these channels. Phenomenology of the $\eta$ particle in the SLH was studied long ago by several papers (e.g. [17,18,20,21]). Compared to all the previous studies, the present paper differs in a few crucial aspects: 1. Instead of working with the ad hoc assumption of no direct contribution to the scalar potential from the physics at the cutoff, we take into account in all calculations the crucial SLH mass relation Eq. (29), which is a reliable prediction of the SLH. Therefore our predictions preserve all the correlations required by theoretical consistency but do not depend on the choice of any fixed cutoff value such as $4\pi f$.
2. We have focused our attention on the parameter region favored by naturalness considerations. This region is characterized by small m_T and large t_β or t_β^{-1}. The favored η mass is larger than 2m_t.
3. We have taken into account the recent collider constraint on f (f ≳ 7.5 TeV), which is much more stringent than the constraints obtained long ago. We also take into account the constraint from perturbative unitarity, which sets an upper bound on the allowed value of t_β or t_β^{-1}. These two factors determine the current lower bound on m_T and crucially affect the largest cross section that can be achieved in all channels. 4. Our study is based on an appropriate treatment of the diagonalization of the vector-scalar system in the SLH, and especially of the field redefinition related to η. This affects the derivation of the ZHη vertices and also the η couplings to fermions, which were not treated properly in works prior to ref. [22]. 5. We also clarify the role played by the symmetric VSS vertices that appear in the Lagrangian and how they are compatible with general principles such as field-redefinition invariance and gauge independence.
From our study it turns out that the detection of η at the 14 TeV (HL-)LHC will be very difficult; therefore a pp collider with higher energy and larger luminosity, such as the 27 TeV HE-LHC or even the 100 TeV FCC-hh or SppC, is motivated to capture the trace of such an elusive particle. Moreover, we would generally expect some other SLH signatures (e.g. Z′ → ℓℓ, T → bW or D → uW) to show up earlier than η signatures, since the η signatures are usually very complicated (with multiple top quarks) and suffer from small rates. It is nonetheless important to study the properties of η, since they are crucial for testing the SLH mass relation and also provide a basis for model discrimination.
The same formulae hold for S decay channels with the replacements δ Dd → δ Ss , m D → m S , D → S, d → s, u → c. They also hold for N decay channels with the replacements m D → m N , D → N, d → ν, u → and
Z decay
For Z → ff decay modes, assuming the interaction Lagrangian L ⊃ Σ_f g (a_fL f̄_L γ^μ f_L + a_fR f̄_R γ^μ f_R) Z_μ, the decay width is given by Eqs. (B19) and (B20). Decay widths in bosonic channels are given by Eqs. (B28) and (B29).
"Physics"
] |
Fungi-Based Microbial Fuel Cells
Fungi are among the microorganisms able to generate electricity as a result of their metabolic processes. Over the last several years, a large number of papers on various microorganisms for current production in microbial fuel cells (MFCs) have been published; however, fungi still lack sufficient evaluation in this regard. In this review, we focus on fungi, paying special attention to their potential applicability to MFCs. Fungi used as anodic or cathodic catalysts, in different reactor configurations, with or without the addition of an exogenous mediator, are described. Contrary to bacteria, in which the mechanism of electron transfer is relatively well understood, the mechanism of electron transfer in fungi-based MFCs has not been studied intensively. Thus, here we describe the main findings, which can be used as the starting point for future investigations. We show that fungi have the potential to act as electrogens or cathode catalysts, but MFCs based on bacteria-fungus interactions are especially interesting. The review presents the current state of the art in the field of MFC systems exploiting fungi.
Introduction
The technology known as microbial fuel cells (MFCs) has been intensively developed over the last two decades, due to its great potential for clean energy production in the form of electric current [1,2]. MFCs owe their popularity to various microorganisms: bacteria, fungi, or algae, whose catalytic activity allows for current generation from a wide range of substrates, from simple sugars to toxic, highly polluted wastewaters [3]. Usually, microorganisms are used in the anode compartment of MFCs, where they transfer electrons released from the microbial oxidation of an organic substrate. The protons formed during oxidation pass through a proton-selective membrane to the cathode, where they combine with oxygen to form water. The spontaneous movement of electrons from anode to cathode results in the production of electric current in the system. Nowadays, the most popular MFCs are those exploiting bacteria; their development began after the first electrogenic strains, such as Shewanella sp. or Geobacter sp., were discovered in the late 1990s [4,5]. Electrogens were found to transfer electrons directly to the anode through outer-membrane transport proteins, like cytochrome c, or through membrane appendages called nanowires [6]. Other electrogenic strains, like Pseudomonas aeruginosa, have been proved to self-produce endogenous mediators, such as pyocyanin, that can shuttle electrons to the anode [7]. Electrogenic bacteria are exploited in MFC systems as single strains or in bacterial consortia, which are especially favorable when complex substrates are used for current production [8].
Although the breakthrough discovery of the electrical effects accompanying organic matter decomposition was made in 1911 on yeast and bacteria simultaneously, fungi-based MFCs have been investigated less than bacterial ones [9]. This lower interest in fungal MFCs was connected with the lack of proven fungal electrogens and the lower power production of MFCs working on single fungal strains.
Fungi as Biocatalysts in the Anode of MFCs
The anode is a crucial element of each MFC owing to its direct contact with the electron-producing microorganisms. The anode not only ensures electron conductivity, but also strongly influences the adhesion of microorganisms, which determines the biofilm formation needed for ensuring the electron transfer from the microorganism to the electrode. Various fungi species have been used as anode microorganisms, producing electrons from different substrates and transferring them to the anode directly via redox-active enzymes or due to chemical mediators. The power production obtained from fungi-based MFCs is from several mWm −2 to as high as several Wm −2 , depending on the anode material and construction, the fungi type, and the presence of a mediator.
Saccharomyces cerevisiae
Yeasts are among the best-studied eukaryotic cells, as they are easy to grow and susceptible to biological and genetic modifications. Most strains are not pathogenic and can metabolize a wide range of substrates. For these reasons, yeast has also been most widely considered as a biocatalyst in MFCs. During the last two decades, several papers reporting the utilization of Saccharomyces cerevisiae as an anode biocatalyst in yeast-based MFCs, with or without an external mediator, have been published. A variety of exogenous mediators, e.g., methylene blue (MB), neutral red (NR), or thionine, were applied in order to enhance electron transfer between the microorganism and the anode (Table 1). Bennetto et al. reported that Baker's yeast could be utilized as a biocatalyst in the anode of an MFC where two mediators-thionine and resorufin-were used [40]. The results showed that an MFC employing S. cerevisiae immobilized on the anode with resorufin as a mediator obtained a maximum power density of 155 mWm −2 . Yeast-based MFCs with the addition of MB as a mediator were also studied by Walker et al. [41] and Permana et al. [42]. They used S. cerevisiae to determine optimal conditions for electricity generation with respect to initial fungi concentration, temperature, substrate, and oxygen concentration in double-chamber yeast-based MFCs. However, power production lower than that for thionine and resorufin was obtained [40]. The effect of MB on the performance of a S. cerevisiae-based two-compartment glucose-fed MFC was examined elsewhere [43]. A maximum power density of 146 ± 7.7 mWm −3 was achieved at a 1000 Ω resistance. The low yield of the yeast-based fuel cell in this experiment was assigned to the O 2 reduction overpotential and inefficient electron transfer between the mediator and cell walls of S. cerevisiae. The authors reported cytotoxicity of the mediator as a factor limiting the power output.
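As a side note on how such numbers are typically derived, the power quoted for a given external resistance follows from the measured cell voltage via P = U^2/R, normalized by electrode area or anolyte volume. The sketch below only illustrates that arithmetic; the voltage, resistance, area, and volume values are hypothetical and do not correspond to any of the cited studies.

```python
# Minimal sketch of converting a measured MFC cell voltage into areal and
# volumetric power densities. All input values are hypothetical.

U = 0.35            # cell voltage across the external load [V] (assumed)
R = 1000.0          # external resistance [ohm]
area_m2 = 2.5e-3    # projected anode area [m^2] (assumed)
vol_m3 = 2.0e-4     # anolyte volume [m^3] (assumed)

power_w = U ** 2 / R                     # P = U^2 / R
p_areal_mw_m2 = power_w / area_m2 * 1e3  # mW per m^2 of anode
p_vol_mw_m3 = power_w / vol_m3 * 1e3     # mW per m^3 of anolyte

print(f"P = {power_w * 1e3:.3f} mW, {p_areal_mw_m2:.1f} mW/m^2, {p_vol_mw_m3:.1f} mW/m^3")
```

This also explains why values reported per electrode area (mW m^-2) and per reactor volume (mW m^-3) cannot be compared directly.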
As the construction of the anode significantly affects the performance of an MFC, modification of its surface was proposed to improve S. cerevisiae-based MFC performance. The goal was to enhance microorganism adhesion and electron transfer from the biocatalyst to the anode surface. For these investigations, carbon paper electrodes with a thin (5 nm and 30 nm) layer of cobalt or gold were used in S. cerevisiae-based MFCs [44,45]. It was observed that application of a Co layer significantly increased the adhesion of S. cerevisiae cells to the anode surface and enhanced the performance of the MFC. Another modification of the anode surface was the immobilization of yeast cells on carbon nanotubes (CNTs). Christwardana and Kwon studied the performance of mediatorless MFCs, employing S. cerevisiae cells immobilized on CNTs as the anodic catalyst (yeast/CNT) and laccase as the biocatalyst in the cathode [46]. Yeast cells were bonded to CNTs covalently (C-N), and electron transfer was enhanced by the hydrophobic interaction between carbon nanotubes and yeast. Additionally, the effect of a yeast-based catalyst structure containing poly(ethyleneimine) (PEI) and glutaraldehyde (GA) on MFC performance was investigated. PEI was used as an entrapping polymer to support the adhesion of yeast cells to CNTs (positive charge on PEI and negative charge of yeast), while GA acted as a cross-linker between S. cerevisiae cells and PEI. The use of both investigated yeast-based anodic catalysts significantly improved the stability and performance of the MFCs. The maximum power density increased from 138 mWm −2 for the MFC employing bare CNTs to 344 mWm −2 with yeast-based anode catalysts. Enhanced power generation in MFCs using a yeast/CNT catalyst was explained by facilitated electron transport via cytochrome c and cytochrome a3 when this catalyst was applied. The power production obtained here was significantly higher than for other S. cerevisiae-based MFCs. In other studies, the application of S. cerevisiae immobilized on reticulated vitreous carbon (RVC) with neutral red (NR) as a mediator achieved a much lower power density, i.e., ca. 80 mWm −2 [13], and the maximum power density in the MFC with S. cerevisiae immobilized on a graphite electrode with thionine as a mediator peaked at 60 mWm −2 [47]. However, the value obtained for the yeast/CNT catalyst-based MFC was lower than that of an MFC employing S. cerevisiae immobilized on carbon felt with an MB mediator, where the maximum power density was 1500 mWm −2 [48]. Abbreviations used in Table 1: RVC, reticulated vitreous carbon; CNT, carbon nanotubes; MWCNT, multi-walled carbon nanotube; SPEEK, sulphonated poly(ether ether ketone); ADE 75, air diffusion electrode; MB, methylene blue; NR, neutral red; BcG, bromocresol green; MR, methyl red; MO, methyl orange; GA, glutaraldehyde; PEI, poly(ethylenimine); GOx, glucose oxidase; CDH, cellobiose dehydrogenase; PDH, pyranose dehydrogenase; YP fru, medium containing yeast extract, peptone and fructose.
In another study, Fishilevich et al. investigated the performance of a dual-chamber glucose-fed MFC employing S. cerevisiae with glucose oxidase (GOx) from Aspergillus niger [49]. The applied enzyme is a highly efficient redox enzyme, which converts β-D-glucose to glucono-δ-lactone and hydrogen peroxide. The anode compartment comprised GOx surface-displaying yeast cells, MB as a redox mediator, and glucose as a substrate. The cathode compartment contained a laccase from Trametes versicolor as a catalyst and ABTS as a redox mediator. However, the maximum power density obtained in this MFC was relatively low: ca. 13.6 mWm −2 . Gal et al. studied the performance of a S. cerevisiae-based MFC using two different surface-displayed dehydrogenases [50]. They were cellobiose dehydrogenase from Corynascus thermophilus and pyranose dehydrogenase (PDH, EC 1.1.99.29) from Agaricus meleagris, and they were displayed on the S. cerevisiae surface using a yeast display system. S. cerevisiae cells with PDH were used as an anodic biocatalyst in a two-compartment mediatorless MFC using lactose as a substrate. The MFC employing PDH-displaying S. cerevisiae produced a maximal power output of 33 mWm −2 , which was ca. 12 times higher than that obtained in the MFC using unmodified S. cerevisiae (2.7 mWm −2 ). In another study, S. cerevisiae was used as an anodic biocatalyst in an air-cathode MFC using synthetic wastewater as a substrate and graphite as electrodes [51]. It was shown that the yeast-based MFC power production was associated with the substrate concentration and the pH. The highest current densities were observed at pH 6.0: 160.36 and 282.83 mA/m 2 at organic loading rates of 0.91 and 1.43 kg COD/m 3 , respectively. The investigations of the electron transfer mechanism revealed the presence of redox mediators, such as NADH/NAD and FADH/FAD + , and suggested that the higher rate of electron transfer at pH 6.0 was connected with enhanced proton shuttling by different redox mediators.
The utilization of complex substrates in MFCs requires the application of microbial consortia wherein various species are able to accomplish diverse processes. It is commonly known that the fermentative microorganisms of the consortium can transform large organic molecules into smaller fermentation products that can be used by electrogenic species. Lin et al. designed an MFC that employed a microbial consortium, including the electrogen Shewanella oneidensis MR-1 and genetically engineered S. cerevisiae as a fermenter, with glucose as a carbon source [56]. The ethanol pathway was knocked out in S. cerevisiae cells, while the lactic acid pathway was programmed in the cells. As a result, S. cerevisiae metabolized glucose to lactic acid, which was further used by S. oneidensis for power production. Additionally, the efficiency of such a fermenter-exoelectrogen consortium was increased by achieving an optimal relation between the carbon metabolism of S. cerevisiae and the extracellular electron transfer of S. oneidensis. Moreover, the use of S. cerevisiae, which did not form a biofilm on the anode, allowed the anode surface to be entirely occupied by the exoelectrogen, which also enhanced MFC performance. The maximum power density obtained from that glucose-fed MFC was 123.4 mWm −2 .
Candida melibiosica
Candida melibiosica 2491 is a yeast strain with high phytase activity that digests the hardly biodegradable phosphorous compounds of plant tissues. Hubenova and Mitov observed an oxidation-reduction potential for C. melibiosica without the addition of artificial chemical mediators, which indicates its ability to transfer electrons in MFCs [57]. To examine the possibility of using C. melibiosica as an anodic biocatalyst, the performance of an MFC employing this yeast strain was investigated. Results of experiments with different carbohydrates, i.e., glucose, fructose, and sucrose, demonstrated that the C. melibiosica-based MFC could produce bioelectricity even in the absence of an exogenous mediator, with a maximum power output of 60 mWm −3 when fructose was used as a substrate. A correlation was observed between the produced current, the yeast cell growth phases, and the rate of substrate assimilation, as demonstrated by the rate of in vivo produced electrons. Additionally, the influence of other exogenous mediators with various formal potentials (bromocresol green (BcG), bromocresol purple, bromothymol blue, bromophenol blue, Congo red, cresol red, eosin, Eriochrome Black T, methyl red (MR), methanyl yellow, methyl orange (MO), murexide, neutral red (NR), and tropaeolin) on the performance of a C. melibiosica-based MFC was investigated [57,58]. It was demonstrated that MB, MO, MR, and NR increased the performance of the MFC in comparison to a mediatorless system. The highest enhancement achieved was an increase in power density from 20 to 640 mWm −2 with MB at a concentration of 0.8 mM. The authors related the improvement in the MFC performance to the ability of the mediator to increase electron transfer kinetics.
C. melibiosica-based MFC performance was also studied with the use of modified carbon felt as the anode material [59]. The carbon felt anode was modified by the electrodeposition of nickel using a galvanostatic (Ni(g), galvanostatically modified Ni-carbon felt) or a potentiostatic (Ni(p), potentiostatically modified Ni-carbon felt) pulse plating technique. The use of Ni-modified carbon felt in a dual-chamber mediatorless C. melibiosica-based MFC allowed for an increase in power output from 36 mWm −2 for the nonmodified electrode (NME) to 390 and 720 mWm −2 for Ni(p) and Ni(g), respectively. Interestingly, these values were even higher than that achieved in an MFC using nonmodified carbon felt as the anode and MB as the electron shuttle. The improvement of C. melibiosica-based MFC performance using these modified anodes was related to Ni, which served as an electron acceptor or initiated an adaptive mechanism with an increased electron transfer rate through the yeast cell membrane. In another report, the authors used nanomodified NiFe and NiFeP-carbon felt materials as the anode in a C. melibiosica-based MFC. The anodes were modified by the pulse electrodeposition technique mentioned above. The highest power production was achieved for the NiFeP-modified electrode and peaked at 260 ± 8 and 155 ± 6 mW in the case of the potentiostatic and galvanostatic carbon felt modification, respectively [62].
Candida sp. IR11
Lee et al. isolated Candida sp. IR11 from a biofilm formed on the anode of a single-chamber glucose-fed MFC inoculated with anaerobic sludge [61]. Since the ability to reduce ferric iron by this new yeast strain has been demonstrated, it was supposed that Candida sp. IR11 has electrogenic potential. Candida sp. IR11 was inoculated into a single-chamber MFC where the substrate was wastewater from upflow anaerobic sludge, and a maximum power density of 20.6 ± 1.52 mWm −2 was observed, which was accompanied by COD removal at a level of 91.3 ± 5.29%.
Arxula adeninivorans
Arxula adeninivorans is a yeast strain that can grow at high temperatures (up to 48 °C) with a wide pH tolerance and high salinity tolerance. The bioelectrochemical activity of Arxula adeninivorans was investigated using a mediatorless double-chamber MFC operated in continuous mode [60]. The maximum power density in the MFC employing Arxula adeninivorans was ca. 28 mWm −2 , and it was observed that A. adeninivorans secretes a soluble electroactive molecule, which is believed to provide current generation in MFCs. Further, the use of 2,3,5,6-tetramethyl-1,4-phenylenediamine (TMPD) as a redox mediator was examined. Application of TMPD as an anodic mediator and KMnO 4 as the cathodic electron acceptor in an A. adeninivorans-based MFC led to a significant increase in maximum power density, i.e., 1.03 ± 0.06 Wm −2 . The obtained values were close to the highest power output reported by Ganguli and Dunn for their yeast-based MFC [49].
Hansenula anomala
Prasad et al. investigated a mediatorless MFC with H. anomala as a biocatalyst and glucose as a substrate [11]. The H. anomala cells were immobilized on a plain graphite anode by physical adsorption and covalent bonds. It was found that H. anomala used enzymes present in its outer membrane, i.e., lactate dehydrogenase and ferricyanide reductase, in the electron transfer process. Additional experiments with different types of anodes, i.e., plain graphite, graphite felt, and polyaniline (PANI)-coated graphite modified with a Pt catalyst, revealed a maximum power output of 2.9 Wm −3 obtained for the anode modified with PANI and Pt. The use of the graphite felt anode, which had a surface area larger than that of plain graphite, resulted in a similar maximum power output, at the level of 2.3 Wm −3 , which was clearly higher than that for the plain graphite electrodes (0.69 Wm −3 ).
Other Species
The catalytic activity of seven different yeast strains-S. cerevisiae, Kluyveromyces marxianus, Pichia pastoris, Hansenula polymorpha, Kluyveromyces lactis, Schizosaccharomyces pombe, and Candida glabrata-in a mediatorless double-chamber MFC was investigated by Kaneshiro et al. [63]. The highest power density was obtained when carbon fiber was used as an anode with glucose as a substrate. The MFC employing K. marxianus rendered the highest power output of 850 Wm −3 . Moreover, it was found that K. marxianus can utilize fructose and xylose as a carbon source in addition to glucose. The three highest power levels were obtained for glucose, fructose, and xylose, respectively. Since glucose and xylose are produced from lignocellulosic biomass and the degradation of both is required for complete degradation of the biomass, the authors proposed that K. marxianus could be useful for the development of MFCs using waste products from the wood industry and forestry.
Enhanced current generation using a yeast-bacterium coculture was observed by Islam et al. [64]. The power production in MFCs inoculated with the yeast Lipomyces starkeyi together with Klebsiella pneumoniae was 3-6 times higher (12.87 Wm −3 ) in comparison to MFCs using the bacteria or the yeast separately. It was observed that the yeast utilized electron shuttles produced by the bacteria, which is encouraging for future investigations on the mutualistic interactions of fungi and bacteria for increasing the power production of MFCs.
Fungi Used as a Cathode Catalyst
Cathode construction and the type of the final electron acceptor used in the cathodic compartment have an essential influence on an MFC's electrical output. Oxygen is a common electron acceptor, due to its easy accessibility, high redox potential, and the absence of waste or toxic chemical end products. However, the oxygen reduction process at the cathode limits the performance of MFCs, because of its high overpotential and slow reaction kinetics. The application of a suitable catalyst can improve the efficiency of oxygen reduction by lowering the activation energy and enhancing the reaction rate. The most common cathodes used for MFCs are Pt-coated carbon electrodes that use dissolved oxygen as the electron acceptor. Carbon electrodes modified with Pt exhibit significantly decreased oxygen reduction activation energy and an increased reaction rate. However, the use of a platinum catalyst has serious drawbacks, such as high cost and toxicity. Enzymes have considerable advantages over chemical catalysts, such as biocompatibility and higher specific selectivity, transformation efficiency, and activity under mild conditions. Although enzymes have better electrochemical catalytic performance in comparison to microbes, the use of enzymatic fuel cells is limited, due to the high cost of enzyme production and purification, as well as the short duration of enzyme activity before inactivation. A microbial community placed in the cathode chamber or inoculated directly on the cathode produces enzymes that effectively catalyze the reduction of oxygen, which results in significantly improved cathode performance [65,66]. A great advantage of using whole microbial cells for catalyzing the oxygen reduction reaction is the eliminated need for enzyme isolation and purification. Such an approach also allows the enzymatic cycle to occur in its natural environment, i.e., within a living organism.
Trametes versicolor (Coriolus versicolor)
Trametes versicolor is a filamentous fungus known for its ability to produce oxidative enzymes. These enzymes enable the exchange of electrons between the electron donor and the acceptor. Wu et al. [67] investigated the performance of an MFC employing the laccase-secreting white-rot fungus T. versicolor; the continuous secretion of laccase by the fungus helps to avoid costly laccase isolation and purification (Table 2). Moreover, the continuous production of laccase prevents disruptions of the system caused by irreversible inactivation of the enzyme. C. versicolor was inoculated in a cathode chamber of an MFC filled with a medium containing glucose as a carbon source. ABTS was used as a redox mediator, since it had previously been shown to be an effective mediator for transferring electrons between the electrode and laccase [22,23]. Two additional MFCs, one with a catholyte enriched with commercially available laccase and one with a carbon fiber cathode, were operated as controls. The MFC inoculated with the white-rot fungus obtained higher values of maximum power density (320 ± 30 mWm −3 ) in comparison to the control MFCs with conventional abiotic cathodes (40-50 mWm −3 ). However, the MFC employing the laccase-based cathode demonstrated a maximum power density of 480 ± 30 mWm −3 . In order to maintain a high voltage level, a dose of ABTS was reapplied. The maximum power density of the MFC inoculated with C. versicolor was thus about 2/3 of that for the laccase-based MFC, which the authors attributed to limited electron transfer due to adhesion of the white-rot fungus to the carbon fiber. Increasing the pH in the cathode chamber and reducing it in the anode chamber led to a decrease in MFC performance and stability [68,69]. In the MFC inoculated with the white-rot fungus, pH variation of the catholyte was observed, which was connected with the metabolism of the fungus. Experimental results showed that the white-rot fungus T. versicolor inoculated in the cathode chamber converted glucose to acetate and thus attenuated the deactivation of laccase. Abbreviations used in Table 2: PVA, polyvinyl alcohol; SAP, superabsorbent polymer; CPC, carbide porous ceramic; PTFE, polytetrafluoroethylene.
Interesting investigations were conducted by Fernandez de Dios et al., who developed a fungus-bacterium-based MFC for energy production from wastewaters. Additionally, the authors applied the electro-Fenton process in the cathode chamber to increase the degradation efficiency of the organic compounds present in the wastewater [70]. During the Fenton process, hydroxyl radicals are produced that oxidize organic pollutants, either completely into carbon dioxide, water, and inorganic salts, or incompletely into less hazardous intermediates [75]. The traditional Fenton process is widely used as a suitable treatment method for highly concentrated wastewaters and sludge. Although the reaction components Fe 2+ and H 2 O 2 are comparatively affordable, the high consumption and chemical instability of H 2 O 2 significantly reduce the efficiency of this process. Moreover, removal of iron after the treatment is required. Recently, it was reported that H 2 O 2 can be synthesized from acetate or wastewaters at the cathode of an MFC [76,77]. It was observed that fungal hyphae provide a type of scaffolding through which bacteria can effectively move and spread over a larger surface area. Thus, they can overcome difficulties related to movement in the surrounding environment [78]. Moreover, there is catabolic cooperation between these microorganisms when they coexist in one medium. These interactions between bacteria and fungi indicate that the physical contact between them can play an important role in organic substrate degradation and electron transport during MFC operation. Fernandez de Dios et al. conducted research in which the fungus T. versicolor was grown in the presence of S. oneidensis in order to allow the bacterium to take advantage of the fungal network for transporting electrons to the anode. It was shown that after 1 month of MFC operation, a homogenous biofilm of the bacterium and fungus developed over the electrode. The bacteria were anchored on the surface of the fungal hyphae, which allowed for transport of electrons. Examination of the biofilm structure by SEM analysis indicated that the combination of the fungus and bacterium is a suitable system that permits a high bacterium concentration in the electrode chamber. The ability to produce power in a fungus-bacteria-based MFC was demonstrated using an H-type reactor in a minimal medium with acetate as the carbon substrate. A maximum volumetric power density of 1.2 Wm −3 per anode liquid volume was obtained, which was higher than that achieved with the S. oneidensis MR1 H-type reactor (0.24 mWm −3 ). The enhanced electricity generation of the MFC was related to the Fenton reactions, which promoted electron consumption. Additionally, it was shown that during this MFC operation, approximately 94% of Lissamine Green B and 83% of Crystal Violet were removed. Although the decolorization grade was similar to that in previous studies, utilization of in situ electro-Fenton reactions in the MFC greatly reduced the operation cost by eliminating the requirement for an external energy supply.
Ganoderma lucidum
Lai et al. investigated a single-chamber MFC with the laccase-secreting white-rot fungus Ganoderma lucidum BCRC 36123 inoculated on the cathode surface to improve power production and degrade the azo dye acid orange 7 (AO7) [71]. Azo dyes are synthetic pigments that abundantly appear in wastewaters produced by the textile industry and paper manufacturing. These chemical compounds are poorly biodegradable and can be completely degraded only by white-rot fungi. The ability of white-rot fungi to break down azo dyes is attributed to laccase activity. Laccase was presumed to act as a cathode catalyst in the MFC and, simultaneously, in conjunction with an anaerobic microbial community in the anode chamber, to degrade azo dye pollutants. The availability of AO7 to the fungal mycelium on the cathode was assured by replacing the proton exchange membrane with a polyvinyl alcohol hydrogel (PVA-H) film, which allowed AO7 diffusion from the anode chamber to the cathode. Application of fungal mycelium to the cathode provided a constant delivery of fresh laccase, and additional microorganisms enabled the degradation of the azo dyes. The decolorization of the azo dye AO7 coupled with the generation of electricity was investigated in a double-chamber MFC with a fungal biocathode [72]. The laccase-producing G. lucidum was cultivated under solid-state fermentation on wood chips. Utilization of this solid substrate supported fungal growth and stimulated the steady production of laccase. Wood chips inoculated with G. lucidum were placed around the MFC cathode, and the performance of three types of MFCs, blank (potato dextrose broth (PDB) and fungus, no wood chips), substrate only (wood chips and PDB, no fungus), and both substrate and fungus (wood chips, PDB, and fungus), was studied. The MFC employing G. lucidum inoculated on wood chips exhibited a power density of 207.74 mWm −2 , and the efficiency of AO7 decolorization was 96.7%. It was found that the laccase secreted by the white-rot fungus diffused into the anode chamber, where, together with the microbial community, it enhanced the dye removal from wastewaters. In order to examine the influence of AO7 concentration on AO7 removal efficiency and power generation, the performance of a G. lucidum-based MFC was investigated using various AO7 concentrations in the anolyte, from 30 to 1000 mgL −1 . The results showed that the power density increased with AO7 concentration up to 500 mgL −1 . Further increases in AO7 concentration (up to 1000 mg/L) caused a reduction in power density. This phenomenon was assigned to the inhibitory effect of the azo dye acid orange 7 on bacterial growth. The maximum power density and maximum current density at an AO7 concentration of 500 mgL −1 were 207.74 mWm −2 and 585.18 mAm −2 , respectively. The obtained values were significantly higher than those reported in previous studies [71,79,80].
Galactomyces reessii
Galactomyces reessii is a yeast strain which is able to degrade a wood matrix by secreting laccase. Chaijak et al. [73] investigated the performance of a two-chamber MFC employing the fungus G. reessii as a biocatalyst on the cathode. The fungus cultured on coconut coir was placed in the cathode chamber so that it came into physical contact with the cathode. Plain carbon cloth was used as both the anode and the cathode. Additionally, two other types of cathodes were used: Vulcan carbon cloth coated with Pt and plain carbon cloth with coconut coir, as a positive and a negative control, respectively. A mixture of sludge from the rubber industry and synthetic wastewater containing sulfate and ethyl acetate was used as a substrate. It was found that coconut coir supported the growth of G. reessii and the production of laccase without the use of additional chemicals or culture media. The maximum power density and current density produced by the MFC using G. reessii were 59 mWm −2 and 253 mAm −2 , respectively. It is worth noting that the MFC employing Pt as a cathodic catalyst and the MFC using a biocathode with G. reessii generated similar values of maximum voltage, current, and power. Therefore, the authors suggested that G. reessii could be utilized as a biocatalyst instead of Pt.
Other Species
Three strains of filamentous fungi, Rhizopus sp., Aspergillus sp., and Penicillium sp., isolated from Caatinga's soil were used as biocatalysts in the cathode compartment of a double-chamber MFC [74]. The power outputs for the examined fungi were obtained using two different cathode materials: carbon felt coated with Pt in polytetrafluoroethylene (PTFE) and Pt-free carbon Black Vulcan®-coated carbon felt. A plate of graphite immersed in potassium ferricyanide was used as the anode. In the MFC employing Aspergillus, the maximum power outputs were 328.73 mWm −3 for the platinum-free carbon cathode and 438.16 mWm −3 for the Pt-coated cathode.
Summary
Among all the investigated fungi strains, S. cerevisiae has been the most popular. The highest power densities obtained for S. cerevisiae-based MFCs reached 1.5 Wm −2 when MB was used as a mediator in a dual-chamber reactor. In a mediatorless system, the maximum power density for a S. cerevisiae-based single-chamber MFC was 334 mWm −2 when a carbon nanotube-based electrode modified with PEI was used. However, the highest power production for fungi-based MFCs was 720 mWm −2 in a mediatorless system when C. melibiosica was used as an anode biocatalyst, with the use of Ni-modified electrodes. Though it is hard to compare the results of investigations where different reactor and electrode arrangements were applied, we can conclude today that the power production in MFCs using a single strain of fungi seems to be comparable to power production in MFCs using a single bacteria strain. For example, the maximum power production in a dual-chamber system was on the level of 41 mWm −2 where the single strain S. oneidensis MR-1 was used [81], or 4-45 mWm −2 for Geobacter strains in early two-chamber reactors [82]. These values are close to the power densities obtained for most mediatorless, fungi-based dual-chamber MFCs, collected in Table 1. The power production in electrogenic bacteria-based MFCs was remarkably increased to ca. 860 and 461 mWm −2 for S. oneidensis and G. sulfurreducens, respectively, when a mixed-culture or single-chamber reactor was applied [81,82]. A similar effect was observed for fungi-based MFCs where the application of bacteria-fungi consortia allowed for an increase in power production to 12.87 Wm −3 or even 850 Wm −2 for milliliter-scale reactors (Tables 1 and 2).
The years of investigation into MFC technology have allowed for the development of commercially available bioreactors able to produce power of up to 300 kW [83]. Until now, most investigations on MFCs have been conducted with the use of various bacterial strains and consortia. However, the present review indicates that fungi can be considered very promising catalytic microorganisms for MFC technology. The highest power density obtained for fungi-based MFCs was 1.5 Wm −2 , which provides researchers with a foundation for future investigations, especially with the application of bacteria-fungi mixed consortia in MFCs. Recent findings showed that the application of such consortia (S. oneidensis-S. cerevisiae, L. starkeyi-K. pneumoniae, S. oneidensis-T. versicolor) enhanced the power generation in the system and allowed for power production from complex substrates. The application of fungi in MFC systems can be especially valuable when complex, difficult substrates need to be used in the MFC. The high capacity of fungi for the biodegradation of difficult, toxic substances (e.g., biodegradation of azo dyes at the level of 97%) can be used for current production with the simultaneous treatment of difficult waste.
Challenges and Perspectives for Fungi-Based MFCs
Similar to bacterial MFCs, the main obstacle that needs to be overcome in fungi-based MFCs is the low power production, which is insufficient to assure the energetic self-sufficiency of such systems. A serious drawback of MFCs using fungi is the use, in most studies, of dual-chamber reactors, which are known for higher internal resistance in comparison to single-chamber ones. To date, in fungi-based MFCs, ferricyanide has been used as the main electron acceptor, which is not feasible at a larger scale. Also, in many studies, mediators are still used, which rules out such an arrangement for commercial purposes. Future studies of fungi-based MFCs should focus on enhancing the power production of such systems. This can be realized through investigations using single-chamber reactors with various electrode designs and materials. The practical application of fungi-based MFCs demands the elimination of chemical catholytes, such as ferricyanide, and of mediators. Thus, searching for new strains of fungi that can produce power with high efficiency is crucial. A very promising direction for investigation seems to be MFCs based on bacteria-fungi consortia, whose diverse properties can lead to a severalfold enhancement of power production in the system, especially when complex substrates are used.
"Engineering",
"Environmental Science",
"Chemistry"
] |
Nonparametric Bayesian inference for multidimensional compound Poisson processes
Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density $r_0$ and intensity $\lambda_0$. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context, which, under some assumptions, we argue to be optimal posterior contraction rates. In particular, our results imply the existence of Bayesian point estimates that converge to the true parameter pair $(r_0,\lambda_0)$ at these rates. To the best of our knowledge, construction of nonparametric density estimators for inference in the class of discretely observed multidimensional L\'{e}vy processes, and the study of their rates of convergence is a new contribution to the literature.
Introduction
Let N = (N_t)_{t≥0} be a Poisson process of constant intensity λ > 0 and let {Y_j} be independent and identically distributed (henceforth abbreviated to i.i.d.) R^d-valued random vectors defined on the same probability space and having a common distribution function R, which is assumed to be absolutely continuous with respect to the Lebesgue measure with density r. Assume N and {Y_j} are independent and define an R^d-valued process X = (X_t)_{t≥0} by $X_t = \sum_{j=1}^{N_t} Y_j$. The process X is called a compound Poisson process (henceforth abbreviated to CPP) and forms a basic stochastic model in a variety of applied fields, such as, e.g., risk theory and queueing; see Embrechts et al. (1997) and Prabhu (1998).
Suppose that corresponding to the true parameter pair (λ_0, r_0), a sample X_∆, X_2∆, . . ., X_n∆ from X is available, where the sampling mesh ∆ > 0 is assumed to be fixed and thus independent of n. The problem we study in this note is nonparametric estimation of r_0 (and of λ_0). This is referred to as decompounding and is well studied for one-dimensional CPPs, see Buchmann and Grübel (2003), Buchmann and Grübel (2004), Comte et al. (2014), Duval (2013) and van Es et al. (2007). Some practical situations in which this problem may arise are listed in Duval (2013), p. 3964. However, the methods used in the above papers do not seem to admit (with the exception of van Es et al. (2007)) a generalisation to the multidimensional setup. This is also true for papers studying non-parametric inference for more general classes of Lévy processes (of which CPPs form a particular class), such as e.g. Comte and Genon-Catalot (2010), Comte and Genon-Catalot (2011) and Neumann and Reiß (2009). In fact, there is a dearth of publications dealing with non-parametric inference for multi-dimensional Lévy processes. An exception is Bücher and Vetter (2013), where the setup is, however, specific, in that it is geared to inference in Lévy copula models and, unlike the present work, a high-frequency sampling scheme is assumed (∆ = ∆_n → 0 and n∆_n → ∞).
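As an illustration of this observation scheme, the sketch below simulates the increments of a discretely observed d-dimensional CPP; over disjoint intervals of length ∆ these increments are i.i.d., each being a Poisson(λ∆)-distributed number of i.i.d. jump vectors summed together. The intensity, mesh, and bivariate Gaussian jump density are arbitrary choices made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def cpp_increments(n, delta, lam, jump_sampler, d):
    """Simulate n increments X_{i*delta} - X_{(i-1)*delta} of a d-dimensional
    compound Poisson process with intensity lam and i.i.d. jumps from jump_sampler."""
    Z = np.zeros((n, d))
    counts = rng.poisson(lam * delta, size=n)     # number of jumps in each interval
    for i, m in enumerate(counts):
        if m > 0:
            Z[i] = jump_sampler(m).sum(axis=0)    # sum of the jump vectors
    return Z

# Example: bivariate standard-normal jumps, lambda = 2, sampling mesh delta = 1.
Z = cpp_increments(n=500, delta=1.0, lam=2.0,
                   jump_sampler=lambda m: rng.normal(size=(m, 2)), d=2)
print(Z.shape, Z.mean(axis=0))
```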
In this work we will establish the posterior contraction rate in a suitable metric around the true parameter pair (λ_0, r_0). This concerns the study of asymptotic frequentist properties of Bayesian procedures, which has lately received considerable attention in the literature, see e.g. Ghosal et al. (2000) and Ghosal and van der Vaart (2001), and is useful in that it provides their justification from the frequentist point of view. Our main result says that for a β-Hölder regular density r_0, under some suitable additional assumptions on the model and the prior, the posterior contracts at the rate n^{-β/(2β+d)} (log n)^ℓ, which, perhaps up to a logarithmic factor, is arguably the optimal posterior contraction rate in our problem. Finally, our Bayesian procedure is adaptive: the construction of our prior does not require knowledge of the smoothness level β in order to achieve the posterior contraction rate given above.
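To get a feel for the rate just quoted, the following lines simply evaluate ε_n = n^{-β/(2β+d)} (log n)^ℓ for a few sample sizes; β, d and the exponent ℓ (which depends on constants not spelled out here) are arbitrary illustrative choices.

```python
import math

def contraction_rate(n, beta, d, ell):
    """epsilon_n = n^(-beta/(2*beta+d)) * (log n)^ell."""
    return n ** (-beta / (2 * beta + d)) * math.log(n) ** ell

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, round(contraction_rate(n, beta=2.0, d=2, ell=1.0), 4))
```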
The proof of our main theorem employs certain results from Ghosal and van der Vaart (2001) and Shen et al. (2013), but involves a substantial number of technicalities specifically characteristic of decompounding.
We remark that a practical implementation of the Bayesian approach to decompounding lies outside the scope of the present paper. Preliminary investigations and a small-scale simulation study we performed show that it is feasible and under certain conditions leads to good results. However, the technical complications one has to deal with are quite formidable, and therefore the results of our study of implementational aspects of decompounding will be reported elsewhere.
The rest of the paper is organised as follows: in the next section we introduce some notation and recall a number of notions useful for our purposes. Section 3 contains our main result, Theorem 2, and a brief discussion of it. The proof of Theorem 2 is given in Section 4. Finally, Section 5 contains the proof of the key technical lemma used in our proofs.
Preliminaries
Assume without loss of generality that ∆ = 1, and let $Z_i = X_i - X_{i-1}$, $i = 1, \ldots, n$. Each $Z_i$ is equal in distribution to $\sum_{j=1}^{T} Y_j$, where {Y_j} are i.i.d. with distribution function R_0, while T, which is independent of {Y_j}, has a Poisson distribution with parameter λ_0. The problem of decompounding the jump size density r_0 introduced in Section 1 is equivalent to estimation of r_0 from the observations Z_n = {Z_1, Z_2, . . ., Z_n}, and we will henceforth concentrate on this alternative formulation. We will use the following notation: Q_{λ,r} denotes the law of Z_1, R_{λ,r} the law of the path (X_t, t ∈ [0, 1]), and P_r the law of Y_1, when the parameter pair is (λ, r).

2.1. Likelihood

We will first specify the dominating measure for Q_{λ,r}, which allows one to write down the likelihood in our model. Define the random measure µ by $\mu(B) = \#\{t \in [0,1] : (t, X_t - X_{t-}) \in B\}$ for Borel sets B. Under R_{λ,r}, the random measure µ is a Poisson point process on [0, 1] × (R^d \ {0}) with intensity measure Λ(dt, dx) = λ dt r(x) dx. Provided λ, λ̄ > 0 and r, r̄ > 0, by formula (46.1) on p. 262 in Skorohod (1964) we have
$$\frac{dR_{\lambda,r}}{dR_{\bar\lambda,\bar r}} = \exp\left( \int_{[0,1]\times\mathbb{R}^d} \log\frac{\lambda r(x)}{\bar\lambda\, \bar r(x)}\, \mu(dt,dx) - (\lambda - \bar\lambda) \right).$$
The density k_{λ,r} of Q_{λ,r} with respect to Q_{λ̄,r̄} is then given by the conditional expectation in (2), where the subscript in the conditional expectation operator signifies the fact that it is evaluated under R_{λ̄,r̄}; see Theorem 2 on p. 245 in Skorohod (1964) and Corollary 2 on p. 246 there. Hence the likelihood (in the parameter pair (λ, r)) associated with the sample Z_n is given by $\prod_{i=1}^{n} k_{\lambda,r}(Z_i)$.

2.2. Prior

We will use the product prior Π = Π_1 × Π_2 for (λ_0, r_0). The prior Π_1 for λ_0 will be assumed to be supported on the interval [λ, λ̄] and to possess a density π_1 with respect to the Lebesgue measure.
The prior for r_0 will be specified as a Dirichlet process mixture of normal densities. Namely, introduce the convolution density $r_{F,\Sigma}(x) = \int_{\mathbb{R}^d} \phi_\Sigma(x - z)\, dF(z)$, where F is a distribution function on R^d, Σ is a d × d positive definite real matrix and φ_Σ denotes the density of the centred d-dimensional normal distribution with covariance matrix Σ. Let α be a finite measure on R^d and let D_α denote the Dirichlet process distribution with base measure α (see Ferguson (1973), or alternatively Ghosal (2010) for a modern overview). Recall that if F ∼ D_α, then for any Borel-measurable partition B_1, . . ., B_k of R^d the distribution of the vector (F(B_1), . . ., F(B_k)) is the k-dimensional Dirichlet distribution with parameters α(B_1), . . ., α(B_k). The Dirichlet process location mixture of normals prior Π_2 is obtained as the law of the random function r_{F,Σ}, where F ∼ D_α and Σ ∼ G for some prior distribution function G on the set of d × d positive definite matrices. For additional information on Dirichlet process mixtures of normal densities see e.g. the original papers Ferguson (1983) and Lo (1984), or a recent paper Shen et al. (2013) and the references therein.
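For intuition, the sketch below produces one draw from a Dirichlet process location mixture of normals via a truncated stick-breaking representation. For simplicity the covariance Σ is held fixed rather than drawn from a prior G, and the concentration, base measure, and truncation level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_dp_mixture_density(alpha_mass, base_sampler, sigma, n_atoms=200):
    """Truncated stick-breaking draw F ~ DP(alpha); returns the mixture density
    r_{F,Sigma}(x) = sum_k w_k * phi_Sigma(x - z_k), a finite approximation of
    the convolution of phi_Sigma with F."""
    betas = rng.beta(1.0, alpha_mass, size=n_atoms)
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    locs = base_sampler(n_atoms)                  # atoms z_k drawn from the base measure
    d = locs.shape[1]
    inv = np.linalg.inv(sigma)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))

    def density(x):
        diff = x[None, :] - locs                  # shape (n_atoms, d)
        quad = np.einsum('kd,de,ke->k', diff, inv, diff)
        return float(np.sum(weights * norm * np.exp(-0.5 * quad)))

    return density

# Example: d = 2, standard-normal base measure, fixed covariance 0.25 * I.
r = draw_dp_mixture_density(alpha_mass=1.0,
                            base_sampler=lambda k: rng.normal(size=(k, 2)),
                            sigma=0.25 * np.eye(2))
print(r(np.zeros(2)))
```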
2.3. Posterior

Let R denote the class of probability densities of the form (4). By Bayes' theorem, the posterior measure of any measurable set A ⊂ (0, ∞) × R is given by
$$\Pi(A \mid \mathcal{Z}_n) = \frac{\int_A \prod_{i=1}^n k_{\lambda,r}(Z_i)\, d\Pi(\lambda,r)}{\int \prod_{i=1}^n k_{\lambda,r}(Z_i)\, d\Pi(\lambda,r)}.$$
The priors Π_1 and Π_2 indirectly induce the prior Π = Π_1 × Π_2 on the collection of densities k_{λ,r}. We will take the liberty of using the symbol Π to signify both the prior on (λ_0, r_0) and the prior on the density k_{λ_0,r_0}. The posterior in the first case will be understood as the posterior for the pair (λ_0, r_0), while in the second case as the posterior for the density k_{λ_0,r_0}. Thus, setting Ā = {k_{λ,r} : (λ, r) ∈ A}, we have Π(Ā | Z_n) = Π(A | Z_n). In the Bayesian paradigm the posterior encapsulates all the inferential conclusions for the problem at hand. Once the posterior is available, one can next proceed with computation of other quantities of interest in Bayesian statistics, such as Bayes point estimates or credible sets.
2.4. Distances

The Hellinger distance h(Q_0, Q_1) between two probability laws Q_0 and Q_1 on a measurable space (Ω, F) is given by $h^2(Q_0, Q_1) = \int (\sqrt{dQ_0} - \sqrt{dQ_1})^2$. We also define the Kullback–Leibler-type discrepancy K and the V-discrepancy, as well as versions K(x, y), V(x, y) and h(x, y) of these quantities for positive real numbers x and y. Using the same symbols K, V and h is justified as follows. Suppose Ω is a singleton {ω} and consider the Dirac measures δ_x and δ_y that put mass x and y, respectively, on Ω. Then K(δ_x, δ_y) = K(x, y), and similar equalities are valid for the V-discrepancy and the Hellinger distance.
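As a small numerical illustration, the snippet below evaluates the Hellinger distance between two univariate densities on a grid, using the convention h^2(q_0, q_1) = ∫ (√q_0 − √q_1)^2 (some texts include an extra factor 1/2).

```python
import numpy as np

def hellinger(p, q, x):
    """Hellinger distance between densities p and q tabulated on the grid x,
    with the convention h^2 = integral of (sqrt(p) - sqrt(q))^2."""
    h2 = np.trapz((np.sqrt(p) - np.sqrt(q)) ** 2, x)
    return np.sqrt(h2)

x = np.linspace(-10.0, 10.0, 4001)
p = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)          # N(0, 1) density
q = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)  # N(1, 1) density
print(hellinger(p, q, x))  # close to sqrt(2 - 2 * exp(-1/8)) ~ 0.485
```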
2.5. Class of locally β-Hölder functions

For any β ∈ R, ⌊β⌋ will denote the largest integer strictly smaller than β, while N_0 will stand for the union N ∪ {0}, for N the set of natural numbers. For a multi-index k = (k_1, . . ., k_d) ∈ N_0^d we write k. = k_1 + · · · + k_d and denote by D^k the corresponding mixed partial derivative operator. The usual Euclidean norm of a vector y ∈ R^d will be denoted by ‖y‖. Let β > 0, τ_0 ≥ 0 be constants and let L : R^d → R_+ be a measurable function. We define the class C^{β,L,τ_0}(R^d) of locally β-Hölder regular functions as the set of all those functions r : R^d → R such that all mixed partial derivatives D^k r of r up to order k. ≤ ⌊β⌋ exist and for every k with k. = ⌊β⌋ verify
$$|D^k r(x + y) - D^k r(x)| \le L(x)\, e^{\tau_0 \|y\|^2}\, \|y\|^{\beta - \lfloor\beta\rfloor}, \qquad x, y \in \mathbb{R}^d.$$
See p. 625 in Shen et al. (2013) for this class of functions.
Main result
Define the complements of the Hellinger-type neighbourhoods of (λ_0, r_0) by A(ε_n, M) = {(λ, r) : h(Q_{λ,r}, Q_{λ_0,r_0}) ≥ M ε_n}, where {ε_n} is a sequence of positive numbers. We say that ε_n is a posterior contraction rate if there exists a constant M > 0 such that Π(A(ε_n, M) | Z_n) → 0, as n → ∞, in Q^n_{λ_0,r_0}-probability. The ε-covering number of a subset B of a metric space equipped with the metric ρ is the minimum number of balls of radius ε needed to cover it. Let Q be a set of CPP laws Q_{λ,r}. Furthermore, we set B(ε, Q_{λ_0,r_0}) = {(λ, r) : K(Q_{λ_0,r_0}, Q_{λ,r}) ≤ ε², V(Q_{λ_0,r_0}, Q_{λ,r}) ≤ ε²}. (5) We recall the following general result on posterior contraction rates.
Theorem 1 (Ghosal and van der Vaart (2001)). Suppose that for positive sequences {ε̄_n} and {ε_n} tending to zero, conditions (6)-(8) hold. Then, for ε_n^* = max(ε̄_n, ε_n) and a large enough constant M > 0, we have that Π(A(ε_n^*, M) | Z_n) → 0 as n → ∞, in Q^n_{λ_0,r_0}-probability, assuming the i.i.d. observations {Z_j} have been generated according to Q_{λ_0,r_0}.
In order to derive the posterior contraction rate in our problem, we impose the following conditions on the true parameter pair (λ_0, r_0). Assumption 1. Denote the true parameter values for the compound Poisson process by (λ_0, r_0). (i) The true intensity satisfies λ_0 ∈ [λ, λ̄] ⊂ (0, ∞). (ii) The true density r_0 is bounded, belongs to the set C^{β,L,τ_0}(R^d) and additionally verifies, for some ǫ > 0 and all k with k. ≤ ⌊β⌋, the moment condition on D^k r_0 of Theorem 1 in Shen et al. (2013). Furthermore, we assume that there exist strictly positive constants a, b, c and τ such that r_0(x) ≤ c exp(−b‖x‖^τ) for all ‖x‖ > a. Conditions on r_0 come from Theorem 1 in Shen et al. (2013) and are quite reasonable. They simplify greatly when r_0 has a compact support.
We also need to make some assumptions on the prior Π defined in Section 2.2.
Assumption 2. The prior Π = Π_1 × Π_2 on (λ_0, r_0) satisfies the following assumptions: (i) The prior on λ, Π_1, has a density π_1 (with respect to the Lebesgue measure) that is supported on the finite interval [λ, λ̄] ⊂ (0, ∞) and is such that π̲_1 ≤ π_1(x) ≤ π̄_1 on [λ, λ̄] (10), for some constants 0 < π̲_1 ≤ π̄_1 < ∞; (ii) The base measure α of the Dirichlet process prior D_α is finite and possesses a strictly positive density on R^d, such that for all sufficiently large x > 0, and some strictly positive constants a_1, b_1 and C_1, where for all small enough x > 0 and for any 0 < s_1 ≤ · · · ≤ s_d and t ∈ (0, 1), Here eig_j(Σ^{-1}) denotes the jth smallest eigenvalue of the matrix Σ^{-1}.
This assumption comes from Shen et al. (2013), see p. 626 there, to which we refer for additional discussion. In particular, it is shown there that an inverse Wishart distribution (a popular prior distribution for covariance matrices) satisfies the assumptions on G with κ = 2. As far as α is concerned, we can take it such that its rescaled version ᾱ is a non-degenerate Gaussian distribution on R^d.
Remark 1. The assumption (10) requiring that the prior density π_1 is bounded away from zero on the interval [λ, λ̄] can be relaxed so as to allow it to take the value zero at the end points of this interval, provided λ_0 is an interior point of [λ, λ̄].
We now state our main result.
Theorem 2. Let Assumptions 1 and 2 hold. Then there exists a constant M > 0 such that, as n → ∞, the posterior contracts around (λ_0, r_0) at the rate ε_n = n^{-γ} (log n)^ℓ in Q^n_{λ_0,r_0}-probability, with γ and ℓ constants determined by β, d and κ. We conclude this section with a brief discussion on the obtained result: the logarithmic factor (log n)^ℓ is negligible for practical purposes. If κ = 1, the posterior contraction rate obtained in Theorem 2 is essentially n^{-β/(2β+d)}, which is the minimax estimation rate in a number of non-parametric settings. This is arguably the minimax estimation rate in our problem as well (cf. Theorem 2.1 in Gugushvili (2008) for a related result in the one-dimensional setting), although here we do not give a formal argument. Equally important is the fact that our result is adaptive: the posterior contraction rate in Theorem 2 is attained without the knowledge of the smoothness level β being incorporated in the construction of our prior Π. Finally, Theorem 2 in combination with Theorem 2.5 and the arguments on pp. 506-507 in Ghosal et al. (2000) implies the existence of Bayesian point estimates achieving (in the frequentist sense) this convergence rate.
Remark 2. After completion of this work we learned about the paper Donnet et al. (2014), that deals with non-parametric Bayesian estimation of intensity functions for Aalen counting processes. Although CPPs are in some respects similar to the latter class of processes, they are not counting processes. An essential difference between our work and Donnet et al. (2014) lies in the fact that, unlike Donnet et al. (2014), ours deals with discretely observed multi-dimensional processes. Also, Donnet et al. (2014) use the log-spline prior, or the Dirichlet mixture of uniform densities, and not the Dirichlet mixture of normal densities as the prior.
Proof of Theorem 2
The proof of Theorem 2 consists in verification of the conditions in Theorem 1. The following lemma plays the key role to that end.
Lemma 1. The following estimates are valid: … Moreover, there exists a constant C ∈ (0, ∞), depending on λ and λ only, such that for all λ_0, λ ∈ [λ, λ] it holds that …

The proof of the lemma is given in Section 5. We proceed with the proof of Theorem 2.
Let ε_n = n^{-γ}(log n)^ℓ for γ and ℓ > ℓ_0 as in the statement of Theorem 2. Set ε̄_n = 2Cε_n, where C is the constant from Lemma 1. We define the sieves of densities F_n as in Theorem 5 in Shen et al. (2013), where … and a_1 and a_2 are as in Assumption 2. We also put … (17). In Shen et al. (2013) sieves of the type F_n are used to verify the conditions of Theorem 1 and to determine posterior contraction rates in the standard density estimation context. Below we will show that these sieves also work in the case of decompounding, in that we will verify the conditions of Theorem 1 for the sieves Q_n defined in (17).
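Conditions (6)-(8) of Theorem 1 are stated earlier in the paper and are not reproduced in this excerpt. Judging from how they are verified below (an entropy bound with constant c_1, a remaining-mass bound involving c_3 and c_2 + 4, and a prior-mass bound with constants c_2 and c_4), they are presumably of the standard Ghosal-Ghosh-van der Vaart type; the following generic sketch is supplied only for orientation and is not quoted from the paper:

\[
\log N\bigl(\bar{\varepsilon}_n, \mathcal{Q}_n, h\bigr) \le c_1 n \bar{\varepsilon}_n^2, \qquad
\Pi\bigl(\mathcal{Q}_n^{c}\bigr) \le c_3\, e^{-(c_2+4)\, n \varepsilon_n^2}, \qquad
\Pi\bigl(B(\varepsilon_n, Q_{\lambda_0,r_0})\bigr) \ge c_4\, e^{-c_2\, n \varepsilon_n^2}.
\]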
4.1. Verification of (6). Introduce the notation …
Let {λ_i} be the centres of the balls from a minimal covering of [λ, λ] with h_1-intervals of size Cε_n. Let {r_j} be the centres of the balls from a minimal covering of … by appropriate choices of i and j. Hence, … By Proposition 2 and Theorem 5 in Shen et al. (2013), there exists a constant c_1 > 0 such that for all n large enough … On the other hand, … With our choice of ε_n, for all n large enough it holds that … We can simply rename the constant c_1/(2C^2) in the above formula into c_1, and thus (6) is satisfied with that constant.
4.2. Verification of (7) and (8). We first focus on (8). Introduce … From (14) we obtain … Furthermore, using (15), … Combining the above inequalities with the definition of the set B(ε, Q_{λ_0,r_0}) in (5) yields … Shen et al. (2013) yields that for some A, C > 0 and all sufficiently large n, … We substitute ε with √A n^{-γ}(log n)^{ℓ_0} and write ε_n = √(3AC) n^{-γ}(log n)^{ℓ_0} to arrive at … Now for all n large enough, as γ < 1/2, … Consequently, for all n large enough … Choosing c_2 = (C+1)/(3AC), we have verified (8) (with c_4 = 1). For the verification of (7) we use the constants c_2 and ε_n as above. Note first that Π … By Theorem 5 in Shen et al. (2013), cf. also p. 627 there, for some c_3 > 0 and any constant c > 0 it holds that … Without loss of generality, we can take the positive constant c in the above display bigger than 3AC(c_2 + 4) - 4. This gives us … We have thus verified conditions (6)-(8), and the statement of Theorem 2 follows by Theorem 1, since ε̄_n ≥ ε_n (eventually).
5. Proof of Lemma 1
We start with a lemma from Csiszár (1963), which will be used three times in the proof of Lemma 1. Consider a probability space (Ω, F, P). Let P_0 be a probability measure on (Ω, F) and assume P_0 ≪ P with Radon-Nikodym derivative ζ = dP_0/dP. Furthermore, let G be a sub-σ-algebra of F. The restrictions of P and P_0 to G are denoted P′ and P′_0, respectively. Then P′_0 ≪ P′ and … The proof consists in an application of Jensen's inequality for conditional expectations. This lemma is typically used as follows. The measures P and P_0 are possible distributions of some random element X. If X′ = T(X) is some measurable transformation of X, then we consider P′ and P′_0 as the corresponding distributions of X′. Here T could be a projection. In the present context we take X = (X_t, t ∈ [0, 1]) and X′ = X_1, and so P in the lemma should be taken as R = R_{λ,r} and P′ as Q = Q_{λ,r}.
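The displayed conclusion of the lemma above is dropped in this extraction. Given that the lemma is applied below with the convex functions g(x) = (x log x)1_{x≥0} and g(x) = (x log² x)1_{x≥1}, and that its proof rests on Jensen's inequality for conditional expectations, a plausible form (our reconstruction, not a quotation) is the information-processing inequality

\[
\frac{dP_0'}{dP'} = \mathbb{E}_{P}\bigl[\zeta \mid \mathcal{G}\bigr],
\qquad
\int g\!\left(\frac{dP_0'}{dP'}\right) dP' \;\le\; \int g(\zeta)\, dP
\quad \text{for every convex function } g.
\]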
In the proof of Lemma 1, for economy of notation, a constant c(λ, λ) depending on λ and λ only may differ from line to line. We also abbreviate Q_{λ_0,r_0} and Q_{λ,r} to Q_0 and Q, respectively. The same convention will be used for R_{λ_0,r_0}, R_{λ,r}, P_{r_0} and P_r.
Proof of inequalities (11) and (14). Application of Lemma 2 with g(x) = (x log x)1_{x≥0} gives K(Q_0, Q) ≤ K(R_0, R). Using (1) and the expression for the mean of a stochastic integral with respect to a Poisson point process (see e.g. property 6 on p. 68 in Skorohod (1964)), we obtain that …, where c(λ, λ) is some constant depending on λ and λ. The result follows.
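The displayed bound above is dropped here. A plausible intermediate identity, supplied by us for orientation and not quoted from the paper, is the standard expression for the Kullback-Leibler divergence between the laws on [0, 1] of two compound Poisson processes with intensities λ_0, λ and jump densities r_0, r:

\[
K(R_0, R) \;=\; (\lambda - \lambda_0) \;+\; \lambda_0 \log\frac{\lambda_0}{\lambda} \;+\; \lambda_0\, K(r_0, r),
\]

which, for λ_0 and λ ranging over a fixed compact interval of (0, ∞), is bounded by a constant multiple of (λ - λ_0)² + K(r_0, r); a bound of this type yields (11) and (14).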
Proof of inequalities (12) and (15). We have …, with an obvious definition of I and II. Application of Lemma 2 with g(x) = (x log²(x))1_{x≥1} (which is a convex function) gives … As far as II is concerned, for x ≥ 0 we have the inequalities … The first inequality is trivial; the second is a special case of inequality (8.5) in Ghosal, Ghosh and Van der Vaart (2000), and is equally elementary. The two inequalities together yield … Applying this inequality with x = -log(dQ_0/dQ) (which is positive on the event {dQ_0/dQ < 1}) and taking the expectation with respect to Q gives … For the final inequality, see Pollard (2002), p. 62, formula (12).
Combining the estimates on I and II, we obtain that … After some long and tedious calculations employing (1) and the expressions for the mean and the variance of a stochastic integral with respect to a Poisson point process (see e.g. property 6 on p. 68 in Skorohod (1964)), … for some constant c(λ, λ) depending on λ and λ. Combining the estimates (22) and (24) on III and IV with the inequalities (21) and (11) yields (12). Similarly, the upper bounds (23) and (25) combined with (21) and (11) yield (15). | 5,103 | 2014-12-24T00:00:00.000 | [
"Mathematics"
] |
Feminist interpretation in the context of reformational theology: a consideration
This article explores the contribution that Biblical interpretation from a feminist perspective may make in the context of reformational theology. After an overview of the diverse nature of feminist Biblical interpretation that in itself stems from specific developments in hermeneutics, this article explores the contributions made by two prominent scholars in this field, namely Schüssler-Fiorenza and Trible. These contributions are then brought to bear on the South African situation and the debate on the role of women in the church. A suggestion is made as to the contribution that the work of Schüssler-Fiorenza and Trible can make in this context.
Introduction
Over the years many significant developments took place in the field of general hermeneutics as well as Biblical hermeneutics. A look at these developments reveals the three major approaches to be historical, literary, and reader orientated or "interested" in nature. A feminist interpretation of the Bible is one of the interested approaches. Along with the development of feminism in general, this approach began in theological circles due to women's growing desire to be liberated from the limitations that the ideology of patriarchy placed on them, not only in society at large, but also in the church. Below the development of feminist hermeneutics in general is traced. The subsequent discussion will focus on the contributions made by two prominent theologians, namely Schüssler-Fiorenza and Trible. Schüssler-Fiorenza follows an historical approach to the understanding of the Biblical text to attain a feminist objective, while Trible makes use of a literary approach for a similar reason. The article then traces the development in three Afrikaans reformed churches related to the role of women in the church. In the last instance the value of feminist approaches such as those developed by Schüssler-Fiorenza and Trible in the context of reformational theology is considered.
Feminist interpretation - a definition
There are many differences among feminist Biblical scholars. Newport (1996:139) explains that some seek to explore Biblical characters, books and themes that are relevant to the modern woman's situation. Others read the complete text from a female perspective to see what differences there are between the way a woman understands the text and the way that a man understands it. A third group reads the Bible as women in order to speak up against patriarchy. They want to expose the Bible as a possible tool of oppression against women. In order to practise feminist Bible interpretation one reads and understands the Bible from the standpoint of a feminist theory of justice and a feminist movement for change (Schüssler-Fiorenza, 2001:1). Sawyer (1990:231) in essence sums up what all forms of feminist interpretation share with the following statement: "Feminist interpretation of the Bible offers an alternative assessment of the Biblical evidence as seen through the eyes and experience of women readers and theologians". According to Trible (1984:3) feminist hermeneutics is a prophetic movement, examining the status quo, pronouncing judgement and calling for repentance by engaging with Scripture in various ways.
It has been noted by Thiselton (1992:410), among others, that feminist theology is related in some of its themes and concerns to liberation theology. Feminist writers, however, do not write as outsiders in defence of "the oppressed", but as insiders who are concerned with oppression in terms of gender. It is only with the development of third-wave feminism towards the end of the twentieth century that specific class concerns also entered the fray. When it comes to offering a critique of patriarchal ideologies, feminist theology has to face the challenging task of working between the world of the Bible and the world in which we live (Gillingham, 1998:141).
In all the major areas of Christian theology today, feminists are proposing an alternative to the existing ways of interpretation in order to remove the harmful effects of patriarchy and accommodate the insights of women. By doing so, feminist theologians want the church and society at large to benefit explicitly from the contributions of half the race that experiences God in a female body (Carmody, 1995:66). Schüssler-Fiorenza (1985:55) explains that in traditional hermeneutics man was the paradigmatic subject of scientific knowledge and interpretation while women were defined as the other or the object of male interpretation. Feminist interpreters insist, however, on the re-conceptualisation of language as well as intellectual frameworks so that women, as well as men, are subjects of interpretation.
Developments in hermeneutics and their meaning for women
According to Phillips (1999:391) traditional Christian exegesis shows that women are subordinated to men in the order of creation and that a woman's purpose is fulfilled in her relationship to her husband. Bible passages like the second creation story in Genesis 2:4b-3:24, the author's (Paul?) affirmation of men's headship in 1 Timothy 2:11-14, women's speaking in church being prohibited in 1 Corinthians 14:34-35 and women being taught to obey their husbands were all interpreted in such a way that people believe women's authority must be surrendered to their husbands and so to men in general (Phillips, 1999:391). This resulted in women not being allowed to interpret, teach and preach from the Bible. It can therefore be said that there is a direct correlation between Biblical hermeneutics and the role of women in the church. Women realised that altering the dominant perspective from which the Bible is read can bring about new interpretation possibilities. This resulted in what is known as feminist Biblical interpretation.
It was already stated that variety exists in feminist interpretation. The three major groups within this field, namely revolutionary feminism, reformist feminism and reconstructionist feminism, are briefly considered here. There are, of course, also other ways to group together the main ideas in the field. Osiek (1997:960), for example, elaborates on the proposals by Ruether (1983) and Sakenfeld (1981, as quoted in Osiek, 1997) and describes the reaction of women in Christian communities to oppressing patriarchal structures under five headings. Although a more simplified proposal is used below in this general description, reference will be made to the demarcation suggested by Osiek as well.
Revolutionary feminist theology
This group can in fact be described as post-Christian. Most of these women have been part of Christianity at some stage, but their feminist consciousness led them to the conclusion that Christianity is irredeemably patriarchal and often in opposition to women (cf. Daly, 1983). The main problem that these women have with the Bible is the centrality given to the revelation of a male God by Christian churches. Furthermore, they are of the opinion that Christians continue to subordinate women in their churches and marital relationships. They conclude that Christianity is oppressive to women and should be abandoned (Clifford, 2001:32). According to Keane (1998:123) revolutionary feminists believe that the Judaeo-Christian tradition is so intrinsically biased in favour of the male and so fundamentally patriarchal, that it has to be rejected completely (cf. Osiek, 1997:960-961).
Reformist Christian feminist theology
Reformist feminist theologians have almost nothing in common with revolutionary feminist theologians. The reformist approach does not seek to revolutionise Christianity; neither does it want to replace the God revealed in the flesh by Jesus Christ. Reformists are looking for modest changes within existing church structures (cf. Groothuis, 1997). They share a commitment to the Christian tradition. Some followers of this form of feminist theology believe that they can solve the problems of women's secondary status with measures such as inclusive translations of the Bible and more emphasis on egalitarian passages in the Bible. They are of the opinion that permitting women to hold church offices and do church-related ministries will help restore the woman's place in the church (Clifford, 2001:33). This form of feminist theology has exponents that can be placed under Osiek's (1997:962) rubric of a loyalist hermeneutic and is by and large the type of studies undertaken in the circles of the GKSA thus far (cf. Breed et al., 2008). Her criticism of this approach is fitting also in the GKSA context: it … stretch[es] history and the literal meaning of texts, and it tends to be innocent of the political implications of the types of social interaction and relationships that it advocates on the basis of fidelity to the biblical text as divine revelation (Osiek, 1997:963).
Reconstructionist Christian feminist theology
This group of theologians shares with reformist feminism a commitment to Christianity, and they see the Bible as the means of reconstructing a positive Christian theology for women, while at the same time criticising the tradition regarding the role of women in and outside the church (Sawyer, 1990:232). Reconstructionist feminist theologians seek a liberating theological core for women within the Christian tradition, while also working towards a deeper transformation or reconstruction not only of church structures, but also of civil society (Clifford, 2001:33).
Ruether sees feminism as part of a general movement of liberation for all, both male and female, who are subjected to oppression (Sawyer, 1990:232). The aim of feminist theology according to Ruether (1983:18) is "the promotion of the full humanity of women".
To her, the appeal to women's experience is of the utmost importance because it is precisely women's experience that has been shut out of hermeneutics (Thiselton, 1992:433). According to Loades (1998:82) this approach to feminist interpretation proceeds on the assumption that all, thus not just women, stand to gain from this movement. In this article particular attention will be paid to Schüssler-Fiorenza and Trible. Osiek (1997:963, 965) draws a more subtle distinction here when she labels Trible as operating with a revisionist hermeneutic, whereas Schüssler-Fiorenza works from a liberationist model. We include both under the reconstructionist rubric.
Schüssler-Fiorenza, Trible and reconstructionist feminist Biblical interpretation
Schüssler-Fiorenza (1983:6) argues that feminist hermeneutics is a type of liberation theology and she goes further by saying, "all theology, willingly or not, is by definition always engaged for or against the oppressed". Schüssler-Fiorenza maintains that female subordination was not part of the original gospel but rather the result of the church's eventual compromise with Graeco-Roman society (Sawyer, 1990:243).
Trible (1978:5-8) identifies the re-contextualisation of Biblical texts within the framework of a tradition as the first hermeneutical issue to be addressed. Her method is derived from rhetorical criticism as expounded by James Muilenburg (cf. Muilenburg, 1969), Trible (1978:9) writes. Her view is that male and female originate with God and are therefore equal.
A comparison will be drawn between these two theologians in the next section. Once the differences and similarities in their approaches have been described, these will be evaluated. Particular attention will then be given to how their main arguments can be applied to the question of women in the modern-day church.
4. Schüssler-Fiorenza and Trible: a comparison

4.1 Schüssler-Fiorenza and a historical approach to the text

Elisabeth Schüssler-Fiorenza can certainly be viewed as one of the key contributors when it comes to the field of feminist theology. She has intensely experienced the oppression and subordination of women in the church and in society. It takes a person of courage to achieve what she has done in terms of not only feminist hermeneutics but also in terms of seeking women's rightful place in society. Although she is viewed as a German-born American by many, she sees herself as a "resident alien" (Segovia, 2003:23). Schüssler-Fiorenza is a Catholic scholar who became the first woman in Würzburg, Germany, to complete the full academic program in theology that male students for the priesthood were required to take (Clifford, 2001:62).
Schüssler-Fiorenza moved to the USA at the beginning of the 1970s, because according to her, "there was no possibility of work in Germany for me as a theologian" (Segovia, 2003:10). Clifford (2001:62) also mentions the important fact that Schüssler-Fiorenza became the first female president of the Society of Biblical Literature in 1987. She spent most of her academic career in North American institutions and taught at the University of Notre Dame, the Episcopal Divinity School in Cambridge, Massachusetts, and at the Harvard Divinity School (Clifford, 2001:62).
Schüssler-Fiorenza has written many works that are of great value to the field of feminist hermeneutics. She is best known for her book In memory of her: a feminist reconstruction of Christian origins (Schüssler-Fiorenza, 1983). One of the major foci in her work has been on women in the church. She consistently argues for a reformation of ecclesiology that would give proper attention to the ministry women have done and are doing in the church. She is a critical historian and has developed and applied feminist hermeneutical theory to Biblical sources. She critically appraises cultural understandings of gender and the role of patriarchy in all its facets. She has always been critical of a male-dominated academy that has marginalised women and their contributions to Christian history (Clifford, 2001:62).

Trible and a literary approach to the text

Trible is considered a leader in the text-based exploration of women and gender in Scripture. In an interview with Sally Cloke (2002) during the National Anglican Conference in Sydney, where Trible was a speaker, she said the following:

Some feminists have given up on the Bible because they think it's totally patriarchal, totally androcentric. I'm not among them. But I read the Bible differently from the way it has traditionally been read. I ask questions that have not been asked before, and I take an interest in subjects that have not been dealt with before. For example: what difference does it make to read the Bible from the point of view of its minor characters rather than its major characters? What happens when you read the Bible in terms of the stories of the losers, rather than the winners?
This indicates Trible's commitment to reconstructionist feminists who hold the Biblical text in high regard. Trible is best known for her books God and the rhetoric of sexuality (1978) and Texts of terror: literary-feminist readings of Biblical narrative (1984), in which she follows a text-focused (literary) methodology. She mainly makes use of rhetorical criticism, conducting interactive close readings of texts. She is of the opinion that every reader brings certain perspectives to the text and that there can therefore not be a final interpretation of a text. Trible is committed to the Bible, but she reads it differently than it has traditionally been read and critiques unchallenged patriarchal interpretations to affirm feminist goals.
A comparison between Schüssler-Fiorenza and Trible
The greatest similarity between Schüssler-Fiorenza and Trible is the fact that they are both reconstructionist Christian feminist theologians who seek a liberating theological core for women within the Christian tradition, but also work towards a transformation and reconstruction of society. This calls for repentance by righting wrongs. The wrong of primary concern here is the effect of patriarchy on people's lives (Clifford, 2001:33-34). In order to achieve these stated goals, both Schüssler-Fiorenza and Trible employ a hermeneutics of suspicion, remembrance and reconstruction in their work.
Schüssler-Fiorenza shows special interest in what lies behind the text in its historical setting. She mentions in her analysis of the book of Luke that the author emphasises that the execution of Jesus as the king of the Jews was a failure of Roman justice under pressure from the Jewish leadership. Luke uses this to subvert Jewish political and Roman universalist tendencies, and he plays down Jewish political hopes in favour of imperial Roman theology (Schüssler-Fiorenza, 1992:212). Schüssler-Fiorenza (1992:211) further indicates that the author of Luke intends to write an historical account of Christian beginnings for an elite male audience whose domain is history. Although these statements are not given directly in the texts, Schüssler-Fiorenza is of the opinion that they can be deduced from what is stated, or not, in the text.
She also makes use of form criticism as a tool to reconstruct the social life and institutions of Biblical communities and pays particular attention to the effects of patriarchy at the time. She indicates that patriarchy originated centuries ago and was mediated through Christianity (Schüssler-Fiorenza, 1992:203). Emphasis is placed on Luke's rhetorical strategies to reveal the patriarchal structures that are inscribed in them.
Her interest in the context in which certain ideas are expressed provides evidence of her use of tradition criticism. She also reconstructs prehistories of texts, which indicates her use of redaction criticism. Schüssler-Fiorenza (1992:210) explains in her analysis of the woman who was bent double that the androcentric tendencies of Luke's version should not be explained away or seen as time-conditioned, but that these tendencies, as well as the political strategies used in the text, should be brought to light. This can be done through a reconstruction and recontextualisation by reading against the ideological grain (Schüssler-Fiorenza, 1992:212). The assertion that the Jewish people and leaders have rejected Jesus and caused his death is articulated in the Lucan text and should be reconstructed in the same way (Schüssler-Fiorenza, 1992:212).
Trible, on the other hand, focuses on the text itself and on the relationships of its various components to one another. Her summary of the Eve story (Trible, 1984) serves as a good example of how literary form and theological content cannot be separated. Her emphasis on the meaning of the words and terms, as well as her concern for why one term is used and not another, marks the literary orientation of her work. The fact that she emphasises the text's reference to "earth creature" rather than "man" at creation, for instance, is at the core of validating her interpretation. At several places in her analysis of the story of Jephthah's daughter she indicates the significance of the repetition of phrases. Trible (1984:103) explains that the word that has "gone forth" from Jephthah's mouth in his vow in Judges 11:36 has become the daughter who has "gone forth" from his house in Judges 11:34. The same word is used in Hebrew and Trible links the two usages of the word.
Special attention should be paid to the fact that Trible relies on rhetorical criticism, and in the story of Jephthah's daughter it is evident how she used this method to uncover the rhetoric in the text. She points out that Jephthah made a human vow because of unfaithfulness. This vow can easily persuade the interpreter that Jephthah is devoted to the Lord. It would then seem justified that he keeps his vow and sacrifices his daughter. Trible's close reading, however, exposes Jephthah's unfaithfulness and distrust in Yahweh as well as his daughter's premature, violent and undeserved death.
It is clear that although Schüssler-Fiorenza and Trible are both classified as reconstructionist Christian feminist theologians, they differ in their approaches to the text. Their opposite stances in this regard make them good candidates for the last section, where their work will be evaluated and applied in a South African context with a focus on the reformed tradition. Before that can be done, however, the contours of the South African context should be drawn.
5. Women and the church in mainline reformed churches in South Africa
A brief history and the current role of women in the church
The role of women in the church has been researched in South Africa for decades. According to Janse van Rensburg (2002:720), when the issue became a burning one in the latter quarter of the twentieth century, both the NG Kerk (NGK: Dutch Reformed Church) and the Nederduitsch Hervormde Kerk (NHK) based their assessments of the situation on the belief that gender should not serve as the basis for any form of discrimination among church members. In research done by Van Helden (2002:762) women from both these denominations indicated that they have enough opportunities to participate in the church. In the Gereformeerde Kerke in Suid-Afrika (GKSA), however, the issue has not yet been resolved. Janse van Rensburg (2002:720) indicates that although the Bill of Rights included in the Constitution of South Africa (1996) has secured the equality of gender, women are still being discriminated against in society and, of more relevance to this study, in the church. The following subsections take a closer look at this issue.
Nederduitsch Hervormde Kerk van Afrika (NHK)
The Nederduitsch Hervormde Kerk van Afrika (NHK) must be credited for being the first of the Afrikaans protestant churches to allow a women's league as well as allowing women to serve in the offices. The following steps in the development of the role of women in the NHK should be noted:

• The traditional attitude of the NHK towards women in the special offices can be seen in a 1937 report on the right of women to vote in the church. It is explained that equality between men and women in the church is unacceptable (Bergh & Barnard-Weiss, 1999:94).
• In 1940, however, the NHK became the first of the three mainline Afrikaans protestant churches to allow a women's league (Bergh & Barnard-Weiss, 1999:91).
• Several factors, like the changing position of women in society, led to women being allowed to vote in the church in 1957 (Bergh & Barnard-Weiss, 1999:99).
• Kleynhans (1983:19) notes that the general church meeting of the NHK allowed women as deacons in 1973. Dreyer (1999:385), a researcher from within the NHK, indicates that this decision was taken already in 1970.
• At the following general church meeting in 1976 it was decided that research should be done on the possibility of allowing women as ministers (Kleynhans, 1983:19). Dreyer (1999:386) indicates, as one of the results of this inquiry, a book by P.S. Dreyer (1977) entitled Vroue as predikante? (Women as ministers?). This book formed a point of departure for the subsequent debate on the matter.
• A report regarding women in the office of minister (VDM) was presented at the 1979 general church meeting and it was decided that women can serve in this office (Kleynhans, 1983:19).
• By 1983 female ministers attended the general church meeting (Bergh & Barnard-Weiss, 1999:92). At this meeting a decision was taken to allow women to serve also as elders (Dreyer, 1999:387).
• In 1995 it was decided to accommodate women in terms of inclusive language (Bergh & Barnard-Weiss, 1999:113).
• The NHK also confirmed its willingness to publicly allow equality in 1998 when a female minister was elected as a curator.
According to Bergh and Barnard-Weiss (1999:92-97) the theoretical equality within the NHK did not necessarily result in an increase in women's rights within the church.
Nederduitse Gereformeerde Kerk (NGK)
According to Du Pisani (1996:250) the events in the NGK as regards finding a place for women in the special offices reflect a society that wants to be freed from the old practices in a patriarchal system that does not cater for the practical demands and values of a changing environment. The developments that took place in the NGK began in the 1940s. The following dates and events are of significance:

• In 1944 it is decided in the Orange Free State that female deacons can be appointed to serve where the need may occur. It is noted that this is not a fourth office but only a position for women to serve. In 1952 the term "diakonesse-hulpdiens" (assisting deacons) is coined (Kleynhans, 1983:17).
• During the 1960s practical issues led to the reconsideration of women in the offices. A commission was appointed in 1966 to investigate the issue (Du Pisani, 1996:251).
• In 1970 it is still taken for granted that women are not allowed to become elders or ministers. The possibility of women becoming deacons is researched.
• At the synod meeting of 1974 it is emphasised that women are not allowed as elders or ministers, but it is still not settled on Scriptural grounds whether or not they can be deacons. A commission has to do further research.
• In 1978 it is decided that the Bible does not give a final answer to the question of women in the offices. The issue is referred to the regional synods to be considered. The general synod approves of women serving as assistant deacons (diakonesse-hulpdiens).
• At the general synod of 1982 it is finally decided that women are accepted in the office of deacon.
• Women as deacons are confirmed at the synod of 1986. The decision regarding the offices of elders and ministers is postponed until the following meeting.
• In 1990 it is decided that women can serve in all the offices including elders and ministers.
In summarising this development, Du Pisani (1996:241) states: "The conclusion is reached that a definite shift away from traditional fundamentalism towards a more open and less dogmatic approach has taken place in the NGK". Even though Du Pisani also mentions the fact that in practice equality is less visible, it is of more significance to this article that women received their rightful place in the church on the basis of liberating (not liberal!) hermeneutics. This will happen wherever people accept and apply sound hermeneutical principles, which seek to steer clear of fundamentalism on the one hand and relativism on the other.
Gereformeerde Kerke in Suid-Afrika (GKSA)
Van Deventer (2005:690-696) gives a summary of the decisions taken by the GKSA regarding the role of women in the churches and he identifies the following events:

• At the national synod of 1918 the issue of women voting for male office bearers was tabled for the first time. Forty years later (1958) it is decided that the right of women to vote in the church implies a right to govern, and to be elected to the church offices. Hence, voting rights should not be granted to women in the church.
• In 1979 a commission was appointed by the synod to study the role and place of women in the church with regard to offices, service and voting.
• In 1982 another commission was appointed with the same assignment as the previous one, since the report presented by the 1979 commission at the synod of 1982 exhibited a lack of conclusions and recommendations.
• The process was repeated yet again at the synod of 1985. Significantly, the 1982 commission stated in their report with regard to the office of deacon that Scriptural evidence indicated the eligibility of women to be called to this office.
• An extensive report on the place of women in the church was tabled at the synod of 1988 and it was decided that since women are full members of the congregation, they might participate in the election of male office bearers.
• In the same report the conclusion was reached that in exceptional circumstances women can be gifted and called by God to do certain services. The conclusion, however, was deemed by the synod not to be based on solid Scriptural grounds and therefore it was not accepted.
• Twice, in 1994 and 1997, appeals against the 1988 decision were tabled, but in both cases these were unsuccessful. Another commission was formed in 1997 to contact churches abroad regarding their positions on allowing women into the offices.
• A further commission appointed in 2000 had to study "what the Bible reveals about the manner in which the Lord used and still uses women in his church".
• Based on this report the GKSA decided at the synod of 2003 to reverse its decision on the issue of women in the offices, allowing women to serve as deacons.
Van Deventer (2005:696) focuses his study on this 2003 synod decision. It should be noted, however, that a further change occurred at the synod of 2006, when an appeal against the 2003 decision allowing women in the office of deacon was upheld and that decision was once again reversed. This latest decision was overturned in 2009, when the synod decided that women may serve in the office of deacon, but that the offices of elder and minister remain the sole prerogatives of men.
Reasons for these developments
On a theoretical level Bergh and Barnard-Weiss (1999:95) identify a patriarchal paradigm as one of the issues to be kept in mind when considering the role women play in the church. The general demise of this social paradigm after World War II, and especially since the advent of second-wave feminism in the 1960s (Weedon, 2003:111), also led to a reconsideration of the role of women in the church. Bergh and Barnard-Weiss (1999:99) indicate that the interpretation of the Bible is directly linked to the frame of reference of the interpreter. A shift in the patriarchal frame of reference also meant a shift in the way the Bible was read on the issue of women in the church. On a practical level other factors, such as urbanisation, which placed a bigger workload on the church and gave women an opportunity to play a role in church activities, as well as a lack of willing men to serve on church councils, played a significant role in allowing women in the special offices (Bergh & Barnard-Weiss, 1999:101-106).
Du Pisani (1996:255) explains that at the end of the 1970s the old generation of traditionalists in the NGK hierarchy was on its way out. A change therefore occurred in the NGK's official view on the position of women in the church. Van Deventer (1990:5) indicates that this change was grounded on a new interpretation of Biblical texts that were previously used to justify male superiority. This new interpretation was related to the historical-critical method, which opened the eyes to the vast historical differences between an ancient text related to the Greco-Roman world and a present-day context. In the South African context the results of historical criticism were incorporated in the theological debate especially in the context of the NHK, albeit, according to Breytenbach (1999:176), through the mediation of dialectical theology. The use of this method and its results by an NHK theologian can be seen in a study related to the issue of women in the ministry dating already from the mid-1970s (cf. Pelser, 1976). This study received the accolade "excellent" from an NGK theologian after the turn of the century (Du Toit, 2001:172).
Studies related to the issue of women in the ministry in the GKSA do not refer to this article by Pelser, probably because a historical-critical approach is not appreciated in that context. Van Deventer (2005:696) points out that the core problem in the GKSA is a hermeneutical one. The lack of reflection by the GKSA on hermeneutical developments in the twentieth century resulted in too narrow an approach to the text. This naïve-realistic approach that is followed in the GKSA limits the interpretation of the text by, among other things, ignoring the role of the reader. This implies that regardless of the circumstances, the GKSA has interpreted certain texts in the "correct" way, leaving no room for new ideas and approaches, such as, among others, historical criticism. The possibility that this "correct way" of reading is flawed by the influence of the reader's presuppositions is not considered. Might this be the reason why objections against the synod's decision were in some cases just viewed as "not founded on good exegesis" (Van Deventer, 2005:692)? Should this be true, more questions will arise, like "What is good exegesis?" or "Which approach is correct?" These questions are very relevant to this study and beg the following: Can the approaches to the text of Schüssler-Fiorenza or Trible be viewed as "good exegesis"?
Hermeneutical evaluation
In theory the NHK and NGK have overcome the problem regarding the role of women in the church. This study focuses primarily on the GKSA because a theoretical solution to the problem of the role of women in the church is still needed there.
One of the main reasons for the lack of a theoretical solution to the problem concerning the role of women in the GKSA is that central developments in hermeneutics have not been investigated thoroughly. This results in limitations with regard to the newer insights in terms of historical and literary aspects of texts, as well as the role played by the reader in interpretation. The traditional approach to exegesis within the GKSA is labelled as grammatical-historical (Krüger, 2006:28). An illustration thereof can be seen in the explanation of Coetzee et al. (1980:26) of what Paul meant in 1 Corinthians 14:34 when he commands women to be quiet during services. In terms of the historical part of this approach the GKSA in essence does not take into consideration the important distinction between the "history in the text" and the "history of the text" (Hayes & Holladay, 1987:45). The first expression refers to what the text narrates about events while the second is concerned with the history of the text itself.
With regard to the grammatical aspect of the grammatical-historical approach, the GKSA sees it as referring to linguistics. Krüger (2006:28) points out that the term grammatical encompasses the confession that the Bible was inspired by God, but it also implies that every word carries meaning. Kaiser (1994:33), however, indicates that when Keil originally used this term in 1788, it referred to the term literal, meaning "simple, plain, direct or ordinary", and not necessarily to linguistics. In concluding this hermeneutical evaluation it should be said that the existing interpretation model has not seriously considered the developments in hermeneutics, and specifically Biblical hermeneutics, over the past half century. Recent contributions from theologians still working in the old paradigm failed to make any impact regarding decisions about women in the offices of elder and deacon (cf. Breed et al., 2008; Helberg, 2008). It just goes to show how interpreters are in the final analysis also influenced by their subjective perspectives on "what the text says", another aspect that hermeneutical theory has underlined especially over the past three decades (cf. Osiek, 1997:963; Van Deventer, 2005).
A consideration of Schüssler-Fiorenza and Trible's approaches to the issue of women in the church
It was evident from a previous section that both Schüssler-Fiorenza and Trible emphasise the role of the Biblical text in their hermeneutical approach, but each introduces new insights in dealing with the text. These insights are rooted in the development of hermeneutics. The reason for their approaches to the text is the fact that they questioned the kind of interpretation that led to the subordination of women in Christian communities. Their "hermeneutics of suspicion" referred to above (4.3) is suspicious of the unstated power relations informing oppressive interpretations. In the reformed tradition this kind of questioning should always be present, since this tradition took root in the questioning of the authoritative interpretation of the church. Modern men and women in the GKSA should once again evaluate their faithfulness to Scripture in terms of the role of women in the church. According to Van Helden (2002:768) many women in the GKSA feel that they have been deceived for decades. They are of the opinion that un-Biblical interpretations, translations and traditional views and practices have kept them from the truth.
6.1 The historical approach and reformed theology in South Africa

According to Le Roux (1994:198) the future of historical criticism in South Africa is linked to its past. South Africa has missed the challenge of dealing with the results of the Aufklärung that introduced the historical-critical method in Europe. This made European theologians realise the humanness of the Old Testament. South Africa, instead, took another direction because we never experienced the pressure of working with the historical-critical method and its results, except to some extent in the NHK tradition as noted above.
Le Roux (1994:199) elaborates that our theological past did not accommodate the historical-critical method and its results, and therefore neither the historical-critical method nor a critical theology has taken root in South Africa. Spangenberg (1994:156) summarises the historical-critical paradigm as follows: the Bible is a collection of old Near-Eastern religious writings, which were written by limited people who in their humanness can err. Several of these writings developed over a long period of time and they were often written by more than one author. These writings include the religious insights and religious testimonies of those people and their contemporaries. These people lived during a specific time in history at specific places on earth. To understand the Bible correctly, the reader must possess the necessary knowledge in terms of the history of Israel and other old Near-Eastern nations as well as early Christianity, the cultural setting of those people, their worldview and their religious beliefs and practices.
Le Roux (1994:200) identifies two reasons why historical criticism was neglected. Firstly, it was the belief of many followers of the structural (text-immanent) approach that theirs was the only valid approach to the Bible. Secondly, there was the estrangement of history and exegesis that included the modernist belief that "history" referred to "historical facts" investigated only to see if something really happened. This view of history has caused alienation between history and theology and between history and exegesis (Le Roux, 1993:23-42).
Van Helden (2002:756) indeed mentions that the Bible must never be separated from the cultural-historical information that forms part of it. Vergeer (2002:668) points out, however, that some people are upset by the change in meaning that is brought about by using information outside the text. In order to change the current situation in terms of the historical-critical approach to the Bible, a historical consciousness must be cultivated and the method rediscovered as a means of giving meaning to life (Le Roux, 1994:202).
Interpreters should take into consideration as many socio-historical perspectives as possible to understand the meaning of a specific part of Scripture in its original context (Vergeer, 2002:670). The historical approach to a text will help to discover which guidelines are timeless and which ones are time-bound. Schüssler-Fiorenza's use of the historical approach will now be analysed to see to what extent it can make a positive contribution to the debate.
Schüssler-Fiorenza's approach
Schüssler-Fiorenza's awareness of the way in which the context of both the text and the reader determines the meaning of a text is of great value. It was seen that Schüssler-Fiorenza is of the opinion that a text has a specific message within a specific community. The reader is not always aware of the liberating message that texts offer, and therefore the prejudice of the reader should be dismantled for this message to surface. If the GKSA can apply this insight through a model that goes beyond tradition and patriarchy, and in so doing encounter the liberating message of some texts, it can lead to the liberation of women in the church. In order to apply her insights to the current issue, her hermeneutical objectives as identified by Ng (2002:13) will serve as the foundation.
• Schüssler-Fiorenza (1975:605-626) makes use of the critical theory of the Frankfurt School that criticises what is experienced as alienation and oppression. Women are alienated from taking their rightful place in the GKSA, which includes serving as leaders. This alienation, in essence, is a form of oppression. Schüssler-Fiorenza's approach can therefore be applied to criticise the alienation that is experienced in the GKSA.
• Schüssler-Fiorenza (1983:6) employs some ideas of liberation theologies, which gave the insight that theology is engaged for or against the oppressed. According to Van Helden (2002:770) the women in the GKSA have an intense, suppressed and mostly unspoken desire to be equipped and acknowledged as doers of the Word. If a liberation hermeneutics is applied, this limitation can be rectified. The GKSA women themselves recommend that opportunities for participation in the body of Christ by women must be expanded so that liberation in Christ can be experienced in its fullest sense (Van Helden, 2002:770). It is clear that women in the GKSA cannot be reckoned as following revolutionary feminist theology, hence the years of tolerance. The GKSA woman must, however, take her position in Christ and call on the liberating Word of God to correct the theology that limited her as the oppressed in the past (Van Helden, 2002:768).
• Schüssler-Fiorenza (1992:46) often uses the term rhetorical criticism to denote the approach of a communication that links knowledge with action. Wuellner (1987:448) states in a similar vein: Rhetorical criticism of literature takes the exegetes of biblical literature beyond the study of theological or ethical meanings of the text to something more inclusive than semantics and hermeneutics.
Wuellner (1987:449) further explains that rhetorical criticism goes beyond language as a reflection of reality and focuses rather on language as a possible instrument of influence. This form of rhetorical criticism is one that emphasises the persuasive power of a text. In the context of the GKSA it has to be noted that the persuasive power of a text may and can work against traditional ideological stances such as patriarchy. In fact, a reformed model of understanding demands that it does exactly this.
• Schüssler-Fiorenza (1984:8) finds liberation theologies alone not critical enough of patriarchy as a system oppressive to women. Thus she insists on putting the liberation of women at the centre of her thinking. This implies that a text is approached from the viewpoint and experience of women. This approach is needed since the Bible at large is more "male-centred". Janse van Rensburg (2002:721) lists some problems in this regard related to hermeneutics and the reading of sacred texts. A women-centred interpretation cannot be done by men alone, since it primarily involves women's experience. In the context of the GKSA this voice can only be heard if the doors at the synod meetings are opened for women as well.
Finally, the importance of developing a historical consciousness should be stressed. A historical understanding of reality and the text is more important than a specific method. However, it is in the search for truth behind the text that the historical-critical method can play an indispensable role (Le Roux, 1994:201-202).
The literary approach and reformed theology in South Africa
As early as 1933, Du Plessis (1933:523) posed the question whether or not the view of the GKSA on the role of women was in accordance with Scripture or whether it was only an old human tradition. When Scripture is approached in the text-centred way Trible suggests, an answer to this question is possible. It seems, however, that such an approach is not yet utilised by theologians of a Reformed persuasion.
This "new" approach among South African Biblical scholars developed almost four decades ago.The development led to a new understanding of the Bible and terms like "structural analysis" and "immanent exegesis" were coined to describe it.It had far-reaching consequences for the understanding of the Old Testament, but at the same time also led to an undervaluation of historical criticism (Le Roux, 1994:200).
In 1971 Vorster proposed this new approach in a paper read at the annual meeting of the New Testament Society of South Africa (cf. Le Roux, 1993:28). In this proposed literary approach the focus fell on the final form of a text, which led to the dismissal of information about the text's historical growth. The terminology introduced included the following: diachrony (referring to the historical approach), synchrony, and structural analysis. As Le Roux (1993:28) puts it: "Something really new was introduced, which set New Testament scholars in motion and resulted in a new approach that subsequently received the status of a 'normal science'".
Trible's approach
Pieterse (2002:712) notes that another question in the debate on the role of women in the GKSA is whether men and women are equal before God. The answer to this question largely determines people's interpretation of relevant Scriptures regarding women in the offices. Trible's (1978:72-143) interpretation of Genesis 2-3, the "Eve story", convincingly answers this question.
• Trible shows how she grounds in Scripture the notion that men are not superior to women, as was believed for centuries under the influence of a patriarchal ideology. This belief resulted in the GKSA's interpretation of Scripture in a male-orientated way and its view of women in the church.
In the very act of distinguishing female from male, the earth creature describes her as 'bone of my bones and flesh of my flesh' (Gen. 2:23). These words speak unity, solidarity, mutuality and equality. Accordingly, in this poem the man does not depict himself as either prior or superior to the woman. His sexual identity depends upon her even as hers depends upon him. For both of them sexuality originates in the one flesh of humanity. (Trible, 1978:98-99.)

• When Trible refers to the concepts of "unity, solidarity, mutuality and equality", these refer to men and women as a whole. The implication is therefore that they are equal in all aspects of life, including church life. This further implies that women and men can fulfil any function in the church, be it deacon, elder, minister or Sunday school teacher.
• According to Trible (1978:101) nowhere in the narrative does the phrase "taken from", used in Genesis 2:21, carry a connotation of subordination. For both the man and the woman, life originates with God. This confirms what was said previously: men and women are in essence equal before God.
• Trible (1978:97) furthermore explains that it cannot be derived from Genesis 2-3 that men are superior to women or have power over them, since no purpose is stated in God's bringing of the woman to the earth creature. Previously God brings animals to the earth creature to name and plants to take care of, but God specifically does not give the earth creature authority over the woman. She thus does not fit the pattern of dominion that was seen in the previous episodes with the animals and the plants.
If this interpretation, that men have no God-given authority over women and are not superior to them, is applied in a church context, then commissions will not be needed to research the role of women in the church endlessly and without result. The text-immanent approach can also be applied to interpret the "problem texts", like 1 Corinthians 14 and 1 Timothy 2, to liberate women from the belief that they are subordinated by divine command.
Conclusion
In this article newer developments within the field of Biblical interpretation were noted. After discussing what one such development, namely feminist criticism, entails, the attitude of reformational theology as expounded in the GKSA towards both the historical and literary approaches to the Bible was discussed. Some of the Afrikaans churches within a reformed tradition seem to remain critical of historical approaches (as used by Schüssler-Fiorenza). It may further be concluded that there seems to be limited knowledge related to the application of developments in interpretation from a text-centred approach (as used by Trible). This implies that such churches are, because of ignorance regarding and/or rejection of new methods, stuck in old approaches to the text. Such a rigid approach within the dynamic science of Bible interpretation has resulted in women not being allowed to fulfil their calling, having instead to play a submissive role in the church. | 10,556 | 2009-07-26T00:00:00.000 | [
"Philosophy"
] |
Floquet bound states around defects and adatoms in graphene
Recent studies have focused on laser-induced gaps in graphene which have been shown to have a topological origin, thereby hosting robust states at the sample edges. While the focus has remained mainly on these topological chiral edge states, the Floquet bound states around defects lack a detailed study. In this paper we present such a study covering large defects of different shapes as well as vacancy-like defects and adatoms at the dynamical gap at $\hbar\Omega/2$ ($\hbar\Omega$ being the photon energy). Our results, based on analytical calculations as well as numerics for full tight-binding models, show that the bound states are chiral and appear in a number which grows with the defect size. Furthermore, while the bound states exist regardless of the type of the defect's edge termination (zigzag, armchair, mixed), the spectrum is strongly dependent on it. In the case of top adatoms, the bound states' quasi-energies depend on the adatom's energy. The appearance of such bound states might open the door to the presence of topological effects on the bulk transport properties of dirty graphene.
I. INTRODUCTION
Driving a material out of equilibrium offers interesting paths to alter and tune its electrical response. A prominent example is the generation of light-induced topological properties, 1-3 e.g. illuminating a material like graphene to transform it into a Floquet topological insulator (FTI). Very much like ordinary topological insulators (TI), [4][5][6][7] FTIs have a gap in their bulk (quasi-)energy spectrum (being then a bulk insulator) and their Floquet-Bloch bands are characterized by non-trivial topological invariants. 3,8,9 In addition, and despite some important differences with TIs, 8,10 FTIs show a bulk-boundary correspondence and hence host chiral/helical states at the sample boundaries.
The emergence of such non-equilibrium properties has been intensively investigated in recent years in a variety of systems including graphene [11][12][13][14][15][16][17][18][19] and other 2D materials, 20,21 normal insulators, 2,22 coupled Rashba wires, 23 photonic crystals, 24 cold atoms in optical lattices, [25][26][27][28][29][30][31] topological insulators, [32][33][34][35][36] and also classical systems. 37 The research interest has focused on many different aspects of the problem, such as the characterization of the edge states, 16,17 different signatures in magnetization and tunneling, 38,39 the proper invariants entering the bulk-boundary correspondence, 8,10,19,40 their statistical properties, 41,42 the role of interactions and dissipation 41,[43][44][45][46] and the associated two-terminal 47,48 and multiterminal (Hall) conductance both in the scattering 49 and decoherent regimes. 45 So far, however, the experimental confirmation of the presence of such edge states has only been achieved in photonic crystals. 24 Nonetheless, in condensed matter systems the Floquet-induced gaps have already been observed at the surface of a topological insulator (Bi2Se3) by using time- and angle-resolved photoemission spectroscopy (tr-ARPES). 33 More recently, effective Floquet Hamiltonians were realized in cold matter systems. 50 Despite the intense research on FTIs, most of the studies address pristine samples. Besides occurring naturally in any sample, defects will also host Floquet bound states when the sample is illuminated. If the defects are extended, the presence of the associated Floquet bound states might allow for new experiments probing them. This motivates our present study. Specifically, taking laser-illuminated graphene as a paradigmatic example of an FTI, we study Floquet bound states around defects in the bulk of a sample. We show that chiral states circulate around holes or multi-vacancy defects of different shapes and lattice terminations (zigzag, armchair or mixed) like the ones shown in Fig. 1. The properties of these states (quasi-energies and their scaling with the system parameters, associated probability currents, etc.) are characterized using both numerical simulations, by means of a tight-binding model, and analytical approaches, by solving the appropriate low-energy Dirac Hamiltonian in a reduced Floquet space. Quite interestingly, these bound states persist even in the limit of a single vacancy defect. Furthermore, bound states are found around adatoms that sit on top of a C atom (like H or F, for instance).
While the presence of Floquet bound states around vacancy-like defects or adatoms might jeopardize the experimental observation of laser-induced gaps, they could, on the other hand, also open the route towards the observation of interesting topological transport phenomena in dirty bulk samples, for instance by changing localization or percolation properties.
The rest of the paper is organized as follows. First, we introduce our low-energy model and the associated analytical Floquet solutions (Sec. II). Several particular cases are presented in Section III, namely, large holes with zigzag or armchair edge terminations, as well as defects consisting of regions with a staggered potential. The chiral nature of the currents associated with the bound states is discussed in Sec. IV. In Sec. V we compare our solutions with numerical calculations on a tight-binding model. The case of point-like defects such as vacancies or adatoms is presented in Sec. VI. We finally conclude in Sec. VII.
II. THE LOW ENERGY MODEL AND THE FLOQUET SOLUTION
Let us consider an irradiated graphene sample with a single defect. Since the bound states we want to describe are topological in origin [16,17,19], the specific form or nature of the defect (see Fig. 1) is irrelevant for probing their existence, though the details of the quasi-energy spectrum and the particular form of the wave-functions will depend on it. To simplify the discussion we will start by assuming that the defect potential does not mix the different graphene valleys (Dirac cones); this assumption will be relaxed when discussing particular examples. Hence, the low-energy behavior around both cones can be described by the Hamiltonian
$$\hat{H}(t) = v_F\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + \frac{e}{c}\,\mathbf{A}(t)\right), \qquad (1)$$
if we use the isotropic representation in which the $K$ and $K'$ cones are described by four-component wave-functions. Here $v_F \simeq 10^6\,$m/s denotes the Fermi velocity, $\boldsymbol{\sigma} = (\sigma_x, \sigma_y)$ represents the Pauli matrices describing the pseudo-spin degree of freedom (sites A and B of the honeycomb lattice), $e$ is the absolute value of the electron charge, $c$ is the speed of light and $\mathbf{A}(t) = \mathrm{Re}\left[\mathbf{A}_0\, e^{i\Omega t}\right]$ is the vector potential of the electromagnetic field (a plane wave incident perpendicularly to the graphene sheet); the associated electric field is $\mathbf{E}(t) = -\tfrac{1}{c}\,\partial_t\mathbf{A}(t)$. It is important to emphasize that while we will refer to graphene from here on, our results apply to any massless Dirac fermion system described by Eq. (1).
Since for solving the time-dependent Schrödinger equation we will take advantage of the Floquet formalism [51,52] used to deal with time-periodic Hamiltonians, it is instructive to briefly introduce its basic ideas (for more extensive general reviews we refer to Refs. [53] and [54]). The Floquet theorem guarantees the existence of a set of solutions of the form $|\psi_\alpha(t)\rangle = e^{-i\varepsilon_\alpha t/\hbar}\,|\phi_\alpha(t)\rangle$, where $|\phi_\alpha(t)\rangle$ has the same time-periodicity as the Hamiltonian, $|\phi_\alpha(t+T)\rangle = |\phi_\alpha(t)\rangle$ with $T = 2\pi/\Omega$ [51,53]. The Floquet states $|\phi_\alpha\rangle$ are the solutions of the equation $\hat{H}_F\,|\phi_\alpha\rangle = \varepsilon_\alpha\,|\phi_\alpha\rangle$, where $\hat{H}_F = \hat{H} - i\hbar\partial_t$ is the Floquet Hamiltonian and $\varepsilon_\alpha$ the quasi-energy. Using the fact that the Floquet eigenfunctions are periodic in time, it is customary to introduce an extended $\mathcal{R}\otimes\mathcal{T}$ space (the Floquet or Sambe space [52]), where $\mathcal{R}$ is the usual Hilbert space and $\mathcal{T}$ is the space of periodic functions with period $T$. A convenient basis of $\mathcal{R}\otimes\mathcal{T}$ can be built from the product of an arbitrary basis of $\mathcal{R}$ (the eigenfunctions $|a_n\rangle$ of the time-independent part of the Hamiltonian, for instance) and the set of orthonormal functions $e^{im\Omega t}$, with $m = 0, \pm1, \pm2, \dots$, that span $\mathcal{T}$. Then $|\phi_\alpha(t)\rangle = \sum_m e^{im\Omega t}\,|u^\alpha_m\rangle$, where $|u^\alpha_m\rangle = \sum_n B^\alpha_{mn}\,|a_n\rangle$ are linear combinations of the basis states of $\mathcal{R}$. Written in this basis, $\hat{H}_F$ is a time-independent infinite matrix operator with Floquet replicas shifted by a diagonal term $m\hbar\Omega$ and coupled by the radiation field with the condition, for purely harmonic potentials, that $\Delta m = \pm1$.
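To make the replica structure concrete, the following is a minimal, self-contained sketch (not taken from this paper) of how a truncated Floquet matrix can be assembled and diagonalized for a generic harmonically driven system $H(t) = H_0 + V e^{i\Omega t} + V^\dagger e^{-i\Omega t}$; the matrices $H_0$ and $V$, and the number of retained replicas, are illustrative placeholders.

```python
import numpy as np

hbar_Omega = 1.0          # photon energy (sets the quasi-energy units)
N_FR = 3                  # replicas m = -N_FR ... N_FR kept in the truncation

# static part H0 and harmonic coupling V of H(t) = H0 + V e^{i Omega t} + V^dag e^{-i Omega t}
H0 = np.array([[0.0, 0.2], [0.2, 0.0]])
V = np.array([[0.0, 0.1], [0.0, 0.0]])

dim = H0.shape[0]
m_vals = list(range(-N_FR, N_FR + 1))
n_rep = len(m_vals)
HF = np.zeros((n_rep * dim, n_rep * dim), dtype=complex)

for i, m in enumerate(m_vals):
    # diagonal block: H0 shifted by m * hbar * Omega
    HF[i*dim:(i+1)*dim, i*dim:(i+1)*dim] = H0 + m * hbar_Omega * np.eye(dim)
    # off-diagonal blocks couple replicas differing by Delta m = +-1 only
    if i + 1 < n_rep:
        HF[i*dim:(i+1)*dim, (i+1)*dim:(i+2)*dim] = V
        HF[(i+1)*dim:(i+2)*dim, i*dim:(i+1)*dim] = V.conj().T

quasi = np.linalg.eigvalsh(HF)
# quasi-energies are defined modulo hbar*Omega; fold them into one Floquet zone
folded = (quasi + hbar_Omega / 2) % hbar_Omega - hbar_Omega / 2
print(np.sort(folded))
```

Restricting the block matrix to the $m = 0$ and $m = 1$ replicas reproduces the two-replica approximation used below for the gap at $\hbar\Omega/2$.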
In the absence of any defect, the Floquet spectrum presents dynamical gaps at different quasi-energies [1,17,19]. Here, we will focus on the gap, of order $\eta\hbar\Omega$, that appears at $\varepsilon \sim \hbar\Omega/2$ and look for bound states inside it. Since we will only consider the limit $\eta = v_F e A_0/(c\hbar\Omega) \ll 1$, it is sufficient to restrict the Floquet Hamiltonian to the $m = 0$ and $m = 1$ subspaces (or replicas) for the analytical calculations; the numerical results can retain a larger number ($N_{FR}$) of replicas if necessary. As discussed in Refs. [17] and [19], this restriction is enough to capture the main features of the quasi-energy dispersion and of the Floquet states when $\eta \ll 1$.
The reduced Floquet Hamiltonian $\tilde{H}_F$ describing states near $\varepsilon \sim \hbar\Omega/2$ then acts only on the $m = 0$ and $m = 1$ replicas. It is straightforward to see that $\tilde{H}_F\,\phi(\mathbf{r}) = \varepsilon\,\phi(\mathbf{r})$ implies that the components $u_{0B}(\mathbf{r})$ and $u_{1A}(\mathbf{r})$ can be eliminated, and hence only two functions, $u_{0A}(\mathbf{r})$ and $u_{1B}(\mathbf{r})$, have to be found. These functions satisfy a pair of coupled equations involving $p^2 = p_+p_- = p_-p_+$. Because we are interested in describing the effect of a defect, which breaks the translational invariance of the system, it is useful to change at this point to a polar coordinate system, $r$ and $\phi$, centered at it, and to rewrite the momentum operators in these variables. Similarly to the case of local defects in ordinary TIs [55,56], the solutions of Eq. (8) can be written as $u_{1B}(\mathbf{r}) = e^{il\phi} f(k_0 r)$ and $u_{0A}(\mathbf{r}) = e^{il\phi} g(k_0 r)$, with $l$ an integer. This follows from the fact that $[\tilde{H}_F, L] = 0$, where $L$ is a generalized angular momentum operator with $L\,\phi(\mathbf{r}) = \hbar l\,\phi(\mathbf{r})$ and $\phi(\mathbf{r})$ given by Eq. (6). In order to proceed further we define dimensionless variables, in particular the radial coordinate $\xi = k_0 r$ and the quasi-energy $\mu$ measured from the gap center. With this notation, the equations for $f(\xi)$ and $g(\xi)$ become Eqs. (12). For quasi-energies inside the bulk dynamical gap, the wave-function must decay far from the defect. Hence, let us look for a solution of the form $f(\xi) = c\,K_l(\lambda\xi)$ and $g(\xi) = d\,K_l(\lambda\xi)$, where $K_l(x)$ is the modified Bessel function of the second kind. Introducing this ansatz into Eqs. (12) we arrive at a condition for $\lambda$ and a relation between the coefficients $c$ and $d$. The equation for $\lambda$ has four solutions, which are complex conjugates in pairs. The two physical solutions correspond to $\mathrm{Re}(\lambda) > 0$, as this guarantees an exponential decay for large $r$.
Let us denote these two solutions as $\lambda_+$ and $\lambda_- = \lambda_+^*$. The region where $\mathrm{Re}(\lambda) > 0$ corresponds to $|\mu| < \eta/(1+\eta^2)$, that is, to quasi-energies inside the bulk dynamical gap $\Delta = \hbar\Omega\,\eta/(1+\eta^2)$ [17]. The other components of the Floquet wave-function can be readily obtained from $u_{0A}$ and $u_{1B}$, which is straightforward since $\left(\partial_\xi \mp \tfrac{l}{\xi}\right)K_l(\lambda\xi) = -\lambda\,K_{l\pm1}(\lambda\xi)$. It is worth pointing out that $\langle u_1|u_0\rangle = 0$, so that $\phi(\mathbf{r},t)$ can be normalized for any time $t$ in this approximation [17], which allows one to calculate not only time-averaged quantities but also their explicit time dependence.
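As an aside, the requirement $\mathrm{Re}(\lambda) > 0$ can be checked numerically. The short sketch below (purely illustrative values, not tied to the model parameters) evaluates $|K_l(\lambda\xi)|$ for a complex $\lambda$ and compares it with the asymptotic form $K_l(z) \sim \sqrt{\pi/2z}\,e^{-z}$, which makes the $e^{-\mathrm{Re}(\lambda)\xi}$ decay explicit.

```python
import mpmath as mp

l = 1
lam = mp.mpc(0.4, 0.7)   # illustrative complex decay constant with Re(lam) > 0

for xi in [1, 5, 10, 20, 40]:
    exact = abs(mp.besselk(l, lam * xi))
    asym = abs(mp.sqrt(mp.pi / (2 * lam * xi)) * mp.exp(-lam * xi))
    print(xi, mp.nstr(exact, 5), mp.nstr(asym, 5))
```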
To proceed further we need to specify the defect type, which allows us to set the appropriate boundary conditions. In the following we present a detailed discussion of some particular but relevant cases.
III. BOUNDARY CONDITIONS
The boundary conditions (BCs) must guarantee that the probability current perpendicular to the defect boundary vanishes. Here, we shall consider only three types of BCs that represent three generic cases and serve to illustrate the overall picture: the zigzag-like BC (ZZBC), the armchair-like BC (ABC) and the infinite mass BC (IMBC) [57]. Since the BC needs to be satisfied at any time, in Floquet space the boundary condition must be imposed on each replica separately. Therefore, the boundary problem is analogous to the static one and we shall follow Refs. [58] and [59] and use a matrix $M$ to introduce the appropriate relations between the components of the A and B sublattices and the two Dirac cones at the boundary for the three types of BCs [59,60]. An arbitrary BC can be written in the form $\Psi = M\,\Psi$ at $r = R(\phi)$ (18), where $R(\phi)$ defines the shape of the defect and the matrix $M$ (in the isotropic representation) is given by $M = (\boldsymbol{\nu}\cdot\boldsymbol{\tau})\otimes(\mathbf{n}\cdot\boldsymbol{\sigma})$. Here $\boldsymbol{\sigma}$ refers to the sublattice pseudospin and $\boldsymbol{\tau}$ to the valley (Dirac cone) isospin. The matrix $M$ carries all the information about the shape of the boundary via the unit vector $\mathbf{n}$. On the other hand, the nature of the honeycomb lattice termination is encoded in the unit vector $\boldsymbol{\nu}$, which rules whether the two Dirac cones mix or not. Namely, for a defect with a straight boundary [59],
ZZBC: $\boldsymbol{\nu} = \hat{z}$, $\mathbf{n} = \pm\hat{z}$;  ABC: $\boldsymbol{\nu}\cdot\hat{z} = 0$, $\mathbf{n} = \hat{z}\times\mathbf{n}_B$;  IMBC: $\boldsymbol{\nu} = \hat{z}$, $\mathbf{n} = \hat{z}\times\mathbf{n}_B$,  (20)
where $\mathbf{n}_B$ is a unit vector perpendicular to the defect boundary and pointing inwards. From the above expressions it is clear that while the armchair BC mixes cones, the zigzag and infinite mass BCs do not. In the following we shall be interested in the comparison between analytical and numerical results for simple geometries, and so we will restrict ourselves to defects with regular polygonal shapes with $N$ sides. The general form of $M_N$ for such cases is given in Appendix A.
While for the honeycomb lattice defects with well-defined terminations can only have $N = 3$ or $N = 6$, it is useful to discuss the limiting case of a circular defect and then compare with the numerics. For the ABC and IMBC this corresponds to the limit $N \to \infty$, while for the ZZBC care is needed to account for the change of the sublattice character of the edge atoms [$\mathbf{n} = \pm\hat{z}$ depending on the sublattice].
A. Circular defect with "zigzag" boundary condition

The ZZBC does not mix valleys. This is valid for arbitrary $N$, i.e. $M_N$ is diagonal in the isospin subspace. Moreover, it is also diagonal in the pseudospin subspace. However, it is possible, as in the hexagonal geometry, that different sides of the polygon terminate in sites corresponding to different sublattices. This is represented by $\mathbf{n} = \pm\hat{z}$ in Eq. (20), where the sign changes from side to side, thereby making it cumbersome to handle analytically. Hence, for the sake of simplicity, we will consider a 'fictitious' case where the $\pm$ sign is ignored and later compare with the exact numerical calculation. From here on we will refer to it as the circular ZZBC (cZZBC). This will help us to better grasp some aspects of the problem.
For a circular defect (of radius $R$) the BC implies, say, that $u_{1B}(r=R) = 0$ and $u_{0B}(r=R) = 0$; this corresponds to a honeycomb lattice that ends on A sites. To satisfy it we need to combine the two independent bulk solutions discussed in Section II, keeping the previous notation and writing $\xi_0 = k_0 R$. This fixes the relative weight of the two bulk solutions, i.e. it relates $c_+$ to $c_-$ and $d_+$ to $d_-$. Introducing these relations back into Eqs. (12) we obtain, for the $K$ cone, the equation for the quasi-energy ($\mu$) given in Eq. (23). The solutions $\mu_l$ of this equation form a discrete set of quasi-energies inside the bulk dynamical gap. Figure 2 shows them as a function of $\xi_0$ (throughout this work we use the same value of $\eta$). Notice that the symmetry between $l > 0$ and $l < 0$ is broken by the radiation field. The symmetry of the Floquet spectrum around the center of the gap ($\mu = 0$) is recovered when the complementary valley ($K'$ cone) is considered. For that, we recall that the solutions for the $K'$ cone can be obtained by a suitable relabeling of the components of the Floquet wave-function (see the appendix). This results in an additional set of quasi-energies that can be obtained from the condition of Eq. (25). It can be shown that the latter set of quasi-energies can be obtained from Eq. (23) by exchanging $(l, \mu) \to (-l, -\mu)$, which is precisely what is needed to recover the symmetry around $\mu = 0$. It is interesting to consider, for fixed $l$, the limit of very large radii, $\xi_0 \gg \xi_d = k_0\hbar v_F/\Delta = (1+\eta^2)/(2\eta)$, and to approximate $K_l(\lambda\xi_0)$ by its asymptotic expansion. By doing so, Eqs. (23) and (25) lead to simple approximate expressions for $\mu_l$ for the two valleys, respectively. This result can be understood in terms of the quasi-energy dispersion of the edge states in irradiated semi-infinite graphene sheets with a zigzag termination [17]. In that case, it was shown that, close to the center of the gap, the quasi-energy dispersion can be approximated by $\varepsilon_k = \hbar\Omega/2 \pm \hbar\Omega\eta^2/2 + \hbar v_F\eta k$. Our result for $\mu_l$ then reflects the fact that the wavevector $k$ along the defect's edge must be quantized. It is worth mentioning that in this large-radius limit the Floquet states have roughly the same weight on the two Floquet replicas.
B. Infinite mass boundary condition
The IMBC was introduced by Berry and Mondragon in Ref. [57] to study confined Dirac particles ('neutrino billiards'). It corresponds to adding a mass term to the Dirac equation only in a given region of space (in our case the defect) and taking the limit of that mass going to infinity. While this could be thought of as a local staggered potential in the honeycomb lattice, it must be kept in mind that this is only the case for a staggered potential much smaller than the bandwidth; if the staggered potential is too large it behaves like an effective hole (introducing inter-valley scattering depending on the geometry of the defect). The latter limit was not a problem in Ref. [57], because only a single unbound massless Dirac particle was considered there.
Since the IMBC does not mix valleys either, we can again treat both Dirac cones separately. We start with the circular geometry, which corresponds to the $N \to \infty$ limit of $M_N$. For the IMBC, $M_\infty$ is no longer diagonal in the pseudospin subspace and thus the A and B components of the wave-function are not independent any more. In fact, Eq. (18) requires a definite relation between the A and B components at the boundary [57] for the $K$ and $K'$ cones, respectively, where $j = 0, 1$ labels the Floquet replica. Following the same procedure as in the previous section, and using the same notation, these conditions relate the coefficients of the two bulk solutions, while the equation for the quasi-energies is given by Eq. (30). Here the $(-)$ and $(+)$ signs correspond to the $K$ and $K'$ cones, respectively. It can be shown that the above expression remains invariant under the change $(\mu, l) \to (-\mu, -l)$ for each cone separately and, therefore, unlike the cZZBC, the Floquet spectrum for the IMBC is symmetric around $\mu = 0$ for each cone. Using this symmetry of Eq. (30) it is straightforward to verify that there is no solution for $l = 0$ (which would necessarily correspond to $\mu = 0$). The IMBC Floquet spectrum is shown in Fig. 3 as a function of $\xi_0$. Note that the two cones have completely different spectra. This could be anticipated from the fact that the presence of both the staggered potential and the radiation field breaks the valley symmetry (cf. Fig. 5 below); it is worth mentioning that the bulk Floquet gap at $k = 0$ can even present a topological phase transition depending on the relative magnitude of the mass term and the radiation field [61]. When the defects are regular polygons, i.e. with finite $N$, the $M_N$ matrix acquires a non-trivial structure as a function of $\phi$. Thus, states whose quantum numbers $l$ differ by $N$ are coupled, thereby leading to avoided crossings. The equations for this case are rather cumbersome (some of them are presented in the appendix) but can be solved in a perturbative fashion. Some examples are presented in Sec. V in comparison with the numerical solutions of the tight-binding model.
C. Armchair boundary condition
The ABC is analogous to the IMBC in the pseudospin subspace, leading to similar quasi-energy spectra. The difference between the two boundary conditions lies in the isospin subspace: while the ABC mixes cones, the IMBC does not. Thus, the ABC exhibits additional avoided crossings between modes belonging to different cones (see the numerical results in Sec. V). Because the cones are mixed, they need to be treated together and hence the dimension of the Floquet space is doubled. The analytical procedure is similar to the one presented for the other BCs; its details are beyond the scope of the present work. We will therefore limit ourselves, for this case, to the discussion of the numerical results in Sec. V.
IV. PROBABILITY CURRENT DENSITY: CHIRAL CURRENT
So far we have mainly analyzed the spectrum of the Floquet bound states inside the dynamical gap (around $\hbar\Omega/2$) for a circular defect. Now we focus on their chiral nature. The velocity operator is given by $\hat{\mathbf{v}} = v_F\boldsymbol{\sigma}$ and hence the time-averaged (over one period) probability current density involves $\langle\sigma_\alpha\rangle_j = \{u^*_{jA,l}(\mathbf{r}), u^*_{jB,l}(\mathbf{r})\}\,\sigma_\alpha\,\{u_{jA,l}(\mathbf{r}), u_{jB,l}(\mathbf{r})\}^T$, with $j = 0, 1$ as before, $\sigma_r = \boldsymbol{\sigma}\cdot\hat{r}$ and $\sigma_\phi = \boldsymbol{\sigma}\cdot\hat{\phi}$. Using the solutions found in the previous section, and since $\lambda_+ = \lambda_-^*$, one can easily check that $\mathrm{Im}\left[f_l(\xi)f_l^*(\xi)\right] = \mathrm{Im}\left[g_l(\xi)g_l^*(\xi)\right] = 0$, so that the radial component of the current density vanishes, as expected; only the angular component remains. Figure 4 shows the spatial dependence of both the probability and the current density for the $K$ and $K'$ cones and for the two different boundary conditions analyzed in Sec. III. The curves correspond to a defect of $R = 30\,a_{cc}$, i.e., $\xi_0 = 1$ with the parameters used throughout this work. We have only retained the Floquet wave-functions with $l = 0, 1, 2$, whose corresponding quasi-energies can be read from Fig. 2 and Fig. 3 for $\xi_0 = 1$. Due to the oscillating nature of the Floquet wave-functions, both the probability densities and the current densities show relative maxima and minima (with the same or different signs in the case of the current densities) as a function of $\xi$. Nevertheless, all of them decay exponentially away from the edge of the defect. This is more evident for the Floquet wave-functions whose quasi-energies are close to the middle of the dynamical gap, as in that case the decay length is shorter. For quasi-energies close to the edges of the dynamical gap, the decay length becomes larger and larger and the $\xi^{-1/2}$ power-law decay, characteristic of the $K_l$ Bessel functions with purely imaginary argument, becomes apparent. In these latter cases, however, the current amplitude becomes several orders of magnitude smaller than in the former (see Fig. 6). For the cZZBC, Fig. 4 shows the equivalent roles played by the $K$ and $K'$ cones under the change $l \leftrightarrow -l$, as explained in Sec. III A. Unlike the cZZBC, for the IMBC the $K$ and $K'$ cones are inequivalent. In this case, as discussed in Sec. III B, the change $l \leftrightarrow -l$ leads to the same probability and current densities for each cone separately.
The lack of equivalence between the $K$ and $K'$ cones for defects with the IMBC is also present in systems other than circular defects. For illustrative purposes, Fig. 5 shows the $k$-dependent local density of states (LDOS) for a nanoribbon with both cZZBC and IMBC, projected on the $m = 0$ Floquet replica. Notice that, unlike the cZZBC, the IMBC presents an asymmetry (at each edge) with respect to the middle of the dynamical gap. The symmetry is broken by the presence of the mass term at the edges and it is only globally recovered when both edges are considered; this is so because for zigzag nanoribbons, as considered here, the atoms at the two edges belong to different sublattices.
Even though the current density oscillates as it decays away from the defect, the total current (the current density integrated over $r$) for the cZZBC has the same sign for all the bound states. This is the signature of the chirality of the Floquet states, and its sign only depends on the helicity of the circularly polarized radiation field. Figure 6 shows the total currents for both cZZBC and IMBC as a function of the quantum number $l$ for defects with $\xi_0 = 1, 5, 10, 20$. Unlike the cZZBC, the IMBC only presents chiral Floquet states for the $K$ cone. Analogously, Fig. 5 shows a similar behavior for the nanoribbon with the IMBC: while the $K$ cone presents two chiral states at each edge, the $K'$ cone has none.
Finally, it is interesting to analyze the value of the total current of a given bound state in the limit of a large defect. As discussed in Sec. III A, for large $R$ the quasi-energy dispersion can be related to that of a nanoribbon, as the boundary of the defect appears (locally) as a straight line (i.e. when the radius is much larger than the decay length). In that case the expected velocity of each bound state is $v_F\,\eta/(1+\eta^2)$ [17]. The inset of Fig. 6 shows the current, in units of $v_F$, for Floquet states with $l = 0$ (red points) as a function of the size of the defect. The black dotted line represents the expected value $\eta/(1+\eta^2)$; this is also indicated in the main figure. Clearly, there is good agreement with the expected value. A similar behavior is observed for states with different quantum numbers $l$ as the size of the defect increases.
V. COMPARISON WITH THE TIGHT-BINDING MODEL
In this section, we calculate the quasi-energy spectra within the dynamical gap numerically as a function of the size and shape of the defect for all three types of boundary conditions mentioned before, ZZBC, ABC and IMBC, and compare with the analytical results when possible.
In order to describe the electronic structure of irradiated graphene sheets near the Fermi energy, we resort to the widely used tight-binding Hamiltonian [62-64], which is written only in terms of $p_z$ orbitals with on-site energies $\epsilon_i$ for a carbon atom located at site $i$ and hopping matrix elements $\gamma_{ij}$ between nearest-neighbor carbon atoms. In second quantization it reads
$$\hat{H} = \sum_i \epsilon_i\, c_i^\dagger c_i - \sum_{\langle i,j\rangle}\left(\gamma_{ij}\, c_i^\dagger c_j + \mathrm{h.c.}\right),$$
where the operator $c_i^\dagger$ ($c_i$) creates (annihilates) a $p_z$ electron on site $i$. The effect of the laser is introduced through a time-dependent phase of the hopping matrix elements [1,65,66],
$$\gamma_{ij}(t) = \gamma_0 \exp\!\left[i\,\frac{2\pi}{\Phi_0}\int_{\mathbf{r}_i}^{\mathbf{r}_j}\mathbf{A}(t)\cdot d\boldsymbol{\ell}\right], \qquad (35)$$
where $\Phi_0$ is the magnetic flux quantum and $\gamma_0 \sim 2.7\,$eV [67]. By using Floquet theory [54,68,69] as described before one can compute the Floquet spectrum. Once again, one ends up with a time-independent problem in an extended space. In this case one can picture it as a tight-binding problem in a multichannel system where each channel represents the graphene sheet with a different number of photons [51,66,70]. It is worth mentioning that in the tight-binding method the time-dependent perturbation is never purely harmonic, given the exponential dependence of Eq. (35) on the radiation field amplitude. Hence, there is a coupling among all the replicas [66] and not just those with $\Delta m = \pm1$. Nevertheless, for $\eta \ll 1$, only the latter are relevant.
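To see explicitly how Eq. (35) couples all the replicas, the following is a minimal numerical sketch (illustrative parameter values, not those of the paper) that Fourier-decomposes the Peierls hopping for a circularly polarized field and compares the harmonics with the Jacobi-Anger result $\gamma^{(n)}_{ij} = \gamma_0\, i^n J_n(z)\, e^{-in\theta}$, where $\theta$ is the bond orientation; the rapid decay of $J_n(z)$ for small $z$ is what makes the $\Delta m = \pm1$ coupling dominant.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

gamma0 = 2.7        # bare hopping (eV)
z = 0.15            # dimensionless field strength, 2*pi*A0*a_cc/Phi0 (illustrative)
theta = np.pi / 3   # orientation of the bond d_ij with respect to the x axis

# time-dependent Peierls hopping for a circularly polarized field:
# gamma_ij(t) = gamma0 * exp(i z cos(Omega t - theta))
def hopping(tau):                      # tau = Omega * t
    return gamma0 * np.exp(1j * z * np.cos(tau - theta))

# n-th Fourier harmonic: gamma^(n) = (1/2pi) * integral of gamma_ij(tau) e^{-i n tau}
def harmonic(n):
    re = quad(lambda tau: (hopping(tau) * np.exp(-1j * n * tau)).real, 0, 2 * np.pi)[0]
    im = quad(lambda tau: (hopping(tau) * np.exp(-1j * n * tau)).imag, 0, 2 * np.pi)[0]
    return (re + 1j * im) / (2 * np.pi)

# Jacobi-Anger: e^{i z cos(x)} = sum_n i^n J_n(z) e^{i n x}
for n in range(4):
    analytic = gamma0 * (1j ** n) * jv(n, z) * np.exp(-1j * n * theta)
    print(n, harmonic(n), analytic)
```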
Because the problem in Floquet space becomes time independent, one can use standard techniques to calculate the quasi-energy spectrum. In this case we used the Chebyshev polynomials method [71], which provides an order-$N$ method of proven efficiency [72] (a schematic implementation is sketched at the end of this section). This allows us to tackle very large system sizes, so that our defect is far from the boundaries and can be considered a 'bulk defect'. For simplicity we only retained two Floquet replicas, just as in the analytical treatment of Section II. This is a good approximation whenever $\eta \ll 1$. The addition of more replicas would lead to the development of a hierarchy of bound states, in a similar way as for the edge states at the border of an irradiated graphene sample [19]. Defects were introduced in graphene by defining geometrical shapes (triangles, hexagons, and circles) and removing all atoms inside them (for the ZZBC and ABC) as well as any remaining dangling bonds. In the case of the IMBC, a staggered potential was introduced only inside the defect, i.e. we added on-site energies ($\pm\delta$) whose signs depend on the sublattice index. In all calculations we used $\delta = \gamma_0/2$, which is larger than $\hbar\Omega/2$ (taken to be $\sim\gamma_0/20$) but not so large as to become equivalent to a hole ($\delta \to \infty$ is equivalent to a hole defect). Triangles and hexagons in arbitrary orientations lead to edges with mixed zigzag and armchair terminations. However, for specific orientations with respect to the C-C bonds, it is possible to construct defects with only one termination type; we will refer to them as zigzag/armchair triangular and hexagonal defects. Circles, of course, are always a mixture of different edge terminations and, as we will show, present some special features. In all cases, the numerical calculations were performed using graphene samples of $1000 \times 1000$ unit cells. Figures 7 and 8 show a color map of the Floquet local density of states (FLDOS) inside the bulk gap (projected onto a few sites around the defect boundary, and on the $m = 0$ replica) as a function of the size of the defect, for hole and staggered-potential defects, respectively. The shape of the defect is indicated in the figures. Left panels correspond to zigzag terminations and right panels to armchair ones. Dashed (black) lines correspond to the solutions obtained from the continuum model (see the discussion below). It is apparent from the figures that discrete Floquet bound states do appear inside the dynamical gap. Interestingly, in most cases the quasi-energy spectrum resembles the one obtained with the analytical model proposed in Sec. II. This remains valid for the triangular-shaped zigzag hole, even though the analytical solution relies on the circular symmetry of the defect. It is worth mentioning that for a quantitative comparison an effective radius is needed; in these cases we used the mean radius $R = \frac{1}{2\pi}\int R(\phi)\,d\phi$, as discussed in Sec. III and the appendix. A few observations are in order: (i) avoided crossings occur whenever the quantum numbers of the crossing levels, $l$ and $l'$, differ by a multiple of the number of sides $N$; a few particular examples are indicated in Fig. 8. (ii) The latter picture is very particular in the case of the zigzag triangular hole defect (top-left in Fig. 7). On the one hand, the matrix $M$ is independent of $\phi$ (note that $\mathbf{n} = \hat{z}$ for any $\phi$, as the edge sites always belong to the same sublattice, and the direction of $\boldsymbol{\nu}$ is fixed for each cone), and hence the only dependence on $\phi$ appears through the boundary radius $R(\phi)$.
On the other hand, for each cone, the 'unperturbed' energy levels of the 'zigzag circle' are never degenerate, which makes the effect even weaker. As a result, the energy levels are well described by assuming that there is no mixing between states with different quantum numbers $l$. Notice also that there is no mixing between different cones or valleys.
(iii) The zigzag triangular defect with the staggered potential shows a shift in energy with respect to the IMBC solution. This is related to the sublattice imbalance of the edge sites and the fact that the two sublattices have different energies inside the defect (staggered potential). This effect is not observed for the other geometries as they have balanced edges.
(iv) The armchair hexagonal hole defect shows two distinct contributions to the quasi-energy spectrum: the one shown in Fig. 7, which is very close to the analytical solution for the IMBC [except for the anticrossings between energy levels belonging to different cones, which are only present in the armchair case (black arrows)], and the one presented in Fig. 14, which displays a completely different pattern. The two cases differ in the way the atom chains that constitute each side match at the vertices.
(v) The zigzag hexagonal hole defect presents a rather complex spectrum, quite different from the rest. This is related to the strong mixing between states with different $l$ imposed by the BC, which requires that alternating components of the wave-function cancel on alternating sides. A precise description of this case is beyond the scope of the present work.
Finally, we show numerical results for circular defects in Fig. 9. The top panel corresponds to a hole defect and the bottom one to the staggered-potential defect. Clearly, the latter is very well described by the analytical solutions (dashed black lines). Notice that no avoided crossings (if they exist) are resolved in our numerical simulations, presumably because they are very small since the actual geometry of the defect is very close to a circle. The spectrum of the circular hole defect is, as in the zigzag hexagonal one, very complex. Here, however, a more regular pattern emerges for large $R$, as the quasi-energies of the bound states are largely confined to regions delimited by the analytical solution of the zigzag circular defect (dashed lines).
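For reference, the order-$N$ Chebyshev evaluation mentioned above can be sketched as follows. This is a generic kernel-polynomial estimate of the density of states of a sparse Hermitian matrix (not the authors' code); for the Floquet problem, $H$ stands for the truncated Floquet matrix containing the retained replicas, and the local FLDOS is obtained by replacing the stochastic vectors with basis vectors on the sites of interest.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def kpm_dos(H, n_moments=256, n_vectors=10, n_energies=400, eps=0.01):
    """Kernel polynomial (Chebyshev) estimate of the DOS of a sparse Hermitian
    matrix H, using a stochastic trace and the Jackson kernel."""
    dim = H.shape[0]
    # rescale the spectrum into (-1, 1) using the largest-magnitude eigenvalue
    emax = abs(eigsh(H, k=1, which='LM', return_eigenvectors=False)[0])
    scale = (1.0 - eps) / emax
    Ht = H * scale
    mu = np.zeros(n_moments)
    for _ in range(n_vectors):
        r = np.exp(2j * np.pi * np.random.rand(dim))      # random-phase vector
        t0 = r.copy()
        t1 = Ht @ r
        mu[0] += np.vdot(r, t0).real
        mu[1] += np.vdot(r, t1).real
        for n in range(2, n_moments):
            t0, t1 = t1, 2 * (Ht @ t1) - t0               # Chebyshev recursion
            mu[n] += np.vdot(r, t1).real
    mu /= n_vectors * dim
    # Jackson kernel damps the Gibbs oscillations of the truncated expansion
    n = np.arange(n_moments)
    N = n_moments
    jackson = ((N - n + 1) * np.cos(np.pi * n / (N + 1))
               + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    x = np.linspace(-1 + eps, 1 - eps, n_energies)
    dos = np.empty(n_energies)
    for i, xi in enumerate(x):
        Tn = np.cos(n * np.arccos(xi))                    # Chebyshev polynomials T_n(xi)
        dos[i] = (jackson[0] * mu[0]
                  + 2 * np.sum(jackson[1:] * mu[1:] * Tn[1:])) / (np.pi * np.sqrt(1 - xi**2))
    return x / scale, dos * scale
```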
One of the questions that remains is to what extent these bound states survive in the limit of a vacancy defect or, more generally, in the case of adatoms. This is particularly important, as the presence of bound states around such impurities might hinder the ability to resolve the laser-induced gaps in actual experiments, or lead to percolating states in dirty samples.
VI. THE ADATOM AND VACANCY DEFECTS
The continuum model presented in Sec. II is not adequate for analyzing the vacancy limit. In fact, in the $R \to 0$ limit for a zigzag hole (the appropriate one for a vacancy defect) one finds that there are no solutions inside the gap. Of course, this is not the correct approach, as one should introduce a spatial cutoff to account for the finite size of the defect. In this sense, a tight-binding approach is more convenient and allows for a generalization that includes the adatom case.
Since we focus on the bound states within the dynamical gap at $\hbar\Omega/2$, it is enough to consider, as before, only two Floquet replicas, $m = 0$ and $m = 1$. While for the numerical calculations we will use the real-space version of the tight-binding Hamiltonian presented in the previous section, for the discussion of the main aspects of the problem it is better to use a $k$-space representation. Then, the Floquet Hamiltonian is written in terms of the operators $a^\dagger_{m\mathbf{k}}$ and $b^\dagger_{m\mathbf{k}}$, which create an electron in the Floquet replica $m$ in the Bloch state with momentum $\mathbf{k}$ on the sublattices A and B, respectively. Here $\varphi_{\mathbf{k}} = \sum_{\delta_j} e^{i\mathbf{k}\cdot\boldsymbol{\delta}_j}$, where $\{\boldsymbol{\delta}_j\}$ are the relative coordinates of the three nearest-neighbor A sites of a given B site, $t = \gamma_0 J_0(z)$, and $A_{\mathbf{k}} = \gamma_0 J_1(z)\sum_{\delta_j} e^{i\mathbf{k}\cdot\boldsymbol{\delta}_j}(\delta_{jx} - i\delta_{jy})/a_{cc}$, with $J_n(x)$ the $n$-th Bessel function of the first kind and $z = 2\pi A_0 a_{cc}/\Phi_0$ [66]. We describe the adatom impurity with a single orbital of energy $\epsilon$ bound to the C atom at the origin. The Hamiltonian of the impurity and the hybridization term are written in the same Floquet representation. Note that the coupling matrix element $V$ does not depend on the radiation field, as we are considering normal incidence and hence the phase factor appearing in Eq. (35) is zero. The vacancy limit can be obtained from here by taking $V \to \infty$. We define the Green function matrix $G$ with elements given by $G_{ij} = \langle\langle f_i, f_j^\dagger\rangle\rangle$. Using the Dyson equation it can be written in terms of $G_{nm}(\omega) = \sum_{\mathbf{k}} G_{nm}(\omega, \mathbf{k})$, with $G_{nm}(\omega, \mathbf{k}) = \langle\langle a_{n\mathbf{k}}, a^\dagger_{m\mathbf{k}}\rangle\rangle$. Explicit expressions for the latter propagators follow from the $k$-space Floquet Hamiltonian. The propagator $G_{11}(\omega, \mathbf{k})$ can be obtained from $G_{00}(\omega, \mathbf{k})$ by the substitution $\omega \leftrightarrow (\omega - \hbar\Omega)$, while $G^r_{10}(\omega, \mathbf{k}) = [G^a_{01}(\omega, \mathbf{k})]^*$, where $r$ and $a$ denote retarded and advanced, respectively.
The energies of the bound states (if they exist) are determined by the poles of the trace of Eq. (39). These can be found numerically (as is done below), but to grasp the main physical ingredients it is better to analyze the problem perturbatively. The imaginary part of the retarded self-energy $V^2 G^r_{00}(\omega)$ is proportional to the LDOS of irradiated pristine graphene projected onto the $m = 0$ Floquet subspace and has a dynamical gap centered at $\hbar\Omega/2$. Its real part, on the other hand, is nonzero inside the gap and diverges at the gap edges, with a different sign on each edge. As a consequence, to lowest order in the impurity hybridization, the impurity spectral density ($\propto -\mathrm{Im}\,G^r_{00}(\omega)$) always has a pole within the dynamical gap, with an energy given by $\omega - \epsilon - V^2 G^r_{00}(\omega) = 0$. Assuming, for the sake of argument, that $\epsilon = 0$, it is easy to see that, to the same order and in the $m = 1$ Floquet subspace, there is a bound state symmetrically positioned with respect to the gap center.
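This graphical argument can be reproduced with a few lines of numerics. The sketch below is not the model of the paper: it uses a schematic host spectral density with a gap of width $\Delta$ centered at $\hbar\Omega/2$ (any shape with finite weight at the gap edges works), builds $\mathrm{Re}\,G_{00}(\omega)$ inside the gap from the corresponding principal-value integral, and locates the zero of $\omega - \epsilon - V^2\,\mathrm{Re}\,G_{00}(\omega)$; the opposite-sign divergences at the two edges guarantee that a root, and hence a bound state, always exists.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Schematic host spectral density (m = 0 replica): a Dirac-like band of total
# weight 1 with a dynamical gap of width Delta centred at hOmega/2 and cutoff W.
hOmega, Delta, W = 1.0, 0.1, 6.0
center = hOmega / 2.0

def rho(w):
    d = abs(w - center)
    if d < Delta / 2 or d > W / 2:
        return 0.0
    return 4.0 * d / (W**2 - Delta**2)        # normalised so the total weight is 1

def re_G00(w):
    # w lies inside the gap, so rho vanishes around w and the integrand is regular
    lower, _ = quad(lambda x: rho(x) / (w - x), center - W / 2, center - Delta / 2, limit=200)
    upper, _ = quad(lambda x: rho(x) / (w - x), center + Delta / 2, center + W / 2, limit=200)
    return lower + upper

V, eps_imp = 1.0, 0.0                         # hybridisation and impurity level (illustrative)

def pole_condition(w):
    return w - eps_imp - V**2 * re_G00(w)

# re_G00 -> +inf at the lower gap edge and -inf at the upper one, so the
# condition changes sign inside the gap and a root is always found.
a = center - Delta / 2 + 1e-4
b = center + Delta / 2 - 1e-4
print("bound state at omega =", brentq(pole_condition, a, b))
```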
These results are in fact exact, since $G_{01}(\omega) = G_{10}(\omega) = 0$ within the dynamical gap; we checked this numerically (see Fig. 10), but it can also be obtained from Eq. (41) in the low-energy limit, where $\varphi_{\mathbf{k}}$ ($A_{\mathbf{k}}$, $D(\omega, \mathbf{k})$) is odd (even) under the change $\mathbf{k} \to -\mathbf{k}$. Therefore, there are two bound states, belonging to the $m = 0$ and $m = 1$ Floquet replicas, whose energies are given by the zeroes of $\omega - \epsilon - V^2 G_{00}(\omega)$ and $\omega - \epsilon - \hbar\Omega - V^2 G_{11}(\omega)$, respectively. Figure 11 shows a color map of the local Floquet spectral density (corresponding to the three sites around the adatom) calculated using the Chebyshev method described in Sec. V, as a function of the hybridization matrix element $V$, for different values of $\epsilon$. We find that while the energies of the bound states depend on the energy of the adatom, these states are always present regardless of the size of the hybridization. The symmetry between replicas is broken if $\epsilon \neq 0$ and is only recovered in the limit of very large hybridization, where the problem reduces to that of a vacancy. In this vacancy limit ($V \to \infty$), the positions of the bound states are given by the solutions of $G^r_{00}(\omega) = 0$ and $G^r_{11}(\omega) = 0$ (indicated by the arrows in Fig. 10), and the spectrum within the dynamical gap is symmetric with respect to the gap center.
Interestingly, when looking at the weight of each of these states on the adatom and on the three carbon atoms around it, one finds that they belong to a single replica. This particular result is a consequence of the fact that the coupling between the adatom and the graphene layer was taken to be unaffected by the radiation field (see Fig. 11).
VII. CONCLUSIONS
In summary, we have presented a detailed study of the Floquet bound states associated with defects in graphene illuminated by a laser. In particular, we focused on the bound states at the dynamical gap ($\hbar\Omega/2$), using both analytical and numerical techniques applied to different defect types.
On the one hand, we considered large hole-like defects with different terminations. In this case, we showed that the number of bound states increases with the defect radius and that the spectrum depends on the shape and type of lattice termination. In the case of the cZZBC we proved analytically that in the limit of large radii the discrete bound states can be seen as nanoribbon-like chiral states [16,17] with a quantized linear quasi-momentum, as might have been anticipated. Staggered-potential defects (infinite mass boundary condition) were also discussed, with similar results, except that in this case there is a clear distinction between the two Dirac cones and only one of them supports chiral bound states. The chiral nature of the states was corroborated by an explicit calculation of the probability currents around the defect in the two analytical cases we presented.
On the other hand, we also considered point-like defects such as vacancies and adatoms and showed that they also host bound states. While the bound-state spectrum depends on the value of the adatom's orbital energy ($\epsilon$), in the large-hybridization or vacancy limit the bound states remain close to the bottom (top) border of the gap in the $m = 0$ ($m = 1$) replica.
Following the argument presented in Ref. [19], one can anticipate that additional bound states will also appear inside the higher-order gaps induced by multi-photon processes. The contribution of such states to the spectral density projected onto the $m = 0$ replica is parametrically smaller provided $\eta \ll 1$.
It remains a challenge for future work to evaluate the effect of these bound states on the bulk transport properties of dirty samples.
Appendix A: Boundary conditions
As already mentioned in Sec. III, an arbitrary BC can be imposed by knowing the matrix $M$ and its action on the wave-function evaluated at the boundary: $\Psi = M\,\Psi$ [58]. It can be shown that the boundary conditions are determined by two unit vectors: $\boldsymbol{\nu}$, acting on the isospin (valleys), and $\mathbf{n}$, acting on the pseudospin (sublattices) [59]. In the isotropic representation $M = (\boldsymbol{\nu}\cdot\boldsymbol{\tau})\otimes(\mathbf{n}\cdot\boldsymbol{\sigma})$, where $\boldsymbol{\tau}$ and $\boldsymbol{\sigma}$ are the Pauli matrices belonging to the isospin and pseudospin subspaces, respectively. In the following, we show the explicit form of the matrix $M$ for regular polygons, including the circle as the limiting case, and for the three kinds of BCs considered in this work. For the ZZBC, $\mathbf{n} = \pm\hat{z}$ (the sign depends on the sublattice termination), while for the ABC/IMBC, $\mathbf{n}(\phi) = \hat{z}\times\mathbf{n}_B(\phi)$ (see Fig. 12). In the latter expression, $\mathbf{n}_B(\phi)$ is the normal unit vector located at the edge of the defect, pointing outward from the region of interest; for our purposes, this unit vector points toward the center of the defect. For simplicity, we introduce the angle $\gamma_p$ related to the pseudospin degree of freedom. Thus, we can handle both types of boundary conditions at the same time with a single parametrization of $\mathbf{n}$, choosing $\gamma_p = 0\,(\pi)$ or $\gamma_p = \pi/2$ in order to select one or the other type of BC. It must be noted that while the $z$-component is exclusively related to the ZZBC, the $xy$-components are related to the ABC and IMBC; the difference between the two latter types of BCs resides in the isospin $\boldsymbol{\nu}$, i.e., in the details of the lattice termination. For a regular polygon with $N$ sides, the normal unit vector pointing inwards can be written as a Fourier series with coefficients $A_{m,N} = \mathrm{sinc}(\pi/N)/(1 - mN)$, where $\mathrm{sinc}(x) = \sin x/x$.
It is straightforward to see that for circular defects we have $\lim_{N\to\infty} A_{m,N} = \delta_{m,0}$.
Analogously, for the isospin degree of freedom, $\boldsymbol{\nu} = \hat{z}$ and $\boldsymbol{\nu}\cdot\hat{z} = 0$ for the ZZBC/IMBC and the ABC, respectively. Introducing now the angle $\gamma_i$, we can write all three BCs in a single form for $\boldsymbol{\nu}$, where $\gamma_i = 0$ for both the ZZBC and the IMBC; for these BCs the $K$ and $K'$ cones are decoupled. For the ABC, however, $\boldsymbol{\nu}$ lies in the $xy$ plane, i.e., $\gamma_i = \pi/2$; the $\Phi$ phase is only relevant for the ABC, whose analytic solution is, however, out of the scope of this work. Finally, the matrix $M$ can be written in terms of the angles $(\gamma_i, \gamma_p)$, and the analogue of the set of conditions (20) follows. The dependence of $M$ on the polar angle $\phi$ comes from the pseudospin contribution. Triangles and hexagons are the only regular polygons with well-defined zigzag terminations. Therefore, the angle $\gamma_p$ for the ZZBC can behave in two different ways: it can be constant along the boundary of the defect (triangular defects), or it can alternate between $0$ and $\pi$ depending on the sublattice termination (hexagonal defects) (see Fig. 12). In order to tackle circular defects with the ZZBC, one is tempted to define the circle as the limit of a polygon with $N$ large enough and alternating $\mathbf{n} = \pm\hat{z}$ on its faces, corresponding to different sublattice terminations. However, this artificial limit is misleading, because it is not possible to construct such a defect, i.e., a regular polygon with $N > 6$ whose edges are made exclusively of zigzag (or of armchair) terminations. For simplicity, throughout this article we only use the ZZBC for triangular defects, in such a way that $M$ is $\phi$-independent. In this case, introducing the first condition of the set (A7) into Eq. (A6) leads to $\psi_{B,l}(\phi, \xi_0) = \psi'_{B,l}(\phi, \xi_0) = 0$; we emphasize that, in the isotropic representation, $\psi = (\psi_A, \psi_B, -\psi'_B, \psi'_A)^T$ must be used. Thus, there are two equations per cone [Eqs. (22) for the $K$ cone], one for each Floquet replica, which allow us to find the relation between the coefficients $c_+$ and $c_-$, and then the quasi-energies $\mu_l$ [solutions of Eqs. (23) and (25)].
On the other hand, for the ABC and the IMBC, the dependence of the matrix $M$ on the polar angle $\phi$ cannot be avoided, whatever the number of sides of the polygon; even in the limit of circular defects, $\lim_{N\to\infty}\Xi_N(\phi) = i e^{-i\phi}$. Unlike the ZZBC and the ABC, the circular defect is well defined for the IMBC, because this kind of BC does not depend on the details of the edge termination (zigzag, armchair or a mixture of them). As a consequence, for the IMBC the strategy to find the quasi-energies is quite different from that of the ZZBC (see App. C).

Appendix B: Bound-state solutions for circular defects

For the $K$ cone, the Floquet state restricted to the $m = 0$ and $m = 1$ Floquet subspaces has the form given in Eqs. (B1), with components given by Eq. (B2). We also notice that $\lambda_- = \lambda_+^*$ and $K_\nu(z^*) = K_\nu^*(z)$. Because the cZZBC and the IMBC do not mix different valleys [see Eq. (A5)], we can impose the normalization condition for each valley independently. According to Eqs. (B1), and the angular dependence of the components of the Floquet state given by (B2), the normalization constant turns out to be time-independent.
In order to obtain the solutions belonging to the $K'$ cone, the isotropic representation requires that $\psi_{A,l} \to -\psi'_{B,l}$ and $\psi_{B,l} \to \psi'_{A,l}$. With these replacements, the same procedure applied in Sec. II leads to a set of equations analogous to Eqs. (12), with the boundary condition $\psi'_B(\xi_0) = 0$, which, in principle, must be solved again. However, for the cZZBC case, the latter set of equations and its boundary condition can be obtained from those belonging to the $K$ cone by the change $(\mu, l) \to (-\mu, -l)$. Doing so, for the $K'$ cone we obtain Eq. (B7). The time-averaged (over one period) probability current density only has an angular component, as shown in Sec. IV. The current densities for the two cones are then
$$J_l = -2\,\mathrm{Im}\!\left\{e^{i\phi}\left[u_{1A,l}\,u^*_{1B,l} + u_{0A,l}\,u^*_{0B,l}\right]\right\}, \qquad J'_l = -2\,\mathrm{Im}\!\left\{e^{-i\phi}\left[u'_{1A,l}\,u'^*_{1B,l} + u'_{0A,l}\,u'^*_{0B,l}\right]\right\}. \quad \mathrm{(B9)}$$
Hence, it is straightforward to see that $J_l = J'_{-l}$. On the other hand, there is no transformation between the $K$ and $K'$ cones for the IMBC case which simultaneously leaves invariant the set of differential equations and the boundary condition. Therefore, the set of quasi-energies for the $K'$ cone must be found following the same procedure used for the $K$ cone. The isotropic representation imposes the relations of Eq. (B10). For the $K'$ cone, the coefficients $c_+$ and $c_-$ are now related by the phase $e^{i\theta} = \omega'_+\beta_+/(\omega'_-\beta_-)$, with $\omega'_\pm = (1+\mu_l)\,K_l(\lambda_\pm\xi_0) - \lambda_\pm K_{l+1}(\lambda_\pm\xi_0)$. The time-averaged probability current densities for each cone are also given by Eqs. (B8) and (B9). Nevertheless, there is no such relation between $J_l$ and $J'_l$.
Appendix C: Solutions for the IMBC -Polygonal defects.
For simplicity, we will only tackle the IMBC, which does not mix cones. In this case, introducing the third condition of the set (A7) into Eq. (A6) leads to a mixing of solutions with different $l$ quantum numbers due to the aforementioned $\phi$ dependence, where the upper (lower) sign refers to the $K$ ($K'$) cone and the components given by Eq. (B2) [(B7)] were used. We also have to account for the dependence of the edge coordinates on the polar angle $\phi$, i.e. $\xi_0(\phi)$. For regular polygons with $N$ sides, the points located at the edges can be written as a Fourier series with coefficients $a_{m,N}$, where $R_0$ is the apothem of the polygon and $\bar{R}_0 = R_0\,a_{0,N}$ represents the mean value of its radius. For triangles and hexagons, $a_{0,3} = 3\ln(2+\sqrt{3})/\pi$ and $a_{0,6} = 3\ln 3/\pi$, respectively. In the large-$N$ limit, the deviations of $R(\phi)$ with respect to $\bar{R}_0$ are small and we can expand the modified Bessel functions of the second kind $K_\nu$ appearing in Eqs. (C1) to first order in the deviation, that is, $K_\nu(\lambda\xi_0(\phi)) \simeq K_\nu(\lambda\bar{\xi}_0) + \left.\partial_{\xi_0} K_\nu(\lambda\xi_0)\right|_{\bar{\xi}_0}\left[\xi_0(\phi) - \bar{\xi}_0\right]$. It is straightforward to see that only for circular defects is the mixing among different $l$ quantum numbers removed, since $\lim_{N\to\infty} a_{m\neq0,N} = \lim_{N\to\infty} A_{m\neq0,N} = 0$ and $\lim_{N\to\infty} a_{0,N} = \lim_{N\to\infty} A_{0,N} = 1$.
Finally, in order to find the quasi-energies, the infinite series in Eqs. (C5) must be truncated. Doing so, it is possible to write a system of $2d$ equations for $d$ quasi-energies (each quasi-energy introduces two additional coefficients, $c_+$ and $c_-$) and then find their solutions. | 12,103 | 2016-03-14T00:00:00.000 | [
"Physics"
] |
2 × 2 charge density wave in single-layer TiTe2
A density functional theory study concerning the origin of the recently reported charge density wave (CDW) instability in single-layer TiTe2 is reported. It is shown that, whereas calculations employing the semi-local functional PBE favor the undistorted structure, the hybrid functional HSE06 correctly predicts a distortion. The study suggests that the magnitude of the semi-metallic overlap between the valence band top at Γ and the conduction band bottom at M is a key factor controlling the tendency towards the distortion. It is also shown that tensile strain stabilizes a CDW, and we suggest that this fact could be further used to induce the instability in double-layers of TiTe2, which in the absence of strain remain undistorted in the experiment. The driving force for the CDW instability seems to be the same phonon-mediated mechanism acting for single-layer TiSe2, although in single-layer TiTe2 the driving force is smaller, and the semimetallic character is kept below the transition temperature.
Introduction
Transition metal dichalcogenides of groups IV and V rank among the most controversial materials exhibiting charge density wave (CDW) instabilities [1,2]. The possibilities of strong or weak electron-phonon coupling scenarios in group V 2H-MX2 (M = Nb, Ta; X = S, Se) and either phonon-mediated or excitonic mechanisms in group IV 1T-TiSe2 have been discussed for decades [1]. Many of these systems also exhibit superconductivity (SC) under certain conditions, and the competition between the two instabilities remains an important question still unanswered [3,4]. These materials are built from MX2 layers interacting through weak van der Waals forces and thus are easily exfoliated [5]. Consequently, they offer the possibility to examine the above mentioned issues at the two-dimensional (2D) limit as well as by smoothly varying the density of carriers through gate doping. This is at the origin of the huge revival of interest recently raised by these materials [6][7][8][9][10][11].
Indeed, intriguing differences between these few-flake or even single-layer materials and their bulk counterparts have been discovered. Recent reports on the existence of a very weak pseudo-gap at the Fermi energy in single-layer NbSe2 [6,12], or the possible occurrence of incommensurate modulations for slightly electron-doped TiSe2 crystals of thicknesses less than 10 nm [10,13], make clear that we are still far from a full understanding of the physics of CDW materials, particularly when the screening is reduced.
In this context, the recent report of a 2 × 2 CDW in single-layer TiTe2 by Chen et al [8] came as a very intriguing surprise. It has long been known [14][15][16] that bulk 1T-TiTe2 does not exhibit the 2 × 2 CDW that occurs in isostructural 1T-TiSe2 [17]. In addition, the 2 × 2 CDW is not observed anymore in double-layer TiTe2 [8]. In contrast, the 2 × 2 CDW instability is observed in TiSe2 from the single-layer, for ultrathin films with up to six layers [18], and for the bulk crystal [17]. The experimental indication of the occurrence of the CDW in single-layer TiTe2 is even more surprising when considering that first-principles density functional theory (DFT) calculations found that single-layer TiTe2 shows no tendency to distort towards the 2 × 2 CDW structure at the generalized gradient approximation (GGA) level [8]. Yet, calculations of the same quality successfully predict that the 2 × 2 CDW structure is more stable than the undistorted structure for single-layer TiSe2 [13,18,19].
Overall, these observations suggested the hypothesis that something really new and challenging is at work in single-layer TiTe 2 [8].
However, one should note that the CDW transition occurs at 100 K in single-layer TiTe2 [8] but at a considerably higher temperature, 232 K, in single-layer TiSe2 [11]. Hence, the driving force for the distortion must be considerably weaker in single-layer TiTe2. Before concluding that a new scenario is needed to grasp the origin of the unexpected 2 × 2 CDW in this material, one should wonder about the appropriateness of the so far successful GGA-type DFT approaches to the CDW instabilities in single-layer group IV and V dichalcogenides. Such an appraisal is needed because it impinges on very fundamental questions concerning CDW instabilities at the 2D limit. Note that it has been recently shown [20] that GGA-type functionals like PBE [21] overestimate the overlap between the Ti 3d and Se 4p levels in bulk 1T-TiSe2. This can be corrected by using hybrid functionals like HSE06 [22,23], leading to an improvement of the electronic description of bulk TiSe2 [20]. In the following, we report a DFT study of the likelihood of a 2 × 2 CDW in single-layer TiTe2 employing both PBE and HSE06 functionals, which provides useful insight into the origin of the CDW instability in single-layer TiTe2.
Results and discussion
An isolated layer of 1T-TiTe2 is made of a hexagonal lattice of Ti atoms in an octahedral environment of Te atoms (figure 1(a)). The repeat unit of the hexagonal bulk crystal structure contains just one of these layers. A detailed description of our calculation method is presented in the appendix. Let us start our analysis by briefly considering the PBE description of the electronic structure. The optimized a cell parameter is 3.804 Å for the single layer and 3.815 Å for the bulk, in good agreement with the bulk experimental value of 3.777 Å [24]. Within a single layer there are several Te...Te contacts shorter than the sum of the van der Waals radii, so that the valence bands, which have their maximum at Γ and are mostly built from Te 5p orbitals, are considerably wide and overlap with the bottom part of the Ti 3d (~2%-3%) bands, which have their minima at the M point (figure 1(b)). As expected from the fact that in the bulk there are short Te...Te contacts in the direction perpendicular to the layer, the semimetallic overlap is 19% larger in the bulk. Note that, as is clear from figure 1(b), the inclusion of spin-orbit coupling does not have any noticeable effect and therefore it will not be considered in the following. Thus, according to the PBE calculations, single-layer TiTe2 is a semimetal, exactly as the bulk. In contrast with the case of single-layer TiSe2, for which the same type of calculations led to a phonon with imaginary frequency at the M point [13], our calculations with the GGA functional for single-layer TiTe2 (figure 1(c)) show no phonons with imaginary frequency, in agreement with those of Chen et al [8]. Although there is some remnant of the instability at M (notice the optical branch that disperses downwards and shows the lowest frequency at the M point, reminiscent of the mode that becomes unstable for TiSe2), there is no definite indication of a phonon instability that may lead to the 2 × 2 distortion of the structure. Detailed structural optimizations of 2 × 2 supercells confirmed this result. However, as indicated by the frozen-phonon total energy calculation as a function of the soft phonon mode amplitude at M (figure 1(d)), the GGA potential energy surface is extremely flat. Under such circumstances, even if strictly speaking the PBE calculations disagree with the experimental results in that no tendency toward the 2 × 2 CDW distortion is found, the results are somewhat inconclusive and a closer look is needed.
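As an illustration of what such a frozen-phonon scan involves, the following is a minimal sketch using the ASE framework; the supercell atoms, the soft-mode displacement pattern mode and the DFT calculator calc are placeholders that are not provided here and would come from the actual phonon and electronic-structure calculations.

```python
import numpy as np

def frozen_phonon_scan(atoms, mode, calc, amplitudes):
    """Total energy as a function of the frozen soft-mode amplitude.

    atoms      : ase.Atoms, 2x2 supercell of the undistorted structure (placeholder)
    mode       : (n_atoms, 3) array, soft M-point phonon displacement pattern (placeholder)
    calc       : ASE-compatible calculator, e.g. set up for PBE or HSE06 (placeholder)
    amplitudes : iterable of dimensionless mode amplitudes
    """
    ref = atoms.get_positions().copy()
    atoms.calc = calc
    energies = []
    for a in amplitudes:
        atoms.set_positions(ref + a * np.asarray(mode))   # freeze the phonon at amplitude a
        energies.append(atoms.get_potential_energy())
    atoms.set_positions(ref)
    return np.array(energies)

# Example use (all inputs hypothetical):
# amps = np.linspace(-1.0, 1.0, 9)
# dE = frozen_phonon_scan(atoms, mode, calc, amps)
# dE = dE - dE[np.argmin(np.abs(amps))]   # energy relative to the undistorted structure
# A double-well dE(a) signals an instability towards the 2x2 CDW, while a single
# minimum at a = 0 means the undistorted structure is at least metastable.
```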
The very flat frozen-phonon energy curve of figure 1(d) suggests that small external perturbations could change the relative stability of the undistorted and 2 × 2 CDW structures. This could occur, for instance, through strain. Thus, we studied the evolution of the band structure and the relative stability of the undistorted 1 × 1 and 2 × 2 CDW structures as a function of biaxial tensile strain (allowing the atomic positions to relax for each applied strain). The strain is defined as s = δm/m0, where m0 is the unstrained cell parameter and m0 + δm the strained cell parameter; positive values correspond to tensile strains. As shown in table 1, a tensile strain as small as 1% is sufficient to make the 2 × 2 CDW structure slightly more stable than the undistorted one. For strains larger than 2%, the 2 × 2 CDW is definitely favored.
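As a concrete illustration of this definition, the short sketch below applies a biaxial strain s to the in-plane lattice vectors of a hexagonal slab cell (using the PBE single-layer value a = 3.804 Å quoted above and an arbitrary vacuum thickness); in the actual calculations the atomic positions are subsequently relaxed at each strain.

```python
import numpy as np

def apply_biaxial_strain(cell, s):
    """Scale the two in-plane lattice vectors of a slab cell by (1 + s).
    s > 0 is tensile, s < 0 compressive; the out-of-plane (vacuum) vector is kept."""
    strained = np.array(cell, dtype=float)
    strained[0] *= 1.0 + s
    strained[1] *= 1.0 + s
    return strained

a, vacuum = 3.804, 20.0                     # Angstrom; the vacuum thickness is arbitrary here
cell = np.array([[a, 0.0, 0.0],
                 [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
                 [0.0, 0.0, vacuum]])
print(apply_biaxial_strain(cell, 0.02))     # 2% tensile strain: a -> 3.880 Angstrom
```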
The development of the 2 × 2 distortion is accompanied by the opening of energy gaps at the crossings between the folded valence and conduction bands. However, as shown in figures 2(a)-(c), the distorted structure is still semimetallic, and only for relatively large tensile strains (between 5% and 6%) does a full band gap occur at the Fermi level. As shown in figure 2(a), in the folded band structure of the initial unstrained layer there is a band crossing slightly below the Fermi level approximately halfway along the Γ-M line. Under a slight tensile strain, the semimetallic overlap decreases and this crossing occurs exactly at the Fermi level, thus leading to the opening of a gap at that energy (see the folded and unfolded band structures for the stable 2 × 2 CDW structure under a 2% biaxial strain in figure S1 of the supplementary information (SI) (stacks.iop.org/TDM/6/015027/mmedia)). Nevertheless, the second Te-based valence band centered at Γ (the one with a larger effective mass) still crosses the Fermi level: it does hybridize with the M-point bands, but the resulting gap opens about 0.1 eV above the Fermi level. These results are in good agreement with the experimental ARPES data [8], which indicate that the valence band with larger effective mass remains metallic for temperatures below the CDW transition temperature Tc, whereas the band with smaller effective mass does develop a gap at the Fermi level below Tc. The calculated densities of states (DOS, figure S2 of the SI) reflect the same result, with a pseudo-gap occurring at the Fermi level for the small strains under which the 2 × 2 CDW structure is already stable (~2%-3%). This also compares favourably with the pseudo-gap observed in the experimental STS spectra (figure 4 in [8]), which has a similar width to the one observed in ARPES.
Judging from the comparison of the previous results with the ARPES data of Chen et al [8], it appears that the GGA-PBE description of single-layer TiTe2 is only consistent with the experimental situation when a slight tensile strain is imposed in the calculation, as the GGA functional overestimates the overlap between conduction and valence bands, which is then compensated by the imposed tensile strain. The main effect of the strain is a decrease of the intralayer short Te...Te contacts, which leads to a decrease of the Te 5p bandwidth and, consequently, of the semimetallic overlap. Only when this overlap decreases with respect to the PBE description of the system does the 2 × 2 CDW become more stable. This observation is consistent with the fact that, when Te...Te interlayer interactions come into play in the bulk or even in the double-layer, the semimetallic overlap increases and the 2 × 2 CDW is not observed anymore. As shown in figures 2(d)-(f), the semimetallic overlap increases from single- to double-layers. Note that the band structure for the TiTe2 double-layer with ~6% strain is very similar to that of the single-layer with a 3% strain.
These results are reminiscent of the above-mentioned work concerning the functional-type dependence of the semimetallic overlap of bulk TiSe2 [20], and prompted us to reconsider the stability of the undistorted versus 2 × 2 CDW structures using the hybrid functional HSE06 [22,23]. Shown in figure 3 is a frozen-phonon calculation of the energy difference between the undistorted structure and the 2 × 2 CDW structure following the soft phonon mode distortion. The curve clearly shows that, at the HSE06 level, unstrained single-layer TiTe2 is indeed unstable towards the 2 × 2 CDW distortion. By relaxing the structure around the minimum of the frozen-phonon curve we obtained an energy gain of 3.8 meV/formula unit. This stabilization energy is much lower than the value obtained for a TiSe2 monolayer using the PBE functional, 6 meV/f.u. [13]. Hellgren et al [20] showed that for bulk TiSe2 hybrid functionals predict a much higher stabilization energy than the PBE functional. Thus, it is understandable that according to our PBE-type studies single-layer TiTe2 does not tend to experience a 2 × 2 CDW instability. Overall, our data suggest a weak driving force for the distortion.
Since this is a significant result, let us consider in more detail the region of the semimetallic overlap. A fat-band analysis of the PBE and HSE06 band structures is shown in figures 4(a) and (b), respectively. A close inspection of the region around Γ reveals some clear differences between the results of the two functionals. For PBE, above the (partially empty) top of the valence band at Γ there is a non-degenerate band and a pair of degenerate bands at higher energy. The order is the opposite for the HSE06 functional (figure 4(b)), for which the two degenerate bands are lower in energy. Around the Fermi level a heavy mixing of the t2g levels of the Ti atom and the 5p levels of the Te atoms occurs. Assuming a local coordinate system in which the three-fold symmetry axis of the octahedron lies along the z direction, the non-degenerate band has mainly Ti dz2 character, and the top of the valence band at Γ overlaps more with the bottom of the conduction band at M for the PBE functional than for HSE06. The smaller overlap in the HSE06 band structure is also clear from the decrease of the area of the constant-energy plot at 0.25 eV below the Fermi level [8] for the undistorted TiTe2 single-layer calculated with the HSE06 functional (compare figures S2 and S4 of the SI). We note that the PBE-type band structure for single-layer TiSe2, which provides a satisfactory description of the relative stability of the undistorted and 2 × 2 CDW structures in this system [13,19], shows exactly the same topology and, in particular, the same band ordering as the HSE06 band structure of figure 4(b). As soon as one moves along the Γ-M line (i.e. along the a* direction), the only symmetry element preserved is the symmetry plane perpendicular to the layer and containing the a* direction. One of the two Ti-based doubly degenerate levels at Γ and the Ti dz2 level mix and interact with one of the Te p bands near Γ, leading to the slowly descending band from Γ to M which is associated with the electron pockets near M. With the local system of axes mentioned above, the crystal orbitals around M are almost exclusively made of tilted Ti dx2-y2 orbitals (i.e. a mixture of Ti dx2-y2 and Ti dz2 which leads to the tilting of the orbital), exactly as we recently reported for the TiSe2 single-layers [13]. We refer the reader to this work for a detailed analysis of the nature of the band structure, which entirely applies to the HSE06 one reported in figure 4(b).
The HSE06 band structure and density of states for both the undistorted and the 2 × 2 CDW structures are reported in figure 5. As shown in both figures, the stabilizing 2 × 2 CDW distortion opens a gap at the Fermi level. The analysis of the nature and origin of the distortion goes along the same lines presented in detail in our recent work for the TiSe 2 single-layer and will not be repeated here [13]. However, we note that the non-activated conductivity is kept in single-layer TiTe 2 below the CDW transition temperature [8], so that no gap should occur at the Fermi level. Consequently, the HSE06 calculations exaggerate the tendency towards the transition. As mentioned above, the PBE calculations suggest that under reasonable tensile strain a 2 × 2 CDW is favored without the development of a band gap. Only when the strain is relatively large (i.e. around 5%-6%, see figure 2) does a band gap really open. The HSE06 band structure of figure 5(a) is similar to that in figure 2(c) corresponding to a 6% tensile strain. We believe that the HSE06 functional exaggerates the stability of the 2 × 2 CDW and that the real situation, concerning the weaker stabilization of the CDW phase and the lack of a full band gap opening, would rather correspond to that of the PBE functional with a moderate tensile strain. This observation, together with previous results on bulk TiSe 2 [20], suggests that a predictive description of these TiX 2 systems is attainable by using the HSE06 functional and tuning the actual contribution of the exact exchange. However, from the viewpoint of the physical origin of the CDW we do not find any noticeable difference with the previously reported analysis of the 2 × 2 CDW instability of TiSe 2 single-layers.
To further assess our conclusion we carried out HSE06 calculations for a TiTe 2 double-layer. Two single-layers with the optimized structure were placed at the bulk interlayer distance. The stabilization energy of the 2 × 2 CDW is reduced to practically one-half of the value in the single-layer. Taking into account the small (and exaggerated) value for the single-layer, the driving force for the 2 × 2 CDW distortion in the TiTe 2 double-layer must be extremely small or most likely nil, as experimentally found.
Our study points out an interesting possibility. Since tensile strain has been found to be a useful technique to induce modifications in single-layer or few-layer materials [25][26][27][28], it is possible that the 2 × 2 CDW can be induced in double-layers or triple-layers of TiTe 2 by using a small tensile strain. Another useful hint provided by our study is that the 2 × 2 CDW in single-layer TiTe 2 may become more stable and develop a band gap under tensile strain, or may be suppressed under a slight compressive strain.
Conclusions
A density functional theory study concerning the origin of the 2 × 2 CDW distortion recently reported experimentally for single-layer TiTe 2 has been carried out. This report is surprising because neither double-layer nor bulk TiTe 2 exhibits the 2 × 2 distortion, and a PBE-based DFT study predicts that the undistorted structure is also more stable for the single-layer. Our study shows that, whereas calculations employing the semi-local functional PBE favor the undistorted structure, the hybrid functional HSE06 correctly predicts the 2 × 2 distortion. However, the HSE06 calculations seem to exaggerate the stability of the distorted phase and, as a consequence, a noticeable band gap of more than 0.1 eV is induced at the Fermi level. This is in contrast with the metallic character of the TiTe 2 single-layers below the transition temperature. Interestingly, PBE-type calculations for the case where the single-layer is subject to a slight tensile strain also favor the 2 × 2 distortion while keeping the semimetallic overlap. The study suggests that the magnitude of the semimetallic overlap is a key factor controlling the tendency towards the distortion and, consequently, only functionals describing such overlap very accurately can provide a truly predictive description of the electronic structure.
According to the present study the mechanism of the CDW instability in single-layer TiTe 2 seems to be the same phonon-mediated mechanism acting in single-layer TiSe 2 [13], although now the driving force is smaller and the semimetallic character is kept below the transition temperature. As mentioned above, the magnitude of the semimetallic overlap seems to be one of the key factors controlling the likelihood of the 2 × 2 CDW. Taking into account that the overlap should increase in these TiX 2 systems when the number of short Te...Te contacts increases or when they become stronger, the overlap should increase from single-layers to bulk and from X = S to X = Te. Since the instability is observed neither in bulk TiS 2 nor in double-layer TiTe 2 , it seems that only a relatively narrow range of semimetallic overlaps is associated with the instability. In this respect, a significant result of the study is that tensile strain stabilizes the 2 × 2 CDW distortion in single-layer TiTe 2 . This could be used to induce the instability in double- or triple-layers of TiTe 2 which in the absence of strain remain undistorted, to induce a stronger distortion leading to the creation of a band gap in single-layer TiTe 2 or, most likely, to suppress the 2 × 2 CDW under a small compression. Such studies could provide useful insight on the CDW mechanism of group IV 1T-TiX 2 phases. | 4,869.2 | 2018-12-10T00:00:00.000 | [
"Physics"
] |
Downregulation of alpha7 nicotinic acetylcholine receptor in two-kidney one-clip hypertensive rats
Background Inflammation processes are important participants in the pathophysiology of hypertension and cardiovascular diseases. The role of the alpha7 nicotinic acetylcholine receptor (α7nAChR) in inflammation has recently been identified. Our previous study has demonstrated that the α7nAChR-mediated cholinergic anti-inflammatory pathway is impaired systemically in the genetic model of hypertension. In this work, we investigated the changes of α7nAChR expression in a model of secondary hypertension. Methods The 2-kidney 1-clip (2K1C) hypertensive rat model was used. Blood pressure, vagus nerve function, serum tumor necrosis factor-α (TNF-α) and both the mRNA and protein levels of α7nAChR in tissues from heart, kidney and aorta were measured at 4, 8 and 20 weeks after surgery. Results Compared with age-matched control, it was found that vagus nerve function was significantly decreased in 2K1C rats with the development of hypertension. Serum levels of TNF-α were greater in 2K1C rats than in age-matched control at 4, 8 and 20 weeks. α7nAChR mRNA in the heart was not altered in 2K1C rats. In the kidney of 2K1C rats, α7nAChR expression was significantly decreased at 8 and 20 weeks, but markedly increased at 4 weeks. α7nAChR mRNA was less in aorta of 2K1C rats than in age-matched control at 4, 8 and 20 weeks. These findings were confirmed at the protein levels of α7nAChR. Conclusions Our results suggested that secondary hypertension may induce α7nAChR downregulation, and the decreased expression of α7nAChR may contribute to inflammation in 2K1C hypertension.
Background
Secondary hypertension affects a small but significant number of the hypertensive population. It is estimated that approximately 15% of hypertensive patients have identifiable conditions causing the blood pressure elevation [1,2]. Renovascular hypertension represents the most common cause of potentially curable secondary hypertension [3]. Therefore, a better understanding of the mechanisms in the end-organ damage could provide new avenues for prevention of cardiovascular events. The 2-kidney 1-clip (2K1C) hypertensive rat is an experimental model that in many respects resembles human renovascular hypertension [4].
Inappropriately activated systemic and local tissue renin-angiotensin systems (RAS) contribute to the hemodynamic and metabolic abnormalities that lead to hypertension. Furthermore, the system can contribute to end-organ damage, at least partly by promoting inflammation [5][6][7]. Recent evidences indicate that neuronal cholinergic systems can modulate inflammatory responses by controlling the release of proinflammatory cytokines [8,9], and that nonneuronal acetylcholine (ACh) synthesis and release machinery are downregulated in inflammation [10]. The anti-inflammatory actions mediated by either neuronal or nonneuronal cholinergic system are believed to depend on activation of alpha7 nicotinic acetylcholine receptor (α7nAChR) [11][12][13], and α7nAChR is therefore proposed as an essential regulator of inflammation and a promising pharmacological strategy against infectious and inflammatory diseases [8,14]. Thus, if impairment of cholinergic pathways contribute to the development of endorgan damage during hypertension, then downregulation of the α7nAChR seems likely.
Our previous study [12] has demonstrated that α7nAChR-mediated signaling is impaired systemically in spontaneously hypertensive rats (SHR, a well-known genetic model of hypertension), which contributes to end-organ damage. Downregulation of α7nAChR occurs at the age of 20 and 40 weeks, but not 4 weeks. Therefore it seems possible that elevated blood pressure may be the fundamental cause responsible for α7nAChR downregulation regardless of the genetic factors. In the present study, using 2K1C hypertensive rats, we test the hypothesis that hypertension induces vagus nerve dysfunction and downregulation of α7nAChR in secondary hypertension, which contribute to inflammation in hypertension.
Animals
Male Sprague-Dawley (SD) rats (8 weeks of age; Sino-British SIPPR/BK Lab Animal Ltd, Shanghai, China) were housed in a 12/12-hour light/dark cycle with free access to food and water. All the animals used in this work received humane care in compliance with the institutional animal care guidelines and the Guide for the Care and Use of Laboratory Animals published by the National Institutes of Health.
Preparation of 2K1C
Male SD rats weighing 160 to 180 g were anesthetized with a combination of ketamine (40 mg/kg) and diazepam (6 mg/kg). The right renal artery was isolated through a flank incision, and a silver clip (0.2 mm internal gap) was placed on the renal artery, as described previously [15]. Sham-operated rats that underwent the same surgical procedure except for placement of the renal artery clip served as controls.
Blood pressure and heart rate measurement
Systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR) were continuously recorded in conscious rats as described previously [12]. Briefly, rats were anesthetized with a combination of ketamine (40 mg/kg, i.p.) and diazepam (6 mg/kg, i.p.). A catheter was inserted into the lower abdominal aorta via the femoral artery for blood pressure (BP) measurement. Another catheter was placed into the abdominal vena cava via the femoral vein for drug administration. The catheters were filled with heparinized saline (150 IU/ml) to prevent clotting and were plugged with paraffin-filled 23-gauge hypodermic needles. After a 2-day recovery period, rats were placed in a Plexiglas cage (30 cm diameter). The aortic catheter was connected to a BP transducer via a rotating swivel, which allowed the animal to move freely. Signals from the transducer were recorded by a computerized system (MPA 2000 M, Alcott Biotech Co Ltd, Shanghai, China). SBP, DBP and HR were averaged beat-to-beat over the 5 min test periods before and after atropine injection (0.03 mg/kg, i.v.); the atropine-induced HR change was used to assess cardiac vagal tone.
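A minimal sketch of how the atropine response could be quantified from the recorded beat-to-beat series, assuming the heart-rate samples for the 5-min windows before and after injection are already available as arrays; the variable names and numbers are illustrative only, not the study's data.

```python
import numpy as np

def delta_hr(hr_pre, hr_post):
    """Atropine-induced heart-rate change (bpm): mean HR in the window
    after injection minus mean HR in the window before injection."""
    return float(np.mean(hr_post) - np.mean(hr_pre))

# Illustrative beat-to-beat heart rates (bpm)
hr_pre = np.array([352, 348, 355, 350, 349])
hr_post = np.array([401, 405, 398, 403, 400])
print(f"dHR = {delta_hr(hr_pre, hr_post):.1f} bpm")  # a larger dHR indicates higher cardiac vagal tone
```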
Determination of tumor necrosis factor-α (TNF-α) levels by ELISA
Male rats were anesthetized with a combination of ketamine (40 mg/kg, i.p.) and diazepam (6 mg/kg, i.p.). Blood samples were collected. Blood was centrifuged at 3,000 g for 15 min at 4°C to collect serum. The serum was kept at −80°C until analyzed. The levels of TNF-α were measured with commercial ELISA kits (R&D Systems, Minneapolis, MN, USA).
RNA extraction and real-time quantitative PCR analysis
Total RNA was extracted from rat tissues using TRIzol reagent (Invitrogen) according to the instructions of the manufacturer. First-strand cDNA was amplified by PCR using specific primers for the cDNA of α7nAChR (accession no. NM_012832.3), 5' GGTCGTATGTGGCCGTTTG 3' (sense) and 5' TGCGGTTGGCGATGTAGCG 3' (antisense), and GAPDH as an internal control (accession no. NM_017008.3), 5' AGACCTCTATGCCAACACAGTGC 3' (sense) and 5' GAGCCACCAATCCACACAGAGT 3' (antisense). Real-time quantitative PCR was performed using the Chromo4™ real-time PCR detection system (Bio-Rad) and the SYBR Premix Ex Taq Mixture (Takara) with the specific primers. The PCR reactions were initiated with denaturation at 95°C for 10 s, followed by amplification with 40 cycles at 95°C for 10 s and annealing at 60°C for 20 s (two-step method). Finally, melting curve analysis was performed from 60°C to 85°C. Data were evaluated with Opticon Monitor™ version 3.0 software. All samples were run in triplicate. The relative expression of the target gene was normalized to the level of GAPDH in the same cDNA.
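Relative expression normalized to GAPDH, as described above, is commonly computed with the 2^(−ΔΔCt) method. The sketch below assumes triplicate Ct values for α7nAChR and GAPDH in a 2K1C and a sham sample; the numbers are placeholders, not the measured data, and the exact analysis used in the study may differ.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ddCt): target gene normalized to the reference gene (GAPDH)
    and expressed relative to the control (sham) group."""
    d_ct_sample = np.mean(ct_target) - np.mean(ct_ref)
    d_ct_control = np.mean(ct_target_ctrl) - np.mean(ct_ref_ctrl)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values
a7_2k1c, gapdh_2k1c = [27.8, 27.9, 28.0], [18.1, 18.0, 18.2]
a7_sham, gapdh_sham = [26.5, 26.6, 26.4], [18.0, 18.1, 17.9]
fold = relative_expression(a7_2k1c, gapdh_2k1c, a7_sham, gapdh_sham)
print(f"alpha7nAChR fold change vs sham: {fold:.2f}")
```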
Statistical analysis
All values are expressed as means ± SEM. Results were analyzed by paired (within-group) or unpaired (between 2 groups) Student t test or ANOVA, followed by Tukey test (among 3 or more groups). Two-sided P < 0.05 was considered statistically significant.
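A sketch of the statistical workflow described above, using SciPy for the unpaired t test and a one-way ANOVA followed by a Tukey test via statsmodels; the group arrays are invented examples, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

sham = np.array([118, 122, 120, 119, 121, 117])   # e.g. SBP in mmHg (illustrative)
k2c1 = np.array([168, 175, 170, 172, 169, 174])

# Unpaired Student t test between two groups
t, p = stats.ttest_ind(sham, k2c1)
print(f"t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA followed by Tukey test among three groups (e.g. 4, 8 and 20 weeks)
wk4, wk8, wk20 = np.array([160, 165, 158]), np.array([170, 172, 168]), np.array([185, 190, 188])
f, p_anova = stats.f_oneway(wk4, wk8, wk20)
values = np.concatenate([wk4, wk8, wk20])
labels = ["4w"] * len(wk4) + ["8w"] * len(wk8) + ["20w"] * len(wk20)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, labels))
```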
Basal arterial pressure and heart rate in 2K1C rats
Both SBP and DBP were significantly increased at 4, 8 and 20 weeks after 2K1C surgery when compared with age-matched sham-operated rats. The SBP difference reached about 50 mmHg at 4 and 8 weeks and approached 70 mmHg at 20 weeks. For DBP, the gap was less than 25 mmHg at 4 weeks and around 35 mmHg at 8 and 20 weeks. HR did not differ between the 2K1C and control rats at 4, 8 and 20 weeks (Table 1).
Expression of α7nAChR mRNA and its encoded protein in aorta, kidney and heart of 2K1C rats
Expression of α7nAChR mRNA and its encoded protein were determined to assess the function of the cholinergic pathway in 2K1C rats.
Kidney
Expression of α7nAChR mRNA in the kidney of 2K1C rats showed a temporary but significant increase at 4 weeks when compared to the control ones. However, α7nAChR mRNA in the kidney of 2K1C was less than in control group at 8 weeks and 20 weeks ( Figure 4A). These results from real-time PCR were confirmed at protein levels ( Figure 4B).
Heart
Dissimilar to results from aorta and kidney, expression of α7nAChR mRNA and its encoded protein in the tissues from the left ventricles were unchanged between the two groups at 4, 8 and 20 weeks ( Figure 5).
Discussion
In this work, we assessed the changes of the cholinergic pathway in a model of secondary hypertension induced by 2K1C through determination of vagus nerve function and α7nAChR expression. We found that vagus nerve function was decreased after 2K1C surgery (i.e. at 4, 8 and 20 weeks after surgery), expression of α7nAChR was downregulated in aorta (from 4 weeks) and kidney (from 8 weeks), and serum TNF-α was increased in 2K1C hypertension.
A growing body of evidence suggests that inflammation participates in the pathogenesis of hypertension [16,17], and hypertension may be in part an inflammatory disease because the C-reactive protein level, a marker of systemic inflammation, is associated with future development of hypertension [17,18]. However, it is still unclear how hypertension is related to the inflammatory process, and what the causes of inflammation are.
Figure: Effects of atropine (i.v., 0.03 mg/kg) on heart rate in sham-operated rats and 2K1C hypertensive rats (n = 6). ΔHR, the difference in heart rate between pre-atropine and post-atropine injection. The atropine-induced HR increase in sham-operated rats was significantly larger than in the 2K1C group. *P < 0.05, **P < 0.01 vs age-matched sham-operated group (unpaired t test).
It is demonstrated that hypertensive patients are characterized by a sympathovagal imbalance with a reduction of vagal tone [19,20]. Vagal function is impaired in human hypertension, which is associated with an increased risk for morbidity and mortality and may precede the development of risk factors [21]. The neuron cholinergic anti-inflammatory pathway suggests that vagus nerve can modulate the innate immune response and prevent inflammation through activation of α7nAChR in macrophages by releasing ACh, and stimulation of vagus nerve attenuates systemic inflammatory response, including inhibition of proinflammatory cytokines release, such as TNF-α and interleukin-1β [8,11,22]. Therefore, it seems reasonable that chronic hypertension results in decreased vagal function, and decreased vagal function may contribute to inflammation in hypertension. In this work, we determined the tachycardic response to atropine, a classic index of cardiac vagal tone, which reflects vagal function [23]. In accord with previous study indicating depressed cardiac vagal responsiveness in renovascular hypertensive rats [24], we found that the vagal function was significantly decreased in 2K1C hypertensive rats. These results suggested that vagus nerve might be a link between hypertension and inflammation.
It is well accepted that the α7nAChR, expressed in primary immune cells, is a pivotal mediator of the cholinergic anti-inflammatory pathway [8,9,11]. Direct activation of α7nAChR exerts a protective anti-inflammatory effects during renal ischemia/reperfusion injury [25], and regulates cytokines production in sepsis [26]. Our previous study found that chronic treatment of SHR with the α7nAChR agonist PNU-282987 relieved end-organ damage and inhibited tissue levels of pro-inflammatory cytokines [12]. Vida et al. suggested that cholinergic agonists inhibited systemic inflammation via the α7nAChR, and α7nAChR was a molecular link between the parasympathetic and sympathetic system to control inflammation [27]. In this study, we compared the expression of α7nAChR in the tissues from aorta, kidney and left ventricle between 2K1C hypertensive rats and control ones, and found that expression of α7nAChR was downregulated in aorta (from 4 week) and kidney (from 8 weeks). We also measured serum TNF-α, and found that levels of TNF-α in 2K1C hypertensive rats were greater than control at 4, 8 and 20 weeks. Thus decreased vagal function might play a role on the downregulation of α7nAChR, which might be responsible for the increased inflammatory cytokines in 2K1C hypertensive rats.
A main object of this study was to characterize the expression of α7nAChR in tissues from the aorta, kidney and heart in secondary hypertension induced by 2K1C. It was demonstrated that BP begins to increase 7 days after 2K1C surgery and reaches a level of 160 mmHg for SBP around 4 weeks [28][29][30], whereas the development of increased BP in SHR occurs largely between the ages of 4 and 12 weeks [31], and the relatively stable hypertensive levels are reached by about 16-20 weeks of age [32]. The results that α7nAChR downregulation in aorta occurred at 4 weeks in 2K1C rats and at 20 weeks in SHR suggested that BP was possibly the fundamental cause for changes of α7nAChR expression in aorta.
The RAS can contribute to end-organ damage, such as vascular remodeling, renal fibrosis and dysfunction, at least partly by promoting inflammation [33]. It has been shown that increased circulating angiotensin II (ANG II) plays an important role in the development of 2K1C hypertension, and the augmentation of intrarenal ANG II levels plays the crucial role in the maintenance phase of 2K1C hypertension [34]. Hiyoshi et al. found that plasma renin concentrations were elevated in mice within 14 days after 2K1C surgery, followed by reduction to sham levels at 42 days, while the angiotensin II subtype 2 (AT 2 ) receptor mRNA levels showed a 2-fold increase in the thoracic aortas on day 14 and then returned to sham levels by day 42 in these mice [35]. Similar to the changes of the AT 2 receptor in 2K1C mice, we postulate that the upregulation of α7nAChR in the kidney at 4 weeks in our study might be a protective response to inflammation induced by angiotensin II. Different from SHR, the expression of α7nAChR in the heart of 2K1C rats was unchanged within 20 weeks, although BP remained at high levels. Further studies are needed to explore this issue.
Our results, showing that expression of α7nAChR was downregulated in aorta (from 4 weeks) and kidney (from 8 weeks) but unchanged in heart, also suggest that the aorta may be an organ particularly sensitive to hypertension-induced α7nAChR downregulation. Interestingly, Grundy et al. [18] proposed that inflammatory markers, such as high-sensitivity C-reactive protein in hypertension, are possibly a response to products of arterial inflammation, and that hypertension is at least in part a product of arterial pathology. Though the cause of arterial inflammation is unknown, downregulated α7nAChR in the aorta during hypertension may play a role.
Conclusion
In conclusion, downregulation of the α7nAChR occurred with the development of hypertension induced by 2K1C. Decreased expression of α7nAChR may contribute to inflammation in secondary hypertension. | 3,357.2 | 2012-06-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Experimental study of the parameter effects on the flow and noise characteristics for a contra-rotating axial fan
Introduction: In the present paper, experiments for a contra-rotating axial fan have been conducted to investigate the influences of the fan parameters, including axial distance, blade number, blade pattern and blade thickness, on the performance and noise characteristics under variable rotational speed regulation. Methods: The characteristic curves and spectrum characteristics of the contra-rotating axial fan with different structural configurations are compared and analyzed. Moreover, the spectrum density of the velocity obtained from the experiment is compared with classic turbulence models. Results: The results show that the characteristic curves of the shaft power and the sound pressure level (SPL) are nearly identical, which indicates that the axial distance and blade number are not sensitive factors for the contra-rotating axial fan under variable rotational speed regulation. The blade profiles of the fan have an impact on the characteristic curves of the SPL, and the shaft power curves of the fan decrease evidently with increase of the blade thickness, while the shaft power curves are very close for the different blade patterns. Discussion: In general, the blade profiles are sensitive factors for the contra-rotating axial fan under variable rotational speed regulation. Through the SPL spectrum analysis of the contra-rotating axial fan with different blade profiles, it can be concluded that the blade profile of the rotors has an obvious impact on the broadband noise characteristics in the moderate and high frequency range.
Introduction
Contra-rotating axial fans have been widely applied in the ventilation and air conditioning systems for its merits of the compact size and high pressure rise. Generally, a contra-rotating axial fan is composed of two rotors that rotate in opposite directions with a casing enclosing them. Compared with the traditional fans, the contra-rotating axial fan has the advantage of high efficiency, while with the disadvantage of high noise level. In recent years, many experimental and numerical works have been performed by researchers focusing on contra-rotating fans [1][2][3][4][5][6][7][8][9].
Compared to fans with a single rotor, the axial spacing of rotors as well as the mounting position for contra-rotating fans has significant impact on the flow and noise characteristics. Roy et al [10] designed a contra-rotating fan unit and tested the flow behavior to improve the performance and to develop an effective design principle. The study found that the contra-rotating unit could enhance the overall stall margins and that at different axial distance between two rotors the performance characteristics varied by 7% at most. In addition, the best performance was observed when the axial distance of two rotors was equal to around 50% of the first rotor chord. Shigemitsu et al [11] developed a high speed contra-rotating axial fan and investigated the performance and the internal flow conditions by experimental and numerical methods. They discussed the influences of the axial distance on the performance and the noise characteristics and illustrated the interaction of the flow field between the rotors. Nouri et al [12] conducted an experimental investigation on the inverse design method of contra-rotating axial fans and the effects of varying the axial distance and rotational speed ratios on the overall performance were analyzed. Mao et al [13] investigated the effects of axial distance on the performance of a contra-rotating axial fan by unsteady numerical simulation. The results suggested that the unsteady effects dominated the flow behavior at smaller axial spacing ranges, while the variation of aerodynamic force for two rotors was different as the axial spacing increased. Sun et al [14] designed a contra-rotating fan for mine ventilation and analyzed the performance and flow characteristics under different speed combinations. The results showed that depending on the flow rate and the resistance of the pipe network, variable speed operation of two rotors expanded the stable working range. In addition, Chen et al [15] conducted experimental and numerical investigations on the performance and detailed flow structure of a contra-rotating fan under different rotational speed ratios. Ravelet et al [16] designed three contra-rotating stages with different rotational speed ratios, mean stagger angles of blades and repartitions of blade loads between two rotors to study the global characteristics and the unsteady features of the flow. Luan et al [17] studied the acoustic and vibration effects of the axial spacing based on the experimental analysis and numerical calculation. Ai et al [18] performed experimental studies on the rotational speed matching of two rotors and analyzed its impact on the stable margin and efficient working range.
As is the case in traditional fans, geometric parameters of the rotor have great impact on the flow and noise in contra-rotating axial fans. Joly et al [19] utilized a multidisciplinary optimization methodology to design a low-weight and high-load contra-rotating fan. Cao et al [20] employed the model of forced vortex and free vortex, in which the real inflow velocity distribution of downstream rotor was considered, to improve the performance and optimize the flow field. Mohammadi et al [21] optimized the blade thickness of a ducted contra-rotating axial flow fan and studied the flow characteristics inside it by numerical simulation. They compared the flow behavior and obtained the characteristic map under the optimum blade thickness. Additionally, the influences of different rotational speed ratios and axial gaps of the contra-rotating fan were also investigated in the paper. Xu et al [22] studied the tip clearance flow and associated loss mechanism of a contra-rotating axial fan and compared the stage efficiency and pressure loss coefficient with different tip clearance based on unsteady numerical simulation. The results indicated that on the same tip clearance variation, the efficiency of the downstream rotor decreased more dramatically than that of the upstream rotor, though the contra-rotating fan efficiency almost linearly changed with tip clearance variation. Wang et al [23] investigated the effects of tip clearance on the performance and found isentropic efficiency and stable operating range decreased with increasing tip clearance size. Furthermore, the negative effect on the performance of the upstream rotor was greater than that of the downstream rotor. Grasso et al [24] presented a multi-objective efficiency-noise optimization approach of the blades of a contrarotating fan on the basis of artificial neural networks by means of RANS-based hybrid methods that split the description of the flow field from the quantification of the source of noise and of its propagation. Wang et al [25] applied the perforated trailing-edge for the upstream rotor and perforated leading-edge for the downstream rotor of a contra-rotating fan and obtained an overall noise reduction of 6-7 dB with similar aerodynamic characteristics.
Most of the previous investigations on the contra-rotating axial fan focus on the flow and noise characteristics at the design point and off-design points at fixed rotational speed, while in some circumstances, variable rotational speed regulation of the contra-rotating fan is the only practical approach to satisfy the requirement of air quantity. Despite a great deal of research on contra-rotating axial fans, there have been few experimental studies carried out to investigate the influence of the fan parameters on the performance and noise characteristics of the contra-rotating axial fan under variable rotational speed regulation. In the present paper, an experimental study on the performance and noise characteristics of a contra-rotating axial fan under variable rotational speed regulation has been conducted to analyze the influence of the fan parameters. The remainder of this paper is organized as follows. In section 2, the schematics and dimensions of the contra-rotating axial fan model are introduced and the parametric fan configurations are illustrated in detail. Section 3 describes the test rig setup for the contra-rotating fan performance and noise characteristics, as well as the outlet velocity of the main flow region. Section 4 analyzes the tonal noise characteristics of the contra-rotating fan under different rotational speeds and compares the energy spectrum characteristics obtained from experiments with those from classic models. In section 5, the performance and noise characteristics of the contra-rotating axial fan under variable rotational speed regulation are obtained, on the basis of which the influences of the fan configurations are compared and analyzed. Section 6 draws conclusions.
The present investigation is based on a contra-rotating axial fan installed in an air conditioning system due to its high energy efficiency. Figure 1 illustrates the configuration of the contra-rotating axial fan used in this study, which is applied in an outdoor unit of a central air conditioner with an overall dimension of 920 mm × 640 mm × 600 mm. The air enters the fan after passing by the three heat exchangers located on the side wall brackets, and the electric control box is located on the other side. The air exits from the top side where the axial flow fans are located. The two rotors are fixed by the support brackets and electric motors.
The normal volume flow rate for the outdoor unit is around 2,750 m 3 /h, and the two rotors rotate at the same speed and in opposite directions. The diameters of the two rotors are 350 mm, with a tip clearance of around 5 mm and a hub-to-tip ratio of 0.35. The tip axial chord length of R2 is slightly smaller than that of R1.
Parametric scheme of contra-rotating axial fan
The contra-rotating axial fan consists of two rotors, and the fan structural parameters such as axial spacing and the blade parameters including the blade number and blade profile have an impact on the performance and noise characteristics.
The axial distance between the upstream rotor and downstream rotor significantly affects the performance and the noise characteristics, as well as the flow field, under the fixed rotational speed [1,13]. In the present paper, the influence of the axial positions of the two rotors on the performance and noise characteristics of the contra-rotating axial fan under variable rotational speed regulation is also studied. Three schemes with different axial positions of the two rotors are implemented to compare the flow and noise characteristics under variable rotational speed regulation in the experiment. Scheme A represents the original fan with the axial distance of 30 mm. In scheme B, the position of the upstream rotor is fixed and the axial shift of downstream rotor is about 7 mm toward the outlet. In scheme C, the two rotors of the fan are both shifted by 7 mm toward the outlet direction, keeping the axial distance between the two rotors unchanged.
In the present paper, three combinations of blade number for the two rotors of the contra-rotating axial fan have been employed to study its influence on the performance and noise characteristics. Scheme 1 consists of nine blades for rotor 1 and seven blades for rotor 2. Schemes 2 and 3 both have five blades for rotor 2, while rotor 1 has nine and seven blades, respectively.
Apart from the structural parameters of the fan and the rotor, the blade profile, such as the blade pattern and thickness, also has a notable impact on the fan performance and noise, as is the case in rotor-stator blade rows [19,21]. To study the effect of the blade profile on the flow and noise characteristics of the contra-rotating axial fan, models with three blade patterns and four blade thicknesses are applied in the present experiment. Figure 2 shows the different distribution models of blade pattern and blade thickness. In Figure 2, convex, straight and concave represent the three blade pattern models, and T5, T10, T15, and T20 represent the four blade thickness models.
Experimental facilities
To study the aerodynamic and noise characteristics of the contra-rotating axial fan, the experimental measurement setups for the performance and noise characteristics have been built in the Midea Corporate Research Center. Besides, the local velocity of the main flow region downstream of the outlet of the contra-rotating fan has also been measured to obtain the energy spectrum characteristics of the main flow field. Figure 3 shows the schematic and photograph of the performance test rig, which has been built according to the Chinese National Standards GB/T7725-2004 and GB/T1236-2000 for industrial fan performance testing. The volume flow rate, which ranges from 500-5,000 m 3 /h, is calculated according to the pressure difference between the upstream and downstream sides of the multi-nozzles. The uncertainties of the pressure and airflow rate are 0.15 Pa and 0.004 m 3 /h, respectively. As shown in Figure 3, the outlet of the contra-rotating axial fan faces toward the test rig, and the three inlet sides of the model should be unobstructed. Pressure orifice 1 is set to measure the static pressure, while pressure orifices 2 and 3 are set to measure the flow rate. The rotational speed of the motor is regulated by a SIEMENS MicroMaster frequency converter and measured by a photoelectric digital tachometer. The shaft power is measured by two AC power meters.
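As a rough illustration of how the volume flow rate can be obtained from the pressure difference across a multi-nozzle chamber, the sketch below applies the usual nozzle equation Q = C_d · A · sqrt(2Δp/ρ) summed over the open nozzles; the discharge coefficient, nozzle diameters and Δp value are hypothetical placeholders, not the rig's calibration data.

```python
import numpy as np

def nozzle_flow_rate(dp_pa, nozzle_diameters_m, c_d=0.95, rho=1.2):
    """Volume flow rate (m^3/h) through a bank of nozzles from the measured
    pressure difference dp_pa (Pa), using Q = C_d * A * sqrt(2*dp/rho)."""
    areas = np.pi * (np.asarray(nozzle_diameters_m) / 2.0) ** 2
    q_m3s = c_d * np.sum(areas) * np.sqrt(2.0 * dp_pa / rho)
    return q_m3s * 3600.0

# Illustrative example: two 80 mm nozzles open, 95 Pa pressure difference
print(f"Q = {nozzle_flow_rate(95.0, [0.08, 0.08]):.0f} m^3/h")
```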
The aeroacoustics test has been carried out in a semi-anechoic chamber in the Midea Corporate Research Center. The size of the semi-anechoic chamber is 9.6 m long by 5.2 m wide by 3.9 m high, and the inside surfaces are lined with mineral wool wedges 69 cm deep. Figure 4 illustrates the schematic and photograph of the noise test rig, which has been built based on the Chinese National Standard GB/T17758. Four sampling points are arranged on the four sides of the fan unit. During the noise test, PCB microphones and preamplifiers, combined with the LMS data acquisition system SCADAS Mobile SCM01, are used for the measurement and data acquisition of the noise signals. The data acquisition frequency is 10 kHz and the acquisition time is 15 s at each sampling.
To obtain the energy spectrum characteristics of the main flow region downstream of the contra-rotating axial fan, the outlet velocity of the main flow field has been measured with a hotwire anemometer. The sampling point is located at 50% span of the rotor 2 outlet and 80% of the tip axial chord length downstream of the rear rotor trailing edge, on the side opposite the electric control box. The acquisition frequency is 32 kHz and the acquisition time is 32 s for the experimental sampling.
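The power spectral density of the hotwire velocity discussed later can be estimated with Welch's method; the sketch below assumes the 32 s record sampled at 32 kHz is available as a NumPy array (replaced here by synthetic data), and the segment length is an arbitrary choice.

```python
import numpy as np
from scipy.signal import welch

fs = 32_000                         # sampling frequency of the hotwire signal (Hz)
t = np.arange(0, 32.0, 1.0 / fs)
# Synthetic stand-in for the measured velocity: mean flow + broadband fluctuation
u = 8.0 + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Welch estimate: long segments give fine frequency resolution at low frequencies
f, psd_v = welch(u - u.mean(), fs=fs, nperseg=2**15, window="hann")
u_rms = np.sqrt(np.trapz(psd_v, f))  # sanity check: integrating the PSD recovers the variance
print(f"u_rms = {u_rms:.3f} m/s")
```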
4 Noise and flow characteristics of the contra-rotating axial fan
4.1 Background noise
Figure 5 gives a typical SPL spectrum of the background noise in the semi-anechoic chamber; the overall A-weighted SPL of the background noise is around 17.6 dB, which is much lower than the studied fan noise level. In consequence, the effect of the background noise on the noise experiments of the contra-rotating axial fan in the semi-anechoic chamber of the Midea Corporate Research Center is negligible.
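For reference, an overall A-weighted level such as the one quoted above follows from energetic summation of the band levels after applying the standard A-weighting curve. The sketch below uses the IEC 61672 A-weighting formula and hypothetical band levels; it is an illustration, not the chamber's actual calibration chain.

```python
import numpy as np

def a_weighting_db(f):
    """Standard A-weighting gain (dB) at frequency f (Hz), IEC 61672 analytic form."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / ((f**2 + 20.6**2)
         * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
         * (f**2 + 12194.0**2))
    return 20.0 * np.log10(ra) + 2.0

def overall_spl(freqs, band_spl_db, a_weighted=True):
    """Energetic sum of band levels: L = 10*log10(sum 10^(Li/10))."""
    levels = np.asarray(band_spl_db, dtype=float)
    if a_weighted:
        levels = levels + a_weighting_db(freqs)
    return 10.0 * np.log10(np.sum(10.0 ** (levels / 10.0)))

# Illustrative band levels (dB) at a few center frequencies (Hz)
freqs = np.array([125, 250, 500, 1000, 2000, 4000])
spl = np.array([30.0, 28.0, 25.0, 22.0, 20.0, 18.0])
print(f"overall A-weighted SPL = {overall_spl(freqs, spl):.1f} dB(A)")
```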
SPL comparison of four measuring points
As demonstrated in the previous section, four PCB microphones are arranged on the four sides of the fan unit to test the sound pressure level. Table 1 compares the sound pressure levels at the four measuring points, and the results indicate that the overall sound pressure levels of the contra-rotating fan are nearly independent of the circumferential angle. Therefore, in the following investigation, only the noise data obtained from Microphone 1 is used.
Noise characteristics of the contrarotating axial fan
The SPL spectra of the model with nine blades for rotor 1 and seven blades for rotor 2 at different rotational speeds are shown in Figures 6A, B, with the frequency non-dimensionalized by the rotational frequency in Figure 6B. It can be seen in Figure 6A that the broadband noise components of the SPL spectra from 100 Hz to 1,000 Hz show the highest values at the different rotational speeds. From Figure 6B, the first peak of the SPL appears at a non-dimensional frequency of 1 at all rotational speeds, and the following peaks of the SPL at different rotational speeds all appear at the blade passing frequencies of rotor 1 and rotor 2 and their harmonics. It then can be concluded that the primary tonal noise is probably caused by the electric box and by the interaction of the two blade rows. It also can be seen that with the increase of the rotational speed of the fan, the tonal noise at the blade passing frequency and its harmonics, as well as the tonal noise at the rotational frequency, becomes higher. Besides, the noise peak value at around 18,000 Hz in Figure 6A is probably generated by the frequency converter, and it remains unchanged at different rotational speeds. Figure 7 shows the non-linear fitting of the tonal SPL at the fundamental frequencies of rotor 1 and rotor 2 under different rotational speeds. From Figure 7, the expressions of the fitted curves for the fundamental frequencies of rotor 1 and rotor 2 are of the form SPL_fr1 = 89.95 lg n − 225.8 (1), where n represents the rotational speed of the fan. The SPL is defined as SPL = 20 lg(P/P_ref) (3), where P represents the sound pressure and P_ref represents the reference sound pressure, which is 2 × 10 −5 Pa. It then can be concluded that P_fr1 ∝ U^4.55 (4) and P_fr2 ∝ U^5.15 (5), where U represents the characteristic velocity and is proportional to the rotational speed n.
With the increase of the rotational speed of the fan, the tonal noise at the blade passing frequency and its harmonic becomes higher, which indicates the interaction of the blade rows grows intense. Based on the analysis above, the SPL of the tonal noise under fundamental frequency of rotor 1 and rotor 2 can be predicted under different rotational speeds.
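The fitted scaling law above can be reproduced by a linear least-squares fit of the tonal SPL against lg n; dividing the fitted slope by 20 then gives the exponent of the sound pressure with rotational speed, as in equations (4)-(5). The speed and SPL values below are illustrative placeholders, not the measured data.

```python
import numpy as np

# Hypothetical tonal SPL at the rotor-1 blade passing frequency vs rotational speed
n_rpm = np.array([600, 700, 800, 900, 1000])
spl_db = np.array([24.5, 30.6, 35.9, 40.3, 44.6])

# Fit SPL = a * lg(n) + b  (least squares on log10 of the speed)
a, b = np.polyfit(np.log10(n_rpm), spl_db, 1)
# Since SPL = 20*lg(P/P_ref), the sound pressure scales as P ~ n^(a/20)
print(f"SPL = {a:.2f} lg(n) + {b:.1f};  P ~ n^{a / 20.0:.2f}")
```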
Flow characteristics of the contrarotating axial fan
The accuracy of computing the fan broadband noise, in a very large part, relies on the accuracy of the flow characteristics, thus the adequacy of the turbulence model is one of the principal elements for the accurate simulation of broadband noise. The flow in the simulation of the fan is commonly assumed as isotropic turbulence, for which the Liepmann and von Karman models are the most popular models.
The Liepmann model, based on an exponential-law assumption for the longitudinal turbulence correlation coefficient, gives the three-dimensional energy spectrum E(k) = (8/π) u_rms^2 Λ (kΛ)^4 / [1 + (kΛ)^2]^3, where E(k) is the three-dimensional energy spectrum, k is the wave number, Λ is the turbulence integral scale, and u_rms is the root-mean-square turbulence velocity. For the large eddies containing most of the turbulence energy, the Liepmann model follows the k^4 law well, while in the inertial subrange it follows a k^−2 law, which deviates from the −5/3 power law observed in experiments. The von Karman model keeps the same k^4 behavior for the large eddies but replaces the exponent 3 in the denominator by 17/6, so that in the inertial subrange it reproduces the −5/3 power law,
with k, u_rms and Λ defined as in the Liepmann model. To compare the energy spectrum density of the measured velocity with that of the Liepmann and von Karman models, Figure 8 plots the power spectrum densities of the two models, psd_L(f) and psd_V(f), together with that obtained from the hotwire measurement. It can be seen that for f << 100 Hz, psd_V(f) ≈ 2.26 psd_L(f), and the spectral densities of the two models vary essentially as f^4. Meanwhile, the power spectral density of the measured velocity changes only slightly over this range of frequency, so the spectral density obtained from the hotwire measurement is apparently higher than that obtained from the two models. This is probably because in the lower frequency range the scale of the vortices is large and the flow is anisotropic turbulence, which leads to a significant difference between the experimental results and those obtained from the Liepmann and von Karman models. For 100 Hz < f < 1,000 Hz, the difference between psd_V(f) and psd_L(f) is very small, and in this range of frequency the power spectral density of the measured velocity is also quite close to that obtained from the Liepmann and von Karman models. In addition, the difference between the spectral densities of the Liepmann and von Karman models gradually increases for f >> 1,000 Hz, owing to the fact that psd_L(f) decays slightly faster than psd_V(f), and the power spectral density of the von Karman model is closer to that of the measured velocity in this range of frequency.
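To make the comparison concrete, the sketch below evaluates the Liepmann and von Karman model spectra in their standard textbook forms (the numerical prefactors used here are the commonly quoted ones and are an assumption, since the paper's own expressions were not reproduced above) and converts wavenumber to frequency with Taylor's hypothesis f = kU/(2π). The integral scale, u_rms and mean velocity are placeholders.

```python
import numpy as np
from scipy.special import gamma

def liepmann_spectrum(k, u_rms, L):
    """Liepmann 3D energy spectrum E(k); integrates to 3/2 * u_rms^2."""
    x = k * L
    return (8.0 / np.pi) * u_rms**2 * L * x**4 / (1.0 + x**2) ** 3

def von_karman_spectrum(k, u_rms, L):
    """von Karman 3D energy spectrum E(k) in a common textbook normalization."""
    ke = np.sqrt(np.pi) * gamma(5.0 / 6.0) / (gamma(1.0 / 3.0) * L)
    x = k / ke
    c = 55.0 / (9.0 * np.sqrt(np.pi)) * gamma(5.0 / 6.0) / gamma(1.0 / 3.0)
    return c * u_rms**2 / ke * x**4 / (1.0 + x**2) ** (17.0 / 6.0)

# Placeholder flow parameters; Taylor's hypothesis f = k*U/(2*pi)
u_rms, L, U = 0.8, 0.02, 10.0          # m/s, m, m/s (illustrative)
f = np.logspace(1, 4, 200)             # 10 Hz to 10 kHz
k = 2.0 * np.pi * f / U
for name, E in [("Liepmann", liepmann_spectrum(k, u_rms, L)),
                ("von Karman", von_karman_spectrum(k, u_rms, L))]:
    psd = E * 2.0 * np.pi / U          # rough conversion of E(k) to a spectrum over f
    # note: a rigorous comparison with hotwire data would use the 1-D longitudinal spectrum
    print(name, f"PSD at 1 kHz ~ {np.interp(1000.0, f, psd):.3e} (m/s)^2/Hz")
```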
5 Influence of the fan parameters on the performance and noise characteristics
Investigation of variable rotational speed regulation characteristics of the fan
Due to practical limitations, variable rotational speed regulation of the contra-rotating fan is probably the only feasible approach to satisfy the requirement of air quantity. In the present experiment, Figure 9 shows the variable rotational speed regulation characteristics of the fan with different axial distances. It can be seen that under variable rotational speed regulation, there is remarkably little variation in the characteristic curves of shaft power and SPL.
Through the variable rotational speed regulation, the three fan models nearly reach the same volume flow rate. It can be seen that not only the shaft power and SPL are fairly close, but the rotational speed is also basically unchanged at the design point for all of the three models. Figure 10 shows variable rotational speed regulation characteristics of the fan with different blade number. It can be seen that under variable rotational speed regulation, the characteristic curves of the shaft power and SPL are nearly identical.
It can be seen that the shaft power and SPL of the fan show rather small differences under the three blade number combinations, while there is an increase of the rotational speed with decrease of the blade number. Figure 11A shows that the shaft power curves of the fan with the three different blade patterns are very close, especially for the cases with straight and concave blade patterns, the characteristic curves of the shaft power are basically identical. For the fan with the three blade patterns, the characteristic curves of SPL under variable rotational speed regulation vary obviously, as shown in Figure 11B, and the convex blade gets the lowest noise level of all three patterns.
It can be seen that near the design flow rate, the fan model with the convex blade pattern shows slightly better performance and noise characteristics. Meanwhile, to achieve the required design flow rate, the rotational speed for the fan models with different blade pattern varies slightly. Figure 12A shows that the shaft power curves of the fan with the different blade thicknesses under variable rotational speed regulation increase evidently with the increase of the blade thickness. Meanwhile, it can be seen in Figure 12B that with increase of the blade thickness, the SPL curves of the fan decrease evidently first, and with the blade thickness of 15 mm, it shows the lowest noise characteristics. While with the increase of the blade thickness from 15 mm to 20 mm, the SPL curves are quite close. This indicates there might be an optimum blade thickness to get the best noise characteristics.
It can be seen that with the increase of the blade thickness, to reach the flow rate near design point, the required rotational speed and shaft power of the fan model both increase, while the SPL decreases first and then increases slightly. Besides, the disparity of the rotational speed for the fan models with the blade thickness is obviously larger than that for the fan models with the blade pattern.
Spectrum analysis of the fan with different blade profiles
To analyze the influence of the blade thickness on the noise characteristics, Figure 13 shows the SPL spectra of the contra-rotating axial fan with the different blade thicknesses under four different rotational speeds. It can be seen that below the fundamental frequencies of rotor 1 and rotor 2, the SPL varies only slightly for the model fan with different blade thicknesses under the four rotational speeds. For frequencies between 100 Hz and 5,000 Hz, the broadband noise of the model fan decreases with the increase of the blade thickness, which indicates that the blade thickness has an obvious impact on the broadband noise of the contra-rotating axial fan in this frequency range. Meanwhile, in this frequency range, the SPL of the broadband noise varies evidently for the models with blade thickness from 5 mm to 15 mm, compared with that from 15 mm to 20 mm. For the higher frequency range (f > 5,000 Hz), the influence of the blade thickness on the SPL gradually decreases. Additionally, the tonal SPL at the fundamental frequencies of rotor 1 and rotor 2 slightly increases with the increase of the blade thickness, which is evidently different from the influence of the blade thickness on the broadband components. Moreover, at the harmonics of the BPF, the variation of the tonal SPL is inconsistent with that at the fundamental frequency for the model fan with different blade thicknesses.
In general, the broadband noise of the model fan with the different blade thicknesses between 100 Hz and 5,000 Hz is obviously lower than that out of this frequency range (f < 100 Hz or f > 5,000 Hz), and this also applies to the models with the different blade pattern. It then can be concluded that the blade profile of the contra-rotating fan has an obvious impact on the broadband noise characteristics under moderate and high frequencies.
Conclusion
In the present experimental study, the performance and noise characteristics of a contra-rotating fan under variable rotational speed regulation are studied, and models with different parameters of the fan are applied in the experiments. Compared with the existing experimental and numerical investigations, the present research attempts to discuss the influences of structural and blade parameters of the contra-rotating fan on variable rotational speed regulation characteristics of performance and noise, thus provides instructions for designing contra-rotating axial fans. Conclusions are drawn as follows.
1) It can be concluded that the tonal sound pressure at the fundamental frequencies of rotor 1 and rotor 2 is proportional to the 4.55th and 5.15th power of the characteristic velocity, respectively, which is itself proportional to the rotational speed.
For the accurate prediction of flow characteristics, which to a great extent determine the accurate prediction of broadband noise, the Liepmann and von Karman models yield close results for the power spectrum density at moderately high frequencies, where the two models effectively predict the spectrum density of the velocity. This is probably because in this frequency range the scale of the vortices is relatively small and the turbulent flow is approximately locally isotropic, so the experimental results and those obtained from the Liepmann and von Karman models agree well. At higher frequencies, the power spectral density of the von Karman model, compared with that of the Liepmann model, is closer to that of the measured velocity. 2) For the contra-rotating axial fans with different axial distances between rotors or blade number combinations, the characteristic curves of the shaft power and SPL are nearly identical, which indicates that the axial distance and blade number are not sensitive factors for the contra-rotating axial fan under variable rotational speed regulation. Under variable rotational speed regulation, the blade profiles of the fan, including the blade pattern and thickness, have an impact on the characteristic curves of the SPL. The shaft power curves of the fan with the different blade thicknesses decrease evidently with increase of the blade thickness, while the shaft power curves are very close for the different blade patterns. In general, the blade profiles, especially the blade thickness, are sensitive factors for the performance and noise characteristics of the contra-rotating axial fan under variable rotational speed regulation.
3) The SPL varies slightly for the model fan with different blade profile below the fundamental frequency of rotor 1 and rotor 2, while the blade profile of the rotors has an obvious impact on the broadband noise characteristics under moderate and high frequency range. Moreover, for the higher frequency range (f >5000 Hz), the influence of the blade thickness on the SPL gradually decreases.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author. | 6,564.6 | 2023-02-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
Deriving interaction vertices in higher derivative theories
We derive cubic interaction vertices for a class of higher-derivative theories involving three arbitrary integer spin fields. This derivation uses the requirement of closure of the Poincaré algebra in four-dimensional flat spacetime. We find two varieties of permitted structures at the cubic level and eliminate one variety, which is proportional to the equations of motion, using suitable field redefinitions. We then consider soft theorems for field theories with higher-derivative interactions and construct amplitudes in these theories using the inverse-soft approach.
Introduction
The study of scattering amplitudes has revealed surprising simplicity in the mathematical structures underlying Yang-Mills theories. The past few decades have seen impressive progress in our understanding of amplitudes and our efficiency in computing them. Amplitudes exhibit a number of interesting properties and satisfy a variety of relations (KLT, BCJ, color-kinematics and so on). The light-cone gauge offers a not-so-mainstream perspective on scattering amplitudes: with both locality and covariance being non-manifest, this gauge eliminates unphysical degrees of freedom at the cost of making computations more technical. Importantly, spurious degrees of freedom and redundancies do not obscure symmetries in the theory - symmetries often being key to the search for simplicity (the compact spinor helicity variables also emerge naturally in this gauge).
The classic paper [1] presented the 'derivation' of consistent cubic interaction vertices using just two ingredients: physical fields (unphysical degrees of freedom having been eliminated) and the Poincaré algebra (which must close). However, this study did not include higher-derivative corrections, which often appear in effective actions (and serve as potential counterterms in loop amplitudes). Such terms were precluded by the choice of length dimension L^(λ−1) for the coupling constant (λ being the helicity of the fields). There has been considerable work on constructing consistent interaction vertices in the light-front approach using the Fock-space method [2][3][4][5][6][7] and in momentum space [8,9]. This paper expands the framework of [1] to include, beyond the usual structures, higher-derivative terms. The consequences of these terms and their implications for amplitude structures - which have close ties to the light-cone formalism [10,11] - are examined. The inverse-soft method [12][13][14] is then used to build higher-point amplitudes.
Since the light-cone formalism is not covariant, Lorentz invariance needs to be verified. The key idea is to convert this 'task' into a tool, using it to constrain and then determine the Hamiltonian entirely. This allows us to construct cubic interaction vertices for a class of higher-derivative theories. This approach is also generalized to higher-point vertices and, as an example, the quartic vertex is constructed for the simplest possible higher-derivative operator. We also invoke symmetry arguments to explain the permissible structures for n-point interaction vertices.
Scattering amplitudes for a large class of higher-derivative operators have been studied in the literature previously [15][16][17][18][19] using methods like CSW, BCFW, CHY and color-kinematics duality. These operators are not generally constructible, because of the potential boundary term. However, there have been some attempts to recursively construct a class of amplitudes for higher-derivative operators using BCFW or all-line shift methods [17,18]. In this paper, we attempt to recursively construct scattering amplitudes for higher-derivative theories using inputs from soft theorems. We use the inverse-soft method, a technique complementary to those mentioned above, to derive higher-point tree-level amplitudes [13,14]. In this approach, lower-point amplitudes are multiplied by a universal soft factor with appropriate legs shifted. This method is equivalent to BCFW recursion relations; in fact, for MHV amplitudes, the inverse-soft method is much simpler than the other known recursion relation methods. The method can only be used to construct amplitudes if there is no pole at infinity. Starting with the derived cubic interaction vertex as a seed amplitude, we construct MHV amplitudes for a class of higher-derivative theories. We then extend our construction to higher-point NMHV amplitudes by starting with the known seed amplitudes and using the inverse-soft technique to recursively construct them.
2 Construction of cubic interaction vertices for higher-dimensional operators
We define light-cone co-ordinates x^±, x, x̄ in (−, +, +, +) Minkowski space-time, with ∂_±, ∂, ∂̄ being the corresponding derivatives and the operator 1/∂^+ defined following the prescription in [20]. x^+ is chosen as the time coordinate, so p^− is the light-cone Hamiltonian.
The Poincaré algebra in these coordinates is realized on the two physical degrees of freedom φ and φ̄. The Poincaré generators split into two types: kinematical generators K, which do not involve the time derivative ∂_+, and dynamical generators D, which do - and hence pick up non-linear contributions in the interacting theory [1]. Here, we review key features of this formalism and refer the reader to appendix A for additional details, including the explicit generators and their algebra.
The Hamiltonian for the free field theory is expressed in terms of the fields φ_i, with φ_i referring to a field of helicity λ_i and i ∈ Z^+. Upon switching 'on' interactions, the δ_{p^−} operator picks up corrections, order by order, in the coupling constant α.
Reference [1] focused on the case of interactions between fields all having helicity λ, with α having dimensions of L^(λ−1).
In this paper, we consider instead the two most general ansatze for cubic interactions (based on dimensional analysis and helicity counting), labelled Type-1 (5) and Type-2 (6), where µ, ρ, σ, a, b are integers and A and C are numerical factors.
The key departure from [1], for both types of ansatze, is that the dimension of the coupling constant is [α] = L^(λ2+λ3+λ1−1). This choice will permit us to derive cubic interaction vertices through the algebra-closure method in a new class of theories - higher derivative theories, formulated in the light-cone gauge.
Type-1 cubic interaction vertices
We first start with δα p− φ1 and use the commutation relations and dimensional analysis to find the unknown parameters. The commutators impose the conditions (8) on our ansatz. Let λ = λ1 + λ2 + λ3, so the first equation of (8) reads (c + d) − (a + b) = λ. The dimensional analysis of (5) gives the relation (9). Adding the first equation of (8) and (9), and using a, b, c, d ≥ 0, we find that a = b = 0. Therefore, mixed derivative terms are not allowed for type-1 vertices (5). As c + d = λ, there are λ + 1 possible values for the pair (c, d).
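To make the counting explicit, suppose (as the subsequent statement c + d = λ suggests) that the dimensional-analysis relation (9) fixes the total number of transverse derivatives to a + b + c + d = λ. Adding this to the first equation of (8) then gives
2(c + d) = 2\lambda \;\Longrightarrow\; c + d = \lambda, \qquad a + b = 0 \;\Longrightarrow\; a = b = 0 .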
We rewrite the ansatz (5) as a sum of these λ + 1 terms. The next commutator, [δ j+ , δ p−] α φ1 = 0, yields a condition that is satisfied if the coefficients obey a set of recursion relations. To determine the exact values of ρ, µ, and σ, we need the dynamical commutators [δ j− , δ p−] α φ1 = 0. The boost generators j− and j̄− also get corrected when interactions are turned on. The boost generators are determined once we know the spin parts δα s φ and δα s φ̄; for type-1 vertices these are structurally constrained, and due to helicity the transformations δα s φ and δα s φ̄ do not exist. We now compute the dynamical commutators and solve the resulting recursion relation for ρ, σ and µ subject to the boundary conditions. Plugging the values of ρ, σ and µ into our ansatz, we find the final form of the type-1 vertex and, from it, the cubic interaction Hamiltonian. As is well known, for odd λ, non-trivial cubic vertices require the introduction of an antisymmetric structure constant f abc.
Amplitude structures
In momentum space, the cubic vertices (22) have the following structure (with measure and constants suppressed). The off-shell spinor products in this language are then defined, and in terms of spinor helicity variables [11] the vertex can be written compactly. This is consistent with the general result for three-point amplitudes derived in [22,23] using S-matrix arguments and little-group scaling, and in [2,3] using a Fock-space approach.
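For comparison, little-group scaling fixes a three-point amplitude of helicities λ1, λ2, λ3 up to a coupling-dependent constant; in the configuration built from anti-holomorphic (square-bracket) spinor products it reads
A_{3}\!\left(1^{\lambda_{1}},2^{\lambda_{2}},3^{\lambda_{3}}\right) \;\propto\; [12]^{\lambda_{1}+\lambda_{2}-\lambda_{3}}\,[23]^{\lambda_{2}+\lambda_{3}-\lambda_{1}}\,[31]^{\lambda_{3}+\lambda_{1}-\lambda_{2}},
with the conjugate expression in angle brackets for the opposite helicity configuration. The on-shell limit of the vertex (22) is expected to reproduce this structure.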
Type-2 cubic interaction vertices
We start with (6) and compute the commutators to arrive at the conditions (28). We also have, from dimensional analysis, the relation (30). Adding (28) and (30) constrains the exponents; note that if λ1 = 0 then (26) becomes a type-1 vertex. We now encounter a double sum as opposed to the single sum in (12). We need an index n associated with the λ2 + λ3 + 1 possible values the pair (a, b) can take, and an index m for the λ1 + 1 values that the pair (c, d) runs over. We rewrite our ansatz (6) as a double sum. The detailed calculation for this variety of vertex is presented in appendix B.
We find, for the type-2 vertex, the expression (34), where u and v are functions of the λi.
Poincaré invariance is insufficient to uniquely fix the form of cubic interaction vertices of type-2. This is because the helicity constraints permit a non-zero term in the spin part of the boost generator, i.e. δs φ̄ (see appendix B), which is disallowed for type-1 vertices. Physically, this non-zero spin part can be thought of as a loop correction to the usual spin transformation (equation 3.26 of [1]). This was noted in [24], where three-point counterterms were constructed for gravity in the light-front formalism. In that work, it was suggested that additional symmetry is necessary to uniquely determine the exact form of the counterterms; in the case of gravity, the residual gauge symmetry was used to fix the exact form of the three-point counterterm. To determine the type-2 vertex uniquely, an analog of residual gauge symmetry is likely to be necessary.
Since the type-2 vertex contains both kinds of transverse derivatives, it can be shown to be proportional to the free equations of motion [2,25] at cubic order. We rewrite (34) using the free equation of motion, and conclude that type-2 cubic vertices are indeed proportional to it. Therefore, for this class of higher derivative theories, with only type-1 cubic vertices (22), the Hamiltonian takes the form (36). For example, an R² type operator, based on dimensional analysis and helicity, can only produce a type-2 cubic interaction vertex [24]. Being proportional to the equations of motion, it may be removed by a suitable field redefinition (thus all n-point graviton amplitudes produced by the R² term vanish, as expected [19]).
We can generalize this framework from cubic vertices to a specific class of n-point interaction vertices, as discussed below. We construct the simplest possible quartic vertex as a specific example; the details are presented in appendix C.
Comments on n-point interaction vertices in higher derivative theories
In this section, we first deduce the structure of interaction vertices at higher orders purely from dimensional and kinematical constraints, and then prove that all n-point vertices containing purely one type of transverse derivative can be uniquely fixed by the Poincaré algebra.
We work here with a special class of higher derivative theories where λi = λ, and work out the structure of interaction vertices at higher orders purely from symmetry constraints. In a perturbative expansion, the dimension of the coupling for an n-point interaction vertex follows from that of the 3-point coupling α. The n-point Hamiltonian is of the form φ^p φ̄^q. We start with the ansatz (38), where p + q = n, the ai, ci are non-negative integers and the µi are integers. The commutator [δ j , δ p−] yields the condition (39). Using (38), (39) and the non-negativity of the powers of transverse derivatives, we obtain the bounds (40) and (41). For λ = 1, we get p, q ≤ 3. At cubic order, note that the (p = 3, q = 0) AAA structure and the (p = 0, q = 3) ĀĀĀ structure follow from this. At the next order, two new structures, A³Ā and Ā³A, are allowed as compared to the usual Yang-Mills quartic vertices.
We then consider λ = 2, and use q = n − p in (40) and (41). The interaction vertex may be of odd or even order. For an odd-point vertex, n = 2m + 1 where m is a positive integer, and we obtain the condition (43). Thus, for a cubic vertex, where m = 1, (43) allows the helicity structures h h h and h̄ h̄ h̄.
For an even-point vertex, n = 2m, we get the condition (44). Since p is an integer, the condition (44) becomes (45). For a quartic vertex, where m = 2, (45) allows terms of helicity structure h̄ h³ and h h̄³. Thus, higher derivative operators produce new helicity configurations at each order. For example, in [26] it was shown for n = 6 that only h³ h̄³ type vertices occur. Here, additional vertices of type h² h̄⁴ and h̄² h⁴ appear at n = 6.
We now prove that all n-point vertices containing purely one type of transverse derivative can be uniquely fixed by the Poincaré algebra.
We start with the ansatz (38) for the n-point interaction vertex and set ai = 0, so that it contains only one type of transverse derivative. Consistency with the helicity generator j imposes a condition on the exponents. The commutators with the generators j+−, j+ determine the vertex up to the powers of ∂+. The exact powers of ∂+ are fixed by the dynamical generators. We argue that the spin transformation appearing in the dynamical generator j− at this order is trivial, and hence the dynamical commutator can be used to uniquely determine the vertex.
The number of transverse derivatives in the spin transformation at order α^{n−2}, δs φ ∼ α^{n−2} φ1 ... φn−1 (derivatives suppressed), must be one less than that in δp− φ at the same order, due to its dimensionality. The dynamical generator j− has helicity +1. In order for this spin transformation to have helicity +1, the number of transverse derivatives in it must be one greater than that in δp− φ at order α^{n−2}. This proves that the spin transformation cannot be consistent with the helicity and dimensionality simultaneously, and hence must vanish. This allows us to uniquely determine the vertex. Therefore, all n-point vertices containing purely one type of transverse derivative can be uniquely fixed by the Poincaré algebra.
Vertices to Amplitudes: Motivation to use the inverse soft technique
We would now like to compute amplitudes using the interaction vertices derived previously. In principle, one can construct the n-point tree-level amplitudes directly; however, the calculation becomes mathematically tedious as the number of Feynman diagrams, and the number of terms in each diagram, grows exponentially. For example, to calculate a 4-point tree-level amplitude, we need to sum over contributions from the exchange diagrams and the contact diagram. The contact term may be derived by the closure of the Poincaré algebra in special cases (as discussed above), which in itself requires a lot of work. We therefore use a complementary technique, the inverse soft method, in the next section to derive higher-point tree-level amplitudes for the higher derivative theories (see appendix C).
Inverse soft construction in higher derivative theories
Soft factors in the light-cone gauge, and their use in the construction of higher-point interaction vertices, were the focus of [27] (this included a review of the light-cone realization of many covariant results from [12,13,28]). In this section, we explore these methods in the context of the higher-derivative theories considered in this paper.
The idea that higher-point tree level amplitudes can be constructed from lower-point ones by using a multiplicative universal factor, associated with the emission of a soft boson, was first presented in [12] and subsequently developed in [13,14]. It was shown in [13] that the inverse soft construction, with BCFW [29,30] as a guide, can be used to construct gauge theory and gravity amplitudes.
The inverse soft recursion relation for an n-point tree-level gauge amplitude is (47).
The prime above indicates that momentum conservation on the right-hand side requires a shift in the momenta of the adjacent particles p(n−1) and p1. For a positive-helicity soft particle n, the shift is given in (48) [13]. Here only the neighbouring particles are shifted, because only they are affected by the soft limit. For the case of gravity, the soft factor depends on all legs, and thus the inverse soft expression (47) will involve a sum over all particles.
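In the conventions of [13] (quoted here for orientation; overall normalizations may differ), the relation (47) and the shift (48) take the schematic form
A_{n}\!\left(1,\dots,n-1,n^{+}\right) \;=\; \frac{\langle (n-1)\,1\rangle}{\langle (n-1)\,n\rangle\,\langle n\,1\rangle}\; A_{n-1}\!\left(1',2,\dots,(n-1)'\right),
\tilde{\lambda}_{n-1}' = \tilde{\lambda}_{n-1} + \frac{\langle n\,1\rangle}{\langle (n-1)\,1\rangle}\,\tilde{\lambda}_{n}, \qquad \tilde{\lambda}_{1}' = \tilde{\lambda}_{1} + \frac{\langle n\,(n-1)\rangle}{\langle 1\,(n-1)\rangle}\,\tilde{\lambda}_{n},
so that, by the Schouten identity, p_{n-1}' + p_{1}' = p_{n-1} + p_{n} + p_{1} and overall momentum conservation is preserved.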
We will now employ the inverse soft method to construct higher-point amplitudes for theories with higher-dimensional operators. We use the three-point amplitudes found in the previous section as seed amplitudes to systematically construct different classes of higher-point amplitudes.
HF² operator
This is the simplest possible gauge invariant higher-dimensional operator involving spin-0 and spin-1 fields. It is a 5-dimensional operator, given by (49), where F^a_µν is the gluon field strength, f_abc the structure constant of the gauge group, and H the real scalar field. We obtain the cubic interaction Hamiltonian by plugging λ1 = 0, λ2 = λ3 = 1 into (36), where φ is a complex scalar field and H = φ + φ†.
We can see that, by working with the physical fields (using the light-cone gauge), the amplitude for the operator HF² naturally decomposes into holomorphic and anti-holomorphic parts. This key idea, which rests on the self-duality of the field tensor, was first presented in [15]. The full amplitude for this operator (49) can then be obtained as a sum over partial color-ordered amplitudes A_n. The color decomposition for tree-level amplitudes of this operator is similar to the Yang-Mills case [15].
We now construct higher-point amplitudes for this operator. We start with the three-point amplitudes and construct the holomorphic and anti-holomorphic amplitudes separately using the inverse soft method, and then add them to obtain the full amplitude.
The above formula is valid for both massive and massless scalars, and it reduces to the pure Yang-Mills amplitude when we take the momentum of the scalar to zero.
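The formula referred to here is presumably the Parke-Taylor-like φ-MHV expression familiar from the Higgs-plus-gluons literature (quoted for orientation, up to normalization and coupling factors):
A_{n}\!\left(\phi,1^{-},2^{-},3^{+},\dots,n^{+}\right) \;=\; \frac{\langle 1\,2\rangle^{4}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle},
which does not depend explicitly on the scalar momentum and therefore reduces to the pure Yang-Mills MHV amplitude as the scalar momentum is taken to zero.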
We now construct the non-MHV amplitudes using the inverse soft method. The four-point anti-holomorphic amplitude is constructed by adding a soft gluon of positive helicity and using the appropriate shift defined in (48). One can now recursively construct the n-point amplitude by multiplying with the appropriate soft factor and deforming the adjacent legs. The resulting amplitude behaves well at large z under the shift [1⁺, n⁺⟩ and there is no pole at infinity, so the inverse soft construction is valid for non-MHV amplitudes. Since A3(φ, 1⁺, 2⁺) = 0, all higher-point amplitudes of this class vanish, and the full n-point amplitude follows. Similarly, A_n(H, 1⁻, 2⁻, ..., n⁻) can be constructed using the inverse soft method. Our results match the amplitudes derived using the MHV vertex expansion method in the literature [15,31].
F³ operator
We now consider a gauge invariant higher-dimensional operator involving purely spin-1 fields. The simplest such operator is the 6-dimensional one given by (62), where F^a_µν is the gluon field strength and f_abc the structure constant of the gauge group.
The cubic interaction vertex can be obtained from (36) by setting λ1 = λ2 = λ3 = 1, so that λ = 3, and writing down the corresponding Hamiltonian. Similar to the previous case, we see that the decomposition of the operator (62) into holomorphic and anti-holomorphic parts is manifest in the light-cone gauge. This decomposition was first presented in [15,16]. So the full F³ amplitude can be constructed by taking the sum of the holomorphic and anti-holomorphic amplitudes.
In this theory, holomorphic amplitudes with exactly three negative-helicity gluons and an arbitrary number of positive-helicity gluons are referred to as MHV amplitudes and are denoted by A_F+. The anti-holomorphic amplitudes with exactly three positive-helicity gluons and an arbitrary number of negative-helicity gluons are referred to as anti-MHV and are denoted by A_F−.
It was shown in [33,34] that, for the higher derivative theory of massless particles in four dimensions, the tree-level soft photon and graviton theorems receive modifications at subleading and subsubleading orders. However, the leading soft factors are not altered for these theories, because these interactions (F³, R³) are generically suppressed in the soft limit [34]. Therefore, the leading soft factor for the Yang-Mills theory with the F³ correction, in the soft gluon limit, is the same as (53).
The three-point amplitudes extracted from (63) are then used as seeds. A four-point tree-level MHV amplitude corresponding to a single insertion of the F³ operator can be constructed from them as follows: we attach a soft gluon between the adjacent legs 1 and 3.
The above construction is valid because, under the shift [1⁻, 4⁺⟩, the amplitude has no pole at infinity. So the inverse soft construction is valid for this class of amplitudes.
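For reference, the MHV amplitudes generated by a single F³ insertion are known in the literature to take the compact form (up to normalization)
A^{F^{3}}_{n}\!\left(1^{-},2^{-},3^{-},4^{+},\dots,n^{+}\right) \;=\; \frac{\langle 1\,2\rangle^{2}\langle 2\,3\rangle^{2}\langle 3\,1\rangle^{2}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle},
and the four- and five-point expressions obtained from the inverse soft construction above should agree with the n = 4, 5 cases of this formula.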
The 5-point MHV amplitude can also be constructed similarly, and the construction can be continued to the 6-point level.
The inverse soft recursion relation for gravity, for an n-point MHV amplitude, reads as in [13], with the prime again indicating the momentum shift mentioned earlier. For a positive-helicity soft graviton k, the corresponding shift is given in [13]. Using the inverse soft method, the four-point tree-level amplitude is then obtained. For the case of gravity and R³ theories, the higher-point amplitudes are functions of both the holomorphic and anti-holomorphic spinors, which makes the construction of amplitudes using the inverse soft approach more involved. We show below an example of the construction of the five-point amplitude in the R³ theory.
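The leading soft factor for a positive-helicity soft graviton (leg k), which enters the gravitational inverse soft construction, has the standard Weinberg form in spinor-helicity variables, with x and y arbitrary reference legs:
S^{(0)}(k^{+}) \;=\; \sum_{i\neq k} \frac{[k\,i]\,\langle x\,i\rangle\,\langle y\,i\rangle}{\langle k\,i\rangle\,\langle x\,k\rangle\,\langle y\,k\rangle} .
Unlike the gauge-theory case, every hard leg appears in this sum, which is why the gravitational construction involves shifts beyond the two adjacent legs.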
The soft factor contains the term [45]⟨14⟩⟨24⟩/(⟨54⟩⟨15⟩⟨25⟩). Using the appropriate shifts, the amplitude reads as a product of this soft factor with the shifted lower-point amplitude, where M R5 is the five-point amplitude for pure gravity (the proportionality to gravity ensures 'constructibility'). The n-point holomorphic MHV amplitude for the R³ operator can then be written down recursively, and the anti-holomorphic part corresponding to the R³ operator can be similarly obtained. This is consistent with the results derived in [16] using color-kinematic duality. Similar to the previous section, we can construct a class of higher-point NMHV amplitudes using the inverse soft technique. We present the construction of the five-point NMHV amplitude in appendix D.
As we turn on interactions, the other dynamical generators also pick up corrections, where δg s and δ̄g s represent the spin transformations.
The requirement of closure of the Poincaré algebra imposes various conditions on the integers introduced in (80). From (78), the complete Hamiltonian to this order then follows [10]. If we set λ1 = λ2 = λ3 = λ′ in (83) with λ′ odd, H_g vanishes. Hence a self-interaction Hamiltonian for odd integer spins exists if and only if we introduce a gauge group, and its explicit form follows.
B Derivation of cubic vertices with mixed derivatives
The refined ansatz for cubic interaction vertices with mixed derivatives reads as follows. To determine the vertex, we first use the kinematical commutators; they give conditions that are satisfied if the coefficients obey a set of recursion relations. The idea is then to use the dynamical commutators [δ j− , δ p−] α φ1 = 0 to fix ρ, µ, and σ. To solve these commutators, we need the spin parts δα s φ and δα s φ̄. For type-2 vertices, both spin parts have a non-trivial structure, and closing the dynamical commutators [δ j− , δ p−] α φ1 = 0 does not fix the values of ρ, µ, and σ. Therefore we are forced to introduce two functions u(λi) and v(λi) that capture this ambiguity in the powers of ∂+.
C Quartic interaction vertex
The ansatz for the quartic vertex is written in terms of the coupling constant β, a constant X and the structure constant f ijk. The commutator [j , p−]β produces a first set of conditions, and a further kinematical commutator yields recursion relations whose solutions fall into four independent classes. For this helicity configuration, δβ s φ does not exist, because its consistency with the helicity generator j would require such a term to have four transverse derivatives, rendering it inconsistent with the dimensionality. We then fix the values of µ, ρ, σ, δ using the commutator with the dynamical generator j− for each class of solution. The final form of the quartic vertex is (98), where C1, C2, C3 and C4 are numerical coefficients to be fixed. The Poincaré algebra uniquely fixes the structure of the vertex up to the overall numerical constant for each possible solution. These coefficients will be fixed using the fact that the vertex is antisymmetric under the exchange of gluon legs.
The form of the vertex (98) in real space is complicated, so we write the vertex in momentum space, where it is manifestly antisymmetric. In momentum space (measure and constants suppressed), the quartic vertex (99) is a sum of four terms with coefficients C1, C2, C3 and C4, each built from powers of the transverse momenta and of the plus components of the momenta q, p, k and l, multiplying φ Āi Āj Āk, plus the complex conjugate. We write the above expression in terms of spinor helicity variables using the binomial expansion.
The resulting vertex V_β, given in (101), matches the known result in the literature [36]. The construction for the case (0 + + +) follows in a similar manner. The vertices with helicity configurations (0 + + −) and (0 − − +) contain mixed transverse derivatives. For such vertices, the spin transformations in both j− and j̄− are non-trivial; hence, these cannot be uniquely fixed by the algebra. Moreover, at this order, these vertices are proportional to the free equations of motion and hence can be removed by a suitable field redefinition.
D Five-point NMHV amplitude for the R³ operator
The five-point NMHV amplitude can be constructed using the inverse soft method as shown below. The soft factor S for graviton leg 5 of positive helicity is given in [13] as S(1, 5, …). The first class of diagram has three terms, which are related by the interchange of gravitons 2, 3 and 4. We evaluate one of the terms | 6,024.2 | 2023-06-08T00:00:00.000 | [
"Mathematics"
] |
Modern Model of a Rural Settlement: Development of Planning Structure and Reconstruction of Villages
The study is aimed at developing a modern model of a rural settlement that corresponds to the socioeconomic conditions of the market economy in Kazakhstan and to modern ideas about architectural planning solutions of an aesthetic nature. The proposed model will help researchers, experts, and specialists in preparing recommendations for the development of rural settlements of various levels, depending on their administrative and economic significance, population size, and national and regional characteristics. Villages at the present stage of development of Kazakhstan vary significantly both in importance (farm center, rural districts, etc.) and in population size (from 5,000 to 50 people), as well as in the status of territorial zones (residential and industrial zones). The study has allowed us to: reveal the dynamics of the development of rural settlements and their architectural and planning structure in time and space; determine the dominant factor and priority approach to the formation of the architectural and planning structure of the village at each historical stage; identify the reasons for the degradation of the architectural and planning structure of villages in the historical context; establish that the architectural and planning structure with the traditionally clear functional zoning of villages has been gradually replaced by a diffuse-penetrating structure; reveal that the mutual position of the main functional zones has changed, with the production zone and the community center having undergone the greatest transformation; develop a theoretical model of the formation of the architectural and planning structure of rural settlements; and identify the main aspects and principles of its formation.
Introduction
Significant changes have taken place in the Republic of Kazakhstan since it became independent. At first glance, it may seem that political, social, and economic changes are not related to the planning, development, and improvement of the aul (village). However, it is actually much more complicated. For example, the villagers often placed the economic center on their own personal plot when creating their farms. This led to a multiple increase in the livestock population in the residential area, influenced the environment of the village, and disrupted the architectural and planning structure.
In turn, the architectural and planning structure influences the level of public services and amenities, migration, and the comfort of the villagers' life. Accordingly, under market relations and the rationalization of spatial development on the territory of the Republic, the need for programs, forecasts, schemes, and projects for the territorial development of settlements and rural areas is currently increasing, as a means of comprehensively solving acute socioeconomic and environmental problems.
In the modern conditions of the development of agriculture in the Republic of Kazakhstan and the economy as a whole, it has become necessary to revise the existing settlement system. The issues on the agenda include the contradictions between society and nature, between social demands and the requirements for the location of production, and between the location of production and environmental protection. The settlement of the population across the territory is currently a state-level problem, owing to the development of productive forces, the social needs of society, and environmental protection.
At the same time, it must be noted that creating a reliable food supply in the country is impossible without agriculture, which is directly or indirectly related to the living conditions of the villagers and the spatial formation of settlements.
There are studies that consider the role of small settlements in the general settlement system, using examples from different countries of the world, as well as the principles of forming their planning structure depending on regional conditions. For example, the issues of planning cities and rural areas, as well as settlement issues, were considered in [3], by A.V. Ikonnikov [4], V.N. Belousov [5], V.P. Baskakova and A.E. Likhacheva [6], V.I. Krushlinsky and Z.N. Yargina [7], A.Zh. Abilov [8], as well as by modern authors [9].
Some modern works explore the historical evolution of urban development activities in the region, as reflected in the articles "Urban Planning and Recreational Planning of Populated Areas in the Republic of Kazakhstan in the Second Half of the 20th Century" [15] and "Historical aspects of the formation of rural settlements in northern Kazakhstan during the pre-revolutionary period" [16].
It must be noted that extensive empirical material on the topic under study has been accumulated during the functioning of the market economy, but it has not been properly generalized; therefore, no proposals (principles) for a new approach to the architectural and planning solution of villages and their reconstruction in the market economy have been developed.
The study is aimed at developing a theoretical model of the formation of rural settlements.
Accordingly, the aim of the study is driven by its relevance. In its turn, the relevance of the study is associated with the following: a radical change of ownership in villages, privatization of agricultural enterprises, and creation of a new system of farms; migration of large masses of the population and a decrease in the population in some villages and auls; changes in the situation in the development and formation of the architectural and planning structure of rural settlements, which is associated with the abandonment and destruction of some public institutions, industrial buildings, and engineering structures; a significant increase in the role of private subsidiary farming, its development, and growth in size; creation of a farming network, including farm centers; complex environmental situation; and lack of scientific developments and scientifically grounded recommendations for the development of villages and the formation of their architectural and planning structure in the new socioeconomic conditions of Kazakhstan.
The idea of the article is connected with the resumption of resettlement and reconstruction of villages at the present stage, which raises the urgent question of their appropriate classification in relation to specific conditions. In this regard, it is necessary to foresee how villages should develop today and in the future in order to use capital investments in agriculture with the greatest efficiency, and to carefully substantiate planning solutions with consideration of the development of settlements, national traditions, and local characteristics, as well as the rational use of territories, landscape, climate, and natural resources.
This work is based on many years of research, analysis of the historical and modern conditions for the formation of villages and the study of social and regional aspects, which allows laying the scientific foundations for the reconstruction and improvement of agricultural settlements, taking their regional characteristics into account.
The leading hypothesis is that the optimal state of the formation of rural settlements is achieved with comprehensive consideration of all factors and elements, taking the vector of time dynamics into account.
Materials and Methods
Following the hypothesis, various research methods were used in the work, based on a comprehensive and detailed consideration of the solution to the problem posed, the study of objects and their specifics in a variety of interrelationships and relative independence.
The forecast for the development of rural settlements was determined based on the method of expert assessments.
As part of the expert assessments (current state, trends, and prospects), an interview was conducted with the experts (253 people), who were offered a questionnaire of 12 questions covering the main functions of agricultural production and living conditions of people. The experts were highly qualified specialists of various professions and ranks. The goal of the study was to obtain a forecast of the development of rural settlements from a competent group of workers.
The methods of applied sociology were used in surveys of the rural population, heads of collective farms, LLCs, JSCs, rural districts, district akimats, chief specialists, mid-level specialists, school teachers, and cultural workers. These methods were applied to determine the prospects for the further development and use of rural areas. The interlocutors for the interviews were chosen according to their level of education, which provided the necessary level of judgment.
The face-to-face method of polling was used, with questionnaires filled out at home (in the case of an individual interview) and on the spot in a group interview, with handouts provided.
The authors developed tools (which were corrected and modified at each stage), presented as follows: 1. Questionnaire: to poll the rural population; to interview the heads of farms; and to interview the experts.
2. Points: points for the village; demographic points; points describing production and settlement; and points describing farms over five years.
In the course of the polling, the typical method of organizing the sample was determined, and the size of the sample was defined using the formula.
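The formula itself is not reproduced in the text; a commonly used expression for the required sample size, given here purely as an illustration of the standard approach (not necessarily the authors' exact formula), is Cochran's formula with the finite-population correction, for population size N, confidence coefficient z, expected proportion p, and admissible error e:
n_{0} = \frac{z^{2}\,p\,(1-p)}{e^{2}}, \qquad n = \frac{n_{0}}{1 + \dfrac{n_{0}-1}{N}} .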
More than a thousand families with a total population of 4,488 people were interviewed at Stage I. The survey was conducted in 37 settlements.
The same number of settlements was taken at Stages II and III, for comparison with Stage I. The survey made it possible to determine the main structure-forming elements of rural settlements with the help of a public visual image (social image) based on conventional ideas about the subject.
Mathematical statistical techniques were used to process the massive statistical material, in particular, to process the questionnaires of the population poll and other data on the aggregate of rural settlements. The methods of grouping and correlation-regression analysis were used to identify links between the size of rural settlements and their architectural and planning solutions, using the example of the Akmola region. The sampling method was applied in this situation. The methods of extrapolation, expert assessment, and verification of forecasts were used to determine the prospects for the development of rural settlements, production volumes, the location of economic centers, the size of private subsidiary farming, etc.
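As an illustration of what such a correlation-regression analysis quantifies (the specific coefficients obtained for the Akmola region are not reproduced here), the link between settlement size x and a planning indicator y is typically expressed through the Pearson correlation coefficient and the fitted regression line:
r = \frac{\sum_{i}\,(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum_{i}(x_{i}-\bar{x})^{2}\;\sum_{i}(y_{i}-\bar{y})^{2}}}, \qquad \hat{y} = \bar{y} + r\,\frac{s_{y}}{s_{x}}\,(x-\bar{x}) .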
The final forecast for the formation of villages was made on the basis of a logical summation of the data obtained by different methods.
All analytical conclusions were based on the following: expeditionary research in the region under study (2005–2019) aimed at conducting field surveys of settlements to identify the peculiarities of the formation of their planning structure (a number of settlements typical in terms of natural, economic, historical, national, and demographic conditions were selected for research); surveys of 299 rural settlements, 27 regional centers, and 12 cities (the sample was substantiated by the recommendations of the University of Tartu; the data were representative); collection of state statistics (lists of settlements and their importance in the settlement system, information on population migration, the placement of cultural and household facilities, and the state of the housing stock); study of geographical atlases, climate reference books, and statistical and environmental reports; study of expert opinions; study of the planning structure and improvement of villages, using photographic plans and general plans of villages; and collection of archival and historical materials.
Results
It was found in the course of the study that the main prerequisites for the reconstruction of rural areas at the considered stage were the following: the direction of the financial and economic policy of the state; determination of territories that were promising from the standpoint of the economy and market functioning; formation of a promising settlement system; and introduction of science and technology.
It was revealed in the survey of the rural population that migration was caused by negative consequences of sanitary and economic processes: poor living conditions (especially in small settlements), level of service, the desire to move to their historical homeland, decline in the education system, healthcare, culture, disorganization of life, lack of recreation areas, as well as climatic and landscape conditions. That was noted by 93 % of the respondents (Table 1).
It is important to note that similar reasons for migration were observed in agricultural settlements in other countries [17]. Migration trends in the region can be traced based on the following schemes (Figures 1, 2). Accordingly, both general principles and specific ones must be taken into account when reconstructing the planning structure of villages. The general principles include relations between people, everyday life, and national characteristics. The specific principles include the settlement location, natural and climatic specifics, and the history of development. It was established that the following requirements were to be accounted along with the principles: social, aesthetic, organizational and economic, functional, economic and environmental.
As a result, it is proposed to consider a number of issues when solving social problems. The environmental problems are related to the pollution of the village and the placement of manure, feed, fuels, and lubricants on its territory.
The planning problems are divided into the following groups: impact of historical, climatic, socioeconomic factors, and administrative territorial transformations on the formation of rural settlement systems and the architectural and planning structure of rural settlements; analysis and identification of the specific architectural and planning structure of historically established types of villages, as well as the modern trends in the design of settlements; general architectural and planning composition, layout scheme, and functional zoning of the territory; fundamental solution to the problem of locating economic centers of agricultural formations on the territory of the village; identification of opportunities and analysis of options for the development and placement of private subsidiary farming and minifarms; study of options for the architectural and planning placement of settlements in relation to the differentiation of the size and location of private subsidiary farming and minifarms; designing the development and placement of material elements of the production area; analysis of options for placing a network of streets and cattle passes, economic driveways, organization of communication between farms and pastures; change in the classification of rural settlements is envisaged by administrative significance, by role in the system of cultural and consumer services, by functional purpose, and by geographical (territorial) location; and determination of the principles of formation and the main components of the conceptual model.
In accordance with the main planning tasks, a scheme of the functional and planning structure of rural settlements in the Republic of Kazakhstan was developed (Figure 3). Differing opinions on the reorganization of rural settlements at the present stage must be noted; however, their reconstruction remains the main focus [16,18].
It was revealed that the measures for the reconstruction of small settlements consisted in the transition from the predominant focus on the reconstruction of relatively autonomous settlements to the organization of settlement systems of various types and levels in the modern conditions. In this regard, the following is proposed: differentiation of rural areas by the degree of development of integration processes between the cities and villages; priority development of settlement systems in the territories allocated by location, as well as in the most developed zones in terms of using the resource potential; development of a set of measures to organize the economic integrity of settlement systems, taking specific conditions (economic, demographic, sociocultural, natural, and environmental) into account; and reconstruction of small settlements that develop relatively autonomously, taking into account the possibilities of strengthening their interconnection with the nearest urban centers.
It was revealed during the individual interviews with the chief specialists, mid-level specialists, school teachers, and cultural workers (face-to-face polling was used, with questionnaires filled out) that the following topical issues influenced the convenience, attractiveness, and appearance of settlements (the indicators at all three stages differed only by tenths of a percentage point): the principle of historical continuity (noted by 23.4 % of the respondents); the principle of synthesis of architecture and nature (35.4 %); and the principle of reducing climatic discomfort (41.2 %) (Table 2). At the same time, the problem of continuity primarily influences planning solutions that use local regional specifics (historical, national, and socioeconomic) and is reflected in the formation of settlements that are small in population and territorial size; in the functional organization of the territory; in historically justified techniques (for the region under study) of the planning structure; in the rural image and identity during development; and in the use of traditional building materials, structures, volumetric spatial forms, and decor.
The study of the promising problems of urban development using the entire set of social forecasting tools is fundamental in the formation of an architectural model of the development of villages and their structural elements in this work.
The development of a theoretical model as the primary stage of urban planning means increasing the degree of scientific validity of urban planning schemes, programs, and projects.
The following tasks were set and solved when forming a theoretical model of agricultural settlements: improving environmental conditions; increasing the economic efficiency of agricultural production; intensification of integration ties; and intensification of land use.
The following factors were taken into account: natural, historical, national, resource, external, architectural and artistic, scientific, and technical (Figure 4). Taking the above factors into account will allow the following: natural factors will make it possible to ensure the comfort of the population in a particular settlement; historical factors will make it possible to place settlements in historically justified zones, taking the traditional mode of production into account; resource factors will allow the rational use of land; and external factors will allow the population to be settled (settlements to be placed) within spaces of economic interaction.
Discussion
The proposed theoretical model of the formation of an agricultural settlement is based on the main modern trends in the village development, which can be formulated as follows: 1. trends related to population demography and lifestyle; 2. trends related to urbanization, integration, and settlement structure; 3. trends related to the quality of the environment; 4. trends in the development of transport and communications; 5. trends arising from the limited natural and labor resources; and 6. trends in the housing design.
The theoretical model of rural settlement formation is intended for practical implementation. The model provides for a stage-by-stage development of space, from the region as a whole down to the placement of the elements of the village planning structure: the socioeconomic conditions, historical continuity, territorial differentiation, and natural landscape zoning are determined at the first stage; the structural spatial organization of the region, with the allocation of the regional, zonal, and planning levels (multilevel organization of space), is envisaged at the second stage; the search for the optimal solution within the allocated zone is performed at the third stage; the fourth stage includes the architectural and artistic organization of the space of a particular settlement at the planning level; and structural integration is arranged at the fifth level.
A multilevel system of organizing space involves the following: resolving the problem on the territory of Northern Kazakhstan: a regional territorial complex, taking specific conditions and requirements into account at the first level (regional level); the regional territorial complex is subdivided into zones at the second level (zonal level); and formation of a specific object (planning level) with the simultaneous placement of a public center, recreation area, housing, and transport are provided at the third level. Residential formations within the settlement should be subdivided by the level of comfort, based on the cost characteristics of land and real estate. The following territories are distinguished within the settlement: territories of the first level provide increased comfort and are located in more favorable natural conditions; territories of the second level occupy the most preferred part of the settlement; and territories of the third level are of the general level and occupy the rest of the settlement.
In the theoretical model of the agricultural settlement formation, it is proposed to preserve and develop the existing trends in functional zoning with an emphasis made on the following: modern socioeconomic conditions; integration of functional zoning; and return to historically justified fuzzy functional zoning.
At the present stage, most rural settlements on the territory of Northern Kazakhstan need to be rebuilt. In this case, it is important to include the settlement in the planning structure of systems of various ranks. In general, the architectural and planning organization of a small settlement should be carried out through the formation of developed structural and planning elements that have inter-settlement significance.
The spatial structure of the theoretical model of a rural settlement includes the following components: 1. historical background; 2. trends in the formation of rural settlements; 3. main factors of the settlement formation; 4. historical stages and dominant factors; 5. trend to constant deterioration; 6. critical point; and 7. new approach, new vision as a synthesis of all factors and trends.
1. Historical background. The principle of "canon and pattern" had been taken as the basis for organizing the architectural space until the beginning of the 20th century (up to the 1917 revolution). The natural organic approach prevailed in the space organization. Historical continuity and folk traditions were the basis for the design.
With the restoration of the Soviet power, all the existing canons (historical, folk, and natural) were discarded, and other principles of organizing public life based on pseudoequality and pseudocollectivism were put forward, where everyone was an average standard element of a large mechanism. This influenced all areas of development of the society, including the organization of the architectural space.
2. The main trends in the rural settlement formation were the following: an increase in the number of settlements due to the development of new territories; the new social doctrine led to changes in the way of life of villagers, which was associated with the processes of collectivization and the emergence of forms of ownership (state farms, collective farms, state breeding farms, etc.); and the specialization and production focus of farms changed.
3. The main factors in the settlement formation were the following: natural climatic, cultural historical, socioeconomic, scientific, and technical. These factors are described as complex and multidimensional, including a variety of elements.
4. Historical stages and dominant factors. Six main periods can be distinguished in the historical stages of village formation, the analysis of which leads to the conclusion that each stage has its own dominant factor and approach. For example, a natural historical approach was characteristic during the formation of the first permanent settlements on the territory of the region (stage 1); the dominant factor was natural. The second stage in the formation of permanent settlements was characterized by a traditional historical approach; the dominant factor was colonial. A national ethnographic approach to settlement formation was characteristic of the third stage (from the mid-1890s to the early 20th century); the dominant factor was geopolitical. The fourth stage dates to the 1950s and was characterized by a standardized typical approach to the formation of rural settlements with a dominant political factor. The fifth stage of village formation had a pseudoscientific approach with a dominant economic factor. The sixth stage dates to the 1980s and was characterized as a voluntarist typical approach.
5. Trend to constant deterioration. The pseudoscientific approach to the space formation led to the fact that only one factor from the whole complex of factors dominated at each subsequent historical stage. The rest were not taken into account. This led to a simplification and, accordingly, to a deterioration in the organization of living space at all levels each time.
6. The critical point in time coincided with the period of political and economic crisis (the perestroika period, 1985–1991). This period is characterized by maximum destruction and decline in all areas (economic, industrial, social, etc.), which led to the general degradation of rural settlements.
7. The new approach is characterized by a synthesis of all factors and trends, in which an individual approach prevails in the organization of each specific settlement, with the possibility of its change and development in accordance with the requirements of the changing times. The proposed model is not static but dynamic. In theory, the proposed model secures self-organization, self-development, and dynamic balance of the entire structure, which is the optimal approach to ensuring the life processes.
The theoretical model is based on the following principles of organizing a rural settlement of a new type: 1. Environmental feasibility; 2. Flexibility in approach and space organization; 3. Opportunity for development and changes; 4. Taking into account a complex of individual specifics (regional, historical, natural, socioeconomic, construction and technical, etc.); 5. Each structural element of the village must be organized in accordance with the laws of aesthetics; and 6. Identification of key moments forming the space structure.
The key points may include the following: production evolvement due to the development of new agricultural technologies that influence its structure and organization; development of the residential area will be determined by harmony with nature and the standard of living at each specific historical (time) period; transport and communication frame should be multifunctional with minimum length and ensure safety; and buffer zone (zone of transition from external to internal) protects the internal structure of settlements from external harmful influences of any level: economic, natural, climatic, noise, dust, etc.
The interposition of all structural units of a rural settlement is determined by the optimal feasibility, including such concepts as economic feasibility, social feasibility, environmental, natural climatic, national ethnographic, and global standards.
Conclusion
1. The patterns of village formation revealed in the work lead to the conclusion that a social urban planning scenario needs to be created that takes into account all the individual specifics of the designed object.
The social aspect of the proposed scenario lies in the need to take national, demographic, historical, everyday, and cultural requirements into account when designing each specific object.
The urban development aspect of the scenario includes the need to take natural, climatic, environmental, and historical factors into account when designing an object.
The theoretical model of agricultural settlement formation is based on a set of requirements that take into account as many factors, trends, principles, and design methods in a specific region as possible.
The theoretical model of rural settlement formation proposed in the article has been developed on the basis of a comprehensive analysis of the rural areas of Northern Kazakhstan and sociological research; using the methods of architectural and urban development analysis, it allows the identified patterns to be used to achieve the planned results.
2. It has been established in the course of the study that the new socioeconomic conditions of the independent state of the Republic of Kazakhstan, as well as internal and external relations, national, historical, natural climatic, and environmental specifics dictate the need to create a fundamentally new socioeconomic model of a rural settlement, which will require large capital investments, political, economic, and organizational efforts, and long terms.
3. Continuation of research in this direction will reveal the main practical tasks for the revival and development of agriculture and rural areas in the Republic of Kazakhstan.
Social studies, economic estimations, and architectural and planning solutions will give a new impetus to the development of the following sciences: the economics of private subsidiary farming, land management, the planning and development of settlements, and village architecture.
The scientific significance of the work is as follows: on a national scale, there have been no studies of this issue or project proposals, and their implementation in village development practice will yield a significant economic and social effect; on an international scale, this work will be of interest to countries with private subsidiary farming and villages with economic centers; a further aspect is the impact of the obtained results on the development of science and the expected social and economic effect. | 6,388.2 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Geography"
] |
On a new commensal species of Aliaporcellana from the western Pacific (Crustacea, Decapoda, Porcellanidae)
Abstract Aliaporcellana spongicola sp. n. from the Philippines and Indonesia is described. The new species has been frequently photographed by divers because of its striking coloration, but has not been described yet. Aliaporcellana spongicola sp. n. is in fact a widespread commensal of barrel sponges of the genus Xestospongia and other sponges. Morphological characters and ecological information on all described species of Aliaporcellana, and on other porcellanids associated with sponges and soft corals, suggest that all members of the genus are commensals, and that similar morphological adaptations to dwelling on these hosts have evolved independently in different evolutionary lines within Porcellanidae.
Introduction
The porcellanid genus Aliaporcellana was established by Nakasone and Miyake (1969) for a group of Indo-West Pacific species previously assigned to Porcellana Lamarck and to one of three natural groups within Polyonyx Stimpson, designated by Johnson (1958) as the P. denticulatus Paul'son, 1875 group. A diagnostic character used by Nakasone and Miyake (1969) to erect Aliaporcellana is the dactylus of all walking legs bearing two or more distinctly well-developed fixed spines. Aliaporcellana contained nine species until Haig (1978) restricted the genus to the species of the Polyonyx denticulatus group, which now includes the type species A. suluensis (Dana 1852), A. pygmaea (de Man 1902) and A. telestophila (Johnson 1958), and the species described by Nakasone and Miyake (1969), A. kikuchii. A fifth species, A. taiwanensis, was subsequently described by Dong et al. (2011).
Here we describe a new sponge-dwelling species of Aliaporcellana from material collected in the Philippines and Indonesia. Despite having been frequently photographed by divers because of its striking coloration and relatively large size, the species has not been described. With the exception of A. telestophila, commensalism has never been reported for the other congeners. We highlight the characters distinguishing the new species from its congeners, and discuss the morphological traits, present in all Aliaporcellana species and other porcellanids associated with sponges, which we interpret as adaptations to living on these hosts.
Material and methods
We found the new species in material collected in the Philippines by G. Paulay [Florida Museum of Natural History, Gainesville, U.S.A. (UF)] and in Indonesia by C.H.J.M. Fransen [Naturalis Leiden, The Netherlands (RMNH)]. The holotype is deposited in the National Museum of Natural History, Philippines (NMCR). Color photographs of the holotype and of the live crab in the field were provided by G. Paulay, and were included in the description. Measurements of carapace length and width (in mm) of type individuals follow collection information.
Family Porcellanidae
Description. Carapace rounded (Figures 1, 2), considerably variable in form and in length-width ratio; larger females with carapace broader than long (ratio < 1), smaller individuals with carapace relatively longer than broad (ratio > 1); dorsal surface convex, glossy, with faint, transverse striae on branchial and intestinal regions; cervical grooves gently depressed. Front (Figures 1, 2) broad, slightly produced beyond eyes, weakly trilobate, somewhat deflexed; frontal lobe visible in dorsal view, grooved, overreaching lateral ones. Distal margin of entire front lined with row of rounded, upwardly directed small spines (Figure 3a), the largest on supraocular edges. Outer orbital angles (Figure 2) forming acute, bifid tooth followed by hepatic spine of similar size. Epibranchial margin rounded, produced outwards, marked with epibranchial spine; cervical groove faintly marked. Mesial branchial margins crested, with row of 5 or 6 strong, anteriorly and upwardly directed spines of increasing size posteriorly. Sidewalls entire.
Eyes moderately large (Figures 1, 2, 3a), retracted, ocular peduncles short. First movable segment of antennal peduncle (Figures 2, 3b) with strong, anteriorly curved distal spine, second with smaller, anterodistal, acute protuberance, third one globular. Basal segment of antennular peduncle (Figure 3c) with anterior surface transversely …. Chelae moderately different in size and form (Figures 2, 4a-c); merus short, dorsal surface faintly rugose, inner margin with strongly projecting, sub-rectangular projection, fringed distally with cockscomb-shaped row of teeth, other large spines on proximal and distal edge of outer margin, one on distal margin; ventral side with two large spines on distal margin. Carpus 1.5 times as long as wide, dorsal surface evenly convex, similarly structured as carapace, with some faint transversal plications; inner margin with 3-5 low or sharply hooked teeth, decreasing in size distally, distal edge rounded. Outer margin with a row of six or seven acute, upwardly directed spines, the last one forming distal edge. Palm slender, surface rounded, similarly structured as carpus, with faint, transverse striae. Smaller chela with outer margin bearing row of approximately ten sharp spines on proximal half, with scattered, long, simple setae; fingers reaching up to half length of chela, dactylus moderately twisted, opening vertically, cutting edges denticulate, without teeth, both fingers with narrow fringe of fine, plumose setae in proximal 2/3 of length. Larger chela somewhat stouter, outer margin with row of spines less developed or disappearing in large specimens, with scattered, long, simple setae, fingers relatively shorter as in smaller chela; dactylus moderately twisted, opening vertically, cutting edges in pollex and dactylus with broad, shallow tooth, gape naked.
Walking legs (Figures 2, 3f, g) stout, merus with some transversal striae, with scattered, long, simple setae, increasing in number towards dactylus; carpus in first and second leg ending dorsodistally in two minute spines, propodus ventrally with 1 movable spine in addition to terminal triplet; dactylus terminating in bifurcate, curved claw.
Coloration. The background color of carapace and extremities is bright orange (hexadecimal color #e86700), overlain with a reticulate bright blue (hexadecimal color #000de8) pattern (Figures 1, 5). A broad, black band crosses the carapace transversely at the level of the hepatic region; it is fringed on both sides by a small, blue line and a broad, orange band. A similar band extends along the outer border of the chelipeds from the carpus to the tip of the pollex. In a number of individuals the blue color prevails over the orange, and the entire crab appears blue.
Ecology. Aliaporcellana currently consists of six species. Of all of them, A. spongicola sp. n. is by far the most strikingly colorful and has therefore become popular among underwater photographers and marine aquarists. Aliaporcellana spongicola sp. n. dwells on large barrel sponges of the genus Xestospongia Laubenfels [family Petrosiidae; e.g., X. testudinaria (Lamarck 1815)] and on other types of sponges, like the "large, grey foliose sponge" on which the crabs from Sulawesi included in this study were found.
The porcellanid lies in the sponge's folds, where it is most protected from predators (Figure 5).
Distribution. The type specimens come from the central Philippines and northern Sulawesi, Indonesia.
Etymology. The name spongicola (from the Latin word spongia, meaning sponge, and the Latin suffix cola, meaning dwelling) refers to the sponge-dwelling habit of the new species.
Remarks. Aliaporcellana spongicola sp. n. is considerably variable in the shape of the carapace and the degree of spination on body and extremities. As in other porcellanid species, the spines are more defined in smaller specimens. The new species is distinguished from A. pygmaea and A. kikuchii by the lack of acute spines on the dactylus of the smaller cheliped (Osawa 2007; Dong et al. 2011), and by the smoother surface of its carapace and chelipeds (Lewinsohn 1969; Nakasone and Miyake 1969; Werding and Hiller 2007; Osawa and Chan 2010). Aliaporcellana spongicola sp. n. can be distinguished from A. suluensis, A. telestophila and A. taiwanensis by its regularly denticulated front (Figures 2, 3a), which is smooth in the other species, and by the basis of the antennular peduncle, which is crowned with a ring of spines (Figure 3c) and is at most granulate or faintly serrate in the compared species (see Lewinsohn 1969; Werding and Hiller 2007; Dong et al. 2011 for A. suluensis; Ng and Goh 1996 for A. telestophila; Dong et al. 2011 for A. taiwanensis).
Discussion
With the description of Aliaporcellana spongicola sp. n., the genus now comprises six species.
Up to now, A. telestophila is the only species of the genus reported to live as a commensal (Johnson 1958; Ng and Goh 1996). Johnson (1958) described this species based on his own collections and observations, highlighting that A. telestophila was found "strictly [in] commensalism with the octocoral Telesto". However, Ng and Goh (1996) doubted the identification of the octocoral host and referred it to Solenocaulon Gray (family Anthothelidae Broch) instead. Ng and Goh (1996) and Goh et al. (1999) described the porcellanid as a dweller inside the hollow branches of the octocoral, communicating with the outer medium through the openings of these branches. The species lives in male-female pairs; sometimes two pairs are found in one host colony.
Our own observations of the morphology and ecology of A. suluensis collected from sponges in Saudi Arabia, and of all other Aliaporcellana species, led us to conclude that perhaps all species of the genus are commensals. We base our conclusions on the well-developed, fixed spines on the dactylus of the walking legs, a character present in all Aliaporcellana species (see Figures 3f-g) and in other porcellanid commensals that inhabit sponges (e.g., Pachycheles ackleianus A. Milne-Edwards, 1880, Polyonyx hendersoni Southwell, 1909, and P. splendidus Sankolli, 1963; see Haig 1960; Hiller et al. 2010). This morphological trait is probably an adaptation to moving on the surface of this type of host. We hypothesize that all members of the genus Aliaporcellana are commensals of sponges or octocorals, and that this morphological trait has evolved independently in different evolutionary lines within Porcellanidae. Aliaporcellana spongicola sp. n. probably lives in male-female pairs, as A. telestophila does on the octocoral Solenocaulon (Ng and Goh 1996; Goh et al. 1999).
The association between crab and sponge may be easily overlooked because sponges are often attached to each other and to rocks, and are damaged when the rocks are lifted. More collection data for other Aliaporcellana species are needed to confirm the commensal status of the genus.
Closed string deformations in open string field theory II: superstring
This is the second paper of a series of three. We construct effective open-closed superstring couplings by classically integrating out massive fields from open superstring field theories coupled to an elementary gauge-invariant tadpole proportional to an on-shell closed string state, in both the large and the small Hilbert space, in the NS sector. This source term is well known in the WZW formulation, and by explicitly performing a novel large Hilbert space perturbation theory we are able to characterize the first orders of the vacuum shift solution, its obstructions and the non-trivial open-closed effective couplings in closed form. With the aim of obtaining all-order results, we also construct a new observable in the $A_\infty$ theory in the small Hilbert space which correctly provides a gauge-invariant coupling to physical closed strings and which descends from the WZW open-closed coupling upon partial gauge fixing and field redefinition. Armed with this new $A_\infty$ observable, we use tensor co-algebra techniques to efficiently package the whole perturbation theory necessary for computing the effective action, and we give all-order results for the open-closed effective couplings in the small Hilbert space.
Introduction and summary
This second paper of the series including [1,2] is devoted to the coupling of on-shell closed string states to open superstring field theory. In particular we are going to discuss the vacuum shift induced by a physical closed string deformation and the corresponding open-closed effective couplings in RNS open superstring field theory [3][4][5]. Focusing for simplicity on the open string NS sector, we have two main frameworks to discuss superstring field theories depending on whether we choose the dynamical string field to live in the large [6,7] or in the small Hilbert space [8].
We start our analysis with the WZW-like theory in the large Hilbert space [6,7]. This theory has the advantage of having microscopic vertices which are relatively simple, with no insertions of picture-changing operators. We then add a gauge-invariant open-closed term consisting of a simple vertex coupling an on-shell closed string with an off-shell open string, controlled by a deformation parameter µ. The form best suited for studying the effective action is the so-called t-Ellwood invariant, which couples a physical closed string in the (total) -1 picture in the small Hilbert space with an off-shell dynamical open string in the large Hilbert space, and was first discussed by Michishita [9]. Just as in the bosonic string case [1], this term is a tree-level tadpole which destabilizes all the vacua of the theory; these have to be shifted in order to cancel the tadpole. We show that upon expanding around a given vacuum shift solution (if such a solution exists) the theory remains structurally the same as the undeformed one, but it is characterized by a new deformed BRST charge Qµ which anticommutes with η when the vacuum shift equations are satisfied.
Then we analyze the structure of the vacuum shift equations perturbatively in the deformation parameter µ and we attempt to solve them by fixing the standard gauge b0 = ξ0 = 0 outside of the kernel of L0. In doing so we are left with equations in Ker(L0) which, order by order in µ, are obstructions to the existence of the full solution, analogously to what happens for boundary marginal deformations [10][11][12]. We derive a set of sufficient conditions on the closed string deformation which ensure that such obstructions vanish and a vacuum shift solution exists. These conditions are nicely interpreted as vanishing conditions for amplitudes involving an arbitrary number of deforming closed strings and a single physical open string. As it turns out, these amplitudes are naturally written in the large Hilbert space and (denoting by P0 the projector on the kernel of L0) they have an interesting structure involving symmetric insertions of "dual" homotopy operators h and h̃ associated with the mutually commuting derivations Q and η. This η-Q symmetry is a consequence of the N = 2 structure which is at the heart of the WZW theory [6,13], and in fact something very analogous to the homotopy operators h and h̃ has already been discussed in [14], in the study of the BV quantization of the free WZW theory, where instead of ξ0 b0 the zero mode of ξb(z) was used. In our approach, where we solve the equation of motion perturbatively, these operators come into play rather naturally in the emerging perturbation theory even using the standard gauge fixing ξ0 = b0 = 0, provided we remain outside the kernel of L0 in internal propagators, as we should.

After having discussed the existence of a (perturbative) vacuum shift solution, we turn our interest to the construction of the effective action for the modes in the kernel of L0 (the "physical" or, at zero momentum, "massless" fields). To build the effective action we follow [15,16] and, order by order in perturbation theory, we obtain the effective couplings between the deforming closed string state and the massless open strings. The obtained couplings also include an effective tadpole which couples a single massless open string to several deforming closed strings, and we consistently find that this tadpole is precisely given by the above-mentioned obstructions to the vacuum shift, following the same logic as in the bosonic string [1]. A rather sharp difference, however, compared to the bosonic case, is that the presence of the two dual propagators h and h̃ and of the elementary multi-string vertices makes the perturbation theory grow extremely fast, and it becomes too cumbersome to go beyond the first few orders. Nevertheless we find that, just as for the obstructions, this perturbation theory is still naturally organized in a way which is completely symmetric in η and Q and the associated homotopies h̃ and h, hinting at an all-order structure which we are however unable to characterize. Because of this, differently from the bosonic case [1], we do not give all-order results for the full open-closed effective action as we did in general for theories which are based on a cyclic A∞ structure [17], see also [18].
Therefore, before attacking the explicit evaluation of the first non-trivial open-closed couplings (which are discussed in the third paper [2]) we address the same problem in the A ∞ open superstring field theory in the small Hilbert space [8], which can be obtained from the WZW theory in the large Hilbert space by gauge fixing the η gauge invariance and performing a field redefinition [19][20][21].
On the world-sheet, the A∞ theory is more complicated than the WZW one, essentially because its multi-string vertices are constructed iteratively using integrated non-local insertions of the picture-changing operator X = [Q, ξ] and its primitive ξ. However, the great advantage of the A∞ theory is its built-in cyclic homotopy structure, which allows one to switch on the co-algebra language and to handle the whole tree-level perturbation theory at once. Quite surprisingly, no one has ever described the analog of the Ellwood invariant [23][24][25] for the A∞ theory. In fact, although the theory is still based on the Witten star product, it turns out that the usual insertion of a physical closed string at the midpoint of the identity string field does not commute with the multi-string products, due to the non-local picture-changing insertions. However, this failure of commutativity can be "corrected" by adding a whole tower of new products Ek Ψ^{⊗k} coupling the physical closed string to multiple open strings. These new products Ek can be upgraded to coderivations Ek in the tensor algebra and, just as happens for the Ellwood invariant in Witten's bosonic OSFT [17,18], it turns out that their sum E = Σ_{k≥0} Ek is nilpotent, E² = 0, and commutes with the total nilpotent coderivation M describing the fundamental open string products of the theory. This new observable belongs to the general class of observables that have been discussed in [17] (further properties will be discussed in [26]) for generic A∞ theories. Moreover, just as the open string products M can all be encoded in a (large Hilbert space) cohomomorphism G acting on the coderivation of the free theory Q as M = G⁻¹QG [8], the "Ellwood" coderivation E can be similarly obtained as E = G⁻¹eG, where e is the coderivation associated with the simple 0-string product given by the same midpoint insertion which we used in the WZW theory. This has the consequence that, just as the WZW and the A∞ theory are related by a partial gauge fixing of the former and a field redefinition [19][20][21], the same is true for the WZW theory deformed by e and the A∞ theory deformed by E. To compute the first few terms of the effective action one can simply follow [11], but in fact it is much more efficient to work at the level of co-algebras, where we can straightforwardly apply the vertical decomposition for strong deformation retracts discussed in detail in [17] to write down in closed form the complete infinite tower of effective open-closed (tree-level) couplings that one obtains by integrating out the open string field outside the kernel of L0.

The paper is organized as follows. In section 2 we repeat what we did for the bosonic string in [1] in the superstring context, using the NS WZW theory in the large Hilbert space, deforming it with the open-closed invariant. We first discuss the tadpole removal and then we construct the open-closed couplings perturbatively. We conclude section 2 with an explicit computation of a mass deformation induced by a change in the compactification radius for a non-BPS D1-brane. This parallels the corresponding computation for the bosonic string which we presented in [1]. In section 3 we construct the microscopic open-closed coupling E corresponding to the Ellwood invariant in the A∞ theory in the small Hilbert space.
After having constructed the observable E we show that it indeed coincides with the observable one gets by partially gauge fixing the WZW theory with the previously discussed open-closed coupling [9] and performing the field redefinition described in [19][20][21]. In section 4 we systematically write down the open-closed couplings in the A∞ theory to all orders, taking advantage of the co-algebra description. We conclude in section 5 with some discussion of future extensions of the presented material.
The relation between the open-closed couplings of the WZW and the A∞ theory, together with N = 2 localization techniques, will be the main theme of the third and final paper of this series [2].
Tadpole shift
The WZW theory coupled to the superstring version of the Ellwood invariant in the large Hilbert space [9,25] has an action built out of the Maurer-Cartan forms, where Φ is the picture- and ghost-number zero open string field in the large Hilbert space and in the NS sector. Q and η are respectively the zero modes of the BRST current and of the η ghost of a given RNS superstring background, and they are mutually commuting nilpotent derivations. The deformation e is the identity string field I with a midpoint insertion of an h = (0, 0) primary V(z, z̄) at total picture −1, which obeys ηe = 0 (2.5). These properties guarantee that the operator ⟨Φ, e⟩ is gauge invariant under the infinitesimal gauge transformations of the theory. Since the action has been deformed by a gauge-invariant operator, the gauge symmetry of the action is unchanged, but the equation of motion acquires a source term; its interacting part can be written compactly starting from a formal power series expansion. Just as in the bosonic case, because of the tadpole µe, Φ = 0 is no longer a solution, so if we want to study the physics induced by the µ-deformation we have to shift the vacuum to a new equilibrium point. Let us first address this problem at a formal level. To this end we rewrite the EOM (2.11) in the more standard WZW form (2.14) (see for example [16]), where we have used that the relevant operator is invertible (notice also that by construction ad_Φ e = 0). Assume now that we have found a solution Φµ to (2.14). Writing the group element e^Φ as a product with the vacuum shift solution [27], the action becomes that of a shifted theory [28], where the equation of motion has been used to set the tadpole to zero and a new kinetic operator Qµ has been defined. Notice that the shifted kinetic operator Qµ is nilpotent even without assuming that Φµ solves the tadpole-sourced equation of motion. This may look odd, but in fact the need for a proper solution is contained in the requirement that Qµ anticommute with η, which parallels the analogous bosonic condition discussed in [1]. In other words, when we expand around a proper vacuum shift Φµ, then Qµ is a nilpotent operator which maps the small Hilbert space into itself (despite the fact that e^{−Φµ} Q e^{Φµ} itself is not in the small Hilbert space). So the shifted theory is characterized by the pair of mutually commuting nilpotent operators Qµ and η and thus has the same algebraic structure as the initial theory (without the closed string deformation) but a different BRST charge. Therefore the situation is effectively the same as in the simpler bosonic case [1]. Now, as in the bosonic case, we can search for the vacuum shift Φµ perturbatively in µ, writing its first order as a kernel piece ϕ1 (with P0 ϕ1 = ϕ1) plus a piece outside the kernel. By acting with ηQ we find the condition for the existence of a solution to this order; as in the bosonic case, a sufficient condition (which is also necessary in the zero-momentum sector of the open string) is to set P0 e = 0. Going further, if we insist on finding solutions which do not excite the kernel of L0, at second order in µ we get an equation in which we have found it convenient to define a "dual" propagator h̃ built from ξ0 and b0, with X0 = [Q, ξ0] the zero mode of the picture-raising operator. The operator h̃ obeys relations analogous to those of the Siegel-gauge propagator h, and it is suggestive to realize that h̃ is a propagator in the "dual" small Hilbert space (which is identified with the kernel of Q, rather than η), where η (instead of Q) is the kinetic operator. It is then not difficult to see that we have a solution order by order in µ.
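To make the structure just described concrete, the following is a minimal sketch of the perturbative ansatz and its first-order obstruction; the elided displayed equations are not recoverable from this text, so the precise signs, operator ordering and the normalization of h̃ below are our assumptions.

% Sketch of the perturbative vacuum shift (our normalizations and signs).
% h is the Siegel-gauge propagator; \tilde{h} is its "dual", built from \xi_0, b_0 and X_0.
\begin{align}
  \Phi_\mu &= \sum_{n \geq 1} \mu^n \Phi_n \,, \qquad
  \Phi_1 = \varphi_1 - \tilde{h}\, h\, e \,, \qquad P_0 \varphi_1 = \varphi_1 \,, \\
  h &= \frac{b_0}{L_0}\,(1 - P_0) \,, \qquad \{Q, h\} = 1 - P_0 \,, \\
  0 &= P_0\, e \qquad \text{(first-order obstruction, for solutions with } \varphi_1 = 0\text{)} \,.
\end{align}

With these ingredients, the substitution rule quoted later in this section (ϕ → −h̃h e) generates the higher orders of the expansion.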
As in the bosonic case, these conditions should capture the vanishing of S-matrix elements between the deforming bulk closed strings and a single massless open string. Notice that these amplitudes are written explicitly in the large Hilbert space and make use of both propagators h and h̃ in a completely symmetric way. It would be interesting (but beyond the scope of this paper) to systematically investigate how these large Hilbert space amplitudes relate to the more familiar ones in the small Hilbert space. [Footnote 5: We could have defined, as in [14], b̃'0 ≡ d0 ≡ [Q, (bξ)0], where (bξ)0 is the zero mode of the conformal field bξ(z), and obtained analogous properties; we would then have had a slightly different η-Q structure. In other words, we are still fixing the standard ξ0 = b0 = 0 gauge (outside the kernel of L0) but we are nevertheless able to display a complete η-Q symmetry of the amplitudes.]
To conclude this subsection, before discussing the effective action approach, we would like to spend a few words on the Ellwood invariant we use and on the projector condition P0 e = 0. To this end, let us start by considering an NS-NS closed string deformation. This can be constructed starting from a set of (holomorphic) h = 1/2 superconformal primaries U^a_{1/2} and their anti-holomorphic mirrors Ū^b_{1/2}, by considering the picture −1 closed string state built out of them (and similarly for Ū^b). [Footnote 6: The subscript "NS" stands for "NS-NS".] Assuming a generic gluing condition for Ū^b, the polarization now also includes the gluing matrix. Computing the η and Q variations we find the standard relations. The open string field e is then given in terms of the (twist-invariant) operator U†_1 (2.53),
where v_n are known (but unimportant in our analysis) coefficients [29]. The Fock space state P0 e can be explicitly computed as in the bosonic case, and the result involves the NS boundary field W_{1/2}, given by the weight-1/2 component of the bulk-boundary OPE. Of course it is possible that, for a given bulk field V(z, z̄) and a given gluing condition Ω, the boundary field W_{1/2} vanishes, and in this case the vacuum shift is unobstructed (to first order). In the case of an R-R deformation we can use picture (−1/2, −1/2) fields, where α and β̄ are the spinor indices in a given chirality dictated by the chosen GSO projection. The corresponding holomorphic bilocal field will depend on the gluing condition for the spin field, which we will generically take to be S̄_β̄(z̄) → F_β̄^β S_β(z*), where the chirality of β̄ depends on the boundary conditions. Analogously to the NS-NS case, we easily find that (assuming zero momentum in the open string sector) the only possible outcome is controlled by the NS open string field W^{αβ}_{1/2}, given by the h = 1/2 contribution of the R-R bulk-boundary OPE. So we see that for both NS-NS and R-R deformations, the first-order obstruction to the vacuum shift is associated with the creation of a physical NS open string field via the bulk-boundary OPE.
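Schematically, the bulk-boundary OPE controlling this obstruction is the standard BCFT expansion; the coefficients C_V^W(Ω) below are our own notation for the bulk-boundary couplings, which the paper does not display.

% Bulk field approaching the boundary; only the h = 1/2 boundary field matters here.
\begin{equation}
  V(x + iy,\, x - iy) \;\underset{y \to 0^+}{\sim}\;
  \sum_{W} (2y)^{\,h_W - h_V - \bar{h}_V}\; C_V^{\;W}(\Omega)\; W(x) \,.
\end{equation}

P0 e then picks out precisely the h_W = 1/2 term, so the first-order obstruction vanishes if and only if the corresponding boundary field W_{1/2} is absent from this expansion.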
Effective action and open-closed couplings in the WZW theory
We now consider integrating out the states outside the kernel of L0 from the deformed action (2.1). So we split the string field into a massless part ϕ in the kernel of L0 and a massive remainder R, and write down the equation of motion for R. We then fix the gauge and act on the R-equation (2.62) with the full propagator h̃h. In doing so we miss a part of the R-equation (the out-of-gauge equation), which will however be accounted for by the final effective equation for the massless field ϕ, see [17]. This gives the "integral" equation for R. As in the bosonic string [1], the solution R = Rµ(ϕ) of this equation is directly related to the corresponding solution Rµ=0(ϕ) of the undeformed µ = 0 theory, and the first few terms of Rµ(ϕ) can be written down explicitly.
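The relation between the deformed and the undeformed solutions can be recorded compactly; the following closed form is our reconstruction from the substitution rule stated in the next paragraph (with the ordering of h̃ and h as assumed above).

% Deformed solution from the undeformed one (our reconstruction).
\begin{equation}
  R_\mu(\varphi) \;=\; R_{\mu = 0}\big(\varphi - \mu\, \tilde{h}\, h\, e\big)
  \;=\; \sum_{k, \alpha \geq 0} \mu^{\alpha}\, R_{k,\alpha}\big(\varphi^{\otimes k}\big) \,,
\end{equation}

so that every R_{k,α} descends from the purely open-string coefficient R_{(k+α),0} by trading α open-string legs for the closed-string source.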
Notice how R_{k,α} is obtained from R_{(k+α),0} by performing α substitutions ϕ → −h̃h e in all possible ways, according to (2.65). Already from these low-order terms, we can see that a natural way to interpret the double summation in α (number of closed string insertions) and k (number of open string insertions) is as an expansion in 2α + k, which is how the usual string perturbation theory is typically organized. Higher order terms can be obtained straightforwardly, but the reader can check that the corresponding perturbation theory becomes cumbersome very quickly. It would clearly be desirable to have a closed-form expression for the solution Rµ=0(ϕ), as in the case of theories based on a cyclic A∞ algebra (like the bosonic OSFT and the small Hilbert space theory which we focus on in the next sections). Keeping these first few orders (and renaming ϕ → λϕ to have an explicit open string counting parameter), we arrive at the final form of the effective action (2.72). At O(λ^0) we find the non-dynamical cosmological constant, which consists of purely closed string scattering off the initial D-brane. Then at O(λ) we find the "massless" tadpole, which is the same as the obstructions to the full vacuum shift (2.44). At O(λ^2) we find the kinetic term for the (initially) massless fields ϕ. The closed string deformation gives rise to possible mass terms consisting (at leading order) of an (open)^2-(closed) amplitude. The non-vanishing of this amplitude is in turn a first-order obstruction to the existence of an open string marginal deformation triggered by a physical ϕ. At higher orders the purely open string effective action already studied in [16] gets corrected by an infinite tower of closed string corrections, according to the general picture we have already encountered in the bosonic case. Notice that although the recipe to get the above effective couplings is rather straightforward, it is not easy to grasp an all-order structure, which instead was very clear for the bosonic string analysed in [1]. For this reason, as a step towards unveiling the all-order structure of the WZW effective action, we will analyze the same situation in the closely related A∞ open superstring in the small Hilbert space where, following the general structure presented in [17], we will be able to give all-order results on the effective open-closed couplings. However, before delving into this rather technical endeavour, we will give a concrete example of how the above effective couplings (in particular the one responsible for the mass deformation of the open string spectrum) can be easily computed by a simple generalization of what we have done for the bosonic string in [1].
Example: mass correction for the radius deformation of a non-BPS D-string in Type IIA
Before going on with more extended all-order constructions, which will be the topic of the following sections, let us give a simple example of how the above terms in the effective action can be explicitly computed, paralleling what we did in the bosonic case [1]. The setting we want to consider here is a non-BPS D1-brane wrapped on a circle of radius R in Type IIA string theory. The BCFT of this system has been studied in [22]: at the critical radius R = √(2α′), the marginal boundary operators built from e^{±iY/√(2α′)} (in the canonical −1 picture) generate an exact moduli space connecting the D-string to a D0-D̄0 system at maximal distance on the circle. In our example we will start at generic radius R and add a closed string deformation built with the world-sheet operator U†_1 (see [1] and [29] for its definition). For the radius deformation the relevant h = 1/2 matter primary is ψ^y, whose world-sheet SUSY partner is i∂Y. This deformation is the superstring analog of what has been studied in [1] and it describes a modification of the compactification radius R → R + δR. The relation between δR and µ will be derived shortly. But first of all we would like to check that the massless open string field remains massless. To this end we compute the induced mass term from (2.72). Thanks to the condition P0 e = 0 (which can be readily verified by direct OPE), this quantity can be manipulated in complete analogy with the corresponding bosonic string computation in [1], giving the result (2.81): the mode φ0 remains massless under the radius deformation. This is what we expect, since this mode describes the Wilson line deformation, which is exactly marginal for every radius. It is interesting to notice that here in the superstring the integral is finite without the need of a regularization at t ∼ 0, as was the case for the bosonic string [1]. The reason is that, because the interacting open strings are at different pictures, there is no zero-momentum negative-weight field propagating in the amplitude. Next, we would like to compute the mass correction for the GSO(−) fields, for which we have explicitly included an internal Chan-Paton factor σ1 [30]. [Footnote 8: A cocycle factor is understood to make P± and e^{−φ} effectively Grassmann-odd (like ψ) [22].] We have also considered the field at finite momentum k, in order to be able to put ϕ± on-shell for all values of R and n, so that the SFT amplitude is computable without using the Schwarz-Christoffel map as in [1] and in the previous example. Under the mass-shell condition, (2.82) is a physical weight-zero field. Taking into account the multiplicative Chan-Paton factor for the derivations η and Q [30], η → η̃ = η ⊗ σ3 (2.85), we are interested in the mass term of the effective action (2.72). The correlator gives four terms which are all equal, and we end up with an expression in which the bc correlator has been evaluated and an overall minus sign has been isolated from the internal Chan-Paton factors (with an understood normalization of the Chan-Paton trace).
Computing the remaining correlator (keeping in mind that e^{−φ} and P± are effectively Grassmann-odd thanks to their implicit cocycle factors), we finally obtain the mass correction and therefore, from (2.89), the O(µ) shift of the mass-shell condition. Now we can relate the SFT deformation parameter µ to the change in the compactification radius δR by matching (2.92) with the analogous formula one gets from the KK spectrum. We have therefore found a result which is completely analogous to the corresponding computation in bosonic SFT of [1].
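For orientation, the Kaluza-Klein side of this matching can be sketched with the standard mass formula for the GSO(−) momentum modes of a non-BPS brane; the identification of µ with δR through (2.92) is then immediate, although the precise proportionality constant is not recoverable from this text.

% GSO(-) momentum modes on the non-BPS D1-brane wrapped on a circle of radius R.
\begin{align}
  m_n^2(R) &= -\frac{1}{2\alpha'} + \frac{n^2}{R^2} \,, \qquad
  m_n^2\big(R = n \sqrt{2\alpha'}\big) = 0 \,, \\
  \delta m_n^2 &= -\frac{2 n^2}{R^3}\, \delta R
  \qquad \text{under } R \to R + \delta R \,.
\end{align}

Comparing this δm² with the O(µ) mass term computed from the SFT amplitude fixes µ in terms of δR.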
In particular we find that at the critical radius R = √(2α′), for n = 1, the massless modes e^{±iY/√(2α′)} become tachyonic as the radius increases, and the marginal direction connecting the non-BPS D1-brane to the D0-D̄0 pair is lifted, as expected. Following our general picture, the non-vanishing of the above mass-shift amplitude is an obstruction to the existence of a solution representing an open string marginal deformation. In the third paper [2] we will confront O(µ) mass terms for more complicated D-brane settings, and we will take advantage of extra world-sheet structure (N = 2 world-sheet supersymmetry) which will drastically simplify the computation of the involved four-point function (which in this case was elementary).
A new observable in A ∞ open superstring field theory
While the WZW-like open superstring field theory in the large Hilbert space yields interaction vertices which are extremely economical, at any given order they do not seem to exhibit any recognizable algebraic structure. As we have already emphasized in the previous section, this fact prevents us from obtaining all-order results on the vacuum shift and the effective open-closed vertices using the WZW-like open SFT deformed by the Ellwood invariant. After fixing the η gauge symmetry by setting ξ0Φ = 0, it was shown that the WZW-like equation of motion can be rewritten in terms of an A∞ structure which, however, is not cyclic, so that it is not manifest at the level of the action. At the same time, the small Hilbert space theory obtained in this way is known to be related by an explicit field redefinition [19,21] to the "Munich" open superstring field theory [8], which displays a very elegant structure of interactions expressed in terms of a cyclic A∞ structure (although this comes at the cost of having to deal with a somewhat complicated distribution of PCO insertions in the expanded vertices). A brief review of this theory in the formalism of tensor coalgebras [31] and its relation to the WZW-like theory is presented in Appendix A. The algebraic simplicity of the Munich theory then makes it possible to conveniently package the whole perturbation theory using the homological perturbation lemma [32,33] and therefore to present closed-form expressions for effective vertices at arbitrary order. However, if we want to exploit this machinery also for the all-order calculation of the effective open-closed vertices and the vacuum shift in the presence of an exactly marginal closed-string background, a suitable analogue of the Ellwood invariant for the A∞ theory needs to be discussed first.
The aim of this section will be to present a construction of an observable for the cyclic A∞ open superstring field theory which is based on the string field e used in the WZW theory, given in (2.52): a midpoint insertion of an on-shell weight (0, 0), picture −1 closed-string primary on the identity string field I. This will be called the bosonic Ellwood state.
Contrary to the case of the bosonic cubic OSFT, we will however see that computing the BPZ product of e with the dynamical string field does not give rise to a gauge-invariant quantity (the Ellwood invariant) of the A∞ open superstring field theory. Taking a lesson from the construction of the superstring products Mk of the Munich theory, we will then attempt to define higher products Ek for k ≥ 1 so as to build the corresponding observable order by order in Ψ. Eventually, we will recognize a recurrent pattern whose validity we will then proceed to establish to all orders in Ψ. Since all Ek will turn out to be linear in e, the resulting observable (the "dressed Ellwood invariant") will also be linear in e. We will also show that our observable is related by the field redefinition of [19,21] to the t-Ellwood invariant of the partially gauge-fixed Berkovits WZW-like open SFT. In Section 4, we will see that adding this observable to the action yields a new theory which formally exhibits a weak A∞ structure.
Zero- and one-string products
The bosonic Ellwood state e, as described above, is known to satisfy the properties Qe = ηe = 0 [23][24][25]. However, the naive pairing of e with the dynamical string field fails to be gauge invariant at O(Ψ), essentially because X0 e can no longer be regarded as a local midpoint insertion of a closed string operator on the identity string field. Nevertheless, it is possible to quickly verify that this O(Ψ) anomaly can be corrected by a replacement in which we define a new cyclic 1-product E1 in terms of the gauge 2-product µ2 (see (A.8) for an explicit expression for µ2). It is also an immediate consequence of the property (3.1b) (and of the fact that e ∈ H_S) that the product E1 is in the small Hilbert space. We have therefore managed to dress the bosonic Ellwood invariant by adding an O(Ψ^{⊗2}) term defined in terms of a small Hilbert space cyclic 1-product E1, so as to restore gauge invariance to linear order in Ψ. Hence, it is reasonable to expect that by adding higher order terms defined using some k-products Ek for k ≥ 2, we can achieve restoration of gauge invariance to all orders in Ψ.
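The displayed equations defining this dressing are elided above; a natural candidate consistent with the conjugation formula derived later in this section is the following sketch, where the overall normalization and signs are our assumptions.

% Dressing of the naive pairing at O(\Psi): a sketch.
\begin{align}
  \omega_S(e, \Psi) \;\longrightarrow\;
  \omega_S(e, \Psi) + \tfrac{1}{2}\, \omega_S\big(\Psi,\, E_1(\Psi)\big) \,, \qquad
  E_1(\Psi) = \mu_2(e, \Psi) + \mu_2(\Psi, e) \,.
\end{align}

In coderivation language this is E1 = [µ2, e], the first-order term of the conjugation E = G⁻¹eG.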
Higher products
In order to avoid lengthy expressions, we will find it more convenient to work using the tensor coalgebra notation introduced in Appendix A.2. Denoting the desired quantity (which is to be gauge-invariant in the A∞ open SFT) by E(Ψ), we will see that it can be defined in terms of an odd cyclic small Hilbert space coderivation E (see Appendix A.2 for our conventions on the coalgebra machinery), where the ghost-number 2 − k, picture-number k − 1 small Hilbert space products Ek : H^{⊗k} → H can be extracted as Ek = π1 E πk with E0 ≡ e, and we have introduced an arbitrary interpolation Ψ(t) for 0 ≤ t ≤ 1 with Ψ(0) = 0 and Ψ(1) = Ψ. It was shown in [17] that such an E(Ψ) is gauge-invariant (up to terms which vanish on-shell) whenever E commutes as a coderivation with M, that is, whenever we have [E, M] = 0 (3.6) (more details are to be given also in [26]). In terms of the coderivations Ek and Mk corresponding to the products Ek and Mk (obviously we then have E = Σk Ek), the condition (3.6) can be rewritten as a tower of conditions, one for each k ≥ 1. Let us start our construction of E by considering an odd coderivation e which corresponds to the Ellwood state e understood as a 0-string product. The coderivation e inherits the properties (3.1) of the state e; in particular [Q, e] = 0. We also have [η, E0] = 0, namely E0 is a small Hilbert space coderivation. Moreover, E0 is (trivially) cyclic. Next, recalling the form (3.4) of the 1-product E1, we are led to a corresponding definition of the coderivation E1. Similarly, by using the properties of E0 and E1, we can convince ourselves (see Appendix B.1 for a detailed calculation) that with a suitable definition of E2 we obtain [η, E2] = 0, as well as the condition (3.7) for k = 3. Hence, we are led to conjecture that for general k > 0 the coderivation Ek can be defined recursively as in (3.13). Note that the coderivations Ek carry picture number k − 1.
We will now prove that the recursion (3.13) indeed gives E such that [E, M] = [η, E] = 0. To this end, let us introduce the generating functions E(t) and G(t). It is then straightforward to compute (see Appendix B.2 for details) that the recursion (3.13) implies the differential equation (3.15) for E(t), where (3.14a) gives the initial condition E(0) = e, while we also have E(1) = E. Solving (3.15), we therefore find that E = G⁻¹eG (3.16). It also follows from (3.16) and from [e, e] = 0 (which is true for any coderivation derived from a 0-string product) that the coderivation E is formally nilpotent. [Footnote 10: Ignoring any potential divergences coming from closed-string collisions at the midpoint.] We can therefore conclude that, at least formally, the products Ek satisfy the relations of a weak A∞ algebra, which commutes with the A∞ algebra of the products Mk (in the sense that [E, M] = 0). Finally, for the sake of concreteness, one can expand the expression (3.16) for E, the associated products Ek, as well as the resulting observable E(Ψ), explicitly in terms of e and µk for the first few orders in Ψ.
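The outcome of this construction can be condensed as follows; the displayed equations are our reconstruction of (3.15)-(3.17), consistent with all the statements in the text.

% Flow equation solution and the resulting algebraic properties (reconstruction).
\begin{align}
  \mathbf{E}(t) &= \mathbf{G}^{-1}(t)\, \mathbf{e}\, \mathbf{G}(t) \,, \qquad
  \mathbf{E} \equiv \mathbf{E}(1) = \mathbf{G}^{-1} \mathbf{e}\, \mathbf{G} \,, \\
  [\mathbf{E}, \mathbf{E}] &= \mathbf{G}^{-1} [\mathbf{e}, \mathbf{e}]\, \mathbf{G} = 0 \,, \qquad
  [\mathbf{E}, \mathbf{M}] = \mathbf{G}^{-1} [\mathbf{e}, \mathbf{Q}]\, \mathbf{G} = 0 \,,
\end{align}

where the last equality uses M = G⁻¹QG together with Qe = 0.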
These product expansions in turn yield the observable E(Ψ) order by order in Ψ.
Going to the large Hilbert space and using cyclicity of the gauge products µk, we obtain a perturbative expression for the observable, whose first corrections involve ω_L(e, µ2(ξ0Ψ, Ψ) + µ2(Ψ, ξ0Ψ)) (3.20).
Relation with the Ellwood invariant of the WZW theory
Let us now make use of the field redefinition (A.22) of [19,21], which leads to the expression (3.21), where ξt denotes the coderivation associated with the 1-product ξ0∂t. Using cyclicity of G, recognizing the expression (A.19b) for the WZW-like gauge potential component Ãt in terms of the A∞ SFT string field Ψ(t), and integrating in t, the observable can be brought to the form of an integral ∫dt ω_L[((∂t e^{ξ0Ψ̃(t)}) e^{−ξ0Ψ̃(t)} + ∆At(t)) ⊗ e] (3.23), where the string field ∆At(t) satisfies the relation (A.24). At this point, we note that the Ellwood state e can be rewritten in a manifestly η0-exact form, e = η0 f, where the state f is again a local midpoint insertion on the identity string field. [Footnote 11: Recalling (2.47) and using η0 Q U^a = 0, in the NS-NS sector we obtain e = η0 f_NSNS, while in the R-R sector we obtain e = η0 f_RR with (see (2.57)) f_RR = Ṽ^{αβ̄} [ξ c S_α e^{−φ/2}(i) c S̄_β̄ e^{−φ/2}(−i) + c S_α e^{−φ/2}(i) ξ c S̄_β̄ e^{−φ/2}(−i)] I.] This means that we can rewrite the coupling to e through a short chain of equalities.
Here, in the first equality, we have used the fact that η0 acting on f can be replaced with the full action of the covariant derivative D_η(t) (see (A.25)), because f, being a local midpoint insertion on the identity string field, commutes with Ãη(t). In the second equality, we have used the BPZ property of D_η(t) (which follows from the BPZ property of η0 and cyclicity of the star product m2), while in the last equality we have finally used the relation (A.24) satisfied by ∆At(t). This means that the term containing ∆At(t) actually does not contribute to E(Ψ), so that we eventually obtain (3.28). Hence, we note that under the field redefinition (A.22) of [19,21], the proposed observable for the Munich open SFT naturally maps to the t-Ellwood invariant of the partially gauge-fixed Berkovits open SFT (which we reviewed in Section 2).
Effective open-closed couplings to all orders
Let us start by noting that, as a consequence of the formal nilpotency of E and of [E, M] = 0, the µ-deformed products define a nilpotent coderivation; the products M^{(µ)}k can alternatively be expressed through conjugation by G. Introducing for 0 ≤ t ≤ 1 an interpolation Ψ(t) such that Ψ(0) = 0, Ψ(1) = Ψ, we can therefore recast the deformed action S^{(µ)}(Ψ) in the coalgebra language (4.5). As usual, the fact that the products M^{(µ)}k satisfy a (weak) A∞ algebra guarantees gauge symmetry for such an action. As in the case of the cubic OSFT (see [17]), it is natural to conjecture that the action S^{(µ)}(Ψ) based on the new products M^{(µ)}k describes open SFT with a marginal closed string background (determined by the on-shell closed-string state e) turned on. Note that the new theory contains a tadpole term (given by the 0-string product e) which has to be removed by shifting the vacuum, in order to restore the canonical A∞ form of the action (without a 0-string product). As originally discussed in [17] and [1] in the case of the bosonic string, and in the previous sections in the case of the superstring in the large Hilbert space, the fact that this shift can be obstructed may be partially a manifestation of the background D-brane system being unable to adapt to the bulk marginal deformation, as well as of the fact that the bulk deformation itself may not be exactly marginal.
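The elided formulas at the beginning of this paragraph presumably take the following form, which we reconstruct from the preceding section (the second presentation follows from M = G⁻¹QG and E = G⁻¹eG).

% Deformed coderivation: two equivalent presentations and nilpotency.
\begin{equation}
  \mathbf{M}^{(\mu)} = \mathbf{M} + \mu\, \mathbf{E}
  = \mathbf{G}^{-1} \big(\mathbf{Q} + \mu\, \mathbf{e}\big)\, \mathbf{G} \,, \qquad
  \big(\mathbf{M}^{(\mu)}\big)^2
  = \mu\, [\mathbf{M}, \mathbf{E}] + \tfrac{\mu^2}{2}\, [\mathbf{E}, \mathbf{E}] = 0 \,.
\end{equation}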
We will now be interested in the algebraic aspects of integrating out the massive part R = (1 − P0)Ψ of the dynamical string field Ψ, where P0 denotes the projector onto ker L0. In practice, this is done by solving the equations of motion for R in terms of the remaining degrees of freedom ψ = P0Ψ (obtaining a solution Rµ(ψ)) and substituting the string field Ψµ(ψ) ≡ ψ + Rµ(ψ) back into the microscopic action S^{(µ)}(Ψ). Doing so, we end up with an effective action S̃^{(µ)}(ψ) = S^{(µ)}(Ψµ(ψ)) for ψ. The solution Rµ(ψ) takes the form of a tree-level Feynman diagram expansion in terms of the Siegel-gauge propagator h = (b0/L0)(1 − P0), satisfying the Hodge-Kodaira decomposition as well as the "annihilation" conditions hP0 = P0h = h² = 0. This procedure is known to be automatically taken care of by the machinery of the homological perturbation lemma [17]: following the application of the homotopy transfer for the (µ-deformed) interactions δM^{(µ)} ≡ M^{(µ)} − Q, we find a closed-form solution for Rµ(ψ), in which 1_TH is the identity element of TH. We also introduce the cohomomorphisms Π0, I0 corresponding to the canonical projection Π0 : H → P0H and the canonical inclusion I0 : P0H → H (i.e. πk Π0 = Π0^{⊗k} πk and πk I0 = I0^{⊗k} πk), 1_TH is the identity cohomomorphism on TH, and h is the lift of the propagator h to a map on TH, defined such that the lifted Hodge-Kodaira decomposition holds together with the annihilation conditions Π0h = hI0 = h² = 0. Substituting the string field Ψµ(ψ) into the microscopic action S^{(µ)}(Ψ), the classical effective action S̃^{(µ)}(ψ) = S^{(µ)}(Ψµ(ψ)) can be expressed as a weak A∞ action based on the products M̃^{(µ)}k (4.10). Here the symplectic form ω̃S is defined by restricting ωS to P0H, and the interpolation ψ(t) runs from ψ(0) = 0 to ψ(1) = ψ. Also note that the constant term S^{(µ)}(Ψµ(0)) has no effect on the dynamics. For more details about the coalgebra notation, we refer the reader to ref. [17]. Applying the vertical decomposition procedure described in Section 2.5 of [17], the coderivation M̃^{(µ)} can be recast as an explicit perturbation series in µ, in terms of coderivations built with δM ≡ δM^{(µ=0)} ≡ M − Q, the interacting part of the undeformed coderivation M.
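For reference, the homological-perturbation formulas implementing this step, in the conventions of [17] as far as we can reconstruct them, read as follows; readers should check signs and orderings against [17].

% Homotopy transfer of the deformed theory (sketch).
\begin{align}
  \widetilde{\mathbf{M}}^{(\mu)}
  &= \mathbf{\Pi}_0 \Big( \mathbf{Q}
     + \delta\mathbf{M}^{(\mu)}\,
       \frac{1}{\mathbf{1}_{T\mathcal{H}} + \mathbf{h}\, \delta\mathbf{M}^{(\mu)}} \Big)\, \mathbf{I}_0 \,, \\
  \frac{1}{1 - \Psi_\mu(\psi)}
  &= \frac{1}{\mathbf{1}_{T\mathcal{H}} + \mathbf{h}\, \delta\mathbf{M}^{(\mu)}}\;
     \mathbf{I}_0\; \frac{1}{1 - \psi} \,,
\end{align}

the second line packaging the solution Rµ(ψ) through Ψµ(ψ) = ψ + Rµ(ψ).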
Denoting by M̃, Ẽ the homotopy transfer of M, E for the undeformed interactions δM, we can write M̃^{(µ)} as in (4.14). Compared to (4.1), it is therefore necessary to add the higher order corrections O(µ²) in (4.14), which then conspire to restore the nilpotency of the coderivation M̃^{(µ)} (and therefore the gauge invariance of the effective action S̃^{(µ)}(ψ)). Noting the relation between the homotopy transfers of the coderivations E and M (with interactions δM), we can conclude (see Section 2.3 of [17]) that the homotopy transfer of E(Ψ) is an observable of the effective theory with (undeformed) products M̃k = π1 M̃ πk. The relation (4.14) therefore says that simply deforming the effective action S̃(ψ) (based on the A∞ products M̃k) with the observable Ẽ(ψ) does not yield a consistent SFT action, and one has to add higher µ corrections so as to end up with a weak-A∞ action S̃^{(µ)}(ψ). Ignoring the constant term, this can be expanded order by order in µ, with S̃_{k,α}(ψ) denoting the vertices which couple k + 1 open-string insertions with α (on-shell) closed-string insertions. These vertices can be expressed explicitly in terms of the products Ñ_{k,α} ≡ π1 Ñα πk as S̃_{k,α}(ψ) = (1/(k + 1)) ω̃S(ψ, Ñ_{k,α}(ψ^{⊗k})) for k, α ≥ 0 (where we set Ñ_{0,0} = 0). Evaluating explicitly the products Ñ_{k,α} for the first few orders using (4.12), we obtain Ñ_{0,1} = P0 e (4.20a) and its higher analogues. Substituting these into (4.18), we obtain the corresponding effective action vertices, which we expect to be related by a field redefinition to the WZW-like effective interactions (2.72). While we will not attempt to characterize this field redefinition, we will verify in the final paper [2] of the series (for the first few orders) that the WZW-like effective potential vanishes if and only if the A∞ effective potential vanishes, namely that the two theories yield the same constraints on the moduli for any given background.
It is important to note that any classical solution of the effective theory yields a classical solution of the microscopic theory. Indeed, as reviewed in detail in [17], given a classical solution ψ* of the equations of motion derived by varying the effective action S̃^{(µ)}(ψ), the string field Ψ* ≡ Ψµ(ψ*) solves the microscopic equation of motion. An application of this observation arises when discussing the vacuum shift which one needs to perform in order to remove the tadpoles from both the microscopic and the effective action. Indeed, the couplings S̃_{0,α}(ψ) signal the presence of a tadpole in the effective action, which originates from the tadpole of the microscopic theory given by the 0-string product E0. As extensively discussed in [17] for the bosonic OSFT, the tadpoles of both the microscopic and the effective theory can be removed by expanding the action around string fields Ψµ ∈ H and ψµ ∈ P0H, respectively (with the property Ψµ=0 = ψµ=0 = 0), where the equations of motion π1 M̃^{(µ)} (1/(1 − ψµ)) = 0 for the effective vacuum shift ψµ can be interpreted as obstructions to solving the equations of motion for the microscopic vacuum shift Ψµ. Put in other words, the tadpoles of both the microscopic and the effective action can be removed if and only if the equations of motion for ψµ can be solved. The corresponding microscopic vacuum shift is then simply given as Ψµ = Ψµ(ψµ) in terms of the effective vacuum shift ψµ. Assuming that the effective vacuum shift ψµ can be consistently set to zero, the microscopic vacuum shift can then be expanded as in (4.23).
Discussion and outlook
In this paper we have extended the computation of the effective open-closed couplings, originally derived for the bosonic string in [17,18] and later discussed in [1], to open superstring field theory in the NS sector, in both the large and the small Hilbert space. Leaving for the third paper [2] the discussion of the relation between the two effective field theories, and the concrete calculation of some non-trivial open-closed couplings, here we offer some thoughts on possible future research related to the results we have presented in this paper.

• Given the fact that the WZW theory is amenable to exact analytic techniques which are natural extensions of the bosonic ones (see for example [27,35]), it would be natural to search for analytic vacuum shift solutions. The situation is perhaps a little less promising than in the bosonic case, as the study of exact superstring solutions is still in its infancy, and many natural classical solutions (for example lower-dimensional BPS branes obtained via tachyon condensation from unstable higher-dimensional systems) are still to be found. In this regard the search for analytic vacuum shift solutions could be an interesting new direction for analytic methods. Ideally, we would like to describe all possible D-brane systems which are compatible with a given closed string background as classical solutions generalizing [36][37][38], and then we would like to be able to determine their deformations induced by the Ellwood invariant.
• In the bosonic OSFT the Ellwood invariant provided a rather direct route to the computation of the full boundary state defined by a given solution [39]. This construction has not been extended to the superstring, and it would be interesting to do so.
• We have observed that the WZW theory in the large Hilbert space has a peculiar perturbation theory which is manifestly symmetric in (Q, η) and in the associated propagators (h, h̃). It would be interesting to study the systematics of this perturbation theory to understand the emergence of the effective theory in the large Hilbert space (of which we have computed only the first few terms). In the A∞ small Hilbert space theories this is provided by the homotopy transfer, but in the WZW case we seem to lack an analogous convenient packaging of the perturbation theory.
• It would clearly be instructive to complete our analysis by adding the Ramond sector.
We hope that progress in the above directions will be possible in the near future.
A Review of the A∞ construction in the small Hilbert space

Here we will give a brief review of the way the products of the A∞ open superstring field theory are built. See the original papers [8,40] for more details. We will also remind the reader of the field redefinition [19,21] mapping this theory to the partially gauge-fixed WZW-like Berkovits open SFT. For a review of the tensor coalgebra notation, which we will rely upon heavily, see for instance [17].
A.1 Product notation
The action of the "Munich" A∞ open superstring field theory can be written in terms of the dynamical string field Ψ, which lives in the small Hilbert space H_S at ghost number 1 and picture number −1. In the following, we will use grading by degree, d(A) = |A| + 1, where |A| denotes the ghost number of A. The dynamical string field is therefore degree-even. The degree-odd small Hilbert space superstring products Mk : (H_S)^{⊗k} → H_S, carrying picture number k − 1 and ghost number 2 − k, satisfy the A∞ relations which provide the non-linear gauge invariance of S(Ψ). These can be constructed in terms of the degree-odd bosonic products m1 ≡ Q ≡ M1 and m2 (the Witten star product) by going through intermediate steps in the large Hilbert space, the so-called gauge products µk, which are degree-even. Finally, ωS| : (H_S)^{⊗2} → C denotes the symplectic (graded-antisymmetric) form on the small Hilbert space, which is defined in terms of the BPZ product ⟨·, ·⟩ and with respect to which the products Mk are cyclic. Varying the action (A.1) we obtain the equation of motion, and the action is furthermore invariant under the corresponding gauge transformation. We now turn to the definition of the products Mk. Setting M1 ≡ Q and µ1 ≡ 0, one can define the superstring 2-product M2 either directly, by smearing the PCO zero-mode X0 cyclically over the insertions of m2 (so that it is manifest that M2 is in the small Hilbert space), or equivalently in terms of the large Hilbert space gauge 2-product µ2, obtained by graded-smearing the superghost zero-mode ξ0 cyclically over the m2 insertions, and then setting M2 = [Q, µ2]. Such a definition yields an M2 which is non-associative, and one therefore needs to define a 3-string product M3 in order to restore gauge invariance of the action. For the purposes of constructing the superstring products Mk for k ≥ 3, it will prove beneficial to uplift the products to coderivations acting on a tensor coalgebra, which we shall now explain.
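For concreteness, the elided defining formulas for M2 and µ2 read as follows, as we recall them from the literature on the Munich theory (implicit Koszul signs in the degree grading are suppressed, so signs should be checked against [8]).

% 2-products of the Munich theory (recalled, with Koszul signs suppressed).
\begin{align}
  M_2(A, B) &= \tfrac{1}{3} \Big( X_0\, m_2(A, B) + m_2(X_0 A, B) + m_2(A, X_0 B) \Big) \,, \\
  \mu_2(A, B) &= \tfrac{1}{3} \Big( \xi_0\, m_2(A, B) - m_2(\xi_0 A, B) - m_2(A, \xi_0 B) \Big) \,, \\
  \mathbf{M}_2 &= \big[ \mathbf{Q},\, \boldsymbol{\mu}_2 \big] \,.
\end{align}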
We also have the relation [19] G⁻¹ηG = η − m2 (A.16), which ensures that the products Mk are in the small Hilbert space. The form of (A.14a) also makes it manifest that M satisfies (A.11), that is, that the products Mk satisfy the A∞ relations.
A.3 Relation to the WZW-like open SFT
Let us now review the field redefinition mapping between the A∞ and the partially gauge-fixed WZW-like open superstring field theories, which was originally established in [19][20][21]. Substituting the expression (A.14a) into the A∞ SFT action (A.12) and using cyclicity of the cohomomorphism G, as well as the relation (A.3) between the symplectic form and the BPZ product, it is straightforward to rewrite the action in the form (A.18), where we have introduced the string fields Ãη and Ãt (A.19a), (A.19b). One can then verify that these satisfy the relations 0 = ηÃη − m2(Ãη, Ãη) (A.20a) and 0 = ηÃt − ∂tÃη − m2(Ãt, Ãη) − m2(Ãη, Ãt) (A.20b), so that Ãt and Ãη can be interpreted as components of a flat connection. Note that since Ãη needs to solve the OSFT-like equation (A.20a), where the "kinetic" operator η has trivial cohomology in the large Hilbert space (since ξ0 acts as the corresponding contracting homotopy), we must be able to write it in the pure-gauge form Ãη = (η e^{ξ0Ψ̃(t)}) e^{−ξ0Ψ̃(t)} (A.21) for some Ψ̃(t) ∈ H_S. Since the form (A.18) of the A∞ SFT action closely resembles the so-called dual form of the WZW-like action, it is therefore natural to introduce the field redefinition relating the string field Ψ of the A∞ SFT to the string field Ψ̃ of the partially gauge-fixed WZW-like open SFT by imposing (A.22), namely Ãη(t) = (η e^{ξ0Ψ̃(t)}) e^{−ξ0Ψ̃(t)}.
B Detailed calculations
Here we give some detailed calculations and proofs of various results used in the main body of this paper.
B.1 Properties of E 2
Let us give a detailed investigation of the properties of the coderivation E2, which we define recursively in terms of E0 and E1 as in Section 3. Recall that we have already seen in Section 3 that E0 and E1 satisfy the required commutation relations with Q, η and M.
B.2 Proof of (3.15)
Here we will show that the recursive definition (3.13) implies the differential equation (3.15) for the generating function E(t) given in (3.14a).
The calcium-sensing receptor regulates parathyroid hormone gene expression in transfected HEK293 cells
Background The parathyroid calcium receptor determines parathyroid hormone secretion and the response of parathyroid hormone gene expression to serum Ca2+ in the parathyroid gland. Serum Ca2+ regulates parathyroid hormone gene expression in vivo post-transcriptionally, affecting parathyroid hormone mRNA stability through the interaction of trans-acting proteins with a defined cis element in the parathyroid hormone mRNA 3'-untranslated region. These parathyroid hormone mRNA binding proteins include AUF1, which stabilizes, and KSRP, which destabilizes, the parathyroid hormone mRNA. There is no parathyroid cell line; therefore, we developed an engineered parathyroid-like cell using expression vectors for the full-length human parathyroid hormone gene and the human calcium receptor. Results Co-transfection of the human calcium receptor and the human parathyroid hormone plasmid into HEK293 cells decreased parathyroid hormone mRNA levels and secreted parathyroid hormone compared with cells that do not express the calcium receptor. The decreased parathyroid hormone mRNA correlated with decreased parathyroid hormone mRNA stability in vitro, which was dependent upon the 3'-UTR cis element. Moreover, parathyroid hormone gene expression was regulated by Ca2+ and the calcimimetic R568 in cells co-transfected with the calcium receptor, but not in cells without the calcium receptor. RNA immunoprecipitation analysis in calcium receptor-transfected cells showed increased KSRP-parathyroid hormone mRNA binding and decreased binding to AUF1. The calcium receptor led to post-translational modifications in AUF1, as occurs in the parathyroid in vivo after activation of the calcium receptor. Conclusion The expression of the calcium receptor is sufficient to confer the regulation of parathyroid hormone gene expression on these heterologous cells. The calcium receptor decreases parathyroid hormone gene expression in these engineered cells through the parathyroid hormone mRNA 3'-UTR cis element and the balanced interactions of the trans-acting factors KSRP and AUF1 with parathyroid hormone mRNA, as in vivo in the parathyroid. This is the first demonstration that the calcium receptor can regulate parathyroid hormone gene expression in heterologous cells.
Background
Parathyroid hormone (PTH) regulates calcium homeostasis and bone metabolism. Changes in extracellular Ca2+ ([Ca2+]o) are sensed by the parathyroid G-protein coupled calcium receptor (CaR) [1]. The CaR determines the response of the parathyroid to [Ca2+]o at the levels of PTH secretion, PTH gene expression and parathyroid cell proliferation [2,3]. Increased [Ca2+]o activates the CaR, resulting in a G-protein-dependent activation of PLC, PLA2 and PLD [4]. This results in decreased PTH secretion and parathyroid cell proliferation. Calcimimetics bind transmembrane (TM) 6 and TM7 of the CaR to allosterically alter the conformation of the CaR [5,6]. The calcimimetic R568 decreases PTH secretion, PTH mRNA levels and parathyroid cell proliferation [7,8]. Genetic deletion of Gq/11 specifically in the parathyroid leads to severe hyperparathyroidism (HPT) [9]. Similarly, CaR-/- mice are not viable due to the severe HPT [10] and can be rescued by mating with PTH-/- or GCM2-/- mice, in which PTH is either absent or markedly reduced [11,12]. Therefore, the CaR and its signal transduction are central to parathyroid physiology and the maintenance of a normal serum PTH and intact Ca2+ homeostasis.
Low serum Ca2+ and chronic kidney disease lead to secondary hyperparathyroidism, which is characterized by increased PTH mRNA levels in experimental models [13]. The increase in PTH mRNA in vivo is post-transcriptional and is mediated by the interaction of trans-acting proteins with a defined cis-acting AU-rich element (ARE) in the PTH mRNA 3'-untranslated region (UTR) [14][15][16]. A 26-nucleotide sequence within the ARE is conserved among species and is both necessary and sufficient for protein binding and for the regulation of PTH mRNA stability by dietary calcium or phosphorus depletion [16,17]. AU-rich binding factor 1 (AUF1) and Upstream of N-ras (Unr) are PTH mRNA trans-acting proteins that stabilize PTH mRNA [18,19]. The binding of these proteins to the PTH mRNA 3'-UTR is regulated in the parathyroid by chronic hypocalcemia, hypophosphatemia and experimental kidney failure, as well as by the calcimimetic R568 [7,15,16]. We have recently identified the decay-promoting protein KSRP (KH domain splicing regulatory protein) as an additional PTH mRNA 3'-UTR binding protein that determines PTH mRNA stability in transfected cells [20]. KSRP-PTH mRNA interaction is increased in parathyroids from hypophosphatemic rats, where PTH mRNA is unstable, and decreased in parathyroids from hypocalcemic and experimental renal failure rats, where the PTH mRNA is more stable. The balanced interaction of PTH mRNA with AUF1 and KSRP determines PTH mRNA half-life and levels, and hence serum PTH levels [20].
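As a point of reference for the stability statements made here, mRNA turnover is commonly modeled as a first-order decay process; the following standard relations (not given explicitly in the text) connect half-life, decay rate and steady-state transcript levels.

% First-order mRNA turnover: half-life and steady-state abundance.
\begin{equation}
  \frac{d[\mathrm{mRNA}]}{dt} = k_{\mathrm{syn}} - k_{\mathrm{dec}}\,[\mathrm{mRNA}] \,, \qquad
  t_{1/2} = \frac{\ln 2}{k_{\mathrm{dec}}} \,, \qquad
  [\mathrm{mRNA}]_{\mathrm{ss}} = \frac{k_{\mathrm{syn}}}{k_{\mathrm{dec}}} \,.
\end{equation}

A shift of the AUF1/KSRP balance toward KSRP raises k_dec, shortening the half-life and lowering steady-state PTH mRNA at constant transcription.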
There is no parathyroid cell line and the signal transduction pathway whereby [Ca 2+ ] o regulates PTH secretion has been characterized by using bovine parathyroid cells in suspension and rat parathyroid organ cultures [21,22]. Parathyroid hormone gene expression and its regulation in cells in vitro have been studied in primary cultures of bovine parathyroids [23,24]. HEK293 cells transfected with the CaR faithfully maintain an intact signal transduction after the stimulus of a high [Ca 2+ ] o to activate the mitogen-activated protein kinase (MAPK) pathway [25,26]. We have now utilized this transfected heterologous cell system to study the mechanisms whereby the CaR regulates PTH gene expression. As the regulation of PTH gene expression by [Ca 2+ ] o is predominantly posttranscriptional, we studied PTH mRNA stability in a system that was independent of any effect on PTH transcription through the PTH promoter. To do this we expressed PTH mRNA driven by a viral promoter to express large amounts of PTH mRNA in HEK293 cells. Differences in PTH mRNA levels would therefore represent only posttranscriptional regulation. Expression of the CaR markedly decreased PTH mRNA levels and stability and conferred responsiveness to [Ca 2+ ] o and the calcimimetic R568 in these cells. This was mediated by the PTH mRNA 3'-UTR ARE. Moreover, expression of the CaR in the HEK293 cells led to a shift from the interaction of the PTH mRNA with the stabilizing protein, AUF1, to the destabilizing protein, KSRP. Furthermore, the expression of CaR modified AUF1 post-translationally as previously shown in vivo [7,27]. Therefore, the expression of the CaR in this heterologous cell system reproduces the signal transduction that determines PTH gene expression in the parathyroid.
Results
The calcium receptor decreases parathyroid hormone mRNA levels post-transcriptionally in HEK293 cells

PTH gene expression is regulated post-transcriptionally by Ca2+ and calcimimetics affecting PTH mRNA stability [7,15]. To focus on the effect of the CaR on PTH mRNA stability in a cell system, we constructed an expression vector containing the full-length human PTH gene, including its three exons and two introns (hPTH), driven by a viral SV40 (not shown) or CMV promoter (Figure 1A). Expression of PTH from both viral promoters gave similar results, and we used the CMV promoter for further studies.
The hPTH expression plasmid was transiently transfected into HEK293 cells. The transcribed PTH mRNA was correctly spliced, resulting in PTH mRNA of the expected size (Figure 1D). The PTH mRNA was translated into mature human PTH that was measured in the medium by radioimmunoassay (Figure 1F). We then studied the effect of the CaR on PTH mRNA levels in HEK293 cells expressing the PTH gene by transient transfection. The expression of the CaR was confirmed by immunohistochemistry on whole cells, using an intact cell enzyme-linked immunoassay to determine cell surface expression [28] (Figure 1B), and by western blot (Figure 1C). Transfection efficiency was >95%, as indicated by fluorescent microscopy of the cells co-transfected with a green fluorescent protein (GFP) expression plasmid (not shown). Co-transfection of the human (h) CaR together with the hPTH plasmid resulted in a marked decrease in PTH mRNA levels by Northern blot (Figure 1D) and real-time RT-PCR (qPCR) (Figure 1E), corrected for the co-transfected control gene GH (Figure 1E) or endogenous HPRT (Figure 1G). The decrease in PTH mRNA by the CaR was reflected in a decrease in secreted PTH at 48 h (Figure 1F). Transfection with another G protein-coupled receptor (GPCR), the PTH receptor (PTH1R), had no effect on PTH mRNA levels, confirming the specificity of the CaR effect on PTH mRNA (Figure 1G).
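The qPCR normalization described above (PTH corrected for co-transfected GH or endogenous HPRT) amounts, arithmetically, to standard relative quantification. The sketch below illustrates the 2^(-ΔΔCt) calculation; the Ct values are hypothetical placeholders rather than the measured data, and an approximately 100% amplification efficiency is assumed.

```python
# Relative quantification of PTH mRNA by the standard 2^(-ddCt) method,
# normalizing to a reference gene (e.g., HPRT) and to the CaR(-) control condition.
# Ct values are illustrative placeholders, not measured data; ~100% PCR efficiency assumed.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Target abundance in 'sample' relative to 'ctrl', normalized to the reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample   # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl                  # compare with the control condition
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate means: the PTH Ct rises (less mRNA) in CaR(+) cells,
# while the reference gene stays constant.
print(fold_change(ct_target_sample=24.8, ct_ref_sample=20.1,
                  ct_target_ctrl=22.6, ct_ref_ctrl=20.0))   # ~0.23-fold, i.e. reduced PTH mRNA
```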
The calcium receptor decreases parathyroid hormone mRNA levels through the parathyroid hormone mRNA 3'-UTR AU-rich element
We have previously reported that the regulation of PTH mRNA levels is dependent upon a cis-acting instability ARE in the PTH mRNA 3'-UTR [15,16,20]. We then determined if the regulation of PTH mRNA levels by the CaR is exerted through the PTH mRNA ARE, using a growth hormone (GH) reporter gene containing either the rat PTH 63 nt ARE (GH63) or a truncated non-functional PTH mRNA 40 nt element [29]. Over-expression of the CaR decreased GH63 mRNA levels but had no effect on wild-type GH mRNA levels or on a GH mRNA with the truncated 40 nt PTH mRNA ARE (Figure 2A). Our results indicate that the CaR specifically decreases PTH mRNA levels through the PTH mRNA ARE.
To study the effect of the CaR on PTH mRNA stability we used an in vitro degradation assay (IVDA) [16,20]. A radiolabeled polyadenylated in vitro-transcribed hPTH mRNA was incubated with extracts from cells transfected with either the CaR expression plasmid or a control plasmid.
The amount of intact PTH mRNA remaining with time represents the decay of the transcript by the different extracts and is indicative of the decay rate in vivo [15,30]. The rate of PTH mRNA decay was increased by extracts from cells expressing the CaR compared with extracts from cells with control plasmid correlating with PTH mRNA levels in the transfected cells ( Figure 2B, top gel and Figure 2C). We then performed IVDA using polyadenylated PTH mRNA with an internal deletion of the ARE ( Figure 2B, bottom gel and Figure 2C). The PTH mRNA transcript lacking the ARE was stable throughout the experiment and in contrast to the full-length PTH mRNA, was not affected by expression of the CaR ( Figure 2B and 2C). Our results indicate that the CaR specifically decreases steady-state PTH mRNA levels and PTH mRNA stability through the PTH mRNA ARE.
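Decay curves of this kind are commonly summarized as a transcript half-life obtained from a first-order (log-linear) fit of intact-band intensity versus incubation time. The sketch below illustrates that arithmetic; the time points and intensities are hypothetical, not the densitometry underlying Figure 2.

```python
import numpy as np

# Estimate a transcript half-life from an in vitro degradation time course by a
# log-linear (first-order decay) fit: I(t) = I0 * exp(-k t), t_1/2 = ln(2)/k.
# Band intensities below are hypothetical, not the data of Figure 2B.
def half_life(minutes, intensities):
    k = -np.polyfit(minutes, np.log(intensities), 1)[0]  # -slope of ln(I) versus t
    return np.log(2) / k

t = np.array([0, 15, 30, 60, 90])                # incubation time (min)
control = np.array([1.0, 0.9, 0.8, 0.65, 0.5])   # extracts from CaR(-) cells
car = np.array([1.0, 0.6, 0.4, 0.15, 0.07])      # extracts from CaR(+) cells

print(f"control extracts: t1/2 ~ {half_life(t, control):.0f} min")
print(f"CaR extracts:     t1/2 ~ {half_life(t, car):.0f} min")   # shorter half-life -> less stable mRNA
```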
Protein-PTH mRNA interactions by RNA immunoprecipitation assays
In vivo in the rat parathyroid, PTH mRNA stability is determined by the regulated binding of AUF1 and KSRP to the PTH mRNA 3'-UTR [20]. To identify protein-mRNA interactions in the CaR-transfected cells, we performed RNA immunoprecipitation (RIP) assays using extracts from HEK293 cells transfected with the CaR or control plasmids. Immunoprecipitation was performed with antibodies to AUF1, KSRP or control IgG, followed by qRT-PCR of the recovered RNA and input extracts. Parathyroid hormone mRNA was decreased in the CaR-expressing extracts compared with control extracts (Figure 3A), as above (Figure 1D, 1E and 1G). The RIP assay showed that the amount of PTH mRNA bound to AUF1 was decreased in CaR-expressing cells compared with control cells (Figure 3B). In contrast, PTH mRNA bound to KSRP was increased in the CaR-expressing cells (Figure 3C). This binding pattern is consistent with the stabilizing function of AUF1 and the destabilizing function of KSRP on PTH mRNA [18,20]. The increased binding to KSRP and the decreased binding to AUF1 correlate with the decreased PTH mRNA levels and stability in the CaR-expressing cells.

Figure 1 (see previous page). Over-expression of the CaR decreases PTH mRNA levels in co-transfected HEK293 cells. HEK293 cells were transiently co-transfected in triplicate with expression plasmids for the hPTH gene and control GFP or GH and either the CaR (CaR+) or empty vector (CaR-). A. Schematic representation of the human PTH expression plasmid used for transient transfections. The boxes show the CMV promoter (grey) and the PTH exons with the untranslated regions (UTRs) (white) and coding regions (diagonal lines). The arrows show the pre, pro and mature PTH. B. Immunohistochemistry on whole cells using an intact cell enzyme-linked immunoassay for the cell surface expression of the CaR. Untransfected (background), CaR (+) or control plasmid (-) transfected cells were analyzed using a CaR antibody or IgG (-). C. Immunoblot analysis of extracts from HEK293 cells co-transfected with expression plasmids for the CaR and myc-AUF1 as control plasmid, using anti-CaR or anti-myc antibodies. D-G. Effect of the CaR on PTH expression. D. Northern blot for hPTH and co-transfected GFP with the CaR (+) or an empty vector (-). Ethidium bromide staining of the membrane is shown as a loading control. E. qPCR for PTH and co-transfected GH mRNA levels from cells with (red) and without (blue) the CaR. F. Secreted PTH from cells as above, 1 h after an incubation in fresh medium, 1 mM Ca2+. G. qRT-PCR for PTH mRNA levels from cells without and with either the CaR or the PTH1R (checkered). Data in D-G are expressed as fold change (mean ± SE) (n = 3). *, P < 0.01, CaR vs. control.
Figure 2. The CaR decreases PTH mRNA levels and stability through the PTH mRNA 3'-UTR cis element.
The calcium receptor leads to post-translational modification of the parathyroid hormone mRNA binding protein AUF1
In vivo dietary-induced hypocalcemia and hypophosphatemia as well as adenine-induced renal failure and the calcimimetic R568 lead to post-translational modifications of AUF1 as shown by 2D gels [7,27]. To determine whether the effect of the CaR on PTH mRNA stability in the engineered cells involves post-translational modifications of AUF1 as it does in the parathyroid, we performed 2D gels on extracts from HEK293 cells transfected with the CaR expression plasmid or an empty vector as control. AUF1 has four isoforms of p37, p40, p42 and p45. There was no difference in AUF1 protein levels in the 1D gels between cells with and without the CaR, apart from a small decrease in p45 in the CaR-expressing cells ( Figure 4A). However, 2D gels showed that endogenous AUF1 isoforms p37, p40 and p42 had a different distribution of the spots between extracts of cells with and without the CaR ( Figure 4B). These results suggest that the CaR induces post-translational modifications in AUF1 in the transfected cells.
To analyze the four isoforms of AUF1 separately, we utilized myc-tagged AUF1 expression plasmids. HEK293 cells were transiently transfected with expression plasmids for myc-p37, p40, p42 or p45 isoforms of AUF1 together with the CaR or control plasmids. Cell extracts were analyzed on 2D gels with an anti-myc antibody. CaR expression led to changes in mobility of myc-AUF1 p37, p40 and also p45 but not p42 ( Figure 4C). Unlike the changes in myc-AUF1 p45, endogenous AUF1 p45 was only slightly modified by CaR signaling. The change in endogenous p42 was not reflected in the myc-tagged transfected p42. The reasons for these discrepancies are not clear. AUF1 p40 undergoes reversible phosphorylation which may regulate ARE-directed mRNA turnover [31,32]. We therefore added a non-specific phosphatase to extracts from cells expressing myc-p40 with or without the CaR. Calf intestinal phosphatase (CIP) treatment of control extracts modified AUF1 to the form present in CaRexpressing extracts ( Figure 4D). Treatment with CIP had no further effect on the CaR-expressing extracts. These results indicate that the CaR modifies AUF1 post-translationally and suggests that at least part of this change involves phosphorylation of isoform p40.
Parathyroid hormone mRNA levels are regulated by [Ca 2+ ] o and the calcimimetic R568 through the calcium receptor
We then studied the effect of a low-calcium medium on the regulation of PTH gene expression by the CaR in the transfected cells. HEK293 cells transiently co-transfected with expression plasmids for PTH and the CaR or a control plasmid were grown in a medium with either 0.2 or 1.2 mM Ca 2+ . Expression of the CaR decreased PTH mRNA in cells grown in physiological 1.2 mM Ca 2+ concentration by both Northern blots and qRT PCR ( Figure 5A and 5B) as in Figure 1. Importantly, the low-Ca 2+ medium attenuated the decrease in PTH mRNA levels induced by the CaR (Figure 5A and 5B). There was no effect of a low-Ca 2+ medium in cells that did not express the CaR.
Calcimimetics are compounds that bind and activate the CaR [5]. We then performed the same experiments with and without the calcimimetic R568 (Figure 5C and 5D). The CaR decreased PTH mRNA levels as before at 1.2 mM Ca2+ (Figure 5C). Activation of the CaR by R568 decreased PTH mRNA levels at 1.2 mM Ca2+ (Figure 5C) and also at 0.2 mM Ca2+ (Figure 5D). A Ca2+ concentration of 0.2 mM prevented the CaR-mediated decrease in PTH mRNA levels (Figure 5B and 5D). However, at 0.2 mM Ca2+ the addition of R568 still effectively decreased PTH mRNA levels (Figure 5D). There was no effect of CaR expression on co-transfected GFP mRNA levels (Figure 5D), used as a control for transfection efficiency and loading. There was also no effect of R568 on PTH mRNA levels in cells co-transfected with the control plasmid that did not express the CaR (Figure 5C and 5D). The effect of R568 on PTH mRNA levels in the CaR-transfected cells at 0.2 mM Ca2+ indicates that the calcimimetic is effective at activating the CaR even when the CaR is in a relaxed configuration. Therefore, in these engineered cells, the CaR is permissive for the effect of [Ca2+]o and R568 on PTH mRNA levels.
Discussion
Primary bovine parathyroid cells in culture have been successfully used for the study of PTH secretion in the short term (hours) but not for longer-term effects on PTH gene expression. This is because these primary monolayer cultures of parathyroid cells lose the expression of the CaR after 24 h in culture [33]. In vivo, the regulation of PTH gene expression by Ca2+ and calcimimetics is predominantly post-transcriptional, acting on PTH mRNA stability [7,15]. This allowed us to use a viral promoter upstream of the hPTH gene to study the effect of CaR signaling on PTH mRNA levels. The PTH gene was efficiently transcribed, processed and translated to mature immunoreactive PTH. Significantly, PTH mRNA and secreted PTH levels were decreased in cells expressing the CaR compared with control cells. The calcium concentration in the medium was 1.2 mM, which would activate the CaR to reduce PTH mRNA levels in the CaR-transfected cells. Therefore, the CaR suppresses PTH gene expression post-transcriptionally by acting on PTH mRNA driven by a viral and not the native PTH promoter. In vitro degradation assays using extracts from cells with or without the CaR showed that expression of the CaR led to a decrease in full-length PTH mRNA half-life but not in the half-life of a PTH mRNA with an internal deletion of the PTH mRNA ARE. The decreased PTH mRNA stability correlated with decreased PTH mRNA steady-state levels.
Dietary-induced changes in serum calcium and phosphate levels regulate PTH gene expression in vivo post-transcriptionally [14,15]. This regulation is mediated by the changes in binding of stabilizing trans-acting factors, AUF1 and Unr, and the destabilizing protein KSRP, to the defined cis-acting ARE in the PTH mRNA 3'-UTR [16,[18][19][20]. The role of the PTH mRNA ARE in the regulation of PTH mRNA levels by the CaR was demonstrated by transfection experiments using a GH reporter mRNA containing the PTH mRNA 63 nt ARE. The CaR decreased GH-PTH ARE mRNA levels but had no effect on native GH mRNA or a GH mRNA containing a truncated PTH mRNA ARE. Calcium receptor-transfected HEK293 cells showed increased KSRP-PTH mRNA binding and decreased AUF1-PTH mRNA binding by RIP analysis, correlating with the decreased PTH mRNA levels and stability in the CaRtransfected cells. Therefore, the signaling of the CaR in the parathyroid is maintained in the transfected cells as reflected in changes in AUF1 and KSRP PTH mRNA interactions and the role of the PTH mRNA ARE in this regulation.
AUF1-PTH mRNA binding is increased by hypocalcemia or chronic kidney disease, or decreased by hypophosphatemia or administration of the calcimimetic R568 [7,15]. These changes in serum Ca 2+ or phosphate and kidney disease are associated with post-translational modifications of AUF1. AUF1 isoforms p40 and p42 are modified by dietary-induced hypocalcemia, and the secondary hyperparathyroidism of chronic kidney disease and its treatment by R568 [7,27]. We now show that expression of the CaR in the engineered cells also induces post-translational modifications in both endogenous AUF1 and myc-tagged transfected AUF1 isoforms that are similar to the modifications in vivo in the parathyroid. The modification of at least AUF1 p40 involves phosphorylation where expression of the CaR is associated with dephosphorylation. Post-translational modifications of this isoform, AUF1 p40, are altered concomitant with changes in RNA binding activity and stabilization of AREcontaining mRNAs in phorbol ester-treated monocytic leukemia cells [32]. AUF1 p40 recovered from polysomes was phosphorylated on Ser83 and Ser87 in untreated cells but lost these modifications following phorbol ester treatment. It was suggested that selected signal transduction pathways may regulate ARE-directed mRNA turnover by reversible phosphorylation of AUF1 p40 [31,32]. Therefore, both in the CaR-transfected cells and in the phorbol ester-treated cells, dephosphorylation of AUF1 p40, is associated with decreased AUF1 activity. In the case of PTH mRNA, inactivity of AUF1 results in a less stable PTH mRNA after expression of the CaR, as AUF1 is a PTH mRNA stabilizing protein [18,27].
We also show that expression of the CaR confers the regulation of PTH mRNA levels by [Ca2+]o and the calcimimetic R568 upon these cells (Table 1). It is likely that PTH secretion would exhibit a similar regulation by calcium as PTH mRNA levels, given its dependence on the latter. However, secretion was not measured as a function of the extracellular calcium concentration in the heterologous HEK293 cell system, in which there is a prominent component of constitutive secretion, unlike the parathyroid cell [37,38]. Secretion in HEK293 cells may be independently regulated by the CaR, although we have not studied this question. The parathyroid cell is unique in that the stimulus for secretion, gene expression and cell proliferation is a low [Ca2+]o and not a high [Ca2+]o. A similar inverse control of secretion by low [Ca2+]o occurs for PTHrP in the epithelial cells of the lactating breast and certain breast cell lines [39,40]. In addition, high [Ca2+]o decreases the secretion of renin by renal juxtaglomerular cells [41]. Calcium receptor activation increases cell proliferation in cultured cells [42], but inhibits parathyroid proliferation in vivo [43]. It is remarkable and intriguing that despite these differences, the transfected cell system retains so many of the native characteristics of the unique parathyroid cell, specifically the relationship between calcium and PTH mRNA stability. The CaR-mediated decrease in PTH expression in these cells mirrors the marked increase in serum PTH in vivo in mice and men with inactivating mutations of the CaR [6,10,36,43,44]. These results support the concept that the parathyroid cell is geared to constitutively synthesize and secrete PTH, and that it is the presence of a functioning CaR that tonically inhibits PTH expression and secretion and allows responsiveness to [Ca2+]o.
Plasmids
For the hPTH expression plasmid, a HpaII fragment including the three exons and two introns of the hPTH gene from plasmid pPTHg108 [20] was inserted downstream of the CMV promoter in pcDNA3 expression plasmid (Invitrogen, San Diego, CA, USA). The hCaR plasmid was kindly provided by M Lohse (Wurzburg, Germany), the GFP pEGFP-C1 (Clontech, Palo Alto, CA, USA) and hGH (kindly provided by O Meyuhas, Jerusalem, Israel [45]) were used as controls. The GH-PTH63 or GH-tPTH40 expression plasmids contained 63 bp or a truncated 40 bp fragments of the PTH mRNA 3'-UTR ARE that were inserted between the coding region and the 3'-UTR of GH mRNA [29]. hPTH1R was provided by MA Levine, Cleveland, OH, USA). In some experiments cells co-transfected with the GFP plasmid were analyzed by fluorescent microscopy to estimate transfection efficiency, which was always >90%.
Immunoreactive parathyroid hormone
Medium was replaced 1 h prior to collection, and secreted PTH was measured using the Immulite 2000 Intact PTH assay (Los Angeles, CA, USA).
In Vitro Degradation Assays
Radiolabeled transcripts (200,000 cpm) were incubated with 40 μg protein extract from cells in a volume of 50 μl and in a reaction buffer containing 3 mM Tris HCl, pH 7.5, 2 mM MgCl 2 , 3 mM NaCl, 10 mM ATP and 30 units RNasin. At timed intervals samples were removed, RNA extracted, separated on agarose gels and analyzed by autoradiography as described [15].
The following plasmids were used as templates for RNA transcription: pBluescript II KS plasmid containing either the full-length rat PTH cDNA (772 bp) or a PTH cDNA with an internal deletion of the PTH mRNA ARE and a stretch of ~150 dT nucleotides that by in vitro transcription produced a poly(A) tail. The plasmids were linearized with SmaI [20].
2D gel electrophoresis
Analysis was performed as previously described [27].
RNA immunoprecipitation
Transiently transfected cells were grown in a 14 cm dish for 48 h, collected in ice-cold PBS, pelleted and re-suspended in RIPA buffer (containing 150 mM NaCl, 1% NP40, 0.5% sodium deoxycholate, 0.1% SDS and protease inhibitors) supplemented with RNase inhibitors (Promega, Madison, WI, USA) and homogenized by pipetting. Equal amounts of whole cell extracts were immunoprecipitated with Protein A agarose-bound anti-KSRP or AUF1 antibody beads (Calbiochem, Darmstadt, Germany) or IgG as control after incubation for 2 hr at 4°C. The beads were then washed with modified RIPA buffer (supplemented with 1 M NaCl, 1% Sodium Deoxycholate, 1 mM EDTA and 2 M urea). RNA was extracted and analyzed by qPCR for PTH and GH mRNA using SYBR Green ROX Mix as described above.
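Recovery in RIP-qPCR experiments of this type is often expressed as a percentage of the input extract, correcting the input Ct for the fraction of material assayed. A minimal sketch of that calculation follows; the Ct values and the 10% input fraction are hypothetical, not the values behind Figure 3.

```python
import math

# Express RIP-qPCR recovery as percent of input, assuming ~100% PCR efficiency.
# Ct values below are hypothetical placeholders, not the measured data of Figure 3.

def percent_input(ct_ip, ct_input, input_fraction=0.10):
    """RNA recovered in the IP as a percentage of the starting extract."""
    # Correct the input Ct for the fact that only `input_fraction` of the extract was assayed.
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

# Hypothetical values: the KSRP IP recovers more PTH mRNA from CaR(+) extracts, IgG almost none.
for label, ct_ip in [("KSRP IP, CaR(+)", 27.0), ("KSRP IP, CaR(-)", 29.0), ("IgG control", 33.5)]:
    print(label, round(percent_input(ct_ip, ct_input=25.0), 2), "% of input")
```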
Intact cell enzyme-linked immunoassay
The assay was performed as previously described [27]. In brief, 48 h after transfection cells were detached in PBS supplemented with BSA and EDTA. The cells were then pelleted and incubated for 1 h at 4°C with the anti-CaR antibody, followed by a PBS wash (×3) and incubation with the secondary HRP-conjugated antibody. Cells were then washed (×3) prior to adding the HRP substrate, and the signal was visualized using an electrochemiluminescence reader.
"Biology",
"Medicine"
] |
Chiral excitonic instability of two-dimensional tilted Dirac cones
The Coulomb interaction among massless chiral particles harbors unusual emergent phenomena in solids beyond the conventional realm of correlated electron physics. An example of such an effect is excitonic condensation of interacting massless Dirac fermions, which drives spontaneous mass acquisition and whose exact nature remains actively debated. Its precursor fluctuations growing prior to the condensate have been suggested by a recent nuclear magnetic resonance study in an organic material, hosting a pair of two-dimensional (2D) tilted Dirac cones at charge neutrality. Here, we theoretically study the excitonic transition in 2D tilted cones to understand the electron-hole pairing instability as functions of temperature ( T ), chemical potential ( μ ), and in-plane magnetic field ( H ). By solving a gap equation within a weak-coupling treatment and incorporating self-energy effects due to the Coulomb interaction through a renormalization-group technique, we calculate excitonic instability in a T - μ - H parameter space, and find that the pairing is promoted as H is increased but suppressed as μ moves away from the charge-neutrality point. We show that these findings are explained by enhanced or degraded Fermi-surface nesting between the Zeeman-induced pockets connecting the two tilted cones. Furthermore, to evaluate the precursor excitonic fluctuations in relation to this diagram, we consider the Coulomb interaction via a ladder-type approximation and calculate the nuclear spin-lattice relaxation rate, which provides rational ways to understand otherwise puzzling experimental results in the organic material by the μ and H dependence of the instability.
I. INTRODUCTION
The notable electronic properties of graphene as well as topological semimetals and insulators have been attracting increasing attention, not only because of their exotic topological nature but also due to the unusual effects induced by the electron-electron Coulomb interaction [1][2][3][4][5][6][7]. Unlike in conventional solids, the metallic screening in these systems is absent when the Fermi energy E_F is fixed at band-crossing points, owing to a vanishingly small density of states, resulting in preservation of the long-range part of the Coulomb interaction [1,2]. The strength of the interaction is then characterized by a dimensionless coupling constant α = e²/(4πε₀ε ħv) that is proportional to the ratio of the Coulomb potential to the electron kinetic energy, where e is the elementary charge and ε is the relative permittivity. This notable long-range form of the interaction causes an anomalous upward renormalization of the Fermi velocity v by a self-energy effect, akin to what has been commonly discussed in the relativistic Dirac and
Weyl theories [3,4]. Indeed, its influence in solids has been extensively analyzed for weakly interacting regimes at several levels of approximation, first within first-order perturbative expansions in α (≪ 1) [2] and later by renormalization-group (RG) calculations [2,4,8]; their broad consensus is that a logarithmic correction to v grows upon decreasing the energy scale E towards the crossing points (E = 0), driving a nonlinear reshaping of the Dirac cones, as reported in graphene [9] and more recently in an organic material, α-(BEDT-TTF)2I3 under pressure [10,11].

Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
For strong interaction (typically α > 1), a remarkable breakdown of the massless-fermion picture has been suggested at charge neutrality in honeycomb lattice [12][13][14][15][16][17][18][19][20][21][22][23][24][25] as well as in Weyl semimetals [26][27][28][29][30][31][32][33][34][35][36]; following various analyses within Monte Carlo, RG, and mean-field techniques, this breakdown is ascribed to an excitonic condensation of electron-hole (el-h) pairs by the attractive Coulomb interaction, involving occupied states in the conduction band and empty states in the valence band near the crossing points [3,4,[37][38][39][40].Notably, the transition lifts the degeneracy of the conical dispersion associated with a pseudospin-1/2 degree of freedom.The projection of this degree onto the momentum is known as chirality [4,41].Here, we focus on this pairing instability especially in two-dimensional (2D) nodal semimetals, and dub it as "chiral excitonic instability" to recall its link to the chirality (which is a solid-state equivalent of the chiral symmetry breaking in high-energy physics [41]).For the 2D Dirac cones in general, a pair of Dirac points (or valleys) appears in the first Brillouin zone at incommensurate wave number vectors ±k 0 , and is protected by space and time inversion symmetries [42,43].At charge neutrality, chiral excitonic instability can therefore develop around these points either within a cone (i.e., intravalley) or in a path connecting two different cones (intervalley).The electron spin is degenerate, and thus an applied in-plane magnetic field H causes Zeeman splitting of the cones, which shifts the spin-↑ cone from the spin-↓ cone in each valley and generates both electron and hole Fermi pockets.Aleiner et al. [44] have discussed the influence of this Zeeman effect on the instability for graphene based on a weak-coupling mean-field treatment of gap equations, and have highlighted the importance of interband el-h Fermi-surface nesting, involving spin-↑ electrons and spin-↓ holes in the different bands.In graphene, the cones are isotropic and vertical with their crossing points locating at the corners of the first Brillouin zone, which leads the pockets to have identical circular shapes in each valley.Consequently, perfect el-h nesting is realized for both intravalley pairings and intervalley pairings, allowing the instability to equally grow for both cases.
On the contrary, the situation can be quite distinctive when the cones are anisotropic and/or have a tilted axis.The quasi-2D electron system in the organic conductor α-(BEDT-TTF) 2 I 3 [where BEDT-TTF is bis(ethylenedithio)tetrathiafulvalene] provides a good example of this, where extensive studies have revealed the presence of a pair of 2D tilted Dirac cones under hydrostatic pressure [10,11,[45][46][47][48][49][50][51][52][53] that are charge neutral by 3/4 filling of the electronic band [54][55][56][57][58][59][60][61].In this system, the cones are isotropic but canted towards each valley [Fig.1(a)], which in an in-plane H lifts the spin degeneracy and generates elliptic Fermi pockets for the electron and hole bands near the crossing points (at ±k 0 ), where the electron pocket is relatively shifted from the hole pocket in opposite directions at the two valleys [Fig.1(b)].Then, there is poor interband el-h Fermisurface nesting between the spin-↑ and spin-↓ pockets for intravalley pairings, whereas perfect el-h nesting is realized for intervalley pairings [inset of Fig. 1(d)].Therefore, one would expect that chiral excitonic instability may selectively grow for the latter process due to better nesting; in fact, this point is supported by a recent 13 C nuclear magnetic resonance (NMR) study at low temperature (T ), which finds upon cooling (besides the cone reshaping [10,62]) an anomalous upturn in the spin-lattice relaxation rate 1/T 1 that is numerically accounted for by precursor spin-triplet (transverse) fluctuations growing prior to an intervalley excitonic condensate at charge neutrality [11] [characterized by a nesting vector Q = 2k 0 ; see Figs. 1(c) and 1(d)].
If nesting proves to be pivotal to the pairings, one would expect a large impact of H variations on the instability since an increased H enlarges the sizes of Fermi pockets in each valley, and may thereby provide a larger gain in condensation energy by enhanced nesting.By contrast, a small shift of the chemical potential μ off the neutrality point considerably reduces el-h symmetry, and may thus weaken the pairing instability by degraded el-h nesting.In particular, the latter influence seems to be significant in α-(BEDT-TTF) 2 I 3 under pressure because previous Hall measurements reported a sign change in low-T Hall coefficient in some samples [63], which is ascribed by a model calculation [64] to small el-h asymmetry and a tiny electron self-doping effect of a size of a few ppm of the conduction band.Note that thermopower experiments are in line with this suggestion [53,65].To construct a concrete theoretical picture within the excitonic framework, it is therefore very important to quantitatively assess the impacts of these H-and μ-variation effects on the intervalley paring instability, and check their influence on the precursor spin-transvers fluctuations seen by 1/T 1 .However, the pairing instability has not yet been studied as functions of these parameters for tilted cones, and the relation between the transport and 13 C-NMR data in pressurized α-(BEDT-TTF) 2 I 3 remains largely unclear.
Here, to better understand the pairing instability in 2D tilted cones, we extend our previous theoretical analyses at charge neutrality in Ref. [11], and specifically investigate the impacts of variations in μ and H on intervalley chiral excitonic condensate and its associated spin fluctuations.To this end, we use the tilted cones in pressurized α-(BEDT-TTF) 2 I 3 as our testing model, whereas the argument is not restricted to this material but can be extended to generic massless cones.We assume a continuum model for the massless Dirac fermions in this material, and consider the Coulomb interaction by focusing on a ladder vertex of transverse spin susceptibility and self-energy corrections in a RG approach (incorporating both the momentum and frequency dependence [66] of ).The band parameters are taken from those reported by the previous fits to the 13 C-Knight shift data [10], which rely on high-pressure parameters analyzed by Katayama et al. [57] based on a first-principle study of Kino et al. [54].We restrict our attention to the vicinity of the charge-neutrality point, and numerically evaluate the instability via studying a T -μ-H phase diagram using a weak-coupling mean-field theory.In relation to this diagram, we then calculate the corresponding precursor spin-transverse fluctuations and check their influence on the T dependence of 1/T 1 , as functions of μ and H.We discuss these results with additionally performed 13 C -1/T 1 measurements in pressurized α-(BEDT-TTF) 2 I 3 , which jointly reveals that the transport-suggested tiny μ shift in this system can sensitively suppress the excitonic spin fluctuations, in accord with the intervalley Fermi-surface nesting scenario.
The paper is organized as follows. In Sec. II, we review the tilted Weyl model of pressurized α-(BEDT-TTF)2I3, discuss the expressions for the transverse spin susceptibility (linked to 1/T1), and comment on the treatment of the interaction ladder vertex. The methods and approximations involved in quantifying realistic contributions from the Coulomb interaction to the self-energy within the RG approach, as well as in deriving a weak-coupling mean-field gap equation, are briefly described. Section III discusses numerical results together with supportive experimental data of 13C-1/T1 measurements in pressurized α-(BEDT-TTF)2I3. We provide a summary and discuss possible relevance to generic semimetals in Sec. IV. Details of calculations and supportive experiments are given in the Supplemental Material [67].
A. Continuum model for electrons in α-(BEDT-TTF)2I3
The noninteracting quasi-2D electrons in the pressurized organic conductor α-(BEDT-TTF)2I3, hosting valleys with a tilted dispersion relation, are described by an 8 × 8 matrix Hamiltonian (a generalized Weyl model called the tilted Weyl Hamiltonian) [10,11], in which the chemical potential μ and the electron Zeeman term are explicitly included, with an electron g factor of g = 2 [68]. The velocities w = (w_x, w_y) and v = (v_x, v_y) describe the tilt and the anisotropy of the Dirac cone, respectively. The total Hamiltonian including the long-range part of the Coulomb interaction is written in terms of an eight-component creation operator Ψ†_k = (c†_{k,ν,s,η}), the density operator ρ(q) = Σ_k Σ_{ν,s,η} c†_{k,ν,s,η} c_{k+q,ν,s,η}, and the Fourier transform of the Coulomb potential V_0(q) = 2πe²/ε|q|, where q and k = (k_x, k_y) are 2D wave-number vectors defined only around the band-crossing points at ±k_0. (We omit backscattering and Umklapp processes, such that both k and k + q are restricted to the vicinity of these degeneracy points; i.e., q ≪ 2k_0.) Here, the index η = 1/R (−1/L) stands for the valley at k_0 (−k_0), and s = 1/↑ (−1/↓) corresponds to the up (down) spin projection. The creation (annihilation) operator c†_{k,ν,s,η} (c_{k,ν,s,η}) is based on the Luttinger-Kohn (LK) representation [69], described using the Bloch functions at ±k_0 as the basis of the wave functions [56,70]. The index ν = 1/a (−1/b) then denotes the two bases in the LK representation, and we write ν̄ = −ν for the opposite basis index. The corresponding orbitals in this representation are given in Ref. [67]. The three matrices σ̂_i, τ̂_i, and ŝ_i stand for the Pauli matrices that represent the LK pseudospin 1/2, the valley pseudospin 1/2, and the real spin 1/2, respectively, with the indices taking one of the four possible values (i, j = 0, x, y, and z). The index 0 represents the unit matrix.
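As a schematic orientation only (a hedged sketch, not the exact expression of Refs. [10,11]), each valley- and spin-resolved 2 × 2 block of such a tilted Weyl Hamiltonian, written with the tilt velocity w, the anisotropy velocities v, the valley index η = ±1 and the spin projection s = ±1, takes roughly the form

```latex
% Hedged sketch of one valley- and spin-resolved 2x2 block of a tilted Weyl Hamiltonian;
% the precise sign conventions and full 8x8 structure follow Refs. [10,11,57] and Ref. [67].
\begin{equation}
  \hat{H}_{\eta,s}(\mathbf{k}) \simeq
    \hbar\,\eta\,(\mathbf{w}\cdot\mathbf{k})\,\hat{\sigma}_0
    + \hbar\left(v_x k_x \hat{\sigma}_x + \eta\, v_y k_y \hat{\sigma}_y\right)
    - \left(\mu + \tfrac{s}{2}\, g \mu_B H\right)\hat{\sigma}_0 .
\end{equation}
```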
Throughout this paper, we omit the T dependence of μ.This approximation is justified as the low-T Hall measurements and associated theories [63,64] find only a negligibly small variation of μ(T ), especially in the T range of our interest (i.e., T < 10 K).
B. Self-energy corrections
The self-energy-induced renormalization of the velocity by the long-range part of the Coulomb interaction is considered at the level of one-loop RG calculations to leading order in 1/N (N ≫ 1), as discussed previously [10,11], which is valid for both weak and strong Coulomb interaction; here N = 4 is the number of fermion species standing for two spin projections and two valleys [2,4,8]. Briefly, we start from a tilted dispersion relation and combine it with the one-loop RG flow equations for v = (v_x, v_y), Eq. (4) [10]. Here, k/|k| = (cos ϕ, sin ϕ) is measured from k_0, and l = ln(Λ/k) (with |k| = k) is a momentum scale measured in units of a momentum cutoff Λ = 0.667 Å⁻¹, of the size of an inverse lattice constant [71] and circular around the Dirac point. The bare coupling constant of the Coulomb interaction is given by α = e²/[4πε₀ε ħ (v_x² sin²ϕ + v_y² cos²ϕ)^{1/2}] [10], which is approximated as α ≈ e²/(4πε₀ε ħv_0) since the anisotropy is negligibly small in α-(BEDT-TTF)2I3 (v_x ≈ v_y ≡ v_0) [57]. Reflecting the RG flow, v grows logarithmically as a function of Λ/k, whereas w does not flow at the one-loop level. Notice that Eq. (4) includes the screening effect of the Coulomb interaction through polarization bubbles in the self-energy [wavy lines in Fig. S1(b) of the Supplemental Material [67]].
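To illustrate how a flow of this type produces the logarithmic velocity enhancement, the toy integration below uses the simplest graphene-like leading-order equation dv/dl = α(l) v(l)/4 with α = e²/(4πε₀ε ħv) and generic illustrative numbers; the flow actually used here additionally contains the tilt and the 1/N screening corrections of Eq. (4).

```python
import math

# Toy integration of the logarithmic Fermi-velocity renormalization.
# Assumed leading-order (graphene-like, unscreened) flow: dv/dl = alpha(l)*v(l)/4,
# with l = ln(Lambda/k).  The numbers are generic illustrations, not the fitted
# (eps, u) parameters quoted in the text.
e, eps0, hbar = 1.602e-19, 8.854e-12, 1.055e-34   # SI constants
eps, v0 = 1.0, 1.0e6                              # relative permittivity and bare cutoff velocity

def alpha(v):
    """Dimensionless Coulomb coupling alpha = e^2 / (4*pi*eps0*eps*hbar*v)."""
    return e**2 / (4.0 * math.pi * eps0 * eps * hbar * v)

v, dl = v0, 1e-3
for step in range(10001):                         # integrate the flow up to l = ln(Lambda/k) = 10
    if step % 2000 == 0:
        print(f"l = {step*dl:4.1f}   v = {v:9.3e} m/s   alpha = {alpha(v):4.2f}")
    v += 0.25 * alpha(v) * v * dl                 # Euler step of dv/dl = alpha*v/4
```

Because αv is constant at this order, v grows linearly in l, i.e., logarithmically in Λ/k, while α flows toward weak coupling at low energy, consistent with the behavior described above.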
To be specific, we will incorporate the RG flow of Eq. (4) obtained from fits to the 13C-Knight-shift data in pressurized α-(BEDT-TTF)2I3 by Hirata et al. [10], using an effective tight-binding (TB) model from Ref. [57] as a minimal starting point. To adjust the initial velocity values at the cutoff (|k| = Λ), a phenomenological parameter u is introduced as v = u v_TB and w = u w_TB, where v_TB and w_TB are the velocity values derived from the effective TB model [57]. Optimizing the two parameters ε and u by least-squares fits, we get (ε, u) ≈ (30, 0.35), which yields α = 12.6 at the high-energy cutoff (at |k|/Λ = 1) using v_0 = u v_TB (where v_TB = 2.4 × 10⁴ m s⁻¹ is the corresponding velocity in the gentle slope of the tilted cone) [57], and an effective coupling of the order of unity at low energy near the crossing points (at |k|/Λ ≪ 1). (For details of the RG flow, see the Supplemental Information of Ref. [11].) Notice that we have u < 1 from the fits, which signals a reduced electronic bandwidth, as has often been discussed in correlated electron materials and ascribed to the frequency dependence of the self-energy due to the short-range part of the Coulomb interaction [66].
The parameters obtained in this way thus incorporate both the logarithmic velocity flow and the bandwidth suppression.Throughout this study, we will use these as phenomenological but quantitative estimates of self-energy effects [i.e., the reshaped cones in Figs.1(c) and 1(d)].
C. Transverse excitonic spin fluctuations
We assess the influence of the Coulomb interaction on transverse excitonic spin fluctuations in a ladder-type approximation. In particular, we identify the impact of variations in both the chemical potential μ near the crossing points and the in-plane magnetic field H on the nuclear spin-lattice relaxation rate 1/T1, which is linked to a wave-number average of the imaginary part of the transverse dynamic spin susceptibility χ⊥(Q, ω), where Q is a wave-number vector and ω is a frequency in the megahertz region [72].
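Schematically, this link is of the standard Moriya type, with A(Q) a hyperfine form factor and ω₀ the (megahertz) NMR frequency (a hedged generic form, not the exact expression used in the calculations):

```latex
% Generic Moriya-type relation between 1/T1T and the transverse dynamic spin susceptibility.
\begin{equation}
  \frac{1}{T_1 T} \;\propto\; \lim_{\omega_0 \to 0}\,
  \sum_{\mathbf{Q}} \left|A(\mathbf{Q})\right|^2
  \frac{\mathrm{Im}\,\chi_{\perp}(\mathbf{Q},\omega_0)}{\omega_0}\, .
\end{equation}
```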
[The full expression of 1/T 1 is given in Eq. (S8) of the Supplemental Material [67].]To this end, it is sufficient to focus on intervalley excitonic instability for even-parity, spin-triplet (or spin-transverse) pairings as a first approximation, which is shown to give major contributions in α-(BEDT-TTF) 2 I 3 at charge neutrality [11].(See the Supplemental Material for the corresponding order parameter [67].)Hereafter, we will assume a pair of gapless points placed along the k x axis at k x = 0 and k x = 2k 0 for the sake of clarity.
We use a generalized expression for χ⊥(Q, ω) that involves the two bases of the LK representation (ν = a, b) and picks up the intervalley el-h excitation process (connecting η = R and L with a momentum transfer of 2ħk_0). The corresponding intervalley susceptibility is obtained by analytic continuation of its Matsubara representation, in which ε_n (ω_m) is the fermionic (bosonic) Matsubara frequency, M^{η,η̄}_{ν1ν2ν3ν4} is a constant form factor associated with the tilt of the valleys, Γ^ν_±(k; q, iω_m) are vertex contributions of a ladder type, and χ^ν_±(k; q, iω_m) are the irreducible susceptibilities of Eqs. (6) and (7), built from Ĝ_{s,η} = [G^{νν'}_{s,η}], a Green's function for 2D massless Dirac fermions with a tilted dispersion relation that has a matrix structure based on the LK representation. Note that the aforementioned self-energy effects are incorporated in Ĝ_{s,η}. The form factor M^{η,η̄}_{ν1ν2ν3ν4} and the Green's function Ĝ_{s,η} are defined in the Supplemental Material [67]. Using these irreducible susceptibilities in Eqs. (6) and (7), the vertex contributions are expressed by a Bethe-Salpeter-type equation, Eq. (8) [73-75]. The corresponding diagrams are given in Fig. S1 of the Supplemental Material [67], and we convert the analytic continuation of the susceptibility in Eq. (5) to the relaxation rate divided by T, 1/T1T, by taking a wave-number average of its imaginary part as in Eq. (S8) of the Supplemental Material [67].
At the level of the random phase approximation, the k dependence of the vertex Γ^ν_±(k; q, iω_m) in Eq. (8) can be separated from the dependence on q and ω_m, which leads to a linearized weak-coupling gap equation, Eq. (10), in which Δ_k is the gap and λ(q, iω_m) is the corresponding eigenvalue. We numerically evaluate this self-consistent equation at the mean-field level, and use λ(0, 0) ≡ λ to assess the excitonic instability, which favors a gap opening at the crossing points when λ = 1 is reached.
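As a generic illustration of the λ = 1 criterion (not the tilted-cone kernel of Eq. (10)), the sketch below evaluates the largest eigenvalue of a linearized BCS-like gap equation with a separable, constant attraction on an energy grid; the eigenvalue grows logarithmically on cooling and crosses unity at the mean-field transition.

```python
import numpy as np

# Toy linearized gap equation: Delta_k = sum_k' K(k, k'; T) Delta_k',
# with kernel K = V * tanh(xi_k'/2T) / (2 xi_k') and a constant attraction V.
# The instability criterion is that the largest kernel eigenvalue reaches lambda = 1.

def largest_eigenvalue(T, coupling=0.30, n=400, cutoff=1.0):
    xi = np.linspace(-cutoff, cutoff, n)                  # band energies around the Fermi level
    dxi = xi[1] - xi[0]
    chi = np.tanh(xi / (2.0 * T)) / (2.0 * xi)            # electron-hole pair susceptibility factor
    chi[np.abs(xi) < 1e-12] = 1.0 / (4.0 * T)             # analytic xi -> 0 limit (safety guard)
    kernel = coupling * np.outer(np.ones(n), chi) * dxi   # separable (rank-one) attraction
    return np.max(np.linalg.eigvals(kernel).real)

for T in [0.30, 0.10, 0.03, 0.01]:
    lam = largest_eigenvalue(T)
    print(f"T = {T:5.2f}   lambda = {lam:5.2f}   {'unstable (gap opens)' if lam >= 1 else 'stable'}")
```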
To be compatible with the RG flow in Sec.II B [cf. Eq. ( 4)], we use a low-energy effective value of α eff = 1 as our approximate constant Coulomb coupling in Eqs. ( 5)-( 10).This approximation is justified at low temperature (T < 10 K), where the RG flow of the coupling appears to be saturated and the coupling becomes of the order of unity in α-(BEDT-TTF) 2 I 3 (see Ref. [11]).
Notice that we will omit intravalley el-h pairings, as they barely contribute to the chiral excitonic instability even at charge neutrality. The reason is not only the poor intravalley el-h nesting in tilted valleys mentioned in Sec. I [cf. Fig. 1(b)] [11], but also the chiral property of massless Dirac fermions, which suppresses backscattering processes for the intravalley pairings but not for the intervalley pairings [76,77]. Constructing models relying solely on intervalley contributions thus provides a reliable starting point for discussing the chiral excitonic instability of 2D tilted Dirac cones.
[A relevant chirality factor is implicitly involved in Eqs. ( 6) and (7).] Similarly, spin-singlet (spin-longitudinal) instability will be neglected, which is shown to be suppressed upon increasing in-plane magnetic field at charge neutrality (see Ref. [11]).
III. RESULTS AND DISCUSSION
Chiral excitonic instability and its impact on the spin-lattice relaxation rate in pressurized α-(BEDT-TTF)2I3 with small charge off-neutrality (μ ≠ 0) in an in-plane magnetic field H is determined by two logarithmically reshaped massless Dirac cones for even-parity, spin-triplet (transverse), intervalley pairings. (Notice that this is a natural extension of Ref. [11], performed at μ = 0 and small H, to a more general case.) Within a weak-coupling mean-field treatment of the gap equation, we find that the contribution of the two cones to the intervalley excitonic response strongly depends on μ and H (Sec. III A). The way this contribution affects the precursor transverse spin fluctuations is also highly dependent on these parameters (Sec. III B). Additional experimental data on the 13C spin-lattice relaxation rate in pressurized α-(BEDT-TTF)2I3 show good qualitative agreement with this prediction, filling the gap between the 13C-NMR study at μ = 0 [11] and the transport results indicating μ ≠ 0 [53,63-65] in this material (Sec. III C).
A. Mean-field phase diagram for the intervalley response
To highlight the intervalley-nesting dependence in chiral excitonic instability, we first focus on a mean-field critical temperature T C where the gap starts to open [i.e., when λ = 1 is fulfilled in Eq. ( 10)], and consider its explicit dependence on chemical potential and in-plane magnetic field.This analysis of a mean-field phase diagram helps us to understand the impact of these parameters on the T dependence of the relaxation rate, discussed in Sec.III B.
Given that the transverse susceptibility in Eq. ( 5) and the corresponding gap equation in Eq. ( 10) pick up the intervalley el-h excitations involving spin-↑ electrons and spin-↓ holes, the shapes of the spin-split Fermi pockets induced by in-plane magnetic field must play an important role.
Figures 2(a) and 2(b) present schematic illustrations of the field-induced pockets around the two valleys (near ±k 0 ) at charge neutrality [Fig.2(a)] and off-neutrality [Fig.2(b)].The Fermi pockets for the spin-↑ electrons (filled lines) and the spin-↓ holes (dashed lines) are depicted.
Intervalley el-h excitations must occur between an electron pocket in one valley and a hole pocket in the other. At charge neutrality, there is perfect nesting in the intervalley pairing process, with a momentum transfer of ħQ = 2ħk_0 [indicated by an arrow in Fig. 2(a)]. With increasing field, the size of these pockets increases, which nevertheless keeps the perfect nesting condition intact. Consequently, the number of el-h pairs involved in nesting increases towards higher field. For the off-neutral case, by contrast, el-h asymmetry causes unequally sized electron and hole pockets [Fig. 2(b)], so that nesting is degraded compared to the neutral case, prohibiting el-h pair formation in this process.
These considerations should naturally affect the intervalley excitonic response since it is directly related to the number of el-h pairs involved in the corresponding excitation process.
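As a back-of-the-envelope illustration of this pair counting (a toy construction assuming circular pockets with k_F = |E_Z/2 ± μ|/ħv, rather than the actual tilted elliptic pockets), one can compare the sizes of the spin-↑ electron and spin-↓ hole pockets that the Q = 2k_0 transfer has to match:

```python
# Toy estimate of electron/hole pocket matching for Zeeman-split linear cones.
# Circular pockets with k_F = |E| / (hbar*v), E = E_Z/2 +/- mu, E_Z = g*mu_B*H, are assumed;
# the real pockets are elliptic and tilted, so this only illustrates the trend
# (perfect matching at mu = 0, degraded matching for mu != 0, improved with field).
kB, muB, hbar = 1.381e-23, 9.274e-24, 1.055e-34   # SI constants
g, v = 2.0, 1.0e5                                 # g factor; renormalized velocity (m/s), illustrative

def pockets(H, mu_kelvin):
    Ez = g * muB * H                      # Zeeman splitting (J)
    mu = kB * mu_kelvin                   # chemical potential (J)
    kFe = abs(Ez / 2 + mu) / (hbar * v)   # spin-up electron pocket
    kFh = abs(Ez / 2 - mu) / (hbar * v)   # spin-down hole pocket at the other valley
    mismatch = abs(kFe - kFh) / max(kFe + kFh, 1e-30)
    return kFe, kFh, mismatch

for H, mu in [(5.0, 0.0), (5.0, 4.0), (23.5, 4.0)]:
    kFe, kFh, m = pockets(H, mu)
    print(f"H = {H:5.1f} T, mu = {mu:3.1f} K: kF(e) = {kFe:.2e}, kF(h) = {kFh:.2e} 1/m, mismatch = {m:.2f}")
```

In this toy picture the mismatch vanishes at μ = 0 for any field and shrinks with increasing field at fixed doping, echoing the trends discussed below.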
Figures 2(c) and 2(d) show the T dependence of the mean-field eigenvalue λ in Eq. (10) for selected values of the in-plane magnetic field at charge neutrality (μ = 0 K) and with small electron doping (μ = 4 K), respectively, where we have also plotted a line corresponding to λ = 1 (dashed line). The crossing points of this line with the data define T_C. As shown in Fig. 2(c), excitonic condensation (corresponding to the region with λ > 1) is present at low temperature at charge neutrality, and this region extends to higher temperature upon raising the field. For off-neutrality, on the other hand, the eigenvalue is strongly suppressed, and there is no excitonic region in the present parameter range [Fig. 2(d)].
This situation is more clearly discernible when we look at the critical temperature as shown in Fig. 3, where we have plotted T C as a function of temperature, chemical potential, and in-plane magnetic field.For given doping, the excitonic region (colored region in Fig. 3) increases to higher temperature with raising the field, whereas at a fixed field T C decreases upon moving away from μ = 0 by doping.
This property of T C can be directly ascribed to Fermisurface nesting that is enhanced towards higher field and suppressed with doping.Indeed, at low field, the instability grows at μ = 0 only above a threshold field value (H c ≈ 1.7 T).This is because the low-field pockets are small, and 4. Temperature dependence of the nuclear spin-lattice relaxation rate divided by temperature 1/T 1 T for various doping and in-plane magnetic fields.Panel (a) shows the case with a fixed in-plane field (H = 5 T) and a range of small electron doping of μ = 0, 2, and 4 K. Panel (b) depicts the case with fixed small doping (μ = 4 K) and different fields of H = 0 and 5 T. Inset of panel (b) shows the field dependence at low temperature (T = 0.2 K) at charge neutrality (μ = 0 K) and off-neutrality (μ = 4 K).The physical parameters are the same as in Figs. 2 and 3.
accordingly the number of el-h pairs involved in nesting is greatly limited.Near μ = 0 at low field, moreover, the condensation is highly sensitive to doping since a small change of μ can drastically worsen nesting.By contrast, the excitonic state is present at largely doped regions for higher field.The reason is that the field-enlarged pockets weaken the relative influence of el-h asymmetry at the Fermi energy, which improves the nesting condition and thus allows an increased number of el-h pairs to participate in condensation.
We note the considerable sensitivity of this instability to even a trace of doping, especially near μ = 0. For instance, at 5 T the instability vanishes for μ ≳ 1 K, corresponding to electron doping of just a few ppm of the conduction band. This remarkably small doping level is in accord with the transport analysis in α-(BEDT-TTF)2I3, as discussed in Sec. III C.
B. Spin-lattice relaxation rate 1/T 1
The nuclear spin-lattice relaxation rate 1/T 1 allows us to study slow spin dynamics at the Fermi energy.In this section, we investigate how doping and in-plane magnetic field alter the nature of precursor excitonic spin fluctuations in α-(BEDT-TTF) 2 I 3 near charge neutrality, particularly in view of the above nesting condition in relation to the phase diagram.
For the charge-neutral Dirac cones, the low-energy excitations and fluctuations can be decoupled into two parts by means of NMR.The first part is intravalley contribution around each crossing point (at ±k 0 ), which is probed by the uniform component (Q = 0) of the electron spin susceptibility (or the Knight shift).The second part is intervalley contribution, which appears in the Q ≈ 2k 0 response of the local spin susceptibility.The relaxation rate 1/T 1 picks up a sum of these two contributions as it is proportional to a Q average of Imχ ⊥ (Q, ω) [cf.Eq. (S8) [67]].In α-(BEDT-TTF) 2 I 3 , Hirata et al. [10,11] revealed at charge neutrality that the Q = 0 part is almost exclusively suppressed upon cooling by the logarithmic velocity renormalization in Eq. ( 4) [cf. the reshaped cones in Figs.1(c) and 1(d)], such that 1/T 1 becomes considerably sensitive to the Q ≈ 2k 0 part at low temperature.This makes the relaxation rate appealing to the study of chiral excitonic fluctuations near charge neutrality, in particular for intervalley pairings.
Here, for later comparisons with supportive experiments (Sec.III C), we discuss two representative cases, one at a fixed field and a range of small doping [Fig.4(a)], and the other at fixed small doping and for different magnetic fields [Fig.4(b)].For visualization, the rate divided by temperature, 1/T 1 T , will be used.
The first case is presented in Fig. 4(a), where the temperature dependence of 1/T1T is shown for various small electron dopings (μ = 0, 2, and 4 K) at 5 T, which corresponds to the intervalley excitonic response of the eigenvalues studied in Figs. 2(c) and 2(d) (upward triangles). At charge neutrality (μ = 0 K), the curve shows a clear upturn with decreasing temperature due to intervalley (Q = 2k_0) spin fluctuations by the ladder vertex of Eq. (8), growing as a precursor to the condensation [11]. A small increase of μ rapidly suppresses the fluctuations, and μ = 4 K flattens 1/T1T at low temperature due to worsened intervalley nesting by doping, as discussed in Sec. III A [cf. Figs. 2(a) and 2(b)]. (Notice that the levelling-off of 1/T1T at low temperature (for μ = 4 K) is a universal characteristic of interband el-h excitations near charge neutrality, which appears when the chiral excitonic instability is absent, irrespective of the size of the small doping; see discussions in Ref. [11] for charge-neutral valleys, and Fig. S3 in Ref. [67] for off-neutral valleys.)

FIG. 5. Temperature dependence of 1/T1T in α-(BEDT-TTF)2I3 measured by 13C-NMR experiments in three samples (1-3) at 2.3 GPa for various in-plane magnetic fields. Panel (a) shows the data in sample 1 (2) at a magnetic field of H = 5.2 T (6 T) applied parallel to the crystalline ab plane (2 replotted from Ref. [11]). Panel (b) depicts the corresponding results in sample 3, measured at H = 8, 14.8, and 23.5 T (for experimental details, see Ref. [67]).
In Fig. 4(b), we present 1/T 1 T for the second case, fixed small doping (μ = 4 K) and different magnetic fields of H = 0 and 5 T, corresponding to the response described by the eigenvalue in Fig. 2(d) (squares and upward triangles).The precursor excitonic spin fluctuations do not grow due to large el-h asymmetry and poor intervalley nesting at this doping compared to the relatively small size of H [cf. Fig. 2(b)].This is in accord with the phase diagram (Fig. 3) where these parameter regions locate far away from the excitonic dome region.
The weak field dependence at off-neutrality contrasts with the neutral case, as depicted in the inset of Fig. 4(b), where we have plotted 1/T1T at low temperature (T = 0.2 K) against field for μ = 0 and 4 K. The relaxation rate shows a sizeable (moderate) increase with increasing field at charge neutrality (off-neutrality), as a direct consequence of the larger (weaker) elevation of the low-T eigenvalue towards higher field in Fig. 2(c) [Fig. 2(d)]. The larger field dependence of the relaxation rate for smaller doping agrees with the intervalley-nesting picture, because better nesting near charge neutrality provides a stronger instability and, therefore, larger precursor fluctuations as the pocket sizes are enlarged upon increasing the field.
C. Comparison with experiments in α-(BEDT-TTF) 2 I 3
To assess the qualitative validity of these calculations, we have additionally performed 13 C-NMR experiments in pressurized α-(BEDT-TTF) 2 I 3 and have measured the spin-lattice relaxation rate (experimental details are given in Ref. [67]).Figures 5(a) and 5(b) present the temperature dependence of 1/T 1 T in in-plane magnetic fields, measured for three representative samples (labeled 1-3).As shown in Fig. 5(a), 1/T 1 T in sample 1 (along with 2 replotted from Ref. [11]) decreases upon cooling and shows an abrupt upturn at low temperature (for H = 5.2 to 6 T).This can be well understood by the precursor intervalley (Q = 2k 0 ) spin fluctuations growing prior to chiral excitonic condensate at charge neutrality, as discussed in Sec.III B [cf. Fig. 4(a) for μ = 0 K].For sample 3 in Fig. 5(b), by contrast, 1/T 1 T exhibits no upturn but a levelling-off-like behavior at below 10 K.Moreover, we find this low-T flattening to have little field dependence in a range from H = 8 to 23.5 T. These features are in accord with the above-mentioned expectation for interband el-h excitations in 2D cones when the instability is suppressed by a small shift of chemical potential of a size of just a few Kelvin (submillielectronvolt) off the charge-neutrality point [cf.Fig. 4(b)].
The contrasting observations for samples 1 (2) and 3 in Fig. 5 draw a qualitative parallel with the results in Fig. 4, testifying the validity of our calculations based on laddertype approximation.They further lend support for the idea that intervalley chiral excitonic instability in this system is highly sensitive to a small variation of μ around the crossing points, such that the precursor fluctuations are absent in a relatively doped sample (3) but present in less-doped ones (1 and 2).Moreover, the remarkably small doping considered above quantitatively agrees with what has been suggested by transport experiments in out-of-plane magnetic fields [53,63,65,78] and a relevant calculation within a linearresponse theory [64].It also does not conflict with more recent magnetotransport studies in an in-plane magnetic field [51,79].
These results suggest that there is a possible underlying instability towards intervalley-excitonic ground state, which may be able to be observed in strong in-plane magnetic field at very low temperature if and only if the chemical potential is finely tuned to the crossing points.On the other hand, the fact that μ in this organic material is confined that close to the crossing points (in a submillielectronvolt range) is contrasted with monolayer graphene-which usually suffers from corrugated structures (called ripples) and potential inhomogeneity (known as charge puddles) [80] that make fine tuning of μ difficult near the crossing points-and recommends this system as an ideal testing ground for the study of chiral excitonic instability.A quantitative elaboration of the above results in Figs. 4 and 5 would be challenging and is delegated to future work, but may be achieved by considering frequency dependence of the self-energy beyond present phenomenological level (see Sec. II B) or dealing with omitted contributions from its imaginary part.Incorporating higher-order fluctuations in interaction vertex may be also helpful, which are at the moment neglected in Eq. ( 9).Apart from excitonic pairing, one notes that so-far neglected spin-orbit interaction may also prefer gap opening at very low temperature and turn the system into a topological insulator [81,82].Considering a subtle balance of these different mechanisms would be of particular interest, which may bridge the studies of correlated and topological materials at large.
IV. CONCLUSION
In this paper, we have investigated the excitonic instability of the continuum model for the pressurized organic conductor α-(BEDT-TTF)2I3, hosting two massless charge-neutral cones with a tilted dispersion relation. In particular, we have focused on the way a small charge off-neutrality and an in-plane magnetic field affect the intervalley pairing instability, and have analyzed how they impact the transverse spin dynamics relevant to the precursor excitonic fluctuations. Considering the Coulomb interaction within realistic self-energy schemes using a renormalization-group approach, we have calculated the transverse spin susceptibility based on a ladder-type approximation coupled with a weak-coupling gap equation. We have found that electron-hole pairings are suppressed by a tiny doping, due to degraded intervalley Fermi-surface nesting between the Zeeman-induced electron and hole pockets at the different cones, whereas they are stabilized by an in-plane magnetic field because of enhanced nesting. Combined with additionally performed 13C-NMR experiments under pressure, we have shown that these nesting conditions directly affect the precursor excitonic spin dynamics probed by the spin-lattice relaxation rate, such that the spin fluctuations are sensitively suppressed as intervalley nesting is worsened upon doping. The presence of this tiny doping is in quantitative agreement with the earlier transport predictions in this system [53,63-65]. All these results lend good support to the notion that chiral excitonic instability and its precursor fluctuations provide a decent framework for understanding excitations and dynamics near the crossing points of massless Dirac cones.
The characteristic instability of massless cones discussed here is directly linked to the chiral property of the Hamiltonian, which is ubiquitous in various Dirac-Weyl semimetals for any dimension, pseudospin, and symmetry [3,4,76]. Our framework for understanding excitonic pairings and the relevant precursor dynamics in tilted Dirac cones may thus offer a generic platform for understanding excitonic instability in widespread topological materials.
FIG. 1. Intervalley Fermi-surface nesting relevant to chiral excitonic instability in 2D tilted Dirac cones at charge neutrality. (a), (b) Spin splitting of tilted cones. Application of an in-plane magnetic field H removes the spin degeneracy and generates elliptic Fermi pockets for the spin-↑ electrons and the spin-↓ holes. (c), (d) The corresponding splitting of the tilted cones in α-(BEDT-TTF)2I3. The linear dispersion (outer transparent cones) and the logarithmically reshaped dispersion (inner colored cones) are highlighted. The self-energy effects by the Coulomb interaction are incorporated [11] (calculated by the RG approach using the bare coupling of α = 12.6; see Sec. II B). For clarity, the cones for the spin-↑ electrons (left; k_0) and the spin-↓ holes (right; −k_0) are selectively depicted in (d). Insets of (c) and (d): Fermi surfaces at H = 0 (c) and H ≠ 0 (d). The arrow represents perfect electron-hole (el-h) Fermi-surface nesting in the intervalley excitation process, with the nesting vector Q = 2k_0.
FIG. 2. Eigenvalues of intervalley chiral excitonic instability for 2D tilted Dirac cones at various dopings and in-plane magnetic fields. (a), (b) Illustrations of spin-split Fermi pockets in an in-plane magnetic field H for the spin-↑ electrons (solid) and the spin-↓ holes (dashed) at charge neutrality (a) and off-neutrality (b). Perfect interband (electron-hole) nesting in the intervalley excitation process is depicted by an arrow in (a), with the nesting vector Q = 2k_0. (c), (d) Temperature dependence of the mean-field eigenvalue λ for the even-parity, spin-triplet (transverse) instability in the intervalley pairing (Q = 2k_0). Calculated λ are shown for various in-plane magnetic fields at charge neutrality (μ = 0 K) (c) and off-neutrality (μ = 4 K) (d). The effective Coulomb coupling α_eff = 1 is used, and the self-energy corrections are considered (see Sec. II).
FIG. 3. Mean-field transition temperature T_c for the intervalley chiral excitonic condensate with even-parity and spin-triplet (transverse) pairings, plotted as functions of temperature T (in K), chemical potential μ (in K), and in-plane magnetic field H (in T). The physical parameters are the same as in Fig. 2. The arrow indicates the critical field H_c, and the label bar represents T_c. Dashed lines are guides to the eyes.
FIG. 4. Temperature dependence of the nuclear spin-lattice relaxation rate divided by temperature, 1/T_1T, for various dopings and in-plane magnetic fields. Panel (a) shows the case with a fixed in-plane field (H = 5 T) and a range of small electron doping of μ = 0, 2, and 4 K. Panel (b) depicts the case with fixed small doping (μ = 4 K) and different fields of H = 0 and 5 T. The inset of panel (b) shows the field dependence at low temperature (T = 0.2 K) at charge neutrality (μ = 0 K) and off-neutrality (μ = 4 K). The physical parameters are the same as in Figs. 2 and 3.
4 K flattens 1/T_1T at low temperature due to worsened intervalley nesting by doping, as discussed in Sec. III A [cf. Figs. 2(a) and 2(b)]. | 9,093 | 2020-09-24T00:00:00.000 | [
"Physics"
] |
Effect of Total Flavones from Cuscuta Chinensis on Anti-Abortion via the MAPK Signaling Pathway
For centuries, the Chinese herb Cuscuta chinensis has been applied clinically for abortion prevention in traditional Chinese medicine (TCM). Total flavones extracted from Cuscuta chinensis (TFCC) are one of the active components of the herb and display an anti-abortion effect similar to that of the unprocessed material. However, how TFCC exerts this anti-abortion effect remains largely unknown. In this study, we aim to characterize the anti-abortion effects of TFCC and their underlying molecular mechanism in vitro and in vivo using human primary decidua cells and a mifepristone-induced abortion model in rats, respectively. The damage to the decidua caused by mifepristone in vivo was reversed by TFCC treatment in a dosage-dependent manner. A high dosage of TFCC significantly upregulated the expression of estrogen receptor (ER), progesterone receptor (PR), and prolactin receptor (PRLR) in decidua tissue but downregulated the expression of p-ERK. Furthermore, we detected higher levels of p-ERK and p-p38 in primary decidua cells from spontaneous abortions, while treatment with TFCC downregulated their expression. Our results suggest that TFCC mediates its anti-abortion effect by interfering with the MAPK signaling pathway.
Introduction
Cuscutae Semen (Tu-Si-Zi) is the dried seed of Cuscuta chinensis Lam. or Cuscuta australis R. Br. [1]. It has demonstrated therapeutic effects in various diseases, including neural disease [2], inflammation [3,4], and cancer [5,6]. Studies also showed that Cuscutae Semen has an anti-aging effect [7] and ameliorates osteoporosis [8]. Furthermore, dating back as early as 2000 years ago, it has been described as one of the best Chinese medicinal plants recorded in ShenNong's Herbal and has been employed to treat spontaneous abortion by TCM practitioners due to its regulatory effect on ovulation and hormone regulation [9,10]. A cohort study based on the Taiwan population revealed that 96.17% (8430/8766) of infertile women had sought TCM treatment, and the most commonly prescribed herb was Cuscutae Semen [11].
In spite of its wide usage in the Chinese population, C. chinensis is often prescribed and processed together with other herbal ingredients, and this becomes a barrier to identifying the effective components in the herb. Many studies have been devoted to isolating the active components of C. chinensis in order to expand its usage. The chemical components of C. chinensis consist mainly of flavonoids, steroids, volatile constituents, lignans, alkaloids, and polysaccharides. The flavonoids account for about 3% of the total chemical components [12] and are the major active ingredients in C. chinensis. Depending on the species, hosts, and processing protocols, the exact composition of TFCC might differ, but hyperoside, rutin, quercitrin, and quercetin remain the staple bioactive components of C. chinensis. Additionally, early studies found that TFCC regulates the endocrine-immune network at the maternal-fetal interface and thereby reduces the abortion rate in the bromocriptine-stimulated abortion model in rats [13,14]. Yet the exact biological function and molecular mechanism of TFCC at the maternal-fetal interface in human first-trimester pregnancy await further exploration. Mifepristone (RU486) is a progesterone antagonist widely used for medical abortion; thus, RU486 can be used to induce an abortion model. As one of the symptoms of abortion, uterine bleeding can also be observed in RU486-induced abortion mice, and the volume of uterine bleeding is closely related to the Th1/Th2/Th17/Treg paradigm, which can be regulated by RU486 [15]. Recent studies revealed that RU486 can upregulate Monocyte Chemotactic Protein-3 (MCP3) during the implantation period, causing immune abortion [16].
The MAPK signaling pathway is involved in a series of physiological and pathological processes, including cell growth, development, differentiation, and apoptosis, all of which are essential to the invasion and proliferation of decidua stromal cells (DSCs) and trophoblast cells. Recent studies show that the MAPK/p38 pathway, activated by upregulation of nucleotide-binding oligomerization domain containing 1 (NOD1) and nucleotide-binding oligomerization domain containing 2 (NOD2), inhibits the invasion of trophoblast cells [17]. Meanwhile, upregulation of S100P also activates the MAPK/p38 pathway, resulting in enhanced trophoblast-like cell proliferation [18]. Moreover, the NF-κB and ERK1/2 signaling pathways activated by IL-33 promote proliferation and invasion of DSCs [19]. Additionally, IL-25 activates the JNK and AKT signaling pathways, also leading to proliferation of DSCs [20].
Given the proliferative effect of MAPK on DSCs, we decided to explore the role of the MAPK signaling pathway in the abortion prevention induced by TFCC. A previous study indicated that TFCC enhances the proliferation of first-trimester human trophoblast cells via the ERK1/2 signaling pathway [21]. However, the exact role of the MAPK signaling pathway in DSCs during abortion prevention by TFCC remains unclear. In this study, we investigated key molecules involved in the MAPK signaling pathway in patients with spontaneous abortion and compared them with the DSCs of patients with normal pregnancy. We hypothesize that the MAPK signaling pathway may be among the causes of abortion and decidua dysfunction, and that TFCC may prevent abortion by inhibiting the MAPK signaling pathway.
Materials and Methods
Animal Study. The protocols for the animal studies were approved by the institutional ethics committee, and the studies were conducted at the Guangdong Medical Lab Animal Center (No. B201411-6). Female Sprague-Dawley rats (269.55 ± 22.35 g) and male Sprague-Dawley rats (428.15 ± 22.55 g) were purchased from the Guangdong Medical Lab Animal Center.
After one week of observation, female and male rats in mating trials were housed in groups at a ratio of 2:1 for 12 hours. Vaginal smears were taken every morning after mating. The observation of sperm and a vaginal plug was considered the first day of pregnancy. Pregnant rats were randomized into five groups (10 rats per group): control group (Normal, Nor), model group (Model, Mod), dydrogesterone group (Positive, Pos), TFCC low-dose group (TFCC1), and TFCC high-dose group (TFCC2).
The pregnant rats in the Nor group were treated with distilled water by gavage. The TFCC1 and TFCC2 groups were treated with TFCC by gavage daily at doses of 9.45 mg/ml and 18.9 mg/ml, respectively, for ten days. The Pos group received 0.604 mg/ml dydrogesterone (Abbott Biologicals B.V.) for an equivalent duration, 5 ml/kg once a day. On day 10 of pregnancy, rats in the Pos group were treated with RU486 at a dose of 45 mg/kg (Hubei Gedian Humanwell Pharmaceutical) by gavage with a gastric volume of 5 ml/kg. The Nor group was given an equivalent volume of normal saline (NS) in the same manner. SD rats were sacrificed under anesthesia 24 hours after administration of RU486. The third embryo from the left uterine horn of each pregnant rat was observed under the stereomicroscope.
Cells, Reagents, and Antibodies. Isolation, culture, and identification of primary decidual cells were performed according to protocols published in previous studies [22].

Herbal Extract Preparation. Total flavones were extracted from the dry seed of C. chinensis Lam. The quality control of TFCC was assessed by HPLC with a Thermo Ultimate 3000 system with a Thermo Ultimate 3000 VWD-3x00 detector and an Ecosil C18 column (4.6 mm × 250 mm, 5 μm, Lubex, Japan).
The sample for animal and cell culture experiments was prepared by extracting the powder with eight volumes of 95% alcohol twice, for 1 h each. The combined extracts (TFCC) were concentrated in vacuum and dissolved in pure water (19.45 mg/ml for TFCC1; 18.9 mg/ml for TFCC2) or in DMSO (10 mg/ml).
The mobile phase consisted of A (acetonitrile) and B (50 mM KH2PO4–H3PO4, pH 3.0) with gradient elution at a flow rate of 1.0 ml/min. The detector wavelength was set at 220 nm. The gradient program (A/B, v/v) was as follows: 0-65 min, 10% → 28% A; 65-85 min, 28% → 52% A. The column temperature was set to 30 °C. The HPLC chromatogram is shown in Figure 4. The contents of rutin, hyperoside, quercitrin, and quercetin in the TFCC ethanol extracts were determined.

Analysis of Serum Sex Hormones. SD rats were euthanized under anesthesia 24 hours after administration of RU486. Abdominal aorta blood was extracted and centrifuged at 3000 rpm for 15 minutes. The sera were used for ELISA to detect FSH, LH, E2, P, and PRL according to the manufacturer's instructions (Bogoo Company Ltd., Shanghai, China).
Quantitative PCR. Complementary DNA was synthesized from total RNA using the RT reagent Kit with gDNA Eraser (Takara, DRR047A), followed by qPCR using SYBR Premix Ex Taq (Tli RNaseH Plus) (Takara, DRR820A) on a Bio-Rad IQ5 Real-Time PCR System (Bio-Rad, USA). Primer sequences are shown in Table 1. The reaction setup was as follows: 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s. β-Actin was used as an internal control to normalize the variability in expression levels.
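The normalization formula is not spelled out in the text; assuming the common 2^-ΔΔCt method (and hypothetical column names), a minimal Python sketch of the calculation could look like the following.

import numpy as np
import pandas as pd

def relative_expression(df, gene, reference="actin", control_group="Nor"):
    """Compute 2^-ddCt relative expression for one target gene.

    df is assumed to hold one row per sample with columns
    'group', f'ct_{gene}', and f'ct_{reference}' (hypothetical names).
    """
    # dCt: target Ct minus reference-gene Ct for every sample
    dct = df[f"ct_{gene}"] - df[f"ct_{reference}"]
    # ddCt: subtract the mean dCt of the control (Nor) group
    ddct = dct - dct[df["group"] == control_group].mean()
    # Fold change relative to the control group
    return 2.0 ** (-ddct)

# Example usage with made-up Ct values
data = pd.DataFrame({
    "group":    ["Nor", "Nor", "Mod", "Mod", "TFCC2", "TFCC2"],
    "ct_ER":    [22.1, 22.4, 25.0, 24.7, 22.9, 23.1],
    "ct_actin": [16.0, 16.2, 16.1, 16.0, 16.1, 16.2],
})
print(relative_expression(data, "ER").groupby(data["group"]).mean())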
Histopathology. After fixation of the samples in 4% paraformaldehyde for 48 h, the decidual tissues were dehydrated with ethanol and xylene, embedded in paraffin according to standard procedures, and cut into serial sections of 4 μm thickness. After deparaffinization, the slides were stained with hematoxylin and eosin for routine histological examination. Images were captured by a digital video camera mounted on a light microscope.
Immunohistochemistry. The paraffin blocks of decidual tissue were cut into sections of 4 μm thickness and deparaffinized in xylene. After antigen retrieval for 15 minutes by microwave, the samples were incubated with primary antibody at 1:1000 at 4 °C overnight. The slides were rinsed with PBS three times the next day and incubated with the secondary antibody Goat Anti-Rabbit IgG (HRP) (ab136817, 1:1000) at 37 °C for 30 min according to the manufacturer's instructions. The slides were treated with DAB substrate for color development and then counterstained with hematoxylin. The stained slides were then dehydrated and mounted for observation.
Western Blotting. The samples were mixed with 5× loading buffer and heated for 5 min for protein denaturation. The samples were then stored at -20 °C after centrifugation until the next step. Samples were separated by SDS-PAGE at 70 V for 30 minutes, after which the voltage was adjusted to 120 V for 1 h. Afterwards, the proteins were transferred to a PVDF membrane at 220 mA for 1 h. The membranes were blocked with 5% BSA or nonfat dried milk diluted in TBS containing 0.1% Tween 20 for 1 h at room temperature (RT) and then rinsed with TBS-T three times. The membranes were incubated with appropriately diluted primary antibodies at 4 °C overnight and then probed with HRP-conjugated secondary antibodies for 1 hour at RT. Immunoreactive bands were detected by enhanced chemiluminescence with Clarity Western ECL Substrate (Bio-Rad; #170-5060). The intensities of the signals were quantified by densitometry using ImageJ software according to the manufacturer's instructions.
Cell Viability Test. Cell viability was determined by MTT assay. The decidual cells were seeded in 96-well plates at 1×10^5 cells/ml and cultured for 48 h. After being treated with the alcohol extract of TFCC at concentrations of 100 μg/ml, 10 μg/ml, 1 μg/ml, and 0.1 μg/ml, the cells were cultured for 24 h. The control group was treated with seeding medium (0.1% DMSO in DMEM/F12) (5 wells in each group, 100 μl per well).
Cells were incubated at 37 °C in 5 mg/ml MTT solution for 3.5 h. After removal of the MTT solution, 100 μl of dimethyl sulfoxide was added, and the absorbance at a wavelength of 490 nm was recorded with a microtiter-plate reader. Cell viability was normalized to the control culture. The experiment was repeated three times, and results are presented as the mean.
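The text states only that viability was normalized to the control culture; a minimal sketch of that normalization, assuming hypothetical OD490 readings, is shown below.

import numpy as np

def percent_viability(od_treated, od_control):
    """Normalize MTT absorbance (OD490) of treated wells to the control mean.

    od_treated: array of replicate readings for one TFCC concentration.
    od_control: array of replicate readings for the 0.1% DMSO control.
    Returns the mean viability in percent.
    """
    return 100.0 * np.mean(od_treated) / np.mean(od_control)

# Hypothetical readings for the four tested concentrations (ug/ml)
control = np.array([0.82, 0.80, 0.85, 0.81, 0.83])
doses = {0.1: [0.81, 0.84, 0.80], 1: [0.79, 0.82, 0.83],
         10: [0.80, 0.78, 0.82], 100: [0.77, 0.80, 0.79]}
for conc, ods in doses.items():
    print(f"{conc} ug/ml: {percent_viability(np.array(ods), control):.1f}% viability")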
Statistical Analysis. Statistical analyses were performed with Student's t-test and one-way ANOVA using SPSS 13.0 (SPSS Inc., Chicago, IL), and data are presented as mean ± standard deviation. p<0.05 was considered statistically significant.
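The analyses were run in SPSS 13.0; an equivalent check in Python with scipy (using made-up group values purely for illustration) would look roughly like this.

from scipy import stats

# Hypothetical per-group measurements (illustrative values only)
nor  = [1.10, 1.05, 1.12, 1.08, 1.11]
mod  = [0.72, 0.68, 0.75, 0.70, 0.69]
tfcc = [0.95, 0.90, 0.98, 0.93, 0.96]

# Student's t-test between two groups (e.g., Mod vs. TFCC)
t_stat, p_two = stats.ttest_ind(mod, tfcc)
print(f"t = {t_stat:.3f}, p = {p_two:.4f}")

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(nor, mod, tfcc)
print(f"F = {f_stat:.3f}, p = {p_anova:.4f}")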
Results

TFCC Reduces Abortion Rate in the Mifepristone-Induced Rat Model. In order to better assess the anti-abortion effect of TFCC in the abortion rat model, we first examined the uterus of model rats by visual inspection. The affected uterus had a bamboo-like shape covered with pale white membranous tissue along with internal blood stasis (Figure 1(a)).
Compared to the Nor group and the treatment groups, both the uterine volume and the number of embryos were significantly lower in the Mod group. Apart from a small amount of blood stasis observed in the TFCC1 group, there were no significant differences between the Pos, TFCC1, and TFCC2 groups. The third embryo from the left-side uterus of each rat was observed under a stereomicroscope (OLYMPUS Corporation, Japan) (Figure 1(b)). Embryo morphology was shriveled in the Mod group, with surface folds significantly fewer than in the other groups. Traces of absorbed blastocysts along with indistinct vascular proliferation could be observed in the Mod group. Embryo morphology and vascular proliferation in TFCC1 and TFCC2 were significantly better than in the Mod group.
The rats in the Mod group showed slower locomotion, depression, and loss of hair luster compared to the other groups. No significant differences in weight gain between day 4 and day 9 were found among all groups (p>0.05) (Figure 1(c)). Compared with the Mod group, TFCC substantially reduced the abortion rate in rats (p<0.05 for TFCC1; p<0.01 for TFCC2) (Figure 1(d)) and increased the diameter of embryos (p<0.05 for TFCC2; p<0.01 for TFCC1) (Figure 1(e)) and the left uterus coefficient (p<0.05) (Figure 1(f)). The weight of left embryos was elevated in TFCC1 (p>0.05) (Figure 1(g)). Significant differences in the left uterus coefficient could be observed among all groups (p<0.05) (Figure 1(h)).
TFCC Reduces Decidual Cell Damage and Promotes Serum Gonadal Hormone Level and mRNA Receptor Expression.
To further study the pathological changes in the uterus after TFCC treatment, the tissues were processed for H&E staining (Figure 2(a)) before being observed under a light microscope. Results showed that various types of lesions, including edema and degeneration of decidua cells, separation between gland and interstitial tissue, enlarged sponge glands, and local fibrinoid necrosis, were observed in the Mod group, and their severity was significantly alleviated in the Pos, TFCC1, and TFCC2 groups.
Next, we investigated the serum gonadal hormone levels in the test animals (Figure 2(b)). Serum gonadal hormone levels in the Mod group were substantially lower than those in the Nor group (p<0.05 for estradiol, p<0.01 for PROG, p<0.001 for PRL and FSH). Administration of either dydrogesterone or TFCC promoted the secretion of serum gonadal hormones, particularly FSH in the TFCC2 group (p<0.05). Total RNA was extracted from the uterine decidua tissue, and gene expression was quantified by quantitative RT-PCR. The mRNA expression of ER, PR, PRLR, and FSHR was significantly lower in the Mod group than in the Nor group (p<0.001). However, administration of low-dose TFCC increased the expression of ER (p<0.05), PR (p<0.05), PRLR (p<0.05), and FSHR (p<0.001), and high-dose TFCC had a better reversal effect (Figure 2(c)).
TFCC Enhances the Expression of Gonadal Hormone Receptor Proteins. To trace differential protein expression related to gonadal hormone receptors, decidua cells were harvested from the test animals and fixed with 4% neutral paraformaldehyde. Protein expression in decidua cells was examined by immunocytochemistry. Under stereomicroscope observation, positive staining of PR and p-ERK was observed in the nuclei of decidua stromal cells. The expression of both PR (Figure 3(a)) and p-ERK (Figure 3(b)) in decidua tissue was significantly reduced in the Mod group compared to the Nor group (p<0.05). When either dydrogesterone or TFCC was administered to the animals, the protein expression was rescued and returned to a higher level. The trend and degree of change in protein expression were comparable between the TFCC and dydrogesterone groups.
We further confirmed the observation from ICC by performing Western blotting on total protein extracted from decidua cells. The analysis showed that the protein expression of ER, PR, PRLR, and FSHR in the Mod group was significantly lower than in the other treatment groups (p<0.01). The expression of ER was already upregulated in TFCC1 (p<0.05), whereas PR, PRLR, and FSHR were not. High-dose TFCC (TFCC2) significantly upregulated the expression of PR (p<0.01), PRLR (p<0.001), and FSHR (p<0.001) (Figure 3(c)).
TFCC Has Little Cytotoxicity on Decidua Cells at the Tested Concentrations (up to 100 μg/ml).
To better assess the potential toxicity that TFCC may have on decidua cells, we first employed HPLC to identify the active components in TFCC. HPLC analysis revealed that the major components in TFCC include rutin, hyperoside, quercitrin, and quercetin. Next, we cultured primary decidua cells until the third generation and then treated the cells with different concentrations of TFCC (100 μg/ml, 10 μg/ml, 1 μg/ml, and 0.1 μg/ml) for 24 h. Cell viability was determined by MTT assay. At concentrations of 0.1 μg/ml, 1 μg/ml, 10 μg/ml, and 100 μg/ml of TFCC, no statistically significant change in the viability of decidua cells was detected.
TFCC Downregulates p-ERK and p-p38 Expression in Primary SA Decidua Cells. To demonstrate the clinical relevance of TFCC in human patients, we collected human decidua cells from patients with spontaneous abortion (SA) and from those who willingly terminated pregnancy in the first trimester. Western blot results showed that the phosphorylation of ERK and p38 was elevated in decidua cells collected from spontaneous abortions (p<0.001), confirming the data from the animal model, in which phosphorylated ERK and p38 were enhanced in decidua tissue. After administration of TFCC, the phosphorylation level of ERK in decidua cells decreased in a dose-dependent manner. The reduction was most evident at a TFCC concentration of 10 μg/ml (p<0.05) (Figure 5(b)). A similar trend was observed for p-p38: when the TFCC concentration was elevated, the level of p-p38 decreased. The reduction was most significant at 100 μg/ml of TFCC (p<0.01) (Figure 5(c)). The expression of the hormone receptors was also increased significantly by TFCC in a dose-dependent manner (Figure 6).
Discussion
Spontaneous abortion is a common reproductive disease caused by dysfunction of decidua cells at the maternal-fetal interface. It affects approximately 15%-25% of gravidae and can lead to severe psychological stress in patients. In this study, we revealed the potential therapeutic effect of TFCC in preventing abortion and found that TFCC exerts its anti-abortion effect through interfering with the MAPK signaling pathway in DSCs. Currently, the main treatments for abortion are still hormone replacement therapy and immunotherapy. However, the safety of hormone usage during pregnancy remains controversial, and both the efficacy and long-term safety of immunotherapy remain unclear. On the other hand, TCM has been employed to treat abortion for thousands of years. In spite of the lack of quantitative evidence, many components have been shown to be clinically effective in a variety of diseases with unique advantages [1,[23][24][25].
In order to better study the pharmacodynamics of TFCC, a mifepristone (RU486)-induced abortion rat model was introduced in this study. RU486, a derivative of norethindrone, was discovered by French researchers in 1982. Because of the structural similarity between RU486 and progesterone, it competes effectively with progesterone for progesterone receptor binding and thereby causes degeneration and necrosis of decidua and villus tissue, leading to subsequent embryonic death. RU486 was approved for medical abortion use by the U.S. Food and Drug Administration in 2000. A recent study indicated that, even at a low concentration (0.5 μM) in vitro, mifepristone may act as a contraceptive agent given its inhibitory effect on the embryo implantation process during the receptive period [26]. In this study, we observed a universal decrease of not only the levels of serum sex hormones (E2, P, PRL, FSH) but also the expression of ER,
PR, PRLR, and FSHR in Mod group decidua tissue compared with the Nor group. The decrease in reproductive hormone levels and their respective receptors is accompanied by a higher abortion rate in the Mod group. In summary, mifepristone serves as a successful in vivo model for abortion studies with physiological relevance. C. chinensis is an herb in TCM well known for its effect against reproductive system diseases, especially abortion caused by kidney deficiency. Despite the fact that C. chinensis has been applied clinically by TCM practitioners, little has been studied about its biochemical composition as well as the molecular mechanism underlying its anti-abortion effect. Without evidence from systematic and quantitative studies, it is difficult for C. chinensis to benefit a larger patient population. One of the best studied components of C. chinensis is TFCC. A previous study using a hydrocortisone-induced abortion model in rats showed that TFCC can reverse the expression of testosterone and androgen receptor [27]. Another study showed that TFCC promotes ER expression in the hippocampus, hypothalamus, and pituitaries of psychologically stressed rats as well as LHR expression in the ovaries [9], suggesting a potential role of TFCC in regulating reproductive endocrine function. Hormonal imbalance is one of the leading causes of abortion, and restoring the balance of hormones and their respective receptors can help to prevent spontaneous abortion. A pilot study using a bromocriptine-induced abortion model in rats already showed [13] that TFCC significantly reduced the abortion rate by improving the blood supply in the placenta, promoting the expression of PR in the decidua, and increasing the levels of P as well as PRL. However, the exact role of the MAPK pathway in the prevention and treatment of abortion by TFCC remained unexplored. In our study, we show that TFCC not only improves embryo quality and decreases the abortion rate in mifepristone-treated rats, but also reverses the changes in hormone levels in model rats in a dose-dependent manner. Meanwhile, TFCC promotes the expression of PR and PRLR to reduce the decidua damage caused by the competitive binding of mifepristone to PR. This compensatory mechanism functions as the key by which TFCC prevents and treats abortion. Furthermore, we found that the expression of p-ERK was higher in the Mod group than in the Nor group. This finding resonates with the result that the expression of p-ERK and p-p38 is elevated in primary decidua cells from spontaneous abortion compared with those from normal pregnancy. Administration of TFCC in vivo and in vitro not only reversed the pathological state but also dampened the protein expression of p-ERK and p-p38, indicating that the MAPK signaling pathway plays a central role in spontaneous abortion and that TFCC can effectively intervene in this pathway. Abnormalities in the MAPK signaling pathway have been related to multiple diseases including cancer [28], aortic valve disease [27], and endocrine diseases [29]. In the reproductive system, the MAPK pathway also plays an important role in the maintenance of normal pregnancy at the maternal-fetal interface. One example is the regulation of growth and differentiation of the placenta by the ERK/MAPK pathway [30]. Several studies also showed that the ERK1/2, JNK, and AKT pathways can promote the proliferation and invasion of DSCs [19,20]. However, in our study, both the animal model and primary decidua stromal cells showed concurrent upregulation of p-ERK and p-p38 with abortion, suggesting their potential roles in spontaneous abortion.
A recent study indicated that hyperoside, one of the major components in TFCC, inhibited the phosphorylation of p65/NF-κB and MAPK (including p38, JNK, and ERK1/2) in mice with high-carbohydrate/high-fat diet and alloxan-induced diabetes [31]. However, hyperoside has also been reported to significantly increase the phosphorylation of p38 mitogen-activated protein kinase (MAPK) and c-Jun N-terminal kinase (JNK) in A549 human non-small-cell lung cancer cells [32]. Therefore, hyperoside may play dual roles in regulating the MAPK signaling pathway, which should be kept in mind when considering TFCC for abortion prevention. In this study, our results demonstrated that TFCC downregulates p-ERK and p-p38 in a dose-dependent manner with a concurrent reduction in abortion rate, indicating that TFCC reverses the pathological process of the decidua to suppress abortion by inhibiting the MAPK signaling pathway.
Conclusion
Our study proved that TFCC administered in vivo effectively reduced mifepristone-induced abortion and also had little cytotoxic effect on human primary decidua cells. Identifying the MAPK signaling pathway as a target of TFCC allows
Data Availability
The data and materials are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors have declared that no conflicts of interest exist.
| 5,542.6 | 2018-10-02T00:00:00.000 | [
"Biology",
"Chemistry"
] |
The geometric contents and the values of local batik in Indonesia
Culture is closely associated with everyday life, including in the field of education. However, there needs to be awareness among school members of absorbing the culture around them through learning, especially mathematics. Therefore, this study aims to explore the geometric contents, in the form of geometric transformations, and the values of local Indonesian batik. This study was a qualitative study with an ethnographic approach. The techniques used were observation, literature review, and interviews with a Lampung cultural expert and batik craftsmen. Observations were made by observing batik-making activities and Lampung batik motifs, and interviews were conducted to find the mathematical and philosophical elements contained in them. The data were then completed and checked for correctness based on the results of the literature review. The outcomes of this study imply that the people of Lampung utilize the concept of geometric transformation in making batik motifs, including the siger motif, pohon hayat motif, and kapal motif. The geometric transformations used are reflection, dilation, and translation. Thus, this research can be a reference for learning mathematics and for exploring other Lampung batik motifs.
Introduction
Indonesia is an archipelagic country that is rich in culture. In addition to its large quantity, Indonesia's diverse culture requires its people to continue to preserve and maintain its authenticity from time to time. One thing that can be done to preserve culture is growing awareness and a sense of belonging and love for one's own culture, especially among the nation's next generation (Nahak, 2019). Moreover, it is undeniable that culture is closely related to everyday life, including in the field of education, especially in the field of mathematics. In this case, the social characteristics and cultural context of the student's environment can improve students' ability to build their knowledge of mathematics (Sharma & Orey, 2017).
Ethnomathematics is a program that uses cultural media to explore mathematical phenomena, which are then directed to the pedagogical realm (Choirudin et al., 2020). Ethnomathematics makes a significant contribution to the preservation of the nation's culture through education. Furthermore, when mathematics learning applies ethnomathematics as part of the activities carried out at school, students better understand the benefits and uses of mathematics in everyday life, which increases learning motivation, student activity during mathematics learning, and student learning outcomes, especially in mathematics subjects (Kencanawaty et al., 2020). The implementation of ethnomathematics can change the paradigm of children and society, showing that mathematics has a relationship with daily activities and with culture and can be learned in a fun way (Risdiyanti & Prahmana, 2018).
Observation, comparison, classification, evaluation, quantification, measurement, computation, representation, and inference are used in each location and culture to understand knowledge about that culture. Ethnomathematics is the name given to the various knowledge systems that have emerged from such cultural discoveries of mathematical concepts (Prahmana & D'Ambrosio, 2020). Revealing the ideas contained in specific cultural activities or particular social groups in order to develop a mathematics curriculum for, with, and by these groups can be done through ethnomathematical research, so that mathematics can take different forms and develop according to the development of the user community (Prabawati, 2016). Ethnomathematics can also be used to bridge formal mathematical concepts with the cultural diversity that exists in society (Nur et al., 2021).
Academics have widely carried out research on ethnomathematics, including a review of ethnomathematics research in Indonesia during 2015-2020 (Hidayati & Prahmana, 2022). The results of this study stated that in 2020 the number of articles published on ethnomathematics decreased significantly. In Lampung, there is an exploration of the ethnomathematics of Lampung tapis as a learning resource to protect cultural heritage (Dewi et al., 2019). In addition, mathematical concepts in tapis Lampung were studied by Susiana and Noer (2020). These studies investigate the philosophy contained in tapis motifs and their mathematical concepts to develop ethnomathematical approaches to mathematics learning. Loviana et al. (2020) examined ethnomathematics in tapis fabrics and Lampung traditional houses; according to the study's findings, tapis and traditional Lampung houses have utilized mathematical concepts. Riswati et al. (2021) identified alam gemisegh as part of Lampung's mathematical and cultural wealth; according to the findings of this study, ethnomathematics activities can be found in the alam gemisegh. Next, Cahyaningati and Diana (2022) explored the ethnomathematics of arul games in Lampung culture; the findings revealed mathematical elements in this traditional game. However, none of these studies discussed ethnomathematics in Lampung batik motifs and their philosophy.
Besides Lampung tapis, Lampung also has another distinctive and famous cloth, namely Lampung batik. Although both come from Lampung and are made of cloth, Lampung batik is different from Lampung tapis. Tapis Lampung uses gold thread and is embroidered, while Lampung batik is drawn with batik wax, which is called "malam." Besides wearing it at official events, the people of Lampung also use this batik cloth for casual events. It has even become a patterned cloth used daily according to fashion design (Hidayatulloh, 2021; Humaidi, 2021). In 2021, based on circular letter number 045.2/3672/07/2021 regarding the adjustment of the use of official clothing for state civil servants in the Lampung Province environment, it was stated that on Fridays the Lampung Batik Daily Service Clothes (PDH) are to be worn (Djunaidi, 2021). This makes Lampung batik official and mandatory clothing in the local government, including the educational environment. In addition, Lampung batik has also become a mandatory uniform for students in schools throughout the province of Lampung (Hidayatulloh, 2021; Humaidi, 2021), as can be seen in Figure 1 below. Lampung batik was brought by the Javanese people who lived in Lampung for a long time, because batik came from the Java area. The origins of the birth of batik in Indonesia are related to the development of the kingdoms of Majapahit, Solo, and Yogyakarta (Trixie, 2020). Budianto (2020) stated that the migration of the Javanese to Lampung officially began in 1905 in the colonization program and continued under the government of the Republic of Indonesia within the frame of the transmigration program. Furthermore, the Javanese people adapted to the existing culture in Lampung, so that more and more batik cloth appeared with typical Lampung motifs, called Lampung batik. Lampung batik began to develop in the 1970s and was pioneered by Andrean Sangaji (a Lampung culturalist).
Even though Lampung batik has become part of the daily life of the Lampung people and is even familiar in the world of education, the Lampung people, especially students, still have difficulty understanding the philosophy of Lampung batik. In addition, they have difficulty understanding the relationship between Lampung batik and school subjects, especially mathematics. Therefore, this study aims to explore the mathematical concept of geometric transformation and the philosophies of local batik in Indonesia.
Several previous studies, as described above, examined the ethnomathematics of various regional cultures, including Lampung. In this case, the researchers conducted an ethnomathematical study specifically on the geometric concepts in each type of Lampung batik motif. Studies on the application of geometric transformations have been carried out by Fadila (2017), but these studies remain general across Lampung batik motifs. In this study, the authors examine particular Lampung batik motifs and explore the geometric concepts contained in them.
Batik is a patterned cloth specially made by drawing a motif with wax on it and then processing it through a particular procedure. Lampung batik is the result of the development of Indonesian batik, which takes motifs from the characteristics of traditional Lampung designs, one of which is the Lampung tapis woven fabric. The batik craftsmen in Lampung chose decorative patterns from tapis woven fabrics as a source of inspiration. Besides being unique to the people of Lampung, the decorative variety of tapis woven fabric is seen as more exotic because it looks as if it came from the stone age (Fadila, 2017). Therefore, in this article, the authors explore the geometry of various Lampung batik motifs. However, due to the limitations of the researchers and the diversity of typical Lampung batik motifs, the authors limit their exploration to the Lampung siger batik motif, the kapal batik motif, and the pohon hayat batik motif. Thus, this research can be a reference for further exploring other Lampung batik motifs.
Methods
Ethnographic research was the method used for this study. This study examines the culture that exists in society, especially in Lampung batik motifs. This research is included in ethnomathematical research because it aims to explore the mathematical concepts and moral values contained in Lampung batik motifs. The data for this study were collected through field studies and interviews. The researchers used the observation technique to find field data on the ethnomathematics of Lampung batik motifs. The researchers conducted observations by observing the batik-making activities for Lampung batik motifs at Andanan Batik Lampung and by observing Lampung batik motifs to find their mathematical elements.
In this study, the researchers conducted interviews with purposively selected sources, namely Mr. Humaidi, a cultural expert and retired employee of the Lampung Cultural Park, and Mr. Hidayatulloh, a typical Lampung batik craftsman as well as an entrepreneur who owns Andanan Batik Lampung. Mr. Humaidi is a native of the Lampung tribe who was raised in an environment and family that strongly maintains Lampung culture. After graduating from high school, he served and was accepted as a civil servant at Taman Budaya Lampung Province. Through his place of work, he became a pioneer in preserving Lampung culture.
Mr. Hidayatulloh is also a native of the Lampung tribe, born and raised in an environment rich in Lampung culture. After graduating from a Strata 2 (master's) program in mathematics education, he began to explore Lampung batik through various trainings. In 2018, he pioneered the Lampung batik craft business under Andanan Batik Lampung. Apart from running a business in Lampung batik, Mr. Hidayatulloh has also been a resource person both locally and nationally, for instance for the ministry of tourism and creative economy and the ministry of industry. Because Mr. Hidayatulloh has expertise and is also an academic in the field of mathematics education, he develops his batik products by applying mathematical knowledge to the batik cloth he produces.
The type of interview used by the researchers is the semi-structured interview. Questions in semi-structured interviews are more open and more flexible in their implementation compared to structured interviews. The researchers still used interview guidelines, but these could be developed conditionally during the question-and-answer process, so that an open and non-rigid situation was created during the interviews. Interviews were conducted with two sources, Mr. Humaidi and Mr. Hidayatulloh, who were deemed able to provide in-depth information regarding Lampung batik motifs.
In addition, the researchers reviewed the literature on Lampung batik to complete the findings from these interviews and observations. All data were documented in photos, videos, and field notes. Furthermore, source triangulation was used to check whether there was a connection between mathematical concepts and values in Lampung batik motifs. The triangulation technique was applied by comparing the data obtained from field observations, interviews, and the literature review on mathematical concepts and values in Lampung batik motifs. The last step was describing the data to investigate the findings of the study.
This research is limited to informants who have the same background. In this case, they are native Lampung people who use the same language, namely the Lampung language; live in the same administrative area, namely Pesawaran Regency, Lampung Province; and possess the same historical knowledge, namely the history of living, growing, and developing in Lampung. There are seven main descriptions produced in ethnographic research: language, technological system, economic system, social organization, knowledge system, art, and religion (Koentjaraningrat, 2015). In this study, the knowledge system is the primary focus of the research. This is because the researchers must observe and investigate the community's knowledge and art systems to discover the fundamental knowledge used to create the batik motifs and the cultural values incorporated into the art of batik motifs.
In the process of this research, the researchers use an ethnographic research design adapted from the design of Prahmana and D'Ambrosio (2020), which can be seen in Table 1 below.
Results
Lampung is famous as the island of a million sigers. In addition to the siger, there is also the Lampung elephant, a characteristic of the Lampung area. Thus, the people of Lampung developed siger and elephant images on tapis cloth and batik. In addition, the most famous Lampung batik motifs are the kapal motif and the pohon hayat, or tree of life, motif. These two motifs are very distinctive of Lampung culture. They are Lampung's trademarks in the eyes of the international community because they are found in several museums in Australia, Hawaii, and America (Fitinline, 2013). Due to the diverse characteristics of Lampung culture, Lampung batik has various motifs, including the siger motif, kapal, pohon hayat, gajah, kupu-kupu, sembagi, gamelan, pramadya style, and others. Some of the oldest motifs include the kapal motif and the pohon hayat, while one of the most famous Lampung regional icons is the Lampung siger. Thus, considering the authors' limitations, the authors limit the study of exploratory geometric transformations to three motifs, namely the siger motif, the kapal motif, and the pohon hayat motif.
Geometric transformations consist of translations, reflections, rotations, and dilations. A translation is a transformation that moves points in a plane in a particular direction and over a particular distance. A reflection is a transformation that moves each point in the plane using the properties of a mirror image. A rotation is a transformation that moves points by rotating them by an angle α about a certain point. A dilation is a transformation that changes the distance of points from a certain point by a specific multiplier. These transformations can be seen in Figure 2.
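The paper illustrates these transformations in Figure 2; for reference, their standard coordinate forms for a point (x, y) in the plane, which are not written out explicitly in the source, are:

\[
\begin{aligned}
\text{Translation by } (a,b):\quad & (x,y)\mapsto (x+a,\;y+b),\\
\text{Reflection across the } y\text{-axis}:\quad & (x,y)\mapsto (-x,\;y),\\
\text{Rotation by } \alpha \text{ about the origin}:\quad & (x,y)\mapsto (x\cos\alpha - y\sin\alpha,\;x\sin\alpha + y\cos\alpha),\\
\text{Dilation with factor } k \text{ about the origin}:\quad & (x,y)\mapsto (kx,\;ky).
\end{aligned}
\]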
Value and Geometry Transformation in Lampung Siger Batik
The siger symbol for Lampung women and its application to Lampung batik motifs can be seen in Figure 3 below. Humaidi: Siger is worn by Lampung women as a magnificent piece of jewelry. It symbolizes that behind the softness of a woman there is strength. Besides carrying a philosophy, the Lampung siger batik motif also uses the concept of geometric transformation in the form of reflection, translation, and dilation, as shown in Figure 4.
Values and geometry transformations in kapal batik
As shown in the following excerpt from the interview with Mr. Humaidi: Researcher: What do you think is the philosophy of this ship batik motif, sir?
Humaidi: Well, ma'am, the Lampung area is mostly water, so many people make a living as fishermen. In this way, the ship became one of the characteristics of the people of Lampung. The ship also has a philosophy or value, ma'am. The fact that the ship is balanced symbolizes the harmony between its citizens; it indicates that humans and nature are interrelated. Besides carrying values, the kapal batik motif also uses the geometric transformation concepts of reflection, translation, and dilation, as shown in Figure 6.
Figure 6. Geometric transformation of a ship (kapal) in batik motifs.

In Figure 6, it can be seen that there is a reflection, where the image is the result of reflecting across the axis of symmetry. Figure 6 also shows translation, where the resulting image is obtained by a shift. In addition, there are combined translation and dilation results, where the resulting image is obtained both by shifting and by shrinking. Applying kapal motifs with different batik patterns allows for different geometric transformation results. Figure 7 shows the ship motif found in Lampung batik. At first, the philosophy of the ship motif was considered to be the journey of the spirit of a person who had just died to the afterlife. However, after Islamic teachings entered the Lampung community, the philosophy of the ship motif turned into the journey of human life. In addition, the ship also represents the harmony between its citizens, indicating that humans and nature are interrelated.
Value and geometry transformation of the tree of life (pohon hayat)
As shown in the following excerpt from the interview with Mr. Humaidi: Researcher: What do you think is the philosophy of the pohon hayat batik motif, sir? Humaidi: Pohon hayat means the tree of life, which is the source of life. It signifies the existence of God Almighty as the giver of life. In addition to having a philosophy, the batik motif also employs the idea of geometric transformation in the form of reflection, as shown in Figure 8. Figure 9 shows the tree of life motif, which was originally mostly applied to the sarongs of Lampung women. However, this motif is now widely used for clothing according to fashion developments. The pohon hayat batik motif is dominated by Buddhist and Islamic cultural influences. This motif, also known as the tree of life, has a deep philosophical meaning for the people of Lampung. Pohon hayat means the tree of life, which is the source of life. It signifies the existence of God Almighty as the giver of life.
Discussion
Geometric transformation is a part of geometry that discusses changes in location and presentation based on images and matrices (Iswahyudi, 2003). The making of Lampung batik cannot be separated from the role of geometric transformation, which is taught in schools. Simple transformations, such as translation, reflection, rotation, and dilation, can be applied to make batik, especially Lampung batik motifs (Fadila, 2017).
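As a hypothetical illustration of how such simple transformations can generate a repeating motif, the sketch below builds a small batik-like band by reflecting a binary tile, translating (tiling) the resulting unit, and dilating it by an integer factor; it is only a schematic, not the craftsmen's actual procedure.

import numpy as np

# A small asymmetric binary "motif" tile (1 = waxed/dark, 0 = background)
tile = np.array([
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 1],
])

# Reflection: mirror the tile across its vertical axis
mirrored = np.fliplr(tile)

# Combine the original and its mirror into one symmetric unit
unit = np.hstack([tile, mirrored])

# Translation: repeat the unit horizontally to form a band of the pattern
band = np.tile(unit, (1, 4))

# Dilation: enlarge the pattern by an integer factor (here 2x)
scaled = np.kron(band, np.ones((2, 2), dtype=int))

print(band.shape, scaled.shape)   # (4, 32) (8, 64)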
Value and geometry transformation in Lampung siger batik motifs
Lampung siger batik motifs are very popular in Lampung society. This batik motif reflects the hallmark of Lampung, namely the siger. Everyone wearing this batik motif is proud, because the beautiful batik carries the siger of Lampung (Novitasari, 2020). The siger is a crown for the bride of Lampung, which has a bilaterally symmetrical shape extending to the left and right. The siger has a specific number of indentations that indicate the region of origin of the siger. In the Saibatin custom, which lives in coastal areas, the siger has seven curves representing seven adoq (traditional titles in the Saibatin community), namely suttan/dalom/pangeran (kepaksian/marga), raja jukuan/depati, batin, radin, minak, kimas, and mas/itton. In the Pepadun custom, the siger has nine indentations, which symbolize the existence of nine clans (abung siwo megou) (Deslima, 2021).
The siger motif in Lampung batik symbolizes the strength behind the softness (femininity) of a woman. There is hard work, independence, persistence, and so on behind the softness of a woman. The people of Lampung adhere to a patrilineal lineage; nevertheless, the vigor of a woman is essential, at the same time inspiring and driving her life partner's progress (Deslima, 2021; Humaidi, 2021). The geometric transformations in the siger batik motif include reflection, translation, and dilation.
Values and geometry transformations in kapal batik motifs
The ship (kapal) batik motif symbolizes the Lampung area, which is bordered by water. This motif describes the characteristics of Lampung (Novitasari, 2020). The Lampung area is mostly water, so many people make a living as fishermen, and the ship thus became one of the characteristics of the people of Lampung (Humaidi, 2021). Initially, the kapal motif was considered to represent the journey of the spirit of a person who had just died to the afterlife, but after Islamic teachings entered the Lampung community, the philosophy of the kapal motif turned into the journey of human life (Nugroho et al., 2021). The kapal motif contains a profound philosophy that symbolizes harmony, balance, and the interrelationship between human life and the natural surroundings (Humaidi, 2021). The geometric transformations in the kapal batik motif include reflection, translation, and dilation.
Value and geometry transformation of the pohon hayat batik motifs
The tree of life (pohon hayat) batik motif has a deep philosophy for the people of Lampung. The tree depicted here symbolizes life with the curtains of life. Usually, this cloth is used by women as a lower garment, complementing their clothing (Novitasari, 2020). The pohon hayat motif symbolizes unity and God Almighty as the creator of the universe (Humaidi, 2021). In addition, the tree of life symbolizes a person's ability to place himself among the people who can determine his life path (Isbandiyah & Supriyanto, 2019). The geometric transformations in the pohon hayat batik motif include reflection and translation.
The findings from the investigation of the application of mathematical concepts in the production of Lampung batik motifs demonstrate that the inhabitants of Lampung have used the idea of geometric transformation. This concept of geometric transformation has been self-taught, and creative ideas have emerged in making Lampung batik motifs based on the craftsmen's experience with the batik process (Hidayatulloh, 2021). This indirectly shows the application of mathematics in a culture, which is often known as ethnomathematics.
The results of ethnomathematical exploration have even been used to teach mathematics in Indonesian schools. Several researchers have developed teaching materials and learning tools that can be used in mathematics learning, including discovery-learning-based geometry transformation teaching materials through an ethnomathematical approach (Fitriyah et al., 2018), the development of geometric transformation worksheets with the Lampung tapis motif (Khasanah & Fadila, 2018), the development of teaching materials characterized by the ethnomathematics of the Komering tribe for elementary school students (Nelawati et al., 2018), ethnomathematics in the traditional engklek game and its tools as teaching materials (Aprilia et al., 2019), the development of Timor-weaving ethnomathematics-based student worksheets on number pattern material (Disnawati & Nahak, 2019), the development of ethnomathematics-based teaching materials for one-variable linear equations and inequalities (Lakapu et al., 2020), and the development of ethnomathematics-based teaching materials on geometric transformation (Nurmaya et al., 2021). This demonstrates that ethnomathematics-based mathematics instruction can alter students' perceptions of the connection between mathematics and their own culture and real-world experiences, thereby reducing mathematics anxiety (Prahmana & D'Ambrosio, 2020). Some of this research has successfully incorporated ethnomathematical investigation into the design of mathematics instruction, and it has been demonstrated to increase student comprehension and make students feel that the mathematics they study is more meaningful.
In addition, the results of this study also show that every Lampung batik motif has moral values that can be reflected in everyday life, in the form of symbols of strength, balance, and unity (Humaidi, 2021). Like the Lampung tapis, the Lampung batik motif marks the characteristics of the Lampung region, so that people can use Lampung batik as a symbol of their regional identity (Hidayatulloh, 2021; Humaidi, 2021). Lampung batik, as a uniform throughout the world of education, can also be used as a tangible medium for introducing the application of mathematical concepts in everyday life (Hidayatulloh, 2021).
Thus, there is no doubt that mathematics is closely related to culture. In this case, ethnomathematics can be used to introduce culture to the younger generation and maintain its sustainability. The results of this study are expected to contribute to further research developing ethnomathematics-based mathematics learning tools, especially for geometric transformation materials based on Lampung batik motifs.
Conclusion
The inhabitants of Lampung frequently use the idea of geometric transformation to create batik motifs such as the siger batik motif, the pohon hayat motif, and the kapal motif. The geometric transformation concepts used are reflection, dilation (scaling), and translation (shifting). In addition, Lampung batik has a history, philosophy, and values in each motif. These principles, such as the importance of God Almighty, unity, harmony, balance, and the strength behind women's femininity, can be seen in daily life. Due to the limitations of the researchers and the diversity of typical Lampung batik motifs, the authors limit their exploration to the Lampung siger batik motif, the ship batik motif, and the tree of life batik motif. Thus, it is hoped that this research can be a reference for further exploration of other Lampung batik motifs. This ethnomathematical study of Lampung batik motifs can be used as a starting point for introducing culture to students in Lampung and throughout Indonesia through learning mathematics.
Acknowledgment
The authors would like to thank Mr. Humaidi as an expert on Lampung culture and Mr. Hidayatulloh as a Lampung batik craftsman who has been willing to be an informant in this research.
Conflicts of Interest
According to the authors, there are no conflicts of interest regarding the publication of this work. Furthermore, the authors confirm that issues of plagiarism, misconduct, data fabrication and/or falsification, multiple publication and/or submission, and redundancy have been fully addressed.
Funding Statement
This work received no specific grant from any public, commercial, or not-for-profit funding agency.
Author Contributions
Noerhasmalina: data collection and data analysis; Binti Anisaul Khasanah: writing, data collection and data analysis. | 5,772.2 | 2023-01-02T00:00:00.000 | [
"Mathematics",
"Education"
] |
Design of CGAN Models for Multispectral Reconstruction in Remote Sensing
Multispectral imaging methods typically require cameras with dedicated sensors that make them expensive. In some cases, these sensors are not available or the existing images are RGB, so the advantages of multispectral processing cannot be exploited. To solve this drawback, several techniques have been proposed to reconstruct the spectral reflectance of a scene from a single RGB image captured by a camera. Deep learning methods can already solve this problem with good spectral accuracy. Recently, a new type of deep learning network, the Conditional Generative Adversarial Network (CGAN), has been proposed. It is a deep learning architecture that simultaneously trains two networks (generator and discriminator), with the additional feature that both networks are conditioned on some sort of auxiliary information. This paper focuses on the use of CGANs to achieve the reconstruction of multispectral images from RGB images. Different regression network models (convolutional neural networks, U-Net, and ResNet) have been adapted and integrated as generators in the CGAN and compared in performance for multispectral reconstruction. Experiments with the BigEarthNet database show that the CGAN with ResNet as a generator provides better results than the other deep learning networks, with a root mean square error of 316 measured over a range from 0 to 16,384.
Introduction
Multispectral images have numerous applications in remote sensing, ranging from agriculture [1,2] to environmental monitoring [3,4], change detection [5,6], and geology [7]. The main difference of multispectral images compared to RGB images is the incorporation of narrow bands in a specific wavelength range. These bands can include wavelengths in the visible (VIS, 380-800 nm), visible and near infrared (VNIR, 400-1000 nm), near infrared (NIR, 900-1700 nm), short-wave infrared (SWIR, 1000-2500 nm), mid-wave infrared (MWIR, 3-5 µm), and long-wave infrared (LWIR, 8-12.4 µm) spectrum. The wavelengths may be separated by filters or detected via instruments that are sensitive to particular wavelengths. Figure 1 shows a multispectral image taken by the Sentinel-2 satellite [8].
Many sensors for remote sensing capture multispectral images. However, they are more expensive than RGB cameras, since the extended spectral information provided by these sensors requires additional complexity. The main application of multispectral reconstruction is the generation of multispectral images when this type of sensor is not available or when the only available images are RGB. In the case of a mixture of RGB and multispectral images, multispectral reconstruction allows uniform processing. This can be important for change detection applications when part of the available images are RGB and part are multispectral. Multispectral reconstruction can also be useful in applications that require multispectral image processing but where only RGB images are available. In this case, the spectral reconstruction could be considered as a preprocessing stage, like that performed with filters and morphological or attribute profiles to highlight structures in the images. Therefore, there has been considerable interest in developing algorithms for spectral reconstruction of multispectral images from RGB images. The goal of these algorithms is to minimize the error in the creation of multispectral images, achieving a result as faithful as possible to reality. Spectral reconstruction is a supervised machine learning problem, which requires a set of training images. Various techniques have been proposed, ranging from the use of functions or dictionaries [9][10][11] to more innovative methods such as artificial neural networks [12][13][14][15][16][17][18][19][20].
Early works on spectral reconstruction are based on the creation of dictionaries. Arad et al. [9] construct a sparse spectral dictionary by collecting images (either general or domain specific), whose projection into RGB provides a mapping from RGB atoms to hyperspectral atoms. Once all these components have been obtained, the spectral signature of each pixel of the test image is estimated from this dictionary representation by applying the orthogonal matching pursuit algorithm. A drawback of this method is that it treats each pixel independently, so the information available in the neighborhood of that pixel is not taken into account. This approach can be improved by adding additional information, such as a set of spectral and convolutional features [10]. Principal Component Analysis (PCA) has also been used to extract basis functions from collected databases of spectral reflectance [11].
Architectures based on neural networks have also been proposed in the literature for spectral reconstruction. Neural networks are capable of learning complex internal representations, which allows them to extract the relevant features from the information they process. Nguyen et al. [12] addressed the problem with radial basis functions, although this approach still treats each pixel independently. In recent years, methods based on neural networks and deep learning have become more common [21], especially through the use of Convolutional Neural Networks (CNNs). An advantage of CNNs is the automatic use of contextual information.
Different network types based on CNNs have been proposed for spectral reconstruction [13]. A moderately deep (6 convolutional layers) model with residual connections (ResNet) was proposed by Can et al. [14]. The residual connections ensure that more features are available to the final layer. This approach was also used by Sharma et al. [15], where the feature extraction from the three input RGB bands is done by a convolution layer, followed by 10 residual blocks for feature mapping. In addition, other convolutional neural network architectures have been studied, such as the U-Net used by Stiebel et al. [16]. U-Net consists of a downsampling path and an upsampling path, which gives it the U-shaped architecture. During the contraction, the spatial information is reduced while feature information is increased. Skip connections along the expansion path complement the extracted features. Fubara et al. [17] used a modified U-Net network with skip connections to allow lower-level features to flow to deeper layers. Furthermore, a second, unsupervised learning method is proposed in that work, which would be useful in case no training images are available.
Recently, the use of Generative Adversarial Networks (GAN) has been explored for solving a number of tasks in image processing. GANs consist of two networks, a generator and a discriminator. The generator tries to create new plausible synthetic data while the discriminator learns to discriminate between the training samples and the fake data. In this way, both networks improve their learning, the generator trying to trick the discriminator, and the latter trying to distinguish between real and fake data. Conditional Generative Adversarial Networks (CGAN) are an extension of the GAN where both the generator and discriminator are conditioned on some sort of auxiliary information such as class labels or data from other sources. CGANs have been proved useful for various applications in image processing, including classification [22], denoising [23], registration [24], change detection [25], information fusion [26], and precipitation estimation [27].
Isola et al. [18] proposed using CGANs as a general-purpose solution to image-to-image translation problems. These solutions include synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. The network proposed in [18] is a CGAN with a U-Net-based architecture as a generator and a convolutional PatchGAN classifier as a discriminator. Lore et al. [19] used CGANs for RGB-to-multispectral image mapping, spectral super-resolution of image data, and recovery of RGB imagery from multispectral data. A similar solution was proposed by Alvarez et al. [20] for spectral reconstruction. Since the generator needs to yield full-size detailed images, a U-Net-like architecture was used. The discriminator was focused solely on modeling high-frequency structure and consists of a PatchGAN, which is simpler in terms of convolutional layer count. That is, the networks currently proposed in the literature for CGAN-based spectral reconstruction are built around U-Net models as generators.
In this work we intend to expand the use of CGANs for spectral reconstruction by exploring other types of generators. Specifically, CNN, U-Net, and ResNet models are adapted and evaluated as generators in the CGAN. As training data for the CGAN, the BigEarthNet database [28] has been used. This database contains approximately half a million images taken from the Sentinel-2 satellite, showing an aerial view of 10 European countries. The rest of the paper is organized into four sections. Section 2 presents the methods used in this study, including the description of the CGAN and of the networks proposed as generators. The experimental results, evaluating reconstruction performance and computational cost, are presented in Section 3. Then, the discussion is carried out in Section 4. Finally, Section 5 summarizes the main conclusions.
Neural Network Models
In this section we present the neural networks under study that are used as generators for a Conditional Generative Adversarial Network (CGAN) architecture. These networks are designed to convert an RGB composite image into a multispectral image of n bands. The CGAN model for multispectral reconstruction is first described in detail, followed by the CNN, U-Net, and ResNet models considered as generators. In Section 3, variations in terms of layers, kernel size and skip connections are studied for these architectures.
CGAN Model
A generative adversarial network (GAN) is a class of machine learning framework in which two networks (generator and discriminator) compete with each other. The generative network generates candidates while the discriminative network evaluates them. The objective of the generator is to synthesize candidates that the discriminator thinks are real, that is, to increase the error rate of the discriminator. The objective of the discriminator is to detect the candidates synthesized by the generator, that is, to decrease its own error rate. GANs provide an efficient way to learn deep representations with a relatively small amount of training data. This is achieved by generating backpropagation signals through a competitive process that involves both networks. When the GAN is well designed, the generator and discriminator error rates are stable and balanced. The representations learned by GANs may be used in a variety of applications, including synthesis, editing, super-resolution, and classification.
Different models of GANs have been proposed in the literature, among which we can highlight [29]:
• Deep Convolutional Generative Adversarial Network (DCGAN) [30]. The DCGAN is the version of the GAN architecture that uses deep convolutional neural networks for the generator and the discriminator, with linked training between both networks. This architecture makes use of large unlabeled datasets to train the discriminator to distinguish real samples from those synthesized by the generator. DCGAN has been used as the base model for many other GAN models. For example, the discriminator model can be used as a starting point for developing a classifier, while the generator model could make use of additional information for generating the candidates.
• Conditional Generative Adversarial Network (CGAN) [31]. The CGAN is an extension of the GAN architecture that makes use of additional information as input to both the generative and the discriminative networks. In an ordinary GAN there is no control over the modes of the candidates to be generated. In a CGAN, additional information can be added as input to the generator in order to condition the synthesis of candidates; for example, class labels can be used if they are available. These labels can also be added to the discriminator input to help it distinguish generated candidates from real ones. By providing additional information, two benefits are achieved:
  - Convergence will be stable and faster, since the random distribution that the candidates follow will have some pattern.
  - The generator model can be used to generate candidates of a given specific type, for example, for a given class label.
• Auxiliary Classifier Generative Adversarial Network (ACGAN) [32]. The ACGAN is an extension of the GAN architecture in which both the generative and the discriminative networks are class conditional, as in the CGAN, but an additional model is added to the discriminator to detect the class label. That is, the discriminator must predict whether the given candidate is real or generated, as in the CGAN, but it also predicts the class label of the candidate.
• Information Maximizing Generative Adversarial Network (InfoGAN) [33]. The InfoGAN is an extension of the GAN architecture that introduces control variables, automatically learned by the architecture, that allow control over the characteristics of the generated candidate, for example style, thickness, and type when generating images of handwritten digits. This architecture is motivated by the desire to control and decouple the properties of the generated candidates. The InfoGAN adds the control variables together with an auxiliary model that predicts them, trained via a mutual-information loss function.
• Semisupervised Generative Adversarial Network (SGAN) [34]. The semi-supervised GAN is an extension of the GAN architecture for training a classifier model while making use of both labeled and unlabeled data. Semi-supervised learning is the challenging problem of training a classifier on a dataset that contains a small number of labeled examples and a much larger number of unlabeled examples. In the SGAN, the discriminator is modified to predict n + 1 classes, where n is the number of classes in the classification problem and the additional class represents the synthesized candidates. The discriminator model is thus trained simultaneously for the unsupervised GAN task and for the supervised classification task.
These GAN architectures are illustrated in Figure 2. In this work we have chosen the CGAN architecture, since it makes it possible to add additional information to both the generator and the discriminator, which helps the generation of multispectral images; a classification of the candidates, on the other hand, is not necessary for this task. In our case, the CGAN architecture allows an RGB composite image to be added as input to the generator in order to synthesize the multispectral image, without requiring a classification of the results [20]. In summary, the functions of generator and discriminator for the task considered are:
• Generator: takes as input an RGB composite image, and its goal is to learn how to create the most realistic multispectral bands possible.
• Discriminator: takes as input the RGB bands together with the corresponding multispectral bands and decides whether the multispectral image is real or generated.
The CGAN architecture for spectral reconstruction is shown in Figure 3a. The CGAN training process begins with the generator. The input to this part of the network is an RGB composite image, and the output is a multispectral image. Since the objective of the GAN is to synthesize the best possible multispectral image, the greatest effort should be put into the design of the generator. In this work, different types of networks (CNN, U-Net, and ResNet) have been considered as generators.
Then it is the discriminator's turn. The discriminator takes as input the RGB bands and the multispectral bands of an image and provides a two-valued output, which indicates whether the multispectral image is real or generated. In our CGAN, the discriminator is a PatchGAN classifier. It is built from convolutional layers, consisting of two convolution modules with batch normalization and a leaky ReLU activation function, as can be seen in Figure 3b. A final convolution is applied to reduce the size of the result. A sketch of such a patch-based discriminator is given below.
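As an illustration, the following Keras sketch mirrors this structure. The filter counts, kernel sizes, strides, and the concatenation of the RGB and multispectral inputs are assumptions of this sketch (they are not specified in the text); the strides were simply chosen so that a 60 × 60 input yields a 5 × 5 patch map, as in the example of Figure 4.

```python
from tensorflow.keras import layers, Model

def build_patch_discriminator(size=60, rgb_bands=3, ms_bands=9):
    """PatchGAN-style discriminator (sketch): judges the realism of local
    patches instead of the whole image. All hyperparameters are illustrative."""
    rgb = layers.Input((size, size, rgb_bands))
    ms = layers.Input((size, size, ms_bands))
    x = layers.Concatenate()([rgb, ms])            # condition on the RGB composite

    for filters in (64, 128):                      # two conv + BN + leaky ReLU modules
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)

    # Final convolution reduces the result to a 5x5x1 map of patch logits.
    patch_map = layers.Conv2D(1, 4, strides=3, padding="same")(x)
    return Model([rgb, ms], patch_map, name="patch_discriminator")
```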
In more detail, the discriminator operates as follows. It must detect whether the image structure is real or generated, and for this it computes an error map by subdividing the image into blocks [18]. Figure 4 shows an example of a 5 × 5 error map obtained from an image of 60 × 60 pixels when using the discriminator of Figure 3b. Each component of the error map estimates to what degree the structure of a particular section of the image is real or generated. This operation is repeated twice, first with the RGB bands/multispectral bands pair provided by the generator, and then with an RGB bands/multispectral bands pair from the training set. The function used to evaluate the obtained values and build the error map is binary cross-entropy. Finally, to combine the two error maps, the two values obtained by applying this function are added together.
This value is what the discriminator uses to learn to distinguish the generated images from the real ones. The generator is trained with a different error function, which takes into account the result of the discriminator [18]: Loss_G = Error_CGAN + λ · L1loss, where Error_CGAN is the result of applying the binary cross-entropy function to the discriminator output when it is fed the generated image, L1loss is the average absolute error between the expected image and the generated image, and λ = 100 as proposed by Isola et al. [18] to reduce the visual artifacts that may appear in certain applications.
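A minimal TensorFlow sketch of the two objectives is shown below; the function names are ours, and the use of logits in the cross-entropy is an assumption. The ingredients taken from the text are the binary cross-entropy on the discriminator's patch maps, the λ-weighted L1 term for the generator, and the sum of the errors on the real and generated pairs for the discriminator.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100.0   # weight of the L1 term, as proposed by Isola et al. [18]

def generator_loss(disc_fake_map, generated_ms, real_ms):
    """Generator objective: adversarial BCE on the patch map obtained for the
    generated image plus lambda times the mean absolute (L1) error."""
    error_cgan = bce(tf.ones_like(disc_fake_map), disc_fake_map)
    l1_loss = tf.reduce_mean(tf.abs(real_ms - generated_ms))
    return error_cgan + LAMBDA * l1_loss

def discriminator_loss(disc_real_map, disc_fake_map):
    """Discriminator objective: BCE on the real pair plus BCE on the generated
    pair, added together as described in the text."""
    real = bce(tf.ones_like(disc_real_map), disc_real_map)
    fake = bce(tf.zeros_like(disc_fake_map), disc_fake_map)
    return real + fake
```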
CNN Model
A Convolutional Neural Network (CNN) has mainly two types of layers: convolutional layers and pooling layers. In the convolutional layer, the convolution operation is applied several times using filters or kernels to obtain a map of features. These maps provide information about the content of the image. Several filters can be applied in the same layer, and several feature maps will then be created. The size of the filters may vary; the most common are windows of size 3 × 3 and 5 × 5. The pooling layer is responsible for extracting the most representative pixels in each subregion of an image. There are several types of subsampling operations; one of the most used is max pooling, which extracts the maximum value in each window [35].
The CNN model considered in this work to integrate into a CGAN as a generator is designed using five convolutional layers. This network is illustrated in Figure 5. The upper part of the figure shows the feature maps, while the lower part illustrates the operation that is carried out. The number of features increases to 64, while the dimension of the input image remains constant. The number of output features in the last layer can be adjusted to suit the number of spectral bands required. In this network, the number of output bands is set to 9, which added to the three original bands gives a total of 12. This is the number of spectral bands available in the images of the BigEarthNet database, details of which are presented in Section 3. The multispectral reconstruction is approached as a regression problem, so the use of pooling layers is removed as proposed by Stiebel et al. [16]. These layers are used in classification problems, but in the case of reconstruction problems they would cause a loss of information.
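A minimal Keras version of such a generator could look as follows. The intermediate filter counts are assumptions; only the five-layer depth, the growth of the number of features to 64, the absence of pooling, and the nine output bands follow the description above.

```python
from tensorflow.keras import layers, models

def build_cnn_generator(size=60, in_bands=3, out_bands=9):
    """Plain five-layer convolutional generator (sketch); no pooling layers
    are used, so the 60x60 spatial dimension is preserved throughout."""
    return models.Sequential([
        layers.Input((size, size, in_bands)),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(out_bands, 3, padding="same"),   # linear output: regression of 9 bands
    ], name="cnn_generator")
```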
This CNN model will be introduced as a generator within the CGAN. The resulting network will be evaluated and considered as a basic scheme with which to compare other more efficient generators.
U-Net Model
The U-Net architecture consists of a downsampling path followed by an upsampling path. A convolution operation is applied in the layers of the downsampling path, while a transposed convolution is used in the layers along the upsampling path. During the downsampling path, the spatial information is reduced according to the size of the kernel, but new features (in the spectral dimension) are created. The upsampling path performs the reverse function: the features of a transposed convolution are combined with the information of previous layers to produce a more precise spectral reconstruction, and after passing through a layer the spatial resolution increases. Figure 6 shows the model considered in this work, which is based on the U-Net proposed by Stiebel et al. [16]. The upper part of the figure shows the feature maps, while the lower part indicates the operation that is carried out. The network input is an RGB composite, labeled as Input image in Figure 6, while the output consists of the nine additional bands of the multispectral image, labeled as Output bands in the same figure. The downsampling path is indicated by a downward arrow in the figure, and the upsampling path by an upward arrow. In Figure 6 the spatial dimension of the image is reduced by 2 in each layer of the downsampling path, while the number of features progressively increases up to 128 in the last layer of this path. Figure 6 also illustrates, in magenta, the skip connections, which combine the output of a layer in the downsampling path with the input to the layer at the corresponding upsampling level.
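A compact sketch of a U-Net-style generator of this kind is given below. The number of levels, the filter counts, and the use of "same" padding are illustrative assumptions; the strided downsampling, the transposed-convolution upsampling, the growth of the features up to 128, and the concatenated skip connections follow the description above.

```python
from tensorflow.keras import layers, Model

def build_unet_generator(size=60, in_bands=3, out_bands=9, base=32):
    """U-Net-style generator (sketch): the encoder halves the spatial size
    twice, the decoder restores it, and skip connections are concatenated."""
    inp = layers.Input((size, size, in_bands))

    # Downsampling path
    e1 = layers.Conv2D(base, 3, padding="same", activation="relu")(inp)                 # 60x60x32
    d1 = layers.Conv2D(base * 2, 3, strides=2, padding="same", activation="relu")(e1)   # 30x30x64
    d2 = layers.Conv2D(base * 4, 3, strides=2, padding="same", activation="relu")(d1)   # 15x15x128

    # Upsampling path with concatenated skip connections
    u1 = layers.Conv2DTranspose(base * 2, 3, strides=2, padding="same", activation="relu")(d2)  # 30x30x64
    u1 = layers.Concatenate()([u1, d1])
    u2 = layers.Conv2DTranspose(base, 3, strides=2, padding="same", activation="relu")(u1)      # 60x60x32
    u2 = layers.Concatenate()([u2, e1])

    out = layers.Conv2D(out_bands, 3, padding="same")(u2)   # 9 reconstructed bands
    return Model(inp, out, name="unet_generator")
```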
Since the U-Net proposed in [16] was designed to operate independently and not to be included within a conditional GAN, in this work the U-Net is adapted to operate as a generator within that architecture. The original U-Net architecture, proposed by Ronneberger et al. [36] for biomedical image segmentation, will also be considered as a generator for the CGAN and is named U-Net_b. The U-Net_b model reduces the dimension of the images by 2 in each layer along the downsampling path and magnifies it by the same factor along the upsampling path. This is the generator used in the CGAN of Alvarez et al. [20].
ResNet Model
The Residual Neural Network (ResNet) architecture is characterized by using skip connections, which reduce the training error when adding more layers and solve degradation problems [37]. The ResNet considered in this work is based on the residual network proposed by Can et al. [14] for spectral reconstruction. Since the network was designed to operate independently and not included within a GAN, in this work the ResNet will be adapted to operate as a generator within CGAN.
The ResNet model is shown in Figure 7. The upper part of the figure shows the feature maps, while the lower part indicates the operation that is carried out. The boxes represent the convolutional layers, with the labels indicating the size of the kernel. The residual blocks are framed in dotted lines, while the lower lines connecting both sides of the network show the skip connections used in this architecture.
The backbone of the network has two residual blocks. The two convolutional layers before the residual blocks perform a feature extraction and a compression, respectively. Although the initial features are compressed, they are reused through the skip connections in the last layers of the network. This structure has benefits in terms of both execution time and performance, since compressing the features that follow the main path speeds up the computation and reduces overfitting. The skip connection on the bottom side in Figure 7 estimates the basic mapping from RGB to the multispectral reconstruction through a 7 × 7 convolution layer [14]. The last two convolutional layers expand the features to approximate the output to the multispectral image. Through the skip connections, the initially learned features are also used in the network.
In contrast to the original residual blocks introduced in [37], a Parametric Rectified Linear Unit (PReLU) activation function is used in this work instead of the ReLU of the ResNet architecture proposed in [38]. The PReLU function has been shown to improve on the traditional non-parametric ReLU. Other possible modifications of this network to optimize its operation as a generator within the GAN are discussed in Section 3. Table 1 includes the network models considered as generators for a CGAN.
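The following sketch outlines such a generator. The filter counts are assumptions; the two compressed residual blocks with PReLU activations, the 7 × 7 convolution on the second path, and the final expansion layers follow the description above.

```python
from tensorflow.keras import layers, Model

def residual_block(x, filters):
    """Residual block with PReLU activations (sketch)."""
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.PReLU(shared_axes=[1, 2])(layers.Add()([x, y]))

def build_resnet_generator(size=60, in_bands=3, out_bands=9, feat=64, compressed=32):
    inp = layers.Input((size, size, in_bands))

    # Main path: feature extraction, compression, two residual blocks, expansion.
    x = layers.Conv2D(feat, 3, padding="same")(inp)
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(compressed, 3, padding="same")(x)   # compression reduces cost and overfitting
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = residual_block(x, compressed)
    x = residual_block(x, compressed)
    x = layers.Conv2D(feat, 3, padding="same")(x)          # expansion towards the output
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(out_bands, 3, padding="same")(x)

    # Second path: coarse RGB-to-multispectral mapping via a single 7x7 convolution.
    skip = layers.Conv2D(out_bands, 7, padding="same")(inp)

    out = layers.Add()([x, skip])
    return Model(inp, out, name="resnet_generator")
```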
Results
In this section, the accuracy of the multispectral image reconstruction and the computational cost of the CGAN are evaluated using as generators the CNN, U-Net, and ResNet neural networks designed in the previous section. The CGAN of Alvarez et al. [20], which uses the U-Net of Ronneberger et al. [36] as generator, is also included in the analysis and is named U-Net_b in this section. It was adapted by removing the PatchGAN layer in the discriminator and reducing the number of layers and filters to fit the input image size. Table 1 lists these networks and their key parameters.
The BigEarthNet [28] dataset was used for training the CGAN. In this work, 10,000 image patches were used for the experiments. The size of the dataset was analyzed prior to the experiments, considering the limitations of the available hardware. The experiments were carried out on a personal computer with an AMD Ryzen 5 2600 CPU at 3.4 GHz with 16 GB of RAM, and an NVIDIA GeForce GTX 1660 GPU at 1.7 GHz with 6 GB of RAM. The code was written in Python using the TensorFlow library [39] under a Linux operating system.
The multispectral reconstruction was evaluated using the Root Mean Square Error (RMSE). The RMSE measures the amount of error between the reflectance value of the image and the predicted value. Since it is an error measure, networks that obtain a lower RMSE provide a more accurate reconstruction. The RMSE retains the same scale as the data, so the ranges can be transformed to a common scale and the results compared. The computational cost was measured in terms of execution time (t), and the performance of the CPU and GPU was compared in terms of the speedup, defined as the fraction t_CPU/t_GPU.
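For reference, both evaluation quantities can be computed as in the following sketch; the example numbers in the final comment are simply back-computed from the GPU training time and speedup reported later for the ResNet generator.

```python
import numpy as np

def rmse(reference, reconstructed):
    """Root mean square error between reference and reconstructed reflectances,
    both expected on the common 0-16,384 scale used in the paper."""
    diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def speedup(t_cpu, t_gpu):
    """GPU speedup as the ratio of CPU to GPU training time (same units)."""
    return t_cpu / t_gpu

# Example: a GPU time of 62 min and a reported 9.6x speedup imply a CPU time
# of roughly 9.6 * 62 = 595 min; speedup(595.2, 62.0) returns about 9.6.
```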
Dataset
The BigEarthNet [28] dataset contains approximately half a million multispectral image patches taken from 125 high-resolution images acquired by the Sentinel-2 satellite. All the tiles were atmospherically corrected by the Sentinel-2 Level 2A product generation and formatting tool (sen2cor) [28]. Sentinel-2 carries an optical instrument payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m, and three bands at 60 m spatial resolution. The spectral band B10 was discarded as it does not contain information on the surface because of the sen2cor processing methodology. Table 2 shows the details of the multispectral patches. As the image patches have different resolutions at different wavelengths, all the spectral bands were downscaled to 60 × 60, as it is the most common size among all bands, except bands B01 and B09, which were upscaled to 60 × 60. See Table 2 for details of the patch sizes.
Table 2. Multispectral band name, description, wavelength (nm), spatial resolution (m) and patch size (pixels) of the BigEarthNet with Sentinel-2 database. (*) The spectral band B10 was discarded as it does not contain information on the surface because of the sen2cor processing methodology.
For the experiments, 10,000 patches were randomly selected from the BigEarthNet dataset and divided into three non-overlapping sets: 80% was used for training and validation and 20% for testing. Of the samples for training and validation, we made the same partition, with 80% for training the neural networks and 20% for validating the performance during training. Thus, 64% of the total number of samples was used for training, 16% for validation, and 20% for testing. Figure 8 shows the RMSE during training for a ResNet using different numbers of patches. The error decreases when a larger number of patches is used, so the available RAM in the computer was the limiting factor for the dataset size. As our GPU has only 6 GB of RAM compared to the 16 GB of CPU memory, 10,000 images were used in the experiments. Each image needs 168.75 kB (4 bytes × 60 × 60 × 12), that is, the size in bytes of one spectral band value times the number of pixels and bands, plus additional overhead from the metadata and the software used for the experiments.
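A small sketch of this data handling is given below. The random seed and the use of NumPy are implementation choices of ours; the 64/16/20 proportions and the per-patch memory estimate follow the text.

```python
import numpy as np

def split_indices(n_patches=10_000, seed=0):
    """Random 64/16/20 train/validation/test split of patch indices:
    first 80/20 for (train+val)/test, then 80/20 within the train+val part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_patches)
    n_test = int(0.20 * n_patches)     # 2,000 test patches
    n_val = int(0.16 * n_patches)      # 1,600 validation patches
    test, val, train = idx[:n_test], idx[n_test:n_test + n_val], idx[n_test + n_val:]
    return train, val, test

# Memory footprint of one patch: 12 bands of 60x60 float32 values.
bytes_per_patch = 4 * 60 * 60 * 12     # = 172,800 bytes, i.e., 168.75 kB
```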
CGAN Generators Comparison
To evaluate the CGAN with different networks as generators (CNN, U-Net, U-Net_b [20,36] and ResNet), these networks were first trained individually using the validation set and different parameters in order to set up the configuration of each model.
Configuration of the Neural Networks as Generators
To obtain the best results for each network operating as a CGAN generator, some variations and optimizations in the configurations were studied. These variations are numbered from 1 to 4 in Table 3. The first column lists the variations for the U-Net architecture and the second column lists the variations of the ResNet. The row marked with '0' indicates the base architecture on which the variations are applied. The networks were trained for 200 epochs at a learning rate of 0.001.
Regarding the U-Net, the following modifications were studied: (1) variation of the number of layers, (2) addition of a preprocessing layer for noisy images, (3) variation of the size of the convolutional filters, and (4) replacement of the concatenate function by the add function. The best results were obtained for the configuration shown in Figure 6: a 3 × 3 convolution filter with no padding, a ReLU activation function, and concatenated skip connections. The main reasons are that a small convolution filter adds information from the nearest neighbors, which are likely to be the most similar, and that concatenation keeps all the information in the skip connections, whereas the add function loses some of it. Regarding the ResNet, the following modifications were considered: (1) variation of the number of filters in the convolutional layers, (2) variation of the number of residual blocks, and (3) variation of the number of paths. After analyzing the results, we conclude that adding residual blocks or doubling the number of filters improves the results only marginally. In practice, these two modifications are not recommended, as they double the training time while yielding an improvement of less than 1%. On the other hand, it has been verified that the two paths of the proposed architecture are useful and reduce the error by 4% without increasing the execution time. The best results were obtained for the configuration shown in Figure 7: two residual blocks and two paths.
Configuration of the CGAN
Next, we proceed to evaluate the CGAN by integrating the above networks as generators. As previously mentioned, CGAN learning is based on competition between the two networks that compose it. This learning method makes the CGAN provide better results than the standalone networks, even when those same networks are integrated into the CGAN as part of its architecture, as shown in Figure 9. In this figure, the CGAN using a ResNet as a generator is compared to the standalone ResNet; during most of the training, the error of the CGAN is lower than that of the ResNet. A validation set of 1600 images was used to plot the error during the training phase. With the configuration extracted from the training step, the CGAN was evaluated for multispectral reconstruction by calculating the RMSE on the entire test set. The error was calculated from the histogram of the entire dataset using a common scale, resulting in a range of values from 0 to 16,384. Table 4 shows the reconstruction error (RMSE) for each network, the number of passes over the entire training dataset (epochs), the execution time for training on CPU and GPU, as well as the GPU speedup with respect to the CPU execution time. The CGAN with ResNet has the best reconstruction result, with an error of 316. The CGAN with U-Net_b yielded the highest error, with an RMSE value of 404, while CNN and U-Net obtain a similar reconstruction accuracy, namely errors of 363 and 354, respectively.
Table 4. Reconstruction error (RMSE), epochs, CPU and GPU execution times in minutes, and speedup as the fraction t_CPU/t_GPU for the training of the CGAN using CNN, U-Net, U-Net_b [20,36], and ResNet as generators for multispectral reconstruction. Networks with lower RMSE provide better results.
Regarding the execution time for training, the CGAN with ResNet needs 62 min when using the GPU, while CNN and the U-Nets need less training time. The CGAN with U-Net_b requires the least time (11 min), while U-Net needs 49 min, both on the GPU. From Table 4 it is also observed that all networks achieved a speed increase when using the GPU. For the CNN and U-Net the speedup is higher, reaching almost 15×, so both networks are better suited for GPU execution than U-Net_b and ResNet. The speedup obtained on the GPU is 9.6× for the ResNet and 2.2× for the U-Net_b.
Multispectral Reconstruction Comparison
Based on the best results obtained with the CGAN using the ResNet as a generator, in this section we compare the multispectral reconstruction from RGB images with two real multispectral images: an image of pastures and an image of the ocean, shown in Figures 11 and 12, respectively. These figures show an error map computed as the absolute error between each spectral band of the real multispectral image and the reconstructed image; this error is calculated on a scale from 0 to 1. In addition, the RMSE calculated for each spectral band is also included in those figures. Figures 11 and 12 consist of several sections. On the left, the RGB composite is shown, and the next column presents the red, green and blue channels, which are the input of the neural network. The next three columns show the remaining spectral bands of the image (see Table 2 for details of the name and number of bands); these nine bands are the output of the neural network. Finally, the last three columns show the absolute error that results from comparing the generated bands with the real ones. To improve the visual inspection of the error, the scale has been adjusted between 0 and 0.2 in these figures, since no error in the bands exceeded that value. Taking a closer look at Figure 11 (pasture image), the RMSE is under 100, in a range of values from 0 to 16,384, for all the spectral bands, but a parcel of forest in the middle of the pastures, visible at the top left of the image, shows the highest error (highest intensity) in the error map. In the image of the ocean, a smaller error is observed, below 20, except in B01, where it is greater than 100. As explained in Section 3.1, the spectral band B01 was upscaled to 60 × 60. Since upscaling makes the image patch look blurry and lowers its quality for the neural network, a higher RMSE is expected for this spectral band. The average intensity of the pixels was also compared, as illustrated in Figure 13 for the pasture image and Figure 14 for the ocean image. For each band, the average value is calculated and plotted in the left graph of these figures; the right graph shows a random pixel of the image. The average intensity shows a very good match for the pasture image, see Figure 13 (left), and a slight difference for the ocean image, see Figure 14 (left), especially in the first band (443 nm), which is the spectral band with the highest RMSE, as shown in the previous section. The difference between the real multispectral image and the reconstructed one is best appreciated when selecting a random pixel. In Figure 13 (right) it is observed between bands B06 (741 nm) and B09 (945 nm). In the ocean image, see Figure 14 (right), the difference also lies in the vegetation red edge range from 704 nm to 783 nm. The details of the wavelengths and names of the bands can be seen in Table 2.
Discussion
In view of the results obtained in this work, multispectral images can be generated from RGB images with sufficient quality using CGAN models. The RMSE measured in a range from 0 to 16,384 was on average 316 using the ResNet generator on a set of 2000 test images. This result follows from the combination of three ingredients studied in this work: the contextual information exploited by convolutional networks, CGAN training that prioritizes a correct image structure through the combination of a generator and a discriminator, and the use of a ResNet model as a generator, which reduces overfitting and allows training for more epochs.
A limitation of deep-learning methods based on supervised learning, such as the one designed in this work, is that they require training samples. In the same way as supervised classifiers, which must be trained with the same types of materials (classes) that are to be classified in the images, the CGAN training must be carried out with samples of the materials that will be present in the images to be reconstructed. In this sense, the presence of materials on which the deep-learning network has not been trained would produce an indeterminate output. This does not imply that the reconstruction should be limited to images of a relatively homogeneous surface (forests, crops, etc.), but rather that all the materials included in the image to be reconstructed must be present during training.
In general, these limitations also apply to anomaly analysis, which is of great interest in many applications, e.g., in the detection of areas with plant damage for agriculture or ecological monitoring. However, in some cases this analysis may be feasible without specific training. For example, when the ground cover has intermediate levels of vegetation, ranging from bare soil to leafy vegetation. Like a supervised classifier, the spectral reconstruction network should be able to correctly handle these intermediate cases.
Fortunately, very extensive land cover databases are now available to facilitate the training of deep-learning networks. The BigEarthNet database [28], which contains approximately half a million images taken from the Sentinel-2 satellite showing an aerial view of 10 European countries, provides an ideal scenario for machine learning. Although this work focuses on the Sentinel-2 satellite, the database also provides images from the Sentinel-1 satellite. In addition, it is a publicly accessible database. The use of this dataset in this work serves as a starting point for future contributions and improvements, since it establishes a baseline of the results that can be achieved with the neural network architectures studied in this work.
In certain processing chains, spectral reconstruction can be included as an additional step (generation of multispectral images on the basis of RGB images). In these cases, multispectral reconstruction could be considered as a preprocessing stage, like that performed with filters and morphological or attribute profiles to highlight structures in the images. For example, this spectral preprocessing could be useful in classification and object detection operations. One advantage that spectral reconstruction networks have over classifiers is that their operation is completely automatic. For classifiers, images have to be labeled manually, often at the pixel level, which is labor-intensive and error-prone, and in some cases requires field visits. On the contrary, the CGAN training is carried out automatically, since it is a regression and both the input and the output are obtained from the databases. In this way, spectral processing operations are decoupled from classification operations in the processing chain, with the advantages of massive training that the former have, since labeled images are not required.
CGAN models such as the ones used in this work can also present some disadvantages. First of all, they require substantial computational resources. For example, training the CGAN with a ResNet as a generator requires 62 min on the GPU of a commodity PC. However, the training is performed only once, while the inference operations are much faster (a single pass compared to 200 epochs during training). On the other hand, the design of CGANs is difficult. If the two networks that make up the CGAN, generator and discriminator, are not well balanced, the model will not reach convergence during training. Therefore, a careful study of the details of the networks is required to achieve effective designs.
Conclusions
The generation of synthetic multispectral images from color images offers the possibility of increasing the volume of images in datasets that can be used for supervised training, for example for classification of the Earth's surface. These synthetic images must be generated with sufficient quality to be useful.
In this article, a study and comparison of generators for Conditional Generative Adversarial Networks (CGANs) applied to multispectral image reconstruction from RGB images in remote sensing was presented. Among the CNN, U-Net and ResNet generators under study, the ResNet obtained the best results in terms of Root Mean Square Error (RMSE). It was also possible to train this network for more epochs, avoiding the overfitting that occurs in other proposals using a U-Net. The discriminator used in all CGAN models was a PatchGAN classifier.
The runtime was also analyzed, showing that the U-Net and ResNet generators consume more time. Although the experiments were executed using the computational capacity of a GPU, the training time of the model using the ResNet was 1 h and 2 min. However, the inference operation from RGB to multispectral takes only a few seconds.
The study was carried out on a public database of multispectral images, the BigEarthNet database. The wide range of images of the Earth's surface available in this dataset was useful for generalizing the training. Being an open-access database, it makes it possible to establish a baseline of the results that can be achieved with the neural network architectures studied in this work.
As future work, the influence of different discriminators on the ResNet generator can be studied. It is also of interest to dive into the ResNet architecture to improve the execution time, as other authors have done for the U-Net when it is used within a CGAN network. These future works could be executed on a high-performance computing server with a GPU with more RAM, giving the possibility of using the entire dataset.
All are co-funded by the European Regional Development Fund (ERDF).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data used in this study are openly available in the BigEarthNet with Sentinel-2 Image Patches dataset (BigEarthNet: A Large-Scale Sentinel Benchmark Archive). Available online: https://bigearth.net/ (accessed on 29 January 2022). DOI: 10.1109/IGARSS.2019.8900532.
Acknowledgments:
The authors would like to thank Universidade de Santiago de Compostela for its support.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
| 9,734.2 | 2022-02-09T00:00:00.000 | ["Computer Science", "Environmental Science"] |
Kinetic approach to a relativistic BEC with inelastic processes
The phenomenon of Bose-Einstein condensation is investigated in the context of the Color-Glass-Condensate description of the initial state of ultrarelativistic heavy-ion collisions. For the first time, in this paper we study the influence of particle-number changing $2 \leftrightarrow 3$ processes on the transient formation of a Bose-Einstein condensate within an isotropic system of scalar bosons, by including $2 \leftrightarrow 3$ interactions of massive bosons with constant and isotropic cross sections, following a Boltzmann equation. The one-particle distribution function is decomposed into a condensate part and a non-zero momentum part of excited modes, leading to coupled integro-differential equations for the time evolution of the condensate and the phase-space distribution function, which are then solved numerically. Our simulations converge to the expected equilibrium state, and only for $\sigma_{23}/\sigma_{22} \ll 1$ do we find that a Bose-Einstein condensate emerges and decays within a finite lifetime, in contrast to the case where only binary scattering processes are taken into account and the condensate is stable due to particle-number conservation. Our calculations demonstrate that Bose-Einstein condensates in the very early stage of heavy-ion collisions are highly unlikely if inelastic collisions are significantly participating in the dynamical gluonic evolution.
INTRODUCTION
A deconfined system of quarks and gluons, under extreme conditions of high temperatures and high densities, can be produced and explored in experiments of ultrarelativistic heavy-ion collisions. Experimental observables such as elliptic-flow measurements strongly suggest an early collective-fluid behavior of a medium close to local thermal equilibrium. However, the description of the prethermalization dynamics of the initial off-equilibrium many-body system produced in heavy-ion collisions is still an outstanding problem.
The early stage of heavy-ion collisions is well described within the color-glass-condensate (CGC) effective field theory [1,2], where the heavy nuclei behave as very dense gluon systems with highly energetic colored partons acting as sources of soft dynamical gluon fields. In this picture, during the collision the hard partons traverse each other while the highly occupied soft gluon fields interact via non-Abelian interactions, resulting in the creation of longitudinal chromo-electric and -magnetic fields. This leads to the so-called Glasma [2][3][4][5] state of high gluon density, which runs through a very short isotropization stage [6,7]. Given the high particle density, which is parametrically larger than the thermal-equilibrium value, the system possesses a strongly interacting nature due to coherently enhanced scattering even though the coupling is weak. Thus the possible formation of an off-equilibrium Bose-Einstein condensate (BEC) has drawn stronger attention in recent years [8,9]. Similar issues about off-equilibrium BEC formation arise also in the context of early-universe reheating after inflation [10,11] and in systems of cold atoms [11,12].
The formation of a BEC is a fundamental consequence of quantum statistics, where above a certain critical density or below a certain critical temperature any more added bosons must occupy the ground state coherently. The condensation dynamics, especially far from equilibrium, is an interesting issue but still under debate. Many studies have been performed to understand the nonequilibrium dynamics of BECs formation within either a kinetic approach or classical field theory, if solely elastic processes are incorporated [8,[13][14][15][16][17][18].
Inelastic scattering may qualitatively change the picture, allowing only for the formation of a transient BEC. In [19] it is found that inelastic collisions will speed up the thermalization in the infrared regime and may catalyze a faster onset of a BEC. The following study [20] suggested a complete hindrance of BEC formation for massless gluons at vanishing momentum. Within the description of a nonequilibrium massive bosonic O(N) theory applying the 2PI formalism of real-time Schwinger-Keldysh quantum field theory it has been recently shown that the formation of a BEC is potentially prevented by particle-number changing processes [21]. However, a concrete kinetic simulation for a possible transient BEC has not been included in these studies.
So far no kinetic description has been elaborated to describe the expected transient formation and decay of a BEC, initially possible in an off-equilibrium system, including both elastic and inelastic processes. This paper addresses the dynamics of the condensation and thermalization of massive bosons. For this a coupled set of Boltzmann kinetic equations for a transient BEC and a phase-space distribution function is formulated and includes 2 → 2 and particle-number changing 2 ↔ 3 reactions.
KINETIC EQUATIONS
In this work, we focus on an isotropic and homogeneous system. If the evolution is dominated by two- and three-body interactions, the corresponding Boltzmann equation for a phase-space distribution function $f(\vec p) = d_g\,\mathrm{d}N/[\mathrm{d}^3x\,\mathrm{d}^3p/(2\pi)^3]$, where $d_g = 16$ is the gluon degeneracy factor taking two spin and eight color states into account, is given by Eq. (1) [22]. The indices refer to the momenta of the participating particles, and $f_i$ denotes the corresponding one-particle distribution function $f(t,\vec p_i)$. The collision integrals take into account quantum statistics via Bose enhancement factors $(f_i/d_g + 1)$, leading to the correct long-time equilibrium solution for bosons. The matrix elements $|M|^2$ are taken as isotropic with a constant cross section [23,24], $|M_{2\leftrightarrow 2}|^2 = 32\pi s\,\sigma_{22}$, with $s = (P_1 + P_2)^2$ denoting the center-of-momentum energy squared. Here we point out that the interesting quantity for the simulations is the ratio of the inelastic and elastic cross sections, $\sigma_{23}/\sigma_{22}$, which determines the processes that dominate the approach to full equilibration. Energy and particle densities are given by the corresponding moments of the distribution function, $\varepsilon = \int \mathrm{d}^3p/(2\pi)^3\, E\, f(\vec p)$ and $n = \int \mathrm{d}^3p/(2\pi)^3\, f(\vec p)$. The general argument for the emergence of a BEC is that, in the case of a conserved number of bosons, once the chemical potential converges to the mass the distribution can no longer accommodate the particles in the IR regime ($p < m$). In this case, a special treatment is necessary for the zero mode, by decomposing $f(|\vec p|)$ into a continuum-like part $f(|\vec p| > 0)$ for the higher modes and a discrete part $(2\pi)^3 n_c(t)\,\delta^{(3)}(\vec p)$ for the zero mode [15,16,18,25]. Given any initial nonequilibrium configuration of the gluon system, one can always determine via the conservation laws whether condensation is to be expected in the equilibrium limit by solving $\varepsilon_{\rm init} = \varepsilon_{\rm eq}(T,\mu)$ and $n_{\rm init} = n_{\rm eq}(T,\mu)$, Eqs. (4). If one encounters $\mu > m$ as the solution of Eqs. (4), the equilibrium state instead contains a condensate, $\varepsilon_{\rm init} = \varepsilon_{\rm eq}(T,\mu = m) + \varepsilon_c$ and $n_{\rm init} = n_{\rm eq}(T,\mu = m) + n_c$, Eqs. (5), where $\varepsilon_c$ and $n_c$ are the energy and particle density of the condensate. These considerations only apply for number-conserving scattering processes ($2 \leftrightarrow 2$). However, if one introduces particle-number changing $2 \leftrightarrow 3$ scattering processes, this argument breaks down for massive particles, because in thermal equilibrium necessarily $\mu = 0$, implying that a stable condensate cannot exist.
By inserting the ansatz $f(\vec p) = f_{|\vec p|>0} + (2\pi)^3 n_c(t)\,\delta^{(3)}(\vec p)$ into Eq. (1), we obtain coupled evolution equations for the nonzero momentum modes and for the condensate, Eqs. (6) and (7). Every diagrammatic contribution displayed in Eqs. (6) and (7) is related to a specific collision integral, with c (condensate) and g (gluon) denoting the participants of the scattering process. The numerical factors relate to the combinatorial weights of the diagrams. The details are straightforward but extremely lengthy. For the isotropic case the scattering angles can be integrated out analytically, leaving one-, two- and three-dimensional collision integrals, which are solved numerically. The distribution function is discretized, with the grid becoming finer in the low-momentum region. For the differential equations we employ an efficient high-order adaptive Runge-Kutta method (Cash-Karp) [26], while the collision integrals are treated with two different integration methods: for the one- and two-dimensional integrals we use simple Simpson quadrature, and for the three-dimensional integrals we employ the Vegas Monte Carlo integration routine from [27].
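As an illustration of the numerical setup, the following Python sketch builds a momentum grid that is denser at low momentum and evaluates the particle-density moment of an isotropic distribution with Simpson quadrature. The grid bounds, the logarithmic spacing, the smoothing width of the step profile, and the normalization convention are assumptions of this sketch; only the use of Simpson quadrature, the refinement at low momentum, and the CGC-like profile discussed in the next section are taken from the text.

```python
import numpy as np
from scipy.integrate import simpson

d_g, Qs, f0 = 16, 1.0, 0.45     # degeneracy, saturation scale (GeV), step height

def momentum_grid(p_min=1e-3, p_max=5.0, n=400):
    """Logarithmically spaced momentum grid (GeV): denser at low momentum,
    where the condensation dynamics take place."""
    return np.geomspace(p_min, p_max, n)

def number_density(p, f):
    """n = int d^3p/(2 pi)^3 f(p) = 1/(2 pi^2) int dp p^2 f(p) for isotropic f
    (the overall normalization convention is an assumption of this sketch)."""
    return simpson(p**2 * f, x=p) / (2.0 * np.pi**2)

# CGC-like initial profile d_g * f0 * Theta(1 - p/Qs), here with a smooth tail
# around p ~ Qs; the tail width of 0.05 GeV is an illustrative choice.
p = momentum_grid()
f_init = d_g * f0 / (1.0 + np.exp((p - Qs) / 0.05))
n_init = number_density(p, f_init)
```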
INITIAL CONDITION
In the context of the CGC framework, the two most relevant quantities are the saturation scale $Q_s$ and the coupling strength $\alpha_s$, which determines the population density $\propto 1/\alpha_s$ of the initial state. As an initial nonequilibrium isotropic profile for gluons, formed at time scales of approximately $1/Q_s$, one usually considers a step function of the form $f_{\rm init}(p) = d_g f_0\,\Theta(1 - p/Q_s)$, with $f_0 \sim 1/\alpha_s$ [8,15,16,18]. However, we use a similar function with a smooth tail around $p \approx Q_s$. Fixing $Q_s$ at 1 GeV, the only free parameter left is $f_0$, the step height. Various studies have shown that two scenarios can be observed from this initialization if the equilibration dynamics are dominated by binary scattering: the underpopulated case ($f_0 < f_c$), where the chemical potential never reaches the mass, and the overpopulated case ($f_0 > f_c$), where $\mu = m$ and consequently a BEC must emerge. Our investigation focuses on particles with masses $m = 100\,(300, 500)$ MeV and a cross section of $\sigma_{22} = 1$ mb. These values are close to expected hard-thermal-loop effective pole masses of approximately $gT$ [28]. The mass acts as an effective IR regulator for the scattering or "emissions." In nature the initialization of the condensate is due to spontaneous fluctuations. Because we choose a deterministic approach, Eq. (7) implies $\dot n_c \sim n_c$, i.e., condensation does not occur if $n_c$ vanishes initially. To overcome this issue we extract effective values for the chemical potential $\mu_{\rm eff}$ and the temperature $T_{\rm eff}$ by fitting the IR region ($f_{\rm IR}(p < m)$) of the distribution function to the Bose-Einstein distribution; a minimal sketch of such a fit is given below. If $\mu_{\rm eff}$ approaches $m$ (let us name this point in time $t_{\rm onset}$), we manually insert a finite but negligibly small seed into the zero mode, $n_c(t = t_{\rm onset}) = 10^{-6}\, n_{\rm init}$ [15,16,18,25].
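In the sketch below, the fitting routine, the initial guesses, and the infrared cut-off choice are our assumptions; only the fitted Bose-Einstein form and the restriction to p < m follow the text. The fit also implicitly assumes µ_eff < E for all sampled momenta.

```python
import numpy as np
from scipy.optimize import curve_fit

d_g, m = 16, 0.1     # degeneracy and gluon mass in GeV (the m = 100 MeV case)

def bose_einstein(p, T_eff, mu_eff):
    """Equilibrium Bose-Einstein distribution for massive particles."""
    E = np.sqrt(p**2 + m**2)
    return d_g / np.expm1((E - mu_eff) / T_eff)

def fit_ir(p, f, p_cut=m):
    """Fit the infrared part (p < m) of the numerical distribution to a
    Bose-Einstein shape and return (T_eff, mu_eff); the onset of condensation
    is flagged when mu_eff approaches m."""
    mask = p < p_cut
    (T_eff, mu_eff), _ = curve_fit(bose_einstein, p[mask], f[mask],
                                   p0=[0.5, 0.0], maxfev=10_000)
    return T_eff, mu_eff
```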
In the following simulations, $f_0 = 0.45\,(2.0)$ has been chosen such that the condensation criterion is generally fulfilled, and we vary the ratio $\sigma_{23}/\sigma_{22}$ in detail. Our simulations of Eqs. (6) and (7) start at $t = 0$.
RESULTS
In Figs. 1 and 2, the main results are depicted and compared to the known case of evolution under solely binary scattering processes, for several cross-section ratios $\sigma_{23}/\sigma_{22}$. The typical overpopulated evolution for $2 \leftrightarrow 2$ interactions consists of a particle cascade toward the soft modes (Fig. 1(a)), followed by its decrease toward the equilibrium distribution (e), while a condensate is generated until equilibrium is reached. The introduction of $2 \leftrightarrow 3$ kinetics dramatically changes this picture. The first observation is that the influx of particles toward the soft modes (Fig. 1(b), $\sigma_{23}/\sigma_{22} = 0.0049$) is decelerated compared to the previous case, but is still sufficient to hit the onset condition somewhat later (Fig. 3), consequently generating a condensate. However, once the Bose-Einstein shape for $\mu = m$ is recovered ($t \gtrsim 0.9$ fm/c, Figs. 1(f) and 2), we observe that the condensate decays, contrary to the case with only particle-number conserving $2 \leftrightarrow 2$ processes. For gradually larger values of $\sigma_{23}/\sigma_{22}$, the characteristic particle transport towards the soft modes is further damped and $\mu_{\rm eff}$ never reaches the onset of condensation. If $\sigma_{23}/\sigma_{22} \gtrsim 0.01$, no condensation into a BEC is observed. The behavior of the chemical potential $\mu_{\rm eff}$ can be seen in Fig. 3. For $\sigma_{23} = 0$, an equilibrium state with $\mu_{\rm eff} = m = 0.1$ GeV is reached. Among the inelastic cases, only for the two smallest ratios, $\sigma_{23}/\sigma_{22} = 0.002$ and $0.0049$, does $\mu_{\rm eff}$ reach $m$, before finally decreasing again toward the equilibrium state with $\mu_{\rm eff} = 0$, as expected if particle number is not conserved.
In Fig. 4 we show the time evolution of the effective chemical potential $\mu_{\rm eff}$ for various masses, $m = 100$, 300 and 500 MeV. Note that for these calculations we employ a strongly overpopulated initial condition with $f_0 = 2$. Inspecting the calculations, only for $\sigma_{23}/\sigma_{22} = 0.0078$ and for masses $m = 300$ and 500 MeV does the chemical potential just touch the mass limit $\mu_{\rm eff} = m$, although no condensation starts. The trend of earlier onset times for BEC formation of heavier particles has also been found in a similar study with only elastic collisions [29]. Still, taking into account inelastic $2 \leftrightarrow 3$ collisions, no condensation occurs for the strongly overpopulated initial condition, for either smaller or larger masses. Only if $\sigma_{23}/\sigma_{22} \lesssim 0.005$ can a momentary and tiny BEC develop.
CONCLUSIONS
In this paper we have investigated the Bose-Einstein condensation of gluons within kinetic theory, explicitly including number-changing $2 \leftrightarrow 3$ processes. The presented scenario considers an overpopulated nonequilibrium bosonic system akin to Glasma-type initial conditions. The bosons have been taken with a small but finite mass. The situation is similar to the scenario of [21]. The cross sections are not those of perturbative QCD; on the other hand, binary scatterings in thermal QCD are regulated by finite Debye-screening masses of order $O(gT)$. Radiative perturbative QCD emissions are important for describing the observed jet attenuation but also the significant lowering of the shear-viscosity over entropy-density ratio [30,31]. The latter fact can be effectively rephrased in terms of significant $2 \leftrightarrow 3$ isotropic collisions [23].
Our simulations have shown that a BEC may form for a limited time if $\sigma_{23}/\sigma_{22} \ll 1$. For the present physical parameters of the masses and the overpopulation parameter $f_0$, a BEC can typically only appear if $\sigma_{23}$ is less than 1% of $\sigma_{22}$. The results suggest that, as expected, particle-number conserving and particle-number changing processes are counteracting mechanisms for the formation and destruction of a BEC. We note that the individual collision integrals scale with the occupation density of the system like $f^3$ (elastic) and $f^4$ (inelastic), which makes for a delicate interplay between the possible formation and the immediate decay of a BEC.
Summarizing, our calculations show that Bose-Einstein condensates in the very early stage of heavy-ion collisions are highly unlikely if inelastic collisions are significantly participating in the dynamical gluonic evolution.
| 3,395 | 2019-06-28T00:00:00.000 | ["Physics"] |
Binary Black Hole Encounters, Gravitational Bursts and Maximum Final Spin
The spin of the final black hole in the coalescence of nonspinning black holes is determined by the ``residual'' orbital angular momentum of the binary. This residual momentum consists of the orbital angular momentum that the binary is not able to shed in the process of merging. We study the angular momentum radiated, the spin of the final black hole and the gravitational bursts in a series of orbits ranging from almost direct infall to numerous orbits before infall that exhibit multiple bursts of radiation in the merger process. We show that the final black hole gets a maximum spin parameter $a/M_h \le 0.78$, and this maximum occurs for initial orbital angular momentum $L \approx M^2_h$.
The spin of the final black hole in the coalescence of nonspinning black holes is determined by the "residual" orbital angular momentum of the binary. This residual momentum consists of the orbital angular momentum that the binary is not able to shed in the process of merging. We study the angular momentum radiated, the spin of the final black hole and the gravitational bursts in a sequence of equal-mass encounters. The initial orbital configurations range from those producing an almost direct infall to others leading to numerous orbits before infall, with multiple bursts of radiation while merging. Our sequence consists of orbits with fixed impact parameter; what varies is the initial linear, or equivalently angular, momentum of the black holes. For this sequence, the final black hole of mass M_h gets a maximum spin parameter a/M_h ≈ 0.823, with this maximum occurring for initial orbital angular momentum L/M²_h ≈ 1.176.
A few years ago, after a decades-long period of development, breakthroughs were made in computational modeling of strong gravitational fields that now allow numerical relativists to successfully simulate binary black holes (BBH) from inspiral through merger. In general terms, there are now two computational recipes to follow. One of them is based on a generalized harmonic formulation of the Einstein equations [1,2] and uses excision [3,4] of the black hole (BH) singularities. The other recipe, called the moving puncture recipe, involves a BSSN [5,6] formulation, punctures to model BH singularities, and a gauge condition that lets these punctures move throughout the computational domain [7,8]. Using these recipes, many studies involving interacting BHs and their generated gravitational radiation have been carried out, including gravitational recoil [9,10,11,12,13,14,15], spin hang-up [16] and matches to post-Newtonian (PN) approximations [17,18]. Most center on astrophysical implications and connection to future gravitational wave observations.
BBH simulations also enable studies of strong nonlinear phenomena regardless of traditional gravitational astrophysics consequences. A recent example is the work in Ref. [19] on the self-similar behavior found in the approach to the merger/flyby threshold of BBHs. Similar merger thresholds in BBH encounters or scatterings form the context for our work.
We consider orbits in which the BHs initially fly past one another, but then fall back to orbit and merge. We focus on the gravitational waveform and the angular momentum radiated from such encounters. Serendipitously, we find significant astrophysical implications: both the existence of a maximum in the final BH spin and of multiple-encounter orbits with associated multiple bursts of gravitational radiation. Ref. [19] considered only the first close encounter or "whirl," and that study did not extend the evolutions to find possible fall-back orbits such as those considered here. The work in Ref. [19] and our work here have to date been the only studies considering these highly eccentric orbits; while there have been high-order PN studies of inspiral, the cases studied so far have described relatively smooth inspirals [20].
All our orbits are parabolic or hyperbolic encounters. Depending on the merger, the fraction of angular momentum radiated varies significantly (0.05 ≲ J_rad/L ≲ 0.55, with L the initial orbital angular momentum of the binary). This emission of angular momentum sets an upper limit of a/M_h ≈ 0.823 for the spin parameter of the final BH; this maximum occurs when L/M²_h ≈ 1.176, with M_h the mass of the final merged BH.
As in our previous BBH studies [11,21,22], we use a code based on the BSSN formulation and the moving puncture recipe. The results here were obtained with a 634³ M computational domain consisting of 10 refinement levels, with a finest resolution of M/52. We set up nonspinning equal-mass BHs using Bowen-York initial data [23]. The mass of each BH is M/2, computed from √(A_ah/16π) with A_ah the apparent horizon area. The data have the BHs on the x-axis: BH_± is located at ±5 M and has linear momentum P_± = (∓P cos θ, ±P sin θ, 0). We keep the angle constant at θ = 26.565° = tan⁻¹(1/2); thus the impact parameter is ∼4.47 M. The total initial orbital angular momentum is given by L/M² = 10 (P/M) sin θ ẑ. We obtain a one-parameter family of initial data by varying the magnitude of the initial momentum in the range 0.1145 ≤ P/M ≤ 0.3093. At the lower limit of the momenta, merger occurs within less than half an orbit of inspiral. We then consider successively higher initial momentum until we find solutions that will clearly require a very long, "infinite" time to merge.
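As a quick sanity check on these initial-data relations (using only quantities quoted in the paragraph above), one can verify the impact parameter implied by the ±5 M positions and the fixed angle, and evaluate L/M² at the two ends of the momentum family; the lower end reproduces the L/M² ≈ 0.512 quoted below for the most bound case.

```python
# Check of the quoted initial-data relations: impact parameter b from the
# +/-5 M positions with theta = arctan(1/2), and L/M^2 = 10 (P/M) sin(theta)
# at the two ends of the scanned momentum range.
import numpy as np

theta = np.arctan(0.5)                    # 26.565 degrees
impact_parameter = 10.0 * np.sin(theta)   # 10 M separation projected onto P
print(f"theta = {np.degrees(theta):.3f} deg, b = {impact_parameter:.3f} M")

for P_over_M in (0.1145, 0.3093):         # lower/upper ends of the family
    L_over_M2 = 10.0 * P_over_M * np.sin(theta)
    print(f"P/M = {P_over_M}: L/M^2 = {L_over_M2:.3f}")
```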
The results are summarized in Figs. 1 and 2. The top panel in Fig. 1 shows the spin a/M_h of the final BH as a function of the initial orbital angular momentum L/M²_h. The spin and mass of the final BH were computed using the apparent horizon formula [21,24]. The bottom panel of Fig. 1 shows the corresponding fraction of angular momentum radiated, J_rad/L. The cases in Figs. 1 and 2 were used to check convergence and make error estimates. We found that the results are consistent with the 4th-order accuracy of our code and that the errors in the quantities displayed in these figures are not larger than 3%.
We have selected six encounters, labeled Ea–Ef, that are representative of the different behaviors in our series; the first of these, Ea, has L/M² = 0.512. For L/M²_h ≲ 0.8, the radiated angular momentum is J_rad/L ≲ 0.15, so the final BH has a/M_h close to L/M²_h. The evolution is rather simple in these cases: immediate merger, with minimal inspiral. For instance, in case Ea (Fig. 3), L/M²_h = 0.521 and J_rad/L = 0.05; thus most of the angular momentum goes into the final BH, a/M_h = 0.496. Fig. 4.Ea shows the corresponding radiated gravitational wave (M r Re Ψ₄^{2,2}). All waveforms were extracted at radius 50 M.
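The angular-momentum bookkeeping implied by these numbers can be checked with a few lines of arithmetic: for nonspinning initial BHs, a/M_h ≈ (1 − J_rad/L)·(L/M_h²). The J_rad/L values for cases Eb and Ec below are inferred from the quoted spins rather than stated directly in the text, so treat them as assumptions of this sketch.

```python
# Consistency check of the angular-momentum bookkeeping: the final spin is the
# initial orbital angular momentum minus what is radiated,
#   a/M_h = (1 - J_rad/L) * (L/M_h^2).
cases = {
    "Ea": {"L_over_Mh2": 0.521, "Jrad_over_L": 0.05},  # values quoted above
    "Eb": {"L_over_Mh2": 1.176, "Jrad_over_L": 0.30},  # J_rad/L inferred here
    "Ec": {"L_over_Mh2": 1.522, "Jrad_over_L": 0.55},  # "almost 50%" radiated
}
for name, c in cases.items():
    a_over_Mh = (1.0 - c["Jrad_over_L"]) * c["L_over_Mh2"]
    print(f"{name}: a/M_h ~ {a_over_Mh:.3f}")
```

The three resulting values (≈0.50, 0.82, 0.68) agree with the spins quoted for these cases.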
As the initial angular momentum increases, the radiated angular momentum also increases, suppressing and limiting the spin of the final BH. Eventually, for large enough initial angular momentum, so much angular momentum is radiated that, as seen in Fig. 1, the final spin reaches a maximum of a/M_h ≈ 0.823 at L/M²_h ≈ 1.176. Fig. 3.Eb shows the tracks of the BHs in the neighborhood of this maximum, and Fig. 4.Eb shows the corresponding radiated waveform. For even larger initial angular momentum, the spin of the final BH actually decreases with increasing L/M²_h. The reason is that the merger is not only preceded by several hang-up orbits [16,19], but also yields a highly distorted BH that radiates copiously as it settles down. Case Ec, with a/M_h ≈ 0.68 and L/M²_h ≈ 1.522, represents this situation, in which almost 50% of the initial angular momentum is radiated (see the path in Fig. 3.Ec and the radiated waveform in Fig. 4.Ec).
A persistent feature of the mergers with L/M²_h ≲ 1.3 is that the separation between the BHs (the coordinate distance between the punctures) decreases monotonically with time (monotonic inspiral). Comparing cases Ea, Eb and Ec in Fig. 4, we see general qualitative agreement: inspiral-generated gravitational waves with frequency and amplitude increasing in time, followed by essentially fixed-frequency ringdown waves. There is, however, a hint of the disappearance of the monotonic inspiral in case Ec. The amplitude of the gravitational radiation has a "shoulder" at about time ∼110 M: for a period of time equal to two wave oscillations, the decline of the amplitude ceases and then recommences. The relative orbital separation as a function of time (Fig. 6.Ec) clearly shows a plateau in the separation centered at time ∼50 M, which is absent for cases Ea and Eb. For a brief period there is a nearly circular phase in which the BHs "want" to fly apart, but just manage to stay at roughly constant separation. The last three points in Figs. 1 and 2 are the cases labeled Ed, Ee and Ef. They describe orbits without immediate merger but with "escape" and recapture; they all show initial approaches followed by increasing mid-evolution separations of 14 M, 25 M and 42 M before the final merger (see Figs. 3 and 6). Because the interaction involves two close approaches, there are two bursts of gravitational radiation, one from the first flyby [25] and the other from the merger (see Fig. 5). We are currently investigating astrophysical implications of detections of these multiple gravitational bursts and hang-ups in globular clusters [26]. For the Ef case, there is an approximate hang-up with separation ∼4–5 M around time ∼950 M, similar to the shoulder seen in Fig. 6.Ec around time ∼50 M. This structure shows up in the waveform for the Ef case; we actually see a (lower amplitude) precursor to the radiation burst associated with the merger, a hint that orbits with many repeated bounces are possible. For even slightly (0.1%) greater initial angular momentum than case Ef, the BHs complete approximately one loop and then escape. This is a possible indication of chaotic behavior (exponential dependence on initial conditions, cf. Ref. [19]). Repeated-bounce orbits would have to be found with initial angular momentum very slightly above that which resulted in Fig. 3.Ef. As with all critical phenomena, the problem becomes one of careful tuning of the parameters. Note that these interactions of nonspinning BHs produce chaotic orbital dynamics, in contrast to the chaos found in spin evolutions [27,28].
One of the main conclusions of our work is that there is an upper limit on the Kerr parameter of the final merged BH from nonspinning BH mergers. For our sequence this maximum is a/M_h ≈ 0.823. We can understand this observation by examining the timing of the formation of the final BH and the radiation from the merger. It appears that the merger occurs through an intermediate excited state which is essentially a highly distorted BH. We say "essentially" because a substantial amount of angular momentum is also radiated in the plunge immediately before the apparent horizon forms. This is consistent with close-limit BBH calculations [29] that show merging BHs behaving like a perturbed BH, even before a common apparent horizon forms, so long as the merging BHs are inside the peak of the effective potential of what will be the final BH. This intermediate state emits the largest part of the radiated energy and angular momentum. Because this mechanism is universal (excitation of such a state is inevitable, and it will inevitably radiate), it suggests that no merger of equal-mass (or, presumably, roughly equal-mass) BHs can lead to a final BH with maximal spin parameter a/M_h ≈ 1. This result does not directly affect spin-up by accretion, since mass accretion will not excite the low-l modes that strongly radiate angular momentum. Thus typical gas accretion can in principle lead to final spins much closer to the limit a/M_h = 1.
Work was supported by NSF grants PHY-0653443 to DS, PHY-0653303, PHY-0555436 and PHY-0114375 to PL and PHY-0354842 and NASA grant NNG 04GL37G to RAM. Computations under allocation TG-PHY060013N, and at the Texas Advanced Computation Center, University of Texas at Austin. We thank M. Ansorg, T. Bode, A. Knapp and E. Schnetter for contributions to our computational infrastructure. | 2,853.2 | 2008-02-18T00:00:00.000 | [
"Physics"
] |
Towards E-CASE Tools for Software Engineering
CASE tools have an important role in all phases of software systems development and engineering. This is evident in the huge benefits obtained from using these tools, including their cost-effectiveness, rapid software application development, and improved possibilities for software reuse, to name just a few. In this paper, the idea of moving towards E-CASE tools, rather than traditional CASE tools, is advocated, since E-CASE tools have all the benefits and advantages of traditional CASE tools and add to that all the benefits of web technology. This is presented by focusing on the role of E-CASE tools in facilitating the trend of telecommuting and virtual workplaces among software engineering and information technology professionals. In addition, E-CASE tools integrate smoothly with the trend of E-learning in conducting software engineering courses. Finally, two surveys were conducted for a group of software engineering professionals and students of software engineering courses. The surveys show that E-CASE tools are of great value to both communities of students and professionals of software engineering.
I. INTRODUCTION AND LITERATURE REVIEW
Computer-Aided Software/Systems Engineering (CASE) tools are a group of computerized programs designed to aid software engineers by automating the different tasks and activities involved in software development, covering all phases from information and data gathering through software system testing, deployment, operation, and maintenance.
The benefits and advantages of CASE tools include speeding up the software development process, automating repeated tasks such as reports and screen layouts, automating completeness and consistency checks, improving the possibilities of software reuse, improving software engineers' productivity, and helping to standardize processes and reports.
CASE tools have been used in software systems development for a very long time, and many have been developed since the term Software Engineering was coined. Some of them aid in all phases of software development, from requirements analysis through design, implementation, and testing. These CASE tools are called integrated CASE (I-CASE) tools or full-life-cycle CASE tools. For example, the Rational Rose UML CASE tool is considered an I-CASE tool that is used in all phases of software development [1]. Other examples of CASE tools that are not considered full-life-cycle CASE tools include UMLet, QuickUML, and MiniUML. UMLet is used for small models only and supports a subset of UML that can be used for teaching object orientation [2]. QuickUML is a very simple UML/CASE tool for beginners in object-oriented development, with support only for design class diagrams; it has very limited functionality in terms of forward and reverse engineering and no checks for validation and consistency [3]. MiniUML includes a small subset of the UML notation intended for introducing object-oriented classes; it supports design class diagrams only [4]. Other CASE tools are geared towards certain phases of software development. For example, a CASE tool for normalizing a relational database schema, called Normalizer, is presented in [5]. A CASE tool for generating an entity-relationship (ER) model from a relational database schema, called ERRDS, is presented in [6]. Another CASE tool, implementing a logic system for testing functional independent normal form in relational databases, is presented in [7]. The preceding CASE tools automate the process of dealing with relational databases. There are CASE tools that automate other parts of the software system development process. For example, a CASE tool to automate the process of obtaining a static class diagram from software requirements is presented in [8]. This CASE tool, called the Static Class Diagram Constructor (SCDC), is used to identify classes, their data members and member functions from a narrative description of the software requirements after the description is processed by natural language processing software.
These CASE tools have long been used in software engineering and have proved to be very beneficial in terms of productivity and cost saving. However, these CASE tools are all built as traditional two-tier client/server software applications. These applications are installed on computers, and the installation needs to be repeated every time a new version of the CASE tool software is released.
With the spread of web technology and its ease of implementation, many software systems are deployed using web technology, in addition to their use in traditional two-tier client/server environments, to benefit from the many advantages of web applications and technology. Therefore, it is natural to consider moving CASE tools to the web, in what we call E-CASE tools.
An online CASE tool for web application development is presented in [9]. The tool is limited in the sense that it focuses on web applications and not all types of applications.
The benefits and advantages of E-CASE tools are presented by focusing on two communities of software engineering, namely, software and information technology professionals and students of software engineering courses. For students, online support for E-learning courses has been used in many engineering courses, as presented thoroughly in [10].
These are not the only two fields where E-CASE tools are beneficial. For example, software development outsourcing is another field where E-CASE tools are of great value. To reduce costs, many software development companies are considering outsourcing as an alternative means of developing software solutions and products. The development teams need to communicate with each other during all phases of software development. Therefore, having an E-CASE tool that is accessible over the Internet is of great value to all information technology professionals related to the software project being developed, and such a tool is a great asset to software companies benefiting from outsourcing. In addition, E-CASE tools make it easier for software developers to communicate with the company outsourcing to them.
II. E-CASE TOOLS' MAIN ADVANTAGES AND BENEFITS
Traditionally, CASE tools have proved to be very beneficial in all phases of software systems development. In this paper, the focus is not on the traditional benefits that are well documented in the software engineering literature, but rather on the benefits that can be obtained from the E-technology supporting E-CASE tools.
A. E-CASE Tools' Benefits to Telecommuting and the Virtual Workplace
The term telecommuting has been used widely in the literature with varying definitions and meanings. Some people use the term "telecommuting" to mean remote working, virtual working, e-work, and other equivalent terms and phrases. Most people agree that telecommuting is a distributed work arrangement in which a group or team of people conducts work separated in time and/or space, benefiting from the advances and recent trends in information and communication technologies.
There are two common types of telecommuting. The first is telecommuting from home, and the second is telecommuting through a center set up by the office [11]. The most well-known and widespread form of telecommuting, and the one that most people prefer, is home-based: employees perform their work and assigned tasks from their homes. Center-based telecommuting uses an equipped center or place physically close to the employee's place of residence, where employees can go and perform their work and tasks rather than commuting to the office. This involves linking to the employees' company offices using the information and communication technology available at the center. The success or failure of telecommuting depends largely on the information and communication technology equipment of the remote working setting, which includes a computer system with a fast Internet connection.
Most businesses and industries are considering telecommuting and virtual workplaces as a viable alternative to the traditional way of conducting business. Most of the time, failures are due to a lack of computer training and of information and communication technology knowledge and understanding. However, software industry workers are the creators of these technologies and are powerful users of them. Therefore, telecommuting fits software industry employees best.
Most software development companies have opted to allow software engineering and information technology employees to work from home, benefiting from the hours saved commuting to and from work, especially in cities with heavy traffic. Thus, most software engineering and information technology professionals involved in software development prefer telecommuting. The main obstacle for software developers was their inability to access the models and designs of the software system being developed. E-CASE tools resolve this obstacle and enable software developers to work as if they were on their company premises.
Software professionals and employees working from home have access to all models, designs, and diagrams that document the project using an E-CASE tool that is accessible from the web, and thus tend to have high productivity in comparison with other peer employees.
In fact, most software project managers prefer to manage by objective rather than monitor the number of hours software professionals write on their timesheets. A software project manager assigns a task and a deadline to a software professional and cares less about the professional's physical presence than about the accomplishment of the task within the deadline.
A survey was conducted to obtain software engineers' and information technology professionals' feedback regarding the use of E-CASE tools. The survey uses an approach similar to that presented in [12] and consists of a number of questions regarding the use of traditional CASE tools and the E-CASE tool as part of their software development experience. The questions asked whether the professionals feel comfortable using CASE tools, whether the CASE tool documentation and procedures are clear and easy to follow, whether traditional CASE tools cause difficulties when working from home (telecommuting), whether they prefer E-CASE tools, whether E-CASE tools increase their interest in telecommuting and improve their productivity, and whether they support the use of E-CASE tools in general. The results of the survey are illustrated in Figure 1, where the blue bar represents a "Yes" answer and the red bar represents a "No" answer; the X-axis represents the question number, and the Y-axis represents the frequency of each answer. As shown in the graph, there is agreement that the software engineering professionals feel comfortable using CASE tools in general. There is also agreement that the CASE tool documentation is written clearly and the CASE tool procedures are easy to follow. There is a major problem in using the traditional CASE tool when working from home (telecommuting). There is total agreement among the software engineering and information technology professionals that they prefer to use the E-CASE tool as part of their software engineering career. In addition, E-CASE tools have increased software engineering professionals' interest in telecommuting as opposed to working from company premises, and their productivity has improved greatly using the E-CASE tools. Finally, software engineering professionals support the use of E-CASE tools in general.
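The per-question tallies behind such a bar chart can be produced with a few lines of code. The sketch below is purely illustrative: the response lists are hypothetical placeholders, not the paper's actual survey data, and the question numbering simply mirrors the scheme used in the figures.

```python
# Sketch of tallying yes/no survey answers into per-question frequencies.
# The response lists are hypothetical placeholders, NOT the reported results.
from collections import Counter

responses = {
    1: ["Yes"] * 14 + ["No"],   # hypothetical answers to question 1
    6: ["Yes"] * 15,            # hypothetical answers to question 6
}

for question, answers in sorted(responses.items()):
    counts = Counter(answers)
    print(f"Q{question}: Yes = {counts.get('Yes', 0)}, No = {counts.get('No', 0)}")
```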
B. E-CASE Tools' Benefits to E-Learning Support
Great efforts have been made in recent years to provide E-learning frameworks, portals, and related platforms. However, many E-learning course completion failures have been attributed to a lack of learner motivation, boring functionality, and other sources of frustration. In software engineering courses supported by E-learning, one of the major skills and intended learning outcomes required from the enrolled students is the use and mastery of CASE tools. To gain hands-on experience with the CASE tools, students have to switch from the E-learning system to a desktop application that represents the CASE tool. In addition, students do not have easy access to the CASE tools installed in their teaching institution's labs. To improve the learning process, an E-CASE tool that provides a more dynamic and interactive learning experience should be used. An E-CASE tool integrates smoothly with E-learning in teaching a software engineering course and is easily accessible from the students' place of residence, which makes it easy for them to use while working in the software engineering E-learning system.
A survey was conducted to obtain software engineering students' feedback regarding the use of E-CASE tools. The survey uses an approach similar to the one used with the software engineering professionals. The software engineering course is blended with E-learning support. The survey consists of a number of questions regarding the use of the E-CASE tool as part of the students' software engineering course and E-learning experience. The questions used in the survey include:
1. Did you feel comfortable using CASE tools in your software engineering course?
2. Were the CASE tool experiments clearly written?
3. Were the CASE tool experiment procedures easy to follow?
4. Did you have difficulties in using the help and documentation of the CASE tool?
5. Didn't you experience difficulties in switching between the E-learning platform and the CASE tool?
(Questions 6-10 are listed below Figure 1.)
The survey included a group of 14 students studying a software engineering course that is supported by an E-learning platform. The results of the survey are illustrated in Figure 2, where the blue bar represents a "Yes" answer and the red bar represents a "No" answer; the X-axis represents the question number, and the Y-axis represents the frequency of each answer. As shown in the graph, there is agreement that the software engineering students feel comfortable using CASE tools in general. There is also agreement that the CASE tool documentation is written clearly and the CASE tool experiments are easy to follow. There is a major problem in using the traditional CASE tool when switching between the E-learning platform and the traditional CASE tool. There is total agreement among all software engineering students that they prefer to use the E-CASE tool as part of their software engineering course.

III. CONCLUSIONS
CASE tools have proved very beneficial in software and computerized systems development. They are of great value in all phases of software systems development and engineering. This is evident in the huge benefits obtained from using these tools, including their cost-effectiveness, rapid software development, standardization of most reports and templates, and building software systems in a systematic way. In this paper, the idea of moving towards E-CASE tools is advocated. E-CASE tools have all the benefits and advantages mentioned so far and add to them all the benefits and advantages of web applications in terms of accessing the E-CASE tool from any place with an Internet connection. This is presented by focusing on the role of E-CASE tools in facilitating the trend of telecommuting and virtual workplaces among software engineering and information technology professionals. In addition, E-CASE tools integrate smoothly with the trend of E-learning in conducting software engineering courses. Finally, two surveys were conducted for a group of software engineering professionals and students of software engineering courses. The surveys show that E-CASE tools are of great value to both communities of students and professionals of software engineering.
Figure 1. E-CASE Tool Survey Results for Software Engineering Professionals.
6. Do you prefer the use of E-CASE tools rather than traditional CASE tools?
7. Will the use of E-CASE tool software increase your interest in the E-CASE tool labs?
8. Do you think that the use of the E-CASE tool is very valuable in terms of simplifying the learning process, both on campus and from the place of residence (home)?
9. Overall, do you support the use of E-CASE tools in software engineering courses rather than traditional CASE tools?
10. Overall, do you support the use of E-CASE tools in software engineering courses blended with E-learning?
Figure 2. E-CASE Tool Survey Results for Software Engineering Students.
In addition, E-CASE tools have increased software engineering students' interest in CASE tool labs and improved their learning process. Finally, software engineering students support the use of E-CASE tools in general, as well as the use of E-CASE tools in software engineering courses blended with E-learning.
The professional survey covered 15 software engineering professionals from different software development companies; its results are the ones illustrated in Figure 1. | 3,507 | 2013-02-12T00:00:00.000 | [
"Computer Science"
] |
Search for Neutrino Emission from the Cygnus Bubble Based on LHAASO γ-Ray Observations
The Cygnus region, which contains massive molecular and atomic clouds and young stars, is a promising Galactic neutrino source candidate. Cosmic-ray transport in the region can produce neutrinos and γ-rays. Recently, the Large High Altitude Air Shower Observatory (LHAASO) detected an ultrahigh-energy γ-ray bubble (Cygnus Bubble) in this region. Using publicly available track events detected by the IceCube Neutrino Observatory in 7 yr of full detector operation, we conduct searches for correlated neutrino signals from the Cygnus Bubble with neutrino emission templates based on LHAASO γ-ray observations. No significant signals were found for any employed template. With the 7 TeV γ-ray flux template, we set a 90% confidence level upper limit on the flux of neutrino emission from the Cygnus Bubble of 5.7 × 10^−13 TeV^−1 cm^−2 s^−1 at 5 TeV.
INTRODUCTION
Cosmic rays are high-energy astrophysical particles, primarily protons and atomic nuclei, but their origins have been a mystery for a century. Under the confinement of the Galactic magnetic field, the observed cosmic rays with energies up to several PeV are believed to originate from Galactic sources, called PeVatrons. Cosmic rays interact with the interstellar medium or the radiation field, generating both neutrinos (e.g., π+ → µ+ + ν_µ) and γ-rays (e.g., π0 → 2γ). High-energy electrons can also produce γ-rays through inverse Compton scattering; however, the cross section suffers increasingly stringent Klein-Nishina suppression for γ-rays with energies above 100 TeV. Therefore, the coincidence between neutrinos or γ-rays (> 100 TeV) and gas clumps provides critical evidence for the identification of hadronic PeVatrons.
The Cygnus region is an active star-forming area in our Galaxy and hosts various astrophysical sources, including massive young star clusters (YMCs, e.g., Cygnus OB2), pulsar wind nebulae (PWNe, e.g., TeV J2032+4130), and supernova remnants (SNRs, e.g., γ-Cygni). Fermi-LAT detected an excess of γ-ray emission (1-100 GeV) from the direction of the Cygnus region after subtracting the interstellar background and all known sources (Ackermann et al. 2011). The hard γ-ray spectrum points to freshly accelerated cosmic rays, whether cosmic-ray electrons or nuclei. This ∼2° extended γ-ray source, known as the Cygnus Cocoon, has been further observed at TeV energies by ARGO-YBJ (Bartoli et al. 2014) and HAWC (Abeysekara et al. 2021). In the latest observation of the Cygnus region, LHAASO reported the Cygnus Bubble at ultra-high energies (LHAASO Collaboration 2024), extending to more than 6° from the core, which is much larger than the Cygnus Cocoon. The γ-ray brightness follows the distribution of the molecular gas, especially for γ-rays above 100 TeV, suggesting that these γ-rays are produced by collisions between the gas and cosmic rays. We therefore expect to observe high-energy neutrinos from the Cygnus Bubble.
The IceCube Neutrino Observatory has previously searched for neutrino emission from PeVatron candidates observed by LHAASO. The hadronic components of the Crab Nebula and LHAASO J1849-0003 are constrained to be no more than ∼80% and ∼90% of the total γ-rays observed (Huang & Li 2022; Abbasi et al. 2023). As for the Cygnus region, the hadronic contribution is constrained to be less than 60% (Kheirandish & Wood 2019), although the resolved sources (e.g., TeV J2032+4130) are not removed from this region. Recently, Neronov et al. (2023) claimed a 3σ excess of neutrino signals from the central region (∼1°) of the Cygnus region. However, the neutrino emission from the entire Cygnus Bubble remains unclear.
In this study, we conduct two analyses of the Cygnus Bubble using a neutrino data sample of IceCube track events from 2011 to 2018. First, we search for neutrino emission from the Cygnus Bubble with a template likelihood method and set 90% C.L. upper limits on the muon neutrino flux. In addition to using the γ-ray flux maps as the neutrino emission template, we employ six other templates for comparison, testing different template radii. Second, we scan the region of the Cygnus Bubble and obtain neutrino hotspots, which are compared, respectively, with γ-ray hotspots, the gas distribution, and sources from TeVCat (Wakely & Horan 2008) and the first LHAASO catalog (Cao et al. 2023).
This paper is organized as follows. The LHAASO observations and the IceCube muon-track data are introduced in Section 2. The analysis methods for the template search and the Cygnus Bubble scan are introduced in Section 3. Results and further discussion are presented in Section 4. Finally, Section 5 summarizes the conclusions.
IceCube Neutrino Sample
The IceCube Neutrino Observatory is a cubic-kilometer detector located at the South Pole (Abbasi et al. 2009). Installed between 1.45 and 2.45 km below the surface of the ice, IceCube consists of 86 strings equipped with digital optical modules (DOMs), which detect Cherenkov light emitted by secondary charged particles (Aartsen et al. 2017a). Muon neutrinos propagating inside the Earth can produce ultrarelativistic muons via charged-current (CC) interactions. These muons, when traversing the detector, leave a track-like signature. With high statistics and a typical angular resolution of ≲1° at ∼TeV energies (Aartsen et al. 2020), track-like events are well suited for neutrino source searches.
In the analyses, we use 7 years of all-sky muon track data collected by the completed 86-string detector, namely IC86-2011 (IC86-I) and IC86-2012-18 (IC86-II) (IceCube Collaboration 2021). The data consist of three components: (i) experimental data events, including the reconstructed direction given by the R.A. (α) and decl. (δ), the angular uncertainty (σ), and the reconstructed muon energy (E_rec) for each event; (ii) instrument response functions, including the effective area A_eff(E_ν, δ_ν) and the smearing function M(E_rec|E_ν, δ_ν) — the smearing function gives the fractional count of simulated signal events within the (E_ν, δ_ν, E_rec) bin relative to all events in the (E_ν, δ_ν) bin, and with it the signal energy probability density function (PDF) of the likelihood can be derived under a source spectrum assumption; and (iii) the detector uptime, which records the periods of data taking.
LHAASO γ-Ray Data
LHAASO comprises composite detection arrays that aim to study cosmic rays and γ-rays (Cao 2010). Located at ∼29° North in Sichuan Province, China, LHAASO covers a large sky region spanning from −21° to 79° in declination. Two arrays of LHAASO, the Kilometer Square Array (KM2A) and the Water Cherenkov Detector Array (WCDA), are used for γ-ray detection. The ∼1.3 km² KM2A is able to detect photons with energies from 10 TeV to several PeV, while the 0.078 km² WCDA probes lower-energy photons ranging from 100 GeV to 20 TeV (He 2018). The angular resolution of KM2A is 0.4° at 30 TeV and reaches 0.2° at 1 PeV (Addazi et al. 2022). For the WCDA, the angular resolution is better than 0.2° at 10 TeV.
The Cygnus Bubble, recently reported by LHAASO, was measured by both KM2A and WCDA (LHAASO Collaboration 2024). The residual structure extends to ∼10° after the removal of all resolved γ-ray sources and the application of a circular mask with a radius of 2.5° around LHAASO J2018+3651. The γ-ray excess within a radius of 6° remains clear after accounting for the diffuse γ-ray background. In the 6°-radius region, the energy spectrum of the Cygnus Bubble is fitted by a log-parabola function over energies from 2 TeV to 2 PeV. The fitted photon index of the Cygnus Bubble is Γ = (2.71 ± 0.02) + (0.11 ± 0.02) × log₁₀(E/10 TeV). Eight photons with energies above 1 PeV are detected in this region, indicating the existence of super-PeVatron(s). The significance maps in different energy bands show a brightening in the center associated with massive molecular clouds. The γ-rays from the Cygnus Bubble are characterized by four components: two diffuse components with γ-ray emission proportional to the column density of atomic (H I, HI4PI Collaboration et al. 2016) and molecular clouds (MCs, Dame et al. 2001), and two extended sources, LHAASO J2031+4057 and LHAASO J2027+4119. LHAASO J2031+4057 is only observed by WCDA at energies below 20 TeV, while the other three components are observed by both WCDA and KM2A. The γ-ray flux map measured by LHAASO can be obtained from the spatial and spectral information of these components.
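For reference, the fitted log-parabola index quoted above can be evaluated directly; the short snippet below (energies chosen only for illustration, using the central values without the quoted uncertainties) shows how the spectrum softens across the fitted 2 TeV–2 PeV range.

```python
# Evaluate the fitted log-parabola photon index,
#   Gamma(E) = 2.71 + 0.11 * log10(E / 10 TeV),
# at a few representative energies spanning the fitted range.
import numpy as np

def photon_index(E_TeV):
    return 2.71 + 0.11 * np.log10(E_TeV / 10.0)

for E in (2.0, 10.0, 100.0, 2000.0):   # TeV
    print(f"E = {E:>7.1f} TeV: Gamma = {photon_index(E):.2f}")
```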
Template Search
An unbinned maximum likelihood method is widely used in neutrino point source searches (Braun et al. 2008, 2010). A detailed description of the point source likelihood can be found in Appendix A. Considering the large extension (∼6°) of the Cygnus Bubble, the point source likelihood is not suitable for this analysis. Here, we search for neutrino emission associated with the Cygnus Bubble following the ps-template method (Aartsen et al. 2017b). There are two modifications compared to the point source likelihood. First, a spatial template, rather than a two-dimensional (2D) Gaussian function, is used to describe the spatial distribution of signal events; the ps-template method accounts for the extension of the source by mapping the varying detector acceptance and convolving the template with the angular uncertainty of the events. Second, unlike in the point source likelihood, where the background is estimated using scrambled data with negligible point source signal contribution, for a large extended source the signal events in the data should be subtracted. Therefore, a signal-subtracted likelihood is constructed, and the background is estimated using scrambled data with the signal contamination subtracted (Pinat & Sánchez 2018).
The event-wise template likelihood (Aartsen et al. 2017b) is defined as L(n_s) = ∏_i [ (n_s/N) S_i + D_i − (n_s/N) S̃_i ], where n_s is the number of signal events under the source spectrum assumption with a spectral index γ, and N is the total number of events. S_i is the signal PDF, which depends on the location x_i, angular uncertainty σ_i, and muon energy proxy E_i of the i-th event. D_i is the scrambled background PDF, which is obtained from the data and therefore contains the scrambled signal component, while S̃_i is the scrambled signal PDF. Each PDF consists of a spatial term and an energy term. In the template likelihood, the neutrino spectrum is fixed and we fit only the number of signal events n_s by maximizing the likelihood. The neutrino spectrum is calculated with the parameterized energy distribution of secondary particles produced in p-p interactions, assuming that all the γ-rays originate from hadronic processes (details can be found in Appendix B).
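A minimal numerical sketch of fitting n_s by maximizing this signal-subtracted likelihood is given below. The per-event PDF values S_i, D_i, and S̃_i are placeholder arrays (in the real analysis they come from the event directions, energies, and the constructed templates), so the fitted value is illustrative only and need not equal the injected count.

```python
# Sketch: maximize the signal-subtracted template likelihood over n_s given
# placeholder per-event PDF values.  Not the analysis code.
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(n_s, S, D, S_tilde):
    N = len(S)
    per_event = (n_s / N) * S + D - (n_s / N) * S_tilde
    if np.any(per_event <= 0):          # guard against unphysical values
        return np.inf
    return -np.sum(np.log(per_event))

rng = np.random.default_rng(0)
N, n_injected = 10_000, 100
S = rng.exponential(1.0, N)             # placeholder signal PDF values
S[:n_injected] += 4.0                   # boost a few "signal-like" events
D = np.full(N, 1.0)                     # placeholder background PDF values
S_tilde = rng.exponential(1.0, N)       # placeholder scrambled-signal values

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1000.0),
                      args=(S, D, S_tilde), method="bounded")
print(f"best-fit n_s = {res.x:.1f}")
```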
The construction of the signal spatial PDF in the template likelihood can be described as follows. We start with the Cygnus Bubble template T_spat, which is treated as the neutrino spatial template. By convolving it with the IceCube acceptance M_acc, we obtain the true neutrino direction distribution after accounting for the detector efficiency. This map is then smoothed with a 2D Gaussian of width σ_i to account for the angular uncertainty of the events. Finally, the map is normalized to unity. The background PDF D_i is constructed with the same method as in the point source likelihood, while the scrambled signal PDF S̃_i is constructed following Pinat (2017).
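The three steps just listed (acceptance weighting, per-event Gaussian smoothing, normalization) can be sketched on a small flat-sky grid as below; the template shape, acceptance gradient, and pixel size are illustrative placeholders, not the analysis maps.

```python
# Sketch of the signal spatial PDF construction: template * acceptance,
# Gaussian smoothing with the per-event angular uncertainty, normalization.
import numpy as np
from scipy.ndimage import gaussian_filter

pixel_deg = 0.1
template = np.zeros((120, 120))
template[50:70, 50:70] = 1.0                   # placeholder extended source
acceptance = np.linspace(0.5, 1.5, 120)[:, None] * np.ones((120, 120))

def signal_spatial_pdf(sigma_event_deg):
    weighted = template * acceptance
    smoothed = gaussian_filter(weighted, sigma=sigma_event_deg / pixel_deg)
    return smoothed / (smoothed.sum() * pixel_deg**2)   # unit integral

pdf = signal_spatial_pdf(sigma_event_deg=0.5)
print(f"integral = {pdf.sum() * pixel_deg**2:.3f}")
```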
We use the γ-ray emission and the gas column density to weight the neutrino emission within a 6° radius from the bubble center. In the γ-ray flux template, the neutrino emission is assumed to follow the γ-ray flux map at 7 TeV (WCDA) and 50 TeV (KM2A). In the H I and MC templates, the neutrino emission follows the column densities of H I and MC, respectively, derived from the H I and CO emission. In the hydrogen (MC+H I) template, the neutrino emission follows the total hydrogen column density (2N_H2 + N_HI). In the Gaussian templates, the neutrino emission follows a 2D Gaussian distribution with σ = 0.33° for LHAASO J2031+4057 and σ = 2.28° for LHAASO J2027+4119. Finally, we employ a uniform template for comparison with the other templates, assuming a uniform spatial distribution of the neutrino flux. The above templates cover the region of LHAASO J2018+3651. In addition to the 6° radius, we explore templates with radii of 0.7° and 1.2° based on recent findings (Neronov et al. 2023), as well as a 10° radius according to LHAASO's measurements. The center of each template is LHAASO J2032+4102 (R.A. = 308.05°, decl. = 41.05°).
Cygnus Bubble Scan
We scan a 28° × 22° region, extending to ∼10° to include the entire bubble, to investigate the relation between the neutrino hotspots and the γ-ray hotspots and sources, as well as the distribution of MC and H I gas. The region is divided into a grid of points, each covering an area of 0.1° × 0.1°. Because the γ-ray significance maps are smoothed with a Gaussian kernel of σ = 0.3°, we scan this region with sources having a matching σ_s = 0.3° extension for consistency. In the likelihood, the signal spatial PDF for an extended source is modified from the 2D Gaussian used for the point source likelihood, shown in Equation (A6), by combining the per-event angular uncertainty with the source extension σ_s = 0.3°. Other PDFs remain the same as in the point source likelihood. For each grid point we maximize the likelihood (see Appendix A) by fitting two parameters: the number of signal events n_s and the spectral index γ, assuming a power-law energy spectrum.
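A plausible form of this extended-source spatial term — taken here as a 2D Gaussian whose width adds the per-event uncertainty and the 0.3° source extension in quadrature; this specific combination is an assumption of the sketch, not quoted from the paper — is:

```python
# Extended-source spatial term assumed for this sketch: a 2D Gaussian with
# combined width sqrt(sigma_i^2 + sigma_s^2), in the small-angle approximation.
import numpy as np

def spatial_pdf(separation_deg, sigma_i_deg, sigma_s_deg=0.3):
    sigma2 = sigma_i_deg**2 + sigma_s_deg**2
    return np.exp(-separation_deg**2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)

print(spatial_pdf(separation_deg=0.5, sigma_i_deg=0.8))
```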
Template Search Results
The results of template searches using the γ-ray flux maps of the Cygnus Bubble are summarized in Table 1. Although some excess from the Cygnus Bubble is observed, the results are not statistically significant. The γ-ray flux template at 7 TeV yields the lower pretrial p-value of 0.176 (0.9σ). We set upper limits on the muon neutrino flux, which are found to be ∼3 times higher than the theoretically expected neutrino flux and are shown in Figure 1. In a previous search in the Cygnus region, no neutrino excess (n_s = 0) was found, with a pretrial p-value of 0.80 (Kheirandish & Wood 2019), probably due to the use of different data samples and templates. The results for the other six templates are summarized in Table 2. Among them, the Gaussian template for LHAASO J2031+4057 (σ = 0.33°) yields the lowest pretrial p-value of 0.007 (2.4σ), while the H I template yields the largest p-value of 0.291 (0.6σ). The less significant result for the γ-ray flux map at 50 TeV is probably due to the absence of the LHAASO J2031+4057 component.
The template search results with different template radii of 0.7°, 1.2°, 6.0°, and 10.0° are shown in Figure 2. The MC template with a radius of 1.2° gives the most significant result, with a pretrial p-value of 2.6 × 10^−3 (2.8σ). At the larger radii of 6° and 10°, the neutrino excess for the γ-ray flux template at 7 TeV is more significant. At the radius of 0.7°, the significance of the neutrino excess is not sensitive to the template. The best-fit number of signal events (n_s = 46.6) for the MC template (1.2°) is much higher than the expected number of signal events (n_exp = 4.2) within the central 1.2° region. It therefore seems challenging to attribute all the observed neutrino excess solely to neutrinos accompanying the observed γ-rays from the Cygnus Bubble.
While the excess of the neutrino signal is not substantial, our results are still consistent with the hadronic origin of the γ-ray emission from the Cygnus Bubble, as the upper limits on the flux exceed the theoretically predicted neutrino flux. To obtain more significant results, additional through-going track events are required. Furthermore, cascade events might be more suitable for measuring neutrinos originating from the extensive region of the Cygnus Bubble, because the high angular resolution of track events does not provide a significant advantage in reducing background, and cascade events have a lower atmospheric background compared to track events.
Figure 1. Upper limits (90% C.L.) on the muon neutrino flux (red) resulting from the template searches using the γ-ray flux map at 7 TeV for the Cygnus Bubble. The bold red solid line shows the central energy range contributing to 90% of the significance. The expected muon neutrino flux (black and gray), derived from LHAASO γ-ray observations assuming hadronuclear interactions, is also shown. The 2.5° region centered on LHAASO J2018+3651 is masked for the gray line.
Cygnus Bubble Scan Results
The results of the Cygnus Bubble scan are shown in Figure 3, with the upper panels (A-C) illustrating the neutrino significance map and the lower panels (D-F) illustrating the neutrino excess map. In the entire scan region, the most significant point, indicated by the white cross, is found at R.A. = 303.35° and decl. = 43.75°, with a pretrial p-value of 2.2 × 10^−3 (2.9σ). This point is located 4.4° away from the template center, with best-fit parameters n_s = 22.2 and γ = 2.3. In the central 2° region, the most significant point is found at R.A. = 308.25° and decl. = 40.45°, with a pretrial p-value of 6.3 × 10^−3 (2.5σ). This point is located ∼0.6° away from the template center, with best-fit parameters n_s = 31.7 and γ = 4.0. We further conduct Monte Carlo simulations by scrambling the R.A. of IceCube events to compute the post-trial probability of a neutrino hotspot being more significant than the observed one. The post-trial p-value is 0.96 in the 28° × 22° region and 0.84 (0.18) within the 10° (2°) region.
The γ-ray significance map observed by LHAASO partly correlates with the H I distribution and is clearly associated with the dense MC clumps. If these γ-rays are produced by cosmic rays interacting with the surrounding gas, the accompanying neutrinos are expected to follow the γ-ray and gas distributions. The neutrino hotspot in the bubble center is spatially associated with the γ-ray hotspot below 20 TeV. However, the neutrino significance map in the larger region (see the upper panels A-C in Figure 3) does not exhibit an obvious association with the γ-ray significance map. Similarly, the neutrino excess map (see the lower panels D-F in Figure 3) also lacks a clear correlation with the gas distribution or the γ-ray distribution. LHAASO has observed eight PeV photons from the Cygnus Bubble. Two of these PeV photons are associated with the neutrino hotspot in the bubble center. One PeV photon is located close to the neutrino excess around (R.A. = 305.05°, decl. = 45.95°), and another PeV photon is located close to the neutrino excess around (R.A. = 300.55°, decl. = 43.55°), which is close to the blazar MAGIC J2001+435 (R.A. = 300.32°, decl. = 43.88°). IceCube had previously reported an upper limit on the neutrino flux from this source, MG4 J200112+4352, in source-list searches using ten years of track data, yielding a pretrial p-value of 0.21 (0.8σ) (Aartsen et al. 2020).
CONCLUSION
In this study, we conducted template searches and a scan of the Cygnus Bubble observed by LHAASO to investigate a potential correlation between neutrinos and γ-rays, using 7 years (2011-2018) of IceCube muon neutrino data observed by the full detector. Using various spatial templates of neutrino emission based on different assumptions, we found no significant neutrino signals in the Cygnus Bubble. The most significant result of the template searches is obtained with the MC template in a 1.2° radius, centered at LHAASO J2032+4102 (R.A. = 308.05°, decl. = 41.05°), yielding a pretrial p-value of 2.6 × 10^−3 (2.8σ). For the signals from larger regions with radii of 6° and 10°, the γ-ray flux template at 7 TeV yields more significant results than the other templates. We obtained 90% C.L. upper limits on the neutrino flux for each template. By comparing the resulting upper limits with the theoretically predicted neutrino flux based on γ-ray observations assuming hadronuclear interactions, we conclude that the neutrino result is consistent with a purely hadronic origin of the γ-ray emission from the Cygnus Bubble. The neutrino signals exhibit a stronger tendency to follow the MC distribution than the γ-ray flux distribution in the central region (∼1°) of the Cygnus Bubble.
The p-value is the probability of the background TS being greater than the observed one. The distribution of the background TS can be derived from simulations by scrambling the IceCube events in R.A. The distribution consists of two parts: the over-fluctuating part with TS > 0 and the under-fluctuating part with TS = 0. If the shape of the neutrino spectrum is fixed, the over-fluctuating part is expected to follow a χ²₁ distribution, and the fraction of the under-fluctuating part is 0.5. Thus, for an observed TS > 0, the pretrial p-value can be expressed as p = (1/2) P(χ²₁ ≥ TS). In the p-p production formula of Appendix B, σ_pp is the cross section of p-p interactions, F_ν (F_γ) is the energy distribution probability of neutrinos (γ-rays) generated from a cosmic-ray proton with energy E_p (Kelner et al. 2006), V is the volume of the emission region, n_H is the number density of hydrogen, d is the distance to the observer, and J_p is the energy spectrum of the cosmic-ray density. The neutrino spectrum can be obtained from the best-fit γ-ray spectrum ϕ_γ with a photon index of Γ = 2.71 + 0.11 × log₁₀(E_γ/10 TeV), as the integral over the emission region is the same for both the neutrino and γ-ray emissions. The observed neutrinos are assumed to have an equal flavor ratio due to neutrino oscillation.
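The pretrial p-value convention described above (half of the background trials at TS = 0, the rest following a one-degree-of-freedom chi-square distribution) can be evaluated numerically in a few lines; the TS values below are arbitrary illustrations.

```python
# Pretrial p-value under the convention described above.
from scipy.stats import chi2

def pretrial_p_value(ts_observed):
    if ts_observed <= 0.0:
        return 1.0
    return 0.5 * chi2.sf(ts_observed, df=1)

for ts in (0.0, 1.0, 4.0, 9.0):
    print(f"TS = {ts}: p_pre = {pretrial_p_value(ts):.4f}")
```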
Figure 2. The results (significance) of the template searches shown as a function of the template radius. Results from the various templates with radii of 0.7°, 1.2°, 6.0°, and 10.0° are presented. The results using the LHAASO γ-ray flux template at 7 TeV are represented as red squares.
Figure 3. Neutrino significance map with the pretrial p-value −log₁₀ p (panels A-C) and neutrino excess map with the best-fit number of signal events n_s (panels D-F) for the Cygnus Bubble scan. The γ-ray significance maps for energies above 100 TeV (panel A), 25-100 TeV (panel B), and 2-20 TeV (panel C) are shown by contours starting from 3σ and increasing in steps of 3σ. The spatial distributions of the MC template (panel D), the H I template (panel E), and the γ-ray flux template at 7 TeV (panel F) are indicated by contours smoothed with a Gaussian kernel of σ = 0.5° for comparison with the neutrino excess map. Eight photons with energies beyond 1 PeV are shown as gold triangles. Sources from TeVCat and the first LHAASO catalog located in this region are indicated by green plus signs. The most significant points in the entire scan region (R.A. = 303.35°, decl. = 43.75°) and in the central 2° region (R.A. = 308.25°, decl. = 40.45°) are indicated by white crosses.
Table 1. Results of template searches using the γ-ray flux templates of the Cygnus Bubble for a 6° region. The spatial template, the best-fit number of signal events n_s, the 90% C.L. muon neutrino flux upper limit at 5 TeV in units of TeV^−1 cm^−2 s^−1, and the pretrial p-value of each search are listed.
Table 2. Results of template searches. Same as Table 1 but for the MC, H I, hydrogen, uniform, and two 2D Gaussian templates within a 6° radius region.
B. THE NEUTRINO AND GAMMA-RAY CONNECTION. In p-p interactions, the energy spectra of neutrinos and γ-rays can be expressed as ϕ_ν,γ(E) = (c n_H V / 4π d²) ∫ σ_pp(E_p) J_p(E_p) F_ν,γ(E/E_p, E_p) dE_p/E_p, with the quantities defined in the preceding section. | 5,448.8 | 2024-02-27T00:00:00.000 | [
"Physics"
] |
Neural Cognition and Affective Computing on Cyber Language
Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts as not only a tool for convenient communication but also a carrier of abundant emotions and causes high attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through the comparative study of Chinese, English, and Spanish, which are used by the largest population in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space.
Introduction
In today's society, cyber space has become an important place for people to share information, exchange opinions, and communicate emotions. Due to its virtuality, autonomy, openness, and inclusiveness, and the high expressiveness afforded by various new media technologies, people's linguistic creativity has been inspired to the extreme, giving rise to the boom of cyber languages [1].
Professor Yu at the Communication University of China pointed out that cyber language is a "unique natural language" commonly used in cyber space [2]. Following Ferdinand de Saussure's semiotic theory, Chinese scholars have classified cyber languages into the two categories of readable symbols and nonreadable symbols and have studied their symbol system, ideographic features, and formation rules [1,3,4]. However, there has not been a consistent definition of cyber language (network language, Internet language, or web language) so far. With the rapid development of modern communication and new media technologies, the Internet, the Internet of Things, and the wireless communication network have been integrated into the omnipresent "ubiquitous network," which features increasingly varied expression patterns of cyber language, including icons, audio, video, and text as well as shapes, colors, and brightness. Based on the findings of previous research, we define cyber language as "a symbol system that people have agreed on and widely used in communication under the ubiquitous environment." Cyber languages are very rich in the expression of emotions, whether through the simple assembly of readable and nonreadable symbols or through complexes of texts, icons, audio or video signs, and their hybrids [5]. Any change in the components, shape, color, layout, or presentation sequence may deliver a different emotional message. Cyber language is developing fast in all the world's major languages, such as Chinese, English, Japanese, German, French, and Spanish [1][2][3]. In the context of globalization, cyber language has brought new vitality to international language communication and a strong capacity for expressing emotions. Therefore, research on emotional characteristics in cyber languages has attracted wide attention, from linguistic study to public opinion analysis, internet marketing, service feedback monitoring, and social emergency management.
Affective computing, originally presented by Picard in 1997, indicates that emotional information can be perceived, processed, and computed by machine [6], which has been applied to cyber space in online opinion analysis [7], smart service design [8], psychological monitoring in ubiquitous learning [9], and dynamic emotion computation on vocal social media [10]. In the past decades, although great progress has been made with affective computing on natural language, there are still a lot of difficulties in dealing with cyber language due to its complexity and variability. Also, the cognition of emotional symbols in cyber language is closely related to the neural activities of human beings and affected by such factors as nationality and cultural background, which requires further multidisciplinary research on this issue.
This paper first gives a classification of the emotional symbols in cyber language according to the Discovery Learning Theory and then analyzes the cognitive characteristics of different symbols. Based on our previous research findings, a mechanism model showing the dominant neural activities in that process is put forward. In order to analyze the expressive patterns of emotions in international cyber languages, a comparative study of Chinese, English, and Spanish was conducted, and finally an intelligent method is proposed for affective computing on the readable texts and nonreadable symbols in a unified PAD emotional space.
Classification of Emotional Symbols.
The symbols that can be used to express emotions in cyber languages are abundant and continuously evolving; they include the simple assembly of readable and nonreadable symbols as well as complexes of texts, icons, audio and video signs, and their hybrids [5]. According to the Discovery Learning Theory, learning is realized through the learner's cognitive representation, that is, the mental process of turning the perception of external substances into internal mental facts. The manner of cognitive representation passes through three stages as people grow up: first enactive representation, then iconic representation, and finally symbolic representation. These stages reflect the sequence in which humans cognize different types of information: enactive information comes first, followed by image information and then text information.
In order to study the impact of different types of information on emotional cognition in cyber languages, we classify the emotional symbols into six categories (ECSAGT) [5,7]: enactive symbols, color symbols, structural symbols, audio symbols, graphic symbols, and text symbols, each of which delivers emotional messages by following certain encoding and commonly accepted rules.
Cognitive Characteristics of Emotional Symbols.
The analysis of emotional symbols in cyber languages concerns both the intention and expression of the information sender and the perception and cognition of the information receivers. The emotions of the sender and of potential receivers differ, so we must decide whether the target is to identify the sender's emotions from the presented symbols or to judge the emotions activated in the receivers by those symbols, the latter being evaluated on a statistical basis [10]. In affective computing, we usually consider the latter.
According to research in cognitive neuroscience, human emotions arise from external signals that are transmitted through the peripheral sensory organs and internal sensory pathways to the brain's limbic system, where a rapid primary emotion is produced, followed by a relatively slow secondary emotion formed in the interaction between the higher cognitive limbic system and the cerebral cortex [11,12]. This process is controlled by the emotional circuits of the human brain and gives rise to activation responses in the corresponding brain regions.
Recent research into human emotions has been well supported by updated experimental technologies such as EEG (electroencephalography), ERPs (event-related potentials), fMRI (functional magnetic resonance imaging), and DTI (diffusion tensor imaging). In particular, blood oxygenation level dependent functional magnetic resonance imaging (BOLD-fMRI), with advantages such as being noninvasive and nontraumatic and being capable of accurately locating activated brain areas, has been applied to studies of the neural mechanisms of language and emotion [13][14][15].
In our previous research, organized by Professor Dai et al. at Fudan University, we found through experimental observation with EEG and fMRI that the cognitive responses to different types of symbols vary from one type to another [7]. For example, enactive, structural, color, and graphic symbols usually take less time and can give rise to rapid primary emotions; we call them the primary emotional information. Semantic text symbols take more time because they are perceived by the advanced cortex of the brain; they usually generate the slower secondary emotions and belong to the secondary emotional information. Audio symbols contain both representational information and semantic information and therefore bear the characteristics of both the primary and the secondary emotional information, but they take more time resources than the first type. The primary emotional information is largely cross-cultural and independent of language. Once the cognitive rules of emotional symbols have been established, the secondary emotional information plays a vital role in expressing more in-depth emotion. The characteristics discussed above should be considered in affective computing on messages composed of mixed types of emotional symbols, so as to reflect the dynamic cognitive responses to those symbols. As one of the essential issues in the Principles of Visual Communication (PVC), the cognitive characteristics of visual constructs have been studied for many years [16]. Furthermore, researchers and engineers in the field of human-computer interaction (HCI) have developed effective computational models to measure the time characteristics of the different elements in that process [17,18].
According to the human processor model (also known as the Model Human Processor, MHP) [19], information processing includes three subprocesses: the Perceptual Process, the Cognitive Process, and the Motor Process, as shown in Figure 1. Time parameters of the Perceptual Process and the Cognitive Process can be measured as in Table 1 [20].
From Table 1, we can see that the processing time of visual information is usually shorter than that of auditory information. In our study, however, the visual information involves enactive symbols, color symbols, structural symbols, graphic symbols, and text symbols, which have different processing times. Considering only the simple action, color, and structure of those symbols in cyber language, our experiment showed that the order of representational processing time, from fastest to slowest, is: enactive symbols, color symbols, structural symbols, audio symbols, graphic symbols, and text symbols [7]. Compared with audio symbols, graphic symbols can be perceived faster but need more processing time to be understood in the cognitive process. However, people can usually cognize the semantic meaning of audio symbols much faster than that of text symbols. Therefore, we offer a schematic diagram in Figure 2, which shows the general cognitive characteristics of the different types of emotional symbols in cyber language.
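As a rough illustration of how such time parameters could be combined, the sketch below estimates the perceive-plus-cognize time of a mixed message. Since Table 1 is not reproduced here, the sketch uses commonly quoted textbook MHP cycle times, and the per-symbol cycle counts are purely hypothetical values chosen only to respect the qualitative ordering reported above.

```python
# Illustrative sketch: rough response-time estimate for a mixed cyber-language
# message using Model Human Processor (MHP) cycle times. The cycle times are
# typical textbook values; the per-symbol cycle counts are hypothetical and
# only encode the qualitative ordering discussed in the text.

TAU_PERCEPTUAL = 0.100  # seconds per perceptual cycle (typical MHP value)
TAU_COGNITIVE = 0.070   # seconds per cognitive cycle (typical MHP value)

# Hypothetical (perceptual_cycles, cognitive_cycles) per symbol type,
# ordered from fastest to slowest as reported in the text.
SYMBOL_CYCLES = {
    "enactive":   (1, 1),
    "color":      (1, 2),
    "structural": (1, 3),
    "audio":      (2, 3),
    "graphic":    (2, 4),
    "text":       (3, 5),
}

def processing_time(symbols):
    """Estimate the total perceive+cognize time (seconds) for a list of symbol types."""
    total = 0.0
    for s in symbols:
        p, c = SYMBOL_CYCLES[s]
        total += p * TAU_PERCEPTUAL + c * TAU_COGNITIVE
    return total

if __name__ == "__main__":
    message = ["graphic", "text", "enactive"]  # e.g. an emoji, a word, a flashing sign
    print(f"Estimated processing time: {processing_time(message):.2f} s")
```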
Neural Mechanism Model.
The brain mechanism of emotions has been systematically explored by scholars in the area of affective neuroscience [21][22][23]. In order to provide systemic guidance for analyzing the neural cognition of emotional symbols in cyber language, based on a summary analysis of existing theories and findings, we put forward a mechanism model to show the dominant neural activities in that process, as in Figure 3 [5,9].
The stimulus signals of emotional symbols are delivered through the receiver's sensory pathways to the limbic system, producing the intuitive primary emotion, and then generate the slower but more rational secondary emotion through the cognitive activities of the advanced cortex of the brain. Finally, the changes of emotion lead to physiological reactions, which are perceived by the brain and form the specific emotional experience. In fact, the subjective assessment of an emotional symbol by the information receivers is the judgment of their emotional experiences. In that process, the advanced cortex of the brain regulates its selective attention on the sensory pathways, that is, it distributes the visual, auditory, and time resources autonomously. The selective attention depends on the information receiver's motivation, knowledge, memory, cognition, and such advanced psychological activities as decision-making. After experiencing the same symbol multiple times, a cognitive rule will be created in memory to produce conventional responses to familiar symbols.
Corpus and Affective Vocabulary.
When we look at cyber language closely, we find that it expresses emotions with readable words or nonreadable symbols that have a specific sentiment orientation. These are generally defined as emotional words, and they can be found in large quantities in the world's most used languages such as Chinese, English, and Spanish. For example [24], "Bro" is short for brother and sounds rather warm, while "lol" expresses joy and refers to "laugh out loud" or "crack up." In addition, there are many special terms such as "pfffffff," a proud word which means "whatever" or "as you like." "Tmr" is short for "tomorrow" and refers to something that will be done tomorrow. "N00b" means a newbie or a green hand and is usually used derisively to describe those who are clumsy at certain things. "W00t" stands for who or what and is used to describe something or somebody that is exciting or surprising. In some countries, people may use figures or symbols to express their feelings. For example, "1337" represents "elite," a person who is very competent; this number conveys feelings of surprise and joy. The number "56" means boring in English, aburrido in Spanish, and "无聊" in Chinese. The letter "D" is a big laugh. The symbol ":)" is a smile, while ":(" means sadness. Researchers of cyber language worldwide have been building corpora by collecting and sorting out frequently used cyber vocabulary and symbols, as in resources such as General Inquirer, WordNet, and SentiWordNet. Such corpora include a large number of affective network terms, which are vital to the analysis of the sentiment orientation of cyber language. A case in point is China's HowNet, which has collected 52,000 Chinese terms and 57,000 English words [24]. Among the published entries, there are 219 words describing the intensity of emotions, 3,116 negative words, 1,254 derogatory words, 3,730 positive words, 836 approbatory words, and 38 propositional words that make a proposition. HowNet's Semantics Dictionary also includes a large collection of lexical semantic entries, each composed of a term's semantics and its description. It offers guidance on how to analyze the abovementioned affective expressions in a specific context.
The sentiment orientation that affective cyber language presents is defined as its sentiment polarity, which can be divided into three categories: positive, negative, and neutral. Every affective word's polarity and intensity correlate with the emotional expression and cognition standards of cyber language in a specific context. The polarity and intensity of benchmark emotional words in typical contexts can be identified through statistical studies of cyber language and have been covered by many corpora, such as the Chinese affective lexicon ontology collated and annotated by the Information Retrieval Laboratory of Dalian University of Technology. It provides thorough accounts, from different perspectives, of a word's part of speech, emotion category, intensity, and polarity, and therefore offers important standards for calculating the parameters related to affective cyber vocabulary.
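A minimal sketch of lexicon-based polarity scoring is given below. The entries and their intensities are hypothetical illustrations; in practice they would be drawn from resources such as the Dalian affective lexicon ontology or HowNet.

```python
# Minimal sketch of lexicon-based polarity scoring. The lexicon entries are
# hypothetical examples; real entries would come from an affective lexicon
# that records polarity and intensity per word or symbol.

LEXICON = {
    # token: (polarity, intensity); polarity: +1 positive, -1 negative
    "joy":   (+1, 7),
    "happy": (+1, 5),
    "upset": (-1, 5),
    "lol":   (+1, 5),
    ":)":    (+1, 3),
    ":(":    (-1, 3),
}

def polarity_score(tokens):
    """Sum signed intensities of all tokens found in the lexicon."""
    score = 0
    for tok in tokens:
        if tok in LEXICON:
            pol, inten = LEXICON[tok]
            score += pol * inten
    return score

def polarity_label(score):
    """Map a signed score to the three polarity categories."""
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity_label(polarity_score(["upset", "but", "joy", ":)"])))  # positive
```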
Cyber languages are characterized by a wide-ranging vocabulary that is profuse in sentiment and updated rapidly. The use of affective vocabulary is the basic means of expressing emotions in cyber languages. Researchers worldwide have built corpora of cyber languages by collecting and sorting out frequently used vocabulary and symbols.
Expressive Pattern of Emotions.
The emotional message of cyber language is decided not only by the affective vocabulary used in a sentence, but also by the expressive pattern of the whole sentence. Therefore, the same affective vocabulary can convey a completely opposite meaning when expressed in a different pattern. For example: "When I just heard the news, I was quite upset. But after having lurked for a while, I found hikers were right about that, so just want to show up today to share my joy." There are two affective words in this sentence, the negative "upset" and the positive "joy," together with the concession word "but." Supposing Pos stands for a positive emotion word, Neg for a negative one, and Con for a concession word, the emotion expression pattern of the above sentence can be generalized as Neg + Con + Pos ⇒ Pos; that is, the overall emotion is decided by the clause after the concession word.
Besides concession, other relations such as cause and effect also shape the emotional pattern of a sentence. In cyber language, conjunctions are critical to understanding the emotional messages delivered in sentences and are therefore important objects to be considered in the analysis of emotional structure and expression patterns. Table 2 lists some commonly used connectors in Chinese, English, and Spanish [24].
Of course, a complete emotional expression pattern also involves degree words, negation words, and punctuation marks. For example, the Chinese phrases "不很高兴," "不高兴," and "很不高兴" express unhappiness of very different degrees. Punctuation marks such as "!", "?", and "...", and emoticons in particular, demonstrate very distinct sentiment orientations. In addition, the sequence of affective words in a sentence makes a difference. For example, the English sentence "We are exhausted now; above all we are so happy for our success." adopts the pattern Neg + above all + Pos ⇒ Pos: the emotion of the whole sentence is determined by the phrase following "above all," which is "so happy." Without strict rules regulating cyber language expression, which is ever changing, we have to use computers to automatically capture new entries and refine them with subjective cognition, in order to build open corpora of the expressive patterns of emotions in sentences and analyze the messages they deliver.
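The connector-driven patterns described above can be captured with simple rules, as in the sketch below. The connector lists are illustrative stand-ins for Table 2, and the rule set covers only the concession and "above all" patterns discussed in the text, plus a basic negation flip.

```python
# Sketch of rule-based adjustment of sentence-level polarity using expressive
# patterns. Connector lists are illustrative stand-ins for Table 2; the rules
# implement the two patterns discussed in the text: a concession connector or
# an emphasis marker shifts the sentence emotion to the clause that follows it.

CONCESSION = {"but", "however", "pero", "但是"}
EMPHASIS = {"above all", "sobre todo", "最重要的是"}
NEGATION = {"not", "no", "不"}

def sentence_polarity(clause_polarities, connectors):
    """clause_polarities: per-clause scores in order; connectors: the connector
    (or None) introducing each clause after the first."""
    polarity = clause_polarities[0]
    for conn, clause_pol in zip(connectors, clause_polarities[1:]):
        if conn in CONCESSION or conn in EMPHASIS:
            polarity = clause_pol        # emotion decided by the later clause
        else:
            polarity += clause_pol       # otherwise accumulate
    return polarity

def apply_negation(tokens, pol):
    """Flip a clause polarity when a negation word precedes the affective word."""
    return -pol if any(t in NEGATION for t in tokens) else pol

# "upset ... but ... joy" -> overall positive
print(sentence_polarity([-5, +5], ["but"]))  # 5
```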
What remains a research issue is emotional expression through nonreadable symbols such as enactive symbols, audio symbols, structural symbols, color symbols, and graphic symbols. As with text symbols, the most effective approach to affective computing on those emotional symbols is to establish an open and frequently updated knowledge library based on an ontology of nonreadable symbols, which is correlated with cultural background, language context, and social environment and should be processed with emotional notations that take the language application environment into account [5,9]. Among those symbols, the emotions in audio symbols are mainly reflected in the speed, intensity, pitch frequency, and spectral parameters of the audio signals and can be highly cross-cultural and cross-language. In an online conversation, a high accuracy of emotion recognition can be achieved through pattern recognition without even semantic analysis [10,25-27].
Unified PAD Emotional Space.
In order to be processed by the machine, the affective characteristics of an emotional symbol in cyber language should be described quantitatively.
The most rudimentary description is its positive or negative polarity together with a quantitative intensity. In most cases, researchers adopt the "big six" types of emotions: anger, disgust, fear, joy, sadness, and surprise [28], which have been widely applied to the analysis of graphic, audio, and video signals. However, the emotional symbols in cyber language usually carry mixed affective characteristics and reflect dynamic changes in audio and video signals.
Mehrabian presented a 3D model that can describe any kind of complicated emotion in a PAD emotional space [29,30]. It includes three nearly independent continuous dimensions: Pleasure-Displeasure (P), Arousal-Nonarousal (A), and Dominance-Submissiveness (D). Experiments show that almost all known emotional states can be described well in this space [10,30]. So far, the PAD model has been successfully applied in a variety of areas, such as audio-visual speech synthesis [31], micro-blog sentiment analysis [32], and music emotion comparison [33]. The three dimensions of PAD provide a unified space for describing the mixed affective characteristics of all types of emotional symbols in cyber language as well as their dynamic changes. Moreover, any emotional state in the PAD space can also be described as percentage rates of typical emotions based on a conversion metric function [10,34].
Usually, the PAD values of commonly used emotional symbols are first evaluated by subjective assessment, according to the perception and cognition of typical information receivers, during the process of emotional notation, and are stored in the ontology-based knowledge library, thereby providing the references for comprehensive computation by machine. In order to achieve precise and consistent results in subjective evaluation, Mehrabian designed an initial 34-item test questionnaire [29]. However, application in practice indicates that the questionnaire should be designed according to the specific language, owing to differences in language understanding and cultural backgrounds. Table 3 shows the Chinese version of the simplified 12-item questionnaire presented by scholars from the Psychological Institute, Chinese Academy of Sciences [35]. The assessment is based on which kind of feeling is more intense in each item; from left to right, each item is scaled within the range from −4 to 4. The scores of the items are then calculated and converted into the normalized values of P, A, and D [35].
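A minimal sketch of turning the 12-item ratings into normalized PAD values is shown below. The grouping of items into the P, A, and D dimensions and the normalization by the maximum score are assumptions made for illustration; the published scale [35] defines its own item assignment and scoring keys.

```python
# Minimal sketch of converting 12-item questionnaire ratings into normalized
# PAD values. Assumptions (not taken from [35]): the 12 items are ordered as
# 4 P-items, 4 A-items, 4 D-items, and normalization divides the mean item
# score by the maximum rating of 4 so each dimension falls in [-1, 1].

def pad_from_items(scores):
    """scores: 12 ratings in [-4, 4]; returns normalized (P, A, D)."""
    if len(scores) != 12 or any(not -4 <= s <= 4 for s in scores):
        raise ValueError("expected 12 ratings in [-4, 4]")
    p = sum(scores[0:4]) / 4 / 4    # mean item score, scaled to [-1, 1]
    a = sum(scores[4:8]) / 4 / 4
    d = sum(scores[8:12]) / 4 / 4
    return p, a, d

print(pad_from_items([3, 2, 4, 3,  1, 2, 0, 1,  -1, 0, 1, 0]))  # (0.75, 0.25, 0.0)
```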
Knowledge Library and Emotional Notation.
The perception and cognition of emotional symbols in cyber languages are not only related to neural cognition but also affected by the information receivers' background and features. Therefore, we establish an open and frequently updated knowledge library as shown in Figure 4.
The emotional symbols as well as their expressive patterns are stored in this library based on the ontology. The commonly used symbols in the library are first assigned emotional notations of PAD values as benchmarks, through the subjective assessment discussed earlier in this paper.
In order to evaluate the values of the remaining or newly appearing emotional symbols, we adopt the PMI (Pointwise Mutual Information) method [36], which is based on the co-occurrence probabilities of the new symbol and its benchmarks in the knowledge library. For example, the improved HowNet-PMI algorithm has been successfully applied to affective computing on Chinese cyber language [37]. As shown in Figure 5, in the knowledge library we build a semantic dictionary based on the ontology of international cyber languages, which includes both readable and nonreadable symbols. The expressive patterns of emotions in sentences are represented by knowledge such as templates and rules, which describe the commonly used structures along with conjunctions, adverbs, and punctuation marks.
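The PMI-based scoring can be sketched as follows. The counting inputs stand in for corpus statistics from the knowledge library, and the specific HowNet-PMI refinements of [37] are not reproduced; the semantic orientation is simply the PMI against positive benchmarks minus the PMI against negative ones.

```python
import math

# Sketch of PMI-based scoring of a new emotional symbol against benchmark
# symbols stored in the knowledge library. The counts are placeholders for
# real corpus statistics.

def pmi(count_xy, count_x, count_y, n_total):
    """Pointwise mutual information from raw co-occurrence counts."""
    if count_xy == 0:
        return 0.0
    p_xy = count_xy / n_total
    p_x, p_y = count_x / n_total, count_y / n_total
    return math.log2(p_xy / (p_x * p_y))

def so_pmi(symbol, pos_benchmarks, neg_benchmarks, cooc, count, n_total):
    """Semantic orientation: PMI with positive benchmarks minus PMI with negative ones."""
    pos = sum(pmi(cooc.get((symbol, b), 0), count[symbol], count[b], n_total)
              for b in pos_benchmarks)
    neg = sum(pmi(cooc.get((symbol, b), 0), count[symbol], count[b], n_total)
              for b in neg_benchmarks)
    return pos - neg

# Toy (hypothetical) corpus statistics for a new symbol "xd".
count = {"xd": 40, "joy": 120, "sad": 90}
cooc = {("xd", "joy"): 25, ("xd", "sad"): 2}
print(so_pmi("xd", ["joy"], ["sad"], cooc, count, n_total=10_000))  # > 0, i.e. positive
```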
Based on the knowledge, affective computing can be carried out on a whole message which may contain one or more sentences with the additional nonreadable symbols.
Intelligent Computing Method.
The basic process of affective computing on cyber language is shown in Figure 6. It includes the following four steps [5]: (1) The first step is Keyword Retrieval and Sentence Segmentation. A message of cyber language is segmented into one or more sentences, possibly with additional nonreadable symbols, for further processing by the structure analysis. Chinese, English, and Spanish have different sentence patterns, which can mostly be structured by retrieved keywords such as conjunctions, adverbs, and punctuation marks.
(2) The second step is Analysis of Expressive Patterns and Extraction of Emotional Symbols. In this step, each segmented sentence is broken up into a series of separate words by dedicated tools such as the Chinese word segmentation system NLPIR [38]. Then, the expressive patterns of emotions in the sentences, together with the additional nonreadable symbols, are analyzed, and all emotional symbols in the message are extracted based on the semantics dictionary, structural templates, and rules stored in the knowledge library.
(3) The third step is Analytical Computing. In this step, the readable and nonreadable emotional symbols are computed separately. In order to reflect the dynamic affective features of human cognitive processes, the enactive symbols such as flashing signs and video signs, as well as the structural, color, and graphic symbols, are computed and assigned to the primary emotional information, while the semantic text symbols are computed into the secondary emotional information. As discussed earlier in this paper, audio symbols contain both representational information and semantic information. The former is related to vocal emotion only and can be computed by the LS-SRV estimator, which has been successfully applied to WeChat and QQ [10]. The latter is first converted into text sentences by a speech recognition tool and is afterwards computed in the same way as the text symbols.
(4) The final step is Synthetic Affective Computing. The results of the Analytical Computing step are synthesized and adjusted based on the analysis of conventional expressive patterns, in order to reach a more accurate and comprehensive result. Dynamic affective features in a message of cyber language may be represented by the primary and the secondary emotions as well as by their changing positions in the text. This process in Figure 6 can be implemented by the intelligent computing method shown in Figure 7.
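A skeleton of the four-step process, wired together in code, is sketched below. Every function body is a placeholder: a real system would call a segmenter such as NLPIR, query the ontology-based knowledge library, and use the vocal-emotion estimator mentioned in step (3).

```python
# Skeleton of the four-step affective computing pipeline described above.
# All step implementations are placeholders for illustration only.

def segment(message):                      # Step 1: keyword retrieval and segmentation
    """Split a cyber-language message into sentences plus nonreadable symbols."""
    return [message]                       # placeholder

def extract_symbols(sentence):             # Step 2: pattern analysis and extraction
    """Return (readable_tokens, nonreadable_symbols); a stand-in for the knowledge library."""
    tokens = sentence.split()
    return [t for t in tokens if t.isalpha()], [t for t in tokens if not t.isalpha()]

def analytical_computing(readable, nonreadable):   # Step 3: separate computation
    """Primary emotion from nonreadable symbols, secondary emotion from text (toy scores)."""
    primary = {"P": 0.1 * len(nonreadable), "A": 0.1 * len(nonreadable), "D": 0.0}
    secondary = {"P": 0.05 * len(readable), "A": 0.0, "D": 0.0}
    return primary, secondary

def synthetic_computing(results):          # Step 4: synthesis and adjustment
    """Merge per-sentence primary/secondary results into an overall PAD estimate."""
    out = {"P": 0.0, "A": 0.0, "D": 0.0}
    for primary, secondary in results:
        for k in out:
            out[k] += primary[k] + secondary[k]
    return out

def affective_computing(message):
    results = []
    for sentence in segment(message):
        readable, nonreadable = extract_symbols(sentence)
        results.append(analytical_computing(readable, nonreadable))
    return synthetic_computing(results)

print(affective_computing("so happy for our success :)"))
```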
In the proposed method, the processing and computing tasks are accomplished intelligently by a multiagent system. It includes a Monitoring Agent, a Preprocessing Agent, an Analytical Computing Agent, and a Synthetic Affective Computing Agent. Different from traditional computing on emotions, and taking the characteristics of neural cognition into account, our method gives the results of the primary and the secondary emotions, respectively, and shows their dynamic changes. This is significant for the further analysis of emotion propagation through social media in cyber space [39][40][41].

Experiment and Result

As an example, upon seeing a piece of news footage, audiences are first attracted by the smiling pictures and videos of the news reporter and immediately produce the primary emotion of joy. After computing on the online comments, we obtain the main types of emotions at different positions of the total comment text, as shown in Figure 9, which reflect the dynamic changes of the secondary emotion in the online comments. Figure 10 is a feeling description posted by a Chinese survivor on January 1, 2015, at 3:53, who escaped from the tragic Shanghai Bund stampede which took place at 23:35 on December 31, 2014, and resulted in 36 deaths and 49 injuries. The survivor said: "Tonight's Bund was nothing I could've imagined because of the crowding and trampling accident. I was fortunate to have survived. I saw young lives perish in front of me, but I couldn't save them. They were put on stretchers and sent down to us one after another. We tried CPR for all, one, two, three... until we were all exhausted. Poor people, hopefully 120 and the hospital can do a better job. Have to thank the medical workers, foreigners as well as all the others that participated in the rescue efforts. We have tried our best... (crying out loud)." Figure 11 shows the PAD values of the affective computing result on that description. It represents the strong intensity of the mixed emotions as well as their changes, which include grief, despair, helplessness, and gratefulness to the people who were involved in the rescue.
Conclusion and Discussion
With the rapid development and wide application of the Internet and ubiquitous networks, cyber space has provided people with a new virtual society and a convenient platform for living and working. Characterized by its customary symbol system and vivid expression patterns, cyber language not only acts as a tool for people to communicate in cyber space, but also plays a vital role in affective exchange and emotional propagation as well as in social psychology and behavior, and it has attracted considerable attention in many areas in recent years.
Due to the open, virtual, and dynamic language environment, affective computing on cyber language requires systemic and interdisciplinary research. This paper presented a classification of the emotional symbols in cyber language and put forward a mechanism model to show the dominant neural activities in the cognitive process. Furthermore, after analyzing the expressive patterns of emotions in Chinese, English, and Spanish, this paper proposed an intelligent method for affective computing on cyber language in a unified PAD emotional space, which can deal with multisymbol information and mixed emotions in a cyber message and show their dynamic changes according to the characteristics of the neural cognition process. Experimental results indicate that this method can reach an accuracy of more than 70% for the computing on text symbols and audio symbols and provides an effective approach for applications in areas such as public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. However, the processing of the remaining nonreadable symbols had to rely on subjective evaluation in most cases.
In the future, the language ecosystem of cyber language and new media technologies will keep changing and being updated. We suggest that future studies be conducted in the following areas: (1) how to build an open and dynamically updated knowledge library to assist affective computing by applying intelligent monitoring and big data mining techniques; (2) how to establish more thorough expressive emotional patterns and provide fundamental statistical parameters for the elaborate description of neural cognition by using advanced experimental observation techniques; and (3) how to explore more effective methods for computing on nonreadable symbols such as enactive and structural symbols.
"Computer Science"
] |
Analysis of radiation pressure induced nonlinear optofluidics
We analyze two nonlinear optofluidic processes where nonlinearity is induced by the interplay between the optical field and the liquid interface. Specifically, guided optical waves generate radiation pressure on the liquid interface, which can in turn distort the liquid interface and modify the properties of the optical field. In the first example, we discuss the feasibility of nonlinear optofluidic solitons, where the optical field is governed by the nonlinear Schrödinger equation and the nonlinearity is effectively determined by liquid properties. Then, we analyze a nonlinear optofluidic process associated with a high quality (Q) factor whispering gallery mode (WGM) in a liquid droplet. Similar to Kerr effects, the WGM can produce a frequency shift proportional to the WGM power. Using liquid properties that are experimentally attainable, we find that it may only take a few photons to generate a measurable WGM resonance shift. Such a possibility may eventually lead to nonlinear optics at the single photon energy level. ©2014 Optical Society of America
OCIS codes: (190.0190) Nonlinear optics; (190.3270) Kerr effect; (190.6135) Spatial solitons; (230.5750) Resonators.
References and links
1. D. Psaltis, S. R. Quake, and C. Yang, "Developing optofluidic technology through the fusion of microfluidics and optics," Nature 442(7101), 381-386 (2006).
2. U. Levy and R. Shamai, "Tunable optofluidic devices," Microfluid Nanofluidics 4(1-2), 97-105 (2008).
3. C. Monat, P. Domachuk, and B. J. Eggleton, "Integrated optofluidics: A new river of light," Nat. Photonics 1(2), 106-114 (2007).
4. S.-H. Kim, J.-H. Choi, S.-K. Lee, S.-H. Kim, S.-M. Yang, Y.-H. Lee, C. Seassal, P. Regrency, and P. Viktorovitch, "Optofluidic integration of a photonic crystal nanolaser," Opt. Express 16(9), 6515-6527 (2008).
5. Z. Li and D. Psaltis, "Optofluidic dye lasers," Microfluid Nanofluidics 4(1-2), 145-158 (2007).
6. H. Zhu, I. M. White, J. D. Suter, P. S. Dale, and X. Fan, "Analysis of biomolecule detection with optofluidic ring resonator sensors," Opt. Express 15(15), 9139-9146 (2008).
7. H. Rokhsari, T. J. Kippenberg, T. Carmon, and K. J. Vahala, "Theoretical and experimental study of radiation pressure-induced mechanical oscillations (parametric instability) in optical microcavities," IEEE J. Sel. Top. Quantum Electron. 12(1), 96-107 (2006).
8. T. J. Kippenberg and K. J. Vahala, "Cavity opto-mechanics," Opt. Express 15(25), 17172-17205 (2007).
9. T. J. Kippenberg and K. J. Vahala, "Cavity optomechanics: Back-action at the mesoscale," Science 321(5893), 1172-1176 (2008).
10. J. Hofer, A. Schliesser, and T. J. Kippenberg, "Cavity optomechanics with ultrahigh-Q crystalline microresonators," Phys. Rev. A 82(3), 031804 (2010).
11. S. Tallur, S. Sridaran, and S. A. Bhave, "A monolithic radiation-pressure driven, low phase noise silicon nitride opto-mechanical oscillator," Opt. Express 19(24), 24522-24529 (2011).
12. A. Cho, "Putting light's light touch to work as optics meets mechanics," Science 328(5980), 812-813 (2010).
13. A. Ashkin and J. M. Dziedzic, "Radiation pressure on a free liquid surface," Phys. Rev. Lett. 30(4), 139-142 (1973).
14. I. I. Komissarova, G. V. Ostrovskaya, and E. N. Shedova, "Light pressure induced deformations of a free liquid surface," Opt. Commun. 66(1), 15-20 (1988).
15. J.-Z. Zhang and R. K. Chang, "Shape distortion of a single water droplet by laser-induced electrostriction," Opt. Lett. 13(10), 916-918 (1988).
16. H. M. Lai, P. T. Leung, K. L. Poon, and K. Young, "Electrostrictive distortion of a micrometer-sized droplet by a laser pulse," J. Opt. Soc. Am. B 6(12), 2430-2437 (1989).
17. A. Casner and J.-P. Delville, "Adaptative lensing driven by the radiation pressure of a continuous-wave laser wave upon a near-critical liquid-liquid interface," Opt. Lett. 26(18), 1418-1420 (2001).
18. A. Casner, J.-P. Delville, and I. Brevik, "Asymmetric optical radiation pressure effects on liquid interfaces under intense illumination," J. Opt. Soc. Am. B 20(11), 2355-2362 (2003).
19. J.-P. Delville, M. Robert de Saint Vincent, R. D. Schroll, H. Chraïbi, B. Issenmann, R. Wunenburger, D. Lasseux, W. W. Zhang, and E. Brasselet, "Laser microfluidics: fluid actuation by light," J. Opt. A, Pure Appl. Opt. 11(3), 034015 (2009).
20. G. Bahl, K. H. Kim, W. Lee, J. Liu, X. Fan, and T. Carmon, "Brillouin cavity optomechanics with microfluidic devices," Nat. Commun. 4, 1994 (2013).
21. M. Hossein-Zadeh and K. J. Vahala, "Fiber-taper coupling to Whispering-Gallery modes of fluidic resonators embedded in a liquid medium," Opt. Express 14(22), 10800-10810 (2006).
22. A. Jonáš, Y. Karadag, M. Mestre, and A. Kiraz, "Probing of ultrahigh optical Q-factors of individual liquid microdroplets on superhydrophobic surfaces using tapered optical fiber waveguides," J. Opt. Soc. Am. 29(12), 3240-3247 (2012).
23. A. Yariv and P. Yeh, Photonics: Optical Electronics in Modern Communications (Oxford University, 2007).
24. R. W. Boyd, Nonlinear Optics, 3rd ed. (Academic, Amsterdam, 2008).
25. J. D. Jackson, Classical Electrodynamics, 3rd ed. (John Wiley & Sons, 1998).
26. H. M. Lai, P. T. Leung, K. Young, P. W. Barber, and S. C. Hill, "Time-independent perturbation for leaking electromagnetic modes in open systems with application to resonances in microdroplets," Phys. Rev. A 41(9), 5187-5198 (1990).
27. J. R. Buck and H. J. Kimble, "Optimal sizes of dielectric microspheres for cavity QED with strong coupling," Phys. Rev. A 67(3), 033806 (2003).
28. A. Datta, S. Kundu, M. K. Sanyal, J. Daillant, D. Luzet, C. Blot, and B. Struth, "Dramatic enhancement of capillary wave fluctuations of a decorated water surface," Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 71(4), 041604 (2005).
29. H. Leitão, A. M. Somoza, M. M. Telo da Gama, T. Sottmann, and R. Strey, "Scaling of the interfacial tension of microemulsions: A phenomenological description," J. Chem. Phys. 105(7), 2875 (1996).
30. H. Chraibi, D. Lasseux, R. Wunenburger, E. Arquis, and J.-P. Delville, "Optohydrodynamics of soft fluid interfaces: optical and viscous nonlinear effects," Eur. Phys. J. E Soft Matter 32(1), 43-52 (2010).
Introduction
Optofluidics [1][2][3][4][5][6] and optomechanics [7][8][9] have recently emerged as two important areas of research. Optofluidics aims to synergize microfluidics and optics to achieve novel functionalities including reconfigurable optical systems [1,2], integrated optics [3,4], lasers [5], and sensing [6]. Optomechanics, on the other hand, involves the dynamic interplay between optical field and mechanical motion [7][8][9]. To date, most optomechanics-related research utilizes solid resonators [10][11][12]. Only a limited number of studies investigated the mechanical interaction between optical field and fluids [13][14][15][16][17][18]. The earliest example is perhaps the classic work in [13], where the authors used a focused high power laser beam to create a bulge over a flat air-liquid interface. Similarly, in [15,16], the authors used a high power laser beam focused onto a liquid droplet to distort its interface. More recently, optical radiation pressure was used in [17] to distort a flat liquid interface and form a tunable lens. By using a liquid mixture with extremely small interfacial tension, which can occur near the critical temperature of phase transition, it is possible to significantly reduce the optical power required for large interface distortion [17][18][19]. Additionally, stimulated Brillouin scattering in a hollow capillary tube filled with liquid has been reported recently [20]. The existence of high-Q whispering gallery modes (WGMs) in an all-liquid droplet has also been experimentally confirmed [21,22]. The focus of this work, however, is distinct from existing studies. Specifically, our goal is to demonstrate that under appropriate conditions, the dynamic interplay between optical force and liquid interface can lead to processes that are very similar to classical third-order nonlinear processes such as Kerr effects and optical solitons. As will be made clear in this paper, a defining feature of these nonlinear optofluidic processes is that the nonlinearity arises from the distortion of the liquid interface induced by optical radiation pressure.
To analyze nonlinear optofluidic processes, it is necessary to solve the optical equations (e.g., Maxwell's equations) and the fluidic equations (e.g., Navier-Stokes equations) in a self-consistent manner, where optical radiation pressure must be balanced by fluidic forces such as surface tension and buoyancy. Consequently, by tuning fluid parameters such as surface tension and density, one should be able to modulate and control the effective strength of optofluidic nonlinearity. This is in sharp contrast with traditional nonlinear optics, where the nonlinear susceptibility (e.g., χ(2) or χ(3)) is an intrinsic material property and cannot be easily tuned. Furthermore, by using liquids with low surface tension, it should be possible to achieve large nonlinearity with low optical power. In fact, in this paper, we find that by reducing surface tension to a low but experimentally attainable level, the effective strength of nonlinear optofluidics can be several orders of magnitude stronger than the traditional Kerr effects. To the best of our knowledge, this possibility has never been discussed in existing literature. To quantitatively analyze nonlinear optofluidic effects, we consider two examples in this paper. Figure 1(a) shows a liquid structure which contains a high index liquid as the waveguiding core, air as the top cladding, and a lower index liquid as the bottom cladding. Assuming low interfacial tension for the bottom liquid-liquid interface, the presence of the optical field should deform the interface and create a bulge. At the same time, this bulge can enhance optical confinement along the transverse direction. Under certain conditions, we find that the propagation of the optical field within the liquid bulge can be described by the nonlinear Schrödinger equation, the solutions to which are in the familiar form of optical solitons. This example illustrates a key feature of nonlinear optofluidics, namely that the presence of the optical field distorts the liquid interface; simultaneously, the change in the liquid interface shape also modifies the optical field. As a result, analyses of nonlinear optofluidics often require solving the coupled system of optical waves and fluids. Specifically, the optical field must satisfy Maxwell's equations plus the boundary conditions imposed by the deformable liquid interface. Meanwhile, the liquid system is governed by the relevant fluid equations, where the impact of optical radiation pressure must also be included. Clearly, the most important feature of nonlinear optofluidics is the coupling between the optical field and the liquids, where the coupling strength is effectively determined by the radiation pressure and the interfacial tension of the liquid interface.
The optofluidic soliton depicted in Fig. 1(a) may not be easy to implement experimentally. A more practical example is the system shown in Fig. 1(b), which is based on a liquid droplet that supports high-Q WGMs. In this example, the optical field of the WGM exerts radiation pressure on the droplet surface. Since the direction of radiation pressure always points from the high index core to the low index cladding [16], a bulge should form along the droplet equator. As a result, given sufficiently high optical power, both the effective optical path of the resonator and the corresponding resonance frequency should change. This phenomenon is very similar to the Kerr effect, where high optical power also shifts the effective cavity length by changing the refractive index of the liquid. In this paper, we derive a closed-form formula that can provide an order of magnitude estimate for this nonlinear optofluidic process. Using common liquid parameters, we find that this nonlinear optofluidic effect can be significantly stronger than the Kerr effect. In fact, for liquids with low but experimentally achievable surface tension, it may even be possible to produce a measurable change in WGM resonance frequency at the single photon energy level. Such a possibility may ultimately enable us to demonstrate nonlinear interaction between two single photons, which is obviously important for quantum information technology.

Optofluidic soliton

The optofluidic solitons can exist in the asymmetric liquid waveguide shown in Fig. 2(a). The refractive indices of the three layers are represented by n1, n2, and n3, while their fluid densities are given by ρ1, ρ2, and ρ3, respectively. For simplicity, we assume layer 3 is air and n3 = 1. In the absence of optical signals, the waveguide core is the planar middle layer with thickness h0 and has the highest refractive index (i.e., n2 > n1 and n2 > n3). By using the water-in-oil microemulsion system described in [17], it is possible to ensure that the core liquid layer (layer 2) possesses a higher refractive index but a lower mass density than the cladding liquid layer (layer 1), i.e., n2 > n1 and ρ2 < ρ1. Thus the configuration in Fig. 2 should be hydrodynamically stable for small deformation.
After coupling light into layer 2, the presence of the optical field generates radiation pressure on the dielectric boundaries. We assume that the interfacial tension between layer 1 and 2 is much smaller than that of layer 2 and 3. As a result, we can ignore the deformation of the air-liquid interface (the interface between layer 2 and 3) and focus on the liquid-liquid interface instead. We note that the direction of radiation pressure always points from the high refractive index medium towards the low refractive index medium [16]. Consequently, the presence of the optical field should generate a liquid bulge as illustrated in Fig. 2(a). In turn, the geometrical deformation can lead to better optical confinement within the bulge. Qualitatively, such a nonlinear optofluidic process is very similar to the self-focusing of spatial solitons. In the following analysis, we establish an analytical framework and confirm that under certain conditions, the aforementioned optofluidic process can indeed be described by the nonlinear Schrödinger equation.
Our first step is to apply the effective index theory [23], a widely used theory that allows us to reduce the original three-dimensional (3D) problem into a two-dimensional (2D) one. In our case, similar to the treatment of structures such as ridge waveguides, we assume that the optical field depends only on x and z and use an effective index n_eff(x, z) to account for the field's variation along the y direction. For the specific example in Fig. 2(a), we assume that the optical field's distribution along the y axis is simply given by the fundamental y-polarized mode in a planar waveguide. Then, we approximate the x and z dependence of the optical field as in Eq. (1), with the effective index n_eff(x, z) defined in Eq. (2). Given the effective index in Eq. (2), the electric field in Eq. (1) should satisfy Eq. (3). Note that Eq. (3) no longer depends on y. Then, assuming a slowly varying envelope A(x, z), we obtain Eq. (4). Equation (4) shows that the shape of the liquid bulge, i.e., the local change Δh in the core thickness, can significantly change the properties of the guided optical wave.
In order to determine the bulge profile Δh, we must first derive its governing equation. For the electric field polarized along the y axis, the optical radiation pressure at the layer 1 and 2 interface is proportional to the local field intensity [16]. Since we look for a solution that is self-guided and propagates along the x direction, its intensity, i.e., |A(x, z)|², should not depend on x. Therefore, the shape of the bulge Δh should not depend on x either.
Then, in equilibrium, the interfacial tension and buoyancy acting on the deformed interface balance the optical radiation pressure; this balance leads to Eq. (5), where Δρ = ρ1 − ρ2 and σ is the interfacial tension between layer 1 and 2. In Eq. (5), we ignore the deformation of the air-liquid interface, since the air-liquid surface tension is assumed to be much higher than the liquid-liquid interfacial tension. To obtain a closed form solution, we further assume that the interfacial tension term in Eq. (5) is even smaller than gravity and is therefore neglected. Under this scenario, the height of the liquid bulge is proportional to the optical intensity, i.e., Δh ∝ |A(x, z)|². Substituting this result into Eq. (4), we obtain Eq. (6a), where the effective nonlinear coefficient is set by the liquid parameters.
Equation (6a) takes the familiar form of the nonlinear Schrödinger equation [24]. Its fundamental solution is the familiar sech-shaped soliton given in Eq. (7). Our assumptions in deriving the form of optofluidic solitons are either the standard approach or can be justified using carefully controlled experimental conditions. For example, the slowly varying envelope assumption and the effective index theory are commonly used in nonlinear optics and integrated optics. Our assumptions on liquid properties such as density, refractive index, and surface tension, however, warrant additional discussion. For example, two key requirements are: 1) in comparison with the liquid cladding medium (layer 1), the core medium (layer 2) should possess lower density and higher refractive index; 2) the surface tension between layer 1 and 2 should be small. Both requirements can be satisfied by using a water-in-oil microemulsion system that contains a mixture of water, sodium dodecyl sulfate, toluene, and n-butanol-1 [17]. At a temperature slightly higher than the critical temperature (Tc = 35 °C) for phase transition, the mixture separates into two different micellar phases, which possess different densities and refractive indices; the interfacial tension between them follows a critical scaling law and becomes vanishingly small near Tc. For the parameters considered here, the Bond number is on the order of O(10). Such a large Bond number indicates that the interfacial tension effect is negligible compared to the buoyancy effect. When the Bond number is small, the interfacial tension effect becomes comparable to or even stronger than gravity, and then we must solve the two coupled equations (Eqs. (4) and (5)) without any approximations. For such cases, analytical solutions would unfortunately be unattainable and numerical methods must be applied.
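As a rough illustration of the soliton solution discussed here, the sketch below evaluates the textbook fundamental sech soliton of the nonlinear Schrödinger equation. The symbols k, gamma, and a0 and all numerical values are illustrative placeholders rather than the parameters used in the paper; the mapping of gamma to the liquid properties is given by the paper's Eq. (6) and is not reproduced here.

```python
import numpy as np

# Generic sketch of the fundamental soliton of the nonlinear Schrödinger
# equation i dA/dx + (1/(2k)) d^2A/dz^2 + gamma |A|^2 A = 0. The sech profile
# and the width-amplitude relation are the standard NLS results; all numbers
# below are placeholders, not values from the paper.

def fundamental_soliton(z, x, a0, k, gamma):
    """A(x, z) = a0 * sech(z / w) * exp(i x / (2 L_nl))."""
    w = 1.0 / (a0 * np.sqrt(k * gamma))     # soliton width
    l_nl = 1.0 / (gamma * a0 ** 2)          # nonlinear length scale
    return a0 / np.cosh(z / w) * np.exp(1j * x / (2.0 * l_nl))

k = 6.0e6        # propagation constant ~ 2*pi*n_eff/lambda (1/m), placeholder
gamma = 1.7e-7   # effective nonlinear coefficient, placeholder
a0 = 1.0e5       # peak field amplitude (V/m), placeholder
z = np.linspace(-50e-6, 50e-6, 2001)
intensity = np.abs(fundamental_soliton(z, 0.0, a0, k, gamma)) ** 2
fwhm = z[intensity > 0.5 * intensity.max()].ptp()
print(f"soliton full width at half maximum: {fwhm * 1e6:.1f} um")
```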
An attractive feature of the optofluidic soliton is that its nonlinear behaviors can be tuned by varying the parameters of the liquid systems. In particular, an interface with weak interfacial tension should possess high nonlinear efficiency. As an example, consider the waveguide structure analyzed in Fig. 2: the equivalent nonlinear coefficient of this optofluidic system can greatly exceed that of typical third-order nonlinear materials [24]. Therefore, by using liquid systems with weak interfacial tension, we can effectively increase the third order nonlinearity by several orders of magnitude.
We can use Eq. (7) to estimate the optical power required to form the aforementioned optofluidic soliton. For this calculation, we assume that the optical beam is confined in a cross-section of width W = 10 µm (i.e., the soliton width) and height H = 3 µm (i.e., the waveguide thickness); based on Eq. (7b) and the parameters given above, we find that the optical power of this optofluidic soliton is only 41 µW. We can also use the value of the soliton amplitude A0 to estimate the bulge height: applying the discussion immediately above Eq. (6), the magnitude of the bulge height is proportional to the square of the optical field amplitude.

The discussion in this section is mainly theoretical, where our primary aim is to show that radiation pressure induced interface deformation can lead to nonlinear processes very similar to traditional third order nonlinear processes such as optical solitons. In terms of experiment, it is perhaps easier to investigate nonlinear optofluidic processes using a high-Q optical resonator based on a liquid droplet, as will be discussed in the next section.
WGM induced droplet deformation
Figure 3(a) illustrates the second example of radiation pressure induced nonlinear optofluidic processes, where the key element is a high-Q WGM circulating along the equator of a liquid droplet in air. Since the direction of the radiation pressure always points from the high refractive index material to the low index material, the presence of a high-Q WGM would push the spherical droplet surface outwards and consequently enlarge the circumference of the droplet's equator. Since radiation pressure is proportional to the optical power carried by the WGM, in the limit of small droplet deformation, we expect that the equator circumference should increase linearly as a function of WGM power and shift the WGM resonance frequency as a direct result. This phenomenon is very similar to the Kerr effect. In the following analysis, we establish an analytical framework and provide an order of magnitude estimate for the frequency shift associated with this nonlinear optofluidic process. Our analysis suggests that the effective strength of the aforementioned nonlinear optofluidic process can be several orders of magnitude larger than the traditional Kerr effect. The starting point of our analysis is the Young-Laplace equation, Eq. (8). We assume a small Bond number, i.e., the buoyant force is much smaller than the surface-tension force and is ignored. In Eq. (8), Δp represents the pressure difference inside and outside of the droplet, P_opt is the radiation pressure generated by the WGM, σ represents the surface tension of the droplet, and κ is the mean curvature of the droplet. Since our main purpose is to provide an order of magnitude estimate, we take the following steps to simplify our analysis. First, we restrict our considerations to the fundamental transverse electric (TE) mode. Mathematically, this means that the optical field of the WGM can be written in the form of Eq. (9) [25], where the angular mode numbers l and m are integers satisfying l > 0 and −l ≤ m ≤ l, and ω is the angular frequency. The subscript "q" is either "co" if it represents core parameters of the WGM resonator or "cl" if it denotes cladding parameters. For example, k_co refers to the optical wave vector within the droplet, and k_cl is the cladding wave vector. Similarly, Z_co and Z_cl represent the impedance of the droplet core and cladding material, respectively. In Eq. (9), X_lm is the vector spherical harmonic function [25] and represents the angular variation of the WGM. As to the radial profile of the WGM, the function g_l(k_q r) is either the l-th order spherical Bessel function (for the core) or the spherical Hankel function of the first kind (for the cladding), as given in Eq. (10), where a is the radius of the un-deformed sphere. For the fundamental WGM, the m = l mode and the m = −l mode are identical except for the direction of light circulation. For convenience, we only consider the m = l mode and denote it as |l,l>. Given Eq. (9a), the WGM radiation pressure P_opt on the droplet interface is given by Eq. (11) [16], where ε0 is the free space permittivity, n_co and n_cl represent the core and cladding refractive indices, and E_surf is the electric field on the droplet surface. Note that the expression for the optical force in Eq. (11) is slightly different from the one we used in Eq. (5), which is due to the difference in electric field direction [16]. From the expression in Eq. (9a), we can verify that P_opt depends on θ but not on φ; additionally, for the |l,l> mode, |E_surf|² is symmetric with respect to the equator plane.
Numerically solving Eqs. (9) to (11) can give us the form of droplet deformation. However, the mathematical process is complex and can obscure the physics. In this paper, we emphasize the physical reality and utilize several assumptions to obtain an intuitive estimate of droplet deformation. First, we assume that the droplet deformation can be approximated by an oblate spheroid, as shown in Fig. 3(a). This assumption simplifies our analysis considerably. For example, assuming a spheroid shape, the mean curvature of the droplet interface takes a simple closed form in terms of the normalized equatorial and polar radii x_e and x_p defined in Fig. 3(a). Furthermore, under the spheroid assumption, the WGM resonance shift can be readily estimated using the perturbation theory in [26]. The next step is to expand both sides of Eq. (8) using spherical harmonic functions. The fluid pressure difference Δp is reduced by the constant amount given in Eq. (13a); in other words, the reduction in fluid pressure is a constant term that balances the average magnitude of the radiation force (integrated over the entire droplet surface). After subtracting this constant component (i.e., Δp), it is the θ dependence of the radiation pressure that determines droplet deformation. In applying Eq. (13b), we essentially use the first non-zero alternating component to estimate the magnitude of droplet deformation. Obviously, to satisfy Eq. (8) exactly, the mean curvature must contain higher "frequency" components. However, using the curvature to calculate droplet shape requires double integration, which can be regarded as a low-pass filter that significantly suppresses these higher "frequency" components. Therefore, it is reasonable to discard the higher order spherical harmonic terms (L ≥ 4), and use Eq. (13) to estimate the magnitude of droplet deformation.
To further simplify Eq. (13b), we note that the right hand side of Eq. (13b) is a function of x_e only. Multiplying the integral by the droplet radius a, we can define a dimensionless parameter and numerically evaluate it as a function of x_e. The result, which is shown in Fig. 3(b), suggests that this integral is almost a perfectly linear function of x_e. Thus we introduce a dimensionless parameter Γ_σ as the proportionality constant of this linear dependence (Eq. (14)).
Through least square fitting, we determine the value of Γ_σ to be 1.01.
To obtain a closed form formula, we express |E_surf|² in Eq. (11) in terms of its peak value on the droplet surface, which leads to Eq. (18). Equation (18) relates droplet deformation to the peak electric field intensity on the droplet interface and is the key result of this paper. In the next section, we consider several examples of liquid droplets and calculate their deformation as a function of the optical power associated with the circulating WGM.
Estimate of nonlinear optofluidic effects based on high-Q WGMs
Using Eq. (18), we can calculate the radiation pressure induced WGM frequency shift and quantitatively compare nonlinear optofluidics with the traditional Kerr effect. We choose the following parameters in our calculations. The droplet core is taken to be water, with air as the cladding.
The droplet radius varies from 10 µm to 400 µm. All WGMs are the fundamental TE mode |l,l>, and their angular momenta l are chosen to ensure that the WGM wavelengths λ are in the vicinity of 1.56 µm. Table 1 lists the l and λ of the TE |l,l> mode in such droplets with different radii a. The resonance frequencies are obtained by matching the transverse field components across the droplet interface [27], which is assumed to be a perfect sphere. Equation (18) links droplet deformation with the peak surface field intensity, which is unfortunately not directly measurable. Therefore, we need to further relate the peak field intensity in Eq. (18) to the optical power or energy of the circulating WGM. To accomplish this, we take advantage of the fact that the deformation is linearly proportional to the peak surface field intensity. By integrating the Poynting vector across the φ = 0 plane, we can calculate the total optical power carried by the WGM. Similarly, through volume integration of the energy density, we can obtain the total energy stored within the WGM. Then, we multiply the electric field intensity by an appropriate normalization factor such that the total power or energy of the WGM becomes the desired value. This normalization factor is then used to calculate the peak surface field intensity associated with the desired WGM power or energy. Once the peak surface field intensity is obtained, the only unknown parameter in Eq. (18) is the dimensionless factor Γ_θ^lm, which can be obtained through numerical integration of Eq. (17) and is shown in Fig. 4(d). Using the procedure described above, we set the power associated with the circulating WGM to be 1 W and use Eq. (18) to calculate the radiation induced interface deformation for different droplet radii. The results are shown in Fig. 5(a). The red circles give the normalized droplet deformation (ΔR/a) induced by the WGM radiation pressure, where we assume the surface tension is that of water (σ = 72 mN/m). The blue and black diamonds represent the estimated Kerr effect in water (blue) and CS2 (black), produced by the same peak surface field intensity. These estimates are simply obtained from the third order nonlinear susceptibility χ(3) of the respective liquid. Since both ΔR/a and Δn represent the same physical effect, i.e., the relative change in the optical path of the high-Q resonator, they can be shown in the same figure for direct comparison. Clearly, the nonlinear optofluidic effect is three to five orders of magnitude stronger than the Kerr effect. Since the nonlinear optofluidic process can be significantly stronger than the Kerr effect, it is worth asking the question: is it possible to use only a few photons to generate a measurable WGM frequency shift? To answer this question, we first calculate the peak surface field intensity for which the total WGM energy equals the single photon energy ħω. Then, we substitute this value into Eq. (18) and calculate the normalized radius change ΔR/a induced by the presence of a single photon. Finally, using the perturbation theory results in [26], the WGM frequency shift associated with the deformed spheroid is given by Eq. (19). Combining Eqs. (18) and (19), we can now calculate the radiation pressure induced frequency shift. The results are shown in Fig. 5(b). Given the fact that WGMs with Q factors as high as 2.3 × 10⁶ have been observed using a liquid resonator [22], Fig. 5(b) suggests that one should be able to use only a few photons to produce an experimentally observable resonance shift. If one can reduce the surface tension further, down to the level of σ = 0.1 mN/m, then the presence of even a single photon can significantly change the characteristics of the liquid droplet resonator.
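As a back-of-the-envelope companion to this estimate, the sketch below checks when a radiation-pressure-induced shift would be resolvable by comparing a fractional resonance shift against the fractional linewidth 1/Q. The ΔR/a values fed in are hypothetical placeholders rather than results of Eq. (18), and the approximation |Δω/ω| ~ ΔR/a is used only for order-of-magnitude purposes (the precise perturbation-theory expression is Eq. (19)).

```python
# Back-of-envelope resolvability check for a radiation-pressure-induced WGM
# shift. Assumption: the relative shift is of order delta_R/a; delta_R/a must
# come from Eq. (18) and is supplied here as a hypothetical input. A shift is
# considered measurable when it exceeds the fractional linewidth 1/Q.

def is_resolvable(delta_r_over_a, q_factor):
    fractional_linewidth = 1.0 / q_factor
    return delta_r_over_a >= fractional_linewidth, fractional_linewidth

# Hypothetical single-photon deformation values for two surface tensions.
for sigma_label, dr_over_a in [("72 mN/m", 1e-9), ("0.1 mN/m", 1e-6)]:
    ok, lw = is_resolvable(dr_over_a, q_factor=2.3e6)
    print(f"sigma={sigma_label}: dR/a={dr_over_a:.1e}, 1/Q={lw:.1e}, resolvable={ok}")
```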
It is worth remarking that liquid systems with ultralow surface tensions have been experimentally attained using surfactants [28][29][30]. For example, by introducing a bimolecular layer of preformed ferric stearate, one can reduce the surface tension of water down to 1 mN/m [28]. In emulsion or microemulsion systems containing water (a polar fluid) and oil (a nonpolar fluid), surfactants can lower the surface tension to as little as 1 µN/m at an optimal concentration [29]. Therefore, for these liquid systems with ultralow surface tension, nonlinear optics at the single photon energy level should be experimentally feasible.
In our WGM analyses, the effect of gravity is ignored. To justify this choice, we can estimate the Bond number of the liquid droplet. For a sphere with radius a, the gravitational and the surface-tension effects can be estimated as ρga and σ/a, respectively; their ratio, the Bond number ρga²/σ, is much smaller than one for the droplet sizes considered here. We can also consider a droplet immersed in a liquid environment rather than in air; such a resonator is similar to the one investigated in [21]. As a specific example, we estimate the deformation induced by 1 W of WGM power circulating within an oil-in-water droplet. This result suggests that the conclusions we obtained using water-in-air droplets are broadly applicable to other types of liquid resonators.
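The Bond-number argument can be checked numerically. The sketch below uses the standard definition Bo = ρ g a² / σ with water-in-air parameters over the 10-400 µm radius range considered above; the choice of water properties is an assumption consistent with the examples in the text.

```python
# Quick check of the claim that gravity can be ignored for the WGM droplets:
# the Bond number Bo = rho * g * a^2 / sigma compares gravitational and
# surface-tension effects. Values assume a water droplet in air.

RHO = 1.0e3      # kg/m^3, water
G = 9.81         # m/s^2
SIGMA = 72e-3    # N/m, water-air surface tension

for a in (10e-6, 100e-6, 400e-6):
    bo = RHO * G * a**2 / SIGMA
    print(f"a = {a*1e6:6.0f} um  ->  Bond number = {bo:.2e}")
```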
Equation (18) suggests that droplet deformation depends on interfacial tension but not on liquid viscosity. This is to be expected, since our discussion is based on static analysis, i.e., the Young-Laplace equation in Eq. (8), which does not involve viscosity. We do, however, expect that viscosity should play a role in the dynamics of droplet deformation.
Finally, we point out that both the spherical WGM system analyzed here and the planar soliton case discussed in section 2 can be connected to liquid-based spheroid resonators. On the one hand, the spherical droplet analyzed in section 3 is a special case of the liquid spheroid resonators. On the other hand, a prolate spheroid resonator with an extremely large major axis can essentially be regarded as an infinitely long dielectric cylinder, which can also support WGMs that circulate along the perimeter of the cylinder. And if we "unwrap" the infinitely long cylinder, the WGM traveling along the cylinder perimeter becomes similar to the self-guided soliton waves analyzed in section 2.
Summary
In this paper we analyze the possibility of using all-liquid systems to achieve nonlinear optical effects with extremely low power thresholds. The defining feature of the proposed nonlinear optofluidic processes is the interface deformation due to optical radiation pressure. In particular, we find that through the formation of a bulged interface, optical waves can self-focus within a liquid slab waveguide. Under appropriate conditions, these self-guided optical waves are governed by the nonlinear Schrödinger equation and can be described by the familiar soliton solutions. We then consider a spherical liquid droplet that supports high-Q WGMs. With sufficiently high power, the optical force associated with the circulating WGM can deform the liquid droplet and induce a frequency shift of the WGM resonance. By applying a spherical harmonic expansion, we estimate that the radiation pressure induced nonlinearity is several orders of magnitude stronger than the traditional Kerr effect. In fact, we find that by using a liquid system with a low but experimentally achievable surface tension, it is possible to produce a measurable frequency shift at the energy level of a few photons. Such an effect may ultimately lead to nonlinear interactions between two single photons.
Fig. 1. (a) An optofluidic soliton formed from a self-guided optical wave that is confined within a liquid bulge formed through radiation pressure. (b) A liquid droplet that contains a high-Q WGM circulating along the equator. The radiation pressure of the WGM forms the bulge, which in turn shifts the WGM resonance frequency.
Fig. 2. (a) Illustration of an optofluidic soliton. The structure contains two liquids (refractive indices n1 and n2) and air (n3 = 1). The thickness of the waveguide is h0 in the absence of the optical field. The radiation pressure of the guided optical signal produces the bulge shown in the figure. The thickness of the bulge that serves as the waveguide core is denoted as h(x) = h0 + Δh(x). (b) The effective index of the asymmetric dielectric waveguide defined in (a) as a function of the core layer thickness h(x), for waveguide parameters including n1 = 1.5.
The two micellar phases possess different densities and refractive indices. The relative magnitude of buoyancy to interfacial tension can be estimated as in the paragraph above. Then, based on Eq. (7b), and assuming that the optical beam is confined in a cross-section of width W = 10 µm (i.e., the soliton width) and height H = 3 µm (i.e., the waveguide thickness), the optical power of such a beam can be simply estimated.
Fig. 3. (a) A liquid droplet with a high-Q WGM circulating near its equator. The radiation pressure of the WGM deforms the original spherical droplet (the blue circle) and generates the bulge (the solid black line), which is approximated as an oblate spheroid (the dashed black line). The normalized equator radius x_e is defined as the ratio of the spheroid radius at the equator (a + ΔR) and the radius of the original sphere a. (b) The integral F(x_e) defined in the text, evaluated as a function of x_e.
The optical pressure P_opt depends on θ but not on φ. Additionally, for the |l,l⟩ mode, |E_surf|² is symmetric with respect to the equator plane. Furthermore, under the spheroid approximation the integral depends on x_e only. Multiplying the integral by the droplet radius a, we can define a dimensionless parameter and numerically evaluate it as a function of x_e. The result, which is shown in Fig. 3(b), suggests that this integral is almost a perfectly linear function of x_e. Thus we introduce a dimensionless parameter Γ_σ to capture this linear dependence.
Fig. 4. (a) The radial dependence of |E|² of a fundamental TE mode (l = 257) in a spherical droplet. (d) The Γ_θ^lm values (blue circles) are obtained numerically using Eq. (17) and connected by the dashed line.
Fig. 5. (a) Radiation-pressure-induced droplet deformation (ΔR/a) in droplets with different radii (red circles). The total power of the WGM that circulates along the droplet equator is fixed at 1 W. For comparison, the changes in refractive index (Δn) due to the Kerr effect are shown in the same figure, estimated using the third-order susceptibility χ⁽³⁾ for the same peak surface field.
Following the procedure described above, and with the results shown in Fig. 5(a), the relative radius change induced by 1 W of WGM power in a water-in-air droplet with identical radius, interfacial tension and WGM power serves as the reference for the oil-in-water comparison discussed in the text.
Table 1. The angular momenta l and wavelengths λ of the TE |l,l⟩ modes in droplets with different radii a.
The human brain somatostatin interactome: SST binds selectively to P-type family ATPases
Somatostatin (SST) is a cyclic peptide that is understood to inhibit the release of hormones and neurotransmitters from a variety of cells by binding to one of five canonical G protein-coupled SST receptors (SSTR1 to SSTR5). Recently, SST was also observed to interact with the amyloid beta (Aβ) peptide and affect its aggregation kinetics, raising the possibility that it may bind other brain proteins. Here we report on an SST interactome analysis that made use of human brain extracts as biological source material and incorporated advanced mass spectrometry workflows for the relative quantitation of SST binding proteins. The analysis revealed SST to predominantly bind several members of the P-type family of ATPases. Subsequent validation experiments confirmed an interaction between SST and the sodium-potassium pump (Na+/K+-ATPase) and identified a tryptophan residue within SST as critical for binding. Functional analyses in three different cell lines indicated that SST might negatively modulate the K+ uptake rate of the Na+/K+-ATPase.
Introduction
Somatostatin (SST) is an inhibitory peptide hormone produced by specific cells, including somatostatinergic neurons in several brain regions and somatostatin-secreting cells, known as delta cells, in pancreatic islets, the pyloric antrum and the duodenum. SST was initially discovered as a factor that inhibits growth hormone (GH) release from the anterior pituitary [1]. To date, SST is understood to also act as an inhibitor of synaptic transmission in the central nervous system, to regulate insulin and glucagon release from the pancreas [2], and to suppress digestive secretions [3]. SST exists primarily in two functional forms, a canonical 14-amino acid peptide (SST14) and an N-terminally extended 28-amino acid version of this peptide (SST28) [4]. Both versions of the peptide form cyclic structures, due to the presence of a highly conserved disulfide bridge, and are derived from the proteolytic cleavage of the 116-amino acid preprosomatostatin (PPSST), which is encoded on chromosome 3 in humans [4]. Less prominent cleavage products of PPSST exist, including a peptide encompassing residues 31-43 of unknown function, named neuronostatin [5]. SST is assumed to exert its influence primarily through interactions with five cognate G protein-coupled receptors (GPCRs), known as somatostatin receptors (SSTR1 to SSTR5), which are expressed widely throughout the body [6,7]. Both SST14 and SST28, as well as the closely related paralog cortistatin (CST), activate these receptors through a shared four-amino acid binding epitope comprised of the single-letter amino acid sequence FWKT [8], albeit with different potencies [6].
Given the breadth of SST's physiological roles, it is no surprise that SST dysfunction is implicated in several human diseases. For decades, SST analogs, including octreotide and lanreotide, have been used to treat neuroendocrine tumors and endocrine disorders, exploiting their ability to mimic the endogenous peptide's inhibition of excessive hormone release that is often associated with these conditions [9][10][11]. Because SSTRs are also highly expressed by several other cancer cell types, these receptors increasingly serve as therapeutic cancer targets and biomarkers [12][13][14]. Several lines of evidence have also linked SST to Alzheimer's disease (AD) [15][16][17]. One of the earlier biochemical findings in the AD research field documented reduced levels of SST in postmortem brains of individuals afflicted with the disease [18]. Since then, the key findings of this seminal study have been confirmed by several independent investigators [19][20][21]. Somatostatinergic neurons moved into the epicenter of AD research a few years later, when two high-profile studies reported that post-mortem analyses of AD-afflicted brains revealed amyloid β (Aβ) plaques to co-localize with these specialized neurons and neurofibrillary tangles to be predominantly observed in them [22,23]. Whereas the latter data may suggest a negative influence of somatostatin on AD, the ability of somatostatin to induce the expression of the Aβ-degrading enzyme neprilysin is consistent with a protective role [24]. More recently, AD was genetically linked to the SST gene, when separate reports based on two independent ethnic sample cohorts led the authors to conclude that polymorphisms in this gene may increase the risk of developing AD [25,26].
We previously used an unbiased mass spectrometry-based affinity-capture approach to identify human brain proteins that bind to the amyloid β (Aβ) peptide, one of the key players in the pathogenesis of AD [27]. This analysis revealed SST as the smallest natural peptide in our dataset that binds selectively to oligomeric Aβ (oAβ). Follow-up validation work revealed that SST interferes with Aβ fibrillization and promotes the formation of distinct SDS-resistant oligomers. In light of the critical role that the amyloidogenic characteristics of Aβ play in AD, it is intriguing to note that SST and several other peptide hormones belong to a group of natural amyloids that are stored in this highly condensed format in secretory granules prior to their cellular release [28]. In vitro studies investigating the dynamics and physicochemical aspects of SST aggregation have shown that SST can form laterally associated nanofibrils composed of fixed β-hairpin backbones, similar to other amyloidogenic proteins [29]. The potential significance of these earlier data stems from a plausible scenario, wherein SST might not only interact with oAβ but crosstalk between functional and disease-related amyloidogenic proteins could play a role in the etiology of AD [30].
Based on our Aβ-binding data, we were interested to learn more about SST's own binding partners and molecular environment. Whereas the principal binding partners of SST, the SSTRs, have been individually isolated and extensively investigated [6,[31][32][33], to our knowledge, a mass spectrometry-based in-depth analysis of SST-interacting proteins has not been reported. In order to address this gap in the literature, we undertook such a study, using biotinylated SST peptides as baits and human frontal lobe extracts as the biological source for identifying SST-binding candidates. Making use of a previously optimized workflow that includes isobaric tagging for relative and absolute quantitation of proteins (iTRAQ) [34], we report here on a selective interaction between SST and members of the P-type ATPase family. Our data reveal that both SST14 and SST28, but not the similar neuropeptide VIP28, can engage in this interaction. In follow-on work, we characterized the binding epitope and explored how the presence of SST influences the activity of the Na + /K + -ATPase.
Workflow of SST interactome analyses
The purpose of this study was to generate an in-depth inventory of human brain proteins that are capable of binding to SST using an unbiased in vitro discovery approach based on affinity capture mass spectrometry. To generate the bait matrices for these analyses, N-terminally biotinylated SST14 or SST28 were pre-bound to commercial streptavidin-conjugated agarose beads ( Fig 1A). Human frontal lobe extracts derived from three postmortem brains of individuals that had died of non-dementia causes served as the biological source material. Although SST14 is the predominant form of the peptide, analyses were extended to SST28 out of concern that the small size of SST14 may preclude binding of potential interactors due to steric hindrances caused by the affinity matrix ( Fig 1B). Previous investigators had also taken advantage of the 14-amino acid N-terminal extension of SST28 as a natural 'spacer' between the biotinyl group and the receptor binding sequence [35,36] (in this prior work, the SST ligands were, however, added to cultured cells prior to their solubilization in the presence of detergent, an approach that is precluded when dealing with human brain homogenates). As an additional negative control an N-terminally biotinylated SST derivative (hereafter referred to as SST14Δ7-10) was used with the amino acid sequence AGCKNFAFTSC, which lacks the four aforementioned critical residues for SST14 binding to SSTRs (Fig 1A). Sample groups were analyzed side-by-side in triplicate. The zwitterionic detergent 3-((3-cholamidopropyl) dimethylammonio)-1-propanesulfonate (CHAPS) was used to solubilize membrane proteins in human frontal lobe samples, a choice based on previous reports that established the ability of this detergent to extract the SSTRs [35,37]. Following overnight incubation of the brain extracts with affinity matrices, bound proteins were eluted by rapid acidification, denatured in 9 M urea, reduced, alkylated and trypsinized. To avoid inadvertent variances between samples, which are notoriously observed when mass spectrometry analyses are undertaken consecutively, individual tryptic digests were labeled with distinct iTRAQ labels in an eight-plex format and then combined. The iTRAQ-labeled mixture was purified using microscale reversed phase (RP) chromatography at low pH in parallel with high pH reversed phase fractionation. Mass spectra for peptide sequencing and quantification were obtained over four-hour RP nano-HPLC separations with simultaneous data collection on an Orbitrap Fusion Tribrid mass spectrometer running an MS/MS/MS (MS 3 ) analysis method. The relative levels of peptides in the eight samples were determined by comparing the intensity ratios of the corresponding low mass iTRAQ signature ions in MS 3 fragment spectra.
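As an illustration of the quantitation logic described above (not the authors' actual code), the sketch below summarizes peptide-level iTRAQ reporter-ion ratios into a protein-level enrichment value as the median log2 ratio relative to a reference channel, with the interquartile range as a spread estimate; the toy intensities and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Toy PSM-level reporter-ion intensities (hypothetical values for illustration only).
# Each row is one quantified peptide-spectrum match; columns are two iTRAQ channels.
psms = pd.DataFrame({
    "protein":       ["ATP1A1", "ATP1A1", "ATP1A1", "PCCA", "PCCA", "PCCA"],
    "iTRAQ_SST28":   [8200, 7600, 9100, 1500, 1420, 1610],   # bait (SST28) channel
    "iTRAQ_control": [1100, 1050, 1300, 1480, 1390, 1550],   # SST14Δ7-10 reference channel
})

# Peptide-level ratios relative to the reference channel, in log2 space.
psms["log2_ratio"] = np.log2(psms["iTRAQ_SST28"] / psms["iTRAQ_control"])

# Protein-level summary: median log2 ratio across supporting peptides, plus IQR as spread.
grouped = psms.groupby("protein")["log2_ratio"]
summary = pd.DataFrame({
    "n_peptides": grouped.size(),
    "median_log2": grouped.median(),
    "iqr": grouped.quantile(0.75) - grouped.quantile(0.25),
})
summary["fold_enrichment"] = 2 ** summary["median_log2"]
print(summary)
```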
The SST interactome
The combined SST interactome analyses from three biological replicates yielded peptide sequence assignments to 13,843 mass spectra, which matched to 402 protein groups, comprising a total of 879 unique proteins, with a false discovery rate (FDR) of 3.6% (Fig 2A). Stringent filtering on the basis of the quality of quantitation data, i.e., requiring proteins to be quantified by a minimum of three iTRAQ reporter ion profiles and to exhibit low (< 30%) deviation in their relative levels between biological replicates, reduced the list of SST candidate interactors to 88 proteins (please see Table 1 for a truncated list of proteins identified, sorted according to their average enrichment in the three biotin-SST28 affinity capture eluates, and S1 Table for a full list of candidate interactors, sorted alphabetically). Propionyl-CoA carboxylase, an endogenously biotinylated enzyme ubiquitously expressed in eukaryotic cells [38] bound the streptavidin affinity capture matrix independently of the SST bait peptides, serving as an internal control (Fig 2B; S1 Table). Each of the two propionyl-CoA carboxylase subunits was identified from more than 1,850 tandem mass spectra, giving rise to the highest protein sequence coverage in the data set. Consistent levels of propionyl-CoA carboxylase subunits A (CV = 0.13) and B (CV = 0.15) across all eight eluates indicated that the lysates contained similar levels of brain proteins and that capture conditions were comparable across all samples.
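The stringency filter described above (quantitation by at least three iTRAQ reporter-ion profiles and less than 30% deviation between biological replicates) can be expressed as in the following sketch; the table, its values and column names are hypothetical stand-ins for the quantitation export.

```python
import pandas as pd

# Toy protein-level table (hypothetical values): enrichment ratios relative to the
# SST14Δ7-10 control for three biological replicates and the number of reporter-ion
# profiles supporting each quantitation.
proteins = pd.DataFrame({
    "protein":             ["ATP1A1", "ATP1B1", "PCCA", "SINGLE_HIT_PROTEIN"],
    "n_reporter_profiles": [42, 11, 120, 2],
    "ratio_rep1":          [7.1, 3.9, 1.0, 5.0],
    "ratio_rep2":          [6.4, 4.2, 1.1, 0.4],
    "ratio_rep3":          [7.9, 3.6, 0.9, 2.2],
})

ratio_cols = ["ratio_rep1", "ratio_rep2", "ratio_rep3"]
mean_ratio = proteins[ratio_cols].mean(axis=1)
rel_dev = proteins[ratio_cols].std(axis=1) / mean_ratio   # deviation between replicates

candidates = proteins[(proteins["n_reporter_profiles"] >= 3) & (rel_dev < 0.30)].copy()
candidates["mean_enrichment"] = mean_ratio[candidates.index]
print(candidates.sort_values("mean_enrichment", ascending=False)[["protein", "mean_enrichment"]])
```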
As a first step in the characterization of candidate SST-binders, the 50 proteins most selectively bound to SST28 capture matrices were subjected to a Gene Ontology (GO) analysis, which revealed enrichment of proteins involved in ion transport. The molecular functions 'cation-transporting ATPase activity,' 'active transmembrane transporter activity,' and 'ATPase activity, coupled to transmembrane movement of substances' were overrepresented ( Fig 2C). Similarly, 'ATP hydrolysis coupled cation transmembrane transport,' 'ion transport,' and 'proton transmembrane transport' were top-listed, highly significant biological processes ( Fig 2C). Consistent with this result, the most selectively enriched SST28 candidate binders included Na + /K +transporting ATPase, excitatory amino acid transporter, V-type proton ATPase, and ADP/ATP translocase (Table 1). In addition to transporters, SNARE protein family members syntaxin, synaptosomal-associated protein 25 (SNAP25), and synaptotagmin were identified as candidate SST28 binders. These proteins, along with other members of the SNARE complex, are known to be involved in the fusion of synaptic vesicles with their target membranes. Other candidate SST28 interactors included the catalytic (serine/threonine-protein phosphatase 2B) and regulatory (calcineurin subunit B) subunits of the calcium-dependent protein phosphatase calcineurin. SSTRs themselves, however, were not observed (please see Discussion for interpretation). Taken together, these results were unexpected and pointed toward an unappreciated interaction of SST with members of the family of P-type ATPases and SNARE protein complexes.
SST28 binds to multiple members of the P-type ATPase superfamily
Among the P-type ATPases that showed the strongest levels of SST-dependent enrichment were subunits of the Na + /K + -ATPase (alpha-1, alpha-2, alpha-3, beta-1) and isoforms 1 and 2 of the plasma membrane Ca 2+ -transporting ATPase. In fact, the alpha-1 subunit of the Na + / K + -ATPase (ATP1A1) displayed the highest level of SST co-enrichment amongst all SST candidate interactors (Table 1), ranging from 6 to 8 times the levels observed in SST14Δ7-10 negative control eluates ( Fig 2B). These proteins were enriched in biotin-SST28 affinity capture eluates but not in biotin-SST14 eluates. To determine if steric hindrance caused by tethering SST14 to the affinity matrix precluded binding of ATP1A1, affinity capture experiments were conducted in which the ability of free SST14 to compete for binding to the biotin-SST28 bait matrix was assessed ( Fig 3A). The subsequent immunoblot assessment of assay fractions validated that ATP1A1 is, indeed, captured by biotin-SST28 ( Fig 3B). Moreover, pre-incubation of the brain lysate with free SST14 diminished the capture of ATP1A1 by biotin-SST28, consistent with the interpretation that SST14 can also bind to the Na + /K + -ATPase when it is not sterically restrained. Considering the tendency of abundant proteins to contaminate affinity purified preparations, the concern arose whether ATPase capture was indeed SST specific or merely reflected the high relative protein levels of these P-type pumps in the brain. Silver staining of the affinity capture eluates revealed that washing of the capture matrix had removed many of the most abundant proteins ( Fig 3B). In fact, only a small subset of proteins visible in the input samples were retained by the SST28 matrix. Collectively, these findings established that both SST28 and SST14 interact selectively with Na + /K + -ATPases.
Characterization of SST binding to the Na + /K + -ATPase
To begin to delineate the epitope within SST required for its Na + /K + -ATPase affinity, we next performed a series of competitive binding experiments (Fig 3A) using various SST-derived peptides. Specifically, again using biotin-SST28 or biotin-SST14Δ7-10 as baits, human brain lysates were pre-incubated with either SST14 (50 μM) or a mutated version of the peptide, SST14-W8P (50 μM), which contains a tryptophan to proline substitution in the previously determined SSTR binding epitope. Immunoblot analyses validated binding of SST28 to ATP1B1, the beta-1 subunit of the Na + /K + -ATPase, in addition to ATP1A1. As expected, both interactions could be blocked by pre-incubation of the brain lysate with SST14 (Fig 4A). Additionally, these experiments revealed that the single amino acid substitution in the receptor binding site of SST prevented its binding to these proteins. This inference was based on the observation that pre-incubation with SST14-W8P did not impair the interaction of ATP1A1 or ATP1B1 with biotinylated SST28 (Fig 4A). Analogous competition experiments in which N- and C-terminal truncated SST14 derivatives SST5-11 or SST6-10 were used as blocking peptides indicated that these peptides, although containing the critical tryptophan-8 residue, did not compete with SST28 for binding to the Na + /K + -ATPase (Fig 4B). In other words, the core SSTR binding sequence FWKT within SST seemed to be essential but insufficient for binding the Na + /K + -ATPase.
Fig 2. (B) Relative quantitation of propionyl-CoA carboxylase, which displayed similar levels of enrichment across all samples, and Na + /K + -transporting ATPase subunit alpha-1, which was highly enriched in biotin-SST28 affinity capture eluates. The box plot depicts the enrichment ratios (R) of individual propionyl-CoA carboxylase peptides used for quantitation in log2 space, in addition to the median peptide ratio and interquartile range (IQR). A subset of peptides (red circles) was eliminated from the quantitation due to redundancy or failure to pass stringency thresholds. Relative protein levels are depicted as ratios, with the ion intensities corresponding to the heaviest isobaric labels acting as the reference. (C) 'Cellular Component' and 'Molecular Function' Gene Ontology analyses of the shortlisted proteins that displayed the highest enrichment in biotin-SST28 affinity capture eluates. https://doi.org/10.1371/journal.pone.0217392.g002
To further assess the specificity of the interaction between SST and P-type ATPases, the binding competition analyses were extended to vasoactive intestinal peptide (VIP), another 28-amino acid neuropeptide whose tissue expression overlaps with SST, with both peptides being present in the gastrointestinal tract and hypothalamus [39]. Pre-incubation of human brain extract with either SST28 or VIP revealed that SST28, but not VIP, was able to block the binding of biotin-SST28 bait to ATP1A1 (Fig 4C), further corroborating the conclusion that there is specificity to the interaction between SST and the Na + /K + -ATPase. We explored the possibility that SSTRs and ATP1A1 interact with SST via similar binding domains. To this end, we compared the sequences of all five human SSTRs and all four human alpha-subunits of Na + /K + -ATPases in the Uniprot database. This analysis revealed that SSTRs have low sequence identity with alpha-subunits. For example, the pairwise comparison of all SSTRs with ATP1A1 revealed sequence identities that ranged between 9.9% (SSTR4) and 12.9% (SSTR3) (not shown). Although the precise binding mode of SST to SSTRs is not known, due to the absence of a high-resolution structure, this result is in line with data which suggest that SST binds within SSTRs via a conformational discontinuous epitope [40].
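The pairwise identity comparison referred to above can be reproduced in outline with Biopython's global aligner, as sketched below; the alignment parameters, the identity convention (matches normalized to the shorter sequence) and the placeholder sequences are assumptions, and a real analysis would use the full UniProt sequences of SSTR1-SSTR5 and ATP1A1.

```python
from Bio import Align

# Global alignment with unit match score and zero mismatch score: the optimal score then
# equals the number of identical residues in the best alignment.
aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.match_score = 1
aligner.mismatch_score = 0
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

def percent_identity(seq_a: str, seq_b: str) -> float:
    matches = aligner.score(seq_a, seq_b)
    return 100.0 * matches / min(len(seq_a), len(seq_b))  # one of several identity conventions

# Placeholder sequences for illustration only (not real SSTR or ATP1A1 sequences).
toy_receptor = "MKTWFAVLLCGSSTRAQELINKYPDWQR"
toy_pump     = "MGDSKHEAVLPTWQRNCILFAYGEKKTD"

print(f"toy receptor vs toy pump: {percent_identity(toy_receptor, toy_pump):.1f}% identity")
```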
SST inhibits 86 Rb uptake by the Na + /K + -ATPase
In order to address whether the interaction between SST and the Na + /K + -ATPase has an effect on the activity of the pump, we undertook an in vitro 86 Rb + uptake assay in human neural progenitor (ReNcell VM) and mouse neuroblastoma (N2a) cell lines, which was adapted from previously established protocols [41]. We also included wild-type human embryonic kidney (HEK293) cells in these analyses, because these cells express negligible levels of SSTRs, yet are known to endogenously express the alpha-1 subunit of the Na + /K + -ATPase and to exhibit robust pump-dependent ion uptake [42]. Experiments were performed in the absence and presence of ouabain, a potent inhibitor of the Na + /K + -ATPase, to discern the Na + /K + -ATPasedependent and -independent uptake of 86 Rb + , a radionuclide which pharmacologically mimics K + . In the presence of SST14 (50 μM), total 86 Rb + uptake was reduced to 91.5%, 53.6% and 74.4% in ReN, N2a and HEK293 cells, respectively, compared to controls (Fig 5). In contrast, total 86 Rb + uptake was reduced to 50.7% in ReN cells, 8.9% in N2a cells and 54.3% in HEK293 cells following treatment with ouabain (2 mM). The ouabain-mediated reduction of 86 Rb + uptake was not potentiated by the treatment of cells with both ouabain (2 mM) and SST14 (50 μM), consistent with the interpretation that the inhibitory effect of SST14 on 86 Rb + uptake is mediated by the Na + /K + -ATPase. These findings indicate that SST14 reduced the activity of the Na + /K + -ATPase in a cell type-specific manner, with N2a cells being particularly susceptible to SST-mediated inhibition, HEK293 cells exhibiting highest levels of 86 Rb + /mg internalization but being only partially susceptible to SST-mediated inhibition, and ReN cells displaying resistance to SST-mediated Na + /K + -ATPase inhibition.
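A minimal sketch of how such uptake values can be derived from the raw read-out: scintillation counts are normalized to protein, expressed as a percentage of the untreated control, and the ouabain-sensitive (pump-dependent) component is obtained by subtracting the ouabain-resistant background; the cpm and protein values below are hypothetical placeholders, not measured data.

```python
import pandas as pd

# Hypothetical 86Rb+ uptake measurements for one cell line (cpm = counts per minute).
data = pd.DataFrame({
    "condition":  ["control", "SST14", "ouabain", "ouabain+SST14"],
    "cpm":        [52000,     28000,   4700,      4800],   # scintillation counts (hypothetical)
    "protein_mg": [0.21,      0.20,    0.22,      0.21],   # BCA protein per well (hypothetical)
})

data["uptake_per_mg"] = data["cpm"] / data["protein_mg"]

control = data.loc[data["condition"] == "control", "uptake_per_mg"].iloc[0]
ouabain = data.loc[data["condition"] == "ouabain", "uptake_per_mg"].iloc[0]

data["pct_of_control"] = 100.0 * data["uptake_per_mg"] / control
# Pump-dependent uptake = total uptake minus the ouabain-resistant (pump-independent) part.
data["pump_dependent"] = data["uptake_per_mg"] - ouabain

print(data[["condition", "pct_of_control", "pump_dependent"]])
```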
Discussion
Fig 4. Validation of SST binding to Na + /K + -ATPase alpha and beta subunits. (A) Immunoblot analysis of the competitive binding experiment with SST14 and the mutant peptide SST14-W8P (left) and quantification of the western blot data (right). Capture of ATP1A1 and ATP1B1 by biotin-SST28 can be blocked by pre-incubation of the brain lysate with free SST14 (50 μM; lanes 5-7), but not with a mutant SST14 with an amino acid substitution (W8P) in the receptor-binding site (50 μM; lanes 8-10). Note that the negative control bait peptide, biotin-SST14Δ7-10, failed to capture any detectable ATP1A1 (lanes 11-13). Coomassie staining of the same blot confirms that an equal amount of protein was loaded in each well, using the streptavidin subunits released from the affinity matrix as a loading control (lower panel). National Institutes of Health (NIH) ImageJ densitometry analyses of the anti-ATP1A1- and anti-ATP1B1-reactive bands appearing near the 98 kDa and 49 kDa molecular weight markers are shown in the panel to the right. (B) Capture of ATP1A1 by biotin-SST28 can be blocked by pre-incubation of the brain lysate with free SST28 (50 μM; lane 6) or SST14 (50 μM; lane 7), but not with SST14-W8P (50 μM; lane 8). Truncated versions of SST14 also fail to block capture of ATP1A1 (50 μM; lanes 9, 10), despite containing the receptor binding sequence (FWKT). Coomassie staining confirms that protein loading was consistent across the gel (middle panel). NIH ImageJ densitometry of the anti-ATP1A1-reactive band is shown in the lower panel. (C) Capture of ATP1A1 by biotin-SST28 can be blocked by pre-incubation of the brain lysate with free SST28 (25 μM).
The goal of the current study was to generate an inventory of human brain proteins that bind to SST. Biotin-SST affinity capture robustly enriched 88 proteins, each with at least 3 confident peptide-spectrum matches (PSMs). Among the top SST candidate interactors were multiple members of the P-type ATPase superfamily. Follow-up experiments, which centered on the Na + /K + -ATPase, validated that both SST28 and SST14 can bind to this pump. We then observed that binding exhibits selectivity with regards to both the SST bait peptide when compared to similar peptides, and to the P-type ATPase prey when compared to other abundant brain proteins. Moreover, we identified a tryptophan residue within SST that appears to be critical for binding to the Na + /K + -ATPase. Interestingly, this tryptophan is embedded within the core FWKT sequence motif known to also mediate binding of SST to its cognate receptors. Finally, we observed that SST has a cell type-specific inhibitory effect on the activity of the Na + /K + -ATPase. Our study took advantage of recent improvements to mass spectrometry instrumentation and advanced workflows that incorporated isobaric labeling for relative quantitation. Since we sought to identify any protein that might interact with SST in the brain, a rather generic affinity capture approach was applied. In hindsight we realized that this strategy disfavored the purification of canonical SSTRs which, as members of the GPCR protein family, become unstable and lose their ligand-binding ability when removed from their native environment in a manner that disrupts their physiological interaction with G-proteins [43][44][45], necessitating cross-linking and modified affinity-capture protocols to stabilize the interaction of SST with its canonical receptors [36,45,46]. This limitation may also extend to other GPCRs, including opioid and dopamine receptors, which had been implicated in SST-dependent phenotypes by others [47,48], possibly on account of an inherent propensity of many GPCRs to operate as homo- or heterodimers [49]. A PubMed query that combined the search terms 'somatostatin' and 'mass spectrometry', although producing more than 150 hits, revealed no prior study that pursued the objective of identifying SST interacting proteins.
Close examination of the pertinent literature suggests that this striking omission may at least in part reflect the fact that the discovery of canonical SSTRs in 1992 predated technological developments underlying modern protein mass spectrometry. Consequently, the first two canonical SSTRs (SSTR1 and 2) were identified through a genomic hybridization strategy, which specifically targeted GPCRs of pancreatic islet cells because earlier research had suggested SSTRs to be members of this receptor family [50]. The subsequent discoveries of SSTR3, 4 and 5 followed in short order, and were based on conceptually analogous genomic hybridization screens [51][52][53]. Thus, whereas until just before that time, the focus in the pertinent literature had been on the purification of SST receptors by biochemical means, this line of research was largely abandoned after 1992. When SSTR1 and SSTR2 were expressed in CHO cells, binding experiments with radiolabeled SST probes revealed that approximately 90% of binding depended on the presence of the heterologous SSTRs [31]. These and similar results obtained with the other SSTR paralogs may have limited the motivation to look any further, although they did not rule out the existence of additional physiological SST interactors. Consistent with the view that the known SSTRs may not account for all binders of SST, leading up to 1992, the molecular masses of candidate SST interacting proteins were reported to range from 27 to 228 kDa [54], and the masses of canonical SSTRs of approximately 40 kDa would have been difficult to reconcile with data of several investigators in the field.
The most striking finding from the current study was a selective interaction between SST and members of the P-type ATPase superfamily. In addition to P-type pumps, our interactome study revealed several other novel candidate SST interactors. Some of these had been indirectly linked to SST. For example, both subunits of the heterodimeric protein calcineurin, i.e., the catalytic serine/threonine-protein phosphatase 2B (PPP3CA) and its regulatory subunit (PPP3R1), emerged in this work as candidate SST binders. Calcineurin had previously been proposed to operate as a downstream effector of SSTRs [55]. Similar levels of enrichment were observed for members of the synaptic vesicle fusion complex, including syntaxin-1A/B, SNAP25, synaptotagmin, and synaptic vesicle glycoprotein 2A.
Focusing on the Na + /K + -ATPase for validation studies, we demonstrated that a moiety overlapping with the SSTR binding sequence FWKT is required by both SST28 and SST14 to interact with this pump. Even more intriguingly, we previously reported that this region of SST is also critical for its binding to oligomeric Aβ [27,30], an observation that triggered our interest in other SST interactors in the first place. Indicative of some specificity of this interaction, other abundant proteins (like tubulin and myelin basic protein) were not captured by the matrix, the mere replacement of a single tryptophan residue within SST made it non-competitive for binding to the pump, and the neuropeptide VIP, of similar mass (SST28 = 3.2kDa, VIP = 3.3kDa) and physicochemical characteristics (equal length, SST28 pI = 9.85, VIP pI = 9.82, SST28 GRAVY = -0.732, VIP GRAVY = -0.639) as SST, failed to block the capture of ATP1A1 by biotin-SST28. Although this is the first report of a direct binding between SST and P-type ATPases, prior to the discovery of SSTRs multiple authors identified proteins that bound selectively to SST analogs with molecular weights suspiciously similar to the alpha subunit of the Na + /K + -ATPase (~100 kDa) [46,56,57]. Furthermore, SST has been observed to modulate plasma membrane conductance in various cell types [58][59][60][61][62]. The primary method by which this was proposed to occur was through binding to GPCRs, which can hyperpolarize the membrane by opening K + channels and by lowering intracellular Ca 2+ levels [63]. For example, SST was reported to activate inwardly rectifying K + channels and inhibit voltage gated Ca 2+ entry through the action of different G-proteins [59][60][61][62][64][65][66] leading to hyperpolarization of the cell membrane [67]. It will be of interest to explore if SST additionally affects membrane conductance by directly binding to and modulating the activity of P-type ATPases and whether such interactions could have an impact on synaptic plasticity, learning, and memory. We have shown here that SST exposure leads to a partial inhibition of the Na + / K + -transporting ATPase in certain cell types. Similarly, SST may bind to and modulate the activity of the Ca 2+ -transporting ATPase, another P-type ATPase we co-isolated, which would be expected to affect intracellular Ca 2+ levels.
SST may also indirectly influence membrane conductance through a different mechanism: Binding of SST to its cognate receptors is best understood to activate G αi , a subunit of heterotrimeric G proteins. G αi inhibits adenylate cyclase, thereby reducing cAMP production, which leads in several cell models to a decrease in the levels and activity of the Na + /K + -ATPase [68,69]. If direct binding of SST to the Na + /K + -ATPase inhibits the pump, it could constitute an elegant signal amplification mechanism working synergistically with SSTR activation to decrease the levels and activity of the Na + /K + -ATPase.
A related consideration is the role of SST in controlling vascular tone. Na + /K + -ATPases are a primary pharmacological target for the treatment of hypertension using cardiotonic glycosides [70]. Multiple lines of evidence suggest that SST also has vasoactive properties, and generally acts as a hypertensive agent. In fact, for many years, somatostatin analogs have been used in the treatment of portal hypertension by inducing vasoconstriction of the splanchnic vasculature [71][72][73]. The identification of a selective interaction between SST and the Na + /K + -ATPase might be pertinent in this context, with potential implications for administering SST analogs in the clinic.
Western blot, Coomassie, and silver staining
For SDS-PAGE analyses, samples were mixed with Bolt LDS sample buffer (B0007; Thermo Fisher Scientific, Burlington, ON, Canada) containing 2.5% 2-mercaptoethanol and heated for 10 minutes at 60˚C before loading. The samples were separated on Bolt 10% Bis-Tris Plus gels (NW00102BOX; Thermo Fisher Scientific, Burlington, ON, Canada) in MES SDS Running Buffer (NP0002; Thermo Fisher Scientific, Burlington, ON, Canada) for 1 to 1.5 hr at 120 V. For immunoblot analyses, proteins were transferred to polyvinylidene difluoride (PVDF) membranes at 50 V in Tris-Glycine buffer containing 10-20% methanol for 2 hr. Membrane blocking steps were done in standard Tris-buffered saline with 0.1% Tween 20 (TBST) containing 5% fat-free milk and membranes were incubated overnight with the appropriate primary antibodies for antigen binding. Following three washes with TBST, membranes were incubated for 1 hr with 1:2000 diluted anti-mouse or anti-rabbit horseradish peroxidase-conjugated secondary antibodies. The band signals were visualized using enhanced chemiluminescence reagents (4500875; GE Health Care Canada, Inc., Mississauga, ON, Canada) and X-ray films or a LI-COR Odyssey Fc digital imaging system (LI-COR Biosciences, NE, USA). Where indicated, Coomassie and/or silver staining were performed to visualize all proteins present in the sample.
Affinity capture of SST14-and SST28-binding proteins from human frontal lobe extracts
Streptavidin UltraLink Resin beads (53114; Thermo Fisher Scientific, Burlington ON, Canada) were chosen as the affinity capture matrix in this study. The biotinylated SST14, SST28, and SST14Δ7-10 peptides were captured on the resin by 2 hr incubation in PBS at room temperature while undergoing continuous agitation on a slow-moving turning wheel. The biological source material for the interactome analysis consisted of human frontal lobe tissue samples from individuals (3 males) who died of non-dementia related causes at ages of 74, 76 and 82 years. The brains were obtained from the tissue biobank at the Tanz Centre for Research in Neurodegenerative Diseases, where they had been stored in -80˚C freezers. 150 mg pieces of each brain were used per experimental condition and were homogenized in lysis buffer containing cOmplete Protease Inhibitor Cocktail (11836170001; Roche, Mississauga, ON, Canada) and PhosStop phosphatase inhibitor tablets (04906837001; Roche, Mississauga, ON, Canada) and solubilized using 0.6% CHAPS detergent (C3023; Sigma-Aldrich, Oakville, ON, Canada). For follow-up validation experiments, 30 mg pieces of brain sample were used. Following centrifugation at 21,000 g for 1 hr to remove insoluble material, the protein concentration was normalized across all samples. The brain lysate was subsequently added to the pre-saturated affinity capture beads (20 μL per biological replicate) and incubated overnight at 4˚C. The affinity capture beads were then extensively washed during three wash steps with a total of 60 mL of lysis buffer containing 150-500 mM NaCl. Prior to elution, the affinity matrix was subjected to a pre-elution wash with 15 mL of 10 mM HEPES that served to reduce detergent and salt levels. Captured proteins were eluted either by rapid acidification in a solution containing 0.2% trifluoroacetic acid and 20% acetonitrile in deionized water (pH 1.9) (for mass spectrometry) or by heating for 10 min at 60˚C in Bolt LDS sample buffer (B0007; Thermo Fisher Scientific) containing 2.5% 2-mercaptoethanol (for western blot analyses).
86 Rb + uptake assay
The 86 Rb + uptake assay procedure was based on previously established protocols [41]. More specifically, three hr before each experiment, the cell culture medium was replaced with DMEM containing 0.2% FBS. Following serum deprivation, the cells were treated with ouabain (2 mM) (O3125; Sigma-Aldrich, Oakville, ON, Canada) and/or SST14 (50 μM) or with vehicle (control). After incubation for 15 min at room temperature, 2 μCi 86 RbCl in water (NEZ072; PerkinElmer, Woodbridge, ON, Canada) was added to each well and the cells were incubated for another 10 min at 37˚C. The supernatants were then removed, and cells were washed four times with 1 mL ice-cold wash buffer (100 mM MgCl 2 , 10 mM HEPES, pH 7.4) before lysis in 500 μL buffer containing 1% NP40, 150 mM Tris (pH 8.3), and 150 mM NaCl. 250 μL aliquots of the cell lysates were transferred to vials containing 10 mL Ultima Gold liquid scintillation cocktail (6013326; PerkinElmer, Woodbridge, ON, Canada) and assayed using a liquid scintillation counter (LS6500; Beckman Coulter; Mississauga, ON, Canada). Additional 10 μL aliquots were used for protein concentration determination using BCA reagents (23228 and 1859078; Thermo Fisher Scientific, Burlington, ON, Canada).
Sample preparation for interactome analysis
Processing of the affinity-capture eluates followed previously described protocols [27,74]. First, the organic solvent was removed from the samples using a centrifugal evaporator. The acidity of the sample was reduced by the addition and continuous evaporation of an additional three volumes of water. Two volumes of 9 M urea were then added per volume of sample, and protein denaturation was allowed to take place over 10 minutes at room temperature. The pH was further adjusted by the addition of 100 mM HEPES (pH 8.0). Following reduction for 30 minutes at 60˚C in the presence of 5 mM tris (2-carboxyethyl) phosphine (TCEP), proteins were alkylated for 1 hr at room temperature in 10 mM 4-vinylpyridine (4-VP). Protein mixtures were diluted with 500 mM tetraethylammonium bicarbonate (TEAB; pH 8.0) to a total volume of 100 microliters to ensure that urea concentrations were not in excess of 1.5 M. Digestion of samples with side-chain-modified porcine trypsin (90057; Thermo Fisher Scientific, Burlington, ON, Canada) proceeded overnight at 37˚C. Primary amines were covalently modified with isobaric tagging for relative and absolute quantitation (iTRAQ) reagents (4381663; SCIEX, Concord, ON, Canada) by following the manufacturer's instructions. The labeled digests were then pooled into a master mixture and purified with C18 (A5700310; Agilent Technologies, Inc., Mississauga, ON, Canada) or a high pH reversed phase fractionation kit (84868, Thermo Fisher Scientific, Burlington, ON, Canada), again following the manufacturer's instructions. Finally, upon reconstitution in 0.1% formic acid, peptides were analyzed by tandem mass spectrometry on an Orbitrap Fusion Tribrid instrument using previously described parameters [27].
Post-acquisition data analyses
The post-acquisition data analysis of interactome data sets was conducted against the Uniprot canonical and isoform human database (October 29, 2017 version, downloaded January 3, 2018), which was queried with Mascot (Version 2.4.1; Matrix Science Ltd, London, UK) and Sequest HT search engines within Proteome Discoverer software (Version 1.4; Thermo Fisher Scientific, Burlington, ON, Canada). Protein sequence and quantification data complementary to that produced on Proteome Discoverer was created using PEAKS Studio software (Version 8.5; Bioinformatics Solutions Inc., Waterloo, ON, Canada). A maximum of two missed tryptic cleavages and naturally occurring variable phosphorylations of serines, threonines and tyrosines were allowed. Other variable modifications considered were oxidation of methionine, tryptophan and histidine as well as deamidation of glutamine or asparagine. Mass spectrometry data sets have been deposited to the ProteomeXchange Consortium [75] via the PRIDE partner repository [76] with the project name 'Somatostatin-interacting human brain proteins' and the dataset identifier PXD010885 (http://proteomecentral.proteomexchange.org/cgi/GetDataset).
Statistical analyses
In the LC-MS/MS data interpretation by Proteome Discoverer, which was undertaken as described before [77], peptide sequencing quality was maintained at cut off scores providing an FDR of 0.05, estimated by the Percolator algorithm [78], with PSMs scoring below these cut offs excluded from quantification. Percolator analysis was conducted using a maximum delta Cn (PSM rank for each sequenced mass spectrum) of 0.05 and validation based on q-value. Protein sequencing with the PEAKS algorithm was performed at peptide and protein cut off scores of 0.0316 and 0.01 respectively.
Gene ontology analyses were conducted using the PANTHER Overrepresentation Test via the GO consortium online tool (http://geneontology.org/) under default settings.
For comparisons between two groups, statistical analyses were based on the paired Student's t-test, where a p-value of less than 0.05 was considered significant. In instances when the experimental design was predictive of a one-directional change these tests were one-tailed. Standard errors of the mean are represented as error bars in the figures. Western blot and in vitro experiments were performed in triplicate where statistical analyses were implemented. For better visualization of the western blot quantifications, data were normalized to the control condition (no peptide pre-incubation) and relative standard deviations were calculated. Similarly, in order to better interpret the results from the 86 Rb + uptake assay, data were normalized such that the control condition (no peptide pre-incubation) in each cell type reflected 100% uptake.
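For the paired, one-tailed comparisons described above, a minimal sketch (with made-up triplicate values, assuming SciPy ≥ 1.6 for the alternative argument) looks as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements (e.g., normalized band intensities) for a control
# condition and a treatment expected a priori to reduce the signal.
control   = np.array([0.98, 1.05, 0.97])
treatment = np.array([0.62, 0.55, 0.70])

# Paired, one-tailed Student's t-test: is the control larger than the treatment?
t_stat, p_value = stats.ttest_rel(control, treatment, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}")
# A p-value below 0.05 would be considered significant under the criterion used here.
```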
Supporting information S1 Table. Human SST interactome list. The table depicts the SST interactome dataset generated in this study in alphabetical order. The columns depicting iTRAQ ratios reveal the enrichment level of proteins relative to the SST14Δ7-10 negative control, i.e., proteins whose iTRAQ enrichment ratios were observed with values close to 1.0 exhibited no relative enrichment. The 'Coverage' column depicts the percentage of primary sequence of a given entry that was covered by peptide-to-spectrum matches (PSMs). The count column indicates the number of PSMs that supported the calculation of values shown in the iTRAQ ratio columns. (EPS)
A genome-wide CRISPR/Cas9 screen to identify phagocytosis modulators in monocytic THP-1 cells
Phagocytosis of microbial pathogens, dying or dead cells, and cell debris is essential to maintain tissue homeostasis. Impairment of these processes is associated with autoimmunity, developmental defects and toxic protein accumulation. However, the underlying molecular mechanisms of phagocytosis remain incompletely understood. Here, we performed a genome-wide CRISPR knockout screen to systematically identify regulators involved in phagocytosis of Staphylococcus (S.) aureus by human monocytic THP-1 cells. The screen identified 75 hits including known regulators of phagocytosis, e.g. members of the actin cytoskeleton regulation Arp2/3 and WAVE complexes, as well as genes previously not associated with phagocytosis. These novel genes are involved in translational control (EIF5A and DHPS) and the UDP glycosylation pathway (SLC35A2, SLC35A3, UGCG and UXS1) and were further validated by single gene knockout experiments. Whereas the knockout of EIF5A and DHPS impaired phagocytosis, knocking out SLC35A2, SLC35A3, UGCG and UXS1 resulted in increased phagocytosis. In addition to S. aureus phagocytosis, the above described genes also modulate phagocytosis of Escherichia coli and yeast-derived zymosan A. In summary, we identified both known and unknown genetic regulators of phagocytosis, the latter providing a valuable resource for future studies dissecting the underlying molecular and cellular mechanisms and their role in human disease.
For the phagocytosis of bacteria, various microbial proteins, glycoconjugates, lipopolysaccharides, lipoteichoic acids and mycobacterial lipids are essentially required 1 . For example, the C-type lectin receptor CLECSF8 is a key component of anti-microbial host defense, and mice lacking this receptor show an increased bacterial burden 7 . In contrast, phagocytic removal of e.g. apoptotic cells turned out to be more complex, with exposure of a plethora of 'eat me' signals on the surface of the dying cell triggering phagocytosis. For efficient recognition of target particles, receptors on the phagocyte interact with ligands present on the particle, enabling engulfment and finally its uptake 2,3,6 . Subsequently, phagocytosis initiates a series of intracellular fusion and fission events, which ultimately result in the formation of the phagolysosome 8 . Acidification by vacuolar ATPases leads to a decrease in the phagolysosomal pH, which further facilitates degradation of the ingested particle 2,9,10 . The high level of redundancy of both phagocytosis ligands and receptors reflects the importance and complexity of the process and complicates the genetic elucidation of involved components. Classically, phagocytosis has been studied using cell biology and microscopy techniques, partially combined with genetic manipulation of individual genes. Systematic analysis of the genes required for phagocytosis using genetic screening technologies has been described for C. elegans and D. melanogaster, and several genes with mammalian orthologues performing analogous functions could be identified 11-15 . However, until recently 16,17 , systematic screening of mechanisms regulating mammalian phagocytosis has not been reported.
In recent years, pooled genetic screens using either RNA interference or CRISPR/Cas9-mediated gene knockouts have become popular tools to investigate thousands of perturbations in a single experiment 18 . Pooled CRISPR screens allow for effective and systematic interrogation of complex cellular processes when combined with appropriate selection strategies. Here, we describe our efforts to identify genes regulating phagocytosis of bacteria in THP-1 cells, a human leukemia cell line with morphological and functional properties of primary monocytes 19 . We used our previously established FACS-based phagocytosis assay 20 and performed a genome-wide CRISPR screen in THP-1 cells phagocytosing S. aureus particles. Subsequently, the selected hits were validated in single gene knockout experiments and characterized with additional phagocytosis target particles.
Results
To identify genes regulating phagocytosis of the bacterium S. aureus, we selected the human monocytic cell line THP-1 19 as our primary screening model. THP-1 cells have been shown to spontaneously phagocytose bacteria without the need for activation or differentiation 20 . To establish THP-1 cells as a screening model which ensures homogenous Cas9 expression and thereby a high dynamic range of effect sizes in CRISPR screens 21 , we engineered them to harbor a tetracycline (Tet)-inducible Cas9-GFP expression system, isolated single cell-derived clones, and characterized them for inducible Cas9 expression (iCas9), homogenous CRISPR/Cas9 editing, and their phenotypic properties. Among several clones showing homogenous Cas9-GFP induction (Supplementary Figure S1a) and efficient knockout of the surface marker CD46 upon expression of a CD46-specific sgRNA (Supplementary Figure S1b), we selected five clones for further characterization. Compared to unperturbed THP-1 bulk cells, all five THP-1 iCas9 clones grew at enhanced rates (Supplementary Figure S1c) while displaying similar or even enhanced expression of the monocyte-associated surface markers CD11b, CD36, and CD14 (Supplementary Figure S1d). From these clones, we selected one clone (A2) as our screening model and further tested its functionality using the CRISPR library vector (sgETN), which co-expresses murine Thy1.1, a surface protein that can be used for magnetic-activated cell sorting (MACS) enrichment of transduced cells. To further evaluate the efficacy of CRISPR mutagenesis, we transduced THP-1 iCas9 (clone A2) with a pool of sgETN-sgRNAs targeting 5 essential genes (TIMELESS, WDHD1, RAD21, SMC3, PLK1) or non-targeting controls, partially enriched transduced cells using MACS (to ~ 60% Thy1.1 + ), induced Cas9 expression using doxycycline (dox) treatment, and monitored the fraction of Thy1.1 + /sgRNA-expressing cells over time. While the fraction of Thy1.1 + non-targeting control sgRNA-expressing cells remained stable over time, Cas9 induction led to a strong depletion of cells transduced with sgRNAs targeting essential genes (reaching 36-fold on day 13), indicating effective CRISPR mutagenesis in the vast majority of cells (Supplementary Figure S1e). In a last step, we tested whether an optimized phagocytosis assay previously established by our group 20 detects known phagocytosis regulators in the selected THP-1 iCas9 clone. To this end, THP-1 iCas9 cells were transduced with a non-targeting control sgRNA or an sgRNA targeting ARPC4, whose loss of function has recently been reported to impair phagocytosis in U937 cells 16 . Indeed, CRISPR-mutagenesis of ARPC4 reduced the fraction of phagocytosing cells by more than fourfold compared to the non-targeting control sgRNA (Supplementary Figure S2), demonstrating that our assay recapitulates the function of established phagocytosis regulators.
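The dropout read-out described above (fraction of Thy1.1+/sgRNA-expressing cells over time, normalized to day 0 and to the non-targeting controls) can be summarized as in the sketch below; the percentages are hypothetical placeholders illustrating the calculation, not the measured values.

```python
import pandas as pd

# Hypothetical %Thy1.1+ cells over time after Cas9 induction (flow cytometry read-out).
timecourse = pd.DataFrame({
    "day":            [0,    4,    8,    13],
    "essential_pool": [60.0, 32.0, 9.0,  2.0],   # sgRNAs against essential genes (hypothetical %)
    "ntc_pool":       [60.0, 59.0, 61.0, 60.5],  # non-targeting control sgRNAs (hypothetical %)
})

# Normalize each pool to its day-0 value, then express depletion relative to the NTC pool.
norm_essential = timecourse["essential_pool"] / timecourse["essential_pool"].iloc[0]
norm_ntc = timecourse["ntc_pool"] / timecourse["ntc_pool"].iloc[0]
timecourse["fold_depletion"] = norm_ntc / norm_essential

print(timecourse[["day", "fold_depletion"]])
# A steadily increasing fold-depletion of the essential-gene pool indicates efficient
# CRISPR mutagenesis in the transduced population.
```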
Having established a suitable cellular model and FACS-based assay for CRISPR/Cas9-based identification of phagocytosis regulators, we performed a genome-wide CRISPR screen following the workflow depicted in Fig. 1. THP-1 iCas9 cells were transduced with a genome-wide sgRNA library containing 6 guides per gene and 500 non-targeting controls (18,187 genes, ~ 6 sgRNA per gene and 108,611 sgRNAs total 21 ) with a transduction rate of 33.9% and a cell to guide coverage of approximately 3000×. The screen was performed in duplicates and triplicates for the baseline and phagocytosis samples, respectively. For each replicate a library coverage of more than 1000× was maintained throughout the experiment to mitigate a major source of experimental noise 18 . After MACS enrichment of transduced cells, baseline samples were collected before induction of Cas9 expression. Genome-wide knockout effects on phagocytosis were assessed 12 days post Cas9 induction by addition of commercially available pHrodo red labeled S. aureus particles. After 1 h of incubation, cells were collected and sorted according to their pHrodo signal intensity. The purity of the sorted cell populations was about 98% (Supplementary Figure S3). The DNA of the sorted cell populations was isolated, integrated sgRNAs were amplified by PCR and sequenced using Illumina's NextSeq platform. The sgRNA counts of the raw reads from each sample were determined (Supplementary Table S3) and further analyzed using MAGeCK-VISPR 22,23 (Supplementary Table S4). Secondary analysis including filtering, comparisons, and plotting was performed in RStudio 24 . For quality control, we also sequenced the lentiviral vector plasmid pool. About 99% of the designed sgRNAs were detected in the library and the reads of cloned sgRNAs were evenly distributed with only a few high-copy outliers (Supplementary Figure S4a). The relative difference in the sgRNA abundance in the library was very low and 80% of the sgRNAs were present with counts of less than tenfold difference (Supplementary Figure S4b). Amplicon-seq of populations collected throughout the screen revealed high mapping rates (no mismatches allowed) of > 80% (Supplementary Figure S5a). As expected, the number of missing sgRNAs and the gini index of the phagocytosing and non-phagocytosing samples were higher compared to the baseline samples (Supplementary Figure S5b and c). The gini index is a measure of unevenness: smaller numbers indicate more evenness and, accordingly, higher numbers indicate more unevenness. The correlation between phagocytosing and non-phagocytosing samples is rather high (Pearson: 0.93-0.97), but both clearly separate from the baseline samples (Supplementary Figure S5d). As the effects of gene editing on the rate of phagocytosis were assessed 12 d post Cas9 induction, we assumed that sgRNAs targeting essential genes (constitutive core essential genes as defined in Hart et al. 25 ) might be depleted. Therefore, sgRNAs were grouped into non-targeting controls (NTC), sgRNAs targeting essential genes (211), and sgRNAs targeting the remaining genes (17,976), and the distribution of these groups throughout the screen was analyzed. In the plasmid pool and baseline samples all three groups are equally distributed, demonstrating a uniform transduction of the sgRNA library.
In contrast, the FACS enriched populations (the normalized data of all phagocytosing and non-phagocytosing samples combined) have decreased sgRNA counts targeting essential genes compared to the non-targeting and remaining, non-essential sgRNAs (Supplementary Figure S5e).
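The evenness metric referred to above, the gini index of sgRNA read counts, can be computed directly from a count vector; the sketch below uses the standard Lorenz-curve formulation and simulated counts as stand-ins for real samples.

```python
import numpy as np

def gini_index(counts) -> float:
    """Gini index of sgRNA read counts (0 = perfectly even, values near 1 = very uneven)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Lorenz-curve formulation over the sorted counts.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(0)
# Simulated counts: a relatively even baseline-like sample and a sorted-like sample in
# which a fraction of sgRNAs dropped out (more unevenness, hence a higher gini index).
baseline_like = rng.negative_binomial(n=20, p=0.01, size=100_000)
sorted_like = baseline_like * rng.binomial(1, 0.7, size=baseline_like.size)

print(f"gini (baseline-like): {gini_index(baseline_like):.3f}")
print(f"gini (sorted-like)  : {gini_index(sorted_like):.3f}")
```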
Comparing the sgRNA counts of sorted phagocytosing to non-phagocytosing cell populations using MAGeCK with an FDR of ≤ 0.2 resulted in a list of 831 genes, which was used to generate an sgRNA pool for a second screen, aiming to increase the confidence of the primary hits (Fig. 2a). In the second screen, a much higher cell to guide coverage was used to reduce the risk of sgRNA dropouts by chance. THP-1 iCas9 cells were transduced with a lentiviral vector pool of 5407 sgRNAs (~ 6 guides per gene and 500 non-targeting controls) at a coverage of > 36,000 cells per guide, which was kept throughout the screen. Again, transduced cells were enriched, Cas9 expression was induced and phagocytosing and non-phagocytosing cells were sorted as described before. The quality of the screen metrics (mapping rate, missed sgRNAs) was equal or even better than in the first screen with an improved separation of non-/phagocytosing samples from the baseline samples (Supplementary Figure S6a-d, Supplementary Table S5). Still, phagocytosing and non-phagocytosing samples were highly correlating (Supplementary Figure S6d). The decrease in sgRNA counts targeting essential genes was comparable with the first screen (Supplementary Figure S6e). The distribution of sgRNAs in the validation pool was very even with no missing sgRNA and 80% of sgRNAs present with counts of less than sixfold difference (Supplementary Figure S4c, d).
MAGeCK MLE calculates beta scores for each gene, a measure of the degree of selection similar to 'log fold changes' in differential expression analysis 23. Comparing the beta values for the genes investigated in both screens (Supplementary Tables S4 and S6), we observed a high correlation, with a Pearson correlation coefficient of 0.816. In both screens, loss of the above-described ARPC4 gene function impaired phagocytosis. Moreover, other members of the Arp2/3 complex (ACTR2, ACTR3, ARPC2 and ARPC3) and subunits of the WAVE complex (NCKAP1L, CYFIP1, BRK1), both known to regulate the actin polymerization necessary for phagocytic cup formation, were also depleted in the phagocytosis-high population. Additionally, RAB7A, a key regulator of endo-lysosomal trafficking important for phagocytosis, was shown to be essential for bacterial phagocytosis (Fig. 2b). To identify high-confidence hits from both the genome-wide and the validation screen, only genes with an FDR ≤ 0.1 in both screens were selected. This resulted in 75 high-confidence hits, with 28 gene knockouts activating phagocytosis and 47 inhibiting it (Figs. 2a, 3). Cellular localization analysis of the gene products of the 75 high-confidence hits revealed them to be located mostly in the cytoplasm and nucleus, with 10 hits associated with the plasma membrane (Figure S7a). The protein functions of the hits are annotated mainly as enzymatic and other functions for which a specific role is not known (Figure S7b). The top enriched canonical pathways relate to the actin cytoskeleton, remodeling of epithelial adherens junctions, integrin signaling, Fcγ receptor-mediated phagocytosis in macrophages and monocytes, and various signaling pathways (Fig. 2c).

Figure 1. Genome-wide CRISPR screen for genetic regulators of phagocytosis of bacteria. THP-1 iCas9 cells were transduced at a coverage of 2919× with a genome-wide pooled sgRNA library containing 6 sgRNAs per gene and 500 non-targeting controls. Transduced cells were enriched by MACS and Cas9 was induced with dox on two consecutive days. 14 days after transduction, S. aureus particles labeled with pHrodo red were added to the cells. After 60 min, cells were sorted by FACS according to their fluorescence intensity into phagocytically active and inactive populations. The integrated sgRNAs were amplified and sequenced on a NextSeq 550. Sequencing data were analyzed with MAGeCK-VISPR.
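A minimal sketch, using pandas, of the hit-calling logic described in the preceding paragraph: correlate per-gene beta scores from the two screens and keep genes with an FDR ≤ 0.1 in both. The column names, the toy values, and the sign convention for beta are assumptions for illustration; the real gene summaries are in Supplementary Tables S4 and S6.

```python
import pandas as pd

# Invented per-gene summaries standing in for the genome-wide and validation screens.
screen_gw = pd.DataFrame({
    "gene": ["ARPC4", "RAB7A", "SLC35A2", "UXS1", "B3GNT2", "GENE_X"],
    "beta": [-2.1, -1.8, 1.2, 1.4, 0.9, 0.1],
    "fdr":  [0.001, 0.003, 0.02, 0.01, 0.08, 0.60],
})
screen_val = pd.DataFrame({
    "gene": ["ARPC4", "RAB7A", "SLC35A2", "UXS1", "B3GNT2", "GENE_X"],
    "beta": [-1.9, -1.6, 1.0, 1.3, 0.7, -0.05],
    "fdr":  [0.002, 0.004, 0.03, 0.02, 0.09, 0.70],
})

merged = screen_gw.merge(screen_val, on="gene", suffixes=("_gw", "_val"))
pearson_r = merged["beta_gw"].corr(merged["beta_val"])   # 0.816 for the real screens

hits = merged[(merged["fdr_gw"] <= 0.1) & (merged["fdr_val"] <= 0.1)]
# Assumed sign convention: positive beta = knockout enriched in the phagocytosing gate
# (activates phagocytosis); negative beta = knockout inhibits phagocytosis.
activating = hits[hits["beta_gw"] > 0]
inhibiting = hits[hits["beta_gw"] < 0]
print(round(pearson_r, 3), len(hits), len(activating), len(inhibiting))
```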
Next, we classified the hits into functional groups according to GO terms and a literature search (Fig. 3). Most of the genes whose knockout impairs phagocytosis are associated with the cytoskeleton, in agreement with the canonical pathway analysis (Fig. 2c). Other larger groups are associated with the endoplasmic reticulum and Golgi apparatus, translational and transcriptional regulation, GTPase signaling, cell metabolism, and glycosylation. In the latter group, five genes whose knockout activates phagocytosis are specifically involved in UDP glycosylation: two members of the solute carrier family 35, SLC35A2 and SLC35A3, which transport UDP sugars from the cytosol to the lumen of the Golgi apparatus and the endoplasmic reticulum for glycosylation 26, and B3GNT2, UXS1 and UGCG, which are enzymes of UDP metabolism. Furthermore, knockout of two genes in the translational regulation cluster, EIF5A and DHPS, impairs phagocytosis. EIF5A is a translational elongation factor regulating the translation of a subset of mRNAs. Its activity depends on the modified amino acid hypusine 27. There are two EIF5A isoforms (EIF5A1 and EIF5A2), which in humans are the only proteins containing this amino acid modification 28. Hypusine derives from a two-step conversion of lysine catalyzed by the enzymes DHPS and DOHH.
To validate and further characterize these genes, we created single gene knockouts with the two best-performing sgRNAs targeting each gene and performed phagocytosis assays with various substrates. DOHH, although a hit in neither the primary nor the secondary screen (FDR around 0.16 in both screens), was included in the validation experiment. Again, a non-targeting sgRNA was included as a negative control, and an sgRNA targeting ARPC4 served as a positive control inhibiting phagocytosis. In addition to S. aureus, phagocytosis assays were performed with Escherichia (E.) coli and zymosan A particles. As expected, compared to non-transduced cells, the non-targeting sgRNA showed no effect on phagocytosis of the three different substrates, whereas the knockout of ARPC4 impaired not only phagocytosis of S. aureus but also phagocytosis of E. coli and zymosan A (3.2-fold, 8.1-fold and 28.8-fold for zymosan A, S. aureus and E. coli, respectively). The two sgRNAs targeting either EIF5A or DHPS inhibited phagocytosis of the various substrates, but to different extents (E. coli > S. aureus/zymosan A). Knockout of DOHH impaired phagocytosis of the different substrate types, with the weakest effect on S. aureus phagocytosis compared to the other experimental sgRNAs. This might explain why DOHH did not score in the two screens (Fig. 4). With the exception of B3GNT2, knocking out members of the UDP glycosylation pathway increased phagocytosis, and thus these screening hits were successfully confirmed by individual sgRNA transduction. The strongest effects were observed for the solute carrier SLC35A2 and the enzyme UXS1, which increased phagocytosis 2.4-fold and 2.5-fold, respectively. With respect to the three different particle types, phagocytosis of zymosan A was increased the most (up to more than twofold).
The fluorescence intensity of pHrodo red relies on lysosomal acidification; hence, impaired acidification could result in reduced signal intensity. To evaluate whether the knockouts of the individual genes influence lysosomal acidification, KOs were stained with lysotracker. None of the single gene knockouts showed an effect on lysosomal acidification compared to the non-targeting control sgRNA, with the exception of sgRNA_2 targeting EIF5A. This knockout resulted in a small reduction of the lysotracker signal, indicating a weak effect of the EIF5A KO on lysosomal acidification. Compared to the effect on phagocytosis, these changes in acidification were minor. To further discriminate phagocytosis from other forms of endocytosis, we assessed the uptake of transferrin and dextran as models for clathrin-mediated and clathrin-independent endocytosis 31. We used 10 kDa dextran, allowing for efficient uptake by macropinocytosis 32,33. The endocytic uptake of transferrin and dextran was not affected by the single gene knockouts except for DOHH, for which a 29% reduction of transferrin uptake was observed (Figure S8b, c). Compared with the effect of the DOHH knockout on phagocytosis, this reduction in transferrin uptake is less pronounced. Whereas actin-dependent cytoskeleton rearrangements are indispensable for phagocytosis, actin filament reorganization is less important for other forms of endocytosis 31. Therefore, it is not surprising that cytochalasin D, which inhibits actin polymerization, does not block endocytosis of dextran and reduces endocytosis of transferrin by only about 20%, irrespective of the gene knockout. By contrast, phagocytosis of E. coli particles was completely blocked in the presence of cytochalasin D (Figure S8d).
Discussion
Phagocytosis is an integral mechanism of the immune system whose dysregulation is associated with various diseases. We performed pooled CRISPR screens to identify genes regulating phagocytosis of the gram-positive bacterium S. aureus, and further confirmed the hits by individual sgRNA transduction experiments and by testing additional phagocytosis substrates (E. coli and zymosan A particles). We used commercially available, well-described particles derived from inactivated bacteria or from yeast (zymosan A). Over recent years, pooled CRISPR-Cas9 screening technology has been optimized to become a robust tool to study gene function at both the cellular 16,21,34,35 and the organismal 36 level. Genome-wide CRISPR screens have been successfully applied to delineate biological pathways 16,34,35,37 and to identify candidate drug targets 38. Although genome-wide sgRNA libraries enable the systematic perturbation of the entire coding or regulatory genome, adapting CRISPR screens for probing gene functions in specific cellular processes requires the development of cell-based assays specific to the pathway or activity of interest. For example, recently described screens used either an engineered reporter to study autophagy by FACS analysis 37 or phagocytosis by MACS enrichment 16. Here, we describe the successful combination of our recently published FACS-based phagocytosis assay 20 with the pooled CRISPR-Cas9 screening technology to elucidate known and unknown regulators of phagocytosis at the genome level. The identification and validation of hits depended on the increase and decrease of pHrodo red signal intensity. After internalization, endosomes fuse with lysosomes, leading to acidification, which increases the pHrodo red signal intensity. Using the pH-sensitive lysotracker, we verified that acidification is not impaired in single gene knockouts of the identified hits.

Figure legend (single gene knockout validation): THP-1 iCas9 cells were transduced with guides targeting the indicated genes or with a non-targeting control (NTC), and Cas9 expression was induced with dox. On day 12 after gene knockout, S. aureus, E. coli and zymosan A particles labeled with pHrodo red were added, and phagocytosis was measured 1 h later using flow cytometry. Depicted is the mean phagocytosis index of live Thy1.1+ cells of 2-5 replicates ± SEM. The phagocytosis index is the mean fluorescence intensity normalized to non-transduced THP-1 cells. To test statistical significance, a one-way ANOVA corrected for multiple comparisons according to Dunnett was performed. Ns: not significant, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
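A minimal sketch of the phagocytosis index defined in the legend above, assuming it is computed per replicate as the mean pHrodo fluorescence intensity of live Thy1.1+ cells divided by that of non-transduced THP-1 cells; the numbers are illustrative only.

```python
import numpy as np

def phagocytosis_index(mfi_sample, mfi_untransduced):
    """Mean fluorescence intensity normalized to non-transduced control cells."""
    return np.asarray(mfi_sample, dtype=float) / mfi_untransduced

mfi_knockout = [10500.0, 9800.0, 11200.0]   # assumed per-replicate MFI of one knockout
mfi_reference = 10000.0                     # assumed MFI of non-transduced THP-1 cells

idx = phagocytosis_index(mfi_knockout, mfi_reference)
sem = idx.std(ddof=1) / np.sqrt(idx.size)
print(f"phagocytosis index: {idx.mean():.2f} ± {sem:.2f} (mean ± SEM, n={idx.size})")
```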
Recently, a systematic investigation of genetic regulators of phagocytosis was published by Haney and coworkers 16. They performed pooled CRISPR screens in macrophage-differentiated U937 cells, a pro-monocytic cell line, using various particle types for phagocytosis, including beads, IgG- or C3a-opsonized apoptotic cells, myelin and zymosan. By contrast, we used the human monocytic THP-1 cell line and a bacterial phagocytosis substrate (S. aureus) in our pooled screens. We confirm and extend their results with respect to the essential role of the actin cytoskeleton-regulating Arp2/3 and WAVE complexes in phagocytosis. As already described for U937 phagocytosis, loss of different subunits of these complexes is detrimental to S. aureus phagocytosis in THP-1 cells. Interestingly, loss of a similar gene set has recently been shown to inhibit the uptake of Salmonella by macrophage-differentiated THP-1 cells 17. RAB7A, another top-scoring hit in our screens, has been shown to be essential for phagocytosis. RAB7A acts further downstream in phagosome maturation and is essential for the transition from the early to the late phagosome 39.
We were surprised to find and confirm genes involved in glycosylation. The knockout of UXS1, UGCG, SLC35A2 and SLC35A3 increased phagocytosis not only of S. aureus, but also of E. coli and zymosan A. UXS1 catalyzes the conversion of UDP-glucuronic acid to UDP-xylose, which is a substrate for three glycosylation processes involved in the biosynthesis of glycosaminoglycans, O-glycans and dystroglycans 40. UGCG generates glucosylceramide, which is the precursor for all glycosphingolipids 41. SLC35A2 and SLC35A3 are transmembrane transporters for UDP-galactose and UDP-N-acetylglucosamine, respectively. They transport UDP sugars from the cytosol to the lumen of the Golgi apparatus and the endoplasmic reticulum for glycosylation 42. The solute carriers SLC35A2, SLC35A3 and SLC35A4, together with N-acetyl-glucosaminyltransferases, self-assemble into multienzyme/multi-transporter complexes to facilitate the synthesis of complex N-glycans 43. To our knowledge, only a few reports show an effect of protein glycosylation in the phagocyte on its rate of phagocytosis; for example, O-glycosylation of the cell surface protein C1qRp enhances both FcR- and CR1-mediated phagocytosis 44,45. By contrast, many reports demonstrate effects of the glycosylation status of the phagocytosed particle on phagocytosis. For instance, it was shown that O-glycosylation of the bacterial cell wall 46 and decorin coating of collagen fibers 47 influence their phagocytosis. Moreover, antibodies and their corresponding Fc receptors are also highly glycosylated, and correct glycosylation is important for the binding of Fc receptors to antibodies, which in turn is essential for antibody-dependent phagocytosis 48. However, we currently do not know whether and how the knockout of the described genes influenced the protein glycosylation pattern in our THP-1 cell line. In addition, B3GNT2 was identified in our screens. However, the single knockout of B3GNT2 showed no effect on phagocytosis of S. aureus and zymosan A, but an increase in phagocytosis of E. coli. B3GNT2 is a polylactosamine synthase that synthesizes a backbone of carbohydrate structures on glycoproteins. It is expressed in murine macrophages, and B3GNT2-deficient mice show dysregulated activation of macrophages with elevated levels of CD14 expression and an enhanced response to endotoxin 49. Together, all five genes are members of the cellular glycosylation machinery, and their knockouts increased phagocytosis. However, a phagocytosis phenotype has not been described for any of them before, and further research, which is beyond the scope of this study, is required to understand their role in the process of phagocytosis.
We identified two additional genes with no described link to phagocytosis, EIF5A and DHPS. Together with DOHH, DHPS is required for hypusination of EIF5A. The effect of the three genes on phagocytosis was tested individually and we could confirm that their knockout inhibits phagocytosis. EIF5A is a translational elongation factor and the only human protein which undergoes hypusination 50 . Hypusination is a posttranslational modification of lysine, which is catalyzed by the gene products of DHPS and DOHH 28 . EIF5A and the polyamine synthesis pathway regulate oxidative phosphorylation. Macrophages stimulated with IL-4 but not with LPS plus IFN-γ upregulate the hypusinated form of eIF5A 51 . Moreover, genetic ablation of Eif5a, Dhps or Dohh reduced the expression of CD206 and CD301 in murine macrophages, which are markers of alternative activation 51 . Furthermore, it was shown that the proteins Ldp17 and Vrp1, which are related to actin cytoskeleton organization, are nearly absent in Saccharomyces cerevisiae temperature-sensitive mutants of eIF5A 50 . This might explain why we observe a markedly reduced phagocytosis of S. aureus, zymosan A and especially E. coli particles in DHPS, DOHH and EIF5A knockouts.
To confirm that the genes identified in our genetic screen are phagocytosis-specific and not general inhibitors of endocytosis, we tested their knockout effects in models for clathrin-dependent and clathrin-independent endocytosis 52, i.e. transferrin and dextran uptake, respectively. Neither dextran nor transferrin uptake was substantially affected by any of the single knockouts. However, the DOHH knockout showed a 29% reduction of transferrin endocytosis, indicating a more general impairment of uptake.
In summary, we successfully performed a CRISPR-Cas9 screen to identify previously unknown genes involved in phagocytosis. Among these, our screen reveals several components of the glycosylation and hypusination pathways, which have not previously been implicated as regulators of phagocytosis. Even though classical receptors for foreign particles, including opsonic receptors, were not among the top hits, the identified targets might impact their expression or alter their binding affinity by modifying the glycosylation pattern. Interestingly, single-gene knockouts of selected hits demonstrated that not only the phagocytosis of S. aureus particles but also that of E. coli and zymosan A particles was affected. This indicates that many of the identified hits are either ubiquitously involved in phagocytosis or are at least important for the uptake of both bacteria and yeast. This study broadens the spectrum of genetic regulators of phagocytosis. However, their exact molecular mechanisms in the process of phagocytosis and their regulation and role in human disease processes need to be addressed in future studies.

Genome-wide sgRNA library and generation of focused sgRNA pools. Design, construction, and basic performance of the genome-wide sgRNA library have been described in Michlits et al. 21. To generate focused sgRNA pools for validation screens, oligo pools were obtained from Twist Bioscience and amplified by PCR using the Q5 Hot Start High-Fidelity DNA Polymerase (NEB). The amplicons and the sgETN vector were digested with Esp3I (Thermo Fisher Scientific); the plasmid vector was purified by agarose gel electrophoresis and the digested sgRNAs were purified by ethanol precipitation. For the ligation, a total of 2 µg backbone was used at a vector-to-insert ratio of ~1:10. Ligation was performed at 16 °C overnight using T4 DNA Ligase (NEB). The ligation reaction was purified by phenol extraction with subsequent ethanol precipitation. The precipitated ligation reaction was dissolved in 15 µL TE buffer, and 4 µL were used for transformation of MegaX DH10B T1 cells (Thermo Fisher Scientific). Bacteria were plated on LB agar dishes containing ampicillin and incubated at 37 °C overnight. The next day, all bacteria were scraped off the plates and cultivated in 1 L LB medium (with ampicillin) for about 6 h. Bacteria were harvested, and plasmid DNA was prepared using the NucleoBond Xtra Maxi EF kit (Macherey Nagel) according to the manufacturer's recommendations. Cloning of single sgRNAs (Supplementary Table S1) was performed as described in Datlinger et al. 53. Briefly, two reverse-complementary oligos with overhangs were hybridized, phosphorylated and cloned into a pre-cut vector backbone.
Generation and quantification of lentiviral particles. Lentiviral …

Sequencing. The sequencing libraries were generated using a two-step PCR. The first PCR amplifies the target-specific region and adds adapter sequences used as template for the second PCR. From each sample, the complete DNA was used for sequencing library preparation. In each 100 µL PCR, 1 µg of template was amplified using Q5 Hot Start High-Fidelity 2X Master Mix (NEB), an additional 2 mM MgCl2, and a pool of forward and reverse primers containing 2- to 8-nucleotide staggers and the adapter sequence (Supplementary Table S2). Amplicons were purified using 0.8× Agencourt AMPure XP beads (Beckman Coulter) and the MagMax (Thermo Fisher Scientific). Purified amplicons from the same sample were pooled, and 20 ng was used for the second PCR using NEBNext Multiplex Oligos for Illumina and NEBNext Ultra II Q5 Master Mix (both NEB). The final sequencing libraries were cleaned up twice as described before and quality controlled with the Fragment Analyzer (AATI). Sequencing was performed on a NextSeq 550 (Illumina) in High-Output, 150 bp single-end mode.
Endocytosis assay of single gene knockouts. THP-1 cells were either treated with 10 µM cytochalasin D (PHZ1063, Gibco) at 37 °C and 5% CO2 for 30 min or left untreated. S. aureus, E. coli, zymosan A, dextran (25 µg/mL, 10 kDa, P10361) and transferrin particles conjugated with pHrodo red were purchased from Thermo Fisher Scientific. Particles were added to the cells, and cells were incubated at 37 °C and 5% CO2 for 1 h. Of note, cytochalasin D was not removed before addition of particles, and the final concentration during particle uptake was 7.5 µM. After the incubation, cells were placed on ice to stop further phagocytosis. Cells were stained with anti-Thy1.1-APC (clone: HIS51, eBioScience) and DAPI (Thermo Fisher Scientific), and analyzed on a BD LSR-Fortessa X20 (BD Biosciences). Cells were gated based on FSC/SSC, singlets, live (DAPI−) and Thy1.1+ … 54.

Reads were mapped to the guide reference using MAGeCK-VISPR (V0.5.3) 23. Counts were normalized to the median count. Samples with increased phagocytosis were compared to samples with reduced phagocytosis using the MLE algorithm in MAGeCK (V0.5.6) 22. Further analysis, e.g. filtering, comparisons, and plotting, was performed in RStudio (V1.2.1335-1, R 3.5.2). One-way ANOVA corrected for multiple comparisons according to Dunnett was calculated with GraphPad Prism (version 8.0.0).
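The median normalization mentioned above can be illustrated with a short pandas sketch; this is a simplification for illustration and may differ from MAGeCK's exact implementation, and the count table layout is assumed.

```python
import pandas as pd

counts = pd.DataFrame({
    "sgRNA": ["g1", "g2", "g3", "g4", "g5"],
    "baseline": [200, 150, 900, 50, 160],
    "phago_high": [180, 90, 1500, 10, 140],
}).set_index("sgRNA")

sample_medians = counts.median(axis=0)               # per-sample median raw count
scale = sample_medians.mean() / sample_medians       # bring every sample to a common median
normalized = counts * scale                          # broadcasts the per-sample scale factor
print(normalized.round(1))
```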
Data availability
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).
Using digital technologies to diagnose in the home: recommendations from a Delphi panel
Rapid advances in digital technology have expanded the availability of diagnostic tools beyond traditional medical settings: diagnostic capabilities once confined to clinical environments are now accessible outside the clinic. This study utilized the Delphi method, a consensus-building approach, to develop recommendations for the development and deployment of these innovative technologies. We present the 29 consensus-based recommendations generated through the Delphi process, providing insights and guidance for stakeholders involved in the implementation and use of these novel diagnostic solutions. The recommendations serve as a roadmap for navigating the complexities of integrating digital diagnostics into healthcare practice outside traditional settings such as hospitals and clinics.
INTRODUCTION
Digital technologies are advancing at a rapid pace, pushing diagnostics into smaller and more portable devices, many of them outside traditional healthcare settings. For example, patients can record their sleep patterns using an Apple Watch or FitBit 1,2, "touchlessly" monitor their stress and blood pressure using the phone application veyetals 3, or monitor their blood glucose levels in real time with a small device (such as the Freestyle Libre 2) that displays results on a companion phone application 4. These are just a few examples of the many developers introducing products that use smartphones to record health-related information, such as mood, balance, sleep, and respiratory patterns. While these innovations are patient-centric, we are also seeing many such products designed to be used by physicians, such as a small portable ultrasound device that plugs into a smartphone and provides high-quality images instantly without the need for cumbersome equipment 5.
We call this product category, which ranges from unregulated general wellness products to regulated devices, in-home digital diagnostics (or digital diagnostics for short). Box 1 provides a formal description of how we define the category for the purpose of this study.
Thus far, this product category has been subject to a confusing and incomplete patchwork of regulation by federal law and agencies, as well as other areas of state law, such as tort and contract law 6. Some of these issues have already drawn the attention of interested parties, such as clinicians, academics, and regulators 7. The Food and Drug Administration (FDA), for example, has attempted to respond to issues raised by digital diagnostics through guidance documents and formal programs, including its recent guidance on the difference between regulated devices and unregulated general wellness products 8. It has also implemented a pilot program, which it completed in September 2022, to bring software as a medical device to market faster by evaluating firms rather than products 9.
Legal uncertainty is also coupled with ethical uncertainty about the obligations that physicians, manufacturers, and society have toward those who use digital diagnostics. In other words, it is not clear how actors (patients, physicians, manufacturers, marketers) ought to behave, legally or ethically, when developing, implementing, prescribing, using, and paying for digital diagnostics. For example, what is the best way for manufacturers to communicate to patients how the digital diagnostic should be used? How should manufacturers protect patient privacy? What should physicians know about how the digital diagnostic uses or shares patient information? Do physicians have an ethical obligation to use or not use new technologies? How is this obligation complicated by providers' and patients' ability to access and pay for the digital diagnostic? Will insurance companies cover digital diagnostics and, if so, how much will they reimburse for them? If a digital diagnostic's features change, how should patient consent be obtained? Among the issues these questions raise are those related to patient data privacy and consent, as well as the ethical obligations of those collecting, using, or recommending the use of digital diagnostics.
To answer these kinds of legal and ethical questions, we conducted a Delphi study (Box 2) to develop recommendations around digital diagnostics for developers, regulators, and public and private insurers. The Delphi technique was chosen because it is recognized as an optimal method for consensus building, using anonymous feedback from an expert panel and statistical analysis techniques to interpret the data. The iterative nature of the process avoids some of the pitfalls of other methods, such as the effects of dominant persons or the tendency to conform to a particular viewpoint 10.
The Delphi brought together 19 experts with diverse experience, including founders of digital health technology companies, academics, practicing lawyers at leading technology and insurance companies, physicians, and entrepreneurs, and asked them how to balance these risks against the promise of digital diagnostics. Our study began with over 100 policy recommendations, which the authors drafted based on a review of the literature and prior work in this field. After three rounds of participant evaluation, the total number of consensus recommendations was 29. These recommendations fell into five domains: (1) guidelines, certification, and training relating to the use of digital diagnostics; (2) liability arising from the use of digital diagnostics; (3) the regulation and marketing of digital diagnostics; (4) reimbursement of digital diagnostics; and (5) privacy, security, and consent in the use of digital diagnostics. We intend these recommendations to serve as forward-looking, issue-based guideposts for legislators, regulators, developers, payers, and users of digital diagnostics.
General summary of Delphi process
In rounds 1 and 2, the project team engaged key stakeholders through individual interviews, case studies, and focus groups and formulated the following 5 domains as in need of further guidance: (1) guidelines, certification, and training; (2) liability; (3) regulation and marketing; (4) reimbursement; and (5) privacy, security, and consent. These domains were identified through prior work with the Diagnosing in the Home project, through conversations with stakeholders, and in consultation with the steering committee for the project 11. Each category is meant to identify a distinct class of conduct. For example, "regulation" could plausibly be read to include "guidelines and certification." In this study, however, we use the term "regulation" to mean formal administrative or legal action by a legislature or formal regulator. While certification and guidelines may be included in a regulatory scheme, in this study those terms are taken primarily to mean actions by private parties not otherwise sanctioned by the government or a formal regulator.
We, the authors of this study, focused on developing recommendations for the actors that are most important within each domain. For example, recommendations on the first domain, guidelines, certification, and training, are directed to medical organizations, physicians, manufacturers, caregivers, and licensure, accreditation, and standard-setting bodies.
To identify recommendations that best addressed the ethical and regulatory implementation of digital diagnostics, we then employed a modified Delphi process, which uses multiple rounds of evaluation to gauge and facilitate consensus among a group of expert stakeholders on a particular topic [12][13][14][15]. Using a three-round process (two asynchronous online surveys and one synchronous video conference), the participants voted on candidate recommendations. In each round, consensus criteria were used to eliminate candidate recommendations and advance those remaining to the next round. Round 1 began with over 100 recommendations and asked participants to provide qualitative feedback on the clarity, importance, and correctness of the proposed recommendations and suggestions for additional recommendations. This yielded 54 recommendations, which participants rated in Round 2 along three axes: need, correctness, and feasibility. We chose these categories because our study aimed to develop policy recommendations capable of responding to a real challenge (need), providing appropriate guidance (correctness), and being successfully implemented (feasibility). While 20 of the 54 recommendations met the overall criteria for consensus and were deemed "accepted" without further discussion, 12 recommendations exhibited some level of disagreement. In Round 3, participants discussed the disagreements about these 12 recommendations during a synchronous video call, refining some and eliminating others.
A full list of recommendations and a full list of panelists are provided in Supplementary Information. Below we explain the methods of our Delphi in more detail, focusing on how each round of the Delphi was conducted, the decision rules for developing consensus, and the use of criteria to select recommendations in each round.
The rounds of the Delphi
The 19 members of our Delphi expert panel were selected with the aim of reflecting the diversity of stakeholders involved in the development and use of digital diagnostics, without seeking representativeness given the small size of the group, as is typical. Members included patients, patient advocates, nurses, physicians, medical officers, venture capitalists, product developers, data scientists, experts in bioethics, and experts in law (see Supplementary Table 1 for a complete list of participants). Participants were selected based on their proven expertise in these areas, as exhibited by publication record, professional position, and reputation. Once an individual agreed to participate, suggestions for further well-qualified participants were solicited from them, which were used to inform subsequent choices about whom to invite.
Our Delphi process consisted of 3 rounds. Before the first round, the project team drafted over 100 policy-level recommendations responsive to the 5 domains and targeted at actors identified as relevant to each domain. In round 1, we began our survey with five open-ended questions that asked participants about (i) the most important ethical or legal issues facing the development and implementation of digital diagnostics; (ii) the ethical and legal challenges that make digital diagnostics different and unique from other diagnostic devices used in traditional clinical settings; (iii) the legal and ethical challenges that stakeholders experienced with designing, manufacturing, or marketing digital diagnostics; (iv) the legal and ethical challenges that stakeholders experienced with incorporating digital diagnostics into their medical experience; and (v) the legal and ethical challenges that digital diagnostics present.

Box 1 Definition of in-home digital diagnostics

In-Home: outside of traditional healthcare settings.
• Our definition excludes traditional healthcare settings, which include physician offices, brick-and-mortar hospitals, medical centers, and stand-alone testing facilities. On the other hand, an at-home sleep apnea testing device such as WatchPAT® would qualify as "in-home," as would a smartphone application like Hyfe, which produces a cough report by tracking user cough patterns whenever the user initiates the app. As we use the term, "in-home" might also include a traditional healthcare service, such as an office visit, if performed remotely through video or telephone.

Digital: significantly incorporates a novel, technology-enabled component not traditionally found in diagnostic devices.
• A self-testing kit, such as a pregnancy, ovulation, or drug abuse detection test that allows users to view results online, would not satisfy this definition of "digital," since the digital component does not significantly alter the analog self-test. By contrast, SpectraPass, which uses machine learning algorithms and a mass spectrometer's laser on a protein sample to determine whether a patient is positive or negative for COVID-19, enabling adaptation of existing technology already deployed in hospital systems, would fall within our definition of "digital." This flexible definition captures the breadth of technologies where the digital component significantly changes the nature of the device.

Diagnostics: any device that can aid in the identification of a particular disease or condition, or an event associated with that disease or condition.
• This definition covers the initial diagnosis and subsequent events caused by a particular disease or condition. Glucose monitors, for example, would fit within this definition because they can aid in the diagnosis of low blood sugar even though a patient typically uses one only after an initial diabetes diagnosis. Our project focuses on products that meet all three criteria.
Box 2 What is a Delphi?
Pioneered by the RAND Corporation to forecast the effect of technology on warfare, the Delphi is a method for developing consensus among experts 16. Since its inception, it has been used to study issues ranging from drug and device regulation 27, to nursing 28, to clinical decision making 29 and bioethics 30. The technique can vary according to context and objectives. In general, however, the Delphi identifies and solicits participation from experts to help develop consensus around a particular issue through some combination of asynchronous and synchronous surveys that ask participants to evaluate recommendations. Criteria are developed to eliminate recommendations that do not reach consensus or to select those that reach strong consensus.
We asked panel members to provide qualitative feedback on the clarity, importance, and correctness of the proposed recommendations and suggestions for additional recommendations. This resulted in 54 recommendations for evaluation in round 2.
In round 2, members of the Delphi panel were asked to complete a survey evaluating the 54 recommendations along 3 axes: need, correctness, and feasibility. The choice of axes was motivated by our aim of selecting policy recommendations capable of responding to a real challenge (need), providing appropriate guidance (correctness), and being successfully implemented (feasibility). The criteria for determining consensus were based on the UCLA/RAND consensus criteria adjusted for the size of our panel 16. Twenty of the 54 recommendations met the overall criteria for consensus after round 2 voting and were thus deemed "accepted" without further discussion.
Round 3 consisted of a half-day video conference devoted to discussing approximately 12 recommendations that exhibited some level of disagreement on feasibility. Before the video conference, participants received information sheets summarizing the voting process, the results, the recommendations that had been accepted, and those that remained to be discussed. During the video conference, the attending participants (which totaled 15) debated revisions to recommendations after moderated discussion among the group. Our process allowed for and encouraged changes to the wording and substance of the recommendations. Because Round 2 produced such high levels of agreement on recommendations, and because not all panelists could participate in Round 3, we used the videoconference to discuss modifications to the identified recommendations.
Because the revisions were quite minor, the project team sent participants the changes and asked them to respond if they did not agree with the modified recommendations. A lack of response within one week signified acceptance.
Decision rules for consensus
In any Delphi process, decision rules are determined in advance to both define and determine consensus. Consensus on a topic is usually determined if a certain number or percentage of the votes falls within a prescribed range. We determined our criteria for consensus a priori to avoid bias.
We constructed a panel of n = 19 members. The panel used two different 7-point voting scales across the rounds of the Delphi. The justification for the two different scales is the high number of initial recommendations (119). We explained to the panel both the scaling mechanism and the reason for the two different scales. Round 1 was primarily directed towards winnowing the pool of recommendations. For that reason, the panel used a 7-point scale to measure three axes of opinion about each recommendation, where 7 represented positive views (importance, correctness, and clarity) and 1 represented negative views.
Round 2 was directed towards forming consensus on the recommendations remaining after Round 1. The panel used a 7-point scale to measure three axes of opinion about each recommendation, where 7 represented positive views (correctness, high need, or feasibility) and 1 represented negative views (incorrect, low need, infeasible). Generally, endorsement of a recommendation was determined by high-end (positive) scores without disagreement.
We based our criteria on the European Union BIOMED Concerted Action on Appropriateness for surgical procedures as referenced in The RAND/UCLA Appropriateness Method User's Manual 17. For clinical appropriateness determinations, one needs to determine agreement around appropriate, inappropriate, and equivocal designations, since any clinical scenario may occur and will need to be categorized. Furthermore, appropriateness studies assume there will be variation. Policy recommendations, while not absolute, are not intended to be empirically applied, so we sought greater agreement. Furthermore, in our case, we were concerned only with a decision about whether to accept or reject a recommendation. Therefore, we needed to focus on consensus around high scores.
We defined consensus (i.e., agreement) as a clustering of scores in the high end of the scale (typically 5-7, or 6-7), without "disagreement" (i.e., scores in the low end of the scale, 1-3). Because we had three axes (confidence in correctness, need, feasibility), we decided to keep or reject each recommendation in two steps. Step 1 would be to assess consensus for each axis. Step 2 would be to make a recommendation selection based on all three axes. However, given the large number of starting recommendations, this two-step process was not applied at the initial recommendation stage. Instead, we relied on rough-cut criteria for consensus, as determined by average scores across all axes equal to or greater than 5. We chose this criterion because of the large number of recommendations and the purpose of the first round as a screening device.
To provide some flexibility, we designated two sets of criteria, one primary and one secondary (as per RAND), both discussed below. The primary criteria are meant to find consensus across all three axes, rather than simply one. We used the following primary criteria for determining "high (positive) consensus" and "low (negative) consensus":

• "Positive consensus": after discarding one extreme high and one extreme low rating, there must have been at least 10 ratings ≥ 6, and not more than 5 ratings < 5.

• "Negative consensus" is the inverse, i.e., after discarding one extreme high and one extreme low rating, among the remaining ratings, there must have been at least 10 ratings < 5, and not more than 5 ratings ≥ 6.

The secondary criteria applied only to correctness. We used the following secondary criteria for determining "high consensus" and "low consensus":

• "Positive consensus": after discarding one extreme high and one extreme low rating, among the remaining ratings, there must have been at least 10 ratings ≥ 6, and not more than 2 ratings < 5.

• "Negative consensus" is the inverse, i.e., after discarding one extreme high and one extreme low rating, among the remaining ratings, there must have been at least 10 ratings < 5, and not more than 2 ratings ≥ 6.
We used the secondary criteria to ensure representative responses. Because the primary criteria screen out recommendations that do not have consistent results across responses, they may produce a narrow band of recommendations that is not representative of the breadth of issues involved. Because the crux of consensus requires that the recommendation be correct, not that it be needed or feasible, secondary criteria were used to ensure representativeness.
The secondary criteria could be invoked if the investigators wished to include recommendations that are related and are close to consensus on the primary criteria. Assume, for example, with a full 19 valid responses, that 17 participants agreed a recommendation was highly correct and needed, but only 13 participants agreed it was feasible. It would not meet the primary criteria. Assume also that there seems to be a high degree of consensus on two of the three axes, and the third axis represents an area of disagreement that relates to the practicality of the recommendation, not to whether it should be pursued. We might still value those responses because the participants who scored the recommendation at 6 were ambivalent or uncertain in their opinions during the survey or meeting, or were made up of people who consistently gave lower scores, and because people whose opinions we valued highly said it was highly correct and needed.
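To make the counting rules above concrete, the sketch below implements the primary and secondary consensus checks as we read them. The handling of ties when trimming one extreme high and one extreme low rating, and the vote vector itself, are assumptions for illustration.

```python
def trimmed(ratings):
    """Drop one extreme high and one extreme low rating."""
    r = sorted(ratings)
    return r[1:-1] if len(r) > 2 else r

def positive_consensus(ratings, max_low=5):
    """At least 10 remaining ratings of 6-7 and no more than `max_low` ratings below 5.
    Primary criteria use max_low=5; the secondary (correctness-only) criteria use max_low=2."""
    r = trimmed(ratings)
    return sum(x >= 6 for x in r) >= 10 and sum(x < 5 for x in r) <= max_low

def negative_consensus(ratings, max_high=5):
    """Inverse rule: at least 10 remaining ratings below 5 and no more than `max_high` of 6-7."""
    r = trimmed(ratings)
    return sum(x < 5 for x in r) >= 10 and sum(x >= 6 for x in r) <= max_high

# 19 hypothetical ratings on one axis for one recommendation.
votes = [7, 7, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 4, 4, 4, 3, 1]
print(positive_consensus(votes))              # True under the primary criteria
print(positive_consensus(votes, max_low=2))   # False under the secondary criteria
```

Under the primary criteria a recommendation would additionally need positive consensus on all three axes, not just the one shown.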
Defining consensus, using selection criteria, and endorsement
We used the primary and secondary criteria to determine consensus in each round. For each recommendation, we used 3 axes for each question in round 1 (importance, correctness, clarity) and round 2 (correctness, need, feasibility). We used the axis of correctness as the main basis for endorsing recommendations.
Because each of the rounds had different subcategories and was directed to different tasks, we modified the consensus criteria for each round. In the first, or initial, round of the Delphi, we proposed 119 total recommendations. Since the goal of the Delphi was to develop a smaller set of recommendations, the first round focused on winnowing the recommendations down to a more manageable number. Our initial criteria for determining whether a recommendation reached consensus were based on excluding low-scoring recommendations (≤ 3) and including high-scoring recommendations (≥ 5).
"Positive consensus" in Round 1 meant the following: after discarding one extreme high and one extreme low rating, among the remaining ratings, there must have been at least 10 ratings ≥ 5, and not more than 2 ratings < 3. "Negative consensus" in the first round meant the inverse, i.e., after discarding one extreme high and one extreme low rating, among the remaining ratings, there must have been at least 10 ratings < 3, and not more than 2 ratings between 5 and 7.
In round 1, we used the primary criteria applied to Correctness, Need, and Feasibility. A recommendation was selected if there was a high (positive) consensus on all three axes, as measured by an aggregate score of 5 or higher.
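A minimal sketch of the Round 1 rough-cut screen as we read it: a recommendation advances if its average rating is at least 5 on every axis. The axis names and ratings below are illustrative assumptions.

```python
from statistics import mean

def passes_round1(ratings_by_axis, cutoff=5):
    """ratings_by_axis maps an axis name to the panel's 1-7 ratings for one recommendation."""
    return all(mean(votes) >= cutoff for votes in ratings_by_axis.values())

example = {
    "importance": [6, 6, 5, 7, 5, 6, 4, 5, 6, 6, 5, 7, 6, 5, 6, 6, 5, 6, 7],
    "correctness": [5, 6, 6, 6, 5, 5, 6, 6, 7, 5, 6, 6, 5, 6, 6, 5, 6, 6, 6],
    "clarity": [6, 5, 5, 6, 6, 6, 5, 6, 6, 6, 5, 6, 6, 6, 5, 6, 6, 5, 6],
}
print(passes_round1(example))   # True: every axis mean is at least 5
```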
Round 2 of the Delphi contained a smaller number of recommendations. And because these were recommendations on which there was already a high level of agreement, we expected the consensus criteria to be narrow. We modified our definition of consensus for round 2 because the responses generated by Round 1 indicated scores of greater than 5, on average. For this reason, we set higher cutoff criteria for what counted as positive and negative consensus.
In round 2, we used a two-part decision framework. A recommendation was selected if there was high (positive) consensus on correctness, need, and feasibility (using the primary criteria). A recommendation might also be selected if there was high (positive) consensus on correctness (using the secondary criteria), as long as there was no negative consensus on either need or feasibility (using strict criteria).
Overview
Twenty-nine recommendations met the prespecified criteria for consensus in the Delphi process. We summarize them here, organized by the five domains identified earlier.
Domain 1: recommendations addressing guidelines, certification, and training relating to the use of digital diagnostics
The Delphi panel achieved consensus on six recommendations addressing the introduction and implementation of guidelines, certifications, and training for the use of digital diagnostics. For this domain, the key actors identified were manufacturers, with a downstream focus on patients and practitioners. These recommendations include the need for physicians to understand how demographic factors of their patient population influence patients' use of the digital diagnostic; the need for manufacturers to develop training tools for physician and nonphysician practitioners; and the need for manufacturers to provide patients with easy-to-understand instructions and on-demand resources, such as videos or visual diagrams on the use of digital diagnostics, and to give patients access to comprehensive informational materials.
The Delphi Working Group adopted the following recommendations in this domain:

Recommendation 1: Physicians should use reasonable efforts to understand how patient population (e.g., age, race, socioeconomics) may influence the use of the digital diagnostic.
Recommendation 2: Manufacturers marketing digital diagnostics should develop training tools that should be available to nonphysician practitioners who use digital diagnostics.
Recommendation 3: Manufacturers marketing digital diagnostics should develop training tools that should be available to physicians who use digital diagnostics.
Recommendation 4: Manufacturers should provide to patients easy-to-understand instructions for the use of the manufacturers' digital diagnostics.
Recommendation 5: Manufacturers should provide to patients access to on-demand resources (e.g., a QR code that points to videos, visual depictions/diagrams) for the proper use of the manufacturers' digital diagnostics (e.g., instructions on use, limitations, etc.).
Recommendation 6: Patients should have access to comprehensive informational materials about the digital diagnostic they use.
Domain 2: recommendations addressing liability arising from the use of digital diagnostics
The panel endorsed five recommendations addressing liability arising from the use of digital diagnostics. For this domain, the key actors identified were healthcare organizations and healthcare practitioners, with a downstream focus on product users and caregivers. The recommendations include the need for healthcare organizations such as hospitals and medical centers to develop policies and procedures to monitor and remedy adverse events that arise from the use of digital diagnostics; the need for healthcare practitioners to inform patients who use digital diagnostics of any privacy concerns arising out of that use, the patient's responsibilities while using digital diagnostics, and any risks involved with such use; and the need for healthcare practitioners to also inform caregivers of any risks involved with such use of digital diagnostics.
The Delphi Working Group adopted the following recommendations in this domain:

Recommendation 7: Healthcare organizations, such as hospitals and academic medical centers, that use digital diagnostics should develop a template of policies and procedures for monitoring adverse events associated with the use of digital diagnostics (assuming information from digital diagnostics flows directly to them instead of manufacturers).
Recommendation 8: Healthcare practitioners that use digital diagnostics should inform patients about the privacy concerns arising from the use of digital diagnostics that the manufacturer communicates to the healthcare practitioner.
Recommendation 9: Healthcare practitioners that use digital diagnostics should adequately inform patients about the patients' responsibilities when using digital diagnostics.
Recommendation 10: Healthcare practitioners that use digital diagnostics should adequately inform patients about any risks involved in using digital diagnostics.
Recommendation 11: Healthcare practitioners that use digital diagnostics should adequately inform caregivers about any risks involved in using digital diagnostics.
Domain 3: recommendations addressing the regulation and marketing of digital diagnostics
The panel achieved consensus on six recommendations addressing the regulation and marketing of digital diagnostics. For this domain, the key actors identified were FDA and product manufacturers, with a downstream focus on product users. These recommendations address the reality that some digital diagnostics are regulated by FDA and others are not, a state of affairs the panel thought should persist. The recommendations in this domain include the need for manufacturers to validate FDA-regulated products using data representative of the demographic characteristics of the target population; the need for manufacturers to develop a consumer-friendly template that explains the uses of digital diagnostics not regulated by FDA; the need for FDA to evaluate all digital diagnostics it reviews for analytical and clinical validity; the manufacturers' responsibility not to imply or suggest any uses other than those that FDA approves, clears, or authorizes for digital diagnostics it regulates; and the need for manufacturers to develop a uniform consumer-friendly disclosure that explains the uses and limitations of digital diagnostics regulated by FDA.
The Delphi Working Group adopted the following recommendations in this domain:

Recommendation 12: For digital diagnostics regulated by FDA, manufacturers should use representative data, by including individuals with representative demographic characteristics (e.g., sex, race, age, socioeconomics) of the digital diagnostic's target population, to validate their products before and after any FDA approval, clearance, or authorization.
Recommendation 13: For digital diagnostics not regulated by FDA, manufacturers should develop a consumer-friendly template that explains the uses of the product.
Recommendation 14: FDA should attempt to evaluate all digital diagnostics it reviews for analytical and clinical validity.
Recommendation 15: For digital diagnostics regulated by FDA, manufacturer marketing should not imply or suggest uses other than those that FDA has approved, cleared, or authorized.
Recommendation 16: For digital diagnostics regulated by FDA, manufacturers should develop a uniform consumer-friendly disclosure that explains the uses of the product.
Recommendation 17: For digital diagnostics regulated by FDA, manufacturers should develop a uniform consumer-friendly disclosure that explains the limitations of the product.
Domain 4: recommendations addressing reimbursement of digital diagnostics
Effective digital diagnostics will not reach many of the patients who need them absent reimbursement by public and private payers. For this domain, the key actors identified were insurance providers, with a downstream focus on members or prospective members. While some patients might pay for digital diagnostics out of pocket, unless attention is focused on payment, we may see use of these products stymied by a lack of access to digital health technologies, especially in low-income communities and among those without insurance coverage. Under this domain, the Delphi panel endorsed three recommendations addressing reimbursement of digital diagnostics. These recommendations include the need for the Centers for Medicare and Medicaid Services (CMS) to articulate specific criteria for reimbursement that proactively assess digital diagnostics; the need for private insurance companies to explain to their members in plain language their expected financial responsibility under the member's policy for digital diagnostics; and the obligation of private insurance companies to develop clear policies on their reimbursement procedure for digital diagnostics.
The Delphi Working Group adopted the following recommendations in this domain:

Recommendation 18: The Centers for Medicare and Medicaid Services (CMS) should articulate specific criteria for reimbursement that proactively assess digital diagnostics (beyond parallel review).
Recommendation 19: Private insurance companies should explain to their members in plain language the member's expected financial responsibility for a digital diagnostic paid for under the member's policy.
Recommendation 20: Private insurance companies should articulate and develop clear policies on their reimbursement procedure for digital diagnostics.

Domain 5: recommendations addressing privacy, security, and consent in the use of digital diagnostics

Digital diagnostics implicate privacy, security, and consent because they collect information on patients, raising the possibility that the information could be hacked or shared by the manufacturer without patient knowledge or consent. For this domain, the key actors identified were manufacturers, stakeholders, patients, and the Federal Trade Commission (FTC), with a downstream focus on product users. The panel achieved consensus on nine recommendations addressing privacy, security, and consent in the use of digital diagnostics. These include the need for manufacturers to develop consensus on technical standards for digital diagnostics; the advice that manufacturers should use privacy-by-design principles, building specific privacy controls, such as defaults, into the design of the product rather than trying to regulate them solely after development; the need for stakeholders to convene to develop model ethical principles that prioritize the privacy and security of patient data; the right of patients to have access to data collected by digital diagnostics; the need for manufacturers to develop manuals that help patients understand in plain language how their information is stored, collected, and used; the advice that manufacturers should provide consumer-friendly disclosures detailing how consumers can protect their information; and the need for the FTC to play a more active role in regulating false and misleading advertising claims for digital diagnostics and in maintaining consumer privacy by regulating data breaches.
The Delphi Working Group adopted the following recommendations in this domain:

Recommendation 21: To ensure accessible deployment of technology across patient populations and settings, manufacturers should adopt consensus on technical standards for digital diagnostics.
Recommendation 22: Manufacturers should use privacy-by-design principles to develop products with privacy protections "built in" to the product's functionality.
Recommendation 23: Stakeholders (e.g., manufacturers, ethicists, and physicians) should convene to develop model ethical principles for the design, implementation, and use of digital diagnostics that prioritize the privacy of patient data.
Recommendation 24: Stakeholders (e.g., manufacturers, ethicists, and physicians) should convene to develop model ethical principles for the design, implementation, and use of digital diagnostics that prioritize the security of patient data.
Recommendation 25: Patients should have a meaningful right to access information collected by a digital diagnostic.

Recommendation 26: Manufacturers should develop easy-to-understand instruction manuals that help patients understand in plain language how their information is gathered, stored, and used.
Recommendation 27: Manufacturers should provide consumer-friendly disclosures about how they protect consumer information.
Recommendation 28: The Federal Trade Commission (FTC) should maintain an active presence in the digital health space to police false and misleading advertising claims.
Recommendation 29: The Federal Trade Commission (FTC) should continue to protect consumer privacy using its authority to regulate data breaches.
DISCUSSION
Like other emerging technologies, digital diagnostics generate a number of legal and ethical issues with no immediate or simple answer. One such area is liability, which may arise from the development, design, prescription, or use of digital diagnostics. The recommendations respond to this concern by ensuring that all relevant parties that interact with digital diagnostics are reasonably informed about the diagnostic's capabilities and operation (1-5). This is important because liability risks that stem from using these devices extend beyond physicians to patients and caregivers as well 18 . For example, a caregiver may be injured at a patient's home while assisting with a digital diagnostic. Or they may incorrectly assist the patient using the digital diagnostic, causing a false positive or negative that results in injury.
The participants recognized that legal and ethical responsibilities should be commensurate with available information, including by placing a greater burden on manufacturers to provide information than on physicians or patients to ferret it out on their own (which in many cases is actually or practically impossible) (7-9). The participants did not reach consensus on two recommendations that would have required patients and caregivers to take a short quiz on how to use the digital diagnostic before using it. Participants also did not reach consensus on a recommendation that would have made digital diagnostic training a required element of clinician licensure or education. Conversely, participants also did not endorse a recommendation to limit liability entirely, failing to reach consensus on amending laws to provide additional protection for physicians who rely on digital diagnostics.
Participants did not endorse recommendations that would have directly shifted liability to healthcare organizations, which may run hospital-at-home programs that require patients and caregivers to use digital diagnostics (8-11). They did, however, recognize that these institutions have a significant role to play (7). Participants reached consensus on a recommendation for such providers to develop template policies to be used at a variety of institutions, perhaps even formalizing a custom that liability law would take into account when making liability determinations 19 . While the Delphi participants did not discuss the details of how exactly to effectuate such a move, hospitals and large academic medical centers, or accrediting organizations such as the Joint Commission, might formulate template policies that could be customized to meet the needs of individual institutions, and tort law might look to these templates for guidance.
Open questions also remain as to how federal and state regulators should confront digital diagnostics. While Delphi participants did not reject FDA's current approach to regulation, they recognized that more was needed in some key areas. The recommendations broadly supported FDA's continued enforcement of regulatory controls on digital diagnostics it reviews, along with clinical and analytical validation that uses demographically representative data. Moreover, several recommendations also embraced FDA's current approach, driven by its statutory authority, that exempts certain product categories from FDA regulation (12-15). What separates these "general wellness products" from devices seems clear on paper but can often be a difficult line for FDA to police 19 . The recommendations generally supported some flexibility in FDA's approach, and also suggested that manufacturers develop a uniform consumer-friendly disclosure to help inform those using the product (13, 16-17). There are existing proposals relating to off-label uses of drugs 20 , nutrition labels for AI 21 , and health apps 22 that could be adapted by FDA or private industry to effectuate these recommendations.
FDA is not the only agency active in this space. FTC's regulation of advertising is also important. The participants endorsed recommendations focused on FDA rather than FTC, but this may be an artifact of FDA being the agency with which they were more familiar. Indeed, in the first round participants endorsed recommendations regarding continuing FTC enforcement and potential enforcement expansion with high consensus. But the strict criteria for consensus in the second round meant these recommendations did not advance to subsequent rounds. FTC was also discussed more in the context of consumer privacy (29), probably owing to FTC's ongoing presence and public attention in the privacy space.
For digital diagnostics to meaningfully improve the health of all patients, it is essential that there be some form of reimbursement for their use. The participants recommended that insurers have clear policies about covering and reimbursing digital diagnostics (18-20). Consumer-friendly tools to help insurance members understand and estimate costs were also recommended, both before and after a customer selects a plan. Indeed, insurers already have models they can adapt to this purpose. For example, Medicare has an online tool that helps customers estimate plan and drug costs, including customizing for the drugs a consumer already takes 23 . In the private insurance space, insurers already offer tools to determine whether physicians are in network. A recent federal law has required providers to give estimates before service is rendered, though its effect has thus far not been very pronounced 24 . Similar efforts could be made by public and private insurers with respect to coverage and cost of digital diagnostics. Participants did not reach the question of how pricing tools may be linked to other information about the patient, including information derived from digital diagnostics and other sources like consumer electronics (such as a smart refrigerator or Amazon Alexa) 25 , an issue worthy of further discussion.
Beyond the use of data for pricing, the participants also considered the collection, disclosure, sale, and use of data collected by digital diagnostics. Their recommendations here emphasized the importance of privacy, security, and consent. The general thrust of the recommendations the participants endorsed is that manufacturers and developers ought to build in guardrails by incorporating ethical principles into the design of their products, with particular emphasis on privacy and security (22-24). Others have suggested similar approaches to incorporating ethical principles into the design of the product rather than trying to police privacy and security through amorphous notions of consent after the product is already on the market 26 . Wariness about reliance on consent was also reflected in the fact that the participants failed to reach consensus on a recommendation that would require physicians to seek "re-consent" each time a manufacturer of a digital diagnostic updated its software in a way that significantly affects the functioning of the product. In the first round the participants supported a recommendation (25) for a patient's universal and meaningful right to access their data with relatively strong consensus, but the strict criteria applied in Round 2 meant that this recommendation, though still strongly supported, did not meet the criteria to advance to Round 3.
The participants also identified the importance of the individual patient as a decision-making agent. The relevant recommendations emphasized different aspects of agency, ranging from information about what happens with their data (26-27), to a right to access the data collected by the digital diagnostic (25), to ensuring that technical standards enable access in the first instance (21). FTC enforcement, too, was seen as an important component of this agency, ensuring that accurate information reaches consumers and that companies are held accountable for data breaches (28-29).
This study has several limitations. First, as with any Delphi study, the type and content of the recommendations are neither exhaustive nor random. The study authors developed potential recommendations based on research and consultation with others working in the area, but this did not capture all potential recommendations, all possible domains, or necessarily a representative sample of them. Second, the high level of agreement on almost all potential recommendations suggests that the study did not capture recommendations that may be strongly disfavored. For example, the study did not evaluate a recommendation to privatize part or all of the regulatory framework for digital diagnostics. Such a recommendation may have received strong negative consensus, but it was not evaluated. Third, the selection criteria used to accept recommendations required extremely high levels of agreement, suggesting that while some of the recommendations not selected may also be helpful to policymakers, the ones selected deserve priority in consideration. Fourth, while we were able to recruit participants representing a very diverse set of stakeholders, there were some interests in this area we did not succeed in recruiting for the Delphi. For instance, our study did not include a technology entrepreneur who might engage in serial development of digital diagnostics. Our study also did not include any participant from a large device manufacturer or large institutional investor. This may have resulted in high consensus for some recommendations that would have achieved lower, no, or negative consensus had additional individuals been included. Fifth, many of the recommendations that failed to reach consensus did so because participants rated them low on feasibility. This suggests that the results may be biased in favor of recommendations that, while feasible, may not reflect the breadth of concerns participants thought were important to consider.
As investment in and rollout of in-home digital diagnostics gain steam, developers, regulators, and public and private insurers are all trying to formulate policy in this space. Our Delphi process brought together 19 experts with diverse experience, including founders, academics, practicing lawyers at leading technology and insurance companies, physicians, and entrepreneurs, who endorsed 29 consensus recommendations we believe should help set the agenda in five domains: (1) guidelines, certification, and training relating to the use of digital diagnostics; (2) liability arising from the use of digital diagnostics; (3) the regulation and marketing of digital diagnostics; (4) reimbursement of digital diagnostics; and (5) privacy, security, and consent in the use of digital diagnostics.
Our results indicate the need for increased involvement across a diverse portfolio of stakeholders, including physicians, legislators, regulators, manufacturers, public and private insurance companies, healthcare practitioners and providers, and patients and caregivers, to better understand and integrate digital diagnostics into the healthcare system. Because of the diverse range of stakeholders, the recommendations varied according to what actions each stakeholder should take.
Nevertheless, there are several key takeaways from the study. First, the results emphasize a need to provide simple, understandable information about digital diagnostics to patients and physicians. This should include additional resources beyond product labeling that instruct individuals on how to use the device, as well as its limitations and risks. Second, physicians and policymakers should develop guidelines and standards for using digital diagnostics that provide a framework for assessing liability when the digital diagnostic, or the physician using one, causes harm to a patient. Third, regulators and manufacturers have a key role to play, not only in providing information, but also in ensuring that the digital diagnostic works safely and effectively for the target population and condition. Fourth, reimbursement should be a key element of digital diagnostics policy, both as a tool to ensure equitable access and to incentivize innovation of new technologies. Fifth, digital diagnostic policy should include rules for data management and security that protect patient information while allowing for the interoperability of digital diagnostics across platforms and the sharing of data for research and innovation purposes.
These recommendations reflect the diverse range of issues that digital diagnostics implicate. And while policymakers should consider them important components of any strategy, they should not be viewed as conclusive of all concerns applicable to digital diagnostics. Nor should they be viewed as exhaustive of the potential recommendations for all digital diagnostics. Despite their limitations, however, these recommendations provide a framework for policymakers thinking about the concerns, challenges, and opportunities that digital diagnostics raise. And they should be carefully considered as the technology advances.
| 9,744 | 2024-01-22T00:00:00.000 | [
"Medicine",
"Computer Science",
"Engineering"
] |
Reading on paper or reading digitally? Reflections and implications of ePIRLS 2016 in South Africa
South Africa participated in the electronic version of the Progress in International Reading Literacy Study (ePIRLS) in 2016 but faced many challenges during implementation. Accurate databases on information and communication technologies (ICT) capacity of schools were not available for sampling in Gauteng, many schools had old and/or non-functional hardware and half of the schools had not used their computer laboratories in the last 3 years. Consequently, South Africa was excluded from the international report as the study requirements could not be met. In this paper we examine the implications of the problems experienced in the ePIRLS multiple case study, conducted in 9 schools (n = 277) in Gauteng. Multilevel models were built using data from the nationally representative Grade 4 Progress in International Reading Literacy Study (PIRLS) data from 2011 (n = 15,744) and 2016 (n = 12,810). In the 2016 national study, principals and teachers reported fewer computers and libraries being available for learners than were reported in 2011. Computers and paper-based libraries being available were not significant predictors of reading literacy. Instead, the medium of instruction in the Foundation Phase, school location, gender, and socioeconomic composition of the school predicted reading literacy achievement. The ePIRLS results show no significant difference between paper-based and online reading. While issues of poverty, gender inequality, and historical disadvantage persist, Grade 4 learners may lack adequate opportunities to acquire paper and digital reading skills. We conclude that the most disadvantaged learners have increasingly insufficient opportunities and resources available to attain basic reading skills and this will have negative long-term consequences for South Africa’s educational sector and economy.
Introduction
The role of online reading in primary school education is increasingly viewed as essential to the nature of living in the information age (Gerick, Eickelmann & Bos, 2017;Hennessy, Onguko, Harrison, Ang'ondi, Namalefe, Naseem & Wamakote, 2010;Mullis, Martin, Foy & Hooper, 2017a;Plowman, McPake & Stephen, 2012). Some scholars argue that the effectiveness of ICT and digital media in primary education is yet to be established and should be limited with young learners (Burnett, 2010;Hesterman, 2011; Organisation for Economic Cooperation and Development [OECD], 2015a). Other scholars argue in favour of ICT to enhance reasoning skills as well as computer and information literacy (Beetham & Sharpe, 2013;Fisher, R 2014;Toki & Pange, 2014;Vasquez & Felderman, 2013).
ICT can contribute positively to online and paper-based reading literacy, but the context and purpose of instruction should guide its use and it should be integrated with the aims of the curriculum (Lindberg, Olofsson & Fransson, 2017; Mills, 2010; Sharpe & Oliver, 2013). The educational debate is shifting from ICT advantages and disadvantages to methods of using ICT to maximise the benefit to both teachers and learners (Cicconi, 2014; Meyer & Gent, 2016; Mills, 2010; Toki & Pange, 2014; Whittingham, Huffman, Rickman & Wiedmaier, 2013). A balanced approach to the utilisation of technology in the classroom could strengthen digital literacy and enhance the teaching and learning of reading literacy (Lim & Hang, 2003; McLean, 2017). Reading paper-based materials and online reading are two constructs that overlap but also differ in some aspects, which is why there is a strong argument for developing both constructs in a digitally-rich world (Coiro, 2011; Gilleece & Eivers, 2018).
In this paper we investigate which ICT resources are available for teaching and learning reading literacy in Grade 4 and whether regular use predicts paper-based reading literacy. The challenges of assessing online reading are discussed in the context of ICT availability and utilisation in South African primary schools, as well as the findings from the ePIRLS.
Literature Review
The ePIRLS 2016 international results report significant differences between online reading and paper-based reading for all but two of the 14 participating countries (Mullis, Martin, Foy & Hooper, 2017b). The main conclusion of the ePIRLS international study is that when learners are well-prepared to read paper texts and are exposed to digital reading in school, they are proficient in online reading, including skills such as navigating simulated internet pages, integrating interactive content and searching for information (Mullis et al., 2017a). The complexities of online reading, when compared to paper, are being expanded on by researchers, and include issues such as the type of information read online, its context, and use. For example, there may be no significant differences in comprehension of fiction or nonfiction texts when readers are exposed to paper, tablets, or computer reading (Margolin, Driscoll, Toland & Kegler, 2013). There may be a difference between online reading and paper-based reading for those who do not have access to digital reading devices (Leu, Forzani, Rhoads, Maykel, Kennedy & Timbrell, 2015). Factors that affect reading literacy achievement in online reading versus paper-based include socioeconomic factors. Those living in impoverished areas may have significantly lower online reading literacy achievement than children in more affluent neighbourhoods (Gilleece & Eivers, 2018;Leu et al., 2015). When learners do not have access to ICT resources, their overall reading achievement in both paper and online reading could be lower (Leu et al., 2015).
ICT policies, plans and reality in South Africa
Incorporating ICT into pedagogy has been part of education reform since 1994. A Technology Enhanced Learning Initiative (TELI) was introduced in 1995 (De Jager & Nassimbeni, 2002). The initiative was followed by a draft policy paper in 1997, which aligned itself with the TELI strategic plan (Boekhorst & Britz, 2004). As part of incorporating ICT into pedagogy, SchoolNET was launched in 1997 (Blignaut & Howie, 2009). Seven years after the ICT initiative, a draft policy, the White Paper on e-education, was published in 2004 (Department of Education [DoE], 2004; Vandeyar, 2015). The strategic message of the White Paper on e-education was that management, teachers, and learners should have computer literacy skills and access to ICT resources by 2013 (DoE, 2004). The slow and uncoordinated implementation of the policy can be attributed to a lack of resources and departmental capacity (Gauteng Department of Education [GDE], 2010; Meyer & Gent, 2016). Other challenges include a lack of integrative strategies and a one-size-fits-all approach that does not work in South Africa's diverse educational landscape (Meyer & Gent, 2016). Poor strategy and implementation at the national level has resulted in provinces taking the initiative and developing their own approaches. Of the nine provinces, only two are proactive on this topic: the Gauteng Department of Education (GDE) and the Western Cape Education Department (WCED). The WCED rolled out the Khanya Project, which envisaged providing every school with computers for administration, teaching, and learning (Chigona, Chigona & Davids, 2014). Gauteng Online was a project that provided computer labs with internet connections to primary schools that did not have these resources.
The South African administration of ePIRLS reported a lack of ICT resources, even in the more urbanised province of Gauteng (Howie, Combrinck, Roux, Tshele, Mtsatse, McLeod Palane & Mokoena, 2017). South Africa was not included in the international ePIRLS report due to insufficient information for random sampling and was treated as a multiple case study. The fact that the GDE did not have a complete list of schools with ICT capacity indicates gaps in monitoring the availability and use of computer laboratories or tablets, as well as its implementation. Schools in impoverished environments, which do not fall within the former model C classification, face a persisting disadvantage (Christie & McKinney, 2017), a fact that is supported by findings from this paper. The ePIRLS 2016 Gauteng study reported that, even when schools had some ICT capacity, many had outdated hardware and software or nonfunctional resources such as computers which no longer worked or were missing essential components such as keyboards (Howie, Combrinck, Roux, Tshele, Mtsatse, et al., 2017).
Research Objective and Questions
The main aim of this paper was to examine the current status, challenges, and implications of ICT availability in South Africa for Grade 4 and Grade 5 reading literacy teaching and achievement.
Research questions related to the main objective: 1) What is the current status of ICT availability for learning and teaching reading literacy in Grade 4? 2) What is the association between ICT resources and reading literacy achievement when controlling for other variables? 3) Does regular use of computers in the classroom predict increased reading scores? 4) What are the implications of the ePIRLS challenges and results?
Methodology
The national PIRLS samples are stratified clusters randomly drawn to represent populations chosen by the participating countries (LaRoche, Joncas & Foy, 2017). Schools are randomly selected, and thereafter classes are randomly drawn. The South African samples were stratified by language and province, with the exception of the 2011 cycle, in which the sample was not stratified by province. Analysis for the current paper was conducted with data combined per grade, as shown in Table 1. In the case of Grade 4, data for all languages is available for 2011 and 2016, hence the large sample sizes. Due to difficulties discussed later in this paper, only nine schools and 277 learners participated in the ePIRLS 2016 study.
Instruments
The paper-based PIRLS booklets each contained a fictional passage as well as a non-fictional passage. Each passage was accompanied by 12 to 15 questions which contained a balance of multiple-choice and constructed-response items ranging in difficulty and cognitive demand. The paper-based version included a combination of passages aimed at international standards of fourth-year reading, and easier passages targeted at developing readers. The rotated-test design resulted in each learner completing one booklet from the 16 possible booklets, and achievement scores were estimated for all learners and all passages, producing imputed plausible values (PVs). The tasks and items of ePIRLS were developed by the International Association for the Evaluation of Educational Achievement (IEA) and the Australian Council for Educational Research (ACER) (Mullis et al., 2017a). The electronic version of the test required the ability to search for information in a simulated online environment, directly report facts in lower-order tasks, and evaluate and synthesise information in higher-order tasks. The electronic version of the assessment contained only informational tasks and there was no overlap between paper and electronic tasks or passages; the tests were equated by having the same learners complete both instruments. An interactive example of ePIRLS can be found on the Boston College website (Mullis et al., 2017b). Four different questionnaires were also included in the study and were answered by learners, teachers, principals, and parents. The contextual variables used in the analysis, for example whether the classroom had computers, were derived from the questionnaire data.
Administration and Ethical Considerations
The standard IEA protocols were followed during test administration, with one day used for the paper-based test and a separate day for the digital, online reading test. Both tests comprised 45 minutes of reading and answering questions for each task or passage. Learners were given a break between the sessions and testing was conducted in the morning to avoid fatigue. All administrators underwent training and quality assurance monitoring was done by the international body as well as the national team. Ethical clearance for the project was obtained from the University of Pretoria, Faculty of Education. Principals gave permission for testing to take place in their schools, and signed consent was obtained from the learners' parents.
Challenges in Sampling and Administration of ePIRLS
Some of the challenges of implementing ePIRLS were discussed in the highlights report (see Howie, Combrinck, Roux, Tshele, Mtsatse, et al., 2017). This paper reports the challenges in more detail, investigates whether findings and experiences from ePIRLS can be substantiated with findings from the main study, and examines the implications. A requirement of ePIRLS was that schools should have functional computer rooms. The South African sample was originally intended to be representative of schools in Gauteng with computer facilities, where English was the language of learning and teaching (LoLT) from Grade 1. Initially, a database was obtained from the GDE. The list contained 2,161 schools; after eliminating high schools and adult centres, 236 schools remained.
Verification with schools revealed that many of the schools had been assigned the incorrect medium of instruction (language), and even though all the schools on the list should have had ICT capacity, many did not. Liaising with the GDE eventually revealed that accurate databases of ICT capacity were not available. After telephone conversations with the schools, the list was reduced to 36 schools. According to the protocol for the international study, Statistics Canada drew a random sample of 25 schools in Gauteng. However, after school visits it emerged that the LoLT and availability of ICT equipment had been reported incorrectly at even more schools. Eventually, 15 schools were invited to participate and nine agreed. The fact that a representative sample could not be tested had the unfortunate consequence that South Africa was excluded from the international report. Despite schools having computer laboratories, functionality was limited or non-existent in many schools. Therefore, the ePIRLS team had to rent laptops to take to schools, which escalated the cost of the study. During the study it was also discovered that four of the nine schools had not used their computer rooms in the last three years. The result of inactive ICT usage was observed during fieldwork; learners sometimes struggled to use the mouse and respond to the interactive content. Learners in schools where computer rooms were not used did not read better on paper than online, which is attributed to the fact that online and paper-based reading are closely linked and that neither skill had been adequately developed.
Data Analysis
The initial descriptive analysis was conducted using the IEA's International Database (IDB) Analyzer software, which functions in conjunction with the Statistical Package for the Social Sciences (IBM Corp., 2017; IEA, 2018). The mean achievement scores derived from the PIRLS comprehension test are the PVs, reported on a scale of 0 to 1,000 with a standard deviation of 100. IDB Analyzer runs statistical tests while accounting for standard errors, weighting the sample appropriately, and combining the plausible values (Foy, 2018; IEA, 2018). IDB Analyzer was used to generate descriptive statistics of ICT availability for the larger samples as well as the ePIRLS multiple case study.
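To make the combination step concrete, the sketch below shows the standard rule for pooling estimates across plausible values (the mean of the per-PV estimates, with an imputation-adjusted variance). It is an illustration only, not the IDB Analyzer's code, and the numbers in the example are invented.

```python
import numpy as np

def combine_plausible_values(estimates, sampling_vars):
    """Pool per-PV estimates: mean of estimates; variance = average sampling
    variance plus an imputation penalty for the spread across plausible values."""
    m = len(estimates)
    theta = np.mean(estimates)                  # pooled point estimate
    within = np.mean(sampling_vars)             # average sampling variance (e.g. jackknife)
    between = np.var(estimates, ddof=1)         # variance across the plausible values
    total_var = within + (1 + 1 / m) * between  # imputation-adjusted total variance
    return theta, np.sqrt(total_var)            # estimate and its standard error

# Invented example: five PV means with their jackknife sampling variances.
mean_score, se = combine_plausible_values([321.4, 319.8, 322.1, 320.5, 318.9],
                                          [4.1, 3.9, 4.3, 4.0, 4.2])
print(f"{mean_score:.1f} ({se:.2f})")
```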
Multilevel modelling (MLM) was conducted using the Hierarchical Linear and Nonlinear Modeling Program (HLM 7) to control for between-school variance (Raudenbush, Bryk & Congdon, 2013). In South Africa, between-school variance tends to be large due to the heterogeneous population, socioeconomic factors, and a complex schooling system (Van Staden, 2010; Van Staden & Howie, 2014). Consequently, models that consider the nested nature of the sample are necessary when the intraclass correlation coefficient (ICC) exceeds 5% (Arends, Winnaar & Mosimege, 2017; Huta, 2014).
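A minimal sketch of the ICC check is shown below, assuming a learner-level data frame with hypothetical 'score' and 'school_id' columns; it uses a random-intercept null model (here with statsmodels rather than HLM 7) purely to illustrate the 5% rule of thumb.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pirls_grade4.csv")  # hypothetical file: one row per learner

# Null (intercept-only) two-level model: learners nested within schools.
null_model = smf.mixedlm("score ~ 1", df, groups=df["school_id"]).fit()

tau00 = null_model.cov_re.iloc[0, 0]   # between-school variance
sigma2 = null_model.scale              # within-school (residual) variance
icc = tau00 / (tau00 + sigma2)
print(f"ICC = {icc:.1%}")              # a multilevel model is warranted above ~5%
```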
Multilevel models were built for the Grade 4 data, with two separate models: one for 2011 and another for 2016 (see Table 1 for sample sizes). The models included the socioeconomic composition of the school, computer availability for learners to use in the school (from the principal questionnaire), as well as computers for class use (from the teacher questionnaire). The availability of a school library and a classroom library was included in the model as an additional control for paper-based reading resources. Weights were calculated to represent the selection probability of the within-cluster units by multiplying all design weights and nonresponse adjustments, as recommended by Asparouhov (2006) and Stancel-Piątak and Desa (2014). To draw conclusions about the overall population, grand mean centring was used for all the variables at level 2 (Hoffman & Gavin, 1998; Raudenbush & Bryk, 2002; Snijders & Bosker, 1999; Stancel-Piątak, Mirazchiyski & Desa, 2013).
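Continuing the sketch above (same hypothetical data frame), the weight and centring steps might look as follows; the weight component names are placeholders, as the actual PIRLS weight variables differ by cycle.

```python
# Final learner weight: product of the design weights and non-response adjustments
# at the school, class and student levels (placeholder column names).
df["final_weight"] = (df["school_weight"] * df["school_nonresp_adj"] *
                      df["class_weight"] * df["class_nonresp_adj"] *
                      df["student_weight"] * df["student_nonresp_adj"])

# Grand-mean centring of level-2 (school) predictors, so that the intercept refers
# to the average school rather than a school scoring zero on every predictor.
for col in ["school_ses", "school_computers", "school_library"]:
    df[col + "_gmc"] = df[col] - df[col].mean()
```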
Economic variables included in the model were school location and school socioeconomic composition to account for learner background. Being exposed to an African language in the Foundation Phase, versus attending an English or Afrikaans school, was included as a variable that indicates both language background and socioeconomic status (SES). Attending a school where the LoLT is an African language or attending an English or Afrikaans LoLT school was specifically included as a predictor, because during the sampling of ePIRLS it was found that African LoLT schools were the most unlikely to have ICT resources. Afrikaans and English LoLT schools were grouped together due to their linguistic similarities and shared historical advantages. The African LoLT schools were grouped together as local languages due to their history and the fact that they differ linguistically from the European languages. It is acknowledged that Afrikaans can be classified as an African language due to its development in South Africa, but in this study it has been grouped with English.
The school reading literacy environment was measured by whether the school had a library and whether classrooms had reading corners or libraries (paper-based resources). The availability of computers or tablets for learners to use in the school and the classroom was included as digitally-based resources. Considering the large gender disparity in reading literacy, the gender of the child was included as a control variable in the model. The presence of computers or tablets does not necessarily imply that they are being used. For this reason, another model was built for schools where classroom computers or tablets were present, to estimate the effect of regularly using these ICT resources.
Missing data can be problematic in multilevel modelling (Raudenbush et al., 2013). The variables derived from the school (principal questionnaire) had relatively large percentages of missing data; a total of 21% of the responses were missing for the 2011 Grade 4 data, and 32% were missing for 2016. The combined cycles had as much as 26% of data missing for the variable of computers being available for learners' use in the school. The teachers' responses on computer availability for the classroom had an average of 7% missing data in each cycle. Full information maximum likelihood (FIML) estimation, using IBM Amos, was used to estimate missing values (Arbuckle, 2014, 2017; Enders & Bandalos, 2001; Schminkey, Von Oertzen & Bullock, 2016). FIML provides unbiased estimates that are equivalent to multiple imputation (MI) in large samples (Ferro, 2014). The missing data models included predictors which had correlations above .40 with the outcome variables (Enders, 2010). Effect sizes in multilevel models are estimated with the proportion of explained variance (PEV), equivalent to the traditional r². The PEV of the null model represents the magnitude of variance that can be explained by school differences, whereas the unstandardized regression coefficients, shown in Table 4 and Table 5, represent the magnitude of the fixed effects for the pertinent predictors (Lorah, 2018). It should be noted that the student weight of the ePIRLS data was set to 1 in the analysis, as the sample is not representative of any population. Therefore, means may differ slightly from the highlights report, where weights had not been changed.
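The PEV reported in the Results can be illustrated as the reduction in total (school plus residual) variance relative to the null model. The sketch below continues the earlier ones (same df, smf and null_model), with placeholder predictor names and the gender variable assumed to be coded 0/1. Level-specific versions of PEV also exist; this shows only the total-variance form.

```python
def proportion_explained_variance(null_fit, full_fit):
    """PEV: 1 minus the ratio of total variance (between-school + residual)
    in the conditional model to that in the null model."""
    var_null = null_fit.cov_re.iloc[0, 0] + null_fit.scale
    var_full = full_fit.cov_re.iloc[0, 0] + full_fit.scale
    return 1 - var_full / var_null

full_model = smf.mixedlm("score ~ girl + african_lolt + school_ses_gmc + rural",
                         df, groups=df["school_id"]).fit()
print(f"PEV = {proportion_explained_variance(null_model, full_model):.1%}")
```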
Results
Results are shown for the MLM models of the Grade 4 national sample in terms of the availability of general ICT and paper-based resources (Table 4), as well as a model built only for classrooms where teachers reported having computers or tablets (Table 5). A multiple linear regression model was built for the results of the ePIRLS multiple case study in Table 7.
Attending an African Language School Versus an Afrikaans or English School
Among the challenges described earlier in this paper, the researchers found that, when a school's LoLT was an African language, the school tended to report no computer facilities or access to tablets. This led the researchers to suspect that there is a divide in ICT availability between African language schools and non-African language schools. To investigate whether the experiences during the multiple case study are supported by the main study, a variable was created to dichotomise writing the assessment in an African language school, as compared to writing in an Afrikaans or English school. While language in this dichotomisation is closely related to socioeconomic status, there are also potentially broader reasons for the divide, such as cultural elements (reading culture), historical disadvantage, and managerial issues within the school system. The divide between African LoLT schools (historically black schools) and English and Afrikaans LoLT schools (historically white schools) remains controversial and incendiary, but the research in this paper finds that issues of inequality persist between the two types of schools. Therefore, the variable was included for scientific reasons but also for the social implications.
The current status of ICT availability for Grade 4s as reported by principals and teachers of African and non-African LoLT schools, is shown in Table 2. With each subsequent cycle of PIRLS participation, both principals and teachers reported less ICT availability. A large and significant difference between ICT availability as reported by African LoLT language schools and English or Afrikaans LoLT schools also existed. Table 3 shows how many principals reported the presence of a school library and how many teachers reported classroom libraries or reading corners. As is the case with the ICT resources, a decline in paper reading materials was reported between the cycles.
ICT Availability and Use as Predictor of Reading Literacy Achievement
In Table 4 the multilevel models of the Grade 4 national samples are shown, with the model of the 2011 cycle on the left and the 2016 cycle on the right. The null model is shown first to account for between-school variance, which predicted 55% of the variance in the 2011 cycle and 41% of the variance in the 2016 cycle. The models in Table 4 are based on the full Grade 4 national sample; in 2011 the representative sample included 15,744 learners and in 2016 a total of 12,810 learners participated.
Table 4 note: * significant, p < 0.05; ** PEV = proportion of explained variance; *** LoT = language of test; β = unstandardized regression coefficients on a scale of 0-1000.
In 2011 the teachers of 22% of learners (n = 3,088), and in 2016 the teachers of 8% of learners (n = 861), reported that classroom computers or tablets were available for reading lessons. Teachers were also asked how often the classroom computers were used to look up information on the internet, read stories or other texts on the computer, or write stories or other texts. Based on category functioning, the responses were coded as 'Less than once a month' and 'Weekly or daily'. In 2011, 66% of the learners whose classrooms had computers were in African language schools, and this percentage was 71% in 2016. Considering that African language LoLT schools were the least likely to have access to ICT resources, the variable of being in an African LoLT school, or not, was once again included in the model. In Table 5 the results of the model for schools where teachers reported the availability of computers are shown for the 2011 and 2016 samples respectively. No statistically significant difference was found between the mean achievements of paper-based reading compared to online reading (shown in Figure 1). This is not surprising as the two types of reading were highly correlated in the South African multiple case study, r = .87 (p = .000). Therefore, predictors were expected to be similar for the two types of reading.
Figure 1: ePIRLS online mean achievement and PIRLS paper-based achievement
Schools that had not used their ICT resources in the last three years were classified as not using ICT; the descriptive statistics for the variables included in the multiple linear regression model are shown in Table 6. Learners' self-reported efficacy in using computers could be related to whether or not the school had used the computer room in the last three years. When the self-reported efficacy of learners in schools that use their ICT resources was compared to that in the four schools that did not, only small differences were found (see Figure 2). Only 1% of learners in schools that used computer laboratories reported low self-efficacy, whereas 9% of learners in schools that did not use their computer rooms had low self-efficacy. The self-efficacy scale was generated by the IEA and based on items which asked learners to rate whether they were good at using computers, good at typing, and whether it was easy for them to find information on the internet (Mullis et al., 2017a). Multicollinearity was not problematic, as the correlation between computer self-efficacy and being in a school using the computer room was small (r = .11; p = .20).
Figure 2: Self-efficacy reported in schools where computer labs were used versus those where they were not
To gain insight into factors that predict paper-based reading and online reading, a multiple regression model was built for the 277 learners who participated in ePIRLS (shown in Table 7). The results of ePIRLS (n = 277) cannot be generalised to any population because a representative sample was not drawn. The results of the 2016 ePIRLS multiple case study are intended to provide some insight into the predictors that are related to paper-based and online reading.
Discussion
When the influence of ICT availability, using the national Grade 4 samples, was examined, the availability of computers for learners' use in schools was not a significant predictor of paper-based reading literacy achievement in either of the two cycles. In the 2016 sample, classroom computers were significant (β = 35.59; SE = 15.02) but the 2011 model did not show the same significance. It should be acknowledged here that having access to resources, such as school libraries and computers or tablets, does not necessarily mean the resources will be used or that teachers have the required pedagogical knowledge to integrate these resources into teaching and learning (Mathevula & Uwizeyimana, 2014;Paton-Ash & Wilmot, 2015). The strongest significant predictor was attending an African language school or attending a non-African language school, predicting 76.70 (SE = 11.72) score points in the 2011 cycle and 43.15 (SE = 11.23) score points in the 2016 cycle. Attending an Afrikaans or English LoLT school could increase reading literacy achievement by more than a year of schooling (half a standard deviation) when other factors are fixed.
Gender was the second strongest predictor, with girls achieving as much as 33.62 (SE = 2.37) score points more than boys in 2011, and in 2016 the difference increased to 50.30 (SE = 2.48) score points (half a standard deviation). Globally, the reading literacy disparity between boys and girls in early grades has been documented for decades (Brozo, Sulkunen, Shiel, Garbe, Pandian & Valtin, 2014;Marinak & Gambrell, 2010;Zuze & Reddy, 2014). The increasing gender gap was also noted in the international report for two countries, South Africa and Saudi Arabia, which have the largest disparities between boys and girls in PIRLS (Mullis et al., 2017b). School location was also a significant predictor and living in a deep rural area or township could mean that learners achieved as much as 19.80 (SE = 6.52) score points less than those in urban areas in the 2011 study and 15.43 (SE = 4.94) score points less in the 2016 cycle. Having a school library and having a classroom library did not significantly predict paper-based reading achievement in either cycle when other predictors were fixed. Access to ICT for teaching and learning in South Africa is still associated with the SES of both schools and learners (Howie, 2010;Meyer & Gent, 2016;Sithole, Moses, Davids, Parker, Rumbelow, Molotja & Labadarios, 2013), and ICT uptake in schools remains lower than targets set by educational departments (Padayachee, 2017).
The 2011 and 2016 models explained 15-16% of the variance in reading achievement once school variances were deducted (PEV). For the small percentage of schools where computers or tablets were available in classrooms (22% in 2011 and 8% in 2016), regular use of these resources was not a significant predictor of (paper-based) reading literacy achievement. The exception was using computers to regularly look up information on the internet in the 2011 results. However, the regular use of computers to look up information in the 2011 cycle predicted a large, significant decrease in score points (β = -73.56; SE = 27.02). This result may be due to the fact that the majority (71%) of teachers said that they used the computers less than once a month, or it may be a spurious finding. Using the computers to read stories or other texts and to write stories or texts weekly did not significantly predict increased reading literacy achievement. Teacher responses to PIRLS questionnaire items can be excessively positive, and an overreporting of activities has been found in secondary analyses of data when followed up with case studies (Van Staden & Zimmerman, 2017; Zimmerman, 2010). Therefore, no conclusions can be drawn from the fact that the reported regular use of ICT did not predict increased scores. The ICT usage models explained 23-24% of the variance in reading literacy achievement when controlling for between-school variance. Both of the multilevel models (Table 4 and Table 5) show that demographic predictors, such as gender, socioeconomic composition of the school, school location, and language of instruction, are large, significant predictors of reading achievement. Issues of language as well as a lack of economic empowerment continue to be major influences on learning (Van der Berg, Spaull, Wills, Gustafsson & Kotzé, 2016).
In the ePIRLS linear regression model, whether schools used their ICT resources was the largest significant predictor of both paper-based and online reading achievement. Due to the small sample size and the large amount of missing data in the questionnaire (more than 50%), contextual variables such as school SES composition could not be included in the model. Consequently, schools that use their computer laboratories may be the schools that are more functional and have learners from more advantaged backgrounds, implying general economic and social advantages which are not accounted for in the model. Only English language schools in urban areas were used in the multiple case study, further limiting conclusions and comparisons, which is why models were built for the national sample. The ePIRLS multiple case study regression models explained 19% of the variance in paper-based reading and 21% of the variance in online reading. If a more complex and representative sample could have been drawn, more demographic variables and questionnaire items could have been used to strengthen the models.
Conclusion and Implications
In each subsequent cycle of PIRLS, South African principals and teachers reported less access to ICT resources in schools and classrooms than had been reported in the previous cycle. In 2011 the principals of 55% of Grade 4 learners said that school computers were available for learning. By 2016 the principals of 44% of Grade 4 learners reported ICT availability, a significant reduction. In 2011, teachers of 22% of the learners said that classroom computers were present; by 2016 the proportion of teachers reporting classroom computers (or tablets) had been reduced to 8%. When taking into consideration that many South African schools in rural areas still lack basic infrastructure, such as flushing toilets or electricity, the obstacles to providing ICT capacity to schools are understandable (Fisher, J, Bushko & White, 2017). Overcrowded classrooms, curriculum overload, and teachers not being equipped to use ICT are further reasons cited for the slow implementation of ICT in schools (Fisher, J et al., 2017; Mathevula & Uwizeyimana, 2014; Padayachee, 2017).
The PIRLS main study was designed to be representative of the Grade 4 population in South Africa, and conclusions can be drawn that ICT availability in primary schools may be declining nationally. The world is becoming increasingly digitised, but South African schools, especially those in disadvantaged communities, are experiencing a decrease in access to ICT resources. There may be a large portion of South African learners who complete their schooling with limited exposure to computers and limited opportunity to gain online reading literacy skills.
The same decline in resources was observed regarding paper-based reading material despite the importance of school and classroom libraries in promoting reading literacy skills (Howie & Chamberlain, 2017). There was a slight reduction in school libraries and a large reduction in classroom libraries/reading corners being reported. Classroom libraries were reported for 70% of Grade 4 learners in 2011 and this dropped to 54% in 2016. When dichotomising the sample according to African language schools and non-African language schools, the digital divide is larger. African language schools are significantly (p < 0.01) less likely to have school computers (51% in 2011 and 33% in 2016) when compared to Afrikaans and English schools (76% in 2011 and 61% in 2016). The same pattern holds for paper-based resources; a third of African language schools reported school libraries in both cycles, whereas two-thirds of English and Afrikaans schools reported having a school library. The gap in digital access in the country is evident, not only by learners' SES, but also by school language. We found that the school's LoLT from Grade 1 to 3 predicts the reading performance of Grade 4 learners. After more than twenty-four years of democracy and promotion of a multilingual education system, the language of learning still predicts limited access to libraries, reading material, and ICT facilities in African language schools.
Despite a reported decline in electronic and paper resources for learning and teaching, neither ICT availability nor books are significant predictors in the models, once other contextual factors are considered. Significant predictors of Grade 4 reading literacy included LoLT of the school, school location (rurality), learners' socioeconomic background, and gender. Even when classrooms did have computers or tablets, regular use did not predict increased scores. This paper is based on the argument that 21st century online literacy reading skills are crucial for modern day readers and the demands learners will face in higher education and the world of work (Breytenbach, 2013;Maneschijn, Botha & Van Biljon, 2013;OECD, 2015b). South African ePIRLS scores show that paper-based reading highly correlates to online reading and strengthening the former could develop the latter. But results from this paper show that South Africa still faces significant challenges in terms of developing both paper-based and online reading skills due to issues of poverty, historical disadvantage, and the gender gap. A reading literacy culture is unlikely to develop when vulnerable populations continue to face issues of basic survival. Further qualitative research is required to understand how reading literacy, both on paper and digitally, can be supported and developed in a decolonised context.
The problems experienced with implementing ePIRLS point to inadequate monitoring by departments in the South African education system, despite the published ICT policies and set goals. ePIRLS highlights the fact that both ICT and paper-based reading skills are not being taught to the most vulnerable populations and that resource shortages and a lack of usage continue to plague the system. The reading crisis is one of social injustice that persists. PIRLS 2021 will be the last cycle with a paper-based option. There is an increased focus on digital literacy internationally and South Africa lags behind, to our detriment and to the disadvantage of learners and their future. Serious changes are required if South Africa wants to compete in the global market and give learners opportunities to contribute to the economy in the future. Policy implications that emerged from the current study are provided below as general guidelines for stakeholders to consider.
Challenges and Recommendations
Challenge 1: Low reading literacy skills both on paper and digitally. A lack of basic reading skills affects children in early grades and transmits into later grades. The problem has detrimental long-term consequences.
Recommendation 1: Strengthen the learning and teaching of reading literacy skills. Both paper and online reading should be a focus throughout schooling, but most importantly in the early grades. Reading literacy skills should be a priority in policy and practice.
Challenge 2: Insufficient school monitoring of ICT capacity and use. Inaccurate databases of schools and their ICT capacity, quality, and use. This could be linked to insufficient monitoring of school functioning in general.
Recommendation 2: Update and maintain the database of school ICT resources. Schools could provide the information, but district or departmental confirmation would be required for accuracy.
Challenge 3: African language schools have significantly less access to both paper and digital reading resources. African language schools report significantly fewer school and classroom libraries as well as a lack of ICT capacity. This is associated with greater poverty and lower reading literacy achievement.
Recommendation 3: Focus on supporting reading literacy in African language schools. Due to historical disadvantage and colonisation, African language schools specifically require both resources (books and digital media) and support to use the resources.
Challenge 4: A lack of classroom and school integration of ICT resources. This study shows that even when ICT resources were available, teachers and schools did not always integrate the resources into teaching and learning.
Recommendation 4: Provide pedagogical support in addition to ICT resources to maximise integration. Merely providing ICT resources would not be sufficient; teachers and schools need training on the use of the resources and their role in teaching reading literacy skills.
Challenge 5: Insufficient maintenance of ICT resources. Half of the schools in the ePIRLS study had some ICT capacity but had not used their resources in the last three years. The main reasons included outdated, insufficient, and/or non-functional equipment.
Recommendation 5: Provide ICT resources with maintenance funding and technical support. When ICT resources are provided to schools, plans and funds should also be in place to maintain equipment and update software.
| 8,615.6 | 2019-12-31T00:00:00.000 | [
"Education",
"Computer Science"
] |
Effect of Synthesis Methods on the Properties of Magnetic Material
The Ba1-xPbxFe12O19 composition (x = 0.0 to 1.0) was synthesized by the co-precipitation and sol-gel methods. In the co-precipitation method, BaCO3, PbO and Fe(NO3)3·9H2O were used as the basic ingredients, with acids and deionized H2O as solvents. The molar ratio of cations was 12 and the pH of the solution was kept constant at 13. All samples were sintered at 965±5 °C for three hours. Lead's intrinsic properties, the room-temperature synthesis, and substitution into the R-block of the structure caused the phase purity to decrease with increasing x, down to 70% for x = 1.0. The decrease in phase purity and the heterogeneity of the material caused the properties to deteriorate. In the sol-gel method, nitrates (salts) and ethylene glycol (liquid) were the basic materials used. The mixed solutions were dried on a hot plate whose temperature was maintained constant at 200±2 °C. Pellets were formed by applying suitable hydraulic pressure and then sintered at the same temperature as above, i.e. 965±5 °C, for three hours. 100% phase purity was achieved and all properties improved. Temperature- and frequency-dependent electrical properties were investigated and are reported here. The DC and AC properties obtained are useful for different electronic and computer devices such as capacitors, smart storage devices and multilayer chip inductors. Overall, both of these properties improved with the sol-gel method as compared to the co-precipitation method, because of the improvement in phase purity and the change in morphology of the synthesized material.
Introduction
Ferrites, being magnetic materials, are very important for today's scientific and technological development. Their classification depends upon composition and structure, which differentiate their applications. Hexagonal ferrites, also known as hard ferrites, form a large family of their own. U, W, X, Y, Z and M are its family members [1]. The M-type is very attractive to researchers for the advancement of technology and for device fabrication. This is because of the easy availability of raw materials, the ease of synthesis, and its excellent chemical and corrosion stability and durability [2,3]. The M-type family possesses a complex structure consisting of SRS*R* blocks. Ba2+, Fe3+ and O2- are involved in the formation of Ba hexaferrite. The reactant mixture must self-assemble into S and R blocks or structures at a suitable temperature and time. The reaction conditions that favor the formation of γ-Fe2O3 and Fe3O4 during synthesis would also encourage barium ferrite formation. Thus the reaction kinetics of these compounds, along with the other components, can be seen in the M-type structure [4].
In order to improve the properties of M-type materials, particularly Ba-hexaferrite, researchers have applied different methodologies and techniques. In these techniques, the first aim is to optimize the synthesis parameters and obtain phase purity, especially when substitutions are introduced. The physical properties of the dopant, such as ionic radius, site preference, mobility, and valence, are important. Better control of the synthesis parameters improves the phase purity and morphology, which in turn improves the properties. The methods and techniques applied determine the structural and morphological growth of the material, and as a result the purity and properties improve. In all synthesis methods, the reaction kinetics provide the environment for the respective elements (ions) to use energy for their movements, occupy their respective crystallographic positions in the structure, and then minimize their energy. This mechanism is the crystal growth process, through which a particular crystal structure (phase) is completed. The properties of any material are strongly affected by the cation distribution, the microstructure developed, and the phase obtained [5][6][7].
In this paper the two synthesis methods are compared with respect to phase purity, microstructure and morphology, since changes in these factors change the properties. The Ba1-xPbxFe12O19 (x = 0.0 to 1.0) composition was synthesized by the co-precipitation and sol-gel methods, and a comparative analysis of the obtained results is presented. Both are low-temperature, low-cost synthesis methods that are easy to manage and give an almost uniform cation distribution, which are their main advantages [8].
1-Experimental Procedure
In the co-precipitation method, BaCO3, Fe(NO3)3·9H2O and PbO were used as basic ingredients. Barium carbonate was dissolved in concentrated HNO3 and PbO in concentrated HCl; NaOH was used as the precipitating agent. The cation molar ratio of iron to barium was 12. Solid Fe(NO3)3·9H2O and NaOH were dissolved in de-ionized water. All ingredients were stoichiometrically calculated and weighed before use. The three independent solutions thus formed were mixed together at room temperature. For ferrite precipitation, a 5 M NaOH solution was added: about 97% of it was added at once, while the remainder was used to bring the pH to 13 at the end. The obtained solutions were stirred for half an hour, which was necessary for a uniform cation distribution, minimization of impurities and better homogeneity. This co-precipitation synthesis was completed at room temperature. For identification, the samples were named Ao (x = 0.0), A1 (x = 0.2), A2 (x = 0.4), A3 (x = 0.6), A4 (x = 0.8) and A5 (x = 1.0).
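As an illustration of the stoichiometric calculation mentioned above, the short Python sketch below estimates the precursor masses needed for a given batch of Ba1-xPbxFe12O19. The molar masses are nominal handbook values and the batch size (0.01 mol of product) is an arbitrary example, not a quantity taken from this work.

```python
# Illustrative stoichiometry for Ba(1-x)Pb(x)Fe12O19 precursors (co-precipitation route).
# Molar masses (g/mol) are nominal handbook values; batch size and x values are examples.
M_BaCO3 = 197.34          # BaCO3
M_PbO = 223.20            # PbO
M_FeNO3_9H2O = 404.00     # Fe(NO3)3.9H2O

def precursor_masses(x, moles_product=0.01):
    """Return grams of each precursor for `moles_product` mol of Ba(1-x)Pb(x)Fe12O19."""
    return {
        "BaCO3": (1.0 - x) * moles_product * M_BaCO3,
        "PbO": x * moles_product * M_PbO,
        "Fe(NO3)3.9H2O": 12.0 * moles_product * M_FeNO3_9H2O,  # Fe : (Ba + Pb) = 12
    }

for x in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(x, {k: round(v, 2) for k, v in precursor_masses(x).items()})
```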
All precipitates were washed with de-ionized water under vigorous stirring to minimize impurities and improve homogenization. After washing, the paste-like material was placed in an oven at 110±2 °C overnight. The dried material was ground into powder, which was then pressed into pellets under a hydraulic pressure of 1000 lbs/inch² for five minutes. The dry pellets were sintered in a furnace for three hours at 965±5 °C and used for the different characterizations.
To synthesize the same composition by the sol-gel method, the basic precursors were the nitrates of barium, lead and iron, together with ethylene glycol. The stoichiometrically calculated materials were dissolved and magnetically stirred in 200 ml of ethylene glycol, a quantity that was also optimized. The mixed solutions (x = 0.0 to 1.0) were stirred and heated on a hot plate whose temperature was kept constant at 200±2 °C. On heating, the liquid contents started to evaporate and a gel-like material formed, which finally burnt, leaving a dried material.
In the sol-gel method, vigorous stirring and heating of the liquid medium help the different metallic components to mix rapidly at the atomic level. This mixing mechanism provides the energy for the different ions to move, occupy their respective positions and minimize their energies within the phase, so the purity improves [9]. Pellets were then formed under a hydraulic pressure of 1000 lbs/inch² for five minutes, the same pressure as used in the co-precipitation method. The dried pellets were placed in a furnace at 965±5 °C for three hours, after which the synthesized Ba1-xPbxFe12O19 was ready for the different characterizations.
Comparing the two synthesis methods, sol-gel appeared better and easier than co-precipitation and is well suited to industrial production. Sol-gel was easier to manage because no extra parameters such as molar ratio, molarity, pH and washing were required. To differentiate them from the other samples, these were named AAo, AA1, AA2, AA3, AA4 and AA5 for Pb = 0.0, 0.2, 0.4, 0.6, 0.8 and 1.0, respectively. All chemicals used were of analytical grade (99.99% pure) from Sigma-Aldrich, Merck and Fluka.
2-Structural Analysis
Both compositions have the same nature and differ only in the synthesis methodology and the techniques applied. The sintering temperature and time were the same, all characterization tools were the same, and the measurements were recorded under the same conditions and procedures.
2.1-XRD Analysis
Samples synthesized by the co-precipitation and sol-gel methods were scanned for five minutes from 2θ = 0° up to 80° and 70°, respectively, using a Cu Kα radiation source with a wavelength of 1.5406 Å.
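For readers who wish to reproduce the peak indexing, the sketch below converts 2θ peak positions into d-spacings via Bragg's law using the Cu Kα wavelength quoted above; the listed peak positions are placeholders, not the measured ones.

```python
import numpy as np

# Convert 2-theta peak positions (degrees) to d-spacings (Angstrom)
# via Bragg's law: lambda = 2 d sin(theta). The peak list is illustrative only.
wavelength = 1.5406  # Cu K-alpha, Angstrom

def d_spacing(two_theta_deg):
    theta = np.radians(np.asarray(two_theta_deg, dtype=float) / 2.0)
    return wavelength / (2.0 * np.sin(theta))

print(d_spacing([30.3, 32.2, 34.1]))  # example peaks, not measured data
```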
Comparing the two indexed XRD patterns (figures 1.1 and 1.3), it was noted that in the co-precipitation method impurity phases started to develop as the lead dopant was introduced and continued to increase up to the highest doping level, whereas in the sol-gel pattern no such phases were detected. The reason is simple: in co-precipitation the synthesis took place at room temperature, while in the sol-gel method it was completed at 200±2 °C. At room temperature the different reactions were unable to complete, and the stresses and distortions caused by substituting lead in place of barium initiated impurity phases such as PbFe2O2.67, Ba3Fe26O41 and PbFe2O4. These stresses and distortions arose because the ionic radius of Pb2+ (1.76 Å) is larger than that of Ba2+ (1.37 Å) [10]. Other reasons were the low melting point of Pb (950 °C) and its fast crystallization.
Figure 1.3: Indexed XRD pattern of Ba1-xPbxFe12O19 (sol-gel method).
In the case of sol-gel, the ingredients and techniques applied were different. Heating was the key factor differentiating it from co-precipitation, and the results improved accordingly. The heating required in the sol-gel method provided the energy which, together with the small energy of the stresses and distortions, was sufficient for individual ions/atoms to move and occupy their respective positions inside the structure. Once the structure was complete, the energy was minimized within the phase, so the chances of initiating impurity phases were also minimized. This was possible because of strict control of the applied conditions such as stirring and time. The indexed XRD pattern did not show any impurity peaks, as seen in figure 1.3. The peaks were matched with JCPDS cards: 00-084-0757 and 00-027-1029 for the x = 1.0 sample (PbFe12O19), and 00-041-1373 and 00-017-0660 for the other compositions.
Comparing the two indexed XRD patterns of the same composition showed that the reaction kinetics and energy minimization were largely completed within the phase in the sol-gel method. This was not possible in the co-precipitation method because those samples were prepared at room temperature. Heating was the key differentiating factor, because the completion of the different reactions and the onset of the S and R blocks in the SRS*R* structure differ, particularly where there is a 180° rotation in this structure [11,12]. Hence a structural transformation from Ba hexaferrite to Pb hexaferrite was obtained.
2.2-Complete Transformation
Comparison of the indexed XRD patterns of Ba1-xPbxFe12O19 (x = 0.0 to 1.0) from the two methods gave different results in terms of phase purity. The XRD measurements did not indicate any impurity phase in the sol-gel patterns, whereas impurity peaks appeared in the co-precipitation case. In these compositions Pb replaces Ba in the R-blocks of the compound. By applying the modified and optimized synthesis techniques the purity improved: in the co-precipitation method the purity achieved (70%) was better than previously reported for the same method [7], while with sol-gel almost 100% purity was obtained, so a complete transformation from BaFe12O19 to PbFe12O19 was achieved.
2.3-Lead Preference Occupancy
Thongmee et al [13] reported that every dopant has its own preferred occupancy sites. In the present study the Pb2+ ion, having a larger ionic radius, prefers the octahedral sites (12k, 2a and 2b) of the hexagonal structure, which are slightly larger than the tetrahedral sites (4f1 and 4f2) [14,15]. In the Ba1-xPbxFe12O19 (x = 0.0 to 1.0) composition, Ba occupies sites in the R and R* blocks. Physical properties of the dopant, such as its ionic radius, low melting point and higher mobility than Ba2+, are also important besides the site preference (see figure 4.1). Up to a certain value of x the dopant is most likely to occupy its preferred site; beyond that it may go to other sites, such as the tetrahedral ones. These movements or distortions were responsible for abrupt changes in the phase development and properties of the final material [16][17][18].
2.4-Structural growth analysis
Besides the fast crystallization process, the synthesis methodology and the physical properties of lead were important in determining the variations of the different structural parameters. In the co-precipitation method synthesis took place at room temperature, whereas continuous heating at 200±2 °C was a key factor in the sol-gel synthesis. Structural variations were observed with both methods and played their role in defining the properties.
It has been reported in the literature [19,20] that Pb2+ ions, being volatile and highly mobile, tend to move randomly in different directions. As a result, the grain growth behaves differently in the two methods: in the sol-gel case the introduction of the lead dopant initially modified the growth mechanism, whereas in co-precipitation the grain growth initially increased. Lead as a dopant therefore played different roles in the grain-growth mechanism. Because of its volatile nature the dopant may also enter the S-block, and its diffusion at the boundaries of the R and S blocks likewise differs between the methods. Thus the volatility and mobility of lead themselves affected the growth mechanism in both methods, as the graphs below and the SEM micrographs illustrate.
2.5-Crystallite size
It has been reported [7,21] that oxide dopants such as PbO, Al2O3 and SiO2 are responsible for a non-uniform trend in the crystallite size D. In the co-precipitation method, lead substitution up to x = 0.6 increased D from 38 nm to 59 nm: the heat generated locally by a sufficient quantity of dopant in a small area collectively helped the crystallites to grow. Beyond x = 0.6 this trend changed. The increased quantity of the volatile dopant now moved freely into different regions, such as the R and (possibly) S blocks, so the dopant spread over larger areas restricted the growth (59 nm to 33 nm), although it modified the dimensional growth, as shown by the SEM micrographs (figure 3.1). In the sol-gel case, the heat from the 200±2 °C processing temperature and the energy associated with the dopant-induced stresses and distortions combined; this combined energy travelled randomly in different directions, so there was not enough energy to increase D. In other words, the energy diffused into sites such as voids or dislocation areas and was absorbed there, causing D to fall from 47 nm to 23 nm. However, when the dopant concentration increased beyond x = 0.6, the combined energy was sufficient to enhance the crystallite size (23 nm to 58 nm). Scherrer's formula was applied to calculate the crystallite size. The small variations in D arise from the volatile nature and mobility of the dopant ions.
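A minimal sketch of the Scherrer calculation used for the crystallite size D is given below; the shape factor K = 0.9 and the example peak position and width are assumptions for illustration, not values taken from the measured patterns.

```python
import numpy as np

# Scherrer estimate of crystallite size D from the FWHM (beta) of an XRD peak:
# D = K * lambda / (beta * cos(theta)). K and the example peak values are assumptions.
def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5406, K=0.9):
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)                       # FWHM in radians
    return K * wavelength / (beta * np.cos(theta))    # size in Angstrom

print(scherrer_size(34.1, 0.25) / 10.0, "nm")         # ~33 nm, i.e. the right order of magnitude
```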
In the co-precipitation method D increased up to x = 0.6 and then decreased from x = 0.6 to 1.0, whereas in the sol-gel method it decreased from x = 0.0 to 0.6 and then followed the opposite trend. Such behaviour for oxide dopants has also been reported in the literature [7]. Being volatile, the dopant may accumulate at sites where it cannot enhance the grain growth; this mechanism changes when the dopant changes its occupancy inside the structure, and the grain growth changes accordingly. The crystallite sizes obtained by both methods lie within the 50 nm range that is useful for magnetic recording media applications [22][23][24].
The higher density of lead (11.34 g/cm³) also contributed to these variations.
2.6-Lattice parameters and Volume
Because of the volatile nature and larger ionic radius of the dopant, all the structural parameters varied. The physical properties of the dopant forced it to move and occupy different sites or directions [25] in non-uniform quantities, since the dopant replaces Ba2+ ions in the R and R* blocks of the structure (related by a 180° rotation) and also acts differently at the boundaries of the S and R blocks. All these factors caused the lattice parameters a and c to vary. Stresses and distortions were further important parameters with an effective role, as reported by Teh and Jefferson [26]. The change in lattice parameters was also responsible for the change in the unit-cell size; a similar phenomenon was observed and reported by Mingquan Liu et al [27].
In the co-precipitation method the lattice parameter a decreased more (5.92 Å to 5.82 Å) than in sol-gel (5.91 Å to 5.89 Å), while c adjusted itself, which is important for defining different properties, particularly the magnetic ones. Here again the variation was smaller and smoother for sol-gel than for co-precipitation; the graphs of lattice parameters and volume illustrate this mechanism in more detail. Comparing the two synthesis methodologies, it was observed that because of lead's particular properties, such as its change of preferred occupancy and volatile character, the lattice parameters adjusted themselves and the unit-cell volume decreased. This slows mass transport within the particles during the crystal-growth process and keeps the grains small, i.e. the grain-growth mechanism is restricted, as discussed in [7] and supported by the crystallite-size graphs (figure 2.1). It also makes the structure more compact, as confirmed by the reduction in porosity from 58% (x = 0.0) to 25% (x = 1.0) for sol-gel and from 46% (x = 0.0) to 15% (x = 1.0) for co-precipitation.
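The following sketch illustrates how the hexagonal lattice parameters a and c, the unit-cell volume, the X-ray density and the porosity quoted above can be computed from two indexed reflections; the d-spacings, (hkl) indices, molar mass and bulk density used here are placeholders chosen only to give values of the right order of magnitude for an M-type hexaferrite, not values taken from this work.

```python
import numpy as np

# Hexagonal lattice parameters from two indexed reflections, plus cell volume,
# X-ray density and porosity. Input d-spacings, molar mass and bulk density are
# placeholders; Z = 2 formula units per M-type unit cell.
def hexagonal_a_c(d1, hkl1, d2, hkl2):
    """Solve 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2 for a and c."""
    def coeffs(hkl):
        h, k, l = hkl
        return (4.0 / 3.0) * (h * h + h * k + k * k), float(l * l)
    A = np.array([coeffs(hkl1), coeffs(hkl2)])
    b = np.array([1.0 / d1**2, 1.0 / d2**2])
    inv_a2, inv_c2 = np.linalg.solve(A, b)
    return 1.0 / np.sqrt(inv_a2), 1.0 / np.sqrt(inv_c2)

a, c = hexagonal_a_c(2.78, (1, 0, 7), 2.63, (1, 1, 4))   # example d and (hkl) pairs
V = (np.sqrt(3.0) / 2.0) * a**2 * c                       # cell volume, Angstrom^3
rho_x = 2 * 1111.5 / (6.022e23 * V * 1e-24)               # g/cm^3; ~1111.5 g/mol for BaFe12O19
porosity = 1.0 - 3.2 / rho_x                              # assumed bulk density of 3.2 g/cm^3
print(round(a, 3), round(c, 3), round(V, 1), round(rho_x, 2), round(porosity, 2))
```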
3-SEM Analysis
SEM was used to analyse the surface morphology and dimensional growth of the compositions synthesized by the two low-temperature methods. Both sets of samples were sintered at the same temperature for three hours, and the XRD patterns showed that 70% and 100% phase purity were achieved, respectively. Lead substitution not only enhanced the crystallization process but also modified the dimensional growth, as the micrographs clearly show. The best hexagonal structures were seen in the co-precipitation samples: from sample Ao to A4 the hexagonal grains reshaped themselves as the lead content increased, but the growth was restricted at higher concentrations. A different morphological and dimensional growth was observed in sol-gel, with sharp-edged, platelet-like structures in co-precipitation compared with petal- and paper-like structures in sol-gel. This morphology played an important role in the analysis of the different properties, and the estimated particle sizes and their distributions are useful for magnetic recording media applications [24]. The Pb dopant increased c and decreased a, which assisted the above growth transformation. Platelet-like structures are very important in defining magnetic properties.
4-Electrical Characterizations
The temperature-dependent DC and frequency-dependent AC properties of Ba1-xPbxFe12O19 are reported here. Given the obtained phase purity and the variations in structural parameters and morphology, it was important to study these characteristics. Before discussing the electrical properties of the doped compositions, figure 4.1 illustrates the octahedral and tetrahedral sites of the complex hexagonal structure [28], which are important for analysing these properties. In all ferrites, the conduction and polarization phenomena arise from electron exchange between Fe2+ and Fe3+ ions on the octahedral and tetrahedral sites [29,30].
4.1-DC Properties
The temperature-dependent DC properties were investigated by the two-probe method, since ferrites are highly resistive materials. To analyse the different DC parameters, the DC current (Idc) was measured as a function of temperature from 300 K to 733 K for all co-precipitation samples and from 328 K to 748 K for the sol-gel samples. Parameters such as resistivity (ρ), activation energy (ΔE), drift mobility (μd) and charge-carrier concentration (n) were measured or calculated, and the effect of the dopant with respect to its concentration and site occupancy was studied and is reported.
4.1.1-Resistivity, Mobility and Activation Energy
As discussed in the literature [31,32], the Fe3+ ions in M-type ferrites occupy octahedral and tetrahedral sites: the 12k, 2a and 2b sites all have spins in one direction (up), while the two tetrahedral sites 4f1 and 4f2 have opposite spins (down). The resistivity of ferrites depends on the Fe2+ ions on the octahedral sites, which are slightly larger than the tetrahedral sites [31,33,34]. According to the Verwey hopping model, the change in charge-carrier concentration with temperature influences the hopping of charge carriers from one octahedral site to the next [35], i.e. the conduction. Conduction is also affected by the doping concentration, phase purity, heterogeneity and morphology of the synthesized material.
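The activation energies discussed below are typically extracted from an Arrhenius fit of the resistivity, ρ = ρ0 exp(ΔE / kB T); the sketch below shows such a fit on synthetic data (the temperatures, resistivities and the 0.45 eV value are placeholders, not measured results from this work).

```python
import numpy as np

# Activation energy from an Arrhenius fit: rho = rho0 * exp(dE / (kB * T)).
# Temperatures and resistivities below are synthetic placeholders.
kB = 8.617e-5                                             # Boltzmann constant, eV/K

T = np.array([300.0, 400.0, 500.0, 600.0, 700.0])         # K
rho = 1e4 * np.exp(0.45 / (kB * T))                       # ohm.cm, generated with dE = 0.45 eV

slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)    # ln(rho) vs 1/T is linear
dE = slope * kB
print(f"activation energy ~ {dE:.2f} eV")                 # recovers the 0.45 eV used above
```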
The variations in the graphs of figure 4.2(a) and (b) show that factors such as substitution in the R-block, site occupancy, variation of x, phase purity, and the amount and morphology of the grain boundaries played different roles in the two methods. The variations of these parameters were small in the sol-gel method because of the phase purity and the nearly uniform grain dimensions, whereas all these factors acted differently in the co-precipitation samples.
Figure 4.3: Trend of resistivity as a function of Pb concentration (x) for the co-precipitation and sol-gel methods.
The increasing trend in resistivity appeared smoother and more pronounced in the sol-gel method than in co-precipitation. The movement of Pb2+ ions within the structure formed pockets or zones that acted as trapping centres for the hopping charges. Because of these centres and the changes at the grain boundaries, the energy required for charge hopping also increased [36]. In ferrites, charge carriers must acquire this energy to escape from a trapping centre and resume hopping.
Figure 4.4: Activation energy as a function of Pb concentration (x).
This energy is referred to as the activation energy. The increase in resistivity was confirmed by the increase in activation energy shown above; the energy increased in both cases, confirming that Pb2+ substitution was responsible for the increases in resistivity and activation energy. Researchers attribute this phenomenon to many-body effects [37]. The factors discussed above were also responsible for the decrease in mobility, as follows from the relation μd = 1/(neρ). The two graphs below show the variation in mobility caused by dislocations; the mobility of the lead ions themselves is non-uniform inside the structure. An increase in mobility means that some Pb2+ ions have moved to tetrahedral sites, or that some Fe2+ ions were forced to migrate from tetrahedral to octahedral sites, which increased the mobility from x = 0.2 to 0.6 [38]; in other words, the conduction increased there. From this discussion it is confirmed that lead substitution affects the resistivity more strongly in co-precipitation than in sol-gel. Klinger [39] reported that if the activation energy is greater than 0.4 eV the conduction is by polaron hopping, and below this value by electron hopping; in the present compositions polaron hopping is therefore dominant.
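A short sketch of the drift-mobility estimate implied by the relation μd = 1/(neρ) quoted above is given below; the carrier concentration n is taken here as the Fe-ion density per unit volume, and the density, molar mass and resistivity values are illustrative placeholders rather than measured data.

```python
# Drift mobility from mu_d = 1 / (n * e * rho), with n estimated from the iron
# content per unit volume. Density, molar mass and resistivity are placeholders.
e = 1.602e-19                            # elementary charge, C
N_A = 6.022e23                           # Avogadro's number, 1/mol

density = 5.0                            # g/cm^3 (assumed bulk density)
molar_mass = 1111.5                      # g/mol, nominal for BaFe12O19
n = N_A * density * 12 / molar_mass      # Fe ions per cm^3 (12 Fe per formula unit)

rho = 1e8                                # ohm.cm (placeholder resistivity)
mu_d = 1.0 / (n * e * rho)
print(f"n ~ {n:.2e} cm^-3, mu_d ~ {mu_d:.2e} cm^2/(V.s)")
```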
5-AC Characterizations
The rapid development of communication technology has generated another kind of environmental pollution, namely radiation hazards, associated with electromagnetic interference (EMI) and electromagnetic compatibility (EMC). High dielectric and magnetic losses are the key parameters for suppressing these unwanted signals, and they can be achieved by selecting a suitable dopant in a suitable quantity and an appropriate sintering temperature [40]. In the studied composition, lead was used to modify the dielectric properties, making the material suitable for such applications to some extent, while low losses remain acceptable for other types of application. For the present compositions, the capacitance (C) and dissipation factor (D) were measured as a function of frequency from 20 Hz to 3 MHz using a Wayne Kerr 6440B precision component analyser, and the appropriate formulas were used to calculate the dielectric constant (ε) and dielectric loss (tan δ).
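The "appropriate formulas" referred to above are usually the parallel-plate relations ε′ = C·t / (ε0·A) and tan δ = D; the sketch below applies them to placeholder pellet dimensions and capacitance values, which are assumptions for illustration only.

```python
import numpy as np

# Dielectric constant of a pellet treated as a parallel-plate capacitor:
# eps_r = C * t / (eps0 * A). Pellet dimensions and capacitance are placeholders;
# the dissipation factor D read from the analyser is taken directly as tan(delta).
eps0 = 8.854e-12                          # vacuum permittivity, F/m
thickness = 1.5e-3                        # m (assumed pellet thickness)
diameter = 13e-3                          # m (assumed pellet diameter)
area = np.pi * (diameter / 2.0) ** 2      # electrode area, m^2

def dielectric_constant(C_farad):
    return C_farad * thickness / (eps0 * area)

print(dielectric_constant(20e-12))        # e.g. 20 pF -> eps_r of roughly 25
```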
5.1-Dielectric constant
Figure 5.1 shows that the dielectric constant decreased with increasing frequency of the applied field in the studied compositions, the normal behaviour of all ferrites. According to the Maxwell-Wagner model, ferrites consist of conducting grains and non-conducting grain boundaries (a two-layer model), and the prepared Ba1-xPbxFe12O19 composition possesses such a heterogeneous structure. The electron exchange Fe2+ + Fe3+ ↔ Fe3+ + Fe2+ is the deciding factor for the magnitude of the dielectric constant and dielectric loss [40]. The dielectric constant and loss in ferrites depend on the space-charge polarization and the charge accumulation at the grain boundaries; temperature, dopant and dipole strength also played an effective role in the dielectric properties of the studied compositions.
Pb2+ substitution produced non-uniform variations in the polarization process, as shown by the graphs. The synthesized compositions consist of heterogeneous elements of different valences, which coordinate with different strengths with each other and with oxygen [41]. Lead substitution, impurity phases, dislocations, voids and the change of morphology, when interacting with an electric field of varying strength, strongly affect the polarization mechanism. Variation in the dopant concentration also affected the generation and hopping of Fe2+ ions on the octahedral sites, so the Fe3+/Fe2+ ratio within the composition was disturbed. All these phenomena were responsible for the irregular variation of the dielectric constant. The graphs of figure 5.1 show that the variation is more pronounced in co-precipitation than in sol-gel because of the factors discussed above. Some hopping electrons may be trapped by grain boundaries and potential wells, a situation referred to as a correlated state [42], so the decay of the polarization becomes slow and the dielectric constant decreases more gradually, as shown in the graphs. The heterogeneity of the composition, the phase purity and the grain morphology made this decaying trend vary in a non-uniform way, and at some frequencies the curves cross each other; this mixing is broadly similar among the co-precipitation samples and different for the sol-gel ones (e.g. Ao versus AAo). The high resistivity also suppressed the polarization and conduction mechanisms [40].
The graphs also indicate that the exchange phenomenon (Fe3+ ↔ Fe2+) does not follow the frequency of the applied field [43]. For example, the dielectric constant of sample A2 decreased in the middle of the frequency range and increased towards both ends, responding slowly to the applied field [44,45]. When a large number of Fe2+ ions hop, the polarization increases and, as a result, the dielectric constant and dielectric loss increase [40,46].
5.2-Dielectric loss tangent
The dielectric loss tangent (tan δ) describes the dissipation of energy in a dielectric material (related to the quality factor Q). In the present materials it arises from the following processes [47]: (1) polaron hopping and (2) electron hopping.
Dipoles form in ferrites when the applied electric field changes the ratio of the cation states (Fe3+/Fe2+), and the substitution of lead in place of barium is another effective parameter changing this ratio. The response changes with the applied frequency: at low frequency the dielectric loss is due to electron hopping and at high frequency to polaron hopping [47]. For A1 (x = 0.2) tan δ was 0.12 and 0.6, and for A4 (x = 0.8) it was 0.36 and 0.16, at 200 kHz and 3 MHz respectively, compared with the undoped composition Ao (x = 0.0), for which tan δ was 0.31 and 0.16. The dielectric loss increased slightly at higher frequency for A2, A3 and A5 because of the increased concentration of Pb2+ ions, the non-uniform and hindered hopping, and the rapid movement of Pb2+ ions between different crystallographic sites, which increased the impurity content and hence the loss. Low losses are useful for high-frequency applications, while higher losses are useful for controlling environmental pollution [34]. The dielectric loss of the same composition as a function of frequency for the sol-gel method is presented in the graph below and shows a different trend from co-precipitation: the loss increased more rapidly for the x = 0.0 and x = 0.2 samples than for the others because of the occurrence of correlated states in these samples [37]. The heterogeneity of the material, the higher mobility of the lead ions and the coordination of Fe3+ ions with O2- form dipoles of different strengths in different orientations. These dipoles and the dislocations within the structure provide trapping centres for hopping charges; when the trapped charges interact with an applied electric field of varying strength they develop oscillations of their own, at what is called the relaxation frequency [48]. The electron hopping between Fe2+ ↔ Fe3+ ions produces another relaxation frequency. These different relaxation frequencies (also described as characteristic resonance peaks), together with the space-charge polarization, produce the behaviour called a correlated state, as shown in the graphs [49], and the dielectric loss rises in such regions. For 0.0 doping the loss increased over the frequency range starting at 40 kHz, peaking at 800 kHz and falling to a minimum at 3 MHz; for 0.2 doping it started to rise at 5 kHz, peaked at 100 kHz and fell to a minimum at 3 MHz. At high frequency the grain dimensions, morphology and heterogeneity cause the hopping charges to pile up at the grain boundaries, and their response to the applied frequency differs markedly in the regions of the correlated states; because of these different mechanisms the chances of a resonance phenomenon are reduced, as shown in the graph.
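As a small aid to reading the loss spectra described above, the sketch below locates tan δ peaks (relaxation features) in a frequency sweep; the frequency grid and loss curve are synthetic, shaped only loosely like the described behaviour (a rise from tens of kHz and a maximum near 800 kHz), and are not measured data.

```python
import numpy as np
from scipy.signal import find_peaks

# Locate loss peaks (relaxation features) in a tan(delta) versus frequency sweep.
# The frequency grid and loss curve below are synthetic placeholders.
f = np.logspace(np.log10(20), np.log10(3e6), 400)                       # 20 Hz .. 3 MHz
tan_delta = 0.1 + 0.4 * np.exp(-((np.log10(f) - np.log10(8e5)) ** 2) / 0.05)

peaks, _ = find_peaks(tan_delta, prominence=0.1)
print("loss peak(s) near", f[peaks].round(0), "Hz")
```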
Since the prepared material also possesses magnetic properties, the motion of these peaks is also due to the motion of domain walls [50]. An increase in loss is a useful parameter for minimizing the environmental pollution due to electromagnetic interference (EMI) [34].
6-Conclusions
Ba1-xPbxFe12O19 with x = 0.0 to 1.0 was synthesized by the co-precipitation and sol-gel methods, and both sets of samples were sintered in a furnace for three hours at 965±5 °C. The structural and electrical (AC and DC) characteristics were investigated and compared. The composition synthesized by co-precipitation at room temperature produced an excellent hexagonal structure and grain distribution, but the phase purity dropped to 70%, which is nevertheless better than previously reported for the same synthesis method. When the same composition was synthesized by sol-gel, the phase purity was almost 100% with excellent morphology, and because of this improvement in phase purity and morphology the magnetic and electrical properties improved compared with co-precipitation. The sol-gel route used here should therefore be useful for industrial production of this material. The physical properties of the lead dopant, such as its larger ionic radius, did not produce impurities in sol-gel because of the heating at 200±2 °C. The estimated particle size, high resistivity and dielectric constant obtained are useful for different components in the electronics and computer industries, such as recording media, storage devices, capacitors and other appliances where high resistivity is required, while the high dielectric loss obtained is useful against today's communication radiation hazards.
Figure 4.2(b): Arrhenius plot for DC resistivity of Ba1-xPbxFe12O19 (sol-gel). The graphs below show an almost increasing trend for the different parameters. The resistivity increased because the non-magnetic Pb2+ ions, having a larger ionic radius, occupied octahedral sites; this occupancy made it difficult for Fe2+ ions to be generated and to hop. Other factors such as voids, impurity phases, dislocations and heterogeneity also restricted the conduction mechanism. All these parameters act as energy barriers to conduction, so the conduction decreased. The effect of the lead dopant on the different DC parameters is explained below.
Mobility (μd) as a function of Pb concentration (x) for the co-precipitation and sol-gel methods.