Measures of fragmentation of rest-activity patterns: mathematical properties and interpretability based on real-life accelerometer data

Accelerometers, devices that measure body movements, have become valuable tools for studying the fragmentation of rest-activity patterns, a core dimension of the circadian rhythm, using metrics such as inter-daily stability (IS), intradaily variability (IV), transition probability (TP), and the self-similarity parameter (named α). However, their use remains mainly empirical. We therefore investigated the mathematical properties and interpretability of rest-activity fragmentation metrics by providing mathematical proofs for the ranges of IS and IV, proposing maximum likelihood and Bayesian estimators for TP, introducing the activity balance index (ABI), an adaptation of α, and describing the distributions of these metrics in a real-life setting. Analysis of accelerometer data from 2,859 individuals (age 60-83 years, 21.1% women) from the Whitehall II cohort (UK) shows modest correlations between the metrics, except between ABI and α. Sociodemographic (age, sex, education, employment status) and clinical (body mass index (BMI), number of morbidities) factors were associated with these metrics, with differences observed across metrics. For example, a difference of 5 units in BMI was associated with all metrics, with standardised differences ranging from −0.261 (95% CI −0.302, −0.220) for TP rest to activity during the awake period to 0.228 (95% CI 0.18, 0.268) for TP activity to rest during the awake period. These results reinforce the value of these rest-activity fragmentation metrics in epidemiological and clinical studies for examining their role in health. This paper expands on a set of methods that have previously demonstrated empirical value, improves their theoretical foundation, and evaluates their empirical worth in a large dataset.

Introduction

A large number of human behaviours and physiological functions follow a circadian rhythmicity, for example sleep/wake cycles, body temperature, and hormone levels [1]. Circadian regulation of these processes is critical to maintaining homeostasis; prolonged disruptions are detrimental to health [2,3], highlighting the importance of precise, scalable measures of the human circadian rhythm (CR). Accelerometers, devices that record the acceleration of the body part to which they are attached, have emerged as valuable tools to measure dimensions of CR based on movements in free-living conditions [4,5].
An important dimension of CR is the fragmentation of rest-activity patterns over several consecutive days [6,7]. Over time, several metrics have been proposed to quantify rest-activity fragmentation using accelerometry data. The first and now most commonly used metrics are inter-daily stability (IS) and intradaily variability (IV). IS provides information on how constant the rest-activity pattern is between days, and IV quantifies the fragmentation of the activity pattern between hours over the observation period [6,8]. Later, the transition probability (TP) was proposed to measure the likelihood of transitioning from a state of rest to a state of activity, or vice versa [7,9]. In parallel, Detrended Fluctuation Analysis (DFA) [10], initially used in genomics, has been used to identify hidden patterns where activity fluctuations serve as a proxy for rest-activity fragmentation [11,12]. In DFA, the self-similarity parameter, also known as the scaling exponent or α, is a key metric for the description of time series, such as stationary and non-stationary time series, random noise and fractal noise, among others [13].

Although metrics of rest-activity fragmentation are increasingly used in the literature, the mathematical properties of these metrics and their interpretation have not been fully described. First, although the ranges IS ∈ [0, 1] and IV ∈ [0, 2] have been suggested [14], no proper mathematical proof is available, limiting confidence in interpretability, particularly for extreme values. Second, [7,9] have proposed different estimators of TP, both heuristic, which limits their mathematical properties compared with estimators based on maximum likelihood (ML) or Bayesian inference. In addition, the properties of these two estimators have not been compared. Third, the interpretation of DFA-derived metrics is not straightforward. Finally, to our knowledge, only one study has examined the correlation between IS, IV, TP (based on the definition in [7]) and DFA within a single sample, older adults living in residential facilities, limiting the generalisability of findings [7].

To overcome these limitations of the current evidence on rest-activity fragmentation metrics, the present study aims to 1) provide mathematical proofs of the ranges of IS and IV; 2) propose an ML estimator, the gold standard of estimation, and a Bayesian estimator with good properties for TP; 3) propose a new metric, a transformation of the DFA-derived self-similarity parameter, named the activity balance index (ABI), which reflects how balanced the activity is over several days; and 4) describe these metrics using data from the population-based Whitehall II accelerometer sub-study. All metrics discussed here may be computed using our free code available on GitHub, and they will be included in future versions of the GGIR R package.

Preliminary definitions

Rest-activity fragmentation metrics are calculated from different time series derived from raw acceleration signals (Table 1). These time series differ in epoch length (e.g., minute or hour) and in the outcome considered (acceleration, dichotomous state (rest/activity), or proportion of the epoch spent in a state). Some preliminary definitions of these time series follow.

Definition 1. For each individual, a discrete stochastic process representing the intensity of movement over a time period [0, T] is defined as {X_t}_{t∈T}, with X_t ∈ [0, δ_x], δ_x < ∞, where t corresponds to an epoch. The observed time series is represented as the vector x = (x_1, ..., x_T)′. In the case of accelerometry data, x_t corresponds to the acceleration recorded at the t-th epoch and δ_x is the maximum measurable value of x_t.

Definition 2. For each individual, a second stochastic process representing the active (a) and rest (r) states is defined as {Y_t}_{t∈T}, with Y_t = a if x_t exceeds δ_y and Y_t = r otherwise, where δ_y is the threshold separating active from rest states based on the amount of acceleration per epoch. The observed time series is represented as the vector y = (y_1, ..., y_T)′.

Definition 3. For each individual, a third stochastic process representing the proportion of active states per hour is defined as {Z_p}_{p∈P}, with Z_p ∈ [0, 1], where Z_p = δ_z^{-1} Σ_{t=1}^{δ_z} I(y_{t+δ_z(p−1)} = a), I(·) is an indicator function equal to one if the condition holds and zero otherwise, and δ_z is the number of epochs that make up one hour.
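To make Definitions 1-3 concrete, here is a minimal Python sketch of how the y and z series can be derived from a 1-minute-epoch acceleration vector (the paper's own implementation is in R/GGIR; the function name and the synthetic input below are illustrative, while the 40 mg threshold and 60-epoch hour are taken from the Data section that follows).

```python
import numpy as np

def derive_series(x, delta_y=0.040, epochs_per_hour=60):
    """Derive the rest/activity state series y (Definition 2) and the hourly
    proportion-of-activity series z (Definition 3) from a 1-minute-epoch
    acceleration series x in g (0.040 g = 40 mg threshold)."""
    x = np.asarray(x, dtype=float)
    y = np.where(x >= delta_y, "a", "r")            # dichotomous states
    P = x.size // epochs_per_hour                   # number of full hours
    z = (y[: P * epochs_per_hour] == "a").reshape(P, epochs_per_hour).mean(axis=1)
    return y, z

# illustrative synthetic input: one day of 1-minute ENMO-like values
rng = np.random.default_rng(0)
x = np.abs(rng.normal(0.05, 0.03, 24 * 60))
y, z = derive_series(x)
print(y[:10], z[:3])
```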
Data

The Whitehall II study is an ongoing prospective cohort study established in 1985-1988 among 10,308 British civil servants, with clinical examinations every four to five years since inception. Written informed consent for participation was obtained at each contact. Research ethics approval was obtained from the University College London ethics committee (latest reference number 85/0938). An accelerometer measure was added to the 2012-2013 wave of data collection (age range 60 to 83 years) for participants seen at the London clinic and those living in the south-eastern regions of England who underwent clinical examination at home. Participants were requested to wear a tri-axial accelerometer (GENEActiv Original; Activinsights Ltd, Kimbolton, UK) on their non-dominant arm for nine consecutive 24-hour days. Accelerometer data, sampled at 85.7 Hz and expressed relative to gravity (1 g = 9.81 m/s²), were processed using GGIR v2.9-0 [15]. The Euclidean norm minus one (ENMO) of raw acceleration was calculated and corrected for calibration error and non-wear time. Average acceleration lower than 40 milligravity (mg) during the waking period was classified as rest, corresponding to activities not classified as light or moderate-to-vigorous [16,17]. Waking periods (i.e., periods between waking and sleep onset) for each day were identified using an algorithm for sleep detection that has been previously described and evaluated [18]. Data from waking onset on day 2 to the same time on day 8 were retained, resulting in 7 days of data. Non-wear time was detected using a previously described algorithm [19], and for the present study the 2,859 participants who wore the accelerometer over the full 7 consecutive days were included in the analyses. For each individual, the three time series of Definitions 1-3 were considered, based on 1-minute epochs; an illustration is displayed in Figure 1. For illustrative purposes, seven participant profiles were selected to highlight differences in metrics observed in real-life situations. They were chosen based on their lowest or highest value in the metrics.

Measures of socio-demographic (age, sex, education) and health-related (body mass index (BMI), prevalent morbidities) factors were also used. Education was categorised as zero if the individual had less than secondary school education and one otherwise. BMI was based on measured weight and height (kg/m²), and the number of prevalent morbidities was assessed using clinical examinations in the study and linkage to electronic health records, and includes coronary heart disease, stroke, and heart failure.

Properties of IS and IV

IS measures how constant the rest-activity pattern is between days; it is a signal-to-noise measure, that is, the ratio of the power of a signal (ordered by hour of the day) to the power of background noise (irrespective of the hour of the day) [8]. Considering that we measure H hours over D days, we have a total number of hours P = D × H over the full observation period. For IS, it is useful to organize the vector z from Definition 3 in matrix form as Z = (z_{d,h}), where z_{d,h} = z_{(d−1)H + h} is the element in the d-th row and h-th column, with d = 1, ..., D and h = 1, ..., H.
IS is computed as

IS = ( P Σ_{h=1}^{H} (z̄_h − z̄)² ) / ( H Σ_{p=1}^{P} (z_p − z̄)² ),

where z̄_h = (1/D) Σ_{d=1}^{D} z_{d,h} is the mean for hour h over the D days of measurement, and z̄ = (1/P) Σ_{p=1}^{P} z_p is the overall mean over the full observed period. IV reflects the fragmentation of the rest-activity pattern over long periods of rest or activity; it measures the variability between consecutive hours (Figure 1c) [8]. IV is computed as

IV = ( P Σ_{p=2}^{P} (z_p − z_{p−1})² ) / ( (P − 1) Σ_{p=1}^{P} (z_p − z̄)² ).

Some mild conditions are needed to derive the properties of the IS and IV metrics:

(A1) Z_p follows an autoregressive model of order 1 (AR(1)), Z_p = µ + ϕ(Z_{p−1} − µ) + ε_p, where µ is the mean of the stochastic process, ϕ is a fixed but unknown parameter with |ϕ| < 1, and ε_p is Gaussian noise.

(A2) 0 ≤ ϕ, and the observation period P of the time series is long.

Assumption (A1) is required to define the range of IV, because we need to determine the relationship between Z_p and Z_{p−1}, and the AR(1) model is a simple and flexible model that fits many real situations. Although we assume a stationary process, see the unit root conditions in [20]. Assumption (A2) is imposed to guarantee a positive autocorrelation, 0 ≤ ϕ, and a long observation period P. Theorem 3, as well as Theorems 1 and 2 stating the ranges of IS and IV, holds for a stochastic process {Z_p}_{p∈P} under assumptions (A1) and (A2); the proofs of these theorems are provided in the Supplementary material (Section 1).

Interpretation of IS and IV

In the proof of Theorem 1 (Supplementary material, Section 1), we showed that 1) an IS value close to 1 reflects a rest-activity pattern that is constant between days, as the signal is much stronger than the noise; 2) an IS value close to zero corresponds to a rest-activity pattern that is inconstant between days, as the signal is weaker than the noise (Box 1). In the proof of Theorem 2 (Supplementary material, Section 1), we showed that 1) if ϕ goes to one (perfect autocorrelation), then IV goes to zero, reflecting low rest-activity fragmentation between hours; 2) if ϕ goes to 0 (uncorrelated random noise), then IV goes to two, representing high rest-activity fragmentation between hours; 3) in some specific cases, IV can be greater than 2; this can occur when the sample size P is too small, or when ϕ < 0, which may be seen in ultradian rhythms, meaning rhythm cycles lasting less than a day [14], or when high-frequency data are used [21]. These statements agree with previous claims about IS and IV in [14]. Some authors [22,23,24] compute IV from the x time series. In that case, the present properties no longer hold, as assumptions (A1) and (A2) are verified exclusively for z.

Properties of TP

The TP in a dichotomous stochastic process represents the probability of a state change given the time spent in a specific state. In the case of changes in rest/activity state in humans, a "U"-shaped association is expected between the TP and the time already spent in a given state [7]. Indeed, if the time spent in one state is very short, the probability of transition to the alternate state should be relatively high, as the current state is not consolidated; if the time spent in a given state is of medium length, the chance of change is likely small because the person is engaged in the specific behaviour. Finally, if the time spent in a specific state is long, the probability of transition is higher, as people are unlikely to stay at rest or active for very long periods.
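As a sketch of the two formulas above (not the authors' GitHub/GGIR implementation), IS and IV can be computed from the hourly series z as follows; the function name and the synthetic examples are illustrative.

```python
import numpy as np

def is_iv(z, hours_per_day=24):
    """Inter-daily stability and intradaily variability of an hourly series z
    covering D full days (length P = D * hours_per_day)."""
    z = np.asarray(z, dtype=float)
    P = z.size
    D = P // hours_per_day
    z_bar = z.mean()
    z_h = z[: D * hours_per_day].reshape(D, hours_per_day).mean(axis=0)  # hourly means over days
    denom = np.sum((z - z_bar) ** 2)
    IS = P * np.sum((z_h - z_bar) ** 2) / (hours_per_day * denom)
    IV = P * np.sum(np.diff(z) ** 2) / ((P - 1) * denom)
    return IS, IV

rng = np.random.default_rng(1)
t = np.arange(7 * 24)
# a clear diurnal pattern plus mild noise: IS close to 1, IV well below 2
z = np.clip(0.5 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size), 0, 1)
print(is_iv(z))
# pure noise: IS close to 0, IV close to 2
print(is_iv(rng.uniform(0, 1, 7 * 24)))
```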
Definition 4. For each individual, given a stochastic process as in Definition 2, the TP from r to a given an uninterrupted period of rest of length s is

π_ra(s) = P(Y_{t+1} = a | Y_t = ... = Y_{t−s+1} = r),     (2)

and the TP from a to r given an uninterrupted period of activity of length s is

π_ar(s) = P(Y_{t+1} = r | Y_t = ... = Y_{t−s+1} = a).     (3)

Definition 5. For each individual, given a stochastic process as in Definition 2, the TP from r to a given a single epoch of rest is π_ra(1) = P(Y_{t+1} = a | Y_t = r), and the TP from a to r given a single epoch of activity is π_ar(1) = P(Y_{t+1} = r | Y_t = a).

The two conditional probabilities of Definition 4 were proposed by [7]. The two TPs of Definition 5 are special cases of the previous ones and were highlighted by [9]. We aim to propose an ML estimator for TP because, when the model assumptions hold, no estimator outperforms ML, making it a gold standard. Moreover, if prior knowledge is available, we can aggregate this information and build a Bayesian estimator that is even more accurate than ML. Beforehand, some notation needs to be introduced for readability.

Definition 6. r = (r_1, ..., r_{n_r})′ is an n_r-vector recording the length of each consecutive bout of rest, where n_r is the number of bouts of rest (n_r ≤ T), so that r_1 is the length of the first bout of rest, r_2 of the second, and r_{n_r} of the last. T_r = Σ_{i=1}^{n_r} r_i is the total time of rest (in epoch units), r_i ∈ {1, ..., S_r}, where S_r is the duration of the longest bout of rest. N_r(s) is the number of bouts of rest of duration greater than or equal to s, so that N_r(s) ≥ N_r(s + k) for all k ≥ 1, and ∆_r(s) = N_r(s) − N_r(s + 1) is the number of bouts of rest of duration exactly s. Some corrected quantities are required to account for whether the last state is rest: if the last state is rest, we cannot observe its transition to activity. They are n_r* = n_r − I(y_T = r) and T_r* = T_r − I(y_T = r).

Definition 7. a = (a_1, ..., a_{n_a})′ is an n_a-vector recording the length of each consecutive bout of activity, where n_a is the number of bouts of activity (n_a ≤ T). T_a = Σ_{i=1}^{n_a} a_i = T − T_r is the total time of activity (in epoch units), a_i ∈ {1, ..., S_a}, where S_a is the duration of the longest bout of activity, and N_a(s), with N_a(s) ≥ N_a(s + k) for all k ≥ 1, is the number of bouts of activity of duration greater than or equal to s. As in Definition 6, the corrected quantities accounting for the value of the last state are n_a* = n_a − I(y_T = a) and T_a* = T_a − I(y_T = a).

The assumptions are:

(B1) The stochastic process {Y_t}_{t∈T} is a Markov chain.
(B2) The stochastic process {Y_t}_{t∈T} is stationary.
(B3) The stochastic process {Y_t}_{t∈T} has finite memory equal to s ≥ 1.

Theorem 4 gives the ML estimators π̂_ra(1)_ML = n_r*/T_r* and π̂_ar(1)_ML = n_a*/T_a*; Corollary 5 gives the corresponding Beta-Binomial Bayesian estimators π̂_ra(1)_B = (n_r* + λ)/(T_r* + 2λ) and π̂_ar(1)_B = (n_a* + λ)/(T_a* + 2λ), where λ > 0 is the hyperparameter of a Beta(λ, λ) prior. The proof of Theorem 4 is provided in the Supplementary material (Section 1), using as its main argument the properties of a Bernoulli stochastic process [25]. The proof of Corollary 5 is a direct application of Theorem 4 to a Binomial model; the Beta-Binomial posterior estimator is then a well-known result [26, page 104].
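The following sketch implements the s = 1 estimators above and reproduces the hypothetical T = 15 example discussed below (all four estimates equal to 0.5 with λ = 0.5); it is an illustration, not the authors' released code.

```python
import numpy as np

def tp_one_step(y, lam=0.5):
    """ML and Beta-Binomial Bayesian estimators of pi_ra(1) and pi_ar(1).
    y: iterable of 'r'/'a' states; lam: Beta(lam, lam) prior hyperparameter."""
    y = np.asarray(list(y))
    rest = y == "r"
    T_r_star = int(rest[:-1].sum())                 # rest epochs with observed successor
    T_a_star = int((~rest[:-1]).sum())              # activity epochs with observed successor
    n_r_star = int(np.sum(rest[:-1] & ~rest[1:]))   # observed r -> a transitions
    n_a_star = int(np.sum(~rest[:-1] & rest[1:]))   # observed a -> r transitions
    ml = (n_r_star / T_r_star, n_a_star / T_a_star)
    bayes = ((n_r_star + lam) / (T_r_star + 2 * lam),
             (n_a_star + lam) / (T_a_star + 2 * lam))
    return ml, bayes

# hypothetical example from the text: y = (a,a,a,r,r,a,r,a,a,a,r,r,a,r,r)'
print(tp_one_step("aaarraraaarrarr"))   # ((0.5, 0.5), (0.5, 0.5))
```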
The Bayesian estimator introduced in Corollary 5 always exists, even if any of n_r*, T_r*, n_a* or T_a* is zero. The value λ = 1 corresponds to a uniform prior, which has an appealing interpretation: it encodes the idea that in a sequence of nights there is at least one epoch of activity, and in a sequence of days there is at least one epoch of rest. Other values may be explored: λ = 0.5 is the Horseshoe prior [27], while λ = 10⁻⁶ yields a numerically insignificant difference between the ML and Bayesian estimators. Values of λ larger than one may not be relevant in this context.

Remark 1. Even without any assumption about the stochastic process in terms of memory and stationarity, some nonparametric measures are available, such as the reciprocal average duration (RAD) of rest, RAD_r = n_r/T_r, and the RAD of activity, RAD_a = n_a/T_a. This metric appears in the work of [28], but it was used by [9] to approximate the target probabilities in (2) and (3).

Let us give a hypothetical example with a small sample size T = 15, y = (a, a, a, r, r, a, r, a, a, a, r, r, a, r, r)′, to illustrate the difference between π̂_ra(1)_ML, π̂_ra(1)_B and RAD_r, as well as π̂_ar(1)_ML, π̂_ar(1)_B and RAD_a, with hyperparameter λ = 0.5. Here r = (2, 1, 2, 2)′ and a = (3, 1, 3, 1)′, which corresponds to n_r = 4, n_r* = 3, n_a = 4, n_a* = 4, T_r = 7, T_r* = 6, T_a = 8, T_a* = 8. So we have 3 changes (n_r*) in 6 opportunities (T_r*), i.e., 50% of transitions from r to a by the ML estimator; RAD_r inflates this result to 57% by adding a transition for the last observation, although we do not actually know what would happen at y_16. From a to r, the RAD_a and ML estimators coincide, and the Bayesian estimator is also very close. For convenience, these values are given in Table 2. The proof of Theorem 6 is provided in the Supplementary material (Section 1).

Remark 2. The heuristic estimators proposed by [7] are based on the bout counts N_a(s) and N_r(s), where s = 1, ..., S_a − 1 and d ≥ 1 is the smallest value such that N_a(s) − N_a(s + d) > 0 for π̂_ar(s)_H, and analogously for π̂_ra(s)_H. Note that if d = 1 and y_T = r, then π̂_ar(s)_H = π̂_ar(s)_ML, but otherwise they differ; and if d = 1 and y_T = a, then π̂_ra(s)_H = π̂_ra(s)_ML. Of note, π̂_ar(s)_ML, π̂_ra(s)_ML ∈ [0, 1), whereas π̂_ar(s)_H, π̂_ra(s)_H ∈ (0, 1), but at the price of biased estimates when d > 1, which become more frequent as s increases [7]. Thus the upper tails of π̂_ar(s)_H and π̂_ra(s)_H are systematically affected by this bias.

Summary statistics for TP

The conditional probabilities π_ar(1) and π_ra(1) are more convenient to interpret than π_ar(s) and π_ra(s). However, assumption (B1) is much stronger than (B3), and the former is not expected to hold in accelerometer applications [7]. To summarize the set of π̂_ar(s)_H and π̂_ra(s)_H in single values, [7] proposed a bounded average calculated by LOWESS smoothing over a range of s values where the TP is lower (determined a posteriori as the s values in the flat region of the "U" shape; see the illustration in [7]). This method requires determining the boundary of the s values, for which there is no straightforward method.

Here we propose instead to summarize π̂_ar(s)_ML by a weighted mean, π̄_ar = Σ_{s=1}^{S_a−1} ω_s π̂_ar(s)_ML, where ω_s is a weight with Σ_{s=1}^{S_a−1} ω_s = 1. The π̂_ar(s) are poorly estimated for large s because such events (long bouts) are rare [7]; we therefore weight each value by the proportion of the sample size available at duration s. Analogously, π̂_ra(s)_ML is summarized by a weighted mean π̄_ra.
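A sketch of the weighted-mean summary on the same worked example; the paper's exact weight formula did not survive extraction, so, as an assumption consistent with Theorem 7 below, the weights are taken proportional to the number of at-risk trials at each duration, under which the summary collapses to the one-step ML estimate (here 0.5 in both directions).

```python
import numpy as np

def bout_lengths(y, state="r"):
    """Lengths of consecutive bouts of `state`, in temporal order."""
    mask = (np.asarray(list(y)) == state).astype(int)
    d = np.diff(np.r_[0, mask, 0])
    return np.flatnonzero(d == -1) - np.flatnonzero(d == 1)

def tp_weighted_summary(y, state="r"):
    """Weighted mean of the duration-dependent ML probabilities
    pi(s) = Delta(s)/N(s), with weights proportional to N(s) (assumption)."""
    y = np.asarray(list(y))
    r = bout_lengths(y, state)
    s = np.arange(1, r.max() + 1)
    N = np.array([(r >= k).sum() for k in s], float)      # bouts at risk at duration s
    Delta = np.array([(r == k).sum() for k in s], float)  # observed transitions at s
    if y[-1] == state:            # final bout censored: its end is not a transition
        N[r[-1] - 1] -= 1
        Delta[r[-1] - 1] -= 1
    keep = N > 0
    pi, w = Delta[keep] / N[keep], N[keep] / N[keep].sum()
    return float(np.sum(w * pi))

y = "aaarraraaarrarr"
print(tp_weighted_summary(y, "r"), tp_weighted_summary(y, "a"))  # 0.5 0.5
```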
Theorem 7. Given a stochastic process {Y_t}_{t∈T}, under assumptions (B2), (B3) and (B4), we have π̄_ar ≈ π̂_ar(1)_ML and π̄_ra ≈ π̂_ra(1)_ML. Moreover, if λ is small, then π̄_ar ≈ π̂_ar(1)_B and π̄_ra ≈ π̂_ra(1)_B.

The proof of Theorem 7 is provided in the Supplementary material (Section 1). By Theorem 7, the estimated π_ar(1) and π_ra(1) are summary statistics of the functions π_ar(s) and π_ra(s) over all values of s. In what follows, T*_r,s denotes the total rest time during the sleep window, T*_a,w the total activity time during the awake window, and T*_a,s the total activity time during the sleep window; the star (*) indicates that one is subtracted when the last transition cannot be judged.

Interpretation of TP

When using a small λ and a long observation period, a higher TP_ra,w corresponds to more transitions from rest to activity during the awake window, reflecting a more fragmented pattern of rest; a higher TP_ar,w corresponds to more transitions from activity to rest during the awake window, denoting a more fragmented pattern of activity. A similar interpretation applies to the metrics defined during the sleep window (Box 1). If one state is absent during a window (for example, no activity at all during the sleep window), the TP still exists, and the transition from the missing state to the alternative state equals 1: if the person ever enters the missing state, they will easily go back to the alternative state.

Introduction to DFA

DFA is a powerful analytical tool for time series analysis, initially proposed by [10] to analyse long-term correlations of nucleotides. More recently it has been used in the context of movement behaviour to quantify fractal fluctuations in activity over a range of time scales [12,29]. In practice, it evaluates the extent to which the activity pattern (in terms of temporal and structural properties) is similar at different time scales. Estimating the self-similarity parameter allows one to differentiate stationary from nonstationary stochastic processes and to identify white, pink (fractal), or brown noise patterns. These key properties may be hidden in complex time series, and DFA is a way to reveal them.

Consider a bounded stochastic process {X_t}_{t∈T} from Definition 1. Take the accumulated, mean-centred signal c_t = Σ_{u=1}^{t} (x_u − x̄), and split it into boxes of size n. For each box, we fit a polynomial of order l; for example, the polynomial for the j-th box is fitted by ordinary least squares regression as f_t(n) = β̂_0 + β̂_1 t + ... + β̂_l t^l, for t = (j − 1)n + 1, ..., jn. Note that β = (β_0, ..., β_l)′ differs for each j-th box and each box size n, so f_t(n) depends on t and n. To detrend the integrated time series, i.e., to remove the trend of c_t, we take the difference of each pair c_t and f_t(n). For a given box size n, the root mean square fluctuation is F(n) = sqrt( T^{-1} Σ_{t=1}^{T} (c_t − f_t(n))² ), which can be computed as in [30]. The operation is repeated over a broad range of box sizes n; for example, [31] recommend taking a sample on a grid with 4 ≤ n ≤ T/4. Figure 2 displays the steps of DFA for two box sizes: 60 minutes (Figure 2c) and 30 minutes (Figure 2d). Assuming the scaling relation log F(n) = µ + α log n + ε_n, where ε_n is an independent Gaussian error and µ is an intercept, an ordinary least squares (OLS) regression yields α.
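A compact sketch of the DFA steps just described (integration, box-wise polynomial detrending, RMS fluctuation, log-log OLS); the grid choice and order-1 detrending follow the text, the rest is illustrative.

```python
import numpy as np

def dfa_alpha(x, n_sizes=None, order=1):
    """Estimate the DFA self-similarity parameter alpha of a series x."""
    x = np.asarray(x, dtype=float)
    T = x.size
    c = np.cumsum(x - x.mean())                       # accumulated, mean-centred signal
    if n_sizes is None:                               # grid 4 <= n <= T/4 as in [31]
        n_sizes = np.unique(np.geomspace(4, T // 4, 20).astype(int))
    F = []
    for n in n_sizes:
        nboxes = T // n
        segs = c[: nboxes * n].reshape(nboxes, n)
        t = np.arange(n)
        resid = np.empty_like(segs)
        for j, seg in enumerate(segs):                # OLS polynomial fit per box
            coef = np.polyfit(t, seg, order)
            resid[j] = seg - np.polyval(coef, t)
        F.append(np.sqrt(np.mean(resid ** 2)))        # RMS fluctuation F(n)
    alpha, _ = np.polyfit(np.log(n_sizes), np.log(F), 1)
    return alpha

rng = np.random.default_rng(0)
print(dfa_alpha(rng.standard_normal(7 * 24 * 60)))    # white noise: alpha ~ 0.5
```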
The interpretation of α is quite precise but requires some mathematical jargon. Given a stochastic process as in Definition 1, the self-similarity parameter lies in the range 0 < α < 1 for stationary stochastic processes and 1 < α < 2 for nonstationary ones, as proved in [32]. Some critical values of the scaling exponent have distinct mathematical importance: α = 0.5 means the stochastic process is white noise, α = 1 corresponds to pink or fractal noise, and α = 1.5 is the case of a random walk [33].

Activity balance index: a new DFA-derived metric

Given previous empirical results, [34] hypothesized that many biological systems present a fractal nature, i.e., α = 1. A further hypothesis, that healthy people present fractal noise in heart and walking rates, was elaborated in [35]. In the context of activity behaviour, we introduce a novel metric named ABI, which measures how balanced the activity is over the observed period; higher values reflect a more balanced pattern of activity. It is a transformation of α, defined for α ∈ (0, 2). If α goes to one, then |α − 1| goes to zero and ABI(α) goes to one. In the other direction, as α goes to two or zero, which are the extremes for α [32], |α − 1| goes to one and ABI(α) goes to 0.0006. The ABI has two advantages: it penalizes the scattering of α in both directions, and it spreads its values over a large range, (0.0006, 1] or (0, 1] for simplicity.

We introduced the ABI, which focuses on the fractal-noise nature of the signal, to evaluate how balanced the activity is over the observation period. If fractal noise represents an optimal balance for activity behaviour, then healthy individuals should present higher ABI values than unhealthy people (Box 1). Both α and ABI are influenced by the choice of epoch length: longer epochs naturally tend to smooth the acceleration signal, implying a lower chance of observing fractal noise (that is, α and ABI close to one).
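The exact transformation defining ABI did not survive extraction; the sketch below uses an exponential decay in |α − 1|, an assumption chosen only to match the endpoint behaviour stated above (ABI(1) = 1 and ABI(0) = ABI(2) ≈ 0.0006), not the paper's actual formula.

```python
import numpy as np

# Assumed decay scale so that exp(-1/TAU) = 0.0006 at |alpha - 1| = 1.
TAU = -1.0 / np.log(0.0006)   # ~ 0.135

def abi(alpha):
    """Activity balance index as an (assumed) transformation of the DFA
    exponent alpha: 1 at alpha = 1, ~0.0006 at the extremes 0 and 2."""
    return np.exp(-np.abs(np.asarray(alpha) - 1.0) / TAU)

print(abi([0.0, 0.5, 1.0, 1.5, 2.0]))  # ~ [0.0006, 0.024, 1.0, 0.024, 0.0006]
```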
Results

A total of 4,880 individuals were invited to participate in the Whitehall accelerometer sub-study. Of these, 4,282 agreed to wear the accelerometer and had no contraindications (allergy to plastic or travelling abroad). Among them, 2,859 individuals had complete data without any non-wear period over a continuous period of 7 days, corresponding to a total of 10,080 epochs. The mean age of the participants was 69.2 years (standard deviation (SD) 5.7). A total of 602 were women (21.1%), 1,170 (40.9%) had less than secondary school education, 495 (17.3%) were currently employed, and 1,140 (39.9%) had at least one morbidity. The mean BMI in the study sample was 26.7 kg/m² (SD 4.3).

Figure 3: Boxplots of inter-daily stability (IS), intradaily variability (IV), the estimated autocorrelation parameter of the AR(1) model (φ̂), the transition probability (TP) from activity to rest during the awake period (TP_ar,w), TP from activity to rest during sleep (TP_ar,s), TP from rest to activity during the awake period (TP_ra,w), TP from rest to activity during sleep (TP_ra,s), the estimated self-similarity parameter (α̂), and the activity balance index (ABI).

Table 3: Mean (SD) and p-values of IS, IV, TP_ar,w, TP_ar,s, TP_ra,w and TP_ra,s.

Figure 3 shows the distribution of IS, IV, TP, α and ABI in the total sample. All empirical ranges are within the theoretical ones proposed in Box 1. For IV, two individuals have a value exceeding 2; these outliers correspond to two of the three individuals whose φ̂ value is not within the [0, 1] interval, suggesting a minority of cases with ultradian rhythm in the dataset.

Table 4 shows one fitted multivariate regression for each standardized rest-activity fragmentation metric. Being a woman, being aged around 70 years (see Figure 1 in the Supplementary additional results for the association with age), having a lower educational level, not being currently employed, and having a lower BMI and fewer prevalent morbidities were associated with a more constant rest-activity pattern (all p < 0.05). The same variables (except sex) were associated with IV, but in the opposite direction, denoting a less fragmented rest-activity pattern. TP_ar,w is associated with all sociodemographic and health-related factors except sex and employment status, while TP_ar,s, in a complementary way, is only significantly associated with sex and employment status. Higher TP_ra,w is associated with being a woman, lower BMI and fewer morbidities, while higher TP_ra,s is associated with higher BMI and more prevalent morbidities. Both α and ABI are associated with all socio-demographic (except education) and health-related factors.

Table 5 presents Pearson's correlation coefficients for the IS, IV, TP, α and ABI metrics. We observe one moderate correlation, between α and ABI, which is expected since ABI is a transformation of α and the two are not meant to be used simultaneously. All remaining correlations are considered fair or poor [36].
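For the analysis behind Table 4, a minimal illustration of regressing a standardized metric on the covariates via OLS; the data below are synthetic placeholders matching only the reported marginal summaries, and the design matrix is an assumption, not the study's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2859
X = np.column_stack([
    np.ones(n),                       # intercept
    rng.normal(69.2, 5.7, n),         # age
    rng.binomial(1, 0.211, n),        # sex (woman = 1)
    rng.binomial(1, 0.409, n),        # education < secondary school
    rng.binomial(1, 0.173, n),        # currently employed
    rng.normal(26.7, 4.3, n),         # BMI
    rng.poisson(0.5, n),              # number of morbidities (placeholder)
])
metric = rng.normal(size=n)           # placeholder for, e.g., IS
z = (metric - metric.mean()) / metric.std(ddof=1)   # standardized metric
beta, *_ = np.linalg.lstsq(X, z, rcond=None)        # OLS coefficients
print(beta)
```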
Discussion

This study provides theoretical ranges and guidance on the interpretation of rest-activity fragmentation metrics. We mathematically compared the heuristic estimators of TP proposed by [7] and [37] and proposed alternative ML and Bayesian estimators. We also proposed a transformation of the DFA-derived self-similarity parameter, the ABI, to reflect the balance of activity behaviours over the observation period. Finally, using accelerometer data from 2,859 individuals aged 60 to 83 years, we showed that most of the correlations between IS, IV, TP and ABI were modest. We also found sociodemographic and health-related differences in some, but not all, of the rest-activity fragmentation metrics, highlighting the fact that they measure different features.

We proposed Bayesian estimators of TP to estimate the chance of changing from rest to active periods and vice versa, defined separately during the awake (day) and sleep (night) windows. As expected, we observed a higher TP from activity to rest during the sleep window than during the awake window and, conversely, a higher TP from rest to activity during the awake window than during the sleep window [38]. We applied these metrics to rest/activity states defined by an acceleration threshold [16,17]. Empirical results depend on the method used to differentiate rest from activity states. These metrics might also be relevant with methods that differentiate sleep and wake states instead of rest and activity states, which would be particularly useful for evaluating the fragmentation of sleep during the night.

When comparing rest-activity fragmentation metrics using data from adults aged 60 to 83 years, we found low to moderate correlations among the variables (|r| < 0.6), except between α and ABI (r = 0.789). Although calculated differently, these estimated correlations are in accordance with those found in previous studies [7,23,24,39,40]. These modest correlations suggest that the metrics capture distinct features of individuals' rest-activity patterns. The graphical analysis of the extreme cases of each metric (see Figures 4 to 9 and Figures 2 to 5 in Section 2 of the Supplementary material) displays several behaviour profiles: sedentary, active, good sleeper, insomniac, (un)balanced rest-activity person, tireless person, and ultradian-rhythm person. There were differences in these metrics by sex, age, working status, BMI and prevalent morbidities, implying the potential usefulness of these metrics for health outcomes.

The study has several strengths, including both theoretical and empirical demonstrations of the ranges of the rest-activity fragmentation metrics in a large sample; the combination of approaches increases the validity of our findings. Second, using multiple metrics in the same study population allows a comprehensive comparison of these metrics. The study also has limitations. We used data from participants who had complete data for seven days. This may have resulted in a selection of participants, highlighting the need to further investigate the impact of non-wear time on these metrics before they can be used in larger samples. In addition, most participants were Caucasian and relatively healthy; whether the results hold in other ethnic subgroups requires further study. The empirical application is restricted to one type of device and should be replicated in other studies using different devices.
Conclusion

This study established properties of previously used rest-activity fragmentation metrics and proposed new ones. Their properties were evaluated using both theoretical and empirical approaches among more than 2,800 older adults. Overall, this study shows that the rest-activity fragmentation metrics examined in this paper - IS, IV, TP (TP_ra,w, TP_ra,s, TP_ar,w, TP_ar,s), α and ABI - are modestly correlated, apart from ABI and α. Additionally, these metrics are differently associated with socio-demographic and health-related outcomes. Thus, they are likely to reflect different aspects of individuals' behaviours. However, consideration should be given to their strengths and limitations, as summarized in Box 1. We encourage the use of these metrics in future studies to gain insight into the role of rest-activity fragmentation for health.

Figure 1: Example of the three time series from the same individual.

Figures 4 to 9 show the time series of individuals with extreme IS, IV, TP and DFA values. Footnotes give a short description of what characterizes each time series. More figures are available in the Supplementary material (Section 2, Figures 2 to 5).

Figure 4: The sedentary: this individual presents the highest TP_ar,w. Note that the black blocks in the non-blue region of panel (b) are short, i.e., this individual has short bouts of activity.

Figure 5: The active: this individual presents the lowest TP_ar,w. Note that the black blocks in the non-blue region of panel (b) are very long, i.e., this individual has long bouts of activity.

Figure 6: The good sleeper: this individual presents the lowest TP_ra,s and a high TP_ar,s. Note that the black (white) blocks in the blue region of panel (b) are very brief (long), i.e., during the night this individual displays almost no activity.

Figure 7: The insomniac: this individual presents the lowest TP_ar,s. Note the large black blocks in the blue region of panel (b), especially in the third and fifth sleep windows, i.e., during the night this individual presents long periods of activity.

Table 5: Pearson's correlations of inter-daily stability (IS), intradaily variability (IV), transition probability (TP) from activity to rest during the awake period (TP_ar,w), TP from activity to rest during sleep (TP_ar,s), TP from rest to activity during the awake period (TP_ra,w), TP from rest to activity during sleep (TP_ra,s), the estimated self-similarity parameter (α̂) and the activity balance index (ABI).
The graph limit for a pairwise competition model

This paper extends the graph limit with time-dependent weights obtained in [1] to the case of a pairwise competition model introduced in [10], in which the equation governing the weights involves a weak singularity at the origin. Well-posedness of the graph limit equation associated with the ODE system of the pairwise competition model is also proved.

Introduction

General background. In this work, we are concerned with analyzing the graph limit of the following system of (d + 1)N ODEs:

ẋ_i^N(t) = (1/N) Σ_{j=1}^{N} m_j^N(t) a(x_j^N(t) − x_i^N(t)),   ṁ_i^N(t) = ψ_i^N(x^N(t), m^N(t)).   (1.1)

The notation is as follows: the unknowns x_i^N ∈ R^d and m_i^N ∈ R are referred to as the opinions and weights, respectively. The evolution of the opinions is given in terms of the weights and a function a : R^d → R^d called the influence. The evolution of the weights is given by means of functions ψ_i^N : R^{dN} × R^N → R, where we use the notation x^N(t) := (x_1^N(t), ..., x_N^N(t)), m^N(t) := (m_1^N(t), ..., m_N^N(t)). This model was proposed in [10], along with several other models meant to idealize social dynamics. We refer to [10,13] for more details on how these models originate from biology and the social sciences. Mathematically, system (1.1) is a weighted version of the first-order N-body problem (obtained simply by taking all weights identically equal to 1). By now, the mean field limit of the N-body problem

ẋ_i = (1/N) Σ_{j=1}^{N} a(x_j − x_i)   (1.2)

is fairly well understood, even for influence functions with strong singularities at the origin [14]. The mean field limit can be analysed in terms of the empirical measure µ_N(t) = (1/N) Σ_{i=1}^{N} δ_{x_i^N(t)}. Thanks to the work of Dobrushin [4], it is possible to prove quantitative convergence of µ_N(t) to the solution µ of the (velocity-free) Vlasov equation with respect to the Wasserstein metric (provided this is true initially, of course). The mean field limit with time-dependent weights has been investigated in [1,5,6] for Lipschitz continuous interactions and ψ_i^N at least Lipschitz in each variable, and more recently in [2] for the 1D attractive Coulomb interaction (but still with ψ_i^N regular enough).

There is a different regime, the so-called graph limit, closely related to the mean field limit. In the graph limit, we pass from a discrete system of ODEs to a "continuous" system in the following sense: we associate to x^N(t), m^N(t) the Riemann-sum functions x_N : [0, T] × I → R^d, m_N : [0, T] × I → R defined by

x_N(t, s) = Σ_{i=1}^{N} x_i^N(t) 1_{[(i−1)/N, i/N)}(s),   m_N(t, s) = Σ_{i=1}^{N} m_i^N(t) 1_{[(i−1)/N, i/N)}(s).   (1.3)

Using the equations for the trajectories of the opinions and weights, one easily finds the equations governing x_N, m_N, and the Lebesgue differentiation theorem leads us formally to the following integro-differential equation:

∂_t x(t, s) = ∫_I m(t, s*) a(x(t, s*) − x(t, s)) ds*,  x(0, s) = x_0(s),
∂_t m(t, s) = Ψ(s, x(t, ·), m(t, ·)),  m(0, s) = m_0(s).   (1.4)

Here Ψ is a functional whose relation to ψ_i^N is given by formula (2.5) in the next section. The formula relating x_0(s), m_0(s) to x_N^0(s), m_N^0(s) will also be given in the next section (formula (2.6)). Hence, one expects the sums x_N(t, s), m_N(t, s) to approximate the solution (x(t, s), m(t, s)) of Equation (1.4).
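A small Python sketch of the piecewise-constant embedding (1.3); the midpoint sampling of initial data below is an assumption standing in for formula (2.6), which did not survive extraction.

```python
import numpy as np

def riemann_embedding(values):
    """u_N(s) = sum_i values[i] * 1_{[(i-1)/N, i/N)}(s), as a callable on [0, 1)."""
    values = np.asarray(values, dtype=float)
    N = values.size
    def u_N(s):
        idx = np.minimum((np.asarray(s, dtype=float) * N).astype(int), N - 1)
        return values[idx]
    return u_N

# embed N opinions sampled from a smooth initial profile x0(s) = s**2
N = 8
x0 = lambda s: s ** 2
x_disc = x0((np.arange(N) + 0.5) / N)      # midpoint sampling of each cell
xN = riemann_embedding(x_disc)
print(xN(0.0), xN(0.5), xN(0.99))          # approximates x0 pointwise as N grows
```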
Before going further, let us briefly comment on the origin of the terminology "graph limit". The name stems from the fact that system (1.1) can be viewed as a nonlinear heat equation on a graph. For example, when the weights are time-independent and a is taken to be the identity, system (1.1) can be rewritten as the linear heat equation with respect to the Laplacian associated with the underlying simple graph. This is the point of view taken in [11]. However, this underlying combinatorial structure mostly comes into play when the weights may vary from one opinion to another, in which case methods from graph theory prove highly useful. We also refer to the more recent work [9] for a demonstration of the power of graph-theoretic techniques in the context of the mean field limit, and to [3] in the context of convergence to consensus for the graph limit equation. See also [12] for a proof of the graph limit for metric-valued labels, alongside an extensive explanation of the relation between the graph limit and the hydrodynamic and mean field limits. In our setting, which is very similar to the framework of [1], this graph structure is not as relevant, and we shall therefore not dwell on this matter. It is instructive to view system (1.4) as a continuous version of (1.1), in the sense that it is obtained by replacing averaged sums by integrals over the unit interval and summation indices by variables in the unit interval.

Relevant literature and contribution of the present work. The graph limit point of view appears to have received less attention than the mean field limit. The study of this problem was initiated in [11], which, as already remarked, considers time-independent weights that may depend on the index of the opinion as well. This result was extended in [1] to cover time-dependent weights (although in [1] the weights depend only on the summation index). The evolution in time of the weights makes the problem difficult both at the microscopic and graph limit levels, since the corresponding ODE/integro-differential equations become coupled (compare for instance Equations (1.1) and (1.2)), and at the macroscopic level, since the mean field PDE includes a non-local source term (see Section 4 for more details). In both of these results, the functions ψ_i^N are assumed to be well behaved in terms of regularity. On the other hand, models in which the functions ψ_i^N exhibit singularities recently received attention in [10]. For instance, the following ODE has been studied in [10]:

ẋ_i = (1/N) Σ_j m_j a(x_j − x_i),   ṁ_i = m_i (1/N²) Σ_{j,k} m_j m_k (1/2)(a(x_k − x_j) + a(x_k − x_i)) · s(x_i − x_j),   (1.5)

where a : R^d → R^d is Lipschitz and takes the form a(x) = ã(|x|)x for some radial ã : R → R, and s : R^d → S^{d−1} is the projection onto the unit sphere, i.e., s(x) = x/|x| for x ≠ 0.
Of course, inserting the equation for x_i into the equation for m_i brings the system into the form (1.1). System (1.5) is referred to as a pairwise competition model in [10], and its well-posedness can be proved provided the opinions are separated initially (i ≠ j ⟹ x_i^0 ≠ x_j^0). The aim of this work is to investigate how to overcome the challenges created by the singularity in the weight function in the context of the graph limit. The problem of the graph limit for singular influence functions is also interesting. As already remarked, for the mean field limit this has been successfully achieved in [2] for the 1D attractive Coulomb case. However, it is not clear how to study the graph limit regime in this generality. The 1D repulsive Coulomb interaction, however, is manageable and can be handled by methods similar to those demonstrated in the present work.

A first contribution of the present work is reflected on two levels, both considered in 1D: the well-posedness of the graph limit equation (1.4), and the derivation of (1.4) from the opinion dynamics (1.1) in the limit N → ∞. As for the first point, we note that when a and ψ_i^N are well behaved, equation (1.4) can be viewed as a Banach-valued ODE: at each time t the unknowns (x(t, ·), m(t, ·)) are functions of the variable s, so there is a straightforward analogy between the well-posedness of the discrete system (1.1) and that of equation (1.4). As already mentioned, the global well-posedness of the finite-dimensional version of Equation (1.4), namely System (1.5), has been proved (among other things) in [10] using the theory of differential inclusions developed by Filippov [7]. Originally, Filippov formulated his theory for unknowns taking values in a finite-dimensional space, in contrast to Equation (1.4). We follow a slightly different route which is in fact more elementary and does not require any familiarity with convex analysis.

A second contribution of the present work is the study of the graph limit in arbitrary dimension d > 1. In higher dimensions, a natural assumption on the initial datum x_0 is that it is bi-Lipschitz in s; an assumption of this type is strictly stronger than what is needed in 1D. This in turn leads to considering Riemann sums whose labelling variable s varies over the d-dimensional unit cube rather than the unit interval, because cubes of different dimensions cannot be diffeomorphic. This labelling procedure has no modelling interpretation, since particles (opinions) remain exchangeable, or indistinguishable. It would in fact be possible to keep working on the unit interval through a change of the labelling variable, since all cubes (and most measurable spaces one may use) are isomorphic to the unit interval by the Borel isomorphism theorem. However, the corresponding analysis would be far more convoluted; having the labelling variable on the d-dimensional unit cube makes the various technical steps more transparent. These considerations are therefore detailed separately in Section 5. For both points it is crucial to observe the lower bound |x(t, s_2) − x(t, s_1)| ≳ |x_0(s_2) − x_0(s_1)|. In 1D, the initial separation at the continuous level is replaced by the assumption that x_0 is increasing, whereas in higher dimensions this assumption is replaced by the requirement that x_0 is bi-Lipschitz.
Finally, we remark that the method here extends the main results of [1], in the sense that it simultaneously covers functions s which are either Lipschitz or have a jump discontinuity at the origin. This last observation is simple but not obvious; for example, when the singularity comes from the influence term, as mentioned earlier, it is not clear how to unify both results.

We organize the paper as follows. Section 2 reviews the terminology introduced in [1] in the specific context of system (1.5); in particular, it includes preliminaries such as the existence and uniqueness of classical solutions to system (1.1) in the present setting and other basic properties of solutions (of course, uniqueness is not strictly needed for the purpose of the graph or mean field limit). Section 3 is a continuous adaptation of Section 2, namely well-posedness for the 1D graph limit equation, for which uniqueness is essential. Section 4 contains the main evolution estimate leading to the 1D graph limit and clarifies the link between the mean field and graph limits. In Section 5 we introduce multi-dimensional Riemann sums and study the graph limit for arbitrary d > 1.

Preliminaries

2.1 The ODE system. Recall that the system which will occupy us is (2.1), namely system (1.5) with s(x) = x/|x| for x ≠ 0. When d = 1, which is the case of main interest here, s coincides with the sign function.

We start by reviewing the well-posedness theory established for System (2.1) in [10]. As usual for ODEs with weakly singular right-hand sides, the argument in [10] rests on the theory of differential inclusions developed by Filippov [7] and on the fact that opinions remain separated for all times provided this is true initially. Unless necessary, we omit the superscript N on the opinions and weights.

Proposition 2.1 ([10, Proposition 3]). Suppose a : R^d → R^d is Lipschitz with a(0) = 0 and x_i^0 ≠ x_j^0 for all i ≠ j. Then there exists a unique classical solution (x^N(t), m^N(t)) to System (2.1) with x_i(t) ≠ x_j(t) for all i ≠ j and t ≥ 0.

We also record the following basic properties of solutions, which appear implicitly or explicitly in [1,10] and will be used in the proofs of the main theorems.

Lemma 2.1. i. (Conservation of total mass.) Σ_i m_i(t) is constant in time. ii. (Uniform bound in time on opinions.) If |x_i^0| ≤ X, then the opinions remain uniformly bounded for all t ∈ [0, T]. iii. (Uniform bound in time on weights.) m_i(t) > 0 for all t ∈ [0, T], with the quantitative estimate (2.3). iv. (Opinions are separated.) There is a constant C = C(L, T) > 1 such that for all t ∈ [0, T], C^{-1}|x_i^0 − x_j^0| ≤ |x_i(t) − x_j(t)| ≤ C|x_i^0 − x_j^0|.

Proof. For i, see Proposition 2 in [10]. For ii, fix a time τ > 0 such that m_j(t) ≥ 0, j = 1, ..., N, for all t ∈ [0, τ] (such a time exists by continuity). Using i and the assumption a(0) = 0, we bound the growth of |x_i(t)| on [0, τ], and Gronwall's lemma yields the bound there. We prove iii, from which ii follows for all t ∈ [0, T]. We first explain why m_i(t) > 0: suppose on the contrary that m_i(t) ≤ 0 for some 1 ≤ i ≤ N and t ∈ [0, T], and let τ be the first such time. Then the bound from ii and the conservation of total mass from i yield a differential inequality for m_i on [0, τ); integrating in time and letting t ↗ τ gives a contradiction. Therefore m_i(t) > 0 for all t ∈ [0, T], which in turn implies that (2.3) holds for all t ∈ [0, T]; the same estimate applied on the whole interval yields the asserted bound. Point iv is Proposition 7 in [10].
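A forward-Euler sketch of System (2.1) in 1D; the exact discrete weight functions ψ_i^N of formula (2.2) did not survive extraction, so the sketch assumes the symmetrized pairwise-competition form consistent with the kernel S(x, y, z) = (1/2)(a(z − y) + a(z − x)) s(x − y) appearing in the mean field equation of Section 4, with the illustrative choices a(u) = u and s = sign. The antisymmetry of S in its first two arguments makes the total mass exactly conserved, which the last line checks.

```python
import numpy as np

def simulate_pairwise_competition(x0, m0, T=1.0, dt=1e-3):
    """Euler scheme for x_i' = (1/N) sum_j m_j a(x_j - x_i),
    m_i' = m_i (1/N^2) sum_{j,k} m_j m_k S(x_i, x_j, x_k) in 1D."""
    x, m = np.asarray(x0, float).copy(), np.asarray(m0, float).copy()
    N = x.size
    a = lambda u: u            # Lipschitz influence with a(0) = 0 (assumption)
    s = np.sign                # projection on the unit sphere in 1D
    for _ in range(int(T / dt)):
        dx = x[None, :] - x[:, None]                       # dx[i, j] = x_j - x_i
        xdot = (a(dx) * m[None, :]).mean(axis=1)
        S = 0.5 * (a(x[None, None, :] - x[None, :, None])   # a(x_k - x_j)
                   + a(x[None, None, :] - x[:, None, None]))  # a(x_k - x_i)
        S = S * s(x[:, None, None] - x[None, :, None])      # s(x_i - x_j)
        mdot = m * np.einsum("j,k,ijk->i", m, m, S) / N**2
        x, m = x + dt * xdot, m + dt * mdot
    return x, m

rng = np.random.default_rng(1)
x0 = np.sort(rng.uniform(-1, 1, 20))   # initially separated opinions
m0 = np.ones(20)
x, m = simulate_pairwise_competition(x0, m0)
print(m.sum())   # total mass stays equal to 20 (conservation of total mass)
```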
2.2 The graph limit equation. In the graph limit we attach to the flow of System (1.1) the Riemann sums (2.4), i.e., the piecewise-constant functions x_N(t, s), m_N(t, s) of (1.3). The functional Ψ : I × L^∞(I) × L^∞(I) → R and the functions x_0 : I → R^d, m_0 : I → R are given, and the functions ψ_i^N and the initial data x_i^{0,N}, m_i^{0,N} are defined in terms of them through formula (2.5). If Ψ is given by (2.7), one readily checks that the ψ_i^N of formula (2.2) are recovered via formula (2.5). Notice that by Lebesgue's differentiation theorem, x_N(0, s), m_N(0, s) approximate x_0(s), m_0(s) well, because for a.e. s we have pointwise convergence as N → ∞. It is also worth remarking that, unlike in the mean field limit regime, where the initial data realizing the initial convergence can be chosen from a set of full measure, here we use a very specific choice of initial data; in particular, all initial data of the form specified by formula (2.6) constitute a set of measure 0, which means that the probabilistic methods at our disposal in the mean field limit become useless in the graph limit. We return to this point in Section 4.

The functions x_N(t, s), m_N(t, s) are governed by the following equations, which should be compared with the graph limit Equation (1.4).

Proposition 2.2. Let the assumptions of Proposition 2.1 hold and let (x^N(t), m^N(t)) be the solution to System (2.1). Then x_N, m_N satisfy the discrete analogue of (1.4).

Proof. We start with the equation for x_N(t, s). Fix s ∈ [(i_0 − 1)/N, i_0/N); computing ∂_t x_N(t, s) from the ODE for x_{i_0} and rewriting the sum over j as an integral over I of m_N(t, s*) a(x_N(t, s*) − x_N(t, s)) gives the claim. The equation for m_N is obtained from analogous identities.

3 Well-posedness for the graph limit equation

The most restrictive assumption for the graph limit is H3', since s is not Lipschitz in the problems of interest mentioned in the introduction; see [1,5]. Furthermore, any solution to the graph limit equation (1.4) is expected to satisfy an estimate analogous to inequality iv of Lemma 2.1, namely |x(t, s_2) − x(t, s_1)| ≳ |x_0(s_2) − x_0(s_1)|, which implies Lipschitz continuity along the trajectories provided x_0 is one-to-one. This also leads us to remark that the initial separation in the microscopic system (1.5) can be replaced by the assumption that x_0 is one-to-one in the infinite-dimensional case; this means that we need to be able to evaluate x_0 pointwise, so a more natural assumption is x_0 ∈ C(I) rather than x_0 ∈ L^∞. To summarize, in contrast to [1], we assume the following hypotheses:

H1. a : R → R is Lipschitz with constant L and a(0) = 0.
H2. x_0 ∈ C(I) is one-to-one and |x_0| ≤ X for some X > 0; m_0 ∈ L^∞(I) is non-negative with ∫_I m_0(s) ds = 1.
H3. i. The restrictions s|_{(0,∞)} and s|_{(−∞,0)} are Lipschitz, i.e., there is some S > 0 such that each restriction is S-Lipschitz; ii. s is bounded (by S_∞) and odd.

Clearly, the sign function is a particular example of hypothesis H3. In the following lemma, which is a variant of [1, Lemma 3], the new considerations discussed above are taken into account.

Lemma 3.1. 1. Under H1-H3, for two pairs (x_1, m_1) and (x_2, m_2) with unit-mass weights, the right-hand sides of the x-equation differ by at most a multiple of sup_I |x_1(t, ·) − x_2(t, ·)| plus a weighted L^1-distance of m_1 and m_2. 2. An analogous stability estimate holds for the m-equation.

Proof (Step 1). For readability, we suppress the time variable unless unavoidable. Setting ā(s, s*, s**) := a(x(s**) − x(s)) + a(x(s**) − x(s*)), we expand the difference of the right-hand sides as in (3.3). Using the assumption that ∫_I m_1(s) ds = ∫_I m_2(s) ds = 1, the first integral on the right-hand side of (3.3) is estimated directly; integrating (3.3) in s over I and estimating the remaining term by the Lipschitz bounds of H1 and H3, we obtain the claim.

Lemma 3.2. Let hypotheses H1-H3 hold. Then there exists a unique solution (3.4) to the decoupled equation; the solution x is such that s ↦ x(t, s) is one-to-one, and the solution m is non-negative with ∫_I m(t, s) ds = 1.

Proof. Step 1: existence and uniqueness for the equation for x. Fix a sufficiently small T > 0.
Let M_{x_0} be the set of functions in C([0, T] × I) with x(0, s) = x_0(s), viewed as a complete metric space, and define the operator A(x)(t, s) = x_0(s) + ∫_0^t ∫_I m(τ, s*) a(x(τ, s*) − x(τ, s)) ds* dτ. The choice of T ensures 2L‖m‖_{∞,∞}T < 1, making the Banach contraction principle available, which yields a unique solution x ∈ C([0, T] × I) of the integral equation. A standard iteration argument gives existence and uniqueness on the whole interval [0, T]. Evidently the map τ ↦ ∫_I m(τ, s*) a(x(τ, s*) − x(τ, s)) ds* is continuous, so by the fundamental theorem of calculus x ∈ C^1([0, T]; C(I)). Next we claim that this solution must be one-to-one.

Claim 3.1. For all t ∈ [0, T] and all s_1, s_2 ∈ I, it holds that |x(t, s_2) − x(t, s_1)| ≳ |x_0(s_2) − x_0(s_1)|. In particular, s ↦ x(t, s) is increasing.

Proof. We first show that |x(t, s_2) − x(t, s_1)|² > 0 for all t ∈ [0, T]. Assume to the contrary that there exist t ∈ [0, T] and s_2 > s_1 such that x(t, s_2) = x(t, s_1), and let τ_0 be the first such time. Then for all t ∈ [0, τ_0) we have |x(t, s_2) − x(t, s_1)| > 0, and a Gronwall-type differential inequality gives a positive lower bound which, letting t ↗ τ_0, produces a contradiction. Repeating the estimate (3.5) then shows the comparability for all t. By continuity and the assumption that x_0 is increasing, it follows that s ↦ x(t, s) is increasing.

Step 2: existence and uniqueness for the equation for m. We define an operator K_{m_0} on the set M_{m_0} of candidate weights with m(0, ·) = m_0 and first observe that K_{m_0} maps M_{m_0} into itself: non-negativity of K_{m_0}(m) follows from the choice of T, and the unit integral follows from the oddness of s, since after the change of variables s ↔ s* the second integral on the right-hand side changes sign. We view M_{m_0} as a complete metric space. For m, n ∈ M_{m_0}, point 1 of Lemma 3.1 yields a contraction estimate, and the choice of T makes the Banach contraction theorem available, ensuring a unique solution m ∈ M_{m_0} of the fixed-point equation on the short interval. Iterating the process k > 16LTXS_∞τ times gives existence and uniqueness of a solution on all of [0, T]. We finally claim the upgrade m ∈ C^1([0, T]; L^∞(I)): taking (3.6) into account, the fixed-point equation shows that ∂_t m exists in L^∞(I) and is continuous in time.

The coupled equation. The well-posedness of the decoupled equation serves as the main tool for proving well-posedness of the original system.

Proof. Step 1: existence. We define recursively the following sequence of functions (x^n, m^n): i. For all t ∈ [0, T] and all s ∈ I we set x^0(t, s) = x_0(s), and for all t ∈ [0, T] and a.e. s ∈ I we set m^0(t, s) = m_0(s). ii. If (x^{n−1}, m^{n−1}) have been defined, we define (x^n, m^n) to be the unique solution, guaranteed by Lemma 3.2, of the decoupled equation with data (x^{n−1}, m^{n−1}). First note that x^n is uniformly bounded with respect to n in C([0, T] × I); in view of inequality (3.7), this also implies a uniform bound in n for the weights. The proof of existence then essentially boils down to proving that (x^n, m^n) is a Cauchy sequence in C([0, T]; C(I)) ⊕ C([0, T]; L^1(I)).
Estimating sup_s |x^{n+1} − x^n|, integrating in s over I, and utilizing Lemma 3.1 for the two inner integrals, we obtain a recursive inequality for u_n(t), the supremum of the differences. Collecting inequalities (3.9) and (3.10), an easy induction shows that the differences decay geometrically; it follows that (x^n, m^n) is Cauchy and converges to a limit (x, m) with x(0, s) = x_0(s). We explain how to pass to the limit as n → ∞ in the equation for m^n; the passage for the equation of x^n is a standard verification left to the reader. By Claim 3.1, Lemma 3.1 is applicable and entails convergence of the right-hand side of the equation for m^n; by uniqueness of limits, for all t ∈ [0, T] and a.e. s ∈ I the limit (x, m) satisfies the coupled equation. The regularity upgrade follows by exactly the same reasoning as in Lemma 3.2.

Step 2: uniqueness. Suppose we are given two solutions (x_1, m_1) and (x_2, m_2) with the same initial data. Combining the integral formulations with Lemma 3.1 yields a Gronwall inequality for the distance between the two solutions, which forces m_1 = m_2 and x_1 = x_2.

4 The graph limit and consequences

This section is devoted to obtaining a Gronwall estimate on the time-dependent quantity ξ_N(t) + ζ_N(t), measuring the L²-distances between m_N and m and between x_N and x, where x_N, m_N are given by formula (1.3) and (x, m) is the corresponding solution to Equation (3.8). We modify the argument of Theorem 1 in [1] to our weakly singular setting. The estimate for ζ_N(t) reflects the main novelty of this section. The estimates we obtain are locally uniform in time. The symbol ≲ stands for inequality up to a constant which may depend only on L, M, X, T, S, S_∞. The main theorem is:

Theorem 4.1. Let hypotheses H1-H3 hold. Let (x, m) ∈ C^1([0, T]; C(I)) ⊕ C^1([0, T]; L^∞(I)) be the solution to Equation (3.8), and let (x^N, m^N) ∈ C^1([0, T]; R^{2N}) be the solution to system (2.1). Then ξ_N(t) + ζ_N(t) → 0 as N → ∞, uniformly on [0, T].

Proof. Step 1: the time derivative of ζ_N(t). The estimate for the time derivative of ζ_N(t) reflects the main difference from the argument in [1]. By the Lebesgue differentiation theorem, for each t ∈ [0, T] it holds that g_N(t, s) → 0 as N → ∞ for a.e. s ∈ I. In addition, ‖x‖_{C([0,T]×I)} and ‖m‖_{C([0,T];L^∞(I))} are bounded, which implies that g_N(t, s) is uniformly bounded with respect to N, so by the dominated convergence theorem the second integral in (4.1) tends to 0. For the first integral, note that at this stage we cannot directly appeal to estimate (3.1), since it was formulated for x which are one-to-one in the variable s. The main difference lies in the estimate of the second integral, which is now bounded by ‖x_N(t, ·) − x(t, ·)‖_2² up to an error term which decays to 0 as N → ∞. Precisely:

Lemma 4.1. i. The first integral is bounded by ζ_N plus a vanishing error; ii. the second is bounded by ξ_N + ζ_N plus a vanishing error.

Proof. Unless unavoidable, we suppress the time variable. i. Thanks to Lemma 2.1, the estimate is almost identical to the estimate demonstrated in (3.1).
The only minor difference is that the first integral is handled by squaring and integrating in s over I, so that inequality (4.4) produces the bound in i. ii. We concentrate on the estimate of ∫_I m²(t, s) |J_2(t, s)|² ds. For each s ∈ I we split the integral and estimate each piece. By assumption H3, and since by Lemma 2.1 and Claim 3.1 there exists a constant C > 1 controlling the relevant differences, the singular factor is under control. By the Lebesgue differentiation theorem, for a.e. s ∈ I the integrands converge for a.e. s*. For all s ∈ I, the set {s* : x_0(s*) = x_0(s)} (being an atom) is null by H2, and therefore the singular set is negligible for a.e. s. By dominated convergence we obtain the vanishing of the error terms (4.8) and (4.9); the combination of (4.5), (4.6), (4.8) and (4.9) implies the announced claim. Gathering i, ii and (4.2) gives the estimate for the time derivative of ζ_N.

Step 2: the time derivative of ξ_N(t). The time derivative of ξ_N(t) is handled exactly as in [1]; following the argument there, one finds ξ̇_N(t) ≲ ξ_N(t) + ζ_N(t).

Step 3: conclusion. The combination of inequalities (4.10) and (4.11) yields a differential inequality for ξ_N + ζ_N; applying Gronwall's lemma, and since the error term is uniformly bounded, (4.3) and dominated convergence conclude the proof.

In the last part of this section, we recall how to obtain, as a consequence, a special version of the mean field limit for the empirical measure associated with System (2.1). We start by pointing out that the existing literature does not currently cover the well-posedness theory of the mean field equation, namely a non-local non-homogeneous transport equation whose source term involves the kernel S(x, y, z) = (1/2)(a(z − y) + a(z − x)) s(x − y) integrated against dµ(t, y) dµ(t, z).

5 The case d > 1

In this section we explain how to extend the graph limit to arbitrary dimension d > 1. In some places the proof requires only minor modifications, and we therefore concentrate on the parts which require special treatment.

5.1 The graph limit equation for d > 1. The first notable difference compared with the case d = 1 (or the work [1]) is reflected in the definition of the Riemann sums. Instead of labelling the opinions by a single index, we label them along a d-dimensional array of indices. This is a particular case of the metric-valued labelling procedure introduced in [12], with labelling space [0, 1]^d. At the level of the graph limit equation, this choice corresponds to posing the equation on [0, T] × I^d rather than [0, T] × I.
Indeed, the fact that x(t, ·) is a map from I^d to itself makes it possible to consider bi-Lipschitz initial data, which is crucial for the sake of properly analyzing the singularity in s, as is clarified in Lemma 5.1. This labelling procedure does not have any modelling interpretation, since particles (opinions) are still exchangeable or indistinguishable; it is needed solely for technical reasons. As we mentioned at the beginning of the paper, it would still be possible to go back to using [0, 1] as a labeling space through a change of variable, since [0, 1] and [0, 1]^d are isomorphic as measurable spaces per the Borel isomorphism theorem. But, obviously, this would lead to painful technical assumptions to replace the bi-Lipschitz condition on x(t, ·), while the analysis is otherwise much more transparent when considering [0, 1]^d. Precisely put, we take the number of opinions to be perfect powers of d, in which case the opinion dynamics system becomes the following system of (d + 1)

We attach to the flow of System (5.1) the following "Riemann sums"-like quantities, as in the one dimensional case, defined by

Here the labeling variable s varies over the d-dimensional unit cube I^d. Generalizing the constructions of Section 2.2, the functional Ψ :

m_0(s) ds. (5.5)

If Ψ is given by

then one readily checks that the ψ_i^N in Formula (2.5) are recovered via Formula (5.4). Notice that x_N(0, s) and m_N(0, s) approximate x_0(s), m_0(s) well, because by Lebesgue's differentiation theorem for a.e. s ∈ I^d we have the pointwise convergence

The functions x_N(t, s), m_N(t, s) defined through Formulas (5.2), (5.3) are governed by the following equation, which is the obvious higher dimensional version of Equation (1.4).

Proposition 5.1. Let the assumptions of Proposition 2.1 hold and let (x_N(t), m_N(t)) be the solution to System (5.1) on [0, T]. Let x_N, m_N be given by (5.2) and (5.3) respectively. Then (5.6)

Proof. We start with the equation for x_N(t, s). Fix s ∈ Q_{i_0}. Then

On the other hand

The equation for m_N is due to the following identities

Well posedness for d > 1

The point which requires most care for the proof of well posedness is point 2. in Lemma 3.1. Let us first state the assumptions we impose on the initial data and the other functions involved.

ii. There is some locally

Remark 5.1. The assumption A2 that x_0 is bi-Lipschitz is strictly stronger than the assumption that it is 1-

is a particular example of hypothesis A3, as can be seen through the following elementary inequalities

Furthermore, it is clear that the condition i. in A3 is more general than the assumption s ∈ Lip(R^d).

• The function x_2 is bi-Lipschitz in the labeling variable, i.e. there is some C > 1 such that for all (t, s, s*)

Then

where we define

We start with the estimate on J_2(t, s). Using assumption A3, we have that

From the bi-Lipschitz assumption on

The estimate for J_1(t, s) follows in a similar way.

The symbol ≲ stands for inequality up to a constant which may depend only on L, L_0, M, X, T, S, S_∞. By the Lebesgue differentiation theorem, for each t ∈ [0, T] it holds that g_N(t, s) → 0 as N → ∞ pointwise for a.e. s ∈ I^d. In addition, ‖x‖_{C([0,T]×I^d)} and ‖m‖_{C([0,T];L^∞(I^d))} are bounded, which implies that g_N(t, s) is uniformly bounded (with respect to N), so that by the dominated convergence theorem we find that for each t ∈ [0, T] it holds that ‖g_N(t, ·)‖_1 → 0 as N → ∞. We now estimate the first integral as

for some compact set K ⊂ R^d.

Step 2.
The time derivative of ξ_N(t). The time derivative of ξ_N(t) is mastered exactly as in [1]. Following the argument in [1] one finds that

ξ̇_N(t) ≲ ξ_N(t) + ζ_N(t). (5.8)

Step 3.

Remark 5.3. Note that Theorem 5.1 proves convergence with respect to the L^1 norm, whereas Theorem 4.1 proves convergence with respect to the L^2 norm. This minor difference arises because when d = 2 the L^2 norm of 1/|s| blows up, which prevents getting the inequality ii. in Lemma 5.2. Notice that for any d ≥ 3, the L^2 approach is perfectly valid.

Remark 5.4. Essentially the same argument as Theorem 4.2 allows one to conclude the weak mean field limit from the graph limit in higher dimension.

Here the functions x_0 : I^d → R^d, m_0 : I^d → R are given, and the functions ψ_i^N and the initial data x_i^{0,N}, m_i^{0,N} (1 ≤ i ≤ N^d) are defined in terms of these functions through the following formulas

∬ m(s) m(s*) (|s(x_1(s) − x_1(s*))| + |s(x_2(s) − x_2(s*))|) ds ds* ≤ 6 L M ‖S‖_{L^1(K)} sup_{I^d} |x_1(t, ·) − x_2(t, ·)|.
U(1) Fields from Qubits: an Approach via D-theory Algebra

A new quantum link microstructure was proposed for the lattice quantum chromodynamics (QCD) Hamiltonian, replacing the Wilson gauge links with a bilinear of fermionic qubits, later generalized to D-theory. This formalism provides a general framework for building lattice field theory algorithms for quantum computing. We focus mostly on the simplest case of a quantum rotor for a single compact $U(1)$ field. We also make some progress for non-Abelian setups, making it clear that the ideas developed in the $U(1)$ case extend to other groups. These in turn are building blocks for $1+0$-dimensional ($1+0$-D) matrix models, $1+1$-D sigma models and non-Abelian gauge theories in $2+1$ and $3+1$ dimensions. By introducing multiple flavors for the $U(1)$ field, where the flavor symmetry is gauged, we can efficiently approach the infinite-dimensional Hilbert space of the quantum $O(2)$ rotor with increasing flavors. The emphasis of the method is on preserving the symplectic algebra, exchanging fermionic qubits for sigma matrices (or hard bosons), and developing a formal strategy capable of generalization to the $SU(3)$ field for lattice QCD and other non-Abelian $1+1$-D sigma models or $3+1$-D gauge theories. For $U(1)$, we discuss briefly the qubit algorithms for the study of the discrete $1+1$-D Sine-Gordon equation.

I. INTRODUCTION

Lattice field theory, particularly Wilson's formulation of quantum chromodynamics [1], now plays a central role in high energy physics, being capable of ab initio precise predictions in support of the search for physics beyond the standard model (BSM). This is due to a firm theoretical foundation, combined with spectacular advances in algorithms on classical computers soon to approach the Exascale. It is generally accepted that the Wilson Euclidean (imaginary-time) lattice action lies in the basin of attraction of QCD, converging to the exact answer in the infinite volume (IR) and zero-lattice-spacing (UV) limits.

However, standard Monte-Carlo integration is incapable of real-time dynamics. One way to change this paradigm could be quantum computing. This requires not only the development of quantum computing technology, but also the transformation of the lattice field theories to an appropriate Hamiltonian Ĥ expressed in terms of qubits (sigma matrix operations), as first noted by Feynman in 1982 [2]. The first step, converting the lattice action to a Hamiltonian formulation, is straightforward. For example, for QCD, by taking the time continuum limit of the transfer matrix in Wilson's lattice QCD, one obtains the Kogut-Susskind Hamiltonian [3] operator, where {⟨x, y⟩} is the set of all pairs of nearest-neighbor lattice sites with the specified direction x → y, i.e.
all the directed lattice links. The plaquette operators U_{μν}(x) are defined as

with the Wilson link operators U(x, y) ≡ exp[iA(x, y)] determined by the gauge field A(x, y) [4] in the adjoint of the gauge group. We refer to E(x, y), which are conjugate to the gauge fields A(x, y), as the electric field operators. Hence, E²(x, y) is the Casimir of the gauge group. The quark term is Ψ†D[U]Ψ. The symplectic algebra between E(x, y) and U(x, y) on each link ⟨x, y⟩ preserves the exact spatial gauge invariance and Gauss' law. It is then anticipated, based on Osterwalder-Schrader positivity, that the unitary evolution operator U(t, 0) = exp[−it Ĥ_QCD] of the lattice Hilbert space also converges to the exact quantum dynamics as the UV lattice spacing and the finite volume IR cutoff are removed.

The second step, converting the problem into qubit operators, is more difficult, at least on all proposed hardware to date. The main difficulty comes from the fact that the local variables on a single link, when quantized, act on an infinite-dimensional Hilbert space. This is the function space L²(G) on the group manifold of the local gauge group G. Roughly speaking, we have a wavefunction ψ(g) of the classical group variable g ∈ G, which needs to be normalizable. For example, for QCD, the infinite-dimensional Hilbert space of the SU(3) group manifold at each link must be drastically reduced. On modern classical computers, this is solved by the illusion of the continuum with a mild 32- or 64-bit truncation of floating-point arithmetic. On the other hand, this Hilbert space must be represented by a small number of qubits per lattice site on proposed quantum hardware with a limited number of qubits at present. The problem is to invent a new microstructure for a qubit Hamiltonian operator that falls into the universality class of the Kogut-Susskind Hamiltonian. At least in that sense, when we take the large volume and small lattice spacing limit, we should recover exact QCD for the low energy states near the vacuum.
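For reference, the Kogut-Susskind Hamiltonian takes the following form in the common textbook normalization (the authors' conventions may differ by constants):

\[
\hat H_{\mathrm{QCD}} \;=\; \frac{g^{2}}{2}\sum_{\langle x,y\rangle} E^{2}(x,y)\;-\;\frac{1}{2g^{2}}\sum_{x,\;\mu<\nu}\Big[\,U_{\mu\nu}(x)+U_{\mu\nu}^{\dagger}(x)\,\Big]\;+\;\Psi^{\dagger}D[U]\,\Psi,
\]

with the electric, plaquette, and quark terms as described in the surrounding text.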
A general framework, referred to as Quantum Links [5][6][7] or, more properly, its generalization called D-theory [8], has been proposed to achieve this. In D-theory, the E and U fields are replaced with the quantized Ê and Û operators, respectively, on each link. These operators are represented as bilinears of a small set of fermionic operators. The fermionic representation is an explicit example of what Bravyi and Kitaev [9] refer to as local fermionic modes (LFM), whose algebra can be represented as products of hard boson sigma matrices. The basic heuristic to plausibly reach the correct universality class is: (i) to wisely choose the base lattice to satisfy a maximal set of space symmetries and (ii) to find field operators which still satisfy the basic symplectic algebra of the link operators and their conjugate electric operators [10]. It is plausible that, by preserving the lattice symmetries and the symplectic structure, many simple examples can be found in the basin of attraction of continuum field theory, as indeed first conjectured by Feynman in 1982 [2]. Preserving the fundamental symplectic algebra opens up a range of qubit realizations via D-theory for efficient quantum computing, as summarized recently by Wiese in [11] and in an alternative qubit construction by Liu and Chandrasekharan in [12]. Here, we restrict our investigation to the simplest example of field operators on the compact G = U(1) group manifold. Already this quantum rotor provides an interesting and non-trivial building block for quantum spin and gauge theories.

Of course, establishing that a Hamiltonian is in the desired universality class is a difficult problem. It generally requires both theoretical insight and numerical evidence. The original D-theory paper argued it for asymptotically free chiral models in 1+1 dimensions and gauge theories in 3+1 dimensions. The universality would be valid with only a logarithmically growing layering of a single qubit in an extra dimension [13]. While this is a modest increase in the volume, the discovery of other options is anticipated by the evidence found in [14] of a lattice Hamiltonian for the 1+1-D non-linear O(3) sigma model with only two layers. The qubit systems exhibit both the UV asymptotically free fixed point and the IR universality in the continuum. For our U(1) example, the reader is also referred to the study by Zhang, Meurice and Tsai [15]. In their work, it is noticed that the Berezinskii-Kosterlitz-Thouless (BKT) phase transition, which is expected for the continuum 2-D O(2) (XY) model, is absent for a 3-state truncation per site but appears for a 5-state truncation or more. The lesson here is that if the truncation is too drastic, one might be outside of the desired universality class.
Here, we consider the limited question of how the use of M copies of fermionic qubits (referred to as a flavor index in [13,16]) at each link can converge locally to the Kogut-Susskind Hamiltonian as M → ∞. This sequence provides a qubit implementation that can be explored with respect to universality and efficient quantum computing, with the hope that very few qubits per lattice volume suffice. This paper is also restricted to the simplest example, as we mentioned: a compact U(1) field manifold, formulated in a way that is capable of generalization to non-Abelian group manifolds. We obtain a finite approximation of L²(S¹), the Hilbert space of the U(1) theory we study, as the quantization of the local variable. Even in this Abelian example, the Lagrangian formalism is mapped to a non-trivial SU(2) quantum rotor as a Hamiltonian, a basic ingredient of qubit codes and even of their hardware realization [17,18].

Applications are interesting for a variety of quantum field theories, not just for gauge theories. Whether or not we have certain gauge constraints, what matters is that the fields give an interesting local Hilbert space structure at a site or link. The main analysis of local fields can be applied to examples such as the XY model, the Sine-Gordon theory or the Schwinger model in 1+1 dimensions, and gauge theories in 2+1 and 3+1 dimensions. For example, in the discretized version of the Sine-Gordon model, the local variable can also be taken to be a periodic variable living on each of the lattice sites rather than on the links. A similar comment applies to non-linear sigma models on group manifolds, where we would obtain L²(G) at each site, rather than L²(G) on links with the Gauss' law constraints. In this sense, this paper is more concerned with the individual manifold for local fields, either on a link or a lattice site, than with the problem of a full quantum theory. We are basically asking how to generate local variables that become bosons (with a non-trivial manifold and symmetry structure) when the cutoff on the local variable is removed, while the symmetry structure is realized exactly.

The paper is organized as follows. In Sec. II, we present the general algebraic constraint of quantum links for the U(N) field with multiple flavors, which is then specialized to U(1); we also comment on how the quantum links with gauged flavor give a description that is a truncation of the Hilbert space of more general group manifolds with no additional states. In Sec. III, we define the truncation of the U(1) quantum Hamiltonian, both for the D-theory flux cutoff and for the Z_N clock rotor field truncation. In Sec. IV, we present the translation of the U(1) quantum link operators from fermionic operators to sigma matrices. In Sec. V, we numerically compare the spectra of the truncated models in our formalism as well as that of the Z_N clock rotor field truncation. Sec. VI considers briefly the quantum circuits to implement the 1+1-D XY and Sine-Gordon models for the lowest triplet truncation, and studies the phase transition by measuring the entanglement entropy of the ground states. In Sec. VII, we elaborate further on our results.
II. SYMPLECTIC ALGEBRA AND UNIVERSALITY

A Hamiltonian for a classical mechanical system is defined by the symplectic structure of its P-Q coordinates, expressed as Poisson brackets. A quantum Hamiltonian, just as in the classical case, is also defined by the symplectic structure, promoting the Poisson brackets to canonical commutators. Using the Kogut-Susskind Hamiltonian as an example to motivate the D-theory construction, we first double the phase space, introducing a left-right pair of electric fields or gauge generators, E_L(x, y) and E_R(x, y), on each link, and a pair of forward and backward link operators U(x, y) and U(y, x) = U†(x, y). The fermionic matter term, Ψ†D[U]Ψ, is straightforward to add, but not essential for our current discussion. At first, it might seem strange that one has to double the variables. This is quite natural when one is studying motion on a group manifold, because there are two possible group actions on G, by left and right multiplication. There is a set of generators for each of these transformations, i.e. the electric fields.

The full symplectic algebra on each link ⟨x, y⟩ in the doubled phase space is summarized as

where the λ_α matrices are the generators of G in the fundamental representation. E_L and E_R generate two independent copies of G, namely G_L and G_R respectively:

In other words, the U variables transform in the bifundamental representation (fund., anti-fund.) of G_L × G_R, rather than in the adjoint of G as in the ordinary construction of gauge theories, where G_L is generated by E_L and G_R is generated by E_R, while the U† variables are in the conjugate representation (anti-fund., fund.).

It is also known that it is convenient to study the left and right invariant forms, U^{-1}dU and dU U^{-1}, which lead to the velocities v_L = U^{-1}U̇ and v_R = U̇U^{-1}. Each of these can serve as a basis for velocities, and they are clearly related to each other by

When one is careful with these velocities, we get canonical conjugates to the group variables that encode the symmetry. These are Lie algebra valued, generating group transformations in the Hamiltonian sense, and the left and right actions on G commute with each other. The original Hamiltonian is then recovered with the constraint of unitarity and the constraint inherited from the velocities Eq. (6) on each link ⟨x, y⟩

Preserving the symplectic structure would mean that either we keep Eq. (4) and Eq. (5), or we keep Eq. (7). If we keep both, we have the full L²(G), which is infinite-dimensional.

A. Fermionic D-Theory Algebra

In this section, we specifically pick the gauge group to be G = U(N). This still demonstrates the general framework of the D-theory discretization, beyond the simpler Abelian case U(1) which will be our main focus in the later sections.

A straightforward discrete representation that exactly preserves the symplectic algebra in Eqs. (4)-(5) replaces the single link field on a compact group manifold by a bilinear of fermion operators as

Notice that the matrix elements of the link operators are no longer complex numbers, but rather operators. We denote this by putting "hat" notation on top of the operators. The scalar product implies a sum over the vector of M flavors of creation and destruction operators:

The indices i and j are color indices running from 1 to N. All the 4NM fermionic operators a_m^i(x, y) and b_m^i(x, y) per link obey the standard anti-commutator relations of single fermionic degrees of freedom, as introduced in [13,16]. The symplectic algebra Eq.
(4) fixes the representation of the electric flux:

reproducing the exact gauge algebra in Eq. (5). Although this seems cumbersome, the a operators carry the left action and the b operators carry the right action. In this way, E_L and E_R have been separated into completely distinct variables. Each flavor of a carries the same representation with respect to the left Lie algebra: the fundamental. The same is true for b, but carrying the antifundamental. The flavor index m only appears in sums, so the flavor symmetry U(M) can be thought of as a local constraint on each link. This constraint eventually ties the left and the right actions to each other.

The resulting fermionic qubit form, referred to by Bravyi and Kitaev [9] as local fermionic modes, is a small finite Fock space on each lattice link. The original link variables U(x, y) commute with each other as a result of the unitarity constraint Eq. (7), whereas in the fermionic representation this is not maintained. The only non-zero commutator is local to each link:

Thus, a link matrix is no longer normal and, as a consequence, breaks the unitarity constraint. The symplectic algebra at each link treats E_L and E_R as independent velocity coordinates, conjugate to the non-commuting position operators Û and Û†.

This breaking should be interpreted in its entirety as an irrelevant UV cutoff effect. As we go to the continuum limit, with sums over multiple paths between distant sources, this non-commutation due to infrequent intersections at the cutoff scale should vanish. Moreover, when averaging over paths for long distances, we would also abandon the constraint U†U = 1, which is not satisfied when we use the expectation values for U and U† separately.

It should also be noted that this construction of operators in the multi-flavor fermionic Hilbert space satisfying the symplectic algebra is not unique. Rather, it provides a general framework with multiple solutions, which can be adapted to better approximate the infinite-dimensional Hilbert space with a finite-dimensional space and to provide alternative qubit implementations to optimize quantum codes. Indeed, this flexibility of the D-theory framework is what we exploit in the current application to U(1). As we will show explicitly for the U(1) example, the multi-flavor fermionic space factorizes into superselection sectors, which can be modified to give a sequence of bosonic qubit models restoring the zero commutator in the limit M → ∞. It is useful to construct a variety of D-theory candidates to explore our U(1) examples, which will be carried out in Sec. IV.
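Anticipating the qubit realization derived in Sec. IV for U(1) (Û → Σ_m σ_m⁺, Ê → Σ_m σ_m^z/2, written here without the normalization factor the paper introduces later), the structure just described is easy to verify numerically. The following sketch is our own illustration, not code from the paper, and uses only standard spin conventions:

```python
import numpy as np
from functools import reduce

def collective(op, M):
    """Sum of a single-qubit operator acting on each of M qubits."""
    I = np.eye(2, dtype=complex)
    return sum(reduce(np.kron, [op if k == m else I for k in range(M)])
               for m in range(M))

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma^z

M = 4                       # number of flavors (qubits) per link
U = collective(sp, M)       # qubit realization of U-hat
E = collective(sz, M) / 2   # qubit realization of E-hat (= L_z)

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(E, U), U))                           # True: [E, U] = U preserved
print(np.allclose(comm(U, U.conj().T), np.zeros_like(U)))   # False: unitarity broken
print(np.allclose(comm(U, U.conj().T), 2 * E))              # True: [U, U^dag] = 2 E
```

The last two checks make the point of this subsection explicit: the symplectic commutator survives the discretization exactly, while the unitarity-violating commutator [Û, Û†] is a non-zero operator rather than zero.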
It is important to note that the discrete representations of the link and the electric operators constructed above are the generators of the U(2N) Lie algebra:

Hamiltonian evolution remains on this U(2N) group manifold at each link. It is remarkable for gauge theories that the quantum link Hamiltonian preserves exactly the local symmetry rotation at each site. The construction of the formalism also applies to a model with a global Lie group symmetry on a compact manifold. One example is spin models such as the 1+1-D chiral theory with global U(N) symmetries. The term with the coupling λ is the square of the discretized differentiation (Û(x) − Û(y))†(Û(x) − Û(y)). The spin theory will have the global symmetry generators Ĵ_L = Σ_x Ê_L(x) and Ĵ_R = Σ_x Ê_R(x), so that [Ĵ_L, Ĥ] = [Ĵ_R, Ĥ] = 0, where all fields transform as Û(x) → g Û(x) h^{-1} for common g, h. Precisely determining whether or not this radical reduction of the degrees of freedom is still capable of reaching a universal continuum fixed point is generally a difficult dynamical question. We will not attempt to solve this problem here.

We also refer the reader to the reference [8] for other group manifolds. For example, the algebraic structures for SO(N), SU(N), and Sp(N) gauge theories naturally lie in the SO(2N), SU(2N) and Sp(2N) algebras respectively, as well as those for the O(N) and U(N) ⊗ U(N) quantum spin models.

B. Restoration of the continuum Hilbert space

Our goal here is to show that when M → ∞ we should recover the Hilbert space of the original U variables that would enter the Kogut-Susskind formulation. Although this illuminates our method, the reader may choose to go directly to the more intuitive geometrical interpretation discussed in Sec. II C, or to the concrete construction carried out in Sec. III for the U(1) example. It is possible to show in general that the state space is easily projected into a subspace with each link represented by a few hard boson (or sigma matrix) degrees of freedom. This representation is trivial for U(1) and only requires a local Jordan-Wigner transformation inside the group at each link. In that formulation, the entries of the matrices U are scalar functions of the group elements. These also commute with each other, and their polynomials generate the space of L² functions on the group manifold G. The Hilbert space L²(G) itself is given by the following definition. We need wavefunctions from the group manifold to the complex numbers, with the inner product implemented as

⟨ψ|φ⟩ = ∫ dg ψ*(g) φ(g),  (14)

where dg is the Haar measure on the group manifold, which is the unique group invariant measure. The trivial function ψ(g) = 1 is group invariant. All other wave functions can be obtained from this one by polynomials of the U, U^{-1} matrix component functions and then taking the L² completion.

Table I: The representations of the fermion operators, as well as the bilinear Û, under the left and right color gauge symmetries and the flavor gauge symmetry. The conjugate annihilation operators are in their conjugate representations.

We want to show that our quantum link procedure approximates this L²(G) Hilbert space. It is convenient for us to consider a slightly modified realization of the U variables as bilinears of the fermions. As described in Eq.
(8), the operators Û and Û† leave the total occupation number unchanged. There is an automorphism of fermion algebras, a_m^j ↔ c_m^{†j} and a_m^{†j} ↔ c_m^j, which makes it possible to describe Û as made purely from raising operators and Û† from lowering operators. Namely, they become

As we noted, the contractions of the flavor indices can be thought of as gauging the U(M) symmetry. If we also include the left Lie algebra action of U(N) and the right action, the degrees of freedom on a link are charged under the U(N)_L × U(M) × U(N)_R symmetry. Under this symmetry, the operators transform as in Table I. The advantage of this setup is that the standard vacuum of the b, c fermions is neutral with respect to all the symmetries; hence it is gauge invariant. Let us call this standard vacuum |Ω⟩. We can reach other gauge invariant states by acting on |Ω⟩ with operators invariant under the U(M), namely, the matrix elements of Û and Û†. Notice that Û†|Ω⟩ = 0, but Û does not annihilate it. This means that Û and Û† act asymmetrically on the reference state |Ω⟩. The complete set of states is built by acting with many Û operators. The Û operators commute with each other, so they act just as commuting bosonic generators.

The Hilbert space obtained this way can be decomposed into irreducible representations of U(N)_L × U(N)_R. A state Û^n|Ω⟩ has n upper indices with respect to U(N)_R and n lower indices with respect to the left U(N)_L. Because of the bosonic statistics, permutations of the upper indices can be undone by a change in the order of the product, so long as the permutation is carried over to the lower indices. Projecting onto different representations is done by these permutations, and it corresponds to a Young diagram (tableau) representation with n boxes.

One of the diagrams for e.g. n = 10 is

The diagram for the lower indices is the same, but since the indices are lowered, they are in the conjugate representation. In the intermediate flavor index, the fermionic statistics requires transposing the Young diagram. This argument appeared in [19] (see also [20] and references therein). The Hilbert space can therefore be decomposed into a sum of tensor products of an irreducible representation of U(N) and its conjugate, where each representation R_Y is classified by a Young diagram Y:

Here, each summand in the Hilbert space is represented by one copy of the Young diagram for, say, the upper indices, with the understanding that the conjugate representation gives the representation of the other U(N) in the lower index structure. We need to show that when we take M → ∞ of this Hilbert space Eq.
(16) in our quantum link formulation, we can recover the Hilbert space L²(U(N)) of the Kogut-Susskind formulation. The constant function ψ(g) = 1 ∈ L²(U(N)) plays the role of the vacuum |0⟩. The excited states are described by the harmonic functions on U(N). In L²(U(N)), both U and U† act non-trivially on the vacuum, and their actions are different from the actions we have on the fermion reference vacuum state |Ω⟩. This demands that we find the correct vacuum state |0⟩ in the Hilbert space of Û corresponding to ψ(g) = 1. For any Lie group G, we can now appeal to the Peter-Weyl theorem. This theorem states that when we decompose L²(G) into representations of the left (G_L) and right (G_R) symmetries of the group multiplication, it decomposes under G_L × G_R into a direct sum of products of their irreducible representations

In this sum, all irreducibles of G appear exactly once. If we compare this to the description above around |Ω⟩, we obviously have a mismatch: the U(N) representations are classified by pairs of Young diagrams with some constraints, rather than by a single Young diagram. In the double Young diagram, one Young tableau is for boxes (they count powers of U) and the other one is for antiboxes (they count powers of U†) [21]. The constraint is that the longest column of the box tableau plus the longest column of the antibox tableau must add up to less than or equal to the rank of the group, in this case N. This is the constraint that says contractions are trivial.

Let us look at how one of these pairs of tableaux, denoting a single representation, can be represented graphically. For example, for U(5), we can take

The second tableau, with the filled boxes, is the one with anti-boxes. It is turned 180 degrees and put at the bottom of the diagram. The total vertical size is N (= 5 in this case). The constraint is such that the two tableaux do not overlap horizontally.

The main idea for showing that we can write the Hilbert space with these pairs of Young diagrams in terms of single Young diagrams is as follows. We choose as a reference state a tableau that is filled all the way down to the bottom N rows, with K boxes in each row. That is, we choose as a new vacuum a tableau that is actually a singlet of SU(N), but that carries U(1) charge NK: a Young diagram (for N = 5 and K = 6, let us say)

where we have filled the boxes up to the maximum allowed depth N. These states are unique because they are one-dimensional representations of U(N), once we fix the charge. If we want to represent Eq.
(18) relative to this ground state, we add to the reference state the boxes of the fundamentals in the upper corner and we subtract the antiboxes at the bottom corner. For the above example with N = 5,

Notice also that the representations of SU(N) that appear in both L²(G) and the fermion representation have the same dimension. To get the U(1) charge correct for L²(U(N)) in the fermion formulation, what we have done in practice is to shift the U(1) charge so that the new vacuum has trivial charge. Happily, we see that we can match the representations of U(N) with a few boxes. These are the representations with small Casimir. The constraint on M tells us that the maximum width of the fermion tableaux is M, so that to recover the Hilbert space of L²(U(N)) we need to take M → ∞ and shift the charge enough so that the room on the left to remove boxes is as large as needed. The most symmetric way to do this is to choose K = M/2. This shows that, at least around the new ground state |0⟩, we recover the representation of the Hilbert space we want, namely L²(U(N)) with a cutoff that depends on M. The gauge invariance relative to the flavor U(M) shows we have no additional states to worry about. Computing the matrix elements of U, U† between states is beyond the scope of the present work and will be taken up in more detail in a future publication.

C. Geometrical Interpretation

Here we present the geometrical interpretation of the above approximation of the L²(U(N)) space. Let us consider for the time being the simplest case of U(1). By means of the above construction with bifermions, we get the Lie algebra of U(2). The diagonal U(1) ⊂ U(2) plays no role, as it commutes with all the generators and therefore decouples. More precisely, acting with any of the other elements of the algebra will not change the diagonal U(1) charge, so it will act as a c-number when we think of a physical realization. We are left with the symplectic structure of the Lie algebra of SU(2).

Is there another way to motivate this? The answer is yes. The idea is that the classical phase space of the original problem of the U(1) theory leads to a cylinder: the tangent bundle of the circle, as in Fig. 1. This has infinite volume, and therefore the Hilbert space is infinite-dimensional. We can ask if there is any other two-dimensional manifold with a finite volume and a U(1) symmetry. The answer is, not surprisingly, yes; the two-sphere (Fig. 2) satisfies that condition [22]. The symplectic structure of the topological two-sphere can also be written in terms of the commutation relations of the angular momentum operators. They play the role of the x, y, z coordinates, but quantized. This leads us to recover the formulation above in terms of SU(2) without ever mentioning the fermions. Upon quantization, we should get a fixed SU(2) representation: a fixed value of the quadratic Casimir, corresponding to x² + y² + z² for the classical manifold. Adding more flavors in the earlier discussion with bifermions corresponds to having a larger dimension of the SU(2) representation, i.e. a larger value of the quadratic Casimir, leading to a larger volume. This phase space is the homogeneous space SU(2)/U(1) ≃ CP¹, the complex projective space of dimension one. To quantize this quotient space, we only need to choose the magnetic flux through the sphere (we need to choose a line bundle over the projective manifold to define the allowed wave functions).
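For concreteness (standard angular-momentum relations, added here as a reminder rather than quoted from the paper), the quantized sphere replaces the Cartesian coordinates x, y, z by operators obeying

\[
[\hat L_i, \hat L_j] = i\,\epsilon_{ijk}\,\hat L_k,\qquad \hat L_x^{2}+\hat L_y^{2}+\hat L_z^{2} = j(j+1)\,\mathbb{1},
\]

so fixing the quadratic Casimir (the squared radius of the sphere) selects a single (2j+1)-dimensional representation; adding flavors in the bifermion construction increases j and hence the phase-space volume.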
What should be remembered is that the metric of the two-sphere does not mean much as far as the symplectic structure is concerned, so we could just as well take an elongated sphere. This is because we are studying Hamiltonian physics on the sphere and not a sigma model. What matters is how different functions on the geometry generate dynamical flows. When we elongate the sphere further, we can produce a cylinder in the limit of infinite elongation. We can approach the infinite volume of the cylinder this way, as desired. A classical Hamiltonian function on the center band of the cylinder and one on the center band of the elongated sphere can be very similar. The former is usually represented by the kinetic term p_θ², plus any small perturbation in the angular variable (the base coordinate) θ. In the latter case, p_θ² is replaced by L_z², where L_z is one of the three angular momentum generators. This band is where the low energy physics of the small kinetic term is concentrated. At least semi-classically, one can argue that if a low energy band of the cylinder leads to the correct universality class of some favored physics, then so does that of a sphere elongated enough to have a large enough volume to capture this band.

The generalization of the multi-flavor convergence to continuum non-Abelian group manifolds is more involved. We outlined the method, based on a more group theoretical convergence to the continuum Hilbert space by means of the Peter-Weyl theorem, in Sec. II B. One can ask how to interpret this procedure geometrically, as we did with the elongated sphere for the U(1) case: what is the manifold to be quantized? The structure of coherent states in [13] seems to have the answer: for U(N), it is the complex Grassmannian G(N, 2N) ∼ U(2N)/(U(N) × U(N)). This is also a complex geometry of dimension 2N² and thus can be viewed as a phase space. More importantly, it has a group action of U(N) × U(N) acting on the left, so it is a candidate phase space with the correct group action. At the level of Lie algebras, the Lie algebra of U(2N) provides the coordinates equivalent to x, y, z above. One can expect that this type of Grassmannian structure will be important in all such realizations for different compact groups.
III. U(1) QUANTUM ROTOR WITH UV CUTOFF

In this section, we demonstrate the UV cutoff that D-theory sets as the first step towards having a finite-dimensional Hilbert space for the Abelian U(1) group manifold. In order to test the fidelity of our U(1) qubit representations, we compare them to the full U(1) quantum rotor, with symplectic algebra and with [U, U†] = 0 given by the unitarity UU† = 1. The operators with this required algebra can be represented with a scalar field θ ∈ [0, 2π) as E = −i∂_θ and U = exp(iθ). It is convenient to rescale the Hamiltonian in this representation by 1/g² = √h, so that

This can be truncated by a cutoff either in the flux basis |ℓ⟩ or in the field basis |θ⟩. We will subsequently show that the multi-flavor D-theory construction can be reformulated to exactly reproduce the flux cutoff L of this rotor with M = 2L fermion flavors, and therefore converges exactly to the full rotor in the M → ∞ limit. The flux truncation of the infinite-dimensional Hilbert space is carried out by restricting the flux to |ℓ| ≤ L. This UV cutoff is the first step that D-theory takes; we will call this the flux truncation and the D-theory truncation interchangeably. To illustrate, let us write down the matrices of the operators for the L = 2 cutoff case:

We refer to the second discretization, in the field basis, as the clock model truncation. Note that the electric fields are identical but the U operators are different between these two approaches. The first, flux truncation approach, i.e. the approach that D-theory takes, preserves the symplectic algebra [E, U] = U and [E, U†] = −U† but breaks the unitarity constraint U†U = 1, whereas the clock model does the opposite, preserving U†U = 1 but not the symplectic algebra. Preserving both leads to an infinite-dimensional Hilbert space, which is exactly what we need to avoid [24]. More specifically, in the flux truncation (i.e. D-theory) the unitarity is violated with the non-zero commutator

Notice that these violations are concentrated exclusively on the largest ℓ, so they can be considered as living only in the UV region of the model, keeping the infrared physics roughly the same. In Sec. V, we compare the low spectrum as a function of h between the strong coupling limit h = 0 and the weak coupling limit h = ∞. We show that the low spectra are of course exact at h = 0 and remarkably accurate for a large range of values of h = 1/g⁴, even for an L = 2 or L = 4 flux cutoff. This appears remarkable, or even paradoxical, since for the flux truncation the field variables obey the nilpotency U^{2L+1} = 0 and therefore have exactly degenerate zero eigenvalues. This would seem to be a poor starting point in comparison with the eigenvalues of the clock model, e^{2πik/(2L+1)}. In the clock model truncation, U and U† are normal, so both their real and imaginary parts are Hermitian matrices that commute and can be diagonalized simultaneously. Hence, their eigenvalues can be measured simultaneously, and we can use that double measurement to determine the phases e^{2πik/(2L+1)}. For the flux truncation, on the other hand, the point is that U itself does not have a direct correspondence to the quantum rotor field U = exp(iθ). The physical correspondence becomes legitimate for the flux truncation once we take combinations such as U + U†, which is Hermitian, as illustrated in Fig. 3. In Fig. 3a, we observe nearly harmonic oscillator low spectra (orange) for the D-theory truncation even for L = 3. Fig. 3b shows a remarkable match for D-theory for all angles even at a small cutoff L = 8.
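Both truncations are straightforward to reproduce numerically. The sketch below is our own illustration, not code from the paper: it assumes the rescaled rotor Hamiltonian H = E²/2 + (h/2)(2 − U − U†), consistent with the QHO expansion H₀ = p²/2 + hθ²/2 quoted in Sec. V, and builds the flux-truncated U as a nilpotent shift and the clock-model U as a unitary cyclic shift in the same flux basis.

```python
import numpy as np

def rotor_spectrum(L, h, clock=False):
    """Low spectrum of the truncated U(1) rotor H = E^2/2 + (h/2)(2 - U - U^dag).

    clock=False: D-theory flux truncation (U = nilpotent raising shift).
    clock=True : Z_(2L+1) clock model (U = cyclic shift, unitary)."""
    dim = 2 * L + 1
    E = np.diag(np.arange(-L, L + 1)).astype(complex)
    U = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        U[k + 1, k] = 1.0            # raises the flux by one unit
    if clock:
        U[0, dim - 1] = 1.0          # wrap-around restores unitarity, breaks [E,U]=U
    H = E @ E / 2 + (h / 2) * (2 * np.eye(dim) - U - U.conj().T)
    return np.linalg.eigvalsh(H)

for h in (0.5, 4.0):
    print(h, rotor_spectrum(8, h)[:4], rotor_spectrum(8, h, clock=True)[:4])
```

With this setup one can check the behavior described above: both spectra agree at small h, the flux-truncated U keeps [E, U] = U exactly, and the single wrap-around entry of the clock model restores U†U = 1 instead.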
Note also that the field truncation, with only a discrete symmetry surviving, has no natural generalization to non-Abelian groups; there is no infinite sequence of finite discrete subgroups that uniformly populate their manifolds. For example, for SU(2), the largest such finite group that is uniform on SU(2) is the 120-element icosahedral group, and it is known to fail to be in the universality class of the two-color gauge theory [25].

IV. COMPLEXITY OF QUBIT REALIZATIONS

We now turn to our multi-flavor framework, with the D-theory rotor Hamiltonian Eq. (21) replacing the variables U and U† with the operators Û and Û† given by a sum over the M-flavor fermions:

There are 2M fermions in total. Remembering that the Fock space for a single fermion is two-dimensional, either |0⟩ (unfilled) or |1⟩ (filled), the total Hilbert space on which Û and Û† act has dimension 4^M. We note that the fermionic operators imply the nilpotency Û^{M+1} = 0, which coincides with the flux truncation with M = 2L. However, we will see that it does not represent the same matrices for this truncation. The M-flavor fermionic D-theory form starts in a 4^M-dimensional Hilbert space, but if we impose the half-filling condition for each flavor, due to the fermion number conservation, as

we are now in a Hilbert space of dimension 2^M, allowing us to represent the operators as M qubits or hard bosons. The point to be made is that in the sums Eq. (30), only terms that preserve all of these individual fermion number combinations of a + b appear, so they can be diagonalized ahead of the computations. These actually generate a subgroup of the original U(M) flavor symmetries which is flavor diagonal (that is, a set of U(1)^M generators that has been fixed).

In this subspace, for each flavor, we have the isometric mapping

and there are no fermion statistics in the σ on the right. The sigma matrices are given as

We have in this way eliminated the need to do any brute-force Jordan-Wigner transformations to convert fermions into qubits. We identify the fermionic basis states with the spin states, the a-fermion-filled state as |↑⟩ and the b-filled state as |↓⟩. In this form, the full representation of SU(2) available from these M flavors is ⊗_{m=1}^M 2. This representation is reducible, and its irreducible decomposition contains the irreducible M+1 representation, which we need to match with the flux truncation Hamiltonian in Eq. (26). For example, for the M = 4 case, the decomposition of the full ⊗_{m=1}^4 2 representation contains the irreducible 5 representation, on which the target Hamiltonian acts, as

(Figure caption) The Young tableau representation of the SU(2) irreducible decomposition Eq. (34). We can embed the U(1) Hamiltonian (Eq. (21)) with the flux cutoff of L = 2 into the symmetric representation 5 of SU(2), which is represented as the last term of the right-hand side (the four boxes aligned in a single row).

The simplification, where we start with 2M fermion qubits and end up with only M qubits, encodes an (M+1)-dimensional Hilbert space inside a 2^M-dimensional Hilbert space. We have cut the number of qubits in half with the half-filling condition. Still, the dimension of the Hilbert space where our physics is encoded grows exponentially with the number of states. We will name this property an exponential format.

The Hilbert space on which the M+1 representation acts is spanned by the states that are totally symmetric under the flavor symmetry, i.e.
the states

for m = 0, ..., M [26]. The "+permutations" terms contain all possible permutations of m down spins and M − m up spins. In this sense, if we gauge the permutation symmetry of the qubits, we get the unique representation of dimension 2L + 1. This can be justified by the gauging of the U(M) flavor symmetry of the original D-theory formulation. The combinations in Eq. (32) are invariant only under a U(1)^M subgroup of U(M) rather than the full U(M). The individual σ_z are linear combinations of the Cartan generators. The commutant of U(1)^M inside U(M) also includes the permutations of the U(1) factors, which should also be gauged. This justifies the prescription of keeping only the symmetric states.

The ingredients for the construction of the qubit Hamiltonian are the Cartan-Weyl basis operators L̂₊ = L̂ₓ + iL̂ᵧ, L̂₋ = L̂ₓ − iL̂ᵧ, and L̂_z in the M+1 irreducible representation of SU(2), i.e. the spin-M/2 operators. This truncation of the U(1) fields with the spin-M/2 operators was investigated by Zhang et al. [15] to study the effect of this spin truncation on the BKT phase transition of the O(2) model. We can express these Cartan-Weyl basis operators in the full ⊗_{m=1}^M 2 representation, i.e. in the M-qubit representation, as

The normalization factors of L̂± are chosen so that their actions on the |0⟩ state match those of the original U operators, and hence the low spectra of the qubit Hamiltonian replicate those of the continuum Hamiltonian in the small h region. The actions of the L̂₊ and L̂₋ operators on the symmetric states Eq. (35) are to raise and lower them, respectively, as

and the L̂_z operator acts as the shifted number operator

The commutation relations of L̂₊ and L̂₋ with L̂_z are

which match Eq. (22), whereas the commutator of L̂₊ and L̂₋ is proportional to L̂_z, which violates the unitarity [U, U†] = 0. If we consider the mapping E → L̂_z, U → L̂₊, and U† → L̂₋, the resulting Hamiltonian

has the same symmetry as the continuum Hamiltonian Eq. (21) does (more precisely, the different pieces in the Hamiltonian have the same algebra).

A. Phase space considerations

One can try to understand this a little better from the point of view of Hamiltonian classical mechanics on the phase spaces of the cylinder and the sphere. The reason to do so is to understand better the relation between the two dynamical systems.

Basically, after turning to the problem of writing in terms of qubits and focusing on the correct gauge invariant states, the original problem is reduced to the study of a single copy of the SU(2) Lie algebra hiding in the big Hilbert space. It is the physics of this sub-Hilbert space that we want to analyze by classical methods to get an intuition.
On the cylinder (the tangent bundle of the circle), we have variables α (the periodic variable) and p_α, with the Poisson bracket {α, p_α} = 1. The cylinder Hamiltonian is

By contrast, on the sphere we have a pair of spherical coordinates θ, φ, with φ periodic and with the Poisson bracket {φ, θ} = A/sin θ (this is the inverse of the volume form in spherical coordinates, up to a rescaling factor, which we call A). The conjugate variable to φ is actually p_φ = cos θ/A rather than θ. In Cartesian coordinates, this is the z coordinate, and it is identified with L_z after rescaling. On the other hand, L₊ ∝ e^{iφ} sin θ, which is identified with

That is, when we take the classical identification α ≡ φ, which results from taking the classical periodicity of the variables into account, and include the constraint

where L² is a c-number, we find that the Hamiltonian actually takes the form

For this to work, we need to have p_α = L_z, p_max = L and A = 1/p_max = 1/L, so that p_α = p_max cos θ. The normalization of the naive kinetic term has been scaled to match what we need.

We can now expand it in powers of 1/p_max as

so when we take p_max → ∞, we recover the cylinder Hamiltonian. At finite p_max, there are what should be interpreted as higher derivative corrections in the Hamiltonian. These are suppressed by the cutoff p_max. The quantization of this system leads to the quantum Hamiltonian Eq. (42), provided that 1/g⁴ ∝ h/p_max. Here, we need to remember that L₊, L₋, L_z have roughly the same normalization. One can say that the quantity 1/p_max is playing the role of ℏ in a quantum expansion. This is also related to the volume of the phase space, which is computed to be proportional to p_max ≃ (2L + 1) in Planck units.

In a field theory setup, these higher derivative corrections are expected to be irrelevant perturbations, at least by naive power counting: they affect the UV dynamics but should flow to the same universality class in the infrared. They scale like the electric field squared times the magnetic field squared (the naive plaquette). In that vein, the low energy spectrum of Eq. (42) should converge to the low energy spectrum of the Kogut-Susskind Hamiltonian with the normalization of Eq. (23) when we take the cutoff to infinity as well.

One can try to understand a similar idea for more general groups beyond U(1). As argued earlier, we should study the quantization of the Grassmannian G(N, 2N) when discussing U(N) links, which is where the coherent states of [13] take values. One would then try to understand how to take the semiclassical double scaling limit correctly to get a cylinder over U(N), namely the tangent space of U(N) as a phase space, with a Hamiltonian and a parameter playing the role of p_max. This type of analysis is beyond the scope of the present work.

B. Exponential Formats

The Hamiltonians Eq. (21) and Eq. (42) are not exactly the same, due to the difference between the coefficients of the actions of the raising and lowering operations. In the flux truncation they are always constant (normalized to 1), whereas those in SU(2) are not constant but rather depend on the target state, as we saw in Eq. (39). This is also true in the classical limit described by Eq. (45), where there are higher derivative corrections to the Hamiltonian. Also, remember that the Hamiltonian Eq.
(42) acts on a Hilbert space of large dimension 2^M, but only the states in the M+1 irreducible representation matter and are invariant under all constraints. We dubbed this property being an exponential format, where the dimension of the Hilbert space grows exponentially in the number of states we need.

We can try to do better at the level of matching the operators in the subspace of interest in the total Hilbert space, by adding corrections to the operators to get rid of the differences below the finite cutoff. This should be equivalent to adding (or, depending on the point of view, subtracting) irrelevant operators to compensate for the differences in the formulation.

To construct the U and U† with our qubit construction, we may use one of two ansätze:

The first one, for what we call Û′, can also be thought of as having L̂_z/L_max corrections to U₊, as one would expect from the higher derivative expansion Eq. (45). As such, the coefficients should be suppressed by powers of 1/L_max^{2k}, up to normal ordering ambiguities. One can compute the coefficients a_k or b_k so that the action of Û′ or Û″ is the same as U in Eq. (21). For the first ansatz Û′, for example, the action of each L̂_z^k L̂₊ L̂_z^k operator is as

where A_{mk} (m, k = 0, 1, ..., M/2 − 1) is an (M/2) × (M/2)-dimensional matrix with the elements

The coefficients a_k can be computed by solving the linear equation

A similar procedure can be used to find the coefficients for the other case, Û″, as well. Note that the operators Û′ and Û″ constructed with the appropriate coefficients a_k and b_k are identical in the space spanned by the states Eq. (35), even though they are not in the full M-qubit space.

Since the values of A_{mk} and B_{mk} grow exponentially with k, the values of a_k and b_k are expected to decay exponentially for higher k terms. Indeed, [15] numerically demonstrates that a_k shows exponential decay with k. For small cutoffs L = 1, 2, 3, the Û′ operator can be constructed from L̂_z^k L̂₊ L̂_z^k terms, and the Û″ operator can be constructed as

The point is that if our goal is to produce the flux-truncated Kogut-Susskind Hamiltonian on the nose, it can be done. Ideally, one would actually use L₊, L₋ instead and try to argue that one is in the same universality class. The main reason is that the Hamiltonian Eq. (42) is made of sums of products of only two sigma matrices. These can be readily implemented as 2-qubit gates, perhaps with some swaps of qubits. Therefore, it provides a more efficient implementation on a NISQ device, where reducing the number of total gate operations per qubit is essential to get a result one can trust before losing coherence on the device.

C. Linear Formats and Sparsity

The formulation we have used to construct the qubit Hamiltonian with links in Eqs. (50) and (51) requires at least M = 2L qubits, discarding all representations in the irreducible decomposition of ⊗_{α=1}^M 2 other than the M+1 representation. For large L, the Hilbert space grows exponentially in L, losing the quantum advantage locally [27]. We now introduce another qubit representation with which one can store information with only a logarithmic number of qubits, rather than using M qubits to represent the (M+1)-dimensional Hilbert space.

One needs only n_min = ⌈log₂(M + 1)⌉ qubits, keeping the other representations, by mapping the |0⟩, |1⟩, ..., |M⟩ states to the computational basis (i.e.
the eigenbasis of σ³) states as

where b_{n_min} b_{n_min−1} ⋯ b_1 b_0 is the binary representation of the integer m. In this encoding, the dimension of the Hilbert space in which we embed our problem grows linearly with the dimension of the Hilbert space we want to encode. We call this a linear format. Notice that this setup starts with the truncation and tries to fit it into a Hilbert space in an arithmetic way, without starting with the symmetry algebra first. It is more economical in terms of qubits, but the generalization to non-Abelian fields for even a polynomial format is not straightforward, and even if possible presents a challenging research project in qubit algebra [28].

We begin by introducing the M-bit quantum adder [29]:

which maps the computational basis states as |m⟩ → |(m + 1) mod 2^{n_min}⟩. Then, the adder can be modified to represent the raising operator Û_min by replacing the mod with annihilation of the highest state, Û_min|M⟩ = 0. To do this in general, we multiply A from the right by the projector P, where P is defined to act as the identity on the |0⟩, ..., |M − 1⟩ states and to annihilate at least the state |M⟩ (and possibly also the higher states). For M = 2 (L = 1), for example, using the 3-dimensional subspace of the 2-qubit space with the mapping

the Û_min, Û†_min, and the corresponding Ê_min operators are expressed as

given that one of the possible choices of the projector P is

For another example, M = 4 (L = 2), using the 5-dimensional subspace of the 3-qubit space with the mapping

the Û_min, Û†_min, and the corresponding Ê_min operators are expressed as

given that one of the possible choices of P is

These operators also satisfy the commutation relations Eq. (22), so the Hamiltonian constructed from these operators preserves the original symplectic algebra. Drawing on the extensive literature on efficient and robust quantum arithmetic [28] should help in designing optimal circuits for this linear format.

V. SPECTRAL MATCHING OF D-THEORY TRUNCATION

In this section, we discuss and numerically compare the low spectra of the 0+1-D quantum rotor Hamiltonian with U(1) symmetry defined in Eq. (21) against those obtained with a small flux cutoff L, with a discretization of the group manifold of U(1) to Z_N (clock model), and with the spin operators L̂± as the Û operators (quantum link model) constructed with M flavor qubits.

First, we compare the spectra with a very small cutoff, giving a five-dimensional Hilbert space, and with a slightly larger cutoff, giving a nine-dimensional Hilbert space (Fig. 5), against the exact spectra. We define a new coefficient τ parameterizing the inverse coupling h as h = τ/(1 − τ), allowing us to plot the whole range h ∈ [0, ∞) with finite τ ∈ [0, 1), besides rescaling the Hamiltonian by (1 − τ). We can see from the figures that in the strong coupling region (small h) we do not need a large cutoff L to reach the precise solution for any of the truncation approaches, whereas we do need large L for weak coupling. We can also observe that the lower eigenenergies converge to the exact values faster than the higher energies. Let us also note that the spectrum of the clock model deviates from the exact spectrum at smaller h than that with the flux cutoff does.
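A minimal sketch of this linear format (our own illustration; the explicit matrices above did not survive here, so the projector and encoding conventions below are assumptions in the spirit of the text): build the cyclic increment A on n_min qubits, project out the states above |M⟩, and check that the resulting Û_min and Ê_min still satisfy [Ê, Û] = Û.

```python
import numpy as np

def linear_format_ops(M):
    """U_min = (increment) x (projector) on n_min = ceil(log2(M+1)) qubits.

    States |0>, ..., |M> encode flux l = m - M/2; states above |M> are projected out."""
    n_min = int(np.ceil(np.log2(M + 1)))
    dim = 2 ** n_min
    A = np.zeros((dim, dim), dtype=complex)
    for m in range(dim):
        A[(m + 1) % dim, m] = 1.0           # cyclic quantum adder |m> -> |(m+1) mod 2^n>
    P = np.diag([1.0 if m < M else 0.0 for m in range(dim)]).astype(complex)
    U = A @ P                                # raising operator with U|M> = 0
    E = np.diag([m - M / 2 if m <= M else 0.0 for m in range(dim)]).astype(complex)
    return E, U

E, U = linear_format_ops(4)                  # M = 4, i.e. L = 2, on 3 qubits
print(np.allclose(E @ U - U @ E, U))         # True: symplectic algebra preserved
```

The choice of E on the unused states above |M⟩ is arbitrary here (set to zero); any choice works as long as those states are never populated.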
It is also worth noting that the quantum rotor can be locally approximated by the quantum harmonic oscillator (QHO) around θ = 0. By expanding the cos θ term in θ, we can decompose the Hamiltonian into the unperturbed QHO part H₀ = p²/2 + hθ²/2, with momentum p = −i∂_θ, and the perturbation H₁ = h(−θ⁴/4! + θ⁶/6! − θ⁸/8! + ⋯) consisting of the higher-order terms of the cosine. In the large h region, the low-lying states tend to concentrate around θ = 0, which makes the perturbation H₁ and the periodic boundary conditions of θ trivial, so they should exhibit the evenly spaced, well-known QHO spectrum E_n = √h(n + 1/2) for n = 0, 1, 2, .... To see if the truncations can reproduce this QHO-like behavior at large h, we compare the spectra of the truncated Hamiltonians with the spectra of the QHO (Fig. 6). These QHO solutions correspond to the topologically trivial trajectories with zero winding, whereas topologically non-trivial trajectories start to appear as non-perturbative effects in the small h region once we take the path integral for quantization [30]. Since we need to simulate the behavior at large h, we use larger truncations giving a Hilbert space of dimension 21. As seen in the figure, the flux-truncated Hamiltonian and the clock model reproduce the QHO spectra well in the large h region, until they start to experience non-negligible errors due to the truncations. However, the spectra computed with the spin truncation are completely off from the QHO spectra, which can be expected given that the dominating potential term 2 − Û − Û† = 2 − L̃ₓ in the large h region has the evenly spaced eigenvalues 2 − 2m/√((M/2)(M/2 + 1)), with m = −M/2, −M/2 + 1, ..., M/2 − 1, M/2, which grow linearly with h instead of with its square root. This indicates that we need the corrections proposed in Sec. IV B for very large h. At intermediate h, the term with L̂_z² cannot be neglected and the agreement should be better.

To evaluate the performance of the flux truncation, we can also look at the breaking of the unitarity constraint in the eigenbasis of the low spectrum, which, without the truncation or with the clock model discretization, is exactly zero. For j and k with the same parity, the matrix elements are all zero. The non-zero elements (i.e. j and k with different parities) for small j and k of this matrix, as functions of h, with a small cutoff (L = 2 and L = 4), are shown in Fig. 7, demonstrating that the effects of the breaking on the low energy states are small for smaller j and k, for smaller coupling h, and for larger cutoff L. This behavior of the breaking of the zero commutator validates that the flux-truncated Hamiltonian describes the effective theory of the exact U(1) quantum rotor in the small h region.
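The QHO comparison is easy to reproduce with the same assumed normalization H = E²/2 + (h/2)(2 − U − U†) used in the Sec. III sketch; the following is again our own illustration:

```python
import numpy as np

L, h = 10, 40.0                     # dim-21 Hilbert space, deep in the large-h region
E = np.diag(np.arange(-L, L + 1)).astype(float)
U = np.diag(np.ones(2 * L), -1)     # flux-truncated raising operator
H = E @ E / 2 + (h / 2) * (2 * np.eye(2 * L + 1) - U - U.T)

levels = np.linalg.eigvalsh(H)[:5]
qho = np.sqrt(h) * (np.arange(5) + 0.5)
print(np.c_[levels, qho])           # low levels track sqrt(h) (n + 1/2)
```

Swapping in the spin-M/2 operators for U reproduces the linear-in-h growth of the potential term noted above; we omit that variant here since the precise L̃± normalization is convention-dependent.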
We can think of simple 1+1-D models to which our scheme can be applied for simulations of their dynamics. An interesting choice is the Sine-Gordon model, with the Lagrangian ℒ = ½(∂_μϕ)² + (m²/β²) cos(βϕ). This is an intriguing, exactly integrable theory with a strong-weak S-duality to the massive Thirring model, as shown in [31], which demonstrates that the fermionic excitations of the massive Thirring model correspond to the solitons of the Sine-Gordon model. We note that the simulation of the massive Thirring model on a quantum circuit is studied in [32]. Both forms could be formulated for qubits, with complementary regions of validity, to find a common parameter space exhibiting the duality. Given that the latticized derivative ∂_μϕ := (1/a)(ϕ(x + μ) − ϕ(x)) is small in the low-energy range, where a is the lattice spacing, we can map the conjugate momentum field π(x) = ∂₀ϕ(x) → E_x and the compactified field exp(iβϕ(x)) → U_x, leading to a lattice Hamiltonian H_SG with couplings h = m²/β² and J = 1/(2a²β²). We fix the lattice spacing to a = 1 from now on. We note that for h = 0 this is the classical XY model, or the integer-spin XX chain, which has been numerically studied in more detail by Zhang, Meurice and Tsai [15] with tensor networks and truncations in the range L = 1, 2, 3, 4. Among the interesting observations: while all truncations preserve the gapless phase, the model has an infinite-order Gaussian transition for L = 1, and only for L ≥ 2 does it have a BKT transition. This is a nice example of how the physics can depend crucially on the size of the truncation. It is well known that the Sine-Gordon model effectively describes the vortices of the XY model, with β corresponding to the inverse temperature, and its BKT transition is well studied through the renormalization flow on the Sine-Gordon side [33].

A. Real-time evolution

It is plausible that with the small-M qubit formulation we can explore the small h and J region; that is, the qubit Hamiltonian is expected to be the effective theory of the exact Hamiltonian in the low-temperature limit. Here we take the smallest truncation, L = 1, and simulate the real-time evolution of a state under the Hamiltonian with quantum circuits defined on six lattice sites. For L = 1, the Hamiltonian H_SG can be represented with sigma matrices on a 1+1-D lattice (ignoring an overall constant). Since the first (electric) term, the second (potential) term, and the third (interaction) term of H_SG do not commute with each other, we use the Trotter-Suzuki approximation to simulate the time-evolution operator exp(−iH_SG t) on a quantum circuit with a small time step Δt ≡ t/n, n ≫ 1. We call the product inside the bracket a Trotter step; the realization of a single Trotter step is depicted in Fig. 8. Each unitary rotation component can be realized with simple one- or two-qubit quantum operations. Eq. (69) means that we can approximate the time evolution e^{−itH_SG} on a quantum circuit by iterating the Trotter-step circuit many times with a small time step Δt. To test the reliability of the approximated time-evolution operator on a quantum circuit, we construct and simulate the circuit using qiskit with six lattice sites, i.e., twelve qubits.
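As a classical cross-check of this construction, the following sketch builds a small flux-truncated lattice version of H_SG and verifies the first-order Trotter-Suzuki logic of Eq. (69) by comparing the iterated Trotter step against the exact evolution. The precise signs and constant shifts in the Hamiltonian are our reading of the text, not the paper's displayed equation.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

def embed(op, site, n, d=3):
    """Place a single-site operator at `site` in an n-site chain of dimension d."""
    ops = [np.eye(d)] * n
    ops[site] = op
    return reduce(np.kron, ops)

def sg_terms(n, h, J, L=1):
    """Electric, potential and interaction pieces of a flux-truncated lattice
    Sine-Gordon Hamiltonian on a periodic chain, assuming the form
    H = sum_x [E_x^2/2 + (h/2)(2 - U_x - U_x^dag) - J(U_x U_{x+1}^dag + h.c.)]."""
    d = 2 * L + 1
    E = np.diag(np.arange(-L, L + 1).astype(float))
    U = np.diag(np.ones(d - 1), k=-1)          # truncated link operator
    D = d ** n
    HE = sum(embed(E, x, n, d) @ embed(E, x, n, d) for x in range(n)) / 2
    Hh = sum(0.5 * h * (2 * np.eye(D) - embed(U, x, n, d)
                        - embed(U, x, n, d).conj().T) for x in range(n))
    HJ = np.zeros((D, D), dtype=complex)
    for x in range(n):
        Ux, Uy = embed(U, x, n, d), embed(U, (x + 1) % n, n, d)
        HJ -= J * (Ux @ Uy.conj().T + Uy @ Ux.conj().T)
    return HE, Hh, HJ

HE, Hh, HJ = sg_terms(n=4, h=1.0, J=1.0)
H = HE + Hh + HJ
print(np.linalg.eigvalsh(H)[:3])               # low spectrum of a 4-site chain

# first-order Trotter-Suzuki check of exp(-iHt) built from the three pieces
t, n_steps = 1.0, 100
dt = t / n_steps
step = expm(-1j * dt * HE) @ expm(-1j * dt * Hh) @ expm(-1j * dt * HJ)
err = np.linalg.norm(np.linalg.matrix_power(step, n_steps) - expm(-1j * t * H), 2)
print(err)                                     # shrinks roughly like 1/n_steps
```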
We pick the two-point correlation function in the spatial dimension, specifically between the left-most lattice site x and the middle site y, as the physical quantity to be measured from the circuit. Since the observable U†_x U_y + U_x U†_y can be expressed as a sum of products of two sigma matrices in our construction, it can be measured by simple two-qubit measurements on the quantum circuit. We measure each two-qubit Pauli with 4096 runs of the circuit to approximate the expectation value. We choose the Hamiltonian parameters h = 1 and J = 1, and as the initial state we take |ψ⟩ = |00...0⟩, corresponding to the state whose sites all carry flux ℓ = −1, which can easily be realized on a quantum circuit. The result of the simulation is shown in Fig. 9, from which we can see that if the time interval Δt is small enough (≈ 0.1), the quantum circuit approximates the exact time evolution well.

B. Gapped/gapless phase transition

As mentioned above, the interesting physical feature to investigate for these 1+1-D U(1) models in the continuum is the BKT transition. Since the BKT transition is driven by topological defects in the model, it is regarded as a topological phase transition. For example, the transition in the 2-D classical XY model can be explained as the confinement/deconfinement phase transition of vortex-antivortex pairs. The topologically ordered phase is gapped, i.e., it has a finite correlation length ξ. The topological phase transition closes this mass gap and allows the system to have massless Nambu-Goldstone excitations; hence the other phase is critical and has an infinite correlation length.

We can observe this gapped/gapless transition by computing the entanglement entropy of the ground state. Here we consider two entropy measures. The von Neumann entanglement entropy of the ground state for a subsystem A is defined as S_A := −Tr[ρ_A log ρ_A], where ρ_A is the reduced density matrix of the ground state in the subsystem A, defined as ρ_A = Tr_{A^c}[ρ] with ρ the density matrix of the ground state in the total system A ∪ A^c. The α-Rényi entanglement entropy is defined as S_A^{(α)} := (1/(1 − α)) log Tr[ρ_A^α], and it is related to the von Neumann entropy by lim_{α→1} S_A^{(α)} = S_A (the so-called replica trick). It was proven by Hastings [34] that the entanglement entropy of the ground state of a gapped 1+1-dimensional system obeys the area law, i.e., it is bounded from above by a constant independent of the subsystem size n. On the other hand, S_A acquires a logarithmic correction log(n) in a gapless phase or at a critical point in the thermodynamic limit; specifically, this was proven for 1+1-dimensional systems by Calabrese and Cardy [35] by means of two-dimensional conformal symmetry. For finite volumes, they prove that the Rényi entropy for a 1+1-D quantum system with conformal symmetry on a finite lattice is (ignoring the constant term) S_A^{(α)} = (c/6)(1 + 1/α) log[(N/π) sin(πn/N)] (71), which converges to the von Neumann entropy as α → 1, S_A = (c/3) log[(N/π) sin(πn/N)] (72). In the thermodynamic limit N → ∞, this reduces to the logarithmic correction log(n/a). Here c is the central charge of the conformal theory in the same universality class as the quantum system.
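The entropy computation described here is straightforward to sketch with exact diagonalization. Since the paper's Hamiltonian Eq. (67) is not reproduced in this excerpt, the example below uses the critical transverse-field Ising chain as a stand-in gapless system; the reduced-density-matrix recipe for S_A and S_A^{(2)} is the generic one.

```python
import numpy as np

def entropies(psi, n_sub, d=2):
    """von Neumann and 2-Renyi entanglement entropies of the first n_sub sites."""
    dA = d ** n_sub
    psi = psi.reshape(dA, -1)
    rho_A = psi @ psi.conj().T          # reduced density matrix Tr_{A^c}|psi><psi|
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]
    S_vn = -np.sum(lam * np.log(lam))
    S_2 = -np.log(np.sum(lam ** 2))
    return S_vn, S_2

# stand-in gapless system: critical transverse-field Ising chain, periodic BC
N = 10
sx = np.array([[0, 1], [1, 0]]); sz = np.array([[1, 0], [0, -1]])
def op(o, i):
    m = 1
    for j in range(N):
        m = np.kron(m, o if j == i else np.eye(2))
    return m
H = -sum(op(sz, i) @ op(sz, (i + 1) % N) for i in range(N)) \
    - sum(op(sx, i) for i in range(N))
_, vecs = np.linalg.eigh(H)
gs = vecs[:, 0]
for n in (2, 3, 4, 5):
    print(n, entropies(gs, n))          # entropies grow with n in a gapless phase
```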
Zhang [36] computes the von Neumann entropy of the ground state of the Hamiltonian Eq. (67) with h = 0 and open boundary conditions, confirming that the gapped/gapless transition occurs with L = 1 for system sizes N = 32, 64, 96, 128 and subsystem size n = N/2, by means of the density matrix renormalization group. As mentioned, the results of [15,36] indicate that this model with L = 1 undergoes the transition called an infinite-order Gaussian transition, which is distinct from the BKT transition but still closes/opens the mass gap. We reproduce this result by computing the von Neumann and 2-Rényi entropies from exact diagonalization of the Hamiltonian with the much smaller system size N = 10 and subsystem sizes n = 2, 3, 4, 5, as functions of β, with two disjoint boundaries (Fig. 10). In the small-β region, the entanglement entropies depend on the subsystem size, and the gaps between them shrink as n increases, as expected for the gapless phase. In the large-β region, on the other hand, the entropies become independent of n, so the system must be in the gapped phase. That we can observe this gapped/gapless transition with a system size of only N = 10 means that even with current or near-future digital quantum devices consisting of fewer than a hundred qubits, we may reproduce interesting physical phenomena related to a continuum field theory. Let us note that many efficient quantum algorithms have been proposed for preparing the ground state of a given spin Hamiltonian on qubits, such as the variational quantum eigensolver [37], adiabatic state preparation [38], and imaginary-time evolution [39,40]. Moreover, one can efficiently evaluate the 2-Rényi entropy on a digital quantum device by computing the expectation value ⟨GS| ⊗ ⟨GS| SWAP_A |GS⟩ ⊗ |GS⟩, where |GS⟩ ⊗ |GS⟩ is two copies of the ground state and SWAP_A is the operation that swaps the qubits in the subsystem A between the two copies [41]. Such expectation values of a unitary operator can be computed using, for example, the Hadamard test.

We can also find the central charge c of this model by simply fitting the values of S_A and S_A^{(2)} in the gapless phase to the functions of Eq. (72) and Eq. (71), respectively. Fitting their values at β = 0.01, we estimate the central charge as c ≈ 1.019 from S_A and c ≈ 1.032 from S_A^{(2)} (insets of Fig. 10). These closely reproduce c = 1, as expected for this class of models [42,43].

VII. DISCUSSION

In this paper, we have discussed a version of the quantum link model with gauged flavor symmetry [13,16], focusing especially on the problem of a U(1) quantum link.

The main problem we have focused on in this paper is how to realize local degrees of freedom that are effectively bosonic and carry a non-trivial symmetry structure in terms of (fermionic) qubits. Generically, these are ingredients that can be used in a variety of field theories, not just gauge theories. Such a choice depends on whether one puts the degrees of freedom on a link or on a lattice site, and also on whether one imposes a local symmetry constraint, which would involve many links at a time; a single link or lattice variable would not know about these spatial configurations on its own. The main problem with bosonic systems is that they naturally have an infinite-dimensional Hilbert space, even locally. This needs to be truncated if it is to be simulated on a quantum computer.
The truncation suggested by the fermionic qubits for U(1) picks a particular quantization of a system that has an SU(2) symmetry on the phase space, but only a U(1) symmetry in the Hamiltonian. This gives us a quantum theory on a two-sphere, realized by angular momentum operators with a fixed value of L². We showed, in both the classical and the quantum theory, how taking L² → ∞ recovers the Kogut-Susskind Hamiltonian after an appropriate rescaling of the variables. The physics lives locally in a compact phase space with finite volume in units of ℏ. Taking the volume to infinity can, if done appropriately, provide the phase space of a tangent bundle on the U(1) manifold. We also showed how additional corrections to the Hamiltonian (which can be thought of as higher-derivative corrections) can be added so that the naive flux-truncated Kogut-Susskind Hamiltonian for a single variable is recovered exactly, rather than approximately, at finite cutoff. This type of argument suggests that the link variables with gauged flavor symmetry fall in the same universality class as Kogut-Susskind-type Hamiltonians in the appropriate limit, without the need to add these higher-order corrections.

Generalizing this construction to other non-Abelian symmetry groups seems to require studying the quantization on a complex Grassmannian (a compact phase space) and taking a similar large-volume limit in units of ℏ.

Some special features were found in the U(1) theory, where the original problem with 2M fermionic qubits could be reduced to M qubits that are effectively hard bosons: they commute with each other. The physics requires that the permutation group between these hard bosons be fully symmetrized in order to faithfully realize the flavor gauge symmetry. The qubit realization of the U, U† operators then results in the unique representation that appears from the addition of angular momentum for these variables, with maximal angular momentum.

We studied various versions of the truncated Hamiltonians, which differ from each other in the choices made for these higher-derivative corrections, and found good agreement with the quantum rotor (the Kogut-Susskind Hamiltonian with no cutoff). They appear to approximate it well even for moderate values of the coupling. It would be interesting to study this property further for other models with non-Abelian symmetry.

We also studied other implementations that are not based on the fermionic bilinears, but where the truncation of the Hilbert space is done to minimize the total number of qubits and the embedding is more ad hoc (there is no natural symmetry action on the qubit degrees of freedom; it needs to be built by hand). At least in this sense, one can talk about the efficiency of an implementation in terms of resources.
We applied these ideas to models with two such U(1) degrees of freedom, as would appear in a chiral U(1) model on a one-dimensional lattice. In particular, we showed how a simple truncation can be implemented in terms of explicit gates on a collection of 12 qubits (two per site), and how the Trotter expansion can be executed to study the real-time evolution of a simply prepared initial state. Furthermore, in the U(1) case, we used exact diagonalization to argue that the ground state on a lattice of only 10 sites is already large enough to show non-trivial critical behavior in the entanglement entropy as the coupling constant is varied. This suggests that interesting physics at (or near) criticality can be simulated on a modest quantum computer with roughly ∼100 qubits, rather than requiring us to take the large-volume limit first.

What is left out of this study is any serious exploration of how a minimal number of qubits per lattice site might preserve the universality class. Indeed, this is a central and very challenging dynamical problem, depending on the existence of a second-order critical surface. While preserving the symplectic algebra of the local field is clearly an attractive requirement, it does not by itself address this problem. As in the classical Ising Hamiltonian, with a single qubit per site and Z₂ reflection symmetry, the collective dynamics across the spatial lattice is what guarantees universality.

There are many potential routes to universality. For example, we have also left out the original quantum link conjecture that, in an asymptotically free theory (the non-Abelian 2-D sigma model or the 4-D gauge theory), flavors distributed in an extra dimension are sufficient to guarantee universality. Such models break the flavor symmetry but would also reduce the number of quantum gates to be executed to logarithmic growth in the correlation length. Our previous work [44], for example, provides such an implementation for a U(1) gauge theory in 2+1 dimensions. One needs to worry that the breaking of the flavor symmetry in the Hamiltonian does not pollute the infrared physics with new degrees of freedom that are not sufficiently gapped. With full gauging of flavor, as we studied here, there are no additional singlet states beyond those required to match the Hilbert space of interest, so the only question is whether we approximated the correct Hamiltonian well enough in the low-energy sector. A full treatment of such questions needs to be explored in detail.

FIG. 1. The phase space of the original U(1) theory, which has infinite volume.

FIG. 2. (a) The approximation with a two-sphere ∼ SU(2)/U(1). The equator is where the low-energy physics resides and is to be matched with the dynamics on the cylinder. (b) Having a larger-dimensional representation of SU(2) (i.e., adding more flavors) corresponds to an elongated shape with a larger volume.

The relation between the θ and flux representations is of course just the Fourier transform, ⟨ℓ|θ⟩ = exp(iℓθ), with the delta-function normalized states U|θ⟩ = exp(iθ)|θ⟩, or in the flux basis, E|ℓ⟩ = ℓ|ℓ⟩. Given that θ takes values in the compact space S¹, the flux ℓ takes quantized values ℓ ∈ ℤ.
The matrix representation of the Hamiltonian can be written explicitly in this flux basis; truncation of the field itself is more subtle. One can, in comparison, think of discretizing the field to the Z_{2L+1} values θ = 2πk/(2L + 1), k ∈ {0, 1, ⋯, 2L}, and again choosing to restrict the flux to ℓ ∈ [−L, L], with a cyclic generator E for the Z_{2L+1} ⊂ U(1) subgroup. This discretization gives the same Hilbert-space dimension as the flux truncation with the same L; the operators for the 2L + 1 = 5 state truncation can be written out explicitly as an illustration.

FIG. 5. The lowest five eigenvalues as functions of τ for the quantum rotor Hamiltonian computed with (a) a small flux cutoff, with (left) L = 2 and (right) L = 4; (b) the clock-model discretization, with (left) Z₅ and (right) Z₉; and (c) the spin operators L± as the Û operators, with (left) M = 2L = 4 and (right) M = 2L = 8, compared with the spectrum of the exact Hamiltonian (black dashed curves).

FIG. 6. The lowest five eigenvalues of the quantum rotor Hamiltonian as functions of h computed with (a) a flux cutoff with L = 10, (b) the group-manifold discretization to the Z₂₁ group, and (c) the spin operators with M = 20, compared with the spectrum of the QHO (black dashed curves).

FIG. 7. The non-zero matrix elements in the eigenbasis, as functions of h, for the low spectrum of Eq. (65) with flux cutoff (a) L = 2 and (b) L = 4.

FIG. 8. The quantum circuit of a single Trotter step for the Sine-Gordon model for two lattice sites with periodic boundary conditions (top). The index i represents the position of the link and α represents the flavor. The circuit components Z and ± represent the operators exp(−iΔt (1/8) σ³_{x,1} σ³_{x,2}) (realized as at bottom left) and exp(−iΔt (J/2)(σ⁺_{x,i} σ⁻_{y,j} + σ⁻_{x,i} σ⁺_{y,j})) (bottom right), respectively.

FIG. 10. The (a) von Neumann and (b) 2-Rényi entropies for the system of size N = 10 and subsystems of size n = 2, 3, 4, 5, for the ground state with periodic boundary conditions. The insets show the values of S_A at β = 0.01 as functions of the subsystem size n. The central charge c is obtained by fitting the functions of Eq. (72) and Eq. (71) (blue curves), giving the estimates c ≈ 1.019 for the von Neumann entropy and c ≈ 1.032 for the 2-Rényi entropy.
Economic scheduling and dispatching of distributed generators considering uncertainties in modified 33-bus and modified 69-bus system under different microgrid regions: This paper presents a comprehensive framework for the economic scheduling and dispatching of Distributed Generators (DGs) in modified 33-bus and 69-bus systems across multi-microgrid regions. The framework introduces two key techniques: a novel dispatch strategy for optimizing the charging and discharging of Electric Vehicle (EV) batteries, and a robust power dispatch method for islanded distribution systems. The EV dispatch strategy uses a multi-criteria decision analysis method, Probabilistic Elimination and Choice Expressing Reality (p-ELECTRE), to maximize profits for EV owners while meeting power system requirements. This strategy is tested on fleets of 100 and 200 EVs with random travel plans within the modified 33-bus and 69-bus systems, and employs the BAT Optimization Algorithm (BOA) for optimal power dispatch. The second technique addresses the power dispatch in islanded systems by sectionalizing them into self-supplied microgrids, aiming to minimize operational costs, system losses, and voltage deviation using the Jaya algorithm. Additionally, a multi-objective cost-effective emission dispatch is evaluated using the Whale Optimization Algorithm (WOA), showing superior performance over Differential Evolution (DE), Particle Swarm Optimization (PSO), and Grey Wolf Optimization (GWO). Comparative analysis highlights the scalability and adaptability of the proposed approach, making it a valuable tool for efficient microgrid management. Simulation results confirm significant improvements in cost savings, system reliability, and operational efficiency under various uncertainty scenarios.
Introduction

In recent years, power consumption has increased drastically, to approximately 28,580 TW. To meet this requirement, various power generating units are needed, among which Distributed Generation (DG) energy sources play a crucial role in protecting the environment and limiting the greenhouse effect. When working with renewable energy resources, the operation of solar and wind plants is important. Optimal scheduling of DGs saves cost and increases the efficiency and reliability of the system. With this motivation, various researchers have turned their attention to the optimal scheduling of DGs in microgrid environments. Yang Zou et al. [1] address the challenges posed by the random output of renewable energy and the integration of electric vehicles (EVs); the study proposes a scheduling strategy using wavelet neural networks for renewable energy output prediction and a second-order cone relaxation method to enhance solution efficiency. Reference [2] investigates the use of machine-learning probabilistic forecasting merged with robust optimization to manage dispatch schedules for renewable energy sources in microgrids; this method accounts for the decay in prediction accuracy over time and aims to enhance the reliability and cost-effectiveness of microgrid operations. Feng Zheng [3] discusses a model that addresses the uncertainties in renewable energy output and load demand, utilizing stochastic optimization to generate scenarios that improve the reliability and efficiency of microgrid operations. Fatma Yaprakdal and colleagues [4] explore the integration of DGs in reconfigurable microgrids (RMGs); the paper introduces a hybrid approach combining particle swarm optimization (PSO) and selective PSO to create an optimal reconfiguration and dispatch plan, focusing on power loss minimization and operational efficiency. In addition, the incorporation and optimal placement of EVs in the distribution network is rapidly increasing as the utilization of EVs grows day by day, so introducing EVs plays a crucial role in the energy development sector. In this work, optimal scheduling is employed for fleets of 100 and 200 electric vehicles in the distribution system to reduce power losses and voltage deviation. Greater emphasis has been placed on electric vehicles (EVs) as a result of growing concerns about energy cost reduction, emissions reduction, and the use of fossil fuels [4]. By 2040, EV sales might account for up to 54% of new car sales [5]. Parked EVs that are linked to the grid can help the electricity system by acting as a load while they are charging and by discharging power back into the grid, making the best possible use of the extra power produced by renewable sources [6,7]. With the right methods for charging and discharging, and with support for vehicle-to-grid (V2G) technologies, EVs may provide the power system a number of advantages, including load leveling, spinning reserves, voltage and frequency management, and load balancing. Given its enormous storage/generation capacity, a number of studies have examined the integration of V2G-connected EV battery storage into power networks [7,8]. Charging procedures have an impact on the advantages of EV battery storage and its effect on the power system [9]. By strategically deploying EV battery storage, one may reduce losses, enhance voltage profiles, and ease grid congestion. The best grid-connected EV battery dispatch techniques have also been the subject of research.
Studies that took the demand for EV travel into account have looked at frequency management of the electrical grid. Operational strategies for microgrids with EV fleets have been created for several users [10]. For large-scale V2G, a bi-directional coordinated dispatch algorithm has been put forward [11], illustrating the financial advantages of V2G technology for both the electrical network and EV customers. In addition to traditional methods, the Analytic Hierarchy Process (AHP) [12] and Particle Swarm Optimisation (PSO) [13] have been used to find the best times to deploy EVs and distribute energy sources. Such a plan should not only benefit EV owners financially [14] but also meet the operational needs of the power grid. A variety of factors have been taken into account in earlier studies, such as battery properties, state of charge (SoC) [15,16], cost to EV users, grid integration capacity, energy pricing, dispatch rates, and system restrictions. However, the effect of the availability of renewable distributed generation (DG) electricity on the V2G battery dispatch approach has not been sufficiently covered. The overall summary of the observations made by various researchers is presented in the summary table. From this summary it is understood that various limitations remain, confirming that there is still room to propose an innovative technique for the optimal scheduling and dispatch of power through EVs. With this motivation, the authors of this article propose innovative techniques for the optimal scheduling of DGs in a multi-microgrid and for the optimal power dispatch of EVs. To achieve the optimal schedule of DGs, the Jaya algorithm is proposed; this algorithm offers parameter-free optimization and effective handling of multi-objective problems. For the optimal power dispatch of EVs, the p-ELECTRE technique is proposed. It is a multi-criteria kind of analysis that takes into account the relative weights of many criteria and chooses the best course of action based on the likelihood that the recommended courses of action from independent factors will occur. The suggested dispatch strategy examines the EV battery dispatch strategy in accordance with the relevance of several parameters by utilizing p-ELECTRE. The proposed techniques have been developed with consideration of the multi-microgrid system shown in Figure 1. The proposed system has the following features:

1. The system model has the ability to tackle uncertainties in power generation.
2. The proposed system is capable of optimal scheduling of DGs in a multi-microgrid system.
3. The system utilizes a modified 33-bus system and a modified 69-bus system, each categorized into three microgrids (multi-microgrids).

This study concludes with a dispatch strategy for EV batteries based on the p-ELECTRE approach that takes into account a number of factors and their respective weights. This technique aims to optimize the charging and discharging of EV batteries by taking into account the influence of renewable DG power availability, allowing EV owners to make money while satisfying the operational needs of the power system.
The effectiveness of the suggested technique is assessed by examining the results of introducing 100 and 200 EVs into the system, taking into account various travel schedules with a one-hour time interval. The study considers the availability of photovoltaic (PV) electricity, load demand, and real-time price information [17]. The proposed approach is tested using simulations on a 33-bus distribution system [18,19] that has been updated to include additional distributed generation (DG). Furthermore, the BAT optimization algorithm (BOA), with goals centered on minimizing losses, expenses, and voltage variations, is utilized to dispatch DGs optimally in EV-rich distribution networks. In contrast, the Particle Swarm Optimization (PSO) algorithm emerges as a robust heuristic method capable of efficiently addressing the limitations of conventional approaches; its ability to converge to global optima from diverse initial points makes it particularly effective in optimizing MG scheduling. To validate the accuracy of results obtained through PSO, a comparison is made with a stochastic optimization method in the subsequent sections. These contributions underline the significance of taking into account the availability of renewable DG power in the management of EV batteries [20] and in the optimization of distribution networks, while showcasing the innovative methodology and possible advantages of the suggested dispatch strategy [21-23].

The following is a summary of this paper's significant contributions:

1. A novel p-ELECTRE multi-criteria decision-making technique is proposed as an optimal dispatch approach for EV batteries.
2. The proposed dispatch approach is applied to fleets of 100 EVs and 200 EVs, followed by testing on a modified 33-bus distribution system with extra DGs.
3. The BAT optimization algorithm (BOA) is used for optimal power dispatch, strategically distributing EV fleets in the distribution system to reduce losses, costs, and voltage variations and thereby provide the best DG dispatch.
4. A novel Energy Management System (EMS) is proposed for the modified 33-bus distribution system that demonstrates its adaptability across various microgrid (MG) operations, ensuring optimal performance.
5. To achieve optimal scheduling of DGs in a multi-microgrid, the Jaya algorithm is used and compared with the Genetic Algorithm.
6. To tackle the uncertainties of DGs (solar, wind, CHP), time-based demand response programs are introduced to analyse the cost reduction during peak hours and under uncertainty. The simulation results are validated by comparison with the PSO method and with stochastic optimization based on a probabilistic approach (mean, standard deviation).
7. A novel optimization-based multi-objective Cost-Effective Emission Dispatch is carried out on the test system to reduce the cost of power dispatch. The simulation results are validated in comparison with different optimization techniques.

The organization of the article is as follows: Section 2 deals with BAT-optimization-based DG scheduling, Section 3 with the problem formulation, and Section 4 with simulation results and comparison with other optimization techniques. Finally, conclusions are given in Section 5.
Bat optimisation algorithm-based DG scheduling

In this study a modified 33-bus system is considered [3]. Three independent microgrids (MGs) make up the distribution system, which has been sectioned off to take advantage of sectionalization. The segmentation procedure is carefully arranged to enable both isolated and combined operation of the MG(s) while preserving the system's radial structure; to do this, one switch is turned off while another is turned on [24]. Data on active and reactive power for each microgrid are included in Table 1. Reference [24] is used to guide the process of opening and closing lines to enable the independent or combined functioning of the microgrids. These changes are essential to enabling the microgrids' flexibility and effectiveness within the redesigned distribution system.

The proposed dispatch technique based on p-ELECTRE is applied to the modified 33-bus distribution system. For the investigations in the following sections, a specific set of weights [0.4, 0.2, 0.3, 0.1] is studied, since it has proved to have the most significant influence on the system load curve. Scheduling of distributed generators (DGs) [21-23] is handled using the BAT optimization technique [24,25].

Table 1 depicts microgrid operation under different fault cases. To analyze the power dispatch losses and DG scheduling in the proposed multi-microgrid system, multiple faults occurring in different possible combinations of microgrids are considered; these combinations are studied as different case studies. The dispatch strategy for EV batteries should take into account the availability of renewable DG power as an extra factor, in addition to state of charge (SoC), electricity pricing, and load levelling.

Objective 1: Minimization of Loss

The appropriate scheduling of distributed generators (DGs) plays a vital role in the reduction of system losses:

α = (System loss with DG) / (System loss without DG). (1)

From Equations (1) and (2), the total system losses can be calculated. In this context, C_pq represents the power flow from the p-bus to the q-bus, while C_qp represents the power flow in the opposite direction, from the q-bus to the p-bus. The overall loss in a particular line, indicated as C_loss, is the sum of all individual line losses.

Objective 2: Economic Aspect of Operation

The economic aspect of operation is a major consideration in determining the appropriate dispatch of distributed generators (DGs). In this work, the cost of DGs is represented as a quadratic function [19,20], as indicated in Equation (5); the cost coefficients of the i-th DG are represented by a_i, b_i, and c_i:

ϕ = (Cost of generation with DG) / (Maximum cost of generation). (5)

The objective function: to accomplish optimization with multiple objectives, a weighted-sum technique is applied, where all parameters have equal weight. In order to optimise the objective functions, it is important to meet the equality and inequality constraints.
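To make the weighted-sum construction concrete, here is a minimal Python sketch of the normalized-objective combination described above. The equal weights and the example numbers are illustrative assumptions; the paper's exact normalizations are defined by Equations (1)-(6).

```python
def loss_ratio(loss_with_dg, loss_without_dg):
    """alpha in Eq. (1): normalized system loss."""
    return loss_with_dg / loss_without_dg

def cost_ratio(cost_with_dg, max_cost):
    """phi in Eq. (5): normalized generation cost."""
    return cost_with_dg / max_cost

def objective(loss_with_dg, loss_without_dg, cost_with_dg, max_cost,
              w_loss=0.5, w_cost=0.5):
    """Equal-weight weighted-sum objective to be minimized by BOA."""
    return (w_loss * loss_ratio(loss_with_dg, loss_without_dg)
            + w_cost * cost_ratio(cost_with_dg, max_cost))

# illustrative values only: 0.5 * 0.6 + 0.5 * 0.75 = 0.675
print(objective(120.0, 200.0, 4500.0, 6000.0))
```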
Objective Function of Economic Load Dispatch

The Economic Load Dispatch (ELD) problem aims to distribute the load of a power system among the different generation units in order to reduce the fuel cost of the traditional generators, while satisfying multiple constraints and meeting the system's load demand. The fuel expense of the traditional generators can be represented as a quadratic polynomial,

F(P) = Σ_i (a_i P_i² + b_i P_i + c_i),

where P_i is the output power of the i-th generation unit and a_i, b_i and c_i are the cost coefficients of the i-th generator. F(P) is the fuel expense in $/hr.

Objective Function of Combined Economic-Emission Dispatch

The problem of multi-objective economic-emission dispatch can be expressed mathematically as the minimization of Σ_i [F_i(P_i) + h_i E_i(P_i)], where h_i is the penalty factor for the i-th generating unit and E_i(P_i) its emission function. The optimization is subject to the following constraints:

a) Power Balance Constraint: The power balance of the system is considered as an equality constraint. Furthermore, the Index of Energy Reliability (IER) is taken into consideration as an additional constraint.

b) Generation Capacity Constraints: The generation capacity constraints place limits on the active power output and reactive power production of each generator.

c) Bus Voltage Constraints: In order to provide voltage stability, it is necessary to ensure that the magnitude of the voltage at every bus in the microgrid remains within specified lower and upper limits [16].

d) Reliability Constraint: The IER represents the impact of an erratic power supply on customers and provides a measure of the reliability of the power delivered by the community of generators. A high IER value indicates a lower likelihood of customers experiencing disruptions. The IER is influenced by factors such as the parameter λ_i of each DG and its corresponding power output, where λ_i reflects the probability of a DG failing to meet the load requirements. The calculation of the IER, described in Equation (8), incorporates these factors to assess the overall reliability of the power system.

e) Considering Uncertainty: Each microgrid assesses multiple factors as decision variables on an hourly basis, such as the power produced by different sources like microturbines (MTs), fuel cells (FCs), and combined heat and power systems (CHPs), the charging or discharging of batteries, and energy transactions with neighbouring microgrids and the grid. As a result, every microgrid has five sets of factors for each hour, for a total of 120 variables for the optimal scheduling of DGs. These variables need to be precisely defined and stated.
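The ELD and CEED objectives above can be sketched directly in code. The following is a minimal illustration, assuming the standard penalty-factor form of CEED; the coefficient values are made up for demonstration and are not the paper's data.

```python
import numpy as np

def fuel_cost(P, a, b, c):
    """Quadratic fuel cost F(P) = sum_i (a_i P_i^2 + b_i P_i + c_i), in $/hr."""
    P, a, b, c = map(np.asarray, (P, a, b, c))
    return np.sum(a * P**2 + b * P + c)

def ceed_cost(P, a, b, c, emission, h):
    """Penalty-factor CEED: sum_i [F_i(P_i) + h_i * E_i(P_i)].
    `emission` holds the per-unit emission values E_i(P_i) and `h` the
    penalty factors h_i; the paper's exact emission model is not shown here."""
    return fuel_cost(P, a, b, c) + np.sum(np.asarray(h) * np.asarray(emission))

# illustrative three-unit example with made-up coefficients
P = [100.0, 150.0, 80.0]
a, b, c = [0.004, 0.006, 0.009], [2.0, 1.8, 2.1], [100.0, 120.0, 80.0]
print(fuel_cost(P, a, b, c))
print(ceed_cost(P, a, b, c, emission=[20.0, 35.0, 15.0], h=[1.5, 1.2, 1.8]))
```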
This work utilizes the Monte Carlo simulation (MCS) method to tackle uncertainties related to wind turbines, solar panels, and loads. To capture the intricacy of the situation, a scenario-based approach is employed rather than relying exclusively on MCS. A variety of scenarios are created to cover the full range of the uncertain inputs indicated above, and the system is evaluated under each scenario assuming that the inputs are certain; this approach enables the investigation of various system states. Failure to meet system limitations incurs penalties that are applied to the objective function. Afterwards, the Particle Swarm Optimization (PSO) technique is utilized to estimate the decision variables over a 24-hour period, taking into account the given limits. Ultimately, the anticipated value of each variable is calculated from its average and standard deviation across all possible scenarios.
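A hedged sketch of the scenario-based MCS summary just described follows. The distributions chosen for load, wind and PV are placeholders, since the paper does not state them here; the point is the final step of summarizing each quantity by its mean and standard deviation across scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_scenarios(n_scen, hours=24):
    """Illustrative MCS scenario generator: normal/beta shapes are assumptions,
    not the paper's distributions."""
    load = rng.normal(1.0, 0.05, size=(n_scen, hours))              # p.u. load
    wind = np.clip(rng.normal(0.4, 0.15, size=(n_scen, hours)), 0, 1)
    pv = rng.beta(2, 2, size=(n_scen, hours)) * np.maximum(
        0, np.sin(np.pi * (np.arange(hours) - 6) / 12))             # daylight shape
    return load, wind, pv

load, wind, pv = sample_scenarios(1000)
net_load = load - wind - pv        # residual demand per scenario and hour

# expected value and standard deviation across scenarios, as used in the text
print(net_load.mean(axis=0)[:4])
print(net_load.std(axis=0)[:4])
```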
The suggested structure for Networked Microgrids (NMG) views Distribution Network Operators (DNO) and Microgrids (MG) as distinct entities, each with its own objectives focused on optimizing operational costs. Our proposed algorithm addresses this challenge through a two-level approach. Initially, it optimizes the operational costs of each MG independently, considering uncertainties related to Renewable Energy Sources (RESs) and loads; this involves the individual scheduling of generation units within each microgrid to achieve optimal performance. The system includes a central Energy Management System (EMS), depicted in Figure 3, responsible for local resource scheduling and for ensuring the generation-load balance within each MG. Additionally, the networked-MG structure allows for autonomous operation of the DNO and MGs during certain hours. The objective function encompasses generated power, purchased and sold power, and Operations and Maintenance (O&M) costs, with a focus on minimizing both power costs and pollutant emissions.

Simulation results

The proposed model underwent testing on a Multi-Microgrid (MMG), illustrated in Figure 1. Within this structure, MGs interact both internally and with the grid. Every MG has an individual controller and receives relevant data from consumers and generating units. The primary objective of the interconnected microgrid is to minimize operational costs while considering economic factors through demand response programs and external factors. Emission factors for pollution emissions from various sources are outlined by Li et al. [14]. Prices are set at $0.023/kWh, $0.034/kWh, and $0.040/kWh for off-peak, mid-peak, and high-demand times respectively, with a fixed price of $0.034/kWh for power sales. Load amounts and associated prices are detailed in Table 1, based on data from Tuesday, July 12, 2023. Load consumption in each MG is represented by mean values, with equal consumption prices across all MGs. This study considers three scenarios for solving the optimal scheduling of the interconnected microgrid: the first case seeks to solve the economic issues inside the MGs without using DRPs; in the second case, Time-of-Use (TOU) programs are implemented for all load users in each microgrid (MG); in the third scenario, Real-Time Pricing (RTP) programs are extensively integrated in each MG to obtain the most efficient solutions. Figure 8 represents the voltage profile of the modified 33-bus distribution system under different fault cases, covering both single-microgrid and multi-microgrid faults. The combination of operational and emission costs for the three microgrids over a 24-hour period is analyzed in the context of demand response programs (DRPs). Table 5 shows the optimal results obtained by comparing different DRP techniques with different optimization techniques, based on mean values; in addition, the execution time is recorded throughout the optimization process. Furthermore, in Table 6 the outcome achieved through the Particle Swarm Optimization (PSO) technique is compared with a stochastic optimization method based on standard deviation. From Table 7 it is evident that, with optimal scheduling of DGs using the Bat Optimization Algorithm (BOA), the addition of 100 EVs increases the generation cost by only 1% compared to the scenario without EVs; similarly, with the connection of 200 EVs, the overall generation cost increases by only 2%. Comparisons of power loss per day and voltage deviations with 100 EVs and 200 EVs, without and with DGs, are also indicated in Table 7. This result is credited to the optimal dispatch method, which allows EVs to charge and discharge at opportune hours to make profits throughout the day. Furthermore, DG scheduling in the distribution system offers other benefits, such as minimizing the maximum voltage variation. Additionally, reliance on the grid is greatly decreased after incorporating DGs into the microgrid. The proposed test case partitions the modified 33-bus distribution system into 3 microgrids (MGs), as indicated in Figure 7. Upon grouping of the distribution system, Table ?? indicates that in the case of a single MG failure, the load delivered varies from a minimum of 50.20% to a maximum of 87.62% (equivalent to (3715 − 460)/3715 × 100) of the total load. Similarly, in the case of multiple MG faults, the load delivered varies from a minimum of 12.38% to a maximum of 49.80% (equivalent to (3715 − 1865)/3715 × 100) of the total load. Figure 9 shows the voltage profile under the different fault cases in the multi-microgrid system.

The Jaya algorithm is configured using standard parameters, Psize = 80 and iteration_max = 200 (a minimal sketch of the Jaya update rule is given below). Additionally, the evolutionary algorithm in this work employs both common and algorithm-specific parameters, including Psize = 80, iteration_max = 200, P_e = 0.1, P_c = 0.7, and P_m = 0.05. The generating cost coefficients are taken from the study of Deb et al. [16]. Two scenarios are constructed depending on the objectives considered: 1. Objective-1: minimization of cost; 2. Objective-2: minimization of power loss.
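For reference, the Jaya update rule is simple enough to sketch in a few lines. The following minimal implementation uses the Psize = 80 and iteration_max = 200 setting quoted above; the sphere test function is a stand-in, not one of the paper's objectives.

```python
import numpy as np

def jaya(f, lb, ub, pop_size=80, max_iter=200, seed=0):
    """Minimal Jaya algorithm: parameter-free apart from the population size
    and iteration budget, matching Psize = 80 and iteration_max = 200."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, size=(pop_size, lb.size))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(max_iter):
        best, worst = X[F.argmin()], X[F.argmax()]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # Jaya update: move toward the best solution and away from the worst
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lb, ub)
        Fn = np.apply_along_axis(f, 1, Xn)
        improved = Fn < F
        X[improved], F[improved] = Xn[improved], Fn[improved]
    return X[F.argmin()], F.min()

# toy test: sphere function standing in for the cost/loss objective
x, fx = jaya(lambda v: np.sum(v ** 2), lb=[-5] * 4, ub=[5] * 4)
print(np.round(x, 4), fx)
```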
Under each case, different case studies are defined, as indicated in Table 1. In Case-1 (Cost Minimization), the aim is to minimize the overall operating cost of the DGs in the microgrids. The Jaya method is utilized to determine the optimal values of the generator outputs, and the results are reported in Tables 8 and 9. The convergence characteristics of the GA and Jaya algorithms are shown in Figure 10, from which it is evident that the Jaya algorithm yields the better optimal solution. To examine the superiority of the Jaya algorithm, a comparison was made with the results obtained from the Genetic Algorithm (GA).

In Case-2 (Loss Minimization), the objective function is to reduce the losses in the microgrids. The Jaya algorithm is used to carry out multiple iterations and determine the best possible value of the power losses in the system; the findings obtained using the Jaya algorithm are provided in Table 5. To validate the results acquired through the Jaya algorithm, a comparison is made against the findings reported in Tables 8 and 9. Additionally, the data reveal that as the system size rises, the amount of savings, i.e., the reduction in losses, also increases.

P_Loss is the active power loss in kW and Q_Loss is the reactive power loss in kVAR corresponding to each scenario. Cost ($/hr) represents the cost of power generation per hour for each scenario. V_deviation (in p.u.) represents the per-unit voltage deviation for each scenario. P_demand (kW) represents the real power demand in kW and Q_demand (kVAR) the reactive power demand in kVAR. In Case VII, when there is no fault on the system, all the DGs in all microgrids operate and generate more active power than in all other case studies. When no fault occurs on the system, the voltage deviations within the microgrids and the overall system voltage deviations are reduced; accordingly, in Case VII the voltage deviation is reduced to 1.05E-03. The convergence characteristics of the GA and Jaya algorithms are shown in Figure 12, from which it is again evident that the Jaya algorithm gives the better optimal solution; the costs estimated using alternative optimization strategies for the respective cases were higher. Furthermore, emission dispatch was conducted on the test system indicated in Figure 1 employing different optimization algorithms, and the resulting pollutant emissions (in kg) were compared. Notably, emissions using WOA were significantly lower across the different cases: 2183.9629 kg with all sources, 2264.9788 kg without PV, 2254.2557 kg without wind, and 2379.4554 kg without both RES. These values were notably lower than the emissions from other optimization techniques, with the highest emissions observed when no RES were utilized, due to the increased reliance on conventional generators. Additionally, multi-objective Cost-Effective Emission Dispatch (CEED) was performed using the mentioned optimization techniques, with the results presented. Once again, WOA outperformed the other techniques owing to its efficient exploration and exploitation capabilities. For example, the microgrid cost was $325,364.621 when all sources were utilized, $230,019.0483 without considering PV, $297,907.5634 without wind turbines, and $202,881.7751 without RES.
These results underscore the effectiveness of WOA in achieving better and more profound outcomes compared to the other optimization techniques. The convergence characteristics of Economic Load Dispatch (ELD) and Cost-Effective Emission Dispatch (CEED) using PSO, DE, GWO and WOA are illustrated by the convergence curves depicted in Figures 13(a) and 13(b) for each of the four different scenarios. Across most cases, it is evident that the WOA method achieves convergence in fewer iterations than the other optimization techniques.

Table 10 reports the voltage deviations, along with the active power losses and operating costs, for the different optimization techniques applied to the modified 33-bus and 69-bus systems. The Jaya algorithm consistently delivers the best results in terms of minimizing active power losses and voltage deviations for both the 33-bus and 69-bus systems; however, for the 33-bus system its operating cost is significantly higher than that of the other methods. Particle Swarm Optimization also shows good performance across all metrics and is more cost-effective. The Genetic Algorithm performs the worst in this comparison, with higher power losses, voltage deviations, and operating costs for both systems.

Table 11 compares the performance of the various optimization techniques applied to the modified 33-bus and 69-bus systems, focusing on active power losses and operating costs. The Whale Optimization Algorithm consistently delivers the best performance for both systems, achieving the lowest active power losses and operating costs. Particle Swarm Optimization also performs well, ranking second for both systems. The Grey Wolf Algorithm provides moderate results, while the Genetic Algorithm performs the worst in this comparison, with the highest active power losses and operating costs. Table 12 provides a comparative analysis of the performance of the different optimization techniques on the modified 33-bus and 69-bus systems in terms of active power losses and operating costs. The Whale Optimization Algorithm consistently delivers the best results in minimizing active power losses and operating costs for both systems, suggesting that it is a highly effective technique for optimizing the performance of power distribution systems. Other methods, such as Particle Swarm Optimization and the Grey Wolf Algorithm, also perform well, but not as consistently optimally as the Whale Optimization Algorithm.
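Since WOA repeatedly comes out on top in these comparisons, a compact sketch of its update rules may be useful. This follows the commonly published encircling/spiral/search scheme; the agent count, iteration budget, and test function are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np

def woa(f, lb, ub, n_agents=30, max_iter=200, seed=0):
    """Minimal Whale Optimization Algorithm sketch (Mirjalili-Lewis-style
    encircling, bubble-net spiral, and random search phases)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    fit = np.apply_along_axis(f, 1, X)
    best, best_f = X[fit.argmin()].copy(), fit.min()
    for t in range(max_iter):
        a = 2 - 2 * t / max_iter                # a decreases linearly from 2 to 0
        for i in range(n_agents):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if abs(A) < 1:                  # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                           # explore: move toward a random whale
                    rand = X[rng.integers(n_agents)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                               # bubble-net spiral update
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < best_f:
            best, best_f = X[fit.argmin()].copy(), fit.min()
    return best, best_f

# toy test on a sphere function standing in for the dispatch objective
x, fx = woa(lambda v: np.sum(v ** 2), lb=[-10] * 3, ub=[10] * 3)
print(np.round(x, 4), fx)
```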
Conclusions

This research suggests the separation of the distribution system into self-sufficient MGs to efficiently isolate defective MGs in the case of single or multiple MG breakdowns. This method seeks to reduce the number of affected customers. The MGs can function independently or in cooperation with neighbouring MGs. The focus of this research is to reduce operating costs and power losses, and several situations with distinct case studies are provided. In this paper, a dispatch strategy employing the p-ELECTRE technique is developed, which applies a probabilistic approach to decision-making. The proposed method attempts to prioritize the interests of EV owners by preventing extensive battery cycling, charging during periods of low power costs, and discharging during periods of high electricity prices. To encourage EV users to switch to renewable energy sources, the availability of solar electricity is also taken into consideration when making decisions, allowing customers to charge their vehicles when solar power is available. The effects of 100- and 200-EV fleets are investigated in a modified 33-bus radial distribution system together with the proposed methodology, with different weighting schemes applied to the various factors. The resulting optimal dispatch guarantees that the system load is satisfied while reducing losses and expenses. The Jaya method is applied to solve the objective functions in the various contexts. The proposed method is applied to the modified 33-bus and modified 69-bus distribution frameworks as test cases, and the results obtained for the various scenarios are presented. By comparing the findings with those produced by the Genetic Algorithm, the superiority of the Jaya algorithm is established. In future work, for greater efficacy and reliability, the above analysis can be carried out on 118-bus and 123-bus systems, with provisions for security. As the grid becomes more digitized, ensuring the cybersecurity of the MGs and the overall distribution system is crucial; future work could focus on developing robust cybersecurity measures to protect against potential threats and attacks.

Funding: No funding was received for this work.

Figure 2. Location of DGs and tie-line connections for the modified 33-bus system.

Figure 3. Flowchart of the process of scheduling distributed generators (DGs) using the BAT optimization algorithm (BOA) and the p-ELECTRE method for EV dispatch.

Figure 4. Location of DGs and tie-line connections for the modified 69-bus system.

Figure 5. The outcomes derived from TOU and RTP demand response programs, using mean values.
Figure 4 illustrates the load profile of each MG under the three scenarios. Notably, when the TOU and RTP programs are applied, electrical load consumption reacts to price variations, with increased load during valley times and heightened demand during high-price periods. The PSO algorithm is employed to minimize the cost function with a different load curve for each individual microgrid. Table 2 outlines the day-ahead scheduling over a 24-hour period, with results achieved via PSO. Notably, the TOU and RTP programs demonstrate significant reductions in operating costs, with RTP offering superior results; DRPs additionally benefit consumers by reducing their costs. Figure 5 depicts the load demand cost of the microgrids, considering the combined operation and pollution costs of the three microgrids across the day-ahead scheduling under DRPs, and comparing the results obtained by Particle Swarm Optimization with those of stochastic optimization methods. Figure 6 displays an overview of the combined running and pollution costs, taking demand response programs into account; it emphasizes the effectiveness of demand response programs in microgrids and the superiority of real-time pricing (RTP) over time-of-use (TOU) programs. Figure 7 compares the cost reductions across the demand response programs, showing the differences between RTP and TOU relative to No DRP, and presents the cost profile minimization using various DRP combinations with optimization techniques such as particle swarm and stochastic optimization.

Figure 7. Comparing the cost function profile.

Figure 8. Voltage profile of the modified 33-bus distribution system.

Figure 10. Convergence characteristics of the GA and Jaya algorithms.

Figure 12. Convergence characteristics of the GA and Jaya algorithms.

Figure 13. Convergence curve characteristics observed during the execution of ELD with the various algorithms.

Table 2. Operational cost per hour of the entire set of microgrids, along with the percentage reduction compared to the state without demand response programs (DRP).

Table 3. Hourly emission costs for the microgrids, with the percentage reduction compared to the state without DRP.

Table 4. Consumption cost per hour of the entire set of microgrids, along with the percentage reduction compared to the state without demand response programs (DRP).

Table 5. Optimal results comparing different DRP techniques with different optimization techniques, based on the mean.

Table 6. Optimal results comparing different DRP techniques with different optimization techniques, based on the standard deviation.

Table 7. Optimal scheduling of DGs using the Bat Optimization Algorithm (BOA).

Table 8. Optimal DG scheduling using the Jaya algorithm for minimizing power loss.

Table 9. Optimal DG scheduling using the Jaya algorithm for minimizing cost.

Table 10. Comparison of power loss, voltage deviations and operating cost for the modified 33-bus and modified 69-bus systems.

Table 11. Comparison of Economic Load Dispatch (ELD) when all DG sources are considered.

Table 12. Comparison of Economic Load Dispatch (ELD) when all DG sources are excluded.
Learning Programmatic Idioms for Scalable Semantic Parsing

Programmers typically organize executable source code using high-level coding patterns or idiomatic structures such as nested loops, exception handlers and recursive blocks, rather than as individual code tokens. In contrast, state-of-the-art (SOTA) semantic parsers still map natural language instructions to source code by building the code syntax tree one node at a time. In this paper, we introduce an iterative method to extract code idioms from large source code corpora by repeatedly collapsing the most frequent depth-2 subtrees of their syntax trees, and we train semantic parsers to apply these idioms during decoding. Applying idiom-based decoding on a recent context-dependent semantic parsing task improves the SOTA by 2.2% BLEU score while reducing training time by more than 50%. This improved speed enables us to scale up the model by training on an extended training set that is 5× larger, to further move up the SOTA by an additional 2.3% BLEU and 0.9% exact match. Finally, idioms also significantly improve the accuracy of semantic parsing to SQL on the ATIS-SQL dataset when training data is limited.

Introduction

When programmers translate Natural Language (NL) specifications into executable source code, they typically start with a high-level plan of the major structures required, such as nested loops, conditionals, etc., and then proceed to fill in specific details into these components. We refer to these high-level structures (Figure 1 (b)) as code idioms (Allamanis and Sutton, 2014). In this paper, we demonstrate how learning to use code idioms leads to an improvement in model accuracy and training time for the task of semantic parsing, i.e., mapping intents in NL into general-purpose source code (Iyer et al., 2017; Ling et al., 2016).

State-of-the-art semantic parsers are neural encoder-decoder models, where decoding is guided by the target programming language grammar (Yin and Neubig, 2017; Rabinovich et al., 2017; Iyer et al., 2018) to ensure syntactically valid programs. For general-purpose programming languages with large formal grammars, this can easily lead to long decoding paths even for short snippets of code. For example, Figure 1 shows an intermediate parse tree for a generic if-then-else code snippet, for which the decoder requires as many as eleven decoding steps before ultimately filling in the slots for the if condition, the then expression and the else expression. However, the if-then-else block can be seen as a higher-level structure, such as shown in Figure 1 (b), that can be applied in one decoding step and reused in many different programs. In this paper, we refer to frequently recurring subtrees of programmatic parse trees as code idioms, and we equip semantic parsers with the ability to learn and directly generate idiomatic structures as in Figure 1 (b).

We introduce a simple iterative method to extract idioms from a dataset of programs by repeatedly collapsing the most frequent depth-2 subtrees of syntax parse trees. Analogous to the byte pair encoding (BPE) method (Gage, 1994; Sennrich et al., 2016), which creates new subtokens of words by repeatedly combining frequently occurring adjacent pairs of subtokens, our method takes a depth-2 syntax subtree and replaces it with a tree of depth 1 by removing all the internal nodes.
This method is in contrast with the approach using probabilistic tree substitution grammars (pTSG) taken by Allamanis and Sutton (2014), who use the explanation quality of an idiom to prioritize idioms that are more interesting, with the end goal of suggesting useful idioms to programmers using IDEs. Once idioms are extracted, we greedily apply them to semantic parsing training sets to provide supervision for learning to apply idioms. We evaluate our approach on two semantic parsing tasks that map NL into 1) general-purpose source code, and 2) executable SQL queries, respectively. On the first task, i.e., context dependent semantic parsing (Iyer et al., 2018) using the CONCODE dataset, we improve the state of the art (SOTA) by 2.2% of BLEU score. Furthermore, generating source code using idioms results in a more than 50% reduction in the number of decoding steps, which cuts down training time to less than half, from 27 to 13 hours. Taking advantage of this reduced training time, we further push the SOTA on CONCODE to an EM of 13.4 and a BLEU score of 28.9 by training on an extended version of the training set (with 5× the number of training examples). On the second task, i.e., mapping NL utterances into SQL queries for a flight information database (ATIS-SQL; Iyer et al. (2017)), using idioms significantly improves denotational accuracy over SOTA models when a limited amount of training data is used, and also marginally outperforms the SOTA when the full training set is used (more details in Section 7). Related Work Neural encoder-decoder models have proved effective in mapping NL to logical forms (Dong and Lapata, 2016) and also for directly producing general purpose programs (Iyer et al., 2017, 2018). Ling et al. (2016) use a sequence-to-sequence model with attention and a copy mechanism to generate source code. Instead of directly generating a sequence of code tokens, recent methods focus on constrained decoding mechanisms to generate syntactically correct output using a decoder that is either grammar-aware or has a dynamically determined modular structure paralleling the structure of the abstract syntax tree (AST) of the code (Rabinovich et al., 2017; Yin and Neubig, 2017). Iyer et al. (2018) use a similar decoding approach but use a specialized context encoder for the task of context-dependent code generation. We augment these neural encoder-decoder models with the ability to decode in terms of frequently occurring higher level idiomatic structures to achieve gains in accuracy and training time. Another different but related method to produce source code is using sketches, which are code snippets containing slots in the place of low-level information such as variable names, method arguments, and literals. Dong and Lapata (2018) generate such sketches using programming language-specific sketch creation rules and use them as intermediate representations to train token-based seq2seq models that convert NL to logical forms. Hayati et al. (2018) retrieve sketches from a large training corpus and modify them for the current input; Murali et al. (2018) use a combination of neural learning and type-guided combinatorial search to convert existing sketches into executable programs, whereas Nye et al. (2019) additionally also generate the sketches before synthesising programs.
Our idiom-based decoder learns to produce commonly used subtrees of programming syntax trees in one decoding step, where the non-terminal leaves function as slots that can be subsequently expanded in a grammar-aware fashion. Code idioms can be roughly viewed as a tree-structured generalization of sketches that can be automatically extracted from large code corpora for any programming language, and unlike sketches, can also be nested with other idioms or grammar rules. More closely related to the idioms that we use for decoding is Allamanis and Sutton (2014), who develop a system (HAGGIS) to automatically mine idioms from large code bases. They focus on finding interesting and explainable idioms, e.g., those that can be included as preset code templates in programming IDEs. Instead, we learn frequently used idioms that can be easily associated with NL phrases in our dataset. The production of large subtrees in a single step directly translates to a large speedup in training and inference. Concurrent with our research, Shin et al. (2019) also develop a system (PATOIS) for idiom-based semantic parsing and demonstrate its benefits on the Hearthstone (Ling et al., 2016) and Spider (Yu et al., 2018) datasets. While we extract idioms by collapsing frequently occurring depth-2 AST subtrees and apply them greedily during training, they use non-parametric Bayesian inference for idiom extraction and train neural models to either apply an entire idiom or generate its full body. Idiom Aware Encoder-Decoder Models We aim to train semantic parsers that have the ability to use idioms during code generation. To do this, we first extract frequently used idioms from the training set, and then provide them as supervision to the semantic parser's learning algorithm. Formally, if a semantic parser decoder is guided by a grammar G = (N, Σ, R), where N and Σ are the sets of non-terminals and terminals respectively, and R is the set of production rules of the form A → β, A ∈ N, β ∈ {N ∪ Σ}*, we would like to construct an idiom set I with rules of the form B → γ, B ∈ N, γ ∈ {N ∪ Σ}*, such that B ⇒^{≥2} γ under G, i.e., γ can be derived in two or more steps from B under G. For the example in Figure 1, R would contain rules for expanding each non-terminal, such as Statement → if ParExpr Statement IfOrElse and ParExpr → { Expr }. The decoder builds trees from Ĝ = (N, Σ, R ∪ I). Although the set of valid programs under both G and Ĝ is exactly the same, this introduction of ambiguous rules into G in the form of idioms presents an opportunity to learn shorter derivations. In the next two sections, we describe the idiom extraction process, i.e., how I is chosen, and the idiom application process, i.e., how the decoder is trained to learn to apply idioms. Idiom Extraction Algorithm 1 describes the procedure to add idiomatic rules, I, to the regular production rules, R. Our goal is to populate the set I by identifying frequently occurring idioms (subtrees) from the programs in training set D. Since enumerating all subtrees of every AST in the training set is infeasible, we observe that all subtrees s′ of a frequently occurring subtree s are just as or more frequent than s, so we take a bottom-up approach by repeatedly collapsing the most frequent depth-2 subtrees. Intuitively, this can be viewed as a particular kind of generalization of the BPE (Gage, 1994; Sennrich et al., 2016) algorithm for sequences, where new subtokens are created by repeatedly combining frequently occurring adjacent pairs of subtokens.
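The following is a minimal, non-authoritative Python sketch of this extraction idea: repeatedly find the most frequent depth-2 subtree across a corpus of parse trees and collapse it into a depth-1 rule. The Tree class, the serialisation used for counting, and the in-place collapse are our own simplifications; in particular, the full algorithm introduces a fresh rule name for each collapsed idiom, which is elided here.

```python
# Sketch of idiom extraction (in the spirit of Algorithm 1): repeatedly find
# the most frequent depth-2 subtree in a corpus of parse trees and collapse
# it. Simplified; the full algorithm names each collapsed idiom as a new rule.
from collections import Counter

class Tree:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def depth2_key(node):
    """Serialise a node's rule together with its children's rules, or None
    if the subtree rooted here is not of depth 2 (no expanded child)."""
    if not node.children or all(not c.children for c in node.children):
        return None
    return (node.label, tuple((c.label, tuple(g.label for g in c.children))
                              for c in node.children))

def count_depth2(trees):
    counts, stack = Counter(), list(trees)
    while stack:
        node = stack.pop()
        key = depth2_key(node)
        if key is not None:
            counts[key] += 1
        stack.extend(node.children)
    return counts

def collapse_matching(node, key):
    """Collapse every occurrence of `key` in place: internal child nodes are
    removed and their children (the leaves) attach directly to the root."""
    for c in node.children:
        collapse_matching(c, key)
    if depth2_key(node) == key:
        flat = []
        for c in node.children:
            flat.extend(c.children if c.children else [c])
        node.children = flat

def extract_idioms(trees, k):
    idioms = []
    for _ in range(k):                         # K iterations
        counts = count_depth2(trees)
        if not counts:
            break
        key, _ = counts.most_common(1)[0]      # most frequent depth-2 subtree
        idioms.append(key)                     # its collapsed, depth-1 form
        for t in trees:
            collapse_matching(t, key)          # post-process the corpus
    return idioms
```

Because the corpus is re-collapsed after every iteration, later iterations can select depth-2 patterns that already contain earlier idioms, which is how larger (effectively deeper) idioms emerge.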
Note that subtrees of parse trees have an additional constraint, i.e., either all or none of the children of non-terminal nodes are included, since a grammar rule has to be used entirely or not at all. We perform idiom extraction in an iterative fashion. We first populate T with all parse trees of programs in D using grammar G (Step 4). Each iteration then comprises retrieving the most frequent depth-2 subtree s from T (Step 8), followed by post-processing T to replace all occurrences of s in T with a collapsed (depth-1) version of s (Steps 10 and 17). The collapse function (Step 20) simply takes a subtree, removes all its internal nodes and attaches its leaves directly to its root (Step 22). The collapsed version of s is a new idiomatic rule (a depth-1 subtree), which we add to our set of idioms, I (Step 12). We illustrate two iterations of this algorithm in Figure 2 ((a)-(b) and (c)-(d)). Assuming (a) is the most frequent depth-2 subtree in the dataset, it is transformed into the idiomatic rule in (b). Larger idiomatic trees are learned by combining several depth-2 subtrees as the algorithm progresses. This is shown in Figure 2 (c), which contains the idiom extracted in (b) within it owing to the post-processing of the dataset after idiom (b) is extracted (Step 10 of Algorithm 1), which effectively makes the idiom in (d) a depth-3 idiom. We perform idiom extraction for K iterations. In our experiments we vary the value of K based on the number of idioms we would like to extract. Model Training with Idioms Once a set of idioms I is obtained, we next train semantic parsing models to apply these idioms while decoding. We do this by supervising grammar rule generation in the decoder using a compressed set of rules for each example, using the idiom set I (see Algorithm 2). More concretely, we first obtain the parse tree t_i (or grammar rule set p_i) for each training program y_i under grammar G (Step 3) and then greedily collapse each depth-2 subtree in t_i corresponding to every idiom in I (Step 5). Once t_i cannot be further collapsed, we translate t_i into production rules r_i based on the collapsed tree, with |r_i| ≤ |p_i| (Step 7). This process is illustrated in Figure 3, where we perform two applications of the first idiom from Figure 2 (b), followed by one application of the second idiom from Figure 2 (d), after which the tree cannot be further compressed using those two idioms. The final tree can be represented using |r_i| = 2 rules instead of the original |p_i| = 5 rules. The decoder is then trained similar to previous approaches (Yin and Neubig, 2017) using the compressed set of rules. We observe a rule set compression of more than 50% in our experiments (Section 7). Experimental Setup We apply our approach on 1) the context dependent encoder-decoder model of Iyer et al. (2018) on the CONCODE dataset, where we outperform an improved version of their best model, and 2) the task of mapping NL utterances to SQL queries on the ATIS-SQL dataset (Iyer et al., 2017), where an idiom-based model using the full training set outperforms the SOTA, also achieving significant gains when using a reduced training set.

Figure 5: Example NL utterance with its corresponding executable SQL query from the ATIS-SQL dataset.
NL: List all flights from Denver to Seattle
SQL: SELECT DISTINCT flight_1.flight_id FROM flight f1, airport_service as1, city c1, airport_service as2, city c2 WHERE f1.from_airport = as1.airport_code AND as1.city_code = c1.city_code AND c1.city_name = "Denver" AND f1.to_airport = as2.airport_code AND as2.city_code = c2.city_code AND c2.city_name = "Seattle";
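As a companion to the extraction sketch above, this hedged sketch mirrors the greedy training-time application of idioms (Algorithm 2): each training tree is collapsed with the extracted idioms until no further collapse applies, and the compressed tree is then linearised into the rule sequence used as decoder supervision. It reuses the hypothetical Tree helpers and collapse_matching from the previous sketch.

```python
# Sketch of greedy idiom application at training time (Algorithm 2),
# reusing the hypothetical Tree/collapse_matching helpers sketched earlier.
def tree_size(node):
    return 1 + sum(tree_size(c) for c in node.children)

def compress(tree, idioms):
    """Collapse `tree` with every idiom until no further collapse applies."""
    changed = True
    while changed:
        changed = False
        for key in idioms:
            before = tree_size(tree)
            collapse_matching(tree, key)
            if tree_size(tree) < before:
                changed = True
    return tree

def to_rules(node, rules=None):
    """Linearise the compressed tree into the rule sequence r_i that
    supervises the decoder; |r_i| <= |p_i| after compression."""
    if rules is None:
        rules = []
    if node.children:
        rules.append((node.label, tuple(c.label for c in node.children)))
        for c in node.children:
            to_rules(c, rules)
    return rules
```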
city_name = " Seattle " ; Figure 5: Example NL utterance with its corresponding executable SQL query from the ATIS-SQL dataset. gains when using a reduced training set. Context Dependent Semantic Parsing The CONCODE task involves mapping an NL query together with a class context comprising a list of variables (with types) and methods (with return types), into the source code of a class member function. Figure 4 (a) shows an example where the context comprises variables and methods (with types) that would normally exist in a class that implements a vector, such as vecElements and dotProduct(). Conditioned on this context, the task involves mapping the NL query Adds a scalar to this vector in place into a sequence of parsing rules to generate the source code in Figure 4 (b). Formally, the task is: Given a NL utterance q, a set of context variables {v i } with types {t i }, and a set of context methods {m i } with return types {r i }, predict a set of parsing rules {a i } of the target program. Their best performing model is a neural encoder-decoder with a context-aware encoder and a decoder that produces a sequence of Java grammar rules. Baseline Model We follow the approach of Iyer et al. (2018) with three major modifications in their encoder, which yields improvements in both speed and accuracy (Iyer-Simp). First, in addition to camel-case splitting of identifier tokens, we use byte-pair encoding (BPE) (Sennrich et al., 2016) on all NL tokens, identifier names and types and embed all these BPE tokens using a single embedding matrix. Next, we replace their RNN that contextualizes the subtokens of identifiers and types with an average of the subtoken embeddings instead. Finally, we consolidate their three separate RNNs for contextualizing NL, variable names with types, and method names with types, into a single shared RNN, which greatly reduces the number of model parameters. Formally, let {q i } represent the set of BPE tokens of the NL, and {t ij }, {v ij }, {r ij } and {m ij } represent the jth BPE token of the ith variable type, variable name, method return type, and method name respectively. First, all these elements are embedded using a BPE token embedding matrix B to give us q i , t ij , v ij , r ij and m ij . Using Bi-LSTM f , the encoder then computes: Then, h 1 , . . . , h z , andt i ,v i ,r i ,m i are passed to the attention mechanism in the decoder, exactly as in Iyer et al. (2018). The decoder remains the same as described in Iyer et al. (2018), and produces a probability distribution over grammar rules at each time step (full details in Supplementary Materials). This forms our baseline model (Iyer-Simp). Idiom Aware Training To utilize idioms, we augment this decoder by retrieving the top-K most frequent idioms from the training set (Algorithm 1), followed by post-processing the training set by greedily applying these idioms (Algorithm 2; we denote this model as Iyer-Simp-K). We evaluate all our models on the CONCODE dataset which was created using Java source files from github.com. It contains 100K tuples of (NL, code, context) for training, 2K tuples for development, and 2K tuples for testing. We use a BPE vocabulary of 10K tokens (for matrix B) and get the best validation set results using the original hyperparameters used by Iyer et al. (2018). Since idiom aware training is significantly faster than without idioms, it enables us to train on an additional 400K training examples that Iyer et al. (2018) released as part of CONCODE. 
We report exact match accuracy, corpus level BLEU score (which serves as a measure of partial credit) (Papineni et al., 2002), and training time for all these configurations. Semantic Parsing to SQL This task involves mapping NL utterances into executable SQL queries. We use the ATIS-SQL dataset (Iyer et al., 2017), which pairs NL utterances with executable SQL queries and a flight information database to execute them against (Figure 5 shows an example). The dataset is split into 4,379 training, 491 validation, and 448 testing examples following Kwiatkowski et al. (2011). The SOTA by Iyer et al. (2017) is a Seq2Seq model with attention and achieves a denotational accuracy of 82.5% on the test set. Since using our idiom-based approach requires a model that uses grammar-rule based decoding, we use a modified version of the Seq2Prod model described in Iyer et al. (2018) (based on Yin and Neubig (2017)) as a baseline model (Seq2Prod), and augment the decoder with SQL idioms (Seq2Prod-K). Seq2Prod is an encoder-decoder model, where the encoder executes an n-layer bi-LSTM over NL embeddings and passes the final-layer LSTM hidden states to an attention mechanism in the decoder.

Table 3: Exact Match and BLEU scores on the test (validation) set of CONCODE by training Iyer-Simp-400 on the extended training set released by Iyer et al. (2018). Significant improvements in training speed after incorporating idioms make training on large amounts of data possible.

Model | Exact | BLEU
1× Train | 12.0 (9.7) | 26.3 (23.8)
2× Train | 13.0 (10.3) | 28.4 (25.2)
3× Train | 13.3 (10.4) | 28.6 (26.5)
5× Train | 13.4 (11.0) | 28.9 (26.6)

Note that the Seq2Prod encoder described in Iyer et al. (2018) encodes a concatenated sequence of NL and context, but ATIS-SQL instances do not include contextual information. Thus, if q_i represents each lemmatized token of the NL, the tokens are first embedded using a token embedding matrix B to give us q_i. Using Bi-LSTM f, the encoder then computes contextualized hidden states h_1, . . . , h_z, which are passed to the attention mechanism in the decoder. The sequential LSTM decoder uses attention and produces a sequence of grammar rules {a_t}. The decoder hidden state at time t, s_t, is computed based on an embedding of the current nonterminal n_t to be expanded, an embedding of the previous production rule a_{t−1}, an embedding of the parent production rule, par(n_t), that produced n_t, the previous decoder state s_{t−1}, and the decoder state of the LSTM cell that produced n_t, denoted as s_{n_t}. s_t is then used for attention and finally produces a distribution over grammar rules. We make two modifications in this decoder. First, we remove the dependence of LSTM f on the parent LSTM cell state s_{n_t}. Second, instead of using direct embeddings of rules a_{t−1} and par(n_t) in LSTM f, we use another Bi-LSTM across the left and right sides of each rule (using separator symbol SEP) and use its final hidden state as the input to LSTM f instead. More concretely, whereas the original decoder computes s_t = LSTM_f(n_t, Emb(a_{t−1}), Emb(par(n_t)), s_{t−1}, s_{n_t}), we compute s_t = LSTM_f(n_t, BiLSTM(a_{t−1}), BiLSTM(par(n_t)), s_{t−1}). This modification can help the LSTM f cell locate the position of n_t within rules a_{t−1} and par(n_t), especially for lengthy idiomatic rules. We present a full description of this model with all hyperparameters in the supplementary materials. Idiom Aware Training As before, we augment the set of decoder grammar rules with the top-K idioms extracted from ATIS-SQL. To represent SQL queries as grammar rules, we use the python sqlparse package. Iyer-Simp yields a large improvement of 3.9 EM and 2.2 BLEU over the best model of Iyer et al.
(2018), while also being significantly faster (27 hours for 30 training epochs as compared to 40 hours). Using a reduced BPE vocabulary makes the model memory efficient, which allows us to use a larger batch size that in turn speeds up training. Furthermore, using 200 code idioms further improves BLEU by 2.2% while maintaining comparable EM accuracy. Using the top-200 idioms results in a target AST compression of more than 50%, which results in fewer decoder RNN steps being performed. This reduces training time further by more than 50%, from 27 hours to 13 hours. Results and Discussion In Table 2, we illustrate the variations in EM, BLEU and training time with the number of idioms. We find that 200 idioms performs best overall in terms of balancing accuracy and training time. Adding more idioms continues to reduce training time, but accuracy also suffers. Since we permit idioms to contain identifier names to capture frequently used library methods, having too many idioms hurts generalization, especially since the test set is built using repositories disjoint from the training set. Finally, the amount of compression, and therefore the training time, plateaus after the top-600 idioms are incorporated. Compared to the model of Iyer et al. (2018), our significantly reduced training time enables us to train on their extended training set. We run Iyer-Simp using 400 idioms (taking advantage of even lower training time) on up to 5 times the amount of data, while making sure that we do not include in training any NL from the validation or the test sets. Since the set of idioms learned from the original training set is quite general, we use it directly rather than relearning the idioms from scratch. We report EM and BLEU scores for different amounts of training data on the same validation and test sets as CONCODE in Table 3. In general, accuracies increase with the amount of data, with the best model achieving a BLEU score of 28.9 and an EM of 13.4. Figure 7 shows example idioms extracted from CONCODE: (a) is an idiom to construct a new object with arguments, (b) represents a try-catch block, and (c) is an integer-based for loop. In (e), we show how small idioms are combined to form larger ones; it combines an if-then idiom with a throw-exception idiom, which throws an object instantiated using idiom (a). The decoder also learns idioms to directly generate common library methods such as System.out.println( StringLiteral ) in one decoding step (d). For the NL to SQL task, we report denotational accuracy in Table 4. We observe that Seq2Prod underperforms the Seq2Seq model of Iyer et al. (2017), most likely because a SQL query parse is much longer than the original query. This is remedied by using the top-400 idioms, which compress the decoded sequence size, marginally outperforming the SOTA (83.2%). Finegan-Dollak et al. (2018) observed that the SQL structures in ATIS-SQL are repeated numerous times in both train and test sets, thus enabling Seq2Seq models to memorize these structures without explicit idiom supervision. To test a scenario with limited repetition of structures, we compare Seq2Seq with Seq2Prod-K for limited training data (increments of 20%) and observe (Figure 6) that idioms are additionally helpful with less training data, consistent with our intuition. Conclusions We presented a general approach to make semantic parsers aware of target idiomatic structures, by first identifying frequently used idioms, followed by providing models with supervision to apply these idioms.
We demonstrated this approach on the task of context dependent code generation where we achieved a new SOTA in EM accuracy and BLEU score. We also found that decoding using idioms significantly reduces training time and allows us to train on significantly larger datasets. Finally, our approach also outperformed the SOTA for a semantic parsing to SQL task on ATIS-SQL, with significant improvements under a limited training data regime.
5,826.8
2019-04-19T00:00:00.000
[ "Computer Science" ]
Role of PRY-1/Axin in heterochronic miRNA-mediated seam cell development Background Caenorhabditis elegans seam cells serve as a good model to understand how genes and signaling pathways interact to control asymmetric cell fates. The stage-specific pattern of seam cell division is coordinated by a genetic network that includes WNT asymmetry pathway components WRM-1, LIT-1, and POP-1, as well as heterochronic microRNAs (miRNAs) and their downstream targets. Mutations in pry-1, a negative regulator of WNT signaling that belongs to the Axin family, were shown to cause seam cell defects; however, the mechanism of PRY-1 action and its interactions with miRNAs remain unclear. Results We found that pry-1 mutants in C. elegans exhibit seam cell, cuticle, and alae defects. To examine this further, a miRNA transcriptome analysis was carried out, which showed that let-7 (miR-48, miR-84, miR-241) and lin-4 (lin-4, miR-237) family members were upregulated in the absence of pry-1 function. Similar phenotypes and patterns of miRNA overexpression were also observed in C. briggsae pry-1 mutants, a species that is closely related to C. elegans. RNA interference-mediated silencing of wrm-1 and lit-1 in the C. elegans pry-1 mutants rescued the seam cell defect, whereas pop-1 silencing enhanced the phenotype, suggesting that all three proteins are likely important for PRY-1 function in seam cells. We also found that these miRNAs were overexpressed in pop-1 hypomorphic animals, suggesting that PRY-1 may be required for POP-1-mediated miRNA suppression. Analysis of the let-7 and lin-4-family heterochronic targets, lin-28 and hbl-1, showed that both genes were significantly downregulated in pry-1 mutants, and furthermore, lin-28 silencing reduced the number of seam cells in mutant animals. Conclusions Our results show that PRY-1 plays a conserved role to maintain normal expression of heterochronic miRNAs in nematodes. Furthermore, we demonstrated that PRY-1 acts upstream of the WNT asymmetry pathway components WRM-1, LIT-1, and POP-1, and miRNA target genes in seam cell development. Electronic supplementary material The online version of this article (10.1186/s12861-019-0197-5) contains supplementary material, which is available to authorized users. The downstream targets of these heterochronic miRNAs include lin-14, hbl-1, and lin-28 [10]. HBL-1 is a zinc-finger transcription factor that critically mediates embryogenesis [11], and that controls developmental timing during post-embryonic development [9]. LIN-14 is a novel class of transcription factor [12] that is initially expressed at high levels in hypodermal blast cells in newly hatched L1 animals but at lower levels by the L2 stage [13]. LIN-28 is a conserved RNA-binding protein that controls the maturation of let-7 miRNA [5, 14–16]. Hypomorphic and null alleles of lin-14 and hbl-1 cause an increase, whereas lin-28 mutants cause a decrease, in the overall number of seam cells [9,17,18]. In addition to heterochronic miRNAs and their targets, seam cell division is also regulated by the divergent WNT asymmetry pathway, whose components include the β-catenins WRM-1 and SYS-1, the NEMO-like kinase (NLK) LIT-1, and the T-cell factor/lymphoid enhancer factor (TCF/LEF) POP-1 [18,19]. Removal of POP-1 activity causes seam cells to divide symmetrically, and thereby leads to an increase in their number. Conversely, since LIT-1 normally forms a complex with WRM-1 to phosphorylate and thus stimulate POP-1 export from the nucleus, disrupting WRM-1 and/or LIT-1 activity reduces the number of seam cells [19].
Similarly, the ratio of nuclear POP-1/SYS-1 activity affects the fate of daughter cells, such that those with lower POP-1 (and hence comparatively higher SYS-1) levels retain their seam cell fate, whereas those with higher POP-1 (and hence comparatively lower SYS-1) levels differentiate [20–23]. Genetic studies have also shown that WNT asymmetry pathway components interact with heterochronic genes to control seam cell development [17,18]. While investigating the role of pry-1 in developmental and post-developmental processes, we observed that pry-1(mu38) animals exhibit a weaker cuticle and abnormal alae. Further analysis revealed that they also display a higher number of seam cells, a phenotype that was previously reported [19]. Similar defects were also seen in a C. briggsae pry-1 mutant, Cbr-pry-1(sy5353) [24], suggesting a conserved role for pry-1 in seam cell development and cuticular alae formation. These observations are in line with our recent pry-1(mu38) mRNA transcriptome profiling (RNA-Seq), which identified differentially expressed (DE) genes associated with 'cuticle development' [25]. Given that the heterochronic pathway involves both protein-coding and miRNA genes, in the present study we conducted a miRNA-specific whole-genome RNA-Seq experiment, which uncovered six DE miRNAs that include members of the lin-4 and let-7 families. To understand the interaction of pry-1 with miRNAs during seam cell development, we knocked down WNT asymmetry pathway components. Reducing wrm-1 and lit-1 expression suppressed, while silencing pop-1 exacerbated, the pry-1 phenotype. Furthermore, a miRNA expression analysis conducted in a pop-1 hypomorph revealed a similar upregulation of miRNAs to that observed in pry-1(mu38) worms, suggesting both that POP-1 is critical for asymmetric seam cell division, and that its nuclear levels are likely reduced in pry-1 mutants. Overall, our data demonstrate the importance of PRY-1 and its interactions with the WNT asymmetry pathway components for the regulation of miRNAs (and their targets) during asymmetric cell division. Since the WNT pathway and miRNAs are conserved in eukaryotes, similar interactions with Axin family members may control stem cell division in other systems. Results We observed that pry-1 mutant worms have a weaker cuticle (Fig. 2a) and abnormal alae that frequently include gaps (Fig. 2b, c).

Fig. 1 (caption): Seam cell asymmetric divisions during postembryonic development in C. elegans. During postembryonic development, C. elegans larvae undergo a series of molts, each of which is associated with carefully timed seam cell divisions. The V1–V4 and V6 cells undergo stem cell-like divisions in which the anterior daughter fuses with the hyp7 syncytium. The posterior daughter then self-renews as another 'stem cell-like' seam cell. The exception to the asymmetric divisions is in the early L2 stage, where the cells undergo one symmetrical division prior to dividing again in an asymmetric manner.

The phenotypic analysis of C. briggsae pry-1 mutants, Cbr-pry-1(sy5353) [24], revealed similar gaps in alae as well as defective seam cell morphologies (Fig. 2d-f). Given these hypodermis-associated phenotypes, we investigated the role of pry-1 in seam cell development. pry-1 mutants exhibit an increased number of seam cells In C. elegans, seam cells divide asymmetrically at each larval stage to produce two daughter cells, one of which fuses with the hypodermal syncytium, while the other retains the seam cell fate (Fig. 1).
The L2 stage is unique because it also includes a symmetric division that causes an increase in the number of seam cells (Fig. 1). We found that the pry-1(mu38) mutants have an average of approximately five extra seam cells (Fig. 2g, h), consistent with a previous report [19].

Fig. 2 (caption, continued): (e, f) Seam cells in control AF16 (e) and Cbr-pry-1(sy5353) (f) animals are visualized by the adherens-junction-associated marker Cel-dlg-1p::GFP. Cbr-pry-1(sy5353) animals show defects in cell fusion (scale bar 0.1 mm). Boxed areas, marked by dotted lines, have been enlarged in the second row. Scale bars in b and d-f are 0.01 mm. (g) Both the pry-1(mu38) and pry-1(gk3682) animals show increased seam cell numbers compared to control N2 (scale bar 0.1 mm). (h) Each control animal has exactly 16 seam cells, whereas an average of 21 and 19 cells are found in pry-1(mu38) and pry-1(gk3682) mutants, respectively. (i) pry-1(mu38) animals show an increased seam cell number by the end of the L2 stage. In panels a, h, and i, each data point represents the mean of at least two replicates (each batch with 30 or more worms) and error bars represent the STD. Student's t-test was used to determine statistical significance: *p < 0.05.

A similar phenotype was also observed in pry-1(gk3682), a new CRISPR/Cas9-induced mutant strain (provided by Don Moerman's lab) (Fig. 2g, h; Additional file 1: Figure S1). A stage-specific analysis conducted using scm::GFP and ajm-1::GFP markers [26] revealed a higher number of seam cells in pry-1 mutants by the end of the L2 stage, likely due to an increase in symmetric cell divisions (Fig. 2h, i). Together, these findings suggest that pry-1 plays a role in L2-specific seam cell division. Heterochronic miRNA expression is altered in pry-1 mutants As described above, seam cell asymmetry is mediated by two interacting pathways. While heterochronic genes, such as members of the lin-4 and let-7 miRNA families, control cell division, the WNT asymmetry pathway plays a role in the specification of anterior/posterior daughter cell fates. To evaluate the involvement of miRNAs in pry-1-mediated seam cell development, we performed an RNA-Seq experiment in L1-stage animals. The results revealed six DE miRNAs in the pry-1(mu38) mutants. Additionally, 61 novel miRNAs were recovered in the C. elegans reference sample (N2) (see Methods and Additional files 2-4: Tables S1, S2, S3) that serve as a resource to further investigate miRNA biology in worms. To elucidate whether pry-1-mediated miRNA regulation is stage-specific, we examined miRNA transcript levels by qRT-PCR in adult nematodes. While the expression of the miRNAs was either unchanged or downregulated in pry-1(mu38) (Additional file 6: Figure S3A), the pattern was different in Cbr-pry-1(sy5353) animals, i.e., three miRNA orthologs were up, two were down, and one was unchanged (Additional file 6: Figure S3B). We also used existing C. elegans miRNA::GFP transgenic strains [29] to determine changes in miRNA expression in pry-1(mu38) animals and confirmed the qRT-PCR results (Additional file 7: Figure S4). Overall, the dissimilar expression trends of the analyzed miRNAs in adults compared with L1-stage nematodes suggest that the pry-1-miRNA network is likely temporally regulated. miR-246, which a previous study showed to be involved in aging, oxidative stress, and thermo-sensation [30,31], was the only miRNA to be downregulated in the pry-1(mu38) nematodes.
Although no role in heterochronic development has yet been reported for miR-246, we herein demonstrate that miR-246(n4636) adults exhibit alae defects (Fig. 3e). Conversely, scm::GFP and ajm-1::GFP reporter-based expression analyses of the miR-246(n4636) adults did not reveal any significant change to the seam cells (Fig. 3f). This finding was supported by the results of the tissue-enrichment analysis (Additional file 5: Fig. S2). Interestingly, a hypodermal cell marker, dpy-7::H1-wcherry, revealed that the number of hypodermal cells was reduced in the mutant compared to control worms (45.1 ± 1.7, n = 20 and 51.9 ± 1.6, n = 20, respectively; also see Additional file 8: Figure S5). Further study is needed to determine the exact fate of these hypodermal cells in miR-246 mutants. Many predicted gene targets of the mis-regulated heterochronic miRNAs are differentially expressed in the pry-1(mu38) mRNA transcriptome miRNAs mediate the degradation or translational inhibition of their target mRNAs via binding between their seed sequence and a miRNA response element (MRE) in the 3′ untranslated region of their target. Therefore, we searched for miRNA targets using the TargetScan online program (http://www.targetscan.org/vert_72/), and thereby identified 453 unique targets. The gene ontology (GO) analysis revealed that these predicted miRNA targets were predominantly associated with processes such as the regulation of heterochrony (29-fold enrichment), the positive regulation of nematode larval development (8-fold enrichment), the molting cycle (5-fold enrichment), and collagen- and cuticulin-based cuticle development (7-fold enrichment) (Fig. 4a). A comparison of the predicted miRNA target genes with our recently published pry-1(mu38) mRNA transcriptome [25] revealed a significant overlap (111 genes; representation factor: 1.6; hypergeometric p < 7.862e-08) (Fig. 4b). Furthermore, a tissue-enrichment analysis showed that this subset of overlapping genes is frequently associated with the hypodermal syncytium (i.e., the third-most enriched subset) (Fig. 4c). Together, these findings suggest that PRY-1 is likely necessary for normal miRNA expression during seam cell development. Knockdowns of WNT asymmetric pathway components affect both the pry-1(mu38) phenotype and miRNA expression We induced RNAi-mediated silencing to examine interactions between pry-1 and WNT asymmetry pathway components during seam cell development. The fates of seam cell daughters are specified by the nuclear levels of POP-1, which are high in the anterior cell (hypodermal fate) and low in the posterior cell (seam cell fate) [19,32] (Fig. 5a). The results of our experiments revealed that knockdown of wrm-1 or lit-1 suppresses the pry-1(mu38) seam cell phenotype (Fig. 5b), likely because PRY-1 promotes asymmetric division by negatively regulating both of these factors in anterior daughter cells (Fig. 5a). This is consistent with PRY-1 being localized to the anterior cortex of dividing seam cells [23]. A similar genetic interaction between pry-1, wrm-1, and lit-1 was previously reported to occur during the asymmetric division of embryonic EMS cells [33]. In contrast to wrm-1 and lit-1, pop-1 RNAi exacerbated the pry-1(mu38) phenotype, resulting in a significant increase in the number of seam cells (35.9 ± 10.8 in pry-1(mu38); pop-1(RNAi), compared to 20.8 ± 2.1 in pry-1(mu38), and 23.5 ± 5.7 in pop-1(RNAi)) (Fig. 5b, c). We also examined nuclear POP-1 asymmetry following RNAi knockdown of pry-1 and found that it was disrupted (Fig. 5d).
These results agree with nuclear POP-1 levels being differentially regulated by WRM-1 and LIT-1 so as to be higher in the anterior and lower in the posterior daughter cell [19,32] (Fig. 5a). Together, our findings support the idea that pop-1 likely limits the number of seam cells that are produced by promoting the asymmetric division of their precursors. Since the asymmetric localizations of WRM-1, LIT-1, and POP-1 are known to mediate fate specification in presumptive seam cells, our results suggest that PRY-1 likely facilitates the maintenance of these asymmetric expression patterns. Given that the nuclear POP-1 and SYS-1 ratio determines the fate of daughter cells and SYS-1 localization is disrupted in animals lacking PRY-1 function [34], we also examined the effect of sys-1 knockdown. The results of this analysis showed no effect on seam cell division in pry-1(mu38) animals (Fig. 5b). We likewise knocked down another β-Catenin family member, bar-1, in the mutant strain and found that doing so did not alter the number of seam cells (Fig. 5b). Overall, the data support the possibility that β-Catenin family members are functionally redundant during seam cell division, consistent with previous studies [19], although they do not exclude the possibility that the role of PRY-1 in seam cells is independent of BAR-1 and SYS-1. To examine whether WRM-1, LIT-1, and POP-1 asymmetries affect miRNA expression during seam cell division, we next quantified miRNA levels in animals in which POP-1 function was compromised. As in pry-1(mu38) mutants, the expression levels of lin-4, miR-48, miR-84, miR-237, and miR-241 were found to be significantly upregulated in both pop-1(hu9) and pop-1(RNAi) worms; however, no change to miR-246 expression was observed (Fig. 5e, f).

Fig. 4 (caption fragment): (…) the pry-1(mu38) mRNA transcriptome [25] and predicted targets of DE miRNAs identified by TargetScan (435 genes, this study) in pry-1(mu38) animals. Further analysis revealed that 435 potential targets are shared between the DE miRNAs (36 by lin-4, 115 by miR-48, 115 by miR-84, 36 by miR-237, 115 by miR-241, and 102 by miR-246). A total of 111 genes are common between the two data sets, a number that is statistically significant based on the hypergeometric test. (c) Tissue-enrichment analysis of the common set of genes (111) revealed the third-highest fold enrichment in hypodermal syncytium cells (colour coded in orange).

Furthermore, the bioinformatic analysis revealed multiple TCF/LEF consensus binding sites (SCTTTGATS; S = G/C) [35,36] in the 5′ regulatory region of each of these miRNAs, except for miR-246, where a single site is found near the transcriptional start site (see Methods, Additional file 9: Figure S6), suggesting that their transcription may be inhibited by POP-1. Together, these data allow us to conclude that miRNAs act downstream of POP-1, which agrees with a previous model [18], and also suggest that PRY-1 may interact with WRM-1, LIT-1, and POP-1 to negatively regulate the expression of heterochronic miRNAs during seam cell development. The qRT-PCR analysis showed that while lin-14 expression levels were unchanged, hbl-1 and lin-28 were significantly downregulated in L1-stage pry-1(mu38) animals (Fig. 6a). Together with results described in previous sections, this observation allows us to propose that pry-1 may function upstream of miRNAs to promote expression of hbl-1 and lin-28. To further examine the regulatory network of pry-1, RNAi was carried out.
The results revealed that while lin-14 and hbl-1 RNAi had no marked impact on seam cells, lin-28 RNAi caused a significant reduction in the seam cell number in both pry-1(mu38) and control animals (Fig. 6b). Discussion In this paper we describe a genetic pathway of PRY-1/Axin signaling in seam cell development. Using a combination of mutant analysis and reporter gene expression, we show that PRY-1 is involved in L2-specific seam cell division. To identify the genes regulated by pry-1, we performed whole-genome miRNA profiling at the late-L1 stage. The results revealed six DE miRNAs in pry-1 mutants. Five of these, belonging to the lin-4 and let-7 families (lin-4, miR-48, miR-84, miR-237, and miR-241), were upregulated, whereas miR-246 was the only miRNA that was downregulated. A similar trend was also observed in C. briggsae pry-1 mutants, suggesting that pry-1 plays a conserved role in miRNA regulation in Caenorhabditis nematodes. Three of the overexpressed miRNAs in pry-1 mutants, namely miR-48, miR-84, and miR-241 (let-7 family members), are known to redundantly control the L2-L3 larval-stage transition [9]. While C. elegans nematodes carrying a mutation in any one of these three miRNAs have been shown to exhibit a normal phenotype, miR-48/miR-84 double mutants display retarded molting and a higher number of seam cells as a result of reiterated symmetric divisions during the L2 stage [9]. This seam cell phenotype is further exacerbated in miR-48/miR-84/miR-241 triple mutants [9], while conversely, miR-48-overexpressing mutants were shown to exhibit a reduced number of seam cells due to 'skipping' of L2-specific symmetric divisions [41]. The other two miRNAs upregulated in the pry-1(mu38) mutants are lin-4 and miR-237 (lin-4 family members [42]). A previous study showed that, although a miR-237 knock-out mutant does not directly incur a heterochronic defect, it enhances the seam cell phenotype exhibited by lin-4(e912); lin-14(n179ts) double mutant animals [8]. Unlike the lin-4 and let-7 families of DE miRNAs, miR-246 is not known to play a role in heterochronic development, although it is involved in other processes [30,31]. Consistent with previous studies, our analysis of miR-246 mutants did not reveal any changes in the number of seam cells. Thus, pry-1-mediated miR-246 regulation may participate in other biological events. The fact that all but one of the DE miRNAs in pry-1 mutants are involved in the heterochronic pathway suggests an important role for pry-1 in this developmental process. This was further strengthened by our data showing a high enrichment of predicted miRNA target genes associated with GO-term processes such as regulation of heterochrony. Moreover, we observed a significant overlap between the DE miRNA predicted targets and the pry-1(mu38) mRNA transcriptome [25] that included many genes expressed in the hypodermal syncytium. To understand the regulation of miRNAs by pry-1, we studied the involvement of WNT asymmetry pathway components using an RNAi approach.

Fig. 6 (caption): Analysis of heterochronic genes in pry-1-mediated seam cell development. (a) qRT-PCR experiments at the L1 stage showed that hbl-1 and lin-28 are significantly downregulated in pry-1(mu38) animals. Each data point represents the mean of two replicates and error bars represent the SEM; **p < 0.01. (b) RNAi knockdown of lin-28 in pry-1(mu38) mutants rescued the seam cell defect. Each data point represents the mean of three batches (each batch with at least 30 worms) and error bars represent the STD.
Student's t-test was used to determine statistical significance: *p < 0.05 (compared to the L4440 control). It was shown earlier that in the absence of PRY-1, the localizations of WRM-1, LIT-1, and SYS-1 are disrupted [23,34]. As expected, the examination of seam cell phenotypes in pry-1(mu38) animals following RNAi knockdowns of these genes revealed that wrm-1 and lit-1 are necessary for pry-1 function. Thus, PRY-1 may affect seam cell number by localizing in the anterior cortex of dividing seam cells [23] and thereby lowering WRM-1/LIT-1-mediated nuclear POP-1 levels in anterior daughters. This plausible explanation is supported by our finding that POP-1 asymmetry is affected in animals with reduced PRY-1 function. To test this further, we examined miRNA expression in pop-1(hu9) and pop-1(RNAi) worms. As expected, all five (lin-4, miR-48, miR-84, miR-237, and miR-241) were found to be overexpressed. Moreover, multiple TCF/LEF binding sites were detected in the 5′ regulatory regions of the miRNAs. Together, these findings raise the possibility of POP-1 acting as a transcriptional regulator of miRNAs. We thus propose a model summarizing our findings (Fig. 7), in which PRY-1 acts upstream of WRM-1, LIT-1, POP-1, and the DE miRNAs (except miR-246). Since the miRNAs regulate the expression of protein-coding genes during heterochronic development, we tested three of their known targets: hbl-1, lin-14, and lin-28. The results of our experiments suggested that hbl-1 and lin-28 act downstream of pry-1; however, only lin-28 appears to function in pry-1-mediated asymmetric cell division. The above genetic interactions are consistent with earlier findings in which lin-28 was shown to act downstream of the WNT asymmetry pathway components and to be included in a network consisting of the lin-4 and let-7 families of miRNAs and their targets during seam cell development [17,18]. Our model is unique in that it places PRY-1 upstream of the WRM-1-, LIT-1-, and POP-1-mediated miRNA transcriptional network. Conclusions Overall, the data presented in this paper demonstrate the important role of PRY-1/Axin in the regulation of miRNAs and their heterochronic gene targets in a pathway that involves the WNT asymmetry pathway components WRM-1, LIT-1, and POP-1 during seam cell development. Furthermore, since seam cell defects are also exhibited by C. briggsae pry-1 mutants, and Cbr-pry-1 is necessary for the normal expression of these miRNA orthologs, our work has revealed that the role of pry-1 in seam cell development is conserved amongst nematodes. Strains, culture conditions, and RNAi Nematodes were grown on standard NG-agar culture plates seeded with E. coli bacteria (OP50) [43]. Cultures were maintained at 20 °C. Strains used in the study are listed in a supplementary table (Additional file 10: Table S4). RNAi-mediated gene silencing was performed using a protocol previously published by our laboratory [24]. Microscopy Nematodes were mounted on a glass slide containing 2% agarose and 0.02 M NaN3 and observed using an Axiovision Zeiss microscope. Seam cell nuclei were counted, and adult lateral alae were scored using Nomarski differential interference contrast and epifluorescence optics. Images were acquired using NIS Elements software (Nikon, USA) with a Hamamatsu camera that was mounted on a Nikon 80i upright microscope. Analysis of seam cell division The fates of daughter seam cells were determined in C. elegans using scm::GFP and ajm-1::GFP markers, and in C. briggsae using the seam cell adherens-junction marker, Cel-dlg-1::GFP.
After 8 h of feeding, nematodes expressing GFP that had completed the first larval seam cell division, and that had 10 seam cells per side, were chosen for analysis. Seam cell divisions were monitored at approximately 6-h intervals until the late L4 stage, when divisions ceased. Cuticle integrity assay A solution containing 1% hypochlorite and 0.25 M NaOH was prepared as previously reported [44] and aliquoted (500 μl) into a 48-well plate. Individual gravid adult worms were transferred to each well, and the plate was agitated at 30-s intervals. The time to the first major cuticle break was recorded by direct observation using a dissecting stereoscope (SMZ 645; Nikon Corporation, Japan).

Fig. 7 (caption): A model summarizing genetic interactions between PRY-1, WNT asymmetric pathway components (WRM-1, LIT-1, and POP-1), heterochronic miRNAs, and the targets of heterochronic miRNAs during L2-stage seam cell development. Our data support the Ren and Zhang model [18] and place the WNT asymmetric pathway upstream of the miRNAs.

Molecular biology and bioinformatics For qRT-PCR experiments, synchronous cultures were prepared by bleaching gravid hermaphrodites as described previously [45], except that eggs were allowed to hatch on the NG-agar plates. The bleaching process was repeated. The eggs were finally transferred onto plates and grown until the desired stage. Because pry-1 mutants grow more slowly than controls, RNA was extracted from age-matched animals. The L1 larvae were grown for 16 h (N2/AF16) and 18 h (pry-1 mutants), whereas adults were incubated for 52 h (N2/AF16) and 58 h (pry-1 mutants). Total RNA was extracted from animals using the TRIzol reagent (Catalog Number T9424, Sigma-Aldrich, Canada), according to the manufacturer's instructions. cDNAs for protein-encoding genes and miRNAs were synthesized using oligo-dT and specific stem-loop primers, respectively (Additional file 11: Table S5), and by using the qScript cDNA synthesis kit (Catalog Number 95047-025, Quantabio, Canada) according to the manufacturer's instructions. qRT-PCR was performed (in triplicate) in the Bio-Rad cycler CFX 96 using appropriate primers (Additional file 11: Table S5) and the SensiFAST SYBR Green Kit (Catalog Number BIO-98005, BIOLINE, USA), according to the manufacturer's instructions. The expression levels of miRNAs and protein-coding genes were normalized to those of miR-2 and pmp-3, respectively. Ct and p values were calculated using CFX Manager software (Bio-Rad, Canada). A lin-28 RNAi plasmid was constructed by inserting 2949 bp of the lin-28 coding sequence into the L4440 vector. A DNA fragment was obtained via PCR using the listed primers (Additional file 11: Table S5), under the specified PCR conditions. To identify TCF/LEF family binding sites in the 5′ upstream genomic region of the miRNAs, MatInspector software (https://www.genomatix.de/) was used with default settings. RNA-Seq experiment The steps for the miRNA RNA-Seq in pry-1(mu38) mutants were similar to those for the mRNA RNA-Seq that we reported earlier [25]. The pry-1 miRNA transcriptome profile can be found in the GEO archive with accession number GSE130039. Synchronized L1 worms were harvested after two successive rounds of bleaching to obtain a homogeneous population, and total RNA was isolated. Small RNA sequencing libraries were prepared, and samples were analysed using the Genome Analyzer IIx platform (Illumina Inc., USA) at the McGill University Genome Quebec sequencing facility. A total of 36,656,022 reads of small RNAs (15-25 nt) were generated from the four samples of C.
elegans examined, of which 10,453,527 sequences aligned perfectly to the C. elegans genome. Overall, we detected perfect matches to the precursor forms of 161 out of the 250 miRNAs annotated in miRBase (http://www.mirbase.org) WBcel235 for C. elegans. DE analysis of the known miRNAs led to the identification of eight miRNAs that were altered in pry-1 mutants, of which two (miR-353 and miR-2208a) were excluded due to false predictions (Additional file 2: Table S1). Although previous studies have reported miRNAs in the C. elegans genome (e.g., see [42,46]), due to the increased depth of our sequencing data we expected to uncover additional new candidates. After eliminating rRNA, tRNA, and ncRNA reads, the remaining unannotated reads were processed for novel miRNA discovery, as discussed below. We focused on the 30,300 nonredundant unannotated reads that aligned to the C. elegans genome in control N2 and pry-1(mu38) animals. To discover novel miRNAs, the miRNA discovery package miRDeep2 (https://www.mdc-berlin.de/n-rajewsky#t-data,software&resources) was used. The analysis predicted a total of 243 miRNAs using a read-count cut-off of 5-fold (or 187 with a cut-off of 10-fold) (Additional file 3: Table S2). We then used an additional criterion to further examine these candidates, i.e., a higher miRDeep score (> 10). This led to the identification of 64 novel miRNAs at the 5-fold read-count cut-off (or 61 when the read count was set at 10-fold) (Additional files 3-4: Tables S2, S3). The authenticity of these novel miRNAs was tested by RNAfold, which confirmed that they produce the miRNA stem-loop structure [47]. Statistical analyses Statistics were performed by two-tailed Student's t-test after testing for equal distribution of the data and equal variances within the data set. p values of 0.05 and less were considered statistically significant. The data are presented as either mean ± standard deviation (STD) or mean ± standard error of the mean (SEM). Graphs were prepared using Microsoft Excel. Hypergeometric probability tests were done using an online program (http://nemates.org/MA/progs/overlap_stats.html). Additional files Additional file 1: Figure S1. C. elegans pry-1 open reading frame showing the region affected by the gk3682 mutation. The exons and introns are indicated by boxes and lines, respectively. The translational start and stop sites are marked. The sequence deleted in the gk3682 allele (738 bp) is shown by a rectangle. As part of the CRISPR editing process, the excised
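Two short, self-contained Python sketches of the bioinformatic calculations referenced in this paper follow, under stated assumptions: a regex scan for the TCF/LEF consensus SCTTTGATS (S = G/C) on both strands (the study itself used MatInspector), and a hypergeometric overlap test corresponding to the nemates.org tool (the gene-universe size N and the DE-gene count n below are placeholders, since neither is stated in the text; the example sequence is also made up).

```python
# Sketches of two calculations referenced above; placeholder inputs are
# marked in the comments. The study used MatInspector and the nemates.org
# overlap tool rather than this code.
import re
from scipy.stats import hypergeom

# --- TCF/LEF consensus scan: SCTTTGATS, with S = G or C ---
MOTIF = re.compile(r"[GC]CTTTGAT[GC]")

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_tcf_lef_sites(seq):
    """Return (forward-strand position, strand, match) for each hit."""
    seq = seq.upper()
    hits = [(m.start(), "+", m.group()) for m in MOTIF.finditer(seq)]
    rc, n = revcomp(seq), len(seq)
    # Map reverse-strand hit coordinates back onto the forward strand.
    hits += [(n - m.end(), "-", m.group()) for m in MOTIF.finditer(rc)]
    return sorted(hits)

print(find_tcf_lef_sites("AAGCCTTTGATGTTAACGATCAAAGGCTT"))  # hypothetical 5' region

# --- Hypergeometric overlap significance (cf. the 111-gene overlap) ---
N = 20000   # assumed gene universe size (placeholder)
K = 435     # predicted DE-miRNA targets (this study)
n = 3000    # DE genes in the pry-1(mu38) mRNA transcriptome (placeholder)
k = 111     # observed overlap

p = hypergeom.sf(k - 1, N, K, n)       # P(overlap >= k) by chance
rep_factor = k / (K * n / N)           # observed / expected overlap
print(f"representation factor = {rep_factor:.2f}, p = {p:.3g}")
```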
6,440
2019-06-19T00:00:00.000
[ "Biology" ]
The multi-kinase inhibitor afatinib serves as a novel candidate for the treatment of human uveal melanoma Purpose Uveal melanoma (UM) is the most common intraocular malignancy in adults, with a poor prognosis and a high recurrence rate. Currently there is no effective treatment for UM. Multi-kinase inhibitors targeting dysregulated pro-tumorigenic signalling pathways have revolutionised anti-cancer treatment but, as yet, their efficacy in UM has not been established. Here, we identified the multi-kinase inhibitor afatinib as a highly effective agent that exerts anti-UM effects in in vitro, ex vivo and in vivo models. Methods We assessed the anti-cancer effects of afatinib using cell viability, cell death and cell cycle assays in in vitro and ex vivo UM models. The signaling pathways involved in the anti-UM effects of afatinib were evaluated by Western blotting. The in vivo activity of afatinib was evaluated in UM xenograft models using tumour mass measurement, PET scans, immunohistochemical staining and TUNEL assays. Results We found that afatinib reduced cell viability and activated apoptosis and cell cycle arrest in multiple established UM cell lines and in patient tumour-derived primary cell lines. Afatinib impaired cell migration and enhanced reproductive death in these UM cell models. Afatinib-induced cell death was accompanied by activation of STAT1 expression and downregulation of Bcl-xL and cyclin D1 expression, which control cell survival and cell cycle progression. Afatinib attenuated HER2-AKT/ERK/PI3K signalling in UM cell lines. Consistent with these observations, we found that afatinib suppressed tumour growth in UM xenografted mice. Conclusion Our data indicate that afatinib activates UM cell death and targets the HER2-mediated cascade, which modulates STAT1-Bcl-xL/cyclin D1 signalling. Thus, targeting HER2 with agents like afatinib may be a novel therapeutic strategy to treat UM and to prevent metastasis. Supplementary Information The online version contains supplementary material available at 10.1007/s13402-022-00686-5. Introduction Uveal melanoma (UM) accounts for ~85% of ocular melanomas and 3% of all melanomas in humans. The National Organization for Rare Disorders estimates that the incidence of UM is ~5-7 per million in the general population [1,2]. Although UM has a relatively low incidence, its mortality rate is high, and up to 50% of patients ultimately succumb to metastases [2,3]. Patient survival has remained poor, presumably due to silent hematogenous systemic micro-metastases that are present prior to the diagnosis of clinically evident ocular symptoms. Once metastases are established and are of detectable size, death occurs within 6-12 months [4–7]. Currently, front-line treatment for UM includes radiotherapy, phototherapy and surgery, but vision impairment, blindness or eye removal are common clinical consequences following these treatments [8]. Because the aetiology, genetic associations and clinical behaviour of UM are distinct from those of cutaneous melanoma, drugs that are effective in cutaneous melanoma are ineffective in UM patients, especially those with metastatic disease. Indeed, there are no drugs that have been found to be effective in treating primary or metastatic UM, including those that target multiple signalling cascades dysregulated in UM [9–14].
Most UM tumours exhibit mutations in genes encoding the G protein-alpha subunits GNAQ or GNA11 that activate the mitogen-activated protein kinase (MAPK) and phosphoinositide-3 kinase (PI3K)/Akt signalling pathways [15]. Initial studies implicated the Epidermal Growth Factor Receptor (EGFR or ErbB1) in UM tumour proliferation and metastasis [16,17]. The EGFR has been reported to be expressed in a small proportion of UM cell lines and tumours [17–19], and the ligand EGF has been reported to activate the phosphorylation of EGFR and its downstream mediator AKT in EGFR-expressing cell lines [18]. Furthermore, scleral invasion activity has been associated with higher vitreal EGF concentrations in UM patients [16]. However, some investigators found no association between EGFR expression and UM development [20,21]. Scholes et al. reported that EGFR immunoreactivity was restricted to macrophages [22]. Moreover, clinical trials of the multi-kinase inhibitor (MKI) gefitinib that targets the EGFR have been unsuccessful in UM [18,23], and a number of studies have questioned the functional importance of EGFR in UM [24]. Therefore, the clinical relevance of EGFR in UM remains controversial. There are four isoforms in the ErbB lineage of proteins (ErbB1-4) that function as homo- and heterodimers [25]. The dimerization of EGFR and ErbB2 (HER2) is associated with a poor prognosis and with cell invasion in a range of tumours [26]. HER2 signalling regulates a number of important targets with clinical roles in tumorigenesis. In a phospho-proteomics analysis of cell lines in which HER2 was overexpressed, tyrosine phosphorylation of 198 proteins, including STAT1, was found to be increased [27]. STAT1 has been shown to regulate cell cycle progression by modulating the expression of cyclin D1 in tumour cells [28,29]. HER2 is expressed in UM tumour cells [30]. In the present study, we evaluated several MKIs with the capacity to inhibit ErbBs for their effects on the viability of UM cells. The principal finding is that afatinib, which is an established inhibitor of EGFR, HER2 and HER4, serves as an effective agent that exerts anti-UM activity in a range of in vitro, ex vivo and in vivo models. Thus, afatinib emerges as a new candidate for clinical evaluation in UM patients. UM cell lines The human Mel202 UM cell line was kindly provided by Prof. B. Ksander (Schepens Eye Research Institute, Boston, MA, USA). The 92.1 cell line was a gift from Prof. M.J. Jager (Leiden University Medical Center, Leiden, Netherlands) and the OMM-1 cell line was a gift from Prof. G.P. Luyten (Erasmus University, Rotterdam, Netherlands). The C918 cell line was purchased from BioScientific (Gymea, NSW, Australia) and the BeNa Culture Collection (Beijing, China). All cell lines were authenticated in-house or by the respective commercial suppliers and routinely checked for mycoplasma contamination every 6 months using a MycoAlert Mycoplasma Detection kit (Lonza, Mount Waverley, VIC, Australia). They were always negative. C918, Mel202 and 92.1 cells were cultured in RPMI-1640 medium supplemented with 10% heat-inactivated FBS (v/v), 1% P/S and 1% L-glutamine (Thermo Scientific, Lidcombe, NSW, Australia). OMM-1 cells were cultured in DMEM medium supplemented with 10% heat-inactivated FBS (v/v), 1% P/S and 1% L-glutamine. All cell lines were maintained in a humidified incubator (5% CO2) at 37 °C and used within 20 passages after thawing. Cytotoxicity assay UM cells were cultured in 96-well plates (2 × 10⁴ cells/well) for 24 h.
Subsequently, cells were treated with MKIs (10 μM in 0.1% DMSO) in RPMI-1640 or DMEM containing 1% FBS (v/v) for 24 h; 0.1% DMSO was used as the negative control. Following treatments, cells were incubated with MTT (0.5 mg/ml) in the dark for 3 h and then washed with phosphate-buffered saline (PBS; 0.154 M NaCl, 0.001 M KH₂PO₄, 0.003 M Na₂HPO₄; pH 7.4). Next, the cells were treated with DMSO and the plate was shaken for 10 min at room temperature. Absorbance values were measured at 550 nm in a microplate reader (Model 680, Bio-Rad, Gladesville, NSW, Australia) [32,33]. IC₅₀ values for MKIs were estimated by non-linear regression of percentage cell survival vs drug concentration data (GraphPad Prism 7.0; San Diego, CA). Annexin V/PI flow cytometry assay UM cells were treated with MKIs (5 μM in 0.1% DMSO) in RPMI-1640 or DMEM containing 1% FBS (v/v) for 24 h at 37 °C; 0.1% DMSO was used as the negative control. Next, the cells were collected and stained with PI and annexin V-FITC for 20 min at room temperature [34,35] and analysed for apoptosis and necrosis using a Guava easyCyte® flow cytometer (Merck Millipore, Bayswater, VIC, Australia). Cell cycle analysis UM cells were treated with MKIs (5 μM in 0.1% DMSO) in RPMI-1640 or DMEM containing 1% FBS (v/v) for 12 h at 37 °C; 0.1% DMSO was used as the negative control. The cells were then harvested, washed twice with PBS and fixed in ice-cold 70% ethanol (v/v) for 16 h at 4 °C. Prior to the analysis, cells were washed with PBS and stained with PI for 30 min in the dark at 37 °C. The samples were analysed using a Guava easyCyte® flow cytometer. Cell migration assay UM cells were cultured on 96-well ImageLock™ microplates (Sartorius Australia, Dandenong, VIC; 5 × 10⁴ cells/well) for 24 h. Next, scratches ('wounds') were made with a Wound Maker™ (Sartorius Australia), after which images were taken at 10× magnification at 2 h intervals. [Table 1 caption: Cytotoxicity of afatinib, crizotinib, sorafenib and sunitinib in four human UM cell lines. Cells were treated with MKIs at concentrations between 0.01 and 50 μM (24 h, 37 °C). Cell viability was assessed using MTT cytotoxicity assays. Experiments were repeated on three occasions (n = 3 replicates in each experiment). Data are presented as percentage of control (mean ± SD), with IC₅₀ values.] ImageJ software (National Institutes of Health, USA) with the Colony Counter plugin was used to estimate the leading edge of the cell population. The migration rate was calculated as described previously [36], as [Area(initial) − Area(final)]/Area(initial), where Area(initial) is the area of the scratch measured immediately after scratching (t = 0 h) and Area(final) is the area of the wound measured 24 h after the scratch was applied. Colony formation assay Cells were treated with MKIs (10 μM) or 0.1% DMSO for 24 h and then sub-cultured in 12-well plates (200 cells/well) for 6-8 days. On the day of analysis, the cells were stained with 0.01% crystal violet (w/v) and then assessed for colony growth. A colony is defined as a cluster of at least 50 cells determined microscopically. The plates were photographed in an Essen IncuCyte S3® instrument, using the whole-well scan mode at 4× magnification. ImageJ software was used to estimate the leading edge of the cell population. Western blotting UM cells were harvested and treated with lysis buffer containing NP-40 (1% IGEPAL, 50 mM Tris and 150 mM NaCl, pH 7.8, containing protease inhibitors). The lysates were centrifuged at 15,000 rpm (10 min, 4 °C). Protein samples were denatured and separated by electrophoresis.
After transfer to PVDF membranes, the blots were incubated with 5% non-fat milk (in PBST) for 30 min at room temperature. The blots were incubated with a primary antibody at 4 °C overnight with orbital shaking and then washed three times with PBST. Next, the blots were incubated with a secondary antibody for 1 h at room temperature and then with a chemiluminescent substrate (SuperSignal West Pico, Thermo Scientific, Lidcombe, NSW, Australia). The signals were visualized using an ImageQuant LAS500 (GE Healthcare, Silverwater, NSW, Australia). Primary UM tumour-derived cell lines Human UM tumour samples were obtained as approved by the St. Vincent's Hospital Sydney Human Ethics Committee, and experiments were performed in strict accordance with the relevant guidelines and regulations. After surgical removal of the UM tumour tissues, the samples were washed three times with PBS (pH 7.4) and processed for cell isolation within 24 h. Trypsin-EDTA was applied to separate the cells, which were collected in RPMI-1640 medium containing 20% FBS (v/v), 1% L-glutamine, 1% P/S, 1% ITS and 2% GCT. Primary UM tumour-derived cell lines were maintained at 37 °C with 5% CO₂ and used between passages 2 to 5 in subsequent experiments. The three patient UM tumour-derived cell lines were characterised by immunostaining using anti-Tyrp1 (a melanocyte- and melanoma-specific marker) and anti-melanoma (a melanoma-specific marker) antibodies (Supplementary Fig. 1). UM xenograft mouse model Animal ethics approval was obtained from the Laboratory Animal Ethics Committee of the Jiangsu Institute of Nuclear Medicine (Wuxi, China), and all animal experiments were performed in strict accordance with the relevant guidelines and regulations. C918 cells were mixed with Matrigel in a 2:1 (v:v) ratio and injected subcutaneously into BALB/c nude mice (5 weeks old; male; Chang Zhou Cavens Laboratory Animal Co., Ltd., Changzhou, China). Tumour volumes were measured every three days using callipers, and treatments were started when a tumour volume of ~100 mm³ was reached. The mice were randomly divided into two groups to receive either afatinib (15 mg/kg; n = 12) or vehicle (n = 10) by intraperitoneal injection once daily for 16 days. Body weights and tumour volumes were measured every four days. The volumes of the tumours were calculated as (a × b²)/2, where a and b were the length and width of the tumours, respectively. [Displaced figure caption (Fig. 1, panels B-G): Cell death profiles in response to afatinib treatment were determined using annexin V/PI staining flow cytometry. Representative images from flow cytometry are shown in (B, D and F). Viable, necrotic or apoptotic cells are presented as percentages of total cells (mean ± SD) in (C, E and G); DMSO was used as control. Experiments were performed on three independent patient UM tumour-derived cell lines (n = 3 in each experiment). *p < 0.05; **p < 0.01; ***p < 0.001 vs. control by one-way ANOVA and Dunnett's post-hoc test.] At the end of the treatment, the mice were anesthetized by intraperitoneal injection of 5 ml/kg 1% pentobarbital sodium salt. The tumours were removed, weighed and photographed. Finally, the tumour samples were fixed in 4% paraformaldehyde for pathological examination. Histology and immunohistochemistry Tumour tissues embedded in paraffin were sectioned (8 μm thickness) and then stained with hematoxylin and eosin (Beyotime Institute of Biotechnology, Jiangsu, China). For immunohistochemical staining, the sections were incubated with an anti-Ki67 antibody
(Cat. #: ab15580, Abcam, Shanghai, China) at 4 °C overnight and then incubated with HRP-conjugated secondary antibodies. The sections were visualized using a DAB substrate kit (Shanghai Bio-Platform Technology Company, Shanghai, China) and an Olympus light microscope (Tokyo, Japan). TUNEL assay Apoptotic cells were assessed using a terminal deoxynucleotidyl transferase dUTP nick end labelling (TUNEL) assay in paraffin-embedded tumour sections fixed on slides. The slides were stained using a TUNEL assay kit (Beyotime Institute of Biotechnology, Jiangsu, China) following the manufacturer's instructions. Nuclei were counter-stained with hematoxylin. Staining was visualized using a Magscanner KF-PRO-120 (Konfoong Bioinformation Tech, Ningbo, China). Statistics Data are reported throughout as mean ± standard deviation, with significance defined as p < 0.05. In vivo studies were randomised, and observers were blinded to group allocation. Statistical analyses were performed using GraphPad Prism 9.0 software, with one-way ANOVA followed by Dunnett's post-hoc test when comparing multiple independent groups. An unpaired t-test was used to analyse differences between two groups. Two-way ANOVA was used to analyse data from treatment and control groups with or without serum stimulation. Afatinib decreases the viability of Mel202, 92.1, C918 and OMM-1 cells In initial experiments, the capacity of 13 MKIs (10 μM, 24 h) to decrease cell viability was assessed in Mel202, 92.1, C918 and OMM-1 UM cells. The Mel202, 92.1 and C918 cell lines were derived from primary UM tumours, while OMM-1 is a well-established subcutis metastatic UM cell model. The concentration of 10 μM was selected in initial screening experiments because it exceeds the reported serum trough levels in patients who were treated with the MKIs tested in the present study. This was done to ensure effective cell killing across multiple UM cell lines. The most active agent across the four UM cell lines (<20% viability remaining) was afatinib, while pelitinib was also active in Mel202 and OMM-1 cells; cediranib, foretinib, lapatinib and neratinib were most effective in OMM-1 cells (Supplementary Table 1). We also noted that the established EGFR inhibitor gefitinib exhibited relatively low anti-cancer activity across the four UM cell lines tested, which is consistent with clinical observations [18,23]. Based on these findings, afatinib was selected for further study in the four UM cell lines and in primary UM tumour-derived cell lines. Sorafenib (a RAF/MEK/ERK and VEGFR-2/PDGFR-beta inhibitor), crizotinib (an ALK and ROS1 inhibitor) and sunitinib (a PDGFR, KIT and VEGFR inhibitor) were included in subsequent studies as controls, because these agents are currently in clinical trials in UM patients. We found that the IC₅₀ values for afatinib in the four cell lines that represent primary and subcutis metastatic UM were in the range 3.43-5.29 μM (Table 1). The other three agents were somewhat less potent, with the exception of sorafenib and sunitinib in OMM-1 cells (Table 1). In accord with these findings, afatinib and the other three MKIs effectively decreased the viability of patient-derived primary UM tumour cells (Fig. 1). The MKIs were more active in two of the primary cell lines, while the third was somewhat less responsive (Fig. 1A).
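The paper reports that IC₅₀ values were estimated by non-linear regression in GraphPad Prism. For readers who want to reproduce this fitting step, below is a minimal Python sketch of the same idea, assuming a standard four-parameter logistic dose-response model; the example concentrations and viabilities are illustrative and not taken from the paper.

```python
# Minimal sketch: estimating an IC50 by non-linear regression of
# percentage cell survival vs drug concentration, using a
# four-parameter logistic model. Data points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, top, bottom, ic50, hill):
    """Percent viability as a function of drug concentration (uM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])         # uM
viability = np.array([98.0, 95.0, 85.0, 70.0, 45.0, 20.0, 5.0])  # % of control

popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[100.0, 0.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {popt[2]:.2f} uM")
```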
Importantly, the decrease in viability induced by afatinib was selective for UM cells: the viability of several non-carcinoma-derived retinal cell types, including human retinal pigment epithelium cells (ARPE-19), Müller cells (MIO-M1), primary cultured melanocytes and fibroblasts, was not impaired, while the other tested MKIs demonstrated mild to moderate toxicity (Supplementary Fig. 2). Afatinib induces apoptosis in Mel202, 92.1, C918 and OMM-1 cells The decreases in UM cell viability induced by afatinib were evaluated further in annexin V/PI-stained cells using flow cytometry. An afatinib concentration of 5 μM was selected based on the IC₅₀ values estimated in the four UM cell lines (Table 1). We found that apoptosis was the primary cause of death in MKI-treated UM cells (Fig. 2). Afatinib treatment increased the proportion of apoptotic cells to 5.79- to 10.20-fold that of the control, while the increases induced by the other three agents were somewhat less pronounced (3.50- to 5.32-fold for crizotinib, 3.50- to 6.84-fold for sorafenib and 2.73- to 5.26-fold for sunitinib; Supplementary Table 2). Consistent with these findings, we found that afatinib treatment also activated apoptosis in primary tumour-derived cells obtained from three UM patients (Fig. 1B-G). Further cell cycle analysis revealed that afatinib arrested UM cells in the G0/G1 phase and decreased entry into the G2/M phase (Fig. 3). In OMM-1 cells, G0/G1 accumulation and G2/M suppression were extensive, and all of the MKIs were found to be similarly active. Taken together, the cell death and cell cycle analyses indicate that afatinib is highly effective in inducing apoptosis and enhancing cell cycle arrest in UM cell lines. Afatinib treatment decreases cell migration and promotes reproductive cell death in Mel202, 92.1, C918 and OMM-1 cells We tested the capacity of the four MKIs to decrease UM cell migration using scratch wound-healing assays and found that afatinib markedly impaired UM cell migration. Colony formation assays were performed to subsequently evaluate the capacity of these four MKIs to decrease the viability of Mel202, 92.1, C918 and OMM-1 cells. We found that the four MKIs (5 μM) also impaired the colony growth of UM cells in clonogenic assays (Fig. 4E-H), consistent with the induction of reproductive cell death. STAT1-regulated apoptotic pathways contribute to the anti-cancer action of afatinib in UM cells The capacity of afatinib to modulate the expression of important Bcl-2 family proteins, Bax (pro-apoptotic) and Bcl-xL (anti-apoptotic), that regulate apoptosis was assessed in UM cells (Fig. 5). We found that afatinib increased the expression of Bax and decreased that of Bcl-xL, leading to Bcl-xL/Bax ratios that were decreased to <0.2-fold of control in Mel202, 92.1 and OMM-1 cells (p < 0.001; Fig. 5B, E and K) and to ~0.5-fold of control in C918 cells (p < 0.05; Fig. 5H). Because afatinib induced cell cycle arrest, we also evaluated the expression of the important mediator cyclin D1 in afatinib-treated UM cells and found that it was decreased to 0.4- to 0.6-fold of that in control cells (Fig. 5). STAT1 acts as a tumour suppressor in a range of cancer types and is associated with the expression of Bcl-xL and cyclin D1 [37]. We found that the expression of STAT1 was significantly increased in the UM cell lines, to 1.4- to 3.5-fold of controls, by afatinib treatment (Fig. 5C, F, I and L).
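The fold-change statements above (Bcl-xL/Bax ratios, cyclin D1 and STAT1 levels) rest on routine densitometry arithmetic: band intensities are normalised to a loading control and then expressed relative to the untreated condition. A minimal sketch, with invented intensities and GAPDH assumed as the loading control, is given below.

```python
# Minimal sketch: Western blot densitometry fold-changes.
# Band intensities are illustrative, normalised to GAPDH.
bands = {
    "control":  {"bcl_xl": 1200.0, "bax": 800.0,  "gapdh": 1000.0},
    "afatinib": {"bcl_xl": 300.0,  "bax": 1500.0, "gapdh": 1050.0},
}

def normalised(sample: str, protein: str) -> float:
    """Band intensity relative to the loading control."""
    return bands[sample][protein] / bands[sample]["gapdh"]

ratio_ctrl = normalised("control", "bcl_xl") / normalised("control", "bax")
ratio_afat = normalised("afatinib", "bcl_xl") / normalised("afatinib", "bax")
print(f"Bcl-xL/Bax ratio, fold of control: {ratio_afat / ratio_ctrl:.2f}")
```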
Overall, we found that afatinib induced apoptosis in UM cells in a STAT1- and Bcl-xL/cyclin D1-dependent manner. HER2 signalling may be involved in the anti-cancer effect of afatinib in UM cells Afatinib has been reported to act as a dual inhibitor of EGFR and HER2 in non-small cell lung cancer and other cancer cells [38]. In this study, we assessed the involvement of EGFR and HER2 in the anti-cancer actions of afatinib in UM cells. We tested the activation of both receptors in response to acute stimulation by serum growth factors. Previously, it has been reported that EGFR is only expressed in a small proportion of UM tumours and immortalised cell lines [16][17][18][19][20][22]. Consistent with these reports, we failed to detect EGFR expression in Mel202, 92.1, C918 and OMM-1 cells and found that phospho-EGFR expression did not increase following serum stimulation (Supplementary Fig. 3). Therefore, it is unlikely that EGFR signalling mediates the anti-UM effect of afatinib in UM cells. In contrast, we observed HER2 expression in all four UM cell lines and, in addition, found that serum stimulation (20% FBS, 10 min) produced rapid activation of HER2, as indicated by increased p-HER2 expression (Fig. 6). Moreover, signalling pathways downstream of HER2, most notably the ERK, PI3K and AKT pathways, were also found to be activated by serum (Fig. 6). Subsequent treatment with afatinib (1 h) markedly attenuated the serum-induced activation of HER2 and its downstream signalling in UM cells (Fig. 6). Taken together, we found that acute inhibition of HER2, PI3K, AKT and ERK signalling occurred after the administration of afatinib and preceded the loss of UM cell viability. Afatinib shows anti-tumour activity in a UM cell xenograft model The activity of afatinib was further evaluated in an in vivo C918 UM cell xenograft mouse model. We found that afatinib treatment (15 mg/kg daily for 16 days) markedly decreased the final weights of C918-derived UM tumours in mice (Fig. 7A) and strongly inhibited tumour growth (Fig. 7B). Using PET scan analysis, we found that tumour volumes after afatinib treatment were decreased to ~55% of the controls (p < 0.01; Fig. 7C and D). Subsequently, we performed immunohistochemical staining of the tumour tissues harvested from the xenografted mice (Fig. 7E). We found that there was a pronounced decrease in staining for the tumour proliferation marker Ki67 in the UM tumours following afatinib treatment. Increases in TUNEL staining indicated that apoptosis was activated in tumours isolated from afatinib-treated mice. [Displaced figure caption: Experiments were repeated on three occasions. *p < 0.05; **p < 0.01; ***p < 0.001 vs. control by unpaired t-test.] These findings indicate that afatinib effectively induces apoptosis and inhibits proliferation in UM tumours in vivo. Discussion UM has a poor prognosis and, currently, there are no effective treatment options. MKI drugs have revolutionised the treatment of many cancer types by targeting kinases that drive important tumorigenic mechanisms such as proliferation, survival, motility and angiogenesis. Gefitinib, crizotinib and afatinib are kinase inhibitors that are used to treat tumours that exhibit EGFR mutations, ALK fusions and ERBB2/HER2 amplification, respectively [39][40][41]. Other inhibitors, like sorafenib, target multiple tumorigenic kinases [42]. MKIs that are known to be clinically useful for certain cancers have, however, yielded disappointing results in clinical trials for UM [14,15].
The EGFR family encompasses four ErbB members (ErbB1-4) that form homo- and hetero-dimers [25]. Apart from HER2, the receptors contain an extracellular domain with leucine-rich regions that can bind growth factors [25]. EGFR family proteins have been widely studied as anti-cancer targets [43]. Initial studies suggested that EGFR was expressed in human UM cell lines and tumours and that its expression correlated with tumorigenic activities, including proliferation and metastatic potential [44,45]. It has been reported that 14 of 48 primary UMs and 3 of 14 UM cell lines over-expressed EGFR, and that EGFR over-expressing tumours, but not EGFR-negative tumours, showed an activated EGF signature [18]. In another series of 21 primary UMs tested, EGFR was detected in 6 of them and was found to correlate with metastatic disease [17]. In yet another study, EGFR was found to be expressed in 8 of 40 tumours and to correlate with mitotic activity [16]. Other studies have, however, questioned the significance of EGFR in UM progression. In a study encompassing 60 UM tumours of varying aggressiveness, EGFR expression was found to be positive in 13 and heterogeneous in 5 [20]. No correlation was observed between EGFR expression and tumorigenic activity. Scholes et al. [22] and Mallikarjuna et al. [20] failed to observe any associations between EGFR expression and tumorigenic or metastatic capacities [16]. Moreover, gefitinib treatment yielded only limited benefits in a phase II study in 50 patients with UM or metastatic cutaneous melanomas: only one of 6 patients with UM exhibited a response, with a progression-free survival period of 9.7 months [23]. In the present study, selective EGFR inhibitors (erlotinib, gefitinib and vandetanib) were found to be relatively ineffective in decreasing UM cell viability, and none of the UM cell lines used in the present study expressed EGFR. Therefore, EGFR is unlikely to be a significant target of afatinib in UM. In addition, two of the four UM cell lines that were used in the present study did not express HER4 (data not shown), suggesting that HER4 is also unlikely to be a significant target for afatinib. Consistent with the findings in the present study, it has been reported that HER2 is expressed in UM cells [46]. Forsberg et al. presented confirmatory evidence that HER2 protein is detectable by immunohistochemical staining in xenograft models of UM [47]. We found that MKIs that inhibit HER2 were more effective in decreasing UM cell viability. Afatinib was more potent in decreasing UM cell viability than three other MKIs (crizotinib, sorafenib and sunitinib) that are currently in clinical trials. Afatinib strongly activated apoptosis and cell cycle arrest in the four UM cell lines tested in this study, which was corroborated in UM patient tumour-derived cell lines. Interestingly, afatinib also demonstrated a significant inhibitory effect on UM cell migration and promoted reproductive cell death, which indicates its clinical potential in the inhibition of UM metastasis. Our data also showed anti-cancer activity of afatinib in a UM xenograft model. Afatinib markedly inhibited tumour growth and suppressed tumour progression, which suggests that the drug may have in vivo therapeutic potential. Afatinib-induced apoptosis is associated with decreased expression of the anti-apoptotic Bcl-xL protein and the cell cycle progression regulator cyclin D1, and with a decrease in the Bcl-xL/Bax ratio in UM cell lines.
Afatinib-driven cellular apoptosis was also accompanied by an elevated expression of STAT1. STAT1 plays an important role in cell survival, viability and responses to pathogens. [Fig. 6 caption: Afatinib exerts its anti-cancer actions by targeting the HER2, AKT, ERK and PI3K signalling cascades in UM cells. Serum was removed from Mel202, 92.1, C918 and OMM-1 cells and, 24 h later, cells were treated with 20% FBS or medium alone for 10 min at 37 °C. Subsequently, cells were treated with afatinib or vehicle (serum-free medium) for 1 h at 37 °C prior to the preparation of total cell lysates. The expression of HER2, AKT, PI3K and ERK signalling proteins was analysed by Western blotting. Representative images of p-HER2, HER2, p-AKT, AKT, p-PI3K, PI3K, p-ERK and ERK are shown for Mel202 (A), 92.1 (B), C918 (C) and OMM-1 (D) cells; GAPDH was used as loading control. Densitometry analysis of protein expression was performed. Ratios of p-HER2/HER2, p-AKT/AKT, p-PI3K/PI3K and p-ERK/ERK with serum stimulation are presented as fold of those without serum stimulation (tables at right). Data are presented as fold of control (mean ± SD). Experiments were repeated on three occasions. #p < 0.05; ##p < 0.01; ###p < 0.001 vs. control by two-way ANOVA. Note: C-: control without serum stimulation; C+: control with serum stimulation; A-: afatinib treatment without serum stimulation; A+: afatinib treatment with serum stimulation.] STAT1 induces cell cycle arrest in response to interferon-γ by interacting with D-type cyclins and cyclin-dependent kinase-4 in fibrosarcoma cells [48]. STAT1 also inhibits the transcription of the anti-apoptotic Bcl-2 and Bcl-xL proteins that promote mitochondrial integrity [49,50]. STAT1 expression has been reported to be increased in EGFR-positive and HER2-positive breast cancer patients, and relapse-free survival was found to be decreased in high-risk breast cancer patients [26]. STAT1 expression is transcriptionally upregulated by HER2 in breast cancer cells [51]. In stably transfected cell lines that overexpress HER2, 462 proteins were detected using the SILAC (stable isotope labelling with amino acids in cell culture) method, 198 of them showing increased tyrosine phosphorylation and 81 showing decreased tyrosine phosphorylation [27]. These phosphoproteins included a number of HER2 and EGFR signalling intermediates, such as STAT1 [27]. STAT1 expression has also been found to correlate with a favorable prognosis in several cancer types, including colorectal [52,53], hepatocellular [54] and esophageal [55] cancers, and metastatic melanomas [56]. Concomitant deletion of STAT1 and overexpression of the ErbB2/neu oncogene in mammary epithelial cells accelerated mammary tumorigenesis [57,58]. Consistent with the literature, in the current study afatinib was found to increase STAT1 expression and to decrease the expression of the downstream mediators cyclin D1 and Bcl-xL. Thus, the anti-UM effects of afatinib are likely mediated through STAT1 upregulation, which subsequently leads to cell cycle arrest and apoptosis (Fig. 8). We confirmed inhibition of HER2 activation upon acute afatinib treatment in all four UM cell lines tested, which suggests that HER2 is likely the molecular target of afatinib and, thus, may be a viable anti-cancer target in UM. Unlike the EGFR inhibitors erlotinib and gefitinib, which act in a reversible fashion, the HER2 inhibitors afatinib, pelitinib and neratinib also include an irreversible component in their inhibitory mechanism [25].
These agents bind to the ATP pocket of the receptor, and their bulky extra aromatic groups are oriented toward the kinase domain of HER2 [59,60]. Moreover, three other HER2 inhibitors that were assessed in the initial screening (lapatinib, pelitinib and neratinib) showed more effective anti-UM activity than other MKIs that did not target HER2. Several signalling cascades, including the AKT, PI3K and ERK pathways, have been shown to be activated by HER2 [59,60]. These were also evaluated in afatinib-treated UM cells. We found that afatinib markedly attenuated the activation of HER2 and the downstream AKT-, ERK- and PI3K-linked signalling cascades in UM cells. Thus, the inhibitory effect of afatinib on HER2, PI3K, AKT and ERK signalling could be an early event following afatinib treatment, resulting in a loss of UM cell viability (Fig. 8). Afatinib is clinically approved to treat non-small cell lung carcinoma (NSCLC) and head and neck squamous carcinoma [61][62][63][64]. A phase II trial of afatinib monotherapy showed some promise in patients with HER2-positive esophagogastric cancers that were refractory to the anti-HER2 monoclonal antibody trastuzumab [65]. Our study is the first to show the clinical potential of afatinib in the treatment of UM and the prevention of metastasis. Importantly, our data show that afatinib is highly selective for UM tumour cells while minimally altering the viability of normal retinal cells, melanocytes and fibroblasts. In contrast, the other three MKIs tested were somewhat toxic to non-carcinoma retinal cell types. Therefore, afatinib may have greater efficacy and lower toxicity in the treatment of UM. The Cmax of afatinib is ~0.16 μM after administration of multiple daily oral doses of 50 mg in patients [66]. Brain penetration of afatinib has been demonstrated in in vivo models, although it might be somewhat lower than that of other EGFR inhibitors [67,68]. The desired plasma concentration of afatinib may not be readily achievable in the treatment of UM by the oral route. However, intraocular injection, or other local administration routes that are used commonly for the treatment of eye diseases, may enable higher local concentrations to be achieved. In addition, advances in novel drug formulations may allow improved delivery of afatinib for the treatment of UM. Future research into these areas is now warranted, but is beyond the scope of the current study. In summary, we found that afatinib has potent anti-cancer properties in in vitro, ex vivo and in vivo UM models. HER2 signalling has emerged as a likely molecular target that activates apoptosis upon afatinib treatment in UM cells. Afatinib also has the advantage of preventing UM cell migration and enhancing reproductive cell death, which may contribute to suppressing UM metastasis. Together, our data indicate that afatinib may serve as a novel candidate drug with an improved therapeutic effect and selectivity in treating UM by targeting HER2. [Fig. 7 caption: Afatinib inhibits tumour growth in a UM cell xenograft model. BALB/c nude mice were inoculated with C918 cells and maintained for 14 days. Mice were administered afatinib (15 mg/kg per day, n = 10) or vehicle (n = 12) on day 15 via intraperitoneal injection; treatments were continued for another 16 days. Tumour volumes and body weights of mice were measured every 4 days. At the end of the experiment, mice were either sacrificed for the harvesting of tumour samples or were subjected to whole-body PET scans (n = 5 or 6 mice in each arm). Representative images of tumours are shown in panel (A). Tumour growth curves are shown in panel (B); data are shown as percentages of the tumour size on day 15 (mean ± SD; n = 5 or 6 per group); **p < 0.01; ***p < 0.001 vs. control by unpaired t-test. Representative PET scan images are shown in panels (C) and (D); positron-emission intensity of tumours is presented as radioactivity vs. weight (%ID/g; mean ± SD, n = 5 or 6 per group); **p < 0.01 vs. control by unpaired t-test. Representative images of staining analyses of tumour sections are shown in panel (E): paraffin-embedded tissue sections from control and afatinib-treated mice were subjected to hematoxylin and eosin staining (upper), anti-Ki67 staining (middle) or TUNEL assays (bottom).] [Fig. 8 caption: Schematic summary of the anti-cancer mode of action of afatinib in UM cells. Afatinib inhibits HER2 to exert its anti-UM effect, which then leads to downregulation of the downstream PI3K, AKT and ERK pathways. These early events activate UM cell apoptosis and decrease UM cell survival and progression by inducing STAT1 and downregulating Bcl-xL and cyclin D1.]
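As a closing practical note on the xenograft analysis, the calliper-based volume formula V = (a × b²)/2 used in the Methods, and tumour growth expressed as a percentage of the size at treatment start (as plotted in Fig. 7B), amount to the following sketch; the measurements are invented for illustration.

```python
# Minimal sketch: calliper-based tumour volume, V = (a * b^2) / 2,
# and growth as a percentage of the treatment-start (day 15) size.
def tumour_volume(length_mm: float, width_mm: float) -> float:
    """Volume in mm^3 from calliper length (a) and width (b)."""
    return length_mm * width_mm ** 2 / 2.0

baseline = tumour_volume(6.7, 5.5)    # ~100 mm^3, treatment start
followup = tumour_volume(10.1, 8.2)   # a later time point

print(f"baseline: {baseline:.0f} mm^3")
print(f"growth:   {100.0 * followup / baseline:.0f}% of starting size")
```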
Unitary Evolution and Elements of Reality in Consecutive Quantum Measurements Probabilities of the outcomes of consecutive quantum measurements can be obtained by constructing probability amplitudes, thus implying the unitary evolution of the measured system, broken each time a measurement is made. In practice, the experimenter needs to know all past outcomes at the end of the experiment, and that requires the presence of probes carrying the corresponding records. With this in mind, we consider two different ways to extend the description of a quantum system beyond what is actually measured and recorded. One is to look for quantities whose values can be ascertained without altering the existing probabilities. Such "elements of reality" can be found, yet they suffer from the same drawback as their EPR counterparts. The probes designed to measure non-commuting operators frustrate each other if set up to work jointly, so no simultaneous values of such quantities can be established consistently. The other possibility is to investigate the system's response to weakly coupled probes. Such weak probes are shown either to reduce to a small fraction the number of cases in which the corresponding values are still accurately measured, or to lead only to the evaluation of the system's probability amplitudes, or their combinations. It is difficult, we conclude, to see in quantum mechanics anything other than a formalism for predicting the likelihoods of the recorded outcomes of actually performed observations. Introduction In [1], Feynman gave a brief yet surprisingly thorough description of quantum behaviour. Quantum systems are intrinsically stochastic, calculation of probabilities must rely on complex-valued probability amplitudes, and it is unlikely that one will be able to get a further insight into the mechanism behind the formalism. One may ask two separate questions about the view expressed in [1]. Firstly, is it consistent? There have been recent suggestions [2] that quantum mechanics may be self-contradictory, and that its flaws can be detected from within the theory, i.e., by considering certain thought experiments. In [3], we have argued that the proposed "contradictions" are easily resolved if Feynman's description is adopted. Secondly, can the rules be explained further? There have been proposals of "new physics" based on such concepts as time symmetry, weak measurements, and weak values (see [4][5][6], and Refs. therein). Recently, we have shown the weak values to be but Feynman's probability amplitudes, or their combinations [7,8]. The ensuing paradoxes occur if the amplitudes are used inappropriately, e.g., as a proof of the system's presence at a particular location [9], a practice known for quite some time to be unwise (see [10] and Ch. 6, pp. 144-145 of [11]). It is probably fair to say that Feynman's conclusions have not been successfully challenged to date, and we will continue to rely on them in what follows. The approach of [1] is particularly convenient for describing situations where several measurements are made one after another on the same quantum system. Such consecutive or sequential measurements have been studied by various authors over a number of years [12][13][14][15][16], and we will continue to study them here. The simplest case involves just two observations, of which the first prepares the measured system in a known state, and the second yields the value of the measured quantity "in that state."
Adding intermediate measurements between these two significantly changes the situation, as it brings in a new type of interference, which the measurements can now destroy. Below we will discuss two particular issues which arise in the analysis of such sequential measurements. One is the breakdown of the unitary evolution of the measured system, which occurs each time a measurement is made. Another is the possibility of extending the description of the system beyond what is actually being measured. This can be done, e.g., by looking for "elements of reality", i.e., the properties or values which can be ascertained without changing anything else in the system's evolution. This can also be done by studying a system's response to weakly coupled inaccurate measuring devices. It is not our purpose here to dispute the findings made by the authors using alternative approaches (see, for example, [5]). Rather, we want to see how the above issues can be addressed in conventional quantum mechanics, as presented in [1]. The rest of the paper is organised as follows. In Section 2, we recall the basic rules and discuss the broken unitary evolution of the measured system. In Section 3, we note that, in order to be able to gather the statistics, the experimenter would need the records of the previous outcomes. The system's broken evolution can then be traded for an unbroken unitary evolution of a composite {system + the probes which carry the records}. In Section 4, we discuss two different (and indeed well known) types of probes. In Section 5, we discuss the quantities whose additional measurements would not alter the likelihoods of all other outcomes. However, like their EPR counterparts, these "elements of reality" cannot be observed simultaneously. In Section 6, we illustrate this on a simple two-level example. In Section 7, we look at what would happen in an attempt to measure two such quantities jointly. Section 8 asks if something new can be learnt about the system by minimising the perturbation incurred by the probes. Section 9 contains a summary of our conclusions. Feynman's Rules of Quantum Motion: Broken Unitary Evolutions Consider a system (S) with which the theory associates an $N$-dimensional Hilbert space $\mathcal{H}_S$. The $L+1$ quantities $\hat{Q}^\ell$ to be measured at the times $t_0 < \dots < t_\ell < \dots < t_L$ are represented by Hermitian operators $\hat{Q}^\ell$ acting in $\mathcal{H}_S$, each with $M_\ell \le N$ distinct real-valued eigenvalues $Q^\ell_{m_\ell}$, $\hat{Q}^\ell = \sum_{m_\ell=1}^{M_\ell} Q^\ell_{m_\ell}\,\hat{\pi}^\ell_{m_\ell}$, where $|q^\ell_n\rangle$ ($\langle q^\ell_n|q^\ell_{n'}\rangle = \delta_{nn'}$, $n = 1, \dots, N$) are the measurement bases, $\Delta(X-Y) = 1$ if $X = Y$ and $0$ otherwise, and $\hat{\pi}^\ell_{m_\ell}$ is the projector onto the eigen-subspace corresponding to an eigenvalue $Q^\ell_{m_\ell}$. The first operator $\hat{Q}^0$ is assumed to have only non-degenerate eigenvalues, i.e., $\hat{Q}^0 = \sum_{n_0=1}^{N} Q^0_{n_0}|q^0_{n_0}\rangle\langle q^0_{n_0}|$. This is needed to initialise the system, in order to proceed with the calculation. The possible outcomes of the experiment are, therefore, the sequences of the observed values $Q^L_{m_L}, \dots, Q^0_{n_0}$, and one wishes to predict the probabilities (frequencies) with which a particular real path $\{Q^L_{m_L} \leftarrow \dots \leftarrow Q^\ell_{m_\ell} \leftarrow \dots \leftarrow Q^0_{n_0}\}$ would occur after many trials. Following [1], one can obtain these by first constructing the system's virtual paths $\{q^L_{n_L} \leftarrow \dots \leftarrow q^\ell_{n_\ell} \leftarrow \dots \leftarrow q^0_{n_0}\}$, connecting the corresponding states in $\mathcal{H}_S$, and ascribing to each path a probability amplitude (we use $\hbar = 1$) built from the matrix elements of $\hat{U}_S(t_{\ell+1}, t_\ell) = \exp[-i\int_{t_\ell}^{t_{\ell+1}}\hat{H}_S(t')\,dt']$, the system's evolution operator (the time-ordered product is assumed if the system's Hamiltonian operators $\hat{H}_S(t)$ do not commute at different times, $[\hat{H}_S(t), \hat{H}_S(t')] \ne 0$).
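A minimal numerical sketch of these rules may be helpful before proceeding. For a toy qubit, one chains the matrix elements $\langle q_{n_{\ell+1}}|\hat{U}_S|q_{n_\ell}\rangle$ along each virtual path and squares the modulus of the product; the bases, the trivial evolution ($\hat{U}_S = 1$), and the rotation angle below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Feynman's rules for consecutive measurements:
# multiply the amplitudes <next| U |prev> along each virtual path,
# then square the modulus to get the probability of a real path.
import itertools
import numpy as np

U = np.eye(2, dtype=complex)                 # trivial evolution U_S = 1
basis_b = [np.array([1, 0], dtype=complex),  # eigenbasis of the first
           np.array([0, 1], dtype=complex)]  # (preparing) operator
theta = np.pi / 3                            # final basis, rotated
basis_c = [np.array([np.cos(theta), np.sin(theta)], dtype=complex),
           np.array([-np.sin(theta), np.cos(theta)], dtype=complex)]

def path_amplitude(states):
    """Product of <next| U |prev> along one virtual path."""
    amp = 1.0 + 0j
    for prev, nxt in zip(states, states[1:]):
        amp *= np.vdot(nxt, U @ prev)  # vdot conjugates its first argument
    return amp

# Probabilities of real paths {C_j <- B_i} after preparation in |b_0>;
# they sum to unity, and paths through the orthogonal |b_1> vanish.
for i, j in itertools.product(range(2), range(2)):
    a = path_amplitude([basis_b[0], basis_b[i], basis_c[j]])
    print(f"P(C_{j} <- B_{i}) = {abs(a) ** 2:.3f}")
```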
We will assume that all Hermitian operators $\hat{Q} = \hat{Q}^\dagger$ can be measured in this way. We will also allow for all unitary evolutions, and obtain the corresponding probabilities [Equation (4)]. We note that the amplitudes in Equation (3) depend only on the projectors $\hat{\pi}^\ell_{m_\ell}$ in Equation (2), and not on the corresponding eigenvalues $Q^\ell_{m_\ell}$. To stress this, we can write them as $A(q^L_{n_L} \leftarrow \dots \leftarrow \hat{\pi}^\ell_{m_\ell} \leftarrow \dots \leftarrow q^0_{n_0})$ [Equation (5)]. Finally, summing $p(q^L_{n_L} \leftarrow \dots \leftarrow Q^\ell_{m_\ell} \leftarrow \dots \leftarrow q^0_{n_0})$ over the degeneracies of the last operator $\hat{Q}^L$ yields the desired probabilities for the real paths [Equation (6)]. Note that there is no interference between the paths leading to different (i.e., orthogonal) final states $|q^L_{n_L}\rangle$, even if they correspond to the same eigenvalue $Q^L_{m_L}$ [1]. This is necessary, since an additional $(L+2)$-nd measurement of an operator $\hat{Q}^{L+1} = \sum_{n_{L+1}=1}^{N} Q^{L+1}_{n_{L+1}}|q^L_{n_{L+1}}\rangle\langle q^L_{n_{L+1}}|$ immediately after $t = t_L$ would destroy any interference between the paths ending in different $|q^L_{n_L}\rangle$'s at $t = t_L$. Since future measurements are not supposed to alter the results already obtained, one never adds the amplitudes for the final orthogonal states [1]. Note that the same argument cannot be repeated for the past measurements at $t_\ell < t_L$. It may be convenient to cast Equation (6) in an equivalent form [Equations (7) and (8)], in which a unitary evolution of the initial state $|q^0_{n_0}\rangle$ with the system's evolution operator $\hat{U}_S$ is seen to be interrupted at each $t = t_\ell$. One can check that the probabilities in Equation (6) sum, as they should, to unity. It is worth bearing in mind the Uncertainty Principle which, we recall, states that [1] "one cannot design equipment in any way to determine which of two alternatives is taken, without, at the same time, destroying the pattern of interference". In particular, this means that if two or more virtual paths in Equation (2) are allowed to interfere, it must be absolutely impossible to find out which one was followed by the system. Moreover, one will not even be able to say that, in a given trial, one of them was followed, while the others were not (see [10] and Ch. 6, pp. 144-145 of [11]). With the basic rules laid out, and an example given in Figure 1, we will turn to practical realisations of an experiment involving several consecutive measurements of the kind just described. The Need for Records: Unbroken Unitary Evolutions In an experiment described in Section 2, there are $N \times M_1 \times M_2 \times \dots \times M_L$ possible sequences of observed outcomes. At the end of each trial, the experimenter identifies the real path followed by the system, $\text{path} = \{Q^L_{m_L} \leftarrow \dots \leftarrow Q^\ell_{m_\ell} \leftarrow \dots \leftarrow Q^0_{n_0}\}$, and increases by 1 the count in the corresponding part of their inventory, $K(\text{path}) \to K(\text{path}) + 1$. After $K \gg 1$ trials, the ratios $K(\text{path})/K$ will approach the probabilities in Equation (6), from which all the quantities of interest, such as averages or correlations, can be obtained later. There is one practical point. In order to identify the path, an Observer must have readable records of all past outcomes once the experiment is finished, i.e., just after $t = t_L$. There are two reasons for that. Firstly, quantum systems are rarely visible to the naked eye, so something accessible to the experimenter's senses is clearly needed. Secondly, and more importantly, the condition of the system changes throughout the process [cf. Equation (8)], and its final state simply cannot provide all necessary information. In other words, one requires probes which copy the system's state at each $t = t_\ell$, $\ell = 0, 1, \dots, L$, and retain this information till the end of the trial. It is easy to see what such probes must do.
The experiment begins by coupling the first probe to a previously unobserved system at $t = t_0$. To proceed with the calculation, we may assume that just after $t_0$ the initial state of the composite {system + probes} is given by Equation (9), where $|D^\ell(0)\rangle$ is the initial state of the $\ell$-th probe which, if found changed into $|D^\ell(m_\ell)\rangle$, $\langle D^\ell(m_\ell)|D^\ell(m'_\ell)\rangle = \delta_{m_\ell m'_\ell}$, would tell the experimenter that the outcome at $t = t_\ell$ was $Q^\ell_{m_\ell}$. Note that the first probe $D^0$ has already been coupled to a previously unobserved system and produced a reading $n_0$, thus preparing the system in a state $|q^0_{n_0}\rangle$. The composite would undergo unitary evolution with a (yet unknown) evolution operator $\hat{U}_{S+\mathrm{Probes}}(t_L, t_0)$. The rules of the previous section still apply, albeit in a larger Hilbert space, and with only two ($L = 1$) measurements, of which the first one prepares the entire composite in the state (9). For simplicity, we let the last operator have non-degenerate eigenvalues, $M_L = N$. By (6), the probability to have an outcome $\{Q^L_{n_L} \leftarrow \dots \leftarrow Q^\ell_{m_\ell} \leftarrow \dots \leftarrow Q^0_{n_0}\}$ is given by Equation (10). We want the probabilities in Equation (10) (the ones the experimenter measures) and the probabilities in Equation (6) (the ones the theory predicts) to agree. Consider again the scenarios $\{q^L_{n_L} \leftarrow \dots \leftarrow Q^\ell_{n_\ell} \leftarrow \dots \leftarrow q^0_{n_0}\}$ in Equation (3). In the absence of the probes they lead to the same final state, $|q^L_{n_L}\rangle$, interfere, and cannot be told apart, according to the Uncertainty Principle. If we could use the probes to turn these scenarios into exclusive alternatives [1], e.g., by directing them to different (orthogonal) final states in the larger Hilbert space, Equation (6) for the system subjected to $L+1$ measurements would follow. In other words, we will be able to trade a broken evolution in a smaller space $\mathcal{H}_S$ [cf. Equation (8)] for an uninterrupted unitary evolution in a larger Hilbert space $\mathcal{H}_{S+\mathrm{Probes}}$. For this we need an evolution operator $\hat{U}_{S+\mathrm{Probes}}(t_L, t_0)$ such that Equation (12) holds, where the orthogonal states $|\Psi_{\mathrm{Probes}}(n_L, \dots, m_\ell, \dots, n_0)\rangle$ play the role of "tags", by which previously interfering paths $\{q^L_{n_L} \leftarrow \dots \leftarrow Q^\ell_{m_\ell} \leftarrow \dots \leftarrow q^0_{n_0}\}$ can now be distinguished (see Figure 2). For the reader worried about the collapse of the wave function, we note that the same probabilities can be obtained in two different ways. Either the evolution of the wave function of the system only is broken every time an instantaneous measurement is made, as happens in Equation (8), or the evolution of the {system + probes} continues until the end of the experiment, as in Equation (12). Finally, we note the difference between producing all $L$ records, but not using or having no access to some of them, and not producing some of the records at all. There is also a possibility of destroying, say, the $\ell$-th record by making a later measurement on a composite {the system + the $\ell$-th probe} [17,18]. In this case, the composite becomes the new measured system, and the rules of the previous section still apply. Two Kinds of Probes We note next that it does not really matter for the theory how exactly the records are produced, as long as the interference between the virtual paths is destroyed, and Equation (12) is satisfied. The states $|D^\ell\rangle$ in Equation (9) may equally refer to devices, to the Observer or the Observers' memories [17,19], or to the notes the Observers have made in the course of a trial [18].
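A minimal numerical sketch of this trade-off, under the simplifying assumption of one qubit and one two-level probe, is given below: a C-NOT-style unitary "tags" the two virtual paths with orthogonal probe states, and tracing the probe out removes the interference (off-diagonal) terms from the system's reduced state.

```python
# Minimal sketch: a probe entangles with the system so that previously
# interfering paths acquire orthogonal "tags" (cf. Equation (12)).
import numpy as np

CNOT = np.array([[1, 0, 0, 0],   # basis order |system, probe>:
                 [0, 1, 0, 0],   # |00>, |01>, |10>, |11>;
                 [0, 0, 0, 1],   # the probe flips when the
                 [0, 0, 1, 0]], dtype=complex)  # system is in |1>

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # system superposition
probe0 = np.array([1, 0], dtype=complex)             # probe ready state

composite = CNOT @ np.kron(plus, probe0)
# Partial trace over the probe: the off-diagonal (interference) terms
# of the system's reduced density matrix vanish.
rho = np.outer(composite, composite.conj()).reshape(2, 2, 2, 2)
rho_system = np.trace(rho, axis1=1, axis2=3)
print(np.round(rho_system, 3))  # diag(0.5, 0.5), zero off-diagonals
```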
We will assume for simplicity that the probes have no dynamics of their own, and retain their states after having interacted with the measured system. Several interactions which have the desired effect are, in fact, well known, and we will discuss them next. There are at least two types of probes consistent with Equation (12). They require different treatments, and we will consider them separately. Discrete Gates For the $\ell$-th probe, consider a register of $M_\ell$ two-level sub-systems, each prepared in its lower state $|1_{m_\ell}\rangle$. The probe, designed to measure a quantity $\hat{Q}^\ell = \sum_{m_\ell=1}^{M_\ell} Q^\ell_{m_\ell}\hat{\pi}^\ell_{m_\ell}$, is coupled to the system via the interaction Hamiltonian of Equation (15), where $\hat{\sigma}_x(m_\ell)$ is the Pauli matrix, which acts on the $m_\ell$-th sub-system in the usual way, $\hat{\sigma}_x(m_\ell)|1_{m_\ell}\rangle = |2_{m_\ell}\rangle$. Since the individual terms in Equation (15) commute, the evolution operator of the {system + probe} composite is given by Equation (16). The probe entangles with the system in the required way [Equation (17)], where $|D^\ell(m_\ell)\rangle$ is obtained from $|D^\ell(0)\rangle$ by flipping the state of the $m_\ell$-th sub-system. We note that, whatever the state $|\psi\rangle_S$, one of the sub-systems will change its condition (the system will be found somewhere). We note also that in each trial only one sub-system will be affected (the system is never found simultaneously in two or more places). The full evolution operator is, therefore, given by Equation (19), where, as before, we assumed $M_0 = M_L = N$, $\hat{\pi}^0_{n_0} = |q^0_{n_0}\rangle\langle q^0_{n_0}|$, and $\hat{\pi}^L_{n_L} = |q^L_{n_L}\rangle\langle q^L_{n_L}|$. The experimenter prepares all probes in the states (14) and, once the experiment is finished, only needs to check which sub-system of $D^\ell$, say the $m_\ell$-th, has changed its state. This will tell him/her that the value of $\hat{Q}^\ell$ at $t = t_\ell$ was $Q^\ell_{m_\ell}$. As a simple example, Figure 3 shows an outcome of five measurements made on a four-state system. There the first probe, capable of distinguishing between all four states, prepares the system in a state $|q^0_{n_0}\rangle$. The second probe cannot tell apart the third and the fourth states, so $\hat{\pi}^1_1 = |q^1_1\rangle\langle q^1_1|$, $\hat{\pi}^1_2 = |q^1_2\rangle\langle q^1_2|$, and $\hat{\pi}^1_3 = |q^1_3\rangle\langle q^1_3| + |q^1_4\rangle\langle q^1_4|$, and so on. The sequence of the measured values is obtained by inspecting the probes at $t > t_4$. After many trials the sequence will be observed with a probability $P(Q^4_{m_4} \leftarrow \dots \leftarrow Q^0_{n_0})$ [cf. Equation (6)]. Von Neumann's Pointers In classical mechanics one can measure the value of a dynamical variable $Q(x, p)$ at $t_0$ by coupling the system to a "pointer", a heavy one-dimensional particle with position $f$ and momentum $\lambda$. The full Hamiltonian is given by $H_S(x, p) + \lambda Q(x, p)\delta(t - t_0)$, and at $t = t_0$ the pointer is rapidly displaced by $\delta f = Q(x(t_0), p(t_0))$, which provides the desired reading. What happens to the system depends on the pointer's momentum, which remains unchanged by the interaction. If $\lambda = 0$, the system continues on its way unperturbed. If $\lambda \ne 0$, the system experiences a sudden kick, whereby its position and momentum are changed by $\Delta x = \lambda\partial_p Q(x(t_0), p(t_0))$ and $\Delta p = -\lambda\partial_x Q(x(t_0), p(t_0))$, respectively. The quantum version of the pointer [20] employs a coupling $\hat{H}_{int} = g(t)\hat{\lambda}\hat{Q}$, where $\hat{\lambda} = -i\partial_f$ is the pointer's momentum operator, and $\hat{Q} = \sum_m Q_m\hat{\pi}_m$ is the (system's) operator to be measured. The function $g(t) = 1/\tau$ can be chosen constant for the duration of the measurement $\tau$, $t_0 \le t \le t_0 + \tau$, and zero otherwise. It tends to a Dirac delta $\delta(t - t_0)$ for an instantaneous (impulsive) measurement, where $\tau \to 0$.
For a system whose state $|\psi\rangle_S$ lies in the eigen-subspace of a projector $\hat{\pi}_m$, the action of $\hat{H}_{int}$ results in a spatial shift of the pointer's initial state $|G(0)\rangle$ by $Q_m$ [Equation (21)]. With $L+1$ pointers employed to measure the $L+1$ quantities $\hat{Q}^\ell$, the initial state of the composite can be chosen to be as in Equation (22) [cf. Equation (9)], where the initial pointer states can be, e.g., identical Gaussians of a width $\Delta f$, all centred at the origin, except for the first probe, where we would need a narrow Gaussian, $|G^0(f_0)|^2 \to \delta(f_0)$, in order to prepare the system in $|q^0_{n_0}\rangle$. If all couplings are instantaneous, one obtains for the amplitude in Equation (3) its analogue for the composite, with $|q^L_{n_L}\rangle$ replaced by a state of the composite, $|q^L_{n_L}\rangle \otimes |f\rangle$ [Equation (23)]. If one wants the measurements to be accurate, the pointers need to be set to zero with as little uncertainty as possible (see Figure 4). This uncertainty is determined by the Gaussian's width $\Delta f$, and sending it to zero we obtain Equation (24), in which the probability is computed with the help of Equation (7). Equation (24) is the desired result, which deserves a brief discussion. In each trial, the pointers' readings may take only discrete values $Q^\ell_{m_\ell}$, and the observed sequences occur with the probabilities predicted for the system by Feynman's rules of Section 2. However, unlike in the classical case, this information comes at the cost of perturbing the system's evolution. Indeed, writing $G^\ell(0) = \int G^\ell(\lambda_\ell)\exp(i\lambda_\ell f_\ell)\,d\lambda_\ell$ and proceeding as before, one obtains terms containing $\exp(-i\lambda_\ell\hat{Q}^\ell)$, which represents the "kick" produced on the system by the $\ell$-th pointer at $t_\ell$. As in the classical case, we can get rid of the kick by ensuring that the pointer's momentum $\lambda_\ell$ is approximately zero. However, Heisenberg's uncertainty principle (see, e.g., [1]) will make the uncertainty in the initial pointer position very large. Accuracy and perturbation go hand in hand, and the measured values do not "pre-exist" the measurements but are produced in the course of them [21]. Notably, one can still predict the probabilities by not mentioning the pointers at all, and analysing instead an isolated system, whose unitary evolution is broken each time the coupling takes place. Secondly, and importantly, von Neumann pointers have many states, and only a few of them are actually used. This suggests that the pointers and the probes of the previous section could, in principle, be replaced by much more complex devices, with only a few states of their vast Hilbert spaces coming into play. For example, there is nothing in quantum theory that forbids using printers, which print the observed values on a piece of paper. If an experiment that measures a spin's component is set up properly, the machine will print only "up" or "down" with the predicted frequencies, and would never digress into French romantic poetry. The Past of a Quantum System: Elements of Reality The stock of an experiment described so far is taken just after the last measurement at $t = t_L$. This is the "present" moment, the times $t_0, \dots, t_L$ are relegated to the "past", and the "future" is yet unknown. Possible pasts are defined by the choice of the measured quantities $\hat{Q}^\ell$, and of the times $t_\ell$ at which the impulsive measurements are performed, and it is their probabilities which the theory aims to predict. There are clearly gaps in the description of the system between successive measurements at $t_\ell$ and $t_{\ell+1}$. One way to fill them (without adding new measurements, which would change the problem) is to look for quantities whose values can be ascertained at some $t_\ell < t' < t_{\ell+1}$ without altering the existing probabilities.
Or, to put it slightly differently, to ask what can be measured without destroying the interference between the virtual paths [cf. (2)] which contribute to the amplitudes (3). There is a well-known analogy. EPR-like scenarios [22] are often used to question the manner in which quantum theory describes the physical world. In a nutshell, the argument goes as follows. Alice and Bob, at two separate locations, share an entangled pair of spins. Alice can ascertain that Bob's spin has any desired direction, while being apparently unable to influence it due to the restrictions imposed by special relativity. Hence, all possible values of the spin's projections can exist simultaneously, i.e., be in some sense real. If quantum mechanics insists that different projections cannot have well-defined values at the same time, it must be incomplete. We are not interested here in the details of this important ongoing discussion (for an overview see [22] and Refs. therein), or the implications relativity theory may have for elementary quantum mechanics [23]. Rather, we want to make use of the Criterion of Reality (CR) used by the authors of [24] to determine what should be considered "real". This criterion reads: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." [24]. Consider again an experiment in which $L+1$ measurements are made on the system at $t = t_\ell$, $\ell = 0, 1, \dots, L$, while the system's condition at some $t'$ between, say, $t_\ell$ and $t_{\ell+1}$ remains unknown. To fill this gap, one may use the CR criterion just cited, and look for any information about the system which can be obtained without altering the existing statistical ensemble. Thus, one needs a variable $\hat{Q}'$ whose measurement at $t_\ell < t' < t_{\ell+1}$ results in Equation (25). There are at least two kinds of quantities that satisfy this condition. To the first kind belong operators of the type given in Equation (26), where $\hat{\pi}^\ell_{m_\ell}(t', t_\ell) = \hat{U}_S(t', t_\ell)\,\hat{\pi}^\ell_{m_\ell}\,\hat{U}_S^{-1}(t', t_\ell)$ is the projector $\hat{\pi}^\ell_{m_\ell}$ evolved in time from $t_\ell$ to $t'$. To the second kind belong the quantities given in Equation (27), and the probabilities remain unchanged [cf. Equation (6); Equations (28) and (29)]. There is, of course, a simple explanation. The states $\hat{U}_S(t', t_\ell)|q^\ell_n\rangle$ form an orthogonal basis for measuring $\hat{Q}^-(t')$, and the system in $|q^\ell_n\rangle$ at $t_\ell$ can only go to $\hat{U}_S(t', t_\ell)|q^\ell_n\rangle$ at $t'$, as all other matrix elements of $\hat{U}_S(t', t_\ell)$ vanish. Similarly, the system in $\hat{U}_S^{-1}(t_{\ell+1}, t')|q^{\ell+1}_{n_{\ell+1}}\rangle$ at $t'$ can only go to $|q^{\ell+1}_{n_{\ell+1}}\rangle$ at $t_{\ell+1}$. The presence of the operators $\hat{U}_S^{-1}(t', t_\ell)$ and $\hat{U}_S(t_{\ell+1}, t')$ in Equations (26) and (27) ensures that Equation (25) holds, and Equation (29) follows. The problem is as follows. By using the CR, we appear to be able to say that at $t = t'$ a quantity $\hat{Q}^-(t')$ has a definite value $Q^-_{m_\ell}$ if the value of $\hat{Q}^\ell$ at $t = t_\ell$ was $Q^\ell_{m_\ell}$. Similarly, it would appear that $\hat{Q}^+(t')$ also has a definite value $Q^+_{m_{\ell+1}}$ if the value of $\hat{Q}^{\ell+1}$ at $t = t_{\ell+1}$ is $Q^{\ell+1}_{m_{\ell+1}}$. Since in general $\hat{Q}^-(t')$ and $\hat{Q}^+(t')$ do not commute, $[\hat{Q}^-(t'), \hat{Q}^+(t')] \ne 0$, and quantum mechanics forbids ascribing simultaneous values to non-commuting quantities, we seem to have a contradiction. Fortunately, the contradiction is easily resolved. At the end of the experiment, one needs to have all the relevant records, and to produce these records an additional probe must be coupled at $t = t'$. Measuring $\hat{Q}^-(t')$ or $\hat{Q}^+(t')$ requires different probes, which affect the system differently, and produce different statistical ensembles.
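Before turning to the two-level example of the next section, here is a minimal numerical check of the condition (25) for a qubit: inserting, at an intermediate time $t'$, a measurement in the forward-evolved basis $\hat{U}_S(t', t_\ell)|q^\ell_n\rangle$ (i.e., of $\hat{Q}^-(t')$) leaves all final-outcome probabilities unchanged. The unitaries and angles below are illustrative.

```python
# Minimal sketch: an intermediate measurement in the evolved basis
# U_S(t', t0)|b_n> does not alter the final probabilities (cf. (25)).
import numpy as np

def unitary(angle: float) -> np.ndarray:
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]], dtype=complex)

U1 = unitary(0.4)   # U_S(t', t0)
U2 = unitary(0.9)   # U_S(t1, t')
b = [np.eye(2, dtype=complex)[:, k] for k in range(2)]  # basis at t0
c = [unitary(1.1)[:, k] for k in range(2)]              # final basis

# Without the intermediate measurement:
p_direct = [abs(np.vdot(c[j], U2 @ U1 @ b[0])) ** 2 for j in range(2)]

# With a measurement at t' in the evolved basis {U1 |b_n>}:
evolved = [U1 @ b[n] for n in range(2)]
p_with = [sum(abs(np.vdot(c[j], U2 @ evolved[n])) ** 2 *
              abs(np.vdot(evolved[n], U1 @ b[0])) ** 2
              for n in range(2))
          for j in range(2)]

print(np.round(p_direct, 6))  # identical to...
print(np.round(p_with, 6))    # ...the probabilities with the probe
```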
The values $Q^-_{m_\ell}$ and $Q^+_{m_{\ell+1}}$ do not pre-exist their respective measurements [9,21], and appear as a result of a probe acting on the system. The caveat is the same as in Bohr's answer [25] to the authors of [24]: there are no practical means of ascertaining these conflicting values simultaneously. Next, we give a simple example. A Two-Level Example Consider a two-level system (a qubit), $N = 2$, prepared by the first measurement of an operator $\hat{B} = \sum_{i=1}^{2} B_i|b_i\rangle\langle b_i|$ in a state $|b_1\rangle$ at $t = t_0$. The second measurement (we have $L = 1$) yields one of the eigenvalues of an operator $\hat{C} = \sum_{i=1}^{2} C_i|c_i\rangle\langle c_i|$ [Equation (30)]. With only two dimensions involved, all eigenvalues are non-degenerate. If for simplicity we put $\hat{H}_S = 0$, $\hat{U}_S(t_1, t_0) = 1$ (see Figure 5a), one can easily verify that at any $t_0 < t' < t_1$ the value of $\hat{B}$ is $B_1$ (see Figure 5b), or that $\hat{C}$ has the same value it will have at $t = t_1$ (see Figure 5c). Moreover, it is easy to ascertain that if the first and the last outcomes are $B_1$ and $C_i$, the values of $\hat{B}$ at $t = t'$ and of $\hat{C}$ at $t = t''$ are $B_1$ and $C_i$, as long as $t_0 < t' < t'' < t_1$ (see Figure 6a). Indeed, according to (3) we have Equations (31) and (32). The former is no longer true if $\hat{C}$ is measured before $\hat{B}$. A final value $C_1$ can now be reached via four real paths (Figure 6b), and the probabilities no longer agree with those of Equation (32) [Equation (33)]. The transition between the two regimes occurs at $t' = t''$, when an attempt is made to measure two non-commuting quantities at the same time. The rules of Section 2 imply that such measurements are not possible in principle, since $\hat{B}$ and $\hat{C}$ do not have a joint set of eigenstates which could be inserted into Equation (2). However, ultimately one is interested in the records available at the end of the experiment. Next, we will look at the readings the probes would produce, should they be set up to measure the non-commuting $\hat{B}$ and $\hat{C}$ simultaneously. [Figure 5 caption: (a) The first measurement of $\hat{B} = \sum_{i=1}^{2} B_i|b_i\rangle\langle b_i|$ prepares the system ($\hat{H}_S = 0$) in a state $|b_1\rangle$, and is followed by a measurement of $\hat{C} = \sum_{i=1}^{2} C_i|c_i\rangle\langle c_i|$. (b) An additional measurement of $\hat{B}$ at $t_0 < t' < t_1$ yields an outcome $B_1$, and finds the system in the state $|b_1\rangle$ with certainty. (c) An additional measurement of $\hat{C}$ at the same $t'$ yields $C_i$ with certainty, if the last outcome is also $C_i$. The probabilities are unchanged, and it would appear that at $t = t'$ the system has well-defined values of the non-commuting operators $\hat{B}$ and $\hat{C}$.] [Figure 6 caption: (a) In the case shown in Figure 5b, at $t' < t'' < t_1$ one can add a measurement of $\hat{C}$, still leaving the probabilities unchanged. It would appear that in a transition $\{C_1 \leftarrow B_1\}$ the intermediate values $B_1$ and $C_1$ co-exist for any $t'$ and $t''$ such that $t'' > t'$. (b) The above is no longer true if $\hat{C}$ is measured before $\hat{B}$. The transition between the cases (a,b) is discussed in Section 7.] Joint Measurement of Non-Commuting Variables We want to consider two measurements made on the system in Figure 6 which can overlap in time at least partially. No longer instantaneous, both measurements will last $\tau$ seconds, start at $t'$ and $t''$, $t', t'' > t_0$, respectively, and finish before $t = t_1$, i.e., $t' + \tau, t'' + \tau < t_1$. The degree to which the measurements overlap will be controlled by a parameter $\beta$, so that for $\beta = 1$ the measurement of $\hat{B}$ precedes that of $\hat{C}$, $\beta = 0$ corresponds to simultaneous measurements of both $\hat{B}$ and $\hat{C}$, and for $\beta = -1$ $\hat{C}$ is measured first. Next we consider the two kinds of probes introduced in Section 4 separately. C-NOT Gates as Meters For our two-level example of Section 6 we can further simplify the probe described in Section 4.
Since we only need to distinguish between two of the system's states, a two-level probe, whose state either changes or remains the same, is all that is required. We will need two such probes, $D'$ and $D''$, two sets of states, four projectors, and two couplings [Equations (34)-(37)], where $\hat{\sigma}'_x|D'(1)\rangle = |D'(2)\rangle$. In what follows we will put $\tau = 1$. The probes are prepared in the respective states $|D'(1)\rangle$ and $|D''(1)\rangle$, and after finding the system in $|c_2\rangle$ at $t_2$ their state is given by Equation (38) if $t'' > t'$, while for $t'' < t'$ the order of the operators in (38) is reversed. It is easy to check ($\ell = {}', {}''$) that for $|\beta| = 1$ the r.h.s. of Equation (39) reduces to $\hat{\pi}_1 + i\hat{\pi}_2\hat{\sigma}_x$. The action of the coupling is, therefore, that of a quantum Controlled-NOT gate [26], which flips the probe's (target) state if the system's (control) state is $|b_2\rangle$ or $|c_2\rangle$, and leaves the probe's condition unchanged if it is $|b_1\rangle$ or $|c_1\rangle$. If $\beta = 1$, there is no overlap, and $\hat{B}$ is measured before $\hat{C}$. Dividing $\tau$ into $K$ subintervals and sending $K \to \infty$, we have an identity [cf. Equation (39)], and the state of the first probe remains unchanged. For $\beta = 0$ both probes act simultaneously. Now the use of Trotter's formula yields Equation (42), which contains scenarios where both probes change their states, and since $\hat{\pi}'\hat{\pi}'' \ne 0$, the evolution of one of them must affect what happens to the other. Now all four probabilities in Equation (40) have non-zero values, although it is still more likely that both probes will remain in their initial states (cf. Figure 7). Finally, for $\beta = -1$, $\hat{C}$ is measured before $\hat{B}$ as if both measurements were instantaneous, and all four paths in Figure 6b are equally probable. A different result is obtained if the two measurements are of the von Neumann type, as we will discuss next. Von Neumann Meters Consider the same problem, but with the two-level probes replaced by two von Neumann pointers with positions $f'$ and $f''$, respectively. As before, the interaction with each pointer lasts $\tau$ seconds, so the two Hamiltonians are given by Equation (43). If the pointers are prepared in identical Gaussian states (22), $\langle f|G^\ell\rangle = G(f_\ell)$, $\ell = {}', {}''$, the probability distribution of their final positions is given by Equation (44), where (we measure $f$ in units of $g_0$ and put $\tau = 1$) the amplitude $\Phi$ is defined in Equation (45). Consider first the case $\beta = 0$, where the measurements coincide. The amplitude $\Phi(f', f'')$ has several general properties. Firstly, it cannot be a smooth finite function of $y'$ and $y''$, or the integral in (44) would vanish in the limit of narrow Gaussians, $\Delta f \to 0$, due to the normalisation of $|G'\rangle$ and $|G''\rangle$ (cf. Equation (22)). It must, therefore, have $\delta$-singularities [27,28]. Secondly, using Trotter's formula [29], we have Equation (46). It is readily seen that each time the product in the r.h.s. of Equation (46) is applied, the pointers are displaced by $B_1/K$ or $B_2/K$ and $C_1/K$ or $C_2/K$, respectively. If $B_{1,2}, C_{1,2} = \pm 1$, one has a quantum random walk, where the pointers are shifted by an equal amount $1/K$ either to the right or to the left. Since the largest possible displacement is 1, $\Phi(f', f'')$ must vanish outside a square $-1 \le f', f'' \le 1$. One also notes that for $f' = \pm 1$, the maximum of $\Phi(\pm 1, f'')$ is reached for $f'' = 0$, since there are (let $K$ be an even number) $K!/[(K/2)!(K/2)!]$ walks, each contributing to $\Phi(f', f'')$ the same amount $1/2^{2K+1}$. Similarly, a maximum of $\Phi(f', \pm 1)$ is reached for $f' = 0$. A detailed combinatorial analysis is complicated, but the location of the singularities can be determined as was done in [30]. As explained in Appendix B, the amplitude $\Phi(y', y'')$ in Equation (45) can also be written as Equation (47), where $J_k$ is the Bessel function of the first kind of order $k$.
From Equation (44) one notes that in the limit ∆f → 0, P(f′, f″) becomes singular, and that its singularities coincide with those of Φ(f′, f″) in Equation (45). Since [31] J_k(λf → ∞) → (2/πλf)^(1/2) cos(λf − kπ/2 − π/4), the integral in Equation (47) diverges at large λ provided the oscillations of cos(λ) and sin(λ) are cancelled by those of J₀(λf′) and J₁(λf″), i.e., for f′² + f″² = 1. As a result, we find the pointer readings of two simultaneous accurate measurements of B̂ and Ĉ (with B₁,₂ = C₁,₂ = ±1) distributed along the perimeter of a unit circle, as shown in Figure 8. Figure 9 shows the distribution of the pointers' readings for different degrees of overlap between the two measurements. There is, therefore, an important difference between employing discrete probes and von Neumann pointers. In the previous example, shown in Figure 7, one could (although should not) assume, e.g., that for β = 0 P(1′, 2″) yields the probability for B̂ and Ĉ to have the values 1 and −1 if measured simultaneously. Figures 8 and 9 show this conclusion to be inconsistent. As β decreases from 1 to −1, the pointers' readings are not restricted to (±1′, ±1″). Rather, for β = 0 they lie along the perimeter of a unit circle (cf. Figures 8 and 9), which, if taken at face value, implies that the probability in question is zero. We already noted that the theory of Section 2 is unable to prescribe probabilities for simultaneous values of non-commuting operators. In practice, this means that probes capable of performing the task in a consistent manner simply cannot be constructed.

The Past of a Quantum System: Weakly Perturbing Probes and the Uncertainty Principle

It remains to see what information can be obtained from measurements designed to perturb the measured system as little as possible. If such a measurement were an attempt to distinguish between interfering scenarios without destroying the interference between them, it would contradict the Uncertainty Principle cited in Section 2. As before, we treat the two types of probes separately.

Weak Discrete Gates

We start by reducing the coupling strength γ in the interaction Hamiltonians (15). In [11], Feynman described a double-slit experiment in which photons, scattered by the passing electron, allowed one to know through which of the two slits the electron had travelled. With every electron duly detected, their distribution on the screen, P_{Slit-known}(x), does not exhibit an interference pattern. With no photons present, the pattern is present in the distribution P_{Slit-unknown}(x). If the intensity of light (i.e., the number of photons) is decreased, some of the electrons pass undetected. The total distribution on the screen is, therefore, "a mixture of the two curves" [11],

P(x) = a P_{Slit-unknown}(x) + b P_{Slit-known}(x),

where a and b are some constants. Something very similar happens if an extra discrete probe D′ is added to measure Q̂′ = ∑_{m′=1}^{M′} Q′_{m′} π̂′_{m′} at t = t′, t_ℓ < t′ < t_{ℓ+1}. As before (cf. Equation (16)), we find an expression in which the cosine term accounts for the possibility that the system passes the check undetected. We can replace P(x) in Feynman's example with the probability P(q_L n_L) of detecting the system in a final state |q_L n_L⟩ at t_L. With a value of Q̂′ detected in every run, γ = π/2, one has a distribution in which A_S is the amplitude of Equation (5).
If the probe is uncoupled, γ = 0, the corresponding distribution retains the interference between the system's paths. With 0 < γ < π/2 the outcomes fall into two groups: those where the probe D′ remains in its initial state, and those where the state of one of its sub-systems has been flipped. The two alternatives are exclusive [32], and the total distribution is indeed a mixture of the two curves. As γ → 0 one has

P(q_L n_L) = (1 − γ²) P_{Q′-unknown}(q_L n_L) + γ² P_{Q′-known}(q_L n_L) + o(γ²).

In other words, in the vast majority of cases the system remains undetected, and the interference is preserved (cf. Equation (52)). In the few remaining cases it is detected, and the interference is destroyed. The Uncertainty Principle [1] is obeyed to the letter: probabilities are added where the records allow one to distinguish between the scenarios; otherwise one sums the amplitudes. One has, however, to admit that nothing really new has been learned from this example, as both possibilities simply illustrate the rules of Section 2. The above analysis is easily extended to include more extra measurements, whether impulsive or not. Since to first order in the coupling constant γ weak probes act independently of each other, the r.h.s. of Equation (53) would contain additional terms P_{Q″-known}(q_L n_L), P_{Q‴-known}(q_L n_L), etc.

Weak von Neumann Pointers

Next we add to the L + 1 accurate impulsive von Neumann pointers an extra "weak" pointer, designed to measure Q̂′ = ∑_{m′=1}^{M′} Q′_{m′} π̂′_{m′} at t = t′ between t_ℓ and t_{ℓ+1}. The new coupling, given by Equation (54), perturbs the system only slightly in the limit γ → 0. To see what happens in this limit, one can replace f′ by γf′, Ĥ_int in Equation (54) by −i∂_{f′}Q̂′, and the pointer's initial state G′(f′) by γ^(1/2) G′(γf′). Now as γ → 0 the pointer's initial state becomes very broad, while the coupling remains unchanged. For the Gaussian pointers (22) considered here, this means replacing ∆f with ∆f/γ, i.e., making the measurement highly inaccurate. This makes sense: the purpose of a pointer is to destroy interference between the system's virtual paths (cf. Equation (24)), which it is clearly unable to do if the coupling vanishes. Accordingly, with the pointer's initial position highly uncertain, its final reading is also spread almost evenly between −∞ and ∞. Measured in this manner, the value of Q̂′ remains indeterminate, as required by the Uncertainty Principle. This could be the end of our discussion, except for one thing. It is still possible to use the broad distribution of the pointer's readings in order to evaluate averages, which could, in principle, remain finite in the limit ∆f → ∞. Maybe this can tell us something new about the system's condition at t = t′. Note, however, that whatever information is extracted in this manner should not contradict the Uncertainty Principle, or the whole of quantum theory would be in trouble [1]. From Equation (22) it is already clear that any average of this type will be expressed in terms of the amplitudes A(q_L n_L . . . ← π̂′_{m′} . . . ← q₀n₀). The simplest average is the pointer's mean position. If the outcomes of the accurate measurements are Q_L n_L . . . Q_{m_ℓ} . . . Q₀n₀, for the mean reading of the weakly coupled pointer we obtain Equation (55) (see Appendix C). If the measured operator is one of the projectors, say Q̂′ = π̂′_{m′}, this reduces to Equation (56). We note that, as in the previous example, different pointers do not affect each other to leading order in the small parameter γ. The quantities on the l.h.s. of Equations (55) and (56) are the standard averages of the probes' variables. On the r.h.s.
of these equations one finds probability amplitudes for the system's entire paths, {q_L n_L . . . ← π̂_{m_{ℓ+1}} ← π̂′_{m′} ← π̂_{m_ℓ} . . . ← q₀n₀}. The values of these amplitudes can be deduced from the probes' probabilities [33]. The problem is that these values offer no insight into the condition of the system at t = t′. In the double-slit case, to conclude that a particle ". . . goes through one hole or the other when you are not looking is to produce an error in prediction" [11]. In our case, one cannot say that the value of Q̂′ was or was not a particular Q′_{m′}. The Uncertainty Principle prevails again, this time by letting one gain only information insufficient for determining the condition of the unobserved system at t = t′. A more detailed discussion of this point can be found, e.g., in [7,8].

Summary and Conclusions

A very general way to describe quantum mechanics is to say that it is a theory which prescribes probability amplitudes to sequences of events, and then predicts the probability of a sequence by taking the absolute square of the corresponding amplitude [1]. Where several (L + 1) consecutive measurements are made on the same system (S), a sequence of interest is that of the measured values Q_{m_ℓ}, endowed with a (system's) probability amplitude A_S(q_L n_L . . . ← π̂_{m_ℓ} . . . ← q₀n₀) in Equation (5). This is not, however, the whole story. To test the theory's predictions, an experimenter (Observer) must keep records of the measured values, in order to collect the statistics once the experiment is finished. This is more than a mere formality. The system, whose condition changes after each measurement, cannot itself store this information. Hence the need for probes: material objects whose conditions must be directly accessible to the Observer at the end. One can think of photons [1,11], devices with or without dials, or the Observer's own memories of the past outcomes [17,18]. The probes must be prepared in a suitable initial condition |Ψ_Probes(0)⟩ and be found in one of the orthogonal states |Ψ_Probes(n_L, . . . m′ . . . n₀)⟩ later, with an amplitude A_{S+Probes}(Ψ_Probes(n_L, . . . m′ . . . n₀), q_L n_L ← Ψ_Probes(0), q₀n₀). To be consistent, the theory must construct the amplitudes A_S and A_{S+Probes} using exactly the same rules, and ensure that |A_{S+Probes}|² = |A_S|². In other words, the experimenter should see a record occurring with the frequency the theory predicts for an isolated (no probes) system going through the corresponding conditions. This requires the existence of a suitable coupling between the system and the probe. Its choice is not unique, and for a system with a finite-dimensional Hilbert space, studied here, two different kinds of probes were discussed in Section 4. The first is a discrete gate, using the interaction in Equation (15), while the second is the original von Neumann pointer [20]. Now one can obtain the same probability P = |A_{S+Probes}|² = |A_S|² by considering a unitary evolution of the composite {System + Probes} until the moment the Observer examines the records at the end of the experiment. Or one can consider such an evolution of the system only, broken every time a probe is coupled to it. For a purist intent on identifying quantum mechanics with unitary evolution (see, e.g., the discussion in [34]), the first way may seem preferable. Yet there is no escaping the final collapse of the composite's wave function when stock is taken at the end of the experiment.
It is often simpler to discuss measurements in terms of the measured system's amplitudes, leaving out, but not forgetting, the probes. The rules formulated in Section 2 readily give an answer to any properly asked question, but offer no clues regarding a question which has not been asked operationally. One may try to extend the description of a quantum system's past by looking for additional quantities whose values could be ascertained without changing the probabilities of the measured outcomes. In general, this is not possible. To find the value of a quantity Q̂′ at a t′ between two successive measurements, t_ℓ < t′ < t_{ℓ+1}, one needs to connect an extra probe. This would destroy interference between the system's paths and change other probabilities, leaving the question "what was the value of Q̂′ if it was not measured?" without an answer. There are two seeming exceptions to this rule. If Q̂′ is obtained by evolving backwards in time the previously measured Q̂_ℓ (cf. Equation (26)), call it Q̂⁻, its value is certain to equal that of Q̂_ℓ, and all other probabilities will remain unchanged (cf. Equation (28)). Similarly, the value of a Q̂⁺ (cf. Equation (27)), obtained by the forward evolution of the next measured operator Q̂_{ℓ+1}, will also agree with that of Q̂_{ℓ+1}. It would be tempting to assume that these values represent some observation-free "reality", were it not for the fact that they cannot be ascertained simultaneously. The two measurements require different probes, each affecting the system in a particular way. The probes frustrate each other if employed simultaneously. It is hardly surprising that different evolution operators Û_{S+Probes}(t_L, t₀) in Equation (10) may lead to different outcomes. One notes also that measuring these two quantities one after another would leave all other probabilities intact, but only if Q̂⁻ is measured first, as shown in Figure 6a. Changing this order results in a completely different statistical ensemble, shown in Figure 6b. The rules of Section 2 say little about what happens if the measurements coincide, except that if Q̂⁻ and Q̂⁺ do not have common eigenstates, Equation (5) cannot be applied. One can still analyse the behaviour of the two probes at different degrees of overlap, to explain why it is impossible to reach consistent conclusions about the simultaneous values of Q̂⁻ and Q̂⁺. For example, if two discrete gates are used, Figure 7 appears to offer four joint probabilities of having the values B, C = ±1. If the discrete probes are replaced by von Neumann pointers, the readings shown in Figure 8 suggest that the joint values of B̂ and Ĉ should lie on the perimeter of a unit circle, in clear contradiction with the previous conclusion. Another way to explore the system's past beyond what has been established by accurate measurements is to study its response to a weakly perturbing probe, set up to measure some Q̂′ at an intermediate time t′. In this limit, the two types of probes produce different effects but, in accordance with the Uncertainty Principle, reveal nothing new that can be added to the rules formulated in Section 2. If the coupling of an additional discrete probe is reduced, trials are divided into two groups. In the (larger) number of cases, the system remains undetected, and interference between its virtual paths passing through different eigenstates of Q̂′ remains intact. In the (smaller) fraction of cases, the value of Q̂′ is accurately determined, and the said interference is destroyed completely.
Individual readings of a weak von Neumann pointer extend over a range much wider than the region containing the values of Q̂′, and are in this sense practically random. Its mean position (reading) allows one to learn something about the probability amplitude in Equation (56), or a combination of such amplitudes, as in Equation (55). The problem is that even after obtaining the values of these amplitudes (and this can be done in practice [33]), one still does not know the value of Q̂′, for the same reason one cannot know the slit chosen by an unobserved particle in a double-slit experiment. Quantum probability amplitudes simply do not have this kind of predictive power [1,11]. In summary, quantum mechanics can consistently be seen as a formalism for calculating transition amplitudes by means of evaluating matrix elements of evolution operators. In such a "minimalist" approach (see also [35]), the importance of a wave function, represented by an evolving system's state, is reduced to that of a convenient computational tool. In the words of Peres [23] (see also [36]), ". . . there is no meaning to a quantum state before the preparation of the physical system nor after its final observation (just as there is no 'time' before the big bang or after the big crunch)." This is, however, not a universally accepted view. For example, the authors of [4-6] propose a time-symmetric formulation of quantum mechanics, employing not one but two evolving quantum states. We will examine the usefulness of such an approach in future work.

The probes are prepared in an initial state |Φ_Probes(t₀)⟩ = |1′⟩|1″⟩ = ∑_{λ′,λ″=±} |D′(λ′)⟩|D″(λ″)⟩/2, and after post-selecting the system in |c₁⟩, their final state (38) is given in terms of the coefficients

U_{λ′λ″} = ⟨c₁| exp(iπλ″|β|π̂″₂/2) exp[iπ(1 − |β|)(λ′π̂′₂ + λ″π̂″₂)/2] exp(iπλ′|β|π̂′₂/2) |b₁⟩.   (A3)
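As a numerical cross-check of the gate-probe analysis (cf. Equation (A3) and Figure 7), the following sketch (Python, not part of the original paper) builds the composite {system + two qubit probes} evolution for the assumed concrete choice |c₁,₂⟩ = (|b₁⟩ ± |b₂⟩)/√2 and couplings Ĥ_B = (π/2) π̂ᵇ₂ ⊗ σ̂ₓ′ and Ĥ_C = (π/2) π̂ᶜ₂ ⊗ σ̂ₓ″ with τ = 1. For β = ±1 the probes act sequentially as C-NOT gates; for β = 0 they act simultaneously.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
e0 = np.array([1, 0], dtype=complex)                 # probe state |D(1)>

b1, b2 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
c1, c2 = (b1 + b2) / np.sqrt(2), (b1 - b2) / np.sqrt(2)

pi_b2 = np.outer(b2, b2.conj())                      # |b2><b2|
pi_c2 = np.outer(c2, c2.conj())                      # |c2><c2|

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# couplings on (system) x (probe D') x (probe D'')
HB = (np.pi / 2) * kron3(pi_b2, sx, I2)              # flips D' if the system is in |b2>
HC = (np.pi / 2) * kron3(pi_c2, I2, sx)              # flips D'' if the system is in |c2>

def U(beta):
    """B measured first for beta = 1, simultaneously for beta = 0, last for beta = -1."""
    first, last = (HB, HC) if beta >= 0 else (HC, HB)
    s = abs(beta)
    return expm(-1j * s * last) @ expm(-1j * (1 - s) * (HB + HC)) @ expm(-1j * s * first)

psi0 = kron3(b1, e0, e0)                             # system prepared in |b1>

for beta in (1.0, 0.0, -1.0):
    amp = (U(beta) @ psi0).reshape(2, 2, 2)          # indices: system, D', D''
    post = np.tensordot(c1.conj(), amp, axes=(0, 0)) # post-select the system on <c1|
    print(f"beta = {beta:+.0f}:", np.round(np.abs(post.ravel()) ** 2, 3))
```

For β = +1 only the no-flip/no-flip record survives (probability 1/2 after post-selection); for β = −1 the four records come out equally likely (1/8 each); for β = 0 all four acquire non-zero probability, with both probes most likely to remain in their initial states.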
Co-production of polyhydroxybutyrate (PHB) and coenzyme Q10 (CoQ10) via no-sugar fermentation—a case by Methylobacterium sp. XJLW

Purpose: To explore a competitive PHB-producing fermentation process, this study evaluated the potential of Methylobacterium sp. XJLW to simultaneously produce PHB and coenzyme Q10 (CoQ10) using methanol as sole carbon and energy source.

Methods: The metabolic pathways of PHB and CoQ10 biosynthesis in Methylobacterium sp. XJLW were first mined based on genomic and comparative transcriptomic information. Then, real-time fluorescence quantitative PCR (RT-qPCR) was employed to compare the expression levels of important genes involved in the PHB and CoQ10 synthesis pathways in response to methanol and glucose. Transmission electron microscopy (TEM), gas chromatography/mass spectrometry (GC-MS), nuclear magnetic resonance (NMR), Fourier transform infrared spectroscopy (FT-IR), and liquid chromatography/mass spectrometry (LC-MS) were used to elucidate the yield and structure of PHB and CoQ10, respectively. The PHB and CoQ10 productivity of Methylobacterium sp. XJLW was evaluated in Erlenmeyer flasks for medium optimization, and in a 5-L bioreactor using a methanol fed-batch strategy regulated by dissolved oxygen (DO) and pH control.

Results: Comparative genomics analysis showed that the PHB and CoQ10 biosynthesis pathways co-exist in Methylobacterium sp. XJLW. Transcriptomics analysis showed that the transcription levels of key genes in both pathways in response to methanol were significantly higher than those in response to glucose. Correspondingly, strain Methylobacterium sp. XJLW can produce PHB and CoQ10 simultaneously, with higher yield using cheap and abundant methanol than using glucose as sole carbon and energy source. The isolated products showed the same structural characteristics as standard PHB and CoQ10. The optimal medium and culture conditions for PHB and CoQ10 co-production by Methylobacterium sp. XJLW were M3 medium containing 7.918 g L⁻¹ methanol, 0.5 g L⁻¹ ammonium sulfate, 0.1% (v/v) Tween 80, and 1.0 g L⁻¹ sodium chloride, at 30 °C and pH 7.0. In a 5-L bioreactor coupled with a methanol fed-batch process, a maximum DCW of 46.31 g L⁻¹ was obtained, with the highest yields of PHB and CoQ10 reaching 6.94 g L⁻¹ and 22.28 mg L⁻¹, respectively.

Peiwu Cui and Yunhai Shao contributed equally to this work.
College of Biotechnology and Bioengineering, Zhejiang University of Technology, Hangzhou 310032, People's Republic of China. Cui et al. Annals of Microbiology (2021) 71:20. https://doi.org/10.1186/s13213-021-01632-w

Introduction

Nowadays, along with the increasing demand for polymer plastics, which are widely used in product packaging, daily tools, equipment parts, and construction, the pollution caused by petroleum-based plastics has drawn growing attention because of their poor biodegradability (Cardoso et al. 2020; Mostafa et al. 2020). To address this global problem, many scientists have devoted great effort to biodegradable polymer production. Because they show thermoplastic, elastomeric, and other physical-chemical properties similar to conventional plastics, polyhydroxyalkanoates (PHAs) are regarded as the most promising substitute, as they can be completely degraded to CO₂ and H₂O (Sukruansuwan and Napathorn 2018; Mostafa et al. 2020). However, the high cost of PHA production from costly substrates has seriously limited the commercial utilization of PHAs, forcing scientists to explore alternative approaches to produce them at a lower price (Parveez et al. 2015). The production costs of PHAs depend on many factors, including strains, substrates, cultivation conditions, and extraction and purification processes (Gamez-Perez et al. 2020). The carbon source is regarded as the major factor, accounting for 70-80% of the total expense of PHAs (Mohandas et al. 2017), because PHAs are usually synthesized under the specific condition of nutrient limitation and carbon source excess (Cardoso et al. 2020). Thus, development of a PHA-producing process based on a cheap and renewable substrate is still necessary. As a common industrial by-product and a cheap, renewable chemical feedstock, methanol has been widely used as carbon and energy source in methylotroph fermentation processes for value-added chemical production (Zaldivar Carrillo et al. 2018; Zhang et al. 2019). Hence, methanol-based fermentation for PHA production is a highly promising, sugar-free process. Among all PHAs, polyhydroxybutyrate (PHB) is considered the most competitive biopolymer because of its good biocompatibility, biodegradability, and properties similar to polypropylene (Parveez et al. 2015; Sharma 2019). Meanwhile, coenzyme Q10 (CoQ10) is the most valuable product among natural quinone metabolites, and it is a useful clinical drug for removing free radicals in the body, stabilizing biological membranes, preventing lipid peroxidation, and strengthening nonspecific immunity (Ernster and Dallner 1995; Qiu et al. 2012; Lu et al. 2013). Thus, PHB and CoQ10 were selected as representatives of biopolymers and quinone metabolites, respectively, to evaluate the potential for their co-production via a methanol-based process. In our previous work, a formaldehyde-degrading methylotrophic bacterium was isolated and identified as Methylobacterium sp. XJLW (Qiu et al. 2014; Shao et al. 2019a). Its complete genome has been sequenced (Shao et al. 2019b). Comparative genomic analysis showed that Methylobacterium sp. XJLW contains both pathways of CoQ10 and PHB biosynthesis (Fig.
1), suggesting the possibility of developing a new fermentation process to realize co-production of PHB and CoQ10 with abundant methanol as sole carbon source, which would provide a more economical process for PHB production. In the present study, the aims were to (1) verify the potential of PHB and CoQ10 co-production by Methylobacterium sp. XJLW with different carbon sources, glucose and methanol; (2) elucidate the expression differences of the key genes in the PHB and CoQ10 biosynthesis pathways of Methylobacterium sp. XJLW in response to methanol and glucose; and (3) evaluate the effects of medium composition and cultivation conditions on PHB and CoQ10 co-production in Erlenmeyer flasks and in a 5-L stirred bioreactor employing a methanol fed-batch strategy. This study provides a new reference strategy for improving value-added product productivity in methanol-based fermentation processes employing methylotrophs.

Materials and methods

Chemicals

PHB (purity above 95%, CAS no. 26063-00-3) and CoQ10 (purity above 99.9%, CAS no. 303-98-0) were purchased from Sigma-Aldrich, China. Alcohol (HPLC grade, purity above 99.5%) was purchased from Tjshield Fine Chemicals Co., Ltd. (Tianjin, China). Other chemicals were of analytical grade.

Strain preservation and activation

After the broth OD₆₀₀ of strain XJLW cultured in liquid M3 mineral medium containing 1.0% methanol reached about 0.6, about 750 μL of broth was mixed with 250 μL of 80% sterile glycerol in a 1.5-mL centrifuge tube and stored in a −80 °C freezer. For activation, the stored strain was thawed, inoculated into M3 liquid medium containing methanol, and cultivated on a shaker at 30 °C and 180 rpm.

Culture conditions

Medium M3 was prepared as described by Bourque et al. (1995). The initial pH of the media was adjusted to 7.0 with 1 mol L⁻¹ NaOH. Methanol (7.918 g L⁻¹) was added to the two media used as sole carbon source after autoclaving at 115 °C for 30 min. Fifty microliters of frozen stock suspension of Methylobacterium sp. XJLW was inoculated into a 250-mL Erlenmeyer flask containing 50 mL of medium M3 and incubated for 96 h. Then 2 mL of culture was inoculated into 250-mL Erlenmeyer flasks containing 50 mL of fermentation medium and incubated for 5 days in a rotary incubator (SPH-2102, SHIPING, China) at 30 °C and 400 rpm.

Cell morphology observation via transmission electron microscopy

Cells in 1 mL of culture broth were harvested by centrifugation at 5790 × g for 10 min at 4 °C in a high-speed refrigerated centrifuge (TGL-16M, Bioridge, China), then suspended in 4% (v/v) pre-cooled glutaraldehyde and fixed for 1 h at 4 °C. Ultrathin sections of fixed cells were observed under a transmission electron microscope (HITACHI H-7650, Japan) at a magnification of 15,000 ×.

Physiological characteristic analysis combined with RNA-seq and RT-qPCR

Cell growth and the ability to produce PHB and CoQ10 simultaneously were assessed in M3 medium supplemented with 7.4232 g L⁻¹ glucose or 7.918 g L⁻¹ methanol, respectively. Meanwhile, cells were harvested for RNA-seq and RT-qPCR.

RNA-seq data analysis

After culture in M3 containing methanol or glucose as carbon source, respectively, at 30 °C to log phase (OD₆₀₀ 0.8), Methylobacterium sp. XJLW cells were harvested by centrifugation at 2000 × g for 10 min at 4 °C in a high-speed refrigerated centrifuge (TGL-16M, Bioridge, China). Cell pellets were immediately mixed with RNAprotect Bacteria Reagent (QIAGEN China Co. Ltd) and stored at −80 °C for RNA extraction.
A total amount of 1 μg of qualified RNA per sample was used as input material for library preparation. Library concentration was preliminarily quantified using the Qubit® RNA Assay Kit on a Qubit® 3.0 (Thermo Fisher Scientific, USA). Insert size was assessed using the Bioanalyzer 2100 system (Agilent, USA); once the insert size was consistent with expectations, the qualified fragments were accurately quantified by qPCR on a StepOnePlus Real-Time PCR system (ABI, USA). The raw reads were filtered by removing reads containing adaptors, reads containing poly-N (unrecognized bases exceeding 5% of the read), and low-quality reads (reads in which bases with quality score ≤ 10 accounted for more than 50% of the read). Firstly, Tophat2 (Kim et al. 2013) was used to evaluate the sequencing data by alignment against the reference genome. Based on the Tophat2 alignment results, Cufflinks-2.2.1 (Trapnell et al. 2010) was used to perform quantitative gene expression analysis. Gene expression was calculated as FPKM (expected number of Fragments Per Kilobase of transcript per Million mapped fragments). In general, the screening criteria for significantly differentially expressed genes were |log₂ fold change| ≥ 1 and p value ≤ 0.05. Scatter plots and volcano plots were used to present the overall profile of gene expression differences.

Fig. 1 Genetic organization of genes and core pathways responsible for CoQ10 (a, c) and PHB (b, d) synthesis in strain XJLW via comparative genomic analysis. EC numbers in yellow-backed text boxes in a and b could not be found in the genomic data of strain XJLW. Genes labelled in green with a (+) symbol were upregulated in the methanol group, while genes labelled in red with a (−) symbol were downregulated in methanol compared with glucose. The black-backed gene in c indicates that the expression level of this gene was not affected by methanol or glucose.

RNA extraction and quantitative RT-qPCR

Cells in the early exponential stage, cultured in M3 medium supplemented with 7.4232 g L⁻¹ glucose or 7.918 g L⁻¹ methanol, respectively, were centrifuged at 2000 × g for 10 min at 4 °C in a high-speed refrigerated centrifuge (TGL-16M, Bioridge, China). Total RNA was extracted using RNA Isolator (Vazyme Biotech Co., Ltd., Nanjing). The HiScript II Q RT SuperMix for qPCR kit (Vazyme Biotech Co., Ltd., Nanjing) was then used for reverse transcription. The RT-qPCR reactions were prepared with ChamQ SYBR qPCR Master Mix, and quantitative PCR was performed on a Bio-Rad CFX real-time PCR system. The expression level of the 16S rRNA gene was used as internal reference. Each reaction was repeated at least three times. The primers used for RT-qPCR are listed in Table 1.

Effect of culture conditions on Methylobacterium sp. XJLW fermentation in Erlenmeyer flasks

Firstly, to choose a better initial medium, the cell growth and target product biosynthesis of Methylobacterium sp. XJLW cultivated in M3 and MSM were evaluated. A one-factor-at-a-time design was employed to analyze the effects of methanol concentration, ammonium sulfate concentration, fermentation temperature, initial medium pH, different types of oxygen carriers, and osmotic pressure (regulated by adding different concentrations of sodium chloride) on Methylobacterium sp. XJLW growth and target metabolite biosynthesis.
The value ranges of the above-mentioned culture condition variables are listed in Table 2.

Cultivation of Methylobacterium sp. XJLW in a bench bioreactor using a fed-batch strategy

After investigation of the fermentation conditions in Erlenmeyer flasks, a fed-batch fermentation was carried out in a 5-L stirred tank reactor (Biostat-Bplus-5L, B. Braun, Germany) with a working volume of 3.0 L, at 30 °C, 400 rpm, and pH 5.5 (controlled using aqueous NH₄OH solution), with a dissolved oxygen concentration above 20% of air saturation. Firstly, the basal salts of the optimal medium were dissolved in 2670 mL of ddH₂O and autoclaved in the bioreactor. To start the fermentation, 30 mL of methanol and 300 mL of inoculum suspension (OD₆₀₀ = 3.0) were added to the bioreactor by peristaltic pump. Filter-sterilized air was the oxygen source, supplied at a flow rate of 3 vvm. After the initially added methanol was completely exhausted, indicated by the dissolved oxygen level rising to 100%, additional methanol (mixed with 1% trace element solution) was pulse-fed into the reactor, regulated by the dissolved oxygen monitor, to further increase the cell density. At the same time, pH was kept at a stable level of about 5.7 by adding NH₄OH solution, which simultaneously supplied nitrogen. If needed, an increased stirring speed was also employed to raise the dissolved oxygen level. The whole fermentation period was about 5 to 7 days.

Separation of CoQ10 and PHB

After fermentation, cell biomass was separated by centrifugation at 8000 × g and 4 °C for 10 min (Biofuge Stratos Sorvall, Thermo, Germany), and 20 mL of alcohol was added to resuspend the cell pellets. Subsequently, the cell suspension was subjected to sonication in an ultrasonicator (Scientz-IID, China) at 500 W for 12 min with a pulse of 15 s on and 10 s off. After cell disruption, the suspension was centrifuged at 8000 × g and 4 °C for 10 min; the supernatant was sampled for CoQ10 analysis, while the precipitate was dried in a 45 °C oven to constant weight before PHB extraction. For PHB extraction, 10 mL of chloroform was added to a screw-capped digestion tube containing less than 100 mg of the dried disrupted cells for 1 h of extraction at 60 °C. The PHB extract was then separated by vacuum filtration and air-dried to give crude PHB, which was further purified by washing twice with an acetone-methanol mixture (volume ratio 7:2) to remove pigment. The purified PHB was obtained after drying at 45 °C.

Assay methods

Methanol was analyzed by gas chromatography (GC; Shimadzu-2010, Japan) equipped with a flame ionization detector (FID) and an elastic quartz capillary column (AT-FFAP). Chromatographic conditions: injection temperature 200 °C, detector temperature 250 °C; temperature programme: 70 °C held for 4 min, then heating to 150 °C at 50 °C per min and holding for 1 min. The carrier gas was nitrogen, with a column flow of 3.0 mL/min, a split ratio of 10:1, and an injection volume of 1 μL. Cell biomass was measured from the optical density at 600 nm using a UV-1800 spectrophotometer (Shimadzu, Japan).
Firstly, 1 mL culture samples were centrifuged at 6000 × g for 10 min at 4 °C; the cells were washed twice in distilled water, centrifuged under the same conditions, and finally diluted with distilled water into the linear range of the standard curve relating dry cell weight (DCW) to absorbance at 600 nm (OD₆₀₀). OD₆₀₀ was then measured, and DCW was calculated from this standard curve for Methylobacterium sp. XJLW. Each sample was analyzed in triplicate. PHB content was analyzed according to the method of Pal et al. (2009). Firstly, a 10 mg PHB sample was converted into crotonic acid by treatment with 10 mL of concentrated H₂SO₄ in a boiling water bath for 30 min; the tube was then cooled naturally to room temperature, and the absorbance was measured at 235 nm on the UV-1800 spectrophotometer (Shimadzu, Japan) with concentrated H₂SO₄ as the blank. The standard curve was constructed by the same method. The chemical structure of PHB was identified by gas chromatography-mass spectrometry (GC-MS), nuclear magnetic resonance (NMR) spectroscopy, and Fourier transform infrared (FT-IR) spectroscopy, respectively. To determine the polymer composition, the purified PHB was dissolved in chloroform (5 mg PHB mL⁻¹), and 1 μL was injected into a GC-MS instrument (Agilent Technologies 7890A GC System, USA; Bruker Esquire 6000 MS instrument, Germany). The column and temperature profile used for GC analysis were as follows: capillary column (HP5MS), 30 m × 0.25 mm, film thickness 0.25 μm; injection temperature 250 °C, ion source temperature 200 °C, and transfer line temperature 275 °C; oven temperature programme: initially 60 °C, then heating to 250 °C at 20 °C per min and holding for 15.5 min. The carrier gas was helium, with a column flow of 40 cm/s. Proton (¹H) and carbon (¹³C) NMR spectra were recorded on an Avance III spectrometer (Bruker, Switzerland) at 400 MHz and 100 MHz, respectively, under the following experimental conditions: 0.5% (w/v) polymer sample dissolved in spectroscopic-grade deuterochloroform (CDCl₃), with tetramethylsilane (TMS) as internal reference. The chemical shift scale was in parts per million (ppm). For FT-IR analysis, 2 mg of polymer sample was thoroughly mixed with 100 mg of spectroscopic-grade KBr using a mortar and pestle. From this mixture, 15 mg was used for making KBr pellets. The pellets were kept in an oven at 100 °C for 4 h to remove atmospheric moisture. The IR spectrum of the polymer sample was recorded with a Nicolet 6700 FT-IR spectrophotometer (Thermo, USA) in the range 4000-600 cm⁻¹.

Statistical analyses

The mean and standard deviation were calculated from triplicate samples using Microsoft Excel 2013.

Results

Methylobacterium sp. XJLW can produce PHB and CoQ10 simultaneously

Transmission electron microscopy (Fig. 2) showed many white particles with high refraction inside Methylobacterium sp. XJLW cells, occupying nearly half or more of the cell volume. This suggested a high content of PHAs inside the cells. After isolation and purification, the exact structure of the PHAs from Methylobacterium sp. XJLW was identified via GC-MS, NMR, and IR analyses, respectively. Fig. S1A shows the GC spectra of PHA extracts of Methylobacterium sp.
XJLW, and the 7.59-min peak corresponded to the hydrolyzed product of PHB according to standards. To obtain the exact structure of this polyester, a further MS analysis of the 7.59-min peak fragment was carried out; the spectra are shown in Fig. S1B. The 101.0 m/z molecular fragment was identical to 3-hydroxybutyrate, while the 85.0 m/z fragment represented butyrate. The ¹H- and ¹³C-NMR spectra of PHB standards and of the PHAs produced by Methylobacterium sp. XJLW are shown in Fig. S2. The ¹H-NMR spectra show three signals in the spectra of both polymer samples, corresponding to the methyl group (CH₃ at 1.28 ppm), methylene group (CH₂ at 2.61 ppm), and methine group (CH at 5.26 ppm), respectively (Fig. S2A). In the ¹³C-NMR spectra, the methyl (CH₃), methylene (CH₂), methine (CH), and carbonyl (C=O) groups are found at 19.8, 40.8, 67.6, and 169.2 ppm, respectively (Fig. S2B). The chemical shifts of both ¹H- and ¹³C-NMR of the PHAs from Methylobacterium sp. XJLW are in good agreement with the data for PHB standards. The IR spectra of PHB standards and of the PHAs from Methylobacterium sp. XJLW are shown in Fig. S3. They show three intense absorption bands at about 1280-1291 cm⁻¹, 1725 cm⁻¹, and 2925-2978 cm⁻¹, corresponding to C-O, C=O, and C-H stretching, respectively. The absorption band at 3436.8 cm⁻¹ indicates a small amount of O-H in both the PHAs from Methylobacterium sp. XJLW and the PHB standards, attributable to terminal hydroxyl groups. Meanwhile, the great similarity of the IR spectral characteristics indicates that the chemical group composition of the PHAs from Methylobacterium sp. XJLW is the same as that of the PHB standards. All the above evidence demonstrates that PHB is produced by Methylobacterium sp. XJLW. LC-MS results for the CoQ10 standard and the sample extracted from Methylobacterium sp. XJLW cells are shown in Fig. S4. The peak of CoQ10 in the sample appeared at the same retention time as that of the CoQ10 standard. Although the target peak area of the sample appeared lower than that of other unidentified peaks, the CoQ10 sample extracted from Methylobacterium sp. XJLW exhibited a molecular ion peak (m/z 885.6) identical to that of the CoQ10 standard. This result suggests that Methylobacterium sp. XJLW has the ability to biosynthesize CoQ10. However, further purification of the CoQ10 sample and enhanced production of CoQ10 in Methylobacterium sp. XJLW are required in future research.

Higher biomass, PHB, and CoQ10 yields in M3 with methanol than with glucose

As shown in Fig. 3, Methylobacterium sp. XJLW exhibited much higher biomass and yields of both PHB and CoQ10 when incubated in M3 medium supplemented with methanol rather than glucose as sole carbon source. Interestingly, the expression levels of some genes coding for key enzymes in the PHB and CoQ10 biosynthesis pathways of Methylobacterium sp. XJLW were also significantly higher in methanol medium than in glucose medium (Fig. 4). The expression levels of many more genes were also compared based on the RNA-seq results (Tables 3 and 4). In addition, the quantitative RT-qPCR data for selected genes involved in the PHB synthesis pathway indicated that PHB may be synthesized by different pathways or be regulated by different isoenzymes under different substrates or cultivation conditions.
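As an illustration of the differential-expression screening used in the RNA-seq comparison (|log₂ fold change| ≥ 1 and p ≤ 0.05), the following short sketch (Python, not part of the original paper; the file and column names are hypothetical) flags candidate genes from two FPKM tables:

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per gene with FPKM values for the two conditions
# and a p value from the upstream differential-expression test.
df = pd.read_csv("fpkm_methanol_vs_glucose.csv")
# expected columns: gene, fpkm_methanol, fpkm_glucose, p_value

eps = 1e-6  # guard against division by zero for unexpressed genes
df["log2_fc"] = np.log2((df["fpkm_methanol"] + eps) / (df["fpkm_glucose"] + eps))

# Screening criteria stated in the paper: |log2 FC| >= 1 and p <= 0.05
deg = df[(df["log2_fc"].abs() >= 1) & (df["p_value"] <= 0.05)]
up = deg[deg["log2_fc"] > 0]    # upregulated in methanol ("+" genes in Fig. 1)
down = deg[deg["log2_fc"] < 0]  # downregulated in methanol ("-" genes in Fig. 1)

print(f"{len(up)} genes up, {len(down)} genes down in methanol vs glucose")
```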
In the RT-qPCR analysis, phaC-3, encoding poly(R)-hydroxyalkanoic acid synthase (class III), was chosen for analysis; the results showed that phaC-3 was significantly upregulated by methanol, consistent with the RNA-seq results. However, phaC-1, catalyzing the same step in the pathway, was downregulated by methanol, indicating that different isoenzymes were regulated by different factors. Meanwhile, a total of 5 acat genes, 3 paaH genes, 2 fadN genes, and 2 phaZ genes were found in the PHB synthesis pathway of Methylobacterium sp. XJLW, showing different responses to methanol (Table 4), which indicates a more complex regulatory system responsible for PHB production in Methylobacterium sp. XJLW. From genomic data mining, no genes encoding hydroxybutyrate-dimer hydrolase (EC 3.1.1.22) or hydroxymethylglutaryl-CoA synthase (EC 2.3.3.10) were found in Methylobacterium sp. XJLW, suggesting that PHB is mainly synthesized through the FadJ-catalyzed branch pathway. Moreover, in the CoQ10 synthesis pathway of Methylobacterium sp. XJLW, no gene encoding decaprenyl-diphosphate synthase (EC 2.5.1.91) was found in the genomic data, although LC-MS had clearly verified the production of CoQ10 by this strain. It is therefore very possible that another, as yet unannotated, gene or branch pathway is responsible for the biosynthesis of decaprenyl diphosphate, an important precursor of CoQ10, in Methylobacterium sp. XJLW.

Fig. 3 Different effects of glucose and methanol (same carbon atom amount) on strain XJLW cell growth and its yield of CoQ10 and PHB. Significant differences from the glucose group are indicated by * p < 0.05; ** p < 0.01.

Fig. 4 Effect of carbon source on the expression of key genes in the CoQ10 (a) and PHB (b) biosynthesis pathways via RT-qPCR. Significant differences from the glucose group are indicated by * p < 0.05; ** p < 0.01.

Effects of medium composition and cultivation conditions on cell growth and PHB and CoQ10 productivity at Erlenmeyer flask level

Both media M3 and MSM are recommended as suitable for cultivating methylotrophic strains (Bourque et al. 1995) with methanol as sole carbon and energy source. Thus, the growth behaviour of Methylobacterium sp. XJLW in M3 and MSM was evaluated in Erlenmeyer flasks. The results (Fig. 5a) showed that M3 medium was superior to MSM for cell growth, and 5 days was the best harvest time, with maximum dry cell density. Meanwhile, the PHB and CoQ10 production of Methylobacterium sp. XJLW in M3 and MSM was also evaluated. The results (Fig. 5b) showed that Methylobacterium sp. XJLW exhibited better PHB and CoQ10 biosynthesis capacity in M3 than in MSM. M3 was therefore selected as the initial medium for the optimization of Methylobacterium sp. XJLW fermentation in the following experiments. Among medium components, the carbon and nitrogen sources play a significant role in fermentation productivity according to previous reports (Wei et al. 2012; Mozumder et al. 2014). Thus, the effect of carbon and nitrogen sources also needed to be evaluated for the optimization of the Methylobacterium sp. XJLW fermentation process. In previous publications, methanol and ammonium sulfate had been shown to be suitable carbon and nitrogen sources for Methylobacterium (Bourque et al. 1995; Yezza et al. 2006). Therefore, the effects of different concentrations of methanol (Fig. 6a) and ammonium sulfate (Fig. 6b) on the PHB and CoQ10 productivity of Methylobacterium sp.
XJLW were evaluated in the present study. It was found that 7.918 g L⁻¹ methanol led to a maximal CoQ10 concentration of 1.26 mg L⁻¹, while the optimal biomass and PHB concentration were obtained with 11.877 g L⁻¹ methanol. This difference may result from the different biosynthesis pathways of CoQ10 and PHB. To avoid cell intoxication by high methanol concentrations, 7.918 g L⁻¹ methanol was selected as the optimal carbon source concentration in further research. However, no significant increase of PHB or CoQ10 yield was detected when the ammonium sulfate concentration was varied from 0.5 g L⁻¹ to 1.5 g L⁻¹; thus 0.5 g L⁻¹ was selected for the following study. Besides medium components, culture conditions such as temperature and initial pH also play important roles in microbial fermentation. Thus, the effects of culture temperature and initial pH on Methylobacterium sp. XJLW fermentation were evaluated in Erlenmeyer flasks. The results (Fig. 6c and d) showed that the best culture temperature was 30 °C, and the optimal initial pH was 7.0. As the fermentation broth may become acidic because of the carbon metabolism of Methylobacterium sp. XJLW, feeding ammonium hydroxide to neutralize the excess formic acid derived from methanol metabolism is very important. The optimal initial pH and culture temperature were therefore set at 7.0 and 30 °C, respectively. Owing to the poor solubility of oxygen in aqueous medium, dissolved oxygen (DO) supply is another key factor affecting productivity in aerobic fermentation processes, and one of the most effective strategies for improving oxygen mass transfer is adding an oxygen carrier to the aerobic fermentation system (Lai et al. 2002; Xia 2013; Vieira et al. 2015). In this study, three different oxygen carriers were tested to enhance the oxygen supply: two surfactants (Triton X-100 and Tween 80) and hydrogen peroxide. Compared with the control group, 0.1% (v/v) of each oxygen carrier was added to the Methylobacterium sp. XJLW fermentation system. The results (Fig. 6e) showed that Tween 80 had positive effects, especially on the levels of CoQ10 and PHB biosynthesis, whereas the productivities of the Triton X-100 and hydrogen peroxide groups were both lower than those of the control group. Excessive emulsification by Triton X-100 and denaturation of membrane proteins by hydrogen peroxide may both inhibit the normal metabolism of Methylobacterium sp. XJLW. Tween 80, a non-ionic surfactant, can improve cell membrane permeability and the specific surface area of oxygen at appropriate concentrations, so it may also promote intracellular metabolite biosynthesis. Based on these data, 0.1% (v/v) Tween 80 was chosen as the best oxygen carrier in the following research. As an important environmental factor, osmotic pressure may affect mass transfer and the accumulation of metabolites in many microorganisms (Xu et al. 2013; Mozumder et al. 2015), so the effects of osmotic pressure on Methylobacterium sp. XJLW metabolism were examined by adding different concentrations of sodium chloride. The results (Fig. 6f) showed that the group with 1.0 g L⁻¹ sodium chloride exhibited the highest cell yield and target product concentrations, so this regulation strategy was chosen in the subsequent research. Based on the above, the optimal medium and culture conditions for CoQ10 and PHB co-production by Methylobacterium sp.
XJLW fermentation were M3 medium containing 7.918 g L⁻¹ methanol, 0.5 g L⁻¹ ammonium sulfate, 0.1% (v/v) Tween 80, and 1.0 g L⁻¹ sodium chloride, at a fermentation temperature of 30 °C and an initial medium pH of 7.0.

Fig. 5 Cell growth (a) and PHB/CoQ10 production (b) of XJLW in M3 and MSM, respectively. Significant differences from the MSM group are indicated by * p < 0.05; ** p < 0.01.

Fig. 6 Effects of methanol concentration (a), ammonium sulfate concentration (b), fermentation temperature (c), initial medium pH (d), different oxygen carriers (e), and sodium chloride concentration (f) on XJLW biomass and PHB and CoQ10 biosynthesis. Significant differences from the selected group (the 7.918 g L⁻¹ methanol group for a, the 0.5 g L⁻¹ (NH₄)₂SO₄ group for b, the 30 °C group for c, the pH 7.0 group for d, the Tween 80 group for e, and the 1.0 g L⁻¹ sodium chloride group for f, respectively) are indicated by * p < 0.05; ** p < 0.01.

Methylobacterium sp. XJLW fermentation in a 5-L fermenter

Based on the above results, a methanol feeding strategy coupled with pH and dissolved oxygen (DO) control was employed in a 5-L stirred tank reactor for high-density fermentation. During the whole cultivation period, DO, stirring speed, and pH were captured by online monitors, and the acquisition curves are shown in Fig. 7a. Meanwhile, the changes in methanol concentration, biomass, and PHB and CoQ10 productivity during the whole process are shown in Fig. 7b. During the first 36 h, the consumption of the initially added methanol accelerated gradually until DO rebounded to 100%, meaning that there was no longer enough methanol for cell growth in the medium. From then on, methanol was pulse-fed to ensure sufficient carbon source in the fermentation system without toxicity from excess methanol. With increasing cell density, the limited dissolved oxygen became another key factor affecting cell growth. Thus, the stirring speed was also gradually increased to keep the DO level between 10 and 50%. During the whole fed-batch process, the pH of the broth was controlled at approximately 5.7 rather than 7.0, because excessive ammonium hydroxide used for pH adjustment may inhibit PHB accumulation according to a previous report (Pieja et al. 2012). After 106 h, when methanol accumulation occurred, methanol feeding was stopped, and DO quickly rose to 100%, indicating that the respiration of XJLW cells weakened sharply, with little methanol consumption in the final period. Only low levels of PHB and CoQ10 were detected during the first 36 h, suggesting that the initially added methanol was almost completely used for cell respiration and growth. Later, along with substrate feeding, the concentrations of biomass, PHB, and CoQ10 increased with the same trend, implying that both PHB and CoQ10 were biosynthesized in association with cell growth. During the whole process, the total methanol consumed was 830 mL, together with 113.05 mL of ammonium hydroxide fed. Finally, a maximum DCW of 46.31 g L⁻¹ was obtained, and the highest yields of PHB and CoQ10 reached 6.94 g L⁻¹ and 22.28 mg L⁻¹, respectively. Thus, the final contents of PHB and CoQ10 in this fed-batch fermentation system reached 0.15 g g⁻¹ DCW and 0.48 mg g⁻¹ DCW, respectively. These results suggest that the methanol feeding strategy, coupled with DO control and pH adjustment by ammonium hydroxide, is an effective method to increase cell density and productivity in the Methylobacterium sp.
XJLW submerged fermentation system.

Fig. 7 Online parameter acquisition curves (a) and CoQ10 and PHB fermentation of XJLW via the fed-batch process (b) in a 5-L stirred tank reactor. The arrow demarcates the feeding events.

Discussion

As carbon storage in microbial cells, PHAs are usually synthesized and accumulated under imbalanced growth conditions created by limiting a nutritional element such as nitrogen, phosphate, or oxygen (Mozumder et al. 2014). PHAs can accumulate in membrane-enclosed inclusions in many bacteria, at contents of up to 80% of the dry cell weight (Khosravi-Darani et al. 2013). Thus, if a strain has the potential for PHA production, many polymer particles will be visible inside the cell. In this study, the cell morphology of Methylobacterium sp. XJLW under transmission electron microscopy (TEM) likewise showed a high content of polymer particles (Fig. 2), similar to most PHA-producing strains. For methylotrophs cultivated with methanol as sole carbon and energy source, both M3 and MSM are recommended as suitable media (Bourque et al. 1995). However, M3 medium was superior to MSM for Methylobacterium sp. XJLW cell growth. Among medium components, the carbon and nitrogen sources usually play a significant role in fermentation productivity according to previous reports (Wei et al. 2012; Mozumder et al. 2014). For Methylobacterium strains, methanol and ammonium sulfate have been shown to be suitable carbon and nitrogen sources (Bourque et al. 1995; Yezza et al. 2006). In the present study, the methanol-utilizing strain Methylobacterium sp. XJLW, which was isolated as a formaldehyde-degrading strain in our previous study (Qiu et al. 2014), also grew better in M3 than in MSM containing methanol as sole carbon source (Fig. 5). To develop its potential applications in the biotechnological industry, PHB and CoQ10 were selected as representatives of biopolymers and quinone metabolites, respectively, to evaluate the potential for their co-production via a methanol-based culture process of Methylobacterium sp. XJLW. An increasing number of PHB-producing strains have been reported, including Methylobacterium extorquens (Ueda et al. 1992; Bourque et al. 1995), Paracoccus denitrificans (Ueda et al. 1992; Kalaiyezhini and Ramachandran 2015), Alcaligenes latus (Yamane et al. 1996), Methylobacterium sp. ZP24 (Nath et al. 2008), Bacillus thuringiensis (Pal et al. 2009), Cupriavidus necator (Mozumder et al. 2015), Halomonas campaniensis, and Bacillus drentensis (Gamez-Perez et al. 2020). After process and culture condition optimization, PHB yields of more than 100 g L⁻¹ from methanol have been reached via high-cell-density fed-batch culture of methylotrophic bacteria (Ueda et al. 1992; Yamane et al. 1996). On this basis, methylotrophic bacteria appear to be promising industrial strains for PHB production via methanol-based biotechnology. Meanwhile, CoQ10 is another important compound, widely used as a potent antioxidative dietary supplement in treating cardiovascular disease, cancer, periodontal disease, and hypertension (Hofer et al. 2010; Lu et al. 2013). A number of strains capable of producing CoQ10 have also been reported. However, no publication was found on CoQ10 synthesis in methylotrophic bacteria. In this study, both metabolic pathways of PHB and CoQ10 biosynthesis were found to exist in Methylobacterium sp. XJLW based on the genomic and comparative transcriptomic information (Fig. 1).
RT-qPCR results also showed that the transcription levels of key genes in both pathways in response to methanol were significantly higher than those in response to glucose (Fig. 4). Correspondingly, Methylobacterium sp. XJLW can produce PHB and CoQ10 simultaneously, with higher yields using methanol than using glucose as sole carbon and energy source (Fig. 3). To our knowledge, this is the first report of simultaneous PHB and CoQ10 production by a methylotrophic bacterium. After optimization of the medium composition and culture conditions for PHB and CoQ10 biosynthesis, a cell density of 46.31 g L⁻¹ DCW, a PHB concentration of 6.94 g L⁻¹, and a CoQ10 concentration of 22.28 mg L⁻¹ were achieved in a 5-L bioreactor, which were 30-fold, 6-fold, and 17-fold higher, respectively, than those in Erlenmeyer flasks. Although the CoQ10 content of 0.48 mg g⁻¹ DCW was lower than that of previously reported strains such as Rhodobacter sphaeroides (2.01 mg g⁻¹ DCW) (Kalaiyezhini and Ramachandran 2015), the volumetric yield of 22.3 mg L⁻¹ for Methylobacterium sp. XJLW was higher than that of several previously reported strains, including a mutant strain of Rhodobacter sphaeroides (14.12 mg L⁻¹) (Bule and Singhal 2011), Paracoccus denitrificans NRRL B-3785 (10.81 mg L⁻¹) (Tian et al. 2010), and Sphingomonas sp. ZUTEO3 (1.14 mg L⁻¹) (Zhong et al. 2009). Meanwhile, Methylobacterium sp. XJLW accumulated PHB at a level of 0.15 g g⁻¹ DCW. This PHB content was lower than that of several reported strains, such as Methylobacterium extorquens DSMZ 1340 (0.62 g g⁻¹ DCW) (Mokhtari-Hosseini et al. 2009) and Methylobacterium extorquens ATCC 55366 (0.46 g g⁻¹ DCW) (Bourque et al. 1995), but the volumetric PHB yield of Methylobacterium sp. XJLW in this study (6.94 g L⁻¹) was higher than that of Methylobacterium sp. ZP24 (3.91 g L⁻¹) (Nath et al. 2008).

Conclusions

In summary, it is feasible to develop a process for the co-production of two valuable metabolites by Methylobacterium sp. XJLW from methanol. However, compared with the cost of chemical polymers and the productivity of high-yield PHB or CoQ10 strains, it is still necessary to further optimize the fermentation process, and to genetically modify the strain's pathways, for enhanced simultaneous production of PHB and CoQ10 by Methylobacterium sp. XJLW. This study also presents a potential strategy for the efficient co-production of other high-value metabolites using methanol-based bioprocesses.
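The DO-triggered methanol pulse-feeding scheme used in the 5-L fed-batch runs can be summarized as a simple control loop. The sketch below (Python, not part of the original paper; the device interface, pulse volume, and all thresholds other than the reported DO rebound and pH of about 5.7 are hypothetical) illustrates the logic:

```python
# Hypothetical thresholds; the paper reports a DO spike toward 100% as the
# signal of methanol exhaustion, and pH held near 5.7 with NH4OH feeding.
DO_FEED_TRIGGER = 95.0          # % air saturation: carbon exhausted, pulse-feed
DO_LOW, DO_HIGH = 10.0, 50.0    # target DO band maintained via stirring speed
PH_SETPOINT = 5.7
PULSE_ML = 10.0                 # hypothetical methanol pulse (with 1% trace elements)

def control_step(reactor):
    """One pass of the DO/pH control loop during the fed-batch phase.

    `reactor` is a hypothetical interface exposing read_do(), read_ph(),
    pump_methanol(ml), pump_nh4oh(ml), set_stirrer(rpm) and stirrer_rpm."""
    do, ph = reactor.read_do(), reactor.read_ph()

    if do >= DO_FEED_TRIGGER:           # DO spike => methanol exhausted
        reactor.pump_methanol(PULSE_ML)
    elif do < DO_LOW:                   # oxygen limitation => stir faster
        reactor.set_stirrer(reactor.stirrer_rpm + 25)
    elif do > DO_HIGH:
        reactor.set_stirrer(max(400, reactor.stirrer_rpm - 25))

    if ph < PH_SETPOINT:                # NH4OH corrects pH and supplies nitrogen
        reactor.pump_nh4oh(1.0)

# During fermentation, control_step(reactor) would be called every few seconds.
```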
Cost Effectiveness and Resource Allocation

Background: This study aimed at providing information for priority setting in the health care sector of Zimbabwe, as well as assessing the efficiency of resource use. A general approach proposed by the World Bank was followed, involving the estimation of the burden of disease measured in Disability-Adjusted Life Years (DALYs) and the calculation of cost-effectiveness ratios for a large number of health interventions.

Methods: Costs per DALY were estimated for a total of 65 health interventions. Costing data were collected through visits to health centres, hospitals and vertical programmes, where a combination of step-down and micro-costing was applied. Effectiveness of health interventions was estimated based on published information on efficacy, adjusted for factors such as coverage and compliance.

Results: Very cost-effective interventions were available for the major health problems. Using estimates of the burden of disease, the present paper developed packages of health interventions based on the estimated cost-effectiveness ratios. These packages could avert a quarter of the burden of disease at total costs corresponding to one tenth of the public health budget in the financial year 1997/98. In general, the analyses suggested that there was substantial potential for improving the efficiency of resource use in the public health care sector.

Discussion: The proposed World Bank approach applied to Zimbabwe was extremely data demanding and required extensive data collection in the field and substantial human resources. The most important limitation of the study was the scarcity of evidence on the effectiveness of health interventions, so that a range of important health interventions could not be included in the cost-effectiveness analysis. This and other limitations could in principle be overcome if more research resources were available.

Conclusion: The present study showed that it was feasible to conduct cost-effectiveness analyses for a large number of health interventions in a developing country like Zimbabwe using a consistent methodology.

Published: 28 July 2008. Cost Effectiveness and Resource Allocation 2008, 6:14. doi:10.1186/1478-7547-6-14. Received: 14 December 2007; Accepted: 28 July 2008.

Background

There is an increasing number of cost-effectiveness studies aiming at analysing the allocative efficiency of the health care sector. These analyses incorporate costs and effects of interventions directed at different health problems and different patient groups, and often include a large number of interventions. Examples from developed countries include analyses performed in the United Kingdom [1], Australia [2] and in Oregon State in the USA [3], while a large database of cost-effectiveness analyses from all over the world is maintained by an American university [4].
For developing countries, the World Bank health sector priorities review [5][6][7] assessed the costs and effectiveness of health interventions directed at major health problems for low- and middle-income regions of the world. In a similar effort, the World Health Organization estimated costs per DALY for a wide range of health interventions for 14 epidemiologic subregions and in addition developed tools enabling individual countries to perform similar cost-effectiveness analyses based on local estimates of, e.g., disease burden and unit costs of various health services [8][9][10]. At an individual country level, a list of costs per discounted life year gained for a large number of preventive and curative health interventions was developed for Guinea [11]. While such cost-effectiveness analyses aiming at assessing allocative efficiency may be very useful for setting priorities in the health care sector of a given country, several features of this technique have been identified as being problematic. Since these analyses often include a large number of health interventions, these exercises are extremely data intensive in terms of estimating the required information on costs and effectiveness. Consequently, simplifying assumptions and shortcut methods have been applied in order to make the data collection task more manageable. For instance, it is often assumed that health interventions are produced under constant returns to scale, so that the costs per health output do not vary with the scale at which the intervention is undertaken, thus making it necessary only to estimate a single point on the cost function [6,12]. It is also common practice to exclude important cost categories such as costs borne by patients [13]. Further, required information may be predicted using statistical models rather than actual data collection [9]. A major concern is the severe lack of information on the effectiveness of health interventions [14]. Finally, concerns have been raised over the relevance and applicability to priority setting in a particular country of the published allocative cost-effectiveness analyses, since these have often been developed as regional estimates [15]. At present, little is known about the relative cost-effectiveness of health services offered in the Zimbabwean public health care sector. Such information may however be useful for assessing the efficiency of resources used in a situation of dwindling health care funds and steeply increasing demand. The main objective of this paper is therefore to provide input into an analysis of identifying ways of improving the allocative efficiency of resource utilisation in the health care sector of Zimbabwe. The general research strategy for achieving this objective is inspired by the approach previously utilised by the World Bank [5,6,16,17]. As a first step, this approach entails the estimation of the level of ill-health of the Zimbabwean population in 1997 using DALYs as the societal health outcome measure. Results of this component have been reported elsewhere [18], and key figures describing the burden of disease by cause in 1997 have been reproduced in Annex 1 of the present paper. The second step involves the estimation of costs per DALY gained for a large number of health interventions, followed by the development of essential packages of health interventions which address large amounts of ill-health at low costs. The present paper focuses on the second step.
In addition, having finalised this kind of analysis in Zimbabwe, this study also provides an opportunity to discuss the feasibility of conducting this very data-intensive World Bank approach in a developing country setting. The context of the health care system At the time of this study, the disease pattern in Zimbabwe was heavily dominated by communicable, maternal, perinatal and nutritional conditions [19], similar to other countries in Sub-Saharan Africa, although Zimbabwe was plagued by an unusually large disease burden due to HIV (Annex 1). The health of the nation has traditionally been a high priority, and large investments in the public health care sector in the 1980s led to impressive improvements in key health indicators, although the years following 1990 saw a reversal in most health indicators [20], a development further exacerbated more recently by decreasing GDP, dwindling health care funds and massive emigration of health sector personnel [21]. The health care sector is a highly heterogeneous section of the economy. Provision of health care services is offered by government, church missions and other NGOs, industries and mines, private practitioners and traditional healers. Measured by the number of health facilities, government is the single most important provider [22]. Private practitioners and hospitals are relatively abundant in larger cities, where these providers are able to attract large proportions of the available health personnel. The Government of Zimbabwe has succeeded in organising its own institutions as well as church mission facilities and some of the private sector facilities into a four-tiered system of health care service delivery. Health centres staffed by qualified nurses are the first level, followed at the next levels by district, provincial and central hospitals, where hospital services of increasing complexity are offered, requiring more specialised personnel and equipment. The head office of the Ministry of Health and Child Welfare constitutes the highest level of the public health care sector, and it is the main actor in terms of health policy making and development. For instance, the head office is responsible for the allocation of all government health care funds among health facilities as well as for steering important processes such as the Zimbabwe Essential Drugs Action Programme (ZEDAP), which results in a list specifying the most cost-effective drugs for a large number of health problems [23] that is used extensively by all health facilities in the country. Choice of interventions for the cost-effectiveness analyses Curative interventions for the present study included the treatment of common health problems at hospital inpatient and outpatient departments as well as health centres. These interventions covered both single treatment episodes and longer-term management of chronic conditions. Preventive interventions included five vertical activities: residual house spraying to prevent malaria, immunisation of children (measles, polio, tuberculosis, diphtheria, pertussis and tetanus), surveillance and targeted supplementary feeding of wasted children, HIV prevention through improved access to treatment of sexually transmitted infections (STIs), and health promotion of personal and domestic hygiene in order to decrease the incidence of diarrhoeal diseases.
Cost data collection and unit costs estimation at selected study sites In order to estimate the costs of individual curative and preventive health interventions, a number of public health providers were visited for the collection of the necessary cost data. Study sites were randomly chosen from all over the country. With respect to curative health interventions, six health centres out of a total of around 1,200 were selected for the cost analysis. Health centres offered outpatient services and selected preventive activities such as immunisation. Five district level hospitals, including two mission hospitals, from a total of 130 hospitals were sampled for the costing of inpatient services, surgical procedures and outpatient services. Finally, two provincial hospitals (from a total of 8) were randomly selected; these offered similar services to district hospitals but were also able to provide more specialised services. The highest level, central hospitals, was excluded from the costing analysis. Preventive interventions were organised in a vertical fashion involving provincial health offices and district hospitals as well as services performed by health facilities (e.g. vaccinations at health centres and hospitals). Two provinces out of a total of eight were randomly chosen and two districts were randomly selected within each province (a total of four districts). Finally, the Ministry of Health Headquarters and two provincial health offices were visited to capture additional programme costs of curative and preventive interventions such as central purchasing of drugs and high-level administrative personnel [10]. The costing perspective taken for this study was the health provider's view (Ministry of Health and Child Welfare), since the objective of the present cost-effectiveness analysis was to determine how the largest slice of the burden of disease could be cut using a given government budget [24]. Activities at each study site incorporated the identification, measurement and subsequent valuation of resources required to offer health services. Government accounting systems provided at each study site the level of actual, recurrent expenditure by category, including for example salaries by type of personnel, stationery, electricity, maintenance and drugs. With respect to capital inputs at each study site, a quantity surveyor estimated the present-day construction costs per square metre by type of office or department. Further, a list of available equipment and furniture was developed and subsequently valued using market prices. From these replacement costs of buildings, equipment and furniture, an annual equivalent was calculated using the annuitization method [24,25], assuming a real discount rate of 3% and expected life spans of 30, 7 and 10 years for the mentioned capital inputs, respectively. These costs by category were at each study site allocated to the health interventions selected for this study. This was done by applying the standard step-down costing methodology [24,26], consisting initially of categorising activities (in practice wards and departments) in a study site into a hierarchical system with the final product (such as patient care) at the lowest level and with support and overhead activities at successively higher levels. Subsequently, the aggregate costs by category were allocated to final activities in a step-wise fashion using simultaneous equation techniques [24, Ch. 4] and the development of allocation criteria reflecting actual resource use.
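As an illustration of the annuitization step, the sketch below applies the standard annuity formula at the study's 3% real discount rate and the stated life spans; the replacement-cost figures are hypothetical, not taken from the study:

```python
# Annual equivalent cost of a capital input: replacement cost spread over its
# expected life span using the annuity factor at a real discount rate r.
def annual_equivalent(replacement_cost: float, life_years: int, r: float = 0.03) -> float:
    annuity_factor = (1 - (1 + r) ** -life_years) / r
    return replacement_cost / annuity_factor

# Hypothetical replacement costs with the life spans used in the study
# (30 years for buildings, 7 for equipment, 10 for furniture).
print(annual_equivalent(1_000_000, 30))  # building
print(annual_equivalent(50_000, 7))      # equipment
print(annual_equivalent(20_000, 10))     # furniture
```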
At the end of the standard step-down costing procedure, all costs of a study site had been distributed to the final service departments, so that an average cost figure could be calculated by dividing each department's costs by the number of services it provided. Micro-costing techniques [27] were used to supplement the above information in order to obtain information on interventions against individual diseases. For instance, a review of a sample of inpatient notes was performed at hospitals in order to capture the treatment pattern of the most common health problems. With respect to the treatment of the less common health problems, official treatment guidelines were used [23]. Having finalised the study activities described above, unit costs of individual curative and preventive services were available for the study sites included. Costs of interventions at population level Unit costs of individual health interventions estimated from data collected at the study sites were utilised for calculating the total costs of offering each service for a population group as a whole. This was done to take account of the fact that costs and effects measured in DALYs averted depended on the age of onset of disease. The total costs of a specific curative health intervention were calculated for a hypothetical district population of 250,000 individuals in Zimbabwe with the same age and sex distribution and incidence of diseases as the country as a whole. The number of treatments for each disease was determined by incidence and the health-seeking behaviour of the population. Information on the incidence of diseases was drawn from a national study which provided estimates of new cases of disease by age and sex groups for the year 1997 [18]. In addition, the proportions of cases by disease likely to seek treatment were determined based on advice from clinical experts as well as the National Health Information System [19]. Using these two types of information, the total number of treatments by age and sex could be estimated for each disease under study. Subsequently, the total costs of a curative health intervention were estimated by multiplying this number with the relevant unit costs (equation (1) below), where C_j is the population-level costs of intervention j and U_j indicates the unit costs of curative health intervention j. In addition, I_as is the absolute, annual number of incident cases of a health problem (which may be treated by intervention j) in the population group of age a and sex s, while p_as is the proportion of incident cases seeking treatment in the same population group. Outpatient services were offered both at health centres and hospitals. It was assumed that 80% of all cases were treated at health centres and 20% at district hospital outpatient departments, corresponding to the actual health-seeking behaviour [19]. Some health problems required lifelong treatment, for instance insulin-dependent diabetes. In these cases, the specific cost figures estimated for a given length of time were recalculated to match the life expectancies at various ages of onset of the disease (equation (2) below), where c_j(t) is the annual cost at time t of health intervention j for a chronic condition, while T(as) indicates the life expectancy of an individual belonging to the population group of age a and sex s. Future costs were discounted using a real discount rate r of 3 percent.
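The two costing formulas themselves were lost in the text extraction; reconstructed from the surrounding prose, they plausibly take the following form (the symbol names I_as, p_as and c_j(t) are our choices, not necessarily the authors'):

```latex
% Eq. (1): population-level cost of curative intervention j
C_j = U_j \sum_{a,s} I_{as}\, p_{as}

% Eq. (2): chronic conditions -- annual costs summed over the remaining
% life expectancy T(as) and discounted at the real rate r = 0.03
C_j = \sum_{a,s} I_{as}\, p_{as} \sum_{t=0}^{T(as)} \frac{c_j(t)}{(1+r)^{t}}
```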
The primary preventive interventions incurred costs at district and provincial health offices and typically also at the level of health providers such as health centres and hospitals. The pattern of cost components for preventive interventions therefore followed the general form of equation (3) below, where D_j and P_j represent the overall costs related to preventive intervention j at the district and the particular district's share of the provincial office, respectively. In addition, U_j denotes the unit costs of preventive activities such as vaccinations or STI treatments performed at health centres and hospital outpatient departments. Finally, n_as is the absolute number of individuals in the population group of age a and sex s targeted for intervention j, with v_j denoting the percentage actually covered. Information on the number of individuals in each age and sex group in the study population could be obtained from the most recent census [28,29] and updated using estimates of population growth [30]. Coverage of the five preventive health interventions was established through discussions with the responsible staff in the four districts. For some activities such as immunisation, information on coverage was collected as part of a recent Demographic and Health Survey [31]. Estimation of effectiveness of interventions at population level The benefits of an intervention were measured as the reduction in the burden of disease (DALYs averted) as a result of the intervention. Following the Global Burden of Disease methodology [32][33][34], the burden of disease for an individual of sex s dying prematurely at age a, BOD_as, with life expectancy T(as) (or suffering from a disease episode starting at age a with length T(as)), could be calculated from equation (4) below, where W is a quality adjustment factor (disability weight) representing different levels of health [33,35: Annex 3]. The component Kte^(−βt) is an age-weighting curve of an inverted u-shape, so that the relative value of life years in young adulthood is higher than in other ages, while e^(−r(t−a)) is the discount factor using discount rate r = 0.03. Finally, rather than using actual life expectancies of the population under study, the DALY methodology employs long life expectancies from a low-mortality model life table (Coale-Demeny West Level 26 [36]). Life expectancies T(as) therefore depend on both age and sex. The benefit in terms of DALYs gained from a successful intervention j for a person of age a and sex s is calculated as the difference shown in equation (5) below, where BOD'_as is the burden of disease after a successful intervention. For instance, the number of DALYs gained for an individual dying prematurely at age a1 without treatment but postponing death until age a2 (a1 < a2) following an intervention can be calculated using equation (5). A detailed explanation, using worked examples, of how to calculate DALYs for cost-effectiveness analysis has been presented by Fox-Rushby and Hanson [37]. Effectiveness of health interventions in a real-world setting depends on a wide range of factors [11,38]. Four factors were identified for the present study as having an important influence on the effect of curative interventions: efficacy of individual drugs, diagnostic accuracy, appropriateness of the treatment prescribed and patient compliance.
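Again, the formulas were lost in extraction; a plausible reconstruction from the prose, with n_as and v_j as our own symbol choices, is:

```latex
% Eq. (3): cost of preventive intervention j (district, provincial share,
% and facility-level unit costs over the covered target population)
C_j = D_j + P_j + U_j \sum_{a,s} n_{as}\, v_j

% Eq. (4): burden of disease for onset at age a, sex s, duration T(as),
% with disability weight W, age weighting K t e^{-\beta t} and discounting
BOD_{as} = \int_{a}^{a+T(as)} W \, K t \, e^{-\beta t} \, e^{-r(t-a)} \, dt

% Eq. (5): DALYs gained from a successful intervention j
\Delta BOD_{as}^{\,j} = BOD_{as} - BOD'_{as}
```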
With respect to efficacy, sources of information for this measure by type of drug were mainly a World Bank review [5], Cochrane systematic reviews (such as for trachoma [39]) or articles identified through the Cochrane register of randomized controlled trials. Very little hard evidence from the Zimbabwean setting could be found on the other three factors, so estimates of these aspects were determined for each health problem based on discussions with clinical experts. In a fashion similar to that applied by Evans et al. [13], the effectiveness of a health intervention was estimated by reducing the efficacy of the relevant drug by a factor between 0 and 1. The benefits at population level in terms of DALYs averted of a specific curative health intervention j were subsequently calculated by applying these adjustment factors (equation (6); see the sketch below), where E_j, B_j, F_j and G_j are the efficacy of the drug prescribed, diagnostic accuracy, correct treatment and patient compliance, respectively, for curative intervention j, measured as percentages. Expressed in words, this equation estimates the number of individuals cured through treatment j by excluding ineffective services from the total number of individuals seeking treatment and translating the resulting health benefits into DALYs averted. A similar procedure was applied to preventive interventions, involving first determining the effect under ideal conditions followed by adjusting this to incorporate real-world conditions. Efficacy of malaria spraying was derived from a study in South Africa which compared the prevalence of malaria infection in sprayed and non-sprayed areas [40]. Similarly, efficacy estimates were derived for environmental health [41][42][43], food supplementation [44], vaccines [45,46] and STI syndromic management [47,48]. The number of DALYs averted at population level for a given preventive intervention j was calculated analogously (equation (7)), where E_j is the efficacy of the intervention under ideal circumstances, R_j is any necessary downward adjustment (less than perfect coverage) of efficacy, and I_as is the incidence of disease in different age and sex groups. Coverage of the five preventive health interventions was established through discussions with the responsible staff in the four districts included or, in the case of EPI, utilising the Demographic and Health Survey [31]. Calculation of cost-effectiveness ratios Having estimated the total costs and effectiveness of various health interventions, the cost-effectiveness ratio for intervention j, CER_j, was found as the ratio of costs to effects, where costs were estimated using equation (1), (2) or (3) and effects were estimated using (6) or (7). Development of essential health packages The selection of health interventions for essential health packages may be done by applying different sets of principles. According to the World Bank principles for developing health packages [16], desirable health interventions are those with low cost-effectiveness ratios which at the same time address important health problems. Another possible set of principles is a pure cost-effectiveness criterion [49]. This entails a process of first selecting the intervention with the lowest cost-effectiveness ratio and calculating the total costs of averting this health problem, then choosing the intervention with the second lowest cost-effectiveness ratio and calculating the total costs of averting that health problem, and so on until the budget is exhausted.
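A minimal computational sketch of the effectiveness adjustment (equation (6)) and the cost-effectiveness ratio as described in the prose; the function names and all figures below are hypothetical illustrations, not study estimates:

```python
# Effectiveness of a curative intervention = efficacy scaled down by
# diagnostic accuracy, correct treatment and compliance; CER = costs per DALY.
def dalys_averted(cases_treated: float, daly_gain_per_cure: float,
                  efficacy: float, accuracy: float,
                  correct_treatment: float, compliance: float) -> float:
    effectiveness = efficacy * accuracy * correct_treatment * compliance
    return cases_treated * effectiveness * daly_gain_per_cure

def cer(total_costs: float, dalys: float) -> float:
    return total_costs / dalys  # costs per DALY averted

averted = dalys_averted(cases_treated=10_000, daly_gain_per_cure=8.5,
                        efficacy=0.90, accuracy=0.80,
                        correct_treatment=0.85, compliance=0.75)
print(f"DALYs averted: {averted:.0f}, CER: Z${cer(2_500_000, averted):.0f} per DALY")
```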
Assuming that the cost-effectiveness ratios estimated for the health interventions of this study complied with the assumptions of perfect divisibility and constant returns to scale [50,51], the total costs and effects in terms of disease reduction of various sets of interventions could be estimated. Median cost-effectiveness ratios were utilised for each type of intervention. Estimates of the burden of disease by cause, which must be addressed by the selected interventions, were obtained from a previous national study [18] and reproduced in Annex 1. It was further assumed that Z$300 million was available for the essential packages. This corresponded to just below 10 percent of actual capital and recurrent expenditure at the national level in the financial year 1997/1998. Two additional restrictions were imposed. First, the majority of diseases could not be averted at a single level of the health system. For instance, if an intervention against pneumonia was selected to be part of the package, this health problem could not be fully averted by offering treatment through health centres; it would be necessary to offer hospital treatment as well. Second, it was assumed that at most 30 percent of the HIV burden could potentially be averted through the preventive intervention included in the study (STI treatment), to avoid the budget being exhausted by this single intervention. Sensitivity analysis The sensitivity of the cost-effectiveness ratios was assessed by varying important parameters and assumptions. Instead of the 3% discount rate utilised for the baseline calculations of cost-effectiveness ratios, these were recalculated using discount rates of 6 and 10%. Estimated time preferences with respect to life years vary considerably [52], although a recent empirical study in Tanzania suggested a time preference of a size similar to the range chosen above [53]. The size of the discount rate affected both the effects of health interventions, through the DALY formula, and the costs of health services. The long life expectancies from the chosen model life table were replaced by actual, period life expectancies as recommended in the recent World Bank health sector priorities review [12]. Much shorter, actual Zimbabwean life expectancies were estimated based on the population size and the number of deaths by age and sex obtained from the census of 1997 [30]. Furthermore, rather than the inverted u-shape of the age-weighting function suggested by the DALY methodology, an age-weighting function with an equal value of 1 on each life year was used, as also suggested by the World Bank health sector priorities review [12]. Some of the health facilities visited operated considerably below full capacity during the study year, thus pushing up the costs of services. For the sensitivity analysis, cost-effectiveness ratios were recalculated under the assumption that all health facilities were moderately and significantly better utilised (e.g. 80 and 95% bed occupancy rates in inpatient departments, respectively). Finally, assessing the robustness of the cost-effectiveness ratios to changes in the effectiveness of interventions was very important, since this was an area with little hard evidence. Therefore, cost-effectiveness ratios were recalculated assuming a lower effectiveness of individual health interventions than in the baseline calculations. Calculations were performed utilising effectiveness estimates that were 90, 70 and 50% of the baseline estimates.
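Under the stated assumptions of perfect divisibility and constant returns to scale, the pure cost-effectiveness selection rule reduces to a greedy ranking by cost per DALY against the fixed budget. A minimal sketch; the interventions and all numbers are hypothetical:

```python
# Greedy package construction: fund interventions in order of increasing
# cost per DALY until the budget is exhausted, funding the last one partially
# (perfect divisibility).
def build_package(interventions, budget):
    package, dalys_total = [], 0.0
    for name, ratio, dalys in sorted(interventions, key=lambda x: x[1]):
        cost = ratio * dalys                  # cost of fully averting this burden
        if cost > budget:                     # partial funding of the last item
            dalys, cost = budget / ratio, budget
        package.append((name, cost, dalys))
        dalys_total += dalys
        budget -= cost
        if budget <= 0:
            break
    return package, dalys_total

demo = [("STI treatment (HIV prevention)", 150.0, 400_000),
        ("Pneumonia treatment, health centre", 90.0, 250_000),
        ("Residual house spraying (malaria)", 200.0, 300_000)]
print(build_package(demo, budget=300_000_000))
```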
Results Combining the estimated costs and effects resulted in a list of costs per DALY averted for a large number of health interventions. Table 1 displays the range of costs per DALY of the health interventions included in this study. The specification of a range of costs for individual health interventions reflects the fact that the costs of individual treatments differed across the health facilities and the various preventive programmes due to such factors as varying treatment patterns for similar diseases, availability of resources, degree of capacity utilisation of health facilities and incidence of diseases. Among the interventions with the lowest cost-effectiveness ratios, curative treatment at health centres and hospital outpatient departments of pneumonia, severe diarrhoeal diseases, peptic ulcer, dysentery, malaria, trachoma, schistosomiasis haematobium and glaucoma were identified. In addition, curative treatments for meningococcal meningitis, pneumonia and malaria at district and provincial hospitals were also in the group of interventions with low costs per DALY. Preventive interventions with low costs per DALY included improved STI treatment to avert new HIV infections and residual house spraying to avoid malaria infection. In the middle range of the list of costs per DALY averted, fewer interventions based at the first level of the delivery system were found. Treatment of scabies at health centres and hospital outpatient departments was the only exception. With respect to district and provincial hospitals, the treatment of dysentery, peptic ulcer, the six-month tuberculosis course, complicated deliveries with minor or major surgery (caesarean section) and appendectomy were estimated to have costs per DALY averted in the middle range. Hygiene promotion in the community to prevent diarrhoea was also in the middle range, at costs per DALY averted of Z$1,011. The least cost-effective interventions on the list comprised only health facility-based curative interventions. Health centre and hospital outpatient department interventions incorporated treatment for urethral discharge in males, body ringworm, impetigo and tonsillitis. The reasons for the high costs of these interventions were that these conditions were mild, often self-resolving, and the treatment was in some cases not very effective due to the low efficacy of the drug recommended. At hospitals, treatment interventions aimed at insulin-dependent diabetes, prophylaxis to avert opportunistic infections in HIV/AIDS patients, hypertension, pelvic inflammatory disease and syphilis were estimated to have high costs per DALY. The estimated cost-effectiveness ratios were utilised to develop a package of health interventions by applying the World Bank criteria for the selection of essential activities (cost-effective and addressing important health problems). As displayed in Table 2, these criteria suggested a variety of health interventions aimed at relieving and averting the health problems due to HIV, pneumonia, diarrhoeal diseases, protein-energy malnutrition, meningitis, malaria, complicated deliveries and tuberculosis. Potentially, these interventions could avert 26.4 percent of the burden of disease in 1997 at total costs of Z$300 million. The above calculations indicated that relatively few additional resources could address a large percentage of the present burden of disease. Using instead a selection procedure based on a pure cost-effectiveness criterion resulted in the interventions displayed in Table 3.
Inclusion of health interventions in this package was continued until the total costs reached the same expenditure level as the package described above. Interventions addressing important health problems like HIV, pneumonia, diarrhoeal diseases, malaria, tuberculosis and complicated deliveries were included in this package, as was the case for the package described above. However, instead of including supplementary feeding against malnutrition, this package included relatively minor health problems like bacterial conjunctivitis, gastritis, childhood cluster diseases, glaucoma, peptic ulcer, schistosomiasis and trachoma. The total package could potentially avert a slightly higher share of the 1997 disease burden than the above package (27.2 versus 26.4 percent). Generally, the cost-effectiveness ratios presented in the tables suggested that there was good potential for improving the allocative efficiency of the public health care sector of Zimbabwe. Four general strategies could be identified. Firstly, the estimated cost-effectiveness ratios varied widely across interventions (Table 1). In principle, therefore, substantial improvements in allocative efficiency could be achieved by a reallocation of resources from the interventions with high cost-effectiveness ratios to interventions with low cost-effectiveness ratios. Secondly, it appeared beneficial to pay particular attention to a narrow range of interventions with low cost-effectiveness ratios. The calculations behind the development of the two intervention packages (Tables 2 and 3) indicated that as much as one quarter of the total burden of disease could be averted by focusing on a few interventions at a level of total costs corresponding to ten percent of the national expenditure at that time. Thirdly, the cost-effectiveness figures estimated also confirmed the findings of other studies (e.g. [54,55]), namely that for the same disease, it was more attractive from an efficiency point of view to have the health problem taken care of at the lowest possible level of the referral system. Comparing the cost-effectiveness ratios for the same health problem, the highest ratios were generally found in provincial hospitals, followed by district level hospitals and outpatient care. According to calculations not presented in Table 1, cost-effectiveness ratios at health centres were also lower than those at hospital outpatient departments. In other words, this suggested that there could be substantial gains from utilising the public health care sector facilities in the hierarchical manner that was intended (e.g. only treating difficult health problems at high-level facilities and mild cases at the lowest levels). Fourthly, there appeared to be potential for designing very cost-effective preventive health interventions at the expense of curative interventions. The estimated costs per DALY for the five preventive health programmes included in this study were all relatively low. The sensitivity analysis presented in Table 4 suggested that increasing the discount rate to 6%, utilising the actual Zimbabwean life expectancies, applying equal age weighting or assuming a better capacity utilisation of health facilities had relatively minor effects on the cost-effectiveness ratios. As compared to the baseline estimates of Table 1, these assumptions resulted in a difference in costs per DALY of less than 20% for the majority of the 65 interventions included in the study. In addition, the majority of interventions shifted their rank by three places or less in the rank order of interventions by cost-effectiveness ratio.
Contrary to these observations, utilising a higher discount rate of 10% had more profound effects on the figures. Cost-effectiveness ratios of most interventions rose by 30% or more, and the rank changed by four steps or more for 24 interventions. When the estimates of effectiveness of individual health interventions were reduced, the rank order of health interventions was more strongly affected at sufficiently large decreases in intervention effectiveness. For instance, if intervention effectiveness was 50% of the original estimates, more than half the interventions decreased their ranks by more than five places. In particular, the ranks of hospital treatment of bacterial meningitis, malaria, pneumonia, tuberculosis and surgical interventions, as well as health centre treatment of conjunctivitis, schistosomiasis haematobium and gastroenteritis, were sensitive to changes in intervention effectiveness. Limitations of the approach applied in Zimbabwe The above findings must be seen in the light of the limitations of the study. As a starting point, the cost function assumed for this study was simple and restrictive. The assumptions of constant returns to scale and perfect divisibility ensured that the average costs per unit of health of any given health intervention were the same irrespective of the level of production. These assumptions facilitated the identification of which interventions should go into an essential health package as well as the exact level of health services production necessary to eliminate the disease burden of a health problem. While there may be several reasons for deeming these assumptions unrealistic, they are frequently applied both for curative and preventive interventions [56][57][58]. Relaxing the assumptions of constant returns to scale and perfect divisibility to develop cost functions with non-constant unit costs and limited possible production levels would require more complicated optimising techniques such as linear, non-linear or integer programming [59,60]. Much of the information necessary for the costing of health interventions was available in principle, in the sense that the majority of health facilities and vertical programmes visited kept a good record of most of the resources used, although only at an aggregate level (i.e. total resources for a whole hospital and not broken down by diagnosis). However, the costing exercise was nevertheless in some respects not as detailed and extensive as could have been desired, mainly as a result of limited research resources. First, a relatively limited number of health centres, hospitals and preventive activities in districts were included as study sites, leading to a risk of not capturing a representative pattern of unit costs for the country as a whole. Second, the costing methodology also involved a number of simplifying assumptions in the data collection at study sites, including, for instance, that the average personnel and other costs per inpatient day were the same irrespective of the diagnosis of a patient and that the treatment of most conditions followed the treatment guidelines [23] rather than the pattern found through the micro-costing data collection. Third, the number of health interventions included was relatively low, due in particular to problems of identifying sufficient information to estimate the effectiveness of interventions. Perhaps the most serious limitation was the scarcity of evidence on the effectiveness of health interventions and of health systems research in Zimbabwe.
The observations made by several authors [14,61,62] that we desperately lack data on these aspects are therefore also very relevant for the Zimbabwean situation. Research on factors influencing how health interventions work in the real world was extremely limited. One example of an extremely useful piece of research for the purposes of the present study was the finding of Vundule and Mharakurwa [63] that as many as 21 percent of villagers might refuse access of spraying teams to some rooms during residual house spraying. For the present study, the necessary information required to estimate effectiveness was, for the majority of health interventions, based on best guesses by clinical experts. Operational lessons learned The present exercise replicated the methods underlying the 1993 World Development Report [16], which were also utilised in more recent analyses [6]. The Zimbabwean research proved very data intensive, and most of the information needed to determine the burden of disease, costs and effects of health interventions was not immediately available, which required very extensive data collection in the field. Results presented here, as well as the burden of disease estimates published previously [18], represented what was feasible within the time and resources allocated. The net time of work for this research was approximately one and a half years, with one researcher, a study administrator and eight research assistants working full time and a core team of ten researchers and civil servants from selected ministries working on a part-time basis (3-6 hours per individual per week on average). Despite this substantial resource input, it was deemed necessary to adopt a number of simplifying assumptions and less extensive data collection activities, some of which have been discussed above. Substantial efforts were invested in searching for published and unpublished studies on the effectiveness of health interventions as well as health systems research on how health interventions operated in reality in Zimbabwe. The lack of knowledge was a main obstacle to this study and came to some extent as a surprise to the research team. If this situation had been anticipated, the data collection would have focused more on collecting additional information on the functioning of the health system, which could subsequently have been utilised to adjust the findings from efficacy studies measuring impact under ideal conditions to arrive at estimates of effectiveness for the Zimbabwean situation. Efficacy studies from similar settings in Africa are more plentiful [e.g. [64,65]]. One example of a potentially useful piece of health systems research would be a review of the health centres of this study to investigate the compliance of tuberculosis patients on directly observed treatment short course (DOTS). As a consequence of the characteristics described above, it was possible to include only a limited number of interventions in the cost-effectiveness analysis. In other words, cost-effectiveness assessments were not performed for a whole range of relevant interventions.
For instance, interventions directed at health problems with a significant burden of disease, such as depression and anxiety disorders and road traffic accidents, were not included. Preventive interventions were incorporated in only limited numbers, so that many common HIV preventive activities were left out of the analysis, including condom promotion, voluntary testing and counselling, peer-based programmes to educate high-risk groups and prevention of mother-to-child transmission, as well as preventive activities against non-communicable diseases and injuries. As a result of these shortcomings, it is not possible to conclude that we have now identified the best or most efficient essential package of care which must be focused on for many years. There may be other, more efficient packages, since it is only possible to draw conclusions with regard to reallocations among the interventions investigated in this study. The present endeavour may more appropriately be seen as just one possible direction towards improving allocative efficiency in the health care sector. Nevertheless, the present exercise was still extremely useful. It captured costs per DALY of a range of health interventions representing the situation in the health care sector at the time of the study. Before this study, there was only very limited and scattered information on the costs, and even less on the effects, of health interventions [55,66-68]. This study also demonstrated what was possible in a setting like Zimbabwe given a certain level of research resources, but also that most of the limitations mentioned could in principle be overcome if more research resources were available. For instance, the cost data collection could have been extended and more health systems research could have been planned to inform the effectiveness component, which could have enlarged the range of health interventions for the cost-effectiveness analysis. Finally, this study highlighted the most important gaps in the knowledge needed for priority setting, i.e. the shortage of hard evidence on the effectiveness of health services. Utilisation of results According to the health information system, acute respiratory infections, malaria and skin diseases were the most common health problems treated at outpatient department level, whereas pulmonary tuberculosis (as much as 22% of all inpatient days in the country), malaria and pneumonia were the most frequent diagnoses among individuals admitted as inpatients at hospitals in 1998 [19]. Large proportions of pulmonary tuberculosis and pneumonia cases are probably caused by underlying HIV [18]. Most of these interventions have been deemed relatively cost-effective in this study and are components of the packages suggested (Tables 2 and 3). However, the present study suggests that HIV prevention is more cost-effective than treatment (including TB treatment), which has been confirmed by other studies [69]. Malaria prevention in high-incidence areas also appeared to have lower costs per DALY compared to treatment. The central level of the Ministry of Health controls the overall allocation of funds between hospitals, where the majority of interventions are curative, and preventive programmes. It is therefore possible to change priorities at the macro level by shifting the balance in favour of the preventive component and, furthermore, to focus on HIV and malaria prevention.
Apart from deciding the funds available for individual hospitals, the central level of the Ministry of Health has less influence on priority setting at health centre and hospital level, where the interventions offered are to a large extent determined by self-referral of patients. Rationing decisions for treating different patient groups are made by health practitioners and may involve a variety of considerations and values (see e.g. [70]), resulting in a focus which may differ from priority setting based on pure cost-effectiveness criteria. The present study pointed to a direction of focus among curative health interventions which may be difficult to enforce among health practitioners. Increased legitimacy and support among clinicians and other health sector personnel, as well as patients, may be secured if they are involved in the priority setting procedure through a consultative process where these groups are allowed to incorporate their own values. For instance, several authors have suggested that other criteria such as equity, severity of disease, age of patient groups and capacity to benefit may affect the rank order of health priorities in the opinion of health personnel [71][72][73]. Conclusion The previous pages showed that it was feasible to conduct cost-effectiveness analyses for a large number of health interventions in a developing country like Zimbabwe using a consistent methodology, similar to the analysis performed at a general, non-country-specific level by the World Bank [16]. The analyses performed in Zimbabwe suggested that cost-effective health interventions were available for some of the major health problems, including HIV, pneumonia, tuberculosis and malaria. In addition, the analysis suggested that there was substantial potential for improving the efficiency with which resources are utilised in the public health care sector. Limitations to the approach applied in Zimbabwe were identified, including shortcuts in the costing methodology and the scarcity of evidence on the effectiveness of health interventions. As a result, important health interventions were not incorporated in the cost-effectiveness analysis. However, most of the obstacles identified in this study could in principle be overcome by adding more research resources. For instance, adding a large component of health systems research on the actual functioning of the health system would improve the effectiveness estimates and enable the inclusion of more health interventions in the cost-effectiveness analysis. A larger number of health interventions assessed by cost-effectiveness analysis would in addition make the subsequent identification of an essential package of health interventions more credible.
9,945.6
2018-01-01T00:00:00.000
[ "Economics", "Medicine" ]
Preparation of Quaternary Ammonium Separation Material based on Coupling Agent Chloromethyl Trimethoxysilane (KH-150) and Its Adsorption and Separation Properties in Studies of Th(IV) In this research, the authors studied the synthesis of a silicon-based quaternary ammonium material based on the coupling agent chloromethyl trimethoxysilane (KH-150) as well as its adsorption and separation properties for Th(IV). Using FTIR and NMR methods, the silicon-based materials before and after grafting were characterized to determine the spatial structure of the functional groups in the silicon-based quaternary ammonium material SG-CTSQ. On this basis, the functional group grafting amount (0.537 mmol·g−1) and quaternization rate (83.6%) of the material were accurately calculated using TGA weight loss and XPS. In the adsorption experiments, the four materials with different grafting amounts showed different degrees of variation in their adsorption of Th(IV) with changes in HNO3 concentration and NO3− concentration, but all exhibited a tendency toward anion exchange. The thermodynamic and kinetic experimental results demonstrated that the materials with low grafting amounts (SG-CTSQ1 and SG-CTSQ2) tended toward physical adsorption of Th(IV), while the other two tended toward chemical adsorption. The adsorption mechanism experiments further proved that the functional groups achieve the adsorption of Th(IV) through an anion-exchange reaction. Chromatographic column separation experiments showed that SG-CTSQ performs well in U-Th separation, with a decontamination factor for uranium in Th(IV) of up to 385.1 and a uranium removal rate that can reach 99.75%. Introduction Extensive research has been conducted in recent years on separating and adsorbing various elements in the end products of nuclear fuel. Thorium, which exists in trace amounts in the end uranium products, is not only an important element that needs to be controlled in the product but also holds significant importance in nuclear fuel reprocessing and environmental management [1]. According to statistics, nuclear power plants worldwide produce over 10,500 tons of spent fuel annually, including more than 170 tons of thorium-232 that can be further utilized [2]. Due to technological limitations, this spent fuel undergoes simple reprocessing and is then solidified and buried as nuclear waste. This nuclear waste not only poses environmental safety hazards but also leads to significant resource wastage [3]. While the previous literature has documented the recovery and utilization of trace thorium in simulated reprocessing samples, enhancing separation methods to improve the recovery rate remains a challenge in spent fuel reprocessing [4,5].
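The abstract quotes a decontamination factor of 385.1 and a uranium removal rate of 99.75% without restating the definitions; one common convention, in our own notation (Q denotes the amount of each element before and after separation), is:

```latex
% Decontamination of uranium from the thorium product, and uranium removal
DF_{U} = \frac{(Q_{U}/Q_{Th})_{\text{feed}}}{(Q_{U}/Q_{Th})_{\text{product}}},
\qquad
\text{removal rate} = 1 - \frac{Q_{U,\text{product}}}{Q_{U,\text{feed}}}
```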
In previous reprocessing studies, the separation of thorium in the sample was mainly achieved through extraction and ion-exchange methods [6]. The extraction separation method has significant advantages in the separation of actinide trace elements (such as thorium, neptunium, and plutonium) due to its high selectivity and separation efficiency [7]. Thanks to this benefit, various extractants (including chelating ones, phosphorus-based ones, and amine-based ones) have been utilized and have progressed to some degree in separating trace actinide elements. For example, Yu Yufu used polyurethane foam loaded with PMBP (a chelating extractant) to separate trace thorium from a large amount of uranium, followed by further purification using TEVA quaternary ammonium resin. The decontamination factor for uranium from thorium in this method reaches 2.7 × 10^6 [8]. Hassan S. Ghaziaskar successfully separated thorium from zirconium carbide waste residue by liquid-liquid extraction with trioctylamine after acid leaching. Chuanqin Xia designed and synthesized a ditriamide extractant, which has high selectivity for Th(IV) [9,10]. However, the extraction separation method requires meticulous operations such as shaking and phase separation, leading to long operation times and complex processes. Additionally, for the separation of trace elements, extractants must have extremely high distribution ratios, which limits the variety of available extractants and makes them expensive [11]. The ion-exchange method offers the benefits of easy-to-use equipment and the opportunity for reuse. In the late 20th century, American scientists systematically conducted research on anion-exchange technology for the separation and purification of actinide trace elements, such as R.H. Poirier, who successfully achieved the separation and analysis of trace thorium from a large amount of uranium using pyridine anion-exchange resin [12]. The ion-exchange resins utilized in the ion-exchange technique offer benefits such as high exchange capacity and the ability to be reused; however, their organic framework is easily damaged under the extreme conditions of spent fuel reprocessing (high acid concentration, strong radiation), which limits their rapid development in the field of radioactivity [13]. Extraction separation and ion-exchange methods each come with their own set of advantages and limitations. Given this, it is essential to develop a reprocessing separation material that integrates the benefits of both, possessing the efficiency of the extraction method as well as the convenience of the solid-phase separation method. Additionally, the material's framework should be resistant to damage. In recent years, many researchers have also tried to develop new separation materials or methods for the separation of thorium. For example, Tonghuan Liu prepared selective cellulose materials by functionalizing cellulose with amidoxime and studied their adsorption and separation properties for Th(IV). Tianxiang Jin modified graphene oxide to obtain materials that can capture thorium ions efficiently. Faur et al. synthesized carbamoyl-methylphosphonated water-soluble polymers and investigated their selectivity for Gd(III)/Th(IV)/U(VI) separation [14][15][16]. These studies have made progress and breakthroughs in the separation of thorium to a certain extent but still cannot overcome the shortcomings of current separation materials in some respects.
In nuclear fuel reprocessing, a qualified separation material matrix should have good mechanical properties, acid resistance, and radiation resistance. According to studies in recent years, silicon-based materials have been widely used in the field of radioactivity due to their excellent performance in various respects. Zhang et al. grafted a novel functionalized silica composite material (SiAcP) onto the surface of porous silica gel using bis(2-methacryloyloxyethyl) phosphate (BMAOP) as a monomer and optimized the synthesis conditions. The adsorption separation behavior of the new material for Sc(III) and Zr(IV) was studied through column experiments, and the results showed that the material exhibited good selective adsorption for Zr(IV) [17]. Chen et al. studied the adsorption behavior of a polyamine-grafted silica composite material (SAER) for uranium in neutral and weakly acidic solutions, demonstrating the feasibility of the novel silicon-based composite adsorbent for effectively removing uranium from different aqueous matrices [18]. Compared to the organic framework of ion-exchange resins, silica gel has good mechanical properties, acid resistance, and radiation resistance. The hydroxyl groups on the surface of silica gel can be used for chemically grafting functional monomers, greatly enhancing the stability of functional groups and effectively inhibiting the loss of functional materials [19]. Furthermore, the majority of contemporary methods used to produce porous silica gel yield materials with significant pore size and surface area, offering significant advantages in enhancing the performance of adsorption separation materials. Hence, the utilization of silicon-based materials in the realm of radioactivity holds great promise. Amine extractants are often used to separate and purify actinides in the field of nuclear fuel reprocessing analysis. Among them, quaternary ammonium salt extractants are widely used for the separation of neptunium, thorium, and plutonium from uranium in the reprocessing process due to their short equilibrium time and high selectivity (Figure S1: the relationship between the partition coefficient Kd of various ions on TEVA and the concentration of nitric acid). For example, TEVA (trioctylmethylammonium chloride or nitrate) extraction resin has a remarkable effect in the analysis and pretreatment of thorium [20,21]. In our previous research, we utilized a coupling agent to chemically graft and modify the surface of porous (mesoporous) silica gel and investigated its grafting mechanism and grafting rate [22]. Based on this theory, this study grafted the coupling agent chloromethyl trimethoxysilane (KH-150) onto the surface of porous silica gel, then used trioctylamine to quaternize the modified silica gel to obtain a silica-based quaternized separation material for the study of Th(IV) adsorption and separation. The research content is divided into three parts. Firstly, the materials before and after quaternization were characterized by FTIR, NMR, XPS, and BET to determine the connection mode and spatial structure of the functional groups on the silica gel surface, and the grafting amounts of functional groups under different synthesis conditions were then determined through TGA, thereby yielding the theoretical adsorption capacity of the materials. Secondly, through the study of Th(IV) adsorption behavior, the adsorption reaction mechanism of the silica-based quaternized separation material for Th(IV) was determined; additionally, the thermodynamics and kinetics of adsorption were
explored. Finally, the material was used for the separation study of trace Th(IV) in simulated uranium reprocessing samples. This study offers fresh insights and perspectives for enhancing and optimizing thorium separation materials in reprocessing activities, along with introducing a novel approach for utilizing silica gel-based materials in the realm of reprocessing separation. Figure 1 illustrates the preparation process of SG-CTSQ and the adsorption process of Th(IV). Characterization of Silicon-Based Quaternized Material The SEM images of SG (a), SG-CTS (b), and SG-CTSQ (c) are shown in Figure 2. It can be seen from the SEM images that the surface of the porous silica gel particles shows no significant changes after either the alkylation or the quaternization process. This is because both are small-molecule grafting processes, which are not enough to cause significant surface changes. However, as the grafting progressed, the apparent color of the material changed markedly: the alkylated material appeared light green and the quaternized material yellow (Figure S3), external manifestations of the changes in the molecular structure of the material surface [23]. To further prove that the surface structure of the porous silica gel changed, the hydrophilic properties of the material were investigated. As can be seen from Figure S3, SG particles all sink to the bottom in deionized water, whereas SG-CTS and SG-CTSQ particles mostly remain suspended or float, which proves that a large number of hydrophobic organic groups are already present on the surface of the material, providing preliminary proof that the coupling agent and trioctylamine have been successfully grafted onto the surface of the silica gel.
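The partition coefficient Kd referenced above (Figure S1) and the equilibrium uptake used in batch adsorption work follow standard definitions; a minimal sketch with our own variable names and hypothetical numbers, not values from this study:

```python
# Standard batch adsorption quantities: equilibrium uptake q_e and the
# distribution coefficient K_d of the kind plotted for TEVA in Figure S1.
def batch_metrics(c0, ce, volume_ml, mass_g):
    q_e = (c0 - ce) * (volume_ml / 1000.0) / mass_g   # mg Th(IV) per g sorbent
    k_d = (c0 - ce) / ce * volume_ml / mass_g          # mL g-1
    return q_e, k_d

# Hypothetical example: 100 mg/L Th(IV), 20 mL solution, 0.1 g of sorbent.
q_e, k_d = batch_metrics(c0=100.0, ce=35.0, volume_ml=20.0, mass_g=0.10)
print(f"q_e = {q_e:.1f} mg g-1, K_d = {k_d:.0f} mL g-1")
```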
The FTIR spectra of SG (a), SG-CTS (b), and SG-CTSQ (c) are depicted in Figure 3. For SG, in Figure 3a1, 967 cm−1 is the bending vibration peak of −OH, and 1078 cm−1 is the stretching vibration peak of Si−O−Si [24]. In Figure 3a2 (enlarged portion of 1400-3800 cm−1 in Figure 3a1), 1643 cm−1 is the bending vibration peak of H2O and 1517 cm−1 is the in-plane deformation vibration peak of −OH. The broad peak around 3380 cm−1 is the stretching vibration peak of the double-base −OH and H2O, and 3744 cm−1 is the stretching vibration peak of the free-type −OH [25]. In addition, there is a peak at about 3600 cm−1, the stretching vibration peak of hydrogen-bonded −OH, which is obscured by the large peak at 3380 cm−1 but is faintly visible [26]. Figure 3a shows that there are three different types of −OH on the surface of SG, namely free-type −OH, hydrogen-bonded −OH, and double-base −OH; in addition, there is a certain amount of bound water on the surface of SG.
For SG-CTS, in Figure 3b1, because a large amount of −OH was involved in the reaction, the peaks at 967 cm−1 and 3744 cm−1 were significantly weakened. In Figure 3b2 (enlarged portion of 1400-3800 cm−1 in Figure 3b1), 3060 cm−1 is the stretching vibration peak of C−H in −CH2Cl, 2980 cm−1 and 2880 cm−1 are the stretching vibration peaks of C−H in −CH3, and 1416 cm−1 is the in-plane bending vibration peak of C−H in −CH3 [27]. In addition, the bending vibration peak of water at 1643 cm−1 and the stretching vibration peak of water at around 3380 cm−1 disappear, which indicates that there are a large number of organic hydrophobic groups on the surface of the silica gel and the bound water is significantly reduced [28]. The above analysis shows that the characteristic peaks of the groups in the coupling agent KH-150 are all present in the SG-CTS spectrum, indicating that the coupling agent KH-150 has been successfully grafted onto the surface of the silica gel.

For SG-CTSQ, in Figure 3c1, owing to the introduction of a large number of octyl groups (containing −CH3 and −CH2−) onto the silica gel surface after the quaternization reaction, the in-plane bending vibration peak of C−H in −CH3 became more obvious at 1416 cm−1, and the out-of-plane bending vibration peak of C−H in −CH3 appeared at 1156 cm−1, previously covered by the large peak at 1078 cm−1 [29]. The in-plane bending vibration peak at 1376 cm−1 belongs to C−H in −CH2−. In addition, the sharp peak at 1724 cm−1 is the C−N stretching vibration peak corresponding to quaternary ammonium nitrogen. In Figure 3c2 (enlarged portion of 2500-3500 cm−1 in Figure 3c1), the most obvious change is the disappearance of the C−H stretching vibration peak (3060 cm−1) in −CH2Cl, which indicates that this group has participated in the quaternization reaction [30]. The above analysis shows that trioctylamine was successfully introduced onto the surface of the silica gel and quaternized with the coupling agent molecules.
The 29Si-MAS NMR spectra of SG and SG-CTS are shown in Figure 4. For SG, in Figure 4a, −90.0 ppm is the absorption peak of Si corresponding to the double-base −OH, −100.0 ppm is the absorption peak of Si corresponding to free-type −OH and hydrogen-bonded −OH, and −111.6 ppm is the absorption peak of Si corresponding to siloxane groups [31]. After the coupling agent grafting reaction, for SG-CTS, in Figure 4b, the absorption peak at −90.0 ppm disappeared, indicating that the double-base −OH almost completely participated in the reaction. The intensity of the absorption peak at −100.0 ppm decreased significantly, indicating that most of the independent −OH participated in the grafting reaction [32]; moreover, because so much −OH participated in the reaction, the number of Si atoms corresponding to siloxane groups Si−(OSi)4 increased, which enhanced the intensity of the absorption peak at −111.6 ppm. In addition, −67.2 ppm and −77.8 ppm are peaks of Si of the coupling agent KH-150 with two different connection modes [33]. Among them, −67.2 ppm corresponds to the coupling agent molecule connected to the silica gel by only one chemical bond, and −77.8 ppm corresponds to the coupling agent molecule connected to the silica gel by two chemical bonds [34]. Based on the above analysis, the possible surface structures of SG and SG-CTS are shown in Figure 6a and 6b, respectively.

The 13C-NMR spectra of SG-CTS and SG-CTSQ are shown in Figure 5. For SG-CTS, in Figure 5a, 13.5 ppm is the absorption peak of C corresponding to the −OCH3, and 59.5 ppm is the absorption peak of C corresponding to the −CH2Cl [35]. For SG-CTSQ, in Figure 5b, the peak at 59.5 ppm disappeared, which indicates that a large number of −CH2Cl groups disappeared [36]. Meanwhile, the peak observed at 62.5 ppm corresponds to
the C absorption peak associated with quaternary ammonium nitrogen, suggesting that trioctylamine has effectively undergone quaternization with the −CH2Cl group in the coupling agent molecule. In addition, 18.2 ppm is the characteristic peak of C corresponding to −CH3 in trioctylamine, and 29.3 ppm is the characteristic peak of C corresponding to −CH2− in trioctylamine [37]. Based on the above analysis, the possible surface structure of SG-CTSQ is shown in Figure 6c.

Figure 7 displays the TG-DTG results for the alkylated silica gel SG-CTS. The image clearly shows that the weight reduction occurring between 30 and 246 °C is due to the dissociation of the organic solvent xylene from the surface of the silica gel. The weight loss in the range of 246-905 °C is the decomposition process of −OCH3 and −CH2Cl on the coupling agent molecules. In this thermogravimetric process, the weight of organic group decomposition is m1 = 8.64%. Substituting this into Equation (5), the grafting amount G (mmol•g−1) of quaternary ammonium groups can be calculated as follows:

G = (m2 − 0.0864) / (0.9136 × M) × 10^3 (1)

The TG-DTG results of materials synthesized under different concentrations of trioctylamine (SG-CTSQ1 to SG-CTSQ8) are shown in Figure 8a. When the concentration of trioctylamine is low, the thermal weight loss increases rapidly with increasing concentration. When the concentration of trioctylamine participating in the reaction reaches 0.2 mol•L−1, the thermal weight loss reaches its maximum. Thereafter, with further increases in trioctylamine concentration, the trend of the thermogravimetric curve remains the same. Since the grafting amount G (mmol•g−1) of quaternary ammonium groups depends on the thermogravimetric weight loss m2, the trend of the grafting amount can be obtained from Equation (1), as shown in Figure 8b. When the trioctylamine concentration was 0.05 mol•L−1, 0.1 mol•L−1, and 0.15 mol•L−1, the grafting amounts were 0.258 mmol•g−1, 0.413 mmol•g−1, and 0.475 mmol•g−1, respectively. When the trioctylamine concentration was greater than 0.2 mol•L−1, the surface reaction sites reached saturation, and the grafting amount reached its maximum value (0.537 mmol•g−1) and maintained equilibrium.
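As a quick check of Equations (1) and (5), the sketch below implements the grafting-amount calculation in Python. The SG-CTSQ weight loss m2 used in the example is a hypothetical value chosen only to illustrate the arithmetic; the molar mass of trioctylamine (C24H51N) is about 353.7 g/mol.

```python
# Minimal sketch: grafting amount G (mmol/g) from TGA weight losses, Eq. (5)/(1).
# m1, m2 are fractional weight losses of SG-CTS and SG-CTSQ; M is the molar
# mass of trioctylamine in g/mol. The m2 value below is hypothetical.

M_TRIOCTYLAMINE = 353.67  # g/mol, molar mass of trioctylamine (C24H51N)

def grafting_amount(m2: float, m1: float = 0.0864, M: float = M_TRIOCTYLAMINE) -> float:
    """Grafting amount in mmol/g: G = (m2 - m1) / ((1 - m1) * M) * 1e3."""
    return (m2 - m1) / ((1.0 - m1) * M) * 1e3

# Example: a weight loss of ~26% back-calculates to the reported maximum
# grafting amount of ~0.537 mmol/g (m2 here is a hypothetical illustration).
m2_max = 0.26
print(f"G = {grafting_amount(m2_max):.3f} mmol/g")
```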
In order to explore the changes in elements and functional groups on the surface of the materials, and to obtain the actual maximum quaternization grafting rate, XPS tests were performed on SG (a), SG-CTS (b), and SG-CTSQ4 (c) (the material with the highest grafting amount mentioned above), as shown in Figure 9. For SG, in Figure 9a, 104 eV and 155 eV are the 2p and 2s absorption peaks of Si, respectively; 533 eV and 980 eV are the 1s and O KLL absorption peaks of O, respectively [38]. As can be seen from the figure, SG contains only two elements, Si and O. The XPS spectrum of SG-CTS changed after the reaction with the coupling agent KH-150 (chloromethyl trimethoxysilane). In Figure 9b, 201 eV is the 2p peak of Cl, and 286 eV is the 1s peak of C, which indicates that the coupling agent molecule was successfully present on the silica gel surface [39]. After quaternization (Figure 9c), the most obvious change was that the C 1s peak at 286 eV was significantly enhanced, and the 1s peak of quaternary ammonium nitrogen appeared at 401 eV [40].
In order to further determine the actual maximum grafting rate of the quaternization reaction (namely, the utilization of the coupling agent on SG-CTS), we fitted the Cl 2p absorption peak of SG-CTSQ. After quaternization, the Cl in the material may be linked in the following two ways: −CH2Cl not involved in the reaction, or Cl− in the quaternary ammonium group. Two deconvoluted peaks centered at 201.5 eV and 199.9 eV were assigned to these two connection modes, respectively [41]. The Cl 2p spectra of SG-CTSQ are illustrated in Figure 10. According to the fitting results, the peak area ratios of the two groups are 16.4% and 83.6%, respectively. The maximum grafting rate of quaternization was therefore 83.6%.
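The area-ratio estimate above can be reproduced with a simple two-component peak fit. The sketch below, using synthetic Cl 2p data, fits two Gaussians at the reported binding energies and computes the quaternization rate from their areas; a real XPS analysis would typically also subtract a Shirley background and use Voigt line shapes, so this is only an illustration of the principle.

```python
# Sketch: two-Gaussian deconvolution of a (synthetic) Cl 2p region.
# Component 1 ~201.5 eV: unreacted -CH2Cl; component 2 ~199.9 eV: ionic Cl-.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def two_peaks(x, a1, s1, a2, s2):
    # Peak centers fixed at the assigned binding energies for simplicity.
    return gaussian(x, a1, 201.5, s1) + gaussian(x, a2, 199.9, s2)

# Synthetic spectrum with roughly a 16/84 area split plus noise
be = np.linspace(196, 206, 400)
counts = two_peaks(be, 30, 0.7, 120, 0.8) + np.random.normal(0, 1, be.size)

popt, _ = curve_fit(two_peaks, be, counts, p0=(50, 0.8, 100, 0.8))
a1, s1, a2, s2 = popt
area1, area2 = a1 * s1, a2 * s2  # Gaussian area is proportional to amp * sigma
print(f"quaternization grafting rate = {area2 / (area1 + area2):.1%}")
```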
Combined with the previous TG-DTG results, it can be seen that at this maximum grafting rate (83.6%), the corresponding grafting amount (the number of quaternary ammonium groups per unit mass of the material) is 0.537 mmol•g−1. However, the adsorption effect is not only related to the number of functional groups but is also affected by other factors, which will be discussed later in the paper.

Figure 11 displays the pore size distribution of SG-CTSQ with varying degrees of grafting; the nitrogen sorption isotherms are given in Figure S4. The maximum-probability pore apertures and specific surface areas of the four materials, calculated by the BET and BJH methods, are shown in Table 1. As the grafting amount increases, the material's specific surface area and pore size decrease gradually. Evidently, the quaternary ammonium groups occupy a certain amount of the material's surface space. However, even the material with the highest grafting amount (SG-CTSQ4) has a pore size greater than 8 nm and a specific surface area greater than 400 m2•g−1. These results show that the material retains a large pore size and specific surface area after quaternization and can therefore be used for adsorption experiments and studies.
Study on the Adsorption and Separation Properties of Th(IV) by Silicon-Based Quaternary Ammonium Material (SG-CTSQ)

The effect and mechanism of an adsorbent depend on a number of factors, such as the number of functional groups, pore size, specific surface area, and steric hindrance, which determine whether the adsorption process is physical adsorption, chemical adsorption, or a mixed type. Hence, this section examined four materials (SG-CTSQ1 to SG-CTSQ4) with varying degrees of quaternization grafting. The investigation encompassed acidity tests, adsorption thermodynamics analysis, adsorption kinetics experiments, study of adsorption mechanisms, and separation experiments.
Effects of HNO3 and NO3− concentrations on adsorption of Th(IV) by SG-CTSQ: The variation in the adsorption amount of Th(IV) by the material is shown in Figure 12, where the black line represents the change in adsorption amount with the concentration of HNO3, and the red line represents the change with the concentration of NO3−. For the material SG-CTSQ4 with the highest grafting amount (Figure 12d), when the HNO3 concentration is in a lower range (1-5 mol•L−1), the adsorption amount of Th(IV) by SG-CTSQ4 gradually increases with increasing acid concentration; however, as the concentration continues to increase, the adsorption amount shows a decreasing trend, which may be due to the increase in H+ concentration or NO3− concentration. To identify the cause, we fixed the H+ concentration at 1 mol•L−1 and investigated the change in adsorption amount with NO3− concentration under this condition (red line). As the NO3− concentration increases, there is no inflection point in the Th(IV) adsorption amount on SG-CTSQ4, indicating that the adsorption process is influenced by the variation in H+ concentration. According to the complexation behavior of Th(IV), it can form the complex anions [Th(NO3)6]2− and [Th(NO3)5]− with NO3− and thus be adsorbed through anion exchange [42]. In this process, the higher the NO3− concentration, the greater the proportion of complex anions, making adsorption easier; however, when the acidity of the system increases, H+ also participates in complexation to a certain extent, forming species such as H2Th(NO3)6 that cannot participate in anion-exchange reactions [10]. Based on the above analysis, we preliminarily believe that the adsorption of Th(IV) by SG-CTSQ4 tends toward anion exchange.

For SG-CTSQ1-SG-CTSQ3 (Figure 12a-c), the adsorption curves change significantly owing to the decrease in grafting amount (reduction in quaternary ammonium groups). Firstly, the inflection point of the black line (HNO3 concentration curve) gradually shifts to the left as the grafting amount decreases, which indicates that fewer functional groups (quaternary ammonium groups) reach the chemical adsorption saturation state more quickly. In other words, a smaller number of reaction sites more easily reaches chemical equilibrium. This means that as the H+ concentration increases, even a small change in the concentration of Th(IV) complex anions will still have a significant impact on the adsorption effectiveness of the material. Secondly, the change at the inflection point becomes gradually less noticeable, especially for SG-CTSQ1 (Figure 12a), since with the increase in HNO3 concentration there is no longer a decreasing trend in adsorption amount. This is because, with the reduction in chemical adsorption sites, the physical adsorption effect becomes more pronounced (due to the porous matrix and large specific surface area), which to some extent masks the chemical adsorption effect.
Study on the kinetics of Th(IV) adsorption by SG-CTSQ: The variation in Th(IV) adsorption on SG-CTSQ with time is shown in Figure 13. The adsorption kinetics were analyzed with the pseudo-first-order kinetic model (the reaction rate is linearly related to the concentration of one reactant; this model assumes the rate-determining step is a physical process), the pseudo-second-order kinetic model (the reaction rate is linearly related to the concentrations of two reactants; this model assumes the rate-determining step is a chemical reaction), and the Elovich kinetic model (suitable for processes with irregular data or large activation energy) [43,44]. The kinetic parameters for the adsorption of Th(IV) are listed in Table 2.

From the parameters in the table, it can be observed that for the material SG-CTSQ1 with the lowest grafting rate, the adsorption kinetics fit the pseudo-first-order kinetic model better (R2 = 0.995) than the pseudo-second-order kinetic model (R2 = 0.964). This indicates that the rate-limiting step of this adsorption process is a physical process, owing to the lower content of quaternary ammonium functional groups in SG-CTSQ1, where physical adsorption is more pronounced than chemical adsorption. However, as the grafting amount increases, the fit of the pseudo-first-order model gradually deteriorates while that of the pseudo-second-order model gradually improves. The pseudo-second-order fits for SG-CTSQ3 and SG-CTSQ4 (R2 = 0.981, 0.993) are markedly better than the pseudo-first-order fits (R2 = 0.957, 0.924), suggesting stronger chemical adsorption during the Th(IV) adsorption process. Furthermore, according to Figure 13a, SG-CTSQ1 takes around 60 s to reach equilibrium in Th(IV) adsorption, whereas the other three materials require more than 100 s (Figure 13b-d). This also indicates that with an increase in grafting amount, the materials exhibit a greater inclination toward chemical adsorption of Th(IV).
For the Elovich kinetic model, the R2 values for all four materials are relatively small after fitting, indicating that the adsorption process does not involve a significant activation energy. As the grafting amount increases, the R2 of this model gradually rises (0.797, 0.837, 0.882, 0.906), indicating that more activation energy is involved in the process and, thus, suggesting a greater inclination toward chemical adsorption (physical adsorption does not require activation energy) [45].
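For readers who wish to reproduce this model comparison, the minimal sketch below fits all three kinetic models to a q(t) series with scipy. The time grid follows the experimental sampling described in the Methods, while the q values are hypothetical.

```python
# Sketch: fitting pseudo-first-order, pseudo-second-order, and Elovich models
# to an adsorption time series q(t). The q values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):   # pseudo-first-order
    return qe * (1 - np.exp(-k1 * t))

def pso(t, qe, k2):   # pseudo-second-order
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

def elovich(t, alpha, beta):
    return (1 / beta) * np.log(1 + alpha * beta * t)

t = np.array([10, 20, 30, 50, 70, 100, 200, 300, 500, 700, 1000], float)  # s
q = np.array([12, 18, 22, 27, 30, 33, 36, 37, 38, 38.5, 38.8])            # mg/g, hypothetical

for name, model, p0 in [("PFO", pfo, (40, 0.01)),
                        ("PSO", pso, (40, 0.001)),
                        ("Elovich", elovich, (5, 0.1))]:
    popt, _ = curve_fit(model, t, q, p0=p0, maxfev=10000)
    resid = q - model(t, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((q - q.mean())**2)
    print(f"{name}: params={np.round(popt, 4)}, R2={r2:.3f}")
```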
Study on the thermodynamics of Th(IV) adsorption by SG-CTSQ: Figure 14 shows the relationship between the equilibrium adsorption amount of the material and the equilibrium concentration for different concentrations of Th(IV). The four panels represent the four materials with different grafting amounts. It can be clearly seen that for each material, as the Th(IV) concentration increases, the adsorption amount increases and gradually tends to equilibrium, indicating that the adsorption of Th(IV) eventually reaches a saturated adsorption amount. With the increase in the grafting amount of SG-CTSQ, the saturated adsorption amount of Th(IV) increased significantly, from about 25 mg•g−1 to about 45 mg•g−1.

The Langmuir and Freundlich models are based on the assumptions of monolayer and multilayer adsorption, respectively [46,47]. In general, chemical adsorption and physical adsorption are typical monolayer and multilayer adsorption processes, respectively [48]. Therefore, we can preliminarily determine the type of Th(IV) adsorption on SG-CTSQ from the fitting results of these two models. The parameters and fitted plots of the Langmuir and Freundlich adsorption isotherm models are listed in Table 3 and Figure 14. The fit of the Freundlich model (R2 = 0.989) for SG-CTSQ1 (Figure 14a) is superior to that of the Langmuir model (R2 = 0.957). This suggests that the adsorption of Th(IV) on SG-CTSQ1 leans toward multilayer adsorption, implying a preference for physical adsorption of Th(IV) on SG-CTSQ1. As the grafting amount increases (as the number of quaternized groups in the material increases), the determination coefficient R2 of the Langmuir fit gradually increases. At the maximum grafting level (SG-CTSQ4, Figure 14d), the Langmuir fit (R2 = 0.992) significantly outperforms the Freundlich fit (R2 = 0.976). This suggests that as the quaternized groups in the material increase, there is a preference for monolayer adsorption of Th(IV), indicating that chemical adsorption is the predominant mechanism.

Van't Hoff fitting results and parameters for the temperature series are shown in Figure 15 and Table 4. Based on the parameters in the table, for the four materials with different grafting amounts, ∆G is less than 0 across the whole temperature range, indicating that the adsorption process is spontaneous. Based on Ma et al.'s research, the enthalpy change for physical adsorption falls within 2.10 to 20.90 kJ•mol−1, and that for chemical adsorption falls within 20.90 to 418.40 kJ•mol−1 [49]. The ∆H for the adsorption of Th(IV) by SG-CTSQ1 is 17.47 kJ•mol−1 (less than 20.90 kJ•mol−1); that is, its adsorption process belongs to physical adsorption. However, for the other three materials, ∆H is greater than 20.90 kJ•mol−1, indicating that their processes are closer to chemical adsorption.
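A Van't Hoff analysis of this kind reduces to a linear regression of ln(Cs/Ce) against 1/T (Equation (12)). The sketch below shows the calculation with hypothetical equilibrium ratios over the 20-60 °C range used in the experiments.

```python
# Sketch: ΔH and ΔS from a Van't Hoff plot (Eq. 12), ΔG from the Gibbs
# equation (Eq. 13). The Cs/Ce ratios below are hypothetical.
import numpy as np

R = 8.314                                                 # J/(mol*K)
T = np.array([293.15, 303.15, 313.15, 323.15, 333.15])    # 20-60 °C
ratio = np.array([2.1, 2.6, 3.3, 4.0, 4.9])               # Cs/Ce, hypothetical

# ln(Cs/Ce) = -ΔH/(R*T) + ΔS/R, so regress ln(ratio) on 1/T
slope, intercept = np.polyfit(1 / T, np.log(ratio), 1)
dH = -slope * R        # J/mol
dS = intercept * R     # J/(mol*K)
dG = dH - T * dS       # J/mol at each temperature

print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
print("dG (kJ/mol):", np.round(dG / 1000, 2))
```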
The above analysis shows that in the adsorption of Th(IV) by SG-CTSQ1-SG-CTSQ4, the thermodynamic and kinetic fitting results of the adsorptive properties are basically consistent. The adsorption thermodynamic parameters of Th(IV) on SG-CTSQ are given in Table 4.

Study on the adsorption mechanism of Th(IV) on SG-CTSQ: In the experiments on the effect of acidity on adsorption, we initially concluded that the functional groups in SG-CTSQ tend to adsorb Th(IV) through anion exchange. To delve deeper into the adsorption mechanism, we carried out the following experiment using SG-CTSQ4. Assuming that the adsorption of Th(IV) by SG-CTSQ proceeds by an anion-exchange reaction, the mass-action law gives, in logarithmic form,

lg Kd = lg K + n·lg[SiNR4+]

where K is the adsorption equilibrium constant; Kd is the distribution ratio; n is the number of complex acid-radical anions involved in the exchange; and [SiNR4+] is the content of functional groups (mmol). Taking lg Kd as the ordinate and lg[SiNR4+] as the abscissa gives a straight line, and the value of n is obtained by fitting.

The results of adsorption using varying dosages of adsorbent are presented in Table 5, with the corresponding fitting outcomes displayed in Figure 16. Based on the slope of the straight line, which is equivalent to n = 5.75, it can be concluded that the adsorption of Th(IV) by SG-CTSQ predominantly involves the complex anions [Th(NO3)6]2− and [Th(NO3)5]−.
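The slope analysis is a straight-line fit in log-log coordinates. In the sketch below, the functional-group contents correspond to the 20-100 mg dosages at the maximum grafting amount (0.537 mmol•g−1), while the Kd values are hypothetical numbers chosen to illustrate a slope near the reported n = 5.75.

```python
# Sketch: slope analysis of lg(Kd) vs lg[SiNR4+] to estimate n, the number of
# nitrate complex anions involved in the exchange. Kd values are hypothetical.
import numpy as np

# [SiNR4+] in mmol for 20-100 mg adsorbent at 0.537 mmol/g grafting
group_mmol = np.array([0.0107, 0.0215, 0.0322, 0.0430, 0.0537])
Kd = np.array([4.7, 2.5e2, 2.7e3, 1.3e4, 5.0e4])   # distribution ratio, hypothetical

n, lgK = np.polyfit(np.log10(group_mmol), np.log10(Kd), 1)
print(f"n = {n:.2f}, lgK = {lgK:.2f}")  # the paper reports n = 5.75
```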
Study on the separation of uranium and thorium: Suitable adsorption conditions can improve the adsorption capacity of Th(IV) on SG-CTSQ, thereby improving the recovery rate of Th(IV). According to previous studies, in the HNO3 system, UO22+ mainly exists in the form of UO2(NO3)+ after complexation with NO3−, and the UO2(NO3)3− form is less common, so the partition coefficient of U(VI) on the material is very low [50]. Based on the static adsorption experiment findings, Th(IV) readily forms complex anions in an HNO3 environment, allowing it to be adsorbed onto the material in the chromatography column and thus facilitating the separation of uranium and thorium.

After eluting thorium with the corresponding eluent, we obtained the elution curve of Th(IV), as shown in Figure 17. It can be observed that with the increase in grafting amount, there is a noticeable change in the elution curve of thorium. For SG-CTSQ1 with a lower grafting amount (Figure 17a), the thorium concentration in the first milliliter of eluate is relatively high, indicating that part of the Th(IV) in the sample is only physically retained in the column; this may be because the material has a smaller adsorption distribution coefficient due to its lower content of functional groups, leaving a large amount of free Th(IV) (no ion-exchange reaction has occurred). However, as the grafting amount increases, this phenomenon gradually disappears (Figure 17b-d). Based on the thorium and uranium concentrations in the original sample and the eluate, the recovery rate of Th(IV) and the decontamination factor of uranium in thorium are shown in Table 6. In terms of Th(IV) decontamination, the newly synthesized silicon-based quaternized material exhibits better decontamination performance than the commonly used TEVA resin. Under the same experimental conditions, except for SG-CTSQ1, which has a slightly lower decontamination factor than TEVA resin, the other three materials have higher decontamination factors than TEVA resin [51].
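Both figures of merit in Table 6 follow directly from the measured concentrations. The sketch below computes the Th(IV) recovery and the uranium decontamination factor (Equation (15)) for the 1 mL sample/14 mL eluate geometry described in the Methods; the eluate concentrations are hypothetical.

```python
# Sketch: Th(IV) recovery and uranium decontamination factor (Eq. 15).
# Sample: 1 mL of U 1000 mg/L + Th 20 mg/L; 14 mL of Th eluate collected.
# Eluate concentrations below are hypothetical.

u_sample, th_sample = 1000.0 * 1.0, 20.0 * 1.0   # ug of U and Th loaded
u_eluate, th_eluate = 0.18 * 14.0, 1.38 * 14.0   # ug of U and Th in the eluate

recovery = th_eluate / th_sample
df = (u_sample / th_sample) / (u_eluate / th_eluate)
print(f"Th recovery = {recovery:.1%}, DF(U) = {df:.1f}")
```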
Preparation of Silicon-Based Quaternized Separation Material

The alkylation process of porous silica gel: The coupling agent chloromethyl trimethoxysilane is used to modify the surface of porous spherical silica gel to obtain alkylated silica gel. The reaction process is as follows: Soak the porous spherical silica gel in 1 mol•L−1 HNO3 for 6 h to activate it and increase the hydroxyl groups on the surface. The activation process is carried out in an ultrasonic cleaning machine (JP-031S, Skymen, Shenzhen, China) with the temperature set at 60 °C. After rinsing with deionized water, the silica gel is placed in an oven and dried for 24 h to obtain activated silica gel (SG). The activated silica gel (SG) is left in the air to freely absorb moisture; during this period, it is weighed continuously until the hydration degree of the silica gel reaches 10% (a weight increase of 10%). The activated silica gel (SG) is mixed with xylene solvent and placed in a reaction reactor (DDL-2000, EYELA, Tokyo, Japan) for 15 min of stirring. Then, the KH-150 coupling agent is added, and the mixture is stirred at a constant temperature for 24 h (reaction temperature set at 80 °C; KH-150 coupling agent dosage: 0.27 mol•L−1). Finally, alkylated silica gel (SG-CTS) is obtained.

The above reaction conditions are the optimal conditions for the alkylation reaction (grafting the coupling agent) of silica gel, as validated in the authors' previous research [22].

The quaternization process of silica gel: The alkylated silica gel (SG-CTS) and acetonitrile solvent are placed in a DDL-2000 reaction reactor and stirred for 15 min at 80 °C to ensure uniform mixing. Then, trioctylamine is added to the reactor and stirred at 80 °C for 24 h at a constant temperature to fully quaternize it. Following the reaction, the material is washed three times with ethanol and dried at 60 °C under vacuum for 24 h, yielding the desired silicon-based quaternized material (SG-CTSQ).

Trioctylamine concentrations from 0.05 mol•L−1 to 0.4 mol•L−1 were employed for the quaternization reaction, producing quaternized materials with varying levels of grafting, denoted SG-CTSQ1 to SG-CTSQ8, respectively.

Characterization of Silicon-Based Quaternized Material

The characterization of SG-CTSQ includes SEM, FTIR, NMR, XPS, TG-DTG, and BET. The instrument types and parameter settings used are given in Supplementary Material S1.
Calculation Methods of Grafting Amount and Adsorption Amount

The degree of quaternization on the surface of the silica gel is expressed by the grafting amount G (mmol•g−1), which represents the quantity of quaternized groups per gram of SG-CTSQ. Suppose thermogravimetric analysis of SG-CTS gives a weight loss m1 and thermogravimetric analysis of SG-CTSQ gives a weight loss m2 (both as fractions of the initial weight). No groups are eliminated during the quaternization reaction, so the weight gain of the material in the quaternization process corresponds to the weight of the trioctylamine grafted in the reaction. After a simple derivation, the grafting amount G (mmol•g−1) can be calculated as follows:

G = (m2 − m1) / ((1 − m1) × M) × 10^3 (5)

where M is the molar mass of trioctylamine.

The total concentrations of Th(IV) were determined with an X-ray fluorescence analyzer (EDX-8100, Shimadzu, Japan). The adsorption amount of Th(IV) in aqueous solution by SG-CTSQ is calculated by the following equation:

Q = (C0 − Ce) × V / m (6)

where Q (mg•g−1) is the adsorption amount; C0 (mg•L−1) is the initial concentration of Th(IV) before adsorption; Ce (mg•L−1) is the concentration of Th(IV) after adsorption; V (L) is the volume of the solution; and m (g) is the mass of the SG-CTSQ.

Adsorption Study on Silicon-Based Quaternized Material with Th(IV)

Experiment on the influence of concentration gradients of HNO3 and NO3−: A Th(IV) (20 mg•L−1) solution is prepared for the HNO3 and NO3− concentration gradient experiment, with HNO3 or NO3− concentrations set from 1 to 7 mol•L−1. Other conditions: temperature, 30 °C; adsorption time, 30 min. After standing for adsorption, filter, measure the equilibrium concentration of Th(IV) with the X-ray fluorescence analyzer, and record it as Ce. Equation (6) is used to calculate the adsorption amount.

Adsorption kinetic experiments: A Th(IV) (20 mg•L−1) solution is prepared for the adsorption kinetics experiment. Set adsorption times: 10 s, 20 s, 30 s, 50 s, 70 s, 100 s, 200 s, 300 s, 500 s, 700 s, and 1000 s. Other conditions: temperature, 30 °C; acidity, 4 mol•L−1 HNO3. At each set time, take a small amount of supernatant and filter it, measure the concentration of Th(IV) with the X-ray fluorescence analyzer, and record it as Ct. The rate of the adsorption process is described using the pseudo-first-order, pseudo-second-order, and Elovich kinetic models.

The Langmuir and Freundlich adsorption isotherm models are given as Equations (10) and (11) [46,47]:

qe = qmax·b·Ce / (1 + b·Ce) (10)

qe = Kf·Ce^(1/nf) (11)

where qe (mg•g−1) is the equilibrium adsorption amount; qmax (mg•g−1) is the maximum adsorption amount; Ce (mg•L−1) is the equilibrium concentration; b is a constant; Kf is the Freundlich constant; and nf is the concentration index.
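A minimal sketch of the isotherm analysis: the adsorption amounts are computed with Equation (6) and then fitted to Equations (10) and (11) with scipy. The solution volume, adsorbent mass, and concentration series are hypothetical placeholders.

```python
# Sketch: adsorption amounts via Eq. (6), then Langmuir (Eq. 10) and
# Freundlich (Eq. 11) isotherm fits. All input values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

V, m = 0.010, 0.020                      # L of solution, g of SG-CTSQ
C0 = np.array([20, 40, 60, 80, 120.0])   # initial Th(IV), mg/L
Ce = np.array([2, 8, 18, 32, 65.0])      # equilibrium Th(IV), mg/L
qe = (C0 - Ce) * V / m                   # Eq. (6), mg/g

def langmuir(Ce, qmax, b):
    return qmax * b * Ce / (1 + b * Ce)

def freundlich(Ce, Kf, nf):
    return Kf * Ce ** (1 / nf)

for name, model, p0 in [("Langmuir", langmuir, (30, 0.1)),
                        ("Freundlich", freundlich, (5, 2))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(Ce, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
    print(f"{name}: params={np.round(popt, 3)}, R2={r2:.3f}")
```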
Adsorption thermodynamic experiments: A Th(IV) (20 mg•L−1) solution is prepared for the adsorption heat experiment. Set temperatures: 20 °C, 30 °C, 40 °C, 50 °C, and 60 °C. Other conditions: acidity, 4 mol•L−1 HNO3; adsorption time, 30 min. After standing for adsorption, filter, measure the equilibrium concentration of Th(IV) with the X-ray fluorescence analyzer, and record it as Ce. Equation (6) is used to calculate the adsorption amount. The Van't Hoff equation was used to fit the adsorption amounts at different temperatures to obtain the thermodynamic parameters ∆H and ∆S, and the change in Gibbs free energy ∆G at various temperatures was determined using the Gibbs equation. The Van't Hoff and Gibbs equations are given as Equations (12) and (13) [52,53]:

ln(Cs/Ce) = −∆H/(RT) + ∆S/R (12)

∆G = ∆H − T∆S (13)

where Cs is the concentration on the solid surface at adsorption equilibrium, and Ce is the concentration in the solution at adsorption equilibrium.

Study on the adsorption mechanism of Th(IV) by SG-CTSQ: A Th(IV) (20 mg•L−1) solution was prepared to explore the adsorption mechanism of Th(IV) on SG-CTSQ. Set adsorbent dosages: 20 mg, 40 mg, 60 mg, 80 mg, and 100 mg. Other conditions: temperature, 30 °C; acidity, 4 mol•L−1 HNO3; adsorption time, 30 min. After standing for adsorption, filter, measure the equilibrium concentration of Th(IV) with the X-ray fluorescence analyzer, and record it as Ce. Equation (6) is used to calculate the adsorption amount.

Separation experiment of trace thorium in the uranium matrix: A certain amount of SG-CTSQ was loaded into an 8 mL extraction chromatography column and pre-equilibrated by passing 4 mol•L−1 HNO3 through the column. The typical flow rate of the chromatographic column is approximately 0.5 mL per minute. U-Th mixed samples were prepared with the following composition: U at 1000 mg•L−1, Th at 20 mg•L−1, and HNO3 at 4 mol•L−1.

Transfer 1 mL of the above U-Th sample onto the chromatographic column. A total of 4 mol•L−1 HNO3 was used to elute the uranium; then, 0.2 mol•L−1 HNO3-0.2 mol•L−1 Na2C2O4 was used to elute the thorium, and the thorium eluate was collected, totalling 14 mL. The X-ray fluorescence analyzer was used to measure the concentrations of uranium and thorium, and the elution curves for both elements were generated. The formula for calculating the decontamination factor of uranium in thorium is as follows:

DF = (U content in the sample / Th content in the sample) / (U content in the eluent / Th content in the eluent) (15)

Establishment of the Th standard curve: Thorium standard solutions were prepared by diluting a 500 mg•L−1 Th(IV) standard solution to mass concentrations of 5 mg•L−1, 10 mg•L−1, 20 mg•L−1, 40 mg•L−1, 60 mg•L−1, and 80 mg•L−1. The fluorescence intensity ATh (au) of the standard series solutions was determined by X-ray fluorescence spectroscopy, and the standard curve was drawn as ATh versus CTh (Th concentration). The standard curve equation for Th is given in Figure S2.
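The standard curve itself is an ordinary linear regression of fluorescence intensity on concentration, which is then inverted to quantify unknowns. In the sketch below, the standard concentrations match the series described above, while the intensities are hypothetical.

```python
# Sketch: Th standard curve (A_Th vs C_Th) from the XRF standard series,
# inverted to quantify unknowns. Intensities below are hypothetical.
import numpy as np

C_std = np.array([5, 10, 20, 40, 60, 80.0])              # mg/L
A_std = np.array([410, 805, 1630, 3190, 4850, 6400.0])   # intensity (au), hypothetical

k, a0 = np.polyfit(C_std, A_std, 1)   # A_Th = k * C_Th + a0

def th_concentration(A: float) -> float:
    """Invert the linear standard curve to obtain C_Th in mg/L."""
    return (A - a0) / k

print(f"A_Th = {k:.2f}*C_Th + {a0:.1f}; 2500 au -> {th_concentration(2500):.1f} mg/L")
```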
Conclusions

This study investigated the synthesis of silicon-based quaternized materials based on the coupling agent chloromethyl trimethoxysilane (KH-150) and their adsorption and separation performance for Th(IV). In the synthesis process, materials SG-CTSQ1 to SG-CTSQ4 with different grafting amounts were synthesized by controlling the concentration of trioctylamine. NMR characterization was used to determine the molecular structure of the organic groups attached to the surface of the porous silica gel; the results show that the coupling agent molecules exist on the silica gel surface in two modes: connected to the silica gel through a single chemical bond, or connected through two chemical bonds. Trioctylamine was quaternized with the −CH2Cl groups on the silica gel surface. According to the XPS and TGA results, the maximum quaternization rate of the coupling agent reaches 83.6%, with a quaternary ammonium grafting amount of 0.537 mmol•g−1. In the acidity experiment, the four materials with different grafting amounts showed different degrees of variation in their adsorption of Th(IV) with changes in HNO3 and NO3− concentrations, and all exhibited a tendency toward anion exchange. In addition, the adsorption results are also affected to a certain extent by physical factors (pore size, specific surface area, etc.). The thermodynamic and kinetic experimental results demonstrated that the materials with low grafting amounts (SG-CTSQ1 and SG-CTSQ2) tended toward physical adsorption of Th(IV), while the other two tended toward chemical adsorption; with the increase in the grafting amount of SG-CTSQ, the saturated adsorption amount of Th(IV) increased significantly, from about 25 mg•g−1 to about 45 mg•g−1. The adsorption mechanism experiment further proved that the functional groups achieve the adsorption of Th(IV) through an anion-exchange reaction. Chromatographic column separation experiments showed that SG-CTSQ performs well in U-Th separation, with a decontamination factor for uranium in Th(IV) of up to 385.1.

Figure 1. Preparation procedure of SG-CTSQ and the adsorption process of Th(IV).
Figure 2. SEM images of SG (a), SG-CTS (b), and SG-CTSQ (c).
Figure 3. FTIR spectra of SG (a), SG-CTS (b), and SG-CTSQ (c).
Figure 6. Possible surface structures of SG (a), SG-CTS (b), and SG-CTSQ (c).
Figure 7. TG-DTG results for the alkylated silica gel SG-CTS.
Figure 7 displays the TG-DTG results for the alkylated silica gel SG-CTS. The curves clearly show that the weight loss occurring between 30 and 246 °C is due to the dissociation of the organic solvent xylene from the surface of the silica gel. The weight loss in the range of 246-905 °C corresponds to the decomposition of −OCH3 and −CH2Cl on the coupling agent molecules. In this thermogravimetric process, the weight fraction lost to decomposition of organic groups is m1 = 8.64%. By substituting this value into Equation (5), the grafting amount G (mmol·g−1) of quaternary ammonium groups can be calculated as follows:

G = (m2 − 0.0864) / (0.9136 M) × 10³ (1)

Figure 7. TG-DTG results for the alkylated silica gel SG-CTS.

Figure 12. Effects of HNO3 and NO3− concentrations on adsorption of Th(IV) by SG-CTSQ1 (a), SG-CTSQ2 (b), SG-CTSQ3 (c), and SG-CTSQ4 (d).

Study on the kinetics of Th(IV) adsorption by SG-CTSQ

The variation in Th(IV) adsorption by SG-CTSQ with time is shown in Figure 13. The adsorption kinetics were analyzed with the pseudo-first-order kinetic model, in which the reaction rate is linearly related to the concentration of a single reactant.

Figure 14. Adsorption isotherm fitting results of Th(IV) on SG-CTSQ1 (a), SG-CTSQ2 (b), SG-CTSQ3 (c), and SG-CTSQ4 (d).

Van't Hoff fitting results and parameters for temperature changes are shown in Figure 15 and Table 4. Based on the parameters in the table, for the four materials with different grafting amounts, ΔG is less than 0 across the whole temperature range, indicating that the adsorption process is spontaneous. Based on Ma et al.'s research, the enthalpy change for physical adsorption falls within 2.10 to 20.90 kJ·mol−1, whereas the enthalpy change for chemical adsorption falls within 20.90 to 418.40 kJ·mol−1 [49]. The ΔH for the adsorption of Th(IV) by SG-CTSQ1 is 17.47 kJ·mol−1 (less than 20.90 kJ·mol−1); that is, its adsorption process belongs to physical adsorption. However, for the other three materials, ΔH is greater than 20.90 kJ·mol−1, indicating that their processes are closer to chemical adsorption. The above analysis shows that, in the adsorption of Th(IV) by SG-CTSQ1-SG-CTSQ4, the thermodynamic and kinetic fitting results of the adsorptive properties are basically consistent. The decontamination factor was calculated as

DF = (U content in the sample / Th content in the sample) / (U content in the eluent / Th content in the eluent) (15)

Establishment of the Th standard curve

Thorium standard solutions were prepared by diluting a 500 mg·L−1 Th(IV) standard solution to mass concentrations of 5 mg·L−1, 10 mg·L−1, 20 mg·L−1, 40 mg·L−1, 60 mg·L−1, and 80 mg·L−1. The fluorescence intensity ATh (au) of the standard series solutions was determined by X-ray fluorescence spectroscopy, and the standard curve was drawn as ATh versus CTh (Th concentration).

Table 1. Specific surface area and pore size with the changes in grafting amount.
Table 5. Adsorption mechanism experiment results; fitting results of the adsorption mechanism.

Table 6. Th(IV) recovery rate and decontamination factor.
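The grafting-amount formula above is simple enough to check numerically. Below is a minimal Python sketch of Equation (1); the molar mass M of the grafted quaternary ammonium group and the example weight-loss value m2 are hypothetical placeholders, not values taken from the paper:

```python
def grafting_amount(m2, M, m1=0.0864):
    """Grafting amount G (mmol/g) from TG weight-loss fractions.

    m2 : total organic weight-loss fraction of the quaternized gel (0.15 = 15%)
    M  : molar mass (g/mol) of the grafted quaternary ammonium group
    m1 : weight-loss fraction attributed to the coupling agent (8.64% here)
    """
    return (m2 - m1) / ((1.0 - m1) * M) * 1e3

# Hypothetical example: 15% total weight loss, M = 200 g/mol
print(f"G = {grafting_amount(0.15, 200.0):.3f} mmol/g")
```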
16,483.6
2024-06-26T00:00:00.000
[ "Chemistry", "Materials Science" ]
Large deviations for homozygosity

For any m ≥ 2, the homozygosity of order m of a population is the probability that a sample of size m from the population consists of individuals of the same type. Assume that the type proportions follow Kingman's Poisson-Dirichlet distribution with parameter θ. In this paper we establish the large deviation principle for the naturally scaled homozygosity as θ tends to infinity. The key step in the proof is a new representation of the homozygosity. This settles an open problem raised in [1]. The result is then generalized to the two-parameter Poisson-Dirichlet distribution.

For any θ > 0, let J_1(θ) ≥ J_2(θ) ≥ · · · denote the jump sizes of γ(t) over the interval [0, θ] in descending order. If we set P_i(θ) = J_i(θ)/γ(θ), i ≥ 1, then the law of P(θ) = (P_1(θ), P_2(θ), . . .) is Kingman's Poisson-Dirichlet distribution PD(θ) (cf. [10]). It is a probability on the infinite-dimensional simplex ∇ = {p = (p_1, p_2, . . .) : p_1 ≥ p_2 ≥ · · · ≥ 0, Σ_{i≥1} p_i = 1}. For p in ∇, the quantity H(p; m) = Σ_{i≥1} p_i^m is loosely called the homozygosity of order m. The name is taken from population genetics, where the homozygosity corresponds to m = 2. The function is closely associated with the Shannon entropy in communication, the Herfindahl-Hirschman index in economics, and the Gini-Simpson index in ecology. It can be used to measure the population diversity in terms of the number of different types and the evenness of the distribution among those types. The value of H(p; m) decreases when the number of types increases and the distribution among those types becomes more even.

In this paper we are interested in the behaviour of the random variable H(P(θ); m) when θ tends to infinity. When a random sample of size m is selected from a population whose individual types have distribution PD(θ), the probability that all sampled individuals are of the same type is given by H(P(θ); m). Since H(P(θ); m) ≤ P_1^{m−1}(θ), it follows that H(P(θ); m) converges to zero as θ approaches infinity. In [7] and [9] it is shown that H(P(θ); m) goes to zero at a magnitude of Γ(m)/θ^{m−1}, with a suitably normalized version of θ^{m−1} H(P(θ); m)/Γ(m) − 1 satisfying a central limit theorem, where ⇒ denotes convergence in distribution and Z_m is a normal random variable with mean zero. It is natural to investigate more refined structures associated with these limits. In [1], a full large deviation principle is established for H(P(θ); m), describing the deviations from zero. For l in (0, 1/2), the quantity θ^l (θ^{m−1} H(P(θ); m)/Γ(m) − 1) converges to zero in probability as θ tends to infinity. Large deviations associated with this limit are called the moderate deviation principle for {θ^{m−1} H(P(θ); m)/Γ(m) : θ > 0}. In [5], the moderate deviation principles are shown to hold for l in ((m−1)/(2m−1), 1/2). The large deviation principle corresponding to l = 0 remains an open problem. In this paper we will solve this open problem, namely, the large deviation principle for θ^{m−1} H(P(θ); m)/Γ(m), describing deviations from 1. The two-parameter generalization is also obtained. The key in the proof is a new representation of the homozygosity.

Large deviations

Let m be any integer greater than or equal to 2. The objective of this section is to establish the large deviation principle for L(P(θ); m) = (θ^{m−1}/Γ(m)) H(P(θ); m). We begin with the case that θ takes integer values. For any 1 ≤ k ≤ θ, let J_i^k, i = 1, 2, . . . denote all the jump sizes of γ(t) over [k − 1, k]. Since the subordinator γ(t) will not jump at t = 0, 1, . . . , θ with probability one, it follows that

H(P(θ); m) = (1/γ(θ)^m) Σ_{k=1}^{θ} W_k^m H_k,

where W_1, . . . , W_θ are independent copies of γ(1), and, independently, H_1, . . . , H_θ are independent copies of H(P(1); m). Set L_0(P(θ); m) to be the corresponding quantity with γ(θ) replaced by θ. Then we have Theorem 2.1.
Theorem 2.1. A large deviation principle holds for L(P(θ); m) as θ converges to infinity on the space R with speed θ^{1/m} and good rate function I(·).

Proof: By the Ewens sampling formula and direct calculation, the relevant moment generating function F(λ) can be computed explicitly. Thus there exists λ < 0 such that F(λ) < 1, which implies that J(y) > 0 for y < 1. This, combined with (2.2) and the fact that J(·) is non-increasing, gives the upper bound on the lim sup for any x < 1. On the other hand, for any ε > 0 and 0 < δ < 1, the probability P{L_0(P(θ); m) > x} can be bounded from below; combined with (2.4), (2.7) and Theorem (P) in [13], this implies the large deviation principle for L_0(P(θ); m) with speed θ^{1/m} and good rate function I(·). By Lemma 2.1 in [5] and a direct calculation, the large deviation principle for L(P(θ); m) is the same as that for L_0(P(θ); m). For any 0 < δ < 1, the quantity for general θ is comparable with that for the integer part [θ], where the second equality follows from the distributional properties of γ(·). This implies the required exponential estimate for any 0 < r < 1, and the matching estimate is proved similarly.

Large deviation estimates were obtained in [3] for the scaled probability of two randomly selected individuals at time zero having the same ancestor at time T_n. In our notation this probability has the same form as L_0(P(n); 2), except that H_k is replaced by 1. Our result shows that the corresponding work in [3] can be generalized to any m ≥ 2.

The two-parameter homozygosity of order m is defined analogously, and it satisfies a central limit theorem in which Z^α_m is a normal random variable with mean zero. As in the one-parameter case, the moderate deviation principles hold in the two-parameter setting. Our next result establishes the large deviation principle for L(P(α, θ); m).

Theorem 3.1. A large deviation principle holds for L(P(α, θ); m) as θ converges to infinity on the space R with speed θ^{1/m} and good rate function I_α(·). The two-parameter homozygosity can now be written as

H(P(α, θ); m) = (1/σ_{α,θ}^m) Σ_{k=1}^{θ} (W_k^α)^m H_{α,k}.

Remark 3.2. The representations (2.1) and (3.2) can be generalized to related models, but the independence between the total jump size and the normalized individual jump sizes may no longer hold. It is not clear whether our result can be generalized to these situations.

Remark 3.3. For 0 < α < 1 and x > 1, we have I_α(x) < I(x). Thus L(P(α, θ); m) is more spread out from 1 than L(P(θ); m), and α can then be used to describe the diversity of the population in terms of large deviations.
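The centring of L(P(θ); m) around 1 can be checked numerically. The following Python sketch (our illustration, not part of the paper) samples PD(θ) approximately through the stick-breaking (GEM) construction; since the homozygosity is invariant under reordering of the proportions, the ranked and size-biased forms give the same value, and the truncation level is a practical choice:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

def homozygosity_pd(theta, m, n_sticks=2000):
    """Approximate H(P(theta); m) = sum_i P_i(theta)^m for PD(theta),
    sampled via the GEM stick-breaking construction."""
    betas = rng.beta(1.0, theta, size=n_sticks)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    p = betas * remaining            # stick lengths p_1, p_2, ...
    return np.sum(p ** m)            # truncation error is negligible here

theta, m = 50.0, 2
scale = theta ** (m - 1) / gamma(m)  # the normalization in L(P(theta); m)
L = [scale * homozygosity_pd(theta, m) for _ in range(500)]
print(f"mean of L(P(theta); {m}) at theta = {theta:g}: {np.mean(L):.3f} (close to 1)")
```

For m = 2 the exact mean is θ/(1 + θ), so the printed value should sit just below 1 for large θ.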
1,549.8
2016-01-01T00:00:00.000
[ "Mathematics" ]
A Bistable Vibration Energy Harvester with Closed Magnetic Circuit

In this work, to increase the magnetic flux passing through the electric coil of a bistable vibration energy harvester, the magnetic circuit is made closed by introducing two coil systems which have a magnetic core in their axis holes. The magnetic resistance of the magnetic circuit, composed of silicon steel and thin air gaps, is suppressed to a small value. The double-well potential is realized from the spring force and the nonlinear magnetic force between the magnets and the magnetic core. Two harvesters, with open and closed magnetic circuits, are manufactured for comparison. It is shown that the closed magnetic circuit can effectively improve the output power.

Introduction

Energy harvesting devices have been receiving much attention as voltage sources for low-powered wireless sensor nodes with autonomous operation [1]. The electromagnetic vibration energy harvester (VEH) transforms vibration energy into electrical energy through magnetic induction [2], [3]. Conventional VEHs produce electric power through linear spring-damper oscillation. Thus, maximum power is obtained at the resonant frequency, and the output power drops as the operating frequency moves away from resonance. In order to harvest energy effectively from real-world vibrations, the operational bandwidth of VEHs should be broadened. For wideband VEHs, various approaches have been proposed [4]. The use of a bistable structure is one of the attractive approaches [5]. Bistable VEHs are characterized by a double-well potential. In a bistable VEH, the inertial mass transits between two equilibrium positions if the oscillator can overcome the potential barrier, regardless of the vibration frequency. Hence, bistable VEHs can potentially realize broadband operation. In fact, it has been shown that bistable VEHs can harvest electrical power under noise excitation [6]. Moreover, the magnetic flux passing through the coil of a VEH should be increased to improve the power generation efficiency. In this work, to increase the magnetic flux in a bistable VEH, we make the magnetic circuit closed by introducing two coil systems which have a magnetic core in their axis holes. The magnetic resistance of the magnetic circuit, composed of silicon steel sheets and thin air gaps, is suppressed to a small value. The double-well potential is realized from the spring force and the nonlinear magnetic force between the magnets and the magnetic core. Two harvesters with open and closed magnetic circuits are manufactured for comparison. It is shown that the closed magnetic circuit can effectively improve the output power.

Bistable harvester with closed magnetic circuit

It is necessary to improve both the operational bandwidth and the output power of the VEH. The electromotive force induced in the coil is equal to the time derivative of the magnetic flux across the coil. The magnetic flux should therefore be increased to improve the output. Introduction of magnetic materials into the magnetic circuit allows us to increase the flux linkage with the coil. When magnetic materials are introduced, an attractive magnetic force, which is nonlinear with respect to displacement, is generated between the magnets and the magnetic materials. This nonlinearity would broaden the frequency bandwidth of the VEH. It is thus expected that appropriate magnetic circuit design leads to improvements of both the bandwidth and the output power.
On the basis of the above insight, we have developed a VEH model using magnetic material [7,8]. In this work, a harvester with an H-shaped magnetic core, shown in figure 1, is considered. This model is here called the H1-harvester. For the magnetic core in the axis hole of the coil bobbin, we employ silicon steel sheets whose relative permeability is much higher than the permeability of free space. The magnetic flux is expected to be much larger in this VEH than in the VEH whose magnetic core is made of ferrite [7,8]. It is therefore expected that the H1-harvester can improve the output power. However, the magnetic resistance is still large because the magnetic flux outside the coil has to pass through the air region. For this reason, another harvester with a closed magnetic circuit is presented, as shown in figure 2, called here the H2-harvester. In this model, two H-shaped magnetic cores are introduced in two coils so as to form a closed magnetic circuit, as shown in figure 2(b). It is thus expected that the flux linkage with the coils can be increased effectively in comparison with the H1-harvester. In these two models, a nonlinear attractive magnetic force is generated between the magnets and the magnetic cores. When the magnetic force and spring constant are carefully tuned, a bistable potential is realized, as will be discussed below. Thus, it is expected that the introduction of the magnetic cores not only improves the efficiency but also broadens the bandwidth of the harvester.

We now consider the potential energy in the proposed harvesters. The potential energy, E, in the VEH is written as

E(z) = (1/2) k z² + Emag(z),

where k is the spring constant and z is the relative displacement between the coil and the magnets. Here, Emag is the magnetic energy, which can be computed from the flux density B obtained by the finite element method, with Br the remanent flux density of the magnet. Note that Emag(z) is a function of z because the flux distribution, B, varies depending on z. Figures 3 and 4 show the potential energy in the H1- and H2-harvesters, respectively, for three different values of k. It can be seen that a bistable potential structure is realized when k is set to proper values. Moreover, when k is carefully tuned, the potential barrier becomes low, which makes it easy for the oscillator to transit between the two equilibria. For example, in the H2-harvester, the appropriate value of k is about 1100 N/m.

Experiments

The proposed harvesters were manufactured. Figure 5 shows the manufactured H2-harvester. The total mass of the oscillator (the magnets, core, and their keepers), the effective spring constant of the cantilever, k, and the number of turns in the coils are summarized in Table 1. To evaluate the performance with a high potential barrier, k for the H1-harvester is set relatively small.

Measurements

The output power was measured as a function of frequency. Sinusoidal vibration was applied to the base of the proposed harvesters, and the acceleration level was fixed to about 1.0 G at all frequencies. A resistive load of 75 Ω was connected to the coils, and the load voltage was measured by an oscilloscope. Figures 6 and 7 show the output power and load voltage, respectively, plotted against the input frequency for the H1-harvester. These figures show that the maximum is located at about 25 Hz and that the outputs drop away from it. The time variations of displacement and load voltage are shown in figure 8, from which it is found that the H1-harvester has three distinct behaviour modes.
These modes correspond well to the behaviour modes of a bistable VEH: interwell, chaotic, and intrawell oscillations [5]. Thus, this result strongly suggests that the manufactured H1-harvester has a bistable property. Nevertheless, the bandwidth is not broadened. This is likely because the potential barrier in the manufactured H1-harvester is high when k is about 400 N/m, as shown in figure 3. A high potential barrier makes it difficult to produce the interwell oscillation; as a result, the bandwidth of the bistable VEH is narrow. The output power and load voltage of the H2-harvester are shown in figures 9 and 10, respectively, which show that the output of the H2-harvester is much greater than that of the H1-harvester. Although the efficiencies of the two harvesters cannot be compared directly because their numbers of coil turns differ, this result clearly indicates the superiority of the closed magnetic circuit. However, the H2-harvester again has a relatively narrow peak at about 40 Hz. The time variations of displacement and load voltage are shown in figure 11. At 35 Hz, the amplitude of the oscillation is large. On the other hand, at 50 Hz, the oscillation of the displacement is biased and its amplitude is much smaller than that at 35 Hz. From these results, it can be said that the H2-harvester has two distinct behaviour modes depending on the input frequency. This suggests that the H2-harvester does not have a bistable property.

Discussion

Here, the reason why the H2-harvester does not have the bistable property is discussed. One plausible reason is manufacturing error. In bistable VEHs, the coil axis should be aligned with the center of the magnet at the neutral position. The potential energy profiles for a misaligned coil are shown in figure 12, from which it is found that the H2-harvester does not have a double-well potential whereas the H1-harvester does. Because the potential barrier in the H1-harvester is relatively high, the bistable structure is maintained against small misalignment. On the other hand, in the H2-harvester, the manufacturing error results in the disappearance of the double-well structure because of the low potential barrier. From this discussion, it can be concluded that bistable VEHs with a low potential barrier can have broad bandwidth, although they can easily lose the double-well potential due to manufacturing errors. In summary, the broadband bistable VEH is not robust against manufacturing errors.

Conclusions

This paper has presented a bistable harvester with a closed magnetic circuit. The experimental results suggest that the manufactured H1-harvester is bistable while the H2-harvester is not. It has also been shown that the output power of the H2-harvester is much greater than that of the H1-harvester. The reason why the H2-harvester does not have a double-well potential has been discussed. In future work, a bistable H2-type harvester will be developed by improving the manufacturing accuracy.

Acknowledgment

This study was partly supported by a Grant-in-Aid for JSPS Fellows and JSPS KAKENHI Grant (B) Number 24310117.
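The way the spring constant shapes the double-well potential E(z) = ½kz² + Emag(z) can be illustrated numerically. The sketch below is a toy model, not the paper's FEM computation: the magnet-core attraction energy is approximated by two hypothetical Lorentzian wells at ±z0, and all parameter values (depth, width, offsets) are invented for illustration.

```python
import numpy as np

def potential(z, k, depth=0.02, width=3e-3, z0=5e-3):
    """Toy bistable potential (J): spring energy plus two Lorentzian
    wells approximating the magnet-core attraction at z = +/- z0."""
    e_spring = 0.5 * k * z**2
    e_mag = (-depth / (1.0 + ((z - z0) / width) ** 2)
             - depth / (1.0 + ((z + z0) / width) ** 2))
    return e_spring + e_mag

z = np.linspace(-0.02, 0.02, 4001)
for k in (400.0, 1100.0, 2000.0):  # N/m, cf. the values discussed above
    E = potential(z, k)
    # local minima mark the equilibrium positions
    minima = z[1:-1][(E[1:-1] < E[:-2]) & (E[1:-1] < E[2:])]
    print(f"k = {k:6.0f} N/m -> equilibria at z = {np.round(minima * 1e3, 2)} mm")
```

With these made-up parameters, the soft spring gives two well-separated equilibria, an intermediate k gives a shallow double well, and a stiff spring collapses the structure to a single well, mirroring the qualitative behaviour described in the text.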
2,228.8
2014-11-27T00:00:00.000
[ "Physics" ]
Mitochondrial phylogenies in the light of pseudogenes and Wolbachia: re-assessment of a bark beetle dataset

Abstract

Phylogenetic studies based on mtDNA are increasingly questioned because of potential pitfalls due to mitochondrial pseudogenes and mitochondrial selective sweeps. While the inclusion of nuclear markers should preferentially be considered for future studies, there is no need to abandon mtDNA as long as tests for the known mtDNA artefacts are performed. In this study we present additional data and test previous phylogeographical studies of Pityogenes chalcographus. We did not detect nuclear copies (numts) of the previously used mitochondrial markers, either by a combined long-range/nested PCR of the COI gene or by an in silico analysis of the COI sequence data. This confirms the robustness of our previous phylogenetic study of Pityogenes chalcographus. Results of an in situ hybridization of Wolbachia in Pityogenes chalcographus confirm the presence of this endosymbiont in this species. However, we did not detect a correlation between infection status, geographical region and mtDNA haplotypes. The hybridisation data also support a previous hypothesis that infections do not result from parasitoids or parasitic nematodes, insect surface or laboratory contaminations and are hence a true infection of Pityogenes chalcographus. We conclude that the deep structure found in mitochondrial populations of Pityogenes chalcographus indeed represents the evolutionary history of European populations.

Introduction

In the last two decades several phylogeographic (e.g. Stauffer et al. 1999) and phylogenetic (e.g. Cognato and Sun 2007) studies on scolytines have been presented, and most of them used mitochondrial DNA (mtDNA) as one of, or the only, genetic marker. Analyses of the mitochondrial genome pioneered the era of molecular ecology due to its small size, uniparental mode of inheritance, ease of isolation, and conserved simple structure, allowing the development of universal primers spanning several classes of Metazoa (e.g. Lunt et al. 1996). However, its potential for resolving the evolutionary history of organisms was gradually questioned when factors influencing the reliability of mtDNA-derived phylogenies were identified, namely (i) nuclear non-functional copies of mitochondrial genes (e.g. Bensasson et al. 2001), (ii) maternally inherited endosymbionts (Hurst and Jiggins 2005), (iii) positive selection on mitochondrial genomes (Meiklejohn et al. 2007) and (iv) mitochondrial introgression as a consequence of hybridisation (Petit and Excoffier 2009).

Mitochondria originated from the endosymbiosis of α-proteobacteria in ancestral eukaryotic cells. Mitochondrial genomes contain fewer genes than those of free-living α-proteobacteria, due to a loss of genes during their evolutionary history. This gene loss is explained by (1) the functional redundancy of mitochondrial genes with pre-existing nuclear genes and (2) the functional transfer of mitochondrial genes to the nucleus. The transfer of mtDNA-derived sequences to the nucleus is an ongoing process in eukaryotes, and mitochondrial pseudogenes have been identified in the nuclear genome of many species (Timmis et al. 2004). Such nuclear mitochondrial (numt) pseudogenes can derive from any part of the mtDNA and occur typically as single copies at dispersed genomic locations. Numts are usually less than 1 kb in size (Richly and Leister 2004). Larger fragments as well as tandemly repeated numts have been reported in mammals (e.g.
Bensasson et al. 2001). Phylogenies derived solely from mtDNA sequences may hence be erroneous due to numts being co-amplified by universal mitochondrial primers. A set of strategies is available to avoid numt-based errors, including in silico analysis of sequences to detect increased numbers of non-synonymous base substitutions, frameshifts, additional stop codons and reduced transition/transversion ratios (Bensasson et al. 2001). Positive results should raise doubt about the mitochondrial origin of the retrieved sequences. Furthermore, long PCR techniques can be utilized because most numt sequences are shorter than 1000 base pairs (Richly and Leister 2004).

A specific feature of mtDNA is its strict maternal inheritance in most insects. Due to this asymmetrical inheritance within a species, the marker only reflects the female part of the species' genealogy. Hence, mtDNA transmission will be influenced by any selection for maternally transmitted genes or other maternally selective traits. Several maternally transmitted endosymbionts are well known in invertebrates, with Wolbachia as the most prominent one (Werren et al. 2008). Wolbachia was also detected in Ips typographus (Stauffer et al. 1997), Hypothenemus hampei (Vega et al. 2002), Xylosandrus germanus (Peer and Taborsky 2005) and Coccotrypes dactyliperda (Zchori-Fein et al. 2006). Recently, P. chalcographus was found infected with two Wolbachia strains, wCha1 and wCha2 (Arthofer et al. 2009a). Both strains occur in low titre, not accessible by conventional PCR detection methods. While some Wolbachia infections do not alter host physiology and reproduction, such effects have been found in others. Reproductive fitness traits range from cytoplasmic incompatibility (CI) to male-killing, feminisation and the induction of thelytokous parthenogenesis (see Werren et al. 2008 for a review). In a population infected with CI-inducing Wolbachia, the mtDNA associated with the initially infected females will hitchhike through the population and replace the original haplotypes (Hurst and Jiggins 2005). From a phylogenetic point of view this selective sweep may easily be mistaken for a population bottleneck or a founder effect. On the other hand, old and established Wolbachia infections within a population might maintain mitochondrial isolation in spite of nuclear gene flow. In such cases, deep mtDNA structure may contradict homogeneous nuclear phylogenies. Thus, the presence of Wolbachia must be checked when mtDNA-based phylogenies and phylogeographies are established. This is usually done by conventional PCR using the Wolbachia-specific primers for wsp (Zhou et al. 1998) or 16S rDNA (O'Neill et al. 1992). More sophisticated methods include high-sensitivity detection (Arthofer et al. 2009a, b) or in situ hybridization, which offers a possibility to detect Wolbachia directly in infected tissues (Chen et al. 2005). The latter method reduces the risk of false positive results due to contamination with infected parasitoids, parasitic nematodes or prey in the gut content of predators.

In this study we show that numts do not influence the phylogenetic pattern of P. chalcographus (Avtzis et al. 2008) by performing a combined long-range/nested PCR of the COI gene and by an in silico analysis of the COI sequence data. Furthermore, we present results of an in situ hybridization of Wolbachia in P. chalcographus confirming the presence of the endosymbiont in tissues of this species.
Numt search

Mitogenomic sequences of the coleopteran species Pyrocoelia rufa (Lampyridae), Tribolium castaneum (Tenebrionidae) and Crioceris duodecimpunctata (Chrysomelidae) were obtained from GenBank (for accession numbers see Table 1) and aligned using Clustal X (Thompson et al. 1997). To facilitate identification of conserved regions, sequences of Apis mellifera (Apidae), Bombyx mori (Bombycidae) and Drosophila simulans (Drosophilidae) were included in the alignment. Conserved regions were selected for primer design (Table 1). Occasional variable nucleotide positions within the conserved regions required the selection of primer sequences characteristic for coleopterans. The developed primers were Met/F 5' gctwhtgggttcataccc 3', located in the methionine tRNA region, and CO2/R 5' caaatttctgaacattg 3', located in CO2. This primer pair amplifies a stretch of about 3463 bp. Fourteen DNA extracts of P. chalcographus representing all clades were selected for analysis. Thermocycling was performed in a Primus 25 advanced thermocycler (peqlab, Germany). Full-length PCR was performed in 10 μl reactions using 0.4 μM of each Met/F and CO2/R primer, 6 mM magnesium sulphate, 200 μM dNTPs, 0.4 U Taq DNA polymerase (Sigma, USA), 0.01 U Sawady Pwo polymerase (peqlab) and 1 μl DNA template in the buffer provided with the Pwo polymerase. Cycling conditions were 3 min initial denaturation at 94 °C followed by 32 cycles of 94 °C (30 sec), 55 °C (1 min) and 68 °C (2.5 min) and a final extension step at 68 °C (10 min). Products were diluted 1:10,000 with sterile distilled water and 1 μl of diluted amplicon was used as template for the nested PCR. Dilution series were carried out to prove that the carry-over of genomic DNA from the full-length to the nested PCR reaction was small enough to avoid detectable amounts of amplicon. Nested PCR was done in 25 μl reactions containing 3.75 mM magnesium chloride, 125 μM dNTPs (Fermentas, Lithuania), 0.5 μM of each K698 (Caterino and Sperling 1999) and UEA10 (Lunt et al. 1996) primer and 1 U Taq polymerase (Sigma, USA). Cycling conditions comprised an initial denaturation step of 3 min at 94 °C followed by 33 cycles of 94 °C (30 sec), 48 °C (60 sec) and 68 °C (1.5 min) and a final extension step at 68 °C (10 min). Amplicon size was checked by gel electrophoresis, products were purified with the QiaQuick PCR purification kit (Qiagen, USA) and Sanger sequencing was performed with the nested PCR primers by a commercial provider.

An in silico analysis was performed on 262 sequences of the original study (Avtzis et al. 2008) representing 58 European haplotypes of P. chalcographus (DQ515997-DQ516054) to identify non-synonymous base substitutions, additional stop codons, insertions and deletions, frameshifts and the transition:transversion ratio. Eleven molecular traits listed in Table 2 were selected to discriminate numts and mtDNA; these are extensively discussed in the results section.

Identification of Wolbachia infections by in situ hybridization

In situ hybridization followed a slightly modified protocol of Chen et al. (2005). Insects from locations with elevated Wolbachia prevalence were dissected under a stereo microscope using sterile forceps and scalpel blades. Ovarial tissue was recovered, transferred onto microscope slides, pre-fixed with a drop of methanol and air-dried overnight.
Final fixation was carried out in a drop of 0.4% formaldehyde at 4 °C for 5 min. Slides were washed twice by pipetting 2 ml of buffer 1 (100 mM Tris·HCl, 150 mM sodium chloride, pH 7.4) onto the tissue. The buffer was kept on the tissue for 30 sec and was then decanted. After 10 min of air-drying, 10 μl of a hybridization solution containing 1 ng/μl of a DIG-labelled wsp-specific probe, 5% (w/v) dextran sulphate, 2% (v/v) denatured salmon sperm, 1x SSC, 1x Denhardt's reagent and 50% (v/v) formamide were placed on the slide under a cover slip. Tissue was denatured for 5 min at 96 °C, cooled on ice and hybridized overnight at 42 °C in a humid chamber. The cover slip was removed and the slide washed twice for 5 min with 2x SSC at room temperature and once for 5 min with 0.1x SSC at 42 °C. All subsequent steps were carried out at room temperature. The slide was exposed to buffer 2 (100 mM Tris·HCl, 150 mM sodium chloride, 0.5% (w/v) blocking reagent (Roche), pH 7.4) for 15 min, briefly washed with buffer 1 and air-dried for 10 min. 10 μl of anti-DIG antibody conjugated to alkaline phosphatase (Roche, 1:500 in buffer 2) were placed atop each tissue specimen and incubation was performed for one hour in a humid chamber. Slides were washed twice for 5 min in buffer 1 and equilibrated for 5 min in buffer 3 (100 mM Tris·HCl, 150 mM sodium chloride, 1% (w/v) BSA, 0.3% (v/v) Triton X-100, pH 7.4). Staining was performed with 20 μl NBT/BCIP solution (Amresco, USA) in the dark under a cover slide. As soon as a purple colour became visible (after 30 min up to several hours) the cover slip was removed, the sample was washed briefly with distilled water and mounted, and microscopy was performed to detect cells infected with Wolbachia. For positive and negative controls, Drosophila simulans strains were used.

Results and discussion

Phylogeographic analysis of European P. chalcographus populations revealed a deep genetic structure between the most diverged haplotypes, with three major clades and an estimated divergence time of 100,000 years before present (Avtzis et al. 2008). Recently, low-titre infections of two Wolbachia strains were detected in more than 30% of the analysed specimens (Arthofer et al. 2009a). Thus, tests for the integrity of the mtDNA-based phylogeny in the light of numts and endosymbiont infection were mandatory. Here we present a data set demonstrating that the phylogeny of Avtzis et al. (2008) is not influenced by numt pseudogenes. Arthofer et al. (2009a) detected Wolbachia in all major P. chalcographus clades in a pattern that is unlikely to be caused by CI-inducing strains. Here we prove the presence of the endosymbiont directly in ovarial cells of the beetle, excluding positive Wolbachia detection by PCR due to contamination.

Long/nested PCR and in silico analysis for presence of numts

Alignment of mitochondrial genomes of three coleopteran and three non-coleopteran insect species resulted in six candidate primers (data not shown), of which one primer pair (Table 1), after extensive optimization of PCR conditions, amplified a clear band from P. chalcographus DNA extracts. Dilution series of genomic DNA gave no visible bands at dilutions of more than 1:1,000, ensuring that all amplicons produced in the nested PCR originated solely from the full-length PCR product and not from genomic carry-over (data not shown).
After nested PCR, products of the expected size could be obtained from almost all haplotypes of P. chalcographus examined. Even templates without visible amplification in the full-length PCR had formed enough product to be amplified in the subsequent nested reaction. Comparison of the NJ trees derived from direct PCR sequences (Avtzis et al. 2008) and from nested PCR sequences of 14 representative haplotypes of the major clades showed identical topologies (data not shown). PCR conditions were chosen to remove any numt shorter than 3.4 kb, i.e. three times longer than the largest numts ever observed in insects. Both direct and long/nested PCR sequences were identical, and so were the phylogenetic trees. With our test, co-amplification of numts in the direct PCR approach would have led to discrepancies in tree topology between direct and long PCR sequences.

In order to extend numt screening to 262 individual sequences representing 58 different haplotypes, an in silico analysis was performed targeting characteristic differences between mtDNA and numt sequence composition. Eleven numerical traits were analyzed independently and all of them resulted in values within the 5% confidence intervals for authentic mtDNA (Table 2). Thus, the presence of numts in the analyzed populations of P. chalcographus can be excluded.

Several strategies to avoid numt co-amplification are known. The purification of mtDNA by caesium chloride gradient centrifugation (Nishiguchi et al. 2002) prevents the isolation of numts but is inapplicable when the amounts of source DNA are limited. Besides, the procedure is slow and labour-intensive and therefore not suitable for the screening of large populations. Other enrichment techniques provide DNA that may still be contaminated with some nuclear sequences. In cases where the sequences of authentic mtDNA and the corresponding pseudogenes are known, the development of target-specific primers may be recommended (Zhang and Hewitt 1996). The long PCR approach utilized in this study should exclude any amplicons derived from nuclear DNA. Furthermore, mtDNA shows some characteristics in base composition and mutational patterns that differ from the nuclear genome. Most obviously, mtDNA is strongly AT-biased (Lewis et al. 1995) and evolves faster than single-copy nuclear genes (Galtier et al. 2009). Most probably this fast evolution is explained by inefficient repair mechanisms at the mitochondrial replication complex. More recent studies have shown substantial rate heterogeneity between different species and mitochondrial genes (e.g. Mueller 2006). After transfer into the nucleus, a mitochondrial sequence will evolve with the typical patterns of a pseudogene. Compared with the authentic sequence, which is under some selective constraint, there will be less codon position bias and a higher proportion of non-synonymous base replacements (Sunnucks and Hales 1996). The transition-transversion ratio is significantly higher in mtDNA than in corresponding pseudogenes (Arctander 1995). The GC dinucleotide is often methylated in nuclear DNA, and 5-methylcytosine mutates abnormally often to T (Bird 1980). Therefore the rate of GC → GT mutations among the four possible nC → nT combinations is highly overrepresented in the nucleus, but not in mtDNA, where methylation does not occur (Bulmer 1986).
While we consider the long/nested PCR approach very reliable for excluding any numt from a genetic analysis, it requires additional handling time, costs for PCR consumables, and high-quality DNA allowing the amplification of >3 kb products. Especially the latter condition will not be met when long-term stored specimens with degraded DNA have to be analyzed. The in silico approach presented here can be readily applied to individual haplotypes within any mtDNA alignment and does not require additional manipulations in the laboratory. It is thus suitable for re-checking existing mtDNA-based phylogenies.

Detection of Wolbachia by in situ hybridization

The principal functionality of a modified protocol for Wolbachia detection by in situ hybridization with DIG-labelled probes was tested using ovarial tissue of Wolbachia-free D. simulans STC and D. simulans flies infected with wRi. Differences in colouration were clearly distinguishable between infected and uninfected D. simulans (Fig. 1 A, B). Compared with wRi in D. simulans, the Wolbachia titre in P. chalcographus was low, and on average only 35.5% of the individuals were infected (Arthofer et al. 2009a). The ovarial tissue of several analysed individuals showed staining patterns of different intensities, comparable to the D. simulans positive controls (Fig. 1C).

Conclusion

Evidence of a range of selective forces on mtDNA markers makes phylogenetic studies that are purely based on mtDNA less reliable. While the inclusion of nuclear markers like microsatellites or AFLP should preferentially be considered for future studies, there is no need to completely abandon mtDNA as long as tests for the potential manipulation of mtDNA sequences are performed. Such tests should also be included in ongoing efforts to barcode the tree of life based on mtDNA (Song et al. 2008). Here, we confirm that the data of the previous phylogeographic analysis by Avtzis et al. (2008) are not affected by numts. It can be concluded that the deep structure found in mtDNA populations of P. chalcographus indeed represents the evolutionary history, at least of the female branch, of European populations. Furthermore, we have detected Wolbachia in P. chalcographus cells at low titre by in situ hybridisation. Our results confirm earlier work that used a highly sensitive PCR method (Arthofer et al. 2009a). Such an approach can be prone to false positive results due to contamination, as was found in one extract that carried a uniquely isolated Wolbachia sequence that most likely derived from co-isolated DNA of a parasitoid (Arthofer et al. 2009a). The previous work showed that two strains are present in this beetle at low titre and low frequency, without any correlation between infection status, geographical region and mtDNA haplotype. Despite the inability to differentiate both strains with the presented hybridisation technique, the new data support that infections do not result from parasitoids, parasitic nematodes or laboratory contaminations and are hence true Wolbachia infections of P. chalcographus. In general, additional tests for the presence of numts and endosymbionts are laborious and time-consuming. However, they are required for species that exhibit deep mtDNA divergences in order to exclude potential misinterpretation of mtDNA sequence data.
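A minimal sketch of the kind of in silico screening described above, using Biopython: translation with the invertebrate mitochondrial code flags internal stop codons, and a transition:transversion ratio is computed against a reference. This is an illustration only; the example sequences are invented placeholders, and the eleven traits and 5% confidence thresholds of Table 2 are not reproduced here.

```python
from Bio.Seq import Seq

PURINES = {"A", "G"}

def has_internal_stop(coi_nt, frame=0):
    """Translate with the invertebrate mitochondrial code (NCBI table 5)
    and report internal stop codons, a classic numt signature."""
    protein = str(Seq(coi_nt[frame:]).translate(table=5))
    return "*" in protein[:-1]  # ignore a terminal stop

def ts_tv_ratio(seq_a, seq_b):
    """Transition:transversion ratio between two aligned sequences;
    numts tend to show lower ratios than authentic mtDNA."""
    ts = tv = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == b or "-" in (a, b):
            continue
        if (a in PURINES) == (b in PURINES):
            ts += 1  # purine<->purine or pyrimidine<->pyrimidine
        else:
            tv += 1
    return ts / tv if tv else float("inf")

# Hypothetical aligned COI fragments (not from the study)
ref = "ATTGGAACATTATATTTTATTTTTGGA"
hap = "ATTGGAACACTATATTTTATGTTTGGA"  # one transition, one transversion
print(has_internal_stop(hap), ts_tv_ratio(ref, hap))
```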
4,513.6
2010-09-17T00:00:00.000
[ "Biology", "Environmental Science" ]
Cytokine and Chemokine Levels in Patients with Severe Fever with Thrombocytopenia Syndrome Virus

Background: Severe fever with thrombocytopenia syndrome virus (SFTSV), which can cause hemorrhagic fever-like illness, is a newly discovered bunyavirus in China. The pathogenesis of SFTSV infection is poorly understood. However, it has been suggested that immune mechanisms, including cytokines and chemokines, play an important role in disease pathogenesis. In the present study, we investigated host cytokine and chemokine profiles in serum samples of patients with SFTSV infection from Northeast China and explored a possible correlation between cytokine levels and disease severity.

Methods and Principal Findings: Acute-phase serum samples from 40 patients diagnosed with SFTSV infection were included. Patients were divided into two groups – severe or non-severe – based on disease severity. Levels of tumor necrosis factor (TNF)-α, transforming growth factor (TGF)-β, interleukin-6, interferon (IFN)-γ, IFN-γ-induced protein (IP)-10 and RANTES were measured in the serum samples with commercial ELISAs. Statistical analysis showed that increases in TNF-α, IP-10 and IFN-γ were associated with disease severity.

Conclusions: We suggest that a cytokine-mediated inflammatory response, characterized by an imbalance in cytokine and chemokine production, might be in part responsible for the disease progression of patients with SFTSV infection.

Introduction

Severe fever with thrombocytopenia syndrome (SFTS) is a tick-borne hemorrhagic fever-like illness caused by severe fever with thrombocytopenia syndrome virus (SFTSV), a member of the Bunyaviridae family newly identified in Central and Northeast China [1]. SFTSV infection, characterized by fever, thrombocytopenia, leukocytopenia, and multiorgan dysfunction, produces a broad spectrum of clinical manifestations, ranging from an acute self-limited febrile illness to various grades of severe disease, with a reported case-fatality rate varying between 12% and 30% [2]. Humans become infected through tick bites, contact with blood from SFTS patients, and personal contact [3]. The pathogenic mechanism in patients with SFTSV infection is at least partly immune-mediated, which may play an important role in determining severity and clinical outcome [4]. Other viruses, such as hantaviruses and Crimean-Congo hemorrhagic fever virus, cause cytokine activation, and an uncontrolled release of cytokines has been observed in filovirus infection similar to that seen in sepsis caused by Gram-negative bacteria [5]. Until now, the pathogenesis of SFTSV infection has not been clearly defined, and cytokine and chemokine studies with respect to SFTSV infection are lacking. The present study determined the concentrations of tumor necrosis factor (TNF)-α, transforming growth factor (TGF)-β, interleukin (IL)-6, interferon (IFN)-γ, IFN-γ-induced protein (IP)-10 and RANTES in serum samples of SFTS patients during the 2011 epidemic in Northeast China, and correlated them with disease severity. We also analyzed T-cell subgroups and their possible role in disease severity.

Ethics Statement

All patients gave written consent to participate in our study. Permission to perform this study was given by the Ethics Committee of China Medical University.

Patients and Clinical Samples

Fifty-seven patients with suspected SFTSV infection treated in our hospitals between April and November 2011 were tested for SFTSV at admission.
A suspected case was defined as acute fever in which the pathogen could not be identified, together with at least one of the following: thrombocytopenia, leukocytopenia, or a history of arthropod bites. Confirmed cases were defined by a positive result in a quantitative RT-PCR or a positive result for IgM antibody to SFTSV. Testing was also performed for human granulocytic anaplasmosis, HFRS, Crimean-Congo hemorrhagic fever, Leptospira, Salmonella, Rickettsia and Brucella; EBV, cytomegalovirus, and hepatitis A, B, C and E viruses were also investigated. Plasma samples were obtained from patients during the acute phase of their illness. After sampling, serum was extracted and immediately frozen at −80 °C until analysis. The clinical course and laboratory data of the patients were recorded prospectively on individual forms. Severe SFTSV infection was defined as any case requiring admission to an intensive care unit and meeting at least one of the following criteria: ARDS, heart failure, liver failure, shock or disseminated intravascular coagulation. Data on demographic characteristics and laboratory measures were expressed as mean ± SD or median. In addition, serum samples from 40 healthy volunteers (who did not have a febrile illness in the preceding week and were not epidemiologically linked to the epidemic) were included as controls.

Measurement of Plasma Cytokines

Blood samples anticoagulated with EDTA were collected from patients on admission during the acute phase of SFTSV infection. Plasma was separated by centrifugation (3000 g for 10 min, 4 °C) and stored at −80 °C until analyzed. Serum levels of TNF-α, TGF-β, IL-6, IP-10, IFN-γ and RANTES were measured retrospectively by ABC ELISA kits (Research & Diagnostics, Minneapolis, MN, USA), in accordance with the manufacturer's instructions.

Statistical Analysis

Means for continuous variables were compared using independent-group Student's t tests when the data were normally distributed; otherwise, the Mann-Whitney test was used. Proportions for categorical variables were compared using the χ² test, although Fisher's exact test was used when the data were sparse. Correlation was assessed using Pearson's test. Significance was set at p < 0.05, using two-sided comparisons. Results were analyzed using SPSS for Windows version 17.0 (SPSS, Chicago, IL, USA).

Results

Forty patients, comprising 32 SFTS cases confirmed by RT-PCR and eight by ELISA, were admitted from hilly areas of Northeast China; six (15.0%) of them died. The mean age of the patients was 55.7 ± 14.7 years (range, 18-89 years), and 11 (27.5%) were female (Table 1). Fever, fatigue, anorexia, diarrhea, nausea and myalgia were the most frequent symptoms in all patients. The major clinical symptoms in critical cases were disturbance of consciousness, arrhythmias, pancreatitis, serious pneumonia, capillary leakage, hypotension and shock, in addition to fever, thrombocytopenia, and leukopenia. Hemorrhagic fever-like symptoms were also observed in the patients. All the patients suffered fever during the course of the disease. The median interval between the onset of illness and the day on which serum samples were obtained at admission was 5.5 days (range, 1-10 days). The mean length of hospital stay was 9.7 ± 5.2 days (range, 1-22 days). Six (15.0%) of 40 patients died of acute left ventricular failure, multiorgan dysfunction or aplastic anemia within 4-18 days of admission, while the remaining 34 patients (85.0%) survived.
Nine (22.5%) of 40 patients met the criteria for severe disease. Pleural and pericardial effusions were observed in eight (20.0%) and three (7.5%) patients, respectively. There was a significant difference in the interval between onset and admission between severe and non-severe cases (median, 7.0 vs 5.0 days, respectively; p = 0.016) (Table 2); however, age, sex ratio, duration of fever, and length of hospital stay were similar between severe and non-severe cases of SFTSV infection (p > 0.05). Apart from the patient who died from aplastic anemia, who had coexisting hypertension and coronary heart disease, the patients with severe disease had no evidence of pre-existing comorbidity. All of the patients had thrombocytopenia and elevated aspartate aminotransferase (AST), alanine aminotransferase (ALT) and lactate dehydrogenase (LDH) levels. Of the clinical laboratory parameters tested, C-reactive protein (CRP), creatine phosphokinase (CK) and LDH levels were significantly higher, and platelet count, calcium and albumin levels significantly lower, in patients with severe SFTSV infection compared with those with non-severe infection (Table 2).

Serum levels of TNF-α, TGF-β, IL-6, IP-10, IFN-γ and RANTES on admission are shown in Table 3, Figure 1 and Figure 2. Levels of TNF-α, IL-6 and RANTES were significantly higher, and levels of IFN-γ significantly lower, in patients than in healthy individuals. Levels in patients with severe and non-severe SFTSV infection were then compared. Median TNF-α levels were 43.3 pg/mL (range, 21.3-97.9 pg/mL) and 26.4 pg/mL (range, 10.2-240.1 pg/mL) in patients with severe and non-severe SFTSV infection, respectively, and the difference was significant (p = 0.020). Median IFN-γ levels were 236.4 pg/mL (range, 105.0-2216.1 pg/mL) and 35.4 pg/mL (range, 0.3-1530.8 pg/mL) in patients with severe and non-severe SFTSV infection, respectively, and the difference was significant (p = 0.001). Median IP-10 levels were 369.7 pg/mL (range, 78.3-640.4 pg/mL) and 209.9 pg/mL (range, 60.9-361.5 pg/mL) in patients with severe and non-severe SFTSV infection, respectively, and the difference was significant (p = 0.024). Levels of IP-10 in patients with pneumonia (n = 20) were significantly higher than those in patients without pneumonia (n = 20) (p = 0.024). There were no significant differences in IL-6, TGF-β and RANTES levels between patients with severe and non-severe SFTSV infection. Levels of IFN-γ in non-severe cases were significantly lower (p < 0.001) than the values detected in healthy individuals. Although levels of IFN-γ in severe cases were higher than those in healthy individuals, the difference was not significant (median, 236.4 vs 136.0 pg/mL, respectively; p = 0.063). Levels of IP-10 in severe cases were significantly higher (p = 0.019) than those in healthy individuals (data not shown). Counts of CD4+ and CD8+ T cells were shown to decrease. The difference in lymphocyte subgroups was not statistically significant between severe and non-severe cases (data not shown). Correlations between clinical and laboratory parameters and cytokines and chemokines were analyzed. Only leukocyte and lymphocyte counts positively correlated with the levels of IFN-γ. A positive correlation between IFN-γ and IP-10 levels was observed (Figure 3).
Discussion

We have presented the cytokine profiles following SFTSV infection in 40 hospitalized patients. The interval between onset and admission was significantly longer in severe cases than in non-severe cases, and may be a risk factor for severity of SFTSV infection. Laboratory findings included higher levels of CRP, LDH and CK, and lower levels of albumin, platelet count and serum calcium in severe cases. To date, the pathogenesis of SFTSV infection has not been clearly defined. Inflammatory cytokines and chemokines, the first ramification of activation of the innate immune cells, play a role in the pathogenesis of virus infection in animals and humans [6]. It has also been suggested that, besides high levels of viral replication, the immunological response might contribute to the disease pathogenesis of SFTSV infection [7]. In our prospective study, the most striking phenomenon was the significant increase in plasma TNF-α, IL-6 and RANTES and the significant decrease in plasma IFN-γ in patients with SFTSV infection. The increased levels of TNF-α, IFN-γ and IP-10 were related to the severity of illness. These data, together with the significant correlation found between the levels of IFN-γ and IP-10, suggest that an intense inflammatory response to virus infection, perhaps specifically involving recruitment of inflammatory cells to infected tissues, contributes to disease pathogenesis [8]. Markedly elevated levels of TNF-α and IFN-γ are also correlated with the fatality of Ebola virus infection [9]. TNF-α is also correlated with the severity of hemorrhagic fever with renal syndrome (HFRS) [10,11]. Pleural and pericardial effusions and hemorrhagic fever-like symptoms were commonly observed in these patients with SFTSV infection [4]. However, the pathogenesis of the vascular leakage is not known. Viral factors can target the endothelium directly or indirectly, and virus-mediated, host-derived soluble factors can cause endothelial activation and dysfunction indirectly [5]. TNF-α, produced by monocytes, macrophages and T cells, acts on the endothelium, stimulates the production of vasodilating substances, and is an inducer of NO synthase, with important effects on capillary endothelial permeability [11,12]. IFN-γ and TNF-α have a synergistic effect on endothelial cell cultures in vitro by increasing monolayer permeability, which might play a role in capillary leakage [13]. Whether the role of TNF-α in SFTSV infection is the same as that in other viral hemorrhagic fevers is not known. TNF-α is one of the main proinflammatory proteins, and IL-6 and IFN-γ are also involved in the induction of acute inflammatory responses. Levels of cytokines such as IL-6 and IFN-γ in the blood of SFTSV-infected patients are also associated with clinical outcome [4]. In the present study, levels of IFN-γ were shown to increase in all patients, especially in fatal cases. IFN-γ is also a predictive factor for disease severity in dengue patients [14]. Major functions of IFN-γ are activation of macrophages, differentiation of T helper (Th)1 cells from T cells, inhibition of the Th17 pathway and control of intracellular pathogens [15]. IL-6 plays a major role in host defense mechanisms, including immune responses, acute-phase reactions, and hematopoiesis [16]. In the present study, levels of IL-6 were not associated with severity of illness, but levels of IFN-γ in severe cases were markedly elevated compared with non-severe cases.
In contrast, expression of IFN-γ in non-severe cases was suppressed, which suggests that IFN-γ may contribute to disease progression. It has also been reported that there is a significant association between a high illness severity phenotype and the IFN-γ +874 T allele in patients with acute infection with Epstein-Barr virus (EBV), Coxiella burnetii, or Ross River virus [17]. The cause of the significantly lower levels of IFN-γ in non-severe cases with SFTSV infection is not yet known. IP-10, a chemokine synergistically induced in various cell types by type I (IFN-α and IFN-β) and type II (IFN-γ) IFNs, lipopolysaccharide, other cytokines and Toll-like receptor ligands, is a chemoattractant recruiting natural killer cells and activated T cells into sites of tissue inflammation [18,19]. The levels of IP-10 in patients were similar to those in healthy individuals, with significantly higher levels in severe cases compared with non-severe cases and healthy individuals. IP-10 is a potent chemoattractant for activated Th1 lymphocytes, and IFN-γ produced during Th1 responses may reflect CD8+ T cell activation with production of inflammatory cytokines [20]. Therefore, our findings suggest that SFTSV activates mainly the Th1 immune response against viral invasion. Significantly elevated IP-10 levels were observed in patients with pneumonia, which has also been reported in infection with H5N1 influenza virus and severe acute respiratory syndrome [21,22]. However, it has been reported that the expression of IP-10 and RANTES induced by hantavirus infection cannot increase the permeability of human lung microvascular endothelial cells [23]. IP-10 in patients with H5N1 virus infection might explain the prominent macrophage inflammatory infiltrate in the lungs [24]. In human cutaneous leishmaniasis, IFN-γ production is one of the indicators of a sustained cell-mediated immune response, which is mediated not only through expansion of antigen-specific IFN-γ-producing CD4+ Th1 cells, but also through IFN-γ-producing CD8+ T cells [25]. Whether IFN-γ produced by CD8+ T cells also plays a role in severe patients with SFTSV infection requires further evaluation. The reduced levels of IFN-γ in non-severe patients may be due to inhibited expression of IFN-γ or production of an IFN-γ antagonist during SFTSV infection. The decreased level of IFN-γ in non-severe SFTS patients suggests that expression of IFN-γ during SFTSV infection may reflect the severity of the disease. Interaction between these cytokines has also been demonstrated. TNF-α-mediated morbidity and mortality in mouse models of cerebral malaria and bacterial sepsis can be regulated by IFN-γ [13]. The IFN-γ response is itself regulated by interaction with responses to TNF-α [26]. TNF-α is an early-phase cytokine that acts locally to trigger a cascade of other cytokines, including IP-10 and IL-6 [24]. A positive correlation between IFN-γ and IP-10 levels was observed in these patients. An association of serum levels of TNF-α and IL-6 with high serum creatinine and low platelet counts in HFRS patients has been reported [11]. Such associations were not observed in our study, including with high serum CK and creatinine, low platelet counts and T-cell subgroups (CD4+ and CD8+ T cells).
Upregulated levels of IL-6 are related to increased responses to infection, such as fever, phagocytic cell recruitment, and blood vessel permeability [27]. No correlation between serum cytokine and chemokine levels and body temperature was observed in our study. However, leukocyte and lymphocyte counts were positively correlated with the levels of IFN-γ (Figure 3). Lymphopenia is common in SFTS patients. Apoptosis of lymphocytes can be induced by IFN-γ [28,29], but levels of IFN-γ were shown to decrease in most patients in this study. How lymphopenia occurs is not clearly understood. On the basis of our results, we believe that a cytokine-mediated inflammatory response, characterized by an imbalance in cytokine and chemokine production, plays an important role in the disease progression of patients with SFTSV infection. The levels of Th1 cytokines are correlated with disease severity. Further immunological studies are required to elucidate the role of the immune response in patients' outcomes.

Figure 1. Levels of cytokines (pg/mL) and chemokines (pg/mL) were determined as described; only those with a P value of <0.05 are illustrated. Box plots illustrate the significant differences in TNF-α, IL-6, IFN-γ and RANTES between SFTS patients and healthy controls. doi:10.1371/journal.pone.0041365.g001
Table 3. Cytokines and chemokines in severe and non-severe patients infected with SFTSV and healthy individuals.
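As a closing illustration of the kind of analyses behind Figure 1 and Table 3, here is a minimal sketch of a nonparametric group comparison at the P < 0.05 threshold and of the IFN-γ/IP-10 correlation reported above. The cytokine values below are hypothetical placeholders, not the study data.

```python
# Hedged sketch: Mann-Whitney U comparison of cytokine levels between severe
# and non-severe cases, plus a Spearman correlation, on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical plasma IFN-gamma levels (pg/mL) in severe vs non-severe cases.
ifng_severe = rng.lognormal(mean=4.0, sigma=0.6, size=15)
ifng_nonsevere = rng.lognormal(mean=3.0, sigma=0.6, size=25)

u_stat, p_value = stats.mannwhitneyu(ifng_severe, ifng_nonsevere,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")

# Hypothetical paired IFN-gamma and IP-10 levels for the correlation analysis.
ifng = np.concatenate([ifng_severe, ifng_nonsevere])
ip10 = ifng * rng.lognormal(mean=0.5, sigma=0.3, size=ifng.size)
rho, p_corr = stats.spearmanr(ifng, ip10)
print(f"Spearman rho = {rho:.2f}, P = {p_corr:.4f}")
```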
3,896.2
2012-07-24T00:00:00.000
[ "Medicine", "Biology" ]
Possibility and probability: application examples and comparison of two different approaches to uncertainty evaluation

This paper proposes two interesting applications of the approach to uncertainty evaluation and representation in terms of Random-Fuzzy Variables. One covers the expression of the calibration uncertainty of gauge blocks, and one considers unknown temperature variations, with respect to the temperature at calibration time, in expressing a voltmeter uncertainty. Both examples show that the proposed approach is more effective than the traditional GUM approach.

Introduction

The evaluation and expression of measurement uncertainty is still a hot and debated topic, although the uncertainty concept, as defined by the Guide to the Expression of Uncertainty in Measurement (GUM) [1,2], has been universally accepted by metrologists. The focus of the discussion is on the validity of the mathematical framework, probability, into which the evaluation of measurement uncertainty has been confined. There are several cases, in everyday measurement practice, in which the validity of a purely probabilistic approach can be doubted, for instance whenever a systematic effect has to be considered whose value is unknown but which is assumed to lie within a given interval. Let us consider the acceleration of gravity, g, in a given location. Due to unknown measurement errors, we cannot know its true value, though we can assume an interval into which the local g is supposed to reasonably lie. On the other hand, when using the measured g value, we cannot consider it as a random variable in the estimated interval, because g, in the considered location, does not vary; hence, it represents an unknown and uncompensated systematic effect. In order to represent such effects and their combination with other random and non-random effects in a more correct and effective way, a new mathematical framework has been proposed in recent years [3][4][5][6], based on the theories of evidence and possibility, which generalize probability and also allow non-random effects to be handled. While the theoretical framework has been well established [6], few practical applications have been considered up to now. This paper, after briefly recalling the fundamentals of uncertainty expression in terms of possibility distributions, aims at showing the effectiveness of this new approach in two simple, though significant, cases where unknown and uncompensated systematic effects may cause the GUM approach to provide incorrect results. The first considers the expression of calibration uncertainty for gauge blocks, whilst the second considers unknown temperature variations, with respect to the temperature at calibration time, in expressing a voltmeter uncertainty.
Overview of the RFV approach

The employed mathematical framework, based on possibility theory, considers possibility distributions, instead of probability distributions, to represent the distribution of values that can reasonably be attributed to the measurand. Without entering into too many mathematical details, for which the reader is referred to [6,7], a possibility distribution (PD) is defined as a function r over a support X,

r : X → [0,1],   (1)

such that

sup_{x∈X} r(x) = 1.   (2)

The cuts I_α of a PD, called α-cuts and defined as

I_α = {x ∈ X : r(x) ≥ α},   (3)

can be considered a generalization of the probabilistic concept of confidence intervals, and a credibility that an element x belongs to them can be associated with each interval, in the range [0,1] [6,8]. In particular, the credibility value is given, for each α-cut at level α, by 1 − α [7]. Therefore, expressing a measurement result in terms of a PD yields a family of confidence intervals at various confidence levels. Since the final aim of uncertainty evaluation is "to provide an interval about the result of a measurement within which the values that could reasonably be attributed to the measurand may be expected to lie with a high level of confidence" [1], expressing a measurement result in terms of a PD is in complete agreement with the requirements of the GUM. It has been proved [6] that a PD can effectively represent the effect of any kind of contribution to uncertainty. However, different effects (random and non-random) give different contributions to uncertainty and combine in different ways. Therefore, we may expect that a single PD is not enough to represent all effects and their combinations, and that an aggregation of at least two PDs is required. Random Fuzzy Variables (RFVs) have been defined for this purpose [7]. An RFV is defined by two possibility distributions: the internal one, r_int(x), considers all non-random contributions to uncertainty, whilst the external one, r_ext(x), considers also the random contributions. The external PD is obtained by combining the internal PD r_int(x) with the random PD r_ran(x), which considers only the random contributions to uncertainty [7]. Fig. 1 shows an example of an RFV and its PDs. Since the two PDs r_int(x) and r_ran(x) represent different effects that combine in different ways, the mathematics used to combine RFVs must consider the different characteristics of these two PDs. This mathematics exploits the definition of joint PDs, given in [9] for the internal PDs and in [10] for the random PDs, and the application of Zadeh's extension principle [11], as also shown in [12,13], to which the reader is referred for further details.
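To make the definitions above concrete, the following sketch builds a simple PD and extracts its α-cuts as confidence intervals with credibility 1 − α, as in (1)-(3). The triangular shape is an illustrative assumption, not a distribution taken from the paper.

```python
# Hedged sketch: a triangular possibility distribution and its alpha-cuts.
import numpy as np

x = np.linspace(-2.0, 2.0, 4001)           # support X
r = np.clip(1.0 - np.abs(x), 0.0, None)    # triangular PD with sup r = 1 at x = 0

def alpha_cut(x, r, alpha):
    """Return the interval [min, max] of {x : r(x) >= alpha}."""
    inside = x[r >= alpha]
    return inside.min(), inside.max()

for alpha, credibility in [(0.32, 0.68), (0.05, 0.95)]:
    lo, hi = alpha_cut(x, r, alpha)
    print(f"alpha = {alpha:.2f}: interval [{lo:+.2f}, {hi:+.2f}] "
          f"with credibility {credibility:.2f}")
```

The two printed intervals correspond to the 68% and 95% confidence levels used throughout the examples below.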
Uncertainty evaluation for gauge blocks

As a first example, let us consider one of the examples proposed by the Collège Français de Métrologie (CFM), in particular the 27th example, on the calibration of gauge blocks [14]. In this example, different uncertainty sources are considered, as listed in Table 1. According to the CFM example, all effects are considered as random, and a specific probability density function (PDF) is assumed for each of them, together with its standard deviation, as listed in Table 1. Following the approach of the GUM [1], since those effects are uncorrelated, a combined standard uncertainty u_c = 32 nm is obtained [14], thus leading to a combined expanded uncertainty U_c = 64 nm, under the assumption of a normal PDF for the final result and a coverage factor k = 2, corresponding to an interval with 95% confidence level. However, some of these effects, namely those in rows 2, 4, 5, 9 and 10 of Table 1, show a systematic behavior, since they originate in the imperfections of the measurement process at calibration time. As a matter of fact, the length of the gauge blocks, their geometry, the deformation at the contact point and the accuracy of the sensors, at calibration time, are not random quantities. We simply cannot know their exact values, which are assumed to take single values inside given intervals. When the calibrated gauge block is used in a measurement process, this lack of knowledge cannot be considered a random effect affecting the measurement result, since the length of the gauge blocks, their geometry, the deformation at the contact point and the accuracy of the sensors do not vary during the measurement procedure. Therefore, this lack of knowledge represents a systematic effect. Hence, simply adding their assumed variances to obtain the combined uncertainty, as stated by the GUM, makes little sense. The representation of the calibration result in terms of an RFV, briefly recalled in the previous section, appears to be much more effective, since it allows one to consider the systematic effects in the internal PD, separated from the random ones. Therefore, starting from the available information listed in Table 1, considering the contribution from row 4 as total ignorance about the related error, and following the procedure shown in [15], the RFV shown in Fig. 2 is obtained, where the contributions in rows 2, 4, 5, 9 and 10 have been used to build the internal PD, and the other ones have been used to build the random PD. It can be noted that the α-cut at α = 0.32, corresponding to the interval with 68% confidence level, has a half-width of 0.06 μm, and the α-cut at α = 0.05, corresponding to the interval with 95% confidence level, has a half-width of 0.11 μm. In GUM terms, this latter value corresponds to the combined expanded uncertainty associated with the confidence interval at the 95% confidence level. Therefore, it is possible to compare this value with the combined expanded uncertainty U_c evaluated by the CFM. It follows that the RFV approach provides a larger value, as expected, since it takes into account the systematic behavior of some of the considered effects.
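For reference, the GUM combination that produced u_c = 32 nm can be sketched as a root-sum-of-squares of uncorrelated standard uncertainties. Since Table 1 is not reproduced here, the individual values below are hypothetical placeholders; only the structure of the computation follows the text.

```python
# Hedged sketch: GUM combination of uncorrelated standard uncertainties.
import math

# Hypothetical standard uncertainties (nm) of the uncorrelated contributions.
u_contributions = [20.0, 15.0, 12.0, 10.0, 8.0, 5.0]

u_c = math.sqrt(sum(u**2 for u in u_contributions))
U_c = 2.0 * u_c  # coverage factor k = 2, ~95% level under a normal PDF

print(f"combined standard uncertainty u_c = {u_c:.1f} nm")
print(f"expanded uncertainty U_c = {U_c:.1f} nm")
```

The RFV approach departs from this scheme precisely by refusing to add the variances of the systematic contributions, which instead build the internal PD.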
Uncertainty evaluation for a voltmeter

This second measurement example consists in the calibration of a Fluke 8845A digital multimeter (device under calibration, DUC), in DC voltage measurement, by comparison with a Fluke 8508A reference multimeter (standard). To do so, a relationship has to be established between the values measured by the standard and the corresponding indications of the DUC. Moreover, considering the uncertainty contributions affecting the standard and the calibration process, an accuracy estimate needs to be associated with the DUC. This can be done by following three different approaches: the GUM approach reported in [1], Monte Carlo (MC) simulations, as suggested in [2], or the RFV approach. The main goal of this example is to show that, following the RFV approach, a more informative accuracy estimate is obtained. Moreover, the RFV approach allows a more effective accuracy estimate than the GUM and MC approaches when the voltmeter is operated, after calibration, under unknown temperature conditions, which are likely to differ from the temperature at calibration time. The employed calibration procedure is the following: the same voltage is simultaneously measured by the standard, at a fixed operating temperature ϑ_S = 23 °C, and by the DUC, for different operating temperatures in the range 8 °C ≤ ϑ ≤ 38 °C. (For the sake of simplicity, the calibration procedure is here performed for a single voltage value; of course, it may be repeated for several voltage values covering the whole input range of the DUC.) The expected value of the voltage obtained from the standard is v̄_S = 9.999942 V, while multiple expected values v̄_DUC(ϑ) are obtained from the indications of the DUC, as shown in Table 2. Starting from v̄_S and v̄_DUC(ϑ), a correction value δ̄(ϑ) can be obtained as

δ̄(ϑ) = v̄_S − v̄_DUC(ϑ).   (4)

The evaluation of (4) completes the calibration procedure. Then, when the calibrated multimeter is employed to measure a voltage v_m, the correction δ̄(ϑ) can be applied according to

v = v_m + δ̄(ϑ).   (5)

Of course, due to the uncertainty affecting v̄_S and v̄_DUC(ϑ), the correction δ̄(ϑ) is also affected by uncertainty. Evaluating this last uncertainty contribution corresponds to estimating the residual uncertainty contributions associated with the calibrated voltmeter, i.e. its accuracy. In fact, according to (5), for a specific measured value v_m and a specific operating temperature ϑ, the uncertainty affecting the corrected value v is equal to the uncertainty affecting δ̄(ϑ). The uncertainty contributions involved in the calibration process are a type B contribution provided by the manufacturer of the standard (Δv_S = 34 μV), a type A contribution due to the experimental variability of v_S (σ_vS = 4 μV), and a type A contribution due to the experimental variability of v̄_DUC(ϑ), as shown in Table 2. Moreover, when the operating temperature of the calibrated voltmeter is unknown, a type B contribution is added to the final accuracy estimate of the voltmeter.

GUM approach

According to the GUM approach [1], the combined uncertainty u_δ(ϑ) can be found by combining the standard uncertainties of v_S and v_DUC through (4), starting from some assumptions about their probability density functions (PDFs). In particular, the standard uncertainty of v_S is obtained assuming a uniform PDF of width 2Δv_S and a normal PDF with standard deviation σ_vS, while the standard uncertainty of v_DUC is obtained assuming a normal PDF with standard deviation σ_vDUC(ϑ). Then, according to (5), for a given measured value v_m and a specific operating temperature ϑ, the uncertainty u_v is equal to u_δ(ϑ).
Another uncertainty contribution should be added in the evaluation of u_v in the case of an unknown operating temperature of the calibrated voltmeter in a given range. As an example, let us assume that during a measurement process after calibration the operating temperature may assume a constant, though unpredictable, value in the range 18 °C ≤ ϑ ≤ 28 °C. This means that the best estimate of the correction δ̄(ϑ) to be applied in (5) is now unknown. Following the GUM probabilistic approach, the only way to include this contribution to uncertainty is to consider an additional random contribution in the evaluation of u_v. In particular, the additional contribution is obtained by considering a uniform PDF of width

Δ = max_ϑ δ̄(ϑ) − min_ϑ δ̄(ϑ).   (6)

The main problem with this approach is that, both for fixed and unknown operating temperatures, the resulting PDF of v is unknown and, therefore, the confidence with which v is supposed to lie in the interval [v − K·u_v, v + K·u_v] is unknown, where K is the coverage factor. This confidence may be estimated by referring to the central limit theorem (CLT), even if few input variables have been combined. If a higher accuracy in the uncertainty evaluation is desired, Monte Carlo (MC) simulations should be considered, as suggested by GUM Supplement 1 [2]. In this case, the samples of v are directly obtained and they provide an estimate of the PDF of v. In Sec. 4.3, this PDF will be transformed into an equivalent PD [8] that will be compared with the ones obtained following the RFV approach.

RFV approach

The calibration procedure can also be modeled following the RFV approach. Considering the uncertainty contributions and associated PDFs discussed above, RFVs can be associated with v_S and v_DUC(ϑ), as shown in Figs. 3 and 4, respectively. Since only random contributions are considered for v_DUC(ϑ), its RFVs are composed of the sole random PDs r_vDUC|ϑ. Due to the presence of uncertainty, for each temperature ϑ, an RFV shall be associated with δ(ϑ), given by the difference between the RFVs associated with v_S and v_DUC(ϑ). The resulting internal and external PDs r_δ|ϑ composing these RFVs are shown in Fig. 5. The PDs r_δ|ϑ can be usefully employed to obtain, for a given measured voltage v_m and temperature value ϑ, the PDs of the corrected voltage values v. In fact, according to (5), the PDs r_v|(vm,ϑ) can be simply obtained as

r_v|(vm,ϑ)(x) = r_δ|ϑ(x − v_m).   (7)

As an example, the RFV of v for the specific values v_m = 9.99990 V and ϑ = 23 °C is shown in Fig. 6 (red lines). This figure shows that, after the correction, the most possible value of v is, as expected, v̄_S. The availability of the PDs r_v|(vm,ϑ) allows one to include in the accuracy estimate the effect of an unknown operating temperature in a given range. A uniform PD expressing total ignorance about the temperature in the range 18 °C ≤ ϑ ≤ 28 °C can be associated with ϑ, as shown in Fig. 7. According to [16], a principle can then be followed to include the information provided by r_ϑ in the accuracy estimate.
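Before comparing the approaches, here is a minimal Monte Carlo sketch of the GUM Supplement 1 route recalled above, including the unknown-temperature case. Δv_S = 34 μV, σ_vS = 4 μV, v̄_S = 9.999942 V and v_m = 9.99990 V are taken from the text; σ_vDUC and the temperature dependence of the DUC indication are hypothetical placeholders, since Table 2 is not reproduced here.

```python
# Hedged sketch: MC propagation of the voltmeter correction, GUM Suppl. 1 style.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

dv_S = 34e-6        # half-width of the uniform type B contribution (V)
sigma_vS = 4e-6     # type A std of the standard's readings (V)
sigma_vDUC = 6e-6   # hypothetical type A std of the DUC readings (V)

v_S = 9.999942 + rng.uniform(-dv_S, dv_S, N) + rng.normal(0.0, sigma_vS, N)

# Hypothetical linear drift of the DUC indication with temperature, used only
# to illustrate the unknown-temperature case in 18-28 degC.
theta = rng.uniform(18.0, 28.0, N)
v_DUC = 9.999900 + 1e-6 * (theta - 23.0) + rng.normal(0.0, sigma_vDUC, N)

delta = v_S - v_DUC            # samples of the correction, as in (4)
v = 9.999900 + delta           # corrected value for v_m = 9.99990 V, as in (5)

for level in (68, 95):
    lo, hi = np.percentile(v, [(100 - level) / 2, 100 - (100 - level) / 2])
    print(f"{level}% interval: [{lo:.6f}, {hi:.6f}] V")
```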
Comparison

The results obtained for the possible corrected values v following the GUM approach with the CLT, the MC approach, and the RFV approach are compared in Figs. 6 and 8. In particular, for the MC simulations, equivalent PDs of v are obtained (black and blue dashed lines), while for the GUM and CLT approach, the 68% and 95% confidence intervals are shown (magenta and cyan horizontal lines). To simplify the comparison, the 68% and 95% confidence intervals obtained following the three approaches are reported in Tables 3 and 4. For the RFV approach, type-2 confidence intervals are obtained. When a known temperature ϑ = 23 °C is considered (Fig. 6 and Table 3), the MC and RFV approaches provide compatible results. In fact, all the MC confidence intervals fall inside the RFV type-2 confidence intervals. In this respect, the RFV approach leads to a more informative result, since it considers the different random and non-random nature of the contributions to uncertainty, thus providing an uncertainty estimate due to the sole non-random effects (narrowest confidence intervals) and an estimate due to all effects (largest confidence intervals). On the other hand, the GUM and CLT approach provides only approximate results. When an unknown temperature in the range 18 °C ≤ ϑ ≤ 28 °C is considered (Fig. 8 and Table 4), the MC and RFV approaches provide different results. This is due to a different representation of the available knowledge about the unknown temperature. Following the GUM and CLT approach and the MC approach, the effect of an unknown temperature is modeled as an additive random contribution. However, the effect of an unknown temperature is not random but systematic, since it produces an (unknown) deviation of the most possible value of the correction δ̄(ϑ). Moreover, following these approaches, a specific PDF (uniform) is associated with the temperature, even though knowledge of the specific PDF is not available, as in the considered example. On the other hand, the systematic nature and the unknown PDF of the temperature are correctly represented by the RFV approach. According to Fig. 8 and Table 4, when the GUM and CLT approach and the MC approach are erroneously applied, an underestimate of uncertainty may be obtained.

Conclusions

The application of the RFV approach to uncertainty evaluation in two practical examples has been considered in this paper and compared with the results provided by the application of the GUM and its Supplement 1. The obtained results confirm that the RFV approach extends the purely probabilistic GUM approach and allows one to handle and combine also the contributions to uncertainty given by uncompensated systematic effects, when the only available knowledge related to these contributions is an interval into which they are supposed to lie. Therefore, the considered examples confirm that the RFV approach is a promising way to evaluate uncertainty in several practical applications where the systematic effects are not completely known and, hence, cannot be compensated for.

Figure 2. RFV representing the calibration result of a 50 mm gauge block in terms of deviation from the nominal value. The two intervals are those at the 68% (upper interval) and 95% (lower interval) confidence levels.
Figure 6. RFV (red lines), PD induced by MC simulations (black dashed lines), and confidence intervals provided by the GUM approach and CLT (magenta lines) associated with v for ϑ = 23 °C.
Figure 8. RFV (red lines), PD induced by MC simulations (black dashed lines), and confidence intervals provided by the GUM approach and CLT (magenta lines) associated with v for 18 °C ≤ ϑ ≤ 28 °C.
Table 1. Contributions to calibration uncertainty. N: normal PDF; T: triangular PDF; U: uniform PDF.
4,278.4
2015-01-01T00:00:00.000
[ "Computer Science" ]
Spinning out of control: wall turbulence over rotating discs

The friction drag reduction in a turbulent channel flow generated by surface-mounted rotating disc actuators is investigated numerically. The wall arrangement of the discs has a complex and unexpected effect on the flow. For low disc-tip velocities, the drag reduction scales linearly with the percentage of the actuated area, whereas for higher disc-tip velocities the drag reduction can be larger than the prediction found through the linear scaling with actuated area. For medium disc-tip velocities, all the cases which display this additional drag reduction exhibit stationary-wall regions between discs along the streamwise direction. This effect is caused by the viscous boundary layer which develops over the portions of stationary wall due to the radial flow produced by the discs. For the highest disc-tip velocity, the drag reduction even increases by halving the number of discs. The power spent to activate the discs is instead independent of the disc arrangement and scales linearly with the actuated area for all disc-tip velocities. The Fukagata-Iwamoto-Kasagi identity and flow visualizations are employed to provide further insight into the dynamics of the streamwise-elongated structures appearing between discs. Sufficient interaction between adjacent discs along the spanwise direction must occur for the structures to be created at the disc side where the wall velocity is directed in the opposite direction to the streamwise mean flow. Novel half-disc and annular actuators are investigated to improve the disc-flow performance, resulting in a maximum of 26% drag reduction.

I. INTRODUCTION

Turbulent skin-friction drag reduction has been the subject of growing interest in the fluid mechanics research community in recent decades. A breakthrough in this context would lead to lower fuel consumption and improved eco-sustainability in many industrial scenarios, and it is for this reason that great efforts are directed towards improving the understanding of the underlying physical mechanisms and developing novel drag reduction techniques. Flow control techniques can be classified as active or passive. Active methods are those which require an external energy input, while passive methods manipulate the flow field without a supply of energy. Amongst active methods there exists a further division between techniques which operate under closed- or open-loop control [1]. Closed-loop control requires sensors to measure the flow properties, thus allowing the control input to be adjusted according to a prescribed algorithm. Open-loop control is instead predetermined and does not respond to changes in the flow. As such it does not require sensors. Although numerical investigations of closed-loop flow control utilizing linear control theory have promised high drag reduction and significant net power savings (computed by taking into account the energetic cost of control), the experimental verification of these computational efforts poses enormous challenges. These relate to the very small spatial and temporal scales typically required to achieve such energetic performances. Progress is nonetheless being made with the fabrication of novel MEMS-based flow sensors and actuators [2]. According to the estimates of Wilkinson [3], however, the current production cost of such systems for use on a commercial aircraft would render their application prohibitively expensive.
Promisingly, active open-loop control reaches a compromise between complexity and performance. Since the pioneering direct numerical simulations (DNS) of Jung et al. [4] and the experiments of Laadhari et al. [5], the response of wall-bounded turbulent flows to spanwise, spatially uniform sinusoidal oscillations of the wall has become one of the most studied active open-loop techniques. The temporal forcing has been converted to spatial forcing in the form of standing waves and has been confirmed to produce wall-friction reductions of up to 40% [6]. Drag reduction is thought to occur because the intensity of the Reynolds stresses decreases as a result of the weakening of the turbulence structures [7]. Skote [8] employed the steady waves to alter a streamwise-developing boundary layer and observed strong suppression of low-speed streaks above those parts of the wall for which the velocity was maximum. Furthermore, Skote [9] showed that the improved drag reduction for spatial oscillations over temporal oscillations may be explained by an additional negative turbulence production term involving the streamwise gradient of the spanwise velocity. A generalization of the oscillating-wall and standing-wave forcing was proposed by Quadrio et al. [10], who studied the response of a turbulent channel flow to streamwise travelling waves of spanwise wall velocity.

In the numerical framework, ν* denotes the kinematic viscosity of the fluid and U*_p is the centreline velocity of the laminar Poiseuille flow at the same mass flow rate. The equivalent friction Reynolds number in the fixed-wall configuration is R_τ = u*_τ h*/ν* = 180, where u*_τ = √(τ*/ρ*) is the friction velocity, τ* is the space- and time-averaged wall-shear stress, and ρ* is the density. An open-source code, available on the Internet [25], is utilized to solve the incompressible Navier-Stokes equations using Fourier series expansions along the statistically homogeneous x* and z* directions, and Chebyshev polynomials along the wall-normal direction y*. A third-order semi-implicit backward differentiation scheme is used to advance the equations in time. The discretized equations are solved using the Kleiser-Schumann algorithm [26], described in Canuto et al. [27]. The nonlinear terms are treated explicitly and the linear terms implicitly. Dealiasing is carried out by setting the upper third of the modes in the x and z directions to zero. The wall boundary conditions were modified by RH13 to implement the disc motion. The code is parallelized using OpenMP and simulations have been carried out on the N8 HPC Polaris cluster. Post-processing has been performed on the Iceberg cluster at the University of Sheffield. Lengths are scaled by h*, velocities by U*_p, and time by h*/U*_p. Scaling using these outer units is not marked by any symbol. Quantities denoted by the + superscript are scaled in viscous units, i.e. with ν* and u*_τ, where u*_τ pertains to the uncontrolled reference case. For D=3.38, the size of the computational domain is (L_x, L_y, L_z) = (4.52π, 2, 2.26π) and, for D=5.02, (L_x, L_y, L_z) = (6.79π, 2, 3.39π). The resolution along x and z is constant in all cases, Δx⁺=10 and Δz⁺=5, corresponding to a number of Fourier modes equal to N_x=N_z=256 for D=3.38 and N_x=N_z=384 for D=5.02. The number of grid points in the wall-normal direction is kept constant at N_y=129. Nodes along y are clustered according to y(i) = cos[iπ/(N_y − 1)], where 0 ≤ i < N_y, giving Δy⁺_min=0.054 and Δy⁺_max=4.42. This allows greater resolution near the walls.
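A minimal sketch of the Chebyshev clustering of the wall-normal nodes described above; it recovers the quoted spacings once the grid spacings are scaled in viscous units with R_τ = 180.

```python
# Hedged sketch: the Chebyshev-clustered wall-normal grid, y(i) = cos[i*pi/(N_y-1)].
import numpy as np

N_y, R_tau = 129, 180
i = np.arange(N_y)
y = np.cos(i * np.pi / (N_y - 1))   # nodes from y = 1 (one wall) to y = -1 (other wall)

dy = np.abs(np.diff(y))
print(f"dy+_min = {dy.min() * R_tau:.3f}")   # ~0.054 at the wall
print(f"dy+_max = {dy.max() * R_tau:.2f}")   # ~4.42 at the centreline
```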
The time step is changed adaptively between Δt⁺_min=0.008 and Δt⁺_max=0.08. This reduces the computational cost by maximizing the CFL number within the range 0.2<CFL<0.4.

B. Arrangement of discs

The discs are located on both walls, have diameter D, and rotate steadily with an angular velocity Ω. The disc-tip velocity is W=ΩD/2. In RH13 the discs are arranged in a square packing scheme, with discs which are adjacent in the streamwise direction spinning in opposite directions and discs along the spanwise direction rotating in the same direction. This configuration was chosen to resemble the standing wave studied by Viotti et al. [6], and will henceforth be referred to as case 0. The layout for case 0 and the modified disc arrangements investigated herein are presented in Fig. 2. The coverage C is defined as the percentage of the wall surface which is in motion. For each arrangement, a coverage C_n is defined, with the subscript n referring to the layouts as numbered in Fig. 2. For the reference case studied by RH13 (case 0), C_0=78%. For case 5, the arrangement is not the hexagonal lattice that gives maximum coverage for the packing of equal circles (i.e. C=91%). As the channel domain must be rectangular, it is not possible to configure the discs in this manner whilst maintaining an integer number of discs. The layout shown at the bottom right of Fig. 2 is instead simulated. The coverage for this arrangement is C_5=84% and an integer number of discs is enforced. The spanwise length of the domain for case 5 is L_z=2.11π for D=3.38 and L_z=3.17π for D=5.02, due to the hexagonal disc arrangement. The disc diameters and velocities studied are D=3.38 and 5.02, and W=0.13, 0.26, 0.39, and 0.52. These forcing parameters are the ones that guarantee a high drag reduction of about 20% in the configuration studied by RH13. The term column is used to indicate disc alignment along the streamwise direction and the term row is used to denote disc alignment along the spanwise direction.

C. Averaging procedures and flow decomposition

The time average is defined as

f̄ = (1/(t_f − t_i)) ∫_{t_i}^{t_f} f dt,

where t_i and t_f denote the start and finish of the averaging time. The spatial average along the homogeneous directions is defined as

⟨f⟩ = (1/(L_x L_z)) ∫₀^{L_x} ∫₀^{L_z} f dx dz.

The flow field within the channel is expressed as the sum of three components, u = u_m + u_d + u_t, where u_m(y) = {u_m(y), 0, 0} = ⟨ū⟩ is the mean flow, u_d(x, y, z) = {u_d, v_d, w_d} = ū − u_m is the disc flow, and u_t represents the turbulent fluctuations. Flow fields have been computed over a minimum integration time of 1400 h*/U*_p. This time window does not include the initial transient from the start of the disc motion, during which the flow adjusts to the new forcing conditions. All statistical samples are doubled by averaging over both halves of the channel, by accounting for the existing symmetries with respect to the centreline of the channel.

D. Performance quantities

The turbulent drag reduction is defined as R(%) = 100 (C_{f,s} − C_f)/C_{f,s}, where C_f = 2τ*/(ρ* U*_b²) is the skin-friction coefficient, U_b = (1/2) ∫₀² u_m dy is the bulk velocity, and the subscript s denotes the stationary-wall case. Since simulations are carried out under constant mass flow rate conditions, U_b = 2/3 throughout. As shown by RH13, the power supplied to the discs to rotate them against the viscous resistance of the fluid, expressed as a percentage of the power needed to pump the fluid in the streamwise direction, is denoted by P_sp,t.
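A minimal sketch of the drag-reduction computation, assuming the standard definition R(%) = 100(C_{f,s} − C_f)/C_{f,s} given above; the skin-friction values are those quoted in the annular-gap discussion below, and the printed percentages recover the quoted R values to within rounding.

```python
# Hedged sketch: drag reduction from skin-friction coefficients.
def drag_reduction(c_f_actuated, c_f_stationary):
    """Percent reduction of the skin-friction coefficient."""
    return 100.0 * (c_f_stationary - c_f_actuated) / c_f_stationary

print(f"RH13 reference:    R = {drag_reduction(6.64e-3, 8.25e-3):.1f}%")  # ~19.5%
print(f"refined reference: R = {drag_reduction(6.68e-3, 8.19e-3):.1f}%")  # ~18.5%
```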
E. Annular gap

As in RH13, a small annular region of thickness c is simulated around each disc. The wall velocity in this region decays linearly from the maximum at the disc tip to zero at the stationary wall and is independent of the azimuthal direction. The azimuthal velocity u_θ varies with the radial coordinate r as u_θ = 2Wr/D for r ≤ D/2, and u_θ = W(D_0/2 − r)/c for D/2 < r ≤ D_0/2. This serves to mimic an experimental scenario where a gap would inevitably be present. As shown by RH13, the Gibbs phenomenon at the disc edges is also almost entirely suppressed. It would be significant if the gap were not simulated because of the velocity discontinuity at the boundary between the disc tip and the stationary wall. The effect of gap size on the performance quantities for D_0=3.56 and W=0.39 is shown in Fig. 3, where D_0=D+2c is the outer diameter of the circle occupied by the disc and the annular gap, as shown in Fig. 1. Although the Gibbs phenomenon does occur for c=0, it does not influence the computation of drag reduction as the effect is limited to the disc edge. The drag reduction decreases by about 1% as c increases from 0 to 0.08D_0. It then decreases more rapidly and, by c=0.12D_0, R is 70% of the value obtained without the annular gap. The power spent decreases almost linearly and more rapidly than R as the gap size increases. The averaged wall-shear stress therefore responds primarily to the large scales of the disc forcing, while the power spent shows a more marked dependence on the precise distribution of wall actuation. More evidence of this emerges in Sec. III G, where the dependence of these quantities on the spectral representation of the wall forcing is investigated. The gap size in the following cases is c=0.06D_0, which would most closely resemble the clearance in a water-channel or wind-tunnel set-up. The drag reduction computed in RH13 for D=3.38, W=0.39, and c/D_0=0.05 is R=19.5%, which is larger than the corresponding value estimated from the data in Fig. 3, R=18.5%. This discrepancy is larger than the uncertainty range of the numerical calculations. The difference between the C_f in the actuated-wall case in RH13 (C_f=6.64·10⁻³) and the C_f computed here for c/D_0=0.06 (C_f=6.68·10⁻³) leads to only a 0.4% difference in R if the stationary-wall C_f computed by RH13 is used as the reference case (C_f=8.25·10⁻³). More accurate resolution checks on the stationary-wall C_f lead to C_f=8.19·10⁻³, which explains the 1% difference in R.
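A minimal sketch of the wall boundary condition just described, assuming solid-body rotation over the disc and the linear decay across the annular gap; the parameter values are taken from the text.

```python
# Hedged sketch: azimuthal wall velocity of a disc with an annular gap.
import numpy as np

def u_theta(r, D=3.38, W=0.39, c_frac=0.06):
    """Azimuthal wall velocity at radial distance r from the disc centre."""
    D0 = D / (1.0 - 2.0 * c_frac)   # outer diameter: D0 = D + 2c with c = c_frac*D0
    c = c_frac * D0
    r = np.asarray(r, dtype=float)
    disc = r <= D / 2.0
    gap = (r > D / 2.0) & (r <= D0 / 2.0)
    out = np.zeros_like(r)                       # stationary wall beyond the gap
    out[disc] = 2.0 * W * r[disc] / D            # solid-body rotation, u = Omega*r
    out[gap] = W * (D0 / 2.0 - r[gap]) / c       # linear decay across the gap
    return out

r = np.linspace(0.0, 2.2, 5)
print(u_theta(r))
```

The profile is continuous at the disc tip (u_θ = W at r = D/2), which is what suppresses the Gibbs phenomenon in the spectral representation of the forcing.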
III. RESULTS

A. Drag reduction

As shown by the white symbols in Fig. 4, R scales linearly with C. This implies that the drag reduction is only produced by the shearing effect of the flow over the disc surface. The hexagonal arrangement (case 5), which gives the maximum wall coverage C_5=84%, also follows the linear scaling with C. The scaling starts to deteriorate for some of the cases with W=0.26 and 0.39 (light and dark grey symbols), and is completely lost for W=0.52 (bold white symbols). A different physical mechanism must be responsible for drag reduction in the cases which do not follow the linear scaling with coverage. Except for case 5 at W=0.39, in all the cases that do not fall on the straight lines, R is larger than the corresponding value predicted by the coverage scaling. The drag reduction for case 0 and W=0.52 (R=11.9%) is lower than the one given by cases 3 and 4 for the same W and D (R=15.5%), despite the removal of half of the discs in the latter cases. For cases with C_1=19.5%, in which the surface is covered by a fourth of the number of discs used by RH13, the additional drag reduction with respect to coverage increases monotonically with W. Although cases 2, 3, and 4 all have the same coverage, C=39%, the drag reduction values differ for the same W and D because they have different disc arrangements. Case 2, for which the discs are aligned in one column (upward-facing triangles), obeys coverage scaling up to W=0.39. Case 3, for which the discs are aligned along every other row (circles), and case 4, which has a checkerboard disc arrangement (diamonds), instead lose this scaling for W≥0.26. At the same W, the R values of cases 2 and 3 only differ by small amounts, which are within the uncertainty range for all the W tested. For 0.26≤W≤0.39, it follows that the additional drag reduction with respect to the value predicted by the linear scaling with coverage occurs when a portion of stationary wall of the streamwise extent of one diameter is present between discs. The spanwise space between discs does not have an effect, because case 3 (discs next to each other along z) and case 4 (spanwise space at either side of discs) lead to the same drag reduction. The hexagonal arrangement, C_5=84%, presents drag reduction values which are shifted below the coverage line for W=0.39. This is consistent with the upward shift of the cases which present a streamwise region of stationary wall: in the hexagonal arrangement the streamwise spacing between discs is instead reduced, and therefore the drag reduction deteriorates with respect to the coverage line. The drag reduction given by case 2 (discs aligned in one column) loses the linear scaling only at W=0.52, even though no streamwise spacing is present. An upward shift with respect to the coverage line also occurs for case 5 at W=0.52. Similarly to the upward shift of case 2 at the same W, this is not due to the streamwise fixed-wall space, as in cases 1, 3, and 4, because the discs are closely packed along the streamwise direction. Nor is it due to the spanwise space of fixed wall at the side of each disc, because the additional drag reduction is the same in cases 2 and 5, although case 2 displays more spanwise space than case 5. The drag reduction at W=0.52 being higher than the value predicted by the linear scaling with coverage remains unexplained at this point. By defining a new quantity, E=R/C, the coverage gain of the disc actuators is given as the drag reduction induced per actuated area. For cases in which E>E_0, where E_0=R_0/C_0 is the coverage gain for case 0, larger drag reduction occurs compared to case 0 for the same number of discs. It emerges that the gain is null at W=0.13, independent of C at W=0.26 for the cases that do not follow coverage scaling, and at its maximum at low coverage and high W. For the cases examined heretofore, the displacement between adjacent streamwise and spanwise disc centres has been either D_0 or 2D_0. More arrangements of discs can be studied by defining the spacings S_x=x_d/D_0 and S_z=z_d/D_0, where x_d and z_d are the distances between neighbouring disc centres in the x and z directions, respectively. S_x and S_z are shown graphically for case 3 in Fig. 2. Fig. 6 (left) shows R for different S_x and S_z with disc parameters D=3.38, W=0.52. An optimum spacing is found for (S_x, S_z)=(1.5, 1), resulting in R=17%. For comparison, the RH13 value (case 0) is R=12% for the same disc parameters. As R scales with coverage at low W, a prediction of the drag reduction engendered by the discs is attempted, starting from the data computed in Viotti et al. [6] (page 10) for the standing-wave case.
As noted by RH13, the wall forcing created along the disc centres is similar to a triangular wave of wavelength λ_x=2D_0 and amplitude W. The drag reduction given by the discs can be predicted as R_pred = C_w · C_θ · C · R_sw, where C_w is the scaling factor due to the waveform, C_θ models the effect of the orientation of the wall forcing, C accounts for the wall coverage, and R_sw is the drag reduction in the standing-wave case by Viotti et al. [6] for λ_x=2D_0. The factors are approximated as follows.

Waveform: It is known that temporal and spatial forcing can be largely treated as analogous to one another [10]. The temporal non-sinusoidal spanwise wall forcing investigated by Cimarelli et al. [28] can thus be used to gauge the influence of the spatially non-sinusoidal spanwise wall forcing of the discs. Waveform j on page 4 of Cimarelli et al. [28] closely resembles the triangular-wave spanwise forcing of the discs, which results in C_w=85%.

Streamwise forcing: The streamwise forcing which is present in the disc technique does not occur in the standing-wave case studied by Viotti et al. [6]. The effect of wall oscillations at an angle θ with respect to the mean flow has been studied by Zhou and Ball [29]. While pure spanwise oscillations produce the maximum drag reduction, the response to streamwise oscillations reduces to a third of it. The influence of wall-forcing orientation is accounted for by C_θ=75%, estimated by averaging Zhou and Ball's data over the angle of wall forcing.

Coverage: This is quantified by the coverage value C_n for each case, given in Fig. 2.

The table in Fig. 6 (right) shows the R values for three sample layouts and disc parameter combinations. The prediction R_pred of the numerically computed R is excellent for the cases tested.
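A minimal sketch of the prediction R_pred = C_w · C_θ · C · R_sw described above; C_w = 85% and C_θ = 75% are taken from the text, while the standing-wave value R_sw for λ_x = 2D_0 is a hypothetical placeholder, since the Viotti et al. data are not reproduced here.

```python
# Hedged sketch: coverage-based prediction of the disc-flow drag reduction.
def predicted_drag_reduction(coverage, r_sw, c_w=0.85, c_theta=0.75):
    """Predicted R (%) from waveform, forcing-angle, and coverage factors."""
    return c_w * c_theta * coverage * r_sw

R_sw = 35.0   # hypothetical standing-wave drag reduction (%) at lambda_x = 2*D_0
for case, coverage in [("case 0", 0.78), ("case 3", 0.39), ("case 5", 0.84)]:
    print(f"{case}: R_pred = {predicted_drag_reduction(coverage, R_sw):.1f}%")
```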
Power spent

The effect of coverage on the power spent is now studied; the power spent is shown as a function of C in Fig. 7. The numerical values are found in Appendix A. For all W, the linear scaling of the power spent with coverage is excellent and much more robust than for the drag reduction, shown in Fig. 4. The power spent therefore does not depend on the disc arrangement for fixed C. This follows from the power spent being solely related to the wall motion and largely independent of the dynamics of turbulence within the channel. The solid lines represent the laminar prediction of the power spent, P_sp,l, calculated from the solution to the flow induced by an infinite disc rotating beneath a quiescent fluid [30]. An amended and improved version of the formula in RH13, which now takes into account the effect of the gap flow, is derived in Appendix B as equation (3), where G_k=−0.61592 is given in Schlichting [31], and R_p and R_τ are the Poiseuille and friction Reynolds numbers, respectively, defined in Sec. II A. Equation (3) predicts P_sp,t well, with the turbulent P_sp,t being always slightly larger than the laminar P_sp,l.

B. The Fukagata-Iwamoto-Kasagi identity

In this section, the FIK identity [15] is used to further understand the mechanism of drag reduction for the disc arrangements studied in Sec. III A. This identity quantifies the contributions of the laminar flow and of the Reynolds stresses to the skin-friction coefficient. RH13 and WR14 showed that through this identity it is possible to distinguish two separate contributions to drag reduction, which arise from (a) the modification of the turbulent Reynolds stresses relative to the uncontrolled case, and (b) the Reynolds stresses u_d v_d, related to the structures appearing between discs and described in RH13 on page 13 and in WR14 on pages 557-558. The drag reduction is written as R=R_t+R_d, where R_t synthesizes effect (a) and R_d is related to (b); both take the form of wall-normal integrals of the corresponding Reynolds-stress differences, weighted by the distance from the wall, and their full expressions are given in RH13 and WR14. Fig. 8 shows R_t and R_d (light and dark grey, respectively) for each layout and different W for D=3.38. For case 0, the contribution from R_t increases from 7% at W=0.13 to 13% at W=0.26 and 0.39. It decays to 6% for W=0.52. In the oscillating case studied by WR14, R_t scales linearly with the disc boundary layer thickness δ, defined in RH13 and WR14 as a measure of the viscous diffusion from the disc surface. Using data from RH13, R_t also scales linearly with δ for steady rotation. Furthermore, R_t scales with coverage at W=0.13 for all layouts. The contribution to the overall drag reduction from R_d is negligible for cases 1 and 2 at all W, for which there is no spanwise interaction between the discs, and for all cases at W=0.13. The impact of the interdisc structures on drag reduction, synthesized by R_d, becomes important for cases 3 and 4, whose R_t and R_d values are the same for the same W. The cases for which R_d attains a finite value are boxed by the dashed line. Spanwise interaction between the discs must therefore be important for the formation of these structures, although at this stage it is still not clear why cases 3 and 4 have the same R_t and R_d values despite the shift of columns. For the cases boxed by the solid line, coverage scaling applies and structures do not appear, although in RH13 for W=0.26 and 0.39 the structures do contribute to the overall drag reduction.
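A minimal sketch of an FIK-style decomposition of the two stress contributions discussed above, assuming the standard weighting of the Reynolds stresses by the distance from the wall over the half-channel. The stress profiles and the normalization are hypothetical placeholders, not the RH13/WR14 expressions.

```python
# Hedged sketch: weighted-integral (FIK-style) contributions of two
# Reynolds-stress profiles to the skin friction.
import numpy as np

y = np.linspace(0.0, 1.0, 201)                        # wall (0) to centreline (1)
uv_turb = -0.8e-3 * np.sin(np.pi * y)                 # hypothetical <u_t v_t> change
uv_disc = -0.3e-3 * np.exp(-((y - 0.1) / 0.08)**2)    # hypothetical <u_d v_d>

def fik_contribution(uv, y):
    """Trapezoidal integral of the stress weighted by distance from the wall."""
    f = (1.0 - y) * (-uv)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

print(f"turbulent-stress contribution: {fik_contribution(uv_turb, y):.2e}")
print(f"disc-flow-stress contribution: {fik_contribution(uv_disc, y):.2e}")
```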
C. Flow visualizations

The contribution of R_d in cases 3 and 4 is proved to be important through the use of the FIK identity. Therefore, we resort to flow visualizations to display the interdisc structures that are responsible for R_d. Isosurfaces of q=0.08 are shown in Fig. 9 for cases 3 and 4, the white arrows indicating the direction of disc rotation. In both cases the disc boundary layers are clearly visible. The plots show the presence of the tubular structures first shown in RH13, elongated in the streamwise direction and situated between discs which are adjacent in the spanwise direction. For cases 1 and 2, the structures are instead not evident for similar values of q. The only instances where the structures are clearly visible occur when there is spanwise interaction between the discs. This happens only for W≥0.26 and for cases 0, 3, and 4, where the distance between the nearest disc centres is smaller than or equal to √2 D_0. A contour of u_d v_d for case 3 at y⁺=14 is shown in Fig. 10, indicating the disc side where the structure is created. The contour for case 4 is nearly identical. Differently from the experimental study by Klewicki and Hill [20] of the laminar flow over a finite rotating surface patch, structures are not visible over both sides of the disc. They do, however, propagate downstream parallel to the mean flow, like the structures observed by Klewicki and Hill. Fig. 10 shows that in all cases where there is a contribution from R_d, the structures originate from the disc side where the wall forcing is along the upstream direction. When only one disc is included in the domain, the structures do not appear. Therefore the structures are created: i) when there is sufficient spanwise interaction between discs, i.e. W≥0.26 and the distance between disc centres located in adjacent columns is smaller than or equal to √2 D_0, and ii) at the disc sides where the wall streamwise motion is in the opposite direction to the mean flow.

D. Radial streaming

The FIK identity and flow visualizations of the structures have been useful to shed further light on the formation of the interdisc structures, but have not helped to explain the extra drag reduction effect with respect to coverage, discussed in Sec. III A. To gain more insight, since streamwise fixed-wall space is a common feature of the cases which present the additional drag reduction, the flow between discs is studied. The streamwise development of R along the disc centreline in case 3 is shown in Fig. 11 by the solid line. The drag reduction is non-zero at the disc centre and asymmetric about this point. A local peak of maximum drag reduction of 95% occurs in the upstream disc region, and an intense drag increase appears in the downstream disc region. Between discs there is a region of about R=20% that is responsible for the additional drag reduction with respect to coverage. This region must be created through the interaction between the mean flow and the disc flow, because the net disc-flow wall-shear stress would be null if u_m=0, i.e. if the streamwise pressure gradient were absent, owing to the disc-flow symmetry. By use of the laminar solution, the skin-friction coefficient along the disc centreline can be predicted, with F_k=0.51 given in Schlichting [31]. This prediction is not rigorous, as the interaction between the mean and disc flows is not considered and end effects are neglected. Despite this, as shown in Fig. 11, the gradient of R with respect to x is well predicted on the disc surface, although the drag reduction computed via the laminar solution is higher than 100% due to flow reversal, as the disc edge is not modelled. The DNS trend of R is shifted along x by about 45ν*/u*_τ relative to the laminar prediction. This is consistent with the streamwise shift in the disc flow of about 100ν*/u*_τ observed at y⁺=8 in the oscillating-disc case by WR14. This shift must be due to the interaction between the mean and disc flows, which is not considered in the laminar analysis. To further investigate the flow above the fixed-wall region between discs, the downstream development of u_d along the centreline of the discs, shown in Fig. 12 (left), is studied. The profiles are separated by 40ν*/u*_τ and those on the disc surface are indicated by the grey bars. From the beginning of the domain up to about the disc centre, the disc creates a radial flow along the negative x direction which retards the streamwise flow, thereby causing drag reduction. From the centre of the disc up to the downstream disc tip, the radial flow enhances the streamwise flow, resulting in drag increase. The radial flow is most energetic near the disc tips, and this is represented by the peaks of drag reduction and drag increase in Fig. 11. The streamwise shift in the disc flow is also evident in Fig. 12 (left), shown by the switch from negative to positive u_d occurring between points C and D at a distance of about 80ν*/u*_τ downstream of the disc centre.
The disc flow persists further in the upstream direction than it does downstream, which explains the region of drag reduction above the fixed wall in Fig. 11. The disc flow upstream of a disc persists for 480ν*/u*_τ from the upstream disc tip. In Fig. 12 (left), the wall-normal location of the peak of the u_d profile varies above the disc, whereas in the laminar solution this location is invariant. The difference must be accounted for by the interaction of the disc flow with the mean streamwise flow. Immediately off the disc surface, the peak y-location of the disc flow increases by Δy=0.015. As the wallward flow above the disc caused by the von Kármán pumping effect does not occur above the fixed wall, the radial flow is allowed to diffuse further into the channel. Fig. 12 (right) presents the radial flow u_r as a function of y at two locations on the disc surface. A graphical definition of u_r is provided in Fig. 12 (inset). The thick solid line is the radial flow above the disc at x=2.72, z=1.36, displaced by r=1.04 from the disc centre. The dashed line is the laminar prediction for the disc flow at the same r. It is evident that at the same location the laminar and turbulent flow profiles do not coincide. The thin solid line indicates the turbulent disc flow at a location 100ν*/u*_τ downstream of the laminar prediction (x=3.27, z=1.36). At this location the turbulent and laminar profiles are almost identical for y<0.05, confirming the downstream shift of the disc flow.

E. Half-disc actuators

As evidenced by Fig. 11, the radial flow induced by the downstream half of the discs causes drag increase. To eliminate this effect, a half-disc configuration is studied, whereby the downstream disc half is covered and the wall velocity there is zero. The half-disc actuators are investigated for D=3.38, 5.02 and W=0.13, 0.26, 0.39. The drag reduction data for the half-disc simulations (subscript h) are presented in the table in Fig. 13 (right), together with the corresponding data for case 0 (subscript 0). As shown in Fig. 13 (left), the negative effect of the downstream radial flow is eliminated by covering this portion of the disc. The azimuthal flow, which contributes favourably to drag reduction, is also removed. As expected, the prediction of the laminar solution (dashed lines) is worse than in the full-disc case. For both disc diameters and W=0.26, the drag reduction decreases when the downstream disc half is covered. This is because for low W the negative effect of the radial flow is less important than the benefit of the azimuthal forcing. For W>0.26, the drag reduction increases when the downstream disc half is covered, and a maximum R_h=25.6% is computed. For high W, the removal of the downstream disc section and the associated radial flow therefore outweighs the loss of the beneficial effects induced by the azimuthal flow. Although the increased drag reduction from this configuration is an interesting result, our model contains many simplifications. In an experimental set-up, a step would occur between the covered and uncovered halves of the disc, resulting in recirculation regions. Neither this nor any interaction between the mean flow and the disc housing is considered. A novel flow-control device has been realized experimentally by Koch and Kozulovic [32], who performed boundary layer experiments on a disc set-up with one spanwise half covered. Differently from our actuators, this is a passive method, as the disc motion is driven by the mean flow and there is no external power input.
As the uncovered disc half rotates, the velocity difference between the mean flow and the wall decreases, thereby reducing the wall-shear stress while drawing energy from the mean flow. A discussion must be included on the categorization of flow control methods as either drag reduction or pumping [33]. For the original disc actuators studied by RH13 (case 0 in Fig. 1), although a mean flow is induced by the discs in the absence of a streamwise pressure gradient, this mean flow is null when averaged along the streamwise direction. Therefore, RH13's disc-flow control method can be categorized as drag reduction. For the half-disc technique, a net upstream mean flow is instead created in the absence of a streamwise pressure gradient as an indirect response to the wall forcing, whose average in either the spanwise or streamwise direction is null. The half-disc method can thus be classified as indirect pumping. Direct pumping would instead occur if the reduction of wall friction were induced by a body force or a wall velocity distribution which is not zero when averaged along the streamwise direction.

F. Annular actuators

The laminar solution provides further direction for improvement of the disc-flow technique. The wallward flow produced by the von Kármán pump, which is uniform over the disc surface in planes parallel to the wall, can be expected to direct the streamwise flow towards the wall, causing a detrimental effect on drag reduction. Furthermore, the azimuthal forcing near the disc centre is of low velocity and, as shown in Sec. II E, the large-scale forcing appears to be important for drag reduction. Therefore, annular actuators are studied, with the intent of attenuating the wallward flow and eliminating the low-velocity motion near the disc centre, which is thought to have a marginal contribution to drag reduction. The ratio of the internal and external radii, a=r_i/R, is varied from 0 to 1, and the drag reduction and power spent are shown as functions of a in Fig. 14. A schematic of the actuators is shown in Fig. 14 (inset). The drag reduction remains approximately constant at R=19% for a<0.375. An optimum of R=20% is reached at a=0.6, beyond which the drag reduction decreases. This confirms the prediction that the flow induced near the disc centre has an overall negative effect on drag reduction. Beyond the optimum a=0.6, the removal of the central part of the disc causes a sharp decrease in R to a null value for a=1. The power spent, shown in Fig. 14, instead shows a rapid monotonic decrease as a increases. Analogously to the changes due to the gap size, shown in Fig. 3, the response of the power spent to the change in wall boundary conditions is more significant than that of the drag reduction.

G. Spectral truncation

The investigation of annular actuators confirms that the large-scale forcing is important for drag reduction. The spectral representation of the boundary conditions is therefore examined to elucidate the effects of large- and small-scale forcing. By truncating the number of Fourier modes that describe the disc motion, it is possible to force only a specified range of scales. The proportion of modes forced in the homogeneous directions is given by k(%)=100 k_f,i/N_i, where k_f,i is the maximum forced wavenumber, N_i is the total number of modes, and the subscript i denotes the streamwise or spanwise direction. The truncation of modes is symmetrical in each direction, so that k=100 k_x/N_x=100 k_z/N_z. The drag reduction and power spent are plotted as functions of k in Fig. 15 (left).
As the number of forced modes increases, both R and P_sp,t asymptotically approach the values given when all of the modes are included. The drag reduction reaches its asymptotic value when only k=8% of the modes are forced, while P_sp,t reaches its asymptote at k=47%. The contour plots of the azimuthal wall velocity for these truncations are shown in Fig. 15 (insets). Fig. 15 (right) displays the energy contained within the streamwise modes. A large proportion of the energy is contained within the low-wavenumber modes. The energy of the wall streamwise velocity has a peak value at k_x=2, then drops monotonically with k_x up to about k_x=50, at which it attains small values comprised between 10⁻⁵ and 10⁻⁶. The energy of the wall spanwise velocity has peaks of amplitude decreasing continuously by more than one order of magnitude and occurring at k_x=2, 14 and 82. These peaks are separated by minima at k_x=6 and 54 of magnitude 10⁻² and 10⁻⁵, respectively. The results in Fig. 15 (left) bear analogy with the effects of the gap size and of the annular actuators on the performance quantities, presented in Secs. II E and III F, respectively. In all cases it is evident that the large-scale forcing is most responsible for the drag reduction, shown by the lack of significant change in R when high-wavenumber modes are eliminated from the disc spectral representation, the gap size is increased, or the central part of the disc is removed. This is significant, as it means that low-order models, which only capture prescribed features of the turbulence dynamics, might be sufficient for computing accurate values of drag reduction. The boundary conditions have also been modified to force only either the spanwise or the streamwise wall velocity. Drag increase occurred in both cases. This shows that a fully nonlinear mechanism must be responsible for drag reduction.

IV. SUMMARY

This paper has presented results on the rotating-disc method for drag reduction. A summary of these results is presented herein.

• The effects of coverage and layout on the performance of the disc technique have been investigated, with unexpected gains in R found upon the removal of discs. For example, for disc-tip velocity W=0.52, the removal of half of the discs leads to an increase in R. At this W, an optimal spacing of 1.5D_0 between disc centres results in an additional drag reduction of R=5% relative to the RH13 layout. For intermediate values of W, the gain in R always occurs when streamwise space of stationary wall is present between discs.

• For low W, the drag reduction scales linearly with coverage and is well predicted from the standing-wave data by Viotti et al. [6] through scaling factors that account for the changes in waveform, angle of wall forcing, and coverage.

• The power spent to actuate the discs is well predicted by the laminar solution, does not depend on the disc arrangement, and scales with coverage for all W.

• The FIK identity and flow visualizations have been useful to elucidate the criteria for the formation of the structures appearing between discs. The structures are created only when there is sufficient interaction between spanwise-neighbouring discs and at the disc sides where the wall streamwise motion is in the opposite direction to the mean flow. The disc-tip velocity must be W≥0.26 and the maximum spacing between disc centres in neighbouring columns must be √2 D_0, where D_0 is the outer diameter of the circle occupied by the disc and the annular gap.
• It has been shown that the radial flow due to the von Kármán pumping effect creates a viscous layer over areas of stationary wall between discs. This boundary layer is responsible for the additional drag reduction with respect to the value predicted through the scaling with the actuated area. • Novel half-disc and annular actuators have been simulated to improve the drag reduction effect, resulting in a maximum of R=26%. A comparison between these disc-flow drag reduction data and those of other drag reduction techniques is given in Table I.
Table I.
Control strategy                  R max (%)   Details
Riblets [34]                      12          Sinusoidal riblets with spanwise modulation
Opposition v-control [35]         25          Control with wall-normal velocity
Opposition w-control [35]         30          Control with spanwise velocity
Oscillating wall [36]             45          Oscillation period T+ = 100, amplitude W+ = 27
Steadily rotating discs (RH13)    23
• According to the categorization proposed by Hoepffner and Fukagata [33], the original disc actuators studied by RH13 have been classified as a drag reduction method. The half-disc actuators have instead been classified as an indirect pumping method. The term pumping arises from the net upstream flow that would be created by the half discs even without a streamwise pressure gradient, while the term indirect indicates that this upstream flow is engendered even though the forcing at the wall is null when averaged along the streamwise direction. • The effect of the forcing scales on the drag reduction and on the power spent has also been studied. Truncation of the number of forced modes in the boundary conditions has shown that it is the larger scales that most contribute to drag reduction. The power spent has a more marked dependence on the precise spectral representation of the wall forcing than the drag reduction. ACKNOWLEDGMENTS We would like to thank the Department of Mechanical Engineering at the University of Sheffield for funding this research. This work would not have been possible without the use of the computing facilities of N8 HPC, funded by the N8 consortium and EPSRC (Grant EP/K000225/1) and coordinated by the Universities of Leeds and Manchester. Our thanks also go to Prof. J.F. Morrison for recommending the Klewicki and Hill paper and to Mr Alessandro Melis for his insightful comments on a preliminary version of the manuscript. Appendix A: Table of drag reduction data The data for R and P sp,t are given in Table II. Appendix B: Laminar power spent The laminar flow solutions to the flows induced by spinning and oscillating discs were used by RH13 and WR14 to predict the work done to enforce the disc motion. Therein the laminar power spent to actuate the discs is expressed as the ratio between the power spent to actuate the discs, P sp,l, and the power spent to drive the fluid in the streamwise direction, P x. The efficiency of the mechanical system used to power the discs is not considered in the computation of either P sp,l or P sp,t. RH13 and WR14 considered P sp,l as being the volume-averaged power spent above the disc surface (i.e. averaged over πD 2 h/4). This is equivalent to computing the power spent averaged over the actuated wall area. P x was computed as the average over the volume D 2 0 h. The contribution to the power spent due to the annular flow between the disc and the stationary wall was not considered. In the following P sp,l is averaged over the whole wetted area for a meaningful comparison with the power spent computed through DNS. The contribution of the gap flow to the power spent is also accounted for.
The derivations of the adjusted formulae are outlined below. By taking the volume integral of the viscous stresses work term in equation (1-108) of Hinze [37] as follows There are two distinct intervals over which the integral must be taken. The first considers the disc surface (i.e. for r≤D * /2, u * θ =2W G(η)r * /D * , where G(η) is tabulated by Schlichting [31] and η = y * 2W * /(ν * D * ) is the scaled wall-normal coordinate) and the second considers the annular flow for D * /2<r * <D * 0 /2. To include the gap into the calculation it is assumed that within this region the wall-normal scaling remains the same as the von Kármán solution and that the angular velocity within this region is therefore given by u * θ,g =W * G(η)(D * 0 /2 − r * )/c * . Expression (B2) then becomes Dividing (B3) by the power spent to drive the fluid in the streamwise direction and scaling in outer units yields the formula for the percent laminar power spent to move the discs, Fig. 16 (left) presents the RH13 data for P sp,t versus P sp,l computed from formula (B4). The agreement of P sp,l with the DNS data is much better with the corrected averaging and improvement. The laminar power spent formulae presented in WR14 are now derived to incorporate the annular clearance flow. Formula (3.6) in WR14 is amended and improved as follows P * sp,l = π 3/2 G(γ)W * 2 D * 2 where G(γ)=(2π) −1 2π 0 G(0, t)G (0, t)dt and γ=T * W * /(πD * ). Dividing by P * x and scaling in outer units yields P sp,l expressed as a percentage of the power spent to drive the fluid in the x direction, P sp,l (%) = 100(πR p ) 3/2 G(γ)W 2 which is amended formula (3.8) in WR14. Fig. 16 (right) shows a much improved agreement of P sp,l with the DNS data for the oscillating-disc flow as well. An analytical approximation to G for γ 1 is given in equation (3.10) of WR14. In the limit γ 1 Rosenblat [38] derives a first-order approximation to u * θ . Upon substituting this approximation into (B1) and integrating the viscous stresses over the volume, the first-order approximation to P * sp,l is found. Expressed as a percentage of P x , this is The asymptotic limit of G for γ 1 is found by WR14 to be G γ 1 =G s γ/2, where G s =−0.61592 is given in Rogers and Lance [18]. By substituting this into (B5) the asymptotic form of the power spent in the limit γ 1 is found We close this appendix with a note on the power transfer to and from the discs. The spatial distribution of the power spent is presented in Fig. 11 of RH13. Therein it is stated that the areas for which this power is positive indicate regions where the fluid performs work on the disc, and that this is a spatially localized regenerative braking effect. This latter terminology is used incorrectly, as pointed out by Prof. J.F. Morrison (personal communication). Although it is true that over these areas the disc motion is aided by the fluid, no energy can be extracted or stored. For this reason the term 'regenerative braking' does not apply to the steadily rotating discs. For the oscillating wall however the phenomenon occurs in time. Therefore as for some phases of the oscillation the net power transfer to the wall is positive over the whole wetted area, it could theoretically be possible for the energy to be stored and reused. In this instance, the term regenerative braking is appropriate.
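The laminar formulae above lean on the von Kármán similarity functions: G(η) tabulated by Schlichting [31] and the wall slope G_s = −0.61592 quoted from Rogers and Lance [18]. Purely as an illustrative cross-check, these values can be recovered numerically by solving the rotating-disc similarity equations as a boundary-value problem; the domain truncation, mesh, and initial guess in the sketch below are assumptions rather than values taken from the paper.

```python
# Minimal sketch: recover the von Karman similarity functions F (radial),
# G (azimuthal) and H (axial) for a disc steadily rotating beneath still fluid,
# and check the wall slopes against the tabulated values used in Appendix B
# (F'(0) ~ 0.5102, G'(0) ~ -0.6159 = G_s). The far field is truncated at
# eta = 12 and a crude exponential guess is used (both assumptions).
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    # y = [F, F', G, G', H]
    F, Fp, G, Gp, H = y
    return np.vstack([Fp,
                      F**2 - G**2 + H * Fp,   # radial momentum
                      Gp,
                      2.0 * F * G + H * Gp,   # azimuthal momentum
                      -2.0 * F])              # continuity

def bc(y_wall, y_far):
    # no radial slip and solid-body rotation at the wall; quiescent far field
    return np.array([y_wall[0], y_wall[2] - 1.0, y_wall[4], y_far[0], y_far[2]])

eta = np.linspace(0.0, 12.0, 400)
guess = np.zeros((5, eta.size))
guess[0] = 0.4 * eta * np.exp(-eta)   # rough radial-flow bump
guess[2] = np.exp(-eta)               # rough azimuthal decay
sol = solve_bvp(rhs, bc, eta, guess, tol=1e-8, max_nodes=20000)

print("F'(0) =", float(sol.sol(0.0)[1]))   # approx  0.5102
print("G'(0) =", float(sol.sol(0.0)[3]))   # approx -0.6159  (G_s)
```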
10,994.2
2014-12-04T00:00:00.000
[ "Engineering", "Physics" ]
An Outbreak of tet(X6)-Carrying Tigecycline-Resistant Acinetobacter baumannii Isolates with a New Capsular Type at a Hospital in Taiwan Dissemination of multidrug-resistant, particularly tigecycline-resistant, Acinetobacter baumannii is of critical importance, as tigecycline is considered a last-line antibiotic. Acquisition of tet(X), a tigecycline-inactivating enzyme mostly found in strains of animal origin, imparts tigecycline resistance to A. baumannii. Herein, we investigated the presence of tet(X) variants among 228 tigecycline-non-susceptible A. baumannii isolates from patients at a Taiwanese hospital via polymerase chain reaction using a newly designed universal primer pair. Seven strains (3%) carrying tet(X)-like genes were subjected to whole genome sequencing, revealing high DNA identity. Phylogenetic analysis based on the PFGE profile clustered the seven strains in a clade, which were thus considered outbreak strains. These strains, which were found to co-harbor the chromosome-encoded tet(X6) and the plasmid-encoded blaOXA-72 genes, showed a distinct genotype with an uncommon sequence type (Oxford ST793/Pasteur ST723) and a new capsular type (KL129). In conclusion, we identified an outbreak clone co-carrying tet(X6) and blaOXA-72 among a group of clinical A. baumannii isolates in Taiwan. To the best of our knowledge, this is the first description of tet(X6) in humans and the first report of a tet(X)-like gene in Taiwan. These findings identify the risk for the spread of tet(X6)-carrying tigecycline- and carbapenem-resistant A. baumannii in human healthcare settings. Introduction The emergence of multidrug-resistant Gram-negative bacteria poses a serious threat to global health. Acinetobacter baumannii is a troublesome nosocomial pathogen that causes pneumonia, sepsis, and wound and urinary tract infections, and particularly leads to severe disease and death in intensive care unit (ICU) patients [1][2][3][4][5][6]. Tolerance to desiccation and evasion of host immunity, together with the notorious antimicrobial resistance of A. baumannii, confer an advantage for the environmental and in-host survival of this microorganism. The spread of multidrug-resistant A. baumannii (MDRAB) has increased rapidly, and A. baumannii strains resistant to carbapenem, which has been used to treat MDRAB infections, has emerged [7][8][9][10][11][12][13]. Colistin and tigecycline are the two last-resort antibiotic options for treatment of infections caused by carbapenem-resistant A. baumannii. However, cases of colistin-or tigecycline-resistant A. baumannii infections have been reported worldwide [14][15][16][17]. Tigecycline is a tetracycline family antibiotic that inhibits bacterial protein synthesis by interacting with the 30S ribosomal subunit and inhibiting tRNA entry [18]. Compared to classical tetracyclines, tigecycline exhibits a higher affinity for ribosomes. However, tigecycline-resistant bacteria have emerged with the increasing use of tigecycline [19]. The primary mechanisms of tigecycline resistance are associated with mutations in the ribosome that block drug binding or result from overexpression of efflux proteins that actively pump out the drug. For example, mutations in rpsJ, which encodes ribosomal protein S10, alter the tigecycline-binding site and thus contribute to tigecycline resistance [20]. Another resistance mechanism involves the increased expression of efflux pumps, such as OqxAB, AcrAB-TolC, and AdeABC [21][22][23]. 
In addition, Tet proteins, including the tigecycline-modifying enzyme tet(X), the ribosomal protective protein tet(M), and the mutated tet(A) efflux pump, have also been reported to decrease tigecycline susceptibility [24,25]. Since tet(X)-like genes could spread between species through horizontal gene transfer, surveillance of the prevalence and abundance of these genes is important. To the best of our knowledge, tet(X) variants have not been documented in Taiwan. Thus, we aimed to investigate 228 tigecycline-non-susceptible A. baumannii clinical isolates collected in Taiwan for the presence of tet(X) variants. Screening of tet(X) Variants We screened 228 non-repetitive clinical tigecycline-non-susceptible A. baumannii isolates in Taiwan for the presence of tet(X) variants via polymerase chain reaction (PCR) using a universal primer pair designed in this study (described in the Materials and Methods section). The PCR and sequencing results indicated the presence of tet(X)-like genes in seven strains, with a positive rate of 3%. The sources of the seven strains were blood (n = 2), sputum (n = 2), tissue (n = 1), urine (n = 1), and pleural effusion (n = 1) samples (Table S1). The Genomes of A. baumannii Isolates Carrying tet(X) Variants Are Highly Similar and Carry Two Plasmids The genomes of the seven analyzed strains were almost identical (~100% identity and coverage) (BioProject ID: PRJNA672213; Accession Nos. CP064193-CP064204 and CP076736-CP076744), each comprising a circular chromosome and two plasmids of 112 kb and 9 kb. NCBI BLAST analysis showed that the chromosome shared high similarity with A. baumannii strain ab736 (Accession No. CP015121.1), which was isolated from a patient with bacteremia in the USA, and A. baumannii strain ZW85-1 (Accession No. CP006768) [46], which was isolated from the fecal sample of a patient with diarrhea in China (98.7% identity and 88% coverage for both isolates). Meanwhile, the best matches in the NCBI nucleotide database for the two plasmids were pCMCVTAb1-Ab59 [47] (Accession No. CP016299.1; 100% DNA identity and 98% coverage) and pAB-NCGM253 [48] (Accession No. AB823544; 100% DNA identity and coverage) for the 112 kb and 9 kb plasmids, which were obtained from clinical isolates in the USA and Japan, respectively. Tigecycline-Resistant A. baumannii Isolates Carrying tet(X) Variants Also Carry Other Antimicrobial Resistance Genes Antimicrobial resistance genes were identified using the Comprehensive Antibiotic Resistance Database (CARD). A total of 38 proteins exhibited >90% amino acid identity and coverage to proteins in the CARD database. We found a plasmid-encoded bla OXA-72 (bla OXA-24 -like) carbapenemase gene (located in the 9 kb plasmid, which is almost identical to pAB-NCGM253, a common bla OXA-72 -bearing plasmid found in several Acinetobacter spp. [49]) and a chromosome-encoded bla OXA-66 (bla OXA-51 -like) carbapenemase gene (overexpression of bla OXA-51 -like could confer carbapenem resistance), which may contribute to the carbapenem resistance of these isolates. The adc-56, a gene encoding extended-spectrum AmpC cephalosporinase, was found to confer cefepime resistance. 
In addition, genes encoding the aminoglycoside-modifying enzymes ANT(3 )-IIa, APH(3 )-Ib, APH(6)-Id, and APH(3 )-Ia; the chloramphenicol resistance gene floR; the sulfonamide resistance gene sul2; the tetracycline resistance genes tet(Y) and tet(X6); the abaQ gene encoding an efflux pump to mediate quinolone resistance; and the multidrug efflux pump-encoding genes such as ade were also identified. The Tigecycline-Resistant A. baumannii Isolates Carry tet(X6) Genes In all seven sequenced strains, a tet(X6) gene was identified in the chromosome and was located in a~40 kb region that is absent in A. baumannii ab736 and ZW85-1 from the NCBI database, which shared high genome identity with the strains analyzed in this study ( Figure 1). It is noteworthy that this 40 kb region flanked by two directly repeated IS26 sequences showed a higher G + C content (49.7%) than the rest of the chromosome (39%), indicating that this region may have originated from another source. genes in plasmids and chromosomes, including the sequences of plasmids from A. baumannii strain ABF9692 (plasmid pABF9692), Proteus cibarius strain ZN2, and the chromosomes of Proteus genospecies 6 strain T60, P. cibarius strain ZF1, P. cibarius strain 17SZRF8EW, P. mirabilis strain 18QD2AZ3W, A. lwoffii strain 18QD2AZ28W, Myroides phaeus strain 18QD1AZ29W, and A. baumannii strain X4-65 (this study) [35,38,50,51] (Figure 2). We found that, regardless of their location (plasmid or chromosome), tet(X6) were frequently associated with ISCR2, suggesting that ISCR2 may contribute to the transmission of tet(X6). In addition, tet(X6) was found on an SXT/R391 integrative and conjugative element (ICE) in the chromosome of Proteus genospecies 6 (T60), an isolate from retail pork (the authors named the novel ICE ICEPgs6Chn1) [38]. However, the genetic environment of tet(X6) in our current study was different from that of ICEPgs6Chn1, and we did not find a complete ICE in the strains analyzed in this study, as the ICE finder tool (https://db-mml.sjtu.edu.cn/ICEfinder/ICEfinder.html) and oriTfinder tool (https://toolmml.sjtu.edu.cn/oriTfinder/oriTfinder.html) could not detect an integrase gene, relaxase gene, oriT, or type IV secretion system-encoding genes. In addition to tet(X6), several other antibiotic resistance genes, including aph(3 )-Ib, aph(6)-Id, aph(3 )-Ia, floR, sul2, and tet(Y), were present in this region. Of note, several transposase-encoding genes were also identified, which suggests that transposition events occurred in this region and probably resulted in the accumulation of antimicrobial resistance genes. We further compared the genomic environments of previously reported tet(X6) genes in plasmids and chromosomes, including the sequences of plasmids from A. baumannii strain ABF9692 (plasmid pABF9692), Proteus cibarius strain ZN2, and the chromosomes of Proteus genospecies 6 strain T60, P. cibarius strain ZF1, P. cibarius strain 17SZRF8EW, P. mirabilis strain 18QD2AZ3W, A. lwoffii strain 18QD2AZ28W, Myroides phaeus strain 18QD1AZ29W, and A. baumannii strain X4-65 (this study) [35,38,50,51] (Figure 2). We found that, regardless of their location (plasmid or chromosome), tet(X6) were frequently associated with ISCR2, suggesting that ISCR2 may contribute to the transmission of tet(X6). In addition, tet(X6) was found on an SXT/R391 integrative and conjugative element (ICE) in the chromosome of Proteus genospecies 6 (T60), an isolate from retail pork (the authors named the novel ICE ICEPgs6Chn1) [38]. 
In addition, genes encoding the aminoglycoside-modifying enzymes ANT(3″)-IIa, APH(3″)-Ib, APH(6)-Id, and APH(3′)-Ia; the chloramphenicol resistance gene floR; the sulfonamide resistance gene sul2; the tetracycline resistance genes tet(Y) and tet(X6); the abaQ gene encoding an efflux pump that mediates quinolone resistance; and multidrug efflux pump-encoding genes such as ade were also identified. The Tigecycline-Resistant A. baumannii Isolates Carry tet(X6) Genes In all seven sequenced strains, a tet(X6) gene was identified in the chromosome and was located in a ~40 kb region that is absent in A. baumannii ab736 and ZW85-1 from the NCBI database, which shared high genome identity with the strains analyzed in this study (Figure 1). It is noteworthy that this 40 kb region, flanked by two directly repeated IS26 sequences, showed a higher G + C content (49.7%) than the rest of the chromosome (39%), indicating that this region may have originated from another source. In addition to tet(X6), several other antibiotic resistance genes, including aph(3″)-Ib, aph(6)-Id, aph(3′)-Ia, floR, sul2, and tet(Y), were present in this region. Of note, several transposase-encoding genes were also identified, which suggests that transposition events occurred in this region and probably resulted in the accumulation of antimicrobial resistance genes. We further compared the genomic environments of previously reported tet(X6) genes in plasmids and chromosomes, including the sequences of plasmids from A. baumannii strain ABF9692 (plasmid pABF9692) and Proteus cibarius strain ZN2, and the chromosomes of Proteus genospecies 6 strain T60, P. cibarius strain ZF1, P. cibarius strain 17SZRF8EW, P. mirabilis strain 18QD2AZ3W, A. lwoffii strain 18QD2AZ28W, Myroides phaeus strain 18QD1AZ29W, and A. baumannii strain X4-65 (this study) [35,38,50,51] (Figure 2). We found that, regardless of their location (plasmid or chromosome), tet(X6) was frequently associated with ISCR2, suggesting that ISCR2 may contribute to the transmission of tet(X6). In addition, tet(X6) was found on an SXT/R391 integrative and conjugative element (ICE) in the chromosome of Proteus genospecies 6 (T60), an isolate from retail pork (the authors named the novel ICE ICEPgs6Chn1) [38]. However, the genetic environment of tet(X6) in our current study was different from that of ICEPgs6Chn1, and we did not find a complete ICE in the strains analyzed in this study, as the ICE finder tool (https://db-mml.sjtu.edu.cn/ICEfinder/ICEfinder.html (accessed on 5 October 2021)) and the oriTfinder tool (https://tool-mml.sjtu.edu.cn/oriTfinder/oriTfinder.html (accessed on 5 October 2021)) could not detect an integrase gene, relaxase gene, oriT, or type IV secretion system-encoding genes. Figure 2 (caption). Comparative analysis of DNA identity was performed using Easyfig 2.2.2. The tet(X6) genes are colored in red, and the other genes are colored according to their annotated functions: green, antimicrobial resistance genes; grey, ISCR2 or transposase-encoding genes; blue, other functions and hypothetical protein-encoding genes. The A. baumannii Isolates Showed No Evidence of Conjugal Transfer of tet(X6) Although we did not identify a complete ICE associated with tet(X6), we examined whether tet(X6) could be transferred to Escherichia coli by conjugation. To ensure that the conjugation experimental procedures were successful, we used a Klebsiella pneumoniae strain that was able to transfer the bla OXA-48 gene to E. coli as a control. The results showed that the control K. pneumoniae could successfully transfer the bla OXA-48 gene to E. coli, and the transconjugants (E. coli J53-bla OXA-48) exhibited a higher imipenem MIC of ≥8 mg/L compared to the recipient (E. coli J53), with an MIC of 0.125 mg/L. However, no transconjugant was obtained for the tet(X6)-harboring A. baumannii strains under the conditions used in this study (Table 1 and Figure S1). K Type and Sequence Type (ST) of the Tigecycline-Resistant A. baumannii Isolates Carrying tet(X6) The capsular types (K-types) of the seven strains were determined using Kaptive, a tool that predicts the K-type of A.
baumannii strains based on the sequences of the capsular polysaccharide synthesis (cps) locus [52]. The results showed that these strains belong to a new K type, which we designated as KL129 (Accession No. MW353360), that is related to KL60. Two genes differed between KL60 and KL129: a sugar transferase gene and a gene encoding a WxcM-like domain-containing protein (Figure 3). The sugar transferase ItrA2 in KL60 and the corresponding protein in KL129 shared 77% amino acid identity at 95% coverage, whereas the WxcM-like domain-containing protein FdtE in KL60 and the corresponding protein in KL129 shared 73% amino acid identity at 99% coverage. In addition, sequence variations were found in other proteins: Wzx shared 89% amino acid identity at 99% coverage, Gtr49 shared 94% amino acid identity at 99% coverage, and Gtr50 shared 92% amino acid identity at 99% coverage. We also determined the STs of the strains analyzed in this study based on the obtained genome sequences. The results showed that they belonged to Oxford ST793/CC208 (previously denoted as CC92) and Pasteur ST723/CC2, a clonal complex (CC) that corresponds to international clone II (Figure S2). Nosocomial Spread of Tigecycline-Resistant A. baumannii Isolates Carrying tet(X6) Six strains of tet(X6)-carrying A. baumannii were isolated from patients 1 and 3-7 in the same medical ICU; the last one (strain X4-107) was isolated from patient 2 in an orthopedic ward located on a separate floor of the same building (Figure 4 and Table S1). Patients 1, 3, and 5 had once been assigned to the same bed. The first strain (X4-65) was isolated (5 February 2020) from the sputum of patient 1 two months after admission to the ICU due to hepatic encephalopathy resulting from alcoholic liver cirrhosis. Patient 1 died of ventilator-associated pneumonia caused by tet(X6)-carrying A. baumannii three days later (8 February 2020). Strain X4-136 was isolated from patient 3, who was admitted on 10 February 2020, under the impression of pancreatitis with septic shock. The patient developed ventilator-associated pneumonia and central line-associated bloodstream infection caused by tet(X6)-carrying A. baumannii seven days later (17 February 2020). Approximately 52 days later, strain X4-300 was isolated from patient 5, who had hepatocellular carcinoma. The patient died of ventilator-associated pneumonia caused by tet(X6)-carrying A. baumannii. Strains from patients 4, 6, and 7, assigned to different beds in the same ICU, were isolated in March, June, and July, respectively.
We presumed that this outbreak was caused by tet(X6)-carrying A. baumannii colonization in the environment of the medical ICU, from where the healthcare worker(s) spread it to the orthopedic ward. We further performed in silico pulsed-field gel electrophoresis (PFGE) and constructed a phylogenetic tree based on the PFGE profile (Figure S4). The results showed that the seven strains collected in this study were clustered in a clade, indicating that these strains were clonally related. Figure 3. Capsular polysaccharide synthesis (cps) gene clusters in KL129. The cps locus of KL129, which was identified in this study, was compared with that of BAL_329, an A. baumannii strain with the KL60 capsular type (Accession No. MN148382.1). Open reading frames are shown as arrows. Comparing the two cps loci, conserved genes that shared >95% amino acid identity with KL60 are shown in black. Genes that share 80-95% amino acid identity with KL60 are shown in grey, and genes sharing <80% amino acid identity with KL60 are shown in green or blue for the corresponding genes. The co-existence of tet(X)-like genes and other antibiotic resistance genes has been reported in different bacterial strains isolated from animals or their environments, posing a serious threat to the clinical treatment of humans. A study of E. coli strains from an animal source possessing both tet(X4) and the colistin resistance gene mcr-1 raised concerns, since tigecycline and colistin are regarded as last-line drugs for the treatment of carbapenem-resistant bacteria [31]. A recent study documented an A.
baumannii chicken isolate co-carrying a tet(X6) variant and the carbapenemase genes bla NDM-1 and bla OXA-58 [51]. Another study reported the co-occurrence of tet(X6) and the linezolid resistance gene cfr in Proteus spp. from swine farms [50]. In another study, Acinetobacter spp. harboring both tet(X) and bla OXA-58 were isolated from pigs [57]. In the current study, we found the co-carriage of carbapenemase gene bla OXA-72 and tet(X6) in A. baumannii strains isolated from patients. To the best of our knowledge, tet(X6) has been reported in the Myroides, Proteus, E. coli, Providencia rettgeri, and A. baumannii strains of animal origin [35,38,50,51,58], and this is the first description of tet(X6) in bacteria from human samples. Although previous studies have reported that tet(X6) was associated with ICEs [38], we did not find a complete ICE in the region of the tet(X6) genes in the strains analyzed in this study. Concordantly, we failed to obtain tet(X)-containing transconjugants through conjugation, suggesting that other mechanisms, such as transformation, may be responsible for the transfer of tet(X)-containing DNA in the strains analyzed in this study. Of note, we found sequence similarity surrounding the genomic environments of reported tet(X6) genes in plasmids from A. baumannii and Proteus cibarius, and in the chromosomes of P. mirabilis, P. cibarius Proteus genospecies 6, A. lwoffii, Myroides phaeus, and A. baumannii (this study), suggesting that recombination events could occur between plasmids and chromosomes. Moreover, tet(X6) was associated with the presence of ISCR2, either at one or both ends, implying that ISCR2 could play a role in the transmission of tet(X6). Taken together, we demonstrated the presence of tet(X6) together with the carbapenemase gene bla OXA-72 in clinical isolates of A. baumannii and reported an outbreak at a hospital in Taiwan. The findings revealed evidence of clonal spread of tet(X6)-carrying tigecycline-and carbapenem-resistant A. baumannii. Even though it seems that tet(X6) is restricted to certain clones and has not widely spread to large numbers of A. baumannii clinical isolates, it poses a real threat to healthcare systems. To control its dissemination, further investigations on its prevalence and distribution should be undertaken at different hospitals. Tigecycline-Non-Susceptible A. baumannii Isolate Collection A total of 228 non-redundant (when repetitive samples from the same patient were isolated, only the first sample was included) tigecycline-non-susceptible A. baumannii isolates were collected at Chang Gung Memorial Hospital, Lin Kou branch, a 3700-bed medical center in northern Taiwan, from January to September 2020. The MIC of tigecycline was determined using the broth dilution method according to Clinical and Laboratory Standards Institute (CLSI) recommendations. Since CLSI does not suggest breakpoints for tigecycline against Acinetobacter spp., the results were interpreted according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) v8.1 criteria for Enterobacterales (strains with an MIC ≤ 1 mg/L were defined as susceptible; MIC >2 mg/L were defined as resistant) [59]. PCR Detection of tet(X) Variants To detect tet(X) variants in the isolates, we analyzed the strains for 19 tet(X) variant sequences (Table S2 and Figure S3). A pair of universal primers was designed to detect the 15 tet(X) variants, i.e., tet(X)-tet(X14) ( Table S3). 
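The universal primer pair itself is given in Table S3 and is not reproduced here. Purely as an illustration of the underlying idea (placing primers in stretches that are conserved across the aligned tet(X) variants), the sketch below scans a toy alignment for fully conserved windows; the sequences and the 18-bp window length are invented for the example and are not the real tet(X) data from Table S2.

```python
# Illustrative sketch: find candidate "universal" primer sites as the longest
# windows that are identical across all aligned variant sequences.
def conserved_windows(aligned_seqs, min_len=18):
    """Return (start, end) of fully conserved, gap-free stretches >= min_len columns."""
    length = len(aligned_seqs[0])
    conserved = [len({s[i] for s in aligned_seqs}) == 1 and aligned_seqs[0][i] != "-"
                 for i in range(length)]
    windows, start = [], None
    for i, ok in enumerate(conserved + [False]):      # sentinel flushes the last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start >= min_len:
                windows.append((start, i))
            start = None
    return windows

variants = [
    "ATGACAATGCGTATCGATACCGATAAACAGCTGGAAGTT",   # hypothetical variant 1
    "ATGACAATGCGTATCGATACCGATAAACAGTTGGAAGTA",   # hypothetical variant 2
    "ATGACAATGCGTATCGATACCGATAAACAGCTGGAAGTC",   # hypothetical variant 3
]
print(conserved_windows(variants, min_len=18))        # -> [(0, 30)]
```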
The PCR cycling program consisted of 96 • C for 3 min, followed by 30 cycles of 96 • C for 30 s, 52 • C for 30 s, and 72 • C for 30 s. Products with an expected size of~800 bp were subjected to Sanger sequencing. Bacterial Genome Sequencing and Analysis Bacterial genomic DNA was extracted with a commercial kit (QIAamp DNA Mini Kit, Qiagen, Valencia, CA, USA) and subjected to nanopore sequencing (Good Future BioMed Inc., Kwenshan, Taoyuan, Taiwan). The sequencing library was prepared with a Rapid Barcoding Sequencing Kit (SQK-RKK004; Oxford Nanopore Technologies, Oxford, UK) according to the manufacturer's instructions. Sequencing was performed on the GridION platform, and FlowCell (R9.4.1 FLO-MIN106D; Oxford Nanopore) was used to generate raw signal data. Base calling of the raw signal data was performed by Guppy (v3.2.1) in the HAC mode. The adaptors remaining in the base-called reads were trimmed with Porechop (v0.2.4). The clean reads were assembled into chromosome or plasmid contigs using Flye (v2.7). Any sequencing errors in the genome and plasmid contigs were first polished by four runs of Racon (v1.4.3). The remaining errors were removed by Medaka (v1.0) and validated by in-house scripts searching against the EMBL-EBI cDNA database. The protein-coding genes and rRNAs in the chromosomes and plasmids were annotated using the Prokka pipeline (v1.14.6). To identify antibiotic-resistance genes, the annotated genes were searched against the CARD using Diamond (v0.9.36). Multilocus Sequence Typing (MLST) Analysis Two schemes for ST assignment were used. The Pasteur scheme of MLST relies on seven housekeeping genes (cpn60, gltA, recA, fusA, pyrG, rplB, and rpoB) [60], and the Oxford scheme relies on cpn60, gltA, recA, rpoD, gyrB, gdhB, and gpi [61]. The target genes were extracted from the genome and subjected to ST analysis (www.pasteur.fr/mlst (accessed on 5 October 2021).). The global optimal eBURST algorithm was used to define the major clonal complexes of the strains. Conjugation Assay The conjugation assay was performed as described previously [62]. To ensure that the conjugation experimental procedures were successful, we used a donor, carbapenemresistant K. pneumoniae strain (17CRE24), which was able to transfer the bla OXA-48 gene to E. coli by conjugation, collected from Tung's Taichung Metro Harbor Hospital, Taiwan, as a control [63]. Seven tigecycline-resistant A. baumannii strains were used as donors, and sodium azide-resistant E. coli J53 was used as the recipient. The donors and recipients were cultured overnight at 37 • C in LB broth supplemented with 2 mg/L of tigecycline (for the seven tigecycline-resistant A. baumannii strains), 4 mg/L of imipenem (for the control K. pneumoniae strain), or 100 mg/L of sodium azide (for the E. coli J53 recipient). The donor and recipient cells were mixed at a ratio of 1:10 (100 µL donor and 1 mL recipient) and centrifuged at 8000× g for 5 min. The small pellet was resuspended in~3 µL of LB broth, dropped onto a nitrocellulose membrane on an LB agar plate, and incubated overnight. The nitrocellulose membrane was transferred to a tube containing fresh LB broth and incubated at 37 • C for 30 min with shaking. Transconjugants were selected on LB agar containing 100 mg/L of sodium azide and supplemented with 2 mg/L of tigecycline or 2 mg/L of imipenem. Transconjugants were further plated on eosin methylene blue (EMB) agar to confirm E. coli, which produces a green metallic sheen on EMB. 
Furthermore, the successful transfer of genes was confirmed via PCR using specific primers (Table S3). The MICs of the successful transconjugants were determined. In Silico Analysis of PFGE The complete genomes were explored using in silico PFGE patterns via AscI restriction digestion [64]. Phylogenetic trees were generated to compare genetic relatedness and clonal assignment using the Dice distance from band pattern and agglomeration using the ward.D2 method [65,66]. Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/antibiotics10101239/s1, Table S1: the seven tet(X)-harboring strains and patients; Table S2: tet(X) variants included for sequence analysis, Table S3: Primer pairs used for PCR amplification, Figure S1: PCR confirmation of transconjugants, Figure S2: Clonal complexes of tigecycline-resistant A. baumannii isolates carrying tet(X6), Figure S3: Alignment of tet(X) variants and primer design, Figure S4: Phylogenetic tree of A. baumannii isolates based on PFGE profile. Informed Consent Statement: Informed consent was obtained from all subjects who participated in the study. Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon request.
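As a small illustration of the in silico PFGE analysis described above, the sketch below turns band patterns into a Dice distance matrix and clusters the strains hierarchically. The band matrix is invented, and SciPy's "ward" linkage is used as a stand-in for the ward.D2 agglomeration performed in R; only the strain names are taken from the study.

```python
# Sketch: presence/absence of AscI fragment-size bins per strain -> Dice
# distances -> hierarchical clustering (hypothetical band data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

strains = ["X4-65", "X4-107", "X4-136", "X4-300", "control"]
bands = np.array([
    [1, 1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 0],
], dtype=bool)

dist = pdist(bands, metric="dice")              # condensed Dice distance matrix
tree = linkage(dist, method="ward")             # agglomerative clustering
info = dendrogram(tree, labels=strains, no_plot=True)  # plot to inspect the clades
print(info["ivl"])                              # leaf order, e.g. outbreak strains grouped
```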
6,055
2021-10-01T00:00:00.000
[ "Biology", "Medicine" ]
EVOLUTION OF THE BRAIN COMPUTING INTERFACE (BCI) AND PROPOSED ELECTROENCEPHALOGRAPHY (EEG) SIGNALS BASED AUTHENTICATION MODEL With current advancements in the field of Brain Computer Interface, it is necessary to study how it will affect the other technologies currently in use. In this paper, the authors motivate the need for a Brain Computing Interface in the era of IoT (Internet of Things), and analyze how BCI in the presence of IoT could suffer serious privacy breaches if not protected by new, more secure protocols. Security breaches and hacking have been around for a long time, but now we are sensitive about data because our lives depend on it. When everything is interconnected through IoT, and considering that we control all interconnected things by means of our brain using BCI (Brain Computer Interface), the meaning of a security breach becomes much more sensitive than in the past. This paper describes the old security methods being used for authentication and how they can be compromised. Considering the sensitivity of data in the era of IoT, a new form of authentication is required, which should incorporate BCI rather than the usual authentication techniques. 1 What is a Brain Computer Interface? A brain computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), or brain-machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions [1]. 1.1 History of BCI Research In the 1970s, research on BCIs started at the University of California, which led to the emergence of the expression brain-computer interface. The focus of BCI research and development continues to be primarily on neuroprosthetics applications that can help to restore damaged sight, hearing, and movement. The mid-1990s marked the appearance of the first neuroprosthetic devices for humans. BCI does not read the mind accurately, but detects the smallest of changes in the energy radiated by the brain when you think in a certain way. A BCI recognizes specific energy/frequency patterns in the brain. John Donoghue and his team of researchers from Brown University formed a publicly traded company, Cyberkinetics, in 2001. The goal was to design a brain computer interface, the so-called BrainGate, commercially.
The company has come up with NeuroPort ™, which was its first commercial product. Researchers from Columbia University Medical Center have successfully monitored and recorded electrical activity in the brain with improved precision. According to researchers, NeuroPort™ Neural Monitoring System enabled them to identify micro-seizure activity prior to epileptic seizures among patients [2]. Current Standing of BCI BCIs comprise an active area of research and could start to integrate advances from adjacent fields such as neuroscience, nanomaterials, electronics miniaturization, and machine learning. For example, one neuro-imaging research project is starting to make guesses as to what participants see during brain scans, purporting to be able to distinguish between a cat and a person. Merging this kind of functionality with BCIs might produce new applications. Other experimental BCI projects have been proposed. One is Neocortical Brain-Cloud Interface: autonomous nanorobots that could connect to axons and neuronal synaptic clefts, or embed themselves into the peripheral calvaria and pericranium of the skull. Another project, Brainets, envisions linking multiple organic computing units (brains) to silicon computing networks. The third project is Neural Dust, in which thousands of 10-100 micron-sized free-floating sensor nodes would reside in the brain and provide a computing processing network [3]. Future of BCI A number of developments have been taking place in the field. By 2050, it is has been suggested that BCI could become a magic wand, helping men to control objects with their mind. The day isn't far off when man may be able to guide an outside object with their thoughts in order to consistently execute both natural and complex motions of everyday life. So far BCIs have been conceived primarily as a solution for medical pathologies. However, it is possible to see BCIs more expansively as a platform for cognitive enhancement and human-machine collaboration. The BCI functionality of typing on a keyboard with your mind suggests the possibility of having an always-on brain-Internet connection. Consider what the world might be like if each individual had a live 24/7 brain connection to the Internet. Just as cell phones connected individual people to communications networks, BCIs might similarly connect individual brains to communications networks. We propose a variety of BCI applications and concepts throughout the rest of this paper, all of which are speculative. In one sense, ubiquitous BCIs are expected. It is contemplated that communication technology, already mobilized to the body via the cell phone, could be "brought on board" even more pervasively. BCIs are merely a next-generation improvement to the current situation of people constantly staring at their phones. In another sense, though, BCIs are not only a "better horse" technology: they are also a "car" in that it is impossible to foresee the full range of future applications that might be enabled from the present moment. BCIs pose a variety of practical, ethical, and philosophical issues. Life itself and the definition of what it is to be human could be quite different in a world where BCIs are widespread. Some of the immediate practical concerns of BCIs could include invasiveness, utility, reversibility, support, maintenance, upgradability (hardware and software), anti-hacking and anti-virus protection, cost, and accessibility. Beyond practical concerns, there are ethical issues regarding privacy and security. 
For example, neural data privacy rights are an area where standards need to be defined. There could be at least three classes of BCI applications introduced in graduated phases of risk and complexity: biological cure and enhancement; information and entertainment; and self-actualization (realization of individual cognitive and artistic potential). Each of these is for separate discussion [4]. BCI in view of IoT (Internet of Things) The IoT (Internet of Things) is a network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and network connectivity, which enable these objects to connect and exchange data. Each thing is uniquely identifiable through its embedded computing system but is able to inter-operate within the existing Internet infrastructure. Experts estimate that the IoT will consist of about 30 billion objects by 2020 [5]. The advancement of consumer electroencephalography (EEG) devices opens the technology up to a completely new world of BCI options. Combining EEG technologies with other IoT technologies like heart rate monitoring, facial emotion recognition, smart homes, home security systems, and personal devices will put new threats of privacy and security breach. Operation principles of BCI The purpose of a BCI is to detect and quantify features of brain signals that indicate the user's intentions and to translate these features in real time into device commands that accomplish the user's intent. To achieve this, a BCI system consists of four sequential components. 3. Feature Translation. 4. Device Output. These four components are controlled by an operating protocol that defines the onset and timing of operation, the details of signal processing, the nature of the device commands, and the oversight of performance. An effective operating protocol allows a BCI system to be flexible and to serve the specific needs of each user. Electrical signals from brain activity are detected by recording electrodes located on the scalp, on the cortical surface, or within the brain. The brain signals are amplified and digitized. Pertinent signal characteristics are extracted and then translated into commands that control an output device, such as a spelling program, a motorized wheelchair, or a prosthetic limb. Feedback from the device enables the user to modify the brain signals in order to maintain effective device performance. BCI is brain-computer interface; ECoG is electrocorticography; EEG is electroencephalography [6]. Signal Acquisition Signal acquisition is the measurement of brain signals using a particular sensor modality (e.g., scalp or intracranial electrodes for electrophysiologic activity, fMRI for metabolic activity). The signals are amplified to levels suitable for electronic processing (and they may also be subjected to filtering to remove electrical noise or other undesirable signal characteristics, such as 60-Hz power line interference). The signals are then digitized and transmitted to a computer. Feature Extraction Feature extraction is the process of analyzing the digital signals to distinguish pertinent signal characteristics (i.e. signal features related to a person's intent) from extraneous content and representing them in a compact form suitable for translation into output commands. These features should have strong correlations with the user's intent. Because much of the relevant (i.e. 
most strongly correlated) brain activity is either transient or oscillatory, the most commonly extracted signal features in current BCI systems are time-triggered EEG or ECoG (Electrocorticography) response amplitudes and latencies, power within specific EEG or ECoG frequency bands, or firing rates of individual cortical neurons. Environmental artifacts and physiologic artifacts such as EMG (Electromyographic) signals are avoided or removed to ensure accurate measurement of the brain signal features. Device Output The commands from the feature translation algorithm operate the external device, providing functions such as letter selection, cursor control, robotic arm operation, and so forth. The device operation provides feedback to the user, thus closing the control loop. Feature Translation The resulting signal features are then passed to the feature translation algorithm, which converts the features into the appropriate commands for the output device (i.e. commands that accomplish a user's intent). For example, a power decrease in a given frequency band could be translated into an upward displacement of a computer cursor, or a P300 potential could be translated into selection of the letter that evoked it. The translation algorithm should be dynamic to accommodate and adapt to spontaneous or learned changes in the signal features and to ensure that the user's possible range of feature values covers the full range of device control. Programming Using OpenBCI OpenBCI is an open source brain-computer interface platform created by Joel Murphy and Conor Russomanno after a successful Kickstarter campaign in late 2013. OpenBCI boards can be used to measure and record electrical activity produced by the brain (EEG), muscles (EMG), and heart (EKG), and is compatible with standard EEG electrodes. The OpenBCI boards can be used with the open source OpenBCI GUI, or they can be integrated with other open-source EEG signal processing tools. Hardware The OpenBCI 32-bit Board uses the ADS1299, an IC developed by Texas Instruments for biopotential measurements. The OpenBCI uses a microcontroller for on-board processingthe 8-bit version (now deprecated) uses an Arduino-compatible ATmega328P IC, while the 32-bit board uses a PIC microcontroller -and can write the EEG data to an SD card, or transmit it to software on a computer over a bluetooth link. In 2015, OpenBCI announced the Ganglion board with a 2nd Kickstarter campaign. It costs $100 (1/5 the cost the 32-bit board). It has four input channels for measuring EEG, EMG, and EKG, and is also bluetooth enabled [7]. Table 1 presents price details of OpenBCI Hardware. Software OpenBCI has released an open-source application for use with the OpenBCI written with Processing. Display and processing software written in NodeJS and Python are also being developed [8]. Table 2 presents the major open-source repositories of OpenBCI. The official NodeJS driver for the Cyton board over Serial. Security Challenges in The Era of IoT This paper proposes a need and plan of switching to more secure authentication methods for the age of Brain Computer Interface and IoT. This new security method should use BCI as a source of authentication method instead of old authentication methods. In the proposed method, authentication should be checked by the EEG Signals from brain and authentication should be granted by verifying the pattern with previously stored pattern. Authentication Methods Being Used Currently Authentication is an essential element of a typical security model. 
It is a process aimed to confirm identification of a user (in some cases, a machine) that is trying to access resources or log in. There is a number of different authentication mechanisms, but all serve this same purpose. Some of the security methods being used for Secure Authentication these days are listed below. Security of all these methods could be compromised one way or the other. There is a proven history of the breach of security with all specified methods. A Look into BCI Authentication Methods BCI Authentication methods will use the pure EEG or EEG + EMG signals to authenticate a user over computer or mobile devices. This will allow the user to be authenticated by brain signals safely without using any other input methods. Such authentication system will be based on EEG signals. Electroencephalogram (EEG) Model for Authentication Electroencephalography (EEG) is the recording of electrical activity along the scalp through measuring voltage fluctuations accompanying neurotransmission activity within the brain. The electrodes are attached in a cap-like device. It has unique usability advantages over other types of brain signal recording that recommend it for commercial use. It is portable, inexpensive, and easy to use. EEG recording also provides high temporal resolution. However, its signal to noise ratio and spatial resolution represent a limitation compared to other methods [9]. Figure 1 shows an authentication model using EEG signals proposed by the authors. Signals acquired from headset will be processed and then the important features will be extracted from the raw EEG signals. A custom authentication algorithm extracts important authentication related information and makes a package, which could be checked for authentication. This authentication data package is then compared with previously stored data pattern on either a server or local storage in order to verify if that it is the same user leading to authentication or rejection of user. Current Challenges Currently there is a number of challenges causing difficulty for the research process. Once the research is successful, still there will be some limitations to produce this as a locally useable device. Usability Challenge The headsets we can find currently in the market are costly due to expensive sensors. These devices cannot be used directly and require to setup the electrodes and put a conducting gel on the scalp of a person for signals reading. Training Process & ITR (Information Transfer Rate) Training processes express the limitations facing the user acceptance of BCI technology. They include the issues related to the training process necessary for class discrimination. ITR (Information transfer rate) is one of the system evaluation metrics that combine both performance and acceptance aspects [10][11][12][13]. Users training is a time-consuming activity either in guiding the user through the process or in the number of recorded sessions. It takes place either in preliminary phase or in the classifier calibration phase. The user is taught to deal with the system as well as to control his or her brain feedback signals in the preliminary phase, while in the calibration phase the trained subject's signal has been used to learn the used classifier. ITR (Information Transfer Rate) is a widely used evaluation metric for command BCI systems. It depends on the number of choices, the accuracy of target detection, and the average time for a selection. 
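The text does not reproduce an explicit ITR formula; the sketch below uses the widely cited Wolpaw definition, which depends on exactly the three quantities mentioned above (number of choices N, detection accuracy P, and average selection time). The example numbers are illustrative, not measurements from any cited system.

```python
# Sketch: Wolpaw information transfer rate in bits per minute.
from math import log2

def itr_bits_per_min(n_choices, accuracy, seconds_per_selection):
    p, n = accuracy, n_choices
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# e.g. a 6-choice selective-attention speller at 85% accuracy, 4 s per choice
print(round(itr_bits_per_min(6, 0.85, 4.0), 1), "bits/min")   # ~24.4
```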
Thus compared to imagery BCI, selective attention strategies achieve higher ITR as their offered choices are larger. Non-linearity The brain is a highly complex nonlinear system, in which chaotic behavior of neural ensembles can be detected. Thus, EEG signals can be better characterized by nonlinear dynamic methods than linear methods. The training sets are relatively small because usability issues influence the training process. Although heavily training sessions are considered time consuming and demanding for the subjects, they provide the user with necessary experience to deal with the system and learn to control his/her neurophysiological signals. Thus, a significant challenge in designing a BCI is to balance the trade-off between the technological complexity of interpreting the user's brain signals and the amount of training needed for successful operation of the interface [14]. Conclusion Brain Computer Interface is a newly emerging technology, which is soon going to surround all other technologies and will be involved in daily life activities. It can be easily understood that BCI is going to integrate with IoT on very large scale and this integration will be a new rise in the technological advancements. At the same time, there is a need to find new authentication methods, which are more secure and do not use conventional authentication techniques but rely totally on EEG & EMG Signals of Brain Computing Interface.
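To make the proposed EEG-based authentication flow concrete, here is a minimal sketch of the enrol-then-verify loop outlined in Figure 1: band-power features are extracted from a short recording and compared with a previously stored template, and access is granted when the similarity exceeds a threshold. The sampling rate, frequency bands, similarity measure, and threshold are all assumptions made for the illustration; the paper does not fix these choices.

```python
# Sketch: EEG band-power template matching for authentication (assumed parameters).
import numpy as np
from scipy.signal import welch

FS = 250                                   # assumed sampling rate (Hz)
BANDS = [(4, 8), (8, 13), (13, 30)]        # theta, alpha, beta (assumed)

def band_power_features(eeg):
    """eeg: (channels, samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS]
    return np.concatenate(feats)

def authenticate(eeg_segment, template, threshold=0.9):
    probe = band_power_features(eeg_segment)
    score = np.corrcoef(probe, template)[0, 1]   # similarity to stored pattern
    return score >= threshold, score

# enrolment and a later login attempt, using synthetic data for illustration only
rng = np.random.default_rng(0)
enrol = rng.standard_normal((8, FS * 10))
template = band_power_features(enrol)
ok, score = authenticate(enrol + 0.1 * rng.standard_normal(enrol.shape), template)
print(ok, round(score, 3))
```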
4,022.2
2018-01-01T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
Research on Logistics Distribution Vehicle Scheduling Based on Heuristic Genetic Algorithm To study the genetic algorithm, this paper solves the problem of shop scheduling under the premise of layout Flying − V . Firstly, double-layer coding is used for optimization. When calculating fitness, the time to return to the mouth P & D approaches the optimum through the greedy idea. Individual screening is carried out through the roulette method. Different crossover and genetic operators are used for different coding layers. Through thinking of elitism and catastrophe and the immigration operator to ensure the diversity of the algorithm in the calculation process, it can achieve the recommendation of the number of cars to control the cost. The stability of the algorithm is good. It can recommend a better picking sequence and number of carts for various types of picking problems. Introduction In recent years, Germany has proposed the "Industry 4.0" project, which uses the Internet of things information system to digitalize and intelligentize supply, manufacturing, and sales information in production. Finally, it achieves rapid effective and personalized product supply. e "smart logistics" of its three themes, "smart factory," "smart production," and "smart logistics," emphasizes the integration of logistics resources through the Internet, the Internet of things, and the logistics network. It gives full play to the efficiency of the supplier and accelerating the service of the demander match. e country also proposed China's smart manufacturing 2025 plan accordingly. With the vigorous development of today's e-commerce and express delivery industry, the warehousing and logistics industry with its rigid demand has also entered a broad new situation. With the process of industrialization and the diversification of the service industry, the warehousing, logistics, and distribution industry have emerged with a new demand of "multiple varieties, small batches, and multiple batches." Today's traditional warehousing and logistics services have gradually been unable to adapt to my country's economic development. How to improve the circulation efficiency of warehousing logistics and reduce circulation costs has become a major and difficult problem for warehousing enterprises to solve urgently. In recent years, foreign scholars have innovated the layout of warehouses. In 2009, the American scholars Gue and Meller [1] studied nontraditional layouts Flying − V, which shortened the picking distance of warehouses at the expense of warehouse utilization. A balance is reached between the two. For the use of optimization algorithms to solve similar optimization problems, many scholars have also conducted a lot of analysis from different angles. Liao et al. [2] introduced particle swarm optimization on the basis of traditional genetic algorithm to make the algorithm have more reliable optimization ability. Bo-Wen and Hua [3] based their work on the initial solution constructed by using chaotic initialization population and elite retention optimization results combined with parallel processing to increase the diversity of the genetic algorithm population and improve the quality of the solution. Minghai and Guihua [4], Li-Feng and Yong-Jie [5], and Meng et al. [6] analyzed the Chinese traveling salesman problem based on genetic algorithms and used them to achieve the optimal solution. 
Togan and Daloglu [7], based on an analysis of the characteristics of the initial population of the genetic algorithm, proposed a new group-adaptation strategy and used adaptive penalty functions and mutation/crossover operators to obtain an adaptive genetic algorithm. Liu et al. [8,9] considered reconstructing the point set of the graph, which also offers some inspiration for innovation in warehouse layout. Regarding the use of optimization algorithms to sort warehouse goods, Steffey [10], Huang et al. [11], Opetuk and Dukic [12], Zhou et al. [13], Pohl et al. [14], Xu and Hu [15], and Ardjmand et al. [16] used the fishbone layout, the artificial fish school algorithm, and an improved cat swarm algorithm, respectively, and applied genetic-algorithm analysis and comparison to find warehouse picking paths and pick lists. Regarding the dispatch optimization of cargo trolleys, Venkitasubramony and Adil [17] and Le and Degui [18] (2013) constructed a warehouse location-allocation model based on the fishbone layout, used a hybrid of the genetic algorithm and the ant colony algorithm, and improved fishbone-layout methods to optimize the allocation of many locations; the model is solved in combination with warehouse examples to find the best warehouse layout. Yang et al. [19] used a picking-path algorithm optimized by the ant colony algorithm, applied it to robot autonomous navigation in simulation, and concluded that ant-colony-optimized path planning can effectively and consistently generate better collision-free paths. Mohamed et al. [20] proposed a novel nature-inspired algorithm, the Gaining-Sharing-Knowledge-based algorithm (GSK), for solving optimization problems over continuous spaces; experimental results indicate that, in terms of robustness, convergence, and solution quality, GSK is significantly better than, or at least comparable to, state-of-the-art approaches, with outstanding performance especially in high dimensions. Mohamed et al. [21] proposed three new mutation strategies to improve the optimization performance of the differential evolution algorithm, two of which are highly competitive, especially as the dimension increases. Drawing on the methods of these scholars, and because the genetic algorithm operates directly on structured objects without requiring derivatives or function continuity and has inherent implicit parallelism and good global optimization ability, we adopt the genetic algorithm for this optimization. This work is based on layout optimization; at the same time, by using the genetic algorithm to recommend the number of picking carts and their routes, it addresses the problem of improving the circulation efficiency of warehousing logistics and reducing circulation fees [22].

Modeling the Distance under the Flying-V Layout

As in [23], the Flying-V layout of the warehouse is shown in Figure 1: the shelves are placed according to the Flying-V layout, and the entire warehouse is divided into four areas, numbered 1-4 in the figure. P&D is the entrance and exit of the warehouse. Each square represents an item storage position, and black squares represent goods that need to be picked. A tuple {A, B, C, D} is used to represent the information of the goods, defined as follows: A (A = 1, 2, 3, 4) indicates the area number where the goods are located.
B (B = 1, 2, 3, . . . , n) indicates the number of the container where the goods are located, where n = 104 when A is 1 or 4 and n = 91 otherwise. C indicates the number of the layer of the container on which the goods are located. D (D = 1, 2, 3, 4, 5) indicates the weight of the cargo. The P&D point is expressed as (0, 0, 0, 0).

Distance Matrix. Some symbol descriptions are given in Table 1.

Step 1. Establish a rectangular coordinate system with P&D at the origin, the positive x-axis pointing right and the positive y-axis pointing up. As shown in Figure 1, one grid square is one unit.

Step 2. Convert the cargo number i into coordinates (x_i, y_i).

Step 3. Calculate the distance from the goods to P&D by substituting the information of the goods into the corresponding distance formula.

Step 4. Calculate the distance between two goods by substituting the subscripts i, j of the two goods into the corresponding distance formula.

Genetic Algorithm to Solve the Problem

The genetic algorithm is an algorithmic idea that imitates the process of biological evolution [24]. It first encodes the goods, then randomly generates an initial population and calculates the fitness value of each individual. The population then evolves through replication, mutation, and crossover. Over many iterations, elitism is controlled to ensure diversity and avoid excessive concentration. Finally, the algorithm produces an optimal or near-optimal solution.

3.1. Coding. The goods are encoded using real-number coding, which effectively avoids invalid codes arising from crossover and mutation. For the coding of the picking trolleys, we assume that the first-level coding of each individual is fixed; the problem is then converted into a knapsack problem, for which binary coding is better suited and which allows two individuals with different numbers of carts to still be crossed. We therefore abandoned real-number coding in favor of binary coding: the i-th binary bit indicates whether the goods at position i + 1 are moved by a new trolley, so the code length is the cargo code length minus one.

Generation of the Initial Population. Groups of N × M are generated, that is, N populations of M individuals each, in order to carry out immigration operations.

Fitness Function. The fitness function is defined from the objective function f(x) = t1 + t2 + km, where t1 is the longest travel time among the carts, t2 is the shortest travel time among the carts, m is the number of carts used, and k is the cost of each cart.

Selection Operator. Under normal circumstances, the selection operator follows the principle of survival of the fittest: the greater an individual's fitness, the greater the chance that its genes are inherited by the next generation, and conversely. Following the roulette-wheel principle, this paper adopts a proportional operator: the probability that an individual in the population is selected is proportional to its fitness value. The fitness values of all individuals are accumulated and normalized, and individuals are then selected by drawing random numbers and choosing the individual whose cumulative interval the number falls into.
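As a concrete illustration of the proportional selection just described, here is a minimal sketch in Python. The fitness transformation is an assumption: since the paper's fitness equation is not reproduced above, the example simply takes the reciprocal of the objective f(x) so that lower-cost individuals receive higher selection probability.

```python
import random

def roulette_select(population, objectives, k):
    """Roulette-wheel (fitness-proportional) selection.

    Fitness is assumed to be 1/f(x) so that smaller objective values are
    favored; fitness values are accumulated and normalized, and each draw
    picks the individual whose cumulative interval contains the random number.
    """
    fitnesses = [1.0 / f for f in objectives]
    total = sum(fitnesses)
    cumulative, acc = [], 0.0
    for fit in fitnesses:
        acc += fit / total
        cumulative.append(acc)
    selected = []
    for _ in range(k):
        r = random.random()
        for individual, bound in zip(population, cumulative):
            if r <= bound:
                selected.append(individual)
                break
    return selected

# Example: three candidate picking sequences with objective values f(x)
print(roulette_select(["s1", "s2", "s3"], [10.0, 20.0, 40.0], k=2))
```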
Crossover Operator. According to the different situations of the two-level coding, two crossover operators are adopted. For the first-level picking-order coding, we use multipoint crossover: because the number of permutations is very large, multipoint crossover increases the range of possible individual combinations and helps keep the algorithm from entering a local optimum while preserving gene richness. The second-level picking-trolley coding has far fewer possibilities because it uses binary coding; with multipoint crossover, invalid chromosomes are more likely to be generated, since the number of carts used after crossover may exceed the predetermined maximum. This paper therefore uses single-point crossover for the second level.

Mutation Operator. Two mutation operators are likewise adopted for the two coding levels. For the second-level picking-trolley coding, we use single-point mutation. In the first-level picking sequence, mutation increases the diversity of the population, so the degree of mutation must be controlled to achieve convergence while still escaping local optima. As in (5), the number of mutated nodes is set adaptively [25] and is controlled within 3 nodes: if the computed value T is less than 1, one node is mutated; if T is greater than 3, only 3 nodes are mutated.

Elitism and Concentration Control. When crossover and mutation produce a new generation, the optimal solution may be lost in the process of evolution. Since many scholars have demonstrated the feasibility of elitism from different angles, this paper incorporates it into the genetic algorithm to simulate biological evolution. However, elitism makes it easy for the algorithm to enter a local optimum, so the concentration of the current optimal solution must be controlled: if the current generation holds the best solution found so far and its concentration is lower than 0.05%, we randomly supplement 0.03% of individuals. This ensures that the optimal solution obtained during the process is preserved, while the concentration does not become so high through elitism that it destroys the diversity of the group.

Catastrophe Algorithm. From the perspective of elitism, we consider the case where the algorithm loses the optimal solution while diverging. Catastrophe is a way to break out of local optima: from the perspective of biological evolution, every new dominant species shares a common prerequisite, namely the decline of the previous one. The catastrophe condition [26] is that the optimal fitness remains the same for N successive generations; when triggered, 95% of the individuals are eliminated and new individuals are randomly generated. However, the algorithm must converge at the end, so catastrophe is disabled in the final period; it should also not happen frequently, or the algorithm degenerates into random search.

3.9. Immigration Algorithm. Whereas a usual genetic algorithm keeps its x individuals in a single population, this paper divides them into several populations. This increases the diversity of genes: a local optimum in one population does not easily destroy the diversity of the others. But these N populations still need to be connected to achieve convergence, so this paper adopts an immigration method in which a probability judgment is made for each population.
When the conditions are met, the optimal solution of that population is copied over randomly chosen individuals in a randomly chosen population. This maintains the diversity of the populations while still allowing convergence within individual regions.

Termination Condition. The termination condition is consistent with the basic genetic algorithm: the algorithm stops after a predetermined number of generations.

Algorithm Structure

Step 1: randomly generate an initial population of N × M.

Step 3.1: calculate the fitness of each individual from the objective function.

Step 3.2: select the group by the roulette-wheel method through (4).

Step 3.3: apply single-point crossover and adaptive multipoint crossover to the two coding layers, respectively, with the number of adaptive points calculated by (5).

Step 3.4: judge whether a newly generated individual meets the elitism condition; if so, the optimal solution is preserved.

Step 3.5: judge whether the newly generated individuals meet the catastrophe condition; if so, regenerate and recode random individuals.

Step 3.6: probabilistically determine whether immigration is required for each population; if so, move its optimal solution onto random individuals in a random population.

Step 3.7: determine whether the required number of generations has been reached; if so, stop and output the optimal solution; if not, jump to Step 3.1.

The flow chart is shown in Figure 2.
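The loop above can be made concrete with a small, self-contained sketch. Everything below is illustrative: the goods coordinates, the single-cart restriction (first-level permutation coding only), the rectilinear distance (the paper's actual Flying-V distance formulas are not reproduced above), the reciprocal fitness, order crossover, and swap mutation are all simplifying assumptions, and the multi-population immigration and catastrophe mechanisms are omitted for brevity.

```python
import random

GOODS = [(2, 3), (5, 1), (7, 4), (1, 6), (4, 8), (6, 7)]  # hypothetical (x, y)
PD = (0, 0)  # warehouse entrance/exit

def route_length(order):
    """Total rectilinear travel: P&D -> goods in 'order' -> P&D."""
    path = [PD] + [GOODS[i] for i in order] + [PD]
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(path, path[1:]))

def fitness(order):
    return 1.0 / route_length(order)  # assumed: shorter route -> fitter

def order_crossover(p1, p2):
    """Keep a slice of p1 in place; fill the rest in p2's order.

    This always yields a valid permutation, avoiding invalid chromosomes."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def evolve(pop_size=30, generations=200, p_mut=0.2):
    n = len(GOODS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]                      # Step 3.1
        parents = random.choices(pop, weights=fits, k=pop_size)   # Step 3.2
        children = [order_crossover(parents[i], parents[(i + 1) % pop_size])
                    for i in range(pop_size)]                     # Step 3.3
        for child in children:
            if random.random() < p_mut:                           # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
        children[0] = best[:]                                     # Step 3.4 elitism
        pop = children
        best = max(pop + [best], key=fitness)
    return best, route_length(best)

print(evolve())  # e.g. a near-optimal picking order and its total distance
```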
Experimental Results and Analysis

The experimental parameter data are given in Table 2. Two simplifying assumptions are made: (1) the average picking time and average unloading time are not included in the calculation; (2) the trolleys are assumed not to collide with one another during operation.

Single-Vehicle Pickup Problem. When the number of picking carts is fixed at 1 and the cost of a picking cart is 0, entering different quantities to be picked up yields the data in Table 3. From Table 3, we can see that when solving the single-vehicle problem, the stability of the algorithm is good, but stability gradually decreases as the number of pick-ups increases. When the number of pick-ups is too large, the objective value becomes larger and the fitness becomes too small to observe easily. In general, however, the algorithm can still approach the optimal solution. Judging by the variance, the algorithm is quite stable in solving the single-vehicle pickup problem.

Fixed Multivehicle Pickup Problem. When the quantity to be picked up is fixed at 20 and the price of a picking cart is 0, entering different numbers of pickup carts yields the data in Table 4. From Table 4, we can see that when solving the multivehicle problem, the stability of the algorithm is related to the number of carts: as the number of carts increases, stability begins to decline because the number of combinations increases. From the nature of combination counts and the last set of data, we can infer that when the number of trolleys reaches half the total number of goods, stability begins to rise again as the number of combinations falls. The experimental results show that the algorithm can approach the optimal solution for multivehicle problems and solve such problems stably.

Number of Recommended Carts. When the quantity to be picked up is fixed at 20 and the maximum number of carts is 10, entering different costs per cart yields the data in Table 5 (note: the average fitness is the average of 10 runs of the algorithm). From Table 5, we can see that as the cost of a cart increases, the recommended number of carts gradually decreases; when the cost tends to infinity, the number of carts becomes 1. At the same time, when the cost is higher, the range of recommended cart numbers shrinks, so the algorithm is more stable and the variance is smaller. The information in Table 5 shows that the algorithm can recommend a path and a number of carts approximating the optimal solution in a relatively stable way. After several generations of evolution, the fitness of the genetic algorithm's population would ideally reach an approximately optimal state, so we consider the results above acceptable.

Conclusion

From the results, we can see some advantages of the genetic algorithm: it searches quickly and randomly without depending on details of the problem domain; starting from a population, the search has potential parallelism and can compare multiple individuals simultaneously, with good robustness; the search is guided by the evaluation function and the process is simple; iterating with a probabilistic mechanism introduces useful randomness; and it is extensible and easy to combine with other algorithms. Optimizing the picking path plays an important role in reducing the operating cost of a logistics distribution center and improving logistics efficiency and customer service. A mathematical model was established for the characteristics of the problem, considering the load constraint and the number of picking carts in practice. A two-layer-coded genetic algorithm was designed to solve the stated problems, achieving the goal of reducing cost by recommending the number and routes of picking carts. Testing suggests that the algorithm is stable on small datasets. It can achieve the goal of improving the circulation efficiency of warehousing and logistics and reducing circulation fees.

Data Availability

The data supporting this study are included in the paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
4,412.8
2021-08-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Hypoxia promotes liver-stage malaria infection in primary human hepatocytes in vitro

ABSTRACT

Homeostasis of mammalian cell function strictly depends on balancing oxygen exposure to maintain energy metabolism without producing excessive reactive oxygen species. In vivo, cells in different tissues are exposed to a wide range of oxygen concentrations, and yet in vitro models almost exclusively expose cultured cells to higher, atmospheric oxygen levels. Existing models of liver-stage malaria that utilize primary human hepatocytes typically exhibit low in vitro infection efficiencies, possibly due to missing microenvironmental support signals. One cue that could influence the infection capacity of cultured human hepatocytes is the dissolved oxygen concentration. We developed a microscale human liver platform composed of precisely patterned primary human hepatocytes and nonparenchymal cells to model liver-stage malaria, but the oxygen concentrations are typically higher in the in vitro liver platform than anywhere along the hepatic sinusoid. Indeed, we observed that liver-stage Plasmodium parasite development in vivo correlates with hepatic sinusoidal oxygen gradients. Therefore, we hypothesized that in vitro liver-stage malaria infection efficiencies might improve under hypoxia. Using the infection of micropatterned co-cultures with Plasmodium berghei, Plasmodium yoelii or Plasmodium falciparum as a model, we observed that ambient hypoxia resulted in increased survival of exo-erythrocytic forms (EEFs) in hepatocytes and improved parasite development in a subset of surviving EEFs, based on EEF size. Further, the effective cell surface oxygen tensions (pO2) experienced by the hepatocytes, as predicted by a mathematical model, were systematically perturbed by varying culture parameters such as hepatocyte density and height of the medium, uncovering an optimal cell surface pO2 to maximize the number of mature EEFs. Initial mechanistic experiments revealed that treatment of primary human hepatocytes with the hypoxia mimetic, cobalt(II) chloride, as well as a HIF-1α activator, dimethyloxalylglycine, also enhances P. berghei infection, suggesting that the effect of hypoxia on infection is mediated in part by host-dependent HIF-1α mechanisms.

INTRODUCTION

Malaria affects 250 million people and causes approximately a million deaths each year (World Health Organization, 2011). The liver stage of malaria is an attractive target for the development of antimalarial drugs and vaccines (Prudêncio et al., 2006;Mazier et al., 2009), especially with the goal of malaria eradication, but is relatively poorly understood. In vitro models that recapitulate the liver stages of human malaria are needed to identify compounds that have potential antimalarial activity, but most of these models are dependent on cell lines (Gego et al., 2006;Meister et al., 2011) due to limitations in the in vitro culture of primary adult hepatocytes. There is evidence that mimicking the in vivo hepatic microenvironment, such as by adding cell-cell interactions and cell-matrix interactions and controlling tissue microarchitecture, can improve in vitro models of the liver (Dunn et al., 1989;Sivaraman et al., 2005;Khetani and Bhatia, 2008;Kidambi et al., 2009).
For example, micropatterned co-cultures (MPCCs) of primary human hepatocytes (PHHs) and supporting stromal fibroblasts result in stable hepatocyte function, including albumin secretion, urea production and cytochrome P450 levels, for several weeks compared with hepatocytes alone (Khetani and Bhatia, 2008). Another feature of the in vivo hepatic microenvironment is the presence of a range of oxygen tensions (Wölfle et al., 1983), which is thought to be a factor that contributes to hepatic zonation, a compartmentalization of functions along the axis of perfusion (Jungermann and Kietzmann, 1996;Jungermann and Kietzmann, 2000). Previous studies have shown that exposing mixed populations of primary rat hepatocytes to physiological gradients of oxygen tension can induce compartmentalization in vitro, render the cells selectively susceptible to zonal hepatotoxins (Allen and Bhatia, 2003;Allen et al., 2005) and recapitulate the zonated patterns of carbohydrate-metabolizing enzyme gene expression in vitro (Wölfle et al., 1983;Jungermann and Kietzmann, 1996;Kietzmann and Jungermann, 1997). Thus, in vitro liver-stage malaria culture platforms might be improved by altering microenvironmental oxygen concentrations. Ambient oxygen concentrations have a broad spectrum of biological impact, influencing diverse pathways from homeostasis to development (Semenza, 2011). The role of oxygen has been explored in a range of infectious diseases. For instance, hyperoxia reduces certain bacterial and Apicomplexan infections in vitro or in vivo (Park et al., 1992;Tsuneyoshi et al., 2001;Arrais-Silva et al., 2006), whereas hypoxia promotes hepatitis C virus infection in vitro (Vassilaki et al., 2013) and Trypanosoma lewisi infections in vivo (Hughes and Tatum, 1956b). In the malaria field, previous studies have probed the effect of atmospheric oxygen on parasitemia in rodent and avian disease models. In particular, Plasmodium berghei-infected rats or Plasmodium cathemerium-infected canaries subjected to hypoxia exhibited increased levels of parasitemia (Hughes and Tatum, 1955;Hughes and Tatum, 1956a), whereas hyperoxia decreased P. berghei parasitemia (Rencricca et al., 1981;Blanco et al., 2008) and prevented early death caused by experimental cerebral malaria in the P. berghei-ANKA mouse model (Blanco et al., 2008). Furthermore, continuous in vitro culture of the blood stages of Plasmodium falciparum was first achieved by reducing atmospheric oxygen levels (Trager and Jensen, 1976), and subsequent studies have characterized this microaerophilic nature of blood-stage P. falciparum (Torrentino-Madamet et al., 2011). In this study, we explored the influence of cell surface oxygen on liver-stage malaria infection of PHHs. We used an in vitro model of hepatocyte culture that is phenotypically stable, responsive to ambient oxygen and supports the liver stage of malaria (March et al., 2013). Using this model system and a mathematical framework to estimate the cell surface oxygen partial pressures (pO2) under a variety of experimental manipulations, we show that oxygen has a profound impact on the Plasmodium liver stage. In particular, both infection efficiency and development of exo-erythrocytic forms (EEFs) can be perturbed by altering cell surface oxygen concentrations. We identified an optimal cell surface oxygen level for maximizing infection and demonstrate that host HIF-1α is at least partially responsible for this response.
In vivo EEF development correlates with hepatic oxygen gradients

Oxygen tensions in the hepatic sinusoids vary from 30-75 mmHg between the perivenous and periportal regions, respectively (Wölfle et al., 1983). To investigate whether this variation in oxygen concentration exerts an influence on liver-stage Plasmodium infection in vivo, C57BL/6 mice were infected with GFP-expressing Plasmodium yoelii sporozoites, a host-parasite combination that supports robust liver-stage infection (Douradinha et al., 2007), and their livers collected 46 hours post-infection. Two populations of P. yoelii EEFs were defined to test the hypothesis that the hepatic sinusoidal variation of oxygen concentration correlates with EEF growth. EEFs were defined as periportal EEFs if they were found within eight cell-lengths of the hepatic portal triad, and perivenous EEFs if they were found within eight cell-lengths of the hepatic central vein (Fig. 1A). This definition minimizes the likelihood of an EEF being simultaneously defined as periportal and perivenous, taking into consideration that the number of hepatocytes between the portal triad and the central vein of a mouse liver is ~20. Immunohistochemical analysis of infected liver sections (Fig. 1B) revealed that the maximal size of perivenous P. yoelii EEFs was significantly larger than that of periportal P. yoelii EEFs (Fig. 1C), suggesting that oxygen concentration could be a parameter that influences liver-stage Plasmodium infection of primary hepatocytes in vitro.

Ambient hypoxia increases survival and growth of liver-stage malaria parasites in PHH MPCCs

To investigate whether hypoxia influences P. berghei infection of human liver cells in vitro, micropatterned co-cultures of primary human hepatocytes and supporting stromal fibroblasts were maintained at 4% O2 for 24 hours before infection. A 3-hour exposure to P. berghei sporozoites was followed by an additional 48 hours of hypoxic culture, at which point infection efficiency was determined based on Plasmodium HSP70 immunofluorescence. The number of P. berghei EEFs per hepatocyte island was elevated in response to hypoxic incubation of PHHs before, during and after infection (Fig. 2A). A significant upward shift in the size distribution of P. berghei EEFs in hypoxic cultures compared to normoxic cultures was also observed (Fig. 2C,E). This pattern of improved infectivity was observed in more than one lot of cryopreserved PHHs (Fig. 2). Because P. berghei liver-stage infections mature at 55-65 hours post-infection in vitro (Graewe et al., 2011), P. berghei EEF sizes were quantified at 56 hours and 65 hours post-infection to address the possibility that hypoxia could be speeding up parasite development instead of increasing the potential for parasite growth. P. berghei EEFs were larger in hypoxic cultures at 48, 56 and 65 hours post-infection (supplementary material Fig. S1F). Furthermore, the number of P. berghei EEFs per hepatocyte island was consistently higher in hypoxic cultures at 48, 56 and 65 hours post-infection (supplementary material Fig. S1E). Given that both EEF numbers and sizes are larger in hypoxic cultures throughout the late liver stages of P. berghei infection, this suggests that the total number of potential merozoites is larger under hypoxia than under normoxia. Consistent with this prediction, the number of nuclei in P. berghei EEFs at 65 hours post-infection was significantly higher in hypoxic cultures compared with the normoxic control (supplementary material Fig. S1H).

Clinical issue

Malaria is a mosquito-borne parasitic disease that impacts millions of people worldwide and that causes about a million deaths a year.
Several different Plasmodium parasites cause malaria but all have a complex life cycle. Plasmodium parasites enter the human body as sporozoites, which travel to the liver where they develop and grow without causing any symptoms. After a few days, merozoites are released from the liver cells and invade red blood cells. Here they replicate rapidly, before bursting out and invading more red blood cells. The recurring flu-like symptoms and more serious complications of malaria such as anemia and organ damage are caused by this cyclical increase in the parasitic burden.

Results

The liver stage of the Plasmodium life-cycle is an attractive target for drug treatment. However, to develop drugs that attack this stage, model systems that mimic normal human liver responses to parasitic infection are needed. Existing models are generally hard to infect, in part because cells that are grown in tissue culture are generally exposed to higher levels of oxygen than they would experience inside the body. In this study, the authors first show that the growth of malaria parasites in the livers of infected mice correlates with the natural variation in oxygen levels within the liver. They then show that the exposure of micropatterned co-cultures of primary human hepatocytes and supporting stromal cells to different levels of oxygen leads to profound changes in malaria infection efficiency and parasite development. Finally, they report that the effect of different oxygen levels on the infection of liver cells by malaria parasites is partly due to the activation of the HIF-1 intracellular oxygen-sensing signaling pathway.

Implications and future directions

These findings, in combination with existing literature on the impact of oxygen levels on the maintenance of in vivo-like hepatocyte functions in vitro, demonstrate that optimization of the oxygen levels experienced by human liver cells grown in tissue culture is needed to maximize malaria infection rates. This new information can now be used to develop improved models of liver-stage malaria for antimalarial drug development. The study also identifies one oxygen-dependent host mechanism that influences liver-stage malaria parasite development. Future studies on this mechanism and on other oxygen-dependent host or parasite mechanisms that might potentially affect the liver stage of Plasmodium development should further facilitate antimalarial drug development.

P. berghei EEFs were also able to develop normally under hypoxia, as shown by the expression of the mid-liver-stage marker, PbMSP-1, at 65 hours post-infection and the appearance of various EEF morphologies characteristic of late liver-stage EEFs (supplementary material Fig. S2). Moreover, the percentage of MSP1-positive P. berghei EEFs was significantly higher in hypoxic cultures at 56 and 65 hours post-infection (supplementary material Fig. S1G), suggesting that the EEFs progress into the later phases of the liver stage more successfully under hypoxia. Importantly, the effect of hypoxia on EEF size translated to the human Plasmodium species P. falciparum, as shown by the finding that ambient hypoxia increased the size of P. falciparum EEFs in hepatocytes at both 4 and 6 days post-infection (Fig. 2G,H). However, the number of P. falciparum EEFs did not increase in hypoxic cultures maintained at 4% O2 (supplementary material Fig. S1D).
Optimization of cell surface oxygen tension for in vitro liver-stage malaria infection

Given the observed impact of prolonged exposure to a reduced oxygen concentration, we sought an optimal set of conditions that might maximize the elevated infection of PHHs. By applying a mathematical model of diffusion and reaction solved at steady-state conditions to PHH MPCCs (Fig. 3A; supplementary material Fig. S3), it was estimated that the typical cell surface pO2 when cultures are incubated at normoxia ranges from 110 to 130 mmHg (Table 1). In contrast, in vivo blood pO2 (not at the cell surface) ranges from 30 to 75 mmHg in the hepatic sinusoid (Wölfle et al., 1983). Therefore, culture at ambient hypoxia could improve liver-stage malaria infection by reducing cell surface pO2 to a more physiologically relevant level. To test this hypothesis, a Hypoxyprobe™ assay that incorporates a hypoxic marker, pimonidazole hydrochloride (Varghese et al., 1976), was conducted to compare the cell surface pO2 in PHHs incubated at either normoxia or ambient hypoxia. Consistent with our hypothesis, incubation of PHHs at ambient hypoxia results in an increase in Hypoxyprobe™ staining relative to normoxia-cultured MPCCs (Fig. 3B), confirming that ambient hypoxia indeed results in a decrease in the cell surface pO2 experienced by the hepatocytes. Cell surface pO2 of MPCCs can also be altered by modifying parameters such as media height and hepatocyte density (Fig. 3A). The model predicts that cell surface pO2 decreases as media height increases (supplementary material Fig. S3B). Indeed, elevating the media height in wells of normoxic cultures resulted in an increase in Hypoxyprobe™ staining at the cell surface (supplementary material Fig. S4A,B). The greater media height also led to increased numbers of P. berghei EEFs at 48 hours post-infection (supplementary material Fig. S4C), collectively supporting the hypothesis that the effects of ambient hypoxia on in vitro liver-stage malaria infection efficiencies are mediated by a decrease in the effective cell surface pO2 experienced by the hepatocytes. Modeling also predicts that cell surface pO2 will decrease as cell density increases (Fig. 3C; supplementary material Fig. S2A). However, modifications to hepatocyte density in a conventional monolayer culture might also influence infection efficiency due to the resulting changes in hepatocyte survival, polarization and morphology, rather than in response to changes in cell surface pO2. To vary hepatocyte density while preserving the homotypic interactions necessary for hepatocyte survival and functional maintenance, the density of the hepatocyte island patterning was varied in MPCCs. These modifications led to perturbations of the cell surface pO2 as predicted by the model, based on Hypoxyprobe™ staining results (Fig. 3C). The simultaneous variation of both hepatocyte island density and atmospheric oxygen level permits fine-tuning of cell surface oxygen levels spanning four orders of magnitude (supplementary material Table S1). Infections with P. yoelii across this range of conditions yield a monotonic increase in total EEFs as cell surface pO2 decreases (Fig. 3E). However, a threshold cell surface pO2 is observed at 5-10 mmHg, below which the number of mature EEFs (diameter >10 μm) decreases as cell surface pO2 declines (Fig. 3D).
This biphasic relationship between the number of mature EEFs and cell surface pO2 suggests that there is an optimal cell surface pO2 for maximizing the number of mature EEFs in infected MPCCs. The combination of the optimal hepatocyte island density under ambient hypoxia (4% O2), which gives rise to the optimal cell surface pO2 of 5-10 mmHg, was hence used for subsequent experiments.

Kinetics of hypoxic treatment alters liver-stage malaria infection in vitro

The hypoxia experiments performed thus far had exposed the PHH MPCCs to hypoxia throughout the 24 hours before infection, during infection (0-3 hours) and after infection (3-48 hours), termed the priming, invasion and development phases, respectively. To assay whether improved infectivity requires each of these three phases of hypoxic treatment, MPCCs were incubated at ambient hypoxia over varying portions of the assay (Fig. 4A). Increased numbers of EEFs at 48 hours post-infection were only observed when the infected MPCCs were cultured under hypoxia during the invasion and development phases (Fig. 4B, conditions A, B, E). In contrast, MPCCs pre-treated with hypoxia before infection and subsequently returned to normoxia (Fig. 4B, conditions C, D) did not exhibit an increase in EEF number. These findings suggest that hypoxia treatment improves late-stage infection rates by reducing the attrition rate of EEFs rather than promoting the initial susceptibility of the host hepatocytes to sporozoite invasion. However, hypoxia over varying portions of the assay did not change the proportion of large EEFs 48 hours post-infection (Fig. 4C).

Hypoxia does not increase sporozoite-dependent or host-dependent invasion

To examine whether the hypoxia-mediated change in hepatocyte infectivity stems from an impact on sporozoite function, sporozoite gliding motility and sporozoite entry into hepatocytes were assayed. Ambient hypoxia did not result in a significant difference in the gliding motility of P. berghei sporozoites (Fig. 4D), and hypoxic treatment of hepatocytes did not change the number of sporozoites that successfully entered hepatocytes (Fig. 4E), suggesting that hypoxia does not improve late-stage infection efficiencies via sporozoite- or host-mediated increases in the initial invasion rate, but rather by affecting the ability of the host cell to support EEF survival and growth.

Host HIF-1α induction promotes EEF survival in infected hepatocytes

The hypoxic response of mammalian cells is largely mediated by the hypoxia-inducible factor-1 (HIF-1) pathway (Semenza, 2012). Consistent with the reported literature, gene set enrichment analysis (GSEA) of PHH MPCCs incubated at ambient hypoxia revealed a marked enrichment for the expression of genes that are transcriptionally regulated by HIF-1α relative to normoxic MPCCs (supplementary material Fig. S6A). Cobalt(II) chloride is a hypoxia mimetic that has been reported to induce the intracellular stabilization of HIF-1α and lead to the transcriptional activation of downstream hypoxia-responsive genes (Jaakkola et al., 2001). PHH MPCCs were therefore treated with cobalt(II) chloride at normoxia in three different combinations of the priming, invasion and development phases (Fig. 5A). Cobalt(II) treatment of PHH MPCCs at normoxia in any of the three combinations tested resulted in an increased number of P. berghei EEFs at 48 hours post-infection, with the greatest effect observed if cobalt(II) was present throughout all three phases of priming, invasion and development (Fig. 5B).
Of note, although ambient hypoxia (4% O2) consistently led to the emergence of a subset of larger EEFs relative to normoxic controls, cobalt(II) treatment did not fully replicate this outcome (Fig. 5C; supplementary material Fig. S5A). Under normoxia, HIF-1α is constitutively marked for proteasomal degradation by prolyl hydroxylase (PHD). Inhibition of PHD by a small molecule, dimethyloxalylglycine (DMOG), results in HIF-1α stabilization and the associated downstream host hypoxic responses (Jaakkola et al., 2001). GSEA of hypoxic MPCCs also shows a marked enrichment for the expression of a set of genes that are upregulated under DMOG treatment (supplementary material Fig. S6B) (Elvidge et al., 2006). Consistent with the effect of cobalt(II) treatment on P. berghei infection at normoxia, PHH MPCCs that were treated with DMOG at normoxia demonstrate increased numbers of P. berghei and P. yoelii EEFs at 48 hours post-infection (Fig. 5D,E), with the number of P. berghei EEFs increasing in a dose-dependent fashion with DMOG concentration (supplementary material Fig. S5B). However, DMOG treatment did not lead to the emergence of a subset of larger EEFs compared to the untreated control, in contrast to ambient hypoxia (supplementary material Fig. S5C). Further increases in DMOG concentration inhibited EEF development (supplementary material Fig. S5C), which is reminiscent of the effect of extremely low levels of pO2 on the number of well-developed EEFs (Fig. 3D). Together, these data suggest that intermediate levels of HIF-1α activation in the host hepatocyte support EEF survival but not EEF growth, and that higher levels of HIF-1α might inhibit EEF growth and mediate the biphasic effect of pO2 on EEF size observed in earlier experiments.

DISCUSSION

Using an in vitro model of primary hepatocyte culture that stabilizes PHH function, is oxygen-responsive and can be infected with liver-stage malaria, we applied a mathematical framework to estimate cell surface oxygen tensions under a variety of experimental manipulations. We have shown that the cell surface oxygen concentration experienced by primary adult human hepatocytes in vitro influences their ability to support a productive liver-stage malaria infection by P. berghei, P. yoelii and P. falciparum. Moreover, we identified an optimal cell surface oxygen level (predicted cell surface pO2 5-10 mmHg) for maximizing infection. More extreme levels of hypoxia (predicted cell surface pO2 <5 mmHg) resulted in increased late-stage parasite survival but arrested parasite development. The effects of hypoxia on late-stage EEF survival, but not EEF development, appear to be regulated in part by host-dependent HIF-1α mechanisms. Establishing an in vitro model of liver-stage malaria has been an ongoing challenge for the field, due in part to the relatively poor maintenance of hepatic functions by existing culture platforms. With the development of the PHH MPCC system, it is now possible to achieve robust liver-stage malaria infection in vitro (March et al., 2013), but further optimization of infection efficiency remains advantageous. Our mathematical model predicts that MPCCs are hyperoxic under conventional culture conditions, with estimated cell surface pO2 ranging from 110 to 130 mmHg (Table 1), whereas in vivo oxygen tensions in the liver range from 30 to 75 mmHg (Wölfle et al., 1983;Kietzmann and Jungermann, 1997).
We have previously shown that achieving more physiological replication of the in vivo environment can improve hepatocyte function and disease-modeling capacity in vitro (Allen et al., 2005). Thus, we hypothesized that liver-stage malaria infection might be more robust in vitro in the presence of atmospheric hypoxia. Indeed, the current observations that the sizes of P. berghei, P. yoelii and P. falciparum EEFs increase in PHHs under hypoxia in vitro are consistent with previous observations that primary hepatocytes respond to physiologically relevant oxygen gradients imposed upon them in vitro to recapitulate in vivo zonation phenotypes that are otherwise not observed in vitro (Allen et al., 2005). The observation that P. berghei and P. yoelii, but not P. falciparum, demonstrate increased numbers of EEFs under hypoxia suggests that the kinetics and extent of hypoxia exposure required for increased survival of the human malaria parasite differ from those of the rodent malaria species. The finding that there is an optimum cell surface pO2 (5-10 mmHg) for liver-stage malaria infection in vitro is consistent with the histopathology findings from P. yoelii-infected mouse liver sections, which show that EEFs in the perivenous region, which has the lowest sinusoidal oxygen tension of 30 mmHg, are larger than those in the periportal region (Fig. 1). Intriguingly, this optimum range of cell surface pO2 for PHH infection in vitro is lower than the 30-75 mmHg (Wölfle et al., 1983) reported in hepatic sinusoids in vivo. One possible reason for this discrepancy is a lower hepatocyte surface pO2 in vivo than that previously measured in the hepatic sinusoid. This could be due either to the unsteady perfusion of the liver, which arises from the pulsatile flow that has been observed in vivo (McCuskey et al., 1983), or to the significant oxygen consumption by the endothelium in vivo (Santilli et al., 2000). This hypothesis is supported by the observations that liver sections obtained from mice perfused with Hypoxyprobe™ show significant Hypoxyprobe™ adduct accumulation in the pericentral regions (Arteel et al., 1995) and that Hypoxyprobe™ forms such adducts only at pO2 <10 mmHg (Varghese et al., 1976). A second possible reason is that the optimal in vitro pO2 for malaria infection could simply be different from in vivo hepatic pO2. This could be because our in vitro model is missing key in vivo microenvironmental cues (growth factor gradients and cycling insulin/glucagon metabolism) that might make more extreme pO2 perturbations necessary to optimize malaria infection in vitro. This disparity is consistent with the fact that in vitro infections, although improved under hypoxia, still require much higher multiplicities of infection than in vivo infections. It is also possible that the in vivo pO2 is not necessarily optimal, because blood-stage malaria parasitemia in rodents can be further increased under atmospheric hypoxia that simulates high-altitude atmospheres (Hughes and Tatum, 1956a). A third reason lies in the possibility that our mathematical model underestimates cell surface pO2 in vitro due to the assumption that only diffusion transports oxygen to the cell surface. Furthermore, our mathematical model assumes that hepatocytes exhibit a constant oxygen consumption rate (OCR), which can vary with species, donor, time in culture (Rotem et al., 1994;Bhatia et al., 1996) and culture parameters like density and co-culture cell type.
The finding that liver-stage malaria infection in vitro has an optimal oxygen tension is also consistent with the microaerophilic nature of the blood stages of P. falciparum, which exhibit a propensity for better growth in vitro under ambient hypoxia (Trager and Jensen, 1976;Briolant et al., 2007) and in fact demonstrate optimum growth at an in vitro pO2 (2-3%, 15-25 mmHg) (Scheibel et al., 1979) that is lower than in vivo pO2 levels in the blood (4-13%, 30-100 mmHg) (Tsai et al., 2003). To extrapolate our findings to other in vitro liver-stage models, the appropriate atmospheric pO2 should be determined within a mathematical framework similar to that described for MPCCs, taking into account culture parameters such as effective hepatocyte density and oxygen diffusion distance (height of medium). The beneficial effect of hypoxia on in vitro liver-stage malaria infection could be due to changes in the host cell that increase host cell susceptibility to initial parasite invasion or that favor parasite survival or development, or to changes in the parasite itself that promote its own ability to survive and thrive in the host cell. Sporozoite entry assays (Fig. 4E) and infection of hepatocytes exposed to hypoxia only prior to invasion but not after infection (Fig. 4C) suggest that hypoxia does not increase hepatocyte susceptibility to sporozoite infection. Nonetheless, gene set enrichment analysis of PHH MPCCs incubated at ambient hypoxia versus normoxia showed a marked enrichment for the expression of HIF-1α-related genes in hypoxic MPCCs (supplementary material Fig. S6A). HIF-1α plays a major role in the induction of cellular responses that mediate the adaptation of the host cell to hypoxic conditions. This response includes an increased expression of glucose transporters and multiple enzymes responsible for a metabolic shift towards anaerobic glycolysis (the Warburg effect), as well as the downregulation of mitochondrial respiration. The latter in turn reduces mitochondrial oxygen consumption and the resultant generation of reactive oxygen species that occurs due to inefficient electron transport under hypoxic conditions (Weidemann and Johnson, 2008;Semenza, 2012). Among other Apicomplexan infections, host HIF-1α has been shown to be essential for Toxoplasma gondii survival and growth in host cells cultured at physiological oxygen levels (3% O2) (Spear et al., 2006), and is also necessary for the maintenance of Leishmania amazonensis parasitemia in human macrophages in vitro (Degrossoli et al., 2007). In fact, Toxoplasma and Leishmania infection increase HIF-1α protein levels as well as HIF-1α-regulated expression of glycolytic enzymes and glucose transporters (Spear et al., 2006;Degrossoli et al., 2007), suggesting that these Apicomplexan parasites actively induce host HIF-1α, presumably to favor their survival or growth. Pharmacological activation of host HIF-1α in infected MPCCs by CoCl2 and DMOG increased EEF survival (Fig. 5B,D) but did not increase the EEF size distributions (Fig. 5C,E), suggesting that the effects of ambient hypoxia on liver-stage malaria EEF numbers and EEF sizes are driven by distinct mechanisms, with host HIF-1α playing a role in maintaining the survival of EEFs but not necessarily driving EEF growth. This hypothesis is supported by the observations that the total number of EEFs increased monotonically with decreasing cell surface pO2 (Fig. 3E), whereas the number of well-developed EEFs exhibited a biphasic relationship with decreasing cell surface pO2 (Fig. 3D).
However, in the absence of genetic perturbation of host HIF-1α, the possibility that hypoxia, CoCl2 or DMOG impact alternative pathways in the parasite that mediate the observed infection phenotype cannot be excluded. One possible mechanism that could explain the effect of hypoxia on EEF size is the activation of the AMPK pathway in the host cell. AMPK activation is known to induce autophagy in mammalian cells (Liang et al., 2007;Kim et al., 2011), whereas autophagy of Plasmodium EEFs in human hepatoma cells is known to occur and might be necessary for the growth of Plasmodium EEFs (Eickel et al., 2013). AMPK activation also mediates mitophagy and mitochondrial biogenesis (Mihaylova and Shaw, 2011), which results in increased mitochondrial renewal and might promote Plasmodium EEF development. In support of this hypothesis, Toxoplasma gondii, another Apicomplexan parasite, is known to tether host mitochondria to its parasitophorous vacuole membrane (Sinai and Joiner, 2001), suggesting that host mitochondria are necessary for Toxoplasma growth in the host cell. In addition to host-mediated mechanisms, the malaria parasite might contain either oxygen sensors that directly respond to hypoxia or indirect mechanisms that limit its ability to respond to oxidative stress. It is difficult to distinguish the parasite-specific and the host-specific responses to hypoxia. For example, intraerythrocytic P. falciparum is heavily dependent on antioxidant systems despite its almost totally fermentative lifestyle, yet it lacks significant antioxidant enzymes like catalase and glutathione peroxidase, which play major protective roles in mammalian cells (Müller, 2004;Vonlaufen et al., 2008). This suggests that the Plasmodium liver stage might also be predisposed to being overwhelmed by environmental oxidants and that hypoxia might reduce the energy expenditure for the maintenance of redox balance in the EEF. A caveat of our findings is that changes in atmospheric oxygen could result in modulations beyond simply adjusting cell surface oxygen levels. The modulation of hepatocyte metabolism under hypoxia might result in different rates of nutrient consumption and waste generation, which could lead to secondary effects like changes in pH. This study also does not specifically identify the role of the co-culture nonparenchymal cell type in the infection phenotype, and does not use a liver-derived nonparenchymal cell type like sinusoidal endothelial cells or Kupffer cells. The in vivo histopathology findings are correlative and not causal, as the presence of an oxygen gradient along the sinusoid is only one of many gradients that simultaneously exist in the liver. Thus, it is challenging to decisively untangle the various contributions of oxygen gradients in our observations, but oxygen is more likely to be the driver of these other sinusoidal gradients than vice versa. More work is required to characterize the role of HIF-1α in Plasmodium infection of PHHs, including performing siRNA-mediated knockdown and overexpression of HIF-1α in primary hepatocytes in vitro, or using a HIF-1α knockout mouse. Furthermore, the downstream mechanisms of HIF-1α that are ultimately responsible for the effect of hypoxia on Plasmodium infection of PHHs remain to be uncovered. These mechanisms could include increases in glycolysis or iron uptake by hepatocytes, which could lead to an elevation in intracellular glucose or iron levels that are accessible to the Plasmodium EEF.
Other mechanisms that could contribute to the effect of hypoxia on infection include AMPK activation in host cells, leading to a starvation response that decreases intracellular ROS levels and frees up resources for the malaria EEF. In an era of a renewed effort towards global malaria eradication, the finding that oxygen levels influence in vitro Plasmodium liver-stage infection of PHHs, in combination with existing literature on the impact of oxygen on the maintenance of in vivo-like hepatocyte functions in vitro, highlights the importance of optimizing oxygen levels experienced by PHHs in vitro so as to develop improved in vitro models of liver-stage malaria for antimalarial drug development.

Reagents and cell culture

Dimethyloxalylglycine (DMOG) was obtained from Cayman Chemicals (Ann Arbor, MI), and cobalt(II) chloride was obtained from Sigma (St Louis, MO). Cryopreserved PHHs were purchased from vendors permitted to sell products derived from human organs procured in the United States by federally designated Organ Procurement Organizations. CellzDirect (Invitrogen, Grand Island, NY) was the vendor used in this study. Human hepatocyte culture medium was high-glucose Dulbecco's modified Eagle's medium (DMEM) with 10% (v/v) fetal bovine serum (FBS), 1% (v/v) ITS™ (BD Biosciences), 7 ng/ml glucagon, 40 ng/ml dexamethasone, 15 mM HEPES, and 1% (v/v) penicillin-streptomycin. J2-3T3 murine embryonic fibroblasts (a gift of Howard Green, Harvard Medical School) were cultured at <15 passages in fibroblast medium comprising DMEM with high glucose, 10% (v/v) bovine serum and 1% (v/v) penicillin-streptomycin.

MPCCs of primary human hepatocytes and supportive stromal cells

Coverslips (12 mm) placed into tissue culture polystyrene 24-well plates or glass-bottomed 96-well plates were coated homogeneously with rat tail type I collagen (50 μg/ml) and subjected to soft-lithographic techniques (Khetani and Bhatia, 2008) to pattern the collagen into micro-islands (of diameter 500 μm) that mediate selective hepatocyte adhesion. To create MPCCs, cryopreserved PHHs were thawed and pelleted by centrifugation at 100×g for 6 minutes, assessed for viability using Trypan Blue exclusion (typically 70-90%), and then seeded on collagen-micropatterned plates in DMEM. The cells were washed with DMEM 2-4 hours later and the medium replaced with human hepatocyte culture medium. 3T3-J2 murine embryonic fibroblasts were seeded (40,000 cells in each well of a 24-well plate and 7000 cells in each well of a 96-well plate) in human hepatocyte medium 3 hours after Plasmodium sporozoite infection. Medium was replaced every 24 hours.

Sporozoites

P. berghei ANKA and P. yoelii sporozoites were obtained by dissection of the salivary glands of infected Anopheles stephensi mosquitoes obtained from the insectaries at New York University (New York, NY) or Harvard School of Public Health (Boston, MA). P. falciparum sporozoites were obtained by dissection of the salivary glands of infected Anopheles gambiae mosquitoes obtained from the insectary at Johns Hopkins School of Public Health (Baltimore, MD).

Infection of MPCCs

P. berghei, P. yoelii or P. falciparum sporozoites from dissected mosquito glands were centrifuged at 3000 rpm for 5 minutes onto micropatterned primary hepatocytes cultured without fibroblasts for 2 or 3 days before infection, at a multiplicity of infection of 1 to 3. After incubation at 37°C and 5% CO2 for 3 hours, the wells were washed twice and J2-3T3 fibroblasts were added to establish the MPCCs.
Medium was replaced daily. Samples were fixed at 48, 56 or 65 hours post-infection with P. berghei and P. yoelii, and 4 or 6 days post-infection with P. falciparum.

Immunofluorescence assay

Infected MPCCs were fixed with −20°C methanol for 10 minutes at 4°C, washed thrice with PBS, blocked with 2% BSA in PBS for 30 minutes and then incubated for 1 hour at room temperature with a primary antibody.

Sporozoite gliding assay

Motility of cryopreserved sporozoites was determined in each batch to define the number of infective sporozoites. Sporozoite gliding was evaluated with 30,000 sporozoites for 40 minutes in complete DMEM, at 37°C, on glass coverslips pre-coated for 1 hour at 37°C with an antibody against P. berghei circumsporozoite protein (PbCSP) (clone 3D11, 10 μg/ml). Sporozoites were subsequently fixed in 4% paraformaldehyde (PFA) for 10 minutes and stained with anti-PbCSP antibody. The percentage of sporozoites associated with CSP trails was visualized by fluorescence microscopy. Quantification was performed by counting the average percentage of sporozoites that performed at least one circle.

Double-staining assay for sporozoite entry

At 3 hours post-infection, MPCCs were fixed and stained using a double-staining protocol as previously described (Rénia et al., 1988). Briefly, to label extracellular sporozoites, the samples were first fixed with 4% paraformaldehyde for 10 minutes at room temperature, blocked with 2% BSA in PBS, incubated with a primary mouse anti-PbCSP (clone 3D11, 10 μg/ml), washed thrice in PBS and incubated with a secondary goat anti-mouse Alexa Fluor 488 conjugate. This was followed by permeabilization with −20°C methanol for 10 minutes at 4°C, incubation with the same primary mouse anti-PbCSP, washing thrice with PBS, and incubation with a secondary goat anti-mouse Alexa Fluor 594 conjugate. This second step labels both intracellular and extracellular sporozoites. The samples were counterstained with Hoechst and mounted on glass slides as described above. The number of invaded sporozoites (stained green only) in PHHs was quantified.

Gene expression microarray analysis

MPCCs established from two different donor lots of PHHs were incubated under ambient hypoxia overnight (18-24 hours), and total RNA was extracted using TRIZOL and a Qiagen RNA clean-up kit. The RNA was analyzed using a Bioanalyzer before being labeled with Cy3 and Cy5 for the normoxic and hypoxic samples, respectively. The labeled RNA from biological triplicates was loaded onto an Agilent (Santa Clara, CA) SurePrint G3 Human Gene Expression Microarray. The microarray data were analyzed by performing a Gene Set Enrichment Analysis (GSEA), which determines whether a predefined set of genes shows statistically significant differences between two biological conditions (Subramanian et al., 2005), applying a false discovery rate of 25%.

Mathematical model

To estimate the cell surface oxygen tensions in MPCCs, the transport and consumption of oxygen was modeled as a one-dimensional reaction-diffusion system, as described previously. The average number of hepatocytes per hepatocyte island in the MPCCs was determined by manual counts with light microscopy.
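As a rough illustration of that one-dimensional estimate (and under the simplifying assumptions listed below), the steady-state balance between diffusion through the medium column and zeroth-order consumption at the cell surface gives a linear pO2 drop across the medium. The parameter values here are generic, literature-style placeholders, not the study's fitted values.

```python
# Minimal sketch: steady-state 1-D diffusion with consumption at the base.
D_O2 = 3.0e-9     # oxygen diffusivity in medium, m^2/s (approximate)
K_SOL = 1.3e-3    # oxygen solubility in medium, mol m^-3 mmHg^-1 (approximate)
OCR = 4.0e-16     # consumption, mol s^-1 per hepatocyte (assumed, zeroth-order)

def surface_po2(ambient_mmHg, cells_per_m2, medium_height_m):
    """At steady state, the diffusive flux through the medium column equals
    the areal consumption, giving a linear pO2 drop across the medium."""
    flux = OCR * cells_per_m2                        # mol m^-2 s^-1
    drop = flux * medium_height_m / (D_O2 * K_SOL)   # mmHg
    return max(ambient_mmHg - drop, 0.0)             # clip: model breaks near 0

# Taller medium or denser islands lower the cell surface pO2:
print(surface_po2(140, 1e8, 1.5e-3))  # ~125 mmHg at a standard medium height
print(surface_po2(140, 1e8, 3.0e-3))  # ~109 mmHg with the height doubled
```

With these placeholder values the normoxic estimates land in the 110-130 mmHg range quoted above, and the sketch reproduces the qualitative predictions that increasing medium height or hepatocyte density decreases cell surface pO2.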
Second, as the oxygen consumption rate of fibroblasts is only one-tenth that of primary rat hepatocytes, and the oxygen consumption rate of random co-cultures of hepatocytes and fibroblasts is similar to that of hepatocytes alone (Allen et al., 2005), the oxygen consumption of MPCCs was assumed to be that of hepatocytes alone. Third, the oxygen consumption rates were assumed to be independent of culture format and constant throughout the infection experiments.

Hypoxyprobe™ assay

Hypoxyprobe™ (pimonidazole hydrochloride, Burlington, MA) forms covalent adducts in hypoxic cells at cell-surface pO2 < 10 mmHg (Varghese et al., 1976) and was used as a hypoxia marker in PHHs. Hypoxia was first induced in primary hepatocytes by atmospheric hypoxia, by variation of the height of the medium, or by variation in hepatocyte island densities. Pimonidazole hydrochloride was then added from a 200 mM stock solution (constituted in PBS) into the culture medium (without changing the medium, to avoid disturbing the steady-state oxygen gradient) at a 1:1000 dilution, to achieve a final working concentration of 200 μM. Cells were incubated at 37°C for 2 hours, washed twice with PBS, and fixed with chilled methanol for 10 minutes at 4°C. Adduct formation was detected by direct immunofluorescence using the HP-Red549 antibody (Hypoxyprobe™) at a 1:100 dilution.

Histological analysis

Liver slices (50 μm) were obtained from C57BL/6 mice (Charles River, Wilmington, MA) at 46 hours post-infection with GFP-expressing P. yoelii sporozoites. Maximal sizes of EEFs in the periportal area (up to eight hepatocytes from the portal vein) and in the centrilobular area (up to eight hepatocytes from the central vein) were measured using z-stacks of these EEFs acquired via confocal imaging (Olympus, Center Valley, PA).

Statistics

Experiments were repeated three or more times with triplicate samples for each condition. Data from representative experiments are presented; similar trends were seen in multiple experiments. Two-tailed t-tests were performed for all comparisons between two conditions (e.g. 21% versus 4% O2) at a single time point. One-way ANOVAs were performed for comparisons involving three or more conditions (e.g. 21% versus different periods of 4% O2) at a single time point, with Tukey's post-hoc test for multiple comparisons. Two-way ANOVAs were performed for comparisons involving simultaneous variation in time points post-infection and oxygen level (e.g. 21% versus 4% O2 at 48, 56 and 65 hours post-infection for P. berghei), with Bonferroni's post-hoc test for multiple comparisons. All error bars represent s.e.m.
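As a back-of-the-envelope companion to the one-dimensional reaction-diffusion model described in the Mathematical model section above, the sketch below estimates the steady-state cell-surface oxygen tension under a static medium column. It is our sketch, not the authors' code: it assumes zeroth-order consumption concentrated at the cell layer, and every parameter value is an assumed order-of-magnitude stand-in (including a rat-hepatocyte consumption rate, used, as above, in place of PHH data).

```python
# Steady-state 1-D estimate: pO2_surface = pO2_ambient - q*h/(D*alpha),
# where q is the areal consumption flux of the monolayer and h the height
# of the medium column. All values below are illustrative assumptions.

D = 2.0e-5            # O2 diffusivity in culture medium, cm^2/s (assumed)
alpha = 1.3e-9        # O2 solubility in medium, mol/(cm^3*mmHg) (assumed)
ocr_per_cell = 4e-16  # rat hepatocyte O2 consumption, mol/(cell*s) (assumed)
cell_density = 1.0e5  # hepatocytes per cm^2 of island (assumed)

def surface_po2(po2_ambient_mmHg: float, medium_height_cm: float) -> float:
    """Cell-surface pO2 at steady state; 0 means the layer is oxygen-limited."""
    flux = ocr_per_cell * cell_density            # mol/(cm^2*s) consumed
    drop = flux * medium_height_cm / (D * alpha)  # mmHg lost across the column
    return max(0.0, po2_ambient_mmHg - drop)

# Taller medium columns starve the monolayer under these assumptions,
# mirroring the pimonidazole staining seen when medium height was varied.
for h in (0.05, 0.1, 0.2):
    print(f"h = {h} cm -> surface pO2 ~ {surface_po2(142, h):.0f} mmHg")
```

Under these assumed numbers, doubling the medium height is enough to drive the predicted surface tension to zero, i.e., an oxygen-limited monolayer, which is consistent with the adduct formation observed in the Hypoxyprobe™ assay.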
9,279.2
2013-11-28T00:00:00.000
[ "Biology", "Medicine" ]
Multiobjective Optimization Model for Sustainable Waste Management Network Design

Inefficient or poorly planned waste management systems are a burden to society and the economy. For example, excessively long waste transportation routes can have a negative impact on a large share of the population. This is exacerbated by the rapid urbanization happening worldwide, particularly in developing countries. Sustainability issues should be accounted for at every stage of decision making, from strategic planning to daily operations. In this paper, we propose a multiobjective optimization model to design a cost-effective waste management supply chain, while considering sustainability issues such as land-use and public health impacts. The model is applied to a case study in Pathum Thani (Thailand) to provide managerial insights.

Introduction

An unsanitary and inefficient municipal solid waste management (MSWM) system has long been a challenging issue to overcome. An inadequate waste management budget and the lack of public participation in waste segregation at the source are among the leading causes of the long-term accumulation of uncollected and improperly disposed solid wastes in developing countries. In urban areas, unsanitary disposal sites are generally more problematic, considering the limited land availability and the need to attain adequate waste disposal capacity to serve a rapidly growing population. The risk of exposure to contaminants emitted from solid waste disposal is also more pronounced for urban residents. First, this is due to the problem of waste disposal capacity shortage, caused by a lack of land suitable for solid waste receptacles or a delay in building sufficient capacity. Waste flows beyond manageable capacity are among the causes of the uncontrolled release of leachates and gases, which can lead to the formation of strong unpleasant odors and spontaneous fires. Second, in an urban context, expanding the existing disposal facilities or locating new ones has to be done in the vicinity of densely populated areas. The release of hazardous constituents from disposal and transport operations can impose substantial environmental stress on the surrounding communities, especially for centralized MSWM systems [1]. The neighboring population has to deal with the immediate impacts, such as uncleanliness, odors, and inadequate air quality. The long-term impacts of unsanitary MSWM practices on the local population include various health risks [2] and decreased environmental quality [3]. Previous studies highlight the fact that in developing countries MSWM results in decreasing property values [4] and imposes an additional financial burden on the local population in terms of higher sanitation fees [5]. Consequently, disproportionate environmental impact is an increasingly important issue in MSWM planning. To resolve this issue, environmental justice must be added as one of the strategic goals to be achieved. The principles of environmental justice, in this case, refer to the idea that all communities must be protected from excessive or disproportionate environmental stressors. The level of environmental stressors experienced by communities needs to be evaluated and, whenever possible, the capacity of each community to tolerate unfavorable environmental impacts should be considered. Aside from disproportionate impacts and justice concerns, the negative externalities of MSWM can also impede the development of desired social aspects such as strong community social cohesion and efficient youth development [6].
The waste management literature [7] also reveals that the environmental problems of waste management generate a number of social issues, such as those related to employment opportunities [8], unsanitary working conditions [9], and community satisfaction [10]. Attempts to offset these issues usually require significant spending on improving sanitary infrastructure and services. Such financial needs can ultimately leave communities with an insufficient sanitary budget and an inability to properly collect and dispose of municipal solid wastes, leading to a vicious cycle of decline in local public health and environmental conditions. Therefore, a sustainable MSWM system requires that significant economic, environmental, and social issues be integrated into the strategic decision-making processes. The successful establishment of sustainable MSWM depends on the network design and transportation planning stage. Environmentally benign site selection requires a careful land suitability analysis; the potential interactions of disposal sites with local communities and with existing geographical, hydrological, and socioeconomic parameters need to be taken into account. For example, in Thailand there is a rigorous protocol for solid waste disposal site selection; however, a large number of disposal sites are still located in environmentally sensitive areas. Due to the rapid and unplanned urban growth in many cities, some disposal sites, which were originally in suitable vacant land areas, are now in close proximity to rapidly urbanizing areas. Under the current MSWM situation, the number of affected communities is only expected to rise in the near future. Local governments will also expend a great deal of effort to cope with strong public opposition, should any future solid waste disposal siting or expansion decision be taken. To sum up, sustainable MSWM requires the development of network design approaches that help in resolving the following challenges: (i) finding suitable land for waste disposal, (ii) public health impacts caused by the operation of MSWM facilities and waste transport, (iii) lack of land-use planning for MSWM, and (iv) environmental justice and disproportionate environmental impacts on nearby communities. The remainder of the paper is organized as follows: Section 2 provides a review of the relevant literature. Section 3 introduces a mixed-integer multiobjective program to design a sustainable supply chain for waste management. A case study in Pathum Thani (Thailand) province is analyzed in Section 4. Finally, conclusions and future research directions are discussed in Section 5.

Background

This section provides a literature review to gain insight into the research trends in sustainable MSWM design. We focus on three broad categories: sustainable supply chain network design (SCND), solid waste management, and environmental justice. The results are summarized in a Venn diagram in Figure 1. The diagram reveals that each research field has received considerable attention over the years, and there are some pairwise overlaps. However, to the best of our knowledge, no research paper has studied a problem incorporating the three issues at the same time. Despite a growing research interest in sustainable supply chain management and environmental justice, there is still limited consideration of environmental justice issues in the sustainable SCND literature. Past studies combine equality and justice issues with the environmental and social dimensions. Moura et al.
[11] incorporate environmental justice aspects into a biobjective transportation network design model while addressing the (three pillars of) sustainability. Their model uses a restrictive constraint that protects the surrounding communities from being overly burdened with noise and air pollution. The motorists' cost of increased travel time, due to the congestion effects of the transport network, is regarded as the social impact metric, and constraints are imposed to achieve a network design with the desired environmental justice levels. There are also previous sustainable SCND studies that investigate social justice issues. These papers generally aim to minimize the inequality in access to public services and to quality-of-life improvement opportunities. An SCND study by Ferguson et al. [12] shows how to incorporate social equality into a transit service design problem; they aim to minimize the variation in the level of access to basic amenities provided by the designed transit system. Jafari et al. [13] propose a sustainable SCND model for textile industries, with the aim of promoting social justice by maximizing the employment level in different geographical areas. Despite the increased research efforts, there are still social justice issues that need to be translated into well-defined optimization problems. Manaugh et al. [14] point out social justice issues and measures that can potentially be incorporated into urban transport planning; these issues are related to the disproportionate accessibility of transport services among different groups of people. Oswald Beiler and Mohammed [15] address many demographic, socioeconomic, and location-based factors that can be considered in developing transportation justice metrics and frameworks. The topic of solid waste management has been studied in depth due to the steadily growing urban population and consequent waste generation. Life-cycle environmental impacts of various MSWM systems have been explored extensively, as reviewed by Bernstad Saraiva et al. [16]. Also, a number of economic, environmental, and social performance indicators for MSWM have been proposed by researchers, as addressed by Rodrigues et al. [17]. However, most sustainable SCND studies are related to industrial applications, and the strategic planning of MSWM infrastructures and transport networks has been confined to the analysis of infrastructure investment, operating expenses, and economic viability. For instance, Zhang et al. [18] develop an optimization model to minimize the costs of inventory, transportation, and disposal in MSWM; in their study, an interval programming approach is employed to deal with the uncertainties of MSWM planning parameters. Toso and Alem [19] propose deterministic and stochastic capacitated facility location models to determine the optimal location planning design for recycling urban solid wastes; their model focuses solely on minimizing the overall costs under budget constraints. Only recently has more attention been given to MSWM planning problems that look beyond economic feasibility to add the sustainability perspective. A network design study by Inghels et al. [20] evaluates the financial viability of using multimodal transportation to reduce the carbon emissions and social impact of MSWM. Their model evaluates the societal cost burden associated with different transportation modes, measured as the sum of the disturbance effects on nearby residents.
The effects include accidents, noise, air pollution, congestion, and the construction of transport infrastructure. Xu et al. [21] propose an SCND model for the reverse logistics supply chain of solid wastes. The amount of carbon emissions created during the transportation of recyclable e-wastes is used as an environmental metric, and the number of affected people is used to estimate the negative effects of MSWM facilities on local communities. According to this research trend, all aspects of sustainability are currently being addressed in the context of MSWM. However, as pointed out by Eskandarpour et al. [22], there is a research need for a broader consideration of social and environmental impact metrics. Aside from the global warming potential, the use of other LCA-based environmental impact indicators needs to be explored. Indicators that have been used in LCA studies to evaluate the environmental performance of MSWM systems are summarized in the review papers by Khandelwal et al. [23] and Yadav and Samadder [24]. As already pointed out, there is a lack of studies on how justice affects the sustainability performance of an MSWM system. The development of MSWM infrastructure that positively contributes to environmental and social justice must be based on a careful evaluation of the current stage of urban development and of ongoing injustice issues [25][26][27]. There is a difference between social and environmental justice problems in developed and developing countries. Developed countries generally experience inequalities in the side effects of urban growth, such as air pollution, wastewater, and traffic congestion. In developing countries and low-density cities, the problems are more related to spatial inequalities in economic development activities, sanitation services, and the allocation of public resources. As shown in the case of the Perth metropolitan area, people living in the outer suburbs have the least car and public transport accessibility to job, education, shopping, and healthcare opportunities [28]. The case of plastic bag waste in Nairobi, Kenya, is a typical example of the environmental justice issues associated with a disproportionate share of sanitary services and environmental protection policy [29]: the case study describes how political influences on local businesses result in unsustainable patterns of production and consumption of plastic bags and, thereby, in a vast accumulation of solid and plastic wastes in communities with low socioeconomic status. In addition to pollution problems, land-use issues are also critical in MSWM planning. Agyeman and Evans [30] explore urban development initiatives that demonstrate the inherent links between environmental justice and sustainability issues. They point out that solid waste and land-use planning are common concerns for both environmental justice and sustainability: a land-use policy should focus on preventing disproportionate land-use impacts within or among communities. Without adequate land-use and environmental justice policies, solid waste facilities are likely to be located in poor and minority communities, as discovered by Norton et al. [31] in their focus on North Carolina's waste infrastructure. This study addresses the three fundamental sustainability dimensions, environmental, social, and economic, in the context of MSWM in rapidly urbanized regions.
To bridge the research gap and contribute to the field of sustainable MSWM, this develops a social impact metric based on environmental justice and incorporate it into a sustainable SCND model for MSWM system. Specifically, the issue of environmental justice related to land-use stress caused by MSWM facility establishment is considered. The inclusion of a land-use equality objective is used to obtain a balanced network design where land-use stress is fairly distributed across an area. Furthermore, the damage of MSWM to human health is also introduced as a measure for the environmental impact of an MSWM system. We use the disability-adjusted life years (DALYs) metric, according to the life-cycle impact assessment (LCIA) method. Despite being used by WHO as a measure of the global burden of disease for many years, there have been very limited applications of DALYs in both the supply chain and the waste management literature. Lastly, facility and transportation costs are taken into account to evaluate the economic aspects of sustainability. Public Health Impact Assessment. In this study, the public health impact is defined as the overall mortality and disease burden of nearby residents caused by MSWM facilities and waste transport activities. The public health impact is estimated in units of disability-adjusted life years (DALYs), which is one of the well-established endpoint LCIA metrics. DALYs represent the number of years of life lost due to premature mortality and healthy years of life lost due to disability [32]. We translate the impact of waste management operations and waste transportation into DALYs per person using the ReCiPe 2008 Endpoint LCIA method [33]. Then, the total DALYs are obtained by multiplying the individual impact of exposure by the total number of people living within the affected area. Land-Use Impact Assessment. Processes in MSWM facilities including construction, operation, and closure normally take place over long timescales. Under traditional centralized waste systems, the life expectancy of MSWM facilities is longer, due to the need for larger-sized facilities to cope with increased waste generation in cities. Communities surrounding waste facilities have to deal with the long-term negative external effects of municipal solid waste. Therefore, the issue of disproportionate environmental burden among population in certain areas is a pressing concern, especially for rapidly urbanized cities. From a strategic point of view, it is important that each administrative area in a city is not overly burdened by land-use impact or other important environmental stressors caused by MSWM facilities. The principle of environmental justice must be adopted at the early phase of MSWM planning. In this study, the spatial planning of MSWM infrastructure uses a land-use equality strategy to mitigate the impact on local land-use in areas with substantial land-use stress. The proposed planning approach involves a two-step process. The first step is to evaluate the current land availability in each geographical or administrative area within a city. This step generally requires knowledge of land-use policies and the use of GIS tools to screen out portions of land which are not suitable for development. The second step is to calculate the land-use stress of each administrative area, which is the ratio of land-use impact caused by MSWM facilities to the available land. In our study, the level of land-use impact is calculated based on the total amount of direct and indirect land-use. 
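As a hedged formalization of this two-step estimate (the symbols below are ours, not the paper's), with $k$ ranging over emission sources, i.e., facilities and transport links:

```latex
% d_k: DALYs per exposed person for source k (ReCiPe 2008 endpoint factors)
% P_k: number of people living inside the buffer area affected by source k
\[
  \mathrm{DALY}_{\mathrm{total}} \;=\; \sum_{k \in \mathcal{K}} d_k \, P_k
\]
```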
Land-Use Impact Assessment. Processes in MSWM facilities, including construction, operation, and closure, normally take place over long timescales. Under traditional centralized waste systems, the lifetime of MSWM facilities is even longer, due to the need for larger-sized facilities to cope with the increased waste generation in cities. Communities surrounding waste facilities therefore have to deal with the long-term negative external effects of municipal solid waste. The issue of a disproportionate environmental burden on the population of certain areas is thus a pressing concern, especially for rapidly urbanizing cities. From a strategic point of view, it is important that no administrative area in a city be overly burdened by the land-use impact or by other important environmental stressors caused by MSWM facilities; the principle of environmental justice must be adopted at the early phase of MSWM planning. In this study, the spatial planning of MSWM infrastructure uses a land-use equality strategy to mitigate the impact on local land-use in areas with substantial land-use stress. The proposed planning approach involves a two-step process. The first step is to evaluate the current land availability in each geographical or administrative area within a city; this step generally requires knowledge of land-use policies and the use of GIS tools to screen out portions of land that are not suitable for development. The second step is to calculate the land-use stress of each administrative area, which is the ratio of the land-use impact caused by MSWM facilities to the available land. In our study, the level of land-use impact is calculated based on the total amount of direct and indirect land-use. The direct land-use is the actual land area used for facility establishment. The indirect land-use is estimated by the land occupation LCIA methodology, which assesses the impact on land quality over a given period; the ReCiPe Midpoint (H) V1.07/Europe ReCiPe H method is used in our study to account for the land occupation impact based on the type and capacity of facilities. Previous attempts to integrate direct and indirect land-use impacts have been made in LCA studies [34,35].

Optimization Model for Sustainable MSWM. In this section, we introduce a mathematical formulation of the sustainable MSWM problem. The multiobjective mixed-integer model is a customization of the popular facility location model. We consider a 3-echelon supply chain, where solid wastes are gathered in collection centers, then moved to sorting facilities, and finally sent to either incinerators or landfills; Figure 2 shows an example of this supply chain. We assume that decisions can be made on both the locations and the sizes of tiers 2 and 3 of the supply chain (i.e., sorting facilities and landfills/incinerators). This directly affects the capacity of each facility, its land-use, and its impact on public health. The objective is to identify locations, sizes, and routes that minimize costs, land-use, and public health impact. Throughout the mathematical formulation, superscripts distinguish quantities referring to sorting, incinerator, and landfill facilities. The notation comprises the following sets and indices: (i) the set of collection centers; (ii) the set of candidate sorting facility locations; (iii) the set of candidate incinerator locations; (iv) the set of candidate landfill locations; (v) for each candidate location, the set of facility sizes available there. The cost function $f$ is defined as the sum of the fixed costs to open sorting facilities, incinerators, and landfills plus the operational costs of transporting and managing the solid waste flow across the network. A second function, $g$, measures the average land-use stress: it is the sum of the land-use ratios across all candidate locations, where a parameter for each location and facility size represents the ratio of land used to land available. This ratio can be computed by looking at the entire network or by narrowing the focus down to smaller districts, to capture the impact of land-use at a local level. Finally, a third function, $h$, evaluates the impact of transportation and facilities on the population's health. The Sustainable Waste Management Network (SWMN) design model can then be formulated as a multiobjective mixed-integer program.
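The displayed equations of the original formulation did not survive extraction. The LaTeX sketch below restates the three verbally defined objectives under assumed symbols ($y_{js}$ for the decision to open a size-$s$ facility at candidate location $j$, $x_e$ for the waste flow on arc $e$, coefficients $F$, $t$, $m$, $d$, $P$, and areas $A$); it illustrates the structure rather than reproducing the paper's exact model:

```latex
\begin{align*}
f &= \sum_{j,s} F_{js}\, y_{js} + \sum_{e} (t_e + m_e)\, x_e
   &&\text{fixed opening costs plus transport/handling costs}\\
g &= \sum_{a \in A} \frac{1}{L^{\mathrm{avail}}_a}
     \sum_{j \in a}\sum_{s} \bigl(L^{\mathrm{dir}}_{js} + L^{\mathrm{ind}}_{js}\bigr)\, y_{js}
   &&\text{sum of land-use stress ratios over areas}\\
h &= \sum_{j,s} d_{js}\, P_{js}\, y_{js} + \sum_{e} d_e\, P_e\, \mathbf{1}[x_e > 0]
   &&\text{DALYs from facilities and from active routes}
\end{align*}
```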
Solution Algorithms. The computational results, displayed in Section 4, are obtained from various optimization models solved under single-objective and multiobjective functions. We refer to these models using the notation SWMN(·), where we list inside the brackets the objectives being optimized together. For example, SWMN($f$, $h$) is the multiobjective problem optimizing both costs and health impact, whereas SWMN($g$) is the single-objective model minimizing only the land-use impact. Single-objective models can be solved with numerical methods such as branch-and-bound. As for multiobjective models, a min-max approach is implemented to identify solutions that minimize the deviations from the ideal results. Formally, let $f^*$, $g^*$, and $h^*$ be the optimal values obtained by solving the respective single-objective problems. Furthermore, let the upper bounds $\bar{f}$, $\bar{g}$, and $\bar{h}$ be set equal to the worst value obtained by each function across the single-objective problems. We can now define the deviation level between each objective and its ideal target by normalizing the functions. By adding a new continuous decision variable $\lambda$, we can formulate a min-max goal attainment model on the deviations of the three objectives, [SWMN($f$, $g$, $h$)]: min $\lambda$ (22). The objective of the min-max SWMN is to minimize the largest deviation from the optimal targets across the three functions considered. The model can easily be modified to target only two objectives.
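Under the same assumed symbols, the min-max goal attainment model can be sketched in LaTeX as follows (again a hedged reconstruction, not the paper's verbatim equations):

```latex
\[
  d_f = \frac{f - f^*}{\bar{f} - f^*},\qquad
  d_g = \frac{g - g^*}{\bar{g} - g^*},\qquad
  d_h = \frac{h - h^*}{\bar{h} - h^*}
\]
\[
  [\mathrm{SWMN}(f,g,h)]:\quad \min \lambda
  \quad\text{s.t.}\quad d_f \le \lambda,\; d_g \le \lambda,\; d_h \le \lambda,
  \;\text{plus all network design constraints.} \tag{22}
\]
```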
Case Study

In this section, we apply the proposed SWMN model to a case study in Pathum Thani province, Thailand. The waste management system in Thailand is of interest because very little planning has been used in the past to locate waste facilities. Nowadays, among a total of 2,490 municipal solid waste sites, only 499 (20%) adopt safety standards, such as sanitary/engineered landfills, controlled dumping, and incineration with air pollution control systems [36]. The remaining sites implement unhealthy practices such as open dumping and incineration without air pollution control. As a consequence, the uncontrolled release of leachates and gases from these dumps is frequent, taking a toll on both the environment and society. For example, in 2010 a fire broke out at the Phraeksa dump site, causing hundreds of residents to flee the area; this incident prompted local governments across Thailand to reexamine their regulatory approaches to health and environmental safety. Pathum Thani is located in the central region of Thailand, within the Bangkok metropolitan area (Figure 3). It covers a total area of 1,525.9 km², organized into 7 districts, 60 subdistricts, and 529 villages. Its population has steadily increased in the past decade to about 1.2 million; as a result, the amount of waste generated has increased to 0.612 million tonnes/year [36]. As the province is quite vast, we narrow the focus of the case study to the Muang Pathum Thani, Sam Khok, and Lat Lum Kaeo districts (Figure 4). Currently, solid wastes from communities are gathered at collection centers across the subdistricts; wastes at collection centers are transported to sorting facilities and subsequently sent to either incinerators or landfills. Recycling is purposely not included in this study, as it is currently done by private companies. The proposed model is validated by determining the locations and sizes of sorting and disposal facilities.

Parameters

Land Availability Assessment and Candidate Locations. In order to quantify the available land, an initial screening is necessary to exclude from the analysis places such as rivers, ponds, main roads, archaeological heritage sites, and residential areas. We further include buffering areas to guarantee safe distances between waste sites; as shown in Table 1, the sizes of these buffers are set according to the regulations and guidelines for MSWM developed by the Pollution Control Department (PCD). A number of polygons are identified by combining the available land with subdistrict boundaries. Based on their sizes, these polygons are further divided, and their centroids are selected as candidate facility locations. The available land for landfill siting is shown in Figure 4. Since the candidate locations of the three types of facilities overlap, we further constrain the mathematical model to avoid colocations. To ensure environmental justice across the three districts, we perform the land-use assessment at the subdistrict level. For any location within a subdistrict and any facility size, we compute the corresponding land-use ratio parameter as

(direct + indirect land use, with a facility of the given size) / (total land available in the subdistrict). (27)

For each facility, 3 capacity levels are considered: small, medium, and large. Their direct and indirect land-use are shown in Table 2; the values are estimated by the land occupation LCIA methodology, as described in the land-use impact assessment above.

Traveling Distances. All wastes are transported from the collection centers to the sorting facilities using 16-tonne (light) trucks; from the sorting facilities to both the incinerator and landfill sites, 32-tonne (heavy) trucks are used. Traveling distances are estimated using ArcGIS and a digital map of Pathum Thani. Specifically, having defined the candidate locations, we use a network analysis tool to determine the shortest routes.
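The paper computes these shortest routes with ArcGIS network analysis; the sketch below shows the same step on a toy road graph using networkx, with entirely hypothetical node names and distances:

```python
# Hypothetical sketch: shortest road distances between candidate sites,
# standing in for the ArcGIS network analysis step described above.
import networkx as nx

road = nx.Graph()
# toy road segments: (node_a, node_b, length_km) -- illustrative values only
segments = [("c1", "x", 4.0), ("x", "s1", 3.5), ("x", "s2", 6.0),
            ("s1", "lf1", 12.0), ("s2", "inc1", 9.0), ("s1", "inc1", 15.0)]
road.add_weighted_edges_from(segments)

collection, sorting, disposal = ["c1"], ["s1", "s2"], ["lf1", "inc1"]

# distance matrices for the two transport legs of the 3-echelon chain
leg1 = {(c, s): nx.shortest_path_length(road, c, s, weight="weight")
        for c in collection for s in sorting}
leg2 = {(s, d): nx.shortest_path_length(road, s, d, weight="weight")
        for s in sorting for d in disposal}
print(leg1)
print(leg2)
```

The two dictionaries correspond to the light-truck (collection center to sorting facility) and heavy-truck (sorting facility to disposal site) legs of the supply chain.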
Public Health Impact Assessment. To measure the public health impact, this study computes the DALYs for the people affected by the MSWM system. The first step is to estimate the number of people living near the supply chain. No data are currently available showing the population distribution at the household level, so ArcGIS is used to count the total number of residential buildings surrounding the waste transportation routes and MSWM facilities. To estimate the damage to public health caused by an MSWM system, this study adopts the emission-to-exposure model used by Gouge et al. [37] and Greco et al. [38], who use several buffer distances to estimate the health impact caused by transport pollution. In our study, we set the buffer distance for transportation routes to 100 m. For MSWM facilities, the affected areas are assumed to grow with the waste disposal dimension; moreover, due to a long history of unsanitary practices in Thailand, landfills are assumed to have a larger impact than other MSWM facilities (Table 3). The number of affected people is estimated from the total population living in the area and the number of residential buildings. The proposed estimation approach is expected to give a more accurate estimate of the affected people than the previous approach [39], since varying impact distances, corresponding to different types and sizes of facilities, are used instead of one single impact distance. The second step is to multiply the number of affected people by the individual impact of exposure, expressed in DALYs per person with the ReCiPe 2008 Endpoint LCIA method [33]. Only local-scale environmental impacts are taken into account, including human toxicity, photochemical oxidant formation, PM formation, and ionizing radiation; the damage due to climate change and ozone depletion is neglected. Finally, the capacities of waste sites and trucks are selected from those available in SimaPro 7.3 (LCA software), which covers a wide range of typical facilities and operations. The entire dataset can be found in Kachapanya [40].

Results and Discussion. This section presents the results of the computational analysis, which is carried out on a Windows 10 machine with an Intel i7-6700HQ processor and 8 GB of RAM; CPLEX 12.6 Optimization Studio is used to solve the mathematical models. The analysis is organized in two subsections: single-objective and multiobjective optimizations. The results are interpreted in terms of satisfaction levels of each objective. A satisfaction level of 100% indicates that the objective is equal to its optimal level; conversely, smaller satisfaction levels indicate that there is a gap between the objective and its best achievable target. Formally, these levels are obtained from the deviation values introduced in the methodology section (i.e., $1-d_f$, $1-d_g$, and $1-d_h$).
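A minimal computational sketch of this conversion, assuming (as the methodology states) that the ideal and worst values per objective come from the single-objective runs; the figures used here are the cost values reported in the single-objective analysis below, reused purely for illustration:

```python
# Convert an objective value into the paper's satisfaction level (1 - deviation).
def satisfaction(value: float, ideal: float, worst: float) -> float:
    """1 - normalized deviation; equals 1.0 at the ideal, 0.0 at the worst."""
    deviation = (value - ideal) / (worst - ideal)
    return 1.0 - deviation

# illustrative: total cost of SWMN(f) vs. total cost of SWMN(h), US dollars
cost_ideal, cost_worst = 50_944_315, 74_740_757
print(satisfaction(50_944_315, cost_ideal, cost_worst))  # 1.0 -> 100% satisfied
print(satisfaction(74_740_757, cost_ideal, cost_worst))  # 0.0 -> fully deviated
```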
Single-Objective Optimization

Total Cost Optimization (SWMN($f$)). Figure 5(a) shows the optimal layout of the MSWM system obtained by solving SWMN($f$). The results show that when cost is the main driver, the number of facilities is low: out of 37 candidate locations for sorting facilities, only 6 are selected, and only 1 incinerator and 3 landfills are opened. While these facilities provide sufficient capacity for all subdistricts, they also generate routes longer than 30 km. The summary of the costs and the average land-use stress associated with this solution is shown in Table 5 under the column named "Minimizing Cost". The construction cost is the largest component; as a consequence, landfills are chosen over incinerators. The cost optimum of the MSWM layout is about 50,944,315 US dollars, composed of transportation (11,973,205 US dollars), land (7,753,691 US dollars), construction (17,114,396 US dollars), and operational (14,103,023 US dollars) costs. The average land-use stress and public health impact of this solution are estimated at 0.573 and 44,681 DALYs, respectively; the public health impact comes from transportation (13,042 DALYs) and waste facilities (31,639 DALYs). This suggests that although SWMN($f$) achieves minimum cost, it does so at the expense of public health and land-use, mostly because disposal sites are located close to the urban areas where waste is generated, resulting in excessive land-use stress and a high public health impact.

Average Land-Use Stress Optimization (SWMN($g$)). The number and locations of the facilities obtained by solving SWMN($g$) are shown in Figure 5(b). Again, the number of open facilities is relatively small: four sorting facilities and four incinerators are selected, and no landfill. The locations differ from those of the SWMN($f$) case: sorting facilities and incinerators are located in rural areas, reducing the excessive land-use stress returned by the cost minimization model. However, displacing facilities away from urban areas leads to a high transportation cost and a high public health impact. The results of the average land-use stress optimization are shown in Table 5.

Public Health Impact Optimization (SWMN($h$)). Solving SWMN($h$) leads to a layout that is quite different: most of the facilities are sparsely located across the region (Figure 5(c)). A total of 22 facilities are opened (12 sorting facilities, 5 landfills, and 5 incinerators). The results in Table 5 show that the optimal public health impact drops to 7,224 DALYs, while the average land-use stress and total cost increase to 0.985 and 74,740,757 US dollars, respectively. As expected, the land cost (21,035,012 US dollars) and the construction cost (27,692,869 US dollars) are mostly responsible for the cost increment. Clearly, reducing the public health impact is mostly a consequence of shortening the transportation routes; this is also evident from the transportation cost (9,658,807 US dollars), which decreases considerably. It is achieved by locating a large number of facilities sparsely across the area; consequently, the land-use ratio is bound to worsen dramatically. To sum up, the transportation cost reaches its minimum when SWMN($h$) is solved: its value is about 20% smaller than in SWMN($f$). However, the land cost of SWMN($g$) is lower than that of SWMN($f$) by 76% and lower than that of SWMN($h$) by 91%. For the total construction cost, SWMN($f$) is 12% lower than SWMN($g$), because landfills have a lower construction cost than incinerators, and 38% lower than SWMN($h$), because the number of sorting facilities is smaller. The reason is that incinerators reduce the amount of land required, and land in urban areas is typically more expensive. The lowest operational cost is obtained with SWMN($f$). Moreover, focusing on the land-use stress impact, SWMN($g$) results in lower values than SWMN($f$) and SWMN($h$) (by 95% and 97%, respectively). Finally, regarding the public health impact, the SWMN($h$) result is lower than SWMN($f$) by approximately 84% and lower than SWMN($g$) by approximately 88%.

Multiobjective Optimization. The results and layouts of the MSWM networks are shown in Tables 6 and 7 and in Figure 6 (caption: optimal layouts of the MSWM system under multiobjective functions; the legend distinguishes transportation routes from collection centers to sorting facilities and from sorting facilities to disposal sites).

Total Cost and Average Land-Use Stress (SWMN($f$, $g$)). When the problem is solved minimizing costs and land-use, sorting facilities and incinerators are evenly distributed between urban and rural areas, while the landfills are located in rural areas due to their larger land requirements (Figure 6(a)). This scenario shows the tradeoff between costs and land-use stress: compared with SWMN($f$), the land-use stress satisfaction increases from 43% to 86%, but the total cost satisfaction level deteriorates by 37% (Table 6). Due to the higher potential impact of landfill facilities on the overall land-use stress, SWMN($f$, $g$) reduces the landfills to only one large facility (Table 7); however, to satisfy the total demand, three large incinerators are built. Furthermore, the number of sorting facilities increases to 11 (7 small, 1 medium, and 3 large). As expected, this scenario has high satisfaction levels for cost (63%) and land-use stress (86%); nevertheless, the satisfaction level of the public health objective is extremely low (5%).

Total Cost and Public Health Impact (SWMN($f$, $h$)). All facilities are sparsely distributed across urban and rural areas (Figure 6(b)), because the two objectives share the common goal of shortening the transportation routes so that both the health impact and the costs are reduced. From Table 7, there are now three sorting facilities of each size; two large incinerators are built and three landfill locations are selected (2 small, 1 large). The average satisfaction is 46%: the satisfaction level of cost is 56%, while that of public health is 69% (Table 6). This result suggests a clear conflict between these two objectives, despite their common goal of reducing transportation routes.

Average Land-Use Stress and Public Health Impact (SWMN($g$, $h$)). Optimizing land-use and health impact while disregarding costs results in selecting only incinerators (1 small, 2 medium, and 2 large), due to their lower land-use as opposed to landfills (Figure 6(c)). The number of sorting facilities increases to 11, of which 5 are large, 2 medium, and 4 small. The average satisfaction level is 65%, as land-use stress (78%) and public health impact (89%) simultaneously reach high levels (Table 6). However, the satisfaction level of the total cost drops dramatically to 28%, suggesting that this solution is not practical.

Total Cost, Average Land-Use Stress and Public Health Impact (SWMN($f$, $g$, $h$)).
Previous results have shown that both single- and biobjective models fail to reach a good compromise. Therefore, the problem is solved minimizing the three objectives at the same time. This leads to an MSWM system layout that is well balanced across the region (Figure 6(d)), as this model mainly chooses small and medium-sized facilities (Table 7). Normally, it is difficult to obtain solutions with high satisfaction levels across all conflicting objectives. However, this scenario gives the highest average satisfaction level (69%) and offers an effective compromise between the three objectives, as each satisfaction level is above 50%.

Concluding Remarks

In this paper, we propose a novel network design optimization model for MSWM that accounts for sustainability in a comprehensive way. Specifically, we incorporate environmental and social impact indicators alongside the economic objective. A formal methodology is introduced to model public health and land-use impacts. The first is measured in terms of the DALYs imposed by waste operations on the population living close to the supply chain. To enforce fair use across a city's subdistricts, the land-use metric is computed as the ratio between used and available land. The multiobjective formulation is translated into a single-objective model aiming to minimize the maximum gap of each objective from its optimal target. A case study in Pathum Thani (Thailand) is developed to validate the model, while also providing results that can be of public interest, to increase awareness and engage with local stakeholders. The single-objective analysis highlights the fact that focusing only on cost generates a supply chain that imposes a serious burden on society: the resulting land-use and public health metrics are far from sustainable. However, single-objective models focusing on public health or land-use are also inefficient, as they deteriorate the metrics outside their objective. This further motivates the multiobjective approach studied in this paper, where a model is proposed to minimize the deviation of each criterion from its optimal target. The best tradeoff between the metrics is indeed achieved when all dimensions are considered simultaneously. The scope of this work, together with an increasing push for wide-ranging research on sustainability, suggests several interesting extensions. Integrated modeling approaches should be developed to simultaneously consider interrelated supply chain decisions, as demonstrated by Mota et al. [41]. To this aim, optimization models can be developed to incorporate facility location decisions together with other decisions, such as waste collection schemes, transport modes, and disposal technologies. This will require developing and solving complex multiobjective location-routing problems. Another line of research should focus on incorporating uncertainty into the problem: it is clearly of interest to investigate the extent to which features such as the amount of waste, the transportation costs, and their inherent uncertainties can impact sustainability metrics and costs. Finally, given the complexity of the current model and its possible extensions, a further research direction should focus on the development of efficient solution algorithms to obtain good solutions on large realistic networks.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.
8,223.6
2019-05-13T00:00:00.000
[ "Economics" ]
Influence of the Scale Effect Upon the Financial Results of the Banks in Bulgaria

The object of attention in the article is the profitability and efficiency of the banks in the Republic of Bulgaria. The subject of the study is the influence of the credit institutions' size upon their financial results. The objective is to establish whether there are sufficient grounds to believe that the scale effect influences the profitability and efficiency indicators, or whether such a dependency can hardly be found. The study comprises observations of the processes in the banking sector of the country for the period 2007–2016. A coefficient analysis was employed, using a system of indicators suitably selected to this end. On the basis of the analysis of real empirical data, a certain dependency between the size of the banks in Bulgaria and the values of these financial indicators was established. It was concluded that, by utilizing the scale effect, the large credit institutions manage to derive certain advantages in comparison to the smaller banks. The idea that the efficiency of the country's banking sector can be increased by means of its further consolidation was substantiated.

INTRODUCTION

One of the main criteria for classification of the banks in a country is their size. A review of the reference literature shows that the question of the relative advantages and disadvantages of the large and the smaller banks is debatable [5][6][7]. As an advantage of the large-size credit institutions, it is pointed out that the considerable scale of activity contributes to offering a wider range of more diverse products, helps in the diversification of bank portfolios, and aids the avoidance of excessive concentrations. Large banks are considered more competitive and more sensitive to innovations in the financial industry. Their policy is usually oriented towards riskier, but highly profitable, investments, and they adapt better to the respective regulatory requirements. It is traditionally assumed that in a critical situation, the probability of the state supporting a large-size bank is greater than if it were a matter of saving a smaller bank ("too big to fail"). The following disadvantages of the large banks are pointed out: greater inertia of the banking activity, harder adaptability to changes in external conditions, more complex and more expensive management, more limited interest in servicing small customers, and the danger of taking greater risks related to large-scale transactions. On the other hand, the smaller banks also have their advantages: greater flexibility, easier adaptability to abrupt changes of environment, simpler management, striving towards a more moderate and balanced policy, etc. Their disadvantages are usually related to limitations in providing large credits and servicing big customers, difficulties in the diversification of operations, harder access to the financial markets, etc. To a certain extent, the outlined comparative advantages and disadvantages of the large and smaller banks are of a rather general nature. It is another question to what extent they can be substantiated by empirical data, and what their exact manifestation is against the background of the specifics of the banking industry in the respective country. The object of attention of this article is the financial results of the banks in the Republic of Bulgaria.
The subject of the study is directed towards the intensity and direction of influence of the bank size factor upon these financial results. The objective of the study is to establish whether the scale effect influences the profitability and the efficiency of the banks in Bulgaria, or whether such a dependency can hardly be found. Two working hypotheses are formulated for the needs of this study:

• First hypothesis: the size of the banks in Bulgaria renders virtually no influence upon their financial results. Its core is that the scale effect has no significant impact on the commercial viability and the efficiency of the credit institutions, so from this point of view "size does not matter";

• Second hypothesis: there is a certain dependency between the size of the banks in Bulgaria and the state of a series of their key indicators reflecting the final financial results of the banking activity. According to this hypothesis, the scale effect has a significant impact on the latter, meaning that for the banks in the country "size does matter".

METHODOLOGY AND DATA

To begin with, the criterion used to determine bank size should be justified. Different points of view can be used to quantify bank size; nevertheless, the conventional criterion to judge the magnitude of credit institutions is asset size [6,7]. We assume that the sum of assets is the most precise expression of the scale and scope of the banking activity. To outline the tendencies in the financial sector, the Bulgarian National Bank (BNB) divides the banks in Bulgaria into three categories according to their size. The first group comprises the five biggest banks in terms of the sum of their assets, whichever these may be at any given moment. The second group includes the remaining small and medium-sized banks. A separate, third group comprises the branches of foreign banks in Bulgaria. The present study is based on this officially accepted classification. Further on, our attention focuses upon the financial results of the banks from the first group (the five largest banks) and the banks from the second group (the remaining small and medium-sized banks). Due to certain specifics of the activity of foreign bank branches in the country (the third group), these have been intentionally excluded from this study. On the above grounds, the dependency between the size of the banks, grouped into the two mentioned categories, and some of their key financial indicators of profitability and efficiency is analysed. Coefficient analysis is employed, using a system of indicators selected in accordance with the outlined guideline of the study. More precisely, the focus is placed on the following:

• Cost-income ratio. It expresses what part of the bank income covers the respective expenses and what part of the income remains to form the net financial result [2], i.e.

Cost-income ratio = Sum of expenses / Sum of income.

Its values decrease with increasing income and/or decreasing expenses, which is a favourable situation. Due to its complex nature, the cost-income "scissors" is often used to evaluate the efficiency of credit institutions.

• Operating efficiency. Of key importance among income and expenses are the operating ones, which are related to the main (typical) banking activities.
The operating expenses and income therefore have a sustainable, constantly occurring nature. These are: interest expenses/income, received/paid fees and commissions, and expenses/income from foreign currency transactions, securities transactions, etc. The ratio of these expenses and income determines the so-called operating efficiency [6]:

Operating efficiency = Operating expenses / Operating income.

Lower values of the indicator (related to a reduction of the operating expenses and/or an increase of the operating income) are an indication of increasing efficiency. The difference between the operating income and expenses represents the net operating income.

• Net interest margin. The difference between interest income and interest expenses gives a concentrated expression of the efficiency of the bank's intermediary operation. For comparability, the net interest income is used as a relative value against the assets:

Net interest margin = Net interest income / Sum of assets.

• Efficiency ratio. This popular financial indicator for evaluating the commercial viability and efficiency of credit institutions is based on the fact that banking profit is obtained from the sum of the net interest income and the other, noninterest, income after deduction of the respective noninterest expenses. In this particular case, we rely on the circumstance that for banks the noninterest expenses are usually larger than the noninterest income, i.e. the net noninterest income has a negative value [6]. This necessitates that the interest income be of such an amount that the interest expenses are recovered, so that, on the one hand, the remainder covers those noninterest expenses not covered by the noninterest income and, on the other hand, is sufficient to form a certain profit. These deductions find their quantitative expression in the following dependency [4]:

Efficiency ratio = Noninterest expenses / (Net interest income + Noninterest income).

For example, if the efficiency ratio is 0,70, this means that 70% of the net interest income and the other (noninterest) income will cover the noninterest expenses, and the remaining 30% will be used to form the profit. Lower levels of this indicator correspond to higher values of the indicators for commercial viability.

• Nonoperating expenses per unit of net operating income. The management of the nonoperating expenses and the control of their dynamics and structure are of considerable importance for bank management. These include: administrative and management expenses, amortisations, provisions, rental payments, fines, etc. [3]. Due to their nonproduction nature, their increase represents an additional weight on the final financial result. For the needs of the comparative analysis, they are interpreted as a relative quantity; the present study uses as a basis the size of the net operating income of the banks, i.e.

Nonoperating expenses per unit of net operating income = Nonoperating expenses / Net operating income.

The nonoperating expenses per unit of net operating income decrease with decreasing nonoperating expenses and/or increasing net operating income. This situation is favourable if the values of the coefficient are comparatively lower than those of the other banks, or if a decreasing tendency is observed. Otherwise, this may suggest excessive staff employment, an inefficient management policy, deterioration of the quality of assets, etc.

• Administrative expenses per unit of assets.
Administrative expenses have a significant weight in forming the nonoperating expenses. They are unavoidable, but keeping them unreasonably large will have a negative effect on the profit and efficiency of the banking activity. As a relative quantity, they are often expressed as a percentage of the assets:

Administrative expenses per unit of assets = Administrative expenses / Sum of bank assets.

Generally, a reduction of the values of this coefficient means higher efficiency. The situation is unfavourable if, for a certain period, the increase of the administrative expenses exceeds that of the assets, or if the former increase while the latter decrease.

• Net profit per unit of staff expenses. The dependency between the banking profit and the staff expenses (wages, social security payments, etc.) bears valuable information from the point of view of human factor utilization, i.e. [4]:

Net profit per unit of staff expenses = Net profit / Staff expenses.

• Return on assets:

Return on assets = Net profit / Sum of assets.

Using this indicator is appropriate for the purposes of the present study, because the profit is a result of the overall banking activity, and assets best reflect its scope and scale.

On the basis of the financial indicators presented, we performed a comparative analysis between the two groups of banks in Bulgaria classified according to their size: the banks from the first group (the large banks) and the banks from the second group (the small and medium-sized banks). The idea is to establish the dependency between the size of the credit institutions and their financial results. This study comprises observations of the development of the banking sector in Bulgaria over a period of ten years (2007-2016). Several considerations played an important role in the selection of the time interval. First, studying data for a longer period contributes to better outlining the typical patterns in the manifestation of the scale effect upon the banks' financial results; in this way, the influence of factors of a short-lived, temporary or accidental nature is also ignored. Second, from the point of view of the effect of the financial crisis upon banking activity, the analysed period includes three relatively distinct stages: a pre-crisis period (from 2007 to 2009), a crisis period (from 2009 to 2014) and a post-crisis period (after 2014). This allows a more precise outline of certain specifics of the dynamics of the processes in the banking sphere during each individual stage. The conclusions from this study are based on the information officially published by the Bulgarian National Bank on the status of the banking sector in the country.
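To make the indicator system concrete, here is a compact computational sketch; the field names and the sample figures are hypothetical and do not come from the BNB data:

```python
# Minimal sketch of the coefficient system defined above (hypothetical data).
def bank_indicators(b: dict) -> dict:
    return {
        "cost_income_ratio": b["total_expenses"] / b["total_income"],
        "operating_efficiency": b["operating_expenses"] / b["operating_income"],
        "net_interest_margin": (b["interest_income"] - b["interest_expenses"]) / b["assets"],
        "efficiency_ratio": b["noninterest_expenses"]
            / (b["net_interest_income"] + b["noninterest_income"]),
        "nonoperating_per_net_operating": b["nonoperating_expenses"]
            / (b["operating_income"] - b["operating_expenses"]),
        "admin_expenses_per_assets": b["administrative_expenses"] / b["assets"],
        "profit_per_staff_expense": b["net_profit"] / b["staff_expenses"],
        "return_on_assets": b["net_profit"] / b["assets"],
    }

# illustrative bank, figures in thousands of BGN (invented for the example)
sample = dict(total_expenses=740, total_income=1000, operating_expenses=520,
              operating_income=800, interest_income=600, interest_expenses=250,
              net_interest_income=350, noninterest_income=200,
              noninterest_expenses=420, nonoperating_expenses=300,
              administrative_expenses=180, net_profit=160, staff_expenses=110,
              assets=10_000)
for name, value in bank_indicators(sample).items():
    print(f"{name}: {value:.3f}")
```

Computing the same dictionary for the first-group and second-group aggregates, year by year, reproduces the comparative analysis performed below.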
EMPIRICAL RESULTS

Our further exposition proceeds to the testing of the formulated working hypotheses by means of an analysis of real empirical data on the condition of the banking sector in Bulgaria. Let us first begin by presenting the most popular of the indicators considered: the cost-income ratio. The data show that in the years before the occurrence of the economic crisis, the expenses at the banking-system level were continuously on the rise. This is logical, taking into account the increasing activity of the credit institutions (table 1). Nevertheless, the expenses were completely offset by the income, which during this stage had a front-running growth rate. The consequences of the crisis after 2008 had a negative effect upon the profitability of the banks. The thinning growth of the income in the crisis conditions forced them, as much as possible, to restrict their expenses; the cost-income "scissors" of the banking sector was gradually closing. The dynamics of the cost-income ratio outlined a negative tendency: the total value for the sector marked a palpable increase from 0,74 in 2007 to 0,90 in 2013. Only in the last years were there some symptoms of overcoming this negative dynamics, and at the end of 2016 the ratio had almost restored its 2007 level. The outlined tendency refers not only to the banking system as a whole, but also to most banks, regardless of their size. At the same time, the comparative analysis reveals structural differences between banks with a different scale of activity. The large banks of the first group are in a more favourable position: despite the worsened economic conditions, they maintained the cost-income ratio at a better level in comparison with the smaller banks from the second group, and with the respective values for the banking sector as a whole. On average, for the ten-year period, it was 0,81 for the large-size institutions, while for the smaller-size ones it was 0,89. The outlined advantage of the larger banks in this respect appears to be a permanent tendency, observed throughout the entire period analysed. As to the operating efficiency coefficient (Operating expenses / Operating income), it is important to note that until 2013 it reported a constant deterioration, both for the banking system and for the individual bank groups (table 2): operating expenses increased at a quicker pace than operating income. Interest expenses contributed most significantly to this negative tendency. The fierce deposit competition and the popular "deposit tourism" between the banks, typical for the years of the crisis, found their expression in the aggressive interest policy the banks carried out in the collection of deposits and in the maintenance of high deposit interest rates. This inertia was overcome after 2013. For the period from 2013 to 2016 inclusive, the operating expenses were reduced at an impressive rate, by more than half, reaching levels far lower than those of 2007. This was basically due to the drastic lowering of the interest expenses, as the interest rates on bank deposits dropped substantially in these years. Indeed, there were indications of a certain decrease of the operating income in this period, but it was considerably smaller than that of the operating expenses. The stability of the operating income rested largely on two circumstances. Firstly, the interest rates on credits remained at a comparatively high level, as banking competition was redirected from deposit collection towards credit provision. Secondly, the significance of the income from fees and commissions, as an element of the operating income, increased. In these two respects, the large banks demonstrated certain advantages in comparison with the rest: on the one hand, they managed to maintain higher interest rates on credits, and on the other, by offering a wider range of services, they increased their income from fees and commissions. These data allowed us to draw the conclusion that, as a whole, the large-size credit institutions have better operating efficiency when compared to the smaller-size ones.
The general tendency is that under the conditions of crisis the banks in the country had to operate with decreasing net interest income. The latter gradually stabilized only in the years after coming out of the crisis (table 3). At the same time, the analysis of the data on the dynamics of the net interest margin revealed considerable differences between the large and the smaller banks. The advantage is mainly to the benefit of the former - they operate at a considerably higher interest margin than the rest. The main reason is that, over the analysed period, the large banks in Bulgaria managed to maintain higher interest rates on credits and lower ones on deposits, while attracting more customers at the same time. This finding may appear illogical, but it has its reasoning: a) the large-size banks enjoy greater popularity; b) they are in a position to generate greater confidence in themselves and become a centre of attraction for more customers; c) they own a well-developed branch network; d) they are in a position to provide users with both traditional credit and deposit products, along with a wider range of other services meeting their individual needs. The consequences of the economic crisis in the country rendered a negative effect on the coefficient of efficiency (table 4). The negative tendency is well expressed after 2008 and continues until 2013. The reason is that the increase of the non-interest expenses happened at a quicker pace than that of the net interest income and the non-interest income. The growth of the non-interest expenses originated mainly from the deterioration of the quality of the bank credit portfolios, causing a significant increase of the expenses for provisions against their devaluation. It was only in the last three years (2014-2016) that the efficiency ratio altered its negative trend, though still far from the levels typical for 2007 and 2008. However, we should note that, from the point of view of the considered indicator, the large banks from the first group are in a more favourable position in comparison with the small and medium-sized banks from the second group. This pattern was manifested during the whole analysed period. The average value of the efficiency ratio for the period 2007-2016 for the first group was 0,74, while for the second group it was 0,84. In this sense, the large banks of the sector demonstrated greater efficiency in comparison with the rest. The analysis shows that the non-operating expenses take up a large relative share of the total sum of expenses of the banks in Bulgaria. If we compare the data from table 5 and table 1, we will find that, over the individual years, it varied between 60% and 75%. It is interesting to note that the non-operating expenses exceed even the size of the interest expenses. These facts contribute to the particular importance of the control upon the non-operating type of expenses. The non-operating expenses, represented as a ratio against the net operating income, show multidirectional development trends (table 5; source: author's own calculations based on data from URL: http://www.bnb.bg, accessed 12.06.2017). Until 2013, the unfavourable tendency applied both to the banking system level and to the individual groups of banks. In this period the non-operating expenses increased faster when compared to the net operating income.
The significant increase of the expenses for provisions against credit devaluation, originating from the deterioration of credit quality, rendered strong negative pressure in the analysed aspect. It was only in the last years that the non-operating expenses per unit of net operating income gradually outlined a favourable tendency towards reduction. At the same time, if attention is drawn to the values of the analysed indicator characteristic of banks of different size, certain differences become evident. The large banks from the first group are in a more favourable position. For them, the non-operating expenses per unit of net operating income for the entire period analysed are lower in comparison with those of the smaller banks from the second group (the average value of the indicator for the period is 0,73 for the former and 0,83 for the latter). It is noteworthy that, for the period from 2007 to 2013 alone, the non-operating expenses of the banks from the first group marked a growth of about 30%, while for those from the second group this increase reached more than 100%. Therefore, these data confirm that the influence of the scale effect is tangible even with regard to the non-operating expenses. The effect from achieving economies of scale is particularly well pronounced for the administrative expenses. The data in table 6 show that in this aspect the large banks in Bulgaria enjoy a marked advantage. In 2016, compared to the base year 2007, the administrative expenses of the large banks increased by 13% (while assets' growth was 58% for this interval of time). As to the small and medium-sized banks, this growth rate for the same period is significantly higher - 36% (while assets' growth was 63%). In other words, it is typical for the large-size banks that assets' growth is accompanied by a relatively smaller increase of the administrative expenses in comparison with the smaller banks. This reflects on the rate of administrative expenses per unit of assets for both groups of credit institutions. The clearly distinguished pattern is that the banks from the first group consistently report a lower percentage of administrative expenses relative to assets when compared to those from the second group. Concerning the staff expenses at the banking system level, it can be noted that during the analysed period these showed a tendency of slight increase (table 7). The influence of the scale effect is also tangible here: against the staff expenses incurred, the banks from the first group generate two times greater profit in comparison with those from the second group. Achieving sufficient and increasing profit is a priority task for each credit institution. The data presented about the dynamics of the net profit of the bank system in Bulgaria for the period 2007-2016 (table 8) show that during this interval of time three stages can be outlined. Until 2009 the profits of the sector increased by substantial amounts. The reason is the fast economic growth and the credit boom in the country during that period. The crisis after 2008 rendered its negative effect on the activity of the banks, reflected in the steady melting of their profit.
For the period from 2008 to 2012 alone, it decreased more than twofold. Under the conditions of the crisis, due to the reduction of the volumes and the decrease of the interest rates on credit provision, the interest income continuously dropped, which rendered its negative effect on the financial results. The interest expenses had a strong negative impact over the first two years of the analysed period. At the same time, this influence was not so tangible, as it was completely offset by the increasing interest income. For the next years its negative impact was insignificant, and after 2013 even positive (the interest rates on deposits were perceptibly reduced, and respectively, the interest expenses fell). The only factor with a permanent positive effect for almost the whole analysed period was the non-interest income. For most of the years, though, its effect was not very notable. As to the non-interest expenses, they constantly rendered a negative impact on the profits, mostly due to the deterioration of the quality of the bank credit portfolios and the increase of the expenses for provisions against their devaluation. This factor had its strongest negative impact during the first years of the analysed period. The stabilization of the profits at the end of the period (2016) was conditioned by: a) low interest expenses; b) the gradual reduction of the expenses for provisions; c) a certain decrease of the administrative expenses. The problem is that a permanent increase of the financial results cannot be achieved only by reducing the expenses, which has its objective limitations, without the respective expansion of the income base. The above considerations explain why the return on assets (ROA) of the banking sector varied broadly over the last ten years in the country. If we draw attention to the situation in the large and in the smaller banks, the values of ROA present, to a great extent, the complex patterns outlined within the study of the previously mentioned financial indicators. The analysis shows that there is a certain dependency between the size of the banks and the commercial viability of their assets. The data confirm the influence of the scale effect to the benefit of the large banks from the first group. The latter report higher ROA in comparison with the banks of smaller size - both in each of the analysed years and as the average value for the period. CONCLUSIONS The exposition above allows us to draw the respective inferences concerning the work hypotheses formulated at the beginning. The first hypothesis, according to which the size of the credit institutions has no significant effect on their financial results, cannot be confirmed. The results from this study proved the second hypothesis - utilizing the influence of the scale effect, the large credit institutions in the country managed to derive considerable advantages when compared to the smaller banks, which eventually is reflected in their better financial results. This finding corresponds to the need for continuation of the process of consolidation of the bank system in Bulgaria that has already started.
This necessity is further intensified against the background of: a) the comparatively limited economic activity in the country; b) the existence of a significant number of very small credit institutions with a limited scope of activity; c) the overall increase of the regulatory requirements to the banks, in accordance with the requirements of Basel III. Proceeding from this, we believe that consolidation of the banking sector is one of the routes to increasing its efficiency.
6,392.4
2017-09-22T00:00:00.000
[ "Economics" ]
Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments Landing an unmanned aerial vehicle (UAV) autonomously and safely is a challenging task. Although the existing approaches have resolved the problem of precise landing by identifying a specific landing marker using the UAV's onboard vision system, the vast majority of these works are conducted in either daytime or well-illuminated laboratory environments. In contrast, very few researchers have investigated the possibility of landing in low-illumination conditions by employing various active light sources to lighten the markers. In this paper, a novel vision system design is proposed to tackle UAV landing in outdoor extreme low-illumination environments without the need to apply an active light source to the marker. We use a model-based enhancement scheme to improve the quality and brightness of the onboard captured images, then present a hierarchical-based method consisting of a decision tree with an associated lightweight convolutional neural network (CNN) for coarse-to-fine landing marker localization, where the key information of the marker is extracted and reserved for post-processing, such as pose estimation and landing control. Extensive evaluations have been conducted to demonstrate the robustness, accuracy, and real-time performance of the proposed vision system. Field experiments across a variety of outdoor nighttime scenarios with an average luminance of 5 lx at the marker locations have proven the feasibility and practicability of the system. Introduction Unmanned aerial vehicles are cost-efficient, highly maneuverable, and casualty-free aerial units that have been broadly adopted in civil applications and military operations, such as surveillance, traffic and weather monitoring, cargo delivery, agricultural production, damage inspection, radiation mapping, and search and rescue (SAR), to name a few [1][2][3][4]. For those missions requiring repeated flight operations where human intervention is impossible, autonomous takeoff and landing are essential and crucial capabilities for a UAV that have been extensively studied by researchers from all over the world during the last few decades. Although launching a UAV is relatively easy, landing it is the most challenging part in many circumstances due to high risks and environmental uncertainties. According to statistics, crashes and accidents are most likely to occur in the landing phase, jeopardizing the safety of the UAVs involved. For a successful autonomous landing, a prerequisite is to know the precise location of the landing site. With this information, a UAV could gradually minimize its distance to the landing site, descend to a proper altitude, and perform touch-down in the final descent phase. To resolve this pressing issue, a widely accepted approach is to use machine vision to detect artificial landing markers for assisting UAV autonomous landing. One of the most significant advantages of machine vision is that it provides rich information about the surrounding environments without emitting radiation. It is also lightweight, low-cost, energy-efficient, and friendly to stealth operations. Moreover, machine vision is robust to signal jamming and telemetry interference due to its passive nature.
At a distance, a UAV may carry out preliminary detections on landing markers using machine vision, while relying on other navigational means such as the global navigation satellite system (GNSS) or an inertial measurement unit (IMU) [5]. At close range, vision sensors can determine both the relative positions and attitudes between the UAV and the landing marker within sub-millimeter accuracy [6], information of which is essential for precise landing control. During the entire landing maneuver, vision sensors can couple with GNSS or IMU to obtain more reliable measurements. To date, there exist many studies describing vision-based methods for UAV autonomous landing in either simulations or indoor well-contained laboratory environments. They are the demonstration or proof-of-concept that could light a path toward more practical and realistic solutions [7,8]. Some of the recent studies have carried out flight tests in real-world environments where well-illuminated scenes favor the vision systems [9,10]. With the proliferation of UAV applications, there emerges an increasing need to operate UAVs at nighttime, benefiting from less airspace traffic and less human-activity-based interferences. Some countries and regions have also issued specific legal frameworks to allow for operation of a UAV at night. According to the Global Drone Regulations Database [11], for instance, the Federal Aviation Administration (FAA) in the United States brought the 2021 New FAA Drone Regulations into effect on 21 April 2021. The regulations state that a certificated operator must comply with the FAA's training and testing requirements and apply anti-collision lighting before flying the drone at night. From 31 December 2020, the European Union Aviation Safety Agency (EASA) allowed night operations of UAVs unless the state or region defines a specific zone where night ops are not possible for security reasons. In 2016, China's civil flight authority, the Civil Aviation Administration of China (CAAC) issued new rules to allow UAVs operating beyond visual line of sight (BVLOS) to fly outside the no-fly-zone (NFZ) in the nighttime. Recently, Australia's Civil Aviation Safety Authority (CASA) has approved flying a drone or remotely piloted aircraft (RPA) at night for licensed operators. With the continuous improvement of laws and regulations, UAV nighttime operations will be increasingly common in the foreseeable future. Some related studies have also investigated the potential of nighttime-based autonomous navigation. However, autonomous nighttime landing is still a challenging task. Researchers have introduced supplementary means, such as applying active light sources to markers [12], or using a shipdeck-deployed infrared light array [13], so that the vision systems may see the highlighted landing spots through the darkness. Nonetheless, an obvious drawback of the active-marker-based methods is that it requires additional ground-based infrastructures to be deployed. The active nature has a strong possibility to defeat stealth operations. To our best knowledge, none of the existing approaches have reported a reliable performance of UAV autonomous nighttime landing using conventional vision sensors and non-active marker designs only. In the darkness, the onboard vision system may significantly encounter more difficulties than daytime for the following reasons: 1. 
Low contrast and visibility: Since very limited light is reflected on the object surfaces due to poor illumination conditions, the landing marker may have little visible appearance against a dark background. Target detection methods relying on contour detection, geometric analysis, and inline-texture-based feature matching may no longer be applicable owing to low contrast and visibility. 2. Partial occlusion: Shadows caused by light sources occasionally overlay the landing marker, thus leading to partial occlusion. In addition, the marker itself may be partially or entirely out of the field-of-view (FOV) due to UAV ego-motion, which is another primary reason for occlusion. 3. Motion blur and noise: Under low-illumination conditions, the exposure time should be set longer to capture a relatively brighter image. However, a longer exposure time combined with camera and UAV motion results in strong motion blur, and inevitably brings noise to image acquisition. These disadvantages hinder a UAV from accomplishing autonomous landing at night. Considering the aforementioned issues, this paper aims to close the gaps in the related works and offer a solution towards UAV autonomous landing at nighttime without the need of an active-sourced marker, where the illumination conditions become unfavorable for a conventional vision sensor. In our previous research on UAV autonomous shipboard landing, we set a solid foundation and gained rich experience in vision-based target detection and closed-loop flight control in indoor laboratory environments [14]. In this paper, we intend to extend the vision-based approach further to accommodate the challenges of nighttime landing in outdoor environments. The significant contributions of this paper are summarized as follows: • A novel monocular vision system is presented for landing marker localization at nighttime. It consists of a model-based scheme for low-illumination image enhancement, a hierarchical-based method consisting of a decision tree with an associated lightweight CNN for coarse-to-fine landing marker detection and validation, and a post-processing technique for keypoint extraction. Such an approach is able to perform robust and accurate landing marker detection in low-illumination nighttime scenarios from different altitude levels without lighting up the marker. • For low-illumination image enhancement, we analyze a model-based scheme and refine the process by relaxing some of the restrictions and avoiding redundant calculation to speed up the algorithm. For landing marker detection and validation, the first three nodes of the decision tree are designed based on loose criteria, and the last, lightweight CNN node maintains a strict standard of classification. Our vision system can quickly adapt to other landing marker designs by simply retraining the CNN node. • The solution described in this paper has been verified outdoors in various field environments with an average luminance of 5 lx at the landing marker locations, the feasibility, accuracy, and real-time performance of which are investigated comprehensively. To our best knowledge, none of the existing approaches have reported UAV-based landing marker detection using conventional visual sensors and non-active landing markers in such low-illumination conditions.
The remainder of this paper is organized as follows: in Section 2, the literature on related topics is reviewed in detail; the hardware and software involved in this work are presented in Section 3; in Section 4, the algorithm for low-illumination image enhancement is elaborated; the hierarchical-based approach for landing marker detection and validation is described in Section 5; experiments and the corresponding discussion are depicted in detail in Section 6, and the paper is concluded in Section 7. Literature Review Previously, vision-based autonomous landing for UAVs in outdoor environments has been comprehensively studied. Most of these approaches use the onboard vision sensors to recognize a pre-defined reference object, a landing marker in general, for calculating the relative pose (position and attitude) between the UAV and the marker. The flight system then utilizes such information to perform landing maneuvers. Dating back to 2003, Saripalli, Montgomery, and Sukhatme [15] conducted groundbreaking research on landing an autonomous unmanned helicopter outdoors on a helipad by computer vision, based on calculating the image invariant moments of the landing pattern, the result of which has been broadly accepted and well-commented on by many other researchers. Since then, various marker-based proposals have been reported, in which distinctive features, such as geometrical shapes, characters, color blocks, and combinations of them, have been commonly applied to design landing markers. For instance, Lee et al. [16], Serra et al. [17] and Wu et al. [18] combined black and white square-shaped tags as the landing marker to extract the relative camera position and orientation, followed by image-based visual servoing (IBVS) for target tracking. In this case, the square corners are distinctive and robust enough for the visual algorithms to detect. A color-based marker design was reported in [19], where specific color-based segmentation techniques in conjunction with filtering methods and shape detection can distinguish the target from the background effortlessly. These early works have demonstrated the integration and adaptation of vision-based approaches into UAV platforms, making vision sensors the primary sensing scheme in related applications. In real-world applications, UAVs often operate in outdoor environments and land on moving platforms such as vehicles or ground robots in multiple scenarios of interest. In contrast to static-target-based landing, pursuing a moving target and landing on it is not always a trivial task due to complex constraints, including large displacements, target out of FOV, strong motion blur, and environmental interferences. In [20], Richardson et al. incorporated an onboard visual tracker, RAPiD, with a Rotomotion SR20 electric-powered UAV to demonstrate integrated landing control on a flatbed truck and an unmanned ground vehicle (UGV) with slow translational motions, respectively. It used a predefined description of the object being tracked, in this case a customized polygon, to determine the position and orientation of the object with respect to the UAV. However, the tracker cannot cope with rapid changes in illumination. Thus, the UAV can only operate in constant illumination conditions. In 2017, the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) competition organized by the Khalifa University of Science in Abu Dhabi was considered a milestone in the robotics community.
The competition was about landing a UAV on a ground vehicle carrying a landing marker (a cross sign within a circle) moving at a maximum speed of 15 km/h. Results indicated that there still exists a large reality gap between laboratory experiments and real-world field tests. Among the very few participants who accomplished the tasks, Li et al. [21] proposed a vision-based method comprising three stages to detect the moving marker at different altitudes. Based on the geometrical features of the marker, a combination of ellipse extraction, line slope clustering, and corner detection techniques is elaborated in detail to achieve an F-measure beyond 80% with real-time performance. This study mainly focused on vision system design, but did not elucidate the integration of the entire UAV framework. Baca et al. [9] introduced their UAV system architecture design for contesting the MBZIRC competition, where a model predictive controller (MPC) along with a nonlinear feedback controller was adopted to facilitate trajectory planning and following. The vision algorithm is comprised of adaptive thresholding, fast undistortion, ellipse-cross pattern detection, and relative pose estimation, details of which were comprehensively analyzed in [22]. Besides, the vision systems of other participants are revealed in [10,23,24]. Though the MBZIRC competition offered a valuable opportunity to examine the UAVs' performance when facing challenging real-world conditions, nighttime landing was still a task beyond its consideration. With the rapid development of deep learning, convolutional neural networks have been introduced to substitute hand-crafted features for landing marker detection in [25,26], to name a few. However, in extreme low-illumination conditions, it is still debatable whether a CNN can extract features effectively or not. Runway-based landing is another scenario in which UAV autonomous landing occurs. In [27][28][29], the visual-based approaches utilize the unique perspective angle of the forward-looking cameras to detect the runway patterns, so as to determine the relative position between the aircraft and the runway for approaching and landing control, which are the most critical phases of a flight. Although the adaptation of thermal infrared (TIR) cameras is a potential solution to all-weather operation, most of these proposals still require good visibility conditions to accomplish landing. Alternatively, some infrastructure-based systems set up on the ground provide a functional level of accuracy and availability to guide a UAV in autonomous landing. The research group of Kong et al. proposed a stereo ground-based system consisting of visible-light cameras, infrared cameras, and pan-tilt units (PTUs) to capture and track the UAV. The system feeds the relative pose information to the UAV's onboard autopilot to generate proper landing trajectories [30,31]. A similar approach was employed by Yang et al. in [32], in which more than two near-infrared cameras are deployed to form a camera array to track and provide the real-time position and velocity of the UAV. Thus, a complicated calibration procedure is essential once the system is re-deployed. Moreover, an additional infrared lamp was mounted on the nose of the UAV to facilitate target detection and tracking. Other ground-based systems use a local wireless positioning network to assure UAV localization [33][34][35]. Such an approach has the advantage of being immune to harsh weather conditions while being relatively accurate.
However, the system is vulnerable to signal jamming. Landing on maritime vessels is another real-world application for UAVs that has gained popularity. Compared with landing on a moving ground carrier, landing on a moving shipdeck faces instability and complex motions resulting from ocean environmental disturbances, such as wind, gusts, turbulence from the ship's airwake, and the ship-wave interaction due to a high sea-state. Additionally, sea-based landing makes conventional vision systems suffer from various illumination differences caused by sun reflections, rain, fog, and overcast skies, where some of the situations are considered the closest to nighttime conditions. An early study of [36] followed the strategy of placing a visual marker on an autonomous surface vehicle (ASV) for the UAV's visual sensor to determine the proper landing area using a learning saliency-based approach, while the relative pose is measured with the help of an ASV-mounted upward-looking camera to capture a UAV-carried ArUco marker at close range before the final touch-down. Nonetheless, the field experiments were only conducted in the daytime under favorable weather conditions. A paper written by Sanchez-Lopez et al. [37] introduced a vision-based decision-tree method for classifying the international landing marker on a mimicked moving shipboard. The decision tree involves a combination of an artificial neural network and other geometrical properties of the landing marker, but it was only tested in laboratory environments as the heavy computational burden prevents the vision system from being implemented in real-time flight tests. Wang and Bai [38] proposed a visually guided two-stage process to land a quadrotor UAV on a moving deck simulator. The authors scaled and modified the landing pad with active LEDs to suit the onboard vision system in long-range and above-the-deck hovering for landing marker detection. For long-range detection, the onboard camera utilizes color segmentation to detect the red LEDs, whereas at close range, the conventional visual approach is adopted to detect the marker. Notice that the precision of the vision system was validated via VICON ground truth by an emulated deck motion under sea-state 6, but the average distance error of landing was relatively large compared with the size of the landing pad. The work of Xu et al. [39] customized a nested AprilTag on a moving vessel as the landing marker to assist UAV autonomous landing, in which the hierarchical marker design along with a three-stage marker detection scheme may suit the vision system at different altitude levels. Although a successful landing attempt was reported during the field tests, more details, such as the accuracy of the estimated relative pose and the speed of the moving vessel, were yet to be revealed. In our previous paper [14], we proposed a real-time vision system to successfully detect the international landing marker in a cluttered shipboard environment even in the presence of marker occlusion, of which closed-loop flight tests demonstrated the results. Apart from the passive landing marker methods listed above, the active sourced markers presented in some early studies are potential solutions to nighttime landing. Activated landing markers, a large number of which are based on TIR technology that has been widely used for rescue, surveillance, inspection, and automatic target recognition [40,41], are more favorable to operating in bad weather conditions or dark environments. 
Several studies have suggested deploying infrared-based ground structures and sensors to accommodate the task of landing in poorly illuminated scenarios. For instance, Xu et al. [42] introduced a "T"-shaped cooperative target emitting infrared light, coupled with a long-wavelength infrared imaging sensor, as the vision-based approach. Owing to the distinct advantage of infrared imaging, only affine moment invariants are processed for landing marker detection. Such a system has been validated under low-visibility conditions, such as heavy fog, haze, and at night, showing a certain degree of accuracy and robustness. In [43], the authors constructed two concentric active IR markers to achieve robust UAV landing on a moving ground robot. The authors then simplified the vision algorithm to trivial brightness thresholding by mounting an IR-passing filter on the infrared camera to obscure all the visible light except the IR markers. Chen, Phang, and Chen [12] adopted a modified AprilTag illuminated with IR LEDs as the landing platform to allow tracking of a moving target in low-illumination conditions using the standard AprilTag recognition algorithm. Although the accuracy of the presented method has been validated, it lacks the system integration needed to perform a fully autonomous landing. Similar infrared-based methods may also be found in [13,38,44]. These active-sourced markers have shown preferable performance against environmental limitations, including low illumination and bad weather conditions. However, we would not encourage their use due to the potential risks of exposing both the UAV and the landing site where stealthy operations are desired. Alternatively, it is of great necessity to achieve autonomous nighttime landing on a conventional landing marker due to the inevitable and increasing needs of UAV nighttime operations. To date, none of the existing approaches has demonstrated the ability to perform autonomous landing at nighttime counting on conventional vision sensors only. System Configuration The experimental system involved in this work has two major components: the unmanned aerial vehicle and the ground-based facilities. The UAV carries essential avionics and computational equipment with the corresponding software to fulfill the requirements of onboard processing. At the same time, the ground-based facilities are comprised of a landing marker, a ground control station (GCS) laptop which enables accessing and monitoring the UAV remotely, and a router for establishing a wireless communication link between the UAV and the GCS. The UAV Platform We select a DJI Matrice 100 (M100) quadrotor equipped with a DJI E800 propulsion system as the UAV platform. It is a commercial off-the-shelf, fully integrated, highly flexible, and programmable aerial platform offering ready-to-fly capability and expandability. The quadrotor is able to perform GPS-guided waypoint navigation, GPS/attitude-based hover, and external-program-driven maneuvering. Such a setup minimizes the effort of customizing the airborne platform while enabling a more convenient and less risky route toward actual flight tests. The M100 quadrotor has a maximum take-off weight and payload endurance of 3.6 kg and 1.0 kg, respectively. When equipped with a DJI TB48D 5700 mAh Li-Po battery, it guarantees a 20 min flight time covering most of the experimental tasks.
The onboard avionics include an N1 flight control unit (FCU), an IMU consisting of onboard sensors such as gyroscopes, accelerometers, a magnetometer, and a pressure sensor, as well as a global navigation satellite system (GPS/BeiDou/GLONASS). A bi-directional serial interface running at a baud rate of 921,600 bps enables the FCU to output IMU and flight states, not limited to orientations, accelerations, velocities, and altitudes, at up to 100 Hz. External control commands can also be transmitted back to the FCU at a maximum rate of 100 Hz via the interface. A rechargeable RC transmitter has an operating range of up to 2 km, over which the vehicle status, battery health, and homing position can be monitored when an external mobile device is attached. To fulfill the requirements of onboard processing, we customize some payloads and install them on the M100. The payloads include an NVIDIA Jetson TX2 high-performance processing unit, a DC-DC power module, a CamNurse SY003HD 1080P color camera, a mobile beacon of the positioning system, and other custom-built mechanical structures for mounting purposes. The TX2 unit is equipped with a 256-core NVIDIA Pascal GPU, a dual-core NVIDIA Denver 2 64-bit CPU, a quad-core ARM Cortex-A57 MPCore, 8 GB LPDDR4 memory, 32 GB internal storage, and multiple interfaces. It delivers 1.3 TOPS of performance under 15 W power consumption, which has made it one of the most popular processing units for robotic applications. The DC-DC power module sources power from the TB48D LiPo battery and converts the voltage to suit the TX2 unit. The camera contains a 1/2.7 inch CMOS sensor offering a maximum resolution of 1920 × 1080, and a lens with a focal length of 3.6 mm, a horizontal FOV of 140°, an f-number of 2.0, and a maximum frame rate of 30 Hz. The camera is fastened to a lightweight custom-built mechanism underneath the M100 battery cartridge, facing straight downward, and connected to the TX2 unit via a USB cable for target detection purposes. In the field tests, the camera operates in free-run mode with a resolution of 1280 × 1024 and a frame rate of 20 Hz. Figure 1 illustrates the M100 UAV platform and the customized payloads. Specifically, component No. 8 is a mobile beacon of a ground-based positioning system installed on the UAV's airframe. We omit the details here as it does not contribute to this work, whereas it is reserved for outdoor position referencing purposes in ongoing studies. Ground Facilities and Software Architecture Apart from the UAV platform, ground-based landing facilities play an essential role in this research. A scaled landing marker is adopted to provide visual clues for landing guidance, details of which are depicted in Section 5. A wireless router powered by a LiPo battery has also been set up to bridge the M100 UAV and the GCS laptop, so that the UAV onboard commands can be executed remotely by the GCS laptop via the Secure Shell Protocol (SSH). The TX2 unit has the Ubuntu 16.04 Linux operating system and the Robot Operating System (ROS) Kinetic version installed. ROS is an open-source embedded robot library designed to provide a standard for robotics software development, which abstracts the robotic hardware from the software so that any robot can use the code. It is a distributed framework of processes that enables executables to be individually designed and loosely coupled at runtime. These processes are written in the C++ or Python programming languages and often grouped in the form of packages, which can be shared, distributed, and reused conveniently.
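As a practical aside, the field-test capture settings quoted above (1280 × 1024 at 20 Hz) can be approximated with a plain OpenCV capture loop. This is a minimal sketch, not the authors' ROS driver: the device index and the camera's acceptance of these property flags are assumptions.

```python
import cv2

# Configure a USB camera to approximate the reported field settings.
cap = cv2.VideoCapture(0)  # device index is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)
cap.set(cv2.CAP_PROP_FPS, 20)

ok, frame = cap.read()
if ok:
    print(frame.shape)  # expected (1024, 1280, 3) if the modes are accepted
cap.release()
```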
There is a variety of driver packages and third-party libraries that can be easily deployed and integrated. The major ROS packages involved in this work are the OpenCV library for visual algorithm implementation and the DJI Onboard-SDK for M100 sensor telemetry and aircraft condition monitoring. Additionally, ROS' data recording, playback, and offline visualization functionalities are extensively used in experiments. Low-Illumination Image Enhancement Images acquired at nighttime by the onboard camera have extremely low illumination and visibility. Thus, the images need to be enhanced first before any follow-up processing can be applied. The solution to low-illumination image enhancement is twofold: increasing the exposure time to raise global brightness, and implementing specific image enhancement techniques. Unfortunately, the exposure time has an upper limit of 50 ms, based on field experiments, to guarantee a frame rate of 20 Hz and meet the requirements of real-time processing and hardware-in-the-loop control. Such an exposure time setup results in a very dark image in which details are hard to distinguish with the naked eye, despite partial artificial scene illumination existing. Therefore, a real-time image enhancement technique has been adopted to improve the quality of the images. Conventional methods, such as gamma correction and histogram equalization, have shown adequate yet limited performance in low-illumination image enhancement. These methods are hindered by non-uniform brightness cases and suffer from over-enhancement of contrast, loss of details, and amplified background noise. In this section, a physical-model-based method for enhancing onboard captured image sequences in extremely low light conditions is presented. It treats image enhancement as haze removal, where the "haze" is determined by scene illumination. The proposed method is inspired by the work of Dong et al. [45]. Here, we adopt a similar but more efficient approach to facilitate low-illumination image enhancement. A channel-wise inverted low-illumination image has a very similar appearance to a hazy image:

I^c_{inv}(x) = 1 - I^c(x),    (1)

where I^c(x) is the input low-illumination image (with pixel intensities normalized to [0, 1]), I^c_{inv}(x) is the inverted pseudo-hazy image, and c stands for the RGB color channels. Figure 2a,b show the low-illumination and inverted images of the test field, respectively. The McCartney atmospheric model is commonly adopted to describe the scattering process in hazy images:

I^c_{inv}(x) = J^c_{inv}(x) t(x) + A^c (1 - t(x)),    (2)

in which J^c_{inv} is the haze-free image yet to be estimated, A^c is the global atmospheric light obtained by statistics, and t(x) is the transmission map describing the portion of the light reflected by the object that reaches the camera. According to the Dark Channel Prior (DCP) proposed in [46], a haze-free image has at least one color channel with very low intensity at some pixels:

J^{dark}(x) = \min_{y \in \Omega(x)} ( \min_c J^c(y) ),    (3)

where J^{dark}(x) denotes the dark channel of image J, and \Omega(x) is a local patch centered at x. By taking the minimum operation in the local patch and performing the dark channel calculation on the hazy image, Equation (2) can be re-written in the form of:

\min_{y \in \Omega(x)} ( \min_c I^c_{inv}(y) / A^c ) = \tilde{t}(x) \min_{y \in \Omega(x)} ( \min_c J^c_{inv}(y) / A^c ) + 1 - \tilde{t}(x).    (4)

The transmission in a local patch \Omega(x) is assumed to be a constant, and the patch's transmission is denoted by \tilde{t}(x). According to DCP, J^{dark}(x) in Equation (3) is assumed to be zero, as A^c is always positive.
Combining Equations (3) and (4), a coarse transmission map \tilde{t}(x) is derived, with a constant parameter \omega retaining a small amount of haze for depth perception:

\tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} ( \min_c I^c_{inv}(y) / A^c ).    (5)

Then, the haze-free image J(x) can be recovered according to Equation (6), where t_0 is a lower bound to preserve a small amount of haze in dense hazy regions:

J^c_{inv}(x) = ( I^c_{inv}(x) - A^c ) / \max(t(x), t_0) + A^c.    (6)

By inverting J(x) again, the enhanced image is obtained. Although the original DCP method can enhance low-illumination images, it is subject to several drawbacks. One of the disadvantages is that the patch-based calculation in the estimation of \tilde{t}(x) consumes a significant amount of computational power while introducing block artifacts in the transmission map, as the transmission is not always constant in a patch. Another problem is that the density of the "haze" is caused by the intensity of light rather than scene depth, yet such a fact is ignored in the DCP model. Therefore, we relax the restrictions of patch-based calculation and use the luminance channel to substitute the dark channel. Figure 2c,d are the corresponding dark channel and luminance channel images derived from Figure 2b, respectively. We may observe that the luminance channel image is superior to the dark channel one due to the suppressed speckle noise. The luminance channel image is obtained by a simple grayscale conversion of the inverted image's RGB channels:

I^l_{inv}(x) = 0.299 R_{inv}(x) + 0.587 G_{inv}(x) + 0.114 B_{inv}(x).    (7)

We also discover that the pixels of the transmission map \tilde{t}(x) calculated by DCP and the pixels of I^l_{inv}(x) are roughly symmetrical about the line y = k, i.e.,

\tilde{t}(x) + I^l_{inv}(x) \approx 2k,    (8)

which yields the simplified transmission estimate

t(x) = 2k - I^l_{inv}(x).    (9)

Here, k is a tunable constant that dominates the effect of enhancement. An improper value of k leads to over-enhancement or insufficient enhancement; therefore, k should be chosen carefully. Hence, combining Equations (6) and (9), we may utilize a simplified method to obtain a haze-free image using Equation (10):

J^l_{inv}(x) = ( I^l_{inv}(x) - A^l ) / \max(2k - I^l_{inv}(x), t_0) + A^l.    (10)

The value of k has been empirically set to 0.52. Such a value achieves the best result while preventing over-enhancement. For estimating the global atmospheric light A^l, the original DCP method has an extra statistical step, because the brightest pixel in the image does not always represent the atmospheric light. Conversely, we relax this assumption and pick the brightest pixel in the luminance channel image I^l_{inv}(x) for A^l. The reason behind this is that the intensity of the pixels in image I^l_{inv}(x) is only related to the level of darkness in the original low-illumination image. As I^l_{inv}(x) is an inverted image, its brightest pixel always represents the darkest spot of the scene, which needs the most enhancement. To speed up the calculation, we only process the luminance channel image to reduce the computational burden and prevent further yet redundant color-to-grayscale conversion. This is because the landing marker adopted in this work is of bright color on a dark background and can be easily distinguished in a grayscale image, details of which are elaborated in the next section. After haze removal, the inverted image after enhancement I^l(x) is shown in Figure 2e. We also present the color version of the enhanced image in Figure 2f for visualization and an intuitive comparison. Landing Marker Localization The main objective of landing marker localization is to robustly detect the landing marker and extract its relative pose with respect to the UAV. In this section, a novel hierarchical approach is presented to deal with landing marker localization after low-illumination image enhancement.
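Before detailing the localization pipeline, the enhancement procedure of the previous section can be condensed into a short sketch. It assumes the reconstructed transmission relation t(x) = 2k − I^l_inv(x) above, intensities normalized to [0, 1], an assumed floor t_0 = 0.1 (the paper does not state its value), and a hypothetical input file name.

```python
import numpy as np
import cv2

def enhance_low_illumination(gray, k=0.52, t0=0.1):
    """gray: luminance image, float in [0, 1]. Returns enhanced luminance."""
    inv = 1.0 - gray                        # inverted pseudo-hazy image
    A = float(inv.max())                    # brightest pixel = darkest scene spot
    t = np.clip(2.0 * k - inv, t0, 1.0)     # simplified transmission estimate
    dehazed = (inv - A) / t + A             # recover the "haze-free" image
    return 1.0 - np.clip(dehazed, 0.0, 1.0) # invert back to get the enhancement

bgr = cv2.imread("night_frame.png")         # hypothetical input frame
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
enhanced = (enhance_low_illumination(gray) * 255).astype(np.uint8)
```

Note that only the luminance channel is processed and the atmospheric light is taken directly as the maximum of the inverted image, mirroring the relaxed estimation described above.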
The approach includes a hierarchical scheme consisting of a pre-processing stage to reduce noise and separate the candidate foreground, a decision tree in connection with a lightweight CNN to detect and validate the landing marker, and an information extraction method to obtain the critical relative pose information for target tracking and closed-loop landing control. The approach minimizes the complexity of the system to achieve real-time processing while maintaining detection accuracy and robustness. Landing Marker Analysis Marker-based approaches have been commonly involved in UAV landing. Since increasing versatility becomes more of a necessity, we aim to use a general marker, the international landing pattern consisting of a block letter "H" and a surrounding circle, to extend the adaptability of landing. Whereas some other marker-based methods have specific requirements on the combination and size of the visual patterns, our visual approach does not rely on a special configuration of marker size or proportion. Instead, the width, height, and thickness of the block letter and the circle can be freely customized and adjusted in practice. As illustrated in Figure 3, a scaled landing marker of size 1 × 1 m² is adopted in our work, in which the circular pattern has a radius of 45.8 cm, and the block letter "H" has a width of 43.6 cm and a height of 50.8 cm. The landing marker is of bright color and painted on a gray background to be sufficiently distinguishable. According to the lens setup, the marker is also adequately large for the onboard vision sensor to see it at altitudes up to 15 m. It is worth mentioning that no further modification has been applied to the landing marker, as we would like the visual approach developed in this work to be adaptable to other scenarios where a similar landing marker is presented. Hierarchical Mission Definition As the UAV moves towards the landing marker from long distances to finally land on it, one of the significant challenges is the scale change of the marker in the image. At high altitudes, the marker becomes relatively small, so that certain features are too difficult to detect, but at the same time, the vehicle is less likely to encounter obstacles. At low altitudes, the situation is just the opposite. Therefore, we employ a hierarchical-based scheme to divide the marker detection problem into four different phases, named "Approaching", "Hovering", "Descending", and "Touch Down", according to an altitude variable ∆. In this study, ∆ was empirically set to 1 m based on the marker size and experiments. We utilize the UAV's onboard GNSS module to receive altitude information, so that the vision system is able to determine the mission phase. The details of the hierarchical-based scheme are as follows: • At the "Approaching" phase, with an altitude level above 8∆, the landing marker is acquired at a distance and forms a small area in the camera image. A piece of coarse position information is sufficient to guide the vehicle towards the marker. Hence, a rough estimation of the marker center with an acceptable position error is all we need at this stage. We also affirm that 15∆ is the maximum altitude at which the marker can be detected with considerable accuracy, whereas the accuracy of the visual algorithm rapidly degrades when the altitude exceeds this limit. • The second phase is defined as the "Hovering" phase. When the UAV enters an altitude interval between 3∆ and 8∆, it hovers and continuously tracks the marker to minimize the position error.
Therefore, a more precise position estimation is required. The landing marker offers more visual detail for hover and altitude-descending control than at the "Approaching" stage. • During the third phase, "Descending", the vehicle lowers its altitude to an interval between 1∆ and 3∆ and prepares for the final landing stage. During this process, the relative pose between the UAV and the marker is derived using all the critical visual clues of the marker for landing control. • Finally, the UAV enters the "Touch Down" phase when the altitude is below 1∆. At this time, the onboard camera is too close to the ground, so the landing marker is very likely to fade out and no longer present itself. The vehicle relies on the rest of the onboard avionics, as well as the state machine, to predict the relative pose and accomplish touch-down control. Pre-Processing We observe that a nighttime image is corrupted by shot noise with a mixed Poisson-Gaussian distribution due to the inherent drawback of the image sensor. Such noise is significantly amplified after low-illumination enhancement, which affects image binarization and landing marker separation. The first step of marker detection is to reduce noise and isolate the foreground objects. Noise reduction is performed by employing a 5 × 5 mean filter along with a 7 × 7 Gaussian filter, resulting in a slightly blurred image. Due to the uneven distribution of scene luminance, adaptive thresholding is applied to binarize the image and capture the border of the landing marker. Different box sizes are selected according to altitude levels, that is: box size 29 for altitudes above 8∆, 41 for altitudes between 3∆ and 8∆, and 51 for 1∆ to 3∆. We suggest that such a box size configuration maintains a balance between processing speed and quality. The input images and the corresponding results after adaptive thresholding can be seen in Figure 4a-f. After thresholding, connected component analysis is performed to select the candidate regions of interest (ROI) for further validation. Since there are many small irregular blocks in the binarized image, we apply different minimum area thresholds to the generated connected components based on the hierarchical proposal to bypass the irrelevant blocks. We set the thresholds to 150 pixels for the "Approaching" phase, 500 pixels for the "Hovering" phase, and 1200 pixels for the "Descending" phase, respectively, based on an image resolution of 1280 × 1024. Such a scheme effectively reduces the number of connected component candidates from thousands to less than a dozen per frame on average. Then, the minimum bounding box is computed for each remaining connected component to form an ROI in both the previously enhanced image and the binarized image. The results are then forwarded to a decision tree for marker detection and validation. Decision Tree Validation The decision tree presented in this work contains four nodes in total. The first node checks whether the shape of the ROI is similar to a square, since the symmetric landing marker yields a near-square bounding box. Since perspective and lens distortions may change the ratio, a tolerance δ_1 is applied to this criterion to improve the robustness. The second decision node is based on the observation that the ratio between the number of pixels of the outer circular pattern and that of the "H" pattern is close to a constant value c_1, while the ratio between the number of pixels of the entire pattern and that of the dark background is close to a constant value c_2.
Taking the factors of occlusion and perspective distortion into account, tolerances δ_2 and δ_3 are applied to c_1 and c_2 to improve the robustness. Such a feature is scale-, translation-, and rotation-invariant. The confirmed ROIs are sent to the next node. The third node checks whether a binarized ROI image contains candidate graphical components or not. It examines whether the connected components from the candidate ROI have nested holes, to determine whether a complete landing marker is present. We also take the situation of occlusion into account. If partial occlusion contaminates the marker, the circular pattern is likely to be incomplete, which violates the nested-holes assumption. Instead, if at least two connected components are within one single ROI, and their geometrical centers are within a certain pixel range, the ROI is confirmed as a potential candidate and passed on to the next tree node. The criteria mentioned above are also scale-, translation-, and rotation-invariant, as well as robust to occlusion. ROIs that do not meet the criteria are discarded. The fourth node is the most critical yet resource-consuming node, in which a CNN is developed to perform marker validation. Unlike the region-proposal-based or bounding-box-regression-based CNN frameworks, we treat the landing marker detection problem as a simple binary classification problem using the ROIs extracted from the previous node. As a result, our method only validates the extracted ROI images using a lightweight network architecture rather than searching for the desired target in the entire image. Such an approach makes processing roughly 20 times faster than state-of-the-art (SOTA) CNN object detectors. To generate the training samples, we first utilize the previous ROI images containing landing markers and segmented natural scenes of different scales, perspective angles, and illumination conditions, captured under various scenarios, to construct the dataset. In total, 100 landing marker images and 200 natural scene images are manually chosen as the training dataset, whilst another 20 landing marker images and 40 natural scene images are randomly chosen as the testing dataset. In Figure 5, selected images of the dataset are illustrated, where the first two rows are captured landing markers, and the last two rows are segmented natural scenes. Notice that some markers are partially occluded, heavily distorted, blurred, and affected by discontinuity to improve the network's generalization ability. Before training, each image is resized to a resolution of 56 × 56 and normalized to the range [0, 1]. In order to further extend the adaptability of the network, data augmentation, including affine transformation, random cropping, and random rotation, is applied to the training samples. As the developed CNN will eventually run on a resource-constrained UAV onboard platform, we are not encouraged to use deep network architectures such as ResNet [47] and VGG [48], trading processing speed for precision. Instead, we present a simple yet efficient network structure primarily inspired by SqueezeNet [49] to facilitate landing marker validation. The key idea is to substitute the conventional convolution layers with "Fire" modules to reduce computation and parameters.
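Before turning to the network architecture, the candidate-extraction stages described above (pre-processing plus decision-tree nodes 1 and 2) can be sketched as follows. The filter sizes, box sizes, and area thresholds follow the text, and ∆ is the paper's 1 m altitude unit; however, the tolerances δ_1-δ_3 and the constants c_1, c_2 are not reported numerically, so the values below are placeholders, and all function names are our own.

```python
import cv2

DELTA = 1.0  # metres, as set empirically in the paper

def phase_params(altitude_m):
    """Altitude -> (adaptive-threshold box size, min connected-component area)."""
    if altitude_m > 8 * DELTA:        # "Approaching"
        return 29, 150
    if altitude_m > 3 * DELTA:        # "Hovering"
        return 41, 500
    return 51, 1200                   # "Descending"

def extract_roi_candidates(enhanced_gray, altitude_m):
    """Denoise, binarize (uint8 input), and keep large connected components."""
    box, min_area = phase_params(altitude_m)
    blurred = cv2.blur(enhanced_gray, (5, 5))       # 5x5 mean filter
    blurred = cv2.GaussianBlur(blurred, (7, 7), 0)  # 7x7 Gaussian filter
    binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, box, 2)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]  # (x, y, w, h)
    return binary, boxes

DELTA_1 = 0.2                  # square-shape tolerance (placeholder)
C_1, DELTA_2 = 1.5, 0.4        # circle-to-"H" pixel ratio (placeholder)
C_2, DELTA_3 = 0.6, 0.2        # pattern-to-background ratio (placeholder)

def node1_square(w, h):
    """Node 1: the ROI bounding box should be roughly square."""
    return abs(w / h - 1.0) <= DELTA_1

def node2_pixel_ratios(circle_px, h_px, background_px):
    """Node 2: scale-, translation-, rotation-invariant pixel-count ratios."""
    if h_px == 0 or background_px == 0:
        return False
    return (abs(circle_px / h_px - C_1) <= DELTA_2 and
            abs((circle_px + h_px) / background_px - C_2) <= DELTA_3)
```

The Fire-module building block that the network relies on is described next.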
A Fire module is comprised of a squeeze convolution layer with 1 × 1 filters only, followed by an expand layer consisting of two separate convolution layers: one with 1 × 1 filters, the other with 3 × 3 filters, the outputs of which are concatenated together along the channel dimension. As depicted in Figure 6, the network begins with a standalone convolution layer C1 using an input image of size 56 × 56, followed by four Fire modules, Fire1 to Fire4, and ends with another standalone convolution layer C2. Max-pooling layers with stride 2 (P1 to P3) are employed after C1, Fire2, and Fire3, respectively. As the input of the proposed network is only a quarter of the original SqueezeNet's, we proportionally reduce the number of filters per Fire module, resulting in (6, 24, 24) for Fire1, (8, 32, 32) for Fire2, (12, 48, 48) for Fire3, and (16, 48, 48) for Fire4, respectively. A dropout layer with a ratio of 40% is applied after the Fire4 module. The final average pooling layer P4 reduces the output to the landing marker and background categories, where a softmax layer is used to classify the output. For training purposes, the network is trained by minimizing the cross-entropy loss function using back-propagation on a desktop PC equipped with an Intel i7 4790K CPU, 16 GB DDR3-1333 MHz RAM, and an Nvidia GeForce 1070 GPU running Ubuntu 16.04 OS. We used a batch size of 4, a learning rate of 0.001, and an Adam optimizer to train the network for 100 epochs under the PyTorch framework. Keypoint Extraction Finally, if an ROI is affirmed as a landing marker by the CNN node, we extract the keypoints from the corresponding binarized ROI image and preserve them for further processing, such as pose estimation. Specifically, we treat the four outermost vertices and the center of the "H" pattern as the keypoints. As illustrated in Figure 7a, the vision system first rejects the small outliers of the marker ROI image, then removes the circular pattern to obtain a clear image of the "H" pattern (see Figure 7b). The four vertices of the bounding parallelogram of the "H" pattern are extracted, and the marker center is derived by simply averaging the pixel coordinates of the vertices, as shown in Figure 7c. Field Experiment Scenes Since this is the first attempt to perform vision-based landing marker detection in low-illumination nighttime environments, we conduct a comprehensive investigation to select proper locations at the South China University of Technology-Zhuhai Institute of Modern Industrial Innovation campus before carrying out the field tests. The majority of the campus has intensive artificial light sources around buildings and paths. We have chosen four different scenes with minimal light sources but cluttered backgrounds as the experiment locations, as illustrated in Figure 8a-d. Additionally, the red arrow in each scene indicates the actual position where the landing marker is placed in the experiments. We also employ a light meter with a precision of 0.1 lx and a range of 200,000 lx to measure the luminance at each marker location. To be more specific, Figure 8a shows the basketball field of the campus near the main building, with the nearest light source approximately 15 m away, shown in the top-left corner of the figure. The landing marker is placed in the middle of the field (denoted as L_1), where the measured luminance is 4.0 lx. In Figure 8b, the landing marker is placed on a stone-paved path near a small warehouse (denoted as L_2), surrounded by some vegetation, with a measured luminance of 4.3 lx.
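The architecture just described can be sketched in PyTorch as follows. The per-module filter counts, pooling positions, dropout ratio, and input size follow the text; the kernel size and channel count of C1, the single-channel input, and folding the softmax into the cross-entropy loss are assumptions where the text is not specific.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze with 1x1 filters, then parallel 1x1 and 3x3 expand branches."""
    def __init__(self, in_ch, squeeze, expand1x1, expand3x3):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU(inplace=True))
        self.expand1 = nn.Sequential(nn.Conv2d(squeeze, expand1x1, 1), nn.ReLU(inplace=True))
        self.expand3 = nn.Sequential(nn.Conv2d(squeeze, expand3x3, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        x = self.squeeze(x)
        return torch.cat([self.expand1(x), self.expand3(x)], dim=1)  # channel concat

class MarkerNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),  # C1 (assumed 3x3, 16 ch)
            nn.MaxPool2d(2),                  # P1
            Fire(16, 6, 24, 24),              # Fire1 -> 48 channels
            Fire(48, 8, 32, 32),              # Fire2 -> 64 channels
            nn.MaxPool2d(2),                  # P2
            Fire(64, 12, 48, 48),             # Fire3 -> 96 channels
            nn.MaxPool2d(2),                  # P3
            Fire(96, 16, 48, 48),             # Fire4 -> 96 channels
            nn.Dropout(0.4),                  # 40% dropout after Fire4
            nn.Conv2d(96, num_classes, 1),    # C2
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # P4: global average pooling

    def forward(self, x):                     # x: (N, 1, 56, 56)
        return self.pool(self.features(x)).flatten(1)  # class logits

model = MarkerNet()
print(model(torch.randn(1, 1, 56, 56)).shape)  # torch.Size([1, 2])
```

Training would then minimize nn.CrossEntropyLoss (which subsumes the softmax) with torch.optim.Adam at a learning rate of 0.001 and a batch size of 4, as the text reports. Returning to the remaining field experiment scenes: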
Keypoint Extraction

Finally, if an ROI is affirmed as a landing marker by the CNN node, we extract the keypoints from the corresponding binarized ROI image and preserve them for further processing, such as pose estimation. Specifically, we treat the four outermost vertices and the center of the "H" pattern as the keypoints. As illustrated in Figure 7a, the vision system first rejects the small outliers of the marker ROI image, then removes the circular pattern to obtain a clear image of the "H" pattern (see Figure 7b). The four vertices of the bounding parallelogram of the "H" pattern are extracted, and the marker center is derived by simply averaging the pixel coordinates of the vertices, as shown in Figure 7c.

Field Experiment Scenes

Since this is, to our knowledge, the first attempt to perform vision-based landing marker detection in low-illumination nighttime environments, we conducted a comprehensive investigation to select proper locations at the South China University of Technology-Zhuhai Institute of Modern Industrial Innovation campus before carrying out the field tests. The majority of the campus has intensive artificial light sources around buildings and paths. We chose four different scenes with minimal light sources but cluttered backgrounds as the experiment locations, as illustrated in Figure 8a-d. The red arrow in each scene indicates the actual position where the landing marker is placed in the experiments. We also employ a light meter with a precision of 0.1 lx and a range of 200,000 lx to measure the luminance at each marker location. To be more specific, Figure 8a shows the campus basketball field near the main building, with the nearest light source approximately 15 m away in the top-left corner of the figure. The landing marker is placed in the middle of the field (denoted as L1), where the measured luminance is 4.0 lx. In Figure 8b, the landing marker is placed on a stone-paved path near a small warehouse (denoted as L2), surrounded by some vegetation, with a measured luminance of 4.3 lx. In Figure 8c, the landing marker is placed on the dark side of a fire escape (denoted as L3) with a luminance of 4.8 lx, while the luminance at the white arrow marking is approximately 32 lx because it is closer to a light source. In Figure 8d, the landing marker is placed in a small garden (denoted as L4) on a stone-paved path surrounded by some vegetation, where the measured luminance is 7.6 lx. The average luminance at the marker locations is approximately 5 lx. We conduct various field experiments at the above-mentioned scenes and record the onboard videos in the late evening, between roughly 10:00 p.m. and midnight, under moderate weather conditions.

Vision System Evaluation

To evaluate the vision system proposed in this paper, we follow the complete procedure of autonomous landing and manually fly the DJI M100 quadrotor to simulate the phases of "Take-off", "Approaching", "Hovering", "Descending", and "Touch Down". In Figure 9a, the three-dimensional trajectory of one manual flight test recorded by the onboard GNSS module is presented. As can be seen, the quadrotor takes off from the vicinity of the landing marker at the beginning. After ascending to an altitude above 10 m, it approaches the marker and stays in the air to confirm target detection. Then, the vehicle gradually lowers its altitude during the "Hovering" and "Descending" phases and eventually lands on the marker. Figure 9b shows the vehicle's local velocities along the x, y, and z axes. The onboard camera records images at a resolution of 1280 × 1024, a frame rate of 20 Hz, and an exposure time of 50 ms. As mentioned earlier in Sections 3 and 4, such a frame configuration meets the minimum requirement of real-time processing while allowing as much light as possible to enter the camera during the exposure phase to lighten the image. Each video is then stored in the "bag" file format under the ROS framework. The actual frame rate is around 19.8 Hz due to processing latency between adjacent frames, but we neglect this error in this study. In the experiments, the M100 quadrotor has a maximum horizontal speed of 2 m/s and a vertical speed of 1 m/s, which is sufficient to simulate the motion blur and perspective distortion caused by varying camera angles during horizontal acceleration and deceleration. Finally, at each field experiment scene, we collect a video of approximately 2-3 min, resulting in four videos in total for evaluating the proposed vision system. In Table 1, the statistics of each video are summarized, where key elements such as "total images", "marker images", and "max altitude" are presented. These elements are the fundamental metrics for evaluating the vision system.

Since our dataset is built on automatically extracted ROI images without precisely labeled ground-truth bounding boxes, it would be inappropriate to use the conventional intersection over union (IoU) metric to evaluate the vision system. Instead, we employ the following criteria to evaluate the vision system:

‖C_m − C_g‖ ≤ r_th ⇒ TP, (11)

P = TP / (TP + FP), R = TP / (TP + FN), (12)

F1 = 2PR / (P + R), (13)

where points C_m and C_g are the predicted landing marker center and its ground truth, respectively. Equation (11) states that if the marker center is successfully detected when a marker is present, and its coordinate falls into a small circle C centered at C_g with radius r_th, we consider the detected landing marker a true positive (TP). According to our previous experience in [14], a marker center derived from the four outermost vertices of the "H" pattern is accurate enough, with limited bias. Hence, we set r_th to one-eighth of the bounding polygon's diagonal length; this value raises the bar for acceptance and makes the evaluation more rigorous. On the contrary, if a landing marker is present in the image but no marker center is obtained, or the predicted marker center lies outside the circle C, it is considered a false negative (FN). For the extracted background ROI images, if an image is categorized as a landing marker by the CNN node, it is considered a false positive (FP). In Equations (12) and (13), the standard metrics precision P, recall R, and F-measure F1 are adopted to evaluate the accuracy of the vision system. Compared with the conventional IoU metric, the proposed criteria give credit to both the quality of the landing marker detection outcomes and the detection rate.
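These criteria are straightforward to compute per frame. Below is a minimal Python sketch of Equations (11)-(13); the function names and the convention of returning "FN" for a missed detection are illustrative.

```python
import math

def classify_detection(pred_center, gt_center, r_th):
    """Eq. (11): a detection is a true positive if the predicted marker
    center falls within the circle of radius r_th around the ground truth;
    a missing or out-of-circle center on a marker frame is a false negative."""
    if pred_center is None:
        return "FN"
    dx = pred_center[0] - gt_center[0]
    dy = pred_center[1] - gt_center[1]
    return "TP" if math.hypot(dx, dy) <= r_th else "FN"

def precision_recall_f1(tp, fp, fn):
    """Eqs. (12)-(13): standard precision, recall, and F-measure."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```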
The overall performance of the vision system affects the accuracy of pose estimation and the precision of hardware-in-the-loop control in future works. Based on the criteria mentioned above, we collect the output images of each stage from the recorded videos of the different scenes and manually categorize them into marker (denoted "MK") and background (denoted "BG") categories to calculate the exact numbers of TP, FP, and FN; the results are listed in Table 2. Specifically, the term "ccomp" stands for the number of detected connected components, and "dt1-dt3" denotes the remaining ROIs after filtering by nodes 1 to 3 of the decision tree. Because the CNN node plays a critical role in confirming actual landing markers and rejecting irrelevant background ROIs, its output is counted separately and presented as "dt4_cnn". Finally, we examine the number of cases in which the marker keypoints can be successfully extracted, denoted by the term "ext_kpt".

At scene L1, there are 2380 marker-present frames, for which the "ccomp" stage outputs 2339 connected components belonging to the marker and 5186 belonging to background fragments. The decision tree then rejects a small number of the marker ROIs and outputs 2241 marker images with accurately extracted keypoints confirmed as TP, while effectively eliminating more than 98% of the background segments. We carefully examined the remaining FP samples reported by the network and discovered that they all come from partially cropped landing marker images generated during the final "Touch Down" phase of landing. Although keypoints can be drawn from most of these "H" patterns, we still consider them FP because of their inconsistency and unreliability. Compared with L1, scene L2 has a more complicated and cluttered background than the basketball field; consequently, nearly three times as many connected components are picked up in L2 as in L1. The vision system performs similarly to the previous scene, except that the CNN node verifies only 23 FP samples. We studied this case and found that the cause lies in the landing maneuver. Since the marker is placed on a narrow stone-paved path surrounded by cluttered vegetation, the quadrotor is piloted toward the west of the marker at a relatively high altitude (3.28 m) to land on flat ground. The marker quickly vanishes past the edge of the image, leaving only a limited number of partially occluded marker samples, and extracting keypoints from these negative samples proves difficult due to heavy motion blur. Moving on to the fire escape scene L3, an interesting phenomenon is observed: the number of extracted markers slightly exceeds the actual value (3008 vs. 2936, marked by the * sign in Table 2, column 6).
This is because the fire escape has a lighter-colored background than the marker. The edges of the marker square are thresholded into the foreground at low altitude, leading to approximately 80 repetitive identifications of the marker. We have therefore subtracted this number at each stage to make a fair comparison. Scene L4 has an environment similarly cluttered to that of scene L2; a large number of background segments are successfully rejected by the vision system while comparable marker detection performance is achieved. A majority of the FP samples again come from incomplete marker patterns that are yet to be eliminated.

In Table 3, precision P, recall R, and F-measure F1 are listed correspondingly. Combined with the altitude information in Table 1, we can also see that the altitude level has an impact on the system. The vision system occasionally misses the marker when the UAV flies above 13 m because the connected components contain too few pixels for analysis. This situation is exacerbated when motion blur from horizontal movement occurs, which explains the slight degradation in system performance at scene L2. In contrast, the vision system achieves its best performance at scene L3, where the altitude stays consistently below 10 m. Note that we subtract the repetitive identifications mentioned above from the final system outputs to obtain the actual number of TP samples (2852, denoted by the * sign in Table 3) at scene L3. In real landing scenarios, it is recommended to maintain a lower altitude for landing marker detection and tracking.

In Figure 10, we present some of the landing marker detection results. The first, third, and fifth rows are original color images captured onboard, in which the markers detected by our vision system are highlighted with colored lines and dots. The second, fourth, and sixth rows are the corresponding color images after enhancement, shown so that scene details can be observed. The proposed vision system is able to detect the landing marker in the presence of noise, scaling, image distortion, motion blur (see the figures in row 4, column 3 and row 6, column 1), and image acquisition error (see the last two figures of Figure 10) in complex low-illumination environments.

As we have suggested, the low-illumination image enhancement stage plays a crucial role in landing marker detection, so the vision system is further evaluated using the same metrics but on the original test videos without enhancement. To make a fair comparison, we also retrain the CNN node with the original, unenhanced training samples. The results are shown in Table 4, from which substantial degradation in system performance is evident. The reason is that the pre-processing stage struggles to binarize the images properly, reducing the number of generated connected components and the detection rates of the subsequent stages. Notice that only two marker samples are identified at scene L1, a complete failure of the system. This failure occurs because the marker is placed at the center of the basketball field, far from any light source and with the lowest luminance among all the test scenes.
The landing marker merges into the dark background and can no longer be detected. From this analysis, we conclude that the low-illumination image enhancement scheme increases the quality of nighttime images and is the foundation of landing marker detection.

Since a number of real-time-capable CNN-based object detection frameworks exist in the related research, we also compare the performance of the proposed vision system against these state-of-the-art methods. We chose YOLOv3 and its simplified version YOLOv3-Tiny [50], as well as MobileNetV2-SSD [51], for the evaluation. As our vision system does not rely on image-wise bounding box labeling, we randomly select an equal proportion of the original, unenhanced images from the videos to establish a dataset for a fair comparison. From Table 1, there is a total of 9642 marker images. Accordingly, 500 images were selected and manually labeled as the training dataset, whilst the remaining 9142 images were kept for testing. All images were resized to 640 × 512 for a faster evaluation process. Each network was trained individually using the default parameters under official guidance, and the detection results are listed in Table 5. It is worth mentioning that the M100 quadrotor spends a longer time at higher altitudes during the "Approaching" and "Hovering" stages, so approximately 60% to 70% of the images contain a relatively small landing marker, which makes detection difficult. This significantly increases the number of FN samples for YOLOv3 and its derivative. In contrast, MobileNetV2-SSD performs better than YOLOv3 and YOLOv3-Tiny in detecting small markers but also introduces a certain number of FP samples, namely other ground objects, into the results. Finally, the performance of our vision system is derived from the data in Table 3, which shows that our approach achieves the best recall and F-measure with a minimal network design.

We also evaluate whether the enhanced low-illumination images benefit the above-mentioned CNN-based object detection frameworks. Again, each network is trained and tested by the same procedure, but using the enhanced image dataset. The detection results are presented in Table 6. There are overall improvements for both YOLOv3 and YOLOv3-Tiny, whereas MobileNetV2-SSD significantly reduces its FN samples to achieve the best recall and F-measure. Note that this result slightly outperforms our vision system, as the enhanced images offer much stronger and more discriminative features for the networks to extract. However, our vision system retains the advantage of processing speed, which is elaborated in the next subsection.

Processing Time Evaluation

Processing speed is of crucial importance to real-time UAV applications. Therefore, the timing performance of each stage of the proposed system is quantitatively evaluated. In this section, we use the collected videos to test the average time consumption of each processing stage. Each video has an initial resolution of 1280 × 1024, which is then resized to 1024 × 768 and 800 × 600, respectively; we use the annotations "HR", "MR", and "LR" to denote these resolutions. The tests are conducted on both the CNN training desktop PC (denoted "PC") and the Nvidia TX2 unit (denoted "TX2"). Timing performances of the different processing stages, including image enhancement, pre-processing, the decision tree method, and keypoint extraction, are comprehensively evaluated; the results are listed in Table 7.
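A per-stage timing evaluation of this kind can be scripted in a few lines. Below is a minimal Python sketch; the stage names and the dictionary-of-callables interface are illustrative, not the authors' actual harness.

```python
import time
from collections import defaultdict

def profile_pipeline(frames, stages):
    """Average per-stage wall-clock time over a video.
    `stages` is an ordered dict {name: callable}; each stage consumes
    the previous stage's output, mirroring the pipeline structure."""
    totals = defaultdict(float)
    for frame in frames:
        data = frame
        for name, fn in stages.items():
            t0 = time.perf_counter()
            data = fn(data)
            totals[name] += time.perf_counter() - t0
    n = max(len(frames), 1)
    return {name: 1000.0 * t / n for name, t in totals.items()}  # ms/frame

# Hypothetical usage with placeholder stage functions:
# report = profile_pipeline(video_frames, {
#     "enhancement": enhance, "pre-processing": binarize,
#     "decision tree": decision_tree, "keypoints": extract_keypoints})
```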
It is worth mentioning that the CNN node calculation is performed on the GPU on both platforms, and the time consumed by data transfer between CPU and GPU is neglected. The most time-consuming parts are the image enhancement and adaptive thresholding stages. By reducing the resolution to the "MR" level, the detection rate on the desktop PC surpasses the maximum frame rate that the onboard camera offers. In contrast, the TX2 unit performs relatively poorly at both the "HR" and "MR" resolutions. Nonetheless, it already achieves a detection rate of more than 10 Hz at the "LR" resolution, satisfying this study's minimum requirement of real-time processing. Compared with the works of [9,25], our system performs landing marker detection at a higher resolution, whereas their algorithms operate only at image resolutions of 752 × 480 and 640 × 480, respectively.

We also test the timing performance of the aforementioned CNN-based frameworks on the desktop PC's Nvidia Geforce 1070 GPU. For an image size of 1280 × 1024, the average processing times for YOLOv3, YOLOv3-Tiny, and MobileNetV2-SSD are 102.65 ms, 14.12 ms, and 64.39 ms, respectively. For an image size of 640 × 512, YOLOv3, YOLOv3-Tiny, and MobileNetV2-SSD take approximately 33.49 ms, 5.21 ms, and 26.65 ms to process one frame. Although these networks can achieve real-time processing on a high-performance desktop GPU at the cost of reduced image resolution, we still find it challenging to implement them on a resource-limited UAV onboard platform. Moreover, these networks still require the low-illumination image enhancement scheme to achieve better performance, which further reduces the processing speed. Note that the timing performance of the proposed vision system is achieved without optimization: redundant operations are executed at every frame. By using the information in adjacent frames of the image sequences, we could narrow the ROIs to a specific area based on previous detection results and dramatically reduce the computational burden of image enhancement and adaptive thresholding.

Conclusions and Future Works

In this paper, we presented a novel, robust, and efficient vision system for assisting UAV autonomous landing in nighttime outdoor environments with visibility constraints. The system effectively enhances the quality of onboard-captured images while performing reliable landing marker detection and validation through a hierarchical decision tree method and extracting the key information. Field experiments show that the vision system is accurate and robust to low illumination, motion blur, distortions, and marker occlusion. Moreover, the vision system has satisfactory timing performance and has been implemented in real time on the onboard processing unit; its processing speed can be further improved by utilizing the information of adjacent video frames. At this stage, we have shown only open-loop test results of the vision system for landing marker detection and validation, as space in this paper is limited. In the meantime, however, we are coupling the marker detection results with relative pose estimation, sensor fusion, and control system design to finalize hardware-in-the-loop field experiments.
In the near future, we plan to present the results of fully autonomous UAV nighttime landing and to extend the existing system to land on a moving target. As the study progresses, a nighttime landing marker dataset will be released so that other researchers can compare and evaluate their detection algorithms.
Human TRAV1-2-negative MR1-restricted T cells detect S. pyogenes and alternatives to MAIT riboflavin-based antigens

Mucosal-associated invariant T (MAIT) cells are thought to detect microbial antigens presented by the HLA-Ib molecule MR1 through the exclusive use of a TRAV1-2-containing TCRα. Here we use MR1 tetramer staining and ex vivo analysis with mycobacteria-infected MR1-deficient cells to demonstrate the presence of functional human MR1-restricted T cells that lack TRAV1-2. We characterize an MR1-restricted clone that expresses the TRAV12-2 TCRα, which lacks residues previously shown to be critical for MR1-antigen recognition. In contrast to TRAV1-2+ MAIT cells, this TRAV12-2-expressing clone displays a distinct pattern of microbial recognition by detecting infection with the riboflavin auxotroph Streptococcus pyogenes. As known MAIT antigens are derived from riboflavin metabolites, this suggests that the TRAV12-2+ clone recognizes unique antigens. Thus, MR1-restricted T cells can discriminate between microbes in a TCR-dependent manner. We postulate that additional MR1-restricted T-cell subsets may play a unique role in defence against infection by broadening the recognition of microbial metabolites.

Human mucosal-associated invariant T (MAIT) cells have been described as an abundant population of αβ T-cell antigen receptor (TCR) T cells that display antimicrobial Th1-like cytotoxic capacity upon detection of a range of microbial infections [1][2][3]. By definition, MAIT cells express a semi-invariant TCR that engages antigenic ligands presented by the HLA-Ib major histocompatibility complex (MHC)-related protein 1 (MR1). MR1 has been shown to present small compounds derived from folic acid and riboflavin biosynthesis, the latter of which can activate MAIT cells [4][5][6]. In healthy humans, MAIT cells account for 1-10% of T cells in peripheral blood. They are also abundant in the liver and in a number of mucosal tissues 1,[7][8][9][10][11]. Thymic selection and peripheral expansion of MAIT cells depend on MR1 (refs 11,12). Furthermore, MAIT cells with effector function have been found in the thymus, a finding that has been used to support their definition as innate-like 13. While the role of MR1 and MAIT cells in human immunity is unclear, mouse studies have demonstrated their role in protection against bacterial infections including Klebsiella pneumoniae, Mycobacterium bovis Bacillus Calmette-Guérin (BCG) and Francisella tularensis live vaccine strain (LVS) [14][15][16].

MR1 is an HLA-Ib MHC class I molecule thought to be highly conserved in mammalian evolution 17. MR1 can bind vitamin B-based precursors derived from folic acid (vitamin B9) and riboflavin (vitamin B2) biosynthesis that share a common pterin ring structure 5. So far, only those from the riboflavin synthetic pathway have been shown to stimulate MAIT cells. These stimulating ligands can be derived either from pyrimidine-based early intermediates in riboflavin synthesis (5-A-RU) that form adducts with other small metabolites (for example, 5-OP-RU), or from the direct lumazine precursors of riboflavin (for example, ribityllumazine (RL)-6,7-diMe) 4,5. Because riboflavin synthesis does not occur in humans, riboflavin metabolites presented in the context of MR1 have been suggested to be pathogen-associated molecular patterns. However, evidence supports the existence of additional MR1 ligands.
For example, structural analysis suggests that plasticity in the MR1-binding groove could accommodate a range of different ligands 4,[18][19][20][21][22]. As the pterin ring occurs commonly in the environment, it is feasible that other microbial or host molecules with common chemotypic properties could bind to MR1 and function as antigens for MR1-restricted T cells. Although MAIT cells specifically recognize infection by pathogens with the capacity to synthesize riboflavin 1,3, whether microbe-specific MR1 ligands exist is unknown. We previously evaluated the ex vivo human TCR repertoire of MAIT cells responsive to three riboflavin-synthesizing microbes 23, finding that distinct MAIT TCR usage was associated with microbe-selective responses within and across individuals. These data support the hypothesis that MR1 can present discrete microbial ligands, and that this presentation is in turn associated with selective clonal expansion of MAIT cells. However, it is not known whether each microbe synthesizes the same repertoire of riboflavin metabolites at varying proportions, or whether there are unique ligands. The diversity of the MR1 ligand repertoire suggests a correspondingly diverse MAIT TCR repertoire to mediate ligand recognition.

Human MAIT TCRα chains have been described as invariant, comprising TRAV1-2/TRAJ12, 20, 33 genes paired with a limited array of TCR β-chains 1,11,13,24,25. However, other studies have identified greater TCR heterogeneity through more diverse TCRα and TCRβ chain usage 10,23,[26][27][28]. Gherardin et al. 28 described TRAV1-2-negative TCRs that bind selectively to MR1 tetramers loaded with 5-OP-RU (a riboflavin metabolite), or 6FP/acetyl-6FP (a folate derivative), or both. These TRAV1-2-negative TCRs represent unprecedentedly diverse TRAV and TRBV usage by MR1-restricted T cells. These findings suggest that MR1-restricted T cells could use diverse TCRs to recognize microbial infection; therefore, the full repertoire of TCRs that can be used by MR1-restricted T cells is unknown.

Here we describe microbe-reactive MR1-restricted T cells that do not express TRAV1-2. Functional analysis reveals that these cells, although less prevalent than those that express TRAV1-2, can be found in PBMC from all individuals. Among MR1-Ag tetramer-positive cells, 1-4% are TRAV1-2-negative. T-cell cloning confirms the usage of an alternative TCRα chain, TRAV12-2, by an MR1-restricted T-cell clone from one donor. In comparison with previously described TRAV1-2+ MAIT cells, this T-cell clone displays a unique pattern of ligand and microbial selectivity. Most notably, the TRAV12-2+ T-cell clone can detect infection with Streptococcus pyogenes, a microbe that is not capable of synthesizing riboflavin, in a TCR-dependent manner. These data provide direct evidence of the ability of MR1 to present a diverse array of ligands, which in turn is associated with selective TCR usage. Finally, our findings challenge the current paradigm that sole usage of TRAV1-2, in conjunction with recognition of riboflavin metabolites, is the defining feature of MR1-restricted T cells.

Results

Enumeration of functional TRAV1-2− MR1-restricted T cells. MAIT cells can detect a wide range of bacteria and fungi through recognition of riboflavin metabolites presented by the HLA-Ib molecule MR1. In this context, we sought to explore the relative contribution of MR1 to the entire HLA-Ib-restricted CD8+ T-cell response to microbial infection.
In order to quantify and characterize these responses directly ex vivo, we developed a functional ex vivo assay that relies upon cytokine production by CD8+ T cells in response to microbial infection of HLA-mismatched A549 cells 1. The flow cytometry gating scheme used to analyse this response is shown in Supplementary Fig. 1. Using this approach, we have consistently been able to enumerate MAIT cells (TRAV1-2+) responsive to a number of microbes, such as Mtb 1,13,23, Candida albicans and Salmonella typhimurium infections 23. However, we also consistently observed TRAV1-2-negative cells reactive to these same microbes. For example, nearly 50% of the CD8+ HLA-Ib response to Mycobacterium smegmatis (M. smegmatis) was TRAV1-2-negative in a representative donor, D462 (Fig. 1).

To address the hypothesis that TRAV1-2-negative cells were MR1-restricted, we generated an MR1-knockout A549 cell line 29. In the functional assay, the wild-type (WT) and MR1−/− cell lines were infected with mycobacteria and T-cell responses were evaluated by interferon (IFN)-γ ELISPOT. As shown in Fig. 2a, activation of the TRAV1-2+ MR1-restricted clone (D426-B1 (ref. 23)) was ablated, while activation of the HLA-E-restricted clone (D160-1-23) was unaffected, indicating that lack of MR1 did not affect infectivity or a separate antigen-presentation pathway. To establish the prevalence of MR1-restricted T-cell responses ex vivo, WT and MR1−/− A549 cells were infected with M. smegmatis and used as stimulators for CD8+ T cells isolated from PBMC of five healthy individuals (Fig. 2b). Intracellular IFN-γ production was assessed using flow cytometry. As expected, TRAV1-2+ cells produced IFN-γ in response to M. smegmatis-infected WT A549 cells. Furthermore, the majority of the response was MR1-dependent (mean 85.21% MR1-restricted, range 45.3-97.22, n = 5, tested in duplicate). Each donor also had a proportion of TRAV1-2-negative cells whose production of IFN-γ was MR1-dependent (mean 25.83% MR1-restricted, range 10-41.03, n = 5, tested in duplicate; Fig. 2b). The average cytokine response of the TRAV1-2-negative population for each donor across experiments is plotted in Fig. 2b (right panel).

Figure 2 | MR1-restricted microbial-reactive CD8+ T cells from blood do not exclusively express TRAV1-2. (a) IFN-γ production by T-cell clones D160-1-23 (restricted by HLA-E) and D426-B1 (restricted by MR1) in response to mycobacteria-infected WT and MR1−/− A549 cell lines. (b) Positively selected CD8+ T cells from PBMC were tested for ex vivo IFN-γ responses to M. smegmatis-infected WT or MR1−/− A549 cells. Events are gated on live CD3+CD4− cells. IFN-γ and TRAV1-2 expression are shown on the x and y axes, respectively. To the right is a summary of the TRAV1-2-negative response from each donor across experiments. (c) Frequency of IFN-γ+CD4− cells from each donor (represented by one dot) when stimulated by M. smegmatis-infected WT or MR1−/− A549s; n = 5 biological replicates, with n = 3 technical replicates. Statistical significance of the difference between groups was determined using the nonparametric Mann-Whitney U-test. Error bars are the s.e.m. of triplicates in a,b. Experiments in this figure were performed at least twice with similar results; representative results are shown. *P ≤ 0.05 was considered significant.
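The percent-MR1-restricted figures quoted above follow from comparing paired responses against WT and MR1−/− stimulators. A minimal sketch of that arithmetic, with purely illustrative input values, is below; the exact formula used by the authors is not spelled out in the text, so this is an assumption.

```python
def percent_mr1_restricted(resp_wt, resp_ko):
    """Assumed computation: the fraction of the ex vivo IFN-γ response
    lost when MR1 is knocked out, i.e. (WT - MR1-/-) / WT, as a percentage."""
    return 100.0 * (resp_wt - resp_ko) / resp_wt

# Hypothetical values (% IFN-γ+ of CD4- cells) for one illustrative donor:
print(percent_mr1_restricted(resp_wt=2.0, resp_ko=0.3))  # -> 85.0
```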
In order to confirm the presence of TRAV1-2-negative M. smegmatis-reactive T cells, we repeated this assay using PBMC that were fluorescence-activated cell sorting (FACS)-selected on CD8 but depleted of TRAV1-2+ cells. We then stimulated the T cells with WT (MR1-positive) infected A549 cells. In all donors we observed a detectable IFN-γ-producing population. Therefore, we conclude that not all microbe-reactive MR1-restricted T cells are TRAV1-2+.

Cloning TRAV1-2-negative MR1-restricted T cells. To further characterize TRAV1-2-negative MR1-restricted CD8+ T cells in each of the five donors, we used M. smegmatis-infected antigen-presenting cells (APCs) to generate CD8+ TRAV1-2-negative T-cell lines that were both reactive to M. smegmatis and whose activation was blocked by the addition of MR1 antibody. Subsequent cloning of the T-cell line from one donor, D462, using α-CD3 stimulation established the T-cell clone D462-E4. As shown in Fig. 3a, D462-E4 was characterized by uniform expression of CD8α and the absence of TRAV1-2. In comparison with TRAV1-2+ MAIT cell clones, D462-E4 expressed equivalent levels of TCR and co-stimulatory receptors (Fig. 3b). This T-cell clone also expressed CD26 (Table 1). The expression of the TCRβ chain was confirmed by antibody staining, which showed staining of D462-E4 but not of the MAIT T-cell clone D481-A9, which expresses TRBV20-1 1. We confirmed that the clone was M. smegmatis-reactive using infected epithelial cells, and MR1-restricted using antibody blockade (Fig. 3d). The D462-E4 clone retained the ability to recognize M. smegmatis-infected cells in a manner that was blocked by α-MR1, but not by the pan-HLA-I antibody W6/32.

Differential antigen recognition by MR1-restricted T cells. Because we had isolated D462-E4 from an M. smegmatis-reactive T-cell line, we wanted to compare the M. smegmatis reactivity of TRAV1-2+ and TRAV12-2+ T-cell clones over a range of MOI. In Fig. 4a, we tested clones D426-G11 and D462-E4 in an IFN-γ ELISPOT for their reactivity to infected dendritic cells (DCs). Although both clones recognized the infection, our data show a higher potency of antigen dose for the TRAV1-2+ clone (D426-G11), while both clones displayed similar maximal efficacy (cytokine release). This result suggested that the TRAV12-2 TCR either had lower TCR avidity or could recognize fewer MR1 ligands from the infected cell. Given prior evidence of MR1 ligand discrimination 1,4,20,23,28, we tested whether the TRAV12-2 TCR had different ligand selectivity in comparison with the TRAV1-2 MAIT TCR. To define the MR1 ligands recognized by the TRAV12-2+ TCR, we first tested D462-E4 for its ability to recognize the A549 cell line loaded with the MAIT RL antigens RL-6,7-diMe and RL-6-Me-7-OH (ref. 5) in the presence or absence of MR1 blockade. As shown in Fig. 4b, D462-E4 detected both antigens in an MR1-dependent manner. However, D462-E4 was preferentially stimulated by RL-6-Me-7-OH. In contrast, two previously characterized MR1-restricted clones, D481-F12 and D426-G11 (refs 1,23), were preferentially stimulated by the RL-6,7-diMe antigen.

Figure legend (in part): Numbers on the overlay indicate the geometric mean fluorescence intensity of at least 30,000 events. (f) DCs were infected with S. pyogenes at MOI = 3, blocked with anti-MR1 or isotype control (10 µg ml−1) and then co-incubated with the D462-E4 T-cell clone; IFN-γ production was quantified by ELISPOT. (g) DCs were blocked with 6-formylpterin (50 µg ml−1), with 0.01 M NaOH vehicle control, or left untreated, and then loaded with S. pyogenes or M. smegmatis supernatant (15 µl) or PHA at 10 µg ml−1. The DCs were then used to stimulate T-cell clone D462-E4, and IFN-γ production was quantified by ELISPOT. Error bars represent the s.e.m. of at least duplicates. Assays were performed three times, with similar results; representative results are shown.
To better understand whether this differential response was due to TCR avidity, T-cell activation was tested over a range of antigen concentrations. We found that the differential responses of D462-E4 compared with the other MR1-restricted T-cell clones were maintained over a wide range of concentrations (Fig. 4c). In these experiments, we observed similar antigen potency (the antigen concentration at half-maximal response) yet different maximal efficacy (cytokine release) in response to the two RL antigens between the three MR1-restricted T-cell clones. These responses may be indicative of different levels of antigen cross-reactivity between these TCRs. Therefore, we concluded that D462-E4 displayed ligand discrimination between MAIT RL antigens.

Differential recognition of microbes by MR1-restricted T cells. To establish the repertoire of microbes recognized by D462-E4, a diverse array of microbes was tested. DCs and epithelial cells were infected with microbes at optimized MOI and used to compare the ability of T-cell clones D462-E4 (TRAV12-2) and D426-G11 (TRAV1-2) to recognize these targets, revealing four patterns of response from the MAIT T-cell clones (Fig. 4d). Neither clone responded to the riboflavin auxotroph Enterococcus faecalis. Thus, the TRAV1-2+ MAIT clone, D426-G11, responded only to microbes with the capacity to produce riboflavin. While the D462-E4 clone was also able to recognize many of these microbes (Fig. 4d), it was distinguishable in its ability to respond to infection with S. pyogenes. S. pyogenes is a bacterium that does not express the riboflavin synthesis Rib enzymes ribA, ribB, ribD, ribH and ribE (E. coli operon nomenclature) [33][34][35]. Initially, we confirmed that S. pyogenes was auxotrophic for this vitamin (Fig. 5a). To demonstrate the selectivity of the TRAV12-2 T-cell clone for S. pyogenes, the clones were tested over a broad range of MOIs. The TRAV12-2+ T-cell clone responded over a range of S. pyogenes MOI in DCs (Fig. 5b), while the TRAV1-2+ clone did not respond to this infection. To exclude a nonspecific effect on the TRAV12-2 clone, cells were incubated with S. pyogenes or filtered culture supernatant without an APC; no activation was observed (Fig. 5c). In order to establish whether the T-cell clone was activated by soluble factors from the APC, we used two approaches. First, we tested whether conditioned media from DCs pulsed with S. pyogenes culture supernatant could activate D462-E4; again, no activation was observed (Fig. 5c). Second, we tested whether pulsed, fixed DCs would maintain the same pattern of eliciting T-cell clone activation as unfixed pulsed DCs (Fig. 5c). In this experiment, DCs that had been pulsed with S. pyogenes supernatant overnight and then fixed were still able to activate T-cell clone D462-E4, suggesting that the response depends upon antigen presentation and not soluble factors. We next sought to determine whether the response to S. pyogenes was dependent upon the TCR. As shown in Fig. 5d, the response to S. pyogenes could be blocked using the non-activating pan-αβ-TCR antibody but not an isotype control.
Furthermore, increased phosphorylation of CD3ζ, the primary intracellular signal-transducing subunit of the TCR-CD3 complex 36, was observed in D462-E4 following incubation with DCs infected with M. smegmatis or S. pyogenes (Fig. 5e). ZAP-70 is a tyrosine kinase that, upon TCR stimulation, is recruited to the TCR-CD3 complex by phosphorylation of the ITAMs of CD3ζ (ref. 37). After TCR engagement, the tyrosine residues Y315 and Y319 of ZAP-70 are phosphorylated. To provide further evidence of TCR stimulation upon recognition of bacterially infected DCs, we also compared the level of ZAP-70 pY319 between treatments. Here we also observed increased phosphorylation of this key residue of ZAP-70 in D462-E4 following incubation with DCs (Fig. 5e). To determine whether the recognition of S. pyogenes was dependent on MR1, two approaches were employed. First, we tested whether the clone's response to S. pyogenes could be blocked with antibody to MR1 (Fig. 5f); recognition was efficiently inhibited by the addition of anti-MR1 but not isotype control. Second, we employed the MR1 antagonist 6FP (Fig. 5g). The response to S. pyogenes, but not to the mitogen phytohaemagglutinin (PHA), was also blocked by the addition of 6FP relative to the vehicle control. Thus, clone D462-E4 detects both RL MAIT antigens and an unidentified streptococcal-derived antigen in an MR1- and TCR-dependent manner. This unique pattern of infection detection and ligand recognition by TRAV12-2+ D462-E4 compared with TRAV1-2+ MAIT cells indicates a greater diversity in microbial MR1 ligands.

Tetramer staining of TRAV1-2− MR1-restricted T cells. To establish the prevalence of TRAV1-2-negative MR1-restricted T cells across healthy individuals, we first stained PBMC with the MR1-Ag (5-OP-RU) tetramer, followed by sequential staining with antibodies to MAIT-associated surface markers (Supplementary Fig. 2). As human MAIT cells have been defined as either CD8+ or CD8−CD4− T cells 1,24, we quantified tetramer staining within the CD4-negative population. Frequencies of MR1-Ag tetramer+ cells ranged from 0.98 to 4.30% (mean 2.30%, n = 5) of the CD4-negative lymphocytes (Fig. 6, top row). In line with previously published data 10, the majority of MR1-Ag tetramer+ cells expressed the TRAV1-2 TCR (Fig. 6, second row). However, on average, 2.57% of MR1-Ag tetramer+ cells did not express TRAV1-2 (red events in Fig. 6). Furthermore, these were present in all donors and ranged in frequency from 1.40 to 4.22% of tetramer+ cells. TRAV1-2+ MAIT cells have been phenotypically characterized as cells with high expression of CD161 (refs 26,38) and CD26 (refs 7,30). In line with this, the majority of the TRAV1-2+ MR1-Ag tetramer+ cells co-expressed CD26 and CD161 (mean 96.2%, s.d. 2.5%; Fig. 6, bottom row). In contrast, a smaller proportion of tetramer+ TRAV1-2-negative cells expressed CD161 and CD26, although there was considerable heterogeneity between donors (mean 60.1%, s.d. 21.6%, range 37.8-86.2%). To exclude the possibility that tetramer binding masked the staining of TRAV1-2, we depleted the TRAV1-2+ cells using FACS in three of the above donors and then stained the remaining TRAV1-2-negative cells with the MR1 tetramer. In this case, the estimated TRAV1-2-negative tetramer frequencies were nearly identical to those seen before TRAV1-2 depletion (Fig. 6, bottom). This experiment verified the frequencies of TRAV1-2-negative MR1-restricted T cells in each donor.
In sum, these data confirm the presence of T cells in healthy human blood that are capable of interacting with MR1-Ag but do not express the TRAV1-2 TCRα. Finally, in order to test the generalizability of the antigenic reactivity observed with clone D462-E4, we generated CD8+ TRAV1-2-negative MR1 tetramer+ T-cell lines from the four additional donors used in Fig. 6. Importantly, these T-cell lines were enriched to 84-97% TRAV1-2-negative cells among MR1 tetramer+ cells. We observed equivalent MR1-restricted IFN-γ reactivity to M. smegmatis and S. pyogenes infection by each T-cell line. Taken together, the confirmation of our finding in four additional PBMC donors clearly supports generalized reactivity of TRAV1-2-negative MR1-restricted T cells across individuals.

Discussion

While MAIT cells have been defined through their usage of the TRAV1-2 TCR, in this report we demonstrate unambiguously the presence of MR1-restricted T cells that are TRAV1-2-negative, demonstrate the specific usage of the TRAV12-2 TCR by a clone, and find that these cells are capable of recognizing both the previously demonstrated RL riboflavin intermediates and unique ligands derived from S. pyogenes, a bacterium incapable of riboflavin biosynthesis. As a result, our study demonstrates considerable promiscuity in MR1-restricted T cells, both in the ability of their TCRs to recognize antigens and in the ability of MR1 to present these ligands. Here we find that TRAV12-2 MR1-restricted T cells can be stained with the MR1-Ag tetramer and have the ability to recognize both known RL antigens and an antigen derived from S. pyogenes. The observation that the TRAV1-2 MR1-restricted T cell cannot recognize this bacterium provides definitive evidence that the antigen being recognized is distinct. These data, then, suggest a model in which the MAIT cell TCR confers selectivity, but not stringent specificity. Similarly, our data support the hypothesis that MR1 is capable of presenting an array of ligands. The observation that MR1-restricted T cells of varying TCR usage and antigenic selectivity can be broadly defined by staining with the MR1-Ag (5-OP-RU) tetramer is reminiscent of CD1d-restricted T cells defined by staining with the αGalCer tetramer [39][40][41]. In those initial studies, human NKT cells were enumerated using CD1d-αGalCer tetramers and found to stain with anti-Vα24 TCR antibody. More recently, populations of NKT cells expressing alternative semi-invariant TCRs that bind to αGalCer-loaded CD1d tetramers have been identified 42,43. Elucidation of the crystal structure of one of these alternative TCRs bound to αGalCer-loaded tetramer showed a binding mode similar to that of the Vα24Jα18 TCR 44. Taken together, this shared phenomenon between MR1 and CD1d antigen presentation must be what allows selective activation within the confines of microbial pattern recognition by unconventional T cells.

The use of the TRAV12-2 TCR by MR1-restricted T cells necessarily challenges the existing paradigm of how the MAIT cell TCR interacts with MR1. Prior studies have defined the structural and functional requirements of the semi-invariant TRAV1-2 TCR for MAIT cell activation in the context of MR1 and bound ligand 4,[18][19][20][21]. These studies define a clear role for the CDR3α and possibly the CDR3β loop in ligand recognition. Specifically, these studies suggested a conservation of the key patterns of TCR residues in the TCR α-chain, including a conserved tyrosine residue 45.
The critical tyrosine at position 95 in the CDR3 of the TCRα chain allows for the formation of a hydrogen bond with MAIT-activating but not non-activating ligands. For example, MR1 binding of the RL-6-Me-7-OH ligand (which is recognized by D462-E4) allows for a single TRAV1-2+ TCR contact through Tyr95 of the CDR3α loop 20. While this residue is highly conserved between MAIT TCRs 25,46, sequence analysis of TRAV12-2+ D462-E4 demonstrates that this clone lacks a tyrosine residue in its CDR3α region. We note that we have previously reported that a proportion of microbial-reactive, TRAV1-2-expressing MR1-restricted T cells do not contain Tyrα95 (ref. 23). A recent study by Gherardin et al. 28 observed that a TRAV1-2-negative MR1-restricted TCR (TRAV36/TRAJ34) could instead use an asparagine residue of its CDR1α to contact the 5-OP-RU activating ligand. This elegant study highlights that alternative molecular interactions can mediate atypical TCR recognition of MR1-Ag. At present, we cannot comment on the critical residues that mediate the D462-E4 TCR interaction with MR1 ligand.

Our data clearly support the hypothesis that MR1 can present a diverse array of ligands to MR1-restricted T cells. First, by comparing microbial recognition by the TRAV1-2 and TRAV12-2 TCRs, we have defined patterns of recognition that imply the presence of more than one activating ligand. For instance, the ability of TRAV1-2-expressing T cells to uniquely recognize a microbe would suggest the presence of a ligand not recognized by the TRAV12-2 TCR. Similarly, those microbes that are preferentially recognized by the TRAV1-2 T cells likely contain either a single ligand that is recognized preferentially (analogous to our findings of ligand discrimination between RLs in Fig. 4b) or multiple ligands. Most striking, however, was the ability of the TRAV12-2+ MR1-restricted T cell to recognize S. pyogenes, an organism that cannot synthesize riboflavin. Because this pathogen is not recognized by the TRAV1-2+ T cells, these data unambiguously refute the hypothesis that differential MAIT cell recognition can be explained simply by differing proportions of riboflavin metabolites. At present, the MR1 ligand from S. pyogenes remains to be determined. Recent molecular analyses suggest that MR1 can accommodate a range of different ligands because of plasticity in the ligand orientation of the binding cavity 4,[18][19][20][21][22]. As the pterin ring occurs commonly in the environment, it is feasible that other microbial or host molecules with common chemotypic properties could bind to MR1 and function as antigens for MR1-restricted T cells. We hypothesize that diversity in MR1 ligands allows MR1-restricted T cells to recognize a wide range of microorganisms and their associated metabolomes.

In contrast to conventional T-cell populations, MAIT cells can be found in the thymus with effector pathogen-reactive capability 13, and their selection depends on haematopoietic rather than epithelial cells 12,38,47. It is unknown whether innate T-cell function is a T-cell-intrinsic programme or a result of TCR signalling through selection in the thymus by MR1. Both functional data and tetramer staining demonstrate the presence of MR1-restricted T cells in all donors. Because we do not have a TRAV12-2 antibody, the full TCR repertoire of these cells remains to be determined, as does whether they share the innate T-cell attributes and selection pathway of TRAV1-2+ MR1-restricted T cells.
We also note the preponderance of MR1-restricted T cells expressing the TRAV1-2 TCR, in line with prior observations 10. This phenomenon could arise if TRAV1-2+ T-cell selection in the thymus is favoured over other TCRs. Alternatively, given the evidence of ligand discrimination by MAIT TCRs, TRAV1-2+ MAIT cells could dominate in the periphery because of selective microbial exposures. In line with this hypothesis, repeated exposures to environmental mycobacteria or Gram-negative gut microbiota may allow for preferential expansion of TRAV1-2+ MR1-restricted T cells in the majority of individuals. On the basis of our findings, we propose that non-TRAV1-2 MR1-restricted TCRs contribute to immune defence against infection primarily by providing more diverse and, in some instances, unique microbial recognition. For instance, the TRAV12-2+ MR1-restricted T-cell clone can recognize infection with S. pyogenes. A variety of diseases are caused by infection with S. pyogenes, or Group A streptococcus 48, including throat infection ('strep throat'), pneumonia, fasciitis, nosocomial wound infection and glomerulonephritis. We hypothesize that MR1-restricted T cells expressing TRAV12-2+ or other atypical TCRs selectively expand at tissue sites associated with streptococcal infection, such as the human mouth, tonsils and skin. Our prior finding of microbe-selective clonal MR1-restricted T-cell expansions within individuals 23, in conjunction with the data presented herein, demonstrates the capacity of MR1-restricted T cells to discriminate between microbial infections and supports the hypothesis that MAIT cells display antigen-driven clonal expansion.

In sum, we show that MR1-restricted T cells have the capacity to detect a greater diversity of microbes than previously shown. We have isolated a human T-cell clone that expresses a TCR never before observed within MAIT cells, TRAV12-2, demonstrating that MR1-restricted T cells do not use TRAV1-2 exclusively. This TRAV12-2 TCR displays MR1-Ag discrimination both with regard to the recognition of known RL metabolites and, most notably, in its capacity to uniquely detect S. pyogenes, a pathogen that lacks the capacity to synthesize riboflavin. Collectively, these data provide evidence that additional MAIT cell subsets may play a unique role in human defence against infection by broadening the recognition of microbes and their associated metabolites.

Methods

Human participants. All samples were collected and all experiments were conducted under protocols approved by the institutional review board at Oregon Health and Science University. PBMCs were obtained by apheresis from healthy adult donors with informed consent.

Cell lines and reagents. All cell lines used in this study have been confirmed to be mycoplasma-free. The A549 lung carcinoma cell line (ATCC CCL-185) was used as APCs for IFN-γ ELISPOT assays in Figs 1 and 2, for direct ex vivo intracellular cytokine staining determination of microbe-reactive T cells, and in Fig. 4c for infection with Neisseria gonorrhoeae and Y. enterocolitica. The BEAS2B bronchial epithelial cell line (ATCC CRL-9609) was used in Fig. 3 for antibody blockade ELISPOT assays. Cell lines were maintained by continuous passage in F12K culture medium supplemented with 10% fetal bovine serum. RL-6,7-diMe and RL-6-Me-7-OH were purchased from WuXi AppTec and 6FP from Schircks Laboratories. Live/Dead Aqua stain and carboxyfluorescein succinimidyl ester (CFSE) were purchased from Life Technologies.
Unconjugated antibodies used in the study were the following: anti (α)-CD3 (clone OKT3), αβTCR (clone T10B9, BD), α-MR1 (26.5, gift from Ted Hansen), α-HLA-ABC (W6/32, AbD Serotec), α-CD1a/CD1b/CD1c/CD1d (gift from Branch Moody), α-IFN-γ for ELISPOT (Mabtech, see ELISPOT methods section below) and LEAF-purified IgG2a, IgG1 and IgM isotype controls (Biolegend). Conjugated antibodies used in this study included α-CD3. For generation of monocyte-derived DCs, PBMCs were allowed to adhere for 1 h. After gentle washing twice with PBS, nonadherent cells were removed and 10% HS in RPMI containing 30 ng ml−1 of IL-4 (Immunex) and 30 ng ml−1 of granulocyte-macrophage colony-stimulating factor (Immunex) was added to the adherent cells. The cells were X-rayed with 3,000 cGray using an X-RAD320 (Precision X-Ray Inc.) to prevent cell division. After 5 days, cells were harvested with cell-dissociation medium (Sigma-Aldrich, Gillingham, UK) and used as APCs in assays.

Generation of an MR1-knockout cell line using CRISPR/Cas9. The reagents used to mutate the MR1 gene were derived from the toolkit described in ref. 49. A codon-optimized synthetic Cas9 cDNA under the control of the cytomegalovirus promoter (Addgene plasmid #41815) was used in combination with a single guide RNA comprising a transactivating CRISPR RNA sequence 49 as well as the 19-nucleotide protospacer sequence (5′-GATGGGATCCGAAACGCCC-3′) targeting the + strand of exon 3 of the MR1 gene followed by the protospacer-adjacent motif AGG. Plasmid DNA serving as template for the transcription of the CRISPR/Cas9 elements was transfected into the carcinomic human alveolar basal epithelial cell line A549 using Lipofectamine 2000 (Invitrogen, Life Technologies, Paisley, UK) according to the manufacturer's instructions. Genomic DNA from A549 cells was isolated with the GenElute mammalian genomic DNA miniprep kit (Sigma-Aldrich). Mutations at the target site were detected using the CEL-I enzyme as part of the SURVEYOR assay (Transgenomic Ltd, Glasgow, UK), which cleaves DNA duplexes bearing base-pair mismatches, caused by insertions or deletions in proximity to the protospacer-adjacent motif sequence, within the PCR amplicons generated with primers flanking the genomic target site. The PCR forward primer (5′-GCATGTGTTTGTGTGCCTGT-3′) is located in the intron region upstream of the target site and the reverse primer (5′-GGTGCAATTCAGCATCCGC-3′) downstream on exon 3. MR1 protein expression at the cell surface was measured using flow cytometry with the anti-MR1 antibody clone 26.5 (a kind gift from Professor Ted Hansen) following overnight stabilization by incubating cells with 50 µg ml−1 acetyl-6-formylpterin (Schircks Laboratories, Jona, Switzerland). MR1-negative cells were sorted using flow cytometry and single-cell clones were derived from the sorted bulk population by limiting dilution (average of 0.3 cells per well). Clonal populations were then screened for MR1 surface expression and DNA indels with the SURVEYOR assay. The region flanking the target site was PCR-amplified from the genomic DNA of selected clones using the primers described above fused to restriction sites, and the PCR products were cloned into recipient plasmids, which were transfected into chemically competent Top10 E. coli bacteria. Ten colonies that tested positive for the insert by colony PCR were used to produce plasmid minipreps, which were sent for Sanger sequencing in order to determine the nature of the CRISPR/Cas9-induced mutations.
Clone 9 (referred to in the manuscript as MR1−/−) was shown to bear a 125-bp deletion on one allele and a single-bp deletion on the other.

Expansion of T-cell clones. T-cell clones were cultured in the presence of X-rayed (3,000 cGray using an X-RAD320, Precision X-Ray Inc.) allogeneic PBMCs, X-rayed allogeneic LCL (6,000 cGray) and anti-CD3 monoclonal antibody (20 ng ml−1; Orthoclone OKT3, eBioscience) in RPMI 1640 media with 10% HS in an upright T-25 flask in a total volume of 30 ml. The cultures were supplemented with IL-2 on days 1, 4, 7 and 10 of culture. The cell cultures were washed on day 5 to remove soluble anti-CD3 monoclonal antibody.

Phosphorylation-specific T-cell staining for flow cytometry. T-cell clones were incubated overnight in RPMI media containing 0.5 ng ml−1 IL-2 and 2% HS. Monocyte-derived DCs were incubated for one hour with S. pyogenes or M. smegmatis at MOI = 10, or with nothing, in ultra-low-adherence culture plates. DCs and T cells at a three-to-one ratio were co-incubated for 15 min at 37 °C and then immediately fixed in 2% paraformaldehyde. Following fixation, the cells were permeabilized in ice-cold 100% methanol for 30 min. The cells were then washed in FACS buffer to sufficiently remove the methanol and stained with the following antibodies: anti-CD8 (clone RPA-T8, 1:50 dilution, Biolegend), anti-CD3ζ-pY142 (1:10 dilution, BD), anti-ZAP-70 pY319/Syk pY352 (1:10 dilution, BD) or isotype controls (IgG2a for anti-CD3ζ, IgG1 for anti-ZAP-70, used at a concentration matching their corresponding phospho-specific antibody) for 45 min at 4 °C. Isotype controls were used to optimize staining with phospho-specific antibodies. We used the mitogen PHA as a positive control for TCR stimulation and maximum phosphorylation signal. A minimum of 30,000 CD8+ events were collected for geometric mean fluorescence intensity analysis across samples.

Analysis of TCR usage. Amplification and sequencing of TCRB and TCRAD CDR3 regions were performed using the immunoSEQ Platform (Adaptive Biotechnologies, Seattle, WA). ImmunoSEQ combines multiplex PCR with high-throughput sequencing and a bioinformatics pipeline for (TCRB/TCRAD) CDR3 region analysis 50,51. The IMGT nomenclature was used throughout the study 52.

Flow cytometry staining and cell sorting. Cells to be analysed for cell surface marker expression were first incubated at 4 °C in a blocking solution of PBS containing 2% normal rabbit serum (Sigma-Aldrich), 2% normal goat serum (Sigma-Aldrich) and 2% HS to prevent nonspecific binding. Cells were washed in PBS and then incubated with live-dead discriminator and surface stains or isotype controls for 20 min in the dark at 4 °C in a total volume of 50 µl. Cells were then washed and fixed, or fixed and permeabilized (for ex vivo ICS tests, BD Fix/Perm kit) according to the manufacturer's instructions. Antibodies for intracellular staining were then added for 30 min in the dark at 4 °C in a total volume of 50 µl, and after washing, flow cytometry was performed. Specifically for Fig. 3a, 2 × 10⁶ PBMCs from each donor were stained with the MR1-Ag tetramer at 0.3 nM in a 25 µl volume for 45 min in PBS buffer containing 2% fetal bovine serum at room temperature in the dark. Viability and surface stains were added on top of the tetramer stain for another 20 min at 4 °C. Samples were then washed twice in tetramer staining buffer. All flow cytometry analyses were performed on a Fortessa 18-parameter flow cytometer (BD).
All FACS analyses were performed using an Influx 11-parameter flow cytometer (BD) at the Oregon Health and Science University flow cytometry core facility. Data were analysed using FlowJo (v9.8.5). Fluorescence-minus-one controls were used for optimal gating. Doublets were excluded based on FSC-H and FSC-A, and SSC-H and SSC-A; lymphocytes were identified based on FSC-A, SSC-A and CD3 expression; and dead cells were excluded based on Aqua viability dye.

Ex vivo stimulation assay. To observe the nonclassical T-cell response, we used the A549 cell line as APCs because it does not express MHC-II and is MHC-I-mismatched to the donor source of T cells. CD8+ T cells were positively selected from healthy donor PBMCs using magnetic bead separation according to the manufacturer's instructions (Miltenyi), added to uninfected or M. smegmatis-infected (MOI = 3, overnight infection) WT or MR1−/− A549 cells at a ratio of 3:1, and incubated overnight in the presence of Brefeldin A and 0.5 ng ml−1 rhIL-2 at 37 °C. The following day, the cells were stained for surface phenotype markers and live-dead discriminator. Following surface staining, cells were fixed and permeabilized using the BD Fix/Perm Kit and stained intracellularly with α-IFN-γ.

Microorganisms and preparation of APCs. M. smegmatis, C. albicans, S. enterica Typhimurium, E. coli, M. tuberculosis, S. pyogenes and M. avium were used from frozen glycerol stocks, whereas all other microbes were harvested from overnight growth on agar plates and titred based on OD600 readings of a colony suspension. A cell-free supernatant was created from an overnight culture of S. pyogenes (ATCC 19615) that was sterile-filtered and frozen before being used in the T-cell stimulation assays of Fig. 5c-f. A549 cells were infected for 2 h, or DCs for 1 h, with microbes at 37 °C. In Fig. 4c, A549 cells were used for Yersinia and Shigella infections; DCs were used for all other infections. The MOI and antibiotics used for each microbe were optimized for APC viability and maximal MR1-restricted response: E. coli 1, M. smegmatis 3, S. flexneri 1, Y. enterocolitica 1, C. albicans 0.1, M. avium 30, N. asteroides 1, S. enterica Typhimurium 30, V. parahaemolyticus 1, M. tuberculosis 30, N. gonorrhoeae 1, P. aeruginosa 1, S. aureus 1, S. pyogenes 30, E. faecalis 1 and 10. All infections were performed in the absence of antibiotics. After the indicated infection time, all cells were washed twice in media containing antibiotics and then incubated overnight in an ultra-low-adherence tissue culture plate before being counted and added to the assay (ELISPOT set-up described below).

Riboflavin dependence growth assay. Overall, 5 × 10⁶ colony-forming units from frozen glycerol stocks of S. pyogenes or E. coli were added to 10 ml of minimal growth media and cultured at 37 °C for 96 h (S. pyogenes) or 12 h (E. coli) in the dark. Minimal growth media was made with M9 salts (BD Difco) supplemented with glucose, MgSO4 and CaCl2, as recommended by the manufacturer, and 0.01% w/v amino acids (casein enzymatic digest, Sigma). Riboflavin (Sigma) was added to the growth medium at the concentrations indicated in Fig. 5. Bacterial growth was measured by absorbance readings at 600 nm.

IFN-γ ELISPOT assay and antibody blocking.
MSHA S4510 96-well nitrocellulose-backed plates (Millipore, bought via Fisher Scientific) were coated overnight at 4°C with a 10 µg ml⁻¹ solution of anti-IFN-γ monoclonal antibody (Mabtech clone 1-D1K) in a buffer of 0.1 M Na₂CO₃, 0.1 M NaHCO₃, pH 9.6. The plate was then washed three times with sterile PBS and blocked for 1 h at room temperature with RPMI 1640 media containing 10% heat-inactivated HS pool. The APCs and T cells were then prepared as described below and co-incubated overnight. Briefly, DCs (Figs 4a,d and 5), the BEAS2B cell line (Fig. 3d) or the A549 cell line (all other experiments) were used as APCs at 1 × 10⁴ per well in ELISPOT assays. For all blocking ELISPOT assays, APCs were limited to 5 × 10³ per well. In Fig. 4b,c, the A549 cell line was incubated with MAIT antigens over a range of concentrations in the dark for 2 h. Where stated, blocking antibodies or antagonists were added for 2 h at 2.5 µg ml⁻¹ (α-HLA-I clone W6/32; α-CD1a, b, c, d (Branch Moody); 6-formyl pterin (50 µg ml⁻¹, Schircks Laboratories); and α-MR1 clone 26.5 (Ted Hansen)) or appropriate isotype controls. To block the T-cell receptor, anti-αβTCR (clone T10B9 (ref. 53), BD Pharmingen) or isotype control was added to T-cell clones at 0.5 µg ml⁻¹ for 30 min at 4°C before co-incubation with APCs; T-cell clones were added at 1 × 10⁴ per well. The plate was incubated overnight at 37°C and then washed six times with PBS containing 0.05% Tween. The plate was then incubated for 2 h at room temperature with a 1 µg ml⁻¹ solution of anti-IFN-γ-biotin secondary antibody (Mabtech clone 7-B6-1) in 0.5% BSA, 0.05% Tween in PBS. Finally, the plate was washed six times in PBS-Tween, then PBS, and developed using an AEC Vectastain kit SK-4200 (Vector Labs). We defined a positive response as greater than 25 IFN-γ spot-forming units. Preparation of pulsed fixed APC. Monocyte-derived DCs were pulsed for 4 h with bacterial culture supernatant from S. pyogenes or left untreated, and then washed and rested overnight in an ultra-low-adherence tissue culture plate. The following day, the pulsed-DC-conditioned media (Fig. 5c) was collected and added to the D462-E4 clone in the ELISPOT plate. Half of each harvested DC sample was used in the ELISPOT as 'unfixed'. The other half was fixed by incubating in 0.5% paraformaldehyde (Electron Microscopy Sciences) in PBS for 15 min at room temperature; an equal volume of 0.4 M lysine was then added for 5 min to stop the reaction. The samples were then extensively washed with media, incubated for 1 h at 37°C and washed again before being used in the ELISPOT assay. Data Analysis. Flow cytometry data were analysed using FlowJo software 9.8.1 (Tree Star). All statistical analyses were performed in Prism using nonparametric Mann-Whitney U-tests (Fig. 2c). Nonparametric statistical tests were used because of the small group sizes (five donors in this study). In all descriptive statistical analyses, the variance was first confirmed to be similar between groups; s.d. or s.e.m., as appropriate, are displayed on each graph. P values ≤ 0.05 were considered significant (*P ≤ 0.05). Data Availability. The authors declare that the data supporting the findings of this study are available within the article and its Supplementary Information Files, or are available from the corresponding author upon request.
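The nonparametric comparison described in the Data Analysis section can be illustrated with standard tools; the following is a minimal sketch using hypothetical spot-forming-unit counts, not the study's data:

```python
# Minimal sketch of the nonparametric group comparison described above,
# using hypothetical spot-forming-unit counts (not the study's data).
from scipy.stats import mannwhitneyu

group_wt = [45, 60, 38, 52, 71]   # e.g., responses against WT APCs
group_ko = [12, 8, 20, 15, 9]     # e.g., responses against MR1-deficient APCs

stat, p_value = mannwhitneyu(group_wt, group_ko, alternative="two-sided")
print(f"U = {stat}, P = {p_value:.3f}")  # significance threshold: P <= 0.05
```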
9,760.8
2016-08-16T00:00:00.000
[ "Biology", "Medicine" ]
Coupled CFD-DEM modeling to predict how EPS affects bacterial biofilm deformation, recovery and detachment under flow conditions Abstract The deformation and detachment of bacterial biofilm are related to the structural and mechanical properties of the biofilm itself. Extracellular polymeric substances (EPS) play an important role in maintaining the mechanical stability of biofilms. An understanding of biofilm mechanics and detachment can help to reveal biofilm survival mechanisms under fluid shear and provide insight into the flows needed to remove biofilm, for example in a cleaning cycle or from a ship hull. However, how EPS may affect biofilm mechanics and deformation under flow conditions remains elusive. To address this, a coupled computational fluid dynamics-discrete element method (CFD-DEM) model was developed. The mechanisms of biofilm detachment, such as erosion and sloughing, have been revealed by imposing hydrodynamic fluid flow at different velocities and loading rates. The model, which also allows adjustment of the proportion of different functional groups of microorganisms in the biofilm, enables the study of the contribution of EPS toward biofilm resistance to fluid shear stress. Furthermore, the stress-strain curves during biofilm deformation have been captured by loading and unloading fluid shear stress to study the viscoelastic properties of the biofilm. Our predicted emergent viscoelastic properties of biofilms were consistent with relevant experimental measurements. | INTRODUCTION Bacterial biofilms are initiated by reversible attachment of planktonic bacteria to a surface. Bacteria then attach irreversibly to the surface and develop cell-cell cohesion. Matured biofilms are embedded in extracellular polymeric substances (EPS), which are produced by the bacteria themselves (Flemming & Wingender, 2010). The formation of biofilm helps bacteria to survive in harsh environments such as fluid flows (Banerjee et al., 2020). It was found that bacteria in biofilms are much more resistant to antibiotics than in the planktonic state (Davies, 2003). Biofilms have dramatic impacts on a wide range of industries. For example, biofilms play an important role in bioremediation since they are able to convert toxic pollutants to harmless products (Singh et al., 2006; Yadav & Sanyal, 2019). Biofilms are also essential in wastewater treatment (Capdeville & Rols, 1992; Sehar & Naz, 2016). However, the accumulation of biofilms in industrial pipelines and drinking water systems may lead to biocorrosion (Abe et al., 2012; Klapper et al., 2002). Additionally, biofilm adhesion to marine surfaces is an important trigger of accelerated biofouling (Antunes et al., 2019). Biofilms attached to a ship hull increase the frictional drag, resulting in higher fuel consumption (de Carvalho, 2018). The emergence of biofilms allows pathogenic bacteria to survive in diverse environments (Tasneem et al., 2018). Moreover, pathogen transmission is of concern to public health and can cause infection when cells detach from the biofilm (Brindle et al., 2011). Therefore, a greater understanding of biofilm detachment in different hydrodynamic conditions may help to control biofilm-related infection. In biofilms, the EPS is a self-produced matrix which consists mainly of polysaccharides, extracellular DNA (eDNA) and protein (Erskine et al., 2018; Gloag et al., 2013; Yadav & Sanyal, 2019).
It provides many functions, such as adhesion to surfaces and cohesion to maintain the mechanical stability of the biofilm system (Flemming et al., 2007). The production of EPS is essential during biofilm development since bacterial cells are immobilized by EPS (Flemming & Wingender, 2010). EPS production can be responsively regulated; for example, it was found that EPS production could be affected by EPS biosynthetic genes (Ali et al., 2000; Song et al., 2018). In addition, mutant strains can overproduce EPS, helping the biofilm maintain its position in a beneficial environment (Hibbing et al., 2010). All of these factors can affect the EPS amount in biofilms. Biofilms may also increase the strength of the matrix by increasing EPS production when subjected to mechanical stresses at intermediate time scales (e.g., 1 h) (Shaw et al., 2004). Different biofilms can have different EPS and different mechanical properties (Houari et al., 2008; Klapper et al., 2002; Rupp et al., 2005; Stoodley et al., 1999; Vinogradov et al., 2004; Wloka et al., 2004). However, it is difficult to quantify EPS by microscopy or chemical analysis due to the complexity of its chemistry, as well as bias in extraction and purification techniques. Although EPS is complex, a computational model can be simplified to represent its overall physical function rather than identify the individual polymer components, hence gaining a better understanding of the contribution of EPS production to biofilm mechanical properties. In this study, a three-dimensional individual-based model (IbM) of biofilm was developed by coupling the computational fluid dynamics approach (CFD) with the discrete element method (DEM). This model was implemented on NUFEB (https://github.com/nufeb), which is an open-source tool for individual-based modeling of microbial communities (Li et al., 2019). NUFEB integrates the CFD-DEM solver SediFoam (https://github.com/xiaoh/sediFoam), which provides a flexible interface between the large-scale atomic/molecular massively parallel simulator (LAMMPS) (Plimpton, 1995) and open-source field operation and manipulation (OpenFOAM) (Greenshields, 2017). The framework enabled us to describe fluid-induced biofilm deformation and detachment subjected to different flow velocities. In this study, we modeled a bacterial mutant that can produce the same type of EPS at different levels. Different EPS amounts were obtained by varying the relevant kinetic parameters in the model. We predicted the effect of EPS amount on the mechanical properties of biofilms and on biofilm detachment. | METHODOLOGY The processes of biofilm growth and biofilm deformation were decoupled in this study; that is, fluid flow was applied to a pregrown biofilm. The pregrown biofilm was "grown" under static conditions for 5.3 days using the NUFEB individual-based model described in Jayathilake et al. (2017). The kinetic parameters for biofilm growth are provided in the Supporting Information (Table S1). Only bacterial growth, division, and EPS production were considered in this study. Then the two-way coupling between the solid biofilm and the computational fluid dynamics was adopted to investigate the deformation and detachment of biofilm under different hydrodynamic conditions. The simulation domain is displayed in Figure 1, with the pregrown biofilm positioned on the inlet side of the channel.
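The particle side of the CFD-DEM coupling described above can be sketched in a heavily simplified, runnable form. The linear drag law, the constant fluid field (one-way coupling only), and all parameter values below are illustrative assumptions, not the actual NUFEB/SediFoam implementation:

```python
# Heavily simplified sketch of the particle update in a CFD-DEM scheme.
# One-way coupling with a constant fluid field and a linear drag law;
# all values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.random((50, 3)) * 1e-5        # particle positions (m)
vel = np.zeros((50, 3))                 # particle velocities (m/s)
mass = 1.0e-15                          # particle mass (kg), assumed
drag_coeff = 1.0e-9                     # assumed linear drag coefficient

u_fluid = np.array([0.3, 0.0, 0.0])     # inlet-like fluid velocity (m/s)
dt = 1.0e-6                             # DEM time step (s)

for _ in range(1000):
    # Fluid-particle interaction: drag toward the local fluid velocity.
    f_drag = drag_coeff * (u_fluid - vel)
    # (Contact and cohesive forces would be added here in the full model.)
    vel += (f_drag / mass) * dt         # Newton's second law, explicit Euler
    pos += vel * dt
```

In the full two-way scheme, the fluid solver would also receive the particle volume fractions and reaction forces each step before the next fluid solve.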
The diameter of the involved particles is in the micrometre range (0.7-1.4 μm), reflecting the stochasticity of the biological system (Jayathilake et al., 2017). | Fluid-induced biofilm deformation and detachment Experimental work has shown that EPS production in biofilms varies with bacterial strains and growth conditions (Costa et al., 2018; Danese et al., 2000). In this study, we achieved different EPS volume ratios (i.e., EPS volume divided by the volume of the biofilm) by changing the EPS growth yield coefficient in the model. The EPS growth yield coefficient was varied from 0.12 to 0.22 (g COD_EPS/g COD_S), which corresponds to an EPS/biofilm ratio of 20%-51% here. To investigate the biofilm deformation and detachment events, the biofilm with 46% EPS was subjected to inlet flow velocities between 0.1 and 0.4 m/s (Reynolds number from 3.75 to 15, maximum wall shear stress from 10.7 to 42.7 Pa, shear rate from 2000 to 8000 s⁻¹) for a duration of 40 ms. In the next simulation, the inlet fluid velocity was kept at 0.3 m/s to study the effect of EPS production on biofilm deformation and detachment. The detachment rate coefficient, which is defined as the ratio of the volume of detached biofilm clusters to the total volume of preformed biofilm, was calculated during the initial 14 ms (before the biofilm was washed away from the surface wall). Cluster detachment from the biofilm was defined as erosion if the particle number of the cluster was less than 1000 and as sloughing if the particle number of the detached cluster exceeded 1000. The EPS amount, the mean and maximum heights, and the roughness and porosity of the different biofilms are summarized in the Supporting Information (Table S2). | Biofilm deformation-recovery test The responses of the biofilm to a rapidly fluctuating shear stress were analysed immediately before biofilm failure. To save computational time, the fluid shear stress was applied to the biofilm for 3 ms (loading cycle) and then stopped immediately. Afterward, the biofilm was allowed to relax for 17 ms (unloading cycle). During the loading period, the fluid shear stress was increased by increasing the fluid velocity from 0 m/s at a constant acceleration. For the biofilm with 46% EPS, deformation-recovery tests were carried out by exposing the biofilm to the ramping flow with different accelerations: 20, 30, and 40 m/s², respectively. Then the biofilms with 40% and 51% EPS were subjected to the increasing fluid velocity at an acceleration of 40 m/s² to investigate the effect of EPS amount on the mechanical response of the biofilm. The shear strain in this simple shear test was defined as the change in angle between the front edge of the biofilm and the left channel wall (Figure S1). The shear modulus was calculated as G = σ_xz/α, where α is the shear strain and σ_xz is the fluid-induced shear stress on the biofilm, which was computed globally by LAMMPS (Thompson et al., 2009). In this section, three planes (y = 5 μm, y = 15 μm, and y = 25 μm) were selected to measure the deformation angle and thus obtain the averaged shear strain (Figure S2). | Motion of bacterial and EPS agents During biofilm deformation and detachment, the motion of bacterial cells and EPS agents is tracked by DEM in a Lagrangian framework: m_i dv⃗_i/dt = f⃗_c,i + f⃗_fp,i, where v⃗_i is the velocity of particle i, m_i is the particle mass, f⃗_c,i is the contact force among collided particles (Xia et al., 2021), and f⃗_fp,i is the fluid-particle interaction force.
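As a minimal sketch of the shear-modulus estimate G = σ_xz/α described above, with hypothetical deformation angles standing in for the values measured on the three planes:

```python
# Minimal sketch of the shear-modulus estimate G = sigma_xz / alpha,
# averaging the deformation angle over the three measurement planes.
# The angle and stress values are hypothetical placeholders.
import math

angles_deg = [8.0, 9.5, 10.5]    # deformation angle at y = 5, 15, 25 um
sigma_xz = 1.5                   # fluid-induced shear stress (Pa)

# Shear strain: the averaged angle change (small-angle simple shear).
alpha = sum(math.radians(a) for a in angles_deg) / len(angles_deg)
G = sigma_xz / alpha             # apparent shear modulus (Pa)
print(f"shear strain = {alpha:.3f}, apparent shear modulus = {G:.1f} Pa")
```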
| Locally averaged Navier-Stokes equations for fluids The fluid flow is solved by the locally averaged incompressible Navier-Stokes equations, in which the fluid density ρ_f is constant. Here ϵ_s is the solid volume fraction and ϵ_f is the fluid volume fraction, which equals (1 − ϵ_s); U⃗_s and U⃗_f are the particle velocity and the fluid velocity, respectively. | Fluid-particle interaction In this model, the fluid-particle interaction force f⃗_fp,i consists of a drag force and a lift force. For particle i, the drag force is modelled following Sun et al. (2018) in terms of V_p,i, the volume of particle i; u⃗_f,i and u⃗_p,i, the fluid and particle velocities, respectively; ϵ_f,i and ϵ_s,i, the fluid and solid volume fractions; and β_i, the drag correlation coefficient, which is used to convert a terminal velocity correlation to a drag correlation (Syamlal et al., 1993). In addition, the lift force on particle i is calculated from the curl of the flow velocity interpolated to the centre of particle i (Sun et al., 2018; Van Rijn, 1984; Zhu et al., 2007). | Cohesive force among particles The cohesive force among the particles was computed as a function of the cohesive strength A and the separation distance s between the particle surfaces (Israelachvili, 2011; Sun et al., 2018). A minimum separation distance s_min was applied when the separation distance between two particles equals zero (s = 0). In this study, five different values of cohesive strength were used for the interactions of bacterial cells with bacterial cells, bacterial cells with EPS agents, bacterial cells with the particle wall, EPS agents with the particle wall, and EPS agents with EPS agents. Since EPS plays a significant role in binding the bacterial cells, the cohesive strength driven by EPS was assumed to be three orders of magnitude larger than that for bacteria (Fang et al., 2000). The mechanical parameters of the simulations are listed in Table 1.
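The pairwise cohesive interaction can be sketched as follows. The inverse-square van der Waals-type form and all parameter values are illustrative assumptions on our part; the exact expression is the one cited from Israelachvili (2011) and Sun et al. (2018):

```python
# Sketch of a cohesive force lookup between two agents. The van der
# Waals-type 1/s^2 form and all numbers are illustrative assumptions.
S_MIN = 1.0e-9  # minimum separation distance (m), applied when s = 0

A_BY_PAIR = {   # cohesive strength A per interaction pair (values assumed);
    ("cell", "cell"): 1.0e-21,   # EPS-mediated contacts are taken to be
    ("cell", "eps"):  1.0e-18,   # three orders of magnitude stronger
    ("cell", "wall"): 1.0e-21,
    ("eps", "eps"):   1.0e-18,
    ("eps", "wall"):  1.0e-18,
}

def cohesive_force(kind_i, kind_j, s):
    """Attractive force magnitude between two particle surfaces at gap s."""
    A = A_BY_PAIR[tuple(sorted((kind_i, kind_j)))]
    s_eff = max(s, S_MIN)              # cap the force at contact
    return A / (12.0 * s_eff ** 2)     # assumed van der Waals-type form

print(cohesive_force("cell", "eps", 5.0e-9))
```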
| EPS effect on biofilm deformation and detachment To study how the EPS amount affected the deformation and detachment of biofilms, we examined a single fluid flow condition, 0.3 m/s sustained for a duration of 40 ms. For biofilms with a low amount of EPS (20% EPS), clusters could easily detach from the biofilm matrix (erosion-dominated) at high frequency, accompanied by the escape of single bacterial cells (Figure S6). This may be due to the limited EPS available to immobilize the cells in the biofilm (Flemming & Wingender, 2010). The detachment frequency of the biofilm decreased with increasing EPS amount. Figure 4 shows the biofilms after exposure to this flow. Local detachment, such as erosion and sloughing, was clearly captured within the first 14 ms; however, removal of the whole biofilm occurred after this period when the inlet fluid velocity was greater than 0.2 m/s. Therefore, the detachment rate coefficient, defined as the ratio of the volume of detached biofilm clusters to the total volume of biofilm per millisecond, was calculated during the initial detachment period (14 ms) and adopted to describe the biofilm detachment behaviour under the range of hydrodynamic conditions. Each simulation was run in three replicates and the average results were calculated. The results displayed in Figure 6a suggest that the resistance of the biofilm to the external fluid is largely attributable to the EPS amount. EPS is responsible for the mechanical stability of the biofilm due to its cohesive properties; therefore, biofilms with a greater density of EPS components are predicted to be more stable when exposed to the fluid flow. | Biofilm viscoelastic response during deformation-recovery test Deformation of the biofilm with 46% EPS was monitored for 3 ms as the fluid velocity was incrementally increased from 0 m/s at a constant acceleration. Then the fluid flow was stopped and the biofilm was allowed to relax for 17 ms. The stress-strain curve was obtained from the loading and unloading cycle. Figure 7 shows the deformation process during this cycle. Figure 8a shows the fluid-induced shear stress on biofilms over time. It is evident that the higher flow acceleration resulted in higher shear stress imposed on the biofilms (Figure 8a). This can lead to larger deformation (or shear strain) and deformation rate (or strain rate) of the biofilms, as seen in Figure 8b. After the flow was removed at 3 ms, the fluid-induced stress in the biofilm decreased rapidly (3-4 ms, Figure 8a), and some of the deformation (8%-10%) was immediately recovered, attributable to a time-independent elastic response. Afterward (4-20 ms, Figure 8a), the stress decay slowed down dramatically and almost reached a plateau at the end of the recovery. A residual deformation (or strain) during biofilm relaxation was captured in each deformation-recovery test and increased with the maximum fluid velocity. Such strain-rate-dependent recovery is due to the nature of the viscoelastic models adopted within the biofilms and is a common characteristic of viscoelastic materials (Capurro & Barberis, 2014; Chen et al., 2011). The apparent shear modulus increased with the deformation rate (or strain rate) due to this viscoelastic effect; accordingly, a lower deformation rate leads to a smaller apparent shear modulus, which is also seen in this study. If we take into account the different deformation rates of biofilms at different fluid velocities, we obtain consistent values of the equilibrium shear modulus and viscosity, in the ranges 3.8-4.8 Pa and 6.9-8.7 mPa·s, when using the Prony series viscoelastic model for curve fitting at different deformation rates (Chen & Lu, 2012). To study the EPS effect on biofilm mechanics, we also focused on biofilms containing higher EPS amounts (40%, 46%, and 51%) subjected to the ramping fluid velocity at a constant acceleration of 40 m/s². Biofilms with lower EPS amounts were not considered here since they detached easily, so the stress-strain curve could not be captured during deformation. The deformation of the biofilm with 40% EPS was greater compared to the biofilm with 46% EPS, although both experienced similar shear stresses. This suggests that higher EPS resulted in better resistance to the external fluid shear force. However, the shear strain of the 51% EPS biofilm was the lowest, although the maximum stress was almost 16% higher than for its counterpart biofilms. Taken together, the results suggest that biofilms with higher EPS might be stiffer, which agrees with the findings of Gloag et al. (2020). As seen in Figure 9c, the stress-strain curve was almost linear at very small strains (<0.1), which was also found in our experimental measurements of flow-induced biofilm deformation of Bacillus subtilis (Figure S8). When ignoring the deformation rate (or strain rate) effect, the apparent shear modulus at the given loading conditions was 12.82 ± 2.03, 15.21 ± 1.94, and 17.18 ± 3.3 Pa for biofilms with 40% EPS, 46% EPS, and 51% EPS, respectively.
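The Prony-series curve fitting used to extract the equilibrium shear modulus, here and in the values that follow, can be sketched as below; the one-term series and the synthetic relaxation data are illustrative assumptions:

```python
# Minimal sketch of fitting a one-term Prony series to a stress-relaxation
# curve, G(t) = G_inf + G_1 * exp(-t / tau_1). The data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def prony(t, g_inf, g1, tau1):
    return g_inf + g1 * np.exp(-t / tau1)

t = np.linspace(1e-4, 17e-3, 50)               # relaxation window (s)
g_obs = prony(t, 4.3, 9.0, 2e-3) + 0.05 * np.random.randn(t.size)

popt, _ = curve_fit(prony, t, g_obs, p0=(4.0, 8.0, 1e-3))
print(f"fitted equilibrium shear modulus G_inf = {popt[0]:.2f} Pa")
```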
When the deformation rates were accounted for, this yields equilibrium shear moduli of 3.5 ± 0.7 Pa, 5.7 ± 1.7 Pa, and 5.6 ± 0.4 Pa for those three biofilms when using the Prony series viscoelastic model for curve fitting at different deformation rates (Chen & Lu, 2012). These values are consistent with several biofilms such as Pseudomonas aeruginosa (Stoodley et al., 1999) and Staphylococcus aureus (Rupp et al., 2005). The corresponding viscosities for those three biofilms are 8.4 ± 4.5, 6.6 ± 1.6, and 8.3 ± 3.2 mPa·s, respectively. There is no evidence of a correlation between the equilibrium shear modulus and the viscosity of biofilms, in line with other recent studies (Houari et al., 2008; Klapper et al., 2002; Rupp et al., 2005; Safari et al., 2015; Stoodley et al., 1999; Vinogradov et al., 2004; Wloka et al., 2004). The predicted viscosity results from the mechanical interactions between bacteria and bacteria, bacteria and EPS, and EPS and EPS. In general, all the simulated biofilms exhibited some strain-stiffening effect followed by strain softening at larger strains, due to the viscoelastic properties and the change in the microstructure of biofilms during the deformation (Figure 9c). [Figure 9: the fluid-induced (a) stress and (b) strain on biofilms over time, and (c) the corresponding averaged stress-strain curves as the flow was applied and terminated, for biofilms with 40%, 46%, and 51% EPS at a flow acceleration of 40 m/s²; error bars represent standard deviations based on three replicates.] This behaviour is consistent with experimental measurements of biofilms over a wide range of strains (Jana et al., 2020). An ANOVA test was used for statistical analysis (α = 0.05). The result (p < 0.05) suggests that the change of shear modulus with EPS amount is statistically significant. After the fluid was stopped, the stress on the biofilms decayed exponentially and the deformation recovered slowly, which is a common feature of viscoelastic materials (Chen & Lu, 2012). The overall biofilm deformation recovery within the simulation period was 16%, 20%, and 22% for biofilms with 40%, 46%, and 51% EPS, respectively. The results suggest that the abundant presence of EPS in the deformed biofilms makes a significant contribution to their mechanical recovery, as bacterial cells are much more loosely associated with other bacterial cells than with EPS agents. Similar results were also verified in experimental work, which revealed that EPS is required to induce bacterial rearrangement during stress relaxation (Peterson et al., 2014).
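The ANOVA comparison mentioned above can be sketched with standard tools; the replicate values below are hypothetical placeholders, not the simulation output:

```python
# Sketch of the one-way ANOVA comparing shear modulus across EPS levels
# (alpha = 0.05). The three replicate values per group are hypothetical.
from scipy.stats import f_oneway

g_40 = [12.1, 14.5, 11.9]   # apparent shear modulus (Pa), 40% EPS
g_46 = [15.0, 13.4, 17.2]   # 46% EPS
g_51 = [17.5, 14.0, 20.0]   # 51% EPS

f_stat, p = f_oneway(g_40, g_46, g_51)
print(f"F = {f_stat:.2f}, p = {p:.3f}; significant if p < 0.05")
```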
| CONCLUSIONS The CFD-DEM coupled model developed here has enabled us to predict biofilm deformation and detachment under varied hydrodynamic conditions. When the biofilm was exposed to a steady fluid flow and contained a low proportion of EPS (less than 32%), the biofilm easily detached from the surface and dispersion of individual cells was observed. In these cases, the limited amount of EPS was incapable of protecting the biofilm bacteria from shear stress. Biofilm detachment frequency decreased with increasing EPS amount. The biofilms were stiffer at higher loading rates, which is a typical characteristic of viscoelastic materials. Such viscoelastic features of biofilms also led to the hysteresis loop (energy dissipation) predicted by the stress-strain curves in our simulations and experimental measurements (Rupp et al., 2005). The predicted shape of the stress-strain curve during flow-induced deformation is similar to our measurements at a comparable flow velocity. Furthermore, we found that a higher EPS amount led to a higher apparent shear modulus of the biofilm at a given flow velocity. The equilibrium shear modulus was also higher when the EPS ratio was relatively high. In general, the predicted equilibrium shear modulus of the biofilms was in the range of 3.5-5.7 Pa, which is consistent with experimental measurements of P. aeruginosa and Staphylococcus aureus reported in the literature (Rupp et al., 2005; Stoodley et al., 1999). The nonlinear stress-strain characteristics of biofilms at large strains predicted by the simulations were comparable with key experimental findings from rheometer measurements (Jana et al., 2020). AUTHOR CONTRIBUTIONS Yuqing Xia, Pahala G. Jayathilake, Paul Stoodley, and Jinju Chen designed the research. Yuqing Xia performed the simulation work and acquired the data. Yuqing Xia and Jinju Chen performed the data analysis. Pahala G. Jayathilake and Bowen Li contributed to the data analysis. Yuqing Xia, Pahala G. Jayathilake, and Jinju Chen prepared the original draft. Jinju Chen and Pahala G. Jayathilake provided the overall guidance of the work. All the authors contributed to the writing of the manuscript, revised it, and approved the final version.
4,869.2
2021-12-20T00:00:00.000
[ "Environmental Science", "Engineering", "Biology" ]
CREATION OF YOUNG ENTREPRENEURS AS RESOURCES OF ECONOMIC DEVELOPMENT AND ALLEVIATION OF POVERTY IN MUSLIM COUNTRIES: AN ISLAMIC APPROACH Purpose of Study: This paper aims to examine the importance of entrepreneurship and the role of young entrepreneurs in economic development. The paper explores the vital role that entrepreneurship can play in boosting the economy of a country, because young people trained in entrepreneurial skills can serve as resources for economic development and the alleviation of poverty. The paper examines how Muslim youth can be developed and trained as resources for economic development and the prevention of poverty in Muslim countries. Methodology: The paper adopts a qualitative research methodology to collect and analyze data on the topic at hand. This methodology is used to discuss the effect of training on young entrepreneurs and how it makes them resources for economic growth and the reduction of scarcity in a country. Results: The paper finds that young entrepreneurs are among the most important instruments that can be utilized to enhance the economy of a country and eliminate poverty in society. This is because the young generation is the most important generation that can serve as a tool and resource to develop the economy of a country and alleviate poverty in society at large. Implications/Applications: Entrepreneurship plays a crucial role in economic development, and the creation of young entrepreneurs can serve both as a resource for economic enhancement and as a source of job opportunities for Muslim youth. The paper suggests conducting empirical research in order to form a clearer picture of the creation of young entrepreneurs and of the effect of entrepreneurship training on them as resources for economic development and the alleviation of poverty in Muslim countries. INTRODUCTION Islam has encouraged Muslim youth to be entrepreneurs since the era of the Prophet Salla Allahu alaihi wassalam (S.A.W.). The first young Muslim entrepreneur was Uthman bin Affan. When the Prophet (S.A.W.) created a brotherhood between him and Sa'ad bin Rabi'i, the latter offered to divide his wealth into two parts between himself and Uthman, but Uthman refused to take his part and told his brother in Islam (Sa'ad): may Allah bless your wealth; show me the market of al-Madinah, for I know how to sell and buy. From there his entrepreneurial skill was initiated and developed, and he became a successful entrepreneur in al-Madinah; to this day he has a bank account in one of the banks of al-Madinah al-Munawwarah, Saudi Arabia. Islam invites all people, male and female, to entrepreneurship. Islam does not encourage people to be dependent on others; indeed, it encourages people to be independent and not to rely on others. Islam urges a person to work hard in this world as if he were going to live in it forever, and to work for the hereafter as if he were going to die tomorrow. Thus, from an Islamic point of view, a Muslim has to combine this world and the hereafter, as both were created for him, so that he may live in prosperity and happiness in this world and attain paradise in the hereafter. Therefore, a young Muslim has to work hard in this world because in the hereafter Allah Subhanahu wata'ala (S.W.T.) will ask him about three things, one of which is his youth and how he spent it in this world. Youth is a very important stage of life, which a young Muslim must be careful to use for what is beneficial to him both here and in the hereafter.
Islam encourages a young Muslim to take the opportunity to use his youth to serve his nation in general, and himself and his kin in particular. Hence, a young Muslim should train himself in entrepreneurship so as to become an entrepreneur who will use his entrepreneurial skills to work for his country, develop its economy and prevent poverty in society. Islam, as a religion suited to any time and place, invites a young Muslim to be an entrepreneur who will serve his nation. As a result, the creation of young entrepreneurs is a crucial element that can boost the economy and alleviate poverty in Muslim countries. Faizal et al. (2013) found that Islam urges Muslims to be entrepreneurs because entrepreneurship is one of the elements that can address the economic problems of a country. Entrepreneurship is part of Islamic culture, as the Prophet Muhammad Sallallahu alaihi wassalam (S.A.W.) and his companions demonstrated from the early era of Islam. Islam invites Muslims to be creative and dynamic entrepreneurs, and many Muslims throughout the world have been successful entrepreneurs. The Prophet Muhammad (S.A.W.) was himself a successful entrepreneur before his prophetic mission. Entrepreneurship is one of the factors that can assist in alleviating poverty and developing the economic system of a country. Thus, social entrepreneurship is a global phenomenon that can influence society by applying creative ways of solving social problems. The current economic situation can only be mitigated and resolved through entrepreneurial activity, because entrepreneurship is a phenomenon that can reduce social problems and eliminate poverty in society. REVIEW OF LITERATURE Gümüsay (n.d.) argued that Islamic entrepreneurship is more readily invoked in the market than rigorously defined: it rests more on marketing constructions than on academic rigour and theological appropriateness. From an Islamic law perspective, it is suggested that a specific terminology for entrepreneurship be adopted, one that involves both vigilance and a comprehensive understanding of entrepreneurship, since Islam did not assign any specific terminology to entrepreneurship. Islamic entrepreneurship is similar to Islamic economics and finance, which refer to certain business activities conducted in an Islamic way. Ullah et al. (2015) explain that entrepreneurship is a practice which is neither a science nor an art: it is a new method of doing something which was done previously. Hence, an entrepreneur is a person who takes products and economic resources from a lower level of productivity to a higher one, with higher profit. However, Hoque et al. (2014) argue that a country is poor because of its insufficient and unqualified entrepreneurs, since the economic development of a country is driven by its qualified entrepreneurs. From the above, one can say that there is a need to conduct research on this topic in order to find out how to create qualified entrepreneurs who are able to accelerate the economic development of their countries. Davis (2013) contends that much academic writing on Islamic economics is in Arabic, Urdu and other languages not commonly read by non-Muslim writers. Islamic entrepreneurship therefore suffers from a lack of accessible academic writing, which is one of the reasons for the limited understanding of it; indeed, some Western writers are of the view that Islam is not compatible with entrepreneurship.
Based on that, one can say that there is a crucial need to conduct research on the topic at hand, so that non-Muslim writers can understand Islamic entrepreneurship, for entrepreneurship and business activities are part of Islam. Islam encourages Muslims to engage in entrepreneurship and business ventures in accordance with the principles of Islamic law. In Islam, success is not assessed by the results achieved but by the means used to achieve them. From the above literature, one can say that there is a need to conduct research on the topic at hand in order to highlight the crucial role that entrepreneurship can play in the economic development of a country and to find out how to create qualified entrepreneurs who are able to accelerate the economic development of their countries. METHODOLOGY OF STUDY The study adopts a qualitative research methodology in which the theoretical framework of the creation of young entrepreneurs as resources of economic development and the alleviation of poverty in Muslim countries is approached through the collection of relevant data from literature such as textbooks, encyclopaedias, articles in academic journals, seminars, conference papers, online databases and internet materials. This method of data collection is based on library research, used to examine and explore the issues relating to the topic at hand. SIGNIFICANCE OF ENTREPRENEURSHIP IN ISLAM Islam has invited all Muslims to be entrepreneurs since its earliest stage. Islam encourages Muslim youth to engage in entrepreneurship, to be self-reliant, and to provide job opportunities to others. This is because Islam allowed Mudarabah, Musharakah and other commercial contracts, carried out by those who have entrepreneurial skills, as means of participating in economic development. For example, Uthman bin Affan was a young entrepreneur who engaged in business, became a successful entrepreneur, and served Islam and Muslims through many calamities. His charity fund has been running from his era until now; currently there is a charity bank account in his name, managed by the government of Saudi Arabia, which serves Muslims and helps to alleviate poverty in the country. This shows the significance of entrepreneurship as an economic instrument to develop the economy of a country and eliminate poverty in society. Islam is a system of life which Allah (S.W.T.) has given to us so that we may live in this world accordingly and attain paradise in the hereafter. In Islam there is no separation between the economic system and the religion of Islam. Entrepreneurship is an integral part of Islam, as Islam urges Muslims to be innovative and creative entrepreneurs. It was mentioned in the Hadith that the Prophet (S.A.W.) is reported to have said: "one cannot ever eat any food which is better than that produced by one's own hands" (AyukAko). A Muslim's duty in business activities is not only to generate profit in this world but to obtain the blessing of Allah (S.W.T.) in the hereafter, which is paradise. Thus, the concept of entrepreneurship in Islam is a business activity in which Muslim entrepreneurs combine this world and the hereafter, for Islam is a system of life to be followed according to its principles and regulations, which results in success in this world and the world after. Therefore, in Islam entrepreneurship is not only about making profit, but about producing goods and rendering services that do not involve non-halal activities.
An Islamic entrepreneur is a person who conducts his business activities in accordance with the principles of Islamic law. He does not indulge in any activity which is not sanctioned by the teachings of al-Qur'an and al-Sunnah. CREATION OF YOUNG ENTREPRENEURS In Islam, the youth are the most important generation, one which Islam has taken into consideration since its infancy. As the Prophet (S.A.W.) is reported to have said: "a Muslim has to take the opportunity to use his youth in something that is beneficial for him before he grows old." A country can train its youth to be entrepreneurs and provide entrepreneurial skills for them from the beginning, in primary and secondary schools, and can make this entrepreneurial training compulsory throughout the country's education system, in order to prepare them to be future (young) entrepreneurs who will use their entrepreneurial skills to develop the economy of the country and create job opportunities for others. They can also train others who do not have entrepreneurial skills. This can accelerate the economic development of the country and the elimination of poverty in society. By training young people in entrepreneurship from primary and secondary school, a child will be ready to be an entrepreneur even if the government of the country is not able to provide a job opportunity for him. There need be no worry about his future, because of the entrepreneurial skills and training provided to him within the education system: he can create his own enterprise and business firm. This system of creating and training youth can be used as a resource for economic development in Muslim countries, especially in less developed countries. Such training can assist in the growth of product sales and the development of new products, as well as providing entrepreneurial skills for entrepreneurs. Training can help new entrepreneurs to run their business activities confidently, because people hesitate to engage in business activities owing to a lack of experience and skill in the business area. Therefore, having entrepreneurial training as a prerequisite for young entrepreneurs can encourage them to engage in entrepreneurial activities and run their businesses smoothly. Those who are trained before entering business generally have the confidence and the necessary knowledge of the area. This distinguishes them from those who do not have enough knowledge of entrepreneurship, as Allah (S.W.T.) said in al-Qur'an: "Are those who have knowledge equal to those who do not have it?" Those who have entrepreneurial skill and training are more successful entrepreneurs than those who do not. As a result, by creating such young entrepreneurs in any country, particularly Muslim countries, the economy of the country can be developed faster, because the young citizens of the country are already prepared to be resources for the development of its economy. Therefore, there need be no worry about job opportunities for the future generation or about economic development. DEVELOPMENT OF ECONOMICS AND ENTREPRENEURSHIP Islam encourages Muslims to be creative and innovative in their business activities, and to run their business enterprises well, because thousands of people may benefit from a good business enterprise, and the initiative to create one can be considered a good deed in the eyes of Allah (S.W.T.).
In al-Qur'an Allah (S.W.T.) said: "Those who do righteous deeds shall have a reward unfailing" (al-Qur'an, 2:42). A Muslim entrepreneur should have sufficient zeal, mental stamina and inspiration to initiate his business enterprise and accelerate the economic development of his country. Indeed, Allah (S.W.T.) has recommended Muslims to be creative and innovative in mobilizing economic resources, as He (S.W.T.) said in al-Qur'an: "When your prayer is over, spread over the earth and seek the bounty of Allah" (al-Qur'an, 62:10). From the aforesaid, one can say that the creation of young entrepreneurs can play a crucial role in economic development and the mobilization of resources for economic growth in Muslim countries, as Islam encourages Muslims to work hard, to live independently, and not to rely on others or wait for fortune to come. Islamic economics is still at an early stage, particularly entrepreneurship in the Islamic context. Therefore, understanding entrepreneurship and the Islamic economic system is a crucial part of economic study, as entrepreneurial behaviours and Islamic economics have become part of the conventional financial system (Davis, 2013; Elfakhani and Ahmed, 2013; Gümüsay, n.d.). In Islam, a businessman has to take the risk of a business venture before he deserves the profit derived from the business. In the concept of entrepreneurship, the main characteristic that distinguishes entrepreneurs from others is that entrepreneurs are willing to take risks and to be creative in their personality and behaviour. This is because entrepreneurship enables them to participate in economic and job growth as well as in the creation of new business activity. Thus, entrepreneurship consists of the activities and processes involved in creating a business; the person engaged in this business activity is the entrepreneur, who thinks about how opportunities can be realized and how resources can be mobilized. The main premise of an entrepreneur is to raise resources to a highly productive level of business activity. Hence, Muslim entrepreneurs should differ from non-Muslim entrepreneurs in their motives and goals. Muslim entrepreneurs are not allowed to engage in business activities which are not in accordance with the principles of Islamic law; they are only permitted and encouraged to engage in business activities which are morally accepted by Islamic law. From the above, one can say that entrepreneurship plays a vital role in the economic development of a country and in the creation of job opportunities for others. This can contribute to the alleviation of poverty, particularly in Muslim countries in which entrepreneurship is still at its infancy. FINDINGS It was discovered in this study that the creation of young entrepreneurs can play a crucial role in the economic development of Muslim and non-Muslim countries. From the perspective of Islamic law, entrepreneurship is highly encouraged. Islam urges Muslim youth to be entrepreneurs in order to participate in the development of the economy of the country, because entrepreneurship is one of the key elements that can be used to accelerate a country's economy and create job opportunities for others.
The economic growth of a country rests on its entrepreneurs, particularly young entrepreneurs, who are the most important element to be utilised in developing the economy and alleviating poverty. This is especially true of Muslim countries, which have large numbers of poor people because many of their citizens lack entrepreneurial skills, particularly the youth, who are the most crucial asset for raising the quality of the country's products so that they can be traded with other countries. In addition, it was argued in the study that training Muslim youth in entrepreneurship can play a crucial role in economic development and the elimination of poverty. Thus, young entrepreneurs are the future generation and leaders of a country, and they are needed to develop and accelerate its economy by using the experience and entrepreneurial skills acquired during their education. CONCLUSION Young entrepreneurs play a crucial role in the economic development of a country and the alleviation of poverty. These young people can be used as resources for accelerating a country's economy and for the wellbeing of society at large. Islam is a religion that urges Muslim youth to work hard in order to live independently in this world and not rely on others, because youth is held in high regard from the standpoint of Islamic law: it is the stage of life which a young person should take the opportunity to use for what will benefit his future. Therefore, the creation of young entrepreneurs can be considered a resource for economic development and the prevention of poverty in Muslim and non-Muslim countries. Many Muslim countries are facing problems of unemployment and weak enterprise because their youth are not prepared and trained to become entrepreneurs who can participate in the economic development of their countries and the elimination of poverty in their societies. Furthermore, entrepreneurship is a milestone in accelerating a country's economy: it can increase the country's production and trade so as to enhance the welfare of its people, particularly its youth, who are going to be the future leaders of the country. The researcher suggests conducting empirical research on the topic at hand in order to form a clearer picture of the theme.
4,314.8
2019-11-05T00:00:00.000
[ "Economics", "Business" ]
A two-level on-line learning algorithm of Artificial Neural Network with forward connections. An Artificial Neural Network with cross-connection is one of the most popular network structures. The structure contains an input layer, at least one hidden layer and an output layer. Analysing and describing an ANN structure, one usually finds that the first parameter is the number of the ANN's layers. A hierarchical structure is a default and accepted way of describing the network. Using this assumption, the network structure can be described from a different point of view: a set of concepts and models can be used to describe the complexity of the ANN's structure, in addition to using a two-level learning algorithm. Implementing the hierarchical structure in the learning algorithm, an ANN structure is divided into sub-networks. Every sub-network is responsible for finding the optimal values of its weight coefficients, using a local target function to minimise the learning error. The second, coordination level of the learning algorithm is responsible for coordinating the local solutions and finding the minimum of the global target function. In the article a special emphasis is placed on the coordinator's role in the learning algorithm and its target function. In each iteration the coordinator has to send coordination parameters to the first-level sub-networks. Using the input vector X and the teaching vector Z, the local procedures run and find their weight coefficients. In the same step the feedback information is calculated and sent to the coordinator. The process is repeated until the minimum of the local target functions is achieved. As an example, the two-level learning algorithm is used to implement an ANN in the underwriting process for classifying the category of health in a life insurance company. I. INTRODUCTION In practice many ANN structures are used, but the most popular are ANNs with forward connections that have a complete or semi-complete set of weight coefficients. The structure of such an ANN is depicted in Fig. 1. Neurons in both the hidden and the output layers use sigmoid or tanh activation functions. In the output layer the linear activation function is usually used for approximation tasks. In the most common structures hidden layers include more neurons than input layers, so input information is not compressed in the hidden layers. In this paper two assumptions are accepted: • To define an ANN structure, only the hidden layers and the output layer are counted. A network described as ANN (10-15-8) includes 10 neurons in the input layer, 15 neurons in one hidden layer and 8 in the output one; it is a two-layer ANN. • To implement the two-level learning algorithm, an ANN with one hidden layer is used. The concept of a layer is used in the primary sense. Fig. 1: Scheme of the ANN with forward connections Using the symbols of Fig. 1, a set of forward and backward formulas can be written. For the first layer (the hidden layer), the output is V1_i = f(Σ_j W1_ij x_j); for the second layer (the output layer), y_k = f(Σ_i W2_ki V1_i); the coordinator is described by the Ψ function; and the target function is the error-mean-square Φ = ½ Σ_k (y_k − z_k)², where j = 0, 1, ..., n0 (the number of input neurons), i = 0, 1, ..., n1 (the number of hidden neurons) and k = 1, 2, ..., n2 (the number of output neurons). Using standard backpropagation notation, derivatives with respect to the weight coefficients are obtained. Equation (4) is known as the coordination function.
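As a minimal sketch of the forward pass just described, using NumPy with dimensions matching the ANN (10-15-8) example discussed later (the weights are random placeholders, and bias terms are omitted for brevity):

```python
# Minimal sketch of the forward pass of a one-hidden-layer ANN with
# sigmoid activations, shaped like the ANN (10-15-8) structure used later.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(15, 10))   # hidden-layer weights
W2 = rng.normal(scale=0.1, size=(8, 15))    # output-layer weights

x = rng.random(10)            # input vector X
v1 = sigmoid(W1 @ x)          # hidden-layer output V1
y = sigmoid(W2 @ v1)          # network output
```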
II. TYPES OF HIERARCHICAL MODELS Using concepts described in [1], an ANN will be treated as a complex system with an internal hierarchical structure. Three terms are introduced in relation to an ANN: • the layer of description or abstraction of an ANN and its learning algorithm, • the layer of algorithm complexity, • the layer of algorithm structure. To distinguish between these concepts, the following three terms are used respectively: a stratum, a level, and an echelon. The term layer is used as a common term referring to any of the aforementioned concepts. For later use in the formal description of the different concepts of hierarchical structures, we describe an ANN as a relation between sets X and Y. A. ANN layers of description or abstraction To treat an ANN as a complex system and to describe it in a complete and detailed way, a differentiated approach should be used. There arises a dilemma between the simplicity of description and the complete understanding of an ANN's behaviour [1]. At the first stage, a verbal description is used to help understand how an ANN is built. For a more detailed analysis, mathematical descriptions using algebra and/or differential equations are required. Finally, the mathematical formulas have to be implemented in a computer program or an electronic device. Therefore, to achieve a complete description of an ANN, a family of concepts and models from different fields of science and technology has to be used. Every model uses its own set of variables, laws, principles and terminology by means of which the ANN is described. For such a hierarchical description, the functioning on any level should be as independent as possible. To separate this concept of hierarchy from the others, a new name is used: a stratified ANN, or a stratified description [1]. A layer of abstraction will be referred to as a stratum (Fig. 2). Fig. 2: The ANN stratification description Using this definition we can state that [1]: • the selection of strata, in terms of which an ANN is described, depends on the scientist, their target and needs; • the concepts in which every stratum is described should be as independent as possible; • one can comprehensively understand how an ANN works by moving down the hierarchy of strata; • a stratified description implies a reduction of the information sent up the hierarchy. The input set X and the output set Y are both representable as Cartesian products: it is assumed that there are given two families of sets X₁, ..., X_ns and Y₁, ..., Y_ns such that X = X₁ × ... × X_ns and Y = Y₁ × ... × Y_ns, where ns is the number of strata in which one describes the ANN structure. If the concepts in which every stratum is described are fully independent, the ANN stratification can be described as a family of relations S_i ⊆ X_i × Y_i, i = 1, 2, ..., ns. B. Organisational hierarchy In a multi-layer ANN, a number of hidden layers and one output layer are sectioned off. Each such part will be described as a sub-network. Every sub-network has its own output vector that is, at the same time, the input vector of the succeeding one: V_ij, i = 1, 2, ..., n−1, j = 2, 3, ..., n, where n is the number of sub-networks. Because of this specific organisation of the ANN's hierarchy, there are many sub-networks on the first level, for each of which a local target function is defined. These sets of local tasks have to be coordinated to achieve the global solution. The coordinator, as an independent task, has its own target function Ψ. Taking everything into account, this concept is the base on which one may build the new scheme of the ANN learning algorithm structure
(Fig. 3). This is the hierarchical organisational structure. To distinguish this concept of hierarchical structure from the others, the term echelon is used. Fig. 3: The hierarchical structure of an ANN learning algorithm The two-level ANN learning algorithm can be described as a set of procedures. The procedures on the first level are responsible for solving their local tasks and calculating their part of the weight coefficient matrices. The second-level procedure has to coordinate all the local procedures (tasks) using its own local target function. There is a vertical interaction between the procedures, and two types of information are sent. One is a downward transmission of control (coordination) signals; the second is an upward transmission from the first level to the second: a feedback signal that informs the coordinator about the behaviour of the first-level tasks. Consequently, in all the structures three different tasks are defined: • the global target function Φ, • a set of first-level tasks with their local target functions, • the coordinator task Ψ. To build the two-level learning algorithm, two assumptions have been made: • There is no explicit relation between the procedures on the first level for direct communication. The procedures use only input and output vectors and a coordinator signal. • There is no direct relation between the global target function Φ and the coordinator task Ψ. C. Levels of calculation complexity The standard ANN learning algorithm is a non-linear minimisation task without constraints. To solve this task, iterative procedures are used. Using the most popular backpropagation algorithm, one has to choose a lot of control parameters. From a theoretical point of view one has only general suggestions and recommendations regarding the choice of actual parameter values, for example the learning parameters. The algorithm is time-consuming and its convergence is not fast. Dividing the primary algorithm for the whole ANN into sub-network tasks, the local target functions become simpler and can be used in different procedures. Additionally, a new procedure is needed: the coordination procedure. In practice, however, the coordinator does not have the ability to find all the parameters needed for the first-level procedures. To solve this problem, a multi-level decision hierarchy is proposed [1]. Solving the problems of the iterative algorithm on both the first and the second level, one can observe certain dynamic processes. These processes are non-linear and use a lot of control parameters. During the learning process these parameters are fixed and do not change; practice proves that this solution is not optimal. To control the way the learning parameters change, an additional level could be used: the adaptation level (Fig. 4). Thus, one can build three levels at a minimum: • The local optimisation procedures: the algorithm is defined directly as a minimisation task without constraints. • The coordination procedure: this algorithm could also be defined directly as a minimisation of its target function; constraints may or may not exist. • The adaptation procedure: the task on this level should specify the values of the learning parameters not only for the coordinator level but also for the first level. To solve this task, the procedure should capture the dynamic characteristics of the learning process on all levels.
As a conclusion one can state that the complexity of the problem increases from one level to the next: the coordination and adaptation procedures need more time to solve their own tasks. The two-layer ANN with an input layer, one hidden layer and an output layer is used for further considerations. This simple structure is very popular and can be used to solve many practical tasks. Since this network is used to solve different classification tasks, sigmoid activation functions are used. To decompose the standard learning algorithm structure into sub-network tasks, the coordination target function has to be built. The two-level learning algorithm structure for the ANN with one hidden layer is shown in Fig. 5. Fig. 5: Scheme of two-level learning algorithm structure According to [2][8] the following set of formulas can be written. 1) For the first sub-network: the local target function Φ1 is defined as the error-mean-square between the hidden-layer output and the coordination signal, Φ1 = ½ Σ_i (V1_i − γ1_i)², where γ1_i is a target value given by the coordinator. The total derivatives of Φ1 with respect to the weight coefficients of matrix W1 are obtained using the sigmoid derivative f'(s) = f(s)(1 − f(s)). The new values of the weight coefficients are W1 ← W1 − α1 ∂Φ1/∂W1 − β1 W1, where α1 is the learning coefficient and β1 is the regularisation coefficient. The feedback information sent by the first sub-network to the coordinator is its output V1_i, i = 1, 2, ..., n1. 2) For the second sub-network: the local target function Φ2 is also defined as the error-mean-square, Φ2 = ½ Σ_k (y_k − z_k)², with the output y_k computed from the input values γ2_i given by the coordinator. The total derivatives with respect to the weight coefficients of matrix W2 use the same sigmoid derivative, and the weight coefficients are updated in the same form, W2 ← W2 − α2 ∂Φ2/∂W2 − β2 W2. As stated in (30), the local target Φ2 is a function of the W2 weight coefficients and of the γ2 parameters given by the coordinator; the total derivatives with respect to the coordination parameters γ2, ∂Φ2/∂γ2_i, constitute the new feedback information sent by the second sub-network to the coordinator.
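The first-level updates for both sub-networks can be sketched as follows; the explicit error-mean-square forms and the weight-decay reading of the β term are our assumptions about the formulas summarised above, and all numerical values are illustrative:

```python
# One first-level iteration of the two-level algorithm: each sub-network
# minimises its local target given the coordinator's signals gamma1, gamma2.
# The weight-decay interpretation of beta is an assumption.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def subnet1_step(W1, x, gamma1, alpha1=0.1, beta1=1e-4):
    v1 = sigmoid(W1 @ x)                      # hidden-layer output V1
    delta = (v1 - gamma1) * v1 * (1.0 - v1)   # dPhi1/ds via sigmoid derivative
    W1_new = W1 - alpha1 * np.outer(delta, x) - beta1 * W1
    return W1_new, v1                         # V1 is the feedback signal

def subnet2_step(W2, gamma2, z, alpha2=0.1, beta2=1e-4):
    y = sigmoid(W2 @ gamma2)                  # output-layer response
    delta = (y - z) * y * (1.0 - y)
    W2_new = W2 - alpha2 * np.outer(delta, gamma2) - beta2 * W2
    grad_gamma2 = W2.T @ delta                # dPhi2/dgamma2: feedback signal
    return W2_new, grad_gamma2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(15, 10))     # ANN (10-15-8) shapes
W2 = rng.normal(scale=0.1, size=(8, 15))
x, z = rng.random(10), rng.random(8)
gamma1, gamma2 = rng.random(15), rng.random(15)

W1, fb1 = subnet1_step(W1, x, gamma1)         # feedback 1: new V1
W2, fb2 = subnet2_step(W2, gamma2, z)         # feedback 2: dPhi2/dgamma2
```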
3) For the coordinator: In a two-level learning algorithm, the coordinator plays the main role. It is now time to decide what kind of coordination principle will be chosen. This principle specifies various strategies for the coordinator and determines the structure of the coordinator. In [1], three ways were introduced in which the interaction could be performed:
• Interaction Prediction. The coordination input may involve a prediction of the interface input.
• Interaction Decoupling. Each first-level sub-system is absorbed in the solution of its own task and can treat the interface input as an additional, free decision variable. This means that the sub-systems are completely decoupled.
• Interaction Estimation. The coordinator specifies the ranges of interface inputs over which they may vary.
In this article, Interaction Prediction is used. The coordinator predicts the interface between the sub-networks; that is, the output V1 of the first sub-network and the input V2 of the second sub-network are predicted. The signal γ1 predicts the output signal V1 of the first sub-network. The first sub-network uses this signal as a teacher value, and it is a part of the target function Φ1 of the first sub-network according to formula (22). The coordinator predicts the signal γ2 as well; the second sub-network uses this signal as its input value V2. Using this assumption gives the ability to define the local target function Φ2 of the second sub-network (30). Consequently, the two local target functions Φ1 and Φ2 can be defined. As stated above, the coordinator needs the feedback information from the first-level sub-networks to check whether the predicted signals γ1 and γ2 were true. If not, the coordinator, using its own target function, should find new values of the coordination signals γ1 and γ2. The first sub-network, using formula (24), calculates the new value of its output signal, which, at the same time, is its feedback signal to the coordinator (29). The second sub-network tries to minimise the local target function Φ2 and calculates the new optimal value of its input signal (38), which is sent to the coordinator. The coordinator therefore has full information and is ready to calculate and predict the new values of the coordination input signals γ1 and γ2. Taking this into account, the coordinator target function Ψ is defined accordingly, and using a gradient algorithm one can calculate the new values of the coordinator signals γ1 and γ2.

Fig. 6: Functional multi-level hierarchy for an ANN learning algorithm

IV. EXAMPLE

In a life insurance company, the underwriting process plays the main role in risk control and premium calculation. ANNs could be used to help insurance agents to classify the insurance applicant and calculate the first level of premium. Therefore, a special short questionnaire was prepared, which includes only 10 main questions. The data were used to teach the ANN to work as an insurance specialist, known as the underwriter. All data were divided into three subsets:
• The first set, of both the input data X and the output data Z, included 250 records. This set is known as the learning set. As an example, a small part of the input data is shown in Fig. 7. The learning epoch includes 250 vectors that are sent into the ANN input one by one; when this sequence is finished, the next iteration begins.
• The second set is used to verify the quality of the learning process. This set contains 150 records and is known as the verification set.
• Finally, the third set, known as the testing set, contains only 100 records. It helps the decision-making specialist to decide whether the ANN achieves good quality and whether it is ready for use.

Fig. 7: An input data example
To achieve this, the two-level learning algorithm has been used to teach the ANN. The structure of the ANN includes only one hidden layer: 10 input neurons, matching the dimensionality of the vector X, 15 neurons in the hidden layer and 8 neurons in the output layer. This structure can be described in short as the ANN (10-15-8). Two sub-networks were introduced in accordance with the algorithm description. The first sub-network includes the hidden layer and its local target function Φ1; the second sub-network includes the output layer with its local target function Φ2. The coordinator has its own local target function Ψ and coordinates the local tasks to achieve the minimum of the global target function Φ (for the whole ANN). The main goal of this example is to study the dynamic characteristics of the ANN learning process, especially the relations between all the target functions: the two local target functions Φ1 and Φ2, the coordinator target function Ψ and the global target function Φ. All values have to achieve their minimum, as shown in formulas (22), (30) and (40).

In Fig. 8 the dynamic characteristic of the learning process of the first sub-network is shown. In the beginning phase, the target function Φ1 decreases from 1.2 to less than 0.1 during 4,000 iterations. After that, it decreases very slowly towards the target value 0.001. Studying the dynamic characteristic of the second sub-network (Fig. 9), one can see that in the beginning phase of the learning process the error decreases faster, reaching the value 0.1 after only 2,000 iterations. The differences between the sub-networks can be explained by the dimensionality of the W1 and W2 matrices: the matrix W1 includes 165 weight coefficients, while W2 includes only 128 weight coefficients (consistent with (10+1)×15 = 165 and (15+1)×8 = 128 when bias terms are counted). Finally, the coordinator target function is shown in Fig. 10. The quality of the dynamic processes is the same as for Φ1 and Φ2. The starting error is the greatest, and it reaches the value 0.1 only after 6,000 iterations, which can be explained by the relations between the sub-networks. When the learning process started, the sub-networks were not connected (they were decoupled). This means that every sub-network has to change both its weight coefficients and its input-output vectors using the coordinator's γ signal. The coordinator calculates the optimal γ value using the feedback information from both sub-networks. This is an iterative process and it has two stages. During the first stage, all errors decrease dramatically, rather quickly reaching values of less than 0.1. After that, the process stabilises and achieves its final value after a considerably long time (number of iterations). This part of the algorithm is not optimal, and the coordinator should change its strategy for calculating a new γ value.

When the ANN achieves the final learning result, the verification sets are used and the differences between the teacher's data and the ANN's calculations are collected (Fig. 12). It can be seen that not all the output vectors are the same. In most cases, the ANN calculates a lower value than the teacher (an insurance specialist). The chart shows the categories assigned to a number of insurance candidates; when a category is higher, the insurance premium is greater as well.

Fig. 12: Result of the ANN's learning

Therefore, one can state that the ANN is more conservative than a life insurance company specialist. For a few candidates it calculates a higher category, and they would pay a higher premium.
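The paper does not state how the eight output neurons are mapped to a single category from 1 to 8; a common convention, assumed here, is one-hot teacher vectors decoded by argmax. The sketch below also shows the kind of near-tie between neighbouring categories that motivates the fuzzy-set treatment suggested in the conclusion:

import numpy as np

# Hypothetical decoding of the ANN (10-15-8) output into a category 1..8.
def decode_category(output):
    # output: eight sigmoid activations in (0, 1); the highest one wins
    return int(np.argmax(output)) + 1

out = np.array([0.05, 0.08, 0.44, 0.41, 0.10, 0.02, 0.01, 0.03])
print(decode_category(out))  # -> 3, although category 4 is a near-tie:
                             # a conflict a fuzzy output could express explicitly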
Finally, the dynamic characteristic of the global target function is shown (Fig. 13). This characteristic is closely related to all the above: the maximum value is higher and the process needs more time to achieve a value of less than 0.1, namely about 8,000 iterations (the sum of the local values).

V. CONCLUSION

In [1], three coordination principles are defined for big systems with a hierarchical structure. For the ANN learning process, interaction prediction was used. Each sub-network is responsible for finding the minimum value of its own target function, treating the interface inputs as additional variables; the γ signal plays this role. For the first sub-network, γ1 is used as the teacher data, and the sub-network should change its own weight coefficients in such a way that the final sub-network output is as close to the teacher value as possible (in a square-error sense, of course). For the second sub-network, γ works as the input vector; this vector and the teacher vector are used to train the sub-network. The coordinator is responsible for finding the optimal value of the γ1 signal, using its own target function Ψ. The underwriting process was used to show that this learning algorithm structure is able to find the minimum of the global target function Φ for quite a complicated problem, and that the ANN is ready to work. The weight coefficients of both the W1 and W2 matrices were memorised, and the program was sent to insurance agents for use. As has been emphasised, during the second stage the learning algorithm works far away from the optimal value: the convergence errors decrease their values very slowly, and Fig. 15 reaffirms this quality of the dynamic characteristics, where n is the number of iterations. Therefore, the coordinator should use a more complicated coordination algorithm that includes not only a PD algorithm structure but a PID algorithm as well. This work will be continued. Analysing the learning result shown in Fig. 12, one can see that the ANN should find a discrete category value (from the set 1 to 8). From time to time, the network solutions are different from the specialist's decisions; shifting the solution into fuzzy sets could resolve this conflict. What follows is the suggestion that the second sub-network should be adapted along these lines.

Fig. 4: Functional multi-level hierarchy for an ANN learning algorithm
Fig. 8: The target function Φ1 of the first sub-network depending on the iteration number
Fig. 9: The target function Φ2 of the second sub-network depending on the iteration number
Fig. 10: The coordinator target function value Ψ depending on the iteration number
Fig. 11: The part of the learning process including vibrations
Fig. 13: Learning the global target function Φ for the ANN
Characteristics of the feedback signals 1(n) and 2(n) are depicted in Fig. 14.
4,991
2014-12-01T00:00:00.000
[ "Computer Science" ]
Understanding Capabilities, Functionings and Travel in High and Low Income Neighbourhoods in Manila

Transport plays an important role in helping people to access activities and participate in life. The availability of transport networks, the modes available, new infrastructure proposals, and the type of urban development can all impact on and change activity participation, and hence contribute to social equity in the city. This article uses surveys in low and high income neighbourhoods in Manila, the Philippines, to assess the social equity implications of differential access to transport. The analysis demonstrates how the theoretical framework of the Capability Approach (Nussbaum, 2003; Sen, 1985, 1999, 2009) can be used to assess what individuals might be able to access (capabilities) versus their actual travel (functionings). The spatial patterns of travel and access to activities are assessed, demonstrating significant differences by gender, age, income and neighbourhood, in terms of travel mode and cost of travel; health, physical and mental integrity; senses, imagination and thoughts; reasoning and planning; social interaction; natural environment; sustainable modes; and information. This approach to assessing the transport dimensions of social equity offers much potential, based not only on access to resources or consumption of mobility, but also on the opportunities that people have in relation to their activity participation. The case study context is also informative, with Manila providing an example of an Asian city with high levels of private car usage, high levels of congestion, and large spatial and income differentials in travel and associated social equity.

Introduction

The first Human Development Report (United Nations Development Programme, 1990) was published almost three decades ago, and since then there have been various attempts to improve levels of social equity, across many contexts internationally. The focus has been on putting people at the centre of the development process, i.e., aiming to create the conditions for people to enjoy long, healthy and rewarding lives, rather than simply pursuing increases in Gross Domestic Product (GDP). But understanding levels of social equity, the multi-dimensionality of this, and the potential solutions, has proved complex. Social equity is viewed as fair access to opportunities, livelihood, education, and resources, with social justice as the fair and just relation between the individual and society, including the distribution of wealth, opportunities and social privilege (Mella Lira & Hickman, 2017). Spatial patterns of social equity still vary greatly between and within countries, and it is not always obvious what should and can be improved, and how. Some countries and cities have become more inequitable over the last few decades, with inadequate resources available to maintain even minimum standards of living. Inequity is now widely seen as moving beyond the accumulation of wealth, incorporating issues such as participation in activities, employment, education, and other factors, such as literacy, life expectancy, health, and wellbeing. Perhaps there has been less discussion concerning the role of transport in supporting social equity; however, effective transport seems fundamental to many of the issues being faced, with transport required to facilitate participation in activities. Travel is not usually an end in itself, but provides one of the means to access what people value. In addition, active transport, through walking and cycling,
has many direct health benefits (Woodcock et al., 2009).

This article uses the Capabilities Approach (CA), developed by Amartya Sen (Sen, 1985, 1999, 2009), as a theoretical framework to understand the differences in travel and participation in activities. It applies CA using surveys in high and low income neighbourhoods in Metro Manila, the Philippines. The contribution of the article is to understand the opportunities that people have and aspire to, and what they actually achieve, relative to accessing activities, and to examine how this is distributed by gender, age, income and neighbourhood. These issues are seen as important in a context such as Manila, where the activities that individuals might be able to, or would like to, access, relative to their actual travel, are likely to be very uneven across population cohorts and spatially. Individuals do not always take up the theoretical opportunities on offer. The use of the accessibility that is available, via different transport networks, might not be straightforward, with some modes not used because of issues such as cost, status, comfort and safety. The political and cultural structure of society is critical to travel and social equity, enabling only a limited set of choices at the individual level. As Sen (2009, p. 227) states:

In assessing our lives, we have reason to be interested not only in the kind of lives we manage to lead, but also in the freedom we actually have to choose between different styles and ways of living.

The article hence develops an approach to apply CA in relation to travel, assuming that the context of Manila might be associated with unequal access to travel and participation in activities. Reflections are given on the implications of using CA as a framework for assessing the social equity impacts of transport systems.

The Capabilities Approach and Travel

Transport can be an important factor in helping to develop socially equitable societies, with different types of infrastructure, such as highway, public transport, or walking and cycling networks, tending to be used by different cohorts in society. A diverse literature has examined the potential relationships between transport and social equity, social exclusion, and wider issues such as social capital and well-being, including the barriers to access experienced by different groups (such as Church, Frost, & Sullivan, 2000; Currie et al., 2009; Currie & Stanley, 2008; Delbosc & Currie, 2011; Lucas, 2004, 2012; Preston & Rajé, 2007; Social Exclusion Unit, 2003; Stanley, Hensher, Stanley, & Vella-Brodrick, 2011). Accessibility analysis and planning have been usefully applied in practice, particularly in Global North contexts, to examine the impacts of transport strategies and projects (Ashiru, Polak, & Noland, 2003; Dong, Ben-Akiva, Bowman, & Walker, 2006; Geurs, Boon, & Van Wee, 2009; Geurs, Zondag, De Jong, & De Bok, 2010; Hansen, 1959).
CA offers a complementary way of examining these issues, focusing on the opportunities that people have, and the realisation of these opportunities, in accessing activities. There is much use of CA in wider fields, notably in development studies (see Comin, Qizilbash, & Alkire, 2008, and many others), but little in transport planning, despite much potential for application. Some research is beginning to emerge, developing the conceptual framework for use in transport (Beyazit, 2011; Hananel & Berechman, 2016; Martens, 2017; Mella Lira & Hickman, 2017; Nahmias-Biran, Martens, & Shiftan, 2017; Nordbakke & Schwanen, 2014), and applying this through case studies (Nordbakke, 2013; Ryan, Wretstrand, & Schmidt, 2015).

The central concepts used in CA are:
• Capabilities: representing the "alternative combinations of doings and beings that are feasible to achieve", i.e., what real opportunities are available for people to do and to be (Sen, 1999, p. 75);
• Functionings: the "various things a person may value doing and being" (Sen, 1999, p. 75), with the realised functionings representing what a person actually achieves and how. These might include elementary activities, such as being adequately nourished, being in good health, and avoiding early morbidity, through to more complex activities or personal states, such as taking part in activities and community life, having self-respect and being happy.

In transport, this distinction can be useful in allowing us to understand the opportunities available in a particular context and also how this relates to actual participation in activities. The realised functioning element (what a person actually does) is perhaps the easiest to measure, represented by the actual travel and participation in activities. The travel part of this is well used in transport planning, with analysis often focused, for example, on actual vehicle kilometres travelled or mode share. Capability (the real opportunities, concerning what the person is substantively free to do) is more problematic to measure with an easy metric. It can be viewed as the level of accessibility available (Martens, 2017), but perhaps can be further developed, beyond the aggregate level, as the individual opportunities for travel and participation in activities. Hence the theoretical, aggregate 'physical' accessibility might be modified by issues such as the type of available infrastructure, built form, social and cultural norms, and individual characteristics, and these give the individual a unique capability set. The 'real' opportunities are also difficult for individuals to assess, as they might not be aware of the full or relative range of opportunities on offer. Capability should, however, cover the potential and aspiration to access different activities within particular contextual constraints.
The capability is hence viewed as the substantive freedom to achieve different activities and lifestyles, i.e., the combinations of different possibilities from which the person can choose. For example, a person with a high income may choose to have a similar level of mobility (functioning) to a person with a lower income, but have a very different capability set, in that they could choose to be much more mobile. The realised functionings are modified again, relative to the capability set, according to individual characteristics such as income, disability, education and aspiration. In practice, a higher income is likely to lead to a higher realised functioning in mobility and participation terms. The value in using such a distinction is that it may lead us to understand why certain levels of accessibility (even improved levels of public transport, pedestrian or cycling accessibility) are not being used. The evaluative focus for assessing the social impacts of transport can hence be widened beyond the realised functionings to consider issues of capability. This is perhaps most evident when considering different city or national contexts, where the political, institutional and cultural constraints can be very different, including in the use of transport systems.

In terms of applying CA, Sen avoids outlining a basic list of capabilities and giving weights to different capabilities. His reasoning is that different capability sets will be relevant to particular groups in different settings. Others argue that CA is most useful when applied as an evaluative approach, and we attempt to build on this in relation to transport. Nussbaum (2003) provides a list of 10 central human capabilities which can be used as the basis for discussion on factors (beings and doings) that may be important in a particular context. These include: life; bodily health; bodily integrity; senses, imagination and thought; emotions; practical reason; affiliation; other species; play; and control over one's environment.

A third core concept in CA is agency, with an agent defined as someone who acts and brings about change (Sen, 1999, pp. 18-19). This can be interpreted at the individual or societal level, including the role of institutions and organisations within particular political and cultural contexts. The agency aspect is important in helping to structure and shape the potential for capabilities and functionings. Sen further distinguishes this in terms of opportunity freedom (what opportunities or abilities individuals have to achieve) and process freedom (the process through which activities might happen) (Sen, 1999, p. 17).

CA is interpreted and applied in many different ways in the literature, most often in relation to development studies. Analysis of deprivation and advantage using CA is focused on capabilities or functionings rather than utility or commodity, hence there is a human-centred and multi-dimensional, pluralistic emphasis. Assessment can incorporate measurement, but more often is focused on qualitative discussion, and is usually focused on either functionings or capabilities, and rarely both together (Comin et al., 2008). The objective of development is seen as the expansion of capabilities, hence there is a concern with changing practice and generating policies and activities which may increase capabilities (Sen, 1999). It is assumed that the functionings would increase alongside the increased opportunities.
Figure 1 interprets and applies CA in the transport context, illustrating how a capability set (including a positive journey experience, covering bodily health, integrity, emotion, affiliation, and access to activities) may be available to an individual, yet only a more limited set of functionings is realised, dependent on ability, income and other potential barriers to take-up. Some capabilities may be only partly taken up, e.g., through working part-time; or even be more fully taken up than initially envisaged, e.g., by caring for an elderly relative. Hence there are theoretical, maximum opportunities available, and only some of these are used by the individual. The level of aggregate accessibility may be higher than the capability set, offering a theoretical level of choice to participate in activities that is not always possible to take up. The agency dimension is largely interpreted here at the structural level, including the governmental institutions which may, for example, favour a particular set of infrastructure investments and interventions. This leads to the transport systems and built environment, and frames the available opportunities. The actions of institutions lead, in part, to the opportunities available and help to create the cultural and social norms of travel and participation in activities.

If this is related to a hypothetical example, say investment in a new transport project, it can be seen that levels of accessibility may improve. Alongside this, the capability set may increase, including the number and scale of capabilities. Functionings may also increase, depending on the particular context and barriers to take-up. There are also issues of adaptive capacity, where individuals modify their beliefs and actions to the context they find themselves in. Individuals hence can normalise both their capabilities and functionings, e.g., the full range of potential opportunities may not be understood or realised.

CA therefore has potential as a conceptual framework which can be used to help understand and represent travel and activity participation. It could be used alongside accessibility analysis to help understand why and how individuals and societies may participate in activities relative to the barriers to take-up. It could be used in social equity impact assessment to understand how proposed infrastructure projects may affect individuals and neighbourhoods. Further research is possible to develop the themes, examining and applying the different concepts, perhaps with most potential through the use of case studies. A more conventional analysis, focused on changes to levels of mobility, such as vehicle kilometres travelled, traffic volume or mode share, in comparison, gives only a limited view of the impacts of transport investment, usually interpreting mobility as a commodity to be consumed. A more human-centred and multi-dimensional analysis can potentially offer greater insights on the social impacts of transport.

Case Study Neighbourhoods and Survey Approach

The case study neighbourhoods chosen to explore these issues are drawn from Metro Manila, the Philippines (a large urban area, rapidly growing from a population of 1.6 million in 1984 to approximately 12.9 million people in 2015, and estimated to reach 14 million in 2030). There is an urban land area of 614 km², and high population densities of 21,000 persons/km² (Philippine Statistics Authority, 2015).
Metro Manila's diverse and hazard-prone geography, its poor transport systems (including high levels of traffic congestion, poor quality public transport and very poor walking and cycling facilities) and its dispersed urban population result in very unequal access to travel and activity participation, and in challenges to urban life and human development. The privately dominated system of infrastructure provision leads to some types of transport projects being developed, often privately financed urban highway schemes, while extensive public transport networks are very difficult to provide. The metropolitan area is an example of splintered urbanism (Graham & Marvin, 2010), where infrastructure provision, transport and urban development lead to fragmented urban experiences and large levels of social and spatial inequality.

The Philippines is seen as medium scale on the Human Development Index (HDI) (with an HDI score of 0.682; 114 out of 188 countries) (United Nations Development Programme, 2016). Per capita GDP in Metro Manila is relatively low at 183,747 Philippine Peso (Php) (£2,877.56 GBP as at June 2017) (National Statistical Coordination Board, 2013), but this is the highest of the regions in the Philippines. There are an estimated four million slum dwellers (informal settlements) in Metro Manila (Roy, 2014), hence the distribution of wealth is very uneven. The richest 10% of the population account for 30% of consumption and the poorest 10% just 3%. The Gini Index is 0.398 (Human Development Network, 2003).

High and low income neighbourhoods were surveyed in Metro Manila (Figure 2). The high income group were interviewed with an online survey, including residents in exclusive villages from around Manila. Respondents were found via university students, staff and wider contacts, using snowball recruitment. This is a useful method where respondents are difficult to find. A variety of high income neighbourhoods were used to source the high income group, again due to the difficulty of identifying survey respondents in one neighbourhood. The validity of the survey, in terms of understandability of questions and coverage, was checked initially by members of the academic team involved in the research project and then via a small pilot (n = 10) with students at De La Salle University.

Respondents from the high income neighbourhoods generally live in large lots and houses, often with swimming pools and access to private leisure clubs (Figure 3). These exclusive subdivisions were established between the 1940s and 1980s; examples are Forbes Park and Urdaneta Village. Houses usually have separate maid's quarters and their own common security personnel, and most households own several private vehicles. For exclusive villages near the central business district (CBD), the price could range from Php 150,000 to Php 500,000 per square metre of land, depending on the year of build. A large house might sell for around 300 million Php (£4.7 million GBP). In other areas further from the Makati CBD, this could range from Php 70,000 to Php 100,000 per square metre of land. These exclusive villages were not converted into commercial uses, given their proximity to some of the shopping and business areas of the Makati central business district and their attractiveness for residential living. A total of 102 valid questionnaires were gathered from these high income neighbourhoods.
For the low income neighbourhoods, face-to-face surveys were conducted with respondents in five neighbouring barangays in the Sampaloc District (Figure 4), adjacent to De La Salle University. Surveys were easier to gain in this area, as there were many university students living here and initial contact was easy to make. Again, snowball recruitment was used. Face-to-face interviews were used to carry out the surveys, instead of the process being online: not all those being surveyed had easy access to the Internet, hence face-to-face interviews were more appropriate. The same survey was used in the high and low income neighbourhoods, hence it is unlikely that the technique of online versus face-to-face delivery affects results, but the impact of this is unknown.

Sampaloc is an old residential neighbourhood; many of the houses are old and dilapidated, and some of these residences have become boarding houses for students. Some residences have converted their ground floor into commercial space for restaurants or stores catering mostly for local residents. Others have replaced their old houses and constructed apartment buildings of four or more storeys for rent to students. There are private car owners in this area, but they usually use the street to park their vehicles, as the lot is used for living space. The street in front of the house also sometimes serves as an extension to the house, for example where laundry may be done, or even as an outside living room where people will sit on benches and talk with neighbours. There are also pockets of informal settlers in the area, usually on vacant lots which were not properly secured by their owners. A court order would be required to remove the settlers. The cost of a lot with a structure here would range from Php 35,000 to Php 75,000 per square metre, depending on the age of the house or structure on it. Rent for a flat is available at around 10,000 Php per month (£160 GBP per month). A total of 105 valid questionnaires were gathered from Sampaloc. The number of surveys undertaken is low, certainly for quantitative research, but this reflects the context where these were carried out: it is relatively difficult to gain survey respondents in both the high and low income neighbourhoods in Metro Manila. The analysis can be seen as exploratory, with scope for more detailed analysis to follow up some of the initial findings.

Survey Questions

The surveys included questions on individual and household characteristics, and on the primary and secondary mode of travel (used to access work or the main activity), followed by individual views against a range of central human capabilities, covering issues such as travel experience and access to activities. The question themes are based on the list of central human capabilities developed by Nussbaum (2003), but modified to fit the transport and urban planning context in Manila more clearly. Responses are given for desired levels (capability) and actual levels (functioning), using a five-point Likert scale (1 bad; 5 good). The survey is quite lengthy, including 75 questions covering individual characteristics and seven key categories of impact, and took around 20 minutes to complete. An example question is given below:
• Capability: What is your desired level of comfort while you are using your primary transport mode?
• Functioning: How do you assess the level of comfort that you experience while you are using your transport mode?
Following the earlier discussion on applying CA in transport, we rely in the survey on the individual's viewpoint of the desired level of transport or participation in activities to reflect capability. This may not always relate well to real opportunities, but it gives us a view of perceived desired opportunities. Further research can test varied approaches here, including attempts to assess real and relative opportunities, but this is a complex concept and difficult to explain to respondents. The following central human capabilities are used, covering the journey experience, access to activities and also associated well-being:
1. Health, physical and mental integrity: • Level of stress • Level of physical activity • Closeness to other transport users • Levels of air pollution • Levels of security (not being assaulted, robbed or harassed) • Levels of comfort
2. Senses, imagination and thoughts: • Feelings associated with different modes (such as freedom, insecurity, functionality, enjoyment, health and status) • Enjoyment of primary and secondary mode
3. Reasoning and planning: • Access to current employment • Public transport provision and access to visiting relatives, recreational activities, cultural and sporting activities, etc. • Range of transport modes • Affordability of transport modes
4. Social interaction: • Level of social interaction • Feeling of discrimination
5. Natural environment and sustainability: • Presence of natural elements • Access to sustainable transport modes
6. Information: • Quality of interchange • Access to information on transport modes
7. Travel to work and other activities: • Level of access • Range of employment • Commute time
Examining issues such as these allows us to consider the different dimensions of social equity as related to the transport system, the experience of travel and participation in activities. The issues are broader than those usually considered through social impact assessment, including criteria such as senses, imagination and thought; reasoning and planning; and level of social interaction. All of these are potentially important social impacts associated with different transport infrastructure. In particular, the analysis allows us to explore the differences between opportunities and aspirations and realised activities. Hence, neighbourhood types, with high and low incomes, are examined to assess whether there are differences in travel and activities by population cohort and spatially.

Analysis

There are clear differences between the two neighbourhood types across many of the individual characteristics (Table 1). In the low income neighbourhood, there are more males (60% relative to 55% in the high income neighbourhoods); a different age profile, with fewer in the 18-24 group (30% relative to 53%) but more aged 35-54 (34% relative to 28%); lower educational attainment, with fewer at graduate level (42% relative to 70%); a lower monthly income (86% below 25,000 Php per month relative to 40%, and 0% above 70,000 Php per month relative to 48%); much lower use of the private car (6% relative to 79%); and higher use of the Light Rapid Transit (LRT) and Philippine National Railway (PNR) systems (34% relative to 6%) and of tricycles (32% relative to 0%), all as a primary mode. The high educational attainment levels reflect the method of survey delivery: students from De La Salle University were used to gain contacts and gradually find responses. The large differences in incomes and modes used illustrate the large social inequity in Metro Manila.
Figure 5 shows boxplot diagrams of the aggregated capabilities and functionings for the low and high income neighbourhoods. Responses are aggregated to give a combined capability and functioning score across 12 questions, covering level of stress for primary transport, physical activity, closeness to others, air pollution, security, comfort, access to current employment, range of employment in the neighbourhood, range of mode options, access to sustainable transport modes, social interaction, and level of information. The other variables are not used due to multicollinearity and missing data. The maximum possible aggregate score is therefore 60, for both capabilities and functionings, with a maximum score of 5 under each question. Higher aggregate scores indicate a generally better travel experience and access to activities. The boxplots illustrate the distribution of the data by neighbourhood, giving the median (central dark line), interquartile range (box), first and third quartiles (edges of box), 1.5 times the interquartile range (the whiskers), and minimum and maximum data and outliers (circles).

The high income neighbourhoods appear to have higher levels of both functionings and capabilities, and particularly functionings, relative to the low income neighbourhoods. It is argued that income plays a significant role in shaping capabilities and functionings at an individual level. In other words, lower income groups are likely to have lower rates of participation in various key life activities and are most likely to experience social exclusion. This is what we would expect, and a similar finding to previous literature (Preston & Rajé, 2007; Social Exclusion Unit, 2003), but measured in a different way, in terms of aspired and realised activities; both of these are related to income. This seems a fundamental finding: the current transport systems in Metro Manila are disproportionately affecting lower income groups in social terms. This is also in view of individuals demonstrating adaptive preferences: they are likely to internalise their particular circumstances, choose within a narrow choice set of activities, and not always be aware of the greater possibilities on offer. An important conclusion to be made is that the agency dimensions, i.e., the organisations developing transport strategies and programmes, are not supporting the lower income cohorts to the extent that they might. Perhaps a different set of infrastructure investments is required to support the lower income groups, as well as interventions in urban planning. Again, these issues could be examined with further research, including using different case studies and neighbourhoods.

Table 2 gives additional analysis using the more detailed responses against each dimension on the capability list. Chi-squared tests are used to examine the differences in responses across different population groups with categorical data (gender, age and income) and also spatially (by neighbourhood). The only exception is monthly transport costs, which is continuous data; here an F-test is used to compare mean deviations. Statistically significant findings are indicated with an asterisk.

When examining differences by gender in relation to the 26 indicators, there are only three statistically significant results. Males and females have different perceptions of being assaulted, robbed or harassed (actual) when they are using their primary mode of transport, and of levels of social interaction (desired and actual).
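To illustrate the tests just described, the following short Python sketch runs a chi-squared test on a contingency table of Likert responses by neighbourhood, and computes the aggregate score described above. The counts and responses are invented for illustration; they are not the study data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table for one indicator:
# rows = neighbourhood (low income, high income),
# columns = five-point Likert responses (1 bad .. 5 good).
table = np.array([[30, 28, 22, 15, 10],
                  [ 8, 12, 20, 30, 32]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Aggregated capability or functioning score: sum of 12 five-point items (max 60).
responses = np.array([4, 3, 5, 2, 4, 4, 3, 5, 4, 3, 4, 4])  # one invented respondent
print(responses.sum(), "out of 60")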
There are a number of significant differences by age. Closeness to other transport users (actual), level of transport options available (actual), accessibility to transport modes (actual), level of information available (actual), accessibility to employment in the local neighbourhood (actual) and monthly transport costs are significant. Examining this in more detail, it is found that people aged 18-24 are mostly satisfied with the proximity to other transport users when driving cars or taking taxis/FX taxis, but when travelling on public transport, such as the LRT/PNR or buses, they tend to feel uncomfortable; the same applies to cycling and walking. Young and middle-aged people are more likely to report having more choice of transport options available to them, when carrying out daily activities, compared with teenagers and older people. Young people are more likely to have access to job opportunities in their local neighbourhoods compared to middle-aged and older people. In addition, older people spend the highest amount on transport costs for their daily commute, followed by the middle-aged groups, while the younger generation spend the least on travel costs.

Analyses of the differences by income groups and neighbourhoods (chosen largely by income) yielded many more statistically significant results: almost all of the 26 human capability dimensions, across desired and actual, are significant. Many of these are highly significant (p < 0.001), particularly between neighbourhoods, including level of physical activity, closeness to other transport users, level of air pollution, security, enjoyment when travelling, accessibility to employment, and level of information available to choose different modes. It can therefore be argued that the parameters of income and location have very significant impacts on individual capabilities and functionings. Of course, the interpretation is complex here: infrastructure, income and travel are closely related. The availability of different types of infrastructure leads to particular types of travel, with the use of infrastructure unevenly distributed over different income groups. But, in addition, the availability of income increases the travel possibilities and the potential to access opportunities. Hence, there are multiple relationships at work, with different factors working in multiple directions.
Conclusions

Transport plays an important role in helping people access activities and participate in life; it is an important factor in human development. But much of the current transport investment benefits certain cohorts in society, usually the higher income groups, relative to others, and this is experienced in some cities and neighbourhoods more than others. This article has demonstrated how CA might be used in the transport context, using a case study of Metro Manila. It attempts to show what individuals might be able to do and their actual travel, and how these might be distributed by population group and spatially. There are critiques of CA and the use of concepts of opportunity, instead of the more orthodox focus on welfarism (the extent to which people's preferences are satisfied). CA is also very difficult to apply, with measurement of opportunity open to different interpretations, and is complex empirically (Alkire, 2008; Sugden, 2001). But the Manila context seems to demonstrate that both capabilities and functionings are potentially important. A person's inclusion in, and quality of, life is not merely a matter of what he or she achieves, or the mobility that is consumed, but is also related to the options available. There is not always a 'genuine' choice of the good life on offer, more a constrained set of options from which to choose (Sen, 1985). The exploratory analysis in this article demonstrates that there are significant differences in travel and activity participation by gender, age, income and neighbourhood, including issues such as travel mode and cost; health, physical and mental integrity; senses, imagination and thought; and reasoning and planning. The neighbourhoods studied have very different forms of access to the transport system, the experience of travel, and the activities this helps reach.
The theoretical framework of CA helps us to understand these issues and can be used to assess what opportunities are available to individuals and what they might like to access (capabilities) versus their actual travel (functionings). The local political and institutional context (agency) helps to explain what transport infrastructure and systems are available to individuals, how the urban form has been developed, and, to an extent, what the societal cultures and norms might be, e.g., whether it is acceptable to walk, cycle or use public transport, or whether the private car is the aspirational mode to use. The distinction between capabilities and functionings might seem to be nuanced, but we argue it is important to add this type of analysis to accessibility planning, so that we can further understand why a seemingly good level of accessibility might not be used. In particular, this might be important in a context such as Manila, where use of walking, cycling and public transport is very difficult, uncomfortable, unsafe, and has low status. Hence there are many barriers to using a theoretical level of accessibility. The way we have interpreted capabilities in the surveys is to use them to represent individual aspiration, as related to activities that are feasible to achieve. This could be tested in different ways empirically, and further research could re-examine this issue, perhaps estimating a neighbourhood or societal level of opportunity to travel and participate in activities, using interviews or workshops. This could help to develop a benchmark against which individual functionings could be assessed. In addition, it may be useful to consider different criteria and weightings of criteria, the measurement of adaptive preference by individuals, comparing functionings and capabilities relative to levels of accessibility, and developing metrics or score thresholds which indicate appropriate or deficient functionings and capabilities. Analysis could be prospective and evaluative, assessing how a project, for example, might lead or has led to a change in opportunity and actual travel.

The application of CA in transport hence has much potential, allowing us to examine the multi-dimensional social impacts of major infrastructure projects and the wider dimensions in using the distinction of capabilities and functionings. This helps us to understand not only the consumption of resources, mobility and accessibility, but also the opportunities that people have in relation to their activity participation. CA does not make the processes of appraisal and evaluation any easier; indeed, it makes these much more complex, as there are wider dimensions to be considered. There are many difficulties empirically: in devising surveys that address the wide-ranging social criteria, in explaining the different concepts within CA to respondents, in developing an approach to social impact appraisal that can be scaled up without large resource requirements, and in allowing social impacts to be considered alongside other issues, such as environmental and economic impacts.
The adaptive preference issue is perhaps most difficult: people may choose their travel and activity participation within a particular set of narrow choice sets, and will not always be aware of the greater possibilities on offer. However, in Manila and elsewhere, differential access to transport and high levels of social inequity remain problematic, and fundamental to human development. Hence, we should continue to refine our approaches to measuring transport's impact on human well-being, and to seek to improve well-being for all groups in society through infrastructure investment. And, as Sen reminds us, this can be considered not only in terms of what people have or can consume, but in terms of what they can do and be.

Figure 1. Potential changing functionings and capabilities in relation to a transport project (developing Ryan et al., 2015).
Figure 2. Case study neighbourhoods (high and low income).
Figure 5. Box plots of index of capabilities and functionings for high and low income neighbourhoods.
Table 2. Summary test statistics (Chi-squared and F-test) for capabilities and functionings. Notes: * p < 0.05, ** p < 0.01, *** p < 0.001; (a) as this is a continuous variable, an F-test is used. All other variables are categorical and, as such, a Chi-squared test is applied.
7,985.6
2017-12-28T00:00:00.000
[ "Economics" ]
T-reg transcriptomic signatures identify response to checkpoint inhibitors

Regulatory T cells (Tregs) are a subtype of CD4+ T cells that produce an inhibitory action against effector cells. In the present work we interrogated genomic datasets to explore the transcriptomic profile of breast tumors with high expression of Tregs. Only 0.5% of the total transcriptome correlated with the presence of Tregs, and only four transcripts, BIRC6, MAP3K2, USP4 and SMG1, were commonly shared among the different breast cancer subtypes. The combination of these genes predicted favorable outcome, and better prognosis in patients treated with checkpoint inhibitors. Twelve up-regulated genes coded for proteins expressed at the cell membrane, with functions related to neutrophil activation and regulation of macrophages. A positive association between MSR1 and CD80 with macrophages in basal-like tumors, and between OLR1, ABCA1, ITGAV, CLEC5A and CD80 and macrophages in HER2 positive tumors, was observed. Expression of some of the identified genes correlated with favorable outcome and response to checkpoint inhibitors: MSR1, CD80, OLR1, ABCA1, TMEM245, and ATP13A3 predicted outcome to anti-PD(L)1 therapies, and MSR1, CD80, OLR1, ANO6, ABCA1, TMEM245, and ATP13A3 to anti-CTLA4 therapies, including a subgroup of melanoma-treated patients. In this article we provide evidence of genes strongly associated with the presence of Tregs that modulate the response to checkpoint inhibitors.

Tregs constitute a small fraction of total peripheral CD4+ T cells and are characterized by the presence of the transcription factor FOXP3 [11]. Tregs play a central role inhibiting the immune response against tumors by secreting several immunosuppressive factors [12]. These cytokines inhibit effector T and NK cells, and promote tumoral M2 macrophages [13]. Several strategies have been pursued to inhibit Treg activity by acting on cell surface molecules that regulate their function, including CD25, CTLA4 and CD36, among others [14]. In this context, targeting some of these proteins, like CTLA4, with antibodies has shown benefit in patients [15,16]. In addition, clinical studies targeting some of these molecules are under evaluation in early phase trials, including compounds against GITR, CD25 or OX40, among others [17].

Efficient antitumor immune activation requires acting on different targets to enhance a multicell effector action. This has been demonstrated with the combination of anti-PD(L)1 inhibitors with CTLA4 inhibitors in several indications, like melanoma or MSI-H colorectal cancer [15,16]. In this context, identification of targets that are expressed simultaneously is mandatory to design smart drug combinations. In a similar way, the discovery of markers of response will undoubtedly permit avoiding the administration of therapies to resistant patients. In this context, proteins expressed at the cell surface are attractive targets or markers, as they are easily accessible with antibodies against them; therefore, mapping the cell surfaceome is a therapeutic priority in drug development.

In our study we aimed to evaluate the immune transcriptomic profile of tumors that harbor a high presence of Tregs. Our goal was to identify genomic vulnerabilities linked to the presence of Tregs that could be druggable pharmacologically. In addition, we explored transcriptomic signatures of response to agents targeting Tregs, like CTLA4 antibodies.
Mapping upregulated genes in breast cancer tumors expressing Tregs

To identify upregulated genes expressed in tumors with a high presence of regulatory T cells (Tregs), we interrogated public datasets, as described in the Material and Methods section. Figure 1a displays the flow chart of the whole analysis. Using a correlation threshold (Spearman rank correlation > 0.45) with p < 0.05 (for statistical analysis see the Material and Methods section) (Fig. 1a), we identified twelve genes correlated with high Treg expression in the entire breast cancer population. When performing the analysis in breast cancer subtypes independently, we recognized sixty-one genes in basal-like tumors, one hundred fifteen genes in HER2 positive tumors, and one hundred thirty-nine and thirty-nine genes in the Luminal A and B subtypes, respectively (Fig. 1b, Supplementary Table 2). Figure 1c,d describe the proportion of the selected genes within the whole transcriptome: 0.5% of genes in the entire population, 0.25% in the basal-like, 0.49% in HER2, and 0.56% and 0.16% in the Luminal A and Luminal B molecular subtypes, respectively. These data suggest that the described genes constitute a minority of the entire transcriptomic profile. Functional analysis of the identified genes is fully represented in Supplementary Fig. 1.

Common up-regulated genes among different subtypes and association with immune populations

Only four genes were commonly present in all breast cancer subtypes: BIRC6, MAP3K2, USP4 and SMG1, as can be seen in Fig. 2a. Sixty genes (16.95%) were shared among any of the subtypes (Fig. 2b) and 51 (85.00%) were common to two subtypes (Supplementary Fig. 2A). The HER2+ and Luminal A subtypes were the pair that shared the most genes (23.53%) (Supplementary Fig. 2B), followed by Luminal A and Luminal B, and finally the basal-HER2+ pair (21.57%). We evaluated whether these genes, BIRC6, MAP3K2, USP4 and SMG1, coded for proteins by exploring The Human Protein Atlas (Supplementary Fig. 3), confirming their presence.

Functional analysis of the commonly shared up-regulated genes revealed regulation of transcription, protein modification and DNA damage stimulus among the most represented functions, as can be seen in Fig. 2c.

We next correlated the expression of the commonly shared identified genes, BIRC6, MAP3K2, USP4 and SMG1, with immune populations in the different breast cancer subtypes. In doing so, we aimed to identify additional immune populations present within the immune microenvironment. As expected, a positive correlation was observed between the expression of these genes and CD4+ T cell populations in all subtypes (Fig. 2d). A lower correlation was identified for macrophages, neutrophils and CD8+ T cells, and no presence of B cells was observed. This set of data could suggest that these genes are present in tumors with high expression of CD4+ T cells, but also in other populations that could be susceptible to immune modulation. Interestingly, BIRC6 and MAP3K2 highly correlated with macrophages in the HER2+ subtype (Fig. 2d). A complete evaluation of the different T cell populations is displayed in Supplementary Figs. 4 and 5. Of note, the CD4+ T cells consisted mainly of memory T cells, and this was the population most associated with the expression of these genes.
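A minimal sketch of this selection step is shown below, assuming a genes × samples expression matrix and a per-sample Treg abundance score; the data are randomly generated and the variable names are ours, not the study's pipeline:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
expr = rng.normal(size=(1000, 200))   # hypothetical genes x samples expression matrix
treg = rng.normal(size=200)           # hypothetical per-sample Treg abundance score

selected = []
for g in range(expr.shape[0]):
    rho, p = spearmanr(expr[g], treg)
    if rho > 0.45 and p < 0.05:       # the thresholds stated above
        selected.append(g)
print(len(selected), "genes pass the threshold")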
Surfaceome proteins correlated with macrophages and PD-L1 expression

Ten up-regulated genes coded for proteins expressed at the cell membrane, eight of those (7.8%) from the HER2 subtype, one (0.7%) from the Luminal A, and three (4.9%) from the basal-like subtype (Fig. 3a). No genes were identified in the Luminal B subtype. Functional analyses of these genes displayed ontologies related to the immune system, such as neutrophil-mediated immunity, neutrophil activation and neutrophil degranulation, or regulation of macrophages (Fig. 3b).

We observed a positive correlation between MSR1 and CD80 with macrophages in basal-like tumors, and between OLR1, ABCA1, ITGAV, CLEC5A and CD80 and macrophages in HER2 positive tumors (Fig. 3c). A very strong correlation was observed between CD80 and CD274/PDL1 and PDCD1/PD1, and a strong association with FOXP3, indicative of the presence of CD80 mainly in Tregs (Fig. 3d). The association was weaker for OLR1 and MSR1. In the other breast cancer subtypes, no clear association was identified when evaluating all markers against CD274/PDL1, PDCD1/PD1 and FOXP3 (Fig. 3d). These results suggest that the proteins coded by the described surfaceome genes are expressed in cells within the tumor microenvironment including, but not limited to, macrophages, particularly in basal-like tumors. Supplementary Table 3 provides a full list of the described genes and an explanation of their biological role.

Association of the identified genes with clinical outcome

In a next step we explored the association of the reported genes with patient clinical outcome, including relapse-free survival (RFS) and overall survival (OS).

Presence of genes and response to anti-PD(L)1 and CTLA4 antibodies

Finally, we studied the association of the described genes with response to PD1 or CTLA4 therapies. To do so, we collected data as described in the Material and Methods section. Favorable outcome was observed in patients treated with anti-PD1 for BIRC6, USP4 and SMG1 (Fig. 6a, left panel). For anti-CTLA4 therapies, an association with better survival was observed for the genes MAP3K2, USP4 and SMG1 (Fig. 6a, middle panel). Lastly, when we evaluated the effect on survival in patients treated with both agents, we observed a positive association with the four common genes (Fig. 6a, right panel).

When we used these genes as a signature, taking the mean expression of the four genes, we observed an association with favorable outcome for anti-PD1, anti-CTLA4 or combinatorial therapies, highlighting its predictive role for both treatments (anti-PD1 + anti-CTLA4: HR = 0.13; CI 0.04-0.37; p = 1.5 × 10⁻⁵) (Fig. 6b).

When we used these genes as a signature taking the mean expression of all genes (10 genes), we observed an association with favorable outcome, statistically borderline significant for anti-PD1 treatment, and strongly positive for anti-CTLA4 or combinatorial therapies (anti-PD1: HR = 0.75; CI 0.56-1.01; p = 0.06; anti-CTLA4: HR = 0.42; CI 0.24-0.74; p = 0.002; anti-PD1 + anti-CTLA4: HR = 0.21; CI 0.07-0.61; p = 0.0018) (Fig. 7b). This association was also observed when we restricted the analysis to pre-treatment or on-treatment patients, except for anti-PD1 in pre-treatment samples (Supplementary Fig. 6). Of note, we also studied the association of the described genes with response to ipilimumab (an anti-CTLA4 therapy) in melanoma patients. As shown in Supplementary Fig. 7, better survival was observed for the genes MSR1, CD80, OLR1, ABCA1, ANO6, TMEM245, and ATP13A3.
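A sketch of how such a mean-expression signature and its survival association could be computed is given below, using the lifelines package for the Cox proportional hazards model. The data frame, expression values and event times are invented for illustration and do not reproduce the study's pipeline:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

genes = ["BIRC6", "MAP3K2", "USP4", "SMG1"]                  # the four-gene signature
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(150, 4)), columns=genes)  # invented expression values
df["signature"] = df[genes].mean(axis=1)                     # mean-expression signature score
df["time"] = rng.exponential(24, size=150)                   # follow-up in months, invented
df["event"] = rng.integers(0, 2, size=150)                   # relapse/death indicator, invented

cph = CoxPHFitter()
cph.fit(df[["signature", "time", "event"]], duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])                       # hazard ratio and p-value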
The capacity of the genes to predict response to immunotherapy was further confirmed using a different cohort of patients treated with anti-PD1 or anti-CTLA4. This dataset is not limited to breast cancer and includes different solid tumor types, as described in the Materials and Methods section.

Discussion

In the present article we explore the transcriptomic profile of tumors that harbor high expression of Tregs, with the final aim of identifying genomic correlates of response to checkpoint inhibitors and potential druggable vulnerabilities.

Tregs are a subpopulation of CD4+ T cells that constitute around 4-9% of this cell population 18. Their principal role is the inhibition of the effector immune cell response mediated by activated CD8+ T cells, favoring T cell exhaustion 9,19. In cancer, several studies have demonstrated that this population plays a central role in mediating tumor progression, and indeed inhibition of its effect by acting on the CTLA4 receptor has been shown to increase survival in several tumor types 20. In addition, other therapies aiming to act on receptors expressed in this population, such as those targeting TIGIT 21, are in late stages of clinical development.

When evaluating upregulated genes associated with a high presence of Tregs, we observed that only a minority of genes strongly correlated with this population: only 0.5% of all genes in the whole population. In our analysis we used a double approach: we first explored highly upregulated genes, and secondly we focused only on those present at the membrane of the cells. Only sixty genes (17%) were commonly shared between breast cancer subtypes, and only four of them were present in all four subtypes: BIRC6, MAP3K2, USP4 and SMG1. Functions of those genes included regulation of transcription, protein modification and DNA damage stimulus. The four common genes were associated with the presence of CD4+ T cells and CD274/PDL1, suggesting that their presence was not restricted to a population of Tregs 11. The identified signature predicted favorable outcome in breast cancer patients and better prognosis in patients treated with checkpoint inhibitors.
In a next step, we focused only on those genes located at the plasma membrane. Twelve genes were identified and functionally linked with neutrophil-mediated immunity and macrophage regulation, and several of them (OLR1, ABCA1, ITGAV, CLEC5A and CD80) correlated with macrophages in HER2-positive tumors. CD80 associated with CD274/PDL1 and FOXP3 in basal-like tumors, suggesting that this biomarker can be expressed in different immune populations that co-exist within the same tumor microenvironment. CD80 is expressed in different immune cells in a tumor-type-dependent manner, mainly in cells with antigen-presenting functions 22,23. In addition, expression of CD80 has been considered necessary to sustain Treg populations 24. In this context, some studies have found that mice lacking CD80 had a decreased number of Tregs in the thymus and periphery, predisposing them to autoimmune disease 23. Finally, in line with this, CD28, the receptor of CD80, is necessary for the production of Tregs. Although CD80 has been associated with Treg modulation as described before 24, in our analysis we observed an association with favorable outcome in untreated patients. Although this is a somewhat contradictory finding, an association with favorable prognosis has also been observed for high expression of inhibitory receptors and ligands like PD1 or PDL1 25,26. This suggests that an immune-reactive but suppressed microenvironment is present. The associations with OLR1 and MSR1 were weaker. CTLA4, a co-inhibitory receptor for CD80, is a known receptor present in Tregs, and antibodies targeting this protein, like ipilimumab, have been shown to produce clinical activity. MSR1 has been described as present in M2 macrophages, contributing to inflammation and patient outcome 27,28. Indeed, the presence of MSR1 has been linked with T cell exhaustion, and it has been included in a gene signature that predicted favorable response to anti-PD(L)1 in liver cancer 29. MSR1 is a gene that codes for a membrane glycoprotein implicated in the pathologic deposition of cholesterol in arterial walls during atherogenesis, and it mediates the endocytosis of a diverse group of macromolecules, including modified low-density lipoproteins (LDL) 30. A more detailed description of the biological role of the identified surfaceome genes is provided in Supplementary Table 3. Finally, we observed that some of the identified genes correlated with favorable prognosis and response to anti-PD1 and anti-CTLA4 therapies. The selected gene signature defined outcome in basal-like and HER2 tumors for RFS and OS. When evaluating patients treated with checkpoint inhibitors, the selected gene signature correlated with clinical response and favorable survival, and this was clearly observed for anti-CTLA4 and for combined anti-PD1 and anti-CTLA4 agents. When focusing on ipilimumab, particularly in melanoma, similar findings were observed. Finally, some genes specifically correlated with response to ipilimumab, like MSR1, CD80, OLR1, ABCA1, ANO6, TMEM245, and ATP13A3. The presence of MSR1 suggests a relevant role of macrophages in modulating the inhibitory effect of Tregs. Finally, we confirmed how the presence of some genes predicted response to anti-PD1 and/or anti-CTLA4 in combination with chemotherapy in the neoadjuvant setting. These data highlight that, for a selected number of genes, a short treatment course, such as that given in the neoadjuvant setting, is enough to predict response to these immunotherapy agents.
In our article we identify a set of genes that are probably expressed in a wide range of cells, mainly CD4+ T cells, macrophages, and neutrophils. Of note, none of these genes is characteristic of a specific immune population, and they could therefore be expressed in a range of immune cells. Their presence was most clearly identified in the basal-like subtype, where an association with PDL1 was also observed. These findings suggest that there is an immune-repressed microenvironment that clearly favors the activity of CPIs.

Several articles have described immune signatures in breast cancer, but only a few describe the association between the presence of Tregs and outcome 31-35. However, no evaluation of the transcriptomic profile in relation to the presence of immune cell populations, including Tregs, has been performed in this indication. Although several gene signatures have been described in relation to response to checkpoint inhibitors, mainly anti-PD(L)1 agents 36, little has been reported about the activity of both anti-PD(L)1 agents and the anti-CTLA4 antibody ipilimumab. We also acknowledge that this is a bioinformatic analysis, and the use of other techniques, like spatial transcriptomics, single-cell analysis or direct evaluation of protein expression with immunohistochemistry, would undoubtedly have enriched the manuscript 37,38.

In summary, we describe transcriptomic correlates present in breast tumors with high expression of Tregs, identifying a gene signature that predicts clinical benefit of the approved checkpoint inhibitors, anti-PD(L)1 and anti-CTLA4 antibodies. The signature described in the manuscript is protected by the following patent application: EP23382324. The relevant role of Tregs in suppressing T lymphocyte action on tumoral cells opens the possibility of acting on the former to restore T cell fitness against tumors. Identification of ways of controlling Treg action may therefore augment immune anti-tumoral responses. In this respect, the data presented here uncover potential options to optimize these treatments.
Identification of genes related to Treg infiltration and functional analyses

Breast cancer samples, including patients from the datasets described previously [39-41], were used as a cohort to identify genes whose expression correlated with high regulatory T cell (Treg) infiltration. Immune cell infiltration for each tumor sample was determined by using the normalized RNA-seq based transcriptome-wide gene expression data as input for the xCell algorithm 42. xCell is designed to compute surrogate markers of cellular proportions for sixty-four different cell types. Then, a Spearman rank correlation was computed for each gene, comparing its normalized gene expression with the xCell-derived infiltration scores for regulatory T cells. A high Treg score corresponds to a higher proportion of Treg cells among all cells in the entire bulk tumor sample. Finally, all investigated genes were ranked based on the achieved Spearman correlation coefficients. The analysis was performed in all patients and in each of the molecular subtypes independently. The molecular subtypes were determined using the PAM50 signature and include basal-like (lacking ER, PGR, and HER2 expression; n = 183), Luminal A (ER or PR positive with low KI67 expression; n = 462), Luminal B (ER or PR positive with high KI67 expression; n = 323), and HER2-enriched (HER2 positive; n = 97) cohorts. The correlation analysis included a total of 25,229 genes. To elucidate common upregulated genes associated with Treg infiltration across breast cancer subtypes, Venn diagrams were generated, following the procedures described at http://bioinformatics.psb.ugent.be/webtools/Venn/. Genes that correlated with Treg infiltration were analyzed using the biological function enrichment analysis tool Enrichr 43. We compiled Biological Process and Molecular Function ontologies (2021 version), with a p-value < 0.05 considered indicative in each functional analysis.

Surface protein identification

We applied the in silico human surfaceome 44 to identify genes that encode surface proteins. This public biomedical resource can be used to filter multi-omics data to uncover cellular phenotypes and new surfaceome markers.

Association with tumor immune infiltrates

The Tumor Immune Estimation Resource (TIMER) platform 45 was employed to analyze tumor purity and the association between gene expression and the presence of tumor immune infiltrates (CD4+ T cells, CD8+ T cells, macrophages, neutrophils, and B cells). TIMER contains 10,897 samples from diverse cancer types from the TCGA (The Cancer Genome Atlas) project and provides immune infiltrate abundances estimated by multiple immune deconvolution methods. TIMER applies a previously published deconvolution method 46 to infer the abundance of tumor-infiltrating immune cells from gene expression profiles. For estimation of cell type abundances from bulk tissue transcriptomes by CIBERSORT, multiple hypothesis testing was performed using the Benjamini and Hochberg method 47. We explored the tumor immune infiltrates in breast cancer subtypes.
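The four-way overlap reported in the Results can be reproduced in a few lines of R; the per-subtype vectors below are placeholders for the gene lists passing the correlation threshold, not objects from the original code:

```r
# Placeholder lists: genes passing the Spearman > 0.45 filter in each subtype
gene_sets <- list(basal = basal_genes, her2 = her2_genes,
                  lumA = luminal_a_genes, lumB = luminal_b_genes)

common_all  <- Reduce(intersect, gene_sets)                  # shared by all four subtypes
shared_any  <- unique(unlist(gene_sets))                     # union across subtypes
in_two_plus <- names(which(table(unlist(gene_sets)) >= 2))   # in at least two subtypes
```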
Outcome analyses and gene correlations

The KM Plotter online tool [39-41] was used to evaluate the relationship between the expression of the genes and patient clinical prognosis. This database permits the evaluation of relapse-free survival (RFS) and overall survival (OS) in breast tumors by subtype. For outcome analyses, patients were separated according to the automatically selected best cut-off values: patients above the threshold were deemed "high" expression, while patients below the threshold were characterized as "low" expression. The number of samples included from the HGU133 2.0 array for each subtype was: all: n = 2032 (RFS) and n = 953 (OS); basal-like: n = 442 (RFS) and n = 296 (OS); HER2+: n = 358 (RFS) and n = 198 (OS); and Luminal A: n = 1809 (RFS) and n = 596 (OS).

In an independent Kaplan-Meier analysis we correlated gene expression and survival in a combined cohort of immunotherapy-treated patients from different tumor types, including bladder (n = 90), esophageal adenocarcinoma (n = 103), glioblastoma (n = 28), hepatocellular carcinoma (n = 22), HNSCC (n = 110), melanoma (n = 570), NSCLC (n = 21 and n = 22 in two datasets), breast (n = 14), gastric (n = 45) and urothelial (n = 392). The datasets were identified in GEO using the keywords "gene expression", "PD1", "CTLA4", and "immunotherapy", as well as the names of available immunotherapy agents. In this cohort we evaluated the correlation with overall survival (OS) only, and patients were again separated into two groups according to the best cut-off values. According to administered therapy, the anti-PD1 cohort included n = 797 samples and the anti-CTLA4 cohort included n = 131 samples.

The Kaplan-Meier (KM) plots are presented with the hazard ratio (HR), the 95% confidence interval (CI) and the log-rank p-value (p). Genes or signatures with HR < 1 and p < 0.05 were considered predictors of favorable outcome, while genes with HR > 1 and p < 0.05 were considered predictors of detrimental outcome.

The ROC Plotter online tool 48 was used to correlate gene expression and response to immunotherapy (anti-PD1 or anti-CTLA4) in an independent cohort of different solid tumors that includes metastatic and primary tumors. The area under the curve (AUC) was computed to evaluate the clinical activity of the biomarker candidates; AUC values are independent of the chosen cut-off. This publicly available tool was developed by some of the authors of this publication.

For correlation analysis between genes, we used the Pearson correlation coefficients of every pair of genes. Data from TCGA (The Cancer Genome Atlas) 49 were included in the analysis.

Complete information describing all datasets used in the work is provided in Supplementary Table 1.
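The "best cut-off" dichotomization used by such tools can be sketched in R as a scan over candidate thresholds, keeping the one with the smallest log-rank p-value. The vectors `expr_gene`, `time`, and `status` are hypothetical per-patient inputs; real tools also guard against extreme group sizes, which the quantile limits below approximate:

```r
library(survival)

# Hypothetical inputs: expr_gene, time, status (one value per patient)
best_cutoff <- function(expr_gene, time, status) {
  cand <- quantile(expr_gene, probs = seq(0.25, 0.75, by = 0.05))  # avoid tiny groups
  pvals <- sapply(cand, function(ct) {
    g  <- factor(expr_gene > ct, levels = c(FALSE, TRUE), labels = c("low", "high"))
    sd <- survdiff(Surv(time, status) ~ g)
    pchisq(sd$chisq, df = 1, lower.tail = FALSE)   # log-rank p-value
  })
  cand[which.min(pvals)]
}
```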
Figure 1. Identification of up-regulated genes associated with Treg infiltration. (a) Flow chart describing the results obtained during the process and the bioinformatic analysis used. (b) Genes with Spearman correlation > 0.45 were considered positively correlated with Treg infiltration. (c,d) Pie charts displaying the proportion of genes with different Spearman correlations in the whole breast cancer group (c) and by subtype (d).

Figure 2. Evaluation of common genes between subtypes. (a) Venn diagram including genes with Spearman correlation > 0.45 in breast cancer subtypes. (b) Pie chart with the proportion of common genes in at least two subtypes. (c) Functional analyses by Enrichr of the sixty common genes (included in two subtypes or more). (d) Heat map depicting the Pearson correlation coefficient (R) between gene expression, tumor purity, and the presence of tumor immune infiltrates in breast cancer subtypes.

Figure 3. Evaluation of surfaceome genes. (a) Pie chart with the proportion of surfaceome genes by subtype. (b) Functional analyses of the surfaceome genes performed by Enrichr. (c) Heat map depicting the Pearson correlation coefficient (R) between gene expression, tumor purity, and the presence of tumor immune infiltrates in breast cancer subtypes. (d) Heat map depicting the Pearson correlation coefficient (R) of the association between macrophage markers and the expression of the selected genes using CANCERTOOL and the TCGA cohort.

Figure 4. Common up-regulated genes associated with outcome in breast cancer. Dot plots displaying HR values extracted from Kaplan-Meier survival plots of the association between individually expressed common genes and patient prognosis, including relapse-free survival (RFS) (a) and overall survival (OS) (b), for all breast cancer patients from the exploratory cohort. (c) Kaplan-Meier survival plots of the association between mean expression levels of the common genes and patient prognosis, for all subtypes, including RFS and OS. All: n = 2032 (RFS) and n = 253 (OS).

Figure 8. Common up-regulated genes associated with response in patients treated with anti-PD1 and anti-CTLA4. Box plots of genes validated for anti-PD1 response (a) or anti-CTLA4 response (b) in cancer patients, using the pathological complete response database in ROC Plotter. Graphs show normalized gene expression in non-responder (NR) and responder (R) patients. Cohort of different solid tumors, including metastatic and primary tumors, treated with immunotherapy.

Figure 9. Surfaceome genes associated with response in patients treated with anti-PD1 and anti-CTLA4. Box plots of genes validated for anti-PD1 response (a) or anti-CTLA4 response (b) in cancer patients, using the pathological complete response database in ROC Plotter. Graphs show normalized gene expression in non-responder (NR) and responder (R) patients. Cohort of different solid tumors, including metastatic and primary tumors, treated with immunotherapy.
Characterisation, identification, clustering, and classification of disease

The importance of quantifying the distribution and determinants of multimorbidity has prompted novel data-driven classifications of disease. Applications have included improved statistical power and refined prognoses for a range of respiratory, infectious, autoimmune, and neurological diseases, with studies using molecular information, age of disease incidence, and sequences of disease onset ("disease trajectories") to classify disease clusters. Here we consider whether easily measured risk factors such as height and BMI can effectively characterise diseases in UK Biobank data, combining established statistical methods in new but rigorous ways to provide clinically relevant comparisons and clusters of disease. Over 400 common diseases were selected for analysis using clinical and epidemiological criteria, and conventional proportional hazards models were used to estimate associations with 12 established risk factors. Several diseases had strongly sex-dependent associations of disease risk with BMI. Importantly, a large proportion of diseases affecting both sexes could be identified by their risk factors, and equivalent diseases tended to cluster adjacently. These included 10 diseases presently classified as "Symptoms, signs, and abnormal clinical and laboratory findings, not elsewhere classified". Many clusters are associated with a shared, known pathogenesis; others suggest likely but presently unconfirmed causes. The specificity of associations and the shared pathogenesis of many clustered diseases provide a new perspective on the interactions between biological pathways, risk factors, and patterns of disease such as multimorbidity.

Coding and diagnosis of disease. The National Clinical Coding Standards 31 define the primary diagnosis as the main symptom or disease treated, and arguably this primary cause of hospital admission provides the most reliable diagnosis. Additional diagnoses made after admission to hospital can correspond to less severe complaints diagnosed by chance, or occurring in association with either the primary or a different disease. Coding standards require that only diseases that affect the patient's management should be recorded 31, which will not necessarily include all existing diseases. They are also biased by medical practice, with diagnoses limited to those that are investigated. Therefore the present study was restricted to the smaller number of primary diagnoses that were expected to have passed a threshold of severity, and were more likely to be unrelated to undiagnosed or co-occurring disease.

Clinical considerations. Not all diagnosed and coded diseases are suitable for study. For example, a disease may have an uncertain diagnosis, or be unrelated to age or environmental exposures. Primarily we required that 3-digit ICD codes refer to a clear diagnosis of an age-related disease. Random events, including accidents or infections due to a chance exposure, were excluded unless modified by an underlying, possibly age-related, condition or predisposition. For example, some infectious diseases are more strongly influenced by chance lifestyle exposures than by age-related risks, but urinary tract and chest infections are influenced more by a weakened immune system than by a chance exposure alone, and were included. Diseases common before the start of the UK Biobank study, such as pregnancy-related diseases, were excluded due to insufficient cases.
Any of the above, or related, issues can cause statistical models to fail or lose power, and we also excluded any diseases that failed any statistical test described later. The above considerations led us to firstly exclude ICD-10 coded diseases beginning with: Z (factors influencing health status), because they are not disease specific; Q (congenital); O and P (diseases related to pregnancy and the perinatal period); U (new and antibiotic-resistant diseases); V, X, and Y (external causes of morbidity and mortality); and T (multiple injuries, burns, and poisoning), usually reflecting a chance exposure. An epidemiology-trained pathologist (KG) selected and categorised diseases as excluded, acute-onset, chronic, due to infection, due to injury, or of unknown aetiology (R-coded diseases in ICD-10, retained to allow follow-up studies).

Selection at the 4-digit ICD-10 code level. Incidence data may be more informative if a 3-digit ICD-10 coded disease is split into 4-digit coded disease subtypes. If these more accurately reflect the underlying aetiology, then associations with risk factors are expected to be clearer (with, for an equivalent number of cases, smaller confidence intervals and larger effect sizes). Therefore the 3-digit selections were examined and revised by a physician with training in epidemiology (IT). Where substantial aetiopathological differences existed, 3-digit codings were split into smaller groups. Often one or more 4-digit codes were excluded from a 3-digit group for a reason listed previously. Occasionally, diseases were split into a combination of one or more 4-digit codes and a grouping of 4-digit codes (see Supporting Information). The 4-digit selection was reviewed and tested for self-consistency to prevent typographical input errors. Details of the ICD-10 code selection are included in the Supporting Information, Table 1.

Survival analysis. The survival analysis used a proportional hazards model 32,33 with age as the time variable, and the data were left-truncated at the age when participants attended the UK Biobank assessment centre. The data were right-censored if the end of the study period occurred before the disease of interest, or if there was any cancer other than non-melanoma skin cancer, because many cancers and cancer treatments are known to influence subsequent disease risk. Using age as the time variable allows strong age-dependencies to be accurately modelled through the baseline hazard. All calculations used R version 4.0.0 34, with packages "bit64" 35, "data.table" 36, and "grr" 37 for data manipulation, "survival" 38 for fitting survival models, "xtable" 39 for long tables in the Supplementary Material, and "dendextend" 40 and "gplots" 41 for plots. We considered the well-known risk factors of: diabetes, height, body mass index (BMI), smoking status, systolic blood pressure (SBP), alcohol consumption, and walking pace, and adjusted for the established confounders and female-specific risk factors of: deprivation tertile, education, hormone replacement therapy (HRT) (women only), and having one or more children (women only). We used numerical measures for height, BMI, and SBP, standardised using their joint mean and standard deviation across men and women.
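A one-line illustration of this pooled standardisation in R (the data frame and column names are hypothetical):

```r
# Standardise using the joint mean and SD across men and women (not sex-specific)
joint_standardise <- function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
dat$bmi_std    <- joint_standardise(dat$bmi)
dat$height_std <- joint_standardise(dat$height)
dat$sbp_std    <- joint_standardise(dat$sbp)
```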
Smoking status was: never, previous, or current; alcohol consumption was: rarely (less than 3 times per month), sometimes (less than 3 times a week, but more than 3 times per month), or regularly (3 or more times each week); walking pace was: slow, average, or brisk; and education was: degree level, post-16 (but below degree), or to age 16 or unspecified. For women, we also adjusted for any previous HRT use (yes or no), and for having had one or more children (yes or no). Baseline was taken as: no diabetes, never smoker, rarely drink, brisk walking pace, degree-level education, minimum deprivation tertile, and, for women, no children or HRT use. Analyses were multiply adjusted to minimise the influence of correlations between risk factors and to capture as much causal information in the fitted parameters as possible. To reduce confounding by age we stratified by year of birth (YOB), and adjusted for the age at which participants joined the study. We assumed a linear response to the continuous measures of BMI, height, and SBP, so as to maximise the number of cases in each category. If associations were non-linear, this would reduce the accuracy of our model fits, leading us to argue against inferring causal associations with risk factors. Well-known and biologically meaningful variables were used to aid interpretation of disease clusters, but because measurable, recognised physical characteristics were used for characterising and clustering diseases, it would be acceptable if "risk factors" were in fact symptoms. The measured risk factors had less than 1% missing values, allowing a complete case analysis. Because the risk factors are commonly measured, equivalent analyses in other datasets are possible. Sensitivity analyses with sex-dependent tertiles found similar results to those of the main text; see Supporting Information, Figs. 2 and 4.
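Putting these pieces together, a minimal sketch of the fitted model, reusing the hypothetical data frame `dat` from the previous sketch (entry and event ages, an event indicator, and illustratively named covariates; these are not the study's actual column names):

```r
library(survival)

# Age as the time variable, left-truncated at the age of joining the study;
# stratification by year of birth models strong age/cohort dependence
fit <- coxph(Surv(age_entry, age_event, event) ~
               bmi_std + height_std + sbp_std +
               diabetes + smoking + alcohol + walk_pace +
               education + deprivation +
               age_entry +            # entry-age adjustment, as described in the text
               strata(yob),
             data = dat)             # women-only fits would add HRT and children indicators
summary(fit)
```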
Statistical inclusion criteria. There is no general rule to determine how many cases are sufficient to ensure meaningful estimates for parameters and their covariances 42. We excluded diseases if their parameters or covariance matrices were undefined, or if their covariance matrices' eigenvalues were unusually large, indicating excessively large confidence intervals for one or more parameters (see Supporting Information, Fig. 1). This was typically due to insufficient data in one or more categories, usually for diseases that occur at the older (or younger) extremes of the age range (e.g. delirium or excessive menstruation, respectively), with too few cases in the younger (or older) YOB tertiles. To select a smaller set of diseases that have the most statistically significant risk factors and are easier to study and discuss, we excluded diseases whose risk factors were not statistically significant after a Bonferroni multiple-testing adjustment of a multivariate χ² test for statistical significance of the fitted parameters. Finally, the proportional hazards assumption was tested using a global χ² test of the Schoenfeld residuals 33, and diseases failing the test after an FDR multiple-testing adjustment 43 were excluded. When testing for failure (and exclusion), an FDR adjustment is stricter than a Bonferroni adjustment and will exclude more diseases. The selection procedure is summarised in Table 1.

Strong, biologically meaningful comparisons. To compare diseases, we were interested in strong, biologically meaningful comparisons, for example between current smokers and a baseline of never smokers, as opposed to a baseline of previous smokers. Such substantial differences are more likely to be associated with changes to biological pathways that can modify disease risk. Because the maximum likelihood estimates (MLEs) for parameters are normally distributed, the distribution for a subset of parameters is easily obtained by marginalisation 44. The mean and covariance matrices of a subset are simply the rows and columns of the mean and covariance matrices that correspond to the parameters of interest 44. These values are generally quite different from those obtained by fitting the subset of parameters directly. This allowed us to adjust for parameters that are known to influence disease risk, but for clustering and comparison we used marginalisation to solely consider: BMI, height, SBP, slow walking pace (versus brisk walking pace), regular drinker (versus rarely drink), and current smoker (versus never smoker). The procedure also ensures that each risk factor is represented by a single variable when clustering, reducing the potential for clustering to be dominated by a single risk factor (e.g. a categorical variable with d levels would otherwise be represented by d parameters when clustering).

Multivariate statistical tests and clustering metrics. Because maximum likelihood estimates for parameters, e.g. $\hat{\mu}_1$ and $\hat{\mu}_2$, are approximately normally distributed, statistical tests are easy to construct. For $\hat{\mu}_1 \sim N(\mu_1, \Sigma_1)$ and $\hat{\mu}_2 \sim N(\mu_2, \Sigma_2)$, if they have the same mean with $\mu_1 = \mu_2$, then

$$(\hat{\mu}_1 - \hat{\mu}_2)^T (\Sigma_1 + \Sigma_2)^{-1} (\hat{\mu}_1 - \hat{\mu}_2) \sim \chi^2(p),$$

where $p$ is the number of parameters 42. This was used to test the null hypothesis that the fitted parameters of diseases in men and women are the same, using the MLE estimates for the covariance matrices (Figs. 3 and 4). We also tested the null hypothesis that diseases in the same cluster have the same mean, by noting that

$$\sum_{g=1}^{N} \sum_{i \in C_g} (\hat{\mu}_i - \bar{\mu}_g)^T \Sigma_i^{-1} (\hat{\mu}_i - \bar{\mu}_g) \sim \chi^2\!\left(p \sum_{g=1}^{N} (n_g - 1)\right), \qquad (1)$$

where $\bar{\mu}_g$ is the mean of the fitted parameters in cluster $g$, $n_g$ is the number of diseases in cluster $g$ with members $C_g$, $N$ is the number of groups, and $p$ is the number of fitting parameters. After removing the 8 diseases in Fig. 4 with statistically significant differences between men and women at the 0.05 level after an FDR multiple-testing adjustment 43, we plotted the left-hand side of Eq. (1) versus $N$ to determine a minimum value of $N = 63$ at which there is no longer a statistically significant difference at the 0.05 level (Fig. 1). However, our main interest is in the similarity between risk factors for diseases, not whether they are statistically different. The left-hand side of Eq. (1) falls rapidly until $N \simeq 24$, suggesting that most of the variation is captured in the first 24 clusters. This "elbow criterion" 45 was used in Figs. 1 and 5. Presently there is no established method to determine how many clusters there should be 42. The log-likelihood has recently been calculated for the clustering model considered here 46, which uses the normally distributed MLEs and their covariances to assess the likelihood of diseases forming clusters with the same risk-factor associations (MLEs). For a normal (prior) distribution that places a low probability on large estimates for associations with risk factors, the hierarchical clustering model used here minimised the log-likelihood at between 22 and 25 clusters, depending on the prior's assumed covariance.
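As an illustration of this test, the following R sketch computes the chi-squared statistic for equal means for the same disease fitted separately in men and women. The fits `fit_men` and `fit_women` and the retained coefficient names are hypothetical; marginalisation amounts to taking the relevant rows and columns of the full fit:

```r
# Marginalisation: subset the full MLE vector and covariance matrix
keep <- c("bmi_std", "height_std", "sbp_std")   # plus e.g. smoking/drinking/pace terms
mu_m <- coef(fit_men)[keep];   S_m <- vcov(fit_men)[keep, keep]
mu_f <- coef(fit_women)[keep]; S_f <- vcov(fit_women)[keep, keep]

# Wald chi-squared test of equal means, as in the displayed formula above
d    <- mu_m - mu_f
stat <- as.numeric(t(d) %*% solve(S_m + S_f) %*% d)
pval <- pchisq(stat, df = length(d), lower.tail = FALSE)
```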
The distance between fitted parameters must reflect both their values and the uncertainty in their estimates, so that distances are smaller when the same estimates have larger covariances. Ideally, it will also measure similarities between the covariance matrices, and have a clear mathematical interpretation. This is true of the Bhattacharyya coefficient, which measures the similarity between probability distributions through their overlap. The Bhattacharyya distance is the negative logarithm of the result, which for two multivariate normal distributions is

$$D_B = \frac{1}{8} (\mu_1 - \mu_2)^T \Sigma^{-1} (\mu_1 - \mu_2) + \frac{1}{2} \ln\!\left(\frac{\det \Sigma}{\sqrt{\det \Sigma_1 \det \Sigma_2}}\right),$$

with $\Sigma = (\Sigma_1 + \Sigma_2)/2$. The first term is proportional to the $\chi^2(p)$ statistic that was used to test the null hypothesis of equal means ($\mu_1 = \mu_2$) in Figs. 3 and 4. As a consequence, the largest p values will tend to coincide with the smallest Bhattacharyya distances, but $D_B$ also incorporates extra information from the estimated covariance matrices to compare the shapes of the probability distributions. The minimum $D_B$ can be used to assign a partner to each disease (Fig. 3). We hierarchically clustered the 156 diseases using $D_B$ and the ward.D2 algorithm in the R software package. Diseases were assigned to 24 clusters, as suggested by the elbow criterion 45 and Fig. 1. The clustering is shown in Fig. 5, along with a heat map for the coefficients of each risk factor associated with each disease, mapped onto a 0-1 scale using an inverse logit function.

Figure 1. The "elbow" in the weighted sum of squares of differences in the fitted parameters in each cluster (Eq. 1), at ≃ 24 clusters, qualitatively indicates how many clusters to keep. With 63 or more clusters there are no statistically significant differences at the 0.05 level between fitted parameters in each cluster (inset).

Sensitivity analysis. Informative clusterings should be insensitive to small changes in the data or to the models used for analysis. In addition to visual comparisons, we quantified the differences between two clusterings of disease by considering the pairs of diseases that remain in the same cluster, independent of the clustering algorithm. Specifically, consider clustering the same set of diseases into e.g. 24 groups by two different algorithms A and B, such as using coefficients estimated from two different proportional hazards models. Take the observed number of all possible disease pairs within clusters in A as $n_A$, the equivalent number in B as $n_B$, and the number common to both as $n_{AB}$. The maximum proportion of disease pairs that are clustered together by both A and B is $p_{AB} = n_{AB}/\min(n_A, n_B)$. In practice, the sensitivity analyses produced clusterings with more disease pairs, and in this paper $p_{AB}$ is the proportion of all clustered disease pairs that remain clustered together in the sensitivity analysis. Similar clusterings have $p_{AB} \simeq 1$, and unrelated clusterings have $p_{AB} \simeq 0$. Individual diseases that are particularly sensitive to the clustering procedures can be identified as those with no other diseases that are common to both their clusters (in A and B).
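The clustering machinery just described can be sketched as follows, assuming lists `mu` and `Sg` holding each disease's marginalised coefficient vector and covariance matrix, and, for the overlap measure, two hypothetical cluster assignments `clA` and `clB`:

```r
# Bhattacharyya distance between two Gaussian fits, per the formula above
bhattacharyya <- function(m1, S1, m2, S2) {
  S <- (S1 + S2) / 2
  d <- m1 - m2
  0.125 * as.numeric(t(d) %*% solve(S) %*% d) +
    0.5 * log(det(S) / sqrt(det(S1) * det(S2)))
}

n  <- length(mu)
DB <- matrix(0, n, n)
for (i in seq_len(n)) for (j in seq_len(n))
  DB[i, j] <- bhattacharyya(mu[[i]], Sg[[i]], mu[[j]], Sg[[j]])

hc <- hclust(as.dist(DB), method = "ward.D2")  # hierarchical clustering on D_B
cl <- cutree(hc, k = 24)                       # 24 groups, per the elbow criterion

# Overlap of within-cluster pairs between two clusterings (p_AB = n_AB / min(n_A, n_B))
pairs_of <- function(cl) {
  same <- outer(cl, cl, "==")
  which(same & upper.tri(same))      # linear indices of same-cluster pairs
}
p_AB <- length(intersect(pairs_of(clA), pairs_of(clB))) /
  min(length(pairs_of(clA)), length(pairs_of(clB)))
```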
Results

Diseases were selected on the basis of statistical and clinical criteria, as outlined in the Methods and summarised in Table 1. All results describe diagnoses that were an individual's first primary diagnosis in an ICD-10 chapter. This compromise between reducing the risk of confounding by prior disease and retaining sufficient cases was tested by a sensitivity analysis, as discussed later. Figure 2 shows the number of diseases with statistically significant risk factors, which increases with the number of cases as maximum likelihood estimates become increasingly accurate and identify smaller effect sizes. Overall there were smaller proportions of statistically significant associations with injuries or symptoms of unknown origin. There were similar numbers of chronic and acute diseases with 230 or more cases, but rarer diseases with 49-230 cases were almost twice as likely to be acute than chronic. Despite infectious diseases …

Identification of disease. Each disease present in both men and women was assigned to the disease with the minimum Bhattacharyya distance between their estimated associations with potential risk factors. The proportion of diseases matched to their equivalent disease in the opposite sex is plotted in Fig. 3, grouped as acute, chronic, and infectious diseases, and symptoms of unknown origin (R-codes). For 38% of the 172 diseases considered, the nearest disease measured by Bhattacharyya distance was the equivalent disease in the opposite sex, and for 80% of diseases the equivalent disease was among the nearest 8 diseases (the nearest 5%).
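The matching behind these percentages can be sketched by reusing the distance matrix `DB` from the previous sketch, with a hypothetical vector `equiv` giving the index of each disease's opposite-sex counterpart:

```r
# For each disease, find the nearest other disease by Bhattacharyya distance
diag(DB) <- Inf                      # exclude self-matches
nearest  <- apply(DB, 1, which.min)
mean(nearest == equiv)               # fraction matched exactly (38% in the text)

# Fraction whose opposite-sex pair lies among the nearest 8 diseases
top8 <- sapply(seq_len(nrow(DB)), function(i) rank(DB[i, ])[equiv[i]] <= 8)
mean(top8)
```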
Differences between men and women. The proportions of diseases with statistically significant differences in their associations with risk factors are shown in Fig. 3. Approximately 5% of diseases had statistically significant differences between men and women at the 0.05 level after an FDR multiple-testing adjustment 43, and this dropped to ∼1% when BMI was excluded as a risk factor. The risk factors responsible for statistically significant differences between men and women are considered in Fig. 4. The heat map indicates whether a risk factor is associated with a higher risk for women (red) or a lower risk (white), with orange neutral. Because BMI appeared to have different risk associations in men and women, it was removed and the analysis rerun. Removing BMI reduced the number of diseases with statistically significant risk factors (after a Bonferroni adjustment) from 172 to 156. Figure 3 shows that the proportion of diseases with statistically significant differences between men and women reduced from ∼5% to ∼1%, and Fig. 4 shows that those diseases were arthrosis of the knee and kidney stones. The differences did not appear to be solely due to BMI (Fig. 3).

Figure 3. The proportion of diseases whose equivalent disease in the opposite sex has the smallest Bhattacharyya distance is plotted in green. The proportion of diseases with statistically significant differences between men and women (at the 0.05 level, after an FDR multiple-testing adjustment) is plotted in red. The differences are mainly due to different associations with BMI (inset).

Overall we found strong evidence for sex-specific associations for some diseases affecting men and women, especially for BMI. Figure 3 shows that many diseases could be identified by their associations with well-known risk factors. Presuming the associations reflect common aetiological pathways, clustering by them may yield clusters of diseases with similar aetiologies. Hierarchical clustering was used to capture and visualise similarities between the risk factors for disease, and generated a hierarchical structure of increasingly similar clusters. The dendrogram is coloured to indicate 24 groups. The clustering is shown in Fig. 5, along with a heat map for the risk factors associated with each disease. This allows us to simultaneously visualise how diseases cluster and the associations responsible for the clusterings. When considering Fig. 5 it is useful to note that: (1) disease descriptions with the same first digit of ICD-10 code are coloured the same, e.g. I50 and I70 are both coloured black; (2) if the same disease in men and women clusters together, then it is likely to have a distinctive combination and magnitude of associations with risk factors; (3) any diseases connected by a tree with small depth will have a quantitatively similar combination of associations; and (4) the heat map indicates a cluster's association with risk factors, with red associated with higher risk, white with lower risk, and orange neutral. For example, considering Fig. 5, chronic obstructive pulmonary disease, lung cancer, arterial embolism, and atherosclerosis are clustered closely together (groups 1 and 2), and are identified primarily by the increased risk associated with smoking and walking slowly, with the magnitude of associations producing the finer subgrouping.

Symptoms, signs, and abnormal clinical and laboratory findings, not elsewhere classified. Chapter XVIII of ICD-10 is devoted to "Symptoms, signs, and abnormal clinical and laboratory findings, not elsewhere classified" 2,3, and accounted for 11% of primary hospital episodes in the UK Biobank data. Despite their uncertain aetiology, 60 of the 98 diseases in men or women had statistically significant risk factors at the 0.05 level after an FDR multiple-testing adjustment, and 36 were statistically significant at the 0.05 level after a Bonferroni adjustment. Ten diseases that satisfied the FDR-adjusted proportional hazards test and were also present in both men and women were included in the clustering studies, and for most of these their risk associations were similar in men and women (Fig. 5, R-coded disease descriptions).

Confounding by prior disease. Because the same individual's data can appear every time a hospital episode has a primary disease from a different ICD-10 chapter, there is potential for confounding by prior disease. To test whether this influenced the clustering results, we took the 24 clusters in Fig. 5 and refit the proportional hazards model for each disease, but now excluded data with any prior diseases from the same cluster as the disease being studied. This prevented the clustering of diseases from different chapters being influenced by repeat hospital episodes from the same individuals. Despite having fewer cases, the resulting clustering is almost identical to Fig. 5 (see Supporting Information, Fig. 2), with all pairs of clustered diseases continuing to cluster with each other. This strongly suggests that the clusters were driven by similarities in risk factors as intended, not by sequences of prior diseases.

Figure 5. Diseases in men and women tend to cluster adjacently. Labels are coloured by their first ICD-10 digit, and the dendrogram is coloured with the top 24 groups in the cluster (see Fig. 1). Associations with potential risk factors are indicated by the heat map, with red an association with higher risk, white with lower risk, and orange neutral. The figure was produced with R 34 using packages "dendextend" 40 and "gplots" 41.

By assuming a linear relationship between the continuous measurements for height, BMI, and SBP, it was possible to consider diseases with fewer cases than would be needed by a more complex model. To test the sensitivity of the clusterings to this linear approximation, we refit a proportional hazards model to the same set of diseases but with sex-specific tertiles for height, BMI, SBP, and year of birth.
Before clustering we again used marginalisation to compare a baseline of non-smokers, non-diabetics, rarely drinking, and minimum tertiles for height, BMI, and SBP, with the parameters for regularly smoking, diabetes, regularly drinking, and maximum tertiles for height, BMI, and SBP. We did not require fits to satisfy any statistical tests, because the smaller numbers of cases in each tertile were expected to make the fits poor for some diseases. As shown in the Supporting Information, Fig. 3, the resulting clusters are similar, with 54% of all pairs of clustered diseases remaining together after reanalysing with tertiles. Arterial embolism and thrombosis (I74) was not included because there were too few cases in women when the analysis used tertiles. Ten diseases were most sensitive to the model being fit, having no other disease clustered with them in both clusterings. These were: H25.8-Other senile cataract (men), K42-Umbilical hernia (men), K59-Constipation (men), K61-Abscess of anal and rectal regions (men), L97-Ulcer of lower limb (women), M15-Polyarthrosis (men), M51-Other intervertebral disc disorders (women), R11-Nausea and vomiting (men), R29.6-Tendency to fall (men), and R69-Undetermined causes of morbidity (men). Diseases with statistically significant differences between men and women were also similar (see Supporting Information, Fig. 4). The differing analyses found 4 diseases common to both studies with statistically significant differences at the 0.05 level after an FDR multiple-testing adjustment 43. Without BMI as a risk factor, both studies found that kidney stones (N20) continued to have different risk associations for men and women.

Discussion

The broad systematic study of sex-specific diseases, the specificity of observed associations, and the shared pathogenesis of many clustered diseases offer potential new insights into the clinical presentation and aetiopathology of disease, some of which are explored below.

Sex differences and epidemiological practice. There is increasing recognition of differences between men and women in the incidence, diagnosis, prognosis, and treatment of disease 25,47. Sex-dependent risk factors have also been found for associations with cardiovascular disease 48. Here we find a substantial proportion of diseases with different risk associations between men and women, for BMI in particular (Figs. 3 and 4). Further work is needed to understand the causes and implications of different risk associations, but the sex-dependent differences for BMI in particular are sufficiently clear that they should be accounted for in future studies. The proportional hazards model failed more frequently as the number of cases increased (Fig. 2). For larger data sets in particular, the model should be tested, and modified as required. With sufficient data, alternative methods may need to be considered.

Specificity of associations. Despite 5% of diseases having substantially different associations with risk factors in men and women, 38% of diseases were correctly identified with their equivalent disease in the opposite sex, and 80% had their equivalent disease among the nearest 8 (of 172) diseases. This would only be possible if men and women had similar quantitative associations with risk factors for a given disease, and if these are sufficiently distinct from those for other diseases. The influence of risk factors on disease onset seems surprisingly specific in many cases, and with more risk factors this specificity may increase.
For example, if the 7 risk factors each took one of three values, e.g. tertiles, there would be $3^7 = 2187$ possible combinations, but if the number of risk factors were doubled from 7 to 14, the number of combinations ($3^{14} = 4{,}782{,}969$) would exceed 4 million. In principle, it may be possible to define diseases by their response to a specific set of risk factors.

Pathways for disease. An objective was to explore whether clustering by common risk factors could help identify pathways for disease. For example, renal failure, hyperkalaemia, and ulcers of lower limbs in men are clustered in group 8, along with other septicaemia in women. Renal failure can increase the risk of ulcers of the lower limb 49,50, and hyperkalaemia can be caused by kidney disease. However, a sensitivity analysis excluded prior diseases from the same cluster prior to fitting the proportional hazards model, and produced an almost identical clustering of diseases, consistent with clusters being driven by associations with risk factors (as intended), not prior disease. One interpretation is that the disease cluster is driven by a common pathway such as atherosclerosis, with some associations being risk factors for it, others symptoms of it, and the diseases a consequence of it. This could produce (non-causal) associations between subsequent hospital admissions for different diseases. In contrast, cardiovascular diseases such as arterial embolism, pulmonary embolism, and atrial fibrillation are from the same ICD-10 chapter, but have different underlying causes, and are found in different clusters with quantitatively different risk associations. Cardiovascular diseases appear in several different clusters, suggesting they are influenced by a range of different pathways for disease onset or severity. Arterial embolism and atherosclerosis are clustered with lung cancer in group 1, and adjacent to chronic obstructive pulmonary disease (COPD) in group 2, suggesting a similar and possibly smoking-related cause. Pulmonary embolism is in a group of 9 diseases (group 4) that includes gallstones, pain in limb, and polyarthrosis in women. Gallstones have previously been associated with a higher risk of pulmonary embolism 51, which was attenuated after cholecystectomy 51. Heart failure and unspecified stroke in women appear in a large group (group 6) of 18 diseases, in which 11 of the remaining 15 diseases involve infections. Atrial fibrillation has sufficiently specific associations to be clustered on its own in group 16. Non-rheumatic aortic valve disorders, angina pectoris, and cerebral infarction appear in group 21, along with spondylosis, other spondylopathies, and senile cataracts in men. Cervical spondylosis (CS) has previously been associated with a higher risk of posterior circulation infarcts 52, and with acute coronary syndrome 53. Unspecified stroke, hypotension, and oesophageal varices, all in men, are in the adjacent group 20, along with hypo-osmolarity and hyponatraemia, and ulcer of lower limbs in women. The majority of diseases involving infections are in a large cluster (group 6), described above in the context of cardiovascular diseases. The clustering suggests that susceptibility or severity could be mediated by a common underlying pathway. There is nothing unusual about the associations with walking slowly, diabetes, high BMI, and smoking, suggesting that the specific strengths of those associations are producing the cluster.
Four other types of infections affecting both men and women (eight diseases) are in groups 9 and 10, and appear to have weaker associations with smoking and BMI than those in group 6.

Identification and re-classification of disease. Many diseases of uncertain aetiology (R-coded diseases in ICD-10) had statistically significant risk factors, often sufficiently specific for equivalent diseases in men and women to cluster adjacently (Fig. 5). This could be explained by hospital referrals being influenced by specific risk factors and symptoms, as specified by medical training or guidelines. Alternatively, the quantitative disease-specific patterns of associations between risk factors and R-coded diseases could reflect an underlying pathophysiological cause. From the perspective of the Bradford Hill criteria 54,55 of strength of association, consistency, specificity, and temporality: there were strong, dose-related, statistically significant, disease-specific, subsequent responses to risk factors in both men and women. Regarding analogy, plausibility, and coherence: like all diseases, evidence of disease is sufficiently strong and specific for hospital admission and identification with one of nearly 100 R-coded diseases. R-coded diseases have rarely been discussed or studied, so it is worth examining the diseases with which they cluster in detail: (1) Nausea and vomiting clustered with specified intestinal infections, suggesting a possible infectious origin. The cluster also contains anaemia, and, in women only, tendency to fall, other interstitial pulmonary diseases, hypotension, and viral infections of unspecified site. (2) Change in bowel habit was clustered with constipation in group 12. (3) Abnormal weight loss was clustered with fractures of the femur, bronchiectasis, and coeliac disease in group 15. Weight loss is a potential cause of fractures mediated by osteoporosis, but similar risk associations for weight loss and femoral fractures would suggest that weight loss could be a symptom of an unidentified underlying process. (4) Abnormal findings on imaging of the lung, and haemoptysis, were clustered with pancreatic and bladder cancers, and rectal polyp, in group 19. (5) Other and unspecified abdominal pain were clustered with gastritis and duodenitis in group 15. We are unaware of any indirect reasons why the risk factors for diseases with such similar symptoms would coincide, but the group also contains four cataract diseases, which seem most likely due to coincidental similarities between the risk associations. (6) Other chest pain and undetermined causes of morbidity are in cluster 24, a group that also includes back pain, intervertebral disc disorders, other joint disorders not classified elsewhere, fractures of the lower leg, and benign neoplasms of the colon, rectum, and anus. The links between these undiagnosed causes of pain and morbidity and diagnoses of back and intestinal problems may be relevant for improving the accuracy of diagnoses. (8) A few other R-coded diseases are included, but these diseases appear in different clusters for men and women, and are not discussed further.

Limitations. Many of the limitations here are inherent to any cohort study, but some are accentuated by the need to simultaneously study multiple diseases.
Disease selection: Uncertainty about the history of treatment decisions made it impractical to identify and exclude diseases whose hospital episode rates have geographical or temporal variations due to changes in diagnosis or treatment practices, such as a change in the reported incidence of sepsis due to changes in coding 56. Instead we relied on statistical tests to detect when large variations in episode rates were causing statistical models to fail or lose power. Cohort: Due to the minimum age of participants in UK Biobank, we can only study diseases of old age, and the UK Biobank cohort is not representative of the UK or global population. Hospital referrals, diagnoses, and recordings of diagnoses are all biased by clinical procedures and training. Model: Although a sensitivity analysis suggested the clustering results were insensitive to the model, a larger cohort with more cases would allow a more complex statistical model, or the inclusion of more risk factors. Although the application of clustering methodologies to epidemiological data is becoming popular, methods to objectively determine the optimum number of clusters for a particular application have yet to be established. Most importantly, we found that disease identification and clustering were sensitive to the number of diseases, which in turn was surprisingly sensitive to the fitted model through the multiple-testing adjustments used to determine which diseases to include. Causal associations: We aimed to explore associations between diseases, but further work is needed to determine if the observed associations are causal.

Strengths of methodology. Diseases were assessed and selected prior to the study, on the basis of clinical and epidemiological criteria. Established and interpretable statistical methodologies were used in new but statistically rigorous ways. Risk associations were calculated before clustering, providing advantages in terms of modelling and interpretation of results. Proportional hazards methods provided access to several decades of epidemiological experience, and are familiar to the medical community. Analyses were sex-specific, used (left-truncated) age as a time variable and multiple adjustment to reduce the influence of correlations between risk factors, age in particular, and were censored by the first occurrence of cancer (other than non-melanoma skin cancers). Estimates were adjusted for likely confounders, and multiple adjustment will reduce the influence of correlations between risk factors on subsequent clustering. The resulting estimates are normally distributed, allowing rigorous (multivariate) statistical tests to compare the equivalence of risk factors for different diseases, and their marginal distributions are easy to calculate. This allowed adjustment for many known risk factors, with a subsequent focus on a subset of the most biologically relevant factors by using marginalisation 43 to remove parameters of lesser interest. The procedure also ensured that each risk factor was represented by a single variable when clustering, avoiding clustering being dominated by e.g. a categorical variable with many different categories.
Rigorous statistical tests were used to compare different diseases' risk factors; clustering results were consistent with statistical tests and relatively insensitive to changes in the proportional hazards model, and sensitivity analyses found no evidence that clustering was driven by prior disease. Distances between fits used the estimated parameters and their covariance matrices, retaining as much information from the data as possible. Hierarchical clustering is easily visualised, and may help inform hierarchical disease classifications. Diseases were confirmed to cluster into clinically meaningful groups.

Summary

The associations of common risk factors with disease incidence were used to characterise over 400 diseases in men and women, and to identify clusters of 78 diseases that were present in both sexes with statistically significant risk factors after a Bonferroni multiple-testing adjustment. We aimed to incorporate as much clinical and epidemiological knowledge as possible, and to adopt analyses that are easily interpretable, familiar to the medical community, and underpinned by a rigorous statistical methodology. The broad perspective gained from the simultaneous study of several hundred diseases emphasises that BMI can have a quantitatively different influence on disease risk for men and women, and that proportional hazards models are more likely to fail with more cases. Both of these important points should be considered in relevant epidemiological studies. We found that the associations of common risk factors with disease incidence were sufficiently specific to identify the equivalent disease in the opposite sex for 38% of the 172 diseases studied here, and 80% had their opposite-sex pair among the nearest 8 diseases, suggesting that quantitatively similar risk factors may indicate similar underlying disease. This hypothesis was supported by hierarchical clustering, which tended to produce clinically similar clusters of diseases, and suggested several plausible but presently unconfirmed associations between diseases. Some patterns of multimorbidity, such as a cluster of diseases linked to renal failure, are likely to be driven by common disease pathways and risk factors. All the diseases studied here are common causes of hospital admission, representing a substantial burden of ill health. We highlighted several symptoms of unknown cause (ICD-10 R-coded diseases) that appear to be linked with more clearly diagnosed disease, and emphasised the potential for hospital admissions to be biased by known risk factors for disease. Overall, we have developed a methodology and demonstrated a proof of principle for clustering diseases in terms of their associations with established and easily measured risk factors. Future work is intended to optimise the approach, benchmark it in different datasets, and explore applications in diagnosis, prognosis, aetiological understanding, and drug development.
Molecular Dynamics Simulation of High-Temperature Creep Behavior of Nickel Polycrystalline Nanopillars As Nickel (Ni) is the base of important Ni-based superalloys for high-temperature applications, it is important to determine the creep behavior of its nano-polycrystals. The nano-tensile properties and creep behavior of nickel polycrystalline nanopillars are investigated employing molecular dynamics simulations under different temperatures, stresses, and grain sizes. The mechanisms behind the creep behavior are analyzed in detail by calculating the stress exponents, grain boundary exponents, and activation energies. The novel results in this work are summarized in a deformation mechanism map and are in good agreement with Ashby's experimental results for pure Ni. According to the deformation diagram, dislocation creep dominates the creep process when applying a high stress, while grain boundary sliding prevails at lower stress levels. These two mechanisms can also be coupled together in a low-stress but high-temperature creep simulation. In this work, the dislocation creep is clearly observed and discussed in detail. Through analyzing the activation energies, vacancy diffusion begins to play an important role in enhancing the grain boundary creep when the temperature is above 1000 K. Introduction Nanocrystalline (NC) metals possess many different mechanical behaviors in comparison with traditional coarse-grained metals, e.g., the Hall-Petch effect and the inverse Hall-Petch effect, which have already been studied by several researchers [1][2][3][4]. For applications at high-temperature conditions, the creep behavior is the most important one that should be carefully researched and thoroughly understood. The creep deformation in polycrystalline metals is well described by the Bird-Dorn-Mukherjee relation [5], given below as Equation (1):

$\dot{\varepsilon} = \frac{A D_0 G b}{k_b T} \left(\frac{b}{d}\right)^p \left(\frac{\sigma}{G}\right)^n \exp\left(-\frac{\Delta Q}{k_b T}\right)$ (1)

In this equation, A is a dimensionless constant, D 0 is the diffusion constant (frequency factor), and b is the Burgers vector. G, k b , and ∆Q are the shear modulus, the Boltzmann constant, and the activation energy for a thermally activated process, respectively. p represents the exponent of the grain size d and n the exponent of the applied stress σ. T is the temperature. By varying the temperature T, the applied stress σ, and the grain size d in a creep simulation, different creep behavior can be observed. On the nanoscale, the diffusion process and the grain boundaries influence the creep mechanisms in a significant way, therefore many studies on this topic have already been published [6][7][8][9][10][11][12]. The exponent pair (n, p) for stress and grain size was used to determine the creep mechanisms. For example, n = 1, p = 2 is characterized as Nabarro-Herring creep (lattice diffusion) [10], and n = 1, p = 3 is Coble creep [6]. When n > 3, the plastic deformation is dominated by dislocations. Main Variables In this study, the main variables were the mean grain size d of the model, the temperature T, and the applied stress σ. The initial size of the simulation boxes was 50 × 50 × 50 nm³. The number of grains varied from 10 to 30 in five models, see Figure 1. Approximating grains as spheres, the corresponding averaged diameter d̄ can be calculated as $\bar{d} = \sqrt[3]{3V/(4\pi N)}$, in which V and N are the volume of the model and the number of grains in the model, respectively. Table 2 shows the grain size of each model.
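As a quick illustration of the sphere approximation, the sketch below evaluates the averaged-diameter formula, exactly as printed above, for a 50 × 50 × 50 nm³ box with 10-30 grains; the helper name is ours.

```python
# Average grain diameter from box volume and grain count, following the
# sphere approximation in the text (a sketch; formula as printed above).
import math

def mean_grain_diameter(volume_nm3: float, n_grains: int) -> float:
    # d_bar = (3V / (4*pi*N)) ** (1/3)
    return (3.0 * volume_nm3 / (4.0 * math.pi * n_grains)) ** (1.0 / 3.0)

V = 50.0 ** 3
for N in (10, 15, 20, 25, 30):
    print(N, round(mean_grain_diameter(V, N), 2), "nm")
```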
The melting point T m of pure Ni was simulated in this work as 1720 K, which is close to the well-known value of 1728 K. The investigated temperatures are 500, 800, and 1200 K for all five models. Additionally, in order to obtain more detailed information about the influence of temperature and to analyze the activation energy, the model M1 was simulated on a finer temperature mesh from 500 to 1200 K with 100 K increments. All the applied stresses in the creep simulations were scaled to the tensile strength R m , varying from 0.4 R m to 0.8 R m . Main Process of Simulations The models were initially equilibrated at the selected temperatures, i.e., 500, 800, and 1200 K, in an isothermal-isobaric (NPT) ensemble. Then the nano-tensile simulations were performed. The nano-creep simulations were sequentially executed with the applied uniaxial stress levels determined by the tensile strength. The applied boundary conditions of the nano-tensile and -creep simulations were periodic in the x-direction and shrink-wrapped (not periodic but encompassing all the atoms inside the surface) in the y- and z-directions. Nano-Tensile Test Simulations In order to obtain the strengths R m of all models M1-M5, nano-tensile tests at different temperatures were carried out prior to the nano-creep simulations. Figure 2a shows the stress-strain curves of M1 at different temperatures. At the beginning, there is a linear elastic region, and the slope of this section (strain up to 0.5%) is the Young's modulus, 221.25 GPa at 500 K. Compared with Table 1, we can see that the difference is very small. This indicates that the model is almost isotropic.
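The elastic-slope estimate is a simple linear fit over the initial strain window; the sketch below reproduces the idea on synthetic data, since the raw simulation output is not shown here.

```python
# Young's modulus as the slope of the initial linear region of the
# stress-strain curve (strain up to 0.5%). Synthetic data, assumed format.
import numpy as np

strain = np.linspace(0.0, 0.005, 50)                                  # dimensionless
stress = 221.25 * strain + np.random.default_rng(1).normal(0, 0.01, 50)  # GPa

E = np.polyfit(strain, stress, 1)[0]                                  # slope of fit
print(f"Young's modulus ~ {E:.2f} GPa")
```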
In Figure 2a an oscillation of the resulting tensile stress can be observed in the stress-strain curve, with the period varying from 14 to 18 ps (strain from 1.4 to 1.8%). This oscillation is considered to be caused by the non-periodic boundary conditions in the y- and z-directions, in combination with the elastic pulse when starting the tensile simulation. Furthermore, the maximum amplitude of the first cycle, at a strain of approximately 0.9%, shows a higher stress at 1200 K than at 800 K and 500 K, indicating a temperature influence on this effect. Further analysis shows that the oscillation decays rapidly; therefore, its influence on the ultimate tensile strength is minor. Figure 2b,c display the temperature influence on the tensile strength of M1. The tensile strength of model M1 decreases from 3.08 to 2.25 GPa as the temperature increases, which is due to the softening of the material at higher temperatures. Figure 2c demonstrates the relation between the grain size and the tensile strength R m . The tensile strength shows no significant dependence on the grain size in the investigated range. This suggests that the relevant grain sizes lie around the transition regime between the Hall-Petch and the inverse Hall-Petch effect. Nano-Creep Simulations As shown in Figure 3a, the creep strain increases with the simulation time.
At the beginning of the simulation, there are some fluctuations of the creep strain, because the stress was applied to the model within a short time interval. The second phase of the creep process is steady and linear, and the creep rate ε̇ = dε/dt in this phase is the minimum creep rate during the whole process. Under an applied stress σ = 0.7 R m , the creep process of model M1 steps into the third phase at 200 ps. When the applied stress increases to 0.8 R m , the second phase is very short and the creep process steps almost directly into the third phase. The second creep phase, which can be clearly observed in this work, plays the most important role during the creep process. Therefore, a simulation time of around 500 ps is sufficient to investigate the dominant creep mechanisms. The minimum creep rate ε̇ is of great importance to the creep properties. In Figure 3b, the relation of ε̇ and σ at different temperatures T is shown in a log-log scaling diagram. The higher the applied stress, the faster the model creeps. Furthermore, it can also be seen from Figure 3b that the creep rate ε̇ increases with temperature. According to the power-law relationship between the strain rate ε̇ and the applied stress σ from the Bird-Dorn-Mukherjee relation (Equation (1)), the stress exponent is expressed as n = ∂log ε̇/∂log σ, i.e., n is the slope in the log-log plot of ε̇ against σ. As displayed in Figure 3b, the relationship between log ε̇ and log(σ/R m ) is not linear. Therefore, the data were cut into two regions, a low-σ region and a high-σ one, and the calculated exponents are shown in Figure 3c and in Table 3.
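The exponent extraction described here is a slope fit of log ε̇ against log(σ/R m ) in each stress region. The sketch below shows the recipe on synthetic rates generated with assumed exponents (n = 3 and n = 8), not the paper's data.

```python
# Stress exponent n = d(log eps_dot)/d(log sigma) from log-log fits,
# split into low- and high-stress regions as in the text.
import numpy as np

sigma = np.array([0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80])  # sigma/R_m
low, high = sigma <= 0.60, sigma > 0.60

# Synthetic minimum creep rates built with assumed exponents (1/s).
eps_low = 2e7 * (sigma[low] / 0.40) ** 3.0      # n = 3 in the low-stress region
eps_high = 1.5e8 * (sigma[high] / 0.65) ** 8.0  # n = 8 in the high-stress region

n_low = np.polyfit(np.log10(sigma[low]), np.log10(eps_low), 1)[0]
n_high = np.polyfit(np.log10(sigma[high]), np.log10(eps_high), 1)[0]
print(f"n (low stress) ~ {n_low:.1f}, n (high stress) ~ {n_high:.1f}")
```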
From Figure 3c and Table 3, it can be seen that when applying high stresses at temperatures between 500 and 1200 K, the dominant mechanism is power-law creep, well known as dislocation creep, because the stress exponents are n > 3. For creep tests at low stresses, stress exponents are larger than 2 and increase with temperature. When T < 700 K, the creep mechanism for low stresses is identified as grain boundary sliding, as the stress exponent is 2.6 < n < 3 [12]. When the temperature is above 800 K, the creep mechanism is supposed to be a coupling of grain boundary sliding and dislocation nucleation and propagation. The yellow line fitted to the stress exponents of the low-σ regions shows a deviation for the two points at 1100 and 1200 K. This is due to fewer data points being available for fitting low stress exponents at high temperature, e.g., 6 points at 500 K but 3 points at 1200 K in Figure 3b. However, the yellow line can still provide an approximate prediction of the stress exponent n of the low-σ region at higher temperatures. It is interesting to emphasize that the stress exponents of the low- and high-stress parts intersect at around (1400 K, n = 4), as shown in Figure 3c. This means that the creep mechanism above 1400 K is independent of the stress in the range 0.4 R m to 0.8 R m , and the stress exponent n = 4 indicates that grain boundary sliding is coupled with dislocation slip (see Section 4.3). Thermally Activated Mechanisms The Arrhenius equation was applied here to analyze the temperature influence on the creep rate. Figure 4 shows the data and fitted lines for ln ε̇ against 1/(k b T), which are derived from the Arrhenius equation. The slope of every fitted line is the free activation energy ∆G of the creep process.
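The activation-energy extraction is likewise a straight-line fit, of ln ε̇ against 1/(k b T). The sketch below uses synthetic rates constructed to correspond to a 0.5 eV process, so the fit recovers the assumed value.

```python
# Activation energy from an Arrhenius fit: ln(eps_dot) vs 1/(k_B*T);
# the slope gives -dG. Synthetic data for a ~0.5 eV process.
import numpy as np

k_B = 8.617e-5                               # eV/K
T = np.arange(1000, 1300, 100.0)             # K, high-temperature branch
eps_dot = 1e12 * np.exp(-0.5 / (k_B * T))    # 1/s, made-up pre-factor

slope = np.polyfit(1.0 / (k_B * T), np.log(eps_dot), 1)[0]
print(f"activation energy ~ {-slope:.2f} eV")
```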
From Figure 4a, the creep rate at a given stress can be divided into two different temperature ranges. The turning point is around 900-1000 K for all applied stresses. It is assumed to be the thermal activation of vacancy diffusion, as Aidhy et al. [34] reported that vacancies are immobile up to 700 K and diffusing at 1200 K. At high temperatures, the activation energy for the accelerated creep process is around 0.5 eV, which is slightly lower than the 0.8 eV reported by Swygenhoven [18]. This might be caused by a coupled mechanism with grain boundary diffusion or dislocation gliding. Deformation Diagram for NC Ni Although the creep rate in the MD simulations is around 8 orders of magnitude faster than in experimental results, which is the typical timescale and/or length scale problem of MD simulations, the simulation results reveal the mechanisms of plastic deformation when the parameters are normalized to dimensionless quantities. In order to compare our work with experimental results, all plastic deformation cases were collected and a deformation diagram for NC Ni was drafted in this work, as shown in Figure 5. The applied stresses were normalized to the corresponding tensile strength R m at each temperature. The green, blue and red regions were divided by certain (σ, T) points as investigated in this work, which represent grain boundary sliding, dislocation creep, and their coupling, respectively. It is clear that at a low temperature and with a low stress, the creep process is dominated by grain boundary sliding, because dislocations are difficult to nucleate at grain boundaries and to propagate into the grain. However, at a high stress level, dislocation nucleation and gliding is the dominant mechanism. There is a common area (the blue area in Figure 5) where both dislocation and grain boundary sliding mechanisms become visible and make comparable contributions to the creep process, whereas the red and green areas depict that one mechanism is dominant (but not exclusive). We propose that the transition from grain-boundary-sliding-dominated creep to dislocation-dominated creep is narrow at lower temperature, e.g., the distance between the blue and the red line is smaller at 700 K than at 800 K. The conjunction point at around 0.68 R m approximately represents where the transition between grain boundary sliding and dislocation creep is rarely observable. However, for a creep process with a coupled mechanism, it is rather difficult to accurately determine the contribution of each mechanism. Hence, the region of the coupled mechanism represents conditions where grain boundary sliding and dislocation creep make comparable contributions to the creep process, which will be discussed in Section 4.3. Figure 6 is the famous Ashby map for Ni with a grain size of 32 µm [16]. The area in the magenta box shows the region of interest with the mechanisms found in this work. It is significant that our result is comparable to the Ashby map.
In this work, the lowest normalized stress σ/G = 1.16 × 10⁻² for 0.4 R m at 1200 K is still higher than the stress corresponding to dislocation creep and grain boundary creep in the Ashby map. Because of the size effect, the yield strength and tensile strength are almost 7 times higher than typical experimental results, e.g., 368 MPa at 973.15 K [35]. This is presumed to be the reason why a comparable deformation diagram is obtained at a higher stress level. As we have already discussed the tensile simulations in Section 3.1, the main parts of this section are dedicated to dislocation creep, grain boundary sliding, and their coupling. Creep through Dislocations In Figure 7, snapshots of model M1 from different creep processes at the moment of 100 ps are shown. It is significant that at a higher stress level (refer to the green area in Figure 5), many more dislocations and stacking faults appear. From Figure 7c for 1200 K and 0.4 R m and Figure 7d for 1200 K and 0.65 R m , intensive grain boundary sliding is also observed. Referring to Table 3, the calculated stress exponents n for high stress are 13.8 and 6.8 for 500 and 1200 K, respectively. Combined with Figure 7, there is no doubt that dislocation nucleation and propagation dominate the creep process at a high stress level. Besides, through the dislocation extraction algorithm (DXA) [36] in OVITO, we observed dislocations interacting with stacking faults and also with other dislocations, as shown in Figure 8. For example, in Figure 8a, the moving dislocation was first blocked by a two-atom-layer stacking fault and then went through the stacking fault with the glide plane jumping by one atom layer. Figure 8b illustrates a dislocation meeting a vacancy. One atom, the nearest neighbor to the vacancy, diffused into the dislocation half plane so that the dislocation line has a sharp peak at that point. However, the dislocation could not climb to the other glide plane, because this peak vanished and the dislocation kept moving in the next configuration. This is probably because the mobility of the vacancy is still limited by the temperature.
Creep Through Grain Boundary Sliding The snapshots in Figure 9 show different moments of the creep process of model M1 at 1200 K and 0.4 R m . FCC and HCP structures are presented in green and red, respectively. Other structures are shown in gray. Some of the gray clusters of atoms inside grains are vacancies, which have been discussed in Section 3.3. From Figure 9a-c, the boundaries of two grains, which can clearly be seen in the marked ellipses, were moving out of sight. At 100 ps, dislocations appear inside some grains (black circles in Figure 9b) and are elongated in the following snapshots. It is obvious that the dislocations nucleated at the grain boundary and then elongated inside the grain until they were impeded by other dislocations or the opposite grain boundary. Additionally, two creep processes were compared with each other; details are shown in Table 4. The creep rate at 1200 K and 0.4 R m (0.90 GPa) is 2.26 × 10⁷ 1/s, which is comparable to the creep rate at 800 K and 0.65 R m (1.63 GPa) of 3.98 × 10⁷ 1/s, see Figure 3b. In contrast, the stress exponents n for the pairs (1200 K, 0.4 R m ) and (800 K, 0.65 R m ), as shown in Table 3, are 4.8 and 10.0, respectively. n = 10.0 indicates that the mechanism is dislocation creep for the simulation at 800 K and 0.65 R m . However, at a similar creep rate, the amount of stacking faults in model M1 at 1200 K and 0.4 R m (2.4% of all atoms) is much lower than that at 800 K and 0.65 R m (4.2% of all atoms). Therefore, it is convincing that the dominant creep mechanism at lower stresses is grain boundary sliding. When the temperature increases and dislocations are thermally activated, dislocation nucleation and propagation start to contribute. Grain Size Effect The grain size is the third factor that influences the creep properties. According to Equation (1), the grain size exponent can be calculated as p = ∂(log ε̇)/∂(log(1/d)). The grain size exponents p of the five models are 2.56 and 2.57 at the stress levels 0.7 R m and 0.8 R m , respectively, see Figure 10. This means that, at a high stress level, the creep rate decreases with increasing grain size.
Because of the limitation of the grain sizes investigated in previous works, dislocation nucleation was observed and analyzed, but the interaction between dislocations inside the grain has not been studied in a nano-creep simulation. Around the transition regime between the Hall-Petch and inverse Hall-Petch effects, the grain size has a significant influence on the dislocation movement, which is closely related to the mechanical properties. Therefore, further investigation in this direction is worthwhile.
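The same fitting recipe yields the grain size exponent, now with log(1/d̄) as the regressor. The diameters and rates below are placeholders chosen to be consistent with p ≈ 2.56, not values from Table 2.

```python
# Grain size exponent p = d(log eps_dot)/d(log(1/d)) across the five models
# at a fixed stress level. Diameters and rates are illustrative placeholders.
import numpy as np

d = np.array([28.8, 25.2, 22.9, 21.2, 20.0])     # nm, models M1-M5 (assumed)
eps_dot = 1e7 * (1.0 / d) ** 2.56                # 1/s, consistent with p ~ 2.56

p = np.polyfit(np.log10(1.0 / d), np.log10(eps_dot), 1)[0]
print(f"p ~ {p:.2f}")
```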
Summary Through MD simulations, we first studied the tensile properties of Ni nano-polycrystals. As the temperature decreased, the tensile strength R m increased. The tensile strength showed no significant dependence on the grain size, which might be due to the fact that the investigated grain sizes lie in the transition area between the Hall-Petch and inverse Hall-Petch regimes. Applying stresses scaled to the tensile strength R m , nano-creep simulations were performed at different temperatures for all five models. From the creep curves, three typical creep phases were clearly identified. We draw the conclusion that the creep rate rises with increasing stress, increasing temperature, and decreasing grain size. Collecting all plastic deformation cases, we formed a deformation mechanism map to distinguish the corresponding mechanisms at given conditions. When applying a high stress, the stress exponent n was above 6.7, indicating dislocation creep. Visualization analysis revealed many dislocation movements, including nucleation from the grain boundary, propagation inside grains, and interaction with other dislocations or with grain boundaries. For creep simulations at a low stress level, the dominant creep mechanism is supposed to be grain boundary sliding at low temperature, with stress exponents n < 3. When increasing the temperature to 1200 K, the stress exponent for the low-stress part increases to 4.8 and dislocations begin to contribute to the creep process. Hence, it can safely be concluded that the dominant creep mechanism is grain boundary sliding at low stress, and this can be coupled with dislocation creep with increasing temperature. Additionally, from the analysis of activation energies, it was found that vacancy diffusion becomes prevalent when the temperature is above 1000 K. The grain boundary creep is assumed to be enhanced by vacancy diffusion at high temperatures. Furthermore, we postulate that the creep mechanism of NC Ni remains unchanged, with the coupling of dislocation creep and grain boundary creep, when the temperature is above 1400 K. It is noteworthy that the deformation diagram corresponds well to the Ashby map for pure Ni. Due to the properties of NC metals, it is difficult and expensive to experimentally investigate the creep behavior of NC metals at high temperatures. This work could serve as a good example for expanding deformation diagrams for NC metals through molecular dynamics simulations. Funding: We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding EXC2075-390740016 under Germany's Excellence Strategy. We acknowledge the support of the Stuttgart Center for Simulation Science (SimTech). Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ongoing research.
Decreasing Overhead and Power Consumption in Ad-Hoc Networks by Proposing a Novel Routing Algorithm : An Ad-Hoc network is constructed from a fabric of mobile nodes interconnected by temporary wireless links. Nodes located beyond a single hop are reached using intermediate neighbors to forward messages over long distances. The problem with this method of communication is that mobility causes the network topology to be unstable. Traditional solutions maintain routes by exchanging information whenever the network topology changes. Using traditional solutions for maintaining routes in highly mobile environments creates a high overhead cost which prohibits scalability. A new routing algorithm has been designed to support highly dynamic networks by maintaining routes locally. The suggested algorithm, in comparison with existing routing algorithms, not only minimizes overhead and saves power considerably, but also improves scalability. INTRODUCTION Mobile Ad Hoc Network (MANET) is an efficient technology for providing a wide-area communication environment where installing the infrastructure of a wired network is difficult. It is also suitable for supporting communication among mobile nodes. Over the last several decades, many routing protocols have been proposed to make the best use of wireless network technology [1,2] . Where setting up base stations or wired networks for mobile terminals is impossible or unfeasible, these networks are very suitable owing to their simple configuration and low cost. Some applications communicate in hostile environments without any central base station, such as naval units, meetings of people with mobile computers in areas where wired networks are not available, disaster recovery, and management of emergencies (for instance, in case of an earthquake where all infrastructure is destroyed). On the other hand, Ad-Hoc networks are a heterogeneous mix of different wireless and mobile devices, which are battery-powered and will turn off when the battery is discharged [3] . Energy is therefore a very important issue in these networks, and routing algorithms must be designed to use energy efficiently. Ad-Hoc routing algorithms can be categorized into two types: table-driven and on-demand. Table-driven algorithms send periodic broadcasts to maintain a routing table. Algorithms classified as on-demand construct routes only when they are needed [4] . Dynamic Source Routing or DSR [5] is an on-demand protocol which constructs routes as they are required. The path taken by a source-routed message is determined by the originator and cannot be changed during transit. If the chosen path is broken, the sender must retransmit the message with an alternative route. The algorithm consists of two stages in route construction, route request and route maintenance [6] . The Ad-Hoc On-demand Distance Vector or AODV [7] is a combination of the DSDV and DSR algorithms. Routes are created by exchanging distance vectors on-demand. Cluster Based Routing Protocols (CBRP) [8] partition the network into disjoint sets. The task of discovering routes is delegated to cluster-heads, which are elected because of their position and coverage. The aim of clustering is to reduce the number of packets flooded into the network by creating a temporary infrastructure. Associativity-Based Routing (ABR) [12] seeks to limit network traffic by discovering and using long-lived routes between nodes which remain stable over time.
The rest of the study is organized as follows: the proposed routing protocol is explained, the performance of DTRP is evaluated, and other routing algorithms are then compared with DTRP. Proposed routing protocol for AD-HOC networks: In this study we present a new solution for routing messages in mobile Ad-Hoc networks. The aim is to produce a simple and reliable routing algorithm which can tolerate a dynamic topology while minimizing routing overhead to conserve energy. Our proposed Ad-Hoc routing algorithm uses localized maintenance to reduce the operational overhead cost. We refer to our algorithm as the Demand Table Routing Protocol (DTRP). As the name suggests, the solution is demand-driven, meaning that routes are created when they are needed. Routes are constructed by cost advertisement and confirmation. Localized maintenance ensures that routes stay intact as relocation changes the network topology. Route representation: A route is represented by labeling the intermediate nodes on a path with a Route Pair containing the source and destination address. A set of nodes containing Route Pairs with the same source and destination addresses forms a route linking the two endpoints together. In addition to the source and destination addresses, the Route Pair contains an Active flag and a timeout. A node will only forward an arriving message if it contains a Route Pair corresponding to the source and destination addresses of the packet. By forwarding, we mean to re-broadcast the message so that other nodes beyond the range of the source may receive it. Route advertisement and creation: Nodes advertise themselves by sending periodic global Route Cost messages to inform all other nodes of the distance to the advertiser in hops. If multiple advertisements are received from the same source, lower costs take precedence. The nodes in Fig. 1 are labeled with their hop costs from an advertiser, A. To communicate with the advertiser, a node replies with a Route Create message. The create message is propagated toward the advertiser by traversing intermediate nodes which have progressively lower costs. Depending on the distribution of nodes, there may be more than one pathway. Fig. 2 shows a route creation message following the least-cost paths from C to A. The create message inserts a Route Pair entry into a table at each node on the return path. A Route Pair entry contains the addresses of the two ends of a route, a timeout, and an active status flag. Each node receiving the create message flags the Route Pair entry as active. Route maintenance: Routes collapse when nodes move out of range and require maintenance to keep them intact. Route Pair entries in the table are expired after a timeout period if they are not maintained. The initial state of a network is illustrated in Fig. 3. The nodes marked with a tick contain active route pairs for a path between (A, C). The nodes marked with a cross contain inactive route pairs for (A, C) which have been learned through localized maintenance. Fig. 4 illustrates the state of the network after localized maintenance.
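The Route Pair bookkeeping described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's implementation; the class and method names (RoutePair, should_forward) and the 4-second timeout are our assumptions.

```python
# Minimal sketch of the Route Pair table: a node forwards a packet only
# if an unexpired, active Route Pair matches its source and destination.
import time
from dataclasses import dataclass, field

@dataclass
class RoutePair:
    source: str
    destination: str
    active: bool = False
    timeout: float = field(default_factory=lambda: time.time() + 4.0)  # assumed

class RouteTable:
    def __init__(self):
        self.pairs = {}

    def create(self, source, destination, active=True):
        # A Route Create message inserts an active entry on the return path.
        self.pairs[(source, destination)] = RoutePair(source, destination, active)

    def should_forward(self, source, destination):
        pair = self.pairs.get((source, destination))
        return pair is not None and pair.active and time.time() < pair.timeout

table = RouteTable()
table.create("A", "C")
print(table.should_forward("A", "C"))   # True until the entry times out
```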
Route update frequency: For DTRP to operate efficiently, the frequency of route advertisements and maintenance messages needs to be timed carefully. If sent too infrequently, the state of the network will change and the updates will contain information that is out of date. If sent too often, the network will become congested by the routing overhead. The ideal solution is to send advertisements at a frequency corresponding to the peak of the distribution of nodes leaving range, to capture as many nodes as possible before they become disconnected. Maintenance messages must be sent at a higher frequency to repair minor topological changes in between the phases of route construction. Reducing overhead: The time between periodic broadcasts is linked to the rate at which the topology of the network changes. The amount of global communication is reduced by making frequent localized repairs. This strategy aids in reducing the total overhead cost, thus saving energy. The aim of localized maintenance is to replace nodes that have moved out of range with nearby neighbors to keep routes intact. As one node leaves range, another arrives to replace it. Instead of incurring a worst-case global route reconstruction cost each time a link is broken, there is only a single-hop cost. This property causes the overhead cost of DTRP to scale with the number of broken links per route, instead of n² each time a link is broken, where n is the size of the network. Duplicate messages: It is possible for many paths of equal distance to be created during route construction, causing duplication. Initially this appears to be inefficient but in fact works in our favor. By finding more than one pathway, it is less likely that a route will become disconnected, because of the inherent redundancy created by nodes with overlapping broadcast areas. Two problems arise when considering multi-path routes with DTRP: it is possible for a packet to be rebroadcast in the direction from which it came, or to be forwarded by multiple candidates in an overlapping area. To overcome both of these problems, packet headers are cached, forwarding only those which have not been seen before. The problem still exists that there may be more than one candidate available to forward the message, creating duplicates. This can be avoided by inserting a short random delay before forwarding a packet. The delay introduces a race condition where the first node to forward the packet succeeds. Any duplicate packets queued pending transmission by nodes within range are cancelled. Header history cache: The aim of the Header History Cache (HHC) is to stop nodes from retransmitting the same message twice by remembering packet headers and discarding any duplicate messages. The HHC stops messages from backtracking in the direction from which they came and looping indefinitely. Packets are identified by their source address and a sequence number which is incremented by the source for each packet sent. The HHC also stores the total number of messages received when the entry was created and a hit count which is incremented each time the same packet is received. Entries are evicted from the cache when the ratio of hits to messages received falls below a threshold. The threshold can be adjusted depending on the size of the cache and the rate at which messages arrive. The eviction process is similar to a Least Recently Used (LRU) cache.
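A minimal sketch of the HHC as described: duplicates are keyed by (source, sequence number), and an entry is evicted when its ratio of hits to messages received since insertion falls below the threshold. Names and details beyond the description above are assumptions.

```python
# Sketch of the Header History Cache described in the text.
class HeaderHistoryCache:
    def __init__(self, threshold=0.25):
        self.threshold = threshold
        self.total_received = 0
        self.entries = {}            # (src, seq) -> [hits, total_at_insert]

    def seen_before(self, src, seq):
        """Record one arriving packet; return True if it is a duplicate."""
        self.total_received += 1
        key = (src, seq)
        if key in self.entries:
            self.entries[key][0] += 1
            duplicate = True
        else:
            self.entries[key] = [1, self.total_received]
            duplicate = False
        # Evict entries whose hits/messages-received ratio dropped too low.
        for k in list(self.entries):
            hits, at_insert = self.entries[k]
            received_since = self.total_received - at_insert + 1
            if hits / received_since < self.threshold:
                del self.entries[k]
        return duplicate

hhc = HeaderHistoryCache()
print(hhc.seen_before("A", 1))   # False: first sighting
print(hhc.seen_before("A", 1))   # True: duplicate, to be dropped
```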
An example of the header history cache in operation: An example trace of hits/messages received against time is shown in Fig. 5. The threshold is arbitrarily set to evict entries when the ratio of hits/messages received drops below 0.25, for illustrative purposes. A new message is received each second. The trace initially has a value of 1.0 because a new packet header has just been inserted into the HHC. Three subsequent duplicates are received after 3, 4 and 11 seconds. The entry is evicted from the cache after 16 seconds, when the trace drops below the threshold. Evaluation criteria: Ad-Hoc networks allow direct communication between nodes using radio broadcasts [14,15] . As the nodes are mobile, there is a limited supply of power available to each device. The aims of an Ad-Hoc routing algorithm are to route messages efficiently with respect to path length and power consumption. These are the criteria that will be used in the evaluation. Path length: When all of the link costs are known, the router can find the cheapest path from source to destination. Assigning all links a cost of one unit causes the router to find the shortest path. Cost-based routing relies upon a stable network topology. Mobile Ad-Hoc networks cannot guarantee long-lived point-to-point links. Under these circumstances, finding low-cost paths by exchanging route information is a problem that cannot easily be solved, because the time needed to propagate total knowledge of the network can exceed the period of time for which the network remains stable. The aim is to find the shortest path between two nodes while consuming the least amount of power. Power consumption: Mobile nodes are powered by batteries; therefore, it is important to conserve energy in order to extend the lifespan of each device. This can be accomplished either by reducing the range of communication, or by reducing the total amount of data sent. These two options are mutually exclusive, requiring a cost/performance trade-off to be made. The broadcast range has a direct effect on the number of hops between source and destination. A shorter range implies a lower transmission cost but increases the number of hops needed to reach the destination. A longer range implies fewer hops but incurs a higher cost each time a message is forwarded. The frequency of route maintenance depends on the period of time for which nodes remain in contact with their neighbors. The network topology changes more often with a shorter range, creating a higher route overhead. Longer ranges allow route maintenance to be performed less frequently. Metrics: The criteria for a good Ad-Hoc routing algorithm were identified in the previous section as minimization of power consumption by controlling the quantity and range of communication. In this study we define a set of metrics to quantify the performance of different Ad-Hoc networks against these criteria. The metrics provide a method for objectively comparing different solutions. Bandwidth and latency: The performance of a network is traditionally measured using bandwidth and latency. The bandwidth of a link is the quantity of data that can be placed onto the link in a unit of time, measured in bits per second (bps). Latency is the time needed for a signal to travel across the distance of a link medium. With respect to ad hoc networks, latency indicates the time needed for a message to be propagated from source to destination, including the time spent in queues at intermediate nodes. Bandwidth is defined by the medium and the limitations of the communication device being used. Of particular interest is how well the available bandwidth is utilized: the actual throughput vs. the theoretical maximum.
Energy: The simplest power-attenuation model suggests that signal strength falls at a rate of 1/r^k, where r is the distance from the transmitting antenna and k is a value between 2 and 4, depending on range and interference in the environment. For our analysis, the precise value of k is irrelevant as long as there is attenuation; here we choose k = 2. The attenuation model suggests that the energy required to transmit a signal is proportional to the distance it is being sent. More specifically, the energy needed to transmit a message of length l bytes over a distance of r meters, at a cost of b joules per byte, is l·b·r^k. The values of b and k are specific to each type of wireless device. Substituting b with 1 joule per byte gives the energy metric l·r². These substitutions allow us to measure the energy consumed when sending the same message over variable distances, assuming the properties of the underlying radio technologies are the same. The total energy, e, consumed by an ad-hoc network is the sum of l·r² over all transmissions, and the routing overhead can be expressed as o = l_r/l_t, where l_r is the quantity of data sent by the routing algorithm and l_t is the total amount of data sent. A network with a lower routing overhead generates less traffic and is therefore more efficient with respect to the energy consumed. As o → 1, the routing algorithm consumes more of the network traffic, which is undesirable.
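A short sketch of the energy metric with k = 2 and b = 1 J/byte, comparing one long hop against several short hops for the same message; the specific ranges are illustrative, not taken from the experiments.

```python
# The energy metric l*r^2 (k = 2, b = 1 J/byte): one long hop vs several
# short hops for the same 512-byte message.
def transmit_energy(length_bytes, range_m, k=2, b=1.0):
    return b * length_bytes * range_m ** k

msg = 512                                    # bytes
one_hop = transmit_energy(msg, 9.0)          # single 9 m broadcast
three_hops = 3 * transmit_energy(msg, 3.0)   # three 3 m forwards
print(one_hop, three_hops)                   # 41472.0 vs 13824.0
```

Under this model the shorter range consumes less total transmit energy, at the cost of more hops, which is the trade-off discussed above.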
Performance evaluation of DTRP: In order to evaluate the performance of DTRP, the NS-2 simulator was used. Enough bandwidth is available for each node to transmit up to 100 packets per second, with a Maximum Transmission Unit (MTU) of one kilobyte. Messages larger than the MTU are fragmented prior to sending by the sliding window protocol and reassembled by the receiver. The sliding window protocol also permits out-of-order delivery. The position of each node is recomputed at a frequency of 100 Hz to accurately represent the highly dynamic connectivity of an Ad-Hoc network. Flooding a mobile network: In this experiment, we enable node mobility and the DTRP route maintenance scheme. The aim is to show that DTRP is more energy efficient than broadcasting in a mobile network and that altering connectivity affects performance for identical workloads. Methods: Nodes were allowed to roam and communicate in two differently shaped arenas. The first arena was a 10 m × 10 m square, the second a 20 m × 5 m rectangle. Both experiments used 25 nodes with evenly distributed start positions. A broadcast range of 3 m was selected to ensure sufficient connectivity between nodes. A stationary node was positioned at the centre of each arena to act as the receiver. All other nodes walked randomly within the arena at a maximum velocity of 0.7 m/sec, sending 1000 × 512-byte packets to the receiver. The experiment was done once with full network broadcasting using the square arena and then using DTRP in the square and rectangle arenas. Maintenance messages: Established cost-based routes collapse if the network topology becomes unstable due to motion. Maintenance ensures that routes stay intact. Regular localized maintenance messages are sent by DTRP to inform neighboring nodes of active routes in their vicinity. The frequency of maintenance messages depends on the stability of the network. DTRP algorithm parameters: The frequency of advertisements and maintenance messages has to be determined for optimal results. The distribution of nodes leaving range was measured in a test experiment over a period of 10 min. The results are shown in Fig. 6. Nodes leave range most often after 4.4 seconds, suggesting that an advertisement frequency of 4 seconds and a maintenance frequency of 2 seconds is sufficient. Fig. 7 and Fig. 8 show the degree of connectivity and the path lengths from each node to the receiver for the network confined to the square arena. The average path length from each node to the receiver is very similar for all nodes, while the degree of connectivity is spread over a wide range. Fig. 9 and Fig. 10 show the average path lengths and degree of connectivity for nodes in the network confined to the rectangle arena. The nodes appear to have similar degrees of connectivity with a greater variation in path lengths to the receiver. Results: Fig. 11 shows that a 13% energy saving was achieved using DTRP over broadcasting in the square network, with a route overhead of only 0.9%. Changing the arena to a rectangle produced a network with longer routes, which better demonstrates the efficiency of DTRP. Method: A road 50 m long by 10 m wide was populated with 50 nodes. One of the nodes was placed in the center of the road to act as a server, which remained stationary. The remaining 49 client nodes sent 10 requests of 128 bytes to the server at random time intervals while in motion. The server responded by randomly selecting one of the queued requests and replying with 4 Kbytes of data. Motion: To create realistic pedestrian motion, the steering behavior selects a random target at one end of the road. The trajectory is recomputed by offsetting the target by a small amount along the edge of the road. DTRP algorithm parameters: For this experiment the range was set at 5 m and the maximum node velocity was limited to 0.7 m/sec. Fig. 12 shows the distribution of nodes leaving range over a period of 10 min. The longest period of time a node stays in range is 7.2 seconds. Using this data, the advertisement frequency was set to 7 seconds, with maintenance messages being sent every 3.5 seconds. RESULTS AND DISCUSSION The average path lengths and degrees of connectivity are shown in Fig. 13 and Fig. 14. The average path length was 4.36 hops and the average degree of connectivity was 7.47 nodes. Fig. 15 shows that a 31% energy saving was achieved using DTRP over broadcasting, which is a 20% improvement over the energy saving with 25 nodes. The improvement was due to the increase in average path length, which allowed more focused routes to form. Trading energy for bandwidth: Another experiment was done with different ranges to show the effect on throughput. The experiment used 30 nodes in a 30 m × 10 m arena with broadcast ranges between 5 m and 10 m. The average time between a client sending a request and receiving a response was measured as the range was increased. Each node sent 20 requests at random intervals. Advertisements were sent every 5 seconds and maintenance messages every 2.5 seconds. Fig. 16 and Fig. 17 show the average degree of connectivity and path length for the same network while increasing the range. The total energy consumed and average service time are plotted in Fig. 18 and Fig. 19. The average service time is the average time between clients sending a request and receiving a response. Increasing the range causes the total amount of data sent and the average service time to fall. The maximum throughput is achieved with a range of 9 m, after which the network becomes congested, causing energy and service time to increase sharply. The results show that energy can be traded for throughput by adjusting range. The least energy is consumed with the shortest range.
Comparison with other approaches: The aim of this section is to compare the efficiency and scalability of DTRP with existing routing algorithms, comparing the overhead and energy consumption of DTRP with the other algorithms. Method: Each of the algorithms was analyzed to calculate the route overhead [13], taking into account the network size, number of routes, and average path length. The analysis is based on the probability of route failure, which was calculated by measuring the number of broken links and average path length from simulation data. Probability of a route failure: For our analysis, we need to know how often a route is broken. The probability of a broken route, P_r, can be determined from the number of broken links, b, and the average path length, k, over the period of time for a node to send t messages. The probability of a broken link per message is P_l = b/t, and, assuming independent link failures along a path of k links, the probability of a broken route per message is P_r = 1 − (1 − P_l)^k. Calculating overhead: The algorithms were analyzed to identify their characteristics and deduce how costly they are. In this section we state how the cost of each algorithm was calculated. DSDV: The DSDV algorithm has a static advertisement cost for periodic distance vector broadcasts from each node and a dynamic cost which is incurred each time a link is broken: Dynamic cost = (P_r × t) × routes × nodes; Total cost = Static cost + Dynamic cost (5). DSR: Routes are created by fully broadcasting an advertisement to all nodes, followed by a reply along the shortest path to acknowledge the route. Maintenance messages are sent each time a route fails, causing the route creation process to be repeated: Route create = (nodes × routes) + (routes × k) (6); Route maintenance = (P_r × t) × Route create (7); Total cost = Route create + Route maintenance (8). AODV: AODV has the creation cost of DSR and the maintenance cost of DSDV. Nodes participating in routing send periodic updates to their neighbors. We assume that all nodes participate in at least one route: Updates = frequency × nodes (9); Route create = (nodes × routes) + (routes × k) (10); Route maintenance = (P_r × t) × routes × nodes (11); Total cost = Updates + Route create + Route maintenance (12). TORA: TORA has the same route creation cost as DSR. On route failure, the maintenance cost is one traversal of the average path length to repair the broken link by readjusting heights: Route create = (nodes × routes) + (routes × k) (13); Route maintenance = (P_r × t) × k × routes (14); Total cost = Route create + Route maintenance (15). DTRP: Nodes periodically send global cost advertisements if they offer a service. A route is constructed by replying to an advertisement with a creation message. Localized maintenance messages are sent periodically to keep route pairs fresh. Comparing scalability by size of network: Three different types of motion characteristics were evaluated for networks with sizes between 25 and 100 nodes. In each case we assumed that routes were created to a single static node, as in the previous section. For each of the networks we recorded the number of broken links, b, and average path length, k, over a period of 10 min. The probability of a broken route, P_r, was calculated from the values of k and b. The measurements and calculated probabilities are presented in Table 1 and Table 2. A broadcast frequency of 1 Hz was used to calculate the cost of DSDV and AODV. The cost of the DTRP algorithm was calculated with advertisements sent every 4 seconds and maintenance messages sent every 2 seconds.
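The cost formulas above translate directly into code. In the sketch below, the DSDV static cost is not given explicitly in the text, so frequency × nodes is assumed by analogy with the AODV update cost; the example parameter values are also assumptions.

```python
# Route overhead costs from the formulas above (Equations 5-15).
def dsdv(Pr, t, routes, nodes, frequency):
    static = frequency * nodes              # assumption, see note above
    dynamic = (Pr * t) * routes * nodes
    return static + dynamic

def dsr(Pr, t, routes, nodes, k):
    create = nodes * routes + routes * k
    return create + (Pr * t) * create

def aodv(Pr, t, routes, nodes, k, frequency):
    create = nodes * routes + routes * k
    return frequency * nodes + create + (Pr * t) * routes * nodes

def tora(Pr, t, routes, nodes, k):
    create = nodes * routes + routes * k
    return create + (Pr * t) * k * routes

# Example: 50 nodes, 49 routes, path length 4.36 hops, broadcasts at 1 Hz.
print(dsdv(0.05, 600, 49, 50, 1), dsr(0.05, 600, 49, 50, 4.36))
```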
The routing costs for each algorithm are presented in Table 3 and plotted in Fig. 20 using a log scale. Comparing consumed energy against node speed: This experiment compares the energy consumption of the different routing algorithms at different node speeds. Simulation data were recorded for a network consisting of 50 nodes with speeds between 1 m/sec and 8 m/sec; the results are presented in Table 4 and plotted on a log scale in Fig. 21. The results show that DTRP sends the least number of route packets. It is also noted that for a network with more than 40 nodes, DSDV, DSR and AODV need to send more packets than the available bandwidth allows. CONCLUSION In this research, a new routing algorithm has been proposed for Ad-Hoc networks. In this method the route is created by labeling intermediate nodes, creating a chain between source and destination. As the network is mobile, a chain does not stay intact for long. Frequent localized maintenance messages repair the breakages caused by minor topological changes at low cost. The savings in overhead cost are derived from selective route cost advertisements and localized maintenance, which limits the amount of data flooded onto the network. To evaluate DTRP we used NS-2 simulation, and a comparative analysis was made using experimental data to show that our solution has a significantly lower overhead cost than existing routing algorithms for highly mobile Ad-Hoc networks. The benefits of our solution are lower power consumption and increased scalability.
5,945.8
2008-06-30T00:00:00.000
[ "Computer Science" ]
Estimating Neural Network’s Performance with Bootstrap: A Tutorial: Neural networks present characteristics where the results are strongly dependent on the training data, the weight initialisation, and the hyperparameters chosen. The determination of the distribution of a statistical estimator, such as the Mean Squared Error (MSE) or the accuracy, is fundamental to evaluate the performance of a neural network model (NNM). For many machine learning models, such as linear regression, it is possible to analytically obtain information such as the variance or confidence intervals on the results. Neural networks present the difficulty of not being analytically tractable due to their complexity. Therefore, it is impossible to easily estimate the distributions of statistical estimators. When estimating the global performance of an NNM by estimating the MSE in a regression problem, for example, it is important to know the variance of the MSE. Bootstrap is one of the most important resampling techniques to estimate averages and variances, among other properties, of statistical estimators. In this tutorial, the application of resampling techniques (including bootstrap) to the evaluation of neural networks’ performance is explained from both a theoretical and practical point of view. The pseudo-code of the algorithms is provided to facilitate their implementation. Computational aspects, such as the training time, are discussed, since resampling techniques always require simulations to be run many thousands of times and, therefore, are computationally intensive. A specific version of the bootstrap algorithm is presented that allows the estimation of the distribution of a statistical estimator when dealing with an NNM in a computationally effective way. Finally, the algorithms are compared on both synthetically generated and real data to demonstrate their performance. Introduction An essential step in the design of a neural network model (NNM) is the definition of the neural network architecture [1]. In this tutorial, the analysis assumes that the network architecture design phase is completed and the parameters are not varied anymore. It is assumed that a dataset S is available. For the training and testing of an NNM, S is split into two parts, called the training dataset (S_T) and the validation dataset (S_V), with S = S_T ∪ S_V and S_T ∩ S_V = ∅. The model is then trained on S_T. Afterwards, a given statistical estimator, θ, is evaluated on both S_T and S_V (indicated with θ(S_T) and θ(S_V)) to check for overfitting [1]. θ can be, for example, the accuracy in a classification problem, or the Mean Square Error (MSE) in a regression one. θ is clearly dependent (sometimes strongly) on both S_T and S_V, as the NNM was trained on S_T. The metric evaluated on the validation dataset S_V may be indicated as θ_{S_T,S_V}(S_V) to make its dependence on the two datasets S_T and S_V transparent. The difficulty in evaluating the distribution of θ_{S_T,S_V}(S_V) is that it is enough to split the data differently (in other words, to get different S_T and S_V) for the metric θ_{S_T,S_V}(S_V) to assume a different value. This poses the question of how to evaluate the performance of an NNM. One of the most important characteristics of an NNM is its ability to generalise to unseen data, or to maintain its performance when applied to any new dataset. If a model can predict a quantity with an accuracy of, for example, 80%, the accuracy should remain around 80% when the model is applied to new and unseen data.
However, changing the training data will change the performance of any given NNM. To give an example of why a single number can be a misleading measure of the performance of an NNM, let us consider the following example. Suppose we are dealing with a dataset where one of the features is age. What would happen if, by sheer bad luck, one split the dataset into one part (S_T) with only young people, and one (S_V) with only old people? The trained NNM will, of course, not be able to generalize well to age groups that are different from those present in S_T. Therefore, the model performance, measured by the statistical estimator θ, will drop significantly. This problem can only be identified by considering multiple splits and by studying the distribution of θ. The only possibility to estimate the variance of the performance of the model given a dataset S is to split S in many different ways (and, therefore, obtain many different training and validation datasets). Then, the NNM has to be trained on each training dataset. Finally, the chosen statistical estimator θ can be evaluated on the respective validation datasets. This allows one to calculate the average and variance of θ, and to use these values as an estimate of the performance of the NNM when applied to different datasets. This technique will be called the "split/train algorithm" in this tutorial. The major disadvantage of this technique is that it requires repeating the training of the NNM for every split and is therefore very time-consuming. If the model is large, it will require an enormous amount of computing time, making it not always a practical approach. This paper shows how this difficulty can be overcome by using resampling techniques to estimate the average and the variance of metrics such as the MSE or the accuracy, thus avoiding the training of hundreds or thousands of models. An alternative to resampling techniques is so-called ensemble methods, namely, algorithms that train a set of models and then generate a prediction by taking a combination of each single prediction. The interested reader is referred to [2][3][4][5][6][7][8][9]. The goal of this tutorial is to present the main resampling methods and discuss their applications and limitations when used with NNMs. With the information contained in this tutorial, a reader with some experience in programming should be able to implement them. This tutorial is not meant to be an exhaustive review of the mentioned algorithms. The interested reader is referred to the extensive list of references given in each section for a discussion of the theoretical limitations. The main contributions of this tutorial are four. Firstly, it highlights the role of the Central Limit Theorem (CLT) in describing the distribution of averaging statistical estimators, like the MSE, in the context of NNMs. In particular, it is shown how the distribution of, for example, the MSE will tend to a normal distribution for increasing sample size, thus justifying the use of the average and the standard deviation to describe it. Secondly, it provides a short review of the main resampling techniques (hold-out set approach, leave-one-out cross-validation, k-fold cross-validation, jackknife, subsampling, split/train, bootstrap) with an emphasis on the challenges when using neural networks. For most of the above-mentioned techniques, the steps are described with the help of pseudo-code to facilitate the implementation.
Thirdly, bootstrap, split/train, and the mixed approach between bootstrap and split/train are discussed in more depth, again with the help of pseudo-code, including the application to synthetic and real datasets. Finally, limitations and promising future research directions in this area are briefly discussed. Details of implementing resampling techniques on problems of high complexity go beyond the scope of this work, and the reader is referred to the following examples [10][11][12]. This tutorial is structured in the following way. In Section 2 the notation is explained, followed by a discussion of the CLT and its relevance for NNMs in Section 3. A short introduction to the idea behind bootstrap is presented in Section 4, while other resampling algorithms are discussed in Section 5. In Section 6, bootstrap, split/train, and the mixed approach between bootstrap and split/train are explained in more detail and compared. Practical examples with both synthetic and real data are described in Sections 7 and 8, respectively. Finally, an outlook and the conclusions are presented in Sections 9 and 10, respectively. Notation n independent, identically distributed (iid) observations will be indicated here with X_n ≡ (x_1, ..., x_n). This dataset will come from a population described by a probability density function (PDF) F, generally unknown. Let us assume that the statistical estimator θ (for example, the average or the mean squared error) is a functional: loosely speaking, θ will be a mapping from the space of possible PDFs into the real numbers R. To make the concept clear, let's suppose that the estimator is the mean of the observations x_i. In this case, θ(F) = E_F[x] = ∫ x dF(x) (Equation (3)), where it is clear that the right-hand side of Equation (3) is a real number. Unfortunately, in all practical cases, the "real" PDF F is unknown. Given a certain dataset X_n, the only possibility is to approximate the estimator θ with θ̂_n ≡ θ(F_n), where F_n indicates the empirical distribution obtained from X_n by giving a probability of 1/n to each observation x_i. This is the idea at the basis of the bootstrap algorithm, as will be discussed in detail in Section 4. Central Limit Theorem for an Averaging Estimator θ A lot of mathematics has been developed to obtain at least the asymptotic distribution of θ̂_n for n → ∞ [13][14][15][16]. The CLT [17], also known as the Lindeberg-Lévy central limit theorem, states that, for a sequence of iid observations x_i with µ = E[x_i] (the expected value of the inputs) and finite variance σ², the distribution of the sample average X̄_n = (1/n) Σ_{i=1}^n x_i (Equation (5)) tends, for n → ∞, to the normal distribution N(µ, σ²/n). For any practical purpose, if n is large enough, the normal distribution N will give a good approximation of the distribution of the average of a dataset of n iid observations x_i. Let's consider F to be a chi-squared distribution (notoriously asymmetric) with k = 10 [18], normalized to have an average equal to zero (panel (a) in Figure 1). Let's now calculate the average of X_n, as in Equation (5), 10^6 times in three cases: n = 2, 10, 200. The results are shown in Figure 1, panels (b) to (d). When the sample size is small (n = 2, panel (b)), the distribution is clearly not symmetrical, but when the sample size grows (panels (c) and (d)), the distribution approximates the normal distribution. Figure 1 is a numerical demonstration of the CLT. Typically, when dealing with neural networks in both regression and classification problems, one has to deal with complicated functions like the MSE, the cross-entropy, accuracy, or other metrics [1].
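The numerical demonstration just described is straightforward to reproduce. The sketch below mirrors it under the stated settings (a chi-squared distribution with k = 10, centred to zero mean, with n = 2, 10, 200); the number of repetitions is reduced from the paper's 10^6 to keep the example light, and the variable names are ours.

```python
import numpy as np

# Numerical demonstration of the CLT with a centred chi-squared(k=10)
# parent distribution (cf. Figure 1 in the text).
rng = np.random.default_rng(0)
k, n_repeats = 10, 50_000

for n in (2, 10, 200):
    samples = rng.chisquare(k, size=(n_repeats, n)) - k  # E[chi2(k)] = k
    averages = samples.mean(axis=1)
    # CLT: the averages approach N(0, 2k/n), since Var[chi2(k)] = 2k.
    print(f"n={n:4d}  mean={averages.mean():+.4f}  "
          f"std={averages.std():.4f}  CLT std={np.sqrt(2 * k / n):.4f}")
```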
Therefore, it may seem that the central limit theorem does not play a role in any practical application involving NNMs. This, however, is not true. Consider, as an example, the MSE of a given dataset of input observations x_i with average µ: MSE = (1/n) Σ_{i=1}^n (x_i − µ)² (Equation (6)). It is immediately evident that Equation (6) is nothing other than the average of the transformed inputs (x_i − µ)². Note that the CLT does not make any assumption on the distribution of the observations. Thus, the CLT is also valid for the average of observations that have been transformed (as long as the average and variance of the transformed observations remain finite). In other words, it is valid for both x_i and (x_i − µ)². This can be formalized in the following corollaries. Corollary 1. Given a dataset of iid observations x_1, ..., x_n with finite mean µ and finite variance σ², define the quantities δ_i ≡ (x_i − µ)². The limiting form of the distribution of the MSE for n → ∞ will be the normal distribution N(µ(δ_i), σ²(δ_i)/n) (Equation (8)), where with µ(δ_i) and σ(δ_i) we have indicated the expected value and the standard deviation of the δ_i, respectively. Proof. The first thing to note is that, since the x_i have finite mean and finite variance, it follows that δ_i ≡ (x_i − µ)² will also have finite mean and finite variance, and therefore the CLT can be applied to the average of the δ_i. By applying the CLT to the quantities δ_i, Equation (8) is obtained. That concludes the proof. A numerical demonstration of this result can be clearly seen in Section 7.1. In particular, Figure 2 shows that the distribution of the MSEs approximates N when the sample size is large enough. Note that Corollary 1 can be easily generalized to any estimator of the form θ = (1/n) Σ_{i=1}^n g(x_i), if the quantities g_i = g(x_i) have a finite mean µ(g_i) and variance σ²(g_i). For completeness, Corollary 1 can be written in the general form. Corollary 2. Given a dataset of iid observations x_1, ..., x_n with a finite mean µ and variance σ², we define the quantities g_i ≡ g(x_i). It is assumed that the average µ(g_i) and the variance σ(g_i)² are finite. The limiting form of the distribution of the estimator θ for n → ∞ will be the normal distribution N(µ(g_i), σ²(g_i)/n). Proof. The proof is trivial, as it is simply necessary to apply the central limit theorem to the quantities g_i, since θ is nothing other than the average of those quantities. The previous corollaries play a major role for neural networks. The implications of the final distributions of averaging metrics being Gaussian are that: • The distribution is symmetric around the average, with the same number of observations below and above it; and • The standard deviation of the distribution can be used as a statistical error, knowing that ca. 68% of the results will be in a region of ±σ around the average. These results justify the use of the average of the statistical estimator, such as the MSE, and of its standard deviation as the only parameters needed to describe the network performance. Bootstrap Bootstrap is essentially a resampling algorithm. It was introduced by Efron in 1979 [19], and it offers a simulation-based approach for estimating, for example, the variance of statistical estimates of a random variable. It can be used in both parametric and nonparametric settings. The main idea is quite simple and consists of creating new datasets from an existing one by resampling with repetition. The discussion here is limited to the case of n observations that are iid (see Section 2 for more details). The estimator θ calculated on the simulated datasets will then approximate the estimator evaluated on the true population, which is unknown.
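As a quick numerical check of Corollary 1, one can repeatedly draw samples, compute the average of the transformed observations (x_i − µ)², and compare the resulting spread with the corollary's prediction σ(δ_i)/√n. This is a minimal sketch with an arbitrarily chosen parent distribution and sample size, not the experiment of Section 7.1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_repeats = 200, 50_000
mu = 0.0  # true mean of the standard normal used below

# delta_i = (x_i - mu)^2; the MSE is the average of the delta_i (Corollary 1).
x = rng.normal(mu, 1.0, size=(n_repeats, n))
delta = (x - mu) ** 2
mse = delta.mean(axis=1)

# Corollary 1 predicts std(MSE) ~= std(delta_i) / sqrt(n).
print("empirical std :", mse.std())
print("CLT prediction:", delta.std() / np.sqrt(n))
```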
The best way to understand the bootstrap technique is to describe it in pseudo-code, as this illustrates its steps. The original bootstrap algorithm proposed by Efron [19] can be seen in pseudo-code in Algorithm 1. In the algorithm, N_s is an integer and indicates the number of new samples generated with the bootstrap algorithm. Algorithm 1: Pseudo-code of the bootstrap algorithm. Result: average and variance of the statistical estimator θ. 1 for i = 1, ..., N_s do 2 Generate a new sample X*_{n,i} by selecting n elements from X_n with repetition; 3 Evaluate the statistical estimator θ̂*_i on X*_{n,i}; 4 Estimate the average and variance of θ from the θ̂*_i. As a consequence of Corollary 2 (Section 3), Algorithm 1 for a large enough N_s gives an approximation of the average and variance of the statistical estimator θ. The distribution being Gaussian (albeit only for N_s → ∞), these two parameters describe it completely. An important question is how big N_s should be. In the original article, Efron [42] suggests that an N_s of the order of 100 is already enough to get reasonable estimates. Chernick [26] considers N_s ≈ 500 already very large and more than enough. In the latter work, however, it is indicated that, above a certain value of N_s, the error is dominated by the approximation of the true distribution F by the empirical distribution F_n, rather than by the low number of samples. Therefore, particularly given the computational power available today, it is not meaningful to argue whether 100 or 500 is enough, as running the algorithm with many thousands of samples will take only a few minutes on most modern computers. In many practical applications, using N_s between 5000 and 10,000 is commonplace [26]. As a general rule of thumb, N_s is large enough when the distribution of the estimator starts to resemble a normal distribution. The method described here is very advantageous when using neural networks because it allows an estimation of the average and variance of quantities such as the MSE or the accuracy without training the model hundreds or thousands of times, as will be described in Section 6. Thus, it is extremely attractive from a computational point of view and is a very pragmatic solution to a potentially very time-consuming problem. The specific details and pseudo-code of how to apply this method to NNMs will be discussed at length in Section 6. Other Resampling Techniques For completeness, in this section additional techniques (namely the hold-out set approach, leave-one-out cross-validation, k-fold cross-validation, jackknife, and subsampling) are briefly discussed, including their limitations. For an in-depth analysis, the interested reader is referred to [43] and to the given literature. Hold-Out Set Approach The simplest approach to estimating a statistical estimator is to randomly divide the dataset into two parts: a training dataset S_T and a validation dataset S_V. The validation dataset is sometimes called a hold-out set (from which derives the name of this technique). The model is trained on the training dataset S_T and then used to evaluate θ on the validation dataset S_V. θ(S_V) is used as an estimate of the expected value of θ. This approach is also used to identify whether the model overfits, or, in other words, learns to unknowingly extract some of the noise in the data as if it represented an underlying model structure [1]. The presence of overfitting is checked by comparing θ(S_T) and θ(S_V). A large difference is an indication that overfitting is present. The interested reader can find a detailed discussion in [1]. Such an approach is widely used, but has two major drawbacks [43].
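A direct transcription of Algorithm 1 into Python might look like the following sketch. numpy is assumed, and the estimator is passed in as a function; that interface is our design choice rather than something prescribed by the tutorial.

```python
import numpy as np

def bootstrap(x, estimator, n_samples=5000, seed=0):
    """Algorithm 1: resample with repetition, evaluate the estimator on
    each bootstrap sample, and return the mean and std of the estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    estimates = np.empty(n_samples)
    for i in range(n_samples):
        resample = x[rng.integers(0, n, size=n)]  # n draws with repetition
        estimates[i] = estimator(resample)
    return estimates.mean(), estimates.std()

# Example: bootstrap estimate of the mean of noisy observations.
data = np.random.default_rng(42).normal(3.0, 1.0, size=500)
avg, std = bootstrap(data, np.mean)
print(f"mean = {avg:.3f} +/- {std:.3f}")
```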
Firstly, since the split is done only once, it can happen that S_V is not representative of the entire dataset, as described in the age example in Section 1. Using this approach would not allow such a problem to be identified, therefore giving the impression that the model has very bad performance. In other words, this method is highly dependent on the dataset split. The second drawback is that splitting the dataset reduces the number of observations available in the training dataset, therefore making the training of the model less effective. The techniques explained in the following sections try to address these two drawbacks with different strategies. Leave-One-Out Cross-Validation Leave-one-out cross-validation (LOOCV) can be well understood with the pseudo-code outlined in Algorithm 2. This approach has the clear advantage that the model is trained on almost all observations. Therefore, it addresses the second drawback of the hold-out approach, namely that there are fewer observations for training. This also has the consequence that LOOCV tends not to overestimate the estimate of the statistical estimator θ [43] as much as the hold-out approach. The second major advantage is that this approach does not present the problem described in the age example in the introduction, as the training dataset includes almost all observations and is varied n times. The approach, however, has one major drawback: it is very computationally expensive to implement if the NNM training is resource-intensive. For all medium to large NNMs, this approach is simply not a realistic possibility. k-Fold Cross-Validation k-fold cross-validation (k-fold CV) is similar to LOOCV but tries to address the drawback that the model has to be trained n times. The method involves randomly dividing the dataset into k groups (also called folds) of approximately equal size. The method is outlined in pseudo-code in Algorithm 3. Algorithm 3: k-fold cross-validation (k-fold CV) algorithm. Result: Estimate of a statistical estimator θ̂. 1 Define an NNM by fixing the hyperparameters; 2 Split the dataset into k groups (folds): S^(i), with i = 1, ..., k; 3 for i = 1, ..., k do 4 Train the NNM on the dataset ∪_{j=1, j≠i}^k S^(j), or in other words the dataset without the i-th fold; 5 Evaluate the statistical estimator θ̂^(i) on S^(i); Therefore, LOOCV is simply a special case of k-fold CV for k = n. Typical k values are 5 to 10. The main advantage of this method with respect to LOOCV is clearly computational: the model has to be trained only k times instead of n. When the dataset is not big, k-fold CV has the drawback that it reduces the number of observations available for training and for estimating θ, since each fold will be k times smaller than the original dataset. Additionally, it may be argued that using only a few values of the statistical estimator (for example, 5 or 10) to study its distribution is questionable [43]. Jackknife The jackknife algorithm [44,45] is another resampling technique, first developed by Quenouille [44] in 1949. It consists of creating n samples by simply removing one observation at a time from the available x_1, ..., x_n. For example, one jackknife sample will be x_2, ..., x_n, with n − 1 elements. To estimate θ, the statistical estimator is evaluated on each sample (of size n − 1). Note that the jackknife may seem to be the exact same method as LOOCV, but there is one major difference that is important to highlight.
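Algorithm 3 can be sketched in a few lines of Python. The snippet below assumes a generic `train` function and estimator `theta` supplied by the user; both names are placeholders, not part of the tutorial's code.

```python
import numpy as np

def k_fold_cv(x, y, k, train, theta, seed=0):
    """Algorithm 3: train on k-1 folds, evaluate the estimator on the
    held-out fold, and collect the k estimates."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)  # k groups of roughly equal size
    estimates = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train(x[train_idx], y[train_idx])  # user-supplied training
        estimates.append(theta(model, x[test_idx], y[test_idx]))
    return np.array(estimates)
```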
While in LOOCV θ̂ is evaluated on the i-th observation held out (in other words, on one single observation), in the jackknife θ̂ is evaluated on the remaining n − 1 observations. With this method, it is only possible to generate n samples that can then be used to evaluate an approximation of a statistical estimator. This is one significant limitation of the method compared to bootstrap: if the size of the dataset is small, only a limited number of samples will be available. The interested reader is referred to the reviews [28,29,[46][47][48][49][50]. A second major limitation is that the jackknife estimation of an averaging estimator θ coincides with the average and standard deviation of the observations [26]. Thus, using the jackknife is not helpful to approximate θ(F). For the limitations discussed above, this technique is not particularly advantageous when dealing with NNMs and is therefore seldom used in such a context, especially when compared with the bootstrap algorithm and its advantages. Subsampling Another technique for resampling is subsampling, achieved by simply choosing, from a dataset X_n with n elements, m < n elements without replacement. As a result, the samples generated with this algorithm have a smaller size than the initial dataset X_n. Like the bootstrap algorithm, this one has been widely studied and used in the most diverse fields, from genomics [51,52] to survey science [53,54], finance [55,56] and, of course, statistics [26,57,58]. The two reviews [26,59] can be consulted by the interested reader. Precise conditions under which subsampling leads to a good approximation of the desired estimator can be found in [59][60][61][62]. Subsampling presents a fundamental difficulty when dealing with the average as a statistical estimator. By its own nature, subsampling requires considering a sample of smaller size m than the available dataset (of size n). As seen previously, the CLT states that the average of a sample of size m will tend asymptotically to a normal distribution with a standard deviation that is proportional to the inverse of √m. That means that changing the sample size changes the standard deviation of the distribution of θ. Note that this is not a reflection of properties of the MSE, but only of the sample size chosen. In the extreme case that m = n (one could argue that this is not subsampling anymore, but let's consider it as an extreme case), the average estimator will always have the same value, exactly the average of the inputs, since the subsampling samples are drawn without replacement, and therefore the standard deviation will be zero. On the other hand, if m = 1, the standard deviation will increase significantly and will coincide with the standard deviation of the observations. Therefore, the subsampling method presents the fundamental problem of the choice of m. Since there is no general criterion to choose m, the distribution of θ will reflect the arbitrary choice of m and the properties of the data at the same time. This is why the authors do not think that the subsampling method is well-suited to give a reasonable and interpretable estimate of the distribution of a statistical estimator such as the MSE. Algorithms for Performance Estimation As discussed in Section 1, the performance of an NNM can be assessed by estimating the variance of the statistical estimator θ. The distribution of θ can be evaluated by the split/train algorithm by splitting a given dataset S randomly N_s times into two parts S_T^(i) and S_V^(i), with i = 1, ..., N_s.
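For completeness, generating jackknife and subsampling samples takes only a couple of lines each. The sketch below is purely illustrative; it also shows numerically the limitation noted above for the jackknife, namely that for the average the spread of the leave-one-out estimates carries essentially no information beyond the observations' own standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100)
n = len(x)

# Jackknife: n samples, each leaving one observation out.
jackknife_means = np.array([np.delete(x, i).mean() for i in range(n)])
print("spread of jackknife means:", jackknife_means.std())  # tiny: ~ s/(n-1)

# Subsampling: draw m < n elements without replacement (m chosen arbitrarily).
m = 30
sub_means = np.array([rng.choice(x, size=m, replace=False).mean()
                      for _ in range(2000)])
print("spread of subsample means:", sub_means.std())  # scales like 1/sqrt(m)
```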
This algorithm is described with the help of pseudo-code in Section 6.1. This approach is unfortunately extremely time-consuming, as the training of the NNM is repeated N_s times. As an alternative, the approach based on bootstrap is discussed in Section 6.2. This algorithm has an advantage over the split/train algorithm in being very time-efficient, since the NNM is trained only once. After training, the distribution of a statistical estimator is estimated by using a bootstrap approach on S_V. Split/Train Algorithm The goal of the algorithm is to estimate the distribution of a statistical estimator, like the MSE or the accuracy, over the many possible splits, or in other words, the different possible S_T and S_V. To do this, a new model is trained for each split, so as to include the effect of the change of the training data. The algorithm is described in pseudo-code in Algorithm 4. First, the dataset is randomly split: S_V^(i) = S \ S_T^(i) indicates the dataset obtained by removing all x_i ∈ S_T^(i) from S. Then, the training is performed, and finally, the distribution of a statistical estimator is evaluated. It is important to note that Algorithm 4 can be quite time-consuming, since the NNM is trained N_s times. Thus, if the training of a single NNM takes a few hours, Algorithm 4 can easily take days and therefore may not be of any practical use. Remember that, as discussed previously, N_s should be at least of the order of 500 for the results to be meaningful. Larger values of N_s should be preferred, making this algorithm in many cases of no practical use. From a practical perspective, besides the issue of time, care must be taken in the implementation when automating Algorithm 4. In fact, if a script trains hundreds of models, it may happen that some will not converge. The results of these models will, therefore, be quite different from all the others. This may skew the distribution of the estimator. So, it is necessary to check that all the trained models reach approximately the same value of the loss function. Models that do not converge should be excluded from the analysis, as they would clearly falsify the results. It is important to note that the estimate of the distribution of an averaging estimator such as the MSE will always depend on the data used. Therefore, the method allows one to assess the performance of an NNM, measured as its generalisation ability when applied to unseen data. Bootstrap This section describes the application of bootstrap to estimate the distribution of a statistical estimator. Let's suppose one has an NNM trained on a given training dataset S_T and is interested in finding an estimate of the distribution of a statistical estimator, for example, the MSE or the accuracy. In this case, one can apply bootstrap, as described in Algorithm 1, to the validation dataset. The necessary steps are highlighted in pseudo-code in Algorithm 5. Note that Algorithm 5 does not require training an NNM multiple times and is, therefore, quite time-efficient. From a practical perspective, it is important to note that the results of Algorithm 5 (θ̂_n and σ(θ̂_n)) approximate the ones from Algorithm 4 (θ̂_V and σ(θ̂_V)). In fact, the main difference between the algorithms is that in Algorithm 4 an NNM is trained each time on new data, while in Algorithm 5 the training is performed only once. Assuming that the dataset is big enough and that the trained NNMs converge to similar minima of the loss functions, it is reasonable to expect that their results will be comparable.
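The contrast between the two algorithms is easiest to see in code. The sketch below implements the bootstrap variant (train once, then resample the validation set); `train` and `mse` are again user-supplied placeholders, and the 80/20 split is an assumption made for illustration.

```python
import numpy as np

def bootstrap_performance(x, y, train, mse, n_boot=1800, seed=0):
    """Train once, then bootstrap the validation set (Algorithm 5 style)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    idx = rng.permutation(n)
    split = int(0.8 * n)                # assumed 80/20 split
    tr, va = idx[:split], idx[split:]
    model = train(x[tr], y[tr])         # single training run
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        b = rng.integers(0, len(va), size=len(va))  # resample with repetition
        estimates[i] = mse(model, x[va][b], y[va][b])
    return estimates.mean(), estimates.std()
```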
Mixed Approach between Bootstrap and Split/Train The bootstrap approach, as described in the previous section, is computationally extremely attractive, but has one major drawback that needs further discussion. Similarly to the hold-out technique, the estimates of the average of the MSE and of its variance are strongly influenced by the split: if S_V is not representative of the dataset (see the age example in Section 1), Algorithm 5 will give the impression of bad performance of the NNM. Algorithm 5: Bootstrap algorithm applied to the estimation of the distribution of a statistical estimator. Result: Average and standard deviation of a statistical estimator, θ̂_n and σ²(θ̂_n). 1 Define an NNM by fixing the hyperparameters; 2 Create S_T from S by choosing m elements randomly from S, with m < n, without replacement; 3 S_V ← S \ S_T; 4 Train the NNM on S_T; 5 for i = 1, ..., N_s do 6 Generate a new validation dataset S*_V,i by selecting |S_V| elements from S_V with repetition and evaluate the statistical estimator on it; 7 Estimate the average and standard deviation of the estimator from the N_s values. A strategy to avoid this drawback is to run Algorithm 5 on the data a few times using different splits. As a rule of thumb, when the averages of the MSE and its variance obtained from the different splits are comparable, they are likely due to the NNM and not to the splits considered. Normally, considering 5 to 10 splits gives an indication of whether the results can be used as intended. This approach has the advantage of being able to use a large number of samples (the number of bootstrap samples) to estimate a statistical estimator, while remaining sensitive to possible problematic cases due to splits where the training and test parts are not representative of each other and of the entire dataset. Application to Synthetic Data To illustrate the application of bootstrap and to show its potential compared to the split/train approach, let's consider a regression problem. A synthetic dataset (x_i, y_i) with i = 1, ..., n was generated with Algorithm 6. All the simulations in this paper were done with n = 500. The data correspond to a quadratic polynomial to which random noise taken from a uniform distribution was added. The goal in this problem is to extract the underlying structure (the polynomial function) from the noise. A simple NNM was used to predict the y_i for each x_i. The NNM consists of a neural network with two layers, each having four neurons with sigmoid activation functions, trained for 250 epochs with a mini-batch size of 16 and the Adam optimizer [63]. Classification problems can be treated similarly and will not be discussed explicitly in this tutorial. Results of Bootstrap After generating the synthetic data, the dataset was split into 80%, used as a training dataset (S_T), and 20%, used for validation (S_V). Then, the bootstrap Algorithm 5 was applied, training the NNM on S_T and generating 1800 bootstrap samples from the S_V dataset. Finally, the MSE metric θ̂_n was evaluated on the bootstrap samples, and its distribution is plotted in Figure 2. The black line is the distribution of the MSE on the 1800 bootstrap samples, while the red line is a Gaussian function with the same mean and standard deviation as the evaluated MSE values. Figure 2 shows that, as expected from Corollary 1, the distribution of the MSE values has a Gaussian shape. This justifies, as discussed, the use of the average and standard deviation of the MSE values to completely describe their distribution. N_s = 1800 was used for both Algorithms 4 and 5; the corresponding averages of the metric distributions (vertical dashed lines in the figure) are very similar for the shown cases.
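The synthetic experiment can be approximated with the sketch below. The tutorial does not state which framework was used, so scikit-learn's MLPRegressor stands in for the described two-layer, four-neuron sigmoid network; the polynomial coefficients and the noise amplitude are our assumptions, since Algorithm 6's exact values are not reproduced in this excerpt.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 500

# Synthetic data: quadratic polynomial plus uniform noise (coefficients assumed).
x = rng.uniform(-1.0, 1.0, size=n)
y = 2.0 * x**2 - x + 1.0 + rng.uniform(-0.3, 0.3, size=n)

# 80/20 split, then a small sigmoid network as a stand-in for the tutorial's NNM.
split = int(0.8 * n)
model = MLPRegressor(hidden_layer_sizes=(4, 4), activation="logistic",
                     solver="adam", batch_size=16, max_iter=250)
model.fit(x[:split].reshape(-1, 1), y[:split])

# Bootstrap the validation set (Algorithm 5) to get the MSE distribution.
xv, yv = x[split:], y[split:]
mses = []
for _ in range(1800):
    b = rng.integers(0, len(xv), size=len(xv))
    pred = model.predict(xv[b].reshape(-1, 1))
    mses.append(np.mean((pred - yv[b]) ** 2))
print(f"MSE = {np.mean(mses):.4f} +/- {np.std(mses):.4f}")
```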
Note that, depending on the split, the NNM may learn better or worse and, therefore, the average of the MSE obtained by using Algorithm 4 may vary, although most of the cases will still lie within roughly one σ of the average obtained by Algorithm 4. The second observation is that, although the average of the MSE may vary a bit, its variance stays quite constant. Finally, the results show that the distributions have a Gaussian shape, as expected from Corollary 1. As mentioned, Algorithm 5 is computationally more efficient. For example, on a system with an Intel 2.3 GHz 8-core i9 CPU, Algorithm 5 took less than a minute, while Algorithm 4 took over an hour. Comparison of Split/Train and Bootstrap Algorithms The comparison between the split/train and bootstrap algorithms is summarized in Table 1, where the average of the MSE, its variance, and the running time are listed. In the table, the results of k-fold cross-validation and of the mixed approach (a few split/train steps combined with several bootstrap samples) are also reported. Note that for the mixed approach, the reported average of the MSE and σ are obtained over the several splits. It is important to note that, in the split/train algorithm, the NNM was trained on 100 different splits, while in the bootstrap algorithm, only on one. To avoid the dependence of the average of the MSE and of its variance on a single split, the mixed approach offers the possibility to check for dependencies on the split while remaining computationally efficient. For comparison, k-fold CV is computationally efficient, but the distribution of the MSE is composed of very few (here k = 5) values. Thus, even if the results in Table 1 are numerically similar, the difference lies in the computation time and in the robustness of these values. Application to Real Data To test the different approaches on real data, the Boston dataset [64] was chosen. This dataset contains house prices collected by the US Census Service in the area of Boston, US. Each observation has 13 features, and the target variable is the average price of a house with those features. The interested reader can find the details in [65]. The results are summarized in Table 2. As visible from Table 2, the average of the MSE and its variance obtained with the different algorithms are numerically similar, as was also found on the synthetic dataset. Here, as in the example of Section 7.2, the difference also lies in the computation time and in the robustness of these values. Limitations and Promising Research Developments It is important to note that the algorithms discussed in this work are presented as practical implementation techniques, but are not based on theoretical mathematical proofs, with the exception of Section 3. One of the main obstacles in proving such results is the intractability of neural networks (see, for example, [66]) due to their complexity and nonlinearity. Although not yet based on mathematical proofs, these approaches are important tools to estimate the value of a statistical estimator without incurring problems such as the one described in the age example in the introduction. The field offers several promising research directions. One interesting question is whether the degradation of the performance of NNMs due to small datasets differs when using the different techniques. This would enable choosing which technique works better with less data.
Another promising field is to study different network topologies and the role of the architecture in the resampling results. It is not obvious at all that different network architectures behave the same when using different resampling techniques. Finally, it would be important to support the results arising from the simulations described in this paper with mathematical proofs. This, in the opinion of the authors, would be one of the most important research directions to pursue in the future. Conclusions This tutorial showed how the distribution of an averaging estimator, such as the MSE or the accuracy, tends asymptotically to a Gaussian shape. The estimation of the average and variance of such an estimator, the only two parameters needed to describe its distribution, is therefore of great importance when working with NNMs. They allow one to assess the performance of an NNM, understood as its ability to generalise when applied to unseen data. Classical resampling techniques were explained and discussed, with a focus on their application to NNMs: the hold-out set approach, leave-one-out cross-validation, k-fold cross-validation, jackknife, bootstrap, and split/train. The pseudo-code included is meant to facilitate the implementation. The relevant practical aspects, such as the computation time, were discussed. The application and performance of the bootstrap and split/train algorithms were demonstrated with the help of synthetically generated and real data. The mixed bootstrap algorithm was proposed as a technique to obtain reasonable estimates of the distribution of statistical estimators in a computationally efficient way. The results are comparable with the ones obtained with much more computationally intensive algorithms, like the split/train one. Software The code used in this tutorial for the simulations is available at [67]. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: MSE Mean Square Error; NNM Neural Network Model; CLT Central Limit Theorem; iid independent identically distributed; PDF Probability Density Function; LOOCV Leave-one-out cross-validation; CV cross-validation
8,838.2
2021-03-29T00:00:00.000
[ "Computer Science" ]
Karst conduit size distribution evolution using speleogenesis modelling One of the critical aspects when modeling groundwater flow in karstic aquifers is to estimate the statistics of the size of the conduits, in conjunction with the connectivity of the karst conduit network. Statistical analysis can be performed on data gathered by speleologists, but a significant fraction of the karst conduit networks is not directly reachable, and therefore the resulting statistics are incomplete. An alternative method to evaluate the inaccessible areas of a karst conduit network is to simulate numerically the speleogenesis processes. In this paper, we use a coupled reactive-transport model to simulate the evolution of a vertical section of a fractured carbonate aquifer and investigate how the statistical distribution of the fracture apertures evolves. The numerical results confirm that the karstification proceeds in different phases that were previously hypothesized and described (inception, gestation, development). These phases result in a multi-modal distribution of conduit aperture. Each mode has a roughly lognormal distribution and corresponds to a different phase of this evolution. These outcomes can help better characterize the statistical distribution of karst conduit apertures, including the inaccessible part of the network. Introduction Karst aquifers develop when rock mass permeability is increased by dissolution processes (e.g., de Waele and Gutierrez 2022). The global behavior of these aquifers is strongly controlled by fast and mostly turbulent groundwater flow within karst conduits connected over large distances, and by slow recessions that can be fed by the fractured carbonate matrix. Modeling these processes is known to be difficult, as highlighted, for example, by the Karst Modeling Challenge (Jeannin et al. 2021). If we focus on physically based models, several numerical tools (such as MODFLOW-CFP, FEFLOW, or DISCO) can solve the flow and solute transport equations based on the knowledge of the karst conduit network geometry and the physical properties of the conduit and rock matrix (Reimann and Hill 2009; de Rooij et al. 2013; Kresic and Panday 2018). These tools require on one hand a 3D mesh representing the geometry of the karst conduit network as one-dimensional objects in a 3D space filled by matrix elements, and on the other hand the hydraulic properties of the matrix and karst conduits. While recent progress has been made in developing methods for generating plausible network geometries (e.g., Jaquet et al. 2004; Pardo-Igúzquiza et al. 2012; Rongier et al. 2014; Fandel et al. 2021; Banusch et al. 2022), the question of defining the proper statistical and spatial distribution of the conduit diameters remains much more open. These parameters are, however, critical to predict possible flow and solute transport in karst aquifers. The main source of data to constrain the conduit diameter statistics is direct measurements made by speleologists. The statistical and geostatistical analysis of these data (Pardo-Igúzquiza et al. 2012; Frantz et al. 2021) shows that the size of the accessible conduits has roughly a log-normal distribution and is spatially correlated along the conduit network. However, the data sets on which these studies are based are incomplete: they do not include conduit sizes that are too narrow to be explored by speleologists.
Speleogenesis modelling is a possibility to complement the data directly observable by speleologists and obtain some information about the statistical distribution of the smaller conduits. A closely related field is the study of the development of dissolution patterns in rough fractures or rock masses (Upadhyay et al. 2015; Lipar et al. 2021). By using reactive transport models one can model the development of geological structures affecting the properties of fractures or the creation of karstic geomorphologies. Some particularly interesting results in this field were obtained by Upadhyay et al. (2015). They show that the evolving dissolution patterns are insensitive to the amplitude and correlation length of the initial aperture field. However, their study does not consider the transition from laminar to turbulent flow. The speleogenesis modelling approach was pioneered by authors like Dreybrodt (1990), Gabrovšek and Dreybrodt (2000) or Kaufmann (2003). It involves the coupled numerical simulation of groundwater flow and reactive transport that causes mineral dissolution in the host rock. The dissolution process alters the petrophysical properties of the rock, such as fracture apertures. There is a positive feedback mechanism, because the fractures enlarged by dissolution allow a larger flow rate. As the flow rate increases, the reactive water can penetrate deeper into the fracture network and accelerate the fracture growth rate. Dreybrodt et al. (2005) have simulated these coupled processes to gain insights into the evolution of conduits and synthetic fracture networks. Their simulations revealed the importance of 4th order dissolution kinetics in the development of karst conduits. They explain the development of fractures hundreds or thousands of meters from the inlet of a fracture network. This type of approach has been applied at different scales and on different types of aquifers (e.g., confined, unconfined, coastal, deep, etc.) to better understand the effect of these various situations on the speleogenesis processes (e.g., Hubinger and Birk 2011; Perne et al. 2014; Kaufmann et al. 2014; de Rooij and Graham 2017; Cooper and Covington 2020). Among the known results, Hubinger and Birk (2011) show that the enlargement of fractures creates discrete pathways (a connected network) with bimodal fracture aperture distributions, where only the largest fractures continue to grow after the breakthrough of a pathway connecting the inlet and outlet of the modelled network. The breakthrough is defined as the stage in fracture evolution when the aperture has increased sufficiently to reduce friction losses and allow the flow regime to change from laminar to turbulent. The transition is followed by a significant increase in flow rate at the outlet. Hubinger and Birk (2011) also show that a unimodal fracture aperture distribution emerges if the recharge is severely reduced. The aim of this paper is to investigate the statistical distribution and evolution of possible karst conduit sizes below what is accessible by a speleologist. For this purpose, we implement and use a speleogenesis model based on the concepts described in detail in Dreybrodt et al. (2005). The numerical model uses FEFLOW to solve the flow and transport equations. Our code interacts with FEFLOW to couple the dissolution process with the reactive transport modeling. The initial stage is assumed to be a two-dimensional discrete fracture network to keep the computing time manageable.
We test different initial fracture networks and recharge conditions to check the sensitivity of the results to the initial geometry and recharge. We then run the numerical model of speleogenesis to assess the evolution of the aperture of the fracture network, i.e., the karst conduits. In particular, we pursue the simulation after the breakthrough to investigate how the karst conduit apertures continue to evolve. As compared to previous studies, we show that the evolution of the system includes multiple phases that can be interpreted as phases of inception, gestation, and development, as they were previously hypothesized by Filipponi (2009) but never yet described on the basis of a numerical model. These multiple phases lead to multiple modes in the statistical distribution of the fracture apertures. This paper is organized in three parts. We first present the conceptual model and numerical tool that was developed and used in this work. We then describe our numerical experiments and the results. Finally, we discuss the implications of those results and conclude. Conceptual model and numerical approach The numerical code employed for this study, named KSP, is an extension of a previous code developed by Maqueda (2017). The modelling approach implemented in KSP is based on the conceptual model presented by Dreybrodt et al. (2005). The speleogenesis model relies on three assumptions. The first assumption is that the walls of the fracture are natural limestone (calcium carbonate), thus soluble by acidic water. The second assumption is that the dissolution reaction occurs at the rock surface only, i.e., the effect of calcite dissolution is a retreat of the fracture wall. The third assumption is that mineral dissolution is the only mechanism of fracture growth modelled. Other processes, such as the erosion of the walls by suspended solids in water, or rock detachment along the conduit walls due to mechanical stress, are not accounted for. KSP uses the kinetics chemistry model of Dreybrodt et al. (2005). The calcite dissolution rate is expressed as a function of the ratio of the dissolved calcium concentration C to the calcite equilibrium concentration C_eq, estimated by applying equilibrium chemistry concepts (Appelo and Postma 2010). The calcium equilibrium concentration is a function of the dissolved carbon dioxide (CO2) concentration, temperature, and the presence of other dissolved mineral species. The lowest calcium concentration ratio (C/C_eq ~ 0) yields the fastest dissolution rate. A calcium concentration equal to the equilibrium concentration yields a dissolution rate equal to 0 mol/m²·s. The main feature of the kinetics model is that the dissolution rate can be explained by a linear or 4th order reaction model depending on the C/C_eq ratio. The model considers four dissolution rates for the following combinations of C/C_eq ratio and flow regime (Eqs. 3-6, given in Dreybrodt et al. 2005): Eq. 3 applies for laminar flow conditions when C/C_eq < 0.9, while Eq. 4 applies when C/C_eq > 0.9; Eq. 5 applies for turbulent flow conditions when C/C_eq < 0.9, while Eq. 6 applies when C/C_eq > 0.9. Far from equilibrium the kinetics are linear, R ∝ k_l (1 − C/C_eq), and close to equilibrium they are 4th order, R ∝ k_n (1 − C/C_eq)^4; under laminar flow the rates are additionally limited by molecular diffusion across the aperture, which is where b and D enter. Here R is the dissolution reaction rate [mol m−2 s−1], k_l is the 1st order reaction rate constant, k_n is the 4th order reaction rate constant, b is the fracture aperture [m], c is the calcium ion concentration [mol m−3], C_eq is the calcium equilibrium concentration [mol m−3], and D is the calcium ion diffusion coefficient [m² s−1]. The mass transport is described by the advection-dispersion model applied to one dissolved species (the calcium ion Ca2+).
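As an illustration of the rate-switching logic, the following sketch implements the piecewise kinetics in its simplest form, i.e., the linear and 4th order laws with a switch at C/C_eq = 0.9. The diffusion limitation of the laminar rates is omitted, so this is a schematic of the structure rather than the full Dreybrodt et al. (2005) model; the constants are the benchmark values quoted later in the text.

```python
def dissolution_rate(c, c_eq, k_l=4e-7, k_n=4e-4, switch=0.9):
    """Piecewise calcite dissolution kinetics (schematic, no diffusion term).

    Linear (1st order) kinetics far from equilibrium, 4th order kinetics
    close to equilibrium; the rate is zero at saturation. With the constants
    above, the two branches match at the switch point C/C_eq = 0.9.
    """
    ratio = min(c / c_eq, 1.0)
    if ratio < switch:
        return k_l * (1.0 - ratio)      # 1st order regime
    return k_n * (1.0 - ratio) ** 4     # 4th order regime

# Example: the rate drops sharply as the water approaches saturation.
for ratio in (0.0, 0.5, 0.9, 0.99):
    print(ratio, dissolution_rate(ratio * 2.0, 2.0))
```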
The implementation of the mass transport model is presented in detail in the FEFLOW documentation (Diersch 2013). The change in fracture aperture caused by mineral dissolution was coded into KSP. KSP can simulate the transition between laminar and turbulent flow and can estimate fracture growth for both flow regimes. KSP switches from the laminar to the turbulent flow equation at the point in fracture evolution (cross section geometry, gradient) where both models yield the same resistance to flow. This is achieved by comparing, in every fracture, the flow velocity obtained with the Hagen-Poiseuille model with the flow velocity computed with the Manning-Strickler model. When the turbulent solution yields a higher resistance to flow than the laminar solution for the same fracture geometry and gradient, the simulation changes the flow equation to turbulent for that specific fracture in the simulated domain. This approach offers a numerically stable solution for the transition. The change in fracture geometry due to mineral dissolution (r = fracture wall retreat) is computed with Eq. (7), r = (R · A · V_m · Δt) / A = R · V_m · Δt, where R is the dissolution reaction rate depending on the C/C_eq ratio and the flow regime, A is the fracture surface area, V_m is the molar volume of calcite rock, and Δt is the duration of the wall-retreat time step. The numerical implementation of the method described above relies on the splitting of the reactive transport and wall retreat calculations based on a quasi-stationary state approximation (Lichtner 1988). This approximation is the assumption that dissolution reaction rates are relatively slow; therefore, it is acceptable to extrapolate an estimated dissolution rate over a period of simulation time to compute the fracture wall retreat. The approximation is implemented by simulating only reactive transport until solute concentrations in the fracture network tend to stabilize, while using timesteps with a duration that ensures numerical stability. Only then is wall retreat computed, and KSP modifies the aperture of the fractures in the FEFLOW model in a single time-step. Next, the reactive transport problem is left to run once again, until a new quasi-stationary state in mass concentration in the model is attained. KSP benchmark test To test the KSP code, we first compared the results obtained with KSP with those published by Dreybrodt et al. (2005). For this benchmark, the aim is to model the evolution of one single rectangular fracture having a width of 1 m, an initial aperture of 0.2 mm, and a length of 1 km. The hydraulic boundary conditions are: (i) a constant pressure head of 50 m at the inlet point, and (ii) a constant head of 0 m at the outlet. The solute boundary condition is water fully undersaturated with respect to calcite at the inlet. The calcite equilibrium concentration is 2 mmol/liter. The linear and 4th order reaction kinetics constants are 4⋅10−7 and 4⋅10−4, respectively. Before comparing the results, note that there are two fundamental differences between the Dreybrodt et al. (2005) simulations and our implementation. In Dreybrodt et al. (2005), the flow occurs only in the fracture and the hydrodynamic dispersion of the solute is not accounted for. In our simulation, the fracture is embedded in a porous rock matrix with low hydraulic conductivity (< 10−6 m/s), where flow and solute transport still occur at a minimal rate, and hydrodynamic dispersion, representing the heterogeneity in water flow velocities in the fractures, is accounted for within the advective-dispersion formulation in FEFLOW.
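The operator-splitting loop can be sketched as follows. `solve_reactive_transport` stands in for the FEFLOW reactive-transport step and is a placeholder, as is the fracture data structure, so the snippet only illustrates the quasi-stationary update of Eq. (7); the factor 2 reflects the assumption that both fracture walls retreat.

```python
V_M = 3.69e-5                  # molar volume of calcite [m^3/mol]
DT_RETREAT = 7300 * 86400.0    # wall-retreat step of ~20 years [s]

def update_apertures(fractures, solve_reactive_transport, n_steps):
    """Quasi-stationary splitting: run transport to stable concentrations,
    then apply the wall retreat r = R * V_m * dt to every fracture."""
    for _ in range(n_steps):
        # 1) Reactive transport until concentrations stabilize (placeholder).
        concentrations = solve_reactive_transport(fractures)
        # 2) Wall retreat applied in a single step per fracture.
        for frac, c in zip(fractures, concentrations):
            rate = dissolution_rate(c, frac["c_eq"])  # from previous sketch
            frac["aperture"] += 2.0 * rate * V_M * DT_RETREAT  # both walls
    return fractures
```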
Figure 1a presents the evolution of the fracture aperture in both the KSP simulation (dashed lines) and the Dreybrodt et al. (2005) simulation (solid lines). At simulation times of 13,100 and 17,800 years, the fracture aperture in both simulations is nearly identical. The transition to turbulent flow occurs in both simulations at around 18,850 years, although the fracture apertures in the KSP simulation are slightly smaller than the apertures in the benchmark model (blue lines). The largest difference in fracture aperture is observed at a simulation time of 19,032 years. We interpret the deviations as resulting from the differences in implementation due to the presence of porous media in KSP and the different transport equations. At a simulation time of 19,152 years, the difference in fracture aperture is reduced after the flow becomes turbulent. Figure 1b presents the evolution of the flow rate for both the KSP simulation and the Dreybrodt et al. (2005) simulation. The flow rates simulated with KSP are comparable to the benchmark before and after the transition to turbulent flow (nearly vertical increase in flow rate). Only a small difference is observed by the end of the simulation, probably because flow and solute transport also occur in the porous matrix in our simulation. The conclusion from the benchmark test is that the KSP code reproduces the main trends of fracture aperture evolution and is able to model the transition between laminar and turbulent flow, reproducing the flow rate increase after breakthrough observed in the Dreybrodt et al. (2005) simulation. Finally, we consider that this test shows that KSP is capable of simulating reasonably well fracture aperture evolution from 10−4 m to 10 m. Model setup KSP was used to investigate the evolution of a simplified karst aquifer under phreatic conditions. The geometry is a rectangular vertical cross section through a fractured carbonate formation (Fig. 2). Recharge acts on the top of the system, and a spring is located on the lower part of the right side of the domain. The 2D model domain has a size of 2 km horizontally and 500 m vertically. Initial fracture networks 1 and 2 within the model domain were generated using a simple object-based model that we implemented in Python. Two different fracture networks were considered (Table 1). For each one, 4 families of discontinuities (fracture sets) were defined. The sub-horizontal family represents bedding planes or sub-horizontal tectonic discontinuities. The sub-vertical family represents sub-vertical tectonic discontinuities. In addition, there are two families of conjugated fractures. All the fractures are simulated independently. Their positions are generated following a Poisson random point process with a density that is different for every fracture family. The distribution of the length of the fractures follows a truncated power-law distribution, with an exponent that has been kept constant. The orientations follow a von Mises distribution for each fracture family. All the parameters of those statistical distributions are provided in Table 1. In total, fracture network 1 has 6,257 fractures with a total accumulated fracture length of 57,628 m; fracture network 2 has 5,554 fractures with a total accumulated fracture length of 39,727 m. The differences between fracture networks 1 and 2 are the density and the minimum length of the fractures (Table 1). This difference is well visible in Fig. 3, where fracture network 1 looks denser, with shorter fractures, when compared to fracture network 2.
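A minimal version of the object-based fracture generator described above can be written with numpy; the densities, power-law bounds, and von Mises concentrations below are placeholders, not the Table 1 values.

```python
import numpy as np

rng = np.random.default_rng(4)
LX, LY = 2000.0, 500.0  # domain size [m]

def truncated_power_law(n, l_min, l_max, exponent):
    """Sample fracture lengths from a truncated power law via inverse CDF."""
    u = rng.uniform(size=n)
    a = 1.0 - exponent
    return (l_min**a + u * (l_max**a - l_min**a)) ** (1.0 / a)

def fracture_family(density, l_min, l_max, exponent, mean_angle, kappa):
    """One fracture set: Poisson centres, power-law lengths, von Mises dips."""
    n = rng.poisson(density * LX * LY)                # Poisson point process
    centres = rng.uniform((0.0, 0.0), (LX, LY), size=(n, 2))
    lengths = truncated_power_law(n, l_min, l_max, exponent)
    angles = rng.vonmises(mean_angle, kappa, size=n)  # orientation scatter
    return centres, lengths, angles

# Example: sub-horizontal and sub-vertical families (placeholder parameters).
subhor = fracture_family(2e-6, 20, 500, 2.5, 0.0, 50.0)
subver = fracture_family(1e-6, 10, 200, 2.5, np.pi / 2, 50.0)
print("sub-horizontal fractures:", len(subhor[1]))
```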
Fracture apertures have been shown to be variable in space and spatially correlated at multiple scales (Tatone and Grasselli 2012). To generate a simple but plausible initial distribution of apertures, we used the sequential Gaussian simulation method within the SGEMS software (Remy et al. 2009) to generate the logarithm of the apertures as a random multi-Gaussian field, with a Gaussian variogram model with a range equal to 150 m in the horizontal direction and 50 m in the vertical direction. The random multi-Gaussian field is created as a raster file, which is used to assign initial apertures to the fracture networks in the numerical model. The final distribution of the aperture is log-normal with a mean of 1.68⋅10−4 m and a variance of 3.8⋅10−9 m²; the initial apertures have values ranging between 5.0⋅10−5 m and 5.0⋅10−4 m (Fig. 3). The fractures are discretized into smaller elements, where each fracture element has a unique initial aperture value, to improve the convergence and numerical accuracy of the numerical solution. When the flow becomes turbulent, we also need to define the value of the Strickler coefficient. For this purpose, we use a range of values previously identified in the scientific literature. Jeannin (2001) estimated a Strickler coefficient of 20 based on karst conduit geometry and flow measurements for the Hölloch cave system in Switzerland. An estimation of the Strickler coefficient was also done for the Devil's Icebox-Connor's Cave system in the USA based on flow and geometry data and yields a Strickler coefficient between 28 and 150 (Peterson and Wicks 2006). Perne et al. (2014) used values between 50 and 100 for simulations of the transition from pressurized flow to free-surface flow in karst conduit networks. Based on the aforementioned references, we assume a Strickler coefficient of 50, which should be representative of karst conduit networks. For the flow boundary conditions (Fig. 2), we assume that the amount of precipitation is constant. In the initial state of the system, the amount of precipitation is higher than what can enter the fractures, because the initial fracture permeabilities are low. Therefore, the total amount of available recharge by precipitation cannot completely enter the system, and a large proportion of it is eliminated by surface runoff. This process keeps the fractures fully saturated. For each fracture network, we considered three simulation scenarios with different values of the constant pressure head boundary condition: case (a) h = 33 m, case (b) h = 100 m, and case (c) h = 300 m. The different values of constant pressure head allow us to assess its effect on the evolution of the fracture networks. When the system evolves, the fractures are enlarged by dissolution and the network becomes much more permeable, but the recharge of the system cannot become larger than the rate of precipitation on the upper surface. Therefore, we implemented a constraint of maximum inflow rate in addition to the constant head boundaries. The maximum flow rate is set to 1 l/s per recharge node. This hydraulic boundary condition is implemented on the recharge nodes shown in Fig. 3. The outlet of the system (spring) is located at the intersection between one fracture and the right boundary near the bottom of the model domain (Fig. 3). The spring is modeled using a prescribed constant head h = 0 m. The location of the spring is also represented as a blue circle on the right side at the bottom of the domain in Fig. 3.
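The initial aperture field can be approximated without SGEMS by smoothing white noise with an anisotropic Gaussian kernel, a crude stand-in for a sequential Gaussian simulation with a Gaussian variogram. The ranges and log-normal parameters follow the text; the kernel-to-range conversion and the log standard deviation are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
nx, ny, cell = 400, 100, 5.0  # 2000 m x 500 m raster with 5 m cells

# Smooth white noise anisotropically (ranges ~150 m horizontal, ~50 m
# vertical), then rescale to N(0, 1).
field = gaussian_filter(rng.normal(size=(ny, nx)),
                        sigma=(50.0 / cell / 3.0, 150.0 / cell / 3.0))
field = (field - field.mean()) / field.std()

# Map to log-normal apertures around 1.68e-4 m, clipped to the stated bounds.
log_b = np.log(1.68e-4) + 0.3 * field  # 0.3 = assumed log-std
apertures = np.clip(np.exp(log_b), 5.0e-5, 5.0e-4)
print(apertures.min(), apertures.max())
```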
When the flow becomes turbulent, we also need to define the value of the Strickler coefficient. For this purpose, we use a range of values previously identified in the scientific literature. Jeannin (2001) estimated a Strickler coefficient of 20 based on karst conduit geometry and flow measurements for the Hölloch cave system in Switzerland. An estimation of the Strickler coefficient was also made for the Devil's Icebox-Connor's Cave system in the USA based on flow and geometry data, yielding values between 28 and 150 (Peterson and Wicks 2006). Perne et al. (2014) used values between 50 and 100 for simulations of the transition from pressurized flow to free-surface flow in karst conduit networks. Based on the aforementioned references, we assume a Strickler coefficient of 50, which should be representative of karst conduit networks.

For the flow boundary conditions (Fig. 2), we assume that the amount of precipitation is constant. In the initial state of the system, the amount of precipitation is higher than what can infiltrate into the fractures, because the initial fracture permeabilities are low. Therefore, the total amount of recharge available from precipitation cannot completely enter the system, and a large proportion of it is removed by surface runoff. This process keeps the fractures fully saturated. For each fracture network we considered three simulation scenarios with different values of the constant pressure head boundary condition: case (a) h = 33 m, case (b) h = 100 m, and case (c) h = 300 m. The different values of constant pressure head allow us to assess its effect on the evolution of the fracture networks. When the system evolves, the fractures are enlarged by dissolution and the network becomes much more permeable, but the recharge of the system cannot exceed the rate of precipitation on the upper surface. Therefore, we implemented a constraint of maximum inflow rate in addition to the constant head boundaries. The maximum flow rate is set to 1 l/s per recharge node. This hydraulic boundary condition is implemented on the nodes (Fig. 3). The outlet of the system (spring) is located at the intersection between one fracture and the right boundary near the bottom of the model domain (Fig. 3). The spring is modeled using a prescribed constant head h = 0 m; its location is represented by a blue circle on the lower right of the domain in Fig. 3.

Mineral dissolution (reactive transport) is simulated with a variable time-step duration on the order of hours, controlled by the FEFLOW solver to ensure a stable numerical solution. The solute concentration reaches a quasi-stationary condition (stable concentration) after 50-100 days of simulation. The time-step duration for the simulation of fracture enlargement is fixed at 7,300 days (~20 years). The total simulated period of fracture growth is between 5,000 and 10,000 years, depending on the simulated domain size, the fracture geometry, and the hydraulic boundary conditions.

Figure 4 presents the fracture aperture distributions for all the simulated cases at the simulation time when the maximum flow rate of 1 l/s per recharge node is attained. Even though the initial aperture distribution is identical within fracture network 1 or 2, the final structure of the enlarged fracture network depends on the hydraulic conditions, which can favor different fractures and make them grow faster. With a smaller hydraulic gradient (e.g., case 1a), the resulting enlarged fracture network tends to be "simpler", i.e., it contains fewer conduits (enlarged fractures) than the networks resulting from stronger hydraulic gradients (e.g., cases 1b, 1c). The same outcome is observed for scenarios 2a, 2b, and 2c, where a stronger gradient produces a more "complex" network of turbulent-flow fractures. Figure 4 also shows that karst conduit networks do not need to develop all over a karstified rock mass but only in limited areas. For example, larger fractures or karst conduits did not develop significantly below the outlet elevation. Karst conduits are also absent in areas distant from the outlet. In the areas without karst conduits, the hydraulic behavior is determined by flow along fractures (fractured aquifers), while in those with karst conduits, the conduits dominate the aquifer hydraulics (karst aquifers).

Fracture aperture evolution

The networks illustrated in Fig. 4 are the result of multiple discrete breakthroughs corresponding to the connection of several new clusters or parts of the fracture network to the turbulent and rapid flow path system between the recharge area and the outlet. This evolution is illustrated in Fig. 5.

Fig. 4 Comparison of the final geometry (developed length) of the fracture network for all simulation cases (a-c) for fracture networks 1 and 2

Figure 5 shows the evolution of fracture apertures for case 1a at four important and distinct simulation times. At time T = 1,560 years, a small cluster of fractures located above the outlet has just transitioned rapidly from laminar to turbulent flow, and their apertures became much larger than the initial values (red colors). The histograms only present the aperture distribution; the relationship to the transition from laminar to turbulent flow is presented in the videos included as supplementary material to this article. These fractures connect the outlet to the recharge area. At T = 1,855 years, a new cluster of fractures has just made the same transition from laminar to turbulent flow (fractures in red within the highlighted dashed rectangle) and connected to the first cluster. Later, at T = 3,618 years and T = 5,667 years, the same process repeated itself with the connection of additional clusters of fractures, highlighted again by the dashed rectangles in Fig. 5. This process, involving the sudden connection of new parts of a connected network of highly open fractures, can be fully observed in the video animations provided as electronic supplementary information.
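The laminar-turbulent transition that drives this evolution can be checked, for an idealized wide parallel-plate fracture, with a standard Reynolds-number criterion. The sketch below is our own illustration; the paper's actual criterion follows Dreybrodt et al. (2005) and is not reproduced here, and the viscosity and critical Reynolds number are assumed values.

```python
def reynolds_number(q, nu=1.31e-6):
    """Reynolds number for flow in a wide parallel-plate fracture.
    The hydraulic diameter of a slot of aperture a is D_h = 2a, and with
    q = v * a the flow per unit fracture width (m^2/s), Re = v*D_h/nu = 2q/nu.
    nu is the kinematic viscosity of water near 10 degC (assumed)."""
    return 2.0 * q / nu

def is_turbulent(q, re_crit=2300.0):
    """Flag the transition; re_crit ~ 2000-2300 is a common assumption."""
    return reynolds_number(q) >= re_crit
```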
These animations, which include the isolines of hydraulic head, also show that the same behavior occurs (with some variations) in all the simulated cases. The breakthrough of enlarged fractures influences the fracture aperture distribution, as shown in the histograms of fracture aperture (Fig. 6). Frequency is shown on a vertical log scale, and fracture aperture in meters on a horizontal log scale. For case 1a, the initial fracture apertures (T = 0 years) have a log-normal distribution with a mode, or peak, indicated by the 0 symbol at the top of the figure. At T = 1,560 years, the 1st breakthrough produces an aperture mode (peak 1) of ~0.1 m. At T = 1,855 years, the 2nd breakthrough produces a 2nd aperture mode (peak 2) of ~0.1 m. By then, the fractures of the 1st breakthrough (peak 1) have grown and have a new mode of ~0.3 m. At T = 3,618 years, the third breakthrough produces a new aperture mode (peak 3) of ~0.1 m. At this time, peaks 1 and 2 have merged into a single peak with a mode of ~1 m. At T = 5,667 years, the 4th and final breakthrough occurs and yet another mode (peak 4) of ~0.2 m emerges. By then, peaks 1, 2, and 3 have nearly converged to an aperture between 1 and 3 m and can be regarded as karst conduits. At T = 5,667 years, the inflow rate stabilizes (see Fig. 4, case 1a), and the simulation is run until T = 7,115 years. At this evolution stage, the 4th-breakthrough fractures (peak 4) almost converge with the previous peaks 1, 2, and 3, and a 5th mode with an aperture of ~0.2 m emerges. Since the inflow rate has stabilized, the flow in the background fractures does not transition from laminar to turbulent flow, and the fractures of peak 0 are not expected to grow into conduits with turbulent flow, i.e., into karst conduits.

Figure 7 presents the distribution of the final fracture aperture for all the simulated cases. The log scales are the same as in Fig. 6. A multimodal distribution (several peaks) is observed in the fracture aperture distributions. The various modes are the result of the same evolution process presented for case 1a in Fig. 6: the multimodal distribution results from each new phase of karstification, when a new group of enlarged fractures connects to the cluster of fractures previously connected to the spring. When comparing the outcomes of the six simulation cases, a general trend of three modes of fracture aperture emerges:

• A peak composed of fractures with apertures larger than 1 m (red color in Fig. 7). This peak represents all the fractures where flow transitioned from laminar to turbulent. These fractures keep growing after the stabilization of the flow rate, which causes this peak to be detached from the rest of the distribution. It represents fractures that developed into caves, i.e., those that are large enough to be explored in nature by speleologists.

• A second peak is observed for fractures in the range of around 0.1-0.3 m (orange color in Fig. 7). These fractures are connected to the larger fractures of the first peak. However, they stopped growing once the flow rate stabilized, because the recharge of reactive water is drained by the larger fractures (peak 1) to the spring.

• Finally, a third aperture mode remains in the range of the initial fracture apertures, between 5.0×10⁻⁵ and 5.0×10⁻⁴ m (blue color in Fig. 7). The flow regime in these sub-millimeter fractures remains laminar.
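The mode-counting used in this discussion can be automated. The sketch below locates peaks of an aperture distribution on a log scale with SciPy; the bin count, the prominence threshold, and the synthetic example populations are our own illustrative choices, not the simulation outputs.

```python
import numpy as np
from scipy.signal import find_peaks

def aperture_modes(apertures, bins=60):
    """Locate modes (peaks) of a fracture-aperture distribution in log
    space, mirroring the log-log histograms of Figs. 6 and 7."""
    counts, edges = np.histogram(np.log10(apertures), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # A peak must stand out from its neighbours; the prominence is heuristic.
    peaks, _ = find_peaks(counts, prominence=0.05 * counts.max())
    return 10.0 ** centers[peaks]

# Synthetic example: background fractures plus two breakthrough generations.
rng = np.random.default_rng(0)
a = np.concatenate([
    rng.lognormal(np.log(1.7e-4), 0.4, 5000),   # peak 0: initial apertures
    rng.lognormal(np.log(0.1), 0.3, 300),       # a ~0.1 m breakthrough mode
    rng.lognormal(np.log(1.5), 0.3, 120),       # a ~1 m conduit mode
])
print(aperture_modes(a))   # roughly [1.7e-4, 0.1, 1.5]
```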
Flow rate evolution

Figure 8 presents the evolution of the outflow rate at the spring for all the simulated cases. This evolution can be divided into three main phases:

• An initial phase with a slow exponential increase, before any connected fracture network has been sufficiently enlarged to transition from laminar to turbulent flow. During this initial phase, the flow rate can be approximated by the product of the total head gradient and an effective hydraulic conductivity that depends on the initial structure of the fracture network and the distribution of the apertures. The differences in flow rate between the cases are linearly proportional to the head gradient.

• The second phase occurs when fracture flow transitions from laminar to turbulent. It is characterized by a sudden increase in flow rate. At this stage, a complete flow path becomes turbulent and connects part of the recharge area with the outlet. During this phase, we observe multiple successive breakthroughs for scenarios 1a, 1b, 1c, and 2a, corresponding to the sequential connection of different areas of the fracture networks with the outlet, which causes a stepwise increase of the flow rate at the outlet (see Fig. 5). These flow rate steps are less visible in cases 2b and 2c (Fig. 8).

• During the third phase, the flow rate stabilizes and reaches the maximum amount of recharge available. Further enlargement of the fracture apertures is minor and has little impact on the flow paths. At this stage, the hydraulic gradient within the aquifer area containing the turbulent flow path tends progressively toward zero.

Table 2 presents a summary of the flow rates at the outlet fracture at the 1st breakthrough time and at the time of stabilized flow rate. As expected, stronger pressure heads yield shorter breakthrough times (case 1c < 1b < 1a). Network 1, in spite of being more complex than network 2, displays a shorter time to flow stabilization, because its maximum recharge flow rate is greater than that of network 2 (46 > 26 l/s).

Network statistics

To compare the complexity of the resulting turbulent-flow fracture networks, we computed the following summary statistics (Table 3):

• the number of fractures experiencing turbulent flow and the ratio of this number to the total number of fractures in the network;

• the sum of the lengths of fractures experiencing turbulent flow and the ratio of this length to the total fracture network length.

Fig. 8 Flow rate evolution and breakthrough times for all simulation cases. The multiple breakthroughs presented in Fig. 5a for case 1a are highlighted with arrows. The first increase in flow rate is the consequence of the 1st breakthrough (Fig. 5a), the 2nd flow rate increase is caused by the second breakthrough (Fig. 5b), and so on until the last increase in outflow rate caused by the 4th and final breakthrough, as presented in Fig. 5b

These summary statistics were estimated at two simulation times: (i) at the stabilization of the total outflow rate (presented in bold font in Table 3), and (ii) at the simulation end (presented in normal font in Table 3). For all the simulated cases, only between 13% and 20% of all the fractures transitioned from laminar to turbulent flow, and only a fraction of those grew into karst conduits. For both fracture networks 1 and 2, the ratio of fractures in turbulent flow is higher for the cases with higher hydraulic gradient. It is noted that in cases 1a, 2a, 1b, and 1c, the proportion of fractures where flow becomes turbulent after the stabilization of the flow rate is between 0.25% and 1.7%.
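A minimal sketch of how such summary statistics can be computed from per-fracture simulation outputs follows; the function signature and the synthetic example are our own construction, and the printed numbers are placeholders rather than the values of Table 3.

```python
import numpy as np

def turbulent_network_stats(lengths, turbulent):
    """Summary statistics of a fracture network, in the spirit of Table 3:
    count and cumulative length of turbulent-flow fractures, plus their
    ratios to the whole network.

    lengths   : (n,) fracture lengths in m
    turbulent : (n,) boolean flags, True where flow is turbulent
    """
    n_turb = int(turbulent.sum())
    l_turb = float(lengths[turbulent].sum())
    return {
        "n_turbulent": n_turb,
        "n_ratio": n_turb / len(lengths),
        "length_turbulent_m": l_turb,
        "length_ratio": l_turb / float(lengths.sum()),
    }

# Hypothetical example: 6,257 fractures of which ~15% carry turbulent flow.
rng = np.random.default_rng(1)
lengths = rng.pareto(2.5, 6257) * 10.0 + 10.0
turbulent = rng.uniform(size=6257) < 0.15
print(turbulent_network_stats(lengths, turbulent))
```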
In cases 2b and 2c, by contrast, the number of fractures in turbulent flow is the same at the flow stabilization time and at the final simulation time. This can be understood as follows: when the maximum flow rate is achieved, the pressure head required to push reactive water through the smallest fractures in the network decreases and approaches zero. The system then no longer evolves significantly. The second statistic in Table 3, the ratio of the summed length of the turbulent fractures, shows that the turbulent fractures represent between 13% and 20% of the total fracture network length. Here again, the cases with higher pressure head are those with more turbulent fractures. Notably, the cumulated length of fractures in turbulent flow is nearly the same at flow stabilization and at the final simulation time, demonstrating that once the pressure head approaches zero, the development of new paths between inlets and outlets stops.

The relation between the evolution of the turbulent fracture network and the hydraulic gradient is not straightforward and does not follow simple intuition. For example, some flow paths that were enlarged under a low initial hydraulic gradient were not enlarged under higher hydraulic gradients. Depending on the hydraulic gradient, different fractures were enlarged and reached turbulent flow conditions (Fig. 4). This phenomenon is likely the result of the non-linearity of the underlying physics and chemistry. If the flow followed only a linear law, such as Poiseuille's, increasing the initial hydraulic gradient would increase the flux in all fractures proportionally. Because of the transition from laminar to turbulent flow, the resistance, and hence the behavior of the enlarged fractures, changes in a non-linear manner. This effect is then amplified by the dissolution process and by the changes in boundary conditions at the top of the domain when the flow rate becomes higher than the maximum recharge. All these interactions between the processes explain the complexity of the observed behavior.

Conclusions and discussion

In this paper, we simulated the evolution of the apertures of a fracture system using a new implementation of the speleogenesis model of Dreybrodt et al. (2005). This model was implemented as a plugin within FEFLOW. Compared with the original model, the new implementation differs in some aspects related to the solute transport equation, which includes a dispersion term in FEFLOW. There are also some differences in the way the transition between laminar and turbulent flow is implemented. It would be useful to run several additional experiments in the future to analyze in detail the impact of the porous matrix and of the dispersion term on the results, in order to clarify the mechanisms controlling those differences. Nevertheless, the two models give rather similar results in the case of a single fracture. We then used this model to study how two 2D discrete networks of fractures evolve over time under different hydraulic boundary conditions.

One of the major results of this study is the observation that the karstification process does not occur homogeneously and regularly in the catchment; instead, it proceeds as a series of multiple breakthroughs corresponding to different phases of karstification. These results illustrate for the first time in a quantitative manner a process that was proposed earlier by Lowe (1992) and extended by Filipponi (2009).
The original concept included three phases of cave development: (i) inception, (ii) gestation, and (iii) development, as illustrated in Fig. 9. The inception phase was described as the start of dissolution in the fractures under phreatic conditions. The outlet or spring was assumed to be the consequence of a valley incision in a soluble rock massif. The outlet was hypothesized to organize the groundwater flow, and at time 1, a cave gestation zone was assumed to emerge. This is very similar to what we observe during the initial phase of our simulations. At time 2, the breakthrough occurs, and the new karst conduit network acts as a "spring area" for the upstream section of the model, which is comparable to the 1st breakthrough in our simulations. The karst conduits (cave development phase) offer less resistance to flow, so the water table drops (observed as a pressure head decrease in our simulations), and the gestation and inception zones move upstream. At time 3, the cave development keeps advancing upstream, which is comparable to the 2nd, 3rd, and 4th breakthroughs in our simulation cases 1a, 1b, 1c, and 2a. In our simulations, the variability of the distances between the inlets and the outlets of the fracture networks resulted in several stages of breakthrough, when different regions of the network connected the inlets and outlets. In our simulations the fractures are always fully saturated.

The second important result that we obtained is that the distribution of fracture apertures is generally multimodal. Depending on the flow conditions and initial geometry, the distribution can be dominated by two main modes or more. This result is comparable to, and generalizes, the results obtained by Hubinger and Birk (2011), who conducted numerical simulations of the growth of fracture networks by mineral dissolution. They found that during the initial phase of their simulations there was only one mode of fracture aperture; at later simulation times, the modes representing enlarged fractures merge. In our simulations, the initial log-normal distribution of fracture apertures with one mode evolves into a mainly tri-modal distribution. The first mode can include several sub-families, but it generally corresponds to the fractures with an aperture larger than 1 m, in which flow is turbulent and which can be regarded as karst conduits. The second mode represents fractures that intersect the main conduits but into which, after the stabilization of the flow rate, the flow is not sufficient to drive reactive water; the enlargement of these fractures therefore stopped. The third mode represents the initial fractures, which experienced minimal growth and stayed under laminar flow conditions during the entire simulated period.

As discussed in the previous paragraph, the different modes have an essentially log-normal distribution of fracture aperture. This is similar to what has been observed when computing statistics of explored conduits (Maqueda 2017; Frantz et al. 2021). We also observe in our results that there is no trend of larger conduits being found upstream or downstream in the network. This is again consistent with the analysis of field data by Frantz et al. (2021). However, the spatial distribution of conduit radii in natural caves is known to be spatially correlated along the conduit network (Pardo-Igúzquiza et al. 2012; Frantz et al. 2021). Some branches of the conduit networks have larger conduit sizes, while others are narrower.

Fig. 9 Conceptual model for the development of a karst conduit network, from Filipponi (2009)
A geostatistical analysis of the fracture apertures obtained in our simulations has not been conducted, but a visual inspection of the results (e.g., in Fig. 5) shows that there is little variability of the apertures along a cluster of connected enlarged fractures. Our model does not seem capable of reproducing the variability of these apertures, even though we started from a random aperture distribution. This is partly a consequence of the chosen model of mineral dissolution kinetics under laminar flow conditions (before breakthrough time). Under laminar flow conditions, the dissolution rate is controlled by mass diffusion, as seen in Eq. 3 (Dreybrodt et al. 2005): the larger the fracture, the slower the dissolution reaction. Therefore, up to the breakthrough time and the transition to turbulent flow, smaller fractures grow faster than larger fractures. However, after the stabilization of the flow rate, the reactive water flows mainly in the karst conduits (large fractures), and the relatively faster dissolution kinetics of the small fractures becomes irrelevant because almost no reactive water flows through them. Therefore, they stop growing. Other reasons possibly explaining why the model produces conduits of relatively homogeneous diameters are: (i) that the rock is assumed to be completely homogeneous, and (ii) that the model only considers mineral dissolution, while in reality mechanical effects also contribute to fracture growth.

Among the other results obtained in this study, we observe that breakthrough occurs at a flow rate of similar order of magnitude for all the simulated cases. We interpret this behavior as the result of the flow rate, the dissolution reaction kinetics, and the initial fracture apertures interacting with each other. When fractures grow and the flow velocity increases, reactive water can penetrate deeper into the fracture network. This penetration distance is what drives the breakthrough. Since breakthrough occurs at approximately the same flow rate regardless of the initial hydraulic boundary conditions, we conclude that the breakthrough flow rate is a characteristic of the initial fracture network, mainly of its resistance to flow.

Finally, significant differences were observed between simulations with the same initial fracture network geometry and apertures but different initial hydraulic gradients. We quantified the complexity of the resulting karst conduit networks in terms of the number of fractures experiencing turbulent flow and concluded that when a larger hydraulic gradient pushes more water into more small fractures before the 1st breakthrough time, a more complex conduit network tends to emerge.

We are aware that the results and conclusions expressed in this paper were obtained for only a few 2D models. To gain more confidence in our results, it would be useful to run an ensemble of stochastic simulations and repeat a similar analysis on a large number of models. Furthermore, since the dimensionality of a model strongly affects the connectivity of a network, we expect that some of our conclusions may not remain valid in 3D. Exploring these effects would be rather straightforward but would imply much larger computation times, which was not possible in the framework of the present study. Finally, the results presented in this paper should help understand how the statistical distribution of fracture apertures or conduit diameters can evolve during speleogenesis.
These results can serve to guide the selection of karst conduit aperture distributions for modeling groundwater flow and solute transport in karstic aquifers with physically based models. They also have practical implications for diverse applications, for example environmental impact assessments or inflow risk analysis when planning the construction of underground infrastructure (e.g., Casagrande et al. 2005; Filipponi et al. 2012).
9,490.6
2023-07-01T00:00:00.000
[ "Environmental Science", "Geology" ]
Optimized Radial Basis Function Neural Network Based Intelligent Control Algorithm of Unmanned Surface Vehicles

To improve the tracking stability control of unmanned surface vehicles (USVs), an intelligent control algorithm is proposed on the basis of an optimized radial basis function (RBF) neural network. The design process was as follows. First, the adaptation value and mutation probability were modified to improve the traditional optimization algorithm. Then, the improved genetic algorithm (GA) was used to optimize the network parameters online to improve their approximation performance. Additionally, the RBF neural network was used to approximate the function uncertainties of the USV motion system to eliminate the chattering caused by the uninterrupted switching of the sliding surface. Finally, an intelligent control law was introduced based on sliding mode control with Lyapunov stability theory. The simulation tests showed that the intelligent control algorithm can effectively guarantee the control accuracy of USVs. In addition, a comparative study with the sliding mode control algorithms based on an RBF network and on a fuzzy neural network showed that, under the same conditions, the stabilization time of the intelligent control system was 33.33% faster, the average overshoot was reduced by 20%, the control input was smoother, and less chattering occurred compared to the other two algorithms.

Introduction

Since the concept of unmanned autonomous ships (UASs) was established, the research and development of UASs has become a key breakthrough project in the global shipping industry. To this end, research and development institutions in various countries have conducted in-depth research on the intelligent navigation and control of UASs. At present, research on intelligent motion control and navigation is advancing rapidly. The most representative applications are various types of small unmanned surface vehicles (USVs) [1]. Developing a small intelligent hull and conducting a series of studies on it will lay a solid foundation for the future development of large-scale intelligent ships. It is well known that steering tracking control is an important research hotspot in the field of ship motion and control, related to both the economics and the safety of ship navigation [2,3], for example in ship collision avoidance [4,5]. Due to the uncertain parameters, nonlinearity, and external interference [6] in the USV motion control model, conventional proportional-integral-derivative (PID) control is not effective enough, and the chattering of the control input is severe [7]. In contrast, ideal sliding mode control (ISMC) [8,9] can be effectively applied to the motion control of nonlinear systems. Neural network systems have universal approximation ability for arbitrary functions, which can effectively address the uncertainty of USV motion systems [10,11]. Wang et al. [12] proposed the use of a radial basis function (RBF) network for control input compensation and designed an intelligent tracking control algorithm for USVs based on SMC under limited input conditions. Dongdong et al. [13] proposed an adaptive trajectory tracking control strategy for underactuated unmanned surface vehicles; the neural network minimum learning parameter method proposed in that work requires only a small amount of computation. Combining SMC and RBF networks and applying them to the design of USV motion control can further improve the performance of the controller [14,15].
However, when neural networks are used for the modeling and control of motion systems [16], it is their function estimation and classification capabilities that are most commonly exploited. The key to designing a neural network is to determine its structure and connection weight coefficients. For the most typical radial basis function (RBF) neural network, the connection weights and the parameters of the Gaussian functions are mainly determined by experience [17]. If the parameters are not properly selected, it is easy for the system to fall into local extrema [18]. Genetic algorithms (GAs) can be used for calculation optimization [19] and in the design of neural networks [20]. In [21], a GA optimized the center vectors and basis widths of an RBF neural network and updated the weights of the RBF network in real time, improving the tracking accuracy of the RBF network. In [22], a genetic algorithm was used to optimize the weights of an RBF network, and a clustering algorithm was used to determine the number of centers of the basis functions; a coke quality prediction model based on the GA-optimized RBF neural network was then designed. According to the literature [17], the key to RBF network application is to determine its three parameters. Considering the shortcomings of GAs [17], some improvements have been proposed, such as an adaptive mutation algorithm and an elite-preservation (excellent individual protection) algorithm. In [23], utilizing the excellent global search ability of gene expression programming (GEP) to optimize the RBF neural network, a GEP-optimized RBF multi-output model algorithm (GEP-RBF algorithm) was designed; this model has good prediction accuracy and multi-output balance.

Motivated by the above observations, an intelligent control algorithm is proposed based on SMC with an optimized RBF network. A GA is used to optimize the RBF network, and the optimized RBF network directly approximates the USV motion control input commands. Then, the system stabilization function is constructed using SMC theory. Finally, the USV navigation motion intelligent control algorithm is introduced using stability theory. The contributions of this paper can be summarized as follows. The previous USV motion controller is remodeled based on SMC and an RBF neural network (RBFNN). The GA-optimized RBF network directly approximates the control law, which overcomes the chattering caused by the uncertainty terms in the USV model. At the same time, by modifying the adaptation value and mutation probability and by evolving N distinct subpopulations, the improvement of the general GA reduces the likelihood of falling into local extrema, speeds up the optimization process, and further improves the approximation accuracy of the RBF network.

RBF Neural Network and its Genetic Optimization

The RBF neural network was proposed by Moody and Darken [17] in the late 1980s. It is a three-layer, feed-forward network with a single hidden layer, as shown in Figure 1. Because it mimics the locally tuned, overlapping receptive-field structure of the human brain, the RBF network is a local approximation network. Since the mapping from input to output is nonlinear while the mapping from the hidden layer space to the output space is linear, learning is greatly accelerated, local minima are largely avoided, and any continuous function can be approximated with arbitrary precision [17].
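To make this structure concrete, the following is a minimal sketch of a Gaussian RBF network with a linear output layer, trained here by a plain LMS update; the centers, widths, learning rate, and target function are illustrative choices of ours, not the paper's settings.

```python
import numpy as np

def rbf_forward(x, centers, widths, W):
    """Gaussian RBF network: h_j = exp(-||x - c_j||^2 / (2 b_j^2)), f = W^T h."""
    h = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return W @ h, h

# Hypothetical 1-input, 5-hidden-unit network trained by LMS to fit sin(x).
rng = np.random.default_rng(0)
centers = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
widths = np.full(5, 1.2)
W = np.zeros(5)
for _ in range(2000):
    x = rng.uniform(-3.0, 3.0, size=1)
    y, h = rbf_forward(x, centers, widths, W)
    W += 0.1 * (np.sin(x[0]) - y) * h          # gradient step on squared error
print(rbf_forward(np.array([1.0]), centers, widths, W)[0])  # ~ sin(1) = 0.84
```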
However, it is difficult to determine parameters such as the connection weights, center widths, and center values of the Gaussian functions [18].

RBF Neural Network Approximation Algorithm

The RBF network input/output algorithm [17] is

h_j(x) = exp(−‖x − c_j‖² / (2 b_j²)), f(x) = W*ᵀ h(x) + ε, (1)

where x is the network input, i denotes the i-th input of the network input layer, j denotes the j-th node of the hidden layer, h_j is the output of the Gaussian function, c_j and b_j are its center value and center width, W* is the ideal weight of the network, and ε is the error of the ideal neural network approximation. If the network input is taken as x, then the output of the network is

f̂(x) = Ŵᵀ h(x), (2)

where f̂ is the network output and Ŵ is the estimated weight of the neural network.

Optimization of the RBF Neural Network based on the Genetic Algorithm

Current methods used to improve the performance of RBF networks include fuzzy algorithms and intelligent optimization algorithms [24-27]. In fuzzy systems, the design of fuzzy sets, membership functions, and fuzzy rules is based on empirical knowledge, and the algorithm itself does not have the ability to learn autonomously [25]. However, intelligent optimization algorithms represented by the genetic algorithm [26,27] can realize self-learning by using the laws of biological evolution, ultimately converging to the most adaptive group to obtain the optimal or most satisfactory solution. In addition, the main advantages of genetic algorithms [17] are listed below:

1) To solve any form of objective function and constrained optimization problem, whether linear or nonlinear, discrete or continuous, the genetic algorithm does not require any mathematical model. Owing to its evolutionary nature, the inherent structure of the problem need not be known during the search process.

2) The ergodicity of evolutionary operators makes genetic algorithms very efficient at global search in a probabilistic sense.

3) For a variety of special problems, genetic algorithms provide great flexibility to mix and construct domain-independent heuristics, thereby ensuring the effectiveness of the algorithm.

When the GA runs its optimization calculations, a large system size will cause poor optimization performance. Similarly, a population lacking important characteristic genes can cause the optimization process to converge prematurely and fail to reach the optimal solution [28]. In order to solve these problems, this paper modifies the adaptation value and mutation probability to improve the traditional optimization algorithm. The improved genetic algorithm (IGA) is then used to determine the connection weight parameters of the RBF network. The genetic optimization process of the RBF neural network is shown in Figure 2 and can be described as follows:

Step 1: The N subpopulations are initialized, and the initial network parameters (connection weights, center widths, and center values of the Gaussian functions) are encoded as genes.

Step 2: The N subpopulations carry out evolutionary operations independently.

Step 3: Performance judgement. If no, go to the next step. If yes, end and go to the RBF learning stage: first, autonomous learning of the RBF network and error calculation; then, parameter updating; finally, performance judgement once more (if yes, end; if no, return to autonomous learning and error calculation).

Step 4: The average fitness of the N subpopulations is calculated.

Step 5: The selection and crossover operations are performed separately.

Step 6: Mutation operations are performed separately.

Step 7: The new N subpopulations are recalculated, and the process returns to Step 2.
Adaptation Value Correction

If f is the adaptation value calculated in the usual way and f̄ is the average adaptation value [17], then the revised adaptation value is obtained by blending the raw value f with the population average f̄ through a constant k₁ ≤ 1. The effect of this correction is to reduce the influence of genes whose adaptation value is too large, slow down their convergence, and expand the search space. According to the model theory, after the adaptation value is modified, the average adaptation value becomes increasingly large as the genetic algorithm evolves, and the optimization develops in the ideal direction.

Correction of Mutation Probability

The mutation probability is corrected adaptively over the generations, where i denotes the generation index of the GA's evolution, p_mh and p_ml denote the upper and lower limits of the mutation probability, and k₂ is a constant less than 1. The N sub-populations run their genetic optimization operations independently for a given number of generations, and the N populations and their average fitness are then evaluated. This correction automatically increases the mutation probability when premature convergence occurs, so as to expand the search space. At the same time, in order to prevent the best results from being destroyed by an overly high mutation probability, the best samples are kept at each iteration. Typical values of these parameters were adopted in the experiments.

In order to obtain satisfactory approximation accuracy, an absolute error index is used as the objective function to be minimized:

J = Σ_{i=1}^{N} |e(i)|,

where N is the total number of approximation steps and e(i) is the approximation error at the i-th step.
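The sketch below shows one way such a corrected-fitness, elitist GA generation can be realized. Because the paper's exact correction formulas are not reproduced above, the mean-compression form of `corrected_fitness`, the Gaussian mutation, and all numeric constants are our own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrected_fitness(f, k1=0.5):
    """Assumed form of the adaptation-value correction: compress raw
    fitness toward the population mean to curb dominant individuals."""
    return f.mean() + k1 * (f - f.mean())

def evolve(pop, fit, p_m):
    """One generation of one subpopulation: roulette selection on corrected
    fitness, one-point crossover, Gaussian mutation, and elitism (the text
    states the best samples are kept at each iteration)."""
    n, d = pop.shape                      # d >= 2 genes per chromosome
    p = corrected_fitness(fit)
    p = p - p.min() + 1e-9                # shift positive for roulette wheel
    idx = rng.choice(n, size=n, p=p / p.sum())
    new = pop[idx].copy()
    for i in range(0, n - 1, 2):          # one-point crossover per pair
        c = rng.integers(1, d)
        new[i, c:], new[i + 1, c:] = new[i + 1, c:].copy(), new[i, c:].copy()
    mask = rng.uniform(size=new.shape) < p_m
    new[mask] += rng.normal(0.0, 0.1, mask.sum())
    new[0] = pop[np.argmax(fit)]          # elitism: protect the best sample
    return new

# N = 4 subpopulations evolving independently, each chromosome encoding the
# RBF parameters (connection weights, center widths, center values) flattened.
subpops = [rng.normal(size=(20, 15)) for _ in range(4)]
```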
USV Ideal Sliding Mode Control based on the RBF Neural Network

Because the SMC control law contains the uncertainty terms f(x₂) and g, the optimized RBF neural network is used to approximate the control input command and achieve ideal control of the USV. First, the initial network parameters (connection weights, center widths, and center values of the Gaussian functions) are encoded as genes, and the optimized parameters output by the IGA are accepted. Then, the absolute error of the control system is determined by the optimized RBF network. Next, the RBF network output is transmitted to the SMC control unit after autonomous learning. Lastly, an intelligent control algorithm for USV navigation motion is introduced based on the SMC method with Lyapunov stability theory. The intelligent control algorithm flow is shown in Figure 3.

USV Motion Mathematical Model

In the presence of external disturbances and system parameter perturbations, Nomoto's equation is used as the dynamic mathematical model of USV planar motion (Equation (1)) [29,30], where αᵢ, i = 1, 2, 3, are real-valued constants. In order to perform the sliding mode control design, Equation (1) is transformed into the control-affine form ẍ = f(x₂) + g u (Equation (8)), where f(x₂) lumps the system dynamics, g is the control gain, and u is the control input.

Ideal Sliding Mode Control based on the RBF Neural Network

The USV navigation controller is designed with SMC technology and Lyapunov stability theory [31]. The optimized RBF network is applied to approximate the control law of the USV subject to system parameter perturbation. According to SMC design theory, the sliding mode surface function is defined as

s = ė + λe, (9)

where λ > 0, e = x − x_d, x is the actual system output, and x_d is the system input command. Differentiating Equation (9) along the system dynamics yields Equation (10). There exists an ideal sliding mode control law [32] which, in the absence of external interference and system parameter perturbation, makes the controlled system (Equation (8)) globally stable; the convergence speed of the system can be adjusted by the parameter λ. The ideal sliding mode control law takes the equivalent-control form

u* = g⁻¹ (ẍ_d − f(x₂) − λė). (11)

This control law has limited practical use because the system function f(x₂) and the gain g are unknown. The optimized RBF neural network is therefore used to approximate u* and achieve ideal control of USV navigation. The inputs of the RBF neural network contain the system variables and the sliding mode surface function, defined on a compact set. The ideal output of the RBF neural network can be rewritten as u* = W*ᵀ h(x) + ε [18], and its estimate is û = Ŵᵀ h(x), where Ŵ is the estimated value of W*. The network learning adaptive law is designed as a function of the sliding variable, with a gain matrix Γ = Γᵀ > 0. Combining Equations (10) and (15) gives Equation (17); similarly, Equations (14) and (17) produce Equation (18). In order to suppress external disturbances of the system, the ideal sliding mode control law of Equation (11) is further augmented with a robust switching term, and Equation (20) is then deduced from Equations (18) and (19). On this basis, a Lyapunov function of the standard form V = s²/2 + W̃ᵀΓ⁻¹W̃/2 is constructed (Equation (21)), where W̃ = W* − Ŵ. Equation (22) is inferred from Equation (21), the inequalities of Equation (23) then lead to Equation (24), and Equation (25) follows from Equation (24). According to Lemma B.5 in [33], the inequality in Equation (25) is solved to give Equation (26). According to Equation (21), V(0) is a fixed value at the initial time, similar to the constant C₁, so Equation (26) can be rewritten as Equation (27). The first term in Equation (27) gradually decays to 0 over time. Therefore, as long as the second term is guaranteed to be very small, it is guaranteed that V → 0 as t → ∞. The control system thus remains stable provided the switching gain dominates the bound of the lumped disturbance and approximation error.

Computer Simulation Experiment Results and Analysis

The USV named "Lanxin" [29] was adopted for the computer simulations, with the Nomoto model parameters corresponding to a speed of 8.5 knots. Considering the disturbances of wind and waves, the equivalent interference model can be replaced by a second-order wave transfer function driven by white noise [34], where w(s) is Gaussian white noise with zero mean and a power spectral density of 0.1, and z(s) is the second-order wave transfer function defined by Equation (29), in which ω₀ is the wave's dominant frequency, ζ is the damping coefficient, and k_ω and m_ω are constants.
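As a concrete illustration of this control structure, the sketch below closes the loop on a toy second-order heading model. The plant coefficients, the gains, the smoothed switching term, and the adaptive law Ŵ' = -γ·s·h(s) are our own stand-ins for the paper's Equations (8)-(20), not its actual design.

```python
import numpy as np

# Plant: toy second-order heading model xddot = f(x) + g*u + d, with f and g
# unknown to the controller. All gains and coefficients below are assumptions.
dt, T, g = 0.01, 20.0, 1.5
lam, eta, gamma = 2.0, 0.5, 20.0       # surface slope, switching gain, adaptation gain

centers = np.linspace(-2.0, 2.0, 9)    # Gaussian centers over the sliding variable
width = 1.0

def h(s):
    """Hidden-layer outputs h_j = exp(-(s - c_j)^2 / (2 b^2))."""
    return np.exp(-((s - centers) ** 2) / (2.0 * width ** 2))

W = np.zeros_like(centers)             # estimated output weights
x = np.array([0.0, 0.0])               # [heading (rad), heading rate]
x_d = np.deg2rad(30.0)                 # 030 deg course command

for k in range(int(T / dt)):
    t = k * dt
    e, edot = x[0] - x_d, x[1]
    s = edot + lam * e                      # sliding surface s = e_dot + lam*e
    u = W @ h(s) - eta * np.tanh(s / 0.05)  # RBF term + smoothed switching term
    W = W - gamma * s * h(s) * dt           # assumed adaptive law
    f = -0.8 * x[1] - 0.2 * x[1] ** 3       # unknown plant dynamics (toy)
    d = 0.02 * np.sin(0.6 * t)              # bounded external disturbance
    x = x + dt * np.array([x[1], f + g * u + d])

print(f"final heading: {np.rad2deg(x[0]):.2f} deg (target 30.00)")
```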
Verification of Practical Performance Experiments

The practical performance of the intelligent control algorithm was confirmed through five computer simulation experiments. In the five experiments, the control parameter of the sliding mode surface was kept at the same value. During the experiments, the optimization process tended to converge once the number of optimization executions exceeded seven. At the same time, in order to facilitate the data calculation, an odd number of experiments was performed; thus, the first nine sets of experiments were intercepted here, as shown in Figure 4. Compared with the general GA, the IGA optimization process converged faster, and the objective function value was smaller. As shown in Table 1, the average value of the objective function under GA optimization is 974.33, and the average/median ratio is 0.998. The average value of the objective function under IGA optimization is 821.44, which is 15.7% smaller than that of the GA; the average/median ratio is 0.975, which is 2.3% smaller than that of the GA.

The specific results of the ship motion control under the various experimental conditions are shown in Table 2. In this table, A represents the heading change experiment. In this experiment, the controller parameters were set as described above. The first-order response pattern was used to perform the tracking experiments. The amplitude was 030°, and the initial value of the experiment was 000°, without interference. The test results are shown in Table 2, row A. B represents the model perturbation experiment. The difference from experiment A is the added perturbation of the system parameters: the parameter values of the USV are perturbed by 40%. The test results are shown in Table 2, row B. C represents the sinusoidal interference experiment. The difference from experiment A is the added external sinusoidal interference: the system output is subjected to sinusoidal interference with an amplitude of 2° and a frequency of 0.1 rad/s. The test results are shown in Table 2, row C. D represents the white noise interference experiment. The difference from experiment C is that the external interference changes to white noise: the system output is subjected to white noise with an amplitude of 0.1. The test results are shown in Table 2, row D. E represents the compound interference experiment. The difference from experiment A is the added compound external interference: the parameter values of the USV are perturbed by 40%, and the system output is subjected to white noise with an amplitude of 0.1. The test results are shown in Table 2, row E. Based on the results of the experiments in Table 2, the effectiveness and practicality of the intelligent control algorithm are verified, because the control performance indicators satisfied the engineering design requirements.

Verification of Advanced Performance Experiments

In order to verify the advanced performance of the intelligent control algorithm of the USV based on the IGA-optimized RBF neural network proposed in this paper, the tracking control results of this method are compared with the results of the sliding mode control algorithm based on an RBF neural network and of the algorithm based on a fuzzy neural network. Three control performance parameters are then compared between the three methods. The three algorithms are applied to the navigation control of the USV under the same conditions in the following three respects. First, the initial course is 000°, and the tracking course is 030°. Second, the second-order wave model is used to determine the compound interference of wind and waves. Third, a servo-driven model of a steering gear is included in the USV control model. The navigation control of the USV is then determined using the three algorithms mentioned above, and the results of the comparison experiment are shown in Figure 5.

Control Performance of the Intelligent Algorithm based on an IGA-Optimized RBF Network

As can be seen from Table 2, first, the maximum value of the heading stabilization time is 80 s, so the tracking speed is fast; after 80 s, the heading tracking output remains unchanged, so the tracking performance of the control system is stable and reliable. Second, the maximum value of the system's overshoot is 1.2% (less than 5%), which meets actual engineering standards.
Third, the maximum value of the control input chattering is 2%, which is ideal. This shows that the intelligent control algorithm for USVs based on the IGA-optimized RBF neural network proposed in this paper can effectively track the target course with high tracking accuracy. Figure 5 shows that, under the same compound interference, the intelligent control algorithm based on the IGA-optimized RBF neural network stabilizes fastest, and its control chattering is also the weakest. In practice, the weaker the chattering, the lower the load on the steering gear, which helps protect it.

Comparison of USV Navigation Control

Table 3 shows that the control performance of the intelligent control algorithm based on the IGA-optimized RBF neural network is better than that of the sliding mode control algorithm based on the fuzzy neural network, which in turn is better than that of the sliding mode control algorithm based on the RBF neural network. Compared with the other two methods, the intelligent control algorithm for USVs based on the IGA-optimized RBF neural network is therefore suitable for the research and development of UAS control systems.

Conclusions

When an RBF neural network is used for the modeling and control of a motion system, it easily falls into local extrema because its parameters mainly depend on experience; thus, it is still difficult to apply the RBF network directly in such designs. In response, this article applied the IGA to optimize the RBF network for effective application in the development of UAS control systems. Through computer simulation experiments, we found that: (1) the intelligent control algorithm for USVs based on the IGA-optimized RBF neural network proposed in this paper can effectively track the target course, and its tracking accuracy is high; (2) compared with the sliding mode control algorithms based on a fuzzy neural network and on an RBF network, the intelligent control algorithm based on the IGA-optimized RBF neural network is more advanced and can thus be applied to the research and development of UAS control systems. In addition, it should be noted that the improvement of the GA in this paper is performed by modifying its adaptation value and mutation probability; there may be better ways to improve the GA. At the same time, there are no observational data on the external interference, which, to a certain extent, can cause chattering of the control input.
5,268.2
2020-03-18T00:00:00.000
[ "Computer Science" ]
Sub-femtometer-resolution absolute spectroscopy with sweeping electro-optic combs

Optical frequency combs with evenly spaced lines over a broad bandwidth have revolutionized the fields of optical metrology and spectroscopy. Here, we propose a fast interleaved dual-comb spectroscopy with sub-femtometer resolution and absolute frequency, in which two electro-optic frequency combs are swept. An electrically modulated stabilized laser enables an ultrahigh resolution of 0.16 fm (or 20 kHz in optical frequency) and single-shot measurement in 90 ms. A total of 20 million points are recorded spanning 3.2 nm (or 400 GHz) of bandwidth, corresponding to a spectral sampling rate of 2.5 × 10⁸ points/s under the Nyquist limitation. Besides, considering the trade-off between measurement time and spectral resolution, a fast single-shot measurement is also realized in 1.6 ms with 8 fm (or 1 MHz) resolution. We demonstrate a 25-averaged result with 30.6 dB spectral measurement signal-to-noise ratio (SNR) by reducing the filter bandwidth in demodulation. The results show great promise for precise measurement with flexibly fast refresh time, high spectral resolution, and high SNR.

Introduction

Optical frequency combs (OFCs) 1,2, originally built from femtosecond mode-locked lasers (MLLs), have revolutionized the field of spectroscopy with phase-coherent spectral lines and broad bandwidth 3−5. Several techniques have been developed to extract the spectroscopic content encoded on the comb lines, such as virtually imaged phased array (VIPA)-based spectroscopy 6 and Fourier transform spectroscopy (FTS) 7. Dual-comb spectroscopy (DCS) emerged from comb-based techniques and has been implemented in linear spectroscopy 8−13, nonlinear spectroscopy 14,15, and microscopy 16,17. By beating two OFCs with slightly different repetition rates, DCS retrieves each frequency component in the radio frequency (RF) domain, which may fully exploit the spectral resolution and measurement bandwidth determined by the OFCs. In recent years, novel approaches for OFC generation have been explored, which facilitate DCS at different wavebands with versatility 9,11,12. Mutual coherence establishment between two independent combs requires phase-locking circuits 10,18 or phase corrections 19−21. OFCs may also be generated by electro-optic modulation from a single seed laser to form a DCS system with intrinsic mutual coherence 22−29, which significantly reduces the system complexity. Electro-optic frequency combs (EOFCs) 30,31 have been demonstrated with nonlinearly broadened bandwidth 32 or ultra-dense line-spacing 33,34, enabling flexible electro-optic DCS for various spectroscopic applications. Spectral interleaving for comb-based spectroscopy may improve the spectral resolution beyond the line-spacing limit and may be realized by stepping either the line-spacing or the central frequency 13,35−38. Four interleaved spectra were obtained with MLLs using a 25 MHz step 36. OFCs with large line-spacing, such as microresonator combs, may be interleaved hundreds of times with a step of 80 MHz by tuning both the cavity length and the pump laser frequency 38. Temperature tuning of quantum cascade laser (QCL) combs 37 interleaves the spectra to 80 MHz. Tuning the driving current of the seed laser improves the EOFC spectral resolution from 25 GHz to 100 MHz 39. The spectral resolution of interleaved combs is commonly limited by the frequency accuracy during the adjustment of the comb line frequency.
Besides, a long measurement time may weaken one significant advantage of DCS, namely the ability to reach a Nyquist-limited spectral sampling rate 40. In this paper, we propose a fast interleaved dual-comb spectroscopy with sub-femtometer resolution and absolute frequency, which simultaneously realizes high spectral resolution, large bandwidth, and fast measurement speed under the Nyquist limitation. The stabilized seed laser provides an absolute optical frequency reference and is used to generate the swept lightwave by external RF modulation with low sweep nonlinearity error and fast sweep speed. The probe and local EOFCs seeded by the swept source have a slight repetition rate difference to build a dual-comb interferometer. Each comb-line pair records a high-resolution spectrum and is located at a different frequency in the electrical domain, so that the channels may be separated by a digital filter to recover the whole spectrum. In the experiments, an ultrahigh resolution of 0.16 fm (or 20 kHz in optical frequency) is achieved in 90 ms, thanks to the high accuracy of the electrically modulated frequency. A total of 20 million points are recorded spanning 3.2 nm (or 400 GHz) of bandwidth. The spectral sampling rate, defined as the number of acquired spectral points per unit time, reaches 2.5 × 10⁸ points/s, which is one quarter of the analog-to-digital converter (ADC) sampling rate under the Nyquist limitation. Considering the trade-off between sweep time and spectral resolution, a fast measurement may also be realized in 1.6 ms with 8 fm (or 1 MHz) resolution for dynamic measurement situations. The high-resolution reflectance spectrum of a high-Q-factor fiber Fabry-Perot cavity is also measured with an ultra-fine EOFC, and the consistent results validate the proposed method. Besides, flexible filter bandwidth adjustment in demodulation enables high signal-to-noise ratio (SNR) measurement of an H¹³CN gas cell at the cost of resolution. The SNR reaches 30.6 dB after averaging over 50 ms with a spectral resolution of 0.4 pm (or 50 MHz).

Operation principles

The operation principle is illustrated in Fig. 1. This method consists of two primary components: an ultra-linearly frequency-swept optical source generation system and an electro-optic dual-comb interferometer (EO-DCI) system. A continuous-wave (CW) laser is externally modulated to generate the swept optical source. The RF driving signal is a sweeping signal with a frequency of μ(t) = γt (0 ≤ t ≤ T₀), where γ is the frequency sweep rate and T₀ is the sweep time. The generated optical source with sweep range B_s = γT₀ may have good sweep linearity and a fast sweep rate thanks to the use of a programmable RF signal with low phase noise, which contributes to high-resolution spectroscopy and fast measurement speed. The probe and local combs with line-spacings f_p and f_l are generated with a small detuning ∆f = f_p − f_l in the EO-DCI seeded by the swept source. Each comb-line frequency of the probe comb may be expressed as f_pm = f_c + m f_p + γt, where f_c is the center frequency and m is the comb-line index. The spectrum of the device under test (DUT) is recorded by the probe comb. Since the probe comb is swept, each comb line measures a bandwidth of B_s instead of a single point as in a conventional DCS system. The measurement results may be spliced to cover the whole bandwidth of the probe comb when f_p ≤ B_s. The interleaved spectrum may reach a resolution limited by the optical sweep nonlinearity. The local comb is simultaneously swept (the m-th line frequency is f_lm = f_c + m f_l + γt), so the probe comb lines are separately mapped to the RF domain at the frequencies m∆f, similar to the principle of conventional DCS. The spectrum recorded in discrete channels at different RF frequencies may be retrieved by using a digital filter. Correspondingly, the temporal resolution of each channel is limited by the filter bandwidth to t_r = 1/B_f, where B_f ≤ ∆f is the available filter bandwidth. Therefore, for each filtered channel, the spectral resolution is the product of the sweep rate and the temporal resolution, f_r = γ t_r = B_s/(B_f T₀) ≥ f_p/(∆f T₀), which illustrates a trade-off between spectral resolution and measurement time. With a proper frequency relation, DCS may fully utilize the detection bandwidth under the Nyquist limitation, S_AD/2 = K∆f, where K is the number of comb lines and S_AD is the sampling rate of the ADC. Similarly, the proposed method also makes full use of the detection bandwidth and records K f_p/f_r spectral points in a time T₀. The spectral sampling rate S_sp (defined as the number of recorded spectral points per unit time) may thus be expressed as S_sp = K f_p/(f_r T₀) = K∆f, which reaches the Nyquist bound S_AD/2 when the comb channels fill the full detection bandwidth. In practical experiments, since the nonlinearity error ∆μ of the swept source determines the frequency accuracy of each spectral point, the limitation of the spectral resolution may be expressed as max(f_p/(∆f T₀), ∆μ). Besides, a narrow filter bandwidth B_f may increase the time-domain SNR at the cost of spectral resolution. The user-defined filter bandwidth in demodulation provides flexibility for different applications.
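A worked numerical example of these relations, using the fast-measurement parameters quoted later in the paper (1 GS/s ADC, 25 channels, ∆f = 10 MHz, 1.6 ms sweep over 16 GHz); the script itself is only an illustration of the formulas above and reproduces the quoted 1 MHz resolution and 2.5 × 10⁸ points/s sampling rate, half of the S_AD/2 Nyquist bound.

```python
# Resolution/speed trade-off calculator for sweeping interleaved DCS.
f_p  = 16e9     # probe comb line spacing, matched to sweep range B_s (Hz)
df   = 10e6     # repetition-rate difference ∆f (Hz)
T0   = 1.6e-3   # sweep time (s)
B_f  = 10e6     # digital filter bandwidth, B_f <= ∆f (Hz)
K    = 25       # number of comb-line channels
S_AD = 1e9      # ADC sampling rate (S/s)

B_s  = f_p                     # sweep covers one line spacing
t_r  = 1.0 / B_f               # temporal resolution per channel: 100 ns
f_r  = B_s / (B_f * T0)        # spectral resolution -> 1 MHz
S_sp = K * f_p / (f_r * T0)    # spectral sampling rate -> 2.5e8 points/s

print(f"f_r = {f_r:.3e} Hz, S_sp = {S_sp:.3e} pts/s, "
      f"Nyquist bound = {S_AD / 2:.3e} pts/s")
```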
The local comb is simultaneously swept (the m-th line frequency is f lm = f c + mf l + γt), so the probe comb lines are separately located to RF domain at the frequencies of m∆f, similar as the principle of conventional DCS. The spectrum recorded to discrete channels at different RF frequencies may be retrieved by using a digital filter. Relatively, the temporal resolution of each channel is limited by the filter bandwidth to t r = 1/B f , where B f ≤ ∆f is the available filter bandwidth. Therefore, for each filtered channel, the spectral resolution may be the product of sweep rate and temporal resolution as f r = t r γ = B s /B f T 0 ≥ f p /∆fT 0 , which illustrates a trade-off between spectral resolution and measurement time. With proper frequency relation, DCS may fully utilize the detection bandwidth under the Nyquist limitation as S AD /2 = K∆f, where K is the number of comb line, and S AD is the sampling rate of ADC. Similarly, the proposed method also makes full use of detection bandwidth, and records Kf p /f r spectral points in T 0 time. The spectral sampling rate S sp (defined as recorded spectral points per unit time) may reach half of ADC sampling rate under Nyquist limitation, expressed as In practical experiments, since the nonlinear error of swept source ∆μ determines the frequency accuracy of each spectral point, the limitation of the spectral resolution may be expressed as max (f p /∆fT 0 , ∆μ). Besides, a narrow filter bandwidth B f may increase the time domain SNR at the cost of spectral resolution. The userdefined filter bandwidth in demodulation provides flexibility for different applications. Experimental setup A specific experimental setup of the DCS system is depicted in Fig. 2. A CW fiber laser (NKT, Adjustik E15) is locked to a stable Fabry-Perot cavity (Stable Laser System) with finesse of 400 k by using Pound-Drever-Hall technique. The stabilized laser operates at the wavelength of 1550.531 nm (or 193.348 THz in optical frequency) with a daily drift of less than 100 kHz, which provides the absolute optical frequency reference. The output of the stabilized laser is modulated by a Mach-Zehnder modulator (MZM), which is driven by an arbitrary waveform generator (AWG, Keysight M8195A) with a sampling rate of 64 GS/s. The electrically driven RF signal is a sinusoidal signal with a frequency swept from 2 GHz to 18 GHz in 1.6 ms. The linearly-swept sideband is generated after the modulation, and then injected into an isolator-removed distributed feedback laser diode (DFB-LD) which serves as a slave laser via an optical circulator. Due to the injection locking effect 43 , the 1st-order positive sideband is selected and amplified to 10 mW, in which the carrier and all other sidebands are suppressed. Therefore, a swept lightwave with low power fluctuation and low nonlinear error is generated. Then, the swept lightwave is used as the light source of Results Fast measurement in 1.6 ms As shown in Fig. 3(a), we characterize the property of the swept lightwave by using an unbalanced Mach-Zehnder interferometer 41 . The length of delay fiber is 200 m, and the unwrapped phase φ of the beat frequency is used to calculate instantaneous frequency f i based on the equation of f i = c∆φ/2πnL , where c is the velocity of light and n is refractive index of fiber, to evaluate the sweep nonlinear error. The lightwave sweeps 16 GHz bandwidth in 1.6 ms as shown in Fig. 3(b). The standard deviation of the nonlinear error is calculated to be 9.13 kHz. 
The optical spectra of the probe comb and the local comb, when the seed lightwave is not swept, are shown in Fig. 3(c). The spectrum of the probe comb, when the seed lightwave is swept to cover the gap between adjacent lines, is also shown in Fig. 3(c). Temporal data of the reference branch recorded in 1.6 ms are shown in Fig. 4(a); they have a relatively stable power thanks to the use of optical injection locking. A zoom-in figure containing four interferograms within 0.4 μs is shown in Fig. 4(b). The non-pulsed interferograms may reduce the quantization noise. As depicted in Fig. 4(c), the electrical spectrum is obtained by digital Fourier transformation and contains 25 flat lines, circled by the blue dotted line. The slope of both the signal and noise powers is introduced by the frequency response of the BPD. The center frequency is located at 200 MHz with a small detuning introduced by the optical path difference. The channels are equally spaced with an interval of ∆f = 10 MHz so that they can be distinguished in the frequency domain. A digital filter with a bandwidth of 10 MHz is used to demodulate all the lines, and the filtered temporal waveform of the 3rd channel, circled by the green dotted line, is shown in Fig. 4(d). The waveform is a sinusoidal signal, as shown in the zoom-in figure. The varying intensity over 1.6 ms reflects the power fluctuation and noise; correspondingly, the varying intensity of the measurement branch represents the recorded optical spectrum of the DUT. The envelope is obtained by a digital Hilbert transformation and then moving-averaged with a 100-point window. The temporal resolution is 100 ns, corresponding to the 10 MHz filter bandwidth. The extracted envelopes of the measurement branch without the DUT and of the reference branch are shown in Fig. 4(f). The power fluctuation is eliminated by comparing the results of the two branches. The calibrated spectrum over a 16 GHz bandwidth is recovered with a spectral resolution of 1 MHz, as shown in Fig. 4(g). After successively demodulating all 25 channels, we obtain the whole spectrum spanning 400 GHz. Since each spectrum with 400 GHz bandwidth and 1 MHz resolution is recorded in 1.6 ms, the equivalent spectral sampling rate reaches 2.5 × 10⁸ points/s, which is half of the Nyquist upper limit (the sampling rate of the ADC is 1 GS/s), thanks to the real-time interleaving. Figure 4(h) shows the demodulated spectrum without the DUT, in which the power differences among the comb lines are also eliminated while the SNR differences still exist. The stabilized seed laser and the ultra-linear sweeping ensure the frequency accuracy of the measured spectrum. The standard deviation of the spectrum shown in Fig. 4(h) is calculated to be 2.3169%, which represents the standard deviation of the spectral measurement error σ_H. The averaged spectral measurement SNR is therefore calculated to be 43.2 from SNR = 1/σ_H. The product M×SNR is calculated to be 1.73 × 10⁷ in 1.6 ms, where M is the number of spectral points.

The reflectance spectrum of a fiber Fabry-Perot cavity is measured by the proposed system with the setup shown in Fig. 5(b). The FFPI is composed of a pair of fiber Bragg gratings (FBGs) with a reflectivity of 99%, and the cavity length is 3.3 cm. The reflectance spectrum spanning 400 GHz with 1 MHz resolution is recorded in 1.6 ms and shown in Fig. 5(a). The free spectral range (FSR) of the F-P cavity is measured to be 3 GHz. The deepest resonance, centered at 1.165 GHz relative frequency, is depicted in Fig. 5(c).
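The filter-envelope-average chain just described can be sketched as follows; the Butterworth filter order, the calibration by channel-wise division, and the exact channel frequencies are our own reading of the text rather than the authors' processing code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def demodulate_channel(x, fs, f_ch, B_f, avg=100):
    """Extract one comb-line channel: band-pass around its RF frequency,
    take the Hilbert envelope, then moving-average (filter -> envelope ->
    smoothing, as in the text; parameter names are ours)."""
    b, a = butter(4, [(f_ch - B_f / 2) / (fs / 2),
                      (f_ch + B_f / 2) / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    return np.convolve(env, np.ones(avg) / avg, mode="same")

# 25 channels spaced by ∆f = 10 MHz around 200 MHz, ADC at 1 GS/s (from text).
fs, df = 1e9, 10e6
channels = 200e6 + df * (np.arange(25) - 12)

def recover_spectrum(meas, ref):
    """Calibrate each channel against the reference branch and splice."""
    return np.concatenate([
        demodulate_channel(meas, fs, fc, df) / demodulate_channel(ref, fs, fc, df)
        for fc in channels])
```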
A UFEOFC with 1 MHz line-spacing is resolved in a self-heterodyne interferometer. The demodulated result with higher SNR is also shown in Fig. 5(c) in red, and the consistency between the two results validates the proposed method.

High SNR measurement over 30 dB

As described in the "Operation principles" section, an exchange between spectral resolution and time-domain SNR may be realized by adjusting the bandwidth of the digital filter. Since, in the demodulation process, the time-domain SNR determines the spectral measurement SNR, a narrow-bandwidth filter may be used to realize high-SNR measurement of relatively wide spectral resonances such as gas absorption lines. A transmission spectrum of a H13CN gas cell is measured using the same experimental setup. The single-pass cell has a length of 15 cm at 25 Torr (1 Torr = 133.322 Pa; Wavelength References, HCN-13-25) at a laboratory temperature of about 297 K. To improve the SNR, we set the filter bandwidth to 200 kHz in demodulation, corresponding to a temporal resolution of 5 μs and a spectral resolution of 50 MHz. A single-shot spectroscopic result centered at 193.348 THz is shown in Fig. 6(a), containing the absorption lines P9–P13 of the 2ν3 band. A 25-times-averaged result for further SNR improvement is also shown in Fig. 6(b). Zoom-in figures of the P11 line are shown in Fig. 6(c), together with a fitted curve based on the Voigt function for comparison. The residual errors between the results and the fitted curve are shown in Fig. 6(d), which represent the spectral measurement error. The standard deviations σ_H are 0.378% and 0.087% for the single-shot measurement and the 25-averaged measurement, respectively, and the spectral measurement SNR may also be calculated by SNR = 1/σ_H. The SNR of the 50-MHz-resolution single-shot measurement is calculated to be 264.6, which proves the validity of SNR improvement by digital filter bandwidth reduction, similar to mode-filtering in DCS. Besides, with 25 times averaging, the SNR reaches 1148.2 (corresponding to 30.6 dB) in 50 ms.

Ultrahigh resolution of 20 kHz

In the above experiments, the 1 MHz resolution is limited by the 1.6 ms sweep time. To reach the tens-of-kHz spectral resolution limit set by the sweep nonlinearity, we increase the sweep time to 90 ms, covering a range of 18 GHz. The swept lightwave is again characterized by the unbalanced Mach-Zehnder interferometer shown in Fig. 3(a). The unwrapped phase of the beat signal shows three hoppings, circled in Fig. 7(a), since the RF signal could not be swept continuously at that time. The nonlinear error from 20 ms to 90 ms is shown in the zoom-in figure, and the standard deviation is calculated to be 17 kHz. The line spacings of the probe and local combs are also set to 18 GHz and 17.99 GHz, corresponding to the sweep range. The other settings are the same as those used in the experiments described in the "Experimental setup" section. We measured the reflectance spectrum of another fiber F-P cavity with a cavity length of 18.9 cm. The electrical spectrum obtained by FFT for the measurement branch is shown in Fig. 7(b). Considering the reflection region, the six channels circled in blue are demodulated. The filter bandwidth is 10 MHz, corresponding to a temporal resolution of 100 ns. The spectral resolution is 20 kHz, and the demodulated spectrum over a 108 GHz bandwidth with 20 kHz resolution is shown in Fig. 7(d). Here, the points during the phase hoppings, circled in red, are removed, which may lose the spectra of a few resonances.
The electrical spectrum of the reference branch shown in Fig. 7(c) shows that at least 23 channels are simultaneously recorded over a 414 GHz bandwidth. In total, 20.125 million spectral points are measured in 90 ms, corresponding to a spectral sampling rate of 2.24×10^8 points/s. A zoom-in figure of the circled resonance in Fig. 7(d) is shown in Fig. 7(e) on a linear scale. This resonance is also measured by using a UFEOFC with 20 kHz line-spacing, using the same setup shown in Fig. 5(d). The measurement results of the two methods are shown in Fig. 7(e). The SNR is relatively low compared to the UFEOFC method, since the proposed method provides a much wider bandwidth. Nevertheless, it can still characterize the narrow resonance, whose FWHM is measured to be about 600 kHz, as the Q-factor of the cavity is over 3×10^8.

Discussion and conclusions

We have proposed a novel interleaving DCS technique with 0.16 fm (or 20 kHz) spectral resolution, which is three orders of magnitude narrower than existing demonstrations. The stabilized seed laser provides an absolute optical frequency reference without additional calibration, and is then externally modulated to generate the swept lightwave. The electrical driving signal with low nonlinearity error is fully exploited for ultra-high resolution and fast measurement. 20.125 million spectral points are recorded in 90 ms spanning a 3.2 nm (or 400 GHz) bandwidth. The theoretical performance limit of interleaving DCS is analyzed and shown to reach the Nyquist limitation; this significant advantage of DCS had not been considered in previous interleaving DCS work. The spectral sampling rate, as a normalized factor characterizing overall performance, reaches 2.5×10^8 points/s, half of the Nyquist limitation. Based on the theoretical trade-off between sweep time and spectral resolution, a fast measurement may be realized in 1.6 ms. The refresh rate of 625 Hz is the fastest among interleaving DCS demonstrations, and is also remarkable among all existing DCS considering the resolution of 1 MHz and the bandwidth of 400 GHz. The reflectance spectrum of a high-Q-factor (over 10^8) fiber Fabry-Perot cavity demodulated by the proposed method is also measured with the ultra-fine EOFC to validate the performance. Flexible filter bandwidth adjustment in demodulation enables high-SNR measurement of a H13CN gas cell at the cost of resolution. The SNR reaches 30.6 dB after averaging over 50 ms with a spectral resolution of 50 MHz. This paper provides an effective method based on EOFCs for sub-fm-resolution absolute spectroscopy. Together with the flexible fast refresh time and high SNR, the proposed method may be implemented in various applications including the measurement of high-Q cavities, electromagnetically induced transparency, physical and biochemical sensing requiring hyperfine spectral measurement, and high-sensitivity applications such as greenhouse gas monitoring.
4,634
2020-01-01T00:00:00.000
[ "Physics" ]
Solubilization and Reconstitution of the Lactose Transport System from Escherichia coli* The lactose transport system from Escherichia coli was solubilized with octylglucoside and reconstituted into liposomes by an octylglucoside dilution procedure. The reconstituted proteoliposomes exhibited lactose counterflow and membrane potential-driven lactose transport. The lactose transport system of Escherichia coli is responsible for the active transport of β-galactosides into the cell (1). In 1963, Mitchell (2) postulated that this system functions as a proton-substrate co-transport system which is coupled to the metabolism of the cell via the transmembrane electrochemical proton gradient. Studies with intact cells and cytoplasmic membrane vesicles have provided strong support for this concept (see Ref. 3 for review). In addition, the kinetics (4-6) and substrate specificity (7) of the transport system have been extensively studied. The lacY gene, which codes for the lactose transport protein, has been cloned on a bacterial plasmid and the nucleotide sequence of the gene has been determined (8). The transport protein has been purified in an inactive form (9). Despite the fact that the lactose carrier represents one of the most extensively characterized active transport systems, little is known about its subunit structure, the molecular mechanism of active transport, or the mechanism by which the lactose carrier is regulated by the phosphotransferase system (10). Reconstitution of the carrier would provide an assay for the purification of the protein(s) responsible for β-galactoside transport. Furthermore, reconstitution of a purified transport system would greatly facilitate the determination of the molecular mechanisms of carrier function and regulation. Solubilization and reconstitution of lactose transport into transport-negative membrane vesicles have been reported (11). In addition, Padan et al. (12) have reported that lactose transport activity is lost after extraction of membrane vesicles with sodium cholate, and that transport activity in these vesicles is restored upon addition of exogenous phospholipid followed by detergent removal. However, previous attempts to solubilize and reconstitute the lactose transport system into liposomes have been unsuccessful. Several reports on the reconstitution of bacterial cation-substrate co-transport systems have appeared (13-16), one of which deals with the reconstitution of proline transport from E. coli (13). This communication describes the solubilization of the lactose transport system and its reconstitution into liposomes prepared from E. coli phospholipid.

* This work was supported by Public Health Service Grant AM05736 from the National Institute of Arthritis, Metabolism and Digestive Diseases and Grant PCM 78-00859 from the National Science Foundation. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. ‡ Supported by a National Science Foundation Predoctoral Fellowship. To whom correspondence should be addressed.
Preparation of Acetone/Ether-washed E. coli Lipid—Chloroform/methanol-extracted E. coli lipid was acetone/ether-washed by a modification of the method of Kagawa and Racker (19). Crude lipid extract, 50 ml of chloroform/methanol (9:1) containing 1 g of lipid, was evaporated to 5 ml under a stream of N2 gas. The material was suspended in 100 ml of N2-bubbled anhydrous acetone containing 2 mM 2-mercaptoethanol. The suspension was placed in a light-protected 250-ml flask under N2 gas and stirred at low speed (on top of a Styrofoam block) on a magnetic stirrer for 12 h at room temperature. The extract was filtered through Whatman No. 1 paper on a Buchner funnel with suction, and the insoluble material was scraped off the filter paper and immediately resuspended (with stirring) in 100 ml of anhydrous ether containing 2 mM 2-mercaptoethanol. This suspension was centrifuged in a glass bottle at 2500 × g (4000 rpm) for 15 min. The supernatant was carefully decanted and placed in a small flask and then evaporated to 10 ml under a stream of N2 gas. The solution was transferred to a preweighed test tube, and the remaining ether was evaporated as above. A small amount of chloroform was added to the tube, and the lipid was dispersed as a film on the bottom and sides of the tube by rotating the tube under a stream of N2 gas. The lipid was lyophilized for 3 h to remove remaining solvent. The tube containing the lipid was weighed, and the lipid was suspended in 2 mM 2-mercaptoethanol at 50 mg/ml. The lipid was Vortex-dispersed while under N2 gas, and the suspension was stored in 1-ml aliquots under N2 gas at -80°C. One-half of the starting material was recovered after acetone/ether washing. X-100, and the level in the bath was adjusted to give maximal agitation of the solution (20). Preparation of High Pressure French Press Vesicles—Cells (1.5 to 3 liters) were grown to late logarithmic phase with continuous shaking in modified medium 63 (21), which consisted of 0.1 M potassium phosphate, pH 7.0, 15 mM ammonium sulfate, and 0.8 mM MgSO4·7H2O. The medium was supplemented with 0.4% glycerol. The cells or membrane vesicles were kept at 4°C during all subsequent steps. The cells were harvested and washed once with modified medium 63. The cells were resuspended (0.2 g wet weight/ml) in 50 mM potassium phosphate, pH 7.5, 1 mM dithiothreitol, 20 mM lactose, 5 mM MgSO4 and then hand homogenized before the addition of 20 μg/ml of DNase and 0.5 mM phenylmethylsulfonyl fluoride. In some initial experiments, 0.5 mM phenanthroline (phenylmethylsulfonyl fluoride and phenanthroline are both protease inhibitors) was also added, but the presence or absence of this inhibitor did not affect the reconstitution. Cells were disrupted by passage through an Aminco French pressure cell (model 4-3398) at 19,000 p.s.i. total pressure and collected in a tube in an ice bath. Unbroken cells were removed by centrifugation at 11,700 × g (10,000 rpm) for 10 min. The supernatant was carefully removed and the centrifugation was repeated. The resulting supernatant was removed and the vesicles were sedimented by ultracentrifugation at 140,000 × g (45,000 rpm) for 2 h. The pellet was resuspended in 50 mM potassium phosphate, pH 7.5, 1 mM dithiothreitol, 20 mM lactose, 0.5 mM phenylmethylsulfonyl fluoride, and the centrifugation was repeated.
The vesicles were resuspended in the same buffer at a protein concentration of 30 to 70 mg/ml. Vesicles were divided into 50-μl aliquots, frozen in liquid N2, and stored at -80°C. Reconstitution of Lactose Transport—Lactose transport was reconstituted by the octylglucoside dilution procedure of Racker et al. (22). Two hundred ninety microliters of 50 mM potassium phosphate, pH 7.5, 5 μl of 100 mM dithiothreitol, 17 μl of 580 mM lactose, 7.3 μl of high pressure vesicles (0.5 mg of protein) and 40 μl of acetone/ether-washed E. coli lipid (from a 50 mg/ml stock) were added to a small test tube in an ice bath, and the tube was blended on a Vortex mixer. Octylglucoside (33 μl of a 15% solution in 50 mM potassium phosphate, pH 7.5) was added (final concentration, 1.25%) and the tube was blended on a Vortex mixer. The suspension was incubated at 4°C for 10 min, blended on a Vortex mixer again, and then centrifuged at 175,000 × g (45,000 rpm) (4°C) for 1 h. The supernatant (containing 0.2 to 0.3 mg of solubilized membrane protein) was carefully removed with a Pasteur pipette and mixed with 100 μl of bath-sonicated liposomes (5 mg of lipid) plus 8.3 μl of 15% octylglucoside (final concentration, approximately 1.25%). The mixture was blended on a Vortex mixer and then incubated at 4°C for 10 min. The suspension was then pipetted directly into 15 ml of 50 mM potassium phosphate, pH 7.5, 20 mM lactose, 1 mM dithiothreitol at room temperature, and the tube was blended gently on a Vortex mixer. The resultant proteoliposomes were sedimented by ultracentrifugation in a TY 42.1 rotor at 85,000 × g (35,000 rpm) for 1.5 h. The supernatant was decanted and the tube was wiped with a cotton-tipped applicator. The pellet was then resuspended, for counterflow, in 200 μl (final volume) of 50 mM potassium phosphate, pH 7.5, 20 mM lactose, 1 mM dithiothreitol. Resuspension was carried out by stirring the pellet with a glass rod and squirting the suspension up and down three times with a 50-μl Hamilton syringe. For membrane potential-driven lactose transport, lactose was eliminated from all reconstitution steps and the proteoliposomes were resuspended in 50 μl of 50 mM sodium phosphate, pH 7.5, 1 mM dithiothreitol. Reconstituted proteoliposomes were stored at 4°C. Membrane potential-driven uptake was assayed immediately. Counterflow was assayed within 12 h. Transport Assays—Reconstituted proteoliposomes were assayed for counterflow by diluting 9 μl (2.3 μg of protein) of lactose-loaded proteoliposomes into 0.45 ml of 50 mM potassium phosphate, 2 mM MgSO4 containing 0.9 μCi of [14C]lactose (counterflow assay buffer). The final lactose concentration was 0.43 mM. Transport was carried out at room temperature. The tube was blended on a Vortex mixer and, at various times, 0.1-ml samples were removed and filtered dropwise onto the center of a 0.22-μm Millipore filter (type GSTF) using 25 inches of mercury vacuum suction. The filter was washed with 5 ml of ice-cold 50 mM potassium phosphate and counted in 4 ml of Bray's scintillation fluid at a 14C efficiency of 86%. A blank value, obtained by filtering 0.1 ml of assay buffer without proteoliposomes, was subtracted from all points. Membrane potential-driven lactose transport in reconstituted proteoliposomes was assayed by diluting 7 μl of potassium phosphate-loaded proteoliposomes (5.4 μg of protein) into 0.7 ml of 50 mM sodium phosphate, pH 7.5, containing 1.4 μCi of [14C]lactose. The final lactose concentration was 0.23 mM. Transport was carried out at room temperature.
The tube was blended on a Vortex mixer, and 0.1-ml samples were removed at 15- and 45-s time points, filtered as above, washed with 5 ml of ice-cold 50 mM sodium phosphate, and counted as above. At 60 s, 7 μl of 1 mM valinomycin was added (to give a final concentration of 14 μM), and the tube was blended on a Vortex mixer. At various times, 0.1-ml samples were removed, filtered, washed with 5 ml of ice-cold 50 mM sodium phosphate, and counted as above. Reconstituted proteoliposomes (lactose-preloaded), approximately 1.5 mg of lipid phosphate, 20 μg of protein in 20 μl, were suspended in 0.5 ml of low density solution and layered on one gradient (42% sucrose cushion). High pressure French press vesicles, approximately 0.6 mg of lipid phosphate, 1 mg of protein in 14.5 μl, were suspended in 0.5 ml of low density solution and layered on top of a second, similar gradient (60% sucrose cushion). The gradients were centrifuged for 2.5 h at 58,000 rpm in an SW 65 rotor (4°C). Protein Assays—Protein was determined by the modified Lowry procedure of Peterson (23). The microassay was used, without precipitation. Total Phosphate—Total phosphate was determined by the method of Ames (24). The samples were ashed in a box-type furnace at 600°C.

RESULTS AND DISCUSSION

When E. coli membrane vesicles were treated with 1.25% octylglucoside in the presence of exogenous acetone/ether-washed E. coli lipid, as described under "Experimental Procedures," 40 to 60% of the membrane protein was solubilized. This step was followed by a high speed centrifugation to remove all unextracted membrane material. The supernatant was then exposed to liposomes and diluted 30-fold into detergent-free buffer. In the present experiments, 15 to 20% of the solubilized protein was recovered in the reconstituted proteoliposomes centrifuged following the dilution step. This represents 7.5 to 10% of the original protein in the French press vesicle material. The reconstituted proteoliposomes exhibited lactose counterflow when preloaded with nonradioactive lactose. The counterflow phenomenon is observed when radioactive substrate is transported into a cell or vesicle and accumulates because its exit is competitively inhibited by a high internal concentration of unlabeled substrate. The subsequent rate of loss of the accumulated radioactive substrate has been shown to be dependent on the number of carriers per cell (25). The time course for counterflow in the reconstituted proteoliposomes (one-half of the accumulated lactose was retained after 70 min) and the lipid to protein ratio used for reconstitution (approximately 20:1) are consistent with the hypothesis that each active proteoliposome contains only one or a few active lactose carriers. Proteoliposomes which were not preloaded with lactose exhibited very low transport activity (Fig. 1, inset). Lactose-preloaded proteoliposomes assayed for counterflow in the presence of thiodigalactoside (a competitive inhibitor of lactose transport) also exhibited very low transport activity. N-Ethylmaleimide-treated lactose-preloaded proteoliposomes exhibited virtually no transport activity. The same result was obtained with proteoliposomes prepared with solubilized membrane protein from uninduced E. coli ML3 (lacY-) French press vesicles.
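As a worked example of the counterflow assay arithmetic, the specific radioactivity implied by the stated assay composition (0.9 μCi of [14C]lactose in 0.45 ml at a final concentration of 0.43 mM) can be used to convert filter counts into transported lactose. The sketch below only restates values given in the text; the cpm input is a hypothetical reading, not a reported datum.

```python
# Illustrative conversion of scintillation counts to transported lactose.
UCI_TO_DPM = 2.22e6            # disintegrations/min per microcurie

total_uCi = 0.9                # [14C]lactose added to the counterflow assay
assay_ml = 0.45
lactose_mM = 0.43
lactose_umol = lactose_mM * assay_ml   # mM x ml = umol -> 0.1935 umol in the tube

spec_act = total_uCi / lactose_umol    # ~4.65 uCi/umol specific radioactivity

efficiency = 0.86              # stated 14C counting efficiency
protein_mg = 2.3e-3            # 2.3 ug protein in the 9-ul aliquot assayed
sample_fraction = 0.1 / assay_ml       # 0.1 ml filtered out of 0.45 ml

def nmol_per_mg(cpm):
    """Convert background-subtracted cpm on one filter to nmol lactose/mg protein."""
    dpm = cpm / efficiency
    uCi = dpm / UCI_TO_DPM
    nmol = uCi / spec_act * 1000.0
    return nmol / (protein_mg * sample_fraction)

print(nmol_per_mg(2000.0))     # hypothetical filter reading
```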
FIG. 2 (center). Membrane potential-driven lactose transport in reconstituted proteoliposomes. Reconstitution was carried out and membrane potential-driven transport was assayed as described under "Experimental Procedures." In one experiment, proteoliposomes were diluted into 50 mM sodium phosphate, pH 7.5, and valinomycin (final concentration, 14 μM) was added at 60 s, the time indicated by the arrow (the 60-s time point was obtained in a separate experiment, using the same batch of proteoliposomes); in a second, proteoliposomes were diluted into sodium phosphate, pH 7.5, containing 20 μM CCCP, and valinomycin (14 μM) was added at 60 s; in a third, proteoliposomes were diluted into 50 mM potassium phosphate, pH 7.5, and valinomycin (14 μM) was added at 60 s; in a fourth, proteoliposomes were diluted into 50 mM sodium phosphate, pH 7.5, and gramicidin (14 μM), instead of valinomycin, was added at 60 s; in a fifth, proteoliposomes were treated with 4 mM N-ethylmaleimide for 10 min at room temperature prior to dilution into 50 mM sodium phosphate, pH 7.5, and valinomycin (14 μM) was added at 60 s.

TABLE I. Effect of lipid on reconstitution of lactose transport. Lactose counterflow in the presence and absence of 10 mM thiodigalactoside was measured at 1-, 5- and 15-min time points as described under "Experimental Procedures." For Experiment 1, reconstitution was carried out as described under "Experimental Procedures." For Experiment 2, no E. coli lipid was added at the time of solubilization; a 40-μl sample of 50 mM potassium phosphate, pH 7.5, was added to maintain equivalent volumes. For Experiment 3, asolectin was substituted for E. coli lipid at both the solubilization and detergent dilution (liposome) steps of the reconstitution. The values presented in the table represent the 15-min time point, which corresponded to the peak level of transport in each experiment.

FIG. 3 (right). Isopycnic density gradient centrifugation of reconstituted proteoliposomes and French press vesicles. Gradients were prepared and assays were carried out as described under "Experimental Procedures." Thirteen 0.37-ml fractions were collected from each gradient. A 25-μl aliquot from each of the French press vesicle gradient fractions was assayed for phosphate; in this gradient, the lipid phosphate peak was found in the first fraction (indicated by the arrow). Aliquots of 10 μl from each of the proteoliposome gradient fractions were assayed for phosphate. The phosphate peak observed at the top of the gradient corresponds to potassium phosphate buffer placed on the gradient with the proteoliposomes. Each fraction was diluted into 15 ml of 50 mM potassium phosphate, pH 7.5, 20 mM lactose, 1 mM dithiothreitol. Proteoliposomes were then sedimented by ultracentrifugation of the suspension at 85,000 × g (35,000 rpm) for 1.5 h. Fraction 7 was the only fraction which yielded a pellet upon centrifugation. This pellet was resuspended in 15 μl of the above buffer, and a 6-μl aliquot was assayed for counterflow at a 2-min time point. No lactose uptake was observed when a second 6-μl aliquot was assayed in the presence of 10 mM thiodigalactoside.

As shown in Table I, the addition of E. coli lipid at the membrane solubilization step was required for lactose transport reconstitution. In addition, when asolectin was substituted for acetone/ether-washed E. coli lipid in the reconstitution, only 15% of the E. coli lipid reconstitution activity was recovered. Energy-depleted whole cells and French press vesicles accumulate lactose against a concentration gradient when a membrane potential, negative inside, is generated across the cytoplasmic or vesicle membrane (26). This can be accomplished
by diluting potassium-containing cells or vesicles into low potassium medium and adding valinomycin, thus giving rise to a potassium diffusion potential. Membrane potential-driven lactose accumulation was demonstrated in the reconstituted proteoliposomes (Fig. 2). When potassium phosphate-loaded proteoliposomes were resuspended and diluted in sodium phosphate plus radioactive lactose, uptake of [14C]lactose was observed (approximately 10 nmol of lactose/mg of protein at 60 s). The addition of valinomycin (arrow, Fig. 2) resulted in a transient 5-fold accumulation of radioactive lactose. When a similar experiment was carried out in the presence of the proton conductor CCCP, the addition of valinomycin did not give rise to transient lactose accumulation. The CCCP would be expected to block membrane potential-driven lactose accumulation, since this ionophore collapses the protonmotive force generated by the membrane potential. Dilution of potassium phosphate-loaded proteoliposomes into sodium phosphate and addition of gramicidin instead of valinomycin also resulted in no lactose accumulation. When potassium phosphate-loaded proteoliposomes were diluted into potassium phosphate instead of sodium phosphate, the addition of valinomycin resulted in no lactose accumulation, since no potassium diffusion potential was produced. In the preceding three experiments, lactose presumably equilibrated between the inside and outside of the proteoliposomes containing active transport protein. This view is confirmed by the experiment (Fig. 2) in which proteoliposomes were pretreated with N-ethylmaleimide, which inactivated the carrier and almost completely prevented entry of sugar into the proteoliposomes. The possibility was considered that the lactose transport activity was not solubilized, but that intact French press vesicles were retained during the reconstitution procedure and were responsible for the final transport activity. However, no activity was found when lactose-loaded high pressure French press vesicles were assayed for counterflow under the same conditions as the proteoliposome counterflow assay (data not shown). Lancaster and Hinkle (26) have shown that French press vesicles pass through 0.22-μm Millipore filters, unless the vesicles are aggregated with (poly)lysine prior to filtration. The absence of activity (by the counterflow assay) in the high pressure French press vesicle preparation used for reconstitution argues against the possibility that the reconstitution transport activity is due to contaminating French press vesicles. A second possibility was that hybrid vesicles might be formed due to the introduction of exogenous lipid into detergent-disrupted French press vesicles or that hybrid vesicles were formed from large fragments of native membrane and exogenous lipid. This possibility was examined by density gradient centrifugation of the reconstituted vesicles and native French press vesicles in a manner similar to that utilized by Papazian et al. (27). Native membrane material, containing approximately equal amounts of protein and lipid, should sediment to a much denser region of the gradient than liposomes or proteoliposomes containing only a few protein molecules per vesicle. Two similar gradients were prepared. Onto the surface of one was placed reconstituted vesicles; the other gradient received high pressure French press vesicles. After centrifugation, 13 fractions were collected from each gradient. As shown in Fig. 3
(arrow), the peak of lipid phosphate in the French press vesicle gradient was found in the first fraction, which included the 60% sucrose cushion. This position in the gradient corresponds to a density of between 1.09 and 1.29 g/ml. Each fraction from the proteoliposome gradient was assayed for phosphate (closed circles) and then diluted and centrifuged at high speed to pellet any intact proteoliposomes. Only Fraction 7 contained a pellet following this procedure. This pellet was found to contain lactose transport activity (open circle). The position of lactose transport activity on the gradient closely parallels the position of the only lipid phosphate peak (artificial vesicles) and corresponds to a density of 1.046 g/ml, which is close to the theoretical value for pure phospholipid vesicles (1.03 g/ml). This experiment provides strong evidence that the lactose transport system has been solubilized and transferred into an artificial vesicle or proteoliposome. A preliminary kinetic analysis of initial rates of lactose counterflow in the reconstituted proteoliposomes (data not shown) has yielded maximum velocities of 0.5 to 1.0 μmol of lactose/min/mg of protein. These values are 5 to 10 times higher than the maximum velocity (100 nmol/min/mg of protein) published for lactose counterflow in membrane vesicles (28). Considering that only 7.5 to 10% of the original French press vesicle protein was recovered in the reconstituted proteoliposomes, these results suggest that a partial purification of the lactose transport system may have been achieved. One of the interesting features of these experiments is the finding that the addition of acetone/ether-washed E. coli lipid at the solubilization step, and the use of this material in the detergent dilution procedure, significantly improves the activity of the reconstituted proteoliposomes. A requirement for exogenous phospholipid at the time of solubilization has also been demonstrated by Maron et al. (29) for the reconstitution of a catecholamine transporter from bovine chromaffin granules. The present reconstitution method should be useful as an assay for the purification of an active lactose transport system and should make possible the study of several interesting aspects of the molecular mechanism of lactose carrier function.
4,925.2
1980-11-25T00:00:00.000
[ "Biology" ]
Secure Data Transmission in Integrated Internet MANETs Based on Effective Trusted Knowledge Algorithm The communication between a mobile node and a fixed node is achieved through the Integrated Internet MANET (IIM) with the help of a gateway, thereby extending the application domain of mobile ad hoc networks. The wireless channel and dynamic nature of Mobile Ad hoc Networks (MANETs) cause integrated MANETs to suffer from security vulnerabilities. Methods/Analysis: An untrustworthy mobile node can harm the data and adversely affect the communication between a mobile node and a fixed node in IIM. Hence, examining the trust level affects the confidence with which an entity may be chosen for data transmission. In order to provide secure data transmission, we propose an Effective Trusted Knowledge Algorithm (ETKA) that calculates the nodes' trust. Findings: The proposed algorithm has two phases for finding the trusted node: in the first phase, the nodes are observed in promiscuous mode; in the second phase, the effective trust value is calculated by a hybrid method. Improvement: Through extensive simulation analysis, we conclude that the developed mechanism provides an effective methodology for the security and protection of data from untrusted nodes in the integrated Internet and MANET.

Introduction

An infrastructure-less wireless network is termed a MANET 1. A MANET is a collection of several nodes that send and receive data directly in a peer-to-peer manner. Thus, an ad hoc network is independent of any existing network infrastructure, such as base stations and access points. Although an autonomous MANET is useful in many cases, a MANET connected to the Internet is considerably more attractive, because the Internet plays a critical part in the daily life of many people by offering a broad range of services. Gateways are used to integrate the wireless mobile ad hoc network with the wired Internet, and an ad hoc routing protocol is required to send packets from the MANET to the Internet as well as within the MANET. The integrated Internet and MANET is represented in Figure 1. The ad hoc routing protocols proposed for MANETs are used to communicate information among freely moving communicating entities. Mobile devices communicate with one another using multi-hop relaying 2. However, it is not possible for the mobile nodes to access the Internet, because routing between a static node and a mobile node is not supported. In the I-D "Global Connectivity for IPv6 Mobile Ad Hoc Networks" 3, a solution is presented in which Ad hoc On-demand Distance Vector routing is modified so that it can route packets both in the integrated network and in the MANET, but it does not consider a security mechanism. The main challenge here stems from the need to select a trusted mobile node in the MANET for secure data transmission when integrating the Internet with a MANET. Hence, an effective trust mechanism is needed for this environment: the mobile node has to decide which of its trustworthy neighbour nodes is the optimal one for its communication. Examining and labelling the different security issues on the path from a MN to the FN, by checking the behaviour of suspicious nodes in the composite network, forms the motivation of this research 4. There might be untrusted nodes or congested nodes present in the MANET which drop the packets on the path to the fixed node. The existence of untrusted nodes may not be known to the source node.
These reasons motivated us to develop a trust-based system that recognises such mischievous activities of nodes in the MANET and also guarantees the selection of trustworthy nodes 4. A trust framework can likewise be used to evaluate the quality of received information and to provide network security services such as access control, authentication, identification of untrusted nodes, and secure resource sharing [5][6][7][8]. Therefore, it is vital to periodically assess the trust value of nodes based on several metrics and computational strategies. Trust calculations in static networks are generally simpler, because the trust value there changes mostly due to the behaviour of nodes; after sufficient observations, these behaviours become predictable. In order to establish effective trust in the nodes, we propose ETKA, which detects the presence of suspicious nodes and protects the data being transmitted from them. It performs the work in three stages. Initially, each node in the network observes its adjacent nodes at random to monitor their behaviour, and misbehaving nodes are considered malicious. Among the remaining nodes, tasks are assigned to the mobile nodes, and nodes that provide inappropriate results or delay the task completion time are treated as unfair nodes. In the final stage, the effective trust of the remaining nodes is calculated by a hybrid method. The remainder of the paper is arranged as follows. Section II presents related work on security issues in the integration of the Internet and MANETs, Section III describes the proposed work, Section IV analyses the simulation outcomes, and Section V concludes the paper.

Related Work

In the literature, many strategies have been worked out for interconnecting the Internet with MANETs, together with techniques for the calculation and management of trust and the security measures considered. Gateway discovery via three approaches, namely reactive, proactive and hybrid, by extending the Ad hoc On-Demand Distance Vector (AODV) routing protocol, is addressed by Ali Hamedian et al. 9; very few papers have focused on secure data transmission between a fixed node on the Internet and a mobile node in a MANET. Sanjay et al. 10 have proposed a robust method to afford security for MANETs which, as their analysis shows, performs better than trust-based mechanisms. The friend-sharing method proves to be an efficient instrument to disseminate information about trustworthy nodes effectively through the network. Whether a node is faulty is at the sole discretion of a particular node, which decides on the basis of assigned tasks. In their protocol, challenges are used alongside a secure multi-hop routing protocol, and the neighbours' work is recorded to verify any node; owing to these challenges, the FACES mechanism performs better and provides more security than other multi-hop routing protocols. If we extend the application domain of MANETs by interconnecting them with the Internet, then security parameters should be considered for the connectivity, in order to achieve efficient data communication between the mobile nodes and the Internet resources. Ayesha, Sridevi and Arshad 11 have proposed an algorithm, based on secure knowledge, for mitigating the black hole attack in the AODV protocol.
It monitors the packets sent in promiscuous mode to guarantee that the packets are delivered to their destination before concluding that a particular node is a black hole node; the algorithm monitors the node for the reason behind packet drops, thereby keeping a trusted node from being declared a black hole node. But for effective data transmission, authentication of the nodes is also required, so that we can conclude that the data is being transmitted through trusted nodes along a secure route. Amit Kumar Gupta et al. 12 have provided a method in which a trusted secure gateway is selected and authenticated so as to achieve host-to-host security by means of a trusted, uncongested route and trusted nodes. However, this concept is limited to MANETs, i.e. communication is provided only between the mobile nodes of a MANET; a fixed node from a Local Area Network (LAN) or the Internet cannot communicate in this scenario. Antesar M. Shabut et al. 13 have proposed a recommendation-based trust system with a defence scheme, which uses a clustering technique to dynamically filter out attacks related to dishonest recommendations over time, based on the number of interactions, the similarity of information, and the closeness between nodes. In IIM, as the gateway is used for connectivity, the recommendation-based trust model may not provide better security when nodes behave irresponsibly. Chen Xi, Sun Liang, Ma Jianfeng and Ma Zhuo 14 have presented a new scheme for trust management based on behaviour feedback, in which fast-moving nodes realise mutual identity authentication by utilising certificate chains, and the identity trust relationship is built up in a certificate graph format. In addition, the successors generate Verified Feedback Packets for each positive feedback behaviour to realise mutual authentication of forwarding behaviour, and consequently the behaviour trust relationship is formed. These studies can be applied to Internet-MANET integration so that mobile nodes can achieve better resource utilisation and secure data transmission. Yichi Zhang, Lingfeng Wang and Weiqing Sun 15 concentrated on the essential parts of trust system deployment, namely static trust node placement and dynamically optimal communication between selected nodes using a fault-tolerant and cost-sensitive routing algorithm. A three-layer distributed intrusion detection system architecture proposed in 16 is utilised as the trust framework to be deployed in the smart grid network; in particular, it is used to find the trusted nodes and implement the optimal routings. As the nodes in MANETs are dynamic in nature, this approach cannot provide the placement of the nodes; ideal communication between a fixed node and a mobile node in IIM is instead achieved through an authentication system based on trust values. Wang et al. 17 proposed a method to distinguish selfish nodes from cooperative ones based solely on local observations of AODV routing protocol behaviour. They use a finite state machine model of locally observed AODV actions to build a statistical characterisation of each node's behaviour. In order to distinguish selfish and cooperative nodes, a series of well-known statistical tests is applied to features obtained from the observed AODV actions. An interesting extension of this work considers different patterns of node mobility, which can provide additional insight.
In the above survey, most of the proposed models (10, 11, 12, 13, 15, 17) are based on trust evaluation of nodes and routes for providing security, identifying malicious nodes, and protecting the data from unauthorised users, but these papers are limited to the MANET region alone; to provide effective communication, mobile nodes have to utilise Internet resources. So our proposed model extends the application domain of MANETs by interconnecting them with the Internet. In paper 14, security measures based on certificates and key management techniques have been worked out, but enhancing secure data transmission by also considering a trust evaluation scheme for node selection and data transmission filters out the malicious nodes more effectively. So in this paper, we first filter out the malicious nodes from the network through promiscuous mode, and then, for the remaining trusted nodes, we find the trust values based on the hybrid method, and the mobile node having the maximal trust value is chosen for data transmission. In this way we provide effective security for the data that is being transmitted in IIM.

Proposed System

We propose an ETKA for secure routing between a mobile node in a MANET and a fixed node on the Internet. The proposed model is split into two phases: one for the selection of trusted nodes from the network using promiscuous mode, and the other for the evaluation of the effective trust value using a hybrid method. The architecture of IIM with the presence of malicious nodes and trusted nodes is represented in Figure 2.

Phase 1: Selection of Trusted Nodes

In this phase, each mobile node watches its neighbour nodes in promiscuous mode as packets are forwarded by its neighbours, in order to record the behaviour of each neighbour regarding packet handling in the trusted knowledge table maintained by every node. Each mobile node compares the neighbour's data with the data recorded in its trusted knowledge table. If both are the same, the neighbours are labelled trusted nodes, which are expected to forward the packet further; otherwise, the node waits for a specific time period and checks the reasons behind the packet dropping. When the packet dropping reaches the threshold value, the node is declared malicious. It is then recorded in the field M_ip_addr of the trusted knowledge table, and a message is broadcast in the network announcing that the particular node is malicious so that it can be avoided during routing. In order to confirm that packets are delivered to adjacent nodes, the trusted nodes monitor all the packets to guard against selective dropping, since untrusted nodes drop only selected packets. Our algorithm is built on the AODV routing protocol, but AODV can only be used within MANETs. So, in order to support the routing process in the integrated Internet MANET, modified AODV (MAODV) 19 is used, where the best path is built on the smallest hop count and largest sequence number. Our proposed algorithm adds the field described above to the fields of MAODV; a minimal sketch of this phase is given below.
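The sketch below illustrates the phase 1 bookkeeping under stated assumptions: the table fields, the drop threshold value, and the broadcast mechanism are simplified placeholders, not the paper's exact data structures.

```python
# Minimal sketch of the phase-1 trusted knowledge table (illustrative;
# field names, the drop threshold, and broadcast handling are assumptions).
DROP_THRESHOLD = 5

class TrustedKnowledgeTable:
    def __init__(self):
        self.drops = {}         # neighbour ip -> consecutive observed drops
        self.malicious = set()  # M_ip_addr entries

    def observe(self, neighbour_ip, forwarded_ok):
        """Called for each packet overheard in promiscuous mode."""
        if neighbour_ip in self.malicious:
            return
        if forwarded_ok:
            self.drops[neighbour_ip] = 0    # behaviour matches the table: trusted
        else:
            self.drops[neighbour_ip] = self.drops.get(neighbour_ip, 0) + 1
            if self.drops[neighbour_ip] >= DROP_THRESHOLD:
                self.malicious.add(neighbour_ip)        # record in M_ip_addr
                self.broadcast_malicious(neighbour_ip)

    def broadcast_malicious(self, ip):
        # Placeholder: announce the malicious node so routing avoids it.
        print(f"ALERT: node {ip} declared malicious")
```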
Phase 2: Calculate Effective Trust Value

The idea of "Trust" originally derives from the social sciences and is defined as the level of subjective belief about the behaviours of a particular entity 18. So, in order to have better communication between a mobile node and a fixed node, we find the trusted nodes so that the data can be protected from malicious nodes. In this phase, the effective trust is calculated for the trusted nodes using a hybrid method, which combines direct observation and recommendation-based methods. The direct trust value of node x on y is obtained by

T_d(x, y) = W(rrep)·μ_rrep + W(rreq)·μ_rreq + W(rerr)·μ_rerr    (1)

where W(·) is the weight assigned to an event, and μ_rrep, μ_rreq and μ_rerr are the optimized route reply misbehaviour factor, route request misbehaviour factor and route error misbehaviour factor, respectively. The values of these factors can be determined as

μ_e = S_e / (S_e + F_e),  e ∈ {rrep, rreq, rerr}    (2)

where S_rrep, S_rreq and S_rerr are the numbers of successful route reply acknowledgement packets, successful route request acknowledgement packets and successful route error acknowledgement packets, respectively, and F_rrep, F_rreq and F_rerr are the corresponding numbers of failed packets. The recommended trust value of node x on y is obtained from the recommendation of a third node z, as shown in Figure 3.

Simulation Results

The result of the proposed work is obtained using NS2 with the necessary extensions, evaluating the PDR and delay with respect to the number of nodes. The number of malicious nodes detected in the network reflects the strength of the trusted routing method. In Figure 4 and Figure 5, it is observed that ETKA finds a larger number of malicious nodes, as it uses node observation followed by effective trust calculation with the hybrid method. If a node drops packets and the values in its trusted table do not match those of its neighbour nodes, it is declared malicious, and the effective trust is then calculated for the trusted nodes only. By contrast, the other protocols, Trust based Multipath Routing for Ad hoc Networks 20 (TMR) and Message Trust based Multipath Routing for Ad hoc Networks 21 (MTMR), rely on the trust of a node and use node challenges for detecting malicious nodes; they take time to conclude that a particular node is malicious. As we increase the size of the network, more malicious nodes are detected by ETKA. In Figure 6 and Figure 7, we can see that the packet drop is minimal in ETKA, as it efficiently removes the paths containing untrusted nodes. The other multipath routing protocols discard a larger quantity of packets, as their routes pass through a greater number of nodes, thereby increasing the odds of routing information through untrusted nodes. The packet drop increases with the number of nodes and with mobility. As friends' records are hard to maintain in a highly mobile environment, monitoring causes a sharp increase in the packet drop of FACES, whereas effectiveness is maintained in ETKA, which provides better security in IIM compared to the other three protocols.

Conclusion

This work proposed ETKA, which enhances the security of the integration of the Internet and MANETs. Using node monitoring in promiscuous mode, we assess the trust values of the trusted nodes in the mobile ad hoc network. Mischievous activities, such as dropping or modifying packets, can be recognised in our method by direct knowledge. Nodes having minimal trust values are rejected by the routing mechanism. Hence, a trusted route can be established even in mischievous situations. The results of the IIM routing scenario decidedly support the viability and performance of the proposed method, which improves throughput and Packet Delivery Ratio considerably, with slightly increased average message overhead and end-to-end delay. In future, the work can be extended by using secret keys for encrypting the data, to further improve the security of communication in IIM.
3,915.8
2016-12-16T00:00:00.000
[ "Computer Science", "Engineering" ]
Measuring perceived clutter in concept diagrams Clutter in a diagram can be broadly defined as how visually complex the diagram is. It may be that different users perceive clutter in different ways, however. Moreover, it has been shown that, for certain types of diagrams and tasks, an increase in clutter negatively affects task performance, making quantifying clutter an important problem. In this paper we investigate the perceived clutter in concept diagrams, a visual language used for representing ontologies. Using perceptual theory and existing research on clutter for other diagrams, we propose five plausible measures for assigning clutter scores to concept diagrams. By performing an empirical study we evaluated each of these proposed measures against participants' rankings of diagrams. Whilst more than one of our measures showed strong correlation with perceived clutter, our results suggest that a measure based on the number of points where lines cross is the most appropriate way to quantify clutter for concept diagrams.

I. INTRODUCTION

The use of diagrams, either as illustrative aids, or as foundations for visual languages, is becoming increasingly widespread. To aid the uptake of diagrams it is essential that the representation leads users quickly and consistently to the correct interpretation of the represented data. In other words, for visualization to be an effective tool it is imperative that diagrams are drawn according to guidelines that we know aid comprehension. However, existing guidelines (such as the Gestalt principles [1], Moody's Physics of Notations [2] or Miller's 7 ± 2 [3]) are theoretically-based and general. They each give a set of ideals a visual notation should adhere to that could aid comprehension. But the guidelines need to be interpreted and implemented for each individual notation, and the empirical evaluation is then often omitted. Moreover, no set of guidelines discusses the visual clutter a diagram may exhibit, instead talking of shape, number of visual symbols, etc. Whilst some of these aspects no doubt contribute towards the clutter in a diagram, we instead focus directly on the aspect of clutter. In particular, we look at clutter in diagrams representing ontologies. Ontologies are widely used to represent a domain of knowledge, for example in biomedicine [4], law [5] and the Semantic Web. Their application is in areas for which accuracy of information is paramount, and it is known that ontology engineering is a difficult task [6]. Various visualizations have been proposed to represent ontologies [7], of which our object of study, concept diagrams, is one. By creating a measure for clutter in concept diagrams, we can show that relatively low-cluttered visualizations can be effective tools for aiding ontology engineering.

A. Structure of the paper

The structure of the remainder of the paper is as follows. We give an overview of the syntax, semantics, and usage of concept diagrams in section II. In section III, we examine the notion of clutter, especially as applied to other types of diagrams, and outline our proposed clutter measures for concept diagrams in section III-A. The design of our empirical study is explained in section IV, with the results presented and analysed in section VI. Finally, we conclude and indicate future directions in section VII.
II. CONCEPT DIAGRAMS

This section is intended only as an overview of the syntax and semantics of concept diagrams. For a fuller exposition, see [8], [9]. As will be seen in section IV, we will be presenting concept diagrams to participants without any labels, and thus they will not represent any particular context. However, we will explain the meaning of concept diagrams by reference to ontologies. A description of the language of ontologies can be found in [10], [11]. The basic elements in concept diagrams are curves, boxes and arrows. Curves represent sets of objects. Their arrangement in the plane, in particular their interaction with other curves, represents the underlying relationship between sets. Where a region exists inside two curves, we can infer that the represented set is possibly non-empty. For example, where one curve is completely contained within another, we are asserting that the set represented by the former is a subset of the set represented by the latter. In Figure 1 we have that the curve labelled A is completely inside the curve labelled B. Abusing notation, if the curve labelled A represents the set A, and similarly for B, then we are asserting that A ⊆ B. In the language of ontologies, we would be asserting A ⊑ B. Crucially, we do not specify whether this subset relation is proper or not: where a region exists we still allow that the represented set can be empty. By contrast, where two curves do not intersect, we are asserting that the represented sets must have empty intersection. In Figure 1, we see that the curves C and D do not intersect, and thus we can infer that C ∩ D = ∅ for the represented sets. In the language of ontologies, we would say this situation encodes the axiom C ⊓ D ⊑ ⊥. Each box represents the universe of elements. Several boxes within the same diagram (for example in Figure 1) thus represent partial information: we know that explicit relationships hold between curves inside the same box, but we are asserting nothing about the relationship between curves in separate boxes. For example, we do not assert anything about the relationship between A and C. Arrows represent binary relations between sets, and in the context of ontologies the object properties between concepts. For example, in Figure 2 the arrow labelled R joins the curve A with an unlabelled curve. The interpretation of this is that, between them, all the elements of A are related to all, and only, the elements of the unlabelled set. For ontologies, this represents the axiom A ⊑ ∀R.C. Arrows can also be sourced, or targeted, on boxes. This situation is used to model the domains and ranges of the represented properties. There are choices to be made when drawing a concept diagram. The same underlying information can be represented in a number of ways, and testing which of these ways is most visually appealing is the focus of this paper. As an example, consider the two diagrams in Figure 3. The left-hand diagram (d1) uses two boxes, allowing us to assert that the curve labelled A can have any relationship with the curves labelled B, C and D. We could instead place all information in a single box, however, as shown on the right (d2) of Figure 3. We now have to explicitly show all of the possible interactions between the curve A and the others. (Note that some represented regions may be empty.)
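To make the curve semantics concrete, here is a small sketch (our own illustration, not the authors' formalisation) encoding an unlabelled Euler-style diagram as its set of regions, where each region is identified by the set of curves containing it; subset and disjointness axioms can then be read off directly. Each box is encoded separately, since nothing is asserted across boxes.

```python
# Illustrative encoding of curve semantics (our own sketch, not the paper's).
# A region is the frozenset of curves it lies inside.
from itertools import combinations

box1 = [frozenset(), frozenset({"B"}), frozenset({"A", "B"})]  # A inside B
box2 = [frozenset(), frozenset({"C"}), frozenset({"D"})]       # C, D disjoint

def is_subset(regions, a, b):
    """a is a subset of b when every region inside a is also inside b."""
    return all(b in r for r in regions if a in r)

def is_disjoint(regions, a, b):
    """a and b are disjoint when no region is inside both."""
    return not any(a in r and b in r for r in regions)

for regions, names in [(box1, "AB"), (box2, "CD")]:
    for x, y in combinations(names, 2):
        if is_subset(regions, x, y):
            print(f"{x} SubClassOf {y}")        # e.g. A ⊑ B
        if is_disjoint(regions, x, y):
            print(f"{x} and {y} are disjoint")  # e.g. C ⊓ D ⊑ ⊥
```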
III. CLUTTER

Clutter is a difficult concept to define precisely, although Rosenholtz et al. [12] define it as "the state in which excess items, or their representation or organization, lead to a degradation of performance at some task." Interestingly, they define it in terms of task degradation, in other words related to performance, rather than as an independent feature of the diagram. Clutter is only defined, then, in relation to a task. What may be a cluttered representation for one task may be uncluttered for another. In this paper, by contrast, we seek to find an absolute measure of clutter for a diagram type, rather than the clutter for a diagram-task pair. Related to clutter is the notion of visual complexity, although that is more usually applied to images in general. In [13], visual complexity was defined as "the amount of detail or intricacy of the line", whereas in [14] the visual complexity of an image was defined as the size of an image file after compression. These measures are based on some objective quantity (in the case of a file size), or the pleasure a person feels when looking at the image. Clutter in Euler diagrams (the foundation for concept diagrams) was examined in [15], and a number of measures were proposed to quantify it. Participants were asked to assign a score to each diagram, on a scale of 1 to 100, as to how cluttered they thought it was. Each measure was compared with users' perceptions, and the one most aligned was the contour score, described in section III-A. Interestingly, this paper used a cardinal scale for perception. So, users were asked not to compare two diagrams and say which was more cluttered, but effectively by how much one was more cluttered than the other. Important to note is that John et al., in [15], made a distinction between abstract and concrete clutter. Abstract clutter (or structural clutter) is clutter that is fundamental to the information being represented, independent of drawing choices. A focus on abstract clutter led them to using only black curves. By contrast, concrete clutter takes into account the drawing choices, such as using coloured curves. Since we know that concept diagrams will be drawn in practice using colour, we will focus on concrete clutter, in John et al.'s terminology. Quantifying clutter is important owing to its effect on comprehension. In [16] it was found that increased clutter, according to the measure in [15], negatively affected comprehension. Similarly, in [14], it was found that there was a negative linear relationship between the visual complexity of a website and the pleasure participants had in navigating the website; in other words, visually complex websites were viewed as less pleasurable to use.

A. Proposed clutter measures

We describe five plausible clutter measures to be tested, justifying each proposal.
1) Ontology complexity score (OCS): The diagrams we present to participants will have no labels and thus, in some sense, are not representative of any particular context. However, the intended usage of concept diagrams is for representing ontologies. Thus, our first measure is based on the ontology complexity measure tree impurity (hereafter renamed OCS), found in [17]. This measure was evaluated against the criteria in [18], and found to satisfy most of them. A detailed description is given in [17]; we simply give the intuition here. Ontologies consist of hierarchical information; concepts are subclasses of other concepts, which are all subclasses of the top-level Thing. The bottom-most concept is Nothing. Sometimes, these hierarchies form a tree. However, in most cases, concepts will be subclasses of more than one parent concept. For example, the concept Cat could be a subclass of has4Legs and Mammal, neither of which is a subclass of the other. Thus, the hierarchy, when drawn as a graph, will not form a proper tree. The OCS then measures "how far" from a tree a certain ontology is, with a higher score representing a more complex ontology. For our purposes, we created our study diagrams from small sets of ontology axioms, from which we could calculate the OCS. The justification for this measure is to check whether the key cause of clutter in a diagram is the complexity of the underlying information. We also note that this is an existing measure that is used in the ontology community for measuring complexity, and so is widely understood.

2) Abstract scores (AS1 and AS2): These measures are based on the abstract syntax of the underlying Euler-based diagram, as defined in [15]. The curves in the diagram (including the boxes) partition the plane into a number of regions. Each region is thus inside a number of curves, and outside a number of others. The score for the diagram (without arrows) is then given by the sum of the scores for each region in the diagram. For example, in Figure 4, the highlighted region (marked with a star) is inside four curves (two curves proper, and two boxes), and thus contributes 4 to the AS. Note that, since boxes are no different to curves at an abstract level, we treat them in the same manner. Arrows, in the abstract syntax, are sourced on a particular curve or box, and targeted on a particular curve or box. There are two ways to calculate the contribution of an arrow. The first takes account of how deeply nested the endpoints of the arrow are, which is necessarily a feature of a drawn (or concrete) diagram. Where an arrow is sourced on a curve, it touches the curve in one particular location. The contribution is then the sum of the scores of the regions that the arrow's endpoints touch, calculated via the abstract syntax. For example, in Figure 4, the start-point of the arrow touches a region with a score of 4, and the end-point of the arrow touches a region with a score of 5. The contribution of the arrow is then 9. This method gives the score AS1. The second method of calculation considers an arrow to add clutter whenever the curves it connects contain a large number of regions. We can calculate the score of a curve by counting the number of regions it contains. Thus, the contribution of an arrow is the number of regions in the source curve added to the number of regions in the target curve. This method of counting arrows gives the score AS2. This score relies solely on the abstract syntax of a diagram, and will be invariant under different concrete representations; a computational sketch of both scores is given below.
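A minimal sketch of the AS1/AS2 computation under the region encoding introduced earlier; the data-structure choices (regions as sets of containing curves, arrows as pairs of endpoint regions or endpoint curves) are our own assumptions for illustration.

```python
# Illustrative computation of the abstract scores (our own sketch).
# A region is the frozenset of curves/boxes containing it.

def curve_score_sum(regions):
    """Base score: each region contributes the number of curves containing it."""
    return sum(len(r) for r in regions)

def as1(regions, arrows):
    """AS1: an arrow contributes the scores of the regions its endpoints touch.

    arrows: list of (source_region, target_region) pairs, each a frozenset.
    """
    return curve_score_sum(regions) + sum(len(src) + len(tgt) for src, tgt in arrows)

def as2(regions, arrows):
    """AS2: an arrow contributes the number of regions inside its source and
    target curves; this depends only on the abstract syntax.

    arrows: list of (source_curve, target_curve) pairs, each a curve name.
    """
    def regions_inside(curve):
        return sum(1 for r in regions if curve in r)
    return curve_score_sum(regions) + sum(
        regions_inside(src) + regions_inside(tgt) for src, tgt in arrows)
```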
The justification for these measures is that they are simple extensions of a measure which has been shown to align with perceived clutter for Euler diagrams. We can therefore be confident that it should explain the clutter arising from the curve-based part of the diagram. The difference between the two lies in how the clutter contribution from arrows is calculated: on the one hand, an arrow more deeply nested in the diagram could create clutter; on the other, the amount of information the arrows connect could create clutter. Having both measures allows us to test which better aligns with people's perception.

3) Concrete score (CS): This measure is based on the drawn diagram. We test the assumption that the number of visual objects in a diagram is a proxy for its perceived clutter. Each individual object (curve, box, arrow) adds one to the CS, and every intersection between two objects also contributes one to the CS. We define an intersection of two objects as either a point where two curves cross each other, or a point where an arrow crosses a curve or box. We do not count the meeting of the endpoints of an arrow with its source or target as an intersection. Intuitively, the CS captures the number of points of interest in a diagram. There is a similarity between the CS and AS1/AS2 in that curves and boxes are treated in the same fashion: boxes are just curves with a particular shape. As an example, consider Figure 4. There are 8 curves and boxes, and 1 arrow. The number of intersections between those objects is 11, and thus the CS is 20.

The justification for this measure is that it simply counts the number of points in the diagram. Whereas the OCS will not change, and AS1/AS2 may not change, under different representations of the same information, the CS will almost certainly change depending on how the information is drawn. Since we are asking participants to rank the clutter of diagrams, not of the underlying information, it seems plausible to base our clutter measure on the drawn diagram itself.

4) Hybrid score (HS): The hybrid score is a combination of AS1/2 and CS, which respects the fact that curves, boxes and arrows are distinct types of object, and so may contribute to clutter in different ways. In HS, the scoring for curves is the same as in AS1/2, but each box is not counted as a curve; rather, it is counted as a single object, as in CS. In other words, the region marked with a star in Figure 4 would contribute 2 to the HS of the diagram. Similarly, arrows are just objects in the plane, regardless of where they are sourced or targeted, and so each contributes only 1 to the HS.

The justification for this measure is that it takes into account both the underlying information (AS1/2 gives the scores for the curves) and its representation (CS gives the score for the boxes, and partially for the arrows).
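The drawn-diagram scores admit an equally small sketch. The Figure 4 totals (8 curves and boxes, 1 arrow, 11 intersections) come from the text above; the split into 6 curves and 2 boxes, and the function names, are assumptions made only for illustration.

def concrete_score(n_curves, n_boxes, n_arrows, n_intersections):
    # CS: one point per object plus one per crossing point; arrow endpoints
    # meeting their source/target are not counted as intersections.
    return n_curves + n_boxes + n_arrows + n_intersections

def hybrid_score(regions, n_boxes, n_arrows):
    # HS: regions are scored only by the proper curves containing them
    # (each region here is the set of curves, boxes excluded), while each
    # box and each arrow adds a single point, as in CS.
    return sum(len(r) for r in regions) + n_boxes + n_arrows

print(concrete_score(n_curves=6, n_boxes=2, n_arrows=1, n_intersections=11))  # 20
# The starred region of Figure 4 lies inside two proper curves, so it
# contributes 2 to the HS rather than the 4 it contributes to the AS.
print(hybrid_score([frozenset({"c1", "c2"})], n_boxes=2, n_arrows=1))         # 5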
IV. STUDY DESIGN

An empirical study was undertaken to see how people's perception of clutter correlated with the five proposed measures. The study followed a within-group design, in which each participant was shown concept diagrams and asked to rank them on the basis of how cluttered they appeared. The ranking of the diagrams given by each participant was the primary variable recorded. We also recorded the time taken by participants to rank the diagrams. Participants were given no further guidance on ranking the diagrams, and no time limit was imposed upon them.

The study consisted of four tasks, each of which required participants to rank 18 concept diagrams, with a ranking of 1 being least cluttered and 18 being most cluttered. Joint rankings were permitted. The first task (T_A) fixed the number of curves and boxes in a diagram, and varied the number of arrows. This task allowed us to establish how perceived relative clutter varied with the number and placement of arrows. The second task (T_B) fixed the number of arrows and curves, and varied the number of boxes. This task allowed us to establish whether merging information from several boxes into a single box affects perceived clutter. The third task (T_C) fixed the number of arrows and boxes, and varied the number of curves. This task allowed us to establish how clutter varied with the number and placement of curves. The final task (T_D) included concept diagrams with a variety of curves, boxes and arrows. This task allowed us to establish which aspect of the diagram (curves, boxes or arrows) contributes most to perceived clutter.

A. Diagrams Created for the Study

For each task we generated 18 concept diagrams. The diagrams were designed so that the clutter measures ranked the diagrams differently. In this way, we could examine the relative merits of the proposed measures.

We now explain how the diagrams for the tasks were generated. For T_A and T_C the process used was similar; we give details for T_A. Three base diagrams without arrows were created from a set of ontology axioms (recall that, in order to calculate OCS, we require the diagrams to represent a set of ontology axioms), using 3 boxes and 5 curves. To each diagram we added either 1, 2 or 3 arrows in two different ways, so each base diagram generated 6 study diagrams. The two methods for adding arrows were to join the source and target curves by the shortest distance (which may cross a number of regions), or to join them by a routing which followed a smooth path and intersected as few other curves and arrows as possible. For example, see Figure 5, where the left-hand diagram uses the fewest crossings and the right-hand diagram uses shorter paths. Each drawing method yields the same AS2, OCS and HS, but possibly different scores according to AS1 and CS. The scores for each diagram and measure are shown in Table I, with the associated rankings. Although the rankings for some of the measures are similar, there are also ranking differences. Thus, we should be able to distinguish which measures most accurately reflect participants' rankings.

For T_C we created three base diagrams with 1 box and 2 arrows, where both arrows have the same source and target curves. To each diagram we then added an extra 1, 2 or 3 curves using two methods: in the first, the newly added curves intersect the existing arrows as much as possible; in the second, as little as possible. In this way, some measures will vary whilst others remain constant, allowing us to differentiate them. Figure 6 shows an example of a pair of diagrams from T_C: the left-hand diagram adds a single extra curve interacting with the arrows, and in the right-hand diagram the extra curve does not interact with the arrows. The rankings for the diagrams under each method are shown in Table II. Here we see less variation amongst the rankings. However, given the existing research, and the restriction of producing diagrams of the kind that would be used in practice, this smaller variation is to be expected.
For T_B we must be more careful, in that we need to ensure that the information in a base diagram (drawn with 5 boxes) was the same as the information in the 4-, 3- and 2-box diagrams generated from it. A process of merging was used (sketched in Figure 3 in section II, but fully explained in [8]) to maintain the information content across the different representations of the same diagram. A side effect of this merging process is that the total number of curves and arrows may decrease as boxes are merged. Again, three base diagrams, with 6 boxes, 13 curves and 2 arrows, were drawn. A merging process then created three 4-box diagrams, three 3-box diagrams, and three 2-box diagrams, yielding 9 diagrams. The merging followed two different methods: in the first, we merge the boxes which have the most information in common, whereas in the second we merge the boxes which have the least information in common. As an illustration, consider a diagram containing three boxes, two of which contain curves with the same labels, while the third contains none of the same labels as the other two. The first method would merge the two boxes with common curves, and the second method would merge either of the similar boxes with the dissimilar one. Examples of the different diagrams produced for the study can be seen in Figure 7, with the diagram on the left the result of merging two boxes with little information in common (resulting in the box with many curve intersections), and the diagram on the right the result of merging two boxes with common information. The rankings for the diagrams, according to the proposed measures, are given in Table II. As in T_A, there is more variation amongst the measures, as is to be expected owing to the different way each treats boxes.

For the final task, T_D, we generated 18 initial diagrams, with either 1 or 3 boxes, 5, 7 or 9 curves, and 1, 2 or 3 arrows. Half were drawn with each of the arrow-drawing methods described for T_A. This task allowed us to vary arrows, boxes and curves at once. The rankings for the diagrams are shown in Table II. There is variation amongst the rankings as well as similarity, allowing us to see which measure is most aligned with the participants' ranking.

B. Concept Diagram Layout

When drawing the diagrams for the study, we were careful to control their layout features to minimise (so far as practically possible) unintended variation. We adopted the following layout guidelines:
1) All diagrams were drawn with black lines for boxes, blue lines for curves, and green lines for arrows. This matches the presentation in [8] and [9], and thus makes our results more applicable to concept diagrams as used in practice.
2) The stroke width was set to 3 pixels for boxes, and 2 pixels for curves and arrows.
3) There were no labels in the diagrams.
4) The diagrams were printed in the centre of A5-size paper.
We chose not to include labels because we did not want participants to perform any task with the diagrams other than ranking them by how cluttered they were perceived to be. As noted in section III, we attempted to find a measure of clutter independent of task.
V. EXPERIMENT EXECUTION

Initially, 6 participants (3 M, 3 F, ages 21-42) took part in a pilot study. The pilot study was successful, and the participants finished the four tasks in less than half an hour. As no changes were deemed necessary, the pilot data was carried forward for analysis with the data collected in the main study phase. A further 63 participants were recruited, giving a total of 69 participants (30 M, 39 F, ages 18-46). All the participants were staff or students at the University of Brighton; none were members of the authors' research groups.

The participants performed the experiment in a room that provided a quiet environment free from interruption and noise. Each participant was alone during the experiment, apart from the experiment facilitator. Each participant was asked not to discuss the details of the study with other participants who were yet to perform the experiment. The participants were informed that they could withdraw at any time; however, all of them completed the experiment. Each participant was paid £6, in the form of a canteen voucher, to take part. The four tasks were conducted consecutively, in a random order, by each participant. Moreover, for each task, the printed diagrams were given to the participants in a random order, one at a time. The participants were asked to order the diagrams on the table from left to right, with the least cluttered on the left and the most cluttered on the right. If the participants thought that any diagrams were equally cluttered, they were told that they could place them at the same point, and that there was no limit on how many diagrams could be placed at the same point. The participants were given the opportunity to reorder the diagrams on the table at any time during the task, and were not given any help with their decisions during any of the tasks.

VI. RESULTS AND ANALYSIS

The goal of our work is to compare participants' rankings with our proposed measures. In order to do that, we first had to combine the 69 individual rankings for each task into a single ranking. A Friedman test was used for this purpose, giving estimated median ranks, from which we calculated the participants' ranking. For each task, the Friedman test had a p-value of 0.000, indicating that further analysis using the participants' ranking was justified. The participants' ranks for each task are given in Table III, with the estimated median ranks given for task T_A only.

Using these participants' rankings for each task, we performed a Pearson correlation test against our proposed measures. This approach is consistent with that used by others studying visual complexity, for example [15], [19]. A higher number indicates a better (positive) correlation between two rankings. Table IV shows the correlation coefficients for each proposed measure with the participants' ranking, by task; the figures in brackets are the p-values for the tests. The scatter plot of measure rankings against participants' ranking for T_A can be seen in Figure 8 (without OCS).
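The analysis pipeline just described can be sketched with scipy. The rankings below are random stand-ins, so the Friedman p-value will not be small here; with the real data it was reported as 0.000. We also approximate the paper's Friedman estimated median ranks by the plain per-diagram median rank, which is an assumption on our part.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata, pearsonr

rng = np.random.default_rng(0)
# 69 participants each ranking 18 diagrams (rows = participants).
rankings = np.array([rankdata(rng.permutation(18)) for _ in range(69)])

# Friedman test across the 18 diagrams.
stat, p = friedmanchisquare(*rankings.T)

# Combined participants' ranking from the per-diagram median rank.
participants_rank = rankdata(np.median(rankings, axis=0))

# Pearson correlation with a proposed measure's ranking (stand-in for, e.g., CS).
measure_rank = rankdata(rng.permutation(18))
r, p_corr = pearsonr(participants_rank, measure_rank)
print(stat, p, r, p_corr)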
We observe that OCS is a very poor measure of clutter in a diagram. The asterisks in the OCS column of Table IV indicate that there was no evidence of any relationship between the OCS rankings and the participants' rankings. This is not surprising: if data can be presented in a number of ways, it is highly likely that the representation will affect how cluttered the data appears. Since this measure was derived from the context of ontologies, we can conclude that one cannot look solely at an ontology's complexity to determine the visual complexity of the concept diagram used to represent it. All further analysis therefore excludes OCS.

All other measures showed correlation with participants' perceived clutter, but with different strengths. The measure AS1 showed the strongest correlation in two tasks, T_B and T_D, whereas the measure CS showed the strongest correlation in tasks T_A and T_C. Using the Fisher r-z transformation, we can test for significant differences between the correlation coefficients. For task T_A, pairwise tests indicate that the coefficients for CS and AS1 are significantly different from those for AS2 and HS (CS vs. AS2: z = 4.92, p = 0.0000; CS vs. HS: z = 6.39, p = 0.0000; AS1 vs. AS2: z = 3.56, p = 0.0002; AS1 vs. HS: z = 5.03, p = 0.0000), whereas there is no difference between AS1 and CS (z = 1.36, p = 0.0869) or between AS2 and HS (z = 1.47, p = 0.0708). For T_B, pairwise tests revealed no significant differences between any of the rankings. For T_C, the correlation coefficient for CS was significantly higher than those of all other rankings, with AS2 being the closest (CS vs. AS2: z = 2.95, p = 0.0016). For T_D, there were no significant differences between any of the correlation coefficients. Thus, we can conclude that CS is the best measure: it is never worse than any other measure, and in certain situations is significantly better.

That all measures (except OCS) are correlated with participant rankings suggests that the key component of clutter in a concept diagram is the clutter in the underlying Euler diagram. Three of the measures (AS1, AS2, HS) calculate the curve-based clutter in the same way, the only difference being in how boxes are scored. The remaining measure (CS) counts the intersections between objects in a diagram, the majority of which are intersections between curves. The more regions an Euler diagram has, the greater the number of intersections between curves; the notable exception is with nested curves (the tunnels of [15]). CS and AS1 score arrows in similar ways: the former counts the number of curves an arrow crosses between source and target, whereas the latter counts how deeply nested the endpoints are. In certain circumstances these give similar results: for example, a deeply nested source connected to a deeply nested target in another box will cross several curves. However, two deeply nested curves could be connected by an arrow which crosses no other curves; in this situation, the contribution to the CS from the arrow would be 1, whereas the contribution to AS1 would be larger. Since CS outperforms AS1, we can conclude that the number of crossing points is the best predictor of clutter.
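The pairwise comparisons above use the Fisher r-to-z transformation; a sketch using the standard independent-samples formula, with n = 18 ranked diagrams per task, is below. The example coefficients are hypothetical; the study's actual r values appear in Table IV.

import math

def fisher_rz_test(r1, r2, n1=18, n2=18):
    # Fisher z-transform of each coefficient, then a normal test statistic.
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 0.5 * math.erfc(abs(z) / math.sqrt(2.0))   # one-sided p-value
    return z, p

print(fisher_rz_test(0.99, 0.90))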
We also noted the time taken for participants to perform each task. The averages are shown in Table V, with times given in seconds. The data were transformed using logarithms to ensure normality. An ANOVA test conducted on the log of the time taken, by task (F(3, 68) = 6.56, p = 0.000), followed by pairwise Tukey tests at the 99% significance level, gives a difference between tasks T_B and T_C. In other words, participants were quicker ranking diagrams which differed in the number of curves than they were ranking diagrams which differed in the number of boxes. This observation supports our conclusion that the underlying Euler diagram is the largest component of the clutter score: when the underlying Euler diagram changed, participants could quickly rank the diagrams, but when other syntax varied, greater cognitive effort was required to rank the diagrams by clutter.

VII. CONCLUSIONS AND FUTURE WORK

We investigated perceived clutter in concept diagrams. By producing five plausible measures that could explain clutter in a diagram, and evaluating each of these measures against participants' rankings of diagrams, we found that a measure based on the number of line crossings in a diagram was the most effective predictor of clutter. Moreover, the results suggested that the largest part of the clutter in a concept diagram comes from the underlying Euler diagram. Our results are also consistent with the findings of [15], in that the measures AS1, AS2 and HS, all based on the clutter scores for Euler diagrams, are all correlated with clutter in concept diagrams. Moreover, since the measure CS counts the number of points of interest in a diagram, our findings are consistent with [20], where a study on clutter in linear diagrams found that the number of lines in the plane was the best predictor of clutter. We can also give a recommendation for arrows in concept diagrams to reduce clutter: have each arrow cross as few curves as possible.

This work can be taken in two future directions. Firstly, the effect of clutter on task performance using concept diagrams can be investigated. In [21], it was conjectured that more cluttered concept diagrams negatively affected the identification of empty classes in a set of ontology axioms. The study reported there was not directly investigating clutter, however, and focused on only one particular task. To validate or reject the conjecture of [21], additional studies need to be undertaken.

Secondly, there is a growing corpus of research on clutter in various visual languages. In this paper, we have seen that a syntax-based score of a diagram is consistent with perceived clutter in Euler diagrams, concept diagrams, and linear diagrams. We conjectured that the number of points in a diagram (either objects or interactions between objects) is what causes clutter. It would be relatively straightforward to test this conjecture using an eye-tracking experiment; such an experiment forms part of our future research directions. Other aspects of diagrams could also cause them to appear cluttered: the relative proportion of white space in a diagram, as well as the placement of objects relative to each other, could both affect perceived clutter. Whilst this paper is an important step towards providing a unifying framework for clutter, more work is needed to develop a general theory of clutter.

Fig. 7. Drawing choices for boxes.
TABLE I. Scores and rankings for diagrams in task T_A.
TABLE II. Rankings for diagrams in tasks T_B, T_C and T_D.
TABLE III. Participant rankings for all tasks.
TABLE IV. Correlations between clutter measures and perception, by task.
Fig. 8. Scatter plot for T_A.
TABLE V. Average participant time, by task.
Calculation of Bearing Capacity and Deformation of Composite Pile Foundation with Long and Short Piles in Loess Areas In order to calculate the bearing capacity and settlement deformation of composite pile foundations with long and short piles in collapsible loess areas, the theoretical approximate solution was used to obtain the location of the neutral point of single piles. Additionally, based on the equation for calculating the bearing capacity of multielement composite foundations, a method considering negative frictional resistance was proposed for calculating the bearing capacity of composite pile foundations with long and short piles. Based on the shear displacement method and the principle of deformation control, an equation for calculating the displacement and deformation of a composite pile foundation was presented. A model test with different operating conditions, i.e., a single pile, four piles, and eight piles, was designed to verify the proposed calculation methods. The results show that the location of the neutral point has a significant influence on the single-pile negative frictional resistance, and that the calculated neutral point ratio falls within the range of values found in practical projects. When the load at the top of the pile is relatively small, the experimental curve is consistent with the theoretical calculation curve, whereas when the load is comparatively large, the theoretically calculated displacement increase at the top of the pile is greater than the measured one. While the theoretical calculation agrees well with the test results overall, the theoretical value is larger than the actual value, which contributes to engineering safety.

Introduction

With superior force-transmission performance, pile foundations have good applicability in improving the bearing capacity of foundations and controlling the settlement deformation of buildings [1]. Large-diameter cast-in-place concrete piles have good structural performance, which has led to their increasingly common application in pile foundations [2]. According to engineering practice in collapsible loess areas, especially those with thick loess layers, the pile end must in general penetrate the collapsible loess layer, based on the principle of safe design. In self-weight collapsible loess sites, the pile end should penetrate the collapsible layer and be supported by a reliable rock or soil layer; in non-self-weight collapsible loess sites, the pile end should likewise be supported by a non-collapsible loess layer with relatively high bearing capacity. From the viewpoint of safety and stability, it is beyond doubt that pile foundations (especially pile length control) should be strictly designed. Nevertheless, with a continually improved understanding of the effective treatment thickness of collapsible loess foundations [3], we find that the design principle requiring all engineering piles to penetrate the collapsible loess layer is safe but conservative. Therefore, apart from optimizing the pile length, another key issue that remains to be solved is the identification of new pile foundation types suitable for collapsible loess, together with their design theories and calculation methods. With deformation control as the design principle, the composite pile foundation with long and short piles [4,5] was proposed as applicable in loess areas, based on the analysis principle of settlement-reducing piles.
The design principle is that, when there are two or more bearing layers in the foundation soil, the long piles are all located in the deep bearing layer to control the settlement deformation of the building, whereas the short piles are located in the shallow bearing layers to provide bearing capacity. In recent years, there have been many extensive studies on composite pile foundations with long and short piles [6]. Field tests were carried out by Gotmann and Sokolov [7], who studied the performance of an XCC composite foundation on the basis of load-settlement curves, axial force and side friction distributions, pile-soil load sharing, and the pile-soil stress ratio, which provides guidance on the bearing capacity of composite piles. Ma et al. [8] systematically expounded the design concept of a composite pile foundation with long and short piles for the first time, introduced the steps to calculate its bearing capacity and settlement, and analyzed the selection of the related design parameters. Ge et al. [9] discussed calculation methods for the bearing capacity and settlement of a composite pile foundation with long and short piles in a soft soil area and conducted a comparative analysis of the measured and calculated settlement values. Based on three-dimensional finite element analyses of a pile foundation with all short piles, a pile foundation with all long piles, and a composite pile foundation with long and short piles, Yang et al. [10] found that the composite pile foundation not only reduced the number of long piles used but also effectively controlled the overall settlement and differential settlement of the foundation. In terms of calculating the settlement deformation of long and short piles, Yang et al. [11] proposed a practical settlement calculation method for composite pile foundations with long and short piles and applied it to the settlement analysis of a high-rise residential building. Zhao et al. [12,13] established a calculation model for the settlement of composite pile foundations with long and short piles based on the shear displacement method, with certain simplifying assumptions. Hong et al. [14] carried out a field study on the downdrag and dragload of two pairs of bored piles installed in consolidating ground; the results show that the dragload in a floating pile can be one-third less than that in an end-bearing pile with similar geometry embedded in the same consolidating ground. Ni et al. [15] studied, by numerical investigation, the effect of scouring around the pile on the lateral capacity of piles embedded in sandy soil. Li and Gong [16] presented a method for predicting the load-carrying behavior of pile groups consisting of new and existing displacement piles by combining the load-transfer method and the shear displacement method, and explored the stiffness efficiency and load-settlement behavior of pile groups with different layouts of new and existing piles. Li et al. [17] presented a semianalytical approach to predict the time-dependent bearing performance of an axially loaded jacked pile in saturated clay strata, and provided an incremental algorithm and a corresponding computational code to assess the time-dependent load-settlement response of a jacked pile.
There are currently many studies on the bearing capacity of composite pile foundations with long and short piles, but few in-depth studies have been conducted on their applicability in collapsible loess areas, and almost no theoretical analysis is available regarding the location of the neutral point or the calculation of the reduction in bearing capacity caused by negative frictional resistance. Feng et al. [18], based on the modified Burgers model, proposed nonlinear viscoelastic pile-shaft and pile-base load-transfer (t-z) models; using these models, a nonlinear approach was presented to calculate the long-term settlement of a vertically loaded single pile and of a pile group in layered soft soil. The pile-soil interface behavior is of particular interest at the pile surface, where the shaft friction of the pile plays an important role in resisting the applied load [19,20]. The objective of this study was to develop the theoretical design and calculation method of a composite pile foundation with long and short piles in collapsible loess areas. The location of the neutral point was determined through theoretical analysis, a method considering negative frictional resistance was proposed to calculate the bearing capacity of the composite pile foundation, and the corresponding equation to calculate the displacement deformation was presented. A model test was then carried out to verify the rationality and applicability of the proposed methods for calculating the bearing capacity and settlement deformation. This paper aims to provide a reference for theoretical research and engineering practice concerning composite pile foundations with long and short piles in loess areas.

Determination of Neutral Points of Composite Pile Foundation with Long and Short Piles

The bearing capacity of a pile foundation is mainly provided by the pile-side frictional resistance and the pile-end resistance. The direction and magnitude of the pile-side frictional resistance depend on the relative displacement between the pile and the surrounding soil. If the collapsible loess collapses, its displacement is usually greater than that of the pile, and the soil around the pile exerts negative frictional resistance on the pile. The negative frictional resistance is inhomogeneously distributed and varies along the length of the pile. Where the displacements of the pile and of the surrounding soil are equal, the frictional resistance is zero. Below this point, the displacement of the pile becomes larger than that of the surrounding soil, so the soil provides upward, positive frictional resistance, which improves the bearing capacity of the pile foundation. This point is therefore the demarcation point between positive and negative frictional resistance, the so-called neutral point of the pile foundation in the collapsible loess layer. Field testing is the most reliable method for locating the neutral point in engineering design, but since field tests are time-consuming and costly, estimation methods are used more often in practice. The commonly used methods for estimating the neutral point include the empirical method, the Japanese specification-based approach, and the theoretical approximate solution [19].
Although the empirical method does not involve complex calculation, it does not take into account influencing factors such as the properties of the soil around the pile, the load around the pile, or the characteristics of the pile itself, which leads to a relatively large deviation between the calculated location of the neutral point and the actual one. The Japanese specification-based approach [21] considers the influence on the negative frictional resistance of factors such as the additional load around the pile, the load at the top of the pile, the properties of each soil layer, and the characteristics of the pile body. Its results are relatively close to measured values, but it involves many parameters, so its practical operability is comparatively poor. Combining the advantages of the empirical method and the Japanese specification-based approach, the theoretical approximate solution considers the impact of the additional load around the pile, the load at the top of the pile, the properties of each soil layer, and the characteristics of the pile body on how the negative frictional resistance varies along the pile, and its calculation parameters are common and easy to obtain. The calculated results are therefore in good agreement with the actual negative frictional resistance. Hence, in this study, the neutral point of the composite pile foundation with long and short piles was determined using the theoretical approximate solution.

According to the theoretical approximate solution and the force balance condition of the pile, the following equation was obtained:

P_up + P_g + P_sn = P_s + P_dp, (1)

where P_up is the load at the top of the pile; P_g is the gravitational force of the pile body; P_sn denotes the single-pile negative frictional resistance; P_s is the positive frictional resistance; P_dp represents the pile-end resistance; and all variables are expressed in kilonewtons. If the pile-side frictional resistance is assumed to be directly proportional to the effective pressure q of the overlying soil, the β method can be used to obtain the expression for the frictional resistance (Equation (2)), in which L is the length of the pile (m); γ is the unit weight of the soil layer, taken as the buoyant unit weight below the groundwater level (kN/m³); and L_f is the depth of the neutral point (m). The pile-end resistance P_dp was calculated as the product of the standard value of the ultimate pile-end resistance, which can be taken from the specification, and the pile-end area A_p (m²). The β coefficient was calculated as a function of the relative density D_r. Based on Equations (1)-(5), Equation (7) for the depth of the neutral point can be obtained; by substituting different values of P_up, β₂, β₁, P_g, P_dp, and A_p into Equation (7), the location of the neutral point of each pile was obtained.

Calculation of Pile-Group Negative Frictional Resistance of Composite Pile Foundation with Long and Short Piles

In the previous section, the location of the neutral point of the pile and the single-pile negative frictional resistance were calculated based on the theoretical approximate solution; that is, the single-pile negative frictional resistance was calculated by simultaneously using Equations (7) and (2).
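As a minimal numerical sketch of the procedure, assume the β-method side friction q = βγz acting on the pile perimeter U, and the force balance P_up + P_g + P_sn(L_f) = P_s(L_f) + P_dp of Equation (1); the neutral point depth L_f can then be found by root finding. The perimeter term and every numerical value below are illustrative assumptions, not the paper's model-test parameters.

import math
from scipy.optimize import brentq

L, d = 20.0, 0.6                        # pile length and diameter (m)
U = math.pi * d                         # pile perimeter (m)
gamma = 18.0                            # soil unit weight (kN/m^3)
beta_neg, beta_pos = 0.25, 0.35         # beta above / below the neutral point
P_up, P_g, P_dp = 800.0, 140.0, 500.0   # top load, pile weight, end resistance (kN)

def imbalance(Lf):
    P_sn = U * beta_neg * gamma * Lf**2 / 2.0            # negative friction, 0..Lf
    P_s = U * beta_pos * gamma * (L**2 - Lf**2) / 2.0    # positive friction, Lf..L
    return (P_up + P_g + P_sn) - (P_s + P_dp)

Lf = brentq(imbalance, 0.0, L)
print(f"neutral point depth = {Lf:.2f} m, ratio Lf/L = {Lf / L:.2f}")

With these invented inputs the neutral point ratio comes out near 0.7, at the upper end of the 0.35-0.70 range cited later for practical projects.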
The pile-group negative frictional resistance of the composite pile foundation was determined by

Q_g^n = n η_n P_sn, (8)

where Q_g^n is the negative frictional resistance of the pile group (kN); n is the total number of piles; η_n is the pile-group effect coefficient of negative frictional resistance; and P_sn denotes the single-pile negative frictional resistance (kN).

In the above calculation, the pile-group effect on negative frictional resistance was considered. The pile-side negative frictional resistance is caused by the settlement of the soil beside the pile. If the weight of soil shared per unit surface area by each pile in the group is less than the limit value of the single-pile negative frictional resistance, the negative frictional resistance of the foundation piles will decrease. Therefore, when calculating the negative frictional resistance of foundation piles in a group, the pile-group effect should be taken into account. In this study, the pile-group effect coefficient of negative frictional resistance, η_n, is determined by Equation (9), in which S_ax and S_ay stand for the centre-to-centre distances of the piles in the longitudinal and transverse directions, respectively (m); d is the diameter of the pile (m); γ_m is the weighted average unit weight of the soil around the pile above the neutral point (kN/m³); and q_sn is the standard value of the weighted average negative frictional resistance of the soil around the pile above the neutral point (kN).

Calculation of Bearing Capacity of Composite Pile Foundation with Long and Short Piles

Under normal circumstances, for site soils with special characteristics, a reinforcement body is adopted to treat the soil. If the control objectives for bearing capacity or deformation still cannot be met, Recommendations for Design of Building Foundations [21] stipulates that a composite pile foundation with long and short piles may be used. Based on the area replacement rates and the pile-soil stress ratio, the bearing capacity of a multielement composite foundation is obtained by summing the bearing capacities of the long piles, the short piles, and the soil between the piles, each multiplied by a corresponding effective strength coefficient. The bearing capacity of a multielement composite foundation, f_sp,k2, is given by Equation (10), in which m₁ and m₂ are the area replacement rates of the short and long piles, respectively; λ₁ and λ₂ are the single-pile bearing capacity exertion coefficients of the short and long piles, respectively; and f_sk is the characteristic value of the bearing capacity of the treated soil between the piles of the composite foundation (kPa). The values of λ₁ and λ₂ should be determined by a single-pile composite foundation test according to equal-deformation criteria, or by a static load test of the multipile composite foundation; they may be taken from regional experience where available, and should otherwise be taken in the range 0.7 to 0.9.

In summary, the location of the neutral point and the negative frictional resistance were determined by the theoretical approximate solution.
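As implied by the variable definitions above, Equation (8) reduces to the product Q_g^n = n·η_n·P_sn; a one-line sketch with placeholder numbers:

def group_negative_friction(n_piles, eta_n, P_sn):
    # Pile-group negative frictional resistance, Q_g^n = n * eta_n * P_sn (kN).
    return n_piles * eta_n * P_sn

print(group_negative_friction(n_piles=8, eta_n=0.8, P_sn=25.0))   # 160.0 kN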
Equation (8), for the pile-group negative frictional resistance, was obtained from Equation (2), the expression for the single-pile negative frictional resistance. Starting from Equation (10) for the bearing capacity of a multielement composite foundation, an equation that accounts for negative frictional resistance in the bearing capacity of composite pile foundations with long and short piles in collapsible loess areas with thick loess layers can be obtained (Equation (11)), where f_spk is the bearing capacity of the composite pile foundation with long and short piles in a collapsible loess area (kN). According to the above analysis, Equation (11) provides the theoretical calculation of the bearing capacity of composite pile foundations with long and short piles in collapsible loess areas, thus giving a theoretical basis for the application of such foundations in the loess areas of Northwest China.

Calculation of Settlement of Composite Pile Foundation with Long and Short Piles in Loess Areas

Theoretically, when calculating the settlement deformation of a multielement composite foundation, the foundation can be divided into the composite soil layer and the underlying soil layer. The common methods for calculating the settlement deformation of composite pile foundations with long and short piles include the composite modulus method, the stress correction method, and the shear displacement method. Among them, the composite modulus method recommended by Recommendations for Design of Building Foundations [21] determines the settlement by calculating the composite modulus of the soil layer after treatment with long and short piles, but it requires a large number of moduli to be determined. Although the stress correction method is relatively simple, it is difficult to determine the pile-soil stress ratio and the stress correction coefficient of the composite foundation, and the role of the long and short piles is ignored in the calculation. Using the shear displacement method, Zhao et al. [12,13] introduced a pile-pile and pile-soil interaction model based on specific assumptions; with the effect of the bedding layer considered, a method for calculating the settlement of a composite foundation was proposed based on the joint action of bedding, piles, and soil, but it did not take into account the problems caused by loess collapsibility. Based on the principle of deformation control, Equation (11) is introduced here to obtain a method that considers negative frictional resistance when calculating the settlement deformation of composite pile foundations in collapsible loess areas.

For simplification, Zhao et al. [12,13] made the following assumptions: (1) both the soil layer between the piles of the composite foundation and the underlying layer are homogeneous elastic bodies, and the long and short piles share the same material and geometrical shape; (2) the foundation structure is absolutely rigid, and the vertical displacements under the raft are equal. The equation of the joint action of the pile-soil-raft system was obtained as

[K]{w} = {P}, (12)

where [K] is the stiffness matrix of the pile-soil-raft system, and {w} and {P} are, respectively, the vertical displacement vector {w₁, w₂, ..., w_i, ..., w_N}ᵀ and the counterforce vector {P₁, P₂, ..., P_i, ..., P_N}ᵀ of the relevant nodes.
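A sketch of Equation (12) under the rigid-raft assumption (2): with all nodal displacements equal, the uniform settlement follows from the total load and the summed stiffness, w = P/Σ[K]. This reduction is our reading of the system, not necessarily the paper's Equation (13), and the stiffness matrix below is a random symmetric stand-in rather than a real pile-soil-raft model.

import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, size=(6, 6))
K = (A + A.T) / 2.0 + 6.0 * np.eye(6)   # symmetric, diagonally dominant (kN/mm)

P_total = 1200.0                        # total external load (kN)
w = P_total / K.sum()                   # uniform settlement of the rigid raft (mm)
P_nodes = K @ np.full(6, w)             # counterforce carried by each node (kN)
print(w, P_nodes.sum())                 # the nodal forces recover P_total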
The influences of the stiffness matrix of the pile-soil-raft system, the flexibility coefficients of the pile-pile, pile-soil, and soil-soil interactions, and the action of the raft were considered. In the end, the basic form of the equation for calculating the settlement of the pile-soil-raft system was obtained (Equation (13)), where P is the total external load. Equation (13) is the equation proposed by Zhao Minghua for calculating the settlement of composite foundations based on the shear displacement method, and it can be evaluated by programming. However, it does not consider the existence and impact of negative frictional resistance in loess areas. Therefore, based on the method proposed in this study for calculating the bearing capacity of composite foundations with long and short piles in collapsible loess areas, a corresponding settlement equation was obtained. With the pile-group negative frictional resistance thus taken into account, the method for calculating the settlement deformation of composite foundations with long and short piles in loess areas was established, and the theoretical method for calculating the settlement of such foundations based on deformation control was thereby presented.

Indoor Verification Test

A model test was carried out to verify the applicability and rationality of the proposed calculation methods.

Test Scheme. Three tests were designed: (1) a bearing capacity test of the foundation with a single pile; (2) a bearing capacity test of the foundation with 4 piles (2 short piles + 2 long piles); and (3) a bearing capacity test of the foundation with 8 piles (4 short piles + 4 long piles). Table 1 shows the test schemes of the composite pile foundations with long and short piles, and the layouts of the piles are shown in Figure 1.

Model Piles. A PVC pipe with an outer diameter of 75 mm and an inner diameter of 70 mm was used as the mould for the test piles. Four wires, each 4 mm in diameter, were bound with fine binding wire at 25 mm intervals to form a wire cage 50 mm in diameter, which was placed in the PVC mould. The wires served as the longitudinal bars of the pile, and the fine binding wire was equivalent to a spiral stirrup. The piles were cast from concrete with a strength grade of C25, as shown in Figure 3.

Screening and Filling of Soil around the Pile. The soil used in the test was Lanzhou collapsible loess, with a water content of 12.98% and a maximum dry density ρ_d,max of 1.76 g/cm³. The soil was filled around the piles in layers 100 mm thick, with the drop height kept the same for each layer. Tests on the fill samples showed a degree of compaction of 91%, which satisfied the test requirements.

Loading. There are currently few experimental studies on this type of model pile. In this study, the bearing capacity of the model pile was estimated according to the Technical Code for Building Pile Foundations (JGJ 94-2008) [22]. The load increment for a single pile at each stage was taken as 0.5 kN, and the load increment for a pile group at each stage was n times that of a single pile, where n is the number of piles in the group. Figure 4 shows the loading process.

Conversion between Single-Pile Settlement and Pile-Group Settlement.
An experiment was set up using the test scheme described above: the length of the long piles was 1100 mm, the length of the short piles was 850 mm, and the piles were laid out in a plum-blossom (staggered) pattern. The data measured in the model test give the relationship between the load at the top of the pile and the displacement of a single pile within a pile group, whereas Equation (13) calculates the overall settlement of the long and short piles; it is therefore necessary to convert the single-pile settlement measured in the model test into a pile-group settlement by a reasonable calculation. Considering the simplicity and practicality of the equivalent pier method [23] for calculating the settlement of a pile group from that of a single pile, this method was used for the conversion, where S is the Winkler ground coefficient and S_st is the settlement of a single pile.

Calculation of Neutral Point and Negative Frictional Resistance of Long and Short Piles

Calculation of Neutral Point of Long and Short Piles. The neutral point of each long and short pile in this test was calculated from Equation (7), and the results are shown in Table 2. A large amount of experimental data has shown that the neutral point ratio mostly lies between 0.35 and 0.70 [12]. The neutral point ratio calculated by Equation (7) in this test falls within this range, indicating that the calculation is reasonable.

Single-Pile Negative Frictional Resistance of Long and Short Piles. Because the neutral points of the long and short piles are located differently, their single-pile negative frictional resistances also differ. The single-pile negative frictional resistances of the long and short piles were calculated by Equation (2), and the results are shown in Table 3. As Table 3 shows, the location of the neutral point has a significant influence on the single-pile negative frictional resistance: owing to the different neutral point locations, the negative frictional resistance of the long piles was about 30% larger than that of the short piles.

Pile-Group Negative Frictional Resistance. For a pile group with relatively small pile spacing, the negative frictional resistance of the foundation piles is reduced by the pile-group effect. Equation (8) was used to calculate the negative frictional resistance of each pile group, taking into account the pile-group effect coefficient of negative frictional resistance; the results are shown in Tables 4 and 5. Comparing Tables 4 and 5, since the composite foundations with 4 piles and with 8 piles had the same pile spacing and single-pile diameter, their pile-group effect coefficients were also the same. The difference between the negative frictional resistances of the two pile groups is therefore caused by the difference between the single-pile negative frictional resistances of the long and short piles.

Analysis of the Test Results and Theoretical Calculation of Composite Foundation with 4 Piles. As shown in Figure 5(a), when the load at the top of the pile was less than 8 kN, the displacement at the top of the pile was relatively small, and the test curve was consistent with the theoretically calculated curve, both lying between about 0 and 0.4 mm.
When the load at the top of the pile exceeded 8 kN, the displacement at the top of the pile gradually increased, reaching 1.5 mm at a load of 12 kN. When the load exceeded 12 kN, there were significant differences between the theoretical and test curves: as the load at the top of the pile increased, the rate of increase of the theoretically calculated displacement was larger than that of the test curve. When the load at the top of the pile reached 16 kN, the measured settlement was 8.5 mm, whereas the theoretical settlement was 10.5 mm, about 23.5% larger than the measured value. The reason may be that the theoretical calculation considered the negative frictional resistance of the pile group and the pile-cap effect brought about by the collapsible layer, while in the test the collapsible layer may not have completely collapsed and therefore may not have fully mobilised negative frictional resistance on the long and short piles. In general, the theoretical value was larger than the actual value, which is conservative and thus conducive to engineering safety.

Analysis of the Test Results and Theoretical Calculation of Composite Foundation with 8 Piles. Figure 5(b) compares the test curve and the theoretically calculated settlement curve of the composite foundation with 8 piles. When the load at the top of the pile was less than 9 kN, the measured and theoretically calculated settlement curves were essentially the same. However, when the load exceeded 14 kN, the measured and theoretical curves differed significantly. Similar to the results for the composite foundation with 4 piles, the theoretical settlement curve of the composite foundation with 8 piles lay above the measured curve. When the load at the top of the pile exceeded 21 kN, the theoretically calculated displacement increased rapidly with the load. When the load reached 32 kN, the theoretical settlement was 26.5 mm, 26.03% higher than the measured value of 19.6 mm. The theoretical value was larger than the actual value, which is comparatively conservative and therefore beneficial to engineering safety.

Conclusions

(1) In this study, the location of the neutral point of a single pile was obtained from the theoretical approximate solution, and a method considering negative frictional resistance was proposed for calculating the bearing capacity of composite pile foundations with long and short piles, based on the equation for the bearing capacity of multielement composite foundations recommended by the specification.

(2) With the shear displacement method as the theoretical basis for calculating the settlement deformation, the bearing capacity method proposed in this study was used to replace the load calculation parameters, so that the calculated negative frictional resistance is incorporated, giving a method for calculating the settlement deformation of composite pile foundations with long and short piles.

(3) Comparison between the indoor model test and the results calculated using the proposed method indicates that the calculated location of the neutral point lies within the range of engineering practice experience.
The comparison between the theoretically calculated settlement values and the settlement results obtained in the test shows that when the load at the top of the pile is relatively small, the displacement at the top of the pile is also small, and the test curve is consistent with the theoretically calculated curve. When the load at the top of the pile is relatively large, the displacement at the top of the pile gradually increases, and the theoretical and measured curves differ; that is, the rate of increase of the theoretically calculated displacement at the top of the pile is larger than that of the measured one.

(4) Through comparative analysis of the test and the theoretical calculation, it is found that the theory proposed in this study for calculating the bearing capacity and settlement deformation of composite pile foundations with long and short piles is reasonable and applicable to foundation engineering in collapsible loess areas.

Data Availability

Some or all of the data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request, including discussion with the authors, repetition of the tests, and other situations.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Optical quantum nondemolition measurement of a solid-state spin without a cycling transition Optically interfaced spins in the solid state are a promising platform for quantum technologies. A crucial component of these systems is high-fidelity, projective measurement of the spin state. In previous work with laser-cooled atoms and ions, and with solid-state defects, this has been accomplished using fluorescence on an optical cycling transition; however, cycling transitions are not ubiquitous. In this work, we demonstrate that modifying the electromagnetic environment using an optical cavity can induce a cycling transition in a solid-state atomic defect. By coupling a single erbium ion defect to a telecom-wavelength silicon nanophotonic device, we enhance the cyclicity of its optical transition by a factor of more than 100, which enables single-shot quantum nondemolition readout of the ion's spin with 94.6% fidelity. We use this readout to probe coherent dynamics and relaxation of the spin. This approach will enable quantum technologies based on a much broader range of atomic defects.

A key capability for atomic defects is high-fidelity spin readout using the optical transition [1]. Single-shot optical spin measurements have been achieved in quantum dots [22] and in the NV [23] and SiV⁻ [24] color centers in diamond by leveraging highly cyclic optical transitions that arise from atomic selection rules. However, cyclic optical transitions are not a universal feature of atomic defects, and are often absent in low-symmetry defects and in the presence of strain [25] or of spin-orbit coupling without careful alignment of the magnetic field [22,24]. Single-shot readout has not been achieved in atomic defects without intrinsic cycling transitions, such as rare earth ions [26]. In this work, we demonstrate that tailoring the electromagnetic density of states around an atom with an optical cavity can induce highly cyclic optical transitions in an emitter that is not naturally cyclic. Using a single Er³⁺ ion in Y₂SiO₅ (YSO) coupled to a silicon nanophotonic cavity (Fig. 1a), we demonstrate a greater than 100-fold enhancement of the cyclicity: under conditions where the ion alone can scatter fewer than 10 photons before flipping its spin, a cavity-coupled ion can scatter over 1200. This is sufficient to realize single-shot spin readout with a fidelity of 94.6%, and to enable continuous, quantum nondemolition measurement of quantum jumps between the ground-state spin sublevels. The improvement in the cyclicity arises from selective Purcell enhancement of the spin-conserving optical decay pathway (Fig. 1b), determined primarily by the alignment of the cavity polarization and the spin quantization axis defined by a magnetic field. A small additional enhancement arises from the detuning of the spin-non-conserving transitions from the optical cavity, an effect that was recently used to enhance the cyclicity of a quantum dot in a nanophotonic cavity [27]. This generic technique opens the door to exploiting a much broader range of atomic defects for quantum technology applications, and is a particular advance for individually addressed rare earth ions. Our experimental approach, following Ref. 20, is based on a YSO crystal doped with a low concentration of Er³⁺ ions placed in close proximity to an optical cavity in a silicon photonic crystal waveguide (Fig. 1a). Assembled devices are placed inside a ³He cryostat at 0.54 K with a 3-axis vector magnet.
Light is coupled to the cavities using a lensed optical fiber on a 3-axis translation stage. The high quality factor (6 × 10⁴) and small mode volume of the cavity, together with the high radiative efficiency of the Er³⁺ optical transition, enable Purcell enhancement of the Er³⁺ emission rate by a factor of P = 700 (Fig. 2a). There are several hundred ions within the mode volume of the cavity, but their optical transitions are inhomogeneously broadened over a several-GHz span, such that stable, single-ion lines can be clearly isolated (Fig. 1c) [20]. The ground and excited states of the 1.536 µm optical transition in Er³⁺:YSO are effective spin-1/2 manifolds, which emerge when the crystal field lifts the 16-fold (14-fold) degeneracy of the ⁴I_15/2 ground (⁴I_13/2 excited) states. In the absence of a magnetic field, the ground and excited states are two-fold degenerate, as required by Kramers' theorem [28]. This degeneracy is lifted in a small magnetic field, revealing four distinct optical transitions (Fig. 1b,c). Transitions A and B conserve the spin, while C and D flip the spin.

To probe the selection rules of the optical transition, we excite the spin-conserving A and B lines alternately (Fig. 2a). The average fluorescence following the A and B pulses is the same, since the transitions are symmetrically detuned from the cavity and the spin is, on average, unpolarized by continuous optical pumping from the excitation light. However, the intensity autocorrelation function g^(2)(n t_rep) (where n is the offset in the number of pulses) is anti-bunched for odd-numbered pulse offsets (i.e., A-B correlations) and bunched for even offsets (i.e., A-A or B-B correlations), revealing that only one of the transitions A or B is bright at any given time, depending on the instantaneous spin state (Fig. 2b). Note that the fluorescence after each pulse is integrated before computing the autocorrelation, so g^(2)(n t_rep) is only defined at discrete times. Eventually, the spin relaxes and g^(2) decays exponentially to 1 after an average of n_0 pulses. Under the assumption (to be verified later) that the observed spin relaxation arises primarily from optical pumping between the spin sublevels, we extract the optical transition cyclicity C = n_0 P_ex, where P_ex ≈ 1/2 is the probability of exciting the ion in each pulse. This value of P_ex is ensured by using an intense excitation pulse to saturate the ion, and is verified using the independently measured collection efficiency [29]. We repeat this measurement for different orientations of the magnetic field, and find that the cyclicity varies by nearly three orders of magnitude (Fig. 2c,d), with a maximum value of 1260 ± 126.
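A toy Monte Carlo of this pulsed autocorrelation measurement is sketched below. It assumes the spin flips only through the optical decay branch, with probability 1/C per excitation; C = 660 echoes the value quoted in the Fig. 2 caption, while the detection efficiency, pulse count, and fitting range are invented. In this simplified model the even-offset excess correlation decays with a constant n_0 = C/P_ex, so the printed estimate recovers C = n_0 P_ex.

import numpy as np

rng = np.random.default_rng(2)
C_true, P_ex, eta, N = 660, 0.5, 0.1, 1_000_000

spin, counts = 0, np.zeros(N)
drives = np.arange(N) % 2                 # pulses alternate B (0) and A (1)
for i in range(N):
    if spin == drives[i] and rng.random() < P_ex:   # resonant pulse excites the ion
        counts[i] = 1.0 if rng.random() < eta else 0.0
        if rng.random() < 1.0 / C_true:             # spin-flip decay branch
            spin ^= 1

def g2(c, n):
    return float((c[:-n] * c[n:]).mean() / c.mean() ** 2)

print(g2(counts, 1), g2(counts, 2))       # odd offsets anti-bunched, even bunched
ns = np.arange(2, 800, 2)                 # even offsets: A-A / B-B correlations
g = np.array([g2(counts, n) for n in ns])
slope, _ = np.polyfit(ns, np.log(np.clip(g - 1.0, 1e-3, None)), 1)
print("estimated cyclicity:", -1.0 / slope * P_ex)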
We repeat this measurement with different orientations of the magnetic field, and find that the cyclicity varies by nearly three orders of magnitude (Fig. 2c,d), with a maximum value of 1260 ± 126. This results from the changing orientation of the atomic transition dipole moment with respect to the cavity polarization. It can be captured by a simple model where the decay rate on each transition is proportional to the squared projection of an associated dipole moment $\mathbf{d}$ onto the cavity polarization $\hat{\epsilon}$ at the position of the ion: $|\hat{\epsilon}\cdot\mathbf{d}_{\parallel}|^{2}$ for the spin-conserving transitions and $|\hat{\epsilon}\cdot\mathbf{d}_{\perp}|^{2}$ for the spin-flipping ones. When the magnetic field is rotated, the spin sublevels mix such that $|\!\uparrow(\phi',\theta')\rangle = \alpha\,|\!\uparrow(\phi,\theta)\rangle + \beta\,|\!\downarrow(\phi,\theta)\rangle$, with the coefficients $\alpha, \beta$ completely specified by the anisotropic g tensor describing the Zeeman shifts (Fig. 1d) [30]. Together with the time-reversal symmetry properties of the Kramers' doublets, this allows the complete angular dependence of $C = \Gamma_{AB}/\Gamma_{CD} + 1$ to be described by only two parameters: $\hat{\epsilon}\cdot\mathbf{d}_{\parallel}$ and $\hat{\epsilon}\cdot\mathbf{d}_{\perp}$ at a single (arbitrary) reference orientation [29]. In this model, the role of the cavity is to restrict the decay to a particular polarization, such that the decay rates are determined by a single matrix element $|\hat{\epsilon}\cdot\mathbf{d}|^{2}$; in free space, there is no preferred $\hat{\epsilon}$. Since the dipole matrix elements for Er³⁺:YSO and the cavity field polarization at the position of the atom are not known, we treat $\hat{\epsilon}\cdot\mathbf{d}_{\parallel}$ and $\hat{\epsilon}\cdot\mathbf{d}_{\perp}$ as fit parameters. A fit to this model displays excellent agreement with the data, and allows the complete angle dependence of the cyclicity to be extracted from a small number of measurements. While this discussion centers on electric dipole coupling, the Er³⁺ transition we study has comparable electric and magnetic dipole matrix elements [31], and the predicted magnetic Purcell factor for our structures is similar [20], depending on the precise position of the ion. We show in the Supplementary Information [29] that the electric and magnetic contributions have the same angular dependence and may be summed into a single term.

To quantify the extent to which the cyclicity is enhanced by the cavity, we study a second ion with lower Purcell factor, and then lower it further by detuning the cavity. The cyclicity is observed to decrease roughly linearly with $P$ (Fig. 2e). Based on the dependence of the cyclicity on the cavity detuning for this ion, we estimate that the cyclicity $C_0$ of the ion alone is less than 10 [29], such that the enhancement by the cavity is greater than 100. We note that $C_0$ has not been directly measured for Er³⁺:YSO.

Next, we focus on using the cavity-enhanced cyclicity to measure the spin state. Fig. 3a shows a time trace of photons recorded in a single run of the experiment, with telegraph-like switching between $|\!\downarrow_g\rangle$ (where transition B is bright) and $|\!\uparrow_g\rangle$ (where transition A is bright) clearly visible. A continuous estimation of the spin state occupation using a Bayesian estimator applied to the full measurement record [32] shows clearly resolved quantum jumps between these states, demonstrating the quantum nondemolition nature of the measurement. The quantum jumps are driven by optical pumping from the measurement process itself, because of the finite cyclicity. To demonstrate single-shot measurement of the spin, we use a maximum likelihood (ML) algorithm to estimate the state at time $t$ using photon counts from times $t' > t$. The measurement duration is adaptive: each measurement terminates when a set fidelity threshold or time limit is reached, and a new, independent measurement is begun [33]. The outcome of each measurement is shown by the circles in Fig. 3b. The average measurement fidelity estimated by the ML algorithm is 94.6%, and 91% of consecutive measurements have the same outcome. The average time to complete a measurement is 20 ms, which corresponds to the average time to detect two photons. The optimum fixed measurement window is 51 ms, resulting in a slower measurement with a lower average fidelity of 91.1% (Fig. 3c).
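The continuous Bayesian state estimate referenced above can be illustrated with a simple two-state forward filter. This is a schematic sketch under our own assumptions (per-pulse click probabilities, an averaged spin-flip rate, and the alternating A/B pulse pattern); the estimator of Ref. 32 operates on the full time-tagged measurement record.

```python
import numpy as np

def spin_state_filter(clicks, C, p_ex=0.5, eta=0.02, p_bkg=1e-4):
    """Forward Bayesian filter for a two-level spin probed by alternating
    A/B pulses. clicks[k] = 1 if a photon was detected after pulse k;
    even-index pulses drive A (bright when the spin is up). Returns the
    posterior P(up) after every pulse."""
    q = p_ex / (2.0 * C)      # average per-pulse flip probability (flips occur on bright pulses)
    p_sig = p_ex * eta        # detection probability when the driven line is bright
    p_up = 0.5                # unpolarized prior
    posterior = np.empty(len(clicks))
    for k, click in enumerate(clicks):
        p_up = p_up * (1 - q) + (1 - p_up) * q        # spin may have jumped
        up_is_bright = (k % 2 == 0)
        pc_up = p_sig if up_is_bright else p_bkg      # P(click | spin up)
        pc_dn = p_bkg if up_is_bright else p_sig      # P(click | spin down)
        l_up = pc_up if click else 1.0 - pc_up
        l_dn = pc_dn if click else 1.0 - pc_dn
        p_up = l_up * p_up / (l_up * p_up + l_dn * (1.0 - p_up))  # Bayes rule
        posterior[k] = p_up
    return posterior
```

A single-shot outcome can then be read off by thresholding this posterior, terminating once it crosses a target fidelity, in the spirit of the adaptive ML readout described above.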
Lastly, we apply this spin readout technique to investigate the ground-state spin dynamics. We measure the intrinsic spin relaxation rate $T_{1,\mathrm{dark}}$ by reducing the optical excitation rate $1/t_{\mathrm{rep}}$ until the total spin lifetime $T_1 = (T_{1,\mathrm{dark}}^{-1} + T_{1,\mathrm{op}}^{-1})^{-1}$ saturates (Fig. 4a). $T_{1,\mathrm{op}} = C\,t_{\mathrm{rep}}/P_{\mathrm{ex}}$ is the optical pumping time. $T_{1,\mathrm{dark}}$ increases with increasing magnetic field strength, in a manner that starkly diverges from the expected $B^{-4}$ behavior of spin-lattice relaxation (Fig. 4b) [28]. One possible explanation is flip-flop interactions with nearby Er³⁺ ions [34], which is consistent with the fact that $T_{1,\mathrm{dark}}$ varies sharply with the magnetic field angle and differs by a factor of 4 between the two ions studied [29]. In this device, the average separation between magnetically equivalent Er³⁺ ions is estimated to be 70 nm, such that the dipole-dipole interaction strength is around 1 kHz; the flip-flop rate is likely much slower because of spectral diffusion from nearby ⁸⁹Y nuclear spins. The longest relaxation time we observe, $T_{1,\mathrm{dark}} = 12.2 \pm 0.4$ s, is the longest electronic spin $T_1$ measured for Er³⁺, to the best of our knowledge [35].

In Fig. 4c, we demonstrate high-visibility Rabi oscillations between the ground-state spin sublevels, driven by a microwave magnetic field applied through a coplanar waveguide. We measure $T_2^{*} = 125 \pm 5$ ns (in a Ramsey experiment) and $T_2 = 3.3 \pm 0.2$ µs (Hahn echo), consistent with previous measurements of electron spin coherence in solid-state hosts with abundant nuclear spins [17,36]. Longer coherence times to enable storage of quantum states and the observation of coherent dynamics between interacting Er³⁺ ions may be achieved using dynamical decoupling. Ultimately, it will be beneficial to use alternative host crystals with lower nuclear spin content; Er³⁺ incorporation has been demonstrated in several candidates including CaWO₄, Si and TiO₂ [37].

Our results demonstrate that the optical properties of atomic systems are malleable through control of their local environment. Using a photonic nanostructure, we have achieved more than two orders of magnitude improvement in the emission rate and cyclicity of a single Er³⁺ ion, and demonstrated single-shot readout of its spin. Realistic improvements in the quality factor of the optical cavity and photon collection efficiency $\eta$ will enable another 20-fold enhancement in the emission rate and spin readout with $F > 0.99$ in 50 µs ($Q = 10^{6}$ and $\eta = 0.2$). These results represent a significant step towards realizing quantum networks based on single Er³⁺ ions. This measurement approach may also be extended to address many closely spaced Er³⁺ spins in the same device by exploiting small differences in their optical transition frequencies, providing a foundation for studying strongly interacting spin systems. Finally, this technique will enable a much broader class of atomic defects to be explored for quantum technologies.

We acknowledge helpful conversations with Charles Thiel, Nathalie de Leon and Alp Sipahigil. Support for this research was provided by the National Science Foundation.

FIG. 2. (b) The intensity autocorrelation $g^{(2)}$ is computed from the integrated fluorescence after each pulse. Odd values of $n$ (blue points) probe the correlation between pulses driving different transitions and are anti-bunched as a result of the spin staying in the same state over many excitation cycles. $g^{(2)}$ decays as $e^{-n/n_0}$ because of optical pumping, giving the cyclicity $C \approx n_0/2$. In this measurement, $C = 660 \pm 66$. $g^{(2)}(0) = 0.3$ is consistent with the signal-to-background ratio of 10. (c, d) The cyclicity varies dramatically with the angle of the external magnetic field, but is described by a model (red line) based on the independently measured g tensors (see text).
(e) The cyclicity decreases rapidly with the Purcell factor, demonstrated here by measurements on several ions (shown in different colors) with the detuning varied to change $P$.

Experimental Details

The devices and substrates used in this work are identical to those in Ref. 1, and are described in detail there. The key difference in the present work is that the measurements take place in a ³He cryostat with a base temperature of approximately $T = 540$ mK. At $T = 4$ K, spin dynamics are unobservable, presumably because of rapid spin-lattice relaxation in the ground or excited states [2]. The measurement of Rabi oscillations in Fig. 4c uses a slightly different device geometry that incorporates a microwave coplanar waveguide approximately 125 µm from the photonic crystal. Microwave pulses are generated using a signal generator (SRS SG386) modulated by an IQ mixer driven by an arbitrary waveform generator (Agilent 33622) and amplified to 21 W (MiniCircuits ZHL-30W-252+) before entering the cryostat. A low duty cycle is used to avoid heating the sample.

All data in this paper come from measurements on 3 different ions (Table 1). Figs. 1, 2, 3, S1 and S4 are based on "ion 1", while Figs. 4a,b, S2, S3 and part of Fig. 2e are based on "ion 2". Fig. 4c and part of Fig. 2e are based on "ion 3". Ions 1 and 2 are coupled to different photonic crystals on the same YSO substrate, while ion 3 is in a different YSO crystal. Out of the many ions coupled to each cavity, these particular ions were selected for careful study because they are strongly coupled to the cavity (large Purcell factor) and spectrally well separated from other ions. Importantly, no additional selection was made on the basis of cyclicity or spin readout fidelity. The transition from ion 1 to ion 2 was necessitated by the accidental destruction of the photonic crystal coupled to ion 1 (which was also damaged, lowering $Q$, after the measurements in Figs. 2a and 3 but before those in Fig. 2c,d), while the transition to ion 3 was motivated by a new device geometry allowing for microwave driving of the spins.

Theoretical model of cavity-enhanced cyclicity

In this section, we develop a theoretical model describing the cyclicity of the optical transitions measured in Fig. 2 of the main text. Calculating the cavity coupling strength for all four possible transitions A–D is not currently possible, because the relevant transition dipole moments of Er³⁺:YSO have not been measured and the precise position of the ion in the cavity is not known. As an alternative approach, we demonstrate that these four rates and their dependence on the magnetic field angle can be reduced to only two fit parameters that physically correspond to the decay rates of the AB and CD transitions into the cavity at a single magnetic field orientation. The agreement of this model with the data validates our interpretation of the underlying physics, and is also practically useful for predicting conditions where the cyclicity is maximized from a small number of measurements.

The ion couples to the cavity fields through the multipolar interaction $H_{\mathrm{int}} = -\mathbf{d}\cdot\mathbf{E} - \boldsymbol{\mu}\cdot\mathbf{B}$. Here, $\mathbf{d}$ is the electric dipole operator and $\boldsymbol{\mu}$ is the magnetic dipole operator. Typically, expressions for multipolar coupling include electric quadrupole (E2) terms at the same order as MD contributions. However, for Er³⁺, it has been calculated that the vacuum E2 emission rate is approximately 7 orders of magnitude smaller than the MD rate [3], justifying its exclusion in the present analysis.
We quantize the electric and magnetic fields in the cavity as
$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_0(\mathbf{r})\,(\hat a + \hat a^{\dagger}), \qquad \mathbf{B}(\mathbf{r}) = i\,\mathbf{B}_0(\mathbf{r})\,(\hat a - \hat a^{\dagger}).$$
The $i$ appearing in the expression for $\mathbf{B}$ reflects the fact that the magnetic field oscillates out of phase with the electric field in a standing-wave cavity. For nanophotonic cavities, the TE/TM polarization mode splitting is high enough that we only consider a single polarization mode. We then introduce four transition dipole operators $\sigma_i$, corresponding to the four transitions A–D. Each operator couples to the electric and magnetic fields via the dipole moments $\mathbf{d}_i$ and $\boldsymbol{\mu}_i$, respectively. The complete atom-photon interaction Hamiltonian is ($\hbar = 1$):
$$H_{\mathrm{int}} = -\sum_{i} \left(\mathbf{d}_i\cdot\mathbf{E} + \boldsymbol{\mu}_i\cdot\mathbf{B}\right)\left(\sigma_i + \sigma_i^{\dagger}\right). \tag{4}$$
Making the rotating wave approximation and taking the cavity to be resonant with the atomic transition (neglecting Zeeman splittings), we arrive at
$$H = \sum_{i} g_i\,\hat a^{\dagger}\sigma_i + \mathrm{h.c.}, \qquad g_i = \mathbf{d}_i\cdot\mathbf{E}_0 + i\,\boldsymbol{\mu}_i\cdot\mathbf{B}_0. \tag{5}$$
In the limit $\kappa \gg (\mathbf{d}\cdot\mathbf{E}_0,\ \boldsymbol{\mu}\cdot\mathbf{B}_0)$, the atom-cavity dynamics are simply Purcell-enhanced spontaneous emission into the cavity (the bad-cavity limit of cavity QED). The decay rate on transition $i$ into the cavity is given by
$$\Gamma_i = \frac{|g_i|^{2}}{\kappa}, \tag{6}$$
where $\kappa$ is the cavity linewidth. In previous work, we have shown that the contributions from electric and magnetic coupling to the cavity can be of similar magnitude [1], depending on the position of the ion within the cavity standing wave.

Reducing parameters using Kramers' theorem

Er³⁺ has an odd number of electrons, so in the absence of a magnetic field its eigenstates are all even-fold degenerate according to Kramers' theorem. In a low-symmetry environment like the Y site in YSO (C₁ point group), the degeneracy is minimal and all eigenstates are doublets [5]. The application of a magnetic field lifts the degeneracy of the doublets, resulting in the singly degenerate states $\{|\!\downarrow_g\rangle, |\!\uparrow_g\rangle, |\!\downarrow_e\rangle, |\!\uparrow_e\rangle\}$ shown in Fig. 1b. The states emerging from the same doublet are time-reversal conjugates of each other: $\Theta|\!\uparrow_j\rangle = |\!\downarrow_j\rangle$ and $\Theta|\!\downarrow_j\rangle = -|\!\uparrow_j\rangle$, where $\Theta$ is the antiunitary time-reversal operator. This has implications for the matrix elements of the electric and magnetic dipole operators, which are even and odd under time reversal ($\Theta^{-1}A\Theta = \pm A$), respectively [5]. In particular:
$$\langle\downarrow_e|\mathbf{d}|\downarrow_g\rangle = \langle\uparrow_e|\mathbf{d}|\uparrow_g\rangle^{*}, \tag{7}$$
$$\langle\downarrow_e|\mathbf{d}|\uparrow_g\rangle = -\langle\uparrow_e|\mathbf{d}|\downarrow_g\rangle^{*}, \tag{8}$$
$$\langle\downarrow_e|\boldsymbol{\mu}|\downarrow_g\rangle = -\langle\uparrow_e|\boldsymbol{\mu}|\uparrow_g\rangle^{*}, \tag{9}$$
$$\langle\downarrow_e|\boldsymbol{\mu}|\uparrow_g\rangle = \langle\uparrow_e|\boldsymbol{\mu}|\downarrow_g\rangle^{*}. \tag{10}$$
We can now revisit the atom-photon coupling Hamiltonian, Eqn. (5), writing the atomic operators $\sigma_i$ in the basis $\{|\!\downarrow_g\rangle, |\!\uparrow_g\rangle, |\!\downarrow_e\rangle, |\!\uparrow_e\rangle\}$. In going from the first to the second line, we have made use of Eqns. (7)–(10). In going from the second to the third line, we have taken the cavity fields to be real-valued, which is an excellent approximation as they are highly linearly polarized. The final expression, Eqn. (14), has the same form for the electric and magnetic dipole moments despite the difference in their transformation properties in Eqns. (7)–(10). Mathematically, this arises from the factor of $i$ in front of the magnetic field term in Eqn. (5). Physically, this means that interference of the electric and magnetic dipole decay channels into the cavity does not introduce any chirality (i.e., preference for decays from $|\!\uparrow_e\rangle$ relative to $|\!\downarrow_e\rangle$) into the atom-cavity system, which is intuitive for a standing-wave optical cavity.

Angle dependence of the cyclicity

We now turn to computing the cyclicity and its dependence on the orientation of the external magnetic field defining the spin quantization axis. The branching ratio between spin-non-conserving and spin-conserving decays through the cavity is $R = \Gamma_{\perp}/(\Gamma_{\parallel} + \Gamma_{\perp}) = |g_{\perp}|^{2}/(|g_{\parallel}|^{2} + |g_{\perp}|^{2})$, and the cyclicity is $C = 1/R$.
The spin eigenstates are given by the effective spin Hamiltonian
$$H = \mu_B\,\mathbf{B}\cdot\mathbf{g}\cdot\mathbf{S},$$
where $\mathbf{B}$ is the applied magnetic field, $\mathbf{S} = \{\sigma_x, \sigma_y, \sigma_z\}$ is a vector of Pauli matrices, $\mu_B$ is the Bohr magneton, and $\mathbf{g}$ is a symmetric, real 3×3 matrix. For Er³⁺:YSO, $\mathbf{g}$ is highly anisotropic in both the ground and excited electronic states, with principal components (14.65, 1.80, 0.56) for the ground state and (12.97, 0.85, 0.25) for the excited state [6]. Note that the orientation of the eigenvectors of $\mathbf{g}$ is different from the crystal axes, and also slightly different between the ground and excited states. A consequence of the anisotropy of $\mathbf{g}$ is that the spin eigenvectors are not generally parallel to the applied magnetic field. The eigenvectors at one magnetic field orientation $(\phi', \theta')$ can be expressed in terms of those at another orientation as $|\!\uparrow_g(\phi', \theta')\rangle = \alpha\,|\!\uparrow_g(\phi, \theta)\rangle + \beta\,|\!\downarrow_g(\phi, \theta)\rangle$. Denoting the matrix of atom-cavity coupling matrix elements in Eqn. (14) as $M(\phi, \theta)$, we can transform it to another basis according to
$$M(\phi', \theta') = U_e^{\dagger}\,M(\phi, \theta)\,U_g, \tag{18}$$
where $U_{g(e)}$ is the basis rotation relating the ground-state (excited-state) spin eigenbases at the two orientations, and evaluate the cyclicity as before. We note that a similar model was applied to the orientation dependence of the absorption of linearly polarized light by Nd³⁺:YVO₄ in Ref. 7. The applicability of this model to emission in our work stems from the restriction of the emission to a single polarization by the high-Purcell-factor coupling to a single-mode, non-degenerate cavity. This model is used to fit the data in Fig. 2 of the main text. The model is fit using $M(100°, 90°)$ as the fit parameters, and the result is $g_{\parallel} = e^{-i\,1.15}$, $g_{\perp} = 0.024\,e^{-i\,1.476}$. Since the cyclicity only depends on the ratio of these quantities, we constrain $|g_{\parallel}| = 1$.

Correction for free-space decay

As the Purcell factor is finite, free-space emission can influence the cyclicity when it is very large. We incorporate this by adding free-space decays to the branching ratio expression, such that the cyclicity becomes $C_0$ when the cavity coupling vanishes:
$$C = 1 + \frac{P_{\parallel} + 1 - 1/C_0}{P_{\perp} + 1/C_0}. \tag{19}$$
In this expression, $C_0 = 1 + \Gamma_{AB}^{0}/\Gamma_{CD}^{0}$ is the cyclicity in the absence of a cavity (with $\Gamma_i^{0}$ the decay rate on transition $i$ in the absence of a cavity), and $P_{\parallel/\perp} = g_{\parallel/\perp}^{2}/(\kappa\Gamma_0)$, where $\Gamma_0$ is the total decay rate out of the excited state in the absence of the cavity. For the ions studied here, with $P$ of order several hundred, the inclusion of this correction does not meaningfully improve the fit. We believe this occurs because the fit function can artificially increase $g_{\perp}$ by a small amount at the orientation of maximum cyclicity to account for the free-space decay, without significantly impacting the cyclicity at other angles. However, measuring the same ion with different Purcell factors (realized by changing the cavity detuning) makes the free-space decay evident and allows $C_0$ to be estimated (section 4).

Highest achievable cyclicity

For an atom with Purcell factor $P$ and bare cyclicity $C_0$, the highest possible cyclicity is $C = P C_0$, achieved when $g_{\perp} = 0$. This can always be realized by choosing the cavity polarization to be perpendicular to $\mathbf{d}_{\perp}$, for some spin quantization axis. In contrast, if the cavity polarization is fixed, it may be possible to tune the quantization axis angle to achieve $g_{\perp} = 0$. We consider three cases (explored numerically; see the sketch after this list):

1. If the g tensors in the ground and excited state are isotropic (or equal), it is always possible to choose an orientation of $\mathbf{B}$ where $g_{\perp} = 0$. This can be proved from the normality of the matrix $m$ formed by the upper-right 2×2 block of $M$.

2. If the g tensors for the ground and excited states are not simultaneously diagonalizable (having different axes), then it is not possible to achieve $g_{\perp} = 0$ in general. This is proved by the specific counterexample of Er³⁺:YSO, where we observe (in a numeric model) that $g_{\perp}$ can be made small but never quite zero for certain values of $M$.

3. If the g tensors for the ground and excited states are parallel but have different (non-zero) magnetic moments, it appears (numerically) to always be possible to make $g_{\perp} = 0$. We have not proven this mathematically.

Er³⁺:YSO falls into the second case because of the C₁ site symmetry of the Er site. However, the principal axes of the ground- and excited-state g tensors are only rotated by about 15° from each other [6], which may explain the success in achieving high cyclicity anyway. Many other host crystals for REIs and other defects have higher site symmetry, which forces the alignment of the g tensor axes to the crystal, and are therefore covered by the third case. Therefore, we expect that this technique is fairly general.

Calibration of $P_{\mathrm{ex}}$

For the measurements in Fig. 2 of the main text, we use saturating optical pulses to ensure $P_{\mathrm{ex}} \approx 0.5$ regardless of the magnetic field orientation. The approximate value of $P_{\mathrm{ex}}$ is confirmed by counting the emitted photons and comparing to the independently measured collection and detection efficiency (Fig. S1). We note that the finite duration of the excitation pulse may allow for $P_{\mathrm{ex}} > 0.5$ if a photon is emitted during the pulse and the ion is re-excited, and we have observed increased values of $C$ at certain angles when using optical π pulses instead. In that sense, the values of $C$ that we quote should be interpreted as lower bounds.

FIG. S1. $P_{\mathrm{ex}}$ calibration. Average number of collected photons following each excitation pulse during the measurement in Fig. 2c. The photon numbers are scaled by the independently measured collection and detection efficiency $\eta = 0.020$. Since the spin is on average unpolarized and only resonant with the laser half of the time, $P_{\mathrm{ex}} = 1/2$ (as defined here) corresponds to 1/4 photon per pulse.

Estimate of the intrinsic cyclicity of the Er³⁺:YSO transition

A central claim of our work is that the cyclicity of the ion is enhanced by the optical cavity. Since the cyclicity $C_0$ of the spin transitions in Er³⁺:YSO without a cavity has not been previously measured, there is no direct basis for comparison. In ensemble experiments, $C_0$ has been estimated to be around 10 using measurements of the optical pumping rate and a number of simplifying assumptions [8]. We have attempted to measure $C_0$ using a single ion by detuning the cavity as much as possible, which increases the fraction of decays into free space and yields a weighted average of the cavity-enhanced $C$ and $C_0$ [Eq. 19]. In Fig. S2c, we show several measurements of the same ion with increasing cavity detuning to decrease the total Purcell factor. The maximum cyclicity is strongly reduced: at the highest detuning, we observe a cyclicity of 53 ± 5, which sets an upper bound on $C_0$. In Fig. S2d, we plot the cyclicity vs. detuning together with a theoretical model for several values of $C_0$. From this model, we conclude that $C_0$ is less than 10, and likely closer to 2. Direct measurements at higher detunings are not possible because the count rate falls dramatically.
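A schematic numeric version of this basis-rotation recipe is sketched below; it is the kind of calculation behind the numerical observations in the three cases above, not the authors' code. The matrix conventions, basis ordering, and unit-field normalization are our assumptions, and the eigenvector phases returned by a numeric solver are fixed only up to convention, so only the moduli entering the cyclicity are meaningful.

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def zeeman_eigvecs(g_tensor, phi, theta):
    """Spin eigenvectors of H = B_hat . g . S for a unit field along (phi, theta)."""
    b = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    h = b @ g_tensor                       # effective field after the g tensor
    _, vecs = np.linalg.eigh(h[0] * SX + h[1] * SY + h[2] * SZ)
    return vecs                            # columns are the two spin eigenstates

def cyclicity(phi, theta, g_gnd, g_exc, M_ref, ref=(np.deg2rad(100), np.deg2rad(90))):
    """C = 1 + Gamma_parallel/Gamma_perp at orientation (phi, theta), given the
    2x2 coupling matrix M_ref fitted at the reference orientation."""
    Ug = zeeman_eigvecs(g_gnd, *ref).conj().T @ zeeman_eigvecs(g_gnd, phi, theta)
    Ue = zeeman_eigvecs(g_exc, *ref).conj().T @ zeeman_eigvecs(g_exc, phi, theta)
    M = Ue.conj().T @ M_ref @ Ug           # couplings in the rotated spin bases
    g_par = abs(M[0, 0])**2 + abs(M[1, 1])**2    # spin-conserving (A, B)
    g_perp = abs(M[0, 1])**2 + abs(M[1, 0])**2   # spin-flipping (C, D)
    return 1.0 + g_par / g_perp

# A reference matrix with the time-reversal structure assumed here, built from
# the fitted values quoted above:
# gp = 0.024 * np.exp(-1.476j)
# M_ref = np.array([[np.exp(-1.15j), gp], [-np.conj(gp), np.exp(1.15j)]])
```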
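To make the detuning analysis concrete, the following sketch fits cyclicity-versus-detuning data to a model of this kind. Since the displayed form of Eq. 19 above is our reconstruction from its stated limits, and the data arrays and starting values here are placeholders, this should be read as an illustration of the fitting procedure rather than the exact analysis used for Fig. S2d.

```python
import numpy as np
from scipy.optimize import curve_fit

def purcell(delta, p_max, kappa):
    """Lorentzian roll-off of the Purcell enhancement with cavity detuning."""
    return p_max / (1.0 + (2.0 * delta / kappa) ** 2)

def cyclicity_vs_detuning(delta, c0, p_par_max, p_perp_max, kappa):
    """Free-space branching (set by C0) diluted by detuning-dependent Purcell
    enhancement of the spin-conserving and spin-flipping channels."""
    p_par = purcell(delta, p_par_max, kappa)
    p_perp = purcell(delta, p_perp_max, kappa)
    return 1.0 + (p_par + 1.0 - 1.0 / c0) / (p_perp + 1.0 / c0)

# Fitting placeholder arrays delta_data, c_data, with kappa fixed by the
# measured cavity linewidth:
# fit = lambda d, c0, pp, px: cyclicity_vs_detuning(d, c0, pp, px, kappa)
# (c0, pp, px), _ = curve_fit(fit, delta_data, c_data, p0=(5.0, 500.0, 0.5))
```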
Single-shot measurement fidelity

Two factors contribute to the ML fidelity: the statistical error set by the signal-to-background ratio during the collection period (SNR), and the probability to decay to the wrong state before enough photons are collected, $m/(\eta C)$, where $m$ is the number of photons needed to reach the target fidelity $F_T$. $m$ is related to the SNR as $F_T = \mathrm{SNR}^{m}/(\mathrm{SNR}^{m} + 1)$. In our experiment, the SNR is typically high enough (14 for ion 1, limited by a timing error, and around 20 for ions 2 and 3, limited by dark counts) that by the time a single photon is detected, the error from the finite cyclicity is larger than the statistical error. In this regime, a nearly optimal strategy is to infer that the spin state is $|\!\uparrow_g\rangle$ whenever an "A" photon is detected, and $|\!\downarrow_g\rangle$ whenever a "B" photon is detected. The average measurement fidelity is $1 - 1/(\eta C)$, and the average measurement duration is $t_{\mathrm{meas}} = 1/(P_{\mathrm{ex}}\eta C)$.

FIG. S2. Extracting the bare ion cyclicity, $C_0$. (a,b) Cyclicity measurements on ion 2 (situated in a different cavity) show a similar orientation dependence to ion 1. Each of the two crystallographically inequivalent Y sites in YSO has two possible orientations, related by a C₂ rotation about the b axis. Ion 2 is in the opposite site from ion 1, so we have inverted the θ axis to make the apparent dependence the same. (c) Measurement of cyclicity for the same ion 2 at several different Purcell factors, achieved by detuning the cavity from the atomic transition by an amount $\Delta_{\mathrm{cav}}$. The data is fit to Eq. 19. Here, $|B| = 112$ G and $\theta = 90°$. (d) Cyclicity at $(\phi, \theta) = (100, 90)°$ vs. cavity detuning. The model is Eq. 19, where $P_{\parallel}$ and $P_{\perp}$ acquire a detuning dependence $P(\Delta_i) = P_{\mathrm{max}}/\bigl(1 + (2\Delta_i/\kappa)^{2}\bigr)$ based on the known Zeeman splittings of the four transitions. Since the decay rates out of the two excited states are no longer equal when the cavity is detuned (i.e., $\Gamma_A \neq \Gamma_B$), Eq. 19 is averaged over the two excited states. A fit yields $C_0 = 2 \pm 3$.

Based on this model, we can project how improved devices might lead to improved measurements. Assuming the same demonstrated cyclicity (1500), fiber-waveguide coupling and photon detection efficiency, but improving the cavity to realize $Q_{\mathrm{int}} = 10^{6}$ with critical coupling ($\kappa_{\mathrm{wg}} = \kappa_{\mathrm{int}}$) and using optical π pulses to increase $P_{\mathrm{ex}}$ to 1, it should be possible to realize an average measurement fidelity of 0.996 in an average time of 50 µs. This assumes that the SNR remains dark-count limited as the Purcell factor is increased.
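The error budget just described can be turned into a small projection utility. The function below is our own paraphrase of that model, with parameter names of our choosing; it simply combines the photon number implied by the target statistical fidelity with the cyclicity error and the photon collection rate.

```python
import numpy as np

def readout_projection(eta, C, p_ex, t_rep, snr, f_target=0.996):
    """Project single-shot readout performance from the error budget above:
    photons m set by the target statistical fidelity, pumping error m/(eta*C),
    and a collection rate of p_ex*eta photons per pulse."""
    # F_T = SNR^m / (SNR^m + 1)  =>  m = log(F_T / (1 - F_T)) / log(SNR)
    m = np.log(f_target / (1.0 - f_target)) / np.log(snr)
    fidelity = 1.0 - m / (eta * C)        # dominated by the finite cyclicity
    t_meas = m * t_rep / (p_ex * eta)     # average time to collect m photons
    return fidelity, t_meas
```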
Intrinsic spin relaxation

The spin relaxation rates in Fig. 4a are fit to a model of the form $T_{\mathrm{SR}}^{-1} = T_{1,\mathrm{dark}}^{-1} + P_{\mathrm{ex}}/(t_{\mathrm{rep}} C)$, where the repetition period $t_{\mathrm{rep}}$ is varied to change the optical pumping strength. The intrinsic relaxation rate $T_{1,\mathrm{dark}}$ varies strongly with the magnetic field amplitude, and the cyclicity $C$ also has a weak dependence. The latter is explained by the larger Zeeman shift of the spin-non-conserving transitions CD (~13.1 MHz/Gauss) compared to the AB transitions (~2.5 MHz/Gauss), shifting the former out of resonance with the cavity more quickly (Fig. S3). From this data, we can also affirm that the selective Purcell enhancement of the spin-conserving transition does not primarily arise from detuning the CD transitions, as in Ref. 9, although this effect does provide an additional factor of 2–3 at the highest magnetic fields.

The magnetic field dependence of $T_{1,\mathrm{dark}}$ disagrees markedly with predictions based on the measured coefficients for the Raman, Orbach and direct processes for Er³⁺:YSO [10]. At $T = 0.5$ K, these are dominated by the direct process, which should have a magnetic field dependence $T_1 \propto B^{-4}$ arising from the combination of the frequency dependence of the phonon density of states and the magnetic field dependence of the spin-phonon coupling [5]. The measured values display a $B^{1/2}$ dependence at low fields, transitioning to a more rapid increase around $B = 50$ Gauss before saturating at 200 Gauss. We note that the point where $T_{1,\mathrm{dark}}$ saturates is roughly consistent with the onset of the direct process, and that the disagreement could be attributed to anisotropy in the direct process rate. Anomalous magnetic field dependence in the low-field relaxation of rare earth ions has been previously observed in electron paramagnetic resonance [11] without a definitive explanation. Concentration-dependent spin relaxation has been observed in spectral hole burning experiments [12] and attributed to flip-flop interactions between nearby Er ions. This is a likely explanation for our observations, which is bolstered by the difference in $T_{1,\mathrm{dark}}$ for different ions (which may arise from the stochastic arrangement of ions in this low-density sample, [Er] ≈ 300 ppb) and a strong anisotropy in $T_{1,\mathrm{dark}}$ (Fig. S4).

FIG. S3. From the measurements in Fig. 4a,b of the main text, we extract the cyclicity of the optical transitions at a fixed magnetic field orientation for varying field amplitudes. The data is fit to the model in Eq. 19, incorporating the detuning dependence of the Purcell enhancement as described in the caption of Fig. S2. $C_0$ fits here to 5 ± 1.

7 Additional data analysis

7.1 Correction of the cyclicity estimate at small $n_0$ values

In Fig. 2b and thereafter, the intensity autocorrelation $g^{(2)}$ has been fitted to an exponential function, and the decay constant $n_0$ is utilized to extract $C = P_{\mathrm{ex}} n_0$. This expression for $C$ is only valid when $n_0 \gg 1$, because of the discrete time steps in the measurement. We estimate the value of $C$ more accurately by using the following expression:
$$C = \frac{P_{\mathrm{ex}}}{1 - e^{-1/n_0}},$$
which reduces to the previous expression for $C$ at large $n_0$ values.

7.2 Extracting $n_0$ from $g^{(2)}$ at low optical pumping rates

The spin relaxation time is extracted from fits to $g^{(2)}$ measurements as described in Fig. 2b. When these time constants are longer than 1 s, the even- and odd-offset $g^{(2)}$ time constants begin to differ. We believe this results from spectral diffusion of the optical transitions. Under the assumption that this is uncorrelated with the spin dynamics and acts identically on the A and B transitions, we find that the spin time constant can be isolated by fitting an exponential to the difference of the even- and odd-offset traces.
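As an illustration, the following sketch combines the two corrections of this section: it isolates the spin time constant from the difference of the even- and odd-offset traces, then applies the small-$n_0$ correction above to obtain the cyclicity. The interpolation step, array names, and fit defaults are our assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def spin_n0(n_even, g2_even, n_odd, g2_odd, p_ex=0.5):
    """Isolate the spin time constant by fitting an exponential to the
    difference of the even- and odd-offset g2 traces, then apply the
    small-n0 correction to obtain the cyclicity."""
    diff = g2_even - np.interp(n_even, n_odd, g2_odd)   # cancels spectral diffusion
    model = lambda n, a, n0: a * np.exp(-n / n0)
    (a, n0), _ = curve_fit(model, n_even, diff, p0=(1.0, 100.0))
    C = p_ex / (1.0 - np.exp(-1.0 / n0))                # -> p_ex * n0 for n0 >> 1
    return n0, C
```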
Resonant thermal energy transfer to magnons in a ferromagnetic nanolayer

Energy harvesting is a concept which makes dissipated heat useful by transferring thermal energy to other excitations. Most of the existing principles are realized in systems which are heated continuously. We present the concept of high-frequency energy harvesting, where the dissipated heat in a sample excites resonant magnons in a thin ferromagnetic metal layer. The sample is excited by femtosecond laser pulses with a repetition rate of 10 GHz, which results in temperature modulation at the same frequency with amplitude ~0.1 K. The alternating temperature excites magnons in the ferromagnetic nanolayer, which are detected by measuring the net magnetization precession. When the magnon frequency is brought onto resonance with the optical excitation, a 12-fold increase of the amplitude of precession indicates efficient resonant heat transfer from the lattice to coherent magnons. The demonstrated principle may be used for energy harvesting in various nanodevices operating at GHz and sub-THz frequency ranges.

Reviewer #2 (Remarks to the Author):

1. Most of the results show tuning the B field can lead to peaks in the Kerr rotation signal. If the B field is at a value that is off-resonance, there is little signal, but if it is on-resonance, there are peaks. I am almost lost on why you need a pulsed heating source at a specific frequency. What if you match your frequency to 20 GHz, which seems to have the largest amplitude in Fig. 1a? I actually only see a peak at 20 GHz there. To criticize lightly, the paper does not explain the rationales well.

2. I get that the results show heating can lead to a larger magnetization precession amplitude, which presumably relates to the population of magnons. But how did this happen at a mechanistic level? Which phonon modes couple to which magnon modes? Is the electron playing a role here? It seems that there is phonon-electron coupling in the picture as well. The phenomenological observation of heating leading to magnetization enhancement has been studied extensively. In my opinion, the use of pulsed light is just another way of showing the same phenomena.

3. A magnon has a dispersion, meaning there is a spectrum of frequencies with different wavevectors. How is this playing a role in the picture?

4. The energy conversion efficiency is extremely low. That is fine, but the authors started the paper by discussing energy conversion.

Reviewer #3 (Remarks to the Author):

This manuscript develops the idea of thermal energy harvesting by channeling it into magnon excitations in ferromagnetic metal thin films. The idea is novel and the paper deserves to get proper exposure through publication in Nature Communications. However, improvements are needed before it becomes suitable for publication. A major one is a discussion of (quantum) efficiencies in channeling resonant energy through modulated optical heat injection as compared to microwaves, a method that has been well known for decades. In other words, under what conditions can the authors imagine resonant optical heating having advantages and possibly becoming a good candidate for device applications? A clarification is needed with regard to the feasibility of exciting magnon resonances, which are based on the Landau-Lifshitz-Gilbert equation (1), so that the resonance condition can be attained. Typically fs lasers come with fixed repetition rates, so the generated temperature field can match the natural magnon frequencies only under some tunability conditions. This should be made clear in the text.
As a follow-up question, does thermal-field resonance have to excite magnon resonances only at the peak, or can sufficient energy transfer occur at off-peak magnetic field values? The occluded temperature field of Eq. (2) involves a stationary background ΔT. This will grow with successive pulsing of the laser as more thermal energy is pumped into the system, including the ferromagnetic thin film. The authors cite 42 K background heating, which sounds quite high. What (counter-efficiency) role does the growing background play in the operation of the magnon resonance system, and how would performance improve with efficient removal of that heat? It was already mentioned that increasing ΔT shifts the resonance field Bn (and no doubt broadens the peak due to increased random magnetic motion). Are there interface conditions between the thin film and the surrounding space to be taken into account for optimization?

Also, following resonant magnon motion (i.e. absorption of the thermal energy by the magnon system resulting in resonant motion), does there follow further conversion to heat, likely at the surfaces of the thin layer where collective magnon motion may be degraded due to their spatial discontinuity? This is an issue of adiabaticity, I reckon. The authors anticipate this energy transfer concept "to generate magnons in ferromagnetic layers deposited on processors and other microchips operating at GHz and sub-THz clock frequencies." What lossy mechanisms can they propose that might compromise the efficiency of resonant amplitude through further thermal dissipation? Wouldn't additional dissipation terms have to be added to Eq. (1) to predict the inevitable further heat diffusion losses?

Minor issues: In Fig. 3b the 15-fold increase of AB is claimed against the baseline, horizontal dashed line, which, however, looks artificially drawn and not based on data. It can be noted that small variations in the exact position of the baseline can change the 15-fold figure drastically: for example, a slight rigid upward translation to meet the tail end of the peaked distribution would reduce that figure anywhere down to between 8-fold (left boundary) and 4-fold (right boundary). On line 110, does KR signal stand for Kerr? On line 136, t0 should be changed to τ0. Please update.

We are grateful to all reviewers for their useful comments and questions, which helped us to strengthen our case. In particular, we have performed additional experiments on a thicker film, revised the manuscript accordingly, and added four Supplementary Notes. Below we answer the reviewers' questions in detail and refer to the revisions made in the manuscript.

Reviewer #1 (Remarks to the Author):

This manuscript describes an interesting concept and experiment on resonantly harvesting thermal energies (temperature oscillations) by using magnon mode excitations. Experimental verification of the magnon excitation is demonstrated via the time-resolved Kerr effect. Data interpretation is given by using the calculated temperature variation in the context of the micromagnetic LLG theorem. The result is overall interesting and may present a broad impact on the field of ultrafast dynamics and spintronics. The work may ultimately be considered for publication in Nature Communications. However, there are some unconvincing aspects related to the experiment. The theoretical description is also quite preliminary. These aspects should be improved before the manuscript can be accepted.

We are thankful to the reviewer for finding the results to be interesting.
We accept the reviewer's comments about unconvincing aspects of the experiment and the theoretical description. We have performed additional experiments to make the conclusions more convincing and extended the presentation of the theoretical part in the main text of the manuscript. We have added four Supplementary Notes, referenced in the main text accordingly.

Question 1. The temperature calibration is rather unsatisfactory. It is basically estimated from a calculation without any additional validation. As far as I understand, this experiment was performed at room temperature. Was the experiment performed in a vacuum chamber or just ambient? The authors claim a global temperature rise of ~10–42 K and an oscillation of ~1 K. This sounds rather challenging to control and also to be able to excite coherent magnons. The authors should discuss the stability and temperature background. The values for the temperature estimates should also have error bars with them.

The experiment was performed at ambient conditions at room temperature, which is now mentioned in the revised version. We have improved the theoretical analysis for estimation of the background temperature, ΔT. As experimental validation, we have obtained the values of ΔT from the shift of the resonance magnetic field (Fig. 3a) induced by the background heating and added the corresponding discussion on page 5. The details of the calculations and the validation procedure are also presented in the Supplementary Notes 1 and 2. The highest estimated value of ΔT remains well below the Curie temperature of Galfenol and, thus, background heating does not affect the amplitude of coherent magnons at the resonance conditions, which is determined by the differential temperature, δT. Its value is validated by comparison of the measured (symbols in Fig. 3d) and calculated values of the precession amplitude on page 7 of the revised manuscript. The error bars in Fig. 3b do not exceed much the size of the symbols, and the small fluctuations from the linear dependence of the measured precession amplitude on power are clearly seen in Fig. 3d.

Concerning stability: we agree that in a real device the background temperature can vary with time due to variations of the used power, environment and heat sink. In the revised version we have estimated a change of the precession amplitude of 3% for a 20% change of the background temperature ΔT (page 7). The errors for the experimentally obtained ΔT are ~10% and are given in the Supplementary Note 2.

Question 2. As far as I know, the experiment is performed at a single frequency, making it less convincing. What technical limitation prevents the authors from trying other frequencies? If not, I suggest the authors scan a small range of frequencies for their laser heating drive (and subsequently detect Kerr dynamics and then FFT). This is probably not critical for the thermal part, but will be very helpful in understanding the magnon part. To validate that the spin dynamics are coherent, a dispersion (e.g. Kittel) relation should be obtained, including a discussion of the linear and nonlinear regimes based on the power dependence.

(a) The 10 GHz laser is a unique laser system and the repetition rate cannot be tuned. In this respect our experiments are similar to the first microwave experiments on magnetic resonance, where the frequency of the electromagnetic waves was fixed and all resonances were detected by tuning the resonance frequencies with an external magnetic field.
However, in our experiments the thermal excitation is not harmonic and, thus, includes several frequencies: the fundamental (10 GHz) as well as its higher harmonics (20 and 30 GHz) (see the bottom panel of Fig. 1c (revised)). As a result, there are several resonant magnetic fields where the magnon frequency coincides with one of the excitation harmonics and the precession amplitude is drastically increased (Fig. 3a). No peaks in the field dependence of the precession amplitude are detected in the case of single-pulse excitation, where the amplitude gradually decreases with increasing B.

(b) The spin dynamics are coherent in the sense that all magnons are driven by the periodic thermal excitation with fixed, i.e. not random, phases. Harmonic oscillations are observed only at the resonance conditions, i.e. at the values of magnetic fields where f = 10, 20 or 30 GHz (see the f(B) dependence in Fig. 1a), which also confirms that the precession phase is preserved for a long time. We have added a paragraph about the magnon dispersion on page 6 of the revised manuscript and give its detailed description in the Supplementary Note 4. There we show that the main contribution to the measured signals is given by the fundamental magnon mode (with zero wave vector). To illustrate the destructive role of magnons with finite wave vectors, we have examined the effects in a thick (105-nm) Galfenol layer, in which the broadened magnon spectrum consists of spectrally close-lying higher-order magnon modes. This leads to fast dephasing of the thermally driven magnetization precession and decreases the efficiency of resonant energy transfer. The results for the thick layer are presented in the Supplementary Note 4.

(c) Nonlinear effects, which could accompany the high-amplitude magnetization precession, are not observed in our study. The resonant signals remain harmonic and their amplitudes depend linearly on W in the whole range of excitation powers. However, the shift of the resonance frequency with the increase of ΔT affects the linear dependence of the precession amplitude on power at fixed magnetic field. This temperature-induced "nonlinearity" has been known since the conventional microwave experiments on ferromagnetic resonance [Phys. Stat. Sol. 8, K89 (1965)]. A corresponding statement about nonlinearity has been added to the manuscript on page 7.

Question 3. The authors claim that their technique can work in the GHz and even sub-THz regime. They should provide a more clearly justified bandwidth. This will involve discussion of their setup, material systems, and other potential limiting factors against extending the bandwidth at both ends (lower GHz or upper THz).

We have added a discussion about bandwidths on page 8. To reach high, sub-THz frequencies, the corresponding bandwidths should be available for both the temperature and the magnons.

a) Temperature modulation bandwidth. Here we refer to the picosecond acoustic experiment in Ref. [4], where the temperature in a 12-nm Al film was modulated at frequencies up to 400 GHz. The thermal properties of ferromagnetic metals do not differ much from those of normal metals, and thus such a modulation is possible in thin films. Theoretically, for fixed parameters the amplitude of the temperature oscillations is proportional to the power and inversely proportional to the frequency, up to sub-THz frequencies.

b) Magnon bandwidth. In Ref. [12] it is shown that in thin (Fe,Ga) films magnons with frequencies up to 100 GHz may be excited at a magnetic field of 3 T.
Moreover, novel metallic ferrimagnetic materials (e.g. Mn-Te compounds) possess narrow magnon resonances at frequencies up to hundreds of GHz at low magnetic fields [29].

c) From the lower end, there is no limit for the temperature modulation, while for magnons the lowest frequency varies from hundreds of MHz (e.g. for garnets) up to several GHz in metallic ferromagnets.

Question 4. It seems that the underlying mechanism for such temperature-driven spin dynamics lies in the strong modulation effect of the magnetic anisotropy, which behaves like an effective field. The authors should add quantitative discussions on the magnitude and effectiveness of such modulation, using their material therein.

The referee is absolutely right that the underlying mechanism is modulation of the magnetic anisotropy. This analysis is described in detail in Ref. [14] for the parameters of the (Fe,Ga) layer used in the present work. In the revised version, we have extended the discussion in the main text on page 6, adding Eq. 2 for the free energy, and the Methods section, where an additional Eq. 5 and more details necessary for the analysis are presented.

In addition, is the FeGa alloy chosen for some unique material concerns? Can other ferromagnets work as well as FeGa? If not, why?

Yes, other ferromagnetic and ferrimagnetic materials will work as well. The critical properties for the suggested concept are a pronounced magneto-crystalline anisotropy and a long lifetime of the magnetization precession. Galfenol is an actively studied novel ferromagnetic material which has these properties. It is a good testbed for demonstrating the general concept of transferring parasitic heat modulated at high frequency to magnons. In the revised version we have added a paragraph about other materials on page 8.

Question 5. In addition to the temperature-induced anisotropy modulation, thermal gradient torques (via the spin-Seebeck effect, for example), if excited, may also contribute to the magnon excitation. This is a different mechanism where true spin torques are actually generated owing to the spin-orbit coupling (of FeGa, for example). There seem to be no relevant discussions ruling this out in the present manuscript.

Indeed, the diffusion of hot carriers from the excited area as well as the thermal gradient inside the ferromagnetic film (we estimate a 4% temperature difference across the film) can induce ac spin transfer modulated at the laser repetition rate, f0 [15,16]. However, due to the uniform magnetization in a single thin ferromagnetic layer, this transfer does not produce a torque on the magnetization [17]. The dominant role of the thermal modulation of the magneto-crystalline anisotropy is also confirmed by the dependence of the precession amplitude excited by a single laser pulse on the direction of the external magnetic field. This dependence demonstrates a four-fold in-plane symmetry with a slight uniaxial distortion, which corresponds to the magneto-crystalline anisotropy of the studied layer. The corresponding discussion is given on page 5 of the revised manuscript.

Reviewer #2 (Remarks to the Author):

This paper experimentally studied magnon generation using pulsed laser heating of a Fe0.81Ga0.19 film grown on a GaAs substrate. The authors used a pump-probe method, where a periodic laser pulse is used to heat up the sample at a frequency of 10 GHz and another laser at 1 GHz is used to probe the Kerr rotation signal to infer the magnon information.
They then used a phenomenological model to link the magnetization precession amplitude to the temperature variation excited by the pump laser. The paper is written in a way that is very difficult to understand and may appeal only to domain experts instead of the broader audience which Nat. Comm. emphasizes. This is my feeling, but I am not using it as a criterion to judge the quality of the paper. It is the job of the editor.

We are thankful to the referee for this general comment. We have revised the paper, answering all questions of the referees, and added four Supplementary Notes. We hope that the revised manuscript becomes suitable for the broader audience which Nature Communications addresses.

Question 1. Most of the results show tuning the B field can lead to peaks in the Kerr rotation signal. If the B field is at a value that is off-resonance, there is little signal, but if it is on-resonance, there are peaks. I am almost lost on why you need a pulsed heating source at a specific frequency. What if you match your frequency to 20 GHz, which seems to have the largest amplitude in Fig. 1a? I actually only see a peak at 20 GHz there. To criticize lightly, the paper does not explain the rationales well.

In general, the excitation should be periodic, but not necessarily pulsed. To explain the observed effect more clearly, we have extended the description of the analysis, adding the equation for the free energy (Eq. 2) and two coupled equations (Eq. 5) in the Methods section, which describe the excitation of magnetic resonance. In the text on page 3, we compare the physics of magnon excitation with the excitation of a simple one-dimensional oscillator with eigenfrequency f (i.e. the analog of the fundamental magnon frequency) by an external oscillating force. The version of the Landau-Lifshitz-Gilbert equation, Eq. (1), in the main text of the manuscript describes the same physics, but in a more complicated system due to the precessional motion of the magnetization vector m in three-dimensional space. Our thermal excitation is a periodic but not harmonic force (see the spectrum in Fig. 1c (revised)). Its spectrum includes the fundamental frequency (f1 = 10 GHz) and higher harmonics (f2 = 20 GHz, f3 = 30 GHz, etc.). The sum of all these harmonics forms an external oscillating force acting on the oscillator. Obviously, the oscillator driven by such a force has the same frequency components, but the amplitude of each harmonic depends on the detuning between the oscillator eigenfrequency, f, and the excitation harmonic fi. In our experiment, the oscillator (magnon) frequency, f, can be tuned by the external magnetic field. To emphasize this, we have added to the revised Fig. 1a several other spectral lines, which characterize the magnon spectrum at several values of the applied external magnetic field. When the frequency f coincides with one of the exciting harmonics, the amplitude strongly increases, and this effect is clearly seen in our experiment. In the off-resonance regime the magnon precession is not harmonic (see the right panel of Fig. 2) and the amplitude is much smaller than in the resonance case. The analogy with the excitation of a simple one-dimensional oscillator is valid for the magnons in our thin (Fe,Ga) film because there is only a single relevant magnon mode, the frequency of which depends on B as shown in Fig. 1a.
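To illustrate the oscillator analogy, the sketch below sums the Lorentzian responses of a damped oscillator to the first few harmonics of a 10 GHz pulse train. The harmonic weights, damping, and normalization are schematic choices of ours, not the parameters of Eq. (1); the point is only that the response peaks whenever the eigenfrequency (tuned by B) crosses 10, 20 or 30 GHz.

```python
import numpy as np

def precession_response(f0, f_rep=10e9, n_harmonics=5, q_factor=50.0):
    """Schematic steady-state response of a damped oscillator with
    eigenfrequency f0 driven by the harmonics of a 10 GHz pulse train."""
    w0 = 2 * np.pi * f0
    gamma = w0 / q_factor                      # damping rate (schematic)
    total = 0.0
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k * f_rep
        a_k = 1.0 / k                          # illustrative harmonic weights
        # Lorentzian response of the driven, damped harmonic oscillator
        total += (a_k / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2))**2
    return np.sqrt(total)

# Sweeping the magnon eigenfrequency (as tuned by B) across the drive harmonics
# produces resonance peaks at 10, 20 and 30 GHz:
freqs = np.linspace(5e9, 35e9, 2001)
amps = np.array([precession_response(f) for f in freqs])
```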
Indeed, heating of a ferromagnetic layer increases the thermal magnon population. However, this is true for noncoherent magnons (with random phases), and this effect can be used for the generation of spin currents (see, for instance, [Nature Mater. 11, 391 (2012)] for details). In our experiments the thermally excited magnetization precession is coherent, and thus the phases of all excitations (thermal and magnons) are fixed and not random. In general, the excitation should be periodic, but not necessarily pulsed. For continuous excitation, the magnon phase is random and no resonances will be observed. In our experiment, the role of the electrons is important at the first stage of the optical excitation. The optical excitation heats the electrons and they transfer the energy to the lattice during a time of ~1 ps. Due to this short time, the temperature is modulated in time and the magnetization precession, the period of which is ~100 ps, is coherent. In the revised version this is mentioned on page 8.

Question 3. A magnon has a dispersion, meaning there is a spectrum of frequencies with different wavevectors. How is this playing a role in the picture?

The excitation spot is ~17 µm in diameter and only magnons with q < 10⁶ m⁻¹ are excited coherently. In a film whose thickness h is much smaller than the excitation spot, with hq ≪ 1, the magnon dispersion in this range of q is negligible and lies within the spectral width of the fundamental magnon mode. The higher-order exchange magnon modes quantized along the film normal have significantly higher frequencies (>160 GHz), which are out of the spectral range of the thermal excitation and, therefore, are not excited. Thus, all excited magnons have the frequency of the fundamental mode, which is mentioned in the revised version on page 6. The analysis of the magnon spectrum and the contribution of magnons with finite wave vectors, as observed in the experiment with a thick (Fe,Ga) film, are presented in the Supplementary Note 4. Please see also our reply to Question 2 of Reviewer 1.

Question 4. The energy conversion efficiency is extremely low. It is fine, but the authors started the paper with discussing energy conversion.

We agree with the reviewer about the confusing start of the paper. We took out the first sentence of the abstract. We would like to mention that energy harvesting aims not only at realizing efficient cooling but also at transferring parasitic heat to other kinds of energy, i.e. coherent magnons in our case, which may be used for other purposes.

Reviewer #3 (Remarks to the Author):

This manuscript develops the idea of thermal energy harvesting by channeling it into magnon excitations in ferromagnetic metal thin films. The idea is novel and the paper deserves to get proper exposure through publication in Nature Communications. However, improvements are needed before it becomes suitable for publication.

We are thankful to the reviewer for studying the manuscript and we agree that improvements were needed.

Question 1. A major one is a discussion of (quantum) efficiencies in channeling resonant energy through modulated optical heat injection as compared to microwaves, a method that has been well known for decades. In other words, under what conditions can the authors imagine resonant optical heating having advantages and possibly becoming a good candidate for device applications?

It is not easy to compare the efficiency of resonant thermal and microwave excitation of coherent magnons, because both methods depend on many technical issues, e.g.
the design of the microwave cavity, the efficiency of the transmission line, the efficiency of the heat sink, etc. A direct comparison of the precession amplitudes achieved in our experiment and in the experiment with a conventional coplanar microwave waveguide [26] for the same power shows that the efficiency of the resonant thermal excitation is one order of magnitude lower. However, the effect of thermal modulation can be expressed as a virtual microwave field acting on the ferromagnet. The amplitude of this field in our experiment is 10–100 µT, which is comparable with the microwave induction generated by a microwave cavity [26]. By increasing the thermal modulation amplitude, it is possible to achieve efficiencies comparable with planar microwave microstructures. In the revised version, we give a direct comparison of our concept with conventional microwave and acoustic methods on page 7. We also discuss a way to improve the dynamical thermal efficiency ζ = δT/ΔT, which determines the fraction of the useful oscillating temperature δT relative to the total parasitic heating ΔT. This discussion has been added to page 7 of the manuscript and, in detail, to the Supplementary Note 3.

Question 2. A clarification is needed with regard to the feasibility of exciting magnon resonances, which are based on the Landau-Lifshitz-Gilbert equation (1), so that the resonance condition can be attained.

The reviewer is right that sufficient energy is transferred to the magnons only at the resonance condition, which is achieved by tuning the external magnetic field. The width of the feature in Fig. 3b answers this question directly for the case of pulsed optical excitation. In the revised manuscript, the description of the resonant conditions is given in more detail, both on a qualitative and a quantitative level, in the main text of the manuscript (pages 3 and 6) and in the Supplementary Note 4. It is important that these conditions are tolerant to variations of the background temperature ΔT, which is mentioned in the revised manuscript on page 6. Please see also the answer to Question 1 of Reviewer 1.

Question 3. The occluded temperature field of Eq. (2) involves a stationary background ΔT. This will grow with successive pulsing of the laser as more thermal energy is pumped into the system, including the ferromagnetic thin film.

Before answering the reviewer's comments, we would like to mention that the heat transferred to the magnons is very small relative to the heat generated as a result of the optical excitation. The main idea of our work is to exploit this parasitic heat for the resonant excitation of magnons, and we successfully show that the efficiency of the heat transfer is high enough to exploit the generated coherent magnons in practice. In the comments, the reviewer raises questions concerning the decay of coherent magnons and the emission of phonons during this decay. We agree that these processes could play an important role in the magnon dynamics, and we give detailed answers below.

(a) The reviewer is right that the temperature background grows with successive pulsing. This heating ultimately reaches a stationary value ΔT. This value is still much lower than the Curie temperature and does not significantly change properties such as the decay of the magnetization. This is supported by the experimental fact that the amplitude depends linearly on the excitation power at the resonance conditions (see Fig. 3d). In the revised version, the details of the calculation of ΔT are presented in the Supplementary Note 2.
We have validated the calculated values of ΔT using the experimentally measured resonance shifts mentioned by the reviewer and obtain reasonable agreement with the calculation. We introduce the dynamical resonant efficiency ζ = δT/ΔT, which gives the fraction of the oscillating temperature δT relative to the background heating ΔT. We discuss the dependence of δT/ΔT on various parameters and on the design of optical and electrical devices. These details are presented in the Supplementary Note 3.

(b) The reviewer is right that heat diffusion losses of the magnons are, in general, inevitable. These losses are driven by the coherent field gradients and usually grow with increasing average sample temperature. However, our experimental observations have evidenced a linear dependence of the coherent magnon amplitude on the pump laser power and the average temperature in the ranges realized in our experiments. This observation (equivalent to the observation of a temperature-independent width of the coherent magnon spectral line) indicates that the attenuation of the coherent magnons in our experiments is temperature independent.

(c) The reviewer is also right when suggesting that the highest gradients of the coherent ac magnetic field generated by the fundamental, i.e. spatially homogeneous inside the film, magnetic mode are expected at the interfaces of the magnetic film with the capping metal layer and the semiconductor substrate. Our estimates demonstrate that the regime of the coherent oscillation in the film is isothermal rather than adiabatic. The wavelength of the thermal wave at the considered frequencies of 10–30 GHz is longer than the total thickness of the magnetic film and of the metallic capping layer, and thus it exceeds the scale of potential spatial inhomogeneities that could be associated with diffusional transport at the interfaces. Thus, the experimentally observed coherent motion is isothermal.

(d) As the losses related to the diffusion processes are experimentally proved to be negligible, no additional temperature-dependent dissipation terms have to be added to Eq. (2), and a removal of the heat from the substrate, as suggested by the reviewer, would not decrease the efficiency of the resonant amplitude excitation in the experimentally covered range of temperatures (around room temperature).

To summarize, we have demonstrated an original concept using a rather simple structure with an elementary design. We do not find anything that can be considered a potential compromising factor or roadblock for the application of our concept. Moreover, there are various prospects for optimizing the suggested approach in detail that may need to be tailored for a specific application.

Minor issues. In Fig. 3b the 15-fold increase of AB is claimed against the baseline, horizontal dashed line, which, however, looks artificially drawn and not based on data. It can be noted that small variations in the exact position of the baseline can change the 15-fold figure drastically: for example, a slight rigid upward translation to meet the tail end of the peaked distribution would reduce that figure anywhere down to between 8-fold (left boundary) and 4-fold (right boundary).

To be more precise, in the revised version of the manuscript the dashed line corresponds to the out-of-resonance background obtained by fitting it with a sum of Lorentzian peaks. The corresponding resonance increase in Fig. 3b is 12-fold.

On line 110, does KR signal stand for Kerr?
On line 136, t0 should be changed to 0. Please update. The corresponding changes were made. In the revised version, we use "Kerr rotation signals" or "KR signals" throughout the text.
7,011.2
2020-01-27T00:00:00.000
[ "Physics" ]
Effect of automated head-thorax elevation during chest compressions on lung ventilation: a model study Our goal was to investigate the effects of head-thorax elevation (HUP) during chest compressions (CC) on lung ventilation. A prospective study was performed on seven human cadavers. The chest was automatically compressed-decompressed in the flat position and during progressive HUP from 18 to 35°. Lung ventilation was measured with electrical impedance tomography. In each cadaver, 5 sequences were randomly performed: one without CC at positive end-expiratory pressure (PEEP) 0 cmH2O, three with CC at PEEP 0, 5 or 10 cmH2O, and one with CC and an impedance threshold device at PEEP 0 cmH2O. The minimal-to-maximal change in impedance (VTEIT, in arbitrary units, a.u.) and the minimal impedance in every breathing cycle (EELI) were compared between flat, 18°, and 35° in each sequence by a mixed-effects model. Values are expressed as median (1st-3rd quartiles). With CC, VTEIT decreased between flat, 18° and 35° at each level of PEEP. At PEEP 0, it was 12,416 a.u. (10,689; 14,442), 11,239 (7,667; 13,292), and 6,457 (4,631; 9,516), respectively. The same was true with the impedance threshold device. EELI/VTEIT significantly decreased from -0.30 (-0.40; -0.15) before to -1.13 (-1.70; -0.61) after the CC (P = 0.009). With HUP, lung ventilation during CC decreased as compared to the flat position. CC were associated with a decrease in EELI.

Cadaver preparation The trachea was intubated and connected to a ventilator (T60, Air Liquide Medical System, Antony, France) set in volume control mode, constant-flow inflation profile, tidal volume 8 ml/kg predicted body weight (additional methods are available in the Additional File 1), respiratory rate 10 breaths/min, insufflation time 1 s, end-inspiratory pause 1 s, expiratory time 4 s, inspired oxygen fraction in air 21%, and PEEP 0 cmH2O. Airway pressure was measured at a heat-and-moisture exchanger (HME) (DAR, Covidien, Mansfield, MA) attached to the endotracheal tube. A flow-meter (Hamilton Medical, Switzerland) was inserted between the Y-piece and the HME. A nasogastric catheter (Nutrivent, Sidam, Mirandola, Italy) was inserted to measure oesophageal pressure. Flow, airway pressure and oesophageal pressure signals were acquired at 200 Hz by a data-logger (Biopac 150, Biopac Inc., Goleta, CA). The ITD (ResQPOD ITD 16, Zoll Medical Corporation, Chelmsford, MA, USA) was inserted between the HME and the proximal tip of the endotracheal tube in part of the experiment. It generates a negative intrathoracic pressure at the time of chest decompression during CPR and hence increases the venous blood flow to the heart 15.

Cardiopulmonary resuscitation CPR was performed by actively compressing and passively decompressing the chest with the Lund University Cardiopulmonary Assist Device (LUCAS 2, Stryker, USA) at a rate of 102/min. Chest compressions were not synchronized with the ventilator.
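Since oesophageal pressure was recorded alongside airway pressure, lung and chest wall mechanics can be partitioned in the standard way; a minimal sketch is given below, with placeholder pressure and volume values (the breath-by-breath Matlab application mentioned in the next section follows the same principle).

```python
# Minimal sketch of partitioning respiratory mechanics from the recorded
# signals: chest wall compliance from oesophageal pressure (Pes) swings and
# lung compliance from transpulmonary pressure, assuming an end-inspiratory
# pause (plateau) is available. All values are placeholders (cmH2O and ml).
vt = 400.0                      # tidal volume (ml)
p_plat, peep = 14.0, 0.0        # airway plateau and end-expiratory pressure
pes_insp, pes_exp = 6.0, 2.0    # oesophageal pressure at end-inspiration/expiration

c_rs = vt / (p_plat - peep)                           # respiratory system compliance
c_cw = vt / (pes_insp - pes_exp)                      # chest wall compliance
c_l = vt / ((p_plat - peep) - (pes_insp - pes_exp))   # lung compliance
print(f"Crs = {c_rs:.0f}, Ccw = {c_cw:.0f}, CL = {c_l:.0f} ml/cmH2O")
```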
Each CPR sequence was performed with the same trunk inclination order: flat, 18° and 35°. When flat, the cadaver was installed in a supine position with the back-plate under the body, to which the LUCAS device was attached and locked with the suction cup over the chest. Before elevating the head and thorax, the back-plate was replaced with one that provided an 18° inclination (EleGARD System, Advanced CPR Solutions, Minneapolis, MN, USA), to which the body was attached and locked with the LUCAS device, as in the flat step. When this second back-plate was switched on, the body was gradually raised to the 35° inclination over 2 min, i.e. by an angle of 0.17° per second. Chest compressions were applied at the same angle to the chest as they were in the flat position. In the flat position, chest compressions were performed over 1 min. During the 18-35° inclinations, the chest compressions were started at 18° for 1 min, continued until the 35° inclination was reached, and then continued for 90 s.

Respiratory mechanics Compliance of the lung and chest wall was automatically measured breath by breath during the periods without chest compressions at each trunk inclination, i.e. before and after chest compressions, via an application in Matlab (Matlab2021b, The Mathworks) 16.

Electrical impedance tomography EIT device A ring of 16 EKG electrodes (Blue sensor BR, Ambu®, Ballerup, Denmark) was placed around the thorax 5 cm above the xiphoid and connected to an EIT device (Gottingen High-Performance, Sensor Medics, Eindhoven, The Netherlands). A 5-mA alternating electrical current was applied and thorax scans were performed at 13.58 Hz.

EIT signal processing The EIT signal was processed (Additional File 2) in each pixel and in the sum of all pixels within a region of interest. The minimal and maximal values of the sum of all impedance pixels defined the onset and end, respectively, of inspiration (Fig. 2). The difference between them was termed VTEIT because the changes in impedance correlate with the volume variations 17,18.
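As an illustration of this signal-processing step, the sketch below extracts a per-breath VTEIT and EELI from a global impedance waveform, assuming the chest-compression artefacts have already been filtered out; the waveform is synthetic and the simple peak/trough segmentation stands in for the actual processing.

```python
# Minimal sketch of extracting VT_EIT and EELI from a filtered global impedance
# signal, assuming breaths can be segmented with simple peak/trough detection.
import numpy as np
from scipy.signal import find_peaks

fs = 13.58                                    # EIT frame rate (Hz), as in the study
t = np.arange(0, 60, 1 / fs)                  # one minute of data
z = 1000 + 400 * np.sin(2 * np.pi * t / 6)    # synthetic impedance, 10 breaths/min

peaks, _ = find_peaks(z)                      # end-inspiration (maximal impedance)
troughs, _ = find_peaks(-z)                   # end-expiration (minimal impedance)

# Pair each trough with the following peak: VT_EIT = max - min per breath,
# EELI = the minimal impedance of the breath, then normalise EELI to VT_EIT.
for tr, pk in zip(troughs, peaks[peaks > troughs[0]]):
    vt_eit = z[pk] - z[tr]
    eeli = z[tr]
    print(f"VT_EIT = {vt_eit:.0f} a.u., EELI/VT_EIT = {eeli / vt_eit:.2f}")
```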
EIT indexes In addition to VTEIT, other EIT indexes were derived based on the specific purpose of the study. The anterior-to-posterior VTEIT ratio was determined for each region of interest. An anterior-to-posterior VTEIT ratio of 1 reflected a balanced distribution of the ventilation between the dorsal and ventral lung regions. A ratio > 1 or < 1 indicated that most of the lung ventilation was distributed towards the anterior or the posterior lung regions, respectively. The pendelluft phenomenon was computed as the sum of all pixels belonging to regions of interest with negative VTEIT differences. Pendelluft was normalized to the overall VTEIT value and expressed as %VTEIT. Hence, as measured, the pendelluft was in antiphase with the volume generated by the ventilator and thus had a negative value. Pendelluft reflects an internal movement of gas during inspiration, from some parts of the lung to others, probably due to regional inequality in lung mechanics. This phenomenon, frequent in ICU patients 19, is important because it is a mechanism by which dependent parts of the lung may become overdistended when they receive part of the inflated tidal volume from the non-dependent lung during the same inspiration 20. The regional ventilation delay (RVD) measures the time delay to re-open atelectatic areas with mechanical insufflation. It was computed according to Muders et al. 21 on the EIT signal. In each pixel, the RVD was the percentage of the inflation time taken to reach 40% of the maximal value. An increase in the standard deviation of RVD (SDRVD) indicates a more heterogeneous distribution of regional inflation. The global inhomogeneity index (GII) measures the global spatial heterogeneity of ventilation. It was computed according to Zhao et al. 22. The heterogeneity of ventilation increases with higher GII. The end-expiratory lung impedance (EELI) was taken as the minimum value of lung impedance in each breath and normalized to VTEIT (EELI/VTEIT) (Fig. 2).

EIT data analysis VTEIT, the anterior-to-posterior VTEIT ratio and pendelluft were measured on the same breaths over 60 s (10 cycles) during chest compressions in the flat, 18° and 35° trunk inclinations. Figure 3 depicts the time course of lung impedance during elevation from 18° to 35°, over which the EIT measurements were performed. EELI and EELI/VTEIT were measured over the last three breaths without and with cardiac compressions (Fig. 2).

Statistical analysis The primary end-point was VTEIT. The secondary end-points were: anterior-to-posterior distribution of VTEIT, pendelluft, EELI/VTEIT, and lung and chest wall compliance. The analysis was conducted using a mixed-effects model for each sequence, where the dependent variables were VTEIT, anterior-to-posterior VTEIT, pendelluft and EELI/VTEIT, the inclination (flat, 18°, 35°) the fixed-effect factor, and the cadaver the random-effect factor. The reference position was flat, to which the two other inclinations were compared. The accuracy of the model was checked by plotting the standardized residuals against the fitted values.
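A minimal sketch of this mixed-effects analysis is given below, using Python's statsmodels in place of the software actually used; the data frame, column names and effect sizes are synthetic placeholders.

```python
# Minimal sketch of the mixed-effects analysis: inclination as a fixed effect
# (flat as reference) and cadaver as a random effect. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for cadaver in range(1, 8):                          # seven cadavers, as in the study
    base = rng.normal(12000, 1500)                   # cadaver-specific level (random effect)
    for incl, shift in [("flat", 0), ("18", -1200), ("35", -5500)]:
        for _ in range(10):                          # ten breaths per condition
            rows.append({"vt_eit": base + shift + rng.normal(0, 800),
                         "inclination": incl,
                         "cadaver": cadaver})
df = pd.DataFrame(rows)
df["inclination"] = pd.Categorical(df["inclination"], categories=["flat", "18", "35"])

# Fixed effect: inclination (flat = reference); random intercept per cadaver.
fit = smf.mixedlm("vt_eit ~ inclination", df, groups=df["cadaver"]).fit()
print(fit.summary())
```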
Lung and chest wall compliance were compared before and after cardiac compressions in the flat position by using a mixed-effects model. They were also compared before cardiac compression at 18° versus after compression at 35° for sequences 2-5. This was because the chest compressions were continuously performed during elevation, and hence this comparison includes the effect of cardiac compression as well as of the trunk inclination change. Values are presented as median (1st-3rd quartiles) unless otherwise stated.

Ethics approval and consent to participate The human bodies used in this study were donated for medical science use by the donors themselves. Written and witnessed consent to donate their bodies to science for anatomical and pedagogical purposes was given prior to death. This donation was free, anonymous, and regulated by the French funeral legislation. According to French law, no other approval was necessary from French authorities or from the local ethical board.

Results Seven cadavers (4 females and 3 males) of 89 ± 9 (mean ± SD) years of age and 51 ± 8 kg predicted body weight were used (Additional File 3). Oesophageal pressure was not available for one cadaver (failure to insert the nasogastric catheter in cadaver#1).

Effect of inclination on VTEIT As shown in Table 1, VTEIT did not change significantly across the trunk inclinations when the chest was not compressed. When chest compressions were performed, VTEIT decreased with trunk elevation as compared to flat (Table 1). This decrease was consistently significant at 35° in every sequence with chest compressions, and also at 18° in the CPR-PEEP0-ITD sequence. EIT parametric images of VTEIT are shown in Fig. 4.

Effect of inclination on anterior-to-posterior VTEIT, pendelluft, EELI/VTEIT, SDRVD and GII The anterior-to-posterior VTEIT did not statistically significantly change with trunk elevation in any sequence (Table 1). The same was true for pendelluft, except that it was lower at 18° in sequence 1 without chest compression. EELI/VTEIT became more negative, i.e. EELI decreased, with trunk elevation in every sequence but the first, the difference being statistically significant only for the CPR-PEEP10 sequence (Table 1). There was a non-significant trend to a reduction in SDRVD with body inclination in every sequence (Table 1). With body inclination, GII non-significantly tended to decrease in the absence of chest compression and at PEEP 0, and to increase in the other conditions (Table 1).

Effect of chest compressions on lung and chest wall compliance In the flat position, the lung and chest wall compliance increased after chest compressions, except in the CPR-PEEP0-ITD sequence for the lung compliance and in the CPR-PEEP5 sequence for the chest wall (Table 2). During trunk elevation, the chest wall and lung compliances were higher after than before chest compressions, except for lung compliance, which was lower in the CPR-PEEP5 sequence and also in the CPR-PEEP0-ITD sequence, though not significantly.

Discussion To our knowledge, this is the first study to measure the distribution of lung ventilation with EIT during flat and HUP-CPR in human cadavers. The main findings were: (1) with HUP-CPR, VTEIT decreased at PEEP values of 0, 5 and 10 cmH2O compared to flat CPR, but its anterior-to-posterior distribution did not significantly change; the same was found with the ITD in place; (2) the chest compressions decreased EELI/VTEIT.
Methodology Before discussing the present results, a critique of our methodology is required. The study challenged the EIT technology in many ways. One was the location of the electrodes across the thorax. Using a ring of electrodes rather than a belt was thought more flexible to accommodate the LUCAS installation. Another issue was the baseline recording. EIT depends on a measurement of change in impedance relative to a baseline. Performing a continuous EIT recording, which would have taken more than one hour, had several risks. One was a potential electrode contact defect during the different maneuvers and the loss of parts of the records. Therefore, the design took this issue into account and the experiment was split into 5 sequences, for which the baseline was as close as possible to the recording of the head elevation maneuver. Another risk was the skin contact of the electrodes. To manage it, the EIT device continuously monitored the electrode impedance in order to maintain it in the appropriate range. Finally, the issue of filtering the EIT signal disturbed by chest compression artefacts was solved, as shown in Figs. 1 and 2, giving a good signal-to-noise ratio. Furthermore, the accuracy of filtering was tested after each recording with our flexible application developed in Matlab. Contrary to most of the commercial EIT systems, with which the sternum and spine are avoided and electrodes are placed to inject more electric current into the chest cavity, our device included one sternal and one spinal electrode. As we argued in the Additional File 2, we decided not to use a conventional EIT belt due to the presence of the CPR device placed over the mid-sternum. We reasoned that using a series of 16 individual EIT electrodes would facilitate the management of CPR with that device. It is true that placing electrodes 1 and 9 at the sternum and spine, respectively, was the way the EIT prototype was used and published in many publications about EIT 18,[23][24][25] before the new commercial devices came up. To the best of our knowledge, there has been no comparison between the 16 individual electrodes and the 16-electrode belt.

Model validation The present study shows that EIT is feasible during CPR in humans, which has clinical and research implications. Beyond this, we are aware that it is important to make sure that the present results are consistent with past and future findings. Two findings deserve attention from the perspective of model validation. Without chest compressions at PEEP 0, EELI/VTEIT did not change with HUP in the present study. By contrast, in acute respiratory distress syndrome patients, HUP was associated with, on the one hand, an increased EELV [26][27][28][29], and, on the other hand, higher plateau pressure and lower respiratory system compliance that improved with chest strapping 30. Both findings suggest that HUP may induce end-inspiratory overdistension and that an increase in EELV may reflect overdistension. The anterior-to-posterior distribution of VTEIT was markedly greater than 1 in the flat position in all the sequences, indicating a marked distribution of tidal volume towards the most anterior parts of the lung during mechanical ventilation and CPR. This may also reflect atelectasis in the dorsal lung regions.
Effect of HUP and chest compressions on VTEIT at different PEEP values In the present study we measured VTEIT, its anterior-to-posterior distribution and the pendelluft during chest compressions at various trunk inclinations. In the flat position, VTEIT was twice as high with CPR at any PEEP as without CPR at PEEP 0 (Table 1). We did not formally compare these values because the baseline may differ across the sequences. VTEIT decreased with HUP consistently across all PEEP levels. This finding may reflect overdistension with HUP, in particular in the ventral lung regions, collapse in the dorsal lung regions, or a combination of both. Our study design did not permit further exploration of these mechanisms. Ascribing the findings entirely to ventral overdistension might be too simplistic, in particular because the VTEIT distribution did not change between conditions. The fact that GII increased, though not significantly, with body inclination at PEEP 5 and 10 while RVD decreased may suggest overinflation in the ventral regions, should it be assumed that the lower SDRVD reflects recruitment in the dorsal lung regions. It should be noted, however, that there were changes in that distribution, but these were not statistically significant, perhaps for lack of power. However, the anterior-to-posterior distribution of VTEIT did not change significantly with HUP as compared to baseline. With the ventral lung regions overdistended, the ventilation in those regions, and hence the anterior-to-posterior ratio of VTEIT, would likely be lower with HUP. On the other hand, the pendelluft phenomenon, which was present in the flat position in each sequence, did not significantly change with HUP during the chest compressions. Yang et al. measured some EIT indexes in 30 normal volunteers breathing spontaneously at 0, 30, 60 and 90 degrees from the horizontal plane in the supine position 31. They found that with the sitting position the dorsal ventilation increased and the ventilation became more heterogeneous. Moreover, the authors had concerns about keeping the position of the electrodes the same over the range of inclination. Our results are difficult to compare with those of Yang et al. given the difference in subjects and ventilation regimen 31.

Effect of chest compressions on EELI/VTEIT We measured EELI/VTEIT before and after chest compressions at different trunk angulations and PEEP values. The present study confirms that chest compressions during non-synchronous mechanical ventilation decrease EELI, as already reported for EIT in an experimental model of cardiac arrest in pigs 14 and in other studies 32,33. This also argues in favor of the model's validity. The clinical implication of this finding would be to set some PEEP during the chest compressions, or once the circulation is restored, in order to optimize ventilation; the balance with the effect on cerebral perfusion and venous return is still to be determined. However, in the present study, even with PEEP, EELI/VTEIT decreased with HUP, suggesting that HUP would not improve lung ventilation.
Lung and chest wall mechanics In human cadavers, a previous study found that the chest wall compliance increased over time after chest compressions 34. In the present study we assessed the chest wall compliance by using oesophageal manometry and also found it was higher after than before chest compressions, at each trunk inclination and at each PEEP value. The chest wall was softened by the prolonged chest compression. This confirmatory finding is further evidence validating our experimental model. In the absence of lung CT scans, it was not possible to assess the nature and extent of lung injury, if any, in the cadavers we used. It is, however, likely that the bodies had some degree of lung oedema and/or tissue oedema.

Limitations Our study has several limitations. One is the use of a cadaver model. For this reason, no recommendation can be made regarding inclination or the use of an ITD, because no information about hemodynamics and cerebral perfusion could be collected during the present experiment. Tidal volume measured by EIT does not account for the presence of pulmonary ventilation due to chest compressions, which may also be influenced by chest elevation and PEEP. As mentioned above, the lack of lung CT to better define the presence and extent of lung injury at baseline is a limitation. Our study is a pilot study and probably underpowered due to the cost and availability of cadavers. Therefore, it only allows us to track trends, and we cannot draw any firm conclusions. A better understanding of the effects of chest compressions and of the impact of mechanical ventilation during CPR on lung ventilation distribution could influence the use of mechanical ventilation after cardiac arrest 35.

Figure 1. Schematic drawing of the sequences of events. PEEP positive end-expiratory pressure, ITD impedance threshold device, CC chest compressions, EIT electrical impedance tomography.

Figure 2. Time course of lung impedance (arbitrary units: a.u.) measured with electrical impedance tomography (EIT) in cadaver#6 (CPR-PEEP0 sequence, flat position), depicting the assessment of volume change (VTEIT) and end-expiratory lung impedance (EELI) before and after chest compressions. VTEIT, highlighted with the continuous and dotted double green arrows, is the change between the minimal and maximal values of the filtered lung impedance signal (red line). EELI is the minimal value. EELI was measured over the last three breaths before (continuous vertical black arrows) and during chest compressions once the tracing was stable (dotted vertical black arrows). EELI was normalized to VTEIT of the corresponding breaths.

Figure 4. Parametric EIT images of tidal ventilation in cadaver#7 during cardiac compressions at different body inclinations. The blue areas show the pendelluft.

Figure 5. Box-and-whisker plots of the end-expiratory lung impedance to change in impedance (EELI/VTEIT) ratio before and after chest compressions during the 4 sequences with chest compressions. CPR cardiopulmonary resuscitation, PEEP positive end-expiratory pressure, ITD impedance threshold device. *P < 0.05 vs before.

Table 1.
Effect of body inclination on electrical impedance tomography indexes with or without chest compressions. Values are median (1st, 3rd quartiles). EIT electrical impedance tomography, VTEIT tidal ventilation, EELI end-expiratory lung impedance, a.u. arbitrary units, CPR cardiopulmonary resuscitation, PEEP positive end-expiratory pressure, ITD impedance threshold device, SDRVD standard deviation of the regional ventilation delay, GII global inhomogeneity index. *P < 0.05 vs. flat.

Table 2. Change in lung and chest wall compliance with chest compressions. Values are mean [95% confidence intervals] ml/cmH2O. Lung and chest wall compliance are measured without chest compressions, i.e. immediately before and after them. The change is the absolute difference between after and before chest compressions, a positive value indicating that the compliance is greater after than before chest compressions, and the opposite for a negative value. Elevation means the trunk angulation moved from 18° to 35°. The change in compliance reflects both the effect of chest compression and the change in trunk inclination. CPR cardiopulmonary resuscitation, PEEP positive end-expiratory pressure, ITD impedance threshold device.
4,928
2023-11-21T00:00:00.000
[ "Medicine", "Engineering" ]
Comparative analysis of delivered and planned doses in target volumes for lung stereotactic ablative radiotherapy Background Adaptive therapy has been enormously improved based on the art of generating adaptive computed tomography (ACT) from the planning CT (PCT) and the on-board image used for the patient setup. Exploiting the ACT, this study evaluated the dose delivered to patients with non-small-cell lung cancer (NSCLC) treated with stereotactic ablative radiotherapy (SABR) and derived the relationship between the delivered dose and the parameters obtained through the evaluation procedure. Methods SABR treatment records of 72 patients with NSCLC who were prescribed a dose of 60 Gy (Dprescribed) to the 95% volume of the planning target volume (PTV) in four fractions were analysed in this retrospective study; 288 ACTs were generated by rigid and deformable registration of a PCT to a cone-beam computed tomography (CBCT), one per fraction. Each ACT was sent to the treatment planning system (TPS) and treated as an individual PCT to calculate the dose. The dose delivered to a patient was estimated by averaging the four doses calculated from the four ACTs per treatment. Through this process, each ACT provided geometric parameters, such as the mean displacement of the deformed PTV voxels (Warpmean) and the Dice similarity coefficient (DSC) from the deformation vector field, and dosimetric parameters, e.g. the difference in homogeneity index (ΔHI, with HI defined as (D2%-D98%)/Dprescribed*100) and the mean delivered dose to the PTV (Dmean), obtained from the dose statistics in the TPS. These parameters were analysed using multiple linear regression and one-way ANOVA in SPSS® (version 27). Results The prescribed dose was confirmed to be fully delivered to the internal target volume (ITV) within a maximum difference of 1%, and the difference between the planned and delivered doses to the PTV agreed within 6% for more than 95% of the ACT cases. Volume changes of the ITV during the treatment course were observed to be minor in comparison with their standard deviations. Multiple linear regression analysis between the obtained parameters and the dose delivered to the 95% volume of the PTV (D95%) revealed four PTV parameters [Warpmean, DSC, ΔHI between the PCT and ACT, Dmean] to be significantly related to the PTV D95%, with P-values < 0.05. The ACT cases of high ΔHI were caused by higher values of the Warpmean and DSC from the deformable image registration, resulting in a lower delivered PTV D95%. The mean values of PTV D95% and Warpmean showed significant differences depending on the lung lobe where the tumour was located. Conclusions Evaluation of the dose delivered to patients with NSCLC treated with SABR using ACTs confirmed that the prescribed dose was accurately delivered to the ITV. However, for the PTV, certain ACT cases characterised by high HI deviations from the original plan demonstrated variations in the delivered dose. These variations may potentially arise from factors such as patient setup during treatment, as suggested by the statistical analyses of the parameters obtained from the dose evaluation process.
Stereotactic ablative radiotherapy treatment for patients with non-small cell lung cancer Stereotactic ablative radiotherapy (SABR), also known as stereotactic body radiation therapy (SBRT), is a radiotherapy procedure that is highly effective in controlling early-stage primary or oligometastatic cancers. It delivers a higher biologically effective dose to the tumour than does conventional radiotherapy [1][2][3]. Advancements in technologies such as intensity-modulated radiation therapy (IMRT) and image-guided radiation therapy (IGRT) have helped achieve treatment goals by maintaining an acceptable therapeutic ratio with excellent dose conformity in SABR. An accurate patient setup facilitated by IGRT is a crucial aspect of SABR. Owing to advancements in IGRT, SABR has been applied to patients with early-stage non-small cell lung cancer (NSCLC) who are medically inoperable or prefer non-invasive treatment [4][5][6][7][8].

Adaptive radiation therapy with CBCT One of the most critical aspects of IGRT is the application of imaging techniques for precise patient positioning during treatment sessions. In external beam radiation therapy, cone-beam computed tomography (CBCT), which is integrated with a linear accelerator, is extensively used to position a patient based on the stance taken during treatment simulation. The CBCT image can provide information on the patient's treatment position, allowing estimation of the dose delivered to the gross tumour volume (GTV) and surrounding organs at risk (OARs). However, because of the known limitations of CBCT, such as the large uncertainty of the CT numbers arising from scattered photons [9][10][11] and several other effects [12][13][14], CBCT is considered inappropriate for direct dose computation [15]. This limitation prevents CBCT from being used as a planning CT for adaptive radiation therapy when the patient's anatomy has been seriously deformed throughout the treatment course. Advances in the deformable image registration (DIR) algorithm and in CBCT image quality [29][30][31][32] have enabled the accumulation of the delivered dose using the daily CBCT [33,34], in addition to the efforts to overcome the limitations of CBCT. Based on the estimation of the dose delivered to the GTV and OARs, adaptive radiation therapy facilitates the modification of treatment plans to achieve optimal outcomes by adapting to the observed anatomic or physiologic variations in patients from the initial simulation.

Delivered dose estimation using deformable image registration Synthetic CT, also called adaptive CT (ACT), can be acquired by aligning and deforming the PCT onto the daily CBCT. This process involves both rigid and deformable image registrations. In SABR, the CBCT acquired for each patient's setup at every fraction enables the creation of the ACT, allowing for the calculation of daily doses that reflect any variations from the initial simulation, and ultimately, the prediction of the total dose actually delivered.
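As a toy illustration of the deformation step behind ACT generation, the sketch below warps a synthetic PCT slice with a deformation vector field (DVF) and propagates a contour mask with the same field; this is plain backward warping in scipy, not the commercial multipass B-spline algorithm used in the study.

```python
# Toy illustration of ACT generation: a planning-CT slice is warped onto the
# daily-CBCT frame by a deformation vector field; contours propagate with the
# same field. Image and DVF are synthetic placeholders.
import numpy as np
from scipy.ndimage import map_coordinates

pct = np.zeros((64, 64)); pct[20:44, 20:44] = 1.0   # synthetic PCT slice (a "tumour")

yy, xx = np.mgrid[0:64, 0:64].astype(float)
dvf_y = 3.0 * np.exp(-((yy - 32)**2 + (xx - 32)**2) / 200)  # smooth synthetic DVF (voxels)
dvf_x = np.zeros_like(dvf_y)

# Backward warp: sample the PCT at the displaced coordinates to build the ACT.
act = map_coordinates(pct, [yy + dvf_y, xx + dvf_x], order=1)

# A contour mask (e.g. the ITV) propagates with the same field; nearest-neighbour
# interpolation keeps the mask binary.
itv_mask = map_coordinates(pct, [yy + dvf_y, xx + dvf_x], order=0)
print(act.shape, itv_mask.sum())
```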
Previous studies [35,36] have evaluated the dose delivered to patients with NSCLC using the same commercial software. They either focused on a limited number of PTV parameters or examined only a few treatment sessions. This study aimed to assess the dose delivered throughout the treatment course, reflecting the patient's anatomic and physiologic variations. Furthermore, as many PTV parameters as possible were collated through the dose evaluation process and analysed to understand the relationship between the dose delivered to the PTV in each fraction and the various parameters. This approach can help to identify the factors contributing to the observed dose distribution. Treatment records of 72 patients with NSCLC were collected. The patients were treated with 60 Gy (Dprescribed) in four fractions between 2019 and 2023, using volumetric arc-modulated therapy with a 6 MV flattening-filter-free photon beam of a Varian TrueBeam STX (Varian Medical Systems Inc., Palo Alto, CA, USA). The Dprescribed was prescribed to the 95% volume of the PTV. For lung SABR treatment planning, the four-dimensional CTs (4DCTs) obtained with a thoracic scan protocol were reconstructed into 10 respiratory phases by a Brilliance CT Big Bore™ (Royal Philips Electronics, Amsterdam, Netherlands), and then their average CT image was generated for the treatment plan. The maximum intensity projection (MIP) of the tumour from each phase was integrated and contoured as the ITV, and the PTV was determined by embracing the ITV on a patient-by-patient basis with a 5-7-mm margin, compensating for several treatment uncertainties. Treatment plans were created using a treatment planning system (TPS, Eclipse Ver. 13.5 and 16.1 with the Acuros XB algorithm, Varian Medical Systems Inc., Palo Alto, CA, USA). In the treatment room, the patients were positioned on the couch by matching the acquired daily CBCT image to the PCT image before treatment. The specifications of the PCT and CBCT images are listed in Table 1. The treatment records, including a set of DICOM RT plan (RP), RT dose (RD), RT structures (RS), the PCT, and daily CBCT images, were exported into the Velocity software (Ver. 4.1, Varian Medical Systems Inc., Palo Alto, CA, USA) to generate the ACTs.

Dose evaluation procedure ACT generation began by aligning the PCT with the CBCT. To estimate the dose delivered to the patient at the time of treatment, the couch location recorded when matching the CBCT to the PCT was used for the rigid registration. The daily CBCT images were acquired with free-breathing. After applying the rigid registration, deformable image registration of the PCT onto the CBCT was performed using the multipass B-spline algorithm [37,38] in the Velocity software. The RS, including the ITV, PTV, and normal structures, was automatically propagated following the deformation vector field (DVF) of the PCT to the CBCT to generate an adaptive RS (aRS) on the ACT. The ACT and aRS were then transferred to the TPS to calculate the delivered dose (aRD), reflecting the daily variations from the initial simulation. After the dose calculation was completed in the TPS, the aRD accompanying the ACT was copied back to the Velocity software to deform the aRD back to the PCT coordinate. Each patient's record contained four CBCTs, thus resulting in four ACTs and four corresponding aRDs. Therefore, the total dose delivered to the patient was estimated by averaging the four aRDs in the PCT coordinates. The workflow of the dose evaluation procedure is summarised in Fig. 1.
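The final averaging step can be summarised in a few lines; the sketch below uses synthetic arrays in place of the DICOM RT Dose grids and assumes the four aRDs have already been deformed back onto the PCT grid.

```python
# Minimal sketch of the dose-accumulation step: four per-fraction dose grids
# (aRDs), already deformed back to the PCT coordinate, are averaged to estimate
# the delivered dose. Arrays are synthetic placeholders for RT Dose grids.
import numpy as np

rng = np.random.default_rng(1)
# Each aRD is a full-course dose grid (Gy) recomputed on one fraction's ACT.
ard_doses = [60.0 + rng.normal(0, 1.0, size=(32, 32, 32)) for _ in range(4)]

delivered = np.mean(ard_doses, axis=0)   # voxel-wise average on the PCT grid
print(f"mean estimated delivered dose: {delivered.mean():.1f} Gy "
      f"(prescription: 60 Gy in 4 fractions)")
```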
Data analysis with parameters Through the dose evaluation procedure described earlier, parameters related to the ITV and PTV were collected, as summarised in Table 2. These parameters were obtained from the dose statistics of the RD and aRDs in the TPS, from deformable quality assurance [39], from the comparison of the RS and aRS, and from the dose accumulation processes using the Velocity software. Dose-evaluation indices, such as the homogeneity index [40], were calculated and used in the analysis. The obtained parameters were assessed to determine their relationship to the treatment target, specifically the dose delivered to the 95% volume of the PTV (D95%), using a stepwise multiple linear regression model in SPSS® (Ver. 27, IBM®, Chicago, IL, USA). As the study assessed the treatment records of 72 patients, a total of 288 ACT cases was analysed. Through the analysis, significant parameters that passed the normality test and multicollinearity check were selected, and their p-values were reported. Additionally, analysis of variance (ANOVA) was performed to examine the mean difference between the parameters depending on the tumour location.

Statistics of the parameters After the multiple linear regression analysis, the PTV parameters proven to be significant were examined. First, basic histograms of the PTV parameters, including the PTV D95%, and two-dimensional distributions of the parameters were presented. The volumes of the ITV and PTV were reviewed in the PCT and ACTs, and the average volume differences between the ACTs and the PCT were investigated.

Delivered dose evaluation The dose delivered to the PTV and ITV was estimated from the aRDs and compared with the planned dose, as shown in Figs. 2 and 3, respectively. The planned and delivered doses agreed within the uncertainty of the delivered doses. From a frequentist statistical perspective, the delivered dose to the PTV agreed with the planned dose.

Relations between the tumour location and PTV parameters Each lung is divided into sections known as lobes that are closely associated with various structures within the thoracic cavity. The diaphragm, a dome-shaped primary respiratory muscle, is expected to undergo the most significant respiratory motion. Given that the diaphragm is located beneath the lower lung lobes, there has been specific interest in exploring the dependence of the values of the PTV parameters, for example the estimated delivered dose, on the tumour location. Assessing the mean values of the PTV parameters from the 288 ACT cases, a significant difference was observed in the PTV displacement and the PTV D95% between the lobes (P-value less than 0.05), as shown in Table 4.

Tumour volume change Previous studies on NSCLC SABR treatment have reported changes in tumour volume, despite a short treatment duration of typically 7-12 days. Using various imaging modalities, some studies [41,42] reported a consistent decrease in the GTV, whereas others [43][44][45] noted a slight initial increase in the GTV during the treatment course, followed by a decrease. At our institute, treatment records document the ITV instead of the GTV; therefore, this study investigated the changes in the ITV volume during the course of treatment. Considering that the average volume differences per fraction for both the ITV and PTV are nearly zero and significantly smaller than the SD in Fig. 4(c) and (d), volume changes during the SABR treatment did not constitute a significant concern in this study.
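For reference, a minimal sketch of the dose statistics used throughout this analysis, Dx% as a percentile over the PTV voxel doses and the homogeneity index HI = (D2% - D98%)/Dprescribed * 100 as defined in this study, is given below with synthetic voxel doses.

```python
# Minimal sketch of the dose statistics: Dx% (dose received by at least x% of
# the PTV volume) from a voxel dose array, and the homogeneity index
# HI = (D2% - D98%) / D_prescribed * 100, as defined in this study.
import numpy as np

def dxx(dose_in_ptv, x):
    """Dose (Gy) received by at least x% of the volume: the (100 - x)th percentile."""
    return np.percentile(dose_in_ptv, 100 - x)

rng = np.random.default_rng(2)
ptv_dose = rng.normal(63.0, 1.5, size=5000)   # synthetic PTV voxel doses (Gy)
d_prescribed = 60.0

d95, d2, d98 = dxx(ptv_dose, 95), dxx(ptv_dose, 2), dxx(ptv_dose, 98)
hi = (d2 - d98) / d_prescribed * 100
print(f"D95% = {d95:.1f} Gy, HI = {hi:.1f}")
```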
In the ACTs, the ITVs and PTVs were automatically recontoured following the DVF obtained from the DIR using the commercial software, and their volumetric differences were consistently negative. These plots indicate that the recontoured structures exhibit a marginally smaller volume than their original volume, and the volume difference may originate from the differences in the way respiratory motion is handled between the PCT and the CBCT. The ITV in the PCT was delineated based on the MIP of the GTV, whereas the CBCT was acquired with free-breathing. This might have affected the ITV in the ACT, reflecting that the volumes recorded in the free-breathing CBCT are marginally smaller than the ITV and PTV in the PCT. However, this volume difference was significantly smaller than the SD. Comparing the values in Fig. 4(c) and (d), the average ITV differences are approximately twice as large as those of the PTV. These discrepancies might have occurred because the ITV, when used as the denominator to calculate the volume difference, is smaller than the PTV.

Recontoured structures for dose evaluation Concerns may arise regarding the accuracy of the aRS recontoured automatically using the commercial software. When applying the ACT for adaptive planning, it is essential for a radiation oncologist to review and correct the delineation of the ITV and PTV prior to treatment approval. Nonetheless, the automatically recontoured ITV and PTV on the ACT, based on the DVF, were considered adequate for evaluating the dose delivered to the patients. This study was performed to investigate how the original PTV was deformed during the treatment and whether the planned dose was actually delivered to the original PTV. Among the collected patient data, ITV and PTV delineations were randomly selected and reviewed by an expert for comparison with manual recontouring. This comparison revealed no significant differences in the dosimetric evaluation, thus supporting the use of automatically recontoured structures for dose evaluation across all 288 ACT cases.

PTV parameters depending on the tumour location The PTV located in the lower lobes showed a greater discrepancy in position between the treatment and the simulation. The mean Warpmean value was notably higher and the mean D95% value was significantly lower for tumours located in the lower lobes compared with those of tumours in the upper lobes (Table 4). Furthermore, the SDs for PTVs in the lower lobes exhibited a significant increase, indicating that tumours located in the upper lobes allow for more consistent patient setup reproducibility, thereby enhancing the accuracy of the prescribed dose delivery to the PTV. Among the seven outliers, six had the PTV located in the lower lobes. This may be attributed to the observed higher Warpmean and lower D95% values in the lower lobes, a trend that these outliers similarly exhibited.

Outliers of the multiple linear regression model prediction The seven outliers marked in Figs. 5 and 6 are characterised by having the highest ΔHI and small PTV volumes. These outliers show significant deviations from linearity, as shown in Fig. 8. The PTV D95% values of the outliers calculated from the ACTs in the TPS deviated from the PTV D95% predictions of the multiple linear regression. Two geometric and two dosimetric parameters were found to be significantly related to the D95% in the multiple linear regression, and the detailed results are listed in Table 3.
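A minimal sketch of the two geometric parameters discussed here, Warpmean as the mean DVF magnitude over the PTV voxels and the DSC between the original and propagated PTV masks, is given below with synthetic arrays.

```python
# Minimal sketch of the two geometric parameters: Warp_mean as the mean DVF
# magnitude over the PTV voxels, and the Dice similarity coefficient between
# the original and propagated PTV masks. All arrays are synthetic.
import numpy as np

rng = np.random.default_rng(3)
ptv_mask = np.zeros((32, 32, 32), bool); ptv_mask[10:20, 10:20, 10:20] = True
dvf = rng.normal(0, 1.0, size=(3,) + ptv_mask.shape)      # DVF in mm, (dz, dy, dx)

warp_mean = np.linalg.norm(dvf, axis=0)[ptv_mask].mean()  # mean displacement (mm)

a_ptv = np.roll(ptv_mask, shift=1, axis=0)                # "propagated" PTV (toy shift)
dsc = 2 * (ptv_mask & a_ptv).sum() / (ptv_mask.sum() + a_ptv.sum())
print(f"Warp_mean = {warp_mean:.2f} mm, DSC = {dsc:.3f}")
```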
To evaluate the effect of the ACTs with a high ΔHI, the stepwise multiple linear regression was repeated, excluding ACT cases with ΔHI > 17, based on the distribution in Fig. 5(b). The results summarised in Table 5 indicate that only the dosimetric parameters maintained a significant correlation with D95%. This indicates that the outliers may be closely linked to the two geometric parameters that represent the PTV displacement. Such displacement might imply uncertainties from the patient setup and respiratory motion, resulting in dosimetric uncertainty. Kim et al. [46] highlighted that geometric uncertainties in patient positioning can limit the clinical advantages of IMRT. Therefore, during SABR treatment, a more thorough consideration of patient setup and respiratory motion is imperative. Although these cases exhibited suboptimal dosimetric outcomes for the PTV, Fig. 3 confirms that the dose was accurately delivered to the ITV. This demonstrates that the margin between the ITV and PTV effectively ensures sufficient dose coverage of the ITV. As the patients were treated under the prescription of one radiation oncologist, the ITV and PTV were consistently determined. However, it is worth studying whether the margin can be further reduced with respect to the various parameters obtained. In this study, the analysis of multiple parameters enabled a more precise evaluation of the delivered dose.

Uncertainties affecting the deformed image In DIR, the PCT is the moving image, while the daily CBCT is the stationary image. When the PCT is deformed to the daily CBCT, the deformed PCT is called an ACT, retaining the patient information at the time of the treatment setup. In the TPS dose calculation, the delivered dose based on the ACT was calculated using the original beam plan. Thus, we believe that the setup error implied by the ACT affects the estimated delivered dose. Although the QA parameters of the DIR [39] were assessed and included in the analysis (Table 2), the uncertainty of the DIR algorithm was not considered. Repeating the DIR using the same CBCT and PCT images is insufficient to estimate the DIR uncertainty; however, the DIR uncertainty is expected to be systematic, without giving rise to an outlier. As the outliers of interest occurred randomly among the 288 ACT cases, they likely originated from a random source, such as the setup error, rather than from the DIR algorithm error.

Limitations of this study One limitation is the resolution of the parameters obtained by comparing the images. The image resolutions of the PCT and daily CBCT are presented in Table 1. Recalling the method of generating the ACT, it is the result of the deformation of the PCT to the daily CBCT; thus the resolution of the ACT is the same as that of the PCT. The transverse and vertical resolutions of the ACT were approximately 1.3 and 3 mm, respectively. Considering the Warpmean, which showed a peak around 1.3 mm (Fig.
7b), it is challenging to calculate any finer displacement. However, we used the average value of the displacement calculated from the PTV voxels; thus, we did not rely on a single movement value but on the movement trend of the PTV structure. Secondly, a methodology covering intra-fractional motion during SABR was lacking. Monitoring the intra-fractional motion might be optimal for IMRT treatment using an MR-Linac. With the CBCT-Linac option, it is difficult to track real-time respiratory motion during treatment. In particular, for SABR, respiratory motion-controlled treatments, such as DIBH, are not normally considered. Although we observed one case of using continuous positive airway pressure breathing, the respiratory motion was not monitored. Intra-fractional motion was considered as the respiratory motion in the PCT and ACT images, and the patients were treated with free-breathing. In the PCT, the respiratory motion was assessed using the MIP of the tumour. As we did not control the patient's respiration during the CBCT acquisition, which took approximately 1 min, the image was expected to represent the average motion of the GTV. The surrounding OARs were also obtained with the average intensity projection (AIP) on the PCT and with free-breathing on the CBCT.

Comparison of the results with previous studies Previous studies utilised the same commercial software to evaluate the dose delivered to patients with NSCLC. Czajkowski P et al. [35] evaluated the accuracy of dose delivery in stereotactic radiation therapy for both brain and lung cancers. Although they analysed only 10 patients for the lung SABR dose evaluation, they reported no significant change in the ITV volume during the treatment and agreement within ± 10% of the dose to 99% of the volume of the PTV between the PCT and ACT. They assessed only the PTV volume and DSC regarding the PTV change. By contrast, Wang B et al. [36] investigated the differences between the delivered and planned doses to the PTV as well as to the OARs, which were clinically acceptable. They used the records of 27 patients with locally advanced NSCLC who were treated with 51 Gy in 17 fractions. They generated ACTs for fractions 1, 5, 9, 13, and 17. A significant tumour shrinkage (11.1%) was observed throughout the course. However, no significant difference was discovered in the volume of the 51 Gy isodose line corresponding to the PTV, although a limited increase (< 5%) was observed for the total lung, oesophagus, and heart. Compared with previous studies, we calculated the delivered dose for every fraction and assessed as many PTV parameters as possible with an increased number of patients. We were able to obtain the distributions of the significant PTV parameters, and the parameters were proven to be related to the PTV D95%. Besides the delivered dose evaluation, the PTV D95% and the PTV displacement from the PCT were observed to be significantly related to the tumour location.
Conclusions Throughout the evaluation of the delivered dose, we confirmed that the prescribed dose was successfully delivered to the ITV. The analysis showed that the HI difference between the ACT and PCT was the most sensitive parameter for the delivered PTV D95%. Although the dose was successfully delivered to the PTV in most cases, a few outliers with higher ΔHI, which degraded the PTV dose distribution, were observed. Judging by the relationship between the geometric parameters of the PTV and the degraded PTV D95%, these outliers might be caused by a misaligned patient setup. Analysis of the delivered dose, along with the parameters obtained during the evaluation process, demonstrated that the PTV margin effectively compensated for the setup uncertainty.

Fig. 2 Dose comparison between the planned (filled squares) and estimated delivered (open squares) doses to the planning target volume (PTV) and their standard deviations: minimum (Dmin), mean (Dmean), and maximum (Dmax) dose to the PTV, and the dose delivered to the 95% volume of the PTV (D95%)

Fig. 3 Dose comparison between the planned (filled squares) and estimated delivered (open squares) doses to the internal target volume (ITV) and their standard deviations: minimum (Dmin), mean (Dmean), and maximum (Dmax) dose delivered to the ITV

Fig. 6 (a) One-dimensional distribution of the PTV volume (cc) and (b) scatter plot of the dose delivered to the 95% volume of the PTV (D95%) vs. the PTV volume. The outlier cases of concern are highlighted in the scatter plot

Fig. 7 One-dimensional distributions of the independent parameters: (a) mean delivered dose to the PTV (Dmean), (b) mean warping distance (Warpmean), (c) Dice similarity coefficient (DSC), and of a dependent parameter: (d) dose delivered to the 95% volume of the PTV (D95%)

Fig. 8 Scatter plot of the residuals of the dose prediction vs. the predicted values of the regression model. The outliers of concern are highlighted in the scatter plot

Table 1 Specifications of the treatment simulation CT and cone-beam CT

Table 2 List of parameters collected for the internal and planning target volumes

Table 3 Results of the multiple linear regression. *VIF: Variance Inflation Factor, Tolerance: a measure of collinearity, ΔHI: difference of homogeneity index, Dmean: mean delivered dose to PTV, DSC: Dice Similarity Coefficient, Warpmean: mean displacement of PTV

Table 4 Means and standard deviations of the D95%, DSC, and Warpmean distributions of the PTV depending on the tumour location in the lung lobe. Meaningful differences were assessed by the P-value

Table 5 Results of the multiple linear regression by SPSS® (v27, IBM®, Chicago, IL, USA) with cases having ΔHI less than 17. *VIF: Variance Inflation Factor, Tolerance: a measure of collinearity, ΔHI: difference of homogeneity index, Dmean: mean delivered dose to PTV
5,391.4
2024-08-16T00:00:00.000
[ "Medicine", "Engineering" ]
Feasible Wind Power Potential from the Coastal Line of Sindh, Pakistan Energy is a serious issue, directly or indirectly, in the whole process of advancement, development and existence of all living creatures. It plays a very important part in the socioeconomic growth and social prosperity of any country, yet at least one-third of the country's population has no access to energy such as electricity. Pakistan is an electricity-deficient country and is also deficient in oil and gas. However, Pakistan is rich in sources such as water, coal, wind and solar energy. Electricity is a basic need for human comfort, and to overcome its power crisis Pakistan needs to utilize its natural power assets, such as hydel power, sunlight and wind potential, for the generation of electricity. Pakistan surely has considerable latent potential for exploiting wind energy. Additionally, about 1000 km of shoreline in the south and the hilly areas in the north offer an outstanding reserve of wind potential. Efforts are needed to utilize wind energy in the country. This study includes only twenty selected sites in the southern region of Sindh province for power generation from the natural source of wind energy.

INTRODUCTION Energy is a fundamental element of integrated human growth in terms of development. The power situation and fossil fuels raise three major concerns: environmental impact, resource depletion and energy supply issues. Non-conventional resources are much more important for human development; for example, biomass is used for heating, cooking and steam production, and wind and hydro potential is used to generate electricity. Renewable energy sources are environmentally friendly. The power crisis is increasing day by day; the reserve-to-production ratio of fossil fuels for North America, Europe and Asia Pacific was reported as 10, 57 and 40 years, respectively (Makkawi et al., 2009). It is necessary to reduce reliance on oil to achieve a stable energy supply. Wind power potential is one of the renewable energy sources that can meet the increasing demand of the country (Harijan et al., 2009). This research study employs the mean hourly values of wind speed to draw wind speed duration curves and to analyse the wind potential for producing electricity at 20 sites on the coastal line of the southern region of Sindh.

Geographical profile of Pakistan: Pakistan covers an area of 803,950 km2 and is divided into five regions called provinces: Baluchistan, Sindh, Punjab, Gilgit-Baltistan and Khyber Pakhtunkhwa (KPK), plus the Tribal Areas (FATA and FANA). It has large mountain ranges, the Karakoram, the Himalayas and the Hindu Kush, in the northern uplands of KPK and the Northern Tribal Areas. The Baluchistan Plateau is mostly dry soil, surrounded by dry mountains. In Sindh province, the Thar Desert lies to the east, the Rann of Kutch to the southeast and the Kirthar range to the west. Punjab province is generally a flat, alluvial plain with five major rivers that overlook the higher area and ultimately merge into the River Indus, which flows south towards the Arabian Sea (Mirza et al., 2003). The Islamic Republic of Pakistan is located between latitudes 24° and 37° North and longitudes 62° and 75° East. To the east of Pakistan is India, to the west Iran, to the north China, to the northwest Afghanistan, and to the south the Arabian Sea. A geographic map is shown in Fig. 1.
PAKISTAN ENERGY PROFILE SITUATION Pakistan's energy profile is heavily reliant on oil, liquefied gasoline and natural gas, which account for about 85% of the total supply of 44,465 (million tons of oil equivalent) (Mirza et al., 2003). The coal contribution is not more than 4.5% of the overall supplies, nuclear energy provides 1.1%, and the remaining 9.2% is supplied by hydroelectricity. In 2000, the country was producing nearly 56,000 barrels of crude oil per day, meeting almost 15% of domestic oil demand. The remaining 85% was imported from the Middle East at a cost of 2.4 billion US dollars, equivalent to 30% of the state's entire export income (GoP, 2001). The country's economy is therefore heavily dependent on oil imports. The solid-fuel coal assets are relatively large, with a capacity of approximately 2,265 million tons. On the other hand, domestic coal is not utilized to a large extent in Pakistan due to its low quality in terms of heating value and its large sulphur content. For that reason, domestic coal production is only about 3.3 million tons annually (World Energy Council, 2001). In the period 2008-09, energy resources such as natural gas, oil, nuclear, coal and Liquefied Petroleum Gas (LPG) contributed 48.3, 32.1, 11.3, 7.6 and 0.6%, respectively, of the primary energy supply. The share of primary energy supplies by various sources is given in MTOE (PEYB, 2009).

Fig. 1: Geographic map

PAKISTAN POWER SECTOR PROFILE At Pakistan's independence, there was 60 MW of power generation capacity for a population of 31.5 million, resulting in a utilization of 4.5 kWh per individual. In 1964-1965, power generation capacity rose to 636 MW and energy production to about 2,500 kWh. By the end of 1970, capacity had increased from 636 to 1,331 MW with the installation of thermal and hydroelectric power houses. In 1980 the system capacity touched 3,000 MW, and it quickly climbed to more than 7,000 MW in 1990-91. Rapid development in Karachi in the 1990s also witnessed tremendous industrial and commercial construction, leading up to a sudden boost in the demand for electricity. As a result, the Karachi Electric Supply Company (KESC) was granted a licence for generation, transmission and distribution.

PAKISTAN PRESENT ENERGY SCENARIO Installed power by sector: Table 1 shows the total installed power generation capacity from different sectors in Pakistan in the year 2012-2013 (ADB, 2006). Figure 2 shows the installed power generation capacity by sector for the year 2012-2013; from Fig. 2, it is observed that wind energy, a freely available source, is not utilized for the generation of electricity. Power generation by sector (reports): Figure 3 shows the generation capacity by sector in the year 2012-2013; from the graph it is observed that different sources have been utilized for the generation of electricity, including wind energy, but the share of wind potential in the generation of electricity is near zero, even though it is a source that can never be exhausted. Power demand by sector (reports): Figure 4 shows the total power consumed in different sectors.

Study area: The windy area mapped is about 1000 km of the coastal line of Sindh. According to the survey, the windy areas of the Sindh coastal line are given here (Reports.Pakmet). Twenty sites have been selected for wind power potential assessment: Badeen, Baghhan, Chuhr Jamalee, DHA Khi, Ghharo, Golarchee, Haks Bay, Haiderabad, Jaamshoro, Jatee, Katee Bandr, KHI, Matlee, Mirpursakro, Nooryabad, Sajawal, ShahBandar, Talhaar, Thano Bula Khan and Thattaas, as shown in Fig. 7.
Global wind power scenario: Globally, cumulative production capacity is forecast to increase to almost 500 gigawatts by 2016, more than double the figure recorded in 2011 (Fig. 6).

Sindh wind power sector scenario: Wind speeds along the Sindh coastal line are approximately 5 to 7 m/sec (Sheikh, 2010). It is estimated that Pakistan has wind power generation potential of about 20,000 MW, out of which about 11,000 MW lies along the coastal line of Sindh. Globally, renewable power resources have huge potential: they can meet world power demand, bring new sources into the markets, secure viable power supplies for extended periods, and reduce the burden on global atmospheric conditions. They also offer commercially attractive opportunities to meet the prerequisites for energy services, mainly for rural areas and developing countries; they will produce employment openings for the local public and promote indigenous trade on the industrial and manufacturing side. It is therefore expected that proper utilization of the wind potential within Sindh may fulfil the demand created by the power crisis. The Alternative Energy Development Board signed agreements in 2004 to install wind farms; in this regard, at the moment sixteen companies are engineering, selling and delivering wind turbines. The plan was to complete a wind power capacity of about 2000 MW by the end of the year 2010 (Muneer and Asif, 2007).

Table 2 shows the classification of wind: each class gives the wind speed in m/sec and the wind power in watts/m2 at 30 m height and at 50 m altitude above ground level. The international standards of wind power generation classification are shown in Table 2. Sites suitable for large wind power generation turbines are marked in class number 5 and above, which fall in the outstanding categories. Class number 4 is also being considered for the further development of the country. Categories number 1 and 2 are only suitable for smaller wind turbines.

Technical survey data: Figure 8 shows the wind potential at the 20 selected regions of Sindh at 10 m height. Figure 9 shows the wind potential at the 20 selected regions of Sindh at 30 m height. Figure 10 shows the wind potential at the 20 selected regions of Sindh at 50 m height.

ESTIMATED RESULTS AND DISCUSSION Typically, power generation by wind energy technologies is a significant factor. In Fig. 12 the power generated in kWh per year is shown; it is observed from the figure that the maximum power is generated, on average, from the Sajawal region. In Fig. 13 the power generated in kWh per year is shown; it is observed from the figure that the maximum power is generated, on average, from the Nooryabad region. In Fig. 14 the power generated in kWh per year is shown; it is observed from the figure that the maximum power is generated, on average, from the Jamshoro region.
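To make the height dependence of these survey figures concrete, the sketch below extrapolates a 10 m wind speed to 30 m and 50 m with the 1/7 power law and evaluates the wind power density P/A = 0.5*rho*v^3, together with the annual energy for an assumed capacity factor; the input speed, turbine rating and capacity factor are illustrative, not the measured survey values.

```python
# Minimal sketch, assuming the 1/7 power law for vertical extrapolation and the
# standard wind power density formula P/A = 0.5 * rho * v**3. Values are illustrative.
rho = 1.225          # air density at sea level (kg/m^3)
alpha = 1 / 7        # power-law exponent for open terrain (assumed)

def speed_at(v_ref, h_ref, h):
    """Extrapolate wind speed from reference height h_ref to height h."""
    return v_ref * (h / h_ref) ** alpha

v10 = 5.5                         # example speed at 10 m (m/s)
for h in (30, 50):
    v = speed_at(v10, 10, h)
    pd = 0.5 * rho * v ** 3       # wind power density (W/m^2)
    print(f"h = {h} m: v = {v:.1f} m/s, power density = {pd:.0f} W/m^2")

# Annual energy of a turbine of rated power P_r with capacity factor CF:
P_r, CF = 600e3, 0.40             # 600 kW turbine, 40% capacity factor (assumed)
print(f"annual energy = {P_r * CF * 8760 / 1e6:.0f} MWh")
```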
CONCLUSION

Sindh province has an electricity shortfall of about 1,500 to 2,000 MW in the summer season and 1,000 to 1,500 MW in the winter season. Sindh has considerable potential for harnessing wind energy for power generation: it is estimated that about 11,000 MW can be generated from the wind potential of the Sindh coastal lines alone. The study considered 20 locations along the coastal lines of the Sindh region for wind energy. Wind energy is a free and environmentally friendly source of energy; it would protect against pollution and automatically reduce oil imports, thereby improving the socioeconomic conditions of human life. From the results, it is observed that among the 20 locations, Jamshoro has the maximum wind potential, with an annual average wind speed of about 7 m/s at a height of 30 m and about 8.5 m/s at 50 m; internationally, a speed of 8.5 m/s at 50 m falls in the excellent category, with an annual average capacity factor of about 40%. Finally, it is suggested that Sindh, Pakistan must proceed to exploit this free source for the generation of electricity.

Fig. 2: Installed power generation capacity by sector
Fig. 6: Global wind power plant scenario
Fig. 7: Map of windy areas of Sindh region
Table 1: Installed power generation capacity by sector
Table 2: International wind power generation classification
2,860.2
2015-06-05T00:00:00.000
[ "Engineering" ]
Comparison of modified Mohr–Coulomb model and Bai–Wierzbicki model for constructing 3D ductile fracture envelope of AA6063

Ductile fracture of metal often occurs in the plastic forming of parts. The establishment of a ductile fracture criterion can effectively guide the selection of process parameters and avoid ductile fracture of parts during machining. The 3D ductile fracture envelope of AA6063-T6 was developed to predict and prevent its fracture. Smooth round bar tension tests were performed to characterize the flow stress, and a series of experiments was then conducted to characterize ductile fracture, including notched round bar tension tests, compression tests, and torsion tests. These tests cover a wide range of stress triaxiality (ST) and Lode parameter (LP) to calibrate the ductile fracture criterion. Plasticity modeling was performed, and the predicted results were compared with the corresponding experimental data to verify the plasticity model. Then the relationship between ductile fracture strain and ST and LP was constructed using the modified Mohr–Coulomb (MMC) model and the Bai–Wierzbicki (BW) model to develop the 3D ductile fracture envelope. Finally, two ductile damage models were proposed based on the 3D fracture envelope of AA6063. Through the comparison of the two models, it was found that the BW model had the better fitting quality, with a goodness of fit (R²) of 0.9901. The two models had relatively large errors in predicting the fracture strain of the SRB tensile test and the torsion test, but the prediction errors of both models were within the acceptable range of 15%. In finite element simulation, the evolution of ductile fracture can be well reproduced by the two models; however, the BW model can predict the location of fracture more accurately than the MMC model.

Introduction

AA6063 aluminum alloy is an Al-Mg-Si alloy with excellent plasticity and machinability. It is widely used in the automobile, construction, and energy industries. Plastic forming processes are commonly classified into hot working and cold working. Cold working is generally performed at room temperature and leads to an improvement in mechanical properties and a decrease in plasticity due to work hardening. In fact, cold working can easily result in ductile damage, meaning it is critical to investigate the damage formation mechanism at room temperature [1,2]. Establishing mathematical models is the main means of studying the damage formation mechanism of metals [3]. The stress state of a point is usually represented by stress triaxiality (ST) and the Lode parameter (LP). From the microscopic point of view, ductile damage is caused by the nucleation, growth, and aggregation of micro-voids under high ST. Under lower ST, shear banding is the main cause of ductile fracture [4,5]. Based on this damage formation mechanism, three types of damage model have been constructed: (1) failure criteria, (2) porous plasticity theory, and (3) phenomenological damage models [6]. For the first type of model, the damage is obtained by integrating internal stress variables. Cockcroft and Latham deemed that the maximum tension stress is the main factor in the failure of materials, and the Cockcroft–Latham ductile failure criterion was established on this basis [7]. Chen et al.
[8] believed that the equivalent stress is also one of the factors that leads to material failure and, building on the work of Cockcroft and Latham, established a normalized Cockcroft–Latham ductile fracture criterion. Meanwhile, various ductile failure criteria have been established for different applications. For example, the Brozzo model [9] and the Oyane model [10] were used to predict sheet metal forming and round bar drawing. While failure criteria are simple, they cannot predict complex deformation paths and large plastic deformation. In terms of the second approach, Gurson first introduced the void volume fraction f into the Mises yield criterion and established the Gurson plastic potential [11]. Following this, Tvergaard et al. [12] established the modified Gurson plastic potential by considering the effects of the nonuniform stress field around the voids on the softening behavior of the material. Needleman et al. [13] held the view that the nucleation of voids conforms to a strain-controlled normal distribution and that void nucleation leads to a change in the void volume fraction f. Through the research of the above authors, the so-called GTN constitutive model, which takes into account the nucleation, growth, and convergence of voids, was established. Following further research on the void evolution mechanism, a number of more accurate GTN constitutive models have been developed. The advantage of this model is that it is more accurate for damage prediction under relatively low ST. However, its many parameters make calculations complicated and sensitive to mesh size. For the third approach, the phenomenological damage model was first used to solve engineering problems. Phenomenological damage models can be divided into two types: coupled and uncoupled damage models. According to continuum theory, Lemaitre coupled the equivalent stress σ with the damage factor D to describe the dissipation potential in the deformation process and established a coupled phenomenological damage model [14]. Later, it was found that the application of Lemaitre's damage model was limited by the range of ST. Considering this, Xue established the relationship between the damage factor D and ST and LP and revised the model into the Xue phenomenological damage model [15]. He et al. coupled the damage variable D with dislocation density, grain size, and recrystallization fraction, and established the damage constitutive equation of 52100 bearing steel balls [1]. Given that the coupled phenomenological damage model requires numerous experiments to calibrate its parameters, a large number of researchers have established uncoupled phenomenological damage models. In an uncoupled phenomenological damage model, the damage parameter D accumulates linearly with plastic strain. The relationship between the damage parameter D and the equivalent plastic fracture strain $\varepsilon_f$ is

$$D = \int_0^{\bar{\varepsilon}_p} \frac{\mathrm{d}\bar{\varepsilon}_p}{\varepsilon_f(\eta, \theta)},$$

where η is the ST and θ is the LP, and fracture is assumed to occur when D reaches 1. Therefore, it is essential to construct a 3D fracture envelope in the space $(\varepsilon_f, \eta, \theta)$ for uncoupled damage. Bai and Wierzbicki first constructed the 3D fracture envelope in the space $(\varepsilon_f, \eta, \theta)$, which defines the fracture strain as a function of ST and LP [16]. The Mohr-Coulomb model was formed based on the maximum shear stress to predict shear damage. Bai and Wierzbicki used the relationship between the principal stresses and ST and LP to transform the Mohr-Coulomb model into the MMC model [17]. Working on the basis of the MMC model, Lou et al.
established the concept of an ST threshold and proposed that when the ST is lower than the threshold, ductile fracture will not occur [18]. Elsewhere, Mohr et al. transformed the Mohr-Coulomb model into the space of ST, LP, and equivalent plastic strain $\varepsilon_p$ to devise a mixed stress/strain version and established the KHSP fracture criterion [19]. The phenomenological damage model parameters are calibrated from experimental fracture tests. For a precise calibration of a ductile fracture criterion, fracture experiments covering a wide range of stress states must be performed [20]. The fracture of materials at positive ST can be obtained by using smooth round bar (SRB) tension tests, notched round bar (NR) tension tests, and plate tension tests [21][22][23]. The fracture of materials at nearly zero ST can be obtained by torsion and notched tube tension tests [24,25]. The fracture of materials at negative ST can be obtained by a cylindrical compression test. Phenomenological damage models of aluminum alloys have been reported. Lou developed a weight function for an uncoupled shear ductile fracture criterion and applied it to model the ductile fracture of AA6082-T2 [26]. Many researchers have applied phenomenological damage models to the cold deformation of aluminum alloys, e.g., Mirnia et al. [28]. However, a 3D fracture envelope has not yet been constructed for the AA6063 alloy. In this paper, the fracture characteristics of AA6063 under different deformation conditions were studied, and two uncoupled phenomenological damage models based on the MMC and BW ductile fracture criteria were established to describe the effect of ST and LP on plastic damage. By comparing the two phenomenological damage models, the prediction abilities of the different ductile fracture criteria for AA6063 were assessed. To obtain a wide range of ST variation, tension, torsion, and compression tests were conducted to determine the relationships between the fracture strain and ST and LP. The relationship between the stress state and fracture strain was determined by finite element (FE) simulation. The 3D fracture envelope was calibrated by the GA optimization technique using the relationship between ST, LP, and fracture strain. The developed damage models were implemented into Abaqus 6.14 for FE simulation. The accuracy of the models was verified by the specific force-displacement curves and the distribution of the damage field. Finally, the two models were compared in terms of fitting quality and damage prediction ability.

Material Description

AA6063-T6, which is a heat-treatable Al-Mg-Si alloy with medium strength, was used in these experiments. It is widely used in the construction and transportation industries, and its chemical composition is shown in Table 1.

Experimental Process

To observe the fracture of the specimen under as many different stress states as possible, various fracture tests were designed in the initial stress state space, as shown in Figure 1. The tests included tension, torsion, and compression tests. The plasticity and fracture behavior of the material at high ST were obtained by the SRB tension tests. The design of the SRB tension test piece was based on GB/T 228-2002, as shown in Figure 2(a). To estimate the initial ST of the notched specimens, the Bridgman-type relation of Eq. (2) was used:

$$\eta = \frac{1}{3} + \ln\left(1 + \frac{a}{2R}\right),$$

where η is the ST, R is the radius of the neck in the notched bar specimen, and a is the radius of the smallest cross section. Figure 2 shows the shapes and dimensions of the samples for the room-temperature fracture tests. The smallest cross-sectional diameter of the three types of NR was 5 mm, and the values of R in Eq.
(2) were 20 mm, 10 mm, and 5 mm, respectively. The dimensions and shapes of these specimens are shown in Figure 2(a). The gauge lengths of the SRB and NR specimens were both 25 mm. The SRB and NR tension tests were carried out at a speed of 0.25 mm/s until the specimen broke. Because the hydrostatic pressure of the material is close to zero in the pure shear state, shear tests were carried out to obtain the ductile fracture behavior at low ST. Through the torsion test, the specimen reached a pure shear state. The shape and dimensions of the torsion specimen are shown in Figure 2(b). An MTS809 axial-torsional test system was used in this experiment. The test speed was 0.1 rad/s, and the gauge length was 50 mm. Figure 2(c) shows the compression test specimens. To study the effect of the height-to-diameter ratio on the plasticity and ductile fracture of the material, two ratios were considered (1.5 and 1). Through the compression test of the cylinder, the damage evolution and repair under negative ST were studied. To reduce the friction between the cylindrical specimen and the experimental equipment, lubricant was applied before the experiment. To maintain the quasi-static condition, a compression speed of 0.015 mm/s was used.

Experimental Results

Macroscopic fracture surfaces were investigated to study the fracture mechanism of the specimens. Figure 3 displays the macroscopic fracture phenomena of the different tension specimens. The macroscopic fracture surfaces of the specimens with high initial ST mainly present a "cup cone" shape. The cup-cone fracture has a macroscopic appearance with numerous dimples in the central interior area of the fracture surface and an oblique appearance at the outer surface. The cup-cone fracture is caused by the dominant effect of the tension stress in the center region of the fractured surface [29]. Figure 4 shows the force-displacement relationships of the SRB, NR5, NR10, and NR20 specimens. The softening stage of the material can be seen from the image. The SRB can be regarded as a notched bar with an infinite notch radius. It can be seen from Figure 4 that the fracture elongation increases with increasing notch radius, while the peak load increases with decreasing notch radius. The load decreases markedly after necking. Most experimental results show that the ST increases evidently when necking occurs [23,24]. The increase in ST affects the plasticity of the material. This is because regions with high triaxiality tend to exhibit small plastic deformation and large volume deformation, which leads to the release of more elastic strain energy at points with high ST. When more elastic strain energy is released, stress concentrates at that point, which hinders the metal flow and leads to a decrease in metal plasticity [30]. Therefore, as Figure 4 shows, the NR5 specimen, with the largest initial ST, was the first to enter the softening stage and to fracture, while the SRB specimen, with the minimum initial ST, was the slowest to do so. Figure 5 shows the force-displacement relationships of the specimens after cylindrical compression tests with different height-to-diameter ratios. It can be seen from Figure 5 that the bearing capacity of the cylinder with a height-to-diameter ratio of 1 is greater than that of the specimen with a ratio of 1.5. Due to the friction between the specimen and the test equipment, the specimen undergoes severely nonuniform deformation, resulting in a drum shape.
Due to the increase in the cross-sectional area and the inhibition of void nucleation under negative ST, the load-bearing capacity of the specimen increases; thus, the force-displacement curve increases continuously. Figure 6(a) shows the torque-twist angle curve obtained from the torsion test at a rate of 0.1 rad/s, and Figure 6(b) shows the morphology of the fracture surface of the specimen. It can be observed that the fracture surface is very smooth, without any macroscopic cracks, due to the shear stress acting along the cross section in the pure shear state. In the tension fracture tests with high ST, the specimens fracture along the direction perpendicular to the maximum tension stress, while in the torsion fracture tests with low ST, the specimens fracture along the direction of maximum shear stress.

Stress-strain Relationship

We assumed that the material is isotropic and conforms to the Von Mises yield criterion. Firstly, the engineering stress-strain curve in Figure 7 was calculated from the SRB force-displacement curve in Figure 4. According to the stress-strain curve in Figure 7, Young's modulus of elasticity was 14611 MPa. The true stress-strain curve in Figure 7 was calculated up to the ultimate tension strength, thereby excluding the effects of damage when fitting the hardening model in Section 4. The true stress-strain relationship accounts for the reduction of the cross section of the specimen under tension; the load per unit cross section of the specimen therefore increases, which makes the true stress-strain curve lie above the engineering stress-strain curve. The commercial software Abaqus 6.14 was used to simulate the SRB tension test. The geometric model is shown in Figure 8(a). To simplify the calculation, only the part of the specimen within the gauge length was simulated. The mapped mesh was created using 8-node linear hexahedral elements with reduced integration and a size of 0.5 mm. The corrected curve of equivalent stress versus equivalent plastic strain (stress-strain relationship) was determined from the true curve in a multi-linear form by trial and error: the correct equivalent stress-equivalent plastic strain relationship (Figure 8(b)) was obtained by comparing the force-displacement curves from the experimental and simulation results until they matched, as shown in Figure 8(a).

Plasticity Model

The Von Mises yield criterion is used to describe the flow of the metal. The strain hardening behavior is described by the power hardening law

$$\bar{\sigma} = A + B\,\bar{\varepsilon}_p^{\,n},$$

where A, B, and n are material constants obtained by fitting the experimental data in Figure 8(b). Additionally, for the needs of the ductile fracture model in Section 5, the Swift hardening law was also used to fit the experimental data [31]:

$$\bar{\sigma} = K\,(\varepsilon_0 + \bar{\varepsilon}_p)^{m},$$

where K is a material constant, ε₀ is a prestrain-like material constant, and m is the strain hardening exponent. From the results of the SRB tension test, the plasticity parameters required for the plastic model in Table 2 were obtained by fitting the data in Figure 8(b). The fracture experiments were simulated using these parameters. These FE simulations, described in the following, were done using Abaqus/Explicit with the power hardening law. All simulations used C3D8R 8-node linear brick elements with reduced integration and hourglass control. The critical elements were usually selected where the equivalent plastic strain is largest and damage is most likely to occur.
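Returning briefly to the hardening laws above: fitting the Swift form to measured data is a small nonlinear least-squares problem. A minimal Python sketch, using made-up illustrative data points rather than the values behind Table 2, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equivalent stress-plastic strain pairs digitized from a
# corrected tension curve (illustrative values only, not the paper's data).
eps_p = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])
sigma = np.array([170., 185., 198., 212., 221., 228.])  # MPa

def swift(eps, K, eps0, m):
    """Swift hardening law: sigma = K * (eps0 + eps)**m."""
    return K * (eps0 + eps) ** m

# Bounds keep eps0 positive so the power stays real during the search.
popt, _ = curve_fit(swift, eps_p, sigma, p0=(300.0, 0.01, 0.2),
                    bounds=([0.0, 1e-6, 0.0], [np.inf, 1.0, 1.0]))
K, eps0, m = popt
print(f"K = {K:.1f} MPa, eps0 = {eps0:.4f}, m = {m:.3f}")
```

The fitted K and m are exactly the quantities that reappear as material constants in the MMC fracture criterion of Section 5.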
Based on previous findings [32,33], the critical element of a specimen was usually taken as the element with the largest equivalent plastic strain before fracture; the selection of critical elements is shown in Figure 9. Since the real mesh size is very small, the mesh in Figure 9 is only a schematic to indicate the location of the critical elements clearly. After the simulation, the stress states of the critical elements selected on the model were extracted from the FE model to obtain the stress states of the specimens in the other tests. The force-displacement curves of the SRB from simulations are compared with the corresponding experiments in Figure 8(a). The force-displacement curve obtained from the experiment is consistent with the simulation results, and necking can be observed in the simulation result. The variation of ST and LP with equivalent plastic strain extracted from the critical element is shown in Figure 10, where LP remains 1 while ST increases. This indicates that the critical element is continuously elongated along the axial direction [23]. The force-displacement curves of the NR specimens from simulations are compared with the corresponding experiments in Figure 11(a). The simulation results in the softening stage are not consistent with the experimental data because the plasticity model considers only the hardening law. The critical elements for obtaining the stress state variables were located on the axis in the notched region where the crack occurred, as shown in Figure 9(b-d). The variation of ST and LP with equivalent plastic strain is shown in Figure 11(b). It can be seen that the growth rates of the ST of the specimens with different notch radii are similar. The simulation of the compression test was performed as an axisymmetric case, with the punches and dies treated as discrete rigid bodies. The force-displacement curves from simulations are compared with experiments in Figure 12(a); they agree well for both height-to-diameter ratios. The critical elements for obtaining the stress state variables were located in the middle of the axis, as shown in Figure 9(e)-(f). The variation of ST and LP with equivalent plastic strain is shown in Figure 12(b). The torque-twist angle curve obtained from the torsion test is shown in Figure 13(a) and is consistent with the simulation results. The critical elements were located on the outer surface of the model, as shown in Figure 9(g). The variation of ST and LP with plastic strain is shown in Figure 13(b). It was found that ST and LP approach zero, which means the specimen fails by shear stress.

Figure 9 The finite element mesh with critical elements: a SRB, b NR20, c NR10, d NR5, e cylinder with a height-to-diameter ratio of 1.5, f cylinder with a height-to-diameter ratio of 1.0, and g torsion test

Ductile Fracture Criterion

ST is a dimensionless variable that describes the stress state of a point in a continuous medium by a ratio of stress invariants; it is defined as

$$\eta = \frac{\sigma_m}{\bar{\sigma}},$$

where σ_m and σ̄ are the mean stress and the equivalent stress. Another important parameter is the LP, which is related to the third stress invariant as follows:

$$\theta = 1 - \frac{2}{\pi}\arccos\!\left(\frac{3\sqrt{3}}{2}\,\frac{J_3}{J_2^{3/2}}\right),$$

where J₂ and J₃ represent the second and third deviatoric stress invariants. In this paper, two phenomenological models were used to predict the ductile fracture of AA6063 in cold deformation.
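To make the two stress-state parameters concrete, here is a short Python sketch that evaluates η and the normalized Lode parameter θ (in the arccos form reconstructed above) directly from a 3×3 stress tensor; the two test tensors are generic textbook cases, not values from the paper.

```python
import numpy as np

def stress_state(sig):
    """Return stress triaxiality and Lode parameter for a 3x3 stress tensor."""
    sig_m = np.trace(sig) / 3.0                   # mean (hydrostatic) stress
    s = sig - sig_m * np.eye(3)                   # deviatoric stress
    J2 = 0.5 * np.sum(s * s)                      # second deviatoric invariant
    J3 = np.linalg.det(s)                         # third deviatoric invariant
    sig_eq = np.sqrt(3.0 * J2)                    # von Mises equivalent stress
    eta = sig_m / sig_eq                          # stress triaxiality
    xi = np.clip(1.5 * np.sqrt(3.0) * J3 / J2**1.5, -1.0, 1.0)
    theta = 1.0 - 2.0 / np.pi * np.arccos(xi)     # normalized Lode parameter
    return eta, theta

# Uniaxial tension: expect eta = 1/3 and theta = 1.
print(stress_state(np.diag([200.0, 0.0, 0.0])))
# Pure shear: expect eta = 0 and theta = 0.
tau = 100.0
print(stress_state(np.array([[0, tau, 0], [tau, 0, 0], [0, 0, 0]], float)))
```

The two printed cases reproduce the limiting states discussed in the text: LP stays at 1 in axisymmetric tension and at 0 in torsion.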
The damage prediction abilities of the two ductile fracture criteria were verified for AA6063 under different loading conditions. The MMC criterion was derived from the Mohr-Coulomb criterion. The expression for the fracture strain as a function of the stress state is as follows [23]:

$$\varepsilon_f = \left\{ \frac{K}{c_2}\left[ \sqrt{\frac{1+c_1^2}{3}}\,\cos\!\left(\frac{\theta\pi}{6}\right) + c_1\!\left(\eta + \frac{1}{3}\sin\!\left(\frac{\theta\pi}{6}\right)\right) \right] \right\}^{-1/m},$$

where c₁ and c₂ are constants that need to be calibrated by experiments, and K and m are constants from the Swift hardening law. The BW fracture locus, Eq. (12), can be written as [19]:

$$\varepsilon_f = \left[\tfrac{1}{2}\left(D_1 e^{-D_2\eta} + D_5 e^{-D_6\eta}\right) - D_3 e^{-D_4\eta}\right]\theta^2 + \tfrac{1}{2}\left(D_1 e^{-D_2\eta} - D_5 e^{-D_6\eta}\right)\theta + D_3 e^{-D_4\eta},$$

where D₁, D₂, D₃, D₄, D₅, and D₆ are constants that need to be calibrated by experiments. It can be seen from Figures 10, 11(b), 12(b), and 13(b) that ST and LP vary continuously during specimen deformation (Table 3). To accurately calibrate the ductile fracture criterion, the integral mean values of ST (η) and LP (θ) were calculated by Eqs. (13) and (14):

$$\bar{\eta} = \frac{1}{\varepsilon_f}\int_0^{\varepsilon_f} \eta\,\mathrm{d}\bar{\varepsilon}_p, \qquad \bar{\theta} = \frac{1}{\varepsilon_f}\int_0^{\varepsilon_f} \theta\,\mathrm{d}\bar{\varepsilon}_p.$$

The GA optimization technique was used to calculate the parameters of Eqs. (8) and (12) in MATLAB. The calculated model parameters are shown in Table 4. The 3D fracture envelopes of the MMC and BW models are depicted in Figure 14(a) and (b), respectively. It can be seen that the 3D fracture envelopes of both models agree well with the results of the tension and torsion tests. The fitting quality of the different models was evaluated by R² using Eq. (15):

$$R^2 = 1 - \frac{\sum_i \left(\varepsilon_{f,i}^{\mathrm{exp}} - \varepsilon_{f,i}^{\mathrm{pred}}\right)^2}{\sum_i \left(\varepsilon_{f,i}^{\mathrm{exp}} - \bar{\varepsilon}_f^{\mathrm{exp}}\right)^2}.$$

To compare the prediction abilities of the two models, their prediction errors were calculated according to Eq. (16):

$$\mathrm{error} = \frac{\left|\varepsilon_f^{\mathrm{pred}} - \varepsilon_f^{\mathrm{exp}}\right|}{\varepsilon_f^{\mathrm{exp}}} \times 100\%.$$

Figure 15 shows the comparison of the prediction errors between the two models; on the whole, the prediction abilities of the two models are similar. While both models returned noticeable errors in the SRB tensile test and the torsion tests, the errors for the other ductile fracture tests were smaller. This may be because of the serious necking of the specimen in the SRB test, which led to an inaccurate measurement of the fracture strain; secondly, the ductile fracture strain under shear conditions is smaller, which reduces the prediction accuracy of the models for specimens at low and medium ST. The prediction abilities of the two models were similar for the fracture strain of the NR tensile tests. However, the BW ductile fracture criterion was superior to the MMC criterion in predicting the fracture strain in the SRB tensile test and the torsion tests. Since the LP hardly varies in axisymmetric tension and pure shear tests, the relationship between the tension or shear fracture strain and ST can be obtained for an LP of 1 or 0, respectively, as shown in Figure 16. It can be seen that the fracture strain decreases with increasing ST under any deformation condition. According to Bai and Wierzbicki [16,17], when the ST is low (η < −0.33), the material is in a negative hydrostatic stress state. In this stress state, the nucleation and growth of voids are restrained, which makes the occurrence of ductile fracture very unlikely. However, at higher ST (η > 0.4), the fracture of the material is caused by the nucleation, growth, and propagation of voids, and the material is highly prone to ductile fracture. Therefore, under any deformation condition, the fracture strain decreases with increasing ST. The shear fracture strain is smaller than the tension fracture strain as a whole, indicating that cracks are more likely to occur during shear deformation. The experimental data points are close to the curve of the BW model.
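To illustrate how a calibrated locus of this kind would be used, here is a brief Python sketch that evaluates the six-parameter BW form above and accumulates the uncoupled damage indicator D along a loading path; both the coefficients and the path are illustrative placeholders, not the calibrated values of Table 4.

```python
import numpy as np

def bw_fracture_strain(eta, theta, D):
    """Six-parameter Bai-Wierzbicki fracture locus (Eq. (12))."""
    D1, D2, D3, D4, D5, D6 = D
    f1 = D1 * np.exp(-D2 * eta)
    f2 = D3 * np.exp(-D4 * eta)
    f3 = D5 * np.exp(-D6 * eta)
    return (0.5 * (f1 + f3) - f2) * theta**2 + 0.5 * (f1 - f3) * theta + f2

# Illustrative coefficients only (the calibrated values are in Table 4).
coeffs = (0.9, 1.6, 0.35, 0.9, 0.6, 1.2)

# Made-up loading path: ST drifting upward at constant LP = 1, as in SRB tension.
eps = np.linspace(0.0, 0.5, 200)
eta_path = 1.0 / 3.0 + 0.4 * eps
damage = np.cumsum(np.gradient(eps) / bw_fracture_strain(eta_path, 1.0, coeffs))
print(f"damage indicator at end of path: D = {damage[-1]:.2f} (fracture at D = 1)")
```

Incrementally integrating D against the locus, rather than evaluating the locus at a single stress state, is what lets the uncoupled model handle the continuously varying ST and LP histories shown in Figures 10 to 13.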
Fracture Prediction

The developed approaches were implemented into Abaqus 6.14 for FE simulation to further verify the damage models and observe the damage distribution. The two developed approaches were applied to simulate two selected tests (SRB and NR20), respectively.

Application of the MMC Model

Figure 17(a) compares the force-displacement responses from simulations with the experiments. The solid curve was obtained by FE simulation, and the scattered points show the experimental data. The MMC model achieved a better prediction in the softening stage than the results obtained without considering damage effects in Figure 11(a). However, compared with the experimental results, the fracture of the specimen in the simulation occurred earlier than in the experiment. The crack in the SRB initiated at the center of the neck and propagated horizontally at first, as shown in Figure 17(b). The crack in NR20 initiated in the middle of the thickness at the notched region and propagated horizontally.

Application of the BW Model

The results of the BW model were similar to those of the MMC model. The BW model was more accurate than the MMC model in predicting the fracture point on the force-displacement curve in Figure 18(a). Through the analysis of the damage field shown in Figure 18(b), it was found that the damaged area for the BW model was slightly smaller. The prediction abilities of the two models for the damage of AA6063 were similar, although the accuracy of the BW model was slightly higher than that of the MMC model.

Conclusions

(1) The mechanisms of ductile fracture were investigated under various loading conditions. The initial ST of the experiments varies from −0.33 in the cylindrical compression tests to 0.67 in the NR5 tension tests. Specimens with high ST tend to fail due to tension stress, while with decreasing ST specimens tend to fail along the direction of maximum shear stress. (2) The plasticity model of AA6063 was established via the SRB tensile test. By combining the ductile fracture tests with FE simulation, the ductile fracture criteria of AA6063 were established for both the BW model and the MMC model. (3) The 3D fracture envelope was represented by two models: the MMC model and the BW model. Compared with the experimental results, both models can describe the softening of the material due to the initiation and propagation of damage. The fitting quality and prediction ability of the BW model were slightly better than those of the MMC model. While the fracture locations predicted by the two models differed from the actual situation, the accuracy of the BW model was higher than that of the MMC model.
6,154.4
2020-10-24T00:00:00.000
[ "Materials Science" ]
Product Lifecycle Management with the Asset Administration Shell

Product lifecycle management (PLM) as a holistic process encompasses the idea generation for a product, its conception, and its production, as well as its operating phase. Numerous tools and data models are used throughout this process. In recent years, industry and academia have developed integration concepts to realize efficient PLM across all domains and phases. However, the solutions available in practice need specific interfaces and tend to be vendor dependent. The Asset Administration Shell (AAS) aims to be a standardized digital representation of an asset (e.g., a product or a service).

Introduction

Products and systems are becoming increasingly digitized. Most people are aware of examples from the consumer market, such as self-driving cars or smart homes. However, many industrial products that have been equipped with software in recent years are also part of this trend, for example, power supplies or connectors. The holistic organization of the product lifecycle of these products and systems, based on methodical and organizational measures using IT systems, is called product lifecycle management (PLM). PLM is a significant enhancement of the concept of product data management (PDM), which includes the organization of CAD drawings; the management of product data, such as the bill of materials (BOM); and the application of corresponding project management processes [1].

In practice, numerous PLM tools have become established. However, none of these tools manage all product information, and in operational practice several systems are used, such as CAD systems and simulation systems [2]. This heterogeneity of the IT landscape makes a continuous data chain in product lifecycle management difficult, since the implementation of such a data chain between the different IT systems requires a lot of effort. Although there are standards for a subset of the data, such as STEP or JT, the outlined situation still poses challenges to companies, including the following, according to [3]: due to individual item naming in the systems, different interpretations of an artefact occur within companies. In contrast, systems engineering is independent of concrete IT tools, while PLM includes a strong focus on IT tools. Nevertheless, any activity aiming for improved, standardized data integration throughout the PLM process contributes to systems engineering research.

Digital Twin

The Digital Twin is seen as a major tool for increasing productivity in PLM processes in the age of industrial digitalization. Therefore, a number of publications focus on this concept, creating several definitions of the term [11]. However, these are not always of value in the practical implementation of Digital Twins in PLM processes. Although contradictory definitions do not hinder an investigation of the development of Digital Twins in PLM processes, the authors have presented an alternative approach to dealing with Digital Twins: the so-called Digital Twin theory, proposed during the work on the TeDZ project (see section "funding"). Therefore, it is only briefly explained here. Despite this focused view, the reader should be aware that a tremendous amount of research on Digital Twins has been conducted in recent years and continues to be carried out.
Digital Twin theory assumes that throughout the PLM process there are multiple stakeholders with different perspectives on the digital representation of products and systems who are working with this digital representation at the same time. These assumptions form the basis of several hypotheses of a Digital Twin, namely [14]:

1. A Digital Twin is a digital representation of an asset.
2. A Digital Twin is located in several places simultaneously.
3. A Digital Twin has multiple states.
4. The Digital Twin has a context-specific state in a specific interaction situation.
5. The information model for Digital Twins is infinitely large. It is called the real information model.
6. The real information model can be finitely approximated for a specific application scenario, becoming a rational information model.
7. The rational information model cannot be stored in a single place.
8. The rational information model is never completely visible.

For an explanation of these hypotheses, the reader is referred to the original article. However, Figure 1 shows some of the elements named in the hypotheses and their relation to the PLM process (Figure 1: PLM infrastructure for Digital Twins; adapted from ref. [14]).

Asset Administration Shell (AAS)

Digital Twin theory is a theoretical concept. It does not indicate any guidelines for creating Digital Twins. A central technology implementing Digital Twins can be seen in the Asset Administration Shell (AAS). However, there are many other technologies available implementing Digital Twins.

The "Plattform Industrie 4.0", a German consortium of politics, companies, and research organizations, introduced and specified the concept of the AAS [15]. In order to promote this concept, the Industrial Digital Twin Association (IDTA) was recently founded. The AAS is a standardized digital representation of an asset. An asset is anything of value and can be a physical or logical object (e.g., a product or a service). The AAS contains digital models of various aspects of the asset in the form of submodels and describes the asset's technical functionality by displaying it via a standardized interface.

The AAS data model consists of three main classes: the asset class, the submodel class, and the view class (see Figure 2). The asset class provides information on the kind of asset (type or instance) and the asset identification submodel. The submodel class refers to a well-defined domain or subject matter (e.g., the asset identification and drive parameters). Submodels can be considered the main information store of an AAS, as they provide the central data on an asset. There is no limit to the number of submodel classes. When developing an AAS of an asset, any new submodel can be defined. This research work used submodel classes to model PLM data, such as the bill of materials (BOM).
The view class provides a projection of the AAS model seen from a particular perspective, omitting entities that are not relevant to that perspective. In addition to these main classes, the AAS data model defines many more classes, providing detailed information about the asset (e.g., data type classes). Furthermore, the AAS specification defines the representation of the AAS data model in standard data interchange formats, such as Extensible Markup Language (XML) and JavaScript Object Notation (JSON).

The AAS specification does not define how to implement an AAS. However, in order to provide standardized implementations of the AAS from different vendors, several initiatives are currently active, including the AASX Package Explorer. This is an open-source software tool enabling a user to create, edit, and view an AAS [16]. Furthermore, it provides access to the AAS via an Open Platform Communications Unified Architecture (OPC UA) or Message Queuing Telemetry Transport (MQTT) interface. This work used the AASX Package Explorer to demonstrate the submodels created for the purpose of the research. The second initiative to mention is the BaSys 4.0-Middleware, which provides an open-source platform called BaSyx, supporting the implementation of a vendor-specific AAS [17,18].

In order to provide PLM data in an AAS submodel structure, there must be corresponding data models. However, the "Plattform Industrie 4.0" has defined only a few such AAS data models (e.g., the digital nameplate). As none of these fit the purpose of this work, which focused on a general strategy to enable data integration throughout the PLM process with the AAS and not on the data models themselves, existing data models outside the AAS specification were used. They are explained in the next section.
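Before turning to those data models, a rough illustration of the submodel idea may help. The following Python sketch builds a deliberately simplified, JSON-serializable stand-in for a BOM submodel; the identifiers are hypothetical and the structure is much flatter than the normative AAS metamodel defined by the "Plattform Industrie 4.0".

```python
import json

# Simplified, illustrative structure only; the normative AAS metamodel
# (semanticId references, qualifiers, typed submodel elements, ...) is richer.
bom_submodel = {
    "idShort": "BillOfMaterials",
    "semanticId": "urn:example:submodel:bom",  # hypothetical identifier
    "submodelElements": [
        {"idShort": "SL-Top_component_piece", "modelType": "Entity"},
        {"idShort": "SL-Bottom_piece", "modelType": "Entity"},
    ],
}

aas = {
    "idShort": "SmartLight_AAS",
    "asset": {"idShort": "SmartLight", "kind": "Instance"},
    "submodels": [bom_submodel],
}

print(json.dumps(aas, indent=2))
```

The point of the sketch is the open-ended submodel list: any number of domain-specific submodels (here a BOM; later an ALM submodel) can be attached to one asset without changing the surrounding structure.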
Selected Data Models in PLM Processes

There is no standardized all-encompassing PLM data model. However, there are phase-specific and domain-specific standards, respectively. For example, Siemens, a market leader in the area of PLM software, defined the PLM XML schema [19]. It is a Siemens-internal data model of its PLM tool Teamcenter for exchanging data between two Teamcenter instances using XML files. Furthermore, it supports application integration through workflows. PLM XML is a very comprehensive model. However, it is a proprietary format, and there is no widespread acceptance in practice.

Another example is derived from the ALM domain: there is no standardized data model for ALM. However, as requirements engineering is an important task in ALM, there is the so-called Requirements Interchange Format (ReqIF) [20], enabling the exchange of requirements between different tools. Similarly to PLM XML, ReqIF enables data exchange between IT tools using XML. ReqIF originated in the automotive industry. However, since its standardization by the Object Management Group (OMG), it has also been applied in other industries and fields. ReqIF covers requirements as well as the documents containing these requirements. Furthermore, as requirements are normally written in natural language, ReqIF is not limited to requirements but also supports other artefacts. For example, this work uses ReqIF to exchange assembly instruction data containing text and pictures.

OSLC-Based Data Integration

When discussing the exchange of data throughout a product lifecycle, one cannot neglect to mention the OSLC standard [9]. OSLC is designed to connect data and to create a digital thread across domains, applications, and organizations. It uses the concept known as the Resource Description Framework (RDF) for data exchange between different applications. In order to enable OSLC-based data exchange, the tools involved must provide an appropriate interface. In theory, data exchange between OSLC consumers and OSLC providers is tool independent. However, in reality, tool integration can contain vendor-specific elements, meaning that only tools from a single vendor fit together optimally. Furthermore, the OSLC standard is not designed to facilitate anything other than data access between tools. For example, operational data (e.g., device temperature) from an asset are not part of the OSLC design concept.

Although there is no all-encompassing PLM data model, tool vendors offer proprietary solutions for PLM data integration throughout the PLM process, as shown in the following example: as previously mentioned, the authors have already worked on concepts for PLM/ALM integration [6]. These activities made use of the Siemens toolchain with Teamcenter (PLM) and Polarion (ALM). Data exchange between these tools is based on the OSLC standard (see Figure 3). In addition to various use cases from the industry, the research team created several other use cases for the so-called "SmartLight", which is a simple mechatronic product involving the mechanical, electronic, and software domains (depicted in Figure 6). Among the use cases under investigation was the assignment of a requirement, managed in ALM, to a design element storing CAD data. Although these CAD data are managed in Teamcenter, they can only be edited in the CAD tool Siemens NX. The data integration between NX and Teamcenter is based on an internal Siemens interface.
Figure 4 shows an actual view of the Polarion user interface as an example of the PLM/ALM interface. The requirement "SLv2-69 Co_Modulhousing_Material" (ALM) is linked to the design elements "SL-Bottom_piece" and "SL-Top_component_piece" (PLM). To create such links, the Siemens toolchain provides specific dialogs. Furthermore, with the so-called "Delegated UI" technology, Teamcenter data can be edited in Polarion and vice versa. Such features are vendor specific and are not covered by the OSLC specification. Therefore, the authors perceive an increasing need for vendor-independent approaches to linking such data.
Major Research Goals and Research Method

This research work addresses product lifecycle management in the sense of Industry 4.0. As there are many aspects of Industry 4.0, the research team set itself the task of focusing on the application scenarios defined in [21], namely:

• TAP: Transparency and Adaptability of Delivered Products;
• OSP: Operator Support in Production;
• SP2: Smart Product development for Smart Production;
• IPD: Innovative Product Development.

Industrial experts and renowned researchers have identified these application scenarios as enhancements of the current state of the art. Therefore, the intention of this work is to create results that allow their implementation in practice. When discussing the project results in Section 6, descriptions of these application scenarios are given.

This research work is divided into the creation of the conceptual basis to solve the problem, a practical implementation evaluation, and theory building. It is based on the methodological approach of Design Research (DR). DR analyses the application of designed IT artefacts to understand, explain, and improve information systems. As there are several definitions of DR, this work follows the explanations in [22]. DR consists of two activities: the design of one or more IT artefacts and theory building. The IT artefacts (in this work, these are the several AASs) and their contribution to an overall solution have a local practical reference to the SmartLight, which is designed and produced in a research and test factory. The local results obtained in the design of the IT artefacts are used to feed the theorizing with the aim of forming generally applicable results from them. As this research work is ongoing, the theorizing process is also ongoing.
The design of an IT artefact consists of the following phases: problem analysis, building, and evaluation. The building and evaluation phases can be repeated several times, which was done in this work. Figure 5 shows the steps of the methodological approach and their assignment to the DR phases.

AAS-Based Engineering Process

One focus of this work lies in the engineering process, especially in PLM/ALM integration. This section explains why and how the AAS is utilized in this context.

Requirements

When aiming to create a vendor-independent standardized concept of PLM/ALM integration, the requirements for such a concept must be defined. The major requirements identified by the research team are as follows:

• R1: The concept must be based on a standard in order to enable vendor-independent integration.
• R2: The underlying data model must provide comprehensive access to all data created for or by an asset in order to include data other than PLM/ALM data.

As well as these high-level requirements, the concept should fulfil specific requirements for PLM/ALM integration. However, these requirements are often user specific. In order to discuss the validity of the concept described in this article, several requirements (R3 to R8) developed in the industrial case study described in [8] were adopted as references, given that Teamcenter is the PLM tool and Polarion is the ALM tool. Further requirements are described in [8]. However, they are highly specific to the industrial case study. As this work aims to provide a new general strategy for data integration in a PLM process, such specific requirements must be validated in the future.

Design and Implementation

Figure 6 shows the general design concept for providing PLM/ALM data to an AAS data model, which is explained using the SmartLight. The PLM data were exported from the PLM tool using the PLM XML data format. The same was carried out for the ALM data using the ReqIF data format. Both sets of information were stored in separate XML files.
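As a sketch of what the subsequent import step consumes, the following Python fragment parses a heavily simplified, hypothetical ReqIF-like snippet; the real ReqIF schema uses XML namespaces and a much deeper structure (SPEC-OBJECTS, ATTRIBUTE-VALUE elements, and so on), so this is illustrative only.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified stand-in for a ReqIF export.
reqif_snippet = """
<REQ-IF>
  <SPEC-OBJECT IDENTIFIER="SLv2-69">
    <LONG-NAME>Co_Modulhousing_Material</LONG-NAME>
  </SPEC-OBJECT>
</REQ-IF>
"""

root = ET.fromstring(reqif_snippet)
for spec_object in root.iter("SPEC-OBJECT"):
    ident = spec_object.get("IDENTIFIER")
    name = spec_object.findtext("LONG-NAME")
    # In a real importer, each requirement would become an ALM submodel element.
    print(f"import requirement {ident}: {name}")
```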
In the next step, the research team created an AAS for the SmartLight using the AASX Package Explorer. To provide the exported PLM/ALM data to the SmartLight AAS, the research team developed two so-called "importers", enabling the importation of the data into the AASX Package Explorer. The PLM data were imported as a PLM submodel, while the ALM data were imported as an ALM submodel. After importing these data, elements of both submodels could be related to each other. For this, the AAS RelationshipElement class was used. This class contains the members first and second, which are known as referable elements.

Figure 7 shows an example of an AAS-based PLM/ALM relation. For a better comparison, this is the same example as that described in Figure 4; namely, it demonstrates the properties of the SmartLight requirement "Co_Modulhousing_Material", which is part of the requirements specification managed in the ALM tool. All the properties of this requirement were created during the ReqIF import. As shown in Figure 8, the requirement contains two instances of the RelationshipElement class. The first element is the requirement itself. The second element is the PLM data item "SL-Bottom_piece". The semantics of this relation are part of the description and have the following meaning: the requirement "Co_Modulhousing_Material" is implemented by the design element "SL_Bottom_piece".
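To make the first/second semantics tangible before turning to the mirrored PLM-side entry described next, here is a minimal illustrative sketch; plain Python dictionaries stand in for the AAS RelationshipElement instances, and the path strings are hypothetical, since the normative metamodel uses typed references rather than strings.

```python
# Illustrative only: one logical relation, expressed as two directed records.
alm_side = {
    "first": "ALM/Requirements/Co_Modulhousing_Material",  # viewpoint owner
    "second": "PLM/BOM/SL-Bottom_piece",
    "semantics": "is implemented by",
}
plm_side = {
    "first": "PLM/BOM/SL-Bottom_piece",                    # viewpoint owner
    "second": "ALM/Requirements/Co_Modulhousing_Material",
    "semantics": "implements",
}

# first/second are swapped and the semantic label is inverted between
# the two viewpoints on the same relation.
for rel in (alm_side, plm_side):
    print(f'{rel["first"]} --{rel["semantics"]}--> {rel["second"]}')
```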
The same relationship was also added to the PLM submodel element, which was imported by the PLM XML importer (see Figure 9). The differences to the instance of the RelationshipElement class that is part of the ALM item are that the first and second elements are swapped and the semantic meaning is changed. The relationship from the viewpoint of the PLM item has the following meaning: the design element "SL_Bottom_piece" implements the requirement "Co_Modulhousing_Material". It can be seen that two instances of the RelationshipElement class are required for one relation, which may be considered an unnecessary double data entry. However, a RelationshipElement class describes the relation from the viewpoint of one item. For example, there is a significant difference between an "implements" relationship and an "implemented by" relationship. Therefore, as there are two viewpoints on one relation, two instances of the RelationshipElement class are needed.

This concept allows an AAS data model to be enriched with PLM/ALM data and means that relations between these data can be created. However, this strategy is just an initial starting point to enable PLM/ALM integration with an AAS. The following requirement validation explains this statement.

Requirements Validation

The concept fulfils the main requirement R1 (see Section 4.1). As previously mentioned, the AAS was developed by a number of companies and organizations. As it is designed to provide all digital data for an asset, R2 is also fulfilled. It has the potential to go well beyond the current most popular strategy for lifecycle integration using OSLC.
This concept allows an AAS data model to be enriched with PLM/ALM data and means that relations between these data can be created. However, this strategy is just an initial starting point to enable PLM/ALM integration with an AAS. The following requirements validation explains this statement.

Requirements Validation

The concept fulfils the main requirement R1 (see Section 4.1). As previously mentioned, the AAS was developed by a number of companies and organizations. As it is designed to provide all digital data for an asset, R2 is also fulfilled. It has the potential to go well beyond the current most popular strategy for life cycle integration using OSLC.

The explanations in Section 4.2 demonstrate that the concept can fulfil requirement R3: links can be set between any submodel elements, including between elements that represent ALM requirements and PLM design revision items.

As the ReqIF standard allows exporting not only single requirements but also documents containing requirements, a document can be imported as a submodel element into an AAS. As a submodel element, it can be linked with any other element, including a PLM item. Therefore, requirement R4 is fulfilled.

A user can configure the item attributes (e.g., author and status) which should be included in a ReqIF or PLM XML export. Therefore, the properties of the imported submodel elements depend on the specific user configuration. Thus, requirement R5 is fulfilled.

Requirements R6, R7, and R8 cannot be validated without explanatory comments. The AAS data model does not prevent these requirements from being fulfilled. However, the implementation of these requirements depends on tool support. The AAS data model is simply the data storage of the information on an asset. The graphical views on this data model depend on the tools using the data model. This research work used the AASX Package Explorer to view the data. Although it provides basic editing functionality, it is not designed to be a tool for real-world scenarios. Therefore, to fulfil these requirements, the PLM/ALM tools should implement the corresponding functionalities. To illustrate a possible graphical user interface, Figure 10 shows a vendor-specific implementation of requirement R7. This is the same example as that in Figure 4 and shows trace links between Teamcenter and Polarion.

AAS-Based Production Infrastructure

After the presentation of how the AAS can be used in engineering processes, this section discusses the potential role of the AAS in future production environments. In the TeDZ research project (see section "funding"), infrastructure for order-controlled production of the aforementioned SmartLight based on the AAS was designed and implemented. The associated production facilities are located in the SmartFactoryOWL, an Industry 4.0 research and test factory jointly operated by the Fraunhofer Application Centre IOSB-INA and the OWL University of Applied Sciences and Arts in Lemgo, Germany.
Requirements of the AAS-Based Production Infrastructure

First, an overview of the identified requirements for a working environment for an AAS-based production infrastructure is given. These requirements relate to a scenario in which a customer orders a specific variant of SmartLight and in which the production facility autonomously selects an appropriate assembly workstation. In order to gain an improved understanding of the requirements, the following aspect must be considered: The project team agreed that the AAS could be seen as a synonym for the Digital Twin, because the AAS meets many characteristics of the Digital Twin. In particular, it is a useful means to prove the hypotheses of Digital Twin theory (see Section 2.2). The project team identified the following requirements:

• R10 (automatic creation of a Digital Twin): during the ordering process, an instance AAS must be created automatically based on a template.
• R11 (extend/change a Digital Twin): since a server hosts the AAS and communication protocols are implemented, it must be possible to change and/or extend the AAS.
• R12 (versioning a Digital Twin): two different types of the same product must be created to demonstrate how to manage the production of different products.
• R13 (views on the AAS): to protect certain information, types and instances must be implemented. Instance information shows only selected information of that type.
• R14 (production based on AAS data): the manufacturing process must use instance and type data when producing an asset.
• R15 (automatic production data): the production process must enter manufacturing information to create the "Digital Nameplate" (see Section 2.3).

These requirements served as input for the design and implementation of the AAS-based production infrastructure.

Design and Implementation

In order to understand the implementation of the production infrastructure and the above requirements, the AAS model is explained in more detail. Regarding the AAS specification [15], the first step in following the meta-model description is to separate the information models into types and instances, as displayed in Figure 11. This separation is important for manufacturing companies, since they aim to protect certain information from going public and must manage different versions of the product they produce. Therefore, types contain all information from the engineering phase (e.g., the ALM data), including proprietary data and files, whereas instances contain the data that are only related to one specific instance of an asset (e.g., the production data and operation data). This information is needed by companies that use the product but do not manufacture it. Since the information provided by the types may be useful to the instance owner, a link to the type is provided by each instance using the derivedFrom property, which is part of each AAS. To secure proprietary data against outside access via this link, types can be modified to provide only public data. More information on the separation of data into types and instances can be found in [15] (p. 29).
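As a rough illustration of the type/instance split, the sketch below models a type AAS with proprietary engineering data, a restricted public view, and order-specific instances that point back to their type via a derivedFrom-style reference. It is a simplified mock-up of the description above, not the normative AAS meta-model; all names are ours:

import uuid

class TypeAAS:
    def __init__(self, product, engineering_data, public_keys):
        self.id = f"urn:type:{product}"
        self.engineering_data = engineering_data   # proprietary (e.g., PLM/ALM data)
        self.public_keys = public_keys             # keys exposed to instance owners

    def public_view(self):
        # R13: outside access via the type link only sees selected data
        return {k: self.engineering_data[k] for k in self.public_keys}

class InstanceAAS:
    def __init__(self, type_aas, configuration):
        self.id = f"urn:instance:{uuid.uuid4()}"   # R10: created per order
        self.derived_from = type_aas.id            # link back to the type
        self.configuration = configuration         # e.g., the selected colour
        self.production_data = {}                  # filled during manufacturing (R15)

smartlight_type = TypeAAS(
    "SmartLight",
    {"assembly_instructions": "step-by-step ...", "cad_model": "internal.prt"},
    public_keys=["assembly_instructions"],
)
order = InstanceAAS(smartlight_type, {"colour": "red"})
print(order.derived_from, smartlight_type.public_view())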
The video mentioned in the Supplementary Materials section demonstrates the AAS-based order-controlled production process implemented in the SmartFactoryOWL. The overall production and engineering infrastructure of SmartLight created in this work contains the following elements:

1. PLM/ALM data and tools: a. to create the engineering data of the product (see Section 4); b. to trigger the production of the asset by creating the instance(s).
2. Production system: a. system to produce the asset; b. to store production, engineering, and operation data.
3. AAS registry: a. provides addresses for all AASs involved in the production.

The bill of materials (BOM) is provided by the PLM system as part of the PLM XML data. Furthermore, the ALM system is used to store the instructions for the manual assembly of SmartLight (step-by-step instructions including pictures). These data are imported into an AAS type and are linked by the already explained RelationshipElements (see Section 4). Moreover, the type AAS is provided with a configuration submodel. This submodel can offer, for example, different colours. To facilitate communication, all AASs are equipped with submodels that provide the fundamental properties to communicate using OPC UA, REST, and MQTT. Moreover, a so-called AAS registry application is added that can redirect communication by providing server addresses to all AASs based on their unique identifiers.

To complete the infrastructure, an additional AAS is added to represent the production system. This AAS contains a list of addresses of the AAS instances of SmartLight that can be manufactured using the corresponding production system. Figure 12 shows the infrastructure model.

The dynamic processes in this infrastructure are as follows: First, PLM and ALM data are provided to the AAS type(s), where they are used by an ordering system to generate orderable objects. Second, the configuration submodel is added, describing additional product options that can be selected in the shop. If the customer chooses a configuration and orders a product, the ordering system triggers a process creating a product instance. Furthermore, it adds certain properties into submodels to control the production of the product. In this process, the instance AAS can communicate with other AASs using their accompanying services. For example, an instance can communicate with the production system to register as a product that has to be manufactured. Whether the manufacturing process is possible or not is decided autonomously by comparing the production requirements with the workstation options. The corresponding production requirements are based on the data imported from the PLM system. If the result of this "dialog" is positive, the address of the instance AAS is added to a production queue within the production system AAS.

By determining the address of a product instance, the AAS of the production system can access type data by following the "Instance-Type" connection via the derivedFrom property. In this way, it accesses the product's assembly instructions and can display them at the workstation to assist workers with manual assembly. Moreover, the instance connection allows the production system to write production-related data into the instance, for example, a digital nameplate, a user manual, or general production data. After production is finished, the product and the AAS instance can be delivered to the customer.
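A compact sketch of this order-to-queue handshake is given below. The class names and identifiers are hypothetical, and the in-process function calls stand in for what the real infrastructure does over OPC UA, REST, or MQTT:

class Registry:
    """Toy AAS registry: resolves unique identifiers to server addresses."""
    def __init__(self):
        self._addresses = {}
    def register(self, aas_id, address):
        self._addresses[aas_id] = address
    def resolve(self, aas_id):
        return self._addresses[aas_id]

def can_manufacture(production_requirements, workstation_options):
    # The "dialog": production is feasible if every requirement derived
    # from the imported PLM data is among the workstation's capabilities.
    return set(production_requirements) <= set(workstation_options)

production_queue = []          # lives in the production system AAS
registry = Registry()
registry.register("urn:instance:42", "opc.tcp://factory/aas/42")

requirements = ["manual_assembly", "screwdriving"]   # from the PLM import
options = ["manual_assembly", "screwdriving", "labelling"]
if can_manufacture(requirements, options):
    production_queue.append(registry.resolve("urn:instance:42"))
print(production_queue)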
Requirements Validation

The usage of an AAS-based infrastructure provides a unique opportunity for companies to obtain a standardized digital interface to access any asset during a product lifecycle. Although the described infrastructure is a simplified example of production infrastructure in a research facility, it has the potential to serve as a template for real production environments. The following evaluation of the requirements described in Section 5.1 indicates that this statement is justified:

• R10 (automatic creation of a Digital Twin): the AAS is automatically created by the ordering system when a customer orders a product.
• R11 (extend/change a Digital Twin): several AASs are integrated into the system. By exchanging data, they mutually modify their data models. Additional information can be added to a specific AAS at any time.
• R12 (versioning a Digital Twin): it is possible to order different configurations of the same product. For each of these configurations, the production infrastructure creates a separate AAS instance representing a specific product configuration.
• R13 (views on the AAS): data access to the AAS type is restricted or can be controlled. This ensures the protection of sensitive product-type data.
• R14 (production based on AAS data): as shown in Figure 11, the production infrastructure significantly uses the concept of the AAS, which is a major goal in this work.
• R15 (automatic production data): the AAS of the assembly station enters production data into the AAS instance of the product. In this way, it enriches the information model delivered to the customer.

However, this is an initial assessment for a limited example. For a more in-depth analysis, the example should be made more complex, and more requirements are needed.
Evaluation of Industry 4.0 Application Scenarios

As stated in Section 3, a major goal of this work was to find ways to implement the Industry 4.0 application scenarios given in [22]. A discussion of the achievement of this target follows.

OCP-Order-Controlled Production:
• Description [22]: "This application scenario revolves around orders and describes how to dynamically organize the production resources required for the order".
• Evaluation: The OCP is among the main aspects of the presented infrastructure in Section 5. In the AAS-based infrastructure, an AAS is created for each order. This "order AAS" communicates with the AASs of the production facilities to control the manufacturing process.

AF-Adaptable Factory:
• Description [22]: "In contrast to the OCP scenario-which focuses on the order-this application scenario focuses on a specific production resource and explains how it can be made adaptable and how this affects both the resource supplier and the system integrator."
• Evaluation: The AF is another scenario that is fulfilled by the given infrastructure. Since the AAS uses Industry 4.0-based communication protocols to determine whether or not an assistance system is suitable for the production of the product, it is also possible to use this technology for other production operations to further increase AF capabilities.

SAL-Self-organizing Adaptive Logistics:
• Description [22]: "This application scenario is closely linked to the OCP application scenario, but focuses on the entire inter- and intra-logistics structure."
• Evaluation: Even if the presented infrastructure does not contain a logistics node, it is highly adaptable for all kinds of self-driven processes. It is mentioned in Section 5 that the given infrastructure can be enlarged by adding more AASs into it. These AASs do not have to be production machinery but can also be representatives for materials or an automated delivery system to deliver them to the place where they are needed.

VBS-Value-based Services:
• Description [22]: "This application scenario describes how service can be integrated into the value network by making specific product and/or process information available on an IT platform."
• Evaluation: The AAS data model is based on the idea of collecting all information connected to its asset. This also includes real-time data from production machinery or from the product itself. This work previously discussed the problems around the proprietary interfaces and vendor-dependent solutions for the gathering of such information. Overcoming this interaction barrier could be the first step towards data collection that is accessible for every service available to a company or customer.

TAP-Transparency and Adaptability of Delivered Products:
• Description [22]: "In contrast to the VBS scenario-which focuses on the value network-this application scenario focuses on the product and how to use an IT platform to ensure that products are transparent and adaptable."
• Evaluation: Using the AAS distinction between types and instances, it is possible to obtain a running digital representative in the hands of the customer, which still has a connection to the vendor. Since the customer owns the instance, the customer has full control over the data that are made accessible and those that are not. In this state, the instance can be used to deliver software updates and live data analysis to guide and help the customer to use the product to its full potential.
OSP-Operator Support in Production:
• Description [22]: "This application scenario describes how new technologies can provide support for production operators."
• Evaluation: The use of an assistance system in the given infrastructure (see Section 5) addresses the OSP scenario. The assistance system automatically uses engineering data to guide the operator through the assembly process in the most effective way.

SP2-Smart Product Development for Smart Production:
• Description [22]: "This application scenario describes collaborative product engineering, which is based on product requirements and is aimed at creating a seamless engineering process and enabling production and service to access the information they require."
• Evaluation: Data import from PLM and/or ALM systems into the AAS (see Section 4) ensures that the data provided by the AAS are always up to date, accessible, and readable by production resources.

IPD-Innovative Product Development:
• Description [22]: "This application scenario describes new methods and processes in product development and is focused on the early phases of product development."
• Evaluation: The solution presented in this work does not address this application scenario. However, it does not prevent the realization of the IPD scenario.

In addition to this initial, fundamentally positive assessment, there are other advantages of using the AAS throughout the PLM process: If companies implement AAS interfaces in common software (CAD tools, ERP systems, etc.), the IT infrastructure can remain almost unchanged, which would further increase the attractiveness of this new concept. As the AAS technology is based on the idea of IoT, it supports the ongoing development of Industry 4.0 towards an integrated horizontal supply chain. Companies using the AAS are positioning themselves for the future. However, the potential success of this novel concept depends on several factors.

Success Factors for AAS-Based Product Lifecycle Management

The AAS as the core of this concept is a relatively new development. Although it has been around since the initial Industry 4.0 discussions, only the version 2.0 release from November 2019 can be considered relevant in practice. Therefore, the success of this research work will depend, among others, on the following factors:

1. Submodels for the AAS;
2. AAS market success;
3. Tool vendor support.

1. Submodels for the AAS: Among the main problems to be solved is the definition of standardized submodels (e.g., for PLM/ALM data). In this work, the existing PLM/ALM data models PLM XML and ReqIF, respectively, were used. However, PLM XML in particular cannot be considered as a universal data model for PLM. Moreover, ReqIF only covers the requirements data of ALM. Although it can be used for elements other than requirements elements, deeper discussions on its suitability as a generic ALM data format are needed. In addition, the semantics of the relationship links between submodel elements should be defined. Such semantics describe the meaning of the relationship (e.g., a PLM design element "implements" an ALM requirement).
2. AAS market success: Another challenge is the market success of the AAS itself. This research work will only achieve practical significance if the concept of the AAS is successful. To advance the AAS standard in practice, the Industrial Digital Twin Association (IDTA, https://idtwin.org, accessed on 31 May 2021) was founded in March 2021. The IDTA is supported by major industrial companies. Therefore, the chances for the wide dissemination of the AAS are good. However, a large amount of effort is still needed to achieve this. In particular, as with any digitization measure, companies must consider the ROI when implementing an AAS-based infrastructure.

3. Tool vendor support: Even if the AAS is successful, the results of this work will only gain practical importance if the PLM tool manufacturers support it. Tool vendor support should provide much smoother handling than described in this article. Manual data export and import should be avoided. In addition, editing of the AAS data model should be conducted directly in the PLM tools and not in an additional tool, such as the AASX Package Explorer. OSLC, however, targets similar requirements. It is currently more widespread in practice than the AAS. Therefore, tool vendors will need viable reasons to change today's OSLC-based implementations. An important advantage of the AAS can be seen in its much more generic concept, which supports more scenarios than simply the data exchange between tools (e.g., the article described the scenario of an order-controlled production process in detail).

Without significant action addressing these success factors, it will be challenging to establish the proposed concept in practice, as practitioners need ready-to-use solutions.

Conclusions

The development of digitized products and services requires interdisciplinary development approaches. The engineering domain (mechanics, electric/electronic, and software) and the production domain should work together seamlessly in order to maintain efficiency and effectiveness in a digitized industry. Product lifecycle management provides methods and tools to enable cross-domain development. However, these solutions are often vendor dependent and target the use of a single vendor's toolchain. Although OSLC or other existing standards are used as connecting technology, it is challenging to provide smooth data integration throughout the holistic PLM process using tools of different vendors.

The Asset Administration Shell aims to establish a standard interface in Industry 4.0. It is a comprehensive digital representation of an asset (e.g., the digitized product). This work investigated the ability of the AAS to facilitate data integration throughout the PLM process. In the engineering phase, it uses submodels to propagate PLM/ALM data to an AAS. As there are no standardized PLM/ALM data models, this research work utilizes the existing formats PLM XML and ReqIF. When importing such data into an AAS data model, the relations between these data can be created. Hence, a central requirement of any PLM/ALM integration can be met using the AAS.

Moreover, a production infrastructure based on the AAS was created. The AAS data used in this infrastructure are provided in part by the engineering process, which enables the semi-automatic reuse of engineering data in a production environment. The concept was implemented and tested at the SmartFactoryOWL, producing a sample product (SmartLight).

Figure 4. Example of a vendor-specific PLM/ALM data link (screenshot of Siemens Polarion).
Figure 5. Design research approach of this work.
Figure 6. The design concept for PLM/ALM mapping to AAS.
Figure 11. Instance and type information according to the AAS specification [15].
Figure 12. Example production infrastructure based on several AASs.

Assuming Teamcenter is the PLM tool and Polarion is the ALM tool, the following requirements must be fulfilled:
• R3: It must be possible to link a Polarion work item with a Teamcenter item and vice versa.
• R4: It must be possible to link a Teamcenter item with a Polarion document.
• R5: It must be possible to access the data of any attribute of a Polarion work item from the linked Teamcenter item and vice versa.
• R6: It must be possible to link more than one Polarion work item with one Teamcenter item in a single action.
• R7: When creating a traceability report in Polarion, all linked Teamcenter items must be included and vice versa.
• R8: The status of a Polarion work item can only be changed if the status of the linked Teamcenter item has a dedicated status.
12,394.2
2021-06-23T00:00:00.000
[ "Engineering", "Computer Science" ]
Superintegrability and Kontsevich-Hermitian relation We analyze the well-known equivalence between the quadratic Kontsevich-Penner and Hermitian matrix models from the point of view of superintegrability relations, i.e. explicit formulas for character averages. This is not that trivial on the Kontsevich side and seems important for further studies of various deformations of Kontsevich models. In particular, the Brezin-Hikami extension of the above equivalence becomes straightforward.

Introduction

According to [1,2,11], the Hermitian matrix model is equivalent to the quadratic Kontsevich-Penner model in the sense of the identity (1), with

$p_k = \operatorname{tr} L^{-k}$   (2)

and the Gaussian integrals normalized to unity,

$\int e^{-\frac{1}{2}\operatorname{Tr} M^2}\, dM = \int e^{-\frac{1}{2}\operatorname{tr} X^2}\, dX = 1.$   (3)

Our goal in this note is to make this identity compatible with the superintegrability property [3], which provides explicit expressions for averages of the Schur functions $\chi_R[M]$ on both sides. Here the Schur function [4] is a symmetric function of the eigenvalues of the matrix $M$, and it is understood as a function of power sums of the eigenvalues. In the Kontsevich case, these averages were not yet discussed in the literature but are somehow similar to the ones appearing in the cubic Kontsevich [5] (see also [6]) and generalized Kontsevich models [7], and in the Brezin-Gross-Witten model [8].

The basic formula

Our consideration is based on a simple generalization of (3) to the Gaussian Hermitian model in the external field with partition function (4), where $M$ is the Hermitian $N \times N$ matrix, $dM$ is the Haar measure, the $g_k$ are parameters, and the integral is understood as a formal power series in these parameters $g_k$. The correlators are defined accordingly, and we normalize the measure in such a way that $\langle 1 \rangle = 1$. The average of the Schur functions is easily performed in this model with the help of the Wick theorem following the line of [9,5], and it gives relation (6). This relation (6) is a direct analogue of the result of [5] for the rectangular complex matrix model, which further develops its analogy with the Kontsevich family.

The main relation

There are many proofs of formula (1), see, for instance, [11]. Here we are going to derive (1) using the basic formula (6). To this end, we make a change of variables $M \to L^{-1} M$ in the integral (4) so that the averages in (6) are evaluated with the Gaussian weight $e^{-\frac{1}{2}\operatorname{Tr} M^2}$, but the Schur functions become dependent on $L^{-1} M$, where $\langle \ldots \rangle$ is understood as the average with the Gaussian measure (3), i.e. without the external matrix. At the l.h.s. of (1), it is sufficient to apply the Cauchy formula [12],

$\exp\left(\sum_k \frac{p_k \bar p_k}{k}\right) = \sum_R \chi_R\{p\}\, \chi_R\{\bar p\},$   (8)

where the sum goes over all Young diagrams $R$, in order to obtain the character expansion of the l.h.s. Applying the result of [3] (formula (3) above), we get an explicit expression for the l.h.s. of (1).

Now consider the r.h.s. of (1). In [1,2], the determinant in this formula entered in a positive degree (see, e.g., [11, Eqs. (I.1), (I.17)]), which gave rise to a simple definition of the integral, since it was just the Gaussian average of a polynomial. This came at the price of various imaginary units and of the minus sign in (2). Here we consider the integral at the r.h.s. of (1) as a power series at large $L$. Applying once again the Cauchy formula, we obtain its character expansion, and, using now (8), we finally come to the relation (12).

An example. As a small illustration of how this works, in the first approximation to (12), we used the relation $\langle X_{ij} X_{kl} \rangle = \delta_{jk}\delta_{il}$, whence $\langle \operatorname{tr} X^2 \rangle = n^2$, for the Gaussian correlators.
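The Cauchy expansion invoked twice above can be checked order by order. A minimal sympy sketch (notation is ours), verifying the identity up to total degree 2 with the grading $\deg p_k = k$:

import sympy as sp

p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2')

# Schur functions of the partitions [1], [2], [1,1] in terms of power sums.
chi = {
    (1,):   lambda a1, a2: a1,
    (2,):   lambda a1, a2: (a1**2 + a2) / 2,
    (1, 1): lambda a1, a2: (a1**2 - a2) / 2,
}

# Degree-<=2 truncation of exp(sum_k p_k q_k / k).
lhs = 1 + p1*q1 + sp.Rational(1, 2)*(p1*q1)**2 + sp.Rational(1, 2)*p2*q2

# Sum over Young diagrams with at most two boxes.
rhs = 1 + chi[(1,)](p1, p2)*chi[(1,)](q1, q2) \
        + chi[(2,)](p1, p2)*chi[(2,)](q1, q2) \
        + chi[(1, 1)](p1, p2)*chi[(1, 1)](q1, q2)

print(sp.simplify(lhs - rhs))  # -> 0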
At the same time, the corresponding contribution to the l.h.s. of (12) is $\chi_{[2]}\{\operatorname{tr} L^{-k}\}\,\chi_{[2]}\{N\} - \chi_{[1,1]}\{\operatorname{tr} L^{-k}\}\,\chi_{[1,1]}\{N\}$, in accordance with (12).

Brezin-Hikami identity

Relation (6) allows one to write a more symmetric version of (12), which appears to be the Brezin-Hikami generalization [13,14] of the Chekhov-Makeenko relation (1). To emphasize the symmetry, we first write it in a slightly different notation, (13). In this identity, there are determinants of the tensor product, and we write $1$ instead of $\mathrm{Id} \otimes \mathrm{Id}$ in order to simplify the notation. By the Cauchy formula, the inverse determinant at the l.h.s. can be expanded into a sum of products of Schur functions, and the l.h.s. itself becomes a manifestly symmetric character sum. Clearly, the r.h.s. of (6) is just the same because of the symmetry $1 \leftrightarrow 2$, and this proves the relation (13).

An example. It is again instructive to look at the first approximation to (13). At the l.h.s., we have a clearly symmetric expression, where we used the relation $\langle M_{ij} M_{kl} \rangle = \delta_{jk}\delta_{il}$, whence $\langle \operatorname{Tr} M^2 \rangle = N^2$, for the Gaussian correlators.

Conclusion

In this paper, we exploited a new relation (6) for the Gaussian averages, and demonstrated that it stands behind the Chekhov-Makeenko identity (1) between the Hermitian and quadratic Kontsevich-Penner models, and behind its further Brezin-Hikami extension. In addition to these "practical" applications, one can also wonder about the theoretical meaning of our results. The superintegrability property (3) is a direct corollary of (6) at $L = 1$, with the obvious substitution of the $n \times n$ matrix $X$ by the $N \times N$ matrix $M$. This can make (6) a reasonable enhancement of (3). However, the inverse claim, i.e. whether (6) is directly implied by (3), remains unclear. A possible approach to this problem can require supplementing (3) with some version of the Wick theorem which can be used as a consistency relation for (3) and a source of some stronger statements like (6). Alternatively, one can just postulate (6) in addition to (3), but this requires a study of possible mutual restrictions on the two definitions. A separate interesting question is the relation between (3), (6) and the factorization property of single-trace Harer-Zagier functions [15,16]. All these issues remain for future work.
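As a numerical aside, the Gaussian correlators used in the examples above are easy to probe by Monte Carlo. A minimal sketch, assuming the normalization $e^{-\frac{1}{2}\operatorname{Tr} M^2}$ of (3); it checks $\langle \operatorname{Tr} M^2 \rangle = N^2$ and $\langle (\operatorname{Tr} M)^2 \rangle = N$, both consequences of the Wick rule $\langle M_{ij} M_{kl} \rangle = \delta_{jk}\delta_{il}$:

import numpy as np

rng = np.random.default_rng(0)
N, samples = 4, 200_000

tr2, tr1sq = 0.0, 0.0
for _ in range(samples):
    # Hermitian Gaussian with weight exp(-Tr M^2 / 2):
    # diagonal entries ~ N(0,1); off-diagonal complex with Re, Im ~ N(0, 1/2).
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    M = (A + A.conj().T) / 2
    tr2 += np.trace(M @ M).real
    tr1sq += np.trace(M).real ** 2

print(tr2 / samples)    # -> close to N**2 = 16
print(tr1sq / samples)  # -> close to N = 4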
1,372
2021-02-02T00:00:00.000
[ "Physics" ]
Inductance Formula for Rectangular Planar Spiral Inductors with Rectangular Conductor Cross Section In modern technology, inductors are often shaped in the form of planar spiral coils, as in radio frequency integrated circuits (RFIC's), 13.56 MHz radio frequency identification (RFID), near field communication (NFC), telemetry, wireless charging, and eddy current nondestructive testing devices, where the coils must be designed to a specified inductance. In many cases, the direct current (DC) inductance is a good approximation. Some approximate formulae for the DC inductance of planar spiral coils with rectangular conductor cross section are known from the literature. They can simplify coil design considerably. But they are almost exclusively limited to square coils. This paper derives a formula for rectangular planar spiral coils with an aspect ratio of up to 4.0 and having a cross-sectional aspect ratio of height to width not exceeding unity. It is based on physical principles, hence scalable and valid for any dimension and inductance range. The formula lowers the overall maximum error from hitherto 28 % down to 5.6 %. For specific application areas like RFIC's and RFID antennas, it is possible to reduce the domain of definition, with the result that the formula lowers the maximum error from so far 18 % down to 2.6 %. This was tested systematically on close to 194000 coil designs of exactly known inductance. To reduce the number of dimensions of the parameter space, dimensionless parameters are introduced. The formula was also tested against measurements taken on 16 RFID antennas manufactured as printed circuit boards (PCB's). The derivation is based on the idea of treating the conductor segments of all turns as if they were parallel conductors of a single-turn coil. It allows the inductance to be calculated with the help of mean distances between two arbitrary points anywhere within the total cross section of the coil. This leads to compound mean distances that are composed of two types of elementary ones, firstly, between a single rectangle and itself, and secondly, between two displaced congruent rectangles. For these elementary mean distances, exact expressions are derived. Those for the arithmetic mean distance (AMD) and one for the arithmetic mean square distance (AMSD) seem to be new. The paper lists the source code of a MATLAB® function to implement the formula on a computer, together with numerical examples. Further, the code for solving a coil design problem with constraints as it arises in practical engineering is presented, and an example problem is solved.

Introduction

Inductors are basic components of many electric and electronic devices. Nowadays, electronic circuits are produced as planar structures like microelectronic integrated circuits (IC's) and printed circuit boards (PCB's). Thus, inductors are often realized as planar spiral coils. This is the case in radio frequency integrated circuits (RFIC's) [1], in 13.56 MHz radio frequency identification (RFID) [2], near field communication (NFC) [3], and telemetry antennas [4], in wireless charging devices [5,6], and as eddy current sensors for nondestructive testing [7,8]. In these applications, the coils must be designed to a specified inductance. Hence, values of the design parameters resulting in the required inductance must be found. This represents a simple form of an inverse problem.
It can only be solved indirectly, by calculating the inductance of many coils, subject to any constraints, and by choosing the design whose inductance matches the predefined value best, e.g. using an optimization method. To do so, a method is needed to calculate the inductance of a coil from its design parameters. In principle, this can be done with the help of a field solver. In many cases, it suffices to know the low-frequency or direct current (DC) inductance (see sections 6 and 8). But even for designing a single inductor, creating the data file defining the layout is tedious, particularly if the coil has many windings, let alone if the calculation must be repeated for many different designs, as is the case in an inverse problem. Moreover, the computer run time may become very long. So, this way of solution is impractical.

The Greenhouse method [9] offers an analytical alternative. It allows precise calculations of the DC inductance. The method consists of dividing the coil into its constituent straight conductor segments and calculating their partial self-inductance and all mutual inductances between them separately, using analytical formulae and summing up all the contributions. But the method doesn't provide an inductance formula that explicitly depends on the design parameters, like the number of turns, the coil size, etc. Thus, for large numbers of turns and for solving inverse problems, the method becomes tedious.

Therefore, many researchers have worked on finding approximate inductance formulae that explicitly depend on the design parameters. Using such a formula is by far the easiest and fastest way to calculate coil inductance, particularly for solving inverse problems. Some formulae approximating the DC inductance of planar spiral coils with rectangular conductor cross section are known from the literature, see ([8], equation (3)) and [10], where the maximum errors of six of the most cited formulae are compared. Of these, only Crols et al.'s empirical formula [11] is applicable to any coil geometry, but the authors only discussed square and circular spiral coils. All other formulae are limited to square spiral coils. Whereas rectangular spiral coils were tested as eddy current sensors and a double-integral representation of the impedance of such coils was derived [7], in IC design only square spiral inductors seem to have been considered until today ([12], section I). It appears to me that there should also be a case for using rectangular spiral coils in practical IC design, since they provide more flexibility for optimization under geometric constraints. This may even help to reduce cost. Jayaraman et al. argued along similar lines, stating that "A rectangular spiral outline shape allows flexibility to the designer while doing layout floor planning at the circuit level. Thus, the design and modeling of rectangular spiral inductors are of great interest to the circuit designers." ([12], section I). Further, in designing inductors for RFID devices, it is imperative to be able to master rectangular spiral coils because the standard ISO 7810 prescribes the exact size of transponder cards, which is rectangular. To maximize the reading distance, their antennas must be made as large as possible. Hence, they must be rectangular. Consequently, also many RFID reader antennas are rectangular. This applies to many NFC devices as well, particularly to mobile phones, whose housing is rectangular.
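To make the Greenhouse bookkeeping concrete, here is a minimal sketch of such a summation for straight parallel segments. It deliberately uses the classical round-wire filament formulas (self-inductance $\frac{\mu_0 \ell}{2\pi}[\ln(2\ell/r) - 3/4]$ and the standard mutual-inductance expression for two equal parallel filaments) as stand-ins; the paper's own derivation below works with rectangular cross sections and mean distances instead, so the function names and the equal-length shortcut are ours:

import math

MU0 = 4 * math.pi * 1e-7  # Vs/(Am)

def L_self_round(l, r):
    # DC self-inductance of a straight round wire (classical filament result)
    return MU0 * l / (2 * math.pi) * (math.log(2 * l / r) - 0.75)

def M_parallel(l, d):
    # Mutual inductance of two equal, parallel filaments, length l, distance d
    u = l / d
    return MU0 * l / (2 * math.pi) * (
        math.log(u + math.sqrt(1 + u**2)) - math.sqrt(1 + 1 / u**2) + 1 / u)

def greenhouse(segments, radius):
    # segments: list of (length, position, current_sign) of parallel segments;
    # sign is +1/-1 for currents flowing in the same/opposite direction.
    L = sum(L_self_round(l, radius) for l, _, _ in segments)
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            li, xi, si = segments[i]
            lj, xj, sj = segments[j]
            l = min(li, lj)   # crude equal-length shortcut for the sketch
            L += 2 * si * sj * M_parallel(l, abs(xi - xj))
    return L

# three parallel 10 mm segments, 1 mm apart, currents in the same direction
print(greenhouse([(0.01, 0.0, 1), (0.01, 0.001, 1), (0.01, 0.002, 1)], 1e-4))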
Shortly before this paper was submitted, Jayaraman et al.'s work, the first paper presenting a formula for rectangular spiral coils based on physical principles, came out [12]. They adapted Mohan's current sheet approximation for the DC inductance of square spiral coils [13] to rectangular ones and introduced some improvements. In the current sheet approximation, the coil is modeled as four homogeneous conductive sheets forming the four sides of the rectangle. The gaps between parallel conductor segments (see Fig. 1 for a typical layout) are ignored. For square coils, the maximum error of this approximation was found to be 29 % ([10], subsection 4.4). With the same method of error analysis and based on the same domain of definition, I have found the maximum error of Jayaraman et al.'s model ([12], equations (1)-(5)) to be 28 %.

For specific application areas, it is possible to reduce the domain of definition. This lowers the maximum error. The error analysis described in section 5 comprises a systematic variation of all design parameters of rectangular coils and exactly known inductances of close to 194000 designs. This analysis has revealed the following maximum errors of the model [12] for specific application areas: 18 % for inductors in RFIC's, 14 % for RFID and telemetry reader antennas and NFC antennas, and 11 % for RFID and telemetry transponder antennas. These are all maximum errors. The individual error in a specific case may be substantially smaller. But for the circuit designer, only the maximum error counts because this is the error he or she must expect when designing a coil.

This paper derives a more precise formula. Its maximum error over the whole domain of definition amounts to 5.6 %, compared to 28 % of the model [12], and it lowers the maximum errors for the specific application areas listed above down to 2.3 %, 2.6 %, and 1.5 %, respectively.

Recently, an improved formula for square coils was presented [14]. It lowered the maximum error from hitherto 29 % down to 2.0 %. Therefore, it is sensible to try to adapt it to the rectangular case. Unfortunately, the two-dimensional empirical correction factor in the formula ([14], equation (25)) only holds for coil aspect ratios close to unity. For larger ratios, the correction fails. E.g. for a ratio of 5.0, the error can rise up to 120 %. But a closer inspection has revealed a systematic dependence of the error on four parameters. Hence, one could, in principle, derive a new correction factor adapted to the rectangular case. But it would require a four-dimensional fit instead of a two-dimensional one. This was deemed to be too involved. Thus, an alternative solution was sought.

It was found in the following approach: The conductor segments forming the spiral are connected in series, whereby the current remains constant along the spiral. So, as far as the mutual inductances between the parallel segments are concerned, the latter may as well be regarded as being connected in parallel. In a first step, this allows the spiral coil to be modelled as a single-turn coil as in the current sheet approximation [12,13], but with the essential difference that the four sides of the rectangle are not approximated as homogeneous conductors, but consist of $n$ parallel conductor segments ($n$ being the number of turns), so that the gaps between them are considered. Their mutual inductances are calculated similarly to the Greenhouse method [9], except that their individual lengths are approximated by their average length.
This enables the inductance to be calculated with the help of the method of mean distances [15] by treating each side of the coil as one single conductor composed of parallel conductor segments. The mean distances between two arbitrary points anywhere within the total cross section of the coil are then needed. These mean distances are composed of elementary ones of two types, firstly, between a single rectangle and itself, and secondly, between two displaced congruent rectangles. The method of mean distances relies on three kinds of mean distance: the geometric mean distance (GMD), the arithmetic mean square distance (AMSD), and the arithmetic mean distance (AMD). Finally, the inductance of the single-turn coil is multiplied by $n^2$ to get the total inductance, exactly as in the current sheet approximation.

The resulting formula is purely based on physical principles and on approximations for the mean distances. The latter are found with the help of exact expressions that are derived in this paper. Unlike for the formula developed in [14] for square coils, no empirical correction factor for the inductance is needed. So, the formula is inherently scalable. Hence, it is valid for all coil dimensions and inductance ranges. For a proof see ([10], p. 39).

Section 2 defines the design parameters of rectangular planar spiral inductors. Section 3 explains the method of error analysis employed in validating the approximations for the mean distances. The derivation of the inductance formula is given in section 4. Section 5 presents the error analysis of the inductance formula. A comparison with measurements is found in section 6. Section 7 discusses a MATLAB function for implementing the formula on a computer, together with numerical examples. Section 8 describes the automatic solution of an inverse problem with constraints to design a coil in an RFIC. An exemplary MATLAB code is given. Section 9 concludes i.a. that this paper provides the first experimentally verifiable evidence of the usefulness of the full method of mean distances as it was introduced in [15]. In the appendices 1-3, exact expressions for the GMD, AMSD, and AMD of rectangles are derived. Those for the AMD and one for the AMSD seem to be new. Appendix 4 lists the source code of the MATLAB function to evaluate the inductance formula.

Design parameters of planar spiral coils

Fig. 1 shows the layout of a rectangular planar spiral coil together with the definition of its dimensional design parameters. These are:
• $n$, the number of turns or windings ($n \geq 2$),
• $g$, the gap or spacing between windings,
• $w$, the conductor width,
• $h$, the conductor height or thickness (hidden in Fig. 1).

There is also a set of dimensionless parameters (see the next section). Note that $n$, as a generic parameter, belongs to both sets. From Fig. 1, the relations between these parameters, equation (1), can be inferred.

The method of error analysis

The derivation of the inductance formula implies testing approximations for the mean distances to choose the best ones. Of course, one could simply compare some values of approximated distances with exact values to assess the approximation. Exact expressions for all elementary mean distances are derived in appendices 1-3. But this would shed no light on the contribution of the approximation to the maximum error of the inductance formula. Depending on the design parameters, the calculated inductance may be insensitive to the error of the mean distance.
Or, quite to the contrary, the result may be very sensitive to it, magnifying any relative error of the mean distance, e.g. if the self- and the mutual inductance terms are of comparable magnitude. Their difference is the total inductance, see equation (30). So, what one needs is a way to systematically test the effect of any such approximation on the maximum error of the inductance formula. These tests must include all dimensions of the parameter space and cover its whole domain of definition. In a recent paper, a method of error analysis was developed for square coils, and exact inductance data for 13851 coil designs were compiled [10]. Besides its use for finding the maximum error of various inductance formulae, this method offers the very possibility to perform the tests needed. Therefore, this section provides an outline of the method. For the purpose of assessing the quality of approximations, we may limit the analysis to the case of square coils. This is justified because square coils always have the largest maximum error, as will be shown in section 5.

Square planar spiral coils are characterized by five parameters, e.g. $n$, the side length, $g$, $w$, and $h$. Their number can be reduced to four by transforming them to dimensionless ones [10]. These are $n$, the filling factor, the relative winding distance $\kappa$, and the cross-sectional aspect ratio, in the order of importance in determining the inductance. The filling factor lies strictly between 0 and 1. For a square coil, it is the ratio of the overall width Ⱳ spanned by one row of the windings, see Fig. 1 and equation (1), to the average conductor length, equation (3).

Now we modify this definition for the later use for rectangular coils. Without restricting generality, we may assume that the first side is not shorter than the second, and we redefine the filling factor in terms of the shorter rectangle side, substituting the shorter side and its average conductor length in equation (2). This leaves the result for Ⱳ unchanged. Equation (3) is replaced by the equation for the shorter average conductor length, equation (4), leading to the filling factor as the ratio of Ⱳ to the shorter average conductor length, equation (5). A maximum value of the filling factor is allowed for any given combination of parameter values ([10], equation (19)). This is to prevent invalid parameter combinations, like e.g. a too large conductor width for the given coil dimension and number of windings, for which the length of the innermost conductor segment would vanish or even become negative ([10], p. 40). With definition (5), we are on the safe side because it yields larger values of the filling factor than the original definition.

The remaining two dimensionless parameters besides $n$ and the filling factor are the relative winding distance $\kappa > 1$, the winding distance divided by the conductor width, and the cross-sectional aspect ratio $w/h \geq 1$. In PCB's, one has $w/h \gg 1$. In RFIC's, values down to $w/h = 2$ occur ([13], Table 4.10, $h = 0.9$ µm). Permitting $w/h \geq 1$ also allows for square cross sections. The inverse transformation equations can be found in section 8.

The designs were defined by setting both side lengths to 1 mm and by all parameter combinations given by the Cartesian products of the sets of values of the four dimensionless parameters. The parameter combinations are grouped in four ranges of $n$ containing 1, 5, 5, and 8 values of $n$, and 9 values each of the filling factor, $\kappa$, and $w/h$, resulting in a total of $(1 + 5 + 5 + 8) \cdot 9^3 = 729 + 3645 + 3645 + 5832 = 13851$ parameter combinations or coil designs. The exact inductances were calculated with the help of the free standard software FastHenry2 [16]. Note that the values of the filling factor differ from range to range. The reason is that, depending on $n$ and $\kappa$, a maximum value of the filling factor is allowed (see the comment following equation (5)). Within each range, the most restrictive upper limit for the filling factor that is valid for all $n$ and all $\kappa$ is used [10].
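The Cartesian-product structure of the test set is easy to reproduce. In the sketch below, the concrete value lists are replaced by hypothetical placeholders with the stated cardinalities, so only the combinatorics, not the actual parameter values, is faithful to the paper:

from itertools import product

# Hypothetical stand-ins: 1, 5, 5, and 8 values of n per range, and
# 9 values each of filling factor, relative winding distance, aspect ratio.
turns_ranges = [[2], [3, 4, 5, 6, 7], [8, 9, 10, 11, 12], list(range(13, 21))]
fill = [0.1 * i for i in range(1, 10)]        # 9 placeholder filling factors
kappa = [1.1 + 0.2 * i for i in range(9)]     # 9 placeholder winding distances
aspect = [1.0 + 1.0 * i for i in range(9)]    # 9 placeholder w/h ratios

designs = [(n, f, k, a)
           for turns in turns_ranges
           for n, f, k, a in product(turns, fill, kappa, aspect)]
print(len(designs))  # (1 + 5 + 5 + 8) * 9**3 = 13851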
Now one can calculate the respective inductances with the formula in question, e.g. with the approximation for one of the mean distances, for the 13851 parameter combinations and compare the maximum relative error to the value obtained with the exact expression. This is done in the next section to select the best possible approximation for each of the mean distances.

Derivation of the inductance formula

In the spiral coil, the lengths of the parallel conductor segments forming the windings vary from the innermost to the outermost winding, see Fig. 1. In this derivation, the coil is approximated as a single-turn coil with $n$ parallel conductor segments in each of its four sides. To enable the use of the method of mean distances [15], all segments on the same side are approximated to have the same average length $\bar\ell_a$ or $\bar\ell_b$ given by equations (3) and (4).

Precise expression for the partial self-inductances

According to the method of mean distances, the partial self-inductances $L_a$ and $L_b$ of the sides of average length $\bar\ell_a$ and $\bar\ell_b$, respectively, can be summarized in the form ([15], equation (33)):

$L_\ell = \frac{\mu_0\,\ell}{2\pi}\left[\log\!\left(\frac{\ell}{R_g} + \sqrt{1 + \frac{\ell^2}{R_s^2}}\right) - \sqrt{1 + \frac{R_s^2}{\ell^2}} + \frac{R_a}{\ell}\right], \qquad (6)$

where $\ell = \bar\ell_a, \bar\ell_b$. Throughout this paper, log designates the natural logarithm. The constant $\mu_0$ is the magnetic permeability of the vacuum, $\mu_0 = 4\pi \cdot 10^{-7}$ Vs/(Am). The quantities $R_g$, $R_s$, and $R_a$ are the compound mean distances (GMD, AMSD, and AMD, respectively) belonging to the partial self-inductances of the sides of the single-turn coil. They are the respective mean distances between two arbitrary points anywhere within the cross section of the $n$ parallel conductor segments, i.e. anywhere within a row of $n$ congruent rectangles, see Fig. 2. So, these quantities represent compound mean distances spanning over $n$ rectangles.

Equation (6) is very precise provided that the ratio of the conductor length to the radius of its cross section is not too small. The percentage error of the method of mean distances is plotted for a single straight wire of circular cross section as a function of the ratio $\ell/a$ in ([15], Fig. 6), where $a$ is the wire radius. The smaller this ratio, the larger is the error. To translate between circular and square cross section, note that we have $R_g = 0.7788\,a$ for the circular cross section ([15], equation (15)) on one hand and $R_g = 0.4470\,w$ for the square one, equation (17), on the other hand. Hence, for a square cross section, the equivalent width amounts to $w = 1.7423\,a$. Then the square and the circular cross section both have the same GMD. The equivalent worst-case ratio $\ell/a$ in a spiral coil is found in its shortest conductor segment. This is the innermost one; its length can be read off Fig. 1. Hence, the equivalent ratio $\ell/a$ of a conductor of square cross section corresponding to a round wire of the same length is obtained by multiplying its length-to-width ratio by 1.7423. For the coil designs defined in section 3, its minimum value is 3.85 (attained for $n = 13$, a filling factor of 0.86, and $\kappa = 1.1$). According to ([15], Fig. 6), for $\ell/a = 3.85$, the error of the partial self-inductance obtained by the method of mean distances for circular cross section lies below 0.007 % if the exact values for the GMD, AMSD, and AMD are used. For square cross section, this is only an approximation. But it demonstrates that the use of the method of mean distances for planar spiral coils is well justified.

To apply it to calculate the partial self-inductances according to equation (6), we need the compound mean distances $R_g$, $R_s$, and $R_a$ between the total cross-sectional area and itself (drawn black in Fig. 2). We start by calculating $\log R_g$. It is given by the double area integral ([18], equation (6.32), p. 273)

$\log R_g = \frac{1}{|S|^2} \int_{S} \int_{S'} \log r \; dA\, dA',$

where $r$ is the distance between any two points within the total surface $S' = S$ of total area $|S|$.
Now, $S$ is composed of $n$ disjunct and congruent rectangles of area $|A| = wh$ each, see Fig. 2. Hence, $|S| = n\,|A|$. By virtue of the linearity of the integral, the double area integral over $S$ and $S'$ can be expressed as the double sum of $n^2$ double area integrals over the rectangles $A_i$ and $A_j$. Thus,

$\log R_g = \frac{1}{n^2 |A|^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \int_{A_i} \int_{A_j} \log r_{i,j} \; dA\, dA', \qquad (7)$

where $r_{i,j}$ is the distance between one integration point located in rectangle $A_i$ and the other one in rectangle $A_j$. Two cases can be distinguished: The two integration points either lie in the same rectangle (case 1) or in different ones (case 2). Hence, the double sum (7) can be split into two parts representing the two cases, $\log R_g^{(1)}$ and $\log R_g^{(2)}$:

$\log R_g = \log R_g^{(1)} + \log R_g^{(2)}. \qquad (8)$

In case 1, only the summands with $i = j$ are extracted from the double sum (7) to form $\log R_g^{(1)}$. It reduces to the sum of $n$ identical contributions, which, by definition, are all given by the logarithm of the elementary GMD between a single rectangle and itself, denoted by $\log R_{g1}$, normalized by $1/n^2$. Hence,

$\log R_g^{(1)} = \frac{1}{n} \log R_{g1}. \qquad (9)$

The exact expression for $\log R_{g1}$ is presented in appendix 1, equation (39). In case 2, only the summands with $i \neq j$ are extracted from the double sum (7) to form $\log R_g^{(2)}$. It represents the sum of the remaining $n(n-1)$ contributions, which, by definition, are given by the logarithm of the elementary GMD between two rectangles displaced by a multiple $k$ of the winding distance $d$, denoted by $\log R_{g2}(kd)$, normalized by $1/n^2$. By contrast with case 1, here the contributions are not all equal, but they depend on the multiplier $k$. Hence, the second part of the double sum (7) must be transformed to a single sum with $k$ as the summation index. In Fig. 2 there are three pairs of rectangles: Two pairs, (1,2) and (2,3), each have the displacement $d$, and one pair, (1,3), has the displacement $2d$. So, $k = 1 \ldots 2$. For $n$ windings this becomes $k = 1 \ldots n-1$ because there are $n-1$ gaps between rectangles. Thus, the largest displacement of two rectangles is $(n-1)d$. For each value of $k$, there are $n-k$ pairs of rectangles of mutual displacement $kd$. Each of these pairs must be counted twice, because in the double sum (7), all pairs with $i \neq j$ occur twice: once as $(i,j)$, and once as $(j,i)$, and both notations refer to the same pair. Hence, the second part of the double sum (7) transforms to

$\log R_g^{(2)} = \frac{2}{n^2} \sum_{k=1}^{n-1} (n-k) \log R_{g2}(kd). \qquad (10)$

The exact expression for $\log R_{g2}$, the logarithm of the elementary GMD between two displaced rectangles, is given in appendix 1, equation (38). The total number of contributions in case 2 is

$2 \sum_{k=1}^{n-1} (n-k) = n(n-1), \qquad (11)$

as anticipated above, confirming the correctness of the transformation leading to the single sum (10). With the help of equations (9) and (10), the sum (8) becomes

$\log R_g = \frac{1}{n} \log R_{g1} + \frac{2}{n^2} \sum_{k=1}^{n-1} (n-k) \log R_{g2}(kd). \qquad (12)$

Next, the quantity $R_s^2$ in equation (6), i.e. the square of the compound AMSD belonging to the partial self-inductances of the sides of the coil, is defined analogously to equation (7) as

$R_s^2 = \frac{1}{n^2 |A|^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \int_{A_i} \int_{A_j} r_{i,j}^2 \; dA\, dA'. \qquad (13)$

Analogously to equation (12), by the definition of $R_{s1}^2$, the square of the elementary AMSD between a single rectangle and itself, and $R_{s2}^2$, the square of the elementary AMSD between two displaced rectangles, one obtains

$R_s^2 = \frac{1}{n} R_{s1}^2 + \frac{2}{n^2} \sum_{k=1}^{n-1} (n-k)\, R_{s2}^2(kd). \qquad (14)$

The exact expressions for $R_{s1}^2$ and $R_{s2}^2$ are derived in appendix 2, equations (43) and (42), respectively. Finally, the quantity $R_a$ in equation (6), i.e. the compound AMD belonging to the partial self-inductances of the sides of the coil, is defined analogously to equation (13) as

$R_a = \frac{1}{n^2 |A|^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \int_{A_i} \int_{A_j} r_{i,j} \; dA\, dA'. \qquad (15)$

Analogously to equation (14), by the definition of $R_{a1}$, the elementary AMD between a single rectangle and itself, and $R_{a2}$, the elementary AMD between two displaced rectangles, one obtains

$R_a = \frac{1}{n} R_{a1} + \frac{2}{n^2} \sum_{k=1}^{n-1} (n-k)\, R_{a2}(kd). \qquad (16)$

The exact expressions for $R_{a1}$ and $R_{a2}$ are derived in appendix 3, equations (48) and (46), respectively. Now the exact expressions for all quantities appearing in equation (6) are known, and the partial self-inductances $L_a$ and $L_b$ can be evaluated.
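Equations (12), (14), and (16) share one combinatorial skeleton, which the following sketch implements generically; the elementary-distance inputs are placeholders for the appendix expressions, and the symbol names are this note's, not normative:

def compound_mean(n, d, elem_self, elem_pair):
    """Combine elementary mean 'distances' over a row of n rectangles.

    elem_self: value for a single rectangle with itself
               (log Rg1, Rs1**2, or Ra1).
    elem_pair: function of the displacement k*d between two rectangles
               (log Rg2, Rs2**2, or Ra2).
    Returns the compound value (log Rg, Rs**2, or Ra), eqs. (12), (14), (16).
    """
    total = n * elem_self
    for k in range(1, n):
        total += 2 * (n - k) * elem_pair(k * d)
    return total / n**2

# Example: compound AMSD**2 for n = 3 rectangles, using the approximations
# Rs1**2 = (w**2 + h**2)/6 and Rs2(kd)**2 ~ (kd)**2 discussed below.
w, h, d, n = 0.2e-3, 0.05e-3, 0.3e-3, 3
Rs_sq = compound_mean(n, d, (w**2 + h**2) / 6, lambda dist: dist**2)
print(Rs_sq**0.5)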
Approximate expression for the partial self-inductances

The equations derived in subsection 4.1 for the compound mean distances spanning over several rectangles and the expressions for the elementary mean distances derived in the appendices are exact. Only equation (6) is not exact, although it is very precise, see subsection 4.1. The resulting formulae get very complicated. But since the concept of the average-sized single-turn coil is only a rather coarse approximation for the spiral coil, it is not worth evaluating them exactly. Instead, good approximations are useful. The method of error analysis from section 3 allows one to assess the effect of approximations on the maximum error of the inductance formula over the whole domain of definition of the design parameters of square coils. In this and the next subsection, the method is used to validate approximations for the equations appearing in subsection 4.1. The first equation that might be simplified is equation (6), since the mean distances AMSD and AMD are often neglected altogether, see e.g. ([14], equation (2)). This is viable when the mean distances only stretch across a single conductor, i.e. a single rectangle in Fig. 2. But here they span over all N rectangles. An error analysis according to section 3 revealed that neglecting the mean distances S and D in equation (6) in the final version of the inductance formula as defined in subsection 4.4 raised the maximum error to 30 % (N = 13, ρ = 0.86, k = 2.2125, and u = 1.0). Thus, the AMSD and AMD cannot be neglected. Another simplification of equation (6) often found in the literature is substituting both the AMSD and the AMD by the GMD, so that only the latter needs to be calculated. An analogous error analysis disclosed a maximum error of 14 % (as above, but k = 10.0). So, this simplification is not an option either. Hence, equation (6) cannot be simplified. Seeking approximations for the mean distances was more successful. Rosa ([19], p. 314) and Grover ([20], p. 22) showed that the GMD between a single rectangle (of width w and height h) and itself, whose exact expression is given in equation (39), can be approximated as

R₁ ≈ 0.2235 (w + h).   (18)

For the coil designs in section 3, an error analysis showed that, if in the final version of the inductance formula as defined in subsection 4.4, approximation (18) was replaced by the exact expression (39) in equation (12), the maximum error changed by not more than 0.05 %. Therefore, approximation (18) is validated. The exact expression (38) for the GMD between two displaced congruent rectangles is even more complicated than equation (39), so an approximation is even more desirable. The only approximation found in the literature was the one derived by Rosa ([21], equation (13)) as the series expansion of the exact solution for the GMD between two axially displaced line segments of equal length w. This is the special case of two congruent rectangles of vanishing height h = 0, for the displacement d = μw, μ ∈ ℕ:

log R₂ ≈ log d − (w²/(12d²) + w⁴/(60d⁴) + w⁶/(168d⁶) + …),

which corresponds to a relative displacement μ = d/w ∈ ℕ. It converges very fast except for μ = 1. It can also be found in ([20], p. 20). Rosa's derivation [21] remains valid for μ ∈ ℝ, so Mohan wrote it in the generalized form ([13], equation (3.13)), truncated after a finite number of terms. This formula was adopted with the same number of terms by Jayaraman et al. in their recent paper ([12], equation (3)). By contrast, equation (20) will consider the finite height h of the rectangles, thus extending the applicability of the inductance formula to coils with low cross-sectional aspect ratio u = w/h down to u = 1.
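For reference, the series quoted above is simple to evaluate. The sketch below encodes the coefficient pattern 12, 60, 168, 360, … as given by Grover ([20], p. 20); the function name and the truncation after four terms are my own choices, not the paper's.

function lr = logGmdSegments(w, d, nTerms)
% Log-GMD of two axially displaced line segments of equal length w at
% center distance d (valid for d > w), after Rosa's series expansion.
if nargin < 3, nTerms = 4; end
lr = log(d);
for n = 1:nTerms
    lr = lr - (w/d)^(2*n) / (n*(2*n + 1)*(2*n + 2));  % 12, 60, 168, 360, ...
end
end

For a relative displacement d/w = 2, the first neglected term is already on the order of 10⁻⁶, illustrating the fast convergence noted above; for d/w = 1, many more terms would be needed.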
Not surprisingly, an error analysis according to section 3 showed that, if in the final version of the inductance formula as defined in subsection 4.4, equation (20) was replaced by the GMD between two displaced line segments, the maximum errors in the four ranges of N worsened from 4.3 %, 3.1 %, 3.7 %, and 5.6 % to 6.0 %, 5.1 %, 3.6 %, and 5.4 %. Because of this large increase of up to 2.0 %, equation (20) was preferred. To derive equation (20), one can guess that log R₂ should also depend on log(w + h), as log R₁ does, since for zero displacement d = 0 the rectangles coincide: R₂ = R₁. Further, R₂ must depend on d. Intuitively, one may expect it to depend on the relative displacement μ = d/w. Indeed, some trial calculations revealed that for square cross section, h = w, log R₂ was very well approximated by

log R₂ ≈ log(w + h) + log(μ/2).   (19)

So, despite the intuition that leads to equation (19), it is not even defined for μ = 0. Interestingly, for h = w, this is identical to log d, the first term of the series expansion for the GMD between two displaced line segments (which is based on h = 0). In the general case of rectangular cross section, log R₂ must also depend on the cross-sectional aspect ratio u = w/h. Although approximation (19) does depend on h, it is only valid for square cross section, u = 1. It had to be extended to cover the case u > 1. Plotting the exact solution of log R₂(d) according to equation (38) for various values of u, together with the approximation (19), both as a function of μ, always resulted in a nearly constant difference between the two plots. This meant that the required correction mainly depended only on u, but not on μ. Hence, it sufficed to add a function of u to equation (19); this yields equation (20). For u ≈ 1, the correction in u vanishes, as expected. An error analysis according to section 3 showed that, if in the final version of the inductance formula as defined in subsection 4.4, equation (20) was replaced by the exact expression (38) in equation (12), the maximum errors in the four ranges of N changed by 0.11 % at most. Hence, approximation (20) is well-founded. Next, the exact expression (43) for S₁² in equation (14) is so simple that it doesn't need an approximation. In the exact expression (42) for S₂², the square terms beside d² can even be neglected, so it simplifies to

S₂² ≈ d²,   (21)

affecting the maximum error of the inductance formula by less than 0.009 %. Finally, the exact expression (48) for D₁ in equation (16) is as complicated as equation (39). For D₁, equation (17) adopts an approximation from the literature, whose author indicated a precision of 2 %, but without disclosing how he had found the formula and against what reference he had calculated the error. I found a maximum error of 1.4 % compared to the exact expression (48), attained for a ratio of w to h (or vice versa) close to 5.12. Note that the formula is symmetrical on exchanging w and h, as it must be. Hence, its error only depends on the ratio of w and h. In the literature, the AMD is often approximated by the GMD. In our case this means equation (22), where D₁ is given in equation (17). An error analysis based on section 3 revealed that, if in the final version of the inductance formula as defined in subsection 4.4, the exact expression (48) was substituted for equations (22) and (17) in equation (16), the maximum error changed by 0.15 % at most. Therefore, approximation (22) is justified. If Mohan's formula was used instead of equation (22), the maximum error in the lower ranges of N increased by 0.15 %, so equation (22) was preferred. The exact expression (46) for D₂ is very complicated. An approximation was needed.
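Stepping back to the AMSD simplifications above, the two elementary values can be checked by simulation. The closed forms in the comments below are standard results for uniformly distributed points in rectangles (stated here as an assumption, not quoted from the appendices); the Monte Carlo estimate reproduces them and shows why the terms beside d² are negligible at large displacements.

w = 1.0; h = 0.5; d = 4.0;              % example dimensions
n = 1e6;
P  = [w*rand(n,1), h*rand(n,1)];        % points in the first rectangle
Q  = [w*rand(n,1), h*rand(n,1)];        % independent points, same rectangle
Qd = Q + [d, 0];                        % congruent rectangle displaced by d
S1sq = mean(sum((P - Q ).^2, 2));       % expect (w^2 + h^2)/6
S2sq = mean(sum((P - Qd).^2, 2));       % expect d^2 + (w^2 + h^2)/6
fprintf('S1^2: %.4f (expected %.4f)\n', S1sq, (w^2 + h^2)/6);
fprintf('S2^2: %.4f (expected %.4f, d^2 = %.4f)\n', S2sq, d^2 + (w^2 + h^2)/6, d^2);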
Here as well, the approximation

D₂ ≈ R₂   (23)

can be made. R₂, in turn, can be approximated by taking the exponential function of expression (20). Substituting the exact expression (46) for equation (23) and the exponential function of expression (20) in equation (16) changed the maximum error of the calculated inductance by less than 0.09 %. So, approximation (23) is viable. Note that applying equations (22) and (23) reduces the calculation of the AMD entirely to that of the GMD.

Approximate expression for the mutual inductances

The mutual inductance Ma between the two rows of parallel conductors of average length ā at average mutual distance b̄, and the mutual inductance Mb between the other two rows of parallel conductors of average length b̄ at average mutual distance ā, are given by an equation analogous to (6), namely equation (24), with the compound mean distances now taken between the two rows and marked by a bar: R̄, S̄, and D̄. We start by calculating log R̄, the logarithm of the compound GMD between two rows of rectangles that are in-line. The cross-sectional view of the spiral coil with the two rows of rectangles is exemplified in Fig. 3. In calculating log R̄, the two integration points always lie in two different rectangles. One of the points lies in a rectangle in the left row (i.e. in one of the rectangles 1, 2, or 3 in Fig. 3), and the other one in a rectangle in the right row (i.e. in one of the rectangles 4, 5, or 6). So, analogously to equation (7), we have a double sum of N² double area integrals, equation (25). In the double sum (25), only contributions of case 2 arise, namely, by definition, log R₂(c̄ + mg), normalized by 1/N², where c̄ denotes the average mutual distance of the two rows and m = −(N − 1) … (N − 1), as can be inferred from Fig. 3. Since they depend on the multiplier m, we need an analogous transformation to a single sum with m as the summation index as in subsection 4.1. For each value of m, there are N − |m| pairs of rectangles of mutual displacement c̄ + mg. In the double sum (25), none of the pairs of rectangles occurs twice, so there is no factor 2 in the resulting single sum. Hence, the double sum (25) transforms to

log R̄ = (1/N²) Σ_{m=−(N−1)}^{N−1} (N − |m|) log R₂(c̄ + mg).   (26)

The total number of pairs of rectangles is Σ_{m=−(N−1)}^{N−1} (N − |m|). With the help of equation (11), this is found to be N², consistent with the number of N² summands in equation (25), thus confirming the correctness of the transformation leading to the single sum (26). The quantity log R₂ could, in principle, be evaluated with the help of the exact expression (38). But for such large displacements compared to w and for u > 1, expression (38) became numerically unstable. It then contained differences of numbers that were nearly equal, so that double precision arithmetic was not precise enough to represent them. This led to large errors beyond bound. So, equation (38) cannot be used to calculate log R̄. A remedy might be to reformulate equation (38) in terms of ratios such as h/d and h/(d + w) and to derive a Taylor series expansion for small values of these ratios where, hopefully, the terms causing the instability would cancel analytically. Both types of functions occurring in equation (38), namely, log(1 + x²) and arctan(x), have alternating Taylor series, allowing the estimation and thus the control of the truncation error. But it turned out that this tedious procedure is probably not worth the effort. It was possible to bypass the instability problem of equation (38) and to do the error analysis by restricting it to the subdomain characterized by u = 1. It revealed that the maximum error of the inductance formula over all four ranges of N defined in section 3 changed by less than 0.0002 % if the approximation given in equation (27) was replaced by the exact expression (38).
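The instability described above is the classical cancellation effect in floating-point arithmetic. The following generic demonstration (not the actual expression (38)) shows how a difference of two nearly equal logarithms loses all significant digits, while an algebraically identical form remains stable.

x = 1e8;                          % stands for a large displacement ratio
bad  = log(1 + x^2) - 2*log(x);   % cancellation: all significant digits lost
good = log1p(1/x^2);              % algebraically identical, about 1e-16
fprintf('bad = %.3e, good = %.3e\n', bad, good);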
Due to this marginally small effect of approximation (27) for u = 1, it may be expected to remain negligible for all u > 1, even if it should increase by up to three orders of magnitude. Two alternatives for approximating equation (38) were at hand: equation (20) evaluated at the displacement c̄ + mg, and the central filaments approximation. In the latter, the GMD of the two conductor segments is simply replaced by their central distance c̄ + mg. One important result of the error analysis to be discussed in section 5 is that the maximum error strictly decreases as a function of the coil aspect ratio β ≥ 1. Now, error calculations showed that equation (20) would confine this desirable property to within a narrower interval of β than the central filaments approximation, i.e. it would undesirably restrict the scope of applicability of error interpolation. Hence, in the calculation of the mutual inductances, the central filaments approximation was preferred, even though in the low range of N and at β = 1, the maximum error increased by 0.4 % compared to using equation (20). Therefore, equation (26) becomes

log R̄ = (1/N²) Σ_{m=−(N−1)}^{N−1} (N − |m|) log(c̄ + mg),   (27)

with c̄ = ā, b̄, see equation (24). Next, in the analogous calculation of S̄² in equation (24), no instability problem arose, so the exact equation (42) for S₂² could be used, substituting c̄ + mg for d. Since c̄ + mg ≫ w, approximation (21) is even more justified in equation (24) than it was in equation (6). Thus, analogously to equation (27), one finds

S̄² = (1/N²) Σ_{m=−(N−1)}^{N−1} (N − |m|) (c̄ + mg)².   (28)

Finally, in the calculation of D̄, the same problem as for log R̄ arose: The exact equation (46) for D₂ became numerically unstable. The problem was circumvented in the same manner, by applying the central filaments approximation, i.e. the exact AMD was simply replaced by the displacement c̄ + mg as the only alternative available. Analogously to equation (28), this becomes

D̄ = (1/N²) Σ_{m=−(N−1)}^{N−1} (N − |m|) (c̄ + mg).   (29)

Here, an error analysis restricted to u = 1 led to the same conclusion as for equation (38): It is probably not worth expanding equation (46) in a Taylor series. In the highest range of N, it remained numerically unstable, even for u = 1.

The inductance formula

Finally, the inductance of the single-turn coil must be multiplied by N² to consider the effect of the N windings, just as in the current sheet approximation [12,13]. The total inductance of the spiral coil then reads equation (30), where La and Lb are given in equation (6), which, in turn, refers to equations (3), (4), (12), (14), (16)–(18), (20)–(23), and (43), and Ma and Mb are defined in equation (24), which, in turn, refers to equations (3), (4) and (27)–(29). The factor 2 in equation (30) results from the fact that there are two sides of the rectangle with conductors of length ā and two of length b̄, each pair having the partial self-inductances La and Lb, respectively, and that the mutual inductances Ma and Mb must be counted twice to consider the coupling from the conductors on one side of the rectangle to those on the opposite side, and vice versa. One might argue here that the exponent of N in equation (30) should be < 2.0, since the size of the loops differs from turn to turn, leading to coupling losses resulting in a lower inductance. But this effect is already considered in the mean distances used in equations (6) and (24). Indeed, an error analysis according to section 3 confirmed that 2.0 is the optimum value.

Error analysis for rectangular coils

In the present work, to ensure comparability with the results from [10] and [14], the error analysis was done in the same way as described in [10], by varying all four dimensionless parameters N, ρ, k, and u and by using the same sampling values for the parameters.
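The single sums (27)–(29) are equally straightforward to evaluate. The sketch below applies the central filaments approximation exactly as described; N, g, and the average mutual distance (here called cbar) carry illustrative values.

N = 5; g = 2.0e-4; cbar = 5.0e-3;       % example values
m   = -(N-1):(N-1);                     % signed displacement multipliers
wgt = (N - abs(m)) / N^2;               % N - |m| pairs per value of m
logRbar = sum(wgt .* log(cbar + m*g));  % eq. (27)
Sbar2   = sum(wgt .* (cbar + m*g).^2);  % eq. (28)
Dbar    = sum(wgt .* (cbar + m*g));     % eq. (29)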
But the formulae derived in section 4 extend the scope of applicability to rectangular coils. So, a fifth dimensionless parameter had to be added, namely, the coil aspect ratio β ≥ 1, the ratio of the longer to the shorter side length. The assumption β ≥ 1 does not restrict generality, but it is necessary for the unambiguous and safe definition of the filling factor, see the comment following equation (5). For all ranges of N, the same 14 sampling values of β were used. The maximum errors over N, ρ, k, and u were calculated as a function of β up to β = 10.0. They were found to strictly decrease with increasing β in all four ranges of N, but only up to a certain limit. Beyond this limit, the error increased, and eventually its behavior even became chaotic. The said limit occurred earliest in the high range of N, at β = 2.5, and latest in the two-windings range, at β = 4.0. The results are listed in Table 1 as a function of β up to β = 4.0. Values beyond the limit of strict decrease are printed in red. The results are also plotted in Figs. 4–7 as a function of β up to the limit of strict decrease. Note that every value in Table 1 and every point in Figs. 4–7 represents the maximum error of 729, 3645, 3645, and 5832 coil designs, respectively, depending on the range of N, over the domain of definition of ρ, k, and u as defined in section 3, computed at the respective value of β. The property of strictly decreasing maximum error as a function of β was considered highly desirable. It allows one to safely estimate the maximum error to be expected for any coil design by linear interpolation for any value of β below the limit of strict decrease. Consequently, the largest errors occur for square coils, i.e. at β = 1. This fact has already been exploited in section 4, as explained in section 3, and it will again be evoked in section 6. For β > 1, the maximum errors decrease rapidly. The higher the range of N, the faster the maximum error decreases. The values of β at which it has halved are roughly 2.5, 2.25, 1.5, and 1.2, respectively, for the four ranges in increasing order of N. The formula can be used up to β = 4.0 for all ranges of N, but interpolation of the maximum error is restricted to below the upper limit of strict decrease, i.e. to within the region with black numbers in Table 1. The region with red numbers displays chaotic behavior in the mid and high ranges of N. The overall maximum error of 5.55 % is considerably larger than the value to be expected from the approximations used for the mean distances in section 4, where the largest contribution to the maximum error amounted to 0.4 % (see the comment preceding equation (27)). In addition, from the results obtained by the error analyses in section 4, it is evident that the contributions from the method of mean distances and from the central filaments approximation are marginal. All this suggests that the main contribution to the error of the inductance formula does not stem from these approximations, but rather from the concept of the average-sized single-turn coil on which it is based, which approximates the varying lengths of the conductor segments by their averages ā and b̄. Therefore, avoiding this approximation would improve the accuracy. But fortunately, there is a simpler, although partial, solution: Restricting the domain of definition in three ways, depending on the field of application, allows the maximum error to be considerably reduced. For wireless charging devices, as things stand now, one is free to choose the shape of the coils. In eddy current sensors, the strongest normalized impedance response was found for square coils [8].
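Within the region of strict decrease, estimating the maximum error for a given design is a one-line interpolation. The grid and error values below are placeholders, not the actual entries of Table 1.

betaGrid = [1.0 1.5 2.0 2.5];          % sampling values of beta (placeholder)
maxErr   = [5.55 3.2 1.9 1.2];         % maximum error in percent (placeholder)
beta     = 1.67;                       % aspect ratio of a given coil design
errEstimate = interp1(betaGrid, maxErr, beta, 'linear');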
Hence, for these application fields, square coils and the formula in [14] are recommended. The latter features a maximum error of 2.0 %, compared to 5.55 % for equation (30) at β = 1.

Comparison with measurements

The error analysis in the last section revealed that the largest errors of the inductance formula (30) arise for square coils. Therefore, it made sense to test the formula against measurements on coils with β = 1. So far, measurements seem to have been limited to inductors in RFICs [10–13]. By contrast, in this section, measurements on 16 square RFID reader antennas manufactured as PCBs with standard copper layer thickness h = 35 μm are reported. They were performed at 300 kHz (representing DC, justified for 13.56 MHz in ([14], section 5)) with an Agilent 4294A Precision Impedance Analyzer and a 42941A Impedance Probe. Table 2 lists the 16 reader antennas, characterized by their design parameters, together with the measured inductances and the percentage deviations of the results of the inductance formula (30) from the measurements and from the exact values. The dimensional parameters are given in mils, whereby 1 mil corresponds to 25.4 μm. According to Fig. 10-6 of the operation manual of the measurement equipment, the measurement error at 300 kHz was less than 1 % for inductances above about 1 μH.

Implementation in MATLAB

The source code of the MATLAB function L_RectPlanarSpiral to calculate the DC inductance of rectangular planar spiral coils is listed in appendix 4. All quantities are in SI units. Besides the inductance, the function also returns the filling factor defined in equation (5). If the data entered represent an invalid parameter combination as explained in section 3, an error message is output. The invalid entry is detected by checking the value of the filling factor. For each of the four ranges of N, Table 3 lists the number of windings to be used in the test, the inductance returned by the MATLAB function, the exact DC inductance (from FastHenry2), the relative error of the returned inductance, and the filling factor returned.

Solving inverse problems

The following example shows how to solve inverse problems of coil design with constraints as they typically arise in electrical engineering. The example comprises the design of an RFIC inductor, but the same procedure can easily be adapted to any field of application. The coil is to be etched on the top layer of a 0.35 μm complementary metal-oxide-semiconductor (CMOS) process. Let the thickness of the layer be h = 0.9 μm. Suppose we dispose of an area of 150 μm × 250 μm which is not to be exceeded. Let the minimum permitted conductor width and gap between conductors be 1.0 μm. We want an inductance of 84.0 nH, realized with as little conductor as possible to keep the loss resistance small. What values of the design parameters do we need? The straightforward way of solution would be to vary the dimensional parameters within the whole domain of definition, thereby observing the constraints, and selecting the combination that comes closest to the targeted inductance. With this approach, invalid parameter combinations as explained in section 3 will inevitably occur. They must either be detected and skipped in the nested loops of the parameter variation step or be detected in the results and discarded afterwards. The trouble with invalid parameter combinations can be spared by avoiding their occurrence altogether. This can be achieved by varying dimensionless parameters within the intervals defined in section 3.
To do this, we also need the inverse transformation equations to transform the dimensionless parameters back to dimensional ones after the inductance calculations are done. They are given in ([10], equations (12)–(15)). Note that, for rectangular coils, we adhere to the convention that the first side is at least as long as the second, so that, in these equations, the side length must be replaced by the shorter one, as was done in equation (5); the inverse equations then follow accordingly. The constraints are then considered after varying the parameters, by way of selection. The MATLAB code below is based on this procedure. On the PC specified in section 5, it solved the problem within 2.1 seconds. First, the given data is initialized. Ao and Bo are the outermost coil dimensions, as can be inferred from Fig. 1. We have Ao = 250 μm and Bo = 150 μm:

Ao = 250e-6;
Bo = 150e-6;
h = 0.9e-6;
smin = 1e-6;
Ltarget = 84.0e-9;

Next, vectors (Nv, rv, and kv) for the parameters N, ρ, and k are initialized with sampling values within the intervals defined in section 3 for one of the four ranges of N. Here, the data for the high range (N = 13 … 20) is shown. The calculated DC inductance is 0.6 % off target. The exact DC inductance calculated with FastHenry2 is 82.5 nH. Thus, the effective error of the inductance formula is 1.2 %, in accordance with the maximum error of 1.8 % as disclosed by linear interpolation of Table 1 for β = 1.67. At 4.0 GHz, the exact inductance decreases by only 0.007 % compared to DC. Hence, at 4.0 GHz, the error of the inductance formula also amounts to 1.2 %. The inductance at 4.0 GHz was computed by requesting 10 × 10 subfilaments in FastHenry2 to consider the frequency effects, see ([14], section 5). The run time of FastHenry2 on the PC specified in section 5 was 3 hours and 7 minutes.

Conclusions

A precise formula for the DC inductance of rectangular planar spiral coils with an aspect ratio of up to 4.0 and having a rectangular conductor cross section with an aspect ratio of height to width not exceeding unity has been derived from physical principles. The formula is scalable; hence, it is valid for coils within any inductance range and of any dimension. The formula is akin to the current sheet approximation [12], but with the difference that the gaps between the conductor segments are considered. As a consequence, it is much more precise. Its maximum error for the usual range of parameters used for inductors in RFICs and in antennas in RFID, NFC, and telemetry devices is 2.6 %. This has been tested systematically on almost 194000 reference designs of exactly known inductance. To this end, dimensionless parameters to reduce the number of dimensions of the parameter space by one have been introduced. The formula has also been tested against measurements on 16 RFID antennas manufactured as PCBs. It is based on the method of mean distances [15]. For wireless charging devices and for eddy current sensors for nondestructive testing, it has been found that square coils and the formula presented in [14] are to be preferred. The source code of a MATLAB function for evaluating the formula, together with numerical examples, has been provided. Moreover, an example of solving a coil design problem for RFICs with constraints as they arise in practical engineering has been presented, together with the complete MATLAB code. Exact expressions for the GMD, AMSD, and AMD between a rectangle and itself and between two displaced congruent rectangles are derived in the appendices. Those for the AMD and one for the AMSD seem to be new.
They have been used to verify known approximations or to derive new ones, like e.g. the one for the GMD between two displaced congruent rectangles, equation (20). So far, only an approximation for two displaced line segments was known. One of the exact expressions for the AMSD is used in the inductance formula. The paper presents the first example in the literature of an application of the method of mean distances where any curtailment in calculating the partial self-inductance of conductor segments, as is often found in the literature (e.g. neglecting the AMSD and AMD or replacing them by the GMD), leads to a clearly verifiable error in the calculated total inductance. The method of mean distances was originally proposed by Rosa [19], although he did not actually carry it out. To my knowledge, its first full use was reported in [15], where it was also justified mathematically ([15], section 7). But the partial inductance of single wires to which it was applied cannot be measured ([15], section 8). Further, the inductance of the small shorted two-wire transmission lines of a few nH, used as examples of an application to components that are at least measurable in principle, was too small to be measured accurately. Thus, these examples were not suitable either to provide experimental evidence of the usefulness of the method ([15], section 10). The first application to a precisely measurable inductance was reported in [22], on large shorted two-wire transmission lines. But due to the proximity effect occurring in the main conductors, the method of mean distances could only be applied to the small shorting rods, and the effect of neglecting the AMSD and AMD in calculating the inductance of the shorting rods on the total inductance was ≤ 1.5 % ([22], p. 33), comparable to the measurement error of 1 % ([22], p. 32). By contrast, in this paper, neglecting the AMSD and AMD in the calculation of the partial self-inductance of coils has led to errors in the total inductance of up to 30 %, and up to 16 % for replacing the AMD by the GMD. Due to the scalability of coil inductance ([10], p. 39), these results can be reproduced for coils of any dimension, in any inductance range, and at any frequency low enough to represent DC. Therefore, this paper provides the first experimentally verifiable evidence of the usefulness of the full method of mean distances as introduced in [15]. It has been found that the maximum error of the formula over all parameters except the coil aspect ratio strictly decreases with increasing coil aspect ratio, up to a certain limit depending on the number of turns. Up to this limit, the maximum error for any coil design as a function of the coil aspect ratio can be found by linear interpolation of the data provided in this paper. Above this limit, the error has been found to increase or even to become chaotic. Note that the maximum error of the current sheet approximation [12] also decreases, but it is not possible to define a convex domain of definition where the error decreases strictly. The maximum error of the formula (5.6 %) is clearly larger than the largest contribution (0.4 %) found from the approximations for the mean distances. The conclusion is that the error must originate from the concept of the average-sized single-turn coil on which the formula is based. The varying lengths of the parallel conductor segments on one side of the spiral coil are all approximated by the same average length.
This has been a prerequisite for using the method of mean distances for a row of rectangles. Thus, considering these length variations in future work may be expected to significantly improve the accuracy of the formula, but at the cost of substantially increasing its complexity, provided that it will still be possible to derive a closed formula at all, rather than just reverting to the Greenhouse method [9]. Transponder antennas are often manufactured from round wire. It would be desirable to derive a modified formula for coils with circular conductor cross section. Formulae for all respective mean distances can be found in [15], even for the high-frequency limits (but neglecting the proximity effect). The challenge will be that the software FastHenry2 is limited to conductors of rectangular cross section, so that other means of doing exact inductance calculations for the error analysis will have to be sought.

Appendix 1

In this appendix, the exact expression for log R₂ is derived, i.e. the logarithm of the GMD between two congruent rectangles of width w and height h, horizontally displaced by the distance d, see Fig. 8, as well as the exact expression for log R₁, the logarithm of the GMD between a single rectangle and itself. The definition of the GMD is given by the double area integral normalized by the squared area, as in equation (7). First, we must find the antiderivative f given by the fourfold indefinite integral of the logarithm of the distance over the two pairs of rectangle coordinates. Noting that log √u = (1/2) log u, finding the solution is tedious but straightforward. It was also presented in a different form in [23]. The result is equation (32). Apart from the normalizing factor in equation (7), we need the definite integral over both rectangles, equation (35). According to equation (32), the antiderivative f is an even function of the differences x − x′ and y − y′. Therefore, the signs of the differences in the arguments of f do not matter, and equation (35) simplifies to

−4 f(d, h) + 4 f(d, 0) + 2 f(d − w, h) − 2 f(d − w, 0) + 2 f(d + w, h) − 2 f(d + w, 0).   (36)

We must normalize the integral (36) by the square of the rectangular area F = wh, which was omitted in equation (33). The result was checked numerically against MATLAB's function integral2; the equality could be established to a relative accuracy of 10⁻¹⁰.
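The numerical check against integral2 mentioned above can be sketched as follows. The fourfold GMD integral is reduced to a double integral over the coordinate differences u = x′ − x and v = y′ − y, whose probability densities are triangular; the dimensions are illustrative and this reduction is my own formulation, not the paper's.

w = 1.0; h = 0.5; d = 3.0;                      % example dimensions (d > w)
pu = @(u) max(w - abs(u - d), 0) / w^2;         % density of x' - x
pv = @(v) max(h - abs(v), 0) / h^2;             % density of y' - y
f  = @(u, v) 0.5*log(u.^2 + v.^2) .* pu(u) .* pv(v);
logR2 = integral2(f, d - w, d + w, -h, h);      % log of the GMD
R2 = exp(logR2);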
13,209
2020-02-07T00:00:00.000
[ "Physics" ]
Regional effects of atmospheric aerosols on temperature: an evaluation of an ensemble of online coupled models

The climate effect of atmospheric aerosols is associated with their influence on the radiative budget of the Earth due to the direct aerosol–radiation interactions (ARIs) and indirect effects, resulting from aerosol–cloud–radiation interactions (ACIs). Online coupled meteorology–chemistry models permit the description of these effects on the basis of simulated atmospheric aerosol concentrations, although there is still some uncertainty associated with the use of these models. Thus, the objective of this work is to assess whether the inclusion of atmospheric aerosol radiative feedbacks in an ensemble of online coupled models improves the simulation results for maximum, mean and minimum temperature at 2 m over Europe. The evaluated model outputs originate from EuMetChem COST Action ES1004 simulations for Europe, differing in the inclusion (or omission) of ARI and ACI in the various models. The case studies cover two important atmospheric aerosol episodes over Europe in the year 2010: (i) a heat wave event and a forest fire episode (July–August 2010) and (ii) a more humid episode including a Saharan desert dust outbreak in October 2010. The simulation results are evaluated against observational data from the E-OBS gridded database. The results indicate that, although there is only a slight improvement in the bias of the simulation results when including the radiative feedbacks, the spatiotemporal variability and correlation coefficients are improved for the cases under study when atmospheric aerosol radiative effects are included.

Introduction

Atmospheric aerosol particles are known to have an impact on Earth's radiative budget due to their interaction with radiation and cloud properties, which depends on their optical, microphysical and chemical properties, and they are considered to be the most uncertain forcing agent. They influence climate by modifying the global energy balance through both absorption and scattering of radiation (direct effect) and by acting as cloud condensation nuclei, thus affecting clouds' droplet size distribution, lifetime (Twomey, 1977; Lohmann and Feichter, 2005; Chung, 2012) and reflectance (indirect effects) (Ghan and Schwartz, 2007; Yang et al., 2011). Depending on the atmospheric aerosol concentration, aerosol-cloud interactions may result in an increase or decrease in the liquid water content, cloud cover and lifetime of low-level clouds and a suppression or enhancement of precipitation (Bangert et al., 2011). Moreover, aerosol absorption may decrease low-cloud cover by heating the air and reducing relative humidity. This leads to a positive radiative forcing, termed the semi-direct effect, which amplifies the warming influence of absorbing aerosols (Hansen et al., 1997). The Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) (Boucher et al., 2013; Myhre et al., 2013) distinguishes between aerosol-radiation interactions (ARIs), which encompass the aerosol direct and semi-direct effects, and aerosol-cloud interactions (ACIs), which encompass the indirect effects.
In order to account for these atmospheric aerosol effects, fully coupled models for meteorological, chemical and physical processes are needed. Online coupled models include the interaction of atmospheric pollutants (gaseous-phase compounds and aerosols) with meteorological variables (Baklanov et al., 2014). In this context, in its phase 2, the Air Quality Model Evaluation International Initiative (AQMEII) (Alapaty et al., 2012; Galmarini et al., 2015) focused on the assessment of how well the current generation of coupled regional-scale air quality models can simulate the spatiotemporal variability in the optical and radiative characteristics of atmospheric aerosols and associated feedbacks among aerosols, radiation, clouds and precipitation. On this basis, a coordinated exercise of working groups 2 and 4 of the COST Action ES1004, European framework for online integrated air quality and meteorology modeling (EuMetChem, http://eumetchem.info), emerged in order to take into account the radiative feedbacks of atmospheric aerosol effects on meteorology. In this initiative, two important episodes with high loads of atmospheric aerosols were analyzed, which had been identified during the previous AQMEII phase 2 modeling intercomparison exercise (Galmarini et al., 2015). They were selected due to their strong potential for aerosol-radiation and aerosol-cloud-radiation interactions (Makar et al., 2015a, b; Forkel et al., 2015). As a result of the AQMEII phase 2 initiative and the EuMetChem COST Action, several studies covering the analysis of the ARI+ACI feedbacks on meteorology have been done (e.g., Baró et al., 2015; Forkel et al., 2015, 2016; Kong et al., 2015; San José et al., 2015). Regarding the effects of including ARI+ACI on temperature, Forkel et al. (2015) focused on the 2010 Russian wildfire episode, where the presence of the atmospheric aerosols decreased the 2 m mean temperature during summer 2010 by 0.25 K over the target area. For the same episode, Péré et al. (2014) showed daily mean surface temperature reductions between 0.2 and 2.6 K. Forkel et al. (2012) studied a 2-month episode (June to July 2006) in order to capture medium-range effects of the direct and indirect aerosol effects on meteorological variables and air quality. They found slightly lower temperatures over western Europe when including atmospheric aerosol feedbacks. This reduction followed the same pattern as the planetary boundary layer height. Moreover, during July 2006, Meier et al. (2012) found a general decrease of 0.14 K in 2 m temperature when simulating absorbing aerosols in upper layers compared to an aerosol-free troposphere over land surfaces. However, all these studies are based on individual model evaluations and do not take into account an ensemble of regional models, which would help to build confidence in model simulations and to characterize the uncertainty associated with the use of different modeling systems. Therefore, the objective of this work is to assess whether the outputs of an ensemble of regional online coupled model simulations including aerosol radiative feedbacks, during two important atmospheric aerosol episodes of the year 2010, improve the prediction of maximum, mean and minimum temperature at 2 m over Europe.
Methodology

The analyzed model outputs are the results of a coordinated modeling exercise performed within the COST Action ES1004 (EuMetChem). In order to analyze the ARI or ARI+ACI effect on temperature, it was suggested to run three case studies for two episodes with different online coupled models with identical meteorological boundary conditions and anthropogenic emissions. The two considered episodes are (i) the Russian heat wave and wildfire episode in the summer of 2010 (25 July–15 August 2010) and (ii) an autumn Saharan dust episode, including dust transport to Europe (2–15 October 2010). The weather conditions during the Russian forest fires were mainly dry and particularly hot, with light winds (Péré et al., 2014; Kong et al., 2015). During this situation, sea-level pressure (SLP) showed a high-pressure system over the northeastern part of the Russian area, giving a strong positive SLP anomaly for this period. This resulted in a strong positive surface temperature anomaly accompanied by weak winds from the southeast (Baró et al., 2017). On the other hand, the dust period is characterized by a very deep trough with a vortex reaching 20° N latitude. This situation was maintained for several days, causing continuous transport at middle levels. It is also worth mentioning the blocking situation over central Europe. The dust event was dominated by strong southeasterly winds. This may explain windblown dust emissions increasing with wind speed and being transported to some parts of the European area (Kong et al., 2015). For the chosen episodes, simulations with each model were performed with and without considering the atmospheric aerosol effects. Three different configurations were requested: the first does not consider any aerosol effect feedbacks on meteorology (no radiative feedbacks, NRF; C11 fire and C21 dust episode); in the second, only aerosol-radiation interactions are considered (ARI; C12 fire and C22 dust episode); and in the third, aerosol-radiation and aerosol-cloud interactions are considered (ARI+ACI; C13 fire and C23 dust episode; this case could not be submitted by all of the participants). Although the NRF case does not consider the aerosol effects and feedbacks, this configuration assumes a constant cloud droplet number concentration of 250 cm⁻³, used by WRF-Chem in the absence of ACI for estimating the cloud droplet number. This number is used in the corresponding microphysics parameterization (Morrison or Lin). On the other hand, ARI uses this constant value for accounting for the interaction between aerosols and clouds but allows the modification of the radiation budget by using the online-estimated aerosols. Lastly, the ARI+ACI cases are based on simulated aerosol concentrations which interact with both radiation and clouds. The common setup for the participating models and a unified output strategy allow analyzing the model output with respect to similarities and differences in the model response to the aerosol direct effect and aerosol-cloud interactions.

Participating models

An overview of the different models and their configurations is shown in Table 1, where the model acronym is shown in the first row. The participating models are COSMO-MUSCAT (Wolke et al., 2012) and WRF-Chem (Grell et al., 2005; Fast et al., 2006; Gustafson Jr.
et al., 2007; Chapman et al., 2009; Grell and Baklanov, 2011) with different chemistry and physics options. The table also includes the episodes run for each model. The horizontal grid spacing is around 25 km for most of the contributions. Only for the fire episode were the COSMO-MUSCAT simulations made with a grid width of 0.125° (approximately 14 km); there is an additional WRF-Chem run with 9 km grid spacing. The COSMO models use Kessler-type bulk microphysics (Doms et al., 2011), and WRF-Chem uses Morrison microphysics (Morrison et al., 2009), except for one contribution that utilizes Lin (Lin et al., 1983). The COSMO models use a prognostic turbulence kinetic energy (TKE) planetary boundary layer (PBL) scheme (Doms et al., 2011). The Yonsei University (YSU) PBL scheme (Hong et al., 2006) was chosen for the WRF-Chem simulations. In general, the Modal Aerosol Dynamics Model for Europe (MADE) is applied (Ackermann et al., 1998), except for one WRF-Chem simulation, which uses the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) (four aerosol size bins) approach (Zaveri et al., 2008). For further information and details about the models, we refer to the work of Forkel et al. (2015), Im et al. (2015a), Im et al. (2015b) and Baró et al. (2015). To enable the cross comparison between models, the participating groups interpolated their model output to a common grid with 0.1° resolution. Moreover, the ensemble of the available simulations has also been included in this comparison, as recommended by several studies (Vautard et al., 2012; Jiménez-Guerrero et al., 2013; Landgren et al., 2014; Solazzo and Galmarini, 2015; Kioutsioukis et al., 2016), in order to check whether the design of an ensemble of simulations outperforms (or not) the skill of individual models.

Emissions and boundary conditions

For the EU domain, the anthropogenic emissions for the year 2009 (http://www.gmes-atmosphere.eu/) were applied by all modeling groups and are based on the TNO-MACC-II (Netherlands Organization for Applied Scientific Research, Monitoring Atmospheric Composition and Climate - Interim Implementation) framework (Kuenen et al., 2014; Pouliot et al., 2015). As described in Im et al. (2015a), annual emissions of methane (CH₄), carbon monoxide (CO), ammonia (NH₃), total non-methane volatile organic compounds (NMVOCs), nitrogen oxides (NOₓ), particulate matter (PM₁₀ and PM₂.₅) and sulfur dioxide (SO₂) from 10 activity sectors are provided on a latitude-longitude grid of 1/8° × 1/16° resolution. Consistent temporal profiles (diurnal, day-of-week, seasonal) and vertical distributions were also made available to the AQMEII and EuMetChem participating groups for time disaggregation. The temporal profiles for the EU anthropogenic emissions were provided from Schaap et al. (2005). For further details, the reader is referred to Im et al. (2015a) and Im et al. (2015b). Hourly biomass burning emissions were provided by the Finnish Meteorological Institute (FMI) fire assimilation system (http://is4fires.fmi.fi/) (Sofiev et al., 2009). More details on the fire emissions and their uncertainties are discussed in Soares et al. (2015). The fire assimilation system only provides data for total PM emissions; the estimation of emissions for other species is described in Im et al. (2015b).
The chemical initial conditions (ICs) were provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) IFS-MOZART model and are available at 3 h time intervals, provided in daily files with eight different times of day per file. They were run under the MACC-II project (Monitoring Atmospheric Composition and Climate - Interim Implementation), which uses an updated data set of anthropogenic emissions and assimilates satellite observations of O₃, CO and NO₂ in the IFS-MOZART system.

Observational database

The comparison of regional models with gridded data sets has to be undertaken with care given the differences between available databases. For instance, Gómez-Navarro et al. (2012) showed that even in areas covered by dense monitoring networks, uncertainties in the observations are comparable to the uncertainties within state-of-the-art regional climate models, at least when they are driven by nominally perfect boundary conditions like reanalysis. This work uses the E-OBS (Haylock et al., 2008) version 11.0 gridded observational database for maximum, mean and minimum temperature. E-OBS is a high-resolution European land-only daily gridded data set covering the period 1950–2014. The E-OBS 0.25° regular latitude-longitude grid has been used as the reference for validation. Thus, data from all model runs have been bilinearly interpolated onto the E-OBS grid. Since the resolution of the models is similar to that of E-OBS, the interpolation procedure is not expected to alter our results significantly. The choice of this gridded data set is based on the abundant scientific literature using E-OBS for the evaluation of regional climate models (e.g., Costa et al., 2012; Jiménez-Guerrero et al., 2013; Turco et al., 2013; Ceglar et al., 2014, among many others). However, several authors highlight the limitations of E-OBS. Thus, Kysely and Plavcova (2010) compared E-OBS with a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt), finding that large differences existed between the two gridded data sets, particularly for minimum temperatures and the diurnal temperature range. The errors tended to be larger in the tails of the distributions. Therefore, when evaluating regional models against one gridded data set, results have to be treated with caution.

Validation methodology

All the statistical measures are calculated at individual grid points. Only land grid points are considered in the analysis, since these are the only points where E-OBS contains information. Areas in grey indicate cells where E-OBS data are not available (the southeastern part of the domain for the wildfires or the southern part of the domain in the dust episode) or areas not covered by the modeling domain (the southern part of the domain for the CS2 configuration). We will use the notation V^k_ipc for a variable from model k at grid point i in period p = fires, dust and case c = 1, 2, 3, representing no radiative feedbacks, ARI and ARI+ACI, respectively. If we use bracket notation for an average over a given index (e.g., ⟨·⟩_p), we can express the bias at a given grid point as

bias = ⟨V^k_ipc − O_ip⟩_p,

where O_ip is the observed value. The model bias is the simplest measure of model performance.
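In code, the bias at a grid point is simply the time mean of the model-minus-observation series. A minimal sketch with synthetic data (array sizes and values are illustrative):

nDays = 31;                     % length of an episode, in days
V = 285 + randn(nDays, 1);      % model daily series at one grid point (K)
O = 284.5 + randn(nDays, 1);    % E-OBS daily series at the same point (K)
biasPoint = mean(V - O);        % bias at this grid point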
The ensemble mean, ⟨V^k_ipc⟩_k, is usually considered as an additional simulation which compensates for the errors of the different ensemble members. Even though this is a very simplistic view of the ensemble (which should be considered from a probabilistic point of view), it can be useful to reinforce the common signal of the different models in our analysis of the mean climate. Notice, however, that the ensemble mean is not a physical realization of any of the models, but just a statistical average (Knutti et al., 2010; Jiménez-Guerrero et al., 2013). Then, the variability was assessed on the hourly series V^k_ipc. The ability to represent the variability can be decomposed into the ability to represent its size, which can be quantified by the standard deviation of the series, sd[V]^k_ic, to be compared to that of the observations, sd[O]_i, and the ability to represent the hourly variations, which can be quantified by the linear determination coefficient (ρ²) between the simulated and observed series. The latter ability can only be expected in simulations nested into "perfect" boundary conditions such as those considered in this study. Finally, pattern agreement between simulated and observed data was quantified by means of the spatial correlation (r) and the ratio between simulated and observed standard deviations, s. This information can be summarized in a Taylor (2001) diagram, which is a polar plot with radial coordinate s^k and angular coordinate related to r^k.

Aerosol representation

In order to address the influence of aerosol effects on the surface temperature, it is crucial to have an understanding of the aerosol loading, both observed and modeled. For that purpose, aerosol optical depth (AOD) from the MODIS platform (Levy et al., 2013) is used, precisely Level 2 of the Atmospheric Aerosol Product (MxD04_L2), collection 6 (C6), with a resolution of 10 km. Palacios-Peña et al. (submitted to ACP, this issue) provide full details of the evaluation of the same set of models presented here against diverse satellite observations for AOD. The current contribution includes only a brief description of those results. Figure 1 represents the model-MODIS comparison of AOD at 550 nm both for the fire and the dust episode.
For the Russian wildfire episode, the highest values of AOD measured by MODIS (around 2.7) are found over Russia and the surrounding areas, due to the emissions produced by the wildfires. According to the estimation of the bias (MBE), all WRF-Chem simulations (CS1, CS2, ES1, ES3) and the ensemble underestimate AOD over the fire-affected areas (minimum MBE values for NRF: the ensemble −1.30; CS1 −1.46; CS2 −1.61; ES1 −1.46; and ES3 −1.62). Over the rest of the domain, a lower overestimation (around 0.5) is produced by the WRF-Chem simulations (maximum MBE values for NRF: CS1 0.55; CS2 0.37; ES1 0.45; and ES3 0.64) and the ensemble (maximum MBE value for NRF 0.23). For DE3, the underestimation is lower (minimum MBE value for NRF −0.72) and does not cover as large an area as in the rest of the experiments; however, over the rest of the domain a higher overestimation is found in DE3 (maximum MBE value for NRF 2.61). Generally, for ARI and ARI+ACI simulations, slightly lower MBE values than for NRF are found in all the experiments (for example, in the ES1 simulations: NRF −1.46; ARI −1.43; ARI+ACI −1.41). However, the MBE for the ensemble (NRF −1.3; ARI −1.23; ARI+ACI −1.40) does not show this improvement, but this analysis should be treated with caution because the ARI+ACI ensemble does not include the DE3 simulations. For the dust episode, AOD values measured by MODIS > 0.5 are observed over the southeast of the domain due to the transported dust. This value is not very high for a dust outbreak, which is caused by the wet deposition (rain during the episode). The highest AOD values, around 1.3, are found over a small area near the Po Valley. All experiments (no CS2 simulations are available in this case) underestimated high AOD values (over the southeast of the domain). MBE values over this area are around −0.3 for DE3, but for the rest of the experiments (WRF-Chem simulations) these values are around −0.2. However, small areas with a higher underestimation are found over this zone (minimum MBE values: the ensemble −0.73; CS1 −0.68; DE3 −0.84; ES1 −0.70; ES3 −0.67). Over the rest of the domain, small overestimations are modeled (MBE values around 0.1). Conversely, small, specific areas with a high overestimation are found (maximum MBE values: ENS 0.54; CS1 0.81; DE3 0.62; ES1 0.48; ES3 1.09).

Bias

The results for the daily bias of maximum, mean and minimum temperature have been obtained by calculating the bias of the daily mean series at each grid point of all the land grid points of the corresponding domain for the fire and dust episodes. They are summarized in Table 2 for the entire domain. Table 3 only considers the biases in those cells and time steps with a high load of aerosols (masking only those areas where 1 h AOD550 > 1.0 in the fire case or 1 h AOD550 > 0.5 in the dust case). During the fire episode (Fig.
2a and c), there is a general underestimation of the maximum temperature in the base case (average domain values from −2.1 K in ES3-C11 to −1.2 K in DE3-C11 for the entire domain, or −5.7 K in CS2-C12 to −3.0 K in DE3-C11 only in those areas with 1 h AOD550 > 1.0). This is especially noticeable over several cells in Russia (up to −7 K). Conversely, a general overestimation is found in the west and northwest of the domain (positive differences between +1.0 K in DE3-C11 and +6.5 K in ES1-C11). When introducing the ARI or ARI+ACI, model biases do not improve (mean variation in the bias of +17.2 % in C12 and +11.0 % in C13 for the entire domain). This positive variation was expected because of the cold bias of the models in reproducing maximum temperature and the overall cooling effect of aerosols. However, the improvement of introducing aerosol-cloud interactions is remarkable with respect to the case of including just aerosol-radiation effects (the bias is reduced by 6.2 % in ARI+ACI with respect to ARI simulations). During the dust episode (Fig. 2b and d), the analysis of the results is similar to the fire case (averaged domain underestimations around −1.0 K in DE3-C21 to −0.56 K in ES1-C21; −4.1 K in DE3-C21 to −2.8 K in ES1-C21 only for areas and time steps where 1 h AOD550 > 0.5). Here, the inclusion of ARI (C22) leads to a mean increase in the bias of +10.2 % for the entire domain, but ARI+ACI (C23) leads to a very limited improvement of the simulations with respect to the base case (C21), i.e., generally to reductions in the bias of around −0.4 %. A similar discussion can be had for mean temperature. During the fire episode (Fig. 3a and c), all runs (except DE3) tend to underestimate the domain-averaged mean temperature (biases ranging from −0.4 K in ES1-C11 to +1.0 K in DE3-C11; for those areas where AOD550 > 1.0, biases range from −1.1 K in CS2-C13 to +1.0 K in DE3-C11). Here, the ensemble (ENS) simulation clearly outperforms the individual simulations (bias of −0.2 K in ENS-C11 for the entire domain and −0.1 K in the high-AOD domain). Again, the model skill does not improve for mean temperature when including ARI or ARI+ACI (bias increases of 46.0 and 56.2 %, respectively, for the fire episode averaged over the entire domain), except in the case of the DE3-C12 simulation (including ARI reduces the bias by −27.3 %). During the dust episode (Fig. 3b and d), there is a general averaged overestimation of mean temperature (+0.4 K in ES1-C21 to +0.8 K in DE3-C21; for those areas where AOD550 > 0.5, biases range from −0.5 K in CS1-C21 to −0.1 K in ES1-C21). Conversely to the fire episode, the inclusion of ARI and ARI+ACI improves the entire-domain bias (reductions of −13.4 % in C22 and −4.2 % in C23). The reduction in the bias when including ARI+ACI is especially remarkable for the ensemble of simulations, where the bias decreases by −24.4 % in ENS-C23 (averaged over the entire domain). Lastly, minimum temperature during the fire episode is shown in Fig.
4a and c. Here, the analysis of results regarding improvements or worsening of the bias is very different, since the domain-averaged errors are on the order of −0.01 K for WRF-based models in C11 and C12, so a very slight difference would lead to a percentage increase (or reduction) in the bias compared to the base case. However, for DE3-C11 the bias is larger (up to +1.6 K for minimum temperature averaged over the whole domain), and the inclusion of ARI leads only to a small improvement (−1.5 %). While the conclusions are similar for areas with 1 h AOD550 > 1.0, WRF-Chem-based models present biases of around +3.0 to +3.5 K for the fire episode, and of around +4.5 K for DE3-C11 and DE3-C12. The dust case (Fig. 4b and d) shows a general overestimation of minimum temperature for domain-averaged values, with base-case biases ranging from +0.5 K in ES1-C21 to +1.8 K in DE3-C21 (biases from +2.0 K in ES3-C23 to +3.5 K in DE3-C21 in areas with AOD550 > 0.5). Here, the inclusion of ARI and ARI+ACI slightly improves the average bias for the entire domain (reductions of −10.5 % in C22 and −5.0 % in C23). Here again, the improvement of the ENS-C22 and ENS-C23 simulations is larger than for the rest of the models (reductions in the bias of −29.7 and −38.2 % for ARI and ARI+ACI, respectively). Analogous discussions can be had for the domain masked according to the AOD550 values.

Temporal correlation

The temporal correlation (estimated through the coefficient of determination, ρ²) between simulated and observed series is shown in Figs. 5, 6 and 7 for maximum, mean and minimum temperature, in that order. These results are also summarized in Table 4 for the entire domain. Table 5 only considers the temporal correlation in those cells and time steps with a high load of aerosols (masking only those areas where 1 h AOD550 > 1.0 in the fire case or 1 h AOD550 > 0.5 in the dust case). Since the values and conclusions are very similar, only the results for the entire domain are discussed below.
The first column in each panel represents the value of ρ² of the base case (C11 or C21) of each individual model (or the ensemble) with respect to the E-OBS database. The center (C12 or C22) and right (C13 and C23) columns indicate the increase (red values) or decrease (blue values) in ρ² for each simulation with respect to the case not including feedbacks. This gives an idea of the improvement (or not) in the skill of the model in representing the time evolution of the series when compared to the observations. For maximum, mean and minimum temperature during the fire episode (Figs. 5a, 6a and 7a, respectively), the domain-averaged ρ² is higher than 0.5 for all models (0.52 in CS1-C11 for minimum temperature to 0.78 in DE3-C11 for mean temperature). In general, the coefficients of determination are highest for mean temperature (ranging from 0.60 to 0.78 depending on the individual model) with respect to minimum and maximum temperature. The variable with the lowest ρ² is minimum temperature (varying from 0.50 to 0.56 depending on the model). Moreover, the coefficient of determination for the ensemble is always higher than that of each individual model for the three studied variables (0.75, 0.79 and 0.61, respectively, for maximum, mean and minimum temperature). The highest ρ² values are found over the north and west of the domain (above 0.8 for mean temperature) and the lowest mainly over the south and southeast of the domain (under 0.2). When including the ARI and ARI+ACI, a general improvement with respect to the C11 case is observed for maximum and mean temperature, with positive values reaching up to 0.18 (domain-averaged values improve for individual models by around 1 % for maximum and 0.3 % for mean temperature). The correlation for minima experiences a slight decrease (−0.4 %) when including ARI or ARI+ACI for the ensemble mean. During the dust episode (Figs. 5b, 6b and 7b), the domain-averaged ρ² is higher than for the fire episode for all models and variables in the base case (0.76 in DE3-C21 for minima to 0.90 in DE3-C21 for mean temperature), with the ensemble again providing the highest correlation (values of 0.88 for maximum, 0.91 for mean and 0.84 for minimum temperature). As before, the inclusion of the ARI and ARI+ACI shows an improvement over some areas on the order of 0.17 for mean and maximum temperature, with domain-averaged improvements of 0.3 % in C22-C23 for maximum temperature, 0.2 % in C22-C23 for mean temperature and 0.1 % in C23 for minimum temperature, with no improvement for C22 in this latter variable.

Temporal variability

The results for the daily variability in maximum, mean and minimum temperature have been obtained by calculating the standard deviation of the daily mean series at each grid point of all the land grid points of the corresponding domain for the fire and dust episodes. Considering maximum temperature, in the fire episode (Fig.
8a and c), all runs tend to slightly overestimate the standard deviation of maximum temperature for the base case (no radiative feedbacks), with biases of maximum temperature standard deviation varying between +1.28 K for DE3-C11 to +0.25 K for ES1-C11.The biases of the standard deviation are reduced by −22.6 % (on average) when including the ARI, with reductions in the biases of the standard deviation ranging from −34.2 % in ES1-C12 and −8.6 % for DE3-C12.For the ARI+ACI simulations, the average reduction in the bias is −41.21 % (−56.9 % for ES1-C13 and −24.40 % for CS2-C13).The rest of the models and cases show an intermediate behavior for representing the variability, with the best skills always for the cases including the ARI+ACI interactions.Analogous results can be found for maximum temperature during the dust episode (Fig. 8b and d): the inclusion of aerosol feedbacks generally improves the represen-tation of the temporal variability in maximum temperature, with an average reduction in the bias of the standard deviation of −5.9 % (−16.6 %) for ARI (ARI+ACI) simulations. For mean temperature during the fire episode (Fig. 9a and c), all runs tend to overestimate the standard deviation for the base case (no radiative feedbacks), with biases of mean temperature standard deviation between +0.2 and +1.1 K.As for the maximum temperature, the biases of the standard deviation are reduced on −41.8 % (on average) when including the ARI and −66.5 % for the ARI+ACI simulations, with reductions in the biases of the standard deviation ranging from −8.5 % in the DE3-C12 simulation to −78.2 % in the ES1-C13 case.Similar to the maximum temperature, the rest of the models and cases show an intermediate representation of the variability in the mean temperature, with the best skills always for the cases including the ARI+ACI interactions.Results for the dust episode are shown in Fig. 9b and d.The standard deviation tends to be overestimated by all models in the north of Africa and central Europe, and underestimated in the eastern part of the target domain.Overall, the inclusion of ARI does not lead to better skills of the models when representing the temporal variability (+2.4 %), and for ARI+ACI the skill improved only marginally (reductions of −0.6 %). With respect to the minimum temperature, for the fire episode (Fig. 10a and c) all runs tend to overestimate the standard deviation.Biases of the minimum tempera- If considering the biases of the standard deviation, there is a slight improvement when including ARI or ARI+ACI for the fire episode, while a slight worsening is depicted for the dust case.The variations in the biases of the standard deviation are on average −2.1 and −4.9 %, respectively, for the ARI and ARI+ACI simulations (+3.4 and +5.4 % for the dust episode). Spatial variability Taylor diagrams (Taylor, 2001) allow an easy comparison between the spatial and temporal patterns of two fields (Rauscher et al., 2010).In Fig. 11 shows the relative spatial standard deviation (radial distance from the origin) and the correlation (the cosine of the angular coordinate) with E-OBS.Model results with good performance in terms of spatial variability and correlation are located closer to the standard deviation ratio 1 and correlation 1, which corresponds to E-OBS (indicated by the small black asterisk).For maximum, mean and minimum temperature, the diverse models (and configurations) show a narrow spread in the representation of the spatial structure of the standard deviation. 
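The statistics behind such a Taylor diagram reduce to two numbers per model field; a minimal sketch (illustrative, not the study's code) is:

```python
import numpy as np

def taylor_stats(model_field, obs_field):
    """Statistics placing one model field on a Taylor diagram.
    model_field, obs_field: 1-D arrays of the episode-mean field over
    land grid points. Returns (standard deviation ratio, correlation);
    the reference (E-OBS) sits at (1.0, 1.0)."""
    sd_ratio = np.std(model_field) / np.std(obs_field)  # radial distance
    corr = np.corrcoef(model_field, obs_field)[0, 1]    # cosine of the angle
    return sd_ratio, corr
```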
With respect to the mean field of maximum temperature (Fig. 11a and d), all models perform well for the fire period (panels a-c), with high spatial correlations (over 0.9) and a normalized standard deviation close to the observations. However, the no-radiative-feedback configuration (C11 cases in Fig. 11) represents excessive spatial variability (standard deviation ratio over 1). The spatial variability in the daily standard deviation for the ARI simulations (asterisks in Fig. 11, C12 cases), as well as for the ARI+ACI simulations (squares, C13 cases), is substantially improved, while the spatial correlation remains practically constant for all models. Since there is a positive bias in the models when representing the spatial variability in the no-radiative-feedback simulations, the inclusion of radiative effects reduces the variability and therefore improves its spatial patterns. Analogous results are found for the dust episode (Fig. 11d-f), with a larger agreement between models and smaller differences between the C21, C22 and C23 cases (no feedbacks, ARI and ARI+ACI simulations, in that order).

With respect to mean temperature (Fig. 11b and e), the models perform very similarly to each other, showing a high spatial correlation with the observations (over 0.9 for all models and cases), with a small overestimation of the spatial variability for the C11 case (fire episode, no radiative feedbacks; panels a-c), which improves when including the ARI and ARI+ACI interactions. Similarly, the spatial variability is slightly overestimated for the C21 case (dust, no radiative feedbacks), except for the DE3 model. Generally, the models capture the spatial structure of the variability during the fire and dust cases better (Fig. 11b and e) when including the radiative feedbacks. The correlation is only slightly improved for the ARI and ARI+ACI cases (except for the ENS simulations, which are discussed below) and is always higher for mean temperature than for maximum temperature.

The minimum temperature (Fig. 11c and f) is captured as well as the maximum and mean temperature. While for the fire episode the models (in all cases) tend to provide a higher spatial variability than the observations, the spatial variability is underestimated for the dust episode, but with a high correlation (over 0.9) for both episodes. For this variable, the benefit of including the radiative feedbacks is not as evident, since the spatial variability does not generally improve for the C12, C13, C22 or C23 cases with respect to the configuration without radiative feedbacks. Moreover, the correlation coefficient is even slightly reduced with the inclusion of ARI or ARI+ACI.

Lastly, the added value of considering the ensemble mean of all available simulations in each episode and case is clear for the fire episode but not as obvious for the dust period. For the fire episode, the ensemble mean outperforms the individual models in terms of both the standard deviation and the correlation coefficient, especially for mean temperature, where the correlation increases up to 0.99 for the ENS-C11 case. The exception is found for ENS-C13 for minimum temperature. Generally, the skill of most models improves when aerosol-meteorology interactions are taken into account. For the dust case, the ensemble mean outperforms the individual models in representing the standard deviation (that is, the spatial variability). However, the spatial correlation coefficient is somewhat reduced compared to the individual models.
Summary and conclusions

This study presents a collective operational evaluation of the temperature at 2 m (maximum, mean and minimum) simulated by the coupled chemistry-meteorology models under the umbrella of COST Action ES1004 for a wildfire and a dust episode in the year 2010. The meteorological parameters considered in this assessment are important for understanding the effect of aerosol interactions with clouds and radiation. In this sense, this study complements several other analyses (e.g., Brunner et al., 2015; Forkel et al., 2015; Makar et al., 2015b) by analyzing whether or not the inclusion of the radiative feedbacks improves the representation of the temperature field (maximum, mean and minimum) in an ensemble of simulations.

Focusing on the bias, in both episodes there is a general underestimation of the studied variables, which is most noticeable in maximum temperature. In general, there is no straightforward conclusion with respect to the improvement (or not) of the bias when introducing aerosol radiative feedbacks. Broadly, the biases are improved when including ARI or ARI+ACI in the dust case, but no evident improvements are found for the heat wave/wildfire episode. Although the ensemble does not generally outperform the individual models, the improvements found when including ARI and ARI+ACI are far more remarkable for the ensemble than for the individual models.

With respect to the temporal correlation, maximum and mean temperatures in the fire and dust episodes show higher correlations with the E-OBS database over most of the domain for the C11 case than minimum temperature. During these episodes, a twofold conclusion can be drawn: (1) the ensemble of simulations always outperforms the individual models in representing the temporal evolution of the series; and (2) an improvement of the ρ² coefficient is found when considering ARI or ARI+ACI feedbacks (in both episodes).

Regarding the temporal variability, during the fire episode there is a general, pronounced overestimation of the standard deviation of the studied variables. Here, the inclusion of aerosol feedbacks largely improves the representation of the temporal variability of the three studied variables (reduction in the bias of the standard deviation), with the best skill for the cases including the ARI+ACI interactions and a reduction in the bias of the standard deviation of as much as 75 %. Very similar results are found for the dust episode. Generally, the inclusion of the aerosol radiative feedbacks shows the largest improvements for temporal variability, and this represents an added value of the computational effort made to include direct aerosol-radiation interactions and aerosol-cloud interactions in the models. Lastly, with respect to the spatial variability for maximum and mean temperature, the inclusion of radiative effects reduces the variability and improves the spatial patterns for both episodes. For minimum temperature, the improvement from including the radiative feedbacks is less evident.
In order to further investigate the impact of including aerosol interactions in online coupled models, more episodes in which aerosol-radiation-cloud interactions play a role should be considered. In this work, the fire episode represents a situation of clear skies, and therefore the ARI feedbacks are dominant. The selection of the dust episode permits the study of aerosol-cloud interactions; most of the ARI+ACI differences found in the models with respect to the base case were located over the Mediterranean Sea. Since the observational data from E-OBS only have values over land, the effect of ARI+ACI over the sea was not evaluated here. Unfortunately, part of the interpretation of the results may be missing due to the unavailability of this database over the ocean. Furthermore, it should be pointed out that all results for the ARI+ACI cases came from WRF-Chem simulations, which may bias the ARI+ACI results towards the behavior of this model.

There are still modeling issues regarding the representation of the temperature field, where maximum temperatures are underestimated and minimum temperatures are overestimated, and the inclusion of the aerosol feedbacks does not improve this situation. Nevertheless, in this study a general improvement in the temporal variability and correlation has been seen. These improvements may be important not only for certain episodes, as analyzed here, but also for the representation of the climatology of temperatures. However, climatically representative periods should be covered in further studies.

Data availability. The modeling data generated are available by contacting the corresponding managing organizations. The E-OBS database is available through the ECAD project platform (http://www.ecad.eu).

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement. This article is part of the special issue "Coupled chemistry-meteorology modelling: status and relevance for numerical weather prediction, air quality and climate communities (SI of EuMetChem COST ES1004) (ACP/GMD inter-journal SI)". It is not associated with a conference.

Figure 1. Aerosol optical depth (AOD) at 550 nm for the fire (a) and dust (b) episodes, as derived from MODIS. The panels below represent the bias for the fire (c) and dust (d) episodes of each simulation with respect to the MODIS AOD.

Figure 2. Maximum temperature (TMAX) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 3. Mean temperature (TEMP) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 4. Minimum temperature (TMIN) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 5. Time determination coefficient (ρ²) (model vs. E-OBS) of the maximum temperature (TMAX) for the fire (a) and dust (b) episodes. The first column in each panel represents the value of ρ² for the no-radiative-feedback case with respect to the E-OBS database. The center and right columns indicate the increase (red values) or decrease (blue values) in each simulation with respect to the case not including feedbacks.

Figure 6. Time determination coefficient (ρ²) (model vs. E-OBS) of the mean temperature (TEMP) for the fire (a) and dust (b) episodes. The first column in each panel represents the value of ρ² for the no-radiative-feedback case with respect to the E-OBS database. The center and right columns indicate the increase (red values) or decrease (blue values) in each simulation with respect to the case not including feedbacks.

Figure 7. Time determination coefficient (ρ²) (model vs. E-OBS) of the minimum temperature (TMIN) for the fire (a) and dust (b) episodes. The first column in each panel represents the value of ρ² for the no-radiative-feedback case with respect to the E-OBS database. The center and right columns indicate the increase (red values) or decrease (blue values) in each simulation with respect to the case not including feedbacks.

Figure 8. Standard deviation (SD) of the maximum temperature (TMAX) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias of the standard deviation for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 9. Standard deviation (SD) of the mean temperature (TEMP) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias of the standard deviation for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 10. Standard deviation (SD) of the minimum temperature (TMIN) for the fire (a) and dust (b) episodes, as derived from the E-OBS database (in K). The panels below represent the bias of the standard deviation for the fire (c) and dust (d) episodes of each simulation with respect to the E-OBS database.

Figure 11. Taylor diagrams for (a, d) maximum temperature, (b, e) mean temperature and (c, f) minimum temperature for the simulations included in the analysis. Panels (a-c) represent the Taylor diagrams for the fire episode, while panels (d-f) stand for the dust episode. The cases included are no radiative feedbacks (filled circles), ARI (asterisks) and ARI+ACI (empty squares). Each configuration is shown in a different color: CS1 (green), CS2 (dark blue), DE3 (red), ES1 (yellow), ES3 (pink) and ENS (black).

Table 1. Modeling systems participating and their contributions to the case studies.

Table 2. Domain-averaged bias (in K) for the fire (C1X) and dust (C2X) episodes of each simulation with respect to the E-OBS database.

Table 3. Bias (in K) in those areas affected by high aerosol optical depths (1 h AOD > 1.0 for the fire, C1X case, and AOD > 0.5 for the dust, C2X case) with respect to the E-OBS database.

Table 4. Domain-averaged coefficient of determination (ρ²) for the fire (C1X) and dust (C2X) episodes of each simulation with respect to the E-OBS database.

Table 5. Coefficient of determination (ρ²) in those areas affected by high aerosol optical depths (1 h AOD > 1.0 for the fire, C1X case, and AOD > 0.5 for the dust, C2X case) with respect to the E-OBS database.
Artemether-lumefantrine to treat malaria in pregnancy is associated with reduced placental haemozoin deposition compared to quinine in a randomized controlled trial

Background: Data on the efficacy of artemisinin-based combination therapy (ACT) to treat Plasmodium falciparum during pregnancy in sub-Saharan Africa are scarce. A recent open-label, randomized controlled trial in Mbarara, Uganda demonstrated that artemether-lumefantrine (AL) is not inferior to quinine to treat uncomplicated malaria in pregnancy. Haemozoin can persist in the placenta following clearance of parasites; however, there are no data on whether ACT can influence the amount of haemozoin or the dynamics of haemozoin clearance.

Methods: Women attending antenatal clinics with weekly screening and positive blood smears by microscopy were eligible to participate in the trial and were followed to delivery. Placental haemozoin deposition and inflammation were assessed by histology. To determine whether AL was associated with increased haemozoin clearance, population haemozoin clearance curves were calculated based on the longitudinal data.

Results: Of 152 women enrolled in each arm, there were 97 and 98 placental biopsies obtained in the AL and quinine arms, respectively. AL was associated with decreased rates of moderate to high-grade haemozoin deposition (13.3% versus 25.8%), which remained significant after correcting for gravidity, time of infection, re-infection, and parasitaemia. The amount of haemozoin proportionately decreased with the duration of time between treatment and delivery, and this decline was greater in the AL arm. Haemozoin was not detected in one third of biopsies and the prevalence of inflammation was low, reflecting the efficacy of antenatal care with early detection and prompt treatment of malaria.

Conclusions: Placental haemozoin deposition was decreased in the AL arm, demonstrating a relationship between the pharmacological properties of the drug used to treat antenatal malaria and placental pathology at delivery. Histology may be considered an informative outcome for clinical trials to evaluate malaria control in pregnancy.

Trial registration: http://clinicaltrials.gov/ct2/show/NCT00495508

Background

Plasmodium falciparum malaria in pregnancy is a major cause of morbidity and mortality for pregnant women and their offspring [1]. Artesunate monotherapy is more efficacious than quinine to treat severe malaria in Asian adults [2] and African children [3] and is now the recommended treatment [4]. The efficacy of artesunate and artemisinin-based combination therapy (ACT) in pregnancy has been well documented in Asia [5-8]; however, data on the efficacy of ACT to treat malaria during pregnancy in sub-Saharan Africa are scarce [9-12]. The WHO currently recommends ACT for the treatment of women in their second and third trimesters [4], yet quinine remains widely used even though the seven-day course is associated with more side effects and poor compliance [9,13,14]. Quinine may remain first-line therapy due to greater availability, prescription habit and possibly lower cost (although this may no longer hold, given the increasing availability of ACT subsidized by various programmes, e.g. the Global Fund, the President's Malaria Initiative, and the World Bank). In an antenatal cohort of women in Mbarara, Uganda, a recent open-label, randomized, non-inferiority trial of artemether-lumefantrine (AL) versus quinine demonstrated no difference in parasitological clearance rates corrected for re-infection by PCR genotyping [9].
Although there was no significant difference in clinical outcome, there was a trend towards decreased rates of low birth weight and pregnancy loss with AL [9]. The primary endpoints for treatment trials of malaria in pregnancy are PCR-adjusted cure rates (day 42) [15] and ultimately birth weight; however, no data exist on whether histopathology can be used to assess treatment efficacy. Histopathological endpoints are well established for clinical trials to treat chronic hepatitis B viral infection [16] and to prevent renal allograft rejection [17], and may also function as surrogates for survival following chemotherapy [18]. During malaria, placental histopathology is strongly associated with birth outcomes [19-21], and has been utilized in a limited number of preventative trials: improved pathology was seen in two trials of intermittent presumptive therapy (IPT) compared to placebo [22,23], whereas no histological differences were observed with vitamin A administration [24] or in two trials examining differing IPT regimens [25,26]. The histological hallmarks of placental malaria infection are parasitized erythrocytes, intervillous inflammatory infiltrate and haemozoin (malarial pigment) deposition in fibrin (Figure 1). In a treatment trial, the majority of women are anticipated to be categorized traditionally as "past" or "uninfected" based on the presence or absence of haemozoin in fibrin [27]. A scoring system for placental malaria that includes semi-quantitative analysis of haemozoin deposition in fibrin was developed to be appropriate for clinical trials with a low incidence of parasitaemia at delivery [19], and was applied here to the current trial. Artemisinin has a greater parasite reduction ratio than quinine [28] and, unlike quinine, is active on early ring stages. Both drugs are efficacious to treat uncomplicated malaria in pregnancy [9]; however, AL was hypothesized to lead to a reduction of the cumulative parasite biomass within the intervillous space of the placenta and thus result in reduced downstream pathology. Haemozoin deposition and inflammation were assessed by histology for the randomized controlled trial from Mbarara to determine whether AL versus quinine to treat uncomplicated malaria in pregnancy was associated with reduced placental haemozoin and increased haemozoin clearance.

Methods

Written informed consent was provided, and the study was approved by four regulatory boards: the Faculty Research Ethics Committee and the Institutional Review Board of the Mbarara University of Science and Technology, the Uganda National Committee for Science and Technology, and the Comités de Protection des Personnes (Ile de France XI, France). The use of specimens was approved by the University of Washington Human Subjects Division. Women attended antenatal clinics from 2006 to 2009 at the Mbarara University of Science and Technology Hospital in Uganda, and were recruited to enter a cohort involving weekly screenings by blood smear. This is an area of meso-endemic transmission; data for children under five demonstrate a 43% prevalence of Plasmodium falciparum, determined by rapid diagnostic test (RDT) and corrected by blood smear, in 2004, declining to 23% and 3% in rural and urban areas, respectively, in 2010 [29]. Women with viable pregnancies >13 weeks gestational age and positive blood smears by microscopy, either asymptomatic or symptomatic but without complicated or severe malaria, were eligible to participate in the open-label, randomized, non-inferiority efficacy trial [9].
Women were directly observed to complete the seven-day course of oral quinine (10 mg base per kg bodyweight every 8 h for seven days) or the three-day course of oral AL (fixed-dose combination of 20 mg artemether and 120 mg lumefantrine at 0 h, 8 h, 24 h, 36 h, 48 h, and 60 h, given with milk). Women with subsequent P. falciparum infection were treated according to study arm such that they received the other study drug (quinine or AL). Women with non-falciparum infections received chloroquine. Women were followed in weekly antenatal clinics, with rapid diagnostic tests followed by blood smears. Subsequent parasitaemia was genotyped as described [9]. Intermittent presumptive treatment (IPT) was discontinued, and women were traced to their homes if they did not attend clinic. After delivery, placental biopsies were collected in neutral buffered formalin, stored for a period of one to four years, and processed at University of Washington (UW) Medical Center Histology in 2010, with one haematoxylin-and-eosin and one Giemsa-stained section per block. Formalin pigment was identified, and these samples were excluded from analysis of pigment deposition or parasitaemia. Haemozoin deposition in fibrin and placental inflammation were scored on blinded sections as previously described [19]. Briefly, parasitized erythrocytes were identified by haemozoin and parasite cytoplasm within an erythrocyte in the absence of formalin pigment or nearby debris, intervillous inflammation was categorically graded, and haemozoin within fibrin was quantified as the percentage of 600X high-power fields positive for haemozoin (Figure 2). Fields were considered positive whether they contained single or multiple granules of haemozoin. A cut-off value for haemozoin greater than 10% of high-power fields (HPF) was previously associated with birth weight reduction and population distributions in cohorts from Tanzania and the Thai-Burma border [19]. Clinical data included treatment arm, gravidity, day of enrolment, level of parasitaemia, day of re-infection or recrudescence, and haemoglobin at delivery. The total number of antenatal visits prior to trial entry was not available. Analysis of birth weight and anaemia will be reported separately. Placental weights were not collected. Categorical variables were analysed by chi-square test or Fisher's exact test. Continuous variables were analysed using the unpaired t-test, except for gravidity, parity and day of enrolment, for which the Mann-Whitney test was used. Parasitaemia at enrolment and the % HPF with haemozoin were log-transformed prior to all analyses. Multivariate analysis was performed by ANOVA or logistic regression for continuous and categorical variables, respectively (Statview, SAS). Multivariate models incorporated variables expected to contribute to histological changes: duration between treatment and delivery, gravidity, re-infection or recrudescence, and parasitaemia at enrolment. Although there was a potential risk of confounding due to cross-over of study drug to treat subsequent parasitaemia, re-infection was included in the multivariate analysis because the infection at enrolment could have persisted from any time during pregnancy prior to enrolment, whereas re-infection would have occurred between weekly screenings and been promptly treated. Secondary analyses excluded women with re-infection from the multivariate models, and further included haemoglobin level at delivery, which has been associated with placental size [30].
Clearance curves were generated using Microsoft Excel (Microsoft).

Results

Of 304 women in the trial, histology was available for 97 in the quinine arm and 98 in the AL arm (Figure 3). There was no difference in maternal demographics or day of enrolment based on whether histology was available (Table 1). Among women with histology available, there was no difference by treatment arm in maternal demographics or infant outcome, although parasitaemia was slightly higher at enrolment in the quinine arm (Table 2). Among women with histology, re-infection rates with P. falciparum were similar between study arms: 12.2% (12/98) in the AL arm, with two women having three separate additional episodes each, and 13.4% (13/97) in the quinine arm, with one woman having two separate additional episodes. By PCR genotyping [9], there was a single documented P. falciparum recrudescence in each arm. For non-P. falciparum infections (including Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae), five women in the AL arm had single-species infections, one woman had a mixed infection (with P. falciparum) and one woman had one of each. In the quinine arm, three women had single-species infections and one woman had a mixed infection (with P. falciparum). A single woman in the AL arm had taken sulphadoxine-pyrimethamine for IPT prior to enrolment. Of the five women in the clinical trial who did not complete the seven-day course of quinine prior to delivery, three withdrew consent and two received rescue treatment, but no specimens were available. Considering the extent of prolonged storage, there were relatively few specimens with an obscuring amount of formalin pigment (16/195, 8.2%), which was associated with specimen desiccation. Intervillous inflammation, but not malaria pigment or parasitaemia, could be reliably determined in specimens with obscuring formalin pigment. In the remaining specimens, 65.9% (118/179) had haemozoin, 8.2% (16/195) had intervillous inflammation, and 7.3% (13/179) had parasitaemia by histology (Table 2). Only a single case with high-grade haemozoin deposition (>40% HPF) was present (quinine arm), and only a single case with massive intervillositis was present: she was in the AL arm and was enrolled two days prior to delivery of a stillborn infant. Of 13 women with parasitized erythrocytes by histology, six were under initial treatment at the time of delivery (two in the AL and four in the quinine arm), three were placental blood smear positive (two in the AL arm [both re-infections] and one in the quinine arm [under initial treatment]), and the remaining five had parasites detected only by histology, with the time of initial treatment ranging from 70 to 100 days prior to delivery (one of these women experienced three separate re-infections, with the most recent at 49 days prior to delivery). There was one blood smear positive case (194 parasitized erythrocytes/μL) that did not have parasitized erythrocytes detectable by histology; she had been treated with quinine 68 days prior to delivery, and her delivery parasitaemia was confirmed as a re-infection by PCR genotyping. Among all women with histology, a moderate or greater level of haemozoin deposition (>10% HPF) was independently associated with the proximity of the last malaria episode to delivery, lower gravidity, and re-infection (p < 0.001, 0.022, and 0.014, respectively, by logistic regression).
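As a sketch of how such a multivariate model can be specified, the snippet below fits a logistic regression for moderate-or-greater haemozoin deposition (>10% HPF); the data-frame column names are hypothetical placeholders for the trial variables (the study's own analyses used Statview/SAS, not Python).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def haemozoin_logit(df: pd.DataFrame):
    """Logistic regression for moderate-or-greater haemozoin deposition.
    Expected (hypothetical) columns:
      hz_pct_hpf   - % of high-power fields positive for haemozoin
      arm_al       - 1 if randomized to AL, 0 if quinine
      gravidity    - number of pregnancies
      days_to_del  - days between last treatment and delivery
      reinfection  - 1 if re-infected between weekly screenings
      parasitaemia - parasite density at enrolment
    """
    y = (df["hz_pct_hpf"] > 10).astype(int)
    X = df[["arm_al", "gravidity", "days_to_del", "reinfection"]].copy()
    X["log_parasitaemia"] = np.log10(df["parasitaemia"])  # log-transformed, as in the text
    return sm.Logit(y, sm.add_constant(X)).fit()

# fit = haemozoin_logit(df); print(fit.summary())  # coefficients are log-odds
```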
Parasitaemia at enrolment significantly increased with proximity to delivery (R = 0.285; p < 0.001 by linear regression) and was increased in women with heavy haemozoin deposition by univariate analysis (p = 0.001); however, this association was non-significant in the multivariate analysis (p = 0.095). Re-infection was not associated with differences in inflammation by histology, although the sample size was small. There were two samples from women who experienced PCR-confirmed recrudescence (one in each arm); however, histology was compromised in each by formalin pigment. Among women who did and did not experience non-P. falciparum infections, no pathological differences were observed. By treatment arm, there was no difference in the presence of formalin pigment, intervillous inflammation, parasitaemia, or absence of haemozoin by histology. However, the proportion of cases with moderate or greater levels of haemozoin (>10% HPF) was significantly reduced in the AL arm by univariate analysis (p = 0.031) and by logistic regression after correcting for gravidity, day of enrolment, re-infection (yes/no), and parasitaemia at enrolment (p = 0.028); Table 3. Results were similar when the number of re-infections was included as a continuous variable. The effect of treatment arm on haemozoin level remained significant after the inclusion of haemoglobin at delivery (p = 0.013), but was no longer significant after excluding the 22 women who experienced re-infection from the multivariate analysis (p = 0.171). Haemozoin level quantified as a continuous variable (% HPF) was non-significantly decreased in the AL arm (p = 0.090), and remained so after ANOVA (p = 0.101). The longitudinal trial data allowed calculation of an estimated population haemozoin clearance rate for women with malaria treated prior to delivery, which was best fit by a logarithmic curve (Figure 4); women with re-infection were excluded. The amount of haemozoin increased with proximity of infection to delivery, although individual cases showed much variation: for example, one primigravid woman enrolled in the month prior to delivery had no detectable placental haemozoin deposition, whereas other primigravid women had high levels. Stratified by treatment arm, the magnitude of the curve was greater with quinine than AL, although the slopes were similar. Because primigravid and secundigravid women had similar rates of low birth weight, placental parasitized erythrocytes and inflammation (data not shown), they were included in the same category for comparison to multigravid women (Figure 4C). No significant relationship with treatment arm was observed after excluding women with re-infection by ANOVA (p = 0.310); however, a trend was observed (p = 0.082) when gravidity was included as a categorical variable, with initial clearance greatest in multigravid women treated with AL. This trial, involving women undergoing weekly malaria screenings since booking at antenatal clinics in Mbarara, Uganda, establishes the sensitivity of histology to detect haemozoin as a marker of "past" infection following successful treatment (Table 4), which decreased from 86% to 28% as the time between infection and delivery increased from one to six months. There was no difference by treatment arm.
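A minimal sketch of how the logarithmic population clearance curve above can be fitted is given below; it assumes per-woman pairs of (days between treatment and delivery, % HPF with haemozoin), with re-infections already excluded, and is illustrative rather than the study's Excel procedure.

```python
import numpy as np

def fit_log_clearance(days_to_delivery, hz_pct_hpf):
    """Least-squares fit of hz = a + b * ln(days); a negative b means
    less residual haemozoin the longer before delivery treatment occurred."""
    t = np.asarray(days_to_delivery, dtype=float)
    hz = np.asarray(hz_pct_hpf, dtype=float)
    b, a = np.polyfit(np.log(t), hz, deg=1)   # slope first, then intercept
    return a, b

# Fitting each arm separately allows the magnitudes of the quinine and AL
# curves to be compared while their slopes remain similar, as in Figure 4.
```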
The maximum time between infection and delivery at which haemozoin could be detected by placental histology in a woman without re-infection was 162 days prior to delivery, at an estimated gestational age of 14 weeks at enrolment (quinine arm, shown in Figure 2A).

Discussion

The overall results of the randomized controlled clinical trial demonstrated that AL was not inferior to quinine, with a similar day-42 parasitological cure rate [9]. However, AL was associated with a trend towards decreased rates of low birth weight and pregnancy loss. In the histological analysis, moderate and high-grade haemozoin deposition was decreased in the AL arm, demonstrating a relationship between the drug used to treat an antenatal malaria episode and placental pathology at delivery. Haemozoin deposits within fibrin are independently associated with birth weight during placental malaria [19]; they originate from monocyte-macrophages that phagocytose parasite material, subsequently become enmeshed in fibrin and degenerate. Monocyte-macrophages are a source of pro-inflammatory cytokines and are associated with poor outcomes [31,32]. Based on these pathological data, AL is hypothesized to be more clinically efficacious in pregnancy than quinine. Artemisinin derivatives are active at ring stages and have greater parasite reduction ratios than quinine [28], suggesting that ACT clinical efficacy could be linked to a greater reduction of sequestered mature-stage parasite biomass during treatment. In the placenta, this reduced sequestered parasite burden would result in less immune-cell activation and associated phagocytosis, which would be evident as reduced haemozoin in fibrin persisting until delivery (Figure 5). Haemozoin is biologically active, with a direct immunomodulatory effect in vitro [33]; however, it is unknown whether haemozoin embedded within placental fibrin exerts a biological effect during pregnancy or whether it is simply an inert marker of cumulative exposure to sequestered malaria parasites. Haemozoin was not detected in approximately a third of cases, similar to previous reports of prompt and effective treatment of antenatal episodes resulting in no residual histopathology [34]. Further, comparatively low rates of intervillous inflammation and parasitaemia at delivery were observed in this trial, reflecting the efficacy of frequent antenatal screenings with prompt treatment of malaria in pregnancy. A much higher degree of pathology is seen in populations undergoing passive screening (consisting of IPT, bed net use and symptomatic treatment), although a formal comparison cannot be made across study sites due to differences in geography and study design. HIV status was not assessed in the trial; however, considering that HIV is associated with increased rates of chronic placental malaria [35] and delayed acquisition of protective immunity [36], HIV would be hypothesized to result in increased haemozoin deposition. The prevalence of HIV infection in antenatal clinics was previously reported to be 13% in the Mbarara region [37]. All women in the histological analysis randomized to quinine completed the directly observed seven-day course. Quinine is known to be associated with poor compliance [13], and data from this study would likely overestimate the histological effectiveness of quinine in the general population. In the clinical trial, seven of eight interrupted treatments were in the quinine group [9].
Treatment failures in the AL arm most likely reflected altered pharmacokinetics of fixed-dose lumefantrine during pregnancy [9] rather than resistance to AL. Although there was a possibility of confounding due to cross-over of study drug to treat re-infection, episodes of re-infection were deemed sufficiently different from the initial infection to include in the multivariate analyses. Re-infection occurred between weekly screening visits and was promptly treated, whereas the infection at enrolment could have persisted for any length of time during pregnancy. In the absence of antenatal records available for review, these women were likely not screened prior to study enrolment. Exclusion of women with re-infection in secondary analyses demonstrated a non-significant effect by treatment arm, perhaps due to insufficient sample size. The longitudinal trial design allowed for calculation of estimated haemozoin clearance rates. These curves hypothetically reflect the clearance of placental haemozoin following successful treatment over the course of gestation, analogous to parasite clearance curves from peripheral blood [38]. Curves were affected by treatment arm and parity, indicating a relationship between drug efficacy and immunity. Haemozoin clearance would be influenced by a combination of initial parasite burden, haemozoin dissipation through placental growth and perhaps biological clearance. Haemozoin levels increased with proximity of infection to delivery. Placental growth is most rapid in the third trimester, where growth of the chorionic villi is likely to dissipate haemozoin deposits acquired earlier in gestation when the placenta was very small. Further, malaria in early gestation may limit placental growth [39], such that smaller placentas could be hypothesized to have higher levels of haemozoin. Placental weights were not collected in this trial; ideally, future longitudinal studies would incorporate placental weight with ultrasound assessment of growth and measurements of placental haemozoin. Further, as a marker of malaria exposure, data on the sensitivity of histology to detect "past" infections (Table 4) would be useful for sample-size calculations in programmatic studies to prevent or treat placental malaria prior to delivery. As an alternative to histology, placental haemozoin content was previously analysed by spectrophotometry [40,41] and was similarly demonstrated to increase with proximity of infection to delivery. However, spectrophotometry would also detect haemozoin within intact parasitized erythrocytes and macrophages, potentially confounding interpretation.

Figure 5. Proposed model of increased ACT efficacy during pregnancy. AL is active on early ring stages with a greater parasite reduction ratio, limiting parasite sequestration. Quinine is only active on mature parasites, which are sequestered in the placenta. Parasite sequestration leads to mononuclear cell infiltration and risk of poor pregnancy outcome. Haemozoin-laden macrophages become enmeshed in fibrin, resulting in haemozoin deposition that persists until delivery. If women receive no or ineffective treatment, sequestered parasites, inflammation and haemozoin deposits persist until delivery.
For example, one subject in this study had a low level of placental haemozoin deposition (3.5% of HPF), consistent with effective treatment two months prior to delivery; however, at delivery there was a re-infection with 18% of maternal erythrocytes parasitized by haemozoin-containing mature forms, which would confound interpretation of treatment efficacy. The specimens from this trial were processed in Seattle using state-of-the-art histology equipment. Pathology is a crucial yet underfunded part of medical care in tropical countries [42], and although histology is labour-intensive, prone to artefact and requires considerable expertise for interpretation, investment in laboratories and training of staff could strengthen local pathology services and generate long-term benefit to the community. The placental sections generated in this study were of excellent quality and covered a wide range of pathology. This material was used to generate training slide sets distributed through the Malaria Research and Reference Reagent Repository [43] that will hopefully contribute to training and standardization in endemic areas. Improved local pathology systems would facilitate the assessment of the placental effects of malaria in an era of changing transmission and increasing drug resistance. In conclusion, in this randomized controlled trial, AL was associated with lower rates of moderate to high-grade haemozoin deposition compared to quinine for treatment of uncomplicated malaria in pregnancy. Decreased haemozoin deposition in the AL arm likely reflects decreased cumulative sequestered parasite biomass. The results support the WHO guidelines for using ACT to treat malaria in the second and third trimesters of pregnancy. Placental histology is a useful and sensitive tool to assess cumulative placental exposure to malaria and to evaluate malaria control policy and implementation in pregnancy.
Simultaneous Sensing of Codeine and Diclofenac in Water Samples Using an Electrochemical Bi-MIP Sensor and a Voltammetric Electronic Tongue

Codeine and diclofenac overdoses have been widely reported. Here, a biomimetic sensor (bi-MIP) was devised, and an electronic tongue was used to analyze water samples simultaneously containing both these drugs. The bi-MIP sensor limits of detection for diclofenac and codeine taken individually were 0.01 µg/mL and 0.16 µg/mL, respectively. Due to a cross-reactivity effect when using the bi-MIP sensor, the electronic tongue was shown to differentiate samples containing both analytes. The results confirm the feasibility of simultaneous detection of two target analytes via a bi-MIP sensor. Additionally, they demonstrate the ability of a multi-sensor system to classify different water samples.

Introduction

Diclofenac (DCF) and codeine (COD) are drugs administered to treat certain human health problems. The focus here is first on DCF, a non-steroidal anti-inflammatory drug (NSAID) widely prescribed for the treatment of a wide variety of conditions. It reduces the need for morphine after surgery and is effective against menstrual pain and endometriosis. Although DCF has outstanding medical features, it is sometimes misused and can, as a result, easily move into the synovial fluid. This unfortunately leads to a reduction in the secretion of prostaglandins [1]. In consequence, the consumer can experience many health problems [2].

The second focus of the study is COD, an opiate used clinically for its analgesic, antitussive and antidiarrheal properties. However, it is addictive and can cause psychological damage to the patient if abused. Extreme consumption of COD can even cause death [3]. For these reasons, the World Health Organization (WHO), the US Food and Drug Administration (FDA), and the European Medicines Agency (EMA), among other international organizations, have issued strict warnings about the adverse effects of COD [4].

Electrochemical methods are very good candidates for drug analysis [5]. This is attributed to their low cost, low detection limits, wide potential windows, and ease of surface renewal.

Firstly, electrochemical devices based on molecularly imprinted polymers (MIPs) can be considered good alternatives to conventional techniques. However, according to our literature research, the MIP strategy has not yet been exploited for the simultaneous detection of these two analytes. Currently, the immobilization of MIPs as sensing elements on portable electrochemical transducers, such as screen-printed electrodes (SPEs), offers an interesting approach. A study has been reported for the detection of dopamine and uric acid using MIP technology.

Secondly, as drugs are usually released into wastewater, and wastewater treatment plants are not totally efficient, this work focuses on the analysis of mineral water samples with different concentrations of the drugs in question. When multiple targets are to be detected, it is appropriate to use various electrical interfaces, such as multi-sensor systems.
The remainder of this study is devoted to the qualitative analysis of drugs in mineral water samples using a voltammetric electronic tongue (VE-Tongue) combined with chemometric methods. When using the bi-MIP sensor, a cross-reactivity effect due to the presence of several compounds was encountered. To avoid it, qualitative analysis via the VE-Tongue can help to classify/discriminate drug samples with different concentrations of the drugs in question.

Taking all these points into consideration, the primary objective of this paper was to report on the development of an electrochemical sensor based on molecularly imprinted polymers for the simultaneous detection of DCF and COD. Electrochemical techniques, namely electrochemical impedance spectroscopy (EIS), differential pulse voltammetry (DPV), and cyclic voltammetry (CV), were used to investigate the electrochemical behavior of the electrodes during the different steps of the bi-MIP sensor fabrication. Principal components analysis (PCA) was used to process the database from the VE-Tongue sensor array for the purpose of discriminating between water samples containing DCF and COD.

Samples

Five sets of mineral water samples were prepared for the electrochemical analysis:

Set 1: Mineral water sample used as an unspiked reference;
Set 2: Mineral water samples spiked with different concentrations of diclofenac (0.001, 0.01, 0.1, 1, 10, 100, 300, 500 µg/mL);
Set 3: Mineral water samples spiked with codeine at the same concentrations as described above;
Set 4: Mineral water samples spiked with diclofenac at the same concentrations as described above, but each also containing 300 µg/mL codeine;
Set 5: Mineral water samples spiked with codeine at the same concentrations as described above, but each also containing 300 µg/mL diclofenac.

Instrumentation and Electrochemical Techniques

Figure 1 shows the experimental setup used in this study. The five sets described above were studied using both detection systems (i.e., the bi-MIP sensor and the VE-Tongue). The bi-MIP sensor was designed on a screen-printed gold electrode (Au-SPE). The voltammetric electronic tongue (VE-Tongue) consisted of an array of five working electrodes made of gold, copper, glassy carbon, platinum, and palladium. A silver/silver chloride (Ag/AgCl) reference electrode and a platinum counter electrode completed the three-electrode configuration. A computer interfaced to a potentiostat was used for data acquisition. Using the potentiostat, the electrochemical characterization techniques, including CV, DPV and EIS, were run.
These three established techniques were used for the electrochemical measurements. The CV was operated from −0.4 to 0.6 V at a scan rate of 30 mV/s. To investigate the surface properties of the bi-MIP sensor, EIS was performed at open circuit with a low AC amplitude of 10 mV over a frequency range of 0.1 to 50,000 Hz. The retention properties of the bi-MIP sensor were investigated using DPV over a potential range of −0.2 to 0.3 V at a scan rate of 50 mV/s. All measurements were performed at room temperature (25 °C).

Data Analysis

The multivariate responses of the VE-Tongue were processed by a well-known unsupervised method, PCA. This statistical technique reduces the dimensionality of the multivariate data while retaining maximum information in new variables called principal components (PCs) [6,7]. This allows for better visualization of the data and better interpretation of the analyzed samples.

Biomimetic Receptor Assembly

During the development of the biomimetic sensor, several immobilization procedures were performed to form the sensitive layer. After each step, the electrochemical behavior of the electrode was observed using a supporting electrolyte (PBS, pH 7.4) containing electroactive species ([Fe(CN)6]4−/3−). For this purpose, the CV and EIS techniques were run in the PSTrace software. The results of these characterizations are presented in Figure 3. At each step of the sensor development, the electrochemical behavior of the electrode changed compared to the bare electrode. Moreover, the CV and EIS results were in good agreement.
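For reference, the acquisition settings stated above can be collected as plain data, to be handed to whatever potentiostat control software is in use; the dictionary layout below is illustrative only and is not an actual instrument-driver API.

```python
# Acquisition settings as stated in the text; keys are illustrative.
MEASUREMENT_PARAMS = {
    "CV": {                           # cyclic voltammetry
        "e_begin_V": -0.4,
        "e_end_V": 0.6,
        "scan_rate_mV_per_s": 30,
    },
    "EIS": {                          # impedance spectroscopy at open circuit
        "ac_amplitude_mV": 10,
        "freq_min_Hz": 0.1,
        "freq_max_Hz": 50_000,
    },
    "DPV": {                          # differential pulse voltammetry
        "e_begin_V": -0.2,
        "e_end_V": 0.3,
        "scan_rate_mV_per_s": 50,
    },
    "temperature_C": 25,
}
```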
Bi-MIP Sensor Responses

In the first step, the analysis of DCF alone (set 2) at different concentrations on the bi-MIP sensor was performed using the differential pulse voltammetry (DPV) technique. The calibration curves related to these responses are shown in Figure 4. A clear decrease in the amplitude of the voltammograms was observed as the concentration of DCF increased, expressed in the linear regression equation shown in Figure 4a. The equation is y = −0.083 log(C) − 0.355 with a determination coefficient R² = 0.93. The calculated detection limit was 0.01 µg/mL, using the formula described by Diouf et al. [8]. Secondly, COD alone (set 3) was analyzed under the same conditions. The corresponding equation of the bi-MIP sensor responses (voltammograms) is shown in Figure 4b. Here, a trend similar to that of DCF was obtained, with a calibration equation of y = −0.089 log(C) − 0.347 and R² = 0.98. The limit of detection was 0.16 µg/mL.

When detecting the two analytes individually, it was found that the bi-MIP sensor had almost equivalent sensitivity for both. However, because of cross-reactivity, the results for the simultaneous detection of both analytes by the bi-MIP sensor were not satisfactory. An electronic tongue was used to explore a potential strategy to address this.
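A minimal sketch of the calibration arithmetic reported above (log-linear fit plus a limit-of-detection estimate) follows; whether the 3σ/slope formula below matches the exact expression of Diouf et al. [8] used in the paper is an assumption.

```python
import numpy as np

def dpv_calibration(conc_ug_per_mL, peak_current):
    """Fit I = slope * log10(C) + intercept, the form of the reported
    curves (e.g. y = -0.083 log(C) - 0.355 for DCF)."""
    logc = np.log10(np.asarray(conc_ug_per_mL, dtype=float))
    slope, intercept = np.polyfit(logc, np.asarray(peak_current, dtype=float), deg=1)
    return slope, intercept

def limit_of_detection(blank_sd, slope, k=3.0):
    """Common k*sigma/|slope| LOD estimate from the blank's standard deviation."""
    return k * blank_sd / abs(slope)
```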
PCA Analysis of the VE-Tongue Dataset

Due to the cross-reactivity and limitations encountered with the bi-MIP sensor, measurement of samples containing both target analytes simultaneously was performed using the VE-Tongue. After data pre-processing, principal components analysis (PCA) was used to classify the samples from all sets. The results are presented in Figure 5, which shows the projections of the experimental results onto a two-dimensional (2D) space formed by the first two principal components; 78.90% of the total variance of the data was explained by the first two PCs, indicating significant pattern separation.

PCA was also applied to the data from the analysis of samples from set 4 and set 5, according to their concentrations. Set 4 contained water samples with varying concentrations of DCF and a fixed concentration of COD (300 µg/mL) in each. As shown in Figure 6a, all samples in set 4 were well separated, with 85.87% of the total variance expressed by PC1 and PC2. In addition, the samples containing low and high concentrations of DCF clustered in the top right and the bottom of the graph, respectively. In Figure 6b, the same trend is also observed for the analysis of samples in set 5. In this set, COD was varied but the concentration of DCF was maintained at 300 µg/mL. In the graph, the clean water sample and the spiked samples are well separated, with 41.1% of the total variance expressed by PC2 and PC3.

These results clearly show that the VE-Tongue was able to discriminate water samples containing several compounds at different concentrations.
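A minimal sketch of the PCA step is given below, assuming the VE-Tongue feature matrix (ΔI and Area per working electrode, i.e. ten columns for the five-electrode array) is already assembled; scikit-learn is used here purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def vetongue_pca(features, n_components=3):
    """Project autoscaled VE-Tongue features onto principal components.
    features: array (n_samples, n_features). Returns the PC scores;
    explained_variance_ratio_ gives values such as the 78.90% carried
    by PC1+PC2 reported above."""
    X = StandardScaler().fit_transform(np.asarray(features, dtype=float))
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_)
    return scores  # e.g. scatter scores[:, 0] vs scores[:, 1] to see clusters
```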
Figure 1. Graphical overview of the experimental setup.
Figure 2 illustrates the procedures for the bi-MIP sensor elaboration. Briefly, a layer of carboxylated polyvinyl chloride (PVC-COOH) was first assembled to modify the bare Au-SPE. Then, after activation of the -COOH groups by 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) and N-hydroxysuccinimide (NHS), a solution (1 mg/mL) containing both DCF and COD was deposited on the modified electrode. After DCF and COD binding, a solution containing methacrylic acid, as the functional monomer, and silver nanoparticles (AgNPs) was immobilized. An extraction stage of the template molecules followed to complete the fabrication of the bi-MIP sensor.
Figure 2. The development stages of the bi-MIP sensor.
Figure 3. Electrochemical signals corresponding to the development stages of the bi-MIP sensor: (a) cyclic voltammograms, (b) Nyquist diagrams.
Figure 5. PCA plot showing the discrimination of the different sets using ΔI and Area as features. ΔI is the difference between the maximum current of the oxidation wave and the reduction wave. Area is the area of the VE-Tongue response (voltammogram) computed using the trapezoidal method.
Figure 6. PCA plot showing the discrimination between set 1 and the water samples of (a) set 4 and (b) set 5 at different concentrations, using ΔI and Area as features.
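For the ΔI and Area features named in the Figure 5 caption, a minimal extraction sketch might look like this; the scan range and waveform are placeholders, and only the trapezoidal-rule Area computation follows the caption's description.

```python
import numpy as np

# Feature extraction from a single voltammogram, following the Figure 5
# caption: ΔI is the difference between the maximum oxidation current and
# the reduction current; Area uses the trapezoidal rule. The scan range and
# waveform below are synthetic placeholders.
potential = np.linspace(-0.5, 0.8, 200)      # V, assumed scan window
current = 1e-5 * np.sin(3.0 * potential)     # A, synthetic response

delta_I = current.max() - current.min()      # ΔI feature
area = np.trapz(np.abs(current), potential)  # Area feature (trapezoidal method)
print(f"ΔI = {delta_I:.2e} A, Area = {area:.2e} A·V")
```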
3,638.6
2021-06-30T00:00:00.000
[ "Materials Science" ]
Comparison of Conductor-Temperature Calculations Based on Different Radial-Position-Temperature Detections for High-Voltage Power Cable
In this paper, the calculation of the conductor temperature is related to the temperature-sensor position in high-voltage power cables, and four thermal circuits, based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, are established to calculate the conductor temperature. To examine the effectiveness of the conductor-temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath are built, and thermocouples are placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures at these positions. In the measurements, six cases of current heating tests under three laying environments (duct, water, and backfilled soil) were carried out. The errors of both the conductor-temperature calculation and the simulation based on the temperature of the insulation shield were significantly smaller than the others under all laying environments. The uncertainty of the thermal resistivity, together with differences in the initial temperature of each radial position caused by solar radiation, led to these results. The thermal capacitance of the air has little impact on the errors; the thermal resistance of the air gap is the largest error source. Balancing temperature-estimation accuracy against insulation-damage risk, the waterproof compound is the recommended sensor position to improve the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a recommended sensor position besides the waterproof compound.
Introduction
XLPE (cross-linked polyethylene) insulated cable is currently a major type of high-voltage power cable, and its insulation condition is related to the conductor temperature [1]. Since the conductor temperature of an in-service cable is difficult to measure directly, it is usually obtained by calculation. The IEC-60287 standard provides general methods for determining the conductor temperature and current rating of a cable, and the IEC-60853 standard provides methods for the cyclic-current rating when the thermal capacities of the cable structures cannot be ignored [2,3]. These standards are used to calculate the conductor-temperature rise with the environmental temperature and laying conditions according to the thermal circuit. The calculation result, however, usually exhibits errors because of the uncertainties of the environmental parameters [4]. To solve this problem, many studies, including modified thermal circuits and numerical methods [5,6], have been undertaken. Among the numerical methods, the finite element method (FEM) is the most widely used and has been applied to various laying environments. For example, the rating of underground cables [7]; cables in a tray [8]; the effects of radiation, solar heating, duct structures, and backfill materials [9]; and the formation of the dry zone around underground cables [10] have been studied by FEM. Other numerical methods, including the finite difference method (FDM) [11] and the boundary element method (BEM) [12], have also been studied.
The above numerical methods can generally model complicated laying conditions with high accuracy, but the thermal circuit is more suitable for meeting the demands of transient rating and online implementation [13]. For engineering, the conductor-temperature calculation of an in-service cable is closely related to the temperature sensor. Typical arrangements of the sensor in a practical cable are given in Table 1 [14-19].
Table 1. Radial positions of the sensor in practical cables.
Reference | Cable Type | Sensor Type | Sensor Position
[14] | 154 kV XLPE underground power cable | Optical fiber | The screening wires
[15] | 275 kV XLPE underground cable, 420 kV XLPE submarine cable | Optical fiber | Cable surface, or in a stainless-steel-sheathed tubular structure in the cable
[16] | Three-phase power cable | Optical fiber | Interstices of the three-phase insulation
[17] | 110 kV XLPE submarine cable | Optical fiber | Steel-tape armor
[18] | Underground power cable | Optical fiber | Stainless steel tube
[19] | 230 kV oil-filled cable | Optical fiber or thermocouples | Cable surface
In these cases, when the sensor has realized the temperature measurement of a certain structure, the real-time conductor temperature can then be obtained according to the corresponding thermal circuit [18] or using numerical methods such as FEM [19]. From the perspective of manufacture, various placements are available for the sensor, except the conductor and the solid insulation layers, and the principle for determining the position of the temperature sensor is still unspecified. Therefore, the effect of the different detection positions on the conductor-temperature calculation needs to be researched to optimize the temperature-sensor arrangement, so that better calculation accuracy can be achieved on the premise of ensuring insulation security.
When calculating the conductor temperature, the thermal parameters of the solid materials in the standards are usually used for the entire thickness between the conductor and the jacket [2]. However, the heat transport may be more complex for a cable that has a substantial volume of air under the corrugated metal sheath [20]. Generally, the methods that take this air gap into account have included adding an empirically derived constant, not found in the standards, to the calculated thermal resistance. In addition, considering the differences in corrugation designs and the clearance variation caused by the thermal expansion and contraction of the materials inside the sheath, it is difficult to recommend a constant for this modification [21]. Two main approaches are available to handle the air gap. One is to follow the way of calculating the thermal resistance and capacitance in the standards; the other is to consider the flow characteristics and solve the thermal field, which has also been used in other power equipment [22]. The first approach is simplified by using appropriate correlations when we mainly focus on the heating power rather than the flow characteristics, so the flow regimes, including laminar and turbulent flow, can be handled at a very low computational cost. The second approach solves both the total energy balance and the flow equations of the air, which produces detailed results for the flow field as well as for the temperature distribution and heating power, but it is more complex and requires more computational resources and time than the first. Therefore, the handling of the air inside the cable requires more attention.
To determine the appropriate radial position for the temperature sensor in the cable and determine the factors influencing the conductor-temperature calculation, in this paper, we took an XLPE power cable as an example and established thermal circuits based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface. Thermocouples were arranged at the above radial positions of a 110 kV single-core XLPE power cable, and six cases of current heating experiments under three laying environments, including duct, water, and backfilled soil, were carried out. The real-time conductor temperature was calculated according to the current and the measured temperature, and the effect of the radial positions on the conductor-temperature calculations was then analyzed. In addition, FEM was used to validate the accuracy of the calculation by considering both the effects of convection and radiation in the air layer of the cable. All of the factors influencing the conductor-temperature calculation, including the thermal resistance, the thermal capacitance, and the initial temperature, were discussed. Thus, a referential radial position for the arrangement of the temperature sensor was proposed.
Principle of Thermal Circuits
The analytic temperature-distribution calculation in power cables generally ignores the axial heat transfer and uses lumped parameters to describe the parts of the cable itself, as well as the surrounding environment, in the radial direction; the ladder thermal circuit is thus established.
As for the transient heat-transfer process, a thermal capacitance parameter is introduced to describe the heat-storage model, which is shown in Figure 1a. Figure 1b shows a simplified arrangement of the thermal capacitance of the layer, as proposed by Van Wormer [23], who used an equivalent π thermal circuit to express the heat-transfer process.
Figure 1. Lumped parameter model of a cylinder layer: (a) before equivalence and (b) the equivalent π circuit.
In Figure 1, the thermal resistance T, thermal capacitance Q, and allocation factor p of a layer can be calculated, respectively, using the following equations:
T = (ρT/2π)·ln(D/d) (1)
Q = (π/4)·(D² − d²)·σ (2)
p = 1/(2·ln(D/d)) − 1/((D/d)² − 1) (3)
where ρT is the thermal resistivity in (K·m)/W, D and d are the external and internal diameters of the layer in m, and σ is the volumetric specific heat in J/(m³·K).
In Figure 1, the power loss in the conductor, the dielectric loss, and the sheath loss should be considered. The conductor loss can be calculated according to:
WC = I²R (4)
where WC represents the power losses of the conductor and sheath per unit length in W/m, I the current in A, and R the AC resistance of the conductor in Ω/m. When the voltage has been applied to the cable over a long period of time, the temperature rise caused by the dielectric loss can be regarded as stable, so the transient temperature rise of the conductor usually takes only the current heating into account, and the dielectric loss is ignored. The sheath loss is usually merged into the nearby thermal resistance and capacitance via a partition coefficient qs, which is calculated using [2]:
qs = 1 + λ1 (5)
where λ1 is the ratio of the sheath loss to the power loss in the conductor.
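A small Python sketch of Equations (1)-(4), as reconstructed above, may help; the layer diameters, resistivity, volumetric specific heat, and conductor resistance below are assumed round numbers, not the values of the tested cable.

```python
import math

def layer_parameters(D, d, rho_T, sigma):
    """Thermal resistance T, thermal capacitance Q, and Van Wormer allocation
    factor p of a cylindrical layer, per Equations (1)-(3) above.
    D, d: external/internal diameters (m); rho_T: thermal resistivity
    ((K·m)/W); sigma: volumetric specific heat (J/(m^3·K))."""
    T = rho_T / (2.0 * math.pi) * math.log(D / d)      # (K·m)/W per unit length
    Q = math.pi / 4.0 * (D**2 - d**2) * sigma          # J/(K·m) per unit length
    p = 1.0 / (2.0 * math.log(D / d)) - 1.0 / ((D / d) ** 2 - 1.0)
    return T, Q, p

# Assumed round numbers for an XLPE-like insulation layer (not Table 2 values).
T_i, Q_i, p_i = layer_parameters(D=0.060, d=0.030, rho_T=3.5, sigma=2.4e6)

# Conductor loss per Equation (4), for an assumed current and AC resistance.
I, R = 1000.0, 4e-5                                    # A, Ω/m (assumed)
W_C = I**2 * R                                         # W/m
print(f"T = {T_i:.3f} (K·m)/W, Q = {Q_i:.0f} J/(K·m), p = {p_i:.3f}, W_C = {W_C:.0f} W/m")
```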
Thermal Circuit Calculation Methods of Conductor Temperature Based on Four Radial Position Temperatures
Having investigated the structural features of an actual power cable, the insulation-shield surface, the center of the waterproof compound, the aluminum sheath, and the jacket surface are the available locations for the placement of temperature sensors. The thermal circuit with the temperatures of the different radial positions is given in Figure 2. All of these temperature calculation methods share a similar circuit but differ in their boundary conditions.
The thermal conductivities of the copper conductor and the aluminum sheath are much greater than those of the other materials, so these two structures can be regarded as isothermal surfaces. In Figure 2, Wal denotes the power losses of the sheath per unit length in W/m. Tcs, Ti, Tis, Tw, Ta, and Tj are, respectively, the thermal resistances of the conductor shield, insulation, insulation shield, waterproof compound, air gap, and jacket in (K·m)/W. Q, Qcs, Qi, Qis, Qw, Qa, Qal, and Qj are, respectively, the thermal capacitances of the conductor, conductor shield, insulation, insulation shield, the center of the waterproof compound, air gap, aluminum sheath, and jacket in J/(m³·K). θc, θi, θw1, θal, and θj are, respectively, the temperatures of the conductor, insulation shield surface, waterproof compound center, aluminum sheath, and jacket surface in °C. In addition, the environmental temperature is represented by θe.
According to Figure 2, a transient temperature-calculation model from the insulation shield to the conductor was first developed, with reference to the standard method for short durations in IEC-60853, for when the current and the temperature of the insulation shield are known; it is denoted Method 1. Since the thicknesses of the conductor shield and the insulation shield are very small, and their thermal characteristics are similar to those of the XLPE insulation, these two structures are usually merged into the insulation layer to simplify the calculation [24]. The thermal resistance and the thermal capacitance of the insulation layer were partitioned at the geometric mean of its internal and external diameters, and the thermal capacitances were then partitioned to both sides of the thermal resistances by the allocation factor shown in Figure 1. The transient model was finally simplified to a two-loop network, as illustrated in Figure 3, where θb is the boundary condition representing the temperature of the insulation shield in Method 1, TA and TB are the equivalent thermal resistances, and QA and QB are the equivalent thermal capacitances [3]. According to Figure 3, the temperature rise of the conductor relative to the insulation shield, θ(t), could be obtained by:
θ(t) = WC·[Ta·(1 − e^(−at)) + Tb·(1 − e^(−bt))] (6)
where a, b, Ta, and Tb are determined by TA, TB, QA, and QB [3].
Figure 3. Simplified transient thermal circuit model from insulation shield to conductor.
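Because the closed-form coefficients a, b, Ta, and Tb of Equation (6) are given in [3] and not reproduced here, the sketch below instead integrates the two-loop network of Figure 3 numerically; with consistent parameters this converges to the same response, and all parameter values are assumed for illustration only.

```python
import numpy as np

def two_loop_rise(W_c, T_A, T_B, Q_A, Q_B, t_end=36000.0, dt=1.0):
    """Explicit-Euler integration of the two-loop RC network of Figure 3.
    Node 1 is the conductor (capacitance Q_A), node 2 the intermediate node
    (Q_B); T_A links the nodes, T_B links node 2 to the boundary. Returns the
    conductor temperature rise above the boundary (isothermal start)."""
    th1 = th2 = 0.0                      # rises above the boundary temperature
    out = []
    for _ in np.arange(0.0, t_end, dt):
        q12 = (th1 - th2) / T_A          # heat flow node 1 -> node 2, W/m
        q2b = th2 / T_B                  # heat flow node 2 -> boundary, W/m
        th1 += dt * (W_c - q12) / Q_A
        th2 += dt * (q12 - q2b) / Q_B
        out.append(th1)
    return np.array(out)

# Assumed parameters for illustration only (not the tested cable's values).
rise = two_loop_rise(W_c=30.0, T_A=0.4, T_B=0.3, Q_A=2e3, Q_B=4e3)
print(f"rise after 10 h ≈ {rise[-1]:.1f} K (steady limit W_c*(T_A+T_B) = 21.0 K)")
```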
As for the conductor temperature inverted from the temperatures of the center of the waterproof compound, the aluminum sheath, and the jacket surface, simplified transient thermal circuits from the corresponding radial positions to the conductor were established and marked as Methods 2, 3, and 4, respectively. All of the thermal capacitances involved in the transient model were partitioned to both sides of the thermal resistances by the allocation factor in the same way, and the sheath loss was merged into the nearby thermal resistance and capacitance parameters via qs. Thus, we obtained the ladder network shown in Figure 4, where the boundary temperature θb represents θw1 in Method 2, θal in Method 3, and θj in Method 4, respectively. Tα to Tν and Qα to Qν represent the equivalent thermal resistances and thermal capacitances within the boundary.
Figure 4. Transient thermal circuit model from boundary to conductor.
Figure 4 could be simplified to the two-loop network shown in Figure 3 [3]. When the boundary temperature as well as the current were provided as solution conditions, the time-varying temperature θc(t) could be obtained based on (6). As time goes by, the heat storage of the thermal capacitances keeps increasing, and a steady temperature distribution of the cable can be attained after a long enough period with constant current and ambient conditions.
FEM Simulation Methods of Conductor Temperature Based on Different Radial Position Temperatures
The FEM simulation model for the cable was built in COMSOL Multiphysics (5.2a, COMSOL, Stockholm, Sweden). It consists of each layer in Figure 2. The main interest is to calculate the conductor temperature when the heating power and the boundary temperature are known. The cable has an initial temperature equal to ambient and heats up over time due to the Joule heating effect. The model is illustrated in Figure 5.
The key problem is how to handle the air gap between the waterproof compound and the aluminum sheath. Two methods were applied to handle the natural convection heating of the air. Approach 1 modeled the convective flow of the air directly, while in Approach 2, the thermal dissipation of the air was made equivalent to a combination of thermal resistivity and heat capacity, as presented in the standards. In this paper, we first used Approach 1 to simulate the temperature field of the cable for comparison with the experimental results.
Then, Approach 2 was adopted to study the correctness of each temperature calculation method. The assumptions of the simulation are as follows. The solid heat-transfer model was used, and laminar flow was used to define the air structure when using Approach 1. The model solved a thermal balance for the cable structures, including the air flowing in the gap when needed. Thermal energy was transported through conduction in the solid layers and through convection and radiation in the air layer. Taking surface-to-surface radiation between the waterproof compound and the aluminum sheath into account, the surface emissivity of the waterproof compound was set to 0.95. The boundary conditions in the FEM model were the same as those in thermal circuit Methods 1, 2, 3, and 4 in Figure 2, and the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface were denoted Boundaries 1, 2, 3, and 4, respectively. A heat rate was used to define the heat source. The thermal conductivity, heat capacity, and density of the air were temperature-dependent material properties. Other simulation conditions, including the governing equations and the material parameters, were predefined in the software automatically.
Experiment Arrangement and Equipment
Current-heating experiments were performed on a 110 kV single-core XLPE insulated power cable, whose parameters are given in Table 2. The thermal resistivities and volumetric specific heats of the solid materials are those recommended by the IEC standards [2,3]. In addition, since it is difficult to recommend constants for the air between the waterproof compound and the aluminum sheath, the heat parameters of quiescent air were adopted [25]. The experimental arrangement at Guangzhou Lingnan Cable Co., Ltd. (Guangzhou, China) is illustrated in Figure 6.
According to the test standard [26], the tested cable, with a length of approximately 25 m, was bent into a U shape with a radius of 2.5 m. The ends of the cable were connected with a copper conductor, and the aluminum sheath was not electrically grounded. When in water or backfilled soil, the cable was laid at a depth of 1 m. In this experiment, high voltage was not applied, and two 50 kVA heating transformers (CXBYQ-50, Xinyuan Electric Co., Ltd., Yangzhou, China) were used to generate an induction current of high intensity in the closed cable loop. The real-time current and conductor temperature were measured at the end of the tested cable to keep the output testing current basically stable by operating the regulator. As shown in Figure 6, three groups of K-type thermocouples were symmetrically placed along the arc of the tested cable at an interval of 1 m. Each group consisted of five thermocouples inserted into the conductor, insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface in sequence, with an interval of 5 cm in the axial direction. In addition, the ambient temperature was surveyed by a thermocouple placed at the bottom of the duct.
Experimental Scheme
The experiment schedule is given in Table 3. The first two cases were carried out with the same constant current in the duct, followed by four cases with different currents when the duct was filled with water or backfilled soil.
Each experiment began in a steady temperature condition without current being loaded, and the entire rising process from ambient temperature to steady-state temperature was recorded every 0.5 h. The experiments met the demand of the test standard [26] that the temperature difference in the central area of 2 m should not exceed 2 °C, so the following calculations and analyses adopted the data from the center of the cable loop.
Comparison of the Experiments and Thermal Circuit Calculations Based on Different Radial Positions
The results of the six test cases are shown in Figure 7. Corresponding to the four radial position temperatures, the calculation results of the conductor temperature based on the thermal circuits are shown in Figure 8, where Methods 1-4 represent the conductor temperatures inverted from θi, θw1, θal, and θj, respectively, according to the models described in Section 2.
From Figure 7, the conductor, insulation shield, and the center of the waterproof compound basically exhibited the same temperature-variation tendency, while those of the jacket surface and the aluminum sheath were similar to each other. Each case recorded a gentle variation of the temperature distribution, except for Cases 1 and 2. In these two cases, the cable was placed at the bottom of the duct and was exposed to solar radiation in the daytime, which made the temperature variations of the jacket and aluminum sheath more violent. During the last several hours at midnight, the effect of the solar radiation disappeared, and the steady temperature gradients of Cases 1 and 2 were similar because of the roughly identical conductor losses. However, both the ambient and the jacket surface temperatures in Case 1 were higher, so a higher boundary temperature of the cable resulted in higher temperatures inside the cable. When comparing Case 1 with Case 2, the steady temperature differences of the five measuring points inside the cable ranged from 8.2 °C to 10.4 °C.
From Figure 8, similar tendencies in both the measurement and the calculation of the conductor temperature were found in each case. In general, the calculation results based on Method 1 were closest to the measurements, and those of Method 2 were the next closest. However, a remarkable difference was found in the results based on Methods 3 and 4. To compare the effect of the radial-position temperature detections on the accuracy of the conductor-temperature calculation, a quantitative analysis was required.
The absolute and relative errors of the calculation compared to the measurement are defined as:
Δθc = θc − θc*, er(θc) = (θc − θc*)/θc* × 100%
where θc and θc* represent the real-time calculation and the conductor-temperature measurement in °C, respectively. Taking Case 1 as an example, the variations of the absolute error and relative error during the transient process are illustrated in Figure 9.
From Figure 9, it can be seen that the errors of Method 1 were much smaller than those of the others. Method 2 generally exhibited a larger, negative error and relative error that increased with time and could reach −5.0 °C and −7%, respectively. The errors of Methods 3 and 4 were much larger than those of Methods 1 and 2, with a tendency to decrease at first and then increase. The initial errors caused by the solar radiation in Methods 3 and 4 were 6.7 °C and 14.1 °C, respectively. As time went by, the error kept decreasing until it reached its minimum value at hour 3, and then began increasing over time. The average values of the absolute error and the relative error of the four methods were chosen to survey the differences under the different laying conditions, as illustrated in Figure 10.
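The error metrics defined above are straightforward to compute over the recorded series; the following sketch uses invented half-hourly values, not the Case 1 measurements.

```python
import numpy as np

def calc_errors(theta_calc, theta_meas):
    """Absolute error (°C) and relative error (%) between calculated and
    measured conductor temperatures, per the definitions above."""
    c = np.asarray(theta_calc, dtype=float)
    m = np.asarray(theta_meas, dtype=float)
    return c - m, (c - m) / m * 100.0

# Invented half-hourly series (°C); not the Case 1 measurements.
calc = [31.0, 42.5, 51.0, 57.2, 61.8]
meas = [30.5, 43.8, 52.6, 59.0, 63.5]
abs_err, rel_err = calc_errors(calc, meas)
print(f"mean error = {abs_err.mean():.2f} °C, mean relative error = {rel_err.mean():.2f}%")
```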
From Figure 10, we note the following: (1) Under all of the laying environments, Method 1 always exhibited minimal error and relative error, less than 1.4 °C and 3.1%, respectively. This proved that the highest accuracy could be achieved when using the temperature of the insulation shield as the boundary condition. The errors and relative errors of Methods 2, 3, and 4 increased gradually, and the latter two methods exhibited similar errors. (2) When the cable was laid in the duct (Cases 1 and 2), the errors of Methods 3 and 4 were too large, while those of Methods 1 and 2 were smaller. When the cable was laid in water or backfilled soil (Cases 3-6), the limits of the errors caused by Methods 1 and 2 had little difference, but those caused by Methods 3 and 4 exhibited significant nonconformity, and the maximum was found in Method 4, at 1300 A loaded under the water condition.
In this paper, the steady state of the conductor temperature was defined as the conductor-temperature variation within 3 h being less than ±1 °C, and the mean value then served as the steady temperature. As seen in Figure 11, the calculations based on Method 1 always exhibited a minimal steady error under all the laying conditions. The steady error and relative error were less than 1.7 °C and 2.6%, respectively. This proved that the best accuracy for the calculation of the steady conductor temperature could be achieved when it was inverted from the temperature of the insulation shield. Method 2 had the next-best accuracy, with an error and relative error of less than 3.5 °C and 4.7%, respectively, while both errors for Methods 3 and 4 were much larger. The steady errors and relative errors of the latter two methods were correlated with the current density.
FEM Simulation Results of the Conductor Temperature
To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath were built. The FEM simulation results of the cable's thermal field in Case 1 are shown in Figure 12. Both the tested and the simulated values of the steady conductor temperature in each case are presented in Table 4. In Figure 12 and Table 4, Approach 1 models the convective flow of the air directly, while Approach 2 uses a combination of equivalent thermal resistivity and heat capacity for the dissipation of the air between the waterproof compound and the aluminum sheath. Boundaries 1, 2, 3, and 4 represent the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, respectively.
When the effects of convection and radiation were considered, i.e., Approach 1, the simulation results were very close to the experimental results. However, when a combination of thermal resistivity and heat capacity was used to describe the heat-transfer character of the air, i.e., Approach 2, the difference between the simulation and the experimental results was significant, showing calculation errors similar to those between the thermal circuit calculations and the experimental results in Figure 11. As for the air layer, convection and radiation are the primary heat-dissipation mechanisms, while the thermal conductivity is very small, for instance, 0.0259 W/(K·m) at 20 °C. An appropriate correction of the thermal resistance should therefore be considered when Formula (1) is applied to the calculation of the thermal resistance of the air layer.
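To see why Formula (1) needs correction for the air layer, the sketch below computes the conduction-only resistance of an assumed annular gap using the quoted conductivity of quiescent air; the gap diameters are hypothetical.

```python
import math

# Conduction-only thermal resistance per unit length of the air gap via
# Formula (1); the gap diameters are hypothetical.
k_air = 0.0259                      # W/(K·m), quiescent air at 20 °C (quoted above)
rho_air = 1.0 / k_air               # ≈ 38.6 (K·m)/W thermal resistivity
D, d = 0.085, 0.075                 # assumed outer/inner diameters of the gap, m
T_air = rho_air / (2.0 * math.pi) * math.log(D / d)
print(f"conduction-only T_air ≈ {T_air:.3f} (K·m)/W")
# Convection and radiation act in parallel with conduction, so the effective
# resistance seen in measurements is far smaller than this conduction-only value.
```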
Discussions
Both the thermal circuit calculation and the FEM simulation results indicated that the farther the temperature sensor was from the conductor, the more significant the error of the conductor-temperature calculation. The factors influencing these errors are discussed below.
Effect of the Initial Temperature Difference on Errors
In Formula (6), the initial condition of each layer in the cable was assumed to be isothermal; however, the experimental conditions differed from this hypothesis. In particular, in Case 1, as shown in Figure 7a, the initial temperatures of the jacket surface and the aluminum sheath were obviously higher than those of the other three structures, which were at similar temperatures. This caused remarkably larger initial errors for Methods 3 and 4 than for Methods 1 and 2, as demonstrated in Figure 9a. The influence of an obvious initial temperature difference ΔT0 between the conductor and the boundary of the second-order resistance-capacitance circuit shown in Figure 3 on the conductor temperature decreases over time, so the calculation errors reduce accordingly. This also explains why the errors of Methods 3 and 4 showed a decreasing tendency during the first several hours and then increased under the duct condition. When the cable was laid in water or backfilled soil, the temperature rises of the jacket surface and the aluminum sheath were gentle, so the influence of the initial temperature on the calculation results was not significant. In these cases, the errors were mainly caused by the thermal resistivity inaccuracy. Placing the temperature sensor closer to the conductor, i.e., at the insulation shield or waterproof compound, would help eliminate the effect of the non-isothermal initial temperature due to the environment.
Effect of the Thermal Resistivity on Errors
In this paper, the thermal resistivities from the IEC standards, together with the parameters of quiescent air, were used in the thermal circuit calculations. However, these values might differ from those calculated based on the measured temperatures. As illustrated in Figure 8, this difference made the temperature-calculation results higher or lower than the measurements overall. So, we used the steady-state temperatures of the tested cable shown in Table 5 to calculate the thermal resistivities. The steady thermal circuit in Figure 2 ensures that the heat storage of the thermal capacitances remains constant in the steady state, so the thermal capacitances were regarded as open circuits. The mean values of the calculated thermal resistivities are given in Table 6.
As for the XLPE insulation, including the merged conductor shield and insulation shield, both the recommended and the calculated values were 3.5 (K·m)/W, which proved that the evaluation of the thermal resistivity of the insulation layer was precise. Similarly, for the thermal resistivity of the jacket, the recommended value was 3.5 (K·m)/W, while the calculated value was 3.3 (K·m)/W. To determine the influence of the thermal resistivity of the jacket, taking Case 1 as an example, the temperature of the aluminum sheath, θal, was inverted from the temperature of the jacket surface based on the thermal circuit of Method 4, and the calculated values were compared with those measured. Figure 13 shows that the effect of the initial temperature decreased over time, and the calculated aluminum sheath temperature was very close to the measured one after 5.5 h. This means that the thermal resistivity of the jacket was acceptable, which is why, in Figure 9, the error curves of Methods 3 and 4 exhibited a trend of moving toward each other.
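The back-calculation of a layer's thermal resistivity from steady-state temperatures, as done for Table 6, amounts to inverting Formula (1); in the sketch below, the temperatures, heat flow, and diameters are illustrative stand-ins for the Table 5 data.

```python
import math

def resistivity_from_steady(theta_in, theta_out, W, D, d):
    """Invert Formula (1) to back-calculate a layer's thermal resistivity from
    its steady temperature drop, as done for Table 6.
    theta_in/theta_out: steady temperatures at the inner/outer surface (°C);
    W: heat flow through the layer per unit length (W/m); D, d: diameters (m)."""
    T = (theta_in - theta_out) / W               # steady thermal resistance, (K·m)/W
    return 2.0 * math.pi * T / math.log(D / d)   # thermal resistivity rho_T

# Illustrative stand-ins, not the Table 5 data.
rho = resistivity_from_steady(theta_in=65.0, theta_out=54.1, W=30.0, D=0.060, d=0.030)
print(f"rho_T ≈ {rho:.1f} (K·m)/W")              # ≈ 3.3 here; cf. 3.5 recommended for XLPE
```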
It seems that the recommended thermal resistivities for the dense, solid materials, i.e., the XLPE insulation and the jacket, were consistent with those calculated from the measured temperatures, while the others were not. For the thermal resistivity of the waterproof compound, the IEC recommended value was 6 (K·m)/W, which was less than the value of 12.1 (K·m)/W calculated in this paper. It should be noted that the negative error of Method 2 increased after hour 2. Since only part of the thermal resistivity of the waterproof compound was involved, the temperature sensor being placed at the center of the waterproof compound, the errors of Method 2 were not so remarkable, even though the calculated thermal resistivity of the waterproof compound was twice the recommended value.
As for the air gap between the waterproof compound and the sheath, the recommended thermal resistivity was 34 (K·m)/W, almost three times the calculated value of 12.1 (K·m)/W. An overlarge recommended thermal resistivity of the air gap was the dominant reason for the significant calculation errors found in Methods 3 and 4, which would increase with increasing current. Based on the calculated values of thermal resistivity, the calculation errors of the transient and steady conductor temperatures for the four methods in each case were recomputed and are illustrated in Figure 14.
An overly large recommended thermal resistivity for the air gap was the dominant reason for the significant calculation errors found in Methods 3 and 4, and these errors would grow with increasing current. Using the calculated values of thermal resistivity, the calculation errors of the transient and steady-state conductor temperature for the four methods in each case were recomputed and are illustrated in Figure 14. For Method 1, the calculated insulation thermal resistivity was the same as the recommended one, so the calculation errors remained small. Both the average errors and the relative errors of Methods 2-4 in Figure 14 dropped sharply to a very low level compared with the results obtained using the recommended thermal resistivities shown in Figures 10 and 11. Their errors in transient and steady-state conductor temperature were less than 1.9 °C and 1.7 °C, respectively, apart from the higher values found in the first two cases, where the initial-temperature differences were caused by solar radiation. The simulation results for the steady-state conductor temperature were also updated using the calculated thermal resistance to describe the heat transfer of the air, i.e., Approach 2; the results are given in Table 7. The simulation errors were less than 2.8 °C, apart from one higher value, 4.92 °C for Boundary 4 in Case 5. Thus, with the calculated thermal resistance, the FEM simulation results of Approach 2 were close to those of Method 1 as well as to the measured results. In summary, the thermal resistance of the air layer is the largest error source, which argues for placing the temperature sensor closer to the conductor. Although the insulation shield was the position closest to the conductor in this paper and gave the best accuracy, placing a temperature sensor there would carry a potential risk of direct contact with the insulation material.
So, the waterproof compound is recommended for its good accuracy and insulation security, since the sensor is isolated from the insulation. Both the calculation and the FEM simulation proved that, once the uncertainty of the thermal resistances, especially the air resistance, was eliminated, the conductor-temperature calculations using the temperatures of the waterproof compound or the aluminum sheath achieved an accuracy close to that of the insulation shield. The waterproof compound and the aluminum sheath, especially the latter, are more practical for engineering.

Effect of the Air Thermal Capacitance on Errors

From the point of view of dynamic response, the time constant of the circuit depends on its resistance and capacitance parameters. As shown in Figure 14a, when the calculated thermal resistivities were used, the errors of the transient conductor temperature did not exceed 1.9 °C, apart from two higher values found in Method 4. The thermal capacitances can therefore be considered generally valid, since the calculation accuracy was high enough. Because the IEC standard already provides the volumetric specific heat of solid materials, only the variation of the air thermal capacitance was analyzed in this paper. For an actual air layer, both the density and the specific heat vary with temperature: for example, they are 1.205 kg/m³ and 1.005 kJ/(kg·K) at 20 °C, but 1.029 kg/m³ and 1.009 kJ/(kg·K) at 70 °C. Taking the temperature-varying thermal capacitance into account, the calculation errors of the transient conductor temperature were updated for Boundaries 3 and 4 by FEM; the results are presented in Table 8. The results obtained with the constant and with the temperature-varying air thermal capacitance are almost identical. The effect of the temperature variation of the air thermal capacitance is so insignificant within the cable's temperature range that the errors it causes can be neglected. Thus, defining the air thermal capacitance as a constant is appropriate. In other words, it is the thermal resistances, rather than the thermal capacitances, that are the main error source of the temperature calculation.
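The insignificance of the air's temperature-varying capacitance can be checked directly from the densities and specific heats quoted above; in the sketch below, the comparison value for a solid layer is an assumed order of magnitude, not a figure taken from the IEC standard.

```python
# Volumetric heat capacity of air, rho * c_p, at the two quoted temperatures.
c_air_20 = 1.205 * 1.005   # kJ/(m^3*K) at 20 C -> ~1.211
c_air_70 = 1.029 * 1.009   # kJ/(m^3*K) at 70 C -> ~1.038

change = (c_air_70 - c_air_20) / c_air_20
print(f"air: {c_air_20:.3f} -> {c_air_70:.3f} kJ/(m^3*K)  ({change:+.1%})")

# The air capacitance itself changes by about 14 %, but it is roughly three
# orders of magnitude below that of the solid layers (order of MJ/(m^3*K),
# an assumed typical magnitude), so the variation is invisible in the
# conductor-temperature result.
solid_typical = 2000.0  # kJ/(m^3*K), assumed order of magnitude for a solid layer
print(f"air/solid capacitance ratio: {c_air_20 / solid_typical:.1e}")
```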
Conclusions

Four conductor-temperature calculation methods, based on temperature measurements at different radial positions in the cable, were proposed and compared with FEM simulations. The results showed that the conductor temperature calculated from the temperature of the insulation shield was closest to the measured value, with that calculated from the center of the waterproof compound next closest. There was, however, a remarkable difference between the measured results and those calculated from the temperature of the jacket surface or the aluminum sheath. The results of the FEM simulation agreed with those of the calculations. The uncertainty of the thermal resistivities, especially that of the air between the waterproof compound and the sheath, is the main factor affecting the conductor-temperature calculations. Because corrugation designs differ between manufacturers, it is difficult for standards to recommend a single constant for the thermal-resistance calculation of this air layer. The influences of other factors are limited. The difference in the initial temperatures of the radial positions caused by solar radiation mainly affected the calculation based on the jacket temperature during the first several hours. In addition, the effect of the temperature variation of the air thermal capacitance could be neglected.

To improve the accuracy of the conductor-temperature calculation, the temperature sensor should be placed closer to the conductor. The waterproof compound is recommended for its better accuracy and good insulation security. When the uncertainty of the thermal resistances is eliminated, especially that of the air between the waterproof compound and the aluminum sheath, conductor-temperature calculations using the temperature of the aluminum sheath can also achieve high accuracy.
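As a closing illustration of the kind of circuit these methods rest on, the sketch below time-steps a generic second-order RC thermal ladder under assumed parameters; it is not a reproduction of Formula (6) or of the paper's Methods 1-4, which invert measured layer temperatures rather than integrating forward.

```python
def simulate_two_node_ladder(q_loss, T1, C1, T2, C2, theta_boundary,
                             theta0, dt, steps):
    """Explicit-Euler simulation of a second-order RC thermal circuit.
    q_loss: conductor losses per metre, W/m
    T1, T2: thermal resistances, K*m/W; C1, C2: capacitances, J/(K*m)
    theta_boundary: fixed boundary temperature, C
    theta0: initial temperature (all nodes isothermal, as in Formula (6)).
    Returns the conductor-temperature history."""
    th_c = th_m = theta0  # conductor and intermediate node
    history = []
    for _ in range(steps):
        q12 = (th_c - th_m) / T1            # heat flow conductor -> node
        q2b = (th_m - theta_boundary) / T2  # heat flow node -> boundary
        th_c += dt * (q_loss - q12) / C1
        th_m += dt * (q12 - q2b) / C2
        history.append(th_c)
    return history

# Assumed parameters for illustration only; steady state here is
# 25 + 20 * (0.4 + 0.3) = 39 C.
hist = simulate_two_node_ladder(q_loss=20.0, T1=0.4, C1=2.0e3,
                                T2=0.3, C2=5.0e3,
                                theta_boundary=25.0, theta0=25.0,
                                dt=1.0, steps=6 * 3600)
print(f"conductor temperature after 6 h: {hist[-1]:.1f} C")
```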
Evaluation of a Point-of-Use Electrocoagulation System for Arsenic Removal

A point-of-use prototype electrocoagulation treatment system was designed and evaluated for its ability to remove arsenic from a synthetic groundwater. The system comprised an electrocoagulation reactor providing batch treatment, a rechargeable battery power source, an electrical monitoring and control module, and a granular media filter. The control module and the filter underdrain system were designed to improve user convenience. The system was able to consistently reduce arsenic concentrations to below 20 μg/L. Effluent soluble arsenic concentrations under 10 μg/L were deemed possible with enhanced effluent suspended solids removal techniques. Arsenic removal was found to be a function of the initial arsenic concentration and the cumulative charge dosage as measured in coulombs per liter of water treated. The steel plate size used in the electrocoagulation module influenced the current draw and the overall electrical efficiency of the system. The monitoring and control module allowed the system to produce up to 100 liters of treated water daily on a single battery charge and automatically controlled the charge and iron dosage. The point-of-use system was capable of meeting household potable water demands where other treatment options are limited.

Index Terms – arsenic; electrocoagulation; point-of-use; prototype

INTRODUCTION

Arsenic contamination in groundwater is a problem affecting over 225 million people globally in approximately 105 countries 1,2. Areas experiencing elevated groundwater arsenic levels include North America, South America, South Asia, Central Asia, and Australia. Potential human health issues associated with prolonged potable use of arsenic-containing water include skin damage, circulatory system problems, neurological effects, and increased cancer risks 3. As a result of these potential impacts, the United States Environmental Protection Agency (USEPA) and the World Health Organization (WHO) have established maximum acceptable levels of arsenic in drinking water of 10 µg/L. Other agencies and countries around the world have locally set acceptable arsenic levels as high as 50 µg/L 4.
A number of technologies have been identified for removing inorganic arsenic from groundwaters that potentially serve as sources of potable water. The four most common technologies are ion exchange, precipitation, coagulation/adsorption, and membrane treatment systems 4-9. These processes have been shown to be capable of producing high-quality finished water, but most require expensive resins, replaceable adsorption media, and/or chemicals to sustain effective long-term operation. Systems utilizing fixed adsorption media in contact columns have also been plagued by clogging when untreated waters contain elevated concentrations of reduced iron. Arsenic removal from groundwater using an electrocoagulation (EC)-based process technology has recently been demonstrated as a viable approach to potable water production, as it appears to provide effective treatment over a wide range of conditions while minimizing many of the limitations and water chemistry-based site-specific issues associated with other technologies 10,11. The EC technology has been shown capable of providing treatment either at laboratory scale 12,13 or in small communal systems 14. The goal of this project was to design and operationally evaluate a small point-of-use arsenic removal system based on EC technology. Experimentation revolved around treating the type of arsenic-contaminated groundwaters commonly encountered in South Asia, where arsenic levels range between 100 and 300 µg/L. EC-based treatment was chosen because of its potential to produce a high-quality treated water in a relatively simple, resource-frugal manner 12,15. Because the prototype system was designed for application in the rural areas of developing nations, relevant design and operating constraints were identified through communications with the head of a non-governmental organization (NGO) providing services in South Asia. The primary limiting factor was identified as the available electrical power supply. The final prototype system tested was designed to operate using a small 6.0 volt direct current (VDC) rechargeable battery while producing an effluent that could potentially satisfy a 10 µg/L arsenic limit.

METHODS

The test treatment system was comprised of four integrated elements: an EC reactor, an electrical power source, a control module for electrical current monitoring and dosage control, and a granular media filter. Figure 1 provides a schematic representation of how these individual elements were configured for evaluating system performance (FIGURE 1. TREATMENT SYSTEM SCHEMATIC DIAGRAM). The test system was fabricated using products and materials available in the general South Asia region, with cost, physical size, and weight minimization being primary constraints. Plastic parts were used wherever possible to minimize corrosion problems. A commercial multimeter was used on occasion to help verify electrical operation. Batch treatment test cycles using the prototype system were run at least twice for each set of operating conditions to verify system performance. Details for each system element are provided in the following paragraphs.
EC Reactor

The EC reactor employed a batch-fed plastic bucket with a 13.2 L liquid volume as the containment vessel. This liquid volume provided 5 cm of freeboard. The EC module was fabricated from five parallel sheets of 3.175 mm thick mild carbon steel plate. Two plate sizes were selected for evaluation: both 10 cm x 10 cm and 15 cm x 15 cm plates were tested in the same reactor vessel and compared for their ability to reduce arsenic concentrations down to target effluent levels as a function of time and cumulative power draw. Before use, exposed plate surfaces were conditioned by abrading them with coarse sandpaper. All plates were separated by a uniform distance using plastic block spacers and nylon threaded rods and nuts. The center plate and the two outer plates served as anodes, with the other two plates serving as cathodes. The five-plate EC module was suspended at mid-depth using a piece of 12 mm diameter Schedule 40 PVC pipe and plastic support chain. This configuration allowed for rapid assembly and disassembly. A terminal block (with jumpers) attached to the outside top of the container was used to distribute electrical power to the individual plates and provide the needed current flow. Wires from the terminal block were connected to the plates using 10 gauge insulated, solid copper wire and liquid electrical tape-based watertight connections.

Several experimental runs were conducted with supplemental mechanical mixing to assess the impact of increased turbulence on system performance. Mechanical mixing was provided using a magnetic stirrer located directly under the center of the EC reactor and a Teflon-coated "floating" stir bar placed on the center bottom of the reactor. The mechanical mixing energy intensity was set to ensure more uniform suspension of the internally generated hydrous ferric oxide (HFO) particles in the tank. When employed, the supplemental mechanical mixing was maintained from the beginning to the end of the otherwise standard reaction cycle.

Power Sources

The EC module was designed to operate with a 6.0 to 7.0 VDC power source. Two alternative power sources were used when evaluating system operation. A Mastech HY3005F-3 triple linear DC regulated power supply was used initially to establish an operational baseline for comparing electrocoagulation system performance under controlled conditions. Subsequent experimental runs employed a rechargeable battery as the sole power source. The battery was rated at 6.0 volts with a power storage capacity of at least 4.5 amp-hours and was recharged repeatedly during the course of the experimental work with no apparent loss in performance. When using 10 cm x 10 cm plates in the EC module, both power sources provided the target operating voltage while maintaining the same, constant electrical current. The regulated DC power supply was used for all experimental runs in which the impact of plate size on system performance was assessed. This helped ensure that all system electrical demands were satisfied and that the power source was not limiting the rate of reaction.
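A quantity worth extracting from this geometry is the anodic current density. The sketch below derives it from the plate dimensions above and the roughly 0.63 A draw reported later for the 10 cm plates; the count of four active anode faces is our reading of the five-plate layout, not an explicit statement in the paper.

```python
# 10 cm x 10 cm plates; the centre anode works on both faces, the two
# outer anodes only on their inward faces -> 4 active anode faces (assumed).
plate_area_cm2 = 10.0 * 10.0
active_faces = 4
current_A = 0.63  # draw reported for this configuration

current_density = 1000.0 * current_A / (active_faces * plate_area_cm2)
print(f"anodic current density ~ {current_density:.2f} mA/cm^2")  # ~1.6 mA/cm^2
```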
Current Dosage Monitoring and Control System

Recognizing that a point-of-use EC system could be powered by a rechargeable battery, it was deemed critical to conserve energy while still providing an appropriate level of operational functionality. Accordingly, a small electrical monitoring and dosage control system was designed to measure, record, and display the instantaneous electrical current flow (in milliamps) as well as the totalized current flow (in coulombs). The control system was also designed with a programmable totalized-current shut-off set point to discontinue power to the EC system when the set point value for an operating cycle was reached. The shut-off set point value could be programmed into the system based on the volume of water to be treated per operating cycle together with the charge dosage needed to achieve the target treated-water arsenic level. This approach allowed the system to deliver the target charge dosage each cycle even with small variations in electrical current flow due to changes in aqueous-phase conductivity or anode/cathode spacing. An actual point-of-use system would probably be supplied with the same base controller and an LED "reaction cycle complete" indicator bulb rather than the digital display to reduce costs.

The control system was built around a Texas Instruments model MSP430FR5969 programmable microcontroller. Output readings were directed to a 3.4 cm (diagonal) Sharp Memory LCD display. A Texas Instruments INA219B integrated circuit was used to monitor electrical current flow to the EC system. A standard USB port provided access to the secondary on-board microcontroller that was used to program the controller and make any necessary runtime adjustments. The cumulative electrical current shut-off set point value could be changed only by an individual having the required computer resources. A small pushbutton was used to initiate system operation. Once the EC reactor was filled with untreated water and the "start cycle" pushbutton was pressed, no operator attention was required until the designated charge dosage had been applied and power delivery to the system was interrupted. A separate pushbutton was provided to reset the control system so that the next (batch) operating cycle could begin once the EC reactor was again filled with untreated water.

All system components were selected for their low power consumption. The fully configured control module had dimensions of less than 6 cm x 6 cm x 6 cm and a mass of less than 0.25 kg.
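The totalising shut-off logic lends itself to a compact sketch. The actual firmware ran in C on the MSP430 with the INA219B monitor; the Python below mirrors only the control idea, and every name in it is hypothetical.

```python
import time

def run_treatment_cycle(read_current_mA, set_power, batch_volume_L,
                        target_dose_C_per_L, sample_period_s=1.0):
    """Integrate the measured current over time and cut power once the
    programmed cumulative charge (C/L x batch volume) has been delivered."""
    target_C = target_dose_C_per_L * batch_volume_L
    total_C = 0.0
    set_power(True)
    while total_C < target_C:
        # mA * s / 1000 -> coulombs for this sampling interval
        total_C += (read_current_mA() / 1000.0) * sample_period_s
        time.sleep(sample_period_s)
    set_power(False)  # "reaction cycle complete"
    return total_C

# e.g. a 13 L batch at 175 C/L stops after ~2275 C have been delivered.
```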
Granular Media Filter

Two granular media filter systems with alternative underdrain systems were assessed for their performance and usability. The first underdrain system utilized perforated PVC piping and a gravel layer below the silica sand media. The second underdrain system consisted of a single ORTHOS Liquid Systems model GVU™ filter nozzle. Each filter made use of an 18.9 L plastic bucket filled approximately two-thirds full with sand. This yielded an effective granular media depth of 0.18 m for the piped underdrain system and 0.25 m for the nozzle-based system. The silica sand utilized had an effective size of 0.38 mm and a uniformity coefficient of 1.35. A perforated plastic screen was placed just above the top of the granular media to help distribute the flow across the entire filter area. The granular media filter was placed into operation at the end of a reaction cycle in the EC reactor by manually opening a valve that allowed treated water to flow from the EC reactor to the filter bucket by gravity. Filtered water was discharged through the bottom of the bucket through a hole-and-fitting arrangement specifically designed to work with each underdrain system. The system outlet discharged into a treated-water storage reservoir. Larger, deeper filters were not considered due to their excessive weight.

Synthetic Groundwater

A synthetic groundwater feed stream was used to minimize any variations in composition that could potentially impact observed system performance. The influent feed was synthesized using reagent-grade raw materials and yielded the composition detailed in Table 1. Small stock solution volumes were added to 1.0 L of tap water and enough distilled water to produce the final 13.0 L volume used for each test cycle. Arsenic was added as sodium arsenate so that all influent arsenic was in the plus-five {As(V)} oxidation state. No arsenite {As(III)} was included in the synthetic groundwater, as it would have been effectively oxidized to arsenate {As(V)} under the conditions realized 16. The tap water was used to provide trace amounts of any ions that were not added with the stock solutions. Annual water quality reports from the local tap water purveyor indicated that arsenic, iron, and phosphorus contributions from this source were negligible. The pH of this untreated feed water was 7.8 when measured with a Fisher Scientific Accumet AB15 Plus pH meter and combination electrode. The solution pH in the EC module was observed to remain fairly constant (∆pH < 0.1 pH unit) during each treatment cycle. All experimental test runs were conducted at a room temperature of 22 ± 1 °C.
Analytical Methodology

Arsenic concentrations were measured using a colorimetric technique demonstrated to have a low detection limit 17. This method was chosen for its ability to provide accurate results at low arsenic concentrations and its low susceptibility to potentially interfering agents. Prepared samples were analyzed using a Shimadzu UVmini-1240 single-beam spectrophotometer. Conditions realized during sample preparation ensured that any arsenic in a solid phase would be quickly solubilized. Samples taken during mid-cycle operation of the EC reactor were first filtered using a 0.2 µm (pore size) membrane filter and associated vacuum filtration apparatus. Aliquots of the filtered material were then tested colorimetrically after addition of color-inducing reagents. The volume of filtered sample used for testing was adjusted, as part of the initial sample dilution procedure inherent in the method, to provide a fairly consistent mass of arsenic in the final diluted sample. This approach yielded consistent and accurate results when known, standard arsenic solutions were evaluated. Samples collected after granular media treatment were tested both with and without membrane pre-filtration. The parallel samples were employed to explore the degree to which relatively small HFO particles were able to pass through the granular filter media and end up contributing to arsenic levels in the treated water. Arsenic concentrations determined in samples tested after membrane pre-filtration are identified as soluble arsenic concentrations in this study. As an added quality control measure, a number of membrane-filtered samples were split and tested both colorimetrically and using a novel sensor-based technology currently under development 18. The sensor-based arsenic measurement system is being developed to quickly and reliably measure low levels of arsenic directly in the field and has yielded results consistent with mass spectrometry-based testing at low arsenic concentrations. In this study, the colorimetrically derived concentrations were found to correlate well with the corresponding sensor-derived values.
Experimental Overview

Experimentation was divided into three phases, as detailed in Table 2. The Preliminary Assessment Phase (Phase I) was used to identify an EC module configuration that would satisfactorily address all system design and performance constraints. Phase II, System Performance Assessment, examined how performance changed over time and how performance was impacted by different initial arsenic concentrations. Phase III, Filtration Assessment, investigated several potential underdrain systems. There was a limited amount of overlap between Phases II and III, where the test runs revolved around a 300 μg/L initial arsenic concentration. Phase I testing evaluated the impact of EC module plate size and plate spacing on system operation. In accordance with the results of others 15,16, the goal during this test cycle was to identify a module configuration that would provide (a) a batch reaction cycle time of less than 60 minutes, (b) a charge dosage rate of no more than 6 coulombs per liter per minute (C/L/min), (c) a cumulative charge dosage of no more than 180 coulombs per liter (C/L) for an initial 300 μg/L arsenic concentration, and (d) dimensional compatibility with the chosen EC reactor vessel. A number of test runs were terminated early when it became apparent that the tested plate spacing would not be able to provide satisfactory operation. The plate spacing coming closest to satisfying all target operating constraints was 22.0 mm for both plate sizes. A summary of the final (best) operating conditions and general performance results for the two EC modules is included as Table 3. The following sections provide additional details on all three experimental phases.

General Observations

Electrical current flow and the associated oxidation-reduction reactions commenced immediately after all electrical connections were completed and power was directed to the EC module. These reactions were characterized by the formation of rust-colored suspended HFO particles and the generation of a continuous stream of fine gas bubbles emanating from the surfaces of the designated cathode plates. Figure 2 shows the progression of HFO precipitate formation during the beginning phases of a typical test run with the smaller steel plate-based EC module. By the end of an operating cycle, the suspended HFO particles were distributed from the top to the bottom of the water column in the bucket/reactor. Each of the three anode plates became uniformly coated with a rust-colored iron oxide after several successive testing cycles. The two cathode plates remained free of any apparent oxidation and could later be used as anode plates once the companion anode plates reached a point where the extent of the induced oxidation reactions, and the associated reduction in structural integrity, rendered them unsuitable for continued operation. Gas production from the cathode plates was sufficient to provide a limited degree of liquid mixing in the reaction vessel, although larger, heavier HFO particles tended to settle to the bottom portion of the reactor with time. Electrical current draw remained essentially constant during each individual test cycle. The aforementioned rust-like coating on the anode plates thus had little apparent effect on system electrical performance.
Impact of Plate Size

Arsenic removal patterns observed when using 10 cm x 10 cm and 15 cm x 15 cm steel plates in the EC module during a typical operating cycle, with the aforementioned final plate spacing, were investigated to determine how plate size impacted system operating requirements. The current draw for the EC module configured with the smaller plates was approximately 0.63 amps, while the current draw for the module configured with the larger plates at the same plate spacing was approximately 2.0 amps. The reaction times needed to reduce an initial 300 µg/L arsenic concentration to sub-10 µg/L levels were 24 minutes for the larger EC module plates and 60 minutes for the smaller ones. The reduction in reaction/treatment time realized with the larger plates was, however, achieved at the cost of lower overall system electrical efficiency. The cumulative charge dosage needed to produce sub-10 µg/L soluble arsenic levels increased from about 175 C/L for the system employing the smaller plates to over 220 C/L for the system using the larger plates. These trends are consistent with results reported by others 15. The higher current draw associated with the larger plates, together with the lower electrical efficiency of the associated EC module, led to a decision to eliminate the 15 cm x 15 cm plates from further consideration, since the limitations associated with a rechargeable battery power source dictated use of the more efficient system. These initial experiments established baseline performance levels and operational traits for the EC module configured with the 10 cm x 10 cm plates. For a 300 µg/L initial arsenic concentration, a cumulative charge dosage of 175 C/L was consistently found to achieve sub-10 µg/L soluble arsenic levels.
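The efficiency comparison follows directly from the current draws and reaction times; the short check below reproduces the reported cumulative dosages from those figures.

```python
def charge_dosage_C_per_L(current_A, minutes, volume_L):
    """Cumulative charge dosage in coulombs per litre of water treated."""
    return current_A * minutes * 60.0 / volume_L

small = charge_dosage_C_per_L(0.63, 60, 13.0)  # ~174 C/L for 10 cm plates
large = charge_dosage_C_per_L(2.0, 24, 13.0)   # ~222 C/L for 15 cm plates
print(f"10 cm plates: {small:.0f} C/L;  15 cm plates: {large:.0f} C/L")
```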
Impact of Anode Condition

After initial system testing was conducted to characterize arsenic removal patterns, it became apparent that the arsenic-versus-time removal pattern in the EC reactor changed over time in parallel with the oxidation condition of the anode plates. The first few operating cycles with a newer EC module (little or no apparent surface oxidation on the anodes) were characterized by an exponential decay-type reduction in arsenic concentration with time. As the anode plates became more heavily oxidized over additional operating cycles, a three-phase pattern of soluble arsenic removal was observed. An initial rapid reduction in arsenic concentration was followed by a slower, temporary reduction in the arsenic removal rate. This was finally followed by a third, faster removal phase, with reductions in soluble arsenic concentrations ultimately reaching sub-10 μg/L levels. This pattern was observed for all "seasoned" anode plates (those operated long enough to build up a visible iron oxide coating on the plate surfaces). The differences in arsenic removal patterns between new and "seasoned" EC module anode plates are depicted in Figure 3 for experimental runs conducted with an initial arsenic concentration of 300 μg/L. The first of the three response phases with the seasoned/oxidized anode plates can potentially be related to reports of rapid initial arsenic binding to the oxidized surfaces of EC module anode plates 10. Both the overall batch reaction time (τ10 in Figure 3) and the charge dosage needed to reduce soluble arsenic levels from 300 to 10 μg/L were unaffected by the degree of anode oxidation for a given EC module plate size. This is consistent with observations over the duration of the experimental period that anode oxidation condition had no impact on system current draw and general electrical performance. Required reaction time was independent of plate condition. Test runs conducted with other initial arsenic concentrations all involved the use of "seasoned" plates and were characterized by the aforementioned three-phase response.

Impact of Initial Arsenic Concentration

Experiments were conducted using initial arsenic concentrations of 100, 200, and 300 µg/L, representing the typical range of arsenic concentrations found in many areas of South Asia. The cumulative charge dosages required to achieve sub-10 µg/L soluble arsenic levels in the treated water were approximately 102, 154, and 175 C/L for initial arsenic concentrations of 100, 200, and 300 µg/L, respectively. The smaller cumulative charge dosages required for treatment at lower initial arsenic concentrations translate into an ability to treat more water on a single battery charge. Conversely, higher initial arsenic levels would require greater charge dosages. This suggests that an accurate assessment of the untreated water quality would be needed to optimize power usage with the electrical controller.

Impact of Supplemental Mechanical Mixing

The impact on arsenic removal of providing supplemental mechanical mixing in the EC reactor is shown in Figure 4.
Supplemental mechanical mixing did not improve arsenic removal in the test system when compared to the same system operated without any supplemental mixing. The stream of fine bubbles produced from the cathode plates apparently satisfied any mixing needed to allow the generated HFO particles (and/or the exposed anode surfaces) to interact effectively with the solution-phase arsenic. The absence of any need for external mixing in this prototype system should not, however, be interpreted as a universal statement; systems should be evaluated on a case-by-case basis, as the dimensions and production capacities of individual systems vary.

Filtration Effectiveness

The filter system employing the nozzle-based underdrain was quickly chosen over the PVC pipe and gravel underdrain system because of its relative simplicity and the ease and rapidity of media replacement. The ability of the nozzle-based system to remove HFO particles and the associated arsenic was evaluated by comparing arsenic levels in final effluent samples with and without membrane pre-filtration. Particle-laden water from the EC reactor was fed to the filter at a rate of approximately 1.6 L/min using a manual control valve (yielding a calculated filter application rate of 37 m³/m²/d). Direct measurement of arsenic levels in the filter effluent ranged between 9 and 21 µg/L, while arsenic concentrations measured in parallel, membrane-filtered samples were consistently below 10 µg/L. A few HFO particles apparently were able to pass through the granular media filter bed. Alum has been used to help improve HFO particle removal in larger-scale EC arsenic removal systems 14. The sub-10 µg/L arsenic levels achieved after membrane filtration offer hope that an enhanced granular media filter, or another size-appropriate solids separation technology, can be used to meet global standards.

Practical Operating Considerations

System operation with the 10 cm x 10 cm plates and an influent arsenic concentration of 300 µg/L required approximately 60 minutes to treat 13 L of water. Including eight minutes to transfer liquid from the EC reactor to the granular media filter, about 9.0 hours would be needed to provide the roughly 100 L of final product water needed daily for a family of five using a WHO-recommended supply rate of 20 L/capita/d. The 20 L/capita/d supply rate seems acceptable considering that little arsenic is absorbed directly through the skin 19 and that arsenic-containing groundwater can be used for other, non-potable purposes. Use of the control system would allow the system owner more discretion over how the system is operated, but the total required time would remain unchanged. Access to a continuous, centralized power distribution system would potentially allow larger plates to be used in the EC module and reduce treatment times.

While this investigation did not address the potential effect of untreated water pH on system performance, other studies have specifically examined the impact of pH on the removal of arsenic by HFO particles in similar systems 12,13. These investigations have reported that arsenic adsorption onto HFO particles is fairly independent of pH up to a pH of 8.
Limited available field data suggest that groundwaters in the South Asia area have pH levels in the near-neutral range 6,14 and, as seen herein, would provide a suitable aqueous environment for the application of EC technology. These groundwaters also appear to have an ionic composition (and electrical conductivity) that would support EC-based arsenic removal systems.

A stoichiometric reaction equation describing the production of HFO particles in EC systems has been presented by Ji et al. 13 and is shown as Equation (1). Using this reaction and the 175 C/L charge dosage needed to effect arsenic removal at an initial arsenic concentration of 300 μg/L, Faraday's Law suggests that 0.45 liters of hydrogen gas would be produced at standard temperature and pressure during each 60-minute operating cycle. While this is not a particularly large volume of hydrogen, its potential accumulation over time suggests that EC systems be located in a well-ventilated space to avoid any explosive conditions.

The iron anode plates in the system must be considered a consumable material. Again using Faraday's Law, it was estimated that the 10 cm x 10 cm anode plates would be completely consumed in about 150 days when producing 100 L/d of treated water at a charge dosage of 175 C/L. The actual working life of the plates is probably 80 to 90 percent of this value, as the plates would become mechanically unstable before being completely consumed. Sand in the granular media filter must also be considered a consumable material. Suspended solids production rates of 55, 100, and 155 mg/L were measured after applied charge dosages of 60, 120, and 175 C/L, respectively. At the maximum rate of solids production, sand replacement frequency was estimated to be every three days. A local source for the sand should be available for long-term system operation. Spent sand must be disposed of in a responsible manner. Fortunately, the HFO particle-bound arsenic is highly resistant to leaching 15, and this will help ensure that, once removed, the arsenic can be effectively contained. A dedicated sand disposal pit (no co-disposal with waste organic materials) should be located as far as reasonably possible from any wells.
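The consumable estimates can be reproduced with Faraday's law. The sketch below assumes two-electron iron dissolution (Fe → Fe2+ + 2e−) and a mild-steel density of 7.85 g/cm³, neither of which is stated explicitly in the paper; with those assumptions it lands close to the quoted ~150-day plate life, and the final lines check water production per charge against the 4.5 amp-hour battery rating.

```python
F = 96485.0       # Faraday constant, C/mol
M_FE = 55.85      # molar mass of iron, g/mol
RHO_STEEL = 7.85  # g/cm^3, assumed for mild carbon steel

# Daily charge at 100 L/d and 175 C/L.
daily_C = 100.0 * 175.0
fe_g_per_day = daily_C / F / 2.0 * M_FE  # Fe -> Fe2+ + 2e- (assumed)

# Three 10 cm x 10 cm x 3.175 mm anode plates.
anode_mass_g = 3 * (10.0 * 10.0 * 0.3175) * RHO_STEEL
print(f"anode life ~ {anode_mass_g / fe_g_per_day:.0f} days")  # ~147

# Battery throughput: a 6 V, >= 4.5 Ah charge holds >= 16,200 C,
# i.e. ~93 L at 175 C/L or ~159 L at 102 C/L.
battery_C = 4.5 * 3600.0
print(f"{battery_C / 175.0:.0f} L at 175 C/L, {battery_C / 102.0:.0f} L at 102 C/L")
```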
Table 4 presents projected cost information for a point-of-use arsenic removal system employing EC technology. Useful lives for individual system elements were estimated based on expected field conditions in South Asia for a water production rate of up to 100 L/d. A discount rate of five percent was used in determining the annual cost. The cost for individual items, in U.S. dollars (USD), reflects quantity discounts and an estimated amortized cost of $0.12/kW-hr (8.0 Indian Rs per kW-hr) for power from a small, modular, community-based solar-power system in South Asia. The two oversized batteries identified for use in the EC system include one unit actively powering the system while the second is being recharged. A more robust battery could be used to decrease the recharge frequency, but only at substantial additional cost. Table 5 presents published cost data for point-of-use arsenic removal technologies. Owing to different locales and system production capacities, direct comparisons of production costs are difficult. In one case, only operating costs were directly reported, without any estimation of related amortized (initial and/or recurring) system capital costs. Iron coagulation-based systems, in general, have been identified as among the most cost-effective technologies for small-scale arsenic removal when compared to other systems proposed for use in the South Asia region 5. Even the relatively modest costs indicated in Table 5 for point-of-use treatment systems would require financial subsidies for sustained operation in most rural South Asia communities 22,23. Economies of scale realized for communal treatment systems would reduce costs, operating subsidies, and user attention requirements 24,25 but would, in turn, require a suitable population density and a reasonable level of local trust in such centralized operations.

Considering the electrical components needed to fabricate the point-of-use treatment system used in these experiments, the parts and technical expertise required to manufacture the control system are certainly available in the South Asia region. Basic steel plates are also produced in the region at costs lower than in many other parts of the world. Future integration of a low-cost arsenic sensor 18 with the electrical control system could make this type of treatment more cost-effective. The sensor could determine when the arsenic level in the EC reactor has reached a target value and would then direct the controller to discontinue power to the system and alert the system attendant. This approach would optimize the use of consumable iron plates and provide real-time response to changes in the composition of the untreated groundwater. While the control system does make EC treatment more complex than other currently promising point-of-use arsenic removal systems 26, its ability to function when locally available groundwaters contain elevated concentrations of reduced arsenic and/or reduced iron renders it a viable option to consider in situations where long-term operational problems related to poor adsorption or in-system inorganic mineral oxidation and precipitation reactions are likely to occur.

Widespread implementation of point-of-use EC treatment systems in rural communities should be conducted under the auspices of an NGO or similar local authority. In this manner, the technical expertise needed for training and system monitoring will be available on an ongoing basis. This expertise would also help ensure that supplies of consumables are available and that process residuals are managed properly. The NGO or local authority could also play a role in providing centralized facilities for recharging the batteries used to power these systems and in managing battery disposal when they are no longer functional.
CONCLUSIONS AND RECOMMENDATIONS

Small-scale or point-of-use EC systems are capable of reducing arsenic concentrations down to levels of 20 µg/L or less. If granular media filtration can be cost-effectively enhanced or replaced with a more effective, low-cost technology, treated water arsenic concentrations below 10 µg/L can be reliably achieved. Point-of-use EC treatment systems can be powered by small, rechargeable batteries with the ability to produce up to 100 L of water on a single charge. Supplemental mixing appears to have little impact on system performance in a bucket-scale operation. The prototype system tested made use of an innovative electrical control system to use available power efficiently and minimize iron consumption. A discharge nozzle-based filtration unit was developed to simplify media replacement. Developing arsenic sensor technologies could also be used to optimize system performance. In designing EC systems, compromises will have to be made between treated-water production rate and energy efficiency; decreases in processing time will likely come at the cost of higher electrical consumption per unit volume of water produced. No attempt was made to assess the disinfection needs of an EC-based treatment system; this should be an area of future research. Lastly, assistance provided by an NGO or governmental entity is key to the successful field implementation of this technology. Based on observations with other point-of-use arsenic mitigation systems, this assistance would probably need to include both technical oversight and financial subsidies.

FIGURE 2. PROGRESSION OF HFO PRECIPITATE FORMATION DURING A TYPICAL TEST RUN
FIGURE 3. IMPACT OF ANODE PLATE OXIDATION STATE ON SOLUBLE ARSENIC REMOVAL PATTERNS
FIGURE 4. IMPACT OF SUPPLEMENTAL MIXING ON SOLUBLE ARSENIC REMOVAL
TABLE 1. MAJOR SYNTHETIC GROUNDWATER COMPONENTS
TABLE 3. SUMMARY OF TEST PHASE I FINAL OPERATING AND PERFORMANCE CHARACTERISTICS
TABLE 4. ESTIMATED ANNUAL COST OF A 100 LITER PER DAY POINT-OF-USE TREATMENT SYSTEM (* the cost of the timer/controller varies depending on the output: LED bulb versus LED screen)
TABLE 5. COST DATA FOR POINT-OF-USE ARSENIC TREATMENT SYSTEMS
Mutation Spectrum of Stickler Syndrome Type I and Genotype-phenotype Analysis in East Asian Population: a systematic review

Background: Stickler syndrome is the most common genetic cause of rhegmatogenous retinal detachment (RRD) in children and carries a high risk of blindness. Type I (STL1) is the most common subtype, caused by COL2A1 mutations. This study aims to analyze the mutation spectrum of COL2A1 and to further elucidate the genotype-phenotype relationships in East Asian populations with STL1, which are poorly studied at present. Methods: By searching MEDLINE, Web of Science, CNKI, Wanfang Data, HGMD and ClinVar, all publications associated with STL1 were collected. They were then carefully screened to obtain all reported STL1-related variants in COL2A1 and the clinical features of East Asian patients with STL1. Results: There were 274 COL2A1 variants identified in 999 patients with STL1 from 466 unrelated families, and more than half of them were truncation mutations. Among the 107 STL1 patients reported in the East Asian population, patients with truncation mutations had milder systemic phenotypes, whereas patients with splicing mutations had more severe phenotypes. In addition, several recurrent variants (c.3106C > T, c.1833 + 1G > A, c.2710C > T and c.1693C > T) were found. Conclusions: Genotype-phenotype correlations should be studied carefully, as they contribute to making personalized follow-up plans and predicting the prognosis of this disorder. Genome editing holds great potential for treating inherited diseases caused by pathogenic mutations. In this study, several recurrent variants were found, providing potential candidate targets for genetic manipulation in the future.

Background

Stickler syndrome is a clinically variable and genetically heterogeneous disease, first described by Stickler et al [1]. It is characterized by ocular, skeletal, auditory and orofacial abnormalities. The incidence among newborns is approximately 1:7500-1:9000 [2]. Ocular findings include early-onset high myopia, retinal detachment (RD), cataract and glaucoma. It has been reported that 50-73% of patients develop RD in their lifetime, and that detachment is bilateral in up to 75% of cases [3-5]. Recurrent RD can cause irreversible vision loss and even blindness. Skeletal changes include joint hypermobility in childhood, mild spondyloepiphyseal dysplasia (SED) and premature osteoarthritis. Sensorineural hearing loss (HL) is usually mild, mainly affecting the high tones [6]. Orofacial features include a flat midface, depressed nasal bridge, micrognathia and cleft palate. At present, six subtypes of Stickler syndrome have been distinguished. Type I (STL1, OMIM 108300) is the most common form, accounting for 80-90% of cases [7]. STL1 is a dominantly inherited disorder caused by mutations in the COL2A1 gene (OMIM 120140). The COL2A1 gene maps to human chromosome 12q13.11 and is composed of 54 exons, spanning 31.5 kb of genomic DNA [8]. It encodes the α1(II) chain, which is mainly expressed in hyaline cartilage, intervertebral discs, adult vitreous and the inner ear [9]. To date, numerous patients with STL1 have been reported worldwide [9], but data for the East Asian population remain limited to a few cases and small cohort studies. In this study, we aim to further elucidate the genotype-phenotype relationships by analyzing the clinical and genetic characteristics of East Asian patients with STL1. Furthermore, all reported variants in COL2A1 associated with STL1 were also analyzed.
Methods

MEDLINE, CNKI, Web of Science and Wanfang Data were searched for the period 1965 to October 2019, applying the following search terms: (mutation OR variant) AND (COL2A1 AND "Stickler syndrome"). In addition, the Human Gene Mutation Database (HGMD) and ClinVar were also searched, with the search term COL2A1. All relevant publications were carefully screened. We included publications that 1) were studies of patients with STL1, 2) provided the variant in COL2A1, and 3) were full-text articles. Reviews and obvious duplicates were excluded; other types of studies that met the inclusion criteria were included. Relevant information was collected, including variants and ethnicities. For patients in East Asia, detailed clinical features were also collected. All processes were performed independently by two authors (D.W. and F.G.), including screening, selecting studies, extracting data and assessing the risk of bias. Any discrepancy in the assessment was resolved by consensus. The study was performed according to the Declaration of Helsinki and approved by the Ethics Committee of the Eye and ENT Hospital of Fudan University.

All variants in COL2A1 were checked to ensure that they were numbered according to the reference cDNA sequence NM_001844.4. If not, nucleotide and codon numbers were converted so that their mutation nomenclature matched reference transcript NM_001844.4 for COL2A1. For DNA numbering, +1 corresponds to the A of the ATG translation initiation codon. All statistical analyses were performed using SPSS 20.0 (SPSS Inc., Chicago, IL, USA). According to mutation type, East Asian patients were divided into 3 subgroups. The chi-squared test was performed to compare phenotypes and sex among the 3 subgroups. In addition, the Kruskal-Wallis test was applied to compare age among the 3 subgroups. A P value < 0.05 was considered the threshold of statistical significance.

Spectrum of COL2A1 Mutations

A total of 80 original articles met the inclusion criteria. There were 274 disease-causing variants in COL2A1 identified in diverse ethnicities, as shown in Additional file 1: Table S1. Of the 999 patients with STL1 from 466 unrelated families, most were Europeans (342/466 families, 73.4%), followed by Asians (63/466 families, 13.5%) and North Americans (42/466 families, 9.0%; Fig. 1). All patients carried one heterozygous variant in COL2A1, except for one patient harboring two variants in COL2A1. Most variants (158/274, 57.7%) are nonsense and frameshift mutations (insertions/deletions/indels) that are predicted to cause premature protein truncation, leading to the absence of collagen synthesis (Fig. 2a). The majority of variants were identified in only one family (213/274, 77.7%), suggesting that the mutation spectrum is far from saturated in spite of numerous COL2A1 mutation reports. The triple helix is the common structural feature of collagens, composed of a core repeat of Gly-X-Y [10]. The majority of variants in COL2A1 were located in the triple-helix region (encoded from codons 201-1214), accounting for 81.0% (222/274), followed by the N-terminal propeptide (encoded from codons 26-181) (Fig. 3a).

Genotype-Phenotype Correlations

All East Asian patients were divided into 3 subgroups according to mutation type (Table 1, Fig. 3b). No apparent difference in age or sex was observed among the 3 subgroups (P = 0.301 and 0.692, respectively).
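For reference, the subgroup comparisons described in the Methods reduce to standard contingency-table and rank tests; the sketch below uses SciPy with made-up counts and ages, not the published data.

```python
import numpy as np
from scipy.stats import chi2_contingency, kruskal

# Hypothetical 3 x 2 table: patients with / without a given phenotype in the
# truncation, missense and splicing subgroups (NOT the published counts).
table = np.array([[12, 30],
                  [10, 15],
                  [14,  6]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Kruskal-Wallis comparison of age across the three subgroups (placeholder data).
rng = np.random.default_rng(0)
ages = [rng.integers(5, 60, size=n) for n in (42, 25, 20)]
stat, p_age = kruskal(*ages)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_age:.3f}")
```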
Compared with the other two groups, patients harboring splicing mutations were more likely to develop cataract (P = 0.004) and had more severe systemic phenotypes (P = 0.010 for arthropathy, P = 0.022 for HL), whereas patients with truncation mutations had milder systemic phenotypes. In addition, there was no significant difference in the occurrence of high myopia, RD, midfacial dysplasia or cleft palate among the 3 subgroups (P = 0.510, 0.575, 0.246 and 0.743, respectively). Type II collagen can be alternatively spliced, and its long form, which includes exon 2, is mainly expressed in the vitreous [11]. Therefore, patients with exon 2 mutations have few extraocular manifestations [12]. Nevertheless, seven patients harboring mutations in other exons also had no systemic manifestations. In addition, phenotypic variation, such as differences in spherical equivalent refraction and sensorineural hearing loss, presence versus absence of cleft palate, and varying skeletal manifestations, was observed in siblings and in unrelated families with the same variants. This suggests high clinical heterogeneity in this disorder and indicates that other factors, such as modifier genes and environmental exposures, affect the severity of the phenotypic spectrum.

Diagnostic Relevance

Of the 107 patients with STL1, 10 were initially diagnosed with early-onset high myopia (eoHM), and the correct diagnosis was not made until they presented with systemic symptoms of STL1 after years of follow-up [13]. Seven patients remain diagnosed with eoHM, even though their COL2A1 variants have been recorded in HGMD as STL1-related variants (Additional file 3: Table S3) [14,15], because the diagnosis of STL1 is clinically based at present [16]. In addition, nine other patients with eoHM harboring COL2A1 variants have been reported, and these variants have been recorded in HGMD as eoHM-related variants (Additional file 3: Table S3) [14,15].

Discussion

In this study, it was found that patients with splicing mutations had more severe systemic phenotypes than patients harboring truncation mutations or missense mutations. In addition, most STL1 patients carried truncation mutations, and they were less likely to develop arthropathy. In light of previous studies, the STL1 spectrum is mostly attributed to truncation mutations, SED to missense mutations, and Kniest dysplasia (KND) to splicing mutations [9,17], which is consistent with our finding. This also explains why patients with STL1-related variants have milder phenotypes, especially arthropathy, than patients harboring other variants in COL2A1. Furthermore, this genotype-phenotype correlation provides a reference for personalized follow-up plans in spite of the high clinical heterogeneity of STL1. Genome editing shows great potential for treating inherited diseases caused by pathogenic mutations. Currently, multiple Cas9-based clinical trials are in progress or beginning soon, which is likely to guide the future use of somatic cell editing both ex vivo and in patients [18]. Several recurrent variants were found in this study, including c.3106C > T (18/466), c.1833 + 1G > A (18/466), c.2710C > T (15/466) and c.1693C > T (14/466), providing potential candidate targets for genetic manipulation in the future. We reviewed the clinical features of 107 STL1 patients in East Asia. Myopia was present in 98.9% (88/89) of evaluable patients, higher than in the European population (89%) [19]. In addition, 82.8% (24/29) of eyes were highly myopic before 6 years of age. Zhou L et al.
proposed that eoHM is the most easily recognizable sign in children with potential STL, usually presenting earliest [13]. Midfacial dysplasia was observed in 60.6% (60/99) of the patients, second only to myopia. However, it is also a common facial feature in Asians and is easily overlooked by clinicians. Therefore, some patients with STL1 are initially diagnosed with eoHM. Most phenotypes of STL1 become more prevalent with advancing age, such as RD, sensorineural hearing loss and arthropathy [20]. It is therefore not wise to draw a conclusion too early, and regular multisystem follow-up is essential for patients harboring COL2A1 variants. STL is the most common genetic cause of rhegmatogenous retinal detachment (RRD) in children [21]. In this study, RRD was observed in 41.7% (43/103) of the patients and was bilateral in 46.5% (20/43), lower than in previous studies (50-73% and 75%, respectively) [3-5]. This may be related to the younger age of the patients included, because the prevalence of RRD is a function of age [20]. In addition, the prevalence of membranous vitreous anomaly (MVA) was 45.8% (44/96), similar to the previous study in the European population (42%) [19]. There are some limitations to this study. The phenotypic data in some articles are incomplete, which limits the number of evaluable patients. Furthermore, reporting bias is inherent to this design; information based on the number of reports cannot reflect the prevalence of STL1 in East Asia. Lastly, a long-term, controlled prospective study is required to substantiate the genotype-phenotype correlation found in this study.

Conclusions

In this study, our findings revealed that patients with splicing mutations had more severe systemic phenotypes than patients harboring other types of mutations, whereas patients with truncation mutations had milder phenotypes. This helps clinicians develop personalized follow-up plans for patients with STL1. In addition, the recurrent variants c.3106C > T, c.1833 + 1G > A, c.2710C > T and c.1693C > T were found, providing potential candidate targets for future gene therapy. Finally, high myopia before 6 years of age is a key sign. For patients with high myopia, regular multisystem follow-up is essential if a heterozygous variant in COL2A1 is identified.

Additional file 2: Table S2. Clinical characteristics in East Asian patients. Additional file 3: Table S3. The patients suspected of having STL1.
Prognostic role of CD4 T-cell depletion after frontline fludarabine, cyclophosphamide and rituximab in chronic lymphocytic leukaemia

Background: Eradication of minimal residual disease (MRD) at the end of fludarabine-cyclophosphamide-rituximab (FCR) treatment is a validated surrogate marker for progression-free and overall survival in chronic lymphocytic leukaemia. But such deep responses are also associated with severe immuno-depletion, leading to infections and the development of secondary cancers. Methods: We assessed blood MRD and normal immune cell levels at the end of treatment in 162 first-line FCR patients, and analysed survival and adverse events. Results: Multivariate landmark analysis 3 months after FCR completion identified unmutated IGHV status (HR, 2.03, p = 0.043), the level of MRD reached (intermediate versus low, HR, 2.43, p = 0.002; high versus low, HR, 4.56, p = 0.002) and CD4 > 200/mm3 (HR, 3.30, p < 0.001) as factors independently associated with progression-free survival (PFS); neither CD8 nor NK counts were associated with PFS. The CD4 count was associated with PFS irrespective of IGHV mutational status, but only in patients with detectable MRD (HR, 3.51, p = 0.0004), whereas it had no prognostic impact in patients with MRD < 10−4 (p = 0.6998). We next used a competing risk model to investigate whether immune cell subsets could be associated with the risk of infection and found no association between CD4, CD8 and NK cells and infection. Conclusions: Consolidation/maintenance trials based on detectable MRD after FCR should investigate CD4 T-cell numbers both as a selection and as a response criterion, and consolidation treatments should target B-cell/T-cell interactions.

Background

In chronic lymphocytic leukaemia (CLL), chemo-immunotherapy (CIT) with fludarabine, cyclophosphamide and rituximab (FCR) is now well established as a standard of care for young, treatment-naive, fit patients without TP53 locus alterations (mutations and/or deletions) and with normal renal function [1,2]. When compared to new-generation targeted signalling inhibitors, FCR induces very prolonged remission periods in the subset of patients with IGHV mutations (IGHV-M), with three independent long-term follow-up studies reporting a > 10-year progression-free survival (PFS), specifically in patients in whom minimal residual disease (MRD) cannot be detected (< 10−4) after treatment completion [3-5]. In a pooled analysis of randomised trials, FCR treatment of patients without IGHV mutations (IGHV-UM) resulted in a median PFS of only 42.9 months, with the absence of a plateau on the PFS curve and an attenuation of the advantages of reaching undetectable MRD status [6]. In the context of CIT, the evaluation of MRD is of utmost importance because patients with undetectable MRD after treatment still achieve better PFS and overall survival (OS) than those with detectable MRD [7-12]. The quantification of MRD is, however, not recommended beyond the context of clinical trials [13,14]. A number of factors are known to be associated with the depth of MRD response achieved by CIT (TP53 mutation and/or deletion 17p [del17p], high β2-microglobulin levels, or complex karyotype). Conversely, we have a limited understanding of the factors that drive an almost universal relapse in IGHV-UM patients despite achieving undetectable MRD status [6]. Indeed, there is a lack of clinical factors that can accurately improve the prognostic power of eradicating MRD [15].
Since bystander immune cells such as CD4 T-cells promote CLL survival/proliferation in tumour niches before FCR [16], we hypothesised that normal lymphocyte levels may influence the duration of PFS independently of the MRD status achieved after completion of therapy. Since FCR also induces profound and durable lymphopenia, we correlated these measurements to the well-described risk of developing secondary malignancies and/or serious infectious events [17]. Study population Between January 01, 2005 and February 29, 2016, 162 patients receiving frontline FCR for CLL in two institutions (IUCT-Oncopôle, Toulouse and Institut Bergonié, Bordeaux, France) were enrolled in our study. Patients' clinical and biological data were retrieved from medical charts. In addition to complete blood counts, flow cytometry analyses were performed on peripheral blood samples at the end of treatment (EOT, i.e. 3 months after the last course of FCR) to monitor both normal immune reconstitution (CD4, CD8, NK) and MRD levels. MRD was quantified by 8-colour flow cytometry, with a sensitivity of at least 10⁻⁴, using a combination (MRD antibody cocktail) comprising CD81-FITC (BD Pharmingen), CD43-PE (Beckman Coulter), CD79b-PerCP Cy5.5 (BD Biosciences), CD5-PC7 (Beckman Coulter), CD22-APC (BD Biosciences), CD20-AA700 (Biolegend), CD45-APC-H7 (BD Biosciences) and CD19-BV510 (BD Biosciences). One to five hundred microliters of fresh blood were incubated with the MRD antibody cocktail for 15 min; red cells were then lysed (with BD lysis buffer) for 15 min and washed twice. Flow cytometry analysis of a minimum of 10⁵ leucocytes was carried out on a Navios instrument with Kaluza software (Beckman Coulter). Residual CLL cell gating and quantification were assessed according to the ERIC recommendations [18][19][20]. Definition of outcomes Progression-free survival (PFS) was calculated from the first day of the first cycle of FCR (D1C1) to either relapse (per IwCLL2008 recommendations) or death from any cause [13]. Overall survival (OS) was calculated from D1C1 FCR to death from any cause. At the end of treatment (EOT, i.e. 3 months after the last course of FCR), the overall response was classed as either complete clinical response (clinical CR), complete response with incomplete bone marrow recovery (CRi), partial response (PR), or failure. This response assessment differed from the IwCLL2008 criteria, in that bone marrow biopsies are not warranted beyond the context of clinical trials in France; this explains why we used the term "clinical CR" instead of complete response (CR). MRD levels were classified as undetectable (< 10⁻⁴), intermediate (10⁻⁴ to 10⁻²) and high (≥ 10⁻²), as defined by the German CLL study group in the CLL8 and CLL10 trials [7,21]. Opportunistic infections were defined as follows: herpes zoster, Pneumocystis pneumonia, CMV disease, infection-driven hemophagocytic lymphohistiocytosis, invasive fungal infection, Toxoplasma gondii infection, malignant external otitis, progressive multifocal leukoencephalopathy, hepatitis B re-activation (in patients who were previously both anti-hepatitis B core and anti-hepatitis B surface antigen positive), and chronic hepatitis E infection. Severe infections were defined as any infection leading to hospitalisation (irrespective of common terminology criteria grade). Patients received primary prophylaxis with trimethoprim-sulfamethoxazole and valaciclovir in > 90% of cases (stopped 6 months after EOT evaluation in most cases [22]).
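To make the MRD stratification above concrete, here is a minimal Python sketch of the classification rule; the function name and its input convention (residual CLL cells as a fraction of leucocytes) are ours, not the paper's.

def classify_mrd(residual_fraction):
    """Classify an MRD result, given as CLL cells per leucocyte, into the
    three strata defined by the German CLL study group (CLL8/CLL10):
    undetectable (< 1e-4), intermediate (1e-4 to 1e-2), high (>= 1e-2)."""
    if residual_fraction < 1e-4:
        return "undetectable"
    if residual_fraction < 1e-2:
        return "intermediate"
    return "high"

# Examples: 5 CLL cells among 10^5 leucocytes is below the stated
# sensitivity threshold; 2 per 100 leucocytes is a high MRD level.
assert classify_mrd(5e-5) == "undetectable"
assert classify_mrd(3e-3) == "intermediate"
assert classify_mrd(0.02) == "high"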
Statistical analyses Continuous variables were presented as the median with a range (min-max) and categorical variables were summarised by frequencies and percentages. EOT CD4 counts were evaluated as a binary covariable with a threshold of 200/mm3, typically used to guide infection prophylaxis in HIV patients [23], but also in routine haematology practice. The NK, CD8 and monocyte count cut-offs used were based on the median count at EOT. The chi-square or Fisher's exact test was used to compare categorical variables. Survival rates were estimated by the Kaplan-Meier method, with 95% confidence intervals (95%CI). Patients who were still alive were censored at the cut-off date or at their last available follow-up. Univariate and multivariate analyses were performed using the log-rank test and the Cox proportional hazards model; hazard ratios (HR) were estimated with 95% confidence intervals. Landmark analyses were performed at 9 months after initiation of treatment, to assess the impact of variables evaluated post-treatment on OS and PFS. Cumulative incidences of opportunistic and/or serious infections were estimated using a competing risks model, with relapse and death considered as competing events. Univariate analyses were performed using the Fine and Gray model and sub-hazard ratios were estimated with a 95%CI. All tests were two-sided and p values < 0.05 were considered statistically significant. All analyses were conducted with STATA v13 (Stata Corporation, College Station, TX, USA) and R (3.4.3).
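As an illustration of the landmark approach just described, the sketch below re-creates it in Python with the lifelines library (the authors used STATA and R); the data file and all column names are hypothetical placeholders, not taken from the paper.

import pandas as pd
from lifelines import CoxPHFitter

LANDMARK = 9  # months after D1C1 FCR, matching the paper's landmark time

df = pd.read_csv("cll_fcr_cohort.csv")  # hypothetical file

# Landmark analysis: keep only patients still at risk at the landmark,
# and measure survival time from the landmark onward.
at_risk = df[df["pfs_months"] > LANDMARK].copy()
at_risk["time_from_landmark"] = at_risk["pfs_months"] - LANDMARK

cph = CoxPHFitter()
cph.fit(
    at_risk[["time_from_landmark", "event",
             "cd4_gt_200", "ighv_unmutated", "mrd_intermediate", "mrd_high"]],
    duration_col="time_from_landmark",
    event_col="event",
)
cph.print_summary()  # hazard ratios with 95% CIs, analogous to the paper's tables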
Pre-therapy cohort characteristics Patients' characteristics are summarised in Table 1. Patients were males in 69.1% of cases. The median age was 61.5 years and Binet stage was B/C in 79% of patients. Other known prognostic variables included: 11q deletion in 22.2%, 17p deletion in 3.9%, IGHV-UM status in 63.4%, β2-microglobulin > 3.5 mg/L in 78.3%, complex karyotype in 21.6%, NOTCH1 mutations in 16.2%, and SF3B1 mutations in 8.8% of patients. The majority (75.9%) of patients received 6 cycles of FCR, and 98.1% received at least 4 cycles (Fig. 4). During FU, 20 patients (12.3%) developed a secondary cancer within a median time of 40 months from D1C1 FCR (range, 6-111), and 10 patients (6.2%) developed a Richter transformation (RT) within a median time of 59.5 months from D1C1 FCR (Table 3). Due to the small number of patients with secondary cancers as the first event (n = 10), we could not investigate the association of EOT CD4, CD8 and NK cell counts with the incidence of those events; nevertheless, of these 10 patients, 4 had EOT CD4 > 200/mm3. The landmark competing risk analysis (Table 4) did not detect any association between EOT levels of NK, CD8 or CD4 and serious and/or opportunistic infections. Figure 5 represents the cumulative risk of serious and/or opportunistic infections and of relapse or death according to the EOT CD4 T-cell count in the entire studied population (landmark competing risk analysis at 9 months; sHR indicates the sub-hazard ratio). Discussion We report results obtained from a large series of patients receiving frontline FCR in the routine practice of two large regions of southwestern France, with a median follow-up of over 5 years. Our population was rather similar to that of the CLL8 study and other cohorts, but also included older patients and patients with more advanced disease [1,4,21]. We first confirmed the general clinical importance of achieving a low MRD level at EOT, which extends the relevance of assessing MRD well beyond that of clinical trials. We observed a plateau in the PFS curves of IGHV-M patients who achieved a low MRD level endpoint, and also the universal relapse pattern of IGHV-UM patients despite eradicating MRD in peripheral blood. In an attempt to better understand this unique feature, we investigated whether normal lymphocyte counts could redefine the prognosis in distinct subgroups of patients. We found that the post-therapy CD4 count was associated with a different prognosis depending on the IGHV status, and that this also extended to patients with detectable MRD at EOT. The CD4 count was however not associated with infections, even though this parameter is generally routinely used in clinical practice to determine the start/hold timing of prophylactic measures (with trimethoprim-sulfamethoxazole and/or valaciclovir). Since no plateau was observed in the PFS curves of low CD4 IGHV-UM patients, it is very unlikely that this parameter alone could explain the relapse pattern observed in these patients. But in the detectable MRD group, a high CD4 count post-FCR was able to identify a subgroup of patients with a median PFS of only 24 months (a widely accepted definition of FCR-refractory disease [2]). Hence, the CD4 count could help identify patients who may benefit from a consolidation after FCR, especially if the drug modulates T-cell numbers and effects (such as lenalidomide [24][25][26][27][28][29] or ibrutinib [30][31][32][33]). Our previous series was the first to illustrate an effect on CD4 T-cell count following FCR treatment in CLL [34]. A thorough analysis of the phenotype of these T-cells revealed that most were CD4+ CD25+ CD127- FoxP3+ (and as such likely to belong to the T regulatory subset, our unpublished data), which have previously been reported to mediate a CLL-supportive effect in vitro and in vivo [16,[35][36][37]. Another single-centre retrospective study found that an absolute lymphocyte count < 1000/μl three months post-FCR was associated with OS and event-free survival, without MRD data and without analysing the lymphocyte subsets (thus they could not determine the clonal nature of these lymphocytes [38]). In addition to reflecting the pharmacodynamic activity of FCR, we consider lympho-depletion as a more complex, dynamic period of lymphocyte recovery with inter-clonal competition. It would be surprising if a 3-drug regimen dose effect were restricted to the CD4 subset (and not to CD8 or NK lymphocytes). Since the prognostic benefits of CD4 T-cells in our study were only observed in patients with detectable residual CLL cells, this argues for a bystander effect rather than just a dose effect. It would be interesting to further investigate CD4 effects in CLL, and to observe whether patients with low EOT CD4 already presented with low CD4 prior to FCR treatment; this could help clinicians identify patients with a high probability of reaching low EOT CD4 after CIT, and thus help select patients who would benefit the most from CIT, which would be a useful distinction to make as FCR is currently being compromised by other first-line therapeutic strategies [39]. Since our research focussed on identifying patients who would benefit from maintenance therapy after completing FCR, we did not perform this type of analysis; neither did we perform sequential lymphocyte subset counts during FCR therapy, as has been previously reported in the case of sequential MRD measurements taken during FCR therapy [40].
Furthermore, by highlighting the clinical relevance of CLL cell interactions with their microenvironment in relation to PFS, our research may pave the way for the investigation of associations between other amenable factors (such as CD40 or IL4) and PFS [41,42]; this kind of research could help clinicians to optimise the tools and timing (before, during or after FCR completion) to exploit the complex interactions between CLL and normal immune cells. Our cohort confirmed the high rate of infection previously observed during the first two years following FCR (Fig. 5) [17]. It is therefore perhaps not surprising that a low EOT CD4 count was not associated with an increased risk of infection, which means that a CD4 cell count is not useful to manage anti-viral or microbial prophylaxes in clinical practice (the 200/mm3 threshold for discontinuing prophylactic measures was first suggested by HIV-treating physicians, but has never been validated in onco-haematology patients [22,23]). Monitoring of NK cells may be more informative to predict possible infectious complications in these patients (we indeed found a trend between low EOT NK cells and infectious events). Some authors have recently suggested a protective role of NK cells in CLL, not in terms of progression of disease, but in terms of OS, corroborating our observation [43]. However, these authors did not study the influence of NK cells on infections, nor the impact of NK cells after frontline CIT. Secondary cancer rates in our cohort were found to be comparable to those reported in the literature [4,5], but only in terms of Richter transformation: it is noteworthy that our rate of myelodysplastic syndromes/AML was unusually low (2/162) when compared to the MDACC FCR300 cohort (14/300), but our follow-up duration was much shorter. This cannot be explained by the dose intensity of FCR, since the French oral FC regimen is slightly over-dosed compared to intravenous FC. In the latter series, 59/300 patients developed solid tumours (28 non-melanoma skin cancers), as compared to 15/162 patients in our cohort. No correlations with EOT lymphocyte counts could be drawn from our analyses. Conclusion Our data suggest that in real-life clinical practice, CD4 cell counts should be assessed after completing FCR, not to stop prophylaxes, but as an opportunity to discuss the patient's recruitment into a clinical trial assessing maintenance, or to mitigate our multiple concerns about prognostication, response durations and/or infectious risks. This parameter is easily available in most centres, but does not replace MRD as the best post-therapy evaluation tool (it is not the "MRD of the poor"). We think there is a window of opportunity to develop post-FCR T-cell-targeted (not only B-cell-targeted with anti-CD20 antibodies) strategies aiming at eradicating the B/T-cell interactions driving subsequent clinical relapses.
3,607.8
2019-08-14T00:00:00.000
[ "Medicine", "Biology" ]
Parallelized Dilate Algorithm for Remote Sensing Image As an important algorithm, the dilate algorithm can give us a more connective view of a remote sensing image which has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data quantities have become very large. This can lead to a decrease in algorithm running speed, or a result may not be obtainable within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. Introduction Land use/cover information has been identified as one of the crucial data components for many aspects of global change studies and environmental applications. The development of remote sensing technology has increasingly facilitated the acquisition of such information [1]. As an important algorithm, the dilate algorithm can give us a more connective view of a remote sensing image which has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and their data quantities have become very large. This can lead to a decrease in algorithm running speed, or a result may not be obtainable within limited memory or time. A parallel program can split a big computing task into sub-computing tasks and make full use of the advantages of multi-core and multi-computer systems to improve computing speed [2]. To accelerate the processing speed of remote sensing image algorithms, many methods have been proposed: parallel k-means or EM cluster methods for remote sensing images [3,4]; Wang utilized cloud computing for rapid processing of remote sensing images [5]; and parallel classification methods have been proposed to achieve faster remote sensing image training speeds [6,7]. Parallel programs can be further divided into multiprocess parallel and multithread parallel. Message passing interface (MPI) is a library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementers, and users [8], with which we can realize multiple processes. Multiprocessing (MP) is the use of two or more central processing units (CPUs) within a single computer system [9]. In this research, we introduce MPICH2 and OpenMP technology and propose a parallelized dilate algorithm for remote sensing images (PDARSI); through PDARSI, a big dilate task can be split into many subtasks, each of which can run on a corresponding computer or core. Experiments show that our method runs obviously faster than the traditional single-process algorithm. Dilate Algorithms. Let A and B be two sets in Z². The complement of A is defined as A^c = {w | w ∉ A}. Based on this formula, the difference of A and B, represented by A − B, can be defined as A − B = {w | w ∈ A, w ∉ B} = A ∩ B^c. The reflection of B, represented as B̂, can be defined as B̂ = {w | w = −b, for b ∈ B}. The binary dilation of A by B, denoted by A ⊕ B, is defined as the set operation A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}. Here B̂ is the reflection of the structuring element B. In other words, it is the set of pixel locations z where the reflected structuring element overlaps with foreground pixels in A when translated to z. Note that some people use a definition of dilation in which the structuring element is not reflected [10].
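A minimal sketch of these definitions in Python with scipy.ndimage (our choice of library for illustration; the paper's own implementation is C++): binary dilation grows the foreground by the structuring element, and gray-scale dilation takes a local maximum.

import numpy as np
from scipy import ndimage

image = np.zeros((7, 7), dtype=bool)
image[3, 3] = True                       # single foreground pixel

structure = np.ones((3, 3), dtype=bool)  # structuring element B

# Binary dilation: A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}
dilated = ndimage.binary_dilation(image, structure=structure)
print(dilated.sum())  # 9: the single pixel grew into a 3x3 block

# Gray-scale dilation with a flat (zero-height) structuring element:
# the local maximum of f over the translated domain of b.
gray = np.random.default_rng(0).integers(0, 255, size=(7, 7))
gray_dilated = ndimage.grey_dilation(gray, size=(3, 3))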
In the general form of gray-scale dilation, the structuring element has a height. The gray-scale dilation of f(x, y) by b(x, y) is defined as (f ⊕ b)(s, t) = max{f(s − x, t − y) + b(x, y) | (x, y) ∈ D_b}, where D_b is the domain of the structuring element b and f(x, y) is assumed to be −∞ outside the domain of the image. To create a structuring element with nonzero height values, use the syntax strel(nhood, height), where height gives the height values and nhood corresponds to the structuring element domain [11]. MPI and OpenMP. Message passing interface (MPI) is a standardized and portable message-passing system designed by a group of researchers from academia and industry to function on a wide variety of parallel computers. OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer. We can use MPI in the cluster computers to realize multiprocess parallelization, and each process adopts OpenMP to realize multithread parallelization. Parallelized Dilate Algorithm for Remote Sensing Image The generic process of the parallelized dilate algorithm for remote sensing images (PDARSI) is shown in Figure 1. Firstly, in the main function, the MPI interface is started and all the processes are initialized (via MPI::Init()). The algorithm is then divided into five steps: (1) Rank0 reads the entire remote sensing data set and divides the data into multiple subdata blocks in accordance with the number of processes; (2) Rank0 sends the data, distributing it to each process; (3) each rank processes its own data to obtain the corresponding result; (4) Rank0 collects all the data and integrates it into one result; (5) Rank0 writes the result to disk. Finally, in the main function, MPI::Finalize() is called to stop all the processes; all processes are destroyed and the algorithm ends. Stage 1. Reading stage: in order to solve the problems of image data reading, task assignment, and data transmission, PDARSI adopts a Plines class to store the remote sensing data; the Plines class has the following functions: (1) storing the remote sensing image in units of rows; (2) storing part of the data; (3) redundant storage of data boundary information; (4) supporting serialization; and (5) supporting the reconstruction of the data. The process of Rank0's reading and splitting is described in Algorithm 1. Through this procedure, Rank0 splits all the data into subdata corresponding to each rank, and PDARSI proceeds to Step 2. In Step 2, Rank0 sends each piece of subdata to the corresponding process (Rank0 to Rankn). In Step 3, the Plines object owned by each rank is further split into a Plines array, and each object of the array runs the dilate algorithm and obtains its result in a thread. In Step 4, Rank0 collects all the results from every rank process and integrates them into one result. In Step 5, Rank0 saves the result to disk. The sending and receiving of Plines objects is illustrated in Figure 2. Through PDARSI, a remote sensing image can be dilated in parallel across multiple processes and threads.
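The five steps above can be sketched compactly with mpi4py and NumPy (our stand-ins for the paper's C++ Plines class and MPICH2/OpenMP code); halo rows play the role of the UpperBuffer/BottomBuffer described later, and all sizes and names are illustrative. Run, for example, with mpiexec -n 4 python pdarsi_sketch.py.

import numpy as np
from mpi4py import MPI
from scipy import ndimage

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
HALO = 1  # rows of overlap required by a 3x3 structuring element

if rank == 0:
    # Step 1: Rank0 reads the data (random stand-in) and splits it by rows,
    # keeping redundant boundary rows on each side of every block.
    image = np.random.default_rng(0).integers(0, 2, size=(5230, 4736)).astype(bool)
    rows = np.array_split(np.arange(image.shape[0]), size)
    chunks = [image[max(r[0] - HALO, 0): r[-1] + 1 + HALO] for r in rows]
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)     # Step 2: distribute subdata
local = ndimage.binary_dilation(chunk)   # Step 3: per-rank dilation
pieces = comm.gather(local, root=0)      # Step 4: collect the results
if rank == 0:
    # Rank0 trims the halo rows and stitches the blocks back together;
    # Step 5 would write `result` to disk.
    trimmed = [p[(HALO if i > 0 else 0): p.shape[0] - (HALO if i < size - 1 else 0)]
               for i, p in enumerate(pieces)]
    result = np.vstack(trimmed)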
Experiments This research chooses a Landsat-5 TM image and extracts one band as the test image; the image size is 5230 × 4736 and 23.6 MB (see Figure 3). To test the efficiency of the PDARSI algorithm, we adopt an HPC cluster which contains an Intel i5 2300 computer as the head node; it controls two compute nodes which have AMD FX8350 8-core CPUs. Each computer of the cluster uses the Fedora 16 64-bit Linux operating system, with MPICH2 as the MPI management interface and OpenMP as the multithread library. In order to test the effectiveness of the parallel algorithm, we vary the number of processes from 1 to 10 and the number of threads from 1 to 10. (Figure 2: Plines objects and their status at different stages.) Figure 4 shows algorithm speed in relation to the number of processes with 1-5 threads. As can be seen from Figure 4(a), the elapsed time of the algorithm declined as the number of processes increased, but the trend became slower when the process number exceeded 4. Figures 4(b) and 4(c) show that the algorithm's speed increases more slowly when the number of threads per process is greater than 1; this means that threads can bring more of an increase than processes. Figures 4(d) and 4(e) show that once the number of threads exceeds 4, increasing the number of processes may actually reduce the operating speed, because the speed-up from additional processes has less influence than the cost of data transmission between processes. Figure 5 shows algorithm speed in relation to the number of processes with 6-10 threads. In Figure 5, there are few differences among the five plots, owing to the hardware limitations of the HPC cluster (the compute nodes contain a total of 16 cores) and the time-consuming communication among processes. The algorithm's speed is not linear in the number of processes and threads, and once the speed limit is reached, increasing the number of processes or threads will not increase the running speed and may even decrease it. The algorithm would achieve better results if a more powerful HPC cluster were utilized. The relation between threads and processes can be seen in Figure 6. The multithread method does not require data transmission, so it can bring a more obvious increase in algorithm speed; since the algorithm must transmit data between the two computers in the multiprocess stage, and when the process number is even half of the data must be transmitted from the Rank0 node to the other node, the speed is higher when the process number is odd. The results of the PDARSI algorithm and the traditional single-process algorithm can be seen in Figure 7. The PDARSI algorithm splits a remote sensing image into subdata, and each subdata block has an UpperBuffer and a BottomBuffer to ensure that a pixel at a subdata border can access neighbor pixels in the adjacent subdata; this mechanism guarantees that the dilation algorithm obtains the right result even when the whole computation task is partitioned across processes. When the dilation calculation in each process is completed, Rank0 collects the results from the processes and integrates all the results into a result image. As can be seen from Figure 7, PDARSI does not change the results of the original algorithm and the result images are exactly the same; this proves that our proposed algorithm can accelerate running speed without altering the results of the original algorithm. Conclusions This research uses MPICH2 and OpenMP to design a parallelized dilate algorithm; it can take full advantage of HPC cluster computing resources and achieve the purpose of rapid processing of remote sensing images. Through PDARSI, a big dilate task can be split into many subtasks, each of which can run on a corresponding computer or core. Experiments show that our method runs obviously faster than the traditional single-process algorithm.
2,324
2014-05-11T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Biocontrol of Aflatoxins Using Non-Aflatoxigenic Aspergillus flavus: A Literature Review Aflatoxins (AFs) are mycotoxins, predominantly produced by Aspergillus flavus, A. parasiticus, A. nomius, and A. pseudotamarii. AFs are carcinogenic compounds causing liver cancer in humans and animals. Physical and biological factors significantly affect AF production during the pre-and post-harvest time. Several methodologies have been developed to control AF contamination, yet; they are usually expensive and unfriendly to the environment. Consequently, interest in using biocontrol agents has increased, as they are convenient, advanced, and friendly to the environment. Using non-aflatoxigenic strains of A. flavus (AF−) as biocontrol agents is the most promising method to control AFs’ contamination in cereal crops. AF− strains cannot produce AFs due to the absence of polyketide synthase genes or genetic mutation. AF− strains competitively exclude the AF+ strains in the field, giving an extra advantage to the stored grains. Several microbiological, molecular, and field-based approaches have been used to select a suitable biocontrol agent. The effectiveness of biocontrol agents in controlling AF contamination could reach up to 99.3%. Optimal inoculum rate and a perfect time of application are critical factors influencing the efficacy of biocontrol agents. Introduction Aflatoxins (AFs) are secondary metabolites produced by Aspergillus flavus, A. parasiticus, A. nomius, and A. pseudotamarii [1,2]. AFs are organic compounds with lower molecular weight, typically produced by fungal mycelia and accumulated in conidia and sclerotia. AFs contaminate a wide range of crops, including corn, oilseeds, rice, and nuts [3][4][5][6]. AFs contamination in cereals may occur during pre-or post-harvest stages [7,8]. Hot temperature and high humidity stimulate fungal growth in fields and storage. Contamination by AFs is responsible for substantial commercial losses throughout the world [9][10][11]. AFs are among the most toxic compounds that adversely affect humans and animals' health [12][13][14][15][16][17]. AFs are mutagenic, teratogenic, genotoxic, and carcinogenic compounds, causing severe diseases in humans, poultry, fishes, and cattle under long-term exposure [18,19]. AFs can penetrate the feed and food chain, posing a threat to even newborns [20,21]. While several AFs were currently identified, AFB 1 , AFB 2 , AFG 1 , and AFG 2 are the four most significant AFs. The IARC (International Agency for Research on Cancer) classifies AFB 1 as the most toxic, mutagenic, and Group 1 human carcinogen [22][23][24], causing chronic and acute diseases in children and the elderly. AFB 1 carcinogenicity has long been linked to the liver; however, recent epidemiological studies revealed that it was also carcinogenic to the pancreas, kidney, bone, bladder, and central nervous system [25][26][27][28]. According to El-Serag [29], Bruix et al. [30], and Yoshida et al. [31], AFB 1 exposure could increase the hepatocellular carcinoma (HCC) risk for up to 30 times, particularly in those who infected with hepatitis B virus. The inhalation of dust contaminated by AFB 1 may cause tumors in humans' respiratory tracts [32]. Furthermore, AFB 1 disturbs the cytochrome P450 enzymes involved in steroid production [33]. Sometimes, AFB 1 could get into the blood-testis barrier, resulting in spermatogenesis disorder [34]. Based on toxicological syndromes, AFs contamination can be divided into acute aflatoxicosis and chronic aflatoxicosis. 
Acute aflatoxicosis is distinguished by a high-dose exposure of AFB 1 for a short time, causing hepatotoxicity [35,36]. Acute aflatoxicosis is characterized by vomiting, fever, liver injury, pulmonary or cerebral edema, anemia, necrosis, diarrhea, kidney failure, and fatigue [37,38]. Several incidences of acute aflatoxicosis are reported in India, Malaysia, and Kenya [39][40][41][42]. In contrast, chronic aflatoxicosis is a low-dose exposure for a long duration, causing cancer and other severe diseases in humans. Some research reported that chronic exposure to AFB 1 caused the deaths of 250,000 people in Africa and China [43][44][45]. Human exposure to AFs can be direct or indirect. The inhalation of AFB 1 -contaminated dust is an excellent example of direct exposure to AFs, resulting in the tumor in the human respiratory tract. On the other hand, the intake of AF-contaminated milk (AFM 1 ) and other dairy products carried over contaminated feed is indirect exposure to AFs. The consumption of eggs and animal meat contaminated by AFs is another example of indirect exposure to AFs. Kaplan et al. [46] estimated human average intake of AFs at around 10-200 ng/kg per day. Humans' health risks related to contaminated food consumption are becoming a serious problem all over the world. Countries where strict rules for AFs are not implemented, resulting in high health risks related to AFs exposure. Therefore, every country should implement strict rules for AF levels in their food products [47]. Global Distribution of Aspergillus flavus and Aflatoxins Aspergillus section Flavi contains the most prevalent aflatoxigenic fungi, including A. flavus and A. parasiticus. The less prevalent aflatoxigenic species in this section are A.nomius, A. pseudotamarii, A. bombysis, and A. parvisclerotigenus [48]. Aspergillus species are remarkably different in AF production; some are aflatoxigenic while others are nonaflatoxigenic [49,50]. Alternatively, A. flavus is the most common species in crops producing AFs, and cyclopiazonic acid and non-aflatoxigenic strains are rare [50][51][52]. A. flavus can be found in decaying vegetation, crops, and seeds as a saprophyte or parasite. Soil is the main source of primary inoculum responsible for infection in crops vulnerable to AF contamination. The infection of A. flavus on the aerial parts of crops is different, depending on their rhizosphere habits [53]. The hot and humid weather and the absence of suitable storage facilities are favorable conditions for the growth of A. flavus and AF production [54]. For instance, tropical and subtropical regions with climate change encourage AF − producing A. flavus to produce AFs in large quantities [55]. Corn and peanuts are the only crops consumed by humans worldwide, and unfortunately, highly vulnerable to AF contamination [56]. Around 40% of the loss of productivity due to infections caused by AFs has been increased in many developing countries [6]. Factors Affecting Aflatoxin Production Several physical (abiotic) and biological (biotic) factors influence fungal growth and AF production [57]. In crops, AF contamination occurs during harvest, as the weather is wet due to unseasonal rains. Moreover, insect damage, drought, and heavy rainfall favor fungal growth. The degree of mycological infiltration and AF contamination varies with time and region [58]. Nature depends on fungal strains [59,60] and other microbes' interference, moisture content, temperature, and resulting soil conditions. 
Fungal spores can enter through either damaged pod walls, insects, or pollination. Additionally, nutrient deficiency in plants may increase AF levels. Recent studies showed high levels of AF production at 25 • C to 28 • C [61,62]. Likewise, high humidity (83-88%) and optimal CO 2 and O 2 have been found to influence fungal growth and AFs production [63]. Alternatively, lower concentrations of CO 2 and O 2 may inhibit fungal growth and AFs production. The presence or absence of certain compounds and elements can also control the AFs production, such as glucose, sucrose, and fructose that provide a suitable environment for fungal growth, while cadmium and iron slower down the fungal growth and AF production [64,65]. Similarly, climate change could significantly influence the AF + life cycle, changing host-pathogen relationships and host resistance. It could directly impact the ability of AF + species to produce AFs and their overall resilience [66,67]. Climate change not only affects host-pathogen relationships in specific areas but also promotes the emergence of new diseases and modifications in fungal biodiversity caused by fluctuations in their ecological niches [68][69][70]. Certain AF + species are declining in one environment and reappearing in other regions because of climate change. The ability of AF + species to adapt to such environmental changes can be perceived by continuously evolving combinations of AFs in food and feed. Aflatoxin Management Researchers are actively involved in preventing AFs production and spread, as the dangers of AFs to livestock and human health cannot be underestimated. Several preand post-harvest prevention measures, such as good agricultural practices, including deep plowing, manuring, irrigation, and maintaining water supply to the crops, are considered the best options for reducing AF contamination in crops [71][72][73]. Recent studies have suggested that irrigation in the late season could increase soil moisture contents and reduce the soil temperature, resulting in a decreased AF levels in crops [74]. These physical strategies, however, are not always feasible [19]. Apart from physical methods, various chemical strategies have been used for several years to lower AF levels in foods and feeds [75,76]. While almost all emphasis has been focused on controlling AFs contamination in crops, the most effective method is to use ammonia [77,78]. Additionally, fungicides such as amphotericin B, voriconazole, posaconazole, caspofungin, and voriconazole are effective against A. flavus invasion and AF contamination during pre-harvest stages. However, there is the risk of potential environmental pollution and health issues from fungicides [79,80]. Therefore, it is necessary to eliminate the risk by replacing chemical fungicides used with eco-friendly methods. Crop varieties resistant to AFs are produced by breeding and genetic engineering techniques, yet no suitable resistant variety has been commercially developed [81]. Similarly, AF decontamination in food is convenient, but it is expensive and challenging [82]. Therefore, an interest in using biological control strategies has been developed, as they are helpful, friendly to the environment, and natural opponents of AF − producing strains of A. flavus [83][84][85]. These strategies exploit some microorganisms' antagonistic effects, such as bacteria, yeasts [86], and AF − strains [87], on the development and production of AFs produced by AF + strains. 
It has been reported that lactic acid bacteria such as Bacillus subtilis effectively inhibit the growth of various molds [88]. The inhibition is usually caused by co-existing microorganisms competing for space and for the available nutrients needed by AF+ strains for AF biosynthesis. Similarly, Flavobacterium aurantiacum has been found to remove AFs from different foodstuffs. Likewise, Pseudomonas helps develop a healthy root system through its rapid colonization of the rhizosphere, stimulating plant defense mechanisms and resulting in plant resistance to pathogens [89][90][91][92][93]. Faraj et al. [94] demonstrated that B. subtilis has inhibitory effects on both A. flavus growth and AF production. Mixing B. subtilis with groundnut diminished the deleterious effects of A. flavus on groundnuts. Mishagi et al. [95] have reported a 60-100% reduction in A. flavus incidence in synthetic media when treated with P. cepacia bacteria. Kong et al. [96] examined the potential antifungal activity of B. megaterium against the growth of A. flavus in groundnut kernels in vitro and in vivo. However, it has been found that biological control of AFs using AF− strains is more effective than control based on bacterial strains [97,98]. Therefore, biological control strategies based on AF− strains could be viable options for reducing pre-harvest AF contamination in crops. The efficacy of AF− strains is based on their stability and aggressiveness against AF+ strains [99,100]. Thus, this study focuses on the recent developments in the use of AF− strains in reducing AF contamination in crops. Advantages of Biocontrol of Aflatoxins Using Non-Aflatoxigenic Aspergillus flavus Biocontrol methods are a more effective and innovative way to control AF contamination in crops. The application of biocontrol agents (AF−) brings about some adaptations in fungal populations, which persist throughout the food chain. These adaptations protect the grains from AF contamination during storage and transport, even when environmental conditions are favorable for fungal growth. In biocontrol methods, the application of an AF− strain in the field remarkably reduces AF contamination in crops [101,102]. Similarly, since Aspergillus spore communities can be dispersed by air, the adapted communities improve safety within the treated field and positively affect neighboring fields [103]. The positive impacts of AF− strains can benefit crops and other plants for several years. This means a single dose of an AF− strain could benefit the treated crop and the second-season crop, which missed the treatment [104]. Selection of Non-Aflatoxigenic Strains Biocontrol is a promising method to reduce AF contamination in crops. Recent studies reported reducing AF contamination by applying an AF− strain to the soil around growing plants. When the crop is vulnerable to fungal attack during drought conditions, these AF− strains competitively exclude the AF+ strains in the soil and reduce AF concentrations. Dorner [105] reported the reduction in AF contamination in a cornfield using AF− strains. In other research, Dorner [105] assessed the efficacy of AF− strains for AF control in peanuts. AF− strains can be found in air, soil, and plants. Usually, both AF+ and AF− strains occur together in different ecosystems. The ability of AF− strains to compete with AF+ strains for nutrients provides an opportunity to use them as biocontrol agents. Different techniques have been developed to discover suitable AF− strains for biocontrol use.
Some of them are based on phylogenetic features, while others rely on phenotypic characteristics such as sclerotial size. Based on sclerotial morphology and production, A. flavus can be divided into two distinct morphotypes, the S-strain and the L-strain. The S-strains produce a large number of small-sized sclerotia (<400 µm in diameter), whereas the L-strains produce a small number of large-sized sclerotia (>400 µm in diameter). Moreover, S-strains produce a higher concentration of AF compared to L-strains. Molecular techniques may successfully describe the phylogenetic relationships between A. flavus strains. Several polymerase chain reaction (PCR)-based pyrosequencing methods are currently being developed to detect genes responsible for AF production and to discover suitable biocontrol agents [106]. Abbas et al. [107] isolated some AF− strains, including K49, F3W4, NRRL 58,974, NRRL 58,976, and NRRL 58,988. The classification was based on their growth rate, pigmentation, fluorescence, and AF production. Efficacy of Non-Aflatoxigenic Strains as Biocontrol Agents AF− strains have been suggested as biocontrol agents in the hope that they would inhibit the growth of AF+ strains and thereby reduce AF contamination. Previous studies conducted by Ehrlich [108] revealed that co-inoculation of AF− strains with AF+ strains substantially reduced the production of AF in corn under in vitro conditions. The potential for biocontrol of AFs using AF− strains has been demonstrated under field conditions in cotton [109], peanuts [85], and corn [97,110]. These scientists applied the AF− strain to the soil as infested grain cultures of barley, rice, or wheat, whereas the authors of [111] inoculated corn ears directly by injection. In the cotton studies performed by Cotty [98], the AF− strains failed to suppress AF contamination when they were sprayed on the cottonseed immediately before the bolls formed, but were effective when sprayed on the soil later. Similarly, a study conducted by Abbas [63] demonstrated that soil inoculation with a mixture of the AF− strain (K49) and the AF+ strain (F3W4) significantly reduced AF contamination (74-95%) in corn. The degree of AF reduction found in his analysis was similar to the reductions obtained in other studies using soil inoculation of corn and other crops. In Georgia, different studies have reported reductions in AF levels (80-87%) in cornfields after using AF− strains against AF-producing strains of A. flavus. Likewise, in cotton fields, the application of AF− strains has decreased the amount of AFB1 by 75% to 99.8% [112]. Furthermore, Dorner et al. [99] reported reductions of AF concentrations between 74.3% and 99.8% in the peanut crop when they applied the AF− strains together with non-aflatoxigenic strains of A. parasiticus. Peanuts produce fruiting bodies below the soil and hence increase the chances of biological control of AFs. In another study, Dorner et al. [99] reported a 10-100 times increase in propagule density of the Aspergillus community when they co-inoculated the mixture of non-aflatoxigenic strains of A. flavus and A. parasiticus with AF+ strains. Additionally, their research has shown that A. flavus strains were more dominant than A. parasiticus in the displacement of AF+ strains in the soil. Dorner et al. [87] noted that the application of AF− strains to the soil would control soil-borne infection and AF contamination in crops like peanuts; however, the same treatment in some crops, like corn, will be difficult.
On the contrary, an AF− strain (CT3) was tested for its efficacy in AF reduction, but it did not show effectiveness comparable to K49 in mitigating AF contamination in corn. On the other hand, Cotty and Mellon [113] noted that co-inoculation of AF36 (an AF− strain) with an AF+ strain ultimately displaced the AF+ strain and markedly reduced AF contamination in cottonseed. Moreover, Chang et al. [114] identified an AF− strain (TX9-8) by screening subgroups of AF− strains. Co-inoculation of the TX9-8 strain with an AF+ strain at a 1:1 ratio reduced AF production. No reduction in AF concentration was observed when TX9-8 was injected one day after the AF+ strain. This competitive exclusion was possibly due to the vigorous growth of TX9-8 against the AF+ strain [115]. Recently, Atehnkeng et al. [116] found La 3279 to be the most efficient strain, decreasing AF contamination by >99.3%. Similarly, Ehrlich et al. [117] found the same results regarding secalonic acid reduction when they co-inoculated an AF− strain with Penicillium oxalicum. They assumed that the two co-inoculated species might compete for the energy (ATP) required for the biosynthesis of secondary metabolites. There is an assumption that AF− strains competitively exclude the AF+ strains when co-inoculated, resulting in the reduction of AF contamination in crops [118]. Although AF− strains have been employed to minimize AF infections in crops, the mechanism of AF− strains' intervention on AF+ strains remains unknown [119][120][121][122]. Inoculation Method For many years, AF− strains have been applied to cornfield soil. Although the use of K49 in the soil can reduce AF levels by 65% [123], the direct use of the AF− strain on corn ears is immensely more efficient. A clay-based water-dispersible granule system was also developed to spray the AF− strain directly onto corn silk. This management decreased AF production by up to 97%. Inoculum Rate Inoculum concentration is an essential factor for the effective control of AF contamination. Recent studies have revealed a direct relationship between the inoculum rate and the efficacy of the AF− strain in decreasing AF concentrations [124]. Studies demonstrated a significant reduction in AF concentration in peanuts when the AF− inoculum increased from 2 to 50 g/L. In the USA, research was conducted in which an AF− strain (NRRL 21,368) was applied to a cornfield in different quantities (0, 2, 10, and 50 g) [125]. The AF levels for whole kernels were 337.6, 73.7, 34.8 and 33.3 µg/kg for the above quantities. Other research showed AF concentrations of 718.3, 184.4, 35.9 and 0.4 µg/kg in corn kernels, which corresponds to 74.3%, 95.0% and 99.9% AF reduction relative to the untreated control. In the following years, the field retreated with the AF− strain showed a significant reduction in AF levels. According to Pitt and Hocking [119], the same results were achieved when tested in Australia.
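As a quick arithmetic check of the reductions just quoted, each treated kernel concentration is compared against the untreated 718.3 µg/kg baseline; a minimal Python verification:

baseline = 718.3                  # µg/kg, untreated control
treated = [184.4, 35.9, 0.4]      # µg/kg at increasing inoculum rates
for level in treated:
    reduction = 100 * (1 - level / baseline)
    print(f"{level:>6.1f} µg/kg -> {reduction:.1f}% reduction")
# prints 74.3%, 95.0% and 99.9%, matching the figures in the text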
Optimal Time for Non-Aflatoxigenic Strains Application Research showed that, along with the concentration of AF− strains, the time of application significantly affects their efficacy. The application of the AF− strain at earlier stages significantly reduced AF levels in cotton. Similarly, Kabak and Dobson [126] suggested the co-inoculation of AF+ and AF− strains (TX9-8) to reduce AF contamination; however, if the AF− strain is applied one day later than the AF+ strain, little or no reduction in AF concentrations will be achieved. Abiotic Factors The time of application of AF− strains depends on the prevailing environmental conditions. Abiotic factors such as water activity and temperature directly affect AF− strains' efficacy by controlling spore germination, hyphal growth, and spore production [127]. Water Activity and Growth of Non-Aflatoxigenic Strains Water plays a vital role in all biological processes. The main factor is the ambient water availability rather than the overall water content inside the hosts harbouring microbes. The water content accessible to microbes in substrates is known as water activity. In substrates, water activity and total water content are interrelated. This helps to quantify the actual water content and microbial growth on the substrate. The respiration of the AF− strains used for reducing AF requires water. The water content in food plays an essential role in the growth of fungi and other biological activities. Once water availability is low, food spontaneously attains biological safety, since low water availability reduces the decomposition process through respiration. Seasonal variation and high humidity increase the water available in food, providing a breeding place for fungi. Moisture is the primary source of crop losses [128], as high water content in grains promotes fungal invasion. A. flavus grows at high water content (175 g/kg) and low temperatures (10-15 °C). As soon as the water content plunges from 175 g/kg to 94 g/kg, A. flavus cannot persist at 30-40 °C, demonstrating the significance of water content to the growth of fungi. Recent literature has shown that most molds cannot propagate at a relative humidity of less than 70% [129]. Maintaining a lower water activity in preserved seeds, particularly in tropical regions, can be challenging; hence, seeds containing high moisture content should be dried before storage to uphold seed sustainability against fungal activity. Temperature and Growth of Non-Aflatoxigenic Strains Microbial growth is exceptionally conspicuous in tropical and subtropical regions, where high temperatures and humidity prevail in most areas. High temperature and humidity favor fungal growth. Species like A. candidus have higher thermal tolerance, growing even at hot temperatures. However, some Aspergillus species show vigorous growth at a lower temperature (10-20 °C) [130]. Since Aspergillus does not reproduce at a higher temperature, the grains could be well preserved at 40 °C [131]. Reed et al. [132] reported temperature as the primary factor in deciding field sustainability for mold. Under laboratory conditions, A. flavus multiplies when the temperature is around 10 °C [133]. However, no A. flavus growth occurred in the field environment if the temperature was less than 20 °C. Alternatively, Pitt and Hocking [119] reported faster growth for A. parasiticus at 15 °C under laboratory conditions, while, in field environments, their growth started at 17-20 °C. Thus, the application of the AF− strain should be delayed until the field temperature reaches 20 °C [134]. Biotic Factors Low temperature and high water content in storage provide favorable conditions for insects, mites, and other microorganisms to grow. Insects' respiration produces hot spots in seeds, causing grain charring that affects seed quality and germination. In grains, insects' activities increase the surrounding bulk's temperature and water content, providing favorable mold growth conditions. Studies have shown that seeds damaged by insects are highly susceptible to fungal contamination [135].
Some fungi absorb insects and boost their populace, while others repel pests by secreting harmful toxins. Magan [136] reported that other microorganisms and environmental conditions significantly influenced the growth of AF + strains, AF production, and competitiveness. Insects and mites are carriers as they carry fungal spores in their bodies. Studies have shown that mite infections supplement the A. flavus growth, as they carry fungal spores to fresh grains. Magan [136] suggested that mites are secondary vectors, carrying fungal spores into infected grains. In infected grains, mites take the fungal spores and carry them into their bodies or digestive tracts. When mites enter the fresh grain, they inoculate the fungal spores in it. The study discovered that mites seek out preferred fungi and digest a more significant percentage of their spores. Thus, these mites' heavy infestations can be linked to damage from the mites and fungi associated with them. Some mites are growth inhibitors for fungi too. Some Aspergillus species are abundantly found on Acarus siro, indicating the symbiotic relationship of the fungi with their preferred mites. Physiological Manipulation of Non-Aflatoxigenic Strains Most of the fungal niches are not persistent as they modify their features according to the external environment [137]. In unfavorable environments, xerophilic fungi produce small polyols, which allow their enzymatic systems to work efficiently. Similarly, A. flavus accumulates glycerol and erythritol in their conidia during unfavorable conditions [138]. Therefore, fungal propagules used for biocontrol must be resistant to environmental stresses [120]. According to Magan [136], agricultural management could improve the resistive performance of biocontrol agents. Furthermore, sugar and polyol mixture could boost spore germinability in severe environmental conditions. The conidia, which have a high amount of glycerol and trehalose, grow quicker than other conidia. Likewise, Abadias et al. [139] indicated high resistivity of Candida sake to water stress as their spores contain a high concentration of glycerol and erythritol. Gasch [137] suggested a link between environmental changes and adaptation length and proposed a conidial adjustment time. Thus, a strong adaptation with a short modification time makes biocontrol agents more competitive under critical conditions. Conclusions The above review showed that AF contaminates many cereal crops throughout the world. AFs producing molds, including A. flavus, contaminates these commodities at different stages within the food web. The strategies and tools developed for AF analysis have their advantages and disadvantages. Despite the immense information controlling AF, contamination continues with its harmful effects on human health, agro-industry, trade, and financial growth. This issue becomes more severe as AF's contaminated cereal crops are essential for most of the world population. The AF's contaminated crops are used in foods and feed products, resulting in many severe diseases in humans and animals. Thus, in every country, consumers and animals are persistently at risk. Aspergillus studies on their environmental conditions and the central perspective of farming systems can develop new AF control equipment. The pre-harvest methods (fungal population ecology, reproduction, and gene manipulation) are suitable for AF control; still, attention must be given to the environmental effects affecting these practices. 
For instance, the AF − strains sometimes worsen AF's issues by getting AF − producing genes during vegetative fusion or sexual reproduction. This can be prevented by DNA-DNA hybridization to fully understand the genetic structure of AF − strains and detect gene deletions in their chromosomes. Furthermore, ear rot-resistant corn breeding might be a safe option for AF control, but it could take many breeding seasons due to AF − resistant genes' polygenic characteristics. The study of gene role and expression in different environmental conditions is necessary to understand the host-induced ecological reactions. Some resistant varieties are not fully adapted to grow in the field and are susceptible to AF contamination. Biocontrol techniques are more effective, environment-friendly, and economical for reducing AF in crops. The use of biocontrol agents brings some changes to the fungal communities that remain throughout the food chain. These changes prevent AF contamination during storage and transport; even environmental conditions are favorable for fungal growth. The application of biocontrol agents in the field remarkably reduces AF levels in crops from harvest until use. As Aspergillus spores can be dispersed by air, these fungal communities improve safety within treated fields and positively impact the neighboring fields, which means that a single dose of AF − strain could benefit the treated crop and the second season crop that missed the treatment.
6,249.6
2021-05-01T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
Residual Stresses on Various PVD Hard Coatings on Tube and Plate Substrates In this study, the average residual stresses were determined in hard PVD nACRo (nc-AlCrN/a-Si3N4), nACo (nc-AlTiN/a-Si3N4), AlCrN, TiAlN, and TiCN commercial coatings through the deflection of the plate substrates and the simultaneous measurement of length variation in thin-walled tubular substrates. The length measuring unit was used for the measurement of any length change in the tubular substrate. A change in tube length was reduced to the deflection of the middle cross-section of the elastic element, for which deformation was measured using four strain gauges. The cross-sectional microstructure and thickness of the coatings were investigated by means of scanning electron microscopy (SEM), and a determination was made of the chemical composition of the coatings and substrate by means of energy dispersive X-ray spectroscopy (EDS). The values of average compressive residual stresses, as determined by both methods, were very high (varying between 2.05 and 6.63 GPa), irrespective of coating thickness, but were dependent upon the shape of the substrate and on its position in relation to the axis of the rotating cathode. The thicknesses of the coatings that were deposited on the plates with two parallel fixings (such as the nACRo coatings on the front surface at 6.8 μm and on the rear surface at 2.9 μm) and on the tubular substrates (10.0 μm) were significantly different. The higher average compressive residual stresses in the coating correlate with the higher average relative wear resistance that was obtained during field wear testing. Introduction Physical Vapor Deposition (PVD) coatings are used inter alia for blanking, punching, and cutting applications [1] and can be deposited both on plain surfaces and more complex ones [2,3]. It is well known that the residual stresses that arise in coatings during the deposition process [1,4,5] have an important effect on the service life of the coating by influencing its mechanical and tribological properties and adhesion [6]. In general, residual macro- and microstresses in materials can be distinguished. Macrostresses (average values), as Type I, vary within a bulk coating, and microstresses, as Type II, operate at the grain-size level [7,8]. It should be noted that the (micro) residual stresses that have been measured by X-ray diffraction are the arithmetic average stress results in the small irradiated area alone. To be able to obtain residual stresses for the whole part, a number of measurements should be carried out at numerous points [7]. From an engineering point of view, average macroscopic residual stresses in thin PVD coatings are also of interest. The origin of residual stresses in PVD coatings and suitable methods for their determination are discussed in review papers [5,9]. In this study, the average residual stresses were measured by means of the labor-intensive indirect deformation method, using a simple measurement technique, and were compared to the microstresses that were obtained by means of X-ray diffraction. Cutting tools have a complicated shape: plane surfaces with various inclinations, as well as cylindrical surfaces and sharp edges. When the cutting tool is placed into the deposition chamber, its surfaces have different positions and angles in relation to the cathode. This can affect the values for residual stresses in PVD coatings.
The aim of the study was to determine macroscopic residual stresses in coatings that were evaporated onto a vertically-fixed cylindrical surface in relation to the axis of the rotating cathode, using the deformation method, through the measurement of the longitudinal length variation of the thin-walled tube. In this way it is possible to estimate the values for residual stresses in coatings that have been deposited on cylindrical surfaces [10], as well as to validate the results that were obtained with the conventional curvature method, using quadrate steel plates with a unilateral coating as the substrate, with different thicknesses and chemical components. The length measuring unit used for measuring the longitudinal length change in the substrate was improved based on the specifications of a previous measuring unit [11], where the tube length variation was reduced to the deflection of the middle cross-section of the elastic element for which deformation was measured by means of four strain gauges [12]. As an example of application, residual stresses were measured in hard PVD nACRo (nc-AlCrN/a-Si3N4) and nACo (nc-AlTiN/a-Si3N4) novel nanocomposite coatings and the results were compared with the benchmark AlCrN, TiAlN, and TiCN coatings, which are the most widely used for cutting tools [13]. The microstructure and thickness of the coatings being studied were investigated and the chemical composition of the coatings and substrate was measured. Residual stresses have an effect on the tribological properties of the coating [14,15]. The industrial field tests provided estimated figures by studying fine-blanking punches (using a convex surface) with three PVD coatings-TiCN, nACRo, and nACo-and the average coating relative wear values were compared with residual stress values. Materials and Methods The tubular and plate substrates intended for coating deposition were prepared from steel. The mean values of the specimen's dimensions, the physical parameters, and the chemical composition of the substrate material are presented in Table 1. The surfaces of the substrates were polished to Ra = 0.024-0.029 μm. The PVD coatings being studied were produced at the Material Engineering Research Centre, Tallinn University of Technology. The PVD unit, a Platit π-80 with Lateral Rotating ARC-Cathode technology, and with two rotating cathodes embedded in the door of the vacuum chamber, was used for deposition. The commercial coatings being used were deposited on specimens whose roughness was similar to that of cutting tools. A schematic for the placement of specimens in the vacuum chamber is presented in Figure 1. The tubes had two different wall thicknesses (outer diameter d1 × inner diameter d2 × length), shown as Tube 1 (3.0 mm × 2.70 mm × 167.37 mm) and Tube 2 (3.0 mm × 2.50 mm × 167.57 mm), and these were affixed vertically within the rotary table (which had a rotational speed of 12 rpm) inside the vacuum chamber. To prevent the coating being deposited on the cross-section of the tube ends, they were closed off at the nozzle, so that the entire outer surface of the tube could be coated [11]. At the same time, the tube was vertically fixed by the lower nozzle to the rotary table in the deposition chamber, and it was simultaneously rotated around its axis ( Figure 1). As the cutting edge of the cutting tools in the deposition chamber are placed at different angles, four placement angles were used with respect to the cathode during coating deposition. 
The plates are coated on one side only and are placed so that they are gripped by a claw in the affixing unit, which is made of carbon steel (adapted from [10]). Note that a considerable amount of the evaporated target material was also deposited on the holder. Four plates were mounted on the holder so that one batch of the plates could be prepared by deposition on the front surface (directed towards the edge of the rotary table, as plate 0°), and the other batch could be prepared by deposition on the rear surface (directed towards the center of the rotary table, as plate 180°). In addition, the holder was affixed at 90° and 45° in relation to the cathode, giving plate 90° and plate 45°, respectively. Preparation of the specimens included cleaning in a pulsed Ar glow discharge at 425 °C, with a bias of −850 V at a pressure of 4 × 10⁻³ mbar (0.4 Pa), to reduce the volume of contaminants and oxides on the surface of the specimens to be coated. After that, a thin layer of pure metallic Ti (Ti etching) was deposited in an Ar environment to create an effective adhesion layer on the surface of the substrate. The adhesion (buffer) layer, with a thickness of about 300 nm, was deposited directly onto the substrate with the same parameters as those of the top layer; the top coating was then deposited onto this buffer layer. After measuring the length variation in the coated tube and the deflection of the coated plate, three pieces with a length of 10 mm were cut from one tube (two from the ends and one from the middle), and two pieces with dimensions of 10 mm × 10 mm were cut from the plates with differing deposition positions, for SEM analysis. The microstructure of the coatings was investigated by means of Field Emission Gun Scanning Electron Microscopy (FEGSEM) in a Zeiss Ultra-55 HR (Zeiss, Oberkochen, Germany). The coating thicknesses were measured from the SEM images and using the ball-cratering equipment, Calotester KaloMax (BAQ GmbH, Braunschweig, Germany). The mean values of the residual stresses in the coatings were calculated from the length change in the tubular substrate. The calculation formula was presented in our earlier papers [11,16]. As the coating was relatively thin, it was assumed that the residual stresses were distributed uniformly throughout the coating thickness:

σ = E1(d1² − d2²)Δl / (4(1 − µ1)d1h2l), (1)

where E1 and µ1 are the modulus of elasticity and Poisson's ratio of the substrate, respectively, while d1 is the outer diameter of the tube, d2 is the inner diameter of the tube, h2 is the thickness of the coating, l is the length of the tube, and Δl is the measured length variation in the tube. From the deflection of the plate, the residual stresses were determined using an equation based on Stoney's formula [17], but appropriately modified for a plate substrate by introducing the factor 1/(1 − µ1) to account for the biaxial state of stress [12]:

σ = E1t1²κ / (6(1 − µ1)t2), (2)

where κ = (4/b²)w is the curvature of the free surface of the plate substrate, as determined via the measured deflection w in the middle of the coated convex plate; t1 and t2 are the thicknesses of the substrate and coating, respectively; and b is the coated width. The unit presented in Figure 2a was adapted from [11] and improved to enable the measurement of the length of the thin-walled tubular substrate before and after coating deposition. The measured length variation Δl can be used as an experimental parameter for calculating the average values of residual stresses in coatings. The calibration of the length measuring unit is a separate task.
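To make the two formulas concrete, the following sketch evaluates Equations (1) and (2) as reconstructed above (the exact published forms are in [11,16,17]). The substrate properties (E1, µ1), the plate geometry, and the measured values Δl and w below are assumed, illustrative numbers, not figures from the paper; only the tube dimensions and the tube coating thickness are taken from the text.

```python
# Illustrative evaluation of Eq. (1) (tube) and Eq. (2) (plate), as
# reconstructed above. E1, mu1, the plate geometry and the measured
# deformations are assumptions for the sake of the example.

E1 = 210e9    # substrate modulus of elasticity, Pa (typical steel, assumed)
mu1 = 0.3     # substrate Poisson's ratio (assumed)

# --- Tube, Eq. (1): sigma = E1*(d1^2 - d2^2)*dl / (4*(1 - mu1)*d1*h2*l) ---
d1, d2 = 3.0e-3, 2.7e-3   # outer/inner diameter, m (Tube 1, from the text)
l = 167.37e-3             # tube length, m (from the text)
h2 = 10.0e-6              # coating thickness on the tube, m (from the text)
dl = -150e-6              # measured length change, m (hypothetical shortening)

sigma_tube = E1 * (d1**2 - d2**2) * dl / (4 * (1 - mu1) * d1 * h2 * l)
print(f"tube: {sigma_tube / 1e9:+.2f} GPa")   # about -3.8 GPa (compressive)

# --- Plate, Eq. (2): sigma = E1*t1^2*kappa / (6*(1 - mu1)*t2) ---
t1, t2 = 1.0e-3, 5.0e-6   # substrate/coating thickness, m (assumed)
b = 20e-3                 # coated width, m (assumed)
w = 30e-6                 # deflection at the plate centre, m (hypothetical)

kappa = (4.0 / b**2) * w                      # curvature of the free surface
sigma_plate = E1 * t1**2 * kappa / (6 * (1 - mu1) * t2)
print(f"plate: {sigma_plate / 1e9:.2f} GPa")  # about 3.0 GPa in magnitude
```

With these inputs, both estimates land in the GPa range reported in Table 3, which is the order of magnitude expected for arc-evaporated PVD hard coatings.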
The length change in the tubular substrate was transformed into the deformation of the elastic element, which was measured by four strain gauges (Figure 2a). The schematic of the calibration work is presented in Figure 2b. The displacement of the middle cross-section of the elastic element, as a function of the units of the strain indicator in the case of unloading, is presented in Figure 3. During the calibration process, the readings were taken at the point in time at which the rotation of the screw turned off the indicator light. The constant was determined so that the relation was approximated as well as possible by minimizing the squared error; this was found using the regression analysis function of MS Excel 2016. As a result, a constant of 3.14 × 10⁻⁵ mm per unit of the strain indicator was obtained. Before the modifications were carried out on the length measuring unit, a constant of 8.26 × 10⁻⁵ mm per unit applied [11]; the modified unit is thus 2.63 times more sensitive. The length of the tube was measured a total of ten times before and ten times after deposition, and the mean value was used to calculate the residual stresses in the coating. To guarantee the centering of the tube in the length measuring unit, the inner circular line of the cross-section of its ends remained in contact with the spherical surface of the support [11]. Results The chemical composition of the coatings was measured using energy dispersive X-ray spectroscopy (EDS) in a Bruker Esprit 1.82 system, and the deposition parameters are presented in Table 2. From the results, it is obvious that the thicknesses of the coatings deposited on the tubular substrates and the thicknesses of the coatings deposited on the plates are significantly different, as they are located at different angles in relation to the cathode. This changes the energy of the evaporated ions and hence directly influences the growth dynamics of the coatings. The coating deposited on the tube is thicker (Figures 4c and 5c) than the one on the plate, as some part of its surface is constantly bombarded head-on with target plasma atoms and ions. It can be seen that coatings on the plates directed towards the edge of the rotary table (Figures 4a and 5a) are thicker when compared to plates directed towards the center of the rotary table (Figures 4b and 5b). This is due to the minimal distance between the target and the substrate, as well as the higher kinetic energy during deposition. The coated surface of the plate is directly orientated towards the cathode and is bombarded with atoms, ions, and metal-rich microparticles. In the case of the plates directed towards the center of the rotary table, the distance between the target and the substrate is at its maximum and the kinetic energy during deposition is smaller; consequently, the thickness of the coating is less (Figures 4b and 5b) when compared to the plates directed towards the edge of the rotary table. Using direct measurements of the layer thicknesses from the SEM images and the ball-cratering equipment, Calotester KaloMax, the average values of the residual stresses were calculated by means of Equations (1) and (2), as presented in Table 3. To gain a better overview of the measured residual stresses for the different types of coatings, a graph is presented in Figure 6.
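The calibration described above is a linear fit through the origin of dial displacement versus strain-indicator reading. A minimal sketch of that regression follows; the calibration points are made up for illustration, since the paper reports only the fitted constant of 3.14 × 10⁻⁵ mm per unit.

```python
import numpy as np

# Hypothetical calibration points: strain-indicator readings (units) and the
# corresponding displacement of the middle cross-section (mm).
units = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0, 2500.0])
displacement_mm = np.array([0.0, 0.0157, 0.0314, 0.0471, 0.0628, 0.0785])

# Least-squares line through the origin: c = sum(x*y) / sum(x*x),
# i.e. the constant that minimises the squared error, as in the text.
c = np.sum(units * displacement_mm) / np.sum(units**2)
print(f"calibration constant: {c:.2e} mm per unit")      # ~3.14e-05

# Sensitivity gain over the unmodified unit (8.26e-5 mm per unit [11]):
print(f"sensitivity ratio: {8.26e-5 / c:.2f}")           # ~2.63
```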
The shape of the substrate's surface and the orientation of the coated surface in relation to the cathode have an effect on the average residual stresses in the coatings. It can be noted that the content of Si does not affect the residual stresses. Residual stresses in Ti-containing coatings are higher than those in Cr-containing coatings, and this trend is evident for coatings deposited on the plate's front surface. In our experiments, the orientation of the coated surface towards the cathode did not have an effect on the residual stresses in the TiCN coatings. The average residual stresses obtained fall within the range presented in the available literature, where the X-ray technique is mainly used for the measurement of residual stresses. Such good agreement between residual stresses obtained by physically different methods can be explained by the microstructure of the coatings investigated. It should also be noted that the proportions of the chemical composition elements in the compared coatings may differ slightly. Discussion The authors of [14] stated that compressive stresses in a coating usually lead to the formation of delamination and longitudinal cracks. Cracks in coatings are critical for wear resistance. If the coating fails in the working state (where residual compressive stresses are superimposed on contact stresses), the coating's capabilities can be greatly reduced, causing severe abrasion that in turn wears the friction system (friction pair). Cracks were observed in the two nanocomposite nACRo and nACo coatings on the cylindrical substrate at the deposition parameters used in this study. Stress-induced cracks were found to be perpendicular to the direction of growth (Figure 5c), and we assume that cracking leads to a relaxation of the stress accumulated in the growing coating during deposition and to a decline in the overall residual stresses. The reason for cracking can be related to the plane state of compressive stress, which induces stresses in the radial direction of the coating. The radial stress at the interface of the substrate and coating can be calculated according to Equation (3), presented in [23]:

σr = σh2/r1, (3)

where r1 is the radius of the cylindrical substrate. For example, as the nACo hard grade coating has h2 = 9.5 μm, σ = −3.92 GPa, and r1 = 1.5 mm, σr = 26.2 MPa, which is considerable (the ultimate tensile strength of the coating is unknown). When the same coating is placed on a cylindrical surface with r1 = 10 mm, σr = 3.72 MPa; when it is on a spherical surface with r1 = 2.5 mm, σr = 14.8 MPa [24]. It is evident that, as the radius of the substrate increases, the radial stress decreases in inverse proportion to the radius. There were no visible cracks in the coatings deposited on the plate substrates, nor in the AlCrN coating with a thickness of 7.1 μm and the TiCN coating with a thickness of 8.8 μm deposited on the tube substrate. In industrial field wear tests with fine-blanking punches (with a convex surface), the three PVD coatings TiCN, nACRo, and nACo were estimated to have average relative coating wear (wi) values of 84.3%, 66.7%, and 69.9%, respectively [24]. TiCN coating wear was 15-17% higher than that of the nACRo and nACo coatings, but it is difficult to find any differences between those two [24].
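Because Equation (3) above is a reconstruction (the published form is in [23]), the quick check below only verifies that the assumed expression σr = σ·h2/r1 reproduces the quoted values to within rounding.

```python
# Radial stress at the coating/substrate interface for the nACo coating,
# using the assumed form of Eq. (3): sigma_r = sigma * h2 / r1.
sigma = 3.92e9   # magnitude of the compressive residual stress, Pa (from text)
h2 = 9.5e-6      # coating thickness, m (from text)

for r1_mm in (1.5, 10.0, 2.5):
    sigma_r = sigma * h2 / (r1_mm * 1e-3)
    print(f"r1 = {r1_mm:5.1f} mm -> sigma_r = {sigma_r / 1e6:5.1f} MPa")
# prints ~24.8, 3.7 and 14.9 MPa, close to the 26.2, 3.72 and 14.8 MPa quoted
# in the text, and shows the inverse dependence of sigma_r on the radius r1.
```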
From the average residual stresses obtained in the coatings on the different substrates (Table 3), it can be concluded that a higher average residual compressive stress in the coating correlates with a higher average relative wear resistance during industrial field wear testing. The measured average relative coating wear in the industrial tests depends on the elastic strain to failure parameter (the hardness to elastic modulus ratio, or H/E), meaning that the same correlation holds for the residual stresses measured in the same coating systems. The combination of the highest H/E ratio and the lowest compressive residual stress leads to the lowest relative wear (or the highest wear resistance), as proven by the superior behavior of the nACo coating in the industrial field wear testing process. However, the difference between the wear behavior of the nACo and nACRo coatings is quite insignificant, as their mechanical properties are quite similar. The same tendencies can be seen in the indentation response of the coating systems being studied, along with the impact wear and indentation surface fatigue behavior [25][26][27]. The nACo and nACRo coatings reveal lower coating failed area ratios (FR, %) when compared to TiCN, at a level of 24-42% for 10³ and 5 × 10⁶ impacts, respectively [27]. The FR for TiCN over the same range of impact numbers is 31-45%. This conclusion is valid for different forms of substrates [26]. The nACo behavior is still superior to that of the TiCN, independent of the variation of the hard metal or cermet substrate composition. Conclusions The average residual stresses in various PVD hard coatings on tube and plate substrates were calculated using the length variation of the tube and the deflection of the plate as experimental parameters. The major conclusions are summarized as follows: The length measuring unit was improved to make it possible to measure the length of the thin-walled tubular substrate before and after coating deposition. The calculated average values of the residual stresses were compressive and high, varying from 2.05 to 6.63 GPa, and were in the same range as the corresponding data in the available literature obtained by the X-ray technique. The coating deposited on the tube substrate was thicker, and the average residual stresses in it were lower than those of the coatings on the plate substrates. Residual stresses were at their highest in the coatings deposited on the front surface of the plate; the values of the residual stresses on the plates inclined (45°) and perpendicular (90°) with respect to the cathode were within the same range. The microstructure of the coating on the tube and on the plate was investigated by means of SEM, and images of the coatings' cross-sections and thicknesses are presented. The observed cracks were perpendicular to the direction of growth in the nanocomposite nACRo and nACo coatings deposited on the tube substrate. The combination of the highest hardness to elastic modulus ratio (H/E, or elastic strain to failure) and the lowest average residual stress is shown to have a positive effect on the wear resistance and indentation (cyclic impact loading) behavior of the coatings.
The nACo and nACRo coating systems showed a tendency to be superior to the other coatings studied, with a wear resistance during the industrial trials 15-17% higher than that of TiCN and a coating failure ratio (FR) lower by around 10% for cyclic loading of up to 5 × 10⁶ cycles. The presented analysis is limited to the data obtained from the aforementioned experiments.
Socio-economical factors that influence the perception of quality of life in patients with osteoporosis The appearance of osteoporosis in the elderly, and the increasing frequency with which it is diagnosed at older and older ages, make this health problem very important in societies in which a high number of persons reach old age. These societies, usually belonging to economically advanced jurisdictions, are the first to be interested in how health expenditure can be balanced against the quality-of-life benefits gained in these population groups. The evaluation of quality of life has become a very important process, which still raises methodological problems for researchers. The aim of this study was to analyze to what extent the factors involved in how patients define their quality of life are modified by the existence of osteoporosis, both as a defined and as a perceived disease, insofar as it is considered a serious or less serious affection by each patient. 210 female patients participated in the study. The statistical analysis was done using SPSS 22.0 (IBM Corp., U.S.A.). p < 0.05 was used as the limit for statistical significance. Descriptive and analytical analyses were made using the Pearson correlation index in cases of normal distributions; comparisons between groups were made using Student's t-test and the chi-square test in the cases which required its use. The current study highlights a direct relationship between quality of life, as perceived by the patients, and the quality of the health status, which is more important than the relationship between quality of life and the other objectives measured by the WHOQOL scale. This study also shows that for the Romanian patient diagnosed with osteoporosis, within the age limits of this study, health status represents the main driver of the perceived quality of life. Introduction The appearance of osteoporosis in the elderly, and the increasing frequency with which it is diagnosed at older and older ages, make this health problem very important in societies in which a high number of persons reach old age [1,2]. These societies, usually belonging to economically advanced jurisdictions, are the first to be interested in how health expenditure can be balanced against the quality-of-life benefits gained in these population groups. The evaluation of the quality of life has become a very important process [3], which still raises methodological problems for researchers. First, the question of the validity of quality-of-life tools arises, since the same tools are used in large population groups, including youngsters, adults, and the elderly. What should be taken into account is that items which carry important value in the quality of life of the young do not have the same value for the elderly, and vice versa [4,5]. The current scales used for the evaluation of the quality of life are mainly represented by the SF-36 scale (which was developed privately, so that access to the scoring indexes and the rights to publish results require a financial contribution [6,7]) and the WHOQOL scale, developed by the World Health Organization and offered for free for research purposes.
This scale, WHOQOL, was developed in the early 1990s, and attempts were made to validate it across different cultures, with the declared objective of comparing information obtained in different population areas [8,9]. WHOQOL-100 was developed as a multidimensional generic tool intended to measure quality of life (QOL), both in patients with health problems and in healthy populations. In 1995, the scale was reduced from the initial 100-item version to only 26 items, and this new version was called WHOQOL-BREF [10,11]. The scale is available in over 50 languages and has been validated for multicultural use. The WHOQOL domains are the following: physical (7 items), psychological (6 items), social relationships (3 items) and environment (8 items). Moreover, there are also two global questions which evaluate satisfaction with health status and with quality of life. Each item is rated on a 5-point Likert scale, with reference to the participant's last two weeks. Higher values indicate a better quality of life and lower values a worse quality of life; however, there are some exceptions, namely questions 3, 4, and 26, which must be recoded. As mentioned before, the evaluation of the quality of life has to use tools in which the evaluated elements are fully reflected in the resulting scores. The internal and external validity, the discriminant power, and the feasibility of administering such a questionnaire have been tested for the initial questionnaire as well as for each version translated into the existing languages. Taking into account the wider adoption of the SF-36 questionnaire and the relatively late appearance of the WHOQOL variant, the question was posed as to what extent the two questionnaires in fact reflect the same quality-of-life status. A number of existing studies compared the results of applying the two questionnaires in various groups of persons, with analyses performed according to age, sex, ethnic group, social status and income. These evaluations largely demonstrated that the two questionnaires, SF-36 and WHOQOL-100, overlap in indicating how patients, respectively respondents, appreciate their quality of life. When talking about PROs (Patient-Reported Outcomes), we have access to a specialty literature in which the number of publications increases exponentially each year. The explanation resides in the fact that in advanced societies the elderly, whose share of the population is increasingly high, are mainly interested in health status and quality of life and less interested in the level of earnings. PROs have become a very important subject, especially in pathologies that affect the elderly, osteoporosis being a very good example. There is an overlap between the terms Quality of Life (QOL), Health-Related Quality of Life (HRQOL), Health Status (HS) and, not least, Subjective Well-Being (SWB). Special attention should be given to separating these notions because they are not fully interchangeable, but, despite this warning, many authors use them as if they were. In a meta-analysis by Gill and Feinstein of 75 articles that used QOL measurement tools, only 15 took into account the definition of QOL, and an even lower number of articles justified the choice of the QOL measurement tool in that particular study [12].
It seems that none of these articles managed to clearly distinguish the differences between SWB, QOL and HRQOL, which created confusion both for the reader and, above all, for researchers who tried to build further studies on the already published results. Aim The aim of this study was to analyze to what extent the factors involved in how patients define their quality of life are modified by the existence of osteoporosis, both as a defined and as a perceived disease, insofar as it is considered a serious or less serious affection by each patient. Participants in the study and the collection of data 210 female patients participated in the study. They were identified in the Rheumatology Clinic of "Sfanta Maria" Hospital according to the following algorithm: patients known to suffer from osteoporosis, in whom the DXA examination confirmed the presence of osteoporosis, or patients with a high risk of osteoporosis (for example, suffering from early menopause, being under aggressive and prolonged corticosteroid treatment, being under prolonged methotrexate treatment, or presenting other comorbidities which generate osteoporosis) were included in the study. In the first stage, bone mineral density was measured in these patients and, on this occasion, we were able to identify both patients with a clear diagnosis of osteoporosis and patients in whom the bone mineral density was below the international threshold for defining this disease (a T-score of −2.5 standard deviations). Then, a questionnaire was developed based on the questions included in the Romanian variant of the WHOQOL questionnaire, adding demographic and other supplementary data which helped position the patient in a certain socio-economical category. Data regarding the treatments most often used in rheumatology, known to generate or, on the contrary, protect against osteoporosis, were also collected. The same Likert scale as in the original WHOQOL variant, with values from 1 to 5, was used. For the questions regarding the share of income spent on medication and on food, the answers ranged between the following percentages: for medication, 1) over 30%, 2) 20-30%, 3) 10-20%, 4) below 10%, 5) 0 or I do not buy the medication; for food, 1) over 40%, 2) 30-40%, 3) 20-30%, 4) below 20%, 5) 0 or I do not buy the food. Statistical analysis The statistical analysis was done using SPSS 22.0 (IBM Corp., U.S.A.). p < 0.05 was used as the limit for statistical significance. Descriptive and analytical analyses were made using the Pearson correlation index in cases of normal distributions; comparisons between groups were made using Student's t-test and the chi-square test in the cases which required its use. Results 210 female patients were included in the study, among whom 77% met the DXA criterion for the definition of osteoporosis. As far as the treatment of these patients is concerned, 73 were under treatment for osteoporosis (bone-forming or anti-resorptive). 79.5% were under calcium treatment (regardless of frequency and quantity), 22.8% were under corticosteroid treatment, 42.4% were under biological therapy, 80.9% were under nonsteroidal anti-inflammatory treatment ± aspirin, and 38.6% were under lipid-lowering/antiplatelet/antiarrhythmic therapy.
Among the patients who were under corticosteroid treatment, only 43.6% were also under anti-osteoporotic treatment. The age of the patients varied between 25 and 87 years, with a mean of 64.69 years and a standard deviation of 11.493. Patients who had been diagnosed with osteoporosis had had the disease for approximately 5.66 years (standard deviation of 3.67). As far as the Pearson correlations are concerned, it should be noticed that patients rate their quality of life higher as adherence to treatment increases (Pearson correlation index of 0.494, p < 0.001). The correlation between quality of life and adherence to the doctor's advice goes in the same direction (correlation index of 0.386, p < 0.001). What is interesting is that a very good correlation (correlation index 0.591, p < 0.001) appeared between the way patients appreciate their quality of life and the way they are satisfied with their health status. This suggests that the quality of health has an important share in the definition of the quality of life. As far as the capacity to achieve objectives is concerned, the patients who consider that they have a satisfying quality of life are the same ones who consider they can easily achieve their objectives (Pearson index of 0.285, p < 0.001). The correlation between quality of life and the available amount of money is positive but very weak (Pearson index of 0.163, p < 0.05). This means that patients connect quality of life with quality of health more than with access to substantial financial resources. As far as satisfaction with health status is concerned, a positive but low correlation was noticed with the time since the patient was diagnosed with osteoporosis (Pearson index of 0.168, p < 0.05), with adherence to treatment (Pearson index of 0.311, p < 0.001), and with compliance with the doctor's advice (Pearson index of 0.232, p < 0.001), but also a superior level of correlation with the way the patient appreciates the following aspects: how easily do you accomplish the things you set out to do? (Pearson index of 0.419, p < 0.001), how sure are you of your future? (Pearson index of 0.222, p < 0.001) and, especially, are you satisfied with your income? (Pearson index of 0.234, p < 0.001). These aspects lead us to conclude that health status, and not quality of life, is considered the main determinant factor in achieving objectives and, on the other hand, that health status, and not quality of life, represents the constant indicator of a long future and a quality life. We can notice the difference between quality of life and quality of health status in relation to the person's income: the patients with higher incomes are the ones who consider that their health status is better, rather than the ones reporting a better quality of life. It is also interesting to note that patients who consider themselves financially balanced adhere better to the doctor's indications, this correlation being statistically significant but very weak (Pearson index of 0.137, p < 0.05).
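The correlation and group-comparison analyses reported above were run in SPSS; a sketch of the same computations with scipy is shown below on synthetic Likert-scale answers, purely to illustrate the procedure (the vectors do not reproduce the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic 1-5 Likert answers for 210 patients (stand-ins for the real data).
quality_of_life = rng.integers(1, 6, size=210)
health_satisfaction = np.clip(
    quality_of_life + rng.integers(-1, 2, size=210), 1, 5)

# Pearson correlation with its p-value, as reported in the Results.
r, p = stats.pearsonr(quality_of_life, health_satisfaction)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")

# Student's t-test between two groups (hypothetical split mirroring the
# 77% of patients who met the DXA criterion).
with_osteoporosis = health_satisfaction[:162]
without_osteoporosis = health_satisfaction[162:]
t, p_t = stats.ttest_ind(with_osteoporosis, without_osteoporosis)
print(f"t = {t:.3f}, p = {p_t:.3g}")
```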
As far as opinions or ideas of suicide are concerned, stronger correlations were noticed, as expected, in patients with low financial resources. Discussions The current study highlights a direct relationship between quality of life, as perceived by the patients, and the quality of the health status, which is more important than the relationship between quality of life and the other objectives measured by the WHOQOL scale. This study also shows that for the Romanian patient diagnosed with osteoporosis, within the age limits of this study, health status represents the main driver of the perceived quality of life. It can be noticed that as patients adhere to the treatment and to the doctor's indications, they consider that both their health status and their quality of life are better. It remains to be determined to what extent these patients have a particular psychological profile which makes them optimistic about their quality of life and health status and makes them adhere, independently of other factors, to the doctor's treatment indications. The relationship between financial capacity and access to health services is highlighted by this study: even though most health services specific to osteoporosis are free, patients consider that this disease is treated in a way which requires a financial supplement. Moreover, we can observe that depression is not mainly connected to the way quality of life or quality of health is perceived, but mostly to external factors, namely access to a varied diet or an adequate financial level. This is surprising as long as an adequate financial level is not considered the main driver of access to health services. It still remains to be seen to what extent depression in these patients is connected to entities other than the main disease, osteoporosis. Patients who consider themselves deeply socially anchored, that is, who have many real friends, are also those who consider their health status and quality of life to be at a maximum. On the other side, patients who consider themselves stressed at home or at the workplace are also those who admit to a weaker quality of life and quality of health. To our knowledge, this study is the first to evaluate patients with osteoporosis in the context of socio-economical indicators. The results are interesting and they represent a challenge both for the providers of health services and for health policy makers as far as the optimization of resource allocation is concerned, so that patients are aware both of the services received and of the obtained results, which are reflected in the quality of life and the quality of health. A limitation of this study is that it did not include other pathologies; it only compared patients with osteoporosis defined by the DXA score with patients without osteoporosis but at high risk of developing it, in the absence of a control group with other pathologies and similar demographic characteristics. In the future, such a comparison should also be carried out, in order to see to what extent, for example, a chronic disease differs from an acute disease of higher or lower intensity as far as the impact on quality of life and quality of health is concerned. Moreover, it is important to consider to what extent access to generous financial resources is deemed important in a chronic condition as compared to an acute pathological entity.
It remains to be seen whether future studies will answer these challenges.
Analysis of Pre-Monsoon Convective Systems over a Tropical Coastal Region Using C-Band Polarimetric Radar, Satellite and Numerical Simulation : Analysis of pre-monsoon convective systems over the southern peninsular India has been performed using C-band radar and numerical simulation. Statistics on the radar polarimetric measurements show that the distributions of differential reflectivity (Zdr) and specific differential phase (Kdp) have a much higher spread over convective regions. The distribution of Kdp is almost uniform across the vertical over the stratiform regions. The mean profile of Zdr over stratiform regions shows a distinct local maximum near the melting level. A comprehensive analysis has been done on an isolated deep convective system on 13 May 2018. Plan position indicator (PPI) diagrams and satellite-measured cloud top temperature demonstrate that pre-monsoon deep convective systems can develop very rapidly within a very short span of time over the region. Heavy precipitation near the surface is reflected in the high values of Kdp (>5° km⁻¹). High values of Zdr (>3 dB) were measured at lower levels, indicating the oblate shape of bigger raindrops. A fuzzy logic-based hydrometeor identification algorithm has been applied with five variables (Zh, Zdr, ρhv, Kdp, and T) to understand the bulk microphysical properties at different heights within the storm. The presence of bigger graupel particles near the melting layer indicates strong updrafts within the convective core regions. The vertically oriented ice hydrometeors signify the existence of a strong electric field causing them to align vertically. Numerical simulation with the spectral bin microphysics (SBM) scheme could reproduce most of the features of the storm reasonably well. In particular, the simulated reflectivity, graupel mixing ratio and rainfall were in good agreement with the observed values. Introduction Thunderstorms are severe mesoscale weather phenomena that develop mainly due to intense convection over the heated landmass and are accompanied by heavy rainfall, lightning and, sometimes, hail. They have a spatial extent of a few kilometres to a few hundred kilometres and a life span of less than an hour to several hours [1][2][3]. Numerous thunderstorms occur daily across the globe [4], a major fraction of which is over the tropical belt. In the case of the Indian subcontinent, most thunderstorms occur during the pre-monsoon (March-April-May) season [5]. They are locally known as Kalbaisakhi in West Bengal, Bordoichila in Assam and Andhi in north-west India. A large amount of precipitation, particularly during the pre-monsoon season, occurs due to thunderstorm events [2,6]. Satellite data show the lightning climatology across the globe and reveal different hotspots, especially over the tropical region [7]. Five lightning hotspots have been identified during the pre-monsoon over the Indian region, and one of them is over the southern peninsular India [8]. Analysis of data from different observatories across India shows that the highest annual thunderstorm frequency is observed over Assam and sub-Himalayan West Bengal in the east, the Jammu region in the north, and over Kerala in the southern peninsula [9]. The thunderstorm frequency peaks in the month of May over southern India [10]. A study by Unnikrishnan et al.
[11] on lightning activity using TRMM-LIS data and a ground-based lightning detection network shows strong lightning activity over south India, particularly over the Kerala region. The effect of orography, along with an abundant supply of moisture from the sea and the presence of a land-sea breeze, are some of the important factors that favour the occurrence of thunderstorms over the southwest peninsular region [12,13]. Thunderstorms cause damage to crops, property and even human lives every year. It is estimated that between 1500 and 2800 deaths occurred annually due to thunderstorms/lightning during 2001-2017 across India [14]. Heavy rainfall and high winds from these weather systems cause interruptions in connectivity among different places and in infrastructure in general. Hence, there is an increasing demand for better nowcasting of such weather systems. Several attempts have been made to predict such systems using statistical approaches [15][16][17], satellite-based nowcasting [18,19], numerical simulations [20][21][22][23][24][25][26] and even artificial intelligence [27][28][29], but because of their small-scale nature and innate underlying nonlinearity, prediction of such systems is far from the desirable accuracy. More observations are required to understand the features and internal structures of these systems, which, in turn, will help their forecasting. Most of the thunderstorm-related studies in India were on pre-monsoon thunderstorms (Nor'westers) occurring over the east and north-east parts of India [1,3,21,30]. A few studies have been conducted on thunderstorm occurrences over the southern peninsular India, particularly over Kerala, which is one of the potential lightning hotspots in the southern peninsular India [22,31,32]. The proximity of the Arabian Sea, backed by the towering Western Ghats orography, influences the formation and development of clouds and thunderstorms in the region. The Doppler weather radar (DWR) is one of the most relevant and reliable instruments to monitor these weather events in three dimensions, from their genesis to the dissipating stage. Radars have been used in numerous studies to understand the structure and evolution of thunderstorms [22,23,30,32,33], but most of these studies mainly use radar reflectivity, and sometimes also radial velocity. However, studies using polarimetric radars are rare, particularly over the Indian region, mainly because of the scarcity of such data. Radars with polarimetric capabilities can provide much more information about precipitating systems, e.g., about the size and shape of the hydrometeors within the system. Polarimetry has two major advantages, viz. polarimetric measurements improve the retrieval of microphysical parameters, such as mean drop size and rainfall estimation [34][35][36][37], and polarimetric clutter-detection techniques help in the removal of non-meteorological echoes [38][39][40][41]. Since polarimetric measurements contain information on the shape and size of the hydrometeors, they can be used for better retrieval of hydrometeor types. Fuzzy logic-based hydrometeor identification (HID) is a very efficient and popular method for identifying hydrometeors within the radar scan volume [42][43][44][45][46][47]. Such studies give valuable information about the different ice hydrometeors present at different heights within a precipitating system.
Unlike raindrops, it is not easy to obtain information about ice particles using remote sensing techniques, mainly because of their irregular shapes and varying densities. Hydrometeor identification algorithms provide an indirect way to obtain information on ice particles. Such information can help us understand the charge separation and subsequent lightning in thunderstorms, as detailed in different laboratory studies [48][49][50]. These studies suggest that non-inductive charge separation due to rebounding collisions between graupel and ice crystals in the presence of supercooled water droplets is the main mechanism of thunderstorm charging. Hence, hydrometeor identification is particularly important during thunderstorm events. The spatial structure of the Ockhi cyclone, examined with an HID algorithm applied to polarimetric Doppler weather radar observations at the west coast of the southern peninsular India, provided information about the polarimetric signatures of rain-bearing clouds [51]. However, hydrometeor identification in thunderstorms using polarimetric observations has not yet been done over the Indian region, mainly because of the lack of radars with polarimetric capabilities. C-band polarimetric Doppler weather radar data and several other observational data are used in this study to understand the features of pre-monsoon thunderstorms over the southern peninsular India. A hydrometeor classification algorithm has been applied to obtain information on hydrometeor types. The paper is organized as follows: apart from the introduction (Section 1), the data from different instruments and the methodology are described in Section 2; the results and discussions are presented in Section 3; and Section 4 summarises the major findings/conclusions drawn from the study. Materials and Methods We have identified eleven convective events over the southern peninsular India during the pre-monsoon period (i.e., March to May) of 2018 using the C-band radar reflectivity field. For these events, convective-stratiform separation has been done to obtain statistics on radar polarimetric variables over convective and stratiform regions. A prominent convective event on 13 May 2018 has been analysed in detail for a better understanding of such systems. Besides the DWR data, we have used brightness temperature data from the INSAT satellite, rain drop size distribution (DSD) data from a disdrometer, cloud base height (CBH; m) data from a ceilometer, ERA5 reanalysis data and also radiosonde measurements. A disdrometer and a ceilometer were installed on the rooftop of the National Centre for Earth Science Studies (NCESS; 8.5228° N, 76.9097° E). The locations of the DWR and NCESS, along with the topography of the surrounding area, are shown in Figure 1. A brief description of the instruments and data is summarised in Table 1.
The optical disdrometer (model: OTT Parsivel, manufactured by OTT Hydromet, Germany) is a laser-based system that detects all types of precipitation at the surface [52,53]. It measures the rain DSD and the fall velocity distribution in 32 size and velocity classes, and also provides rain rates (R; mm h⁻¹) and radar reflectivity (dBZ). The size of measurable liquid precipitation particles ranges from 0.2 to 8 mm, and it varies from 0.2 to 25 mm for solid precipitation particles. It can measure particle fall velocities from 0.2 to 20 m s⁻¹. The temporal resolution of these data is 1 min. The disdrometer used in this study was installed on the rooftop of the NCESS. The ceilometer (model: CHM15k-Nimbus, manufactured by Lufft Mess- und Regeltechnik GmbH) is a ground-based remote sensing device that uses the standard lidar method to determine the cloud base height (CBH) from the altitude profile of backscattered signals. It can provide cloud thickness where the cloud layers do not totally attenuate the laser beam, but the signals get attenuated in rainy situations depending on the number concentration and size of raindrops; hence, the signal-to-noise ratio of the ceilometer decreases with increasing rain rate [54]. Technical details of the CHM15k can be obtained from the previous studies by Heese et al. [55] and Sumesh et al. [56]. The CHM15k was operated with a vertical resolution of 15 m and a temporal resolution of 15 s. Brightness temperature data (Infrared Brightness Temperature, IRBT) have been used as a proxy for the cloud top height. These data were obtained from INSAT-3DR, which is a multi-purpose geosynchronous spacecraft and provides data on mesoscale phenomena in the visible and infrared (IR) spectral bands (0.55-12.5 µm) over the Indian region, with a spatial resolution of 4 km × 4 km and a temporal resolution of 30 min. These data are freely available through the https://www.mosdac.gov.in/server (accessed on 6 September 2021). The synoptic circulations over the study region were analysed using the geopotential (m² s⁻²), u-wind (m s⁻¹) and v-wind (m s⁻¹) variables from ERA5 reanalysis hourly data with a spatial resolution of 0.25° × 0.25°. Radiosonde measurements from the India Meteorological Department (IMD), Thiruvananthapuram, have been utilised to analyse the convective available potential energy (CAPE; J kg⁻¹) and the vertical profiles of temperature (K), mixing ratio (g kg⁻¹), wind speed (m s⁻¹) and wind direction (deg). DWR Data and Quality Control The C-band polarimetric Doppler Weather Radar (DWR), installed at VSSC, Thiruvananthapuram (8.5374° N, 76.8657° E, 27 m above mean sea level), operates at a frequency of 5.625 GHz and has a peak transmitting power of 250 kW. The radar performs a volumetric scan of the surrounding atmosphere within a radius of 240 km at 11 elevation angles (0.5°, 1°, 2°, 3°, 4°, 7°, 9°, 12°, 15°, 18° and 21°), with azimuthal and radial resolutions of 1° and 150 m, respectively. One full volume scan takes around 15 min. The radar provides base products such as reflectivity at horizontal polarization (Zh), differential reflectivity (Zdr), differential propagation phase (Φdp), cross-correlation (ρhv), radial velocity (Vr) and spectral width (σ). Zdr is the difference between the reflectivities (in decibels) at horizontal and vertical polarisation; Φdp is the phase difference between the horizontally and vertically polarised pulses [57,58].
A comprehensive description of the radar is given in Mishra et al. [59]. Validation of the radar data against other instruments showed that the DWR reflectivity agrees quite well with GPM satellite measurements, and the radar-retrieved precipitation has a good correlation (0.89) with ground-based in situ measurements [60]. The signal received by a radar is often contaminated by signals reflected from non-meteorological objects such as hills and birds, by anomalous propagation, and also by attenuation of the electromagnetic wave by different types of hydrometeors [39,41,61,62]. Even though the radar signal processor takes into account many factors to give reasonably accurate base products from the return signal, the data still need certain quality control measures. The use of simple thresholds for different variables can be quite useful in removing unwanted echoes [41,63]. The following quality control measures were considered for this study: (i) pixels with Zh > 70 dBZ or ρhv < 0.7 were ignored; (ii) topography data from the Shuttle Radar Topography Mission (SRTM) were used to remove ground clutter from the hills present about 40 km east of the radar, as seen in Figure 1 [62,64]. Figure 2 shows the radar reflectivity during an event on 13 May 2018 before quality control (Figure 2a) and after quality control (Figure 2b). The clutter due to hills present in the reflectivity field before quality control was removed cleanly after applying the above-mentioned quality control measures. Other variables (Zdr, Φdp and ρhv) were processed similarly. Convective-Stratiform Separation Several studies have been done on the classification of precipitation into convective and stratiform parts using in situ measurements [65][66][67] and weather radars [68][69][70][71][72]. Convective and stratiform parts of cloud systems exhibit significantly different behaviours in terms of dynamics as well as microphysics [73]. Vertical air motions within these two portions of a cloud system differ significantly; convective parts are mainly driven by large, narrow updrafts (5-10 m s⁻¹ or more), while stratiform portions are governed by gentler mesoscale ascents (<3 m s⁻¹). Thus, the microphysical processes responsible for particle growth within the convective and stratiform parts are very different. Particles within convective core regions mainly grow by riming or accretion (collection of supercooled liquid water droplets onto the ice particle surface), which leads to large/dense hydrometeors, whereas in the stratiform region vapour deposition and aggregation are the dominating processes, leading to smaller and less dense ice hydrometeors (though large aggregates may exist). The convective-stratiform separation method by Steiner et al. [68], based on the texture of the radar reflectivity field, is adopted for the present study and is widely used by the radar community. This method basically checks two criteria, viz. an intensity and a peakedness criterion, on the horizontal reflectivity field at 3 km height to identify a grid point (pixel) as a convective centre. Any grid point with reflectivity of at least 40 dBZ (intensity criterion) or greater than a fluctuating threshold (peakedness criterion) depending on the area-averaged background reflectivity (Zbg, calculated within a radius of 11 km around the grid point) is considered a convective centre. For each pixel identified as a convective centre, all surrounding pixels within a certain radius of influence are also included as convective pixels.
This radius of influence depends on Zbg. Once all the convective pixels are identified, the remaining pixels with non-zero reflectivity values are assigned as stratiform pixels. 2.3. Φdp Data Processing and Kdp Calculation The differential propagation phase (Φdp) is the phase difference between the horizontally and vertically polarised pulses on traversing the atmosphere. The differential propagation phase is proportional to the water content along a rain path. Since most hydrometeors in the atmosphere are aligned with their major axis in the horizontal plane, and since it is a range-cumulative parameter, the value of Φdp increases along the propagation path. Now, the unambiguous range of Φdp is usually 180° in the alternate H/V transmission mode and 360° in the simultaneous H/V transmission mode. Hence, for a long propagation path in rain, Φdp values can easily exceed the unambiguous range, and the Φdp will then be wrapped/folded, which usually manifests as a sudden jump in the range profiles of Φdp. This issue with Φdp is known as phase wrapping/folding [74,75]. The unfolding of these phases has been done by adding an appropriate phase offset wherever such jumps are found [75]. So, even after the quality control steps mentioned in the previous section, Φdp needs this extra processing before it can be used for further analysis. In Figure 2c, such a situation of phase wrapping is observed about 15 km west of the radar during a convective event. The phases were then unfolded, and the unfolded Φdp is shown in Figure 2d. The specific differential phase (Kdp) is defined as the slope of the range profile of Φdp [58,76,77]:

Kdp = [Φdp(r2) − Φdp(r1)] / [2(r2 − r1)], (1)

where r1 and r2 are two range locations along the ray. Kdp is an important parameter for meteorological applications as it is closely related to rain intensity. More importantly, it is insensitive to signal attenuation during propagation, radar calibration, partial beam blockage and the presence of hail [78,79]. This makes the specific differential phase very useful for precipitation estimation at heavy rain intensities or during partial beam blockage. Though the estimation of Kdp seems quite simple, it requires further processing of the Φdp range profiles before calculating the slope. Φdp is known to be a very noisy parameter, particularly in regions with low rain rates, and the process of differentiation increases this noise even further. To tackle this, we have applied a low-pass Butterworth filter [80,81] of order 10 with a cut-off scale of 2 km to reduce the statistical fluctuations while keeping the overall features intact. Similar filters with similar cut-off scales were used in previous studies [74,82]. Figure 3a shows the Φdp plan position indicator (PPI) at 2° elevation angle during the convective event on 13 May after quality control and unfolding. The low-pass filter was then applied to this Φdp to obtain a smoothed Φdp (Figure 3b). Small-scale fluctuations in the Φdp field were removed in the filtered Φdp. With this smoothed Φdp field, Kdp has been estimated using Equation (1) and is shown in Figure 3c. Another Kdp estimate, using the slope of the linear regression line [83], has also been calculated; both methods gave similar Kdp values. The Kdp field shows high values close to 9° km⁻¹ at a distance of 5 to 15 km westward from the radar, indicating the presence of heavy precipitation. The blue line in this plot represents the 281° azimuth.
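A compact sketch of this processing chain (unfolded Φdp → low-pass Butterworth filter → Kdp as half the range derivative, Equation (1)) is given below on a synthetic ray; the rain-cell profile and noise level are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

dr = 0.15                          # gate spacing, km (150 m, as for this DWR)
r = np.arange(0.0, 60.0, dr)       # range along the ray, km

# Synthetic Phi_dp: accumulates at 10 deg/km inside a rain cell between
# 5 and 15 km (true Kdp = 5 deg/km there), plus measurement noise.
rate = np.where((r > 5.0) & (r < 15.0), 10.0, 0.2)
phi_dp = np.cumsum(rate) * dr
phi_dp += np.random.default_rng(1).normal(0.0, 2.0, r.size)

# Order-10 Butterworth low-pass with a ~2 km cut-off scale:
# normalised cut-off = (1 / 2 km) / Nyquist, with Nyquist = 1 / (2 * dr).
wn = (1.0 / 2.0) / (1.0 / (2.0 * dr))
sos = butter(10, wn, output="sos")
phi_filt = sosfiltfilt(sos, phi_dp)       # zero-phase forward-backward filter

kdp = 0.5 * np.gradient(phi_filt, r)      # Eq. (1): half the range derivative
print(f"peak Kdp ~ {kdp.max():.1f} deg/km")   # ~5 deg/km inside the cell
```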
Along this 281° azimuth, the original Φdp (dot-dashed blue curve), the filtered Φdp (solid blue curve) and the estimated Kdp (red curves) are shown in Figure 3d. The range of Kdp values obtained here agrees quite well with previous studies on convective cases [47,74]. Figure 3. PPI diagrams at 2° elevation of (a) unfolded Φdp, (b) filtered Φdp and (c) estimated Kdp. The blue line in (b,c) represents the 281° azimuth. (d) Variation of Φdp (blue) and Kdp (red) along the 281° azimuth at 18:54:12 IST, 13 May 2018. Hydrometeor Identification A hydrometeor identification (HID) algorithm by Dolan et al. [47] is used to identify the types of hydrometeors present at different heights within a convective system. This is a fuzzy logic-based algorithm in which fuzzification of the inputs is done by calculating the values of the membership functions corresponding to each fuzzy set (here, the different hydrometeor types). Then, a fuzzy score for each fuzzy set is calculated using Equation (2):

µi = Σj βj,i Wj / Σj Wj. (2)

This step is called aggregation. Then, defuzzification is done by choosing the fuzzy set corresponding to the maximum fuzzy score, i.e., the hydrometeor with the highest fuzzy logic score is the most probable hydrometeor type at that grid point within the radar scan volume. Here, µi is the fuzzy logic score for the ith hydrometeor type, and βj,i is the membership function for the ith hydrometeor type and the jth variable (Equation (3)):

βj,i(x) = 1 / (1 + ((x − m)/a)²)^b, (3)

where m, a and b are the midpoint, width and slope parameters of the membership function. Wj is the weight factor for the jth variable. The values of these membership function parameters and the weights are taken as in [47], where they were obtained from simulations at C-band. Five variables, viz. Zh, Zdr, Kdp, ρhv and temperature (T), are used to calculate the fuzzy logic score. Seven types of hydrometeors have been considered, viz. drizzle (DZ), rain (RN), ice crystals (CR), aggregates (AG), low-density graupel (LDG), high-density graupel (HDG), and vertically oriented ice (VI). Graupel are ice particles with a diameter of 2-5 mm, which grow mainly through the riming process, i.e., the collection of supercooled water droplets onto the surface of ice crystals and their subsequent freezing. The temperature for the HID scheme has been obtained from radiosonde measurements by IMD Thiruvananthapuram at 5:30 IST (Indian Standard Time). Radar data interpolated on a 0.5 km × 0.5 km × 0.5 km grid have been used for the HID analysis. Numerical Simulation In the present study, we have used the state-of-the-art mesoscale Weather Research and Forecasting (WRF) model, version 3.9, for the simulation of the thunderstorm observed over the southern peninsular India on 13 May 2018. We have considered three nested domains (D1, D2 and D3) of 9 km, 3 km and 1 km grid resolutions, as shown in Figure 4.
For better simulation of the thunderstorm event, the innermost domain (D3), with 1 km resolution, has been considered, as suggested by Rajeevan et al. [22]. The model used 50 vertical levels. It was initialised at 00 UTC on 13 May 2018 with the NCEP global final analysis data, which have a spatial resolution of 0.25°. Furthermore, the boundary conditions were taken from the same analysis data. The Kain-Fritsch cumulus parameterisation scheme [84] was used for D1 and D2, whereas explicit convection was allowed for D3 given its higher spatial resolution (1 km). For microphysics, the fast version of the spectral bin microphysics (Fast-SBM) scheme [85] was used. Fast-SBM includes four hydrometeor categories: water drops, ice/snow, graupel/hail and aerosol. In contrast to bulk microphysics schemes, which assume a size distribution for the hydrometeors, Fast-SBM uses 33 mass bins to explicitly describe the size distribution of each type of hydrometeor. Even though it is computationally much more expensive than bulk schemes, studies [86] show that Fast-SBM produces more realistic results than bulk schemes, particularly for deep convective systems. Other physical parameterisation schemes used include the RRTM long-wave scheme, the Dudhia shortwave scheme and the Noah LSM scheme for land surface processes. Statistics of Zh, Zdr and Kdp over Convective and Stratiform Regions An implementation of the convective-stratiform separation algorithm is depicted in Figure 5 for the convective event on 13 May 2018. Figure 5a shows the reflectivity field averaged between 2.5 and 3.5 km height. The convective-stratiform separation algorithm was then applied to this horizontal reflectivity field, and the results are shown in Figure 5b. The red and blue pixels are identified as convective and stratiform precipitation, respectively. Not only high-reflectivity regions but also other regions with a strong gradient have been identified as convective regions.
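A minimal sketch of the intensity and peakedness tests described in Section 2 is given below; the simplified peakedness threshold stands in for the fluctuating threshold of Steiner et al. [68], and the subsequent expansion within the Zbg-dependent radius of influence is omitted for brevity.

```python
import numpy as np

def convective_centres(Z, dx=1.0, bg_radius=11.0):
    """Flag convective-centre pixels in a 2-D reflectivity field Z (dBZ)
    at ~3 km height, with grid spacing dx in km."""
    ny, nx = Z.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    conv = np.zeros_like(Z, dtype=bool)
    for j in range(ny):
        for i in range(nx):
            if not np.isfinite(Z[j, i]):
                continue
            # area-averaged background reflectivity within bg_radius (km)
            mask = np.hypot((yy - j) * dx, (xx - i) * dx) <= bg_radius
            zbg = np.nanmean(Z[mask])
            # intensity criterion OR simplified peakedness criterion
            peak_thresh = max(10.0 - zbg**2 / 180.0, 0.0)
            if Z[j, i] >= 40.0 or (Z[j, i] - zbg) > peak_thresh:
                conv[j, i] = True
    return conv
```

In the full scheme, each centre is then dilated within a radius of influence that grows with Zbg, and the remaining non-zero-reflectivity pixels are labelled stratiform.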
The convective-stratiform separation was implemented for all the volume scans available for the eleven convective events during March-May 2018, and the corresponding statistics for Zh, Zdr and Kdp are shown in Figure 6. A total of 132 volume scans were obtained from these events. Within these radar volumes, the numbers of convective and stratiform pixels were found to be 409,049 and 1,439,977, respectively, equivalent to convective and stratiform fractions of 22% and 78%. Figure 6a-c shows the contour frequency by altitude diagrams (CFADs) of Zh, Zdr and Kdp over the convective regions and Figure 6d-f shows the corresponding CFADs over the stratiform regions. A convective core is seen near 3 km height (Figure 6a), whereas no such feature is present over stratiform regions (Figure 6d). The spread in the distributions of Zdr and Kdp is much higher over convective regions than over stratiform regions, particularly at lower levels (below 5 km height). The CFADs of Zdr and Kdp obtained in this study share many features with those obtained by Machado et al. [87], e.g., an almost uniform distribution across the vertical, although in their findings the centre of the Kdp CFAD is shifted slightly towards a higher value of 0.2° km−1, compared to 0.1° km−1 in our study. The difference could be attributed to their use of X-band rather than C-band radar data.
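For reference, a CFAD of the kind shown in Figure 6 amounts to one normalised histogram per altitude level. The sketch below assumes a regularly gridded 3-D field and illustrative bin edges.

```python
import numpy as np

def cfad(field, bins):
    """Contour frequency by altitude diagram for a 3-D field of shape
    (n_levels, ny, nx): one normalised histogram of valid pixels per level."""
    freq = np.zeros((field.shape[0], len(bins) - 1))
    for k in range(field.shape[0]):
        vals = field[k][np.isfinite(field[k])]
        if vals.size:
            freq[k], _ = np.histogram(vals, bins=bins)
            freq[k] /= vals.size          # frequency of occurrence at this level
    return freq

# e.g. cfad(zh_grid, bins=np.arange(0, 65, 1.0)) for a Figure 6a-style diagram
```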
Figure 6. Contour frequency by altitude diagram (CFAD) of Zh, Zdr and Kdp for convective (a-c) and stratiform (d-f) regions. (g-i) Mean vertical profiles with 1-σ error bars of Zh, Zdr and Kdp over convective (red) and stratiform (blue) regions. (j-l) Frequency distribution of Zh, Zdr and Kdp at 3 km height. The black dotted line represents a rain rate of 10 mm h−1.
The mean profiles of Zh, Zdr and Kdp, along with their standard deviations, are shown in Figure 6g-i. In the convective case (red), the mean reflectivity gradually increases with height from the ground, reaches a maximum at ~3 km height and then gradually decreases towards upper levels. The peak reflectivity is about 32 dBZ. Similar features in the reflectivity profile were found over the tropics by Zipser and Lutz [88]. In the stratiform case (blue), the mean reflectivity remains almost uniform (~18 dBZ) at lower levels and then increases gradually to form a distinct peak near 5 km height. This peak signifies the bright band (caused by enhanced reflectivity from melting ice particles near the 0 °C level) over stratiform regions. Above the melting layer, the reflectivity decreases monotonically. The spread in the distribution is considerably higher for the convective case, particularly at higher levels. The mean profile of Zdr over convective regions decreases monotonically towards upper levels and remains almost uniform above 6 km height. Over stratiform regions, the mean Zdr decreases gradually at lower levels, but from 3 km height it starts increasing, forms a distinct local maximum near 5 km height and decreases again towards higher levels. This local maximum in Zdr may be utilised to identify the melting layer as an alternative to the peak in the reflectivity profile. The mean values of Zdr for the convective and stratiform cases are very close to each other above 5 km height. The higher mean values of Zdr at lower levels over convective regions are due to the higher oblateness of the larger raindrops found there, produced by the coalescence process or by the melting of bigger graupel. The spread in the Zdr distribution is much higher at lower levels (below 4 km height), particularly over convective regions. The mean Kdp profile remains almost uniform with small values (~0.2° km−1) throughout all levels over stratiform regions. Over convective regions, in contrast, the mean Kdp values are much higher at lower levels, signifying the presence of strong precipitation. The spread in the Kdp distribution is quite large over convective regions owing to the larger spread in the raindrop size distribution as well as turbulent conditions. Figure 6j-l shows the distributions of Zh, Zdr and Kdp over convective (red) and stratiform (blue) regions at 3 km height.
The peaks in the distributions of Zh over the convective and stratiform regions are well separated, although the two distributions overlap. The peaks for convective and stratiform regions occur at 34 dBZ and 20 dBZ, respectively. The dashed vertical line marks the reflectivity corresponding to a rain rate of 10 mm h−1, using the relation Z = 168R^1.4 obtained over this region from micro rain radar data by Jash et al. [89]. This result clearly shows that using a rain-rate threshold (e.g., 10 mm h−1) to separate convective and stratiform rain is questionable, even though such a simple classification is often useful in many studies [65,66,90]. The distributions of Zdr over convective and stratiform regions are almost symmetric, with maximum occurrence frequencies at 0.6 dB and 1.0 dB, respectively. For Kdp, the maximum occurrence frequency is at 0.4° km−1 and 0.1° km−1 over convective and stratiform regions, respectively. The distribution over convective regions has a much longer tail, extending beyond 3° km−1, which results in the large error bar in Figure 6i. Even though numerous studies on convective systems are available in the literature, such statistics on radar polarimetric variables (Zdr, Kdp) are rare, particularly for thunderstorms over the Indian region. Hence, this analysis provides valuable quantitative information on the signatures and characteristics of convective and stratiform precipitation from radar polarimetric measurements.
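As a quick check of the threshold mentioned above, the dashed line in Figure 6j follows directly from the quoted Z-R relation:

```python
import math

R = 10.0                      # rain rate in mm/h
Z = 168.0 * R ** 1.4          # Z-R relation of Jash et al. [89], linear units (mm^6 m^-3)
dbz = 10.0 * math.log10(Z)    # convert to dBZ
print(f"{dbz:.1f} dBZ")       # about 36.3 dBZ, between the two distribution peaks
```

A 10 mm h−1 rain rate thus corresponds to roughly 36 dBZ, which sits inside the overlap region of the convective and stratiform reflectivity distributions, illustrating why a single threshold is questionable.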
Case Study of a Deep Convective System: 13 May 2018
An in-depth analysis of the convective event on 13 May 2018 is performed for a detailed understanding of the evolution and structure of pre-monsoon convective systems over southern peninsular India. It was an isolated deep convective system. Favourable synoptic and thermodynamic conditions help convective storms organise and develop into severe ones [30]. High moisture, atmospheric instability, vertical wind shear and a lifting mechanism are the necessary ingredients for the development of thunderstorms; hence, an overview of the synoptic conditions before and during the event gives more insight into its development. Geopotential height anomaly and wind data from ECMWF Reanalysis v5 (ERA5) at 12 UTC, together with vertical profiles of equivalent potential temperature (θe), mixing ratio, wind speed and wind direction from radiosonde measurements by IMD, Thiruvananthapuram at 00 UTC (i.e., 05:30 IST), were used to examine the environmental conditions for the event. A low-pressure area formed over the south-west Arabian Sea (Figure 7a) on 13 May 2018, evident from the minimum of the geopotential height anomaly at the 700 hPa level between 55-65° E and 4-10° N. Even though it was far from the study region, under its influence the mean wind was easterly, from the Bay of Bengal towards the Arabian Sea (Figure 7a), converging towards the low-pressure centre. A strong negative gradient of the θe profile (Figure 7b, red) up to 3 km height shows the instability present in the lower atmosphere during the morning. The mixing-ratio profile (Figure 7b, blue) indicates the presence of moist layers between 2 and 6 km, suggesting favourable atmospheric conditions for the formation of a thunderstorm. Figure 7c shows the profiles of wind speed (red) and wind direction (blue); they change abruptly along the vertical, owing to the turbulence associated with the unstable atmospheric conditions. Heavy rainfall at isolated places over Kerala and Tamil Nadu was reported by IMD. These conditions led to the formation of a convective system over the inland region on 13 May 2018 during the afternoon hours, between 16:00 and 22:30 IST.
Evolution of the Storm
The development of this convective system on 13 May 2018 is captured in the plan position indicator (PPI) diagrams of the radar reflectivity field at consecutive times during the event (Figure 8). The convective clouds started developing over land, around 25 km east of the radar location, at 16:00 IST and then gradually moved westward. This movement was due to the prevailing easterly wind (Figure 7a), as discussed previously. The cloud system passed over the NCESS location around 18:00 IST (Figure 8d). As soon as it reached the NCESS location, extremely heavy rainfall started, as observed in the rain rate measured by the disdrometer (Figure 9a). The rain rate crossed 100 mm h−1 and stayed in that range for over an hour. Gradually the rain intensity declined to 0.1-1 mm h−1, essentially stratiform precipitation following the main convective activity. The rain DSD obtained by the disdrometer showed an abundance of bigger raindrops (diameter > 3 mm) during this intense convective spell, followed by smaller drops at the later stage of the event. The cloud base height measured by the ceilometer (Figure 9b) shows the presence of multilevel clouds. Before 17:00 IST mostly high-level clouds were detected (cloud base height ~7 km); then, just before the precipitation started, all three cloud layers had cloud bases below 2.5 km. Such a low cloud base together with a high cloud top indicates the significant depth of the cloud system. The deep convective cloud system eventually moved over the Arabian Sea, around 30 km westward of the radar location, meanwhile turning into a stratiform system (Figure 8g-i). The IMD weather report also mentioned the rainfall during these hours. The event was associated with the rapid development of deep convective clouds, as observed in the evolution of the cloud-top infrared brightness temperature (IRBT) measured from the INSAT-3DR satellite. A lower brightness temperature signifies a higher cloud top [91].
Figure 10a-e shows the spatial and temporal evolution of the brightness temperature during this event. Around 18:00 IST, much of the region had a brightness temperature below 200 K, revealing the occurrence of deep clouds over most of the region. Figure 10f shows the temporal evolution of the brightness temperature over the NCESS location (averaged over a 12 × 12 km area centred at NCESS). A rapid decrease in the brightness temperature started at 15:45 IST and reached a minimum of 185 K at 17:45 IST, which exemplifies how fast such a deep system can develop within a short span of time. A CAPE value of 1713 J kg−1 was observed from the nearest radiosonde measurement in the morning hours (05:30 IST), indicating moderate instability already present in the atmosphere, which built up further and eventually led to strong updrafts during the evening hours.
Vertical Structure of the Storm
The vertical structure of the storm in terms of the DWR polarimetric measurements and the associated hydrometeor identification is shown in Figure 11. The reflectivity averaged between 2.5 and 3.5 km height during the rapid initial development stage of the storm reveals the active convective regions (Figure 11a).
A vertical cross section along the convection line AB is then considered to analyse the vertical structure of the storm. Figure 11b shows the vertical cross section of reflectivity at horizontal polarisation (Zh) along the convection line AB; the x-axis represents the distance from point A towards point B. Reflectivity values greater than 30 dBZ reaching up to 10 km height signify the existence of a strong updraft within the convective core region. Such a strong updraft can keep the larger hydrometeors (bigger raindrops, graupel, etc.) aloft for a longer period, giving them more time to grow further, by the collision-coalescence process for raindrops and by the riming process for ice particles [92]. Since reflectivity is proportional to the 6th power of the particle diameter [58], these larger particles produce strong reflectivity values even at high altitudes. Figure 11c shows the vertical cross section of differential reflectivity (Zdr) along the convection line. The Zdr value gives a measure of the oblateness of precipitation particles, and hence can be useful in distinguishing between larger raindrops, hail and graupel owing to differences in shape and orientation. Since raindrops (diameter > 1 mm) are deformed into an oblate spheroid shape by aerodynamic forces [93], with their major axes preferentially oriented horizontally (and therefore Zh > Zv), Zdr is positive and increases with raindrop size. This increase of Zdr with raindrop size is captured by the polynomial fit of Bringi et al. [36] between observed Zdr and the mean drop diameter measured by disdrometer. In our analysis, Zdr values greater than 2 dB were observed below 4 km height, indicating the presence of bigger raindrops or melting bigger ice particles [94]. Bigger raindrops were also observed in the disdrometer measurements of the rain DSD (Figure 9a).
Zdr values are much smaller at higher altitudes (above the 0 °C isotherm at ~5 km height) because ice particles such as aggregates, graupel and hail tend to be spherically symmetric or to tumble while falling, causing low Zdr values. The lower dielectric constant of ice compared to water is another factor behind the lower Zdr of ice particles. Within the strong convective regions above the melting layer, a high value of Zdr together with a high value of Kdp indicates supercooled liquid drops above the freezing level [95]. ρhv shows high values (>0.95) throughout the entire cross section (Figure 11d); ρhv depends on several factors, such as eccentricity, the distribution of canting angles, irregular shapes and mixtures of different types of hydrometeors. The relatively lower values of ρhv in the central region and at higher altitudes within the cross section can be attributed to a mixture of ice particles with rain. The estimated Kdp (Figure 11e) shows a spatial pattern broadly in tandem with that of reflectivity, though with differences. High values of Kdp (greater than 5° km−1) below the melting level suggest the presence of intense convective precipitation with bigger raindrops, formed through the coalescence process or by the melting of graupel. As drop eccentricity increases with diameter, the differential propagation phase increases, causing higher values of Kdp within regions of intense convective precipitation. A similar structure of Kdp within convective regions was reported by Ryzhkov et al. [61]. Higher values of Kdp above the freezing level suggest the prevalence of supercooled droplets, which can help form graupel via the riming process. The identified hydrometeor types are shown in Figure 11f. Below the melting level the storm is dominated by rain (RN), and above the melting level ice aggregates (AG) are the dominant hydrometeors. At heights between 4.5 and 8 km, within the convective core regions, high-density graupel (HDG) particles are abundant. Similar findings were obtained by Dolan et al. [47], with HDG found close to the melting level, as in our study, and LDG at greater heights. Within such convective cores, reaching up to 10 km height, liquid droplets are pushed well above the freezing level, where they persist as unstable supercooled droplets. Upon contact with ice aggregates they freeze immediately onto the surface, forming bigger ice particles, viz. graupel. The strong updraft can sustain these graupel in the air for longer, helping them grow even further. The presence of vertically oriented ice indicates the existence of an electric field, which forces these particles to orient vertically; this could be due to charging via collisions between graupel and smaller ice crystals, as confirmed by different laboratory experiments [48][49][50].
WRF Simulation of Reflectivity, Graupel Mixing Ratio and Rainfall
The radar reflectivity field was obtained by post-processing the WRF model outputs using the ARWpost package. The simulated radar reflectivity structure during the thunderstorm on 13 May 2018 is shown in Figure 12a-c. The spatial distribution of radar reflectivity at 3 km height (the level previously used for the convective-stratiform separation) at 19:15 IST, during the mature stage of the storm, is shown in Figure 12a. Strong radar reflectivity values greater than 40 dBZ mark the most active convective regions.
There are many similarities with the reflectivity fields observed by the DWR at an earlier time (18:08 IST, Figure 8d); i.e., in the model the storm developed almost an hour later than observed. A 5 × 5 km box (white rectangle) over an active convective region was considered to study the vertical structure of the simulated storm. Figure 12b shows the time-height cross section of the reflectivity averaged over the box. The peak reflectivity occurred around 19:15 IST, again showing the hour-long delay in the development of the storm compared to the observation. Figure 12c shows the east-west vertical cross section along the dotted line passing through the centre of the box. The main convective region spreads over a distance of ~25 km, surrounded by regions of lower reflectivity. The reflectivity core reached beyond 12 km height, showing the severity of the storm. Such strong reflectivity cores were also seen in the DWR observations (Figure 11b). The occurrence of graupel within the storm is shown in Figure 12d-f in terms of the graupel mixing ratio. Graupel is often abundant in thunderstorms because the strong updrafts help its growth, and it is particularly important when studying thunderstorms because it plays a crucial role in the occurrence of lightning. Figure 12d shows the spatial distribution of the graupel mixing ratio at 6 km height; this height was chosen because graupel is mostly found around this level. High values of the graupel mixing ratio were seen over the active convective regions. The same box as in Figure 12a was then considered to study the vertical structure of the graupel occurrence.
Figure 12e shows the time-height cross section of the graupel mixing ratio. High values started to appear around 19:00 IST at heights beyond 5 km, and an abundance of graupel persisted until 20:30 IST. Figure 12f shows the east-west vertical structure of the graupel mixing ratio along the black dotted line. Graupel starts to appear from 5 km height and the highest mixing ratio is found near 8 km height. A similar vertical structure of graupel occurrence was observed in the hydrometeor identification analysis (Figure 11f), in which graupel was identified between 5 and 9 km height. Rainfall is arguably the most important parameter in meteorology, so the study would be incomplete without exploring how well the model captured the rainfall. Figure 12g shows the spatial distribution of the surface rain rate (mm h−1). Strong convective rainfall greater than 10 mm h−1 was produced over different regions, particularly the convectively active ones. The same box as in Figure 12a was then considered to obtain the rain-rate time series during the course of the event (Figure 12h). Overall, the WRF model captured the main features of the storm well compared to the observations. In particular, the horizontal patterns of reflectivity as well as the vertical profiles were captured nicely, as seen in the DWR observations. The presence of graupel across the vertical was captured well, matching the hydrometeor identification analysis, and the simulated rain-rate time series agreed closely with the disdrometer rain-rate observations. The model captured all of these features with the spectral bin microphysics scheme; we also tried a few other configurations with bulk microphysics (not shown here), but those results were far from what SBM could capture, possibly because of the isolated, deep nature of this particular event.
Conclusions
The present study focuses on the structure of pre-monsoon convective systems over a tropical coastal region in southern peninsular India. Statistics on radar polarimetric variables for pre-monsoon convective systems have been obtained for the first time over the Indian region. Using the quality-controlled DWR data, 11 convective events were identified by inspecting the radar reflectivity fields; a prominent convective event, which occurred on 13 May 2018, was analysed in detail to understand the development and structure of a typical pre-monsoon convective system. Convective-stratiform separation was performed for all of the events. The major conclusions of the study are the following:
1. The distributions of differential reflectivity (Zdr) and specific differential phase (Kdp) have a much larger spread over convective regions, particularly below 5 km height. The distribution of Kdp is almost uniform across the vertical over stratiform regions. The mean profile of Zdr over stratiform regions shows a distinct local maximum near the melting level, which could be utilised to identify stratiform precipitation.
2. The percentages of convective and stratiform pixels were found to be 22% and 78%, respectively. The distributions of reflectivity values over convective and stratiform regions show that a single threshold on reflectivity or rain rate, as used in many studies, may not be adequate for convective-stratiform separation.
3. The analysis of the thunderstorm on 13 May 2018 clearly exemplifies that pre-monsoon deep convective systems can develop rapidly within a very short span of time and cause heavy precipitation. Satellite-based cloud-top temperatures reveal the development of very deep clouds.
4. Vertical structures inside the storm during the rapid development stage were obtained by taking vertical cross sections of reflectivity through the major convective regions. Convective cores reaching 10 km height were observed, owing to the strong updraft. High values of Zdr at lower levels were observed due to the oblate spheroid shape of the bigger raindrops. The structure of the Kdp field is quite similar to that of the reflectivity; high values of Kdp reveal the presence of intense rainfall, as Kdp is dominated mainly by bigger raindrops.
5. The fuzzy logic-based hydrometeor identification showed the presence of graupel at middle levels within the convective core regions, revealing the presence of strong updrafts. Ice aggregates and rain were the dominant hydrometeors above and below the melting level, respectively. The presence of vertically oriented ice signifies the presence of an electric field inside the storm; such an electric field may be generated by non-inductive charging via collisions between graupel and smaller ice crystals.
6. Numerical simulation using the WRF model with the spectral bin microphysics (SBM) scheme reproduced most of the features of the storm reasonably well. In particular, the simulated reflectivity, graupel mixing ratio and rainfall were in good agreement with the observed values. These results show the capability of the SBM scheme in simulating deep convective clouds.
7. It would be worth studying the observed lightning activity (if any) during these events, as the presence of vertically oriented ice indicates a strong electric field. If major lightning activity occurred during these events, it would support the collisional charging mechanism, since graupel was identified within the convective core regions.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The Doppler weather radar data and INSAT-3DR satellite data used in this study are freely available through https://www.mosdac.gov.in/server (accessed on 6 September 2021). The in situ data used in this study will be shared upon acceptance of the manuscript.
The Language of Innovation
Predicting innovation is a peculiar problem in data science. By definition, an innovation is always a never-seen-before event, leaving no room for traditional supervised learning approaches. Here we propose a strategy to address the problem in the context of innovative patents, by defining innovations as never-seen-before associations of technologies and exploiting self-supervised learning techniques. We think of the technological codes present in patents as a vocabulary, and of the whole technological corpus as written in a specific, evolving language. We leverage this structure with techniques borrowed from Natural Language Processing by embedding technologies in a high-dimensional Euclidean space, where relative positions are representative of learned semantics. Proximity in this space is an effective predictor of specific innovation events that outperforms a wide range of standard link-prediction metrics. The success of patented innovations follows a complex dynamics characterized by different patterns, which we analyze in detail with specific examples. The methods proposed in this paper provide a completely new way of understanding and forecasting innovation, tackling it from a revealing perspective and opening interesting scenarios for a number of applications and further analytic approaches.
Introduction
Predicting an innovation is a daunting task for a data scientist, and the definition of innovation itself contains the reason: since an innovation is something that has never been seen before, it is impossible to follow the usual prescriptions of supervised-learning approaches. No class can exist a priori for an event that was never observed, therefore no supervised model can be trained to predict it. This abstract difficulty becomes very concrete in the actual datasets usually considered to study technological innovation, such as those of products or patents [1,2], for a very general reason that applies to virtually any dataset. Data-gathering activities usually rely on categories that are defined before the actual accumulation of data begins. When new events occur and need to be recorded in the dataset, they can only be classified according to the pre-existing categories. If an innovation arrives, however, the system is not ready to classify it because the relevant class does not exist yet, and the most similar applicable category is typically used instead. Only when an innovation becomes popular enough is a new class created and added to the existing basket. For this reason, an ex-post study of such time series would completely miss the point in time when the innovation really happened. A workaround would be to manually revise the whole dataset using "future knowledge" and try to label the real point in time when a new class would have been needed; this approach, however, suffers from many limitations, the most important being an evident bias due to knowledge of the "future", leaving aside all the practical problems and subjectivity that such an operation involves. A very similar problem is faced by every inventor: when humans innovate, they often lack a word to describe their invention. One very famous patent, dated 1906 and signed by Orville and Wilbur Wright, displays the most typical solution to this problem: it is titled "Flying-Machine", the combination of two existing words that define the innovation.
Using publicly available data (see https://books.google.com/ngrams/) one can see that the word "Aircraft" only appeared a decade later, during World War I, and did not become popular before World War II: the introduction of the word corresponds to the popularization, rather than the invention, of the "Flying-Machine". This example reflects the deeper and widely shared opinion that one of the most important processes through which humanity achieves innovation is the recombination of already known ideas for a novel or improved function [3,4]. With his work, Schumpeter paved the way for many modern analyses of technological progress and innovation in general. Weitzman, for instance, argues extensively about the fundamental role of recombination in the innovation process, building an abstract model to describe its unfolding [5]. Fleming focuses on patent data and combinations of technologies to study the source of technological uncertainty, which, he argues, is due to inventors' attempts to combine unfamiliar technologies [6]. Recombination of existing elements is a powerful tool to generate new ideas, and its application is not limited to technological progress: many studies investigate the effect of recombination of ideas in science, describing its impact on scientific progress [7,8]. Following the definition of innovation as recombination, many have pointed out that innovation can be seen as an exploration process, where introducing a new discovery or a new combination modifies the technological landscape and opens up a whole new space of possible innovative associations. The concept of the Adjacent Possible [9,10] embraces this metaphor of exploration by introducing the notion of the boundary between what is already known and what is just one step away. Introducing an innovation is a step from this boundary into what was previously the Adjacent Possible: the boundary is moved, and the exploration of a new part of the unexplored space becomes possible. With this work, Kauffman paved the way for many studies that investigate the Adjacent Possible from different points of view: for example, Monechi et al. discuss the expansion of its boundaries [11], Iacopini et al. describe its exploration in cognitive processes [12], and Tria et al. quantify its dynamics [13]. Others try to define and explain the statistical features of the process of innovating, often describing it as a combinatorial or evolutionary process [14][15][16][17][18][19][20][21][22][23][24][25], while some works have tried to sketch optimal strategies or environments to maximize the probability of innovation events [26,27]. However, some of these models are typically not grounded in real data, at least not to the point of being able, or even trying, to predict specific innovation events, while others are not interested in predictions at all and focus on a descriptive analysis. Furthermore, due to limitations in the data, typical approaches focus on what we may call a novelty rather than an innovation, i.e. the introduction of an event that is new only in a limited context (novelty), but is not universally unseen (innovation) and does not require a new category to classify it: trying a new dish at a restaurant is a novelty for the person who does it, while inventing a new recipe is an innovation for everyone [14].
In contrast, we intend to contribute to the field opened by Schumpeter and focus on unprecedented associations of categories, rather than on new categories themselves, thus opening the possibility of observing and predicting innovations as novel recombinations of pre-existing elements. In this work we introduce a computational framework that allows us to define and successfully predict a large and important class of innovation events, namely new combinations of technologies, by bringing the analogy between language and innovation one step further. In particular, we show how recently introduced concepts of self-supervised learning can be fruitfully applied to link prediction in large bipartite networks. As a natural source of innovation data, we refer to the context of patents and ground our analysis on the PATSTAT database [2], which allows us to connect patents to the set of technologies used in them. These technologies are categorized in a nested classification and represented by technological codes [28], which we use at a level that contains around 7000 of them. Every new patent, per se, can be seen as an innovation event, and there are already studies that try to predict the dynamics of patents and knowledge spillovers between technological sectors through the study of patent citation networks, see [29,30] for instance. However, we want to discriminate minor improvements or better exploitation of already known processes from radically new inventions, i.e. novel and unseen recombinations of pre-existing elements. There is no perfect way of making this distinction, therefore we choose to make use of the technological codes associated to each patent and define an innovation as the first event in which a given couple of technological codes is used in the same patent (a minimal sketch of how such events can be extracted is given below). By working with couples of technological codes, we overcome the limitation of being constrained by the classification of technologies, which would effectively prevent any direct inspection of innovations as "first appearances". Our goal is to derive a measure that predicts when a specific couple is becoming increasingly likely to appear. As our starting point is an analogy between words and technological codes, it is very natural to extend it: a patent, being a coherent association of technological codes, is comparable to what would be a sentence, or a context, in natural language. The full database of world patents contains around 30 million patents from 1980 to 2011, which can then be seen as an extremely large corpus of text, written in the evolving Innovation Language. Computational models for Natural Language Processing (NLP) such as [31] give a mathematical representation of semantic contexts that is learned from a corpus of text. We can apply such tools to the corpus of patents with the aim of learning the Language of Innovation and of describing its evolution in terms of the changing relative distances between words (technological codes) and, consequently, contexts (patents). When we observe that the context similarity (CS) of two codes is increasing, we are able to predict new combinations before they happen. Moreover, we show that CS can be complemented by an indicator of the intensity of the patenting activity in given technological codes: more active codes are more likely to generate innovations by chance. We control for this effect by making use of a bipartite version of the Chung-Lu null model [40,41].
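As a minimal illustration of the definition of innovation used here, the following sketch extracts first co-occurrence events from a date-ordered stream of patents. The toy codes and the data layout are hypothetical; the real input is the PATSTAT record of each patent's technological codes.

```python
from itertools import combinations

def first_cooccurrences(patents):
    """patents: iterable of (date, codes) tuples, sorted by date.
    Yields (date, couple) for every never-seen-before couple of
    technological codes, i.e. every innovation event as defined here."""
    seen = set()
    for date, codes in patents:
        for couple in combinations(sorted(set(codes)), 2):
            if couple not in seen:
                seen.add(couple)
                yield date, couple

# Hypothetical toy stream of patents:
stream = [("1980-01", ["A01B", "B65G"]),
          ("1980-02", ["A01B", "B65G", "C07D"])]
for date, couple in first_cooccurrences(stream):
    print(date, couple)   # the second patent contributes two new couples
```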
While a precise mathematical definition of the CS is given in the Methods Section, we describe here the main aspects of its calculation. Along the lines of [31] we train a Skip-Gram model, i.e. a neural network, to predict the context from which a technological code is randomly extracted, i.e. a patent. The internal structure of this neural network corresponds to the assignment of a vector (whose dimension is a parameter) to each possible word of the corpus, i.e. to each technological code. At each step of the training, the vectors are moved in the space so as to represent the relative distances among codes as learned from the batch of patents under exam. After the training, these vectors contain all the information on how the neural network has learned to represent the semantic structure of the Language of Innovation in a high-dimensional Euclidean space. These vectors are called embeddings, and we define $\tilde{E}(c_i)$ as the embedding of the technological code $c_i$. Given the objective of the training, two codes that are expected to be good candidates to appear in the same context will have similar embeddings (i.e. their vectors will be nearly parallel). The reciprocal positioning of each code's vector in the space is the result of a global optimization of the relative positions of all the embeddings, which aims to increase the scalar product between technological codes belonging to similar contexts (see the Methods Section for more details). Another immediate result provided by CS is the analysis of technological trends, shedding light on the dynamics of couples that appear together in a patent. Not only is CS a good estimator of the probability that novel associations of technologies will be patented in the near future, it can also be exploited to study their behaviour once patented. By introducing a definition of popularity for couples of technological codes, as a function of the number of patents employing them, we can build a 2-dimensional similarity-popularity space where the dynamics of patented innovations unfolds. In the Results Section we analyze this dynamics, breaking it down into its fundamental patterns and trends and showing concrete examples of real trajectories. The similarity-popularity plane is a powerful tool to understand the most likely future of patented technological couples, whether they will be popular for a long time or will quickly exhaust their innovation potential, and in this way it gives new insights on the dynamics of innovation.
Materials and methods
Natural Language Processing is a vast field intersecting computer science, artificial intelligence and computational linguistics, which aims to integrate computers with human language. It is composed of several branches, each with different purposes. One of the most recent approaches consists in producing spatial representations of words to capture relevant dimensions of meaning, based on the typical contexts in which a word is usually seen. In particular, in our work we employ the Word2Vec (W2V) algorithm [31], which was originally designed to analyze corpora of text and create high-dimensional vector representations of words, and which we have specifically adapted to create vector representations of technological codes from the PATSTAT database. A minimal sketch of this adaptation is shown below.
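The sketch uses the off-the-shelf gensim library rather than the paper's own TensorFlow implementation (described later), treating each patent as a "sentence" of technological codes. The example codes and all parameter values are illustrative assumptions.

```python
from gensim.models import Word2Vec

# Each "sentence" is the list of technological codes of one patent
# (hypothetical codes; the real input is a 5-year window of PATSTAT patents).
patents = [["A01B", "B65G", "C07D"], ["A01B", "C07D"], ["B65G", "H04L"]]

model = Word2Vec(
    patents,
    vector_size=32,   # embedding dimension N
    sg=1,             # Skip-Gram (rather than CBOW)
    window=50,        # larger than any patent, so the whole patent is the context
    min_count=1,      # in the paper, only the most frequent codes are kept
    negative=5,       # negative sampling stands in for the NCE used in the paper
)

# Context similarity as the scalar product of the two embeddings
cs = model.wv["A01B"] @ model.wv["B65G"]
print(cs)
```

In the paper the training is repeated 30 times and the scalar products are averaged over the runs to reduce the variability of individual trainings.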
The problem of predicting novel associations of technological codes can be cast, from a network science perspective, as a link-prediction problem in the network of technologies, defined in such a way that two codes are linked if they appear together in at least one patent. This network is the monopartite projection of the patents-technologies network, i.e. it is the projection on the technology layer of the bipartite network created by linking each patent to all its technological codes. There exist several standard techniques to predict new links on monopartite networks, and we test them for comparison in the S1 File. The main limitation of such techniques is that, by definition, they are grounded in the topology of the projected monopartite network, and therefore can extract only part of the information available in the full bipartite topology. The approach that we propose here surpasses the standard ones, as it operates directly on the bipartite topology and makes use of its full information. Moreover, besides CS, we also show the results of a further metric derived from the Chung-Lu null model, which preserves, on average, the degree sequence of the bipartite network. Interestingly, this model complements CS well and is able to account for a great part of the signal due to the popularity of a technology, intended as the expected number of patents that will make use of it: more popular technologies are more likely to form new couples independently of their CS. The results obtained by combining these two techniques based on the full bipartite topology largely outperform all the monopartite techniques, as does CS alone. To evaluate the performance of CS and of the other predictors tested in our work, we rely on the Receiver Operating Characteristic (ROC) curve and on the best F1-score, standard tools in statistics to evaluate the performance of a binary classifier [32][33][34]. For a detailed study of the tuning of the parameters of W2V that led to the results presented here, we refer to the S1 File.
Word2Vec: Technical definition
There are two versions of the W2V algorithm that can be implemented: the Skip-Gram model and the Continuous Bag of Words (CBOW) model. They differ in the aim of the training: while Skip-Gram learns how to predict a context given a word, CBOW learns how to predict a word given a context. In what follows we give a brief description of the Skip-Gram algorithm and comment on the differences with CBOW. In the S1 File we show that Skip-Gram outperforms CBOW, justifying our choice of the former.
The Skip-Gram model. In W2V a neural network is trained to relate contexts to elements extracted from those contexts. The collection of all the elements that can be in a context, and that form a context, is the Vocabulary. Once the network is trained, its internal structure contains representations of the elements of the Vocabulary based on their typical contexts. The difference between the two flavours of W2V, Skip-Gram and CBOW, lies only in how the contexts are related to their elements: in CBOW the context is the input given to the neural network and the missing element is the prediction target, while Skip-Gram is trained to predict the most likely elements of the context given an input word. In both cases, after the training the internal representations can be used to compute similarity metrics between the elements of the contexts. Here we focus on Skip-Gram (see Fig 1), which performs better in the analysis of the technological language (as shown in the S1 File). To derive its loss function we follow the steps detailed in [35].
The fundamental components of the Skip-Gram algorithm are: the embedding matrix E of size V × N, where V is the size of the vocabulary and N the dimension of the embedding representation; the decoding matrix D of size N × V; and a series of random batches of words (or, more generally, elements of the vocabulary) extracted at each step of the training from sentences of the corpus used as the training set. From each batch a random word is extracted and singled out, while the remaining words are grouped to form the context. The input word is represented by a one-hot-encoded vector with a number of elements equal to the vocabulary size V, such that, if all codes of the vocabulary are listed in a fixed order, each code is represented by a vector of all zeros and a one at the position it occupies in the vocabulary (the first code is represented by [1, 0, 0, . . .], the second code by [0, 1, 0, . . .] and so on). In the specific case of the technological language, we create embeddings for the 4500 most frequent codes (see the S1 File for more details on this choice). From the point of view of the algorithm, a patent is a collection of codes and is thus represented as the sum of the one-hot-encoded vectors of its codes. The embedding matrix E stores the vector representations of the words in the vocabulary. Let us call h the embedding of a given input word w, and let C be the set of all the words $w_j$ in the target context. The decoding matrix is used to calculate the score between the input word w and all the words in the target context C. The score $sc_j$ for the jth word of the target context $w_j$ is defined by

$$sc_j = D_j^{\top} h,$$

where $D_j$ is the jth column of the decoding matrix, obtained by applying the matrix $D^{\top}$ to the one-hot-encoded representation of the word $w_j$. Each score passes through the softmax function, which gives the posterior multinomial distribution for the context word $w_j$ given the input word w:

$$p(w_j \mid w) = \frac{e^{sc_j}}{\sum_{k=1}^{V} e^{sc_k}}.$$

The posterior probability of predicting the whole context is the product of the posterior probabilities of each word in the context. The Skip-Gram model aims to maximize this probability at each step of the training for each input-context couple. However, it is computationally more efficient to transform this maximization problem into the minimization of the following loss function:

$$L = -\sum_{w_j \in C} \log p(w_j \mid w).$$

At each step, Skip-Gram is trained over a random batch of input-context couples, so the total loss over the batch is the average of the single losses, $\mathcal{L} = \langle L \rangle$. Sampling the training corpus in batches allows large quantities of data to be processed efficiently, because parameter updates are calculated only on subsets, i.e. only the vectors present in the sample at each step are modified. For all practical purposes, we minimize the loss via Stochastic Gradient Descent (SGD), a well-established technique for treating large datasets in machine learning [36]. Gradient descent is a strategy to minimize a given function $F(\alpha)$ with respect to its parameters $\alpha$ through an iterative procedure that at each step updates the parameters according to

$$\alpha \to \alpha - \eta \nabla F(\alpha),$$

where $\eta$ is the learning rate and $\nabla F(\alpha)$ is the gradient with respect to the parameters $\alpha$. Stochastic gradient descent updates the parameters by calculating the variations only on a sample of the training set, thus approximating the gradient calculated on the entire manifold where F is defined with its value on the sub-manifold defined by the training sample. The Robbins-Siegmund theorem defines the criteria that ensure the convergence of this approximation [37]. To further speed up the training we also employ noise contrastive estimation (NCE), which slightly modifies the loss; details can be found in [35,38]. We implement the algorithm using Google's TensorFlow library [39]: on our 8-core machine it takes 6 minutes to train 32-dimensional embeddings for 4500 technological codes and of the order of $10^6$ patents. In particular, patents are grouped by date and, from the point of view of the algorithm, each patent is just the list of its technological codes, i.e. the context on which W2V relies for the training. For more details on the patents-codes network, we refer to the S1 File.
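To make the training step concrete, here is a minimal NumPy sketch of one SGD update of the full-softmax Skip-Gram loss defined above. It is an illustration only: the actual implementation uses TensorFlow with the NCE-modified loss, and the initialisation, learning rate and batch handling here are assumptions.

```python
import numpy as np

V, N, eta = 4500, 32, 0.05               # vocabulary size, embedding dim, learning rate
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, N))   # embedding matrix E (V x N)
D = rng.normal(scale=0.1, size=(N, V))   # decoding matrix D (N x V)

def skipgram_step(w, context):
    """One SGD step on L = -sum_j log p(w_j | w) for input code index w
    and its context (a list of vocabulary indices)."""
    global E, D
    h = E[w]                                   # embedding h of the input code
    sc = h @ D                                 # scores sc_j for every code in V
    p = np.exp(sc - sc.max()); p /= p.sum()    # softmax posterior p(. | w)
    loss = -np.sum(np.log(p[context]))
    # Gradient of the summed loss over the |C| context words:
    dsc = len(context) * p
    np.add.at(dsc, context, -1.0)              # handles repeated context indices
    E[w] -= eta * (D @ dsc)                    # dL/dh = D dsc
    D -= eta * np.outer(h, dsc)                # dL/dD = h dsc^T
    return loss

print(skipgram_step(0, [1, 2]))                # toy usage
```

The full softmax over V terms is what makes this naive version slow and what NCE avoids in practice.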
CS increase anticipates radical innovations
We explore the time dependency between the CS and the actual patenting activity, demonstrating how the relative positions of the embeddings are predictive of the appearance of new couples of codes. We use data from 1980 to 2011 extracted from the PATSTAT database, with patents from the main international patent offices. We build training sets using patents in sliding windows of 5 years. On each training set we train 30 different copies of the same neural network, and we define the CS of codes i and j as the scalar product

$$S_{i,j} = \tilde{E}(c_i) \cdot \tilde{E}(c_j)$$

averaged over the 30 runs (see the S1 File for more information). By direct inspection, it is easy to see that many events of new co-occurrence are clearly anticipated by a rise in the similarity of the two codes; in other words, an innovation is often anticipated by the approaching of the contexts in which the two codes are typically seen.
CS forecasts radical innovations
Such results can be generalized and validated systematically. For each training set we consider all the couples of codes never patented together during and before the training set, which we refer to as potential innovations, namely couples that, if patented in the future, would represent an innovation. We compute the average number of co-occurrences per year in the next 10 years for all potential innovations. In Fig 3 we show how a higher CS, as computed with data from 1996 to 2000, corresponds to a higher number of co-occurrences in the 2001-2010 period, implying that potential innovations with higher CS are not only more likely to be patented, but are also more likely to appear in a larger number of patents and become popular. To quantify the ability of CS to represent non-trivial features of the Language of Innovation, we show how the associations it predicts become much more popular than what would be expected by chance, given how popular the two technologies are on their own. To do so, we define innovations in a stricter way than simply as a first co-occurrence of two codes: we consider a never-seen-before association to be an innovation if the observed co-occurrences between technologies in patents are significantly higher than what would be expected in a specific ensemble of random bipartite graphs connecting patents and technologies. This ensemble of graphs is built by constraining the expected values of the degree sequences of technologies and patents to be equal to those observed in the real network.
By generalizing [40] to the case of a sparse bipartite network, similarly to what is done in [41], we assign to each patent-code couple a link probability equal to the product of the patent degree w_p and the code degree w_c, normalized to the total number of links in the network: P^p_c = (w_p · w_c) / N_links, with N_links = Σ_p w_p = Σ_c w_c, where the sums run over all patents p and all codes c, respectively. This probability is an approximation of the exact methods presented in [42,43], which we can apply in this context thanks to the sparsity of the patents-codes network (peak density 0.035%, see S1 File). Therefore, the expected value for the co-occurrences of a given couple of codes c-c′, E_{cc′}, can be calculated straightforwardly as the sum over all patents of the probability that a given patent p possesses both codes, P^p_{cc′} = P^p_c × P^p_{c′}: E_{cc′} = Σ_p P^p_{cc′}. We define the Z-score for a couple of codes as Z_{cc′} = (O_{cc′} − E_{cc′}) / σ_{cc′}, where O_{cc′} is the observed co-occurrence value in the testing set and σ_{cc′} is the standard deviation, calculated as σ_{cc′} = sqrt( Σ_p P^p_{cc′} (1 − P^p_{cc′}) ). Z_{cc′} is a measure of how unexpected the success of the c-c′ couple of technologies is, given the degree sequences. We divide potential innovation events into two classes, based on thresholds on their Z-score: the events with Z-score above the threshold are put in class 1, while the others stay in class 0. The ratio of class 1 over class 0 elements (class imbalance) is kept fixed throughout the years by changing the Z threshold appropriately, and we explore the effect of being more or less restrictive in our definition of innovations by using different class-imbalance ratios. As a control, we compare the CS classifier with the Z-score computed in the training set, which we use as a Degree Predictor (DP). Since we restrict to couples with no co-occurrences in the training set, DP is always smaller than 0. Couples with a strongly negative DP are expected to have a high number of co-occurrences due to their popularity, but are never seen together in the training set. For a comparison with standard monopartite predictors, we refer to the S1 File. In the top panel of Fig 4 we show the Area Under the ROC Curve (AUC) for 3 different classifiers, with two different class-imbalance ratios, across a time span of more than 20 years: DP, CS and a combination of these two, computed as the squared sum of the rankings induced by DP and CS. To be more precise, the Squared Sum (SS) classifier ranks couples according to SS = r_CS^2 + r_DP^2, where r_CS and r_DP are integer numbers ranging from 1, for the couple with the lowest score, to N_c, i.e., the number of potential innovations, for the couple with the highest score. This heuristic approach makes it possible to combine the two methods, removing the effects of the different shapes of the score distributions and giving strong weight to examples where at least one of the two methods gives a very strong score. In the bottom panel of Fig 4 we focus on CS and DP, investigating their ability to forecast radical innovations far in the future. We select the training set 1990-1994, which is in the middle of our database, and move the beginning of the testing-set window up to ten years into the future.
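As a concrete illustration of the null model and the classifier combination just described, here is a short Python sketch following the formulas above (the link probabilities P^p_c = w_p w_c / N_links, the Z-score, and the squared sum of ranks). The degree sequences and scores are hypothetical toy data; note that in a real bipartite network Σ_p w_p and Σ_c w_c coincide by construction, which random toy draws do not guarantee.

```python
import numpy as np

rng = np.random.default_rng(2)
w_p = rng.integers(1, 10, size=1000).astype(float)   # patent degrees (codes per patent)
w_c = rng.integers(1, 500, size=200).astype(float)   # code degrees (patents per code)
n_links = w_p.sum()   # in a real network this also equals w_c.sum()

def z_score(c1, c2, observed):
    """Z-score of a code couple under the degree-constrained null model."""
    p_both = (w_p * w_c[c1] / n_links) * (w_p * w_c[c2] / n_links)  # P^p_{cc'} per patent
    expected = p_both.sum()                                          # E_{cc'}
    sigma = np.sqrt((p_both * (1.0 - p_both)).sum())                 # sigma_{cc'}
    return (observed - expected) / sigma

def squared_sum(cs_scores, dp_scores):
    """SS classifier: squared sum of the ranks induced by CS and DP
    (rank 1 = lowest score, rank N_c = highest score)."""
    def ranks(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(1, len(x) + 1)
        return r
    return ranks(cs_scores) ** 2 + ranks(dp_scores) ** 2

print(z_score(0, 1, observed=12.0))
print(squared_sum(rng.normal(size=5), rng.normal(size=5)))
```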
While the performance of context similarity increases for both class-imbalance ratios, the plot shows how the degree predictor loses its prediction power, and the decrease in the ROC AUC is more pronounced for higher class imbalance, namely a stricter definition of innovation. The DP classifier is basically tracking the auto-correlation between the training set and the test set, which naturally decreases when we advance the testing set farther into the future: its main contribution is to give a very low score to very popular couples that are never seen together in the training set. Those couples will continue to be popular in the test set as well, therefore their Z-score is likely to remain very low. CS performs much better across all the years and farther into the future, demonstrating its ability to forecast significant innovations. It is worth noticing that CS and DP are completely uncorrelated (ρ² < 0.0025), an indication that CS is exploiting information that has nothing to do with the popularity of the codes in the couple, but is really grasping the semantic structure of the Language of Innovation. Given the orthogonality of the two methods, it is unsurprising that, with a minimal time delay, their combination further improves our ability to predict innovations. The AUC for the combined method is in fact higher than for CS alone; it never drops below 0.85 in the 0.25% CI case, and is typically above 0.9 in the 0.05% CI case. As expected, setting a stronger criterion to define innovations (i.e., a smaller CI ratio) reduces noise and improves the quality of the predictions. In the S1 File we compare these results with the performance of standard approaches for link prediction, such as those described in [30], applied to the monopartite projection on the technologies layer of the patents-technologies network. These standard approaches are systematically outperformed by the fully bipartite approach we propose here; see Table 1 for a synthesis of the comparison and the S1 File for the complete analysis. (Fig 4 caption: In blue the DP classifier, in green the CS and in red the SS classifier. The classifiers are trained on 5-year windows and tested out-of-sample over a 5-year-long testing set. In the top panel the testing set immediately follows the training set; CS performs systematically better than DP, and the SS classifier performs better than CS and DP alone, demonstrating that CS grasps a semantic structure uncorrelated with the popularity of the codes. In the bottom panel we fix the embeddings learnt on the 1990-1994 training set and move the beginning of the testing-set window into the future with increasing delay, to test the performance of the CS and DP predictors far in the future. The results show that CS performs better the farther in the future we test it, while the prediction power of DP drops. https://doi.org/10.1371/journal.pone.0230107.g004) CS highlights technological trends With the effectiveness of context similarity in forecasting radical innovation established, we move one step further into the analysis of the dynamics of innovation. We introduce the popularity of a pair of technological codes as a measure of its success and general usage in patents.
The number of co-occurrences of codes is a good proxy for the popularity of a couple at a given time, but it cannot be directly compared across different times because of the positive trend in the number of registered patents per year and the increase in the average number of codes per patent. Both trends imply a general increase in the number of co-occurrences that has nothing to do with the dynamics of technological contexts. To circumvent this problem, we normalize the number of co-occurrences of a couple in a given year by the total number of co-occurrences summed over all possible pairs of technological codes appearing in patents of that year. In particular, we focus on the time interval 1990-2009 and group years into 5-year-long sliding windows. In each window we calculate the context similarity of all pairs of technological codes and define the popularity of a couple of codes (A, B) as Pop_{AB} = log( C_{AB} / Σ_{i,j} C_{ij} ), where C_{ij} is the co-occurrence matrix and C_{AB} is the element of C_{ij} corresponding to the couple (A, B). The logarithmic function is introduced to take into account the fact that the difference between the maximum and minimum numbers of co-occurrences spans several orders of magnitude. In Fig 5 we show the similarity-popularity plane, obtained by rescaling the popularity with a linear transformation so that it ranges over the same interval as the context similarity. The similarity-popularity plane is a powerful instrument to visualize technological trends, as it allows us to represent the rise-and-fall patterns of trajectories. Introducing a 20 × 20 grid on the similarity-popularity plane and decomposing each trajectory into its segments allows us to build a mean velocity vector in each cell by averaging together all segments starting in that cell. This results in a velocity vector field that we integrate and display in Fig 5. In particular, in the left panel we show the velocity flux resulting from averaging all segments in a cell, while in the right panel we disentangle the positive trend from the negative one by conditioning on the past popularity derivative. Combining the information of the three plots of Fig 5, we can clearly identify four different regions of the similarity-popularity plane, each with a characteristic dynamic: • Slow growth. Pairs of codes born with a low context similarity are very likely to end up in the decommissioning area in the centre of the similarity-popularity plane. The positive-trend flux shows that, to avoid this fate, couples should have a popularity of at least 0.3, otherwise they will most likely be readily dismissed. If they do start with a high enough popularity, they experience a slow growth until they reach the stationary region. This is most likely the area where creative innovations emerge, and we plan to investigate it in dedicated future work. • Explosive growth. Couples of codes born with a high context similarity experience a sudden increase of their popularity, which brings them into the stationary region, where they are at the peak of their general usage, before they inevitably fall into the decommissioning region. • Stationary region. Codes with high context similarity and high popularity live in a stationary region characterized by circular trajectories. When they have exhausted their innovative potential, they leave this zone and fall into the decommissioning region. • Decommissioning region.
Once a couple of technological codes has spent all its innovative potential, it falls into the decommissioning region: low popularity and average context similarity, until it stops being used in patents. In the left panel of Fig 5 we also show some examples of real trajectories that showcase the different possible patterns of rise and fall of technological couples occurring in the different regions of the similarity-popularity plane. The decommissioning region is the endpoint of all trajectories; what changes is the way a couple can reach this zone and the time required. If a couple is born with high context similarity, it experiences a sudden growth of its popularity and, after a while in the stationary region, it falls back into the decommissioning region. If, on the other hand, it is born with low context similarity, it will more likely be decommissioned without ever reaching a higher popularity. In Fig 6, for example, we focus on trajectories for which we have a value of CS and popularity every year, and we estimate the probability of avoiding the decommissioning area for different starting regions of the similarity-popularity plane. As expected, high popularity alone is not enough; an appropriate value of context similarity is also required. The instruments showcased in Figs 5 and 6 are powerful tools that can be used to shed light on the different dynamics underlying technological progress. We leave for future work the construction of systematic predictions and the application of such tools to tailor optimal innovation strategies for companies and countries, given the position of their technological baskets in the similarity-popularity plane. Discussion This paper contributes to the established literature on recombinant innovation by providing a novel perspective for characterizing the dynamics of innovation that goes beyond the standard approaches of network science. The inspiration for this approach comes directly from natural language, where neologisms are often built by composition of common words. This same inspiration drives our analytic approach: we treat the set of technological codes used in patents as a vocabulary, and the patents, which aggregate coherent sets of codes, as phrases written in the Language of Innovation. Using techniques borrowed from natural language processing, we are able to give a precise mathematical representation of the semantic contexts of the Innovation Language, and we have shown how such contexts and their dynamics can provide non-trivial forecasts of upcoming innovation trends, significantly beyond what can be achieved with standard network approaches (see S1 File). We believe that the ideas and approaches presented in this work not only provide an intriguing perspective from which to look at innovation as a language, but can also open up a large set of applications and further developments, bringing our understanding of how innovation processes develop one step closer to a quantitative picture. The potential applications of a quantitative framework for innovation are countless, ranging from science policy to R&D strategies for firms, regions and even nations; it can be connected to socioeconomic data and to products, and can be embedded in frameworks for industrial development. More generally, the recently developed field of Economic Complexity is demonstrating that representing social, economic and technological ecosystems as bipartite networks is an extremely powerful approach that has already yielded very important results [44-52].
We believe that the ideas developed in this work will find vast and crucial applications in better characterizing and predicting the dynamics of socio-economic bipartite networks. Supporting information S1 File. In the supporting information we report the analysis of the bipartite patents-codes network, the details of the various tests performed to calculate the embedding vectors, and the comparison of context similarity with other indirect similarity measures. Embedding Vectors We provide the embedding vectors used in this paper. They are arranged in an archive and divided by training sets. Each group corresponds to a 5-year-long training
8,779.4
2020-04-30T00:00:00.000
[ "Computer Science", "Linguistics" ]
The lipid elongation enzyme ELOVL2 is a molecular regulator of aging in the retina Abstract Methylation of the regulatory region of the elongation of very‐long‐chain fatty acids‐like 2 (ELOVL2) gene, an enzyme involved in the elongation of long‐chain polyunsaturated fatty acids, is one of the most robust biomarkers of human age, but the critical question of whether ELOVL2 plays a functional role in molecular aging has not been resolved. Here, we report that Elovl2 regulates age‐associated functional and anatomical aging in vivo, focusing on the mouse retina, with direct relevance to age‐related eye diseases. We show that an age‐related decrease in Elovl2 expression is associated with increased DNA methylation of its promoter. Reversal of Elovl2 promoter hypermethylation in vivo through intravitreal injection of 5‐Aza‐2'‐deoxycytidine (5‐Aza‐dc) leads to increased Elovl2 expression and rescue of the age‐related decline in visual function. Mice carrying a point mutation C234W that disrupts Elovl2‐specific enzymatic activity show electrophysiological characteristics of premature visual decline, as well as early appearance of autofluorescent deposits, well‐established markers of aging in the mouse retina. Finally, we find deposits underneath the retinal pigment epithelium in Elovl2 mutant mice, containing components found in human drusen, a pathologic hallmark of age-related macular degeneration. These findings indicate that ELOVL2 activity regulates aging in the mouse retina, provide a molecular link between polyunsaturated fatty acid elongation and visual function, and suggest novel therapeutic strategies for the treatment of age‐related eye diseases. | INTRODUCTION Chronological age predicts relative levels of mental and physical performance, disease risks across common disorders, and mortality (Glei, 2016). The use of chronological age is limited, however, in explaining the considerable biological variation among individuals of a similar age. Biological age is a concept that attempts to quantify the different aging states influenced by genetics and a variety of environmental factors. While epidemiological studies have succeeded in providing quantitative assessments of the impact of discrete factors on human longevity, advances in molecular biology now offer the ability to look beyond population-level effects and to home in on the effects of specific factors on aging within single organisms. A quantitative model for aging, based on genome-wide DNA methylation patterns using measurements at 470,000 CpG markers from whole-blood samples of a large cohort of human individuals spanning a wide age range, has recently been developed (Hannum, 2013; Horvath, 2013; Levine, 2018). This method is highly accurate at predicting age and can also discriminate relevant factors in aging, including gender, genetic variants, and disease (Gross, 2016; Hannum, 2013). Several models work in multiple tissues (Horvath, 2013; Levine, 2018), suggesting the possibility of a common molecular clock, regulated in part by changes in the methylome. In addition, these methylation patterns are strongly correlated with cellular senescence and aging (Xie, Baylin, & Easwaran, 2019). The regulatory regions of several genes become progressively methylated with increasing chronological age, suggesting a functional link between age, DNA methylation, and gene expression.
The promoter region of ELOVL2, in particular, was the first shown to reliably exhibit increased methylation as humans age (Garagnani, 2012), a finding confirmed in one of the molecular clock models (Hannum, 2013). DHA is the main polyunsaturated fatty acid in the retina and brain. Its presence in photoreceptors promotes healthy retinal function and protects against damage from bright light and oxidative stress. ELOVL2 has been shown to regulate levels of DHA (Pauter, 2014), which in turn has been associated with age-related macular degeneration (AMD), among a host of other retinal degenerative diseases (Bazan, Molina, & Gordon, 2011). In general, long-chain polyunsaturated fatty acids (LC-PUFAs) are involved in crucial biological functions including energy production, modulation of inflammation, and maintenance of cell membrane integrity. It is, therefore, possible that ELOVL2 methylation plays a role in the aging process through the regulation of these diverse biological pathways. In this study, we investigated the role of ELOVL2 in molecular aging in the retina. We find that the Elovl2 promoter region is increasingly methylated with age in the retina, resulting in age-related decreases in Elovl2 expression. These changes are associated with declining retinal structure and visual function in aged mice. We then demonstrate that loss of ELOVL2-specific function results in the early-onset appearance of sub-RPE deposits that contain molecular markers found in drusen in AMD. This phenotype is also associated with visual dysfunction as measured by electroretinography, and it suggests that ELOVL2 may serve as a critical regulator of a molecular aging clock in the retina, which may have important therapeutic implications for diseases such as age-related macular degeneration. | Elovl2 expression is downregulated with age through methylation and is correlated with functional and anatomical biomarkers in aged wild-type mice Previous studies showed that methylation of the promoter region of ELOVL2 is highly correlated with human age (Hannum, 2013). Methylation of regulatory regions is thought to prevent the transcription of neighboring genes and serves as a means of regulating gene expression. We first wished to characterize whether the age-associated methylation of the ELOVL2 promoter previously found in human serum also occurs in the mouse. First, we analyzed ELOVL2 promoter methylation data obtained using bisulfite sequencing in mouse blood, compared it to the available human data for the same region (Wang, 2017), and observed a similar age-related increase in methylation level in the compared regions (Figure S1a). To assay methylation of the Elovl2 promoter in the retina, we used the methylated DNA immunoprecipitation (MeDIP) method (Weber, 2005) and tested the methylation levels of the CpG island in the Elovl2 regulatory region by quantitative PCR with Elovl2-specific primers (Table S1). MeDIP analysis of the CpG island in the Elovl2 regulatory region showed increasing methylation with age in the mouse retina (Figure 1a). This was well correlated with age-related decreases in the expression of Elovl2 as assessed by Western blot and qPCR (Figure 1b and Figure S1b,c), indicating a potential role of age-related changes in DNA methylation in Elovl2 expression. To understand the cell-type- and age-specific expression of Elovl2, we performed in situ hybridization with an Elovl2 RNAscope probe on mouse retina sections (Stempel, Morgans, Stout, & Appukuttan, 2014).
In three-month-old and in 22-month-old mice, we noticed Elovl2 expression in the photoreceptor layer, particularly in the cone layer, as well as in the RPE (Figure 1c and Figure S1e). We observed that the expression of Elovl2 at the mRNA level in the RPE was lower than in the retina (Figure S1d). Importantly, at older stages (22-month-old animals), we noticed Elovl2 mRNA in the same locations but dramatically reduced in expression (Figure 1c). As Elovl2 is also highly expressed in the liver, we performed a time course of Elovl2 expression in this tissue. We observed similar age-related decreases in Elovl2 expression correlated with increases in methylation of the Elovl2 promoter in mouse liver, indicating that age-associated methylation of Elovl2 occurs in multiple tissues in mice (Figure S1f). Visual function is highly correlated with age, including age-related decreases in rod function in both humans and mice (Birch & Anderson, 1992; Kolesnikov, Fan, Crouch, & Kefalov, 2010). In addition, autofluorescent aggregates have been observed in the fundus of aged mice, suggesting that these aggregates may also be an anatomical surrogate of aging in the mouse retina (Chavali, 2011; Xu, Chen, Manivannan, Lois, & Forrester, 2008). To measure and correlate these structural and visual function changes with age in mice, we performed an analysis of wild-type C57BL/6J mice at various timepoints through development, using fundus autofluorescence and electroretinography (ERG) as structural and functional readouts for vision. We observed increasing amounts of autofluorescent aggregates on fundus autofluorescence imaging with increasing mouse age, most prominently at two years (Figure 1d,e and Figure S1g). We also detected an age-associated decrease in visual function, as measured by maximum scotopic amplitude by ERG (Figure 1f and Figure S1h), as shown in previous studies (Kolesnikov et al., 2010; Williams & Jacobs, 2007). These data show that an age-associated accumulation of autofluorescent spots and a decrease in visual function as detected by ERG correlate with Elovl2 downregulation in the mouse retina. (FIGURE 1 caption: ELOVL2 expression is downregulated with age through methylation of its promoter and is correlated with age-related increases in autofluorescent aggregates and decreased scotopic response. (a) Methylation of the ELOVL2 promoter region measured using immunoprecipitation of methylated DNA (MeDIP) followed by qPCR. The ELOVL2 promoter is increasingly methylated with age.) | Manipulating ELOVL2 expression causes age-related changes in cells The WI38 and IMR90 cell lines are well-established cell models of aging (Hayflick, 1965). We used these cell lines to further explore the effect of ELOVL2 promoter methylation on cell health. First, using MeDIP, we found that promoter methylation increased with cell population doubling (Figure 2a), further confirming the strong correlation between increased ELOVL2 methylation and aging. Since methylation of the promoter region has been shown to be inhibitory for transcription (Jones, 1998), we investigated whether the expression level of ELOVL2 inversely correlated with ELOVL2 promoter methylation. Using qRT-PCR, we found that the expression level of the gene decreased with increasing population doubling (PD) number (Figure 2b). We conclude that ELOVL2 expression is downregulated in aging cells, with a correlated increase in ELOVL2 promoter methylation. (FIGURE 2 caption: (a-c) ELOVL2 expression, methylation, and senescence in WI38 cells.
(a) Methylation level of the ELOVL2 promoter region in the human normal lung cell line WI38 by MeDIP/qPCR. Amplicons contain CpG markers cg16867657, cg24724428, and cg21572722. N > 3. (b) ELOVL2 expression by qPCR in WI38 cells at PD35, PD45, and PD55. (c) Fraction of senescent cells measured by beta-galactosidase staining in WI38 cells at a given population doubling upon shRNA-mediated knockdown of the ELOVL2 gene or of control Luc. (d-f) Manipulating DNA methylation in PD52 WI38 cells. (d) ELOVL2 promoter methylation as measured by MeDIP followed by qPCR in untreated control and 5-Aza-dc-treated WI38 cells. (e) ELOVL2 expression by qPCR in untreated control and 5-Aza-dc-treated WI38 cells. (f) Percent senescence by beta-galactosidase staining in WI38 cells treated with 2 µM 5-Aza-dc. (g-j) Manipulating DNA methylation in mice. (g) Experimental setup. Eight-month-old mice were injected intravitreally with 5-Aza-dc five times, every two weeks. ERG measurements were taken at the indicated time points. At 11 months, expression and methylation levels were measured in 5-Aza-dc-treated and control (PBS-treated) mice. (h) Methylation of the ELOVL2 promoter by MeDIP at 11 months after 5-Aza injection. (i) ELOVL2 expression by qPCR after 5-Aza injection. (j) Maximum-amplitude scotopic response by ERG after 5-Aza injection. For panels A-F, N ≥ 3, *p < .05, **p < .01, t test; error bars denote SD. For panels H-J, N = 3, *p < .05, **p < .01, t test; error bars denote SD.) We then asked whether modulating the expression of ELOVL2 could influence cellular aging. First, using shRNA delivered by lentivirus, we knocked down ELOVL2 expression in WI38 and in another model cell line, IMR-90, and observed a significant decrease in proliferation rate (Figure S2a,b), an increased number of senescent cells in culture as detected by SA-β-gal staining (Figure 2c and Figure S2e), and morphological changes consistent with the morphology of high-PD cells (Figure S2f). Altogether, these data suggest that decreasing ELOVL2 expression results in increased aging and senescence in vitro. Next, we tested whether we could manipulate Elovl2 expression by manipulating Elovl2 promoter methylation. We treated WI38 fibroblasts with 5-Aza-2'-deoxycytidine (5-Aza-dc), a cytidine analog that inhibits DNA methyltransferase (Momparler, 2005). Cells were treated for two days with 2 µM 5-Aza-dc followed by a five-day washout period. Interestingly, we found that upon treatment with 5-Aza-dc, ELOVL2 promoter methylation was reduced (Figure 2d) and ELOVL2 expression was upregulated (Figure 2e). Moreover, upon 5-Aza-dc treatment, a lower percentage of senescent cells was observed in culture (Figure 2f). To assess whether the decrease in senescence is caused at least in part by ELOVL2 function, we knocked down ELOVL2 expression in aged WI38 cells and treated them with 5-Aza-dc as previously described. Again, a significantly lower proportion of senescent cells was detected upon the drug treatment, but the effect of the drug treatment was significantly reduced by shRNA-mediated knockdown of ELOVL2, using either of two ELOVL2 shRNAs compared with a control shRNA (Figure S2b). This indicates an important role of ELOVL2 in the process. Altogether, these data suggest that reversing ELOVL2 promoter methylation increases its expression and decreases senescence in vitro.
| DNA demethylation in the retina by intravitreal injection of 5-Aza-dc increases Elovl2 expression and rescues age-related changes in scotopic function in aged mice We next explored whether demethylation of the Elovl2 promoter could have similar effects on Elovl2 expression in vivo. To accomplish this, we performed intravitreal injection of 5-Aza-dc, known to affect DNA methylation in nondividing neurons (Choi, Lee, Kim, Choi, & Lee, 2018; Christman, 2002; Miller & Sweatt, 2007; Wang, 2018), into aged wild-type mice. Eight-month-old C57BL/6J mice were injected with 1 µl of 2 µM 5-Aza-dc in one eye and 1 µl of PBS in the other eye as a control, every other week over a period of 3 months (a total of 5 injections) (Figure 2g). After the treatment, tissues were collected, and RNA and DNA were extracted. We found, using the MeDIP method, that methylation of the Elovl2 promoter decreased after treatment (Figure 2h), with a corresponding upregulation of Elovl2 expression (Figure 2i). Notably, we observed that the scotopic response was significantly improved in the 5-Aza-dc-injected eyes compared with vehicle controls (Figure 2j). These data show that DNA demethylation, which included demethylation of the Elovl2 promoter region, influences and potentially delays age-related changes in visual function in the mouse retina. | Elovl2 C234W mice demonstrate a loss of ELOVL2-specific enzymatic activity We next sought to investigate the in vivo function of Elovl2 in the retina. Since C57BL/6 Elovl2 knockout heterozygous mice display defects in spermatogenesis and are infertile (Zadravec, 2011), we developed an alternative strategy to eliminate ELOVL2 enzymatic activity in vivo. Using CRISPR-Cas9 technology, we generated Elovl2-mutant mice encoding a cysteine-to-tryptophan substitution (C234W). This mutation selectively inactivates the enzymatic activity of ELOVL2 required to process C22 PUFAs, namely to convert docosapentaenoic acid (DPA) (22:5n-3) to 24:5n-3, while retaining elongase activity for other substrates common to ELOVL2 and the paralogous enzyme ELOVL5 (Figure 3a, Figure S3a; Gregory et al., 2013; Zadravec, 2011). A single-guide RNA against the Elovl2 target region, a repair oligonucleotide with a base-pair mutation to generate the mutant C234W, and Cas9 mRNA were injected into C57BL/6N mouse zygotes (Figure 3b). One correctly targeted heterozygous founder with the C234W mutation was identified. No off-target mutations were found based on DNA sequencing of multiple related DNA sequences in the genome (Figure S3b). The C234W heterozygous mice were fertile, and C234W homozygous mice developed normally and showed no noticeable phenotypes. We analyzed the long-chain fatty acid levels in the retinas of homozygous Elovl2 C234W mice to determine whether there was a loss of enzymatic activity specific to ELOVL2. We observed that Elovl2 C234W mice had higher concentrations of C22:5 fatty acid (a selective substrate of ELOVL2 elongation) and lower levels of C24:5 (the primary product of ELOVL2 enzymatic activity) and C22:6 (DHA, the secondary product of ELOVL2) (Figure 3c). We also observed similar changes in fatty acid levels in the livers of Elovl2 C234W mice, as well as lower levels of the longer fatty acids that require the primary product of Elovl2 as a substrate (Figure S4). This suggests that the Elovl2 C234W mice have altered ELOVL2 substrate specificity and inhibited ELOVL2-specific C22 elongase activity.
| Loss of ELOVL2-specific activity results in early vision loss and accumulation of sub-RPE deposits We next investigated whether the Elovl2 C234W mutation affected retinal structure and/or function in vivo. First, we observed a significant number of autofluorescent spots on fundus photography in animals at six months of age, which were not found in wild-type littermates (Figure 4a,b). This phenotype was consistently observed in 6-, 8-, and 12-month-old mutant animals and in both sexes, but was consistently more pronounced in male mice (Figure S5). Importantly, ERG analysis revealed that 6-month-old Elovl2 C234W mice displayed a decrease in visual function compared with wild-type littermates (Figure 4c, Figure S5). To determine the impact of the mutation on the morphology of the retina at the microscopic level, we performed an immunohistological analysis of tissue isolated from wild-type and Elovl2 C234W littermates. Although we did not observe gross changes in the morphology of the retinas of mutant animals, we observed the presence of small aggregates underneath the RPE and found that these sub-RPE aggregates contained the complement component C3 as well as the C5b-9 membrane attack complex, proteins found in human drusenoid aggregates (Figure 4d). In addition, in the mutant sub-RPE aggregates, we also identified other components found in human deposits, such as HTRA1 (Cameron, 2007), oxidized lipids/T15 (Shaw, 2012), and ApoE, an apolipoprotein component of drusen (Li, Clark, Chimento, & Curcio, 2006; Figure 4e). This suggests that the sub-RPE deposits found in the Elovl2 C234W mouse contain some drusen-specific components found in early nonexudative AMD. Taken together, these data implicate ELOVL2-specific activity as a potential functional target in age-related eye diseases. | ELOVL2 as a critical regulator of molecular aging in the retina This work is the first demonstration, to our knowledge, of a functional role for Elovl2 in regulating age-associated phenotypes in the retina. Methylation of the promoter region of ELOVL2 is well established as a robust prognostic biomarker of human aging (Garagnani, 2012; Gopalan, 2017), but whether ELOVL2 activity contributes to aging phenotypes had not yet been documented. (FIGURE 3 caption: Elovl2 C234W mice show a loss of ELOVL2 enzymatic activity. (a) Schematic of ELOVL2 elongation of omega-3 and omega-6 fatty acids. The ELOVL2 substrates 22:5(n-3) and 22:4(n-6) are elongated by ELOVL2 to 24:5(n-3) and 24:4(n-6). This leads to other products such as DHA, DPA n-6, and VLC-PUFAs, which are elongated by ELOVL4. (b) CRISPR-Cas9 strategy to create Elovl2 C234W mice. Elovl2 gRNA, Cas9, and a repair oligo are used to create the Elovl2 C234W mutant. (c) Lipid levels of the ELOVL2 substrate DPA (22:5(n-3)), the ELOVL2 product (24:5(n-3)), and DHA (22:6(n-3)) in retinas of Elovl2 C234W mice and wild-type littermates. N = 4, *p < .05 by Mann-Whitney U test. Error bars represent SD.) In this work, we demonstrated that age-related methylation of the regulatory regions of Elovl2 occurs in the rodent retina and results in age-related decreases in the expression of Elovl2. We show that inhibition of ELOVL2 expression by transfection of ELOVL2 shRNA in two widely used cell models results in increased senescence and decreased proliferation, endpoints associated with aging. Conversely, we show that the administration of 5-Aza-dc leads to demethylation of the ELOVL2 promoter and reduces senescence compared with controls.
Next, we explored whether Elovl2 expression affected age-related phenotypes in vivo. Intravitreal injection of 5-Aza-dc in rodents increased Elovl2 expression and reversed age-related changes in visual function by ERG. Next, we showed a decrease in visual function as assessed by ERG, as well as an increased accumulation of autofluorescent white spots, in Elovl2 C234W mice, in which ELOVL2-specific activity is eliminated, compared with littermate controls. These physiologic and anatomical phenotypes are well-established markers of aging in the mouse retina, suggesting that loss of Elovl2 may be accelerating aging on a molecular level in the retina. Finally, in Elovl2 C234W mice, we observed the appearance of sub-RPE deposits, which colocalize with markers found in human drusen in macular degeneration, a pathologic hallmark of a prevalent age-related disease in the eye. Taken together, we propose that Elovl2 plays a critical role in regulating molecular aging in the retina, which may have therapeutic implications for age-related eye diseases. (FIGURE 4 caption: Elovl2 C234W mice show autofluorescent deposits and vision loss. (a) Representative fundus autofluorescence images of WT and Elovl2 C234W mice at 6 months with representative scotopic ERG waveforms. Note the multiple autofluorescent deposits (arrows) in Elovl2 C234W mice, which are almost absent in wild-type littermates. (b) Quantification of the autofluorescent spots in 6-month-old wild-type and C234W mutant mice. N = 8, *p < .05, t test. Error bars denote SD. (c) Maximum scotopic amplitude by ERG at 6 months in WT and Elovl2 C234W mice. N = 4, *p < .05, t test. Error bars represent SD. (d) Immunohistochemistry of sub-RPE deposits found in Elovl2 C234W mice. Deposits are found underneath the RPE (yellow line) and colocalize with C3 and C5b-9, which are not present in WT controls. Bar = 50 µm. (e) Quantification of sub-RPE aggregates stained with C3, C5b-9, Htra1, T-15, and ApoE, all components found in drusen in AMD. N = 4, **p < .01, t test. Error bars represent SD.) | Methylation of the regulatory region as a mechanism of age-dependent gene expression DNA methylation at the 5-position of cytosine (5-methylcytosine, 5mC) is catalyzed and maintained by a family of DNA methyltransferases (DNMTs) in eukaryotes (Law & Jacobsen, 2010) and constitutes ~2%-6% of the total cytosines in human genomic DNA (28). Alterations of 5mC patterns at CpG dinucleotides within regulatory regions are associated with changes in gene expression (Jones, 1998; Telese, Gamliel, Skowronska-Krawczyk, Garcia-Bassets, & Rosenfeld, 2013). Recently, it has been shown that one can predict human aging using DNA methylation patterns. In particular, increased DNA methylation within the CpG island overlapping the promoter of ELOVL2 was tightly correlated with the age of the individual (Gopalan, 2017). We attempted to demethylate this region using 5-Aza-dc, which is known to inhibit the function of DNMTs, including in nondividing neurons (Choi et al., 2018; Miller & Sweatt, 2007; Wang, 2018). We report that upon intravitreal injection of the compound, DNA methylation is reduced, gene expression is upregulated, and visual function is maintained in the treated eye compared with the contralateral control. These data suggest that Elovl2 is actively methylated by enzymes inhibited by 5-Aza-dc and that age-related methylation either directly or indirectly regulates Elovl2 expression.
Further studies are needed to fully address the directness and specificity of the effects of methylation on Elovl2 expression and visual function. | A molecular link between long-chain PUFAs and age-related eye diseases Our data show that Elovl2 C234W animals display accelerated loss of vision and the appearance of macroscopic autofluorescent spots in fundus images. The exact identity of such spots in mouse models of human diseases is unclear, as they have been suggested to be either protein-rich lipofuscin deposits or accumulating microglia (Chavali, 2011; Combadiere, 2007). Rather than deciphering the identity of these macroscopic spots, we used the phenotype as a potential sign of age-related changes in the retina, as suggested by others (Chavali, 2011). The composition of the aggregates visible at the microscopic level in the sub-RPE layers of the retina is potentially informative with regard to human parallels. Using immunofluorescence, we observed the accumulation of several proteins described previously as characteristic of drusen in human AMD samples. Although our analysis did not exhaust the documented components of drusen in human disease (Crabb, 2014), our data nevertheless show the appearance of these sub-RPE deposits, even in the absence of known confounding mutations or variants correlating with the risk of the disease. What may be the mechanism by which Elovl2 activity results in drusen-like deposits and loss of visual function? ELOVL2 plays an essential role in the elongation of long-chain (C22 and C24) omega-3 and omega-6 polyunsaturated acids (LC-PUFAs) (Figure 3a). LC-PUFAs are found primarily in the rod outer segments and play essential roles in retinal function. These PUFAs include both long-chain omega-3 (n-3) and omega-6 (n-6) fatty acids such as docosahexaenoic acid (DHA) and arachidonic acid (AA). DHA is the major polyunsaturated fatty acid in the retina. The pathogenesis of macular degeneration is complex, with multiple pathways implicated, including complement activation, lipid dysregulation, oxidative stress, and inflammation (Ambati & Fowler, 2012). Despite intense research, the age-related molecular mechanisms underlying drusen formation and geographic atrophy are still poorly understood. Analysis of AMD donor eyes showed decreased levels of multiple LC-PUFAs and VLC-PUFAs in the retina and RPE/choroid compared with age-matched controls (Liu, Chang, Lin, Shen, & Bernstein, 2010). Despite the biochemical, epidemiologic, and genetic evidence implicating PUFAs in AMD, the molecular mechanisms by which LC- and VLC-PUFAs are involved in drusen formation and AMD pathogenesis are still poorly understood. The finding that loss of ELOVL2 activity results in early accumulation of sub-RPE deposits strengthens the relationship between PUFAs and macular degeneration. Since Elovl2 is expressed in both photoreceptors and RPE, whether these phenotypes of visual loss and sub-RPE deposits are due to cell-autonomous functions in the photoreceptors and RPE, respectively, or require an interplay between photoreceptors and RPE still needs to be established. | Role of Elovl2 in aging DNA methylation of the regulatory region of the Elovl2 gene is well established as a cell-type-independent molecular aging clock (Garagnani, 2012; Hannum, 2013; Slieker, Relton, Gaunt, Slagboom, & Heijmans, 2018), with Elovl2 expression detectable in many tissues and the highest levels observed in liver, testis, and the central nervous system, including the retina (https://www.proteinatlas.org).
The high metabolic activity and the critical role of PUFAs, reflecting a high metabolic demand for the products of the ELOVL2 enzyme in the photoreceptors, is most probably the reason why the ocular phenotype is the first to be observed in Elovl2 C234W animals. Further studies are required to establish the role of the gene in tissues other than the retina and the impact of the lack of ELOVL2 products in the lipid bilayers of aged organisms. | CONCLUSIONS In summary, we have identified the lipid elongation enzyme ELOVL2 as a critical component in regulating molecular aging in the retina. Further studies may lead to a better understanding of the molecular mechanisms of aging in the eye, as well as to therapeutic strategies to treat a multitude of age-related eye diseases. Knockdown lentivirus was generated using MISSION shRNA (Sigma) according to the manufacturer's instructions. 5-Aza-2'-deoxycytidine was purchased from TSZ Chem (CAS#2353-33-5) and dissolved in cell culture medium at a concentration of 2 µM. Cells were treated every day for a period of 48 hr. The medium was then replaced with regular cell culture medium, and the cells were cultured for 5 more days. | Senescence-associated β-galactosidase (SA-β-gal) activity The SA-β-gal activity in cultured cells was determined using the Senescence β-Galactosidase Staining Kit (Cell Signaling Technology), according to the manufacturer's instructions. Cells were stained with DAPI afterward, and the percentages of cells that stained positive were calculated with imaging software (Keyence), including three fields of view (10×). | Nucleic acid analysis DNA and RNA were isolated from human fibroblasts and mouse tissues with TRIzol (Ambion) according to the manufacturer's instructions. RNA was converted to cDNA with the iScript cDNA Synthesis Kit (Bio-Rad). qPCR was performed using SsoAdvanced Universal SYBR Green Supermix (Bio-Rad). Methylated DNA immunoprecipitation (MeDIP) was performed by shearing 1 µg DNA with a Bioruptor (Diagenode) for 8 cycles on the high setting, each cycle consisting of 30 s on and 30 s off. Sheared DNA was denatured, incubated with 1 µg 5mC antibody MABE146 (Millipore) for 2 hr, and then with SureBeads protein G beads (Bio-Rad) for 1 hr. After washing, DNA was purified with the QIAquick PCR Purification Kit (Qiagen). qPCR was then performed as above. A list of primers can be found in Table S1. | Western blotting 10 µg of total protein isolated with TRIzol (Invitrogen) from retinas of WT mice at varying stages of development was subjected to SDS-PAGE followed by Western blotting (see Table S2 for the antibodies used in the study). H3 served as the loading control. | Quantification of western blots WB ECL signals were imaged using the Bio-Rad ChemiDoc system. Background-subtracted signal intensities were calculated using ImageJ, separately for ELOVL2 bands and H3 loading-control bands. ELOVL2 levels were calculated by dividing the ELOVL2 signals by the corresponding H3 signals, and then normalized to E15.5. | RNAscope® in situ hybridization In situ hybridization was performed using the RNAscope® Multiplex Fluorescent Assay v2 (ACD Diagnostics). Mouse Elovl2, Rpe65 and Arr3 probes (p/n 542711, p/n 410151, and p/n 486551, respectively) were designed by the manufacturer. Briefly, fresh frozen histologic sections of mouse eyes were pretreated per the manual using hydrogen peroxide and target retrieval reagents such as protease IV.
Probes were then hybridized according to the protocol and detected with TSA Plus® fluorophores fluorescein, cyanine 3, and cyanine 5 (Perkin Elmer). Sections were mounted with DAPI and Prolong Gold Antifade (Thermo Fisher) under a coverslip for imaging and imaged (Keyence BZ-X700). The T7 promoter and sgRNA sequence were synthesized as a long oligonucleotide (Ultramer, IDT) and amplified by PCR. The T7-sgRNA PCR product was gel-purified and used as the template for IVT using the MEGAshortscript T7 Kit (Life Technologies). A repair template encoding the C234W variant was synthesized as a single-stranded oligonucleotide (Ultramer, IDT) and used without purification. | Animal injection and analysis All animal procedures were conducted with the approval of the Institutional Animal Care Committee at the University of California, San Diego (protocol number: S17114). All studies were performed on equal numbers of females and males. The number of animals needed for each experiment was estimated using power analysis. | CRISPR/Cas9 injection C57BL/6N mouse zygotes were injected with the CRISPR-Cas9 constructs. Oligos were injected into the cytoplasm of the zygotes at the pronuclear stage. Mice were housed on static racks in a conventional animal facility and were fed ad libitum with the Teklad Global 2020X diet. | Genotyping, mice substrains To test for the potentially confounding Rd8 mutation, a mutation in the Crb1 gene that can produce ocular disease phenotypes when homozygous, we sequenced all mice in our study for Rd8. The C57BL/6J mice in the aging part of the study were purchased from the Jax Laboratory and confirmed to be negative for the mutation in the Crb1 gene. All C234W mutant animals and their littermates were heterozygous for the Rd8 mutation. For the RPE65 gene, all animals were tested for the presence of the variants. All animals in the study harbor the homozygous RPE65 variant Leu/Leu. | Intravitreal injections For the 5-Aza-dc injection study, mice were anesthetized by intraperitoneal injection of ketamine/xylazine (100 mg/kg and 10 mg/kg, respectively) and given an analgesic eye drop of proparacaine (0.5%, Bausch & Lomb). Animals were intraocularly injected with 1 µl of PBS in one eye and 1 µl of 2 µM 5-Aza-dc dissolved in PBS in the contralateral eye, every other week over a period of 3 months. The drug dosage was estimated based on our cell line experiments and on previously published data (Gore, 2018). Autofluorescence imaging was performed using the Spectralis® HRA + OCT scanning laser ophthalmoscope (Heidelberg Engineering) as previously described (16), using the blue-light fluorescence feature (laser at 488 nm, barrier filter at 500 nm). Using a 55-degree lens, projection images of 10 frames per fundus were taken after centering around the optic nerve. The image most in focus on the outer retina was then quantified blindly by two independent individuals. Electroretinograms (ERGs) were performed following a previously reported protocol (Luo, 2014). Briefly, mice were dark-adapted for 12 hr, anesthetized with a weight-based intraperitoneal injection of ketamine/xylazine, and given a dilating drop of tropicamide (1.5%, Alcon) as well as a drop of proparacaine (0.5%, Bausch & Lomb) as an analgesic. Mice were examined with a full-field Ganzfeld bowl setup (Diagnosys LLC), with electrodes placed on each cornea, a subcutaneous ground needle electrode placed in the tail, and a reference electrode in the mouth (Grass Telefactor, F-E2).
Lubricant (Goniovisc 2.5%, HUB Pharmaceuticals) was used to provide contact between the electrodes and the eyes. Amplification (at 1-1,000 Hz bandpass, without notch filtering), stimuli presentation, and data acquisition were programmed and performed using the UTAS-E 3000 system (LKC Technologies). For scotopic ERG, the retina was stimulated with a xenon lamp at −2 and −0.5 log cd·s/m². For photopic ERG, mice were adapted to a background light of 1 log cd·s/m², and light stimulation was set at 1.5 log cd·s/m². Recordings were collected and averaged in the manufacturer's software (Veris, EDI) and processed in Excel. | Immunostaining Eyeballs were collected immediately after sacrificing the mice, fixed in 4% paraformaldehyde for 2 hr, and stored in PBS at 4°C. For immunostaining, eyeballs were sectioned, mounted on slides, and then incubated with 5% BSA 0.1% Triton-X PBS blocking solution for 1 hr. Primary antibodies (see Table S2 for the antibodies used in the study) were added 1:50 in 5% BSA PBS and incubated at 4°C for 16 hr. Following 3× PBS washes, secondary antibodies were added 1:1,000 in 5% BSA PBS for 30 min at room temperature. Samples were then washed 3× with PBS, stained with DAPI for 5 min at room temperature, mounted, and imaged (Keyence BZ-X700). | Lipid analysis Lipid extraction was performed by homogenization of tissues in a mixture of 1 ml PBS, 1 ml MeOH, and 2 ml CHCl3. Mixtures were vortexed and then centrifuged at 2,200 g for 5 min to separate the aqueous and organic layers. The organic phase containing the extracted lipids was collected, dried under N2 and stored at −80°C before LC-MS analysis. Extracted samples were dissolved in 100 µl CHCl3; 15 µl was injected for analysis. LC separation was achieved using a Bio-Bond 5U C4 column (Dikma). The LC solvents were as follows: buffer A, 95:5 water:methanol + 0.03% NH4OH; buffer B, 60:35:5 isopropanol:methanol:water + 0.03% NH4OH. A typical LC run consisted of the following for 70 min after injection: 0.1 ml/min 100% buffer A for 5 min, 0.4 ml/min linear gradient from 20% buffer B to 100% buffer B over 50 min, 0.5 ml/min 100% buffer B for 8 min, and equilibration with 0.5 ml/min 100% buffer A for 7 min. FFA analysis was performed using a Thermo Scientific Q Exactive Plus fitted with a heated electrospray ionization source. The MS source parameters were 4 kV spray voltage, with a probe. To explore methylation of the promoter region of ELOVL2, we first designated the promoter as −1000 bp to +300 bp with respect to the strand and transcription start site (TSS) and then identified profiled methylation CpGs using BEDtools (v2.25.0) (Quinlan & Hall, 2010). We then binned each profiled CpG in the promoter region into 30-bp nonoverlapping windows, considering CpGs with at least 5 reads. We then grouped the 136 C57BL/6 control mice into five quantile age bins and took the average methylation for each age bin and each window. All analysis was performed using custom Python (version 3.6) scripts, and plots were generated using matplotlib and seaborn. To explore the homologous region in humans, we accessed human blood methylome data generated using the Human Illumina methylome array, downloaded from GEO using accessions GSE36054 (Alisch, 2012) and GSE40279 (Hannum, 2013), for a total of 736 samples. Methylation data were quantile-normalized using Minfi (Aryee, 2014), and missing values were imputed using the Impute package in R. These values were adjusted for cell counts as previously described (Gross, 2016).
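The window-and-bin procedure just described can be sketched in a few lines of pandas. This is an illustrative reconstruction, not the authors' custom scripts: the column names and the toy data are hypothetical, while the parameters (promoter span −1000 bp to +300 bp, 30-bp windows, ≥5 reads, five quantile age bins) follow the text.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Hypothetical per-CpG calls: promoter-relative position (bp), methylation
# fraction, read depth, and the age of the mouse the call came from.
df = pd.DataFrame({
    "pos":   rng.integers(-1000, 300, size=5000),
    "meth":  rng.uniform(0.0, 1.0, size=5000),
    "reads": rng.integers(1, 30, size=5000),
    "age":   rng.uniform(4.0, 110.0, size=5000),
})

df = df[df["reads"] >= 5]                        # keep CpGs with at least 5 reads
df["window"] = (df["pos"] + 1000) // 30          # 30-bp nonoverlapping windows
df["age_bin"] = pd.qcut(df["age"], q=5)          # five quantile age bins

# Average methylation for each (age bin, window), as described in the text.
profile = df.groupby(["age_bin", "window"], observed=True)["meth"].mean()
print(profile.head())
```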
To enable comparisons across different methylation array studies, we implemented beta-mixture quantile dilation (BMIQ; Gross, 2016; Teschendorff, 2013) and used the median of the Hannum et al. dataset as the gold standard (Hannum, 2013). We then identified probes within the promoter region of ELOVL2 in the human reference (hg19, UCSC), identifying 6 total probes in the commonly profiled region. We then grouped the 787 individuals into 5 quantile age bins and grouped probes into 10-bp nonoverlapping windows. These data were then analyzed and plotted identically as for mice. ACKNOWLEDGMENTS We thank Dr. Trey Ideker for supporting the work of T.W. DATA AVAILABILITY STATEMENT The data that support the findings of this study are openly available in Dryad at https://doi.org/10.6075/J0TX3CQ9
8,234.8
2019-10-07T00:00:00.000
[ "Medicine", "Biology" ]
Learning System for Proactive Pre-emptive Unified Spectrum Handoff (pro-push) in Cognitive Radio Mobile Ad Hoc Networks The aim of this study is to introduce a conceptual framework for the new Intelligent Proactive Pre-emptive Unified Spectrum Handoff (IP PRO-PUSH) algorithm, a promising intelligent technology for addressing the spectrum scarcity challenges in Cognitive Radio Mobile Ad hoc Networks (CR-MANETs). The fact that spectrum is a limited natural resource poses a technological challenge: the available frequencies must be used efficiently and be able to accommodate new services, which will certainly require more bandwidth in the future. However, over the past years, traditional approaches to spectrum management have been challenged by new insights into the actual use of spectrum. In order to improve the utilization of the overall radio spectrum, Cognitive Radio (CR) is a useful solution to this low utilization. The variation in spectrum bands is called spectrum mobility. The objective of the current study is to investigate Intelligent-Based (IB) cognitive radio learning for IP PRO-PUSH and local routing based on spectrum mobility in mobile Cognitive Radio Ad hoc Networks (CR-MANETs), along with a conceptual framework and its connections. INTRODUCTION Fixed spectrum assignment has led to poor spectrum efficiency. According to the Federal Communications Commission (FCC), temporal and geographical variations in the utilization of the assigned spectrum range from 15 to 85% (FCC, 2003). Therefore, dynamic spectrum access was proposed to increase spectrum efficiency through opportunistic spectrum usage. The low efficiency is due to the spectrum being underutilized by the licensed users, or Primary Users (PUs), most of the time. The Cognitive Radio (CR) users, or Secondary Users (SUs), can use these empty channels in an opportunistic manner. CR implies an adaptive and intelligent radio that is aware of its surrounding environment (Mitola and Maguire, 1999). Spectrum handoff refers to changing the operating frequency of an SU. A spectrum handoff happens when the status of the current channel does not satisfy the quality of service or a PU appears in a licensed band. The SU performs a spectrum handoff in order to capture a better channel or to evacuate the current channel for the PU. The available frequency bands show different characteristics in CR networks. Therefore, the radio conditions and PU activities must be considered for spectrum band characterization. In addition, a dynamic decision scheme is needed that considers the spectrum sensing and channel characteristics to maximize efficient CR transmission (Duan and Li, 2011). Available spectrum bands vary over time as well as with the SUs' movements. The PUs' activities, the SUs' mobility and channel heterogeneity must all be considered in spectrum handoff and mobility management protocols. Therefore, integrated mobility and handoff management is essential for a reliable, fast and smooth spectrum handoff and transition (Nejatian et al., 2012a). An integrated mobility function must consider spectrum mobility in both the time and space domains. In other words, when designing a mobility management protocol, spectrum mobility, user mobility and channel quality degradation must be considered jointly (Lee and Akyildiz, 2012).
As mentioned previously, learning is one of the main aspects of CR. In the learning process, the CR collects knowledge based on the evaluation of the output of the decision-making process. The CR uses this knowledge for better orientation in the future. In this study, we emphasize Intelligent-Based (IB) cognitive radio learning for Intelligent Proactive Pre-emptive Unified Spectrum Handoff (IP PRO-PUSH). Based on spectrum mobility in CR-MANETs, the segments and connections for intelligent integrated spectrum handoff management in CR-MANETs are highlighted. A conceptual model for the new IP PRO-PUSH algorithm is presented, a promising intelligent technology for intelligently addressing the spectrum scarcity challenges in Cognitive Radio Mobile Ad hoc Networks (CR-MANETs). LITERATURE REVIEW Mobility is an essential function in cognitive radio networks because of its effect on network properties such as channel capacity, routing, connectivity and coverage (Nejatian et al., 2012b). The existence of available spectrum holes is random due to the randomness of the PUs' presence and the unpredictability of the SUs' demands. Therefore, the spectrum holes shift over time because of the PUs' activities and over space because of the SUs' mobility. This shifting of spectrum holes is defined as spectrum mobility. Spectrum mobility leads to spectrum handoff, which refers to evacuating the operating frequency of an SU. During spectrum mobility, the PU reclaims the particular licensed band occupied by the SU. The SU's ongoing data transmission is transferred to another empty band of the spectrum. The mobility function guarantees a fast channel evacuation with minimum network performance degradation. Spectrum handoff management is challenging in ad hoc networks due to the lack of a central entity for managing and controlling the spectrum handoff procedure. In Duan and Li (2011), a handoff management scheme is proposed. This scheme determines the optimal spectrum band based on a multi-criteria decision-making strategy, which considers the estimated transmission time, the PU presence probability and the spectrum availability time. The authors of Song and Xie (2011a, b) proposed a proactive spectrum handoff scheme based on the statistics of channel utilization. Collisions among SUs are also eliminated using a distributed channel allocation scheme. In Nejatian et al. (2012c), an established route from a source node to a destination node is considered, and different scenarios that lead to handoff initiation along this route are introduced. Considering these events, namely node mobility and spectrum mobility, the authors introduced a conceptual model for unified handoff management in CR-MANETs. The authors of Nejatian et al. (2012b) characterized the availability of spectrum bands in CR-MANETs. They showed that channel heterogeneity must be considered in terms of transmission range, because it increases the blocking probability of spectrum handoff. Based on their findings, a unified system that considers spectrum mobility in the time and space domains, as well as topology changes, must be investigated. Nejatian et al. (2013a, 2013b, 2014)
proposed the Preemptive Unified Spectrum Handoff (PUSH) scheme in Cognitive Radio Mobile Ad hoc Networks (CR-MANETs) over an established route in which channel availability depends on the PUs' activity, the SUs' mobility and channel heterogeneity. First, they introduced an analytical model for the Unified Spectrum Handoff (USH) scheme, in which the Secondary Users (SUs) move to another unused spectrum band, giving priority to a Primary User (PU) (Nejatian et al., 2013c, 2013d). Then, they proposed the PUSH method, in which a handoff threshold is used to start the handoff pre-emptively. When a channel handoff cannot be performed due to the SU's mobility, a Local Flow Handoff (LFH) is performed. The proposed PUSH algorithm uses cognitive link availability prediction, while considering the interference to the PUs, to estimate the maximum link availability period with PU interference avoidance. Based on the analytical model, channel heterogeneity and the SUs' mobility affected the performance of the handoff management method more than the PUs' activity did.

As illustrated in Fig. 1, the proposed PUSH framework considers spectrum-aware handoff management based on interactions between routing (layer 3) and the physical layer (layer 1). The proposed scheme is equipped with an algorithm to identify an appropriate channel based on channel quality, the spectrum and node mobility. The spectrum analysis provides information about the environmental conditions and the mobility of the SUs. Thus, a precise and cooperative environment- and location-aware mechanism is used in PUSH. The spectrum analysis segment requires information about the situations of the SUs as well as the PUs' activities. Hello packets in the neighbour discovery messages are modified so that each node provides or shares information about its free channels with its neighbour nodes. The neighbours can also obtain information about the situation of a node based on the Received Signal Strength Indicator (RSSI) and other appropriate parameters. Under heterogeneous channel conditions, in which different channels have different transmission and broadcasting ranges, the channel availability time depends on the channel transmission range. On the other hand, the channel availability time varies rapidly due to the SUs' mobility. Hence, the integrated spectrum handoff management system considers the above factors along with SINR and power management schemes to avoid interference with PUs. When a channel handoff cannot be performed due to the SUs' mobility, a pre-emptive local flow handoff is performed. The motivation of this mechanism is to maintain end-to-end connectivity once a route is established for the purpose of sending data and to increase the route maintenance probability. The PUSH algorithm also predicts the cognitive link availability and estimates the maximum link availability time considering the interference to the PUs. The motivation of this mechanism is to reduce the number of spectrum handoffs and the number of route error requests.

Although both the simulation and analytical results demonstrated an improvement in route maintenance probability based on the LFH and cognitive link availability prediction mechanisms in PUSH, the learning area of the PUSH algorithm is still in a nascent stage.
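The trigger logic at the heart of PUSH can be summarised in a short sketch. The following is a minimal illustration only, not the published algorithm: the function name, inputs and decision structure are assumptions, and the actual PUSH scheme derives the handoff threshold and the link-availability prediction analytically.

```python
# Minimal sketch of a PUSH-style pre-emptive trigger as described above.
# All names and the decision structure are illustrative assumptions; the
# published PUSH scheme defines the threshold and predictor analytically.

def handoff_decision(predicted_link_time: float,
                     handoff_threshold: float,
                     free_channels: list,
                     channel_handoff_feasible: bool) -> str:
    """Decide which action an SU should take on its current link."""
    if predicted_link_time > handoff_threshold:
        return "stay"                 # link expected to remain available
    if channel_handoff_feasible and free_channels:
        return "spectrum_handoff"     # pre-emptively move to a free channel
    return "local_flow_handoff"       # SU mobility: reroute the flow locally
```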
Further work on the PUSH algorithm may proceed along the improvement of the learning algorithms, so as to allow fast, appropriate and accurate convergence of the decision-making process. Using intelligent systems such as neural networks, fuzzy sets and systems, and Bayesian networks can be a good solution for improving the learning algorithms.

This research introduces IB cognitive radio learning for PUSH. The new IP PRO-PUSH algorithm is presented, a promising intelligent technology for addressing the spectrum scarcity challenges in CR-MANETs. To the best of our knowledge, no existing work considers intelligent integrated spectrum handoff management in CR-MANETs. The current study leads to the optimization of CR-MANETs by exploiting intelligent, efficient usage of the available wireless spectrum, and it can potentially be used for further development and commercialization. In the next section we introduce the concept of IP PRO-PUSH and its main connections.

INTELLIGENT-BASED COGNITIVE RADIO LEARNING FOR INTELLIGENT PROACTIVE PRE-EMPTIVE UNIFIED SPECTRUM HANDOFF

Cognitive cycle in cognitive radio: CRs are intelligent wireless radios that are able to adapt and configure themselves to satisfy the quality of service. In order to achieve this goal, they must observe the environment, adapt to the spectrum, make decisions, perform the necessary actions and learn from their experiences (Maqbool et al., 2013a, 2013b, 2014). These activities form a cycle known as the cognitive cycle. The cognitive cycle elements and their relations are illustrated in Fig. 2.

Fig. 2: Elements of the cognitive cycle and their connections

As can be seen, the last stage in the cognitive cycle is related to the learning aspect of this cycle. In this stage, the CR collects knowledge based on the evaluation of the output of the decision-making process. The CR uses this knowledge for better orientation in the future. Therefore, using an IB learning part can improve the performance of the CR.

Intelligent learning cycle for proactive pre-emptive unified spectrum handoff: Considering the cognitive cycle in CR, an IB learning part for the PUSH algorithm is proposed. Figure 3 illustrates the cognitive cycle with the IB learning part for PUSH. Here, each SU is equipped with an Intelligent Core (IC), an adaptive and intelligent decision-making unit. The IC improves the spectrum handoff performance of the SU by learning how to choose the optimal action from a set of actions through repeated interactions with a random and time-varying environment in which spectrum availability varies over time as well as over space.
The action is chosen by the decision-making part of the IC (ICD) based on a probability distribution kept over the action set learnt by the learning part of the IC (ICL) and an estimation of the channel state predicted by the predictive part of the IC (ICP). At each instant, the given action serves as the input to the random and time-varying environment. The environment in turn responds to the taken action with a reinforcement signal. The action probability vector is updated based on the reinforcement learning feedback from the environment. The objective of using the IC is to find the optimal action from the finite action set so that the average penalty received from the environment is minimized. The IC is also used to recognize the parameters of an unknown traffic distribution and to find the optimal action in the pre-emptive unified spectrum handoff strategy.

The influence of different parameters and events on channel availability is considered in the learning, prediction and decision parts of the IC in order to obtain an IB unified model for channel availability and spectrum handoff in CR-MANETs.

Intelligent proactive pre-emptive unified spectrum handoff framework: Based on the IB learning cycle introduced in the previous subsection, the Intelligent Proactive Pre-emptive Unified Spectrum Handoff (IP PRO-PUSH) framework is proposed. Figure 4 illustrates the proposed framework for IP PRO-PUSH.

As illustrated in Fig. 4, the proposed IP PRO-PUSH framework considers spectrum-aware handoff management based on interactions between routing (layer 3) and the physical layer (layer 1). The proposed scheme is equipped with an IC to manage the unified spectrum handoff intelligently. The IC consists of three segments: the ICL, ICD and ICP.

In this IB framework, a neural network is used in the ICL segment. In this segment, the CR collects knowledge based on the evaluation of the output of the decision-making process. The CR uses this knowledge for better orientation in the future. Here, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is used as the learning tool.

In the prediction segment, a Bayesian network is used as the predictive tool. Bayesian inference allows informative priors, so that prior knowledge or the results of previous experiences learnt by the ICL can be used to inform the ICD of the current state of the system. The CR observes the radio environment by sensing. Interference information, which is the output of the radio analysis part, is sent to the prediction segment. The ICP predicts the channel capacity based on the interference information as well as the experience learnt from the learning stage, and sends this information to the ICD.

Spectrum hole information is sent to the decision-making segment by the spectrum analysis part. For decision-making, the spectrum hole information, the time-varying properties of the radio environment and the channel capacity are needed. In the decision-making segment, a fuzzy system is used as the ICD tool. The fuzzy decision-making algorithm is capable of adapting to network conditions and traffic changes. It can also incorporate uncertain, conflicting metrics to make a comprehensive decision at a low cost. In this stage, the CR determines the data rate and transmission mode needed to provide the QoS. Therefore, the CR must change its parameters according to the decision before transmitting the signal. The CR must learn from its experiences and gather knowledge for better spectrum orientation in the future.
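The action-probability update performed by the ICL can be illustrated with a standard linear reward-penalty learning automaton. This is a generic sketch, not the specific update used in IP PRO-PUSH; the learning rates, action set and class names are assumptions.

```python
# Generic linear reward-penalty (L_RP) automaton, sketched as an illustration
# of the action-probability update described above; not the IP PRO-PUSH
# specification. Learning rates and the action set are assumed values.
import random

class LearningAutomaton:
    def __init__(self, n_actions: int, reward_rate: float = 0.1,
                 penalty_rate: float = 0.05):
        self.p = [1.0 / n_actions] * n_actions  # action probability vector
        self.a = reward_rate
        self.b = penalty_rate

    def choose(self) -> int:
        """Sample an action (e.g., a candidate channel) from p."""
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action: int, penalty: bool) -> None:
        """Shift probability mass toward rewarded actions and away from
        penalized ones; the vector stays normalized by construction."""
        n = len(self.p)
        for i in range(n):
            if not penalty:  # environment rewarded the action
                if i == action:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] -= self.a * self.p[i]
            else:            # environment penalized the action
                if i == action:
                    self.p[i] -= self.b * self.p[i]
                else:
                    self.p[i] += self.b * (1.0 / (n - 1) - self.p[i])
```

Over repeated interactions, the probability mass concentrates on the action with the lowest average penalty, which matches the stated objective of the IC.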
The presented IP PRO-PUSH algorithm is a promising intelligent technology for addressing the spectrum scarcity challenges in Cognitive Radio Mobile Ad hoc Networks (CR-MANETs). It can also intelligently reduce the handoff blocking and link failure probabilities due to user mobility. Additionally, IP PRO-PUSH reduces the route request frequency, hence maintaining end-to-end connectivity. It is also expected to intelligently decrease the number of spectrum handoffs in these networks by considering the effects of different mobility events and parameters on spectrum handoff. Above all, the IP PRO-PUSH algorithm is expected to improve the bandwidth utilization performance of CR-MANETs.

CONCLUSION

The last stage in the cognitive cycle is related to the learning aspect of this cycle. In this stage, the CR collects knowledge based on the evaluation of the output of the decision-making process. The CR uses this knowledge for better orientation in the future. This area is still in a nascent stage. IP PRO-PUSH may proceed along the improvement of the learning algorithms, so as to allow fast, appropriate and accurate convergence of the decision-making process. Using intelligent systems such as neural networks, fuzzy sets and systems, and Bayesian networks is a good solution for improving the learning algorithms for PUSH. In this study, we emphasized Intelligent-Based (IB) cognitive radio learning for IP PRO-PUSH based on spectrum mobility in CR-MANETs, highlighting the segments and connections needed for intelligent integrated spectrum handoff management in CR-MANETs. A conceptual framework for the new IP PRO-PUSH algorithm was presented, a promising intelligent technology for addressing the spectrum scarcity challenges in Cognitive Radio Mobile Ad hoc Networks (CR-MANETs).

Fig. 3: Intelligent-based learning for the proactive pre-emptive unified spectrum handoff management scheme
Fig. 4: The proposed framework for IP PRO-PUSH
The Neurophysiological Impact of Subacute Stroke: Changes in Cortical Oscillations Evoked by Bimanual Finger Movement

Introduction To design more effective interventions, such as neurostimulation, for stroke rehabilitation, there is a need to understand the early physiological changes that take place and that may be relevant for clinical monitoring. We aimed to study changes in neurophysiology following recent ischemic stroke, both at rest and during motor planning and execution. Materials and Methods We included 10 poststroke patients, between 7 and 10 days after stroke, and 20 age-matched controls to assess changes in cortical motor output via transcranial magnetic stimulation and in the dynamics of oscillations, as recorded using electroencephalography (EEG). Results We found significant differences in cortical oscillatory patterns comparing stroke patients with healthy participants, particularly in the beta rhythm during motor planning (p = 0.011) and execution (p = 0.004) of a complex movement with fingers from both hands simultaneously. Discussion The stroke lesion induced a decrease in event-related desynchronization in patients, in comparison to controls, providing evidence for decreased disinhibition. Conclusions After a stroke lesion, the dynamics of cortical oscillations are changed, with increasing neural beta synchronization in the course of motor preparation and performance of complex bimanual finger tasks. The observed patterns may provide a potential functional measure that could be used to monitor and design interventional approaches in subacute stages.

Introduction

Stroke represents the third major cause of death and is one of the leading sources of disability, contributing to a decline in global quality of life. Although several approaches are applied to the rehabilitation of patients, current interventions lack efficacy [1]. In order to develop new and more effective interventions for neurorehabilitation, and particularly for the rehabilitation of stroke patients, it is fundamental to understand subacute physiological changes of potential neuroplastic significance following the event.

After a brain lesion, neural networks are damaged, which triggers the reorganization of neural connectivity and brain rhythms. Plastic changes may occur not only in the lesioned but also in the contralateral hemisphere [2]. It is frequently reported in the literature that the activity of the unaffected hemisphere increases in the first days after the cerebrovascular accident [2,3]. After this period, at 3 to 6 months following the event, a relative increase in the activity of the areas adjacent to the lesion is frequently observed, concurrent with functional improvements [3].

Functional techniques to assess brain changes include electroencephalography, magnetoencephalography, and functional magnetic resonance imaging [2]. Electroencephalography (EEG) can potentially contribute to the understanding of the physiology of brain reorganization [4], in particular as concerns the study of the dynamics of oscillations [5]. Brain oscillations can appear at diverse frequencies, associated with distinct levels of synchrony in neuronal networks [6]. The visual alpha rhythm is known to respond to a stimulus or instruction with a decrease in amplitude or power, resulting in event-related desynchronization (ERD). Event-related synchronization (ERS) occurs in the absence of stimuli or in idle states.
It is therefore believed that alpha ERS is associated with cortical inhibition, whereas ERD is related to the reduction of inhibition [7]. Current knowledge, nevertheless, also points to a role for other types of alpha rhythm in attention and conscious awareness [8]. Performing a voluntary movement or receiving instructions to execute a motor task is generally associated with a decrease in upper alpha (mu rhythm) and in beta rhythms [6,7] in regions around the sensorimotor areas [6,9]. This reduction of movement-related beta power is thought to be associated with the excitability of the primary motor cortex and to be affected by GABA (gamma-aminobutyric acid) levels [10]. Preparation and execution of motor tasks might reveal altered activity patterns in stroke, which may have significant implications for the design of therapeutic interventions [11].

Changes in neural synchronization and oscillatory activity can play a role in the pathophysiology of distinct disorders, such as stroke [7]. The poststroke changes in brain oscillations, particularly those accompanying movements of the impaired limbs, are worthy of further research [10]. Therefore, the exploration of biomarkers to strengthen stroke investigation has been advocated [12], and recent works have studied EEG activity in stroke during motor tasks, such as unilateral [11-13] or bilateral wrist movements [13].

Here, we determined motor thresholds as a measure of cortical excitability and assessed ERD and ERS in the course of motor tasks, both in healthy subjects and in poststroke patients. To the best of our knowledge, this is the first time that the neurophysiology of stroke patients has been analysed shortly after the event (between 7 and 10 days poststroke) by EEG preceding and during simple and complex fine-tuned unilateral and bilateral motor tasks performed with both the affected and unaffected arms and hands, with a direct comparison to a healthy control sample that performed the same experiment. Our aim was to study the impact of a subacute ischemic stroke on brain neurophysiology at rest and during motor preparation and execution. Moreover, we investigated whether significant changes in the EEG brain activity pattern following stroke could be correlated with the motor performance of the affected upper limb, as assessed by the Wolf Motor Function Test.

Materials and Methods

The present work was conducted in accordance with the Declaration of Helsinki and received approval from the Ethics Committee of the Faculty of Medicine of the University of Coimbra. Written informed consent was collected from each participant.

2.1. Sample. We included 10 patients who were recruited from the Neurology Department of the Coimbra University Hospital after a first-ever middle cerebral artery stroke and fulfilled our requirements: (i) 18 to 85 years of age; (ii) corticosubcortical ischemic lesion; (iii) stroke event 7 ± 3 days before; (iv) motor deficit of the upper extremity; (v) score ≤ 1 on the modified Rankin Scale previous to the event; and (vi) ability to comprehend and follow the tasks. On the other hand, patients who (i) were not clinically stable, (ii) were diagnosed with cognitive impairment or dementia, (iii) had a history of epileptic seizures, (iv) presented posterior or global aphasia, (v) presented neglect, (vi) abused drugs or alcohol, or (vii) presented contraindications to transcranial magnetic stimulation as assessed by a questionnaire based on published guidelines [14,15] were excluded.
Moreover, we recruited 20 age-matched healthy controls. Demographic data from the participants, both healthy individuals and stroke patients, are presented in Table 1. In Table 2 we present clinical data from our sample of stroke patients.

Motor performance was evaluated with the Wolf Motor Function Test (WMFT), whose timed tasks were performed with the affected upper limb. Each movement had a maximum length of 120 seconds. This way, if a patient could not perform the task, it was attributed a duration of 120 seconds. The quality of the movements was also evaluated by the functional ability scale (FAS) [17], wherein we attributed a score of "0" when a given movement was not performed and a maximum of "5" points per task, if it appeared to be normal, counting up to a maximum of 75 points.

Electroencephalography (EEG) Task. In this study, we used the same methodological EEG procedure as in our prior works addressing oscillatory changes induced by TMS [18,19]. EEG was conducted using a SynAmps2 RT amplifier and Scan 4.5 software (Compumedics, Charlotte, NC). Electrode positioning was based on the International 10-20 montage, through the use of a 64-channel cap (QuickCap, NeuroScan, USA), including a ground placed on the forehead, close to FPZ, and an online reference channel close to CZ. The signal was acquired at a 1000 Hz sampling rate. We applied a high-pass filter from the DC level and a low-pass filter at 200 Hz.

For the study of posterior alpha rhythms, we recorded electrical activity during 180 seconds of an eyes opening and closure task (blocks of 10 sec). To analyse differences in cortical oscillatory patterns along motor preparation and execution, we instructed participants to perform two different motor tasks, namely 90° shoulder flexion and thumb opposition. Motor tasks were executed with both upper limbs, first individually and then simultaneously. Each participant was instructed to perform the movement and sustain it for 15 seconds and then reposition and rest for another 15 seconds. Subjects performed 6 trials of 30 seconds per movement, divided into blocks of 6 secs locked to the beginning of the task, in a 180 sec experiment for each movement, totalling 540 secs per task and 1080 secs for the complete motor paradigm. Triggers time-locked to the beginning of each movement were inserted in the EEG file during the online recording of all tasks.

We carried out signal analysis with Scan 4.5 software (Compumedics, Charlotte, NC) and with the MATLAB (version R2017b, The MathWorks, USA) toolbox EEGLAB v.14.1.1b [20]. After recording the data, we filtered the signal offline from 1 to 45 Hz and downsampled the data to 250 Hz. The average of all channels was used for offline re-referencing. Moreover, we ran custom MATLAB scripts (adapted from our previous works by Castelhano et al. [21] and by Silva et al. [22]) to quantify alpha (8-13 Hz), mu (10-12 Hz), and beta (15-25 Hz) power in the specified electrode clusters (Figure 1). We selected posterior electrodes for the analysis of visual alpha in the occipital area. For the motor tasks, in order to quantify motor rhythms, namely the mu and beta bands, we selected the electrodes located over the central motor regions. During the acquisition, we inserted online manual triggers in the EEG file marking the events that could disturb the signal and should be rejected. We also used an offline procedure implemented in EEGLAB, with default parameters, that included a 1 Hz high-pass filtering step, a voltage threshold, and a visual confirmation of the muscle artifacts.
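As a rough illustration, the offline preprocessing chain just described (1-45 Hz filtering, downsampling to 250 Hz, average re-referencing) could look as follows in MNE-Python; the study itself used Scan 4.5 and EEGLAB, so this is only an assumed equivalent, and the file name is a hypothetical placeholder.

```python
# Approximate MNE-Python equivalent of the offline preprocessing described
# above; the original pipeline used Scan 4.5 and EEGLAB, and the file name
# here is a hypothetical placeholder.
import mne

raw = mne.io.read_raw_cnt("subject01.cnt", preload=True)  # Neuroscan file
raw.filter(l_freq=1.0, h_freq=45.0)   # offline 1-45 Hz band-pass
raw.resample(250)                     # downsample from 1000 Hz to 250 Hz
raw.set_eeg_reference("average")      # re-reference to the channel average
```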
Moreover, we computed Independent Component Analysis for further cleaning of the data and to remove components such as eye blinks. The pseudo-Wigner-Ville transformation was applied, according to the works by Uhlhaas et al. [23] and others [21,24-26], to perform a time-frequency analysis. The amplitude and phase were computed for all periods of interest, with the epochs defined as described below, for all frequency bins from 5 to 40 Hz (resolution of 1 Hz/frequency bin). The posterior alpha rhythm was assessed from -2000 to 10000 milliseconds, where the period between -2000 milliseconds and 0 was defined as the baseline. Quantification of motor rhythms, in turn, was computed between -2000 and 0 milliseconds for premovement and preparation and from 0 until 4000 milliseconds, time-locked to the beginning of the movement. We also mapped the topographical distribution in EEGLAB, using default parameters. In addition, for patients, we determined beta power for one central electrode in each hemisphere to assess whether changes in relation to controls were central and bilateral or whether they were due to hemispheric asymmetries induced by the lesion. This analysis was carried out over C3 and C4, where oscillations such as the mu rhythm are reported to show maximum amplitude [27]. One patient was not able to complete the EEG recording; therefore, for the EEG analysis, we had a sample size of 9 patients and 20 healthy volunteers.

Transcranial Magnetic Stimulation (TMS). We applied single pulses of transcranial magnetic stimulation to the unaffected primary motor cortex (M1) of patients and randomly to the right or left M1 of healthy subjects, at 45° to the sagittal plane, via a figure-of-eight coil plugged into a MagPro X100 magnetic stimulator (MagVenture, Denmark). The active motor threshold (aMT) was determined during isometric contraction of the upper limbs, being defined as the lowest intensity that elicited a minimal visible muscle twitch on the hand. The aMT was selected as a measure of cortical excitability, rather than the resting motor threshold (rMT), since it is reported to present less variability than the rMT, due to the lower variability in spinal excitability associated with muscle contraction [28].

2.5. Statistics. Statistical tests were computed in the SPSS Statistics software, version 24 (IBM SPSS Statistics, IBM Corporation, Chicago, IL), and we adopted a significance level of 5% for all tests. We ran the Mann-Whitney U test to address differences between healthy individuals and stroke patients in cortical excitability and oscillatory patterns, comparing groups regarding the active motor threshold, alpha power (8-13 Hz), and the ERD in the mu (10-12 Hz) and beta (15-25 Hz) rhythms. Moreover, we applied the same test to investigate differences between groups of participants in age and handedness. For differences in sex, we used Fisher's exact test. Hemispheric asymmetries in patients were tested with the Wilcoxon test. We corrected for multiple comparisons with the false discovery rate (FDR). To check for correlations between changes in EEG and the severity of the motor deficits, as evaluated by NIHSS and WMFT scores, we assessed the normality of the data with Shapiro-Wilk tests and determined Pearson coefficients.
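To make the quantification and group comparison above concrete, here is a minimal sketch assuming per-trial baseline and task epochs are already available as NumPy arrays; the function names and parameters are illustrative, not the study's actual MATLAB pipeline.

```python
# Minimal sketch of ERD% quantification and Mann-Whitney/FDR comparison;
# epoch arrays of shape (n_epochs, n_samples) and the per-subject value
# lists are hypothetical placeholders, not the study's MATLAB pipeline.
import numpy as np
from scipy.signal import welch
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def band_power(epochs, fs=250.0, lo=15.0, hi=25.0):
    """Mean power per epoch within [lo, hi] Hz (beta band by default)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=int(fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

def erd_percent(baseline, task, fs=250.0, lo=15.0, hi=25.0):
    """Power change relative to baseline; negative values indicate ERD."""
    p_base = band_power(baseline, fs, lo, hi).mean()
    p_task = band_power(task, fs, lo, hi).mean()
    return 100.0 * (p_task - p_base) / p_base

def compare_groups(controls, patients, alpha=0.05):
    """Mann-Whitney U per measure with Benjamini-Hochberg FDR correction;
    controls/patients map measure name -> list of per-subject values."""
    names = list(controls)
    pvals = [mannwhitneyu(controls[m], patients[m]).pvalue for m in names]
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return dict(zip(names, zip(p_adj, reject)))
```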
Results

The demographic characteristics of the stroke patients included in our sample did not differ significantly from those of the healthy participants concerning age (U = 67.000, p = 0.150), sex (p = 0.700), or handedness as assessed by an adapted Edinburgh Handedness Inventory questionnaire [29] (U = 80.000, p = 0.272).

Concerning changes in neurophysiology following the stroke event, we assessed the alpha rhythm at rest and the motor rhythms, namely the mu and beta bands, during motor planning and execution. Even though both groups showed the expected beta desynchronization over the central motor areas (see Figure 1 for the selection of electrode clusters) with simultaneous bimanual finger opposition, stroke patients showed significantly reduced ERD in comparison with controls. This difference was significant both during premovement/preparation and at the time-locked beginning of movement (U = 37.000, p = 0.011, Figure 2(a) and U = 31.000, p = 0.004, Figure 2(b), respectively).

In Figure 3, we illustrate the group-averaged time-frequency plots, wherein we can distinctly observe the desynchronization pattern for the beta band in the motor area (Cz) of the control volunteers but not of the stroke patients. The differences in the beta band with the thumb opposition of both hands simultaneously coexisted with changes in the topography of individuals after a cerebrovascular lesion. In Figure 4, we compare the stroke topographical distribution with that of a healthy brain, by presenting the beta band scalp mapping during the bimanual thumb opposition task. The topographical distribution seems to corroborate the lower beta desynchronization (blue) in the central areas of stroke patients compared with controls. Moreover, in patients, the lesioned hemisphere showed a red pattern that suggests impaired modulation of beta oscillations.

Differences between healthy participants and stroke patients in alpha power of the posterior area were not significant, either when the subjects had the eyes opened (U = 68.000, p = 0.317) or closed (U = 72.000, p = 0.417). The mu rhythm did not show significant group differences when performing motor tasks with each upper limb (healthy or stroke-affected) individually or both simultaneously, either for shoulder flexion (p ≥ 0.183) or thumb opposition (p ≥ 0.077). The beta rhythm was not significantly altered in stroke patients compared with healthy participants for shoulder flexion (p ≥ 0.216). We found a significant moderate negative correlation between beta power during the execution of bimanual thumb opposition and the velocity of execution in WMFT tasks (r = −0.675, p = 0.046).

Discussion

The study of the hemisphere contralateral to the stroke lesion seems to be critical for the investigation of poststroke alterations [30]. The active motor threshold was assessed on the unaffected hemisphere in patients, and randomly on the right or left hemisphere of healthy participants. After stroke, the hemisphere contralateral to the lesion is known to become overactive, which raises the hypothesis that the aMT in this hemisphere would be reduced. Our results, however, indicated only a nonsignificant trend for a lower active motor threshold, suggesting that the hemisphere contralateral to the lesion was still relatively preserved. This is consistent with other findings. For example, Prashantha et al.
analysed changes in the resting motor threshold of the nonaffected hemisphere compared with healthy controls and reported no differences at baseline (2 weeks after stroke onset), a trend for a decrease 4 weeks after the lesion, and a significant reduction at the second follow-up, at 6 weeks poststroke [30].

Our group-averaged time-frequency plots in the central Cz electrode revealed a distinct pattern of desynchronization with the bimanual thumb opposition task in healthy subjects, which was not so evident in poststroke individuals. We found significant differences between patients and healthy participants in motor rhythms during thumb opposition when the task was performed with both hands simultaneously. These were observed as a smaller reduction in beta power with the motor task for patients, which indicates less desynchronization and suggests a less disinhibited state in the central motor areas of stroke patients when compared with healthy subjects. Moreover, from the observation of the topographical distribution in patients, we hypothesize that the impaired modulation of beta oscillations during movements including the affected hand might be detrimental to motor control. Bönstrup et al. [31] and Rossiter et al. [10] both assessed brain oscillations during paretic hand grip tasks in stroke patients, the former in the acute and the latter in the chronic phase of the disease, and described a smaller movement-related beta decrease. Interestingly, Rossiter et al. did not detect changes in baseline power levels, reporting significant differences between groups only when studying dynamic changes with the motor task [10]. We suggest that, in our study, poststroke changes in oscillatory activity during bimanual thumb opposition were not circumscribed to the areas located near the lesion, which is supported by our results showing no significant asymmetries between power at the electrodes located in the affected versus unaffected hemispheres.

Bartur et al. [12] found a correlation between the magnitude of ERD in the high-mu and low-beta bands and the motor function of the paretic upper limb, evaluated by EMG and by the Fugl-Meyer and Box and Block tests, with better motor performance being correlated with greater desynchronization in the lesioned hemisphere only. In our work, we studied the correlation between the WMFT and the beta rhythm during bimanual movements and observed that patients who had more severe deficits (with slower execution of WMFT tasks) showed a significantly correlated decrease in beta desynchronization with bimanual thumb opposition, which is in line with the results reported by Bartur et al. for the lesioned hemisphere. Our significant moderate correlation suggests that future studies, with larger sample sizes, should further explore the potential of beta levels as biomarkers for stroke recovery of motor deficits. Indeed, oscillations in the beta band are especially responsive to motor parameters [32]. Interestingly, Fu et al. [33] studied shoulder-elbow movements of the affected limb and also reported a significant poststroke decrease in peak ERD% in the mu range (8-12 Hz), compared with healthy participants.

Regarding the shoulder flexion task, we were not able to detect significant differences between groups. This is consistent with the notion that movement complexity can influence the brain activation of the lesioned primary motor cortex [34]. Gerloff et al.
[35] had already suggested that the involvement of M1 might be greater in more complex movement sequences, where there is larger activation of cortical areas. Puh et al. [36] pointed out finger movements as the most suitable instruction when the focus is motor rehabilitation. The higher complexity involved in thumb opposition, associated with the motor control required for the transitions between fingers [36], can possibly explain the specificity of our results. This also provides insights into task dependence when probing neurophysiological changes in stroke and into the design of neurostimulation approaches.

This study has some limitations. Although it is crucial to analyse the neurophysiology of stroke in the acute and subacute stages, we cannot disregard the possibility that the timing of our experiment was too early to detect significant changes in the active motor threshold. Also, the effort required from poststroke patients to perform the motor tasks during electroencephalographic recording prevented us from including a larger number of trials for each movement. Despite this, we were able to find significant differences in motor rhythms, particularly in the beta band, in patients when compared with healthy controls. The findings from this proof-of-concept study point out the value of studying EEG oscillations as potential biomarkers for understanding the neurophysiology of subacute stroke and the importance of conducting future work, with larger sample sizes, for potential application in clinical monitoring and novel therapeutic approaches.

Conclusions

We found that cerebrovascular lesions induced by recent ischemic stroke alter neurophysiological motor response patterns in both hemispheres, translating into an alteration in event-related synchronization and desynchronization, particularly at beta frequencies during motor planning and execution of complex bimanual movements. These results have implications for tailoring neurostimulation strategies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Disclosure

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
SumoPred-PLM: human SUMOylation and SUMO2/3 sites Prediction using Pre-trained Protein Language Model

Abstract

SUMOylation is an essential post-translational modification system with the ability to regulate nearly all aspects of cellular physiology. Three major paralogues, SUMO1, SUMO2 and SUMO3, form a covalent bond between the small ubiquitin-like modifier and lysine residues at consensus sites in protein substrates. Biochemical studies continue to identify unique biological functions for protein targets conjugated to SUMO1 versus the highly homologous SUMO2 and SUMO3 paralogues. Yet, the field has failed to harness contemporary AI approaches, including pre-trained protein language models, to fully expand and/or recognize the SUMOylated proteome. Herein, we present a novel, deep learning-based approach called SumoPred-PLM for human SUMOylation prediction, with sensitivity, specificity, Matthews correlation coefficient, and accuracy of 74.64%, 73.36%, 0.48 and 74.00%, respectively, on the CPLM 4.0 independent test dataset. In addition, this novel platform uses contextualized embeddings obtained from a pre-trained protein language model, ProtT5-XL-UniRef50, to identify SUMO2/3-specific conjugation sites. The results demonstrate that SumoPred-PLM is a powerful and unique computational tool to predict SUMOylation sites in proteins and accelerate discovery.

Introduction

Post-translational modifications (PTMs) are the predominant factors leading to the diversity of the proteome (1,2). Protein SUMOylation is one of the most common PTMs in humans and performs essential roles in many vital biological processes, such as transcription control, chromatin organization, accumulation of macromolecules in cells, regulation of gene expression, and signal transduction (3,4). SUMOylation is also necessary for the conservation of genome integrity (5). Consequently, it is not surprising that a change in SUMOylation dynamics favors the onset of a variety of human diseases, including cancer, Alzheimer's disease, Parkinson's disease, viral infections, heart diseases, and diabetes (5-9).

SUMOylation occurs as a modification of the ε-amino group of lysine residues in the target protein through a multi-enzymatic cascade (10). In this reaction, SUMO is connected to a lysine residue in the substrate protein by covalent linkage via three enzymes, namely activating (E1), conjugating (E2) and ligase (E3) enzymes. It can also be separated from the target protein by a specific SUMO protease enzyme (11). This covalent SUMO conjugation frequently occurs at a consensus motif ψ-K-X-E, where ψ represents leucine, isoleucine, valine, or phenylalanine, K is lysine, X can be any amino acid, and E is glutamic acid (12). However, additional SUMO protein substrates have recently been identified that lack this canonical SUMO consensus motif, making it more challenging to identify SUMOylated protein targets.

SUMOylation requires the Small Ubiquitin-Related Modifier (SUMO) protein, which has structural similarity to ubiquitin and has been discovered in a wide range of eukaryotic organisms (6,13,14). Five SUMO paralogues exist in humans, with SUMO1, SUMO2 and SUMO3 expressed ubiquitously in multiple tissue/cell types and consistently the most studied. SUMO2 and SUMO3 are highly homologous, with 97% amino acid sequence overlap, and are frequently referred to as SUMO2/3.
Unlike SUMO1, SUMO2/3 includes an internal SUMO consensus site at lysine 11, which allows poly-SUMOylation to occur (15). This internal SUMOylation site allows SUMO2/3 to form poly-SUMO chains on target proteins. The SUMO2/3 poly-chain serves as a binding platform for proteins with SUMO-interaction motifs (SIMs) and thereby supports dynamic non-covalent protein complexes. As SUMO2/3 directs both covalent and non-covalent protein interactions, previous biochemical studies suggest a unique protein substrate profile and biological function for SUMO2/3 versus SUMO1. SUMO2/3-specific protein conjugates direct protein degradation, chromatin remodeling, gene expression, and DNA repair (9,16,17). Also, only the SUMO2 knockout is embryonic lethal, indicating that SUMO2 is essential for organismal development (18,19). Yet the identification of SUMO paralogue-specific targets is still in its infancy and is frequently only addressed at the level of individual proteins.

To date, SUMOylation is most often identified using mass spectrometry, and much progress has been made in the experimental techniques used for mapping and quantifying PTMs. In that regard, more than 53,000 unique SUMOylation sites have been identified in human proteins (20-22). Although experimental approaches are the most reliable ways to identify SUMOylation sites, they are often time-consuming, labour-intensive, and still quite limited. Thus, a mechanistic characterization of PTMs, including SUMOylation sites, is lacking for a large portion of the proteome. Therefore, complementary computational tools using machine learning (ML) and deep learning (DL) are playing an increasingly essential role in the characterization of SUMOylation sites.

Several different SUMOylation site prediction models currently exist (4,23-35). The most utilized program remains GPS-SUMO, with the ability to predict SUMO-accepting lysine residues in consensus and non-consensus SUMO sites (36). However, as in most prediction models, the input features are still hand-crafted. Additionally, to the best of our knowledge, the benefits of the recent advances in large protein language models (PLMs), and the distributed representations learned from the distillation of these language models, have not been explored for SUMOylation site prediction. A recent study evaluated the performance of different models for protein representation, revealing that ProtT5 achieved the best performance in most of the tasks (37). However, ProtT5 has not yet been used for SUMOylation and SUMO2/3 PTM prediction.

Recently, transformer-based language models trained with a large corpus of unlabelled data have achieved stunning results in the field of natural language processing (NLP) (38). Due to the availability of a large number of protein sequences in the UniProt knowledge base and other resources, we now have a wide variety of PLMs under development (39-46). Considering protein sequences as sentences, Elnaggar et al.
developed a pre-trained PLM called ProtT5-XL-UniRef50 (herein called ProtT5) based on 2.5 billion protein sequences (46). The representations of these models have been utilized for various downstream tasks, and the results demonstrate that the distributed representations learned from the distillation of these language models provide useful information that captures the evolutionary context of a sequence, contact maps, taxonomy, long-range dependencies, protein structure, physicochemical properties, subcellular localization, and function (47-53). Moreover, long-range dependencies can yield essential insights into the broader context and functional implications of the SUMOylation PTM. These dependencies offer indispensable insights into the intricate connections between distant amino acids within a protein, shedding light on how modifications at one site influence the protein's behavior and its interactions with other molecules. Taking these distant relationships into account can enhance the accuracy of algorithms used to predict SUMOylation PTM sites. Additionally, it can also provide valuable information about the protein's three-dimensional structure. For example, it can help pinpoint regions of a protein that, although far apart in the primary sequence, exhibit close interactions within the folded protein structure, which is pertinent for predicting SUMOylation PTM sites. Similarly, features from these transformer-based PLMs have been successfully utilized to predict signal peptides (54), lysine glycation sites (55), N- and O-linked glycosylation sites (51,56), phosphorylation sites (57), lysine crotonylation sites (58), subcellular localization (59), protein structural features (60), intrinsic disorder sites (61), and binding residues (52), among others.

Hence, we propose a novel computational approach called SumoPred-PLM (SUMOylation site Prediction using Protein Language Model) that utilizes embeddings from a protein language model (i.e. ProtT5) to improve the predictive performance of SUMOylation site prediction. By considering proteins as sentences, we feed the full protein sequence into the pre-trained ProtT5 model to extract fixed-length, high-dimensional per-residue representations from the last encoder layer. Subsequently, the high-dimensional contextualized embedding (i.e. a vector with 1024 features) of the site under interrogation (lysine, K) is fed into a Deep Neural Network (DNN), essentially a Multi-layer Perceptron (MLP)-based classifier, for SUMOylation and SUMO2/3 site prediction.

Using cross-validation experiments, we found that the classifier based on the MLP architecture performed better than the other architectures employed. To demonstrate its effectiveness, we evaluated the performance of the proposed method SumoPred-PLM on the GPS-SUMO dataset against established approaches such as GPS-SUMO (36). Our experiments showed that SumoPred-PLM achieved better performance in predicting protein SUMOylation sites compared to the state-of-the-art GPS-SUMO predictor, yielding an area under the receiver operating characteristic curve (AUROC) of 0.895. SumoPred-PLM is a freely available, fast and reliable approach for the prediction of SUMOylation sites. All programs and data are available at https://github.com/PakhrinLab/SumoPred-PLM.
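As a concrete sketch of the embedding-extraction step just described, the following assumes the publicly documented Hugging Face checkpoint Rostlab/prot_t5_xl_uniref50 and its standard usage (space-separated residues, rare amino acids mapped to X); the helper name and indexing convention are illustrative, not taken from the SumoPred-PLM code base.

```python
# Sketch: per-residue ProtT5 embedding for the lysine (K) under interrogation.
# Assumes the public Hugging Face checkpoint "Rostlab/prot_t5_xl_uniref50";
# the helper name and 0-based site indexing are illustrative choices.
import re
import torch
from transformers import T5EncoderModel, T5Tokenizer

MODEL = "Rostlab/prot_t5_xl_uniref50"
tokenizer = T5Tokenizer.from_pretrained(MODEL, do_lower_case=False)
model = T5EncoderModel.from_pretrained(MODEL).eval()

def residue_embedding(sequence: str, site: int) -> torch.Tensor:
    """Return the 1024-d ProtT5 embedding of the residue at 0-based `site`."""
    # ProtT5 expects space-separated residues; rare amino acids map to X.
    seq = " ".join(re.sub(r"[UZOB]", "X", sequence))
    ids = tokenizer(seq, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # last_hidden_state: (1, len(sequence) + 1, 1024); final token is </s>,
    # so residue i sits at position i.
    return out.last_hidden_state[0, site]
```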
Predicting protein SUMOylation and SUMO2/3 sites

This section describes the datasets, the feature extraction method, the performance evaluation metrics, feature selection, and the methods used for training the model. With the aim of training a DL algorithm to predict SUMOylation and SUMO2/3 sites in proteins, we utilized three different datasets: CPLM 4.0 (22), SUMO2/3 (62) and GPS-SUMO (36), which are described below.

CPLM 4.0 dataset

In this study we utilized the Compendium of Protein Lysine Modifications (CPLM 4.0) dataset developed by Zhang et al. (22). This dataset consists of 29 different types of lysine PTMs, including SUMOylation, as part of the CPLM 4.0 database. To avoid overestimation of the prediction accuracy, as well as to maintain diversity, redundant sites were removed using the CD-HIT Suite with a threshold of 30% sequence identity (63). If two or more proteins were found to be modified at the same position and they had > 30% sequence similarity, only one of the proteins was preserved. As a result of this filtering, we obtained 5,695 unique and diverse SUMOylated proteins. Moreover, we separated 5,117 SUMOylated proteins for training. From these training proteins we extracted 26,911 positive samples (SUMOylated sites) and 201,949 negative samples (non-SUMOylated sites). Additionally, we extracted 2,988 independent positive and 2,988 independent negative SUMOylated sites, distributed across 578 proteins, for testing. SUMOylation may occur on any lysine (K) residue of a SUMOylated protein sequence; however, not all of these sites are SUMOylation sites. We considered the experimentally verified SUMOylated sites acquired from the CPLM 4.0 database as positive SUMOylated sites. All other lysine sites within the same substrate were considered as negative SUMOylation sites. The difference between the small number of positive and the large number of negative samples makes this benchmark dataset unbalanced. This imbalance can bias the performance of any predictor towards the identification of negative samples (a high true negative rate) over the detection of positive samples (a low true positive rate).

The two commonly used strategies to overcome the imbalance problem are random over-sampling and under-sampling. The idea behind over-sampling is to duplicate the positive samples to increase them to the number of negative samples. In under-sampling, some of the negative samples are discarded to make the number of negative samples equal to the number of positive samples. The over-sampling procedure could increase the probability of over-fitting the model due to the duplication of positive samples, while under-sampling often provides a modest solution for a given model. Therefore, we selected an under-sampling procedure to overcome the imbalance problem (64). As a result of under-sampling, we ended up with 26,911 positive and an equal number of negative samples. In this way, we avoid bias in our benchmark towards negative samples and increase our chance of detecting more positive samples, or in other words, more SUMOylation sites, accurately. Table 1 shows the number of positive and negative sites from the CPLM 4.0 dataset after 30% CD-HIT. Moreover, we explored the dbPTM human SUMOylation dataset (65). Supplementary Table S1 presents the statistics of the training and independent test datasets when 30% psi-cd-hit was applied.

SUMO2/3 dataset

Another dataset we utilized in this study is the human endogenous SUMO2/3 SUMOylation dataset developed by Hendriks et al.
(62). This dataset consists of 14,869 endogenous SUMO2/3 sites. We used the CD-HIT Suite with a threshold of 30% to remove sequence identity among the SUMO2/3 proteins (63). As a result of this filtering, we obtained 3,225 SUMOylated proteins. Moreover, we separated 2,902 SUMOylated proteins for training. From these training proteins we extracted 10,684 positive sites (SUMO2/3 sites) and 10,684 randomly under-sampled negative sites (non-SUMO2/3 sites) from 131,459 negative sites. Additionally, we extracted 1,269 independent positive and 1,269 independent negative SUMO2/3 sites, distributed across 322 proteins, for testing. We considered the experimentally verified SUMO2/3 sites acquired from the Hendriks et al. database as positive SUMO2/3 sites and all the other lysine sites within the same substrate as negative SUMO2/3 sites. Table 2 shows the number of positive and negative sites from the Hendriks et al. SUMO2/3 dataset after 30% CD-HIT.

GPS-SUMO dataset

The GPS-SUMO dataset consists of 548 proteins; among them, 509 proteins were used for training and 39 were utilized for independent testing. In total, 891 experimentally verified human SUMOylation sites were extracted from the 509 SUMOylation proteins. All the other lysines from the same 509 SUMOylated proteins were considered as negative SUMOylation sites. To create a balanced dataset, 891 negative sites were randomly under-sampled from 23,371 sites. These experimentally verified sites form the training data. Moreover, 71 experimentally verified independent positive SUMOylated test sites were extracted from 39 different SUMOylation proteins, which are distinct from the training proteins. Next, 1,377 independent negative lysine sites, which do not include the independent SUMOylated positive sites, were extracted from the same 39 independent SUMOylation proteins. These experimentally verified sites form the testing dataset. Further information can be found in the original GPS-SUMO publication (36). Table 3 summarizes the number of sites included in the GPS-SUMO dataset.
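The balanced random under-sampling applied to all three datasets can be sketched in a few lines; the index arrays below are hypothetical placeholders, as the study does not prescribe a particular implementation.

```python
# Minimal sketch of the balanced random under-sampling described above;
# pos_idx/neg_idx are hypothetical arrays of positive/negative K-site indices.
import numpy as np

def undersample(pos_idx: np.ndarray, neg_idx: np.ndarray,
                seed: int = 0) -> np.ndarray:
    """Keep all positives plus an equal-sized random subset of negatives."""
    rng = np.random.default_rng(seed)
    neg_kept = rng.choice(neg_idx, size=len(pos_idx), replace=False)
    return np.concatenate([pos_idx, neg_kept])
```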
Feature extraction: embeddings from a protein language model

A range of numerical representation schemes can be used to encode protein sequences. A recent development in the field is the advent of embeddings (distributed vector representations), which are representations of protein sequences extracted from the last hidden layers of the networks forming a PLM trained on a large set of unlabeled protein sequences. These latent embeddings capture a diversity of higher-level features of proteins and have been used successfully in predicting secondary structure and other tasks (52). In this work, we used embeddings from the PLM ProtT5-XL-UniRef50 (herein called ProtT5) (46). The PLM ProtT5 was trained on unlabeled protein sequences from BFD (Big Fantastic Database; 2.5 billion sequences including meta-genomic sequences) (66) and UniRef50 (67). ProtT5 was built in analogy to the NLP (Natural Language Processing) model T5, ultimately learning some of the constraints of protein sequences (68). Features learned by the PLM can be transferred to any (prediction) task requiring numerical protein representations by extracting vector representations for single residues from the hidden states of the PLM using transfer learning. As ProtT5 was only trained on unlabeled protein sequences, there is no risk of information leakage or overfitting to the prediction task during pretraining. Essentially, ProtT5 outputs fixed-length (1024-dimensional) vector representations for each residue in a protein sequence. In essence, to predict whether a lysine is SUMOylated, SUMO2/3-modified or neither, we extracted a 1024-dimensional vector for each SUMOylated/SUMO2/3 or non-SUMOylated/non-SUMO2/3 lysine residue, where only the encoder side of ProtT5 was used and the embeddings were extracted from the last hidden layer of the model. A similar methodology was applied to extract features from the influential Ankh PLM (41). We utilized the Ankh large model because our experimental results show that it can encode more intrinsic information about proteins than the Ankh base model. The only difference from the ProtT5 model is that it produces a per-residue contextualized embedding feature vector of length 1536 rather than the 1024 produced by ProtT5.

Machine learning and deep learning models

Naïve Bayes (NB) is a simple ML algorithm commonly used for classification tasks (69). It is based on Bayes' theorem and assumes that the features are conditionally independent given the class label. Support Vector Machine (SVM) is a class of supervised machine learning algorithms used for classification and regression tasks (70). The basic idea behind SVM is to find an optimal hyperplane that separates the data into different classes. When the data are not linearly separable, SVM can still classify them by using the kernel trick. The kernel trick maps the input data into a higher-dimensional feature space, where they might become linearly separable.

Random Forest (RF) is a popular ensemble learning method used for classification and regression tasks in ML (71). It is an extension of decision trees and combines multiple decision trees to make predictions. For classification tasks, it predicts the class label by taking a majority vote among the individual trees. Each tree's prediction is counted, and the class with the most votes becomes the final prediction.
Logistic Regression (LR) is an ML algorithm used for binary classification tasks (72). It predicts the probability of an instance belonging to a certain class by fitting a logistic (sigmoid) function to the input features. It estimates coefficients to create a linear decision boundary that separates the two classes.

Extreme Gradient Boosting (XGBoost) belongs to the family of gradient boosting methods (73). It sequentially adds weak models (decision trees) to iteratively correct the errors made by previous models. It optimizes a specific loss function by finding the best-fitting model in an additive manner.

A 1D Convolutional Neural Network (1D CNN) is a variant of convolutional neural networks (CNNs) specifically designed for processing one-dimensional sequential data (74). It utilizes one-dimensional convolutional filters to capture local patterns and features in sequential data. The filters slide along the input sequence, performing convolutions and generating feature maps. While traditional CNNs are commonly used for image analysis and computer vision, the 1D CNN is particularly suited for tasks involving sequential data, such as time series analysis, speech recognition, and natural language processing. Long Short-Term Memory (LSTM) is a type of recurrent neural network (RNN) architecture specifically designed to model and process sequential data. It addresses the vanishing gradient problem that occurs in traditional RNNs, allowing for better capture of long-term dependencies in the data. The hyperparameters and other details are explained in Supplementary Table S2.

Model training

As discussed above, SUMOylation and SUMO2/3 modification occur on lysine residues, so we extract contextualized embeddings from the ProtT5 model using the full-length protein sequence as input. Finally, the corresponding feature vector for the site under interrogation (in this case, lysine) is extracted (a 1024-dimensional vector) and passed to the subsequent DL model. Using these representations and datasets (CPLM 4.0, SUMO2/3, and GPS-SUMO), we trained several models to correctly predict SUMOylation and SUMO2/3 sites in amino acid sequences. The performance of several architectures was evaluated: 1D CNN, 1D CNN-LSTM, 1D CNN-BiLSTM, BiLSTM, LSTM, LR, MLP, SVM, XGBoost, NB and RF. We describe the MLP architecture in Figure 1. As shown in Figure 1, the features are extracted for the site under interrogation (K, highlighted in white) using the full protein sequence as input, and the 1024 real-valued feature vector is fed into an MLP deep-learning architecture consisting of a 64-neuron dense layer followed by a 2-neuron output layer. To explore the hyperparameter space, we performed a ten-fold cross-validation grid search with the MLP deep learning model on the CPLM 4.0 and SUMO2/3 training datasets. The search covered 1, 2, 3 and 4 dense hidden layers; sigmoid and ReLU activation functions; 32, 64, 128, 256, 512 and 1024 neurons in each layer; RMSprop and Adam optimizers; and dropout rates of 0.2, 0.3, 0.4 and 0.5, while the default learning rate of 0.001 was used. A similar approach was followed for the rest of the deep learning and machine learning algorithms. The optimized hyperparameters found by the grid search are shown in Table 4. Based on the grid search, the 64-neuron layer was configured with the ReLU activation function. As dropout layers/nodes in the network helped alleviate overfitting and improved the generalization capacity, we set the dropout rate equal to 0.3. Our task was to train a binary classification model to distinguish SUMOylation or SUMO2/3 sites from non-SUMOylation or non-SUMO2/3 sites. Therefore, in the output dense layer, we set the number of neurons equal to 2.
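A minimal sketch of this classifier in Keras is given below; the Adam optimizer and the cross-entropy loss are assumptions, since the full grid-search outcome in Table 4 is not reproduced here.

```python
# Minimal Keras sketch of the MLP described above (64-neuron ReLU layer,
# dropout 0.3, 2-neuron output); the optimizer and loss are assumed choices.
from tensorflow.keras import layers, models

def build_mlp(input_dim: int = 1024) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),       # ProtT5 embedding of the K site
        layers.Dense(64, activation="relu"),    # 64-neuron hidden layer
        layers.Dropout(0.3),                    # dropout to curb overfitting
        layers.Dense(2, activation="softmax"),  # SUMOylated vs non-SUMOylated
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```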
The optimized hyperparameters for the deep-learning architecture are elaborated in Table 4. To avoid overfitting, we used overfitting-reduction techniques such as dropout, early stopping, model checkpointing, and reduction of the learning rate on plateau. Furthermore, no signs of overfitting or underfitting are present in our trained model, as can be seen from Supplementary Figure S1: the training and validation loss curves track each other closely, as do the training and validation accuracy curves.

Model evaluation and performance metrics

In this study, 10-fold cross-validation was used to evaluate the performance of the model and to determine its robustness and generalizability. During 10-fold cross-validation, the data are partitioned into ten equal parts. Then, one part is left out for validation while training is performed on the remaining nine parts. This process is repeated until all parts have been used for validation. For the results of 10-fold cross-validation, unless otherwise noted, all performance metrics are reported as the mean value ± 1 standard deviation from the mean.

To evaluate the performance of each model, we use accuracy (ACC), sensitivity (SN), specificity (SP) and the Matthews correlation coefficient (MCC) (75,76). ACC describes the correctly predicted residues out of the total residues (Equation (1)). Meanwhile, SN defines the model's ability to distinguish positive residues (Equation (2)), and SP measures the model's ability to correctly identify negative residues (Equation (3)). On the other hand, MCC considers the model's predictive capability concerning both positive and negative residues (Equation (4)).
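Equations (1)-(4) follow the conventional definitions of these metrics in terms of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN):

```latex
\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN} \quad (1)

\mathrm{SN} = \frac{TP}{TP + FN} \quad (2)

\mathrm{SP} = \frac{TN}{TN + FP} \quad (3)

\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}} \quad (4)
```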
Results

SumoPred-PLM utilizes per-residue embeddings (1024 features) extracted for the site of interest (K) from ProtT5 using a full-length sequence as input. We use three datasets for training SumoPred-PLM: CPLM 4.0, SUMO2/3, and GPS-SUMO. Protein redundancies are removed within and across the training and independent test datasets. Additional experiments, reported in Supplementary Table S3, revealed subpar performance of the other models. On the independent test dataset, the trained MLP model achieved an accuracy of 74.17%; it classified 2,220 samples as True Negative, 2,212 samples as True Positive, 767 as False Positive and 776 as False Negative. The independent test set results and the 10-fold cross-validation results produced by SumoPred-PLM are similar. Moreover, it can be observed from Figure 2 that SumoPred-PLM, which is based on an MLP approach, has the highest area under the receiver operating characteristic (ROC) curve. Similarly, Figure 3 shows that SumoPred-PLM has the highest precision-recall area under the curve (PrAUC) compared to other DL and ML approaches. Hence, SumoPred-PLM is a robust computational model for the prediction of the SUMOylation PTM in amino acid sequences of proteins. In addition, the SumoPred-PLM MLP model was trained using the dbPTM training dataset (65), utilizing the ProtT5 encoding scheme. Subsequently, the trained model was evaluated with the dbPTM independent test dataset, and the findings are presented in Supplementary Table S4. Furthermore, McNemar's significance test (78, 79) was conducted on the best-performing MLP and SVM classification models; the resulting p-value (0.04) was compared with the alpha level (0.05). Since the p-value is less than alpha, we rejected the null hypothesis, suggesting that there is a significant difference between the SVM and MLP classifiers in predicting the outcomes of the independent test dataset. Moreover, the utility of the recently developed ESM2 (3 billion) PLM (80) on the CPLM 4.0 dataset is illustrated in Supplementary Table S5.

Visualization using t-SNE plot

Additionally, we investigated the classification efficacy of the features and the learned model using the t-SNE visualization technique. Here, features represent the 1024-dimensional numeric vectors of SUMOylated or non-SUMOylated 'K' residues extracted from ProtT5, and the learned model refers to the MLP network trained with the CPLM 4.0 training dataset. To discern the classification effectiveness of these features, as well as of the feature vector produced by the penultimate hidden layer of the trained MLP network, we used t-SNE to project the features into a two-dimensional space (Figure 4) (81). For the features extracted from ProtT5 at the SUMOylated or non-SUMOylated token 'K' of the CPLM 4.0 training set, the positive and negative samples are relatively clustered together (Figure 4). Hence, this result demonstrates that contextualized features produced by pretrained ProtT5, when passed to an MLP deep learning network, can cluster positive and negative samples of SUMOylation sites in two-dimensional space.
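As a sketch of how such a two-dimensional projection is produced (scikit-learn and matplotlib assumed; the feature matrix and labels below are random stand-ins, not the authors' data):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-in for the 1024-d ProtT5 embeddings of 'K' sites and their labels
X = np.random.randn(500, 1024)
y = np.random.randint(0, 2, size=500)  # 1 = SUMOylated, 0 = non-SUMOylated

# Project to 2D; perplexity and random_state are illustrative choices
emb2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb2d[y == 0, 0], emb2d[y == 0, 1], s=5, label="non-SUMOylated")
plt.scatter(emb2d[y == 1, 0], emb2d[y == 1, 1], s=5, label="SUMOylated")
plt.legend()
plt.show()
```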
10-fold cross-validation on the CPLM 4.0 training set with Ankh PLM features

In order to scrutinize the usefulness of the recent pre-trained PLM Ankh, we performed 10-fold cross-validation on the CPLM 4.0 training dataset with embeddings from the Ankh PLM (41). The predictive performance of different DL and ML models using stratified 10-fold cross-validation on the CPLM 4.0 training dataset, where the features are extracted from the Ankh PLM, is shown in Table 6. The contextualized embeddings (feature vector length = 1536) of the SUMOylated or non-SUMOylated token 'K' produced by the pretrained Ankh model achieve the best performance when fed to the MLP, as seen in Table 6. This MLP model produced MCC, SN, SP and ACC values of 0.464 ± 0.010, 0.752 ± 0.017, 0.711 ± 0.019 and 0.731 ± 0.005, respectively, under stratified 10-fold cross-validation. These large pretrained PLMs have increased capacity to learn and represent complex patterns of proteins, and exhibit better performance in terms of accuracy, generalization, and protein language understanding. Moreover, the token capacity (the maximum number of tokens the model can handle during processing), which affects the model's ability to handle long amino acid sequences, is increased in these large pretrained PLMs. However, the 10-fold cross-validation of the explored models on the CPLM 4.0 training dataset shows that the Ankh PLM falls slightly short of the baseline ProtT5 PLM, so we chose the pretrained ProtT5 PLM to encode the protein sequences.

Testing on the CPLM 4.0 independent test dataset with Ankh features

To assess the performance of our approach on an independent test set with Ankh features, we trained the MLP model on the overall CPLM 4.0 training set and appraised the trained model with the CPLM 4.0 SUMOylation independent test dataset. The trained MLP model produced MCC, SN, SP, and ACC of 0.4728, 77.55%, 69.58% and 73.56%, respectively, when features from the Ankh PLM were used. Furthermore, the MLP model trained with Ankh features classified 2,077 samples as True Negative, 2,315 as True Positive, 908 as False Positive, and 670 as False Negative for the CPLM 4.0 independent test dataset. It can be observed from Table 7 that SumoPred-PLM trained with the ProtT5 PLM feature representation outperforms the Ankh PLM feature representation.

Performance on the GPS-SUMO dataset

GPS-SUMO produced an area under the curve (AUC) of 0.8629, whereas SumoPred-PLM produced an AUC of 0.895, as illustrated in Figure 6. This result is better than the performance of the seminal GPS-SUMO approach, which uses the group-based prediction system (GPS) algorithm integrated with a Particle Swarm Optimization approach. Furthermore, the MLP classifier was able to classify 1,216 samples as True Negatives and 52 samples as True Positives, while falsely classifying 161 samples as False Positives and 19 samples as False Negatives. These results suggest that SumoPred-PLM performs better than the seminal GPS-SUMO method. In addition, it should be noted that SumoPred-PLM was trained and tested with the exact same dataset that was used with the GPS-SUMO approach.
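All of the cross-validation results above rely on the same stratified 10-fold protocol. A minimal sketch of that protocol follows (scikit-learn assumed; the classifier and data are placeholders, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef, accuracy_score

X = np.random.randn(1000, 1024)         # stand-in PLM embeddings
y = np.random.randint(0, 2, size=1000)  # stand-in labels

mccs, accs = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(X, y):
    # Train on nine folds, validate on the held-out fold
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[val_idx])
    mccs.append(matthews_corrcoef(y[val_idx], pred))
    accs.append(accuracy_score(y[val_idx], pred))

# Report mean ± 1 standard deviation across the ten folds, as in the tables
print(f"MCC {np.mean(mccs):.3f} ± {np.std(mccs):.3f}  "
      f"ACC {np.mean(accs):.3f} ± {np.std(accs):.3f}")
```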
Comparison of SumoPred-PLM with the SUMOhydro predictor on the SUMOhydro dataset

To facilitate a more comprehensive comparison, we acquired the dataset associated with the SUMOhydro predictor (24). Detailed statistical information pertaining to the SUMOhydro dataset is provided in Supplementary Table S6. We extracted ProtT5 contextualized embeddings for the SUMOhydro datasets and then applied these features to the SumoPred-PLM MLP architecture. The results of this analysis are presented in Table 9.

Case studies

We performed a case study on the androgen receptor (AR, ANDR_HUMAN, UniProt ID: P10275) protein, which was present neither in the training set nor in the independent test set of the CPLM 4.0 dataset. This nuclear steroid receptor is a ligand-activated transcription factor that directs cellular proliferation and differentiation in target tissues (84). Specifically, androgen-hormone-activated AR binds androgen response elements (ARE) on target genes and recruits coactivator and corepressor proteins to direct gene transcription (85). The AR protein is subject to multiple PTMs, including SUMOylation. We and others report that AR SUMOylation regulates AR function, as our collective whole-animal and cell-based studies demonstrate that disruption of dynamic AR SUMOylation directs aberrant proliferation of prostate and breast cancer cells (86)(87)(88)(89).

The human androgen receptor protein contains 40 lysine ('K') residues. Biochemical studies first identified canonical SUMO consensus sites on AR that include K387 and K520 (highlighted in Table S3). K387 and K520 serve as acceptor sites for both SUMO1 and SUMO2/3 modification of endogenous AR protein in several cell lines. However, mono-SUMO1 and poly-SUMO2/3 chains differentially regulate AR function: SUMO1 modification of AR affects transcriptional activity, while SUMO2/3 conjugation to AR directs chromatin enrichment and AR protein stability/degradation (90, 91). With 40 lysine residues, we postulated that the AR protein may exhibit additional non-consensus SUMO motifs and possibly even several SUMO paralogue-specific acceptor sites. Our in silico analysis of the primary amino acid sequence of AR with GPS-SUMO identified K387 and K520 plus three additional SUMO-acceptor sites (K313, K910, K913; Table S3). However, this platform does not distinguish between SUMO paralogue conjugates. Hence, we next evaluated published mass spectrometry data of the endogenous SUMO2/3 proteome from HeLa cells; this dataset reports that 20% of AR is SUMOylated in HeLa cells. Supplementary Table S7 shows the prediction results of SumoPred-PLM for all the lysines in the ANDR_HUMAN protein.
Discussion and conclusions

One of the key innovations in SumoPred-PLM is the incorporation of PLM-based features to represent protein sequences. PLM-based features have proven to be quite useful in various bioinformatics tasks (92)(93)(94)(95)(96)(97)(98)(99)(100). Our major goal in this project was to move away from hand-crafted feature extraction for the prediction of SUMOylation and SUMO2/3 sites. To achieve this goal, we investigated whether language models learned from a large amount of protein sequences could capture features predictive of SUMOylation and SUMO2/3 sites. Additionally, we wanted to investigate what type of machine learning approach would work well on these pre-trained feature representations. Another significant contribution is the study of the SUMO2/3 dataset, which was not extensively examined in prior work. To this end, we used contextualized embeddings learned from a PLM called ProtT5 to extract features for the site of interest. Subsequently, various ML and DL algorithms were evaluated using 10-fold cross-validation, and the top-performing model was selected as the final model. The MLP model, namely SumoPred-PLM, achieves the best prediction performance among the compared methods, as it benefits substantially from the knowledge obtained from large sets of protein sequences by the pre-trained ProtT5 model used to encode the protein sequences. SumoPred-PLM relies neither on knowledge of protein structure, nor on expert-crafted sequence features, nor on time-consuming evolutionary information derived from multiple sequence alignments (MSAs). Instead, the input to the MLP model is a contextual representation of the SUMOylated or non-SUMOylated token 'K' from the pre-trained PLM (ProtT5). This state-of-the-art prediction of SUMOylation is likely due to the contextual embeddings of all the amino acids in the protein sequence produced by the transformer-based model, which makes use of position embeddings with a self-attention mechanism. The SumoPred-PLM model outperforms the pioneering GPS-SUMO predictor in the identification of consensus and non-consensus SUMO-acceptor sites. One interesting result portrayed in the t-SNE plot (Figure 5) is that our model was largely able to cluster the two classes of SUMOylated and non-SUMOylated lysine residues in two-dimensional space. SumoPred-PLM is a new approach proposed in this work that uses information distilled from large PLMs to train the DL framework and yields outstanding performance compared to existing approaches. In the future, we will consider using structural information predicted by AlphaFold2 (101, 102) to build models using graph networks (103) to further improve the performance of SUMOylation and SUMO2/3 PTM site prediction.
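To make the feature-extraction step at the heart of this pipeline concrete, here is a minimal sketch of pulling the per-residue ProtT5 embedding for a lysine of interest. It assumes the public Rostlab ProtT5 checkpoint on HuggingFace and its usual preprocessing (space-separated residues, rare amino acids mapped to X); this is an illustration under those assumptions, not the authors' exact pipeline.

```python
import re
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("Rostlab/prot_t5_xl_uniref50",
                                        do_lower_case=False)
model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_uniref50")
model.eval()

def lysine_embedding(sequence: str, k_position: int) -> torch.Tensor:
    """Return the 1024-d ProtT5 embedding of the lysine at 0-based k_position."""
    assert sequence[k_position] == "K", "site of interrogation must be a lysine"
    # ProtT5 expects space-separated residues; map rare amino acids to X
    spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))
    batch = tokenizer(spaced, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, L+1, 1024), incl. </s>
    # The T5 encoder adds no leading token, so residue i maps to position i
    return hidden[0, k_position]
```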
In addition, we provide a unique service with SumoPred-PLM as a SUMO2/3-specific predictor. To our knowledge, this is the first platform that provides the ability to predict SUMO2/3 paralogue-selective acceptor sites. As stated previously, a growing number of biochemical studies highlight that SUMO paralogues differentially affect a protein substrate's function and stability. Hence, we anticipate that this SUMO2/3 predictor platform will greatly accelerate the discovery of SUMO-paralogue-directed protein effects. For the SUMO2/3 platform, machine learning was based on available large-scale SUMO2/3 proteomics data (62). Unfortunately, a comparable SUMO1 proteomic analysis is currently unavailable but, when accessible, such a dataset can easily be incorporated into the current platform.

Performance on the CPLM 4.0 dataset

10-fold cross-validation on the CPLM 4.0 training set with ProtT5 features

To tune the hyperparameters (parameters whose values are used to control the learning process) and to investigate the performance of various DL/ML models, we performed 10-fold cross-validation on the CPLM 4.0 training dataset (77). The predictive performance of different DL and ML models using stratified 10-fold cross-validation on the CPLM 4.0 training dataset is shown in Table 5. The contextualized embedding of the SUMOylated or non-SUMOylated token 'K' produced by the pretrained ProtT5 model achieves the best performance when fed to the MLP, as seen in Table 5. Intriguingly, the same architecture (MLP) also produced the highest result under 10-fold cross-validation on the SUMO2/3 training dataset. This MLP model produced MCC, SN, SP, and ACC values of 0.478 ± 0.010, 0.757 ± 0.026, 0.720 ± 0.026 and 0.738 ± 0.005, respectively, under stratified 10-fold cross-validation. Since the MLP model produced the best result in 10-fold cross-validation, we selected this architecture as our final model and called it SumoPred-PLM. Furthermore, we conducted 10-fold cross-validation on the CPLM 4.0 training dataset with the 1D CNN-BiLSTM, 1D CNN-LSTM, BiLSTM and LSTM DL methods; the findings are presented in Supplementary Table S3.

Figure 1. The overall framework of SumoPred-PLM. Beads with letters represent protein sequences. The sky-coloured rectangular box represents the ProtT5 PLM. Green rectangular boxes are the per-residue 1024-feature representations produced by the ProtT5 PLM. Empty circles represent neurons; each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. A dropout of 0.3 means that 30% of neurons are switched off randomly while training the MLP. Ten-fold cross-validation on the training dataset(s) was used to obtain the best hyperparameters for the deep learning architecture. Finally, the hyperparameters obtained from 10-fold cross-validation were used to train the model on the overall training set; the trained model was assessed on the independent test set and its performance compared against other existing approaches.

Figure 2. Comparison of ROC curves of SumoPred-PLM and other models on the SUMOylation CPLM 4.0 independent test dataset. For each model, the area under the ROC curve is reported.

Figure 3. Comparison of precision-recall curves of SumoPred-PLM and other models on the SUMOylation CPLM 4.0 independent test dataset. For each model, the area under the precision-recall curve (PrAUC) is reported.

Figure 4. t-SNE illustration of the learned features from the ProtT5 language model.
Figure 5 represents the t-SNE plot of the feature vectors generated from the penultimate hidden layer of the MLP DL architecture when the CPLM 4.0 training set is used. Negative samples (blue points) are concentrated on the right while positive samples (orange points) are concentrated on the left, which indicates that per-residue pre-trained PLM feature extraction with an MLP learns SUMOylation patterns and largely clusters positive and negative samples in two-dimensional space.

10-fold cross-validation on the SUMO2/3 training set with ProtT5 features

To further examine the robustness of the proposed model, we performed 10-fold cross-validation on the Hendriks et al. SUMO2/3 training dataset. The predictive performance of different DL and ML models using stratified 10-fold cross-validation on the Hendriks et al. SUMO2/3 training dataset is shown in Table 8. Intriguingly, the same architecture (MLP) again produced the highest performance, with MCC, SN, SP and ACC values of 0.481 ± 0.017, 0.745 ± 0.029, 0.735 ± 0.027 and 0.740 ± 0.008, respectively, under stratified 10-fold cross-validation. Since the MLP model produced the best result in 10-fold cross-validation, we selected this architecture as our final model and used it to assess performance on the Hendriks et al. SUMO2/3 independent test set.

Figure 5. t-SNE illustration of the learned features from the trained MLP model.

Figure 7. SumoPred-PLM prediction results for the human androgen receptor, where sites with a prediction score above 0.5 (shown by the red dotted line) are predicted as SUMOylated sites. Green bars represent the five SUMOylation sites with experimental evidence from protein microarray data.

Table 1. Positive and negative SUMOylation sites for training and independent testing derived from the CPLM 4.0 dataset.

Table 2. Positive and negative SUMO2/3 sites for training and independent testing derived from the Hendriks et al. dataset.

Table 3. Positive and negative SUMOylation sites for training and independent testing derived from the GPS-SUMO dataset.

Table 4. Hyperparameters used in the MLP network for the SUMOylation, SUMO2/3 and GPS-SUMO datasets.

Table 5. Results of the 10-fold cross-validation on the CPLM 4.0 training dataset using different deep and machine learning models encoded with the ProtT5 PLM. The highest values in each column are highlighted in bold.

Table 6. Results of the 10-fold cross-validation of the explored models on the CPLM 4.0 training dataset using Ankh PLM feature encoding. The highest values in each column are highlighted in bold.

Table 7. Prediction performance of SumoPred-PLM with ProtT5 and Ankh PLM features on the CPLM 4.0 independent test dataset. The highest values in each column are highlighted in bold.
Table 8. Comparison of different learning models on the Hendriks et al. SUMO2/3 training dataset using 10-fold cross-validation, where features were encoded using the ProtT5 PLM. The highest values in each column are highlighted in bold.

Figure 6. ROC curve of SumoPred-PLM on the GPS-SUMO independent test dataset.

Table 9. Comparison of SumoPred-PLM with other predictors that were trained with the SUMOhydro training dataset and tested with the SUMOhydro independent test dataset.
8,822.8
2024-01-05T00:00:00.000
[ "Computer Science", "Biology" ]
Myco-fabricated ZnO nanoparticles ameliorate neurotoxicity in mice model of Alzheimer’s disease via acetylcholinesterase inhibition and oxidative stress reduction

Alzheimer's disease (AD) is one of the primary health problems linked to the decrease of acetylcholine in cholinergic neurons and elevation of oxidative stress. Myco-fabricated ZnO-NPs have revealed excellent biological activities, including anti-inflammatory and acetylcholinesterase inhibitory potentials. This study aims to determine whether two distinct doses of myco-fabricated ZnO-NPs have a positive impact on behavioral impairment and several biochemical markers associated with inflammation and oxidative stress in mice treated with aluminum chloride (AlCl3) to induce AD. Sixty male mice were randomly divided into six equal groups. Group 1 was injected i.p. with 0.5 ml of deionized water daily during the experiment. Mice in group 2 received AlCl3 (50 mg/kg/day i.p.). Groups 3 and 4 were treated i.p. with 5 and 10 mg/kg/day of ZnO-NPs only, respectively. Groups 5 and 6 were given 5 and 10 mg/kg/day ZnO-NPs i.p., respectively, in addition to 50 mg/kg/day AlCl3. Results showed that AlCl3 caused an increase in the escape latency time and a reduction in the time spent in the target quadrant, indicating impaired learning and memory. Moreover, acetylcholinesterase (AChE) activity and malondialdehyde (MDA), tumor necrosis factor-alpha (TNF-α), and interleukin 1β (IL-1β) levels were significantly increased, while the content of glutathione (GSH), the activities of superoxide dismutase (SOD), catalase (CAT), alanine aminotransferase (ALT), and aspartate aminotransferase (AST), and the levels of serotonin and dopamine were decreased in brain tissues of AlCl3-treated mice only. However, treatment of mice with myco-fabricated ZnO-NPs at doses of 5 or 10 mg/kg improved learning and memory function by ameliorating all of the previous parameters in the AD mice group, with the low dose of 5 mg/kg more effective than the high dose of 10 mg/kg. In accordance with these findings, myco-fabricated ZnO-NPs could enhance memory and exhibit a protective influence against memory loss caused by AlCl3.

Introduction

Alzheimer's disease (AD) is one of the primary health issues whose prevalence has grown recently throughout the world. In 2015, about 44 million people in the world had AD, and by 2050 this number is expected to have doubled (Ngolab et al. 2019). The development of AD has been connected with a number of variables, including oxidative-stress-induced neuronal injury, loss of acetylcholine in cholinergic neurons, and the formation of β-amyloid (Aβ) plaques in brain cells (Cheignon et al. 2018). A buildup of particular metals, such as aluminum, can start processes that result in the creation of highly reactive radicals. Aluminum's easy entry into, and retention in, the brain induces oxidative stress that leads to excessive AchE activity; the resulting low level of acetylcholine is linked to the development of β-amyloid plaques and memory loss in AD patients (Liaquat et al. 2019). Malik et al.
(2022) reported that AlCl3 induced a mouse model of AD characterized by memory loss, elevated expression of β-amyloid, and increased acetylcholinesterase activity. Many drugs used for the treatment of AD, such as galantamine, rivastigmine, and donepezil, depend on the inhibition of acetylcholinesterase. This prolongs the action of the deficient neurotransmitter in the brain, but these drugs have side effects with extended use, e.g., hepatotoxicity (Joe and Ringman 2019). Therefore, searching for treatments with a high potential to reverse neuronal dysfunction and little risk of side effects and expense would be beneficial. Several studies have suggested that natural or metal nanoparticle supplements with antioxidant and anti-inflammatory characteristics could be used to regulate oxidative stress and inflammation in order to slow or stop the development of AD (Szczechowiak et al. 2019; Ayaz et al. 2020).

There are numerous uses for metal nanoparticles (NPs) and their oxides in the domains of medicine, agriculture, and industry (El-Sayed et al. 2020a, b, 2023b). The applicability of NPs has been greatly enhanced by their reduced size, special physicochemical features, and surface modifications (Hussein et al. 2022). Among metal oxide nanoparticles, ZnO-NPs are widely employed in biomedical applications such as drug delivery, antibacterial, anticancer, antioxidant, and wound healing uses (Gomaa et al. 2022). Zinc is a neuromodulator that carries out a variety of physiological actions (Blakemore and Trombley 2017; Hatab et al. 2022), and it is vital for controlling cell proliferation. Additionally, it functions as a molecular signal for transcription factors and immune cells involved in the generation of inflammatory cytokines. According to the literature, zinc administration diminishes infection incidence and inflammatory cytokine generation. The capacity of zinc to bind metals, along with its role in the catalysis of Cu/Zn superoxide dismutase, preservation of the protein -SH group, and upregulation of metallothionein (MT) production, makes it a well-known antioxidant (Jarosz et al. 2017).

ZnO-NPs are prepared using different methods, including physical, chemical, and biological ones (Abdelhakim et al. 2020; Mousa et al. 2021). The chemical and physical routes have a number of disadvantages, such as high cost, the need for high-yield equipment, and the formation of unsafe by-products that could be harmful to human health or the environment (Suntako 2015). The green synthesis method eliminates all of these issues by being safer, more cost-effective, and less harmful to the environment (Singh et al. 2018; Anwar et al. 2022). In the literature, gamma rays can be used as a physical mutagen to improve microbial cultures and develop overproducers of bioactive substances with high economic value (Mousa et al. 2021; El-Sayed 2021; El-Sayed et al. 2022a, b, c). Consistent with Mossa and Shameli (2021), Ag-NPs produced by gamma-irradiated synthesis had a stronger antibacterial impact than those created via chemical synthesis.

In this respect, in our previous study by El-Sayed et al.
(2023a), we found in vitro that the myco-fabricated ZnO-NPs revealed excellent biological activities, including anti-inflammatory and acetylcholinesterase inhibitory potentials, so we sought to apply these results in vivo. Thus, the aim of this investigation was to determine whether two distinct doses of ZnO-NPs had any positive effects on biochemical variables related to neurotransmission, oxidative stress, and inflammation in mice that had been given AlCl3 to cause AD.

Animals

A total of 60 male albino mice, each weighing 50-60 g and aged 9-10 weeks, were used in the tests. The mice were acclimated before the experiment by spending a week in our animal building, eating a standard mouse diet, and having unlimited access to water. The mice were divided equally into six groups. The research protocol, serial number 52 A/22, for overseeing and monitoring experimental animals was approved by the Research Ethics Committee of the National Centre for Radiation Research and Technology.

Chemicals

Aluminum chloride (AlCl3, M.wt 133.34) was bought from Sigma-Aldrich Co., Munich, Germany, and dissolved in distilled water. ZnO-NPs were produced using the gamma-irradiated mutant fungus Alternaria tenuissima AUMC10624, according to El-Sayed et al. (2023a). In brief, the fungus was cultured in potato-dextrose broth, and the obtained cell-free culture filtrates were mixed with an aqueous solution of the corresponding salt (2 mM zinc sulfate for ZnO-NPs). The mixture was then vigorously stirred for 20 min and kept at room temperature, and the precipitate was separated by ultracentrifugation, washed in deionized water followed by ethanol, and finally dried at 50 °C. The myco-fabricated ZnO-NPs were suspended in deionized water. To prevent particle aggregation, the suspension was well mixed for 1 min before each injection. The ZnO nanoparticles had a mean size of 18.65 nm with a hexagonal crystal structure (El-Sayed et al. 2023a).

Experimental groups

Six groups of mice (Fig. 1) were divided equally as follows. G1: the control group, injected i.p. with 0.5 ml of deionized water daily throughout the experiment. G2: mice injected i.p. with AlCl3 at a dose of 50 mg/kg for three weeks to induce AD (Abdelazem 2020). G3 and G4: mice that received myco-fabricated ZnO-NPs only (5 and 10 mg/kg/day i.p., respectively) daily for three weeks. G5 and G6: mice given myco-fabricated ZnO-NPs i.p. (5 and 10 mg/kg/day, respectively) together with AlCl3 (50 mg/kg/day) for three weeks.
Morris water maze (MWM)

With some modifications to the initial technique, the Morris water maze was used in the current investigation to test spatial learning and memory (Vorhees and Williams 2006) during the last week of the experiment. The maze was built as a circular tank 180 cm in diameter and 60 cm in height. It was filled with water kept at a constant temperature (27 ± 2 °C) and made opaque by adding a white non-toxic dye. For the purposes of the experiment, four equal quadrants were created in the swimming pool (Northeast, Southeast, Northwest, and Southwest), with one of the diagonal lines serving as the starting point. In the target quadrant, a movable circular platform 9 cm in diameter was mounted on a column and placed in the pool 2 cm below the water's surface. The first four days of training were spent teaching the animals where the platform was so that they were able to attempt to find it. A cut-off time of 120 s was chosen for each mouse to swim in the pool. On the test day (the fifth day), when the platform was removed, mice were allowed 60 s to locate it. The total time that the animals spent in the target quadrant of the pool on the test day can be used to measure spatial memory (D'Hooge and De Deyn 2001).

Samples

Once the experimental period had ended, the mice were fasted overnight, euthanized with intraperitoneal injections of sodium pentobarbital, and subjected to a complete necropsy. Samples of brain tissue were collected and homogenized in 9 volumes of ice-cold 0.05 mM potassium phosphate buffer (pH 7.4) using a glass homogenizer. The supernatant obtained by centrifuging the homogenates at 5000 rpm for 15 min at 4 °C was then used to measure biochemical parameters.

Brain neurotransmitter biomarkers

Acetylcholine was measured in brain supernatants using a colorimetric choline/acetylcholine assay kit (BioVision Inc., Waltham, MA, USA). ELISA kits (BioVision) were used to assess the levels of dopamine and serotonin in brain homogenates in accordance with the manufacturer's instructions.

Brain acetylcholinesterase (AchE) activities

The activity of the acetylcholinesterase enzyme was determined using quantification ELISA kits purchased from Cusabio, according to the method of Ellman et al. (1961).

Brain inflammation markers

Tumor necrosis factor-alpha (TNF-α) and interleukin 1β (IL-1β) levels in the brain homogenates were quantified using the RayBio mouse ELISA technique (Bio-Techne Ltd) in accordance with the manufacturer's recommendations.

Brain enzymes

Diagnostic kits (Biodiagnostic Reagent Kits, Dokki, Giza, Egypt) were used to detect the activity of ALT and AST in the brain supernatant.

Statistical analysis

The data are displayed as means ± SD. One-way analysis of variance (ANOVA) was used, and the groups were compared statistically using Duncan's test in the statistical package CoStat 3.03. Differences among the groups were considered significant at p < 0.05. The relationship between escape latency and the measured parameters was assessed using Pearson correlation coefficients.
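A minimal sketch of this statistical workflow follows (SciPy and statsmodels assumed; Duncan's multiple range test has no standard SciPy/statsmodels implementation, so Tukey's HSD stands in here as the post-hoc comparison; all values are placeholders, not the study's data):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Stand-in escape-latency values (s) for three of the six groups
control = np.random.normal(30, 5, 10)
alcl3 = np.random.normal(60, 8, 10)
alcl3_zno5 = np.random.normal(38, 6, 10)

# One-way ANOVA across groups
F, p = stats.f_oneway(control, alcl3, alcl3_zno5)

# Post-hoc pairwise comparisons at p < 0.05 (Tukey HSD as a stand-in for Duncan)
values = np.concatenate([control, alcl3, alcl3_zno5])
labels = ["control"] * 10 + ["AlCl3"] * 10 + ["AlCl3+ZnO5"] * 10
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Pearson correlation between escape latency and a biochemical parameter
ache = np.random.normal(2.0, 0.3, 30)  # stand-in AchE activities
r, p_r = stats.pearsonr(values, ache)
```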
Effect of ZnO-NPs on the AlCl3-induced behavioral alterations (memory deficits) in mice by the Morris water maze (MWM) test

The MWM test was performed to evaluate memory and learning ability in mice (all groups) over 5 days. A two-way analysis of variance was applied to test the significance of the difference between the mean escape latency values (the actual time it took the mice to reach the platform) of the different groups, representing the effect of the various treatments (A) and time intervals (B). Significant values of A (F = 353.86) and B (F = 529.19) at p < 0.001 were obtained, indicating that escape latency differed across the 5 days of training (Table 1). A significant interaction between training days and treatments was also observed (F = 6.57). Duncan's multiple range test revealed that AlCl3 caused a significant increase in escape latency between time intervals at p < 0.05, while ZnO-NPs alone at the two doses produced a significant decrease in escape latency from day one compared to the control group.

Additional analysis with one-way ANOVA showed that mice given AlCl3 demonstrated a substantial (p < 0.05) increase in escape latency and a decrease in time spent in the target quadrant compared to the control group. In comparison to the AlCl3 group, the mice co-treated with ZnO-NPs (5 and 10 mg/kg) had significantly shorter travel times to the platform and spent more time in the target quadrant (Figs. 2 and 3). These findings demonstrate that mice given AlCl3 may recover spatial memory when given ZnO-NPs. Furthermore, in comparison to the control group, mice given ZnO-NPs alone showed statistically significant increases in time spent in the target quadrant and reductions in escape latency, with 5 mg/kg ZnO-NPs being more efficient than 10 mg/kg ZnO-NPs.

Effect of myco-fabricated ZnO-NPs on changes in acetylcholinesterase, acetylcholine, dopamine, and serotonin in the brain of mice

The values of all the earlier parameters almost reverted to control values when mice were treated with ZnO-NPs at the two doses (5 and 10 mg/kg). The results from the lower dose of ZnO-NPs (5 mg/kg) were superior to those from the higher dose (10 mg/kg).

Discussion

The present study demonstrated the ameliorative action of myco-fabricated ZnO-NPs against AlCl3-induced AD in mice. Aluminum crosses into the brain through the specific high-affinity transferrin receptors expressed at the blood-brain barrier (Roskams and Connor 1990). The hippocampus and cortex are crucial for cognitive functions including learning and memory, and these areas are the most susceptible to AD and Al poisoning (Malik et al. 2022). In the current research, supplementation with myco-fabricated ZnO-NPs significantly ameliorated the neural, behavioral, and biochemical abnormalities in AlCl3-induced AD in mice, indicating the beneficial and neuroprotective action of myco-fabricated ZnO-NPs against AD. Baydar et al.
(2003) stated that measurements of behavioral alterations are more sensitive signs of neurotoxicity during AlCl3 exposure than neurochemical variations. According to our current findings, mice treated with AlCl3 had poorer spatial memory and accuracy, as shown by higher escape latencies to the platform and less time spent in the target (platform) quadrant during the MWM test. This may be due to the accumulation of aluminum in the brain, which increases AchE activity, inflammation, and the accumulation of beta-amyloid, and reduces the antioxidant activity that supports learning and memory (Thenmozhi et al. 2015). This behavioral alteration is confirmed by the biochemical changes in the AlCl3 groups compared with the normal group. Our study findings are in agreement with the earlier article by Ekundayo et al. (2022). However, myco-fabricated ZnO-NP treatment at the two different doses significantly restored this diminished spatial learning and memory toward control levels, with the dose of 5 mg/kg bw more powerful than 10 mg/kg bw. This may be related to zinc's antioxidant and anti-inflammatory actions (Jarosz et al. 2017). These findings suggest that myco-fabricated ZnO-NPs have a memory-improving function and a protective effect against AlCl3-induced memory loss through antioxidant, anti-inflammatory, and AchE-inhibitory actions.

In the current investigation, mice treated with AlCl3 displayed substantial changes in brain MDA concentration, GSH content, and SOD and CAT activity, all of which are markers of enhanced oxidative damage and lipid peroxidation caused by aluminum accumulation in brain tissues. Due to its high oxygen consumption and insufficient antioxidant system, the brain is particularly vulnerable to oxidative stress (Parashar and Udayabanu 2017). Thus, the neurotoxicity caused by AlCl3 might be due to the overproduction of ROS, resulting in considerable neuronal injury arising from disorders of the antioxidant defense system. It is widely known that aluminum can cross the blood-brain barrier and build up in many brain tissues, promoting the production of free radicals, which in turn raises protein and DNA oxidation and lipid peroxidation. Aluminum has also been demonstrated to interfere with iron homeostasis by displacing iron from the iron transport protein transferrin, increasing the amount of redox-active iron in brain tissues (Vieelien et al., 2022). This causes significant oxidative damage and may result in brain injury, particularly in regions of the brain associated with memory and learning (Saba et al. 2017). Khan et al. (2011) found that the significant decrease in GSH in the AlCl3-treated group may have resulted from aluminum attaching to the SH group of GSH, which can then be excreted, reducing GSH's capacity as a nucleophilic scavenger. Nehru and Anand (2005) observed that the decreased activities of SOD and CAT in the brains of rats exposed to AlCl3 may be attributed to a decrease in the synthesis of enzyme proteins. Similarly, numerous studies have stated that declines in the activities of SOD and CAT are connected with AD (Jadhav and Kulkarni 2022; Ekundayo et al. 2022; Ojha 2023; Chen et al. 2021).
Antioxidants are among the potential agents that can stop AD from starting and progressing. In the current study, the mice treated with myco-fabricated ZnO-NPs alone had higher GSH levels and higher SOD and CAT activities than the other groups. This may be attributable to an increase in Zn concentration in the brain tissue as a result of ZnO nanoparticle dissociation. According to Sidhu and Garg (2005), zinc reduces the action of pro-oxidant enzymes, inhibits lipid peroxidation, and promotes the production of proteins and enzymes such as antioxidant proteins, GSH, CAT, and SOD. According to Abd Elmonem et al. (2021), ZnO-NPs can decrease MDA levels, improve antioxidant enzyme activities, and protect cell membrane integrity from oxidative stress damage. Furthermore, Zhao et al. (2014) verified that Cu-Zn-SOD activity is stimulated by a suitable concentration of ZnO-NPs, and this improvement reduces ROS production. Hence, myco-fabricated ZnO-NPs (5 or 10 mg/kg) given to the AlCl3 group produced a significant reduction in brain MDA and significantly improved SOD and CAT activities as well as GSH levels compared with the AlCl3-treated group. Additionally, the 5 mg/kg dose of myco-fabricated ZnO-NPs was more efficient than the 10 mg/kg dose, meaning that a low dose of these particles had powerful antioxidant effects by increasing antioxidant activity and lowering free radical levels. Numerous authors have studied the connection between oxidative stress and inflammation and discovered that high levels of pro-inflammatory cytokines are associated with low antioxidant levels and insufficient antioxidant enzyme activity (Salim et al. 2012). Our findings indicated an increase in brain TNF-α and IL-1β, which may be related to the increase in oxidative stress induced by aluminum in the AlCl3 group. According to Popa-Wagner et al. (2013), ROS produced in brain cells can alter synaptic and non-synaptic transmission in neurons, leading to neuro-inflammation, cell death, neurodegeneration, and memory loss. Previous studies have shown that neuro-inflammatory cytokines reduce the efflux transfer of amyloid (Aβ), which results in increased Aβ concentrations in the brain (Blasko et al. 1999). Amyloid β plaque formation in the brain is one of the primary causes of AD (Murphy and LeVine 2010). Additionally, it has been shown that TNF-α plays an essential role in Aβ-mediated destruction of LTP, a kind of synaptic plasticity directly related to memory and learning (Wang et al. 2005).

In the current investigation, it was found that myco-fabricated ZnO-NP co-treatment with AlCl3 decreased the elevated levels of TNF-α and IL-1β in mice's brains in comparison with AlCl3 alone, which had significantly raised these cytokines. This may be attributed to the elevated Zn content in the brain tissue. Zinc improves the up-regulation of the A20 protein (TNF-α-induced protein 3), a highly conserved protein with seven zinc finger (ZnF) domains in its C-terminus, which dampens NF-kappaB activation, causing reduced gene expression and generation of TNF-α, IL-1β, and IL-8 (Dardenne 2002; Prasad 2008). This result confirms our previous in vitro study, which showed that myco-fabricated ZnO-NPs have wound-healing and anti-inflammatory actions (El-Sayed et al. 2023a, b). Our findings concur with earlier research that showed the ability of ZnO-NPs to reduce inflammation (Ekundayo et al. 2022; Chen et al. 2021; Abdulmalek et al. 2021).
Aspartate aminotransferase and alanine aminotransferase are active brain enzymes found in the cytosol and mitochondria, and they have a role in glutamate metabolism (Palailogos et al. 1989). The significant decline in brain AST and ALT activities in the AlCl3 group could be related to oxidative stress arising from the buildup of aluminum in brain tissues, which disturbs protein synthesis and causes a decrease in ALT and AST activities (Netopilová et al. 2001). The decline in transaminase enzymes indicates a decrease in glutamate metabolism, causing neurological dysfunction. Glutamate plays a part in synaptic plasticity, one of the key neurochemical bases of memory and learning, which is important for cognitive processes in the brain (Meldrum 1994). Bartos et al. (2019) reported that oxidative stress induced by exposure to fluoride caused a decline in ALT and AST enzymes in the brains of offspring rats, leading to a decrease in glutamate, a possible mechanism of neurotoxicity and memory impairment. Moreover, Amel et al. (2016) found that giving rats 1000 ppm lead acetate in drinking water decreased brain ALT and AST. Myco-fabricated ZnO-NPs ameliorated these enzymes, which may be related to their antioxidant effects.

The current investigation found that AlCl3 significantly increased AchE activity and decreased Ach levels. This might be related to aluminum's allosteric interaction with the peripheral anionic site of the enzyme molecule (Pohanka 2011), producing a variation of its secondary structure and thereby increasing its activity (Zatta et al. 1994). An additional explanation for the elevated AchE could be IL-1β overproduction, which stimulates the activity and expression of AchE through the interaction of IL-1β with muscarinic acetylcholine receptors (Schliebs et al. 2006). The increase in AchE may also be due to increased oxidative stress and lipid peroxidation and decreased antioxidant capacity induced by aluminum in brain tissues (Kumar and Gill 2014). Kaizer et al. (2005) suggested that changes in the lipid membrane might be responsible for an alteration in the structural form of the AchE molecule that induces AchE activity after prolonged exposure to aluminum. Additionally, the levels of the neurotransmitters dopamine and serotonin (5-hydroxytryptamine, 5-HT) in the mice's brain tissue significantly decreased in the AlCl3-exposed animals. This could be linked to the oxidative stress caused by AlCl3, which promotes the oxidation of tryptophan, the precursor of 5-HT. Indeed, both ROS and pro-inflammatory cytokines can convert tryptophan to kynurenine. According to Bakunina et al. (2015), this molecule can be further metabolized to the pro-oxidant substances 3-hydroxykynurenine and quinolinic acid, which are linked to the development of depression. Also, Cunnington and Channon (2010) illustrated that an increase in ROS can reduce the availability of tetrahydrobiopterin (BH4). BH4 is a cofactor used by the three aromatic amino acid hydroxylase enzymes to produce the precursors of the major monoamine neurotransmitters dopamine and serotonin from aromatic amino acids such as phenylalanine, tyrosine, and tryptophan (Kappock et al. 1996); reduced BH4 availability therefore lowers the synthesis of dopamine and serotonin from these precursors. The results of our investigation concurred with those of the earlier study by Ekundayo et al. (2022).
We found that AchE activity in brain tissues was significantly suppressed at both doses of myco-fabricated ZnO-NPs (5 and 10 mg/kg), which confirms our previous in vitro study (El-Sayed et al. 2023a), in which myco-fabricated ZnO-NPs exerted a strong inhibitory influence on AchE through direct interaction, as shown by molecular docking. Inhibited AchE activity can prevent Ach from being degraded in the synaptic cleft, resulting in a buildup of Ach, which augments cholinergic neurotransmission and improves memory and cognition in animals. This result suggests that myco-fabricated ZnO-NPs ameliorate neurodegenerative symptoms in AD through their antioxidant, anti-inflammatory, and AchE-inhibiting actions. Thus, ZnO-NPs can enhance cognition in experimental animals by elevating acetylcholine at synapses. The anticholinesterase activity of ZnO-NPs detected in our research is in alignment with previous studies (Guo et al. 2020; Hamza et al. 2019). According to Lu et al. (2013), zinc was found to mitigate the negative effects of aluminum exposure on AchE activity, dopamine and serotonin levels, and brain redox status.

Longer escape latencies to reach the platform and less time spent in the target quadrant during the MWM test in AlCl3-treated mice indicate deteriorating spatial memory, reflecting poor learning and memory. As discussed above, these behavioral results support the biochemical findings, and myco-fabricated ZnO-NP treatment at both doses (with 5 mg/kg bw more powerful than 10 mg/kg bw) significantly restored the diminished spatial learning and memory toward control levels.

Conclusion

The findings of this work demonstrate that myco-fabricated ZnO-NPs provide neuro-amelioration against an experimental AD model caused by AlCl3 by reducing IL-1β, TNF-α, MDA, and AChE activity and by increasing the production of GSH, SOD, and CAT. Myco-fabricated ZnO-NPs also improved behavioral alterations by reducing escape latency and increasing time spent in the target quadrant. This indicates that myco-fabricated ZnO-NPs (especially at 5 mg/kg bw) have antioxidant, anti-inflammatory, and AChE-inhibitory actions, and these in vivo results confirm our previous in vitro study.

Fig. 1. A schematic diagram showing the experimental design.

Fig. 2. The MWM test was used to determine the effects of ZnO-NPs on escape latency in AlCl3-stimulated behavioral changes in AD mice. The results are provided as mean ± SD (one-way ANOVA followed by Duncan's test).

Table 1. Effects of ZnO-NPs on escape latency during 5-day intervals in AlCl3-stimulated behavioral changes in AD mice. F: two-way analysis of variance; A: comparison among the treatments; B: comparison among the time intervals; A and B: the interaction between treatments and times.

Table 2. Effect of myco-fabricated ZnO-NPs on lipid peroxidation and antioxidant indicators in the brain of mice administered AlCl3.
Table 3. Effect of myco-fabricated ZnO-NPs on changes in pro-inflammatory cytokines and transaminase enzymes in the brain of mice induced by AlCl3. The values represent the mean and standard deviation of ten mice from each group. Means with different superscripts within the same row differ significantly at p ≤ 0.05.

Table 4. Effect of myco-fabricated ZnO-NPs on changes in acetylcholinesterase, acetylcholine, dopamine, and serotonin in the brain of mice induced by AlCl3. The values represent the mean and standard deviation of ten mice from each group. Means with different superscripts within the same row differ significantly at p ≤ 0.05.

Table 5. Correlation coefficient between escape latency and biochemical parameters in the brain of mice.
6,108.2
2023-08-09T00:00:00.000
[ "Medicine", "Environmental Science", "Materials Science" ]
A Case Study Based Slope Stability Analysis at Chittagong City, Bangladesh

Heavy rainfall occurs almost every year in Bangladesh and induces landslides in the hilly regions of the country. Among them, Chittagong City presents the worst scenario, as a dense population lives there, extending from the plain lands to the hilly areas. For risk mitigation and management in this landslide-prone city, the slope safety margin should therefore be determined. In this context, this article presents factor of safety (FS) values in terms of landslide hazard at Chittagong City, based on geotechnical parameters and slope geometry; a preliminary idea of the allowable stress for slope design can thus be drawn from this study. In total, 16 hazard sites of the 2007 and 2008 rainfall-induced landslides were examined as a case study, along with subsequent collection of in situ soil samples of the failed slopes for geotechnical laboratory analysis. For FS calculation, the limit equilibrium method was applied at the hazard sites. The results imply that an FS value of more than 1.57 should be used for the slope safety margin. Moreover, from a probabilistic approach, the authors recommend FS > 1.80 as the optimum value for the region. Furthermore, a relationship between the slope height to slope length ratio (or slope angle) and FS was established for this region for quick calibration of the FS value by simple on-field measurement of slope parameters. It is expected that this scenario-based finding will contribute to the mitigation of landslide hazard risk in the study area. Additionally, site-specific FS values are presented in a 3D contour map. This research ascertains the location-wise slope strength requirement and can be considered a guideline for future slope safety design calculations against rainfall-triggered landslides in this city.

Introduction

Landslide or slope instability is a result of stress exceeding the shear strength of the slope material. The excess stress can arise from increasing pore water, excessive overburden pressure due to external load, etc. Moreover, poor soil conditions, weathering, slope geometry, soil stratification, discontinuities in the rock body, etc. are other common factors that decrease soil strength and promote slope instability (Islam et al., 2014). Slope height and steepness are also pivotal; as described by Putra and Choanji (2016), with increasing slope height, surface runoff and water transport energy also increase through the action of gravity, and at the same time steep slopes tend to erode more quickly, both of which lead to unstable slopes.

Chittagong City, in the southeastern hilly region of Bangladesh, is a landslide-prone city where devastating landslides result in casualties and damage to property almost every year (Islam et al., 2017; Mia et al., 2017). Table 1 lists some of the major historic and recent landslide events. It is the second largest city of the country and is also regarded as the business capital.
Almost one-third of the Chittagong City Corporation area is occupied by hilly terrain. These hilly portions of the city are also being chosen for settlements due to rapid urban growth. Unsafe construction on the hills or foothills is the major cause of the worst landslide outcomes. It is documented that excessive rainfall triggers landslides in Bangladesh, facilitated in many places by hill cutting and deforestation (Islam et al., 2014; Islam et al., 2017; Mia et al., 2017). The population of this city, including the hill sites, is growing as people continuously arrive for their livelihood, and constructions are therefore being built ever more rapidly on the hills and foothills (Islam et al., 2017). Therefore, the determination of appropriate, site-specific, real factor of safety (FS) values, based on the geotechnical parameters of slope materials where hazards have already taken place, is important and would be genuinely useful for inferring the degree of strength required in slope design.

Table 1. Historical landslides (in ascending chronology) in Bangladesh (modified after Mia et al., 2017).

Factor of safety (FS) is a very popular term that civil engineers use ubiquitously for risk-free infrastructure development, and it is a classical approach to project the possible relationship between soil strength and expected stress. Among various geohazards, landslides are reported to cause serious damage to life and property. That is why engineering structures in landslide-prone areas need to be given proper design and strength, considering the appropriate FS value along with proper site selection. In this research, FS values were determined following the conventional limit equilibrium method for slope stability analysis. Engineering parameters of soils, alongside slope characteristics of sites where landslide events had already taken place in 2007 and 2008, were the basis of this study. Some authors, e.g., Islam et al. (2014), Islam et al. (2015), etc., have previously determined safety factors through slope stability analysis at various locations of Chittagong City, but this study provides the safety factor based on data from the failed slopes as a case study. Some of these data were previously used by Mia et al. (2017). Notably, devastating rainfall-triggered landslide events took place in the city in 2007 and 2008, which caused the deaths of about 149 people (Khan et al., 2012).
The limit equilibrium method is a common and widely used procedure for slope stability analysis. Its simplicity and its requirement of fewer parameters compared to other methods have kept it popular, even though it has disadvantages such as a constant FS along the slip plane and neglect of the ground response (Baba et al., 2012). Another important factor is that this method of analysis does not require consideration of strain or of uncertainties regarding the engineering parameters of the slope soil, being restricted to shear strength properties, which has also attracted civil engineers to accept these methods widely (Kakou et al., 2001). Moreover, the differences between FS values from this method and those of others are reportedly less than 6%, and even with some drawbacks, this method is frequently used with the pragmatic experience that slopes can be comprehensively delineated by this analytical approach (Duncan, 1996; Blake et al., 2002; Baba et al., 2012). In this method, the FS value in terms of slope stability is assessed by considering the state of equilibrium between the assumed stress and the soil strength along a surface of failure. As a result, the assumed shape of the sliding surface is crucial in these analyses. The shapes popularly in use are the straight line, circular arc, logarithmic spiral, etc., along which the equilibrium conditions are applied (Koranne et al., 2011). Among the various limit equilibrium methods, Taylor's (1948) method and the friction-circle method consider the whole mass to move freely, while the others divide the whole mass into vertical slices to assess the equilibrium condition individually for each slice and are known broadly as the methods of slices (Koranne et al., 2011). In addition, the analyses can also be categorized by whether the slopes are assumed to be infinite or treated as finite surfaces. In this article, a simple limit equilibrium model of infinite slopes, with materials having both cohesive and frictional strength, was used along with the Cousins (1978) stability chart for simple slopes to determine the FS values, as the existing data best facilitate. The slope stability chart was proposed primarily by Taylor (1937) and is now ubiquitously used for the assessment of FS of simple slopes with characteristically uniform soil (Sun and Zhao, 2013; Javankhoshdel and Bathurst, 2014). These graphical methods require simple calculations, rendering quick estimation of the FS of a slope. Hence, these charts can be particularly useful for preliminary design and estimation purposes.

All of the limit equilibrium methods compare the forces, moments, or stresses resisting sliding of a mass with those that could render a slope unstable by examining the equilibrium condition of the slope in response to the gravitational pull. Factor of safety (FS) values are obtained at the end of the analysis as the ratio of the shear strength to the shear stress. Moreover, the critical slip surface is assumed to be the location with the lowest FS value. As the dynamic stability of a slope is related to its static stability, the static factor of safety at each point, e.g., from in-situ field measurements on the slope, can yield the dynamic stability of a slope. For the regional analysis, we used a relatively simple limit equilibrium model of an infinite slope with material having both frictional and cohesive strength. In general, FS is determined by the following formula:

FS = s / t

where s = shear strength and t = shear stress.
FS is the ratio between the forces that prevent the slope from failing, i.e., the shear strength of the slope, and those that make the slope fail, i.e., the shear stress (Kakou et al., 2001). In common practice, FS > 1 indicates stable conditions whereas FS < 1 indicates unstable ones.

Slope stability analyses with inhomogeneous dips and/or soils are now done by software, but slopes with homogeneous inclination and isotropic soils are conventionally studied for slope safety using charts as a quick tool. Taylor (1937) was one of the pioneers who introduced such a chart (Michalowski, 2001; Sun and Zhao, 2013). Moreover, these charts can rapidly render the FS value through an easy calibration process and can also be handy for preliminary site characterization, design and planning (Sun and Zhao, 2013; Huang, 2014). The stability number pertaining to the chart, a dimensionless parameter (λcφ) for homogeneous soil slopes, is also widely used; it was later adopted and described by a number of authors, e.g., Janbu (1956), Cousins (1978), Rahman and Khan (1995), Michalowski (2002), etc., and is generally expressed as:

λcφ = γ H tanφ / c

where γ = unit weight, H = slope height, φ = angle of friction, and c = cohesion of the soil.

For cohesionless (c = 0) soils, λcφ becomes infinite; in this case the critical circle passes through the slope toe in 2D view, and the critical slip surface becomes a plane parallel to the slope surface (Rahman and Khan, 1995; Duncan and Wright, 1980). Cousins (1978), however, used a simple slope with homogeneous soil (Fig. 1) for his stability chart, known as the Cousins stability chart (Fig. 2). Hence, as the next step of this analysis, we used the Cousins (1978) stability chart for simple slopes, and FS values were finally calculated using the following equation:

FS = N_F c / (γ H)

where N_F = the Cousins stability number, determined from the Cousins chart (Fig. 2). Cousins (1978) assumed the critical plane of failure to be a circular arc passing through the toe of the slope. He variably utilized the friction-circle method to construct the chart, from which FS values, critical slip circles and toe circles can be obtained for soils having both friction and cohesion. However, this chart can be used only for slope angles (β) up to 45°. Because of this, the FS values of some places (samples CTG03, CTG05, CTG07 and CTG09), where the slope angle is more than 45°, could not be determined using this chart. Estimation of the FS values with the Cousins chart was performed manually.

Later, an optimum FS value was recommended for the study area based on probabilistic regression. After that, the spatial distribution of the FS values was presented in a 3D map to show the site-specific and spatial variation of the degree of slope strength requirement within the region. The amount of allowable stress can also be inferred from the FS values. The authors expect that the FS values presented in this article will be useful in landslide hazard risk management and proper land use planning in this region.
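Putting the chart workflow above into code form, here is a minimal sketch using the λcφ and N_F relations just given. All numerical values (unit weight, slope height, friction angle, cohesion) are hypothetical, and the stability number N_F still has to be read manually off the Cousins chart, exactly as the authors did; the script only automates the surrounding arithmetic.

```python
import math

def stability_number(gamma: float, H: float, phi_deg: float, c: float) -> float:
    """Dimensionless parameter lambda_c_phi = gamma * H * tan(phi) / c."""
    return gamma * H * math.tan(math.radians(phi_deg)) / c

def fs_from_cousins(N_F: float, c: float, gamma: float, H: float) -> float:
    """Factor of safety from a Cousins stability number read off the chart."""
    return N_F * c / (gamma * H)

# Hypothetical slope: gamma = 18 kN/m3, H = 12 m, phi = 28 deg, c = 15 kPa
lam = stability_number(18.0, 12.0, 28.0, 15.0)
# Suppose the chart gives N_F = 25 for this lambda and the slope angle (illustrative)
print(round(lam, 2), round(fs_from_cousins(25.0, 15.0, 18.0, 12.0), 2))
```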
Geology and geomorphology

The study area comprises a part of the Chittagong hill tracts, which lie geologically on the western margin of the Chittagong-Tripura folded belt (CTFB) and, more precisely, in the plunge zone of the Sitakund anticline (Islam et al., 2018). This Tertiary fold belt on the eastern margin of the Bengal basin, the basin itself being situated on the eastern periphery of the Indian plate, came into existence through the intricate interaction among the Indian plate, the Eurasian plate, and the Burmese platelet (Rahman et al., 2017; Farazi et al., 2018). Like all other anticlines of the CTFB, the Sitakund anticline has a NNW-SSE axial trend. Accordingly, various rock formations, i.e., the Dupi Tila Sandstone Formation, the Tipam Sandstone Formation, and the Bokabil Formation, are exposed from east to west in the study area. Sandstones of the Dupi Tila Formation lie at the top and are therefore exposed at the surface of the hill tracts of the city. Sandstones of this formation are loose and friable and hence have low shear strength, whereas sandstones of the Tipam Formation are mostly hard and compacted unless highly weathered (Islam et al., 2015).

Varied geomorphology and topography are observed in Chittagong City because of its position in the hilly region. The northern and eastern parts of the city have hilly terrain. The western portion is bounded by the coastal plain and the Bay of Bengal, while the southeastern and northeastern parts of the city are covered by the Karnafuli River and the floodplain of the Halda River, respectively (Mia et al., 2017). Three geomorphologic units are largely seen in this region: 1) hills and associated valleys in the north, with 12-80 m elevation; 2) fluvio-tidal plains in the west and south, including tidal plains of the Bay of Bengal, with 5-10 m elevation; and 3) the Karnafuli River floodplain, with 5-10 m elevation (Mia et al., 2017).

Materials and Methods

This article examines the factor of safety (FS) values of the slopes at the landslide hazard sites of Chittagong City in order to recommend the minimum FS value that should be considered in this region. Therefore, in-situ samples from 16 hazard sites of the 2007 and 2008 landslides were collected, along with the necessary slope parameters, i.e., slope height and slope angle (inclination), to calibrate FS values against landslides as a case study. FS values were calculated by means of the Cousins (1978) chart combined with the simple limit equilibrium method for infinite slopes. The collected samples of the slope materials belong to the Dupi Tila Sandstone Formation. Slope geometry was measured in the field with a measuring tape and a clinometer.

Notably, during the investigation we found that the slumping took place principally within the sandstones of the Dupi Tila Formation. Samples of this formation were therefore collected by auger for laboratory analysis. Table 2 presents the slope characteristics and geotechnical parameters of the soil samples used in this study. The strength parameters, i.e., the friction angle and cohesion of the soil samples, were obtained by direct shear tests under consolidated drained conditions at the Engineering Geology Laboratory of the Geological Survey of Bangladesh (GSB). The ASTM D3080 standard was followed for this analysis.

Following the estimation of the FS values, a power-based regression curve for these values against the tangent of the slope angles (tanβ) was drawn (Fig. 4).
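Since the chart relations depend on c and φ, a brief sketch of how these strength parameters could be extracted from direct shear test results may be useful. The stress pairs below are hypothetical stand-ins, not the GSB laboratory data, and the fit is a plain least-squares line for the Mohr-Coulomb envelope τ = c + σ_n tanφ.

```python
import numpy as np

# Hypothetical direct shear test results (normal stress sigma_n vs peak shear
# stress tau, both in kPa); real values would come from the GSB laboratory.
sigma_n = np.array([50.0, 100.0, 150.0, 200.0])
tau = np.array([38.0, 62.0, 88.0, 110.0])

# Least-squares fit of the Mohr-Coulomb envelope tau = c + sigma_n * tan(phi).
slope, intercept = np.polyfit(sigma_n, tau, 1)
phi_deg = np.degrees(np.arctan(slope))
print(f"c = {intercept:.1f} kPa, phi = {phi_deg:.1f} deg")
```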
Here it should be noted that tanβ is the ratio of the slope height to the slope length. The reason for selecting a power-based regression curve is that the coefficient R² was highest for the power curve compared to other curves such as linear, logarithmic, exponential, etc. Further, Islam et al. (2017) showed that all the failed slopes in Chittagong City have slope angles (β) > 15°. Therefore, we regarded the FS values below the crossing point of the line representing tan15° and the best-fit curve as unsafe, and the FS values above that point as safe, in terms of rainfall-induced landslide hazard in the study region. The optimum FS value we inferred from this probabilistic approach was thus > 1.80. Additionally, we found from this approach that the relationship between tanβ (x) and FS (y) is:

FS = 1.227 (tanβ)^(−0.035)

This relationship can be used as a quick tool for on-field primary estimation of the degree of strength required for slope design in the study area, in terms of the optimum FS value, by simply measuring the slope height, the slope length and/or the slope angle.

After derivation of the FS values at the various hazard sites, they were plotted on a 3D map (Fig. 5) showing the site-specific and spatial variation of the FS values by means of contour lines of equipotential FS value. The map was produced in the ArcGIS environment using a 30 m digital elevation model (DEM) for the 3D representation, and contouring was performed by interpolation with the Kriging method.

It should be noted that the sand-dominated soil samples were not truly homogeneous at each of the locations presented in Table 2; rather, minor shale beds were present within the sand-dominated units at some places. Nevertheless, we assumed the slope materials to be homogeneous for the simplification of this study.
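The regression step described above can be sketched as follows. The (tanβ, FS) pairs here are hypothetical placeholders for the values of Table 3; the article's own fitted relation is FS = 1.227(tanβ)^(−0.035), so the coefficients produced by this toy fit will differ.

```python
import numpy as np

# Hypothetical (tan_beta, FS) pairs standing in for Table 3; a power curve
# y = a * x**b is fitted in log-log space, as in the article's approach.
tan_beta = np.array([0.36, 0.47, 0.58, 0.70, 0.84])
fs = np.array([1.42, 1.30, 1.21, 1.12, 1.02])

b, log_a = np.polyfit(np.log(tan_beta), np.log(fs), 1)
a = np.exp(log_a)
print(f"FS = {a:.3f} * tan(beta)^({b:.3f})")

# FS at the tan(15 deg) = 0.27 crossing used to separate safe from unsafe:
print(round(a * 0.27 ** b, 2))
```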
Result and Discussion

Table 3 presents the site-specific factor of safety (FS) values, in terms of landslides, based on the data collected at the various landslide hazard sites of the Chittagong Metropolitan area, for deciphering the slope stability conditions there. The study was carried out with the simple limit equilibrium method for infinite slopes and the Cousins (1978) chart, which was best suited given the limitations of the available facilities and data. The data were collected from the 2007 and 2008 landslide locations. As the use of the Cousins chart is limited to slope angles of up to 45°, FS values for the locations of samples CTG03, CTG05, CTG07 and CTG09 could not be determined. We found that the FS values in this area range from 0.94 to 1.57. The highest value was found at Sikandarpara (CTG06) and the lowest at Kusumbagh (CTG02). It should be kept in mind that, whatever the FS value is, it comes from a failed slope. This research was conducted on failed slopes of the 2007 and 2008 landslides, and the FS values were calculated within the particular failure blocks, which means that an FS value as high as 1.57 is not enough for slope design in this region; in other words, this value is very likely inadequate for slope safety. Hence FS > 1.57 could be recommended for safe slope design in this region. Going further, however, a probabilistic approach was employed in this study for a more accurate evaluation of the slope strength requirement. FS values above the crossing point between the regression curve from the plot of tanβ vs FS and the line representing tan15° were assumed safe (Fig. 4). Thus FS > 1.80 would render a slope safe, while FS < 1.80 would indicate an unsafe condition. Moreover, a relationship, FS = 1.227 (tanβ)^(−0.035) (Eqn. 4), was established between the tangent of the slope angle (tanβ), i.e., the slope height to slope length ratio, and FS for this region. Using this probabilistic relationship, FS values can be estimated even for slopes with slope angles (β) > 45° in the hills of the city, overcoming the limitation of the chart. Another advantage of the outcome of this probabilistic approach is that the relationship between tanβ and FS can be used as a quick tool for estimating FS, and hence the slope safety requirement, as well: simple on-field measurement of the slope height and slope length, or even of the slope angle (β) alone, serves the purpose, because tanβ is the ratio of slope height to slope length.

In addition, the spatial variability of the FS values is presented on a 3D contour map (Fig. 5), so that the site-specific strength requirement or allowable stress can be discerned. This map could practically serve as a substitute for the existing geographic information system (GIS) and DEM based landslide hazard potential maps of Chittagong City.

Notably, it can be seen from Table 2 that all of the failed slopes have slope angles exceeding the internal friction angle, except for sample CTG06. The exception probably arose from hill cutting, which has escalated the landslide phenomena at that particular place.

The FS values that geotechnical engineers conventionally use are based on experience. FS is useful because it reduces the degree of uncertainty and helps in assessing the risk associated with slope stability analysis, since exact computation is nearly impossible (Duncan, 1999; Huang, 2014). Furthermore, the design stress or stress limit, i.e., the maximum stress that the soils of a particular location can absorb without collapse or damage, can be inferred from the FS values, because we know from Harris (1995), Farazi and Quamruzzaman (2013), etc., that allowable stress = failure stress / FS.
So, the FS values of the failed slopes of Chittagong City would be highly useful for proper slope design prior to construction in the hilly portions. Engineers and planners can take from this article a guideline on the soil strength required and the stress limit for infrastructure development, and on which areas are suitable for masonry work. This article can therefore show a primary way towards landslide-hazard-risk-free civil engineering work, and eventually towards capacity building and sustainable development in this region.

Conclusions

Slope stability analysis with the notion of factor of safety (FS) was performed at 16 locations in Chittagong City. The analysis was carried out by the limit equilibrium method for infinite slopes together with the Cousins stability chart. Slope geometry data and in-situ soil samples for this research were collected from failed slopes at the 2007 and 2008 rainfall-induced landslide locations. We found FS values ranging from 0.94 to 1.57 within the study area. It is therefore clear from this study that 1.57 is not an adequate value for the slope safety requirement in this region. Further, based on a probabilistic approach, we recommend that FS > 1.80 should be used for slope design with proper strength. In addition, a relationship between the slope height to slope length ratio, i.e., the tangent of the slope angle (tanβ), and the FS values was established for this particular region: FS = 1.227 (tanβ)^(−0.035). This relationship can be handy for a quick estimate of the FS value, and hence of the slope safety requirement, by simple on-field measurement of the slope height and slope length, or of the slope angle alone. Another important observation is that at each of the hazard sites the slope angle exceeds the internal friction angle of the soil, except at one site, which we attribute to hill cutting. The spatial variation of the FS values is depicted in a 3D contour map, from which the site-specific strength requirement or maximum allowable stress for slope design can be predicted as a nascent guideline. Stability charts thus serve the preliminary design of slopes through quick stability analysis. The FS values, their relation to slope geometry, and the resulting map from this article can act as a primary guideline for landslide risk mitigation and risk management in this locality, as well as a basis for future estimation of slope safety requirements. Besides, this article can also play a useful role in remedial measures, ground improvement, and risk-informed land use planning for this city.

Fig. 3. Location map of Chittagong city showing the study locations of the 2007 and 2008 landslide sites.

Fig. 4. Power-based regression curve from the plot of tanβ vs FS. The red line indicates tan15° = 0.27. FS values above the crossing point of the regression curve and the line representing tan15° have been marked safe (FS > 1.80, above the green line) and the values below the point have been marked unsafe (FS < 1.80) for Chittagong City.

Fig. 5. A 3D contour map showing the spatial variation of the factor of safety (FS) of slopes within the hilly areas of Chittagong City.

Table 2. Slope geometry parameters and engineering properties of the soils at the 2007 and 2008 landslide locations used in this study.

Table 3. Factor of safety (FS) values in terms of slope stability at various locations of Chittagong city.
5,488
2018-09-01T00:00:00.000
[ "Geology" ]
Review

An Astrobiological View on Sustainable Life

Life, on a global biosphere basis, is substantiated in the form of organics and organisms, and is defined here as the intermediate forms (briefly expressed as CH2O) hovering between the reduced (CH4, methane) and oxidized (CO2, carbon dioxide) ends, in contrast to the classical definition of life as a complex organization maintaining ordered structure and information. Both definitions regard the sustenance of life as the protection of life against chaos through an input of external energy. The CH2O life connection is maintained as long as the supply of H and O lasts, which is in turn provided by the splitting of the water molecule H2O. Water is split by electricity, as is well known from school-level experiments, and, on a global scale, by solar radiation and geothermal heat. In other words, the Sun's radiation and the Earth's heat, as well as radioactivity, split water to supply the H and O needed for the continued existence of life on the Earth. These photochemical, radiochemical and geothermal processes have influenced the evolution and current composition of the Earth's atmosphere, compared with those of Venus and Mars, and have influences on planetary climatology. This view of life may be applicable to the "search for life in space" and to the sustainability assessment of astrobiological habitats.

Introduction

What is life? Erwin Schrödinger, the 1933 Nobel Laureate in Physics, tackled this long-standing question and defined life as an organization that maintains complex structure and heritable information at the expense of "negentropy" [1]. Negentropy is a useful conceptual tool for understanding the physical basis underlying the sustenance of biological machinery, and is the counter-concept of entropy defined by the second principle of thermodynamics, i.e., the time-arrow theory. According to the principle, the total amount of available energy, or exergy, decreases irreversibly with time, and entropy is a measure of this ever-increasing unavailability, partly as heat. Local entropy within a system may decrease at the expense of potential energy, i.e., negentropy, and this situation is substantiated in living organisms, which expend chemical potential energy to maintain their structure and information. Heat, the energy-in-transit, does not directly support life by itself, but it may generate chemical potential energy via thermochemical reactions.

The question "what is life" is thus transformed into "what supports life", and an answer for a living organism is the chemical potential energy that lowers local entropy, despite an increase in overall entropy. The question is then extended to the larger system in which living organisms live, and revised as "what supports the biosphere". The Earth's biosphere receives heat, broadly speaking, from both external and internal sources, i.e., the Sun and the Earth's interior, respectively. Heat from the Sun derives ultimately from the gravitational and nuclear potential energy of the hydrogen gas disk in the Hadean (Pregeologic) Eon, while heat from the Earth's interior originates from the gravitational potential energy of the silicate gas disk and microplanets and from radioactive 40K, respectively. Contraction of gas disks and accumulation of microplanets liberated gravitational potential energy to yield heat. The heat of the ancient Sun ignited nuclear fusion, making it burn as a star and irradiate the Earth at the solar constant of 1.4 kW m⁻². The profound underlying problem is how heat potentiates the Earth to host life.
Heat as energy-in-transit may form locally ordered structures in an open, non-equilibrated system, as advocated by Ilya Prigogine, the 1977 Nobel Laureate in Chemistry [2]. A visual example of a locally organized structure formed by heat is Prigogine's hexagon, or Rayleigh-Bénard convection [3] (Figure 1). The heat used in the experiments to form convection cells derives from chemical or electrical potential energy, and therefore chemical potential energy indirectly (via heat) forms convection cells. The minimum unit of living organisms is coincidentally called a "cell", and biological cells are maintained by the chemical potential energy contained in foods, or organic compounds. Animals eat organic compounds produced by others, while plants produce organics for themselves and others via photosynthesis. These organic-eaters and organic-producers are collectively called heterotrophs and autotrophs, respectively. The non-photosynthetic mode of autotrophy is called chemolithoautotrophy, by which organics are produced at the expense of chemical energy liberated from the oxidation of inorganic compounds such as hydrogen and hydrogen sulfide. This mode of chemolithoautotrophic life is known to thrive in the deep sea and the deep sub-seafloor. This chapter tries to apply Schrödinger's negentropy concept to the biosphere, and evaluates the sources of chemical potential energy for chemolithoautotrophic lives in the deep sea and deep sub-seafloor [4,5] from a planetary point of view.

Vortex of Life

Prigogine's hexagons, or Rayleigh-Bénard convection cells, are generated by a continuous flow of heat. A continuous flow of chemical potential energy maintains life. Similarly, a continuous flow of water forms vortices, and therefore the vortex serves as a key idea for understanding life.

Flow of water sometimes forms vortices. A vortex is only a temporal pattern of water movement, composed of different water molecules coming in and going out moment by moment. Kamo-no-Chomei, a Japanese medieval essayist, expressed in Hojoki (1212) his view of transitory life as follows: "The flowing river never stops and yet the water never stays the same. Foam floats upon the pools, scattering, re-forming, never lingering long. So it is with man and all his dwelling places here on earth" (translated by Moriguchi and Jenkins [6]). Chomei's transitory bubbles are parallel to the vortices in my view of life. Although the atoms and molecules of my body have been replaced since my birth, I have never doubted my continuity and identity. That is, my identity is based more on a pattern, like a vortex, than on materials, and I represent a tiny vortex of life (Figure 2). Vortices consist of water molecules flowing in and out every second, and are kept alive by a constant current caused by a water height gradient. Water flows from highs to lows, and manifests gravitational potential energy as kinetic/mechanical energy, and even as electric energy at hydraulic power plants (Figure 3), as well as in vortices. Vortices are formed and maintained by the slopes between high and low water tables. If the water tables become flat, there will be no flows and no vortices.
By what means is water transported to high places? Water cycling, i.e., evaporation and precipitation, is mainly driven by solar heat energy. Therefore, hydraulic power plants are said to convert solar heat energy to electric energy via the gravitational potential energy of water. The hydraulic "high" means water at high spots containing high gravitational potential energy. Then, what corresponds to the chemical "high" for chemical potential energy? Taking the example of organic compounds (generalized as CH2O), they manifest chemical potential energy through combustion (parallel to water falling), or oxidation, to yield the most oxidized form of carbon (CO2) at the lowest point. Hence, the chemical high and low correspond to more oxidizable (more reduced) and less oxidizable (less reduced) states, respectively (Figure 3). The Sun transports water to high places via evaporation, adding gravitational potential energy. Similarly, the Sun splits water into hydrogen (the source of reducing power) and oxygen (the source of oxidizing power). The splitting of water yields chemical highs and lows, or reducing and oxidizing ends, and thus forms the chemical vortex of life. Water may also be split by geothermal heat, including the radioactivity of rock-borne 40K, as described later.

Life Vortex as Intermediate between CH4 and CO2

Since life on the Earth is carbon-based, the reduced and oxidized ends of carbon, methane (CH4) and carbon dioxide (CO2), respectively, are mainly discussed. Organics and organisms are simply and collectively expressed here as CH2O, instead of the commonly used R for an organic functional group, because it makes it easily understandable that organics and organisms are substantially intermediates of CH4 and CO2. The merit of using "CH2O" to represent organics/organisms would overcome the potential confusion arising from its coincidence with the exact chemical formula of formaldehyde.

Erwin Schrödinger defined life as manifested by biological cells or individuals that maintain structure and information by eating negentropy [1]. In contrast, I view life as "transitory intermediates on the balance of hydrogen and oxygen supplies", considering the Earth's biosphere. In this sense, organisms are only ephemeral, hovering between life and death. This view is depicted in Figure 4. Methane is produced abiotically, for example via the Sabatier reaction [7], and it is also produced by methanogenic microorganisms mediating the biological counterpart of the Sabatier reaction in deep-sea hydrothermal vents and the deep subsurface [4,5].
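The "slope" from CH4 down to CO2 can be made quantitative with elementary oxidation-state bookkeeping, as in the sketch below. This is standard chemistry rather than a computation from the article: taking H as +1 and O as −2, the mean oxidation state of carbon in CxHyOz is (2z − y)/x, which places CH4 at −4, the CH2O intermediates at 0, and CO2 at +4.

```python
# Mean oxidation state of carbon in C_x H_y O_z, taking H = +1 and O = -2:
# ox(C) = (2*z - y) / x. This orders compounds along the CH4 -> CO2 "slope".
def carbon_oxidation_state(x, y, z):
    return (2 * z - y) / x

compounds = {
    "CH4 (reduced end)": (1, 4, 0),
    "CH2O (organics/organisms)": (1, 2, 1),
    "CO2 (oxidized end)": (1, 0, 2),
}
for name, (x, y, z) in compounds.items():
    print(f"{name}: {carbon_oxidation_state(x, y, z):+.0f}")
```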
Organic compounds and organisms, generally expressed here as CH2O or [CH2O]n, are intermediates of CH4 and CO2, and various organic forms are found in nature. Methane has the maximum number of hydrogens, i.e., four hydrogen atoms per carbon, and the greatest chemical potential energy (890 kJ mol⁻¹) among carbon compounds, manifested via full oxidation to yield carbon dioxide. Methane-oxidizing microorganisms, namely methanotrophs, utilize this chemical energy for the metabolism of growth and reproduction, and are often found in deep-sea hydrothermal vents and the deep subsurface [8]. That is, the life vortex as realized by methanotrophs is manifested on the slope from CH4 to CO2 (Figure 5). All organics and organisms are partially oxidized forms of CH4, and are to be further oxidized to CO2. Oil (petroleum) is a mixture of relatively less oxidized hydrocarbon chains (with more hydrogen per carbon on average), while formaldehyde (truly CH2O) and acetaldehyde (CH3CHO) are more oxidized forms, close to the CO2 end. Life vortices may vary in number, size, features, etc., according to the amounts of manifested chemical potential energy (Figure 6).

Splitting of Water

Water, H2O, may be the most common but also the most miraculous molecule in the Universe. The most abundant element in the Universe is hydrogen, followed by helium, oxygen, carbon, nitrogen, and so on. As helium is chemically inert, hydrogen reacts with oxygen to form water, with carbon to form methane, and with nitrogen to form ammonia. Among these hydrogen compounds, water displays a number of peculiarities not seen in the other hydrogen compounds, as follows: (1) Water molecules are held together by sharing electrons among two hydrogen atoms and one oxygen atom via two covalent bonds; (2) Water is a polar molecule: although the net electrical charge on the molecule is zero, its structure causes the molecule to be polarized; (3) Electrostatic bonds (hydrogen bonds) form between the negatively charged oxygen side of one water molecule and the positively charged hydrogen of another molecule; (4) The existence of these hydrogen bonds explains many of the unique properties of water; (5) Ice has an orderly, open structure of water molecules held together by hydrogen bonding; (6) The crystal structure of water ice results in a lower density (0.92 g cm⁻³) than that of liquid water, 1.0 g cm⁻³; (7) The structure of liquid water is intermediate between those of ice and water vapor, and consists of two types of aggregates of water molecules; (8) Structured water is composed of clusters of hydrogen-bonded water molecules that form and re-form very quickly, but slowly enough to influence the physical behavior of water; (9) Unstructured water is composed of closely packed free water molecules, denser than structured water; (10) If hydrogen bonding did not exist, water would only occur as a gas at the Earth's surface; (11) Water is the only naturally occurring substance on the Earth to exist at the surface in all three states: liquid, solid and gas; (12) Water dissolves more substances in greater quantity than any other common liquid; (13) Water has the highest surface tension of all liquids; (14) Water has the highest heat capacity of all common solids and liquids, which prevents extreme ranges in aquatic temperature; (15) The boiling and melting points of water are higher than those of other hydrogen compounds of similar size or of the oxygen group (8O, 16S, 34Se, 52Te and so on); for example, CH4, NH3, H2S, H2Se and H2Te occur as gases
at room temperature; and (16) Water has the highest heat conductivity of any common liquid. A metaphysical consequence of the above-listed peculiarities of the water molecule is that hydrogen and oxygen atoms, i.e., H and O, attract each other. The most abundant and the third most abundant elements in the universe bind by strong covalent bonding to contain chemical potential energy, and they may split upon the input of external dissociation energy. The sources of external dissociation energy are the Sun (solar radiation) and the Earth (geothermal heat, including 40K radioactivity); water molecules split via the light reaction of photosynthesis, photochemical reactions (photolysis), 40K radiation (radiolysis) and high-temperature water-rock interaction (thermolysis) (Figure 7).

Testing Planetary Biospheres

The Earth's biosphere is sustained by solar and geothermal heat via the splitting of water. Even if the Sun's radiation ceased, some part of the Earth's life in the deep would continue as long as the Earth stayed alive with liquid water and active plate/plume tectonics, or volcanism (Figure 8a). Splitting of water would maintain the existence of organics and organisms (CH2O), and the levels of CH4 and CO2 are kept below 0.04%.

No liquid water, but water vapor, exists on Venus, due to the high surface temperature of about 500 °C. Water vapor may split photochemically. The resultant hydrogen escapes to the extra-Venus milieu, and the leftover oxygen accumulates in Venus' atmosphere, which has a CO2 content of 98%. This biased splitting of water is unlikely to support life (Figure 8b). Hydrothermal activity may have existed or may exist on ancient Mars [8], although the existence of liquid water on ancient and recent Mars is still controversial [9]. Regardless of liquid water, no plate tectonic activity is expected on modern Mars, due to the short longevity of the planet, which is as light as 1/10 of the Earth's mass. Therefore, only photochemical splitting, or photolysis, of water is presumed for the Martian environment [10], and accumulation of oxygen in the Martian atmosphere, as presumed for the Venusian atmosphere, results in a CO2 content of 95%. Ancient Mars may have hosted geothermal splitting of water, and remnants of ancient Martian life have been suggested in the Martian meteorite ALH84001 [11]; however, the nanofossils are still controversially discussed in the scientific community [12,13]. In contrast, modern Mars is unlikely to be capable of hosting a biosphere (Figure 8c). The latest finding of ground ice of temporal melt-water [14] suggests an icy/watery subterranean Mars. However, the mere occurrence of water on Mars, echoing another recent piece of evidence for water on the Moon's surface [15], does not imply life on modern Mars; the splitting of water has more realistic relevance to life. In contrast, the occurrence of methane has more implications for the splitting of water, according to my scheme (Figure 8c). It is interesting that methane (CH4) is present in the Martian atmosphere [16,17], and that volcanic activity as recent as four million years ago has been suggested [18]. This being so, a modern biosphere may be sustained in the Martian subsurface, which could store liquid water and remnant geothermal heat [19]. Another possibility is hydrogen (H2) production via hydration of one of the most ancient volcanic rocks of the Earth, komatiites, whose occurrence on Mars has also been suggested [20]. This is the splitting of water catalyzed by komatiites, and a recent experiment confirmed that komatiite-catalyzed H2 production is robust enough to
support H2-based methanogenesis [21]. The splitting of water catalyzed by komatiites is likely to have been a source sustaining life forms on the ancient Earth and Mars, and the possibility may extend to modern Mars.

Both liquid water and volcanism are postulated to occur under the ice crust of Europa, Jupiter's satellite (Jupiter II) [22]. That is, geothermal splitting of water is likely to occur within Europa, as well as photochemical splitting of water in Europa's thin atmosphere [23]. The resultant atmospheric oxygen may be incorporated into melted-and-refrozen surface ice and transported to the interior ocean via the tectonic ice convection of Europa [24,25]. This planetary (or satellite) setting may facilitate the formation of an extraterrestrial biosphere (Figure 8d), and thus provides a biospheric basis for a search for life [26].

Astrobiological Conclusion

Life on Earth is carbon-based, and is substantiated as intermediate forms (expressed as CH2O) hovering between the reduced end (CH4) and the oxidized end (CO2). The intermediate forms, organics and organisms, are ephemeral and eventually subject to full reduction or oxidation when the supplies of O or H cease, respectively. In other words, life is maintained only with continuous supplies of H and O, which are in turn provided by the splitting of water. Solar radiation and geothermal heat split water, and therefore it may not be too extreme to conclude that the Earth's life is mainly sustained by either the Sun or the Earth, depending on the type of ultimate source of nutrition, i.e., photosynthesis or chemolithoautotrophy.

In the Japanese language, the Sun is hi, and heat (fire) is also hi (originally ho or fo); water is mi or mizu; and life is i-no-chi, meaning energy of breath. The coincidence of the two hi has impressed me, and I might say that the split of mi by hi nourishes chi, at least on the Earth. Both hi, that is, the Sun's radiation and the Earth's interior heat, contribute to life. The degrees of their contributions vary according to the major modes of autotrophy, i.e., photosynthesis or chemolithoautotrophy.

Examples of chemolithoautotrophic communities that depend primarily on geothermal hi are found in deep-sea hydrothermal vents and the deep subsurface, respectively [4,5]. The idea that non-solar splitting of water nourishes life thus derives from the studies of deep-sea and deep-subsurface biospheres, and is extended here to possible extraterrestrial biospheres. The concept of planetary biospheres should accommodate a more universal notion of life than the traditional ones. The "non-solar splitting of water" idea is applicable to possible astrobiological biospheres.
Figure 5 . Figure 5. Life vortex in carbon cycling between CH 4 and CO 2 .Chemical potential energy is manifested during the oxidation of CH 4 to generate life vortices as intermediates before full oxidation to CO 2 , which is in turn re-potentiated via reduction by H + e -from the split of water. Figure 6 . Figure 6.Different numbers and sizes of life vortices on the slope of oxidation of various carbon compounds.The vortices may have different characteristics according to the features of slopes. Figure 7 . Figure 7. Split of the water molecule by solar radiation and geothermal heat (including radioactive decay) via the light reaction of photosynthesis and water-rock interaction, respectively. Figure 8 . Figure 8. Possibility of carbon-based life (expressed as CH 2 O) viewed from solar split of water (via photosynthesis and photolysis) and geothermal split of water (via radiolysis and thermolysis) in (a) Earth, (b) Venus, (c) Mars, and (d) Jovian satellite Europa.Black solid lines indicate existing and probable processes; black broken lines show possible pathways; and, gray lines suggest unlikely or negligible reactions.Recent finding of methane in Mars atmosphere [16] may suggest occurrences of split-of-water and thus any form of life.
4,510.8
2009-10-19T00:00:00.000
[ "Environmental Science", "Philosophy", "Physics" ]
On a nonlinear mixed-order coupled fractional differential system with new integral boundary conditions

Abstract: We present criteria for the existence of solutions for a nonlinear mixed-order coupled fractional differential system equipped with a new set of integral boundary conditions on an arbitrary domain. The modern tools of fixed point theory are employed to obtain the desired results, which are well illustrated by numerical examples. A variant problem dealing with the case of nonlinearities depending on the cross-variables (unknown functions) is also briefly described.

Introduction

It is well known that the classical boundary conditions cannot describe certain peculiarities of physical, chemical, or other processes occurring within the domain. In order to overcome this situation, the concept of nonlocal conditions was introduced by Bicadze and Samarskiȋ [1]. These conditions are successfully employed to relate the changes happening at nonlocal positions or segments within the given domain to the values of the unknown function at the end points or boundary of the domain. For a detailed account of nonlocal boundary value problems, we refer the reader, for example, to the articles [2-6] and the references cited therein.

Computational fluid dynamics (CFD) techniques deal directly with the boundary data [7]. In the case of fluid flow problems, the assumption of a circular cross-section is not justifiable for curved structures. The idea of integral boundary conditions serves as an effective tool to describe the boundary data on arbitrarily shaped structures. One can find applications of integral boundary conditions in the study of thermal conduction, semiconductor, and hydrodynamic problems [8-10]. In fact, there are numerous applications of integral boundary conditions in different disciplines, such as chemical engineering, thermoelasticity, underground water flow, population dynamics, etc. [11-13]. Integral boundary conditions also help to regularize ill-posed backward parabolic problems, for example in mathematical models for bacterial self-regularization [14]. Some recent results on boundary value problems with integral boundary conditions can be found in the articles [15-19] and the references cited therein.

Non-uniformities in the form of points or sub-segments on heat sources can be relaxed by using integro multi-point boundary conditions, which relate the sum of the values of the unknown function (e.g., temperature) at the nonlocal positions (points and sub-segments) to the value of the unknown function over the given domain. Such conditions also find utility in diffraction problems in which the scattering boundary consists of finitely many sub-strips (finitely many edge-scattering problems). For details and applications in engineering problems, see, for instance, [20-23].

The subject of fractional calculus has emerged as an important area of research in view of the extensive applications of its tools in scientific and technical disciplines. Examples include neural networks [24,25], immune systems [26], chaotic synchronization [27,28], quasi-synchronization [29,30], fractional diffusion [31-33], financial economics [34], ecology [35], etc. Inspired by the popularity of this branch of mathematical analysis, many researchers have turned to it and contributed to its different aspects. In particular, fractional-order boundary value problems have received considerable attention.
For some recent results on fractional differential equations with multi-point and integral boundary conditions, see [36,37]. More recently, in [38,39], the authors analyzed boundary value problems involving Riemann-Liouville and Caputo fractional derivatives, respectively. A boundary value problem involving a nonlocal boundary condition characterized by a linear functional was studied in [40]. In a recent paper [41], existence results for a dual anti-periodic boundary value problem involving nonlinear fractional integro-differential equations were obtained.

Motivated by the aforementioned applications of nonlocal integral boundary conditions and fractional differential systems, in this paper we study a nonlinear mixed-order coupled fractional differential system equipped with a new set of nonlocal multi-point integral boundary conditions on an arbitrary domain, given by the system (1.1), where c D^χ is the Caputo fractional derivative of order χ ∈ {ξ, ζ}, ϕ, ψ : [a, b] × R × R → R are given functions, and p, q, δ_i, x_0, y_0 ∈ R, i = 1, 2, . . . , m. Here we emphasize that the novelty of the present work lies in the fact that we introduce a coupled system of fractional differential equations of different orders on an arbitrary domain, equipped with coupled nonlocal multi-point integral boundary conditions. It is important to notice that much of the work related to coupled systems of fractional differential equations deals with a fixed domain. Thus our results are more general and contribute significantly to the existing literature on the topic. Moreover, several new results appear as special cases of the work obtained in this paper.

We organize the rest of the paper as follows. In Section 2, we present some basic concepts of fractional calculus and solve the linear version of the problem (1.1). Section 3 contains the main results. Examples illustrating the obtained results are presented in Section 4. Section 5 contains the details of a variant problem. The paper concludes with some interesting observations.

Preliminaries

Let us recall some definitions from fractional calculus related to our study [53].

Definition 2.1. The Riemann-Liouville fractional integral of order α ∈ R (α > 0) of a locally integrable real-valued function f, denoted by I^α_{a+}, is defined as

I^α_{a+} f(t) = (1/Γ(α)) ∫_a^t (t − s)^{α−1} f(s) ds, t > a,

where Γ denotes the Euler gamma function.

In the following lemma, we obtain the integral solution of the linear variant of the problem (1.1).

Lemma 2.1. The unique solution of the linear system (2.1) is given by the pair of integral equations (2.2) and (2.3), written with the notation (2.4), where it is assumed that the quantity defined in (2.5) is nonzero.

Proof. Applying the integral operators I^ξ_{a+} and I^ζ_{a+} respectively to the first and second fractional differential equations in (2.1), we obtain the representation (2.6), where c_i ∈ R, i = 1, 2, 3, are arbitrary constants. Using the condition y(a) = 0 in (2.6), we get c_2 = 0. Making use of the remaining boundary conditions in (2.6), after inserting c_2 = 0, leads to a system of two equations, (2.7) and (2.8), in the unknown constants c_1 and c_3. Solving (2.7) and (2.8) for c_1 and c_3 and using the notation (2.5), we find c_1 and c_3 explicitly. Inserting the values of c_1, c_2, and c_3 into (2.6) leads to the solution (2.2) and (2.3). One can obtain the converse of the lemma by direct computation. This completes the proof.

Main results

Let X be the Banach space of continuous real-valued functions on [a, b] with the usual supremum norm. In view of Lemma 2.1, we define an operator T = (T_1, T_2) : X × X → X × X by (3.1), where (X × X, ‖(x, y)‖) is a Banach space equipped with the norm ‖(x, y)‖ = ‖x‖ + ‖y‖, x, y ∈ X. For computational convenience we introduce the abbreviations (3.2). Our first existence result for the system (1.1) relies on the Leray-Schauder alternative [54].
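As a numerical aside to Definition 2.1, the Riemann-Liouville integral can be approximated by a product-rectangle rule that treats f as piecewise constant and integrates the weakly singular kernel (t − s)^(α−1) exactly on each subinterval; this sketch is an illustration, not part of the paper's analysis. For f ≡ 1 the rule is exact, reproducing I^α_{a+}1(t) = (t − a)^α/Γ(α + 1), which serves as a sanity check.

```python
import math

def rl_fractional_integral(f, a, t, alpha, n=1000):
    """Riemann-Liouville integral I^alpha_{a+} f(t), product-rectangle rule.

    f is approximated as piecewise constant, and the weakly singular kernel
    (t - s)**(alpha - 1) is integrated exactly on each subinterval.
    """
    h = (t - a) / n
    total = 0.0
    for j in range(n):
        s_left = a + j * h
        s_right = s_left + h
        # exact integral of the kernel over [s_left, s_right]
        w = ((t - s_left) ** alpha - (t - s_right) ** alpha) / alpha
        total += f(s_left) * w
    return total / math.gamma(alpha)

# Sanity check: I^alpha of f(s) = 1 from a = 0 equals t^alpha / Gamma(alpha + 1).
alpha, t = 0.5, 1.0
print(rl_fractional_integral(lambda s: 1.0, 0.0, t, alpha))
print(t ** alpha / math.gamma(alpha + 1))
```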
Theorem 3.1. Assume that condition (H_1) holds. Then there exists at least one solution for the system (1.1) on [a, b].

Proof. Let (x, y) ∈ P with (x, y) = νT(x, y). For any t ∈ [a, b], we have x(t) = νT_1(x, y)(t) and y(t) = νT_2(x, y)(t). Then by (H_1) we obtain bounds for |x(t)| and |y(t)|. In consequence of these inequalities, we deduce estimates for ‖x‖ and ‖y‖, which imply that ‖(x, y)‖ is bounded. Hence the set P is bounded. As the hypothesis of the Leray-Schauder alternative [54] is satisfied, we conclude that the operator T has at least one fixed point. Thus the problem (1.1) has at least one solution on [a, b].

Using Banach's contraction mapping principle, we prove in the next theorem the existence of a unique solution of the system (1.1).

A variant problem

In this section, we consider a variant of the problem (1.1) in which the nonlinearities ϕ and ψ do not depend on x and y, respectively. In precise terms, we consider the problem (4.1), whose boundary conditions involve the integral ∫ x(s) ds and the points a < σ_1 < σ_2 < . . . < σ_m < τ < . . . < b, where ϕ, ψ : [a, b] × R → R are given functions. We now present the existence and uniqueness results for the problem (4.1). We do not provide the proofs, as they are similar to those for the problem (1.1).

Conclusions

We studied the solvability of a coupled system of nonlinear fractional differential equations of different orders supplemented with a new set of nonlocal multi-point integral boundary conditions on an arbitrary domain by applying the tools of modern functional analysis. We also presented existence results for a variant of the given problem containing nonlinearities depending on the cross-variables (unknown functions). Our results are new not only in the given configuration but also yield some new results by specializing the parameters involved in the problems at hand. For example, by taking δ_i = 0, i = 1, 2, . . . , m in the obtained results, we obtain the ones associated with the coupled systems of fractional differential equations in (1.1) and (4.1) subject to the correspondingly reduced boundary conditions. Furthermore, the methods employed in this paper can be used to solve systems involving fractional integro-differential equations and multi-term fractional differential equations complemented with the boundary conditions considered in the problem (1.1).
2,091.6
2021-01-01T00:00:00.000
[ "Mathematics" ]
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_xy, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x − y| is less than the band width W, and zero otherwise. We upgrade the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.

Introduction

Random band matrices H = (H_xy)_{x,y∈Γ} represent systems on a large finite graph with a metric. They are natural intermediate models for studying quantum propagation in disordered systems, as they interpolate between Wigner matrices and random Schrödinger operators. The elements H_xy are independent random variables with variance σ²_xy = E|H_xy|², depending on the distance between the two sites. The variance decays with the distance on the scale W, called the band width of the matrix H. This terminology comes from the simplest model, in which the graph is a path on N vertices labelled by Γ = {1, 2, . . . , N}, and the matrix elements H_xy are zero if |x − y| ≥ W. If W = O(1) we obtain a one-dimensional Anderson-type model (see [4]), and if W = N we recover the Wigner matrix. In the general Anderson model, introduced in [4], a random on-site potential V is added to a deterministic Laplacian on a graph that is typically a regular box in Z^d. For higher-dimensional models in which the graph Γ is a box in Z^d, see [5].

In [1] it was proved that the quantum dynamics of a d-dimensional band matrix is given by a superposition of heat kernels up to time scales t ≪ W^{d/3}. Note that diffusion is expected to hold for t ∼ W² for d = 1 and up to any time for d ≥ 3 when the thermodynamic limit is taken. The threshold d/3 in the exponent is due to technical estimates on Feynman graphs. The approach of this paper is similar to the one in [1]. We normalize the entries of the matrix so that the rate of quantum jumps is of order one. In contrast with [1], in this paper double-rooted Feynman graphs are used to estimate the variance of the quantum diffusion. The main result of this paper is an upgrade of the previous results on the convergence of the expectation of the quantum diffusion from [1] to convergence in high probability.

For simplicity, we avoid working directly on an infinite lattice. Throughout our proof, we consider a d-dimensional finite periodic lattice Λ_N ⊂ Z^d (d ≥ 2) of linear size N, equipped with the Euclidean norm |·|_{Z^d}. Specifically, we take Λ_N to be a cube centered around the origin with side length N. We regard Λ_N as periodic, i.e. we equip it with periodic addition and periodic distance.

We analyze random matrices H with band width W and with elements H_xy, where x and y are indices of points in Λ_N. To introduce H, we first define a deterministic variance profile matrix S = (S_xy), supported in the band of width W. We consider A = A* = (A_xy) a Hermitian random matrix whose upper-triangular entries (A_xy : x ≤ y) are independent random variables uniformly distributed on the unit circle S¹ ⊂ C. We define the random band matrix (H_xy) through

H_xy := √(S_xy) A_xy.

Note that H is Hermitian and |H_xy|² = S_xy. Throughout our investigation we use simplified notation where convenient. Our main quantity is

P(t, x) := |(e^{−itH})_{x x_0}|².

The function P(t, x) describes the quantum transition probability of a particle starting at x_0 and ending up at position x after time t.
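Before fixing the scaling, a small simulation sketch may help make the model concrete. The flat profile S_xy = 1/(2W) for 1 ≤ |x − y| ≤ W used below (in d = 1, so that Σ_y S_xy = 1, matching the normalization "rate of quantum jumps of order one") is an assumed stand-in for the profile S of the paper, whose exact definition is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
N, W, t = 200, 10, 5.0

# Periodic distance on Z / N Z.
idx = np.arange(N)
diff = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(diff, N - diff)

# Assumed flat band profile with unit row sums: S_xy = 1/(2W) on the band.
S = np.where((dist >= 1) & (dist <= W), 1.0 / (2 * W), 0.0)

# H_xy = sqrt(S_xy) * A_xy with A_xy uniform on the unit circle, A Hermitian.
phase = np.exp(2j * np.pi * rng.random((N, N)))
A = np.triu(phase, 1)
A = A + A.conj().T
H = np.sqrt(S) * A

# P(t, x) = |(e^{-itH})_{x x0}|^2 via the eigendecomposition of H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * t * evals)) @ evecs.conj().T
x0 = N // 2
P = np.abs(U[x0]) ** 2
print(P.sum())  # unitarity: total probability stays 1 up to rounding
```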
Let κ > 0. We introduce macroscopic time and space coordinates T and X, which are independent of W, and consider the microscopic time and space coordinates t = W^{dκ} T and x = W^{1+dκ/2} X. Using the definition of the quantum probability and the scaling introduced above, we define the random variable Y_T(φ) that we are going to investigate, as a test-function average of P(t, x) under this scaling. Our main result gives an estimate for the variance of the random variable Y_T(φ) up to time scales t = W^{dκ} T with 0 < κ < 1/3.

Theorem 2.1. Fix T_0 > 0 and κ such that 0 < κ < 1/3. Choose a real number β satisfying 0 < β < 2/3 − 2κ. Then there exist C ≥ 0 and W_0 ≥ 0, depending only on T_0, κ and β, such that for all T ∈ [0, T_0], W ≥ W_0 and N ≥ W^{1+d/6}, the variance of Y_T(φ) obeys a bound decaying in W at a rate governed by β.

Using the estimate obtained in Theorem 2.1 and the Chebyshev inequality for the second moment, we obtain the convergence in high probability of the random variable Y_T(φ). We believe that the same technique can be implemented for a graphical representation with 2p directed chains, p ∈ N. This approach should give similar estimates on the 2p-th moment of our random variable, which can then be used in Chebyshev's inequality to reach the desired conclusion.

Graphical representation

In this section we give the exact formula for the quantity of our analysis, and we motivate the graphical representation that we will use in order to compute the upper bound.

3.1. Expansion in non-backtracking powers. First, as in [1], we define the non-backtracking powers H^{(m)} of H. The following result is proved in [1].

Lemma 3.1. Let U_k be the k-th Chebyshev polynomial of the second kind; the non-backtracking powers H^{(m)} are expressed in terms of the U_k. We define the quantity a_m(t) arising as the coefficient of H^{(m)} in the resulting expansion of the propagator, and we will also use the abbreviation from [1]. Plugging this expansion into the definition of Y_T(φ) gives the starting identity of our analysis. We summarize the graphical representation of H^{(m)} next.

3.2. Graphical representation. We define a graph L which consists of two rooted directed chains L_1 and L_2 by

L(n_11, n_12, n_21, n_22) ≡ L := L_1(n_11, n_12) ∪ L_2(n_21, n_22),

where L_k(n_k1, n_k2) is a rooted directed chain of length n_k1 + n_k2 ≥ 1 for k ∈ {1, 2}. We denote the set of vertices of the graph L by V(L) and the set of edges by E(L). Each of the rooted directed chains contains two distinguished vertices, denoted by r(L_k) (root) and s(L_k) (summit), the latter defined as the unique vertex such that the path r(L_k) → s(L_k) has length n_k1. Note that if n_k1 = 0 or n_k2 = 0 then r(L_k) = s(L_k). Using the orientation of the edges, for each e ∈ E(L) we denote the vertex a(e) ∈ V(L) as the predecessor and the vertex b(e) ∈ V(L) as the successor (see Figure 2.1). Similarly, for each vertex i ∈ V(L), we denote the adjacent vertices a(i) and b(i) as the predecessor and the successor of i (see Figure 2.2). The root and the summit are drawn using white dots and all other vertices using black dots. Hence, the set of vertices can be split into V_w(L) and V_b(L), where the subscript w stands for the white vertices and b for the black vertices. The labels x = (x_i)_{i∈V(L)} can be split according to need, e.g. into white and black labels.

For each configuration of labels x we assign a lumping Γ = Γ(x) of the set of edges E(L), as in [1]. A lumping is an equivalence relation on E(L). We use the notation Γ = {γ}_{γ∈Γ}, where γ ∈ Γ is a lump, i.e. an equivalence class of Γ. The lumping Γ = Γ(x) associated with the labels x is given by the equivalence relation under which two edges are equivalent precisely when they carry the same pair of labels. The summation over x is performed with respect to the corresponding indicator function. Using the graph L we may now write the covariance as a sum over lumpings, and we further define the value of a lumping Γ accordingly. Let P_c(E(L)) be the set of connected even lumpings, i.e.
the set of all lumpings Γ for which each lump γ ∈ Γ has even size and there exists γ ∈ Γ such that γ ∩ E(L_k) ≠ ∅ for k ∈ {1, 2}. Using that EH_xy = 0, it is not hard to see that the graphical representation of the variance yields the following result (for further details, see [3]).

We call the lumps π ∈ Π of a pairing Π bridges. Moreover, with each pairing Π ∈ M_c we associate its underlying graph L(Π), and regard n_11(Π), n_12(Π), n_21(Π) and n_22(Π) as functions on M_c, in self-explanatory notation. We abbreviate V(Π) = V(L(Π)) and E(Π) = E(L(Π)). We refer to V(Π) as the set of vertices of Π and to E(Π) as the set of edges of Π.

Let us define the indicator function

J_{{e,e′}}(x) := 1(x_{a(e)} = x_{b(e′)}) 1(x_{a(e′)} = x_{b(e)}).   (3.6)

Using the same reasoning as in Section 4 of [3] and Equation 4.14 of [3], we obtain a bound involving the weight ∏_{{e,e′}∈Π} S_{x_e} ∏_{π∈Π} J_π(x).

In the following we rewrite the right-hand side of (2.7) using the summation over skeleton pairings. We further define |l_Σ| := Σ_{σ∈Σ} l_σ for Σ ∈ G and l_Σ ∈ N^Σ. For the skeleton Σ ∈ G of the pairing Π = G_{l_Σ}(Σ) we use the notation n_ij(Σ, l_Σ) for n_ij(Π), for all i, j ∈ {1, 2}. Parametrising Π using Σ and l_Σ, and neglecting the non-backtracking condition in the definition of Q_{y_1,y_2}(x), we obtain the following upper bound (for full details see Lemma 7.6 in [1]).

The following result is obtained using (2.9). The next estimate follows easily from the definition of S_xy.

Lemma 3.6. Let l ∈ N. For each x, y ∈ Λ_N we have a corresponding bound on the entries (S^l)_{xy}.

3.4. Orbits of vertices. Let us fix Σ ∈ G. On the set of vertices V(Σ) we construct the orbits of vertices as in [1]. We define τ : V(Σ) → V(Σ) as follows. Let i ∈ V(Σ) and let e be the unique edge such that {{i, b(i)}, e} ∈ Σ. Then, for any vertex i of Σ ∈ G, we define τi := b(e). We denote the orbit of the vertex i ∈ V(Σ) by [i] := {τⁿ i : n ∈ N}. We order the edges of Σ in some arbitrary fashion and denote this order by <. Each bridge σ ∈ Σ "sits between" the orbits ζ_1(σ) and ζ_2(σ).

We remark that Lemma 2.7 is sharp in the sense that there exists Σ ∈ G for which the estimate of Lemma 2.7 saturates. Given that ⟨H_00; H_00⟩ = 0, it follows that in the cases |Σ| = 0 and |Σ| = 1 the quantity of interest is deterministic.
2,720.2
2018-06-01T00:00:00.000
[ "Mathematics" ]
Maximum-order Complexity and Correlation Measures

We estimate the maximum-order complexity of a binary sequence in terms of its correlation measures. Roughly speaking, we show that any sequence with small correlation measure up to a sufficiently large order k cannot have very small maximum-order complexity.

Introduction

A small complexity measure indicates the predictability of a sequence and thus its unsuitability in cryptography. For surveys on linear complexity and related measures of pseudorandomness see [6,13,14,17,20,21].

Let k be a positive integer. Mauduit and Sárközy introduced the (Nth) correlation measure of order k of a binary sequence S = (s_i)_{i=0}^∞ in [10] as

C_k(S, N) = max_{U,D} | Σ_{n=0}^{U−1} (−1)^{s_{n+d_1} + s_{n+d_2} + ... + s_{n+d_k}} |,

where the maximum is taken over all D = (d_1, d_2, ..., d_k) with non-negative integers 0 ≤ d_1 < d_2 < ... < d_k and all U such that U + d_k ≤ N. (Actually, [10] deals with finite sequences ((−1)^{s_i})_{i=0}^{N−1} of length N over {−1, +1}.)

Brandstätter and the second author [2] proved a relation, labelled (1) below, between the Nth linear complexity and the correlation measures of order k. Roughly speaking, any sequence with small correlation measure up to a sufficiently large order k must have a high Nth linear complexity as well. For example, the Legendre sequence L = (ℓ_i)_{i=0}^∞ defined by

ℓ_i = (1 − (i/p))/2 if gcd(i, p) = 1, and ℓ_i = 0 if p | i,

where p > 2 is a prime and (·/p) denotes the Legendre symbol, satisfies a correlation-measure bound of order k√p log p, and thus (1) implies a strong lower bound on its linear complexity; see [10] and [19].

The Nth maximum-order complexity M(S, N) is the smallest positive integer M such that there is a (not necessarily linear) feedback function f with s_{i+M} = f(s_i, ..., s_{i+M−1}) for 0 ≤ i ≤ N − M − 1; see [8,9,15]. Obviously we have M(S, N) ≤ L(S, N), and the maximum-order complexity is a finer measure of pseudorandomness than the linear complexity. In this paper we analyze the relationship between the maximum-order complexity M(S, N) and the correlation measures C_k(S, N) of order k. Our main result is Theorem 1, which, roughly speaking, shows that any nontrivial bound on C_k(S, N) for all k up to a sufficiently large order provides a nontrivial bound on M(S, N). For example, for the Legendre sequence we immediately get the lower bound (3); see also [19, Theorem 9.3]. (f(N) = O(g(N)) is equivalent to f(N) ≪ g(N).)

We prove Theorem 1 in the next section. The expected value of the Nth maximum-order complexity is of order of magnitude log N, see [8] as well as [15, Remark 4] and the references therein. Moreover, by [1], for a 'random' sequence of length N the correlation measure C_k(S, N) is of order of magnitude √(kN log N), and thus by Theorem 1

M(S, N) ≥ (1/2) log N + O(log log N),

which is in good correspondence with the result of [8]. In Section 3 we mention some straightforward extensions. If m is a prime, then x → hx is a permutation of Z_m for any h ≢ 0 mod m, and the sums in (4) can be estimated by the correlation measure C_k(S, N) of order k for m-ary sequences, as defined in [11], yielding an analogous bound.

Even if the correlation measure of order k is large for some small k, we may still be able to derive a nontrivial lower bound on the maximum-order complexity by substituting the correlation measure of order k with its analog with bounded lags; see [7] for the analog of (1). For example, the two-prime generator T = (t_i)_{i=0}^∞, see [3], of length pq with two odd primes p < q, satisfies an explicit formula in terms of Legendre symbols modulo p and q if gcd(i, pq) = 1, and its correlation measure of order 4 is obviously close to pq, see [16]. However, if we bound the lags d_1 < . . . < d_k < p, one can derive a nontrivial upper bound on the correlation measure of order k with bounded lags, including k = 4, as well as lower bounds on the maximum-order complexity, using the analog of Theorem 1 with bounded lags.
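For small parameters, C_k(S, N) can be evaluated by brute force directly from the definition above, as the following sketch does for the Legendre sequence; the cost grows like binom(N, k)·N, so this is purely illustrative.

```python
from itertools import combinations

def legendre_sequence(p):
    """l_i = (1 - legendre(i, p)) / 2 for gcd(i, p) = 1, and l_i = 0 if p | i."""
    qr = {pow(a, 2, p) for a in range(1, p)}  # quadratic residues mod p
    return [0 if i % p == 0 else (0 if i % p in qr else 1) for i in range(p)]

def correlation_measure(s, N, k):
    """C_k(S, N) by brute force -- only feasible for small N and k."""
    best = 0
    for D in combinations(range(N), k):        # lags 0 <= d_1 < ... < d_k
        total = 0
        for n in range(N - D[-1]):             # ensures U + d_k <= N
            total += (-1) ** sum(s[n + d] for d in D)
            best = max(best, abs(total))       # max over all prefixes U
    return best

s = legendre_sequence(31)
print(correlation_measure(s, len(s), 2))
```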
Finally, we mention that the lower bound (3) for the Legendre sequence can be extended to Legendre sequences with polynomials, using the results of [5], as well as to their generalization using squares in arbitrary finite fields (of odd characteristic), using the results of [12,18]. For sequences defined with a character of order m, see [11].

Acknowledgement

The authors are supported by the Austrian Science Fund FWF Projects F5504 and F5511-N26, respectively, which are part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications". L.I. would like to express her sincere thanks for the hospitality during her visit to RICAM.
1,030.4
2017-03-27T00:00:00.000
[ "Mathematics" ]
Sigmorphon 2019 Task 2 system description paper: Morphological analysis in context for many languages, with supervision from only a few

This paper presents the UNT HiLT+Ling system for the Sigmorphon 2019 shared Task 2: Morphological Analysis and Lemmatization in Context. Our core approach focuses on the morphological tagging task; part-of-speech tagging and lemmatization are treated as secondary tasks. Given the highly multilingual nature of the task, we propose an approach which makes minimal use of the supplied training data, in order to be extensible to languages without labeled training data for the morphological inflection task. Specifically, we use a parallel Bible corpus to align contextual embeddings at the verse level. The aligned verses are used to build cross-language translation matrices, which in turn are used to map between embedding spaces for the various languages. Finally, we use sets of inflected forms, primarily from a high-resource language, to induce vector representations for individual UniMorph tags. Morphological analysis is performed by matching vector representations to embeddings for individual tokens. While our system results are dramatically below the average system submitted for the shared task evaluation campaign, our method is (we suspect) unique in its minimal reliance on labeled training data.

Introduction

This paper describes the UNT HiLT+Ling system submission for the Sigmorphon shared task on morphological analysis and lemmatization in context (McCarthy et al., 2019). We focus primarily on the morphological tagging task, treating part-of-speech tagging and lemmatization as secondary tasks. We approach morphological analysis from the perspective of low-resource languages, aiming to develop an approach which exploits existing language resources in order to make morphological analysis in context feasible for languages without annotated training data.

We propose a model to perform morphosyntactic annotation for any language with a translation of the Bible. According to Wycliffe 1, there are currently 683 languages in the world with a translation of the entire Bible, and an additional 1534 languages for which the entire New Testament, and sometimes other sections, are available. We train contextual word representations using ELMo (Peters et al., 2018) and align embedding spaces for language pairs using Bible verse numbers as an alignment signal. We then compute vector representations for UniMorph tags in English and project those representations into the target language. The projected morpheme tag embeddings are used to identify morphological features and label tokens in context with UniMorph tags.

We give a system overview in Section 2, with more detailed model descriptions in Section 5. The system's performance is currently poor; we outline known limitations and make some suggestions for improvement.

System Overview

The system we developed for Sigmorphon 2019 Task 2 can be divided into two parts: the core model and two non-core components. The core model is responsible for the morphological tagging task, our main focus. The two non-core components are part-of-speech tagging and lemmatization.

Core model: Minimally-supervised morphological analysis in context. Following the task specifications, we aim to predict UniMorph tags for words in context. Our approach is designed to work on new languages with minimal supervision.
Specifically, the base model uses the following forms of supervision: a) multilingual Bible data, verse-aligned; and b) roughly twenty words from the training data per UniMorph tag. Once this model has been developed, it can be applied to a new language with no annotated training data for the task; the only data needed is a Bible in that language. The steps in the process (explained in detail in Section 5.1) are as follows:

1. Learn sentence-level ELMo embeddings (Peters et al., 2018) for each language.
2. Use verse-aligned data to learn a vector translation matrix (following Mikolov et al., 2013a) between each language and English.
3. Compute a vector representation for each UniMorph tag.
4. For UniMorph tags found in English, map tag vectors into the other languages which use the tag, by way of the relevant translation matrix. For tags not found in English, compute vector representations for each tag in the language-specific space.
5. Identify all UniMorph tags represented in the embedding for a given word, treating morphological analysis in the style of analogy tasks (Mikolov et al., 2013b).

POS tagging and lemmatization. POS tagging and lemmatization are treated as non-core components of the model. In other words, we incorporate these tasks into our model in order to meet the requirements of the competition. For these two tasks, greater supervision is allowed, and models are learned from the training data provided. The POS tagger in our system is a straightforward HMM, and lemmatization is done with a seq2seq neural architecture. See Section 5.2 for more detailed descriptions of the models.

Related Work

The core idea of using the Bible as parallel data in low-resource settings is largely inspired by previous work. The Bible has been used as a means of alignment for cross-lingual projection, both for POS tagging (Agic et al., 2015) and for dependency parsing (Agic et al., 2016), as well as for base noun-phrase bracketing, named-entity tagging, and morphological analysis (Yarowsky et al., 2001), with promising results. Peters et al. (2018) introduce ELMo embeddings, contextual word embeddings which incorporate character-level information using a CNN. Both of these properties (sensitivity to context and the ability to capture sub-word information) make contextual embeddings suitable for the task at hand. In order to make embeddings useful across languages, we need a method for aligning embedding spaces across languages. Ruder et al. (2017) provide an excellent survey of methods for aligning embedding spaces. Mikolov et al. (2013a) introduce a translation matrix for aligning embedding spaces in different languages and show how this is useful for machine translation purposes. We adopt this approach to do alignment at the verse level. Alignment with contextual embeddings is more complicated, since the embeddings are dynamic by their very nature (different across different contexts). In order to align these dynamic embeddings, Schuster et al. (2019) introduce a number of methods; however, they all require either a supervised dictionary for each language or access to the MUSE framework for alignment, neither of which we assume in our work. The UniMorph 2.0 dataset (Kirov et al., 2018) provides resources for morphosyntactic analysis across 111 different languages. The work described here uses the tag set from UniMorph.

Data

This section describes the data resources used for training and evaluating the system.
Bible data

The main data used for building our core model is a multilingual Bible corpus. For as many of the shared task languages as possible (41), we use the corpus from Christodouloupoulos and Steedman (2015). Bibles for an additional 19 languages were sourced elsewhere. Of the remaining 11 languages, we use proxy languages (Section 4.2) for 9. For two languages (Akkadian and Sanskrit), we were unable to locate a suitable Bible in time. Where there are multiple data sets for a given language, we use the same Bible for all data sets. For some languages we have access to the entire Bible, and for others only the New Testament (NT). This introduces discrepancies in the amount of data used to train embeddings from language to language, as the Old Testament is much longer than the New Testament. The Bible is a natural source of parallel data, as it is available (either in whole or in parts) in over one thousand languages, including many low-resource languages. One advantage of using the Bible, beyond its wide availability in translation for free, is that its verses are fairly well aligned in meaning across languages (unlike words or even sentences). One drawback to using Bible data is the archaic nature of the language. For example, even if we use a modern translation, the English Bible contains fewer than 15,000 different word types, and no occurrences of modern words (e.g. Republican, computer, or NASA). The limited domain of the text offers both advantages and disadvantages. On the one hand, much of the vocabulary found in the shared task evaluation data does not occur in the Bible. Using embeddings trained on the Bible, then, results in an extremely large number of out-of-vocabulary tokens at test time. On the other, the semantic territory covered by the embedding spaces varies remarkably little from language to language, increasing the feasibility of aligning embedding spaces across multiple languages.

Proxy languages

In order to do morphological analysis for a given language, our method requires access to a digitally available version of at least portions of the Bible for that language. At the time the model was developed, we did not have access to Bibles for all shared task languages. For each missing language, we select a proxy language (Table 1). For example, we don't have a Bible for Galician, so at every stage in the process where the Galician Bible would be used, we substitute the Portuguese Bible, treating Portuguese as pseudo-Galician. We identify two different cases of proxy language substitution. In some cases, we are able to select a closely related dialect for the target language. In others, the proxy language is selected based on a combination of morphological similarity (typologically speaking) and language relatedness.

Sigmorphon data

We use the provided training data primarily to train a part-of-speech tagger and lemmatizer for each shared task data set, and the provided test data is used to evaluate the system. We use portions of the training data for three other purposes: a) to build contrasting sets of words for each UniMorph tag (Section 5.1.3); b) to build lists of UniMorph tags relevant for each language; and c) to create a simple baseline for the two languages for which we have no Bible, proxy language or otherwise.

Models

The model description consists of two parts: the core model, for morphological analysis, and two non-core components, for part-of-speech tagging and lemmatization.
Core model: morphological analysis

Our core system addresses the task of morphological analysis with minimal supervision from labeled training data. The approach exploits parallel data in the form of a multilingual Bible corpus.

Contextual embeddings for every Bible

Prior research has shown that embedded word vector representations are capable of capturing contextual nuances in meaning beyond one sense per word (Arora et al., 2018, for example). Because context variance is an important factor affecting morphological analysis, we use ELMo embeddings (Peters et al., 2018) as our base representation. As a first step, we train separate ELMo models on each of the Bible translations in our corpus. For each language, we hold out four books (Mark, Ephesians, 2 Timothy, and Hebrews) for model evaluation and train on all remaining books. Models are trained at the sentence level, using default parameter settings and following recommendations from the AllenNLP bilm-tf repository (https://github.com/allenai/bilm-tf).

Verse alignment for embedding projection

The next step is to use the natural verse alignment of the Bible to learn projections from one embedding space to another, treating English as the source language and learning projections into the embedding spaces for each of our non-English Bible languages in turn. Mikolov et al. (2013a) show that type-level embedding spaces (e.g. word2vec) can be projected across languages by calculating a translation matrix from a set of type-level translation word pairs. The translation matrix is a linear map by which word representations from a source language are multiplied to transform them into parallel word representations in the target language embedding space. Aligning contextual representations such as ELMo is more complicated, as there is no good way of aligning words between two language embedding spaces without a dictionary and without losing the encoded information about contextual polysemy, for which ELMo is particularly useful. Schuster et al. (2019) propose using context-free anchors to align contextually-dependent embedding spaces (such as ELMo). We propose instead to calculate translation matrices at the verse level, computing the representation for each verse as the unweighted average of its constituent contextual word embeddings. First, we compute ELMo embeddings for each token in a small subset of the Bible: Psalms (OT) and Romans (NT). For a given language pair, we compute a verse embedding for each verse that appears in both Bibles (some verses are missing in some languages, and some languages have extra verses) and derive the translation matrix for that language pair using the standard method, as introduced by Mikolov et al. (2013a). Given pairs of verse vectors in a source and target language, $\{x_i, z_i\}_{i=1}^{n}$ respectively, we calculate the translation matrix $W$ between the two languages using gradient descent on the standard least-squares objective of Mikolov et al. (2013a):

$$\min_{W} \sum_{i=1}^{n} \lVert W x_i - z_i \rVert^2$$

(A toy implementation of this step is sketched below.)

Inducing vectors for UniMorph tags

In lieu of using supervised, annotated data for training the model with morphological information, we work from the hypotheses that each of the 42 UniMorph tags can be isolated in the embedding space and that we can derive a vector representation for each tag, applying a process similar to the well-known analogy tasks of Mikolov et al. (2013b). For this purpose, we build small hand-curated data sets (only in English), with contrasting sets of words for each tag.
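Before turning to those tag sets, here is the promised sketch of the verse-alignment step: a minimal Python/NumPy illustration (not the authors' code) in which verse vectors are unweighted averages of token embeddings and $W$ is fitted by gradient descent on the objective above. The array shapes, learning rate, and iteration count are illustrative assumptions, and synthetic random vectors stand in for averaged ELMo embeddings.

```python
import numpy as np

def verse_embedding(token_vecs):
    """A verse vector is the unweighted average of its token embeddings."""
    return np.mean(token_vecs, axis=0)

def fit_translation_matrix(X, Z, lr=0.01, epochs=500):
    """Fit W minimising sum_i ||W x_i - z_i||^2 by batch gradient descent.

    X: (n, d) source-language (English) verse vectors.
    Z: (n, d) target-language verse vectors, row-aligned with X.
    Returns W of shape (d, d) mapping source vectors into the target space.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    for _ in range(epochs):
        R = X @ W.T - Z                 # residuals, shape (n, d)
        W -= lr * (2.0 / n) * R.T @ X   # gradient of the mean squared error
    return W

# Toy demonstration: random "verse vectors" related by a known linear map.
rng = np.random.default_rng(0)
n, d = 200, 16
v = verse_embedding(rng.normal(size=(12, d)))       # e.g. a 12-token verse
X = rng.normal(size=(n, d))                         # source verses
W_true = 0.5 * rng.normal(size=(d, d))
Z = X @ W_true.T + 0.01 * rng.normal(size=(n, d))   # aligned target verses
W = fit_translation_matrix(X, Z)
print(np.allclose(W, W_true, atol=0.1))             # True: mapping recovered
```

In the full system one such matrix would be learned for each English-target pair over the verses of Psalms and Romans; a closed-form least-squares solver (e.g. np.linalg.lstsq) would minimise the same objective without iteration.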
To build these contrasting sets, for each UniMorph tag found in English we collect from the training data one set of words with the tag and a parallel set without it. The word sets do not necessarily contain minimal pairs, but rather groups of words that are matched for part-of-speech. For example, for the plural tag PL, we build a list of 10 plural tokens (e.g. [women, cats, dogs, deer, ...]) and another list of 10 singular tokens (e.g. [man, car, dog, apple, ...]). The (vectors for the) set of words without the tag are then subtracted from the (vectors for the) set of words with the tag. More precisely, we take the weighted average of both sets of words, in which those with the tag are weighted 1, and those without it are weighted -1. Having derived a vector representation for each UniMorph tag, these vectors can now be projected from English into the target language using the respective translation matrix. Rather than projecting every tag into every language, we project only the tags that are seen in a given language's training data. Of course, only a subset of all UniMorph tags are found in English. For those which do not appear in the English data (e.g. Ergative), an additional method was developed using the Sigmorphon training data in other languages. When tagging a language that has the tag ERG in the training data, we build new word list pairs specific to that language and calculate the UniMorph tag representation as described above.

Morphological analysis

To assign UniMorph tags to words at test time, a sequence of tokens in context (one sentence at a time) is fed into ELMo using the target language ELMo model, generating contextual embeddings for each word in the sequence. Next, for each token, we iteratively subtract each of the target language's possible UniMorph vectors and search for another word in the target language whose embedding is within 0.1 cosine distance of the resulting vector. For example, when tagging the German word Kinder (children), subtracting the vector representation for the Plural tag should result in a vector that is close to that for Kind (child). This subtraction process is applied to every word, for every UniMorph tag found in the language. Whenever a word is found within the threshold of the derived embedding, the tag that resulted in the successful transformation is assigned to that token. In the example above, Kinder gets tagged with PL. Intuitively, this method is plausible because words, their inflected forms, synonyms, and closely related terms tend to occur in tight clusters in embedding spaces. Conversely, subtracting the embedding for the PL tag from the embedding for the word 'the' should not produce a close match in English, since the plural tag is never associated with 'the'; this would not be a grammatically meaningful transformation.

Baselines

We use two different baselines for the morphological analysis task. No-embedding baseline. This method is used to tag the two languages for which we have no Bible, not even for a proxy language, and thus have no Bible-trained word embeddings for the language. Under this approach, each word is simply labeled with all tags it has been seen with in the training data. Embedding baseline. This method makes use of the verse embeddings described above and was deployed to do tagging where time constraints prohibited implementation of the full model for a given language. A toy sketch of the tag-induction and tagging steps follows.
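The sketch below illustrates the +1/-1 weighted averaging and the 0.1 cosine-distance threshold just described, with toy random vectors standing in for ELMo embeddings. The word lists, dimensionality, and the artificial "plural direction" are illustrative assumptions, not the authors' data.

```python
import numpy as np

def induce_tag_vector(with_tag, without_tag):
    """Weighted average: words carrying the tag weighted 1, contrast set -1."""
    return np.mean(with_tag, axis=0) - np.mean(without_tag, axis=0)

def cosine_distance(a, b):
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def assign_tags(token_vec, tag_vectors, vocab_vecs, threshold=0.1):
    """Subtract each candidate tag vector from the token embedding; assign the
    tag if some known word lies within the cosine-distance threshold."""
    tags = []
    for tag, tvec in tag_vectors.items():
        probe = token_vec - tvec
        if any(cosine_distance(probe, v) < threshold for v in vocab_vecs.values()):
            tags.append(tag)
    return tags

# Toy embedding space in which plurals equal their singular plus a shared
# "plural direction" (real ELMo spaces are only approximately like this).
rng = np.random.default_rng(1)
d = 32
plural_dir = rng.normal(size=d)
singulars = {w: rng.normal(size=d) for w in ["car", "dog", "apple", "book"]}
plurals = {w + "s": v + plural_dir for w, v in singulars.items()}

pl_vec = induce_tag_vector(list(plurals.values()), list(singulars.values()))
vocab = {**singulars, **plurals}
print(assign_tags(plurals["dogs"], {"PL": pl_vec}, vocab))   # ['PL']
print(assign_tags(singulars["dog"], {"PL": pl_vec}, vocab))  # []
```

In the full system, the vocabulary search would run over target-language embeddings after projecting the tag vector through the translation matrix learned above.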
For this baseline, the contextualized word representations built to support the embedding projection process are collected into a set of dictionaries (one for each language) of seen tokens and their associated vectors. In this setting, instead of re-training the ELMo model on test data in context, we retrieve stored vectors for tokens to be tagged. This method has clear shortcomings, both with respect to coverage of the model and regarding the handling of polysemous tokens.

Non-core components: POS tagging and lemma generation

For part-of-speech tagging, we implement a hidden Markov model with Viterbi decoding, trained on the Sigmorphon training and development datasets. Given our interest in methods which reduce the need for large labeled corpora and supervised learning, we additionally implemented some simple heuristics based on previously generated morpheme tags. For example, a word is given a higher probability of being tagged as a verb if it has a modal, tense, or other conjugative tag already assigned to it (e.g., V.PTCP or PRS). These heuristics were designed to be entirely language-neutral, generalizing to the full set of test languages. As a final task, we perform lemma generation using a joint neural model following the method proposed by Malaviya et al. (2019). The joint model consists of a simple LSTM-based tagger to recover the morphology of a sentence and a sequence-to-sequence model with a hard attention mechanism as a lemmatizer. The lemmatization model trains over words and their morphological information recovered with the tagger. To counter exposure bias, all training is done with jackknifing.

Limitations: there are many

The models as described above are subject to many limitations, and we have many ideas for improving the system. First, the model is computationally and time intensive, to the extent that we applied the full model to only a fraction of the data. Because producing ELMo embeddings on the fly is so time-consuming, we took some shortcuts in order to get results in time for submission. Word types already tagged were stored together with their tags after the first encounter, and the tags retrieved for later occurrences. Also, only a subset of the test sentences were in fact tagged with the ELMo approach at all. These two things together resulted in many false positives and redundant tags (e.g. the same noun tagged as both nominative and accusative). We feel confident that a full run of the system, however long it takes, will result in much better performance. Second, our method for tagging words with UniMorph tags does nothing to constrain the set of possible tags, allowing multiple conflicting tags to be simultaneously assigned. Application of output constraints could go a long way toward solving this issue. Third, we would like to rework our method for collecting pairs of word lists for derivation of vector representations for UniMorph tags. A problem with the current method is that it assumes the existence of inflected/non-inflected word pairs for all tags, and in all languages. In fact, many morphological paradigms do not consist of contrasts between inflected and uninflected forms (these are perhaps more common in English than in most languages), but rather of sets of inflectional options, one of which is likely to occur. Our model does not currently account well for this aspect of morphology.
For example, when tagging the German article dem (definite, masculine, dative), subtracting the vector representation for the Dative tag under our current model results in an ill-defined form: there is no article that is definite and masculine but has no case. Instead, we would like the process to yield a set of vectors, close to those for the articles der (definite, masculine, nominative); den (definite, masculine, accusative); and des (definite, masculine, genitive). Fourth, the system is very bad at handling morphological analysis for out-of-vocabulary tokens, and there are many out-of-vocabulary tokens.

Results

Table 2 provides an overview of our system results. Additional discussion of results can be found in McCarthy et al. (2019). The results are uncontroversially bad, particularly for the morphological analysis task. For this portion of the task, our accuracies are dramatically lower than those of all other teams (at least 50% worse than every other team, on most languages). Some of this performance gap surely can be attributed to the fact that we make very minimal use of the training data supplied, but not all of it! We strongly believe that the limitations described in Section 5.3 have severely depressed our results, and we look forward to giving our method a true test in the near future. For lemmatization, we come closer to average performance, coming in at roughly 12 percent less accurate on average (across languages) than the top-performing submitted system. Table 3 compares results to the amount and type of Bible data used to train embeddings for each language. Performance suffers when training on only the New Testament, compared to the full Bible. Surprisingly, proxy language training shows only a slightly lower average performance compared to training and testing on the same language. Of course, all results need to be interpreted with respect to the limitations previously discussed.

Discussion

In addition to the model and implementation limitations discussed in Section 5.3, there are a number of extensions which could be considered for improving the model. Our current model allows a mismatch between the granularity at which the embedding spaces are trained (sentences) and the granularity at which they are aligned (verses). We'd like to experiment with verse-trained models as well. We would also like to train on all of our Bible data, without holding out any data for evaluation of the embedding space (i.e. the four books mentioned in Section 4). For languages for which we don't have a Bible, we will investigate new methods for identifying transfer languages (Lin et al., 2019). Even though our models as implemented prior to submission failed to attain reasonable accuracy on the morphological analysis task, we believe that performance can be improved and that the general architecture deserves further exploration. Ideally, our model could extend to any of the 800 (or more) languages that have a translation of the entire Bible, opening new frontiers for minimally-supervised morphological analysis.
Bioethics as a Governance Practice

Bioethics can be considered as a topic, an academic discipline (or combination of disciplines), a field of study, or an enterprise in persuasion. The historical specificity of the forms bioethics takes is significant, and raises questions about some of these approaches. Bioethics can also be considered as a governance practice, with distinctive institutions and structures. The forms this practice takes are also to a degree country-specific, as the paper illustrates by drawing on the author's UK experience. However, the UNESCO Universal Declaration on Bioethics can provide a starting point for comparisons, provided that this does not exclude sensitivity to the socio-political context. Bioethics governance practices are explained by various legitimating narratives. These include response to scandal, the need to restrain irresponsible science, the accommodation of pluralist views, and the resistance to the relativist idea that all opinions count equally in bioethics. Each approach raises interesting questions and shows that bioethics should be studied as a governance practice as a complement to other approaches.

[...] interest going on, but it is less clear whether there are common threads that link these questions, and, if so, what they are. Should we focus on the use of novel technologies, privileging the modern miracle of the artificial pancreas over the ancient miracle of birth? Or should we stress the human dimension, excluding the feline intervention from the scope of bioethics? In order for us to address such issues, they need to be identified and framed in a manner that enables us to work towards resolution of the practical dilemmas that they generate. This paper suggests that it is helpful to think about this by exploring the governance practices that have developed to help societies respond to the choices and challenges that arise in the field of bioethics. It does not claim that this should replace other ways of thinking about bioethics, but it does suggest that bioethics governance is an important subject in its own right and that it should supplement other more established perspectives in order to create a fuller picture. Some mapping of the field of bioethics is necessary if we are to understand the subject matter over which governance is being exercised. However, the boundaries do not need to be precise or fixed. Governance can be established even in the face of disagreement over the inclusion of specific issues within its scope. The paper begins by examining the content of the academic field of bioethics as a subject area for study. It is important for bioethics governance because it speaks to the scope of its jurisdiction. The idea of jurisdiction provides a helpful framework for the consideration of governance questions. As used here, it denotes the processes of marking out the territory in which 'bioethics' is accepted to be the most appropriate perspective for considering questions, constituting bioethicists or bioethics bodies as the authoritative decision-makers, and delineating the terms on which this authority is conferred. A jurisdictional perspective enables both descriptive and normative questions to be identified. Thus, we can (and should) consider separately how the jurisdiction has come to be constituted from whether it can be defended as legitimate. In addition to matters of scope, bioethics governance needs to address questions of working methods and the human resources required.
The paper therefore considers briefly aspects of the debate over whether there is a discrete academic or practical discipline of bioethics. There can be little doubt that academics have arranged their work to address the field. Centres, journals and courses are badged under the label of bioethics. The question for the purposes of this paper is whether this has led to a distinctive discipline and methodology. This is linked to the question of whether bioethics as a governance practice should be the province of 'expert' bioethicists. The idea that bioethics can be understood as a practice is not new. As with all academic disciplines, there is a social element to the way in which it has developed. In some countries, this has led to the professionalization of bioethics [19], with a degree of governance of the activities of bioethicists [4]. There has also long been a tendency to codify bioethical positions into quasi-legal guidance, extending bioethics from private discussion into a more public and collective process. Parallels can be drawn with other areas of applied ethics, such as business ethics and its manifestation in expectations of corporate social responsibility. However, the extent to which bioethical practices have been consolidated into advisory and regulatory structures (such as 'ethics committees') is distinctive. These have a recognized place in global governance mechanisms through the UNESCO Universal Declaration on Bioethics 2005. It is these processes of institutionalization that I suggest constitute the primary forms of bioethics governance. However, although the UNESCO formulation provides a useful framework for discussion, it is not the only manifestation of governance activities. The UK, for example, deploys a range of governance practices beyond the activities specified in the Declaration. The ideas and questions explored in this paper have emerged from the author's reflections on personal experience of the institutions of bioethics governance in the UK, and the primary examples it uses are therefore drawn from there. (The views expressed in this paper are personal and do not represent the positions of any of these bodies or committees. This paper is a revised version of the 2015 Annual Lecture of the Centre for Health Ethics and Law at Southampton Law School, held in conjunction with Health Care Analysis. I am grateful for the comments of the members of the Centre and the two anonymous reviewers for the journal. I take full responsibility for the errors that remain.) One of the arguments of the paper is that attention needs to be given to the socio-political contexts in which bioethics institutions operate if they are to be well understood. Some institutions may look similar but have different roles and scope. Even a superficial comparison between the work of the French Comité Consultatif National d'Éthique and the UK's Nuffield Council on Bioethics shows that they have significant differences in the topics they have addressed and their ways of working, even though they are also in many ways very similar [22,44]. Bioethics governance should, therefore, be considered in terms of its functions as well as its institutions. In our morally pluralist society, there is considerable divergence of views on many bioethical issues. Various committees, commissions, and authorities have been created to mediate such disagreements, facilitate sufficient consensus to enable public policy choices to be made, maintain public confidence (especially in the responsible use of scientific advances) and reduce the risks that disagreement will manifest itself in social conflict. These responses can be understood as part of an ecosystem of bioethics governance. The aims of this paper are to demonstrate that thinking about bioethics as a governance practice will enhance our understanding of such activities and to draw attention to some features of this approach that merit further consideration, thus sketching an agenda for further study.
What is Bioethics?

Bioethics as a Subject

We might seek to define the subject of bioethics by reference to the topics that fall within its scope. The term seems to have emerged in the context of environmental ethics [35,67,51,57], but soon came to be used in relation to medicine and scientific advance. Levine identifies the early agenda pursued by North American scholars in the emerging field of bioethics as comprising research ethics, death and dying, genetics, reproductive technologies and behavioural control [35]. Dunstan, an Anglican theologian active in ethical discussions in the UK in the 1970s, selected many similar issues to illustrate his thesis about The Artifice of Ethics, a study of the 'institutions built to support and shelter the frail but precious moral judgments of mankind': birth control, the use of aborted fetuses in medical research, genetic engineering, IVF, abortion, and euthanasia. He was less clear, however, that these illustrated a specific area of bioethics. His analysis placed these examples alongside questions about business ethics and the conduct of war (covering both the use of lethal force in Northern Ireland and what might now be called 'weapons of mass destruction': biological and thermo-nuclear warfare) [16]. Wilson has argued that the identification of bioethics as a separate area for consideration in the UK should be attributed to Ian Kennedy's 1980 Reith lectures, The Unmasking of Medicine [31,67]. These covered a range of issues that had become medicalized but which Kennedy argued needed to be re-appropriated by society. His opening list contained heart transplants, the definition of death, the treatment of the dying (including prolonging insensate life), the selective treatment of handicapped newborn babies, and the treatment of the mentally ill [30]. Thus, he linked UK bioethics to issues in medical practice and its advance. A flavour of the current scope of the academic literature can be gleaned from the contents of two leading collections. Bioethics: An Anthology, edited by the philosophers Kuhse and Singer, contains eighty-one extracts, grouped into sections on abortion, reproductive technologies (including surrogate motherhood, sex selection, embryos as tissue donors and cloning), 'the new genetics', life and death, resource allocation, organ donation, experimentation (both human and animal), ethical issues in the practice of health care (mostly relating to aspects of confidentiality and consent), a separate section on issues facing nurses, and finally four pieces on ethicists and ethics committees [34]. This broadly reflects an approach that presents bioethics as addressing choices that are made in and around health services. Methodological issues are generally drawn out within solutions to specific problems rather than presented as of interest in their own right. Steinbock's [60] Oxford Handbook of Bioethics selects only thirty pieces and takes an approach that focuses less on topics than methods.
It opens with a group of essays on theoretical and methodological issues and addresses justice and policymaking before moving to selected essays on the topics of bodies and body parts, end of life, reproduction and cloning, genetics and enhancement, research ethics, and finally justice and global health. This opens up interest in the way in which bioethical matters should be approached, including the possibility that there might be expert 'bioethicists': experts who are particularly authoritative guides to the subject. This might manifest itself in the form of a discrete academic discipline, but even this snapshot from two anthologies is enough to remind us that there is sufficient variation in discussion of scope and methods to make the idea that bioethics institutions might be best staffed by expert bioethicists politically controversial [12,33,61]. It is perhaps better to think in terms of bioethics being a field or enterprise. As Sheehan and Dunn put it: 'disciplines are closely tied to methodologies and traditions of thought, whereas what counts as a field is driven by a set of questions' [59].

Bioethics as a Discipline?

Harris [25] has argued that the modern understanding of bioethics has two main historical roots. The first is the ethics of the medical profession. He suggests this was more a matter of norms of practice and etiquette than reasoned reflection, although this may not do justice to the richness of the tradition of medical ethics [5,28]. The second was moral philosophy. It is this that Harris [25] sees as the driver for modern bioethics, suggesting that it is essentially a specialist area within applied moral philosophy. Warnock, also a professional philosopher and a key figure in UK bioethics, pursued a slightly different dynamic, resisting the idea that philosophical expertise should determine public policy while welcoming the role of bioethics in rescuing British philosophical ethics from an analytical dead-end by replacing the focus on the logic of ethical discourse with an interest in substantive questions [67]. These claims raise questions about the disciplinary nature of bioethics. Conceptual work is needed to define the scope of bioethics, and to refine its methods so as to enable poor and robust work to be distinguished. However, its connection to policymaking needs further explanation. Further, the way in which bioethical work is organised and funded has a significant impact on what counts as bioethics. In the USA in the 1970s, bioethics emerged as a new academic and practical discipline through the creation of institutions. The Hastings Center, originally the Institute of Society, Ethics and the Life Sciences, was established in 1969 [9,35]. The Kennedy Institute of Ethics at Georgetown was established in 1971 [35]. Many more followed as bioethics took root in the academy. The American Society for Bioethics and Humanities now has over 1,800 members. It is hard, therefore, to resist the conclusion that bioethics is an area for academic activity. However, it does not follow that there is a discrete academic discipline with a distinctive set of concerns, conceptual tools and methodologies. Context is crucial. Rothman argues that there was a very specific congruence of political forces that made the context in America particularly hospitable to the emerging thinking on bioethics.
He cites the civil rights movement's wider challenges to power and authority, including anti-discrimination provisions that became applied to neonatal care decisions, the Patient's Bill of Rights (formally developed by the American Hospital Association in 1973), and the development of the right of privacy in the courts in the key medical case of Roe v Wade on abortion. The particular manifestation of bioethics suited the zeitgeist.

The fit between the movement and the times was perfect. Just when courts were defining an expanded right to privacy, the bioethicists were emphasising the principle of autonomy, and the two meshed neatly; judges provided a legal basis and bioethicists, a philosophical basis for empowering the patient. Indeed, just when movements on behalf of a variety of minorities were advancing their claims, the bioethicists were defending another group that appeared powerless: patients. All these advocates were siding with the individual against the constituted authority; in their powerlessness, patients seemed at one with women, inmates, homosexuals, tenants in public housing, welfare recipients, and students, who were all attempting to limit the discretionary authority of professionals [57:245].

On this view, the discipline of bioethics is aligned with philosophy, political theory and human rights law. In terms of disciplinary power, bioethics is characterised as competing with medicine for jurisdiction. Evans [19] provides a sociological account of the history of US bioethics that also concentrates on the struggle for jurisdiction, but characterizes it as being between science and theology. He describes the transmutation of the work of theologians into bioethics in the form of 'principlism', which he sees as the distinctive methodology on which the claim of bioethics to be a 'discipline' is based. In line with the political philosophy of Rawls, and in particular his idea of 'public reason' [54], Evans argues that this enabled bioethics to claim a degree of neutrality between substantive approaches. He notes Engelhardt's position that bioethics could develop a 'moral lingua franca' without 'endorsing a particular moral vision' [18:ix]. These two rather different approaches share the insight that the nature of argumentation in bioethics is shaped by the circumstances in which it emerges. This should make us cautious about putting too much weight on the conception of bioethics as a discipline. The intellectual approaches that have driven bioethics in mainland Europe have been different, with more focus on personhood, the virtues of patient-professional relationships, solidarity and human dignity [58]. The USA has been dominated by Beauchamp and Childress's four principles of autonomy, nonmaleficence, beneficence and justice [6]. Both approaches can be found in the UK. However, the dominant view in Britain in the 1970s was that medical ethics assisted doctors to make better decisions, and this was contrasted with the 'American trend' of bioethics, in which outsiders had assumed the role of 'society's conscience' on issues previously entrusted to doctors [66]. This focus on supporting health professionals to make ethically informed decisions, on the assumption that they adhere to a system of moral values, has continued to underpin much of legal regulation in the UK [38,39,42]. This has not prevented the emergence of a significant 'bioethics industry', but it has taken the form of practices for public governance.
British work on bioethics did not begin with the creation of new academic centres, but by working within the established professional institutions, in what Wilson describes as 'club regulation' [67:ch 2]. This is less clearly rooted in disciplinary competition. He sees the emergence of UK bioethics as an aspect of the ascendancy of the Audit Society, in which social institutions were constrained and controlled by measurement and oversight [67:ch 3]. He argues that although Kennedy's Reith lectures adapted an intellectual discipline of bioethics from the US academic tradition, it took root because of its congruence with the Thatcherite programme to break down the power of the traditional professions. Wilson sees the 1980s and 1990s as the 'high water-mark' of bioethics. Prior to these decades, what is now known as bioethics was a collaboration of the professional establishments of church, medicine and law. During those decades, the professions were drawn into conflict and competition with each other by wider social processes. The emergence of bioethics was one of the ways in which those conflicts played out. Wilson cites the critique of the Audit Society by Baroness Onora O'Neill [46] in the Reith lectures for 2002 as a further watershed, as the general retreat from big government manifested itself in the dismantling of the instruments of bioethics governance. He suggests that 'today, neither the government, nor many bioethicists, share Kennedy's belief that oversight is the best way to ensure public accountability' [65]. Bioethics in the UK thus moved more quickly into governance practices than academic ones and, perhaps as a result, has not reached the same disciplinary definition as seen in the USA. Indeed, in the UK, few people see themselves as professional bioethicists [10]. Rather, Onora O'Neill has suggested, bioethics should be seen as a communal practice. It 'is not a discipline', but instead provides 'a meeting ground for a number of disciplines, discourses and organizations concerned with ethical, legal and social questions raised by advances in medicine, science and technology' [47:1]. This identification of bioethics as a field of study, to which many disciplines can contribute, suggests the need for mechanisms for the coordination of activity.

Bioethics as an Enterprise?

This co-ordination is often directed at a particular kind of purpose. One of the characteristics of contemporary bioethics is that it has taken a 'public' turn, in which it 'constitutes a resource for the formation of public policy which impacts upon the social world' [52:8]. Sheehan and Dunn have described this as a requirement of 'practicality', suggesting that it counts against a bioethical argument that it could not be implemented:

a piece of research or public activity is not correctly defined as bioethics unless it aims at actually convincing people to act differently or to change policy because of the arguments and answers that the bioethicist provides [59:58].

Bioethics must be 'sensitive to the realities of political contingencies and institutional constraints' [59:58], and:

It is an entirely appropriate response to an argument in bioethics that it is impractical, or, for example, that its actual implementation would not succeed because the argument fails to consider relevant contextual features of the relevant situations [59:59].

They describe bioethics as concerned with primary 'ought' questions (e.g.
the best policy on abortion), to which a series of secondary questions are relevant because of the practicality requirement. These include 'the nature and functioning of the regulatory system.' However, the issues of practicality perhaps become the primary, rather than secondary, questions when bioethics is considered as a public-facing enterprise. The starting point for addressing the 'ought' questions is the set of available practical options, and a substantive position is already implicit within the status quo. The issue is about whether there is a case for change. The onus of proof lies on those who propose reform, and the burden of proof requires a sufficiently compelling argument to justify investing the energy necessary to overcome the forces of inertia. The playing field is not an even one, but the unevenness is entirely contextual. The status quo may be different in different places in relation to the same question. Reform proposals developed by bioethicists should therefore be understood in their specific historical, political and social contexts [43]. Thinking about bioethics as an enterprise, aimed at public persuasion and impact, enables us to focus on dimensions that are marginal to the normative tasks of the version of the discipline that sees itself as rooted in applied moral philosophy. We should think not so much about what bioethics 'is' as about what it does. We should be concerned to understand the nature of bioethics as a Foucauldian 'discipline', a discursive technology of social control [56], and look for a normative framework for critique that is sensitive to the way in which bioethics asserts its jurisdiction in matters of public significance, not merely private morality. We should also be concerned to study the institutions by which society governs matters of bioethical significance. Dunstan describes a societal activity in which 'the moralist, having seen his (sic) vision, or arrived at his position, must weave his insight into the fabric of society by creating an institution in which to embody it' [16:4]. We must consider the structures and functions of bioethics governance.

Typologies of Governance Practices

The examination of institutions shifts the focus of the practice of bioethics from an intellectual enterprise to a governance one. Duwell has described the institutionalisation of bioethics as a response to a mixture of demands from clinicians for support, emerging public concerns (including those about technological advances and also scandalous behaviour), and the changing political contexts in which longstanding questions about the value of life were debated and translated into principles and rules to guide public life. He draws attention to three types of committee. First, those formed to advise political institutions (concerned with ethical reflection on emerging problems). Second, those formed to provide assurance that established ethical principles have been observed (such as research ethics committees). Third, those devised to support individual decision-making in particular cases (typically described as 'clinical' ethics committees). Each of these aims to provide support on bioethical issues, but in different ways and for different purposes. The first is aligned with existing political or professional authority (such as governments or hospitals) and offers advice on how to exercise it. The second rarely engages directly in ethical reflection, but is concerned with ensuring compliance with established standards.
The third is concerned with specific cases, and Duwell suggests that it serves to 'create a space within the clinical praxis in which conflict situations can be dealt with transparently with regard to both argumentation and procedure' [17:2-5]. A similar typology can be derived from the UNESCO Universal Declaration on Bioethics (2005), which adds a fourth category about wider public discussion (one recognized by Duwell but not considered to be institutionalized). Article 19 of the Declaration states:

Independent, multidisciplinary and pluralist ethics committees should be established, promoted and supported at the appropriate level in order to:
(a) assess the relevant ethical, legal, scientific and social issues related to research projects involving human beings;
(b) provide advice on ethical problems in clinical settings;
(c) assess scientific and technological developments, formulate recommendations and contribute to the preparation of guidelines on issues within the scope of this Declaration;
(d) foster debate, education and public awareness of, and engagement in, bioethics.

This international instrument has legitimated both a capacity-building programme and the comparison and critique of the bioethics governance structures in different states [48,62,63]. Even though the forms that bioethical governance practices take must be understood as generated within a specific historical context, comparisons and discussions can only proceed with a schematic analysis of some sort. However, we should not limit the conception of governance processes to this schema, as the tools used in bioethics governance are more contingent and diverse than the UNESCO Declaration suggests.

The Historical Contingency of Bioethics Governance

Understanding this requires historical consideration of how bioethics became institutionalised into the specific forms of committees, regulatory bodies and commissions. Jonsen has documented the transition from the 1960s as a 'decade of conferences', in which scientists from across the world came together to discuss the emerging possibilities, into the 1970s, when bioethics became the province of government commissions [29]. The emergence of these earliest governance bodies seems more nationally determined. Rothman's study of how medical ethics became detached from the internal morality of the profession traces the tussles in the US Congress that led to the establishment of national commissions [57:ch9]. He shows how professional resistance to external scrutiny frustrated politicians and contributed to the creation of commissions on which medics were in a minority. Professional witnesses denied the relevance of the concerns put to them, suggested that outsiders lacked competence in these matters, and implied that their appearances before the committee were a waste of their valuable time. In this context, the transfer of power and authority in bioethics from the medical profession to 'strangers' by the creation of commissions to deliberate on issues of principle can be seen to flow from the resistance of the profession to the legitimacy of public debate, and its denial of political authority in the area. This was extended by the Supreme Court of New Jersey in the Quinlan case, which suggested the use of committees to oversee individual clinical decisions as a way of addressing perceptions that doctors were faced with conflicting interests and to provide protection from civil and criminal liability [27].
Although the decision was not legally binding in other states, it was followed by increased formality within hospitals for the oversight of clinical decisions: an Optimum Care Committee was established at Massachusetts General Hospital, formal guidelines began to be drawn up, and reference to the courts increased. Lawyers began to play a prominent role [57:229-235]. Robert Baker attributes the particular shape of the development of bioethics in the USA less to the arrogance of doctors in respect of ethical issues than to their complacency:

organized medicine's laissez-faire abandonment of medical ethics created a void in the marketplace of ideas and a vacuum of moral authority. To fill this void, legislators, bureaucrats, the courts, and American society generally sought ideas and invested moral authority elsewhere, ultimately finding it in an oddball collection of lumpen intelligentsia who were soon valorized as ethics experts or 'bioethicists' [5:279].

He points out that in Europe, organized medicine never abandoned its jurisdiction over ethics. Consequently, bioethics developed as a collaborative, not antagonistic, enterprise. Thus, the institutions of bioethics governance do not play precisely the same roles in different societies. We have already noted Wilson's description of early British bioethics as a system of 'club regulation'. This collaborative approach continued with the emergence of institutions of bioethics governance. In the UK, the medical profession was not forced into developing such bodies, but took a lead. It moved more quickly than government to establish a body to provide ethical oversight in the context of assisted reproductive technologies. A Voluntary (later renamed Interim) Licensing Authority was established soon after the publication of the Warnock report in 1984 [55], and broadly played the same role as was subsequently taken on by the Human Fertilisation and Embryology Authority from 1991, when the legislation came into force. The doctors' trade union, the British Medical Association, developed a Handbook of Medical Ethics that promoted raising expectations that doctors should meet ethical standards, in advance of the regulator's interest, which for many decades was in misconduct rather than good practice [41:42-44]. British bioethics has been less confrontational than its US equivalent, and the history and functions of its institutions need to be considered with that in mind. However, we should not develop too narrow a conception of governance processes. There is more to governance than the constitutions of committees. It is also important to recognise the messiness of the connection between the institutions actually created and the stylized explanations of their purpose. These points can be drawn out of a brief discussion of some of the functions of bioethics governance.

Functions of Bioethics Governance

Evans' [19] sociological perspective on bioethics identifies four jurisdictional spaces that might be inhabited by bioethicists. These concern health care ethics consultations, research bioethics, public policy bioethics, and cultural bioethics. Evans' account stresses the importance of understanding the 'jurisdiction givers' who provide access to these spaces. He suggests that the possibility of the professionalization of bioethics in the USA, and also the principlist form it took, were a consequence of government officials becoming jurisdiction givers.
Their expectations were for an abstract body of knowledge that could claim to provide a common morality in a pluralist society. As bioethics adopted this form, it gained jurisdiction. Evans argues that the status of professional bioethics is waning in the USA because both government and the media (as gatekeepers to cultural bioethics) are turning directly to the spokespeople for partisan social movements rather than looking to bioethicists to mediate the opposing arguments. This section of the paper considers four justificatory narratives that provide plausible explanations for governance activities, and therefore serve to legitimate them. These are the response to scandalous activity, the imperative to address public concerns about potentially irresponsible scientific advance, the need to resolve deep disagreements about bioethical matters in a pluralist society, and the desire to delineate public bioethics from other political issues. These narratives are not the only ones that might be considered, but they are sufficient to show that thinking about bioethics as a governance practice raises distinctive questions that merit further study.

Bioethics Governance as a Response to Scandal: The Case of Research Governance

One of the dominant narratives that serves to drive bioethics governance is that it is a necessary response to scandals of professional immorality. This is perhaps most easily seen in relation to the governance of medical research. On this account, the abuse of human research subjects leads to regulatory interventions to control researchers and protect participants. While doubts have been raised about the historical accuracy of this explanation in respect of specific regulatory developments [26], there is little doubt that it is seen by many as the explanation for the need for governance. Even if this account of the origins of research governance is more myth than history, it enables us to connect the rationale with an appropriate framework of critique. Given that scandal is generated by events that are considered to transgress norms (even if those norms were never explicit), a successful governance framework will address both the occurrence of such events and the management of expectations in order to sustain public trust and confidence. This need not be seen as an exercise in external control. The research community has an interest in good governance provided that it serves to maintain the social licence that it requires for its work [15]. So far as the tools of bioethics governance are concerned, three dimensions of the regulation of health research are worthy of mention. The first concerns the codification of principles. Here, the response to the abuses of Nazi medicine that were revealed in the war crimes trials is often regarded as pivotal. The 'Nuremberg Principles' stressed the primacy of individual rights over the advancement of science and the importance of informed consent [1,23]. The World Medical Association took this approach forward in the Declaration of Helsinki, first adopted in 1964 and amended for the seventh time in 2013 [69]. It is important not to claim too much for this component of bioethics governance. Germany was the first country to pass laws protecting research subjects, so Nazi medicine was not pursued in ignorance of the requirements [20:106, 23]. Nevertheless, standards are an important tool. They make it clear to researchers what is expected of them, hopefully providing a guide to their conduct but also enabling them to be called to account.
They can also be protective, offering researchers a means to explain to the public that they are acting ethically. Committees charged with scrutinizing research cannot be identified as 'ethics' committees without some set of principles to define ethical issues. In the absence of such standards, even if they are gatekeepers whose permission is required before research takes place, they are not engaged with bioethics governance but with other types of question, such as scientific review, prioritization of resources or reputation management. Principles may be essential to the creation of an ethical governance system, but they are not sufficient. As the exposés of Beecher in the USA [7] and Pappworth in the UK [49,50] showed, significant numbers of medical research studies were (in various different ways) unethical long after the Nuremberg Principles had been promulgated. This was more a professional than public scandal, although Beecher's criticisms were neglected until amplified by journalists. Both came from within the medical profession, not outside it. So too did the most visible governance mechanism in medical research, the Institutional Review Board or Research Ethics Committee. This was promoted by the US Surgeon General in 1966 on prompting by the US National Institutes of Health (which had had some sort of ethical review since 1953) [26:333] and extended as a regulatory requirement by the Belmont Report in 1979 [57:ch5]. As Adam Hedgecoe has shown, the history of RECs in the UK is similarly bottom up [26]. It began in hospitals seeking US funding. It was picked up pragmatically by the Royal College of Physicians in the form of a Committee on the Ethical Supervision of Clinical Investigations in Institutions, which reported in 1967. Committees were required administratively by the Department of Health in its Red Book of 1991 [13]. They only became a legal requirement in 2004, following the implementation of the EU's clinical trials directive [37]. Prior review by gatekeeping committees is now an established part of the governance structure for medical experimentation, providing assurance that studies are designed in accordance with the ethical principles previously established. In itself, this does nothing to ensure that studies are well conducted, and following a research scandal in North Staffordshire [24], the UK National Health Service put in place a further dimension of governance, known as the Research Governance Framework for Health and Social Care [14]. This incorporated the features already described: a restatement of principles (the primacy of the rights of participants, informed consent, confidentiality and data protection) and the requirement of ethics review. However, it went further and specified the separate responsibilities for the conduct of the trial that lay with (a) principal investigators, (b) research sponsors, and (c) the organisations which employ the researchers. These clarifications have enabled accountability for breaches in the conduct of research, monitored by the Health Research Authority, although regulatory action in this area remains rare. Thus, we have an archetype of a bioethics governance structure: codified ethical standards, licensing through ethical review, and oversight/accountability through research governance. In origin, this emerged from within the professions and serves to protect their reputation.
Its continuation is typically justified by reference to a series of historical scandals against whose recurrence the governance system is claimed to provide protection for participants.

Bioethics Governance as the Restraint of Irresponsible Science

A second explanatory narrative for the need for bioethics governance lies in fears about scientific advance. In this story, science is portrayed as being driven by a technological imperative, doing things because it can, without regard for whether it should. This was one of the planks of the argument put forward by Kennedy for the establishment of a national bioethics commission on the USA model in the 1980s [32]. It is this aspect of bioethics governance that lay behind the creation of the Nuffield Council on Bioethics in 1991. Its terms of reference, as yet unchanged, require it: To identify and define ethical questions raised by recent advances in biological and medical research in order to respond to, and to anticipate, public concern; To make arrangements for examining and reporting on such questions with a view to promoting public understanding and discussion. It is also charged with publishing reports on these matters and making representations to appropriate regulatory or other bodies. The working assumption is that scientific advances are a matter of public concern that needs to be allayed. As with the evolution of research governance, it would be wrong to see the creation of commissions charged with considering the implications of scientific advance as external regulation driven by lay people, or even as necessarily resisted by scientists. Warnock, another advocate of a national commission, suggested that it would be a counter-balance to an almost medieval obscurantism… a hostility to science based on vague thoughts that there are some things that we should not know, but based more than anything on fear and ignorance…. After the last war there was a cliché to the effect that man's scientific knowledge has outstripped his moral sense. At that time it was uttered in the context of the physical sciences. The bomb had, rightly, frightened us all. Now the same cliché is more and more to be heard in the context of the biological sciences. We must take it seriously. Only within an ethical framework widely seen to be secure and sensible can we continue, as we must, to push back the frontiers of science [64]. Although the source of public concern is different, bioethics governance is once again a mechanism for enabling public confidence to be maintained. It is a moot point whether 'science' is a single thing about which concerns are raised, or whether the need for governance arises differently in relation to distinct issues. The Nuffield Council on Bioethics, as a non-government body, does not precisely reflect the expectations of the UNESCO Declaration. It is, however, the only British body with an overarching remit for keeping new bioethical issues under review. More characteristically, the tendency of the UK has been to address bioethical issues through specialist institutions rather than a generic one. Thus, issues in assisted human reproduction are on the agendas of many national bioethics commissions, but in the UK they have been addressed by sector-specific means. The Warnock Committee reported on the policy in 1984 [55], leading to the creation of the sector regulator, the Human Fertilisation and Embryology Authority.
Questions around organ donation were explored by the Redfern Committee into a scandal at Alder Hey, the Retained Organs Commission, a sector regulator (the Human Tissue Authority), a Parliamentary Select Committee and an Organ Donation Task Force. Non-government investigations into organ donation have included reports from the King's Fund, the Nuffield Council and the BMA. The UK's preparation for pandemics has included a Committee on the Ethics of Pandemic Influenza. Some of the issues around the biodata being gathered by Genomics England have been addressed by the ad hoc route of asking a trusted bioethicist to convene a group of people to help draft a letter of advice to the Chief Medical Officer. Parliamentary Select Committees have examined a number of other bioethical questions generated by scientific advances, including regenerative medicine, mitochondrial DNA donation, and the use of biodata. It is also clear that there are important bioethical issues that are perceived to give rise to governance challenges that are driven not by scientific advance but by clashes of values. While interest in bioethics governance can be promoted by 'morally disruptive' technologies [5], it is not necessary to have new technologies for bioethical controversy to arise. This can be seen in relation to debates over abortion, where technological advances (such as the move to medical rather than surgical methods) provide the opportunity to revisit fundamental conflicts over the status of the human fetus rather than generate new ethical arguments. It can also be seen in deliberations over end-of-life care, where the arguments are well known but the solutions remain hotly contested. In the UK, the private member's bill process has led to full Parliamentary debates on assisted dying in both Houses, but this has not resolved the matter. We need therefore to consider further explanations of the role and function of bioethics governance processes.

Governance as a Response to Pluralism

A starting point would be the fact of deep disagreement: a problem of moral pluralism [18]. This creates a challenge for bioethics governance mechanisms to achieve a sufficient degree of closure to enable health and research systems to function. This will clearly be a key feature of clinical ethics support, in which the patient's position will often dictate a timescale for decisions to be taken. In the policy context, decisions may also need to be taken within specific time frames. There is no neutral position in the face of controversy about, or professional and public demand to be permitted to use, new (or indeed established) technologies. A decision to sustain the status quo needs justification just as much as one to reform the position. The decision cannot, therefore, be ducked. Amongst the functions of bioethics governance is to legitimate such decisions. Sometimes, this might be a form of closure that claims to have resolved a substantive dispute and answered the concerns. In other circumstances, however, the position may be reached on a temporary or provisional basis, so that pressing policy decisions can be made but revisited at a later date. Here, it might be better to describe the governance process as one whereby jurisdictional distinctions are created that allocate the decisions to particular decision-making processes. Under such an approach, an issue is regarded as closed in certain fora, and the settling of controversy is deferred or diverted to a different forum.
Thus, pluralism is acknowledged in a way that balances the desire for public debate with the need to facilitate research and clinical care. An illustration can be found in the terms of the Abortion Act 1967, which sets out the conditions under which termination of pregnancy is permissible. For the purposes of individual clinical decisions this defines the ethical questions, balancing women's rights with other values. The responsibility for assessing whether those conditions are met is allocated to two medical practitioners. Accountability is limited to the scrutiny of their good faith and does not extend to the substantive judgment itself. Controversy remains about the legitimacy of this settlement, but it is deferred away from the clinic to Parliament, managerial oversight and political campaigning. In such fora, the issues are far from resolved. A further aspect of jurisdictional deferral can be seen in the Human Fertilisation and Embryology Act's tiers of authority to create norms [40]. Some matters are determined by Parliament, some by the HFEA through its licensing conditions or Code of Practice, and some by clinics or individual clinicians. This distribution of authority is a key technique of bioethics governance. These decision-makers remain accountable through Parliament and in the courts. Without this, the jurisdictional settlement would become unstable, but with sufficient accountability it can handle the challenges of balancing competing values in society and enable decisions on bioethical issues to be implemented. Separating out these tiers of norm-making powers enables the differentiation of who should be involved in which types of activities. It seems legitimate for the HFEA to exclude from the licensing committee those who are opposed on principle, in order to ensure that the legislation is applied and not undermined. However, a governance structure that completely prevented the possibility of dispute over such issues would soon lose its legitimacy. If bioethics governance is to be an effective response to the challenge of moral pluralism, an account is therefore required of the ways its processes for mediating between conflicting factions demonstrate sufficient respect for differences to justify acting on the conclusions reached [8]. This might involve some appeal to the representativeness of the membership of governance bodies. Thus, the constitution of some national ethics committees requires that membership includes a range of characteristics that reflect the diversity of their populations. The Belgian Advisory Committee on Bioethics has its origins in a co-operation agreement of 15 January 1993 signed by the federal Government, the French-speaking Community, the Dutch-speaking Community, the German-speaking Community and the Joint Commission for Community Matters. It has 35 voting members, representing various constituencies, each of which must evenly balance French- and Dutch-speaking members. Each member has an alternate to ensure that their perspective is not lost if they are absent. It seems clear that the fact of pluralism is one of the important contexts in which the institutions of bioethics governance function. However, it does not explain the creation of bodies specifically concerned with bioethics rather than more general approaches to public debate and democratic decision-making.
The observation by Evans that, in the USA, the gatekeepers have turned away from inviting professional bioethicists to dominate the jurisdictional spaces in favour of partisan social movements draws attention to the need to identify a justificatory narrative for bioethics governance that makes a substantive ethical contribution to the ways in which pluralism is acknowledged.

Governance as a Response to Relativism

A fourth sustaining narrative for bioethics governance concerns its claim to move beyond a relativist assumption that all ethical opinions are entitled to equal respect. It needs to be more than mere vote-counting if it is to claim to be worthy of separate consideration beyond other democratic mechanisms. Issues might be resolved by plebiscite, counting votes, as in the decision of Oregon to permit physician-assisted suicide [58]. It has been argued that the governance of new health technologies in the European Union has been determined more by the regulation of markets, based on managing risk and respecting human rights, than by ethics [21]. Bioethics governance seems to be based on the idea that there is something sufficiently distinctive about the topics within its scope to render these mechanisms unsatisfactory. It aims for substantive ethical argument to drive decision-making. Market approaches make such arguments exceptional, reasons to restrict the normal economic freedoms. Plebiscites privilege the quantity of support rather than its basis: an opinion counts even if it is rationally indefensible or based on scientific error. The response to the challenge of relativism takes one of two main forms. The first is substantive and takes the form of a search for common principles that can then be implemented through the governance institutions. In this model, the relationship between the committees and the ethical debates may be ambiguous, and questions arise as to how far they should be concerned with substantive ethics and how far with ensuring compliance with standards of which they are stewards, not creators [2,36]. The response to relativism lies in the robustness of the principles rather than the enforcement processes. We have already seen how the emergence of principlism in the USA to take this role can be linked to the Rawlsian idea of 'public reason'. The essence of this approach is the use of concepts that can provide a common currency for debate, even though they will be supported by participants for different (and sometimes contradictory) reasons, and do not involve judging the morality of others: 'it neither criticizes nor attacks any comprehensive doctrine, religious or nonreligious, except insofar as that doctrine is incompatible with the essentials of public reason and a democratic polity' [54]. In the context of clinical ethics, there are manifestations of the public reason approach in the form of professional guidance, consensus statements and regulatory requirements for maintaining a licence to practise. The study of bioethics governance might take the form of reviewing such documentary manifestations as evidence of the content of bioethical public reason. In relation to the wider functions of bioethics in support of public policy, international conventions can be seen as a similar response to the challenges of relativism through the establishment of a foundation of common principles. The UNESCO Declaration has already been noted. The Council of Europe's Oviedo Convention provides another example.
Developed within the human rights paradigm, it pronounces on issues that are firmly within the scope of academic bioethics, and subsequent protocols have extended its reach further. Whether the process is seen as intellectually satisfactory or merely an exercise in politics will determine whether this turn to 'public reason' provides a satisfactory response to the problem of relativism [3], and requires consideration of the logic of public ethics [43,68]. There is a second approach that seeks to address the challenge of moving from disagreement to some normative framework without simply accepting the validity of different views in our pluralist societies. Here the focus is not on the premises and conclusions of public reason, but upon the character of the processes that are thought in themselves to provide legitimacy for the positions reached. The Nuffield Council on Bioethics has adopted a position of this type. A stock-take of its work by Chan and Harris in 2006 suggested that it might have incrementally developed something a little like a 'Nuffield Council Ethical Framework for Bioethics' [11]. They identified a number of principles that seemed to be drawn upon, although not always consistently: avoidance of harm, the duty of beneficence, respect for persons and autonomy, justice and just resource allocation, informed consent, confidentiality and privacy, respect for dignity, and naturalness. Since then, the Council has committed itself to a different legitimation narrative, based on the procedural aspects of public reasoning rather than its conceptual content. In response to the fact of pluralism, it has committed to a principle of 'inclusiveness: no single view or approach to bioethics should be favoured, and the expression of all views should be encouraged and welcomed' [45]. On this basis, legitimacy can be drawn partly from the fact that no one has been excluded from the debate. In response to the problem of relativism, it has asserted the importance of applying to all arguments 'tests of coherence and rationality' in a rigorous way, 'based on the best evidence available, and supported by careful and comprehensive analysis' [45]. Here the approach seeks to distinguish bioethics from the resolution of disagreement through mere compromise and negotiation by characterizing it as an evidence-based argumentative activity in which participants must justify and not merely assert their positions.

Conclusions

The approach adopted by the Nuffield Council could be little more than a particular manifestation of the principles of deliberative democracy. However, this is where it becomes necessary to reflect on the interrelationship between the senses of bioethics that were explored in the earlier sections of this piece. It is a deliberative process, but it is overseen by a group of people selected for knowledge of, and interest in, the disciplines that the Council perceives relevant to contribute to the field of bioethics. The idea of bioethics as a discipline is rejected in favour of an understanding of a field of inquiry. However, the activities of the Council are very much of the nature of an expert enterprise in policy engagement. The Council is influential in policy fora, but has no formal authority. It is therefore reliant on implementation structures being in place for bioethics governance. So, bioethics is all of the things that have been discussed. However, its nature as a governance practice is an important dimension of the ecology of bioethics.
Recognition of the value of seeing bioethics as a governance practice raises areas for enquiry that can too easily be overlooked. First, the legitimacy of its institutions to operate in the public sphere needs more explanation and justification than it currently receives. This will include consideration of the people who are involved: how they are selected; the nature of the authority that they exercise; the processes by which positions are reached; and the efficiency, proportionality and effectiveness of the accountability mechanisms that can be invoked. Second, further elaboration of the forms of institutionalization of bioethics is needed. Third, there is scope for study of the literary forms of bioethics governance, such as opinions, reports, guidelines and consensus statements. The forms of dissemination of public ethics are different from those of the academy, and these differences are worthy of examination. Studying bioethics as a governance practice focusses more on who does things, and how and why they do them, than on what they study and what they conclude. It should not supersede other approaches to understanding bioethics, but it should complement them. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
An actin-depolymerizing factor from the halophyte smooth cordgrass, Spartina alterniflora (SaADF2), is superior to its rice homolog (OsADF2) in conferring drought and salt tolerance when constitutively overexpressed in rice

Summary

Actin-depolymerizing factors (ADFs) maintain the dynamics of the cellular actin network by regulating severing and disassembly of actin filaments in response to environmental cues. An ADF isolated from a monocot halophyte, Spartina alterniflora (SaADF2), imparted a significantly higher level of drought and salinity tolerance when expressed in rice than its rice homologue OsADF2. SaADF2 differs from OsADF2 by a few amino acid residues, including a substitution at the regulatory phosphorylation site serine-6, which accounted for its weak interaction with OsCDPK6 (calcium-dependent protein kinase), thus resulting in an increased efficacy of SaADF2 and enhanced cellular actin dynamics. SaADF2 overexpression preserved actin filament organization better in rice protoplasts under desiccation stress. The predicted tertiary structure of SaADF2 showed a longer F-loop than OsADF2, which could have contributed to its higher actin-binding affinity and rapid F-actin depolymerization in vitro. Rice transgenics constitutively overexpressing SaADF2 (SaADF2-OE) showed better growth, relative water content, and photosynthetic and agronomic yield under drought conditions than wild-type (WT) plants and OsADF2 overexpressers (OsADF2-OE). SaADF2-OE preserved intact grana structure after prolonged drought stress, whereas WT and OsADF2-OE presented highly damaged and disorganized grana stacking. A possible role of ADF2 in transactivation was hypothesized from comparative transcriptome analyses, which showed significant differential expression of stress-related genes, including interacting partners of ADF2, in the overexpressers. Identification of the complex, differential interactome decorating or regulating the stress-modulated cytoskeleton driven by ADF isoforms will lead us to key pathways that could be potential targets for genome engineering to improve abiotic stress tolerance in agricultural crops.

The filamentous actin (F-actin) network constitutes the majority of the cytoskeleton (Li et al., 2015). The stochastic dynamics of F-actin via polymerization, depolymerization, severing, nucleation and large-scale cellular translocation events affect overall cytoskeletal integrity (Augustine et al., 2011). Actin remodelling plays an important role in plant cell, tissue and organ developmental reprogramming, cell division and cellular organelle assembly. Actin also predictably participates in nucleosome occupancy, chromatin modification and regulation of gene expression (Bettinger et al., 2004; Miralles and Visa, 2006). Actin, in coordination with a large group (over 70 families) of both cytoplasmic and nuclear actin-binding proteins (ABPs), provides the cytoskeleton with high plasticity during growth and environmental challenges (Augustine et al., 2011; Deng et al., 2010; Tholl et al., 2011). ABPs, singly or in combination, regulate the stoichiometric ratio between free monomeric G-actin (globular actin) and constantly depolymerizing F-actin in the plant cell. Of the total pool of actin moieties, only 5% usually remains in the filamentous state at a given time (Gibbon et al., 1999; Snowman et al., 2002). The constant entry and exit of the G-actin pool within the cytoskeletal mesh requires a number of ABPs and their functional partners to be expressed and active during the process.
Actin-depolymerizing factors (ADFs)/cofilins are a family of ubiquitous, low-molecular-mass (15 to 20 kDa) ABPs that bind both G-actin and F-actin in plants, and their functions are regulated by cellular pH, ionic strength and the availability of other binding partners. ADF is reportedly essential for plant viability (Augustine et al., 2008). By binding to the ADP-bound form of actin, ADFs sever actin filaments and thus provide more barbed filament ends for polymerization (Clément et al., 2009; Li et al., 2010; Staiger et al., 2009; Tian et al., 2009). ADFs also increase the rate of dissociation of F-actin monomers from the pointed ends by changing the helical twist of the actin filament, thus accelerating the dissociation of subunits (Bamburg and Bernstein, 2008; Bowman et al., 2000; Cooper and Schafer, 2000; Daher et al., 2011). These two activities together make ADFs the major regulators of actin dynamics in the plant cell, with important functional associations with other regulatory proteins, for example actin-interacting protein 1 (AIP1; Amberg et al., 1995; Iida and Yahara, 1999; Konzok et al., 1999) and calcium-dependent protein kinase (CDPK; Smertenko et al., 1998). To date, only a few plant ADFs, such as those of Arabidopsis (Bowman et al., 2000; Carlier et al., 1997; Nan et al., 2017; Tholl et al., 2011), maize ZmADF (Gungabissoon et al., 1998) and a pollen-specific ADF from lily (Allwood et al., 2002), have been biochemically characterized. The inhibition of ADF activity by phosphatidylinositol 4,5-bisphosphate and phosphatidylinositol 4-monophosphate, and the fact that ADFs can also shut down phospholipase C activity, reveal a close association of ADFs with phosphoinositide signalling in plants (Gungabissoon et al., 1998). Phosphorylation of plant ADFs at the conserved serine-6 residue by CDPK inhibits their depolymerization activity (Allwood et al., 2001; Smertenko et al., 1998), which suggests that the Ca²⁺ status of the cell may play an important role in the regulation of ADF activity. Drought and salinity are the two most important environmental stressors that negatively impact the growth and productivity of agricultural crops, including rice, arguably the most important global food crop. Plants, as sessile organisms, have developed strategies to adapt to these stresses through physiological and biochemical adjustments achieved via the coordinated expression of genes involved in stress-responsive gene regulatory networks. Many ABPs influence actin filament dynamics in response to environmental signals (Hussey et al., 2006; McCurdy et al., 2001; Staiger and Blanchoin, 2006; Yokota and Shimmen, 2006). The plant cytoskeleton is thus emerging as an active receiver of environmental stress signals through the recruitment of ABPs, including ADFs (Drobak et al., 2004; Solanke and Sharma, 2008). However, there are only a few reports implicating ADFs in abiotic stress responses. TaADF was regulated specifically under cold stress in wheat (Ouellet et al., 2001). A hydrophobic ADF mutant (valine 69 to alanine) was shown to rescue a partial RNA interference-mediated stunted growth phenotype at a permissive temperature (20 to 25°C) but not at 32°C, a restrictive temperature, in temperature-sensitive candidates of the moss Physcomitrella patens (Vidali et al., 2009). Freezing induced ADF activity, leading to depolymerization of actin filaments in oilseed rape (Egierszdorff and Kacperska, 2001). ADF was up-regulated in rice after 2 to 6 days of drought stress (Ali and Komatsu, 2006).
Rice OsADF3 was shown to be induced under stress and to enhance drought stress tolerance in Arabidopsis. Halophytes adapt to salt and drought by virtue of their superior alleles of the genes involved in ion homeostasis, osmotic adjustment, and ion extrusion and compartmentalization in comparison with glycophytes (Zhu, 2000). Many halophytes, such as Thellungiella halophila (Wu et al., 2012), Mesembryanthemum crystallinum (Chiang et al., 2016; Tsukagoshi et al., 2015) and Porteresia coarctata (Majee et al., 2004), have proved to be elite sources of stress tolerance genes for bioprospecting. A perennial grass halophyte, Spartina alterniflora (Loisel) (smooth cordgrass), is reported to grow in salinity ranging from 5 to 32 psu, that is, double the strength of marine water (Baisakh et al., 2008). Along with P. coarctata, S. alterniflora has been proposed as a model halophyte grass for monocotyledonous crops (Joshi et al., 2015; Subudhi and Baisakh, 2011). Bioprospecting of S. alterniflora genes has been reported to improve salinity and drought stress resistance when they are overexpressed in the model plant Arabidopsis and in rice (Baisakh et al., 2012; Joshi et al., 2013, 2014). The present study emanates from the hypothesis that modulation of cytoskeleton architecture by manipulating actin turnover provides abiotic stress resistance in crops and that an ADF of a halophyte is superior to its homolog from glycophytic rice. Here, we report on the biochemical and functional implications of an ADF from S. alterniflora (SaADF2) in the drought and salt stress response when overexpressed in rice. Further, its superiority over the rice homolog OsADF2 was studied by overexpressing OsADF2. Structural differences between the two highly identical proteins, as possible reasons for the functional superiority of the former in conferring abiotic stress tolerance, are discussed.

Results

SaADF2 was highly identical to OsADF2 and nuclear localized

A 438-bp-long cDNA isolated from the abiotic stress-responsive transcriptome of the grass halophyte Spartina alterniflora codes for a conserved and ubiquitous actin-binding protein (ADF) of 145 amino acid residues. Protein sequence comparison of the S. alterniflora ADF with ADF gene family members from rice showed that it was >95% identical to the rice ADF isoform OsADF2, and hence it was annotated as SaADF2 (Figure 1a); it clustered with AtADF6 from Arabidopsis (Figure 1b). SaADF2 showed nuclear localization (Figure S1), as was observed for OsADF2 by Huang et al. (2012).

SaADF2 is structurally different from OsADF2

SaADF2 is a typical plant ADF with a highly conserved cofilin/ADF domain spanning the C-terminus of the monomer (ADF-H domain, N19 to H145). It weighs 16.8 kDa with a predicted pI of 6.20, whereas the predicted pI is 5.65 for OsADF2. The core structures of SaADF2 and OsADF2 are highly similar, with five central α-helices and five β-strands, but SaADF2 shows a sixth β-strand at the N-terminal end (Figure 1c). Predicted tertiary structures showed a central core barrel of β4 and β5 in SaADF2 (β3 and β4 in OsADF2), surrounded by α-helices and other β-sheets. The two core β-sheets are joined by the F-loop (Figure 1d), a flexible coil responsible for F-actin binding (Singh et al., 2011). In OsADF2, the F-loop is 6.50 to 8.86 Å high from the N- and C-terminal sides, respectively, with a base of 4.98 Å and an active plane radius of 5.6 Å. Contrastingly, the F-loop in SaADF2 is 12.76 and 14.47 Å high from β4 and β5 (N and C termini), respectively, with an active plane radius of 8.9 Å (Figure 1e).
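As a side note for readers reproducing the sequence comparison, a figure such as ">95% identity" for two pre-aligned, equal-length protein sequences reduces to counting matching positions. The following minimal Python sketch illustrates this; the two 20-residue fragments are hypothetical placeholders, not the actual SaADF2/OsADF2 sequences.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned sequences of equal length.

    Alignment gap characters ('-') are excluded from the denominator.
    """
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            continue  # skip columns containing a gap
        compared += 1
        matches += a == b
    return 100.0 * matches / compared

# Hypothetical 20-residue fragments differing at one position -> 95.0
print(percent_identity("MANAASGMAVHDDCKLRFLE", "MANAATGMAVHDDCKLRFLE"))
```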
The long F-loop of SaADF2 is significantly exposed outside the protein core, providing it a high rotational free space. The F-loop tip is highly hydrophilic and organized with two hydrophobic patches on both sides in SaADF2, whereas the hydrophilic tip volume is much reduced in OsADF2 (Figure 1d). OsADF2 and SaADF2 differ by six amino acid substitutions (Figure 1d). Serine-6, the key phosphorylation site of plant ADFs, is substituted in SaADF2 by threonine. At the helix subproximal to the C-terminus in SaADF2, the phosphosensitive proline-132 and threonine-133 are both substituted by serine in OsADF2. The other substitutions, 19N>19D, 25H>25L and 118H>118Q, are positioned on the OsADF2 model superimposed over SaADF2 (Figure 1d). The Mn²⁺ ligand binding sites on both proteins are K106 and R102.

SaADF2 had greater actin-binding affinity than OsADF2 in vitro

Immunoblotting results showed that the recombinant SaADF2 and OsADF2 proteins (17 kDa) were mostly expressed in the membrane fraction of the prokaryotic system (Figure S2; Appendix S1). F-actin binding and bundling assays showed that both SaADF2 and OsADF2 co-sediment with actin at low-speed centrifugation in a concentration-dependent manner (Figure 2a–d). Both proteins bound to G-actin monomers and F-actin bundles, but their actin-binding efficiency differed with protein concentration. In a dose-dependent assay using 0.1 to 2 μM protein concentrations, actin (3 μM) co-sedimented with SaADF2 at 0.1 μM (Figure 2b,d), whereas OsADF2 started to co-sediment only at 0.5 μM, and at low concentration a major portion of the protein remained in the supernatant fraction (Figure 2a,d). Both proteins at 2 μM co-sedimented with 20% of the input actin (Figure 2d). However, at 0.5 μM ADF2, the binding of SaADF2 was twofold higher than that of OsADF2. In the 0.1 to 0.3 μM concentration range, OsADF2 showed no binding, but SaADF2 showed binding proportionate to protein concentration (Figure 2d). On the other hand, the OsADF2/6a mutant protein, with serine-6 replaced by threonine, had an actin-binding efficiency equivalent to SaADF2; that is, the protein co-sedimented with actin at its lowest concentration (0.1 μM), and the amount of co-sedimented protein increased with increase in its concentration (Figure 2c,d).

SaADF2 depolymerized F-actin filaments at a wider pH range and more efficiently than OsADF2

F-actin binding as well as the depolymerizing activity of the purified recombinant ADF2 proteins was monitored by a fluorescence assay, incubating the proteins with 0.8 μM pyrene-labelled actin. Fluorescence quenching of undiluted F-actin suggested that both ADF2s bind to F-actin and thus promote actin depolymerization and enhance the actin turnover rate (Figure 2e–h). At pH 8.0, both ADF2s showed comparable F-actin depolymerization activity (Figure 2e). While SaADF2 showed a 20% decrease in fluorescence over a 10-min period, OsADF2 showed an 18% decrease (Figure 2e). However, at pH 6.0, OsADF2 lost its potency to depolymerize F-actin significantly and the decrease in fluorescence dropped to 9%, whereas SaADF2 maintained its depolymerization activity at 18% (Figure 2f). Interestingly, OsADF2/6a showed slow, early depolymerizing activity at pH 8.0, which increased to 12% by 11 min (Figure 2e), but at pH 6.0 it had the highest (23%) actin depolymerization activity (Figure 2f).
In the absence of any binding protein in vitro, 4- to 8-μm-long actin filaments were observed by total internal reflection fluorescence (TIRF) microscopy; these began dissociating through disassembly (severing/depolymerization) in the presence of an excess (8 μM) of the ADF proteins (Figure 2g–i). SaADF2 predominantly severed the filaments by depolymerizing from the ends, whereas OsADF2 mostly disassembled them into shorter fragments (Figure 2g). Steady-state actin single filaments showed more severing and depolymerization by SaADF2 and OsADF2/6a as compared to slow and moderate severing by OsADF2 (Figure 2h,i). Higher depolymerization by SaADF2 may have led to more enrichment of the G-actin pool than by OsADF2.

OsCDPK6 preferentially phosphorylated OsADF2 at serine-6

Immunoblotting with an antiphosphoserine antibody showed an apparent difference in the degree of phosphorylation of the ADF2s (Figure 2j,k). OsADF2 produced at least a two times higher phosphorylation signal compared to SaADF2 and OsADF2/6a (Figure 2k). Although threonine is also phosphorylated by the promiscuous CDPK, its preference for serine-6 was evident, with no substantial change in phosphorylation of the OsADF2 mutated at the two other serine sites, 132 and 133 (Figure 2j,k).

SaADF2 and OsADF2 overexpression conferred contrasting drought tolerance phenotypes

Thirty-two and 15 primary transgenic rice 'Nipponbare' lines overexpressing SaADF2 and OsADF2, respectively, under the control of the constitutive cauliflower mosaic virus 35S promoter (Figure 3a) (hereinafter referred to as SaADF2-OE and OsADF2-OE) were generated. T1 lines showing single-copy Mendelian inheritance and expression of the ADF genes were grown for attainment of homozygosity and further analysis. At 7–14 days after drought stress (DAS) at the prebooting stage (10% field capacity), the SaADF2-OE showed significantly greater tolerance, with less wilting, withering, and reduction in shoot and root growth than the OsADF2-OE and WT plants (Figure 3b,c,g–i). Upon resuming irrigation, SaADF2-OE recovered to normal growth quickly as compared to the WT (Figure S3; Appendix S1). Under the well-watered control condition, no significant difference was observed in growth and development between WT and SaADF2-OE (Figure S3).

SaADF2-OE held high relative water content and stomatal conductance under drought stress

Relative water content (RWC), the physiological ability of a cell to maintain water status through osmotic adjustment, was 90% for both WT and transgenic plants. However, the RWC of WT and OsADF2-OE plants dropped to 15% and 43%–45%, respectively, at 7 DAS. On the other hand, SaADF2-OE maintained 65%–70% RWC at 7 DAS (Figure 3d). SaADF2-OE also maintained higher membrane stability (Figure 3j). Scanning electron micrographs revealed that SaADF2-OE maintained stomatal aperture opening comparable to the control condition, but OsADF2-OE showed reduced aperture opening and WT showed complete closure of the stomatal aperture, with visibly shrunken guard cells, under drought (Figure 4a). SaADF2-OE efficiently maintained the cellular osmotic potential under water deficit, with less reduction in stomatal conductance than OsADF2-OE and WT (Figure 4b).
(Residual figure captions. Figure 2: approximate degree of binding measured by densitometric scanning of the gel (d); depolymerization activity of SaADF2, OsADF2 and OsADF2/6a using prepolymerized F-actin at pH 8.0 (e) and pH 6.0 (f); severing and depolymerization of steady-state actin single filaments incubated with 8 μM SaADF2, OsADF2 and OsADF2/6a (g–i); analysis of severing activities (h) and average time to half-maximal severing (i) by the proteins of interest and in the absence of protein (F-actin) at the end time point of the assay (n = 5); inhibitory phosphorylation of SaADF2, OsADF2, OsADF2/6a and another phosphosensitive mutant, OsADF2/132/133a, by CDPK shown by immunoprecipitation (j) followed by densitometric quantification (k); data are means with standard error, n = 3. Figure 3: panels (e–i) show proline content (f), plant height (g), fresh biomass (h) and dry weight (i); blue and red bars represent control (unstressed) and stressed conditions, respectively; expression of SaADF2 and OsADF2 under control (0D) and at 3 days (3D) and 7 days (7D) after drought stress (j); the faint nonspecific signals for SaADF2 in WT are from the endogenous OsADF2; data are means with standard error (n = 3); bars topped with different letters differ significantly (P < 0.05).)

SaADF2-OE lines maintained intact chloroplast ultrastructure, higher chlorophyll content and photosynthesis under drought stress

Control plants showed well-defined, normal kidney-shaped chloroplasts with clearly distinct envelope membranes and a well-developed internal membrane system with evenly distributed, well-packed grana and long stromal thylakoids (Figure 5a). Drought caused disintegration of the chloroplast fine structures, including outer membrane and thylakoid disorganization with disoriented grana stacking; many plastoglobuli of high electron density appeared in both WT and OsADF2-OE (Figure 5a). In contrast, the typical fine structure of chloroplasts was conserved in SaADF2-OE at 7 DAS. Chloroplasts were disarranged in the mesophyll cells of WT plants as compared to the regular arrangement in SaADF2-OE (Figure S3; Appendix S1). SaADF2-OE maintained higher chlorophyll concentration than OsADF2-OE and WT at 7 DAS (Figure 5b). SaADF2-OE showed better photosynthetic performance, reflected by less damage to photosystem II with higher Fv/Fm than OsADF2-OE and WT under drought stress (Figure 5c), which indicated that SaADF2-OE plants were less sensitive to drought-induced photo-inhibition.

SaADF2-OE produced less reactive oxygen species (ROS) than OsADF2-OE and WT plants under drought stress

Both SaADF2-OE and OsADF2-OE accumulated less ROS than WT under drought, as shown by less coloration of the leaves in the DAB and NBT assays. The DAB assay showed accumulation of H₂O₂ only in the mid-vein region of SaADF2-OE (Figure 5d). On the contrary, OsADF2-OE demonstrated higher accumulation of ROS, with more coloration in the vein and H₂O₂ accumulation all over the leaf strip (Figure 5d). SaADF2-OE lines were particularly superior, with very little of the characteristic dark blue coloration of the leaf strips in the NBT assay (Figure 5e). O₂⁻ accumulation in the leaves of SaADF2-OE under drought was comparable to the control condition.

SaADF2-OE were agronomically superior under drought stress

Drought-stressed SaADF2-OE had markedly higher grain yield and yield-attributing traits compared to OsADF2-OE and WT (Figure 6). Drought-stressed OsADF2-OE and WT showed ~45% and ~41% reductions in tiller number, compared to 28% in SaADF2-OE (Figure 6a). There was a ~70% and ~50% decrease in panicle number in WT and OsADF2-OE, compared to 30% in SaADF2-OE (Figure 6b).
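For reference, the two physiological indices reported in the preceding sections, relative water content and the maximum quantum efficiency of photosystem II (Fv/Fm), are conventionally computed from leaf weights and dark-adapted chlorophyll fluorescence. The paper does not spell the formulas out; these are the standard textbook definitions, where FW, TW and DW are fresh, turgid and dry leaf weights, and F0 and Fm are the minimal and maximal fluorescence of a dark-adapted leaf:

```latex
\mathrm{RWC}\,(\%) = \frac{FW - DW}{TW - DW} \times 100,
\qquad
\frac{F_v}{F_m} = \frac{F_m - F_0}{F_m}
```

So, for example, a drop of RWC to 15% in WT at 7 DAS means the leaves retained only 15% of the water they could hold at full turgor, while an Fv/Fm near the unstressed optimum (~0.8 in healthy leaves) indicates little photo-inhibition.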
Panicles had fewer spikelets in OsADF2-OE than in WT and SaADF2-OE under the unstressed condition, and drought caused reductions of 53%, 26% and 18% in WT, OsADF2-OE and SaADF2-OE, respectively (Figure 6c). A striking difference was noted in the fertile seed count, which declined by about 88% in WT and 66% in OsADF2-OE, but only 11% in SaADF2-OE (Figure 6d,e). This was reflected in grain yield per panicle, where drought caused 76% and 70% yield reductions in WT and OsADF2-OE, compared to only 31% in SaADF2-OE (Figure 6f). Interestingly, the ADF2-OE showed some superiority over the WT plants for reproductive traits, such as number of tillers and panicles per plant and spikelet number (Figure 6a–c), as well as vegetative growth traits, such as fresh and dry biomass (Figure 3h,i), under control conditions. This suggested that ADF2 overexpression conferred an overall growth benefit to the transgenic plants.

SaADF2 overexpression conferred salt tolerance in rice plants

SaADF2-OE showed enhanced salt tolerance, as revealed by less chlorophyll bleaching of the leaf tissues in the cut-leaf float assay (Figure 7a) as well as of seedlings in hydroponics (150 mM NaCl) (Figure 7b). As under drought stress, SaADF2-OE maintained superior physiological traits over OsADF2-OE and WT under salinity (Figure 7c–j). The SaADF2-OE also displayed unabated photosystem II functioning, reflected by higher Fv/Fm compared to WT (Figure 7f). SaADF2 transcript accumulation was maintained in the leaf and root tissues of SaADF2-OE under salt stress at all time points, except for a slight reduction at 24 h after stress (Figure 7k).

SaADF2 expression differed from OsADF2 in actin filament organization under drought stress

Leaf mesophyll protoplasts were examined from 10-day-old control and mannitol-treated ADF2-OE seedlings (Figure 8a). Thick and long actin filaments (AFs) were arranged longitudinally along the length of the cortical cells of the untreated control seedlings (Figure 8b). However, AF organization in WT cells was significantly affected, with no finer AFs, and the length of the thicker filaments was greatly reduced under osmotic stress. The small AFs, instead of adhering to organelles or being dispersed in the cytosol, were shifted to the periphery closer to the plasma membrane. In OsADF2-OE cells, although the mesh of cytosolic AFs was not completely lost, the integrity of the fine filament structures was lost (Figure 8b). On the other hand, the number and length of filaments were considerably higher and the fine filaments remained conserved in SaADF2-OE cells under both control and stress conditions. The basket-like mesh in the cytosol also remained preserved, with no apparent shift of small AFs towards the plasma membrane (Figure 8b).

SaADF2-OE and OsADF2-OE had differential global gene expression patterns

RNA-Seq analysis of the unstressed control and drought stress-induced (3 and 7 DAS) leaf transcriptomes of SaADF2-OE and OsADF2-OE vis-à-vis WT identified 1871 significantly (log2FC ≥ 2 for up-regulated and ≤ −2 for down-regulated; P < 0.05) differentially expressed genes (DEGs). Under drought stress, 255 genes (65 at 3 DAS and 190 at 7 DAS) were up-regulated in SaADF2-OE over OsADF2-OE, whereas only 30 genes were up-regulated under control conditions. (Residual caption, Figure 6: postharvest yield parameters (a–f) and other agronomic traits (g–i) of 1-week drought-stressed overexpressers of SaADF2 and OsADF2 vis-à-vis WT; data are means with standard error (n = 3); blue and red bars represent control (unstressed) and stressed conditions, respectively; bars topped with different letters differ significantly (P < 0.05).)
Altogether, 5566 genes were significantly down-regulated (Data S1). DEGs in OsADF2-OE were significantly enriched in anion transport, photosynthesis and chlorophyll metabolism when compared to WT (Data S2). Photosynthesis was the most significantly enriched biological process, and genes predominantly localized in the plastid and photosynthetic membranes, specifically in PSII, represented the cellular component. Nucleotide binding was the most enriched molecular pathway. Photosynthesis was also the most enriched biological process in SaADF2-OE, but genes involved in the light-harvesting process and the generation of precursor metabolites and energy were significantly enriched compared to OsADF2-OE. The molecular components were concentrated in nucleotide/nucleoside binding (specifically adenosyl), phosphorylation (phosphatase and kinase) and oxidoreductase activity. Cellular components were mostly membrane localized. SaADF2-OE and OsADF2-OE differed significantly (P < 0.05) for genes down-regulated in response to abiotic stimulus (GO:0009628) and response to water stress (GO:0009415). Genes in integral components of membranes were significantly down-regulated in SaADF2-OE. A significantly enriched term between SaADF2-OE and OsADF2-OE was cation transport (GO:0006812), along with others related to the stress response (Data S2). KEGG analysis also showed high enrichment of photosynthesis and general metabolism-related pathways in SaADF2-OE and OsADF2-OE (Data S3). Ribosomal protein biosynthesis was increased as a common abiotic stress response, to accommodate the translation of stress-responsive proteins. Arginine and proline metabolism (path:osa00330) was also up-regulated as a general stress response in WT/SaADF2-OE at 3 DAS. Genes involved in carbohydrate biosynthetic pathways and inositol phosphate metabolism were overrepresented.

ADF2-related transcripts showed differential expression in transgenic lines

Expression analysis of DEGs, such as Ca²⁺-dependent kinases (CDPK/CaM kinases), Rho-GTPases, phosphoinositide (PI) signalling-regulated transcripts (PI4,5-K4, I-1,4,5-PP and PLD) and protein phosphatases (Figure S4; Appendix S1), which were enriched in the drought-induced transcriptome and likely interact with ADF2, showed down-regulation of most CDPK/CaM kinases in SaADF2-OE but fivefold to sixfold up-regulation in WT and OsADF2-OE under drought. The CaMK isoform AK1 (Os02g56310) was up-regulated in SaADF2-OE and OsADF2-OE, but its expression was lower than in WT. WT and OsADF2-OE accumulated CAMK28 eightfold and 3.5-fold, respectively, at 7 DAS, while it was down-regulated in SaADF2-OE at 7 DAS, although with a 3.5-fold accumulation over WT at 3 DAS. Rho-GTPases, known to positively regulate CDPK activation, were down-regulated in SaADF2-OE. Phosphatases also showed down-regulation in SaADF2-OE under drought. Of the PI signalling-regulated transcripts, PLD showed the same trend as CaMK AK1. ADF overexpression down-regulated the expression of PI4,5-K4 and I-1,4,5-PP, with no significant difference among the genotypes. RT-PCR of 12 putative interacting partners of OsADF2 under drought stress in six independent SaADF2-OE lines showed temporal variation in their expression profiles. SaADF2 transcript accumulation in SaADF2-OE increased under drought stress, especially at 7 DAS (Figure S5; Appendix S1).
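As an illustration of the DEG selection criteria described above (|log2FC| ≥ 2 and P < 0.05), here is a minimal pandas sketch of the filtering step. The file name and column names (deg_results.csv, gene_id, log2FC, pvalue) are hypothetical placeholders for whatever a given RNA-Seq differential-expression pipeline actually emits, not the study's own files.

```python
import pandas as pd

# Hypothetical per-gene statistics table from an RNA-Seq pipeline;
# columns "gene_id", "log2FC" and "pvalue" are assumed placeholders.
df = pd.read_csv("deg_results.csv")

significant = df["pvalue"] < 0.05
up = df[significant & (df["log2FC"] >= 2)]     # up-regulated DEGs
down = df[significant & (df["log2FC"] <= -2)]  # down-regulated DEGs

print(f"{len(up)} up-regulated, {len(down)} down-regulated DEGs")
```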
Adenylyl cyclase-associated protein (ACP), the known ADF-interacting partner, demonstrated drought-induced up-regulation of up to 1.4-fold (D1) in SaADF2-OE relative to WT.

Discussion

The present study reports the biochemical and functional characterization of an actin-depolymerizing factor (SaADF2) from the halophyte Spartina alterniflora and its rice homolog (OsADF2), and shows that SaADF2 overexpression imparts higher drought (and salt) tolerance in rice (as well as in Arabidopsis thaliana; Figure S6, Appendix S1) in comparison with OsADF2. ADFs are present in multiple isoforms in higher plants (Maciver and Hussey, 2002). Of the 11 isoforms reported in rice, OsADF2 is expressed in both vegetative and reproductive tissues without significant change in its expression under abiotic stresses. In Physcomitrella patens, cell viability is compromised in knockdown mutants of a single intronless ADF isoform (Augustine et al., 2008), suggesting that ADF functionality is essential for plant cells. ADF is reportedly essential for cytoskeleton rearrangement in response to extra- and/or intracellular stress (Ali and Komatsu, 2006; Augustine et al., 2008). The superior in vitro activity of SaADF2 could be due to its longer and more exposed F-loop compared with OsADF2 (Figure 1c–e), because the binding ability of ADF to F-actin, and subsequent filament severing or disassembly, is attributed to the charged residues at the exposed tip of its F-loop (Figure 1d–e) in coordination with the C-terminal α-helix and tail (Lappalainen et al., 1997; Ono et al., 2000; Pope et al., 2000). The F-actin-binding motif of ADF is highly divergent, and only a single exposed charged residue may be sufficient to effect binding (Wong et al., 2011), but the degree of binding may differ depending on other structural factors. SaADF2 and OsADF2, with subtle tertiary structure differences (Figure 1d), have three amino acid differences in the mostly conserved G-actin-binding motifs, which comprise the N-terminus, the long α3-helix, and the turn connecting β6 and α4/5 (Wong et al., 2011). This could result in their differential actin-binding affinity (Figure 2d). Plant cells normally exhibit a slightly alkaline pH, and most ADFs are more active under such conditions (Gungabissoon et al., 1998). SaADF2's ability to depolymerize actin over a broader pH range (pH 6.0–8.0) and more efficiently than OsADF2, along with its high actin affinity (Figure 2d–f), could prove useful in keeping plant growth and development unabated under drought (or salt) stress, which frequently changes the cellular ionic concentration. Serine-6 in OsADF2 and its homologs from other drought- and/or salt-tolerant/sensitive rice varieties and a halophytic wild rice (Figure S7, Appendix S1), which is substituted by threonine in SaADF2, participates in an inhibitory regulatory phosphorylation by the CDPK family in plants and protists (Allwood et al., 2001; Smertenko et al., 1998). Various isoforms of CDPKs inactivate ADF by phosphorylating it and inhibiting its actin binding, and consequently interfere with actin dynamics. Mutations in serine-6 lead to the loss and/or alteration of its binding constant with CDPK, which could compromise growth and development, as revealed by the abnormal polar tip growth of phosphomimetic and unphosphorylatable mutant protonema (Augustine et al., 2008). ADF interacts with CDPK in different organisms (Allwood et al., 2001). The lower fluorescence intensity of the SaADF2–OsCDPK6 interaction indicated a physiologically more active SaADF2 protein due to a partial lift of the negative regulation by OsCDPK6 (Figure 9a,b).
The lower in vitro phosphorylation efficiency of SaADF2 and of the OsADF2 serine-6 mutant (OsADF2/6a), compared with OsADF2, by OsCDPK6 in the presence of Ca²⁺ further confirms this observation (Figure 2j,k). Thus, the down-regulation of OsCAMKs (Figure S4) may be functionally relevant for sustained actin dynamics in SaADF2-OE. OsWD40 is a WD-40 (or G-beta) repeat domain-containing, 66-kDa, stress-regulated protein. WD domains, when present in tandem, form a propeller-shaped scaffold useful for multiprotein interaction. WD40 has important roles in histone recognition, chromatin function, RNA processing and transcriptional regulation (Suganuma et al., 2008). WD repeat domain-containing proteins, such as AIP1, disassemble the actin filaments decorated with ADF and shorten the ADF-severed actin filaments, thus maintaining a high concentration of cellular actin monomers (Nomura et al., 2016). The adenylyl cyclase-associated protein (OsACP) is highly implicated in the positive regulation of the actin turnover process (Ono, 2013) as an actin-sequestering protein. ACP interacts with ADF (Zhang et al., 2013) for G-actin binding, and promotes nucleotide exchange and severing of ADF-bound actin filaments (Ono, 2013). Both interactions suggested a more active SaADF2 compared to OsADF2 in the BiFC assay (Figure 9). Amino acid substitutions could alter the binding of co-regulatory proteins, thereby changing the turnover of the actin–ADF complex in vivo. ADF may also compromise its depolymerizing activity by binding to phosphoinositide (PIP2) and inhibiting phospholipase C activity (Gungabissoon et al., 1998; Smertenko et al., 1998), thus removing itself from the cytoplasm. PIP/PIP2 binding ideally localizes plant ADF near the plasma membrane, where it may participate in stress signalling (Liu et al., 2013; Ouellet et al., 2001). Although OsADF2 was not significantly induced under stress, OsADF2-OE in the present study showed higher drought tolerance than WT. The superiority of SaADF2-OE over OsADF2-OE for the drought (and salt) stress tolerance phenotype supports the idea that the difference in in vitro activities between the highly identical SaADF2 and OsADF2 could relate to their differential response in vivo. Studies showing enhanced salt and/or drought tolerance of transgenics overexpressing Spartina alterniflora genes with subtle sequence differences from rice provide further credence (Baisakh et al., 2012; Joshi et al., 2013, 2014). Environmental perturbations restrict photosystem II (PSII) activity by inhibiting its repair, which leads to photo-inhibition (Jin et al., 2011). The better vegetative and prereproductive growth of SaADF2-OE under drought could be attributed to their improved photosynthetic efficiency due to less sensitivity to photo-inhibition, shown by a higher Fv/Fm than OsADF2-OE and WT. The quantum yield of PSII activity is directly related to chlorophyll 'a' (Checker et al., 2012). The higher chlorophyll content observed in SaADF2-OE suggested more efficient internal carbon adjustment in comparison with WT and OsADF2-OE under stress. Stress-induced excessive chloroplastidic reactive oxygen species (ROS), generated as a result of an imbalance between electron transport and CO₂ fixation, reduce photosynthetic yield by dissociating or bleaching pigment centres. Genes coding for ROS-scavenging enzymes were up-regulated in SaADF2-OE and OsADF2-OE when compared to WT, including mitochondrial superoxide dismutase, glutaredoxin, glutathione S-transferase, peroxiredoxin, aldehyde dehydrogenase and ascorbate peroxidase (Data S1).
Hence, the low accumulation of O₂⁻ and H₂O₂ in SaADF2-OE may have protected the plants from membrane and PSII damage and photo-inhibition under drought (Figure 5d,e). Further, the maintenance of chloroplast integrity, with intact grana and organized thylakoids, in SaADF2-OE with higher Fv/Fm under drought possibly contributed to their higher grain and biomass yield than OsADF2-OE and WT. The higher RWC of SaADF2-OE than OsADF2-OE and WT under drought indicated greater tissue tolerance of SaADF2-OE, likely through superior osmotic adjustment. Transcriptome analysis (Data S1) showed up-regulation of trehalose synthase, proline oxidase (LOC_Os10g40360; proline dehydrogenase) and inositol synthase in SaADF2-OE as compared to WT, which may be related to the protected osmotic status and conservation of relative water content of the transgenics. Additionally, group 1 and 3 LEA (late embryogenesis abundant) proteins and dehydrins, which are known to impart enhanced desiccation tolerance, were up-regulated in SaADF2-OE (as well as in OsADF2-OE) compared to WT. Plants lose control over stomatal conductance to maintain the balance of water and gas exchange under drought, and switch to a flight mechanism by closing stomata to reduce water loss (Sikuku et al., 2010). The positive correlation between osmotic stress regulation of actin organization and K⁺ channel activity in guard cells (Luan, 2002) could explain the higher stomatal conductance of SaADF2-OE plants under drought stress, leading to more efficient maintenance of CO₂ exchange capacity and cellular osmotic potential than in OsADF2-OE and WT plants. The results indicated that drought did not affect stomatal conductance much in SaADF2-OE plants, possibly because of their superior osmotic adjustment, achieved more by osmolyte/osmoprotectant accumulation and less by stomatal closure. ADF increases AF turnover through the combination of depolymerization and severing. The average length of an AF is a function of the ADF and actin monomer concentrations, the phosphorylation status of the subunits, the availability of other ABPs (CAP or AIP) and the average time a subunit resides inside the AF. In a resting cell, the fluctuation of AF length depends on filament severing and is ~20% of the average filament length (Roland et al., 2008). The integrity of the structural components, including the cytoskeleton with protected actin fibres and a preserved actin mesh, in SaADF2-OE could be due to the maintenance of high water potential of the plants under water deficit. The induction of ADF expression by salt and cold stress, besides drought, through an increased rate of actin turnover suggests a role in osmoregulation (Ali and Komatsu, 2006; Baisakh et al., 2008; Ouellet et al., 2001). The relative superiority of SaADF2-OE over OsADF2-OE and WT, with better root and shoot growth under salinity, is the manifestation of their better physiological responses: high RWC, membrane stability index and high Fv/Fm (Baisakh et al., 2012). Rapid repolymerization of AFs in the cortex and nuclear envelope was recorded in cold-treated tobacco cells (Pokorna et al., 2004). The significant differential expression of downstream stress-related genes in SaADF2-OE plants provides a strong indication of its role as a transactivator, in addition to modulating cytoskeleton architecture via reorganization of actin dynamics with interacting protein partners, to provide the drought tolerance phenotype.
Detailed biochemical and functional investigation of different ADF isoforms will lead to identification of undefined molecular pathways related to cytoskeleton modulation and their precise role in abiotic stress responses (Nan et al., 2017). Our data showed that ADF overexpression did not compromise the agronomic yield of ADF2-OE lines (Figure 6a-f). On the other hand, ADF overexpression had a positive impact on important agricultural traits of the transgenics at both vegetative and reproductive stages under control conditions. This could be attributed to the enhanced actin dynamics in the transgenics that promote cell division, expansion and polar growth. Also, high enrichment of genes involved in photosynthesis and general metabolism-related pathways in ADF2-OE (Data S3) might have contributed to the vigour of the transgenics. Identification of the complex, differential interactome regulating the stress-modulated cytoskeleton driven by ADF isoforms will lead us to key genetic conduits that could be potential targets for genome engineering to improve abiotic stress resistance in crops. Sequence analysis and subcellular localization of SaADF2 An actin-depolymerizing factor (SaADF2) from the salt-induced transcriptome of Spartina alterniflora (Bedre et al., 2016) was queried against the NCBI and UniProtKB nonredundant databases. SaADF2 and orthologs from rice and Arabidopsis were aligned, and a phylogenetic tree was constructed using CLC workbench v7.0. Homology-based threading was performed in the I-TASSER stand-alone server (Yang et al., 2015) or LOMETS (Wu and Zhang, 2007), and the predicted three-dimensional structures were analysed and aligned using UCSF Chimera (Pettersen et al., 2004). SaADF2 was cloned in frame with the green fluorescent protein gene (gfp) under the 35S promoter at the NcoI and SpeI sites of pCAMBIA1302, and the resulting P35S::SaADF2:gfp fusion construct was bombarded into onion epidermal cells to visualize subcellular localization as described by Baisakh et al. (2012). Expression and purification of recombinant ADF2 proteins Full-length cDNA of SaADF2, OsADF2 (LOC_Os03g56790) and OsCDPK6 (LOC_Os02g58520.1) was cloned in pET200 carrying an N-terminal His-tag to generate pET200-SaADF2/OsADF2 using the standard Gateway technology (Invitrogen, Carlsbad, CA). A point mutant, 6a, was generated at serine-6, the major phosphorylation site in OsADF2, by substituting serine with threonine to mimic SaADF2, using the In-Fusion cloning kit (Clontech, Palo Alto, CA) in pET200-SaADF2/OsADF2 according to the manufacturer's instructions. The recombinant proteins were expressed in Escherichia coli BL21 (DE3) and purified following a standard protocol (Method S1). The affinity tag was removed from the recombinant proteins with thrombin, restriction grade (EMD Millipore, Chicago, IL), for all downstream biochemical analyses except phosphorylation. Actin polymerization and co-sedimentation assay Human platelet G-actin (Cytoskeleton Inc., Denver, CO) was polymerized at RT following the manufacturer's protocol. The F-actin/actin bundles were separated from the G-actin by centrifugation (40 000 g) at 4°C for 3 h. The pellet was reconstituted in actin-binding buffer (ABB; 10 mM Tris, 1 mM ATP, 0.2 mM DTT, 1 mM EGTA, 0.1 mM CaCl₂ and 2 mM MgCl₂), and used immediately for binding assays. Low-speed co-sedimentation of SaADF2 and OsADF2 with actin was performed as described by Allwood et al. (2001) (Method S1).
F-actin depolymerization assay and visualization of actin disassembly and severing Four micromolar rabbit muscle 30% pyrene-labelled actin (Cytoskeleton, Inc.) was polymerized as described by Singh et al. (2011). F-actin depolymerization was induced with 0.8 µM SaADF2/OsADF2, either using prepolymerized actin or by adding the proteins to actively polymerizing G-actin following Carlier et al. (1997) (Method S1). Actin filament disassembly and severing by ADF proteins were observed by total internal reflection fluorescence (TIRF) microscopy, as described by Shekhar and Carlier (2017), with modifications (Method S1). In vitro phosphorylation In vitro phosphorylation was performed in the presence of 4 µM CDPK, 16 µM ADF and 4 µM ATP following Allwood et al. (2001). All proteins were dephosphorylated with calf intestinal phosphatase (CIP, New England Biolabs, Ipswich, MA) prior to phosphorylation. Following phosphorylation, His-tagged ADF was immunoprecipitated with anti-His antibody and protein A/G sepharose (Pierce, Waltham, MA), eluted at low pH, and dialysed (Methods S1). The protein fractions were immunoblotted with antiphosphoserine antibody (Abcam, Cambridge, MA). The membrane was CIP-treated prior to blocking with rabbit serum. The membrane was developed using an ECL chemiluminescence kit (Pierce) following the manufacturer's instructions. Construction of binary vector and development of rice transgenics First-strand cDNA was synthesized from total RNA isolated from S. alterniflora and rice as described in Baisakh et al. (2012). The complete coding sequence of SaADF2 and OsADF2 was amplified from the respective first-strand cDNA using forward and reverse primers containing BglII and BstEII restriction endonuclease recognition sites, respectively (Table S1). Construction of p35S::SaADF2/OsADF2 in the pCAMBIA1305.1 backbone and its subsequent mobilization into Agrobacterium tumefaciens LBA4404 were performed following Baisakh et al. (2012). Phenotypic, physiological, biochemical and microscopic analyses Control and stressed plants were observed for common stress-induced phenotypes, and physiological traits were measured following the procedures described earlier (Baisakh et al., 2012; Joshi et al., 2014). O₂⁻ and H₂O₂ were visualized in situ following Jabs et al. (1996) and Thordal-Christensen et al. (1997), respectively. Transmission electron microscopy (Marques et al., 2018) and scanning electron microscopy (Baisakh et al., 2012) were conducted to examine chloroplast ultrastructure and stomata, respectively (Method S1). All quantitative data were analysed statistically for variance (ANOVA), and treatment means were compared by Tukey's HSD in the XLSTAT add-in of Microsoft Excel. (Semi)-quantitative reverse transcription PCR RT-PCR was conducted using first-strand cDNA from leaf and root tissues of control and stressed SaADF2-OE and WT plants, as described earlier (Baisakh et al., 2012; Method S1), to study the expression of SaADF2 and of genes that were overrepresented in the comparative transcriptome analysis (SaADF2-OE vs OsADF2-OE and WT) and selected from the network analysis using STRING (www.stringdb.org) and RiceNet v2 (Lee et al., 2011), after excluding the hypothetical and ribosomal proteins, using gene-specific primers (Table S1). Leaf protoplast isolation and staining of actin filaments Green protoplasts were isolated from 10-d-old control and mannitol-stressed (equivalent to water stress of ψos = −0.3 MPa) seedlings of WT, SaADF2-OE, and OsADF2-OE following Zhang et al. (2011).
Twenty microlitres of intact protoplast suspension were permeabilized on poly-L-lysine-coated slides in a humid chamber with 3% Triton X-100 in PBS (pH 7.4). Actin staining with 5 IU of Alexa Fluor 488-Phalloidin (Cytoskeleton Inc.) was performed following Zhao et al. (2011), and optical sections in Z-stacks at 1 µm intervals were taken with an LSM700 (Zeiss; 40×/1.3 objective; Method S1). Bimolecular fluorescence (BiFC) complementation Bimolecular fluorescence complementation (BiFC) was performed using the method described by Pattanaik et al. (2011). Split-YFP vectors, pA7-SaADF2/NYFP and pA7-OsADF2/NYFP containing SaADF2 and OsADF2 fused with the N-terminal end of YFP, and pA7 vectors with the interacting protein(s) fused with the C-terminal end of YFP, were constructed (Method S1) and delivered by a particle gun into onion epidermal cells at 1100 psi following Baisakh et al. (2012). Fluorescence was observed after 22 h under blue light with an Olympus SZH10 GFP-stereomicroscope equipped with a Nikon DXM1200C camera and ACT-1 software. Genomewide transcriptome analysis of SaADF2-OE vis-à-vis OsADF2-OE and WT RNA-Seq libraries were prepared from unstressed (control) and 3 and 7 DAS WT, SaADF2-OE and OsADF2-OE plants in three biological replicates and sequenced on an Illumina HiSeq 2000 platform with 150-cycle paired-end reads as described in Bedre et al. (2015). A total of 706 904 570 sequence reads (70.69 Gbp) were generated, corresponding to 623.83× coverage of the transcriptome (raw reads deposited in the NCBI SRA database, Acc. No. PRJNA393177). Downstream sequence manipulations, such as filtering, mapping, assembly, differential gene expression, GO and KEGG analyses, were performed following Bedre et al. (2016).
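As a toy illustration of the differential-expression comparison described above, the snippet below computes a log2 fold change between overexpression and WT replicates; the gene counts, the pseudocount, and the simple mean-based approach are hypothetical stand-ins, not the actual pipeline of Bedre et al. (2016).

import numpy as np

def log2_fold_change(counts_oe, counts_wt, pseudo=1.0):
    # Mean normalized counts in the overexpression lines vs WT, with a
    # pseudocount to avoid division by zero; values are hypothetical.
    return np.log2((np.mean(counts_oe) + pseudo) / (np.mean(counts_wt) + pseudo))

# Three biological replicates each, as in the RNA-Seq design described above
saadf2_oe = np.array([250.0, 310.0, 280.0])
wt = np.array([90.0, 110.0, 100.0])
lfc = log2_fold_change(saadf2_oe, wt)  # ~1.5, i.e. up-regulated in SaADF2-OE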
9,514.4
2018-06-28T00:00:00.000
[ "Environmental Science", "Biology" ]
SNR Wall Effect Alleviation by Generalized Detector Employed in Cognitive Radio Networks The most commonly used spectrum sensing techniques in cognitive radio (CR) networks, such as the energy detector (ED), matched filter (MF), and others, suffer from the noise uncertainty and signal-to-noise ratio (SNR) wall phenomenon. These detectors cannot achieve the required signal detection performance regardless of the sensing time. In this paper, we explore a signal processing scheme, namely, the generalized detector (GD) constructed based on the generalized approach to signal processing (GASP) in noise, in spectrum sensing of a CR network based on an antenna array, with the purpose of alleviating the SNR wall problem and improving the signal detection robustness under low SNR. The simulation results confirm our theoretical findings and the effectiveness of GD implementation in CR networks based on antenna arrays. Introduction The main aim of the cognitive radio (CR) network is to improve the spectrum utilization efficiency by introducing an opportunistic use of frequency bands unemployed by the primary user (PU) (see Figure 1). Spectrum sensing is needed to define the frequency holes that could be allocated to the secondary user (SU). The spectrum sensors continuously search for available frequency holes and assign them to the SU without causing harmful interference to the PU. Fundamental limitations in practice are involved in the spectrum sensing process [1][2][3]. The sensitivity to noise power uncertainty, for example, variations in the noise variance as a function of real time, is one of the most common and serious problems among the well-known spectrum sensors such as the energy detector (ED), matched filter (MF), and even the cyclostationary detector under some conditions at low signal-to-noise ratio (SNR) [4]. The impact of noise power uncertainty is quantified by the SNR wall location, i.e., if the SNR value is less than the SNR wall, the PU signal detector will fail to achieve the desired performance and maintain robustness against noise power uncertainty independently of how long the sensing time is [3][4][5]. Both theoretical and experimental analyses confirmed the existence of the SNR wall phenomenon under noise power uncertainty conditions. This phenomenon negatively affects the receiver operating characteristic (ROC). Other uncertainties can also be considered as SNR wall generators, for example, the noise power estimation error, assumptions of white and stationary noise, fading processes, shadowing, non-ideal filters, non-precise analog-to-digital (A/D) converters, quantization noise, the aliasing effect caused by imperfect front-end filters, and interference between the PU and SU. An alternative presentation of the SNR wall is given by the number of samples N as a function of the SNR, the probability of false alarm P_FA, and the probability of miss P_miss, i.e., N = f(SNR, P_FA, P_miss). The PU signal detector should minimize the number of samples N required to achieve the desired detection performance. The lowest SNR satisfying the probability of false alarm P_FA and probability of miss P_miss constraints is called the detector sensitivity [3]. In general, the ideal ED does not have an SNR wall, but owing to the noise power uncertainty the ED suffers from the SNR wall phenomenon, making the ED non-robust under low SNR [6,7]. In many published papers, the ED spectrum sensing performance is investigated under noise uncertainty conditions.
Different solutions have been presented in the form of a dynamic detection threshold [8], a log-normal approximation of the noise uncertainty [9], lowering the SNR wall using cross-correlation [10], improving the noise power estimation using the maximum likelihood (ML) estimator [6], SNR estimation based on the pseudo bit error rate (BER) for the modified ED [11], and the algebraic spike detection method introduced in [12,13]. In fact, the best non-coherent detector is as non-robust as the ED under noise power uncertainty. In the coherent detector case, the SNR wall is pushed back only to a limited value and for a large channel coherence time K_c → ∞. In the MF case, the SNR wall location is proportional to 1/K_c [3], and in the case of the feature detector, the SNR wall value is less in comparison with the ED one and scales only as 1/K_c with the relevant channel coherence time [3]. An interesting new four-level hypothesis blind detector for spectrum sensing in CR systems is presented in [14]. The detector proposed in [14] reduces the negative effects on CR system performance that form under in-phase and quadrature-phase (I/Q) imbalance, based on the orthogonal frequency division multiplexing (OFDM) multiple access scheme, and presents a promising solution for any noise power uncertainty or SNR wall problem that could be caused by this I/Q imbalance. Cooperative spectrum sensing, in which multiple sensors are involved, is an effective approach to improve the spectrum sensing performance under several problems such as noise power uncertainty, multipath fading, shadowing, and receiver uncertainty issues. Cooperative spectrum sensing can also address the critical energy efficiency issue, as shown in [15], where energy-efficient cooperative spectrum sensing is proposed and the optimal scheduling of active time for each spectrum sensor helps to extend the network lifetime. Selective grouping based on cooperative sensing is discussed in [16], where during the sensing time each sensor group senses different radio channels while sensors in the same group perform joint detection on the targeted channel. This process ensures more robust and efficient sensing performance compared with the individual spectrum sensor case under noise power uncertainty. To mitigate the negative effects of noise power uncertainty at low SNR, an implementation of the generalized detector (GD), constructed based on the generalized approach to signal processing (GASP) in noise, is proposed for spectrum sensing in CR networks based on antenna arrays. The GD represents a combination of the correlation detector, which is optimal in the Neyman-Pearson (NP) criterion sense when there is a priori information about the PU signal parameters, and the ED, which is optimal in the NP criterion sense if there is no a priori information about the PU signal parameters, which are random [17][18][19]. The GD likelihood ratio test, based on which we can make a decision about the PU signal presence or absence in the process incoming at the SU input, provides a definition of the jointly sufficient statistics of the mean and variance of the likelihood ratio and does not require any information about the PU signal and its parameters [17], ([18], Chapter 3). As was discussed in detail in ([18], Chapter 7, pp.
685-692), the main function of the GD energy detector (GD ED) is to detect the PU signal, and the main function of the GD correlation detector is to define the detected PU signal parameters and make a decision: the detected signal is the expected PU signal with the required parameters or not. Note that the conventional correlation detector makes a decision about the PU signal presence or absence in the incoming process based on a definition of the mean only of the process incoming at the SU input. The conventional ED defines the decision statistics with respect to PU signal presence or absence at the SU input based on determination of the variance only of the process incoming at the SU input. Definition of the jointly sufficient statistics of the mean and variance based on the incoming process at the SU input allows us to make a more accurate decision in favour of PU signal presence or absence and to obtain more information about the PU signal parameters under GD employment in CR networks in comparison with the conventional MF, ED, correlation receiver and so on. A key difference between the GD ED and the conventional ED is the presence of an additional linear system (the additional bandpass filter at the GD input) considered as the secondary data or reference noise source. The PU signal bandwidth is mismatched with the additional linear system bandwidth. The PU signal bandwidth is matched only with the other linear system bandwidth at the GD front-end. Thus, the GD has two input linear systems, namely, the preliminary filter (PF) and the additional filter (AF). The latter is considered as the reference noise source ([18], Chapter 3), [19]. The GD PF central frequency is detuned relative to the GD AF central frequency to ensure, firstly, the PU signal passing only through the GD PF and, secondly, the independence and uncorrelatedness between the stochastic processes at the GD PF and AF outputs. Thus, it is possible to obtain the PU signal plus noise at the GD PF output in the case of "a yes" PU signal at the GD input and only the noise in the opposite case. Consequently, only the noise is obtained at the GD AF output in both cases of "a yes" and "a no" PU signal at the GD input, in other words, under the hypotheses H_1 and H_0. The case when there is a PU signal generated by another source with frequency content within the limits of the GD AF bandwidth, considered as additional interference, is discussed in [20]. GD employment in wireless communications [21,22], radar sensor systems [20,23], and CR networks for spectrum sensing [24] allows us to improve the signal detection performance of these systems in comparison with implementation of the widely used conventional detectors. This work differs from the previously published paper [24] by introducing a new advantage of GD employment in CR network systems based on antenna arrays, namely the alleviation of the SNR wall problem under noise power uncertainty. Additionally, the GD optimal detection threshold is defined based on the minimal probability of error criterion under noise power uncertainty at the low SNR condition. An intuitive approach to reduce the noise power uncertainty at run time by employing the GD in a CR network is to define the noise power at the GD AF output, i.e., another narrow band close to the PU signal frequency band, with the purpose of calibrating the noise power in the PU signal frequency band.
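To make the run-time calibration idea concrete, here is a minimal numerical sketch, under assumed Gaussian noise and hypothetical values, of estimating the noise power from the AF reference branch and using it to scale an energy-type detection threshold; this illustrates the principle only and is not the paper's NPE algorithm.

import numpy as np

rng = np.random.default_rng(2)
n = 5000
sigma_true = 1.3                            # unknown run-time noise level (hypothetical)
eta_af = sigma_true * rng.normal(size=n)    # reference noise at the GD AF output

sigma2_hat = np.var(eta_af)                 # run-time noise-power estimate from the AF branch
# Scale an energy-type threshold by the estimate instead of a fixed nominal
# noise power; sum(x^2)/sigma^2 is approximately chi-square with n degrees of
# freedom (mean n, variance 2n), so n + 2.33*sqrt(2n) targets roughly a 1%
# false alarm rate under a Gaussian approximation.
thr = sigma2_hat * (n + 2.33 * np.sqrt(2.0 * n))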
Even if we believe that the noise power formed at the GD PF and AF outputs is not the same, the noise calibration error can be much lower than the noise power uncertainty itself. Noise power calibration in real time improves the immunity against the SNR wall phenomenon [3]. In this paper, we investigate the GD noise power calibration effects on the SNR wall problem in coarse spectrum sensing for CR network systems based on antenna arrays, and we define the GD sample complexity under noise power uncertainty. The complementary receiver operating characteristic (ROC) and sample complexity of the ED, MF, and GD are compared under the same initial conditions for different uncertainty parameters. The real scenario simulation demonstrates that the GD is able to alleviate the SNR wall problem and achieve a low probability of error in comparison with the conventional ED. The remainder of this paper is organized as follows. Section 2 presents the system model and the GD test statistics. Section 3 delivers the GD signal detection performance under noise power uncertainty. The real scenario simulation results are discussed in Section 4. The concluding remarks are presented in Section 5. System Model The spectrum sensor has an antenna array with the number of elements equal to M, and each antenna array element receives N samples during the sensing time. The spectrum sensing problem can be modeled as the conventional binary hypothesis test H_0: z[k] = w[k] (PU signal absent) versus H_1: z[k] = h s[k] + w[k] (PU signal present), where the noise samples w[k] are assumed uncorrelated between each other. The same channel model is widely used in [25][26][27]. In general, the ED does not require channel state information (CSI) for spectrum sensing [28], and the GD shares this property with the ED because the ED is a constituent of the GD. It is well known that information about the CSI allows us to obtain better spectrum sensing performance in comparison with the unknown CSI case. Knowledge about CSI can be more useful and effective in the cooperative spectrum sensing case. Under low SNR and noise power uncertainty conditions, we can claim that we have imperfect CSI [29]. When noise power estimation is applied, we have partial knowledge about the CSI. In this paper, we assume that the coarse spectrum sensing is performed without knowledge about the CSI. Owing to its simplicity, the exponential matrix model is widely used to describe the spatial correlation between adjacent antenna array elements [30]. The components of the M × M antenna array element correlation matrix C can be presented in the form C_ij = ρ^|i−j|, where ρ is the coefficient of spatial correlation between adjacent antenna array elements (0 ≤ ρ ≤ 1, real values); a sketch of this model follows at the end of this passage. Applying the results presented in [30], the coefficient of spatial correlation ρ can be given as a function of the angular spread Λ, an important propagation parameter defining the distribution of multipath power of radio waves arriving at the receiver input from a number of azimuthal directions with respect to the horizon, the wavelength λ, and the distance d between two adjacent antenna array elements (the antenna array element spacing). The correlation matrix of antenna array elements C given by Equation (2) is a symmetric Toeplitz matrix [25]. We define the NM × 1 signal vector Z that collects all the observed signal samples during the sensing time; in the corresponding expression, 0_M denotes the M × M zero matrix. GD Statistics The GD has been constructed based on the generalized approach to signal processing (GASP) in noise discussed in detail in [17][18][19].
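The exponential correlation model mentioned above admits a one-line construction; the following sketch builds the symmetric Toeplitz matrix C with entries ρ^|i−j| (the value of ρ used here is hypothetical).

import numpy as np

def exp_correlation_matrix(m, rho):
    # Exponential (Toeplitz) spatial-correlation model: C[i, j] = rho ** |i - j|,
    # with ones on the diagonal; m antenna elements, 0 <= rho <= 1.
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

C = exp_correlation_matrix(4, 0.6)  # symmetric Toeplitz, as noted in the text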
The GD is considered as a linear combination of the correlation detector, which is optimal in the Neyman-Pearson criterion sense under detection of signals with a priori known parameters, and the ED, which is optimal in the Neyman-Pearson criterion sense under detection of signals with a priori unknown or random parameters. The main functioning principle of the GD is a complete matching between the model signal generated by the local oscillator in the GD and the information signal, in particular, the PU signal at the GD input, over the whole range of parameters. In this case, the noise component of the GD correlation detector, caused by interaction between the model signal generated by the local oscillator in the GD and the input noise, and the random component of the GD ED, caused by interaction between the incoming information signal (the PU signal) and the input noise, are cancelled in the statistical sense. This GD feature allows us to obtain better detection performance in comparison with other classical receivers or detectors. The specific feature of GASP is the introduction of an additional noise source that does not carry any information about the incoming signal, with the purpose of improving the qualitative signal detection performance. This additional noise can be considered as the reference noise without any information about the PU signal [17]. The jointly sufficient statistics of the mean and variance of the likelihood ratio are obtained in the case of GASP implementation, while the classical and modern signal processing theories can deliver only a sufficient statistic of the mean or the variance of the likelihood ratio. Thus, the implementation of GASP allows us to obtain more information about the input process or received information signal (the PU signal). Owing to this fact, implementation of receivers constructed on the GASP basis allows us to improve the spectrum sensing performance of CR wireless networks in comparison with employment of other conventional receivers at the sensing node. The GD flowchart is presented in Figure 2. As we can see from Figure 2, the GD consists of three channels: the GD correlation channel (the PF, multipliers 1 and 2, and the model signal generator MSG); the GD ED channel (the PF, AF, multipliers 3 and 4, and summator 1); and the GD compensation channel (the summators 2 and 3 and the accumulator Σ). To describe the GD flowchart, we consider discrete-time processes without loss of any generality. Evidently, the cancellation in the statistical sense between the GD correlation channel noise component and the GD ED channel random component is ensured when the model signal matches the incoming PU signal. For simplicity of analysis, we assume that the PF and AF have the same amplitude-frequency characteristics or impulse responses by shape. Moreover, the GD AF central frequency is detuned with respect to the GD PF central frequency by such a value that the information signal (the PU signal) cannot pass through the GD AF. Thus, the PU signal and noise can appear at the GD PF output while only noise appears at the GD AF output (see Figure 3). If the value of detuning between the GD AF and PF central frequencies is more than 4Δf_s or 5Δf_s, where Δf_s is the PU signal bandwidth, the processes at the GD AF and PF outputs can be considered as uncorrelated and independent processes; in practice, under this condition, the coefficient of correlation between the GD PF and AF output processes is not more than 0.05, which was confirmed experimentally [31,32].
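The cancellation property described above can be checked numerically. The sketch below uses one common textbook form of the GASP decision statistic, sum(2·x·s_model − x² + η²); the exact expressions in [17,18] may differ, so treat the statistic form, the signal, and the noise levels as assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def gd_statistic(x_pf, eta_af, s_model):
    # Correlation term minus PF-output energy plus AF reference-noise energy;
    # with a perfectly matched model signal the cross terms cancel exactly and
    # the xi^2 / eta^2 terms cancel in the mean.
    return np.sum(2.0 * x_pf * s_model - x_pf**2 + eta_af**2)

n, sigma = 1000, 1.0
s = 0.3 * np.sin(2 * np.pi * 0.05 * np.arange(n))  # hypothetical PU signal
xi = rng.normal(0.0, sigma, n)                     # noise at the PF output
eta = rng.normal(0.0, sigma, n)                    # independent reference noise at the AF output

t_h1 = gd_statistic(s + xi, eta, s)                # "a yes" PU signal at the input
t_h0 = gd_statistic(xi, eta, s)                    # "a no" PU signal at the input
# E[t_h1] equals the signal energy sum(s^2), while E[t_h0] is zero.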
In the present paper, we consider the spectrum sensing problem of a single radio channel where the GD AF bandwidth is always idle and cannot be used by the SU because it is out of the useful spectrum of the PU network. There is a need to note that, in the general case, the GD AF portion of the spectrum may be occupied by PU signals from other networks and may not be absolutely unoccupied. In this case, the PU signals from other networks can be considered as interference or interfering signals. Investigation and study of the GD in this case is discussed in [20]. The processes at the GD AF and PF outputs present the input stochastic samples from two independent frequency-time regions. If the noise w[k] at the GD PF and AF inputs is Gaussian, the noise at the GD PF and AF outputs is Gaussian, too, because the GD PF and AF are linear systems, and we believe that these linear systems do not change the statistical parameters of the input process. We use this assumption for simplicity of theoretical analysis. Thus, the GD AF can be considered as a reference noise source with a priori knowledge of "a no" signal (the reference noise sample). Detailed discussion of the GD AF and PF can be found in [18,19]. The noise at the GD PF and AF outputs can be presented as filtered versions of the input noise w[k]. Under the hypothesis H_1, the signal at the GD PF output can be defined as the sum of the PU signal and the observed noise at the GD PF output, and the GD test statistics of [17,18] is extended to the case of antenna array employment, when adoption of multiple antennas and antenna arrays is effective to mitigate the negative attenuation and fading effects [20,24]. The GD decision statistics compares the test statistic formed from the stochastic process vector at the GD PF output with the GD detection threshold THR_GD. We can rewrite Equation (11) in vector form in terms of the M × 1 vector of the random process at the GD PF output, the M × 1 vector of the process at the MSG output, the M × 1 vector of the random process at the AF output, and the GD detection threshold THR_GD. According to GASP, the GD structure shown in Figure 2, and the main GD functioning condition (8), the GD test statistics takes the corresponding forms under the hypotheses H_1 and H_0, respectively. We use the representation of Equation (22) in the following discussion, for example, in Section 3. Moment Generating Function of the GD Partial Test Statistics For the subsequent analysis, the moment generating function (MGF) of the GD partial test statistics T_GD^X(k) is required; it is presented in Equation (24), whose derivation is given in the appendix. GD Spectrum Sensing and Sample Complexity The spectrum sensor should minimize the number of samples N required to achieve the desired detection performance, where Q(·) is the Gaussian Q-function. In the noise power uncertainty case, the noise power or variance at the GD PF and AF outputs can be determined only within the limits of a definite range [3] (see Figure 4), where ρ is the uncertainty parameter and ε is the parameter used to define the amount of non-probabilistic uncertainty in the noise power. In the case of noise power uncertainty, Equations (29) and (30) can be rewritten in terms of the SNR at the GD input. Based on Equations (34) and (35), as SNR << 1, substituting Equation (37) into Equation (35), the GD sample complexity can be defined as in Equation (38); here we assume that the noise variance satisfies σ_η² ∈ [ρ⁻¹σ_w², ρσ_w²]. As follows from Equation (38), the sample complexity N_GD is inversely proportional to the squared SNR.
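The inverse-square scaling of N_GD, and the contrast with an ED that hits an SNR wall, can be illustrated with the standard radiometer-style approximation N ≈ 2[Q⁻¹(P_FA) − Q⁻¹(P_D)]²/(SNR − wall)²; the constants here are the textbook ones, not the paper's exact Equations (38) and (40), so this is a qualitative sketch only.

import numpy as np
from scipy.stats import norm

def sample_complexity(snr, pfa, pd, wall=0.0):
    # Radiometer-style approximation; norm.isf is the inverse Gaussian Q-function.
    # wall = 0 mimics the GD behaviour (finite N at any SNR, N ~ 1/SNR^2);
    # wall = rho - 1/rho mimics the ED under noise-power uncertainty.
    dq = norm.isf(pfa) - norm.isf(pd)
    gap = snr - wall
    return np.inf if gap <= 0 else 2.0 * dq**2 / gap**2

rho = 10 ** (0.1 / 10)          # uncertainty parameter for eps = 0.1 dB
wall_ed = rho - 1.0 / rho       # about -13.4 dB, close to the -13 dB quoted later
for snr_db in (-10.0, -13.0, -16.0):
    snr = 10 ** (snr_db / 10)
    print(snr_db, sample_complexity(snr, 0.1, 0.9),
          sample_complexity(snr, 0.1, 0.9, wall_ed))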
We can notice that there is no additional term involving the SNR in the denominator of Equation (38), which amounts to calibration of the noise power uncertainty, i.e., SNR wall alleviation. This is caused by the complete compensation, in the ideal case, between the noise component of the GD correlation channel and the random component of the GD ED channel. Following the above-mentioned procedure, we can obtain the sample complexity for the ED, which is determined in the form of Equation (40). As follows from Equation (40), we can define the ED SNR wall [3], and the relation between the probability of miss P_miss^ED and the probability of false alarm P_FA^ED can be defined accordingly. In the MF case, the effective SNR is provided by the coherent processing gain. Thus, the MF sample complexity is given in [3] in terms of K_c, the coherence time of the radio channel, i.e., the time interval within the limits of which the channel impulse response does not vary, and θ, the fraction of the total power that is allocated to the known pilot tone. This concept covers many practical wireless communication systems employing pilot tones and known training sequences for synchronization and timing acquisition. The MF SNR wall can be presented accordingly [3]. The ED has better sample complexity performance at high SNR in comparison with the MF because the ED uses the total average PU signal power for detection while the MF uses only a fraction of the total PU signal power. In the case of the MF possessing the pilot tone detection scheme, the SNR wall phenomenon is a consequence of the time-selectivity of the channel fading process, and the signal power is increased by the factor K_c owing to the increased coherent processing gain. This is the reason why the MF is sensitive to the channel coherence time K_c. Thus, the effective SNR of the coherently combined signal increases according to [1,2], where μ is the amplitude coefficient of proportionality. Under the condition given by Equation (46), the MGF of the GD partial decision statistics can be obtained based on Equation (48). In the case of noise power uncertainty and under the condition given by Equation (46), Equation (49) allows us to define the probability of false alarm P_FA^GD and the probability of miss P_miss^GD for the GD. In deriving Equation (51) we ignore the term containing (2μ² − 1)SNR², since CR networks operate at very low SNR values, i.e., SNR << 1. Defining the threshold THR_GD in terms of the probability of false alarm P_FA^GD based on Equation (50) and substituting it into Equation (51), we obtain Equation (52). At low SNR values, we can apply the approximation (μ² − 1)SNR² + 1 ≈ 1 and determine the GD sample complexity as in Equation (53). At μ = 1 we obtain the sample complexity N_GD given by Equation (38). The GD Optimal Threshold As a matter of fact, the ED and GD ignore the PU signal characteristics and rely only on the PU signal energy. Thus, the ED and GD optimal thresholds should be proportional to the nominal noise power at the SU input. In practice, the noise power is unknown and should be estimated by the GD noise power estimator (NPE in Figure 2). As a result, both the ED and GD detection thresholds can be defined based on total error rate minimization [34][35][36].
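As a numerical counterpart to the minimal-probability-of-error threshold, the sketch below minimizes P_er = P_FA + P_miss for a test statistic assumed Gaussian under both hypotheses; the means and standard deviations are hypothetical placeholders for the GD statistics, and the closed-form optimum of Equation (56) is replaced by a bounded scalar search.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def error_probability(thr, mu0, sd0, mu1, sd1):
    # Total error: decide H1 under H0 (false alarm) plus decide H0 under H1 (miss).
    p_fa = norm.sf(thr, loc=mu0, scale=sd0)
    p_miss = norm.cdf(thr, loc=mu1, scale=sd1)
    return p_fa + p_miss

res = minimize_scalar(error_probability, bounds=(-5.0, 10.0), method="bounded",
                      args=(0.0, 1.0, 1.5, 1.2))
thr_opt = res.x  # numerical stand-in for the closed-form optimal threshold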
In the case of the additive white Gaussian noise (AWGN) channel, the GD optimal threshold can be defined using the minimal probability of error criterion, in the form of Equation (56), where P_er^GD is the probability of error. As we can see from Equation (56), in the ideal case, i.e., when there is no noise power uncertainty (ρ = 1 and β = 1, or σ_ξ² = σ_η² = σ²), the optimal detection threshold takes a simplified form. In practice, in the GD case, there is no need to define or know a priori the value of ρ, since the noise power is estimated in real time using the NPE (see Figure 2). We consider the optimal threshold under noise power uncertainty for the theoretical analysis presented in this paper. Since the estimated noise power differs from the real noise power, noise power uncertainty is an unavoidable problem in practice [2,3,37]. As discussed in [6,38], in the ED case, the SNR wall phenomenon is caused by insufficient refinement of the noise power estimation as the observation time is increased, and the noise power estimation approach can avoid the SNR wall problem if the noise power estimate is consistent within the limits of the observation interval. Ultimately, we cannot rely on noise power estimation alone to solve the SNR wall problem. The results presented in [6] are applicable to the ED under the use of noise power estimation and can be applied to GD implementation in CR networks. Simulation and Discussion The sample complexity and the existence of the SNR wall for the ED and MF, and the non-existence of the SNR wall for the GD, are verified by a real scenario simulation performed using MATLAB in accordance with the parameters presented in the IEEE 802.22 standard, i.e., the standard for wireless regional area networks (WRAN) using white spaces in the TV broadcast bands, such as the digital video broadcasting-terrestrial (DVB-T) bands. The simulation parameters are presented in Table 1. According to the simulation results, the GD presents the best sample complexity performance in comparison with the ED and MF under conditions of noise power uncertainty. The GD overcomes the negative impact of the noise power uncertainty. Thus, the GD can detect the PU signal at any arbitrarily low SNR by increasing the number N of samples. In other words, there is no SNR wall. In the case of the ED, when there is no noise power uncertainty, i.e., ρ = 1, there is no SNR wall and the PU signal can be detected at any low SNR by increasing the sensing time or the number N of samples. If there is noise power uncertainty, there is an SNR wall for the ED, and its location depends on the value of ε and, consequently, the uncertainty parameter ρ. Small values of ε, the least uncertainty case, are preferred because in this case the SNR wall decreases. The GD sample complexity in the case of unequal noise power at the PF and AF outputs, σ_ξ ≠ σ_η, is presented in Figure 6 at ε = 1 dB, M = 2, and several values of μ and β. As we can see from Figure 6, the best GD sample complexity efficiency is obtained at μ = 1, i.e., when the model signal amplitude matches that of the PU signal. Additionally, we can see that there is no SNR wall in the GD case, but the GD sample complexity efficiency decreases at μ ≠ 1 and β ≠ 1. In this case, more samples are needed at the same SNR value to achieve the required probability of false alarm. The complementary receiver operating characteristic (ROC) curves, which are widely used in practice, for example in [39][40][41], for the ED and GD are presented in Figure 7 with and without noise power uncertainty at M = 6 and N = 20.
In general, for both detectors the noise power uncertainty leads to the complementary ROC curves shifting away from the (0,0) origin. As shown in Figure 7, the GD demonstrates better sensing performance in comparison with the ED, and the sensing performance degradation rate of the GD is lower under noise power uncertainty conditions. In the GD case, the required detection performance can still be reached under low SNR by increasing the number of samples (see Figure 6). In Figure 8, a comparison between the ED and GD performance in terms of the probability of error P_er as a function of the sample number N (analogous performance is discussed in [36]) is shown at ε = 0.1 dB, SNR = −10 dB, and SNR = −13 dB. The GD demonstrates better sensing performance in comparison with the ED. For example, at N = 100 the probability of error P_er is equal to 0.3126 in the GD case and 0.5346 in the ED case. At SNR = −13 dB, which corresponds to the ED SNR wall when ε = 0.1 dB, we can see that the probability of error P_er in the ED case fails to be robust and is distinctly degraded owing to the SNR wall phenomenon. In this case, increasing the number of samples N is not effective to improve the probability of error P_er performance for the ED. At the same time, the GD exhibits the same normal behaviour, meaning that the probability of error P_er performance for the GD improves with an increasing number of samples N. A comparison between the probability of error P_er for the ED and GD as a function of the normalized optimal detection threshold, where NM is the normalization factor, is presented in Figure 9. The probability of error P_er is evaluated for both detectors in two cases, with and without noise power uncertainty, at SNR = −5 dB, M = 2, N = 100, and ε = 0.1 and 1 dB. As shown in Figure 9, the GD can achieve a lower probability of error P_er in comparison with the ED in both cases. For example, if there is no noise power uncertainty, the minimal probability of error P_er is equal to 0.13 in the GD case and 0.25 in the ED case. If there is noise power uncertainty with ε = 0.1 dB, the lowest probability of error P_er for the GD is equal to 0.24 and 0.33 for the ED. In the general case, the noise power uncertainty negatively affects the ED and GD probability of error P_er. Thus, we can make the following conclusion: an increase in the noise power uncertainty leads to an increase in the probability of error P_er. The effect of unequal noise power at the GD PF and AF outputs, σ_ξ ≠ σ_η, on the probability of error P_er as a function of the normalized detection threshold given by Equation (56) was also evaluated. We can notice that the β value affects GD performance. For example, at β = 0.9 the probability of error P_er^GD is approximately equal to 0.26, and at β = 0.5 the probability of error P_er^GD is equal to 0.32. As follows from the theoretical analysis and simulation results, the GD implementation allows us to improve the spectrum sensing accuracy, which is defined by the probability of false alarm P_FA and the probability of detection P_D. Additionally, the GD allows us to alleviate the SNR wall problem by calibrating the noise power uncertainty and increasing the number of samples. Thus, GD employment allows us to improve the signal detection and signal processing performance. The GD can be applicable in many practical systems, such as adaptive and spectrum-efficient communication systems, CR network systems, and carrier sense multiple access based wireless networks.
In terms of complexity, the GD implementation can be more complicated in comparison with some conventional detectors, for example, the ED. The complexity of GD implementation in practice is caused by the following problems: (1) the inequality between the noise power or variances at the GD PF and AF outputs (discussed in this paper); (2) the problem of matching by parameters between the model signal and the incoming PU signal, for example, by amplitude or energy (discussed in this paper); (3) the interfering signals within the frequency content of the GD AF, i.e., the GD AF bandwidth (discussed in [20]). Conclusions The actual spectrum sensing performance of the well-known detectors employed in CR networks based on antenna arrays, such as the ED and MF, deviates from the theoretical results owing to the noise power uncertainty and SNR wall phenomenon. This phenomenon has a negative impact on the spectrum sensing performance and on the receiver operating characteristic (ROC), and increasing the sensing time has no compensating effect. In this paper, we demonstrate that under implementation of the GD in CR networks based on antenna arrays there is no GD SNR wall in the case of noise power uncertainty, which is confirmed by the real scenario simulation. The GD can calibrate out the noise power uncertainty problem through the compensation channel (see Figure 2) using the reference noise formed at the GD AF output. The GD is able to detect the PU signal at any low SNR value by increasing the number of samples, which is still not the ideal solution under fast spectrum sensing. Thus, GD implementation in CR networks based on antenna arrays allows us to reduce some negative effects caused by the noise power uncertainty and to improve the PU signal detection performance and robustness. The probability of error P_er as a function of the normalized optimal detection threshold is evaluated for the ED and GD both under presence and absence of the noise power uncertainty. The GD demonstrates better probability of error P_er performance in comparison with the ED in both cases. Finally, as demonstrated by the simulation results, with an increase in the noise power uncertainty, the probability of error P_er increases as well. We say that a random variable x has a chi-square distribution with ν degrees of freedom if its probability density function (pdf) is f(x) = c x^(ν/2 − 1) e^(−x/2) for x ≥ 0, where c = 1/(2^(ν/2) Γ(ν/2)) is a constant [42]. Introducing arbitrary constants a and b, we can represent Equation (B4) in an equivalent form.
8,111.6
2015-07-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Does the merger improve the operating performance of the company? Evidence from the beverage industry in India Background: There is fierce market competition both locally and globally. Every organisation seeks to sustain itself and, more crucially, to develop quickly through inorganic means. The expansion of a company through mergers and acquisitions is an inorganic process. Organic growth takes a very long period and is time-bound, whereas inorganic growth through mergers may be achieved quickly. This research aimed to determine whether the operating results of Indian beverage firms have improved after merger or not. Methods: In order to assess merger-related advantages to the acquiring firms, this study used the operating performance technique, which contrasts the pre-merger and post-merger performance of corporations using accounting data. Secondary data were used to carry out this study. The operating performance was assessed on six operating parameters (ratios), i.e. Operating Profit Margin, Gross and Net Profit Margin, Debt-Equity, and Return on Net Worth and on Capital Employed. These operating ratios were compared over the three-year pre-merger and post-merger periods. Results: The findings demonstrate that mergers do not appear to increase owner wealth. This finding shows that, rather than just becoming larger and achieving covert goals, managers should pay more attention to post-merger integration challenges in order to produce merger-induced synergies. Conclusion: This study shows that M&As have not had a positive effect on a company's operating performance, at least for the chosen beverage companies in India. Since financial measures cannot fully account for the influence of mergers on business performance, future research may create other metrics for merger-related gains. Research that provides deeper insights into the causes and trends of post-merger business performance across different types of mergers and industries would also be beneficial. Introduction Mergers and acquisitions (M&As) in India reached peak activity levels in 2021. An influx of first-time buyers and an increase in industry disruptors, or insurgents, across multiple sectors and business activities, is what led M&As to reach such high levels (Dezan Shira & Associates, 2022). This illustrates how the global economy is undergoing strong upheaval. In reality, this serves as a response to the changes brought about by rapid technological advancements, lower communication and transportation costs that led to the emergence of a global market, elevated competition, the emanation of new industries, a supportive financial and economic environment, and the liberalisation of the majority of economies, which also serve as motivators for mergers (Tambi, 2005). Nowadays, corporations across the globe frequently use M&As as a business restructuring tactic. There are many studies investigating the merger phenomenon, in line with the growing M&A trends (Boateng et al., 2011). The fact that it is challenging to determine how a merger impacts the financial results of a company is a significant obstacle in completing this task. An alteration in profitability might have a number of causes. Mergers may produce an all-around effective reaction to a supply or demand shock in the market. They may also provide a chance to obtain cutting-edge technology or to realise economies of scale.
Even if they are significant, mergers' impacts are still up for debate. Proponents refer to the "market for corporate control," which sees M&As as ways to transfer underperforming assets to companies that can use them more effectively and thereby realise the value gain. Sceptics point out that while many mergers can be benign or advantageous, others may be driven by market dominance, arrogance, or unintentional errors, all of which have a negative impact on society. Each viewpoint is supported by evidence. The efficient-merger hypothesis appears to be supported by the regular discovery of shareholder advantages from mergers, at least in the short term, in stock market event studies. On the other hand, studies of the actual operational consequences more frequently seem to reveal that merger advantages are the exception rather than the rule. Overview of India's Food & Beverages (F&B) industries Globally, the Indian economy is lauded as one of the quickest in terms of growth parameters. India survived the aftermath of the sub-prime crisis of 2008. With a growing young and educated middle class, which is the Indian economy's development engine, India is predicted to surpass industrialised nations like Germany and Japan and achieve third position in the world economic rankings by the year 2030. The Indian economy underwent a dramatic structural transformation during the previous ten years as it switched from being driven by agriculture to being driven by services. Agriculture still employs 60% of the people and generates 14% of the country's GDP. Despite the fact that the agricultural industry has advanced significantly, there are still many areas that may be improved and that, if addressed, would promote growth in both agribusiness and its connected industries. Addressing these challenges would better equip agriculture, and consequently the food and beverage industry, to satisfy India's predicted significant rise in consumption over the next 10 years. With its growing economy, India's total yearly household consumption is anticipated to quadruple, which will take India to fifth rank by 2030 amongst the countries with the largest goods markets. F&B occupies the largest space in the basket of goods consumed. This can be considered a significant accomplishment of the F&B sector in India (Grant Thornton, 2014). Significance Businesses are increasingly employing M&A (mergers and acquisitions) strategies for their regional and worldwide development in order to expand their business scope or seize new possibilities (Ferreira et al., 2016). Given this context, this research effort investigates, observes, and evaluates the operational results of the Indian beverages sector with regard to United Spirits Limited and United Breweries Limited, which participated in M&A activities following the liberalisation, privatisation, globalisation (LPG) era in India, and ascertains whether M&As significantly affected the financial operating performance of the merging entities. The purpose of this study is to investigate M&A in the beverage sector in India to analyse whether there were differences in outcomes for various companies operating within the same sector.
Literature review Our study assesses the consequences of mergers on competition. Various studies have found varying effects from mergers in various industries, which is not surprising. A variety of research has been conducted on the association between M&As and business performance (Bi Z., 2016), using several types of financial (such as profits and stock prices) and non-financial (such as the reputations of the firms) indicators and, of course, different time periods (such as initial market reaction to the M&As, pre- and post-measurement, etc.). According to these studies, M&A deals often benefit the target's shareholders more than the acquirer's shareholders. In reality, the performance of the buying firm generated a variety of outcomes (Schweiger and Very, 2003). The 50 biggest mergers in the US between 1979 and 1984 were quantified and their cash flow performance was assessed by Healy, Palepu, and Ruback in 1992. They found that, compared to their respective industries, the operating performance of merging companies substantially improved in the post-merger period (Healy et al., 1992). In 1983, Katsuhiko Ikeda et al. examined the financial results of forty-three (43) merging enterprises from the manufacturing sector in Japan. In more than half of the cases, they noticed an increased Return on Equity (RoE), whereas only approximately half the cases saw an improvement in the rate of return on total assets. However, "both profit rates improved in more than half of the cases in the five-year test, indicating that improvements in firm performance after mergers began in line with internal adjustments made by the merging firms. This suggests that there was a necessary gestation period during which merging firms learned how to manage their new businesses" (Ikeda and Doi, 1983). The impact of M&As on the financial health of 40 United Kingdom corporations was researched by Jallow, Masazing, and Basit (2017) between the years 2006 and 2010. According to the analysis, M&As had "a large influence on ROA, ROE, and EPS but a negligible impact on NPM". The study concluded that a lack of managerial effectiveness, inefficient utilisation of shareholders' funds, and escalated financial costs are responsible for companies' insignificant decreases in Return on Assets (RoAs) and RoEs after the mergers took place (Jallow et al., 2017). Between 1995 and 2000, Beena (2000) used a set of financial ratios and a t-test to compare the performances of a sample of 115 acquirers from the Indian industrial sector before and after merger. "The investigation was unable to identify any proof that the financial ratios for the acquiring corporations had improved in the post-merger era compared to the pre-merger period" (Beena, 2000). Financial holding companies' post-merger banks generated merger synergies. A comparison of the top 10 banks in financial holding companies and the top 10 banks in non-financial holding companies revealed that 3 out of the top 10 financial holding company banks were founded in the banking industry and are connected to financial holding companies that place a strong emphasis on banking. This finding indicates that financial holding companies perform better overall, post-merger, if banking is their primary operating entity (Liu, 2010).
The study examines a few financial parameters (ratios) before and after merger for acquiring firms in the Indian F&B industry to determine the effects of M&As on their operating financial results. The outcome indicates a minor, but not statistically significant, improvement in profitability ratios in the food industry, while the return on invested capital and net worth decreased. In the post-merger period, both the food and beverage industries saw a negligible rise in leverage. Post-merger, the combined performance of the food and beverage companies improved, but statistical analysis could not establish that the means of the two variables differed significantly (Mahamuni and Jumle, 2012). Mahamuni and Jumle, in 2018, carried out a study of manufacturing machinery and metal products firms to verify whether M&A activity helps firms improve their performance after merger in terms of parameters such as improvement in liquidity position, better solvency scenarios, expansion of their businesses, and overall improvement in profitability. The results revealed that the manufacturing companies which merged "did not achieve liquidity, solvency, profitability after merger". It was also seen that, after the merger, the operating results of the combined manufacturing firms did not improve, although the merged companies did expand their business activities post-merger (Mahamuni and Jumle, 2018). According to research carried out by Pramod Mantravadi and A Vidyadhar Reddy (2008), mergers appear to have had "a marginally positive impact on the profitability of businesses in the banking and finance sector, while they had a marginally negative impact on operating performance (in terms of profitability and returns on investment) for businesses in the pharmaceutical, textile, and electrical equipment sectors". In terms of profitability margins, ROI, and asset values, the Chemicals and Agri-products industries experienced a significant decrease due to mergers (Mantravadi and Reddy, 2008). The study by Ahmad Ismail, Ian Davidson, and Regina Frank (2009) focused on European banks. It examined operating performance following the merger event and found that the industry-adjusted average cash flow return was not substantially changed after the merger but remained positive. Additionally, it was observed that low profitability, conservative credit policies, and robust cost-efficiency status in the pre-merger period, which provided the basis for increasing these returns post-merger, are the major predictors of industry-adjusted cash flow returns (Ismail et al., 2009). Mahesh Kumar Tambi (2005) took a database of forty companies from CMIE's PROWESS and applied a paired t-test for mean differences on 4 parameters, viz. total performance improvement, economies of scale, operating synergy and financial synergy. He investigated the impact of mergers on Indian enterprises. The investigation indicates that Indian companies are comparable to those in various parts of the globe and that mergers did not significantly increase performance (Tambi, 2005). Using measures of profitability, growth, leverage, and liquidity, Pawaskar V.
(2001) focused on the pre-merger and post-merger operating performance of 36 acquiring firms between 1992 and 1995 and revealed that these firms surpassed the profitability average for the sector. Regression analysis, though, discovered that profitability did not show growth following the merger period when compared with the acquiring firms' top rivals (Pawaskar, 2001). Sinha, Kaushik and Timcy (2010) conducted a study to measure post-merger and acquisition performance, investigating selected organizations from the financial sector in India with the aim of understanding how M&As sway the financial performance of select Indian financial institutions. The researchers discovered that M&A incidents in India showed "a significant correlation between financial performance and the M&A deal" in the long run, along with the fact that the acquiring firms could generate value (Sinha et al., 2010). According to a 2009 study by Murugesan, Manivannan, Gunasekaran, and Bennet titled "Impact of Mergers on the Corporate Performance of Acquirer and Target Companies in India," the acquirer businesses' shareholders improved their liquidity performance following the merger event (Selvam et al., 2009). Marina Martynova, Sjoerd Oosting and Luc Renneboog (2006) looked at the long-term profitability of business takeovers in Europe and observed that "both acquiring and target companies significantly outperformed the median peers in their industry prior to the takeovers, but the profitability of the combined firm decreased significantly following the takeover" (Marina et al., 2006). From 1993 to 2010, Sinha and Gupta (2011) examined the effects of M&As in the Indian financial sector. 80 companies that went through M&A over those 18 years were examined in the study. The study reveals that M&As had 'a favorable impact on profitability', represented by the net profit and the ratio of profit before interest, tax, depreciation and amortization (PBITDA), 'a negative effect on liquidity', and also decreased total and systematic risk (Sinha and Gupta, 2011). Abdullah Mamun, George Tannous, and Sicong Zhang (2021) studied the post-merger operating performance of regulatory bank mergers during and after the 2008-2009 financial crisis. Up to two years after the acquisition, regulatory mergers are seen to significantly increase profitability and cost effectiveness. In comparison with rivals that were not involved in a merger, these improvements are significantly higher. However, the post-merger operating results of non-regulatory mergers do not differ substantially from those of their non-merger peers (Mamun et al., 2021). The impact of mergers when businesses compete on pricing and cost-cutting efforts was examined by Motta and Tarantino (2021). They discover that, following the merger, overall investments and consumer surpluses are lower when efficiency benefits are missing. Only when efficiency improvements are substantial enough are the impacts of a merger competitively advantageous. Examining the effect of horizontal mergers that lead to monopolies on businesses' incentives to engage in demand-enhancing innovation, they discover that a merger's overall effect on innovation might be either favourable or unfavourable (Motta and Tarantino, 2021).
Numerous research papers have been examined, and it emerges that the impacts of M&As on financial results are inconsistent, mixed, and industry-dependent. The varied methodologies made it difficult to summarise the findings: the studies used different variables, parameters, and financial information. Several studies employed financial performance metrics, and the majority of these found no appreciable difference (in either direction) between financial performance before and after the M&A. From these preceding works one might conclude that mergers generally do not enhance the post-merger efficiency of acquirers: gains are either negligible or non-existent by various measurements, and neither event studies nor accounting studies provide proof of value generation. This study's goal is to investigate these theories in the context of India. There are few studies on the post-merger performance of Indian corporations, and consequently a large knowledge gap exists in this field. The operating efficiency technique is utilised in this study to determine how a merger affects the efficiency of acquiring organisations.

Research objectives
Based on the literature review, the researcher frames the objective and the hypothesis of the study as follows.
Objective: To measure, compare and study the merger's impact on operating performance.
Hypothesis: Merged firms have improved their operating performance.

Methods
According to several merger studies, evaluating and comparing the merged firms' performance before and after the merger against a similar industry group is an effective way to find operating performance improvements (Behr & Heid, 2011; Fee & Thomas, 2004; Ghosh, 2001; Powell & Stark, 2005). Operating performance is assessed on six operating parameters (ratios): Operating Profit Margin, Gross Profit Margin, Net Profit Margin, Debt-Equity Ratio, Return on Net Worth, and Return on Capital Employed. The required financial data are extracted from the Centre for Monitoring Indian Economy (CMIE) Prowess Database. These operating ratios are compared over the three-year periods before and after the merger (Aggarwal and Garg, 2022).

Research design
The researcher used an analytical and quantitative research design to measure and compare the operating results before and after the merger period.

Data source
All data used in this paper can be found in the Centre for Monitoring Indian Economy (CMIE) Prowess Database (Version Prowess IQ v3.0). The relevant data cover the three-year periods before and after the merger, with the merger year as the baseline year, i.e. 0 (zero). Research papers, reports of research organisations and books used for the study are mentioned in the references with URLs.

Sampling method
The researcher followed a non-probability convenience sampling method to select firms from the Indian beverage industry. United Spirits Limited (USL) and United Breweries Limited (UBL) were selected for the study because they are two of the most renowned and prestigious companies in India's beverage industry.
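As described in the data-analysis section that follows, the comparison rests on a paired two-sample t-test over the pre- and post-merger ratio means at the 0.05 level. A minimal sketch of such a test in Python using SciPy; the ratio values below are hypothetical placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical operating-ratio values for the three pre- and post-merger years
pre_merger = [8.4, 7.9, 8.8]   # e.g., operating profit margin, years -3 to -1
post_merger = [6.2, 6.9, 6.4]  # years +1 to +3

# Paired two-sample t-test at the 0.05 significance level
t_stat, p_value = stats.ttest_rel(pre_merger, post_merger)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Statistically significant change in the ratio after the merger")
else:
    print("No statistically significant change")
```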
Data analysis and tools used
For all the sample firms that underwent mergers, operating performance ratios were estimated both before and after the merger; means were computed and compared in order to assess the merger's impact, and a paired two-sample t-test at the 0.05 significance level was used to determine whether any statistically significant change in operating performance had occurred as a result of the mergers. The mean, the paired t-test, and ratio analysis are the main methods applied for analysing and assessing the collected data. IBM SPSS Statistics (Base 29.0) was used for data processing and the paired t-test analysis.

Results and discussions
Studies of historical cases of mergers and acquisitions have established changes in the financial performance of restructured firms. However, the previous studies and the current paper confirm that firm performance in terms of profitability, liquidity, and solvency does not show any significant short-run improvement in the post-merger period. A study of the long-term effects of post-merger activity may add interesting results and open a future research dimension.

Operating profit margin
Table 1 shows that the mean Operating Profit Margin of both beverage companies is much lower after the merger than before it. In the instance of United Breweries Limited, the post-merger operating profit margin is statistically worse (-14.88 in the pre-merger period versus -24.55 in the post-merger period, t-value = 4.964, p = 0.05). This means both companies are unable to control their costs, and, as a result, operating profit after the merger is lower: after covering all operational expenses, operating profit declines throughout the post-merger period.

Gross profit margin
Table 2 indicates that the mean Gross Profit Margin declined for both firms: United Breweries Limited's mean fell from 8.38 pre-merger to 6.48 post-merger, and United Spirits Limited's from 1.61 to 0.69. Both margins remain positive, indicating that the companies still recover their cost of goods sold (COGS) from sales. However, since the p-values are above 0.05, the changes in the gross profit margins of the two companies are statistically insignificant.

Net profit margin
Table 3 shows the Net Profit Margin of the acquiring businesses in the pre-merger and post-merger periods. The average Net Profit Margin ratio for United Breweries Limited (1.55 pre-merger and 2.93 post-merger) improved, indicating that the company became better at converting sales into actual profit, but the gain is not statistically significant at the required probability level. The mean Net Profit Margin of United Spirits Limited, on the other hand, dropped after the merger; the decline is likewise not significant at the required probability level. A reduced, rather than increased, post-merger net profit margin indicates that not all activities are carried out efficiently.
Return on net worth
Table 4 provides the sample merged firms' average Return on Net Worth over the pre- and post-merger periods. The average return on net worth of both of the chosen merged corporations, i.e. United Breweries Limited (pre-merger = 19.47, post-merger = 13.34) and United Spirits Limited (pre-merger = 20.38, post-merger = 9.67), is lower after the merger than before the merger event. Therefore, it may be concluded that although net worth increased substantially as a result of the M&As, the merged companies were incapable of delivering the necessary returns on their net worth post-merger.

Return on capital employed
Based on the analysis of Table 5 for both sample merged firms over the pre- and post-merger periods, the average return on capital employed of United Spirits Limited went down from 8.36 (before the merger) to 5.80 (after the merger). In the case of United Breweries Limited, by contrast, return on capital employed improved from 2.26 (before the merger) to 5.21 (after the merger), suggesting that this company proved efficient in utilising its funds following the merger and that its management employed investments and creditors efficiently. However, the derived t-values for the two firms are not statistically significant at the required probability level, so neither the increase nor the decrease in the ratio is significant.

Debt equity ratio
The analysis of Table 6 of the sample merged firms' debt-equity ratios for the pre- and post-merger periods shows that United Breweries Limited's debt-equity ratio fell substantially from 1.61 (before the merger) to 0.69 (after the merger) (t-value = 3.267, p > 0.05). This indicates that, relative to the pre-merger period, a smaller portion of assets is financed by debt rather than equity after the merger.

Hypotheses testing
The hypothesis that the operating performance of the merged firms has improved is rejected after examining the results above. As Table 7 clearly indicates, the sample companies' operating performance declined as a result of the merger activity they undertook, revealing a negative impact on the sample companies' overall profitability over the post-merger period.

The post-merger operating performance of the representative sample firms from the beverage industry is declining. Profitability ratios are falling, along with general declines in returns on net worth and capital invested. The earnings ratios of United Breweries Limited improved somewhat, albeit not statistically significantly, while for United Spirits Limited the returns on net worth and on investment declined. The debt-to-equity ratio, on the other hand, decreased markedly after the merger compared with before, indicating relatively less reliance on debt, rather than equity, to fund assets in the post-merger era. Overall, it can be said that the merger activity undertaken by these two representative sample businesses in India's beverage industry had a negative impact on operating performance.
One may conclude that the three financial variables included in this study do not vary in a statistically significant way after the merger. The null hypothesis cannot be rejected, because all estimated t-values are lower than (or lie on the negative side of) the critical table value. This outcome indicates that merger activities do not affect the acquiring businesses' operational performance.

Another reason why profitability did not increase after the mergers is that acquiring a company can lead to "managerial control loss problems". One may argue that acquirers encounter unforeseen difficulties while handling and integrating their purchases: the acquiring leadership loses control and is unable to manage the merged firm effectively as it grows more complex. After a merger, profitability levels fall as a result of this loss of control.

Conclusion
This research was conducted to better understand the impact of M&A activity on operating financial results. The study shows that the M&As did not have a positive effect on company operating performance, particularly for the chosen beverage companies in India; the limited favourable effects are statistically negligible.

Although there are many motivations for a firm to participate in merger activity, our aim in this research was to understand one crucial feature of M&A activity. When these motives or reasons are qualitative, it can be difficult to interpret the conclusions or gain insight from quantitative information alone. Additionally, several studies have noted that merger and acquisition efforts by numerous organisations did not produce beneficial short-term effects in the post-merger phase. If companies do not conduct thorough research before deciding on M&As, the activity will not meet their expectations or objectives.

Subsequent studies in this area may expand on the present work by calculating performance and comparing it with the average for an industry or sector; any variations may then be investigated further for better understanding. The results of research demonstrating inferior performance in the post-merger era might also be compared with, and connected to, the post-merger returns to investors of acquiring corporations involved in mergers taking place in India.

Previous studies have shown that there was no significant improvement in business performance: merger-induced changes in a company's industry-adjusted profitability, asset efficiency, and solvency status were found to be statistically negligible. This finding suggests that mergers do not increase the acquirers' operating performance. These empirical findings allow us to conclude that merger decisions are not made with the intention of maximising shareholder value through increased profitability; the pursuit of larger scale, market consolidation, and empire building may instead have inspired merger choices. There may occasionally be unstated goals, such as the post-merger asset stripping of the target firm, which provides the promoters with a significant cash premium over and above the net worth.

In order to accomplish the true goals of a merger, management must continue to concentrate on the company's operations, especially during the post-merger integration phase.
Industry mergers and acquisitions do not appear to be slowing down. Why do businesses choose mergers and acquisitions when, on average, the data show that doing so hurts them more than it helps? This apparent paradox points to possible issues with the conceptual framework, the technique, or the accuracy of the data, and the topic may be explored in more detail.

Financial metrics may not fully reflect the impact of mergers on business performance or reveal the driving forces behind M&A decisions. Therefore, in future research, post-merger performance gains might be examined against additional criteria, including the social value provided, improvements in gains to other stakeholders of the firms engaging in M&A, and advantages at the industry and economy level, both nationally and worldwide.

Limitations of the study
Due to several limitations, the research offers only a few explanations for why there was no merger-induced increase in corporate performance. Furthermore, the article does not examine the results for trends in post-merger performance across merger types and industries. Future research might address these areas.

Reviewer report
1. Introduction: The rationale behind the research issues is presently insufficiently established. To enhance this aspect, it is recommended to augment the reference base with a more extensive array of scholarly sources. Simultaneously, it is advisable to place increased emphasis both on elaborating upon these concerns and on providing a comprehensive contextual backdrop for the study. The introduction section therefore requires a redesign, and the academic writing style needs improvement.
2. Literature review: In the literature review, the authors are encouraged to construct compelling discussions of the research objectives, drawing upon the insights of esteemed scholars; it is vital that this practice is consistently followed. The review draws on a limited range of relevant literature, and critical evaluation of key concepts and theories is also required. The research objectives and research questions should be included in the introduction.
3. Research methodology: The research design lacks adequate information, and a more comprehensive explanation is essential. The description of the sampling method is deficient in detail, and a thorough exposition of the rationale for opting to use CMIE is indispensable.
4. Discussion: Kindly formulate the Discussion section using the results and findings as its basis. The authors need to justify the research results through scholarly references, and they should also justify, with citations, why only two firms were selected for the study.
5. Data analysis: The authors should justify, with citations, why only the paired two-sample t-test was chosen for the analysis. Trend analysis could also be useful for measurement purposes, and the authors may use it in this section. From page 11 it appears that the authors selected only a three-year window for comparing the firms' performance, which seems unjustified; they should cite studies that did the same. My suggestion is that the authors use at least five years of pre- and five years of post-M&A data for the comparison, because performance comparisons cannot be made over short periods, especially where an M&A or a corporate restructuring has taken place.
6. Conclusion: The authors should cite studies that reached the same results, or justify why this study's results differ from them. If all the above suggestions are incorporated into the study, I am sure the quality of the paper will improve. With best wishes.
Priyanka Tandon, Regenesys Business School, Sandton, South Africa

Brief description of the article
This study employed the operating performance technique, which compares the pre-merger and post-merger performance of organisations using accounting data, to determine the benefits of mergers to the acquiring company. Using secondary data, six operating metrics (ratios), namely Operating Profit Margin, Gross and Net Profit Margin, Debt-Equity, Return on Net Worth, and Return on Capital Employed, were used to evaluate operating performance. These operating ratios were compared for three years before and after the merger.

Major concerns
1. In Section 1, the authors failed to address the research objectives and questions pertaining to the study; the research objectives, currently placed after the literature review, should be a sub-part of Section 1. Ideally, the introduction must clarify the gaps, the objectives, the guiding research questions, and the philosophical stance. In its current form, the introduction fails to meet any of these criteria; moreover, the research questions do not conform to the study.
2. The authors must discuss at least three contributions of the study (theoretical and practical) that convincingly establish its novelty.
3. The authors must describe the structure and organisation of the study.
4. The literature review is quite descriptive. It lacks the critical debates that shape theoretical discussion and are essential for scientific argument; I am sorry, but the literature review reads like a thesis chapter. Moreover, the research gaps, research questions, and research objectives should be presented in the introduction.
5. The Discussion should be a separate section after the Results and must specifically discuss the results (with links to similar and contrary studies) as well as the implications of the study in detail.

Minor concerns
I found typos and inconsistencies in the citations, which should be carefully sorted out. Overall, the manuscript is good, and it can be indexed after incorporating the above suggestions.

Author responses: As suggested, both the Literature Review and the Introduction sections have been revised and updated. "Discussion should be a separate section after results and must specifically discuss the results and also the implications of the study in detail" - this is done. "I found typos and inconsistencies in citations" - thank you; these are now corrected. Competing Interests: Nil.

Reviewer comment: I did not find any research objectives in the Introduction part. Please develop the research objectives based on the research problem and research questions.
Is the work clearly and accurately presented and does it cite the current literature? Partly
Is the study design appropriate and is the work technically sound? Partly
Are sufficient details of methods and analysis provided to allow replication by others? Partly
If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? No source data required
Are the conclusions drawn adequately supported by the results? Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Operations Management
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.

Author responses: "Please develop the research objectives based on the research problem and research questions" - the purpose is mentioned in the Significance sub-section of the Introduction. "In the literature review, the authors are encouraged to construct compelling discussions on the research objectives, drawing upon the insights of esteemed scholars; it is vital to ensure this practice is consistently followed" - a discussion and summary of the literature review is given in the last three paragraphs of the Literature Review. "Research objectives should be included in the Introduction part, along with the research questions" - these are mentioned after the Literature Review. "A thorough exposition of the rationale for opting to utilise CMIE" - updated in the Data source section. "In the Discussion, authors need to justify the research results through scholars' references" - added in the Results and Discussion section. Competing Interests: NIL.

Reviewer Report, 13 October 2023
https://doi.org/10.5256/f1000research.152785.r208032
Tables 1, 2, 3, 4, and 5 should contain the S.D. and N, because this information is needed to apply a t-test.
Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound? Partly
Are sufficient details of methods and analysis provided to allow replication by others? Partly
If applicable, is the statistical analysis and its interpretation appropriate? Partly
Are all the source data underlying the results available to ensure full reproducibility? No
Are the conclusions drawn adequately supported by the results? Partly
Reviewer Expertise: Corporate Finance, Accounting, Economic developments
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.
Author responses: Thank you for the suggestions; we have added this in the Data section. "Should justify why only two firms were selected for the study; justify the same with citations" - thank you for this point; we have added it in the Sampling section. "Authors should justify why only the paired two-sample t-test was opted for analysis purposes, with some citations" - thank you; this is now added in the Data Analysis section. "Tables 1, 2, 3, 4, and 5 should contain the S.D. and N, because this information is needed to apply a t-test" - thank you; we have added the S.D. in the respective tables. "From page 11 it seems that the authors selected only three years for comparison of the firms' performance, which seems unjustified; cite some study that did the same" - thank you; we have addressed this in the Limitations section. "My suggestion is that the authors use at least five years each of pre- and post-M&A data for the comparison, because comparisons cannot be made over short periods, especially where an M&A or a corporate restructuring has taken place" - thank you; we have addressed this in the Limitations section. "Authors should cite some study that concluded the same results, or justify why this study's results differ from them" - thank you; we have added this in the Results and Discussion section.

This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
8,393
2023-09-11T00:00:00.000
[ "Business", "Economics" ]
Analyzing challenging aspects of IPv6 over IPv4
Received 01 July 2020, Revised 15 July 2020, Accepted 25 July 2020.

The exponential expansion of the Internet has exhausted the IPv4 addresses provided by IANA. The new IP version, IPv6, was introduced by the IETF with new features such as a simplified packet header, a greater address space, a new address type, improved encryption, powerful segment routing, and stronger QoS. ISPs are slowly seeking to migrate from current IPv4 physical networks to next-generation IPv6 networks. The move from IPv4 to IPv6 is very sluggish because billions of computers across the globe use IPv4 addresses. The configuration and behaviour of the IPv4 and IPv6 protocols are distinct, and direct communication between IPv4 and IPv6 is not feasible. Because of these incompatibility problems, the two protocols will coexist throughout the transition for some years; compatibility, interoperability, and stability are key concerns between them. Converting a network to IPv6 causes several issues for ISPs, the key challenges being packet traversing, routing scalability, performance reliability, and protection. In this study, we present a detailed analysis of all the aforementioned issues that arise when switching to an IPv6 network.

INTRODUCTION
The Internet is growing rapidly across the globe. After decades of development, driven by speedy technological advancement, technologies such as 3G and 4G have become part of the Internet, supported by mobile devices. The rapid changes in the Internet world have increased the demand for unique IP addresses for individual devices. Home users connected through smartphones can enjoy a range of Internet services, and the billions of required addresses could so far only be provided through IPv4, whose 32-bit addressing technique offers about 4 billion addresses [1]. ISPs faced difficulties in providing Internet access to new users, and the Internet Assigned Numbers Authority (IANA) announced that IPv4 addresses are nearly exhausted [2]. The solution is to move to the new IPv6 network. IPv6 was developed by the Internet Engineering Task Force (IETF) with extra features such as a simplified header, a larger address space, the new anycast addressing type, integrated security, efficient routing, and better QoS [3]. It is a 128-bit architecture and can provide on the order of undecillions of IP addresses; it is said to be the next-generation IP protocol. The IPv4 and IPv6 protocols differ in format and behaviour and cannot communicate directly with each other. ISPs are progressively moving towards the Next Generation Network (NGN) [4], but the changeover process is very sluggish because billions of devices are in operation throughout the world; it is therefore not possible to replace the entire network with IPv6 at once in a short span of time. According to a Google survey report, after more than 25 years the transition process is only about 25% complete. There are many reasons behind this slow conversion, and the economic factor ranks high: hardware costs, higher energy consumption, staff training, and other factors together increase the economic cost [5]. The dual-stack technique and virtualized network architectures have been introduced to overcome these issues.

VIRTUALIZATION IN NETWORKING
In the present era, ISPs increase the number of devices they serve by expanding their networks.
Therefore, ISPs are continuously buying physical equipment to grow; as a result, cost and electricity consumption are increasing. Virtualization concepts were introduced in networking to reduce energy consumption and the expenditure on proprietary hardware.

Network Services Virtualization (NSV)
The technique is effectively applied in forms such as virtual LANs (VLANs), the Virtual Router Redundancy Protocol (VRRP), and virtual routing and forwarding (VRF). These NSV concepts provide the same support as dedicated hardware after eliminating that hardware. A VLAN is a subnetwork that can group collections of devices across separate physical local area networks (LANs). A single broadcast domain of a switch is separated into multiple broadcast domains through VLANs, which reduces cost, splits the network into multiple smaller networks, lessens broadcast traffic, and improves security [19]. Similarly, VPNs provide a secure, logical connection for sending and receiving protected data over the public network. VRRP provides availability and reliability by using multiple redundant routers behind a single virtual gateway for efficient traffic delivery; if one gateway is down, traffic is passed through another gateway [20]. The VRF technique creates multiple virtual routing tables in a single router, splitting one physical router into multiple logical routers.

VMWARE INFRASTRUCTURE
Dell Corporation provides the VMware virtualization platform, in partnership with other companies, as a service. VMware Workstation is used to manage IT environments; it allows users to run a set of connected Virtual Machines (VMs) on a physical machine and have them communicate simultaneously alongside the original machine. A hypervisor is computer software or firmware used to create and run more than one VM as a guest machine on a physical machine. These VMs may run different guest operating systems (Microsoft, Linux, and Mac) and share the virtualized hardware resources. Each VM can use up to 16 GB of RAM and 4 CPUs with VMware Virtual Symmetric Multi-Processing (SMP). VMware offers a variety of software to provide "vServices" for desktop computing, servers, cloud management, application management, storage management, networking, and security.

Cloud Computing
Cloud computing is an on-demand technology built on virtualization concepts; it is a de facto standard for hosting and providing services to many users over the Internet. It has many benefits over traditional techniques and is being adopted and implemented very quickly by ISPs and end-users. Further benefits include cost savings, job scheduling, energy efficiency, unlimited storage, scalability, access at any time and from anywhere on the globe, and tolerance of faults that occur in the system. The next generation of cloud technology should be equipped with a mixture of traditional and non-traditional developments [21], such as SDN, nano computing, quantum computing, neuromorphic computing, etc.

CORE ISSUES DURING MOVING TOWARD NGN
The addressing methods of IPv4 and IPv6 are totally different and cannot be used interchangeably. Using a dual-stack approach makes the network hybrid in nature, and the coexistence of IPv4 and IPv6 has generated several core issues in different respects.
These issues are the main reason for the decrease in the overall performance of ISPs. The issues are as follows.

Packet Traversing
The two addressing techniques, IPv4 and IPv6, are not compatible: a user or machine on the IPv4 scheme cannot communicate with one on the IPv6 scheme, and two IPv6 networks cannot communicate with each other when an IPv4 network lies between them. This creates a packet-traversing issue. To resolve it, researchers adopted a practical solution: a tunnel is deployed when two separate IPv6 networks are connected through an IPv4 network and want to communicate with each other, as shown in Figure 1. In tunneling, a virtual connection is established between the two networks over the intermediate network; network-layer virtualization provides segregation to realise end-to-end connectivity and joins two homogeneous networks through the virtual network [22]. It is a temporary solution until the entire network shifts to IPv6. At the tunnel entry, each IPv6 packet is encapsulated inside an IPv4 header; at the destination, the decapsulation process removes the IPv4 header and delivers the original IPv6 packet to its destination. This is how heterogeneous traversal is achieved. There are several IPv6 tunneling protocols, such as 6in4, 6to4, ISATAP, Teredo, 6rd, 6over4, and GRE, which differ from each other in performance and configuration. The 6in4, 6rd, and GRE tunneling protocols are static, while 6to4, 6over4, and ISATAP are dynamic. A static (manual) tunnel is point-to-point, while an automatic (dynamic) tunnel is point-to-multipoint. In the static tunneling method, the source and destination addresses of the tunnel are configured explicitly, while in the dynamic method the source address is assigned by the operator and the destination address is found automatically [23]. A comparison of IPv6 tunneling protocols is shown in Table 1.

The packet-traversing issue is thus resolved by tunneling. Numerous research studies have addressed IPv6 tunneling protocols, measuring, comparing, and analysing the performance of the most common ones in small and large virtual networks (VNs) through different simulators. Researchers evaluated the IPv6 tunneling protocols using parameters such as convergence, throughput, jitter, end-to-end delay, RTT, and tunnel overhead; a detailed comparison is displayed in Table 2. It shows that the performance of the 6in4 tunnel is better than all others on most of the above-mentioned parameters; due to this better performance it is widely used. It is a static, point-to-point tunnel. Mostly, researchers have measured performance in small VNs through simulators. Although the IPv6 tunneling technique resolves the packet-traversing issue, the tunnel is not a secure virtual connection [24]; it is more vulnerable to a breach than physical links. The IPv4/IPv6 source address of the encapsulating packet can be spoofed, and an attacker can alter the encapsulated IPv6 packet anywhere on the Internet during transmission. With the wide deployment of IPv6 tunneling methods, certain types of attack, such as tunnel injection, tunnel sniffing, reflector attacks, and routing-loop attacks, have been observed. To provide a secure virtual IPv6 connection, the 6in4 tunnel needs to be combined with IPsec: a security association is established in IPsec to protect the traffic, defined by its IPv6 source and IPv6 destination, during transmission over the Internet [25].
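Before IPsec is added, the basic 6in4 encapsulation itself is simple: the IPv6 packet becomes the payload of an IPv4 packet whose protocol field is 41. A minimal sketch using the scapy library (a sketch only; the addresses are placeholders, and sending raw packets requires administrator privileges):

```python
from scapy.all import IP, IPv6, ICMPv6EchoRequest, send

# Inner IPv6 packet exchanged between the two IPv6 islands (placeholder addresses)
inner = IPv6(src="2001:db8:1::1", dst="2001:db8:2::1") / ICMPv6EchoRequest()

# 6in4 encapsulation: wrap the IPv6 packet in an IPv4 header with protocol 41,
# addressed between the two tunnel endpoints across the IPv4 core
outer = IP(src="192.0.2.1", dst="198.51.100.1", proto=41) / inner

send(outer)  # the remote endpoint strips the IPv4 header and forwards the inner IPv6 packet
```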
In this scenario, the tunnel's packet is encapsulated once more, in the IPsec security header, before transmission. On the receiving end, decapsulation is performed twice, first for the IPsec header and then for the IPv6 tunnel header, which creates extra overhead for every tunneled packet during encapsulation and decapsulation. A new IPv6 tunneling technique with built-in security features is needed to reduce this extra overhead. (Table 1 notes, for instance, that GRE is generic, supports several payload types, and can be used with routing protocols, but it poses firewall challenges, since IP protocol type 47 must be opened for inbound and outbound IPv4 datagrams, and it offers only simple key authentication between the tunnel endpoints, with the key transmitted in clear text.)

Routing Scalability
Routing is the most essential part of the network: without proper routing, data cannot be sent to the destination host and the network cannot function properly. The router decides where to deliver a packet by matching the destination address against its routing table; if a matching path is found in the table, the destination host receives the data, otherwise the packet is discarded [26]. The routing table can store a large number of routes. A variety of routing protocols are available for IPv4 and IPv6; the goals of a routing protocol are accuracy, stability, redundancy, routing-information integrity, a manageable routing policy, and fast convergence. A comparison of IPv6 routing protocols is shown in Table 3. (It notes, for example, that OSPF is a link-state protocol that uses Dijkstra's algorithm to calculate the best route, with cost as its metric and an administrative distance of 110.) The routing process is performed by routing protocols, which easily detect any changes or failures that occur in the network. The IPv6 routing protocols differ in nature and performance. Researchers have examined their performance in small and medium-sized networks through different simulators; such studies may help ISPs provide routing services on large-scale, next-generation virtualized IP networks. A detailed performance comparison of the IPv6 routing protocols on parameters such as convergence, throughput, jitter, packet loss, end-to-end delay, and RTT is displayed in Table 4.

Table 4 shows the detailed comparison of different IPv6 routing protocols in small and medium-sized VNs. In this comparison, RIPng has an advantage over the rest of the IPv6 routing protocols on most of the parameters; however, RIPng is a distance-vector routing protocol and is not used in large networks [27]. EIGRPv6 and OSPFv3 are the best choices for a larger network. EIGRPv6 was developed by Cisco as a proprietary protocol but was later declared an open standard, and it is best suited for flat networks. When the network moves towards decoupled hardware and virtualization, OSPFv3 is the better choice for routing. It is an open-standard, hierarchical routing protocol proposed by the IETF, and it has become the industry standard and the most widely deployed protocol on the Internet due to its open-standard feature, hierarchical nature, and optimised link-state routing. Its design focuses on scalability and robustness against failures: in OSPF, the routing domain is divided into multiple areas, limiting the protocol's processing overhead. Due to its hierarchical nature, it is a more scalable routing protocol for Multi-Protocol Label Switching (MPLS) and the NGN. In traditional IP routing, the router determines the path incrementally, hop by hop, based on the destination IP address.
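To make that destination-based lookup concrete, here is a minimal sketch of a longest-prefix-match lookup over a small, hypothetical IPv6 routing table (the prefixes and next hops are placeholders):

```python
import ipaddress

# Hypothetical routing table: (prefix, next hop); the most specific match wins
routes = [
    (ipaddress.ip_network("2001:db8::/32"), "fe80::1"),
    (ipaddress.ip_network("2001:db8:a::/48"), "fe80::2"),
    (ipaddress.ip_network("::/0"), "fe80::ff"),  # default route
]

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    if not matches:
        raise LookupError("no route")  # the packet would be discarded
    # Longest prefix (most specific match) wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("2001:db8:a::5"))  # -> fe80::2, since the /48 is more specific than the /32
```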
An alternative, connection-oriented routing technique based on label switching is called MPLS. Segment routing (SR) is a modern, fast form of routing introduced by the IETF; it is a variant of traditional IP routing that works within MPLS and IPv6 networks. In segment routing, an IPv6 ingress node prepends a new type of header, the Segment Routing Header (SRH), which contains a list of segments. In an MPLS network, segments are encoded as labels, while in an IPv6 network they are encoded as a list of IPv6 addresses. In a distributed control plane, the segments are allocated by OSPF or BGP. SR decreases the lookup delay at every router, which increases network performance, and it improves network scalability, efficiency, and rerouting.

The researchers in [28] present their design and implementation of a routing function run in virtualized mode over an OpenFlow network. OpenFlow is the most common configuration protocol for enabling the Software-Defined Networking (SDN) architecture. SDN is a programmable-network approach that separates the control plane and the forwarding plane in a standardised manner. It defines two types of communication devices: the controller, which handles the network's forwarding elements, and the switch, which is responsible for packet forwarding. The researchers emphasise the idea of a routing service as NFV over an OpenFlow network, and they report benefits in terms of fewer routing devices and reduced configuration effort, space, cost, energy consumption, and deployment time. Their experiments show that the RTT remains stable as the number of requests increases, so performance and scalability are assured; more evaluation is needed to determine the robustness of the virtualized functions.

Network Performance Guarantee
Network virtualization is a model for tackling the various challenges of a traditional network by decoupling functions from the hardware. It provides general-purpose services, such as servers, storage, switches, controllers, and security, through software implementation, together with several emerging technologies such as NFV, SDN, and cloud computing. A Virtualized Data Center (VDC) provides better management flexibility, lower cost, scalability, better resource utilisation, and energy efficiency through NFV. Network operators face several technical challenges, such as how to migrate smoothly from large-scale, tightly coupled network infrastructure to NSV-based solutions, and how to guarantee network performance for virtual appliances during the migration. Commercial data centers process a variety of services, such as web services, real-time applications, gaming, and live audio and video streaming, that demand high network bandwidth, and it is the primary job of network operators to guarantee these services to users and satisfy them. When moving towards virtualized technology, network operators are reluctant due to performance issues, namely throughput and latency. Virtualized data centers are capable of overcoming the throughput and delay challenges: a data center network is divided into numerous logical networks, and these logical networks achieve their performance objectives independently. To achieve guaranteed performance in virtualized data centers, multiple architectures have been proposed, namely SecondNet, Oktopus, Gatekeeper, CloudNaaS, and Seawall.
• SecondNet: In [29], researchers offered the SecondNet VDC architecture as a resource allocator for multiple tenants in cloud computing. It provides service differentiation for computation, storage, and bandwidth guarantees among multiple VMs, defining three basic service types: type 0, type 1, and type 2. The type 1 service deals with bandwidth guarantees. It is a highly scalable architecture, supporting up to 2^32 VMs, and it achieves this scalability by moving all the virtual-to-physical mapping, routing, and bandwidth reservation from the switches to the server hypervisors. The authors designed the architecture, implemented it on a simulated test-bed, and evaluated its performance; the designed algorithm achieved a high rate of network operations during the experiments, with low time complexity. Some limitations of the SecondNet architecture are highlighted: first, its performance depends upon the physical arrangement of the network; second, it does not consider the latency aspect of network performance.

• Oktopus: In [30], researchers developed the Oktopus architecture to prove the practicability of VNs. It depends on two proposed VN abstractions and captures the trade-off between the performance guarantees offered to multiple tenants and the costs. It increases application performance and provides better flexibility. In this architecture, tenants find a balance between higher application performance and lower cost; they are concerned with metrics such as reliability, bandwidth and latency between VMs, and the failure resiliency of the paths between VMs. The researchers deployed it on a 25-node, two-tier test-bed through simulation and confirmed that the abstractions are a practical and effective approach; moreover, they found that the abstractions can reduce tenant costs by up to 74%. The limitation of Oktopus is its restriction to tree topologies; research is needed on implementations for other types of topologies.

• Gatekeeper: The researchers focused on the problem of network performance isolation [31] and designed a new model named Gatekeeper. The solution should be scalable in the number of VMs, deliver the expected performance, and be robust against malicious tenant behaviour. The Gatekeeper architecture emphasises providing assured bandwidth among VMs in multi-tenant data centers while attaining high bandwidth utilisation. It is a point-to-point protocol and generates one or more logical switches connected to the VMs belonging to the same tenant. The rate of incoming traffic is monitored by the virtual NIC (vNIC) of each receiving VM through different sets of counters; if congestion occurs during transmission, the sender's vNIC is informed, and the traffic controller uses this information to adjust the traffic rate so that the level of congestion is reduced. The researchers implemented a Gatekeeper prototype with 2 tenants and 6 physical machines, and their results showed that Gatekeeper works well in simple scenarios. Gatekeeper does not address latency and is still under development.

• CloudNaaS: This is a VN architecture with which professionals deploy and manage enterprise applications in clouds in a well-organised way [32]. The researchers designed, presented, implemented, and evaluated a cloud networking framework model. The model provides the facility to deploy applications in the cloud with access to VNFs, and it also permits the deployment of a variety of middlebox appliances.
The authors demonstrated the flexibility of CloudNaaS using a multi-tier application model on a test-bed with commercial OpenFlow-enabled network devices supporting several network functions. In this model, several techniques are used to reduce the number of entries in each switch: a single path is used for ordinary traffic delivery and a few paths for QoS traffic, based on the type of service, and wildcard bits are used to aggregate IP forwarding entries. The results show that CloudNaaS performs well under large numbers of provisioning requests. The limitation of CloudNaaS is its use of a limited set of paths for QoS.

• Seawall: Seawall is another bandwidth-allocation architecture [33]; it defines a mechanism for sharing bandwidth among multiple tenants in virtualized data centers. The researchers presented Seawall as a bandwidth allocation system that divides the network capacity according to a policy set by the administrator: it assigns weights to each VN and process and allocates bandwidth according to those weights. Congestion-controlled tunnels are used for bandwidth sharing between pairs of networks; to improve Seawall's efficiency, an end-to-end congestion-control technique could be used. After evaluating the Seawall prototype, the researchers observed that it adds little overhead and achieves strong performance isolation, although it does not address failures explicitly. The first prototype of Seawall was implemented on Windows 7 and Hyper-V.

Detailed quantitative comparisons of the architectures mentioned above, based on forwarding scheme, bandwidth guarantee, scalability, QoS, and deployability, are summarised in Table 5. In this comparison, all the architectures provide QoS in VNs except Seawall. QoS is measured by calculating network performance; it focuses purely on a technology-driven perspective and is evaluated using classical network performance metrics such as latency, jitter, and throughput. QoS and application-specific performance metrics are quantitative. QoS is achieved in all the VN architectures except Seawall by allocating bandwidth to each virtual link; Seawall shares bandwidth among tenants on the basis of weights and provides neither guaranteed bandwidth allocation nor expected performance. Alongside QoS, a further performance paradigm deserves attention: Quality of Experience (QoE).

• QoE: QoE is the feedback given by users on the services provided by a system. User feedback depends on how satisfied the user is with the usability, accessibility, and integrity of the QoS. QoE is measured by surveys and Mean Opinion Score (MOS) methods and is qualitative; it is based not only on QoS but also on non-technical aspects such as end-user feelings and reactions. Nowadays, national and international service-provider companies inquire into users' satisfaction with their services by engaging users directly through different online applications. Overall, the quality of the system depends on both QoS and QoE, and multiple users may perceive different quality from the same service on the same system. In practice, calculating QoE is a challenging task because it depends on three factors. First, the human influence factor is based on age, gender, and the user's mood. Second, the system influence factor is based on the responsiveness of the system, bandwidth, delay, jitter, screen resolution, packet loss, display size, and so on.
Third, the context influence factor is based on location, time, interpersonal relations, and the economic context. QoE is an emerging multidisciplinary field and an important metric in the design and implementation of video-streaming systems, where high traffic demand and poor network performance can strongly affect the user's experience; in live audio/video streaming and online gaming applications, packet loss affects QoE.

Security
Security plays a vital role in any network, and IPv6 provides built-in security features. Despite these features, the IPv6 network faces many challenges, among them some new types of attack. Network security is a significant issue, especially when moving towards a virtualized NGN and during the coexistence of IPv4 and IPv6 networks. Some kinds of attack affect both the IPv4 and IPv6 architectures without discrimination, for example sniffing attacks, flooding attacks, man-in-the-middle attacks, and application-layer attacks. A set of attacks with countermeasures is shown in Table 6. In a sniffing attack, an intruder can easily capture private data sent in plain-text form with the help of sniffer tools during transmission over the network; sniffing can be avoided by using proper encryption, and several encryption techniques, such as DES, 3DES, and AES, are available for data confidentiality. In a flooding attack, the attacker hits network devices, routers, and servers: the device is kept busy with a large amount of network traffic and goes out of service. This is also called a DoS attack, and a proper IPS is used to avoid it. In a man-in-the-middle attack, an intruder can easily capture data, alter it, and then transmit it to its destination if the data are not secured; the IPv6 header itself has no integrity mechanism, so hashing is used to attain data integrity, and hashing and encryption algorithms are used within the IPsec protocol to protect data from intruders during transmission. Application-layer attacks are the most common attacks in both IPv4 and IPv6 networks: different types of viruses and worms try to destroy data, and updated anti-virus software is installed to avoid these attacks. Although IPv6 introduced and implemented a built-in security feature in the form of the extension header, some new security threats directly related to IPv6 networks have arisen. Some of them are:

• Reconnaissance attacks: In this type of attack, an intruder collects essential data about the targeted network through investigation and by engaging with systems, using active methods, different scanning techniques, or passive data mining; this information can be used in further attacks. The intruder tries to trace the IP addresses used in a network with the help of "PING sweeps"; the PING command helps to find accessible systems and to scan ports. In IPv6, the much larger subnet size makes brute-force address sweeps harder, but certain types of multicast address still help an attacker to identify resources in the network easily. The software tool Nmap is used to discover hosts and services, and attackers misuse such tools. Reconnaissance attacks can be mitigated by the following methods: deploy a suitable IPS at the border; apply IPv6 packet filtering where applicable; avoid using sequential addresses with DHCPv6; and configure MAC addresses manually when VMs are employed.
• ICMPv6 attacks: In IPv6 networks, the neighbor discovery mechanism depends on certain types of ICMPv6 message. Therefore, ICMPv6 messages cannot be blocked completely, as is often done with ICMP in IPv4; some ICMPv6 message types must be allowed for proper network operation, and these can be misused by an attacker. ICMPv6 attacks can be mitigated by enforcing a proper IPv6 packet-filtering technique.

• IPv6 routing headers: All IPv6 nodes are capable of processing routing headers according to the IPv6 protocol. An attacker can send a packet containing a "forbidden" address in the routing header to reach hosts while bypassing the network security devices: the accessible host will forward the packet to the destination address even though that destination address is filtered. Such a publicly accessible host can easily be used by an intruder for a DoS attack. Mobile IPv6 requires routing headers; enforcing a firewall can mitigate these attacks.

• Security issues during transition: The operation of IPv4 and IPv6 in conjunction with each other is handled by dual-stack and tunneling methods. In dual stack, IPv4 and IPv6 work at the same time, two separate tables are maintained, and the packets of each addressing scheme are sent to their respective stack. Dual stack has two categories: the first maintains both IPv4 and IPv6 but does not support tunneling, while the second provides tunneling support. IPv4 and IPv6 also face attack vulnerabilities in dual stack, and the tunneling mechanism can be misused: an intruder can evade ingress-filtering checks, so an IPv4 or IPv6 network address can be hijacked and used for Denial of Service attacks. Network designers and security specialists need to understand the security implications of the transition mechanisms. To minimise the security threats during the coexistence of IPv4 and IPv6 networks, dedicated security appliances such as firewalls and IPS are used; when a firewall is active, tunneled traffic may be blocked, and security specialists enable it by permitting IP protocol field value 41. NFV allows network functions to be accomplished in VMs rather than in dedicated devices, but when different Virtual Machines (VMs) share resources, security issues and attacks may increase. These attacks are of two types: network-based security challenges and VM-related issues. Network-function-specific threats are attacks on network functions or resources, for example spoofing, sniffing, and DoS; they relate to the attacker's capabilities and to the physical arrangement of the network, and they are countered with packet-filtering firewalls and IDS. General virtualization-related threats are security issues in the virtualized infrastructure: physical infrastructure is shared virtually among multiple entities, which brings new security vulnerabilities.

• Network domain: The network domain manages the VNFs and refers to the shared logical networking layers (vSwitches and vRouters) and shared physical NICs. Sharing multiple logical network layers over a single physical NIC creates security threats, which can be overcome by adopting secure networking techniques such as TLS, IPsec, or SSH.

INCREDULOUS PERFORMANCE
A meticulous performance result has been obtained by transferring a data file over IPv6 and IPv4. It can be shown that, during the transfer of data, the IPv6 protocol exhibited a higher delay level compared with IPv4, as seen in Figure 2.
It has been found that IPv4 is still generally faster than IPv6, although for a significant fraction of measurements IPv6 is the faster protocol. The size of the transferred file also affects the speed performance of both IPv6 and IPv4. Several variables might affect and lower throughput during file transfer over IPv6 tunneling compared with IPv4:

[Figure 2. Performance metric of IPv6 over IPv4 during file transfer]

• Packet header size: the packet header for IPv6 is much larger than the IPv4 standard, so the implementation of IPv6 introduces concerns related to extended packet headers. The IPv4 packet header size of 20 bytes is doubled to at least 40 bytes for IPv6.

• Number of hops: the number of hops also impacts, and therefore decreases, the efficiency as the file travels down the network route to its destination. The main causes of this delay are factors such as serialization, packetization, coding, propagation, de-jitter buffering, and processing.

IPv6 was launched as the next-generation Internet protocol with several new features, and ISPs have no choice but to shift their existing traditional IPv4 networks towards IPv6. A traditional network is based on proprietary hardware and provides services through dedicated devices, which increases expenditure, electricity consumption, and the difficulty of managing and controlling services. The virtualization paradigm was introduced in networking to overcome the issues present in the physical network. The NFV idea was proposed as a new emerging technology to design, deploy, and manage networking services at lower cost and with lower energy consumption through the decoupling of services from proprietary physical network equipment; it also provides many benefits in terms of platform openness, improved operating performance, operational efficiency, scalability, and flexibility. Network operators are trying to shift the traditional IPv4 physical network to a virtualized IPv6 network, but the infrastructures and architectures of these two network models are different. The transition process is slow and cannot be completed in a short time because billions of devices are in use all over the world; therefore, IPv4 and IPv6 will coexist for a long time. This coexistence has created several core issues, namely packet traversing, routing scalability, guaranteeing network performance, and security, during the transition. In this comprehensive survey, we focused on all these challenges during the transition process and provided corresponding solutions; moreover, we highlighted the limitations of these solutions and suggested some new research directions.
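As a quick numeric check of the header-size factor discussed above, the default packet headers built by the scapy library reproduce the 20-byte versus 40-byte difference (a minimal sketch, assuming scapy is installed):

```python
from scapy.all import IP, IPv6

# Fixed header sizes: 20 bytes for an optionless IPv4 header, 40 bytes for IPv6
print(len(IP()))    # 20
print(len(IPv6()))  # 40
```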
7,308
2020-07-26T00:00:00.000
[ "Computer Science" ]
Evaluation of Thermal Comfort and Energy Consumption of Water Flow Glazing as a Radiant Heating and Cooling System: A Case Study of an Office Space: Large glass areas, even high-performance glazing with Low-E coating, could lead to discomfort if exposed to solar radiation due to radiant asymmetry. In addition, air-to-air cooling systems affect the thermal environment indoors. Water-Flow Glazing (WFG) is a disruptive technology that enables architects and engineers to design transparent and translucent facades with new features, such as energy management. Water modifies the thermal behavior of glass envelopes, the spectral distribution of solar radiation, the non-uniform nature of radiation absorption, and the diffusion of heat by conduction across the glass pane. The main goal of this article was to assess energy consumption and comfort conditions in office spaces with a large glass area by using WFG as a radiant heating and cooling system. This article evaluates the design and operation of an energy management system coupled with WFG throughout a year in an actual office space. Temperature, relative humidity, and solar radiation sensors were connected to a control unit that actuated the different devices to keep comfortable conditions with minimum energy consumption. The results in summer conditions revealed that a mean radiant temperature between 19.3 and 23 °C helped reduce the operative temperature to comfortable levels when the indoor air temperature was between 25 and 27.5 °C. The Predicted Mean Vote in summer conditions was between 0 and −0.5 during working hours, within the recommended values of the ASHRAE-55 standard. Introduction Obsolete equipment, design flaws, and inappropriate use can account for up to 20% of the energy that buildings use over the operation period [1]. Dwellings, offices, educational facilities, and commercial buildings show different consumption patterns. For example, commercial buildings exhibit high energy consumption associated with heating, ventilation, and air conditioning (HVAC) systems and lighting [2]. Office buildings have a high amount of energy use by computers and monitors, while educational buildings have significantly more energy consumption for lighting [3]. Office buildings are likely to have higher cooling demands due to the impact of internal gains. Description of the Facility The testing facility was an office space of the Department of Applied Mathematics in the School of Aeronautics and Space Engineering in Madrid, Spain (40.44389° N, −3.7261972° E). Two faculty members occupy the room from 8:00 a.m. to 8:00 p.m., and there are meetings with students during office hours. The occupancy is limited to six people at a time. The facility validated the WFG behavior as a component of the heating and cooling system. Figure 1 illustrates the floor plan. Four transparent WFG panels (WFG1, WFG2, WFG3, and WFG4) separated the corridor from the office. The thermal and spectral properties of these transparent panels were carefully selected to absorb the maximum heat from the beam solar radiation, which entered through the main glazed facade, impinging on the WFG in the afternoon for four to five hours, depending on the season. The northeast facade was an insulated opaque wall, and the rest of the interior partitions were translucent WFG (WFG_TP01 to WFG_TP09). In all, there were thirteen WFG panels of 1500 mm height by 1300 mm width. The energy management system was placed outdoors, on the north-east facade.
The electronic control unit (ECU) monitored the temperatures of the WFG and the indoor, corridor, and outdoor temperatures. Table 1 shows the thermal transmittance and areas of the office envelope. The opaque internal partitions were modular walls with a melamine panel finish (0.5 cm) and rock-wool acoustic insulation (3 cm). The northeast facade was an insulated opaque wall made up of a zinc plate external finish (1 mm), a ventilated air chamber (3 cm), a brick wall (11 cm), rock-wool thermal insulation (6 cm), an air chamber (5 cm), and a plaster board (12 mm). The roof was composed of a zinc plate external finish (1 mm), a ventilated air chamber (3 cm), a metal deck with concrete (10 cm), an air chamber (10 cm), rock-wool thermal insulation (6 cm), and a plaster board (12 mm). Figure 2 shows the space with transparent WFG (a) facing south-west and translucent interior partitions (b). The former was double glazed; each glass pane was composed of 8 mm planiclear, 1.54 mm Saflex Rsolar SG41, 8 mm planiclear, and a 20 mm water chamber. The latter was double glazed; each glass pane was formed of 10 mm planiclear, 1 mm translucent polyvinyl butyral (PVB) 000A CoolWhite, 3 mm planiclear, and a 16 mm water chamber. The mass flow rate was set to 2 L/min through the transparent WFG and 1 L/min through the translucent glazing. The transparent panes were exposed to western solar radiation and had to absorb a large amount of heat; in contrast, the translucent panes were designed to deliver heat or cold in winter or summer. Table 2 shows the estimated heating and cooling loads in the office space. Ventilation loads (Vent) and internal loads (IL) were calculated for an occupancy of six people and average office equipment [38,39]. The total glazed surface was 7.8 m² of transparent WFG and 17.55 m² of translucent interior partitions. The wall-to-window ratio of the wall exposed to solar radiation was 40%. The total area of WFG radiant panels was 25.35 m², with a floor area of 40 m². The expected power delivered by the WFG was 130 W/m² when the difference between the circulating water and the indoor air temperature was 13 °C. The dew point for indoor air at 27 °C and 40% relative humidity was 12 °C.
Therefore, keeping the WFG inlet temperature above 12 °C, the indoor air temperature at 27 °C, and the average water temperature at 14 °C, the delivered cooling power would be 130 W/m². With a total WFG surface area of 25.35 m², the total cooling power was 3295 W, which was above the predicted cooling loads shown in Table 2. The cost of the system depends on a few factors, including the dimensions of the glass, its thickness, and the distance between the energy management system and the panels. A typical 2 m² double glass panel costs 900 USD (around 450 USD/m²), including the piping and individual circulating devices. Installation of WFG requires a professional team, which could run 50 to 70 USD per hour.
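The sizing argument above can be checked with two lines of arithmetic; the sketch below simply reproduces the numbers quoted in the text.

```python
# Quick check of the WFG cooling-power sizing quoted above (values from the text).

specific_power = 130.0   # W/m² of glazing, at a 13 °C water-to-air difference
wfg_area = 25.35         # m², total radiant WFG surface
floor_area = 40.0        # m²

total_cooling_power = specific_power * wfg_area          # -> 3295.5 W, as quoted
power_per_floor_area = total_cooling_power / floor_area  # ≈ 82 W per m² of floor

# Condensation constraint: inlet water must stay above the dew point of the
# indoor air (12 °C at 27 °C and 40% relative humidity, per the text).
def inlet_ok(t_inlet: float, dew_point: float = 12.0) -> bool:
    return t_inlet > dew_point

print(f"total cooling power: {total_cooling_power:.0f} W")
print(f"per floor area:      {power_per_floor_area:.0f} W/m²")
print("14 °C average water safely above dew point:", inlet_ok(14.0))
```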
Figure 3 shows the schematics of both circuits. The energy management circuit consisted of a 370 L buffer tank, an expansion tank, an air-to-water heat pump, and an air heat exchanger. The heat pump's (Saunier Duval Genia Air 8/1, Power A7/W35 = Power A35/W18) nominal power was 7.60 kW in winter (at an outdoor air temperature of 7 °C and inlet water temperature of 35 °C) and in summer (at an outdoor air temperature of 35 °C and inlet water temperature of 18 °C). The heat pump was selected for commercial reasons, regarding availability and budget constraints. Some malfunctions and operating issues related to the oversized cooling and heating power are addressed in the following sections. The air heat exchanger works when the outdoor air temperature is low enough to cool down water. This cooling mode can only be used when outdoor ambient air temperatures are below 12 °C. When the air heat exchanger is used for free cooling, the control system uses valves to isolate the heat pump from the rest of the loop, and the heat exchanger is used like a chiller. Once the buffer tank is heated or cooled down, the water flows to transfer heat or cold to the circulating device. Then, the secondary circuit transports the heated or cooled water to thirteen radiant WFG units.

A control system with a thermostat based on the indoor temperature turned the heat pump and the flow rate ON and OFF. The secondary circuit was made up of two branches: one that transferred heat or cold to the translucent partitions and another one for the transparent WFG modules. Each transparent WFG module had a circulating device (CDi). The mass flow rate through the transparent modules was set to ṁ = 2 L/min·m² when the system was ON. All the translucent WFG panels were connected to the same circulating device (CD TP), and the flow rate was ṁ = 1 L/min·m². The influence of the mass flow rate on the ability to deliver or absorb heat, and the recommended values, have been studied in previous articles [37]. Transparent WFG panels are exposed to solar radiation, so the mass flow rate had to be higher to absorb heat in summer and keep the water temperature within acceptable values. The electronic control unit actuated the WFG circulating devices, the heat pump, and the air heat exchanger using the basic commands of ON and OFF, with the control logic explained in Table 3. There was a mechanical ventilation system that met the requirements of the Spanish Regulation of Thermal Installations in Buildings (RITE) for ventilation of office spaces (12.5 L per second per person) [40]. The mechanical ventilation provided conditioned air and operated over the working hours (8:00 a.m. to 8:00 p.m.) at a constant air volume. However, it was not a component of the controlled energy management system. The lack of control of the ventilation device was one of the system's uncertainties, because high relative humidity can cause condensation on radiant panels operating in cooling mode and can affect the latent loads. Figure 3. Schematics of the testing facility. The primary circuit connects the energy management devices (heat pump, air heat exchanger, and buffer tank). The secondary circuit goes from the buffer tank to the WFG. Tables 3 and 4 show the proposed energy management strategy in the heating and cooling modes. The heat pump (HP) was set to operate during working hours, whereas the air heat exchanger (AHX) operated only in cooling mode during non-working hours. The first condition was related to the indoor air temperature (T_int), and the second condition depended on the difference between the outdoor air temperature (T_ext) and the tank temperatures (T_tank_top, T_tank_bottom).
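The ON/OFF logic of Tables 3 and 4 can be condensed into a few lines of code. The sketch below is an illustration assembled from the setpoints quoted elsewhere in the text (heating below 20 °C, cooling above 25 °C, night-time free cooling when the tank is more than 10 °C warmer than the outdoor air); it is not the authors' control code.

```python
from dataclasses import dataclass

@dataclass
class State:
    t_int: float        # indoor air temperature, °C
    t_ext: float        # outdoor air temperature, °C
    t_tank_top: float   # buffer tank top temperature, °C
    working_hours: bool

def control(s: State) -> dict:
    """Illustrative ON/OFF logic mirroring Tables 3 and 4 as described in the text."""
    hp_heating = s.working_hours and s.t_int < 20.0   # heating setpoint
    hp_cooling = s.working_hours and s.t_int > 25.0   # cooling setpoint
    # Free cooling at night: air heat exchanger runs only when the tank is
    # at least 10 °C warmer than the outdoor air.
    ahx = (not s.working_hours) and (s.t_tank_top - s.t_ext > 10.0)
    return {"HP_heat": hp_heating, "HP_cool": hp_cooling,
            "AHX": ahx, "circulators": hp_heating or hp_cooling or ahx}

# Example: a hot summer afternoon during working hours
print(control(State(t_int=26.5, t_ext=33.0, t_tank_top=18.0, working_hours=True)))
```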
Description of the Sensors To measure the water heat gain of the WFG panels, flow meters and inlet and outlet digital thermometers were installed in the primary and secondary circuits. The DS18B20-PAR digital thermometers communicated over a one-wire bus with the energy control unit (ECU). They had an operating temperature range of −55 to +100 °C and an accuracy of ±0.5 °C. A Delta Ohm LP PYRA 03 pyranometer, placed on the vertical south-western facade, allowed measurement of the solar irradiance. It is a second-class pyranometer according to ISO 9060 standards and the World Meteorological Organization (WMO); it had to be placed outdoors because obstacles and reflections can affect the measurements. The same monitoring equipment has been described in other articles [37]. Figure 4 shows the position of the temperature sensors in the WFG and the circulating device. The flow meter (s) measures the flow rate at the inlet of the WFG panels, and the flow meter (p) measures the flow rate of the primary circuit. The temperature sensors T_i2 and T_o2 measure the inlet and outlet temperatures of WFG 2, respectively, and a second pair of sensors measures the inlet and outlet temperatures of the primary circuit. Every WFG module had a circulator that comprised a water pump, a plate heat exchanger, and two one-wire sensors inserted into two pocket wells to measure the inlet and outlet temperatures of the glazing. In addition, one module was monitored with a digital flow meter for the primary circuit and another digital flow meter for the secondary circuit. Together with the inlet and outlet temperatures, these flow meters allowed validation of the design flow rate of the glazing as well as having precise actual values for the water heat gain of each WFG panel. The one-wire digital thermometers were inserted into the pocket wells. Each sensor had a unique 64-bit serial number etched into it, which allowed a considerable number of sensors to share one data bus. There were four transparent WFG modules and two thermometers per module, plus the inlet and outlet temperatures for the primary circuit, all measured on the same data bus. Thermostats and timers controlled the heating and cooling system. All indoor temperatures were measured 150 cm above the floor level. The main objective of this strategy was to maintain a comfortable indoor temperature and to minimize energy consumption using solar energy harvesting and free cooling. Table 5 presents a description of the sensors and parameters that were measured. The WFG transparent panels were located in a corridor with south-west orientation. When the solar radiation impinged on the glazing, the water absorbed the energy. After analyzing the indoor temperature, the EMS decided whether to store the heat or to distribute it through the rest of the translucent interior partitions. The energy surplus could be stored in the buffer tank. If there was no solar energy to harvest, or not enough energy harvested in the buffer tank, the heat pump would work to satisfy the demand. Generally, an office building demands cold throughout the year due to its high internal heat load. In winter, the outdoor temperature is low enough to dissipate the internal heat load by means of an air heat exchanger. The heat pump electricity consumption was not measured; it was estimated with the heat pump thermal power, the coefficient of performance, and the energy efficiency ratio provided by the manufacturer.
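For readers unfamiliar with the one-wire arrangement described above, the sketch below shows how a bus of DS18B20 probes is typically polled on a Linux host through the w1 sysfs interface. The path and device-ID pattern are generic Linux conventions, not details from the paper, and the authors' ECU may read the bus differently.

```python
import glob

def read_ds18b20(device_dir: str) -> float:
    """Read one DS18B20 temperature (°C) via the Linux w1 sysfs interface."""
    with open(f"{device_dir}/w1_slave") as f:
        lines = f.read().splitlines()
    if not lines[0].endswith("YES"):           # on-wire CRC check failed
        raise IOError(f"bad CRC on {device_dir}")
    # The second line ends with "t=<milli-degrees>", e.g. "t=23125"
    return int(lines[1].split("t=")[1]) / 1000.0

# Each probe appears under its unique 64-bit serial number (family code 28
# for DS18B20), which is why many sensors can share a single bus.
for device_dir in sorted(glob.glob("/sys/bus/w1/devices/28-*")):
    print(device_dir.rsplit("/", 1)[-1], read_ds18b20(device_dir))
```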
Results This section presents the monitored temperatures and the power efficiency of the WFG modules. The implementation of the different energy strategies was validated. By analyzing the system's performance, the energy strategy was improved, achieving significant energy savings. Finally, the power performance of each WFG module was obtained by measuring its inlet and outlet temperatures and flow rate. Analysis in Summer Conditions Figure 5 shows the system temperatures and the irradiance curve of a sample summer week from 10 July 2019 to 16 July 2019. T_i2 and T_o2 illustrate the inlet and outlet temperatures of the WFG. T_int is the indoor temperature, and T_ext is the exterior temperature. T_ext_C corresponds to the temperature in the corridor between the office and the exterior. The first day, 10 July 2019, was clear, with some convective clouds between 16:30 and 18:00. On clear days, direct beam radiation prevailed over diffuse radiation. The typical irradiance curve (Sun_rad) reached maximum levels above 700 W/m². From 9:00 a.m. to 1:00 p.m., the south-west facade was shaded due to geometrical obstructions, and the irradiance was mainly diffuse, reaching values around 200 W/m². However, in the afternoon, the facade was exposed to direct solar radiation, and the corridor temperature rose to 35 °C. On 11 July 2019, the indoor and outdoor temperatures showed a similar performance, although the oscillations of the inlet and outlet temperatures were different from those of the previous day. On 12 July 2019, the solar irradiance showed irregular values because of clouds, and this affected the temperature of the corridor, which was slightly above 30 °C. Over the weekend, on 13 July 2019 and 14 July 2019, the mass flow rate was 0 and the heat pump did not operate. The inlet and outlet temperatures of the WFG (T_i2 and T_o2) did not show any difference and reached peak values of 32 °C. The indoor air temperature reached a maximum of 34 °C, whereas the temperature in the corridor (T_ext_C) was 39 °C. A WFG circuit is a closed loop, and there are two cases: mass flow rate ṁ = 0 or ṁ = design flow rate. After two weekend days without operation, the indoor temperature rose to 32 °C, making it necessary to cool down the office. Figure 5 shows that the inlet and outlet temperatures dropped on Sunday 14/07/2019 before 7:00 a.m., although the heat pump did not operate that day; the same behavior was observed on Monday 15/07/2019 before 7:00 a.m. The reason was that the air heat exchanger operated on both days for two hours, when the difference between the top tank water temperature (T_tank_top) and the outdoor air temperature (T_ext) was above 10 °C.
Figure 6 shows the detailed evolution of temperatures on two consecutive days. Figure 6a shows that the irradiance curve on 10 July 2019 had some oscillations in the afternoon and the outdoor temperature declined, which indicated the presence of clouds. The inlet and outlet temperatures (T_i2, T_o2) showed that the heat pump worked at three cycles per hour. The heat pump parameters were fixed to meet the manufacturer's requirement for minimum cycle times. Figure 6b shows that the minimum time between starts was at least forty minutes. On 10 July 2019 at 7:00 p.m., there was a peak in the corridor temperature (T_ext_C), and this peak did not occur on 11 July 2019. The corridor had a cooling system that was not monitored or controlled by the studied energy management system, and its temperature was a boundary condition of the studied space. The indoor air temperature rose to 27 °C on 10 July 2019 and to 29.5 °C on 11 July 2019. Although these temperatures might seem too high, due to the effect of the radiating panels and a low mean radiant temperature there was thermal comfort in the space, as shown in the discussion section. Figure 7 shows the temperatures and the irradiance curve of a sample winter week from 08 January 2020 to 14 January 2020. On sunny working days (from 08 January 2020 to 10 January 2020), the outdoor temperature showed typical winter values for Madrid, with a minimum temperature slightly below 0 °C and a maximum temperature between 10 and 15 °C.
The solar radiation impinged on the south-west facade from 11:00 a.m., with a peak value of 300 W/m². The indoor air temperature (T_int) was below comfort until 7:00 p.m. because the heating system was off. In the morning, the heat pump started working, and the radiant WFG panels delivered heat. In the afternoon, the temperature in the corridor (T_ext_C) rose to 30 °C, which helped to heat the office air temperature (T_int) to 22 °C. The solar radiation in the afternoon made the heat delivered by the WFG unnecessary. Over the weekend (11 January 2020 and 12 January 2020), the heat pump was not operating, and the indoor air temperature declined and reached its lowest value (14 °C) on Monday 13 January 2020 at 7:00 a.m. Due to the solar radiation, the temperature in the corridor rose to 28 °C. On weekend days, the heat pump did not operate in the morning, so the indoor temperature continued to drop until the afternoon, when the solar radiation and the corridor overheating contributed to raising the indoor air temperature from 17 to 19 °C on 11 January 2020 and from 15 to 17 °C on 12 January 2020. Nevertheless, the indoor temperature on 11 January 2020 at 7:00 a.m. was 18 °C, and on 13 January 2020, it was 14 °C after two days without operating the heat pump. On working days, the indoor air temperature was above 18 °C at the beginning of the working hours. On 13 January 2020 and 14 January 2020, the solar irradiance was low, and the outdoor temperature variation over the day was only 5 °C. The heat pump operated most of the working hours, unlike on sunny days, when it operated only in the morning.
Figure 8a illustrates a sunny winter day when the solar irradiance reached a peak value of 300 W/m² and the outdoor temperature ranged from −1 to 11 °C. The WFG worked in heating mode from 7:00 a.m., when the indoor air temperature was 18 °C, to 9:30 a.m., when the indoor air temperature reached 20 °C. The indoor air temperature continued to rise to 22 °C because the corridor air temperature reached a peak of 30 °C. Figure 8b shows a winter day with little solar radiation and an outdoor temperature that ranged from 4 to 10 °C. The WFG started working in heating mode at 7:00 a.m., when the indoor air temperature was 16 °C. It took the system four hours to increase the indoor air temperature to 20 °C. The heat pump was connected to the buffer tank, so the heating time seemed too long due to the thermal inertia. Starting the heat pump four hours before the working hours would be an excellent strategy to improve comfort conditions on winter days after the holidays. Figure 9 presents a sample week of February, from 19 February 2020 to 25 February 2020. The minimum outdoor air temperature was 0 °C on 20 February 2020, and the maximum temperature was 21 °C on 24 February 2020. The indoor air temperature (T_int) in the office space maintained comfortable conditions operating in a free-floating temperature regime with zero energy consumption. The WFG circuit was never empty. During the free-floating regime, the mass flow rate was 0 and the heat pump was not in operation. The temperature in the corridor (T_ext_C) showed peak values above 32 °C in the afternoon. The solar irradiance on the west facade (Sun_rad) reached a peak of 480 W/m². Figure 10 illustrates the performance on two consecutive February days.
Although the minimum outdoor air temperature was 0 °C on 19 February 2020 and 20 February 2020, the peak solar radiation (440 W/m²) increased the temperature inside the studied office in the afternoon. When the indoor air temperature reached 25 °C, the water inlet temperature dropped, and the outlet temperature was above the inlet. As stated in Table 3, the heat pump was set to operate in cooling mode when the indoor temperature was above 25 °C. The heat pump cooled down the water three times between 5:00 p.m. and 7:00 p.m. on 19 February 2020, and only once, at 6:30 p.m., on 20 February 2020.

Table 6 shows a summary of the energy performance on four days in different seasons. On 10 July 2019, the system was working in cooling mode. The heat removed from the office space by the transparent WFG (kWh_WFG) and by the translucent partitions (kWh_WFG_TP) was 4.9 kWh. The transparent WFG absorbed the most significant amount of heat during the working hours because of the high mass flow rate (ṁ = 2 L/min·m²), whereas the translucent interior partitions performed better during the night. The contribution of the air heat exchanger (kWh_AXH) during the night was negligible compared with the heat pump, which operated from 7:00 a.m. to 8:00 p.m. Analysis in Winter Conditions On 09 January 2020, the heat delivered by the translucent WFG (kWh_WFG_TP) was 18.7 kWh, whereas the total amount of energy delivered by the transparent WFG (kWh_WFG) was 3.4 kWh. In the afternoon, the transparent WFG circuit was stopped to allow solar radiation to enter the office space. The translucent WFG supplied most of the heat during the working hours. The thermal energy delivered by the heat pump (kWh_HP) was 7.12 kWh from 7:00 a.m. to 11:00 a.m. In the afternoon, the thermal inertia of the tank and the solar radiation made it unnecessary to operate the heat pump again. On 14 January 2020, the contribution of the transparent WFG was higher because there was little solar radiation in the afternoon. The heat pump operated over the working hours and released twice as much thermal energy as on 09 January 2020. On 20 February 2020, the system was working in cooling mode. The air heat exchanger (kWh_AXH) was cooling down the buffer tank during the night, and the heat pump operated during the working hours. The heat removed by the translucent WFG (kWh_WFG_TP) was 4.8 kWh. The energy delivered by the heat pump was 0.6 kWh, and the thermal inertia of the buffer tank was enough to keep the indoor temperature between 20 and 26 °C. In Section 4.4, these conditions are assessed to evaluate the occupants' comfort.

Discussion Radiant WFG panels were part of the heating and cooling system. They affect the indoor air temperature and help reduce the mean radiant temperature and, therefore, the operative temperature. The thermal problem of the glazing is coupled with the thermal problem of the room, and the indoor temperatures should be measured. Validation of Energy Management System The power released or absorbed by the water (P), measured in watts per square meter (W/m²), is given by Equation (1): P = ṁ · c · (T_o − T_i), where ṁ is the mass flow rate (kg/s·m²), c (J/kg·°C) is the specific heat of water, and T_o and T_i are the temperatures of the water leaving and entering the glazing, respectively (°C). The mass flow rate is the mass of fluid passing a point per unit of time. In summer conditions, the transparent WFG was set to operate during working hours; it had to absorb most of the solar radiation impinging on the glazing.
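Equation (1) is straightforward to apply. The sketch below evaluates it for a transparent panel at the design flow rate; the inlet and outlet temperatures are illustrative sample values, not measurements from the paper.

```python
# Water heat gain per Equation (1): P = m_dot * c * (T_o - T_i).

C_WATER = 4186.0  # J/(kg·°C), specific heat of water

def wfg_power(flow_l_min_m2: float, t_in: float, t_out: float) -> float:
    """Power absorbed (+) or released (−) by the water, in W per m² of glazing."""
    m_dot = flow_l_min_m2 / 60.0   # L/min·m² -> kg/s·m² (1 L of water ≈ 1 kg)
    return m_dot * C_WATER * (t_out - t_in)

# Transparent panel at the design flow rate of 2 L/min·m², warming by 1.5 °C
# while absorbing solar radiation:
print(f"{wfg_power(2.0, t_in=16.0, t_out=17.5):.0f} W/m²")   # ≈ 209 W/m²
```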
Figure 11 illustrates the buffer tank temperatures and the thermal energy provided by the heat pump in a sample summer week. The top tank temperature (T_tank_top) showed that the heat pump was set to work when T_tank_top was between 15 and 18 °C. On 10 July 2019, it worked at three cycles per hour. On the following days, it was fixed to operate with a minimum time between starts of forty minutes. Over the weekend, the heat pump did not operate, and the buffer tank temperature reached 35 °C. The maximum energy delivered by the heat pump (31.13 kWh) took place on 11 July 2019, when the solar irradiance reached its maximum value without any obstructions, according to Figure 5. When the heat pump was working in heating mode in winter conditions, the transparent WFG was set to operate in the morning. It did not operate in the afternoon because the solar radiation on the south-west partition helped reduce the heating load. Figure 12 shows the tank temperatures (T_tank_top, T_tank_middle, T_tank_bottom) and the thermal consumption of the heat pump (kWh_heatpump), measured with the water flow rate and the difference in water temperature between the inlet and outlet of the heat pump. On sunny days, the heat pump operated mainly in the morning because the solar radiation heated up the office space in the afternoon. On 09 January 2020, when the outdoor air temperature ranged from −1 to 11 °C with a peak solar radiation of 300 W/m², the heat pump heated the buffer tank from 7:00 a.m. to 9:00 a.m. The thermal inertia of the tank and the solar radiation in the afternoon made it unnecessary to operate the heat pump again. The total energy consumption per day was 7.12 kWh. The average heat pump thermal energy was 7 kWh on 08 January 2020, 09 January 2020, and 10 January 2020, whereas on Monday 13 January 2020, a cloudy winter day after non-working days, the total energy consumption was 20.05 kWh. The warm-up response was too slow, and it took four hours to raise the temperature to comfort conditions. Over the weekend, the tank temperature dropped, and this made it necessary to increase the energy supplied by the heat pump. The lack of solar radiation in the afternoon was the reason to operate the heat pump until the end of the working hours. Figure 13 shows the tank temperatures (T_tank_top, T_tank_middle, T_tank_bottom) and the thermal consumption of the heat pump (kWh_heatpump) on six February days. The heat pump operated in cooling mode and cooled down the top tank temperature in the afternoon. On 21 February 2020 and 22 February 2020, the heat pump did not operate, and the buffer tank was in a free-floating regime. The average energy consumption per day was 1 kWh on working days. The difference between the heat pump consumption on 14 January 2020 (15 kWh) and on 20 February 2020 (1.1 kWh) can be explained because the peak solar radiation on 09 January 2020 was below 300 W/m² and the outdoor temperature was above 10 °C for four hours, whereas on 20 February 2020, the peak solar radiation was 450 W/m² and the outdoor temperature was close to 18 °C for 7 h.
Tables 7 and 8 show the estimated heating (positive) and cooling (negative) loads. Ventilation loads (Vent) were calculated with the number of occupants (n) at each hour. Internal loads (IL) were calculated with the number of occupants, the metabolic rate of typical office activity, and 20 W/m² for lighting and equipment. Solar radiation (SR) was taken from Figure 6a, with a surface area of 7.8 m² and a solar heat gain coefficient of 0.5. The same procedure was repeated to calculate the values on five sample days.
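The load-estimation procedure for Tables 7 and 8 can be sketched as follows. The 20 W/m² lighting/equipment allowance, the 7.8 m² glazed area, and the 0.5 solar heat gain coefficient are the values quoted above; the 100 W per occupant metabolic figure and the sample inputs are illustrative assumptions, not the paper's exact parameters.

```python
# Illustrative hourly heat-gain estimate following the Tables 7/8 procedure.

FLOOR_AREA = 40.0      # m²
GLAZED_AREA = 7.8      # m², transparent WFG exposed to the sun
SHGC = 0.5             # solar heat gain coefficient used in the text

def hourly_gains_w(occupants: int, irradiance_w_m2: float) -> float:
    internal = occupants * 100.0 + 20.0 * FLOOR_AREA   # people + lighting/equipment
    solar = irradiance_w_m2 * GLAZED_AREA * SHGC       # through the transparent WFG
    return internal + solar

# Mid-afternoon example: 2 occupants, 700 W/m² impinging on the facade
print(f"{hourly_gains_w(2, 700.0):.0f} W")   # -> 3730 W of heat gains
```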
Estimation of Final Energy Consumption Tables 9 and 10 compare the thermal energy consumption of the air-to-water heat pump with the calculated cooling and heating loads. The values are taken from Figures 11 and 12 (kWh_heatpump) and from Tables 7 and 8 by adding the heating and cooling loads over the working hours. Final energy (FE) consumption, non-renewable final energy (NRFE) consumption, and the CO2 emissions in kg are primary energy factors in calculating the energy performance of buildings, according to the Energy Performance of Buildings Directive (EPBD 2018) [39]. The Spanish regulation of building thermal systems (RITE) recommends a conversion factor between final energy (FE) and non-renewable final energy (NRFE) of 1.954 [40]. The CO2 emission factor for electricity is 0.331. The final energy consumption and CO2 emissions were calculated for two different heat pumps. Table 11 illustrates the performance of the air-to-water heat pump in cooling and heating mode. The performance depends on the outlet temperature of the WFG (T_o = 15 °C in summer, T_o = 30 °C in winter) and the source inlet temperature of the heat pump (T_s,i = 20-35 °C in summer, T_s,i = 15-20 °C in winter). The outdoor temperature, T_ext, is shown in Figures 5 and 7, respectively. T_s,i values were taken from the top tank temperatures (T_tank_top) shown in Figures 11 and 12. The air-to-water heat pump shows a better coefficient of performance (COP) when the water temperature is close to 35 °C and a better energy efficiency ratio (EER) when the water temperature is close to 18 °C. The top tank temperatures (T_tank_top) in Figures 11 and 12 confirmed the range of optimal operating temperatures. Although the actual heat pump electrical energy consumption was not measured, the estimated COP and EER were taken from [41]. Table 11. Final energy analysis: air-to-water heat pump. Air-to-air heat pumps were also analyzed using the cooling and heating loads from Tables 9 and 10. The parameters that influence air-to-air heat pump performance are the dry-bulb exterior air temperature (T_ext_db) and the dry-bulb interior return air temperature (T_ri_db). Table 12 shows the final energy (FE), non-renewable final energy (NRFE), and the emitted CO2 for electricity of the air-to-air heat pump. The radiant WFG panel system coupled with a buffer tank and an air-to-water heat pump showed a non-renewable final energy (NRFE) consumption of 72.13 kWh in cooling mode and 24.29 kWh in heating mode, whereas the expected values for an air-to-air system were 93.56 kWh and 32.05 kWh in the studied summer and winter weeks. This resulted in final energy savings of 23% in summer and 24% in winter. The reductions in CO2 emissions were 3.63 kg/week in summer and 1.32 kg/week in winter. As stated in Section 2.1, the ventilation device was not a component of the energy management system, and its performance was not controlled. The ventilation load was estimated by multiplying the air flow by the specific enthalpy (kJ/kg) difference between indoor and outdoor conditions. In summer, the specific enthalpy of outdoor air at 31.3 °C and 35% relative humidity was 58.8 kJ/kg; at 26 °C and 36% relative humidity, the indoor air specific enthalpy was 46.7 kJ/kg. At a ventilation air flow rate of 75 L per second, the total ventilation cooling load over 12 h was 10.8 kWh. In winter, the indoor and outdoor specific enthalpies were 37.11 kJ/kg and 16.36 kJ/kg, respectively, and the ventilation load over the working hours was 13.24 kWh. The electrical consumption of the ventilation device, including the engine and the fan, was 3.24 kWh [42]. Air-to-Water Heat Pump The non-renewable energy consumption was 72 kWh in a summer week and 24 kWh in a winter week. The expected energy consumption projection throughout the year was 1700 kWh with a floor area of 40 m²; therefore, the yearly heating and cooling energy consumption was 42.5 kWh/m² per year. If the average energy savings compared to an air-to-air heat pump with multi-split units were 23%, the total non-renewable energy consumption (NREC) savings accounted for 391 kWh/year. The average price of electricity in Spain is 0.12 EUR/kWh [22], and the system overcost compared to traditional indoor wall partitions plus a split system can be 50 EUR/m². For 24 m² of radiant WFG panels, the expected payback period would be around 20 years. WFG technology is not competitive nowadays, so future research is needed in industrialization and standardization to bring down the initial costs.
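Using the two conversion factors quoted above (1.954 kWh of NRFE per kWh of final energy, and 0.331 kg of CO2 per kWh of final electrical energy), the reported savings and CO2 reductions can be reproduced directly; a minimal sketch:

```python
# Reproducing the savings and CO2 figures from the NRFE values quoted above.

NRFE_FACTOR = 1.954   # NRFE per kWh of final energy (RITE)
CO2_FACTOR = 0.331    # kg CO2 per kWh of final electrical energy

def compare(nrfe_wfg: float, nrfe_air: float) -> tuple[float, float]:
    savings_pct = 100.0 * (1.0 - nrfe_wfg / nrfe_air)
    # Convert the NRFE difference back to final energy before the CO2 factor
    co2_saved = (nrfe_air - nrfe_wfg) / NRFE_FACTOR * CO2_FACTOR
    return savings_pct, co2_saved

summer = compare(72.13, 93.56)   # -> 23% savings, 3.63 kg CO2/week
winter = compare(24.29, 32.05)   # -> 24% savings, 1.31 kg CO2/week
                                 #    (the paper rounds this to 1.32)
print(f"summer: {summer[0]:.0f}% savings, {summer[1]:.2f} kg CO2/week")
print(f"winter: {winter[0]:.0f}% savings, {winter[1]:.2f} kg CO2/week")
```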
Mean Radiant and Operative Temperatures Mean radiant temperature (MRT) expresses the influence of the room's surface temperatures on occupant comfort. The area-weighted method shown in Equation (2) is a simple way to calculate MRT, but it does not reflect the geometric position, posture, and orientation of the occupant, the ceiling height, or radiant asymmetry [29]. Equation (3) calculates the mean radiant temperature from the surrounding surfaces, considering their surface temperatures and the angle factor. The angle factor is a function of the shape and size of each surface and of its position with respect to the occupant, standing or seated. The surfaces of the room are assumed to be black, with high emissivity and no reflection. In this case, the angle factors weight the enclosing surface temperatures to the fourth power [28], where T_mr = mean radiant temperature (°C), T_N = surface temperature of surface N (°C, calculated or measured), A_N = area of surface N, and F_p-N = the angle factor between the person and surface N. The angle factors quantify the amount of radiant energy that leaves the human body and reaches each surface; they were calculated according to Figures B.2 to B.5 in [28]. If the difference between the indoor surface temperatures is relatively small (<10 °C), Equation (4) can be used: the MRT is calculated as the average of the surrounding surface temperatures weighted by the angle factors. If the temperature difference between indoor surfaces is below 10 °C, the MRT error calculated with Equation (4) will be less than 0.2 °C [28]. Equation (5) shows the formula to calculate the angle factor [43], where γ = A + B(a/c), and the parameters a, b, and c, defined in Figure 14, are related to the dimensions and distances between the occupant and the envelope. Table 13 shows the parameters A, B, C, and D used to calculate angle factors for seated persons and walls, floors, and ceilings; values are taken from [28]. Figure 14 illustrates the dimensions and geometry of the office space and the different surfaces considered to calculate the MRT for a seated person. The facing direction was ignored for simplification. The temperature of each rectangle (1 to 21) was measured to calculate the MRT. Due to the small differences, only five temperatures were taken into account: T1 = T6 = T1′ = T14 = T15 = T16 = T17 = T_ceiling; T3 = T3′ = T4 = T4′ = T9 = T11 = T12 = T18 = T19 = T20 = T21 = T_floor; T7 = T5 = T5′ = T_wall; T8 = T_WFG; T2 = T2′ = T_WFG-TP. A rough approximation of the operative temperature is the arithmetic average of the mean radiant temperature (MRT) of the heated space and the dry-bulb air temperature, valid if the air velocity is less than 0.2 m/s and the MRT is less than 50 °C. In cases where the air velocity is between 0.2 and 1 m/s, or where the difference between the mean radiant and air temperature is above 4 °C, ASHRAE 55 provides the formula shown in Equation (6) to calculate the operative temperature [27], where T_op = operative temperature (°C), T_a = indoor air temperature (°C), and T_mr = mean radiant temperature (°C); the value of A can be found in Table 14 as a function of the relative air speed, v_r. Figure 15 illustrates the indoor relative humidity (RH), the surface temperatures of the indoor surfaces, and the MRT calculated according to Equation (4). The WFG panel temperatures (T_WFG, T_WFG-TP) contribute to cooling the mean radiant temperature down to 20 °C when the energy management system is in operation. T_WFG was lower than T_WFG-TP because the mass flow rate through the transparent panels was set to ṁ = 2 L/min·m², whereas through the translucent interior partitions it was ṁ = 1 L/min·m². Another reason for the temperature difference is that each transparent WFG has its own circulating device, whereas the translucent panels share the same circulating device; the former proved more effective in delivering the cold from the heat pump than the latter. Floor, opaque wall, and ceiling temperatures (T_floor, T_wall, T_ceiling) are taken into account with their angle factors, calculated according to Equation (5).
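Equations (2), (3), (4), and (6) are cited above, but their bodies were lost in extraction. The standard forms they correspond to in ASHRAE 55 / ISO 7726 are reproduced below as a reconstruction, not a verbatim copy of the paper's equations; Equation (5), the angle-factor fit, is omitted because its parameterization depends on Table 13, which is not reproduced here.

```latex
% Standard forms corresponding to the equations cited in the text.
\begin{align*}
\text{(2)}\quad & T_{mr} = \frac{\sum_N T_N\, A_N}{\sum_N A_N}
    && \text{area-weighted mean radiant temperature} \\
\text{(3)}\quad & T_{mr}^{4} = \sum_N T_N^{4}\, F_{p\text{-}N}
    && \text{angle-factor form, absolute temperatures} \\
\text{(4)}\quad & T_{mr} \approx \sum_N T_N\, F_{p\text{-}N}
    && \text{linear form, error} < 0.2\,^{\circ}\mathrm{C}\ \text{when}\ \Delta T < 10\,^{\circ}\mathrm{C} \\
\text{(6)}\quad & T_{op} = A\,T_a + (1 - A)\,T_{mr}
    && A\ \text{from Table 14; commonly } A = 0.5\ \text{for}\ v_r < 0.2\ \mathrm{m/s}
\end{align*}
```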
Predicted Mean Vote (PMV) The Predicted Mean Vote (PMV) model uses six key factors to address thermal comfort: metabolic rate, clothing insulation, air temperature, radiant temperature, airspeed, and humidity. These factors may vary with time; however, in this article, the airspeed, metabolic rate, and clothing insulation are considered steady. Compliance with the ASHRAE-55 standard was tested using the CBE Thermal Comfort Tool. This tool, developed at the University of California at Berkeley, allows designers to calculate thermal comfort according to ASHRAE Standard 55-2017. The indoor air temperature and the MRT were taken from Figure 15 during operating hours. Clothing was set as 0.8 clo (typical office indoor clothing), the metabolic rate was set as 1 met (sedentary activity), the relative humidity was taken from Figure 14, and the air velocity was set as 0.10 m/s (mean air velocity of the day). The ASHRAE-55 comfort zone, shaded in gray in Figure 16, represents the recommended predicted mean vote, between −0.5 and +0.5, for buildings where the occupants have metabolic rates between 1.0 met and 1.3 met and clothing provides between 0.5 clo and 1.0 clo of thermal insulation.
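The PMV values reported below were obtained with the CBE Thermal Comfort Tool. For readers who want to reproduce them, the sketch below transcribes the standard Fanger algorithm from ISO 7730, the model underlying ASHRAE 55; it is a minimal self-contained implementation, not the CBE tool itself.

```python
from math import exp, sqrt

def pmv(ta, tr, vel, rh, met=1.0, clo=0.8, wme=0.0):
    """Fanger's Predicted Mean Vote, following the ISO 7730 reference algorithm."""
    pa = rh * 10.0 * exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo            # clothing insulation, m²K/W
    m = met * 58.15              # metabolic rate, W/m²
    mw = m - wme * 58.15         # internal heat production
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * sqrt(vel)       # forced convection coefficient
    taa, tra = ta + 273.0, tr + 273.0

    # Iterate for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    for _ in range(150):
        if abs(xn - xf) < 0.00015:
            break
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25  # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0

    # Heat loss components
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)          # skin diffusion
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0   # sweating
    hl3 = 1.7e-5 * m * (5867.0 - pa)                   # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                     # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)  # radiation
    hl6 = fcl * hc * (tcl - ta)                        # convection

    ts = 0.303 * exp(-0.036 * m) + 0.028               # sensation coefficient
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

# Conditions close to those reported for 10 July 2019: air 27 °C, MRT ~20 °C,
# 0.10 m/s, 40% RH, 1 met, 0.8 clo. Expect a slightly negative PMV, consistent
# with the "slightly cool" band reported for that day.
print(round(pmv(ta=27.0, tr=20.0, vel=0.10, rh=40.0), 2))
```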
Figure 16 illustrates the variations of the predicted mean vote (PMV), mean radiant temperature (MRT), indoor air temperature (T_int), and operative temperature (T_op) on four summer days. The PMV over the working hours ranged from −0.04 to −0.42 on 10 July 2019, while the MRT ranged from 23.0 to 19.3 °C and the indoor air temperature ranged from 25.2 to 27.4 °C. During the working hours, the highest indoor temperature occurred on 11 July 2019 at 8:00 p.m., when the MRT was 20.1 °C and the predicted mean vote was 0.1, very close to the optimum value. Similar values were observed over the four days. The comfort zone is defined by the combinations of the six key factors for thermal comfort. The PMV model is calculated with the air temperature and mean radiant temperature in question, along with the applicable metabolic rate, clothing insulation, airspeed, and humidity. If the resulting PMV value generated by the model is within the recommended range, the conditions are within the comfort zone. Table 15 defines the PMV ranges of the thermal sensation scale.
The same comfort analysis was carried out in February. Figure 17 illustrates the variations of the predicted mean vote (PMV), mean radiant temperature (MRT), indoor air temperature (T_int), operative temperature (T_op), and relative humidity (RH) on four February days. As shown in Figure 9, the conditions on sunny winter days require the heat pump to operate in cooling mode in the afternoon. The indoor air temperature dropped to 20.5 °C on 19 February 2020 at 8:00 a.m. and reached 27 °C on 24 February 2020 at 7:00 p.m. The relative humidity ranged from 35% to 40%. The PMV over the working hours ranged from −1 on 19 February 2020 to 0.8 on 24 February 2020. Both values are out of the comfort range. In the morning, the PMV on the four days was below −0.5, so the occupants would describe their comfort conditions as "Slightly Cool" or "Cool". The heat pump was set to operate in heating mode when the indoor temperature was below 20 °C, and that condition was not met. On 24 February 2020, the PMV was above 0.5 from 5:00 p.m. to 8:00 p.m. Even though the heat pump was operating in cooling mode, the occupants would describe their comfort conditions as "Slightly Warm" or "Warm". For 45% of the working hours, the predicted mean vote was below −0.5, outside the shaded area representing the recommended comfort range. For 8% of the working hours, the predicted mean vote was above the comfort range when the indoor temperature surpassed 25.5 °C and the WFG temperature was not low enough to bring down the mean radiant temperature.
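The working-hour percentages quoted in both seasonal analyses come from binning the hourly PMV series into thermal-sensation bands. A small sketch of that bookkeeping, with hypothetical hourly PMV values and band edges matching those reported above, could look as follows.

```python
import numpy as np

def pmv_band_shares(pmv_hourly, edges):
    """Share of working hours in each PMV band delimited by `edges`."""
    bins = np.concatenate(([-np.inf], np.asarray(edges), [np.inf]))
    counts, _ = np.histogram(np.asarray(pmv_hourly), bins=bins)
    return counts / counts.sum()

# Hypothetical hourly PMV values over four working days (12 h each)
rng = np.random.default_rng(42)
pmv_trace = rng.normal(loc=-0.2, scale=0.15, size=48)

edges = [-0.5, -0.4, -0.2, 0.0, 0.2, 0.4, 0.5]  # bands used in the text
labels = ["<-0.5", "-0.5..-0.4", "-0.4..-0.2", "-0.2..0",
          "0..0.2", "0.2..0.4", "0.4..0.5", ">0.5"]
for band, share in zip(labels, pmv_band_shares(pmv_trace, edges)):
    print(f"PMV {band}: {100 * share:.1f}% of hours")
```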
Conclusions

This paper has studied the energy performance of innovative building envelopes (facade and internal partitions), such as water flow glazing (WFG), coupled with an energy management system, as well as the relationships with steady and transient parameters. The energy strategies varied from a free-floating temperature regime on sunny winter days to the air-to-water heat pump, air heat exchanger, and buffer tank in summer conditions. A simple-logic energy management system received inputs from temperature and relative humidity sensors. It controlled the heat pump and the air heat exchanger to deliver heat or cold to the buffer tank. The results included actual indoor air and glazing temperatures, heating and cooling energy consumption, and the influence of WFG on the mean radiant temperature and comfort. Water-flow glazing was evaluated as a component of a hydronic radiant heating and cooling system. It showed final energy-saving potential, provided thermal comfort, and may be considered a valid option for office retrofitting. On the hottest day of the year, when the temperature ranged from 18 to 40 °C and the peak solar radiation was above 700 W/m², the energy system consumed 32 kWh (0.8 kWh/m²) and the WFG managed to keep the indoor air temperature between 25 and 27 °C. The contribution of the air heat exchanger was negligible over the year because it was set to work for cooling only when the difference between the tank top temperature and the outdoor temperature (T_tank_top − T_ext) was above 10 °C. It complicated the piping and the control logic and did not improve the energy performance.
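The control rules reported throughout the paper (heating below an indoor temperature of 20 °C, WFG cooling above 25 °C with a sufficient gap to the bottom tank temperature, and the air heat exchanger only when T_tank_top − T_ext > 10 °C) can be condensed into one decision routine. The sketch below is our reading of that simple logic, not the actual controller firmware; the thresholds are those quoted in the text.

```python
from dataclasses import dataclass

@dataclass
class SensorState:
    t_indoor: float       # indoor air temperature (°C)
    t_ext: float          # outdoor air temperature (°C)
    t_tank_top: float     # buffer tank top temperature (°C)
    t_tank_bottom: float  # buffer tank bottom temperature (°C)

def ems_step(s: SensorState):
    """One decision step of the simple-logic EMS, as we read it from the text.
    Returns the heat pump mode and whether the air heat exchanger runs."""
    heat_pump = "off"
    if s.t_indoor < 20.0:                     # winter condition: heating mode
        heat_pump = "heating"
    elif s.t_indoor > 25.0 and (s.t_indoor - s.t_tank_bottom) > 10.0:
        heat_pump = "cooling"                 # WFG cooling conditions
    # Air heat exchanger assists cooling only with a large tank/outdoor gap
    heat_exchanger_on = (s.t_tank_top - s.t_ext) > 10.0
    return heat_pump, heat_exchanger_on

print(ems_step(SensorState(t_indoor=26.5, t_ext=15.0,
                           t_tank_top=27.0, t_tank_bottom=14.0)))
```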
Radiant panels improve the performance of air-to-water heat pumps. The energy efficiency ratio (EER) reached 3.62 when the water temperature was 18 °C, and the coefficient of performance (COP) was 4.5 when the water temperature was 35 °C in heating mode. Using WFG as a radiant cooling facade and indoor partitions effectively reduced the operative temperature to comfortable levels when the indoor air temperature was between 25 and 27.5 °C. The Predicted Mean Vote (PMV) in summer conditions was between 0 and −0.5 during working hours, within the recommended values of the ASHRAE-55 standard. The MRT ranged from 19.3 to 23 °C, and the indoor air temperature ranged from 25.2 to 29.1 °C. In winter conditions, the electronic control unit was set to operate in heating mode if the indoor air temperature was below 20 °C. Then, for 45% of the working hours, the predicted mean vote was below −0.5, out of the comfort range, so the occupants would describe their comfort conditions as "Slightly Cool" or "Cool". The control unit logic should be revised to start operating the heating mode when the indoor temperature drops below 21 °C. On mild sunny winter days, when the outdoor temperature reached 17 °C in the afternoon, the heat pump cooled down the buffer tank, but the WFG failed to deliver enough cooling power. The predicted mean vote was above 0.5, and the conditions could be described as "Warm" and out of the comfort range for more than three hours. There were two conditions to activate the WFG in cooling mode: first, the indoor air temperature should be above 25 °C, and second, the difference between the indoor air temperature and the bottom tank temperature should be more than 10 °C.

The system is limited by its high initial cost and the need for an energy management system integrated with the rest of the equipment, especially the ventilation system and the heat pump. The ventilation system is an essential aspect of comfort. Controlling the relative humidity is indispensable in radiant systems to avoid condensation issues. Therefore, a more advanced ventilation device could help optimize the whole system's performance. Including heat recovery and variable airflow would reduce the sensible and latent thermal loads and control the dew-point temperature. There were uncertainties with the air-to-water heat pump operation. Although the radiant WFG panels could improve the heat pump COP and EER, there were issues with the operating cycles that could affect its performance. The selected heat pump was oversized and frequently started and stopped because it prematurely detected that it had reached the target temperature. After the first year of monitoring, there are uncertainties, malfunctions, and system issues that must be addressed. Firstly, due to the complexity of the elements involved in human comfort, the control unit must integrate the ventilation device. The operation logic should be able to modify the water mass flow rate and the ventilation air heat flow. Secondly, the devices must be adequately dimensioned to avoid malfunctions, especially the air-to-water heat pump. Further research must include heat pump electricity monitoring to compare the actual thermal and electricity consumption and assess energy performance more accurately.
Finally, further research on the standardization of its manufacturing process and deployment is needed to bring down initial costs and payback periods. Another research line would be to integrate WFG into commercial building performance simulations.
16,868.6
2020-09-15T00:00:00.000
[ "Engineering" ]
Big Data De-duplication using modified SHA algorithm in cloud servers for optimal capacity utilization and reduced transmission bandwidth

Data de-duplication in cloud storage is crucial for optimizing resource utilization and reducing transmission overhead. By eliminating redundant copies of data, it enhances storage efficiency, lowers costs, and minimizes network bandwidth requirements, thereby improving the overall performance and scalability of cloud-based systems. This research investigates the critical intersection of data de-duplication (DD) and privacy concerns within cloud storage services. DD, a technique widely employed in these services, aims to enhance capacity utilization and reduce transmission bandwidth. However, it poses challenges to information privacy, typically addressed through encoding mechanisms. One significant approach to mitigating this conflict is hierarchical authorized de-duplication, which empowers cloud users to conduct privilege-based duplicate checks before data upload. This hierarchical structure allows cloud servers to profile users based on their privileges, enabling more nuanced control over data management. In this research, we introduce the SHA method for de-duplication within cloud servers, supplemented by a secure pre-processing assessment. The proposed method accommodates dynamic privilege modifications, providing flexibility and adaptability to evolving user needs and access levels. Extensive theoretical analysis and simulated investigations validate the efficacy and security of the proposed system. By leveraging the SHA algorithm and incorporating robust pre-processing techniques, our approach not only enhances efficiency in data de-duplication but also addresses crucial privacy concerns inherent in cloud storage environments. This research contributes to advancing the understanding and implementation of efficient and secure data management practices within cloud infrastructures, with implications for a wide range of applications and industries.

INTRODUCTION

The absence of dedicated support for data-intensive scientific workflows and data administration limits the broader adoption of clouds for scientific computing. (1) Currently, workflow data handling in the cloud is achieved through the MapReduce programming model and application-specific overlays, which direct the output of one job to the input of another in a pipeline fashion. Such applications require high-performance storage frameworks that enable virtual machines (VMs) to access shared data simultaneously. (2) However, current commercial clouds only provide high-latency REST (HTTP) interfaces for accessing storage. Moreover, situations may occur in which implementations must modify the way data is managed in order to conform to the specific access method. (3) Efficient storage is a necessity for data-intensive workloads. A first approach to data management would be to use such public cloud object stores in conjunction with a more conventional parallel file system for the application. In any event, because of the aforementioned data access protocols, compute nodes and storage nodes are separated in current cloud topologies, and communication between the two incurs high latency.
Furthermore, as these facilities mainly target storage, they only support data sharing incidentally, meaning they do not support transfers among arbitrary VMs without a middleman to store the data. Along with the expense of renting the VMs, clients must pay for the storage and for transferring data in and out of these archives. (4) Recently, cloud providers have offered options such as Azure Drives or Amazon EBS to attach cloud storage in the form of virtual volumes to the compute nodes. Not only is this option subject to similarly elevated delays as standard storage access, but it also imposes flexibility and sharing restrictions, because only one VM can mount such a volume at a time. An alternative to cloud storage for staging and transmitting workflow data is to provide an equivalent file system on the compute nodes. Distributed storage solutions, like Gfarm (5), run in the host operating system of the physical node with the purpose of storing data on the machine's local storage drives. They were deployed in a compute cloud called Eucalyptus.

Data de-duplication is a special form of data compression in which all data owners who upload identical data share a single copy of it, hence removing the duplicate copies from storage. When users upload data, the cloud storage server verifies whether it has already been stored. If it has not, the data is actually written to storage; if it has, the cloud storage server merely stores a pointer to the initial data copy rather than the complete set. As a result, it avoids storing the same data repeatedly. In general, there are two main methods for DD: a) target-side DD and b) source-side DD. DD is the procedure of eliminating redundant data, which lowers data redundancies within (intra-file) and between (inter-file) files. It identifies common data segments and stores them only once, both within and between files. DD increases the useful capacity of a given quantity of storage, which can result in cost savings. The various DD methods and data sets all exhibit somewhat varying levels of DD performance. While DD can result in significant space savings, it is a data-intensive process that imposes greater resource overheads on the existing storage systems.

DD is a compression technique primarily used to remove duplicate files or data by maintaining a single copy in the storage system to minimise space consumption. Additionally, it makes searches faster and more efficient in terms of outcomes. Sometimes referred to as storage capacity optimisation, it finds duplicated files in the data repository or storage systems and employs a "reference pointer" to identify the chunks that are not necessary. Block-level DD and file-level DD are two possible methods of DD. Keeping the original physical copy of the data after DD allows unnecessary data to be eliminated without keeping multiple duplicate copies of the same file or of data with comparable content.
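A minimal sketch of the pointer-based duplicate check described above: the server keeps an index keyed by a SHA-256 digest and, on upload, either writes the blob or records a reference to the existing copy. The class and the in-memory index are illustrative; the paper's system uses a modified SHA variant and a real storage backend.

```python
import hashlib

class DedupStore:
    """Toy file-level de-duplication store: one physical copy per digest."""

    def __init__(self):
        self.blobs = {}    # digest -> data (single physical copy)
        self.catalog = {}  # file name -> digest (the "reference pointer")

    def upload(self, name: str, data: bytes) -> bool:
        """Store `data` under `name`; return True if it was a duplicate."""
        digest = hashlib.sha256(data).hexdigest()
        duplicate = digest in self.blobs
        if not duplicate:
            self.blobs[digest] = data    # first copy: actually written
        self.catalog[name] = digest      # later copies: pointer only
        return duplicate

    def download(self, name: str) -> bytes:
        return self.blobs[self.catalog[name]]

store = DedupStore()
store.upload("report_v1.doc", b"quarterly figures")
print(store.upload("report_copy.doc", b"quarterly figures"))  # True: deduped
print(len(store.blobs))  # 1 physical copy serving 2 logical files
```

Block-level de-duplication follows the same pattern, with the digest computed per fixed-size or content-defined chunk instead of per file.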
(6) There are numerous cloud storage services available, including Memopal, Mozy, and Dropbox, that use de-duplication techniques to store customer data efficiently. Concerns regarding privacy and security are raised by outsourcing. De-duplication is engineered to take these aspects into account, optimising data storage capacity and network bandwidth while aligning with the latest convergent key management features. Proof-of-ownership protocols are used to secure data from unauthorised users. Cloud load balancing is a feature that cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google offer to facilitate easy task distribution. Elastic Load Balancing (ELB) technology is one example AWS offers to distribute traffic between EC2 instances. The majority of applications supported by AWS use ELBs as a crucial architectural element.

Objective of the study

I. To design and implement a modified SHA algorithm within cloud servers for efficient and secure data de-duplication, addressing the conflict between capacity optimization and information privacy.
II. To propose a hierarchical structure enabling privilege-based duplicate checks by cloud users before data upload, aiming to reconcile de-duplication requirements with privacy concerns.
III. To enable the adaptation of user privileges within the de-duplication process, facilitating dynamic changes in access levels and enhancing user control over data management.

Literature survey

For effective, error-resilient, adaptive P2P streaming of scalable video, Murat Tekalp et al. (7) suggest two changes to the BitTorrent protocol: variable-sized chunking and adaptive windowing. The suggested changes produce better P2P video streaming outcomes in terms of the quantity of decoded frames and, consequently, a better user experience. The suggested changes to BitTorrent for video streaming produce better outcomes with respect to the quantity of decoded frames (greater quality of experience) and the chunks that are shared among the (P2P task) leechers. The number of decodable frames is greatly augmented, as evidenced by the variable-sized chunk testing, boosting the PSNR and QoE. The suggested adaptive windowing enables improved scalability against a growing number of leechers, as demonstrated by the variable-sized chunking test. As a result, the suggested changes result in higher-quality video being received by peers and lower bandwidth costs for content creators (seeders).

For P2P live streaming, Jin Li et al.
(8) suggest a chunk-driven overlay (DCO) with DHT support that aims for greater scalability, improved availability, and reduced latency. The architecture comprises three primary parts: a video source selection algorithm, a two-layer hierarchical DHT infrastructure, and a chunk sharing algorithm. The DHT-based hierarchical infrastructure has great scalability. High availability is ensured by the chunk sharing algorithm, which offers services for chunk index discovery as well as collection. The technique used for provider selection allows for full utilization of the system bandwidth. Consequently, the overlay has the ability to stream videos in high quality. Additionally, they suggest both a simplified centralized and a decentralized provider selection algorithm. When it comes to handling churn, DCO outperforms tree-based systems in terms of latency and bandwidth consumption. More significantly, by dynamically matching chunk requesters and suppliers, it can flexibly utilize all available system bandwidth. According to the experimental findings, DCO improves upon the scalability, availability, latency, and overhead of mesh-based systems (pull and push) as well as tree-based systems. The test outcomes further validate the significance of choosing chunk providers with adequate bandwidth for chunk distribution and of offering incentives for nodes to act as coordinators in the DHT architecture.

Extreme Binning is a technique for scalable and parallel de-duplication established by Kave Eshghi et al. (9) It is particularly well-suited for workloads that comprise distinct files with low locality. With such a workload, existing methods that depend on locality to guarantee reasonable throughput perform badly. In order to reduce the disk bottleneck issue, Extreme Binning exploits file similarity rather than locality, using a single disk access for the chunk lookup of each file instead of one per chunk. In comparison to a flat index method, it divides the chunk index into two tiers, resulting in a small RAM footprint that enables the method to sustain throughput for a bigger data set. Partitioning the data chunks and the two-tier chunk index is simple and straightforward. There is no data or index exchange among nodes in a distributed setting with several backup nodes. A stateless routing mechanism is used to assign each file to a single node for storage and de-duplication; this means the contents of the backup nodes need not be known at the time of allocation. The distribution of one file per backup node allows for maximum parallelization. Since there are no dependencies among the bins or among chunks tied to different bins, redistributing indices and chunks is a clean process, and backup nodes can be added to increase throughput. Data management operations, including integrity checks, data restores, and garbage collection requests, are made efficient by the autonomy of the backup nodes. The improvements in RAM usage and scalability more than make up for the minor loss of de-duplication.

The SHA family was generalized by Shih-Pei Chien et al. (10), who admit messages of any length as input and produce a message digest of the required length. In this generalised version, SHA-mn, the eight working variables, the for-loop operation, the initial hash values, the constants, the Boolean expressions and functions, the padding and parsing, and the message schedule are all modified. Additionally, the i-th intermediate hash values are computed.
In (11), the authors addressed the LHV problem, which was absent from the initial SHA standard. Due to security concerns, SHA-mn is standardized using the SHA family design guidelines. The structure of SHA was significantly enhanced, even though many people may disagree on the birthday paradox technique for determining complexity, because the full SHA-1 collision was discovered in 2005. Determining effective methods for SHA-256 collision detection is still a major area of study for several scientists.

The SHA processor architecture described by Sang-Hyun Lee et al. (12) implements three hash algorithms: SHA-512, SHA-512/224, and SHA-512/256. Based on these hash algorithms, the SHA processor produces digests of three different lengths: 512, 224, and 256 bits. Because it was built on a 32-bit data path and employed SHA-512 to construct the initial hash values of SHA-512/224 and SHA-512/256, the implementation is area-efficient. The HDL-designed SHA processor was validated through an FPGA implementation. Operating at clock frequencies of up to 185 MHz, the SHA processor built with a 0.18 µm CMOS cell library occupies 27,368 gate equivalents (GEs). The SHA processor can be used in IoT security applications.

Proposed work

The proposed preprocessing approach

The proposed methodology is shown in Figure 1. We suggest the NPFD (novel preprocessing framework for de-duplication) architecture as a solution to these problems with cloud data management. It is a PaaS-level cloud storage system that leverages virtual discs and is concurrency-optimized. It aggregates the local discs of the virtual machines into a globally shared data store for applications that consist of a large number of VMs. Applications thus exchange input files and save intermediate data or output files directly on the local disc of the virtual machine instance. The findings presented in this chapter show that this strategy improves throughput by more than a factor of two over remote cloud storage. Moreover, by creating an Azure prototype that employs the suggested storage strategy as the data management back-end and implements this computation paradigm, the advantages of the NPFD method were verified in the context of MapReduce. The architecture addresses each of the fundamental requirements of data-intensive applications point by point by providing simultaneous access to low-latency data storage. We start from the assumption that many cloud deployments do not exploit the discs locally attached to the VMs, which have storage capacities of several GBs available at no additional cost.

As a result, we suggest pooling part of the virtual discs' total storage space into a shared common pool, which can be achieved in a distributed way. The purpose of this pool is to hold application-level data. To balance the load and provide flexibility, information is stored in a striped fashion, meaning that it is divided into chunks that are uniformly distributed over the neighbouring discs of the storage device. Every chunk is replicated across other nearby discs with the ultimate purpose of surviving failures. This method greatly improves read and write throughput under concurrent access by distributing the global input/output load evenly across the adjacent discs. Additionally, this approach reduces latencies by enabling data locality, and it has the capacity to be highly scalable, because an increasing VM count calls for a larger storage structure.
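The striping-plus-replication layout described for NPFD (chunks spread uniformly across the VMs' local discs, with every chunk replicated on neighbouring discs) can be sketched as a simple placement routine. Chunk size, replica count, and the round-robin placement are illustrative assumptions, not the paper's exact scheme.

```python
def place_chunks(data: bytes, n_disks: int, chunk_size: int = 4, replicas: int = 2):
    """Split `data` into fixed-size chunks and assign each chunk to `replicas`
    distinct discs round-robin (a toy model of striping plus replication)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    layout = {d: [] for d in range(n_disks)}
    for idx, chunk in enumerate(chunks):
        for r in range(replicas):
            disk = (idx + r) % n_disks   # primary copy plus replica on the next disc
            layout[disk].append((idx, chunk))
    return layout

# Example: 4 VM-local discs, 2 copies of every chunk
for disk, stored in place_chunks(b"abcdefghijklmnop", n_disks=4).items():
    print(disk, [i for i, _ in stored])
```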
De-duplication with the SHA algorithm

High availability, scalability, security, fault tolerance, and cost-effective services are all possible with cloud storage. De-duplication is a well-accepted technique that has gained increasing traction recently for its ability to support scalable data organization in cloud-based settings. DD stores only one copy of the data and provides a link to it instead of storing further copies. De-duplication with this method is accomplished at the block level or the file level, by eliminating duplicate data blocks and duplicate files, respectively.

Security and privacy are the main issues with DD, since users' sensitive information is susceptible to attacks from within as well as from outside. Conventional encryption is incompatible with data de-duplication techniques when it comes to maintaining data security. In the traditional encryption process, each user encrypts their data with their own unique key. As a result, matching copies of the data held by different users will produce different ciphertexts, making data de-duplication impossible. In the proposed approach, we address that problem by employing a technique that offers both data security and scalability. Although the DD technique has several benefits, customers still worry about security and confidentiality, because their sensitive information is susceptible to internal as well as external threats.

In this manner, every file is mapped through a hash function. Duplicate file detection is aided by the hash function's deterministic encoding, such as a Message Digest (MD5) or Secure Hash Algorithm-1 (SHA-1) hash value. Analysts evaluated the efficacy of the technique on the same data sets. Two pieces of data are regarded as having duplicate content if their hash values are the same. At the block level, the method uses a predefined block size together with chunking techniques. Updating data with the SHA-1 algorithm for a large number of data sources reduces throughput, making it inefficient when a significant amount of storage space must be saved. However, because this method is quick and requires little computation, it is inexpensive and highly effective.

Improved SHA De-Duplication Algorithm - Pseudo code
1. Initialize cloud server with modified SHA algorithm for data de-duplication.

RESULTS

The results of the pre-processing steps are depicted in Table 1. They indicate that the input/output time of the proposed method is better than that of the P1 and P2 methods discussed below. The values of the proposed method are lower than those of the existing approaches, ranging from 0.299 to 0.501, as seen in Figure 2.

For the computing time, the values of the proposed approach are again lower than those of the existing approaches, ranging from 0.167 to 0.388, as shown in Figure 3. The computing time comparison chart, with record count on the x-axis and series levels on the y-axis, displays the disparities in values between the proposed approach and the previous methods; the corresponding values are listed in Table 2.
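Before turning to the throughput results, the encryption-versus-de-duplication conflict raised above deserves a concrete illustration. Convergent encryption, where the key is derived from the content itself so that identical plaintexts produce identical ciphertexts, is the standard resolution. The sketch below uses SHA-256 and AES-GCM from the `cryptography` package; the deterministic nonce derivation is an illustrative assumption (acceptable only because each key encrypts a single plaintext), and this is a generic technique, not the paper's modified SHA scheme.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Convergent encryption: the key is the SHA-256 digest of the content,
    so identical plaintexts always yield identical ciphertexts."""
    key = hashlib.sha256(plaintext).digest()     # content-derived key (32 bytes)
    nonce = hashlib.sha256(key).digest()[:12]    # deterministic nonce (assumption)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, ciphertext

k1, c1 = convergent_encrypt(b"shared document")
k2, c2 = convergent_encrypt(b"shared document")
assert c1 == c2  # equal ciphertexts: the server can de-duplicate them
# Each owner retains the key (k1 == k2) to decrypt the single stored copy.
```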
A comparison table outlining the read/write execution throughput of three previous methods (P1, P2, and P3) and the proposed approach is provided. The proposed approach has higher values than the previous ones, ranging from 0.5 to 0.666. The read/write processing throughput comparison chart displays the different values of the proposed approach and the existing methods, with record count on the x-axis and series level on the y-axis. The values of the proposed approach are greater than those of the previous techniques, ranging from 0.5 to 0.666, as shown in Figure 4.

Cross Dataset Sharing

Three existing methods (the Data Routing technique, Multilayer Metadata, and the Byte Index Chunking technique) are compared with the proposed technique in the cross dataset sharing comparison shown in Table 4. The values of the proposed method are higher than those of the previous methods.

Full File Duplicates

The Multilayer Metadata, Data Routing, and Byte Index Chunking techniques are compared with the proposed technique in the full file duplicates comparison in Table 5. The values of the proposed approach are higher than those of the previous methods, ranging from 10.26 to 30.06. The full file duplicates comparison chart displays the disparities between the proposed and previous methods, with duplication proportion on the y-axis and datasets on the x-axis. The values of the existing approaches are quite low in comparison to the proposed technique; values between 10.26 and 30.06 are observed, as shown in Figure 6.

Zero Chunk Removal

The proposed approach and the three existing methods (Multilayer Metadata, the Data Routing technique, and the Byte Index Chunking approach) are compared in the zero chunk removal comparison in Table 6. The values of the proposed approach are higher than those of the previous techniques, ranging from 12.02 to 30.06.

Figure 1. Proposed methodology: the novel preprocessing framework for de-duplication (NPFD), virtual disc associating for communication-efficient storage.

Figure 2. Comparison chart of I/O time per job: the values of the proposed approach are lower than those of the existing techniques, ranging from 0.299 to 0.501.
Figure 3. Comparison chart of computing time.

Figure 4. Comparison chart of read/write processing throughput.

Figure 5. Comparison chart of cross dataset sharing: the values of the existing approaches are quite low in comparison to the proposed technique, with duplication proportion on the y-axis and datasets on the x-axis; values between 21.3 and 50.55 are observed.

Table 1. Comparison table of I/O time per job.
Table 2. Comparison table of computing time.
Table 3. Comparison table of read/write processing throughput.
Table 4. Comparison table of cross dataset sharing.
Table 5. Comparison table of full file duplicates.
Table 6. Comparison table of zero chunk removal.
4,879
2024-03-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Characterization of Constrained Continuous Multiobjective Optimization Problems: A Performance Space Perspective

Constrained multiobjective optimization has gained much interest in the past few years. However, constrained multiobjective optimization problems (CMOPs) are still unsatisfactorily understood. Consequently, the choice of adequate CMOPs for benchmarking is difficult and lacks a formal background. This paper addresses this issue by exploring CMOPs from a performance space perspective. First, it presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization. This methodology offers a first attempt to simultaneously measure the performance in approximating the Pareto front and in satisfying the constraints. Secondly, it proposes an approach to measure the capability of a given optimization problem to differentiate among algorithm performances. Finally, this approach is used to contrast eight frequently used artificial test suites of CMOPs. The experimental results reveal which suites are more efficient in discerning between three well-known multiobjective optimization algorithms. Benchmark designers can use these results to select the most appropriate CMOPs for their needs.

I. INTRODUCTION

Many real-world continuous optimization problems involve the optimization of multiple, often conflicting, objectives and constraints that need to be respected [1]. Such problems are known as constrained multiobjective optimization problems (CMOPs) and have recently gained much interest in the evolutionary computation community. Indeed, several novel techniques for constraint handling and new test suites of CMOPs have been proposed recently (e.g., [2]-[5]).

Despite the large number of recently published articles in the field of constrained multiobjective optimization, the CMOPs for benchmarking multiobjective evolutionary algorithms (MOEAs) and the corresponding constraint handling techniques (CHTs) are still unsatisfactorily understood and characterized [6], [7]. Consequently, the selection of appropriate CMOPs for benchmarking is difficult and lacks a formal background. Under these circumstances, preparing a sound and well-designed experimental setup for constrained multiobjective optimization is a challenging task. A poorly designed benchmark might lead to inadequate conclusions about CMOP landscapes and MOEA performance [7].
According to [8], there are two main options for characterizing and evaluating the quality of optimization problems, namely through the feature space and the performance space. The feature space can be seen as a space of problem characteristics, including basic characteristics such as problem dimensionality and the types of objective and constraint functions, as well as more advanced problem characteristics derived using methods developed in the field of exploratory landscape analysis (ELA) [9]. On the other hand, the performance space represents the problems based on the obtained algorithm performance (behavior) while solving these problems. Similarly to the feature space, basic statistics can be used, such as mean or median algorithm performance, as well as more advanced methods, e.g., data profiles [10] or empirical cumulative distribution functions (ECDFs) [11], [12]. In contrast to the aggregated values (means, medians, etc.), the latter two methods consider the progress of the whole algorithm run and, this way, provide more comprehensive information about the algorithm behavior. In our previous work [7], we provided an extensive study of characterizing CMOPs through the feature space, while, to the best of our knowledge, the performance space has not yet been addressed in sufficient depth.

In the literature, the performance indicators used in constrained multiobjective optimization are the same as those used in unconstrained multiobjective optimization; they are simply applied only to feasible solutions. The most frequently employed indicators are the hypervolume indicator [13] and the inverted generational distance [14], since they can provide information about the convergence and the diversity of the obtained Pareto front approximation. For monitoring the performance during the run, one can use convergence graphs, data profiles, or ECDFs. However, none of these techniques provides relevant information until feasible solutions are discovered. As a result, essential insights about the algorithm behavior and CMOP characteristics are missed. To overcome this situation, some papers also report the constraint satisfaction progress [15]. Nevertheless, to the best of our knowledge, no method from the literature simultaneously measures both the convergence towards the Pareto front and constraint satisfaction, making the experimental analysis incomplete.

Moreover, we are aware of only a single work analyzing CMOPs from the performance space perspective, which was conducted in 2017 [16]. The authors used five CHTs to characterize five artificial and seven real-world test problems. The results revealed that only a single artificial test problem was suitable for benchmarking algorithms, since the other four problems could be solved even without employing a CHT. Additionally, the studied real-world problems were inadequate since they could not differentiate among MOEAs, which is a desired property of a test problem, as it provides relevant information for algorithm designers [8]. Since 2017, several novel test suites of CMOPs have been proposed, and their ability to differentiate among MOEAs has not been investigated yet [17].
In this paper, we present a novel anytime performance assessment approach specifically designed for constrained multiobjective optimization. It simultaneously monitors both the Pareto front approximation and constraint satisfaction. The approach is inspired by the anytime performance assessment of algorithms on unconstrained bi-objective optimization problems used in COCO (COmparing Continuous Optimizers) [11]. In addition, we propose an approach to measure the capability of a given problem to differentiate among MOEAs. The resulting measure is then used to evaluate the most frequently used artificial test suites of CMOPs. Because of space limitations, we present only selected results in this paper. The interested reader can find the complete results online 1.

The rest of this paper is organized as follows. In Section II, we provide the theoretical background for constrained multiobjective optimization and introduce the COCO platform. Then, in Section III, we extend the performance assessment from the COCO platform to CMOPs and propose an approach to characterize CMOPs based on the algorithm performance. Section IV provides details on the experimental setup, while Section V presents the results, evaluates the existing test suites of CMOPs, and discusses some limitations of the proposed methodology. Finally, a summary of the findings and ideas for future work are discussed in Section VI.

II. BACKGROUND

In this section, we provide the theoretical background for this work. After the definitions for CMOPs and constraint violation, we describe the performance assessment approach for multiobjective optimization used in the COCO platform.

A. Constrained Multiobjective Optimization Problems

A CMOP is, without loss of generality, formulated as:

minimize f_m(x), m = 1, ..., M,
subject to g_i(x) ≤ 0, i = 1, ..., I,

where x = (x_1, ..., x_D) is a search vector, f_m : S → R are objective functions, g_i : S → R are constraint functions, S ⊆ R^D is a search space of dimension D, and M and I are the numbers of objectives and constraints, respectively. In particular, when M = 1 the corresponding problem is a constrained single-objective optimization problem (CSOP). To differentiate between CMOPs and CSOPs, in the latter case we omit the index m (f = f_1). Additionally, if the problem has no constraints, it is called a single-objective optimization problem (SOP) when M = 1, and a multiobjective optimization problem (MOP) when M > 1.

One of the most important concepts in constrained optimization is the notion of constraint violation. For a single constraint g_i, it is defined as v_i(x) = max(0, g_i(x)), and the violations of all constraints are combined into the overall constraint violation

v(x) = Σ_{i=1}^{I} v_i(x).

A solution x is feasible iff its overall constraint violation equals zero (v(x) = 0). Note that other definitions of the overall constraint violation exist, and their use would impact the analysis performed in this study. However, this definition of the overall constraint violation is by far the most commonly adopted in constrained optimization [17], and as such, it represents the most appropriate choice.

A feasible solution x ∈ S dominates another solution y ∈ S iff f_m(x) ≤ f_m(y) for all 1 ≤ m ≤ M and f_m(x) < f_m(y) for at least one 1 ≤ m ≤ M. Additionally, a solution x* ∈ S is Pareto optimal if there is no solution x ∈ S that dominates x*. We can generalize Pareto dominance to sets: a set X dominates a set Y iff for each y ∈ Y there exists at least one solution x ∈ X that dominates y.
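As a minimal illustration of these two definitions, the snippet below computes the overall constraint violation and checks Pareto dominance; the function names are ours, chosen for this sketch.

```python
import numpy as np

def violation(g_values):
    """Overall constraint violation v(x) = sum_i max(0, g_i(x))."""
    return float(np.sum(np.maximum(0.0, np.asarray(g_values))))

def dominates(f_x, f_y):
    """Pareto dominance for minimization: does x dominate y?"""
    f_x, f_y = np.asarray(f_x), np.asarray(f_y)
    return bool(np.all(f_x <= f_y) and np.any(f_x < f_y))

print(violation([-0.3, 0.2, 0.1]))        # 0.3: only positive g_i contribute
print(dominates([0.1, 0.4], [0.2, 0.4]))  # True
```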
The set of all feasible solutions is called the feasible region and is denoted by F = {x ∈ S | v(x) = 0}. All nondominated feasible solutions represent the Pareto-optimal set, S_o. The image of the Pareto-optimal set in the objective space is the Pareto front and is denoted here by

P_o = {(f_1(x), ..., f_M(x)) | x ∈ S_o}.

The ideal objective vector, z_ide, is defined as the vector in the objective space that contains the optimal objective value for each objective separately, and it is expressed as

z_ide = (min_{x ∈ F} f_1(x), ..., min_{x ∈ F} f_M(x)).

Additionally, the nadir objective vector, z_nad, consists, in each objective, of the worst value obtained by any Pareto-optimal solution. It can be expressed as

z_nad = (max_{x ∈ S_o} f_1(x), ..., max_{x ∈ S_o} f_M(x)).

An additional important concept is the region of interest in the objective space, Z, which represents the set of objective vectors bounded by the ideal and nadir objective vectors. If good approximations for the ideal and nadir objective vectors are known, the objective functions can be normalized to

f'_m(x) = (f_m(x) − z_ide,m) / (z_nad,m − z_ide,m).

This way, the objective values are of approximately the same magnitude, and the range of the objective values for Pareto-optimal solutions is [0, 1]. Note that after normalization z_ide consists of M zeros (z_ide = (0, ..., 0)), z_nad of M ones (z_nad = (1, ..., 1)), and the region of interest Z equals [0, 1]^M. In particular, for SOPs and CSOPs the normalization results in f(x*) = 0. In the rest of this paper, we assume that all the objective functions are normalized.

B. Empirical Runtime Distributions (ERDs)

The performance measurement approach used in the COCO framework [11], [12] relies on the number of function evaluations 2 (called runtime) needed for an algorithm, a, to reach predefined quality indicator targets. More precisely, we can present an algorithm run after performing T function evaluations as a sequence of candidate solutions, A_T(a) = {x_1(a), ..., x_T(a)}. Within this framework, a quality indicator, I, is defined as a function mapping A_T(a) to a real value. Here, we assume that low quality indicator values indicate better sequences of candidate solutions, and vice versa. Additionally, the runtime for a given quality indicator target equals the lowest T for which I(A_T(a)) reaches the given target precision value, τ. Note that in the following, if there is no ambiguity, we omit the algorithm notation a from A_T(a).

In practice, we define several target precision values to understand the algorithm behavior throughout the entire run. Runtimes can be formally defined as random variables and displayed as an empirical cumulative distribution function, called the empirical runtime distribution (ERD) in the COCO framework. ERDs are used to display the proportion of target values reached within a specified budget and can be easily aggregated over multiple restarts, runs, or even multiple problems. For more details on ERDs, see [11], [12]. The runtime data set for an algorithm a and all targets τ is denoted as {T_a(τ)}_τ. Finally, the runtimes in COCO are usually studied on a logarithmic scale, and this perspective is used throughout this paper as well.

C. Quality Indicators

Depending on the nature of the optimization problem, various quality indicators are used by the COCO framework. Those relevant for this work are as follows.
1) Single-objective optimization: In this case, the quality indicator is the best-so-far observed objective function value:

I_SOP(A_T) = min_{x ∈ A_T} f(x). (6)

2) Constrained single-objective optimization: The quality indicator for unconstrained problems (6) is extended by the addition of the overall constraint violation as follows:

I_CSOP(A_T) = min_{x ∈ A_T} (f(x) + v(x)). (7)

3) Multiobjective optimization: The quality indicator for MOPs consists of two parts. When no solution from the sequence A_T dominates the nadir point (reference point), the distance to the region of interest Z is used to measure the quality of the solutions (see Fig. 1b). In contrast, when at least one of the solutions dominates the nadir point, the hypervolume indicator is used instead (see Fig. 1c). This quality indicator can be mathematically expressed as:

I_MOP(A_T) = −HV(N(A_T)), if N(A_T) ≠ ∅;
I_MOP(A_T) = d(A_T, Z), otherwise. (8)

Here,

HV(A) = Λ(∪_{x ∈ A} [f_1(x), 1] × ... × [f_M(x), 1]) (9)

represents the hypervolume of a set A (Λ denotes the Lebesgue measure), N(A_T) is the set of all the points from A_T dominating the reference point, which equals (1, ..., 1), and

d(A_T, Z) = min_{x ∈ A_T, z ∈ Z} ||f(x) − z|| (10)

is the smallest Euclidean distance between the archive and the region of interest Z. Additional information on this quality indicator can be found in [12].

2 When we refer to a function evaluation, we actually mean the evaluation of all the objective and constraint functions. For example, for a bi-objective problem with three constraints we need to perform five evaluations; however, we count this as a single function evaluation.

D. Target Precision Values for MOPs

For each problem, a set of quality indicator target values is chosen, which is used to measure algorithm runtimes and, in turn, to calculate ERDs. The target values are computed in the form τ(ε) = τ_ref + ε, where τ_ref is a reference I_MOP value. It is either based on the hypervolume of the true Pareto front or an estimation of it. In COCO, 51 positive target precision values ε ∈ {10^−5, 10^−4.9, ..., 10^−0.1, 10^0} are chosen 3. Note that it is not uncommon that the quality indicator value of the algorithm never reaches some of these target values, which leads to missing runtime measurements.

III. METHODOLOGY

This section provides an extension of ERDs to constrained multiobjective optimization. It also discusses an approach to measure a problem's effectiveness in distinguishing algorithms based on the distance between ERDs.

A. Quality Indicator for CMOPs

There are two main paradigms to approach constrained optimization problems. The first one is applicable when the constraints must be satisfied at any cost, while the second one allows for partial violation of the constraints if the objective values of a solution are of good quality. Although both paradigms have their pros and cons, we study the former one, as this is the prevalent approach in the literature 4.

Furthermore, in many real-world scenarios the objective values cannot be calculated if the solution is infeasible [18]. This often happens in simulation-based optimization, where the simulator cannot return meaningful results if some of the constraints are not satisfied. Consequently, our main assumption in constructing a quality indicator for constrained multiobjective optimization is that an infeasible solution is strictly worse than any feasible solution, regardless of the objective value quality; this is exactly the pillar of the first paradigm.
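To make the runtime bookkeeping behind ERDs concrete, the following sketch derives, from a hypothetical trajectory of quality indicator values, the runtime for each target and the resulting ERD values (the proportion of targets reached within a given budget). The function names and the toy trajectory are ours; the actual COCO machinery is considerably richer.

```python
import numpy as np

def runtimes(indicator_trajectory, targets):
    """For each target, return the smallest T (1-based) at which the quality
    indicator reaches the target, or np.nan if it never does."""
    ind = np.asarray(indicator_trajectory)
    out = []
    for tau in targets:
        hits = np.nonzero(ind <= tau)[0]
        out.append(hits[0] + 1 if hits.size else np.nan)
    return np.array(out, dtype=float)

def erd(runtimes_, budgets):
    """Proportion of targets reached within each budget (the ERD curve)."""
    rt = runtimes_[~np.isnan(runtimes_)]
    return np.array([(rt <= b).sum() / len(runtimes_) for b in budgets])

# Toy example: indicator decreasing over 100 evaluations, 5 targets
traj = np.linspace(2.0, 0.01, 100)
taus = [1.0, 0.5, 0.1, 0.05, 0.001]   # the last target is never reached
rt = runtimes(traj, taus)
print(rt)
print(erd(rt, budgets=[10, 50, 100]))
```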
For example, in Fig. 1c, solution z_4 has better objective values than z_5. Actually, if no constraints were considered, z_4 would dominate z_5. Nevertheless, z_5 is considered to be strictly better than z_4. This desired property of a quality indicator can be mathematically expressed as follows:

v(x) = 0 and v(y) > 0 implies I({x}) < I({y}). (11)

The quality indicator for CSOPs (7) does not satisfy this property. The biggest disadvantage of an indicator not satisfying (11) is that, no matter how small the quality indicator value is, we cannot know whether there exists a feasible solution in A_T. For example, for a certain CSOP there might exist a solution in A_T with f(x) = 0 and an arbitrarily small overall constraint violation value. In other words, unless I_CSOP equals zero (x* ∈ A_T), we cannot know for certain whether we found a feasible solution relying solely on the quality indicator values. From a practical point of view, we wish for a quality indicator to involve a threshold that unequivocally indicates when the algorithm has reached the feasible space.

Considering this, we propose an extension of the quality indicator for MOPs (8) as follows:

I_CMOP(A_T) = I_MOP(A_T ∩ F), if A_T ∩ F ≠ ∅;
I_CMOP(A_T) = τ* + min_{x ∈ A_T} v(x), otherwise, (12)

where τ* is a threshold to indicate that the feasible space was reached. For example, in the COCO framework, it can be set to the largest considered target for MOPs, which equals 1. It is also obvious that I_CMOP satisfies the property (11). The behavior of the proposed quality indicator is illustrated in Fig. 1. Additionally, note that in (12) only feasible solutions are considered in the calculation of I_MOP. This can be seen in Figs. 1b and 1c, where the infeasible solutions are not considered in the calculation of the distance and the hypervolume, respectively, once feasible solutions have been found.
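A minimal sketch of the proposed indicator, assuming a bi-objective problem with objectives already normalized to [0, 1]², τ* = 1, and a simple 2-D hypervolume routine; it mirrors Equation (12) but is our own illustrative code, not the authors' implementation.

```python
import numpy as np

def hv_2d(points, ref=(1.0, 1.0)):
    """Hypervolume of 2-D points dominating the reference point (minimization)."""
    pts = np.asarray([p for p in points if p[0] < ref[0] and p[1] < ref[1]])
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # non-dominated step
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def dist_to_roi(points):
    """Smallest Euclidean distance from the archive to Z = [0, 1]^M."""
    pts = np.asarray(points)
    return min(np.linalg.norm(p - np.clip(p, 0.0, 1.0)) for p in pts)

def i_cmop(objs, viols, tau_star=1.0):
    """Quality indicator of Eq. (12): before feasibility, tau* plus the
    smallest violation; afterwards, the MOP indicator over feasible points."""
    feas = [f for f, v in zip(objs, viols) if v == 0.0]
    if not feas:
        return tau_star + min(viols)
    hv = hv_2d(feas)
    return -hv if hv > 0 else dist_to_roi(feas)

print(i_cmop([[1.4, 1.2]], [0.7]))                   # infeasible: 1 + 0.7
print(i_cmop([[0.4, 0.5], [1.4, 1.2]], [0.0, 0.7]))  # feasible: -HV
```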
B. Performance Space Comparison

According to [8], a good test suite should include problems that "enable the user to tell the algorithms apart in the performance space". To measure the ability of a given problem to differentiate among MOEAs, we rely on the area between the corresponding ERDs (see Fig. 2, area in gray). The intuition is that a large area between ERDs indicates large differences between the runtimes and, in turn, between the algorithms.

Based on the area between the two ERDs, we want to propose a metric, ∆, that provides, with a single number, information about the similarity between algorithms and their performance. Let us assume we are dealing with two algorithms a and b, and we have their corresponding runtime data sets for a certain optimization problem, {T_a(τ)}_τ and {T_b(τ)}_τ. Then, the area of a single segment between the runtimes (in the logarithmic scale) for the same target can be calculated as follows (see Fig. 2, the bold line between the runtimes):

s(τ) = |log T_a(τ) − log T_b(τ)| / N_τ, (13)

where N_τ is the number of targets. In particular, when a certain runtime is missing, we set its value to the maximal budget (number of function evaluations). This is done for calculation purposes and has no particular meaning. Using formula (13), the area bounded by the two ERDs, and thus ∆, can be expressed as the sum of these segment areas over all the targets:

∆(a, b) = (1 / (N_τ log N_f)) Σ_τ |log T_a(τ) − log T_b(τ)|, (14)

where N_f is the number of function evaluations. The formula is additionally divided by log N_f for normalization purposes. It can be easily seen that ∆(a, b) ∈ [0, 1] for all algorithms and problems. In particular, small values indicate similar behavior of the chosen algorithms, and vice versa. For example, in the extreme case algorithm a might solve all the targets within a single evaluation, while algorithm b does not reach any target; in this case ∆(a, b) = 1. In the second extreme case all the runtimes coincide and ∆(a, b) = 0.

The ∆(a, b) metric can additionally be expressed as:

∆(a, b) = (N_τ− ∆−(a, b) + N_τ+ ∆+(a, b)) / N_τ, (15)

where ∆− and ∆+ represent the sums of segments (13), normalized over their respective target subsets, for targets measuring constraint satisfaction, τ−, and targets expressing the algorithm effectiveness in approximating the Pareto front (called front approximation for short), τ+, respectively. Additionally, N_τ+ is the number of τ+ targets, and N_τ− the number of τ− targets. In particular, ∆− can be seen as a measure of algorithm differences in constraint satisfaction, while ∆+ is a metric measuring differences in front approximation.

IV. EXPERIMENTAL ANALYSIS

This section introduces the test suites of CMOPs used for the experiments, discusses the chosen MOEAs and their CHTs, and provides the parameter and implementation details.

B. Multiobjective Evolutionary Optimization Algorithms

Three well-known CMOEAs were used to investigate the proposed assessment methodology and to compare the test suites: NSGA-III [21], [24], C-TAEA [23], and MOEA/D-IEpsilon [15], all equipped with their default CHTs. NSGA-III is a well-known algorithm that uses the constrained domination principle (CDP) [25] as a CHT. This principle is an extension of the dominance relation and is the most widely used technique to solve CMOPs. It strictly favors feasible solutions over infeasible ones. While feasible solutions are compared based on Pareto dominance, infeasible solutions are compared according to the overall constraint violation. The formal definition of CDP, as presented in [3], is the following: a solution x is preferred over a solution y iff (i) both are feasible and x dominates y, (ii) x is feasible and y is infeasible, or (iii) both are infeasible and v(x) < v(y). (16)

Next, the CHT used in MOEA/D-IEpsilon is based on the ε-constraint relation: x is preferred over y iff

g_te(x | λ) ≤ g_te(y | λ), if v(x) ≤ ε_t and v(y) ≤ ε_t, or v(x) = v(y);
v(x) < v(y), otherwise, (17)

where

g_te(x | λ) = max_{1 ≤ m ≤ M} λ_m |f_m(x) − z_ide,m| (18)

is the Tchebycheff aggregation function.
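The two comparison relations just defined, CDP and the ε-constraint relation, can be contrasted in a few lines; the sketch below reduces the Tchebycheff aggregation to a precomputed scalar fitness for brevity and is only an illustrative reading of (16)-(17), not the authors' code (the adaptive update of ε_t follows next).

```python
def cdp_better(fit_x, v_x, fit_y, v_y):
    """Constrained domination principle (scalarized): is x preferred over y?"""
    if v_x == 0 and v_y == 0:
        return fit_x < fit_y          # both feasible: compare quality
    if v_x == 0 or v_y == 0:
        return v_x == 0               # feasible beats infeasible
    return v_x < v_y                  # both infeasible: smaller violation

def ieps_better(fit_x, v_x, fit_y, v_y, eps):
    """Epsilon-level comparison: below eps (or equal violation), quality decides."""
    if (v_x <= eps and v_y <= eps) or v_x == v_y:
        return fit_x < fit_y
    return v_x < v_y

# A slightly infeasible but high-quality solution wins under a loose eps level
print(cdp_better(0.2, 0.05, 0.9, 0.0))        # False: CDP rejects it
print(ieps_better(0.2, 0.05, 0.9, 0.0, 0.1))  # True: epsilon tolerates it
```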
The comparison level ε_t is updated in each generation following the expression:

ε_t = v(x_θ), if t = 0;
ε_t = (1 − τ) ε_{t−1}, if ρ_F(P_t) < α and t < T_c;
ε_t = (1 + τ) v_max, if ρ_F(P_t) ≥ α and t < T_c;
ε_t = 0, if t ≥ T_c, (19)

where t is the generation counter; τ, α, and T_c are user-defined parameters; v(x_θ) is the overall constraint violation of the top θ-th individual (according to the overall constraint violation value) in the initial population; v_max is the largest overall constraint violation found so far; and ρ_F(P_t) is the proportion of feasible solutions in the current population P_t. Additional details on MOEA/D-IEpsilon can be found in [15].

Finally, the main idea behind C-TAEA is the maintenance of two separate archives. One archive is used to promote convergence, while the other maintains diversity. Besides, a special restricted mating approach is employed to balance between the two archives. The CHT used by C-TAEA is incorporated within the update of the convergence archive. Similarly to CDP, this CHT strictly favors feasible solutions, which are compared based on Pareto dominance. On the other hand, the infeasible solutions are ranked using nondominated sorting for a custom bi-objective problem expressed as

minimize (v(x), g_tc(x | ν)). (20)

The convergence archive is updated with all feasible solutions and the best infeasible solutions according to the Pareto ranking applied in (20). In addition, the diversity archive does not consider feasibility at all, allowing infeasible solutions to persist in the population. More information on this method is available in [23].

We chose three MOEAs with complementary CHTs: (i) the CDP employed in NSGA-III strictly favors feasible solutions, (ii) the diversity archive in C-TAEA allows infeasible solutions to remain in the population, and (iii) MOEA/D-IEpsilon adaptively updates the comparison level each generation following (19); when the feasibility ratio of the current population becomes large, ε_t increases and progressively more solutions (infeasible ones included) are compared solely according to the objective values.

C. Parameter Settings

The proposed performance assessment is demonstrated on the test suites listed in Table I. In particular, three-objective C-DTLZ and DC-DTLZ problems were considered with the default number of constraints. Additionally, a difficulty triplet of (0.5, 0.5, 0.5) was used for the DAS-CMOP suite, as this is by far the most frequently used difficulty triplet in the literature. Three dimensions of the search space, D ∈ {5, 10, 30}, were used to evaluate the proposed performance assessment methodology and compare the test suites. All the Pareto fronts can be analytically expressed and the corresponding hypervolume values exactly calculated.

All MOEAs were run with an equal population size, N_p, and the same number of generations, N_g. In particular, the population size was set to N_p = 100M. The number of generations was set to N_g = 120D/M and was selected as approximately the minimal value to obtain convergence for all the MOEAs (in total, 12000D function evaluations). Note that the division by M in the expression for N_g is necessary to enable aggregation over CMOPs with different numbers of objectives. The resulting values for N_p and N_g are shown in Table II.
Other parameters of the algorithms and their operators were set to their default values [15], [23], [24]. The polynomial mutation was used in all the MOEAs; the mutation probability was set to 1/D and the distribution index to 20. The simulated binary crossover was used in NSGA-III and C-TAEA with a crossover probability of 1 and a distribution index of 30. In contrast, a differential-evolution-based crossover was used in MOEA/D with a crossover probability of 1 and a scaling factor of 0.5. Additionally, in MOEA/D, the neighborhood size was set to 30, the probability of neighborhood mating to 0.9, the maximal number of solutions replaced by a child to 2, τ to 0.1, α to 0.95, Tc to 0.8, and θ to 0.05Np.

Finally, ERDs were computed without employing restarts or bootstrapping.

D. Target Precision Values for CMOPs

The values of the distance metric d defined in (10), as well as those of the overall constraint violation v, can result in different magnitudes for different CMOPs. Consequently, it is impossible to define a set of target precision values that would provide meaningful results for all the studied CMOPs. As we wanted to compare different CMOPs and suites, we first sampled 100 solutions {x_i}_i for each CMOP and normalized d and v as follows:

$\tilde d = d / 10^{\log_{10}(\mathrm{med}(\{d_i\}_i))}$ (21) and $\tilde v = v / 10^{\log_{10}(\mathrm{med}(\{v_i\}_i))}$ (22)

where med({d_i}_i) and med({v_i}_i) are the median values of the sets {d_i}_i = {d(x_i, Z)}_i and {v_i}_i = {v(x_i)}_i, respectively. After applying this procedure and performing several experiments, we set τ* to 1. Additionally, a good set of target precision values was chosen for each problem. The first half of the target precision values, τ+, applies to feasible solutions and represents how well the algorithm approximates the Pareto front, while the second half of the targets, τ−, is used to understand the algorithm performance in satisfying the constraints.

E. Implementation Details

All the CMOPs, MOEAs, and the performance measurement procedure were implemented in the Python programming language [26]. We used the pymoo [27] implementation for CTP, DAS-CMOP, MW, NSGA-III, C-TAEA, and the hypervolume calculation. The rest of the suites, MOEA/D-IEpsilon, and other functionalities were implemented from scratch.

V. RESULTS AND DISCUSSION

In this section, we first present the experimental results. Next, we discuss the existing test suites of CMOPs and their potential in differentiating algorithms. Finally, we present some limitations of the proposed methodology.

A. Results

Let us first look at the results for a single problem. As an example, we select the MW13 problem. Fig. 3 shows the ERDs for this problem aggregated over 30 runs for each of the three algorithms. To aid visualization and comparison, the function evaluations (x-axis) are divided by the problem dimension and shown in logarithmic scale.

The horizontal dashed line divides the targets into τ− and τ+; intuitively, an ERD crossing this line marks the point at which the algorithm finds feasible solutions. Note that this intuition is exact only for a single run without aggregation. Nevertheless, if an ERD (single or aggregated) starts above this line, then all the algorithm runs started in the feasible region-the first initialized solution is feasible. On the other hand, if an ERD starts below the dashed line and never crosses it, then this indicates that the corresponding algorithm runs never reached the feasible region. Moreover, the line is thicker when some, but not all, runs have found feasible solutions.
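Before moving on, here is a minimal sketch of the median-based normalization from Section IV-D above; the sampled values are synthetic placeholders for d(x_i, Z) and v(x_i), and the helper name is illustrative:

```python
import numpy as np

# Normalize by 10 raised to the log of the median over the 100 sampled
# solutions, which amounts to rescaling by the median magnitude.
def normalize(value, samples):
    med = np.median(samples)
    return value / 10 ** np.log10(med)   # equivalent to value / med

d_samples = np.random.lognormal(mean=2.0, sigma=1.0, size=100)
print(np.median([normalize(d, d_samples) for d in d_samples]))  # ~1.0
```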
As we can see, all the MOEAs reach all the targets for the two smaller dimensions, while NSGA-III fails to reach some targets on the 30-D problem. All the algorithms are able to find feasible solutions in all of the runs, with NSGA-III being generally quickest in this regard. The aggregated results over all problems of a suite are shown in Figs. 4 and 5 on the left-hand side of each subfigure. For example, Fig. 4a shows the ERDs for the selected MOEAs aggregated over all problems from the CTP suite in 5-D. On the right-hand side of each subfigure we see violin plots of distributions for ∆+ (top left), ∆− (bottom left), and ∆ (right) values. Each of these values was computed for 30 runs of each pair of algorithms on each problem and is represented in the plot as a black dot. The first column shows the distributions for all the considered CMOPs, and the rest of the columns correspond to each suite separately. The y-axis depicts the ∆+, ∆−, or ∆ value, while the x-axis has no specific meaning and is used solely for better visualization. The violin plot (colored area) approximates the probability density function of the ∆+, ∆−, or ∆ values. For example, Fig. 5j shows there are more problem instances in the MW suite with ∆ ≈ 0.05 than with ∆ ≈ 0.10. In addition, there are no problem instances with ∆ > 0.25.

B. Test Suites Evaluation

As already discussed in Section III-B, a well-designed test suite should include a wide variety of problems that can differentiate among MOEAs (∆). Since we are dealing with constrained problems, we are particularly interested in evaluating the ability of the problems to differentiate among the algorithms with respect to constraint handling (∆−).

• CTP: As we can see, for all problems the algorithms start in the feasible space (Figs. 4a, 4b, and 4c). The main difficulty they face is front approximation. This is additionally confirmed by the violin plots showing that ∆− equals 0 for all dimensions.
• CF: Unlike for CTPs, the MOEAs find no feasible solutions at the very beginning of the evolution process (Figs. 4d, 4e, and 4f). Interestingly, the difference in algorithm performance originates mainly in constraint satisfaction.
• C-DTLZ: From the performance space perspective, this suite is well-designed. The algorithms struggle to find feasible solutions in the initial phase of the evolution process (Figs. 4g, 4h, and 4i). In addition, the suite can differentiate between algorithms in both constraint satisfaction and front approximation.
• NCTP: Although the three MOEAs need a large number of function evaluations to reach a feasible region, their main challenge is front approximation (Figs. 4j, 4k, and 4l). The vast majority of the difference in algorithm performance also comes from front approximation.
• DC-DTLZ: Figs. 5a, 5b, and 5c show that all three MOEAs struggle to obtain feasible solutions, which are discovered only late in the evolution process. Like CFs, the DC-DTLZ suite is especially good at differentiating the constraint satisfaction part of algorithm performance.
• DAS-CMOP: As we can see, for all the DAS-CMOPs the NSGA-III algorithm always starts with a feasible solution, while this is not true for the other two MOEAs (Figs. 5d, 5e, and 5f). Nevertheless, feasible solutions are easily discovered by all the algorithms. Moreover, algorithm performance differences are almost exclusively obtained in front approximation, since ∆− ≈ 0 for all problems and MOEAs.
• LIR-CMOP: The performance space characteristics of this suite are very similar to those of NCTPs. Although the studied algorithms need some time to find feasible solutions, the main difference in algorithm performance is contained in the front approximation phase (Figs. 5g, 5h, and 5i).
• MW: From the performance space perspective, MW is one of the most versatile and well-designed artificial test suites found in the literature. It is the best among the studied suites in differentiating the three MOEAs (Figs. 5j, 5k, and 5l). Moreover, as shown by the violin plots, the algorithm performance is diverse in both constraint satisfaction and front approximation.

C. Limitations

We see two potential limitations of the proposed methodology to evaluate the performance space. Firstly, the results can be severely affected by the selection of the algorithms and their budgets, and secondly, the choice of target precision values also has a great impact on the outcome.

To alleviate the first issue, we selected three different MOEAs equipped with distinct CHTs. Additionally, the number of function evaluations was set large enough to ensure convergence, thus revealing the deviations between the algorithms. Finally, the logarithmic scale was applied to the budget so as not to bias the results towards the tail of the convergence graphs, where the algorithms have already converged.

On the other hand, we were not able to satisfactorily address the issue of some targets having a greater impact than others. For example, there is no assurance that progressing from target 1 + 10^-4.2 to target 1 + 10^-4.3 is equally important or difficult as progressing from target 1 + 10^-4.3 to target 1 + 10^-4.4 for all the problems and algorithms at hand. Nevertheless, using the target approach and a logarithmic scale to define these targets is argued to be much more effective for comparing algorithm performance than just relying on regular convergence graphs [12].

VI. CONCLUSIONS

This paper presents a holistic investigation of the existing artificial test CMOPs from a performance space perspective. Firstly, we have proposed a performance assessment methodology capable of simultaneously monitoring both the front approximation and constraint satisfaction. This methodology is an extension of the approach used by the COCO platform for unconstrained bi-objective optimization problems. Next, the resulting performance methodology has been used to analyze and contrast eight artificial test suites. In particular, the test suites have been assessed with respect to their effectiveness in differentiating between three well-known MOEAs. Finally, the paper discusses the advantages and drawbacks of the existing artificial test suites and points out some limitations of the proposed methodology.

The experimental results show that the CF, DC-DTLZ, and especially MW suites have the greatest potential in differentiating the three MOEAs. They all include multiple CMOPs that can separate the MOEAs in both front approximation and constraint satisfaction. Additionally, our findings indicate that half of the artificial test suites fail to satisfactorily differentiate among the three MOEAs. This suggests that CMOPs from those suites provide limited information for the algorithm designer and are thus of little value for benchmarking purposes. Finally, we observed that the predominant source of complexity in artificial test CMOPs is the front approximation.
As for future work, we suggest extending the proposed methodology to CMOPs with more than three objectives. In particular, since the hypervolume calculation is expensive in high-dimensional objective spaces, one could investigate the effect of using different performance indicators, e.g., inverted generational distance, epsilon indicator, etc. Additionally, the potential of the proposed methodology in studying algorithm behavior while solving real-world problems should also be addressed. Measuring algorithm performance in this case is especially challenging, as in a real-world scenario the Pareto front is usually unknown. Finally, the proposed methodology should be tested with additional MOEAs to further support our findings.

Fig. 1. The quality indicator I_CMOP at three stages of the algorithm search: (a) All the solutions belong to the infeasible space (areas in gray) and the quality indicator relies on the overall constraint violation. (b) There exists at least one feasible solution but no solutions dominate the reference point z^nad. The quality indicator relies on the distance to the region of interest Z (area bounded by the dotted lines and the coordinate axes). (c) There exists at least one feasible solution dominating the reference point. The quality indicator is based on the hypervolume (area depicted with a mesh).

Fig. 2. ERDs corresponding to algorithms a (solid line) and b (dashed line). The area between the lines (in gray) represents the difference in algorithm performance, ∆(a, b).

Fig. 4. Results of the three MOEAs on CMOPs from the CTP, CF, C-DTLZ, and NCTP suites. The left plot of each subfigure shows the empirical runtime distribution aggregated over all CMOPs in the suite and all targets in dimension 5 (left), 10 (center), and 30 (right). On the right of each subfigure, violin plots depict distributions of ∆+ (top left), ∆− (bottom left), and ∆ (right) values. The larger the diversity, the better.

Fig. 5. Results of the three MOEAs on CMOPs from the DC-DTLZ, DAS-CMOP, LIR-CMOP, and MW suites. The left plot of each subfigure shows the empirical runtime distribution aggregated over all CMOPs in the suite and all targets in dimension 5 (left), 10 (center), and 30 (right). On the right of each subfigure, violin plots depict distributions of ∆+ (top left), ∆− (bottom left), and ∆ (right) values. The larger the diversity, the better.

TABLE I. Characteristics of test suites: number of problems, dimension of the search space D, number of objectives M, and number of constraints I.

TABLE II. The population size Np and number of generations Ng used in the experimental analysis, based on the number of objectives M and the dimension of the search space D.
8,093.4
2023-02-04T00:00:00.000
[ "Engineering", "Computer Science", "Mathematics" ]
Decreased PRC2 activity supports the survival of basal-like breast cancer cells to cytotoxic treatments

Breast cancer (BC) is the most common cancer occurring in women, but it also, rarely, develops in men. Recent advances in early diagnosis and the development of targeted therapies have greatly improved the survival rate of BC patients. However, the basal-like BC subtype (BLBC), largely overlapping with the triple-negative BC subtype (TNBC), lacks such drug targets, and conventional cytotoxic chemotherapies often remain the only treatment option. Thus, the development of resistance to cytotoxic therapies has fatal consequences. To assess the involvement of epigenetic mechanisms and their therapeutic potential in increasing cytotoxic drug efficiency, we combined high-throughput RNA- and ChIP-sequencing analyses in BLBC cells. Tumor cells surviving chemotherapy upregulated transcriptional programs of epithelial-to-mesenchymal transition (EMT) and stemness. To our surprise, the same cells showed a pronounced reduction of polycomb repressive complex 2 (PRC2) activity via downregulation of its subunits Ezh2, Suz12, Rbbp7, and Mtf2. Mechanistically, loss of PRC2 activity leads to the de-repression of a set of genes through an epigenetic switch from the repressive H3K27me3 mark to the activating H3K27ac mark at regulatory regions. We identified Nfatc1 as a gene upregulated upon loss of PRC2 activity and directly implicated in the transcriptional changes occurring upon survival of chemotherapy. Blocking NFATc1 activation reduced epithelial-to-mesenchymal transition, aggressiveness, and therapy resistance of BLBC cells. Our data demonstrate a previously unknown function of PRC2 in maintaining low Nfatc1 expression levels, thereby repressing aggressiveness and therapy resistance in BLBC.

INTRODUCTION

Breast cancer (BC) is the most common cancerous disease in women, with over 2.2 million new cases worldwide in 2020, but it also represents 1% of all male malignancies [1,2]. The mortality of BC patients has significantly decreased over the past decades, mostly because of improvements in early diagnosis and the development of several targeted therapies. However, despite intensive efforts to combat the disease, BC remains the leading cause of cancer-related death among women. Even when detected early, the BC recurrence rate ranges between 5% and 10% within 10 years [3][4][5][6]. Nowadays, around 25% of BC patients still develop resistant and/or metastatic lesions with an unfavorable outcome [7]. Therefore, there is an urgent need for improved treatment options efficiently targeting resistant relapses and metastases.

BC is a remarkably heterogeneous disease. The lesions can be classified into distinct subtypes with specific therapeutic approaches and outcomes, based on estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) expression [8]. High-throughput gene expression profiling studies led to the definition of at least five different molecular subtypes of BC with very different incidences, prognoses, and responses to treatment: Luminal A, Luminal B, HER2-enriched, BLBC, Normal-like, and Claudin-low [9,10]. Targeted therapies specifically inhibiting ER, PR, and/or HER2 greatly improved the therapeutic options and prognosis of the mammary carcinoma (MaCa) subtypes expressing those receptors. Unfortunately, TNBC patients (~15% of all BC), whose tumors largely overlap with the BLBC subtype, lack the expression of hormone and HER2 receptors and do not benefit from these therapeutic advances.
These lesions are clinically treated with a combination of surgery, radiation, and conventional chemotherapy, depending on the stage of the disease. Despite a good initial response to cytotoxic therapies, a large fraction of BLBC patients rapidly develops resistance. Consequently, BLBC shows the highest recurrence rate after treatment and the poorest prognosis among BC diseases [10].

To survive conventional chemotherapeutic treatments, tumor cells need to adapt to new hostile conditions. Acquisition of epithelial-mesenchymal plasticity (EMP) and stemness properties has often been shown to support this process [11,12]. Such dynamic properties necessitate the presence of epigenetic mechanisms allowing a rapid and reversible reorganization of whole gene expression profiles, rendering them attractive therapeutic targets in cancer [13]. Numerous reports demonstrate the central role of epigenetic factors during epithelial-to-mesenchymal transition (EMT) [11,14]. Similarly, epigenetic factors are indispensable for the acquisition and maintenance of cancer stem cell (CSC) properties [14,15]. This is especially the case for polycomb repressive complex 2 (PRC2), a well-characterized epigenetic complex of four core subunits: EED, SUZ12, RBBP7, and EZH1 or EZH2. PRC2 catalyzes the di- and trimethylation of histone 3 at lysine 27 (H3K27me2 and H3K27me3, respectively) through its catalytic subunit EZH1 or EZH2, promoting chromatin compaction and leading to gene silencing [16,17]. Interestingly, PRC2 was shown to support adult stem cell homeostasis by repressing differentiation programs, and to promote CSC properties in numerous cancers including BC [18][19][20]. Furthermore, the enzymatic activity of the PRC2 complex was shown to actively promote EMT by positively regulating the expression of central EMT transcription factors (EMT-TFs) like SNAI1 or ZEB1 [21,22].

In the past, we developed and characterized the WAP-T MaCa mouse model to study the biology, progression, and metastatic processes of BLBC [23][24][25][26][27][28]. WAP-T mice carry the simian virus 40 (SV40) early region under the control of the mammary tissue-specific WAP promoter, which is exclusively activated during lactation. Upon induction through mating, female transgenic animals develop endogenous tumors with strong CSC properties and phenotypic plasticity [26,29,30]. In a former effort to understand the effects of conventional cytotoxic combination therapy (cyclophosphamide, anthracycline, and 5-fluorouracil; CAF for short) on BC, we observed that the chemotherapeutic treatment was not able to eradicate the disease in vivo, recapitulating the clinical situation. Interestingly, surviving tumor cells displayed a more aggressive mesenchymal-like phenotype with increased stem cell traits and showed a pronounced tendency to disseminate [31].

In the present study, we established an in vitro approach to interrogate the molecular mechanisms underlying the acquisition of EMP and stemness that allows murine WAP-T and human BLBC cells to survive chemotherapy. BC patient material and in vivo experiments were finally used to support and validate our findings. Collectively, we identified a previously unknown PRC2-mediated repressive function exerted on EMT and CSC programs through suppression of the expression of the nuclear factor of activated T cells 1 (NFATC1) in BLBC cells.
WAP-T cells surviving a conventional cytotoxic combination therapy (CAF) gain stem cell traits and EMT properties in vitro

To identify the molecular mechanisms underlying the survival and the emergence of resistance to CAF chemotherapy in vitro, we first optimized the treatment settings of a well-characterized WAP-T cell line (G-2 cells) in cell culture [29]. The aim here was the identification of treatment conditions eradicating most of the tumor cells but allowing the survival and regrowth of a small tumor cell fraction, thereby mimicking the in vivo relapse situation. Combination therapy consisting of 312.5 ng/ml cyclophosphamide, 15.6 ng/ml doxorubicin, and 312.5 ng/ml 5-FU, corresponding to a 1/32 dilution of the therapy previously utilized in Jannasch et al. [31], was identified as the most appropriate setting (Fig. 1A and B). Therefore, this treatment was adopted for the rest of the in vitro experiments of the present study (designated as CAF). Interestingly, parental G-2 (pG-2) cells surviving CAF treatment acquired a more elongated morphology characteristic of EMT-undergoing cells (Fig. 1C). A chemoresistant variant of the pG-2 cells, called resistant G-2 (rG-2) cells, was established through several cycles of CAF treatment (see the "Methods" section). rG-2 cells harbor, at basal state, a mesenchymal-like phenotype, further supporting the implication of EMT mechanisms in CAF resistance (Fig. S1A and S1B).

We compared the transcriptome of pG-2 cells after 48 h of CAF treatment to vehicle conditions (veh) using mRNA-sequencing (mRNA-seq). DESeq2 analyses identified 1021 downregulated and 1448 upregulated genes (|Log2(Fold Change)| > 1, p-adj < 0.05) in CAF-treated cells (Fig. 1D). Gene set enrichment analyses (GSEA) revealed strong enrichment of gene sets related to EMT, cancer aggressiveness, and stemness (Fig. 1E). Indeed, well-known EMT markers (Vim, Cdh2, Fn1, and Acta2) and EMT-TFs (Snai1, Snai2, Twist2, and Zeb1) were upregulated in surviving cells, whereas the expression of epithelial markers (Cdh1, Epcam, Krt14, Krt8, and Krt18) was strongly reduced (Fig. 1F). The regulation of selected epithelial (Epcam, Cdh1, Krt18, and Krt14) and mesenchymal genes (Vim, Snai1, Snai2, Twist1, Twist2, Zeb1) was validated via qPCR (Fig. 1G and H). We confirmed the increased protein levels of Vimentin and N-cadherin as well as the decrease of E-cadherin via western blot (Fig. 1I). Interestingly, the expression of stem cell-specific transcription factors (e.g. Sox2 and Nanog) was also found to be increased in CAF-treated pG-2 cells (Fig. S1C). Similarly, at basal state, rG-2 cells showed increased expression of several EMT and stem cell markers compared to pG-2 cells (Fig. S1C). Altogether, these results strengthened the validity of our in vitro approach mimicking our previous in vivo studies [31] and further emphasize the involvement of EMT and stem cell properties in chemotherapy survival mechanisms.

WAP-T tumor cells surviving CAF treatment downregulate the expression of PRC2 core subunits

Further GSEA analyses identified an accumulation of epigenetic-regulatory gene signatures in CAF-treated cells (Fig. 2A). This was an interesting finding, as several epigenetic mechanisms are involved in processes controlling cellular plasticity [32]. mRNA-seq identified 63 down-regulated and 12 up-regulated epigenetic factors (Fig. 2B, listed in Table S1). Notably, GSEA and Enrichr analyses pointed to the regulation of genes known to be H3K27me3-marked and/or repressed by PRC2 (Figs. 2C and S2A).
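As a concrete illustration of the differential-expression filter reported above (|log2 fold change| > 1, adjusted p-value < 0.05), here is a minimal pandas sketch; the file name and column names follow common DESeq2 output conventions and are assumptions, not taken from the study:

```python
import pandas as pd

# Filter a DESeq2 results table by the thresholds used in the study.
res = pd.read_csv("deseq2_results.csv", index_col=0)
sig = res[(res["log2FoldChange"].abs() > 1) & (res["padj"] < 0.05)]
up = sig[sig["log2FoldChange"] > 0]    # cf. the 1448 upregulated genes
down = sig[sig["log2FoldChange"] < 0]  # cf. the 1021 downregulated genes
print(len(up), len(down))
```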
A closer look at core PRC2 subunit expression revealed the downregulation of Ezh2, Suz12, Rbbp7, and the classical accessory factor Mtf2 in pG-2 cells surviving CAF treatment (Fig. 2D). At the protein level, too, EZH2 and SUZ12 were significantly reduced in the CAF condition, as assessed via western blots and immunofluorescence staining (Figs. 2E, F, S2C, D). In line with these findings, rG-2 cells grown under normal conditions harbored a consistently lower expression of the core PRC2 subunits Ezh2, Suz12, Rbbp7, and Mtf2 compared to untreated or treated pG-2 cells. Notably, their expression levels were even further reduced upon CAF treatment (Fig. S2B). We concluded that the reduction of PRC2 levels was associated with survival of cytotoxic therapies and chemotherapy-resistant phenotypes.

Reduction of EZH2 activity enhances the aggressiveness of BLBC tumor cells

Although PRC2 has so far been mainly associated with tumor-promoting functions, recent publications have also pointed toward a possible tumor-suppressive role, for example in ovarian carcinoma [33]. Therefore, we asked whether the identified reduction of PRC2 activity could directly mediate WAP-T tumor cell survival of cytotoxic therapies by de-repressing specific gene expression programs involved in tumor aggressiveness and/or proliferation. To assess the effect of EZH2 activity loss on the proliferation of pG-2 cells, we silenced Ezh2 using targeted siRNA or treated the cells with a small-molecule inhibitor of EZH2 (EPZ-6438). Interestingly, impairment of EZH2 activity did not reduce the proliferation of the tumor cells, as has been observed for numerous other BC cell lines in the past [34,35]. On the contrary, the growth of pG-2 cells was slightly but significantly increased by the loss of EZH2 activity upon knockdown (Fig. 3A) and at a low (500 nM) and very low (62.5 nM) concentration of EPZ-6438 (Fig. 3D). EZH2 knockdown efficiency was validated at the mRNA level (Fig. 3B), and the gradual loss of H3K27me3 resulting from EPZ-6438 treatment was measured by western blot for different concentrations (Fig. 3C). The treatments of pG-2 cells with 62.5 and 500 nM EPZ-6438 induced a reduction of 60% and 80% of the H3K27me3 signal, respectively. Interestingly, the colony formation capacity of pG-2 cells seeded at low cell density was strongly improved upon EZH2 inhibition, suggesting increased tumor-initiating properties (Fig. 3E). Remarkably, this increased colony formation capacity was maintained upon chemotherapy treatment, indicating that inhibition of PRC2 complex activity indeed supported cell survival and resistance to the therapy (Fig. 3F).

We asked whether this observation was limited to the murine WAP-T MaCas or whether other human cancer cell lines could also gain a growth and survival advantage upon PRC2 activity loss. Although certain BC cell lines showed impaired or unchanged proliferation upon EZH2 inhibition, knockdown of EZH2 in the MDA-MB-468 TNBC cell line stimulated the growth properties of the cells, with an even more pronounced effect under CAF treatment (Fig. S3A and B). Notably, the increased proliferation upon EZH2 inhibition was also observed in human cancer cell lines of other origins, such as colorectal and bile duct carcinoma (Fig. S3C-F). Together, our data showed that PRC2 inhibition can increase aggressiveness and cytotoxic therapy survival of cancer cells in a context-dependent manner.
Reduction of PRC2 activity during chemotherapy treatment allows activation of gene expression programs promoting tumor cell survival

Loss of PRC2 activity during chemotherapy survival could lead to an epigenetic switch enabling tumor cells to activate expression programs promoting aggressiveness and therapy resistance. To test this hypothesis, we assessed genome-wide occupancy of H3K27me3 and H3K27ac via ChIP-seq in treated and untreated pG-2 cells. The analysis of genome-wide H3K27ac peak changes showed that cytotoxic treatment of pG-2 cells leads to a genome-wide signal increase, especially at transcription start sites (TSSs), whereas only a minority of regions showed a reduction (Figs. 4A and S4A). This result aligns with the observation that chemotherapy broadly leads to gene up-regulation, as shown in our previous mRNA-seq analysis (Fig. 1D), and further supports the potential occurrence of global transcriptional de-repression caused by a reduction of PRC2 activity. Therefore, we analyzed changes in H3K27me3 occupancy upon CAF treatment. Interestingly, we observed a mild but significant genome-wide decrease in H3K27me3 signal (Fig. S4B). Notably, the H3K27me3 signal decrease upon chemotherapy treatment was even more pronounced in gene bodies actively repressed by PRC2 under basal conditions (Fig. 4B).

We focused on changes of the epigenetic state at the TSSs of upregulated genes. Interestingly, average levels of H3K27me3 at the TSSs of up-regulated genes were markedly reduced upon treatment (Fig. 4C). In parallel and as expected, H3K27ac occupancy strongly increased at the TSSs of upregulated genes (Fig. 4D). We, therefore, suspected a direct connection between loss of PRC2 repressive activity and activation of gene expression programs upon CAF treatment. To confirm our assumption, we decided to identify the subset of up-regulated genes directly controlled through an H3K27me3-to-H3K27ac epigenetic switch (Fig. 4E). This analysis identified 139 genes with a switch from trimethylation to acetylation at H3K27. Strikingly, 75 of these genes showed at the same time a robust up-regulation at the RNA level (Log2FC > 0.7, p-val < 0.05). As EMT was identified as one of the major features of cells surviving CAF treatment, we screened for EMT master regulators in gene ontology and pathway analyses using the DAVID database. Here, we identified the nuclear factor of activated T-cells cytoplasmic 1 (Nfatc1), the High Mobility Group AT-Hook 2 (Hmga2), and fibroblast growth factor receptor 2 (Fgfr2) as being significantly enriched (Fig. 4E). Changes in epigenetic profiles were visualized for these three genes (Figs. 4F and S4C) and validated by ChIP-qPCR (Figs. 4G and S4D). Changes in Nfatc1, Fgfr2, and Hmga2 expression levels were further validated by qRT-PCR and western blot (Figs. 4H-I, S4E and S4F). Nfatc1 caught our attention, as it has been shown to promote EMT and tumor progression in several tumor entities. Furthermore, Chen et al. reported a context-dependent epigenetic regulation of NFATc1 expression by EZH2 in pancreatic tissues [36]. Additionally, NFATc1 activity can be targeted by small-molecule inhibitors, some of which are commonly employed in the clinic (e.g. cyclosporin A, CsA), making this factor very attractive to study in the context of survival and resistance to chemotherapy [37].

EZH2 loss stimulates NFATc1 expression and correlates with poor prognosis in BLBC patients

Next, we investigated whether EZH2 directly modulates NFATc1 expression in WAP-T cells.
We, therefore, performed a knockdown of EZH2 in pG-2 cells followed by western blot analyses. As expected, EZH2 loss induced a global decrease of H3K27me3 levels. Interestingly, the global levels of H3K27ac were increased, confirming the occurrence of a profound epigenetic switch upon PRC2 activity loss, as observed earlier in our ChIP-seq results (Figs. 4A, B, S4A, and B). Strikingly, loss of EZH2 induced a dramatic increase of NFATc1 at the protein level, demonstrating that the sole impairment of PRC2 activity suffices to promote Nfatc1 expression in vitro (Fig. 5A). Accordingly, EPZ-6438-mediated EZH2 inhibition in pG-2, rG-2, and MDA-MB-468 cells also led to increased NFATc1 levels, while EZH2 overexpression in pG-2 cells led to a significant reduction of Nfatc1 expression, demonstrating the validity of the EZH2-mediated repressive activity on NFATc1 expression in other models (Fig. S5A-E).

We next analyzed the behavior of EZH2 and NFATc1 in vivo on paraffin sections of WAP-T tumors at different time points of a CAF chemotherapy treatment generated in our former study [31] (Fig. 5B). IHC staining confirmed a strong decrease of EZH2 levels in surviving tumor cells during the acute phase of the treatment (Group 2) compared to the untreated control group (Group 1). Simultaneously, the number of cells expressing NFATc1 as well as its staining intensity were increased in Group 2 tumors, confirming the anti-correlative behavior of these two factors in vivo. Interestingly, tumors re-growing after chemotherapy (Group 3) showed an almost complete restoration of EZH2 expression, whereas NFATc1 expression returned to a level close to that of the control group (Group 1) (Figs. 5B and S5F).

Additionally, we asked if the transcriptional control of NFATc1 by EZH2 could be observed in BC patient material. The staining of a BC tumor microarray (TMA) with 119 different samples revealed that the majority of the tumor samples expressed moderate levels of EZH2 and high levels of NFATc1 (Fig. 5C left panel and Fig. S5G). Strikingly, an anticorrelation between EZH2 and NFATc1 levels was observed in ~2/3 of the samples (Fig. 5C, right panel). To further support our finding, we extracted the expression values of EZH2 and NFATc1 from different publicly available human primary BC datasets and observed a mild but significant anti-correlation (Fig. 5D, E). Survival analyses using the TCGA PAM50-based dataset for human basal-like cancers showed that patients with low EZH2 (Fig. 5F), high NFATC1 (Fig. 5G), or a low EZH2/NFATC1 ratio (Fig. 5H) have a worse prognosis compared to patients with high EZH2, low NFATC1, or a high EZH2/NFATC1 ratio. Additional mining of publicly available data revealed that TNBC patients with a low response to chemotherapy show significantly lower levels of EZH2 and higher NFATc1 expression (Fig. S5H and I). Therefore, NFATC1 induction by EZH2 loss is implicated in increased tumor aggressiveness and progression in BLBC patients.

Inhibition of NFATc1 sensitizes BLBC to conventional chemotherapy

The NFAT transcription factor family has been shown to regulate EMT processes in different cancers, including BC [38,39]. Therefore, we decided to investigate the impact of NFATc1 on pG-2 cell aggressiveness. For this purpose, we performed several in vitro functional assays upon siNfatc1 treatment. The knockdown efficiency was first confirmed via western blot and qRT-PCR (Figs. 6A and S6A). Cell growth kinetic measurements demonstrated that NFATc1 loss decreases pG-2 cell growth properties (Fig. 6B).
Remarkably, NFATc1 silencing in the human MDA-MB-468 cell line induced an even more pronounced impairment of cell proliferation (Fig. S6B). The treatment of pG-2 cells with CAF leads to increased tumor cell dissemination and EMT in vivo [31]. To determine if increased NFATc1 levels could be involved in these phenomena, we assessed changes in pG-2 cell motility upon loss of NFATc1. We observed that pG-2 cell migratory properties were significantly reduced upon NFATc1 knockdown in a gap closure and a trans-well assay (Fig. 6C and D). We then asked if NFATc1 could influence pG-2 motility by regulating EMT transcriptional programs. Thus, we performed qRT-PCRs for several EMT-related markers that were previously found to be regulated upon chemotherapy treatment (Fig. 1G and H). Strikingly, NFATc1 loss led to a significant reduction of the EMT signature, as visible by increased levels of epithelial Cdh1 and decreased expression of the EMT factors Cdh2, Vim, Snai1, and Zeb1 (Fig. 6E and F). To confirm that NFATc1 signaling impairment indeed favors more epithelial phenotypes, we treated pG-2 cells with increasing concentrations of cyclosporine A (CsA), a well-established inhibitor of NFAT family members. We then measured changes in the epithelial population size by assessing the EpCAM-positive (EpCAMpos) cell fraction by flow cytometry. Interestingly, CsA treatment of pG-2 cells led to a pronounced increase of the EpCAMpos cell population (Fig. 6G). Remarkably, activation of NFAT signaling by thapsigargin treatment induced a strong concentration-dependent decrease of the EpCAMpos cell population, pointing at an efficient induction of EMT upon NFATc1 activation (Fig. 6H).

To finally test if the modulation of NFATc1 activity has an influence on the cells' resistance to chemotherapy, we treated pG-2 cells with increasing concentrations of CsA and another NFAT-specific inhibitor (VIVIT), in the presence or absence of CAF treatment. Both treatments significantly reduced the growth capacity of G-2 cells under basal conditions (Fig. S6C and D). Remarkably, the combination of NFATc1 inhibition and chemotherapy significantly increased the efficiency of the treatment, demonstrating that NFATc1 plays a critical role in the survival of tumor cells exposed to cytotoxic treatments (Fig. 6I and J). These effects were not limited to the murine cell line, as CsA treatment also sensitized MDA-MB-468 cells to CAF therapy (Fig. S6E). Finally, we assessed the ability of CsA to inhibit pG-2 cell growth with or without CAF in a chorio-allantoic membrane (CAM) assay and in syngeneic mice in vivo (Fig. 6K-N). Interestingly, CsA treatment alone was not able to reduce tumor growth in either assay. However, the combined CsA and CAF therapy showed the highest effectiveness in the CAM assay and in orthotopically growing lesions (Fig. 6K and N). Together, our results demonstrate a key role of NFATc1 signaling in modulating EMP, aggressiveness, and chemotherapy survival in pG-2 cells, under the control of PRC2 epigenetic regulation.

DISCUSSION

In the present study, we leveraged murine WAP-T MaCa cells and human BC cell lines to model and investigate the molecular mechanisms underlying BLBC survival of conventional chemotherapy. Similar to former studies on in vitro systems, animal models, and patient material [40,41], our transcriptome-wide analyses showed that WAP-T cells activate transcriptional programs characteristic of EMT and CSCs to survive the treatment.
Indeed, the gain of EMP was shown to promote tumor cell invasiveness and to protect tumor cells against pro-apoptotic signals [42,43]. Additionally, EMT and CSC properties are tightly linked together and have been frequently shown to positively influence their respective transcriptional programs [44][45][46][47]. As the acquisition of such properties requires rapid and profound transcriptional changes, we expected epigenetic mechanisms to be involved in these processes [48]. Combining mRNA-seq and ChIP-seq approaches, we surprisingly identified a reduction of PRC2/EZH2 activity occurring during chemotherapy survival in WAP-T cells.

The repressive activity of EZH2 on gene expression is mostly known to promote cancer progression and to contribute to therapy resistance, metastasis, and resistance to programmed cell death in numerous cancers including TNBC [49][50][51][52][53][54][55][56][57][58][59][60]. Paradoxically, our results unraveled an opposite role of PRC2/EZH2 in WAP-T and other BLBC cells, maintaining a more chemotherapy-sensitive phenotype via specific repression of EMT and CSC transcriptional programs. Although contradictory at first glance, our results align with still scarce but growing evidence that loss of PRC2/EZH2 activity can drive or support the initiation and progression of cancers in a context-specific manner [61][62][63]. For instance, loss of EZH2 was shown to promote genomic instability, to act as a barrier to KRAS-driven inflammation and EMT, or to promote therapy resistance in BC, colorectal cancer, lung cancer, and acute myeloid and T-cell acute lymphoblastic leukemia, respectively [62,[64][65][66][67][68][69][70]. Our investigations on murine and human BLBC cell lines corroborated the occurrence of EZH2-specific tumor-suppressive activity and thereby described a new molecular mechanism by which PRC2/EZH2 can exert its repressive function on the EMT transcriptional program. Specifically, loss of PRC2 subunits upon chemotherapeutic treatment led to rapid upregulation of central EMT regulators via a repressive (H3K27me3) to activating (H3K27ac) epigenetic switch.

Strikingly, we identified here NFATc1 as one of the major EMT-TFs under the immediate epigenetic control of PRC2 in BLBC and upregulated in cells surviving chemotherapy. Interestingly, the Hessmann group reported a few years ago that NFATc1 is needed for pancreas regeneration after injury, and is epigenetically silenced by EZH2 activity once regeneration is completed, supporting the mechanism of regulation identified in the present study [36]. The pivotal role of NFATc1 in the activation of EMT transcriptional programs in cancer cells and the availability of specific small-molecule inhibitors (e.g. CsA or VIVIT) render this factor very interesting as a potential drug target to increase the efficiency of conventional therapies [71,72]. NFAT signaling was established as being crucial for the survival and metastatic properties of triple-negative BC [73]. In this study, we observed increased efficiency of CAF treatment on BLBC cells when co-treated with cyclosporine A or VIVIT. These results are in line with former studies on lung cancer, acute myeloid leukemia (AML), and bladder cancer showing that NFATc1 inhibition sensitized cancer cells to cisplatin-, sorafenib-, and tacrolimus-induced apoptosis, respectively [74][75][76]. Besides these considerations, NFATc1 has a prominent function in T-cell induction and is required for the correct activation of cytotoxic T cells during tumor cell clearance [77].
The reversible character of NFATc1 inhibitors like CsA, as well as their anti-tumorigenic properties at doses significantly lower than immunosuppressive ones, might therefore represent a major advantage in the design of combinatory therapies [37].

In summary, this study presents evidence of a highly context-dependent PRC2/EZH2 function in BC that in certain circumstances can maintain more therapy-sensitive states by epigenetically repressing NFATc1 expression. Our data suggest that targeting NFATc1 signaling in EZH2-low TNBC/BLBC patients could represent an attractive opportunity to increase the efficiency of conventional chemotherapeutic treatments and reduce the development of deadly resistant cancer cell phenotypes.

MATERIALS AND METHODS

Cell culture

Cell lines. The murine MaCa pG-2 cell line was generated in a previous study [29] and cultured in DMEM GlutaMAX™ (Invitrogen). The human cell line MDA-MB-468 (triple-negative BC) was cultured in DMEM GlutaMAX™ medium, HCT116 and HT29 (colorectal cancer) in McCoy's medium (Gibco), and EGI-1 and TFK1 (cholangiocarcinoma) in MEM medium (Gibco). All media were supplemented with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin (P/S), and all cell lines were maintained at 37°C, 5% CO2.

Chemotherapy treatment. Optimization steps were started with a CAF chemotherapy concentration of 10 μg/ml cyclophosphamide, 0.5 μg/ml doxorubicin, and 10 μg/ml 5-FU, referred to as concentration "1" [31]. A dilution of "1/32" (312.5 ng/ml cyclophosphamide, 15.6 ng/ml doxorubicin, and 312.5 ng/ml 5-FU) was identified as most appropriate for this study and was employed in all further experiments of this project. Cells were treated for 48 h.

siRNA transfection. Cells were reverse transfected with siRNA using Lipofectamine® RNAiMAX (Thermo Scientific), according to the manufacturer's instructions. siRNA sequences are listed in Table S2.

In vitro functional assays

Proliferation assays. 1 × 10^4 cells per well were seeded on 24-well plates. For EZH2 inhibition experiments, cells were pre-treated for 2 days with different concentrations of the EPZ-6438 inhibitor in normal culture medium at the indicated concentration. In experiments assessing resistance to chemotherapy, cells were subsequently treated with a combination of CAF and EPZ-6438 for 2 days, and finally with EPZ-6438 alone for two more days. Cell proliferation was monitored with a Celigo S Cell Imaging Cytometer (Nexcelom, UK). The measurements at different time points were normalized to the respective day 0 values.

Crystal violet staining. Cells were stained with crystal violet on the last day of experiments. After a short wash step with PBS, cells were fixed for 2 min with 100% methanol and stained for 20 min with 0.05% crystal violet in 20% ethanol. Stained cells were then washed at least three times with tap water and finally dried. Pictures of the staining were taken with an Epson Perfection V700 Photo scanner (Epson).

Migration assays. 4 × 10^5 cells were reverse transfected with siRNA or treated with the respective inhibitor in six-well plates. On the next day, the adherent cells (~95% confluency) were serum-starved by replacing the complete growth medium with serum-free medium. After 4 h, scratches were performed on the cell monolayer using a sterile pipette tip. The medium was immediately replaced with fresh complete medium. The wounds were photographed at 0 and 24 h and analyzed via ImageJ.

Colony formation assay.
Cells were pre-treated with inhibitors or transfected with siRNA in the same way as for proliferation assays. The next day, 2 × 10^3 cells were seeded per well in a six-well plate. Once colonies reached adequate size (generally between 12 and 15 days), cells were fixed and stained as previously described.

RNA extraction and cDNA synthesis

Total RNA from cultured cells was isolated using Qiazol (Qiagen, Germany) according to the manufacturer's protocol and quantified using a Denovix DS11+ spectrophotometer (Denovix, USA). 0.5-1 μg of RNA was reverse transcribed using M-MuLV transcriptase and buffer according to the manufacturer's protocol (New England Biolabs GmbH, Germany).

Quantitative real-time PCR (qPCR)

Quantification of gene expression (RT-qPCR) and chromatin occupancy (ChIP-qPCR) were performed in a CFX qPCR cycler (Bio-Rad, Germany) with 25 μl reaction volumes. The following program was utilized for RT-qPCR: initial denaturation (2 min at 95°C); 40 cycles of amplification (10 s at 95°C, 30 s at 60°C). The housekeeping gene Rplp0/RPLP0 was utilized to normalize gene expression results. For ChIP-qPCR, the following program was used: initial denaturation (2 min at 95°C); 40 cycles of amplification (15 s at 95°C, 45 s at 60°C), termination (1 min at 95°C, 10 s at 65°C). The results of chromatin occupancy were normalized to the respective input values and calibrated to their respective controls. Primer sequences utilized in this study are listed in Tables S2 and S3 for RT-qPCR and ChIP-qPCR, respectively.

RNA sequencing and analysis

RNA library preparation was carried out using the NEXTflex™ Rapid Illumina Directional Kit according to the manufacturer's instructions. The size of the generated libraries was estimated on a high sensitivity DNA chip (Agilent) using a Bioanalyser 2100 (Agilent), and their concentration was determined using a Qubit fluorimeter (Invitrogen). Finally, libraries were multiplexed to a final concentration of 2 nM and sequenced on a HiSeq 2500 Illumina sequencer at the NGS-Integrative Genomics (NIG) facility at the University Medical Center Göttingen (UMG). The mRNA-seq raw data (Fastq files) were processed in the Galaxy environment (https://galaxy.gwdg.de) provided by the "Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen" (GWDG) following the pipeline established in the past [78]. After a quality check using FastQC, sequencing data were trimmed (FASTQ Trimmer tool) and aligned to the murine reference genome (mm9) with the TopHat tool. Next, reads were assigned to their respective genomic features using htseq-count. Finally, differential gene expression analyses were performed using DESeq2. Analyses of gene signature enrichment were performed using the gene set enrichment analysis (GSEA) tool (http://www.broadinstitute.org/gsea/downloads.jsp) and the web-based Enrichr tool (http://amp.pharm.mssm.edu/Enrichr/). The GSEA analyses were run on a count matrix containing all genes harboring expression levels over the background (basemean > 15 normalized counts) with the following parameters:

Chromatin immunoprecipitation (ChIP)

Chromatin immunoprecipitation for H3K27me3 and H3K27ac was performed 48 h after chemotherapy or control vehicle treatment, as described previously [79]. Briefly, pG-2 cells were cultured in 15 cm plates. Protein-DNA complexes were crosslinked with 1% formaldehyde (Sigma-Aldrich, Germany), and the nuclear fraction was extracted and sonicated with a Bioruptor pico (Diagenode, Belgium).
After controlling the size of the DNA fragments and a pre-cleaning step, the same amounts of sample were incubated with 1 μg anti-H3K27me3 or anti-H3K27ac antibody overnight at 4°C and immunoprecipitated with protein A-sepharose. Finally, DNA-protein complexes were reverse-crosslinked, DNA fragments were purified by phenol-chloroform extraction, and their concentration was determined using a Qubit fluorimeter (Invitrogen). Then, 20-30 μg of immunoprecipitated DNA fragments were used for library generation with the MicroPlex Library Preparation Kit v2 (Diagenode). The size of the libraries was estimated with a high sensitivity DNA chip (Agilent) using a Bioanalyser 2100 (Agilent), and libraries were subsequently multiplexed to a final concentration of 2 nM. Sequencing reactions were performed on a HiSeq 4000 Illumina sequencer (NGS Integrative Genomics Core Unit, University Medical Center of Göttingen).

Analysis of ChIP-seq data

ChIP-seq data were processed and analyzed in the Galaxy environment (https://galaxy.gwdg.de/) following a pipeline established in the past [80]. After a quality check (FastQC), reads were aligned to the mouse reference genome (mm9) using Bowtie2. H3K27ac peaks were identified with the MACS2 tool, and differential binding analyses were performed with Diffbind. The deepTools suite was used for the generation of normalized coverage files (bamCompare). To visualize occupied regions, region scoring matrices were computed (computeMatrix) and profile plots or heatmaps were generated (plotProfile and plotHeatmap). Histone modification occupancy at specific genomic regions was visualized with the Integrative Genomics Viewer (IGV; http://software.broadinstitute.org/software/igv/). Enrichments of genomic regions related to specific gene signatures were performed with the Enrichr web-based tool (http://amp.pharm.mssm.edu/Enrichr/).

IHC staining and scoring

IHC staining was performed as described previously [80]. Briefly, tumor sections were deparaffinized and rehydrated using decreasing alcohol concentrations. Antigen retrieval was performed with citric acid buffer (1 mM citric buffer pH 6.0, 0.05% Tween 20) or EDTA buffer (10 mM EDTA, 0.05% Tween 20, pH 8.0) for EZH2 and NFATc1, respectively, in a microwave pressure cooker for 10 min. After blocking endogenous peroxidase and unspecific epitopes, sections were subsequently incubated with primary antibodies overnight at 4°C, washed with PBS-T, and finally incubated with biotinylated secondary antibodies for 1 h at room temperature. Next, horseradish peroxidase-coupled avidin (Sigma-Aldrich, 1:1000 in PBS) was applied for 90 min. The slides were washed in PBS and developed with a DAB-chromogen solution. Counterstaining was performed with hematoxylin (Roth). Lastly, tissues were dehydrated with increasing alcohol concentrations and mounted with Histokitt (Roth). Brightfield pictures of the staining were acquired with a Zeiss AXIO Scope.A1 microscope (Zeiss). The list of the utilized antibodies and their dilutions is available in Tables S4 and S5 in the supplemental data.

Immunostained WAP-T tumors were scored based on the percentage of DAB-positive cells (EZH2+, NFATC1+) per acquired field (min. 5 fields per treated group). EZH2 and NFATC1 scoring on the patient tissue microarrays was established based on the staining intensity (null = no detectable staining, low = weak staining intensity, high = strong staining intensity). Antibodies used for immunohistochemical staining are provided in the Supplementary Information.
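As an aside on the ChIP-qPCR normalization described in the qPCR section above (occupancy normalized to input, then calibrated to a control), here is a minimal Python sketch of a percent-of-input calculation; the Ct values and the assumed 1% input fraction are illustrative, not taken from the study:

```python
import math

# Convert qPCR Ct values to percent of input chromatin.
def percent_input(ct_ip, ct_input, input_fraction=0.01):
    # Adjust the input Ct for the fraction of chromatin it represents.
    ct_input_adj = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (ct_input_adj - ct_ip)

treated = percent_input(ct_ip=24.0, ct_input=20.0)  # hypothetical Ct values
control = percent_input(ct_ip=22.5, ct_input=20.2)
print(treated / control)  # occupancy relative to the control condition
```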
Immunofluorescence staining

Cells were plated on glass coverslips and grown in 24-well plates. For immunofluorescence staining, cells were washed with PBS, fixed using 4% paraformaldehyde in PBS for 10 min, and washed again with PBS for 5 min. Cells were then permeabilized with 0.1% Triton X-100 in PBS for 10 min, washed with PBS for 5 min, and blocked with 5% BSA in PBS for 30 min. Primary antibodies diluted in blocking solution were applied to the coverslips overnight at 4°C in a humid chamber. After a washing step in PBS-T, cells were incubated with appropriate fluorophore-conjugated secondary antibodies for 1 h at room temperature in a dark humid chamber. Cells were washed with PBS-T, and nuclei were stained with DAPI in PBS (1 μg/ml) for 5 min. After the last wash step in PBS, coverslips were mounted on normal glass slides using Mowiol 4-88 mounting medium (Sigma). Fluorescence pictures were acquired with a Zeiss AXIO Scope.A1 microscope (Zeiss). Detailed lists of primary and secondary antibodies used in this study are provided in the supplemental data. Fluorescence intensity was quantified using the ImageJ software.

Mouse experiments

All animal experiments were performed according to the German regulations for animal experimentation and authorized by the local ethics office (Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit, LAVES) under the registration number 33.19-42502-04-16/2169. Mice were housed in a controlled environment with a 12 h dark/light cycle at 22°C and were fed laboratory chow and tap water ad libitum.

Cyclosporin A treatment. Syngeneic virgin female WAP-T mice (Balb/c) were anesthetized with an injection of ketamine/xylazine, and 1 × 10^6 pG-2 cells in 20 μl DMEM were injected into the right abdominal mammary gland. The animals were randomly assigned to four groups: controls (group 1), CsA treatment (group 2), CAF treatment (group 3), and CAF + CsA treatment (group 4). Once tumors reached an average volume of 200 mm³, animals of groups 2 and 4 were treated with 5 mg/kg of CsA intraperitoneally (IP), while groups 1 and 3 received equal amounts of the vehicle solution (2% DMSO; 30% PEG300; 5% Tween 80 in H2O) three times a week. On the day following the first CsA injection, animals of groups 3 and 4 were treated with a single dose of chemotherapy IP (50 mg/kg cyclophosphamide, 2.5 mg/kg doxorubicin, 50 mg/kg 5-FU), while groups 1 and 2 received a single dose of an equal volume of 0.9% NaCl. Treatment with CsA was continued until the end of the experiment; animal weight and tumor volumes (caliper measurement) were monitored three times a week.

Tissues analyzed in IHC staining. The MaCa tissues utilized for IHC staining originated from Jannasch et al. [31], where animals were treated with a single dose combining 100 mg/kg cyclophosphamide, 5 mg/kg doxorubicin, and 100 mg/kg 5-fluorouracil.

Analyzing volumes of the growing tumors via micro-CT. Excised tumors were briefly rinsed five times in water and then transferred to 35% and 70% ethanol (1 h each). For staining and fixation, tumors were placed overnight at room temperature (RT) under slow rotation in a 4% paraformaldehyde solution (PFA, Serva Electrophoresis) in phosphate-buffered saline, pH 7.4 (PBS, Invitrogen), containing 0.7% phosphotungstic acid solution (PTA, Sigma-Aldrich Corp.) diluted in 70% ethanol. Samples were then briefly rinsed in water and stored in fresh 70% ethanol.
For further μCT analysis, the PTA-stained tumors were dehydrated with an ascending ethanol series and embedded in paraffin (Suesse Labortechnik). The paraffin blocks were scanned in an in vivo microCT system QuantumFX (Perkin Elmer) operated with the following settings: 90 kV tube voltage, 200 μA tube current, 10 × 10 mm² field of view, and 3 min total acquisition time, resulting in 3D data sets with a resolution of 20 μm. These data sets were visualized and analyzed in Scry 7.0 (custom-made rendering software, Christian Dullin, 2021). A threshold of 12,000 gray values (in the arbitrary units of the CT data sets) was applied to separate tissue from paraffin, air, and the sample holder. A virtual scalpel was utilized to remove residual CAM. Tumor volume was measured by multiplying the number of segmented tumor voxels by the voxel volume.

Statistical analysis

Statistical analyses were performed with GraphPad Prism version 8.0.1. Appropriate statistical tests were used for quantitative PCR, densitometry, functional assays (colony number, confluency, population, migration, and scratch assays), immunohistochemistry scoring, fluorescence intensity, and survival analysis results, and are mentioned in the respective figure legends (*p < 0.05, **p < 0.01, ***p < 0.001).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.
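Returning to the micro-CT quantification described above, here is a minimal numpy sketch of the threshold-based volume measurement (voxels above the 12,000 gray-value threshold are counted as tissue and the count is multiplied by the voxel volume); the scan array below is a synthetic placeholder:

```python
import numpy as np

VOXEL_SIZE_MM = 0.020          # 20 um isotropic resolution
THRESHOLD = 12_000             # gray-value threshold separating tissue

ct = np.random.randint(0, 20_000, size=(100, 100, 100))  # placeholder scan
tumor_mask = ct > THRESHOLD                # segment tissue from background
tumor_volume_mm3 = tumor_mask.sum() * VOXEL_SIZE_MM ** 3
print(tumor_volume_mm3)
```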
8,864.8
2021-11-29T00:00:00.000
[ "Biology", "Medicine" ]
Connectedness of stock markets with gold and oil: New evidence from the COVID-19 pandemic

This paper sets out to explore the impact of the COVID-19 pandemic on the dynamic connectedness among gold, oil, and five leading stock markets by applying a new DCC-GARCH connectedness approach. We find stronger connectedness between these markets during the COVID-19 pandemic than in the pre-pandemic period. We also find that during this pandemic, gold is a receiver of shocks from the five stock markets, whereas oil is a net transmitter of shocks.

Introduction

The COVID-19 pandemic is having a great impact on global financial markets. Because of this turmoil, global financial markets have experienced heavy losses and the kind of deep changes that have not been seen since the 2008 financial crisis (Cembalest, 2020). Assessing connectedness between financial markets during this outbreak has been a remarkable challenge facing researchers and policymakers because it helps them to analyze the behavior of the markets facing this major event, prepare plans and strategies to minimize the financial effects of the COVID-19 outbreak, and make well-founded and informed decisions about global portfolio diversification opportunities.

As economic activity came to a halt during lockdowns in nearly all industrialized economies, oil prices fell dramatically due to the significant decline in global demand. For instance, the price of the US reference crude oil (West Texas Intermediate) had fallen to around −37 USD per barrel by 20 April 2020. While oil price volatility tends to influence the world economy negatively, gold prices have shown a clear upward trend. Oil and gold are strategic for investors as they are usually included in their asset portfolios. Oil, as a highly volatile commodity, provides valuable information for forecasting financial asset prices. Conversely, gold is regularly regarded as a safe-haven asset during periods of turmoil (Baur and Lucey, 2010). Accordingly, oil and gold are, theoretically, strongly associated with stock markets. Therefore, an accurate assessment of the linkages between oil, gold, and stock markets may aid investors in their portfolio allocation during the crisis period.

Many empirical studies that have examined relationships between financial markets have confirmed that the COVID-19 pandemic has influenced the degree of connectedness between these markets. For instance, So et al. (2021) used dynamic financial networks based on stock return correlations to examine the connectedness between financial networks in Hong Kong during the COVID-19 outbreak. They found that network connectedness within the financial network increased during the outbreak. Similarly, Zhang et al. (2020) found that the pandemic is having important repercussions on global financial markets. Specifically, they showed that the linkages between global stock markets exhibited obviously different patterns during the pandemic period. In the same vein, Bissoondoyal-Bheenick et al. (2020) investigated the impact of the pandemic on the connectedness between stock returns and volatility. They provided empirical evidence that the connectedness between the two became significantly more marked as the COVID-19 outbreak increased in severity. Along the same lines, Costa et al. (2021) examined sectoral connectedness in the US using data from 2013 to the end of 2020. They showed that total connectedness experienced a dramatic increase during the outbreak.
This paper contributes to the existing literature on connectedness between financial markets by assessing the impact of the COVID-19 pandemic on the dynamic connectedness among gold, oil, and five leading stock markets. For this purpose, we use Gabauer's (2020) DCC-GARCH connectedness approach to evaluate the total and net connectedness between financial markets. This approach offers a number of advantages. First, it allows us to overcome the major drawbacks of rolling window analysis, namely the often arbitrary choice of the window size and the loss of observations. Second, it lets us test whether or not the propagation mechanism is time-varying. To our knowledge, this is the first research to apply this novel approach to investigating the connectedness between financial markets. Our focus on gold and oil was motivated by several factors. First, there are only a few empirical studies linking oil and gold to financial markets during the COVID-19 pandemic (Zhang and Hamori, 2021; Wang et al., 2021; Drake, 2021). Second, oil prices experienced high volatility during the period of turmoil; this volatility and uncertainty underscore the connection between oil and the financial markets. Third, since ancient times, stock investors have considered gold a safe haven and have used it to protect against market instabilities and financial turbulence. Used this way, gold contributes to reducing connectedness between financial markets by acting as a shock damper during periods of turmoil. Our findings show a stronger connectedness among gold, oil and the selected stock markets during the COVID-19 pandemic than in the pre-pandemic period. Our results also reveal that gold is a receiver of shocks from the five stock markets, whereas oil is a transmitter of shocks during the outbreak. The increased connectedness among the considered commodities and stock markets has been due to the intensification of the transmission of the crisis effect between them. Overall, these findings support the market contagion hypothesis in the literature, which suggests that periods of financial crisis generate large return connectedness among commodities and stock markets. The remainder of this paper is organized as follows: Section 2 outlines the econometric specifications and data description, while Section 3 reports and discusses the paper's empirical findings. Finally, Section 4 concludes and discusses policy implications. Data Daily data spanning from 14 November 2018 to 24 March 2021 were collected from the Bloomberg database to explore the patterns of dynamic connectedness between oil, gold and five of the world's largest stock markets in the periods before and during the COVID-19 pandemic. The decision to focus on these stock markets was motivated by the following reasons. First, the stock markets considered are among the world's ten largest. Second, all of these markets were severely affected during the COVID-19 pandemic and experienced losses of between 30 and 42% of their value (Yarovaya et al., 2020; Zhang et al., 2020). As suggested by Wan et al. (2021), we consider 20 January 2020 as the start date of the pandemic period. Fig. 1 depicts the daily evolution of the returns of the series considered in our analysis. Methodology This paper employs the DCC-GARCH connectedness approach proposed by Gabauer (2020). It is based on the volatility impulse response function (VIRF), which represents the impact of a shock in variable "i" on variable "j"'s conditional volatilities.
The VIRF can be written as
$$\text{VIRF}_{j,t}(h) = E\big(\Sigma_{t+h} \mid \varepsilon_t = \delta_{j,t},\, \mathcal{F}_{t-1}\big) - E\big(\Sigma_{t+h} \mid \mathcal{F}_{t-1}\big),$$
where $\delta_{j,t}$ is a selection vector with a one at the j-th position and zeros otherwise. Conditional variance-covariance forecasting using the DCC-GARCH model is at the core of the VIRF and can be done by iteration in three steps. First, using the GARCH(1,1) model, the conditional volatilities $D_{t+h|t}$ are predicted. Second, the dynamic conditional correlations are forecast as suggested by Engle and Sheppard (2001). Third, the dynamic conditional variance-covariances are predicted by combining the forecast volatilities and correlations. The generalized forecast error variance decomposition (GFEVD) is then computed using the VIRF and is interpreted as the share of the forecast-error variance of one variable that is explained by shocks to the others. The normalized variance shares $\tilde{g}_{ij,t}(J)$ satisfy $\sum_{j=1}^{N} \tilde{g}_{ij,t}(J) = 1$ and $\sum_{i,j=1}^{N} \tilde{g}_{ij,t}(J) = N$. By employing the GFEVD, it is possible to construct the total connectedness index (TCI) as follows:
$$TCI_t(J) = \frac{\sum_{i,j=1,\, i \neq j}^{N} \tilde{g}_{ij,t}(J)}{N} \times 100. \qquad (9)$$
The total connectedness presented in Eq. (9) can be decomposed into the spillovers that variable "i" transmits to all variables "j" (total directional connectedness to others),
$$C_{i \to \cdot,\, t}(J) = \sum_{j=1,\, j \neq i}^{N} \tilde{g}_{ji,t}(J) \times 100,$$
and the total directional connectedness from others,
$$C_{i \leftarrow \cdot,\, t}(J) = \sum_{j=1,\, j \neq i}^{N} \tilde{g}_{ij,t}(J) \times 100.$$
The difference between the two previous measures gives the net total connectedness, which describes the influence of variable "i" on the studied network:
$$C_{i,t}(J) = C_{i \to \cdot,\, t}(J) - C_{i \leftarrow \cdot,\, t}(J).$$
Finally, Gabauer (2020) defines the net pairwise directional connectedness (NPDC) between variable "i" and variable "j" as
$$NPDC_{ij,t}(J) = \big(\tilde{g}_{ji,t}(J) - \tilde{g}_{ij,t}(J)\big) \times 100.$$
A negative value of $NPDC_{ij}$ indicates that variable "i" is dominated by "j", whereas a positive value indicates the opposite. Connectedness tables Tables 1 and 2 report the average dynamic connectedness measures among gold, oil, and the five studied stock markets before and during the COVID-19 pandemic. Table 1 shows that the total connectedness (TCI) between gold and the five stock markets increased during the COVID-19 pandemic, reaching 37.09% against 32.98% in the pre-COVID-19 period. Table 2 shows that the TCI between oil and the stock markets also increased during the COVID-19 pandemic (45.78% vs. 37.09% before the outbreak). Overall, the results indicate that the studied markets are moderately inter-connected and that the TCI among them increased during the pandemic period. In addition, our findings suggest that oil became the main net transmitter during the COVID-19 pandemic period while gold acted as the main net receiver, receiving shocks from all studied stock markets. Dynamic total connectedness Figs. 2 and 3 display the dynamic total connectedness (TCI). Fig. 2 indicates that the TCI between gold and the five stock markets ranged approximately between 25% and 43% before the pandemic and between 30% and 60% during it. The TCI thus increased sharply immediately after COVID-19 was declared a pandemic by the World Health Organization (WHO), indicating that this event significantly affected the connectedness between gold and the selected stock markets. Between oil and these stock markets, the total connectedness index lay approximately between 28% and 40% before the pandemic. Fig. 3 shows that this index increased during the pandemic, peaking at over 60% shortly after the pandemic was declared. Overall, our findings show that COVID-19 influenced the level of connectedness between the gold market and the studied stock markets, as well as between the oil market and these markets. This outcome is due to the amplification of crisis-effect transmission between them.
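As an illustration of how these measures are computed in practice, the sketch below derives the TCI, the directional and net measures, and the NPDC from a normalized GFEVD matrix. It is a minimal sketch under the stated conventions (rows of the matrix sum to one); the function and variable names are ours, and the toy matrix is synthetic rather than estimated from the data.

```python
# Minimal sketch (assumed names, not the authors' code): Diebold-Yilmaz-style
# connectedness measures from a normalized GFEVD matrix.
# gfevd[i, j] = share of variable i's forecast-error variance explained by
# shocks to variable j; rows sum to 1 by construction.
import numpy as np

def connectedness(gfevd: np.ndarray):
    n = gfevd.shape[0]
    off_diag = gfevd - np.diag(np.diag(gfevd))
    tci = off_diag.sum() / n * 100            # total connectedness index
    to_others = off_diag.sum(axis=0) * 100    # column sums: shocks i transmits
    from_others = off_diag.sum(axis=1) * 100  # row sums: shocks i receives
    net = to_others - from_others             # net total directional connectedness
    npdc = (gfevd.T - gfevd) * 100            # npdc[i, j] = NPDC_{ij}
    return tci, to_others, from_others, net, npdc

# Toy example with 3 variables:
g = np.array([[0.70, 0.20, 0.10],
              [0.15, 0.60, 0.25],
              [0.05, 0.30, 0.65]])
tci, to_, from_, net, npdc = connectedness(g)
print(f"TCI = {tci:.2f}%")
print(np.round(net, 2))  # net[i] > 0 means variable i is a net transmitter
```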
These findings are consistent with the market contagion hypothesis in the literature, which suggests that periods of financial crisis generate large return connectedness between commodities and stock markets. To provide evidence concerning the statistical significance of the increase in total connectedness, we tested the hypothesis $H_0: \mu_{Before} = \mu_{During}$ against $H_1: \mu_{Before} \neq \mu_{During}$, where $\mu_{Before}$ is the average total connectedness index before the COVID-19 pandemic and $\mu_{During}$ is the average total connectedness index during the COVID-19 pandemic. The appropriate procedure for testing the null hypothesis of equality of these means is the Satterthwaite-Welch t-test (Welch, 1947) because it accounts for unequal variances. The results of this test, reported in Table 3, show that the null hypothesis is rejected for both submarkets, gold-stock markets and oil-stock markets. Dynamic net connectedness In this section, we analyze the dynamic patterns of net connectedness of the studied markets. Figs. 4 and 5 display the net total directional connectedness index over the compared periods. The main results emerging from these figures are as follows. First, the net connectedness of all studied markets peaked during the COVID-19 pandemic, behaving similarly to the total connectedness index presented in the previous section. From these figures, it is evident that the behavior of net connectedness changed significantly during the pandemic compared to the pre-COVID-19 period. Second, Fig. 4 shows that gold is a receiver of shocks from the five stock markets. More specifically, for the US stock market, gold becomes a receiver of shocks after a short period of being a transmitter. For the UK stock market, gold becomes a net transmitter during the COVID-19 period. In the case of Germany, gold remains a transmitter of shocks, but to a stronger extent during the COVID-19 period. For the Japanese stock market, gold behaves similarly in the two periods considered. For the Chinese stock market, gold becomes a receiver of shocks during the first months of COVID-19 after being a net transmitter during the pre-pandemic period. Finally, similar behavior is found when considering oil returns with the five stock markets. As shown in Fig. 5, oil exhibits a higher net connectedness index during the pandemic period. It remains a net transmitter of shocks to all five selected stock markets during the entire pandemic period. The dynamic net connectedness findings reflect how stock markets interact with oil and gold in periods of crisis. During the COVID-19 pandemic, the combination of stagnant production and high economic uncertainty generated increasing volatility in oil prices, making this strategic commodity the largest transmitter of shocks to stock markets. Individual investors' pessimism boosted investment in gold, which is commonly viewed as a primary safe-haven asset, making this commodity the largest receiver of shocks from stock markets. Fig. 4. Net connectedness index for gold and the five stock markets. As for the total connectedness, we employed the Satterthwaite-Welch procedure to check whether the dynamic net connectedness changed significantly during the COVID-19 outbreak by testing the hypothesis $H_0: \mu_{i,Before} = \mu_{i,During}$ against $H_1: \mu_{i,Before} \neq \mu_{i,During}$,
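For readers who want to replicate the test, the following minimal sketch applies Welch's unequal-variances t-test to two series of daily TCI values; the data here are synthetic placeholders, since the paper's estimated TCI series are not reproduced.

```python
# Minimal sketch (illustrative, with synthetic data): Welch/Satterthwaite t-test
# for equality of mean connectedness before vs. during the pandemic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tci_before = rng.normal(33.0, 4.0, size=300)   # stand-ins for daily TCI values
tci_during = rng.normal(45.0, 7.0, size=300)

# equal_var=False selects Welch's test, which allows unequal variances
t_stat, p_value = stats.ttest_ind(tci_before, tci_during, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p rejects H0: equal means
```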
where $\mu_{i,Before}$ is the average net connectedness index of market "i" before the COVID-19 pandemic and $\mu_{i,During}$ is the average net connectedness index of market "i" during the COVID-19 outbreak. The results reported in Table 6 reveal that the change in net connectedness among the markets studied is statistically significant except for the Japanese stock market. Conclusions This paper investigates the effects of the COVID-19 pandemic on the dynamic connectedness between gold, oil and five leading stock markets. Using a new DCC-GARCH connectedness approach, we found that the COVID-19 pandemic increased the connectedness among oil, gold and the five selected stock markets. Our results also show that gold is a receiver of shocks from the five stock markets, whereas oil is a transmitter of shocks during this outbreak. These outcomes are consistent with the market contagion hypothesis, which suggests that periods of financial distress induce large return connectedness across several asset markets. More specifically, during the COVID-19 period, the global uncertainty caused by the contagion influenced the structural connectedness patterns between the markets under investigation. Furthermore, the dynamic net connectedness findings reflect how stock markets respond to oil prices and gold in periods of crisis. The results of this study have policy implications that could be important to stock investors and policymakers. For instance, policymakers should be aware of the links between oil and gold and their effect on financial markets when preparing strategies to reduce connectedness between stock markets during pandemics. Declaration of Competing Interest Noureddine Benlagha and Salaheddine El Omari declare no conflicts of interest in this manuscript.
3,388
2021-08-01T00:00:00.000
[ "Economics" ]
How to succeed at holographic correlators without really trying We give a detailed account of the methods introduced in [1] to calculate holographic four-point correlators in IIB supergravity on AdS5 × S5. Our approach relies entirely on general consistency conditions and maximal supersymmetry. We discuss two related methods, one in position space and the other in Mellin space. The position space method is based on the observation that the holographic four-point correlators of one-half BPS single-trace operators can be written as finite sums of contact Witten diagrams. We demonstrate in several examples that imposing the superconformal Ward identity is sufficient to fix the parameters of this ansatz uniquely, avoiding the need for a detailed knowledge of the supergravity effective action. The Mellin space approach is an "on-shell method" inspired by the close analogy between holographic correlators and flat space scattering amplitudes. We conjecture a compact formula for the four-point correlators of one-half BPS single-trace operators of arbitrary weights. Our general formula has the expected analytic structure, obeys the superconformal Ward identity, satisfies the appropriate asymptotic conditions and reproduces all the previously calculated cases. We believe that these conditions determine it uniquely. Introduction Thanks to integrability, N = 4 super Yang-Mills (SYM) theory should be completely tractable in the planar limit. However, much work remains to turn this statement of principle into a practical computational recipe. A basic class of observables that still defies our technical abilities is the four-point correlation function of one-half BPS local operators,
$$\langle O_{p_1}(x_1)\, O_{p_2}(x_2)\, O_{p_3}(x_3)\, O_{p_4}(x_4) \rangle\,, \qquad (1.1)$$
with $O_p(x) = \operatorname{Tr} X^{\{I_1} \cdots X^{I_p\}}(x)$, $I_k = 1, \dots, 6$, in the symmetric-traceless representation of the SO(6) R-symmetry. For general weights $p_i$ and arbitrary 't Hooft coupling λ these correlators are extremely complicated functions of the conformal and R-symmetry cross ratios, encoding a large amount of non-protected spectral data and operator product coefficients. Finding a useful representation for these correlators at any value of the 't Hooft coupling λ will be a crucial benchmark for the statement that planar N = 4 SYM has been exactly solved. At strong coupling, planar N = 4 SYM has a dual description in terms of classical IIB supergravity on AdS5 × S5 [19][20][21]. Casual readers could be forgiven for supposing that a complete calculation of (1.1) in the supergravity limit must have been achieved in the early days of AdS/CFT. Far from it! Kaluza-Klein supergravity is a devilishly complicated theory - or so it appears in its effective action presentation - and the standard methods of calculation run out of steam very quickly. Prior to our work only a few non-trivial cases were known: (i) the three simplest cases with four identical weights, namely $p_i = 2$ [22], $p_i = 3$ [23], and $p_i = 4$ [24]; (ii) the next-to-next-to-extremal correlators with two equal weights, i.e. the cases $p_1 = n + k$, $p_2 = n - k$, $p_3 = p_4 = k + 2$ [25][26][27]. The standard algorithm to evaluate holographic correlators is straightforward but very cumbersome. To the leading non-trivial order in the large N expansion, one is instructed to calculate a sum of tree-level Witten diagrams, with external legs given by bulk-to-boundary propagators and internal legs by bulk-to-bulk propagators. The vertices are read off from the effective action in AdS5 obtained by Kaluza-Klein (KK) reduction of IIB supergravity on S5.
The evaluation of the exchange Witten diagrams is not immediate, but has been streamlined in a series of early papers [23,[31][32][33][34][35][36]. A key simplification [36] that occurs for the AdS5 × S5 background is that all the requisite exchange diagrams (see figure 1) can be written as finite sums of contact diagrams (figure 2), the so-called D-functions. However, the supergravity effective action is extremely complicated [3,29,37]. The scalar quartic vertices were obtained by Arutyunov and Frolov [29] in a heroic undertaking and they fill 15 pages. Moreover, the number of exchange diagrams grows rapidly as the weights $p_i$ are increased, making it practically impossible to go beyond $p_i$ of the order of a few. What's worse, the final answer takes the completely unintuitive form of a sum of D-functions. It takes some work to extract from it even the leading OPE singularities. This sorry state of affairs is all the more embarrassing when contrasted with the beautiful progress in the field of flat space scattering amplitudes (see, e.g., [38,39] for recent textbook presentations). Holographic correlators are the direct AdS analog of S-matrix amplitudes, to which in fact they reduce in a suitable limit, so we might hope to find for them analogous computational shortcuts and elegant geometric structures. A related motivation to revisit this problem is our prejudice that for the maximally supersymmetric AdS5 × S5 background the holographic n-point functions of arbitrary KK modes must be completely fixed by general consistency conditions such as crossing symmetry and superconformal Ward identities. This is just a restatement of the on-shell uniqueness of the two-derivative action of IIB supergravity. It should then be possible to directly "bootstrap" the holographic correlators. The natural language for this approach is the Mellin representation of conformal correlators, introduced by Mack [40] for general CFTs and advocated by Penedones and others [41][42][43][44] as particularly natural in the holographic context. The analogy between AdS correlators and flat space scattering amplitudes becomes manifest in Mellin space: holographic correlators are functions of Mandelstam-like invariants s, t, u, with poles and residues controlled by OPE factorization. (For the AdS5 × S5 background, tree-level correlators are in fact rational functions - this is the Mellin counterpart of the fact that only a finite number of D-functions are needed in position space.) However, most applications to date of the Mellin technology to holography (e.g., [41][42][43][44][45]) have focussed on the study of individual Witten diagrams in toy models. This is not where the real simplification lies. The main message of our work is that one should focus on the total on-shell answer of the complete theory and avoid the diagrammatic expansion altogether. Our principal result is a compelling conjecture for the Mellin representation of the general one-half BPS four-point functions (1.1) in the supergravity limit. We have found a very compact formula that obeys all the consistency conditions: Bose symmetry, expected analytic structure, correct asymptotic behavior at large s and t, and superconformal Ward invariance. We have checked that our formula reproduces (in a more concise presentation) all the previously calculated examples. We believe it is the unique solution of our set of algebraic conditions, but at present we can show uniqueness only for the simplest case ($p_i = 2$).
We have also developed an independent position space method. This method mimics the conventional algorithm to calculate holographic correlators, writing the answer as a sum of exchange and contact Witten diagrams, but it eschews knowledge of the precise cubic and quartic couplings, which are left as undetermined parameters. The exchange diagrams are expressed in terms of D-functions, so that all in all one is led to an ansatz as a finite sum of D-functions. Finally, the undetermined couplings are fixed by imposing the superconformal Ward identity. This method is completely rigorous, relying only on the structure of the supergravity calculation with no further assumptions. Despite being simpler than the conventional approach, it also becomes intractable as the weights $p_i$ increase. We have obtained results for the cases with equal weights $p_i = 2, 3, 4, 5$. The $p_i = 5$ result is new and it agrees both with our Mellin formula and with a previous conjecture by Dolan, Nirschl and Osborn [46]. The remainder of the paper is organized as follows. In section 2 we start with a quick review of the traditional method of calculation of four-point functions using supergravity. In section 3 we review and discuss the Mellin representation for CFT correlators and of Witten diagrams. We place a special emphasis on the simplifications expected in the large N limit and when the operator dimensions take the special values that occur in our supergravity case. In section 4, after reviewing the constraints of superconformal invariance, we formulate and solve an algebraic problem for the four-point Mellin amplitude of generic one-half BPS operators. We also discuss some technical subtleties about the relation between the Mellin and position space representations. The position space method is developed in section 5. We conclude in section 6 with a brief discussion. Four appendices collect some of the lengthier formulae and technical details. The traditional method The standard recipe to calculate holographic correlation functions follows from the most basic entry of the AdS/CFT dictionary [19][20][21], which states that the generating functional of boundary CFT correlators equals the AdS path integral with boundary sources; schematically this is (2.1), reproduced below. Here and throughout the paper we are using the Poincaré coordinates (2.2). The AdS radius R will be set to one by a choice of units, unless otherwise stated. We focus on the limit of the duality where the bulk theory becomes a weakly coupled gravity theory. As is familiar, for the canonical duality pair of N = 4 SYM and type IIB string theory on AdS5 × S5 this amounts to taking the number of colors N large and further sending the 't Hooft coupling $\lambda = g^2_{YM} N$ to infinity. In this limit, the bulk theory reduces to IIB supergravity with a small five-dimensional Newton constant $\kappa^2_5 = 4\pi^2/N^2 \ll 1$. The task of computing correlation functions in the strongly coupled planar gauge theory has thus become the task of computing suitably defined "scattering amplitudes" in the weakly coupled supergravity on an AdS5 background. The AdS supergravity amplitudes can be computed by a perturbative diagrammatic expansion, in powers of the small Newton constant, where the so-called "Witten diagrams" play the role of position space Feynman diagrams. The Witten diagrams are "LSZ reduced", in the sense that their external legs (the bulk-to-boundary propagators) have been put "on-shell" with Dirichlet-like boundary conditions at the boundary $\partial AdS_{d+1}$.
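In schematic form, the AdS/CFT generating functional relation (2.1) and the Poincaré coordinates (2.2) can be written as follows; this is a standard-convention sketch, with normalizations assumed rather than taken from the text above.

```latex
% (2.1): schematic GKP-W relation; (2.2): Poincare coordinates on AdS_{d+1} (R = 1)
\begin{equation}
Z_{\rm grav}\big[\phi_0\big] \;=\;
\Big\langle \exp \int d^d x \, \phi_0(x)\, O(x) \Big\rangle_{\rm CFT},
\tag{2.1}
\end{equation}
\begin{equation}
ds^2 \;=\; \frac{dz_0^2 + d\vec{x}^{\,2}}{z_0^2}\,, \qquad z = (z_0, \vec{z}\,), \quad z_0 > 0.
\tag{2.2}
\end{equation}
```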
In this paper we restrict ourselves to the evaluation of four-point correlation functions of the single-trace one-half BPS operators $O^{(p)}$, transforming in the $[0, p, 0]$ representation of SU(4). They are annihilated by half of the Poincaré supercharges and have protected dimensions ∆ = p. By acting with the other half of the supercharges, one generates the full supermultiplet, which comprises a finite number of conformal primary operators in various SU(4) representations and of spin $\leq 2$ (see, e.g., [47] for a complete tabulation of the multiplet). Each conformal primary in the $B_{[0,p,0]}$ multiplet is dual to a supergravity field in AdS5, arising from the Kaluza-Klein reduction of IIB supergravity on S5 [48], with the integer p corresponding to the KK level. For example, the superprimary $O^{(p)}$ is mapped to a bulk scalar field $s_p$, which is a certain linear combination of KK modes of the 10d metric and four-form with indices on the S5. The traditional method evaluates the correlator of four operators (2.3) as the sum of all tree level diagrams with external legs $s_{p_1}$, $s_{p_2}$, $s_{p_3}$, $s_{p_4}$. One needs the precise values of the cubic vertices responsible for exchange diagrams (figure 1), and of the quartic vertices responsible for the contact diagrams (figure 2). The relevant vertices have been systematically worked out in the literature [3,29,37,49] and take very complicated expressions. Our methods, on the other hand, do not require the detailed form of these vertices, so we will only review some pertinent qualitative features. Let us first focus on the cubic vertices. The only information that we need are the selection rules, i.e., which cubic vertices are non-vanishing. An obvious constraint comes from the product rule of SU(4) representations (displayed below), which restricts the SU(4) representations that can show up in an exchange diagram. We collect in table 1 (reproduced from [24,47]) the list of bulk fields $\{\varphi_{\mu_1 \cdots \mu_\ell}\}$ that are a priori allowed in an exchange diagram with external $s_{p_i}$ legs if one only imposes the R-symmetry selection rule. From the explicit expressions of the cubic vertices [37] one deduces two additional selection rules on the twist $\tau \equiv \Delta - \ell$ of the field $\varphi_{\mu_1 \cdots \mu_\ell}$ in order for the cubic vertex $s_{p_1} s_{p_2} \varphi_{\mu_1 \cdots \mu_\ell}$ to be non-vanishing: the twist must have the same parity as $p_1 + p_2$, and it must satisfy $\tau < p_1 + p_2$. The selection rule on the parity of the twist can be understood as follows. In order for the cubic vertex $s_{p_1} s_{p_2} \varphi_{\mu_1 \cdots \mu_\ell}$ to be non-zero, it is necessary for the "parent" vertex $s_{p_1} s_{p_2} s_{p_3}$ to be non-zero, where $s_{p_3}$ is the superprimary of which $\varphi_{\mu_1 \cdots \mu_\ell}$ is a descendant. By SU(4) selection rules, $p_3$ must have the same parity as $p_1 + p_2$. One then checks that all descendants of $s_{p_3}$ that are allowed to couple to $s_{p_1}$ and $s_{p_2}$ by SU(4) selection rules have the same twist parity as $p_3$. On the other hand, the selection rule for the extremal configuration $O_{p_1} O_{p_2} O_{p_1+p_2}$ is not fully explained by this kind of reasoning. To understand it, we first need to recall that the cubic vertices obtained in [3,37] are cast in a "canonical form" by performing field redefinitions that eliminate vertices with spacetime derivatives. This is harmless so long as the twists of the three fields satisfy a strict triangular inequality, but subtle for the "extremal case" of one twist being equal to the sum of the other two [11]. For example, for the superprimaries, one finds that the cubic coupling $s_{p_1} s_{p_2} s_{p_1+p_2}$ is absent, in apparent contradiction with the fact that the corresponding three-point function in N = 4 SYM is non-vanishing [3,11].
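In standard group-theory conventions (assuming $p_1 \leq p_2$, an ordering choice on our part), the product rule in question decomposes as:

```latex
% SU(4) selection rule for the exchanged representations (assume p_1 <= p_2)
\begin{equation}
[0,p_1,0] \otimes [0,p_2,0] \;=\;
\bigoplus_{r=0}^{p_1} \; \bigoplus_{s=0}^{p_1-r} \; \big[\, r,\; p_2 - p_1 + 2s,\; r \,\big].
\end{equation}
```

As a quick check, for $p_1 = p_2 = 1$ this gives $\mathbf{6} \otimes \mathbf{6} = \mathbf{1} \oplus \mathbf{15} \oplus \mathbf{20'}$, as expected.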
One finds that while the coupling $c_{p_1 p_2 p_3} \sim (p_3 - p_2 - p_1)$, the requisite cubic contact Witten diagram diverges as $1/(p_3 - p_1 - p_2)$, so that their product yields the finite correct answer.5 From this viewpoint, it is in fact necessary for the extremal coupling $c_{p_1 p_2\, p_1+p_2}$ to vanish, or else one would find an infinite answer for the three-point function. This provides a rationale for the selection rule $\tau < p_1 + p_2$. When it is violated, the requisite three-point contact Witten diagram diverges, so the corresponding coupling must vanish. We will see in sections 3.2 and 3.3 that the selection rule also has a natural interpretation in Mellin space. The requisite quartic vertices were obtained in [29]. The quartic terms in the effective action for the $s_k$ fields contain up to four spacetime derivatives, but we argued in [1] that compatibility with the flat space limit requires that holographic correlators can get contributions from vertices with at most two derivatives. The argument is easiest to phrase in Mellin space and will be reviewed in section 3.4. That is indeed the case in the handful of explicitly calculated examples [22][23][24][25][26][27]. Our claim has been recently proven in full generality [50]. These authors have shown that the four-derivative terms effectively cancel out in all four-point correlators of one-half BPS operators, thanks to non-trivial group theoretic identities. The rules of evaluation of Witten diagrams are entirely analogous to the ones for position space Feynman diagrams: we assign a bulk-to-bulk propagator $G_{BB}(z, w)$ to each internal line connecting two bulk vertices at positions z and w, and a bulk-to-boundary propagator $G_{B\partial}(z, x)$ to each external line connecting a bulk vertex at z and a boundary point x. These propagators are Green's functions in AdS with appropriate boundary conditions. Finally, integrations over the bulk AdS space are performed for each interaction vertex point. The simplest connected Witten diagram is a contact diagram of external scalars with no derivatives in the quartic vertex (figure 2). It is given by the integral of the product of four scalar bulk-to-boundary propagators over the common bulk point (see the block below). Here the scalar bulk-to-boundary propagator takes the standard form given in [20], where $\Delta_i$ is the conformal dimension of the i-th boundary CFT operator. The integral can be evaluated in terms of derivatives of the dilogarithm function. It is useful to give it a name, defining the so-called D-functions as the four-point scalar contact diagrams with external dimensions $\Delta_i$. The other type of tree-level four-point diagram is the exchange diagram (figure 1). Exchange diagrams are usually difficult to evaluate in closed form. In [36] a technique was invented that allows one, when certain "truncation conditions" for the quantum numbers of the external and exchanged operators are met, to trade the propagator of an exchange diagram for a finite sum of contact vertices.

5 If one wishes to work exactly at extremality $p_3 = p_1 + p_2$, one can understand the finite three-point function as arising from boundary terms that are thrown away by the field redefinition that brings the cubic vertex to the canonical non-derivative form [11]. One can rephrase this phenomenon as follows [30]: the field redefinition on the supergravity side (which throws away boundary terms) amounts to a redefinition of the dual operators that adds admixtures of multi-trace terms.
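In standard conventions (with normalization constants suppressed, an assumption on our part), the contact integral, the bulk-to-boundary propagator, and the D-functions take the forms:

```latex
% Contact diagram, bulk-to-boundary propagator, and D-function (standard forms)
\begin{equation}
A_{\rm contact}(x_i) \;=\; \int_{AdS_{d+1}} \frac{d^{d}\vec{z}\; dz_0}{z_0^{d+1}}
\; \prod_{i=1}^{4} G_{B\partial}^{\Delta_i}(z, x_i)\,,
\end{equation}
\begin{equation}
G_{B\partial}^{\Delta}(z, x) \;=\; C_{\Delta}
\left( \frac{z_0}{z_0^2 + (\vec{z} - \vec{x}\,)^2} \right)^{\!\Delta},
\qquad C_{\Delta}\ \text{a normalization constant},
\end{equation}
\begin{equation}
D_{\Delta_1\Delta_2\Delta_3\Delta_4}(x_i) \;=\;
\int_{AdS_{d+1}} \frac{d^{d}\vec{z}\; dz_0}{z_0^{d+1}}
\; \prod_{i=1}^{4} \left( \frac{z_0}{z_0^2 + (\vec{z} - \vec{x}_i)^2} \right)^{\!\Delta_i}.
\end{equation}
```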
In such cases, one is able to evaluate an exchange Witten diagram as a finite sum of D-functions. Fortunately, the spectrum and selection rules of IIB supergravity on AdS5 × S5 are precisely such that all exchange diagrams obey the truncation conditions. We will exploit this fact in our position space method (section 5). The formulae for the requisite exchange diagrams have been collected in appendix A. Mellin formalism In this section we review and discuss the Mellin amplitude formalism introduced by Mack [40] and developed in [41][42][43][44][51][52][53]. After introducing the basic formalism in section 3.1, we discuss the special features that occur in large N CFTs in section 3.2 and review the application to tree-level four-point Witten diagrams in section 3.3. A remarkable simplification occurs for Witten diagrams with special values of the external and exchanged operator dimensions: the associated Mellin amplitude is a rational function of the Mandelstam invariants s and t. We explain that this is dictated by consistency with the structure of the operator product expansion at large N. Finally, in section 3.4 we discuss the asymptotic behavior of the supergravity Mellin amplitude. Compatibility with the flat space limit gives an upper bound for the asymptotic growth of the supergravity Mellin amplitude at large s and t. Mellin amplitudes for scalar correlators We consider a general correlation function of n scalar operators with conformal dimensions $\Delta_i$. Conformal symmetry restricts its form to (3.1), where $\xi_r$ are the conformally invariant cross ratios constructed from the $x^2_{ij}$ (3.2). Requiring that the correlator transforms with appropriate weights under conformal transformations, one finds the constraints (3.3). The number of independent cross ratios in a d-dimensional spacetime is given by (3.4), as seen from a simple counting argument. We have a configuration space of n points which is nd-dimensional, while the dimension of the conformal group SO(d+1, 1) is $\frac{1}{2}(d+1)(d+2)$. For sufficiently large n, the difference of the two gives the number of free parameters unfixed by the conformal symmetry, as in the second line of (3.4). However this is incorrect for n < d + 1 because we have overlooked a nontrivial stability group. To see this, we first use a conformal transformation to send two of the n points to the origin and to infinity. If n < d + 1, the remaining n − 2 points will define a hyperplane and the stability group is the rotation group SO(d + 2 − n) perpendicular to the hyperplane. After adding back the dimension of the stability group we get the first line of the counting. To phrase it differently, when the spacetime dimension d is high enough, there are always $\frac{1}{2}n(n-3)$ conformal cross ratios, independent of the spacetime dimension. But when n ≥ d + 1 there exist nontrivial algebraic relations among the $\frac{1}{2}n(n-3)$ conformal cross ratios. The constraints (3.3) admit $\frac{1}{2}n(n-3)$ solutions, in correspondence with the $\frac{1}{2}n(n-3)$ cross ratios (ignoring the algebraic relations that exist for small n). Mack [40] suggested that instead of taking the exponents $\delta^0_{ij}$ to be fixed, we should view them as variables $\delta_{ij}$ satisfying the same constraints, and write the correlator as an integral transform with respect to these variables. More precisely, one defines the (inverse) Mellin transform (3.6) for the connected part of the correlator. The integration is performed with respect to the $\frac{1}{2}n(n-3)$ independent variables along the imaginary axis.
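In standard conventions, the constraints (3.5) and the representation (3.6) can be written as follows (the measure $[d\delta]$ denotes integration over the independent Mellin variables discussed next):

```latex
% (3.5): constraints on the Mellin variables; (3.6): Mack's integral representation
\begin{equation}
\sum_{j \neq i} \delta_{ij} \;=\; \Delta_i\,, \qquad \delta_{ij} = \delta_{ji}\,,
\tag{3.5}
\end{equation}
\begin{equation}
\mathcal{G}(\xi_r)_{\rm conn} \;=\;
\int [d\delta]\; M(\delta_{ij}) \, \prod_{i<j} \big( x_{ij}^2 \big)^{-\delta_{ij}}\,.
\tag{3.6}
\end{equation}
```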
We will be more specific about the integration in a moment. The correlator $\mathcal G(\xi_r)_{\rm conn}$ is captured by the function $M(\delta_{ij})$, which following Mack we shall call the reduced Mellin amplitude. The constraints (3.5) can be solved by introducing fictitious "momentum" variables $k_i$ living in a D-dimensional spacetime, with $\delta_{ij} = k_i \cdot k_j$ (3.7). These variables obey "momentum conservation" $\sum_{i=1}^{n} k_i = 0$ (3.8) and the "on-shell" condition $k_i^2 = -\Delta_i$ (3.9). The number of independent Lorentz invariants $\delta_{ij}$ ("Mandelstam variables") in a D-dimensional spacetime is given by (3.10). The counting goes as follows. The configuration space of n on-shell momenta in D dimensions is n(D − 1)-dimensional, while the Poincaré group has dimension $\frac{1}{2}D(D+1)$. Assuming that the stability group is trivial, there will be $n(D-1) - \frac{1}{2}D(D+1)$ free parameters, giving the second line of (3.10). However for n < D there is a nontrivial stability group SO(D − n + 1). This can be seen by using momentum conservation to make the n momenta lie in an (n − 1)-dimensional hyperplane - the rotations orthogonal to the hyperplane generate the stability group SO(D − n + 1). Adding back the dimension of the stability group we obtain the first line of (3.10). Again we see that when D is high enough, the number of independent Mandelstam variables is the D-independent number $\frac{1}{2}n(n-3)$. When n ≥ D, the $\frac{1}{2}n(n-3)$ Mandelstam variables are subject to further relations. This is the counterpart of the statement we made about the conformal cross ratios. We conclude that the counting of independent Mandelstam variables in D dimensions coincides precisely with the counting of independent conformal cross ratios in d dimensions if we set D = d + 1. The virtue of the integral representation (3.6) is to encode the consequences of the operator product expansion into simple analytic properties of $M(\delta_{ij})$. Indeed, consider the OPE
$$O_i(x_i)\, O_j(x_j) \;\sim\; \sum_k c_{ijk}\, (x_{ij}^2)^{-\frac{1}{2}(\Delta_i + \Delta_j - \Delta_k)}\, \big( O_k(x_j) + \cdots \big)\,,$$
where for simplicity $O_k$ is taken to be a scalar operator. To reproduce the leading behavior as $x_{ij}^2 \to 0$, the reduced Mellin amplitude must have a pole at $\delta_{ij} = \frac{1}{2}(\Delta_i + \Delta_j - \Delta_k)$, as can be seen by closing the $\delta_{ij}$ integration contour to the left of the complex plane. More generally, the location of the leading pole is controlled by the twist τ of the exchanged operator ($\tau \equiv \Delta - \ell$, the conformal dimension minus the spin). Conformal descendants contribute an infinite sequence of satellite poles, so that all in all, for any primary operator $O_k$ of twist $\tau_k$ that contributes to the $O_i O_j$ OPE, the reduced Mellin amplitude $M(\delta_{ij})$ has poles at
$$\delta_{ij} = \frac{\Delta_i + \Delta_j - \tau_k - 2n}{2}\,, \qquad n = 0, 1, 2, \dots$$
Mack further defined the Mellin amplitude $\mathcal M(\delta_{ij})$ by stripping off a product of Gamma functions,
$$M(\delta_{ij}) \;=\; \prod_{i<j} \Gamma(\delta_{ij})\; \mathcal M(\delta_{ij})\,. \qquad (3.13)$$
This is a convenient definition because $\mathcal M$ has simpler factorization properties. In particular, for the four-point function, the s-channel OPE ($x_{12} \to 0$) implies that the Mellin amplitude $\mathcal M(s, t)$ has poles in s with residues that are polynomials in t. These Mack polynomials depend on the spin of the exchanged operator, in analogy with the familiar partial wave expansion of a flat-space S-matrix. (The analogy is not perfect, because each operator contributes an infinite sequence of satellite poles, and because Mack polynomials are significantly more involved than the Gegenbauer polynomials that appear in the usual flat-space partial wave expansion.) We will see in section 3.2 that Mack's definition of $\mathcal M$ is particularly natural for large N theories. Finally, let us comment on the integration contours in (3.6). The prescription given in [40] is that the real parts of the arguments of the stripped-off Gamma functions be all positive along the integration contours.
To be more precise, one is instructed to integrate $\frac{1}{2}n(n-3)$ independent variables $s_k$ along the imaginary axis, where the $s_k$ are related to the $\delta_{ij}$ via
$$\delta_{ij} = \delta^0_{ij} + \sum_k c_{ij,k}\, s_k\,.$$
Here $\delta^0_{ij}$ is a special solution of the constraints (3.5) with $\mathrm{Re}\,\delta^0_{ij} > 0$. The coefficients $c_{ij,k}$ are any solution of $\sum_{j \neq i} c_{ij,k} = 0$, which is just the homogeneous version of (3.5). There are $\frac{1}{2}n(n-3)$ independent coefficients $c_{ij,k}$ for each k. We can choose to integrate over $c_{ij,k}$ with $2 \leq i < j \leq n$ except for $c_{23,k}$, so that the chosen $c_{ij,k}$ form an $\frac{n(n-3)}{2} \times \frac{n(n-3)}{2}$ square matrix (the row index runs over the independent pairs (ij) and the column index is k). We fix the normalization of this matrix by a convenient condition. For four-point amplitudes, which are the focus of this paper, it is convenient to introduce "Mandelstam" variables s, t, u and to parametrize the $\delta_{ij}$ in terms of them as in (3.17), spelled out below. With this parametrization, the constraints obeyed by the $\delta_{ij}$ translate into the single constraint $s + t + u = \sum_{i=1}^{4} \Delta_i$. We can take s and t as the independent integration variables, and rewrite the integration measure as $\frac{ds\, dt}{(4\pi i)^2}$. In fact this simple contour prescription will need some modification. In the context of the AdS supergravity calculations, we will find it necessary to break the connected correlator into several terms and associate different contours to each term, instead of using a universal contour. There are usually poles inside the region specified by $\mathrm{Re}\,\delta^0_{ij} > 0$, and the answer given by the correct modified prescription differs from the naive one by the residues that are crossed in deforming the contours. Large N The Mellin formalism is ideally suited for large N CFTs. While in a general CFT the analytic structure of Mellin amplitudes is rather intricate, it becomes much simpler at large N. To appreciate this point, we recall the remarkable theorem about the spectrum of CFTs in dimension d > 2 proven in [65,66]. For any two primary operators $O_1$ and $O_2$ of twists $\tau_1$ and $\tau_2$, and for each non-negative integer k, the CFT must contain an infinite family of so-called "double-twist" operators with increasing spin $\ell$ and twist approaching $\tau_1 + \tau_2 + 2k$ as $\ell \to \infty$ [65,66]. This implies that the Mellin amplitude has infinite sequences of poles accumulating at these asymptotic values of the twist, so it is not a meromorphic function.9 As emphasized by Penedones [41], a key simplification occurs in large N CFTs, where the double-twist operators are recognized as the usual double-trace operators. Thanks to large N factorization, spin-$\ell$ conformal primaries of the schematic form $:\!O_1\, \Box^n \partial_{\mu_1} \cdots \partial_{\mu_\ell}\, O_2\!:$, where $O_1$ and $O_2$ are single-trace operators, have twist $\tau_1 + \tau_2 + 2n + O(1/N^2)$,10 for any $\ell$. Recall also that the Mellin amplitude is defined in terms of the connected part of the k-point correlator, which is of order $O(1/N^{k-2})$ for unit-normalized single-trace operators. The contribution of intermediate double-trace operators arises precisely at $O(1/N^2)$, so that to this order we can use their uncorrected dimensions. Remarkably, the poles corresponding to the exchanged double-trace operators are precisely captured by the product of Gamma functions $\prod_{i<j} \Gamma(\delta_{ij})$ that Mack stripped off to define the Mellin amplitude $\mathcal M$. All in all, we conclude that the $O(1/N^{k-2})$ Mellin amplitude $\mathcal M$ is a meromorphic function, whose poles are controlled by just the exchanged single-trace operators.

9 In two dimensions, there are no double-twist families, but one encounters a different pathology: the existence of infinitely many operators of the same twist, because Virasoro generators have twist zero.
10 For definiteness, we are using the large N counting appropriate to a theory with matrix degrees of freedom, e.g., a U(N) gauge theory. In other kinds of large N CFTs the leading correction would have a different power - for example, $O(1/N^3)$ in the $A_N$ six-dimensional (2, 0) theory, and O(1/N) in two-dimensional symmetric product orbifolds.
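In standard conventions (the channel assignments are our assumption, chosen for consistency with the Gamma factors quoted below), the parametrization (3.17) reads:

```latex
% (3.17): Mandelstam-like parametrization of the four-point Mellin variables
\begin{align}
\delta_{12} &= \tfrac{\Delta_1+\Delta_2-s}{2}\,, &
\delta_{34} &= \tfrac{\Delta_3+\Delta_4-s}{2}\,, &
\delta_{13} &= \tfrac{\Delta_1+\Delta_3-u}{2}\,, \nonumber\\
\delta_{24} &= \tfrac{\Delta_2+\Delta_4-u}{2}\,, &
\delta_{14} &= \tfrac{\Delta_1+\Delta_4-t}{2}\,, &
\delta_{23} &= \tfrac{\Delta_2+\Delta_3-t}{2}\,.
\tag{3.17}
\end{align}
```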
Let us analyze in some detail the case of the four-point function. For four scalar operators $O_i$ of dimensions $\Delta_i$, conformal covariance implies the form (3.20), where U and V are the usual conformal cross-ratios,
$$U = \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}\,, \qquad V = \frac{x_{14}^2\, x_{23}^2}{x_{13}^2\, x_{24}^2}\,.$$
Taking the operators $O_i$ to be unit-normalized single-trace operators, and separating out the disconnected and connected terms, we have the familiar large N counting: the disconnected part is O(1), while the connected part is $O(1/N^2)$. The Mellin amplitude $\mathcal M$ is defined by the integral transform (3.24), spelled out below. Let us first assume that the dimensions $\Delta_i$ are generic. In the s-channel OPE, we expect contributions to $\mathcal G_{\rm conn}$ from the tower of double-trace operators of the schematic form $:\!O_1 \Box^n \partial_\ell O_2\!:$, which have twist $\Delta_1 + \Delta_2 + 2n$, and from the tower $:\!O_3 \Box^n \partial_\ell O_4\!:$, which have twist $\Delta_3 + \Delta_4 + 2n$. The relevant OPE coefficients scale such that these contributions arise precisely at $O(1/N^2)$, so that to leading $O(1/N^2)$ order we can neglect the $1/N^2$ corrections to the conformal dimensions of the double-trace operators. All in all, we expect that these towers of double-trace operators contribute poles in s at
$$s = \Delta_1 + \Delta_2 + 2n \qquad \text{and} \qquad s = \Delta_3 + \Delta_4 + 2n\,, \qquad n = 0, 1, 2, \dots \qquad (3.26)$$
These are precisely the locations of the poles of the first two Gamma functions in (3.24). In complete analogy, the poles in t and u in the other Gamma functions account for the contributions of the double-trace operators exchanged in the t and u channels. If $\Delta_1 + \Delta_2 - (\Delta_3 + \Delta_4) = 0$ mod 2, the two sequences of poles in (3.26) (partially) overlap, giving rise to a sequence of double poles at $s_0 = \max(\Delta_1 + \Delta_2,\, \Delta_3 + \Delta_4) + 2n$. A double pole at $s = s_0$ gives a contribution to $\mathcal G_{\rm conn}(U, V)$ of the form $U^{s_0/2} \log U$. This has a natural interpretation in terms of the $O(1/N^2)$ anomalous dimensions of the exchanged double-trace operators. Indeed, a little thinking shows that in this case both OPE coefficients in the s-channel conformal block expansion are of order one (in contrast with the generic case (3.26)), so that the $O(1/N^2)$ correction to the dilation operator gives a leading contribution to the connected four-point function. Let's see this more explicitly. Let's take for definiteness $\Delta_1 + \Delta_2 \leq \Delta_3 + \Delta_4$, so that $\Delta_3 + \Delta_4 = \Delta_1 + \Delta_2 + 2k$ for some non-negative integer k. Then the double-trace operators of the schematic form $:\!O_1 \Box^{n+k} \partial_\ell O_2\!:$ and $:\!O_3 \Box^{n} \partial_\ell O_4\!:$ have the same conformal dimension to leading large N order, as well as the same Lorentz quantum numbers. They are then expected to mix under the action of the $O(1/N^2)$ dilation operator, and the mixing matrix relating the two bases is non-trivial. All in all, we find a contribution to $\mathcal G_{\rm conn}$ of the form $U^{(\Delta_3 + \Delta_4 + 2n)/2} \log U$; in Mellin space, this corresponds to a double pole at $s = \Delta_3 + \Delta_4 + 2n$, just as needed. In summary, the explicit Gamma functions that appear in Mack's definition provide precisely the analytic structure expected in a large N CFT, if we take the $O(1/N^2)$ Mellin amplitude $\mathcal M$ to have poles associated with just the exchanged single-trace operators. The upshot is that to leading $O(1/N^2)$ order, fixing the single-trace contributions to the OPE is sufficient to determine the double-trace contributions as well. By following a similar reasoning, we will now argue that compatibility with the large N OPE imposes some further constraints on the analytic structure of $\mathcal M$.
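In standard conventions (normalization assumed, not taken from the text), the transform (3.24) reads:

```latex
% (3.24): four-point Mellin representation with the double-trace Gamma factors
\begin{equation}
\mathcal{G}_{\rm conn}(U,V) \;=\;
\int \frac{ds\, dt}{(4\pi i)^2}\;
U^{s/2}\, V^{\frac{t - \Delta_2 - \Delta_3}{2}}\;
\mathcal{M}(s,t)\;
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_2-s}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_3+\Delta_4-s}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_4-t}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_2+\Delta_3-t}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_3-u}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_2+\Delta_4-u}{2}\Big],
\tag{3.24}
\end{equation}
```

with $u = \sum_i \Delta_i - s - t$; the first two Gamma factors indeed have poles at $s = \Delta_1 + \Delta_2 + 2n$ and $s = \Delta_3 + \Delta_4 + 2n$, matching (3.26).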
We have seen that to leading $O(1/N^2)$ order the Mellin amplitude $\mathcal M(s, t, u)$ is a meromorphic function with only simple poles associated to the exchanged single-trace operators. In the generic case, a single-trace operator $O_{ST}$ of twist τ contributing to the s-channel OPE is responsible for an infinite sequence of simple poles at $s = \tau + 2n$, $n \in \mathbb{Z}_{\geq 0}$ (and similarly for the other channels). But this rule needs to be modified if this sequence of "single-trace poles" overlaps with the "double-trace poles" from the explicit Gamma functions in (3.24). This happens if $\tau = \Delta_1 + \Delta_2$ mod 2, or if $\tau = \Delta_3 + \Delta_4$ mod 2. (We assume for now that $\Delta_1 + \Delta_2 \neq \Delta_3 + \Delta_4$ mod 2, so that only one of the two options is realized.) In the first case, the infinite sequence of poles in $\mathcal M$ must truncate to the set $\{\tau, \tau + 2, \dots, \Delta_1 + \Delta_2 - 2\}$, and in the second case to the set $\{\tau, \tau + 2, \dots, \Delta_3 + \Delta_4 - 2\}$.15 This truncation must happen because double poles in s, translating to ∼ log U terms in $\mathcal G_{\rm conn}$, are incompatible with the large N counting. Indeed, the OPE coefficients already provide an $O(1/N^2)$ suppression, so that we should use the O(1) dilation operator, and no logarithmic terms can arise in $\mathcal G_{\rm conn}$ to leading $O(1/N^2)$ order.16 Mellin amplitudes for Witten diagrams The effectiveness of the Mellin formalism is best illustrated by its application to the calculation of Witten diagrams. Conceptually, Mellin space makes transparent the analogy between holographic correlators and S-matrix amplitudes. Practically, Mellin space expressions for Witten diagrams are much simpler than their position space counterparts. For starters, the Mellin amplitude of a four-point contact diagram, which is the building block of AdS four-point correlators as we reviewed in section 2, is just a constant. As was shown in [41], this generalizes to n-point contact diagrams with a non-derivative vertex: their Mellin amplitude is again a constant. Contact diagrams with derivative vertices are also easily evaluated. It will be important in the following that the Mellin amplitude for a contact diagram arising from a vertex with 2n derivatives is an order n polynomial in the Mandelstam variables $\delta_{ij}$. Exchange diagrams are also much simpler in Mellin space. The s-channel exchange Witten diagram with an exchanged field of conformal dimension ∆ and spin J has a Mellin amplitude with the simple analytic structure (3.31) [44], reproduced below, where τ = ∆ − J is the twist. Here the $Q_{J,m}(t)$ are polynomials in t of degree J and $P_{J-1}(s, t)$ is a polynomial in s and t of degree J − 1. These polynomials depend on the dimensions $\Delta_{1,2,3,4}$, ∆, as well as the spin J. The detailed expressions for these polynomials are quite complicated but will not be needed for our analysis. The m = 0 pole at s = τ is called the leading pole, corresponding to the primary operator that is dual to the exchanged field, while the m > 0 poles are called satellite poles, and they are associated with conformal descendants. It has been observed (see, e.g., [41]) that the infinite series of poles in (3.31) truncates to a finite sum if $\tau = \Delta_1 + \Delta_2$ mod 2 or if $\tau = \Delta_3 + \Delta_4$ mod 2.

15 Note that the first set is empty if $\Delta_1 + \Delta_2 < \tau$ (again we are assuming $\Delta_1 + \Delta_2 = \tau$ mod 2) and the second is empty if $\Delta_3 + \Delta_4 < \tau$ (with $\Delta_3 + \Delta_4 = \tau$ mod 2). In these cases, $O_{ST}$ does not contribute any poles to $\mathcal M$.
16 In the even more fine-tuned case $\tau = \Delta_1 + \Delta_2 = \Delta_3 + \Delta_4$ mod 2, the poles in s in the $O(1/N^2)$ Mellin amplitude must clearly satisfy both truncation conditions simultaneously.
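For reference, the analytic structure (3.31) described in the text takes the form:

```latex
% (3.31): Mellin amplitude of an s-channel exchange Witten diagram
\begin{equation}
\mathcal{M}(s,t) \;=\; \sum_{m=0}^{\infty}
\frac{Q_{J,m}(t)}{s - \tau - 2m} \;+\; P_{J-1}(s,t)\,,
\qquad \tau = \Delta - J\,.
\tag{3.31}
\end{equation}
```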
One finds that the upper limit of the sum, $m_{\max}$, is given by $\Delta_1 + \Delta_2 - \tau = 2(m_{\max} + 1)$ in the first case and by $\Delta_3 + \Delta_4 - \tau = 2(m_{\max} + 1)$ in the second case. This is the Mellin space version of the phenomenon described in section 2: an exchange Witten diagram with these special values of quantum numbers can be written as a finite sum of contact Witten diagrams. As we have explained in the previous subsection, this remarkable simplification is dictated by compatibility with the large N OPE in the dual CFT. Asymptotics and the flat space limit In the next section we will determine the supergravity four-point Mellin amplitude using general consistency principles. A crucial constraint will be provided by the asymptotic behavior of $\mathcal M(s, t)$ when s and t are simultaneously scaled to infinity. On general grounds, one can argue [41] that in this limit the Mellin amplitude should reduce to the flat-space bulk S-matrix (in $\mathbb{R}^{d,1}$). A precise prescription for relating the massless flat-space scattering amplitude $T(K_i)$ to the asymptotic form of the holographic Mellin amplitude was given in [41] and justified in [71]; schematically (in units R = 1),
$$\mathcal M(\delta_{ij}) \;\approx\; \frac{1}{\Gamma\big(\tfrac{1}{2}\sum_i \Delta_i - \tfrac{d}{2}\big)} \int_0^\infty d\beta\; \beta^{\frac{1}{2}\sum_i \Delta_i - \frac{d}{2} - 1}\, e^{-\beta}\; T\big(S_{ij} = 2\beta\, \delta_{ij}\big)\,, \qquad (3.32)$$
where $S_{ij} = -(K_i + K_j)^2$ are the Mandelstam invariants of the flat-space scattering process. We have a precise expectation for the asymptotic behavior of the flat-space four-point amplitude T(S, T): it can grow at most linearly for large S and T. Indeed, a spin-ℓ exchange diagram grows with power ℓ − 1, and the highest spin state is of course the graviton with ℓ = 2. Similarly, contact interactions with 2n derivatives give a power n growth, and IIB supergravity (in ten-dimensional flat space) contains contact interactions with at most two derivatives. From (3.32) we then deduce that $\mathcal M(\beta s, \beta t)$ can grow at most linearly for large β (3.33). It is of course crucial to this argument that we are calculating within the standard two-derivative supergravity theory. Stringy α′-corrections would introduce higher derivative terms and invalidate this conclusion. Curiously, the asymptotic behavior (3.33) is not immediately obvious if one computes holographic correlators in AdS5 × S5 by the standard diagrammatic approach. Exchange Witten diagrams have the expected behavior, with growth at most linear from spin two exchanges, see (3.31). However, the AdS5 effective action [29] obtained by Kaluza-Klein reduction of IIB supergravity on S5 contains quartic vertices with four derivatives (or fewer). The four-derivative vertices are in danger of producing an $O(\beta^2)$ growth, which would ruin the expected flat space asymptotics. On this basis, we made the assumption in [1] that the total contribution of the four-derivative vertices to a holographic correlator must also grow at most linearly for large β. Indeed, this was experimentally the case in all the explicit supergravity calculations performed at the time. Fortunately, the conjectured cancellation of the $O(\beta^2)$ terms has been recently proved in full generality [50]. The general one-half BPS four-point amplitude in Mellin space As we have just reviewed, holographic correlators are most naturally evaluated in Mellin space. Mellin amplitudes have an intuitive interpretation as scattering processes in AdS space, and their analytic structure is simple and well understood. We have also discussed the additional simplification that occurs for one-half BPS correlators in AdS5 × S5 supergravity. The Kaluza-Klein spectrum satisfies the "truncation conditions" that allow exchange Witten diagrams to be expressed as finite sums of contact diagrams.
This translates into the statement that the Mellin amplitude for these correlators is a rational function, with poles at predictable locations controlled by the single-particle spectrum. We have not yet imposed the constraints of superconformal invariance. They turn out to be so stringent that when combined with the analytic structure of the Mellin amplitude they appear to completely fix the answer! In this section we derive a set of algebraic and analytic conditions on the Mellin amplitude for one-half BPS correlators with arbitrary weights. We have found a simple solution of these constraints, which we believe to be unique. We start in section 4.1 by reviewing the superconformal Ward identity in position space. A useful technical step is the introduction of auxiliary variables σ and τ to keep track of the R-symmetry quantum numbers. We translate the Ward identity into Mellin space in section 4.2. The Mellin amplitude $\mathcal M(s, t; \sigma, \tau)$ is written in terms of a difference operator acting on an auxiliary object $\widetilde{\mathcal M}(s, t; \sigma, \tau)$. A purely algebraic problem is then formulated in section 4.3 by imposing a set of consistency conditions on $\widetilde{\mathcal M}(s, t; \sigma, \tau)$. We find a simple, elegant solution to this problem in section 4.4. While we lack a general proof, we believe that this is the unique solution, and we do show uniqueness in section 4.4.1 in the simplest case where all $p_i = 2$. Finally, in section 4.5 we discuss some subtleties with the contour prescription in the inverse Mellin transform. We show in particular how the "free" piece of the correlator can arise as a regularization effect. Superconformal Ward identity: position space The global symmetry group of N = 4 SYM is PSU(2, 2|4), which contains as subgroups the four-dimensional conformal group SO(4, 2) ≅ SU(2, 2) and the R-symmetry group SO(6) ≅ SU(4). One can keep track of the R-symmetry quantum numbers by contracting the SO(6) indices of each operator with a complex null vector $t_i$. The four-point function is thus a function of the spacetime coordinates $x_i$ as well as the "internal" coordinates $t_i$. R-symmetry covariance and the null property require that the $t_i$ variables can only show up as a sum of monomials $\prod_{i<j} (t_{ij})^{\gamma_{ij}}$ with integer powers $\gamma_{ij} \geq 0$, where we have defined $t_{ij} \equiv t_i \cdot t_j$. Moreover, the exponents $\gamma_{ij}$ are constrained by $\sum_{i \neq j} \gamma_{ij} = p_j$, as seen by requiring the correct homogeneity under independent scaling of each null vector $t_i \to \zeta_i t_i$. We can solve this set of constraints by the parameterization (4.3) in terms of auxiliary variables (a, b, c), with the additional condition $a + b + c = p_1 + p_2 + p_3 + p_4$. Without loss of generality we can order the weights so that $p_1 \leq p_2 \leq p_3 \leq p_4$. Then we should distinguish two possibilities, $p_1 + p_4 \leq p_2 + p_3$ (case I) and $p_1 + p_4 > p_2 + p_3$ (case II). (4.4) In either case the inequality constraints $\gamma_{ij} \geq 0$ define a cube inside the parameter space (a, b, c), as shown in figure 3. The solution is further restricted by the condition $a + b + c = p_1 + p_2 + p_3 + p_4$, which carves out the equilateral triangle inside the cube, shown shaded in the figure. It is useful to find the coordinates of the vertices of the cube closest to and furthest from the origin, which we denote as $(a_{\min}, b_{\min}, c_{\min})$ and $(a_{\max}, b_{\max}, c_{\max})$.
Then in case I these vertices are given by (4.5), and in case II by (4.6). Denoting by 2L the length of each side of the cube, we find the values of L in the two cases recorded in (4.7). From the parametrization (4.3) we see that $\gamma_{ij} \geq \gamma^0_{ij}$, where the $\gamma^0_{ij}$ are obtained by substituting the maximal values $(a_{\max}, b_{\max}, c_{\max})$. Factoring out the corresponding monomial, the four-point function can be written in terms of a function $\mathcal G(U, V; \sigma, \tau)$, where besides the usual conformal cross ratios we have introduced the analogous R-symmetry cross ratios
$$\sigma = \frac{t_{13}\, t_{24}}{t_{12}\, t_{34}}\,, \qquad \tau = \frac{t_{14}\, t_{23}}{t_{12}\, t_{34}}\,.$$
It is easy to see that $\mathcal G(U, V; \sigma, \tau)$ is a polynomial of degree L in σ and τ. So far we have only imposed covariance under the bosonic subgroups of the supergroup PSU(2, 2|4). The fermionic generators impose further constraints on the four-point function. It is useful to introduce the change of variables
$$U = z \bar z\,, \quad V = (1-z)(1-\bar z)\,, \qquad \sigma = \alpha \bar\alpha\,, \quad \tau = (1-\alpha)(1-\bar\alpha)\,. \qquad (4.12)$$
In terms of these variables, the superconformal Ward identity reads [72,73]
$$\partial_z \Big[\, \mathcal G\big(z\bar z,\, (1-z)(1-\bar z);\, \alpha, \bar\alpha\big) \Big] \Big|_{\alpha \to 1/z} = 0\,.$$
Its solution can be written as [72,73]20
$$\mathcal G(U, V; \sigma, \tau) = \mathcal G_{\rm free}(U, V; \sigma, \tau) + R\, \mathcal H(U, V; \sigma, \tau)\,, \qquad (4.14)$$
where $\mathcal G_{\rm free}$ is the answer in free SYM theory and R is a fixed factor, of degree two in σ and τ, given in (4.15). All dynamical information is contained in the a priori unknown function $\mathcal H(U, V; \sigma, \tau)$. Note that $\mathcal H(U, V; \sigma, \tau)$ is a polynomial in σ, τ of degree L − 2. Superconformal Ward identity: Mellin space We now turn to analyze the constraints of superconformal symmetry in Mellin space. We rewrite (4.14) for the connected correlator,
$$\mathcal G_{\rm conn}(U, V; \sigma, \tau) = \mathcal G_{\rm free, conn}(U, V; \sigma, \tau) + R\, \mathcal H(U, V; \sigma, \tau)\,, \qquad (4.16)$$
from which we define the Mellin amplitude $\mathcal M$,21 where as always s, t, u obey the constraint $s + t + u = \sum_i p_i$. On the right-hand side of (4.16), the first term is the free part of the correlator. It consists of a sum of terms of the form $\sigma^a \tau^b\, U^m V^n$, where m, n are integers and a, b non-negative integers. The Mellin transform of any such term is ill-defined. As we shall explain in section 4.5, there is a consistent sense in which it can be defined to be zero. The function $\mathcal G_{\rm free, conn}(U, V; \sigma, \tau)$ will be recovered as a regularization effect in transforming back from Mellin space to position space.22 We then turn to the second term on the right-hand side of (4.16). We define an auxiliary amplitude $\widetilde{\mathcal M}$ from the Mellin transform of the dynamical function $\mathcal H$ (4.21). Note that we have introduced a "shifted" Mandelstam variable $\tilde u = u - 4$ (4.22). This shift is motivated by the desire to keep the crossing symmetry properties of $\mathcal H$ as simple as possible, as we shall explain shortly. Let us also record the expressions of the inverse transforms (4.23)-(4.24); the precise definition of the integration contours will require a careful discussion in section 4.5 below. We are now ready to write down the Mellin translation of (4.16). It takes the simple form
$$\mathcal M(s, t; \sigma, \tau) = \widehat R \circ \widetilde{\mathcal M}(s, t; \sigma, \tau)\,. \qquad (4.25)$$

20 There is an implicit regularity assumption for $\mathcal H(U, V; \sigma, \tau)$ as $\alpha \to 1/z$, otherwise equation (4.14) would be an empty statement.
21 This definition should be taken with a grain of salt. In general, the integral transform of the full connected correlator is divergent. In the supergravity limit, there is a natural decomposition of $\mathcal G_{\rm conn}$ into a sum of $\bar D$-functions, each of which has a well-defined Mellin transform in a certain region of the s and t complex domains. However, it is often the case that there is no common region such that the transforms of the $\bar D$-functions are all convergent. On the other hand, the inverse Mellin transform (4.23) is well-defined, but care must be taken in specifying the integration contours. We will come back to this subtlety in section 4.5.
22 Our treatment of the free part of the correlator also turns out to be consistent in the context of holographic higher spin theory, as is discussed in v3 of [74].
The multiplicative factor R has turned into a difference operator R̂, where the hatted monomials Û^m V̂^n are defined to act as follows. …

Crossing symmetry and ũ

The Mellin amplitude M satisfies Bose symmetry: it is invariant under permutation of the Mandelstam variables s, t, u if the external quantum numbers are also permuted accordingly. The auxiliary amplitude M̃ has been defined to enjoy the same symmetry under permutation of the shifted Mandelstam variables s, t, ũ. The point is that the factor R multiplying H is not crossing-invariant, and the shift in u precisely compensates for this asymmetry. Let us see this in some detail. To make expressions more compact, we introduce some shorthand notation for the following combinations of coordinates, (4.28) In the equal-weights case (on which we focus for simplicity), the four-point function G(x_i, t_i) is related to G(U, V; σ, τ) by … Substituting into this expression the inverse Mellin transformation (4.23), one finds … where we defined

∑_{I+J+K=L} a^K b^I c^J M_IJK(s, t) ≡ a^L M(s, t; σ, τ) .

In terms of these new variables, crossing amounts to permuting simultaneously (A, B, C) and (a, b, c). On the other hand, a similar representation exists for R H. The factor R can be expressed as … with a crossing-invariant numerator and a non-invariant denominator. When we go to the Mellin representation of a^L R H by substituting in (4.24), we find that the power of B receives an additional −2 from the denominator of R in (4.33), explaining the shift from u to ũ. We see that in the auxiliary amplitude M̃, the role of u is played by ũ. This generalizes to the unequal-weight cases.

An algebraic problem

Let us now take stock and summarize the properties of M̃ that we have demonstrated so far: … We will now show that compatibility with (4.37) requires that the M̃_ijk(s, t) must also be rational functions. (The argument that follows is elementary but slightly elaborate and can be safely skipped on first reading.) The two sets of R-symmetry monomials {σ^I τ^J} and {σ^i τ^j} can be conveniently arranged into two equilateral triangles, illustrated respectively by figure 4 and figure 5. The Bose symmetry that relates different R-symmetry monomials corresponds to the S_3 symmetry of the equilateral triangle. Let us start by considering the monomial 1 in M, which is associated to the coefficient M_{0,0,L}(s, t). This monomial can only be reproduced … is also finite. The strategy is now clear. We start from the corners of the triangle and move along the edges. Each time we encounter a new element M̃_{i,j,k}(s, t) multiplied by a single difference operator of the type Û^m V̂^n, and by recursion we can prove that this new term has finitely many poles. After finishing the outer layer of the R-symmetry triangle, we move onto the adjacent layer, again starting from the three corners and then moving along the edges. It is not hard to see that at each step the same situation occurs and we only need to deal with one new element at a time. For example, στ M_{1,1,L−2}(s, t), which is on the top corner of the second layer, is generated, via the −στ terms of the difference operator, from four elements of the auxiliary amplitude. Among these four elements, M̃_{0,0,L−2}(s, t), σ M̃_{1,0,L−3}(s, t) and τ M̃_{0,1,L−3}(s, t) belong to the outer layer and were determined to be rational in the previous round. Only the element στ M̃_{1,1,L−3}(s, t) belongs to the inner layer, and it is acted on by the simple difference operator V̂. By recursion, M̃_{1,1,L−3}(s, t) is therefore also rational. In finitely many steps we can exhaust all the elements M̃_ijk; this concludes the proof of their rationality. (The traversal order is sketched schematically below.)
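The traversal order used in this recursion is easy to make concrete. The following bookkeeping sketch (illustrative only; no amplitude data is involved) lists the labels (i, j, k) with i + j + k = L − 2 in the order corners-then-edges, layer by layer, exactly as the argument above proceeds:

```python
def triangle_order(N):
    """Order in which the coefficients M~_{ijk} (i + j + k = N) are shown
    to be rational in the recursion of section 4.3: corners of the
    outermost layer first, then its edges, then the next layer inward."""
    points = {(i, j, N - i - j) for i in range(N + 1) for j in range(N + 1 - i)}
    order, layer = [], 0
    while points:
        # current layer: points whose smallest coordinate equals `layer`
        shell = sorted(q for q in points if min(q) == layer)
        # corners of a layer have their two smallest coordinates equal
        corners = [q for q in shell if sorted(q)[1] == layer]
        edges = [q for q in shell if q not in corners]
        order += corners + edges
        points -= set(shell)
        layer += 1
    return order

print(triangle_order(3))  # e.g. L - 2 = 3: three corners, six edge points, then (1,1,1)
```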
It might at first sight appear that this procedure amounts to an algorithm to invert the difference operator R̂, but of course this is not the case. For generic M_IJK, one would find contradictory results for some element M̃_ijk when applying the recursion procedure along different paths in the triangle.

Solution

Experimentation with low-weight examples led us to the following ansatz for M̃, … (4.48) The reader can check that this ansatz leads to an M that satisfies the asymptotic requirement, obeys Bose symmetry, and has simple poles at the required locations. The further requirement that the poles have polynomial residues fixes the coefficients a_ijk uniquely up to normalization, … where (L−2 choose i, j, k) is the trinomial coefficient. The overall normalization cannot be fixed from our homogeneous consistency conditions. In principle, it can be determined by transforming back to the position-space expression (4.16). As we shall show below, the term G_free,conn arises as a regularization effect in the inverse Mellin transformation. The constant f(p_1, p_2, p_3, p_4) is fixed by requiring that the regularization procedure gives the correctly normalized free-field correlator. In practice this is very cumbersome, and it is easier to take G_free,conn as an input from free-field theory. The overall normalization of M is then fixed by imposing the cancellation of the spurious singularities associated with single-trace long operators [46], which are separately present in G_free,conn and in R H but must cancel in the sum. This method was used in [75] to determine f(p, p, q, q), the normalization in all cases with pairwise equal weights. The normalization for arbitrary weights f(p_1, p_2, p_3, p_4) has recently been determined in [76] by further taking a light-like limit.

Uniqueness for p_i = 2

Uniqueness of the ansatz (4.47) is in general difficult to prove. However, in simple examples it is possible to solve the algebraic problem directly, thereby proving that the answer is unique. In this subsection we demonstrate this for the simplest case, the equal-weights case with p = 2. This case is particularly simple because M̃ has no σ, τ dependence. Recall that the Mellin amplitude M has simple poles in s, t and u whose positions are restricted by the condition (4.41). Specifically, in the case p_i = 2 it means that … , which is just our solution (4.47).
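For reference, the combinatorial ingredient singled out by the residue conditions can be tabulated directly. The sketch below only computes the trinomial coefficients (L−2 choose i, j, k) that the text says enter the a_ijk; their precise packaging into the ansatz and the normalization f(p_1, p_2, p_3, p_4) are as described above and are not reproduced here. For equal weights the degree count gives L = p (e.g. L = 2 when all p_i = 2, where M̃ is a single term):

```python
from math import factorial

def trinomial(n, i, j, k):
    """(n choose i, j, k) = n! / (i! j! k!), with i + j + k = n."""
    assert i + j + k == n and min(i, j, k) >= 0
    return factorial(n) // (factorial(i) * factorial(j) * factorial(k))

# Equal weights p = 4 (so L = 4): tabulate the S3-symmetric pattern over the
# triangle i + j + k = L - 2 = 2. For p = 2 there is a single coefficient.
L = 4
for i in range(L - 1):
    for j in range(L - 1 - i):
        k = L - 2 - i - j
        print((i, j, k), trinomial(L - 2, i, j, k))
```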
Contour subtleties and the free correlator

In this section we address some subtleties related to the s and t integration contours in the Mellin representation. These subtleties are related to the decomposition of the position-space correlator into a "free" and a dynamical term. In transforming to Mellin space, we have ignored the term G_free,conn. We are going to see how this term can be recovered by taking the inverse Mellin transform with proper integration contours. The four-point function calculated from supergravity with the traditional method is a sum of four-point contact diagrams, known as D̄-functions. (Their precise definition is given in (5.4).) Through the repeated use of identities obeyed by the D̄-functions, the supergravity answer can be massaged into a form that agrees with the solution to the superconformal Ward identity, with a singled-out "free" piece. Manipulations of this sort can be found in, e.g., [23,24,27,77]. Most of the requisite identities have an elementary proof either in position space or in Mellin space, but the crucial identity which is key to the separation of the free term, namely (4.55), requires additional care. The Mellin transform of the r.h.s. is clearly ill-defined. We will now show that the Mellin transform of the l.h.s. is also ill-defined, because while each of the three terms has a perfectly good transform for a finite domain of s and t (known as the "fundamental domain"), the three domains have no common overlap. A suitable regularization procedure is required to make sense of this identity. Let us see this in detail. Recall that the Mellin transform of an individual D̄-function is just a product of Gamma functions. Its fundamental domain can be characterized by the condition that all the arguments of the Gamma functions are positive [78]. For the three D̄-functions appearing on the l.h.s. of (4.55), we have … (4.56) so the contours are specified by selecting a point inside the fundamental domains, (s_0i, t_0i) ∈ D_i. With Δ_4 = Δ_1 + Δ_2 + Δ_3, one finds that the fundamental domains are given by … Multiplication by U and V in the second and the third terms, respectively, shifts^23 the domains D_2 and D_3 into new domains D_2′ and D_3′, (4.59) This is problematic because … Clearly it makes no sense to add up the integrands if the contour integrals share no common domain. On the other hand, if one is being cavalier and sums up the integrands anyway, one finds that the total integrand vanishes. This is "almost" the correct result, since the r.h.s. of the identity (4.55) is simply a constant, whose Mellin transform is ill-defined and was indeed set to zero in our analysis in the previous section. We can however do better and reproduce the exact identity if we adopt the following "regularization" prescription: we shift s + t → s + t + ε, with ε a small positive real number. After this shift, the three domains develop a small common domain of size ε (figure 6), and we can therefore place the common integration contour inside this overlap and combine the integrands, (4.62) As ε → 0, we can just substitute s = t = 0 into the non-singular part of the integrand. The resulting integral is easily evaluated, (4.63) This amounts to a "proof" of the identity (4.55) directly in Mellin space.

Footnote 23: To absorb U^m V^n outside the integral into U^{s/2} V^{t/2} inside the integral, one shifts s and t to bring the integrand back to the form U^{s/2} V^{t/2}. Doing so amounts to shifting D to D′ by the vector (2m, 2n).

This exercise contains a useful general lesson. As we have already remarked, the identity (4.55) is responsible for generating the term G_free,conn by collapsing sums of D̄-functions in the supergravity answer. We have shown that it is consistent to treat the Mellin transform of G_free,conn as "zero", provided that we are careful about the s, t integration contours in the inverse Mellin transform. In general, when one is adding up integrands, one should make sure the integrals share the same contour, which may require a regularization procedure of the kind we have just used. A naively "zero" Mellin amplitude can then give nonzero contributions to the integral if the contour is pinched to an infinitesimal domain where the integrand has a pole. In appendix C we illustrate, in the simplest case of equal weights p_i = 2, how the free-field correlator is correctly reproduced by this mechanism; a small numerical illustration of the domain bookkeeping follows below.
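The domain bookkeeping can be illustrated numerically in the setting of appendix C, where the fundamental domain for the transform of H in the p_i = 2 case is D = {Re s < 2, Re t < 2, Re s + Re t > 2}, and a monomial U^m V^n shifts it by (2m, 2n) (footnote 23). A minimal grid-based sketch follows; the direction in which ε relaxes the domains is chosen to match the text's statement that a small common domain of size ε opens up, so treat it as an illustration rather than a derivation:

```python
import numpy as np

def domain(m, n, eps=0.0):
    """Fundamental domain D = {Re s < 2, Re t < 2, Re s + Re t > 2} shifted
    by (2m, 2n) for a monomial U^m V^n, with the s + t -> s + t + eps
    regularization of section 4.5 applied to the s + t boundary."""
    a, b = 2 * m, 2 * n
    return lambda s, t: (s < 2 + a) & (t < 2 + b) & (s + t > 2 + a + b - eps)

def overlap(domains, lo=-1.0, hi=7.0, npts=701):
    s, t = np.meshgrid(np.linspace(lo, hi, npts), np.linspace(lo, hi, npts))
    mask = np.ones_like(s, dtype=bool)
    for d in domains:
        mask &= d(s, t)
    return bool(mask.any())

# the three tau-terms combined in appendix C: monomials 1, V, U
shifts = [(0, 0), (0, 1), (1, 0)]
print(overlap([domain(m, n) for (m, n) in shifts]))            # False: no common contour
print(overlap([domain(m, n, eps=0.05) for (m, n) in shifts]))  # True: eps opens an overlap near (2, 2)
```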
We conclude by alerting the reader to another small subtlety. The free term G_free,conn depends on the precise identification of the operators dual to the supergravity modes s_p. As explained in footnote 5, if one adopts the scheme where the fields s_p contain no derivative cubic couplings, the dual operators are necessarily admixtures of single- and multi-trace operators. While the multi-trace pieces are in general subleading, they can affect the free-field four-point function if the four weights are sufficiently "unbalanced". This phenomenon was encountered in [26,27], where the four-point functions with weights (2, 2, p, p) were evaluated from supergravity. A discrepancy was found for p ≥ 4 between the function G_free,conn obtained by writing the supergravity result in the split form (4.16) and the result obtained in free field theory from Wick contractions, assuming that the operators are pure single-traces. The resolution is that supergravity is really computing the four-point function of more complicated operators with multi-trace admixtures. Note that the contribution to the four-point functions from the multi-trace terms takes the form of a product of two- and three-point functions of one-half BPS operators, and is thus protected [30]. The ambiguity in the precise identification of the dual operators can then only affect G_free,conn and not the dynamical part.

The position space method

We now switch gears and describe a logically independent position-space method. This method mimics the traditional recipe for computing four-point functions in supergravity, but eschews detailed knowledge of the supergravity effective action and complicated combinatorics. The idea is to write the full amplitude as a sum of exchange diagrams and contact diagrams, parametrizing the vertices with undetermined coefficients. The spectrum of IIB supergravity on AdS_5 × S^5 is such that all the exchange diagrams can be written as a finite sum of contact diagrams, i.e., D-functions, making the whole amplitude a sum of D-functions. We then use the properties of D-functions to decompose the amplitude into a basis of independent functions. The full amplitude is encoded in four rational coefficient functions. Imposing the superconformal Ward identity, we find a large number of relations among the undetermined coefficients. Uniqueness of the maximally supersymmetric Lagrangian guarantees that all the coefficients in the ansatz can be fixed up to an overall rescaling. Finally, the overall constant can be determined by comparing with the free-field result after restricting the R-symmetry cross ratios to a special slice [73] (this is related to the chiral algebra twist [79]). We emphasize that there is no guesswork anywhere. The position-space method is guaranteed to give the same results as a direct supergravity calculation, but it is technically much simpler. We will discuss the method only for the equal-weight case p_i = p, but its generalization to the unequal-weight case is straightforward. In addition to reproducing the known examples p = 2, 3, 4, we computed the new case of p = 5 and found it to be in agreement both with the conjecture of [46] and with our Mellin amplitude conjecture (4.47), (4.49). We have included these results in the form of tables of coefficients in appendix D. The explicit form of the results as sums of D̄-functions is also available as a Mathematica notebook included in the arXiv version of this paper.
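The last step of the method described above reduces to a standard linear-algebra problem: the Ward identity yields homogeneous linear equations on the undetermined vertex coefficients, and uniqueness of the two-derivative supergravity Lagrangian predicts a one-dimensional kernel. A toy sketch with sympy (the matrix below is fabricated purely for illustration; it is not actual supergravity data):

```python
import sympy as sp

# Stand-in for the homogeneous system A x = 0 produced by the superconformal
# Ward identity on the vertex coefficients x. The Ward identity is highly
# redundant, so dependent rows are typical; uniqueness of the supergravity
# Lagrangian predicts a one-dimensional null space (the rescaling xi).
A = sp.Matrix([
    [1, -1,  0, 2],
    [0,  2, -2, 0],
    [1,  1, -2, 2],   # dependent row (sum of the first two)
    [2,  0, -2, 1],
])
kernel = A.nullspace()
print(len(kernel))     # 1 -> coefficients fixed up to one overall constant
print(kernel[0].T)     # here: Matrix([[1, 1, 1, 0]])
```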
We start by reviewing some facts about exchange and contact Witten diagrams, some of which have already been mentioned in the previous sections. We then explain in detail how to decompose the position-space ansatz into a basis and how to implement the superconformal Ward identity. We end the section with a demonstration of the method in the simplest case, p_i = 2. Explicit formulae and technical details are given in the appendices.

Exchange diagrams

The Kaluza-Klein fields that can appear in the internal propagator of an exchange diagram have been listed in table 1. The allowed fields are restricted by the R-symmetry selection rule and by a twist cut-off, τ < 2p. The origin of this twist cut-off was discussed in section 2, and an alternative explanation from the Mellin amplitude perspective was given in section 3.3. The requisite exchange diagrams were computed in the early days of the AdS/CFT correspondence. They can all be represented as finite sums of D̄-functions. We have summarized the relevant formulae in appendix A. As each exchanged field belongs to a certain R-symmetry representation [n − m, 2m, n − m], we should multiply the exchange Witten diagrams by the corresponding R-symmetry polynomial Y_nm. These polynomials were derived in [73], … where the P_n(α) are the usual Legendre polynomials. The R-symmetry polynomials are eigenfunctions of the R-symmetry Casimir operator and are thus the "compact" analogue of the conformal partial waves.

Contact diagrams

In addition to the exchange diagrams, the four-point function receives contributions from contact diagrams. The contact vertices in the effective Lagrangian were explicitly worked out in [29], with the number of derivatives going up to four. However, as we argued, the requirement of a good flat-space limit forbids genuine four-derivative contributions. This was recently confirmed in [50] by explicit computation. Therefore only zero-derivative and two-derivative vertices effectively contribute to the four-point function. We also observe a further simplification for the equal-weight case: the zero-derivative contributions can be absorbed into the two-derivative ones when the external dimension satisfies p ≠ 4. The proof of this statement is presented in appendix B.

Reducing the amplitude to four rational coefficient functions

As always, it will be convenient to write the amplitude as a function of the conformal and R-symmetry cross-ratios, pulling out an overall kinematic factor. The D̄-functions are defined in terms of contact Witten diagrams (known as D-functions, see (5.4)) by the extraction of such a kinematic factor. The set of D̄-functions is overcomplete, as its elements are related by several identities, but our method requires a basis of independent functions. To remove this redundancy we represent the D̄-functions in a way that makes the identities manifest. We use the fact that every D̄_{Δ1Δ2Δ3Δ4} can be obtained from Φ(U, V) ≡ D̄_{1111}(U, V) by the action of differential operators in U and V. The following six differential operators allow one to move around in the weight space (Δ_1, Δ_2, Δ_3, Δ_4) of D̄-functions (see, e.g., [23]): … The "seed" D̄-function Φ(U, V) is the famous scalar one-loop box integral in four dimensions and can be expressed in closed form in terms of logarithms and dilogarithms. After the change of variables U = z z̄ and V = (1 − z)(1 − z̄), the integral can be written in such a closed form, quoted with caveats in the sketch below. The function Φ obeys two differential relations [72], which express its derivatives in terms of Φ itself and the logarithms log(z z̄) and log((1 − z)(1 − z̄)).
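Since the closed form of Φ is not reproduced above, here is the standard expression as it appears in much of the four-point-function literature (the overall normalization convention is an assumption on our part), together with a numerical check of its z ↔ z̄ symmetry:

```python
import mpmath as mp

def Phi(z, zb):
    """Scalar one-loop box D1111-bar in the z, zb variables, with
    U = z*zb and V = (1 - z)(1 - zb). Standard closed form in terms of
    dilogarithms and logarithms; normalization convention assumed."""
    num = (2 * mp.polylog(2, z) - 2 * mp.polylog(2, zb)
           + mp.log(z * zb) * mp.log((1 - z) / (1 - zb)))
    return num / (z - zb)

# Numerator and denominator are both antisymmetric under z <-> zb,
# so Phi is symmetric: the two evaluations below agree.
print(Phi(mp.mpf('0.3'), mp.mpf('0.7')))
print(Phi(mp.mpf('0.7'), mp.mpf('0.3')))
```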
The recursive use of these two identities makes it clear that each D̄-function can be uniquely written as

D̄ = R_Φ Φ + R_U log U + R_V log V + R_0 ,

where the R_{Φ,U,V,0} are rational functions of z and z̄. As a result, the supergravity amplitude also admits such a unique decomposition, … The coefficient functions R^sugra_{Φ,U,V,0} are now polynomials in the R-symmetry variables α and ᾱ, where each R-symmetry monomial α^m ᾱ^n is multiplied by a rational function of U and V. The coefficient functions R^sugra_{Φ,U,V,0} depend linearly on the undetermined coefficients that we have used to parameterize the vertices. Our ansatz A^sugra must satisfy the superconformal Ward identity. The solution can be written simply as

A^sugra(z, z̄; α, 1/z̄) = G_free(z, z̄; α, 1/z̄) , (5.10)

with G_free(z, z̄; α, 1/z̄) a rational function that depends only on z and α and can be obtained from free field theory [73]. Under our decomposition, the superconformal Ward identity becomes a set of conditions on the rational coefficient functions, … These conditions imply a large set of linear equations for the undetermined parameters. Uniqueness of two-derivative IIB supergravity strongly suggests that these conditions must admit a unique solution, up to overall rescaling. This is indeed what we have found in all examples. Finally, the overall normalization is determined by comparing the coefficient function of R_0 with the free-field result,

R_0(z, z̄; α, 1/z̄) = G_free,conn(z, z̄; α, 1/z̄) . (5.12)

An example: p_i = 2

We now illustrate the position-space method in the simplest case, p_i = 2. The four-point amplitude with four identical external scalars has an S_3 crossing symmetry. Since the total amplitude is a sum over all Witten diagrams, we can just compute one channel and use crossing symmetry to obtain the other two channels. In the s-channel, we know from table 1 and the twist cut-off τ < 4 that only three fields can be exchanged: a scalar of dimension two in the representation [0, 2, 0], a vector, and a massless symmetric graviton in the singlet representation. In the above expressions we have used the formulae for exchange Witten diagrams from appendix A and multiplied them by the explicit expressions of the R-symmetry polynomials Y_00, Y_11, Y_10 given by (5.3). The constants λ_s, λ_v and λ_g are undetermined parameters. For the contact diagram, following the discussion of appendix B, we only need to consider two-derivative vertices. The most general contribution is as follows (only in the s-channel, as we will sum over the channels in the next step), … where c_ab = c_ba because the s-channel is symmetric under the exchange of 1 and 2. We can obtain the amplitudes in the t- and u-channels by crossing. The total amplitude is the sum of the contributions from the three channels. Denoting the s-channel contribution as A_s, the ansatz for the crossing-symmetric total amplitude is … Being a sum of D̄-functions, A^sugra can be systematically decomposed into Φ, ln U, ln V and the rational part. For example, the coefficient function of Φ is of the form … where the numerator T(z, z̄; α, ᾱ) is a polynomial of degree 2 in α, ᾱ and of degree 12 in z and z̄. The superconformal Ward identity then requires T(z, z̄; α, 1/z̄) = 0 and reduces to a set of homogeneous linear equations. Their solution is … where ξ is an arbitrary overall constant.
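The R-symmetry polynomials Y_00, Y_10, Y_11 used above can be generated explicitly. The determinant-of-Legendre closed form below is our assumption (a common construction in the literature; the text only cites [73] for the derivation), and both the variable map y = 2α − 1 and the overall normalization may differ from the conventions of (5.3):

```python
import sympy as sp

a, ab = sp.symbols('alpha alphabar')

def Y(n, m):
    """R-symmetry polynomial Y_nm as a two-variable Legendre construction.
    ASSUMED closed form and normalization; the variable map y = 2*alpha - 1
    is also an assumption, so compare against (5.3) before relying on it."""
    y, yb = 2 * a - 1, 2 * ab - 1
    P = sp.legendre
    expr = (P(n + 1, y) * P(m, yb) - P(m, y) * P(n + 1, yb)) / (y - yb)
    return sp.expand(sp.cancel(expr))

for (n, m) in [(0, 0), (1, 0), (1, 1)]:   # the three used in the p = 2 example
    print((n, m), Y(n, m))                # Y(0,0) comes out as the constant 1
```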
We then compute the "twisted" correlator, (5.21) and compare it to the free-field result, … The functional agreement of the two expressions provides a consistency check and fixes the value of the last undetermined constant,

ξ = 32 / (3 N² π²) . (5.23)

The final answer agrees with the result in the literature [22].

Conclusion

The striking simplicity of the general Mellin formula (4.47), (4.49) is a real surprise. Like the Parke-Taylor formula for tree-level MHV gluon scattering amplitudes, it encodes in a succinct expression the sum of an intimidating number of diagrams. The authors of [75,80,81] have used the information contained in our Mellin formula to disentangle the degeneracies and compute the O(1/N²) anomalous dimensions of the double-trace operators of the schematic form O_p ∂^n O_q. The solution of this mixing problem turns out to be remarkably simple, giving further evidence for some hidden elegant structure. An interesting question is whether our results could be recovered by a more constructive approach, perhaps in the form of a Mellin version of the BCFW recursion relations.^24 Such an approach would lend itself more easily to the generalization to higher n-point correlators. A preliminary step in this direction is setting up the Mellin formalism for operators with spin (see [42,53] for the state of the art of this problem). Our work admits several natural extensions. At tree level, a direct generalization of the methods developed here has led to structurally similar results for holographic correlators in AdS_7 × S^4, which will be described in an upcoming paper [84]. The extension to AdS_3 × S^3 × M_4 also appears within reach [85].^25 In all these backgrounds, the KK spectrum obeys the truncation conditions, and Mellin amplitudes for tree-level correlators are rational functions. This is not the case for a generic holographic background. The most important example that violates the truncation conditions is the maximally supersymmetric case AdS_4 × S^7. New techniques will have to be developed to handle such cases [87]. At the loop level, impressive progress has been made recently by several authors [45,75,80,81], and it will be interesting to push this program using the insights of our methods. In conclusion, holographic correlators in N = 4 SYM theory appear to be much simpler and more elegant than previously understood. We believe that this warrants their renewed exploration, following the spirit of the modern approach to perturbative gauge theory amplitudes.

A Formulae for exchange Witten diagrams

We are interested here in the case where the exchange diagrams truncate to a finite number of D-functions, as a result of a conspiracy between the spectrum and the spacetime dimension. A simple general method for calculating such exchange diagrams in AdS_{d+1} was found in [36]. We collect in this appendix the relevant formulae needed in the computation of the four-point function of identical scalars. The external operators have conformal dimension Δ and the exchanged operator conformal dimension δ.

Scalar exchanges. …

Vector exchanges. …

Massive symmetric tensor exchanges. The Witten diagrams for massive symmetric tensor exchange were worked out in [23] for the general case^26 of AdS_d, and applied to the AdS_5 case. We fixed a small error in [23], which only affects the results for d ≠ 5 and thus leaves the conclusions of [23] unaltered. For future reference, we reproduce the general calculation here.
Due to the complexity of the explicit form of the general solution, we will not present the answer here as a sum of D-functions. Instead we break the evaluation into a few parts and give a prescription for how to assemble them into a sum of D-functions. The four-point amplitude T(x_1, x_2, x_3, x_4) due to the exchange of a massive symmetric tensor of dimension δ is … (A.9) Here f = δ(δ − d + 1) is the m² of the exchanged massive tensor, and K_Δ denotes the scalar bulk-to-boundary propagator. By conformal inversion and translation, A_μν can be rewritten as … (A.12) For any scalar function b(t), … (A.14) Here, as is standard in the literature, we have denoted … The functions h(t), φ(t), X(t), Y(t) are subject to the following set of equations, … where a is an integration constant that will cancel out when we substitute the solution into the ansatz for I_μν. These equations come from acting with the modified Ricci operator W_μν^ρλ on A_μν and equating terms of the same structure;^27 we omit the tedious algebra here. We start from the last equation and look for a polynomial solution for φ(t). As we will see shortly, a polynomial solution leads to a truncation of the exchange diagram to finitely many D-functions. We find … For the polynomial solution to exist, k_max − k_min = Δ − δ/2 must be a non-negative integer. When 2Δ = δ − 2, which is the extremal case, the polynomial solution ceases to exist.

Footnote 27: There is an error in (E.4) of [23] that must be fixed in order to generalize to arbitrary d. The correct equation is [88] …

After obtaining the polynomial solution for φ(t), we can easily solve for h(t), X(t), Y(t) from the remaining three equations. It is then easy to see that I_μν(t) contains only finitely many terms of the following four types: … We can get A_μν from I_μν(w − x) with the following substitutions: … The last step is to contract A_μν with T_μν. We list below some handy contraction formulae, (A.24) The above derivation amounts to an algorithm for writing the requisite exchange diagrams as a sum of D-functions. The explicit final result is too long to be reproduced here.

B Simplification of contact vertices

In this appendix we show that the zero-derivative contact vertices can be absorbed into the two-derivative ones when the dimension of the external scalar does not equal the spacetime dimension of the boundary theory. A zero-derivative contact vertex takes the form … while a two-derivative contact vertex is … Here α_i collectively denotes the R-symmetry indices of the i-th field s. Following the standard procedure in AdS supergravity calculations, we substitute in the on-shell value of the scalar field, so that it is determined by its boundary value s^α(P). The two types of contact vertices then become … Because the external fields are identical, C_{α1α2α3α4} is totally symmetric, while S_{α1α2α3α4} is only required to be symmetric under α_1 ↔ α_2, α_3 ↔ α_4 and (α_1 α_2) ↔ (α_3 α_4). This in particular means that the totally symmetric C_{α1α2α3α4} is an admissible choice of S_{α1α2α3α4}. Let us see the consequence of taking S_{α1α2α3α4} = C_{α1α2α3α4}: … Here K_i ≡ K_Δ(P_i), and we have used the total symmetry of C_{α1α2α3α4} to symmetrize the expression. If we now perform the AdS integral first, each term can be written as a sum of D-functions. For example, … The two-derivative vertex then becomes … and we simplify the expression to … We have therefore proved that when Δ ≠ d, we can absorb the contribution from zero-derivative contact vertices into the two-derivative ones.
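As a small aside, the truncation condition stated in appendix A above can be transcribed directly into a helper, useful for checking which exchanges reduce to finitely many D-functions. This is a literal transcription of the text's condition (with the extremal case it flags excluded), nothing more:

```python
def tensor_exchange_truncates(Delta, delta):
    """True iff the massive symmetric tensor exchange truncates: the text
    requires k_max - k_min = Delta - delta/2 to be a non-negative integer,
    with the extremal case 2*Delta = delta - 2 excluded (the polynomial
    solution for phi(t) then ceases to exist)."""
    if 2 * Delta == delta - 2:       # extremal case noted in the text
        return False
    diff = Delta - delta / 2
    return diff >= 0 and float(diff).is_integer()

# External Delta = 2 (the p_i = 2 example): only delta = 2 and delta = 4 pass.
for delta in range(2, 8):
    print(delta, tensor_exchange_truncates(2, delta))
```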
C Recovering the free correlator for p_i = 2

… (C.1) We can get the Mellin transform of G_sugra,conn(U, V; σ, τ) by Mellin-transforming each D̄-function in G_sugra,conn. Formally, the transformation reads M(s, t; σ, τ) = … The factor R was introduced before; we repeat it here for the reader's convenience, … The first term G_free,conn is the connected free-field four-point function, which can be computed by Wick contractions, … The function H was obtained in [89], … We write H as an inverse Mellin transform, (C.8) where the contour C is associated with a point inside the fundamental domain

(s_0, t_0) ∈ D = {(s_0, t_0) | Re(s) < 2, Re(t) < 2, Re(s) + Re(t) > 2} , (C.9)

represented by the yellow size-two triangle in figure 8. When multiplied by R, this domain leads to six different domains generated by the six different shifts in R, namely 1, U, V, UV, U², V². They are the six colored triangles^28 in figure 8. Having stated the results for the two sides of (C.4) (the "unmassaged" l.h.s., whose Mellin transform is given by (C.3), and the "massaged" r.h.s., where the Mellin transform of H is given by (C.8)), we will now try to match them. Compared to the supergravity answer, there are three more size-two triangles on the right side. They are in the colors yellow, pink and blue, and are respectively due to the shifts caused by the terms τ, V²σ and U²στ. Using the regularization procedure we introduced in section 4.5, they can be eliminated by combining them with terms from the other triangles that we want to keep. Let us now describe in detail how this can be done. We first consider the term multiplied by τ in R, which we combine with the terms multiplied by −τV and −τU from R. Naively the three shifted domains do not overlap. Under the regularization, these three domains grow a small overlap, which allows us to add the integrands once the contours have all been moved there, (C.10) Here C_{(2,2),ε} denotes that we put the contour inside the size-ε triangle (not shown in the figure) at (2, 2) shared by these three triangles. We now analyze the terms in this integral. The ε¹ term is the same integral as the one that we encountered in the proof of the identity; it is evaluated to give … The ε² term is easily seen to be zero. For the ε⁰ term, we rewrite it as

(st − 4)/2 = (1/2)(s − 2)(t − 2) + (s − 2) + (t − 2) . (C.12)

The point of this rewriting is that the zeros of (s − 2) and (t − 2) cancel the corresponding poles of the Gamma functions, so that one is allowed to "open up the boundaries" and enter a bigger domain. For example, consider the term (s − 2) above. Its contour was originally placed in the size-ε domain at (2, 2), but it can now be moved into the size-two green triangle because (s − 2) cancels the simple pole at s = 2 from Γ[1 − s/2]. Similarly, the domain of the (1/2)(s − 2)(t − 2) term can be extended to the size-four grey triangle, and the (t − 2) term to the size-two orange triangle, for the same reason. On the other hand, for the σV² triangle, we combine it into σ(−V + V² − UV)H. The goal of splitting the ε⁰ term here is to open up the boundaries into the orange, red and grey triangles, and from the ε¹ term one gets the monomial −σ (4/N²) U.

Footnote 28: In addition to the previously defined red, green and orange triangles, there are also size-two pink and blue triangles.
For the στU² triangle, one similarly combines it into στ(−U + U² − UV)H. The term from the rewriting generates the monomial −στ (4/N²) U² V⁻¹. Collecting these monomials, one gets −G_free,conn, canceling precisely the free-field part in the split formula. To carry out the rest of the check, it is simplest to gather terms with the same R-symmetry monomial. In the p = 2 case one has six R-symmetry monomials, and one can divide them into two groups: first check 1, σ² and τ², then τ, σ, στ. In fact, checking just one term in each class is enough, because both the supergravity result and the result written in split form have crossing symmetry; these two classes of monomials form two orbits under the S_3 crossing symmetry group. One also needs the above trick of using zeros to open up boundaries (or its opposite, using poles to close them). But here one finds it is only necessary to shrink or expand between the size-four grey triangle and the size-two orange, red and green triangles. Because the manipulation is from one finite-size domain to another finite-size domain, the contour always has room to escape, and one never picks up additional terms from the "domain-pinching" mechanism. We performed this explicit check and found a perfect match.

D The p = 3, 4, 5 results from the position space method

p = 3. The p = 3 computation is very similar to the p = 2 case. In total there are 6 exchange diagrams in the s-channel. They include the full k = 2 multiplet s_2, A_2, φ_2 and the 3 fields s_4, A_4, φ_4 from the k = 4 multiplet. We have the following ansatz for the s-channel exchange amplitude, … Note that each exchange amplitude A_field contains the corresponding R-symmetry polynomial Y_nm; the exchange formulae as sums of D̄-functions can be found in appendix A. …

p = 5. The computation for p = 5 is similar to that for p = 2 and p = 3. The ansatz is given by … The solution to this case is …

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Role of Technological Knowledge in Remote Learning/E-Learning: Exploring the Post-Pandemic Scenario of the Tertiary-Level Students of Bangladesh

The study aims to look into students' technological knowledge from the perspective of academia. The study tried to find out the advantages students enjoy due to having essential technological skills and the problems students faced during online classes for lacking such skills. The results of this study look at the tech skills one should develop for an effective and smooth teaching-learning process. The recent pandemic has had a considerable impact on the studies of graduate students and has made it mandatory to have technological knowledge to conduct studies. Data have been collected through a quantitative approach.

Introduction

Technology in education has evolved over the last 50 years. Computers were not standard inside the classroom even 20 years ago. In contrast, today's students and children are considered "digital natives," meaning they have grown up with digital technology such as the internet, computers, and smartphones. The rapid advancement of technology in education cannot go unnoticed, as it occurs worldwide. It provides students with access to numerous resources and aids them in the learning process (Evanouski). Most universities and educational institutions have already started utilizing technology within their teaching methods. From online colleges to digital certificate programs and hybrid set-ups, technology is reshaping the world of education. The effective use of digital learning tools in classrooms can increase student engagement, help teachers improve their lesson plans, and facilitate personalized learning. It also helps students build essential 21st-century skills. Virtual classrooms, videos, augmented reality (AR), robots, and other technology tools can not only make classes livelier but also create more inclusive learning environments that foster collaboration and enable teachers to collect data on student performance (Importance of Technology in Education).

Still, it is essential to note that technology is a tool used in education and not an end in itself. The promise of educational technology lies in what educators do with it and how it is used to support their students' needs (How Important is Technology in Education). This study explores the advantages students gain, or the disadvantages they suffer, depending on their knowledge of the available technology. This study also aims to determine the skills a student is likely to require to complete the overall academic process. Based on this research output, institutions can offer a skill development package for participants and practitioners to contribute to future development.

Research Objectives

Research objectives describe what research is trying to achieve and explain why it is being pursued (Ryan). These are the outcomes that are aimed to be achieved by conducting research. The research objectives drive the research project, including data collection, analysis, and conclusions (Indeed Editorial Team). The entire world faced a pandemic during which the teaching-learning process was conducted online. The objective of this research is to look into students' technological knowledge. The data of this study focus on the advantages and challenges a student faces during online classes for having, or lacking, tech skills. Another purpose of this research is to determine the tech skills required for an effective and smooth teaching-learning process.

Research Questions
1. What are the advantages students have due to having basic tech skills?
2. What problems does a student face during online classes due to lacking basic tech skills?
3. What tech skills should one develop for an effective and smooth teaching-learning process?

Literature Review

The foundations of the Bangladeshi education system were laid by the British. It consists of three levels: primary, secondary, and higher education. In Bangladesh, both primary and secondary education are compulsory. Primary education consists of eight years, while secondary education lasts four years. Secondary education is divided into a lower level and a higher level, and public examinations are held after each level of schooling. Schools in cities and towns are generally better staffed and better financed than those in rural areas. There are hundreds of colleges, most affiliated with the prominent public universities of Bangladesh. Bangladesh relies on several engineering colleges and a network of polytechnic and law colleges for vocational training. In addition, an array of specialized colleges is dedicated to training students in areas such as the arts, home economics, social welfare and research, and various aspects of agriculture. Literacy improved significantly in the 21st century: less than half of the population could read and write at the beginning of the century, but by the late 2010s, more than two-thirds were literate (Bangladesh - Education).

The introduction of the first four-bit Intel microprocessor, the 4004, in 1971 and the subsequent overwhelming growth of personal computer use in every sphere of life established that development in the IT sector would be a primary key to a nation's success. Realizing this truth, and observing the successful history of neighboring countries like India, Sri Lanka, Singapore, Thailand, Malaysia, and many others, the Bangladesh Government has declared IT a thrust sector (Information Technology Education in Bangladesh). The education sector has experienced tremendous growth as a result of technological development. Innovation has enhanced educational technology, a field of study that specializes in evaluating, designing, and developing new techniques, implementing them, and assessing the productivity of the instructional environment. Educational technology allows students to learn and appreciate new technologies as soon as they emerge. Today's learners are inclined toward using new technologies efficiently. Understanding emerging innovations allows them to weigh new tools' positive and negative outcomes before adopting them in professional careers and studies (Enos).

At Bangladesh University of Engineering and Technology, formal computer education first started in 1984 with the foundation of the Computer Science and Engineering Department. After that, IT education gradually extended to the bachelor's, higher secondary, and secondary levels. Because of poor economic conditions, most schools in the country cannot afford to buy computers for their students. Only a few city-based schools have computer laboratory facilities. However, even these fail to familiarize their students with the internet, email, and related technology because of the lack of nationwide telecommunication infrastructure and internet facilities. In addition, the teaching community at these levels lacks IT training (Information Technology Education in Bangladesh).
The world was fighting the coronavirus, which had spread to nearly every point of the globe over the first three months of 2020. At the end of May, the death toll crossed 369,124, while the total number of infected people was over 6 million across the world. Aside from the economy, the sector that has been harmed the most by the outbreak of COVID-19 is the education sector. The disease started spreading from China at the start of 2020, and infection rates accelerated in March. Consequently, different educational institutions across the globe began to shut down gradually, and Bangladesh shut down all its educational institutions on March 17. COVID-19 forced digital education on teachers and students, with students using their home computers, laptops, or smartphones over the internet, staying away from their academic institutions (Khan et al.). Educational technology has become essential for today's learners because it allows them to learn much faster than they would without such tools and programs. Similarly, teachers need to use the latest tools available in their work to engage students. To engage students in learning, one needs to be innovative and introduce new ideas so that students get excited about what they are learning. Educational technology has become essential for teachers because of its importance in today's education industry (Lim).

Methodology

This research covers undergraduate students of different universities and faculties in Bangladesh and their technological knowledge for academic purposes. Undergraduate students around the world usually learn technology as they keep up with modern developments. As Bangladesh's context differs somewhat from that of technologically advanced countries, we aimed to see how much technological knowledge students have in education, how much technological knowledge they used during the pandemic, and what problems they faced due to a lack of technological knowledge. Data for this research have been collected through a quantitative approach.

Instrument

The study has benefited from a quantitative research approach. Data were collected through a digital questionnaire containing 18 questions, which was distributed among 120 students. Among the questions, 15 are close-ended and three are open-ended. Data were collected from undergraduate students of different universities in Bangladesh under different faculties.

Data Analysis

The data generated from the questionnaire were analyzed using Google Forms. Descriptive statistics were used to analyze and represent the data.

Study Group

The participants of this study were undergraduate students studying in different departments throughout Bangladeshi universities. Both male and female students participated in answering the questions.

Findings

The following conclusions are drawn from the responses provided by university students to the questions contained in the survey instrument.

Which Device(s) do you use for Academic Purposes?

Most of the survey respondents (75%) use smartphones alongside other devices. The second majority (42.5%) use laptops for academic purposes. 18.3% of the survey respondents use desktops, while only 2.5% use tablets.

Are you Skilled in Different Tools used for Academic Purposes?

51.7% of survey respondents responded yes, 30% responded "to some extent," and 18.3% responded no.

Which Platform did you use for Academic Purposes during the Pandemic?
Students used multiple platforms for academic purposes during the pandemic. The most frequently used platform was Google Meet (79.7%). The second highest was Google Classroom (64.4%). Zoom was in third position (60.2%). Email (40.7%), Facebook (34.7%), and Google browsing (23.7%) were also used for academic purposes during the pandemic.

What are your Tech Skills? (You can Choose Multiple)

We can observe that students have multiple technical skills. Most students (78.3%) are skilled with office packages. 67.5% of the respondents are skilled with social media and online meeting applications. The skills of sending emails and browsing the internet stand at 65.8%, and basic computer operation is the lowest at 63.3% of the respondents.

Having the Required Tech Skills Helped me to Join the Class on Time

The majority of the students (62.5%) agree with the above statement, and 15% of the survey respondents strongly agree. 12.5% of students are neutral, while only 3.3% of the respondents disagree, and 6.7% strongly disagree with this statement.

Having the Required Tech Skills Helped me to Interact Actively in the Online Class

The statement above was agreed upon by 61.9% of students, and 14.4% strongly agreed. 21.2% of students were neutral. Besides this, 1.7% of students disagreed, and only 0.8% strongly disagreed.

Having the Required Tech Skills Helped me to Submit My Class Work on Time and Properly

62.1% of respondents in this survey agreed, and 22.4% strongly agreed with this statement. 12.1% were neutral. Besides, only 1.7% disagreed, and 1.7% strongly disagreed.

Having the Required Tech Skills Helped me to Take the Assessment Smoothly

The majority of the survey respondents (62.4%) agree with the above statement, and 22.2% of students strongly agree that the required tech skills helped them to take assessments smoothly. 12% of the students were neutral, only 1.7% disagreed, and 1.7% strongly disagreed with this statement.

Having the Required Tech Skills Helped me Get/Give Feedback Effectively and Promptly

67.8% of students agreed, and 16.9% strongly agreed with this statement. 11.9% of the total respondents to this survey were neutral, while only 2% disagreed and 1.4% strongly disagreed.

Did you Face Problems with Online Classes due to Lacking the Required Technical Skills?

Most students (45.8%) responded "sometimes" to this question. 24.2% of students responded "yes," while 30% responded "no."

In your Experience, what Problems may Students who Lack Basic Tech Skills Face? (You can Choose Multiple)

60.8% of respondents struggle with delivering a presentation. 57.5% of students face challenges in sending assignments to their teachers. 45% of the respondents face issues while joining an online class, 38.3% of participants in this survey face problems in completing a term paper, 37.5% have problems sending emails, and 24.2% of students face other technical problems.

Do you Agree that Specific Tech Skills are Required for an Effective and Smooth Teaching-Learning Process?

59.2% of the students agree, and 29.2% of students strongly agree with this statement. 11.7% of students neither agree nor disagree, and none disagree.

Should there be a Basic Skill Training Facility for Developing Tech Skills?

94.1% of students think there should be a basic skill training facility for developing tech skills, while only 5.9% of the respondents think otherwise.
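For reference, multi-select items like those above are straightforward to tabulate outside Google Forms as well. A minimal sketch with pandas, using hypothetical column names and toy records (raw exports typically store each respondent's selections as one delimited string); percentages are computed per respondent, so, as in the findings above, they can sum to more than 100:

```python
import pandas as pd

# Toy records under a hypothetical column name; each cell holds one
# respondent's multi-select answers as a semicolon-delimited string.
df = pd.DataFrame({
    "tech_skills": [
        "office package;email;browsing",
        "office package;social media",
        "email;browsing;basic computer operation",
    ]
})

# one row per (respondent, selected option)
skills = df["tech_skills"].str.split(";").explode()
# share of respondents selecting each option (multi-select: totals may exceed 100%)
pct = skills.value_counts().div(len(df)).mul(100).round(1)
print(pct)
```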
Discussion on the Findings

Having technological skills gives students several advantages. 76.3% of students agree (14.4% strongly agree) that the required tech skills helped them actively participate in the online class. 84.5% of respondents agree (among them, 22.4% strongly agree) that the required technological skills helped them to submit class work on time and correctly. Not only did 67.8% of students agree, but 16.9% strongly agreed, that the required tech skills helped them get/give feedback effectively and on time. The questionnaire responses illustrate that students gain diverse advantages from tech skills.

Students face difficulties with online classes when they lack basic technological skills, as the questionnaire responses illustrate. 24.2% of students confirmed that they faced problems, and 45.8% of students confirmed that they sometimes faced problems, during online classes because of not having the required technical skills. According to the survey questionnaire responses, 60.8% of respondents struggle with delivering a presentation session. 57.5% of students face challenges in sending assignments to their teachers. 45% of the respondents face issues while joining an online class, 38.3% of participants in this survey face problems in completing a term paper, 37.5% have problems sending emails, and 24.2% of students face other technical problems.

The questionnaire responses also illustrate that certain tech skills are required for an effective and smooth teaching-learning process: 88.4% of students agreed (among them, 29.2% strongly agree) on this. 94.1% of students think there should be a basic skill training facility for developing tech skills. 65% of respondents feel that the training program should include an office package (word processing, spreadsheets, presentations), 57.5% feel that computer operation should be included in the training, and 42.5% and 40.8% of respondents think that the training should include Google Meet/Zoom and Google Classroom, respectively. 32.5% of students also think internet browsing should be in the tech training program.

Limitations

The study is based on a small number of participants from different public and private universities in Bangladesh. As the survey was conducted online, many students skipped filling out the questionnaire. The defensibility and reliability of this research could be increased with a larger number of participants, although we expect the results to remain consistent if the research is conducted again. Another limitation is that the instruments used here to collect data digitally may have been misinterpreted by some respondents, yielding inaccurate information. Several issues could not be addressed due to time constraints. The validity and reliability of this study can be increased if further research is conducted with a larger pool of participants.
Conclusion and Recommendations

Students gain an advantage if they have the necessary technical skills. These skills help them in many ways, such as interacting actively in the online class, submitting class work correctly, and getting/giving timely feedback. We found that students lacking technological skills face various problems in their academic life, such as delivering presentation sessions, sending assignments to teachers, completing term papers, joining online classes, and other difficulties. On the basis of the questionnaire responses of this research, students who are equipped with technological knowledge gain an advantage in their academic life in terms of efficient class activity, submitting class work, presentations, and so on. Those lacking technological knowledge face various issues, including delivering presentation sessions, submitting assignments and term papers, and even joining online classes. We found that most students (94.1%) feel they need technological training to overcome this problem. Our recommendation is based on the responses to the survey questionnaire. We recommend that technological training workshops be held to help students develop the necessary technological skills for their academic life; these should cover basic computer operation, office packages (word processing, spreadsheets, presentations), and online conference and class applications such as Google Meet, Zoom, and Google Classroom.

Figure: Frequency Distribution of Department Students' Responses

Which of the Following Items should be Included in the Training Program?

Most survey respondents (65%) responded that office programs should be included in the training program. 57.5% of respondents think computer operation should be included. 42.5% of respondents expressed that the training should also include operating Google Meet and Zoom. 40.8% of students want to learn to operate Google Classroom, and 32.5% of students want training on internet browsing.
The Impact of LED Color Rendering on the Dark Adaptation of Human Eyes at Tunnel Entrances

The dark adaptation of drivers' eyes at a tunnel entrance seriously affects traffic safety. This can be improved by the design of tunnel lighting. Light-Emitting Diodes (LEDs) have been applied as a new type of luminaire in tunnel lighting in recent years, but at present there are few studies on the influence of the color rendering of LEDs on tunnel traffic safety, and there is no explicit indicator for the selection of appropriate color rendering parameters in tunnel lighting specifications, which has aroused researchers' concern. In this article, several new color rendering evaluation indexes were compared; as a result, CRI2012 (a color-difference-based color rendering index) is considered more suitable for evaluating the color rendering of LEDs used at tunnel entrances. The dark adaptation phenomenon was simulated in the laboratory. Four CRI2012 values, three color temperatures, and eight colored targets were used in the experiments. The results showed that yellow, silver, and white targets yield shorter reaction times, while red and brown lead to longer reaction times, which can provide a reference for the design of road and warning signs at tunnel entrances. The effect of target color on reaction time was greater than that of color rendering. Under most target colors, the higher the CRI2012, the shorter the reaction time. When designing the color rendering of the LEDs at a tunnel entrance, the value should thus be as large as possible (close to 100), and a lower color temperature (about 2800 K) should be selected. This paper provides technical support for tunnel lighting design and a reference for tunnel lighting specifications, which is of significance for improving driving safety and avoiding traffic accidents in highway tunnels.

Introduction

Highway tunnel entrances are the sections with the highest traffic accident rate in the whole tunnel [1][2][3]. Although fewer accidents occur in tunnels than on open roads [4][5][6], the casualties and losses of traffic accidents happening in tunnels are more serious than those on open roads [7][8][9]. The main factor that causes the frequent traffic accidents at the tunnel entrance is the dark adaptation of human eyes. Drivers' dynamic visual characteristics are most closely related to traffic accidents [10,11]. Traffic accidents bring serious threats to personal and property safety. As a result, it is very significant to analyze the causes of traffic accidents and improve the traffic environment at tunnel entrances. When a driver enters a tunnel during the daytime, the human eye suffers a "black hole effect" due to the sharp change in luminance [12], which becomes more obvious when the difference between the internal and external luminance is large [13][14][15]. As driving speeds on highways are relatively fast, serious traffic accidents may occur if the reaction time is too long [16,17]. Therefore, reducing the dark adaptation period can increase the traffic safety factor at the tunnel entrance. In addition to limiting the vehicle speed at the tunnel entrance, the effect of dark adaptation is usually attenuated by reducing the luminance difference between the inside and the outside of the tunnel. Limited by traditional luminaires such as high-pressure sodium lamps, whose characteristic parameters such as correlated color temperature (CCT) and color rendering are fixed, tunnel lighting used to be designed based on luminance alone [18][19][20].
Light-Emitting Diodes (LEDs), which have the advantages of low light attenuation, high luminous efficiency, long life, and energy savings, have been widely used in tunnel lighting in recent years. As the parameters of LEDs, including CCT and color rendering, are not fixed, the applicability and rationality of these parameters in tunnel lighting are not clearly considered in the current specifications. As a result, researchers have begun to focus on the effect of the characteristics of LEDs on tunnel lighting. Studies have shown that three important characteristics of LEDs (CCT, color rendering, and luminous intensity) all play significant roles in driving safety [21][22][23]. At present, there are many studies on the influence of luminance and CCT on tunnel driving safety [21,[24][25][26], but few on the influence of color rendering. There are, however, some studies on the application of color rendering in tunnel lighting and on open roads. In 2009, Ekrias et al. [27] stated that colors have a major effect on target visibility in road lighting with lamps of adequate color rendering properties. It is not known whether the use of light sources with good color rendering properties can actually reduce traffic accident rates by improving the visibility of colored targets. In 2016, Deng et al. [28] studied the effect of tunnel light color on drivers' visual performance. Two tunnels were selected to carry out a small-target identification experiment. The results showed that the higher the color rendering, the better drivers can identify an obstacle. In 2017, Zhang et al. [29] used 15 light combinations (an incomplete traversal of five CCTs and four color rendering indexes (CRIs)) to study the effect of color rendering on visual performance in tunnels. The results showed that increasing the light color rendering can improve visibility without increasing the light power. These studies offer few selections for color rendering parameters. Although they chose different color rendering values with different CCTs, they did not keep the CCT of the lights used in the experiments consistent across the different color rendering indices. To make the results more convincing, in this paper the CCT is kept at the same value while different color rendering properties are selected, and a variety of common CCTs are considered in order to maintain the accuracy and comprehensiveness of the experimental results. As for the spectrum of the LEDs, it is consistent with the LED spectra commonly used in tunnels (both are bimodal, discontinuous spectra), to make the conclusions easier to apply in practice. While there are few studies on the application of color rendering in tunnel lighting, there are more in museums [30] and in home and office lighting scenarios [31]. These studies generally agree that LEDs with high color rendering are more suitable for indoor lighting. The experiments in these studies are generally based on subjective feelings, while the research in this paper is based on an objective parameter: reaction time. In addition, there are some studies on the effect of color rendering on visual properties. In 2003, Chee et al. [32] studied visual acuity under light sources with different CCT and color rendering.
The results showed that visual acuity is approximately proportional to the average color rendering index under fluorescent lamps, high-pressure sodium lamps, metal halide lamps and electrodeless discharge lamps. In 2015, Watanuki [33] found that the color rendering property certainly affects color emotions, and that males and females feel different color emotions from skin color: males perceive the "lightness" of a color first, while females perceive its "activity" first. The lightness factor shows a correlation with the intensity of illumination, and the activity factor has a negative correlation with the intensity of illumination and the alternative CRI. In 2017, Huang et al. [34] conducted a series of psychophysical experiments to investigate and compare the effect of certain factors on color preference, including the spectral power distribution (SPD) of light, the lighting application, observers' personal color preference, regional cultural differences and gender differences. The results showed that the impact of SPD on color preference is significantly stronger than that of the other factors, as well as their interactions. Although these studies did not involve this specific application scenario, they still provide references for the research in this paper. These studies prove that color rendering does affect some visual characteristics, but few studies have linked color rendering to tunnel traffic safety, and there is little research on the effect of color rendering on dark adaptation at tunnel entrances. In this article, the effect of color rendering on reaction time, which is directly related to personal safety, is the main object of study. Generally speaking, most studies on color rendering are based on subjective perception, and these studies generally show a positive correlation between color rendering and visual characteristics. In the past few years, our laboratory has provided lighting design recommendations for several tunnels of the Heda highway in Jilin Province, China, and has accumulated practical engineering experience and issues to be improved. Tunnel lighting design departments and management pay great attention to the selection of color rendering, but there are no official guidelines for reference; the specifications for tunnel lighting [35] do not yet have provisions on which color rendering of LEDs is better. Therefore, it is of great significance to study the effect of LEDs with different color rendering on tunnel lighting. Figure 1 shows an overview of several important aspects of this article. Firstly, a color rendering evaluation index suitable for this research was discussed and selected. Four CRI2012s (a color difference-based color rendering index) with three different CCTs were designed to simulate the lighting environment at a tunnel entrance. Secondly, dark adaptation was simulated by designing a dynamic reduction in luminance based on the luminance outside and inside the tunnel. Thirdly, a Landolt chart was designed using common car colors for observation. The experiment was conducted in a laboratory simulating the tunnel environment, and the degree of dark adaptation was indicated by the reaction time of the subjects. The purpose of this paper is to investigate the effect of color rendering on dark adaptation at tunnel entrances, which has rarely been studied so far. The results showed that a high CRI2012 can improve dark adaptation at the tunnel entrance and reduce the reaction time of drivers. LEDs with high CRI2012 are recommended for tunnel entrances.
By improving the dark adaptation and reducing their reaction time, drivers can identify the obstacles ahead faster. This paper provides technical support for tunnel lighting design and a reference for tunnel lighting specifications, which is of significance for improving driving safety and avoiding traffic accidents in highway tunnels. Color Rendering Index Evaluation At present, the evaluation index of color rendering is still controversial. To specify the visual rendering properties of a light source, the Commission Internationale de l'Eclairage (CIE) proposed a method called the Color Rendering Index (CRI) [36], which has been improved over the years [37,38].
With the advent of new types of lighting devices like LEDs, CRI cannot provide a reliable measurement [39][40][41], as it fails to evaluate LEDs with discontinuous spectra [42,43]. In view of this problem, many researchers have proposed new color rendering evaluation methods, including the Color Quality Scale (CQS) [44], the Gamut Area Index (GAI) [45], and CRI2012 [46]. Some researchers have made a series of comparisons of these methods [47][48][49]. These methods improve on CRI but also have limitations, as they measure color rendering from different perspectives (color matching, fidelity, quality, preference, memory, etc.). The most widely used luminaires in tunnels are LED lamps with non-continuous spectra. CRI2012 uses a CIE-endorsed, state-of-the-art color appearance model (CAM02-UCS) and, for the calculation of the special indices, uses uniform sampling of wavelength space to avoid selective optimization of light source SPDs, that is, taking advantage of the unequal contributions of different wavelength regions to the general color rendering score [50]. Although CQS also performs well among the above metrics, the resemblance of object colors to their appearance under a well-known reference illuminant is critical, and optimizing for increased chroma using CQS could result in misleading color decisions; for most general interior lighting applications, an increase of chroma is not desirable. CRI2012 eliminates the unreliability of some CRI measurements caused by CRI's nonuniform color space. As a result, in most general lighting applications, CRI2012 will be the most important target parameter. Therefore, we employ CRI2012 in this article to study the color rendering of LEDs used in tunnels. Subjects Twenty-five subjects with normal (or corrected) vision and driver's licenses participated in the study, including 7 women and 18 men, ranging in age from 30 to 51. All had normal color vision according to the Ishihara test, and none had night blindness. Parameters Setting Two kinds of LEDs were applied in the experiment: the LEDcube and high-power LEDs (HP-LEDs). The LEDcube can simulate light sources with different characteristics (CCT and color rendering) from an input SPD (the SPDs are shown in Figure 2) and was used to simulate the lighting environment inside the tunnel entrance. HP-LEDs were used to simulate the sunlight outside the tunnel. In this paper, the CCTs simulated by the LEDcube are denoted LP-CCT, and the CCTs of the HP-LEDs are denoted HP-CCT. Simulation of CRI2012 and CCT in the Tunnel Entrance The color rendering indexes of the white LEDs used in tunnels (blue chips exciting yellow phosphors) are generally greater than 55. As a result, four different CRI2012 values were selected (55, 65, 75 and 85). Differences in color rendering stem from different LED SPDs, and different SPDs also influence the CCT of the LED. CCT gives people intuitive color impressions such as warm or cold, yellow or white. Therefore, this article considered the effect of both color rendering and CCT on dark adaptation for a more comprehensive analysis. In general, the CCT of white LEDs covers the range of 2800 K to 6500 K [51,52]. Three LP-CCTs were selected in the experiment: 2800, 4500 and 6400 K.
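To make the factorial design concrete, the sketch below enumerates the lighting conditions (four CRI2012 levels crossed with three LP-CCTs, under each of the two exterior HP-CCTs introduced in the next subsection). This is our illustration only; the variable names are assumptions, and the study itself controlled physical luminaires rather than software.

```python
from itertools import product

CRI2012_LEVELS = [55, 65, 75, 85]      # color rendering simulated by the LEDcube
LP_CCTS_K = [2800, 4500, 6400]         # CCTs simulated inside the tunnel entrance
HP_CCTS_K = [5700, 3000]               # CCTs simulating sunlight outside the tunnel

conditions = [
    {"cri2012": cri, "lp_cct": lp, "hp_cct": hp}
    for hp, cri, lp in product(HP_CCTS_K, CRI2012_LEVELS, LP_CCTS_K)
]
assert len(conditions) == 24           # 12 interior conditions under each HP-CCT
```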
Four CRI2012s and three LP-CCTs, for a total of 12 lighting conditions in the entrance zone of a tunnel, were considered in the experiment. The SPD curves of the different CRI2012s and LP-CCTs are displayed in Figure 2 and were measured by a CS2000 spectroradiometer (Konica Minolta, Tokyo, Japan). All curves follow the dual-peak spectrum of the LEDs used in tunnels. Table 1 shows the specific values of the parameters of the 12 experimental luminaires simulating the tunnel lighting, including CCT, the deviation of the target from the blackbody locus (duv), CRI2012, CRI and CQS. These parameters were calculated with the software provided by Smet [46] by entering the SPDs of the LEDs. Figure 3 shows a comparison of three color rendering evaluation indexes: CRI2012, CRI and CQS. It can be seen that the curves of CRI2012 and CQS have a higher fitting degree, while the curve of CRI fits less well when the CCTs are 2800 K and 4500 K. When the CCT is 6400 K, the curves of CRI2012 and CQS fit slightly less well. In general, when the color rendering performance is better (higher than 85), the three indicators tend to be more consistent. Simulation of CRI2012 and CCT Outside the Tunnel In 2017, Xiong et al. [53] measured the luminance and CCT of sunlight throughout the day in different months. The results showed that the brightest hours of the day were between 12:00 and 13:00, with CCT ranging from 5000 K to 5700 K. It can be considered that the effect of dark adaptation is most obvious when the CCT is in this range. Sunlight could not be used in the experiment because its luminance and CCT cannot be controlled, which might affect the results. As a result, HP-LEDs with two CCTs, 5700 K and 3000 K, were used to simulate the sunlight outside the tunnel. 5700 K, usually classed as a high CCT, represents the most accident-prone CCT outside the tunnel; 3000 K serves as the low-CCT comparison. The SPDs of the HP-LEDs and of sunlight (measured at 1 pm) are shown in Figure 4. The CRI2012 of the sunlight is 99; the CRI2012 of the HP-LEDs is 80. Although the SPDs of the LEDs do not match sunlight very well, the luminance and CCT of the HP-LEDs fit the parameters of sunlight at midday, when the luminance difference between the inside and outside of the tunnel is largest and traffic accidents are most likely to occur. Luminance Value Setting For the simulation of the luminance outside the tunnel: on a sunny day, the luminance of the road surface outside the tunnel can exceed 10,000 cd/m². The luminance of the simulated road surface using six evenly distributed HP-LEDs in our laboratory is 5000 cd/m², which is lower than that of sunlight at noon on a sunny day, but higher than the annual average for the same period. For the simulation of the luminance inside the tunnel: according to the 2014 Guidelines [35], the luminance of the entrance zone of a tunnel ranges from 40 cd/m² to 140 cd/m², based on the design speed (usually limited to 80 km/h), the traffic volume and other environmental factors [26].
In order to make the experimental effect more obvious, we maximized the difference between the internal and external luminance while still matching the actual situation of a tunnel entrance; therefore, the luminance selected to simulate the inside of the tunnel was 40 cd/m². Design of Observation Targets A Landolt chart ("C" visual chart) was used in the experiment as the observation target; its dark adaptation effect was more obvious than that of the E visual chart in the preliminary experiments. The targets' colors were chosen with reference to common car colors. In total, eight colors were selected in the experiment, as shown in Figure 5: black, silver, white, yellow, red, blue, green, and brown. The background color of the targets is similar to that of the tunnel pavement. The orientation of the "C" is random, and there are many groups like Figure 5 with random orientations for each color, to prevent the subjects from remembering the target orientation and thereby affecting the experimental results. In order to determine the size of the target C, we asked all subjects to observe targets of the eight colors under the 12 experimental lighting environments in the preliminary experiment. We made sure that everyone could see the orientation of the targets while making the size as small as possible; the outer diameter of the target C used in the formal experiment was 20 mm. Figure 6 shows the SPDs of the eight colored targets, of the target background and of the tunnel pavement, measured by a Konica Minolta CS2000 (Tokyo, Japan) in sunlight at 1 pm on a sunny day.
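The randomization described above can be sketched as follows; the specific orientation set and the seeding are our assumptions, since the paper only states that orientations were random and varied across groups.

```python
import random

COLORS = ["black", "silver", "white", "yellow", "red", "blue", "green", "brown"]
ORIENTATIONS = ["up", "down", "left", "right"]   # assumed gap directions of the "C"

def make_trial_block(seed=None):
    """One block: the eight car colors in random order, each with a random gap orientation."""
    rng = random.Random(seed)
    block = [(color, rng.choice(ORIENTATIONS)) for color in COLORS]
    rng.shuffle(block)
    return block

print(make_trial_block(seed=1))
```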
Experimental Set-Up Figure 7 illustrates a schematic diagram of the experimental set-up, and Figure 8 shows the real experimental situation. Six HP-LEDs and two LEDcubes were suspended on either side of the simulated tunnel; these LEDs were installed 1 m above the horizontal table at a spacing of 0.5 m. A white board was used for observation by the subjects when simulating the environment outside the tunnel. Its main purpose was to let the subjects receive the full lighting spectrum, preventing partial spectrum loss from affecting the experimental results, while its sufficient luminance makes the experimental effect more obvious. The target "C" with its grey background was placed under the white board. The observation window was at the same level as the target, at a distance of 3 m. The experimental surroundings were painted a concrete color similar to the tunnel walls to simulate the true tunnel environment as closely as possible. The target "C" was randomly placed within a 1 m² square restricted area (shown in Figure 5 as the black border around the target). In each observation, only one target of one color was placed in front of the observer's line of sight, with a random orientation. A stopwatch was controlled by the observers for timing.
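As a quick check of the stimulus geometry (ours, not from the paper), the 20 mm Landolt C viewed from the 3 m observation distance subtends a visual angle of roughly 0.38 degrees, or about 23 arcmin:

```python
import math

# Visual angle of the 20 mm Landolt C seen from the 3 m observation window.
diameter_m = 0.020
distance_m = 3.0

theta_rad = 2 * math.atan((diameter_m / 2) / distance_m)
theta_deg = math.degrees(theta_rad)
print(f"visual angle: {theta_deg:.2f} deg = {theta_deg * 60:.1f} arcmin")
# -> roughly 0.38 deg (about 23 arcmin)
```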
Procedure Specification Firstly, the HP-LEDs and LEDcubes were adjusted to the required luminances (5000 cd/m² and 40 cd/m²). One HP-LED CCT was selected, and the LEDcubes were adjusted to a required SPD (one combination of CCT and CRI2012), in a random order each time. A target of one color was placed in front of the view. Subjects were asked not to glance at the target but to observe the white board through the observation window and take 5 minutes to adjust to the ambient brightness. Secondly, the experimenter turned off the HP-LEDs; when the subjects detected the change in brightness, they pressed the timer in their hand and looked down at the target area to search for the randomly placed target C. When the subjects perceived the location and then the orientation of the target, they pressed the timer again to stop timing, and the experimenter recorded the dark adaptation time. Thirdly, the experimenter changed the color, position and orientation of the target and repeated the procedure above. After the eight colors of targets had been tested under the same luminance circumstances, the experimenter adjusted the LEDcubes to another required SPD with a different CCT or CRI2012. The above process was repeated until all combinations had been tested. Instruction To make the results more accurate, each subject underwent three complete experiments, whose results were averaged. In order to familiarize the subjects with the whole experimental process, three preliminary experiments were conducted in one lighting environment before the formal experiment began; these three results were not considered in the final data. In order to prevent visual fatigue from affecting the results, subjects were given a 5-minute rest before each subsequent experiment. Figure 9 shows the driving conditions at an actual tunnel entrance and the visual states of the subjects in the experiment, to illustrate the feasibility of the experiment. The whole process of dark adaptation is refined into four states. Firstly, before the driver reaches the tunnel entrance, the eyes are exposed to high luminance; in the experiment, the HP-LEDs simulate this high luminance. Secondly, as the driver enters the tunnel entrance, the eyes experience a sharp reduction in luminance and a loss of sight; in the experiment, the HP-LEDs are turned off to achieve this effect. Thirdly, after a period of time, the driver gradually regains vision and can perceive the presence of obstacles ahead; in the experiment, the subjects could perceive the presence of target C in the designated area after a period of time. Finally, the driver is able to make out the details of the obstacle ahead; in the experiment, the subjects were eventually able to tell the orientation of the target C. Although the experimental environment is not completely consistent with a real driving environment, the inevitable disturbances of a real tunnel, such as changing outdoor luminance, can be avoided in the laboratory. The dynamic process of the tunnel entrance is thus simulated, and the effect of color rendering on reaction time can be obtained accurately by this method.
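The overall measurement loop can be summarized as follows. This is a hedged sketch of the protocol described above, using a condition list like the one sketched earlier: the stopwatch and luminaire control are replaced by a stand-in function, and all names are ours.

```python
import random

COLORS = ["black", "silver", "white", "yellow", "red", "blue", "green", "brown"]

def measure_reaction_time(condition, color):
    """Stand-in for one stopwatch measurement; replace with real data capture."""
    return random.uniform(2.5, 3.5)    # seconds, the range reported for the pilot trials

def run_session(conditions):
    results = []
    for condition in conditions:       # one SPD (CRI2012 + LP-CCT) per block, chosen in random order
        colors = COLORS[:]
        random.shuffle(colors)         # target color order randomized within a block
        for color in colors:
            # subject adapts ~5 min under the HP-LED; the HP-LED is then switched
            # off, and timing runs until the C's gap orientation is identified
            rt = measure_reaction_time(condition, color)
            results.append({**condition, "color": color, "reaction_s": rt})
        # subjects rest 5 minutes before the next lighting condition
    return results
```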
Results and Discussion To measure the error of the experiment, in a preliminary experiment five of the 25 subjects were asked to observe a black target and undergo dark adaptation 10 times, using LEDcubes with a CRI2012 of 85 and an LP-CCT (the CCT simulated by the LEDcube) of 2800 K as the lighting source; the HP-CCT was 5700 K. The reaction times of the five subjects are shown in Figure 10. It can be seen that the reaction times lie between 2.5 s and 3.5 s and that the standard deviations are less than 0.2. Figures 11 and 12 show the mean reaction time of all subjects under the different CRI2012s and LP-CCTs for the two HP-CCTs. It can be seen that the color of the targets greatly affects the reaction time, while CRI2012 and CCT affect it relatively less. In Figure 11, the experimental data for most colors (black, blue, red, silver, white and brown) showed that the reaction time decreased as CRI2012 increased. For the green target, CRI2012 was positively correlated with the reaction time. The yellow target showed no obvious uniform trend between CRI2012 and reaction time, possibly because its short reaction times make any regularity insignificant. In Figure 12, most of the results follow a trend similar to that in Figure 11, except for the blue target, which showed a positive correlation between color rendering and reaction time. Comparing Figures 11 and 12, their trends are similar and the ranges of reaction time are close. When HP-CCT = 5700 K, the trends of CRI2012 and reaction time are more linear. When HP-CCT = 5700 K and LP-CCT = 2800 K, the reaction time differences under different CRI2012s are larger than under the other lighting conditions. It can be seen that the difference in reaction time is less than 1 s across the different lighting conditions. However, a difference of 0.1 s in reaction time can greatly increase the probability of traffic accidents [54].
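A minimal sketch (ours) of the aggregation behind the Figure 10 error check follows: each subject's mean and standard deviation over repeated dark adaptations are computed, and the reported sub-0.2 s spread is asserted. The numbers below are hypothetical.

```python
import numpy as np

reaction_times = {                     # hypothetical 10 repeats per subject, in seconds
    "S1": [2.8, 2.9, 3.1, 2.7, 3.0, 2.9, 2.8, 3.0, 2.9, 3.1],
    "S2": [3.2, 3.3, 3.1, 3.4, 3.2, 3.3, 3.2, 3.1, 3.3, 3.2],
}

for subject, rts in reaction_times.items():
    rts = np.asarray(rts)
    print(subject, f"mean={rts.mean():.2f}s", f"sd={rts.std(ddof=1):.2f}s")
    assert rts.std(ddof=1) < 0.2       # the pilot reported standard deviations below 0.2 s
```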
Figures 13 and 14 show the probability distribution of the correlation between CRI2012 and reaction time for the 25 subjects observing different colors. For example, when the 25 subjects observed the black target under LP-CCT = 2800 K and HP-CCT = 5700 K, the reaction times showed that eight subjects had a positive correlation, 10 a negative correlation, and seven no significant correlation. For the determination of correlation, we judged that if three or all four CRI2012 levels follow the trend, CRI2012 is identified as correlated with reaction time; Figure 15 shows several possibilities for determining the correlation. It can be seen in Figures 13 and 14 that for most colors the probability of a negative correlation is higher, while the probabilities of a positive correlation or no correlation are lower. Although the experimental results of individual subjects differed, the data of most subjects showed a relatively consistent trend: when observing targets of different colors, the reaction time for most colors decreased as CRI2012 increased. As a result, it can be concluded that a higher CRI2012 can improve the dark adaptation of human eyes and reduce the reaction time. The effect of target color on reaction time is the only consideration in Figure 16, where the reaction times of all subjects observing the same target color under the different lighting conditions (four CRI2012s and three CCTs) were averaged. It can be seen that, under the two HP-LEDs, the two bar charts are similar except for the yellow group. When HP-CCT is 5700 K, the ranking of reaction times for the different colors from low to high is white, yellow, silver, black, blue, green, red and brown. When HP-CCT is 3000 K, the reaction time of the yellow target is significantly higher. When HP-CCT = 5700 K, the average reaction time for all colors was longer than at HP-CCT = 3000 K, except for the yellow group. Despite red intuitively being a more striking color, the reaction time for the red target was relatively long in this experiment. Most of the subjects reported that when they looked at the red target, they could quickly detect its position and outline, but it was difficult to identify the orientation of the gap in the target. This can be explained by the low contrast between the red target and the cement background, which reduces the subjects' discrimination of details. The experimental results of this part can provide a reference for the color selection of traffic signs at tunnel entrances. White, yellow and silver are recommended because they are easier for drivers to identify than the other colors. At the same time, cars of these three colors will be more easily perceived by other drivers at the entrance of the tunnel, which should be safer in theory.
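The per-subject trend labeling behind Figures 13-15 is not fully specified in the paper (the rule counts how many of the four CRI2012 levels follow the trend), so the sketch below uses a simple linear regression over the four CRI2012 levels as one plausible proxy; the significance threshold is our assumption.

```python
import numpy as np
from scipy.stats import linregress

CRI_LEVELS = np.array([55, 65, 75, 85])   # the four CRI2012 levels tested

def classify_trend(reaction_times, alpha=0.05):
    """Label one subject/color series as 'positive', 'negative' or 'none'."""
    fit = linregress(CRI_LEVELS, reaction_times)
    if fit.pvalue >= alpha:               # no reliable monotone trend
        return "none"
    return "positive" if fit.slope > 0 else "negative"

# Reaction time falls as CRI2012 rises -> classified as a negative correlation.
print(classify_trend([3.40, 3.20, 3.05, 2.80]))
```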
Figures 17 and 18 show the reaction times under the 12 lighting conditions of Figures 11 and 12, with the reaction times for the eight colors under each lighting condition averaged. It can be seen in Figure 17 that when LP-CCT = 2800 K and 4500 K, CRI2012 is negatively correlated with reaction time. When LP-CCT = 6400 K, although CRI2012 is positively correlated with reaction time, the reaction time at high CRI2012 (85) is still very short. In Figure 17b, when CRI2012 = 55 and 65, CCT is negatively correlated with reaction time; when CRI2012 = 75 and 85, CCT is positively correlated with reaction time. In Figure 18, when LP-CCT = 2800 K, CRI2012 is positively correlated with reaction time; when LP-CCT = 4500 K and 6400 K, CRI2012 is negatively correlated with reaction time. It can be seen that assessing the effect of CCT on dark adaptation is inaccurate if only CCT is considered without taking CRI2012 into account. It can be concluded that, considering multiple colors, a high CRI2012 provides shorter reaction times and improves dark adaptation at the tunnel entrance. Under the two HP-CCTs, the reaction time is shortest when LP-CCT = 2800 K and CRI2012 = 85. Considering the influence of both CCT and CRI2012, LEDs with a low CCT (2800 K) and a high CRI2012 (over 85) are recommended for the lighting of tunnel entrances. From the experimental results, no significant differences or patterns were found in the reaction time data across genders and ages. Table 2. P value of significance analysis for the above factors
Table 2 shows the significance analysis of the above factors, including CRI2012, color, LP-CCT and HP-CCT, with reaction time as the dependent variable. It can be seen that the effects of CRI2012, Color, LP-CCT, CRI2012 × Color, Color × LP-CCT and Color × HP-CCT are statistically significant (p < 0.05). Color has the most significant effect on reaction time, followed by CRI2012 and LP-CCT, while HP-CCT has no significant effect on reaction time. When studying the effect of LED characteristics on visual characteristics, it is therefore necessary to use multiple colors as observation targets, as they have a significant impact on the results. Figure 19 shows the simple effect analysis of the three groups of significant interactions: (a) CRI2012 × Color, (b) Color × LP-CCT and (c) Color × HP-CCT. In Figure 19a,b, the effect of the lighting factor on reaction time is significantly different when the colors are green, red and brown; in Figure 19c, when the colors are red, yellow and brown. Based on the above experimental results, we can draw the following conclusions: (1) The effect of target color on reaction time is greater than that of CRI2012 and CCT. (2) Yellow, silver and white provide the shortest reaction times, which can serve as a reference for the design of road signs and warning signs at tunnel entrances. (3) For targets of different colors and under different CCTs, most subjects had shorter reaction times under high CRI2012, leading to the conclusion that LEDs with high CRI2012 are recommended for lighting design at tunnel entrances. From the trend of the experimental data, it can be inferred that LEDs with a CRI2012 value approaching 100 are more suitable for lighting tunnel entrances. (4) For the CCT of the LEDs, the trend of reaction time with increasing CCT is not consistent across different CRI2012 conditions. According to the current experimental results, once a high CRI2012 is fixed, the CCT should be selected at a lower value (about 2800 K). In a previous study [26], we used a method similar to the one described in this article to study the effect of CCT on reaction time at tunnel entrances. However, the previous study ignored the interaction of color rendering and CCT on reaction time; this paper is a more comprehensive study of the parameter design of LEDs at tunnel entrances. Compared with previous studies by other researchers [28,29], this paper considered more levels of color rendering and CCT, used more colors as observation targets, and updated the selection of light sources and the evaluation method for color rendering, which makes the results more convincing and the conclusions more applicable to actual tunnel entrances. Although the results presented in this paper lead to some positive conclusions, there are also limitations. Firstly, the small sample size may bias the regularity of the data and affect the accuracy of the results; future studies should therefore enlarge the sample to ensure convincing results. The results would also be more accurate if different characteristics of the sample, such as driving experience and frequency of travel through tunnels, were taken into account. Secondly, although the experimental environment represents a real environment to the greatest possible extent, the differences between the two are difficult to assess. Therefore, in follow-up research, experiments will be conducted in real tunnels, eye trackers and other devices will be applied to measure reaction times, and comparing those results with the conclusions of this paper may lead to more convincing conclusions.
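For reference, the factorial significance analysis summarized in Table 2 could be reproduced along the following lines; the long-format column names and the type-II ANOVA choice are our assumptions, and the synthetic demo data merely stand in for the real trial records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def table2_anova(df):
    """Type-II ANOVA over the factors and interactions reported in Table 2."""
    model = ols(
        "reaction_s ~ C(cri2012) * C(color) + C(lp_cct) * C(color) + C(hp_cct) * C(color)",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)

# Minimal synthetic demo so the sketch runs end-to-end (random data only).
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "reaction_s": rng.normal(3.0, 0.2, size=192),
    "cri2012": np.tile([55, 65, 75, 85], 48),
    "color": np.tile(np.repeat(["white", "red"], 4), 24),
    "lp_cct": np.tile(np.repeat([2800, 4500, 6400], 8), 8),
    "hp_cct": np.repeat([5700, 3000], 96),
})
print(table2_anova(demo))
```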
Conclusions In this paper, the effects of color rendering on the dark adaptation of the human eye at tunnel entrances were analyzed from the point of view of traffic safety. Firstly, the influence of dark adaptation at tunnel entrances on traffic safety was discussed. Color rendering is one of the important characteristics of LEDs, and its significance for driving safety was discussed; it was explained that the current color rendering evaluation index (CRI) is not suitable for evaluating LEDs. Several new evaluation indexes were compared, and CRI2012 was judged more suitable for evaluating the color rendering of LEDs used at tunnel entrances. Then, a reaction time experiment was designed to investigate the relationship between CRI2012 and reaction time. In the experiment, four CRI2012s, three CCTs and eight target colors were used to simulate in the laboratory the dynamic visual state of a driver at a tunnel entrance; the dark adaptation phenomenon at the tunnel entrance was simulated by switching from the HP-LEDs to the low-brightness LEDs. The similarities and differences between the experimental environment and an actual tunnel were discussed. Twenty-five subjects with different driving experience on open roads and different frequencies of travel through tunnels took part in the experiment.
11,754.4
2020-02-28T00:00:00.000
[ "Engineering", "Environmental Science" ]
Scale-Independent Inflation and Hierarchy Generation We discuss models involving two scalar fields coupled to classical gravity that satisfy the general criteria: (i) the theory has no mass input parameters, (ii) classical scale symmetry is broken only through $-\frac{1}{12}\varsigma \phi^2 R$ couplings where $\varsigma$ departs from the special conformal value of $1$; (iii) the Planck mass is dynamically generated by the vacuum expectation values (VEVs) of the scalars; (iv) there is a stage of viable inflation associated with slow roll in the two-scalar potential; (v) the final vacuum has a small to vanishing cosmological constant, a hierarchically small ratio of the VEVs and a hierarchically small ratio of the scalar masses to the Planck scale. This assumes the paradigm of classical scale symmetry as a custodial symmetry of large hierarchies. The discovery of the weakly interacting Brout-Englert-Higgs (BEH) boson, coupled with the absence of significant evidence for physics beyond the Standard Model, has stimulated a re-evaluation of the possible explanations of the hierarchy problem. In the Standard Model (SM) of the strong and electroweak interactions, which has no fundamental input mass scale other than the BEH mass, an apparent hierarchy problem arises due to the additive quadratically divergent radiative corrections to the mass squared of the BEH boson. However, in the pure Standard Model the quadratic divergences are an artifact of the introduction of a mass-scale cut-off in momentum space [1]. In the context of field theory, the coefficients of relevant operators have to be renormalised, and the theory is ultimately defined by observable renormalised coefficients. In this case neither the quadratically divergent radiative correction to the BEH mass nor the mass counter-term is measurable, and only the renormalised mass is physically meaningful. If one maintains scale invariance, broken only explicitly by the various trace anomalies and spontaneously to generate the BEH boson mass, then the latter must be viewed as multiplicatively renormalized, since no quadratic divergence arises in the trace anomaly. This has led to the proposal of classically scale-invariant models that contain the SM, in which the electroweak scale is generated through spontaneous breaking of scale invariance via the Coleman-Weinberg mechanism [2,3]. It has been suggested that scale invariance might even apply at the quantum level through "endogenous" renormalisation, which requires that the regulator mass scale, µ, associated with quantum loops in dimensional regularization, is itself generated by a moduli field. Alternatively, one can always introduce an arbitrary cut-off scale Λ, e.g. by way of a momentum-space cut-off or Pauli-Villars regularization, but then renormalize the theory at a renormalization scale given by a moduli field to remove the Λ dependence. However, we will not explore this possibility here, concentrating on whether it is possible to build a viable scale-invariant theory broken only spontaneously and via the trace anomaly. Of course a complete theory must include gravity and, if one is to maintain classical scale invariance, it is necessary to do so in a way that generates the Planck scale through spontaneous breaking of the scale invariance, such as occurs in the Brans-Dicke theory of gravity [9].
The inclusion of gravity means there are additional additive divergent contributions to the BEH mass, but these, too, are unphysical and should be absorbed in the renormalised mass which, as before, is multiplicatively renormalised due to the underlying scale invariance and thus avoids the hierarchy problem. A problem with the scale-independent approach occurs if there are massive states coupled to the BEH scalar, for then there are large finite calculable corrections to the Higgs mass. In the Standard Model the presence of the Landau pole associated with the U(1) gauge group factor indicates that the SM becomes strongly interacting at the scale associated with the Landau pole. It is common to assume that there will be massive bound states associated with this strong interaction that couple significantly to the BEH boson and create the "real" hierarchy problem. One possible way to evade this is to embed the SM in a theory with no Abelian gauge group factor that does not have a Landau pole [21]. This must be done close to the electroweak scale to avoid introducing the hierarchy problem via new massive states, and it leads to a profusion of new states that may be visible at the LHC. However, the Landau pole in the SM lies above the Planck scale, where gravitational effects cannot be neglected, and it is far from clear what the physics above the Landau pole will be and whether it indeed reintroduces the hierarchy problem. For the same reason we did not insist on the absence of a Landau pole in the model considered here. Similarly, it is possible that, when gravity becomes strong, it leads to massive states that generate the real hierarchy problem. Of course there are black holes that can carry SM gauge group charges and couple to the BEH boson. In general such states do not give rise to a hierarchy problem, due to their form-factor suppression. It is possible that microscopic black holes exist that do not have such form-factor suppression, but this is not firmly established and, as with the Landau pole problem, we choose to ignore this possibility here. In this paper we construct a spontaneously broken scale-free model that includes gravity. As such, there is no physical meaning to the vacuum expectation value (VEV) of a single scalar field, and only dimensionless ratios are measurable. A minimal model capable of generating a hierarchy requires the introduction of two scalar fields, φ and χ, coupled to gravity in the form $S = \int d^4x \sqrt{-g}\,\big[-\tfrac{1}{12}(\alpha\phi^2 + \beta\chi^2)R + \tfrac{1}{2}(\partial\phi)^2 + \tfrac{1}{2}(\partial\chi)^2 - W(\phi,\chi)\big]$, where $W(\phi, \chi) = \lambda\phi^4 + \xi\chi^4 + \delta\phi^2\chi^2$. This theory has no input mass scales, is conformally invariant if α = β = 1, and is invariant under independent φ → ±φ, χ → ±χ. The theory has remarkable properties that we illustrate for one representative choice of parameters (α, β, λ, ξ, δ) in Figure 1. At early times it has a period of inflation during which, as we show later on, observationally viable spectra of scalar and tensor perturbations can be generated. Furthermore, it has an infra-red (IR) fixed point, set by ratios of the coupling constants, which is radiatively stable to quantum corrections and during which the universe can undergo accelerated expansion. In the context of unimodular gravity, references [5,6] provide seminal studies of the model. These studies concentrate on the ξ = O(1) case, in which the field χ may be interpreted as the Higgs, in turn requiring $\beta = O(10^5)$ to produce "Higgs inflation". In this paper we extend the analysis to cover other values of the parameters.
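Two properties of W invoked repeatedly below can be checked symbolically: the classical scale-invariance identity (W is homogeneous of degree four) and the "perfect square" tuning that yields a vanishing late-time cosmological constant. The sketch is ours; the specific tuning $\delta = -2\sqrt{\lambda\xi}$ is the standard way of completing the square and is our assumption about the intended fine-tuning.

```python
import sympy as sp

phi, chi = sp.symbols("phi chi", positive=True)
lam, xi = sp.symbols("lambda xi", positive=True)
delta = sp.Symbol("delta")

W = lam * phi**4 + xi * chi**4 + delta * phi**2 * chi**2

# (i) Scale invariance: dW/dln(phi) + dW/dln(chi) = 4W for any lambda, xi, delta.
euler = sp.simplify(phi * sp.diff(W, phi) + chi * sp.diff(W, chi) - 4 * W)
assert euler == 0

# (ii) With delta = -2*sqrt(lambda*xi), W is a perfect square, so W >= 0 and
#      W vanishes along chi**2/phi**2 = sqrt(lambda/xi): zero vacuum energy.
W_tuned = W.subs(delta, -2 * sp.sqrt(lam * xi))
square = (sp.sqrt(lam) * phi**2 - sp.sqrt(xi) * chi**2) ** 2
assert sp.simplify(W_tuned - sp.expand(square)) == 0
```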
By way of motivation we note that, in the context of the hierarchy problem, it is important that there be no heavy states significantly coupled to the Higgs. In this case it has been argued [8] that the solution to the strong CP problem requires the introduction of the axion and, in the context of this model, the most economical solution is to identify the axion with a component of the χ field. However, the coupling ξ must then be small to avoid the introduction of a low-lying Landau pole. A second difference is that we determine the inflationary solution in the "Jordan" frame of eq.(1), whereas the analysis of references [5,6] was performed in the Einstein frame. Our analysis has the advantage that it has a simple analytic solution in the slow-roll region, clarifying the origin of the structure of the model. Finally, the IR fixed-point structure of the model studied here differs from that in [5,6], where the unimodular constraint introduces an explicit cosmological constant. In the Jordan frame the field equations follow immediately from eq.(1). The effective Planck mass, $M^2(\phi, \chi)$, obeys current constraints on gravitational physics. To obtain the normal form of the Einstein equations at late times, $M^2$ must be positive, and therefore at least one of the coefficients α or β must be negative, inconsistent with the conformally invariant choice; however, the resultant theory is still scale-independent [7]. Taking the trace of the Einstein field equations determines the Ricci scalar. We now restrict the analysis to the cosmological evolution for a Friedmann-Robertson-Walker (FRW) metric, $g_{\alpha\beta} = \mathrm{diag}(-1, a^2\delta_{ij})$. In the FRW equation, $H \equiv \dot a/a$ is the Hubble parameter, $D = \alpha\phi\dot\phi + \beta\chi\dot\chi$ and $\rho_T = \dot\phi^2/2 + \dot\chi^2/2 + W$. The evolution equations for φ and χ can be uncoupled, with $K = 1 + (\alpha^2\phi^2 + \beta^2\chi^2)/(6M^2)$. As advertised, this theory has an infrared fixed point, which can be found by setting $\dot\phi = \ddot\phi = \dot\chi = \ddot\chi = 0$. Note that $\phi S_\phi + \chi S_\chi = 0$ is automatically satisfied, since our full potential, $W(\phi, \chi)$, is classically scale invariant: $\delta W/\delta\ln\phi + \delta W/\delta\ln\chi = 4W$. This guarantees that nontrivial solutions generally exist for the ratio of the VEVs of φ and χ. One can readily show that this is an IR-stable fixed point, so that $\phi_0$, $\chi_0$ are the IR VEVs of the scalar fields. Note that only dimensionless ratios of VEVs are physical; the absolute value of a VEV, not determined by the static equations, is not measurable. We are interested in the case $\phi_0^2 \gg \chi_0^2$, so that, at late times, a large hierarchy develops. To have a hierarchically light "matter" sector also requires that the χ mass be small relative to the Planck scale, and this in turn requires that the χ mass contribution coming from the $\delta\phi^2\chi^2$ term be hierarchically small relative to the Planck mass, i.e. $\delta \le \chi_0^2/\phi_0^2$. Finally, if the cosmological constant at late times is small, this requires a fine-tuning of the parameters in W such that it is (or is close to) a perfect square. Furthermore, we need $\lambda \le \chi_0^4/\phi_0^4$ which, in the absence of a $\frac{\alpha}{12}\phi^2 R$ term, is natural because φ is shift-symmetric in the limit that the small parameters vanish. Thus the radiative corrections to the small parameters can only be gravitational in origin (we discuss these corrections later in this letter). What happens to the scale factor in the IR? For static scalar fields the FRW equation, Eq.
6, implies (where $\mu^2 \equiv \chi_0^2/\phi_0^2$) that we can define an effective cosmological constant $\Lambda_{\rm eff} = (\lambda + \xi\mu^4 + \delta\mu^2)\phi_0^2/(\alpha + \beta\mu^2)$. With the ordering of the couplings discussed above, $\Lambda_{\rm eff} \lesssim \xi\chi_0^4/M^2$. To obtain a zero cosmological constant requires fine-tuning of the couplings, corresponding to the potential having the form of a perfect square. This theory is equivalent to a multi-scalar Jordan-Brans-Dicke theory of gravity with a potential [9][10][11]. Current constraints on Brans-Dicke theories from Shapiro time delay measurements are particularly stringent, and a naive application to this theory leads to $\alpha < 2.5 \times 10^{-5}$. However, the scale invariance of the theory implies that a change in the Planck mass is compensated by a corresponding change in massive objects that cancels the effect, so the bound does not apply. A remarkable feature of the scale-independent structure, seen in Fig. 1, is that it readily leads to an inflationary era. Non-minimally coupled models of inflation have been studied before [12][13][14][15]. Multifield, non-minimal models have also been examined in some detail, with a particular focus on models with an explicit Planck mass [16] or perfect (or almost perfect) conformal invariance (with α = β = 1) [17]. However, this case is characteristically different, with no explicit Planck mass and the slow-roll condition resulting from a cancellation of terms due to the scale invariance of the non-gravitational sector. To understand its inflationary regime, it is useful to rewrite Eq. 7 in terms of $M^2_\phi$ and $M^2_\chi$. In the regime where $W \simeq \xi\chi^4$, Eq. 7 gives equations in $N = \ln a$. Slow roll results in the $\beta \gg \alpha$ regime. With our analytical solution in hand, assuming that at the beginning of inflation we have $\phi \sim \chi \sim \Phi_I$, we find the total number of e-foldings during inflation. We can also calculate the predictions for the inflationary observables [18]. The standard procedure, in the case of single-field models, is to calculate the slow-roll parameters in the Einstein frame; following [5] we do so here, although effects arising from the multifield dynamics may change our results somewhat. In the Einstein frame (which we denote with a tilde over all quantities, e.g. $\tilde X$), the Hubble rate is given by $\tilde H^2(\tilde N) \simeq (36\xi/\beta^2)\,M_\chi^4/(3M_\chi^2 - M_\phi^2)$, which we use to determine the slow-roll parameters, $\tilde\epsilon = -\tilde H'/\tilde H$ and $\tilde\eta = \tilde\epsilon - \tilde\epsilon'/2\tilde\epsilon$ (primes denoting $d/d\tilde N$), and then calculate the tensor-to-scalar ratio, $r = 16\tilde\epsilon$, and the scalar spectral index, $n_s = 1 + 2\tilde\eta - 4\tilde\epsilon$. We then find expressions in which $\zeta = \beta/(\beta - 1)$ and $N_e$ is the number of e-foldings before the end of inflation. In order to obtain fluctuations of the observed magnitude we need $\xi/\beta^2 = O(10^{-10})$. For Higgs inflation $\xi = O(1)$, so one must have $\beta = O(10^5)$. Here we explore smaller values of β, which require correspondingly smaller values of ξ. It is straightforward to obtain $(r, n_s)$ consistent with the Planck measurements [19], i.e. $r \le 0.1$ and $n_s \sim 0.96$; indeed, typical values of r range from $10^{-3}$ to $10^{-2}$. Future B-mode constraints will further tighten the bounds on r, leading to bounds on α and β. This analysis has assumed that only a single field is active. Being a two-field system, there are possible additional isocurvature fluctuations and non-negligible non-Gaussian effects [20] proportional to $\eta_\perp \cdot \delta\Phi_\perp$, where $\eta_\perp$ and $\delta\Phi_\perp$ are the components of the slow-roll vector, $\vec\eta$, and of the field perturbations orthogonal to the background field trajectory, respectively.
In the slow-roll regime one may see from eq. (12) that the ratio M²_φ/M²_χ is field independent, implying that η⊥ vanishes, this being an attractor of the scale-invariant theory [6] and thus justifying the assumption. The generation of a hierarchy requires that the choice of parameters in the tree-level Lagrangian is also hierarchical, and it is important to check whether this choice is stable against radiative corrections. The choice λ ≪ δ ≪ ξ is stable against non-gravitational corrections because in the limit that λ and δ vanish there is an enhanced shift symmetry φ → φ + c. This implies that non-gravitational corrections to δ are proportional to δ, while the corrections to λ are proportional to δ² or λ, both being perturbatively small. The gravitational corrections have been studied in detail in reference [6] and we do not repeat the discussion here. Calculating the radiative corrections using dimensional regularisation, as an example of endogenous renormalisation, it was shown that the results discussed here are essentially unchanged by gravitational corrections. While the model is very simple, it provides a basis for extending the Standard Model to include gravity in a scale-invariant theory. Reference [5] identified the χ field with the Higgs scalar, so that the inflationary era is Higgs inflation. However, this is not the only possibility. As we commented above, it may be advantageous to identify χ as the field giving rise to the axion solution to the strong CP problem. Of course, the SM states should have hierarchically small couplings to the φ field, but such small couplings will again be radiatively stable due to the enhanced symmetry when the couplings are zero. We have shown that a simple two-scalar model coupled to gravity can satisfy the general criteria: (i) the theory has no mass input parameters, i.e., it is classically scale invariant. One can readily see that this model possesses a conserved current of the form j^µ = (1 − α)φ∂^µφ + (1 − β)χ∂^µχ. This current arises upon combining eqs. (4, 5) to eliminate R; it is covariantly conserved, D_µ j^µ = 0, and it plays an important role in the dynamics, which will be explored in subsequent work [22]; (ii) scale symmetry is broken only through the scalar couplings to the Ricci scalar, which depart from the special conformal value of −1/6; (iii) the Planck mass is dynamically generated by the scalar VEVs; (iv) there is a viable stage of inflation associated with slow roll in the two-scalar potential; (v) the final vacuum has a small to vanishing cosmological constant and a hierarchical ratio between the Planck scale and the scalar mass scale. Our analysis assumes the paradigm of scale symmetry as a custodial symmetry of large hierarchies. We will present generalizations of this scheme to multi-scalar theories, as well as the inclusion of SM states, and expand on the formal implications elsewhere [22].
4,090.8
2016-03-18T00:00:00.000
[ "Physics" ]
Design of Office Chair: A Quality Function Deployment Approach This paper employs a quality function deployment (QFD) methodology to translate customer requirements into design characteristics to improve the design of an office chair. A factor analysis has been carried out on the responses obtained from a cross-sectional survey directed at users through a set of questionnaires. Three factors with twenty-two items were found to load above a threshold value of 0.7. Finally, quality function deployment is used to extract the important design characteristics satisfying the customer requirements. Introduction Quality is a characteristic of a product that reflects its ability to satisfy implied customer needs; in other words, it fulfils the customer's expectations of the product. Quality function deployment (QFD) provides a means of translating customer requirements into appropriate technical characteristics for each stage of product development and production (i.e. marketing strategies, planning, product design and engineering, prototype evaluation, production process development, production, and sales). A fuzzy-logic-based quality function deployment approach has been used to identify the e-learning service provider for effective distance education [1]. To extract bottleneck techniques, some researchers have used quality function deployment to design an assistive device [2]. A QFD technique has been used by the management of engineering institutions to provide guidelines for prioritising the improvement policies of their organisations [3]. Some researchers have proposed a fuzzy quality function deployment for determining the optimum level of engineering characteristics given randomised customer attributes [4]. An analytic hierarchy process (AHP) has also been combined with quality function deployment for the selection of software projects [5]. Some researchers present a systematic approach to quality function deployment that addresses the customer's voice using symmetrical triangular fuzzy numbers [6]. A study has been carried out using two approaches (crisp and fuzzy) of the QFD technique to develop a new shampoo [7]. Some researchers have focused on applying QFD to the product development process of contract manufacturing [8]. Some researchers have developed an integrated framework based on fuzzy QFD and a fuzzy optimization model to determine the product technical requirements [9].
II. DATA COLLECTION Data were collected from employees of various offices, such as technical institutes, banks (private and government), hospitals (private and government) and some other organisations. A questionnaire was prepared including forty variables for customer requirements and dimensions for design attributes (twenty-four continuous design elements and thirteen categorical design elements) regarding the office chair. One hundred and twenty-five responses were collected through a cross-sectional survey on the forty items. The respondents answered on a Likert-type scale (1 for strongly disagree and 5 for strongly agree). A factor analysis was carried out to examine the validity and reliability of the variables and to obtain a statistically proven identification of customer requirements. Validity was tested using the principal component method with varimax rotation to extract the important customer requirements for model analysis, which removes redundancy and duplication from a set of correlated variables. The most important factors are determined on the basis of absolute sample size and the magnitude of the factor loadings [10]. To decide the sample size, the Kaiser-Meyer-Olkin (KMO) test was carried out. If the value of KMO is greater than 0.5, the sample size is treated as adequate. When the data are adequate, a correlation matrix is created to calculate the correlations between variables [10]. The internal consistency of the data was tested by the Cronbach's alpha (α) value. The total variance was determined for the principal component varimax rotated factor loading procedure to avoid correlation between the various factors. III. QFD ANALYSIS The customer requirements shown under three different factors are found to be in indistinct form. They need to be converted into design characteristics through a suitable method such as quality function deployment (QFD). QFD is a powerful tool for converting the indistinct customer voice (customer requirements) into engineering characteristics (continuous design characteristics) [11,12]. The customer ratings for each customer requirement were obtained from the left correlation matrix using equation (1), where B_ij denotes the relationship between customer needs and Z_i is the initial customer rating. The individual rating of each design characteristic is obtained from the central matrix using equation (2), where A_ij denotes the relative importance of the i-th design characteristic with respect to the j-th customer need in the relationship matrix, X_j is the importance of the j-th customer need (customer requirement), and n is the number of customer requirements. IV. RESULT AND DISCUSSION A total of one hundred and twenty-five responses covering 40 customer requirements were collected from the survey. The survey data were subjected to factor analysis, which was carried out with SPSS 14.0. Out of the 40 customer requirements, 22 were loaded under 3 factors. The total variance explained by the three factors was found to be 78.5%, which is an acceptable value for the principal component with varimax rotated factor loading procedure. Ten items were loaded under factor 1, five items under factor 2 and seven items under factor 3.
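As an illustration of the screening pipeline just described (KMO adequacy, principal-component extraction with varimax rotation, and Cronbach's alpha per factor), a minimal Python sketch follows; the file name, the column layout and the factor_analyzer-based workflow are our assumptions for illustration, not the authors' SPSS 14.0 procedure.

# Hypothetical sketch of the screening pipeline described above; the CSV name
# and the 40 Likert-item columns are placeholders, not the study's actual data.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

responses = pd.read_csv("office_chair_survey.csv")  # 125 rows x 40 Likert items

# Sampling adequacy: KMO > 0.5 is treated as adequate in the text.
kmo_per_item, kmo_total = calculate_kmo(responses)
print(f"KMO = {kmo_total:.3f}" + (" (adequate)" if kmo_total > 0.5 else " (inadequate)"))

# Principal-component extraction with varimax rotation, three retained factors.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Keep items whose largest absolute loading clears the 0.7 threshold.
retained = loadings[loadings.abs().max(axis=1) > 0.7]

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of one factor's item set."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

for f in range(3):
    items = retained.index[retained.abs().idxmax(axis=1) == f]
    print(f"Factor {f + 1}: alpha = {cronbach_alpha(responses[items]):.3f}")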
Factors extracted from the factor analysis are named comfortness (factor 1), balance (factor 2) and luxuriousness (factor 3). Cronbach's alpha (α) has been used to assess the internal consistency of the scale. The value of alpha for all dimensions together is 0.702, which is just above the acceptable value for demonstrating the internal consistency of the established scale. The values of α obtained are 0.878, 0.933, and 0.939 for factors 1, 2, and 3 respectively. From the Kaiser-Meyer-Olkin test (KMO > 0.6), it can be concluded that the matrix did not suffer from multicollinearity or singularity. With the set of continuous design characteristics given in Table 1 (categorical design characteristics are not considered here), three different QFD models, named QFD model 1 (comfortness), QFD model 2 (balance) and QFD model 3 (luxuriousness), were constructed to correlate customer requirements with design characteristics. Figures 1, 2 and 3 represent QFD models 1 (comfortness), 2 (balance) and 3 (luxuriousness) respectively. Ten, five and seven items (customer requirements) are considered under QFD models 1, 2 and 3 respectively. Similarly, for design characteristics, nine, seven and nine items are used respectively for the three models, as shown in Table 2. The design attributes extracted from the experts for the three models are shown in Table 3. The initial rating of customer requirements for each model is derived using a 1-10 scale, as shown in Figures 1, 2 and 3 respectively. The correlations of customer requirements (left matrix), design requirements (top matrix) and customer requirements with design characteristics (central matrix) are extracted from the experts using a scale of 0.8, 0.6, 0.4 and 0.2, indicating "strong", "moderate", "weak", and "very weak" respectively. Finally, the initial design requirements and the correlation values from the top matrix are used in equation (1) to obtain the final design ratings. The normalised refined ratings of the design attributes are obtained by dividing each rating by the maximum value. From the normalised refined ratings for the design attributes, "tilt of backrest" is the most prioritised element, followed by "number of controls", in the case of model 1 (Table 3). Finally, four design attributes, namely tilt of backrest, number of controls, overall width and overall height, are retained out of nine design attributes, having normalised refined rating values of 0.85 (threshold) and above. Similarly, the other two models have been developed with prioritised design characteristics, as shown in the same Table 3. For QFD model 2 (balance), four design attributes, namely width-height ratio of backrest, width-height ratio of seat pan, width-height ratio of whole body, and width-height ratio of armrest, exhibiting normalised refined rating values of 0.80 (threshold) and above, have been considered. Similarly, five design attributes, namely seat adjustment range, use of pattern, use of cushion, use of decoration, and backrest height, showing normalised refined rating values of 0.90 (threshold) and above, have been considered for QFD model 3 (luxuriousness).
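A minimal sketch of the central-matrix weighted-sum rating of equation (2) and the normalisation/threshold step described above follows; the matrices and the cut-off below are illustrative placeholders, not the values elicited from the experts in Tables 1-3.

# Minimal sketch of the equation (2) computation and the normalisation step;
# A and X are illustrative placeholders, not the study's elicited values.
import numpy as np

# A[i, j]: relationship of design characteristic i to customer need j,
# on the expert scale 0.8/0.6/0.4/0.2 (strong ... very weak); 0 = none.
A = np.array([[0.8, 0.4, 0.0],
              [0.6, 0.8, 0.2],
              [0.0, 0.2, 0.6]])
X = np.array([9.0, 7.0, 4.0])       # X[j]: importance of customer need j (1-10)

rating = A @ X                       # rating_i = sum over j of A[i, j] * X[j]
normalized = rating / rating.max()   # refined rating relative to the top item

threshold = 0.85                     # model-1 cut-off quoted in the text
selected = np.where(normalized >= threshold)[0]
print(normalized.round(3), "-> keep design characteristics:", selected)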
V. CONCLUSION The major contribution of this paper is to provide an integrated approach for modelling the design characteristics of a product (an office chair) in an office environment, as the interaction between the product and the customer varies from customer to customer. An office chair must satisfy the customer requirements of a wide range of the population through the selection of a small but important set of design characteristics. This work therefore provides a QFD approach to improve the quality of the product by identifying its important design characteristics.
1,905.4
2013-03-13T00:00:00.000
[ "Engineering", "Materials Science" ]
An Increase in Reactive Oxygen Species by Deregulation of ARNT Enhances Chemotherapeutic Drug-Induced Cancer Cell Death Background Unique characteristics of tumor microenvironments can be used as targets of cancer therapy. The aryl hydrocarbon receptor nuclear translocator (ARNT) is an important mediator of tumor progression. However, the functional role of ARNT in chemotherapeutic drug-treated cancer remains unclear. Methodology/Principal Findings Here, we found that knockdown of ARNT in cancer cells reduced the proliferation rate and the transformation ability of those cells. Moreover, cisplatin-induced cell apoptosis was enhanced in ARNT-deficient cells. Expression of ARNT also decreased in the presence of cisplatin through the proteasomal degradation pathway. However, the ARNT level was maintained in cisplatin-treated drug-resistant cells, which protected the cells from apoptosis. Interestingly, reactive oxygen species (ROS) increased dramatically when ARNT was knocked down in cancer cells, enhancing cisplatin-induced apoptosis. ROS-promoted cell death was inhibited in cells treated with the ROS scavenger N-acetyl-cysteine (NAC). Conclusions/Significance These results suggest that the anticancer activity of cisplatin is attributable to its induction of ROS production through ARNT degradation. Targeting ARNT could be a potential strategy to eliminate drug resistance in cancer cells. Introduction The aryl hydrocarbon receptor nuclear translocator (ARNT), also known as hypoxia-inducible factor (HIF)-1β, is a transcription factor that belongs to the basic helix-loop-helix Per-ARNT-Sim (bHLH-PAS) family, which also includes endothelial PAS domain protein 1 (EPAS1), HIF-1α, and the aryl hydrocarbon receptor (AhR) [1-3]. ARNT forms a heterodimer with HIF-1α in response to varying oxygen levels in the microenvironment, and further promotes cell survival and angiogenesis [4-6]. In addition, disruption of ARNT in mouse embryonic stem cells causes hypoglycemia, an angiogenesis deficiency and a failure to respond to hypoxia [7]. Moreover, ARNT is a mediator under normoxic conditions when cells face harmful factors in the microenvironment, such as 2,3,7,8-tetrachlorodibenzo[b,e][1,4]-dioxin (TCDD) or anti-cancer drugs [8,9]. ARNT dimerizes with the aryl hydrocarbon receptor (AhR) and regulates Sp1 transcription activity, with subsequent upregulation of the promoter of cytochrome P450 subfamily polypeptide 1 (CYP1A1), to resist xenobiotic stresses, e.g., TCDD [3]. With respect to the regulation of ARNT in cells, it can be stabilized through interaction with the BRCA1 protein during TCDD stress [10]. On the other hand, active caspase-3 cleaves ARNT during apoptosis to reduce cell survival signals [11]. Loss of HIF-1α and ARNT also leads to an increased response to radiotherapy, a reduction in tumor growth, and a decrease in angiogenesis in tumors transplanted into immune-deficient mice [12]. In our previous studies, we found that ARNT interacted with c-Jun to form c-Jun/ARNT and c-Jun/ARNT/Sp1 complexes, which promote the expression of cyclooxygenase (COX)-2, 12(S)-lipoxygenase, and p21WAF1/CIP1 in epidermal growth factor (EGF)-treated cervical cancer cells under normoxic conditions [1,13]. Those studies indicated that ARNT interacts with HIF-1α in response to hypoxic conditions and also binds specific transcription factors that can trigger tumorigenic signaling under normoxic conditions.
Cisplatin is a major chemotherapeutic drug used against many kinds of cancers, especially testicular, ovarian, esophageal, cervical, gastric, prostate, and non-small-cell lung cancers [14]. After influx into a cell, cisplatin is hydrolyzed into its active form. Crosslinking of DNA strands occurs when active cisplatin binds to DNA at position 7 of guanine, which causes cancer cell death [15,16]. In addition to causing apoptosis by inducing cytochrome c release in response to DNA stress, cisplatin also induces apoptosis caused by reactive oxygen species (ROS) through a p53-mediated p38α mitogen-activated protein kinase (MAPK) pathway in HCT116 colon cancer cells [17]. ROS are constantly generated and eliminated during normal physiological and biological functioning [18]. During oxidant stress, such as hypoxia or xenobiotic stimulation, ROS can be eliminated by scavenging proteins, such as superoxide dismutases (SODs) and glutathione peroxidase. Cancer cells exhibit greater tolerance of ROS than do normal cells. A low level of ROS facilitates cancer cells driving cell growth-associated genes, but higher production of ROS causes cells to undergo apoptosis [19]. However, cancer cells adapt to damage from cisplatin by upregulating efflux transporters, such as the ATP-binding cassette (ABC) transporters. Multidrug resistance 1 (MDR1) is a member of the drug-efflux ABC transporters that pump anti-cancer drugs out across the membrane [20]. Moreover, overexpression of MDR1 allows human KB carcinoma cells to effectively resist colchicine, vinblastine, and doxorubicin [21]. Anti-cancer drugs cause cancer cells to acquire resistance through overexpression of drug-resistance genes. Therefore, understanding the molecular mechanism involved in regulating the acquisition of resistance in cancer cells would be beneficial for effective therapy. In our previous study, we found that ARNT interacted with Sp1 to regulate MDR1 expression and protected cells from cisplatin-induced apoptosis [9]. The ARNT-regulated efflux of drugs was also observed in MDR1-upregulated cancer cells. These results reveal that ARNT is one of the regulators maintaining cancer cell survival under cisplatin treatment. To further pursue the potential role of ARNT in maintaining cell survival, the effect of the ARNT level on chemotherapeutic efficiency was studied in various cancer types. In this study, we found that deregulation of ARNT not only reduced cell viability but also promoted cisplatin-induced cell death by enhancing the production of ROS. These results indicated that ARNT could simultaneously regulate MDR1 expression and reduce the ROS level to protect cancer cells from drug-induced apoptosis. ARNT regulates cancer cell proliferation To clarify whether ARNT is essential for tumor cell growth under normoxic conditions, cell proliferation was determined by a BrdU incorporation assay under an ARNT-knockdown condition. As shown in Fig. 1A, the cell proliferation rate was significantly reduced in siARNT cells. Results from pulse-labeling of BrdU (20 min) and synchronization of cells by thymidine block showed that siARNT reduced DNA synthesis (Fig. 1B) and retained cells in the S-phase of the cell cycle (Fig. S1), respectively. These results suggested that ARNT regulates cell proliferation, possibly through control of S-phase progression, in normoxia.
Knockdown of ARNT inhibits cell transformation Based on the finding that ARNT expression enhances the proliferation of cancer cells, the effect of ARNT on cell proliferation was further confirmed using a colony formation assay. As shown in Fig. 2A, although colony size was not consistently different from that of parental cells, colony numbers decreased markedly in ARNT-deficient cells. A reduced proliferation rate was also observed in ARNT-deficient cells (Fig. S2). Interestingly, we found that the growth of shARNT cells was inhibited under 3D-culture conditions (Fig. 2B), but not under 2D-culture conditions (Fig. 2A). These results indicated that depletion of ARNT reduced the cellular transformation ability of cancer cells. ARNT knockdown makes drug-resistant cells more sensitive to cisplatin Among chemotherapeutic drugs, cisplatin induces cell death by forming cisplatin-DNA adducts, which lead to DNA damage and impair progression of the S-phase. The possibility that ARNT controls cell proliferation through regulating S-phase progression prompted us to examine whether it produced any effect on cisplatin-induced cell death. To clarify the importance of ARNT in regulating cisplatin resistance, we used a pair of cell lines, HONE-1 and HONE-1-C15 (HONE-1 cisplatin-resistant cells). Stable ARNT-knockdown cell lines were generated by stably transfecting HONE-1 and HONE-1-C15 cell lines with the shARNT vector (Fig. S3). As shown in Fig. 3, cisplatin-induced apoptosis was more significant in shARNT cells than in parental HONE-1 cells. In addition, cisplatin caused less cell death in HONE-1-C15 cells than in HONE-1 cells, even with high-concentration treatment. However, HONE-1-C15 cells lost their resistance under the ARNT-knockdown condition (Fig. 3). Under cisplatin treatment, ARNT was also degraded in parental HONE-1 cells, but not in resistant HONE-1-C15 cells (Fig. 4). Cisplatin promoted more caspase-3 activation in HONE-1 cells than in HONE-1-C15 cells, even when a higher concentration of cisplatin was applied to HONE-1-C15 cells (Fig. 4). However, both HONE-1 and HONE-1-C15 cells became more sensitive to cisplatin when ARNT was deficient (Fig. 4). These results confirmed that expression of ARNT is a major factor in the drug resistance of cancer cells. Chemotherapeutic drugs induce ARNT degradation and cell apoptosis To clarify the mechanism involved in the downregulation of ARNT in cisplatin-treated sensitive cells, expression of ARNT was further examined in various cancer cell lines. As shown in Fig. 5A, cisplatin produced no effect on the transcriptional level of ARNT messenger (m)RNA. However, cisplatin-induced ARNT deregulation was reversed in cells treated with a proteasome inhibitor (Fig. 5B). These results showed that cisplatin triggered ARNT degradation through proteasome-dependent pathways in sensitive cells. Interestingly, cisplatin-induced ARNT deregulation could also be reversed when different types of cancer cells were pretreated with the ROS scavenger NAC (Fig. S4). These results revealed that production of ROS is at least one of the causes of ARNT elimination in cisplatin-treated cancer cells. To further examine whether deregulation of ARNT is also required for other chemotherapeutic drugs, such as taxol and doxorubicin, to induce cell death, HeLa and HeLa cisplatin-resistant (HeLa R) cells were treated with these drugs, and expression of ARNT and DNA fragmentation were studied. Doxorubicin and taxol inhibited ARNT expression and increased fragmented DNA in HeLa cells (Fig.
5C & 5D). However, expression of ARNT was not changed in HeLa R cells after treatment with taxol or doxorubicin (Fig. 5C), and these results were consistent with the lack of fragmented DNA observed in HeLa R cells (Fig. 5D). In addition, shARNT cell lines were also more sensitive to cisplatin-induced cell death (Fig. 3). These results suggested that ARNT plays a pivotal role in the resistance of cancer cells to various chemotherapeutic drugs. We also found that the caspase inhibitor ZVAD blocked the expression of p53 and the activation of caspase-3 in cisplatin-treated cells (Fig. 5E). Interestingly, cisplatin-induced deregulation of ARNT was restored in ZVAD-treated sensitive HeLa cells (Fig. 5E). In addition, cisplatin-induced caspase-3 activation, depletion of ARNT, and the increase of p53 were reversed in cells pre-treated with NAC (Fig. S4). These results indicated that cisplatin-induced activation of caspase-3 mediated the expression of ARNT and p53. ZVAD blocked caspase-3 and rescued ARNT expression in cisplatin-treated sensitive cells, suggesting that caspase-3 activation is essential for cisplatin-induced cell death in sensitive cells. The increase in ROS in ARNT-knockdown cells contributes to cisplatin-induced apoptosis In addition to deregulating efflux pumps, an increase in the ROS level is another way for chemotherapeutic drugs to induce apoptosis. As shown in Fig. S4, depletion of ROS dramatically inhibited cisplatin-induced cell apoptosis. To further clarify the mechanism involved in ARNT-regulated cisplatin resistance, the role of ROS in cisplatin-induced cell death was examined. First, we measured the amount of ROS in parental (P), shLacZ (Z), and shARNT cells. Interestingly, we found that ROS was markedly higher in ARNT-deficient cells than in parental cells under normoxic conditions (Fig. 6 and Fig. S5A). The increase in ROS was reduced in shARNT cells treated with NAC (Fig. S5A). In addition, the cisplatin-enhanced ROS level was eliminated in cells treated with NAC (Fig. S5B). Cisplatin also induced higher production of ROS in shARNT cells (Fig. S5C), and the amount of ROS was reduced in cells treated with NAC. These results suggested that the expression of ARNT inhibited cisplatin-enhanced ROS production. To clarify whether the increase in ROS in shARNT cells plays a role in cisplatin-induced apoptosis, NAC was used to deplete ROS, and the apoptosis ratio was analyzed. As shown in Fig. 7, NAC significantly inhibited cisplatin-induced cell death in shARNT cells. These results revealed that ARNT protects cancer cells from cisplatin-induced apoptosis, at least in part through reducing ROS production in cells. Discussion Cancer therapeutic applications, such as radiation treatment combined with chemotherapeutic drugs whose effects are mediated by the induction of ROS, are major ways to eliminate cancer cells [22]. However, cancer cells are capable of resisting the damage caused by ROS-induced apoptosis through alternative anti-apoptotic pathways, such as Akt, Kras, Braf, and Myc [23,24]. In this study, we demonstrated that ARNT conferred an anti-apoptotic ability on cancer cells treated with the anti-cancer drug cisplatin. Moreover, cisplatin produced greater ROS generation in ARNT-knockdown cells, resulting in enhanced cell death. Similar to our findings that ARNT protected against cell damage by reducing ROS levels, leukemia cells were sensitive to troglitazone, which also induces apoptosis through intracellular ROS [25].
Resistant leukemia cells exhibited an abundance of ARNT, together with upregulation of SODs (SOD2), the nuclear factor erythroid 2-related factor 2 (Nrf2) transcript, and the intracellular glutathione concentration [25]. SOD2 reduces oxidants and Nrf2 increases antioxidant enzyme activities [25]. In addition, Nrf2 regulates the transcription of miR-125b to inhibit the AhR repressor (AhRR), which protects the kidney from acute injury [26]. This indicates that formation of AhR/ARNT complexes may be regulated by miR-125b [8]. These results suggest that ARNT may regulate SOD2 expression or ARNT/AhR complex formation to reduce cell injury caused by increased ROS. As to the regulation of ARNT, we found that cisplatin may inhibit ARNT in as yet undiscovered ways. ARNT was degraded in a dose-dependent manner in non-resistant cells treated with cisplatin. The half-life of ARNT seemed to be important for apoptosis caused by cisplatin. For example, proteasome inhibitors, such as MG132 and lactacystin, blocked the degradation of ARNT caused by cisplatin. However, in cisplatin-resistant cancer cell lines, ARNT stability was not changed during treatment with cisplatin. These cancer cells with constant ARNT levels also showed a good ability to resist apoptosis. These results correspond to a previous study in which ARNT disruption was correlated with proteasomal degradation via the ubiquitination process [27]. Consistent with our findings that ARNT was depleted by anti-cancer drugs, ARNT was also degraded by curcumin under normoxic and hypoxic conditions in various cancer cell types [27]. In addition, expression of ARNT can be restored by treatment with NAC, an ROS scavenger, in the presence of curcumin. These results are consistent with our findings that NAC rescued the expression of ARNT in cisplatin-treated sensitive cells. ARNT can also be disrupted using H₂O₂ in cancer cells [27]. In addition to ROS, and consistent with the fact that caspase-3 and caspase-9 also cleave ARNT at the Asp151 amino acid site in vitro [11], we found that cisplatin-induced degradation of ARNT was repressed following treatment with the caspase inhibitor ZVAD. Thus, cisplatin-activated caspases may cause deregulation of ARNT in drug-sensitive cells but not in resistant cells. Chemotherapeutic drugs such as taxol and doxorubicin can also induce cancer cell apoptosis through caspase-10 and p53 pathways, respectively [28,29]. In this study, taxol and doxorubicin also enhanced ARNT degradation, resulting in apoptosis of sensitive cells. These results revealed that ARNT is an essential factor in protecting cancer cells against drug-induced damage. A previous study also revealed that cleavage of ARNT, enhanced by hypoxia-induced active caspase, downregulated the transcriptional activity of survival genes induced by the HIF [27]. The production of ROS is one of the effects through which cisplatin causes cell death [17]. It has also been reported that miR-24 induced by ROS causes the depletion of ARNT at the protein level in human hepatocellular carcinoma cell lines [30]. Taken together, ROS could be induced by depletion of ARNT, and could then further produce negative feedback regulation of ARNT through the induction of miRNA. For this reason, the prevention of ARNT degradation in the initial treatment with drugs is important for the survival of cancer cells. However, whether ROS-induced miR-24 is the cause of the suppression of ARNT in cisplatin-treated cells will be investigated in our further studies.
Although the mechanism involved in the upregulation of the ROS level induced by depletion of ARNT is not yet clear and will also be investigated, we speculated that the repression of ARNT could be one of the mechanisms responsible for altering the level of ROS in cisplatin-induced cell death. In general, ARNT can interact with HIF-1α to regulate genes involved in promoting angiogenesis, interact with c-Jun and Sp1 to modulate MDR1 expression, or regulate EGF-induced expression of COX-2, 12(S)-lipoxygenase, and p21WAF1/CIP1, which sense changes in microenvironmental components [1,13]. ARNT also plays a vital role in regulating tumor progression, detoxification, and the efflux of anti-cancer drugs, which increases the chance of survival in adverse circumstances [3,8,9,24]. In chemotherapeutic treatment of cancer, cisplatin accumulates in cancer cells, causing DNA damage and inducing apoptosis. The drug efflux pump MDR1 is upregulated by ARNT to prevent cisplatin-induced apoptosis [9]. In addition to regulating drug resistance, as shown in our previous study, ARNT also protects cells from microenvironmental toxicity. For example, the AhR/ARNT complex senses environmental toxins and regulates pathways responsible for detoxification. In particular, TCDD induces numerous genes mediated by the AhR/ARNT complex, such as cytochrome P450 1A1 (CYP1A1), which can detoxify polycyclic aromatic compounds [8,31]. In this study, we found that ARNT plays a further role in the downregulation of ROS to prevent cancer cells from suffering damage under cisplatin treatment. Taken together, we speculate that ARNT is a central mediator in eliminating cytotoxicity elicited by environmental factors. In conclusion, our results revealed that depletion of ARNT promoted the effect of anti-cancer drugs on cancer cell death. In our understanding, there are at least two mechanisms involved in ARNT-mediated drug resistance: MDR1 upregulation [9] and prevention of ROS production. Although the mechanism involved in the regulation of ROS production by ARNT expression remains unknown, the development of antagonists targeting ARNT may provide new strategies for destroying resistant cancer cells. Cell cultures Cell lines of human malignant melanoma (A375) and human cervical cancer (HeLa) were purchased from the American Type Culture Collection. Human nasopharyngeal carcinoma cells (HONE-1 and HONE-1-C15) were kindly provided by Dr. Jane-Yang Chang (National Health Research Institutes, Taiwan) [32]. A375 and HeLa cell lines were maintained in Dulbecco's modified Eagle medium (DMEM) with 1% glucose (Gibco). HONE-1 cells and their derivative cisplatin-resistant variant, HONE-1-C15, were maintained in RPMI 1640 medium (Gibco). All cell lines were cultured in medium supplemented with 10% fetal bovine serum (FBS, Hyclone), 100 µg/ml streptomycin (Sigma-Aldrich), and 100 U/ml penicillin (Sigma-Aldrich), and incubated with 5% CO₂ at 37 °C for maintenance. Cisplatin-resistant HONE-1-C15 cells were maintained in medium containing 15 µM cisplatin. The ARNT-deficient A375 cell lines were independent stable cell lines infected with lentiviral-vector-derived ARNT small hairpin (sh)RNA [9]. The ARNT-deficient HONE-1 and HONE-1-C15 cell lines were stable cell lines infected with lentiviral-vector-derived ARNT shRNA [9]. Drug-resistance schedules were used to develop the sublines resistant to cisplatin [9]. HeLa cells were exposed to cisplatin for a 9-month period. The resistant subline obtained by this procedure is denoted HeLa R.
Trypan blue exclusion assay Cells were plated at 5×10⁴ per well in 24-well plates. After the cells were allowed to attach overnight, the medium was replaced with fresh medium with or without cisplatin. Cells were incubated at 37 °C under a humidified hypoxic condition (1% O₂) for an additional 24 h and trypsinized to resuspend them in medium containing serum. Twenty µl of trypan blue dye and 20 µl of the cell suspension were mixed to measure the number of living cells (with a dilution factor of 2). Stained cells were counted and regarded as dead cells. Bromodeoxyuridine (BrdU) incorporation assay DNA synthesis in proliferating cells was detected by BrdU incorporation (Cell Signaling Technology, Danvers, MA). Parental and ARNT-knockdown cells were seeded onto 96-well plates and incubated for 24 h. BrdU reagent was added to the culture media for 0 to 48 h, and 100 µl of Fixing Solution was then added to the cells for 30 min. The cells were washed with Wash Buffer and incubated for 1 h with 100 µl of 1x BrdU antibody. After adding 100 µl of 1x HRP-conjugate solution for 30 min, 100 µl of TMB substrate solution was added. Following a 30-min incubation, the stop solution was added. The OD was measured at 450 nm using a plate reader. For immunofluorescence, cells transfected with 30 nM ARNT siRNA oligonucleotides were pulse-labeled with 20 nM BrdU for 20 min or 24 h. Cells were fixed and stained with anti-BrdU antibodies followed by anti-mouse FITC and DAPI. The percentage of cancer cells with positive nuclear BrdU staining was calculated by counting immunopositive cells in four randomly chosen fields using ImageJ software. Colony formation assay Cells were plated as separate single cells in 6-well cell-culture plates. After the cells were allowed to attach overnight, they were treated with or without 1 µM cisplatin for 8 h and the medium was changed to fresh culture medium. The culture medium was changed every 2 days. After incubation for 7 days, the cell colonies were washed with PBS twice and fixed with 4% paraformaldehyde (PFA, Sigma-Aldrich). Methanol (JT Baker) was used to increase the permeability of the cells, and Giemsa stain (Sigma-Aldrich) was used for cell staining. Colony numbers were determined with ImageJ software [33]. The colony formation assay was repeated at least three times. Soft agar assay High-glucose DMEM containing 10% FBS with 0.5% agarose was plated on the bottom (1.5 ml/well) of 6-well plates. After the basal agar had congealed, 5×10³ cells were resuspended in high-glucose DMEM containing 10% FBS with 0.25% agarose and layered on top (1.5 ml/well). Cells were overlaid with 0.5 ml of supplemented medium after the top agar layer had congealed. The culture medium was changed every 2 days. After incubation for 2 weeks, cells were washed with PBS twice and fixed with 4% PFA. Methanol was used to increase the permeability of the cells, and Giemsa stain was used for cell staining. The soft agar assay was repeated at least three times. Reverse transcription-PCR (RT-PCR) Cellular RNA was extracted with TRIzol reagent (Invitrogen) following the manufacturer's instructions. Two µg of RNA was reverse transcribed with ImProm-II reverse transcriptase (Promega) and used as the cDNA template for the polymerase chain reaction (PCR). Fifty ng of cDNA template was used for PCR with 2.5 U of Taq DNA polymerase (MD Bio, Taiwan) [34].
The primer sets used were as follows: Western blot analysis Cells were harvested with 1x CCLR or RIPA buffer (10 mM Tris buffer at pH 6.5, 150 mM NaCl, 50 mM EDTA, 1% DOC, and 1% NP-40, containing protease inhibitors). The protein concentration of cell lysates was determined with a BCA protein assay kit (23225, Thermo) and an iMark microplate absorbance reader (Bio-Rad). Proteins were resolved by electrophoresis on a 10% sodium dodecyl sulfate (SDS)-polyacrylamide gel and transferred to polyvinylidene fluoride (PVDF) membranes. Non-fat dry milk (5%) in PBS containing 0.1% Tween-20 (PBST) was used for blocking. After washing three times with PBST, the PVDF membranes were incubated with primary antibodies overnight at 4 °C. After the primary antibodies were removed, the PVDF membranes were washed four times with PBST and incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies for 1 h at room temperature. Membranes were washed with PBST three times and incubated with an enhanced chemiluminescent HRP substrate (Millipore) for X-ray film detection. The antibodies used in this study were as follows: ARNT (sc-17811, Santa Cruz), α-tubulin (Sigma-Aldrich), and caspase-3 (9665S, Cell Signaling). Protein expression was confirmed at least three times. RNA interference (RNAi) transfection Cells were transfected with duplexed Stealth RNAi against ARNT [9] or a Stealth RNAi negative control (SC, Invitrogen) using serum-free Opti-MEM medium and Lipofectamine RNAiMAX transfection reagent. Small interfering (si)RNA was incubated with Lipofectamine RNAiMAX (25 nM siRNA : 1 µl Lipofectamine RNAiMAX) and serum-free Opti-MEM medium for 15 min at room temperature before being transfected into cells. After 8 h, the medium was changed to serum-free high-glucose DMEM. DNA fragmentation assay Cells that were or were not treated with doxorubicin were collected and washed with PBS; lysed in a solution containing 10 mM Tris-HCl at pH 8.0, 10 mM EDTA, and 0.5% Triton X-100; digested with 0.1 mg/ml RNase A at 37 °C for 1 h; and then centrifuged at 12,000 × g for 25 min to pellet the chromosomal DNA. The supernatant was digested with 1 mg/ml proteinase K at 50 °C for 2 h in the presence of 1% SDS, extracted with phenol and chloroform, precipitated in cold ethanol, and subjected to electrophoresis on 1.5% agarose gels containing 0.5 µg/ml ethidium bromide. DNA fragments were visualized by ultraviolet transillumination. Photographs were taken with the aid of a computer-assisted image processor. Flow cytometry to detect apoptosis Cells from the different conditions were trypsinized and pooled by centrifugation with the cells floating in the medium. Harvested cells were washed with PBS and incubated with annexin V binding buffer containing annexin V-FITC/propidium iodide (PI; 556547, BD) at room temperature for 15 min for double staining. Flow cytometry was used to analyze cell apoptosis with a Cell Lab Quanta SC flow cytometer (Beckman Coulter). The experiment was repeated three times. Flow cytometry to analyze ROS Cells were incubated overnight at 37 °C under humidified normoxic or hypoxic (1% O₂) conditions. After being washed with PBS, cells were incubated with 0.1 mM 5-(and-6)-carboxy-2',7'-dichlorofluorescein diacetate (carboxy-DCFDA; C369, Invitrogen) in serum-free medium for 30 min at 37 °C for staining and then returned to serum-containing culture medium for 15 min of cell recovery.
Flow cytometry was used to analyze ROS production with a Cell Lab Quanta SC flow cytometer using a 525 nm band-pass filter. The experiment was repeated three times. Statistical analysis In all experiments, statistical significance was analyzed by Student's t-test. P < 0.05 was considered significant.
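As a hedged illustration of the comparison just described, a two-sample Student's t-test can be run with SciPy; the replicate values below are invented placeholders, not the study's measurements.

# Illustrative two-sample Student's t-test; the numbers are placeholders
# (e.g. apoptosis ratios, %, in three replicates of control vs knockdown cells).
from scipy import stats

control = [12.1, 10.8, 11.5]     # hypothetical shLacZ replicates
knockdown = [28.4, 31.0, 26.9]   # hypothetical shARNT replicates

t_stat, p_value = stats.ttest_ind(control, knockdown)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}",
      "(significant)" if p_value < 0.05 else "(n.s.)")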
5,815.6
2014-06-12T00:00:00.000
[ "Biology", "Medicine" ]
Philosophy of ubuntu and collaborative project-based learning in post-apartheid South Africa: A case study of underperforming learners at Hope Saturday school Utilising a qualitative case study, we set out to investigate how learners at Hope Saturday School evoked the principles of ubuntu/humanity as they collaborated during project-based learning. The article is part of a broader study in which a mix of semi-structured interviews, focus group interviews, observations, document analysis and field notes was used to capture data. The learner participants were Black, and almost all of them resided in informal settlements, townships, and farming communities. Data were analysed using content analysis. The philosophy of ubuntu was used to underpin this study. The findings of this study show that values like interdependence, sharing, caring, teamwork, solidarity, unity and helping one another were evoked as learners collaborated in project-based learning. The article concludes that a supportive environment that aids the development of ubuntu values can improve the learning experiences of underperforming learners. Introduction The quality of South Africa's schooling system often attracts negative remarks. Every year when the final National Senior Certificate (NSC) results are released, commentators usually debate the falling standards of a qualification that is the pinnacle of the South African schooling system. The negative trends in learner performance in disadvantaged communities are noted in the general survey conducted by Statistics South Africa (2016), which highlights that the prospects of African learners progressing through the schooling system are lower than those of other population groups. The obsession with individualistic standardised testing is typical of an education system driven by neoliberalists whose aim is to create a market which restricts access to certain privileges (Connell, 2013). In South Africa, school exit examinations are high stakes and inform decisions on who enters higher education institutions and has access to some work opportunities. The fact that in South Africa, even 26 years after the advent of democracy, there is still reference to disadvantaged schools is testimony to education that is rationed; quality education therefore remains a privilege for a select group. This is notwithstanding the fact that South African education policies advance inclusivity, redress and human rights. The unequal education landscape in the schooling sector is further exacerbated by maladministration and poor implementation of policies. The influence of neoliberalism can also be seen in the way that teachers implement their teaching strategies (Hedegaard-Soerensen & Grumloese, 2020). The challenge that arises is that teachers predominantly focus on whole-class teaching and a narrow, prescribed curriculum. This turns out to be even more problematic because they fall short of focusing on inclusion, differentiation, and learners' needs (Hedegaard-Soerensen & Grumloese, 2020). These systemic restrictions are coupled with challenges in teacher capacity and poor infrastructure. One of the concerns is that the possibility of capable teachers exercising autonomous judgment in terms of curriculum and pedagogy in the interest of their learners' needs is undermined by a system with strict curriculum frameworks (Connell, 2013). A serious drawback is that restrictions on curriculum and pedagogy leave many learners with learning deficiencies.
Moreover, learners' inability to fulfil the implicit expectations about performance makes them susceptible to exclusion from the privileges and rewards associated with achieving certain education standards (Hedegaard-Soerensen & Grumloese, 2020). In addition, learning becomes competitive and individualised rather than an undertaking of the society (Saunders, 2015). Education underpinned by the neoliberal ideology of classism and competitiveness deviates from indigenous South African social and cultural contexts, especially in Black communities. Tabulawa (2003) exemplifies education that moves from competitiveness to one that encourages cooperation and learner agency. The interchanges of ubuntu and other indigenous convergences stand in stark contrast with an education system that encourages learners to outperform one another. Furthermore, the practice of categorising schools into functional and dysfunctional, and learners into underperformers and high achievers (Connell, 2013), is highly corrosive of what the philosophy of ubuntu advocates. Davies and Bansel (2007) also conclude that neoliberalism removes value from the social good and increases individualism. It is also observed that the main weakness of a system influenced by neoliberal ideology is its failure to advance the common interest and the self-awareness of society (Connell, 2013); thus increased individualism is seen as an indicator of freedom (Davies & Bansel, 2007). Learner-centred methods grounded in social constructivist epistemology are not value-neutral. In his highly cited study, Tabulawa (2003) posits that in African countries such as Namibia, South Africa and Botswana, curriculum reforms advancing learner-centredness are driven by aid agencies on the basis of political and ideological intentions rather than educational ones. Even so, Tabulawa (2003) indicates that learner-centred pedagogies have the potential to instil individual autonomy, open-mindedness and tolerance for other people's perspectives, as aligned to a liberal democratic environment. In this article we present the case of a Saturday school in South Africa that reimagined pedagogy and used project-based learning (PBL) to teach skills related to the country's economic aspirations. The article emphasises the importance of advancing the philosophy of ubuntu as a legitimate indigenous knowledge system that can be integrated into pedagogy. Furthermore, we argue that the principles of ubuntu can be used to encourage collaboration in teaching and learning contexts where PBL is used to teach necessary skills. We propose the use of collaborative teaching and learning approaches to improve the learning of different skill sets and to instil cohesion and togetherness in society, as advocated by the ubuntu philosophy. Exploring the Terrain: Underperformance in South African Schools Despite the plethora of legislative and regulatory frameworks aimed at redressing and curbing educational inequalities introduced after 1994, many South African schools still face numerous challenges (Bantwini & Feza, 2017). It has been revealed that some schools are beset by a shortage of the infrastructure and resources necessary to facilitate the teaching and learning process (Bantwini & Feza, 2017). Elsewhere, a lack of parental involvement and poor learner discipline and motivation were observed (Jacobs & Richardson, 2016). Studies also show that the availability or scarcity of critical resources in South African schools influences educational outcomes (Visser, Juan & Feza, 2015).
The phenomenon of underperformance in some South African schools has been reported extensively in the literature (Makgato & Mji, 2006; Spaull & Kotze, 2015). Findings from research reveal that the majority of South African learners lack the skills that allow them to learn the required academic content in schools, especially in previously underprivileged Black communities (Taylor, 2008). This deficiency in learning was also noted by Letseka (2014), who states that in dysfunctional schools, many learners are unable to develop the skills and attributes needed to master reading and mathematics. Moreover, Taylor (2008) mentions that in formerly disadvantaged schools, only four learners in a hundred were reading at the expected level. Spaull and Kotze (2015) argue that poor learning abilities can be attributed to a learning deficit acquired in the early years of schooling, which creates a backlog, thus negatively influencing learning in later years. The concept of learner underperformance is operationalised by testing learners and measuring their performance (Reyes & Garcia, 2014). According to Reyes and Garcia (2014), learners who perform below grade averages are seen as underperformers. Similarly, in a study by Walters (2011), learners who could not read or write at the expected level were perceived as underperformers. Underperforming learners in this study refers to learners whose performance in numeracy and literacy fell below the expected grade and age proficiencies. The instruments used to evaluate their performance included written baseline assessments and observations by teachers. Relating PBL and Collaboration PBL is defined as a teaching method that engages learners in exploring real-world issues relevant to the topic of a lesson in a collaborative setting, to promote active and deep learning (Shafaei & Rahim, 2015). It is widely acknowledged that PBL emphasises the importance of the learner, with some researchers referring to it as a learner-centred approach (Malan, Ndlovu & Engelbrecht, 2014). According to Thomas (2000:3-4) there are five prerequisite criteria for projects to be classified as PBL, namely that projects "are central, not peripheral to the curriculum; are driven by questions or problems; involve students in a constructive investigation; are student-driven to some significant degree; and are realistic, not school-like." In this study we adopted the definition given by Shafaei and Rahim (2015). A review of the literature reveals that there are advantages to implementing PBL (Beers, 2011). Bell (2010) found that PBL is sufficient to teach learners 21st-century skills. In addition, these skills might help learners become a productive workforce and members of society (Bell, 2010; Meyer & Wurdinger, 2016). According to Styla and Michalopoulou (2016), learners who participated in various studies confirmed that PBL helped them develop both academic and social skills. Moreover, PBL is believed to have the ability to motivate learners and foster learner courage (Brennan, Hugo & Gu, 2013; Holmes & Hwang, 2016). Problems associated with PBL have also been reported in the literature (Beane, 2016; Kızkapan & Bektaş, 2017). Beane (2016) argues that it is difficult to effectively implement PBL in a system reliant on high-stakes standardised testing. Likewise, Frank and Barzilai (2004) report some challenges with evaluating each learner's personal contributions during PBL. Kızkapan and Bektaş (2017) believe that PBL is time-consuming and that poor planning may lead to incomplete or inferior projects.
Collaboration is one of the key features of PBL. The literature reveals several definitions of collaboration in the context of learning. The term "collaboration" has been applied to situations where learners work together on the same task, instead of engaging in parallel activities within the task (Lai, 2011). Goodsell, Maher, Tinto, Smith and MacGregor (1992) view collaborative learning as an umbrella term for a variety of educational approaches involving joint intellectual effort by learners, or by learners and teachers. In this study we used the definition by Pluta, Richards and Mutnick (2013), who define collaboration as any learning activity that involves the coordinated participation of two or more learners with the goal of accomplishing activities that lead to desired learning outcomes. Therefore, in the context of this article, assigning pupils to undertake group work is not enough for collaborative learning; learners must also engage in meaningful activities aimed at achieving a better social construction of knowledge. Several studies suggest that there are advantages to employing collaboration in the classroom (Elboj & Niemelä, 2010; Gomez-Lanier, 2018; Lai, 2011). A study by Lai (2011) reports that underperforming learners are likely to benefit through collaboration with their peers. In essence, collaboration provides learners with a platform that enables the sharing of knowledge and experience (Gomez-Lanier, 2018; Kessler & Bikowski, 2010). Furthermore, each member of the team brings new perspectives and skills, such as problem-solving and the application of concepts, which benefit all learners (Gomez-Lanier, 2018). Similarly, Lai (2011) mentions that collaboration includes interdependence, a considerable degree of negotiation, and interactivity. These combined factors ensure that each team member becomes accountable for the success of the group. Gomez-Lanier (2018) argues that as the team reaches its goals, members will be inspired and motivated to take ownership of their own learning and do more for the group to succeed. As might be expected, when team cohesion is strong, solidarity between learners is also enhanced. Thus, underperforming learners benefit from their interaction with high performers. Through collaboration, learners become critical friends who give each other constructive feedback and help each other reach personal goals (Bell, 2010). The benefit of collaboration is that learners become a community of practice whose members assist each other when approaching academic challenges. It also emerged from the literature that collaboration in PBL goes beyond the learners to include the teachers and the community within which the school exists (Meyer & Wurdinger, 2016). The teacher remains the facilitator throughout the learning process and is responsible for managing group dynamics and disagreements. Theoretical Mooring: Understanding the Philosophy of Ubuntu In this article, the philosophy of ubuntu is used to underpin the understanding of the collaborative interactions between a community of learners and teachers during PBL. Ubuntu has been selected for its potential to link social interdependence to the imperatives of collaborative teaching and learning approaches. The concept of collaboration and the philosophy of ubuntu are clearly interlinked, as they both require communal relations.
Letseka's study (2012) shows that the supporters of ubuntu promote the integration of ubuntu principles into teaching and learning on the assumption that it will enhance the development of critical dispositions among learners. In this study, the tenets of ubuntu were used to situate the experiences of learners within the interchange between the expectations, experiences and interpretations determining their motivation to collaborate in a PBL environment. The assumption made here was that the study could provide a useful account of how ubuntu-oriented attributes and dispositions can benefit learners who are expected to work together towards a common goal. Through the philosophy of ubuntu, we view learning as an undertaking embedded and negotiated in a setting of social interdependence. The philosophy of ubuntu is widely used in South African indigenous communities and communities across sub-Saharan Africa. Hence, it is referred to as an African philosophy (Mugumbate & Nyanguru, 2013). Studies have found that although ubuntu is a commonly used concept in many South African communities, it is difficult to pin it to one definition (Mabovula, 2011). The term "ubuntu" is universally understood to mean humaneness, personhood and morality (Letseka, 2012). In the South African context, ubuntu is grounded in the ethical maxim known as motho ke motho ka batho (in Sotho languages) and umuntu ngumuntu ngabantu (in Nguni languages) (Letseka, 2012). When expressed in English, this guiding principle loosely translates to "a person is a person through other persons" (Letseka, 2012; Shepherd & Mhlanga, 2014). It is commonly reflected in the aphorism "I am because we all are" (Mugumbate & Nyanguru, 2013). This expression is used in most indigenous African languages (Mugumbate & Nyanguru, 2013). The literature reviewed in this study provides insight into the values of ubuntu (Letseka, 2013; Taringa, 2007). The study by Taringa (2007) lists values such as cooperation, humility, sharing, hospitality, relationship, empathy, and compassion. By extension, Letseka (2013) uses words like respect for others, courtesy, benevolence, and altruism. By the same token, Mugumbate and Nyanguru (2013) mention values like interdependence, collectivity and solidarity. As far as the values of ubuntu are concerned, individuals who are morally irreproachable must treat fellow community members with dignity and gratification (Taringa, 2007). These values of ubuntu have the capacity to challenge and inspire learners to work with their counterparts and to see others succeed. The key aspiration of the proponents of ubuntu is to sustain the values of ubuntu for the good and benefit of all individuals belonging to a community (Bondai & Kaputa, 2016). Mbigi (1997, as cited in Mabovula, 2011) lists four tenets of ubuntu: first, morality, which includes trust and credibility; second, interdependence, which involves cooperation, participation, sharing and caring; third, the spirit of man, which refers to human dignity and mutual respect, and insists that human activity should be person-driven and that humanness should be central; and fourth, totality, which pertains to the continuous improvement of everything by every member. The tenet of social interdependence has received considerable attention in the literature (Bondai & Kaputa, 2016; Letseka, 2013; Oviawe, 2016). Primarily, the relationship between a person and others around him or her is one of mutual interdependence (Shepherd & Mhlanga, 2014).
By drawing on the concept of interdependence, Mugumbate and Nyanguru (2013) have been able to show that groups are a key feature within the ecosystem of African societies. In essence, the spirit of ubuntu is seen as a factor that binds the groups together (Mugumbate & Nyanguru, 2013). The tenet of interdependence, which emphasises community members working together for the benefit of the collective, is central to this study. The tenet of totality, which entails continuous improvement of everything by every member, also finds expression in the findings of this study. Ubuntu is a philosophy that is grounded in the interconnectedness of individuals. We considered it appropriate for a more profound understanding of learners' responsiveness towards one another. Hence, we use the philosophy of ubuntu to understand and interpret collaborative engagement during PBL. It is also presented as an indication of how the boundaries that define the epistemologies informing knowledge construction in the classroom can be broadened to include philosophies of African origin. By evoking the principles of ubuntu, participants in this study confirmed the legitimacy of indigenous knowledge systems.

Methodology
The meta-theoretical paradigm of this study was that of social constructivism. Research conducted through a social constructivist lens proceeds from the notion of multiple realities, and the researcher endeavours to explore such world views (Erlingsson & Brysiewicz, 2013). To this effect, meaning is a social construct that is fundamentally created during interaction with other human beings (Creswell, 2009). Social constructivism regards individuals and the realm of the social as interconnected. Similarly, in this article we view learning as a process during which development takes place by means of collaborative activities and socialisation practices (Coghlan & Brydon-Miller, 2014). The study was qualitative in nature. According to Denzin and Lincoln (2018), qualitative research exposes the researcher to the world, and it is the responsibility of the researcher to make the world visible. Therefore, qualitative research allowed us to be in a close and prolonged relationship with the participants in the research field, and we interpreted their narratives and lived experiences. In the main study we immersed ourselves in the community of teachers and learners with the aim of learning and understanding the teachers' beliefs and attitudes and how these beliefs and attitudes shaped the unique delivery of PBL in the context of this study. Our role remained that of researchers, and we were not involved with the participants beyond the research project.

A qualitative bounded case study research design was adopted. The focus was on one Saturday school (the case), with the aim of getting a holistic understanding of the phenomenon under study; thus a single case was explored. Saturday schools are generally established to enhance learning outcomes and help learners meet various educational needs (Akarsu, 2012). In the context of this study, the Saturday school is a non-governmental institution that offers extra tuition to learners on Saturdays only. The use of a case study research design generated a large amount of text from which we wrote the narratives. Likewise, we did not seek to generalise the findings of this research but rather to narrate the experiences and stories of the participants. In accordance with case study design, we presented broad interpretations of what we had learnt from exploring the case (Creswell & Poth, 2018).
It was considered that narrative inquiry would supplement and extend the case study research design. By employing narrative inquiry, we wanted to highlight the meaning of the personal stories and experiences of the study participants (Wang & Geale, 2015). In relation to this, Clandinin, Caine, Lessard and Huber (2016:13) mention that "the role of a narrative inquirer is to understand, to systematically inquire into the phenomenon of the storied experience of people." A prolonged period was spent alongside the participants in 2019, attending to the narrative inquiry space (Clandinin et al., 2016). During our engagement with the participants, we sought a holistic exploration of the case under study using multiple data collection methods (Lindsay & Schwind, 2016). The literature reveals that narrative inquiry is a relationship, process, and phenomenon that makes visible the extent to which beliefs, values, and assumptions influence our perspectives (Lindsay & Schwind, 2016).

The research site for this study was a Saturday school located in the Gauteng province. The school was chosen because it used PBL as an innovation to teach underperforming learners. It, therefore, provided a setting to learn about teacher beliefs and attitudes about PBL. The participants were selected through purposeful sampling. Teachers who had been with the project for more than one year were identified, and six of them volunteered to participate in this study. The teachers selected had first-hand knowledge of PBL and, therefore, interacting with them provided credible descriptive data. Focus groups were conducted with six groups of six to eight learners per respective teacher.

In this study, data collection methods included semi-structured interviews, focus group interviews, observations, document analysis and field notes. The time allocated for each semi-structured interview was approximately 60 minutes. Semi-structured interviews were conducted at the school with six teachers who taught Grades 5 to 10, at times that were convenient for them. Six focus group interviews, each comprising six to eight learners, were conducted. Six project-based lessons were observed. For this article we took another look at the data from the main study, in which the influence of teachers' beliefs and attitudes on underperforming learners was investigated. For the purpose of this article, we employed data collected from semi-structured interviews and focus group interviews. The benefit of using focus groups is that they allow multiple learners to be interviewed together (Hesse-Biber, 2017). Thus, multiple perspectives were heard simultaneously. The duration of each focus group interview was approximately one and a half hours. The focus group interviews yielded data on learners' interpretations and experiences of PBL.

Data analysis in qualitative research happens concurrently with data collection. In this study, content analysis was used as the main method of data analysis. According to Cohen, Manion and Morrison (2013), content analysis is accomplished using coding frames and can be conducted with any text material, including documents and interview transcripts. ATLAS.ti software was employed to code the data transcripts. Further analysis was done on the preliminary codes generated through ATLAS.ti to identify connections and form patterns. Themes were then generated as required in qualitative studies.
The names of the participants and other identifiable labels were removed during data analysis and replaced with pseudonyms. Trustworthiness is essential for qualitative studies (Creswell, 2014). Qualitative researchers have to give assurances why their results and the implications of their study can be viewed as adequate and of worth to the reader by making the methodology and methods that underpinned the research transparent (Morgan & Ravitch, 2018). To ensure trustworthiness, confirmability, transferability, dependability, and credibility were employed as quality assurance measures. Ethical principles were observed in this study. The prescripts set out by the Ethics Committee at the University of Pretoria provided guidance, and ethics clearance was granted by the University in this regard. Permission to conduct the study was obtained from the school principal, and the parents gave consent for the participation of the learners.

Teachers' Views on Collaboration
Teachers spoke positively about the significance of learners' collaboration. Findings indicate that teachers believed that by working together, learners shared their knowledge and encouraged one another. They also believed that collaborative learning would benefit learners in the future. The findings seem to suggest that underperforming learners benefited from the group work approach as it allowed them to learn with individuals with whom they shared common characteristics and needs.

I like it, so we prefer the kids to work together because if they work together, it helps them share their knowledge and to encourage one another. And also, that is one of the 21st century skills mentioned earlier, it is to be able to collaborate with people. (Luke, male teacher)

I think it is good. It trains them for work environments where they will work with other people. And it will help them in dealing with different personalities. (Joy, female teacher)

I think it is a good way of learning when you learn with peers because you get to learn with people like you or with people who are where you are at. The only problem is the kids that are quiet and how to get them involved. I think that is a real mission and it is not easy. (Sarah, female teacher)

Although the teachers believed that collaboration during PBL provided learners with an opportunity to learn from one another, some of them also cautioned that care should be taken not to leave slow learners behind.

I think it is good because you are able to learn from others. One big challenge with teaching that I found is that it is very easy to move with the fast learners because they engage but leave everyone behind. So you need to be mindful of that and need to move at a slower pace. But with the teams, they learn from each other. The bright kids will influence the others. But you need to be careful on how you group them because if they group themselves it would be along the lines of friendship. (Jerry, male teacher)

I think working in a group allows you to feed on one another's ideas and concepts and in a positive way. On the negative side you may have some dominant personalities in a group … in a conversation and the direction in which conversation is going. So, we try to balance that by giving people time to express their views, allowing all group members to participate. (Brat, male teacher)

Evidence also revealed testimonies from learners who affirmed that working in groups provided a supportive environment that helped to reduce their anxiety.
In this study, learning was seen as a collaborative activity during which participants encouraged each other and took responsibility for each team member's learning commitments. It was also found that there was ample collaboration between Hope Saturday school, the local church, and the host school. The church and the host school provided infrastructure and other human and non-human resources which supported PBL. Furthermore, the school partnered with work-based professionals who volunteered to teach learners the requisite skills.

Attitudes of Learners towards Individualistic Learning
Many of the learners interviewed in this study were positive about the prospects of collaborative learning. They believed that working in groups with others provided a supportive environment, which allowed them to learn from others. The extracts below show how most of the learners used the word "alone" to show that learning in isolation does not yield better results.

It is nice, is not like when you are working alone. Because when you do something wrong, your friends can help you and tell you no this is wrong you must do this and this. (Ntsako, Tsonga male)

It helps because you do not make mistakes alone. When you do something wrong, they will tell you this is wrong, and you must do it like this or this. (Daniel, Venda male)

When you work with a group you can't fail alone. Like when you sit alone it is not right. Like when you pass you pass together and learn together and do the same things. (Kelly, Tswana female)

I think groups are people who work together and want to improve something better than being alone. Working alone will make you feel lonely and not successful in some work. (Mosa, Pedi female)

I remember last year we were only … we built a greenhouse, so I do not think I could have done it alone. (Bongi, Zulu female)

It is nicer when you work together. Because when you work with others, other people know other things and have other ideas but if you work alone you will not know many answers. (Mosa, Pedi female)

A position developed by learners in this study showed that individualistic learning was less attractive than collaborative learning. Learners attested that it would be difficult to achieve learning outcomes on their own. It also became evident from this study that a shared interest in solving problems or accomplishing a given project eliminated the supposed desire for competition between learners. Findings also demonstrate that learners valued the interpersonal relationships with their peers. This resulted in learners showing appreciation for the basic principle of human interaction within their learning space. Findings of this study reveal that learners found validation in being listened to, and they were proud to be part of a group to which they positively contributed. There is evidence that the focus was not on out-performing other learners but on collective success. One of the important findings in this study was that learners who showed understanding of certain concepts were determined to uplift learners who were seen as underperformers to be at the same level as them.

Yes, because if you learn things in a group, you can go and teach other children so they can be the same with us. (Thandi, Zulu female)

For example, you might find out that you know something they do not know and that will help them. (Ntsako, Tsonga male)

Yes, because you can help others when they make mistakes. (Levy, Sotho male)

I also feel proud when they listen to me in the group. (Kelly, Tswana female)
You feel like you are included. (Jane, Zulu female)

Being able to help others was seen as a noble and desirable action. Learners' self-esteem was also enhanced by knowing that their knowledge was worth sharing with others. It may be that these learners benefited from a supportive environment that was conducive to developing a sense of belonging. As a result, learners interpreted the classroom as a safe space for participation and sharing knowledge.

Flocking Together in Times of Challenge
Most of the learners in this study indicated that working in groups was beneficial for everyone involved as they faced challenges and triumphs together. They seemed to view the classroom as a community in which members worked together to construct knowledge while, at the same time, navigating the challenges of learning. Learners used the word "together" to indicate the support system that existed among them.

What I like is that we work together and when we pass, we also pass together and when we do projects, we help each other. (Levy, Sotho male)

The projects that we did help us to know more … and I like them because we work in groups. (Thabo, Pedi male)

When we work in a group, we work nicely and together. (Lebo, Sotho male)

We do group work because working in a group makes you understand things more. (Daniel, Venda male)

Below are some of the explanations given to show how they personally benefitted from working with their peers:

They help me when I write the wrong spelling. They also help me so I can get high marks and we do fine, working in a group. (Jane, Zulu female)

We learn from each other and get to understand things … we learn from each other. (Levy, Sotho male)

When I don't know an answer to something someone can help me … when I don't understand they help me and show me the way to do it. (Ntombi, Zulu female)

You receive more knowledge from other learners. (Khabi, Tsonga female)

When you make mistakes, they correct your mistakes. So, you will know you made a mistake and you will improve. (Sam, Tswana male)

Learners believed that group work allowed for an environment in which they could motivate each other. Learning was seen as a collaborative activity during which participants encouraged each other as they embarked on completing the given projects. However, learners acknowledged that some of them were laid back and needed some encouragement to fully participate during PBL. The conversation with learners revealed that they experienced a spirit of oneness as they joined forces to complete the projects.

Everyone does their part so we can finish the project, but some are lazy, and they do not want to help. We encourage them to work harder, so that we can work together. (Lucky, Tsonga male)

I like it because when we participate together the things come easier, we have fun and we learn more. (Lebo, Sotho male)

In projects I actually like that we become united and we do not take advantage of each other, we understand each other. (Ben, Pedi male)

They mostly motivate us when we are working in groups. They will be like guys look at that group. They have built something nice and would say, guys we need to stick together and work together. (Ntombi, Zulu female)

Learners were of the opinion that the advantage of using group work during PBL was that they could share their limited resources and information.
I think group work is when you are working together and, for example, there is Group A and Group B, and mine, and you …

Collaboration was highly demonstrated within and across groups as learners relied on each other for the completion of projects. Findings reveal that sharing and joint use of resources instilled an ethos of togetherness and strengthened social ties. In like manner, unity was perceived as an indication of a functional group that had prospects of achieving the learning objectives.

Discussion
Findings in this study echoed the philosophy of ubuntu. According to Bondai and Kaputa (2016), the philosophy of ubuntu positions identity and lived experiences within a communal system. Teachers in this study created a communal entity in which learners were encouraged to learn through collaboration. The use of collaboration during PBL further supports the creation of settings in which learners come together to seek and find solutions to problems (Mabovula, 2011). The classroom becomes a platform where ideas are shared by all community members in a real-life context (Mabovula, 2011). The idea of people coming together to seek solutions to their problems, as articulated by Mabovula (2011), is corroborated by the teachers in this study. They revealed that underperforming learners benefited from collaborative activities as they were given an opportunity to learn with individuals with whom they shared common challenges. In African communities, the concept of ubuntu is evident in members collaborating in working the fields to plant or harvest crops (Mugumbate & Nyanguru, 2013). In developing economies, education is seen as a means to promote national unity and a precursor of economic and social consciousness (Muyia, Wekullo & Nafukho, 2018). The results of this study clearly indicate that collaborative learning has the potential to instil skills required in society and the world of work.

Narratives of teachers reveal that there were challenges that learners navigated as they interacted with their peers in a collaborative environment, for instance, the possibility of dominant personalities existing in some groups. As a way of mitigating such challenges, teachers revealed how they gave learners opportunities to express their views to allow all group members to participate in an activity. This expression echoes the principle of tolerance as propounded by the proponents of ubuntu. As stated by Mabovula (2011), tolerance entails individuals respecting others' points of view. To this end, teachers in this study modelled the value of tolerance by encouraging learners to open up about their views on group dynamics to promote participation.

We also found that almost all the learners were of the perception that working in groups with others provided a supportive environment, which allowed them to learn from their counterparts. This finding supports the principles of ubuntu as outlined by Oviawe (2016), who argues that an individual is not independent of the collective. Rather, the relationship between an individual and her/his community is reciprocal, interdependent and of mutual value. By the same token, the results of this study show that learners detested individualistic learning as they believed that they were best placed to achieve their learning goals if they worked as a collective. These results mirror the tenet of interdependence, where a learner depends on others and they in turn depend on him or her (Shepherd & Mhlanga, 2014).
Learners used words like "I could not have done it alone" to affirm their reliance on other team members. From the study it became clear that the success of an individual was seen to ultimately lead to the success of the collective. The literature reveals that the values of ubuntu include empathy, compassion, and solidarity among members of a group (Letseka, 2013; Taringa, 2007), values that were corroborated by the results of this study. Learners sought to show compassion for those who were underperforming in concepts that they themselves seemed to understand better. Rather than seeking to outperform others, they conceived their roles as helpers of those in need. Learners' narratives demonstrated the ideal of "we are in this together", and they were determined not to leave their counterparts behind. In alignment with ubuntu principles, the behaviour of the learners was morally creditable. This finding supports previous research by Taringa (2007), who argues that people who are driven by moral values interact with other community members with dignity and gratification. Accordingly, in this study, collaboration during PBL showed evidence of building learner character and the capacity to help others learn.

The philosophy of ubuntu advocates that a person's humanness is accentuated if he or she says "I participate; therefore, I am" (Mugumbate & Nyanguru, 2013). In this study a sense of agency was noted among the learners. They viewed themselves as active participants in a community of learners. The results reveal that learners associated their participation with the benefit of the whole group achieving the learning goals. Another important finding was that learners motivated one another. The sense of togetherness held individuals accountable to other team members to achieve project goals. Therefore, they encouraged each other to work harder. This finding echoes the tenet of totality, which entails continuous improvement of everything by every member of the community (Mbigi, 1997, as cited in Mabovula, 2011:39). It was evident from this study that learners who were seen to be underperformers started to hold high self-efficacy beliefs about their proficiencies. Their improvement was also demonstrated in their desire to take the lead and be valued as partners in PBL. These findings highlight that learners were developing valuable skills that are required in emerging economies, namely the ability to self-motivate, motivate others, and take accountability. The tenet of totality, which advocates continuous improvement by every member of the society, provides a firm foundation for the economic development and well-being of any emerging economy.

The tenet of interdependence in ubuntu includes the sharing of resources (Mabovula, 2011). Learners in this study interpreted the act of sharing resources as a symbol of unity and togetherness. One of the learners used a Zulu expression, Izandla ziyagezana, meaning that "one good turn deserves another." In this context, the learner cited the principle of ubuntu by which she believed that helping other learners in the classroom would benefit her in the future. Findings of this study also reveal that learners believed that the mobilisation of resources and information should happen across the groups to create a bigger pool that would benefit the whole community of learners. This aspect of the findings reveals how learners in this study learned to use resources that are in short supply to benefit the collective.
With this kind of approach, disparities between learners in emerging economies can be reduced by creating conducive learning environments. Rather than further marginalising the poor, learners mobilised their resources to increase participation and performance. This is an important finding in the context of many emerging economies, where poverty continues to be a notable feature (Napier, Harvey & Usui, 2008).

Conclusion
Learners' testimonies revealed that the values expressed in a collaborative PBL classroom were aligned with the philosophy of ubuntu and strengthened cohesion and togetherness between learners. Learners evoked values such as sharing, caring, teamwork, solidarity, unity and helping one another as they navigated the problems posed in assigned projects. With this study we ascertained that underperforming learners can work together to improve their learning experience. The interconnectedness and common humanity that existed between the learners motivated them to take responsibility for each other's learning (Letseka, 2013). South Africa's curriculum frameworks advocate for human dignity, inclusivity and the infusion of social and environmental justice (Department of Basic Education [DBE], Republic of South Africa [RSA], 2011). One of the aims of the South African national curriculum is to produce learners who "work effectively as individuals and with others as members of a team" (DBE, RSA, 2011:5). These values in the curriculum frameworks can only be realised when learners are taught in a supportive environment that treats them as a community whose relationships are those of interdependence, rather than one that emphasises individual achievement.

The findings of this study reveal that participants articulated values aligned to the philosophy of ubuntu as being more desirable than the neoliberal ideology that shapes current education practices. The trajectory of competition emphasised in traditional education settings influenced by neoliberal ideology was discouraged, as learners reported that they were best positioned to succeed if they worked together as a team rather than as individuals. These findings suggest that the education system should refrain from focusing only on individualistic learner achievements and rankings and should embrace more authentic methods of assessment. We propose that current curriculum policies be reviewed to ensure that local philosophies are sufficiently represented, to reduce overreliance on "borrowed" policies. It is recommended that a deeper examination be conducted on how and which indigenous knowledge systems can be strengthened to support current pedagogies.

Authors' Contributions
SV and MAM jointly wrote the manuscript. MAM provided data for Table 1 and conducted the interviews. SV assisted with the analysis of data. All authors reviewed the final manuscript.
The Natural Products Atlas 2.0: a database of microbially-derived natural products

Abstract
Within the natural products field there is an increasing emphasis on the study of compounds from microbial sources. This has been fuelled by interest in the central role that microorganisms play in mediating both interspecies interactions and host-microbe relationships. To support the study of natural products chemistry produced by microorganisms we released the Natural Products Atlas, a database of known microbial natural products structures, in 2019. This paper reports the release of a new version of the database which includes a full RESTful application programming interface (API), a new website framework, and an expanded database that includes 8128 new compounds, bringing the total to 32 552. In addition to these structural and content changes we have added full taxonomic descriptions for all microbial taxa and have added chemical ontology terms from both NP Classifier and ClassyFire. We have also performed manual curation to review all entries with incomplete configurational assignments and have integrated data from external resources, including CyanoMetDB. Finally, we have improved the user experience by updating the Overview dashboard and creating a dashboard for taxonomic origin. The database can be accessed via the new interactive website at https://www.npatlas.org.

INTRODUCTION
Despite growing efforts to catalogue the known global secondary metabolome (e.g. COCONUT (1), LOTUS (2)), inconsistent dereplication methodologies and high rates of rediscovery still plague natural products discovery programs (3). The Natural Products Atlas aims to address these issues by collating a standardized database of all known microbial natural product structures, source organisms and citations. This resource provides new discovery tools for the natural products community, including a user-friendly open-access platform for compound dereplication, and a standardized dataset of microbial natural product structures for new tool development. Recent tools that have incorporated the NP Atlas database include SMART 2.0 (4), NP Classifier (5), MIBiG (6), METASPACE (7) and the Natural Products Magnetic Resonance Database (www.np-mrd.org). The original publication in 2019 describing the Natural Products Atlas contained 24 594 compounds and included a web interface for manual exploration of the data (8). In this new release we have increased the size of the database to 32 552 compounds, created a new API infrastructure and website to permit automated database queries, incorporated biosynthetic and small molecule ontology terms, and added full taxonomic descriptions of all source organisms, permitting filtering of the database at any taxonomic rank. Finally, we have performed an extensive re-curation of existing data to update structures with partial or missing configurational assignments. Together these advancements significantly improve the coverage, accuracy, and utility of this open access resource.

A RESTful API for the underlying Natural Products Atlas database was created to facilitate extension and development of our web services, and to provide developers with facile access to the most up-to-date data and a suite of tools for automated complex queries. We have migrated our relational datastore from MySQL to PostgreSQL. This allowed us to leverage the RDKit PostgreSQL database cartridge extension, which provides full utility for chemical structure-based queries.
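To illustrate the kind of structure-based query this setup enables, here is a minimal sketch of a substructure search against an RDKit-enabled PostgreSQL database. The table and column names (compounds, m, name) and the connection details are illustrative assumptions, not the NP Atlas production schema; the @> substructure operator and the qmol cast come from the RDKit cartridge.

```python
# Minimal sketch: substructure search via the RDKit PostgreSQL cartridge.
# Assumes CREATE EXTENSION rdkit; has been run and a hypothetical table
# "compounds" exists with a mol column "m" and a text column "name".
import psycopg2

conn = psycopg2.connect(dbname="npatlas", user="reader", host="localhost")
cur = conn.cursor()

# The cartridge's @> operator returns rows whose molecule contains the
# query pattern; the SMILES literal is cast to a query molecule (qmol).
cur.execute(
    "SELECT name FROM compounds WHERE m @> %s::qmol LIMIT 10;",
    ("c1ccc2ccccc2c1",),  # naphthalene core as an example substructure
)
for (name,) in cur.fetchall():
    print(name)

cur.close()
conn.close()
```

Pushing the match into the database this way lets a GiST index on the mol column prune candidates before atom-by-atom matching, which is the main practical benefit of the cartridge over fetching structures and filtering client-side.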
Administrative functionality for creating, updating, and deleting data is included in an internal (non-public) version of the API, which also keeps a detailed changelog for improved data provenance. Additional custom search endpoints have also been added to simplify data queries from the Natural Products Atlas website. The FastAPI framework for Python was used to build the API. We provide a detailed OpenAPI specification and interactive documentation at https://www.npatlas.org/api/v1/docs.

Addition of full taxonomic hierarchy
Every entry in the Natural Products Atlas includes the chemical structure, the original isolation reference, and the source organism from the original isolation. In the initial release, source organism information was limited to the genus and species, as defined in the original publication. In the new version of the database we have incorporated additional data from Mycobank (9) and The List of Prokaryotic names with Standing in Nomenclature (LPSN) (10) to include assignments at all taxonomic ranks. This required us to refactor the origin tables in the database to accommodate terms for all taxa present in the compound database. Currently, these include: domain (3), kingdom (2), phylum (27), class (65), order (171), family (427) and genus (1178). To leverage this new information, the search options in both the Basic and Advanced Search pages have been updated to accommodate filtering at any taxonomic rank. This can be combined with other search terms or with structure or substructure searches to create custom search queries. In addition, we have created a new taxonomy dashboard that provides a visualization of compound diversity as a function of source organism taxonomy (Figure 1). Finally, the taxa in the Natural Products Atlas have been manually aligned against the NCBI taxonomy (11), and both NCBI taxonomy identifier (TaxId) numbers and links to Mycobank/LPSN entries are provided for each rank in the source organism section of the Compound page.

Manual curation of additional database entries
The original release of the Natural Products Atlas included 24 594 compounds and covered a period up to early 2019. In this new release, we have expanded the database to 32 552 compounds, covering the period up to early 2021. This included targeted curation of 50 priority journals relevant to the field of natural products for 2019 and 2020, the insertion of new compounds submitted to the database through the deposition page on the website, and the integration of data from other external databases (see below). In addition, we continued our review of the historical literature to include additional compounds missed during the original curation effort. This included a retrospective review of existing data and the removal of 119 compounds from protists that are outside the scope of the Natural Products Atlas, as well as the removal of 51 compounds that were not of natural origin. Currently the alignment between the Natural Products Atlas and MIBiG 2.0 (a database of natural product biosynthetic gene clusters) is complete and up to date. It is our intention to maintain this alignment through ongoing bidirectional data curation for future MIBiG releases.

Manual review of entries with incomplete configurational assignments
Retrospective evaluation of the database revealed that ∼10% of compounds were missing configurational information at one or more chiral centers. Because many of the original structures were derived from PubChem (12) or ChEMBL (13,14) rather than transcribed de novo from the original papers into chemical drawing software, we were concerned that configurational information may not have been captured in the original curation effort. To address this, we manually reviewed all articles describing compounds containing one or more undefined stereocenters (3154 compounds in total) to verify the accuracy of the structures. This resulted in 487 updated structures, as well as the addition of 226 compounds not captured in the original curation effort.
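Identifying entries of this kind can be automated with standard cheminformatics tooling. The sketch below uses RDKit's FindMolChiralCenters with includeUnassigned=True to show one plausible way of screening a structure list for unassigned stereocentres; the SMILES strings are arbitrary illustrations, and this is not the curation pipeline used for the database itself.

```python
# Minimal sketch: flag structures with one or more unassigned stereocentres.
from rdkit import Chem

examples = {
    "assigned": "C[C@H](N)C(=O)O",  # alanine with the centre defined
    "unassigned": "CC(N)C(=O)O",    # same skeleton, centre left undefined
}

for label, smi in examples.items():
    mol = Chem.MolFromSmiles(smi)
    # Returns (atom index, 'R'/'S'/'?') tuples; '?' marks unassigned centres.
    centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    missing = [idx for idx, tag in centers if tag == "?"]
    status = "needs manual review" if missing else "fully specified"
    print(f"{label}: {centers} -> {status}")
```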
Incorporation of external databases
A valuable aspect of the Natural Products Atlas infrastructure is the set of options for user depositions and corrections. Although the number of reported corrections since the original release has been low, new depositions have been a steady source of information for the database. In addition, we have collaborated with external research teams to integrate data from various specialty collections. The largest of these efforts was the incorporation of the newly created CyanoMetDB database of cyanobacterial natural products (15). This included the alignment of structures and compound names between the two databases, review and correction of conflicts with source organisms and isolation references, and the addition of 800 CyanoMetDB compounds previously not present in the Natural Products Atlas. As with previous integration efforts, we have included CyanoMetDB ID numbers for all relevant compounds, permitting bidirectional navigation between the two resources. Separately, Rudolf and co-workers recently performed a systematic review of terpenoid natural products from bacterial sources (16). We have collaborated with the authors of this review to integrate these data, including the addition of 311 new compounds and corrections to structures and compound names for existing entries. Finally, we have also incorporated in-house compound collections from several academic research groups, including the Müller laboratory at the Helmholtz Institute for Pharmaceutical Research Saarland (100 compounds) and the Clardy laboratory from Harvard Medical School (75 compounds).

Addition of chemical ontology terms
Two different chemical classification systems have recently been developed for describing small molecule structures: NP Classifier (5) and ClassyFire (17). NP Classifier employs a biosynthetic ontology that is specific to natural products and is based on classifications that are widely accepted in the natural products community. By contrast, ClassyFire is a general classification system for small molecules that is based on the ChemOnt ontology. Both classification systems enable the subdivision of the database for targeted search applications, using terms of relevance to the natural products community. In this new release of the Natural Products Atlas, we have added classifications from both systems to every entry. These data appear at the bottom of the Compound page and have been added as search terms on the Advanced Search page.

Modifications to database structure
The original version of the database was built on a MySQL relational database that included capacity for a single compound name and source organism for each compound. Recognizing that molecules often have several synonyms in the literature, and that compounds are frequently reported from additional source organisms, we have refactored the database structure and the search engine to accommodate multiple entries for both fields. Currently, neither the synonym lists nor the additional source organism lists are complete. Work to identify and curate additional data in these two fields is an ongoing objective for the next database release.
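To make the refactor concrete, the following is a minimal sketch of the one-to-many pattern described above, written with SQLAlchemy. All table, column, and class names here are illustrative assumptions for exposition; the actual NP Atlas schema is PostgreSQL-based and more elaborate.

```python
# Minimal sketch: one compound row linked to many synonyms and many
# source organisms, replacing a single-name/single-organism design.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Compound(Base):
    __tablename__ = "compound"
    id = Column(Integer, primary_key=True)
    preferred_name = Column(String, nullable=False)
    synonyms = relationship("Synonym", back_populates="compound")
    organisms = relationship("SourceOrganism", back_populates="compound")

class Synonym(Base):
    __tablename__ = "synonym"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    compound_id = Column(Integer, ForeignKey("compound.id"))
    compound = relationship("Compound", back_populates="synonyms")

class SourceOrganism(Base):
    __tablename__ = "source_organism"
    id = Column(Integer, primary_key=True)
    genus = Column(String)
    species = Column(String)
    compound_id = Column(Integer, ForeignKey("compound.id"))
    compound = relationship("Compound", back_populates="organisms")

engine = create_engine("sqlite://")  # in-memory stand-in for PostgreSQL
Base.metadata.create_all(engine)
```

The design choice here is the usual normalization trade-off: moving synonyms and organisms into child tables allows unbounded lists per compound at the cost of join queries, which the search engine must then be updated to perform.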
Creation of new website framework
The previous version of the database included a web interface built on the Content Management System (CMS) framework Joomla. Searches and interactive features were developed using a mixture of PHP and JavaScript (JS), and structure searches were performed using a suite of custom tools within the Marvin chemical drawing plugin. Development of the new API provided an opportunity to simplify and modernize the website framework. We have removed the CMS layer and rebuilt the website from the ground up using the Django framework for Python, with templated HTML pages and native JS for all interactive components, employing RESTful API queries for all search functions and database access. This has improved response times and significantly simplified the framework for future development. For example, development of search pages and custom dashboards was particularly challenging within the CMS environment. This restriction has been removed with the refactor of the website, reducing the barrier to development for future data visualizations.

Additional download options
The number of download options for the database has been increased in this new version. In addition to the original TSV download of the full database, we now offer an Excel version of the same TSV format, an SDF download of all compounds, a structured JSON format for the full database, and graphML exports of both the Cluster and Node graphs displayed in the Explore section of the website. The SDF file is useful for importing into software that incorporates chemical structures, such as mass spectrometry and nuclear magnetic resonance data processing packages. The structured JSON is useful for developers who wish to recreate structured elements of the database without parsing the TSV file. The graphML files provide network representations of the chemical diversity of the database that were previously only available as interactive network visualizations in the web interface. Finally, previous versions of the database remain available to users via the Zenodo repository (https://doi.org/10.5281/zenodo.3530792).

Data overview
The new release increases the size of the database by 8128 compounds: 3176 of fungal origin and 4952 of bacterial origin. Over the past 10 years (2011-2020) the number of new compounds reported from bacterial sources has remained roughly constant at ∼540 per annum. However, the number of compounds reported annually from fungi has increased dramatically during this same period, from 655 in 2011 to 1236 in 2020 (Figure 2A). This has been accompanied by only a moderate increase in the number of publications on fungal natural products over the same period, from 212 in 2011 to 308 in 2020. This suggests that recent reports on fungal natural products are discovering more analogues per study than was typical in the previous decade. Notably, the number of compounds with low structural similarity to known scaffolds has remained roughly constant at ∼60 compounds per year (Figure 2B). This is in line with our previous evaluation of compound novelty, which demonstrated a steady rate of novel compound discovery in the presence of increasing rates of compound isolation. In this context, novel compounds are defined as those which have maximum similarity scores <0.5 (Dice similarity, Morgan fingerprinting with radius 2) when compared to compounds from the same source type (bacterial or fungal) isolated in prior years. Somewhat surprisingly, there is little difference between the rates of novel compound discovery from bacterial and fungal sources, even though many more fungal compounds are reported per year.

[Figure 2. (A) Rates of compound discovery from bacterial (blue) and fungal (red) sources over the period 2011-2020. (B) Rates of 'novel' compound discovery from bacterial (blue) and fungal (red) sources over the period 2011-2020.]
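The novelty criterion described in the Data overview is straightforward to reproduce with standard cheminformatics tools. Below is a minimal sketch using RDKit, with Morgan fingerprints of radius 2 compared by Dice similarity; the SMILES strings and the 2048-bit fingerprint length are illustrative choices, not parameters stated in the paper.

```python
# Minimal sketch: flag a candidate as 'novel' when its maximum Dice
# similarity to previously isolated compounds falls below 0.5.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    # Morgan fingerprint, radius 2, folded to 2048 bits (assumed length).
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

prior_years = [fingerprint(s) for s in
               ("CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O")]
candidate = fingerprint("C1CC2CCC1CC2")  # an unrelated bicyclic scaffold

max_sim = max(DataStructs.DiceSimilarity(candidate, fp) for fp in prior_years)
verdict = "novel" if max_sim < 0.5 else "similar to a known scaffold"
print(f"max Dice similarity = {max_sim:.2f} -> {verdict}")
```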
Advantages of incorporating full taxonomic hierarchy
The addition of full taxonomic hierarchy and NP Classifier descriptions provides an opportunity to examine the distribution of compound classes across taxonomic space. Figure 3 presents the distribution of biosynthetic classes (NP Classifier pathway) by taxonomic class. Interestingly, there is strong consistency in the biosynthetic origins of compounds within each of the three domains (bacteria, archaea, and eukaryotes) but significant divergence in biosynthetic distributions between domains. For example, most bacterial classes are dominated by compounds of peptidic and hybrid polyketide synthase/non-ribosomal peptide synthetase (PKS-NRPS) origin, whereas most fungal classes include large numbers of polyketide and terpenoid natural products but few peptidic or PKS-NRPS structures. This is in line with other recent studies that have examined the biosynthetic distributions of compounds from microbial sources (18). Inclusion of these terms now permits users to filter search results by both taxonomic origin and biosynthetic class, enabling the creation of custom reference libraries for either category. This is valuable for research groups that study specific types of organisms (e.g. myxobacteria) and for developers creating annotation tools (e.g. DEREPLICATOR+) who require training sets for specific compound classes.

Development of API and new website framework
The creation of a RESTful API further improves our commitment to FAIR (Findable, Accessible, Interoperable and Reusable) data principles (19), providing improvements to two key facets: interoperability and reusability. Interoperability is dramatically improved by providing a persistent endpoint for other resources to automatically retrieve and query the latest version of the database. Power users have the added benefit of being able to construct complex queries and download large slices of data. From a reusability standpoint, the detailed changelog maintained by the API also provides much clearer data versioning and provenance. The RESTful API was designed with four closely interrelated resources: compounds, references, taxa, and networks. These resources allow access to the data from all four perspectives, with compound entries linking all the data together. Supporting these four perspectives provides a clear path for users from various disciplines to access or integrate data from the Natural Products Atlas into their own projects. For example, a taxonomy or BGC database is now able to automatically query and retrieve data from our API about which compounds were originally isolated from a given taxon at any level of the taxonomic tree. It also simplifies the development of novel dashboards and visualizations, such as our new taxonomy Discover dashboard (Figure 1).
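As a hedged illustration of this kind of programmatic access, the sketch below retrieves a single record over HTTP with the Python requests library. The base URL and documentation link come from the paper, but the endpoint path, the NP Atlas identifier, and the JSON field names are assumptions made for the example; the interactive documentation at https://www.npatlas.org/api/v1/docs describes the actual contract.

```python
# Minimal sketch: fetch one compound record from the RESTful API.
import requests

BASE = "https://www.npatlas.org/api/v1"

# Hypothetical endpoint and identifier, shown for illustration only.
resp = requests.get(f"{BASE}/compound/NPA000001", timeout=30)
resp.raise_for_status()
record = resp.json()

# Field names below are assumptions about the payload structure.
print(record.get("name"), record.get("smiles"))
```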
Legacy data review
Retrospective review of database entries can significantly improve database quality, particularly if existing data can be used to highlight 'outliers' for re-examination. However, re-curation of existing entries does not expand database coverage, and is therefore typically of low priority for academic database teams. Regular use of the Natural Products Atlas revealed that a small but significant number of compounds (∼10%) were missing configurational information at one or more chiral centers. This could be due either to incomplete configurational description at the time of discovery (more common for older papers), or to inclusion of an incomplete structural representation from one of the public compound repositories (PubChem, ChEMBL, etc.). Re-evaluation of these entries identified 487 natural products for which additional configurational information was available. More importantly, this re-evaluation validates the existing structural information, providing a strong foundation upon which to build future database development efforts and improving user confidence in the database contents.

A related issue is that natural product structures are occasionally corrected due to re-isolation and re-evaluation (20), computational reassessment of the original NMR data (21-23), or total synthesis (24,25). The Natural Products Atlas includes fields and search terms for reassignment data; however, the reassignment dataset remains incomplete. This is a complex task because structural reassignments are rarely the central objective of research studies. In consequence, these results are often not mentioned in article titles, making it time consuming to scan the literature for reassignment data. Development of tools to automatically identify articles reporting structure reassignments is ongoing in our group and forms one of the aims for the next cycle of database development.

An ongoing challenge with database development is the limited availability of machine-readable structure representations for new compounds. This significantly increases curation time, due to the need to manually enter new structures, and increases the number of structural errors. Recently, the Journal of Cheminformatics has adopted a chemical structure data template (26) and has begun to encourage authors to deposit structure data as part of article submission (27). As noted by the proponents, this policy will greatly improve the FAIR properties of data from these articles. We hope that other journals will adopt this forward-thinking policy to improve access to chemical data and decrease the time required to incorporate new articles into subject databases.

CONCLUSION
The Natural Products Atlas has been structurally refactored to incorporate a new RESTful API and a new framework for the associated web interface. Within the database itself we have expanded the coverage of taxonomic information to include taxa at all levels and have added 8128 new compounds.
These efforts extend current coverage to early 2021, while also backfilling legacy compounds that were omitted in the first iteration of the database and confirming or correcting the structures of all molecules missing configurational information at one or more chiral centers. Finally, we have added compounds from several custom databases, and have included ontological terms from two small molecule classification systems. Together, these improvements have expanded the range and scope of queries that can be performed using the web interface and provide new automated database access for developers of related external resources.

DATA AVAILABILITY
The Natural Products Atlas is available at https://www.npatlas.org. The database is provided under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Documentation for the API is available at https://www.npatlas.org/api/v1/docs.
The Construction of Zeno's "Ideal Community"

Within the annals of Western political thought, the essence and centrality of the Stoic School have played a vital role in the development of Hellenism and subsequent philosophical ideologies. Zeno of Kition, one of the founding figures of Stoicism, made substantial contributions to the formulation of early Stoic principles. Alongside his notable advancements in fields such as ethics, physics, and logic, Zeno's concepts of "the whole world" and "the citizen of the world" represented a pioneering breakthrough within the societal context of that era. These concepts transcended the prevailing class system and held epoch-making significance.

1. Preface
In 333 BC, Zeno was born into a merchant family in Kition, Cyprus. Despite being a Phoenician, he was exposed to Greek philosophy from an early age. His father would often bring back books from his business trips to Greece, which not only developed Zeno's reading and thinking habits but also introduced him to the works about Socrates, which were renowned for their wisdom [1]. As he grew older, Zeno started accompanying his father on trips to transport dyes from Phoenicia to Athens. However, a tragic incident occurred when their merchant ship sank, drastically altering Zeno's life trajectory.

In 312 BC, Zeno's ship embarked from Phoenicia with the purpose of conducting business with Bayrius. However, it tragically met its demise in the waters near Athens. Once he was rescued and reached land, Zeno found himself aimlessly wandering the streets of Athens, soaked from head to toe. In his hands, he held the Greek philosophical texts that he had cherished since his teenage years. Zeno's devotion to books was so profound that he tirelessly inquired about places where he could engage with thinkers akin to Socrates. It was during this period that fate intervened: the owner of a local bookstore noticed the Cynic philosopher Crates passing by, motioned towards him, and directed Zeno to "follow him." From that moment forward, Zeno chose Crates as his mentor and delved deeper into the realm of Greek philosophy under his guidance.

He studied in Athens for twenty years, during which time Zeno not only absorbed the thought of Cynicism but also actively engaged with other schools. Through these interactions, his ideological system matured gradually. After leaving Crates and his other teachers, Zeno frequently engaged in philosophical discussions with like-minded individuals such as Ariston beneath the painted colonnade (the Stoa Poikile) in central Athens. It was during this period that the Stoics emerged onto the stage of Western philosophy.

The Athenians held Zeno in great reverence, entrusting him with the keys to their city wall for safekeeping. Furthermore, they bestowed upon him a golden crown and a bronze statue as tokens of their admiration. Throughout his lifetime, Zeno dedicated himself to exploration and approached academic pursuits and reasoning with utmost seriousness. His intellectual endeavors resulted in the authoring of some 20 significant works, including "On Life Following Nature," "On Responsibility," "On Law," and "Ethics." Regrettably, these works have not withstood the test of time and remain lost to contemporary readers.
Zeno possessed the commendable qualities of thriftiness and resilience. According to reports, he ate food that did not require the use of fire for its preparation and wore a remarkably thin cloak. He was also remarkably adaptable and not particularly fond of exhilarating activities. He held the conviction that young individuals should embrace humility and strive to avoid arrogance. Moreover, he highlighted the significance of attending to the substance of language rather than mere expression and vocabulary, with the aim of becoming more intellectually astute. Zeno emphasized the importance of maintaining a dignified and proper demeanor at all times.

Virtue and dignity held immense value for Zeno: he saw them not only as the means to achieving happiness but as happiness itself. This notion remained consistent with Zeno's life, as he lived to the remarkable age of 89 without ever succumbing to illness. Zeno is credited with being the first to introduce the concept of "duty" into philosophical discourse.

2. The connotation of Zeno's "national thought"
Before discussing Zeno's concept of the state, it is essential to understand the teachings of Heraclitus, an ancient Greek philosopher. Heraclitus, who lived in the 5th century BC, proposed that fire is the origin of all things. According to him, the world consists of active matter and passive matter, with fire representing the active matter and an unspecified material representing the passive matter. Every element in the world is a combination of these two fundamental constituents. In Heraclitus' philosophy, fire itself embodies the concept of "logos" (which can be equated to "idea" and "God" in idealism). Therefore, the notion of "logos" is inherent in everything that exists. This "logos" is what gives every entity in the world its ability to function. By expanding our perspective, we can grasp that the entire universe is also governed by this "logos," meaning that the entire universe operates in accordance with an overarching "idea" [2].

The Stoics were heavily influenced by the ideas of Heraclitus as well. They believed that "logos" encompassed everything in the world and governed the entire universe. All things operated in accordance with "logos", and all individuals adhered to its laws. Building upon this foundation, Zeno presented the concepts of "the law of nature" and "the whole world" in his book "Ideal World" (originally titled "Politeia"; the different rendering distinguishes it from the works of Plato and Aristotle while capturing the essence of Zeno's thought).

Let us begin by discussing the concept of "the whole world". The interpretation of the term "whole world" in this context differs from the cosmopolitanism currently being studied. However, I am confident that the former significantly impacts the latter. Furthermore, Zeno's term has been translated by some as "world country" or "world city-state", but I contend that neither adequately captures the essence of Zeno's concept. This is because both country and city-state are politically loaded, whereas Zeno's notion of the "world whole" more closely resembles a spiritual community.

As previously stated, all individuals are governed by the principle of "logos," and thus belong to a collective known as "the whole world." Within this global community, every person is deemed equal and referred to as a "citizen of the world."
[3] Simply stated, virtue is the sole good and its opposite is evil. Happiness, which fosters the flourishing of individuals, represents the ultimate objective, and it can be attained through the pursuit of virtue [4]. Zeno's notion of "the whole world" aims to establish a profound sense of human harmony. To this end, Zeno established a citizenship criterion within this "whole world," wherein only individuals possessing "good moral character" are eligible to become citizens. Zeno's conceptualization of "the world as a whole" overlooks distinctions based on race, class, and physiology, instead employing morality as the sole criterion for differentiation.

It is evident that Zeno offers not only the notion of the "citizen of the world" but also a comprehensive design for the entire world. Firstly, it is prohibited to construct temples, courtrooms, and sports facilities within this community. Secondly, there is no circulation of currency within it. Furthermore, both men and women are expected to wear identical clothing, disregarding physiological distinctions. Nonetheless, Zeno did not elaborate further on the specifics of this lifestyle [5]. These three points delineate Zeno's departure from the city-state system, his pursuit of harmony, and his rejection of hierarchy. A more detailed explanation of these concepts will be provided subsequently.

The pursuit of a unified world may seem idealistic, given that in reality the systems, laws, and other elements of political entities such as city-states and countries vary greatly. Thus, the question arises: how can we strive for a cohesive global entity? As previously discussed, the world operates according to the law of logos, a natural law capable of reconciling divergent realities. According to Zeno, this natural law remains intangible, as citizens unconsciously adhere to and embrace it. It serves as the foundation for all concrete laws, existing within various ideological systems [6]. The essential requirements of natural law for citizens can be summarized as "responsibility" and "virtue." "Responsibility" entails aligning one's actions and words with the natural laws, while "virtue" represents the aim of observing these laws and fostering the harmonious development of the global community.

From Zeno's exposition on "the whole world" and "the citizens of the world," it becomes evident that this entity is fundamentally distinct from the city-states of that time. The public spaces abolished by Zeno were core elements of the city-states of that era. Additionally, the size of the world as a whole is not constrained but rather contingent upon the number of virtuous citizens within it. Virtue is the sole criterion for citizenship, thereby abolishing past divisions between Greeks and barbarians, as well as between free individuals and slaves in actual city-states. Zeno also placed great faith in enlightenment, asserting that even the wicked can transform into virtuous individuals and attain the status of world citizens [7]. Moreover, the absence of currency circulation in Zeno's constructed world can best be understood as an indication of the state of self-sufficiency and harmony that has been achieved.
3.1 Alexander's Expansion

To fully grasp the essence of Zeno's ideas, it is crucial to consider the historical backdrop of his time. As previously stated, Zeno was born at Citium in Cyprus, a city of Phoenician origin, but through his father's commercial travels he became immersed in Greek philosophy, particularly the works of Socrates. Eventually he journeyed to Athens, where he became a devoted student of the Cynic Crates. Consequently, Zeno's thought bears a significant imprint of Greek philosophy, exemplified by the "cosmopolitanism" inherent in Cynicism. Nevertheless, his thinking also departs notably from the conventional city-state-oriented mindset depicted in Plato's Republic.

In addition to the influence of Greek philosophy, the political circumstances of Zeno's era greatly shaped his conception of the entire world. Following a decade of intense warfare, Alexander consistently emerged victorious, conquering and expanding into other regions; the Middle East, and even India, became marked by the presence of Greek city-states. Realizing that sheer force was insufficient to ensure stability within this vast empire, Alexander implemented numerous policies aimed at fostering ethnic integration and establishing positive relationships between Greeks and non-Greeks. In an effort to dismantle Greece's inherent sense of superiority, Alexander also married foreign princesses, setting a precedent for imitation. [8] Despite these extensive endeavors, he was unable to quell the chaos brought on by expansion; the resulting disorder had a profound impact on the social fabric, leading to moral decay and a deterioration of the world at large. Under these circumstances of expansion and integration, a new outlook emerged that greatly influenced traditional Greek philosophy and compelled the ancient Greeks to reconsider the possibility of global unification.

Construction and Development of Zeno's Ethical Theory

According to the Stoics, philosophy can be divided into logic, physics, and ethics. They liken logic to a trunk, ethics to flesh and blood, physics to the soul, and philosophy as a whole to an "animal" that integrates the three; some classify the logical part as rhetoric and dialectics. The soul is initially devoid of knowledge and experience, but perception, like a seal imprinted on wax, generates an impression which then solidifies into a memory, and memories combine to form an experiential framework. Through the synthesis of sensation and imagery, ideas are formed, and if these ideas are rooted in experience, they yield common understandings. Scientific concepts emerge consciously and systematically, representing the "culmination of meticulous contemplation". Moreover, the Stoics argue that the yardstick of knowledge is the "self-evident presence of impression and concept": in essence, if an individual perceives the existence of a tangible object corresponding to a concept upon encountering it, that concept is considered scientifically valid and accurate. The Stoics give significant weight to formal logic, particularly the syllogism.
[9] Ethics, often discussed alongside logic, holds significant importance for the Stoics, akin to the significance of flesh and blood in their simile. The founder of Stoicism, Zeno, expounded extensively on ethical questions; already during his time as a student of Crates he was exposed to the radical ethical ideals of Cynicism. This exposure greatly influenced Stoicism, leading to the incorporation of ethical doctrines from its very inception. Stoicism holds human society in high regard while assigning natural philosophy a lesser place. Furthermore, according to the Stoics, the relationship between man and the universe can be likened to that of a small spark and a raging fire: the former is an integral part of the latter, exhibiting synchronization and coordination with it. Zeno also emphasized that all things eventually burn up and return to the original state of fire, followed by a fresh start. According to Stoicism, the world unfolds according to this supreme law, with everything occurring in a predetermined manner for a specific purpose; this belief can thus be characterized as Stoic fatalism. It is therefore essential for individuals to gauge their personal objectives against the realization of the universe's purpose. [10] This necessitates adhering to God's will, abiding by societal norms, and fulfilling one's responsibilities. Such a moral existence, in their belief, leads to happiness. The early Stoic school maintained that genuine joy and fulfillment in life could only be achieved through a virtuous and rational existence; consequently, they advocated the restraint of desires and the abandonment of irrationality. Whether one is wealthy or destitute, honored or disgraced, high-born or low-born, healthy or plagued by illness, the Stoics advocated an indifferent stance, aiming to achieve tranquility and freedom from craving. This early Stoic philosophy, which aligns with the natural course of destiny and demonstrates apathy, combines elements of both fatalism and asceticism. In contrast to the Epicureans, however, the Stoics place far greater emphasis on leading a simple life and exhibiting indifference, ultimately succumbing entirely to idealistic fatalism.

The Stoic concept of "cosmic tranquility", or "freedom from passion or indifference", is fundamentally guided by the principle of reason, the cosmic logos (elaborated later). In human society, this principle manifests as the claim that "virtue is the only good", a good that can be attained through practice and education. Socrates viewed error as merely a failure of reason; the Stoics, who regarded themselves as upholding the Socratic tradition, extended this account to the various passions such as joy, fear, desire, and sadness. Misjudging a present good produces pleasure, while misjudging a future good incites desire; misjudging a present evil leads to pain, and misjudging a future evil evokes fear. All such emotions are abnormal states of the soul, which ought not merely to be controlled but eliminated. [11]

4. Breakthrough and enlightenment of Zeno's "national thought"

4.1 Breakthroughs and Limitations of "The World as a Whole"

As previously mentioned, Plato's Republic and the Cynics' cosmopolitanism had a significant impact on Zeno's ideology.
[12] Both Plato and Zeno extensively discussed the concept of the "state", yet they presented two distinct worlds. Plato's vision of the "state" remained confined within the framework of the city-state, never transcending the existing Greek city-state system; his Republic comprised three classes (rulers, defenders, and laborers), each fulfilling its respective responsibilities. In contrast, Zeno's "world as a whole" represented a departure from the limitations of the city-state system, embodying the essence of a global community: this whole had no predefined boundaries, and all citizens were considered equal. Influenced by his mentor Crates, Zeno's ideas were further shaped by the principles of Cynicism. One hallmark of Cynicism is the proposition that the city-state deviates from its inherent nature, a notion also reflected in Zeno's conception of the world as a whole. It is worth noting, however, that Cynicism attends only to personal happiness, while Zeno's "world as a whole" attends to the community under "logos", a spiritual community rather than a political unity, with morality and ethics as its bond.

From the above discussion of Zeno's thought, it is evident that his concept of the "world as a whole" defies the prevailing principles and hierarchy of traditional Greek thought, which is precisely where its value lies. While the city-state system was dominant during that era, it also fostered division and isolation: each city-state, as an independent political entity, established its own self-contained system of citizens, institutions, and laws, leading to fragmentation and separation among city-states. The Peloponnesian War serves as a vivid example of the hostility and conflict between them. Within this multitude of city-states, the inhabitants boasted a sense of superiority over foreigners, confining the boundaries of ancient Greek thought within the city-state system. With Alexander's extensive conquests, however, people began to realize the need for a new perspective, and it was Zeno who first proposed viewing the world as a whole. Additionally, Zeno held that virtue was the sole criterion for distinguishing individuals, thereby rejecting prevailing ancient Greek institutions such as slavery and the division of society into Greeks and barbarians. This revolutionary stance transcended the established class system of the time and held immense significance for its era.
However, it must be acknowledged that Zeno's world as a whole is characterized by idealism. Firstly, within this world, virtue replaces race, nationality, rank, and physiology as the sole criterion for distinguishing individuals; yet this conception is disconnected from reality, and the moral community exists only in the realm of imagination. Secondly, it is unrealistic to attribute dominance to the natural law determined by "logos", as it cannot fulfill the role of a genuine law in maintaining social operations. Additionally, there are contradictions within Zeno's ideology. For instance, if the world is assumed to be governed by natural laws, the purpose of striving for ultimate goodness and virtue becomes questionable. Similarly, Zeno's view of destiny suggests that people's fate is predetermined by "logos", yet he also acknowledges the significance of individuals' efforts in shaping their own destiny. When this perspective is applied to society, citizens find themselves trapped in a dilemma between adapting to their fate and attempting to change it.

4.2 "The World as a Whole" and "a community with a shared future of mankind"

Throughout the course of human development, the concept of "the whole world" has been constantly enriched and refined as societies evolve. Both attempts at supranational organization and proposals for world government fully demonstrate mankind's yearning for peace. However, differences of ideology, nationality, and geography still limit this exploration. We cannot ignore these objective facts, but neither do they amount to a total negation of the idea of "the whole world".

Zeno's "world whole" differs from the present notion of a "world country". Its core is a moral community, not subordinate to the modern political or economic spheres. To achieve harmony in the world and even the universe, human beings should respect each other, conform to nature, and abide by morality, taking this as the way to happiness. The reasonable core of the idea of "the whole world" remains applicable today: global problems such as climate deterioration, epidemics, and rising terrorism confront all mankind and cannot be dealt with by the limited strength of any one country or region. Only by letting go of prejudice and diluting differences can we join hands to better promote the development of human society. This is the significance of "a community with a shared future of mankind".

"The World as a Whole" holds that the link connecting the people of the world is "logos"; for "a community with a shared future of mankind", this link is the fundamental interests of human survival. The background of the former was the expansion of imperial territory and ethnic integration under Alexander's rule, and the resulting social disorder; the latter was born amid unprecedented changes in the world, where shifts in the international pattern and order have added instability to the international community. The former is idealistic in color, offering ideas for solving the social problems of its time; the latter is a feasible scheme grounded in reality and contributing to present human development.
5. Conclusion

From the above discussion, it can be seen that both "the whole world" and "a community with a shared future of mankind" look beyond existing objective obstacles toward a high degree of human harmony. Since ancient times, mankind has never stopped exploring paths to world peace and stability; as for how to face current global challenges, I believe answers can be drawn from Zeno's "world as a whole" and from "a community with a shared future of mankind".
Critical switching current density induced by spin Hall effect in magnetic structures with first- and second-order perpendicular magnetic anisotropy

In this study, we derive analytical expressions for the critical switching current density induced by the spin Hall effect in magnetic structures with first- and second-order perpendicular magnetic anisotropy. We confirm the validity of the expressions by comparing the analytical results with those obtained from a macrospin simulation. Moreover, we find that for a particular thermal stability parameter, the switching current density can be minimized for a slightly positive second-order perpendicular magnetic anisotropy, and the minimum switching current density can further be tuned using an external magnetic field. The analytical expressions are of considerable value in designing high-density magnetic random access memory and cryogenic memory.

The strength of the perpendicular magnetic anisotropy (PMA) is an important parameter that affects the performance of spin devices, such as magnetic random access memory (MRAM) 1, magnetic sensors 2,3, and spin torque oscillators [4][5][6][7]. Magnetic tunnel junctions (MTJs) with PMA are promising as storage cells in high-density MRAM because of their high thermal stability and low switching current density 1,8. Recently, spin devices using magnetic structures with both first- and second-order PMA have attracted considerable interest owing to their novel properties [9][10][11][12][13][14][15][16]. For MRAM where the magnetization is switched by the spin-transfer torque (STT), the switching can be fast, without heat assistance and incubation time, by using MTJs with an easy-cone state, which can be formed when the second-order PMA is considerably strong 9,10. Recently, it was demonstrated that in NM/FM bilayers (NM and FM denote nonmagnetic and ferromagnetic, respectively), an in-plane current flowing through the NM layer can induce a spin torque acting on the adjacent FM layer that is sufficient to reverse its magnetization [17][18][19][20][21][22][23]. This spin torque is completely different from the STT generated by an out-of-plane current flowing through the MTJ stack; it is known as a spin-orbit torque (SOT) generated by the spin Hall effect (SHE) 18,19,[24][25][26]. The SOT in NM/FM bilayers has attracted intense interest because SOT-MRAM can be operated at ultrafast speed and is therefore a promising candidate for replacing conventional static random access memory (SRAM) [27][28][29]. A cryogenic memory cell integrated with a Josephson junction circuit is another promising application of NM/FM bilayers utilizing ultrafast SOT switching 28,[30][31][32]. Although numerous studies have examined the effects of various parameters, such as applied current pulse width and external magnetic field, on SOT switching performance, no investigations have addressed the effects of the second-order PMA, which is particularly important in cryogenic applications where the second-order PMA is strong and sometimes dominant over the first-order PMA 27,28,33. In this study, analytical expressions for the SHE-induced critical switching current density (J_c) are derived for magnetic structures with first- and second-order PMA, and their validity is tested by comparing the analytical results with those obtained from a macrospin simulation.
Then, the analytical expressions are used to systematically examine the SHE-induced J_c as a function of the first- and second-order PMA strengths. Finally, the analytical expressions are utilized to optimize the SHE-induced J_c and the thermal stability parameter, which are important device parameters related to the power consumption and the data retention time, respectively. Here, K_1^eff is the effective first-order PMA energy density that accounts for the demagnetizing term, K_1^eff = K_1 − 2πN_d M_S² (K_1, N_d, and M_S are the first-order PMA energy density, the demagnetizing factor, and the saturation magnetization, respectively). K_2 is the second-order PMA energy density, and V_F is the volume of the FM. The following relations define H_K1^eff and H_K2: H_K1^eff ≡ 2K_1^eff/M_S and H_K2 ≡ 4K_2/M_S. Using equation (1), it is a straightforward task to construct the phase diagram shown in Fig. 1(b). The magnetic easy axis is along the z-axis (out-of-plane easy state) when H_K1^eff > 0 and H_K2/H_K1^eff ≥ −1, and it is canted slightly from the z-axis (easy-cone state) when H_K1^eff > 0 and H_K2/H_K1^eff < −1. The equilibrium polar angle of the magnetization (θ_E) is 0 or π for the out-of-plane easy state; for the easy-cone state, θ_E can be determined from the relation cos²θ_E = −H_K1^eff/H_K2. For the macrospin simulation, the modified Landau–Lifshitz–Gilbert equation including a damping-like SOT was numerically solved 19,27. Here, the symbols γ, α, and c_J denote the gyromagnetic ratio, the damping constant, and the strength of the damping-like SOT, given by c_J = (ħ/2e)(θ_SH J/M_S t_F) (ħ is the reduced Planck constant, e is the electron charge, θ_SH is the spin Hall angle, J is the in-plane current density, M_S is the saturation magnetization, and t_F is the thickness of the FM layer) 39. The temporal variation of the normalized magnetization vector (∂m/∂t) is described as the sum of three terms: the precessional torque induced by the effective field (H_eff), the damping torque, and the damping-like SOT. This work focuses predominantly on the damping-like SOT; some results related to the field-like SOT, which is considerably large in some cases [20][21][22], are described in Supplementary Section 4. The direction of the SOT is perpendicular to that of H_K1^eff and H_K2, whereas the direction of the STT is collinear with them. Because of this, there are two main differences in the switching behavior. First, the SOT competes only indirectly with the damping torque originating from H_K1^eff and H_K2, whereas the STT competes with it directly. Owing to this indirect competition, SOT switching is significantly faster than STT switching; it also makes the critical switching current density independent of α. Second, the SOT drives m toward +y (or −y), and therefore SOT switching is stochastic. To achieve deterministic SOT switching, it is necessary to apply a field component H_x; the precessional torque due to H_x tilts m slightly toward +z (or −z). In order to understand the magnetization switching behavior of a system showing the easy-cone state, the macrospin simulation was performed using the following parameters: H_K1^eff = 5 kOe, H_K2 = −10 kOe, and H_x = 0.05 kOe. An unusually large value of H_K2 was used to demonstrate its effect on the magnetization switching behavior more clearly.
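As a numerical companion to the phase diagram, the short sketch below (an illustration in Python, not code from the study) classifies the magnetic state and evaluates θ_E from the relation cos²θ_E = −H_K1^eff/H_K2 given above.

```python
import numpy as np

def equilibrium_angle(h_k1_eff, h_k2):
    """Classify the magnetic state for H_K1^eff > 0 and return the equilibrium
    polar angle theta_E (rad), using cos^2(theta_E) = -H_K1^eff / H_K2."""
    if h_k1_eff <= 0:
        raise ValueError("this sketch covers only H_K1^eff > 0")
    if h_k2 / h_k1_eff >= -1:
        return 0.0, "out-of-plane easy"      # theta_E = 0 (or pi)
    cos_sq = -h_k1_eff / h_k2                # lies in (0, 1) for the easy cone
    return float(np.arccos(np.sqrt(cos_sq))), "easy-cone"

# Parameters used in the simulation above (in kOe):
theta_e, state = equilibrium_angle(5.0, -10.0)
print(state, np.degrees(theta_e))            # easy-cone, 45.0 degrees
```

For H_K1^eff = 5 kOe and H_K2 = −10 kOe the cone angle is 45°, consistent with the switching trajectory between θ ≈ θ_E and θ ≈ π − θ_E discussed below.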
Figure 2(a) shows the macrospin simulation results for the temporal dependences of θ and ϕ under an applied pulse J, the shape of which is shown in Fig. 2(b). The pulse turns on at t = 4.0 ns and off at t = 9.0 ns, with an exponential rise and fall characterized by a time constant of 0.5 ns. It is observed from Fig. 2(a) that, under this pulse cycle, m is switched from θ ≈ θ_E, ϕ = 0 to θ ≈ (π − θ_E), ϕ = 0. The ϕ value is zero before and after the switching owing to the application of H_x for deterministic switching. The detailed switching process is as follows. In the first period, t = 4.0-7.0 ns, θ increases gradually while ϕ remains nearly unchanged, apart from some oscillations in the initial stage of the period whose strength decays with time. In the second period, t = 7.0-7.5 ns, an abrupt change in both θ and ϕ occurs. The changes are in opposite directions, θ in the positive direction and ϕ in the negative direction, and the change in θ is significantly smaller than that in ϕ. After this abrupt change, both θ and ϕ remain nearly unchanged in the third period, t = 7.5-9.0 ns; θ is slightly above π/2, and ϕ slightly below −π/2. When the pulse is turned off at t = 9.0 ns, m precesses until it reaches the new equilibrium position. This behavior can be observed more clearly in the 3D illustration of the trajectory of m shown in Fig. 2(c).

Derivations of analytical expressions for critical switching current density

The temporal variations of θ and ϕ in the first period, t = 4.0-7.0 ns, which are of considerable importance in deriving the analytical expressions for J_c, can be explained by the competition between the precessional torque and the damping-like SOT. Provided ϕ ≈ 0 during this period, equations (2) and (3) can be rewritten as equations (4) and (5) for the temporal variations of θ and ϕ (refer to Supplementary Section 1 for a detailed derivation). Here, f(θ) is the strength of the precessional torque induced by H_eff. The temporal variation of θ is described by equation (4), whereas that of ϕ is described by equation (5), which, strictly speaking, is for the y-axis component of ∂m/∂t. It is clear from equations (4) and (5) that the temporal variations of both θ and ϕ depend on the difference between the precessional torque f(θ) and the damping-like SOT c_J, their directions being opposite to each other. In the initial stage of the period, where J increases rapidly, c_J dominates over f(θ), making f(θ) − c_J < 0; therefore θ increases while ϕ decreases with time. Because f(θ) increases with increasing θ, f(θ) eventually starts to dominate over c_J, reversing the initial temporal variations. In this stage, ϕ returns to its original position of zero, but θ does not; a closer examination shows that the return path of θ is considerably small (refer to the magnified results in the inset of Fig. 2(a)). This is because the θ value at which f(θ) − c_J = 0 continuously increases with increasing J. This process repeats until θ reaches a critical angle (θ_c) at which f(θ) is maximum. On reaching θ_c, even a slight increase in θ decreases f(θ) considerably, resulting in the abrupt change in θ and ϕ observed in the second period. These temporal variations of θ and ϕ indicate that SHE-induced switching occurs when f(θ_c) − c_J < 0.
From this, it is a straightforward task to derive analytical expressions for J_c, which are the central results of this study and are given as equations (7)-(9) (refer to Supplementary Section 2 for a detailed derivation). The analytical expressions for J_c are rather general in the sense that they can be applied to all structures exhibiting SHE that comprise an NM and an FM with PMA. With H_K2 = 0, the J_c values from the present analytical expressions are identical to those from the equations reported in the literature 27. The sign of θ_SH differs depending on the type of NM; the symmetry of the torques, however, is the same whenever the sign of H_x changes together with it. To account for this feature, absolute values are used for H_x and θ_SH in equations (7)-(9). Some examples of NM materials that exhibit SHE include 3d-, 4d-, and 5d-transition elements (such as Pt, Ta, and W) and alloys (such as CuIr and CuBi) 18,19,23,25,26. The phase diagram in Fig. 1(b) shows four different types of magnetic states, among which the out-of-plane easy and easy-cone states are of practical importance and occur when the sign of H_K1^eff is positive. The window for the former is wider than that for the latter, which occurs only when H_K2/H_K1^eff < −1 33,37,38. It is worth noting that although the phase region for the out-of-plane easy state is predicted by the phenomenological model of equation (1), its existence is not fully confirmed by experimental evidence. In order to confirm the validity of the analytical expressions, the results for J_c obtained from equations (7)-(9) are compared with the numerical results from the macrospin simulation, as shown in Fig. 2(d), where J_c is plotted as a function of H_K2/H_K1^eff at various values of H_x/H_K1^eff ranging from 0.01 to 0.4 (here the H_K1^eff value is fixed at 5 kOe). Over wide ranges of H_K2/H_K1^eff and H_x/H_K1^eff, covering both the out-of-plane easy and easy-cone states, the agreement between the two sets of results is excellent, confirming the accuracy of the analytical expressions.

Discussion

The derived analytical expressions can be of considerable value in the design of SHE-based devices, and the results shown in Fig. 3 are one related example. Figures 3(a) and 3(b) show contour plots of the thermal stability parameter (Δ) and J_c as functions of H_K1^eff and H_K2 at a fixed H_x value of 0.2 kOe. The results for Δ are calculated using the relation Δ = [E_PMA(π/2) − E_PMA(θ_E)]/k_B T (k_B and T are the Boltzmann constant and the absolute temperature, respectively). It is expected that Δ and J_c scale in a similar manner, and this expectation agrees with the results in Figs 3(a) and 3(b), where both Δ and J_c increase with increasing H_K1^eff and H_K2. A closer examination, however, shows a different tendency for the two parameters. This can be observed clearly in Fig. 3(b), where some of the results for Δ (dotted contours) are superimposed on those for J_c (solid lines); the tendency for Δ clearly differs from that for J_c, indicating room for design optimization. Along the dotted lines in Fig. 3(b), where Δ is constant, J_c is minimized at slightly positive values of H_K2. It should also be noted that the SOT can be induced by the Rashba effect as well 17,20,[40][41][42][43]. It is, therefore, important to understand SOT switching induced by the Rashba effect as well as by SHE for an FM with H_K2; investigation in this direction, which may offer a complete understanding of the effects of H_K2 on SOT switching, is in progress.
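For concreteness, Δ can be evaluated numerically once a form of E_PMA is fixed. The sketch below assumes the PMA energy density e(θ) = −K_1^eff cos²θ − K_2 cos⁴θ, the form consistent with cos²θ_E = −H_K1^eff/H_K2, together with H_K1^eff = 2K_1^eff/M_S and H_K2 = 4K_2/M_S; the cell volume is an assumed example value, not one taken from the study.

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant (erg/K, CGS)

def thermal_stability(h_k1_eff, h_k2, m_s, volume, temperature=300.0):
    """Delta = [E_PMA(pi/2) - E_PMA(theta_E)] / (k_B T).
    Fields in Oe, m_s in emu/cm^3, volume in cm^3; the energy density
    e(theta) = -K1_eff cos^2(theta) - K2 cos^4(theta) is an assumed form."""
    k1_eff = h_k1_eff * m_s / 2.0            # from H_K1^eff = 2 K1^eff / M_S
    k2 = h_k2 * m_s / 4.0                    # from H_K2 = 4 K2 / M_S
    e_pma = lambda th: -(k1_eff * np.cos(th) ** 2 + k2 * np.cos(th) ** 4) * volume
    if h_k1_eff > 0 and h_k2 / h_k1_eff < -1:          # easy-cone state
        theta_e = np.arccos(np.sqrt(-h_k1_eff / h_k2))
    else:                                              # out-of-plane easy state
        theta_e = 0.0
    return (e_pma(np.pi / 2) - e_pma(theta_e)) / (K_B * temperature)

# Example: H_K1^eff = 5 kOe, H_K2 = -10 kOe, M_S = 1000 emu/cm^3,
# and an assumed 50 nm x 50 nm x 1 nm free layer:
v_f = 50e-7 * 50e-7 * 1e-7                   # cm^3
print(thermal_stability(5e3, -1e4, 1000.0, v_f))       # roughly 38
```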
The pulse duration and rise times can be important parameters affecting SOT switching. A numerical study on a simple PMA system shows that the critical current density for SOT switching increases with decreasing pulse duration of the in-plane current 28. A further study on the effects of the pulse rise time shows that the critical current density is independent of the pulse characteristic time, except when the characteristic time is very short (0.16 ns or less) (refer to Supplementary Section 3 for detailed results). In summary, analytical expressions for J_c induced by SHE were derived for magnetic structures with first- and second-order perpendicular magnetic anisotropy, and their accuracy was validated by comparing the analytical results with the macrospin simulation results. One example of the use of the analytical expressions in the design of SOT-MRAM is demonstrated in this study: even at an identical Δ value, a minimum in J_c is observed at slightly positive H_K2 values.

Methods

For the macrospin simulation, the modified Landau–Lifshitz–Gilbert equation (refer to equations (2) and (3)) including a damping-like SOT was numerically solved using the fourth-order Runge–Kutta method. The following values were used in the simulation: γ = 1.76 × 10⁷ Oe⁻¹ s⁻¹; α = 0.1; θ_SH = 0.3; M_S = 1000 emu/cm³. The number of time steps was 5000, and the magnitude of the step-by-step displacement of m was kept in the range 0.001-0.01 by varying the length of the time step.
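The Methods translate into a compact macrospin integrator. The sketch below is a minimal illustration, not the study's code: it uses the explicit (Landau-Lifshitz) form of the LLG equation with a damping-like SOT Γ = −γ c_J m × (m × σ), with σ along +y for a charge current along x, and the parameter values quoted above; the constant c_J (in Oe), the fixed time step, and the initial state are assumptions (the study applied a shaped pulse and controlled the step size adaptively).

```python
import numpy as np

GAMMA = 1.76e7   # gyromagnetic ratio (Oe^-1 s^-1)
ALPHA = 0.1      # Gilbert damping constant

def h_eff(m, h_k1_eff, h_k2, h_x):
    """Effective field (Oe): in-plane bias H_x plus the anisotropy field
    H_K1^eff m_z + H_K2 m_z^3 from e = -K1^eff m_z^2 - K2 m_z^4."""
    return np.array([h_x, 0.0, h_k1_eff * m[2] + h_k2 * m[2] ** 3])

def llg_rhs(m, c_j, h_k1_eff, h_k2, h_x):
    """Explicit LLG with a damping-like SOT; c_j corresponds to
    (hbar/2e) * theta_SH * J / (M_S t_F), expressed here in Oe."""
    sigma = np.array([0.0, 1.0, 0.0])        # SHE spin polarisation (+y)
    h = h_eff(m, h_k1_eff, h_k2, h_x)
    sot = -GAMMA * c_j * np.cross(m, np.cross(m, sigma))
    rhs = (-GAMMA * np.cross(m, h)
           - ALPHA * GAMMA * np.cross(m, np.cross(m, h))
           + sot + ALPHA * np.cross(m, sot))
    return rhs / (1.0 + ALPHA ** 2)

def run(m0, c_j, h_k1_eff=5e3, h_k2=-1e4, h_x=50.0, dt=1e-12, n_steps=5000):
    """Fourth-order Runge-Kutta integration with renormalisation of |m|."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_steps):
        k1 = llg_rhs(m, c_j, h_k1_eff, h_k2, h_x)
        k2 = llg_rhs(m + 0.5 * dt * k1, c_j, h_k1_eff, h_k2, h_x)
        k3 = llg_rhs(m + 0.5 * dt * k2, c_j, h_k1_eff, h_k2, h_x)
        k4 = llg_rhs(m + dt * k3, c_j, h_k1_eff, h_k2, h_x)
        m = m + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        m /= np.linalg.norm(m)               # keep |m| = 1
    return m

# Start near the easy-cone equilibrium (theta_E = 45 deg) and apply the SOT;
# m_z changes sign once c_j exceeds the critical strength.
m_final = run([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)], c_j=2.0e3)
print(m_final)
```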
Experimental Study of Hardened Young's Modulus for 3D Printed Mortar

Few studies have focused on determining the Young's modulus of 3D printed structures. This study presents the results of experimental investigations of the Young's modulus of a 3D printed mortar. Specimens were prepared in four different ways to investigate the possible application of different methods to 3D printed structures. The study determines the influence of the number of layers on the mechanical properties of printed samples. The results show a strong statistical correlation between the number of layers and the value of Young's modulus. The reductions in compressive strength and Young's modulus compared to the standard cylindrical sample were up to 43.1% and 19.8%, respectively. The results of the study shed light on the differences between the current standard specimen used for determination of Young's modulus and specimens prepared by 3D printing. The community should discuss the problem of standardization of test methods in view of the visible differences between different types of specimens.

Introduction

The 3D printing of cementitious materials is one of the fastest growing branches of industry in recent years. A significant number of studies have been conducted on the properties of the fresh mix used in the printing process, with the focus on being able to print bigger structures in a shorter time [1][2][3][4][5][6][7][8][9]. However, the long-term material properties of the hardened cementitious composite should also be considered [10][11][12][13][14][15]. Due to the multilayer character of the printed structure, it is necessary to determine properties such as compressive strength, flexural strength, or modulus of elasticity to fully understand the behaviour of the structure as a whole. Those parameters are essential for proper structural design with 3D printing. The majority of studies on hardened properties have focused on compressive strength and flexural strength with regard to the anisotropic behaviour of printed elements; the anisotropy is caused by the layered structure of the elements [16][17][18][19][20][21]. Additionally, a significant number of published research articles have determined the properties on standard samples, proving the negative impact of the printing process on the final values of compressive strength and flexural strength [17,18,[22][23][24][25][26]. However, several studies indicate that printed specimens can have higher strength than standard ones [27,28]. Besides mechanical strength, key characteristics in structural design are the modulus of elasticity and the Poisson ratio. A handful of studies have determined the stiffness of the fresh mix, which is directly correlated to buildability, a major property in 3D printing [2,[29][30][31][32][33][34][35][36][37]. Unfortunately, there are not many studies that take up the topic of the modulus of elasticity and Poisson ratio in 3D printed, multilayer hardened structures. Based on an extensive search of the Scopus and Web of Science databases (keywords: 3D concrete, 3D mortar, Young's modulus, elastic modulus, and hardened property), the most relevant studies are summarized below. Zhang et al. [21] conducted a study of the modulus of elasticity of hardened concrete by cutting prism specimens (100 mm × 100 mm × 300 mm) from a multilayer printed structure. Due to the low height of the prepared print, the load during the test was applied to the specimens in the printing direction.
The compressive strength of the samples tested in the study was 35.12 MPa, the modulus of elasticity E = 36.6 GPa, and the Poisson ratio 0.28. The mix prepared for the study had a sand (<1 mm) to cement ratio of S/C = 1.2 and W/C = 0.35; the authors added 2% of nanoclay and 2% of silica fume. Unfortunately, the study does not present results for standard samples; therefore, it is impossible to determine the impact of 3D printing. Van der Heever et al. [38] conducted a study of the modulus of elasticity for samples cut out of a printed structure. Cylindrical samples (d = 28 mm, h = 60 mm) were taken perpendicular and longitudinal to the layer orientation. The height of a layer in the study was assumed to be 10 mm. The mix had w/c = 0.46 (w/b = 0.32) and was made with CEM II 52.5N cement, a maximum aggregate size of 4.75 mm, and additional polypropylene fibers (l = 6 mm). The obtained Young's modulus values were similar regardless of the specimen orientation (perpendicular E_mod = 21.6 GPa, CoV = 6.2%; longitudinal E_mod = 21.9 GPa, CoV = 4.8%). The article does not present a comparison to standard samples; the authors only refer to theoretical values of Young's modulus based on the compressive strength of the printed specimens. The difference, according to the authors, reached 8 GPa and was seen as a result of differences in porosity [39,40]. Wu et al. [41] used nanoindentation at the micro scale and representative volume element methods (Monte Carlo), as well as results found in the literature, to obtain Young's modulus values for a 3D printed structure. In their theoretical calculations the authors obtained a Young's modulus of 29.17 GPa (Poisson's ratio of 0.2), while the initial results taken from the literature showed E_mod = 30 GPa with a Poisson's ratio of 0.22 [42]. It needs to be said that the results were only theoretical and not confirmed experimentally. Individual results of the linear elastic constitutive matrix show large discrepancies, particularly in the mean values of the components of the effective elasticity matrix. Additionally, the simulation omitted the interlayer transition zone, which can play a major role in the change of mechanical characteristics of 3D printed concrete [43][44][45][46]. Zahabizadeh et al. [16] designed two mixes with CEM I 42.5 (w/b = 0.31), an aggregate of up to 1 mm, and the addition of fly ash (FA); the mixes had different ratios of cement and FA. The nominal compressive strengths of the tested mixes were 58.0 MPa and 75.6 MPa. The authors determined the modulus of elasticity on molded prism samples (50 mm × 50 mm × 100 mm) and on samples cut out from a printed structure; both specimen types were cut to a size of 40 mm × 40 mm × 80 mm. The Young's modulus was tested in two directions relative to the layer orientation: perpendicular and longitudinal. The obtained results for the studied mixes ranged from 27 GPa to 36 GPa. The biggest difference in Young's modulus between molded and printed samples was 8%, while for compressive strength the discrepancy increased to 18%. The values of Young's modulus and compressive strength were 8% and 18% higher in the longitudinal direction, respectively. Feng et al. [47] analyzed the Young's modulus of powder bed fusion prints. In this method the printed structure has a support that allows for better compaction between the layers. The determination was performed for cubic specimens with sides of 70.7 mm and 50 mm. The height of a single layer was 0.0875 mm.
The Young's modulus determined on the cubic 70.7 mm specimens was tested in the longitudinal and lateral directions; the results were 3.6 GPa and 1.9 GPa, respectively. To summarize, only several articles take on the topic of the determination of Young's modulus in 3D printed structures. Moreover, none of the above-mentioned studies directly refer to the influence of layer number on the results of Young's modulus. Only Van der Heever et al. [38] performed the tests on cylindrical samples, where the stress distribution is easy to determine and can be compared to samples made in accordance with European standards [48]. This approach allows one to obtain values that can be used in real-life structural design. Other researchers chose prism specimens [16,21,47], which cannot be directly correlated to samples specified by the standards. In some cases, the specimens were not even compared to molded samples [21,47]. The aim of this study is to determine the influence of the number of layers and specimen size on the values of Young's modulus in printed structures. The research determines the relation between the specimen size and method of preparation and the values of Young's modulus. The values are compared to standard samples, which will allow one to use them for the purpose of structural design.

Materials

The mix used in the study was previously presented in [3,32,49]. The water/binder ratio was 0.3. The total binder amount is 829 kg/m³. The binder in the mix consists of 70% cement (CEM I 52.5R), 20% fly ash and 10% silica fume. The fly ash used in the study was obtained from a local coal power plant. The aggregate was a fine natural sand of 0-2 mm. A polycarboxylate powder water-reducing admixture was used to adjust the rheological properties of the mix. The chemical compositions of the materials used are shown in Table 1. The particle size distribution of the materials is presented in Figure 1; the curves for cement, silica fume and fly ash were obtained by the laser diffraction method, and for the fine aggregate by the sieve method. The mix composition is given in Table 2.
Experimental Procedure

The experimental procedure was designed to compare the results of Young's modulus of samples prepared in different ways. The study also tries to determine whether the size of the printed specimen, which results in a different number of layers and layer locations, influences this material property. The study determines the correlation of layer number with the mechanical performance of 3D printed elements.

Mixing Procedure

A standard 110 L planetary mixer (Controls, Milan, Italy) was used in this study. All dry materials, including cement, mineral additives, sand and the powder water-reducing admixture (PCA), were initially mixed for 5 min. Then, three quarters of the water was added to the mixer. The mix was mixed for 5 min, after which the homogeneity of the mix was evaluated. The remaining water was added to improve the workability of the mix. Mix preparation and printing were carried out in a laboratory at a temperature of 20 °C (±2 °C) and a relative humidity of RH = 55% (±5%).

Fresh Properties

For the purpose of this study, a constant slump flow of 160 mm ± 10 mm was assumed. The slump was determined 15 min after adding the water, in accordance with the standard [50]. Similar assumptions for the suitability of mixes for 3D printing were proposed in other studies [3,31,[51][52][53][54]. The mix was then pumped to obtain material for the determination of buildability. The buildability was determined in an unconfined uniaxial compression test; similar tests can be found in [17,30,55,56]. The test uses cylindrical Φ60 mm × 35 mm samples and allows one to obtain the stress-strain relationship for the examined mix. The test results determine the green strength at failure and the Young's modulus of the mix. This approach allows one to find the mix load-bearing and deformation behaviour after deposition. The test was performed at a constant displacement rate of 30 mm/min, between 15 and 30 min after adding water to the dry ingredients. The specimens were formed immediately before testing and compacted manually. The test was performed three times. The deformation of the specimen during the test was recorded by LVDT (Linear Variable Differential Transformer) displacement transducers (0.01 mm accuracy) (HBM, Darmstadt, Germany) connected to the HBM QuantumX strain gauge bridge (MX840A, HBM, Darmstadt, Germany). A detailed description of the testing bench can be found in [23]. The specimen during the test is presented in Figure 2.
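The green Young's modulus can be extracted from the recorded stress-strain curve as its initial slope, following the slope-of-σ(ε) approach referenced later in the text [59]. The sketch below is illustrative only: the fitting window and the sample readings are assumptions, not the measured data.

```python
import numpy as np

def green_modulus(strain, stress_kpa, fit_window=(0.0, 0.05)):
    """Green Young's modulus (kPa) as the slope of the initial, roughly
    linear part of sigma(eps); fit_window is the strain range used for
    the linear fit (an assumption that depends on the measured curve)."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress_kpa, dtype=float)
    mask = (strain >= fit_window[0]) & (strain <= fit_window[1])
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Hypothetical LVDT-derived readings (illustrative values only):
eps = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05]
sig = [0.0, 3.0, 6.1, 9.0, 11.8, 14.5]      # kPa
print(green_modulus(eps, sig))              # about 290 kPa
```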
The mix was then initially printed to visually evaluate the path quality. The quality of the path was evaluated on the following points:
• The size of the path should be constant;
• Global deformations of the path are unacceptable;
• The printed layer must be free of surface defects and cracks (only small, minor cracks and defects are acceptable).
By meeting the mentioned criteria, the mix was accepted for the preparation of specimens.

Specimen Preparation

Four different methods of preparing samples were considered in this study. Typical cylindrical specimens prepared for ordinary concrete do not reflect the way printed concrete works. Therefore, the chosen methods not only address the material properties but also various ways of preparing the samples, which could have a potential application in in-situ testing of printed structures. The first type was the reference, a standard Φ15 cm × 30 cm cylindrical specimen that was mold-casted. The samples were prepared in a conventional way in accordance with EN 12390-3 [57] and EN 12390-13 [48]. The specimens were left for 24 h in the laboratory before demolding and further curing. The 3D printed specimens were prepared using the additive manufacturing extrusion method [6,58]. For this purpose, a gantry printer (3DoF) with a concrete rotor-stator pump was used; the system is controlled by G-code. For all printed specimens, a constant deposition rate of 0.75 L/min was assumed. Depending on the type of specimen described below, the printing speed and pump output were adjusted. The printing setup is presented in Figure 3. The second type of specimens was prepared by printing within a typical Φ15 cm × 30 cm mold. The printing path was generated based on a spiral to ensure proper infill of the mold.
The pump parameters were set to obtain a full cross-section of the specimen. The evaluation of the proper pump output and the sample during printing within the mold are presented in Figure 4. The samples were printed directly from a Φ25 mm hose, rigidly fixed to the printer. Similar to the standard samples, the specimens were left for 24 h in the laboratory before demolding and further curing. The third type of specimens was created by printing columns (as presented in Figure 5). The printing was performed using a Φ25 mm nozzle with a flat end, to provide as even a surface of the layer as possible. The columns had an outer diameter of 150 ± 5 mm. The outer sides of the specimens were not trowelled. The loose, excessive chunks of the fresh mix were gently removed from the specimen immediately after printing. The samples were then sprayed with a water mist and covered with a PE film for 24 h before being cured in water for the remaining period. The fourth type of specimens was prepared by cutting them from a bigger 3D printed multi-layer block. The initial block was printed using a Φ25 mm nozzle with parallel layers. The printing speed was adjusted to obtain good visual vertical adhesion of the layers. The specimens were cut in four different sizes to determine the influence of the number of layers on the mechanical characteristics. The diameters of the cut-out samples chosen for the study were 44 mm, 74 mm, 99 mm and 144 mm. The cutting was made using a typical diamond saw for concrete and stone. The sizes were chosen based on available core drill sizes. The cut-out samples were then cut to reach a length-to-diameter ratio of l/d = 2 ± 0.1.
The schematics of the drilled samples are presented in Figure 6. Figure 6. Schematics of samples: (a) core drilled from the printed structure, plan with layer orientation; (b) core drilled from the printed structure, example of cross-section; and (c) freely 3D printed columns and specimens printed into a mold.

The notation of all samples prepared in the study is as follows:
• STDR - standard mold-casted specimens Φ15 cm × 30 cm;
• 3DP_M - specimens 3D printed into a Φ15 cm × 30 cm mold;
• 3DP_F - freely 3D printed columns, approx. Φ15 cm × 30 cm;
• 3DP_C_X - 3D printed specimens cut from a block, where X stands for the diameter of the sample in mm.

All specimens, after the initial 24 h curing time, were cured in water at 20 ± 2 °C for an additional 26 days before the tests. The samples were then taken out and surface-dried to prepare them for attaching the strain gauges (Techno-Mechanik, Gdańsk, Poland). The specimens were stored for the last 24 h in laboratory conditions; the total curing took 28 days. Notations and sample characteristics are shown in Table 3.

Young's Modulus and Compressive Strength

For the purpose of compressive strength and Young's modulus determination, six samples were prepared for each specimen type. The samples were prepared and tested in accordance with [48]. The upper and bottom parts of the specimens were either cut off to obtain even and parallel surfaces or, if possible, capped with a high-strength fast-setting mix.
The Young's modulus test was performed in accordance with [48]. Each Φ15 cm × 30 cm specimen was prepared by symmetrically attaching three vertical and two horizontal strain gauges with a base of 75 mm and a k-gauge factor of 2.15. For the core-drilled specimens, smaller strain gauges with bases of 20 mm were installed. Examples of samples with installed strain gauges are presented in Figure 7. The strain gauges were connected in a half-bridge, and the measurements were recorded by the HBM UPM 60 device (HBM, Darmstadt, Germany).

Unconfined Uniaxial Compression Test

The unconfined uniaxial compression test was used to determine the buildability of the mix. The mixes were tested between 15 and 30 min after adding water to the mix, which corresponds to the time of printing. Figure 8 presents the stress-strain relation σ(ε); the dots represent each individual measurement, while the line corresponds to the mean values between two adjacent results. Table 4 presents the mean values obtained in the study: the green strength and the Young's modulus value, calculated as the slope of σ(ε) (see [59]). The mixture during printing can transfer loads between 16.15 kPa and 21.03 kPa. In addition, its stiffness ranges between 263 kPa and 359.32 kPa. Similar results were obtained by Esposito et al. [52], where the compressive strength (green strength) at 15 to 30 min was in the range of 11.64 kPa to 26.04 kPa, depending on the type of mixture and test method; the Young's modulus in their study was between 252 kPa and 488 kPa. Wolfs et al. [60] obtained strength in the range of 6.99 kPa to 10.87 kPa and Young's modulus in the range of 54.42 kPa to 98.52 kPa. Ding et al. [34] obtained compressive strength in the range of 9.51 kPa to 45 kPa and Young's modulus between 29 kPa and 280 kPa. In summary, the values obtained in this study can be considered correct and meeting the requirements for 3D printed mixes; the results are corroborated by other studies [3,23,29,34,52,59,60].
Young's Modulus and Compressive Strength

The determination of the mechanical parameters of the studied samples is presented in Table 5. Mean values of the compressive strength f_cm, Young's modulus E_cm and Poisson's ratio ϑ_cm, with coefficients of variation CoV, are given. Additionally, a percentage relation to the standard specimens STDR is given, calculated based on equation (1), where:
• X_change - percentage change (E_change for Young's modulus, f_change for compressive strength, ϑ_change for Poisson's ratio);
• X_st - mean value obtained for the standard specimens;
• X_cm - mean value obtained for a specific specimen type.

The failure mechanism of the studied samples is presented in Figure 9. A comparison of the results is presented in Figures 10-12. In the case of the compressive strength, the highest value was achieved by the standard samples (STDR), which appears to be a proper result considering other publications on 3D printing where standard and printed samples were compared [18,22,24,25,61]. The reduction in strength for printed samples ranged from 12% to 43.1%. The lowest compressive strength was achieved by the 3DP_F specimens (printed cylindrical specimens without any lateral support); for these specimens, the reduction in strength relative to the standard specimen was the greatest, as much as 43.1%. This can also be considered correct, because these samples were printed without any side support, resulting in worse compaction in the interlayer zone [19,39,40]. Other samples had lateral support either in the form of a mold (3DP_M samples) or in the form of surrounding layers of printed material providing lateral elastic support (samples 3DP_C_44 to C_144). The strength reduction for samples other than 3DP_F is between 12% and 29.8%. For Young's modulus, the difference between the results is less pronounced: the difference between the freely printed samples (3DP_F) and the STDR samples reached 19.8%. The highest Young's modulus was observed for the STDR samples (E = 39.93 GPa). No clear correlation between the method of preparing the samples and Poisson's ratio was observed. The values for all samples were between 0.17 and 0.21, corresponding to a −13.3% to +7.5% change compared to the STDR samples. As the CoV for all specimens is rather low, the values obtained in this study concur with the EN 1992-1-1 [62] standard, which assumes a Poisson's ratio of ϑ_cm = 0.2.
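A small sketch of the percentage relation of equation (1), assuming the usual signed form X_change = (X_cm − X_st)/X_st × 100% (consistent with the −13.3% to +7.5% Poisson changes quoted above); the 3DP_F modulus below is back-calculated from the reported ~19.8% reduction and serves only as an illustration.

```python
def percentage_change(x_cm, x_st):
    """Equation (1) in the assumed form: percentage change of a printed-specimen
    property relative to the standard (STDR) specimen; negative = reduction."""
    return (x_cm - x_st) / x_st * 100.0

# STDR Young's modulus reported in the study is 39.93 GPa; the 3DP_F value
# here is illustrative, back-calculated from the ~19.8% reduction.
print(percentage_change(32.0, 39.93))   # about -19.9
```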
A linear regression was applied to analyze the results of this study. The analysis was performed for the compressive strength and Young's modulus. The linear regression was calculated for two groups of samples:
• f_cm,all and E_cm,all - all cut out samples 3DP_C_X and 3DP_F;
• f_cm,core and E_cm,core - only cut out samples, excluding 3DP_F.
Those two groups were chosen to determine if the sample preparation method (freely printed column or core drilled samples) for 3D printed concrete influences the mechanical properties. Figure 13 also presents the value of R² (coefficient of determination) as well as the standard deviation of the results.

Figure 13a presents the analysis of the results for the compressive strength obtained in the study. The compressive strength of the samples decreases with the increase of the number of layers. As seen in the linear regression for all samples (f_cm,all), the coefficient of determination is R² = 0.83. This means that the correlation of the results is not satisfactory. The main reason behind it is the difference between the results of the biggest core drilled samples (3DP_C_144) and the freely printed samples (3DP_F). The latter have a significantly lower mean compressive strength, which results in the decrement of the R² value. Freely printed samples do not have any lateral support, whereas the core drilled samples were initially restricted by surrounding layers. In the case of the second group of samples, where 3DP_F was excluded, the value of R² was 0.89, which is much closer to the value considered a strong correlation of the results.

Figure 13b presents the results of the analysis of Young's modulus. As in the analysis of the compressive strength, the Young's modulus decreases with the increase of the number of layers. This confirms the assumption that mechanical material properties will similarly change with the change of layer number.
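A minimal sketch of the regression analysis behind Figure 13, assuming hypothetical (number of layers, strength) pairs rather than the measured values:

```python
import numpy as np

# Hypothetical data: number of layers vs. compressive strength [MPa];
# the measured pairs are those plotted in Figure 13a.
layers   = np.array([44, 69, 94, 119, 144])
strength = np.array([52.0, 49.5, 47.2, 44.8, 41.0])

slope, intercept = np.polyfit(layers, strength, 1)
predicted = slope * layers + intercept

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((strength - predicted) ** 2)
ss_tot = np.sum((strength - strength.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope = {slope:.3f} MPa/layer, R^2 = {r2:.2f}")
```

The same computation run on the two sample groups (with and without 3DP_F) reproduces the kind of R² contrast discussed above.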
Looking at the results for the biggest printed samples (3DP_F and 3DP_C_144), the differences in the values of Young's modulus were insignificant and are within the values of CoV. The value of the coefficient of determination for all printed samples (E_cm,all) was R² = 0.92, while for core drilled samples only (E_cm,core) it was R² = 0.88. Both values prove the good correlation of the results. It is worth noticing that the percentage differences in the Young's modulus are close to the values of CoV, which proves that the differences are insignificant.

Conclusions

The paper presents the influence of the number of layers and the preparation method on the values of Young's modulus of 3D printed concrete. The results obtained for 3D printed samples were compared to standard cylindrical samples. The study extends the knowledge on the determination of Young's modulus for 3D printed structures. The following conclusions have been drawn:
• The bigger the specimen, the lower the mechanical performance of cut-out samples.
• The higher the number of layers, the lower the value of compressive strength of printed samples. The strength reduction compared to the standard cylindrical sample was the highest for freely printed columns (approximately 43%). The strength reduction was lower for samples printed into a mold or cut out from a bigger printed block.
• The higher the number of layers, the lower the value of Young's modulus of printed samples. The difference between the biggest printed sample and the standard sample reached 20%.
• Samples printed into a mold or cut out from a bigger printed block had better mechanical performance than freely printed columns. This is caused by lateral restriction of the concrete due to either the mold itself or the surrounding layers.
• The value of Poisson's ratio for printed samples in this study differed by ±13% from the standard samples.

The article presents different ways of preparing the specimens and compares them. None of the printed specimens came close to the values obtained for a standard specimen. This means that the approach to preparing samples for the evaluation of 3D printed elements should be reconsidered. The community needs to determine a single, standardized method for determining the material parameters of 3D printed concrete for real-life structural applications. Each of the studied methods of preparing the samples has its disadvantages. Samples printed within a formwork require a hose extension and can be bothersome. The samples freely printed as columns have variations of dimensions and do not exactly reflect the deformation that would occur when printing higher structures. The samples cut from a bigger printed block can have changed properties due to the cutting itself. The results of this study have shown a significant reduction in compressive strength and Young's modulus of 3D printed structures with regard to standard samples. This shows the importance of including reduction factors in design protocols of 3D printed structural elements. It is necessary to include not only the reduction in compressive or flexural strengths that can be found in other studies [18,22,61] but also the reduced values of the Young's modulus.
Editorial

Many decision-making situations are characterized by an overwhelming amount of information, complex dependencies between factors, and multiple decision criteria that stand in a trade-off relationship to each other, requiring weighting or preferences to be resolved to favor a certain alternative over another. Individuals and organizations are quickly overwhelmed by the plethora of decision options and alternatives. In such situations, decision makers need to systematically consider arguments in favor of or against a certain option under consideration of different assumptions, perspectives, and preferences in order to reach an informed, justified, and balanced decision. Automatic decision-making support by machines will increasingly play a role in such contexts, since humans have difficulties in taking into account all available information and in understanding the impact of decisions at different levels and for different stakeholders. In sum, we need machines that can provide rational argumentation support.

Such rational argumentation machines are still rare, even among those that nowadays are claimed to be "explainable". The reason is that machines lack domain knowledge and knowledge about causal relationships, as well as an understanding of how premises and assumptions relate systematically to conclusions, a sine qua non for argumentative decision support. Without an understanding of the relation between premises and conclusions, without the ability to compare and evaluate different arguments, without the ability to understand how to resolve trade-offs and the implications thereof, without the ability to provide counterarguments that attack an inferential step, or without the ability to challenge another one's reasoning, there cannot be rational decision support by machines except in the most low-level applications, such as recognizing, e.g., tumors in diagnostic images. Without argumentative abilities such as described above, there is likely to be no rational level at which machines and humans can cooperate in terms of decision making. This leaves us with three scenarios:

Machine decision making: In this scenario, machines are the only decision makers.
This scenario raises all the well-discussed ethical and practical questions about responsibility, accountability, transparency, etc. However, this model may be justified in situations where timeliness or cost-effectiveness are decisive, and where the effects or repercussions of the decisions are limited or controllable.

Human-confirmed machine decision making: Here, machines take decisions, but humans have to confirm or reject them. Humans may or may not be able to understand the reasons or relationships that lead a machine to draw a certain conclusion and thus may or may not be able to meaningfully intervene. Even if they are able to inspect the model of the machine, they may or may not be able to relate the decision to their own background knowledge or to decision-making processes that rely on domain knowledge and an understanding of the logical and causal relationships between the key factors.

Humans as the ultimate and sole decision makers: Here, machines merely provide weights or probabilities for different decision alternatives. The actual decision making is left to the human, who needs to construct the arguments supporting the decision for or against a certain alternative and needs to perform the full rationalization of the decision alone.

Of course, this is not a clear-cut trichotomy, but more of a continuum. Nevertheless, all scenarios are characterized by the fact that there is no joint decision making in the sense that both parties involved can challenge the arguments of the other party and propose alternative views, perspectives, assumptions, implications, or possibilities for resolving trade-offs. Machines that merely "decide" on the basis of patterns found in data and that are detached from domain knowledge or the decision-making context cannot relate these patterns to causal relationships between variables or make explicit how a conclusion follows from assumptions; in fact, they will fail to empower humans to make decisions.

This special issue of the Datenbank-Spektrum features contributions from projects that are funded within the DFG priority program on Robust Argumentation Machines (RATIO). Started in 2017, the priority program seeks to foster a paradigm change in which argumentative structures are considered core information units manipulated by machines. Hereby, RATIO aims at developing argumentative machines that can analyze, aggregate, and summarize large amounts of arguments exchanged by humans on the Web, but also at rational machines that can bring in new arguments relying on deep knowledge about a domain and a deep understanding of how facts can be used in premises to yield meaningful conclusions, and that can engage in joint decision making with humans at a rational level. To induce this paradigm change, the priority program brings together the following computer science sub-disciplines to jointly investigate new methods supporting the development of rational machines: Knowledge Representation and Reasoning, Semantic Web, Information Retrieval, Computational Linguistics, and Human-Computer Interaction (HCI). The research program comprises the development of methods that can extract, compare, and summarize arguments extracted from unstructured documents, as well as the development of new semantic models, formal representation languages, reasoning systems, and ontologies for the representation of arguments in relation to domain knowledge.
The program also supports the development of new search engines and information retrieval systems that index and retrieve arguments as the main unit of information and that can find all pro- and con-arguments for a given topic. In addition, the program aims at developing new methods that can enrich, extend, and complete arguments or even assess their plausibility using new inference and argumentation evaluation and validation methods. Finally, the program also investigates new HCI paradigms by which users can explore and interact with arguments to support rational decision making as well as cooperation between humans and machines along the lines sketched above. In the following, we give a brief summary of these papers.

Argument Mining

In their paper The ReCAP Project: Similarity Methods for Finding Arguments and Argument Graphs, R. Bergmann et al. present an approach to index arguments via a graph in order to support retrieval of relevant premises given a certain query topic (conclusion). In addition, they present an approach to use Case-Based Reasoning methods to retrieve similar arguments from an argument graph, relying on similarities between nodes using embeddings.

The paper Relational and Fine-Grained Argument Mining by R. Trautmann et al. presents an NLP approach to identify argumentative units in textual discourse. They provide an overview of different argument mining tasks and present their results on sentence-level and token-level argument identification.

The paper The Road Map to FAME: A Framework for Mining and Formal Evaluation of Arguments by R. Baumann et al. attempts to bridge between (a) NLP approaches to argument mining, which typically do not employ formal approaches to reasoning with arguments, and (b) approaches in the tradition of abstract argumentation frameworks, which do not represent the content or structure of arguments. The authors propose to use a controlled language as a way to represent natural language arguments while being translatable into first-order logic, thus supporting reasoning.

The paper ArgumenText: Argument Classification and Clustering in a Generalized Search Scenario by J. Daxenberger et al. presents an approach to extract arguments from heterogeneous textual sources, including web crawls of news data and customer reviews. They present an approach supporting the clustering of arguments. The main application proposed is supporting decision making in innovation management and the analysis of customer feedback.

The paper Reconstructing Arguments from Noisy Text: The Brexit Referendum on Twitter by N. Dykes et al. proposes an approach to extract arguments from text and formalize them in a co-algebraic logical framework. The identification of arguments relies on the identification of recurring linguistic argumentation patterns and represents a high-precision approach to identifying arguments in a text corpus.

The paper Explaining Arguments with Background Knowledge - Towards Knowledge-based Argumentation Analysis by M. Becker et al. discusses the problem that many arguments appearing in textual sources are incomplete in the sense that premises may be omitted. The paper discusses how to reconstruct such enthymemes by leveraging external knowledge resources such as ConceptNet, WordNet, or DBpedia. Further, it discusses how state-of-the-art, transformer-based language models can be used to infer relations between arguments. The main task considered is inferring and classifying argumentative relations such as attack and support.
The paper shows that the performance on the task is positively affected by the inclusion of common sense or background knowledge.

The paper Analysis of Political Debates through Newspaper Reports: Methods and Outcomes by G. Lapesa et al. proposes a hybrid approach to analyze political debates carried out in the news. The methods are applied to the analysis of the debates around immigration in Germany in the year 2015. The hybrid methodology consists of a combination of discourse network analysis and NLP methods, which partially automatize some processes of this methodology. The authors present and discuss their first results on automatic claim detection.

Interacting with Argumentation Machines

The paper Answering Comparative Questions with Arguments by A. Bondarenko et al. discusses an approach that allows users to submit comparative queries to search engines and to obtain results in which the entities in question are compared along key aspects. The authors describe their work on a prototype that, given two entities, can extract and rank sentences in which the entities are compared. They further discuss work on identifying comparative questions using a machine learning approach as a first step towards allowing users to directly pose comparative questions to a search engine.

The paper How to Win Arguments - Empowering Virtual Agents to Improve their Persuasiveness by K. Weber et al. argues that the way arguments are framed using non-verbal elements such as body language, gazing behavior, and emotions can have a significant effect on the level of persuasiveness and thus on the audience's stance on the topic. The paper presents a reinforcement-based approach by which two policies can be learned, one that optimizes the strategic aspects of an argument and a second that optimizes the emotional flavor of an argument.

Opening the ML Blackbox

The paper Towards Understanding and Arguing with Machine Learning: Recent Progress by X. Shao et al. proposes new machine learning approaches that support users in understanding and arguing with classifiers, thus allowing users to open the machine learning black box. The authors develop a novel tractable deep probabilistic classifier which is a conditional variant of sum-product networks (SPNs). These CSPNs combine simple models in a hierarchical fashion in order to create a deep representation that can model multivariate and mixed conditional distributions while maintaining tractability. An approach to interactively arguing with classifiers is also presented.

In their paper Leveraging Arguments in User Reviews for Generating and Explaining Recommendations, T. Donkers and J. Ziegler aim at opening up black-box models for recommendation algorithms by including explanations in the form of arguments, highlighting why a certain item is recommended to a user. The authors propose a novel architecture based on Aspect-based Transparent Memories (ATMs). The architecture can memorize user opinions on relevant items as mentioned in raw texts to derive multifaceted user and item representations. Experiments on three datasets show that the proposed approach outperforms existing methods such as NARRE.

Data Management for Future Hardware

This special issue of the "Datenbank-Spektrum" is dedicated to the research achieved by the DFG Priority Programme "Scalable Data Management on Future Hardware".
We invite submissions on original research as well as overview articles addressing the challenges and opportunities of modern and future hardware for data management, such as many-core processors, co-processing units, and new memory and network technologies.
Rosiglitazone Alleviates Contrast-Induced Acute Kidney Injury in Rats via the PPARγ/NLRP3 Signaling Pathway

Background. This study investigated the effect and mechanism of rosiglitazone in a rat model of contrast-induced acute kidney injury (CI-AKI). Materials and Methods. The CI-AKI rat model was established from Sprague Dawley rats by furosemide injection (10 ml/kg) into the caudal vein followed by iohexol (11.7 ml/kg). The rats were randomly allocated into control, model, rosiglitazone, and T0070907 groups. Blood samples were collected from the abdominal aorta. Serum creatinine, urea nitrogen, MDA, and SOD contents were detected by biochemical analysis. TNF-α and IL-10 expression was detected by ELISA. Urine creatinine and urine protein were measured by 24-h urine biochemistry testing. Cell pathology and apoptosis were detected by H&E and TUNEL staining, respectively. PPARγ, NLRP3, eNOS, and caspase-3 mRNA expression were detected by qPCR. Caspase-3 and NLRP3 protein expression were detected by immunohistochemistry. Results. The CI-AKI rat model was successfully established: compared with the control group, serum creatinine, urea nitrogen, MDA, TNF-α, and IL-10, as well as urine creatinine and urine protein levels, were significantly increased in the model group, indicating AKI, and significantly decreased with rosiglitazone treatment, indicating recovery from injury, while opposite results were obtained with SOD. The apoptosis rate was significantly increased in the model group and significantly decreased with rosiglitazone treatment. NLRP3 and eNOS increased significantly in the model group and decreased significantly with rosiglitazone treatment, while opposite results were obtained with PPARγ. NLRP3 and caspase-3 protein expression was significantly increased in the model group and significantly decreased with rosiglitazone treatment. Conclusion. Rosiglitazone could alleviate acute renal injury in the CI-AKI rat model by regulating the PPARγ/NLRP3 signaling pathway and should be further investigated as a potential treatment in clinical studies.

Introduction

In recent years, with the wide application of interventional therapy using multislice spiral computed tomography (CT) and new three-dimensional reconstruction technology, iodine-containing contrast agents are used more frequently and are broadly applied in disease diagnosis and treatment. Contrast-induced acute kidney injury (CI-AKI) remains the third most common cause of acute renal insufficiency [1]. The occurrence of CI-AKI seriously affects the rehabilitation of patients and is considered one of the important complications after interventional therapy; it has been reported to be associated with acute hypotension, age, diabetes, dehydration, and other factors [2-4]. At present, there is still a lack of effective measures to reverse the injury process of CI-AKI. Therefore, understanding the underlying mechanism of CI-AKI and how to effectively alleviate it is of great clinical importance. The mechanism of CI-AKI remains unclear. At present, it is believed that its occurrence is closely related to contrast agent nephrotoxicity [5,6]. It includes renal hemodynamic changes, renal tubular toxic injury, inflammatory response, oxidative stress, and apoptosis [7-9]. Destruction of the renal tubular epithelial cell barrier and extensive necrosis of renal tubular epithelial cells are the main pathological features of AKI.
Injury and necrosis can also cause a strong host immune response and release of inflammatory factors, including interleukin (IL)-1 and IL-18 [10]. In addition, apoptosis is also responsible for the pathogenesis of AKI [11]. Moreover, contrast media can be taken up into cells and damage mitochondrial function, resulting in increased generation of ROS and cell apoptosis [7]. Therefore, apoptosis is the pathological outcome of most CI-AKI, and inflammatory response and oxidative stress are important pathways leading to apoptosis. Rosiglitazone (RSG) is a synthetic peroxisome proliferator-activated receptor gamma (PPARγ) ligand that activates PPARγ at the transcriptional level to regulate downstream target genes [12]. PPARγ is a nuclear receptor involved in immunity and vascular health [13]. Synthetic PPARγ ligand agonists were demonstrated to be renoprotective in diabetic and nondiabetic patients [14]. In this study, a CI-AKI model was established using Sprague Dawley (SD) rats, which were treated with rosiglitazone and a PPARγ inhibitor, to explore whether rosiglitazone can alleviate CI-AKI, its renoprotective functions, and the underlying signaling pathway through which rosiglitazone plays its role. We hypothesized that rosiglitazone could alleviate AKI in the CI-AKI rat model by regulating the PPARγ/NLRP3 signaling pathway. The findings of this study might help in the development of a new therapeutic approach for CI-AKI.

Materials and Methods

2.1. Experimental Animals. The experiment was performed in a humane manner in accordance with the Ethical Guidelines for Care and Use of Laboratory Animals of Fujian Medical University Union Hospital. This study was approved by the Animal Protocol Committee of Fujian Medical University Union Hospital (2021022301). Twenty-four 250-300 g specific-pathogen-free male SD rats were purchased from Changsha Tianqin Biotechnology Co. Ltd., Changsha, Hunan, P.R. China, with the license no. SCXK (Xiang) 2019-0014. The rats were raised in separate cages and acclimated for 7 days to the laboratory environment (22°C-25°C, 50%-60% humidity, standard 12/12 h light/dark cycle) prior to the experiments.

2.2. Experimental Grouping and Animal Modeling. The rats were randomly assigned to 4 groups (n = 6 rats per group) as follows: (1) Control group: no intervention was performed. (2) Model group: a CI-AKI rat model was established by referring to the study of Liu et al. [15]. The rats were deprived of water for 72 h and allowed free access to food [15,16]. After 72 h, their caudal vein was injected with furosemide (10 ml/kg), followed by iohexol (11.7 ml/kg) 20 min later. Upon completion, free access to food and water was resumed for 24 h. (3) Rosiglitazone group: after successful modeling, 200 mg of rosiglitazone hydrochloride powder was accurately weighed and dissolved in 50 ml of normal saline. The modeled rats were given intragastric administration of 40 mg/kg rosiglitazone solution per day (divided into three doses) for 3 days. Each group was administered at the same time period. Upon completion, free access to food and water was resumed for 24 h. (4) T0070907 group: after successful modeling, 15 mg of PPARγ inhibitor powder (T0070907) was dissolved in 3 ml of DMSO solution, and 97 ml of corn oil solution was added and mixed. Each rat was given an intraperitoneal injection of the 0.15 mg/ml PPARγ inhibitor T0070907 solution 20 min before intragastric administration of rosiglitazone solution. Upon completion, free access to food and water was resumed for 24 h.
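As a quick arithmetic check of the dosing described above (a sketch, assuming a hypothetical 0.25 kg rat within the stated 250-300 g range):

```python
# Rosiglitazone: 200 mg dissolved in 50 ml normal saline -> 4 mg/ml stock.
stock_mg_per_ml = 200 / 50

# Intragastric dose of 40 mg/kg per day, divided into three administrations.
rat_weight_kg = 0.25  # hypothetical example weight
daily_dose_mg = 40 * rat_weight_kg
daily_volume_ml = daily_dose_mg / stock_mg_per_ml
per_administration_ml = daily_volume_ml / 3

# T0070907: 15 mg in 3 ml DMSO + 97 ml corn oil = 100 ml -> 0.15 mg/ml,
# matching the concentration stated in the protocol.
print(f"{daily_dose_mg} mg/day -> {daily_volume_ml:.1f} ml/day, "
      f"{per_administration_ml:.2f} ml per administration")
```

For a 0.25 kg rat this gives 10 mg/day, i.e., 2.5 ml of the 4 mg/ml stock split into three doses.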
The rats were fed in metabolic cages after treatment to collect urine samples. Urine Cr and protein (PRO) were determined by 24-h urine biochemistry. 24 h after completing the above procedure, the rats in each group were given an intraperitoneal injection of 45 mg/kg sodium pentobarbital for anesthesia and euthanized. Blood samples were taken from the abdominal aorta of three rats in each group for biochemical analysis of the serum Cr, urea nitrogen, MDA, and SOD contents, and for ELISA detection of TNF-α and IL-10 levels. The right renal tissue was taken for hematoxylin-eosin (H&E) staining and TUNEL staining for pathological examination and apoptosis detection. The left renal tissue was used for quantitative polymerase chain reaction (qPCR) detection of PPARγ, NLRP3, endothelial nitric oxide synthase (eNOS), and caspase-3 mRNA expression. Caspase-3 and NLRP3 protein expression were detected by immunohistochemistry.

2.4. H&E Staining. The renal tissues were collected and rinsed with running water for several hours before dehydration in 70%, 80%, and 90% ethanol solutions, pure alcohol, and xylene (mixed in equal amounts) for 15 min, and xylene I and II for 15 min each (until clear). The tissues were then immersed for 15 min in a mixture of xylene and paraffin (equal amounts), followed by 50-60 min each in paraffin I and paraffin II. The tissues were embedded in paraffin and sectioned. The sections were baked, dewaxed, and hydrated, following which they were immersed in distilled water and stained with hematoxylin aqueous solution for 3 min. The sections were then differentiated for 15 s in hydrochloric acid ethanol solution, washed slightly, blued for 15 s, rinsed with running water, stained with eosin for 3 min, and rinsed with running water again. Lastly, the sections were dehydrated, cleared, mounted, and examined under a microscope.

2.5. ELISA Test. The samples of each group were restored to room temperature. The TNF-α and IL-10 levels in the serum samples from the abdominal aorta of the rats were determined according to the ELISA kit instructions. The concentrated washing solution was diluted 1:20 with distilled water. The standard, blank, and sample wells were set up. To each standard well, 50 μl of standards at different concentrations was added. 40 μl of sample diluent was applied to the sample wells of the enzyme-labeled coated plate, followed by 10 μl of samples (final sample dilution of 5 times), and 100 μl of the enzyme-labeled reagent was added to each well except the blank wells. The plate was then sealed with sealing film and incubated for 60 min at 37°C. It was then washed 5 times and patted dry. Chromogenic reagents A and B were added in sequence for 15 min in the dark. Then, 50 μl of stop solution was added to each well to terminate the reaction. The blank wells were adjusted to zero, and each well's absorbance (optical density [OD] value) was measured in sequence at a wavelength of 450 nm.

2.6. Biochemical Testing. The blood and urine samples of each group were restored to room temperature. The contents of serum Cr, urea nitrogen, MDA, and SOD from the abdominal aorta of the rats and the 24-h urine creatinine and PRO were detected according to the biochemical kit instructions. The concentrated washing solution was diluted 1:20 with distilled water. The following steps of the experiment were the same as in the ELISA test.
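The kit-based readouts above reduce to interpolating blank-corrected sample ODs on a standard curve. A minimal sketch, assuming hypothetical standard concentrations and OD values and a simple linear fit (actual kits may prescribe, e.g., a four-parameter logistic fit):

```python
import numpy as np

# Hypothetical TNF-alpha standards (pg/ml) and blank-corrected OD at 450 nm.
std_conc = np.array([0, 25, 50, 100, 200, 400])
std_od   = np.array([0.02, 0.10, 0.19, 0.37, 0.72, 1.40])

# Linear calibration: concentration as a function of OD.
slope, intercept = np.polyfit(std_od, std_conc, 1)

sample_od = 0.55
dilution_factor = 5  # final 5-fold sample dilution stated in the protocol
conc = (slope * sample_od + intercept) * dilution_factor
print(f"Estimated TNF-alpha: {conc:.0f} pg/ml")
```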
2.7. TUNEL Detection. The tissue sections were baked for 2 h in an oven at 65°C before being immersed in xylene for 10 min, which was then replaced and left for another 10 min. The sections were placed in 100% (twice), 95%, and 80% ethanol and purified water for 5 min each. The sections were then placed in a wet box, and Proteinase K working solution (50 μg/ml) was added dropwise to each sample and allowed to react at 37°C for 30 min. They were then thoroughly washed with phosphate buffered saline (PBS) for 5 min (3 times). The PBS around the tissue was absorbed with absorbent paper. A sufficient amount of TUNEL detection solution was added to each slide and incubated at 45°C for 2 h in the dark. The slides were then washed with PBS for 5 min (3 times). The liquid on the glass slide was absorbed with absorbent paper, following which antifade mounting medium was added, and the slides were examined under a fluorescence microscope.

2.8. qPCR Detection. The renal tissues from each group were ground into powder. The RNA was extracted, and its concentration and purity were measured with a micro-ultraviolet spectrophotometer. The quality of the RNA was determined with agarose gel electrophoresis. A reverse transcription kit was used to synthesize the cDNA, which served as the template. The samples were loaded using the fluorescent dye method, and the program on the fluorescence qPCR instrument was set for the amplification reaction. The PCR reaction conditions were as follows: predenaturation 95°C, 10 min; denaturation 95°C, 10 s; annealing 58°C, 30 s; extension 72°C, 30 s; 40 cycles in total. The relative quantification 2^-ΔΔCT method was applied. β-Actin was used as the internal reference, and the relative expressions of PPARγ, NLRP3, eNOS, and caspase-3 were calculated. All primers were synthesized by Universal Biosystems (Anhui) Co., Ltd., Anhui, P.R. China. PAGE was applied as the purification method. The primer information is shown in Table 1.

2.9. Immunohistochemical Staining. The paraffin sections were deparaffinized in xylene, hydrated with gradient ethanol, and boiled for 3 min in 10 mM citrate buffer (pH 6.0) for antigen retrieval. The sections were treated with 3% H2O2 for 10 min at room temperature to inactivate the endogenous peroxidase and then with 5% bovine serum albumin (BSA) for 30 min at 37°C to block nonspecific binding. Subsequently, the sections were incubated at 4°C overnight with the primary antibodies and then for 30 min at 37°C with the corresponding secondary antibody. Color development was performed for 5-10 min using DAB. The degree of staining was assessed under a microscope. The sections were counterstained for 3 min with hematoxylin and differentiated in hydrochloric acid and alcohol. After rinsing with water for 1 min, the sections were dehydrated, cleared, mounted, and examined under an optical microscope. The Image-ProPlus 5.0 software was used for analysis.

2.10. Statistical Analysis. The SPSS v19 (IBM Corp., Armonk, NY, USA) software was used to analyze the data. The measurement data were expressed as mean ± standard deviation (mean ± SD). Comparisons between multiple groups were performed using a one-way analysis of variance (ANOVA). The Tukey honestly significant difference (HSD) test was used for post-hoc testing. The experiment was performed 3 times. P < 0.05 indicated that the difference was statistically significant.
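A minimal sketch of the 2^-ΔΔCT relative quantification described in Section 2.8, assuming hypothetical Ct values (β-actin as the internal reference and the control group as calibrator):

```python
# Hypothetical Ct values for one target gene (e.g., NLRP3) and beta-actin.
ct_target_control, ct_ref_control = 28.0, 18.0
ct_target_model,   ct_ref_model   = 25.5, 18.2

# delta-Ct normalizes to the reference gene; delta-delta-Ct to the calibrator.
d_ct_control = ct_target_control - ct_ref_control   # 10.0
d_ct_model   = ct_target_model - ct_ref_model       # 7.3
dd_ct = d_ct_model - d_ct_control                   # -2.7

fold_change = 2 ** (-dd_ct)
print(f"Relative expression (model vs. control): {fold_change:.1f}-fold")
```

With these placeholder Ct values the target gene comes out roughly 6.5-fold up-regulated in the model group.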
Results

3.1. H&E Staining of the Rat Renal Tissue. To observe the renal pathological conditions in rats with different treatments (normal renal tissue, CI-AKI, and rosiglitazone treatment), H&E staining was performed to compare the renal tissue sections of each group, where nuclei are shown in blue-purple and cytoplasmic areas in red. As shown in Figure 1, the size and shape of the renal tubular lumen of rats in the control group were normal, with clear structures, neatly arranged renal tubule epithelial cells, and very little cell necrosis and red blood cell infiltration. In the model group, however, the renal tubules were disordered, the lumina were generally smaller and mostly occluded, the glomeruli were atrophied, the renal corpuscle capsules were enlarged, the renal tubular epithelial cells were severely vacuolated and degenerated, cell morphology was changed, a large number of red blood cells was seen, and inflammatory cell infiltration in the tubulointerstitium was increased. The interstitial congestion, edema, and inflammatory cell infiltration were improved in the rosiglitazone group compared with the model group. The T0070907 group showed increased inflammatory cell infiltration and vacuolar degeneration of renal tubular epithelial cells compared with the rosiglitazone group, similar to the model group.

As shown in Figure 2, the contents of serum TNF-α, IL-10, MDA, SOD, urea nitrogen, and Cr from the abdominal aorta, and the urine Cr and urine PRO, were detected in each group of rats. The contents of serum TNF-α, IL-10, MDA, BUN, and Cr, and of urine Cr and PRO, were significantly increased (P < 0.05) in the model group compared with the control group, while serum SOD was significantly decreased (P < 0.05), indicating that the CI-AKI rats were seriously injured and the CI-AKI model was successfully established. Except for SOD, which was significantly increased (P < 0.05), the contents of the other biomarkers decreased to varying degrees (P < 0.05) after rosiglitazone treatment compared with the model group. When rosiglitazone treatment was superimposed with the PPARγ inhibitor T0070907, the SOD content decreased significantly, while the other biomarkers increased, indicating that rosiglitazone could effectively improve renal function and protect against CI-AKI, and that inhibition of PPARγ expression might aggravate the injury of CI-AKI to a certain extent.

3.3. TUNEL Detection of Apoptosis in Each Group. Apoptosis is one of the mechanisms of CI-AKI injury. Cell apoptosis in the rat renal tissue was detected using the TUNEL method. As shown in Figure 3, the nuclei showed blue fluorescence, and the apoptotic cells demonstrated green fluorescence. The control and rosiglitazone groups both had few TUNEL-positive cells. The model group had a significantly higher number of TUNEL-positive cells than the control group (P < 0.05). When rosiglitazone was superimposed with T0070907, the number of TUNEL-positive cells increased significantly compared with the control group (P < 0.05) but decreased significantly compared with the model group (P < 0.05), indicating that rosiglitazone treatment could reduce apoptosis in CI-AKI injury.

3.4. The Effect of Rosiglitazone on mRNA Expression in the CI-AKI Renal Tissue. qPCR was performed to detect changes in the mRNA expression of PPARγ, NLRP3, eNOS, and caspase-3 in each group.
Figure 4 shows that, in the model group, NLRP3, eNOS, and caspase-3 mRNA expression increased significantly (P < 0.05 for all) compared with the control group, while PPARγ mRNA expression decreased significantly (P < 0.05). The NLRP3 and eNOS mRNA expression in the rosiglitazone group decreased significantly (P < 0.05 for both), PPARγ mRNA expression increased significantly (P < 0.05), while caspase-3 mRNA expression showed no significant difference compared with the model group. Rosiglitazone treatment superimposed with the PPARγ inhibitor T0070907 showed little difference from the rosiglitazone group, indicating that rosiglitazone has a therapeutic effect on CI-AKI injury to a certain extent, and the mRNA expression levels showed that it had a relatively significant effect on the PPARγ, NLRP3, and eNOS genes.

3.5. Effects of Rosiglitazone on Protein Expression in the CI-AKI Renal Tissue. Changes in NLRP3 and caspase-3 protein expression in each group were determined by immunohistochemistry. The blue-purple areas indicate the nuclei, and the brown areas indicate target protein expression. As presented in Figure 5, the model group showed significantly increased NLRP3 and caspase-3 protein expression compared with the control group (P < 0.05). After treatment with rosiglitazone, the expression decreased significantly (P < 0.05) compared with the model group. It re-increased in the T0070907 group compared with the rosiglitazone group, indicating that the treatment effect of rosiglitazone on CI-AKI injury can also be observed at the protein level.

Discussion

According to the results of this study, after the CI-AKI rat model was established with iohexol, the arrangement of renal tubules was disordered, the lumina were occluded, the morphology of renal tubular epithelial cells was changed, the tubulointerstitium showed increased cell infiltration, and apoptosis was significantly increased. The model was thus successfully established. After intervention with rosiglitazone, the interstitial congestion and infiltration were significantly improved, and cell apoptosis was significantly reduced, indicating that rosiglitazone had a certain effect on the treatment of CI-AKI. The strengths of this study were the successful establishment of the CI-AKI model (assessed not only by serum creatinine and blood urea nitrogen but also by urine-based analyses), the potential translational evidence showing that rosiglitazone could have therapeutic effects in this disease, and the identification of the underlying mechanism of rosiglitazone in alleviating AKI by regulating the PPARγ/NLRP3 signaling pathway.

By detecting a series of inflammatory factors in the blood and urine of the rats, it was found that after modeling, the contents of serum TNF-α, IL-10, MDA, BUN, and Cr, and of urine Cr and PRO, increased significantly, while SOD decreased. After rosiglitazone intervention, the contents of serum TNF-α, IL-10, MDA, BUN, and Cr decreased significantly, as did the urine Cr and PRO, while SOD recovered. Studies have shown that urinary IL-18 in patients undergoing coronary angiography with contrast agents was significantly increased [17]. Previous studies found that contrast media caused an increase in serum urea nitrogen and creatinine [11,12], tubular necrosis and peritubular capillary congestion [18,19], MDA [18,19] and caspase-3 [19], and apoptosis [19,20], and a decreased SOD activity in the kidneys [18,19], which is concordant with our results.
Conventional studies indicated that the main mechanisms of CI-AKI include local vasoconstriction caused by contrast agents entering the renal vessels and ischemia and hypoxia at the renal cortex-medulla junction. Local oxidative stress and the contrast agent's direct toxic effect on renal tubules eventually result in renal injury, particularly to the renal tubular epithelial cells [21,22]. Studies have shown that NLRP3 is a cytoplasmic receptor, and researchers have gradually recognized its role in innate immune responses in recent years. Currently, NLRP3 is recognized as an important inflammatory molecule among the pattern recognition receptors [23]. The NLRP3 inflammasome, as an important component of innate immunity, plays a critical role in the body's immune response and disease occurrence. Studies have found that inhibiting the NLRP3 inflammasome pathway could reduce renal injury caused by cisplatin [23]. All of the above suggests that the NLRP3 pathway is likely to be activated during acute tubular injury and that NLRP3 inflammasomes may participate in local inflammation during the CI-AKI process.

PPARγ is an important member of the nuclear receptor superfamily, participating in inflammatory and immune responses, cell differentiation, proliferation, and apoptosis [9,24,25]. After binding to its specific ligands, PPARγ can regulate the transcription of target genes and inhibit immune cell activation and the expression of inflammatory factors. Caspase-3 is a key executioner protease in the apoptosis process and plays the ultimate pivotal role in apoptosis caused by various factors. According to the results, after the SD rats were modeled, the expression of NLRP3 and caspase-3 at the mRNA and protein levels increased significantly, and the expression of PPARγ decreased significantly. After treatment with rosiglitazone, a PPARγ agonist, the NLRP3 and caspase-3 mRNA and protein expression decreased, and the expression of PPARγ increased. The expression of eNOS was basically consistent with that of caspase-3. The decrease in apoptosis proteins and in the cell apoptosis rate indicated that NLRP3 was involved in CI-AKI injury, and that rosiglitazone could activate PPARγ to downregulate the expression of NLRP3, effectively reducing the occurrence of cell apoptosis and achieving a certain therapeutic effect.

This study had several limitations that should be clarified. The effects of different causes of renal injury were not investigated, and whether rosiglitazone would be similarly effective in improving AKI in these different models remains to be determined. Also, only rosiglitazone was used as the main drug, and comparisons with other potential drugs were not performed to assess which would be more effective. Lastly, the upstream and downstream components affecting the PPARγ/NLRP3 pathway were not investigated; thus, further studies are warranted to gain more insight into the mechanism of the nephroprotective action of rosiglitazone.

Conclusions

The therapeutic mechanism of rosiglitazone on AKI induced by iodine-containing contrast media may be via upregulating PPARγ expression and inhibiting the NLRP3 inflammasome signaling pathway. This study initially explored the relationship between rosiglitazone and the PPARγ/NLRP3 signaling pathway and the possible mechanism of action, laying a theoretical foundation for evaluating the role of the NLRP3 inflammasome in CI-AKI and for performing further in-depth studies on the specific mechanisms.
Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Ethical Approval This study was approved by the Animal Care and Use Committee of Fujian Medical University Union Hospital, and conducted in compliance with the guidelines. Conflicts of Interest The authors have no conflicts of interest to declare.
Flag-dipole spinors: On the dual structure derivation and $\mathcal{C}$, $\mathcal{P}$ and $\mathcal{T}$ symmetries

In this manuscript we report a direct definition of the flag-dipole spinors' dual structure and analyze the properties of the operator which generates such a structure. This particular construction may be of interest for cosmological, phenomenological, and mathematical physics applications. In addition, we analyse the behaviour of flag-dipole spinors under the action of discrete symmetries, facing an unconventional property encoded in (CPT)².

PACS numbers: 04.62.+v, 03.70.+k, 03.65.-w

I. INTRODUCTION

Spinors play an important role in several areas of Quantum Field Theory. Such mathematical objects must be understood as irreducible representations of the Lorentz group SO+(1,3) [1-3], which carry extensive information about the space-time in which they are defined. All relevant physical information associated with spinors is encoded in their bilinear forms. In fact, some years ago a spinor classification based on bilinear covariants and multivectors of observables was developed by Lounesto [4]. This classification sheds light on the existence of new classes of spinors. In particular, it revealed the so-called flag-dipole spinors, which reside between the Weyl, Majorana and Dirac spinors. It is of common knowledge that the relativistic description of the electron allows one to define the following set of bilinear forms: the invariant length σ = ψ̄ψ, the pseudo-scalar amount ω = ψ̄γ⁵ψ, the current density J = ψ̄γ^μψ γ_μ, the spin projection in the momentum direction K = ψ̄γ^μγ⁵ψ γ_μ, and the electromagnetic momentum density S = ψ̄ iγ^{μν}ψ γ_μ ∧ γ_ν, where we have defined ψ̄ = ψ†γ⁰ and γ stands for the Dirac matrices [4,5]. A more comprehensive description of bilinear forms can be found in [6]. The 16 aforementioned bilinear forms are restricted to obey an algebraic quadratic relation known as the Fierz-Pauli-Kofink identities [4].

Lounesto's classification can be divided into two sectors, one embracing single-helicity spinors (classes 1, 2 and 3) and the other dual-helicity spinors (classes 4 and 5) [7-9]. The first three classes of Lounesto's classification describe the Dirac spinors. The fourth class consists of flag-dipole spinors with a flag S on a dipole of two poles J and K. The fifth class (Majorana spinors) consists of flag-pole spinors with a flag S on a pole J, and the sixth class (Weyl spinors) consists of dipole spinors with two poles J and K [4]. At this point, it is worth mentioning that the properties of the flag-dipole spinors have not yet been properly defined, only slightly explored in some very specific scenarios [10-17]. Therefore, it is one of the purposes of this communication to report and describe some of the properties related to such spinors.

Quite recently, interesting new insights were brought to the scene after Elko's theoretical discovery [18]. Proposed in its first formulation in 2004 [19], the spin-1/2 fermionic field endowed with mass dimension one, constructed upon a complete set of eigenspinors of the charge conjugation operator and, consequently, due to its restricted interactions with the Standard Model particles, is believed to be a strong candidate to describe dark matter [18]. The features carried throughout the mass dimension one theory opened windows to physical content known as Beyond the Standard Model theory, trying to answer many questions that seem to remain open.
Since the mass dimension one theory is still under construction [18,20], we believe that a broad and quite interesting content is hidden beyond the mass dimension one fermions. So far, in the literature, we have two theoretical examples of mass dimension one fermions: the Elko and the flag-dipole spinors. Both spinors carry quite peculiar and particular features. Among these particularities, we are able to list: a new dual structure, the dynamics, and their unconventional (meanwhile expected) behaviour under the C, P and T discrete symmetries.

The focus of the present manuscript is to exhibit in detail an ab initio construction of the flag-dipole spinor dual structure. Interestingly enough, the contrasting dual structure carries an involved operator, which is responsible for ensuring a Lorentz-invariant and non-null norm, besides carrying much of the physical information encoded in flag-dipole spinors. In this way, the emergent operator plays an important role when one deals with phenomenological applications [21,22], cosmological applications, and mathematical physics analyses [6,23-25]. Therefore, we highlight its matrix form in addition to exploring some of its main characteristics. Furthermore, we analyse the fundamental characteristics of the flag-dipole spinors under the action of the C, P and T discrete symmetries. Moreover, we conclude that flag-dipole spinors hold (CPT)² = +𝟙, an unexpected behaviour for a spinorial field, however previously predicted by Wigner in one of his works [2], placing the flag-dipole theory on a well-posed physical and mathematical level. Thus, by inspection, we suppose that flag-dipole spinors also belong to a degenerate Hilbert space. Besides, we leave windows open for an approach like the one used in [26].

This paper is organized as follows: in section II we provide a direct definition of the flag-dipole spinors' dual structure, analyzing its main features and highlighting similarities with other examples of dual-helicity spinor adjoint structures present in the current literature. In section III we advance the formalism of the C, P and T discrete symmetries and compute (CPT)². Finally, in section IV we present some concluding remarks.

II. ON THE FLAG-DIPOLE DUAL STRUCTURE DEFINITION

Flag-dipole (or type-4) spinors stand for a very rare set of spinors in the literature, which had not appeared in physics applications until recently. They are candidates to construct mass-dimension-one fermions [27] endowed with dual-helicity [28], and they may explain the reheating phase of the universe [20]. Spin-1/2 mass-dimension-one flag-dipole spinors having been explicitly defined in [20], we now turn our attention to explicitly defining the flag-dipole adjoint structure Λ̃(p^μ), pointing out the main features encoded in the operator which composes such a structure, and evincing some important details that were not previously explained. As can be seen, Dirac's dual structure is well defined; however, it is not unique and is not even applicable to all spinors. Some cases, such as the Elko [19] and flag-dipole [20] spinors, for example, require a more involved dual structure. Accordingly, here we provide some details concerning the flag-dipole spinors' dual structure. If one imposes Dirac's dual structure (ψ̄ = ψ†γ⁰) on the flag-dipole spinors Λ(p^μ), one faces a problematic (identically vanishing) norm relation, where the lower indexes stand for the right-hand and left-hand component helicities, respectively.
Looking to unveil hidden physical content, we apply the very same procedure previously developed for the Elko spinors in Ref. [29]. Thus, this section is reserved for the derivation of a mathematical protocol whose requirement is a real norm, invariant under Lorentz transformations. Let us consider the flag-dipole spinors previously defined in [20]. These spinors satisfy the following orthonormal relations, where the indexes S and A are related to the positive and negative signs on the right-hand side of the norm relations. Note that the orthonormal relations are independent of the phases (β±). However, from now on, we confine ourselves to the constraint |β±|² = 1; otherwise, we do not guarantee a proper flag-dipole spinor, in agreement with [28]. Such a judicious phase constraint leads to a complete set of flag-dipole spinors carrying a non-null, Lorentz-invariant norm. The above relations suggest that the new dual structure must flip the spinor helicity. Looking to provide a direct definition of the dual structure, the set of relations above makes a useful tool to accomplish this task. We start from the very definition of the new dual structure, where h stands for the helicity. The Γ(p^μ) operator must obey a set of requirements. Denoting the spinor space by S, the Γ(p^μ) operator maps S onto itself. Moreover, Γ(p^μ) has to square to the identity, ensuring an invertible mapping. From (12) we have the following two possibilities: h = h′, for which Γ(p^μ) = 𝟙, as is the case for the Dirac spinors, or h ≠ h′, leading to a more involved operator [6,24,29-31]. With the orthonormal relations (5)-(10) at hand, one is able to define the Γ(p^μ) operator and its matricial form, in which we have defined the functions g(θ, β), f₁(φ, θ, β) and f₂(φ, θ, β). Note that we have fully defined the important operator present in the flag-dipole dual structure. The Γ(p^μ) operator obeys the following requirements: Γ²(p^μ) = 𝟙, and its inverse indeed equals itself. In the light of [6], a useful fact concerning this operator is [Γ(p^μ), γ⁵] = 0. Remarkably enough, a judicious choice of the phase values, as previously shown in Ref. [28], makes it possible to recover Elko's Ξ(p^μ) operator; in other words, Γ(p^μ) → Ξ(p^μ). Let us now analyse the behaviour of Γ(p^μ) under Lorentz transformations. For a Lorentz boost, with κ† = -κ, the boosted expression provides the transformation relation for Γ(p^μ), and the action of the Γ(p^μ) operator on the flag-dipole spinors provides the corresponding helicity-flipping relations. Such a direct definition of the dual structure provides additional support to the flag-dipole dual structure previously found in [20]. It is worth mentioning that the above approach is important because, first, in parallel with the Elko case, it shows the protocol of how the dual structure for dual-helicity spinors emerges. Consequently, we show some properties of the Γ(p^μ) operator and, finally, we evince its explicit form, which is necessary for carrying out studies such as the ones developed in [6,21-23]. We remark that the prescription contained here clearly leads to a theory where the dual structure presented in (11) provides a spin sum containing a term which does not transform covariantly under Lorentz transformations; consequently, at the quantum field level, it yields a non-local quantum field. However, in [20], supported by [29], a redefinition of the dual structure brings to light a local theory.
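For reference, the properties of the Γ(p^μ) operator stated in this section can be collected compactly; the index placement in the first line is our paraphrase of the omitted display equations, not a verbatim reproduction:

```latex
% Dual structure and properties of the \Gamma(p^\mu) operator (as stated in the text)
\begin{align}
  \widetilde{\Lambda}_h(p^\mu) &= \left[\Gamma(p^\mu)\,\Lambda_{h'}(p^\mu)\right]^{\dagger}\gamma^0,
  \qquad h \neq h' \ \text{(helicity flip)},\\
  \Gamma^2(p^\mu) &= \mathbb{1}, \qquad \Gamma^{-1}(p^\mu) = \Gamma(p^\mu),\\
  [\Gamma(p^\mu),\, \gamma^5] &= 0 .
\end{align}
```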
III. C, P AND T DISCRETE SYMMETRIES

The present section carries some fundamental aspects concerning discrete symmetries and flag-dipole spinors. Through the relations established here, we are able to connect, at the second-quantization level, the dynamics, the quantum field, and the locality structure. In addition, such analysis is extremely relevant when one intends to approach the flag-dipole spinors via the formalism developed in Ref. [26, and references therein]. We start by analysing the behaviour of the flag-dipole spinors under the action of the parity operator, which can be defined as P = m⁻¹γ^μ p_μ [32]. As can be seen in the current literature, dual-helicity spinors do not obey the Dirac equation [7,19]. In order to illustrate the procedure, we choose the Λ^S_{+,-}(p^μ) spinor and apply the operator γ^μ p_μ on it, where σ·p̂ stands for the helicity operator previously defined in [20]. Acting with this operator on the spinor's components and, after simple mathematical manipulations, taking into account Einstein's dispersion relation, one obtains a relation among the components. Then, from eq. (23), we note that the flag-dipole spinors do not form a set of eigenspinors of the Dirac operator. In other words, the flag-dipole spinors do not satisfy the Dirac equation. This fact had already been observed previously in [7]. Repeating the very same procedure described above, i.e., acting with γ^μ p_μ on the r.h.s. of equation (30), leads us straightforwardly to the relation obeyed by P². Now, we focus on the charge-conjugation operator, which can be written as C = γ²K, where K stands for the algebraic complex conjugation operation [19]. As stated in [7,20], flag-dipole spinors do not necessarily hold conjugacy under C. Thus, here we provide a quick derivation of such observations: the resulting spinor does not belong to the set of flag-dipoles in (2) and (3), which reinforces the argumentation in [33]. Nonetheless, acting twice with the C operator on equation (33) provides the corresponding result for C². Finally, the last discrete symmetry concerns the time-reversal operator T = iγ⁵C, an anti-unitary operator. With the previous results at hand, the action of such an operator on a flag-dipole spinor is again not compatible with the spinors shown in (2) and (3), evincing that the flag-dipole spinors do not stand for a set of eigenspinors of the time-reversal operator. The previous results evade Theorem 1 of Lee & Wick in Ref. [34], where it is stated that all local spin-1/2 fields hold the relation P² = T²; as can be seen, for the flag-dipole spinors we obtained P² ≠ T². In the meantime, a very interesting and unexpected outcome emerges when we compute (CPT)² for the Λ(p^μ) spinors: using the results above, it yields (CPT)² = +𝟙. Thus, both Elko and flag-dipole spinors show congruous features. Such a result, combined with the above calculations, evinces that flag-dipole spinors also belong to the Wigner class 3 [2] and may also belong to the degenerate Hilbert space [26]. Note that for the second time this behaviour is reported for a fermion. For completeness, we establish the commutation/anti-commutation relations among the discrete symmetry operators.
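Before proceeding, it may help to collect the operator definitions and the main outcomes quoted in this section (the omitted displays contain the detailed component relations):

```latex
% Discrete symmetry operators and the main results quoted in the text
\begin{align}
  \mathcal{P} &= m^{-1}\gamma^\mu p_\mu, \qquad
  \mathcal{C} = \gamma^2 K, \qquad
  \mathcal{T} = i\gamma^5 \mathcal{C},\\
  \mathcal{P}^2 &\neq \mathcal{T}^2
  \quad \text{(evading Theorem 1 of Lee \& Wick [34])},\\
  (\mathcal{CPT})^2 &= +\mathbb{1}
  \quad \text{(Wigner class 3 behaviour [2])}.
\end{align}
```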
Accordingly, acting with C from the left on equation (30) yields one result; on the other hand, a second computation gives another. These two results lead to anti-commutativity of the C and P operators for flag-dipole spinors. Applying the same reasoning to the other flag-dipole spinors, one establishes that C and P anti-commute for all Λ^S spinors. Moreover, one may find the following relations for the other operators. The only relation consistent with what is expected for fermions is the one given by eq. (41). We remark that the above approach is important at the quantum field level, and here we provide some details. In the Weinberg framework [35], the analysis is developed for Dirac spinors: the quantum field is defined upon spinors that satisfy the Dirac dynamics (eigenspinors of the parity operator), which play the role of expansion coefficients; the quantum field then holds invariance under parity transformation, in agreement with all the Lee & Wick expectations. In [26] the situation is slightly different, as the analysis takes into account the eigenspinors of the charge-conjugation operator (Elko spinors) to define the quantum field. As can be seen, such spinors carry an extra degree of freedom (helicity) when compared with Dirac spinors, and for this reason Elko spinors do not satisfy the Dirac dynamics. This was the first reported case to evade the Lee & Wick expectations; in other words, it can be understood as a fermion with bosonic traces. In that framework, as the authors claim, the quantum field must belong to a degenerate Hilbert space. The flag-dipole spinors now prove to be an interesting case: they do not satisfy any discrete symmetry (C, P or T), do not satisfy the Dirac dynamics, and also carry an extra degree of freedom, as Elko do, making them the second reported case that does not match the Lee & Wick expectations.

IV. CONCLUDING REMARKS

As previously mentioned, we stress once again the importance of a direct definition of the dual structure, given its delicate nature. Since the dual structure for spinors endowed with the dual-helicity feature is not trivial, and strongly contrasts with Dirac's dual structure, a more rigorous approach is necessary; accordingly, we have made a detailed inspection of the flag-dipole spinor dual structure. We have thereby given additional mathematical support to the definition of this structure, evincing some details of the operator that composes it. The explicit form, besides the properties, of the Γ(pµ) operator that is part of the flag-dipole dual structure is essential for advancing other branches of research, e.g., particle
A New Handwritten Signature Verification System Based on the Histogram of Templates Feature and the Joint Use of the Artificial Immune System with SVM. Verifying the authenticity of handwritten signatures is required in various domains of everyday life, notably official contracts and banking or financial transactions. Therefore, in this paper a novel histogram-based descriptor and an improved classification scheme for the bio-inspired Artificial Immune Recognition System (AIRS) are proposed for handwritten signature verification. Precisely, the Histogram Of Templates (HOT) is introduced to characterize the most widespread orientations of local strokes in handwritten signatures, while the combination of AIRS and SVM is proposed to achieve the verification task. Usually, using the k Nearest Neighbor rule, a questioned signature is classified by computing dissimilarities with respect to all AIRS outputs. In this work, using these dissimilarities, a second round of training is carried out by the SVM classifier to further improve the discrimination power. In comparison with existing methods, the experiments on two widely-used datasets show the potential and the effectiveness of the proposed system.

Introduction

The handwritten signature is a biometric feature unique to each person. As it depends on the physical and psychological condition of the writer, researchers have to deal with the intra-writer variability of the signer in order to develop robust systems for signature verification. Two verification approaches can be mentioned: on-line and off-line. On-line verification, in which signatures are acquired via an electronic device, considers dynamic information about signatures. In the off-line approach, signatures are written on a sheet of paper; in this case, features are calculated from the signature shape. Furthermore, verification can be carried out according to two strategies: writer-dependent or writer-independent [1]. To authenticate genuine and forged signatures, the writer-dependent strategy develops a specific system adapted to each person's style, while only one generic system is developed for all persons in the writer-independent framework.

To characterize signature images effectively, several global and local descriptors have been employed over the past years. For instance, typical global features are mathematical transforms such as Wavelets, Ridgelets and Contourlets [2-4]. Nevertheless, local features are much preferred, since they describe specific parts of signature images, which makes them robust to global shape variations [5]. In this respect, we note topological features, such as pixel density and pixel distribution, curvature features, orientation features and gradient features [1,6,5].
Moreover, various methods have been developed to achieve the verification task, such as dynamic time warping, neural networks, hidden Markov models and SVM [7]. Currently, SVM is the most commonly used classifier, since it can significantly outperform the others [8]. Nevertheless, the scores reported in the literature are not optimal and still need improvement. Recently, many interesting mechanisms inspired by the natural immune system have allowed the development of Artificial Immune Recognition Systems (AIRS) that address various pattern recognition applications, such as thyroid diagnosis [9] and fault detection [10]. AIRS classification adopts a supervised learning process to create new representative data for each class, called Memory Cells (MC). Then, the k Nearest Neighbor (kNN) rule is applied over the established MC to classify test data. In [11,6,12], the authors successfully employed the AIRS classifier for off-line signature verification. However, experiments showed that the user-defined parameters must be carefully tuned to each writer's characteristics in order to achieve competitive performance. Also, since the kNN decision depends only on the pertinence of the produced MC, a more powerful decision is conceivable.

In this paper, we propose a robust system for off-line signature verification. A novel descriptor using the Histogram Of Templates (HOT) is introduced to characterize stroke orientations in signatures. For the verification step, we jointly use AIRS and SVM to overcome the shortcomings of the conventional AIRS classifier. Precisely, after the AIRS training, a set of dissimilarities is calculated between the original data and the MC evolved during AIRS training. These dissimilarities are then used to train an SVM, which delivers the automatic decision about questioned signatures. The rest of this paper is organized as follows: Section 2 introduces the proposed signature verification system; experiments are presented and discussed in Section 3, followed by the main conclusions in the last section.

Proposed Signature Verification System

The proposed Signature Verification System (SVS) is composed of a feature generation module based on the Histogram Of Templates and a verification module, which combines AIRS with SVM. The verification task is carried out according to the writer-dependent strategy: for each writer, an SVS is developed to discriminate between genuine signatures and skilled forgeries. In this section, a detailed explanation of the "Histogram Of Templates" feature extractor is given. Then, a brief overview of the AIRS theory is provided, followed by the details of how the combination of AIRS and SVM is performed.

Histogram Of Templates

The Histogram Of Templates (HOT) is proposed for highlighting local stroke orientations by using a set of templates. As shown in Fig. 1, sliding windows covering 3 × 3 pixels are applied on a signature image to count the number of pixels that fit each template [13]. The resulting counts constitute the histogram of templates: if we consider twenty templates, the histogram will have 20 bins, each bin corresponding to the number of pixels P matching a template k. Here, the HOT feature is computed by considering both pixel information and gradient information, which leads to a histogram of 40 bins combining the pixel and gradient information vectors.

Pixel information-based HOT: For each template, if the gray value I(P) of a pixel P is greater than the gray values of the two adjacent pixels P1 and P2, then P matches the template: I(P) > I(P1) and I(P) > I(P2).
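A minimal sketch of this matching rule, which also anticipates the gradient-based variant described just below; the template encoding and helper names are illustrative, and the paper's exact twenty-template set is defined in [13] and not reproduced here:

```python
import numpy as np

# Each template is encoded as the two neighbour offsets (P1, P2) of the centre
# pixel within a 3x3 window; a full implementation would enumerate the twenty
# templates of [13]. Only a few illustrative ones are listed here.
TEMPLATES = [
    ((-1, 0), (1, 0)),   # vertical neighbours
    ((0, -1), (0, 1)),   # horizontal neighbours
    ((-1, -1), (1, 1)),  # one diagonal
    ((-1, 1), (1, -1)),  # other diagonal
]

def hot_histogram(values: np.ndarray) -> np.ndarray:
    """Count, per template, the pixels whose value exceeds both template neighbours.

    `values` can be the grey-level image (pixel-information HOT) or the
    gradient-magnitude map (gradient-information HOT).
    """
    h, w = values.shape
    hist = np.zeros(len(TEMPLATES))
    for k, ((dy1, dx1), (dy2, dx2)) in enumerate(TEMPLATES):
        centre = values[1:h-1, 1:w-1]
        n1 = values[1+dy1:h-1+dy1, 1+dx1:w-1+dx1]
        n2 = values[1+dy2:h-1+dy2, 1+dx2:w-1+dx2]
        hist[k] = np.sum((centre > n1) & (centre > n2))
    return hist

def hot_descriptor(image: np.ndarray) -> np.ndarray:
    """Concatenate the pixel- and gradient-based histograms (20 + 20 = 40 bins
    in the paper's configuration)."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return np.concatenate([hot_histogram(image), hot_histogram(magnitude)])
```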
Gradient information-based HOT: For each template, if the gradient magnitude Mag(P) of a pixel P is greater than the gradient magnitudes of the two adjacent pixels, P matches the template.

Artificial Immune Recognition System

The Artificial Immune Recognition System (AIRS) is a bio-inspired classifier introduced by Watkins in [14]. Through mutation and resources-competition processes, AIRS training generates new data, called antibodies (or Memory Cells, MC), that represent variability within the classes of interest. The training algorithm considers each training signature as an antigen, and each generated antibody with its associated class label (genuine or forged) is called an Artificial Recognition Ball (ARB); ARBs are provisional MC used during training to produce the final established MC. Before training begins, the MC set is initialized by randomly selecting one training sample from each class. Training then proceeds as follows:
- MC-match selection: MC-match is the most highly stimulated MC. The stimulation ST between each MC and the current antigen is computed from their affinity, where affinity is the Euclidean distance and ag_i is the i-th antigen. Then, using the selected MC-match, a set of randomly mutated clones (ARBs) is generated.
- Resources competition: a competition between the generated clones (ARBs) is carried out according to their stimulation level, in order to ensure the development of more representative cells (i.e. the most stimulated ARBs, which allow the recognition of antigens).
- MC-candidate selection and MC pool update: the ARB with the highest stimulation is selected as the MC-candidate. Then, based on a comparison with MC-match, the MC-candidate either replaces MC-match or is added to the memory cell population.

Classification: a questioned signature is classified as genuine or forged according to its k nearest neighbours within the MC population.

AIRS shortcomings: Because of the writer-dependent protocol, AIRS training employs several set-up parameters that must be tuned to each writer's characteristics, which requires numerous tests to find the optimal combination [6]. The main AIRS parameters are:
- Mutation rate: a real in [0.002-0.01] representing the mutation probability of an ARB.
- Clonal rate: an integer that controls the number of generated mutated clones.
- Affinity threshold scalar: a real in [0.1-1] used within the MC-candidate and MC-match comparison.
- Stimulation threshold: a real in [0.1-1] used as a stopping criterion in the training routine of an antigen.
- Resources number: an integer in [100-700] that limits the number of mutated clones (ARBs) allowed in the system.

Furthermore, as the decision of the classical AIRS depends only on the pertinence of the produced MC, we propose a hybrid verification system in which the kNN classification is replaced by a support vector decision. This implementation allows us to tune the AIRS parameters globally for all writers while greatly improving verification performance.

The Joint Use of AIRS and SVM

The joint use of AIRS and SVM as a verification system proceeds in the following steps (see Fig. 2; a code sketch is given at the end of this section):
- Train AIRS according to the steps reported in subsection 2.2.
- Build new training and testing sets by replacing signature features with their dissimilarities to all memory cells in the MC pool.
- Train an SVM on the new training dissimilarity set to separate genuine dissimilarities from forged dissimilarities.
- Feed the dissimilarity vector of each questioned signature into the support vector decision to decide whether it is genuine or forged.

Experimental Results

Our experimental study is conducted on two widely-used datasets: MCYT-75 and GPDS-300. MCYT-75 contains off-line signatures of 75 writers, with 15 genuine signatures and 15 skilled forgeries each, while the GPDS-300 corpus contains off-line signatures of 300 writers, with 24 genuine signatures and 30 skilled forgeries each. Performance evaluation is based on the False Rejection Rate (FRR), the False Acceptance Rate (FAR) and the Average Error Rate (AER); the latter is the average of FAR and FRR. Following the protocol reported in [15], for each dataset the training stage uses 10 genuine and 10 forged signatures selected at random, while the remaining signatures are used to test the verification performance. In this work, the AIRS parameters take the same values for all writers to facilitate implementation. As a first experiment, we selected the optimal k-th neighbour yielding the best AER on training data.
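Before turning to the results, here is a sketch of the second-stage training described above. The `mc_pool` argument stands in for whatever the AIRS run of subsection 2.2 produced, and the Euclidean distance mirrors the affinity definition; the function names and the RBF kernel choice are assumptions, not the paper's stated configuration:

```python
import numpy as np
from sklearn.svm import SVC

def dissimilarity_vectors(features: np.ndarray, mc_pool: np.ndarray) -> np.ndarray:
    """Replace each feature vector by its Euclidean distances to all memory
    cells, mirroring the affinity measure used during AIRS training."""
    return np.linalg.norm(features[:, None, :] - mc_pool[None, :, :], axis=2)

def verify_with_airs_svm(train_x, train_y, mc_pool, query_x):
    """Second round of training: an SVM learns to separate genuine from forged
    signatures in the dissimilarity space induced by the AIRS memory cells."""
    svm = SVC(kernel="rbf")
    svm.fit(dissimilarity_vectors(train_x, mc_pool), train_y)
    # The support vector decision replaces the kNN rule of the classical AIRS.
    return svm.predict(dissimilarity_vectors(query_x, mc_pool))
```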
As shown in Fig. 3, for both datasets the best accuracy is obtained when considering one neighbour, with an AER of about 31% for MCYT-75 and 18% for GPDS-300. From these outcomes, we deduce that the kNN classification cannot cope with the variability of the information offered by the evolved memory cells. Consequently, to improve the AIRS classifier, the proposed system performs a second round of training to take further advantage of both the training set and the memory cells, achieving a more robust verification. Table 1 reports error rates as well as verification times using AIRS, SVM and the proposed joint use of AIRS and SVM. For the proposed system, the verification time includes the HOT calculation, the dissimilarity computation, the SVM training and the decision time. In the conventional AIRS, the verification time corresponds to the calculation of HOT features plus the computation of the kNN decision for a questioned signature, while for SVM it corresponds to the HOT computation plus the support vector decision time.

The results show that the proposed joint use of AIRS and SVM outperforms both the classical AIRS and SVM. Indeed, the AIRS-SVM combination allows a significant improvement in AER values, with a gain of at least 6% for MCYT-75 and 0.7% for GPDS-300. Moreover, thanks to the support vector decision, the proposed combined verification system provides a lower FAR than FRR, reflecting its ability to favour the reduction of falsely accepted signatures. A comparison of the verification time required to process a questioned signature reveals that the proposed combination requires approximately the same duration as AIRS or SVM. In addition, compared with the state-of-the-art results reported in Tables 2 and 3, the joint use of AIRS with SVM provides competitive outcomes.

Conclusion

This paper introduced a novel histogram-based descriptor to characterize off-line signatures and proposed verifying the authenticity of these signatures through the joint use of an Artificial Immune Recognition System with SVM. Specifically, the kNN decision commonly associated with the conventional AIRS is replaced by a support vector decision. Experiments conducted on two public datasets demonstrated the effectiveness of the proposed algorithm despite the use of the same parameter selection for all writers. Precisely, a gain of 20.4% for MCYT-75 and of 7.52% for GPDS-300 in the AER is achieved over the classical AIRS performance. To further improve the verification accuracies, the histogram of templates descriptor could be computed over different local parts of signature images to extract more accurate information.

Fig. 2. Proposed signature verification system (continuous lines indicate the training flowchart while dashed lines indicate the verification flowchart).
Table 1. Signature verification results obtained for AIRS and the proposed implementation.
Latent virus infection upregulates CD40 expression facilitating enhanced autoimmunity in a model of multiple sclerosis. Epstein-Barr virus (EBV) has been identified as a putative environmental trigger of multiple sclerosis (MS) by multiple groups working worldwide. Previously, we reported that when experimental autoimmune encephalomyelitis (EAE) was induced in mice latently infected with murine γ-herpesvirus 68 (γHV-68), the murine homolog of EBV, a disease more reminiscent of MS developed. Specifically, MS-like lesions developed in the brain that included equal numbers of IFN-γ producing CD4+ and CD8+ T cells and demyelination, neither of which is observed in MOG-induced EAE. Herein, we demonstrate that this enhanced disease was dependent on the γHV-68 latent life cycle and was associated with STAT1 and CD40 upregulation on uninfected dendritic cells. Importantly, we also show that, during viral latency, the frequency of regulatory T cells is reduced via a CD40-dependent mechanism, and this contributes towards a strong T helper 1 response that results in severe EAE disease pathology. Latent γ-herpesvirus infection thus establishes a long-lasting impact that enhances subsequent adaptive autoimmune responses. EAE, the animal model used to study MS in rodents, is exacerbated by acute γHV-68 infection 20. More severe disease is also observed in γHV-68 infected mice in the context of inflammatory bowel disease 21 and Crohn's disease 22. In all these studies, the mechanisms that the virus exploits to produce these enhanced autoimmune phenotypes were not described. We previously demonstrated that EAE pathology is severely heightened in mice latently infected with γHV-68 and is more reminiscent of MS, including brain-specific lesions with CD4+ and CD8+ T cells and demyelination 23. Mice latently infected with γHV-68 were induced for EAE (γHV-68 EAE) and mounted a potent Th1 response that lacked an IL-17 component and led to brain parenchyma inflammation, whereas in uninfected EAE mice inflammation is confined to the spinal cord and driven by a mixed Th1-Th17 response. Antigen presenting cells (APCs) expressing higher levels of CD40 and MHC II were found to be responsible for the preferential Th1 skewing in γHV-68 EAE mice 23. This suggests that a γ-herpesvirus is able to manipulate innate immunity and influence both the skewing and the strength of T cell responses upon a second pro-inflammatory stimulus. However, it remained to be addressed mechanistically how γHV-68 latency triggers the enhancement of the disease, specifically whether the latent life cycle is required and what role the upregulation of costimulatory molecules, in particular CD40, has in inciting enhanced EAE. CD40 is a co-stimulatory molecule expressed on mature dendritic cells (DCs), B cells, monocytes, epithelial cells and endothelial cells 24-26. The presence of microbial products or danger signals activates Toll-like receptors, leading to the activation and maturation of DCs and, in particular, upregulation of surface expression of CD40. Type I interferons (Type I IFNs) play a critical role not only in the regulation of CD40 expression, but also in the control and maintenance of both γHV-68 acute and latent infection 27. Type I IFNs serve to program DCs to drive Th1 responses through activation/phosphorylation of the STAT1 pathway 28,29.
In addition to type I IFNs, the cognate interaction between CD40 and CD40 ligand (CD40L) enhances DC activation and is important in providing T cell help to B cells, activating cytotoxic CD8+ T cells, directing a Th1 response and controlling regulatory T cells (Tregs) 24,30,31. Interestingly, CD40 polymorphisms and dysregulation have been shown to play a role in the development of autoimmunity 32. We hypothesized that heightened CD40 expression was mechanistically participating in the development of severe EAE pathology. Our data demonstrate that CD40 is required for the efficient priming of strong Th1 responses and for decreased Treg frequencies in mice latently infected with γHV-68. We were also able to show the converse using a latency-deficient virus, with which we observed that enhancement of EAE pathology absolutely required γHV-68 latency. Essentially, the ability to skew the adaptive immune response towards a Th1 phenotype and retain low frequencies of Treg cells through CD40 expression and co-stimulation represents a novel mechanism by which latent γ-herpesviruses like EBV can act to facilitate and induce autoimmune disease.

Results

During acute infection with γHV-68, EAE is delayed. We have previously shown 23 that mice latently infected with γHV-68 (five weeks post primary infection) present a heightened EAE pathology characterized by earlier onset of paralysis, more severe clinical symptoms, enhanced T cell infiltration of the CNS, and a potent Th1 response accompanied by downregulation of Th17 responses. Strikingly, γHV-68 EAE mice also showed CD8+ T cell infiltration and myelin damage inside the brain parenchyma 23, closely resembling the composition of immune infiltrates in MS plaques. Further, we observed an increase in surface expression of CD40 and MHC II on CD11b+CD11c+ cells during the antigen presentation phase of EAE, responsible for the enhanced Th1 responses detected in γHV-68 EAE mice 23. To determine whether the establishment of latency by γHV-68 is required for the enhancement of EAE symptoms, EAE was induced in mice during acute infection with γHV-68, before the latent life cycle was established. We observed that EAE was delayed during acute infection (Fig. 1) and that the severity of the disease was similar between uninfected mice and γHV-68 acutely infected mice (Fig. 1). Interestingly, the onset of EAE symptoms in infected mice was concomitant with the timing of establishment of γHV-68 latency (day 14/15), as previously demonstrated by a number of different laboratories 33-35.

Mice infected with a latency-deficient γHV-68 have a less severe EAE course and less T cell infiltration of the CNS than mice infected with γHV-68. The observations that, during acute infection, EAE onset was delayed and disease scores were similar between uninfected and γHV-68 acutely infected mice suggested that establishment of latency prior to EAE induction is required for the development of enhanced disease. We hypothesized that latency was indispensable for the enhancement of EAE symptoms, so we chose to study a recombinant γHV-68 (γHV-68 AC-RTA) in which the genes responsible for the establishment of latency have been deleted and the gene driving lytic infection is constitutively expressed 36. In a manner similar to the wild type virus, this recombinant virus acutely infects mice and stimulates a cellular and humoral immune response comparable to that elicited by the wild type virus 36.
As reported previously, no viral DNA was detected in the splenocytes of mice infected with this virus 15 days post infection 36, indicating that latency was not established and that the virus was cleared. To directly determine the role of latency in this model, we characterized EAE development in γHV-68 AC-RTA mice. We assessed the differences in the immune cell composition of the CNS and analyzed the T cell response after infection of wild type mice with either γHV-68 or γHV-68 AC-RTA followed by EAE induction. Similar to uninfected EAE mice, γHV-68 AC-RTA EAE mice showed milder clinical CNS symptoms (Fig. 2A), delayed disease onset (Fig. 2A) and fewer CD4+ and CD8+ infiltrates inside the brain parenchyma (Fig. 2B) when compared to γHV-68 EAE mice. In the end, the disease pathology of γHV-68 AC-RTA EAE mice resembled that of uninfected EAE mice 23. These data indicate that the heightened EAE pathology observed in mice latently infected with γHV-68 does not occur when mice are infected with a recombinant γHV-68 that is unable to establish latency. Further, γHV-68 AC-RTA EAE mice presented with reduced CD4+ and CD8+ T cell infiltration in both the brains and the spinal cords when compared to γHV-68 EAE mice (Fig. 3A). Additionally, they showed increased Th17 responses in the CNS (Fig. 3B) and a decrease in IFN-γ production by both CD4+ and CD8+ T cells. Overall, the type of immune response elicited in γHV-68 AC-RTA mice after EAE induction was more similar to that of naïve EAE mice than to that of γHV-68 EAE mice.

Mice infected with the latency-free γHV-68 AC-RTA strain do not upregulate CD40 expression on antigen presenting cells upon EAE induction. As previously mentioned, we have already demonstrated that latently infected mice present with an upregulation of CD40 on APCs. We therefore sought to determine whether infection with γHV-68 AC-RTA would, in contrast to latent γHV-68 infection, fail to upregulate CD40, and also to establish whether CD40 expression is an important key to immunological regulation during latent viral infections. APCs from C57Bl/6 mice infected with γHV-68 AC-RTA express CD40 during the antigen presentation phase of EAE at levels comparable to naïve EAE mice (Fig. 4), rather than at the increased levels observed in γHV-68 latently infected mice (Fig. 4), thereby associating latent infection with increased CD40 surface expression and, likely, co-stimulation. In our prior manuscript, increased CD40 expression was defined on APCs identified by CD11b+, CD11c+ surface marker staining. This set of cells represents a broad spectrum of cell types, and it is likely that only a small subset of APCs within this population has increased CD40 surface expression. The variable nature of this subset within the larger population is likely reflected in the variability of the mouse-to-mouse comparisons.

Fig. 3. Two weeks post EAE induction, mice were perfused, brains and spinal cords were harvested and immune infiltrates were isolated. Isolated immune cells were stimulated with PMA and ionomycin and stained for CD4 (A) and CD8 (A) to measure IFN-γ and IL-17 (B) production. The number of CD8 T cells infiltrating the CNS of γHV-68 AC-RTA EAE and naïve EAE mice was too low to perform intracellular staining (B). Three separate experiments with 5-6 mice/group; data are represented as mean, error bars are SEM and were analyzed with t-test: **p < 0.01; ***p < 0.001.
Overall, the increase in CD40 surface expression is significantly greater in cells isolated from γHV-68 latently infected mice.

The presence of CD40 on antigen presenting cells is required to induce enhanced Th1 responses in mice latently infected with γHV-68. Enhanced Th1 responses and enhanced CD8+ T cell activation have been shown to be dependent on CD40 37-44. Further, these characteristics were observed in γHV-68 EAE mice in association with an upregulation of CD40 23. Additionally, in γHV-68 AC-RTA EAE mice, brain inflammation is greatly reduced, indicating that the priming of a strong Th1 response is required to drive both CD4+ and CD8+ T cells into the brain parenchyma, leading to the development of the myelin lesions observed in γHV-68 EAE mice 23. To investigate the effect of the lack of CD40 during γHV-68 infection and latency, C57Bl/6 wild-type (wt) and C57Bl/6 CD40KO mice were infected or not with γHV-68. Five weeks post infection (p.i.), the level of IFN-γ produced by T cells was assessed. C57Bl/6 CD40KO mice infected with γHV-68 failed to mount the strong Th1 response typical of γHV-68 C57Bl/6 wt mice. In fact, IFN-γ production by CD4+ T cells in γHV-68 C57Bl/6 CD40KO mice was comparable to that of naïve wt mice, both in the spleen and in the lymph nodes (Fig. 5A). IFN-γ production by CD8+ T cells was reduced in the spleen of γHV-68 C57Bl/6 CD40KO mice when compared to γHV-68 C57Bl/6 wt mice; in contrast, there was no comparable reduction in the lymph nodes (Fig. 5B). Additionally, as CD40 was found to be upregulated on the surface of CD11b+CD11c+ cells upon EAE induction 23, an in vitro antigen presentation assay was performed to assess whether CD40 is critical for skewing the potent Th1 response observed in γHV-68 EAE mice upon myelin peptide presentation. Transgenic T cells bearing a TCR specific for myelin oligodendrocyte glycoprotein (MOG) were incubated with CD11b+CD11c+ cells isolated from a γHV-68 EAE wt mouse and were primed. These cells produced significantly greater amounts of IFN-γ than when incubated with the same cell subset from uninfected EAE wt mice. This effect was abrogated if the CD11b+CD11c+ cells were taken from CD40KO mice (Fig. 6A,B). This result shows that CD40 expression on the surface of APCs is an absolute requirement to trigger enhanced IFN-γ production in T cells upon antigen presentation in mice latently infected with γHV-68.

γHV-68 infection drives increased STAT1 responses in APCs. Type I IFNs play a crucial role in the maintenance of γHV-68 latency 27. In addition, upregulation of CD40 and MHCII on APCs is dependent on the presence of Type I IFNs 45,46. Type I IFNs modulate APCs, including dendritic cells, to mediate Th1 responses by activating the STAT1 pathway 28,29. We attempted to measure the levels of Type I IFNs during γHV-68 latency, without success. The increase in Type I IFN production during viral latency is likely localized to only a small subset of cells and is therefore difficult to measure. To indirectly measure the effect of Type I IFN, we chose to measure the level of phosphorylation of STAT1 (pSTAT1). pSTAT1 is a signature molecule that defines the potential for Type I IFN mediated activity 28,29. pSTAT1 can also be driven by other inflammatory mediators such as IFN-γ; however, by sampling during the inflammatory quiescent period prior to EAE, these alternative mediators should have a reduced contribution.
More importantly, our premise is that the APCs are programmed by the presence of a latent virus (in a different cell, the memory B cell) and that this programming leads to greater phosphorylation of STAT1 and a greater STAT1 response. We harvested spleens from latently infected mice and sorted CD11b+CD11c+ cells. We then determined pSTAT1 levels by flow cytometry. We observed that, even prior to EAE induction, pSTAT1 levels were increased in APCs harvested from γHV-68 latently infected mice (Fig. 7) compared to those of uninfected mice. While these results suggest a potential link to Type I IFN, they leave open the possibility that other mediators are providing the greater STAT1 response. Overall, these results suggest a strong association between γHV-68 induced upregulation of pSTAT1 levels and the upregulation of surface CD40 in APCs, ultimately driving the response towards a Th1 phenotype. Most importantly, a strong Th1 response would be expected to include upregulation of both pSTAT1 and CD40 in APCs.

Mice latently infected with γHV-68 have decreased Tregs in the periphery and in the CNS after EAE induction; viral latency and induction of CD40 upregulation are required for this phenotype. In the periphery, Treg induction through APC:T cell costimulation, specifically CD40:CD40L, has been clearly demonstrated 38,41. The increased surface expression of CD40 and the potent Th1 response in γHV-68 infected mice led us to ask whether there were relevant changes in the frequency of Tregs. To specifically investigate the frequency of Tregs in the periphery and CNS, we examined the Treg populations in γHV-68 latently infected mice and γHV-68 AC-RTA infected mice after EAE induction. Treg frequencies were decreased in the spleens of γHV-68 EAE mice when compared to both γHV-68 AC-RTA EAE mice and naïve EAE mice (Fig. 8A). The same results were obtained when Treg frequencies in the CNS were analyzed (Fig. 8B): the levels of Tregs in γHV-68 AC-RTA infected EAE mice were comparable to those of uninfected EAE mice. This suggests that viral latency, likely through a CD40 mechanism, has a role in controlling Treg frequencies in γHV-68 mice. Finally, the frequency of Tregs was decreased in γHV-68 C57Bl/6 wt mice before EAE induction (50 days post γHV-68 infection), but was rescued in γHV-68 C57Bl/6 CD40KO mice, which showed the same Treg frequency as naïve CD40KO mice (Fig. 8C). Since CD40 has been shown to be critical in controlling Treg frequencies 38,41, it was expected that the Treg frequency in latently infected γHV-68 CD40KO mice would be similar to that observed in naïve uninfected CD40KO mice. Further, the lack of disease enhancement post EAE in mice infected with γHV-68 AC-RTA demonstrates that viral latency, and its influence on innate immunity, is critical to the control of Treg frequencies and adaptive immunity during latent γHV-68 infection.

Discussion

Despite a large body of work associating EBV infection with the development of autoimmunity, it is currently not clear what mechanisms this virus exploits to cause autoimmunity. Previous work focused primarily on EBV-specific adaptive immunity in patients affected by autoimmunity and how this response differs from that of healthy individuals. However, the influence that EBV latency has on innate immunity has not been investigated in the context of autoimmune diseases.
Previously, we used the murine equivalent of EBV, γHV-68, to demonstrate that γ-herpesviruses have the ability to modulate subsequent immune interactions and specifically to heighten EAE pathology to more closely resemble MS. Here, we determined that this enhanced disease requires γHV-68 latency and likely acts by upregulating CD40 surface expression on APCs. CD40 expression and co-stimulation are pivotal in controlling the type and strength of the adaptive immune response to a second pro-inflammatory stimulus such as EAE: CD40 co-stimulation and γHV-68 latency act to enhance both CD4+ and CD8+ effector T cell activation and to reduce Tregs during EAE. Specifically, we showed that γHV-68 enhanced T cell activation is accompanied by a decrease in Treg frequencies both in the periphery and in the CNS during EAE. It has been previously shown that mice infected with γHV-68 present with decreased expression of Foxp3 and diminished T cell regulatory activity up to day 15 post γHV-68 infection 47. Our results show, for the first time to our knowledge, that γHV-68 mice have decreased splenic percentages of Tregs and that this reduction is long lasting, still evident more than 50 days after infection. Further, this suppression of Treg frequency is removed in the absence of virus latency and CD40. We suggest that γHV-68 latency, in concert with the increase in CD40 expression on APCs, drives a decrease in Treg frequencies and thereby increases susceptibility to autoimmunity. Our data are supported by previous work demonstrating that CD40 signaling suppresses the development of Tregs 38,41. It is conceivable that decreased numbers of Tregs contribute to the exacerbation of EAE symptoms observed in γHV-68 mice. In fact, it has been extensively shown that Tregs have an important role in preventing EAE development in mice (for a review, see 48). Adoptive Treg transfers and treatment with monoclonal antibodies aimed at increasing the numbers of Tregs are both effective means of ameliorating EAE 48. Interestingly, Tregs have decreased suppressive functions in MS patients 49-51, and patients with mononucleosis also have decreased frequencies of Tregs in the blood 52. A decrease in Tregs as a consequence of γ-herpesvirus infection could be a predisposing factor for the development of autoimmunity. Additionally, it has been shown that CD40 is important for cytotoxic CD8+ T cell activation 37,40,42,43, suggesting that increased CD40 expression in γHV-68 EAE mice may have a role in the enhanced CD8+ T cell activation and CNS infiltration that we observed during EAE in latently infected mice. Our findings that enhanced CD8+ T cell activation during EAE is dependent on both CD40 expression and γHV-68 latency are in agreement with previous studies demonstrating that the CD40L-CD40 interaction is required for the prolonged clonal expansion and activation of CD8+ T cells during the "mononucleosis-like" phase of γHV-68 infection, coincident with latency establishment 53. It has also been shown that enhancement of CD40 signaling substitutes for CD4+ T cell help during control of latency to prevent γHV-68 reactivation 54. From these results, it is clear that CD40 plays an important role, especially during γHV-68 establishment of latency. A question that remains open is how γHV-68 controls CD40 expression. We have previously shown that CD11c+CD11b+ cells expressing high levels of CD40 are not infected by γHV-68.
It is conceivable that the virus uses an indirect mechanism to upregulate CD40, possibly through Type I IFNs; however, other mediators, including IFN-γ, could be mechanistically important. Type I IFNs are produced during infection and the maintenance of γHV-68 latent infection 27; indeed, γHV-68 requires type I IFN to maintain its latent life cycle 27. Alternatively, a viral protein or components of the viral genome could bind to pattern recognition receptors and further activate APCs to produce increased amounts of co-stimulatory molecules like CD40. Intriguingly, MS patients upregulate CD40 in their peripheral blood 55, and CD40-positive cells co-localize with T cells in active MS lesions 56. Additionally, the interaction between CD40 and CD40L on T cells isolated from MS patients stimulates increased production of IL-12 by APCs 57, and IFN-β, used as a therapeutic agent in MS, decreases the level of CD40L on T cells 58. The EBV gene product LMP-1 is a decoy receptor for the CD40 receptor and can replace CD40 signaling in B cells 59. This observation is particularly intriguing considering the tight link between EBV infection, mononucleosis and MS, the successful results of anti-B cell therapies in treating MS, and the ability of B cells to act as APCs. While an LMP-1-like gene product has not yet been observed for γHV-68, the virus may influence CD40 signaling in a different manner, because controlling co-stimulation is critical to both EBV and γHV-68 pathogenesis. Our results suggest a possible, yet to be described, mechanism by which EBV might influence autoimmunity in humans: EBV affects APCs, leading to Th1 skewing, CD8+ T cell activation and decreased Treg frequencies. MS and other autoimmune diseases in which EBV has been implicated, such as lupus, are heterogeneous, and it is conceivable that they do not have a single etiologic agent. It is likely that EBV triggers the disease in only a subset of patients. To address this, the activation status of APCs in humans affected by different subtypes of MS should be investigated and linked to both EBV infection/mononucleosis history and T helper responses/CD8+ activation status. In conclusion, we have demonstrated that latent infection with γ-herpesviruses predisposes individuals to severe autoimmune disease by modulating DCs and suppressing Treg frequencies, likely through DC modulation (CD40 expression), and this, in turn, exaggerates CD4+ and CD8+ T cell aggression. This profound ability of γ-herpesviruses to influence autoimmunity represents a unique mechanism and offers a novel target for potential therapies.

Materials and Methods

Ethics Statement. All animal work was performed in strict accordance with the recommendations of the Canadian Council for Animal Care. The protocol was approved by the Animal Care Committee (ACC) of the University of British Columbia (certificate numbers: A0\415 and A08-0622).

Infections and EAE induction. C57Bl/6 mice, C57Bl/6 CD40KO mice and 2D2 mice were purchased from the Jackson Laboratory and were bred and maintained in our rodent facility at the University of British Columbia. Mice were infected intraperitoneally (i.p.) between 7-10 weeks of age with 10^4 pfu of the γHV-68 WUMS strain (purchased from ATCC, propagated on BHK cells), or with 10^4 pfu of the latency-deficient γHV-68 AC-RTA (originally developed by Dr. Ting-Ting Wu, generous gift of Dr. Marcia A. Blackman) 36. Antibodies against IFN-γ (clone XMG1.2) and Foxp3 (clone FJK-16s) were purchased from eBiosciences.
Samples were acquired using a FACS LSR II (BD Biosciences) and analyzed using FlowJo software (Tree Star, Inc.).

Immunohistochemistry. Brains from perfused mice were frozen in OCT (Fisher Scientific) and ten-micron-thick sections were processed for immunohistochemistry. Briefly, sections were fixed in ice-cold 95% ethanol for 15 min and washed in PBS several times. This was followed by washes in TBS with 0.1% Tween (TBST) and incubation for 10 min with 3% H2O2 to block endogenous peroxidase. After washing, blocking buffer was added for 1 h (10% normal goat serum in PBS). Primary antibody was added overnight at 4 °C: purified rat anti-mouse CD4, anti-mouse CD8 and anti-mouse F4/80 (all from eBiosciences), diluted 1:100 in PBS with 2% normal goat serum. After washes in TBST, the biotinylated secondary antibody (anti-rat IgG, mouse absorbed, Vector) was added for 1 h, diluted 1:200 in PBS with 2% normal goat serum. After washes in TBST, the Vectastain ABC reagent (Vector) was used following the manufacturer's instructions. DAB (Sigma) was then added as a substrate and, after incubation for 8 min in the dark and several washes in distilled water, sections were counterstained with Harris hematoxylin for 20 seconds, dipped in lithium carbonate for 30 sec, washed in several changes of distilled water and mounted with VectaMount AQ (Vector).

Antigen presentation assay in vitro and pSTAT1 staining of antigen presenting cells. Spleens and inguinal lymph nodes were harvested at day 4 post EAE induction. A single cell suspension was prepared and stained with anti-CD11c and anti-CD11b antibodies (see above for details). CD11c+CD11b+ cells were sorted with a FACSAria cell sorter (BD Biosciences). CD4+ T cells from 2D2 mice were isolated from spleens with a CD4+ T cell negative selection kit following the manufacturer's instructions (STEMCELL Technologies). Isolated CD11b+CD11c+ cells (3-4 × 10^4/well) and 2D2 CD4 T cells (5 × 10^5/well) were seeded in a 24-well plate in RPMI with 10% FBS and Pen/Strep (all from GIBCO), with or without 100 μM MOG peptide. After 72 hours, T cells were restimulated with PMA, ionomycin and GolgiPlug and stained for CD4 and IFN-γ as described above. For pSTAT1 staining, spleens were harvested 5 weeks post γHV-68 infection. CD11b+CD11c+ cells were stained and sorted as above. Once sorted, cells were fixed with 4% paraformaldehyde and incubated for 10 min at 37 °C. Cells were then washed with 2% PBS/FBS, permeabilized with 90% methanol for 30 min on ice, and then washed and stained for pSTAT1 (clone BD 4a) for 1 hour at room temperature. Cells were acquired and analyzed as detailed above.

Statistical analysis. Two-way ANOVA followed by Bonferroni's post test was employed to compare EAE scores. Unpaired Student's t-test or one-way ANOVA was used for all other analyses (GraphPad Prism).
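For readers reproducing the statistical analysis outside GraphPad Prism, a minimal sketch of the same tests in Python follows; the column names and data layout are hypothetical, and the repeated-measures structure of daily EAE scores is simplified to a two-factor ANOVA with Bonferroni-corrected pairwise t-tests standing in for the post test:

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df columns (hypothetical): 'score', 'group' (infection status), 'day', 'mouse'
def compare_eae_scores(df: pd.DataFrame):
    """Two-way ANOVA (group x day) on EAE clinical scores."""
    model = smf.ols("score ~ C(group) * C(day)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def pairwise_t(df: pd.DataFrame, a: str, b: str, n_comparisons: int) -> float:
    """Unpaired Student's t-test between two groups with a
    Bonferroni-adjusted p-value."""
    x = df.loc[df.group == a, "score"]
    y = df.loc[df.group == b, "score"]
    p = stats.ttest_ind(x, y).pvalue
    return min(1.0, p * n_comparisons)
```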